
Competitive snoopy caching

Published in Algorithmica.

Abstract

In a snoopy cache multiprocessor system, each processor has a cache in which it stores blocks of data. Each cache is connected to a bus used to communicate with the other caches and with main memory. Each cache monitors the activity on the bus and in its own processor and decides which blocks of data to keep and which to discard. For several of the proposed architectures for snoopy caching systems, we present new on-line algorithms to be used by the caches to decide which blocks to retain and which to drop in order to minimize communication over the bus. We prove that, for any sequence of operations, our algorithms' communication costs are within a constant factor of the minimum required for that sequence; for some of our algorithms we prove that no on-line algorithm has this property with a smaller constant.
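The abstract's constant-factor guarantee is in the spirit of rent-or-buy (ski-rental) arguments. The sketch below is an illustrative simplification, not the paper's actual protocol: while a cache keeps a copy of a shared block, each remote write costs 1 bus cycle to snoop-update the copy; dropping the copy avoids those updates but makes the next local read cost p bus cycles (the block size, a hypothetical value here). The counter rule — drop the copy once cumulative update cost reaches p — pays at most twice the off-line optimum in each such phase.

```python
BLOCK_SIZE = 4  # p: bus cost to (re)load a block; value chosen for illustration

def online_cost(remote_writes):
    """Cost of the counter-based on-line rule for one phase: pay 1 bus
    cycle per remote write until the counter reaches p, then drop the
    copy and pay p to re-read it at the next local access."""
    if remote_writes < BLOCK_SIZE:
        return remote_writes            # kept the block the whole phase
    return BLOCK_SIZE + BLOCK_SIZE      # p paid in updates, then p to re-read

def offline_cost(remote_writes):
    """Off-line optimum for the phase, knowing the future: either keep
    the copy throughout (pay one update per remote write) or drop it
    immediately (pay p for the eventual re-read)."""
    return min(remote_writes, BLOCK_SIZE)

# The on-line rule is 2-competitive on every phase.
for w in range(1, 20):
    assert online_cost(w) <= 2 * offline_cost(w)
```

With p = 4, a phase of 10 remote writes costs the on-line rule 8 bus cycles against an off-line optimum of 4; a phase of 2 remote writes costs both exactly 2. The paper proves such bounds for the full protocols, including matching lower bounds for some of them.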




Additional information

Communicated by Jeffrey Scott Vitter.

A preliminary and condensed version of this paper appeared in the Proceedings of the 27th Annual Symposium on the Foundations of Computer Science, IEEE, 1986.

This author received support from an IBM doctoral fellowship, and did part of this work while a research student associate at IBM Almaden Research Center.


Cite this article

Karlin, A.R., Manasse, M.S., Rudolph, L. et al. Competitive snoopy caching. Algorithmica 3, 79–119 (1988). https://doi.org/10.1007/BF01762111
