Causal memory: definitions, implementation, and programming

Summary

The abstraction of a shared memory is of growing importance in distributed computing systems. Traditional memory consistency ensures that all processes agree on a common order of all operations on memory. Unfortunately, providing these guarantees entails access latencies that prevent scaling to large systems. This paper weakens such guarantees by defining causal memory, an abstraction that ensures that processes in a system agree on the relative ordering of operations that are causally related. Because causal memory is weakly consistent, it admits more executions, and hence more concurrency, than either atomic or sequentially consistent memories. This paper provides a formal definition of causal memory and gives an implementation for message-passing systems. In addition, it describes a practical class of programs that, if developed for a strongly consistent memory, run correctly with causal memory.
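
The implementation the paper gives for message-passing systems delays the application of remote writes until every write that causally precedes them has been applied locally. The Python sketch below is a minimal illustration of that idea using vector clocks, not the paper's actual protocol; the class and method names (CausalMemoryProcess, broadcast, on_receive) are assumptions made for this example.

```python
# Minimal sketch of one process in a vector-clock-based causal memory
# implementation for a message-passing system. Illustrative only; not the
# protocol from the paper.

class CausalMemoryProcess:
    def __init__(self, pid, num_procs, broadcast):
        self.pid = pid                 # this process's identifier
        self.t = [0] * num_procs       # vector clock: writes applied per process
        self.memory = {}               # local replica of the shared memory
        self.pending = []              # remote writes not yet causally deliverable
        self.broadcast = broadcast     # callable that sends a message to all other processes

    def write(self, x, v):
        """Apply a write locally and announce it with its vector timestamp."""
        self.t[self.pid] += 1
        self.memory[x] = v
        self.broadcast((self.pid, list(self.t), x, v))

    def read(self, x):
        """Reads return the local replica's value without synchronization."""
        return self.memory.get(x)

    def on_receive(self, msg):
        """Buffer a remote write until all writes that causally precede it are applied."""
        self.pending.append(msg)
        self._apply_deliverable()

    def _deliverable(self, msg):
        # Deliverable when it is the next write from sender j and we have
        # already applied everything the sender had seen from other processes.
        j, s, _, _ = msg
        return (s[j] == self.t[j] + 1 and
                all(s[k] <= self.t[k] for k in range(len(s)) if k != j))

    def _apply_deliverable(self):
        progress = True
        while progress:
            progress = False
            for msg in list(self.pending):
                if self._deliverable(msg):
                    j, s, x, v = msg
                    self.memory[x] = v
                    self.t[j] = s[j]
                    self.pending.remove(msg)
                    progress = True
```

In this sketch reads never block and a process's own writes are applied immediately; only the application of remote writes is delayed, which is one way to see why causal memory admits more concurrency than sequentially consistent memory.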


References

  1. Adve SV, Hill MD: Weak ordering — a new definition. In: Proc. 17th Annual International Symposium on Computer Architecture, pp 2–14, May 1990

  2. Adve SV, Hill MD: A unified formalization of four shared-memory models. IEEE Trans Parallel Distrib Syst 4(6): 613–624 (1993)

  3. Afek Y, Brown G, Merritt M: Lazy caching. ACM Trans Program Lang Syst 15(1): 182–205 (1993)

  4. Ahamad M, Bazzi RA, John R, Kohli P, Neiger G: The power of processor consistency. In: Proc 5th Symposium on Parallel Algorithms and Architectures, pp 251–260. ACM Press, June 1993. A full version appears as Technical Report 92/34, College of Computing, Georgia Institute of Technology

  5. Ahamad M, Burns JE, Hutto PW, Neiger G: Causal memory. In: Toueg S, Spirakis PG, Kirousis L (eds) Proc 5th International Workshop on Distributed Algorithms. Lect Notes Comput Sci, vol 579, pp 9–30. Springer, Berlin Heidelberg New York 1991

  6. Ahamad M, Hutto PW, John R: Implementing and programming causal distributed shared memory. In: Proc 11th International Conference on Distributed Computing Systems, pp 274–281, May 1991

  7. Attiya H, Chaudhuri S, Friedman R, Welch JL: Shared memory consistency conditions for non-sequential execution: definitions and programming strategies. In: Proc 5th Symposium on Parallel Algorithms and Architectures, pp 241–250. ACM Press, June 1993

  8. Attiya H, Friedman R: A correctness condition for high performance multiprocessors. In: Proc 24th ACM Symposium on Theory of Computing, pp 679–690. ACM Press, May 1992

  9. Attiya H, Friedman R: Programming DEC-Alpha based multiprocessors the easy way. In: Proc 6th Symposium on Parallel Algorithms and Architectures, pp 157–166. ACM Press, June 1994. A revised and expanded version appears as Technical Report 9411, Laboratory for Parallel Computing Research, Israel Institute of Technology, October 1994

  10. Attiya H, Welch JL: Sequential consistency versus linearizability. ACM Trans Comput Syst 12(2): 91–122 (1994)

  11. Bennett JK, Carter JB, Zwaenepoel W: Adaptive software cache management for distributed shared memory architectures. In: Proc 17th Annual International Symposium on Computer Architecture, May 1990

  12. Bertsekas DP, Tsitsiklis JN: Parallel and distributed computation: numerical methods. Prentice Hall, Englewood Cliffs, New Jersey, 1989

  13. Birman K, Schiper A, Stephenson P: Lightweight causal and atomic group multicast. ACM Trans Comput Syst 9(3): 272–314 (1991)

  14. Fidge C: Logical time in distributed computing systems. Computer 24(8): 28–33 (1991)

  15. Friedman R: Personal communication, 1991

  16. Gharachorloo K, Lenoski D, Laudon J, Gibbons P, Gupta A, Hennessy J: Memory consistency and event ordering in scalable shared-memory multiprocessors. In: Proc 17th International Symposium on Computer Architecture, pp 15–26, May 1990

  17. Gibbons PB, Merritt M, Gharachorloo K: Proving sequential consistency of high-performance shared memories. In: Proc 3rd Symposium on Parallel Algorithms and Architectures, pp 292–303. ACM Press, July 1991

  18. Goodman JR: Cache consistency and sequential consistency. Technical Report 61, IEEE Scalable Coherent Interface Working Group, March 1989

  19. Heddaya A, Sinha HS: Coherence, non-coherence and local consistency in distributed shared memory for parallel computing. Technical Report 92-004, Computer Science Department, Boston University, May 1992

  20. Herlihy MP, Wing JM: Linearizability: a correctness condition for concurrent objects. ACM Trans Program Lang Syst 12(3): 463–492 (1990)

  21. Hutto PW, Ahamad M: Slow memory: weakening consistency to enhance concurrency in distributed shared memories. In: Proc 10th International Conference on Distributed Computing Systems, May 1990. A complete version appears as Technical Report 89/39, School of Information and Computer Science, Georgia Institute of Technology

  22. John R: Implementing and programming weakly consistent memories. Ph.D. dissertation, Georgia Institute of Technology, 1994

  23. John R, Ahamad M: Implementation and evaluation of causal memory for data race free programs. Technical Report 94/30, College of Computing, Georgia Institute of Technology, July 1994

  24. Kessler RE, Livny M: An analysis of distributed shared memory algorithms. In: Proc 9th International Conference on Distributed Computing Systems, pp 498–505, June 1989

  25. Kohli P, Neiger G, Ahamad M: A characterization of scalable shared memories. In: Proc 22nd International Conference on Parallel Processing, pp I-332–I-335, August 1993

  26. Lamport L: Time, clocks, and the ordering of events in a distributed system. Commun ACM 21(7): 558–565 (1978)

  27. Lamport L: How to make a multiprocessor computer that correctly executes multiprocess programs. IEEE Trans Comput C-28(9): 690–691 (1979)

  28. Lamport L: On interprocess communication, part I: Basic formalism. Distrib Comput 1(2): 77–85 (1986)

  29. Lipton RJ, Sandberg JS: PRAM: a scalable shared memory. Technical Report 180-88, Department of Computer Science, Princeton University, September 1988

  30. Mattern F: Virtual time and global states of distributed systems. In: Cosnard M, Quinton P, Robert Y, Raynal M (eds) Proc International Workshop on Parallel and Distributed Algorithms, pp 215–226. North-Holland, October 1988

  31. Mavronicolas M, Roth D: Sequential consistency and linearizability: read/write objects. In: Proc 29th Annual Allerton Conference on Communication, Control, and Computing, pp 683–692, October 1991. A revised version appears as Technical Report 28-91, Aiken Computation Laboratory, Harvard University, June 1992 under the title “Linearizable Read/Write Objects”

  32. Mavronicolas M, Roth D: Efficient, strongly consistent implementations of shared memory. In: Segall A, Zaks S (eds) Proc 6th International Workshop on Distributed Algorithms. Lect Notes Comput Sci, vol 647, pp 346–361. Springer, Berlin Heidelberg New York 1992

  33. Misra J: Axioms for memory access in asynchronous hardware systems. ACM Trans Program Lang Syst 8(1): 142–153 (1986)

  34. Peterson GL: Myths about the mutual exclusion problem. Inf Process Lett 12(3): 115–116 (1981)

  35. Singh AK: A framework for programming using non-atomic variables. In: Proc 8th International Parallel Processing Symposium, pp 133–140, IEEE Computer Society Press, April 1994

Author information

Mustaque Ahamad is an Associate Professor in the College of Computing at the Georgia Institute of Technology. He received his M.S. and Ph.D. degrees in Computer Science from the State University of New York at Stony Brook in 1983 and 1985 respectively. His research interests include distributed operating systems, consistency of shared information in large scale distributed systems, and replicated data systems.

James E. Burns received the B.S. degree in mathematics from the California Institute of Technology, the M.B.I.S. degree from Georgia State University, and the M.S. and Ph.D. degrees in information and computer science from the Georgia Institute of Technology. He served on the faculty of Computer Science at Indiana University and the College of Computing at the Georgia Institute of Technology before joining Bellcore in 1993. He is currently a Member of Technical Staff in the Network Control Research Department, where he is studying the telephone control network with special interest in behavior when faults occur. He also has research interests in theoretical issues of distributed and parallel computing, especially relating to problems of synchronization and fault tolerance.

This work was supported in part by the National Science Foundation under grants CCR-8619886, CCR-8909663, CCR-9106627, and CCR-9301454. Parts of this paper appeared in S. Toueg, P.G. Spirakis, and L. Kirousis, editors, Proceedings of the Fifth International Workshop on Distributed Algorithms, volume 579 of Lecture Notes in Computer Science, pages 9–30, Springer-Verlag, October 1991.

The photograph of Professor J.E. Burns was published in Volume 8, No. 2, 1994 on page 59

This author's contributions were made while he was a graduate student at the Georgia Institute of Technology. No photograph or biographical information is available for P.W. Hutto.

Gil Neiger was born on February 19, 1957 in New York, New York. In June 1979, he received an A.B. in Mathematics and Psycholinguistics from Brown University in Providence, Rhode Island. In February 1985, he spent two weeks picking cotton in Nicaragua in a brigade of international volunteers. In January 1986, he received an M.S. in Computer Science from Cornell University in Ithaca, New York and, in August 1988, he received a Ph.D. in Computer Science, also from Cornell University. On August 20, 1988, Dr. Neiger married Hilary Lombard in Lansing, New York. He is currently a Staff Software Engineer at Intel's Software Technology Lab in Hillsboro, Oregon. Dr. Neiger is a member of the editorial boards of the Chicago Journal of Theoretical Computer Science and the Journal of Parallel and Distributed Computing.

Cite this article

Ahamad, M., Neiger, G., Burns, J.E. et al. Causal memory: definitions, implementation, and programming. Distrib Comput 9, 37–49 (1995). https://doi.org/10.1007/BF01784241
