Add "Memory consistency models" subsection to "Memory orderings" section #16
@@ -1090,6 +1090,208 @@ \section{Do we always need sequentially consistent operations?}
\section{Memory orderings}

\subsection{Memory consistency models}

When a program is compiled and executed, it does not necessarily run in the order in which it was written.
The system may reorder and optimize operations, as long as the observable result matches that of line-by-line execution.
This requires an agreement between the programmer and the system (hardware, compiler, and so on): if the programmer follows the rules, the system guarantees correct execution.
Correctness here means specifying which outcomes, among all possible results, are permitted; such a specification is called a memory consistency model.
These models give the system room to optimize while still guaranteeing correct execution.

Memory consistency models operate at several levels.
For example, when machine code runs on hardware, the processor may reorder and optimize instructions, and the results must still match what the model permits.
Similarly, when compiling a high-level language to assembly, the compiler may rearrange instructions while preserving the permitted outcomes.
Thus, from source code down to hardware execution, such agreements are what ensure the expected results.
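To make this concrete, below is a minimal sketch (our own illustration, not part of the original example set) of the kind of reordering these agreements permit.
The two stores in \monobox{producer} are independent, so under the single-threaded ``as-if'' rule a compiler or processor may perform them in either order; the difference only becomes observable when another thread is watching.

\begin{ccode}
/* Illustrative example: the variable names are hypothetical. */
int data = 0;
int ready = 0;

void producer(void)
{
    data = 42;  /* (1) */
    ready = 1;  /* (2) */
    /* The system may perform (2) before (1): the single-threaded result
     * is identical, but a thread polling `ready` could then read `data`
     * before it is written. A memory consistency model defines whether
     * such an outcome is permitted. */
}
\end{ccode}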
\subsubsection{Sequential consistency (SC)}

In 1979, Leslie Lamport proposed the most intuitive and best-known memory consistency model, sequential consistency (SC), defined as follows:

\begin{quote}
A multiprocessor system is sequentially consistent if the result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program.
\end{quote}

On modern processors, enforcing sequential consistency rules out many optimizations and therefore slows down execution.
If some of its requirements are relaxed, such as no longer guaranteeing program order within each processing unit, performance can be improved further.

A memory consistency model is a conceptual contract: the program's execution results must conform to it.
However, when a program is compiled and run on real hardware, there is considerable freedom in adjusting the actual execution order.
As long as the results match the agreed-upon model, the actual order may vary with the circumstances.

It is important to note that sequential consistency does not imply a single order or a single result for the program.
On the contrary, it only requires that the program behave as if its operations were interleaved into some single sequential order, so a sequentially consistent program can still produce several different results.
To build intuition for sequential consistency, consider the following simple example.
Two threads write to and read from two shared variables \monobox{x} and \monobox{y}, both initially set to \monobox{0}.

\begin{ccode}
// Litmus Test: Message Passing
int x = 0;
int y = 0;

// Thread 1    // Thread 2
x = 1;         r1 = y;
y = 1;         r2 = x;
\end{ccode}
If this program satisfies sequential consistency, then for Thread 1, \monobox{x = 1} must occur before \monobox{y = 1}, and for Thread 2, \monobox{r1 = y} must occur before \monobox{r2 = x}.
For the entire program, the following six execution orders are possible:

\begin{center}
\noindent
\begin{tabular}{|c|c|c|} \hline
\begin{lstlisting}
x = 1
y = 1
r1 = y(1)
r2 = x(1)
\end{lstlisting}&
\begin{lstlisting}
x = 1
r1 = y(0)
y = 1
r2 = x(1)
\end{lstlisting}&
\begin{lstlisting}
x = 1
r1 = y(0)
r2 = x(1)
y = 1
\end{lstlisting}\\ \hline
\begin{lstlisting}
r1 = y(0)
x = 1
y = 1
r2 = x(1)
\end{lstlisting}&
\begin{lstlisting}
r1 = y(0)
x = 1
r2 = x(1)
y = 1
\end{lstlisting}&
\begin{lstlisting}
r1 = y(0)
r2 = x(0)
x = 1
y = 1
\end{lstlisting}\\ \hline
\end{tabular}
\captionof{table}{The six possible execution orders of the message-passing litmus test.}
\end{center}
Observing these orders, we see that none of them results in \monobox{r1 = 1} and \monobox{r2 = 0}.
Thus, sequential consistency only allows the outcomes \monobox{(r1, r2)} of \monobox{(1, 1)}, \monobox{(0, 1)}, and \monobox{(0, 0)}.
With this convention, software can rely on \monobox{(1, 0)} never occurring, and hardware is free to optimize as long as it ensures the result \monobox{(1, 0)} does not appear.
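The same litmus test can be written with C11 atomics.
The following is a minimal, self-contained sketch of our own (the thread harness, names, and single-run structure are illustrative; in practice one would loop many times to observe the different permitted outcomes).
Because C11 atomic loads and stores default to sequentially consistent ordering, the outcome \monobox{r1 = 1}, \monobox{r2 = 0} can never be observed.

\begin{ccode}
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

atomic_int x = 0, y = 0;
int r1, r2;

static void *thread1(void *arg)
{
    atomic_store(&x, 1); /* sequentially consistent by default */
    atomic_store(&y, 1);
    return NULL;
}

static void *thread2(void *arg)
{
    r1 = atomic_load(&y);
    r2 = atomic_load(&x);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Only (1,1), (0,1), and (0,0) are possible. */
    printf("r1 = %d, r2 = %d\n", r1, r2);
    return 0;
}
\end{ccode}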
\begin{center}
\includegraphics[keepaspectratio,width=0.7\linewidth]{images/hw-seq-cst}
\captionof{figure}{The memory model of sequentially consistent hardware.}
\label{hw-seq-cst}
\end{center}

We can picture sequentially consistent hardware as figure \ref{hw-seq-cst} shows: each thread accesses shared memory directly, and memory processes one read or write at a time, which naturally ensures sequential consistency.
In fact, there are multiple ways to implement sequentially consistent hardware.
It can even include caches and be banked, as long as the results behave the same as in the model above.
\subsubsection{Total store order (TSO)}

Although sequential consistency is often regarded as the ``gold standard'' for multi-threaded programs, its many constraints limit performance optimization.
As a result, it is rarely implemented directly in modern processors.
Instead, more relaxed memory models are used, such as the total store order (TSO) memory model adopted by the x86 architecture.
One can picture the hardware roughly as follows:
\begin{center}
\includegraphics[keepaspectratio,width=0.7\linewidth]{images/hw-tso}
\captionof{figure}{The memory model of x86-TSO hardware.}
\label{hw-tso}
\end{center}

All processors read from a single shared memory, but each processor first places its writes into its own write queue (store buffer), from which they drain to shared memory later.
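The write queue can be thought of as a per-processor FIFO.
The following toy model is our own sketch, not how any real processor is implemented; it only illustrates that a store becomes visible to other processors when it leaves the queue, not when the store instruction executes.
(Real x86 processors additionally let a processor read back its own queued writes, a detail omitted here.)

\begin{ccode}
#define QUEUE_DEPTH 8
#define MEM_SIZE    64

/* Conceptual shared memory and one write queue per processor. */
static int shared_mem[MEM_SIZE];

struct write_queue {
    int addr[QUEUE_DEPTH];
    int value[QUEUE_DEPTH];
    int head, tail;
};

/* A store only enters the local queue; other processors cannot see it yet. */
static void store(struct write_queue *q, int addr, int value)
{
    q->addr[q->tail % QUEUE_DEPTH] = addr;
    q->value[q->tail % QUEUE_DEPTH] = value;
    q->tail++;
}

/* Draining one entry makes that write visible to every processor at once,
 * in FIFO order -- this is what yields a single total order of stores. */
static void drain_one(struct write_queue *q)
{
    if (q->head != q->tail) {
        shared_mem[q->addr[q->head % QUEUE_DEPTH]] =
            q->value[q->head % QUEUE_DEPTH];
        q->head++;
    }
}

/* In this simplified model, loads read shared memory directly. */
static int load(int addr)
{
    return shared_mem[addr];
}
\end{ccode}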
Consider the following Write Queue (Store Buffer) Litmus Test:

\begin{ccode}
// Litmus Test: Write Queue (Store Buffer)
int x = 0;
int y = 0;

// Thread 1    // Thread 2
x = 1;         y = 1;
r1 = y;        r2 = x;
\end{ccode}
Sequential consistency does not allow \monobox{r1 = r2 = 0}, but TSO does.
In any sequentially consistent interleaving, one of the writes (\monobox{x = 1} or \monobox{y = 1}) comes first in the total order; the read of that variable in the other thread comes later and must therefore observe \monobox{1}, so \monobox{r1 = r2 = 0} cannot occur.
Under the TSO memory model, however, both writes may still be sitting in their respective write queues when the reads execute, allowing \monobox{r1 = r2 = 0}.

Non-sequentially consistent hardware typically provides memory barrier (fence) instructions to control the order of reads and writes.
A barrier ensures that all writes issued before it have reached shared memory (the write queue has drained) before any subsequent reads are performed.
\begin{ccode}
// Thread 1    // Thread 2
x = 1;         y = 1;
barrier;       barrier;
r1 = y;        r2 = x;
\end{ccode}
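In C11, the abstract \monobox{barrier} above can be expressed with \cc|atomic_thread_fence|.
The sketch below is our own mapping (assuming the shared variables are C11 atomics): with the sequentially consistent fences in place, the outcome \monobox{r1 = r2 = 0} is forbidden, and removing them allows it again.

\begin{ccode}
#include <stdatomic.h>

atomic_int x = 0, y = 0;
int r1, r2;

void thread1(void)
{
    atomic_store_explicit(&x, 1, memory_order_relaxed);
    /* Conceptually: drain the write queue before the following read. */
    atomic_thread_fence(memory_order_seq_cst);
    r1 = atomic_load_explicit(&y, memory_order_relaxed);
}

void thread2(void)
{
    atomic_store_explicit(&y, 1, memory_order_relaxed);
    atomic_thread_fence(memory_order_seq_cst);
    r2 = atomic_load_explicit(&x, memory_order_relaxed);
}
\end{ccode}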
Total store order gets its name from the fact that writes reach shared memory in a single total order: once a write arrives in shared memory, every processor agrees that the value has been written, and no two processors ever observe the writes in conflicting orders.
Consider the following Independent Reads of Independent Writes (IRIW) litmus test; under TSO it cannot produce \monobox{r1 = 1}, \monobox{r2 = 0}, \monobox{r3 = 1}, \monobox{r4 = 0}.
\begin{ccode}
// Litmus Test: Independent Reads of Independent Writes (IRIW)
int x = 0;
int y = 0;

// Thread 1    // Thread 2    // Thread 3    // Thread 4
x = 1;         y = 1;         r1 = x;        r3 = y;
                              r2 = y;        r4 = x;
\end{ccode}
Once Thread 3 reads \monobox{r1 = 1} and \monobox{r2 = 0}, it follows that the write \monobox{x = 1} reached shared memory before \monobox{y = 1}.
If Thread 4 then reads \monobox{r3 = 1}, both writes \monobox{y = 1} and \monobox{x = 1} are already visible to Thread 4, so \monobox{r4} can only be \monobox{1}.
In other words, ``Thread 1's write to \monobox{x}'' happens before ``Thread 2's write to \monobox{y}''.
\subsubsection{Relaxed memory order (RMO)}

\begin{center}
\includegraphics[keepaspectratio,width=0.4\linewidth]{images/hw-relaxed}
\captionof{figure}{The memory model of \textsc{Arm} relaxed hardware.}
\label{hw-relaxed}
\end{center}

As shown in figure \ref{hw-relaxed}, the \textsc{Arm} instruction set adopts a more relaxed memory model.
Each thread effectively maintains its own copy of memory, and every read and write targets that private copy.
When a thread writes to its own copy, the change propagates to the other threads' copies independently, so this model has no total store order.
Furthermore, reads can be delayed until their values are actually needed.
The write order observed by one thread can differ from the order observed by another, because writes may be reordered while they propagate.
However, reads and writes to the \emph{same} memory address must still follow a single total order: which write overwrites which is visible to all threads.
This guarantee is known as coherence; without it, programming such a system would be very difficult.

All of the litmus tests shown above are allowed under the relaxed memory model of \textsc{Arm}, except for the following coherence example.
Neither \textsc{Arm}, x86-TSO, nor sequential consistency permits the outcome \monobox{r1 = 1}, \monobox{r2 = 2}, \monobox{r3 = 2}, \monobox{r4 = 1}, in which Thread 3 would see \monobox{x} change from \monobox{1} to \monobox{2} while Thread 4 sees it change from \monobox{2} to \monobox{1}.
\begin{ccode}
// Litmus Test: Coherence
int x = 0;
int y = 0;

// Thread 1    // Thread 2    // Thread 3    // Thread 4
x = 1;         x = 2;         r1 = x;        r3 = x;
                              r2 = x;        r4 = x;
\end{ccode}
\subsection{C11/C++11 atomics}

By default, all atomic operations, including loads, stores, and various forms of \textsc{RMW}, are considered sequentially consistent.
However, this is just one among many possible orderings.
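For instance, the two functions below are equivalent (a small sketch of our own; the variable and function names are illustrative): the first relies on the default ordering, while the second spells it out.

\begin{ccode}
#include <stdatomic.h>

atomic_int counter = 0;

void store_default(void)
{
    atomic_store(&counter, 1); /* implicitly memory_order_seq_cst */
}

void store_explicit(void)
{
    atomic_store_explicit(&counter, 1, memory_order_seq_cst);
}
\end{ccode}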
@@ -1136,7 +1338,7 @@ \section{Memory orderings}
let's look at what these orderings are and how we can use them.
As it turns out, almost all of the examples we have seen so far do not actually need sequentially consistent operations.

\subsubsection{Acquire and release}

We have just examined the acquire and release operations in the context of the lock example from \secref{lock-example}.
You can think of them as ``one-way'' barriers: an acquire operation permits other reads and writes to move past it,
@@ -1201,7 +1403,7 @@ \subsection{Acquire and release}
}
\end{cppcode}
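As a complement to the lock example, the following is a minimal sketch of our own (using C11 atomics; the names are illustrative) of the other canonical acquire/release pattern: a writer publishes data with a release store, and a reader that observes the flag with an acquire load is guaranteed to see that data.

\begin{ccode}
#include <stdatomic.h>
#include <stdbool.h>

int payload;                     /* plain data, written before publication */
atomic_bool published = false;

void publisher(void)
{
    payload = 42;
    /* Release: everything written before this store is visible to a
     * thread that acquire-loads `published` and sees true. */
    atomic_store_explicit(&published, true, memory_order_release);
}

int consumer(void)
{
    /* Acquire: once we observe true, the write to `payload` is visible. */
    while (!atomic_load_explicit(&published, memory_order_acquire))
        ;
    return payload; /* guaranteed to read 42 */
}
\end{ccode}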
\subsubsection{Relaxed}
Relaxed atomic operations are useful for variables shared between threads where \emph{no specific order} of operations is needed.
Although it may seem like a niche requirement, such scenarios are quite common.
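A common case is a simple event counter that many threads bump and that is only inspected after the threads have been joined; a minimal sketch of our own (the names are illustrative) follows.
No ordering with respect to other data is needed, only atomicity of each increment.

\begin{ccode}
#include <stdatomic.h>
#include <stddef.h>

atomic_size_t events = 0;

void record_event(void)
{
    /* Atomic, but imposes no ordering on surrounding operations. */
    atomic_fetch_add_explicit(&events, 1, memory_order_relaxed);
}

size_t total_events(void)
{
    /* Safe to call after all worker threads have been joined. */
    return atomic_load_explicit(&events, memory_order_relaxed);
}
\end{ccode}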
@@ -1243,7 +1445,7 @@ \subsection{Relaxed}
a \textsc{CAS} loop is performed to claim a job.
All of the loads can be relaxed, as we do not need to enforce any order until we have successfully modified our value.
\subsubsection{Acquire-Release}

\cc|memory_order_acq_rel| is used with atomic \textsc{RMW} operations that need to both load-acquire \emph{and} store-release a value.
A typical example involves thread-safe reference counting,
@@ -1293,7 +1495,7 @@ \subsection{Acquire-Release}
experts-only construct we have in the language.
\end{quote}
\subsubsection{Consume}

Last but not least, we introduce \cc|memory_order_consume|.
Imagine a situation where data changes rarely but is frequently read by many threads.
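To give a flavour of the intended use, here is a sketch of our own (with illustrative names, not the example the text goes on to develop): readers load a pointer with consume ordering and then dereference it, relying only on the data dependency rather than a full acquire.
Note that most current compilers implement \cc|memory_order_consume| by promoting it to \cc|memory_order_acquire|.

\begin{ccode}
#include <stdatomic.h>
#include <stddef.h>

struct config {
    int timeout_ms;
    int retries;
};

/* Current configuration: replaced rarely, read by many threads. */
_Atomic(struct config *) current = NULL;

void update_config(struct config *fresh)
{
    atomic_store_explicit(&current, fresh, memory_order_release);
}

int read_timeout(void)
{
    /* Only reads that depend on the loaded pointer are ordered after it. */
    struct config *c = atomic_load_explicit(&current, memory_order_consume);
    return c ? c->timeout_ms : -1;
}
\end{ccode}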