			 ============================
			 LINUX KERNEL MEMORY BARRIERS
			 ============================

By: David Howells <dhowells@redhat.com>
    Paul E. McKenney <paulmck@linux.vnet.ibm.com>

Contents:

 (*) Abstract memory access model.

     - Device operations.
     - Guarantees.

 (*) What are memory barriers?

     - Varieties of memory barrier.
     - What may not be assumed about memory barriers?
     - Data dependency barriers.
     - Control dependencies.
     - SMP barrier pairing.
     - Examples of memory barrier sequences.
     - Read memory barriers vs load speculation.
     - Transitivity.

 (*) Explicit kernel barriers.

     - Compiler barrier.
     - CPU memory barriers.
     - MMIO write barrier.

 (*) Implicit kernel memory barriers.

     - Locking functions.
     - Interrupt disabling functions.
     - Sleep and wake-up functions.
     - Miscellaneous functions.

 (*) Inter-CPU locking barrier effects.

     - Locks vs memory accesses.
     - Locks vs I/O accesses.

 (*) Where are memory barriers needed?

     - Interprocessor interaction.
     - Atomic operations.
     - Accessing devices.
     - Interrupts.

 (*) Kernel I/O barrier effects.

 (*) Assumed minimum execution ordering model.

 (*) The effects of the CPU cache.

     - Cache coherency.
     - Cache coherency vs DMA.
     - Cache coherency vs MMIO.

 (*) The things CPUs get up to.

     - And then there's the Alpha.

 (*) Example uses.

     - Circular buffers.

 (*) References.


============================
ABSTRACT MEMORY ACCESS MODEL
============================

Consider the following abstract model of the system:

		            :                :
		            :                :
		            :                :
		+-------+   :   +--------+   :   +-------+
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		| CPU 1 |<----->| Memory |<----->| CPU 2 |
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		+-------+   :   +--------+   :   +-------+
		    ^       :       ^        :       ^
		    |       :       |        :       |
		    |       :       |        :       |
		    |       :       v        :       |
		    |       :   +--------+   :       |
		    |       :   |        |   :       |
		    |       :   |        |   :       |
		    +---------->| Device |<----------+
		            :   |        |   :
		            :   |        |   :
		            :   +--------+   :
		            :                :

Each CPU executes a program that generates memory access operations.  In the
abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
perform the memory operations in any order it likes, provided program causality
appears to be maintained.  Similarly, the compiler may also arrange the
instructions it emits in any order it likes, provided it doesn't affect the
apparent operation of the program.

So in the above diagram, the effects of the memory operations performed by a
CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and rest of the system (the dotted lines).


For example, consider the following sequence of events:

	CPU 1		CPU 2
	===============	===============
	{ A == 1; B == 2 }
	A = 3;		x = B;
	B = 4;		y = A;

The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:

	STORE A=3,	STORE B=4,	y=LOAD A->3,	x=LOAD B->4
	STORE A=3,	STORE B=4,	x=LOAD B->4,	y=LOAD A->3
	STORE A=3,	y=LOAD A->3,	STORE B=4,	x=LOAD B->4
	STORE A=3,	y=LOAD A->3,	x=LOAD B->2,	STORE B=4
	STORE A=3,	x=LOAD B->2,	STORE B=4,	y=LOAD A->3
	STORE A=3,	x=LOAD B->2,	y=LOAD A->3,	STORE B=4
	STORE B=4,	STORE A=3,	y=LOAD A->3,	x=LOAD B->4
	STORE B=4, ...
	...

and can thus result in four different combinations of values:

	x == 2, y == 1
	x == 2, y == 3
	x == 4, y == 1
	x == 4, y == 3


Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.


As a further example, consider this sequence of events:

	CPU 1		CPU 2
	===============	===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;		Q = P;
	P = &B;		D = *Q;

There is an obvious data dependency here, as the value loaded into D depends on
the address retrieved from P by CPU 2.  At the end of the sequence, any of the
following results are possible:

	(Q == &A) and (D == 1)
	(Q == &B) and (D == 2)
	(Q == &B) and (D == 4)

Note that CPU 2 will never try to load C into D because the CPU will load P
into Q before issuing the load of *Q.


DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
locations, but the order in which the control registers are accessed is very
important.  For instance, imagine an ethernet card with a set of internal
registers that are accessed through an address port register (A) and a data
port register (D).  To read internal register 5, the following code might then
be used:

	*A = 5;
	x = *D;

but this might show up as either of the following two sequences:

	STORE *A = 5, x = LOAD *D
	x = LOAD *D, STORE *A = 5
the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.

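In a real driver, ordering-sensitive accesses like this would normally go
through the kernel's I/O accessors rather than plain pointer dereferences.  A
minimal sketch, assuming the two registers have been mapped with ioremap() and
that the variable names are purely illustrative:

	void __iomem *addr_port;	/* mapped address port (hypothetical) */
	void __iomem *data_port;	/* mapped data port (hypothetical) */

	writel(5, addr_port);		/* select internal register 5 */
	x = readl(data_port);		/* then read the selected register */

On most architectures, readl() and writel() to the same device are ordered
with respect to each other, which is precisely the property this sequence
needs; see the "Kernel I/O barrier effects" section for the details.
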

GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:

 (*) On any given CPU, dependent memory accesses will be issued in order, with
     respect to itself.  This means that for:

	Q = ACCESS_ONCE(P); smp_read_barrier_depends(); D = ACCESS_ONCE(*Q);

     the CPU will issue the following memory operations:

	Q = LOAD P, D = LOAD *Q

     and always in that order.  On most systems, smp_read_barrier_depends()
     does nothing, but it is required for DEC Alpha.  The ACCESS_ONCE()
     is required to prevent compiler mischief.  Please note that you
     should normally use something like rcu_dereference() instead of
     open-coding smp_read_barrier_depends(); a sketch follows this list.

 (*) Overlapping loads and stores within a particular CPU will appear to be
     ordered within that CPU.  This means that for:

	a = ACCESS_ONCE(*X); ACCESS_ONCE(*X) = b;

     the CPU will only issue the following sequence of memory operations:

	a = LOAD *X, STORE *X = b

     And for:

	ACCESS_ONCE(*X) = c; d = ACCESS_ONCE(*X);

     the CPU will only issue:

	STORE *X = c, d = LOAD *X

     (Loads and stores overlap if they are targeted at overlapping pieces of
     memory).

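As a sketch of the dependent-load guarantee above, here is roughly what the
rcu_dereference() form looks like in practice; the structure, its field and
the global pointer gp are hypothetical:

	struct foo {
		int a;
	};
	struct foo __rcu *gp;		/* RCU-protected pointer (illustrative) */
	struct foo *p;

	rcu_read_lock();
	p = rcu_dereference(gp);	/* dependent load; includes any needed
					 * smp_read_barrier_depends() */
	if (p)
		do_something_with(p->a); /* ordered after the load of gp */
	rcu_read_unlock();
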
And there are a number of things that _must_ or _must_not_ be assumed:

 (*) It _must_not_ be assumed that the compiler will do what you want with
     memory references that are not protected by ACCESS_ONCE().  Without
     ACCESS_ONCE(), the compiler is within its rights to do all sorts
     of "creative" transformations, which are covered in the Compiler
     Barrier section.

 (*) It _must_not_ be assumed that independent loads and stores will be issued
     in the order given.  This means that for:

	X = *A; Y = *B; *D = Z;

     we may get any of the following sequences:

	X = LOAD *A,  Y = LOAD *B,  STORE *D = Z
	X = LOAD *A,  STORE *D = Z, Y = LOAD *B
	Y = LOAD *B,  X = LOAD *A,  STORE *D = Z
	Y = LOAD *B,  STORE *D = Z, X = LOAD *A
	STORE *D = Z, X = LOAD *A,  Y = LOAD *B
	STORE *D = Z, Y = LOAD *B,  X = LOAD *A

 (*) It _must_ be assumed that overlapping memory accesses may be merged or
     discarded.  This means that for:

	X = *A; Y = *(A + 4);

     we may get any one of the following sequences:

	X = LOAD *A; Y = LOAD *(A + 4);
	Y = LOAD *(A + 4); X = LOAD *A;
	{X, Y} = LOAD {*A, *(A + 4)};

     And for:

	*A = X; *(A + 4) = Y;

     we may get any of:

	STORE *A = X; STORE *(A + 4) = Y;
	STORE *(A + 4) = Y; STORE *A = X;
	STORE {*A, *(A + 4)} = {X, Y};

And there are anti-guarantees:

 (*) These guarantees do not apply to bitfields, because compilers often
     generate code to modify these using non-atomic read-modify-write
     sequences.  Do not attempt to use bitfields to synchronize parallel
     algorithms; a sketch of the problem follows this list.

 (*) Even in cases where bitfields are protected by locks, all fields
     in a given bitfield must be protected by one lock.  If two fields
     in a given bitfield are protected by different locks, the compiler's
     non-atomic read-modify-write sequences can cause an update to one
     field to corrupt the value of an adjacent field.

 (*) These guarantees apply only to properly aligned and sized scalar
     variables.  "Properly sized" currently means variables that are
     the same size as "char", "short", "int" and "long".  "Properly
     aligned" means the natural alignment, thus no constraints for
     "char", two-byte alignment for "short", four-byte alignment for
     "int", and either four-byte or eight-byte alignment for "long",
     on 32-bit and 64-bit systems, respectively.  Note that these
     guarantees were introduced into the C11 standard, so beware when
     using older pre-C11 compilers (for example, gcc 4.6).  The portion
     of the standard containing this guarantee is Section 3.14, which
     defines "memory location" as follows:

     	memory location
		either an object of scalar type, or a maximal sequence
		of adjacent bit-fields all having nonzero width

		NOTE 1: Two threads of execution can update and access
		separate memory locations without interfering with
		each other.

		NOTE 2: A bit-field and an adjacent non-bit-field member
		are in separate memory locations. The same applies
		to two bit-fields, if one is declared inside a nested
		structure declaration and the other is not, or if the two
		are separated by a zero-length bit-field declaration,
		or if they are separated by a non-bit-field member
		declaration. It is not safe to concurrently update two
		bit-fields in the same structure if all members declared
		between them are also bit-fields, no matter what the
		sizes of those intervening bit-fields happen to be.
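
As a sketch of the bitfield problem, consider the following hypothetical
structure, with flag1 guarded by lock1 and flag2 guarded by lock2:

	struct bits {
		int flag1 : 1;		/* guarded by lock1 -- BUG */
		int flag2 : 1;		/* guarded by lock2 -- BUG */
	};

Because the two flags share one memory location, an update to flag1 under
lock1 may be compiled into a read-modify-write of the whole word, silently
overwriting a concurrent update to flag2 made under lock2.  Either guard both
flags with the same lock, or make them full scalar types ("int flag1;"), at
which point the memory-location guarantees quoted above apply.
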

=========================
WHAT ARE MEMORY BARRIERS?
=========================

As can be seen above, independent memory operations are effectively performed
in random order, but this can be a problem for CPU-CPU interaction and for I/O.
What is required is some way of intervening to instruct the compiler and the
CPU to restrict the order.

Memory barriers are such interventions.  They impose a perceived partial
ordering over the memory operations on either side of the barrier.

Such enforcement is important because the CPUs and other devices in a system
can use a variety of tricks to improve performance, including reordering,
deferral and combination of memory operations; speculative loads; speculative
branch prediction and various types of caching.  Memory barriers are used to
override or suppress these tricks, allowing the code to sanely control the
interaction of multiple CPUs and/or devices.


VARIETIES OF MEMORY BARRIER
---------------------------

Memory barriers come in four basic varieties:

 (1) Write (or store) memory barriers.

     A write memory barrier gives a guarantee that all the STORE operations
     specified before the barrier will appear to happen before all the STORE
     operations specified after the barrier with respect to the other
     components of the system.

     A write barrier is a partial ordering on stores only; it is not required
     to have any effect on loads.

     A CPU can be viewed as committing a sequence of store operations to the
     memory system as time progresses.  All stores before a write barrier will
     occur in the sequence _before_ all the stores after the write barrier.

     [!] Note that write barriers should normally be paired with read or data
     dependency barriers; see the "SMP barrier pairing" subsection.


 (2) Data dependency barriers.

     A data dependency barrier is a weaker form of read barrier.  In the case
     where two loads are performed such that the second depends on the result
     of the first (eg: the first load retrieves the address to which the second
     load will be directed), a data dependency barrier would be required to
     make sure that the target of the second load is updated before the address
     obtained by the first load is accessed.

     A data dependency barrier is a partial ordering on interdependent loads
     only; it is not required to have any effect on stores, independent loads
     or overlapping loads.

     As mentioned in (1), the other CPUs in the system can be viewed as
     committing sequences of stores to the memory system that the CPU being
     considered can then perceive.  A data dependency barrier issued by the CPU
     under consideration guarantees that for any load preceding it, if that
     load touches one of a sequence of stores from another CPU, then by the
     time the barrier completes, the effects of all the stores prior to that
     touched by the load will be perceptible to any loads issued after the data
     dependency barrier.

     See the "Examples of memory barrier sequences" subsection for diagrams
     showing the ordering constraints.

     [!] Note that the first load really has to have a _data_ dependency and
     not a control dependency.  If the address for the second load is dependent
     on the first load, but the dependency is through a conditional rather than
     actually loading the address itself, then it's a _control_ dependency and
     a full read barrier or better is required.  See the "Control dependencies"
     subsection for more information.

     [!] Note that data dependency barriers should normally be paired with
     write barriers; see the "SMP barrier pairing" subsection.


 (3) Read (or load) memory barriers.

     A read barrier is a data dependency barrier plus a guarantee that all the
     LOAD operations specified before the barrier will appear to happen before
     all the LOAD operations specified after the barrier with respect to the
     other components of the system.

     A read barrier is a partial ordering on loads only; it is not required to
     have any effect on stores.

     Read memory barriers imply data dependency barriers, and so can substitute
     for them.

     [!] Note that read barriers should normally be paired with write barriers;
     see the "SMP barrier pairing" subsection.


 (4) General memory barriers.

     A general memory barrier gives a guarantee that all the LOAD and STORE
     operations specified before the barrier will appear to happen before all
     the LOAD and STORE operations specified after the barrier with respect to
     the other components of the system.

     A general memory barrier is a partial ordering over both loads and stores.

     General memory barriers imply both read and write memory barriers, and so
     can substitute for either.


And a couple of implicit varieties:

 (5) ACQUIRE operations.

     This acts as a one-way permeable barrier.  It guarantees that all memory
     operations after the ACQUIRE operation will appear to happen after the
     ACQUIRE operation with respect to the other components of the system.
     ACQUIRE operations include LOCK operations and smp_load_acquire()
     operations.

     Memory operations that occur before an ACQUIRE operation may appear to
     happen after it completes.

     An ACQUIRE operation should almost always be paired with a RELEASE
     operation.


 (6) RELEASE operations.

     This also acts as a one-way permeable barrier.  It guarantees that all
     memory operations before the RELEASE operation will appear to happen
     before the RELEASE operation with respect to the other components of the
     system. RELEASE operations include UNLOCK operations and
     smp_store_release() operations.

     Memory operations that occur after a RELEASE operation may appear to
     happen before it completes.

     The use of ACQUIRE and RELEASE operations generally precludes the need
     for other sorts of memory barrier (but note the exceptions mentioned in
     the subsection "MMIO write barrier").  In addition, a RELEASE+ACQUIRE
     pair is -not- guaranteed to act as a full memory barrier.  However, after
     an ACQUIRE on a given variable, all memory accesses preceding any prior
     RELEASE on that same variable are guaranteed to be visible.  In other
     words, within a given variable's critical section, all accesses of all
     previous critical sections for that variable are guaranteed to have
     completed.

     This means that ACQUIRE acts as a minimal "acquire" operation and
     RELEASE acts as a minimal "release" operation; a sketch using
     smp_load_acquire() and smp_store_release() follows.

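A minimal message-passing sketch of these semantics, with smp_store_release()
as the RELEASE operation and smp_load_acquire() as the ACQUIRE operation (the
variable names are illustrative only):

	CPU 1		      CPU 2
	===============	      ===============
	{ data == 0, ready == 0 }
	data = 1;
	smp_store_release(&ready, 1);
			      while (!smp_load_acquire(&ready))
				      ;	/* spin */
			      d = data;	/* guaranteed to read 1 */
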

Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device.  If it can be guaranteed that
there won't be any such interaction in any particular piece of code, then
memory barriers are unnecessary in that piece of code.


Note that these are the _minimum_ guarantees.  Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.


WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
----------------------------------------------

There are certain things that the Linux kernel memory barriers do not guarantee:

 (*) There is no guarantee that any of the memory accesses specified before a
     memory barrier will be _complete_ by the completion of a memory barrier
     instruction; the barrier can be considered to draw a line in that CPU's
     access queue that accesses of the appropriate type may not cross.

 (*) There is no guarantee that issuing a memory barrier on one CPU will have
     any direct effect on another CPU or any other hardware in the system.  The
     indirect effect will be the order in which the second CPU sees the effects
     of the first CPU's accesses occur, but see the next point:

 (*) There is no guarantee that a CPU will see the correct order of effects
     from a second CPU's accesses, even _if_ the second CPU uses a memory
     barrier, unless the first CPU _also_ uses a matching memory barrier (see
     the subsection on "SMP Barrier Pairing").

 (*) There is no guarantee that some intervening piece of off-the-CPU
     hardware[*] will not reorder the memory accesses.  CPU cache coherency
     mechanisms should propagate the indirect effects of a memory barrier
     between CPUs, but might not do so in order.

	[*] For information on bus mastering DMA and coherency please read:

	    Documentation/PCI/pci.txt
	    Documentation/DMA-API-HOWTO.txt
	    Documentation/DMA-API.txt


DATA DEPENDENCY BARRIERS
------------------------

The usage requirements of data dependency barriers are a little subtle, and
it's not always obvious that they're needed.  To illustrate, consider the
following sequence of events:

	CPU 1		      CPU 2
	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	ACCESS_ONCE(P) = &B;
			      Q = ACCESS_ONCE(P);
			      D = *Q;

There's a clear data dependency here, and it would seem that by the end of the
sequence, Q must be either &A or &B, and that:

	(Q == &A) implies (D == 1)
	(Q == &B) implies (D == 4)

But!  CPU 2's perception of P may be updated _before_ its perception of B, thus
leading to the following situation:

	(Q == &B) and (D == 2) ????

Whilst this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).

To deal with this, a data dependency barrier or better must be inserted
between the address load and the data load:

	CPU 1		      CPU 2
	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	ACCESS_ONCE(P) = &B;
			      Q = ACCESS_ONCE(P);
			      <data dependency barrier>
			      D = *Q;

This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.

[!] Note that this extremely counterintuitive situation arises most easily on
machines with split caches, so that, for example, one cache bank processes
even-numbered cache lines and the other bank processes odd-numbered cache
lines.  The pointer P might be stored in an odd-numbered cache line, and the
variable B might be stored in an even-numbered cache line.  Then, if the
even-numbered bank of the reading CPU's cache is extremely busy while the
odd-numbered bank is idle, one can see the new value of the pointer P (&B),
but the old value of the variable B (2).


Another example of where data dependency barriers might be required is where a
number is read from memory and then used to calculate the index for an array
access:

	CPU 1		      CPU 2
	===============	      ===============
	{ M[0] == 1, M[1] == 2, M[3] == 3, P == 0, Q == 3 }
	M[1] = 4;
	<write barrier>
	ACCESS_ONCE(P) = 1;
			      Q = ACCESS_ONCE(P);
			      <data dependency barrier>
			      D = M[Q];


The data dependency barrier is very important to the RCU system,
for example.  See rcu_assign_pointer() and rcu_dereference() in
include/linux/rcupdate.h.  This permits the current target of an RCU'd
pointer to be replaced with a new modified target, without the replacement
target appearing to be incompletely initialised.

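As a sketch of that usage, with a hypothetical RCU-protected pointer gp:
rcu_assign_pointer() supplies the write barrier on the updater side, and
rcu_dereference() supplies the data dependency barrier on the reader side:

	CPU 1 (updater)	      CPU 2 (reader)
	===============	      ===============
	p->a = 1;
	rcu_assign_pointer(gp, p);
			      rcu_read_lock();
			      q = rcu_dereference(gp);
			      if (q)
				      do_something(q->a);
			      rcu_read_unlock();
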
See also the subsection on "Cache Coherency" for a more thorough example.


CONTROL DEPENDENCIES
--------------------

A load-load control dependency requires a full read memory barrier, not
simply a data dependency barrier to make it work correctly.  Consider the
following bit of code:

	q = ACCESS_ONCE(a);
	if (q) {
		<data dependency barrier>  /* BUG: No data dependency!!! */
		p = ACCESS_ONCE(b);
	}

This will not have the desired effect because there is no actual data
dependency, but rather a control dependency that the CPU may short-circuit
by attempting to predict the outcome in advance, so that other CPUs see
the load from b as having happened before the load from a.  In such a
case what's actually required is:

	q = ACCESS_ONCE(a);
	if (q) {
		<read barrier>
		p = ACCESS_ONCE(b);
	}

However, stores are not speculated.  This means that ordering -is- provided
for load-store control dependencies, as in the following example:

	q = ACCESS_ONCE(a);
	if (q) {
		ACCESS_ONCE(b) = p;
	}

Control dependencies pair normally with other types of barriers.
That said, please note that ACCESS_ONCE() is not optional!  Without the
ACCESS_ONCE(), the compiler might combine the load from 'a' with other loads
from 'a', and the store to 'b' with other stores to 'b', with possible highly
counterintuitive effects on ordering.

Worse yet, if the compiler is able to prove (say) that the value of
variable 'a' is always non-zero, it would be well within its rights
to optimize the original example by eliminating the "if" statement
as follows:

	q = a;
	b = p;  /* BUG: Compiler and CPU can both reorder!!! */

So don't leave out the ACCESS_ONCE().

It is tempting to try to enforce ordering on identical stores on both
branches of the "if" statement as follows:

	q = ACCESS_ONCE(a);
	if (q) {
		barrier();
		ACCESS_ONCE(b) = p;
		do_something();
	} else {
		barrier();
		ACCESS_ONCE(b) = p;
		do_something_else();
	}

Unfortunately, current compilers will transform this as follows at high
optimization levels:

	q = ACCESS_ONCE(a);
	barrier();
	ACCESS_ONCE(b) = p;  /* BUG: No ordering vs. load from a!!! */
	if (q) {
		/* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
		do_something();
	} else {
		/* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
		do_something_else();
	}

Now there is no conditional between the load from 'a' and the store to
'b', which means that the CPU is within its rights to reorder them:
The conditional is absolutely required, and must be present in the
assembly code even after all compiler optimizations have been applied.
Therefore, if you need ordering in this example, you need explicit
memory barriers, for example, smp_store_release():

	q = ACCESS_ONCE(a);
	if (q) {
		smp_store_release(&b, p);
		do_something();
	} else {
		smp_store_release(&b, p);
		do_something_else();
	}

In contrast, without explicit memory barriers, two-legged-if control
ordering is guaranteed only when the stores differ, for example:

	q = ACCESS_ONCE(a);
	if (q) {
		ACCESS_ONCE(b) = p;
		do_something();
	} else {
		ACCESS_ONCE(b) = r;
		do_something_else();
	}

The initial ACCESS_ONCE() is still required to prevent the compiler from
proving the value of 'a'.

In addition, you need to be careful what you do with the local variable 'q',
otherwise the compiler might be able to guess the value and again remove
the needed conditional.  For example:

	q = ACCESS_ONCE(a);
	if (q % MAX) {
		ACCESS_ONCE(b) = p;
		do_something();
	} else {
		ACCESS_ONCE(b) = r;
		do_something_else();
	}

If MAX is defined to be 1, then the compiler knows that (q % MAX) is
equal to zero, in which case the compiler is within its rights to
transform the above code into the following:

	q = ACCESS_ONCE(a);
	ACCESS_ONCE(b) = p;
	do_something_else();

Given this transformation, the CPU is not required to respect the ordering
between the load from variable 'a' and the store to variable 'b'.  It is
tempting to add a barrier(), but this does not help.  The conditional
is gone, and the barrier won't bring it back.  Therefore, if you are
relying on this ordering, you should make sure that MAX is greater than
one, perhaps as follows:

	q = ACCESS_ONCE(a);
	BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
	if (q % MAX) {
		ACCESS_ONCE(b) = p;
		do_something();
	} else {
		ACCESS_ONCE(b) = r;
		do_something_else();
	}

Please note once again that the stores to 'b' differ.  If they were
identical, as noted earlier, the compiler could pull this store outside
of the 'if' statement.

You must also be careful not to rely too much on boolean short-circuit
evaluation.  Consider this example:

	q = ACCESS_ONCE(a);
	if (q || 1 > 0)
		ACCESS_ONCE(b) = 1;

Because the second condition is always true, the compiler can transform this
example as follows, defeating the control dependency:

	q = ACCESS_ONCE(a);
	ACCESS_ONCE(b) = 1;

This example underscores the need to ensure that the compiler cannot
out-guess your code.  More generally, although ACCESS_ONCE() does force
the compiler to actually emit code for a given load, it does not force
the compiler to use the results.

Finally, control dependencies do -not- provide transitivity.  This is
demonstrated by two related examples, with the initial values of
x and y both being zero:

	CPU 0                     CPU 1
	=====================     =====================
	r1 = ACCESS_ONCE(x);      r2 = ACCESS_ONCE(y);
	if (r1 > 0)               if (r2 > 0)
	  ACCESS_ONCE(y) = 1;       ACCESS_ONCE(x) = 1;

	assert(!(r1 == 1 && r2 == 1));

The above two-CPU example will never trigger the assert().  However,
if control dependencies guaranteed transitivity (which they do not),
then adding the following CPU would guarantee a related assertion:

	CPU 2
	=====================
	ACCESS_ONCE(x) = 2;

	assert(!(r1 == 2 && r2 == 1 && x == 2)); /* FAILS!!! */

But because control dependencies do -not- provide transitivity, the above
assertion can fail after the combined three-CPU example completes.  If you
need the three-CPU example to provide ordering, you will need smp_mb()
between the loads and stores in the CPU 0 and CPU 1 code fragments,
that is, just before or just after the "if" statements.

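That is, for the three-CPU assertion to hold, the CPU 0 and CPU 1 fragments
would need to read something like the following:

	CPU 0                     CPU 1
	=====================     =====================
	r1 = ACCESS_ONCE(x);      r2 = ACCESS_ONCE(y);
	smp_mb();                 smp_mb();
	if (r1 > 0)               if (r2 > 0)
	  ACCESS_ONCE(y) = 1;       ACCESS_ONCE(x) = 1;
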
These two examples are the LB and WWC litmus tests from this paper:
http://www.cl.cam.ac.uk/users/pes20/ppc-supplemental/test6.pdf and this
site: https://www.cl.cam.ac.uk/~pes20/ppcmem/index.html.

In summary:

  (*) Control dependencies can order prior loads against later stores.
      However, they do -not- guarantee any other sort of ordering:
      Not prior loads against later loads, nor prior stores against
      later anything.  If you need these other forms of ordering,
      use smp_rmb(), smp_wmb(), or, in the case of prior stores and
      later loads, smp_mb().

  (*) If both legs of the "if" statement begin with identical stores
      to the same variable, a barrier() statement is required at the
      beginning of each leg of the "if" statement.

  (*) Control dependencies require at least one run-time conditional
      between the prior load and the subsequent store, and this
      conditional must involve the prior load.  If the compiler
      is able to optimize the conditional away, it will have also
      optimized away the ordering.  Careful use of ACCESS_ONCE() can
      help to preserve the needed conditional.

  (*) Control dependencies require that the compiler avoid reordering the
      dependency into nonexistence.  Careful use of ACCESS_ONCE() or
      barrier() can help to preserve your control dependency.  Please
      see the Compiler Barrier section for more information.

  (*) Control dependencies pair normally with other types of barriers.

  (*) Control dependencies do -not- provide transitivity.  If you
      need transitivity, use smp_mb().


SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired.  A lack of appropriate pairing is almost certainly an error.

General barriers pair with each other, though they also pair with most
other types of barriers, albeit without transitivity.  An acquire barrier
pairs with a release barrier, but both may also pair with other barriers,
including of course general barriers.  A write barrier pairs with a data
dependency barrier, a control dependency, an acquire barrier, a release
barrier, a read barrier, or a general barrier.  Similarly a read barrier,
control dependency, or a data dependency barrier pairs with a write
barrier, an acquire barrier, a release barrier, or a general barrier:

	CPU 1		      CPU 2
	===============	      ===============
	ACCESS_ONCE(a) = 1;
	<write barrier>
	ACCESS_ONCE(b) = 2;   x = ACCESS_ONCE(b);
			      <read barrier>
			      y = ACCESS_ONCE(a);

Or:

	CPU 1		      CPU 2
	===============	      ===============================
	a = 1;
	<write barrier>
	ACCESS_ONCE(b) = &a;  x = ACCESS_ONCE(b);
			      <data dependency barrier>
			      y = *x;

Or even:

	CPU 1		      CPU 2
	===============	      ===============================
	r1 = ACCESS_ONCE(y);
	<general barrier>
	ACCESS_ONCE(x) = 1;   if (r2 = ACCESS_ONCE(x)) {
			         <implicit control dependency>
			         ACCESS_ONCE(y) = 1;
			      }

	assert(r1 == 0 || r2 == 0);

Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.

[!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the data dependency barrier, and vice
versa:

	CPU 1                               CPU 2
	===================                 ===================
	ACCESS_ONCE(a) = 1;  }----   --->{  v = ACCESS_ONCE(c);
	ACCESS_ONCE(b) = 2;  }    \ /    {  w = ACCESS_ONCE(d);
	<write barrier>            \        <read barrier>
	ACCESS_ONCE(c) = 3;  }    / \    {  x = ACCESS_ONCE(a);
	ACCESS_ONCE(d) = 4;  }----   --->{  y = ACCESS_ONCE(b);

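In real code, the pseudo-barriers above map onto the CPU memory barrier
functions described under "CPU memory barriers" below.  The first pairing
example might therefore be sketched as:

	CPU 1		      CPU 2
	===============	      ===============
	ACCESS_ONCE(a) = 1;
	smp_wmb();	      /* pairs with smp_rmb() below */
	ACCESS_ONCE(b) = 2;   x = ACCESS_ONCE(b);
			      smp_rmb();
			      y = ACCESS_ONCE(a);
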

EXAMPLES OF MEMORY BARRIER SEQUENCES
------------------------------------

Firstly, write barriers act as partial orderings on store operations.
Consider the following sequence of events:

	CPU 1
	=======================
	STORE A = 1
	STORE B = 2
	STORE C = 3
	<write barrier>
	STORE D = 4
	STORE E = 5

This sequence of events is committed to the memory coherence system in an order
that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}:

	+-------+       :      :
	|       |       +------+
	|       |------>| C=3  |     }     /\
	|       |  :    +------+     }-----  \  -----> Events perceptible to
	|       |  :    | A=1  |     }        \/       the rest of the system
	|       |  :    +------+     }
	| CPU 1 |  :    | B=2  |     }
	|       |       +------+     }
	|       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
	|       |       +------+     }        requires all stores prior to the
	|       |  :    | E=5  |     }        barrier to be committed before
	|       |  :    +------+     }        further stores may take place
	|       |------>| D=4  |     }
	|       |       +------+
	+-------+       :      :
	                   |
	                   | Sequence in which stores are committed to the
	                   | memory system by CPU 1
	                   V


Secondly, data dependency barriers act as partial orderings on data-dependent
loads.  Consider the following sequence of events:

	CPU 1			CPU 2
	=======================	=======================
		{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		LOAD X
	STORE D = 4		LOAD C (gets &B)
				LOAD *C (reads B)

Without intervention, CPU 2 may perceive the events on CPU 1 in some
effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+  | Sequence of update
	|       |------>| B=2  |-----       --->| Y->8  |  | of perception on
	|       |  :    +------+     \          +-------+  | CPU 2
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	    Apparently incorrect --->  |        | B->7  |------>|       |
	    perception of B (!)        |        +-------+       |       |
	                               |        :       :       |       |
	                               |        +-------+       |       |
	    The load of X holds --->    \       | X->9  |------>|       |
	    up the maintenance           \      +-------+       |       |
	    of coherence of B             ----->| B->2  |       +-------+
	                                        +-------+
	                                        :       :


In the above example, CPU 2 perceives that B is 7, despite the load of *C
(which would be B) coming after the LOAD of C.

If, however, a data dependency barrier were to be placed between the load of C
and the load of *C (ie: B) on CPU 2:

	CPU 1			CPU 2
	=======================	=======================
		{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		LOAD X
	STORE D = 4		LOAD C (gets &B)
				<data dependency barrier>
				LOAD *C (reads B)

then the following will occur:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| B=2  |-----       --->| Y->8  |
	|       |  :    +------+     \          +-------+
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	                               |        | X->9  |------>|       |
	                               |        +-------+       |       |
	  Makes sure all effects --->   \   ddddddddddddddddd   |       |
	  prior to the store of C        \      +-------+       |       |
	  are perceptible to              ----->| B->2  |------>|       |
	  subsequent loads                      +-------+       |       |
	                                        :       :       +-------+


And thirdly, a read barrier acts as a partial order on loads.  Consider the
following sequence of events:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A

Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
some effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       | A->0  |------>|       |
	                                |       +-------+       |       |
	                                |       :       :       +-------+
	                                 \      :       :
	                                  \     +-------+
	                                   ---->| A->1  |
	                                        +-------+
	                                        :       :


If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				<read barrier>
				LOAD A

then the partial ordering imposed by CPU 1 will be perceived correctly by
CPU 2:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B        ---->| A->1  |------>|       |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+


To illustrate this more completely, consider what could happen if the code
contained a load of A either side of the read barrier:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A [first load of A]
				<read barrier>
				LOAD A [second load of A]

Even though the two loads of A both occur after the load of B, they may both
come up with different values:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	                                |       +-------+       |       |
	                                |       | A->0  |------>| 1st   |
	                                |       +-------+       |       |
	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B        ---->| A->1  |------>| 2nd   |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+


But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
before the read barrier completes anyway:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                 \      :       :       |       |
	                                  \     +-------+       |       |
	                                   ---->| A->1  |------>| 1st   |
	                                        +-------+       |       |
	                                    rrrrrrrrrrrrrrrrr   |       |
	                                        +-------+       |       |
	                                        | A->1  |------>| 2nd   |
	                                        +-------+       |       |
	                                        :       :       +-------+


The guarantee is that the second load will always come up with A == 1 if the
load of B came up with B == 2.  No such guarantee exists for the first load of
A; that may come up with either A == 0 or A == 1.


READ MEMORY BARRIERS VS LOAD SPECULATION
----------------------------------------

Many CPUs speculate with loads: that is, they see that they will need to load
an item from memory, and they find a time where they're not using the bus for
any other loads, and so do the load in advance - even though they haven't
actually got to that point in the instruction execution flow yet.  This
permits the actual load instruction to potentially complete immediately
because the CPU already has the value to hand.

It may turn out that the CPU didn't actually need the value - perhaps because a
branch circumvented the load - in which case it can discard the value or just
cache it for later use.

Consider:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE		} Divide instructions generally
				DIVIDE		} take a long time to perform
				LOAD A

Which might appear as this:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the              +-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	Once the divisions are complete -->     :       :   ~-->|       |
	the CPU can then perform the            :       :       |       |
	LOAD with immediate effect              :       :       +-------+


Placing a read barrier or a data dependency barrier just before the second
load:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE
				DIVIDE
				<read barrier>
				LOAD A

will force any value speculatively obtained to be reconsidered to an extent
dependent on the type of barrier used.  If there was no change made to the
speculated memory location, then the speculated value will just be used:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the              +-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrr~   |       |
	                                        :       :   ~   |       |
	                                        :       :   ~-->|       |
	                                        :       :       |       |
	                                        :       :       +-------+


but if there was an update or an invalidation from another CPU pending, then
the speculation will be cancelled and the value reloaded:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the              +-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrrr   |       |
	                                        +-------+       |       |
	The speculation is discarded --->   --->| A->1  |------>|       |
	and an updated value is                 +-------+       |       |
	retrieved                               :       :       +-------+


TRANSITIVITY
------------

Transitivity is a deeply intuitive notion about ordering that is not
always provided by real computer systems.  The following example
demonstrates transitivity (also called "cumulativity"):

	CPU 1			CPU 2			CPU 3
	=======================	=======================	=======================
		{ X = 0, Y = 0 }
	STORE X=1		LOAD X			STORE Y=1
				<general barrier>	<general barrier>
				LOAD Y			LOAD X

Suppose that CPU 2's load from X returns 1 and its load from Y returns 0.
This indicates that CPU 2's load from X in some sense follows CPU 1's
store to X and that CPU 2's load from Y in some sense preceded CPU 3's
store to Y.  The question is then "Can CPU 3's load from X return 0?"

Because CPU 2's load from X in some sense came after CPU 1's store, it
is natural to expect that CPU 3's load from X must therefore return 1.
This expectation is an example of transitivity: if a load executing on
CPU A follows a load from the same variable executing on CPU B, then
CPU A's load must either return the same value that CPU B's load did,
or must return some later value.

In the Linux kernel, use of general memory barriers guarantees
transitivity.  Therefore, in the above example, if CPU 2's load from X
returns 1 and its load from Y returns 0, then CPU 3's load from X must
also return 1.

However, transitivity is -not- guaranteed for read or write barriers.
For example, suppose that CPU 2's general barrier in the above example
is changed to a read barrier as shown below:

	CPU 1			CPU 2			CPU 3
	=======================	=======================	=======================
		{ X = 0, Y = 0 }
	STORE X=1		LOAD X			STORE Y=1
				<read barrier>		<general barrier>
				LOAD Y			LOAD X

This substitution destroys transitivity: in this example, it is perfectly
legal for CPU 2's load from X to return 1, its load from Y to return 0,
and CPU 3's load from X to return 0.

The key point is that although CPU 2's read barrier orders its pair
of loads, it does not guarantee to order CPU 1's store.  Therefore, if
this example runs on a system where CPUs 1 and 2 share a store buffer
or a level of cache, CPU 2 might have early access to CPU 1's writes.
General barriers are therefore required to ensure that all CPUs agree
on the combined order of CPU 1's and CPU 2's accesses.

To reiterate, if your code requires transitivity, use general barriers
throughout.
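
In kernel code, the <general barrier> in these examples would be smp_mb().
CPU 2's fragment from the transitive version might thus be sketched as:

	r1 = ACCESS_ONCE(X);
	smp_mb();		/* general barrier: preserves transitivity */
	r2 = ACCESS_ONCE(Y);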


========================
EXPLICIT KERNEL BARRIERS
========================

The Linux kernel has a variety of different barriers that act at different
levels:

  (*) Compiler barrier.

  (*) CPU memory barriers.

  (*) MMIO write barrier.


COMPILER BARRIER
----------------

The Linux kernel has an explicit compiler barrier function that prevents the
compiler from moving the memory accesses either side of it to the other side:

	barrier();

This is a general barrier -- there are no read-read or write-write variants
of barrier().  However, ACCESS_ONCE() can be thought of as a weak form
of barrier() that affects only the specific accesses flagged by the
ACCESS_ONCE().
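
For reference, ACCESS_ONCE() is essentially a volatile cast, defined in
include/linux/compiler.h along the lines of:

	#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))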

The barrier() function has the following effects:

 (*) Prevents the compiler from reordering accesses following the
     barrier() to precede any accesses preceding the barrier().
     One example use for this property is to ease communication between
     interrupt-handler code and the code that was interrupted.

 (*) Within a loop, forces the compiler to load the variables used
     in that loop's conditional on each pass through that loop, as the
     sketch below illustrates.

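For example, a busy-wait loop polling a flag that is set by an interrupt
handler might use barrier() to force the flag to be re-read on each pass (the
flag variable here is purely illustrative):

	while (!flag)
		barrier();	/* forces flag to be reloaded each pass */
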
The ACCESS_ONCE() function can prevent any number of optimizations that,
while perfectly safe in single-threaded code, can be fatal in concurrent
code.  Here are some examples of these sorts of optimizations:

 (*) The compiler is within its rights to reorder loads and stores
     to the same variable, and in some cases, the CPU is within its
     rights to reorder loads to the same variable.  This means that
     the following code:

	a[0] = x;
	a[1] = x;

     Might result in an older value of x stored in a[1] than in a[0].
     Prevent both the compiler and the CPU from doing this as follows:

	a[0] = ACCESS_ONCE(x);
	a[1] = ACCESS_ONCE(x);

     In short, ACCESS_ONCE() provides cache coherence for accesses from
     multiple CPUs to a single variable.

 (*) The compiler is within its rights to merge successive loads from
     the same variable.  Such merging can cause the compiler to "optimize"
     the following code:

	while (tmp = a)
		do_something_with(tmp);

     into the following code, which, although in some sense legitimate
     for single-threaded code, is almost certainly not what the developer
     intended:

	if (tmp = a)
		for (;;)
			do_something_with(tmp);

     Use ACCESS_ONCE() to prevent the compiler from doing this to you:

	while (tmp = ACCESS_ONCE(a))
		do_something_with(tmp);

 (*) The compiler is within its rights to reload a variable, for example,
     in cases where high register pressure prevents the compiler from
     keeping all data of interest in registers.  The compiler might
     therefore optimize the variable 'tmp' out of our previous example:

	while (tmp = a)
		do_something_with(tmp);

     This could result in the following code, which is perfectly safe in
     single-threaded code, but can be fatal in concurrent code:

	while (a)
		do_something_with(a);

     For example, the optimized version of this code could result in
     passing a zero to do_something_with() in the case where the variable
     a was modified by some other CPU between the "while" statement and
     the call to do_something_with().

     Again, use ACCESS_ONCE() to prevent the compiler from doing this:

	while (tmp = ACCESS_ONCE(a))
		do_something_with(tmp);

     Note that if the compiler runs short of registers, it might save
     tmp onto the stack.  The overhead of this saving and later restoring
     is why compilers reload variables.  Doing so is perfectly safe for
     single-threaded code, so you need to tell the compiler about cases
     where it is not safe.

 (*) The compiler is within its rights to omit a load entirely if it knows
     what the value will be.  For example, if the compiler can prove that
     the value of variable 'a' is always zero, it can optimize this code:

	while (tmp = a)
		do_something_with(tmp);

     Into this:

	do { } while (0);

     This transformation is a win for single-threaded code because it gets
     rid of a load and a branch.  The problem is that the compiler will
     carry out its proof assuming that the current CPU is the only one
     updating variable 'a'.  If variable 'a' is shared, then the compiler's
     proof will be erroneous.  Use ACCESS_ONCE() to tell the compiler
     that it doesn't know as much as it thinks it does:

	while (tmp = ACCESS_ONCE(a))
		do_something_with(tmp);

     But please note that the compiler is also closely watching what you
     do with the value after the ACCESS_ONCE().  For example, suppose you
     do the following and MAX is a preprocessor macro with the value 1:

	while ((tmp = ACCESS_ONCE(a)) % MAX)
		do_something_with(tmp);

     Then the compiler knows that the result of the "%" operator applied
     to MAX will always be zero, again allowing the compiler to optimize
     the code into near-nonexistence.  (It will still load from the
     variable 'a'.)

 (*) Similarly, the compiler is within its rights to omit a store entirely
     if it knows that the variable already has the value being stored.
     Again, the compiler assumes that the current CPU is the only one
     storing into the variable, which can cause the compiler to do the
     wrong thing for shared variables.  For example, suppose you have
     the following:

	a = 0;
	/* Code that does not store to variable a. */
	a = 0;

     The compiler sees that the value of variable 'a' is already zero, so
     it might well omit the second store.  This would come as a fatal
     surprise if some other CPU might have stored to variable 'a' in the
     meantime.

     Use ACCESS_ONCE() to prevent the compiler from making this sort of
     wrong guess:

	ACCESS_ONCE(a) = 0;
	/* Code that does not store to variable a. */
	ACCESS_ONCE(a) = 0;

 (*) The compiler is within its rights to reorder memory accesses unless
     you tell it not to.  For example, consider the following interaction
     between process-level code and an interrupt handler:

	void process_level(void)
	{
		msg = get_message();
		flag = true;
	}

	void interrupt_handler(void)
	{
		if (flag)
			process_message(msg);
	}

     There is nothing to prevent the compiler from transforming
     process_level() to the following; in fact, this might well be a
     win for single-threaded code:

	void process_level(void)
	{
		flag = true;
		msg = get_message();
	}

     If the interrupt occurs between these two statements, then
     interrupt_handler() might be passed a garbled msg.  Use ACCESS_ONCE()
     to prevent this as follows:

	void process_level(void)
	{
		ACCESS_ONCE(msg) = get_message();
		ACCESS_ONCE(flag) = true;
	}

	void interrupt_handler(void)
	{
		if (ACCESS_ONCE(flag))
			process_message(ACCESS_ONCE(msg));
	}

     Note that the ACCESS_ONCE() wrappers in interrupt_handler()
     are needed if this interrupt handler can itself be interrupted
     by something that also accesses 'flag' and 'msg', for example,
     a nested interrupt or an NMI.  Otherwise, ACCESS_ONCE() is not
     needed in interrupt_handler() other than for documentation purposes.
     (Note also that nested interrupts do not typically occur in modern
     Linux kernels; in fact, if an interrupt handler returns with
     interrupts enabled, you will get a WARN_ONCE() splat.)

     You should assume that the compiler can move ACCESS_ONCE() past
     code not containing ACCESS_ONCE(), barrier(), or similar primitives.

     This effect could also be achieved using barrier(), but ACCESS_ONCE()
     is more selective:  With ACCESS_ONCE(), the compiler need only forget
     the contents of the indicated memory locations, while with barrier()
     the compiler must discard the value of all memory locations that
     it has currently cached in any machine registers.  Of course,
     the compiler must also respect the order in which the ACCESS_ONCE()s
     occur, though the CPU of course need not do so.

 (*) The compiler is within its rights to invent stores to a variable,
     as in the following example:

	if (a)
		b = a;
	else
		b = 42;

     The compiler might save a branch by optimizing this as follows:

	b = 42;
	if (a)
		b = a;

     In single-threaded code, this is not only safe, but also saves
     a branch.  Unfortunately, in concurrent code, this optimization
     could cause some other CPU to see a spurious value of 42 -- even
     if variable 'a' was never zero -- when loading variable 'b'.
     Use ACCESS_ONCE() to prevent this as follows:

	if (a)
		ACCESS_ONCE(b) = a;
	else
		ACCESS_ONCE(b) = 42;

     The compiler can also invent loads.  These are usually less
     damaging, but they can result in cache-line bouncing and thus in
     poor performance and scalability.  Use ACCESS_ONCE() to prevent
     invented loads.

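     For illustration, a minimal sketch of how an invented load might
     arise (the variable names are hypothetical):

	/* Source code: 'shared_var' is only read when needed. */
	if (interesting)
		tmp = shared_var;

	/* The compiler might emit branchless code that loads
	 * 'shared_var' unconditionally, pulling its cacheline over
	 * even when 'interesting' is clear: */
	tmp2 = shared_var;		/* invented load */
	if (interesting)
		tmp = tmp2;

     Writing 'tmp = ACCESS_ONCE(shared_var);' in the source forbids
     this transformation.
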
 (*) For aligned memory locations whose size allows them to be accessed
     with a single memory-reference instruction, ACCESS_ONCE() prevents
     "load tearing" and "store tearing," in which a single large access
     is replaced by multiple smaller accesses.  For example, given an
     architecture having 16-bit store instructions with 7-bit immediate
     fields, the compiler might be tempted to use two 16-bit
     store-immediate instructions to implement the following 32-bit store:

	p = 0x00010002;

     Please note that GCC really does use this sort of optimization,
     which is not surprising given that it would likely take more
     than two instructions to build the constant and then store it.
     This optimization can therefore be a win in single-threaded code.
     In fact, a recent bug (since fixed) caused GCC to incorrectly use
     this optimization in a volatile store.  In the absence of such bugs,
     use of ACCESS_ONCE() prevents store tearing in the following example:

	ACCESS_ONCE(p) = 0x00010002;

     Use of packed structures can also result in load and store tearing,
     as in this example:

	struct __attribute__((__packed__)) foo {
		short a;
		int b;
		short c;
	};
	struct foo foo1, foo2;
	...

	foo2.a = foo1.a;
	foo2.b = foo1.b;
	foo2.c = foo1.c;

     Because there are no ACCESS_ONCE() wrappers and no volatile markings,
     the compiler would be well within its rights to implement these three
     assignment statements as a pair of 32-bit loads followed by a pair
     of 32-bit stores.  This would result in load tearing on 'foo1.b'
     and store tearing on 'foo2.b'.  ACCESS_ONCE() again prevents tearing
     in this example:

	foo2.a = foo1.a;
	ACCESS_ONCE(foo2.b) = ACCESS_ONCE(foo1.b);
	foo2.c = foo1.c;
All that aside, it is never necessary to use ACCESS_ONCE() on a variable
that has been marked volatile.  For example, because 'jiffies' is marked
volatile, it is never necessary to say ACCESS_ONCE(jiffies).  The reason
for this is that ACCESS_ONCE() is implemented as a volatile cast, which
has no effect when its argument is already marked volatile.

Please note that these compiler barriers have no direct effect on the CPU,
which may then reorder things however it wishes.


CPU MEMORY BARRIERS
-------------------

The Linux kernel has eight basic CPU memory barriers:

	TYPE		MANDATORY		SMP CONDITIONAL
	===============	=======================	===========================
	GENERAL		mb()			smp_mb()
	WRITE		wmb()			smp_wmb()
	READ		rmb()			smp_rmb()
	DATA DEPENDENCY	read_barrier_depends()	smp_read_barrier_depends()


All memory barriers except the data dependency barriers imply a compiler
barrier.  Data dependencies do not impose any additional compiler ordering.

Aside: In the case of data dependencies, the compiler would be expected to
issue the loads in the correct order (eg. a[b] would have to load the value
of b before loading a[b]); however, the C specification does not guarantee
that the compiler will not speculate the value of b (eg. guess that it is
equal to 1) and load a before b (eg. tmp = a[1]; if (b != 1) tmp = a[b]; ).
There is also the problem of a compiler reloading b after having loaded
a[b], thus ending up with a newer copy of b than of a[b].  A consensus has
not yet been reached about these problems; however, the ACCESS_ONCE()
macro is a good place to start looking.

SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
systems because it is assumed that a CPU will appear to be self-consistent,
and will order overlapping accesses correctly with respect to itself.

[!] Note that SMP memory barriers _must_ be used to control the ordering of
references to shared memory on SMP systems, though the use of locking instead
will also be sufficient.

Mandatory barriers should not be used to control SMP effects, since mandatory
barriers unnecessarily impose overhead on UP systems.  They may, however, be
used to control MMIO effects on accesses through relaxed memory I/O windows.
These are required even on non-SMP systems as they affect the order in which
memory operations appear to a device by prohibiting both the compiler and the
CPU from reordering them.

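As a rule of thumb (a minimal sketch; the structure and register names are
hypothetical):

	/* CPU<->CPU publication through cacheable memory: an SMP
	 * barrier suffices, and costs nothing on a UP build. */
	shared->data = val;
	smp_wmb();
	shared->ready = 1;

	/* CPU->device ordering through a relaxed I/O window: a
	 * mandatory barrier is required, even on UP. */
	writel(val, dev->data_reg);
	wmb();
	writel(1, dev->go_reg);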

There are some more advanced barrier functions:

 (*) set_mb(var, value)

     This assigns the value to the variable and then inserts a full memory
     barrier after it.  It isn't guaranteed to insert anything more than a
     compiler barrier in a UP compilation.

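     As a usage sketch, the sleeper pattern described in "Sleep and wake-up
     functions" below can be written directly with set_mb():

	set_mb(current->state, TASK_UNINTERRUPTIBLE);
	/* the implied barrier orders the state store before this load */
	if (!event_indicated)
		schedule();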

 (*) smp_mb__before_atomic();
 (*) smp_mb__after_atomic();

     These are for use with atomic (such as add, subtract, increment and
     decrement) functions that don't return a value, especially when used for
     reference counting.  These functions do not imply memory barriers.

     These are also used for atomic bitop functions that do not return a
     value (such as set_bit and clear_bit).

     As an example, consider a piece of code that marks an object as being dead
     and then decrements the object's reference count:

	obj->dead = 1;
	smp_mb__before_atomic();
	atomic_dec(&obj->ref_count);

     This makes sure that the death mark on the object is perceived to be set
     *before* the reference counter is decremented.

     See Documentation/atomic_ops.txt for more information.  See the "Atomic
     operations" subsection for information on where to use these.


 (*) dma_wmb();
 (*) dma_rmb();

     These are for use with consistent memory to guarantee the ordering
     of writes or reads of shared memory accessible to both the CPU and a
     DMA capable device.

     For example, consider a device driver that shares memory with a device
     and uses a descriptor status value to indicate if the descriptor belongs
     to the device or the CPU, and a doorbell to notify it when new
     descriptors are available:

	if (desc->status != DEVICE_OWN) {
		/* do not read data until we own descriptor */
		dma_rmb();

		/* read/modify data */
		read_data = desc->data;
		desc->data = write_data;

		/* flush modifications before status update */
		dma_wmb();

		/* assign ownership */
		desc->status = DEVICE_OWN;

		/* force memory to sync before notifying device via MMIO */
		wmb();

		/* notify device of new descriptors */
		writel(DESC_NOTIFY, doorbell);
	}

     The dma_rmb() allows us to guarantee the device has released ownership
     before we read the data from the descriptor, and the dma_wmb() allows
     us to guarantee the data is written to the descriptor before the device
     can see it now has ownership.  The wmb() is needed to guarantee that the
     cache coherent memory writes have completed before attempting a write to
     the cache incoherent MMIO region.

     See Documentation/DMA-API.txt for more information on consistent memory.

MMIO WRITE BARRIER
------------------

The Linux kernel also has a special barrier for use with memory-mapped I/O
writes:

	mmiowb();

This is a variation on the mandatory write barrier that causes writes to weakly
ordered I/O regions to be partially ordered.  Its effects may go beyond the
CPU->Hardware interface and actually affect the hardware at some level.

See the subsection "Locks vs I/O accesses" for more information.


===============================
IMPLICIT KERNEL MEMORY BARRIERS
===============================

Some of the other functions in the Linux kernel imply memory barriers, amongst
which are locking and scheduling functions.

This specification is a _minimum_ guarantee; any particular architecture may
provide more substantial guarantees, but these may not be relied upon outside
of arch specific code.


ACQUIRING FUNCTIONS
-------------------

The Linux kernel has a number of locking constructs:

 (*) spin locks
 (*) R/W spin locks
 (*) mutexes
 (*) semaphores
 (*) R/W semaphores
 (*) RCU

In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations
for each construct.  These operations all imply certain barriers:

 (1) ACQUIRE operation implication:

     Memory operations issued after the ACQUIRE will be completed after the
     ACQUIRE operation has completed.

     Memory operations issued before the ACQUIRE may be completed after
     the ACQUIRE operation has completed.  An smp_mb__before_spinlock(),
     combined with a following ACQUIRE, orders prior loads against
     subsequent loads and stores and also orders prior stores against
     subsequent stores.  Note that this is weaker than smp_mb()!  The
     smp_mb__before_spinlock() primitive is free on many architectures.
     (A sketch of this pattern follows this list.)

 (2) RELEASE operation implication:

     Memory operations issued before the RELEASE will be completed before the
     RELEASE operation has completed.

     Memory operations issued after the RELEASE may be completed before the
     RELEASE operation has completed.

 (3) ACQUIRE vs ACQUIRE implication:

     All ACQUIRE operations issued before another ACQUIRE operation will be
     completed before that ACQUIRE operation.

 (4) ACQUIRE vs RELEASE implication:

     All ACQUIRE operations issued before a RELEASE operation will be
     completed before the RELEASE operation.

 (5) Failed conditional ACQUIRE implication:

     Certain locking variants of the ACQUIRE operation may fail, either due to
     being unable to get the lock immediately, or due to receiving an unblocked
     signal whilst asleep waiting for the lock to become available.  Failed
     locks do not imply any sort of barrier.

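As promised in (1) above, a minimal sketch of the smp_mb__before_spinlock()
pattern (the names are hypothetical):

	ACCESS_ONCE(obj->ready) = 1;	/* prior store */
	smp_mb__before_spinlock();
	spin_lock(&obj->lock);		/* the following ACQUIRE */
	obj->state = OBJ_LIVE;		/* subsequent store, guaranteed to be
					 * seen after the store to obj->ready */
	spin_unlock(&obj->lock);
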
[!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
one-way barriers is that the effects of instructions outside of a critical
section may seep into the inside of the critical section.

An ACQUIRE followed by a RELEASE may not be assumed to be a full memory barrier
because it is possible for an access preceding the ACQUIRE to happen after the
ACQUIRE, and an access following the RELEASE to happen before the RELEASE, and
the two accesses can themselves then cross:

	*A = a;
	ACQUIRE M
	RELEASE M
	*B = b;

may occur as:

	ACQUIRE M, STORE *B, STORE *A, RELEASE M

When the ACQUIRE and RELEASE are a lock acquisition and release,
respectively, this same reordering can occur if the lock's ACQUIRE and
RELEASE are to the same lock variable, but only from the perspective of
another CPU not holding that lock.  In short, an ACQUIRE followed by a
RELEASE may -not- be assumed to be a full memory barrier.

Similarly, the reverse case of a RELEASE followed by an ACQUIRE does not
imply a full memory barrier.  If it is necessary for a RELEASE-ACQUIRE
pair to produce a full barrier, the ACQUIRE can be followed by an
smp_mb__after_unlock_lock() invocation.  This will produce a full barrier
if either (a) the RELEASE and the ACQUIRE are executed by the same
CPU or task, or (b) the RELEASE and ACQUIRE act on the same variable.
The smp_mb__after_unlock_lock() primitive is free on many architectures.
Without smp_mb__after_unlock_lock(), the CPU's execution of the critical
sections corresponding to the RELEASE and the ACQUIRE can cross, so that:

	*A = a;
	RELEASE M
	ACQUIRE N
	*B = b;

could occur as:

	ACQUIRE N, STORE *B, STORE *A, RELEASE M

It might appear that this reordering could introduce a deadlock.
However, this cannot happen because if such a deadlock threatened,
the RELEASE would simply complete, thereby avoiding the deadlock.

	Why does this work?

	One key point is that we are only talking about the CPU doing
	the reordering, not the compiler.  If the compiler (or, for
	that matter, the developer) switched the operations, deadlock
	-could- occur.

	But suppose the CPU reordered the operations.  In this case,
	the unlock precedes the lock in the assembly code.  The CPU
	simply elected to try executing the later lock operation first.
	If there is a deadlock, this lock operation will simply spin (or
	try to sleep, but more on that later).  The CPU will eventually
	execute the unlock operation (which preceded the lock operation
	in the assembly code), which will unravel the potential deadlock,
	allowing the lock operation to succeed.

	But what if the lock is a sleeplock?  In that case, the code will
	try to enter the scheduler, where it will eventually encounter
	a memory barrier, which will force the earlier unlock operation
	to complete, again unraveling the deadlock.  There might be
	a sleep-unlock race, but the locking primitive needs to resolve
	such races properly in any case.

With smp_mb__after_unlock_lock(), the two critical sections cannot overlap.
For example, with the following code, the store to *A will always be
seen by other CPUs before the store to *B:

	*A = a;
	RELEASE M
	ACQUIRE N
	smp_mb__after_unlock_lock();
	*B = b;

The operations will always occur in one of the following orders:

	STORE *A, RELEASE, ACQUIRE, smp_mb__after_unlock_lock(), STORE *B
	STORE *A, ACQUIRE, RELEASE, smp_mb__after_unlock_lock(), STORE *B
	ACQUIRE, STORE *A, RELEASE, smp_mb__after_unlock_lock(), STORE *B

If the RELEASE and ACQUIRE were instead both operating on the same lock
variable, only the first of these alternatives can occur.  In addition,
the more strongly ordered systems may rule out some of the above orders.
But in any case, as noted earlier, the smp_mb__after_unlock_lock()
ensures that the store to *A will always be seen as happening before
the store to *B.

Locks and semaphores may not provide any guarantee of ordering on UP compiled
systems, and so cannot be counted on in such a situation to actually achieve
anything at all - especially with respect to I/O accesses - unless combined
with interrupt disabling operations.

See also the section on "Inter-CPU locking barrier effects".


As an example, consider the following:

	*A = a;
	*B = b;
	ACQUIRE
	*C = c;
	*D = d;
	RELEASE
	*E = e;
	*F = f;

The following sequence of events is acceptable:

	ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE

	[+] Note that {*F,*A} indicates a combined access.

But none of the following are:

	{*F,*A}, *B,	ACQUIRE, *C, *D,	RELEASE, *E
	*A, *B, *C,	ACQUIRE, *D,		RELEASE, *E, *F
	*A, *B,		ACQUIRE, *C,		RELEASE, *D, *E, *F
	*B,		ACQUIRE, *C, *D,	RELEASE, {*F,*A}, *E



INTERRUPT DISABLING FUNCTIONS
-----------------------------

Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
(RELEASE equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided by some
other means.

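For example (a minimal sketch; the structure names are hypothetical),
ordering a normal memory store against a following MMIO store within an
interrupt-disabled section still needs an explicit barrier:

	unsigned long flags;

	local_irq_save(flags);		/* implies only a compiler barrier */
	desc->len = len;		/* normal memory */
	wmb();				/* not implied by the IRQ disable */
	writel(DESC_GO, dev->ctrl);	/* MMIO */
	local_irq_restore(flags);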

SLEEP AND WAKE-UP FUNCTIONS
---------------------------

Sleeping and waking on an event flagged in global data can be viewed as an
interaction between two pieces of data: the task state of the task waiting for
the event and the global data used to indicate the event.  To make sure that
these appear to happen in the right order, the primitives to begin the process
of going to sleep, and the primitives to initiate a wake up imply certain
barriers.

Firstly, the sleeper normally follows something like this sequence of events:

	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (event_indicated)
			break;
		schedule();
	}

A general memory barrier is interpolated automatically by set_current_state()
after it has altered the task state:

	CPU 1
	===============================
	set_current_state();
	  set_mb();
	    STORE current->state
	    <general barrier>
	LOAD event_indicated

set_current_state() may be wrapped by:

	prepare_to_wait();
	prepare_to_wait_exclusive();

which therefore also imply a general memory barrier after setting the state.
The whole sequence above is available in various canned forms, all of which
interpolate the memory barrier in the right place:

	wait_event();
	wait_event_interruptible();
	wait_event_interruptible_exclusive();
	wait_event_interruptible_timeout();
	wait_event_killable();
	wait_event_timeout();
	wait_on_bit();
	wait_on_bit_lock();


Secondly, code that performs a wake up normally follows something like this:

	event_indicated = 1;
	wake_up(&event_wait_queue);

or:

	event_indicated = 1;
	wake_up_process(event_daemon);

A write memory barrier is implied by wake_up() and co. if and only if they wake
something up.  The barrier occurs before the task state is cleared, and so sits
between the STORE to indicate the event and the STORE to set TASK_RUNNING:

	CPU 1				CPU 2
	===============================	===============================
	set_current_state();		STORE event_indicated
	  set_mb();			wake_up();
	    STORE current->state	  <write barrier>
	    <general barrier>		  STORE current->state
	LOAD event_indicated

To repeat, this write memory barrier is present if and only if something
is actually awakened.  To see this, consider the following sequence of
events, where X and Y are both initially zero:

	CPU 1				CPU 2
	===============================	===============================
	X = 1;				STORE event_indicated
	smp_mb();			wake_up();
	Y = 1;				wait_event(wq, Y == 1);
	wake_up();			  load from Y sees 1, no memory barrier
					load from X might see 0

In contrast, if a wakeup does occur, CPU 2's load from X would be guaranteed
to see 1.

The available waker functions include:

	complete();
	wake_up();
	wake_up_all();
	wake_up_bit();
	wake_up_interruptible();
	wake_up_interruptible_all();
	wake_up_interruptible_nr();
	wake_up_interruptible_poll();
	wake_up_interruptible_sync();
	wake_up_interruptible_sync_poll();
	wake_up_locked();
	wake_up_locked_poll();
	wake_up_nr();
	wake_up_poll();
	wake_up_process();


[!] Note that the memory barriers implied by the sleeper and the waker do _not_
order multiple stores before the wake-up with respect to loads of those stored
values after the sleeper has called set_current_state().  For instance, if the
sleeper does:

	set_current_state(TASK_INTERRUPTIBLE);
	if (event_indicated)
		break;
	__set_current_state(TASK_RUNNING);
	do_something(my_data);

and the waker does:

	my_data = value;
	event_indicated = 1;
	wake_up(&event_wait_queue);

there's no guarantee that the change to event_indicated will be perceived by
the sleeper as coming after the change to my_data.  In such a circumstance, the
code on both sides must interpolate its own memory barriers between the
separate data accesses.  Thus the above sleeper ought to do:

	set_current_state(TASK_INTERRUPTIBLE);
	if (event_indicated) {
		smp_rmb();
		do_something(my_data);
	}

and the waker should do:

	my_data = value;
	smp_wmb();
	event_indicated = 1;
	wake_up(&event_wait_queue);


MISCELLANEOUS FUNCTIONS
-----------------------

Other functions that imply barriers:

 (*) schedule() and similar imply full memory barriers.


===================================
INTER-CPU ACQUIRING BARRIER EFFECTS
===================================

On SMP systems locking primitives give a more substantial form of barrier: one
that does affect memory access ordering on other CPUs, within the context of
conflict on any particular lock.


ACQUIRES VS MEMORY ACCESSES
---------------------------

Consider the following: the system has a pair of spinlocks (M) and (Q), and
three CPUs; then should the following sequence of events occur:

	CPU 1				CPU 2
	===============================	===============================
	ACCESS_ONCE(*A) = a;		ACCESS_ONCE(*E) = e;
	ACQUIRE M			ACQUIRE Q
	ACCESS_ONCE(*B) = b;		ACCESS_ONCE(*F) = f;
	ACCESS_ONCE(*C) = c;		ACCESS_ONCE(*G) = g;
	RELEASE M			RELEASE Q
	ACCESS_ONCE(*D) = d;		ACCESS_ONCE(*H) = h;

Then there is no guarantee as to what order CPU 3 will see the accesses to *A
through *H occur in, other than the constraints imposed by the separate locks
on the separate CPUs.  It might, for example, see:

	*E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M

But it won't see any of:

	*B, *C or *D preceding ACQUIRE M
	*A, *B or *C following RELEASE M
	*F, *G or *H preceding ACQUIRE Q
	*E, *F or *G following RELEASE Q


However, if the following occurs:

	CPU 1				CPU 2
	===============================	===============================
	ACCESS_ONCE(*A) = a;
	ACQUIRE M		     [1]
	ACCESS_ONCE(*B) = b;
	ACCESS_ONCE(*C) = c;
	RELEASE M	     [1]
	ACCESS_ONCE(*D) = d;		ACCESS_ONCE(*E) = e;
					ACQUIRE M		     [2]
					smp_mb__after_unlock_lock();
					ACCESS_ONCE(*F) = f;
					ACCESS_ONCE(*G) = g;
					RELEASE M	     [2]
					ACCESS_ONCE(*H) = h;

CPU 3 might see:

	*E, ACQUIRE M [1], *C, *B, *A, RELEASE M [1],
		ACQUIRE M [2], *H, *F, *G, RELEASE M [2], *D

But assuming CPU 1 gets the lock first, CPU 3 won't see any of:

	*B, *C, *D, *F, *G or *H preceding ACQUIRE M [1]
	*A, *B or *C following RELEASE M [1]
	*F, *G or *H preceding ACQUIRE M [2]
	*A, *B, *C, *E, *F or *G following RELEASE M [2]

Note that the smp_mb__after_unlock_lock() is critically important here:
without it, the accesses are not guaranteed to be seen in order unless
CPU 3 itself holds lock M, and CPU 3 might therefore see some of the
orderings ruled out above.


ACQUIRES VS I/O ACCESSES
------------------------

Under certain circumstances (especially involving NUMA), I/O accesses within
two spinlocked sections on two different CPUs may be seen as interleaved by the
PCI bridge, because the PCI bridge does not necessarily participate in the
cache-coherence protocol, and is therefore incapable of issuing the required
read memory barriers.

For example:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	writel(1, DATA);
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					writel(5, DATA);
					spin_unlock(Q);

may be seen by the PCI bridge as follows:

	STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5

which would probably cause the hardware to malfunction.


What is necessary here is to intervene with an mmiowb() before dropping the
spinlock, for example:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	writel(1, DATA);
	mmiowb();
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					writel(5, DATA);
					mmiowb();
					spin_unlock(Q);

this will ensure that the two stores issued on CPU 1 appear at the PCI bridge
before either of the stores issued on CPU 2.


Furthermore, following a store by a load from the same device obviates the need
for the mmiowb(), because the load forces the store to complete before the load
is performed:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	a = readl(DATA);
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					b = readl(DATA);
					spin_unlock(Q);


See Documentation/DocBook/deviceiobook.tmpl for more information.


=================================
WHERE ARE MEMORY BARRIERS NEEDED?
=================================

Under normal operation, memory operation reordering is generally not going to
be a problem as a single-threaded linear piece of code will still appear to
work correctly, even if it's in an SMP kernel.  There are, however, four
circumstances in which reordering definitely _could_ be a problem:

 (*) Interprocessor interaction.

 (*) Atomic operations.

 (*) Accessing devices.

 (*) Interrupts.


INTERPROCESSOR INTERACTION
--------------------------

When there's a system with more than one processor, more than one CPU in the
system may be working on the same data set at the same time.  This can cause
synchronisation problems, and the usual way of dealing with them is to use
locks.  Locks, however, are quite expensive, and so it may be preferable to
operate without the use of a lock if at all possible.  In such a case
operations that affect both CPUs may have to be carefully ordered to prevent
a malfunction.

Consider, for example, the R/W semaphore slow path.  Here a waiting process is
queued on the semaphore, by virtue of it having a piece of its stack linked to
the semaphore's list of waiting processes:

	struct rw_semaphore {
		...
		spinlock_t lock;
		struct list_head waiters;
	};

	struct rwsem_waiter {
		struct list_head list;
		struct task_struct *task;
	};

To wake up a particular waiter, the up_read() or up_write() functions have to:

 (1) read the next pointer from this waiter's record to know where the
     next waiter record is;

 (2) read the pointer to the waiter's task structure;

 (3) clear the task pointer to tell the waiter it has been given the semaphore;

 (4) call wake_up_process() on the task; and

 (5) release the reference held on the waiter's task struct.

In other words, it has to perform this sequence of events:

	LOAD waiter->list.next;
	LOAD waiter->task;
	STORE waiter->task;
	CALL wakeup
	RELEASE task

and if any of these steps occur out of order, then the whole thing may
malfunction.

Once it has queued itself and dropped the semaphore lock, the waiter does not
get the lock again; it instead just waits for its task pointer to be cleared
before proceeding.  Since the record is on the waiter's stack, this means that
if the task pointer is cleared _before_ the next pointer in the list is read,
another CPU might start processing the waiter and might clobber the waiter's
stack before the up*() function has a chance to read the next pointer.

Consider then what might happen to the above sequence of events:

	CPU 1				CPU 2
	===============================	===============================
					down_xxx()
					Queue waiter
					Sleep
	up_yyy()
	LOAD waiter->task;
	STORE waiter->task;
					Woken up by other event
	<preempt>
					Resume processing
					down_xxx() returns
					call foo()
					foo() clobbers *waiter
	</preempt>
	LOAD waiter->list.next;
	--- OOPS ---

This could be dealt with using the semaphore lock, but then the down_xxx()
function has to needlessly get the spinlock again after being woken up.

The way to deal with this is to insert a general SMP memory barrier:

	LOAD waiter->list.next;
	LOAD waiter->task;
	smp_mb();
	STORE waiter->task;
	CALL wakeup
	RELEASE task

In this case, the barrier makes a guarantee that all memory accesses before the
barrier will appear to happen before all the memory accesses after the barrier
with respect to the other CPUs on the system.  It does _not_ guarantee that all
the memory accesses before the barrier will be complete by the time the barrier
instruction itself is complete.

On a UP system - where this wouldn't be a problem - the smp_mb() is just a
compiler barrier, thus making sure the compiler emits the instructions in the
right order without actually intervening in the CPU.  Since there's only one
CPU, that CPU's dependency ordering logic will take care of everything else.


ATOMIC OPERATIONS
-----------------

Whilst they are technically interprocessor interaction considerations, atomic
operations are noted specially as some of them imply full memory barriers and
some don't, but they're very heavily relied on as a group throughout the
kernel.

Any atomic operation that modifies some state in memory and returns information
about the state (old or new) implies an SMP-conditional general memory barrier
(smp_mb()) on each side of the actual operation (with the exception of
explicit lock operations, described later).  These include:

	xchg();
	cmpxchg();
	atomic_xchg();			atomic_long_xchg();
	atomic_cmpxchg();		atomic_long_cmpxchg();
	atomic_inc_return();		atomic_long_inc_return();
	atomic_dec_return();		atomic_long_dec_return();
	atomic_add_return();		atomic_long_add_return();
	atomic_sub_return();		atomic_long_sub_return();
	atomic_inc_and_test();		atomic_long_inc_and_test();
	atomic_dec_and_test();		atomic_long_dec_and_test();
	atomic_sub_and_test();		atomic_long_sub_and_test();
	atomic_add_negative();		atomic_long_add_negative();
	test_and_set_bit();
	test_and_clear_bit();
	test_and_change_bit();

	/* when succeeds (returns 1) */
	atomic_add_unless();		atomic_long_add_unless();

These are used for such things as implementing ACQUIRE-class and RELEASE-class
operations and adjusting reference counters towards object destruction, and as
such the implicit memory barrier effects are necessary.


The following operations are potential problems as they do _not_ imply memory
barriers, but might be used for implementing such things as RELEASE-class
operations:

	atomic_set();
	set_bit();
	clear_bit();
	change_bit();

With these the appropriate explicit memory barrier should be used if necessary
(smp_mb__before_atomic() for instance).

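For example (a minimal sketch; the flag and field names are hypothetical),
a RELEASE-class use of clear_bit() requires a barrier in front of the bitop:

	obj->result = res;		/* must be visible before the bit
					 * is seen to clear */
	smp_mb__before_atomic();
	clear_bit(OBJ_BUSY, &obj->flags);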

The following also do _not_ imply memory barriers, and so may require explicit
memory barriers under some circumstances (smp_mb__before_atomic() for
instance):

	atomic_add();
	atomic_sub();
	atomic_inc();
	atomic_dec();

If they're used for statistics generation, then they probably don't need memory
barriers, unless there's a coupling between statistical data.

If they're used for reference counting on an object to control its lifetime,
they probably don't need memory barriers because either the reference count
will be adjusted inside a locked section, or the caller will already hold
sufficient references to make the lock, and thus a memory barrier, unnecessary.

If they're used for constructing a lock of some description, then they probably
do need memory barriers as a lock primitive generally has to do things in a
specific order.

Basically, each usage case has to be carefully considered as to whether memory
barriers are needed or not.

The following operations are special locking primitives:

	test_and_set_bit_lock();
	clear_bit_unlock();
	__clear_bit_unlock();

These implement ACQUIRE-class and RELEASE-class operations.  These should be
used in preference to other operations when implementing locking primitives,
because their implementations can be optimised on many architectures.

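As an illustrative sketch, a simple bit lock might be built on these (the
helpers are hypothetical, not an existing kernel API):

	static inline void obj_lock(unsigned long *flags)
	{
		while (test_and_set_bit_lock(0, flags))	/* ACQUIRE */
			cpu_relax();
	}

	static inline void obj_unlock(unsigned long *flags)
	{
		clear_bit_unlock(0, flags);		/* RELEASE */
	}
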
[!] Note that special memory barrier primitives are available for these
situations because on some CPUs the atomic instructions used imply full memory
barriers, and so barrier instructions are superfluous in conjunction with them,
and in such cases the special barrier primitives will be no-ops.

See Documentation/atomic_ops.txt for more information.


ACCESSING DEVICES
-----------------

Many devices can be memory mapped, and so appear to the CPU as if they're just
a set of memory locations.  To control such a device, the driver usually has to
make the right memory accesses in exactly the right order.

However, having a clever CPU or a clever compiler creates a potential problem
in that the carefully sequenced accesses in the driver code won't reach the
device in the requisite order if the CPU or the compiler thinks it is more
efficient to reorder, combine or merge accesses - something that would cause
the device to malfunction.

Inside of the Linux kernel, I/O should be done through the appropriate accessor
routines - such as inb() or writel() - which know how to make such accesses
appropriately sequential.  Whilst this, for the most part, renders the explicit
use of memory barriers unnecessary, there are a couple of situations where they
might be needed:

 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and
     so for _all_ general drivers locks should be used and mmiowb() must be
     issued prior to unlocking the critical section.

 (2) If the accessor functions are used to refer to an I/O memory window with
     relaxed memory access properties, then _mandatory_ memory barriers are
     required to enforce ordering.

See Documentation/DocBook/deviceiobook.tmpl for more information.


INTERRUPTS
----------

A driver may be interrupted by its own interrupt service routine, and thus the
two parts of the driver may interfere with each other's attempts to control or
access the device.

This may be alleviated - at least in part - by disabling local interrupts (a
form of locking), such that the critical operations are all contained within
the interrupt-disabled section in the driver.  Whilst the driver's interrupt
routine is executing, the driver's core may not run on the same CPU, and its
interrupt is not permitted to happen again until the current interrupt has been
handled, thus the interrupt handler does not need to lock against that.

However, consider a driver that was talking to an ethernet card that sports an
address register and a data register.  If that driver's core talks to the card
under interrupt-disablement and then the driver's interrupt handler is invoked:

	LOCAL IRQ DISABLE
	writew(3, ADDR);
	writew(y, DATA);
	LOCAL IRQ ENABLE
	<interrupt>
	writew(4, ADDR);
	q = readw(DATA);
	</interrupt>

The store to the data register might happen after the second store to the
address register if ordering rules are sufficiently relaxed:

	STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA


If ordering rules are relaxed, it must be assumed that accesses done inside an
interrupt disabled section may leak outside of it and may interleave with
accesses performed in an interrupt - and vice versa - unless implicit or
explicit barriers are used.

Normally this won't be a problem because the I/O accesses done inside such
sections will include synchronous load operations on strictly ordered I/O
registers that form implicit I/O barriers.  If this isn't sufficient then an
mmiowb() may need to be used explicitly.


A similar situation may occur between an interrupt routine and two routines
running on separate CPUs that communicate with each other.  If such a case is
likely, then interrupt-disabling locks should be used to guarantee ordering.

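For instance (a minimal sketch; the structure names are hypothetical), the
ethernet-card example above could use an interrupt-disabling lock shared
with its interrupt handler:

	/* process context */
	spin_lock_irq(&card->lock);
	writew(3, card->addr_reg);
	writew(y, card->data_reg);
	spin_unlock_irq(&card->lock);

	/* the interrupt handler - possibly running on another CPU -
	 * takes the same lock with spin_lock() before touching the
	 * address and data registers */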

==========================
KERNEL I/O BARRIER EFFECTS
==========================

When accessing I/O memory, drivers should use the appropriate accessor
functions:

 (*) inX(), outX():

     These are intended to talk to I/O space rather than memory space, but
     that's primarily a CPU-specific concept.  The i386 and x86_64 processors
     do indeed have special I/O space access cycles and instructions, but many
     CPUs don't have such a concept.

     The PCI bus, amongst others, defines an I/O space concept which - on such
     CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O
     space.  However, it may also be mapped as a virtual I/O space in the CPU's
     memory map, particularly on those CPUs that don't support alternate I/O
     spaces.

     Accesses to this space may be fully synchronous (as on i386), but
     intermediary bridges (such as the PCI host bridge) may not fully honour
     that.

     They are guaranteed to be fully ordered with respect to each other.

     They are not guaranteed to be fully ordered with respect to other types of
     memory and I/O operation.

 (*) readX(), writeX():

     Whether these are guaranteed to be fully ordered and uncombined with
     respect to each other on the issuing CPU depends on the characteristics
     defined for the memory window through which they're accessing.  On later
     i386 architecture machines, for example, this is controlled by way of the
     MTRR registers.

     Ordinarily, these will be guaranteed to be fully ordered and uncombined,
     provided they're not accessing a prefetchable device.

     However, intermediary hardware (such as a PCI bridge) may indulge in
     deferral if it so wishes; to flush a store, a load from the same location
     is preferred[*], but a load from the same device or from configuration
     space should suffice for PCI.

     [*] NOTE! attempting to load from the same location as was written to may
	 cause a malfunction - consider the 16550 Rx/Tx serial registers for
	 example.

     Used with prefetchable I/O memory, an mmiowb() barrier may be required to
     force stores to be ordered.

     Please refer to the PCI specification for more information on interactions
     between PCI transactions.

 (*) readX_relaxed(), writeX_relaxed()

     These are similar to readX() and writeX(), but provide weaker memory
     ordering guarantees.  Specifically, they do not guarantee ordering with
     respect to normal memory accesses (e.g. DMA buffers) nor do they guarantee
     ordering with respect to LOCK or UNLOCK operations.  If the latter is
     required, an mmiowb() barrier can be used.  Note that relaxed accesses to
     the same peripheral are guaranteed to be ordered with respect to each
     other.

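     For example (a minimal sketch; the register offsets are hypothetical):

	/* two counters on the same peripheral: relaxed accesses are
	 * still ordered with respect to each other */
	hi = readl_relaxed(base + CNT_HI);
	lo = readl_relaxed(base + CNT_LO);

	/* but use readl() where the read must also be ordered against
	 * a DMA buffer in normal memory */
	if (readl(base + STATUS) & DONE)
		len = desc->len;	/* ordered after the readl() */
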
 (*) ioreadX(), iowriteX()

     These will perform appropriately for the type of access they're actually
     doing, be it inX()/outX() or readX()/writeX().


========================================
ASSUMED MINIMUM EXECUTION ORDERING MODEL
========================================

It has to be assumed that the conceptual CPU is weakly-ordered but that it will
maintain the appearance of program causality with respect to itself.  Some CPUs
(such as i386 or x86_64) are more constrained than others (such as powerpc or
frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
of arch-specific code.

This means that it must be considered that the CPU will execute its instruction
stream in any order it feels like - or even in parallel - provided that if an
instruction in the stream depends on an earlier instruction, then that
earlier instruction must be sufficiently complete[*] before the later
instruction may proceed; in other words: provided that the appearance of
causality is maintained.

 [*] Some instructions have more than one effect - such as changing the
     condition codes, changing registers or changing memory - and different
     instructions may depend on different effects.

A CPU may also discard any instruction sequence that winds up having no
ultimate effect.  For example, if two adjacent instructions both load an
immediate value into the same register, the first may be discarded.


Similarly, it has to be assumed that the compiler might reorder the instruction
stream in any way it sees fit, again provided the appearance of causality is
maintained.


============================
THE EFFECTS OF THE CPU CACHE
============================

The way cached memory operations are perceived across the system is affected to
a certain extent by the caches that lie between CPUs and memory, and by the
memory coherence system that maintains the consistency of state in the system.

As far as the way a CPU interacts with another part of the system through the
caches goes, the memory system has to include the CPU's caches, and memory
barriers for the most part act at the interface between the CPU and its cache
(memory barriers logically act on the dotted line in the following diagram):

	    <--- CPU --->         :       <----------- Memory ----------->
	                          :
	+--------+    +--------+  :   +--------+    +-----------+
	|        |    |        |  :   |        |    |           |    +--------+
	|  CPU   |    | Memory |  :   | CPU    |    |           |    |        |
	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
	|        |    | Queue  |  :   |        |    |           |--->| Memory |
	|        |    |        |  :   |        |    |           |    |        |
	+--------+    +--------+  :   +--------+    |           |    |        |
	                          :                 | Cache     |    +--------+
	                          :                 | Coherency |
	                          :                 | Mechanism |    +--------+
	+--------+    +--------+  :   +--------+    |           |    |        |
	|        |    |        |  :   |        |    |           |    |        |
	|  CPU   |    | Memory |  :   | CPU    |    |           |--->| Device |
	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
	|        |    | Queue  |  :   |        |    |           |    |        |
	|        |    |        |  :   |        |    |           |    +--------+
	+--------+    +--------+  :   +--------+    +-----------+
	                          :
	                          :

Although any particular load or store may not actually appear outside of the
CPU that issued it since it may have been satisfied within the CPU's own cache,
it will still appear as if the full memory access had taken place as far as the
other CPUs are concerned since the cache coherency mechanisms will migrate the
cacheline over to the accessing CPU and propagate the effects upon conflict.

The CPU core may execute instructions in any order it deems fit, provided the
expected program causality appears to be maintained.  Some of the instructions
generate load and store operations which then go into the queue of memory
accesses to be performed.  The core may place these in the queue in any order
it wishes, and continue execution until it is forced to wait for an instruction
to complete.

What memory barriers are concerned with is controlling the order in which
accesses cross from the CPU side of things to the memory side of things, and
the order in which the effects are perceived to happen by the other observers
in the system.

[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
their own loads and stores as if they had happened in program order.

[!] MMIO or other device accesses may bypass the cache system.  This depends on
the properties of the memory window through which devices are accessed and/or
the use of any special device communication instructions the CPU may have.


CACHE COHERENCY
---------------

Life isn't quite as simple as it may appear above, however: for while the
caches are expected to be coherent, there's no guarantee that that coherency
will be ordered.  This means that whilst changes made on one CPU will
eventually become visible on all CPUs, there's no guarantee that they will
become apparent in the same order on those other CPUs.


Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):

	            :
	            :                          +--------+
	            :      +---------+         |        |
	+--------+  : +--->| Cache A |<------->|        |
	|        |  : |    +---------+         |        |
	|  CPU 1 |<---+                        |        |
	|        |  : |    +---------+         |        |
	+--------+  : +--->| Cache B |<------->|        |
	            :      +---------+         |        |
	            :                          | Memory |
	            :      +---------+         | System |
	+--------+  : +--->| Cache C |<------->|        |
	|        |  : |    +---------+         |        |
	|  CPU 2 |<---+                        |        |
	|        |  : |    +---------+         |        |
	+--------+  : +--->| Cache D |<------->|        |
	            :      +---------+         |        |
	            :                          +--------+
	            :

Imagine the system has the following properties:

 (*) an odd-numbered cache line may be in cache A, cache C or it may still be
     resident in memory;

 (*) an even-numbered cache line may be in cache B, cache D or it may still be
     resident in memory;

 (*) whilst the CPU core is interrogating one cache, the other cache may be
     making use of the bus to access the rest of the system - perhaps to
     displace a dirty cacheline or to do a speculative load;

 (*) each cache has a queue of operations that need to be applied to that cache
     to maintain coherency with the rest of the system;

 (*) the coherency queue is not flushed by normal loads to lines already
     present in the cache, even though the contents of the queue may
     potentially affect those loads.

Imagine, then, that two writes are made on the first CPU, with a write barrier
between them to guarantee that they will appear to reach that CPU's caches in
the requisite order:

	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
					u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();			Make sure change to v is visible before
					 change to p
	<A:modify v=2>			v is now in cache A exclusively
	p = &v;
	<B:modify p=&v>			p is now in cache B exclusively

The write memory barrier forces the other CPUs in the system to perceive that
the local CPU's caches have apparently been updated in the correct order.  But
now imagine that the second CPU wants to read those values:

	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
	...
			q = p;
			x = *q;

The above pair of reads may then fail to happen in the expected order, as the
cacheline holding p may get updated in one of the second CPU's caches whilst
the update to the cacheline holding v is delayed in the other of the second
CPU's caches by some other cache event:

	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
					u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();
	<A:modify v=2>	<C:busy>
			<C:queue v=2>
	p = &v;		q = p;
			<D:request p>
	<B:modify p=&v>	<D:commit p=&v>
			<D:read p>
			x = *q;
			<C:read *q>	Reads from v before v updated in cache
			<C:unbusy>
			<C:commit v=2>

Basically, whilst both cachelines will be updated on CPU 2 eventually, there's
no guarantee that, without intervention, the order of update will be the same
as that committed on CPU 1.


To intervene, we need to interpolate a data dependency barrier or a read
barrier between the loads.  This will force the cache to commit its coherency
queue before processing any further requests:

	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
					u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();
	<A:modify v=2>	<C:busy>
			<C:queue v=2>
	p = &v;		q = p;
			<D:request p>
	<B:modify p=&v>	<D:commit p=&v>
			<D:read p>
			smp_read_barrier_depends()
			<C:unbusy>
			<C:commit v=2>
			x = *q;
			<C:read *q>	Reads from v after v updated in cache

2819
2820This sort of problem can be encountered on DEC Alpha processors as they have a
2821split cache that improves performance by making better use of the data bus.
2822Whilst most CPUs do imply a data dependency barrier on the read when a memory
2823access depends on a read, not all do, so it may not be relied on.
2824
2825Other CPUs may also have split caches, but must coordinate between the various
2826cachelets for normal memory accesses.  The semantics of the Alpha removes the
2827need for coordination in the absence of memory barriers.
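
As a rough C sketch of the pattern in the tables above - a minimal
illustration only, assuming the shared variables u, v and p from the example,
with writer() running on CPU 1 and reader() on CPU 2; real kernel code would
more usually use rcu_dereference() for the pointer load:

	int u = 0;
	int v = 1;
	int *p = &u;

	void writer(void)			/* CPU 1 */
	{
		v = 2;
		smp_wmb();			/* commit v before publishing p */
		ACCESS_ONCE(p) = &v;
	}

	void reader(void)			/* CPU 2 */
	{
		int *q = ACCESS_ONCE(p);
		int x;

		smp_read_barrier_depends();	/* flush the coherency queue */
		x = *q;				/* sees v == 2 if q == &v */
	}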


CACHE COHERENCY VS DMA
----------------------

Not all systems maintain cache coherency with respect to devices doing DMA.  In
such cases, a device attempting DMA may obtain stale data from RAM because
dirty cache lines may be resident in the caches of various CPUs, and may not
have been written back to RAM yet.  To deal with this, the appropriate part of
the kernel must flush the overlapping bits of cache on each CPU (and maybe
invalidate them as well).

In addition, the data DMA'd to RAM by a device may be overwritten by dirty
cache lines being written back to RAM from a CPU's cache after the device has
installed its own data, or cache lines present in the CPU's cache may simply
obscure the fact that RAM has been updated, until such time as the cacheline
is discarded from the CPU's cache and reloaded.  To deal with this, the
appropriate part of the kernel must invalidate the overlapping bits of the
cache on each CPU.
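
In practice, most drivers leave this cache maintenance to the streaming DMA
mapping API, which performs the necessary flushes and invalidations on
non-coherent systems.  A minimal sketch, assuming a hypothetical device dev,
buffer buf and length len:

	dma_addr_t handle;

	/* Flushes the CPU's caches so the device reads up-to-date data */
	handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, handle))
		return -ENOMEM;

	/* ... point the device at handle and start the transfer ... */

	/* For DMA_FROM_DEVICE mappings, the unmap (or a
	 * dma_sync_single_for_cpu() call) does the invalidation before
	 * the CPU reads the buffer */
	dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);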

See Documentation/cachetlb.txt for more information on cache management.


CACHE COHERENCY VS MMIO
-----------------------

Memory-mapped I/O usually takes place through memory locations that are part
of a window in the CPU's memory space that is assigned different properties
from those of the window directed at ordinary RAM.

Amongst these properties is usually the fact that such accesses bypass the
caching entirely and go directly to the device buses.  This means MMIO accesses
may, in effect, overtake accesses to cached memory that were emitted earlier.
A memory barrier isn't sufficient in such a case, but rather the cache must be
flushed between the cached memory write and the MMIO access if the two are in
any way dependent.
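
As a hedged illustration, a driver with a descriptor in cacheable RAM and a
doorbell register in an MMIO window might need a sequence like the following;
the device, mapping and register names here are hypothetical:

	/* Fill in the descriptor in cacheable RAM */
	desc->addr = buf_dma;
	desc->len  = buf_len;

	/* Force the descriptor out of the CPU's cache... */
	dma_sync_single_for_device(dev, desc_dma, sizeof(*desc),
				   DMA_TO_DEVICE);

	/* ...before the uncached MMIO write that makes the device fetch it */
	writel(DOORBELL_KICK, regs + DOORBELL_OFFSET);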


=========================
THE THINGS CPUS GET UP TO
=========================

A programmer might take it for granted that the CPU will perform memory
operations in exactly the order specified, so that if the CPU is, for example,
given the following piece of code to execute:

	a = ACCESS_ONCE(*A);
	ACCESS_ONCE(*B) = b;
	c = ACCESS_ONCE(*C);
	d = ACCESS_ONCE(*D);
	ACCESS_ONCE(*E) = e;

they would then expect that the CPU will complete the memory operation for each
instruction before moving on to the next one, leading to a definite sequence of
operations as seen by external observers in the system:

	LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.


Reality is, of course, much messier.  With many CPUs and compilers, the above
assumption doesn't hold because:

 (*) loads are more likely to need to be completed immediately to permit
     execution progress, whereas stores can often be deferred without a
     problem;

 (*) loads may be done speculatively, and the result discarded should it prove
     to have been unnecessary;

 (*) loads may be done speculatively, leading to the result having been fetched
     at the wrong time in the expected sequence of events;

 (*) the order of the memory accesses may be rearranged to promote better use
     of the CPU buses and caches;

 (*) loads and stores may be combined to improve performance when talking to
     memory or I/O hardware that can do batched accesses of adjacent locations,
     thus cutting down on transaction setup costs (memory and PCI devices may
     both be able to do this); and

 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
     mechanisms may alleviate this - once the store has actually hit the cache
     - there's no guarantee that the coherency management will be propagated in
     order to other CPUs.

So what another CPU, say, might actually observe from the above piece of code
is:

	LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B

	(Where "LOAD {*C,*D}" is a combined load)


However, it is guaranteed that a CPU will be self-consistent: it will see its
_own_ accesses appear to be correctly ordered, without the need for a memory
barrier.  For instance with the following code:

	U = ACCESS_ONCE(*A);
	ACCESS_ONCE(*A) = V;
	ACCESS_ONCE(*A) = W;
	X = ACCESS_ONCE(*A);
	ACCESS_ONCE(*A) = Y;
	Z = ACCESS_ONCE(*A);

and assuming no intervention by an external influence, the final result will
appear to be:

	U == the original value of *A
	X == W
	Z == Y
	*A == Y

The code above may cause the CPU to generate the full sequence of memory
accesses:

	U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A

in that order, but, without intervention, the sequence may have almost any
combination of elements combined or discarded, provided the program's view of
the world remains consistent.  Note that ACCESS_ONCE() is -not- optional
in the above example, as there are architectures where a given CPU might
reorder successive loads to the same location.  On such architectures,
ACCESS_ONCE() does whatever is necessary to prevent this; on Itanium, for
example, the volatile casts used by ACCESS_ONCE() cause GCC to emit the
special ld.acq and st.rel instructions that prevent such reordering.
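
For reference, the kernel defines ACCESS_ONCE() in include/linux/compiler.h
as nothing more than a volatile cast; any extra ordering, such as Itanium's
ld.acq and st.rel, comes from the architecture's semantics for volatile
accesses:

	#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))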

The compiler may also combine, discard or defer elements of the sequence before
the CPU even sees them.

For instance:

	*A = V;
	*A = W;

may be reduced to:

	*A = W;

since, without either a write barrier or an ACCESS_ONCE(), the compiler may
assume that the effect of storing V to *A is lost.  Similarly:

	*A = Y;
	Z = *A;

may, without a memory barrier or an ACCESS_ONCE(), be reduced to:

	*A = Y;
	Z = Y;

and the LOAD operation need never appear outside of the CPU at all.
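
Where both accesses genuinely matter - for instance, because another CPU may
be watching for the intermediate value - wrapping them in ACCESS_ONCE()
forbids the compiler from merging or discarding them:

	ACCESS_ONCE(*A) = V;
	ACCESS_ONCE(*A) = W;	/* both stores must be emitted */

	ACCESS_ONCE(*A) = Y;
	Z = ACCESS_ONCE(*A);	/* the load must actually be performed */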


AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to have
two semantically-related cache lines updated at separate times.  This is where
the data dependency barrier really becomes necessary, as it synchronises both
halves of the cache with the memory coherence system, making it appear that a
pointer update and the new data it points to become visible in the correct
order.

The Alpha defines the Linux kernel's memory barrier model.

See the subsection on "Cache Coherency" above.


============
EXAMPLE USES
============

CIRCULAR BUFFERS
----------------

Memory barriers can be used to implement circular buffering without the need
of a lock to serialise the producer with the consumer.  See:

	Documentation/circular-buffers.txt

for details.
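
As a minimal, hedged sketch of the idea - the file above gives the
authoritative recipe; the structure and function names here are hypothetical,
and a power-of-two size plus a single producer and single consumer are
assumed:

	#include <linux/circ_buf.h>

	#define RING_SIZE 256			/* must be a power of two */

	struct ring {
		unsigned long head;		/* written only by the producer */
		unsigned long tail;		/* written only by the consumer */
		int items[RING_SIZE];
	};

	int produce(struct ring *r, int item)	/* producer CPU */
	{
		unsigned long head = r->head;
		unsigned long tail = ACCESS_ONCE(r->tail);

		if (!CIRC_SPACE(head, tail, RING_SIZE))
			return -EAGAIN;
		r->items[head & (RING_SIZE - 1)] = item;
		smp_wmb();		/* commit the item before moving head */
		r->head = head + 1;
		return 0;
	}

	int consume(struct ring *r, int *item)	/* consumer CPU */
	{
		unsigned long head = ACCESS_ONCE(r->head);
		unsigned long tail = r->tail;

		if (!CIRC_CNT(head, tail, RING_SIZE))
			return -EAGAIN;
		smp_rmb();		/* read the index before the item */
		*item = r->items[tail & (RING_SIZE - 1)];
		smp_mb();		/* finish the read before freeing the slot */
		r->tail = tail + 1;
		return 0;
	}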


==========
REFERENCES
==========

Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
Digital Press)
	Chapter 5.2: Physical Address Space Characteristics
	Chapter 5.4: Caches and Write Buffers
	Chapter 5.5: Data Sharing
	Chapter 5.6: Read/Write Ordering

AMD64 Architecture Programmer's Manual Volume 2: System Programming
	Chapter 7.1: Memory-Access Ordering
	Chapter 7.4: Buffering and Combining Memory Writes

IA-32 Intel Architecture Software Developer's Manual, Volume 3:
System Programming Guide
	Chapter 7.1: Locked Atomic Operations
	Chapter 7.2: Memory Ordering
	Chapter 7.4: Serializing Instructions

The SPARC Architecture Manual, Version 9
	Chapter 8: Memory Models
	Appendix D: Formal Specification of the Memory Models
	Appendix J: Programming with the Memory Models

UltraSPARC Programmer Reference Manual
	Chapter 5: Memory Accesses and Cacheability
	Chapter 15: Sparc-V9 Memory Models

UltraSPARC III Cu User's Manual
	Chapter 9: Memory Models

UltraSPARC IIIi Processor User's Manual
	Chapter 8: Memory Models

UltraSPARC Architecture 2005
	Chapter 9: Memory
	Appendix D: Formal Specifications of the Memory Models

UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
	Chapter 8: Memory Models
	Appendix F: Caches and Cache Coherency

Solaris Internals, Core Kernel Architecture, p63-68:
	Chapter 3.3: Hardware Considerations for Locks and
			Synchronization

Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
for Kernel Programmers:
	Chapter 13: Other Memory Models

Intel Itanium Architecture Software Developer's Manual: Volume 1:
	Section 2.6: Speculation
	Section 4.4: Memory Access