			 ============================
			 LINUX KERNEL MEMORY BARRIERS
			 ============================

By: David Howells <[email protected]>
    Paul E. McKenney <[email protected]>

Contents:

 (*) Abstract memory access model.

     - Device operations.
     - Guarantees.

 (*) What are memory barriers?

     - Varieties of memory barrier.
     - What may not be assumed about memory barriers?
     - Data dependency barriers.
     - Control dependencies.
     - SMP barrier pairing.
     - Examples of memory barrier sequences.
     - Read memory barriers vs load speculation.
     - Transitivity

 (*) Explicit kernel barriers.

     - Compiler barrier.
     - CPU memory barriers.
     - MMIO write barrier.

 (*) Implicit kernel memory barriers.

     - Locking functions.
     - Interrupt disabling functions.
     - Sleep and wake-up functions.
     - Miscellaneous functions.

 (*) Inter-CPU locking barrier effects.

     - Locks vs memory accesses.
     - Locks vs I/O accesses.

 (*) Where are memory barriers needed?

     - Interprocessor interaction.
     - Atomic operations.
     - Accessing devices.
     - Interrupts.

 (*) Kernel I/O barrier effects.

 (*) Assumed minimum execution ordering model.

 (*) The effects of the cpu cache.

     - Cache coherency.
     - Cache coherency vs DMA.
     - Cache coherency vs MMIO.

 (*) The things CPUs get up to.

     - And then there's the Alpha.

 (*) Example uses.

     - Circular buffers.

 (*) References.


============================
ABSTRACT MEMORY ACCESS MODEL
============================

Consider the following abstract model of the system:

		            :                :
		            :                :
		            :                :
		+-------+   :   +--------+   :   +-------+
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		| CPU 1 |<----->| Memory |<----->| CPU 2 |
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		+-------+   :   +--------+   :   +-------+
		    ^       :       ^        :       ^
		    |       :       |        :       |
		    |       :       |        :       |
		    |       :       v        :       |
		    |       :   +--------+   :       |
		    |       :   |        |   :       |
		    |       :   |        |   :       |
		    +---------->| Device |<----------+
		            :   |        |   :
		            :   |        |   :
		            :   +--------+   :
		            :                :

Each CPU executes a program that generates memory access operations.  In the
abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
perform the memory operations in any order it likes, provided program causality
appears to be maintained.  Similarly, the compiler may also arrange the
instructions it emits in any order it likes, provided it doesn't affect the
apparent operation of the program.

So in the above diagram, the effects of the memory operations performed by a
CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and rest of the system (the dotted lines).


For example, consider the following sequence of events:

	CPU 1			CPU 2
	===============		===============
	{ A == 1; B == 2 }
	A = 3;			x = B;
	B = 4;			y = A;

The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:

	STORE A=3,	STORE B=4,	y=LOAD A->3,	x=LOAD B->4
	STORE A=3,	STORE B=4,	x=LOAD B->4,	y=LOAD A->3
	STORE A=3,	y=LOAD A->3,	STORE B=4,	x=LOAD B->4
	STORE A=3,	y=LOAD A->3,	x=LOAD B->2,	STORE B=4
	STORE A=3,	x=LOAD B->2,	STORE B=4,	y=LOAD A->3
	STORE A=3,	x=LOAD B->2,	y=LOAD A->3,	STORE B=4
	STORE B=4,	STORE A=3,	y=LOAD A->3,	x=LOAD B->4
	STORE B=4, ...
	...

and can thus result in four different combinations of values:

	x == 2, y == 1
	x == 2, y == 3
	x == 4, y == 1
	x == 4, y == 3


Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.

As a further example, consider this sequence of events:

	CPU 1			CPU 2
	===============		===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;			Q = P;
	P = &B;			D = *Q;

There is an obvious data dependency here, as the value loaded into D depends on
the address retrieved from P by CPU 2.  At the end of the sequence, any of the
following results are possible:

	(Q == &A) and (D == 1)
	(Q == &B) and (D == 2)
	(Q == &B) and (D == 4)

Note that CPU 2 will never try and load C into D because the CPU will load P
into Q before issuing the load of *Q.


DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
locations, but the order in which the control registers are accessed is very
important.  For instance, imagine an ethernet card with a set of internal
registers that are accessed through an address port register (A) and a data
port register (D).  To read internal register 5, the following code might then
be used:

	*A = 5;
	x = *D;

but this might show up as either of the following two sequences:

	STORE *A = 5, x = LOAD *D
	x = LOAD *D, STORE *A = 5

the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.
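
As an aside, the kernel's MMIO accessors can express this access pattern.  The
following is a minimal sketch only: the device, the 'regs' mapping and the
EX_* register offsets are hypothetical, and it assumes the common property
that readl() and writel() to the same device are ordered with respect to each
other:

	#define EX_ADDR_PORT	0x00	/* hypothetical address port offset */
	#define EX_DATA_PORT	0x04	/* hypothetical data port offset */

	static u32 ex_read_internal_reg(void __iomem *regs, u32 reg)
	{
		writel(reg, regs + EX_ADDR_PORT);  /* select internal register */
		return readl(regs + EX_DATA_PORT); /* then read its contents */
	}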
188 | ||
189 | GUARANTEES | |
190 | ---------- | |
191 | ||
192 | There are some minimal guarantees that may be expected of a CPU: | |
193 | ||
194 | (*) On any given CPU, dependent memory accesses will be issued in order, with | |
195 | respect to itself. This means that for: | |
196 | ||
2ecf8101 | 197 | ACCESS_ONCE(Q) = P; smp_read_barrier_depends(); D = ACCESS_ONCE(*Q); |
108b42b4 DH |
198 | |
199 | the CPU will issue the following memory operations: | |
200 | ||
201 | Q = LOAD P, D = LOAD *Q | |
202 | ||
2ecf8101 PM |
203 | and always in that order. On most systems, smp_read_barrier_depends() |
204 | does nothing, but it is required for DEC Alpha. The ACCESS_ONCE() | |
205 | is required to prevent compiler mischief. Please note that you | |
206 | should normally use something like rcu_dereference() instead of | |
207 | open-coding smp_read_barrier_depends(). | |
108b42b4 DH |
208 | |
209 | (*) Overlapping loads and stores within a particular CPU will appear to be | |
210 | ordered within that CPU. This means that for: | |
211 | ||
2ecf8101 | 212 | a = ACCESS_ONCE(*X); ACCESS_ONCE(*X) = b; |
108b42b4 DH |
213 | |
214 | the CPU will only issue the following sequence of memory operations: | |
215 | ||
216 | a = LOAD *X, STORE *X = b | |
217 | ||
218 | And for: | |
219 | ||
2ecf8101 | 220 | ACCESS_ONCE(*X) = c; d = ACCESS_ONCE(*X); |
108b42b4 DH |
221 | |
222 | the CPU will only issue: | |
223 | ||
224 | STORE *X = c, d = LOAD *X | |
225 | ||
fa00e7e1 | 226 | (Loads and stores overlap if they are targeted at overlapping pieces of |
108b42b4 DH |
227 | memory). |

And there are a number of things that _must_ or _must_not_ be assumed:

 (*) It _must_not_ be assumed that the compiler will do what you want with
     memory references that are not protected by ACCESS_ONCE().  Without
     ACCESS_ONCE(), the compiler is within its rights to do all sorts
     of "creative" transformations, which are covered in the Compiler
     Barrier section.  (A sketch of how ACCESS_ONCE() itself is defined
     appears at the end of this section.)

 (*) It _must_not_ be assumed that independent loads and stores will be issued
     in the order given.  This means that for:

	X = *A; Y = *B; *D = Z;

     we may get any of the following sequences:

	X = LOAD *A, Y = LOAD *B, STORE *D = Z
	X = LOAD *A, STORE *D = Z, Y = LOAD *B
	Y = LOAD *B, X = LOAD *A, STORE *D = Z
	Y = LOAD *B, STORE *D = Z, X = LOAD *A
	STORE *D = Z, X = LOAD *A, Y = LOAD *B
	STORE *D = Z, Y = LOAD *B, X = LOAD *A

 (*) It _must_ be assumed that overlapping memory accesses may be merged or
     discarded.  This means that for:

	X = *A; Y = *(A + 4);

     we may get any one of the following sequences:

	X = LOAD *A; Y = LOAD *(A + 4);
	Y = LOAD *(A + 4); X = LOAD *A;
	{X, Y} = LOAD {*A, *(A + 4) };

     And for:

	*A = X; *(A + 4) = Y;

     we may get any of:

	STORE *A = X; STORE *(A + 4) = Y;
	STORE *(A + 4) = Y; STORE *A = X;
	STORE {*A, *(A + 4) } = {X, Y};
108b42b4 | 271 | |
432fbf3c PM |
272 | And there are anti-guarantees: |
273 | ||
274 | (*) These guarantees do not apply to bitfields, because compilers often | |
275 | generate code to modify these using non-atomic read-modify-write | |
276 | sequences. Do not attempt to use bitfields to synchronize parallel | |
277 | algorithms. | |
278 | ||
279 | (*) Even in cases where bitfields are protected by locks, all fields | |
280 | in a given bitfield must be protected by one lock. If two fields | |
281 | in a given bitfield are protected by different locks, the compiler's | |
282 | non-atomic read-modify-write sequences can cause an update to one | |
283 | field to corrupt the value of an adjacent field. | |
284 | ||
285 | (*) These guarantees apply only to properly aligned and sized scalar | |
286 | variables. "Properly sized" currently means variables that are | |
287 | the same size as "char", "short", "int" and "long". "Properly | |
288 | aligned" means the natural alignment, thus no constraints for | |
289 | "char", two-byte alignment for "short", four-byte alignment for | |
290 | "int", and either four-byte or eight-byte alignment for "long", | |
291 | on 32-bit and 64-bit systems, respectively. Note that these | |
292 | guarantees were introduced into the C11 standard, so beware when | |
293 | using older pre-C11 compilers (for example, gcc 4.6). The portion | |
294 | of the standard containing this guarantee is Section 3.14, which | |
295 | defines "memory location" as follows: | |
296 | ||
297 | memory location | |
298 | either an object of scalar type, or a maximal sequence | |
299 | of adjacent bit-fields all having nonzero width | |
300 | ||
301 | NOTE 1: Two threads of execution can update and access | |
302 | separate memory locations without interfering with | |
303 | each other. | |
304 | ||
305 | NOTE 2: A bit-field and an adjacent non-bit-field member | |
306 | are in separate memory locations. The same applies | |
307 | to two bit-fields, if one is declared inside a nested | |
308 | structure declaration and the other is not, or if the two | |
309 | are separated by a zero-length bit-field declaration, | |
310 | or if they are separated by a non-bit-field member | |
311 | declaration. It is not safe to concurrently update two | |
312 | bit-fields in the same structure if all members declared | |
313 | between them are also bit-fields, no matter what the | |
314 | sizes of those intervening bit-fields happen to be. | |
315 | ||
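
For reference, at the time of writing ACCESS_ONCE() is defined in
include/linux/compiler.h along the following lines, as a volatile cast that
forces the compiler to emit exactly one access of the given size:

	#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))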
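
And an illustrative sketch of the bitfield-locking hazard described above;
the structures and lock names are hypothetical:

	struct ex_flags {
		int a : 4;	/* protected by ex_lock_a */
		int b : 4;	/* protected by ex_lock_b -- BUG: shares a
				 * memory location with 'a', so a compiler
				 * read-modify-write of 'b' can corrupt a
				 * concurrent update of 'a' */
	};

	struct ex_flags_fixed {
		int a : 4;	/* protected by ex_lock_a */
		int separator;	/* non-bit-field member: 'a' and 'b' now
				 * occupy separate memory locations, so
				 * separate locks are safe */
		int b : 4;	/* protected by ex_lock_b */
	};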


=========================
WHAT ARE MEMORY BARRIERS?
=========================

As can be seen above, independent memory operations are effectively performed
in random order, but this can be a problem for CPU-CPU interaction and for I/O.
What is required is some way of intervening to instruct the compiler and the
CPU to restrict the order.

Memory barriers are such interventions.  They impose a perceived partial
ordering over the memory operations on either side of the barrier.

Such enforcement is important because the CPUs and other devices in a system
can use a variety of tricks to improve performance, including reordering,
deferral and combination of memory operations; speculative loads; speculative
branch prediction and various types of caching.  Memory barriers are used to
override or suppress these tricks, allowing the code to sanely control the
interaction of multiple CPUs and/or devices.


VARIETIES OF MEMORY BARRIER
---------------------------

Memory barriers come in four basic varieties:

 (1) Write (or store) memory barriers.

     A write memory barrier gives a guarantee that all the STORE operations
     specified before the barrier will appear to happen before all the STORE
     operations specified after the barrier with respect to the other
     components of the system.

     A write barrier is a partial ordering on stores only; it is not required
     to have any effect on loads.

     A CPU can be viewed as committing a sequence of store operations to the
     memory system as time progresses.  All stores before a write barrier will
     occur in the sequence _before_ all the stores after the write barrier.

     [!] Note that write barriers should normally be paired with read or data
     dependency barriers; see the "SMP barrier pairing" subsection.


 (2) Data dependency barriers.

     A data dependency barrier is a weaker form of read barrier.  In the case
     where two loads are performed such that the second depends on the result
     of the first (eg: the first load retrieves the address to which the second
     load will be directed), a data dependency barrier would be required to
     make sure that the target of the second load is updated before the address
     obtained by the first load is accessed.

     A data dependency barrier is a partial ordering on interdependent loads
     only; it is not required to have any effect on stores, independent loads
     or overlapping loads.

     As mentioned in (1), the other CPUs in the system can be viewed as
     committing sequences of stores to the memory system that the CPU being
     considered can then perceive.  A data dependency barrier issued by the CPU
     under consideration guarantees that for any load preceding it, if that
     load touches one of a sequence of stores from another CPU, then by the
     time the barrier completes, the effects of all the stores prior to that
     touched by the load will be perceptible to any loads issued after the data
     dependency barrier.

     See the "Examples of memory barrier sequences" subsection for diagrams
     showing the ordering constraints.

     [!] Note that the first load really has to have a _data_ dependency and
     not a control dependency.  If the address for the second load is dependent
     on the first load, but the dependency is through a conditional rather than
     actually loading the address itself, then it's a _control_ dependency and
     a full read barrier or better is required.  See the "Control dependencies"
     subsection for more information.

     [!] Note that data dependency barriers should normally be paired with
     write barriers; see the "SMP barrier pairing" subsection.


 (3) Read (or load) memory barriers.

     A read barrier is a data dependency barrier plus a guarantee that all the
     LOAD operations specified before the barrier will appear to happen before
     all the LOAD operations specified after the barrier with respect to the
     other components of the system.

     A read barrier is a partial ordering on loads only; it is not required to
     have any effect on stores.

     Read memory barriers imply data dependency barriers, and so can substitute
     for them.

     [!] Note that read barriers should normally be paired with write barriers;
     see the "SMP barrier pairing" subsection.


 (4) General memory barriers.

     A general memory barrier gives a guarantee that all the LOAD and STORE
     operations specified before the barrier will appear to happen before all
     the LOAD and STORE operations specified after the barrier with respect to
     the other components of the system.

     A general memory barrier is a partial ordering over both loads and stores.

     General memory barriers imply both read and write memory barriers, and so
     can substitute for either.


And a couple of implicit varieties:

 (5) ACQUIRE operations.

     This acts as a one-way permeable barrier.  It guarantees that all memory
     operations after the ACQUIRE operation will appear to happen after the
     ACQUIRE operation with respect to the other components of the system.
     ACQUIRE operations include LOCK operations and smp_load_acquire()
     operations.

     Memory operations that occur before an ACQUIRE operation may appear to
     happen after it completes.

     An ACQUIRE operation should almost always be paired with a RELEASE
     operation.


 (6) RELEASE operations.

     This also acts as a one-way permeable barrier.  It guarantees that all
     memory operations before the RELEASE operation will appear to happen
     before the RELEASE operation with respect to the other components of the
     system.  RELEASE operations include UNLOCK operations and
     smp_store_release() operations.

     Memory operations that occur after a RELEASE operation may appear to
     happen before it completes.

     The use of ACQUIRE and RELEASE operations generally precludes the need
     for other sorts of memory barrier (but note the exceptions mentioned in
     the subsection "MMIO write barrier").  In addition, a RELEASE+ACQUIRE
     pair is -not- guaranteed to act as a full memory barrier.  However, after
     an ACQUIRE on a given variable, all memory accesses preceding any prior
     RELEASE on that same variable are guaranteed to be visible.  In other
     words, within a given variable's critical section, all accesses of all
     previous critical sections for that variable are guaranteed to have
     completed.

     This means that ACQUIRE acts as a minimal "acquire" operation and
     RELEASE acts as a minimal "release" operation.
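
For example, a minimal message-passing sketch using smp_store_release() and
smp_load_acquire(); 'ex_msg' and 'ex_ready' are hypothetical shared variables,
both initially zero:

	static int ex_msg, ex_ready;

	static void ex_producer(void)
	{
		ex_msg = 42;			 /* ordinary store */
		smp_store_release(&ex_ready, 1); /* RELEASE: ex_msg is
						  * visible before ex_ready */
	}

	static void ex_consumer(void)
	{
		if (smp_load_acquire(&ex_ready)) /* ACQUIRE: pairs with the
						  * RELEASE above */
			BUG_ON(ex_msg != 42);	 /* guaranteed to see 42 */
	}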

Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device.  If it can be guaranteed that
there won't be any such interaction in any particular piece of code, then
memory barriers are unnecessary in that piece of code.


Note that these are the _minimum_ guarantees.  Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.


WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
----------------------------------------------

There are certain things that the Linux kernel memory barriers do not guarantee:

 (*) There is no guarantee that any of the memory accesses specified before a
     memory barrier will be _complete_ by the completion of a memory barrier
     instruction; the barrier can be considered to draw a line in that CPU's
     access queue that accesses of the appropriate type may not cross.

 (*) There is no guarantee that issuing a memory barrier on one CPU will have
     any direct effect on another CPU or any other hardware in the system.  The
     indirect effect will be the order in which the second CPU sees the effects
     of the first CPU's accesses occur, but see the next point:

 (*) There is no guarantee that a CPU will see the correct order of effects
     from a second CPU's accesses, even _if_ the second CPU uses a memory
     barrier, unless the first CPU _also_ uses a matching memory barrier (see
     the subsection on "SMP Barrier Pairing").

 (*) There is no guarantee that some intervening piece of off-the-CPU
     hardware[*] will not reorder the memory accesses.  CPU cache coherency
     mechanisms should propagate the indirect effects of a memory barrier
     between CPUs, but might not do so in order.

	[*] For information on bus mastering DMA and coherency please read:

	    Documentation/PCI/pci.txt
	    Documentation/DMA-API-HOWTO.txt
	    Documentation/DMA-API.txt


DATA DEPENDENCY BARRIERS
------------------------

The usage requirements of data dependency barriers are a little subtle, and
it's not always obvious that they're needed.  To illustrate, consider the
following sequence of events:

	CPU 1			CPU 2
	===============		===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	ACCESS_ONCE(P) = &B;
				Q = ACCESS_ONCE(P);
				D = *Q;

There's a clear data dependency here, and it would seem that by the end of the
sequence, Q must be either &A or &B, and that:

	(Q == &A) implies (D == 1)
	(Q == &B) implies (D == 4)

But!  CPU 2's perception of P may be updated _before_ its perception of B, thus
leading to the following situation:

	(Q == &B) and (D == 2) ????

Whilst this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).

To deal with this, a data dependency barrier or better must be inserted
between the address load and the data load:

	CPU 1			CPU 2
	===============		===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	ACCESS_ONCE(P) = &B;
				Q = ACCESS_ONCE(P);
				<data dependency barrier>
				D = *Q;

This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.

[!] Note that this extremely counterintuitive situation arises most easily on
machines with split caches, so that, for example, one cache bank processes
even-numbered cache lines and the other bank processes odd-numbered cache
lines.  The pointer P might be stored in an odd-numbered cache line, and the
variable B might be stored in an even-numbered cache line.  Then, if the
even-numbered bank of the reading CPU's cache is extremely busy while the
odd-numbered bank is idle, one can see the new value of the pointer P (&B),
but the old value of the variable B (2).


Another example of where data dependency barriers might be required is where a
number is read from memory and then used to calculate the index for an array
access:

	CPU 1			CPU 2
	===============		===============
	{ M[0] == 1, M[1] == 2, M[3] == 3, P == 0, Q == 3 }
	M[1] = 4;
	<write barrier>
	ACCESS_ONCE(P) = 1;
				Q = ACCESS_ONCE(P);
				<data dependency barrier>
				D = M[Q];
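
The same example at the C level might be sketched as follows; this is a sketch
only, open-coding smp_read_barrier_depends() to mirror the pseudo-code (in
real code one would normally use rcu_dereference() instead, as noted earlier):

	static int M[4] = { 1, 2, 0, 3 };	/* M[2] unspecified above */
	static int P;

	static void ex_writer(void)
	{
		M[1] = 4;
		smp_wmb();			/* the <write barrier> */
		ACCESS_ONCE(P) = 1;
	}

	static int ex_reader(void)
	{
		int q = ACCESS_ONCE(P);

		smp_read_barrier_depends();	/* the <data dependency barrier> */
		return M[q];			/* sees M[1] == 4 whenever q == 1 */
	}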

The data dependency barrier is very important to the RCU system,
for example.  See rcu_assign_pointer() and rcu_dereference() in
include/linux/rcupdate.h.  This permits the current target of an RCU'd
pointer to be replaced with a new modified target, without the replacement
target appearing to be incompletely initialised.
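
A minimal publish/subscribe sketch of that pattern; 'struct ex_obj' and the
other names are hypothetical:

	struct ex_obj {
		int a;
	};

	static struct ex_obj __rcu *ex_ptr;

	static void ex_publish(struct ex_obj *new)
	{
		new->a = 1;			 /* initialise first ... */
		rcu_assign_pointer(ex_ptr, new); /* ... then publish; implies
						  * the <write barrier> */
	}

	static void ex_subscribe(void)
	{
		struct ex_obj *p;

		rcu_read_lock();
		p = rcu_dereference(ex_ptr);	/* implies the <data
						 * dependency barrier> */
		if (p)
			BUG_ON(p->a != 1);	/* never sees a half-
						 * initialised object */
		rcu_read_unlock();
	}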

See also the subsection on "Cache Coherency" for a more thorough example.


CONTROL DEPENDENCIES
--------------------

A load-load control dependency requires a full read memory barrier, not
simply a data dependency barrier to make it work correctly.  Consider the
following bit of code:

	q = ACCESS_ONCE(a);
	if (q) {
		<data dependency barrier>  /* BUG: No data dependency!!! */
		p = ACCESS_ONCE(b);
	}

This will not have the desired effect because there is no actual data
dependency, but rather a control dependency that the CPU may short-circuit
by attempting to predict the outcome in advance, so that other CPUs see
the load from b as having happened before the load from a.  In such a
case what's actually required is:

	q = ACCESS_ONCE(a);
	if (q) {
		<read barrier>
		p = ACCESS_ONCE(b);
	}

However, stores are not speculated.  This means that ordering -is- provided
for load-store control dependencies, as in the following example:

	q = ACCESS_ONCE(a);
	if (q) {
		ACCESS_ONCE(b) = p;
	}

Control dependencies pair normally with other types of barriers.
That said, please note that ACCESS_ONCE() is not optional!  Without the
ACCESS_ONCE(), the compiler might combine the load from 'a' with other
loads from 'a', and the store to 'b' with other stores to 'b', with
possible highly counterintuitive effects on ordering.

Worse yet, if the compiler is able to prove (say) that the value of
variable 'a' is always non-zero, it would be well within its rights
to optimize the original example by eliminating the "if" statement
as follows:

	q = a;
	b = p;  /* BUG: Compiler and CPU can both reorder!!! */

So don't leave out the ACCESS_ONCE().

It is tempting to try to enforce ordering on identical stores on both
branches of the "if" statement as follows:

	q = ACCESS_ONCE(a);
	if (q) {
		barrier();
		ACCESS_ONCE(b) = p;
		do_something();
	} else {
		barrier();
		ACCESS_ONCE(b) = p;
		do_something_else();
	}

Unfortunately, current compilers will transform this as follows at high
optimization levels:

	q = ACCESS_ONCE(a);
	barrier();
	ACCESS_ONCE(b) = p;  /* BUG: No ordering vs. load from a!!! */
	if (q) {
		/* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
		do_something();
	} else {
		/* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
		do_something_else();
	}

Now there is no conditional between the load from 'a' and the store to
'b', which means that the CPU is within its rights to reorder them:
The conditional is absolutely required, and must be present in the
assembly code even after all compiler optimizations have been applied.
Therefore, if you need ordering in this example, you need explicit
memory barriers, for example, smp_store_release():

	q = ACCESS_ONCE(a);
	if (q) {
		smp_store_release(&b, p);
		do_something();
	} else {
		smp_store_release(&b, p);
		do_something_else();
	}

In contrast, without explicit memory barriers, two-legged-if control
ordering is guaranteed only when the stores differ, for example:

	q = ACCESS_ONCE(a);
	if (q) {
		ACCESS_ONCE(b) = p;
		do_something();
	} else {
		ACCESS_ONCE(b) = r;
		do_something_else();
	}

The initial ACCESS_ONCE() is still required to prevent the compiler from
proving the value of 'a'.

In addition, you need to be careful what you do with the local variable 'q',
otherwise the compiler might be able to guess the value and again remove
the needed conditional.  For example:

	q = ACCESS_ONCE(a);
	if (q % MAX) {
		ACCESS_ONCE(b) = p;
		do_something();
	} else {
		ACCESS_ONCE(b) = r;
		do_something_else();
	}

If MAX is defined to be 1, then the compiler knows that (q % MAX) is
equal to zero, in which case the compiler is within its rights to
transform the above code into the following:

	q = ACCESS_ONCE(a);
	ACCESS_ONCE(b) = p;
	do_something_else();

Given this transformation, the CPU is not required to respect the ordering
between the load from variable 'a' and the store to variable 'b'.  It is
tempting to add a barrier(), but this does not help.  The conditional
is gone, and the barrier won't bring it back.  Therefore, if you are
relying on this ordering, you should make sure that MAX is greater than
one, perhaps as follows:

	q = ACCESS_ONCE(a);
	BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
	if (q % MAX) {
		ACCESS_ONCE(b) = p;
		do_something();
	} else {
		ACCESS_ONCE(b) = r;
		do_something_else();
	}

Please note once again that the stores to 'b' differ.  If they were
identical, as noted earlier, the compiler could pull this store outside
of the 'if' statement.

You must also be careful not to rely too much on boolean short-circuit
evaluation.  Consider this example:

	q = ACCESS_ONCE(a);
	if (q || 1 > 0)
		ACCESS_ONCE(b) = 1;

Because the second condition is always true, the compiler can transform
this example as follows, defeating the control dependency:

	q = ACCESS_ONCE(a);
	ACCESS_ONCE(b) = 1;

This example underscores the need to ensure that the compiler cannot
out-guess your code.  More generally, although ACCESS_ONCE() does force
the compiler to actually emit code for a given load, it does not force
the compiler to use the results.

Finally, control dependencies do -not- provide transitivity.  This is
demonstrated by two related examples, with the initial values of
x and y both being zero:

	CPU 0                     CPU 1
	=====================     =====================
	r1 = ACCESS_ONCE(x);      r2 = ACCESS_ONCE(y);
	if (r1 > 0)               if (r2 > 0)
	  ACCESS_ONCE(y) = 1;       ACCESS_ONCE(x) = 1;

	assert(!(r1 == 1 && r2 == 1));

The above two-CPU example will never trigger the assert().  However,
if control dependencies guaranteed transitivity (which they do not),
then adding the following CPU would guarantee a related assertion:

	CPU 2
	=====================
	ACCESS_ONCE(x) = 2;

	assert(!(r1 == 2 && r2 == 1 && x == 2)); /* FAILS!!! */

But because control dependencies do -not- provide transitivity, the above
assertion can fail after the combined three-CPU example completes.  If you
need the three-CPU example to provide ordering, you will need smp_mb()
between the loads and stores in the CPU 0 and CPU 1 code fragments,
that is, just before or just after the "if" statements.
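
That strengthened version would look something like this:

	CPU 0                     CPU 1
	=====================     =====================
	r1 = ACCESS_ONCE(x);      r2 = ACCESS_ONCE(y);
	smp_mb();                 smp_mb();
	if (r1 > 0)               if (r2 > 0)
	  ACCESS_ONCE(y) = 1;       ACCESS_ONCE(x) = 1;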

These two examples are the LB and WWC litmus tests from this paper:
http://www.cl.cam.ac.uk/users/pes20/ppc-supplemental/test6.pdf and this
site: https://www.cl.cam.ac.uk/~pes20/ppcmem/index.html.

In summary:

 (*) Control dependencies can order prior loads against later stores.
     However, they do -not- guarantee any other sort of ordering:
     Not prior loads against later loads, nor prior stores against
     later anything.  If you need these other forms of ordering,
     use smp_rmb(), smp_wmb(), or, in the case of prior stores and
     later loads, smp_mb().

 (*) If both legs of the "if" statement begin with identical stores
     to the same variable, a barrier() statement is required at the
     beginning of each leg of the "if" statement.

 (*) Control dependencies require at least one run-time conditional
     between the prior load and the subsequent store, and this
     conditional must involve the prior load.  If the compiler
     is able to optimize the conditional away, it will have also
     optimized away the ordering.  Careful use of ACCESS_ONCE() can
     help to preserve the needed conditional.

 (*) Control dependencies require that the compiler avoid reordering the
     dependency into nonexistence.  Careful use of ACCESS_ONCE() or
     barrier() can help to preserve your control dependency.  Please
     see the Compiler Barrier section for more information.

 (*) Control dependencies pair normally with other types of barriers.

 (*) Control dependencies do -not- provide transitivity.  If you
     need transitivity, use smp_mb().


SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired.  A lack of appropriate pairing is almost certainly an error.

General barriers pair with each other, though they also pair with most
other types of barriers, albeit without transitivity.  An acquire barrier
pairs with a release barrier, but both may also pair with other barriers,
including of course general barriers.  A write barrier pairs with a data
dependency barrier, a control dependency, an acquire barrier, a release
barrier, a read barrier, or a general barrier.  Similarly a read barrier,
control dependency, or a data dependency barrier pairs with a write
barrier, an acquire barrier, a release barrier, or a general barrier:

	CPU 1			CPU 2
	===============		===============
	ACCESS_ONCE(a) = 1;
	<write barrier>
	ACCESS_ONCE(b) = 2;	x = ACCESS_ONCE(b);
				<read barrier>
				y = ACCESS_ONCE(a);

Or:

	CPU 1			CPU 2
	===============		===============================
	a = 1;
	<write barrier>
	ACCESS_ONCE(b) = &a;	x = ACCESS_ONCE(b);
				<data dependency barrier>
				y = *x;

Or even:

	CPU 1			CPU 2
	===============		===============================
	r1 = ACCESS_ONCE(y);
	<general barrier>
	ACCESS_ONCE(x) = 1;	if (r2 = ACCESS_ONCE(x)) {
				   <implicit control dependency>
				   ACCESS_ONCE(y) = 1;
				}

	assert(r1 == 0 || r2 == 0);

Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.

[!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the data dependency barrier, and vice
versa:

	CPU 1                               CPU 2
	===================                 ===================
	ACCESS_ONCE(a) = 1;  }----   --->{  v = ACCESS_ONCE(c);
	ACCESS_ONCE(b) = 2;  }    \ /    {  w = ACCESS_ONCE(d);
	<write barrier>            \        <read barrier>
	ACCESS_ONCE(c) = 3;  }    / \    {  x = ACCESS_ONCE(a);
	ACCESS_ONCE(d) = 4;  }----   --->{  y = ACCESS_ONCE(b);
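
A typical instance of the first pairing above, written out as C fragments;
'ex_data' and 'ex_flag' are hypothetical shared variables, both initially
zero:

	static int ex_data, ex_flag;

	static void ex_wmb_producer(void)
	{
		ACCESS_ONCE(ex_data) = 42;
		smp_wmb();			/* write barrier */
		ACCESS_ONCE(ex_flag) = 1;
	}

	static void ex_rmb_consumer(void)
	{
		if (ACCESS_ONCE(ex_flag)) {
			smp_rmb();		/* paired read barrier */
			BUG_ON(ACCESS_ONCE(ex_data) != 42);
		}
	}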


EXAMPLES OF MEMORY BARRIER SEQUENCES
------------------------------------

Firstly, write barriers act as partial orderings on store operations.
Consider the following sequence of events:

	CPU 1
	=======================
	STORE A = 1
	STORE B = 2
	STORE C = 3
	<write barrier>
	STORE D = 4
	STORE E = 5

This sequence of events is committed to the memory coherence system in an order
that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}:

	+-------+       :      :
	|       |       +------+
	|       |------>| C=3  |     }     /\
	|       |  :    +------+     }-----  \  -----> Events perceptible to
	|       |  :    | A=1  |     }        \/       the rest of the system
	|       |  :    +------+     }
	| CPU 1 |  :    | B=2  |     }
	|       |       +------+     }
	|       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
	|       |       +------+     }        requires all stores prior to the
	|       |  :    | E=5  |     }        barrier to be committed before
	|       |  :    +------+     }        further stores may take place
	|       |------>| D=4  |     }
	|       |       +------+
	+-------+       :      :
	                   |
	                   | Sequence in which stores are committed to the
	                   | memory system by CPU 1
	                   V


Secondly, data dependency barriers act as partial orderings on data-dependent
loads.  Consider the following sequence of events:

	CPU 1			CPU 2
	=======================	=======================
		{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		LOAD X
	STORE D = 4		LOAD C (gets &B)
				LOAD *C (reads B)

Without intervention, CPU 2 may perceive the events on CPU 1 in some
effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+  | Sequence of update
	|       |------>| B=2  |-----       --->| Y->8  |  | of perception on
	|       |  :    +------+     \          +-------+  | CPU 2
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	    Apparently incorrect --->  |        | B->7  |------>|       |
	    perception of B (!)        |        +-------+       |       |
	                               |        :       :       |       |
	                               |        +-------+       |       |
	    The load of X holds --->    \       | X->9  |------>|       |
	    up the maintenance           \      +-------+       |       |
	    of coherence of B             ----->| B->2  |       +-------+
	                                        +-------+
	                                        :       :


In the above example, CPU 2 perceives that B is 7, despite the load of *C
(which would be B) coming after the LOAD of C.

If, however, a data dependency barrier were to be placed between the load of C
and the load of *C (ie: B) on CPU 2:

	CPU 1			CPU 2
	=======================	=======================
		{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		LOAD X
	STORE D = 4		LOAD C (gets &B)
				<data dependency barrier>
				LOAD *C (reads B)

then the following will occur:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| B=2  |-----       --->| Y->8  |
	|       |  :    +------+     \          +-------+
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	                               |        | X->9  |------>|       |
	                               |        +-------+       |       |
	  Makes sure all effects --->   \   ddddddddddddddddd   |       |
	  prior to the store of C        \      +-------+       |       |
	  are perceptible to              ----->| B->2  |------>|       |
	  subsequent loads                      +-------+       |       |
	                                        :       :       +-------+


And thirdly, a read barrier acts as a partial order on loads.  Consider the
following sequence of events:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A

Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
some effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       | A->0  |------>|       |
	                                |       +-------+       |       |
	                                |       :       :       +-------+
	                                 \      :       :
	                                  \     +-------+
	                                   ---->| A->1  |
	                                        +-------+
	                                        :       :


If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				<read barrier>
				LOAD A

then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
2:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B        ---->| A->1  |------>|       |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+


To illustrate this more completely, consider what could happen if the code
contained a load of A either side of the read barrier:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A [first load of A]
				<read barrier>
				LOAD A [second load of A]

Even though the two loads of A both occur after the load of B, they may both
come up with different values:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	                                |       +-------+       |       |
	                                |       | A->0  |------>| 1st   |
	                                |       +-------+       |       |
	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B        ---->| A->1  |------>| 2nd   |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+


But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
before the read barrier completes anyway:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                 \      :       :       |       |
	                                  \     +-------+       |       |
	                                   ---->| A->1  |------>| 1st   |
	                                        +-------+       |       |
	                                    rrrrrrrrrrrrrrrrr   |       |
	                                        +-------+       |       |
	                                        | A->1  |------>| 2nd   |
	                                        +-------+       |       |
	                                        :       :       +-------+


The guarantee is that the second load will always come up with A == 1 if the
load of B came up with B == 2.  No such guarantee exists for the first load of
A; that may come up with either A == 0 or A == 1.


READ MEMORY BARRIERS VS LOAD SPECULATION
----------------------------------------

Many CPUs speculate with loads: that is, they see that they will need to load
an item from memory, and they find a time where they're not using the bus for
any other loads, and so do the load in advance - even though they haven't
actually got to that point in the instruction execution flow yet.  This
permits the actual load instruction to potentially complete immediately
because the CPU already has the value to hand.

It may turn out that the CPU didn't actually need the value - perhaps because a
branch circumvented the load - in which case it can discard the value or just
cache it for later use.

Consider:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE		} Divide instructions generally
				DIVIDE		} take a long time to perform
				LOAD A

Which might appear as this:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	Once the divisions are complete -->     :       :   ~-->|       |
	the CPU can then perform the            :       :       |       |
	LOAD with immediate effect              :       :       +-------+


Placing a read barrier or a data dependency barrier just before the second
load:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE
				DIVIDE
				<read barrier>
				LOAD A

will force any value speculatively obtained to be reconsidered to an extent
dependent on the type of barrier used.  If there was no change made to the
speculated memory location, then the speculated value will just be used:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrr~   |       |
	                                        :       :   ~   |       |
	                                        :       :   ~-->|       |
	                                        :       :       |       |
	                                        :       :       +-------+


but if there was an update or an invalidation from another CPU pending, then
the speculation will be cancelled and the value reloaded:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrrr   |       |
	                                        +-------+       |       |
	The speculation is discarded --->   --->| A->1  |------>|       |
	and an updated value is                 +-------+       |       |
	retrieved                               :       :       +-------+


TRANSITIVITY
------------

Transitivity is a deeply intuitive notion about ordering that is not
always provided by real computer systems.  The following example
demonstrates transitivity (also called "cumulativity"):

	CPU 1			CPU 2			CPU 3
	=======================	=======================	=======================
		{ X = 0, Y = 0 }
	STORE X=1		LOAD X			STORE Y=1
				<general barrier>	<general barrier>
				LOAD Y			LOAD X

Suppose that CPU 2's load from X returns 1 and its load from Y returns 0.
This indicates that CPU 2's load from X in some sense follows CPU 1's
store to X and that CPU 2's load from Y in some sense preceded CPU 3's
store to Y.  The question is then "Can CPU 3's load from X return 0?"

Because CPU 2's load from X in some sense came after CPU 1's store, it
is natural to expect that CPU 3's load from X must therefore return 1.
This expectation is an example of transitivity: if a load executing on
CPU A follows a load from the same variable executing on CPU B, then
CPU A's load must either return the same value that CPU B's load did,
or must return some later value.

In the Linux kernel, use of general memory barriers guarantees
transitivity.  Therefore, in the above example, if CPU 2's load from X
returns 1 and its load from Y returns 0, then CPU 3's load from X must
also return 1.

However, transitivity is -not- guaranteed for read or write barriers.
For example, suppose that CPU 2's general barrier in the above example
is changed to a read barrier as shown below:

	CPU 1			CPU 2			CPU 3
	=======================	=======================	=======================
		{ X = 0, Y = 0 }
	STORE X=1		LOAD X			STORE Y=1
				<read barrier>		<general barrier>
				LOAD Y			LOAD X

This substitution destroys transitivity: in this example, it is perfectly
legal for CPU 2's load from X to return 1, its load from Y to return 0,
and CPU 3's load from X to return 0.

The key point is that although CPU 2's read barrier orders its pair
of loads, it does not guarantee to order CPU 1's store.  Therefore, if
this example runs on a system where CPUs 1 and 2 share a store buffer
or a level of cache, CPU 2 might have early access to CPU 1's writes.
General barriers are therefore required to ensure that all CPUs agree
on the combined order of CPU 1's and CPU 2's accesses.

To reiterate, if your code requires transitivity, use general barriers
throughout.
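
As C fragments, the transitive example might be sketched as follows, one
function per CPU; the ex_* names are hypothetical and x, y, r1, r2 and r3
are all initially zero:

	static int x, y;
	static int r1, r2, r3;

	static void ex_cpu1(void)
	{
		ACCESS_ONCE(x) = 1;
	}

	static void ex_cpu2(void)
	{
		r1 = ACCESS_ONCE(x);
		smp_mb();	/* general barrier: provides transitivity */
		r2 = ACCESS_ONCE(y);
	}

	static void ex_cpu3(void)
	{
		ACCESS_ONCE(y) = 1;
		smp_mb();	/* general barrier */
		r3 = ACCESS_ONCE(x);
	}

	/* After all three have run, r1 == 1 && r2 == 0 implies r3 == 1. */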
1308 | ||
1309 | ||
========================
EXPLICIT KERNEL BARRIERS
========================

The Linux kernel has a variety of different barriers that act at different
levels:

 (*) Compiler barrier.

 (*) CPU memory barriers.

 (*) MMIO write barrier.


COMPILER BARRIER
----------------

The Linux kernel has an explicit compiler barrier function that prevents the
compiler from moving the memory accesses either side of it to the other side:

	barrier();

This is a general barrier -- there are no read-read or write-write variants
of barrier().  However, ACCESS_ONCE() can be thought of as a weak form
of barrier() that affects only the specific accesses flagged by the
ACCESS_ONCE().

The barrier() function has the following effects:

 (*) Prevents the compiler from reordering accesses following the
     barrier() to precede any accesses preceding the barrier().
     One example use for this property is to ease communication between
     interrupt-handler code and the code that was interrupted.

 (*) Within a loop, forces the compiler to load the variables used
     in that loop's conditional on each pass through that loop.

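As a minimal sketch of the second property -- assuming a 'need_to_stop' flag
that is set by an interrupt handler or by another CPU -- the barrier() forces
the flag to be re-read on each pass rather than hoisted out of the loop:

	while (!need_to_stop) {
		/* Compiler barrier: need_to_stop must be reloaded from
		 * memory on every iteration of this loop. */
		barrier();
	}
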
The ACCESS_ONCE() function can prevent any number of optimizations that,
while perfectly safe in single-threaded code, can be fatal in concurrent
code.  Here are some examples of these sorts of optimizations:

 (*) The compiler is within its rights to reorder loads and stores
     to the same variable, and in some cases, the CPU is within its
     rights to reorder loads to the same variable.  This means that
     the following code:

	a[0] = x;
	a[1] = x;

     Might result in an older value of x stored in a[1] than in a[0].
     Prevent both the compiler and the CPU from doing this as follows:

	a[0] = ACCESS_ONCE(x);
	a[1] = ACCESS_ONCE(x);

     In short, ACCESS_ONCE() provides cache coherence for accesses from
     multiple CPUs to a single variable.

 (*) The compiler is within its rights to merge successive loads from
     the same variable.  Such merging can cause the compiler to "optimize"
     the following code:

	while (tmp = a)
		do_something_with(tmp);

     into the following code, which, although in some sense legitimate
     for single-threaded code, is almost certainly not what the developer
     intended:

	if (tmp = a)
		for (;;)
			do_something_with(tmp);

     Use ACCESS_ONCE() to prevent the compiler from doing this to you:

	while (tmp = ACCESS_ONCE(a))
		do_something_with(tmp);

 (*) The compiler is within its rights to reload a variable, for example,
     in cases where high register pressure prevents the compiler from
     keeping all data of interest in registers.  The compiler might
     therefore optimize the variable 'tmp' out of our previous example:

	while (tmp = a)
		do_something_with(tmp);

     This could result in the following code, which is perfectly safe in
     single-threaded code, but can be fatal in concurrent code:

	while (a)
		do_something_with(a);

     For example, the optimized version of this code could result in
     passing a zero to do_something_with() in the case where the variable
     a was modified by some other CPU between the "while" statement and
     the call to do_something_with().

     Again, use ACCESS_ONCE() to prevent the compiler from doing this:

	while (tmp = ACCESS_ONCE(a))
		do_something_with(tmp);

     Note that if the compiler runs short of registers, it might save
     tmp onto the stack.  The overhead of this saving and later restoring
     is why compilers reload variables.  Doing so is perfectly safe for
     single-threaded code, so you need to tell the compiler about cases
     where it is not safe.

 (*) The compiler is within its rights to omit a load entirely if it knows
     what the value will be.  For example, if the compiler can prove that
     the value of variable 'a' is always zero, it can optimize this code:

	while (tmp = a)
		do_something_with(tmp);

     Into this:

	do { } while (0);

     This transformation is a win for single-threaded code because it gets
     rid of a load and a branch.  The problem is that the compiler will
     carry out its proof assuming that the current CPU is the only one
     updating variable 'a'.  If variable 'a' is shared, then the compiler's
     proof will be erroneous.  Use ACCESS_ONCE() to tell the compiler
     that it doesn't know as much as it thinks it does:

	while (tmp = ACCESS_ONCE(a))
		do_something_with(tmp);

     But please note that the compiler is also closely watching what you
     do with the value after the ACCESS_ONCE().  For example, suppose you
     do the following and MAX is a preprocessor macro with the value 1:

	while ((tmp = ACCESS_ONCE(a)) % MAX)
		do_something_with(tmp);

     Then the compiler knows that the result of the "%" operator applied
     to MAX will always be zero, again allowing the compiler to optimize
     the code into near-nonexistence.  (It will still load from the
     variable 'a'.)

 (*) Similarly, the compiler is within its rights to omit a store entirely
     if it knows that the variable already has the value being stored.
     Again, the compiler assumes that the current CPU is the only one
     storing into the variable, which can cause the compiler to do the
     wrong thing for shared variables.  For example, suppose you have
     the following:

	a = 0;
	/* Code that does not store to variable a. */
	a = 0;

     The compiler sees that the value of variable 'a' is already zero, so
     it might well omit the second store.  This would come as a fatal
     surprise if some other CPU might have stored to variable 'a' in the
     meantime.

     Use ACCESS_ONCE() to prevent the compiler from making this sort of
     wrong guess:

	ACCESS_ONCE(a) = 0;
	/* Code that does not store to variable a. */
	ACCESS_ONCE(a) = 0;

 (*) The compiler is within its rights to reorder memory accesses unless
     you tell it not to.  For example, consider the following interaction
     between process-level code and an interrupt handler:

	void process_level(void)
	{
		msg = get_message();
		flag = true;
	}

	void interrupt_handler(void)
	{
		if (flag)
			process_message(msg);
	}

     There is nothing to prevent the compiler from transforming
     process_level() to the following; in fact, this might well be a
     win for single-threaded code:

	void process_level(void)
	{
		flag = true;
		msg = get_message();
	}

     If the interrupt occurs between these two statements, then
     interrupt_handler() might be passed a garbled msg.  Use ACCESS_ONCE()
     to prevent this as follows:

	void process_level(void)
	{
		ACCESS_ONCE(msg) = get_message();
		ACCESS_ONCE(flag) = true;
	}

	void interrupt_handler(void)
	{
		if (ACCESS_ONCE(flag))
			process_message(ACCESS_ONCE(msg));
	}

     Note that the ACCESS_ONCE() wrappers in interrupt_handler()
     are needed if this interrupt handler can itself be interrupted
     by something that also accesses 'flag' and 'msg', for example,
     a nested interrupt or an NMI.  Otherwise, ACCESS_ONCE() is not
     needed in interrupt_handler() other than for documentation purposes.
     (Note also that nested interrupts do not typically occur in modern
     Linux kernels; in fact, if an interrupt handler returns with
     interrupts enabled, you will get a WARN_ONCE() splat.)

     You should assume that the compiler can move ACCESS_ONCE() past
     code not containing ACCESS_ONCE(), barrier(), or similar primitives.

     This effect could also be achieved using barrier(), but ACCESS_ONCE()
     is more selective: With ACCESS_ONCE(), the compiler need only forget
     the contents of the indicated memory locations, while with barrier()
     the compiler must discard the value of all memory locations that
     it has currently cached in any machine registers.  Of course,
     the compiler must also respect the order in which the ACCESS_ONCE()s
     occur, though the CPU of course need not do so.

 (*) The compiler is within its rights to invent stores to a variable,
     as in the following example:

	if (a)
		b = a;
	else
		b = 42;

     The compiler might save a branch by optimizing this as follows:

	b = 42;
	if (a)
		b = a;

     In single-threaded code, this is not only safe, but also saves
     a branch.  Unfortunately, in concurrent code, this optimization
     could cause some other CPU to see a spurious value of 42 -- even
     if variable 'a' was never zero -- when loading variable 'b'.
     Use ACCESS_ONCE() to prevent this as follows:

	if (a)
		ACCESS_ONCE(b) = a;
	else
		ACCESS_ONCE(b) = 42;

     The compiler can also invent loads.  These are usually less
     damaging, but they can result in cache-line bouncing and thus in
     poor performance and scalability.  Use ACCESS_ONCE() to prevent
     invented loads.

 (*) For aligned memory locations whose size allows them to be accessed
     with a single memory-reference instruction, ACCESS_ONCE() prevents
     "load tearing" and "store tearing," in which a single large access
     is replaced by multiple smaller accesses.  For example, given an
     architecture having 16-bit store instructions with 7-bit immediate
     fields, the compiler might be tempted to use two 16-bit
     store-immediate instructions to implement the following 32-bit store:

	p = 0x00010002;

     Please note that GCC really does use this sort of optimization,
     which is not surprising given that it would likely take more
     than two instructions to build the constant and then store it.
     This optimization can therefore be a win in single-threaded code.
     In fact, a recent bug (since fixed) caused GCC to incorrectly use
     this optimization in a volatile store.  In the absence of such bugs,
     use of ACCESS_ONCE() prevents store tearing in the following example:

	ACCESS_ONCE(p) = 0x00010002;

     Use of packed structures can also result in load and store tearing,
     as in this example:

	struct __attribute__((__packed__)) foo {
		short a;
		int b;
		short c;
	};
	struct foo foo1, foo2;
	...

	foo2.a = foo1.a;
	foo2.b = foo1.b;
	foo2.c = foo1.c;

     Because there are no ACCESS_ONCE() wrappers and no volatile markings,
     the compiler would be well within its rights to implement these three
     assignment statements as a pair of 32-bit loads followed by a pair
     of 32-bit stores.  This would result in load tearing on 'foo1.b'
     and store tearing on 'foo2.b'.  ACCESS_ONCE() again prevents tearing
     in this example:

	foo2.a = foo1.a;
	ACCESS_ONCE(foo2.b) = ACCESS_ONCE(foo1.b);
	foo2.c = foo1.c;

All that aside, it is never necessary to use ACCESS_ONCE() on a variable
that has been marked volatile.  For example, because 'jiffies' is marked
volatile, it is never necessary to say ACCESS_ONCE(jiffies).  The reason
for this is that ACCESS_ONCE() is implemented as a volatile cast, which
has no effect when its argument is already marked volatile.

Please note that these compiler barriers have no direct effect on the CPU,
which may then reorder things however it wishes.


CPU MEMORY BARRIERS
-------------------

The Linux kernel has eight basic CPU memory barriers:

	TYPE		MANDATORY		SMP CONDITIONAL
	=============== ======================= ===========================
	GENERAL		mb()			smp_mb()
	WRITE		wmb()			smp_wmb()
	READ		rmb()			smp_rmb()
	DATA DEPENDENCY	read_barrier_depends()	smp_read_barrier_depends()

All memory barriers except the data dependency barriers imply a compiler
barrier.  Data dependencies do not impose any additional compiler ordering.

Aside: In the case of data dependencies, the compiler would be expected
to issue the loads in the correct order (eg. a[b] would have to load the
value of b before loading a[b]); however, the C specification does not
guarantee that the compiler will not speculate the value of b (eg. guess
that it is equal to 1) and load a[b] before b
(eg. tmp = a[1]; if (b != 1) tmp = a[b];).  There is also the problem of
a compiler reloading b after having loaded a[b], thus having a newer copy
of b than a[b].  A consensus has not yet been reached about these
problems; however, the ACCESS_ONCE() macro is a good place to start
looking (see the sketch below).

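As one possible approach -- a sketch only, not something this document
prescribes -- wrapping the index in ACCESS_ONCE() both forces the load of b
to be issued and prevents b from being reloaded after a[b] has been loaded:

	idx = ACCESS_ONCE(b);	/* b is loaded exactly once... */
	tmp = a[idx];		/* ...and a[] is indexed by that snapshot */
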
SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
systems because it is assumed that a CPU will appear to be self-consistent,
and will order overlapping accesses correctly with respect to itself.

[!] Note that SMP memory barriers _must_ be used to control the ordering of
references to shared memory on SMP systems, though the use of locking instead
is sufficient.

Mandatory barriers should not be used to control SMP effects, since mandatory
barriers unnecessarily impose overhead on UP systems.  They may, however, be
used to control MMIO effects on accesses through relaxed memory I/O windows.
These are required even on non-SMP systems as they affect the order in which
memory operations appear to a device by prohibiting both the compiler and the
CPU from reordering them.

There are some more advanced barrier functions:

 (*) set_mb(var, value)

     This assigns the value to the variable and then inserts a full memory
     barrier after it, depending on the function.  It isn't guaranteed to
     insert anything more than a compiler barrier in a UP compilation.

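     For instance, set_current_state() has historically been implemented in
     terms of set_mb(); a minimal sketch of the resulting pattern (the
     event_indicated flag is illustrative):

	set_mb(current->state, TASK_UNINTERRUPTIBLE);
	/* The implied barrier orders the state store before this load. */
	if (!event_indicated)
		schedule();
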

 (*) smp_mb__before_atomic();
 (*) smp_mb__after_atomic();

     These are for use with atomic functions (such as add, subtract,
     increment and decrement) that don't return a value, especially when
     used for reference counting.  These functions do not imply memory
     barriers.

     These are also used for atomic bitop functions that do not return a
     value (such as set_bit and clear_bit).

     As an example, consider a piece of code that marks an object as being dead
     and then decrements the object's reference count:

	obj->dead = 1;
	smp_mb__before_atomic();
	atomic_dec(&obj->ref_count);

     This makes sure that the death mark on the object is perceived to be set
     *before* the reference counter is decremented.

     See Documentation/atomic_ops.txt for more information.  See the "Atomic
     operations" subsection for information on where to use these.

 (*) dma_wmb();
 (*) dma_rmb();

     These are for use with consistent memory to guarantee the ordering
     of writes or reads of shared memory accessible to both the CPU and a
     DMA capable device.

     For example, consider a device driver that shares memory with a device
     and uses a descriptor status value to indicate if the descriptor belongs
     to the device or the CPU, and a doorbell to notify it when new
     descriptors are available:

	if (desc->status != DEVICE_OWN) {
		/* do not read data until we own descriptor */
		dma_rmb();

		/* read/modify data */
		read_data = desc->data;
		desc->data = write_data;

		/* flush modifications before status update */
		dma_wmb();

		/* assign ownership */
		desc->status = DEVICE_OWN;

		/* force memory to sync before notifying device via MMIO */
		wmb();

		/* notify device of new descriptors */
		writel(DESC_NOTIFY, doorbell);
	}

     The dma_rmb() allows us to guarantee the device has released ownership
     before we read the data from the descriptor, and the dma_wmb() allows
     us to guarantee the data is written to the descriptor before the device
     can see it now has ownership.  The wmb() is needed to guarantee that the
     cache coherent memory writes have completed before attempting a write to
     the cache incoherent MMIO region.

     See Documentation/DMA-API.txt for more information on consistent memory.

MMIO WRITE BARRIER
------------------

The Linux kernel also has a special barrier for use with memory-mapped I/O
writes:

	mmiowb();

This is a variation on the mandatory write barrier that causes writes to weakly
ordered I/O regions to be partially ordered.  Its effects may go beyond the
CPU->Hardware interface and actually affect the hardware at some level.

See the subsection "Locks vs I/O accesses" for more information.


===============================
IMPLICIT KERNEL MEMORY BARRIERS
===============================

Some of the other functions in the Linux kernel imply memory barriers, amongst
which are locking and scheduling functions.

This specification is a _minimum_ guarantee; any particular architecture may
provide more substantial guarantees, but these may not be relied upon outside
of arch specific code.


ACQUIRING FUNCTIONS
-------------------

The Linux kernel has a number of locking constructs:

 (*) spin locks
 (*) R/W spin locks
 (*) mutexes
 (*) semaphores
 (*) R/W semaphores
 (*) RCU

In all cases there are variants on "ACQUIRE" operations and "RELEASE"
operations for each construct.  These operations all imply certain barriers:

 (1) ACQUIRE operation implication:

     Memory operations issued after the ACQUIRE will be completed after the
     ACQUIRE operation has completed.

     Memory operations issued before the ACQUIRE may be completed after
     the ACQUIRE operation has completed.  An smp_mb__before_spinlock(),
     combined with a following ACQUIRE, orders prior loads against
     subsequent loads and stores and also orders prior stores against
     subsequent stores.  Note that this is weaker than smp_mb()!  The
     smp_mb__before_spinlock() primitive is free on many architectures
     (see the sketch after this list).

 (2) RELEASE operation implication:

     Memory operations issued before the RELEASE will be completed before the
     RELEASE operation has completed.

     Memory operations issued after the RELEASE may be completed before the
     RELEASE operation has completed.

 (3) ACQUIRE vs ACQUIRE implication:

     All ACQUIRE operations issued before another ACQUIRE operation will be
     completed before that ACQUIRE operation.

 (4) ACQUIRE vs RELEASE implication:

     All ACQUIRE operations issued before a RELEASE operation will be
     completed before the RELEASE operation.

 (5) Failed conditional ACQUIRE implication:

     Certain locking variants of the ACQUIRE operation may fail, either due to
     being unable to get the lock immediately, or due to receiving an unblocked
     signal whilst asleep waiting for the lock to become available.  Failed
     locks do not imply any sort of barrier.

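As a minimal sketch of the smp_mb__before_spinlock() usage mentioned in (1)
-- the lock name and the 'ready' flag are illustrative -- a store preceding
the ACQUIRE is ordered against stores inside the critical section:

	ready = 1;
	smp_mb__before_spinlock();	/* orders the store above against... */
	spin_lock(&mylock);		/* ...stores following the ACQUIRE */
	*B = b;
	spin_unlock(&mylock);
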
[!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
one-way barriers is that the effects of instructions outside of a critical
section may seep into the inside of the critical section.

An ACQUIRE followed by a RELEASE may not be assumed to be a full memory barrier
because it is possible for an access preceding the ACQUIRE to happen after the
ACQUIRE, and an access following the RELEASE to happen before the RELEASE, and
the two accesses can themselves then cross:

	*A = a;
	ACQUIRE M
	RELEASE M
	*B = b;

may occur as:

	ACQUIRE M, STORE *B, STORE *A, RELEASE M

When the ACQUIRE and RELEASE are a lock acquisition and release,
respectively, this same reordering can occur if the lock's ACQUIRE and
RELEASE are to the same lock variable, but only from the perspective of
another CPU not holding that lock.  In short, an ACQUIRE followed by a
RELEASE may -not- be assumed to be a full memory barrier.

Similarly, the reverse case of a RELEASE followed by an ACQUIRE does not
imply a full memory barrier.  If it is necessary for a RELEASE-ACQUIRE
pair to produce a full barrier, the ACQUIRE can be followed by an
smp_mb__after_unlock_lock() invocation.  This will produce a full barrier
if either (a) the RELEASE and the ACQUIRE are executed by the same
CPU or task, or (b) the RELEASE and ACQUIRE act on the same variable.
The smp_mb__after_unlock_lock() primitive is free on many architectures.
Without smp_mb__after_unlock_lock(), the CPU's execution of the critical
sections corresponding to the RELEASE and the ACQUIRE can cross, so that:

	*A = a;
	RELEASE M
	ACQUIRE N
	*B = b;

could occur as:

	ACQUIRE N, STORE *B, STORE *A, RELEASE M

It might appear that this reordering could introduce a deadlock.
However, this cannot happen because if such a deadlock threatened,
the RELEASE would simply complete, thereby avoiding the deadlock.

	Why does this work?

	One key point is that we are only talking about the CPU doing
	the reordering, not the compiler.  If the compiler (or, for
	that matter, the developer) switched the operations, deadlock
	-could- occur.

	But suppose the CPU reordered the operations.  In this case,
	the unlock precedes the lock in the assembly code.  The CPU
	simply elected to try executing the later lock operation first.
	If there is a deadlock, this lock operation will simply spin (or
	try to sleep, but more on that later).  The CPU will eventually
	execute the unlock operation (which preceded the lock operation
	in the assembly code), which will unravel the potential deadlock,
	allowing the lock operation to succeed.

	But what if the lock is a sleeplock?  In that case, the code will
	try to enter the scheduler, where it will eventually encounter
	a memory barrier, which will force the earlier unlock operation
	to complete, again unraveling the deadlock.  There might be
	a sleep-unlock race, but the locking primitive needs to resolve
	such races properly in any case.

With smp_mb__after_unlock_lock(), the two critical sections cannot overlap.
For example, with the following code, the store to *A will always be
seen by other CPUs before the store to *B:

	*A = a;
	RELEASE M
	ACQUIRE N
	smp_mb__after_unlock_lock();
	*B = b;

The operations will always occur in one of the following orders:

	STORE *A, RELEASE, ACQUIRE, smp_mb__after_unlock_lock(), STORE *B
	STORE *A, ACQUIRE, RELEASE, smp_mb__after_unlock_lock(), STORE *B
	ACQUIRE, STORE *A, RELEASE, smp_mb__after_unlock_lock(), STORE *B

If the RELEASE and ACQUIRE were instead both operating on the same lock
variable, only the first of these alternatives can occur.  In addition,
the more strongly ordered systems may rule out some of the above orders.
But in any case, as noted earlier, the smp_mb__after_unlock_lock()
ensures that the store to *A will always be seen as happening before
the store to *B.

Locks and semaphores may not provide any guarantee of ordering on UP compiled
systems, and so cannot be counted on in such a situation to actually achieve
anything at all - especially with respect to I/O accesses - unless combined
with interrupt disabling operations.

See also the section on "Inter-CPU locking barrier effects".


As an example, consider the following:

	*A = a;
	*B = b;
	ACQUIRE
	*C = c;
	*D = d;
	RELEASE
	*E = e;
	*F = f;

The following sequence of events is acceptable:

	ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE

	[+] Note that {*F,*A} indicates a combined access.

But none of the following are:

	{*F,*A}, *B, ACQUIRE, *C, *D, RELEASE, *E
	*A, *B, *C, ACQUIRE, *D, RELEASE, *E, *F
	*A, *B, ACQUIRE, *C, RELEASE, *D, *E, *F
	*B, ACQUIRE, *C, *D, RELEASE, {*F,*A}, *E


INTERRUPT DISABLING FUNCTIONS
-----------------------------

Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
(RELEASE equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided by some
other means.


SLEEP AND WAKE-UP FUNCTIONS
---------------------------

Sleeping and waking on an event flagged in global data can be viewed as an
interaction between two pieces of data: the task state of the task waiting for
the event and the global data used to indicate the event.  To make sure that
these appear to happen in the right order, the primitives to begin the process
of going to sleep, and the primitives to initiate a wake up imply certain
barriers.

Firstly, the sleeper normally follows something like this sequence of events:

	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (event_indicated)
			break;
		schedule();
	}

A general memory barrier is interpolated automatically by set_current_state()
after it has altered the task state:

	CPU 1
	===============================
	set_current_state();
	  set_mb();
	    STORE current->state
	    <general barrier>
	LOAD event_indicated

set_current_state() may be wrapped by:

	prepare_to_wait();
	prepare_to_wait_exclusive();

which therefore also imply a general memory barrier after setting the state.
The whole sequence above is available in various canned forms, all of which
interpolate the memory barrier in the right place:

	wait_event();
	wait_event_interruptible();
	wait_event_interruptible_exclusive();
	wait_event_interruptible_timeout();
	wait_event_killable();
	wait_event_timeout();
	wait_on_bit();
	wait_on_bit_lock();

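For example, using one of the canned forms, the hand-written loop above and a
matching waker reduce to the following sketch (the wait queue name is
illustrative):

	/* sleeper */
	wait_event(event_wait_queue, event_indicated);

	/* waker */
	event_indicated = 1;
	wake_up(&event_wait_queue);
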
Secondly, code that performs a wake up normally follows something like this:

	event_indicated = 1;
	wake_up(&event_wait_queue);

or:

	event_indicated = 1;
	wake_up_process(event_daemon);

A write memory barrier is implied by wake_up() and co. if and only if they wake
something up.  The barrier occurs before the task state is cleared, and so sits
between the STORE to indicate the event and the STORE to set TASK_RUNNING:

	CPU 1				CPU 2
	=============================== ===============================
	set_current_state();		STORE event_indicated
	  set_mb();			wake_up();
	    STORE current->state	  <write barrier>
	    <general barrier>		  STORE current->state
	LOAD event_indicated

To repeat, this write memory barrier is present if and only if something
is actually awakened.  To see this, consider the following sequence of
events, where X and Y are both initially zero:

	CPU 1				CPU 2
	=============================== ===============================
	X = 1;				STORE event_indicated
	smp_mb();			wake_up();
	Y = 1;				wait_event(wq, Y == 1);
	wake_up();			  load from Y sees 1, no memory barrier
					load from X might see 0

In contrast, if a wakeup does occur, CPU 2's load from X would be guaranteed
to see 1.

The available waker functions include:

	complete();
	wake_up();
	wake_up_all();
	wake_up_bit();
	wake_up_interruptible();
	wake_up_interruptible_all();
	wake_up_interruptible_nr();
	wake_up_interruptible_poll();
	wake_up_interruptible_sync();
	wake_up_interruptible_sync_poll();
	wake_up_locked();
	wake_up_locked_poll();
	wake_up_nr();
	wake_up_poll();
	wake_up_process();


[!] Note that the memory barriers implied by the sleeper and the waker do _not_
order multiple stores before the wake-up with respect to loads of those stored
values after the sleeper has called set_current_state().  For instance, if the
sleeper does:

	set_current_state(TASK_INTERRUPTIBLE);
	if (event_indicated)
		break;
	__set_current_state(TASK_RUNNING);
	do_something(my_data);

and the waker does:

	my_data = value;
	event_indicated = 1;
	wake_up(&event_wait_queue);

there's no guarantee that the change to event_indicated will be perceived by
the sleeper as coming after the change to my_data.  In such a circumstance, the
code on both sides must interpolate its own memory barriers between the
separate data accesses.  Thus the above sleeper ought to do:

	set_current_state(TASK_INTERRUPTIBLE);
	if (event_indicated) {
		smp_rmb();
		do_something(my_data);
	}

and the waker should do:

	my_data = value;
	smp_wmb();
	event_indicated = 1;
	wake_up(&event_wait_queue);


MISCELLANEOUS FUNCTIONS
-----------------------

Other functions that imply barriers:

 (*) schedule() and similar imply full memory barriers.


===================================
INTER-CPU ACQUIRING BARRIER EFFECTS
===================================

On SMP systems locking primitives give a more substantial form of barrier: one
that does affect memory access ordering on other CPUs, within the context of
conflict on any particular lock.


ACQUIRES VS MEMORY ACCESSES
---------------------------

Consider the following: the system has a pair of spinlocks (M) and (Q), and
three CPUs; then should the following sequence of events occur:

	CPU 1				CPU 2
	=============================== ===============================
	ACCESS_ONCE(*A) = a;		ACCESS_ONCE(*E) = e;
	ACQUIRE M			ACQUIRE Q
	ACCESS_ONCE(*B) = b;		ACCESS_ONCE(*F) = f;
	ACCESS_ONCE(*C) = c;		ACCESS_ONCE(*G) = g;
	RELEASE M			RELEASE Q
	ACCESS_ONCE(*D) = d;		ACCESS_ONCE(*H) = h;

Then there is no guarantee as to what order CPU 3 will see the accesses to *A
through *H occur in, other than the constraints imposed by the separate locks
on the separate CPUs.  It might, for example, see:

	*E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M

But it won't see any of:

	*B, *C or *D preceding ACQUIRE M
	*A, *B or *C following RELEASE M
	*F, *G or *H preceding ACQUIRE Q
	*E, *F or *G following RELEASE Q


However, if the following occurs:

	CPU 1				CPU 2
	=============================== ===============================
	ACCESS_ONCE(*A) = a;
	ACQUIRE M		     [1]
	ACCESS_ONCE(*B) = b;
	ACCESS_ONCE(*C) = c;
	RELEASE M		     [1]
	ACCESS_ONCE(*D) = d;		ACCESS_ONCE(*E) = e;
					ACQUIRE M		     [2]
					smp_mb__after_unlock_lock();
					ACCESS_ONCE(*F) = f;
					ACCESS_ONCE(*G) = g;
					RELEASE M		     [2]
					ACCESS_ONCE(*H) = h;

CPU 3 might see:

	*E, ACQUIRE M [1], *C, *B, *A, RELEASE M [1],
		ACQUIRE M [2], *H, *F, *G, RELEASE M [2], *D

But assuming CPU 1 gets the lock first, CPU 3 won't see any of:

	*B, *C, *D, *F, *G or *H preceding ACQUIRE M [1]
	*A, *B or *C following RELEASE M [1]
	*F, *G or *H preceding ACQUIRE M [2]
	*A, *B, *C, *E, *F or *G following RELEASE M [2]

Note that the smp_mb__after_unlock_lock() is critically important
here: Without it CPU 3 might see some of the above orderings.
Without smp_mb__after_unlock_lock(), the accesses are not guaranteed
to be seen in order unless CPU 3 holds lock M.


ACQUIRES VS I/O ACCESSES
------------------------

Under certain circumstances (especially involving NUMA), I/O accesses within
two spinlocked sections on two different CPUs may be seen as interleaved by the
PCI bridge, because the PCI bridge does not necessarily participate in the
cache-coherence protocol, and is therefore incapable of issuing the required
read memory barriers.

For example:

	CPU 1				CPU 2
	=============================== ===============================
	spin_lock(Q);
	writel(0, ADDR);
	writel(1, DATA);
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					writel(5, DATA);
					spin_unlock(Q);

may be seen by the PCI bridge as follows:

	STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5

which would probably cause the hardware to malfunction.


What is necessary here is to intervene with an mmiowb() before dropping the
spinlock, for example:

	CPU 1				CPU 2
	=============================== ===============================
	spin_lock(Q);
	writel(0, ADDR);
	writel(1, DATA);
	mmiowb();
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					writel(5, DATA);
					mmiowb();
					spin_unlock(Q);

This will ensure that the two stores issued on CPU 1 appear at the PCI bridge
before either of the stores issued on CPU 2.


Furthermore, following a store by a load from the same device obviates the need
for the mmiowb(), because the load forces the store to complete before the load
is performed:

	CPU 1				CPU 2
	=============================== ===============================
	spin_lock(Q);
	writel(0, ADDR);
	a = readl(DATA);
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					b = readl(DATA);
					spin_unlock(Q);


See Documentation/DocBook/deviceiobook.tmpl for more information.


=================================
WHERE ARE MEMORY BARRIERS NEEDED?
=================================

Under normal operation, memory operation reordering is generally not going to
be a problem as a single-threaded linear piece of code will still appear to
work correctly, even if it's in an SMP kernel.  There are, however, four
circumstances in which reordering definitely _could_ be a problem:

 (*) Interprocessor interaction.

 (*) Atomic operations.

 (*) Accessing devices.

 (*) Interrupts.


INTERPROCESSOR INTERACTION
--------------------------

When there's a system with more than one processor, more than one CPU in the
system may be working on the same data set at the same time.  This can cause
synchronisation problems, and the usual way of dealing with them is to use
locks.  Locks, however, are quite expensive, and so it may be preferable to
operate without the use of a lock if at all possible.  In such a case
operations that affect both CPUs may have to be carefully ordered to prevent
a malfunction.

Consider, for example, the R/W semaphore slow path.  Here a waiting process is
queued on the semaphore, by virtue of it having a piece of its stack linked to
the semaphore's list of waiting processes:

	struct rw_semaphore {
		...
		spinlock_t lock;
		struct list_head waiters;
	};

	struct rwsem_waiter {
		struct list_head list;
		struct task_struct *task;
	};

To wake up a particular waiter, the up_read() or up_write() functions have to:

 (1) read the next pointer from this waiter's record to know where the
     next waiter record is;

 (2) read the pointer to the waiter's task structure;

 (3) clear the task pointer to tell the waiter it has been given the semaphore;

 (4) call wake_up_process() on the task; and

 (5) release the reference held on the waiter's task struct.

In other words, it has to perform this sequence of events:

	LOAD waiter->list.next;
	LOAD waiter->task;
	STORE waiter->task;
	CALL wakeup
	RELEASE task

and if any of these steps occur out of order, then the whole thing may
malfunction.

Once it has queued itself and dropped the semaphore lock, the waiter does not
get the lock again; it instead just waits for its task pointer to be cleared
before proceeding.  Since the record is on the waiter's stack, this means that
if the task pointer is cleared _before_ the next pointer in the list is read,
another CPU might start processing the waiter and might clobber the waiter's
stack before the up*() function has a chance to read the next pointer.

Consider then what might happen to the above sequence of events:

	CPU 1				CPU 2
	=============================== ===============================
					down_xxx()
					Queue waiter
					Sleep
	up_yyy()
	LOAD waiter->task;
	STORE waiter->task;
					Woken up by other event
	<preempt>
					Resume processing
					down_xxx() returns
					call foo()
					foo() clobbers *waiter
	</preempt>
	LOAD waiter->list.next;
	--- OOPS ---

This could be dealt with using the semaphore lock, but then the down_xxx()
function has to needlessly get the spinlock again after being woken up.

The way to deal with this is to insert a general SMP memory barrier:

	LOAD waiter->list.next;
	LOAD waiter->task;
	smp_mb();
	STORE waiter->task;
	CALL wakeup
	RELEASE task

In this case, the barrier makes a guarantee that all memory accesses before the
barrier will appear to happen before all the memory accesses after the barrier
with respect to the other CPUs on the system.  It does _not_ guarantee that all
the memory accesses before the barrier will be complete by the time the barrier
instruction itself is complete.

On a UP system - where this wouldn't be a problem - the smp_mb() is just a
compiler barrier, thus making sure the compiler emits the instructions in the
right order without actually intervening in the CPU.  Since there's only one
CPU, that CPU's dependency ordering logic will take care of everything else.


ATOMIC OPERATIONS
-----------------

Whilst they are technically interprocessor interaction considerations, atomic
operations are noted specially as some of them imply full memory barriers and
some don't, but they're very heavily relied on as a group throughout the
kernel.

Any atomic operation that modifies some state in memory and returns information
about the state (old or new) implies an SMP-conditional general memory barrier
(smp_mb()) on each side of the actual operation (with the exception of
explicit lock operations, described later).  These include:

	xchg();
	cmpxchg();
	atomic_xchg();			atomic_long_xchg();
	atomic_cmpxchg();		atomic_long_cmpxchg();
	atomic_inc_return();		atomic_long_inc_return();
	atomic_dec_return();		atomic_long_dec_return();
	atomic_add_return();		atomic_long_add_return();
	atomic_sub_return();		atomic_long_sub_return();
	atomic_inc_and_test();		atomic_long_inc_and_test();
	atomic_dec_and_test();		atomic_long_dec_and_test();
	atomic_sub_and_test();		atomic_long_sub_and_test();
	atomic_add_negative();		atomic_long_add_negative();
	test_and_set_bit();
	test_and_clear_bit();
	test_and_change_bit();

	/* when succeeds (returns 1) */
	atomic_add_unless();		atomic_long_add_unless();

These are used for such things as implementing ACQUIRE-class and RELEASE-class
operations and adjusting reference counters towards object destruction, and as
such the implicit memory barrier effects are necessary.

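For example, an object-destruction path can rely on the barriers implied by
atomic_dec_and_test() -- a sketch, with a hypothetical free_object() helper:

	obj->dead = 1;			/* ordered before the decrement by the
					 * smp_mb() implied on each side of... */
	if (atomic_dec_and_test(&obj->ref_count))
		free_object(obj);	/* ...atomic_dec_and_test() */

Contrast this with the earlier atomic_dec() example, where an explicit
smp_mb__before_atomic() was needed because atomic_dec() returns no value and
thus implies no barrier.
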
The following operations are potential problems as they do _not_ imply memory
barriers, but might be used for implementing such things as RELEASE-class
operations:

	atomic_set();
	set_bit();
	clear_bit();
	change_bit();

With these the appropriate explicit memory barrier should be used if necessary
(smp_mb__before_atomic() for instance).


The following also do _not_ imply memory barriers, and so may require explicit
memory barriers under some circumstances (smp_mb__before_atomic() for
instance):

	atomic_add();
	atomic_sub();
	atomic_inc();
	atomic_dec();

If they're used for statistics generation, then they probably don't need memory
barriers, unless there's a coupling between statistical data.

If they're used for reference counting on an object to control its lifetime,
they probably don't need memory barriers because either the reference count
will be adjusted inside a locked section, or the caller will already hold
sufficient references to make the lock, and thus a memory barrier, unnecessary.

If they're used for constructing a lock of some description, then they probably
do need memory barriers as a lock primitive generally has to do things in a
specific order.

Basically, each usage case has to be carefully considered as to whether memory
barriers are needed or not.

The following operations are special locking primitives:

	test_and_set_bit_lock();
	clear_bit_unlock();
	__clear_bit_unlock();

These implement ACQUIRE-class and RELEASE-class operations.  These should be
used in preference to other operations when implementing locking primitives,
because their implementations can be optimised on many architectures.

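A minimal sketch of a bit spinlock built on these primitives (the LOCK_BIT
constant and the word being locked are illustrative):

	while (test_and_set_bit_lock(LOCK_BIT, &word))
		cpu_relax();		/* spin until the bit was clear */

	/* ... critical section ... */

	clear_bit_unlock(LOCK_BIT, &word);
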
[!] Note that special memory barrier primitives are available for these
situations because on some CPUs the atomic instructions used imply full memory
barriers, and so barrier instructions are superfluous in conjunction with them,
and in such cases the special barrier primitives will be no-ops.

See Documentation/atomic_ops.txt for more information.


ACCESSING DEVICES
-----------------

Many devices can be memory mapped, and so appear to the CPU as if they're just
a set of memory locations.  To control such a device, the driver usually has to
make the right memory accesses in exactly the right order.

However, having a clever CPU or a clever compiler creates a potential problem
in that the carefully sequenced accesses in the driver code won't reach the
device in the requisite order if the CPU or the compiler thinks it is more
efficient to reorder, combine or merge accesses - something that would cause
the device to malfunction.

Inside of the Linux kernel, I/O should be done through the appropriate accessor
routines - such as inb() or writel() - which know how to make such accesses
appropriately sequential.  Whilst this, for the most part, renders the explicit
use of memory barriers unnecessary, there are a couple of situations where they
might be needed:

 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and
     so for _all_ general drivers locks should be used and mmiowb() must be
     issued prior to unlocking the critical section.

 (2) If the accessor functions are used to refer to an I/O memory window with
     relaxed memory access properties, then _mandatory_ memory barriers are
     required to enforce ordering.

See Documentation/DocBook/deviceiobook.tmpl for more information.


INTERRUPTS
----------

A driver may be interrupted by its own interrupt service routine, and thus the
two parts of the driver may interfere with each other's attempts to control or
access the device.

This may be alleviated - at least in part - by disabling local interrupts (a
form of locking), such that the critical operations are all contained within
the interrupt-disabled section in the driver.  Whilst the driver's interrupt
routine is executing, the driver's core may not run on the same CPU, and its
interrupt is not permitted to happen again until the current interrupt has been
handled, thus the interrupt handler does not need to lock against that.

However, consider a driver that was talking to an ethernet card that sports an
address register and a data register.  If that driver's core talks to the card
under interrupt-disablement and then the driver's interrupt handler is invoked:

	LOCAL IRQ DISABLE
	writew(3, ADDR);
	writew(y, DATA);
	LOCAL IRQ ENABLE
	<interrupt>
	writew(4, ADDR);
	q = readw(DATA);
	</interrupt>

The store to the data register might happen after the second store to the
address register if ordering rules are sufficiently relaxed:

	STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA


If ordering rules are relaxed, it must be assumed that accesses done inside an
interrupt disabled section may leak outside of it and may interleave with
accesses performed in an interrupt - and vice versa - unless implicit or
explicit barriers are used.

Normally this won't be a problem because the I/O accesses done inside such
sections will include synchronous load operations on strictly ordered I/O
registers that form implicit I/O barriers.  If this isn't sufficient then an
mmiowb() may need to be used explicitly.


A similar situation may occur between an interrupt routine and two routines
running on separate CPUs that communicate with each other.  If such a case is
likely, then interrupt-disabling locks should be used to guarantee ordering.


2533 | ========================== | |
2534 | KERNEL I/O BARRIER EFFECTS | |
2535 | ========================== | |
2536 | ||
2537 | When accessing I/O memory, drivers should use the appropriate accessor | |
2538 | functions: | |
2539 | ||
2540 | (*) inX(), outX(): | |
2541 | ||
2542 | These are intended to talk to I/O space rather than memory space, but | |
2543 | that's primarily a CPU-specific concept. The i386 and x86_64 processors do | |
2544 | indeed have special I/O space access cycles and instructions, but many | |
2545 | CPUs don't have such a concept. | |
2546 | ||
81fc6323 JP |
2547 | The PCI bus, amongst others, defines an I/O space concept which - on such |
2548 | CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O | |
6bc39274 DH |
2549 | space. However, it may also be mapped as a virtual I/O space in the CPU's |
2550 | memory map, particularly on those CPUs that don't support alternate I/O | |
2551 | spaces. | |
108b42b4 DH |
2552 | |
2553 | Accesses to this space may be fully synchronous (as on i386), but | |
2554 | intermediary bridges (such as the PCI host bridge) may not fully honour | |
2555 | that. | |
2556 | ||
2557 | They are guaranteed to be fully ordered with respect to each other. | |
2558 | ||
2559 | They are not guaranteed to be fully ordered with respect to other types of | |
2560 | memory and I/O operation. | |
2561 | ||
2562 | (*) readX(), writeX(): | |
2563 | ||
2564 | Whether these are guaranteed to be fully ordered and uncombined with | |
2565 | respect to each other on the issuing CPU depends on the characteristics | |
2566 | defined for the memory window through which they're accessing. On later | |
2567 | i386 architecture machines, for example, this is controlled by way of the | |
2568 | MTRR registers. | |
2569 | ||
81fc6323 | 2570 | Ordinarily, these will be guaranteed to be fully ordered and uncombined, |
108b42b4 DH |
2571 | provided they're not accessing a prefetchable device. |
2572 | ||
2573 | However, intermediary hardware (such as a PCI bridge) may indulge in | |
2574 | deferral if it so wishes; to flush a store, a load from the same location | |
2575 | is preferred[*], but a load from the same device or from configuration | |
2576 | space should suffice for PCI. | |
2577 | ||
2578 | [*] NOTE! attempting to load from the same location as was written to may | |
e0edc78f IM |
2579 | cause a malfunction - consider the 16550 Rx/Tx serial registers for |
2580 | example. | |

     Used with prefetchable I/O memory, an mmiowb() barrier may be required to
     force stores to be ordered.

     Please refer to the PCI specification for more information on
     interactions between PCI transactions.
 (*) readX_relaxed(), writeX_relaxed()

     These are similar to readX() and writeX(), but provide weaker memory
     ordering guarantees.  Specifically, they do not guarantee ordering with
     respect to normal memory accesses (e.g. DMA buffers) nor do they
     guarantee ordering with respect to LOCK or UNLOCK operations.  If the
     latter is required, an mmiowb() barrier can be used.  Note that relaxed
     accesses to the same peripheral are guaranteed to be ordered with respect
     to each other.
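
     For example, a brief sketch (the lock and the register offset are
     hypothetical) of using mmiowb() so that MMIO stores made under a spinlock
     reach the device before MMIO stores made by the next CPU to take the
     lock:

        spin_lock(&my_dev_lock);
        writel(val, dev->regs + MY_DEV_DATA);
        mmiowb();               /* order the MMIO store before the unlock */
        spin_unlock(&my_dev_lock);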
 (*) ioreadX(), iowriteX()

     These will perform appropriately for the type of access they're actually
     doing, be it inX()/outX() or readX()/writeX().


========================================
ASSUMED MINIMUM EXECUTION ORDERING MODEL
========================================

It has to be assumed that the conceptual CPU is weakly-ordered but that it
will maintain the appearance of program causality with respect to itself.
Some CPUs (such as i386 or x86_64) are more constrained than others (such as
powerpc or frv), and so the most relaxed case (namely DEC Alpha) must be
assumed outside of arch-specific code.

This means that it must be considered that the CPU will execute its
instruction stream in any order it feels like - or even in parallel - provided
that if an instruction in the stream depends on an earlier instruction, then
that earlier instruction must be sufficiently complete[*] before the later
instruction may proceed; in other words: provided that the appearance of
causality is maintained.

 [*] Some instructions have more than one effect - such as changing the
     condition codes, changing registers or changing memory - and different
     instructions may depend on different effects.

A CPU may also discard any instruction sequence that winds up having no
ultimate effect.  For example, if two adjacent instructions both load an
immediate value into the same register, the first may be discarded.


Similarly, it has to be assumed that the compiler might reorder the
instruction stream in any way it sees fit, again provided the appearance of
causality is maintained.
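
For example, a minimal sketch of using the kernel's compiler barrier to
prevent such reordering (data and flag are hypothetical shared variables;
note that barrier() constrains only the compiler, not the CPU):

        int data, flag;         /* hypothetical shared variables */

        void setup(void)
        {
                data = 42;
                barrier();      /* stop the compiler hoisting the flag
                                 * store above the data store */
                flag = 1;
        }

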
============================
THE EFFECTS OF THE CPU CACHE
============================

The way cached memory operations are perceived across the system is affected
to a certain extent by the caches that lie between CPUs and memory, and by the
memory coherence system that maintains the consistency of state in the system.

As far as the way a CPU interacts with another part of the system through the
caches goes, the memory system has to include the CPU's caches, and memory
barriers for the most part act at the interface between the CPU and its cache
(memory barriers logically act on the dotted line in the following diagram):

            <--- CPU --->         :      <----------- Memory ----------->
                                  :
        +--------+    +--------+  :   +--------+    +-----------+
        |        |    |        |  :   |        |    |           |    +--------+
        |  CPU   |    | Memory |  :   | CPU    |    |           |    |        |
        |  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
        |        |    | Queue  |  :   |        |    |           |--->| Memory |
        |        |    |        |  :   |        |    |           |    |        |
        +--------+    +--------+  :   +--------+    |           |    |        |
                                  :                 | Cache     |    +--------+
                                  :                 | Coherency |
                                  :                 | Mechanism |    +--------+
        +--------+    +--------+  :   +--------+    |           |    |        |
        |        |    |        |  :   |        |    |           |    |        |
        |  CPU   |    | Memory |  :   | CPU    |    |           |--->| Device |
        |  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
        |        |    | Queue  |  :   |        |    |           |    |        |
        +--------+    +--------+  :   +--------+    +-----------+    +--------+
                                  :
                                  :

Although any particular load or store may not actually appear outside of the
CPU that issued it, since it may have been satisfied within the CPU's own
cache, it will still appear as if the full memory access had taken place as
far as the other CPUs are concerned, since the cache coherency mechanisms will
migrate the cacheline over to the accessing CPU and propagate the effects upon
conflict.

The CPU core may execute instructions in any order it deems fit, provided the
expected program causality appears to be maintained.  Some of the instructions
generate load and store operations which then go into the queue of memory
accesses to be performed.  The core may place these in the queue in any order
it wishes, and continue execution until it is forced to wait for an
instruction to complete.

What memory barriers are concerned with is controlling the order in which
accesses cross from the CPU side of things to the memory side of things, and
the order in which the effects are perceived to happen by the other observers
in the system.

[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
their own loads and stores as if they had happened in program order.

[!] MMIO or other device accesses may bypass the cache system.  This depends
on the properties of the memory window through which devices are accessed
and/or the use of any special device communication instructions the CPU may
have.


CACHE COHERENCY
---------------

Life isn't quite as simple as it may appear above, however: for while the
caches are expected to be coherent, there's no guarantee that that coherency
will be ordered.  This means that whilst changes made on one CPU will
eventually become visible on all CPUs, there's no guarantee that they will
become apparent in the same order on those other CPUs.

Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):

                    :
                    :                          +--------+
                    :      +---------+         |        |
        +--------+  : +--->| Cache A |<------->|        |
        |        |  : |    +---------+         |        |
        |  CPU 1 |<---+                        |        |
        |        |  : |    +---------+         |        |
        +--------+  : +--->| Cache B |<------->|        |
                    :      +---------+         |        |
                    :                          | Memory |
                    :      +---------+         | System |
        +--------+  : +--->| Cache C |<------->|        |
        |        |  : |    +---------+         |        |
        |  CPU 2 |<---+                        |        |
        |        |  : |    +---------+         |        |
        +--------+  : +--->| Cache D |<------->|        |
                    :      +---------+         |        |
                    :                          +--------+
                    :

Imagine the system has the following properties:

 (*) an odd-numbered cache line may be in cache A, cache C or it may still be
     resident in memory;

 (*) an even-numbered cache line may be in cache B, cache D or it may still be
     resident in memory;

 (*) whilst the CPU core is interrogating one cache, the other cache may be
     making use of the bus to access the rest of the system - perhaps to
     displace a dirty cacheline or to do a speculative load;

 (*) each cache has a queue of operations that need to be applied to that
     cache to maintain coherency with the rest of the system;

 (*) the coherency queue is not flushed by normal loads to lines already
     present in the cache, even though the contents of the queue may
     potentially affect those loads.

Imagine, then, that two writes are made on the first CPU, with a write barrier
between them to guarantee that they will appear to reach that CPU's caches in
the requisite order:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
                                        u == 0, v == 1 and p == &u, q == &u
        v = 2;
        smp_wmb();                      Make sure change to v is visible before
                                         change to p
        <A:modify v=2>                  v is now in cache A exclusively
        p = &v;
        <B:modify p=&v>                 p is now in cache B exclusively

The write memory barrier forces the other CPUs in the system to perceive that
the local CPU's caches have apparently been updated in the correct order.  But
now imagine that the second CPU wants to read those values:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
        ...
                        q = p;
                        x = *q;

The above pair of reads may then fail to happen in the expected order, as the
cacheline holding p may get updated in one of the second CPU's caches whilst
the update to the cacheline holding v is delayed in the other of the second
CPU's caches by some other cache event:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
                                        u == 0, v == 1 and p == &u, q == &u
        v = 2;
        smp_wmb();
        <A:modify v=2>  <C:busy>
                        <C:queue v=2>
        p = &v;         q = p;
                        <D:request p>
        <B:modify p=&v> <D:commit p=&v>
                        <D:read p>
                        x = *q;
                        <C:read *q>     Reads from v before v updated in cache
                        <C:unbusy>
                        <C:commit v=2>

Basically, whilst both cachelines will be updated on CPU 2 eventually, there's
no guarantee that, without intervention, the order of update will be the same
as that committed on CPU 1.


To intervene, we need to interpolate a data dependency barrier or a read
barrier between the loads.  This will force the cache to commit its coherency
queue before processing any further requests:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
                                        u == 0, v == 1 and p == &u, q == &u
        v = 2;
        smp_wmb();
        <A:modify v=2>  <C:busy>
                        <C:queue v=2>
        p = &v;         q = p;
                        <D:request p>
        <B:modify p=&v> <D:commit p=&v>
                        <D:read p>
                        smp_read_barrier_depends()
                        <C:unbusy>
                        <C:commit v=2>
                        x = *q;
                        <C:read *q>     Reads from v after v updated in cache

2820 | This sort of problem can be encountered on DEC Alpha processors as they have a | |
2821 | split cache that improves performance by making better use of the data bus. | |
2822 | Whilst most CPUs do imply a data dependency barrier on the read when a memory | |
2823 | access depends on a read, not all do, so it may not be relied on. | |
2824 | ||
2825 | Other CPUs may also have split caches, but must coordinate between the various | |
3f6dee9b | 2826 | cachelets for normal memory accesses. The semantics of the Alpha removes the |
81fc6323 | 2827 | need for coordination in the absence of memory barriers. |
108b42b4 DH |
2828 | |
2829 | ||
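
In kernel code, the intervention above is the usual pointer-publication
pattern.  A minimal sketch, using the barriers discussed in this document (the
structure, the globals and the consumer function are hypothetical):

        struct foo { int a; };
        static struct foo my_data;
        static struct foo *global_p;

        void writer(void)                       /* runs on CPU 1 */
        {
                my_data.a = 42;
                smp_wmb();                         /* commit the data...    */
                ACCESS_ONCE(global_p) = &my_data;  /* ...before the pointer */
        }

        void reader(void)                       /* runs on CPU 2 */
        {
                struct foo *p = ACCESS_ONCE(global_p);

                if (p) {
                        smp_read_barrier_depends();  /* needed on Alpha */
                        do_something(p->a);          /* hypothetical consumer */
                }
        }

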
CACHE COHERENCY VS DMA
----------------------

Not all systems maintain cache coherency with respect to devices doing DMA.
In such cases, a device attempting DMA may obtain stale data from RAM because
dirty cache lines may be resident in the caches of various CPUs, and may not
have been written back to RAM yet.  To deal with this, the appropriate part of
the kernel must flush the overlapping bits of cache on each CPU (and maybe
invalidate them as well).

In addition, the data DMA'd to RAM by a device may be overwritten by dirty
cache lines being written back to RAM from a CPU's cache after the device has
installed its own data, or cache lines present in the CPU's cache may simply
obscure the fact that RAM has been updated, until such time as the cacheline
is discarded from the CPU's cache and reloaded.  To deal with this, the
appropriate part of the kernel must invalidate the overlapping bits of the
cache on each CPU.
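
On non-coherent platforms, the streaming DMA mapping API performs the
necessary flushes and invalidations internally; a brief sketch (dev, buf and
len are hypothetical):

        /* CPU -> device: mapping flushes dirty cachelines covering buf */
        dma_addr_t handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
        /* ... point the device at handle and start the transfer ... */
        dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);

        /* device -> CPU: cachelines covering buf are invalidated so that
         * the CPU does not see stale pre-DMA contents */
        handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
        /* ... wait for the device to finish writing to handle ... */
        dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);
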
See Documentation/cachetlb.txt for more information on cache management.


CACHE COHERENCY VS MMIO
-----------------------

Memory mapped I/O usually takes place through memory locations that are part
of a window in the CPU's memory space that has different properties assigned
than the usual RAM-directed window.

Amongst these properties is usually the fact that such accesses bypass the
caching entirely and go directly to the device buses.  This means MMIO
accesses may, in effect, overtake accesses to cached memory that were emitted
earlier.  A memory barrier isn't sufficient in such a case, but rather the
cache must be flushed between the cached memory write and the MMIO access if
the two are in any way dependent.


=========================
THE THINGS CPUS GET UP TO
=========================

A programmer might take it for granted that the CPU will perform memory
operations in exactly the order specified, so that if the CPU is, for example,
given the following piece of code to execute:

        a = ACCESS_ONCE(*A);
        ACCESS_ONCE(*B) = b;
        c = ACCESS_ONCE(*C);
        d = ACCESS_ONCE(*D);
        ACCESS_ONCE(*E) = e;

they would then expect that the CPU will complete the memory operation for
each instruction before moving on to the next one, leading to a definite
sequence of operations as seen by external observers in the system:

        LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.


Reality is, of course, much messier.  With many CPUs and compilers, the above
assumption doesn't hold because:

 (*) loads are more likely to need to be completed immediately to permit
     execution progress, whereas stores can often be deferred without a
     problem;

 (*) loads may be done speculatively, and the result discarded should it prove
     to have been unnecessary;

 (*) loads may be done speculatively, leading to the result having been
     fetched at the wrong time in the expected sequence of events;

 (*) the order of the memory accesses may be rearranged to promote better use
     of the CPU buses and caches;

 (*) loads and stores may be combined to improve performance when talking to
     memory or I/O hardware that can do batched accesses of adjacent
     locations, thus cutting down on transaction setup costs (memory and PCI
     devices may both be able to do this); and

 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
     mechanisms may alleviate this - once the store has actually hit the cache
     - there's no guarantee that the coherency management will be propagated
     in order to other CPUs.

So what another CPU, say, might actually observe from the above piece of code
is:

        LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B

        (Where "LOAD {*C,*D}" is a combined load)


However, it is guaranteed that a CPU will be self-consistent: it will see its
_own_ accesses appear to be correctly ordered, without the need for a memory
barrier.  For instance with the following code:

        U = ACCESS_ONCE(*A);
        ACCESS_ONCE(*A) = V;
        ACCESS_ONCE(*A) = W;
        X = ACCESS_ONCE(*A);
        ACCESS_ONCE(*A) = Y;
        Z = ACCESS_ONCE(*A);

and assuming no intervention by an external influence, it can be assumed that
the final result will appear to be:

        U == the original value of *A
        X == W
        Z == Y
        *A == Y

The code above may cause the CPU to generate the full sequence of memory
accesses:

        U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A

in that order, but, without intervention, the sequence may have almost any
combination of elements combined or discarded, provided the program's view of
the world remains consistent.  Note that ACCESS_ONCE() is -not- optional
in the above example, as there are architectures where a given CPU might
reorder successive loads to the same location.  On such architectures,
ACCESS_ONCE() does whatever is necessary to prevent this; for example, on
Itanium the volatile casts used by ACCESS_ONCE() cause GCC to emit the
special ld.acq and st.rel instructions that prevent such reordering.

The compiler may also combine, discard or defer elements of the sequence
before the CPU even sees them.

For instance:

        *A = V;
        *A = W;

may be reduced to:

        *A = W;

since, without either a write barrier or an ACCESS_ONCE(), it can be
assumed that the effect of the storage of V to *A is lost.  Similarly:

        *A = Y;
        Z = *A;

may, without a memory barrier or an ACCESS_ONCE(), be reduced to:

        *A = Y;
        Z = Y;

and the LOAD operation never appears outside of the CPU.


AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to
have two semantically-related cache lines updated at separate times.  This is
where the data dependency barrier really becomes necessary as this
synchronises both caches with the memory coherence system, thus making it seem
like pointer changes vs new data occur in the right order.

The Alpha defines the Linux kernel's memory barrier model.

See the subsection on "Cache Coherency" above.


============
EXAMPLE USES
============

CIRCULAR BUFFERS
----------------

Memory barriers can be used to implement circular buffering without the need
of a lock to serialise the producer with the consumer.  See:

        Documentation/circular-buffers.txt

for details.
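
As a taste of the technique, a heavily simplified single-producer,
single-consumer sketch (the names are hypothetical and the buffer size is
assumed to be a power of two; the document above gives the canonical version):

        #define BUF_SIZE 256            /* must be a power of two */

        static unsigned long head, tail;
        static int buf[BUF_SIZE];

        bool produce(int item)                  /* one producer only */
        {
                if (head - ACCESS_ONCE(tail) >= BUF_SIZE)
                        return false;           /* buffer full */
                buf[head & (BUF_SIZE - 1)] = item;
                smp_wmb();      /* commit the item before publishing head */
                ACCESS_ONCE(head) = head + 1;
                return true;
        }

        bool consume(int *item)                 /* one consumer only */
        {
                if (tail == ACCESS_ONCE(head))
                        return false;           /* buffer empty */
                smp_rmb();      /* read the index before reading the item */
                *item = buf[tail & (BUF_SIZE - 1)];
                smp_mb();       /* finish with the item before freeing slot */
                ACCESS_ONCE(tail) = tail + 1;
                return true;
        }
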

==========
REFERENCES
==========

Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
Digital Press)
        Chapter 5.2: Physical Address Space Characteristics
        Chapter 5.4: Caches and Write Buffers
        Chapter 5.5: Data Sharing
        Chapter 5.6: Read/Write Ordering

AMD64 Architecture Programmer's Manual Volume 2: System Programming
        Chapter 7.1: Memory-Access Ordering
        Chapter 7.4: Buffering and Combining Memory Writes

IA-32 Intel Architecture Software Developer's Manual, Volume 3:
System Programming Guide
        Chapter 7.1: Locked Atomic Operations
        Chapter 7.2: Memory Ordering
        Chapter 7.4: Serializing Instructions

The SPARC Architecture Manual, Version 9
        Chapter 8: Memory Models
        Appendix D: Formal Specification of the Memory Models
        Appendix J: Programming with the Memory Models

UltraSPARC Programmer Reference Manual
        Chapter 5: Memory Accesses and Cacheability
        Chapter 15: Sparc-V9 Memory Models

UltraSPARC III Cu User's Manual
        Chapter 9: Memory Models

UltraSPARC IIIi Processor User's Manual
        Chapter 8: Memory Models

UltraSPARC Architecture 2005
        Chapter 9: Memory
        Appendix D: Formal Specifications of the Memory Models

UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
        Chapter 8: Memory Models
        Appendix F: Caches and Cache Coherency

Solaris Internals, Core Kernel Architecture, p63-68:
        Chapter 3.3: Hardware Considerations for Locks and
                        Synchronization

Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
for Kernel Programmers:
        Chapter 13: Other Memory Models

Intel Itanium Architecture Software Developer's Manual: Volume 1:
        Section 2.6: Speculation
        Section 4.4: Memory Access