CPUs perform independent memory operations effectively in random order,
but this can be a problem for CPU-CPU interaction (including interactions
between QEMU and the guest). Multi-threaded programs use various tools
to instruct the compiler and the CPU to restrict the order to something
that is consistent with the expectations of the programmer.

The most basic tool is locking. Mutexes, condition variables and
semaphores are used in QEMU, and should be the default approach to
synchronization. Anything else is considerably harder, but it's
also justified more often than one would like. The two tools that
are provided by qemu/atomic.h are memory barriers and atomic operations.

Macros defined by qemu/atomic.h fall in three camps:

- compiler barriers: barrier();

- weak atomic access and manual memory barriers: atomic_read(),
  atomic_set(), smp_rmb(), smp_wmb(), smp_mb(), smp_mb_acquire(),
  smp_mb_release(), smp_read_barrier_depends();

- sequentially consistent atomic access: everything else.

COMPILER MEMORY BARRIER
=======================

barrier() prevents the compiler from moving the memory accesses either
side of it to the other side. The compiler barrier has no direct effect
on the CPU, which may then reorder things however it wishes.

barrier() is mostly used within qemu/atomic.h itself. On some
architectures, CPU guarantees are strong enough that blocking compiler
optimizations already ensures the correct order of execution. In this
case, qemu/atomic.h will reduce stronger memory barriers to simple
compiler barriers.

Still, barrier() can be useful when writing code that can be interrupted
by signal handlers.

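As an illustration, here is a minimal sketch (invented names, not code
from the QEMU tree) of the signal-handler case: the handler runs on the
same CPU as the code it interrupts, so a compiler barrier alone is enough
to keep a data store ordered before the flag store that publishes it:

  #include "qemu/atomic.h"

  void process(int v);        /* hypothetical consumer of the data */

  static int req_data;
  static int req_pending;

  /* May be interrupted by the signal handler at any point. */
  void post_request(int value)
  {
      req_data = value;
      barrier();              /* keep the data store above... */
      req_pending = 1;        /* ...the store that publishes it */
  }

  /* Signal handler: if it sees the flag, the data is already there. */
  void handle_sigusr1(int sig)
  {
      (void)sig;
      if (req_pending) {
          barrier();          /* don't let the compiler hoist the load */
          process(req_data);
      }
  }
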
SEQUENTIALLY CONSISTENT ATOMIC ACCESS
=====================================

Most of the operations in the qemu/atomic.h header ensure *sequential
consistency*, where "the result of any execution is the same as if the
operations of all the processors were executed in some sequential order,
and the operations of each individual processor appear in this sequence
in the order specified by its program".

qemu/atomic.h provides the following set of atomic read-modify-write
operations:

  void atomic_inc(ptr)
  void atomic_dec(ptr)
  void atomic_add(ptr, val)
  void atomic_sub(ptr, val)
  void atomic_and(ptr, val)
  void atomic_or(ptr, val)

  typeof(*ptr) atomic_fetch_inc(ptr)
  typeof(*ptr) atomic_fetch_dec(ptr)
  typeof(*ptr) atomic_fetch_add(ptr, val)
  typeof(*ptr) atomic_fetch_sub(ptr, val)
  typeof(*ptr) atomic_fetch_and(ptr, val)
  typeof(*ptr) atomic_fetch_or(ptr, val)
  typeof(*ptr) atomic_fetch_xor(ptr, val)
  typeof(*ptr) atomic_fetch_inc_nonzero(ptr)
  typeof(*ptr) atomic_xchg(ptr, val)
  typeof(*ptr) atomic_cmpxchg(ptr, old, new)

all of which return the old value of *ptr. These operations are
polymorphic; they operate on any type that is as wide as a pointer.

Similar operations return the new value of *ptr:

  typeof(*ptr) atomic_inc_fetch(ptr)
  typeof(*ptr) atomic_dec_fetch(ptr)
  typeof(*ptr) atomic_add_fetch(ptr, val)
  typeof(*ptr) atomic_sub_fetch(ptr, val)
  typeof(*ptr) atomic_and_fetch(ptr, val)
  typeof(*ptr) atomic_or_fetch(ptr, val)
  typeof(*ptr) atomic_xor_fetch(ptr, val)

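As a usage sketch (a hypothetical object, not an API from the QEMU tree),
a reference count maintained with these macros could look as follows; the
old value returned by atomic_fetch_dec() identifies the caller that
dropped the last reference:

  #include "qemu/atomic.h"
  #include <stdint.h>
  #include <stdlib.h>

  typedef struct Obj {
      intptr_t refcount;      /* pointer-wide, as required above */
  } Obj;

  void obj_ref(Obj *o)
  {
      atomic_inc(&o->refcount);
  }

  void obj_unref(Obj *o)
  {
      /* atomic_fetch_dec() returns the old value: reading 1 means this
       * call released the last reference, so exactly one caller frees. */
      if (atomic_fetch_dec(&o->refcount) == 1) {
          free(o);
      }
  }
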
Sequentially consistent loads and stores can be done using:

  atomic_fetch_add(ptr, 0) for loads
  atomic_xchg(ptr, val) for stores

However, they are quite expensive on some platforms, notably POWER and
ARM. Therefore, qemu/atomic.h provides two primitives with slightly
weaker constraints:

  typeof(*ptr) atomic_mb_read(ptr)
  void atomic_mb_set(ptr, val)

The semantics of these primitives map to Java volatile variables,
and are strongly related to memory barriers as used in the Linux
kernel (see below).

As long as you use atomic_mb_read and atomic_mb_set, accesses cannot
be reordered with each other, and it is also not possible to reorder
"normal" accesses around them.

However, and this is the important difference between
atomic_mb_read/atomic_mb_set and sequential consistency, it is important
for both threads to access the same volatile variable. It is not the
case that everything visible to thread A when it writes volatile field f
becomes visible to thread B after it reads volatile field g. The store
and load have to "match" (i.e., be performed on the same volatile
field) to achieve the right semantics.

These operations operate on any type that is as wide as an int or smaller.

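Here is a sketch of the matching rule (invented variables): the reader
polls the same flag that the writer set, so everything the writer did
before atomic_mb_set() is visible once atomic_mb_read() returns 1:

  #include "qemu/atomic.h"

  static int payload;         /* ordinary variable, published via the flag */
  static int flag;            /* the "volatile field" both threads share */

  void writer(void)
  {
      payload = 42;               /* plain store */
      atomic_mb_set(&flag, 1);    /* matches the load in reader() */
  }

  void reader(void)
  {
      if (atomic_mb_read(&flag)) {    /* same variable as the store */
          int v = payload;            /* guaranteed to observe 42 */
          (void)v;
      }
  }
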
WEAK ATOMIC ACCESS AND MANUAL MEMORY BARRIERS
=============================================

Compared to sequentially consistent atomic access, programming with
weaker consistency models can be considerably more complicated.
If the algorithm you are writing includes both writes and reads
on the same side, it is generally simpler to use sequentially
consistent primitives.

When using this model, variables are accessed with atomic_read() and
atomic_set(), and restrictions on the ordering of accesses are enforced
using the memory barrier macros: smp_rmb(), smp_wmb(), smp_mb(),
smp_mb_acquire(), smp_mb_release(), smp_read_barrier_depends().

atomic_read() and atomic_set() prevent the compiler from using
optimizations that might otherwise optimize accesses out of existence
on the one hand, or that might create unsolicited accesses on the other.
In general this should not have any effect, because the same compiler
barriers are already implied by memory barriers. However, it is useful
to do so, because it tells readers which variables are shared with
other threads, and which are local to the current thread or protected
by other, more mundane means.

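One practical effect, sketched here with an invented flag: without
atomic_read(), the compiler may legally load the variable once, cache it
in a register, and spin forever:

  #include "qemu/atomic.h"

  static int done;            /* set to 1 by some other thread */

  void wait_for_done(void)
  {
      /* atomic_read() forces a fresh load on each iteration; a plain
       * read could be hoisted out of the loop, i.e. optimized out of
       * existence. */
      while (!atomic_read(&done)) {
          /* busy wait */
      }
      smp_mb_acquire();       /* order later accesses after the flag read */
  }
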
Memory barriers control the order of references to shared memory.
They come in six kinds:

- smp_rmb() guarantees that all the LOAD operations specified before
  the barrier will appear to happen before all the LOAD operations
  specified after the barrier with respect to the other components of
  the system.

  In other words, smp_rmb() puts a partial ordering on loads, but is not
  required to have any effect on stores.

- smp_wmb() guarantees that all the STORE operations specified before
  the barrier will appear to happen before all the STORE operations
  specified after the barrier with respect to the other components of
  the system.

  In other words, smp_wmb() puts a partial ordering on stores, but is not
  required to have any effect on loads.

- smp_mb_acquire() guarantees that all the LOAD operations specified before
  the barrier will appear to happen before all the LOAD or STORE operations
  specified after the barrier with respect to the other components of
  the system.

- smp_mb_release() guarantees that all the STORE operations specified *after*
  the barrier will appear to happen after all the LOAD or STORE operations
  specified *before* the barrier with respect to the other components of
  the system.

- smp_mb() guarantees that all the LOAD and STORE operations specified
  before the barrier will appear to happen before all the LOAD and
  STORE operations specified after the barrier with respect to the other
  components of the system.

  smp_mb() puts a partial ordering on both loads and stores. It is
  stronger than both a read and a write memory barrier; it implies both
  smp_mb_acquire() and smp_mb_release(), but it also prevents STOREs
  coming before the barrier from overtaking LOADs coming after the
  barrier and vice versa.

- smp_read_barrier_depends() is a weaker kind of read barrier. On
  most processors, whenever two loads are performed such that the
  second depends on the result of the first (e.g., the first load
  retrieves the address to which the second load will be directed),
  the processor will guarantee that the first LOAD will appear to happen
  before the second with respect to the other components of the system.
  However, this is not always true---for example, it was not true on
  Alpha processors. Whenever this kind of access happens to shared
  memory (that is not protected by a lock), a read barrier is needed,
  and smp_read_barrier_depends() can be used instead of smp_rmb().

  Note that the first load really has to have a _data_ dependency and not
  a control dependency. If the address for the second load is dependent
  on the first load, but the dependency is through a conditional rather
  than actually loading the address itself, then it's a _control_
  dependency and a full read barrier or better is required.

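Here is a sketch of the data-dependent case (an invented structure, in
the style of pointer publication): the writer orders its stores with
smp_wmb(), and the reader only needs the cheap dependency barrier because
the second load dereferences the value returned by the first:

  #include "qemu/atomic.h"

  typedef struct Foo {
      int val;
  } Foo;

  static Foo *foo_ptr;        /* written by the writer thread */

  void publish(Foo *f)        /* writer side */
  {
      f->val = 42;
      smp_wmb();                      /* order val before the pointer */
      atomic_set(&foo_ptr, f);
  }

  int consume(void)           /* reader side; assumes publish() has run */
  {
      Foo *p = atomic_read(&foo_ptr);
      smp_read_barrier_depends();     /* needed on Alpha, free elsewhere */
      return p->val;                  /* data-dependent on the load above */
  }
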
This is the set of barriers that is required *between* two atomic_read()
and atomic_set() operations to achieve sequential consistency:

                  |                2nd operation                  |
                  |-----------------------------------------------|
   1st operation  | (after last)   | atomic_read | atomic_set     |
   ---------------+----------------+-------------+----------------|
   (before first) |                | none        | smp_mb_release |
   ---------------+----------------+-------------+----------------|
   atomic_read    | smp_mb_acquire | smp_rmb*    | **             |
   ---------------+----------------+-------------+----------------|
   atomic_set     | none           | smp_mb()*** | smp_wmb()      |
   ---------------+----------------+-------------+----------------|

     * Or smp_read_barrier_depends().

    ** This requires a load-store barrier. This is achieved by
       either smp_mb_acquire() or smp_mb_release().

   *** This requires a store-load barrier. On most machines, the only
       way to achieve this is a full barrier.

You can see that the two possible definitions of atomic_mb_read()
and atomic_mb_set() are the following:

  1) atomic_mb_read(p)   = atomic_read(p); smp_mb_acquire()
     atomic_mb_set(p, v) = smp_mb_release(); atomic_set(p, v); smp_mb()

  2) atomic_mb_read(p)   = smp_mb(); atomic_read(p); smp_mb_acquire()
     atomic_mb_set(p, v) = smp_mb_release(); atomic_set(p, v);

Usually the former is used, because smp_mb() is expensive and a program
normally has more reads than writes. Therefore it makes more sense to
make atomic_mb_set() the more expensive operation.

There are two common cases in which atomic_mb_read and atomic_mb_set
generate too many memory barriers, and thus it can be useful to manually
place barriers instead:

- when a data structure has one thread that is always a writer
  and one thread that is always a reader, manual placement of
  memory barriers makes the write side faster. Furthermore,
  correctness is easy to check for in this case using the "pairing"
  trick that is explained below:

     thread 1                                thread 1
     -------------------------               ------------------------
     (other writes)
                                             smp_mb_release()
     atomic_mb_set(&a, x)                    atomic_set(&a, x)
                                             smp_wmb()
     atomic_mb_set(&b, y)                    atomic_set(&b, y)

     thread 2                                thread 2
     -------------------------               ------------------------
     y = atomic_mb_read(&b)                  y = atomic_read(&b)
                                             smp_rmb()
     x = atomic_mb_read(&a)                  x = atomic_read(&a)
                                             smp_mb_acquire()

  Note that the barrier between the stores in thread 1, and the one
  between the loads in thread 2, have been optimized here to a write
  and a read memory barrier respectively. On some architectures, notably
  ARMv7, smp_mb_acquire and smp_mb_release are just as expensive as
  smp_mb, but smp_rmb and/or smp_wmb are more efficient.

- sometimes, a thread is accessing many variables that are otherwise
  unrelated to each other (for example because, apart from the current
  thread, exactly one other thread will read or write each of these
  variables). In this case, it is possible to "hoist" the implicit
  barriers provided by atomic_mb_read() and atomic_mb_set() outside
  a loop. For example, the above definition of atomic_mb_read() gives
  the following transformation:

     n = 0;                                 n = 0;
     for (i = 0; i < 10; i++)          =>   for (i = 0; i < 10; i++)
       n += atomic_mb_read(&a[i]);            n += atomic_read(&a[i]);
                                            smp_mb_acquire();

  Similarly, atomic_mb_set() can be transformed as follows:

                                            smp_mb_release();
     for (i = 0; i < 10; i++)          =>   for (i = 0; i < 10; i++)
       atomic_mb_set(&a[i], false);           atomic_set(&a[i], false);
                                            smp_mb();

  The two tricks can be combined. In this case, splitting a loop in
  two lets you hoist the barriers out of the loops _and_ eliminate the
  expensive smp_mb():

                                            smp_mb_release();
     for (i = 0; i < 10; i++) {        =>   for (i = 0; i < 10; i++)
       atomic_mb_set(&a[i], false);           atomic_set(&a[i], false);
       atomic_mb_set(&b[i], false);         smp_wmb();
     }                                      for (i = 0; i < 10; i++)
                                              atomic_set(&b[i], false);
                                            smp_mb();

  The other thread can still use atomic_mb_read()/atomic_mb_set().

Memory barrier pairing
----------------------

A useful rule of thumb is that memory barriers should always, or almost
always, be paired with another barrier. In the case of QEMU, however,
note that the other barrier may actually be in a driver that runs in
the guest!

For the purposes of pairing, smp_read_barrier_depends() and smp_rmb()
both count as read barriers. A read barrier shall pair with a write
barrier or a full barrier; a write barrier shall pair with a read
barrier or a full barrier. A full barrier can pair with anything.

        thread 1             thread 2
        ===============      ===============
        a = 1;
        smp_wmb();
        b = 2;               x = b;
                             smp_rmb();
                             y = a;

Note that the "writing" thread is accessing the variables in the
opposite order from the "reading" thread. This is expected: stores
before the write barrier will normally match the loads after the
read barrier, and vice versa. The same is true for more than two
accesses and for data dependency barriers:

        thread 1             thread 2
        ===============      ===============
        b[2] = 1;
        smp_wmb();
        x->i = 2;
        smp_wmb();
        a = x;               x = a;
                             smp_read_barrier_depends();
                             y = x->i;
                             smp_read_barrier_depends();
                             z = b[y];

smp_wmb() also pairs with atomic_mb_read() and smp_mb_acquire(),
and smp_rmb() also pairs with atomic_mb_set() and smp_mb_release().

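For example (invented variables), a producer that uses atomic_set() with
a manual smp_wmb() pairs with a consumer that relies on the implicit
acquire barrier of atomic_mb_read():

  #include "qemu/atomic.h"

  static int value;
  static int valid;

  void producer(void)
  {
      atomic_set(&value, 42);
      smp_wmb();                  /* pairs with atomic_mb_read() below */
      atomic_set(&valid, 1);
  }

  void consumer(void)
  {
      if (atomic_mb_read(&valid)) {
          int v = atomic_read(&value);    /* guaranteed to observe 42 */
          (void)v;
      }
  }
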
COMPARISON WITH LINUX KERNEL MEMORY BARRIERS
============================================

Here is a list of differences between Linux kernel atomic operations
and memory barriers, and the equivalents in QEMU:

- atomic operations in Linux are always on a 32-bit int type and
  use a boxed atomic_t type; atomic operations in QEMU are polymorphic
  and use normal C types.

- Originally, atomic_read and atomic_set in Linux gave no guarantee at
  all. Linux 4.1 updated them to implement volatile semantics via
  ACCESS_ONCE (or the more recent READ/WRITE_ONCE).

  QEMU's atomic_read/set implement, if the compiler supports it, C11
  atomic relaxed semantics, and volatile semantics otherwise.
  Both semantics prevent the compiler from doing certain transformations;
  the difference is that atomic accesses are guaranteed to be atomic,
  while volatile accesses aren't. Thus, in the volatile case we just cross
  our fingers hoping that the compiler will generate atomic accesses,
  since we assume the variables passed are machine-word sized and
  properly aligned.

  No barriers are implied by atomic_read/set in either Linux or QEMU.

- atomic read-modify-write operations in Linux are of three kinds:

     atomic_OP          returns void
     atomic_OP_return   returns new value of the variable
     atomic_fetch_OP    returns the old value of the variable
     atomic_cmpxchg     returns the old value of the variable

  In QEMU, the second kind is named atomic_OP_fetch (see the sketch
  after this list).

- different atomic read-modify-write operations in Linux imply
  a different set of memory barriers; in QEMU, all of them enforce
  sequential consistency, which means they imply full memory barriers
  before and after the operation.

- Linux does not have an equivalent of atomic_mb_set(). In particular,
  note that smp_store_mb() is a little weaker than atomic_mb_set().
  atomic_mb_read() compiles to the same instructions as Linux's
  smp_load_acquire(), but this should be treated as an implementation
  detail. QEMU does have atomic_load_acquire() and atomic_store_release()
  macros, but for now they are only used within atomic.h. This may
  change in the future.

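As a sketch of the naming difference discussed in the list above (values
invented), the two QEMU flavors differ only in which value they hand back:

  #include "qemu/atomic.h"

  static int counter = 5;

  void rmw_demo(void)
  {
      /* atomic_fetch_OP returns the old value, like Linux's atomic_fetch_OP;
       * atomic_OP_fetch returns the new one, like Linux's atomic_OP_return. */
      int old_val = atomic_fetch_add(&counter, 3);    /* old_val == 5  */
      int new_val = atomic_add_fetch(&counter, 3);    /* new_val == 11 */
      (void)old_val;
      (void)new_val;
  }
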
SOURCES
=======

* Documentation/memory-barriers.txt from the Linux kernel

* "The JSR-133 Cookbook for Compiler Writers", available at
  http://g.oswego.edu/dl/jmm/cookbook.html