		Semantics and Behavior of Atomic and
			 Bitmask Operations

			  David S. Miller

This document is intended to serve as a guide to Linux port
maintainers on how to implement atomic counter, bitops, and spinlock
interfaces properly.

The atomic_t type should be defined as a signed integer.  Also, it
should be made opaque such that any kind of cast to a normal
C integer type will fail.  Something like the following should
suffice:

	typedef struct { int counter; } atomic_t;

Historically, counter has been declared volatile.  This is now discouraged.
See Documentation/volatile-considered-harmful.txt for the complete rationale.

local_t is very similar to atomic_t.  If the counter is per CPU and only
updated by one CPU, local_t is probably more appropriate.  Please see
Documentation/local_ops.txt for the semantics of local_t.

The first operations to implement for atomic_t's are the initializers and
plain reads.

	#define ATOMIC_INIT(i)		{ (i) }
	#define atomic_set(v, i)	((v)->counter = (i))

The first macro is used in definitions, such as:

	static atomic_t my_counter = ATOMIC_INIT(1);

The initializer is atomic in that the return values of the atomic
operations are guaranteed to correctly reflect the initialized value if the
initializer is used before runtime.  If the initializer is used at runtime, a
proper implicit or explicit read memory barrier is needed before reading the
value with atomic_read from another thread.

The second interface can be used at runtime, as in:

	struct foo { atomic_t counter; };
	...

	struct foo *k;

	k = kmalloc(sizeof(*k), GFP_KERNEL);
	if (!k)
		return -ENOMEM;
	atomic_set(&k->counter, 0);

The setting is atomic in that the return values of the atomic operations by
all threads are guaranteed to correctly reflect either the value that has
been set with this operation or set with another operation.  A proper implicit
or explicit memory barrier is needed before the value set with the operation
is guaranteed to be readable with atomic_read from another thread.

Next, we have:

	#define atomic_read(v)	((v)->counter)

which simply reads the counter value currently visible to the calling thread.
The read is atomic in that the return value is guaranteed to be one of the
values initialized or modified with the interface operations if a proper
implicit or explicit memory barrier is used after possible runtime
initialization by any other thread and the value is modified only with the
interface operations.  atomic_read does not guarantee that the runtime
initialization by any other thread is visible yet, so the user of the
interface must take care of that with a proper implicit or explicit memory
barrier.

*** WARNING: atomic_read() and atomic_set() DO NOT IMPLY BARRIERS! ***

Some architectures may choose to use the volatile keyword, barriers, or inline
assembly to guarantee some degree of immediacy for atomic_read() and
atomic_set().  This is not uniformly guaranteed, and may change in the future,
so all users of atomic_t should treat atomic_read() and atomic_set() as simple
C statements that may be reordered or optimized away entirely by the compiler
or processor, and explicitly invoke the appropriate compiler and/or memory
barrier for each use case.  Failure to do so will result in code that may
suddenly break when used with different architectures or compiler
optimizations, or even changes in unrelated code which changes how the
compiler optimizes the section accessing atomic_t variables.

*** YOU HAVE BEEN WARNED! ***

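As an illustrative sketch only (the shared_val and ready_flag variables
below are hypothetical, not interfaces defined by this document), one way
a caller can pair these operations with explicit barriers when publishing
a value is:

	/* Writer: publish a value, then raise a flag. */
	atomic_set(&shared_val, 42);
	smp_mb();		/* order the value store before the flag store */
	atomic_set(&ready_flag, 1);

	/* Reader: only trust shared_val once the flag is seen. */
	if (atomic_read(&ready_flag)) {
		smp_mb();	/* order the flag load before the value load */
		val = atomic_read(&shared_val);
	}

(smp_wmb()/smp_rmb() would also suffice here; smp_mb() is simply the
strongest choice.)
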
Properly aligned pointers, longs, ints, and chars (and unsigned
equivalents) may be atomically loaded from and stored to in the same
sense as described for atomic_read() and atomic_set().  The ACCESS_ONCE()
macro should be used to prevent the compiler from using optimizations
that might otherwise optimize accesses out of existence on the one hand,
or that might create unsolicited accesses on the other.

For example consider the following code:

	while (a > 0)
		do_something();

If the compiler can prove that do_something() does not store to the
variable a, then the compiler is within its rights transforming this to
the following:

	tmp = a;
	if (tmp > 0)
		for (;;)
			do_something();

If you don't want the compiler to do this (and you probably don't), then
you should use something like the following:

	while (ACCESS_ONCE(a) > 0)
		do_something();

Alternatively, you could place a barrier() call in the loop.

For another example, consider the following code:

	tmp_a = a;
	do_something_with(tmp_a);
	do_something_else_with(tmp_a);

If the compiler can prove that do_something_with() does not store to the
variable a, then the compiler is within its rights to manufacture an
additional load as follows:

	tmp_a = a;
	do_something_with(tmp_a);
	tmp_a = a;
	do_something_else_with(tmp_a);

This could fatally confuse your code if it expected the same value
to be passed to do_something_with() and do_something_else_with().

The compiler would be likely to manufacture this additional load if
do_something_with() was an inline function that made very heavy use
of registers: reloading from variable a could save a flush to the
stack and later reload.  To prevent the compiler from attacking your
code in this manner, write the following:

	tmp_a = ACCESS_ONCE(a);
	do_something_with(tmp_a);
	do_something_else_with(tmp_a);

For a final example, consider the following code, assuming that the
variable a is set at boot time before the second CPU is brought online
and never changed later, so that memory barriers are not needed:

	if (a)
		b = 9;
	else
		b = 42;

The compiler is within its rights to manufacture an additional store
by transforming the above code into the following:

	b = 42;
	if (a)
		b = 9;

This could come as a fatal surprise to other code running concurrently
that expected b to never have the value 42 if a was zero.  To prevent
the compiler from doing this, write something like:

	if (a)
		ACCESS_ONCE(b) = 9;
	else
		ACCESS_ONCE(b) = 42;

Don't even -think- about doing this without proper use of memory barriers,
locks, or atomic operations if variable a can change at runtime!

*** WARNING: ACCESS_ONCE() DOES NOT IMPLY A BARRIER! ***

Now, we move on to the atomic operation interfaces typically implemented with
the help of assembly code.

	void atomic_add(int i, atomic_t *v);
	void atomic_sub(int i, atomic_t *v);
	void atomic_inc(atomic_t *v);
	void atomic_dec(atomic_t *v);

These four routines add and subtract integral values to/from the given
atomic_t value.  The first two routines pass explicit integers by
which to make the adjustment, whereas the latter two use an implicit
adjustment value of "1".

One very important aspect of these routines is that they DO NOT
require any explicit memory barriers.  They need only perform the
atomic_t counter update in an SMP safe manner.

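For example (a hypothetical statistics counter, shown only as a sketch),
a hot path that merely accounts events needs nothing beyond the operation
itself:

	static atomic_t rx_packets = ATOMIC_INIT(0);

	static void note_rx_packet(void)
	{
		/* No barriers needed; only SMP-safe accounting is required. */
		atomic_inc(&rx_packets);
	}
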
Next, we have:

	int atomic_inc_return(atomic_t *v);
	int atomic_dec_return(atomic_t *v);

These routines add 1 and subtract 1, respectively, from the given
atomic_t and return the new counter value after the operation is
performed.

Unlike the above routines, it is required that explicit memory
barriers are performed before and after the operation.  It must be
done such that all memory operations before and after the atomic
operation calls are strongly ordered with respect to the atomic
operation itself.

For example, it should behave as if a smp_mb() call existed both
before and after the atomic operation.

If the atomic instructions used in an implementation provide explicit
memory barrier semantics which satisfy the above requirements, that is
fine as well.

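As a usage sketch (the sequence counter and helper below are hypothetical),
the returned value is often used to hand out unique identifiers:

	static atomic_t next_id = ATOMIC_INIT(0);

	static int alloc_id(void)
	{
		/* Each caller sees a distinct, monotonically assigned value. */
		return atomic_inc_return(&next_id);
	}
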
Let's move on:

	int atomic_add_return(int i, atomic_t *v);
	int atomic_sub_return(int i, atomic_t *v);

These behave just like atomic_{inc,dec}_return() except that an
explicit counter adjustment is given instead of the implicit "1".
This means that like atomic_{inc,dec}_return(), the memory barrier
semantics are required.

Next:

	int atomic_inc_and_test(atomic_t *v);
	int atomic_dec_and_test(atomic_t *v);

These two routines increment and decrement by 1, respectively, the
given atomic counter.  They return a boolean indicating whether the
resulting counter value was zero or not.

Again, these routines require explicit memory barrier semantics around
the operation.

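A common use is releasing a reference count (a sketch only; the object
type and its refcnt field are hypothetical):

	static void put_object(struct my_obj *obj)
	{
		/* Free the object only when the last reference is dropped. */
		if (atomic_dec_and_test(&obj->refcnt))
			kfree(obj);
	}
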
	int atomic_sub_and_test(int i, atomic_t *v);

This is identical to atomic_dec_and_test() except that an explicit
decrement is given instead of the implicit "1".  It requires explicit
memory barrier semantics around the operation.

	int atomic_add_negative(int i, atomic_t *v);

The given increment is added to the given atomic counter value.  A
boolean is returned which indicates whether the resulting counter value
is negative.  It requires explicit memory barrier semantics around the
operation.

Then:

	int atomic_xchg(atomic_t *v, int new);

This performs an atomic exchange operation on the atomic variable v, setting
the given new value.  It returns the old value that the atomic variable v had
just before the operation.

atomic_xchg requires explicit memory barriers around the operation.

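For illustration only (the pending flag and helper are hypothetical),
atomic_xchg() can be used to atomically take ownership of pending work
exactly once:

	/* Returns non-zero only for the thread that claims the work. */
	static int claim_pending_work(atomic_t *pending)
	{
		return atomic_xchg(pending, 0) != 0;
	}
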
	int atomic_cmpxchg(atomic_t *v, int old, int new);

This performs an atomic compare exchange operation on the atomic value v,
with the given old and new values.  Like all atomic_xxx operations,
atomic_cmpxchg will only satisfy its atomicity semantics as long as all
other accesses of *v are performed through atomic_xxx operations.

atomic_cmpxchg requires explicit memory barriers around the operation.

The semantics for atomic_cmpxchg are the same as those defined for 'cas'
below.

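A typical pattern (shown as a sketch, not a kernel interface) is a
read/compute/cmpxchg retry loop, here adding to a counter only while it
stays below a hypothetical limit:

	static int add_below_limit(atomic_t *v, int a, int limit)
	{
		int old, new;

		for (;;) {
			old = atomic_read(v);
			if (old + a > limit)
				return 0;	/* would exceed limit, give up */
			new = old + a;
			/* Retry if another thread changed v in the meantime. */
			if (atomic_cmpxchg(v, old, new) == old)
				return 1;
		}
	}
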
Finally:

	int atomic_add_unless(atomic_t *v, int a, int u);

If the atomic value v is not equal to u, this function adds a to v, and
returns non zero.  If v is equal to u then it returns zero.  This is done as
an atomic operation.

atomic_add_unless requires explicit memory barriers around the operation
unless it fails (returns 0).

atomic_inc_not_zero, equivalent to atomic_add_unless(v, 1, 0)

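As an illustrative sketch (the lookup helper and object layout are
hypothetical), atomic_inc_not_zero() is commonly used to take a reference
only if the object has not already dropped its last reference:

	/* The lookup itself is protected by a lock or RCU. */
	obj = find_object(table, key);
	if (obj && !atomic_inc_not_zero(&obj->refcnt))
		obj = NULL;	/* refcount already hit zero; treat as not found */
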
If a caller requires memory barrier semantics around an atomic_t
operation which does not return a value, a set of interfaces are
defined which accomplish this:

	void smp_mb__before_atomic_dec(void);
	void smp_mb__after_atomic_dec(void);
	void smp_mb__before_atomic_inc(void);
	void smp_mb__after_atomic_inc(void);

For example, smp_mb__before_atomic_dec() can be used like so:

	obj->dead = 1;
	smp_mb__before_atomic_dec();
	atomic_dec(&obj->ref_count);

It makes sure that all memory operations preceding the atomic_dec()
call are strongly ordered with respect to the atomic counter
operation.  In the above example, it guarantees that the assignment of
"1" to obj->dead will be globally visible to other cpus before the
atomic counter decrement.

Without the explicit smp_mb__before_atomic_dec() call, the
implementation could legally allow the atomic counter update visible
to other cpus before the "obj->dead = 1;" assignment.

The other three interfaces listed are used to provide explicit
ordering with respect to memory operations after an atomic_dec() call
(smp_mb__after_atomic_dec()) and around atomic_inc() calls
(smp_mb__{before,after}_atomic_inc()).

A missing memory barrier in the cases where they are required by the
atomic_t implementation above can have disastrous results.  Here is
an example, which follows a pattern occurring frequently in the Linux
kernel.  It is the use of atomic counters to implement reference
counting, and it works such that once the counter falls to zero it can
be guaranteed that no other entity can be accessing the object:

static void obj_list_add(struct obj *obj, struct list_head *head)
{
	obj->active = 1;
	list_add(&obj->list, head);
}

static void obj_list_del(struct obj *obj)
{
	list_del(&obj->list);
	obj->active = 0;
}

static void obj_destroy(struct obj *obj)
{
	BUG_ON(obj->active);
	kfree(obj);
}

struct obj *obj_list_peek(struct list_head *head)
{
	if (!list_empty(head)) {
		struct obj *obj;

		obj = list_entry(head->next, struct obj, list);
		atomic_inc(&obj->refcnt);
		return obj;
	}
	return NULL;
}

void obj_poke(void)
{
	struct obj *obj;

	spin_lock(&global_list_lock);
	obj = obj_list_peek(&global_list);
	spin_unlock(&global_list_lock);

	if (obj) {
		obj->ops->poke(obj);
		if (atomic_dec_and_test(&obj->refcnt))
			obj_destroy(obj);
	}
}

void obj_timeout(struct obj *obj)
{
	spin_lock(&global_list_lock);
	obj_list_del(obj);
	spin_unlock(&global_list_lock);

	if (atomic_dec_and_test(&obj->refcnt))
		obj_destroy(obj);
}

(This is a simplification of the ARP queue management in the generic
neighbour discovery code of the networking.  Olaf Kirch found a bug wrt.
memory barriers in kfree_skb() that exposed the atomic_t memory barrier
requirements quite clearly.)

Given the above scheme, it must be the case that the obj->active
update done by the obj list deletion be visible to other processors
before the atomic counter decrement is performed.

Otherwise, the counter could fall to zero, yet obj->active would still
be set, thus triggering the assertion in obj_destroy().  The error
sequence looks like this:

	cpu 0				cpu 1
	obj_poke()			obj_timeout()
	obj = obj_list_peek();
	... gains ref to obj, refcnt=2
					obj_list_del(obj);
					obj->active = 0 ...
					... visibility delayed ...
	atomic_dec_and_test()
	... refcnt drops to 1 ...
					atomic_dec_and_test()
					... refcount drops to 0 ...
					obj_destroy()
					BUG() triggers since obj->active
					still seen as one
	obj->active update visibility occurs

With the memory barrier semantics required of the atomic_t operations
which return values, the above sequence of memory visibility can never
happen.  Specifically, in the above case the atomic_dec_and_test()
counter decrement would not become globally visible until the
obj->active update does.

As a historical note, 32-bit Sparc used to only allow usage of
24-bits of its atomic_t type.  This was because it used 8 bits
as a spinlock for SMP safety.  Sparc32 lacked a "compare and swap"
type instruction.  However, 32-bit Sparc has since been moved over
to a "hash table of spinlocks" scheme, that allows the full 32-bit
counter to be realized.  Essentially, an array of spinlocks are
indexed into based upon the address of the atomic_t being operated
on, and that lock protects the atomic operation.  Parisc uses the
same scheme.

Another note is that the atomic_t operations returning values are
extremely slow on an old 386.

We will now cover the atomic bitmask operations.  You will find that
their SMP and memory barrier semantics are similar in shape and scope
to the atomic_t ops above.

Native atomic bit operations are defined to operate on objects aligned
to the size of an "unsigned long" C data type, and are at least of that
size.  The endianness of the bits within each "unsigned long" is the
native endianness of the cpu.

	void set_bit(unsigned long nr, volatile unsigned long *addr);
	void clear_bit(unsigned long nr, volatile unsigned long *addr);
	void change_bit(unsigned long nr, volatile unsigned long *addr);

These routines set, clear, and change, respectively, the bit number
indicated by "nr" on the bit mask pointed to by "addr".

They must execute atomically, yet there are no implicit memory barrier
semantics required of these interfaces.

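For instance (the flags word and bit numbers below are hypothetical),
concurrent setters and clearers need no locking for the bit update itself:

	#define OBJ_DIRTY	0
	#define OBJ_BUSY	1

	unsigned long obj_flags;	/* shared flags word */

	set_bit(OBJ_DIRTY, &obj_flags);		/* atomic, no barrier implied */
	clear_bit(OBJ_BUSY, &obj_flags);
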
	int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
	int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
	int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

Like the above, except that these routines return a boolean which
indicates whether the changed bit was set _BEFORE_ the atomic bit
operation.

WARNING! It is incredibly important that the value be a boolean,
ie. "0" or "1".  Do not try to be fancy and save a few instructions by
declaring the above to return "long" and just returning something like
"old_val & mask" because that will not work.

For one thing, this return value gets truncated to int in many code
paths using these interfaces, so on 64-bit if the bit is set in the
upper 32-bits then testers will never see that.

One great example of where this problem crops up are the thread_info
flag operations.  Routines such as test_and_set_ti_thread_flag() chop
the return value into an int.  There are other places where things
like this occur as well.

These routines, like the atomic_t counter operations returning values,
require explicit memory barrier semantics around their execution.  All
memory operations before the atomic bit operation call must be made
visible globally before the atomic bit operation is made visible.
Likewise, the atomic bit operation must be visible globally before any
subsequent memory operation is made visible.  For example:

	obj->dead = 1;
	if (test_and_set_bit(0, &obj->flags))
		/* ... */;
	obj->killed = 1;

The implementation of test_and_set_bit() must guarantee that
"obj->dead = 1;" is visible to cpus before the atomic memory operation
done by test_and_set_bit() becomes visible.  Likewise, the atomic
memory operation done by test_and_set_bit() must become visible before
"obj->killed = 1;" is visible.

Finally there is the basic operation:

	int test_bit(unsigned long nr, __const__ volatile unsigned long *addr);

Which returns a boolean indicating if bit "nr" is set in the bitmask
pointed to by "addr".

If explicit memory barriers are required around clear_bit() (which
does not return a value, and thus does not need to provide memory
barrier semantics), two interfaces are provided:

	void smp_mb__before_clear_bit(void);
	void smp_mb__after_clear_bit(void);

They are used as follows, and are akin to their atomic_t operation
brothers:

	/* All memory operations before this call will
	 * be globally visible before the clear_bit().
	 */
	smp_mb__before_clear_bit();
	clear_bit( ... );

	/* The clear_bit() will be visible before all
	 * subsequent memory operations.
	 */
	smp_mb__after_clear_bit();

There are two special bitops with lock barrier semantics (acquire/release,
same as spinlocks).  These operate in the same way as their non-_lock/unlock
postfixed variants, except that they provide acquire/release semantics,
respectively.  This means they can be used for bit_spin_trylock and
bit_spin_unlock type operations without specifying any more barriers.

	int test_and_set_bit_lock(unsigned long nr, unsigned long *addr);
	void clear_bit_unlock(unsigned long nr, unsigned long *addr);
	void __clear_bit_unlock(unsigned long nr, unsigned long *addr);

The __clear_bit_unlock version is non-atomic, however it still implements
unlock barrier semantics.  This can be useful if the lock itself is protecting
the other bits in the word.

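As a sketch of the intended use (the lock bit name and flags word are
hypothetical), a simple bit spinlock built from these looks like:

	#define MY_LOCK_BIT	0

	unsigned long word;	/* bit 0 is the lock, other bits are data */

	while (test_and_set_bit_lock(MY_LOCK_BIT, &word))
		cpu_relax();		/* spin until the lock bit is acquired */

	/* ... critical section, may touch the other bits of "word" ... */

	clear_bit_unlock(MY_LOCK_BIT, &word);	/* release with unlock semantics */
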
Finally, there are non-atomic versions of the bitmask operations
provided.  They are used in contexts where some other higher-level SMP
locking scheme is being used to protect the bitmask, and thus less
expensive non-atomic operations may be used in the implementation.
They have names similar to the above bitmask operation interfaces,
except that two underscores are prefixed to the interface name.

	void __set_bit(unsigned long nr, volatile unsigned long *addr);
	void __clear_bit(unsigned long nr, volatile unsigned long *addr);
	void __change_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

These non-atomic variants also do not require any special memory
barrier semantics.

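For example (a sketch; the spinlock, bitmap, and bit numbers are
hypothetical), when every modifier already holds the same lock, the
cheaper variants are safe:

	spin_lock(&map_lock);
	__set_bit(nr, bitmap);		/* safe: map_lock serializes all writers */
	__clear_bit(other_nr, bitmap);
	spin_unlock(&map_lock);
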
The routines xchg() and cmpxchg() need the same exact memory barriers
as the atomic and bit operations returning values.

Spinlocks and rwlocks have memory barrier expectations as well.
The rule to follow is simple:

1) When acquiring a lock, the implementation must make it globally
   visible before any subsequent memory operation.

2) When releasing a lock, the implementation must make it such that
   all previous memory operations are globally visible before the
   lock release.

Which finally brings us to _atomic_dec_and_lock().  There is an
architecture-neutral version implemented in lib/dec_and_lock.c,
but most platforms will wish to optimize this in assembler.

	int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);

Atomically decrement the given counter, and if it will drop to zero
atomically acquire the given spinlock and perform the decrement
of the counter to zero.  If it does not drop to zero, do nothing
with the spinlock.

It is actually pretty simple to get the memory barrier correct.
Simply satisfy the spinlock grab requirements, which is to make
sure the spinlock operation is globally visible before any
subsequent memory operation.

We can demonstrate this operation more clearly if we define
an abstract atomic operation:

	long cas(long *mem, long old, long new);

"cas" stands for "compare and swap".  It atomically:

1) Compares "old" with the value currently at "mem".
2) If they are equal, "new" is written to "mem".
3) Regardless, the current value at "mem" is returned.

As an example usage, here is what an atomic counter update
might look like:

void example_atomic_inc(long *counter)
{
	long old, new, ret;

	while (1) {
		old = *counter;
		new = old + 1;

		ret = cas(counter, old, new);
		if (ret == old)
			break;
	}
}

Let's use cas() in order to build a pseudo-C atomic_dec_and_lock():

int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
{
	long old, new, ret;
	int went_to_zero;

	went_to_zero = 0;
	while (1) {
		old = atomic_read(atomic);
		new = old - 1;
		if (new == 0) {
			went_to_zero = 1;
			spin_lock(lock);
		}
		ret = cas(atomic, old, new);
		if (ret == old)
			break;
		if (went_to_zero) {
			spin_unlock(lock);
			went_to_zero = 0;
		}
	}

	return went_to_zero;
}

Now, as far as memory barriers go, as long as spin_lock()
strictly orders all subsequent memory operations (including
the cas()) with respect to itself, things will be fine.

Said another way, _atomic_dec_and_lock() must guarantee that
a counter dropping to zero is never made visible before the
spinlock being acquired.

Note that this also means that for the case where the counter
is not dropping to zero, there are no memory ordering
requirements.