RCU and Unloadable Modules

[Originally published in LWN Jan. 14, 2007: http://lwn.net/Articles/217484/]

RCU (read-copy update) is a synchronization mechanism that can be thought
of as a replacement for reader-writer locking (among other things), but with
very low-overhead readers that are immune to deadlock, priority inversion,
and unbounded latency. RCU read-side critical sections are delimited
by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPT
kernels, generate no code whatsoever.

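For concreteness, a reader traversing an RCU-protected linked list
might look like the following minimal sketch, in which the pstruct
type, the mylist list head, and do_something_with() are illustrative
names rather than actual kernel APIs:

	struct pstruct *p;

	rcu_read_lock();		/* Enter read-side critical section. */
	list_for_each_entry_rcu(p, &mylist, list)
		do_something_with(p);	/* Must not block in classic RCU. */
	rcu_read_unlock();		/* Exit read-side critical section. */
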
This means that RCU writers are unaware of the presence of concurrent
readers, so that RCU updates to shared data must be undertaken quite
carefully, leaving an old version of the data structure in place until all
pre-existing readers have finished. These old versions are needed because
such readers might hold a reference to them. RCU updates can therefore be
rather expensive, and RCU is thus best suited for read-mostly situations.

How can an RCU writer possibly determine when all readers are finished,
given that readers might well leave absolutely no trace of their
presence? There is a synchronize_rcu() primitive that blocks until all
pre-existing readers have completed. An updater wishing to delete an
element p from a linked list might do the following, while holding an
appropriate lock, of course:

	list_del_rcu(p);	/* Unlink p; pre-existing readers may still see it. */
	synchronize_rcu();	/* Wait for all pre-existing readers to finish. */
	kfree(p);		/* Now no reader can hold a reference to p. */

But the above code cannot be used in IRQ context -- the call_rcu()
primitive must be used instead. This primitive takes a pointer to an
rcu_head struct placed within the RCU-protected data structure and
another pointer to a function that may be invoked later to free that
structure. Code to delete an element p from the linked list from IRQ
context might then be as follows:

	list_del_rcu(p);
	call_rcu(&p->rcu, p_callback);	/* p_callback() will free p later. */

Since call_rcu() never blocks, this code can safely be used from within
IRQ context. The function p_callback() might be defined as follows:

	static void p_callback(struct rcu_head *rp)
	{
		struct pstruct *p = container_of(rp, struct pstruct, rcu);

		kfree(p);
	}

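The pstruct type itself is not shown in this document; for the
container_of() call above to work, the rcu_head must be embedded in
the structure, for example (an illustrative declaration, not taken
from any kernel source):

	struct pstruct {
		struct list_head list;	/* Linkage in the RCU-protected list. */
		int data;		/* Payload (illustrative). */
		struct rcu_head rcu;	/* Passed to call_rcu(). */
	};
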

Unloading Modules That Use call_rcu()

But what if p_callback is defined in an unloadable module?

If we unload the module while some RCU callbacks are pending,
the CPUs executing these callbacks are going to be severely
disappointed when they are later invoked, as fancifully depicted at
http://lwn.net/images/ns/kernel/rcu-drop.jpg.

We could try placing a synchronize_rcu() in the module-exit code path,
but this is not sufficient. Although synchronize_rcu() does wait for a
grace period to elapse, it does not wait for the callbacks to complete.

One might be tempted to try several back-to-back synchronize_rcu()
calls, but this is still not guaranteed to work. If there is a very
heavy RCU-callback load, then some of the callbacks might be deferred
in order to allow other processing to proceed. Such deferral is required
in realtime kernels in order to avoid excessive scheduling latencies.


rcu_barrier()

We instead need the rcu_barrier() primitive. Rather than waiting for
a grace period to elapse, rcu_barrier() waits for all outstanding RCU
callbacks to complete. Please note that rcu_barrier() does -not- imply
synchronize_rcu(); in particular, if there are no RCU callbacks queued
anywhere, rcu_barrier() is within its rights to return immediately,
without waiting for a grace period to elapse.

Pseudo-code using rcu_barrier() is as follows (a concrete sketch
appears after this list):

1. Prevent any new RCU callbacks from being posted.
2. Execute rcu_barrier().
3. Allow the module to be unloaded.

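A minimal module-exit sketch following these three steps might look as
follows, where stop_posting_callbacks() stands in for whatever
module-specific mechanism prevents new call_rcu() invocations, and
my_mod_exit() is a hypothetical name:

	static void __exit my_mod_exit(void)
	{
		stop_posting_callbacks();	/* Step 1: no new callbacks. */
		rcu_barrier();			/* Step 2: wait for pending ones. */
	}					/* Step 3: unload may proceed. */
	module_exit(my_mod_exit);
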
There are also rcu_barrier_bh(), rcu_barrier_sched(), and srcu_barrier()
functions for the other flavors of RCU, and you of course must match
the flavor of rcu_barrier() with that of call_rcu(). If your module
uses multiple flavors of call_rcu(), then it must also use multiple
flavors of rcu_barrier() when unloading that module. For example, if
it uses call_rcu_bh(), call_srcu() on srcu_struct_1, and call_srcu() on
srcu_struct_2, then the following three lines of code will be required
when unloading:

	rcu_barrier_bh();
	srcu_barrier(&srcu_struct_1);
	srcu_barrier(&srcu_struct_2);

The rcutorture module makes use of rcu_barrier() in its exit function
as follows:

 1 static void
 2 rcu_torture_cleanup(void)
 3 {
 4   int i;
 5
 6   fullstop = 1;
 7   if (shuffler_task != NULL) {
 8     VERBOSE_PRINTK_STRING("Stopping rcu_torture_shuffle task");
 9     kthread_stop(shuffler_task);
10   }
11   shuffler_task = NULL;
12
13   if (writer_task != NULL) {
14     VERBOSE_PRINTK_STRING("Stopping rcu_torture_writer task");
15     kthread_stop(writer_task);
16   }
17   writer_task = NULL;
18
19   if (reader_tasks != NULL) {
20     for (i = 0; i < nrealreaders; i++) {
21       if (reader_tasks[i] != NULL) {
22         VERBOSE_PRINTK_STRING(
23           "Stopping rcu_torture_reader task");
24         kthread_stop(reader_tasks[i]);
25       }
26       reader_tasks[i] = NULL;
27     }
28     kfree(reader_tasks);
29     reader_tasks = NULL;
30   }
31   rcu_torture_current = NULL;
32
33   if (fakewriter_tasks != NULL) {
34     for (i = 0; i < nfakewriters; i++) {
35       if (fakewriter_tasks[i] != NULL) {
36         VERBOSE_PRINTK_STRING(
37           "Stopping rcu_torture_fakewriter task");
38         kthread_stop(fakewriter_tasks[i]);
39       }
40       fakewriter_tasks[i] = NULL;
41     }
42     kfree(fakewriter_tasks);
43     fakewriter_tasks = NULL;
44   }
45
46   if (stats_task != NULL) {
47     VERBOSE_PRINTK_STRING("Stopping rcu_torture_stats task");
48     kthread_stop(stats_task);
49   }
50   stats_task = NULL;
51
52   /* Wait for all RCU callbacks to fire. */
53   rcu_barrier();
54
55   rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
56
57   if (cur_ops->cleanup != NULL)
58     cur_ops->cleanup();
59   if (atomic_read(&n_rcu_torture_error))
60     rcu_torture_print_module_parms("End of test: FAILURE");
61   else
62     rcu_torture_print_module_parms("End of test: SUCCESS");
63 }

Line 6 sets a global variable that prevents any RCU callbacks from
re-posting themselves. This will not be necessary in most cases, since
RCU callbacks rarely include calls to call_rcu(). However, the rcutorture
module is an exception to this rule, and therefore needs to set this
global variable.

Lines 7-50 stop all the kernel tasks associated with the rcutorture
module. Therefore, once execution reaches line 53, no more rcutorture
RCU callbacks will be posted. The rcu_barrier() call on line 53 waits
for any pre-existing callbacks to complete.

Then lines 55-62 print status and do operation-specific cleanup, and
then return, permitting the module-unload operation to be completed.

Quick Quiz #1: Is there any other situation where rcu_barrier() might
	be required?

Your module might have additional complications. For example, if your
module invokes call_rcu() from timers, you will need to first cancel all
the timers, and only then invoke rcu_barrier() to wait for any remaining
RCU callbacks to complete.

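For instance, with a single timer whose handler posts RCU callbacks,
the exit path might do the following (a minimal sketch; my_timer is a
hypothetical timer owned by the module):

	del_timer_sync(&my_timer);	/* Handler done; no new call_rcu(). */
	rcu_barrier();			/* Wait for already-posted callbacks. */
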
Of course, if your module uses call_rcu_bh(), you will need to invoke
rcu_barrier_bh() before unloading. Similarly, if your module uses
call_rcu_sched(), you will need to invoke rcu_barrier_sched() before
unloading. If your module uses call_rcu(), call_rcu_bh(), -and-
call_rcu_sched(), then you will need to invoke each of rcu_barrier(),
rcu_barrier_bh(), and rcu_barrier_sched().

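In that three-flavor case, the module-exit path would simply invoke all
three barriers in turn:

	rcu_barrier();		/* Wait for call_rcu() callbacks. */
	rcu_barrier_bh();	/* Wait for call_rcu_bh() callbacks. */
	rcu_barrier_sched();	/* Wait for call_rcu_sched() callbacks. */
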

Implementing rcu_barrier()

Dipankar Sarma's implementation of rcu_barrier() makes use of the fact
that RCU callbacks are never reordered once queued on one of the per-CPU
queues. His implementation queues an RCU callback on each of the per-CPU
callback queues, and then waits until they have all started executing, at
which point, all earlier RCU callbacks are guaranteed to have completed.

The original code for rcu_barrier() was as follows:

 1 void rcu_barrier(void)
 2 {
 3   BUG_ON(in_interrupt());
 4   /* Take cpucontrol mutex to protect against CPU hotplug */
 5   mutex_lock(&rcu_barrier_mutex);
 6   init_completion(&rcu_barrier_completion);
 7   atomic_set(&rcu_barrier_cpu_count, 0);
 8   on_each_cpu(rcu_barrier_func, NULL, 0, 1);
 9   wait_for_completion(&rcu_barrier_completion);
10   mutex_unlock(&rcu_barrier_mutex);
11 }

Line 3 verifies that the caller is in process context, and lines 5 and 10
use rcu_barrier_mutex to ensure that only one rcu_barrier() is using the
global completion and counters at a time, which are initialized on lines
6 and 7. Line 8 causes each CPU to invoke rcu_barrier_func(), which is
shown below. Note that the final "1" in on_each_cpu()'s argument list
ensures that all the calls to rcu_barrier_func() will have completed
before on_each_cpu() returns. Line 9 then waits for the completion.

This code was rewritten in 2008 to support rcu_barrier_bh() and
rcu_barrier_sched() in addition to the original rcu_barrier().

The rcu_barrier_func() runs on each CPU, where it invokes call_rcu()
to post an RCU callback, as follows:

 1 static void rcu_barrier_func(void *notused)
 2 {
 3   int cpu = smp_processor_id();
 4   struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
 5   struct rcu_head *head;
 6
 7   head = &rdp->barrier;
 8   atomic_inc(&rcu_barrier_cpu_count);
 9   call_rcu(head, rcu_barrier_callback);
10 }

Lines 3 and 4 locate RCU's internal per-CPU rcu_data structure,
which contains the struct rcu_head that is needed for the later call to
call_rcu(). Line 7 picks up a pointer to this struct rcu_head, and line
8 increments a global counter. This counter will later be decremented
by the callback. Line 9 then registers the rcu_barrier_callback() on
the current CPU's queue.

The rcu_barrier_callback() function simply atomically decrements the
rcu_barrier_cpu_count variable and finalizes the completion when it
reaches zero, as follows:

 1 static void rcu_barrier_callback(struct rcu_head *notused)
 2 {
 3   if (atomic_dec_and_test(&rcu_barrier_cpu_count))
 4     complete(&rcu_barrier_completion);
 5 }

Quick Quiz #2: What happens if CPU 0's rcu_barrier_func() executes
	immediately (thus incrementing rcu_barrier_cpu_count to the
	value one), but the other CPUs' rcu_barrier_func() invocations
	are delayed for a full grace period? Couldn't this result in
	rcu_barrier() returning prematurely?


rcu_barrier() Summary

The rcu_barrier() primitive has seen relatively little use, since most
code using RCU is in the core kernel rather than in modules. However, if
you are using RCU from an unloadable module, you need to use rcu_barrier()
so that your module may be safely unloaded.


Answers to Quick Quizzes

Quick Quiz #1: Is there any other situation where rcu_barrier() might
	be required?

Answer: Interestingly enough, rcu_barrier() was not originally
	implemented for module unloading. Nikita Danilov was using
	RCU in a filesystem, which resulted in a similar situation at
	filesystem-unmount time. Dipankar Sarma coded up rcu_barrier()
	in response, so that Nikita could invoke it during the
	filesystem-unmount process.

	Much later, yours truly hit the RCU module-unload problem when
	implementing rcutorture, and found that rcu_barrier() solves
	this problem as well.

Quick Quiz #2: What happens if CPU 0's rcu_barrier_func() executes
	immediately (thus incrementing rcu_barrier_cpu_count to the
	value one), but the other CPUs' rcu_barrier_func() invocations
	are delayed for a full grace period? Couldn't this result in
	rcu_barrier() returning prematurely?

Answer: This cannot happen. The reason is that on_each_cpu() has its last
	argument, the wait flag, set to "1". This flag is passed through
	to smp_call_function() and further to smp_call_function_on_cpu(),
	causing the latter to spin until the cross-CPU invocation of
	rcu_barrier_func() has completed. This by itself would prevent
	a grace period from completing on non-CONFIG_PREEMPT kernels,
	since each CPU must undergo a context switch (or other quiescent
	state) before the grace period can complete. However, this is
	of no use in CONFIG_PREEMPT kernels.

	Therefore, on_each_cpu() disables preemption across its call
	to smp_call_function() and also across the local call to
	rcu_barrier_func(). This prevents the local CPU from context
	switching, again preventing grace periods from completing. This
	means that all CPUs have executed rcu_barrier_func() before
	the first rcu_barrier_callback() can possibly execute, in turn
	preventing rcu_barrier_cpu_count from prematurely reaching zero.

	Currently, -rt implementations of RCU keep but a single global
	queue for RCU callbacks, and thus do not suffer from this
	problem. However, when the -rt RCU eventually does have per-CPU
	callback queues, things will have to change. One simple change
	is to add an rcu_read_lock() before line 8 of rcu_barrier()
	and an rcu_read_unlock() after line 8 of this same function. If
	you can think of a better change, please let me know!
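
	With that change, the relevant portion of rcu_barrier() would
	read as follows (a sketch of the suggested modification, not
	code from any kernel release):

		rcu_read_lock();	/* Hold off grace periods... */
		on_each_cpu(rcu_barrier_func, NULL, 0, 1);
		rcu_read_unlock();	/* ...until every CPU queues its callback. */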