.. _array_rcu_doc:

Using RCU to Protect Read-Mostly Arrays
=======================================

Although RCU is more commonly used to protect linked lists, it can
also be used to protect arrays. Three situations are as follows:

1. :ref:`Hash Tables <hash_tables>`

2. :ref:`Static Arrays <static_arrays>`

3. :ref:`Resizable Arrays <resizable_arrays>`

Each of these three situations involves an RCU-protected pointer to an
array that is separately indexed. It might be tempting to consider use
of RCU to instead protect the index into an array; however, this use
case is **not** supported. The problem with RCU-protected indexes into
arrays is that compilers can play far too many optimization games with
integers, which means that the rules governing the handling of these
indexes are more trouble than they are worth. If RCU-protected indexes
into arrays prove to be particularly valuable (which they have not thus
far), explicit cooperation from the compiler will be required to permit
them to be safely used.

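The supported pattern, in contrast, has readers pick up the array
*pointer* and then index into the snapshot so obtained. A minimal
userspace sketch using C11 atomics as stand-ins (a release store for
rcu_assign_pointer() and an acquire load for rcu_dereference(); the
names publish() and read_entry() are hypothetical, and a real updater
would also have to wait for a grace period before freeing the old
array)::

	#include <stdatomic.h>
	#include <stdlib.h>

	static _Atomic(int *) gp;	/* RCU-protected pointer to the array */

	/* Updater: initialize the new array first, then publish the
	 * pointer.  The release store orders initialization before
	 * publication, as rcu_assign_pointer() does. */
	static void publish(int *newp)
	{
		atomic_store_explicit(&gp, newp, memory_order_release);
	}

	/* Reader: subscribe to the pointer, then index the snapshot.
	 * Note that it is the pointer that is protected, never the
	 * integer index. */
	static int read_entry(int i)
	{
		int *p = atomic_load_explicit(&gp, memory_order_acquire);

		return p[i];
	}

Protecting the integer index instead would require the compiler to
preserve ordering through arbitrary integer arithmetic, which is exactly
the cooperation described above as missing.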
That aside, each of the three RCU-protected pointer situations is
described in the following sections.

.. _hash_tables:

Situation 1: Hash Tables
------------------------

Hash tables are often implemented as an array, where each array entry
has a linked-list hash chain. Each hash chain can be protected by RCU
as described in the listRCU.rst document. This approach also applies
to other array-of-list situations, such as radix trees.

.. _static_arrays:

Situation 2: Static Arrays
--------------------------

Static arrays, where the data (rather than a pointer to the data) is
located in each array element, and where the array is never resized,
have not been used with RCU. Rik van Riel recommends using seqlock in
this situation, which would also have minimal read-side overhead as long
as updates are rare.

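The seqlock read/update protocol can be sketched in userspace C11 as
follows (a hypothetical analogue built on a bare sequence counter; the
kernel's seqlock_t additionally serializes writers with a spinlock and
supplies the required memory barriers)::

	#include <stdatomic.h>

	#define N 4

	static atomic_uint seq;	/* even: stable; odd: update in progress */
	static int data[N];

	/* Writer: make the sequence odd, update, make it even again. */
	static void update(int i, int val)
	{
		atomic_fetch_add_explicit(&seq, 1, memory_order_acq_rel);
		data[i] = val;
		atomic_fetch_add_explicit(&seq, 1, memory_order_release);
	}

	/* Reader: retry until an even, unchanged sequence brackets the
	 * read.  A stream of frequent updates keeps changing seq,
	 * forcing readers to retry indefinitely -- hence the
	 * requirement that updates be rare. */
	static int read_entry(int i)
	{
		unsigned int s;
		int val;

		do {
			s = atomic_load_explicit(&seq, memory_order_acquire);
			val = data[i];
		} while ((s & 1) ||
			 s != atomic_load_explicit(&seq, memory_order_acquire));
		return val;
	}
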
Quick Quiz:
	Why is it so important that updates be rare when using seqlock?

:ref:`Answer to Quick Quiz <answer_quick_quiz_seqlock>`

.. _resizable_arrays:

Situation 3: Resizable Arrays
-----------------------------

Use of RCU for resizable arrays is demonstrated by the grow_ary()
function formerly used by the System V IPC code. The array is used
to map from semaphore, message-queue, and shared-memory IDs to the data
structure that represents the corresponding IPC construct. The grow_ary()
function does not acquire any locks; instead its caller must hold the
ids->sem semaphore.

The grow_ary() function, shown below, does some limit checks, allocates
a new ipc_id_ary, copies the old array's contents into the beginning of
the new one, initializes the remainder of the new array, updates the
ids->entries pointer to point to the new array, and invokes
ipc_rcu_putref() to free up the old array. Note that rcu_assign_pointer()
is used to update the ids->entries pointer, which includes any memory
barriers required on whatever architecture you are running on::

	static int grow_ary(struct ipc_ids* ids, int newsize)
	{
		struct ipc_id_ary* new;
		struct ipc_id_ary* old;
		int i;
		int size = ids->entries->size;

		if (newsize > IPCMNI)
			newsize = IPCMNI;
		if (newsize <= size)
			return newsize;

		new = ipc_rcu_alloc(sizeof(struct kern_ipc_perm *)*newsize +
				    sizeof(struct ipc_id_ary));
		if (new == NULL)
			return size;
		new->size = newsize;
		memcpy(new->p, ids->entries->p,
		       sizeof(struct kern_ipc_perm *)*size +
		       sizeof(struct ipc_id_ary));
		for (i = size; i < newsize; i++)
			new->p[i] = NULL;
		old = ids->entries;

		/*
		 * Use rcu_assign_pointer() to make sure the memcpyed
		 * contents of the new array are visible before the new
		 * array becomes visible.
		 */
		rcu_assign_pointer(ids->entries, new);

		ipc_rcu_putref(old);
		return newsize;
	}

The ipc_rcu_putref() function decrements the array's reference count
and then, if the reference count has dropped to zero, uses call_rcu()
to free the array after a grace period has elapsed.

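That reference-counting scheme can be sketched as follows (a
hypothetical userspace analogue: the names ary_alloc(), ary_getref(),
and ary_putref() are invented here, and the immediate free() stands in
for handing the array to call_rcu(), which in the kernel frees it only
after a grace period)::

	#include <stdatomic.h>
	#include <stdlib.h>

	struct ipc_id_ary_hdr {
		atomic_int refcount;
		int size;
		/* array payload would follow */
	};

	static struct ipc_id_ary_hdr *ary_alloc(int size)
	{
		struct ipc_id_ary_hdr *h = malloc(sizeof(*h));

		atomic_init(&h->refcount, 1);
		h->size = size;
		return h;
	}

	static void ary_getref(struct ipc_id_ary_hdr *h)
	{
		atomic_fetch_add(&h->refcount, 1);
	}

	/* Analogue of ipc_rcu_putref(): drop a reference and free on
	 * the last put.  The kernel instead defers the free via
	 * call_rcu() so that readers still traversing the old array
	 * under rcu_read_lock() remain safe. */
	static int ary_putref(struct ipc_id_ary_hdr *h)
	{
		if (atomic_fetch_sub(&h->refcount, 1) == 1) {
			free(h);	/* kernel: call_rcu() callback */
			return 1;	/* freed */
		}
		return 0;		/* still referenced */
	}
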
The array is traversed by the ipc_lock() function. This function
indexes into the array under the protection of rcu_read_lock(),
using rcu_dereference() to pick up the pointer to the array so
that it may later safely be dereferenced -- memory barriers are
required on the Alpha CPU. Since the size of the array is stored
with the array itself, there can be no array-size mismatches, so
a simple check suffices. The pointer to the structure corresponding
to the desired IPC object is placed in "out", with NULL indicating
a non-existent entry. After acquiring "out->lock", the "out->deleted"
flag indicates whether the IPC object is in the process of being
deleted, and, if not, the pointer is returned::

	struct kern_ipc_perm* ipc_lock(struct ipc_ids* ids, int id)
	{
		struct kern_ipc_perm* out;
		int lid = id % SEQ_MULTIPLIER;
		struct ipc_id_ary* entries;

		rcu_read_lock();
		entries = rcu_dereference(ids->entries);
		if (lid >= entries->size) {
			rcu_read_unlock();
			return NULL;
		}
		out = entries->p[lid];
		if (out == NULL) {
			rcu_read_unlock();
			return NULL;
		}
		spin_lock(&out->lock);

		/*
		 * ipc_rmid() may have already freed the ID while ipc_lock()
		 * was spinning: here verify that the structure is still valid.
		 */
		if (out->deleted) {
			spin_unlock(&out->lock);
			rcu_read_unlock();
			return NULL;
		}
		return out;
	}

.. _answer_quick_quiz_seqlock:

Answer to Quick Quiz:
	Why is it so important that updates be rare when using seqlock?

	The reason that it is important that updates be rare when
	using seqlock is that frequent updates can livelock readers.
	One way to avoid this problem is to assign a seqlock to
	each array entry rather than to the entire array.