==============================
UNEVICTABLE LRU INFRASTRUCTURE
==============================

========
CONTENTS
========

 (*) The Unevictable LRU

     - The unevictable page list.
     - Memory control group interaction.
     - Marking address spaces unevictable.
     - Detecting Unevictable Pages.
     - vmscan's handling of unevictable pages.

 (*) mlock()'d pages.

     - History.
     - Basic management.
     - mlock()/mlockall() system call handling.
     - Filtering special vmas.
     - munlock()/munlockall() system call handling.
     - Migrating mlocked pages.
     - Compacting mlocked pages.
     - Mlocking transparent huge pages.
     - mmap(MAP_LOCKED) system call handling.
     - munmap()/exit()/exec() system call handling.
     - try_to_unmap().
     - try_to_munlock() reverse map scan.
     - Page reclaim in shrink_*_list().


============
INTRODUCTION
============

This document describes the Linux memory manager's "Unevictable LRU"
infrastructure and the use of this to manage several types of "unevictable"
pages.

The document attempts to provide the overall rationale behind this mechanism
and the rationale for some of the design decisions that drove the
implementation.  The latter design rationale is discussed in the context of an
implementation description.  Admittedly, one can obtain the implementation
details - the "what does it do?" - by reading the code.  One hopes that the
descriptions below add value by providing the answer to "why does it do
that?".


===================
THE UNEVICTABLE LRU
===================

The Unevictable LRU facility adds an additional LRU list to track unevictable
pages and to hide these pages from vmscan.  This mechanism is based on a patch
by Larry Woodman of Red Hat to address several scalability problems with page
reclaim in Linux.  The problems have been observed at customer sites on large
memory x86_64 systems.

To illustrate this with an example, a non-NUMA x86_64 platform with 128GB of
main memory will have over 32 million 4k pages in a single zone.  When a large
fraction of these pages are not evictable for any reason [see below], vmscan
will spend a lot of time scanning the LRU lists looking for the small fraction
of pages that are evictable.  This can result in a situation where all CPUs
are spending 100% of their time in vmscan for hours or days on end, with the
system completely unresponsive.

The unevictable list addresses the following classes of unevictable pages:

 (*) Those owned by ramfs.

 (*) Those mapped into SHM_LOCK'd shared memory regions.

 (*) Those mapped into VM_LOCKED [mlock()ed] VMAs.

The infrastructure may also be able to handle other conditions that make pages
unevictable, either by definition or by circumstance, in the future.


THE UNEVICTABLE PAGE LIST
-------------------------

The Unevictable LRU infrastructure consists of an additional, per-zone, LRU
list called the "unevictable" list and an associated page flag,
PG_unevictable, to indicate that the page is being managed on the unevictable
list.

The PG_unevictable flag is analogous to, and mutually exclusive with, the
PG_active flag in that it indicates on which LRU list a page resides when
PG_lru is set.

The Unevictable LRU infrastructure maintains unevictable pages on an
additional LRU list for a few reasons:

 (1) We get to "treat unevictable pages just like we treat other pages in the
     system - which means we get to use the same code to manipulate them, the
     same code to isolate them (for migrate, etc.), the same code to keep
     track of the statistics, etc..." [Rik van Riel]

 (2) We want to be able to migrate unevictable pages between nodes for memory
     defragmentation, workload management and memory hotplug.  The linux
     kernel can only migrate pages that it can successfully isolate from the
     LRU lists.  If we were to maintain pages elsewhere than on an LRU-like
     list, where they can be found by isolate_lru_page(), we would prevent
     their migration, unless we reworked migration code to find the
     unevictable pages itself.

The unevictable list does not differentiate between file-backed and anonymous,
swap-backed pages.  This differentiation is only important while the pages
are, in fact, evictable.

The unevictable list benefits from the "arrayification" of the per-zone LRU
lists and statistics originally proposed and posted by Christoph Lameter.

The unevictable list does not use the LRU pagevec mechanism.  Rather,
unevictable pages are placed directly on the page's zone's unevictable list
under the zone lru_lock.  This allows us to prevent the stranding of pages on
the unevictable list when one task has the page isolated from the LRU and
other tasks are changing the "evictability" state of the page.
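
As a rough sketch of this direct (non-pagevec) placement, modelled on
add_page_to_unevictable_list() in mm/vmscan.c (the lruvec lookup and locking
details vary across kernel versions):

    static void add_page_to_unevictable_list(struct page *page)
    {
            struct zone *zone = page_zone(page);
            struct lruvec *lruvec;

            /* no pagevec: take the zone lock and go straight to the list */
            spin_lock_irq(&zone->lru_lock);
            lruvec = mem_cgroup_page_lruvec(page, zone);
            ClearPageActive(page);
            SetPageUnevictable(page);   /* PG_unevictable names the list */
            SetPageLRU(page);
            add_page_to_lru_list(page, lruvec, LRU_UNEVICTABLE);
            spin_unlock_irq(&zone->lru_lock);
    }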


MEMORY CONTROL GROUP INTERACTION
--------------------------------

The unevictable LRU facility interacts with the memory control group [aka
memory controller; see Documentation/cgroup-v1/memory.txt] by extending the
lru_list enum.

The memory controller data structure automatically gets a per-zone unevictable
list as a result of the "arrayification" of the per-zone LRU lists (one per
lru_list enum element).  The memory controller tracks the movement of pages to
and from the unevictable list.

When a memory control group comes under memory pressure, the controller will
not attempt to reclaim pages on the unevictable list.  This has a couple of
effects:

 (1) Because the pages are "hidden" from reclaim on the unevictable list, the
     reclaim process can be more efficient, dealing only with pages that have
     a chance of being reclaimed.

 (2) On the other hand, if too many of the pages charged to the control group
     are unevictable, the evictable portion of the working set of the tasks in
     the control group may not fit into the available memory.  This can cause
     the control group to thrash or to OOM-kill tasks.


MARKING ADDRESS SPACES UNEVICTABLE
----------------------------------

For facilities such as ramfs none of the pages attached to the address space
may be evicted.  To prevent eviction of any such pages, the AS_UNEVICTABLE
address space flag is provided, and this can be manipulated by a filesystem
using a number of wrapper functions:

 (*) void mapping_set_unevictable(struct address_space *mapping);

     Mark the address space as being completely unevictable.

 (*) void mapping_clear_unevictable(struct address_space *mapping);

     Mark the address space as being evictable.

 (*) int mapping_unevictable(struct address_space *mapping);

     Query the address space, and return true if it is completely
     unevictable.
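
These wrappers are thin wrappers around atomic bit operations on the address
space's flags word; roughly (modelled on include/linux/pagemap.h - details
may differ between kernel versions):

    static inline void mapping_set_unevictable(struct address_space *mapping)
    {
            set_bit(AS_UNEVICTABLE, &mapping->flags);
    }

    static inline void mapping_clear_unevictable(struct address_space *mapping)
    {
            clear_bit(AS_UNEVICTABLE, &mapping->flags);
    }

    static inline int mapping_unevictable(struct address_space *mapping)
    {
            if (mapping)
                    return test_bit(AS_UNEVICTABLE, &mapping->flags);
            return !!mapping;       /* NULL mapping: not unevictable */
    }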

These are currently used in two places in the kernel:

 (1) By ramfs to mark the address spaces of its inodes when they are created,
     and this mark remains for the life of the inode.

 (2) By SYSV SHM to mark SHM_LOCK'd address spaces until SHM_UNLOCK is called.

     Note that SHM_LOCK is not required to page in the locked pages if they're
     swapped out; the application must touch the pages manually if it wants to
     ensure they're in memory (see the example below).
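
As an illustration of point (2) and the note above, a minimal userspace
sequence might look like the following (error handling omitted; SHM_LOCK may
also require CAP_IPC_LOCK or a sufficient RLIMIT_MEMLOCK):

    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <stddef.h>

    int main(void)
    {
            size_t size = 16 * 4096;
            int id = shmget(IPC_PRIVATE, size, IPC_CREAT | 0600);
            char *p = shmat(id, NULL, 0);

            shmctl(id, SHM_LOCK, NULL); /* marks the mapping unevictable... */

            /* ...but does not fault anything in: touch each page to make
               sure the data is actually resident */
            for (size_t off = 0; off < size; off += 4096)
                    p[off] = 0;

            shmdt(p);
            shmctl(id, IPC_RMID, NULL);
            return 0;
    }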


DETECTING UNEVICTABLE PAGES
---------------------------

The function page_evictable() in vmscan.c determines whether a page is
evictable or not using the query function outlined above [see section "Marking
address spaces unevictable"] to check the AS_UNEVICTABLE flag.

For address spaces that are so marked after being populated (as SHM regions
might be), the lock action (eg: SHM_LOCK) can be lazy, and need not populate
the page tables for the region as does, for example, mlock(), nor need it make
any special effort to push any pages in the SHM_LOCK'd area to the unevictable
list.  Instead, vmscan will do this if and when it encounters the pages during
a reclamation scan.

On an unlock action (such as SHM_UNLOCK), the unlocker (eg: shmctl()) must
scan the pages in the region and "rescue" them from the unevictable list if no
other condition is keeping them unevictable.  If an unevictable region is
destroyed, the pages are also "rescued" from the unevictable list in the
process of freeing them.

page_evictable() also checks for mlocked pages by testing an additional page
flag, PG_mlocked (as wrapped by PageMlocked()), which is set when a page is
faulted into a VM_LOCKED vma, or found in a vma being VM_LOCKED.
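
Putting the two tests together, page_evictable() reduces to something like
this (modelled on mm/vmscan.c; the exact signature has changed over time):

    /* A page is evictable unless its mapping is marked AS_UNEVICTABLE or
       the page itself is PG_mlocked. */
    int page_evictable(struct page *page)
    {
            return !mapping_unevictable(page_mapping(page)) &&
                   !PageMlocked(page);
    }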


VMSCAN'S HANDLING OF UNEVICTABLE PAGES
--------------------------------------

If unevictable pages are culled in the fault path, or moved to the unevictable
list at mlock() or mmap() time, vmscan will not encounter the pages until they
have become evictable again (via munlock() for example) and have been
"rescued" from the unevictable list.  However, there may be situations where
we decide, for the sake of expediency, to leave an unevictable page on one of
the regular active/inactive LRU lists for vmscan to deal with.  vmscan checks
for such pages in all of the shrink_{active|inactive|page}_list() functions
and will "cull" such pages that it encounters: that is, it diverts those pages
to the unevictable list for the zone being scanned.

There may be situations where a page is mapped into a VM_LOCKED VMA, but the
page is not marked as PG_mlocked.  Such pages will make it all the way to
shrink_page_list() where they will be detected when vmscan walks the reverse
map in try_to_unmap().  If try_to_unmap() returns SWAP_MLOCK,
shrink_page_list() will cull the page at that point.

To "cull" an unevictable page, vmscan simply puts the page back on the LRU
list using putback_lru_page() - the inverse operation to isolate_lru_page() -
after dropping the page lock.  Because the condition which makes the page
unevictable may change once the page is unlocked, putback_lru_page() will
recheck the unevictable state of a page that it places on the unevictable
list.  If the page has since become evictable again, putback_lru_page()
removes it from the list and retries, including the page_evictable() test.
Because such a race is a rare event and movement of pages onto the unevictable
list should be rare, these extra evictability checks should not occur in the
majority of calls to putback_lru_page().
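
A condensed sketch of that recheck/retry loop, modelled on putback_lru_page()
in mm/vmscan.c (statistics, memcg details and reference counting subtleties
omitted):

    void putback_lru_page(struct page *page)
    {
            VM_BUG_ON(PageLRU(page));
    redo:
            ClearPageUnevictable(page);
            if (page_evictable(page)) {
                    lru_cache_add(page);    /* regular active/inactive list */
            } else {
                    /* sets PG_unevictable, see the sketch earlier */
                    add_page_to_unevictable_list(page);
            }
            /* The page is unlocked here, so its state may have changed while
               we were putting it on the unevictable list.  If it is now
               evictable again, pull it back off and retry. */
            if (PageUnevictable(page) && page_evictable(page) &&
                !isolate_lru_page(page)) {
                    put_page(page);
                    goto redo;
            }
            put_page(page); /* drop the isolate_lru_page() reference */
    }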


=============
MLOCKED PAGES
=============

The unevictable page list is also useful for mlock(), in addition to ramfs and
SYSV SHM.  Note that mlock() is only available in CONFIG_MMU=y situations; in
NOMMU situations, all mappings are effectively mlocked.


HISTORY
-------

The "Unevictable mlocked Pages" infrastructure is based on work originally
posted by Nick Piggin in an RFC patch entitled "mm: mlocked pages off LRU".
Nick posted his patch as an alternative to a patch posted by Christoph Lameter
to achieve the same objective: hiding mlocked pages from vmscan.

In Nick's patch, he used one of the struct page LRU list link fields as a
count of VM_LOCKED VMAs that map the page.  This use of the link field for a
count prevented the management of the pages on an LRU list, and thus mlocked
pages were not migratable as isolate_lru_page() could not find them, and the
LRU list link field was not available to the migration subsystem.

Nick resolved this by putting mlocked pages back on the lru list before
attempting to isolate them, thus abandoning the count of VM_LOCKED VMAs.  When
Nick's patch was integrated with the Unevictable LRU work, the count was
replaced by walking the reverse map to determine whether any VM_LOCKED VMAs
mapped the page.  More on this below.


BASIC MANAGEMENT
----------------

mlocked pages - pages mapped into a VM_LOCKED VMA - are a class of unevictable
pages.  When such a page has been "noticed" by the memory management
subsystem, the page is marked with the PG_mlocked flag.  This can be
manipulated using the PageMlocked() functions.

A PG_mlocked page will be placed on the unevictable list when it is added to
the LRU.  Such pages can be "noticed" by memory management in several places:

 (1) in the mlock()/mlockall() system call handlers;

 (2) in the mmap() system call handler when mmapping a region with the
     MAP_LOCKED flag;

 (3) mmapping a region in a task that has called mlockall() with the
     MCL_FUTURE flag;

 (4) in the fault path, if mlocked pages are "culled" in the fault path, and
     when a VM_LOCKED stack segment is expanded; or

 (5) as mentioned above, in vmscan:shrink_page_list() when attempting to
     reclaim a page in a VM_LOCKED VMA via try_to_unmap();

all of which result in the VM_LOCKED flag being set for the VMA if it doesn't
already have it set.

mlocked pages become unlocked and rescued from the unevictable list when:

 (1) mapped in a range unlocked via the munlock()/munlockall() system calls;

 (2) munmap()'d out of the last VM_LOCKED VMA that maps the page, including
     unmapping at task exit;

 (3) when the page is truncated from the last VM_LOCKED VMA of an mmapped
     file; or

 (4) before a page is COW'd in a VM_LOCKED VMA.


mlock()/mlockall() SYSTEM CALL HANDLING
---------------------------------------

Both [do_]mlock() and [do_]mlockall() system call handlers call mlock_fixup()
for each VMA in the range specified by the call.  In the case of mlockall(),
this is the entire active address space of the task.  Note that mlock_fixup()
is used for both mlocking and munlocking a range of memory.  A call to mlock()
an already VM_LOCKED VMA, or to munlock() a VMA that is not VM_LOCKED, is
treated as a no-op, and mlock_fixup() simply returns.

If the VMA passes some filtering as described in "Filtering Special Vmas"
below, mlock_fixup() will attempt to merge the VMA with its neighbors or split
off a subset of the VMA if the range does not cover the entire VMA.  Once the
VMA has been merged or split or neither, mlock_fixup() will call
populate_vma_page_range() to fault in the pages via get_user_pages() and to
mark the pages as mlocked via mlock_vma_page().

Note that the VMA being mlocked might be mapped with PROT_NONE.  In this case,
get_user_pages() will be unable to fault in the pages.  That's okay.  If pages
do end up getting faulted into this VM_LOCKED VMA, we'll handle them in the
fault path or in vmscan.

Also note that a page returned by get_user_pages() could be truncated or
migrated out from under us while we're trying to mlock it.  To detect this,
populate_vma_page_range() checks page_mapping() after acquiring the page lock.
If the page is still associated with its mapping, we'll go ahead and call
mlock_vma_page().  If the mapping is gone, we just unlock the page and move
on.  In the worst case, this will result in a page mapped in a VM_LOCKED VMA
remaining on a normal LRU list without being PageMlocked().  Again, vmscan
will detect and cull such pages.

mlock_vma_page() will call TestSetPageMlocked() for each page returned by
get_user_pages().  We use TestSetPageMlocked() because the page might already
be mlocked by another task/VMA and we don't want to do extra work.  We
especially do not want to count an mlocked page more than once in the
statistics.  If the page was already mlocked, mlock_vma_page() need do nothing
more.

If the page was NOT already mlocked, mlock_vma_page() attempts to isolate the
page from the LRU, as it is likely on the appropriate active or inactive list
at that time.  If isolate_lru_page() succeeds, mlock_vma_page() will put back
the page - by calling putback_lru_page() - which will notice that the page is
now mlocked and divert the page to the zone's unevictable list.  If
mlock_vma_page() is unable to isolate the page from the LRU, vmscan will
handle it later if and when it attempts to reclaim the page.


FILTERING SPECIAL VMAS
----------------------

mlock_fixup() filters several classes of "special" VMAs:

1) VMAs with VM_IO or VM_PFNMAP set are skipped entirely.  The pages behind
   these mappings are inherently pinned, so we don't need to mark them as
   mlocked.  In any case, most of the pages have no struct page in which to
   mark them as mlocked.  Because of this, get_user_pages() will fail for
   these VMAs, so there is no sense in attempting to visit them.

2) VMAs mapping hugetlbfs pages are already effectively pinned into memory.
   We neither need nor want to mlock() these pages.  However, to preserve the
   prior behavior of mlock() - before the unevictable/mlock changes -
   mlock_fixup() will call make_pages_present() in the hugetlbfs VMA range to
   allocate the huge pages and populate the ptes.

3) VMAs with VM_DONTEXPAND are generally userspace mappings of kernel pages,
   such as the VDSO page, relay channel pages, etc.  These pages are
   inherently unevictable and are not managed on the LRU lists.
   mlock_fixup() treats these VMAs the same as hugetlbfs VMAs.  It calls
   make_pages_present() to populate the ptes.

Note that for all of these special VMAs, mlock_fixup() does not set the
VM_LOCKED flag.  Therefore, we won't have to deal with them later during
munlock(), munmap() or task exit.  Neither does mlock_fixup() account these
VMAs against the task's "locked_vm".
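
In the source, this filtering boils down to a single test near the top of
mlock_fixup() (a sketch modelled on mm/mlock.c; VM_SPECIAL covers VM_IO,
VM_PFNMAP, VM_DONTEXPAND and VM_MIXEDMAP, and details vary by version):

    if (newflags == vma->vm_flags || (vma->vm_flags & VM_SPECIAL) ||
        is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm))
            /* special VMA: don't set VM_LOCKED, don't count the pages
               in locked_vm - just succeed without doing anything more */
            goto out;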


munlock()/munlockall() SYSTEM CALL HANDLING
-------------------------------------------

The munlock() and munlockall() system calls are handled by the same functions
- do_mlock[all]() - as the mlock() and mlockall() system calls with the unlock
vs lock operation indicated by an argument.  So, these system calls are also
handled by mlock_fixup().  Again, if called for an already munlocked VMA,
mlock_fixup() simply returns.  Because of the VMA filtering discussed above,
VM_LOCKED will not be set in any "special" VMAs.  So, these VMAs will be
ignored for munlock.

If the VMA is VM_LOCKED, mlock_fixup() again attempts to merge or split off
the specified range.  The range is then munlocked via the function
populate_vma_page_range() - the same function used to mlock a VMA range -
passing a flag to indicate that munlock() is being performed.

Because the VMA access protections could have been changed to PROT_NONE after
faulting in and mlocking pages, get_user_pages() was unreliable for visiting
these pages for munlocking.  Because we don't want to leave pages mlocked,
get_user_pages() was enhanced to accept a flag to ignore the permissions when
fetching the pages - all of which should be resident as a result of previous
mlocking.

For munlock(), populate_vma_page_range() unlocks individual pages by calling
munlock_vma_page().  munlock_vma_page() unconditionally clears the PG_mlocked
flag using TestClearPageMlocked().  As with mlock_vma_page(),
munlock_vma_page() uses the Test*PageMlocked() function to handle the case
where the page might have already been unlocked by another task.  If the page
was mlocked, munlock_vma_page() updates the zone statistics for the number of
mlocked pages.  Note, however, that at this point we haven't checked whether
the page is mapped by other VM_LOCKED VMAs.

We can't call try_to_munlock(), the function that walks the reverse map to
check for other VM_LOCKED VMAs, without first isolating the page from the LRU.
try_to_munlock() is a variant of try_to_unmap() and thus requires that the
page not be on an LRU list [more on these below].  However, the call to
isolate_lru_page() could fail, in which case we couldn't try_to_munlock().
So, we go ahead and clear PG_mlocked up front, as this might be the only
chance we have.  If we can successfully isolate the page, we go ahead and
try_to_munlock(), which will restore the PG_mlocked flag and update the zone
page statistics if it finds another VMA holding the page mlocked.  If we fail
to isolate the page, we'll have left a potentially mlocked page on the LRU.
This is fine, because we'll catch it later if and when vmscan tries to reclaim
the page.  This should be relatively rare.
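
Condensed, the munlock side described above looks roughly like this (modelled
on munlock_vma_page() and its helper in mm/mlock.c; THP handling and pagevec
batching omitted):

    /* must be called with the page locked */
    void munlock_vma_page(struct page *page)
    {
            BUG_ON(!PageLocked(page));

            /* clear PG_mlocked up front - this may be our only chance */
            if (TestClearPageMlocked(page)) {
                    mod_zone_page_state(page_zone(page), NR_MLOCK,
                                        -hpage_nr_pages(page));
                    if (!isolate_lru_page(page)) {
                            /* Ask the reverse map whether any other
                               VM_LOCKED VMA still maps the page; if so,
                               try_to_munlock() re-sets PG_mlocked and fixes
                               up the statistics via mlock_vma_page(). */
                            try_to_munlock(page);
                            putback_lru_page(page);
                    }
                    /* isolation failure: a potentially mlocked page stays
                       on the LRU; vmscan will catch it later */
            }
    }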


MIGRATING MLOCKED PAGES
-----------------------

A page that is being migrated has been isolated from the LRU lists and is held
locked across unmapping of the page, updating the page's address space entry
and copying the contents and state, until the page table entry has been
replaced with an entry that refers to the new page.  Linux supports migration
of mlocked pages and other unevictable pages.  This involves simply moving the
PG_mlocked and PG_unevictable states from the old page to the new page.
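
That state transfer amounts to moving the flags while migration still holds
the old page locked; roughly (modelled on the flag copying in
migrate_page_copy() and on mlock_migrate_page() in mm/internal.h):

    /* transfer PG_unevictable... */
    if (TestClearPageUnevictable(page))
            SetPageUnevictable(newpage);

    /* ...and PG_mlocked, keeping the NR_MLOCK counters balanced */
    if (TestClearPageMlocked(page)) {
            int nr_pages = hpage_nr_pages(page);

            __mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
            SetPageMlocked(newpage);
            __mod_zone_page_state(page_zone(newpage), NR_MLOCK, nr_pages);
    }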

Note that page migration can race with mlocking or munlocking of the same
page.  This has been discussed from the mlock/munlock perspective in the
respective sections above.  Both processes (migration and m[un]locking) hold
the page locked.  This provides the first level of synchronization.  Page
migration zeros out the page_mapping of the old page before unlocking it, so
m[un]lock can skip these pages by testing the page mapping under page lock.

To complete page migration, we place the new and old pages back onto the LRU
after dropping the page lock.  The "unneeded" page - old page on success, new
page on failure - will be freed when the reference count held by the migration
process is released.  To ensure that we don't strand pages on the unevictable
list because of a race between munlock and migration, page migration uses the
putback_lru_page() function to add migrated pages back to the LRU.


COMPACTING MLOCKED PAGES
------------------------

The unevictable LRU can be scanned for compactable regions and the default
behavior is to do so.  /proc/sys/vm/compact_unevictable_allowed controls
this behavior (see Documentation/sysctl/vm.txt).  Once scanning of the
unevictable LRU is enabled, the work of compaction is mostly handled by
the page migration code and the same work flow as described in MIGRATING
MLOCKED PAGES will apply.


MLOCKING TRANSPARENT HUGE PAGES
-------------------------------

A transparent huge page is represented by a single entry on an LRU list.
Therefore, we can only make unevictable an entire compound page, not
individual subpages.

If a user tries to mlock() part of a huge page, we want the rest of the page
to be reclaimable.

We cannot just split the page on partial mlock() as split_huge_page() can
fail, and a new intermittent failure mode for the syscall is undesirable.

We handle this by keeping PTE-mapped huge pages on normal LRU lists: the PMD
on the border of a VM_LOCKED VMA will be split into a PTE table.

This way the huge page is accessible for vmscan.  Under memory pressure the
page will be split, subpages which belong to VM_LOCKED VMAs will be moved to
the unevictable LRU and the rest can be reclaimed.

See also the comment in follow_trans_huge_pmd().


mmap(MAP_LOCKED) SYSTEM CALL HANDLING
-------------------------------------

In addition to the mlock()/mlockall() system calls, an application can request
that a region of memory be mlocked by supplying the MAP_LOCKED flag to the
mmap() call.  There is one important and subtle difference here, though.
mmap() + mlock() will fail if the range cannot be faulted in (e.g. because
mm_populate fails), returning ENOMEM, while mmap(MAP_LOCKED) will not fail.
The mmapped area will still have properties of the locked area - i.e. pages
will not get swapped out - but major page faults to fault memory in might
still happen.
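
The difference matters to applications that must know their memory is
resident.  A hedged userspace illustration of the two variants:

    #include <sys/mman.h>
    #include <stdio.h>

    #define LEN (16 * 4096)

    int main(void)
    {
            /* Variant 1: may succeed even when the pages could not all be
               faulted in up front - later accesses can still major-fault. */
            void *a = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, -1, 0);
            if (a == MAP_FAILED)
                    perror("mmap(MAP_LOCKED)");

            /* Variant 2: mlock() fails with ENOMEM if the range cannot be
               faulted in and locked, so success here is a guarantee that
               the pages are resident. */
            void *b = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (b != MAP_FAILED && mlock(b, LEN) != 0)
                    perror("mlock");

            return 0;
    }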

Furthermore, any mmap() call or brk() call that expands the heap by a task
that has previously called mlockall() with the MCL_FUTURE flag will result in
the newly mapped memory being mlocked.  Before the unevictable/mlock changes,
the kernel simply called make_pages_present() to allocate pages and populate
the page table.

To mlock a range of memory under the unevictable/mlock infrastructure, the
mmap() handler and task address space expansion functions call
populate_vma_page_range() specifying the vma and the address range to mlock.

The callers of populate_vma_page_range() will have already added the memory
range to be mlocked to the task's "locked_vm".  To account for filtered VMAs,
populate_vma_page_range() returns the number of pages NOT mlocked.  All of the
callers then subtract a non-negative return value from the task's locked_vm.
A negative return value represents an error - for example, from
get_user_pages() attempting to fault in a VMA with PROT_NONE access.  In this
case, we leave the memory range accounted as locked_vm, as the protections
could be changed later and pages allocated into that region.


munmap()/exit()/exec() SYSTEM CALL HANDLING
-------------------------------------------

When unmapping an mlocked region of memory, whether by an explicit call to
munmap() or via an internal unmap from exit() or exec() processing, we must
munlock the pages if we're removing the last VM_LOCKED VMA that maps the
pages.  Before the unevictable/mlock changes, mlocking did not mark the pages
in any way, so unmapping them required no processing.

To munlock a range of memory under the unevictable/mlock infrastructure, the
munmap() handler and task address space tear down functions call
munlock_vma_pages_all().  The name reflects the observation that one always
specifies the entire VMA range when munlock()ing during unmap of a region.
Because of the VMA filtering when mlock()ing regions, only "normal" VMAs that
actually contain mlocked pages will be passed to munlock_vma_pages_all().

munlock_vma_pages_all() clears the VM_LOCKED VMA flag and, like mlock_fixup()
for the munlock case, calls __munlock_vma_pages_range() to walk the page table
for the VMA's memory range and munlock_vma_page() each resident page mapped by
the VMA.  This effectively munlocks the page, but only if this is the last
VM_LOCKED VMA that maps the page.
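
A sketch consistent with the description above (in recent sources
munlock_vma_pages_all() is simply a wrapper that expands to
munlock_vma_pages_range(vma, vma->vm_start, vma->vm_end), with the flag
clearing living inside that function):

    /* munlock the entire VMA: used from munmap()/exit()/exec() teardown */
    static void munlock_vma_pages_all(struct vm_area_struct *vma)
    {
            vma->vm_flags &= ~VM_LOCKED;
            /* walk the page table for the whole VMA range and
               munlock_vma_page() every resident page it maps */
            munlock_vma_pages_range(vma, vma->vm_start, vma->vm_end);
    }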


try_to_unmap()
--------------

Pages can, of course, be mapped into multiple VMAs.  Some of these VMAs may
have the VM_LOCKED flag set.  It is possible for a page mapped into one or
more VM_LOCKED VMAs not to have the PG_mlocked flag set and therefore reside
on one of the active or inactive LRU lists.  This could happen if, for
example, a task in the process of munlocking the page could not isolate the
page from the LRU.  As a result, vmscan/shrink_page_list() might encounter
such a page as described in section "vmscan's handling of unevictable pages".
To handle this situation, try_to_unmap() checks for VM_LOCKED VMAs while it is
walking a page's reverse map.

try_to_unmap() is always called, by either vmscan for reclaim or for page
migration, with the argument page locked and isolated from the LRU.  Separate
functions handle anonymous and mapped file and KSM pages, as these types of
pages have different reverse map lookup mechanisms, with different locking.
In each case, whether rmap_walk_anon() or rmap_walk_file() or rmap_walk_ksm(),
it will call try_to_unmap_one() for every VMA which might contain the page.

When trying to reclaim, if try_to_unmap_one() finds the page in a VM_LOCKED
VMA, it will then mlock the page via mlock_vma_page() instead of unmapping it,
and return SWAP_MLOCK to indicate that the page is unevictable; the scan stops
there.

mlock_vma_page() is called while holding the page table's lock (in addition
to the page lock, and the rmap lock): to serialize against concurrent mlock or
munlock or munmap system calls, mm teardown (munlock_vma_pages_all), reclaim,
holepunching, and truncation of file pages and their anonymous COWed pages.
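
The VM_LOCKED check sits at the top of try_to_unmap_one(), before any PTE is
torn down; abridged (the flags, labels and return convention vary between
kernel versions):

    /* in try_to_unmap_one(), with the pte lock held: */
    if (!(flags & TTU_IGNORE_MLOCK)) {
            if (vma->vm_flags & VM_LOCKED) {
                    /* re-mark the page rather than unmapping it; the pte
                       lock, page lock and rmap lock are all held here */
                    mlock_vma_page(page);
                    ret = SWAP_MLOCK;
                    goto out_unmap;  /* the page is unevictable: stop */
            }
    }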


try_to_munlock() REVERSE MAP SCAN
---------------------------------

 [!] TODO/FIXME: a better name might be page_mlocked() - analogous to the
     page_referenced() reverse map walker.

When munlock_vma_page() [see section "munlock()/munlockall() System Call
Handling" above] tries to munlock a page, it needs to determine whether or not
the page is mapped by any VM_LOCKED VMA without actually attempting to unmap
all PTEs from the page.  For this purpose, the unevictable/mlock
infrastructure introduced a variant of try_to_unmap() called try_to_munlock().

try_to_munlock() calls the same functions as try_to_unmap() for anonymous and
mapped file and KSM pages with a flag argument specifying unlock versus unmap
processing.  Again, these functions walk the respective reverse maps looking
for VM_LOCKED VMAs.  When such a VMA is found, as in the try_to_unmap() case,
the functions mlock the page via mlock_vma_page() and return SWAP_MLOCK.  This
undoes the pre-clearing of the page's PG_mlocked done by munlock_vma_page().
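
In kernels that use the rmap_walk_control framework, the "flag argument" is
TTU_MUNLOCK and try_to_munlock() itself is short; a sketch (modelled on
mm/rmap.c):

    int try_to_munlock(struct page *page)
    {
            struct rmap_walk_control rwc = {
                    .rmap_one = try_to_unmap_one,   /* in munlock mode... */
                    .arg = (void *)TTU_MUNLOCK,     /* ...via this flag */
                    .done = page_not_mapped,
                    .anon_lock = page_lock_anon_vma_read,
            };

            /* the page must be locked and off the LRU, as argued above */
            VM_BUG_ON_PAGE(!PageLocked(page) || PageLRU(page), page);

            return rmap_walk(page, &rwc);   /* SWAP_MLOCK if still mlocked */
    }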

Note that try_to_munlock()'s reverse map walk must visit every VMA in a page's
reverse map to determine that a page is NOT mapped into any VM_LOCKED VMA.
However, the scan can terminate when it encounters a VM_LOCKED VMA.  Although
try_to_munlock() might be called a great many times when munlocking a large
region or tearing down a large address space that has been mlocked via
mlockall(), overall this is a fairly rare event.


PAGE RECLAIM IN shrink_*_list()
-------------------------------

shrink_active_list() culls any obviously unevictable pages - i.e.
!page_evictable(page) - diverting these to the unevictable list.  However,
shrink_active_list() only sees unevictable pages that made it onto the
active/inactive lru lists.  Note that these pages do not have PageUnevictable
set - otherwise they would be on the unevictable list and shrink_active_list
would never see them.
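
The cull itself is a short check inside the scan loop (sketch modelled on
shrink_active_list() in mm/vmscan.c):

    /* for each page taken off the list during a reclaim scan: */
    if (unlikely(!page_evictable(page))) {
            /* diverts the page to the zone's unevictable list */
            putback_lru_page(page);
            continue;
    }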

Some examples of these unevictable pages on the LRU lists are:

 (1) ramfs pages that have been placed on the LRU lists when first allocated.

 (2) SHM_LOCK'd shared memory pages.  shmctl(SHM_LOCK) does not attempt to
     allocate or fault in the pages in the shared memory region.  This happens
     when an application accesses the page the first time after SHM_LOCK'ing
     the segment.

 (3) mlocked pages that could not be isolated from the LRU and moved to the
     unevictable list in mlock_vma_page().

shrink_inactive_list() also diverts any unevictable pages that it finds on the
inactive lists to the appropriate zone's unevictable list.

shrink_inactive_list() should only see SHM_LOCK'd pages that became SHM_LOCK'd
after shrink_active_list() had moved them to the inactive list, or pages
mapped into VM_LOCKED VMAs that munlock_vma_page() couldn't isolate from the
LRU to recheck via try_to_munlock().  shrink_inactive_list() won't notice the
latter, but will pass them on to shrink_page_list().

shrink_page_list() again culls obviously unevictable pages that it could
encounter for similar reasons to shrink_inactive_list().  Pages mapped into
VM_LOCKED VMAs but without PG_mlocked set will make it all the way to
try_to_unmap().  shrink_page_list() will divert them to the unevictable list
when try_to_unmap() returns SWAP_MLOCK, as discussed above.