Memory Resource Controller

NOTE: This document is hopelessly outdated and it asks for a complete
      rewrite. It still contains useful information, so we are keeping it
      here, but make sure to check the current code if you need a deeper
      understanding.

NOTE: The Memory Resource Controller has generically been referred to as the
      memory controller in this document. Do not confuse the memory controller
      used here with the memory controller that is used in hardware.

(For editors)
In this document:
      When we mention a cgroup (cgroupfs's directory) with memory controller,
      we call it "memory cgroup". When you see git-log and source code, you'll
      see that patch titles and function names tend to use "memcg".
      In this document, we avoid using it.

Benefits and Purpose of the memory controller

The memory controller isolates the memory behaviour of a group of tasks
from the rest of the system. The article on LWN [12] mentions some probable
uses of the memory controller. The memory controller can be used to

a. Isolate an application or a group of applications.
   Memory-hungry applications can be isolated and limited to a smaller
   amount of memory.
b. Create a cgroup with a limited amount of memory; this can be used
   as a good alternative to booting with mem=XXXX.
c. Virtualization solutions can control the amount of memory they want
   to assign to a virtual machine instance.
d. A CD/DVD burner could control the amount of memory used by the
   rest of the system to ensure that burning does not fail due to lack
   of available memory.
e. There are several other use cases; find one or use the controller just
   for fun (to learn and hack on the VM subsystem).

Current Status: linux-2.6.34-mmotm (development version of 2010/April)

Features:
 - accounting of anonymous pages, file caches, swap caches usage and limiting them.
 - pages are linked to per-memcg LRU exclusively, and there is no global LRU.
 - optionally, memory+swap usage can be accounted and limited.
 - hierarchical accounting
 - soft limit
 - moving (recharging) the account when moving a task is selectable.
 - usage threshold notifier
 - memory pressure notifier
 - oom-killer disable knob and oom-notifier
 - Root cgroup has no limit controls.
Kernel memory support is a work in progress, and the current version provides
basic functionality. (See Section 2.7)

Brief summary of control files.

 tasks                              # attach a task(thread) and show list of threads
 cgroup.procs                       # show list of processes
 cgroup.event_control               # an interface for event_fd()
 memory.usage_in_bytes              # show current usage for memory
                                      (See 5.5 for details)
 memory.memsw.usage_in_bytes        # show current usage for memory+Swap
                                      (See 5.5 for details)
 memory.limit_in_bytes              # set/show limit of memory usage
 memory.memsw.limit_in_bytes        # set/show limit of memory+Swap usage
 memory.failcnt                     # show the number of memory usage hits limits
 memory.memsw.failcnt               # show the number of memory+Swap hits limits
 memory.max_usage_in_bytes          # show max memory usage recorded
 memory.memsw.max_usage_in_bytes    # show max memory+Swap usage recorded
 memory.soft_limit_in_bytes         # set/show soft limit of memory usage
 memory.stat                        # show various statistics
 memory.use_hierarchy               # set/show hierarchical account enabled
 memory.force_empty                 # trigger forced move charge to parent
 memory.pressure_level              # set memory pressure notifications
 memory.swappiness                  # set/show swappiness parameter of vmscan
                                      (See sysctl's vm.swappiness)
 memory.move_charge_at_immigrate    # set/show controls of moving charges
 memory.oom_control                 # set/show oom controls.
 memory.numa_stat                   # show the number of memory usage per numa node

 memory.kmem.limit_in_bytes         # set/show hard limit for kernel memory
 memory.kmem.usage_in_bytes         # show current kernel memory allocation
 memory.kmem.failcnt                # show the number of kernel memory usage hits limits
 memory.kmem.max_usage_in_bytes     # show max kernel memory usage recorded

 memory.kmem.tcp.limit_in_bytes     # set/show hard limit for tcp buf memory
 memory.kmem.tcp.usage_in_bytes     # show current tcp buf memory allocation
 memory.kmem.tcp.failcnt            # show the number of tcp buf memory usage hits limits
 memory.kmem.tcp.max_usage_in_bytes # show max tcp buf memory usage recorded
1. History

The memory controller has a long history. A request for comments for the memory
controller was posted by Balbir Singh [1]. At the time the RFC was posted
there were several implementations for memory control. The goal of the
RFC was to build consensus and agreement for the minimal features required
for memory control. The first RSS controller was posted by Balbir Singh [2]
in Feb 2007. Pavel Emelianov [3][4][5] has since posted three versions of the
RSS controller. At OLS, at the resource management BoF, everyone suggested
that we handle both page cache and RSS together. Another request was raised
to allow user space handling of OOM. The current memory controller is
at version 6; it combines both mapped (RSS) and unmapped Page
Cache Control [11].

2. Memory Control

Memory is a unique resource in the sense that it is present in a limited
amount. If a task requires a lot of CPU processing, the task can spread
its processing over a period of hours, days, months or years, but with
memory, the same physical memory needs to be reused to accomplish the task.

The memory controller implementation has been divided into phases. These
are:

1. Memory controller
2. mlock(2) controller
3. Kernel user memory accounting and slab control
4. user mappings length controller

The memory controller is the first controller developed.

2.1. Design

The core of the design is a counter called the page_counter. The
page_counter tracks the current memory usage and limit of the group of
processes associated with the controller. Each cgroup has a memory controller
specific data structure (mem_cgroup) associated with it.

2.2. Accounting

	     +--------------------+
	     |     mem_cgroup     |
	     |   (page_counter)   |
	     +--------------------+
	      /        ^        \
	     /         |         \
	+-----------+  |  +-----------+
	| mm_struct |  |..| mm_struct |
	|           |  |  |           |
	+-----------+  |  +-----------+
	               |
	               +------------+
	                            |
	+-----------+        +------+------+
	|   page    +------->| page_cgroup |
	|           |        |             |
	+-----------+        +-------------+

	(Figure 1: Hierarchy of Accounting)


Figure 1 shows the important aspects of the controller

1. Accounting happens per cgroup
2. Each mm_struct knows about which cgroup it belongs to
3. Each page has a pointer to the page_cgroup, which in turn knows the
   cgroup it belongs to

The accounting is done as follows: mem_cgroup_charge_common() is invoked to
set up the necessary data structures and check if the cgroup that is being
charged is over its limit. If it is, then reclaim is invoked on the cgroup.
More details can be found in the reclaim section of this document.
If everything goes well, a page meta-data-structure called page_cgroup is
updated. page_cgroup has its own LRU on cgroup.
(*) page_cgroup structure is allocated at boot/memory-hotplug time.

2.2.1 Accounting details

All mapped anon pages (RSS) and cache pages (Page Cache) are accounted.
Some pages which are never reclaimable and will not be on the LRU
are not accounted. We just account pages under usual VM management.

RSS pages are accounted at page_fault unless they've already been accounted
for earlier. A file page will be accounted for as Page Cache when it's
inserted into inode (radix-tree). While it's mapped into the page tables of
processes, duplicate accounting is carefully avoided.

An RSS page is unaccounted when it's fully unmapped. A PageCache page is
unaccounted when it's removed from radix-tree. Even if RSS pages are fully
unmapped (by kswapd), they may exist as SwapCache in the system until they
are really freed. Such SwapCaches are also accounted.
A swapped-in page is not accounted until it's mapped.

Note: The kernel does swapin-readahead and reads multiple swaps at once.
This means swapped-in pages may contain pages for tasks other than the one
that caused the page fault. So, we avoid accounting at swap-in I/O.

At page migration, accounting information is kept.

Note: we just account pages-on-LRU because our purpose is to control the
amount of used pages; not-on-LRU pages tend to be out-of-control from the
VM's point of view.

2.3 Shared Page Accounting

Shared pages are accounted on the basis of the first-touch approach. The
cgroup that first touches a page is accounted for the page. The principle
behind this approach is that a cgroup that aggressively uses a shared
page will eventually get charged for it (once it is uncharged from
the cgroup that brought it in -- this will happen on memory pressure).

But see section 8.2: when moving a task to another cgroup, its pages may
be recharged to the new cgroup, if move_charge_at_immigrate has been chosen.

Exception: If CONFIG_MEMCG_SWAP is not used.
When you do swapoff and force swapped-out pages of shmem(tmpfs) to
be backed into memory, charges for those pages are accounted against the
caller of swapoff rather than the users of shmem.

2.4 Swap Extension (CONFIG_MEMCG_SWAP)

Swap Extension allows you to record charge for swap. A swapped-in page is
charged back to the original page allocator if possible.

When swap is accounted, the following files are added.
 - memory.memsw.usage_in_bytes.
 - memory.memsw.limit_in_bytes.

memsw means memory+swap. Usage of memory+swap is limited by
memsw.limit_in_bytes.

Example: Assume a system with 4G of swap. A task which allocates 6G of memory
(by mistake) under a 2G memory limitation will use all swap.
In this case, setting memsw.limit_in_bytes=3G will prevent bad use of swap.
By using the memsw limit, you can avoid system OOM which can be caused by swap
shortage.
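The example above can be sketched as commands. The group path follows the
layout from section 3 and is illustrative; the writes are guarded so they only
run where the files exist and are writable:

```shell
CG=/sys/fs/cgroup/memory/0                       # illustrative group (section 3)
if [ -w "$CG/memory.memsw.limit_in_bytes" ]; then
    echo 2G > "$CG/memory.limit_in_bytes"        # cap memory at 2G
    echo 3G > "$CG/memory.memsw.limit_in_bytes"  # cap memory+swap at 3G
fi
# memsw.limit_in_bytes must be >= limit_in_bytes; the group can then use at
# most memsw - memory = 1G of swap, no matter how much swap the system has:
echo $(( (3 - 2) * 1024 * 1024 * 1024 ))
```

Note that memory.memsw.limit_in_bytes caps the sum of memory and swap, not
swap alone; the misbehaving 6G task then fails inside its own group instead
of exhausting system swap.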

* why 'memory+swap' rather than swap.
The global LRU(kswapd) can swap out arbitrary pages. Swap-out means
to move the account from memory to swap... there is no change in the usage of
memory+swap. In other words, when we want to limit the usage of swap without
affecting the global LRU, a memory+swap limit is better than just limiting
swap, from an OS point of view.

* What happens when a cgroup hits memory.memsw.limit_in_bytes
When a cgroup hits memory.memsw.limit_in_bytes, it's useless to do swap-out
in this cgroup. Then, swap-out will not be done by the cgroup routines and
file caches are dropped. But as mentioned above, the global LRU can swap out
memory from it for the sanity of the system's memory management state. You
can't forbid it by cgroup.

2.5 Reclaim

Each cgroup maintains a per-cgroup LRU which has the same structure as the
global VM. When a cgroup goes over its limit, we first try
to reclaim memory from the cgroup so as to make space for the new
pages that the cgroup has touched. If the reclaim is unsuccessful,
an OOM routine is invoked to select and kill the bulkiest task in the
cgroup. (See 10. OOM Control below.)

The reclaim algorithm has not been modified for cgroups, except that
pages that are selected for reclaiming come from the per-cgroup LRU
list.

NOTE: Reclaim does not work for the root cgroup, since we cannot set any
limits on the root cgroup.

Note2: When panic_on_oom is set to "2", the whole system will panic.

When an oom event notifier is registered, the event will be delivered.
(See oom_control section)

2.6 Locking

lock_page_cgroup()/unlock_page_cgroup() should not be called under
the i_pages lock.

Other lock order is the following:
  PG_locked.
    mm->page_table_lock
      zone_lru_lock
        lock_page_cgroup.
In many cases, just lock_page_cgroup() is called.
The per-zone-per-cgroup LRU (the cgroup's private LRU) is just guarded by
zone_lru_lock; it has no lock of its own.

2.7 Kernel Memory Extension (CONFIG_MEMCG_KMEM)

With the Kernel memory extension, the Memory Controller is able to limit
the amount of kernel memory used by the system. Kernel memory is fundamentally
different from user memory, since it can't be swapped out, which makes it
possible to DoS the system by consuming too much of this precious resource.

Kernel memory accounting is enabled for all memory cgroups by default. But
it can be disabled system-wide by passing cgroup.memory=nokmem to the kernel
at boot time. In this case, kernel memory will not be accounted at all.

Kernel memory limits are not imposed for the root cgroup. Usage for the root
cgroup may or may not be accounted. The memory used is accumulated into
memory.kmem.usage_in_bytes, or in a separate counter when it makes sense.
(currently only for tcp).
The main "kmem" counter is fed into the main counter, so kmem charges will
also be visible from the user counter.

Currently no soft limit is implemented for kernel memory. It is future work
to trigger slab reclaim when those limits are reached.

2.7.1 Current Kernel Memory resources accounted

* stack pages: every process consumes some stack pages. By accounting them as
kernel memory, we prevent new processes from being created when the kernel
memory usage is too high.

* slab pages: pages allocated by the SLAB or SLUB allocator are tracked. A copy
of each kmem_cache is created the first time the cache is touched
from inside the memcg. The creation is done lazily, so some objects can still be
skipped while the cache is being created. All objects in a slab page should
belong to the same memcg. This only fails to hold when a task is migrated to a
different memcg during the page allocation by the cache.

* sockets memory pressure: some socket protocols have memory pressure
thresholds. The Memory Controller allows them to be controlled individually
per cgroup, instead of globally.

* tcp memory pressure: sockets memory pressure for the tcp protocol.

2.7.2 Common use cases

Because the "kmem" counter is fed to the main user counter, kernel memory can
never be limited completely independently of user memory. Say "U" is the user
limit, and "K" the kernel limit. There are three possible ways limits can be
set:

    U != 0, K = unlimited:
    This is the standard memcg limitation mechanism already present before kmem
    accounting. Kernel memory is completely ignored.

    U != 0, K < U:
    Kernel memory is a subset of the user memory. This setup is useful in
    deployments where the total amount of memory per-cgroup is overcommitted.
    Overcommitting kernel memory limits is definitely not recommended, since the
    box can still run out of non-reclaimable memory.
    In this case, the admin could set up K so that the sum of all groups is
    never greater than the total memory, and freely set U at the cost of his
    QoS.
    WARNING: In the current implementation, memory reclaim will NOT be
    triggered for a cgroup when it hits K while staying below U, which makes
    this setup impractical.

    U != 0, K >= U:
    Since kmem charges will also be fed to the user counter, reclaim will be
    triggered for the cgroup for both kinds of memory. This setup gives the
    admin a unified view of memory, and it is also useful for people who just
    want to track kernel memory usage.

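A sketch of the "U != 0, K < U" arrangement with made-up numbers (a 500M user
limit and a 400M kernel limit); the path is the illustrative group from
section 3 and the writes are guarded so they only run where the files are
writable:

```shell
CG=/sys/fs/cgroup/memory/0               # illustrative group (section 3)
U=$((500 * 1024 * 1024))                 # user (total) limit: 500M
K=$((400 * 1024 * 1024))                 # kernel limit: 400M, so K < U
if [ -w "$CG/memory.kmem.limit_in_bytes" ]; then
    echo $U > "$CG/memory.limit_in_bytes"
    echo $K > "$CG/memory.kmem.limit_in_bytes"
fi
# kmem charges feed the main counter, so kernel memory can take at most K
# of the U total, leaving at least U - K for user pages:
echo $(( U - K ))
```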
3. User Interface

3.0. Configuration

a. Enable CONFIG_CGROUPS
b. Enable CONFIG_MEMCG
c. Enable CONFIG_MEMCG_SWAP (to use swap extension)
d. Enable CONFIG_MEMCG_KMEM (to use kmem extension)

3.1. Prepare the cgroups (see cgroups.txt, Why are cgroups needed?)
# mount -t tmpfs none /sys/fs/cgroup
# mkdir /sys/fs/cgroup/memory
# mount -t cgroup none /sys/fs/cgroup/memory -o memory

3.2. Make the new group and move bash into it
# mkdir /sys/fs/cgroup/memory/0
# echo $$ > /sys/fs/cgroup/memory/0/tasks

Since now we're in the 0 cgroup, we can alter the memory limit:
# echo 4M > /sys/fs/cgroup/memory/0/memory.limit_in_bytes

NOTE: We can use a suffix (k, K, m, M, g or G) to indicate values in kilo,
mega or gigabytes. (Here, Kilo, Mega, Giga are Kibibytes, Mebibytes, Gibibytes.)

NOTE: We can write "-1" to reset *.limit_in_bytes (unlimited).
NOTE: We cannot set limits on the root cgroup any more.

# cat /sys/fs/cgroup/memory/0/memory.limit_in_bytes
4194304

We can check the usage:
# cat /sys/fs/cgroup/memory/0/memory.usage_in_bytes
1216512

A successful write to this file does not guarantee a successful setting of
this limit to the value written into the file. This can be due to a
number of factors, such as rounding up to page boundaries or the total
availability of memory on the system. The user is required to re-read
this file after a write to guarantee the value committed by the kernel.

# echo 1 > memory.limit_in_bytes
# cat memory.limit_in_bytes
4096

The memory.failcnt field gives the number of times that the cgroup limit was
exceeded.

The memory.stat file gives accounting information. Now, the number of
caches, RSS and Active pages/Inactive pages is shown.

4. Testing

For testing features and implementation, see memcg_test.txt.

Performance testing is also important. To see the pure memory controller
overhead, testing on tmpfs will give you good numbers for the small overheads.
Example: do a kernel make on tmpfs.

Page-fault scalability is also important. When measuring a parallel
page fault test, a multi-process test may be better than a multi-thread
test because the latter has noise from shared objects/status.

But the above two test extreme situations.
Running your usual tests under the memory controller is always helpful.

4.1 Troubleshooting

Sometimes a user might find that an application under a cgroup is
terminated by the OOM killer. There are several causes for this:

1. The cgroup limit is too low (just too low to do anything useful)
2. The user is using anonymous memory and swap is turned off or too low

A sync followed by echo 1 > /proc/sys/vm/drop_caches will help get rid of
some of the pages cached in the cgroup (page cache pages).

To know what happens, disabling OOM_Kill as per "10. OOM Control" (below) and
seeing what happens will be helpful.

4.2 Task migration

When a task migrates from one cgroup to another, its charge is not
carried forward by default. The pages allocated from the original cgroup still
remain charged to it; the charge is dropped when the page is freed or
reclaimed.

You can move charges of a task along with task migration.
See 8. "Move charges at task migration"

4.3 Removing a cgroup

A cgroup can be removed by rmdir, but as discussed in sections 4.1 and 4.2, a
cgroup might have some charge associated with it, even though all
tasks have migrated away from it. (because we charge against pages, not
against tasks.)

We move the stats to the root (if use_hierarchy==0) or the parent (if
use_hierarchy==1), and there is no change in the charge except uncharging
from the child.

Charges recorded in swap information are not updated at removal of a cgroup.
The recorded information is discarded and a cgroup which uses swap (swapcache)
will be charged as a new owner of it.

About use_hierarchy, see Section 6.

5. Misc. interfaces.

5.1 force_empty
  The memory.force_empty interface is provided to make a cgroup's memory usage
  empty. When anything is written to this file

  # echo 0 > memory.force_empty

  the cgroup will be reclaimed and as many pages reclaimed as possible.

  The typical use case for this interface is before calling rmdir().
  Because rmdir() moves all pages to the parent, some out-of-use page caches
  can be moved to the parent. If you want to avoid that, force_empty will be
  useful.

  Also, note that when memory.kmem.limit_in_bytes is set the charges due to
  kernel pages will still be seen. This is not considered a failure and the
  write will still return success. In this case, it is expected that
  memory.kmem.usage_in_bytes == memory.usage_in_bytes.

About use_hierarchy, see Section 6.

5.2 stat file

The memory.stat file includes the following statistics

# per-memory cgroup local status
cache           - # of bytes of page cache memory.
rss             - # of bytes of anonymous and swap cache memory (includes
                  transparent hugepages).
rss_huge        - # of bytes of anonymous transparent hugepages.
mapped_file     - # of bytes of mapped file (includes tmpfs/shmem)
pgpgin          - # of charging events to the memory cgroup. The charging
                  event happens each time a page is accounted as either a mapped
                  anon page(RSS) or a cache page(Page Cache) to the cgroup.
pgpgout         - # of uncharging events to the memory cgroup. The uncharging
                  event happens each time a page is unaccounted from the cgroup.
swap            - # of bytes of swap usage
dirty           - # of bytes that are waiting to get written back to the disk.
writeback       - # of bytes of file/anon cache that are queued for syncing to
                  disk.
inactive_anon   - # of bytes of anonymous and swap cache memory on inactive
                  LRU list.
active_anon     - # of bytes of anonymous and swap cache memory on active
                  LRU list.
inactive_file   - # of bytes of file-backed memory on inactive LRU list.
active_file     - # of bytes of file-backed memory on active LRU list.
unevictable     - # of bytes of memory that cannot be reclaimed (mlocked etc).

# status considering hierarchy (see memory.use_hierarchy settings)

hierarchical_memory_limit - # of bytes of memory limit with regard to hierarchy
                  under which the memory cgroup is
hierarchical_memsw_limit - # of bytes of memory+swap limit with regard to
                  hierarchy under which the memory cgroup is.

total_<counter> - # hierarchical version of <counter>, which in
                  addition to the cgroup's own value includes the
                  sum of all hierarchical children's values of
                  <counter>, i.e. total_cache

# The following additional stats are dependent on CONFIG_DEBUG_VM.

recent_rotated_anon - VM internal parameter. (see mm/vmscan.c)
recent_rotated_file - VM internal parameter. (see mm/vmscan.c)
recent_scanned_anon - VM internal parameter. (see mm/vmscan.c)
recent_scanned_file - VM internal parameter. (see mm/vmscan.c)

Memo:
        recent_rotated means the recent frequency of LRU rotation.
        recent_scanned means the recent # of scans of the LRU.
        These are shown for better debugging; please see the code for their
        exact meanings.

Note:
        Only anonymous and swap cache memory is listed as part of the 'rss'
        stat. This should not be confused with the true 'resident set size' or
        the amount of physical memory used by the cgroup.
        'rss + mapped_file' will give you the resident set size of the cgroup.
        (Note: file and shmem may be shared among other cgroups. In that case,
        mapped_file is accounted only when the memory cgroup is the owner of
        the page cache.)

5.3 swappiness

Overrides /proc/sys/vm/swappiness for the particular group. The tunable
in the root cgroup corresponds to the global swappiness setting.

Please note that, unlike during global reclaim, limit reclaim
enforces that a swappiness of 0 really prevents any swapping even if
there is swap storage available. This might invoke the memcg OOM killer
if there are no file pages to reclaim.

5.4 failcnt

A memory cgroup provides memory.failcnt and memory.memsw.failcnt files.
This failcnt(== failure count) shows the number of times that a usage counter
hit its limit. When a memory cgroup hits a limit, failcnt increases and
memory under it will be reclaimed.

You can reset failcnt by writing 0 to the failcnt file.
# echo 0 > .../memory.failcnt

5.5 usage_in_bytes

For efficiency, as with other kernel components, the memory cgroup uses some
optimization to avoid unnecessary cacheline false sharing. usage_in_bytes is
affected by the method and doesn't show the 'exact' value of memory (and swap)
usage; it's a fuzz value for efficient access. (Of course, when necessary, it's
synchronized.) If you want to know the more exact memory usage, you should use
the RSS+CACHE(+SWAP) value in memory.stat (see 5.2).

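A sketch of deriving that more exact figure from memory.stat; the byte values
below are made up for illustration (on a real system you would read the
group's memory.stat file, as the comment shows):

```shell
# Made-up memory.stat excerpt (bytes); on a real system:
#   stat=$(cat /sys/fs/cgroup/memory/0/memory.stat)
stat='cache 212992
rss 1003520
swap 0'
rss=$(printf '%s\n' "$stat"   | awk '$1 == "rss"   { print $2 }')
cache=$(printf '%s\n' "$stat" | awk '$1 == "cache" { print $2 }')
swap=$(printf '%s\n' "$stat"  | awk '$1 == "swap"  { print $2 }')
echo $(( rss + cache + swap ))    # exact usage: RSS + CACHE + SWAP
```

usage_in_bytes may differ slightly from this sum because of the per-cpu
caching described above.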
5.6 numa_stat

This is similar to numa_maps but operates on a per-memcg basis. This is
useful for providing visibility into the numa locality information within
a memcg since the pages are allowed to be allocated from any physical
node. One of the use cases is evaluating application performance by
combining this information with the application's CPU allocation.

Each memcg's numa_stat file includes "total", "file", "anon" and "unevictable"
per-node page counts including "hierarchical_<counter>" which sums up all
hierarchical children's values in addition to the memcg's own value.

The output format of memory.numa_stat is:

total=<total pages> N0=<node 0 pages> N1=<node 1 pages> ...
file=<total file pages> N0=<node 0 pages> N1=<node 1 pages> ...
anon=<total anon pages> N0=<node 0 pages> N1=<node 1 pages> ...
unevictable=<total unevictable pages> N0=<node 0 pages> N1=<node 1 pages> ...
hierarchical_<counter>=<counter pages> N0=<node 0 pages> N1=<node 1 pages> ...

The "total" count is the sum of file + anon + unevictable.

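As a quick consistency check of the format above, the per-memcg totals can be
verified with awk. The numa_stat text here is a fabricated two-node sample,
not output from a real system:

```shell
# Verify that total = file + anon + unevictable in a memory.numa_stat
# sample. Fields split on '=' and space, so $2 is the all-node count.
numa_stat='total=18 N0=10 N1=8
file=10 N0=6 N1=4
anon=6 N0=3 N1=3
unevictable=2 N0=1 N1=1'
printf '%s\n' "$numa_stat" |
    awk -F'[= ]' '
        $1 == "total" { total = $2 }
        $1 == "file" || $1 == "anon" || $1 == "unevictable" { sum += $2 }
        END { if (total == sum) print "consistent"; else print "inconsistent" }'
```

With the sample above, this prints "consistent" (10 + 6 + 2 = 18).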
6. Hierarchy support

The memory controller supports a deep hierarchy and hierarchical accounting.
The hierarchy is created by creating the appropriate cgroups in the
cgroup filesystem. Consider, for example, the following cgroup filesystem
hierarchy

               root
              / |  \
             /  |   \
            a   b    c
                     | \
                     |  \
                     d   e

In the diagram above, with hierarchical accounting enabled, all memory
usage of e is accounted to its ancestors up to the root (i.e. c and root)
that have memory.use_hierarchy enabled. If one of the ancestors goes over its
limit, the reclaim algorithm reclaims from the tasks in the ancestor and the
children of the ancestor.

6.1 Enabling hierarchical accounting and reclaim

A memory cgroup by default disables the hierarchy feature. Support
can be enabled by writing 1 to the memory.use_hierarchy file of the root cgroup

# echo 1 > memory.use_hierarchy

The feature can be disabled by

# echo 0 > memory.use_hierarchy

NOTE1: Enabling/disabling will fail if either the cgroup already has other
       cgroups created below it, or if the parent cgroup has use_hierarchy
       enabled.

NOTE2: When panic_on_oom is set to "2", the whole system will panic in
       case of an OOM event in any cgroup.

7. Soft limits

Soft limits allow for greater sharing of memory. The idea behind soft limits
is to allow control groups to use as much of the memory as needed, provided

a. There is no memory contention
b. They do not exceed their hard limit

When the system detects memory contention or low memory, control groups
are pushed back to their soft limits. If the soft limit of each control
group is very high, they are pushed back as much as possible to make
sure that one control group does not starve the others of memory.

Please note that soft limits are a best-effort feature; they come with
no guarantees, but they do their best to make sure that when memory is
heavily contended for, memory is allocated based on the soft limit
hints/setup. Currently soft limit based reclaim is set up such that
it gets invoked from balance_pgdat (kswapd).

7.1 Interface

Soft limits can be set up by using the following commands (in this example we
assume a soft limit of 256 MiB)

# echo 256M > memory.soft_limit_in_bytes

If we want to change this to 1G, we can at any time use

# echo 1G > memory.soft_limit_in_bytes

NOTE1: Soft limits take effect over a long period of time, since they involve
       reclaiming memory for balancing between memory cgroups.
NOTE2: It is recommended to always set the soft limit below the hard limit,
       otherwise the hard limit will take precedence.

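Following NOTE2, a trivial sanity check can compare the two limits after
converting the suffixed values to bytes. The numbers below simply mirror this
example (256M soft, a hypothetical 1G hard limit in memory.limit_in_bytes) and
are not read from a live cgroup:

```shell
# Compare the example's soft limit (256M) against a 1G hard limit,
# both expressed in bytes as the kernel reports them.
soft=$((256 * 1024 * 1024))     # memory.soft_limit_in_bytes
hard=$((1024 * 1024 * 1024))    # memory.limit_in_bytes
if [ "$soft" -lt "$hard" ]; then
    echo "ok: soft ($soft) < hard ($hard)"
else
    echo "warning: soft limit not below hard limit"
fi
```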
8. Move charges at task migration

Users can move charges associated with a task along with task migration, that
is, uncharge the task's pages from the old cgroup and charge them to the new
cgroup. This feature is not supported in !CONFIG_MMU environments because of
the lack of page tables.

8.1 Interface

This feature is disabled by default. It can be enabled (and disabled again) by
writing to memory.move_charge_at_immigrate of the destination cgroup.

If you want to enable it:

# echo (some positive value) > memory.move_charge_at_immigrate

Note: Each bit of move_charge_at_immigrate has its own meaning about what type
      of charges should be moved. See 8.2 for details.
Note: Charges are moved only when you move mm->owner, in other words,
      a leader of a thread group.
Note: If we cannot find enough space for the task in the destination cgroup, we
      try to make space by reclaiming memory. Task migration may fail if we
      cannot make enough space.
Note: It can take several seconds if you move many charges.

And if you want to disable it again:

# echo 0 > memory.move_charge_at_immigrate

8.2 Type of charges which can be moved

Each bit in move_charge_at_immigrate has its own meaning about what type of
charges should be moved. But in any case, it must be noted that an account of
a page or a swap can be moved only when it is charged to the task's current
(old) memory cgroup.

 bit | what type of charges would be moved ?
-----+------------------------------------------------------------------------
  0  | A charge of an anonymous page (or swap of it) used by the target task.
     | You must enable Swap Extension (see 2.4) to enable move of swap charges.
-----+------------------------------------------------------------------------
  1  | A charge of file pages (normal file, tmpfs file (e.g. ipc shared memory)
     | and swaps of tmpfs file) mmapped by the target task. Unlike the case of
     | anonymous pages, file pages (and swaps) in the range mmapped by the task
     | will be moved even if the task hasn't done a page fault, i.e. they might
     | not be the task's "RSS", but another task's "RSS" that maps the same
     | file. The mapcount of the page is ignored (the page can be moved even if
     | page_mapcount(page) > 1). You must enable Swap Extension (see 2.4) to
     | enable move of swap charges.

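The two bits in the table above can be combined into a single value for
move_charge_at_immigrate. A minimal sketch (the variable names are
illustrative, not kernel identifiers):

```shell
# bit 0: move charges of anonymous pages (and their swap)
# bit 1: move charges of mmapped file pages (and tmpfs swap)
move_anon=$((1 << 0))
move_file=$((1 << 1))
val=$((move_anon | move_file))
echo "$val"     # 3: move both types of charges
# On a live system: echo "$val" > memory.move_charge_at_immigrate
```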
8.3 TODO

- All of the moving charge operations are done under cgroup_mutex. It's not
  good behavior to hold the mutex too long, so we may need some trick.

9. Memory thresholds

Memory cgroup implements memory thresholds using the cgroups notification
API (see cgroups.txt). It allows you to register multiple memory and memsw
thresholds and to get notifications when a threshold is crossed.

To register a threshold, an application must:
- create an eventfd using eventfd(2);
- open memory.usage_in_bytes or memory.memsw.usage_in_bytes;
- write a string like "<event_fd> <fd of memory.usage_in_bytes> <threshold>" to
  cgroup.event_control.

The application will be notified through the eventfd when memory usage crosses
the threshold in either direction.

This is applicable to both root and non-root cgroups.

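The third registration step writes a three-field string to
cgroup.event_control. The sketch below only composes that string; the fd
numbers are placeholders for illustration, since real ones come from
eventfd(2) and open(2):

```shell
event_fd=3                          # placeholder: returned by eventfd(2)
usage_fd=4                          # placeholder: fd of memory.usage_in_bytes
threshold=$((128 * 1024 * 1024))    # notify when usage crosses 128M
printf '%s %s %s\n' "$event_fd" "$usage_fd" "$threshold"
# A real program would write this line to cgroup.event_control.
```

With the placeholder fds this prints "3 4 134217728".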
10. OOM Control

The memory.oom_control file is for OOM notification and other controls.

Memory cgroup implements an OOM notifier using the cgroup notification
API (see cgroups.txt). It allows you to register multiple OOM notification
deliveries and to get a notification when an OOM event happens.

To register a notifier, an application must:
- create an eventfd using eventfd(2)
- open the memory.oom_control file
- write a string like "<event_fd> <fd of memory.oom_control>" to
  cgroup.event_control

The application will be notified through the eventfd when an OOM event
happens. OOM notification doesn't work for the root cgroup.

You can disable the OOM-killer by writing "1" to the memory.oom_control file,
as:

# echo 1 > memory.oom_control

If the OOM-killer is disabled, tasks under the cgroup will hang/sleep
in the memory cgroup's OOM-waitqueue when they request accountable memory.

To make them run again, you have to relax the memory cgroup's OOM status by
    * enlarging the limit or reducing usage.
To reduce usage,
    * kill some tasks.
    * move some tasks to another group with charge migration.
    * remove some files (on tmpfs?)

Then, the stopped tasks will work again.

On reading, the current OOM status is shown.
    oom_kill_disable 0 or 1 (if 1, the oom-killer is disabled)
    under_oom        0 or 1 (if 1, the memory cgroup is under OOM, tasks may
                             be stopped.)

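The status lines above can be consumed by a monitoring script. Here is a small
sketch that extracts the under_oom flag from a fabricated memory.oom_control
sample; on a live system you would read the file itself:

```shell
# Pull the under_oom field out of memory.oom_control-style output.
status='oom_kill_disable 1
under_oom 0'
printf '%s\n' "$status" |
    awk '$1 == "under_oom" { print $2 }'
```

With the sample above this prints "0" (the group is not under OOM).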
11. Memory Pressure

The pressure level notifications can be used to monitor the memory
allocation cost; based on the pressure, applications can implement
different strategies for managing their memory resources. The pressure
levels are defined as follows:

The "low" level means that the system is reclaiming memory for new
allocations. Monitoring this reclaiming activity might be useful for
maintaining the cache level. Upon notification, the program (typically
an "Activity Manager") might analyze vmstat and act in advance (e.g.
prematurely shut down unimportant services).

The "medium" level means that the system is experiencing medium memory
pressure; the system might be swapping, paging out active file caches,
etc. Upon this event, applications may decide to further analyze
vmstat/zoneinfo/memcg or internal memory usage statistics and free any
resources that can be easily reconstructed or re-read from a disk.

The "critical" level means that the system is actively thrashing, is
about to run out of memory (OOM), or the in-kernel OOM killer is even on
its way to trigger. Applications should do whatever they can to help the
system. It might be too late to consult with vmstat or any other
statistics, so it's advisable to take immediate action.

By default, events are propagated upward until the event is handled, i.e. the
events are not pass-through. For example, say you have three cgroups: A->B->C.
Now you set up an event listener on cgroups A, B and C, and suppose group C
experiences some pressure. In this situation, only group C will receive the
notification, i.e. groups A and B will not receive it. This is done to avoid
excessive "broadcasting" of messages, which disturbs the system and which is
especially bad if we are low on memory or thrashing. Group B will receive
notification only if there are no event listeners for group C.

There are three optional modes that specify different propagation behavior:

- "default": this is the default behavior specified above. This mode is the
  same as omitting the optional mode parameter, and is preserved for backwards
  compatibility.

- "hierarchy": events always propagate up to the root, similar to the default
  behavior, except that propagation continues regardless of whether there are
  event listeners at each level. In the above example, groups A, B, and C
  will receive notification of memory pressure.

- "local": events are pass-through, i.e. they only receive notifications when
  memory pressure is experienced in the memcg for which the notification is
  registered. In the above example, group C will receive notification if
  registered for "local" notification and the group experiences memory
  pressure. However, group B will never receive notification, regardless of
  whether there is an event listener for group C, if group B is registered
  for local notification.

The level and event notification mode ("hierarchy" or "local", if necessary)
are specified by a comma-delimited string, e.g. "low,hierarchy" specifies
hierarchical, pass-through notification for all ancestor memcgs. Notification
that uses the default, non pass-through behavior does not specify a mode.
"medium,local" specifies pass-through notification for the medium level.

The file memory.pressure_level is only used to set up an eventfd. To
register a notification, an application must:

- create an eventfd using eventfd(2);
- open memory.pressure_level;
- write a string like "<event_fd> <fd of memory.pressure_level> <level[,mode]>"
  to cgroup.event_control.

The application will be notified through the eventfd when memory pressure is
at the specific level (or higher). Read/write operations to
memory.pressure_level are not implemented.

Test:

Here is a small script example that makes a new cgroup, sets up a
memory limit, sets up a notification in the cgroup and then makes the
cgroup experience critical pressure:

# cd /sys/fs/cgroup/memory/
# mkdir foo
# cd foo
# cgroup_event_listener memory.pressure_level low,hierarchy &
# echo 8000000 > memory.limit_in_bytes
# echo 8000000 > memory.memsw.limit_in_bytes
# echo $$ > tasks
# dd if=/dev/zero | read x

(Expect a bunch of notifications, and eventually, the oom-killer will
trigger.)

12. TODO

1. Make per-cgroup scanner reclaim not-shared pages first
2. Teach controller to account for shared-pages
3. Start reclamation in the background when the limit is
   not yet hit but the usage is getting closer

Summary

Overall, the memory controller has been a stable controller and has been
commented and discussed quite extensively in the community.

References

1. Singh, Balbir. RFC: Memory Controller, http://lwn.net/Articles/206697/
2. Singh, Balbir. Memory Controller (RSS Control),
   http://lwn.net/Articles/222762/
3. Emelianov, Pavel. Resource controllers based on process cgroups,
   http://lkml.org/lkml/2007/3/6/198
4. Emelianov, Pavel. RSS controller based on process cgroups (v2),
   http://lkml.org/lkml/2007/4/9/78
5. Emelianov, Pavel. RSS controller based on process cgroups (v3),
   http://lkml.org/lkml/2007/5/30/244
6. Menage, Paul. Control Groups v10, http://lwn.net/Articles/236032/
7. Vaidyanathan, Srinivasan. Control Groups: Pagecache accounting and control
   subsystem (v3), http://lwn.net/Articles/235534/
8. Singh, Balbir. RSS controller v2 test results (lmbench),
   http://lkml.org/lkml/2007/5/17/232
9. Singh, Balbir. RSS controller v2 AIM9 results,
   http://lkml.org/lkml/2007/5/18/1
10. Singh, Balbir. Memory controller v6 test results,
    http://lkml.org/lkml/2007/8/19/36
11. Singh, Balbir. Memory controller introduction (v6),
    http://lkml.org/lkml/2007/8/17/69
12. Corbet, Jonathan. Controlling memory use in cgroups,
    http://lwn.net/Articles/243795/