=====================
CFS Bandwidth Control
=====================

[ This document only discusses CPU bandwidth control for SCHED_NORMAL.
  The SCHED_RT case is covered in Documentation/scheduler/sched-rt-group.rst ]

CFS bandwidth control is a CONFIG_FAIR_GROUP_SCHED extension which allows the
specification of the maximum CPU bandwidth available to a group or hierarchy.

The bandwidth allowed for a group is specified using a quota and period. Within
each given "period" (microseconds), a task group is allocated up to "quota"
microseconds of CPU time. That quota is assigned to per-cpu run queues in
slices as threads in the cgroup become runnable. Once all quota has been
assigned, any additional requests for quota will result in those threads being
throttled. Throttled threads will not be able to run again until the next
period, when the quota is replenished.
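
As a concrete illustration (a minimal sketch: a cgroup v1 hierarchy mounted at
/sys/fs/cgroup/cpu and the group name "demo" are assumptions), a group given
50ms of quota every 100ms can run one CPU-bound thread for at most half of
each period::

   # cd /sys/fs/cgroup/cpu
   # mkdir demo
   # echo 50000 > demo/cpu.cfs_quota_us   /* quota = 50ms */
   # echo 100000 > demo/cpu.cfs_period_us /* period = 100ms */
   # echo $$ > demo/tasks                 /* move this shell into the group */
   # yes > /dev/null                      /* runs ~50ms, is then throttled
                                             for the remainder of each period */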

A group's unassigned quota is globally tracked, being refreshed back to
cfs_quota units at each period boundary. As threads consume this bandwidth it
is transferred to cpu-local "silos" on a demand basis. The amount transferred
within each of these updates is tunable and described as the "slice".

Management
----------
Quota and period are managed within the cpu subsystem via cgroupfs.

cpu.cfs_quota_us: the total available run-time within a period (in microseconds)
cpu.cfs_period_us: the length of a period (in microseconds)
cpu.stat: exports throttling statistics [explained further below]

The default values are::

   cpu.cfs_period_us=100ms
   cpu.cfs_quota_us=-1

A value of -1 for cpu.cfs_quota_us indicates that the group does not have any
bandwidth restriction in place; such a group is described as an unconstrained
bandwidth group. This represents the traditional work-conserving behavior for
CFS.

Writing any (valid) positive value(s) will enact the specified bandwidth limit.
The minimum allowed value for either the quota or the period is 1ms. There is
also an upper bound on the period length of 1s. Additional restrictions exist
when bandwidth limits are used in a hierarchical fashion; these are explained
in more detail below.
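
Writes outside these bounds fail and leave the previous values in place. For
example (a sketch; the kernel rejects such a write with EINVAL)::

   # echo 500 > cpu.cfs_period_us     /* rejected: period must be >= 1ms */
   # echo 2000000 > cpu.cfs_period_us /* rejected: period must be <= 1s */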

Writing any negative value to cpu.cfs_quota_us will remove the bandwidth limit
and return the group to an unconstrained state once more.

Any updates to a group's bandwidth specification will result in it becoming
unthrottled if it is in a constrained state.
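
Putting the interface together (a sketch; "demo" is again a hypothetical
group)::

   # cat demo/cpu.cfs_period_us           /* 100000: 100ms default period */
   # cat demo/cpu.cfs_quota_us            /* -1: unconstrained by default */
   # echo 250000 > demo/cpu.cfs_quota_us  /* enact a 250ms-per-period limit */
   # echo -1 > demo/cpu.cfs_quota_us      /* remove the limit again */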

System wide settings
--------------------
For efficiency run-time is transferred between the global pool and CPU local
"silos" in a batch fashion. This greatly reduces global accounting pressure
on large systems. The amount transferred each time such an update is required
is described as the "slice".

This is tunable via procfs::

   /proc/sys/kernel/sched_cfs_bandwidth_slice_us (default=5ms)

Larger slice values will reduce transfer overheads, while smaller values allow
for more fine-grained consumption.
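
For example, the slice can be inspected and adjusted like any other sysctl
(values are in microseconds)::

   # cat /proc/sys/kernel/sched_cfs_bandwidth_slice_us
   5000
   # echo 10000 > /proc/sys/kernel/sched_cfs_bandwidth_slice_us /* 10ms slice */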

Statistics
----------
A group's bandwidth statistics are exported via 3 fields in cpu.stat.

cpu.stat:

- nr_periods: Number of enforcement intervals that have elapsed.
- nr_throttled: Number of times the group has been throttled/limited.
- throttled_time: The total time duration (in nanoseconds) for which entities
  of the group have been throttled.

This interface is read-only.
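
For example, the fraction of periods in which the group was throttled can be
derived from these fields (a sketch, assuming the "name value" layout shown
above)::

   # awk '/nr_periods/ { p = $2 }
          /nr_throttled/ { t = $2 }
          END { if (p) printf "throttled in %.1f%% of periods\n", 100*t/p }' \
          cpu.stat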

Hierarchical considerations
---------------------------
The interface enforces that an individual entity's bandwidth is always
attainable, that is: max(c_i) <= C. However, over-subscription in the
aggregate case is explicitly allowed to enable work-conserving semantics
within a hierarchy:

  e.g. \Sum (c_i) may exceed C

[ Where C is the parent's bandwidth, and c_i its children ]
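
As an example of permitted over-subscription (a sketch; the group names are
hypothetical, and both children keep the default 100ms period)::

   # echo 100000 > parent/cpu.cfs_quota_us   /* C = 100ms */
   # echo 60000 > parent/a/cpu.cfs_quota_us  /* c_a = 60ms, c_a <= C: ok */
   # echo 60000 > parent/b/cpu.cfs_quota_us  /* c_b = 60ms, c_a + c_b > C:
                                                also ok; the parent's quota
                                                still caps combined usage */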

There are two ways in which a group may become throttled:

   a. it fully consumes its own quota within a period
   b. a parent's quota is fully consumed within its period

In case b) above, even though the child may have runtime remaining it will not
be allowed to run until the parent's runtime is refreshed.

CFS Bandwidth Quota Caveats
---------------------------
Once a slice is assigned to a cpu it does not expire. However, all but 1ms of
the slice may be returned to the global pool if all threads on that cpu become
unrunnable. This is configured at compile time by the min_cfs_rq_runtime
variable. This is a performance tweak that helps prevent added contention on
the global lock.

The fact that cpu-local slices do not expire results in some interesting corner
cases that should be understood.

For applications that are cpu-bound and constrained by a cgroup quota, this
point is relatively moot because they will naturally consume the entirety of
their quota as well as the entirety of each cpu-local slice in each period. As
a result it is expected that nr_periods will roughly equal nr_throttled, and
that cpuacct.usage will increase by roughly cfs_quota_us in each period.
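
One way to check this expectation (a sketch; it assumes cpuacct is co-mounted
with the cpu controller, as is common for v1 hierarchies)::

   # u0=$(cat cpuacct.usage); sleep 1; u1=$(cat cpuacct.usage)
   # echo $(((u1 - u0) / 1000))   /* usage over 1s in microseconds; for a
                                     fully cpu-limited group this approaches
                                     quota * (1s / period) */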

For highly-threaded, non-cpu-bound applications this non-expiration nuance
allows applications to briefly burst past their quota limits by the amount of
unused slice on each cpu that the task group is running on (typically at most
1ms per cpu or as defined by min_cfs_rq_runtime). This slight burst only
applies if quota had been assigned to a cpu and then not fully used or returned
in previous periods. This burst amount will not be transferred between cores.
As a result, this mechanism still strictly limits the task group to quota
average usage, albeit over a longer time window than a single period. This
also limits the burst ability to no more than 1ms per cpu. This provides a
better, more predictable user experience for highly threaded applications with
small quota limits on high core count machines. It also eliminates the
propensity to throttle these applications while simultaneously using less than
quota amounts of cpu. Another way to say this is that by allowing the unused
portion of a slice to remain valid across periods we have decreased the
possibility of wastefully expiring quota on cpu-local silos that don't need a
full slice's amount of cpu time.
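
As a worked bound: on a machine with N cpus the one-period burst is limited to
N * min_cfs_rq_runtime beyond quota, e.g.::

   max one-period usage = quota + ncpus * 1ms
                        = 20ms + 8 * 1ms = 28ms   (8 cpus, 20ms quota)

while average usage over consecutive periods remains <= quota.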

The interaction between cpu-bound and non-cpu-bound-interactive applications
should also be considered, especially when single core usage hits 100%. If you
gave each of these applications half of a cpu-core and they both got scheduled
on the same CPU it is theoretically possible that the non-cpu-bound application
will use up to 1ms additional quota in some periods, thereby preventing the
cpu-bound application from fully using its quota by that same amount. In these
instances it will be up to the CFS algorithm (see sched-design-CFS.rst) to
decide which application is chosen to run, as they will both be runnable and
have remaining quota. This runtime discrepancy will be made up in the following
periods when the interactive application idles.

Examples
--------
1. Limit a group to 1 CPU worth of runtime.

   If period is 250ms and quota is also 250ms, the group will get
   1 CPU worth of runtime every 250ms::

      # echo 250000 > cpu.cfs_quota_us  /* quota = 250ms */
      # echo 250000 > cpu.cfs_period_us /* period = 250ms */

2. Limit a group to 2 CPUs worth of runtime on a multi-CPU machine.

   With 500ms period and 1000ms quota, the group can get 2 CPUs worth of
   runtime every 500ms::

      # echo 1000000 > cpu.cfs_quota_us /* quota = 1000ms */
      # echo 500000 > cpu.cfs_period_us /* period = 500ms */

   The larger period here allows for increased burst capacity.

3. Limit a group to 20% of 1 CPU.

   With 50ms period, 10ms quota will be equivalent to 20% of 1 CPU::

      # echo 10000 > cpu.cfs_quota_us /* quota = 10ms */
      # echo 50000 > cpu.cfs_period_us /* period = 50ms */

   By using a small period here we are ensuring a consistent latency
   response at the expense of burst capacity.