===============================================================
Softlockup detector and hardlockup detector (aka nmi_watchdog)
===============================================================

The Linux kernel can act as a watchdog to detect both soft and hard
lockups.

A 'softlockup' is defined as a bug that causes the kernel to loop in
kernel mode for more than 20 seconds (see "Implementation" below for
details), without giving other tasks a chance to run. The current
stack trace is displayed upon detection and, by default, the system
will stay locked up. Alternatively, the kernel can be configured to
panic; a sysctl, "kernel.softlockup_panic", a kernel parameter,
"softlockup_panic" (see "Documentation/admin-guide/kernel-parameters.rst" for
details), and a compile option, "BOOTPARAM_SOFTLOCKUP_PANIC", are
provided for this.
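
As an illustration, the panic behavior can be selected at run time
through the sysctl interface (the value shown is only an example and
requires root)::

    # panic instead of merely warning when a soft lockup is detected
    sysctl -w kernel.softlockup_panic=1

The same choice can be made from early boot by adding
"softlockup_panic=1" to the kernel command line, or made the
build-time default by enabling BOOTPARAM_SOFTLOCKUP_PANIC in the
kernel configuration.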

A 'hardlockup' is defined as a bug that causes the CPU to loop in
kernel mode for more than 10 seconds (see "Implementation" below for
details), without letting other interrupts have a chance to run.
Similarly to the softlockup case, the current stack trace is displayed
upon detection and the system will stay locked up unless the default
behavior is changed, which can be done through a sysctl,
'hardlockup_panic', a compile-time knob, "BOOTPARAM_HARDLOCKUP_PANIC",
and a kernel parameter, "nmi_watchdog"
(see "Documentation/admin-guide/kernel-parameters.rst" for details).
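
The run-time switch is analogous to the softlockup one; a minimal
sketch (again with an illustrative value)::

    # panic instead of merely warning when a hard lockup is detected
    sysctl -w kernel.hardlockup_panic=1

Where the NMI watchdog is available, the "nmi_watchdog=panic" boot
parameter selects the same behavior from early boot.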

The panic option can be used in combination with panic_timeout (this
timeout is set through the confusingly named "kernel.panic" sysctl)
to cause the system to reboot automatically after a specified amount
of time.
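
Combining the two, so that a detected lockup panics the machine and
the machine then reboots on its own, could be sketched as follows
(the 30 second timeout is arbitrary)::

    # reboot 30 seconds after a panic
    sysctl -w kernel.panic=30
    # and make lockups actually trigger that panic
    sysctl -w kernel.softlockup_panic=1
    sysctl -w kernel.hardlockup_panic=1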

Implementation
==============

The soft and hard lockup detectors are built on top of the hrtimer and
perf subsystems, respectively. A direct consequence of this is that,
in principle, they should work on any architecture where these
subsystems are present.

A periodic hrtimer runs to generate interrupts and kick the watchdog
task. An NMI perf event is generated every "watchdog_thresh"
(compile-time initialized to 10 and configurable through a sysctl of
the same name) seconds to check for hardlockups. If any CPU in the
system does not receive any hrtimer interrupt during that time, the
'hardlockup detector' (the handler for the NMI perf event) will
generate a kernel warning or call panic, depending on the
configuration.
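
The current threshold can be inspected through the sysctl mentioned
above; for example (the value printed here is the compile-time
default, not necessarily what a given system will report)::

    # show the hard lockup threshold in seconds
    sysctl kernel.watchdog_thresh
    kernel.watchdog_thresh = 10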

The watchdog task is a high priority kernel thread that updates a
timestamp every time it is scheduled. If that timestamp is not updated
for 2*watchdog_thresh seconds (the softlockup threshold) the
'softlockup detector' (coded inside the hrtimer callback function)
will dump useful debug information to the system log, after which it
will call panic if it was instructed to do so or resume execution of
other kernel code.

The period of the hrtimer is 2*watchdog_thresh/5, which means it has
two or three chances to generate an interrupt before the hardlockup
detector kicks in.
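
A worked example with the default settings just restates that
arithmetic::

    watchdog_thresh       = 10 s   (compile-time default)
    hrtimer period        = 2*10/5 = 4 s
    hardlockup check      = every 10 s (NMI perf event)
    softlockup threshold  = 2*10 = 20 s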

As explained above, a kernel knob is provided that allows
administrators to configure the period of the hrtimer and the perf
event. The right value for a particular environment is a trade-off
between fast response to lockups and detection overhead.
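
Lowering the threshold catches lockups sooner at the price of more
frequent hrtimer interrupts and NMI perf events; raising it does the
opposite. As a sketch (the value is only an example)::

    # detect hard lockups after roughly 5 seconds, soft lockups after 10
    sysctl -w kernel.watchdog_thresh=5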

By default, the watchdog runs on all online cores. However, on a
kernel configured with NO_HZ_FULL, by default the watchdog runs only
on the housekeeping cores, not the cores specified in the "nohz_full"
boot argument. If we allowed the watchdog to run by default on
the "nohz_full" cores, we would have to run timer ticks to activate
the scheduler, which would prevent the "nohz_full" functionality
from protecting the user code on those cores from the kernel.
Of course, disabling it by default on the nohz_full cores means that
when those cores do enter the kernel, by default we will not be
able to detect if they lock up. However, allowing the watchdog
to continue to run on the housekeeping (non-tickless) cores means
that we will continue to detect lockups properly on those cores.

In either case, the set of cores excluded from running the watchdog
may be adjusted via the kernel.watchdog_cpumask sysctl. For
nohz_full cores, this may be useful for debugging a case where the
kernel seems to be hanging on the nohz_full cores.
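
For example, restricting the lockup detectors to the first four cores
could be sketched as follows (the range is illustrative; the mask is
written in the usual cpu-list format)::

    # run the watchdog only on CPUs 0-3
    echo 0-3 > /proc/sys/kernel/watchdog_cpumask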