Yang Jihong [Sat, 9 Jul 2022 01:50:33 +0000 (09:50 +0800)]
perf kwork: Add workqueue trace BPF support
Implements the workqueue trace BPF function.
Test cases:
# perf kwork -k workqueue lat -b
Starting trace, Hit <Ctrl+C> to stop and report
^C
Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
(w)addrconf_verify_work | 0002 | 5.856 ms | 1 | 5.856 ms | 111994.634313 s | 111994.640169 s |
(w)vmstat_update | 0001 | 1.247 ms | 1 | 1.247 ms | 111996.462651 s | 111996.463899 s |
(w)neigh_periodic_work | 0001 | 1.183 ms | 1 | 1.183 ms | 111996.462789 s | 111996.463973 s |
(w)neigh_managed_work | 0001 | 0.989 ms | 2 | 1.635 ms | 111996.462820 s | 111996.464455 s |
(w)wb_workfn | 0000 | 0.667 ms | 1 | 0.667 ms | 111996.384273 s | 111996.384940 s |
(w)bpf_prog_free_deferred | 0001 | 0.495 ms | 1 | 0.495 ms | 111986.314201 s | 111986.314696 s |
(w)mix_interrupt_randomness | 0002 | 0.421 ms | 6 | 0.749 ms | 111995.927750 s | 111995.928499 s |
(w)vmstat_shepherd | 0000 | 0.374 ms | 2 | 0.385 ms | 111991.265242 s | 111991.265627 s |
(w)e1000_watchdog | 0002 | 0.356 ms | 5 | 0.390 ms | 111994.528380 s | 111994.528770 s |
(w)vmstat_update | 0000 | 0.231 ms | 2 | 0.365 ms | 111996.384407 s | 111996.384772 s |
(w)flush_to_ldisc | 0006 | 0.165 ms | 1 | 0.165 ms | 111995.930606 s | 111995.930771 s |
(w)flush_to_ldisc | 0000 | 0.094 ms | 2 | 0.095 ms | 111996.460453 s | 111996.460548 s |
--------------------------------------------------------------------------------------------------------------------------------
# perf kwork -k workqueue rep -b
Starting trace, Hit <Ctrl+C> to stop and report
^C
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
(w)e1000_watchdog | 0002 | 0.627 ms | 2 | 0.324 ms | 112002.720665 s | 112002.720989 s |
(w)flush_to_ldisc | 0007 | 0.598 ms | 2 | 0.534 ms | 112000.875226 s | 112000.875761 s |
(w)wq_barrier_func | 0007 | 0.492 ms | 1 | 0.492 ms | 112000.876981 s | 112000.877473 s |
(w)flush_to_ldisc | 0007 | 0.281 ms | 1 | 0.281 ms | 112005.826882 s | 112005.827163 s |
(w)mix_interrupt_randomness | 0002 | 0.229 ms | 3 | 0.102 ms | 112005.825671 s | 112005.825774 s |
(w)vmstat_shepherd | 0000 | 0.202 ms | 1 | 0.202 ms | 112001.504511 s | 112001.504713 s |
(w)bpf_prog_free_deferred | 0001 | 0.181 ms | 1 | 0.181 ms | 112000.883251 s | 112000.883432 s |
(w)wb_workfn | 0007 | 0.130 ms | 1 | 0.130 ms | 112001.505195 s | 112001.505325 s |
(w)vmstat_update | 0000 | 0.053 ms | 1 | 0.053 ms | 112001.504763 s | 112001.504815 s |
--------------------------------------------------------------------------------------------------------------------------------
Yang Jihong [Sat, 9 Jul 2022 01:50:32 +0000 (09:50 +0800)]
perf kwork: Add softirq trace BPF support
Implements the softirq trace BPF function.
Test cases:
Trace softirq latency without filter:
# perf kwork -k softirq lat -b
Starting trace, Hit <Ctrl+C> to stop and report
^C
Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
(s)RCU:9 | 0005 | 0.281 ms | 3 | 0.338 ms | 111295.752222 s | 111295.752560 s |
(s)RCU:9 | 0002 | 0.262 ms | 24 | 1.400 ms | 111301.335986 s | 111301.337386 s |
(s)SCHED:7 | 0005 | 0.177 ms | 14 | 0.212 ms | 111295.752270 s | 111295.752481 s |
(s)RCU:9 | 0007 | 0.161 ms | 47 | 2.022 ms | 111295.402159 s | 111295.404181 s |
(s)NET_RX:3 | 0003 | 0.149 ms | 12 | 1.261 ms | 111301.192964 s | 111301.194225 s |
(s)TIMER:1 | 0001 | 0.105 ms | 9 | 0.198 ms | 111301.180191 s | 111301.180389 s |
... <SNIP> ...
(s)NET_RX:3 | 0002 | 0.098 ms | 6 | 0.124 ms | 111295.403760 s | 111295.403884 s |
(s)SCHED:7 | 0001 | 0.093 ms | 19 | 0.242 ms | 111301.180256 s | 111301.180498 s |
(s)SCHED:7 | 0007 | 0.078 ms | 15 | 0.188 ms | 111300.064226 s | 111300.064415 s |
(s)SCHED:7 | 0004 | 0.077 ms | 11 | 0.213 ms | 111301.361759 s | 111301.361973 s |
(s)SCHED:7 | 0000 | 0.063 ms | 33 | 0.805 ms | 111295.401811 s | 111295.402616 s |
(s)SCHED:7 | 0003 | 0.063 ms | 14 | 0.085 ms | 111301.192255 s | 111301.192340 s |
--------------------------------------------------------------------------------------------------------------------------------
Trace softirq latency with cpu filter:
# perf kwork -k softirq lat -b -C 1
Starting trace, Hit <Ctrl+C> to stop and report
^C
Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
(s)RCU:9 | 0001 | 0.178 ms | 5 | 0.572 ms | 111435.534135 s | 111435.534707 s |
--------------------------------------------------------------------------------------------------------------------------------
Trace softirq latency with name filter:
# perf kwork -k softirq lat -b -n SCHED
Starting trace, Hit <Ctrl+C> to stop and report
^C
Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
(s)SCHED:7 | 0001 | 0.295 ms | 15 | 2.183 ms | 111452.534950 s | 111452.537133 s |
(s)SCHED:7 | 0002 | 0.215 ms | 10 | 0.315 ms | 111460.000238 s | 111460.000553 s |
(s)SCHED:7 | 0005 | 0.190 ms | 29 | 0.338 ms | 111457.032538 s | 111457.032876 s |
(s)SCHED:7 | 0003 | 0.097 ms | 10 | 0.319 ms | 111452.434351 s | 111452.434670 s |
(s)SCHED:7 | 0006 | 0.089 ms | 1 | 0.089 ms | 111450.737450 s | 111450.737539 s |
(s)SCHED:7 | 0007 | 0.085 ms | 17 | 0.169 ms | 111452.471333 s | 111452.471502 s |
(s)SCHED:7 | 0004 | 0.071 ms | 15 | 0.221 ms | 111452.535252 s | 111452.535473 s |
(s)SCHED:7 | 0000 | 0.044 ms | 32 | 0.130 ms | 111460.001982 s | 111460.002112 s |
--------------------------------------------------------------------------------------------------------------------------------
Yang Jihong [Sat, 9 Jul 2022 01:50:31 +0000 (09:50 +0800)]
perf kwork: Add IRQ trace BPF support
Implements the IRQ trace BPF function.
Test cases:
Trace irq without filter:
# perf kwork -k irq rep -b
Starting trace, Hit <Ctrl+C> to stop and report
^C
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
virtio0-requests:25 | 0000 | 31.026 ms | 285 | 1.493 ms | 110326.049963 s | 110326.051456 s |
eth0:10 | 0002 | 7.875 ms | 96 | 1.429 ms | 110313.916835 s | 110313.918264 s |
ata_piix:14 | 0002 | 2.510 ms | 28 | 0.396 ms | 110331.367987 s | 110331.368383 s |
--------------------------------------------------------------------------------------------------------------------------------
Trace irq with cpu filter:
# perf kwork -k irq rep -b -C 0
Starting trace, Hit <Ctrl+C> to stop and report
^C
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
virtio0-requests:25 | 0000 | 34.288 ms | 282 | 2.061 ms | 110358.078968 s | 110358.081029 s |
--------------------------------------------------------------------------------------------------------------------------------
Trace irq with name filter:
# perf kwork -k irq rep -b -n eth0
Starting trace, Hit <Ctrl+C> to stop and report
^C
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
eth0:10 | 0002 | 2.184 ms | 21 | 0.572 ms | 110386.541699 s | 110386.542271 s |
--------------------------------------------------------------------------------------------------------------------------------
Trace irq with summary:
# perf kwork -k irq rep -b -S
Starting trace, Hit <Ctrl+C> to stop and report
^C
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
virtio0-requests:25 | 0000 | 42.923 ms | 285 | 1.181 ms | 110418.128867 s | 110418.130049 s |
eth0:10 | 0002 | 2.085 ms | 20 | 0.668 ms | 110416.002935 s | 110416.003603 s |
ata_piix:14 | 0002 | 0.970 ms | 4 | 0.656 ms | 110424.034482 s | 110424.035138 s |
--------------------------------------------------------------------------------------------------------------------------------
Total count : 309
Total runtime (msec) : 45.977 (0.003% load average)
Total time span (msec) : 17017.655
--------------------------------------------------------------------------------------------------------------------------------
Committer testing:
# perf kwork -k irq rep -b
Starting trace, Hit <Ctrl+C> to stop and report
^C
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
nvme0q20:145 | 0019 | 0.570 ms | 28 | 0.064 ms | 26966.635102 s | 26966.635167 s |
amdgpu:162 | 0002 | 0.568 ms | 29 | 0.068 ms | 26966.644346 s | 26966.644414 s |
nvme0q4:129 | 0003 | 0.565 ms | 31 | 0.037 ms | 26966.614830 s | 26966.614866 s |
nvme0q16:141 | 0015 | 0.205 ms | 66 | 0.012 ms | 26967.145161 s | 26967.145174 s |
nvme0q29:154 | 0028 | 0.154 ms | 44 | 0.014 ms | 26967.078970 s | 26967.078984 s |
nvme0q10:135 | 0009 | 0.134 ms | 43 | 0.011 ms | 26967.132093 s | 26967.132104 s |
nvme0q2:127 | 0001 | 0.132 ms | 26 | 0.011 ms | 26966.883584 s | 26966.883595 s |
nvme0q25:150 | 0024 | 0.127 ms | 32 | 0.014 ms | 26966.631419 s | 26966.631433 s |
nvme0q14:139 | 0013 | 0.110 ms | 21 | 0.017 ms | 26966.760843 s | 26966.760861 s |
nvme0q30:155 | 0029 | 0.102 ms | 30 | 0.022 ms | 26966.677171 s | 26966.677193 s |
nvme0q13:138 | 0012 | 0.088 ms | 20 | 0.015 ms | 26966.738733 s | 26966.738748 s |
nvme0q6:131 | 0005 | 0.087 ms | 13 | 0.020 ms | 26966.648445 s | 26966.648465 s |
nvme0q28:153 | 0027 | 0.066 ms | 12 | 0.015 ms | 26966.771431 s | 26966.771447 s |
nvme0q26:151 | 0025 | 0.060 ms | 13 | 0.012 ms | 26966.704266 s | 26966.704278 s |
nvme0q21:146 | 0020 | 0.054 ms | 20 | 0.011 ms | 26967.322082 s | 26967.322094 s |
nvme0q1:126 | 0000 | 0.046 ms | 11 | 0.013 ms | 26966.859754 s | 26966.859767 s |
nvme0q17:142 | 0016 | 0.046 ms | 10 | 0.011 ms | 26967.114513 s | 26967.114524 s |
xhci_hcd:74 | 0015 | 0.041 ms | 3 | 0.016 ms | 26967.086004 s | 26967.086020 s |
nvme0q8:133 | 0007 | 0.039 ms | 12 | 0.008 ms | 26966.712056 s | 26966.712063 s |
nvme0q32:157 | 0031 | 0.036 ms | 10 | 0.014 ms | 26966.627054 s | 26966.627068 s |
nvme0q9:134 | 0008 | 0.036 ms | 11 | 0.011 ms | 26967.258452 s | 26967.258462 s |
nvme0q7:132 | 0006 | 0.024 ms | 3 | 0.014 ms | 26966.767404 s | 26966.767418 s |
nvme0q11:136 | 0010 | 0.023 ms | 5 | 0.006 ms | 26966.935455 s | 26966.935461 s |
nvme0q31:156 | 0030 | 0.018 ms | 5 | 0.006 ms | 26966.627517 s | 26966.627524 s |
nvme0q12:137 | 0011 | 0.015 ms | 2 | 0.014 ms | 26966.799588 s | 26966.799602 s |
enp5s0-rx-0:164 | 0006 | 0.009 ms | 2 | 0.005 ms | 26966.742024 s | 26966.742028 s |
enp5s0-rx-1:165 | 0007 | 0.006 ms | 2 | 0.004 ms | 26966.939486 s | 26966.939490 s |
enp5s0-tx-0:166 | 0008 | 0.005 ms | 1 | 0.005 ms | 26966.939484 s | 26966.939489 s |
enp5s0-tx-1:167 | 0009 | 0.005 ms | 1 | 0.005 ms | 26966.939484 s | 26966.939489 s |
--------------------------------------------------------------------------------------------------------------------------------
Yang Jihong [Sat, 9 Jul 2022 01:50:30 +0000 (09:50 +0800)]
perf kwork: Implement BPF trace
'perf record' writes perf.data, which causes extra hard disk interrupts,
and the amount of data collected grows with time. Using an eBPF trace
allows the data to be processed in the kernel, which solves both of
these problems.
Add a -b/--use-bpf option for the latency and report subcommands to
support tracing kwork events using eBPF:
1. Create BPF programs and attach them to tracepoints.
2. Start tracing after the command is entered.
3. After the user hits Ctrl+C, stop tracing and report.
4. Support CPU and name filtering.
This commit implements the framework code (sketched below) and does not
add specific event support.
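Below is a minimal, hedged sketch of the open/load/attach, trace-until-Ctrl+C
flow described above, using plain libbpf. The object file name
"kwork_trace.bpf.o" and the idea of reading the report out of BPF maps are
illustrative assumptions, not the actual perf kwork implementation:

#include <signal.h>
#include <unistd.h>
#include <bpf/libbpf.h>

static volatile sig_atomic_t done;

static void sig_handler(int sig)
{
        done = 1;
}

int main(void)
{
        struct bpf_object *obj;
        struct bpf_program *prog;
        int err;

        /* "kwork_trace.bpf.o" is a hypothetical compiled BPF object. */
        obj = bpf_object__open_file("kwork_trace.bpf.o", NULL);
        if (!obj)
                return 1;

        err = bpf_object__load(obj);
        if (err)
                goto out;

        /* Attach each program in the object to its tracepoint section. */
        bpf_object__for_each_program(prog, obj) {
                if (!bpf_program__attach(prog)) {
                        err = -1;
                        goto out;
                }
        }

        signal(SIGINT, sig_handler);
        while (!done)           /* trace until the user hits Ctrl+C */
                pause();

        /* ... read accumulated runtime/latency from the BPF maps and report ... */
out:
        bpf_object__close(obj);
        return err ? 1 : 0;
}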
Test cases:
# perf kwork rep -h
Usage: perf kwork report [<options>]
-b, --use-bpf Use BPF to measure kwork runtime
-C, --cpu <cpu> list of cpus to profile
-i, --input <file> input file name
-n, --name <name> event name to profile
-s, --sort <key[,key2...]>
sort by key(s): runtime, max, count
-S, --with-summary Show summary with statistics
--time <str> Time span for analysis (start,stop)
# perf kwork lat -h
Usage: perf kwork latency [<options>]
-b, --use-bpf Use BPF to measure kwork latency
-C, --cpu <cpu> list of cpus to profile
-i, --input <file> input file name
-n, --name <name> event name to profile
-s, --sort <key[,key2...]>
sort by key(s): avg, max, count
--time <str> Time span for analysis (start,stop)
# perf kwork lat -b
Unsupported bpf trace class irq
# perf kwork rep -b
Unsupported bpf trace class irq
Yang Jihong [Sat, 9 Jul 2022 01:50:28 +0000 (09:50 +0800)]
perf kwork: Add workqueue latency support
Implements the workqueue latency function.
Test cases:
# perf kwork -k workqueue lat
Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
(w)vmstat_update | 0001 | 5.004 ms | 1 | 5.004 ms | 44001.745646 s | 44001.750650 s |
(w)vmstat_update | 0006 | 1.773 ms | 1 | 1.773 ms | 44000.830840 s | 44000.832613 s |
(w)vmstat_shepherd | 0000 | 0.992 ms | 8 | 2.474 ms | 44007.717845 s | 44007.720318 s |
(w)vmstat_update | 0000 | 0.974 ms | 5 | 2.624 ms | 44004.785970 s | 44004.788594 s |
(w)e1000_watchdog | 0002 | 0.687 ms | 5 | 2.632 ms | 44005.009334 s | 44005.011966 s |
(w)vmstat_update | 0002 | 0.307 ms | 1 | 0.307 ms | 44004.817395 s | 44004.817702 s |
(w)vmstat_update | 0004 | 0.296 ms | 1 | 0.296 ms | 43997.913677 s | 43997.913973 s |
(w)mix_interrupt_randomness | 0000 | 0.283 ms | 285 | 3.724 ms | 44006.790889 s | 44006.794613 s |
(w)neigh_managed_work | 0001 | 0.271 ms | 1 | 0.271 ms | 43997.665542 s | 43997.665813 s |
(w)vmstat_update | 0005 | 0.261 ms | 1 | 0.261 ms | 44007.820542 s | 44007.820803 s |
(w)neigh_managed_work | 0004 | 0.220 ms | 1 | 0.220 ms | 44002.953287 s | 44002.953507 s |
(w)neigh_periodic_work | 0004 | 0.217 ms | 1 | 0.217 ms | 43999.929718 s | 43999.929935 s |
(w)mix_interrupt_randomness | 0002 | 0.199 ms | 5 | 0.310 ms | 44005.012316 s | 44005.012625 s |
(w)vmstat_update | 0003 | 0.199 ms | 4 | 0.307 ms | 44005.714391 s | 44005.714699 s |
(w)gc_worker | 0001 | 0.071 ms | 173 | 1.128 ms | 44002.062579 s | 44002.063707 s |
--------------------------------------------------------------------------------------------------------------------------------
INFO: 0.020% skipped events (17 including 10 raise, 7 entry, 0 exit)
Yang Jihong [Sat, 9 Jul 2022 01:50:27 +0000 (09:50 +0800)]
perf kwork: Add softirq latency support
Implements the softirq latency function.
Test cases:
# perf kwork -k softirq lat
Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
(s)TIMER:1 | 0006 | 1.048 ms | 1 | 1.048 ms | 44000.829759 s | 44000.830807 s |
(s)TIMER:1 | 0001 | 1.008 ms | 4 | 3.434 ms | 43997.662069 s | 43997.665503 s |
(s)RCU:9 | 0006 | 0.675 ms | 7 | 1.328 ms | 43997.670304 s | 43997.671632 s |
(s)RCU:9 | 0000 | 0.414 ms | 701 | 3.996 ms | 43997.661170 s | 43997.665167 s |
(s)RCU:9 | 0005 | 0.245 ms | 88 | 1.866 ms | 43997.683105 s | 43997.684971 s |
(s)SCHED:7 | 0000 | 0.158 ms | 677 | 2.639 ms | 44004.785716 s | 44004.788355 s |
... <SNIP> ...
(s)RCU:9 | 0002 | 0.141 ms | 932 | 1.662 ms | 44005.010206 s | 44005.011868 s |
(s)RCU:9 | 0003 | 0.129 ms | 2193 | 1.507 ms | 44006.010208 s | 44006.011715 s |
(s)TIMER:1 | 0005 | 0.128 ms | 1 | 0.128 ms | 44007.820346 s | 44007.820474 s |
(s)SCHED:7 | 0002 | 0.040 ms | 1731 | 0.211 ms | 44005.009237 s | 44005.009447 s |
--------------------------------------------------------------------------------------------------------------------------------
# perf kwork -k softirq lat -C 1,2
Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
(s)TIMER:1 | 0001 | 1.008 ms | 4 | 3.434 ms | 43997.662069 s | 43997.665503 s |
(s)RCU:9 | 0001 | 0.216 ms | 1619 | 3.659 ms | 43997.662069 s | 43997.665727 s |
(s)RCU:9 | 0002 | 0.141 ms | 932 | 1.662 ms | 44005.010206 s | 44005.011868 s |
(s)NET_RX:3 | 0002 | 0.106 ms | 5 | 0.163 ms | 44005.012255 s | 44005.012418 s |
(s)TIMER:1 | 0002 | 0.084 ms | 9 | 0.114 ms | 44005.009168 s | 44005.009282 s |
(s)SCHED:7 | 0001 | 0.049 ms | 655 | 0.837 ms | 44005.707998 s | 44005.708835 s |
(s)SCHED:7 | 0002 | 0.040 ms | 1731 | 0.211 ms | 44005.009237 s | 44005.009447 s |
--------------------------------------------------------------------------------------------------------------------------------
# perf kwork -k softirq lat -n RCU
Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
(s)RCU:9 | 0006 | 0.675 ms | 7 | 1.328 ms | 43997.670304 s | 43997.671632 s |
(s)RCU:9 | 0000 | 0.414 ms | 701 | 3.996 ms | 43997.661170 s | 43997.665167 s |
(s)RCU:9 | 0005 | 0.245 ms | 88 | 1.866 ms | 43997.683105 s | 43997.684971 s |
(s)RCU:9 | 0004 | 0.237 ms | 26 | 0.792 ms | 43997.683018 s | 43997.683810 s |
(s)RCU:9 | 0007 | 0.217 ms | 140 | 1.335 ms | 43997.671080 s | 43997.672415 s |
(s)RCU:9 | 0001 | 0.216 ms | 1619 | 3.659 ms | 43997.662069 s | 43997.665727 s |
(s)RCU:9 | 0002 | 0.141 ms | 932 | 1.662 ms | 44005.010206 s | 44005.011868 s |
(s)RCU:9 | 0003 | 0.129 ms | 2193 | 1.507 ms | 44006.010208 s | 44006.011715 s |
--------------------------------------------------------------------------------------------------------------------------------
# perf kwork -k softirq lat -s count,avg -n RCU
Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
(s)RCU:9 | 0003 | 0.129 ms | 2193 | 1.507 ms | 44006.010208 s | 44006.011715 s |
(s)RCU:9 | 0001 | 0.216 ms | 1619 | 3.659 ms | 43997.662069 s | 43997.665727 s |
(s)RCU:9 | 0002 | 0.141 ms | 932 | 1.662 ms | 44005.010206 s | 44005.011868 s |
(s)RCU:9 | 0000 | 0.414 ms | 701 | 3.996 ms | 43997.661170 s | 43997.665167 s |
(s)RCU:9 | 0007 | 0.217 ms | 140 | 1.335 ms | 43997.671080 s | 43997.672415 s |
(s)RCU:9 | 0005 | 0.245 ms | 88 | 1.866 ms | 43997.683105 s | 43997.684971 s |
(s)RCU:9 | 0004 | 0.237 ms | 26 | 0.792 ms | 43997.683018 s | 43997.683810 s |
(s)RCU:9 | 0006 | 0.675 ms | 7 | 1.328 ms | 43997.670304 s | 43997.671632 s |
--------------------------------------------------------------------------------------------------------------------------------
# perf kwork -k softirq lat --time 43997,
Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
(s)TIMER:1 | 0006 | 1.048 ms | 1 | 1.048 ms | 44000.829759 s | 44000.830807 s |
(s)TIMER:1 | 0001 | 1.008 ms | 4 | 3.434 ms | 43997.662069 s | 43997.665503 s |
(s)RCU:9 | 0006 | 0.675 ms | 7 | 1.328 ms | 43997.670304 s | 43997.671632 s |
(s)RCU:9 | 0000 | 0.414 ms | 701 | 3.996 ms | 43997.661170 s | 43997.665167 s |
(s)TIMER:1 | 0004 | 0.083 ms | 21 | 0.127 ms | 44004.969171 s | 44004.969298 s |
... <SNIP> ...
(s)SCHED:7 | 0005 | 0.050 ms | 4 | 0.086 ms | 43997.684852 s | 43997.684938 s |
(s)SCHED:7 | 0001 | 0.049 ms | 655 | 0.837 ms | 44005.707998 s | 44005.708835 s |
(s)SCHED:7 | 0007 | 0.044 ms | 171 | 0.077 ms | 43997.943265 s | 43997.943342 s |
(s)SCHED:7 | 0002 | 0.040 ms | 1731 | 0.211 ms | 44005.009237 s | 44005.009447 s |
--------------------------------------------------------------------------------------------------------------------------------
Yang Jihong [Sat, 9 Jul 2022 01:50:26 +0000 (09:50 +0800)]
perf kwork: Implement perf kwork latency
Implements the framework of 'perf kwork latency', which is used to report
time properties such as delay time and frequency.
Test cases:
# perf kwork lat -h
Usage: perf kwork latency [<options>]
-C, --cpu <cpu> list of cpus to profile
-i, --input <file> input file name
-n, --name <name> event name to profile
-s, --sort <key[,key2...]>
sort by key(s): avg, max, count
--time <str> Time span for analysis (start,stop)
# perf kwork lat -C 199
Requested CPU 199 too large. Consider raising MAX_NR_CPUS
Invalid cpu bitmap
# perf kwork lat -i perf_no_exist.data
failed to open perf_no_exist.data: No such file or directory
-C, --cpu <cpu> list of cpus to profile
-i, --input <file> input file name
-n, --name <name> event name to profile
-s, --sort <key[,key2...]>
sort by key(s): avg, max, count
--time <str> Time span for analysis (start,stop)
# perf kwork lat --time FFFF,
Invalid time span
# perf kwork lat
Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------
INFO: 36.570% skipped events (31537 including 0 raise, 31537 entry, 0 exit)
Since there are no latency-enabled events, the output is empty.
Yang Jihong [Sat, 9 Jul 2022 01:50:25 +0000 (09:50 +0800)]
perf kwork: Add workqueue report support
Implements the workqueue report function.
Test cases:
# perf kwork -k workqueue rep
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
(w)gc_worker | 0001 | 1912.389 ms | 173 | 12.896 ms | 44002.050787 s | 44002.063683 s |
(w)mix_interrupt_randomness | 0000 | 24.308 ms | 285 | 3.349 ms | 44004.784908 s | 44004.788257 s |
(w)e1000_watchdog | 0002 | 5.332 ms | 5 | 2.059 ms | 44000.914366 s | 44000.916424 s |
(w)vmstat_update | 0005 | 0.989 ms | 2 | 0.953 ms | 43997.986991 s | 43997.987944 s |
(w)vmstat_shepherd | 0000 | 0.964 ms | 8 | 0.195 ms | 43997.986453 s | 43997.986648 s |
(w)vmstat_update | 0003 | 0.306 ms | 6 | 0.077 ms | 44004.689543 s | 44004.689620 s |
(w)vmstat_update | 0000 | 0.196 ms | 5 | 0.049 ms | 44005.713732 s | 44005.713781 s |
(w)vmstat_update | 0001 | 0.162 ms | 2 | 0.130 ms | 44000.192034 s | 44000.192164 s |
(w)mix_interrupt_randomness | 0002 | 0.114 ms | 5 | 0.037 ms | 44005.012625 s | 44005.012662 s |
(w)vmstat_update | 0002 | 0.084 ms | 2 | 0.043 ms | 44004.817702 s | 44004.817745 s |
(w)vmstat_update | 0006 | 0.067 ms | 2 | 0.041 ms | 43997.987214 s | 43997.987254 s |
(w)neigh_periodic_work | 0004 | 0.039 ms | 1 | 0.039 ms | 43999.929935 s | 43999.929974 s |
(w)vmstat_update | 0007 | 0.037 ms | 1 | 0.037 ms | 43997.988969 s | 43997.989006 s |
(w)neigh_managed_work | 0001 | 0.036 ms | 1 | 0.036 ms | 43997.665813 s | 43997.665849 s |
(w)neigh_managed_work | 0004 | 0.036 ms | 1 | 0.036 ms | 44002.953507 s | 44002.953543 s |
(w)vmstat_update | 0004 | 0.027 ms | 1 | 0.027 ms | 43997.913973 s | 43997.914000 s |
--------------------------------------------------------------------------------------------------------------------------------
# perf kwork -k workqueue rep -S
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
(w)gc_worker | 0001 | 1912.389 ms | 173 | 12.896 ms | 44002.050787 s | 44002.063683 s |
(w)mix_interrupt_randomness | 0000 | 24.308 ms | 285 | 3.349 ms | 44004.784908 s | 44004.788257 s |
(w)e1000_watchdog | 0002 | 5.332 ms | 5 | 2.059 ms | 44000.914366 s | 44000.916424 s |
(w)vmstat_update | 0005 | 0.989 ms | 2 | 0.953 ms | 43997.986991 s | 43997.987944 s |
(w)vmstat_shepherd | 0000 | 0.964 ms | 8 | 0.195 ms | 43997.986453 s | 43997.986648 s |
(w)vmstat_update | 0003 | 0.306 ms | 6 | 0.077 ms | 44004.689543 s | 44004.689620 s |
(w)vmstat_update | 0000 | 0.196 ms | 5 | 0.049 ms | 44005.713732 s | 44005.713781 s |
(w)vmstat_update | 0001 | 0.162 ms | 2 | 0.130 ms | 44000.192034 s | 44000.192164 s |
(w)mix_interrupt_randomness | 0002 | 0.114 ms | 5 | 0.037 ms | 44005.012625 s | 44005.012662 s |
(w)vmstat_update | 0002 | 0.084 ms | 2 | 0.043 ms | 44004.817702 s | 44004.817745 s |
(w)vmstat_update | 0006 | 0.067 ms | 2 | 0.041 ms | 43997.987214 s | 43997.987254 s |
(w)neigh_periodic_work | 0004 | 0.039 ms | 1 | 0.039 ms | 43999.929935 s | 43999.929974 s |
(w)vmstat_update | 0007 | 0.037 ms | 1 | 0.037 ms | 43997.988969 s | 43997.989006 s |
(w)neigh_managed_work | 0001 | 0.036 ms | 1 | 0.036 ms | 43997.665813 s | 43997.665849 s |
(w)neigh_managed_work | 0004 | 0.036 ms | 1 | 0.036 ms | 44002.953507 s | 44002.953543 s |
(w)vmstat_update | 0004 | 0.027 ms | 1 | 0.027 ms | 43997.913973 s | 43997.914000 s |
--------------------------------------------------------------------------------------------------------------------------------
Total count : 500
Total runtime (msec) : 1945.085 (0.192% load average)
Total time span (msec) : 10155.026
--------------------------------------------------------------------------------------------------------------------------------
# perf kwork -k workqueue rep -n vmstat_update
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
(w)vmstat_update | 0005 | 0.989 ms | 2 | 0.953 ms | 43997.986991 s | 43997.987944 s |
(w)vmstat_update | 0003 | 0.306 ms | 6 | 0.077 ms | 44004.689543 s | 44004.689620 s |
(w)vmstat_update | 0000 | 0.196 ms | 5 | 0.049 ms | 44005.713732 s | 44005.713781 s |
(w)vmstat_update | 0001 | 0.162 ms | 2 | 0.130 ms | 44000.192034 s | 44000.192164 s |
(w)vmstat_update | 0002 | 0.084 ms | 2 | 0.043 ms | 44004.817702 s | 44004.817745 s |
(w)vmstat_update | 0006 | 0.067 ms | 2 | 0.041 ms | 43997.987214 s | 43997.987254 s |
(w)vmstat_update | 0007 | 0.037 ms | 1 | 0.037 ms | 43997.988969 s | 43997.989006 s |
(w)vmstat_update | 0004 | 0.027 ms | 1 | 0.027 ms | 43997.913973 s | 43997.914000 s |
--------------------------------------------------------------------------------------------------------------------------------
Committer testing:
# perf kwork -k workqueue rep -C 1 | head -20
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
(w)commit_work | 0001 | 25.896 ms | 2 | 13.200 ms | 26522.906700 s | 26522.919900 s |
(w)commit_work | 0001 | 13.316 ms | 1 | 13.316 ms | 26522.573246 s | 26522.586562 s |
(w)commit_work | 0001 | 13.177 ms | 1 | 13.177 ms | 26522.673406 s | 26522.686583 s |
(w)commit_work | 0001 | 12.630 ms | 1 | 12.630 ms | 26522.123921 s | 26522.136551 s |
(w)btrfs_work_helper | 0001 | 3.544 ms | 1 | 3.544 ms | 26529.131296 s | 26529.134840 s |
(w)btrfs_work_helper | 0001 | 3.330 ms | 1 | 3.330 ms | 26529.137698 s | 26529.141028 s |
(w)btrfs_work_helper | 0001 | 2.855 ms | 1 | 2.855 ms | 26529.134842 s | 26529.137697 s |
(w)btrfs_work_helper | 0001 | 2.757 ms | 1 | 2.757 ms | 26529.124086 s | 26529.126843 s |
(w)btrfs_work_helper | 0001 | 2.182 ms | 1 | 2.182 ms | 26529.141030 s | 26529.143212 s |
(w)btrfs_work_helper | 0001 | 1.743 ms | 1 | 1.743 ms | 26520.415335 s | 26520.417078 s |
(w)btrfs_work_helper | 0001 | 1.499 ms | 1 | 1.499 ms | 26529.127774 s | 26529.129272 s |
(w)btrfs_work_helper | 0001 | 1.446 ms | 1 | 1.446 ms | 26529.129848 s | 26529.131294 s |
(w)btrfs_work_helper | 0001 | 1.373 ms | 1 | 1.373 ms | 26523.808270 s | 26523.809643 s |
(w)wb_workfn | 0001 | 1.165 ms | 2 | 0.763 ms | 26527.071056 s | 26527.071819 s |
(w)btrfs_work_helper | 0001 | 0.926 ms | 1 | 0.926 ms | 26529.126846 s | 26529.127771 s |
(w)btrfs_work_helper | 0001 | 0.571 ms | 1 | 0.571 ms | 26529.129275 s | 26529.129846 s |
(w)wb_workfn | 0001 | 0.525 ms | 1 | 0.525 ms | 26522.975151 s | 26522.975676 s |
#
Yang Jihong [Sat, 9 Jul 2022 01:50:24 +0000 (09:50 +0800)]
perf kwork: Add softirq report support
Implements the softirq kwork report function.
Test cases:
# perf kwork -k softirq rep
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
(s)TIMER:1 | 0003 | 181.387 ms | 2476 | 1.240 ms | 44004.787960 s | 44004.789201 s |
(s)RCU:9 | 0003 | 91.573 ms | 2193 | 0.650 ms | 44004.790258 s | 44004.790908 s |
(s)RCU:9 | 0001 | 78.960 ms | 1619 | 1.195 ms | 44001.496553 s | 44001.497749 s |
(s)SCHED:7 | 0003 | 55.962 ms | 1255 | 0.954 ms | 44004.812008 s | 44004.812962 s |
... <SNIP> ...
(s)RCU:9 | 0004 | 0.830 ms | 26 | 0.058 ms | 43997.666418 s | 43997.666476 s |
(s)TIMER:1 | 0001 | 0.471 ms | 4 | 0.158 ms | 44007.834694 s | 44007.834852 s |
(s)RCU:9 | 0006 | 0.220 ms | 7 | 0.048 ms | 44004.833764 s | 44004.833812 s |
(s)NET_RX:3 | 0002 | 0.164 ms | 5 | 0.049 ms | 44005.012418 s | 44005.012466 s |
(s)TIMER:1 | 0005 | 0.164 ms | 1 | 0.164 ms | 44007.820474 s | 44007.820638 s |
(s)TIMER:1 | 0006 | 0.087 ms | 1 | 0.087 ms | 44000.830807 s | 44000.830894 s |
(s)SCHED:7 | 0006 | 0.080 ms | 2 | 0.044 ms | 43997.826145 s | 43997.826189 s |
--------------------------------------------------------------------------------------------------------------------------------
#
# perf kwork -k softirq rep -S
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
(s)TIMER:1 | 0003 | 181.387 ms | 2476 | 1.240 ms | 44004.787960 s | 44004.789201 s |
(s)RCU:9 | 0003 | 91.573 ms | 2193 | 0.650 ms | 44004.790258 s | 44004.790908 s |
(s)RCU:9 | 0001 | 78.960 ms | 1619 | 1.195 ms | 44001.496553 s | 44001.497749 s |
(s)SCHED:7 | 0000 | 63.631 ms | 680 | 2.690 ms | 44006.721976 s | 44006.724666 s |
... <SNIP> ...
(s)SCHED:7 | 0003 | 55.962 ms | 1255 | 0.954 ms | 44004.812008 s | 44004.812962 s |
(s)RCU:9 | 0006 | 0.220 ms | 7 | 0.048 ms | 44004.833764 s | 44004.833812 s |
(s)NET_RX:3 | 0002 | 0.164 ms | 5 | 0.049 ms | 44005.012418 s | 44005.012466 s |
(s)TIMER:1 | 0005 | 0.164 ms | 1 | 0.164 ms | 44007.820474 s | 44007.820638 s |
(s)TIMER:1 | 0006 | 0.087 ms | 1 | 0.087 ms | 44000.830807 s | 44000.830894 s |
(s)SCHED:7 | 0006 | 0.080 ms | 2 | 0.044 ms | 43997.826145 s | 43997.826189 s |
--------------------------------------------------------------------------------------------------------------------------------
Total count : 12748
Total runtime (msec) : 661.433 (0.065% load average)
Total time span (msec) : 10176.441
--------------------------------------------------------------------------------------------------------------------------------
#
# perf kwork -k softirq rep -s count,max
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
(s)TIMER:1 | 0003 | 181.387 ms | 2476 | 1.240 ms | 44004.787960 s | 44004.789201 s |
(s)RCU:9 | 0003 | 91.573 ms | 2193 | 0.650 ms | 44004.790258 s | 44004.790908 s |
(s)SCHED:7 | 0002 | 50.039 ms | 1731 | 0.074 ms | 44005.009447 s | 44005.009521 s |
(s)RCU:9 | 0001 | 78.960 ms | 1619 | 1.195 ms | 44001.496553 s | 44001.497749 s |
(s)SCHED:7 | 0003 | 55.962 ms | 1255 | 0.954 ms | 44004.812008 s | 44004.812962 s |
... <SNIP> ...
(s)RCU:9 | 0002 | 35.241 ms | 932 | 0.407 ms | 44005.009541 s | 44005.009949 s |
(s)RCU:9 | 0000 | 45.710 ms | 702 | 1.144 ms | 44004.787023 s | 44004.788167 s |
(s)SCHED:7 | 0006 | 0.080 ms | 2 | 0.044 ms | 43997.826145 s | 43997.826189 s |
(s)TIMER:1 | 0005 | 0.164 ms | 1 | 0.164 ms | 44007.820474 s | 44007.820638 s |
(s)TIMER:1 | 0006 | 0.087 ms | 1 | 0.087 ms | 44000.830807 s | 44000.830894 s |
--------------------------------------------------------------------------------------------------------------------------------
Committer testing:
# perf kwork -k softirq report -C 2 -s count,max
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
(s)SCHED:7 | 0002 | 0.980 ms | 159 | 0.024 ms | 26035.571037 s | 26035.571061 s |
(s)RCU:9 | 0002 | 0.124 ms | 88 | 0.021 ms | 26035.177050 s | 26035.177071 s |
(s)TIMER:1 | 0002 | 0.122 ms | 56 | 0.007 ms | 26035.468045 s | 26035.468052 s |
--------------------------------------------------------------------------------------------------------------------------------
Yang Jihong [Sat, 9 Jul 2022 01:50:23 +0000 (09:50 +0800)]
perf kwork: Add irq report support
Implements the IRQ kwork report function.
Test cases:
# perf kwork record -- sleep 10
[ perf record: Woken up 0 times to write data ]
[ perf record: Captured and wrote 6.134 MB perf.data ]
# perf kwork report
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
virtio0-requests:25 | 0000 | 1167.501 ms | 18284 | 1.096 ms | 44004.464905 s | 44004.466001 s |
eth0:10 | 0002 | 0.185 ms | 5 | 0.058 ms | 44005.012222 s | 44005.012280 s |
--------------------------------------------------------------------------------------------------------------------------------
# perf kwork report -C 2
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
eth0:10 | 0002 | 0.185 ms | 5 | 0.058 ms | 44005.012222 s | 44005.012280 s |
--------------------------------------------------------------------------------------------------------------------------------
# perf kwork report -C 3
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------
# perf kwork report -i perf.data
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
virtio0-requests:25 | 0000 | 1167.501 ms | 18284 | 1.096 ms | 44004.464905 s | 44004.466001 s |
eth0:10 | 0002 | 0.185 ms | 5 | 0.058 ms | 44005.012222 s | 44005.012280 s |
--------------------------------------------------------------------------------------------------------------------------------
# perf kwork report -s max,freq
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
virtio0-requests:25 | 0000 | 1167.501 ms | 18284 | 1.096 ms | 44004.464905 s | 44004.466001 s |
eth0:10 | 0002 | 0.185 ms | 5 | 0.058 ms | 44005.012222 s | 44005.012280 s |
--------------------------------------------------------------------------------------------------------------------------------
# perf kwork report -S
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
virtio0-requests:25 | 0000 | 1167.501 ms | 18284 | 1.096 ms | 44004.464905 s | 44004.466001 s |
eth0:10 | 0002 | 0.185 ms | 5 | 0.058 ms | 44005.012222 s | 44005.012280 s |
--------------------------------------------------------------------------------------------------------------------------------
Total count : 18289
Total runtime (msec) : 1167.686 (0.115% load average)
Total time span (msec) : 10159.155
--------------------------------------------------------------------------------------------------------------------------------
# perf kwork report --time 44005,
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
virtio0-requests:25 | 0000 | 402.173 ms | 4695 | 0.981 ms | 44007.831992 s | 44007.832973 s |
eth0:10 | 0002 | 0.089 ms | 2 | 0.058 ms | 44005.012222 s | 44005.012280 s |
--------------------------------------------------------------------------------------------------------------------------------
Committer testing:
# perf kwork report
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
nvme0q5:130 | 0004 | 1.101 ms | 49 | 0.051 ms | 26035.056403 s | 26035.056455 s |
amdgpu:162 | 0002 | 0.176 ms | 9 | 0.046 ms | 26035.268020 s | 26035.268066 s |
nvme0q24:149 | 0023 | 0.161 ms | 55 | 0.009 ms | 26035.655280 s | 26035.655288 s |
nvme0q20:145 | 0019 | 0.090 ms | 33 | 0.014 ms | 26035.939018 s | 26035.939032 s |
nvme0q31:156 | 0030 | 0.075 ms | 21 | 0.010 ms | 26035.052237 s | 26035.052247 s |
nvme0q8:133 | 0007 | 0.062 ms | 12 | 0.021 ms | 26035.416840 s | 26035.416861 s |
nvme0q6:131 | 0005 | 0.054 ms | 22 | 0.010 ms | 26035.199919 s | 26035.199929 s |
nvme0q19:144 | 0018 | 0.052 ms | 14 | 0.010 ms | 26035.110615 s | 26035.110625 s |
nvme0q7:132 | 0006 | 0.049 ms | 13 | 0.007 ms | 26035.125180 s | 26035.125187 s |
nvme0q18:143 | 0017 | 0.033 ms | 14 | 0.007 ms | 26035.169698 s | 26035.169705 s |
nvme0q17:142 | 0016 | 0.013 ms | 1 | 0.013 ms | 26035.565147 s | 26035.565160 s |
enp5s0-rx-0:164 | 0006 | 0.004 ms | 4 | 0.002 ms | 26035.928882 s | 26035.928884 s |
enp5s0-tx-0:166 | 0008 | 0.003 ms | 3 | 0.002 ms | 26035.870923 s | 26035.870925 s |
--------------------------------------------------------------------------------------------------------------------------------
Yang Jihong [Sat, 9 Jul 2022 01:50:22 +0000 (09:50 +0800)]
perf kwork: Implement 'report' subcommand
Implements the framework of 'perf kwork report', which is used to report
time properties such as run time and frequency.
Test cases:
# perf kwork
Usage: perf kwork [<options>] {record|report}
-D, --dump-raw-trace dump raw trace in ASCII
-f, --force don't complain, do it
-k, --kwork <kwork> list of kwork to profile (irq, softirq, workqueue, etc)
-v, --verbose be more verbose (show symbol address, etc)
# perf kwork report -h
Usage: perf kwork report [<options>]
-C, --cpu <cpu> list of cpus to profile
-i, --input <file> input file name
-n, --name <name> event name to profile
-s, --sort <key[,key2...]>
sort by key(s): runtime, max, count
-S, --with-summary Show summary with statistics
--time <str> Time span for analysis (start,stop)
# perf kwork report
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------
# perf kwork report -S
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------
Total count : 0
Total runtime (msec) : 0.000 (0.000% load average)
Total time span (msec) : 0.000
--------------------------------------------------------------------------------------------------------------------------------
# perf kwork report -C 0,100
Requested CPU 100 too large. Consider raising MAX_NR_CPUS
Invalid cpu bitmap
-C, --cpu <cpu> list of cpus to profile
-i, --input <file> input file name
-n, --name <name> event name to profile
-s, --sort <key[,key2...]>
sort by key(s): runtime, max, count
-S, --with-summary Show summary with statistics
--time <str> Time span for analysis (start,stop)
# perf kwork report -i perf_no_exist.data
failed to open perf_no_exist.data: No such file or directory
# perf kwork report --time 00FFF,
Invalid time span
Since there are no supported report events yet, the output is empty.
Briefly describe the data structure (a hedged sketch follows the list):
1. "class" indicates the event type. For example, irq and softirq
correspond to different types.
2. "cluster" refers to a specific event belonging to a type. For
example, RCU and TIMER in softirq correspond to different clusters,
each of which contains three kinds of events: raise, entry, and exit.
3. "atom" holds the time of each sample plus the sample of the previous
phase. (For example, an exit atom pairs with its entry atom, which is
used for timehist.)
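A hedged C sketch of the class/cluster/atom relationship described above;
the struct and field names are illustrative, not the exact builtin-kwork.c
layout:

#include <stdint.h>

struct kwork_atom {                     /* one sample */
        uint64_t time;
        struct kwork_atom *prev;        /* previous-phase sample, e.g. the entry an exit pairs with */
};

struct kwork_work {                     /* "cluster": a concrete event such as RCU or TIMER */
        const char *name;
        struct kwork_atom *atoms[3];    /* raise, entry and exit atoms */
};

struct kwork_class {                    /* "class": an event type such as irq or softirq */
        const char *name;
        struct kwork_work *works;       /* clusters belonging to this class */
        int nr_works;
};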
Committer notes:
- Add {} for multiline if blocks.
- report_print_work() should either return that ret variable that
accounts how many bytes were printed or stop accounting and be void.
Do the former for now to avoid this:
builtin-kwork.c:534:6: error: variable 'ret' set but not used [-Werror,-Wunused-but-set-variable]
int ret = 0;
^
1 error generated.
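A self-contained, hedged illustration of the pattern chosen above:
accumulate the bytes returned by printf() and return the total, so 'ret'
is actually used. This is not the real report_print_work():

#include <stdio.h>

static int print_work_row(const char *name, int cpu, double runtime_ms)
{
        int ret = 0;

        ret += printf("%-30s | %04d |", name, cpu);
        ret += printf(" %10.3f ms |\n", runtime_ms);
        return ret;                     /* bytes printed, so 'ret' is used */
}

int main(void)
{
        int n = print_work_row("(w)vmstat_update", 1, 0.162);

        printf("row was %d bytes\n", n);
        return 0;
}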
Yang Jihong [Sat, 9 Jul 2022 01:50:20 +0000 (09:50 +0800)]
perf kwork: Add workqueue kwork record support
Record the workqueue events workqueue:workqueue_activate_work,
workqueue:workqueue_execute_start & workqueue:workqueue_execute_end.
Test cases:
Record all events:
# perf kwork record -o perf_kwork.date -- sleep 1
[ perf record: Woken up 0 times to write data ]
[ perf record: Captured and wrote 0.857 MB perf_kwork.date ]
#
# perf evlist -i perf_kwork.date
irq:irq_handler_entry
irq:irq_handler_exit
irq:softirq_raise
irq:softirq_entry
irq:softirq_exit
workqueue:workqueue_activate_work
workqueue:workqueue_execute_start
workqueue:workqueue_execute_end
dummy:HG
# Tip: use 'perf evlist --trace-fields' to show fields for tracepoint events
Record workqueue events:
# perf kwork -k workqueue record -o perf_kwork.date -- sleep 1
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.081 MB perf_kwork.date ]
#
# perf evlist -i perf_kwork.date
workqueue:workqueue_activate_work
workqueue:workqueue_execute_start
workqueue:workqueue_execute_end
dummy:HG
# Tip: use 'perf evlist --trace-fields' to show fields for tracepoint events
Yang Jihong [Sat, 9 Jul 2022 01:50:19 +0000 (09:50 +0800)]
perf kwork: Add softirq kwork record support
Record softirq events irq:softirq_raise, irq:softirq_entry &
irq:softirq_exit.
Test cases:
Record all events:
# perf kwork record -o perf_kwork.date -- sleep 1
[ perf record: Woken up 0 times to write data ]
[ perf record: Captured and wrote 0.897 MB perf_kwork.date ]
#
# perf evlist -i perf_kwork.date
irq:irq_handler_entry
irq:irq_handler_exit
irq:softirq_raise
irq:softirq_entry
irq:softirq_exit
dummy:HG
# Tip: use 'perf evlist --trace-fields' to show fields for tracepoint events
Record softirq events:
# perf kwork -k softirq record -o perf_kwork.date -- sleep 1
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.141 MB perf_kwork.date ]
#
# perf evlist -i perf_kwork.date
irq:softirq_raise
irq:softirq_entry
irq:softirq_exit
dummy:HG
# Tip: use 'perf evlist --trace-fields' to show fields for tracepoint events
Yang Jihong [Sat, 9 Jul 2022 01:50:18 +0000 (09:50 +0800)]
perf kwork: Add irq kwork record support
Record the interrupt events irq:irq_handler_entry & irq:irq_handler_exit.
Test cases:
# perf kwork record -o perf_kwork.date -- sleep 1
[ perf record: Woken up 0 times to write data ]
[ perf record: Captured and wrote 0.556 MB perf_kwork.date ]
#
# perf evlist -i perf_kwork.date
irq:irq_handler_entry
irq:irq_handler_exit
dummy:HG
# Tip: use 'perf evlist --trace-fields' to show fields for tracepoint events
#
Yang Jihong [Sat, 9 Jul 2022 01:50:17 +0000 (09:50 +0800)]
perf kwork: New tool to trace time properties of kernel work (such as softirq and workqueue)
The 'perf kwork' tool is used to trace time properties of kernel work
(such as irq, softirq, and workqueue), including runtime, latency, and
timehist, using the infrastructure in the perf tools to allow tracing
extra targets.
This first commit reuses the 'perf record' framework code to implement
a simple record function; specific kwork types are not supported yet.
The most commonly used perf commands are:
<SNIP>
iostat Show I/O performance metrics
kallsyms Searches running kernel for symbols
kmem Tool to trace/measure kernel memory properties
kvm Tool to trace/measure kvm guest os
kwork Tool to trace/measure kernel work properties (latencies)
list List all symbolic event types
lock Analyze lock events
mem Profile memory accesses
record Run a command and record its profile into perf.data
<SNIP>
See 'perf help COMMAND' for more information on a specific command.
# perf kwork
Usage: perf kwork [<options>] {record}
-D, --dump-raw-trace dump raw trace in ASCII
-f, --force don't complain, do it
-k, --kwork <kwork> list of kwork to profile
-v, --verbose be more verbose (show symbol address, etc)
# perf kwork record -- sleep 1
[ perf record: Woken up 0 times to write data ]
[ perf record: Captured and wrote 1.787 MB perf.data ]
Namhyung Kim [Mon, 25 Jul 2022 18:31:24 +0000 (11:31 -0700)]
perf lock: Support -t option for 'contention' subcommand
Like 'perf lock report', it can report the lock contention stats for each task.
$ perf lock contention -t
contended total wait max wait avg wait pid comm
5 945.20 us 902.08 us 189.04 us 316167 EventManager_De
33 98.17 us 6.78 us 2.97 us 766063 kworker/0:1-get
7 92.47 us 61.26 us 13.21 us 316170 EventManager_De
14 76.31 us 12.87 us 5.45 us 12949 timedcall
24 76.15 us 12.27 us 3.17 us 767992 sched-pipe
15 75.62 us 11.93 us 5.04 us 15127 switchto-defaul
24 71.84 us 5.59 us 2.99 us 629168 kworker/u513:2-
17 67.41 us 7.94 us 3.96 us 13504 coroner-
1 59.56 us 59.56 us 59.56 us 316165 EventManager_De
14 56.21 us 6.89 us 4.01 us 0 swapper
Namhyung Kim [Mon, 25 Jul 2022 18:31:23 +0000 (11:31 -0700)]
perf lock: Add -k and -F options to 'contention' subcommand
Like 'perf lock report', add -k/--key and -F/--field options to control
output formatting and sorting. Note that the defaults differ slightly,
because some fields are not available and to make better use of the
screen space.
Namhyung Kim [Mon, 25 Jul 2022 18:31:22 +0000 (11:31 -0700)]
perf lock: Add 'contention' subcommand
'perf lock contention' processes the lock contention events and
displays the result like 'perf lock report'. Right now there is not
much difference between the two, but lock-contention-specific features
will come soon.
$ perf lock contention
contended total wait max wait avg wait type caller
238 1.41 ms 29.20 us 5.94 us spinlock update_blocked_averages+0x4c
1 902.08 us 902.08 us 902.08 us rwsem:R do_user_addr_fault+0x1dd
81 330.30 us 17.24 us 4.08 us spinlock _nohz_idle_balance+0x172
2 89.54 us 61.26 us 44.77 us spinlock do_anonymous_page+0x16d
24 78.36 us 12.27 us 3.27 us mutex pipe_read+0x56
2 71.58 us 59.56 us 35.79 us spinlock __handle_mm_fault+0x6aa
6 25.68 us 6.89 us 4.28 us spinlock do_idle+0x28d
1 18.46 us 18.46 us 18.46 us rtmutex exec_fw_cmd+0x21b
3 15.25 us 6.26 us 5.08 us spinlock tick_do_update_jiffies64+0x2c
Namhyung Kim [Mon, 25 Jul 2022 18:31:21 +0000 (11:31 -0700)]
perf lock: Add lock aggregation enum
Introduce the aggr_mode variable to prepare for a later code change.
The default is LOCK_AGGR_ADDR, which aggregates the results per lock
instance.
When the -t/--threads option is given, it is set to LOCK_AGGR_TASK.
LOCK_AGGR_CALLER is for the contention analysis and aggregates the
stats by comparing callstacks.
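A hedged C sketch of the aggregation modes described above; the enumerator
names follow the description, everything else is illustrative:

enum lock_aggr_mode {
        LOCK_AGGR_ADDR,         /* default: aggregate per lock instance (address) */
        LOCK_AGGR_TASK,         /* -t/--threads: aggregate per task */
        LOCK_AGGR_CALLER,       /* contention analysis: aggregate by callstack */
};

static enum lock_aggr_mode aggr_mode = LOCK_AGGR_ADDR;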
Kan Liang [Mon, 18 Jul 2022 16:43:10 +0000 (09:43 -0700)]
perf tsc: Add arch TSC frequency information
The TSC frequency information is required for event metrics that use the
literal system_tsc_freq. On newer Intel platforms, the TSC frequency
can be retrieved from CPUID leaf 0x15. If the TSC frequency information
isn't present there, the /proc/cpuinfo approach is used.
Refactor cpuid() for this use. Note that the previous stack
pushing/popping approach was broken on x86-64, which has a stack red
zone that would be clobbered.
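A hedged, standalone sketch of reading the TSC frequency from CPUID leaf
0x15, using register constraints rather than manual push/pop (which is what
clobbered the x86-64 red zone). This is illustrative, not the exact perf
cpuid() refactor:

#include <stdio.h>

static void cpuid(unsigned int op, unsigned int op2, unsigned int *a,
                  unsigned int *b, unsigned int *c, unsigned int *d)
{
        /* Let the compiler manage %rbx via the "=b" constraint. */
        __asm__ __volatile__("cpuid"
                             : "=a"(*a), "=b"(*b), "=c"(*c), "=d"(*d)
                             : "a"(op), "c"(op2));
}

int main(void)
{
        unsigned int eax, ebx, ecx, edx;

        cpuid(0x15, 0, &eax, &ebx, &ecx, &edx);
        /* EBX/EAX is the TSC to crystal-clock ratio, ECX the crystal Hz. */
        if (eax && ebx && ecx)
                printf("TSC frequency: %llu Hz\n",
                       (unsigned long long)ecx * ebx / eax);
        else
                printf("CPUID 0x15 not usable, falling back to /proc/cpuinfo\n");
        return 0;
}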
Committer testing:
Before:
$ perf record sleep 0.0001
[ perf record: Woken up 1 times to write data ]
$ perf report --header-only |& grep cpuid
# cpuid : AuthenticAMD,25,33,0
$
After the patch:
$ perf record sleep 0.0001
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.002 MB perf.data (8 samples) ]
$ perf report --header-only |& grep cpuid
# cpuid : AuthenticAMD,25,33,0
$
watchqueue: make sure to serialize 'wqueue->defunct' properly
When the pipe is closed, we mark the associated watchqueue defunct by
calling watch_queue_clear(). However, while that is protected by the
watchqueue lock, new watchqueue entries aren't actually added under that
lock at all: they use the pipe->rd_wait.lock instead, and looking up
that pipe happens without any locking.
The watchqueue code uses the RCU read-side section to make sure that the
wqueue entry itself hasn't disappeared, but that does not protect the
pipe_info in any way.
So make sure to actually hold the wqueue lock when posting watch events,
properly serializing against the pipe being torn down.
Eric Snowberg [Wed, 20 Jul 2022 16:40:27 +0000 (12:40 -0400)]
lockdown: Fix kexec lockdown bypass with ima policy
The lockdown LSM is primarily used in conjunction with UEFI Secure Boot.
This LSM may also be used on machines without UEFI. It can also be
enabled when UEFI Secure Boot is disabled. One of lockdown's features
is to prevent kexec from loading untrusted kernels. Lockdown can be
enabled through a bootparam or after the kernel has booted through
securityfs.
If IMA appraisal is used with the "ima_appraise=log" boot param,
lockdown can be defeated with kexec on any machine when Secure Boot is
disabled or unavailable. IMA prevents setting "ima_appraise=log" from
the boot param when Secure Boot is enabled, but this does not cover
cases where lockdown is used without Secure Boot.
To defeat lockdown, boot without Secure Boot and add ima_appraise=log to
the kernel command line; then:
perf scripting perl: Ignore some warnings to keep building with perl headers
On gcc 12 we started seeing this:
In file included from /usr/lib/perl5/5.36.0/x86_64-linux-thread-multi/CORE/perl.h:2999,
from util/scripting-engines/trace-event-perl.c:35:
/usr/lib/perl5/5.36.0/x86_64-linux-thread-multi/CORE/inline.h: In function 'Perl_is_utf8_valid_partial_char_flags':
/usr/lib/perl5/5.36.0/x86_64-linux-thread-multi/CORE/handy.h:125:23: error: cast from function call of type 'STRLEN' {aka 'long unsigned int'} to non-matching type '_Bool' [-Werror=bad-function-cast]
125 | #define cBOOL(cbool) ((bool) (cbool))
| ^
/usr/lib/perl5/5.36.0/x86_64-linux-thread-multi/CORE/inline.h:2363:12: note: in expansion of macro 'cBOOL'
2363 | return cBOOL(is_utf8_char_helper_(s0, e, flags));
| ^~~~~
In file included from /usr/lib/perl5/5.36.0/x86_64-linux-thread-multi/CORE/perl.h:7242:
/usr/lib/perl5/5.36.0/x86_64-linux-thread-multi/CORE/inline.h: In function 'Perl_cop_file_avn':
/usr/lib/perl5/5.36.0/x86_64-linux-thread-multi/CORE/inline.h:3489:5: error: ISO C90 forbids mixed declarations and code [-Werror=declaration-after-statement]
3489 | const char *file = CopFILE(cop);
| ^~~~~
In file included from /usr/lib/perl5/5.36.0/x86_64-linux-thread-multi/CORE/perl.h:7243:
/usr/lib/perl5/5.36.0/x86_64-linux-thread-multi/CORE/sv_inline.h: In function 'Perl_newSV_type':
/usr/lib/perl5/5.36.0/x86_64-linux-thread-multi/CORE/sv_inline.h:376:5: error: enumeration value 'SVt_LAST' not handled in switch [-Werror=switch-enum]
376 | switch (type) {
| ^~~~~~
So disable those warnings to keep building with perl devel headers.
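As one hedged illustration (the perf build may instead pass -Wno-* flags
for this object in its Makefile), the diagnostics named above can be
silenced locally around the perl header inclusion with GCC diagnostic
pragmas:

#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wbad-function-cast"
#pragma GCC diagnostic ignored "-Wdeclaration-after-statement"
#pragma GCC diagnostic ignored "-Wswitch-enum"
/* The perl headers (EXTERN.h, perl.h) that trigger these diagnostics
 * would be included here. */
#pragma GCC diagnostic pop

int main(void)
{
        return 0;
}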
Noticed, among other distros, on opensuse tumbleweed:
Ian Rogers [Wed, 15 Jun 2022 01:42:06 +0000 (18:42 -0700)]
perf python: Avoid deprecation warning on distutils
Fix the following DeprecationWarning:
tools/perf/util/setup.py:31: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
Note: the setuptools module may need installing, for example:
$ sudo apt install python-setuptools
Reviewer comments:
James said:
Tested it with python 2.7 and 3.8 by running "make install-python_ext PYTHON=..."
Committer notes:
Tested with:
$ make -k BUILD_BPF_SKEL=1 PYTHON=python3 O=/tmp/build/perf -C tools/perf install-bin ; perf test python
$ make -k BUILD_BPF_SKEL=1 O=/tmp/build/perf -C tools/perf install-bin ; perf test python
Adrian Hunter [Mon, 11 Jul 2022 09:32:17 +0000 (12:32 +0300)]
perf intel-pt: Use guest pid/tid etc in guest samples
When decoding with guest sideband information, for VMX non-root (NR)
i.e. guest events, replace the host (hypervisor) pid/tid with guest values,
and provide also the new machine_pid and vcpu values.
Adrian Hunter [Mon, 11 Jul 2022 09:32:16 +0000 (12:32 +0300)]
perf intel-pt: Add machine_pid and vcpu to auxtrace_error
When decoding with guest sideband information, for VMX non-root (NR)
i.e. guest errors, replace the host (hypervisor) pid/tid with guest values,
and provide also the new machine_pid and vcpu values.
Adrian Hunter [Mon, 11 Jul 2022 09:32:14 +0000 (12:32 +0300)]
perf intel-pt: Disable sync switch with guest sideband
The sync_switch facility attempts to better synchronize context switches
with the Intel PT trace, however it is not designed for guest machine
context switches, so disable it when guest sideband is detected.
If a kernel mmap event was recorded inside a guest and injected into a host
perf.data file, then it will match a host mmap_name, not a guest mmap_name
(see machine__set_mmap_name()). So try matching a host mmap_name in that
case.
Adrian Hunter [Mon, 11 Jul 2022 09:32:08 +0000 (12:32 +0300)]
perf machine: Use realloc_array_as_needed() in machine__set_current_tid()
Prepare machine__set_current_tid() for use with guest machines that do
not currently have a machine->env->nr_cpus_avail value by making use of
realloc_array_as_needed().
Adrian Hunter [Mon, 11 Jul 2022 09:32:07 +0000 (12:32 +0300)]
perf inject: Add support for injecting guest sideband events
Inject events from a perf.data file recorded in a virtual machine into
a perf.data file recorded on the host at the same time.
Only sideband events (e.g. mmap, comm, fork, exit, etc.) and build IDs are
injected. Additionally, the guest kcore_dir is copied as kcore_dir__
followed by the machine PID.
This is non-trivial because:
o It is not possible to process 2 sessions simultaneously so instead
events are first written to a temporary file.
o To avoid conflict, guest sample IDs are replaced with new unused sample
IDs.
o Guest event's CPU is changed to be the host CPU because it is more
useful for reporting and analysis.
o Sample ID is mapped to machine PID which is recorded with VCPU in the
id index. This is important to allow guest events to be related to the
guest machine and VCPU.
o Timestamps must be converted.
o Events are inserted to obey finished-round ordering.
The anticipated use-case is:
- start recording sideband events in a guest machine
- start recording an AUX area trace on the host which can trace also the
guest (e.g. Intel PT)
- run test case on the guest
- stop recording on the host
- stop recording on the guest
- copy the guest perf.data file to the host
- inject the guest perf.data file sideband events into the host perf.data
file using perf inject
- the resulting perf.data file can now be used
Subsequent patches provide Intel PT support for this.
Adrian Hunter [Mon, 11 Jul 2022 09:32:05 +0000 (12:32 +0300)]
perf tools: Automatically use guest kcore_dir if present
When registering a guest machine using machine_pid from the id index,
check perf.data for a matching kcore_dir subdirectory and set the
kallsyms file name accordingly. If set, use it to find the machine's
kernel symbols and object code (from kcore).
Adrian Hunter [Mon, 11 Jul 2022 09:32:04 +0000 (12:32 +0300)]
perf tools: Make has_kcore_dir() work also for guest kcore_dir
Copies of /proc/kallsyms, /proc/modules and an extract of /proc/kcore can
be stored in the perf.data output directory under the subdirectory named
kcore_dir. Guest machines also have their files under subdirectories whose
names begin with kcore_dir__ followed by the machine PID. Make has_kcore_dir()
also return true if there is a guest machine kcore_dir.
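For illustration, with one guest machine whose PID is 1234 (the PID value here
is made up), the output directory described above would hold both sets of
files:
    kcore_dir/kallsyms        kcore_dir__1234/kallsyms
    kcore_dir/modules         kcore_dir__1234/modules
    kcore_dir/kcore           kcore_dir__1234/kcore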
Adrian Hunter [Mon, 11 Jul 2022 09:32:03 +0000 (12:32 +0300)]
perf tools: Remove also guest kcore_dir with host kcore_dir
Copies of /proc/kallsyms, /proc/modules and an extract of /proc/kcore can
be stored in the perf.data output directory under the subdirectory named
kcore_dir. Guest machines also have their files under subdirectories whose
names begin with kcore_dir__ followed by the machine PID. Remove these too
when removing kcore_dir.
Adrian Hunter [Mon, 11 Jul 2022 09:31:59 +0000 (12:31 +0300)]
perf dlfilter: Add machine_pid and vcpu
Add machine_pid and vcpu to struct perf_dlfilter_sample. The 'size' can be
used to determine whether the values are present; in any case, machine_pid is
zero if unused, and vcpu should be ignored if machine_pid is zero.
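A hedged sketch of how a dlfilter might consume the new members follows; the
offsetof-based presence check and the perf_dlfilter_fns boilerplate reflect my
reading of the dlfilter API and should be treated as assumptions rather than
the definitive usage:
    #include <stddef.h>
    #include <perf/perf_dlfilter.h>

    /* perf populates this with helper callbacks before calling the filter */
    struct perf_dlfilter_fns perf_dlfilter_fns;

    int filter_event(void *data, const struct perf_dlfilter_sample *sample, void *ctx)
    {
            int vcpu = -1;

            /* 'size' tells us whether this perf version provides the new members */
            if (sample->size > offsetof(struct perf_dlfilter_sample, machine_pid) &&
                sample->machine_pid)            /* zero means "not a guest machine" */
                    vcpu = sample->vcpu;

            (void)data; (void)ctx; (void)vcpu;  /* e.g. act on guest samples here */
            return 0;                           /* keep the sample */
    }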
Adrian Hunter [Mon, 11 Jul 2022 09:31:56 +0000 (12:31 +0300)]
perf tools: Add machine_pid and vcpu to perf_sample
When parsing a sample with a sample ID, copy machine_pid and vcpu from
perf_sample_id to perf_sample.
Note, machine_pid will be zero when unused, so only a non-zero value
represents a guest machine. vcpu should be ignored if machine_pid is zero.
Note also, machine_pid is used with events that have come from injecting a
guest perf.data file, however guest events recorded on the host (i.e. using
perf kvm) have the (QEMU) hypervisor process pid to identify them - refer to
machines__find_for_cpumode().
Adrian Hunter [Mon, 11 Jul 2022 09:31:55 +0000 (12:31 +0300)]
perf tools: Add guest_cpu to hypervisor threads
It is possible to know which guest machine was running at a point in time
based on the PID of the currently running host thread. That is, perf
identifies guest machines by the PID of the hypervisor.
So that the guest CPU can be determined, put guest_cpu on the hypervisor
(QEMU) thread for that VCPU.
This is done when processing the id_index which provides the necessary
information.
Adrian Hunter [Mon, 11 Jul 2022 09:31:54 +0000 (12:31 +0300)]
perf session: Create guest machines from id_index
Now that id_index has machine_pid, use it to create guest machines.
Create the guest machines with an idle thread because guest events
for "swapper" will be possible.
Adrian Hunter [Mon, 11 Jul 2022 09:31:53 +0000 (12:31 +0300)]
perf tools: Add machine_pid and vcpu to id_index
When injecting events from a guest perf.data file, the events will have
separate sample ID numbers. These ID numbers can then be used to determine
which machine an event belongs to. To facilitate that, add machine_pid and
vcpu to id_index records. For backward compatibility, these are added at
the end of the record, and the length of the record is used to determine
if they are present or not.
Note, this is needed because the events from a guest perf.data file contain
the pid/tid of the process running at that time inside the VM, not the
pid/tid of the (QEMU) hypervisor thread. So a way is needed to relate
guest events back to the guest machine and VCPU, and using sample ID
numbers for that is relatively simple and convenient.
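The backward-compatibility rule can be illustrated with a small sketch; the
struct names and layout below are hypothetical stand-ins, not the actual perf
id_index definitions:
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct id_entry_v1 {                /* what older tools emit */
            uint64_t id, idx, cpu, tid;
    };

    struct id_entry_v2 {                /* new members appended at the end */
            struct id_entry_v1 base;
            uint64_t machine_pid;
            uint64_t vcpu;
    };

    /* Presence of the new members is inferred purely from the record length. */
    static bool entry_has_machine_pid(size_t bytes_per_entry)
    {
            return bytes_per_entry >= sizeof(struct id_entry_v2);
    }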
Adrian Hunter [Mon, 11 Jul 2022 09:31:52 +0000 (12:31 +0300)]
perf buildid-cache: Do not require purge files to also be in the file system
realname() returns NULL if the file is not in the file system, but we can
still remove it from the build ID cache in that case, so continue and
attempt the purge with the name provided.
Adrian Hunter [Mon, 11 Jul 2022 09:31:51 +0000 (12:31 +0300)]
perf buildid-cache: Add guestmount'd files to the build ID cache
When the guestmount option is used, a guest machine's file system mount
point is recorded in machine->root_dir.
perf already iterates guest machines when adding files to the build ID
cache, but does not take machine->root_dir into account.
Use machine->root_dir to find files for guest build IDs, and add them to
the build ID cache using the "proper" name, i.e. relative to the guest root
directory, not the host root directory.
Add perf_event__synthesize_id_sample() to enable the synthesis of
ID samples.
This is needed by perf inject. When injecting events from a guest perf.data
file, there is a possibility that the sample ID numbers conflict. In that
case, perf_event__synthesize_id_sample() can be used to re-write the ID
sample.
Adrian Hunter [Mon, 11 Jul 2022 09:31:48 +0000 (12:31 +0300)]
perf tools: Factor out evsel__id_hdr_size()
Factor out evsel__id_hdr_size() so it can be reused.
This is needed by perf inject. When injecting events from a guest perf.data
file, there is a possibility that the sample ID numbers conflict. To
re-write an ID sample, the old one needs to be removed first, which means
determining how big it is with evsel__id_hdr_size() and then subtracting
that from the event size.
Allow callers to get the ordered_events last flush timestamp.
This is needed in perf inject to obey finished-round ordering when
injecting additional events (e.g. from a guest perf.data file) with
timestamps. Any additional events that have timestamps before the last
flush time must be injected before the corresponding FINISHED_ROUND event.
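In other words (the names below are illustrative, not perf internals):
    #include <stdbool.h>
    #include <stdint.h>

    /* An injected event older than the last flush must be emitted before the
     * next FINISHED_ROUND record, otherwise finished-round ordering breaks. */
    static bool must_emit_before_round(uint64_t event_time, uint64_t last_flush_time)
    {
            return event_time < last_flush_time;
    }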
Synthesized MMAP events have zero ino_generation, so do not compare
them to DSOs with a real ino_generation, otherwise we end up with a DSO
without a build id.
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma
Pull rdma fixes from Jason Gunthorpe:
"Two bug fixes for irdma:
- x722 does not support 1GB pages, trying to configure them will
corrupt the dma mapping
- Fix a sleep while holding a spinlock"
* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma:
RDMA/irdma: Fix sleep from invalid context BUG
RDMA/irdma: Do not advertise 1GB page size for x722
Blake Jones [Wed, 29 Jun 2022 21:36:32 +0000 (14:36 -0700)]
perf buildid-list: Add a "-m" option to show kernel and modules build-ids
This new option displays all of the information needed to do external
BuildID-based symbolization of kernel stack traces, such as those collected
by bpf_get_stackid().
For each kernel module plus the main kernel, it displays the BuildID,
the start and end virtual addresses of that module's text range (rounded
out to page boundaries), and the pathname of the module.
When run as a non-privileged user, the actual addresses of the modules'
text ranges are not available, so the tool displays "0, <text length>" for
kernel modules and "0, 0xffffffffffffffff" for the kernel itself.
The u64 format specifier should be PRIx64 when printing hex numbers; fix this:
28 5.28 debian:experimental-x-mips : FAIL gcc version 11.2.0 (Debian 11.2.0-18)
builtin-buildid-list.c: In function 'buildid__map_cb':
builtin-buildid-list.c:32:24: error: format '%lx' expects argument of type 'long unsigned int', but argument 3 has type 'u64' {aka 'long long unsigned int'} [-Werror=format=]
32 | printf("%s %16lx %16lx", bid_buf, map->start, map->end);
| ~~~~^ ~~~~~~~~~~
| | |
| long unsigned int u64 {aka long long unsigned int}
| %16llx
builtin-buildid-list.c:32:30: error: format '%lx' expects argument of type 'long unsigned int', but argument 4 has type 'u64' {aka 'long long unsigned int'} [-Werror=format=]
32 | printf("%s %16lx %16lx", bid_buf, map->start, map->end);
| ~~~~^ ~~~~~~~~
| | |
| long unsigned int u64 {aka long long unsigned int}
| %16llx
cc1: all warnings being treated as errors
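The corrected call would look like this (a sketch of the fix implied by the
diagnostics above; it relies on <inttypes.h> for PRIx64):
    #include <inttypes.h>

    printf("%s %16" PRIx64 " %16" PRIx64, bid_buf, map->start, map->end);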
Merge tag 'perf-tools-fixes-for-v5.19-2022-07-17' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux
Pull perf tools fixes from Arnaldo Carvalho de Melo:
- Fix SIGSEGV when processing syscall args in perf.data files in 'perf
trace'
- Sync kvm, msr-index and cpufeatures headers with the kernel sources
- Fix 'convert perf time to TSC' 'perf test':
- No need to open events twice
- Fix finding correct event on hybrid systems
* tag 'perf-tools-fixes-for-v5.19-2022-07-17' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux:
perf trace: Fix SIGSEGV when processing syscall args
perf tests: Fix Convert perf time to TSC test for hybrid
perf tests: Stop Convert perf time to TSC test opening events twice
tools arch x86: Sync the msr-index.h copy with the kernel sources
tools headers cpufeatures: Sync with the kernel sources
tools headers UAPI: Sync linux/kvm.h with the kernel sources
Matthew Auld [Tue, 12 Jul 2022 17:40:50 +0000 (18:40 +0100)]
drm/i915/ttm: fix 32b build
Since segment_pages is no longer a compile-time constant, it looks like the
DIV_ROUND_UP(node->size, segment_pages) breaks the 32b build. Simplest
is just to use the ULL variant, but really we should not need more
than u32 for the page alignment (also we are limited by that due to the
sg->length type), so also make it all u32.
Merge tag 'perf_urgent_for_v5.19_rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull perf fix from Borislav Petkov:
- A single data race fix on the perf event cleanup path to avoid
endless loops due to insufficient locking
* tag 'perf_urgent_for_v5.19_rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf/core: Fix data race between perf_event_set_output() and perf_mmap_close()
Merge tag 'x86_urgent_for_v5.19_rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Borislav Petkov:
- Improve the check whether the kernel supports WP mappings so that it
can accommodate a XenPV guest due to how the latter is setting up the
PAT machinery
- Now that the retbleed nightmare is public, here's the first round of
fallout fixes:
* Fix a build failure on 32-bit due to missing include
* Remove an untraining point in espfix64 return path
* other small cleanups
* tag 'x86_urgent_for_v5.19_rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/bugs: Remove apostrophe typo
um: Add missing apply_returns()
x86/entry: Remove UNTRAIN_RET from native_irq_return_ldt
x86/bugs: Mark retbleed_strings static
x86/pat: Fix x86_has_pat_wp()
x86/asm/32: Fix ANNOTATE_UNRET_SAFE use on 32-bit
Merge tag 'input-for-v5.19-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input
Pull input fixes from Dmitry Torokhov:
- fix Goodix driver to properly behave on the Aya Neo Next
- some more sanity checks in usbtouchscreen driver
- a tweak in wm97xx driver in preparation for remove() to return void
- a clarification in input core regarding units of measurement for
resolution on touch events.
* tag 'input-for-v5.19-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input:
Input: document the units for resolution of size axes
Input: goodix - call acpi_device_fix_up_power() in some cases
Input: wm97xx - make .remove() obviously always return 0
Input: usbtouchscreen - add driver_info sanity check
perf trace: Fix SIGSEGV when processing syscall args
On powerpc, 'perf trace' is crashing with a SIGSEGV when trying to
process a perf.data file created with 'perf trace record -p':
#0 0x00000001225b8988 in syscall_arg__scnprintf_augmented_string <snip> at builtin-trace.c:1492
#1 syscall_arg__scnprintf_filename <snip> at builtin-trace.c:1492
#2 syscall_arg__scnprintf_filename <snip> at builtin-trace.c:1486
#3 0x00000001225bdd9c in syscall_arg_fmt__scnprintf_val <snip> at builtin-trace.c:1973
#4 syscall__scnprintf_args <snip> at builtin-trace.c:2041
#5 0x00000001225bff04 in trace__sys_enter <snip> at builtin-trace.c:2319
That points to the below code in tools/perf/builtin-trace.c:
/*
* If this is raw_syscalls.sys_enter, then it always comes with the 6 possible
* arguments, even if the syscall being handled, say "openat", uses only 4 arguments
* this breaks syscall__augmented_args() check for augmented args, as we calculate
* syscall->args_size using each syscalls:sys_enter_NAME tracefs format file,
* so when handling, say the openat syscall, we end up getting 6 args for the
* raw_syscalls:sys_enter event, when we expected just 4, we end up mistakenly
* thinking that the extra 2 u64 args are the augmented filename, so just check
* here and avoid using augmented syscalls when the evsel is the raw_syscalls one.
*/
if (evsel != trace->syscalls.events.sys_enter)
augmented_args = syscall__augmented_args(sc, sample, &augmented_args_size, trace->raw_augmented_syscalls_args_size);
As the comment points out, we should not be trying to augment the args
for raw_syscalls. However, when processing a perf.data file, we are not
initializing those properly. Fix that.
$ tools/perf/trace/beauty/tracepoints/x86_msr.sh > before
$ cp arch/x86/include/asm/msr-index.h tools/arch/x86/include/asm/msr-index.h
$ tools/perf/trace/beauty/tracepoints/x86_msr.sh > after
$ diff -u before after
$
Just silences this perf build warning:
Warning: Kernel ABI header at 'tools/arch/x86/include/asm/msr-index.h' differs from latest version at 'arch/x86/include/asm/msr-index.h'
diff -u tools/arch/x86/include/asm/msr-index.h arch/x86/include/asm/msr-index.h
CC /tmp/build/perf/bench/mem-memcpy-x86-64-asm.o
CC /tmp/build/perf/bench/mem-memset-x86-64-asm.o
And addresses these perf build warnings:
Warning: Kernel ABI header at 'tools/arch/x86/include/asm/cpufeatures.h' differs from latest version at 'arch/x86/include/asm/cpufeatures.h'
diff -u tools/arch/x86/include/asm/cpufeatures.h arch/x86/include/asm/cpufeatures.h
Warning: Kernel ABI header at 'tools/arch/x86/include/asm/disabled-features.h' differs from latest version at 'arch/x86/include/asm/disabled-features.h'
diff -u tools/arch/x86/include/asm/disabled-features.h arch/x86/include/asm/disabled-features.h
tools headers UAPI: Sync linux/kvm.h with the kernel sources
To pick the changes in:
1b870fa5573e260b ("kvm: stats: tell userspace which values are boolean")
That just rebuilds perf, as these patches don't add any new KVM ioctl to
be harvested for the 'perf trace' ioctl syscall argument
beautifiers.
This is by now also used by tools/testing/selftests/kvm/; a simple test
build succeeded.
This silences this perf build warning:
Warning: Kernel ABI header at 'tools/include/uapi/linux/kvm.h' differs from latest version at 'include/uapi/linux/kvm.h'
diff -u tools/include/uapi/linux/kvm.h include/uapi/linux/kvm.h
Merge tag 'for-5.19-rc7-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux
Pull btrfs reverts from David Sterba:
"Due to a recent report [1] we need to revert the radix tree to xarray
conversion patches.
There's a problem with sleeping under a spinlock, as xa_insert can
allocate memory under pressure. We use GFP_NOFS, so this is a real
problem that we unfortunately did not discover during review.
I'm sorry to do such a change at rc6 time, but the revert is IMO the
safer option; there are patches to use a mutex instead of the spinlocks,
but that would need more testing. The revert branch has been tested on
a few setups, all seem ok.
The conversion to xarray will be revisited in the future"
Link: https://lore.kernel.org/linux-btrfs/[email protected]/
* tag 'for-5.19-rc7-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux:
Revert "btrfs: turn delayed_nodes_tree into an XArray"
Revert "btrfs: turn name_cache radix tree into XArray in send_ctx"
Revert "btrfs: turn fs_info member buffer_radix into XArray"
Revert "btrfs: turn fs_roots_radix in btrfs_fs_info into an XArray"