Tom Rix [Sun, 20 Mar 2022 01:51:43 +0000 (18:51 -0700)]
livepatch: Reorder to use before freeing a pointer
Clang static analysis reports this issue:
livepatch-shadow-fix1.c:113:2: warning: Use of
memory after it is freed
pr_info("%s: dummy @ %p, prevented leak @ %p\n",
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The pointer is freed in the previous statement.
Reorder the pr_info to report before the free.
A similar issue exists in livepatch-shadow-fix2.c.
Note that it is a false positive: pr_info() just prints the address and
the freed memory is not accessed. Still, the static analyzer could not
easily know this.
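As a hedged sketch, the fix pattern looks like this (shape taken from
the samples; treat it as illustrative rather than the exact upstream
diff):

    /* Before: the analyzer sees the freed pointer used by pr_info(). */
    kfree(*shadow_leak);
    pr_info("%s: dummy @ %p, prevented leak @ %p\n",
            __func__, d, *shadow_leak);

    /* After: report first, then free; pr_info() only prints the value. */
    pr_info("%s: dummy @ %p, prevented leak @ %p\n",
            __func__, d, *shadow_leak);
    kfree(*shadow_leak);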
Chengming Zhou [Sat, 12 Mar 2022 15:22:20 +0000 (23:22 +0800)]
livepatch: Don't block removal of patches that are safe to unload
module_put() is not called for a patch with the "forced" flag. It
should block the removal of the livepatch module when the code might
still be in use after a forced transition.
klp_force_transition() currently sets the "forced" flag for all
patches on the list.
In fact, any patch can be safely unloaded when it has passed through
the consistency model in a KLP_UNPATCHED transition.
In other words, the "forced" flag must be set only for livepatches
that are being removed. In particular, set the "forced" flag:
+ only for klp_transition_patch when the transition to the KLP_UNPATCHED
state was forced.
+ for all replaced patches when the transition to the KLP_PATCHED state
was forced and the patch was replacing the existing patches.
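A minimal sketch of the resulting logic in klp_force_transition()
(assumed shape, not necessarily the exact upstream code):

    struct klp_patch *patch;

    /* Set the "forced" flag only for patches being removed. */
    if (klp_target_state == KLP_UNPATCHED)
        klp_transition_patch->forced = true;
    else if (klp_transition_patch->replace) {
        klp_for_each_patch(patch) {
            if (patch != klp_transition_patch)
                patch->forced = true;
        }
    }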
David Vernet [Wed, 16 Feb 2022 16:11:01 +0000 (08:11 -0800)]
livepatch: Skip livepatch tests if ftrace cannot be configured
livepatch has a set of selftests that are used to validate the behavior of
the livepatching subsystem. One of the testcases in the livepatch
testsuite is test-ftrace.sh, which among other things, validates that
livepatching gracefully fails when ftrace is disabled. In the event
that ftrace cannot be disabled using 'sysctl kernel.ftrace_enabled=0',
the test will fail later because it unexpectedly succeeds in loading
the test_klp_livepatch module.
While the livepatch selftests are careful to remove any of the livepatch
test modules between testcases to avoid this situation, ftrace may still
fail to be disabled if another trace is active on the system that was
enabled with FTRACE_OPS_FL_PERMANENT. For example, any active BPF programs
that use trampolines will cause this test to fail due to the trampoline
being implemented with register_ftrace_direct(). The following is an
example of such a trace:
tcp_drop (1) R I D tramp: ftrace_regs_caller+0x0/0x58
(call_direct_funcs+0x0/0x30)
direct-->bpf_trampoline_6442550536_0+0x0/0x1000
In order to make the test more resilient to system state that is out of its
control, this patch updates set_ftrace_enabled() to detect sysctl failures,
and skip the test run when appropriate.
Linus Torvalds [Sun, 16 Jan 2022 08:08:13 +0000 (10:08 +0200)]
Merge tag 'livepatching-for-5.17' of git://git.kernel.org/pub/scm/linux/kernel/git/livepatching/livepatching
Pull livepatching updates from Petr Mladek:
- Correctly handle kobjects when a livepatch init fails
- Avoid CPU hogging when searching for many livepatched symbols
- Add livepatch API page into documentation
* tag 'livepatching-for-5.17' of git://git.kernel.org/pub/scm/linux/kernel/git/livepatching/livepatching:
livepatch: Avoid CPU hogging with cond_resched
livepatch: Fix missing unlock on error in klp_enable_patch()
livepatch: Fix kobject refcount bug on klp_init_patch_early failure path
Documentation: livepatch: Add livepatch API page
Linus Torvalds [Sun, 16 Jan 2022 06:08:11 +0000 (08:08 +0200)]
Merge tag 'pci-v5.17-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI updates from Bjorn Helgaas:
"Enumeration:
- Use pci_find_vsec_capability() instead of open-coding it (Andy
Shevchenko)
- Convert pci_dev_present() stub from macro to static inline to avoid
'unused variable' errors (Hans de Goede)
- Convert sysfs slot attributes from default_attrs to default_groups
(Greg Kroah-Hartman)
- Use DWORD accesses for LTR, L1 SS to avoid BayHub OZ711LV2 erratum
(Rajat Jain)
- Remove unnecessary initialization of static variables (Longji Guo)
Resource management:
- Always write Intel I210 ROM BAR on update to work around device
defect (Bjorn Helgaas)
PCIe native device hotplug:
- Fix pciehp lockdep errors on Thunderbolt undock (Hans de Goede)
- Fix infinite loop in pciehp IRQ handler on power fault (Lukas
Wunner)
Power management:
- Convert amd64-agp, sis-agp, via-agp from legacy PCI power
management to generic power management (Vaibhav Gupta)
IOMMU:
- Add function 1 DMA alias quirk for Marvell 88SE9125 SATA controller
so it can work with an IOMMU (Yifeng Li)
Error handling:
- Add PCI_ERROR_RESPONSE and related definitions for signaling and
checking for transaction errors on PCI (Naveen Naidu)
- Fabricate PCI_ERROR_RESPONSE data (~0) in config read wrappers,
instead of in host controller drivers, when transactions fail on
PCI (Naveen Naidu)
- Use PCI_POSSIBLE_ERROR() to check for possible failure of config
reads (Naveen Naidu)
ASPM:
- Calculate link L0s and L1 exit latencies when needed instead of
caching them (Saheed O. Bolarinwa)
- Calculate device L0s and L1 acceptable exit latencies when needed
instead of caching them (Saheed O. Bolarinwa)
- Remove struct aspm_latency since it's no longer needed (Saheed O.
Bolarinwa)
APM X-Gene PCIe controller driver:
- Fix IB window setup, which was broken by the fact that IB resources
are now sorted in address order instead of DT dma-ranges order (Rob
Herring)
Apple PCIe controller driver:
- Enable clock gating to save power (Hector Martin)
- Fix REFCLK1 enable/poll logic (Hector Martin)
Broadcom STB PCIe controller driver:
- Declare bitmap correctly for use by bitmap interfaces (Christophe
JAILLET)
- Clean up computation of legacy and non-legacy MSI bitmasks (Florian
Fainelli)
- Update suspend/resume/remove error handling to warn about errors
and not fail the operation (Jim Quinlan)
- Correct the "pcie" and "msi" interrupt descriptions in DT binding
(Jim Quinlan)
- Add DT bindings for endpoint voltage regulators (Jim Quinlan)
- Split brcm_pcie_setup() into two functions (Jim Quinlan)
- Add mechanism for turning on voltage regulators for connected
devices (Jim Quinlan)
- Turn voltage regulators for connected devices on/off when bus is
added or removed (Jim Quinlan)
- When suspending, don't turn off voltage regulators for wakeup
devices (Jim Quinlan)
Freescale Layerscape PCIe controller driver:
- Use DWC common ops instead of layerscape-specific link-up functions
(Hou Zhiqiang)
Intel VMD host bridge driver:
- Honor platform ACPI _OSC feature negotiation for Root Ports below
VMD (Kai-Heng Feng)
- Add support for Raptor Lake SKUs (Karthik L Gopalakrishnan)
- Reset everything below VMD before enumerating to work around
failure to enumerate NVMe devices when guest OS reboots (Nirmal
Patel)
Bridge emulation (used by Marvell Aardvark and MVEBU):
- Make emulated ROM BAR read-only by default (Pali Rohár)
- Make some emulated legacy PCI bits read-only for PCIe devices (Pali
Rohár)
- Update reserved bits in emulated PCIe Capability (Pali Rohár)
- Allow drivers to emulate different PCIe Capability versions (Pali
Rohár)
- Set emulated Capabilities List bit for all PCIe devices, since they
must have at least a PCIe Capability (Pali Rohár)
Marvell Aardvark PCIe controller driver:
- Add bridge emulation definitions for PCIe DEVCAP2, DEVCTL2,
DEVSTA2, LNKCAP2, LNKCTL2, LNKSTA2, SLTCAP2, SLTCTL2, SLTSTA2 (Pali
Rohár)
- Add aardvark support for DEVCAP2, DEVCTL2, LNKCAP2 and LNKCTL2
registers (Pali Rohár)
- Clear all MSIs at setup to avoid spurious interrupts (Pali Rohár)
- Disable bus mastering when unbinding host controller driver (Pali
Rohár)
- Mask all interrupts when unbinding host controller driver (Pali
Rohár)
- Fix memory leak in host controller unbind (Pali Rohár)
- Assert PERST# when unbinding host controller driver (Pali Rohár)
- Disable link training when unbinding host controller driver (Pali
Rohár)
- Disable common PHY when unbinding host controller driver (Pali
Rohár)
- Fix resource type checking to check only IORESOURCE_MEM, not
IORESOURCE_MEM_64, which is a flavor of IORESOURCE_MEM (Pali Rohár)
Marvell MVEBU PCIe controller driver:
- Implement pci_remap_iospace() for ARM so mvebu can use
devm_pci_remap_iospace() instead of the previous ARM-specific
pci_ioremap_io() interface (Pali Rohár)
- Use the standard pci_host_probe() instead of the device-specific
mvebu_pci_host_probe() (Pali Rohár)
- Replace all uses of ARM-specific pci_ioremap_io() with the ARM
implementation of the standard pci_remap_iospace() interface and
remove pci_ioremap_io() (Pali Rohár)
- Skip initializing invalid Root Ports (Pali Rohár)
- Check for errors from pci_bridge_emul_init() (Pali Rohár)
- Ignore any bridges at non-zero function numbers (Pali Rohár)
- Return ~0 data for invalid config read size (Pali Rohár)
- Disallow mapping interrupts on emulated bridges (Pali Rohár)
- Clear Root Port Memory & I/O Space Enable and Bus Master Enable at
initialization (Pali Rohár)
- Make type bits in Root Port I/O Base register read-only (Pali
Rohár)
- Disable Root Port windows when base/limit set to invalid values
(Pali Rohár)
- Set controller to Root Complex mode (Pali Rohár)
- Set Root Port Class Code to PCI Bridge (Pali Rohár)
- Update emulated Root Port secondary bus numbers to better reflect
the actual topology (Pali Rohár)
- Add PCI_BRIDGE_CTL_BUS_RESET support to emulated Root Ports so
pci_reset_secondary_bus() can reset connected devices (Pali Rohár)
- Add PCI_EXP_DEVCTL Error Reporting Enable support to emulated Root
Ports (Pali Rohár)
- Add PCI_EXP_RTSTA PME Status bit support to emulated Root Ports
(Pali Rohár)
- Add DEVCAP2, DEVCTL2 and LNKCTL2 support to emulated Root Ports on
Armada XP and newer devices (Pali Rohár)
- Export mvebu-mbus.c symbols to allow pci-mvebu.c to be a module
(Pali Rohár)
- Add support for compiling as a module (Pali Rohár)
MediaTek PCIe controller driver:
- Assert PERST# for 100ms to allow power and clock to stabilize
(qizhong cheng)
MediaTek PCIe Gen3 controller driver:
- Disable Mediatek DVFSRC voltage request since lack of DVFSRC to
respond to the request causes failure to exit L1 PM Substate
(Jianjun Wang)
MediaTek MT7621 PCIe controller driver:
- Declare mt7621_pci_ops static (Sergio Paracuellos)
- Give pcibios_root_bridge_prepare() access to host bridge windows
(Sergio Paracuellos)
- Move MIPS I/O coherency unit setup from driver to
pcibios_root_bridge_prepare() (Sergio Paracuellos)
- Add missing MODULE_LICENSE() (Sergio Paracuellos)
- Allow COMPILE_TEST for all arches (Sergio Paracuellos)
Microsoft Hyper-V host bridge driver:
- Add hv-internal interfaces to encapsulate arch IRQ dependencies
(Sunil Muthuswamy)
- Add arm64 Hyper-V vPCI support (Sunil Muthuswamy)
Qualcomm PCIe controller driver:
- Undo PM setup in qcom_pcie_probe() error handling path (Christophe
JAILLET)
- Use __be16 type to store return value from cpu_to_be16()
(Manivannan Sadhasivam)
- Constify static dw_pcie_ep_ops (Rikard Falkeborn)
Renesas R-Car PCIe controller driver:
- Fix aarch32 abort handler so it doesn't check the wrong bus clock
before accessing the host controller (Marek Vasut)
TI Keystone PCIe controller driver:
- Add register offset for ti,syscon-pcie-id and ti,syscon-pcie-mode
DT properties (Kishon Vijay Abraham I)
MicroSemi Switchtec management driver:
- Add Gen4 automotive device IDs (Kelvin Cao)
- Declare state_names[] as static so it's not allocated and
initialized for every call (Kelvin Cao)
Host controller driver cleanups:
- Use of_device_get_match_data(), not of_match_device(), when we only
need the device data in altera, artpec6, cadence, designware-plat,
dra7xx, keystone, kirin (Fan Fei)
- Drop pointless of_device_get_match_data() cast in j721e (Bjorn
Helgaas)
- Drop redundant struct device * from j721e since struct cdns_pcie
already has one (Bjorn Helgaas)
- Rename driver structs to *_pcie in intel-gw, iproc, ls-gen4,
mediatek-gen3, microchip, mt7621, rcar-gen2, tegra194, uniphier,
xgene, xilinx, xilinx-cpm for consistency across drivers (Fan Fei)
- Fix invalid address space conversions in hisi, spear13xx (Bjorn
Helgaas)
Miscellaneous:
- Sort Intel Device IDs by value (Andy Shevchenko)
- Change Capability offsets to hex to match spec (Baruch Siach)
- Correct misspellings (Krzysztof Wilczyński)
- Terminate statement with semicolon in pci_endpoint_test.c (Ming
Wang)"
* tag 'pci-v5.17-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (151 commits)
PCI: mt7621: Allow COMPILE_TEST for all arches
PCI: mt7621: Add missing MODULE_LICENSE()
PCI: mt7621: Move MIPS setup to pcibios_root_bridge_prepare()
PCI: Let pcibios_root_bridge_prepare() access bridge->windows
PCI: mt7621: Declare mt7621_pci_ops static
PCI: brcmstb: Do not turn off WOL regulators on suspend
PCI: brcmstb: Add control of subdevice voltage regulators
PCI: brcmstb: Add mechanism to turn on subdev regulators
PCI: brcmstb: Split brcm_pcie_setup() into two funcs
dt-bindings: PCI: Add bindings for Brcmstb EP voltage regulators
dt-bindings: PCI: Correct brcmstb interrupts, interrupt-map.
PCI: brcmstb: Fix function return value handling
PCI: brcmstb: Do not use __GENMASK
PCI: brcmstb: Declare 'used' as bitmap, not unsigned long
PCI: hv: Add arm64 Hyper-V vPCI support
PCI: hv: Make the code arch neutral by adding arch specific interfaces
PCI: pciehp: Use down_read/write_nested(reset_lock) to fix lockdep errors
x86/PCI: Remove initialization of static variables to false
PCI: Use DWORD accesses for LTR, L1 SS to avoid erratum
misc: pci_endpoint_test: Terminate statement with semicolon
...
Linus Torvalds [Sun, 16 Jan 2022 05:54:11 +0000 (07:54 +0200)]
Merge tag 'exfat-for-5.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/linkinjeon/exfat
Pull exfat updates from Namjae Jeon:
- Fix ->i_blocks truncation issue that still exists elsewhere.
- Four cleanups & typo fixes.
- Move super block magic number to magic.h
- Fix missing REQ_SYNC in exfat_update_bhs().
* tag 'exfat-for-5.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/linkinjeon/exfat:
exfat: fix missing REQ_SYNC in exfat_update_bhs()
exfat: remove argument 'sector' from exfat_get_dentry()
exfat: move super block magic number to magic.h
exfat: fix i_blocks for files truncated over 4 GiB
exfat: reuse exfat_inode_info variable instead of calling EXFAT_I()
exfat: make exfat_find_location() static
exfat: fix typos in comments
exfat: simplify is_valid_cluster()
Linus Torvalds [Sun, 16 Jan 2022 05:42:58 +0000 (07:42 +0200)]
Merge tag 'nfsd-5.17' of git://git.kernel.org/pub/scm/linux/kernel/git/cel/linux
Pull nfsd updates from Chuck Lever:
"Bruce has announced he is leaving Red Hat at the end of the month and
is stepping back from his role as NFSD co-maintainer. As a result,
this includes a patch removing him from the MAINTAINERS file.
There is one patch in here that Jeff Layton was carrying in the locks
tree. Since he had only one for this cycle, he asked us to send it to
you via the nfsd tree.
There continue to be 0-day reports from Robert Morris @MIT. This time
we include a fix for a crash in the COPY_NOTIFY operation.
Highlights:
- Bruce steps down as NFSD maintainer
- Prepare for dynamic nfsd thread management
- More work on supporting re-exporting NFS mounts
- One fs/locks patch on behalf of Jeff Layton
Notable bug fixes:
- Fix zero-length NFSv3 WRITEs
- Fix directory cinfo on FS's that do not support iversion
- Fix WRITE verifiers for stable writes
- Fix crash on COPY_NOTIFY with a special state ID"
* tag 'nfsd-5.17' of git://git.kernel.org/pub/scm/linux/kernel/git/cel/linux: (51 commits)
SUNRPC: Fix sockaddr handling in svcsock_accept_class trace points
SUNRPC: Fix sockaddr handling in the svc_xprt_create_error trace point
fs/locks: fix fcntl_getlk64/fcntl_setlk64 stub prototypes
nfsd: fix crash on COPY_NOTIFY with special stateid
MAINTAINERS: remove bfields
NFSD: Move fill_pre_wcc() and fill_post_wcc()
Revert "nfsd: skip some unnecessary stats in the v4 case"
NFSD: Trace boot verifier resets
NFSD: Rename boot verifier functions
NFSD: Clean up the nfsd_net::nfssvc_boot field
NFSD: Write verifier might go backwards
nfsd: Add a tracepoint for errors in nfsd4_clone_file_range()
NFSD: De-duplicate net_generic(nf->nf_net, nfsd_net_id)
NFSD: De-duplicate net_generic(SVC_NET(rqstp), nfsd_net_id)
NFSD: Clean up nfsd_vfs_write()
nfsd: Replace use of rwsem with errseq_t
NFSD: Fix verifier returned in stable WRITEs
nfsd: Retry once in nfsd_open on an -EOPENSTALE return
nfsd: Add errno mapping for EREMOTEIO
nfsd: map EBADF
...
Linus Torvalds [Sun, 16 Jan 2022 05:36:49 +0000 (07:36 +0200)]
Merge tag '9p-for-5.17-rc1' of git://github.com/martinetd/linux
Pull 9p updates from Dominique Martinet:
"Fixes, split 9p_net_fd, and new reviewer:
- fix possible uninitialized memory usage for setattr
- fix fscache reading hole in a file just after it's been grown
- split net/9p/trans_fd.c into its own module like other transports.
The new transport module defaults to 9P_NET and is autoloaded if
required so users should not be impacted
- add Christian Schoenebeck to 9p reviewers
- some more trivial cleanup"
* tag '9p-for-5.17-rc1' of git://github.com/martinetd/linux:
9p: fix enodata when reading growing file
net/9p: show error message if user 'msize' cannot be satisfied
MAINTAINERS: 9p: add Christian Schoenebeck as reviewer
9p: only copy valid iattrs in 9P2000.L setattr implementation
9p: Use BUG_ON instead of if condition followed by BUG.
net/p9: load default transports
9p/xen: autoload when xenbus service is available
9p/trans_fd: split into dedicated module
fs: 9p: remove unneeded variable
9p/trans_virtio: Fix typo in the comment for p9_virtio_create()
Linus Torvalds [Sun, 16 Jan 2022 04:52:38 +0000 (06:52 +0200)]
Merge tag 'drm-next-2022-01-14' of git://anongit.freedesktop.org/drm/drm
Pull drm fixes from Daniel Vetter:
"drivers fixes:
- i915 fixes for ttm backend + one pm wakelock fix
- amdgpu fixes, fairly big pile of small things all over. Note this
doesn't yet contain the fixed version of the otg sync patch that
blew up
- small driver fixes: meson, sun4i, vga16fb probe fix
drm core fixes:
- cma-buf heap locking
- ttm compilation
- self refresh helper state check
- wrong error message in atomic helpers
- mipi-dbi buffer mapping"
* tag 'drm-next-2022-01-14' of git://anongit.freedesktop.org/drm/drm: (49 commits)
drm/mipi-dbi: Fix source-buffer address in mipi_dbi_buf_copy
drm: fix error found in some cases after the patch d1af5cd86997
drm/ttm: fix compilation on ARCH=um
dma-buf: cma_heap: Fix mutex locking section
video: vga16fb: Only probe for EGA and VGA 16 color graphic cards
drm/amdkfd: Fix ASIC name typos
drm/amdkfd: Fix DQM asserts on Hawaii
drm/amdgpu: Use correct VIEWPORT_DIMENSION for DCN2
drm/amd/pm: only send GmiPwrDnControl msg on master die (v3)
drm/amdgpu: use spin_lock_irqsave to avoid deadlock by local interrupt
drm/amdgpu: not return error on the init_apu_flags
drm/amdkfd: Use prange->update_list head for remove_list
drm/amdkfd: Use prange->list head for insert_list
drm/amdkfd: make SPDX License expression more sound
drm/amdkfd: Check for null pointer after calling kmemdup
drm/amd/display: invalid parameter check in dmub_hpd_callback
Revert "drm/amdgpu: Don't inherit GEM object VMAs in child process"
drm/amd/display: reset dcn31 SMU mailbox on failures
drm/amdkfd: use default_groups in kobj_type
drm/amdgpu: use default_groups in kobj_type
...
Linus Torvalds [Sat, 15 Jan 2022 18:37:06 +0000 (20:37 +0200)]
Merge branch 'akpm' (patches from Andrew)
Merge misc updates from Andrew Morton:
"146 patches.
Subsystems affected by this patch series: kthread, ia64, scripts,
ntfs, squashfs, ocfs2, vfs, and mm (slab-generic, slab, kmemleak,
dax, kasan, debug, pagecache, gup, shmem, frontswap, memremap,
memcg, selftests, pagemap, dma, vmalloc, memory-failure, hugetlb,
userfaultfd, vmscan, mempolicy, oom-kill, hugetlbfs, migration, thp,
ksm, page-poison, percpu, rmap, zswap, zram, cleanups, hmm, and
damon)"
* emailed patches from Andrew Morton <[email protected]>: (146 commits)
mm/damon: hide kernel pointer from tracepoint event
mm/damon/vaddr: hide kernel pointer from damon_va_three_regions() failure log
mm/damon/vaddr: use pr_debug() for damon_va_three_regions() failure logging
mm/damon/dbgfs: remove an unnecessary variable
mm/damon: move the implementation of damon_insert_region to damon.h
mm/damon: add access checking for hugetlb pages
Docs/admin-guide/mm/damon/usage: update for schemes statistics
mm/damon/dbgfs: support all DAMOS stats
Docs/admin-guide/mm/damon/reclaim: document statistics parameters
mm/damon/reclaim: provide reclamation statistics
mm/damon/schemes: account how many times quota limit has exceeded
mm/damon/schemes: account scheme actions that successfully applied
mm/damon: remove a mistakenly added comment for a future feature
Docs/admin-guide/mm/damon/usage: update for kdamond_pid and (mk|rm)_contexts
Docs/admin-guide/mm/damon/usage: mention tracepoint at the beginning
Docs/admin-guide/mm/damon/usage: remove redundant information
Docs/admin-guide/mm/damon/usage: update for scheme quotas and watermarks
mm/damon: convert macro functions to static inline functions
mm/damon: modify damon_rand() macro to static inline function
mm/damon: move damon_rand() definition into damon.h
...
SeongJae Park [Fri, 14 Jan 2022 22:10:50 +0000 (14:10 -0800)]
mm/damon: hide kernel pointer from tracepoint event
DAMON's virtual address spaces monitoring primitive uses 'struct pid *'
of the target process as its monitoring target id. The kernel address
is exposed as-is to the user space via the DAMON tracepoint,
'damon_aggregated'.
Though primarily only privileged users are allowed to access that, it
would be better to avoid unnecessarily exposing kernel pointers.
Because the trace result is only required to distinguish each target,
we don't need to use the pointer as-is.
This makes the tracepoint use the index of the target in the context's
targets list as its id, to hide the kernel space address.
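A sketch of the idea, assuming the aggregation loop in DAMON's core
(names approximate):

    struct damon_target *t;
    unsigned int ti = 0;    /* index of the target in the context */

    damon_for_each_target(t, c) {
        struct damon_region *r;

        /* report the index instead of the 'struct pid *' target id */
        damon_for_each_region(r, t)
            trace_damon_aggregated(t, ti, r, damon_nr_regions(t));
        ti++;
    }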
SeongJae Park [Fri, 14 Jan 2022 22:10:47 +0000 (14:10 -0800)]
mm/damon/vaddr: hide kernel pointer from damon_va_three_regions() failure log
The failure log message for 'damon_va_three_regions()' prints the target
id, which is a 'struct pid' pointer in this case. To avoid exposing
the kernel pointer via the log, this makes the log use the index of
the target in the context's targets list instead.
SeongJae Park [Fri, 14 Jan 2022 22:10:44 +0000 (14:10 -0800)]
mm/damon/vaddr: use pr_debug() for damon_va_three_regions() failure logging
Failure of 'damon_va_three_regions()' is logged using 'pr_err()'. But
the function can fail in legitimate situations. To avoid surprising
users and to keep the kernel log clean, this makes the message be
printed with 'pr_debug()'.
SeongJae Park [Fri, 14 Jan 2022 22:10:41 +0000 (14:10 -0800)]
mm/damon/dbgfs: remove an unnecessary variable
Patch series "mm/damon: Hide unnecessary information disclosures".
DAMON is exposing some unnecessary information including kernel pointer
in kernel log and tracepoint. This patchset hides such information.
The first patch is only for a trivial cleanup, though.
This patch (of 4):
This commit removes an unnecessary variable in
dbgfs_target_ids_write().
Guoqing Jiang [Fri, 14 Jan 2022 22:10:38 +0000 (14:10 -0800)]
mm/damon: move the implementation of damon_insert_region to damon.h
Usually, an inline function is declared static since it should sit
between storage and type, and is implemented in a header file if used
by multiple files.
This change also fixes a compile issue when backporting DAMON to 5.10:
mm/damon/vaddr.c: In function `damon_va_evenly_split_region':
./include/linux/damon.h:425:13: error: inlining failed in call to `always_inline' `damon_insert_region': function body not available
425 | inline void damon_insert_region(struct damon_region *r,
| ^~~~~~~~~~~~~~~~~~~
mm/damon/vaddr.c:86:3: note: called from here
86 | damon_insert_region(n, r, next, t);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
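A sketch of the moved definition in damon.h (body assumed from the
list insertion DAMON already performs):

    static inline void damon_insert_region(struct damon_region *r,
            struct damon_region *prev, struct damon_region *next,
            struct damon_target *t)
    {
        __list_add(&r->list, &prev->list, &next->list);
        t->nr_regions++;
    }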
Baolin Wang [Fri, 14 Jan 2022 22:10:35 +0000 (14:10 -0800)]
mm/damon: add access checking for hugetlb pages
A process's VMAs can be mapped by hugetlb pages, but DAMON did not
implement access checking for hugetlb PTEs, so we could not get the
actual access count if a process's VMAs were mapped by hugetlb.
SeongJae Park [Fri, 14 Jan 2022 22:10:29 +0000 (14:10 -0800)]
mm/damon/dbgfs: support all DAMOS stats
Currently, the DAMON debugfs interface does not support the DAMON-based
Operation Schemes (DAMOS) stats for successfully applied regions and
time/space quota limit exceeds. This adds the support.
SeongJae Park [Fri, 14 Jan 2022 22:10:23 +0000 (14:10 -0800)]
mm/damon/reclaim: provide reclamation statistics
This implements new DAMON_RECLAIM parameters for statistics reporting.
Those can be used for understanding how DAMON_RECLAIM is working, and
for tuning the other parameters.
SeongJae Park [Fri, 14 Jan 2022 22:10:20 +0000 (14:10 -0800)]
mm/damon/schemes: account how many times quota limit has exceeded
If the time/space quotas of a given DAMON-based operation scheme are
too small, the scheme can show unexpectedly slow progress. However,
there is no good way to notice this at runtime. This commit extends
the DAMOS stat to provide how many times the quota limits were exceeded
so that users can easily notice the case and tune the scheme.
SeongJae Park [Fri, 14 Jan 2022 22:10:17 +0000 (14:10 -0800)]
mm/damon/schemes: account scheme actions that successfully applied
Patch series "mm/damon/schemes: Extend stats for better online analysis and tuning".
To help online access pattern analysis and tuning of DAMON-based
Operation Schemes (DAMOS), DAMOS provides simple statistics for each
scheme. The introduction of DAMOS time/space quotas further eased the
tuning by making risk management easier. However, that also made
understanding of the working schemes a little bit more difficult.
For example, progress of a given scheme can now be throttled by not
only the aggressiveness of the target access pattern, but also the
time/space quotas. So, when a scheme is showing unexpectedly slow
progress, it is difficult to know, with the currently provided
statistics, what is throttling it.
This patchset extends the statistics to contain some metrics that can
be helpful for such online scheme analysis and tuning (patches 1-2),
exports those to users (patches 3 and 5), and adds documents (patches
4 and 6).
This patch (of 6):
DAMON-based operation schemes (DAMOS) stats provide only the number and
the amount of regions that the scheme's action has been tried to be
applied to. Because the action can fail for some reasons, the
currently provided information is sometimes not useful or convenient
enough for scheme profiling and tuning. To improve this situation,
this commit extends the DAMOS stats to provide the number and the
amount of regions that the action has been successfully applied to.
SeongJae Park [Fri, 14 Jan 2022 22:10:14 +0000 (14:10 -0800)]
mm/damon: remove a mistakenly added comment for a future feature
Due to a mistake in patch reordering, a comment for a future feature
called 'arbitrary monitoring target support'[1], which is still under
development, was added. Because it only introduces confusion and we
don't have a plan to post the patches soon, this commit removes the
mistakenly added part.
SeongJae Park [Fri, 14 Jan 2022 22:10:08 +0000 (14:10 -0800)]
Docs/admin-guide/mm/damon/usage: mention tracepoint at the beginning
To get detailed monitoring results from the user space, users need to
use the damon_aggregated tracepoint. This commit adds a brief mention
of it at the beginning of the usage document.
SeongJae Park [Fri, 14 Jan 2022 22:10:02 +0000 (14:10 -0800)]
Docs/admin-guide/mm/damon/usage: update for scheme quotas and watermarks
DAMOS features including time/space quota limits and watermarks are not
described in the DAMON debugfs interface document. This commit updates
the document for the features.
Xin Hao [Fri, 14 Jan 2022 22:09:53 +0000 (14:09 -0800)]
mm/damon: move damon_rand() definition into damon.h
damon_rand() is called in three files: damon/core.c, damon/paddr.c and
damon/vaddr.c. There is no need to define it twice, so moving it to
damon.h is a good choice.
Xin Hao [Fri, 14 Jan 2022 22:09:50 +0000 (14:09 -0800)]
mm/damon/schemes: add the validity judgment of thresholds
In dbgfs "schemes" interface, i do some test like this:
# cd /sys/kernel/debug/damon
# echo "2 1 2 1 10 1 3 10 1 1 1 1 1 1 1 1 2 3" > schemes
# cat schemes
# 2 1 2 1 10 1 3 10 1 1 1 1 1 1 1 1 2 3 0 0
There are some unreasonable values here: I set the values of the
variables "<min_sz, max_sz>, <min_nr_a, max_nr_a>, <min_age, max_age>,
<wmarks.high, wmarks.mid, wmarks.low>" as "<2, 1>, <2, 1>, <10, 1>,
<1, 2, 3>".
So add a validity check for these threshold values.
Xin Hao [Fri, 14 Jan 2022 22:09:44 +0000 (14:09 -0800)]
mm/damon: remove some unneeded function definitions in damon.h
In damon.h some function definitions about VA & PA can only be used in
their own files, so there is no need to define them in the header file,
and the header file will look cleaner.
If other files later need these functions, the prototypes can be added
to damon.h at that time.
Xin Hao [Fri, 14 Jan 2022 22:09:37 +0000 (14:09 -0800)]
mm/damon: add 'age' of region tracepoint support
In DAMON, we can get age information by analyzing the nr_access change,
but short-time sampling is not effective; we have to obtain enough data
for analysis through a long trace, which also means that we need to
consume more cpu resources and storage space.
Now the region gains a new 'age' variable, so we only need to watch the
change of the age value through a short trace. For example, if age has
been increasing up to 141 while nr_access shows a value of 0 at the
same time, we can conclude that the region has had a very low nr_access
value for a long time.
Alistair Popple [Fri, 14 Jan 2022 22:09:31 +0000 (14:09 -0800)]
mm/hmm.c: allow VM_MIXEDMAP to work with hmm_range_fault
hmm_range_fault() can be used instead of get_user_pages() for devices
which allow faulting; however, unlike get_user_pages(), it will return
an error when used on a VM_MIXEDMAP range.
To make hmm_range_fault() more closely match get_user_pages() remove
this restriction. This requires dealing with the !ARCH_HAS_PTE_SPECIAL
case in hmm_vma_handle_pte(). Rather than replicating the logic of
vm_normal_page() call it directly and do a check for the zero pfn
similar to what get_user_pages() currently does.
Also add a test to hmm selftest to verify functionality.
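A hedged sketch of the described check in hmm_vma_handle_pte() (the
exact upstream condition may differ):

    /*
     * Use vm_normal_page() directly instead of replicating its logic,
     * and special-case the zero pfn as get_user_pages() does.
     */
    if (!vm_normal_page(walk->vma, addr, pte) &&
        !pte_devmap(pte) &&
        !is_zero_pfn(pte_pfn(pte))) {
        /* no normal page backs this pte: fault it or fail */
        ...
    }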
Ting Liu [Fri, 14 Jan 2022 22:09:28 +0000 (14:09 -0800)]
mm: make some vars and functions static or __init
"page_idle_ops" as a global var, but its scope of use within this
document. So it should be static.
"page_ext_ops" is a var used in the kernel initial phase. And other
functions are aslo used in the kernel initial phase. So they should be
__init or __initdata to reclaim memory.
After the TLB is flushed on CPU1 via flush_tlb_mm() and before
mm->tlb_flush_batched is set to false, some PTE is unmapped on CPU0 and
its TLB flush is pended. Then the pending TLB flush will be lost.
Although both set_tlb_ubc_flush_pending() and
flush_tlb_batched_pending() are called with PTL locked, different PTL
instances may be used.
Because the race window is really small, and the lost TLB flushing will
cause problem only if a TLB entry is inserted before the unmapping in
the race window, the race is only theoretical. But the fix is simple
and cheap too.
Syzbot has reported this too as follows:
==================================================================
BUG: KCSAN: data-race in flush_tlb_batched_pending / try_to_unmap_one
write to 0xffff8881072cfbbc of 1 bytes by task 71 on cpu 0:
set_tlb_ubc_flush_pending mm/rmap.c:636 [inline]
try_to_unmap_one+0x60e/0x1220 mm/rmap.c:1515
rmap_walk_anon+0x2fb/0x470 mm/rmap.c:2301
try_to_unmap+0xec/0x110
shrink_page_list+0xe91/0x2620 mm/vmscan.c:1719
shrink_inactive_list+0x3fb/0x730 mm/vmscan.c:2394
shrink_list mm/vmscan.c:2621 [inline]
shrink_lruvec+0x3c9/0x710 mm/vmscan.c:2940
shrink_node_memcgs+0x23e/0x410 mm/vmscan.c:3129
shrink_node+0x8f6/0x1190 mm/vmscan.c:3252
kswapd_shrink_node mm/vmscan.c:4022 [inline]
balance_pgdat+0x702/0xd30 mm/vmscan.c:4213
kswapd+0x200/0x340 mm/vmscan.c:4473
kthread+0x2c7/0x2e0 kernel/kthread.c:327
ret_from_fork+0x1f/0x30
value changed: 0x01 -> 0x00
Reported by Kernel Concurrency Sanitizer on:
CPU: 0 PID: 71 Comm: kswapd0 Not tainted 5.16.0-rc1-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
==================================================================
Qi Zheng [Fri, 14 Jan 2022 22:09:12 +0000 (14:09 -0800)]
mm: memcg/percpu: account extra objcg space to memory cgroups
Similar to the slab memory allocator, for each accounted percpu object
there is an extra space which is used to store the obj_cgroup
membership. Charge it too.
Naoya Horiguchi [Fri, 14 Jan 2022 22:09:09 +0000 (14:09 -0800)]
mm/hwpoison: fix unpoison_memory()
After the recent soft-offline rework, error pages can be taken off the
buddy allocator, but the existing unpoison_memory() does not properly
undo the operation. Moreover, due to the recent change on
__get_hwpoison_page(), get_page_unless_zero() is hardly called for
hwpoisoned pages. So __get_hwpoison_page() most likely returns -EBUSY
(meaning it fails to grab the page refcount) and unpoison just clears
PG_hwpoison without releasing a refcount. That does not lead to a
critical issue like a kernel panic, but unpoisoned pages never get back
to the buddy allocator (they are leaked permanently), which is not good.
To (partially) fix this, we need to identify "taken off" pages from
other types of hwpoisoned pages. We can't use refcount or page flags
for this purpose, so a pseudo flag is defined by hacking ->private
field. Someone might think that put_page() is enough to cancel
taken-off pages, but the normal free path contains some operations not
suitable for the current purpose, and can fire VM_BUG_ON().
Note that unpoison_memory() is now supposed to cancel hwpoison events
injected only by madvise() or
/sys/devices/system/memory/{hard,soft}_offline_page, not by MCE
injection, so please don't try to use unpoison when testing with MCE
injection.
Naoya Horiguchi [Fri, 14 Jan 2022 22:09:02 +0000 (14:09 -0800)]
mm/hwpoison: mf_mutex for soft offline and unpoison
Patch series "mm/hwpoison: fix unpoison_memory()", v4.
The main purpose of this series is to sync unpoison code to recent
changes around how hwpoison code takes page refcount. Unpoison should
work or simply fail (without crash) if impossible.
The recent work of keeping hwpoison pages in the shmem pagecache
introduces a new state of hwpoisoned pages, but unpoison for such pages
is not supported yet with this series.
It seems that soft-offline and unpoison can be used as a general
purpose page offline/online mechanism (not in the context of memory
error). I think that we need some additional work to realize it
because currently soft-offline and unpoison are assumed not to happen
so frequently (they print out too many messages for aggressive use
cases). But anyway this could be another interesting next topic.
Originally mf_mutex was introduced to serialize multiple MCE events,
but it is not that useful to allow unpoison to run in parallel with
memory_failure() and soft offline. So apply mf_mutex to soft offline
and unpoison. The memory failure handler and soft offline handler get
simpler with this.
Nanyong Sun [Fri, 14 Jan 2022 22:08:59 +0000 (14:08 -0800)]
mm: ksm: fix use-after-free kasan report in ksm_might_need_to_copy
When under the stress of swapping in/out with KSM enabled, there is a
low probability that kasan reports a use-after-free BUG in
ksm_might_need_to_copy() during swap-in. The freed object is the
anon_vma got from page_anon_vma(page).
It happens because a swapcache page associated with one anon_vma is now
needed for another anon_vma, but the page's original vma was unmapped
and the anon_vma was freed. In this case the if condition below always
returns false and a new page is allocated for the copy. The swap-in
process then uses the new page and can continue to run well, so this is
actually harmless.
This patch exchanges the order of the two judgment statements to avoid
the kasan warning. Let the cpu evaluate "page->index ==
linear_page_index(vma, address)" first; it mostly returns false, which
skips the read of anon_vma->root that may trigger the kasan
use-after-free warning:
==================================================================
BUG: KASAN: use-after-free in ksm_might_need_to_copy+0x12e/0x5b0
Read of size 8 at addr ffff88be9977dbd0 by task khugepaged/694
The buggy address belongs to the object at ffff88be9977dba0
which belongs to the cache anon_vma_chain of size 64
The buggy address is located 48 bytes inside of
64-byte region [ffff88be9977dba0, ffff88be9977dbe0)
The buggy address belongs to the page:
page:ffffea00fa65df40 count:1 mapcount:0 mapping:ffff888107717800 index:0x0
flags: 0x17ffffc0000100(slab)
==================================================================
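A sketch of the reordering in ksm_might_need_to_copy() (condition
shape inferred from the description above):

    /* Before: anon_vma->root is read first and may already be freed. */
    } else if (anon_vma->root == vma->anon_vma->root &&
               page->index == linear_page_index(vma, address)) {

    /* After: the safe check runs first and mostly returns false, so
     * anon_vma->root is rarely dereferenced. */
    } else if (page->index == linear_page_index(vma, address) &&
               anon_vma->root == vma->anon_vma->root) {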
mm/thp: drop unused trace events hugepage_[invalidate|splitting]
The trace events hugepage_[invalidate|splitting] were added via
commit 9e813308a5c1 ("powerpc/thp: Add tracepoints to track hugepage
invalidate"). Afterwards their call sites, i.e.
trace_hugepage_[invalidate|splitting], were just dropped, leaving
these trace points unused.
Colin Ian King [Fri, 14 Jan 2022 22:08:53 +0000 (14:08 -0800)]
mm/migrate: remove redundant variables used in a for-loop
The variable addr is being set and incremented in a for-loop but never
actually used. It is redundant, so addr, and also the variable start,
can be removed.
Huang Ying [Fri, 14 Jan 2022 22:08:49 +0000 (14:08 -0800)]
mm/migrate: move node demotion code to near its user
Now, node_demotion and next_demotion_node() are placed between
__unmap_and_move() and unmap_and_move(). This hurts code readability.
So move them near their users in the file. There's no functionality
change in this patch.
Baolin Wang [Fri, 14 Jan 2022 22:08:43 +0000 (14:08 -0800)]
mm: migrate: support multiple target nodes demotion
We have some machines with multiple memory types, which have one fast
(DRAM) memory node and two slow (persistent memory) memory nodes.
According to the current node demotion policy, if node 0 fills up, its
memory should be migrated to node 1; when node 1 fills up, its memory
will be migrated to node 2: node 0 -> node 1 -> node 2 -> stop.
But this is not an efficient and suitable memory migration route for a
machine with multiple slow memory nodes. Since the distances from node
0 to node 1 and from node 0 to node 2 are equal, and memory migration
between slow memory nodes will greatly increase persistent memory
bandwidth usage, it will hurt the whole system's performance.
Thus for this case, we can treat the slow memory nodes 1 and 2 as a
whole slow memory region, and we should migrate memory from node 0 to
both node 1 and node 2 if node 0 fills up.
This patch changes the node_demotion data structure to support multiple
target nodes, and establishes the migration path to support multiple
target nodes, validating whether the node distance is the best or not.
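A sketch of the extended data structure (field names and the target
count are assumptions):

    #define DEMOTION_TARGET_NODES 15

    struct demotion_nodes {
        unsigned short nr;                  /* number of target nodes */
        short nodes[DEMOTION_TARGET_NODES]; /* target node ids */
    };

    static struct demotion_nodes *node_demotion __read_mostly;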
Baolin Wang [Fri, 14 Jan 2022 22:08:40 +0000 (14:08 -0800)]
mm: compaction: fix the migration stats in trace_mm_compaction_migratepages()
Now migrate_pages() has changed to return the number of {normal page,
THP, hugetlb} units instead, thus we should not use the return value to
calculate the number of pages migrated successfully. Instead we can
just use 'nr_succeeded', which indicates the number of normal pages
migrated successfully, to calculate the non-migrated pages in
trace_mm_compaction_migratepages().
Baolin Wang [Fri, 14 Jan 2022 22:08:37 +0000 (14:08 -0800)]
mm: migrate: correct the hugetlb migration stats
Correct the migration stats for hugetlb by using compound_nr() instead
of thp_nr_pages(); meanwhile change 'nr_failed_pages' to record the
number of normal pages that failed to migrate, including THP and
hugetlb, and 'nr_succeeded' will record the number of normal pages
migrated successfully.
Baolin Wang [Fri, 14 Jan 2022 22:08:34 +0000 (14:08 -0800)]
mm: migrate: fix the return value of migrate_pages()
Patch series "Improve the migration stats".
According to talk with Zi Yan [1], this patch set changes the return
value of migrate_pages() to avoid returning a number which is larger
than the number of pages the users tried to migrate by move_pages()
syscall. Also fix the hugetlb migration stats and migration stats in
trace_mm_compaction_migratepages().
As Zi Yan pointed out, the syscall move_pages() can return a
non-migrated number larger than the number of pages the user tried to
migrate when a THP fails to migrate. This is confusing for users.
Other migration scenarios do not care about the actual number of
non-migrated pages, except the memory compaction migration, which will
be fixed in the following patch. Thus we can change the return value
to the number of {normal page, THP, hugetlb} units instead to avoid
this issue, and the number of THP splits will be considered as the
number of non-migrated THPs, no matter how many subpages of the THP
were migrated successfully. Meanwhile we should still keep the
migration counters using the number of normal pages.
hugetlbfs: fix off-by-one error in hugetlb_vmdelete_list()
Pass "end - 1" instead of "end" when walking the interval tree in
hugetlb_vmdelete_list() to fix an inclusive vs. exclusive bug. The two
callers that pass a non-zero "end" treat it as exclusive, whereas the
interval tree iterator expects an inclusive "last". E.g. punching a
hole in a file that precisely matches the size of a single hugepage,
with a vma starting right on the boundary, will result in
unmap_hugepage_range() being called twice, with the second call having
start==end.
The off-by-one error doesn't cause functional problems as
__unmap_hugepage_range() turns into a massive nop due to
short-circuiting its for-loop on "address < end". But, the mmu_notifier
invocations to invalidate_range_{start,end}() are passed a bogus zero-sized
range, which may be unexpected behavior for secondary MMUs.
The bug was exposed by commit ed922739c919 ("KVM: Use interval tree to
do fast hva lookup in memslots"), currently queued in the KVM tree for
5.17, which added a WARN to detect ranges with start==end.
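The described fix, as a one-line sketch of the walk in
hugetlb_vmdelete_list():

    /* The interval tree takes an inclusive "last", so pass end - 1. */
    vma_interval_tree_foreach(vma, root, start, end ? end - 1 : ULONG_MAX) {
        ...
    }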
Jann Horn [Fri, 14 Jan 2022 22:08:27 +0000 (14:08 -0800)]
mm, oom: OOM sysrq should always kill a process
The OOM kill sysrq (alt+sysrq+F) should allow the user to kill the
process with the highest OOM badness with a single execution.
However, at the moment, the OOM kill can bail out if an OOM notifier
(e.g. the i915 one) says that it reclaimed a tiny amount of memory from
somewhere. That's probably not what the user wants, so skip the bailout
if the OOM was triggered via sysrq.
Randy Dunlap [Fri, 14 Jan 2022 22:08:24 +0000 (14:08 -0800)]
mm/mempolicy: fix all kernel-doc warnings
Fix kernel-doc warnings in mempolicy.c:
mempolicy.c:139: warning: No description found for return value of 'numa_map_to_online_node'
mempolicy.c:2165: warning: Excess function parameter 'node' description in 'alloc_pages_vma'
mempolicy.c:2973: warning: No description found for return value of 'mpol_parse_str'
This syscall can be used to set a home node for the MPOL_BIND and
MPOL_PREFERRED_MANY memory policy. Users should use this syscall after
setting up a memory policy for the specified range as shown below.
The syscall allows specifying a home node/preferred node from which
kernel will fulfill memory allocation requests first.
For address range with MPOL_BIND memory policy, if nodemask specifies
more than one node, page allocations will come from the node in the
nodemask with sufficient free memory that is closest to the home
node/preferred node.
For MPOL_PREFERRED_MANY if the nodemask specifies more than one node,
page allocation will come from the node in the nodemask with sufficient
free memory that is closest to the home node/preferred node. If there
is not enough memory in all the nodes specified in the nodemask, the
allocation will be attempted from the closest numa node to the home node
in the system.
This helps applications to hint at a memory allocation preference node
and fallback to _only_ a set of nodes if the memory is not available on
the preferred node. Fallback allocation is attempted from the node
which is nearest to the preferred node.
This helps applications to have control on memory allocation numa nodes
and avoids the default fallback to slow memory NUMA nodes. For
example, consider a system with NUMA nodes 1, 2 and 3 backed by DRAM
and nodes 10, 11 and 12 backed by slow memory, set up as sketched
below.
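A userspace sketch of such a setup (libnuma helpers plus the raw
syscall; the variable names are illustrative):

    new_nodes = numa_bitmask_alloc(nr_nodes);

    numa_bitmask_setbit(new_nodes, 1);
    numa_bitmask_setbit(new_nodes, 2);
    numa_bitmask_setbit(new_nodes, 3);

    p = mmap(NULL, nr_pages * page_size, protflag, mapflag, -1, 0);
    mbind(p, nr_pages * page_size, MPOL_BIND, new_nodes->maskp,
          new_nodes->size + 1, 0);

    sys_set_mempolicy_home_node((unsigned long)p, nr_pages * page_size,
                                2, 0);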
This will allocate from nodes closer to node 2 and will make sure the
kernel will only allocate from nodes 1, 2, and 3. Memory will not be
allocated from slow memory nodes 10, 11, and 12. This differs from the
default MPOL_BIND behavior, where the allocation will be attempted from
the node closest to the local node. One of the reasons to specify a
home node is to allow allocations from cpu-less NUMA nodes and their
nearby NUMA nodes.
MPOL_PREFERRED_MANY, on the other hand, will first try to allocate
from the node closest to node 2 in the node list 1, 2 and 3. If those
nodes don't have enough memory, the kernel will allocate from whichever
of the slow memory nodes 10, 11 and 12 is closest to node 2.
mm/mempolicy: use policy_node helper with MPOL_PREFERRED_MANY
Patch series "mm: add new syscall set_mempolicy_home_node", v6.
This patch (of 3):
A followup patch will enable setting a home node with the
MPOL_PREFERRED_MANY memory policy. To facilitate that, switch to using
the policy_node helper. There is no functional change in this patch.
Chen Wandun [Fri, 14 Jan 2022 22:08:10 +0000 (14:08 -0800)]
mm/page_isolation: unset migratetype directly for non Buddy page
In unset_migratetype_isolate(), we can bypass the call to
move_freepages_block() for non-buddy pages.
It will save a few cpu cycles in some situations, such as cma and
hugetlb, when allocating contiguous pages; in these situations
alloc_contig_pages() will be called:
alloc_contig_pages
    __alloc_contig_migrate_range
        isolate_freepages_range      ==> pages have been removed from buddy
    undo_isolate_page_range
        unset_migratetype_isolate    ==> can directly set migratetype
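A sketch of the bypass in unset_migratetype_isolate() (assumed shape):

    if (PageBuddy(page)) {
        /* Only pages still in the buddy need their free lists moved. */
        nr_pages = move_freepages_block(zone, page, migratetype, NULL);
        __mod_zone_freepage_state(zone, nr_pages, migratetype);
    }
    set_pageblock_migratetype(page, migratetype);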
Mike Kravetz [Fri, 14 Jan 2022 22:08:04 +0000 (14:08 -0800)]
userfaultfd/selftests: clean up hugetlb allocation code
The message for commit f5c73297181c ("userfaultfd/selftests: fix hugetlb
area allocations") says there is no need to create a hugetlb file in the
non-shared testing case. However, the commit did not actually change
the code to prevent creation of the file.
While it is technically true that there is no need to create and use a
hugetlb file in the case of non-shared-testing, it is useful. This is
because 'hole punching' of a hugetlb file has the potentially incorrect
side effect of also removing pages from private mappings. The
userfaultfd test relies on this side effect for removing pages from the
destination buffer during rounds of stress testing.
Remove the incomplete code that was added to deal with no hugetlb file.
Just keep the code that prevents reserves from being created for the
destination area.
Waiman Long [Fri, 14 Jan 2022 22:07:58 +0000 (14:07 -0800)]
selftests/vm: make charge_reserved_hugetlb.sh work with existing cgroup setting
The hugetlb cgroup reservation test charge_reserved_hugetlb.sh assumes
that no cgroup filesystems are mounted before running the test. That is
not true in many cases. As a result, the test fails to run. Fix that
by querying the current cgroup mount setting and using the existing
cgroup setup instead before attempting to freshly mount a cgroup
filesystem.
A similar change is also made to hugetlb_reparenting_test.sh, though
it still has a problem if cgroup v2 isn't used.
The patched test scripts were run on a CentOS 8 based system to verify
that they ran properly.
Yang Yang [Fri, 14 Jan 2022 22:07:55 +0000 (14:07 -0800)]
mm/vmstat: add events for THP max_ptes_* exceeds
There are interfaces to adjust the max_ptes_none, max_ptes_swap and
max_ptes_shared values, see
/sys/kernel/mm/transparent_hugepage/khugepaged/.
But a system administrator may not know which value is best. So add
those events to support adjusting max_ptes_* to suitable values.
For example, if the default max_ptes_swap value causes too many
failures, and the system uses zram whose IO is fast, the administrator
could increase max_ptes_swap until THP_SCAN_EXCEED_SWAP_PTE no longer
increases.
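A sketch of where such an event would be counted in khugepaged
(placement assumed; the event name is from the text above):

    if (unlikely(unmapped > khugepaged_max_ptes_swap)) {
        result = SCAN_EXCEED_SWAP_PTE;
        count_vm_event(THP_SCAN_EXCEED_SWAP_PTE);
        goto out;
    }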
Yosry Ahmed [Fri, 14 Jan 2022 22:07:52 +0000 (14:07 -0800)]
mm, hugepages: make memory size variable in hugepage-mremap selftest
The hugetlb vma mremap() test currently maps 1GB of memory to trigger
pmd sharing and make sure that 'unshare' path in mremap code works. The
test originally only mapped 10MB of memory (as specified by the header
comment) but was later modified to 1GB to tackle this case.
However, not all machines will have 1GB of memory to spare for this
test. Adding a mapping size arg will allow run_vmtest.sh to pass an
adequate mapping size, while allowing users to run the test
independently with arbitrary size mappings.
Mina Almasry [Fri, 14 Jan 2022 22:07:48 +0000 (14:07 -0800)]
hugetlb: add hugetlb.*.numa_stat file
For hugetlb backed jobs/VMs it's critical to understand the numa
information for the memory backing these jobs to deliver optimal
performance.
Currently this technically can be queried from /proc/self/numa_maps, but
there are significant issues with that. Namely:
1. Memory can be mapped or unmapped.
2. numa_maps are per process and need to be aggregated across all
processes in the cgroup. For shared memory this is more involved as
the userspace needs to make sure it doesn't double count shared
mappings.
3. I believe querying numa_maps needs to hold the mmap_lock which adds
to the contention on this lock.
For these reasons I propose simply adding hugetlb.*.numa_stat file,
which shows the numa information of the cgroup similarly to
memory.numa_stat.
On cgroup-v2:
cat /sys/fs/cgroup/unified/test/hugetlb.2MB.numa_stat
total=2097152 N0=2097152 N1=0
On cgroup-v1:
cat /sys/fs/cgroup/hugetlb/test/hugetlb.2MB.numa_stat
total=2097152 N0=2097152 N1=0
hierarichal_total=2097152 N0=2097152 N1=0
This patch was tested manually by allocating hugetlb memory and querying
the hugetlb.*.numa_stat file of the cgroup and its parents.
The above failure happened when calling kmalloc() to allocate a buffer
with GFP_DMA. It requests a slab page from the DMA zone while there
are no managed pages at all in that zone.
This is because, in the current kernel, dma-kmalloc caches are created
as long as CONFIG_ZONE_DMA is enabled. However, the kdump kernel of
x86_64 has no managed pages in the DMA zone since commit 6f599d84231f
("x86/kdump: Always reserve the low 1M when the crashkernel option is
specified"). The failure can always be reproduced.
For now, let's mute the allocation failure warning when requesting
pages from the DMA zone while it has no managed pages.
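A sketch of the mute condition in warn_alloc() (assumed placement;
has_managed_dma() is the helper introduced elsewhere in this series):

    if ((gfp_mask & __GFP_NOWARN) ||
        !__ratelimit(&nopage_rs) ||
        ((gfp_mask & __GFP_DMA) && !has_managed_dma()))
        return;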
Baoquan He [Fri, 14 Jan 2022 22:07:41 +0000 (14:07 -0800)]
dma/pool: create dma atomic pool only if dma zone has managed pages
Currently three dma atomic pools are initialized as long as the relevant
kernel code is built in. But in the kdump kernel of x86_64, this is
not right when trying to create atomic_pool_dma, because there are no
managed pages in the DMA zone. In that case, the DMA zone only has the
low 1M of memory present and locked down by the memblock allocator, so
no pages are added into the buddy of the DMA zone. Please check commit
f1d4d47c5851 ("x86/setup: Always reserve the first 1M of RAM").
Then the kdump kernel of x86_64 always prints the below failure message:
DMA: preallocated 128 KiB GFP_KERNEL pool for atomic allocations
swapper/0: page allocation failure: order:5, mode:0xcc1(GFP_KERNEL|GFP_DMA), nodemask=(null),cpuset=/,mems_allowed=0
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.13.0-0.rc5.20210611git929d931f2b40.42.fc35.x86_64 #1
Hardware name: Dell Inc. PowerEdge R910/0P658H, BIOS 2.12.0 06/04/2018
Call Trace:
dump_stack+0x7f/0xa1
warn_alloc.cold+0x72/0xd6
__alloc_pages_slowpath.constprop.0+0xf29/0xf50
__alloc_pages+0x24d/0x2c0
alloc_page_interleave+0x13/0xb0
atomic_pool_expand+0x118/0x210
__dma_atomic_pool_init+0x45/0x93
dma_atomic_pool_init+0xdb/0x176
do_one_initcall+0x67/0x320
kernel_init_freeable+0x290/0x2dc
kernel_init+0xa/0x111
ret_from_fork+0x22/0x30
Mem-Info:
......
DMA: failed to allocate 128 KiB GFP_KERNEL|GFP_DMA pool for atomic allocation
DMA: preallocated 128 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Here, let's check whether the DMA zone has managed pages and create
atomic_pool_dma if so; otherwise just skip it.
***Root cause:
In the current kernel, it is assumed that the DMA zone must have
managed pages, and pages are requested from there if CONFIG_ZONE_DMA is
enabled. While this is not always true. E.g. in the kdump kernel of
x86_64, only the low 1M is present and locked down at a very early
stage of boot, so this low 1M won't be added into the buddy allocator
to become managed pages of the DMA zone. This exception will always
cause page allocation failure if a page is requested from the DMA zone.
***Investigation:
This failure has happened since the below commits were merged into
Linus's tree:
  1a6a9044b967 x86/setup: Remove CONFIG_X86_RESERVE_LOW and reservelow= options
  23721c8e92f7 x86/crash: Remove crash_reserve_low_1M()
  f1d4d47c5851 x86/setup: Always reserve the first 1M of RAM
  7c321eb2b843 x86/kdump: Remove the backup region handling
  6f599d84231f x86/kdump: Always reserve the low 1M when the crashkernel option is specified
Before them, on x86_64, the low 640K area would be reused by the kdump
kernel. So in the kdump kernel, the content of the low 640K area was
copied into a backup region for dumping before jumping into kdump.
Then, except for the firmware reserved regions in [0, 640K], the rest
of the area was added into the buddy allocator to become available
managed pages of the DMA zone.
However, after the above commits, in the kdump kernel of x86_64 the
low 1M is reserved by memblock but not released to the buddy allocator,
so any later page allocation requested from the DMA zone will fail.
At the beginning, if crashkernel is reserved, the low 1M needs to be
locked down because AMD SME encrypts memory, making the old backup
region mechanism impossible when switching into the kdump kernel.
Later, it was also observed that there are BIOSes corrupting memory
under 1M. To solve this, in commit f1d4d47c5851, the entire region of
the low 1M is always reserved after the real mode trampoline is
allocated.
Besides, recently, an Intel engineer mentioned that their TDX (Trusted
Domain Extensions), which is under development in the kernel, also
needs to lock down the low 1M. So we can't simply revert the above
commits to fix the page allocation failure from the DMA zone, as
someone suggested.
***Solution:
Currently, only the DMA atomic pool and dma-kmalloc initialize and
request page allocation with GFP_DMA during bootup.
So only initialize the DMA atomic pool when the DMA zone has available
managed pages; otherwise just skip the initialization.
For dma-kmalloc(), for the time being, let's mute the allocation
failure warning when requesting pages from the DMA zone while it has
no managed pages. Meanwhile, change code to use the
dma_alloc_xx/dma_map_xx APIs to replace kmalloc(GFP_DMA), or do not
use GFP_DMA when calling kmalloc() if not necessary. Christoph is
posting patches to fix those under drivers/scsi/. Finally, we can
remove the need for dma-kmalloc() as people suggested.
This patch (of 3):
In some places, the current kernel assumes that the DMA zone must have
managed pages if CONFIG_ZONE_DMA is enabled. While this is not always
true. E.g. in the kdump kernel of x86_64, only the low 1M is present
and locked down at a very early stage of boot, so there are no managed
pages at all in the DMA zone. This exception will always cause page
allocation failure if a page is requested from the DMA zone.
Here add the function has_managed_dma() and the relevant helper
functions to check whether there is a DMA zone with managed pages. It
will be used in later patches.
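A sketch of such a helper (shape assumed; the upstream version may be
guarded by CONFIG_ZONE_DMA):

    bool has_managed_dma(void)
    {
        struct pglist_data *pgdat;

        for_each_online_pgdat(pgdat) {
            struct zone *zone = &pgdat->node_zones[ZONE_DMA];

            if (managed_zone(zone))
                return true;
        }
        return false;
    }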
Miles Chen [Fri, 14 Jan 2022 22:07:30 +0000 (14:07 -0800)]
include/linux/gfp.h: further document GFP_DMA32
kmalloc(..., GFP_DMA32) does not return DMA32 memory because the DMA32
kmalloc cache array is not implemented. (Reason: there is no such user
in the kernel.)
Put a short comment about this so people can understand the situation
by reading the comment.
Michal Hocko [Fri, 14 Jan 2022 22:07:27 +0000 (14:07 -0800)]
mm: drop node from alloc_pages_vma
alloc_pages_vma is meant to allocate a page with a vma-specific memory
policy. The initial node parameter is always the local node, so it is
pointless to waste a function argument on it. Drop the parameter.
Xiongwei Song [Fri, 14 Jan 2022 22:07:24 +0000 (14:07 -0800)]
mm: page_alloc: fix building error on -Werror=array-compare
Arthur Marsh reported we would hit the error below when building the
kernel with gcc-12:
  CC      mm/page_alloc.o
mm/page_alloc.c: In function `mem_init_print_info':
mm/page_alloc.c:8173:27: error: comparison between two arrays [-Werror=array-compare]
 8173 |   if (start <= pos && pos < end && size > adj) \
Comparison between two arrays is deprecated in C++20, and gcc-12 now
warns about it.
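A general illustration of the warning, with hypothetical section
symbols (this is not the kernel's actual fix):

	/*
	 * Linker section symbols are often declared as arrays, and
	 * gcc-12 flags relational comparisons between two arrays.
	 * __foo_start/__foo_end are hypothetical names.
	 */
	extern char __foo_start[], __foo_end[];

	static long foo_size(void)
	{
		/* gcc-12: "comparison between two arrays":
		 *   if (__foo_start < __foo_end) ...
		 * Comparing the decayed element addresses instead
		 * avoids the warning: */
		if (&__foo_start[0] < &__foo_end[0])
			return __foo_end - __foo_start;
		return 0;
	}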
mm/pagealloc: sysctl: change watermark_scale_factor max limit to 30%
For embedded systems with low total memory that have to run
applications with relatively large memory requirements, the 10% max
limit on watermark_scale_factor triggers direct reclaim every time such
an application is started. This results in slow application startup
times and a bad end-user experience.
By increasing the watermark_scale_factor max limit, we allow vendors
more flexibility to choose the right level of kswapd aggressiveness for
their device and workload requirements.
NeilBrown [Fri, 14 Jan 2022 22:07:14 +0000 (14:07 -0800)]
mm: introduce memalloc_retry_wait()
Various places in the kernel - largely in filesystems - respond to a
memory allocation failure by looping around and re-trying. Some of
these cannot conveniently use __GFP_NOFAIL, for reasons such as:
- a GFP_ATOMIC allocation, which __GFP_NOFAIL doesn't work on
- a need to check for the process being signalled between failures
- the possibility that other recovery actions could be performed
- the allocation is quite deep in support code, and passing down an
extra flag to say if __GFP_NOFAIL is wanted would be clumsy.
Many of these currently use congestion_wait() which (in almost all
cases) simply waits the given timeout - congestion isn't tracked for
most devices.
It isn't clear what the best delay is for loops, but it is clear that
the various filesystems shouldn't be responsible for choosing a timeout.
This patch introduces memalloc_retry_wait(), which takes on that
responsibility. Code that wants to retry a memory allocation can call
this function, passing the GFP flags that were used. It will wait
however long is appropriate.
For now, it only considers __GFP_NORETRY and whatever
gfpflags_allow_blocking() tests. If blocking is allowed without
__GFP_NORETRY, then alloc_page() either made some reclaim progress or
waited for a while before failing, so there is no need for much further
waiting; memalloc_retry_wait() only waits until the current jiffy ends.
Otherwise alloc_page() won't have waited much, if at all, and
memalloc_retry_wait() waits about 200ms. This is the delay that most
current loops use.
linux/sched/mm.h needs to be included in some files now,
but linux/backing-dev.h does not.
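A sketch of the helper following the logic above (close to, though not
necessarily verbatim from, include/linux/sched/mm.h), plus a
hypothetical caller loop:

	static inline void memalloc_retry_wait(gfp_t gfp_flags)
	{
		/* Waiting for memory usually should wait for IO too. */
		__set_current_state(TASK_UNINTERRUPTIBLE);
		gfp_flags = current_gfp_context(gfp_flags);
		if (gfpflags_allow_blocking(gfp_flags) &&
		    !(gfp_flags & __GFP_NORETRY))
			/* alloc_page() probably waited already, so just
			 * finish out the current jiffy. */
			io_schedule_timeout(1);
		else
			/* Probably didn't wait at all; use the fixed
			 * back-off existing retry loops used. */
			io_schedule_timeout(HZ / 50);
	}

	/* Hypothetical filesystem retry loop using the helper: */
	while (!(ptr = kmalloc(size, gfp)))
		memalloc_retry_wait(gfp);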
Michal Hocko [Fri, 14 Jan 2022 22:07:11 +0000 (14:07 -0800)]
mm: make slab and vmalloc allocators __GFP_NOLOCKDEP aware
The sl?b and vmalloc allocators reduce the given gfp mask for their
internal needs. For that they use GFP_RECLAIM_MASK to preserve the
reclaim behavior and constraints.
__GFP_NOLOCKDEP is not part of that mask because, strictly speaking, it
doesn't control the reclaim behavior. On the other hand it tells the
underlying page allocator to disable reclaim recursion detection, so
arguably it should be part of the mask.
Having __GFP_NOLOCKDEP in the mask will not alter the behavior in any
way, so this change is safe pretty much by definition. It also adds
support for this flag to the SL?B and vmalloc allocators, which in turn
allows its use with kvmalloc as well. A lack of this support was noticed
recently.
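The change itself is essentially one mask update; a sketch of the
mm/internal.h definition after the patch (flag list reproduced from
memory, so treat it as approximate):

	/*
	 * __GFP_NOLOCKDEP now survives the slab/vmalloc gfp filtering
	 * (approximate sketch of GFP_RECLAIM_MASK after this change).
	 */
	#define GFP_RECLAIM_MASK (__GFP_RECLAIM | __GFP_HIGH | __GFP_IO | \
				  __GFP_FS | __GFP_NOWARN | \
				  __GFP_RETRY_MAYFAIL | __GFP_NOFAIL | \
				  __GFP_NORETRY | __GFP_MEMALLOC | \
				  __GFP_NOMEMALLOC | __GFP_NOLOCKDEP)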
Michal Hocko [Fri, 14 Jan 2022 22:07:07 +0000 (14:07 -0800)]
mm: allow !GFP_KERNEL allocations for kvmalloc
Support for GFP_NO{FS,IO} and __GFP_NOFAIL has been implemented by the
previous patches, so we can now allow them in kvmalloc. This will let
some external users simplify or completely remove their helpers.
The GFP_NOWAIT semantic has never been supported, but since that was
never explicitly documented, let's add a note about it.
ceph_kvmalloc is the first helper to be dropped and changed to kvmalloc.
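A hypothetical caller that benefits (fs_alloc_table() is made up for
illustration):

	#include <linux/mm.h>
	#include <linux/slab.h>

	/*
	 * Hypothetical example: a filesystem allocating a possibly-large
	 * table under NOFS constraints, now expressible with plain
	 * kvmalloc_array() instead of a scope-API workaround.
	 */
	static int fs_alloc_table(size_t nr_entries, u64 **table)
	{
		*table = kvmalloc_array(nr_entries, sizeof(**table), GFP_NOFS);
		if (!*table)
			return -ENOMEM;
		return 0;	/* free with kvfree(*table) */
	}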
Michal Hocko [Fri, 14 Jan 2022 22:07:04 +0000 (14:07 -0800)]
mm/vmalloc: be more explicit about supported gfp flags.
Commit b7d90e7a5ea8 ("mm/vmalloc: be more explicit about supported gfp
flags") was merged prematurely, without the rest of the series and
without Neil's review feedback addressed. Fix that up now; only the
wording is changed slightly.
Michal Hocko [Fri, 14 Jan 2022 22:07:01 +0000 (14:07 -0800)]
mm/vmalloc: add support for __GFP_NOFAIL
Dave Chinner has mentioned that some of the xfs code would benefit from
kvmalloc support for __GFP_NOFAIL because they have allocations that
cannot fail and they do not fit into a single page.
Most of the vmalloc implementation already complies with the given gfp
flags, so there is nothing to be done for those paths. The area and page
table allocations are the exception. Implement a retry loop for those.
Add a short sleep before retrying. 1 jiffy is a completely arbitrary
timeout. Ideally the retry would wait for an explicit event, e.g. a
change to the vmalloc space if the failure was caused by space
fragmentation or depletion. But there are multiple different reasons to
retry and this could become much more complex. Keep the retry simple for
now and just sleep to prevent hogging CPUs.
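A minimal sketch of that retry loop; try_alloc_area() is a hypothetical
stand-in for the real area and page-table allocation paths in
mm/vmalloc.c:

	/*
	 * Minimal sketch of the __GFP_NOFAIL retry described above.
	 * try_alloc_area() is a hypothetical stand-in.
	 */
	static void *alloc_area(unsigned long size, gfp_t gfp_mask)
	{
		void *area;

		do {
			area = try_alloc_area(size, gfp_mask);
			if (area || !(gfp_mask & __GFP_NOFAIL))
				break;
			/* 1 jiffy is an arbitrary back-off; waiting on
			 * an explicit event would be better but is much
			 * more complex (see above). */
			schedule_timeout_uninterruptible(1);
		} while (1);

		return area;
	}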
Michal Hocko [Fri, 14 Jan 2022 22:06:57 +0000 (14:06 -0800)]
mm/vmalloc: alloc GFP_NO{FS,IO} for vmalloc
Patch series "extend vmalloc support for constrained allocations", v2.
Based on a recent discussion with Dave and Neil [1], I have tried to
implement NOFS, NOIO, and NOFAIL support for vmalloc to make life easier
for kvmalloc users.
A requirement for NOFAIL support for kvmalloc was new to me but this
seems to be really needed by the xfs code.
NOFS/NOIO was a known, long-standing problem which was hoped to be
handled by the scope API. Those scopes should have been used at the
reclaim recursion boundaries, both to document them and to remove the
need for NOFS/NOIO constraints on all allocations within the scope.
Instead, workarounds were developed to wrap a single allocation (like
ceph_kvmalloc).
First patch implements NOFS/NOIO support for vmalloc. The second one
adds NOFAIL support and the third one bundles all together into kvmalloc
and drops ceph_kvmalloc which can use kvmalloc directly now.
vmalloc historically hasn't supported GFP_NO{FS,IO} requests because
page table allocations do not support an externally provided gfp mask
and performed GFP_KERNEL-like allocations.
For a few years now we have had scope APIs
(memalloc_no{fs,io}_{save,restore}) to enforce NOFS and NOIO constraints
implicitly on all allocators within the scope. The hope was that those
scopes would be defined at a higher level, where the reclaim recursion
boundary starts/stops (e.g. when a lock required during memory reclaim
is taken). It seems that not all NOFS/NOIO users have adopted this
approach; instead they have taken the workaround of wrapping a single
[k]vmalloc allocation in a scope API.
These workarounds do not serve the purpose of better reclaim recursion
documentation and reduction of explicit GFP_NO{FS,IO} usage, so let's
just provide them with the semantic they are asking for, without the
need for workarounds.
Add support for GFP_NOFS and GFP_NOIO to vmalloc directly. All internal
allocations already comply with the given gfp_mask. The only current
exception is vmap_pages_range which maps kernel page tables. Infer the
proper scope API based on the given gfp mask.
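The gfp-mask-to-scope mapping can be sketched as follows (simplified
from the direction described above, not verbatim):

	/*
	 * GFP_NOFS allows IO but not FS recursion; GFP_NOIO allows
	 * neither. Wrap the page table mapping, which ignores the
	 * caller's gfp mask, in the matching scope.
	 */
	unsigned int flags = 0;

	if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
		flags = memalloc_nofs_save();
	else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
		flags = memalloc_noio_save();

	ret = vmap_pages_range(addr, addr + size, prot, pages, page_shift);

	if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
		memalloc_nofs_restore(flags);
	else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
		memalloc_noio_restore(flags);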
Christian König [Fri, 14 Jan 2022 22:06:54 +0000 (14:06 -0800)]
mm/dmapool.c: revert "make dma pool to use kmalloc_node"
This reverts commit 2618c60b8b5836 ("dma: make dma pool to use
kmalloc_node").
While working my way through the dmapool code I found this odd
kmalloc_node().
What basically happens here is that we allocate the housekeeping
structure on the NUMA node where the device is attached. Since the
device never does DMA to or from that memory, this doesn't seem to make
sense at all.
So while it doesn't seem to cause much harm, it's probably cleaner to
revert the change for consistency.
Pasha Tatashin [Fri, 14 Jan 2022 22:06:37 +0000 (14:06 -0800)]
mm: page table check
Check user page table entries at the time they are added and removed.
This allows memory corruption issues related to double mapping to be
caught synchronously.
When a PTE for an anonymous page is added into a page table, we verify
that this PTE does not already point to a file-backed page; and vice
versa, when a file-backed page is being added, we verify that this page
does not have an anonymous mapping.
We also enforce that read-only sharing is the only sharing allowed for
anonymous pages (i.e. CoW after fork); all other sharing must be for
file pages.
The page table check makes it possible to protect against and debug
cases where "struct page" metadata became corrupted for some reason, for
example when the refcount or mapcount become invalid.
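A conceptual sketch of the enforced invariant; anon_map_count() and
file_map_count() are hypothetical accessors (the real code in
mm/page_table_check.c keeps such per-page counters in page_ext):

	static atomic_t *anon_map_count(struct page *page);	/* hypothetical */
	static atomic_t *file_map_count(struct page *page);	/* hypothetical */

	static void page_table_check_page_set(struct page *page, bool writable)
	{
		if (PageAnon(page)) {
			/* An anon page must never also be file mapped,
			 * and a writable anon mapping must be exclusive
			 * (the CoW rule). */
			BUG_ON(atomic_read(file_map_count(page)));
			BUG_ON(writable && atomic_read(anon_map_count(page)));
			atomic_inc(anon_map_count(page));
		} else {
			/* A file page must never carry an anon mapping. */
			BUG_ON(atomic_read(anon_map_count(page)));
			atomic_inc(file_map_count(page));
		}
	}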
Pasha Tatashin [Fri, 14 Jan 2022 22:06:33 +0000 (14:06 -0800)]
mm: ptep_clear() page table helper
We have the ptep_get_and_clear() and ptep_get_and_clear_full() helpers
to clear a PTE from user page tables, but there is no variant for a
simple clear of a present PTE from user page tables that avoids the
low-level pte_clear(), which can be either native or para-virtualised.
Add a new ptep_clear() that can be used in common code to clear PTEs
from page tables. We will need this call later in order to add a hook
for the page table check.
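A sketch of the likely shape of the helper, modelled on
include/linux/pgtable.h conventions (architectures may override it):

	#ifndef ptep_clear
	static inline void ptep_clear(struct mm_struct *mm,
				      unsigned long addr, pte_t *ptep)
	{
		/* today just a wrapper; the page table check hook is
		 * added here by a later patch */
		pte_clear(mm, addr, ptep);
	}
	#endif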
Pasha Tatashin [Fri, 14 Jan 2022 22:06:29 +0000 (14:06 -0800)]
mm: change page type prior to adding page table entry
Patch series "page table check", v3.
Ensure that some memory corruptions are prevented by checking at the
time of insertion of entries into user page tables that there is no
illegal sharing.
We recently found a problem [1] that has existed in the kernel since
4.14. The problem was caused by a broken page refcount and led to memory
leaking from one process into another. It was accidentally detected by
studying a dump of one process and noticing that one page contained
memory that should not belong to that process.
There are some other page->_refcount related problems that were recently
fixed ([2], [3]) which could potentially also lead to illegal sharing.
In addition to hardening the refcount [4] itself, this work is an
attempt to prevent this class of memory corruption issues.
It uses a simple state machine, independent of the regular MM logic, to
check for illegal sharing at the time pages are inserted into and
removed from page tables.
There are a few places where we first update the entry in the user page
table, and only later change the struct page to indicate that the page
is anonymous or file backed.
In most places, however, we first configure the page metadata and then
insert entries into the page table. The page table check will use the
information from struct page to verify the type of the entry being
inserted.
Change the order in all places to first update struct page and only then
update the page table: the calls that may change the type of the page
(anon or file) come first, followed by the calls that add entries to the
page table.
Shuah Khan [Fri, 14 Jan 2022 22:06:26 +0000 (14:06 -0800)]
docs/vm: add vmalloced-kernel-stacks document
Add a new document to explain Virtually Mapped Kernel Stack Support.
This is a compilation of information from the code and the original
patch series that introduced the Virtually Mapped Kernel Stacks feature.
The document summarizes the feature and provides details on allocation,
freeing, and stack overflow handling, and gives references to available
tests.
mm/oom_kill: allow process_mrelease to run under mmap_lock protection
With exit_mmap() holding mmap_write_lock during the free_pgtables call,
process_mrelease does not need to elevate mm->mm_users in order to
prevent exit_mmap from destroying page tables while __oom_reap_task_mm
is walking the VMA tree. The change prevents process_mrelease from
calling the last mmput, which can lead to waiting for IO completion in
exit_aio.
mm: protect free_pgtables with mmap_lock write lock in exit_mmap
The oom-reaper and the process_mrelease system call should protect
against races with exit_mmap, which can destroy page tables while they
walk the VMA tree. The oom-reaper protects against that race by setting
MMF_OOM_VICTIM and by relying on exit_mmap to set MMF_OOM_SKIP before
taking and releasing mmap_write_lock. process_mrelease has to elevate
mm->mm_users to prevent such a race.
Both oom-reaper and process_mrelease hold mmap_read_lock when walking
the VMA tree. The locking rules and mechanisms could be simpler if
exit_mmap takes mmap_write_lock while executing destructive operations
such as free_pgtables.
Change exit_mmap to hold the mmap_write_lock when calling unlock_range,
free_pgtables and remove_vma. Note also that because oom-reaper checks
VM_LOCKED flag, unlock_range() should not be allowed to race with it.
Before this patch, remove_vma() used to be called with no locks held.
However, with fput() being executed asynchronously, and vm_ops->close
not being allowed to hold mmap_lock (it is called from __split_vma with
mmap_sem held for write), changing that should be fine.
In most cases this lock should be uncontended. Previously, Kirill
reported ~4% regression caused by a similar change [1]. We reran the
same test and although the individual results are quite noisy, the
percentiles show lower regression with 1.6% being the worst case [2].
The change allows oom-reaper and process_mrelease to execute safely
under mmap_read_lock without worries that exit_mmap might destroy page
tables from under them.
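A heavily simplified sketch of the reordered exit_mmap() (oom-victim
handling, accounting and error paths elided; see mm/mmap.c for the real
function):

	void exit_mmap(struct mm_struct *mm)
	{
		struct mmu_gather tlb;
		struct vm_area_struct *vma;

		mmu_notifier_release(mm);

		/* The write lock now covers the destructive teardown,
		 * so oom-reaper and process_mrelease can walk the VMA
		 * tree under mmap_read_lock without racing with it. */
		mmap_write_lock(mm);
		vma = mm->mmap;
		tlb_gather_mmu_fullmm(&tlb, mm);
		unmap_vmas(&tlb, vma, 0, -1);
		free_pgtables(&tlb, vma, FIRST_USER_ADDRESS,
			      USER_PGTABLES_CEILING);
		tlb_finish_mmu(&tlb);
		while (vma)
			vma = remove_vma(vma);
		mm->mmap = NULL;
		mmap_write_unlock(mm);
	}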
Arnd Bergmann [Fri, 14 Jan 2022 22:06:10 +0000 (14:06 -0800)]
mm: move tlb_flush_pending inline helpers to mm_inline.h
linux/mm_types.h should only contain structure definitions, to make it
cheap to include elsewhere.
are particularly large, so it's better to move the helpers using those
into the existing linux/mm_inline.h and only include that where needed.
As a follow-up, we may want to go through all the indirect includes in
mm_types.h and reduce them as much as possible.
Arnd Bergmann [Fri, 14 Jan 2022 22:06:07 +0000 (14:06 -0800)]
mm: move anon_vma declarations to linux/mm_inline.h
The patch to add anonymous vma names causes a build failure in some
configurations:
include/linux/mm_types.h: In function 'is_same_vma_anon_name':
include/linux/mm_types.h:924:37: error: implicit declaration of function 'strcmp' [-Werror=implicit-function-declaration]
924 | return name && vma_name && !strcmp(name, vma_name);
| ^~~~~~
include/linux/mm_types.h:22:1: note: 'strcmp' is defined in header '<string.h>'; did you forget to '#include <string.h>'?
This should not really be part of linux/mm_types.h in the first place,
as that header is meant to only contain structure definitions and needs
a minimal set of indirect includes itself.
While the header clearly includes more than it should at this point,
let's not make it worse by including string.h as well, which would pull
in the expensive (compile-speed wise) fortify-string logic.
Move the new functions into a separate header that only needs to be
included in a couple of locations.