s390: introduce proper type handling call_on_stack() macro
The existing CALL_ON_STACK() macro allows for subtle bugs:
- There is no type checking of the function that is being called. That
is: missing or too many arguments do not cause any compile error or
warning. The same is true if the return type of the called function
changes. This can lead to quite random bugs.
- Sign and zero extension of arguments is missing. Given that the s390
C ABI requires that the caller of a function performs proper sign
and zero extension this can also lead to subtle bugs.
- If arguments to the CALL_ON_STACK() macro contain function calls,
register corruption can happen due to the register asm constructs being
used.
Therefore introduce a new call_on_stack() macro which is supposed to
fix all these problems.
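To illustrate the idea, here is a hedged one-argument sketch (not the
actual arch/s390 implementation; __do_call_on_stack() is a hypothetical
helper that performs the stack switch):
    /* Routing the call through a typed function pointer makes the
     * compiler verify the prototype; the conversion to t1 applies the
     * sign/zero extension the s390 ABI demands of the caller. */
    #define call_on_stack_1(stack, rettype, fn, t1, a1)               \
    ({                                                                \
            rettype (*__fn)(t1) = (fn);                               \
            t1 __a1 = (a1);                                           \
            (rettype)__do_call_on_stack((unsigned long)__fn, (stack), \
                                        (unsigned long)__a1);         \
    })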
Make on_async_stack() a bit more readable, even though, as usual, it
depends on whether one considers "!!!" readable.
At least the new construct to check if the async stack is in use or
not is a bit shorter and generates slightly better code.
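For the uninitiated, the idiom works like this (generic illustration,
not the actual s390 code):
    /* "!!" collapses any nonzero value to 1; a third "!" inverts the
     * result, yielding 1 exactly when the tested expression is zero. */
    #define in_use(x)      (!!(x))   /* 1 if x != 0 */
    #define not_in_use(x)  (!!!(x))  /* 1 if x == 0 */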
do_softirq_own_stack() is always called from task context and
therefore it is not necessary to check if the async stack is
currently used.
Remove the check and directly switch to async stack.
This is the second part of the cleanup for the header file ap.h
to remove the register asm statements. This patch deals with
the inline ap_dqap() function, where an odd register of a register
pair is to be addressed within the assembler code.
[[email protected]: this intentionally breaks compilation with any
clang compilers prior to llvm-project commit 458eac257377 ("[SystemZ]
Support the 'N' code for the odd register in inline-asm.").
This is hopefully the last clang kernel compile breakage caused by
incompatibilities between gcc and clang.]
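A minimal illustration of the 'N' operand modifier (an assumed demo,
not the actual ap_dqap() code; union register_pair is the s390 helper
type for even/odd register pairs):
    static inline void register_pair_demo(void)
    {
            union register_pair rp;

            asm volatile(
                    "       lghi    %0,1\n"   /* even register of the pair */
                    "       lghi    %N0,2\n"  /* odd register, via 'N' */
                    : "=&d" (rp.pair));
            /* rp.even == 1, rp.odd == 2 */
    }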
Sven Schnelle [Wed, 30 Jun 2021 12:02:41 +0000 (14:02 +0200)]
s390: rename PIF_SYSCALL_RESTART to PIF_EXECVE_PGSTE_RESTART
PIF_SYSCALL_RESTART is now only used to restart execve when loading
PGSTE binaries. Rename the flag to reflect that, and avoid people
thinking that this bit has anything to do with generic syscall
restarting.
Sven Schnelle [Wed, 30 Jun 2021 11:50:55 +0000 (13:50 +0200)]
s390: move restart of execve() syscall
On s390, execve might have to be restarted for PGSTE binaries
like kvm. In the past this was done via the PIF_SYSCALL_RESTART
bit. However, with the recent changes, syscalls are now restarted
differently. Now that execve() is the only call that might get
restarted via PIF_SYSCALL_RESTART, move the loop to do_syscall().
This also has the advantage that the restart is no longer visible
to userspace.
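A hedged sketch of the loop shape in do_syscall() (helper names are
hypothetical, not the literal arch/s390/kernel/syscall.c code):
    /* Re-run the syscall while the PGSTE execve restart is pending, so
     * the restart never becomes visible to userspace; gpr2 carries the
     * return value on s390. */
    do {
            regs->gprs[2] = run_syscall(regs);            /* hypothetical */
    } while (test_and_clear_execve_restart(regs));        /* hypothetical */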
Sven Schnelle [Fri, 25 Jun 2021 13:06:06 +0000 (15:06 +0200)]
s390/signal: remove sigreturn on stack
{rt_}sigreturn is now called from the vdso, so we no longer need the
svc on the stack, and therefore no longer need the hack to support that
mechanism on machines with a non-executable stack.
Sven Schnelle [Fri, 25 Jun 2021 13:02:08 +0000 (15:02 +0200)]
s390/signal: switch to using vdso for sigreturn and syscall restart
With generic entry, there's a bug when it comes to syscall restarting in
the presence of signals.
The failing sequence is:
a) a signal is coming in, and no handler is registered, so the lower
part of arch_do_signal_or_restart() in arch/s390/kernel/signal.c
sets PIF_SYSCALL_RESTART.
b) a second signal gets pending while the kernel is still in the exit
loop, and for that one, a handler exists.
c) The first part of arch_do_signal_or_restart() is called. That part
calls handle_signal(), which sets up stack + registers for handling
the signal.
d) __do_syscall() in arch/s390/kernel/syscall.c checks for
PIF_SYSCALL_RESTART right before leaving to userspace. If it is set,
it restarts the syscall. However, the registers are already set up
for handling a signal from c). The syscall is now restarted with the
wrong arguments.
Change the code to:
- use the vdso for syscall restart instead of PIF_SYSCALL_RESTART,
because on s390 we cannot simply rewind and return to userspace: the
system call number might be encoded in the svc instruction.
- for all other syscalls, rewind the PSW and return to userspace, as
sketched below.
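For the non-execve case, the rewind could look roughly like this (a
sketch; __rewind_psw() is an existing s390 helper, the exact call site
is assumed):
    /* Back the PSW up by the instruction length of the svc so the same
     * system call is re-executed on return to userspace. */
    regs->psw.addr = __rewind_psw(regs->psw, regs->int_code >> 16);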
Pavel Begunkov [Thu, 8 Jul 2021 12:37:06 +0000 (13:37 +0100)]
io_uring: mitigate unlikely iopoll lag
We have requests like IORING_OP_FILES_UPDATE that don't go through
->iopoll_list but get completed in place under ->uring_lock, so after
dropping the lock io_iopoll_check() should expect that some CQEs might
have been completed in the meanwhile.
Currently such events won't be accounted in @nr_events, and the loop
will continue to poll even if there are enough CQEs. It shouldn't be a
real problem as it's unlikely to happen, but it's not nice either. Just
return earlier in this case; it should be enough.
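A hedged sketch of the early return (shape assumed, not the literal
io_uring patch):
    /* CQEs completed inline under ->uring_lock are already visible in
     * the CQ ring; if there are enough, stop polling instead of
     * spinning again. */
    if (io_cqring_events(ctx) >= min)
            break;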
Merge tag 'drm-next-2021-07-08-1' of git://anongit.freedesktop.org/drm/drm
Pull drm fixes from Dave Airlie:
"Some fixes for rc1 that came in the past weeks, mainly a bunch of
amdgpu fixes, some i915 and the rest are misc around the place. I'm
sending this a bit early so some more stuff may show up, but I'll
probably take tomorrow off"
* tag 'drm-next-2021-07-08-1' of git://anongit.freedesktop.org/drm/drm: (52 commits)
drm/i915: Drop all references to DRM IRQ midlayer
drm/i915: Use the correct IRQ during resume
drm/i915/display/dg1: Correctly map DPLLs during state readout
drm/i915/display: Do not zero past infoframes.vsc
drm/amdgpu: Conditionally reset SDMA RAS error counts
drm/amdkfd: Maintain svm_bo reference in page->zone_device_data
drm/amdkfd: add invalid pages debug at vram migration
drm/amdkfd: skip migration for pages already in VRAM
drm/amdkfd: skip invalid pages during migrations
drm/amdkfd: classify and map mixed svm range pages in GPU
drm/amdkfd: use hmm range fault to get both domain pfns
drm/amdgpu: get owner ref in validate and map
drm/amdkfd: set owner ref to svm range prefault
drm/amdkfd: add owner ref param to get hmm pages
drm/amdkfd: device pgmap owner at the svm migrate init
drm/amdkfd: inc counter on child ranges with xnack off
drm/amd/display: Extend DMUB diagnostic logging to DCN3.1
drm/amdgpu: Update NV SIMD-per-CU to 2
drm/amdgpu: add new dimgrey cavefish DID
drm/amd/pm: skip PrepareMp1ForUnload message in s0ix
...
Merge tag 'pwm/for-5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/thierry.reding/linux-pwm
Pull pwm updates from Thierry Reding:
"This contains mostly various fixes, cleanups and some conversions to
the atomic API. One noteworthy change is that PWM consumers can now
pass a hint to the PWM core about the PWM usage, enabling PWM
providers to implement various optimizations.
There's also a fair bit of simplification here with the addition of
some device-managed helpers as well as unification between the DT and
ACPI firmware interfaces"
* tag 'pwm/for-5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/thierry.reding/linux-pwm: (50 commits)
pwm: Remove redundant assignment to pointer pwm
pwm: ep93xx: Fix read of uninitialized variable ret
pwm: ep93xx: Prepare clock before using it
pwm: ep93xx: Unfold legacy callbacks into ep93xx_pwm_apply()
pwm: ep93xx: Implement .apply callback
pwm: vt8500: Only unprepare the clock after the pwmchip was removed
pwm: vt8500: Drop if with an always false condition
pwm: tegra: Assert reset only after the PWM was unregistered
pwm: tegra: Don't needlessly enable and disable the clock in .remove()
pwm: tegra: Don't modify HW state in .remove callback
pwm: tegra: Drop an if block with an always false condition
pwm: core: Simplify some devm_*pwm*() functions
pwm: core: Remove unused devm_pwm_put()
pwm: core: Unify fwnode checks in the module
pwm: core: Reuse fwnode_to_pwmchip() in ACPI case
pwm: core: Convert to use fwnode for matching
docs: firmware-guide: ACPI: Add a PWM example
dt-bindings: pwm: pwm-tiecap: Add compatible string for AM64 SoC
dt-bindings: pwm: pwm-tiecap: Convert to json schema
pwm: sprd: Don't check the return code of pwmchip_remove()
...
Merge tag 'clk-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux
Pull more clk updates from Stephen Boyd:
- A handful of fixes for lmk04832 driver
- Migrate the basic clk divider to use determine rate ops
- Fix modpost build for hisilicon hi3559a driver
- Actually set the parent in k210_clk_set_parent()
* tag 'clk-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux:
Revert "clk: divider: Switch from .round_rate to .determine_rate by default"
clk: hisilicon: hi3559a: Drop __init markings everywhere
clk: meson: regmap: switch to determine_rate for the dividers
clk: divider: Switch from .round_rate to .determine_rate by default
clk: divider: Add re-usable determine_rate implementations
clk: k210: Fix k210_clk_set_parent()
clk: lmk04832: Fix spelling mistakes in dev_err messages and comments
clk: lmk04832: fix return value check in lmk04832_probe()
clk: stm32mp1: fix missing spin_lock_init()
PCIe native device hotplug:
- Ignore Link Down/Up caused by DPC (Lukas Wunner)
Power management:
- Leave Apple Thunderbolt controllers on for s2idle or standby
(Konstantin Kharlamov)
Virtualization:
- Work around Huawei Intelligent NIC VF FLR erratum (Chiqijun)
- Clarify error message for unbound IOV devices (Moritz Fischer)
- Add pci_reset_bus_function() Secondary Bus Reset interface (Raphael
Norwitz)
Peer-to-peer DMA:
- Simplify distance calculation (Christoph Hellwig)
- Finish RCU conversion of pdev->p2pdma (Eric Dumazet)
- Rename upstream_bridge_distance() and rework doc (Logan Gunthorpe)
- Collect acs list in stack buffer to avoid sleeping (Logan
Gunthorpe)
- Use correct calc_map_type_and_dist() return type (Logan Gunthorpe)
- Warn if host bridge not in whitelist (Logan Gunthorpe)
- Refactor pci_p2pdma_map_type() (Logan Gunthorpe)
- Avoid pci_get_slot(), which may sleep (Logan Gunthorpe)
Altera PCIe controller driver:
- Add Joyce Ooi as Altera PCIe maintainer (Joyce Ooi)
Broadcom iProc PCIe controller driver:
- Fix multi-MSI base vector number allocation (Sandor Bodo-Merle)
- Support multi-MSI only on uniprocessor kernel (Sandor Bodo-Merle)
Marvell Aardvark PCIe controller driver:
- Fix checking for PIO Non-posted Request (Pali Rohár)
- Implement workaround for the readback value of VEND_ID (Pali Rohár)
Like syscallhdr.sh and syscalltbl.sh, add a simple script to generate
__NR_syscalls, which should not be exported to userspace.
This script is useful to replace arch/mips/kernel/syscalls/syscallnr.sh,
refactor arch/s390/kernel/syscalls/syscalltbl, and eliminate the code
surrounded by #ifdef __KERNEL__ / #endif from exported uapi/asm/unistd_*.h
files.
powerpc/book3s64/mm: update flush_tlb_range to flush page walk cache
flush_tlb_range is special in that we don't specify the page size used for
the translation. Hence when flushing TLB we flush the translation cache
for all possible page sizes. The kernel also uses the same interface when
moving page tables around. Such a move requires us to flush the page walk
cache.
Instead of adding another interface to force page walk cache flush, update
flush_tlb_range to flush page walk cache if the range flushed is more than
the PMD range. A page table move will always involve an invalidate range
more than PMD_SIZE.
Running microbenchmark with mprotect and parallel memory access didn't
show any observable performance impact.
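A sketch of the resulting heuristic (form assumed, not the literal
powerpc code):
    /* A page-table move always invalidates a range of at least
     * PMD_SIZE, so the range length tells us whether the page walk
     * cache needs to be flushed as well. */
    bool flush_pwc = (end - start) >= PMD_SIZE;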
This patchset enables MOVE_PMD/MOVE_PUD support on power. This requires
the platform to support updating higher-level page tables without updating
page table entries. This also needs to invalidate the Page Walk Cache
on architectures that support it.
This patch (of 3):
Architectures like ppc64 support faster mremap only with radix
translation. Hence allow a runtime check for fast mremap support.
mm/mremap: hold the rmap lock in write mode when moving page table entries.
To avoid a race between the rmap walk and mremap, mremap does
take_rmap_locks(). The lock is taken to ensure that the rmap walk doesn't
miss a page table entry due to PTE moves via move_pagetables(). The kernel
further optimizes this lock such that if we are going to find
the newly added vma after the old vma, the rmap lock is not taken. This
is because the rmap walk would find the vmas in the same order, and if we
don't find the page table attached to the older vma, we would find it with
the new vma, which we iterate later.
As explained in commit eb66ae030829 ("mremap: properly flush TLB before
releasing the page") mremap is special in that it doesn't take ownership
of the page. The optimized version for PUD/PMD aligned mremap also
doesn't hold the ptl lock. This can result in stale TLB entries, as shown
below.
This patch updates the rmap locking requirement in mremap to handle the
race condition explained below with optimized mremap:
mm/mremap: use pmd/pud_populate to update page table entries
pmd/pud_populate is the right interface to be used to set the respective
page table entries. Some architectures like ppc64 do assume that
set_pmd/pud_at can only be used to set a hugepage PTE. Since we are not
setting up a hugepage PTE here, use the pmd/pud_populate interface.
selftest/mremap_test: avoid crash with static build
With a large mmap size, we can overlap with the text area, and using
MAP_FIXED results in unmapping that area. Switch to MAP_FIXED_NOREPLACE
and handle the EEXIST error, as sketched below.
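The resulting pattern looks roughly like this (a sketch; hint and size
are placeholders):
    #define _GNU_SOURCE         /* for MAP_FIXED_NOREPLACE */
    #include <errno.h>
    #include <sys/mman.h>

    void *p = mmap(hint, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE,
                   -1, 0);
    if (p == MAP_FAILED && errno == EEXIST) {
            /* The hint overlaps an existing mapping (e.g. the text
             * area); skip it instead of letting MAP_FIXED silently
             * unmap it. */
    }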
Stephen Boyd [Thu, 8 Jul 2021 01:09:38 +0000 (18:09 -0700)]
scripts/decode_stacktrace.sh: indicate 'auto' can be used for base path
Add "auto" to the usage message so that it's a little clearer that you can
pass "auto" as the second argument. When passing "auto" the script tries
to find the base path automatically instead of requiring it be passed on
the commandline. Also use [<variable>] to indicate the variable argument
and that it is optional so that we can differentiate from the literal
"auto" that should be passed.
Stephen Boyd [Thu, 8 Jul 2021 01:09:35 +0000 (18:09 -0700)]
scripts/decode_stacktrace.sh: silence stderr messages from addr2line/nm
Sometimes, if you're using tools that have linked things improperly or
have new features/sections that older tools don't expect, you'll see
warnings printed to stderr. We don't really care about these warnings, so
let's just silence them to clean up the output of this script.
Stephen Boyd [Thu, 8 Jul 2021 01:09:31 +0000 (18:09 -0700)]
scripts/decode_stacktrace.sh: support debuginfod
Now that stacktraces contain the build ID information we can update this
script to use debuginfod-find to locate the debuginfo for the vmlinux and
modules automatically. This can replace the existing code that requires
specifying a path to vmlinux or tries to find the vmlinux and modules
automatically by using the release number. Work it into the script as a
fallback option if the vmlinux isn't specified on the commandline.
Stephen Boyd [Thu, 8 Jul 2021 01:09:27 +0000 (18:09 -0700)]
x86/dumpstack: use %pSb/%pBb for backtrace printing
Let's use the new printk formats to print the stacktrace entries when
printing a backtrace to the kernel logs. This will include any module's
build ID[1] in it so that offline/crash debugging can easily locate the
debuginfo for a module via something like debuginfod[2].
Stephen Boyd [Thu, 8 Jul 2021 01:09:24 +0000 (18:09 -0700)]
arm64: stacktrace: use %pSb for backtrace printing
Let's use the new printk format to print the stacktrace entry when
printing a backtrace to the kernel logs. This will include any module's
build ID[1] in it so that offline/crash debugging can easily locate the
debuginfo for a module via something like debuginfod[2].
Stephen Boyd [Thu, 8 Jul 2021 01:09:20 +0000 (18:09 -0700)]
module: add printk formats to add module build ID to stacktraces
Let's make kernel stacktraces easier to identify by including the build
ID[1] of a module if the stacktrace is printing a symbol from a module.
This makes it simpler for developers to locate a kernel module's full
debuginfo for a particular stacktrace. Combined with
scripts/decode_stacktrace.sh, a developer can download the matching
debuginfo from a debuginfod[2] server and find the exact file and line
number for the functions plus offsets in a stacktrace that match the
module. This is especially useful for pstore crash debugging where the
kernel crashes are recorded in something like console-ramoops and the
recovery kernel/modules are different or the debuginfo doesn't exist on
the device due to space concerns (the debuginfo can be too large for space
limited devices).
Originally, I put this on the %pS format, but that was quickly rejected
given that %pS is used in other places such as ftrace where build IDs
aren't meaningful. There were discussions on the list about putting every
module build ID into the "Modules linked in:" section of the stacktrace
message but that quickly becomes very hard to read once you have more than
three or four modules linked in. It also provides too much information
when we don't expect each module to be traversed in a stacktrace. Having
the build ID for modules that aren't important just makes things messy.
Splitting it to multiple lines for each module quickly explodes the number
of lines printed in an oops too, possibly wrapping the warning off the
console. And finally, trying to stash away each module used in a
callstack to provide the ID of each symbol printed is cumbersome and would
require changes to each architecture to stash away modules and return
their build IDs once unwinding has completed.
Instead, we opt for the simpler approach of introducing new printk formats
'%pS[R]b' for "pointer symbolic backtrace with module build ID" and '%pBb'
for "pointer backtrace with module build ID" and then updating the few
places in the architecture layer where the stacktrace is printed to use
this new format.
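Call sites then look like this (illustrative, with an assumed address
variable):
    printk("%s %pSb\n", loglvl, (void *)addr);  /* symbol+offset [mod buildid] */
    printk("%s %pBb\n", loglvl, (void *)addr);  /* backtrace variant */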
Stephen Boyd [Thu, 8 Jul 2021 01:09:17 +0000 (18:09 -0700)]
dump_stack: add vmlinux build ID to stack traces
Add the running kernel's build ID[1] to the stacktrace information header.
This makes it simpler for developers to locate the vmlinux with full
debuginfo for a particular kernel stacktrace. Combined with
scripts/decode_stacktrace.sh, a developer can download the correct
vmlinux from a debuginfod[2] server and find the exact file and line
number for the functions plus offsets in a stacktrace.
This is especially useful for pstore crash debugging where the kernel
crashes are recorded in the pstore logs and the recovery kernel is
different or the debuginfo doesn't exist on the device due to space
concerns (the data can be large and a security concern). The stacktrace
can be analyzed after the crash by using the build ID to find the matching
vmlinux and understand where in the function something went wrong.
The hex string aa23f7a1231c229de205662d5a9e0d4c580f19a1 is the build ID,
following the kernel version number. Put it all behind a config option,
STACKTRACE_BUILD_ID, so that kernel developers can remove this
information if they decide it is too much.
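For illustration, the header line with the build ID appended looks
something like this (exact layout assumed from the description):
    CPU: 0 PID: 1 Comm: init Not tainted 5.13.0 #1 aa23f7a1231c229de205662d5a9e0d4c580f19a1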
Stephen Boyd [Thu, 8 Jul 2021 01:09:13 +0000 (18:09 -0700)]
buildid: stash away the kernel's build ID on init
Parse the kernel's build ID at initialization so that other code can print
a hex format string representation of the running kernel's build ID. This
will be used in the kdump and dump_stack code so that developers can
easily locate the vmlinux debug symbols for a crash/stacktrace.
Stephen Boyd [Thu, 8 Jul 2021 01:09:06 +0000 (18:09 -0700)]
buildid: only consider GNU notes for build ID parsing
Patch series "Add build ID to stacktraces", v6.
This series adds the kernel's build ID[1] to the stacktrace header printed
in oops messages, warnings, etc. and the build ID for any module that
appears in the stacktrace after the module name. The goal is to make the
stacktrace more self-contained and descriptive by including the relevant
build IDs in the kernel logs when something goes wrong. This can be used
by post processing tools like script/decode_stacktrace.sh and kernel
developers to easily locate the debug info associated with a kernel crash
and line up what line and file things started falling apart at.
To show how this can be used I've included a patch to decode_stacktrace.sh
that downloads the debuginfo from a debuginfod server. This also includes
some patches to make the buildid.c file use more const arguments and
consolidate logic into buildid.c from kdump. These are left to the end as
they were mostly cleanup patches.
Some kernel ELF files have various notes that also happen to have an ELF
note type of '3', which matches NT_GNU_BUILD_ID, but whose note name isn't
"GNU"; such notes trip up the existing logic.
Mike Rapoport [Thu, 8 Jul 2021 01:08:15 +0000 (18:08 -0700)]
secretmem: test: add basic selftest for memfd_secret(2)
The test verifies that a file descriptor created with memfd_secret does
not allow read/write operations, that secret memory mappings respect
RLIMIT_MEMLOCK, and that remote accesses with process_vm_readv() and
ptrace() to the secret memory fail.
Mike Rapoport [Thu, 8 Jul 2021 01:08:07 +0000 (18:08 -0700)]
PM: hibernate: disable when there are active secretmem users
It is unsafe to allow saving of secretmem areas to the hibernation
snapshot, as they would be visible after resume; this essentially
defeats the purpose of secret memory mappings.
Prevent hibernation whenever there are active secret memory users.
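A sketch of the guard (secretmem_active() comes with this series; the
exact hibernate hook is assumed):
    /* Refuse hibernation while secretmem users exist: the snapshot
     * would otherwise contain the supposedly secret pages. */
    if (secretmem_active())
            return false;   /* hibernation not available */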
Mike Rapoport [Thu, 8 Jul 2021 01:08:03 +0000 (18:08 -0700)]
mm: introduce memfd_secret system call to create "secret" memory areas
Introduce "memfd_secret" system call with the ability to create memory
areas visible only in the context of the owning process and not mapped not
only to other processes but in the kernel page tables as well.
The secretmem feature is off by default and the user must explicitly
enable it at the boot time.
Once secretmem is enabled, the user will be able to create a file
descriptor using the memfd_secret() system call. The memory areas created
by mmap() calls from this file descriptor will be unmapped from the kernel
direct map and they will be only mapped in the page table of the processes
that have access to the file descriptor.
Secretmem is designed to provide the following protections:
* Enhanced protection (in conjunction with all the other in-kernel
attack prevention systems) against ROP attacks. Secretmem makes
"simple" ROP insufficient to perform exfiltration, which increases the
required complexity of the attack. Along with other protections like
the kernel stack size limit and address space layout randomization, which
make finding gadgets really hard, the absence of any in-kernel primitive
for accessing secret memory means the one-gadget ROP attack can't work.
Since the only way to access secret memory is to reconstruct the missing
mapping entry, the attacker has to recover the physical page and insert
a PTE pointing to it in the kernel and then retrieve the contents. That
takes at least three gadgets which is a level of difficulty beyond most
standard attacks.
* Prevent cross-process secret userspace memory exposures. Once the
secret memory is allocated, the user can't accidentally pass it into the
kernel to be transmitted somewhere. The secretmem pages cannot be
accessed via the direct map and they are disallowed in GUP.
* Harden against exploited kernel flaws. In order to access secretmem,
a kernel-side attack would need to either walk the page tables and
create new ones, or spawn a new privileged userspace process to perform
secrets exfiltration using ptrace.
The file descriptor based memory has several advantages over the
"traditional" mm interfaces such as mlock(), mprotect(), and madvise().
The file descriptor approach allows explicit and controlled sharing of the
memory areas and allows sealing the operations. Besides, file descriptor
based memory paves the way for VMMs to remove the secret memory range from
the userspace hypervisor process, for instance QEMU. Andy Lutomirski says:
"Getting fd-backed memory into a guest will take some possibly major
work in the kernel, but getting vma-backed memory into a guest without
mapping it in the host user address space seems much, much worse."
memfd_secret() is made a dedicated system call rather than an extension
to memfd_create() because its purpose is to allow the user to create more
secure memory mappings rather than to simply allow file based access to
the memory. Nowadays the cost of a new system call is negligible, while it
is way simpler for userspace to deal with a clear-cut system call than
with a multiplexer or an overloaded syscall. Moreover, the initial
implementation of memfd_secret() is completely distinct from
memfd_create(), so there is not much sense in overloading memfd_create()
to begin with. If a need for code sharing between these implementations
arises, it can easily be achieved without adjusting user-visible APIs.
The secret memory remains accessible in the process context using uaccess
primitives, but it is not exposed to the kernel otherwise; secret memory
areas are removed from the direct map and functions in the
follow_page()/get_user_page() family will refuse to return a page that
belongs to the secret memory area.
If there is ever a use case that requires exposing secretmem to the
kernel, it will be an opt-in request in the system call flags, so that the
user has to decide what data can be exposed to the kernel.
Removing pages from the direct map may cause its fragmentation on
architectures that use large pages to map the physical memory, which
affects system performance. However, the original Kconfig text for
CONFIG_DIRECT_GBPAGES said that gigabyte pages in the direct map "... can
improve the kernel's performance a tiny bit ..." (commit 00d1c5e05736
("x86: add gbpages switches")) and the recent report [1] showed that "...
although 1G mappings are a good default choice, there is no compelling
evidence that it must be the only choice". Hence, it is sufficient to
have secretmem disabled by default with the ability of a system
administrator to enable it at boot time.
Pages in the secretmem regions are unevictable and unmovable to avoid
accidental exposure of the sensitive data via swap or during page
migration.
Since the secretmem mappings are locked in memory they cannot exceed
RLIMIT_MEMLOCK. Since these mappings are already locked independently
from mlock(), an attempt to mlock()/munlock() secretmem range would fail
and mlockall()/munlockall() will ignore secretmem mappings.
However, unlike mlock()ed memory, secretmem currently behaves more like
long-term GUP: secretmem mappings are unmovable mappings directly consumed
by user space. With default limits, there is no excessive use of
secretmem and it poses no real problem in combination with
ZONE_MOVABLE/CMA, but in the future this should be addressed to allow
balanced use of large amounts of secretmem along with ZONE_MOVABLE/CMA.
A page that was a part of the secret memory area is cleared when it is
freed to ensure the data is not exposed to the next user of that page.
The following example demonstrates creation of a secret mapping (error
handling is omitted):
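A minimal sketch (MAP_SIZE and the memfd_secret() syscall wrapper are
assumed):
    fd = memfd_secret(0);
    ftruncate(fd, MAP_SIZE);
    ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
               MAP_SHARED, fd, 0);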
Mike Rapoport [Thu, 8 Jul 2021 01:07:59 +0000 (18:07 -0700)]
set_memory: allow querying whether set_direct_map_*() is actually enabled
On arm64, set_direct_map_*() functions may return 0 without actually
changing the linear map. This behaviour can be controlled using kernel
parameters, so we need a way to determine at runtime whether calls to
set_direct_map_invalid_noflush() and set_direct_map_default_noflush() have
any effect.
Extend the set_memory API with a can_set_direct_map() function that allows
checking whether calling set_direct_map_*() will actually change the page
table, replace several occurrences of open coded checks in arm64 with the
new function and provide a generic stub for architectures that always
modify page tables upon calls to set_direct_map APIs.
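The generic fallback is essentially this (sketch):
    /* For architectures whose set_direct_map_*() calls always modify
     * the page tables, querying is trivially true. */
    static inline bool can_set_direct_map(void)
    {
            return true;
    }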
Mike Rapoport [Thu, 8 Jul 2021 01:07:54 +0000 (18:07 -0700)]
riscv/Kconfig: make direct map manipulation options depend on MMU
The ARCH_HAS_SET_DIRECT_MAP and ARCH_HAS_SET_MEMORY configuration options
have no meaning when CONFIG_MMU is disabled, and there is no point in
enabling them for the nommu case.
Add an explicit dependency on MMU for these options.
Mike Rapoport [Thu, 8 Jul 2021 01:07:50 +0000 (18:07 -0700)]
mmap: make mlock_future_check() global
Patch series "mm: introduce memfd_secret system call to create "secret" memory areas", v20.
This is an implementation of "secret" mappings backed by a file
descriptor.
The file descriptor backing secret memory mappings is created using a
dedicated memfd_secret system call. The desired protection mode for the
memory is configured using the flags parameter of the system call. The mmap()
of the file descriptor created with memfd_secret() will create a "secret"
memory mapping. The pages in that mapping will be marked as not present
in the direct map and will be present only in the page table of the owning
mm.
Although normally Linux userspace mappings are protected from other users,
such secret mappings are useful for environments where a hostile tenant is
trying to trick the kernel into giving them access to other tenants'
mappings.
It's designed to provide the following protections:
* Enhanced protection (in conjunction with all the other in-kernel
attack prevention systems) against ROP attacks. Secretmem makes
"simple" ROP insufficient to perform exfiltration, which increases the
required complexity of the attack. Along with other protections like
the kernel stack size limit and address space layout randomization, which
make finding gadgets really hard, the absence of any in-kernel primitive
for accessing secret memory means the one-gadget ROP attack can't work.
Since the only way to access secret memory is to reconstruct the missing
mapping entry, the attacker has to recover the physical page and insert
a PTE pointing to it in the kernel and then retrieve the contents. That
takes at least three gadgets which is a level of difficulty beyond most
standard attacks.
* Prevent cross-process secret userspace memory exposures. Once the
secret memory is allocated, the user can't accidentally pass it into the
kernel to be transmitted somewhere. The secretmem pages cannot be
accessed via the direct map and they are disallowed in GUP.
* Harden against exploited kernel flaws. In order to access secretmem,
a kernel-side attack would need to either walk the page tables and
create new ones, or spawn a new privileged userspace process to perform
secrets exfiltration using ptrace.
In the future, the secret mappings may be used as a means to protect
guest memory in a virtual machine host.
To demonstrate secret memory usage we've created a userspace library
that does two things: the first is to act as a preloader for openssl,
redirecting all OPENSSL_malloc calls to secret memory so that any secret
keys get automatically protected; the other is to expose the API to users
who need it. We anticipate that a lot of the use cases would be like the
openssl one: many toolkits that deal with secret keys already have special
handling for the memory to try to give them greater protection, so this
would simply be pluggable into the toolkits without any need for user
application modification.
Hiding secret memory mappings behind an anonymous file allows usage of the
page cache for tracking pages allocated for the "secret" mappings as well
as using address_space_operations for e.g. page migration callbacks.
The anonymous file may be also used implicitly, like hugetlb files, to
implement mmap(MAP_SECRET) and use the secret memory areas with "native"
mm ABIs in the future.
Removing pages from the direct map may cause its fragmentation on
architectures that use large pages to map the physical memory, which
affects system performance. However, the original Kconfig text for
CONFIG_DIRECT_GBPAGES said that gigabyte pages in the direct map "... can
improve the kernel's performance a tiny bit ..." (commit 00d1c5e05736
("x86: add gbpages switches")) and the recent report [1] showed that "...
although 1G mappings are a good default choice, there is no compelling
evidence that it must be the only choice". Hence, it is sufficient to
have secretmem disabled by default with the ability of a system
administrator to enable it at boot time.
In addition, there is also a long term goal to improve management of the
direct map.
Oliver Glitta [Thu, 8 Jul 2021 01:07:47 +0000 (18:07 -0700)]
mm/slub: use stackdepot to save stack trace in objects
Many stack traces are similar so there are many similar arrays.
Stackdepot saves each unique stack only once.
Replace field addrs in struct track with depot_stack_handle_t handle. Use
stackdepot to save stack trace.
The benefits are smaller memory overhead and the possibility to aggregate
per-cache statistics in the future using the stackdepot handle instead of
matching stacks manually.
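Saving a trace through stackdepot looks roughly like this (an
illustrative call site, not the slub patch itself):
    #include <linux/stackdepot.h>
    #include <linux/stacktrace.h>

    unsigned long entries[16];
    unsigned int nr;
    depot_stack_handle_t handle;

    /* capture the current stack, skipping 3 internal frames, then
     * deduplicate it in the depot and keep only the small handle */
    nr = stack_trace_save(entries, ARRAY_SIZE(entries), 3);
    handle = stack_depot_save(entries, nr, GFP_NOWAIT);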
ld.lld warns that the '.modinfo' section is not currently handled:
ld.lld: warning: kernel/built-in.a(workqueue.o):(.modinfo) is being placed in '.modinfo'
ld.lld: warning: kernel/built-in.a(printk/printk.o):(.modinfo) is being placed in '.modinfo'
ld.lld: warning: kernel/built-in.a(irq/spurious.o):(.modinfo) is being placed in '.modinfo'
ld.lld: warning: kernel/built-in.a(rcu/update.o):(.modinfo) is being placed in '.modinfo'
The '.modinfo' section was added in commit 898490c010b5 ("moduleparam:
Save information about built-in modules in separate file") to the DISCARDS
macro but Hexagon has never used that macro. The unification of DISCARDS
happened in commit 023bf6f1b8bf ("linker script: unify usage of discard
definition") in 2009, prior to Hexagon being added in 2011.
Switch Hexagon over to the DISCARDS macro so that anything that is
expected to be discarded gets discarded.
hexagon: handle {,SOFT}IRQENTRY_TEXT in linker script
Patch series "hexagon: Fix build error with CONFIG_STACKDEPOT and select CONFIG_ARCH_WANT_LD_ORPHAN_WARN".
This series fixes an error with ARCH=hexagon that was pointed out by the
patch "mm/slub: use stackdepot to save stack trace in objects".
The first patch fixes that error by handling the '.irqentry.text' and
'.softirqentry.text' sections.
The second patch switches Hexagon over to the common DISCARDS macro, which
should have been done when Hexagon was merged into the tree to match
commit 023bf6f1b8bf ("linker script: unify usage of discard definition").
The third patch selects CONFIG_ARCH_WANT_LD_ORPHAN_WARN so that something
like this does not happen again.
This patch (of 3):
Patch "mm/slub: use stackdepot to save stack trace in objects" in -mm
selects CONFIG_STACKDEPOT when CONFIG_STACKTRACE_SUPPORT is selected and
CONFIG_STACKDEPOT requires IRQENTRY_TEXT and SOFTIRQENTRY_TEXT to be
handled after commit 505a0ef15f96 ("kasan: stackdepot: move
filter_irq_stacks() to stackdepot.c") due to the use of the
__{,soft}irqentry_text_{start,end} section symbols. If those sections are
not handled, the build is broken.
$ make ARCH=hexagon CROSS_COMPILE=hexagon-linux- LLVM=1 LLVM_IAS=1 defconfig all
...
ld.lld: error: undefined symbol: __irqentry_text_start
>>> referenced by stackdepot.c
>>> stackdepot.o:(filter_irq_stacks) in archive lib/built-in.a
>>> referenced by stackdepot.c
>>> stackdepot.o:(filter_irq_stacks) in archive lib/built-in.a
ld.lld: error: undefined symbol: __irqentry_text_end
>>> referenced by stackdepot.c
>>> stackdepot.o:(filter_irq_stacks) in archive lib/built-in.a
>>> referenced by stackdepot.c
>>> stackdepot.o:(filter_irq_stacks) in archive lib/built-in.a
ld.lld: error: undefined symbol: __softirqentry_text_start
>>> referenced by stackdepot.c
>>> stackdepot.o:(filter_irq_stacks) in archive lib/built-in.a
>>> referenced by stackdepot.c
>>> stackdepot.o:(filter_irq_stacks) in archive lib/built-in.a
ld.lld: error: undefined symbol: __softirqentry_text_end
>>> referenced by stackdepot.c
>>> stackdepot.o:(filter_irq_stacks) in archive lib/built-in.a
>>> referenced by stackdepot.c
>>> stackdepot.o:(filter_irq_stacks) in archive lib/built-in.a
...
Add these sections to the Hexagon linker script so the build continues to
work. ld.lld's orphan section warning would have caught this prior to the
-mm commit mentioned above:
ld.lld: warning: kernel/built-in.a(softirq.o):(.softirqentry.text) is being placed in '.softirqentry.text'
Zhen Lei [Thu, 8 Jul 2021 01:07:34 +0000 (18:07 -0700)]
lib: fix spelling mistakes in header files
Fix some spelling mistakes in comments found by "codespell":
Hoever ==> However
poiter ==> pointer
representaion ==> representation
uppon ==> upon
independend ==> independent
aquired ==> acquired
mis-match ==> mismatch
scrach ==> scratch
struture ==> structure
Analagous ==> Analogous
interation ==> iteration
And some were discovered manually by Joe Perches and Christoph Lameter:
stroed ==> stored
arch independent ==> an architecture independent
A example structure for ==> Example structure for
We must properly handle errors when we increase the rlimit counter
and the ucounts reference counter. We have to do this with RCU protection
to prevent a possible use-after-free that could occur due to a concurrent
put_cred_rcu().
The following reproducer triggers the problem:
$ cat testcase.sh
case "${STEP:-0}" in
0)
ulimit -Si 1
ulimit -Hi 1
STEP=1 unshare -rU "$0"
killall sleep
;;
1)
for i in 1 2 3 4 5; do unshare -rU sleep 5 & done
;;
esac
with the KASAN report being along the lines of
BUG: KASAN: use-after-free in put_ucounts+0x17/0xa0
Write of size 4 at addr ffff8880045f031c by task swapper/2/0
Memory state around the buggy address:
 ffff8880045f0200: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff8880045f0280: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
>ffff8880045f0300: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                            ^
 ffff8880045f0380: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
 ffff8880045f0400: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
Disabling lock debugging due to kernel taint
NFSv4/pNFS: Don't call _nfs4_pnfs_v3_ds_connect multiple times
After we grab the lock in nfs4_pnfs_ds_connect(), there is no check for
whether or not ds->ds_clp has already been initialised, so we can end up
adding the same transports multiple times.
Fixes: fc821d59209d ("pnfs/NFSv4.1: Add multipath capabilities to pNFS flexfiles servers over NFSv3")
Signed-off-by: Trond Myklebust <[email protected]>
NFSv4/pnfs: Fix layoutget behaviour after invalidation
If the layout gets invalidated, we should wait for any outstanding
layoutget requests for that layout to complete, and we should resend
them only after re-establishing the layout stateid.
If we have multiple outstanding layoutget requests, the current code to
update the layout barrier assumes that the outstanding layout stateids
are updated in order. That's not necessarily the case.
Instead of using the value of lo->plh_outstanding as a guesstimate for
the window of values we need to accept, just wait to update the window
until we're processing the last one. The intention here is just to
ensure that we don't process 2^31 seqid updates without also updating
the barrier.
Fixes: 1bcf34fdac5f ("pNFS/NFSv4: Update the layout barrier when we schedule a layoutreturn")
Signed-off-by: Trond Myklebust <[email protected]>
Dave Wysochanski [Tue, 29 Jun 2021 17:13:57 +0000 (13:13 -0400)]
NFS: Fix fscache read from NFS after cache error
Earlier commits refactored some NFS read code and removed
nfs_readpage_async(), but neglected to properly fix up
nfs_readpage_from_fscache_complete(). The code path is
only hit when something unusual occurs with the cachefiles
backing filesystem, such as an IO error or while a cookie
is being invalidated.
Mark the page with PG_checked if the fscache IO completes in error,
unlock the page, and let the VM decide whether to re-issue based on
PG_uptodate. When the VM reissues the readpage, PG_checked allows us to
skip over fscache and read from the server, as sketched below.
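A sketch of the error path (standard page-flag helpers; the surrounding
completion code is assumed):
    /* On fscache read error, flag the page and let the VM retry; the
     * next readpage sees PG_checked and reads from the server. */
    if (error) {
            SetPageChecked(page);
            unlock_page(page);
    }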
Dave Wysochanski [Tue, 29 Jun 2021 09:11:28 +0000 (05:11 -0400)]
NFS: Ensure nfs_readpage returns promptly when internal error occurs
A previous refactoring of nfs_readpage() might end up calling
wait_on_page_locked_killable() even if readpage_async_filler() failed
with an internal error and pg_error was non-zero (for example, if
nfs_create_request() failed). In the case of an internal error,
skip over wait_on_page_locked_killable() as this is only needed
when the read is sent and an error occurs during completion handling.
Once a transport has been put offline, it can also be removed from the
list of transports. Any tasks that were stuck on this transport will find
the next available active transport and be re-tried. The removed transport
is taken off the xprt_switch list and freed.
NFSv4.1 identify and mark RPC tasks that can move between transports
In preparation for when we can re-try a task on a different transport,
identify and mark such RPC tasks as moveable. Only 4.1+ operations can
be re-tried on a different transport.
sunrpc: provide transport info in the sysfs directory
Allow querying a transport's attributes. Currently the following fields
of the rpc_xprt structure are shown: state, last_used, cong, cwnd,
max_reqs, min_reqs, num_reqs, and the sizes of the binding, sending,
pending, and backlog queues.
Using the sysfs xprt_state attribute, mark a particular transport
offline. It will not be picked during round-robin selection. It's not
allowed to take the main transport (the 1st created, associated with the
rpc_client) offline. A transport can also be brought back online via
sysfs by writing "online", which allows it to be picked again during
round-robin selection.