Steve French [Mon, 5 Jul 2021 20:05:39 +0000 (15:05 -0500)]
SMB3.1.1: Add support for negotiating signing algorithm
Support for faster packet signing (using GMAC instead of CMAC) can
now be negotiated with some newer servers, including Windows.
See MS-SMB2 section 2.2.3.17.
This patch adds support for sending the new negotiate context
with the first of three supported signing algorithms (AES-CMAC)
and decoding the response. A follow-on patch will add support
for sending the other two (including AES-GMAC, which is fastest)
and changing the signing algorithm used based on what was
negotiated.
To allow the client to request GMAC signing, set the module parameter
"enable_negotiate_signing" to 1.
Merge tag 'riscv-for-linus-5.14-mw0' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux
Pull RISC-V updates from Palmer Dabbelt:
"We have a handful of new features for 5.14:
- Support for transparent huge pages.
- Support for generic PCI resources mapping.
- Support for the mem= kernel parameter.
- Support for KFENCE.
- A handful of fixes to avoid W+X mappings in the kernel.
- Support for VMAP_STACK based overflow detection.
- An optimized copy_{to,from}_user"
* tag 'riscv-for-linus-5.14-mw0' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux: (37 commits)
riscv: xip: Fix duplicate included asm/pgtable.h
riscv: Fix PTDUMP output now BPF region moved back to module region
riscv: __asm_copy_to-from_user: Optimize unaligned memory access and pipeline stall
riscv: add VMAP_STACK overflow detection
riscv: ptrace: add argn syntax
riscv: mm: fix build errors caused by mk_pmd()
riscv: Introduce structure that group all variables regarding kernel mapping
riscv: Map the kernel with correct permissions the first time
riscv: Introduce set_kernel_memory helper
riscv: Enable KFENCE for riscv64
RISC-V: Use asm-generic for {in,out}{bwlq}
riscv: add ASID-based tlbflushing methods
riscv: pass the mm_struct to __sbi_tlb_flush_range
riscv: Add mem kernel parameter support
riscv: Simplify xip and !xip kernel address conversion macros
riscv: Remove CONFIG_PHYS_RAM_BASE_FIXED
riscv: Only initialize swiotlb when necessary
riscv: fix typo in init.c
riscv: Cleanup unused functions
riscv: mm: Use better bitmap_zalloc()
...
Merge tag 'powerpc-5.14-2' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Pull powerpc fixes from Michael Ellerman:
"Fix crashes on 64-bit Book3E due to use of Book3S only mtmsrd
instruction.
Fix "scheduling while atomic" warnings at boot due to preempt count
underflow.
Two commits fixing our handling of BPF atomic instructions.
Fix error handling in xive when allocating an IPI.
Fix lockup on kernel exec fault on 603.
Thanks to Bharata B Rao, Cédric Le Goater, Christian Zigotzky,
Christophe Leroy, Guenter Roeck, Jiri Olsa, Naveen N. Rao, Nicholas
Piggin, and Valentin Schneider"
* tag 'powerpc-5.14-2' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
powerpc/preempt: Don't touch the idle task's preempt_count during hotplug
powerpc/64e: Fix system call illegal mtmsrd instruction
powerpc/xive: Fix error handling when allocating an IPI
powerpc/bpf: Reject atomic ops in ppc32 JIT
powerpc/bpf: Fix detecting BPF atomic instructions
powerpc/mm: Fix lockup on kernel exec fault
Merge tag 'for-linus-5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/uml
Pull UML updates from Richard Weinberger:
- Support for optimized routines based on the host CPU
- Support for PCI via virtio
- Various fixes
* tag 'for-linus-5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/uml:
um: remove unneeded semicolon in um_arch.c
um: Remove the repeated declaration
um: fix error return code in winch_tramp()
um: fix error return code in slip_open()
um: Fix stack pointer alignment
um: implement flush_cache_vmap/flush_cache_vunmap
um: add a UML specific futex implementation
um: enable the use of optimized xor routines in UML
um: Add support for host CPU flags and alignment
um: allow not setting extra rpaths in the linux binary
um: virtio/pci: enable suspend/resume
um: add PCI over virtio emulation driver
um: irqs: allow invoking time-travel handler multiple times
um: time-travel/signals: fix ndelay() in interrupt
um: expose time-travel mode to userspace side
um: export signals_enabled directly
um: remove unused smp_sigio_handler() declaration
lib: add iomem emulation (logic_iomem)
um: allow disabling NO_IOMEM
Merge tag 'for-linus-5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/ubifs
Pull UBIFS updates from Richard Weinberger:
- Fix for a race between xattr list and modification
- Various minor fixes (spelling, return codes, ...)
* tag 'for-linus-5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/ubifs:
ubifs: Set/Clear I_LINKABLE under i_lock for whiteout inode
ubifs: Fix spelling mistakes
ubifs: Remove ui_mutex in ubifs_xattr_get and change_xattr
ubifs: Fix races between xattr_{set|get} and listxattr operations
ubifs: fix snprintf() checking
ubifs: journal: Fix error return code in ubifs_jnl_write_inode()
Jin Yao [Thu, 1 Jul 2021 06:42:53 +0000 (14:42 +0800)]
perf tools: Fix pattern matching for same substring in different PMU type
Some different PMU types may have the same substring. For example, on
Icelake server we have PMU types "uncore_imc" and
"uncore_imc_free_running". Both PMU types have the substring
"uncore_imc". But the parser wrongly thinks they are the same PMU type.
We enable an imc event,
perf stat -e uncore_imc/event=0xe3/ -a -- sleep 1
To parse symbols, the metadata records (e.g., PERF_RECORD_COMM) generated
by the kernel are required.
To decide whether to generate the metadata records, the kernel relies on
event_filter_match() to filter out unrelated events.
On a hybrid system, event_filter_match() further checks the CPU mask of
the currently enabled PMU. If an event is collected on a CPU which
doesn't have an enabled PMU, it's treated as an unrelated event.
The "big_small_workload" is created on a big core, but runs on a small
core. The metadata records are filtered out because the user only monitors
the PMU of the small core; the big core PMU is not enabled.
For a hybrid system, a dummy event is required to generate the complete
side-band events.
Kan Liang [Thu, 8 Jul 2021 16:02:49 +0000 (09:02 -0700)]
perf stat: Add Topdown metrics L2 events as default events
The Topdown Microarchitecture Analysis (TMA) Method is a structured
analysis methodology to identify critical performance bottlenecks in
out-of-order processors.
The Topdown metrics L1 event was added as default in 42641d6f4d15e6db
("perf stat: Add Topdown metrics events as default events")
Starting with the Sapphire Rapids server and later platforms, the same dedicated
"metrics" register is extended to support both L1 and L2 events.
Add both L1 and L2 Topdown metrics events as default to enrich the
default measuring information if the new measurement register is
available.
On legacy systems there is no change, in order to avoid extra multiplexing.
The topdown_level indicates the max metrics level for the top-down
statistics. Set it to 2 to display all L1 and L2 Topdown metrics events.
Jiri Olsa [Tue, 6 Jul 2021 15:17:02 +0000 (17:17 +0200)]
libperf: Adopt evlist__set_leader() from tools/perf as perf_evlist__set_leader()
Move the implementation of evlist__set_leader() to a new libperf
perf_evlist__set_leader() function with the same functionality, and make it
an exported libperf API.
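A minimal usage sketch (assuming the usual libperf evlist/evsel helpers;
error handling omitted, attribute values chosen only for illustration):

  #include <linux/perf_event.h>
  #include <perf/evlist.h>
  #include <perf/evsel.h>

  /* Group two hardware events and make the first one the group leader. */
  static void set_leader_example(void)
  {
          struct perf_event_attr attr = {
                  .type   = PERF_TYPE_HARDWARE,
                  .config = PERF_COUNT_HW_CPU_CYCLES,
          };
          struct perf_evlist *evlist = perf_evlist__new();
          struct perf_evsel *evsel = perf_evsel__new(&attr);

          perf_evlist__add(evlist, evsel);

          attr.config = PERF_COUNT_HW_INSTRUCTIONS;
          evsel = perf_evsel__new(&attr);
          perf_evlist__add(evlist, evsel);

          perf_evlist__set_leader(evlist);  /* first event becomes the leader */

          perf_evlist__delete(evlist);
  }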
Merge tag 'ext4_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4
Pull ext4 updates from Ted Ts'o:
"Ext4 regression and bug fixes"
* tag 'ext4_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
ext4: inline jbd2_journal_[un]register_shrinker()
ext4: fix flags validity checking for EXT4_IOC_CHECKPOINT
ext4: fix possible UAF when remounting r/o a mmp-protected file system
ext4: use ext4_grp_locked_error in mb_find_extent
ext4: fix WARN_ON_ONCE(!buffer_uptodate) after an error writing the superblock
Revert "ext4: consolidate checks for resize of bigalloc into ext4_resize_begin"
Merge tag 'ceph-for-5.14-rc1' of git://github.com/ceph/ceph-client
Pull ceph updates from Ilya Dryomov:
"We have new filesystem client metrics for reporting I/O sizes from
Xiubo, two patchsets from Jeff that begin to untangle some heavyweight
blocking locks in the filesystem and a bunch of code cleanups"
* tag 'ceph-for-5.14-rc1' of git://github.com/ceph/ceph-client:
ceph: take reference to req->r_parent at point of assignment
ceph: eliminate ceph_async_iput()
ceph: don't take s_mutex in ceph_flush_snaps
ceph: don't take s_mutex in try_flush_caps
ceph: don't take s_mutex or snap_rwsem in ceph_check_caps
ceph: eliminate session->s_gen_ttl_lock
ceph: allow ceph_put_mds_session to take NULL or ERR_PTR
ceph: clean up locking annotation for ceph_get_snap_realm and __lookup_snap_realm
ceph: add some lockdep assertions around snaprealm handling
ceph: decoding error in ceph_update_snap_realm should return -EIO
ceph: add IO size metrics support
ceph: update and rename __update_latency helper to __update_stdev
ceph: simplify the metrics struct
libceph: fix doc warnings in cls_lock_client.c
libceph: remove unnecessary ret variable in ceph_auth_init()
libceph: fix some spelling mistakes
libceph: kill ceph_none_authorizer::reply_buf
ceph: make ceph_queue_cap_snap static
ceph: make ceph_netfs_read_ops static
ceph: remove bogus checks and WARN_ONs from ceph_set_page_dirty
Merge tag 'nfs-for-5.14-1' of git://git.linux-nfs.org/projects/trondmy/linux-nfs
Pull NFS client updates from Trond Myklebust:
"Highlights include:
Features:
- Multiple patches to add support for fcntl() leases over NFSv4.
- A sysfs interface to display more information about the various
transport connections used by the RPC client
- A sysfs interface to allow a suitably privileged user to offline a
transport that may no longer point to a valid server
- A sysfs interface to allow a suitably privileged user to change the
server IP address used by the RPC client
Stable fixes:
- Two sunrpc fixes for deadlocks involving privileged rpc_wait_queues
Bugfixes:
- SUNRPC: Avoid a KASAN slab-out-of-bounds bug in xdr_set_page_base()
- SUNRPC: prevent port reuse on transports which don't request it.
- NFSv3: Fix memory leak in posix_acl_create()
- NFS: Various fixes to attribute revalidation timeouts
- NFSv4: Fix handling of non-atomic change attribute updates
- NFSv4: If a server is down, don't cause mounts to other servers to
hang as well
- pNFS: Fix an Oops in pnfs_mark_request_commit() when doing O_DIRECT
- NFS: Fix mount failures due to incorrect setting of the
has_sec_mnt_opts filesystem flag
- NFS: Ensure nfs_readpage returns promptly when an internal error
occurs
- NFS: Fix fscache read from NFS after cache error
- pNFS: Various bugfixes around the LAYOUTGET operation"
* tag 'nfs-for-5.14-1' of git://git.linux-nfs.org/projects/trondmy/linux-nfs: (46 commits)
NFSv4/pNFS: Return an error if _nfs4_pnfs_v3_ds_connect can't load NFSv3
NFSv4/pNFS: Don't call _nfs4_pnfs_v3_ds_connect multiple times
NFSv4/pnfs: Clean up layout get on open
NFSv4/pnfs: Fix layoutget behaviour after invalidation
NFSv4/pnfs: Fix the layout barrier update
NFS: Fix fscache read from NFS after cache error
NFS: Ensure nfs_readpage returns promptly when internal error occurs
sunrpc: remove an offlined xprt using sysfs
sunrpc: provide showing transport's state info in the sysfs directory
sunrpc: display xprt's queuelen of assigned tasks via sysfs
sunrpc: provide multipath info in the sysfs directory
NFSv4.1 identify and mark RPC tasks that can move between transports
sunrpc: provide transport info in the sysfs directory
SUNRPC: take a xprt offline using sysfs
sunrpc: add dst_attr attributes to the sysfs xprt directory
SUNRPC for TCP display xprt's source port in sysfs xprt_info
SUNRPC query transport's source port
SUNRPC display xprt's main value in sysfs's xprt_info
SUNRPC mark the first transport
sunrpc: add add sysfs directory per xprt under each xprt_switch
...
Merge tag 'f2fs-for-5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs
Pull f2fs updates from Jaegeuk Kim:
"In this round, we've improved the compression support especially for
Android such as allowing compression for mmap files, replacing the
immutable bit with an internal bit to prohibit data writes explicitly,
and adding a mount option, "compress_cache", to improve random reads.
And, we added "readonly" feature to compact the partition w/
compression enabled, which will be useful for Android RO partitions.
Enhancements:
- support compression for mmap file
- use an f2fs flag instead of IMMUTABLE bit for compression
- support RO feature w/ extent_cache
- fully support swapfile with file pinning
- improve atgc tunability
- add nocompress extensions to unselect files for compression
Bug fixes:
- fix false alarm on iget failure during GC
- fix race condition on global pointers when there are multiple f2fs
instances
- add MODULE_SOFTDEP for initramfs
As usual, we've also cleaned up some places for better code
readability (e.g., sysfs/feature, debugging messages, slab cache
name, and docs)"
* tag 'f2fs-for-5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (32 commits)
f2fs: drop dirty node pages when cp is in error status
f2fs: initialize page->private when using for our internal use
f2fs: compress: add nocompress extensions support
MAINTAINERS: f2fs: update my email address
f2fs: remove false alarm on iget failure during GC
f2fs: enable extent cache for compression files in read-only
f2fs: fix to avoid adding tab before doc section
f2fs: introduce f2fs_casefolded_name slab cache
f2fs: swap: support migrating swapfile in aligned write mode
f2fs: swap: remove dead codes
f2fs: compress: add compress_inode to cache compressed blocks
f2fs: clean up /sys/fs/f2fs/<disk>/features
f2fs: add pin_file in feature list
f2fs: Advertise encrypted casefolding in sysfs
f2fs: Show casefolding support only when supported
f2fs: support RO feature
f2fs: logging neatening
f2fs: introduce FI_COMPRESS_RELEASED instead of using IMMUTABLE bit
f2fs: compress: remove unneeded preallocation
f2fs: atgc: export entries for better tunability via sysfs
...
Pull yet more updates from Andrew Morton:
"54 patches.
Subsystems affected by this patch series: lib, mm (slub, secretmem,
cleanups, init, pagemap, and mremap), and debug"
* emailed patches from Andrew Morton <[email protected]>: (54 commits)
powerpc/mm: enable HAVE_MOVE_PMD support
powerpc/book3s64/mm: update flush_tlb_range to flush page walk cache
mm/mremap: allow arch runtime override
mm/mremap: hold the rmap lock in write mode when moving page table entries.
mm/mremap: use pmd/pud_populate to update page table entries
mm/mremap: don't enable optimized PUD move if page table levels is 2
mm/mremap: convert huge PUD move to separate helper
selftest/mremap_test: avoid crash with static build
selftest/mremap_test: update the test to handle pagesize other than 4K
mm: rename p4d_page_vaddr to p4d_pgtable and make it return pud_t *
mm: rename pud_page_vaddr to pud_pgtable and make it return pmd_t *
kdump: use vmlinux_build_id to simplify
buildid: fix kernel-doc notation
buildid: mark some arguments const
scripts/decode_stacktrace.sh: indicate 'auto' can be used for base path
scripts/decode_stacktrace.sh: silence stderr messages from addr2line/nm
scripts/decode_stacktrace.sh: support debuginfod
x86/dumpstack: use %pSb/%pBb for backtrace printing
arm64: stacktrace: use %pSb for backtrace printing
module: add printk formats to add module build ID to stacktraces
...
Colin reports that Coverity complains about checking for poll being
non-zero after having dereferenced it multiple times. This is a valid
complaint, and actually a leftover from back when this code was based
on the aio poll code.
Martin Fäcknitz [Mon, 5 Jul 2021 00:03:54 +0000 (02:03 +0200)]
MIPS: vdso: Invalid GIC access through VDSO
Accessing raw timers (currently only CLOCK_MONOTONIC_RAW) through VDSO
doesn't return the correct time when using the GIC as clock source.
The address of the GIC mapped page is in this case not calculated
correctly. The GIC mapped page is calculated from the VDSO data by
subtracting PAGE_SIZE:
However, the data pointer is not page aligned for raw clock sources.
This is because the VDSO data for raw clock sources (CS_RAW = 1) is
stored after the VDSO data for coarse clock sources (CS_HRES_COARSE = 0).
Therefore, only the VDSO data for CS_HRES_COARSE is page aligned:
When __arch_get_hw_counter() is called with &vd[CS_RAW], get_gic returns
the wrong address (somewhere inside the GIC mapped page). The GIC counter
values are not returned which results in an invalid time.
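Roughly (this is a sketch of the address math described above, not the
verbatim MIPS vdso code), the fix is to align the vdso data pointer down to
its page before stepping back one page to the GIC user page:

  static inline void __iomem *get_gic(const struct vdso_data *data)
  {
          /* &vd[CS_RAW] is not page aligned, so align down first,
           * then step back one page to reach the GIC mapped page. */
          return (void __iomem *)((unsigned long)data & PAGE_MASK) - PAGE_SIZE;
  }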
Fixes: a7f4df4e21dd ("MIPS: VDSO: Add implementations of gettimeofday() and clock_gettime()")
Signed-off-by: Martin Fäcknitz <[email protected]>
Signed-off-by: Thomas Bogendoerfer <[email protected]>
Marc Zyngier [Tue, 6 Jul 2021 10:38:59 +0000 (11:38 +0100)]
irqchip/mips: Fix RCU violation when using irqdomain lookup on interrupt entry
Since d4a45c68dc81 ("irqdomain: Protect the linear revmap with RCU"),
any irqdomain lookup requires the RCU read lock to be held.
This assumes that the architecture code is structured such that
irq_enter() is called *before* the interrupt is looked up
in the irq domain. However, this isn't the case for MIPS, and a number
of drivers are structured to do it the other way around when handling
an interrupt in their root irqchip (secondary irqchips are OK by
construction).
This results in an RCU splat on a lockdep-enabled kernel when the kernel
takes an interrupt from idle, as reported by Guenter Roeck.
Note that this could have fired previously if any driver had used a
tree-based irqdomain, which always had the RCU requirement.
To solve this, provide a MIPS-specific helper (do_domain_IRQ())
as the counterpart of do_IRQ() that does things in the right order
(and maybe saves some cycles in the process).
Ideally, MIPS would be moved over to using handle_domain_irq(),
but that's much more ambitious.
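A hedged sketch of the intended ordering (the actual helper differs in
details such as the stack-overflow check and how the descriptor is handled):

  void __irq_entry do_domain_IRQ(struct irq_domain *domain, unsigned int hwirq)
  {
          unsigned int virq;

          irq_enter();                             /* RCU is watching from here on */
          virq = irq_find_mapping(domain, hwirq);  /* irqdomain lookup needs RCU */
          if (likely(virq))
                  generic_handle_irq(virq);
          irq_exit();
  }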
S390's init_idle_preempt_count(p, cpu) doesn't actually let us initialize the
preempt_count of the requested CPU's idle task: it unconditionally writes
to the current CPU's. This clearly conflicts with idle_threads_init(),
which intends to initialize *all* the idle tasks, including their
preempt_count (or their CPU's, if the arch uses a per-CPU preempt_count).
Unfortunately, it seems the way s390 does things doesn't let us initialize
every possible CPU's preempt_count early on, as the pages where this
resides are only allocated when a CPU is brought up and are freed when it
is brought down.
Let the arch-specific code set a CPU's preempt_count when its lowcore is
allocated, and turn init_idle_preempt_count() into an empty stub.
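The generic helper then reduces to something like the following sketch (the
arch now sets the preempt_count itself when the CPU's lowcore is allocated):

  /* Sketch of the empty stub described above. */
  static inline void init_idle_preempt_count(struct task_struct *p, int cpu)
  {
  }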
s390: introduce proper type handling call_on_stack() macro
The existing CALL_ON_STACK() macro allows for subtle bugs:
- There is no type checking of the function that is being called. That
is: missing or too many arguments do not cause any compile error or
warning. The same is true if the return type of the called function
changes. This can lead to quite random bugs.
- Sign and zero extension of arguments is missing. Given that the s390
C ABI requires that the caller of a function performs proper sign
and zero extension, this can also lead to subtle bugs.
- If arguments to the CALL_ON_STACK() macro contain function calls,
register corruption can happen due to the register asm constructs being
used.
Therefore introduce a new call_on_stack() macro which is supposed to
fix all these problems.
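The type-checking part of the idea can be illustrated as follows (illustration
only; the real s390 macro additionally performs the stack switch in inline
assembly, which is omitted here):

  /* Routing the call through a properly typed function pointer makes the
   * compiler check argument count, argument types and the return type. */
  #define typed_call(fn, ...)                     \
  ({                                              \
          typeof(&(fn)) __fn = &(fn);             \
          __fn(__VA_ARGS__);                      \
  })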
Make on_async_stack() a bit more readable, even though, as usual, it
depends on whether one considers "!!!" readable or not.
At least the new construct to check if the async stack is in use or
not is a bit shorter and generates slightly better code.
do_softirq_own_stack() is always called from task context and
therefore it is not necessary to check if the async stack is
currently used.
Remove the check and directly switch to async stack.
This is the second part of the cleanup for the header file ap.h
to remove the register asm statements. This patch deals with
the inline ap_dqap() function where within the assembler code
an odd register of a register pair is to be addressed.
[[email protected]: this intentionally breaks compilation with any
clang compilers prior to llvm-project commit 458eac257377 ("[SystemZ]
Support the 'N' code for the odd register in inline-asm.").
This is hopefully the last clang kernel compile breakage caused by
incompatibilities between gcc and clang.]
Sven Schnelle [Wed, 30 Jun 2021 12:02:41 +0000 (14:02 +0200)]
s390: rename PIF_SYSCALL_RESTART to PIF_EXECVE_PGSTE_RESTART
PIF_SYSCALL_RESTART is now only used to restart execve when loading
PGSTE binaries. Rename the flag to reflect that, and avoid people
thinking that this bit has anything to do with generic syscall
restarting.
Sven Schnelle [Wed, 30 Jun 2021 11:50:55 +0000 (13:50 +0200)]
s390: move restart of execve() syscall
On s390, execve might have to be restarted for PGSTE binaries
like kvm. In the past this was done via the PIF_SYSCALL_RESTART
bit. However, with the recent changes, syscalls are now restarted
differently. Now that execve() is the only call that might get
restarted via PIF_SYSCALL_RESTART, move the loop to do_syscall().
This also has the advantage that the restart is no longer visible
to userspace.
Sven Schnelle [Fri, 25 Jun 2021 13:06:06 +0000 (15:06 +0200)]
s390/signal: remove sigreturn on stack
{rt_}sigreturn is now called from the vdso, so we no longer
need the svc on the stack, and therefore no hack to support that
mechanism on machines with non-executable stack.
Sven Schnelle [Fri, 25 Jun 2021 13:02:08 +0000 (15:02 +0200)]
s390/signal: switch to using vdso for sigreturn and syscall restart
With generic entry, there's a bug when it comes to restarting of signals.
The failing sequence is:
a) a signal is coming in, and no handler is registered, so the lower
part of arch_do_signal_or_restart() in arch/s390/kernel/signal.c
sets PIF_SYSCALL_RESTART.
b) a second signal gets pending while the kernel is still in the exit
loop, and for that one, a handler exists.
c) The first part of arch_do_signal_or_restart() is called. That part
calls handle_signal(), which sets up stack + registers for handling
the signal.
d) __do_syscall() in arch/s390/kernel/syscall.c checks for
PIF_SYSCALL_RESTART right before leaving to userspace. If it is set,
it restarts the syscall. However, the registers are already set up
for handling a signal from c). The syscall is now restarted with the
wrong arguments.
Change the code to:
- use vdso for syscall_restart() instead of PIF_SYSCALL_RESTART, because
on s390 we cannot simply rewind and go back to userspace, as the system call
number might be encoded in the svc instruction.
- for all other syscalls we rewind the PSW and return to userspace.
Pavel Begunkov [Thu, 8 Jul 2021 12:37:06 +0000 (13:37 +0100)]
io_uring: mitigate unlikely iopoll lag
We have requests like IORING_OP_FILES_UPDATE that don't go through
->iopoll_list but get completed in place under ->uring_lock, and so
after dropping the lock io_iopoll_check() should expect that some CQEs
might have been completed in the meanwhile.
Currently such completions won't be accounted in @nr_events, and the loop
will continue to poll even if there are already enough CQEs. It shouldn't be a
real problem as it's unlikely to happen, but it's not nice either. Just
return earlier in this case; that should be enough.
Merge tag 'drm-next-2021-07-08-1' of git://anongit.freedesktop.org/drm/drm
Pull drm fixes from Dave Airlie:
"Some fixes for rc1 that came in the past weeks, mainly a bunch of
amdgpu fixes, some i915 and the rest are misc around the place. I'm
sending this a bit early so some more stuff may show up, but I'll
probably take tomorrow off"
* tag 'drm-next-2021-07-08-1' of git://anongit.freedesktop.org/drm/drm: (52 commits)
drm/i915: Drop all references to DRM IRQ midlayer
drm/i915: Use the correct IRQ during resume
drm/i915/display/dg1: Correctly map DPLLs during state readout
drm/i915/display: Do not zero past infoframes.vsc
drm/amdgpu: Conditionally reset SDMA RAS error counts
drm/amdkfd: Maintain svm_bo reference in page->zone_device_data
drm/amdkfd: add invalid pages debug at vram migration
drm/amdkfd: skip migration for pages already in VRAM
drm/amdkfd: skip invalid pages during migrations
drm/amdkfd: classify and map mixed svm range pages in GPU
drm/amdkfd: use hmm range fault to get both domain pfns
drm/amdgpu: get owner ref in validate and map
drm/amdkfd: set owner ref to svm range prefault
drm/amdkfd: add owner ref param to get hmm pages
drm/amdkfd: device pgmap owner at the svm migrate init
drm/amdkfd: inc counter on child ranges with xnack off
drm/amd/display: Extend DMUB diagnostic logging to DCN3.1
drm/amdgpu: Update NV SIMD-per-CU to 2
drm/amdgpu: add new dimgrey cavefish DID
drm/amd/pm: skip PrepareMp1ForUnload message in s0ix
...
Merge tag 'pwm/for-5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/thierry.reding/linux-pwm
Pull pwm updates from Thierry Reding:
"This contains mostly various fixes, cleanups and some conversions to
the atomic API. One noteworthy change is that PWM consumers can now
pass a hint to the PWM core about the PWM usage, enabling PWM
providers to implement various optimizations.
There's also a fair bit of simplification here with the addition of
some device-managed helpers as well as unification between the DT and
ACPI firmware interfaces"
* tag 'pwm/for-5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/thierry.reding/linux-pwm: (50 commits)
pwm: Remove redundant assignment to pointer pwm
pwm: ep93xx: Fix read of uninitialized variable ret
pwm: ep93xx: Prepare clock before using it
pwm: ep93xx: Unfold legacy callbacks into ep93xx_pwm_apply()
pwm: ep93xx: Implement .apply callback
pwm: vt8500: Only unprepare the clock after the pwmchip was removed
pwm: vt8500: Drop if with an always false condition
pwm: tegra: Assert reset only after the PWM was unregistered
pwm: tegra: Don't needlessly enable and disable the clock in .remove()
pwm: tegra: Don't modify HW state in .remove callback
pwm: tegra: Drop an if block with an always false condition
pwm: core: Simplify some devm_*pwm*() functions
pwm: core: Remove unused devm_pwm_put()
pwm: core: Unify fwnode checks in the module
pwm: core: Reuse fwnode_to_pwmchip() in ACPI case
pwm: core: Convert to use fwnode for matching
docs: firmware-guide: ACPI: Add a PWM example
dt-bindings: pwm: pwm-tiecap: Add compatible string for AM64 SoC
dt-bindings: pwm: pwm-tiecap: Convert to json schema
pwm: sprd: Don't check the return code of pwmchip_remove()
...
Merge tag 'clk-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux
Pull more clk updates from Stephen Boyd:
- A handful of fixes for lmk04832 driver
- Migrate the basic clk divider to use determine rate ops
- Fix modpost build for hisilicon hi3559a driver
- Actually set the parent in k210_clk_set_parent()
* tag 'clk-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux:
Revert "clk: divider: Switch from .round_rate to .determine_rate by default"
clk: hisilicon: hi3559a: Drop __init markings everywhere
clk: meson: regmap: switch to determine_rate for the dividers
clk: divider: Switch from .round_rate to .determine_rate by default
clk: divider: Add re-usable determine_rate implementations
clk: k210: Fix k210_clk_set_parent()
clk: lmk04832: Fix spelling mistakes in dev_err messages and comments
clk: lmk04832: fix return value check in lmk04832_probe()
clk: stm32mp1: fix missing spin_lock_init()
PCIe native device hotplug:
- Ignore Link Down/Up caused by DPC (Lukas Wunner)
Power management:
- Leave Apple Thunderbolt controllers on for s2idle or standby
(Konstantin Kharlamov)
Virtualization:
- Work around Huawei Intelligent NIC VF FLR erratum (Chiqijun)
- Clarify error message for unbound IOV devices (Moritz Fischer)
- Add pci_reset_bus_function() Secondary Bus Reset interface (Raphael
Norwitz)
Peer-to-peer DMA:
- Simplify distance calculation (Christoph Hellwig)
- Finish RCU conversion of pdev->p2pdma (Eric Dumazet)
- Rename upstream_bridge_distance() and rework doc (Logan Gunthorpe)
- Collect acs list in stack buffer to avoid sleeping (Logan
Gunthorpe)
- Use correct calc_map_type_and_dist() return type (Logan Gunthorpe)
- Warn if host bridge not in whitelist (Logan Gunthorpe)
- Refactor pci_p2pdma_map_type() (Logan Gunthorpe)
- Avoid pci_get_slot(), which may sleep (Logan Gunthorpe)
Altera PCIe controller driver:
- Add Joyce Ooi as Altera PCIe maintainer (Joyce Ooi)
Broadcom iProc PCIe controller driver:
- Fix multi-MSI base vector number allocation (Sandor Bodo-Merle)
- Support multi-MSI only on uniprocessor kernel (Sandor Bodo-Merle)
Marvell Aardvark PCIe controller driver:
- Fix checking for PIO Non-posted Request (Pali Rohár)
- Implement workaround for the readback value of VEND_ID (Pali Rohár)
Like syscallhdr.sh and syscalltbl.sh, add a simple script to generate
the __NR_syscalls, which should not be exported to userspace.
This script is useful to replace arch/mips/kernel/syscalls/syscallnr.sh,
refactor arch/s390/kernel/syscalls/syscalltbl, and eliminate the code
surrounded by #ifdef __KERNEL__ / #endif from exported uapi/asm/unistd_*.h
files.
powerpc/book3s64/mm: update flush_tlb_range to flush page walk cache
flush_tlb_range is special in that we don't specify the page size used for
the translation. Hence when flushing TLB we flush the translation cache
for all possible page sizes. The kernel also uses the same interface when
moving page tables around. Such a move requires us to flush the page walk
cache.
Instead of adding another interface to force page walk cache flush, update
flush_tlb_range to flush page walk cache if the range flushed is more than
the PMD range. A page table move will always involve an invalidate range
more than PMD_SIZE.
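Sketched as a predicate (a simplification of the described heuristic, not the
actual radix TLB flush code):

  /* A page-table move always invalidates a range larger than PMD_SIZE,
   * so the range size can be used to decide whether the page walk cache
   * must be flushed as well. */
  static inline bool flush_needs_page_walk_cache(unsigned long start,
                                                 unsigned long end)
  {
          return (end - start) > PMD_SIZE;
  }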
Running microbenchmark with mprotect and parallel memory access didn't
show any observable performance impact.
This patchset enables MOVE_PMD/MOVE_PUD support on power. This requires
the platform to support updating higher-level page tables without updating
page table entries. This also needs to invalidate the Page Walk Cache on
architectures supporting the same.
This patch (of 3):
Architectures like ppc64 support faster mremap only with radix
translation. Hence allow a runtime check w.r.t support for fast mremap.
mm/mremap: hold the rmap lock in write mode when moving page table entries.
To avoid a race between rmap walk and mremap, mremap does
take_rmap_locks(). The lock was taken to ensure that the rmap walk doesn't miss
a page table entry due to PTE moves via move_pagetables(). The kernel
does further optimization of this lock such that if we are going to find
the newly added vma after the old vma, the rmap lock is not taken. This
is because the rmap walk would find the vmas in the same order, and if we don't
find the page table attached to the older vma we would find it with the new
vma, which we iterate later.
As explained in commit eb66ae030829 ("mremap: properly flush TLB before
releasing the page") mremap is special in that it doesn't take ownership
of the page. The optimized version for PUD/PMD aligned mremap also
doesn't hold the ptl lock. This can result in stale TLB entries as shown
below.
This patch updates the rmap locking requirement in mremap to handle the race condition
explained below with optimized mremap:
mm/mremap: use pmd/pud_populate to update page table entries
pmd/pud_populate is the right interface to be used to set the respective
page table entries. Some architectures like ppc64 do assume that
set_pmd/pud_at can only be used to set a hugepage PTE. Since we are not
setting up a hugepage PTE here, use the pmd/pud_populate interface.
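In sketch form (a hypothetical helper, just to show which interface is meant):

  /* When re-linking an existing pmd table under a new pud entry during
   * mremap, use pud_populate(), which installs a lower-level table,
   * rather than set_pud_at(), which architectures like ppc64 reserve
   * for hugepage entries. */
  static void attach_pmd_table(struct mm_struct *mm, pud_t *new_pud,
                               pmd_t *pmd_table)
  {
          pud_populate(mm, new_pud, pmd_table);
  }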
selftest/mremap_test: avoid crash with static build
With a large mmap map size, we can overlap with the text area and using
MAP_FIXED results in unmapping that area. Switch to MAP_FIXED_NOREPLACE
and handle the EEXIST error.
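A minimal sketch of that pattern (not the selftest code itself):

  #define _GNU_SOURCE
  #include <errno.h>
  #include <stdio.h>
  #include <sys/mman.h>

  /* Try a hinted anonymous mapping without clobbering whatever is already
   * mapped there (e.g. the program text in a static build).  MAP_FIXED
   * would silently unmap it; MAP_FIXED_NOREPLACE fails with EEXIST
   * instead, which the caller can handle. */
  static void *map_at_hint(void *hint, size_t len)
  {
          void *p = mmap(hint, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE,
                         -1, 0);

          if (p == MAP_FAILED && errno == EEXIST) {
                  fprintf(stderr, "hint %p overlaps an existing mapping\n", hint);
                  return NULL;
          }
          return p;
  }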
Stephen Boyd [Thu, 8 Jul 2021 01:09:38 +0000 (18:09 -0700)]
scripts/decode_stacktrace.sh: indicate 'auto' can be used for base path
Add "auto" to the usage message so that it's a little clearer that you can
pass "auto" as the second argument. When passing "auto" the script tries
to find the base path automatically instead of requiring it be passed on
the commandline. Also use [<variable>] to indicate the variable argument
and that it is optional so that we can differentiate from the literal
"auto" that should be passed.
Stephen Boyd [Thu, 8 Jul 2021 01:09:35 +0000 (18:09 -0700)]
scripts/decode_stacktrace.sh: silence stderr messages from addr2line/nm
Sometimes if you're using tools that have linked things improperly or have
new features/sections that older tools don't expect, you'll see warnings
printed to stderr. We don't really care about these warnings, so let's
just silence these messages to cleanup output of this script.
Stephen Boyd [Thu, 8 Jul 2021 01:09:31 +0000 (18:09 -0700)]
scripts/decode_stacktrace.sh: support debuginfod
Now that stacktraces contain the build ID information we can update this
script to use debuginfod-find to locate the debuginfo for the vmlinux and
modules automatically. This can replace the existing code that requires
specifying a path to vmlinux or tries to find the vmlinux and modules
automatically by using the release number. Work it into the script as a
fallback option if the vmlinux isn't specified on the commandline.
Stephen Boyd [Thu, 8 Jul 2021 01:09:27 +0000 (18:09 -0700)]
x86/dumpstack: use %pSb/%pBb for backtrace printing
Let's use the new printk formats to print the stacktrace entries when
printing a backtrace to the kernel logs. This will include any module's
build ID[1] in it so that offline/crash debugging can easily locate the
debuginfo for a module via something like debuginfod[2].
Stephen Boyd [Thu, 8 Jul 2021 01:09:24 +0000 (18:09 -0700)]
arm64: stacktrace: use %pSb for backtrace printing
Let's use the new printk format to print the stacktrace entry when
printing a backtrace to the kernel logs. This will include any module's
build ID[1] in it so that offline/crash debugging can easily locate the
debuginfo for a module via something like debuginfod[2].
Stephen Boyd [Thu, 8 Jul 2021 01:09:20 +0000 (18:09 -0700)]
module: add printk formats to add module build ID to stacktraces
Let's make kernel stacktraces easier to identify by including the build
ID[1] of a module if the stacktrace is printing a symbol from a module.
This makes it simpler for developers to locate a kernel module's full
debuginfo for a particular stacktrace. Combined with
scripts/decode_stacktrace.sh, a developer can download the matching
debuginfo from a debuginfod[2] server and find the exact file and line
number for the functions plus offsets in a stacktrace that match the
module. This is especially useful for pstore crash debugging where the
kernel crashes are recorded in something like console-ramoops and the
recovery kernel/modules are different or the debuginfo doesn't exist on
the device due to space concerns (the debuginfo can be too large for space
limited devices).
Originally, I put this on the %pS format, but that was quickly rejected
given that %pS is used in other places such as ftrace where build IDs
aren't meaningful. There were some discussions on the list about putting every
module build ID into the "Modules linked in:" section of the stacktrace
message but that quickly becomes very hard to read once you have more than
three or four modules linked in. It also provides too much information
when we don't expect each module to be traversed in a stacktrace. Having
the build ID for modules that aren't important just makes things messy.
Splitting it to multiple lines for each module quickly explodes the number
of lines printed in an oops too, possibly wrapping the warning off the
console. And finally, trying to stash away each module used in a
callstack to provide the ID of each symbol printed is cumbersome and would
require changes to each architecture to stash away modules and return
their build IDs once unwinding has completed.
Instead, we opt for the simpler approach of introducing new printk formats
'%pS[R]b' for "pointer symbolic backtrace with module build ID" and '%pBb'
for "pointer backtrace with module build ID" and then updating the few
places in the architecture layer where the stacktrace is printed to use
this new format.
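For instance (illustrative only, names invented for the example), a single
return address printed with the new specifier:

  #include <linux/printk.h>

  /* With %pSb the symbol is followed by the module's build ID,
   * e.g. "some_func+0x10/0x20 [some_module <buildid>]". */
  static void show_frame(unsigned long addr)
  {
          printk(KERN_INFO " %pSb\n", (void *)addr);
  }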
Stephen Boyd [Thu, 8 Jul 2021 01:09:17 +0000 (18:09 -0700)]
dump_stack: add vmlinux build ID to stack traces
Add the running kernel's build ID[1] to the stacktrace information header.
This makes it simpler for developers to locate the vmlinux with full
debuginfo for a particular kernel stacktrace. Combined with
scripts/decode_stacktrace.sh, a developer can download the correct
vmlinux from a debuginfod[2] server and find the exact file and line
number for the functions plus offsets in a stacktrace.
This is especially useful for pstore crash debugging where the kernel
crashes are recorded in the pstore logs and the recovery kernel is
different or the debuginfo doesn't exist on the device due to space
concerns (the data can be large and a security concern). The stacktrace
can be analyzed after the crash by using the build ID to find the matching
vmlinux and understand where in the function something went wrong.
The hex string aa23f7a1231c229de205662d5a9e0d4c580f19a1 is the build ID,
following the kernel version number. Put it all behind a config option,
STACKTRACE_BUILD_ID, so that kernel developers can remove this
information if they decide it is too much.
Stephen Boyd [Thu, 8 Jul 2021 01:09:13 +0000 (18:09 -0700)]
buildid: stash away kernels build ID on init
Parse the kernel's build ID at initialization so that other code can print
a hex format string representation of the running kernel's build ID. This
will be used in the kdump and dump_stack code so that developers can
easily locate the vmlinux debug symbols for a crash/stacktrace.
Stephen Boyd [Thu, 8 Jul 2021 01:09:06 +0000 (18:09 -0700)]
buildid: only consider GNU notes for build ID parsing
Patch series "Add build ID to stacktraces", v6.
This series adds the kernel's build ID[1] to the stacktrace header printed
in oops messages, warnings, etc. and the build ID for any module that
appears in the stacktrace after the module name. The goal is to make the
stacktrace more self-contained and descriptive by including the relevant
build IDs in the kernel logs when something goes wrong. This can be used
by post-processing tools like scripts/decode_stacktrace.sh and kernel
developers to easily locate the debug info associated with a kernel crash
and line up what line and file things started falling apart at.
To show how this can be used I've included a patch to decode_stacktrace.sh
that downloads the debuginfo from a debuginfod server. This also includes
some patches to make the buildid.c file use more const arguments and
consolidate logic into buildid.c from kdump. These are left to the end as
they were mostly cleanup patches.
Some kernel elf files have various notes that also happen to have an elf
note type of '3', which matches NT_GNU_BUILD_ID but the note name isn't
"GNU". For example, this note trips up the existing logic:
Mike Rapoport [Thu, 8 Jul 2021 01:08:15 +0000 (18:08 -0700)]
secretmem: test: add basic selftest for memfd_secret(2)
The test verifies that file descriptor created with memfd_secret does not
allow read/write operations, that secret memory mappings respect
RLIMIT_MEMLOCK and that remote accesses with process_vm_read() and
ptrace() to the secret memory fail.
Mike Rapoport [Thu, 8 Jul 2021 01:08:07 +0000 (18:08 -0700)]
PM: hibernate: disable when there are active secretmem users
It is unsafe to allow saving of secretmem areas to the hibernation
snapshot as they would be visible after the resume and this essentially
will defeat the purpose of secret memory mappings.
Prevent hibernation whenever there are active secret memory users.
Mike Rapoport [Thu, 8 Jul 2021 01:08:03 +0000 (18:08 -0700)]
mm: introduce memfd_secret system call to create "secret" memory areas
Introduce "memfd_secret" system call with the ability to create memory
areas visible only in the context of the owning process and not mapped not
only to other processes but in the kernel page tables as well.
The secretmem feature is off by default and the user must explicitly
enable it at the boot time.
Once secretmem is enabled, the user will be able to create a file
descriptor using the memfd_secret() system call. The memory areas created
by mmap() calls from this file descriptor will be unmapped from the kernel
direct map and they will only be mapped in the page tables of the processes
that have access to the file descriptor.
Secretmem is designed to provide the following protections:
* Enhanced protection (in conjunction with all the other in-kernel
attack prevention systems) against ROP attacks. Secretmem makes
"simple" ROP insufficient to perform exfiltration, which increases the
required complexity of the attack. Along with other protections like
the kernel stack size limit and address space layout randomization which
make finding gadgets really hard, absence of any in-kernel primitive
for accessing secret memory means the one-gadget ROP attack can't work.
Since the only way to access secret memory is to reconstruct the missing
mapping entry, the attacker has to recover the physical page and insert
a PTE pointing to it in the kernel and then retrieve the contents. That
takes at least three gadgets which is a level of difficulty beyond most
standard attacks.
* Prevent cross-process secret userspace memory exposures. Once the
secret memory is allocated, the user can't accidentally pass it into the
kernel to be transmitted somewhere. The secretmem pages cannot be
accessed via the direct map and they are disallowed in GUP.
* Harden against exploited kernel flaws. In order to access secretmem,
a kernel-side attack would need to either walk the page tables and
create new ones, or spawn a new privileged userspace process to perform
secrets exfiltration using ptrace.
The file descriptor based memory has several advantages over the
"traditional" mm interfaces, such as mlock(), mprotect(), madvise(). File
descriptor approach allows explicit and controlled sharing of the memory
areas, it allows to seal the operations. Besides, file descriptor based
memory paves the way for VMMs to remove the secret memory range from the
userspace hipervisor process, for instance QEMU. Andy Lutomirski says:
"Getting fd-backed memory into a guest will take some possibly major
work in the kernel, but getting vma-backed memory into a guest without
mapping it in the host user address space seems much, much worse."
memfd_secret() is made a dedicated system call rather than an extension to
memfd_create() because its purpose is to allow the user to create more
secure memory mappings rather than to simply allow file based access to
the memory. Nowadays the cost of a new system call is negligible, while it is
way simpler for userspace to deal with a clear-cut system call than with a
multiplexer or an overloaded syscall. Moreover, the initial
implementation of memfd_secret() is completely distinct from
memfd_create(), so there is not much sense in overloading memfd_create() to
begin with. If there is ever a need for code sharing between these
implementations, it can easily be achieved without adjusting user-visible
APIs.
The secret memory remains accessible in the process context using uaccess
primitives, but it is not exposed to the kernel otherwise; secret memory
areas are removed from the direct map and functions in the
follow_page()/get_user_page() family will refuse to return a page that
belongs to the secret memory area.
Once there is a use case that requires exposing secretmem to the
kernel, it will be an opt-in request in the system call flags, so that the user
has to decide what data can be exposed to the kernel.
Removing pages from the direct map may cause fragmentation of the direct map
on architectures that use large pages to map physical memory, which
affects system performance. However, the original Kconfig text for
CONFIG_DIRECT_GBPAGES said that gigabyte pages in the direct map "... can
improve the kernel's performance a tiny bit ..." (commit 00d1c5e05736
("x86: add gbpages switches")) and the recent report [1] showed that "...
although 1G mappings are a good default choice, there is no compelling
evidence that it must be the only choice". Hence, it is sufficient to
have secretmem disabled by default with the ability of a system
administrator to enable it at boot time.
Pages in the secretmem regions are unevictable and unmovable to avoid
accidental exposure of the sensitive data via swap or during page
migration.
Since the secretmem mappings are locked in memory, they cannot exceed
RLIMIT_MEMLOCK. Since these mappings are already locked independently
of mlock(), an attempt to mlock()/munlock() a secretmem range will fail
and mlockall()/munlockall() will ignore secretmem mappings.
However, unlike mlock()ed memory, secretmem currently behaves more like
long-term GUP: secretmem mappings are unmovable mappings directly consumed
by user space. With default limits, there is no excessive use of
secretmem and it poses no real problem in combination with
ZONE_MOVABLE/CMA, but in the future this should be addressed to allow
balanced use of large amounts of secretmem along with ZONE_MOVABLE/CMA.
A page that was a part of the secret memory area is cleared when it is
freed to ensure the data is not exposed to the next user of that page.
The following example demonstrates creation of a secret mapping (error
handling is omitted):
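A minimal sketch of what that usage might look like (a sketch only, not
necessarily the original example; __NR_memfd_secret is assumed to be provided
by the system headers, and error handling is omitted):

  #include <string.h>
  #include <sys/mman.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  int main(void)
  {
          size_t len = 4096;
          int fd = syscall(__NR_memfd_secret, 0);  /* no flags */
          char *p;

          ftruncate(fd, len);                      /* size the secret area */
          p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

          strcpy(p, "top secret");                 /* visible only to this process */

          munmap(p, len);
          close(fd);
          return 0;
  }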
Mike Rapoport [Thu, 8 Jul 2021 01:07:59 +0000 (18:07 -0700)]
set_memory: allow querying whether set_direct_map_*() is actually enabled
On arm64, set_direct_map_*() functions may return 0 without actually
changing the linear map. This behaviour can be controlled using kernel
parameters, so we need a way to determine at runtime whether calls to
set_direct_map_invalid_noflush() and set_direct_map_default_noflush() have
any effect.
Extend set_memory API with can_set_direct_map() function that allows
checking if calling set_direct_map_*() will actually change the page
table, replace several occurrences of open coded checks in arm64 with the
new function and provide a generic stub for architectures that always
modify page tables upon calls to set_direct_map APIs.
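A short sketch of the intended usage pattern (assumed usage, not code taken
from the patch itself):

  #include <linux/mm.h>
  #include <linux/set_memory.h>

  static int maybe_hide_from_direct_map(struct page *page)
  {
          /* On configurations where set_direct_map_*() is a no-op, skip it
           * instead of assuming the page became inaccessible. */
          if (!can_set_direct_map())
                  return 0;
          return set_direct_map_invalid_noflush(page);
  }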
Mike Rapoport [Thu, 8 Jul 2021 01:07:54 +0000 (18:07 -0700)]
riscv/Kconfig: make direct map manipulation options depend on MMU
ARCH_HAS_SET_DIRECT_MAP and ARCH_HAS_SET_MEMORY configuration options have
no meaning when CONFIG_MMU is disabled and there is no point in enabling
them for the nommu case.
Add an explicit dependency on MMU for these options.