mm: shrinker: remove redundant shrinker_rwsem in debugfs operations
debugfs_remove_recursive() will wait for debugfs_file_put() to return, so
the shrinker will not be freed while debugfs operations (such as
shrinker_debugfs_count_show() and shrinker_debugfs_scan_write()) are in
progress. There is therefore no need to hold shrinker_rwsem during debugfs
operations.
Adrian Hunter [Mon, 11 Sep 2023 11:21:14 +0000 (14:21 +0300)]
proc/kcore: do not try to access unaccepted memory
Support for unaccepted memory was added recently; see commit dcdfdd40fa82 ("mm: Add support for unaccepted memory"), whereby a virtual
machine may need to accept memory before it can be used.
Do not try to access unaccepted memory because it can cause the guest to
fail.
For /proc/kcore, which is read-only and does not support mmap, this means a
read of unaccepted memory will return zeros.
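A hedged sketch of the read-side handling this describes, assuming the
pfn_is_unaccepted_memory() helper this series introduces; the iterator
variables are illustrative, not the verbatim fs/proc/kcore.c hunk:

	/* If the PFN is still unaccepted, hand back zeroes rather
	 * than touching the memory, which could kill the guest. */
	if (pfn_is_unaccepted_memory(pfn)) {
		if (iov_iter_zero(tsz, iter) != tsz)
			return -EFAULT;
	}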
Adrian Hunter [Mon, 11 Sep 2023 11:21:13 +0000 (14:21 +0300)]
efi/unaccepted: do not let /proc/vmcore try to access unaccepted memory
Patch series "Do not try to access unaccepted memory", v2.
Support for unaccepted memory was added recently; see commit dcdfdd40fa82 ("mm: Add support for unaccepted memory"), whereby
a virtual machine may need to accept memory before it can be used.
Plug a few gaps where RAM is exposed without checking if it is
unaccepted memory.
This patch (of 2):
Support for unaccepted memory was added recently; see commit dcdfdd40fa82 ("mm: Add support for unaccepted memory"), whereby a virtual
machine may need to accept memory before it can be used.
Do not let /proc/vmcore try to access unaccepted memory because it can
cause the guest to fail.
For /proc/vmcore, which is read-only, this means a read or mmap of
unaccepted memory will return zeros.
Add a regression test for the special case where memcpy() previously
failed to correctly set the origins: if upon memcpy() four aligned
initialized bytes with a zero origin value ended up split between two
aligned four-byte chunks, one of those chunks could have received the zero
origin value even though it contained uninitialized bytes from other
writes.
kmsan: merge test_memcpy_aligned_to_unaligned{,2}() together
Introduce report_reset() that allows checking for more than one KMSAN
report per testcase.
Fold test_memcpy_aligned_to_unaligned2() into
test_memcpy_aligned_to_unaligned(), so that they share the setup phase and
check the behavior of a single memcpy() call.
Clang 18 learned to optimize away memcpy() calls of small uninitialized
scalar values. To ensure that memcpy tests in kmsan_test.c still perform
calls to memcpy() (which KMSAN replaces with __msan_memcpy()), declare a
separate memcpy_noinline() function with volatile parameters, which won't
be optimized.
Also retire DO_NOT_OPTIMIZE(), as memcpy_noinline() is apparently enough.
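A minimal sketch of such a helper, following the description above (the
exact kmsan_test.c definition may differ slightly):

	/* Volatile pointer parameters keep Clang 18 from eliding the
	 * call, so KMSAN still replaces it with __msan_memcpy(). */
	static noinline void *memcpy_noinline(volatile void *dst,
					      const volatile void *src,
					      size_t size)
	{
		return memcpy((void *)dst, (const void *)src, size);
	}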
kmsan_internal_memmove_metadata() is the function that implements copying
metadata every time memcpy()/memmove() is called. Because shadow memory
stores one byte for each byte of kernel memory, copying the shadow is
trivial and can be done by a single memmove() call.
Origins, on the other hand, are stored as 4-byte values corresponding to
every aligned 4 bytes of kernel memory. Therefore, if either the source
or the destination of kmsan_internal_memmove_metadata() is unaligned, the
number of origin slots corresponding to the source or destination may
differ.
Previously, kmsan_internal_memmove_metadata() tried to solve this problem
by copying min(src_slots, dst_slots) as is and cloning the missing slot on
one of the ends, if needed.
This was error-prone even in the simple cases where 4 bytes were copied,
and did not account for situations where the total number of nonzero
origin slots could have increased by more than one after copying.
The new implementation simply copies the shadow byte by byte and updates
the corresponding origin slot if the shadow byte is nonzero. This
approach can handle complex cases with mixed initialized and uninitialized
bytes. Similarly to KMSAN inline instrumentation, later writes to bytes
sharing the same origin slots take precedence.
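An illustrative sketch of the per-byte scheme (the offsets and names are
simplified assumptions, not the exact kmsan code):

	/* Mirror each shadow byte; if it is poisoned, propagate the
	 * origin of its source slot into the aligned slot covering the
	 * destination byte. Later bytes sharing a destination slot
	 * overwrite earlier ones, matching inline instrumentation. */
	for (i = 0; i < len; i++) {
		shadow_dst[i] = shadow_src[i];
		if (shadow_dst[i])
			origin_dst[(dst_off + i) / 4] =
				origin_src[(src_off + i) / 4];
	}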
memfd: drop warning for missing exec-related flags
Commit 434ed3350f57 ("memfd: improve userspace warnings for missing
exec-related flags") attempted to make these warnings more useful (so
they would work as an incentive to get users to switch to specifying
these flags -- as intended by the original MFD_NOEXEC_SEAL patchset).
Unfortunately, it turns out that even INFO-level logging is too extreme
to enable by default and alternative solutions to the spam issue (such
as doing more extreme rate-limiting per-task) are either too ugly or
overkill for something as simple as emitting a log as a developer aid.
Given that the flags are new and there is no harm in not specifying them
(after all, we maintain backwards compatibility), we can just drop the
warnings for now until some time in the future when most programs have
migrated and distributions start using vm.memfd_noexec=1 (where failing
to pass the flag would result in unexpected errors for programs that use
executable memfds).
Ying Sun [Wed, 6 Sep 2023 04:50:12 +0000 (12:50 +0800)]
mm/shmem: remove dead code can not be satisfied by "(CONFIG_SHMEM)&&(!(CONFIG_SHMEM))"
The ".fs_flags" initializer at line 4608 is dead code that can never take
effect, because its guard at line 4607 and the "#ifdef CONFIG_SHMEM" at
line 47 are mutually exclusive. Delete the redundant code.
Yuan Can [Wed, 6 Sep 2023 09:31:57 +0000 (17:31 +0800)]
mm: hugetlb_vmemmap: allow alloc vmemmap pages fallback to other nodes
In vmemmap_remap_free(), a new head vmemmap page is allocated to avoid
breaking a contiguous block of struct page memory; however, the allocation
can always fail when the given node is a movable node. Remove
__GFP_THISNODE to help avoid fragmentation.
mm: remove duplicated vma->vm_flags check when expanding stack
expand_upwards() and expand_downwards() will return -EFAULT if VM_GROWSUP
or VM_GROWSDOWN is not correctly set in vma->vm_flags. However, in the
!CONFIG_STACK_GROWSUP case, expand_stack_locked() first returns -EINVAL if
!(vma->vm_flags & VM_GROWSDOWN) before calling expand_downwards(). To keep
consistency with the CONFIG_STACK_GROWSUP case, remove this check.
The usages of this function are as below:
A: fs/exec.c

	ret = expand_stack_locked(vma, stack_base);
	if (ret)
		ret = -EFAULT;

or

B: mm/memory.c, mm/mmap.c

	if (expand_stack_locked(vma, addr))
		return NULL;
which means the return value will not propagate to other places, so I
believe there is no user-visible effects of this change, and it's
unnecessary to backport to earlier versions.
SeongJae Park [Thu, 7 Sep 2023 02:29:27 +0000 (02:29 +0000)]
mm/damon/core: add more comments for nr_accesses
The comment on struct damon_region about the nr_accesses field is not
sufficient; many people have asked what nr_accesses means. There is a
more detailed explanation of the mechanism in the comment for struct
damon_attrs, but it is also ambiguous, as it doesn't specify the name of
the counter for aggregating the access check results. Make both more
detailed.
SeongJae Park [Thu, 7 Sep 2023 02:29:26 +0000 (02:29 +0000)]
mm/damon/core: fix a comment about damon_set_attrs() call timings
The comment on damon_set_attrs() says it should not be called while the
kdamond is running, but now some DAMON modules, like the sysfs interface
and DAMON_RECLAIM, call it from the after_aggregation() and/or
after_wmarks_check() callbacks for online tuning. Update the comment.
SeongJae Park [Thu, 7 Sep 2023 02:29:25 +0000 (02:29 +0000)]
Docs/admin-guide/mm/damon/usage: link design doc for details of kdamond and context
The explanation of kdamond and context is duplicated in the design and
the usage documents. Replace that in the usage with links to those in
the design document.
SeongJae Park [Thu, 7 Sep 2023 02:29:24 +0000 (02:29 +0000)]
Docs/mm/damon/design: add a section for kdamond and DAMON context
The design document does not explain the concepts of kdamond and the
DAMON context, while the usage document does. Those concept explanations
should be in the design document, and the usage document should link to
them. Add a section for them.
The design document explains the access tracking mechanism and the access
rate counter (nr_accesses), but does not directly mention the name. Add a
sentence to make it clear.
SeongJae Park [Thu, 7 Sep 2023 02:29:21 +0000 (02:29 +0000)]
Docs/admin-guide/mm/damon/usage: move debugfs intro to the bottom of the section
In the DAMON usage introduction section, the introduction of the DAMON
debugfs interface, which is deprecated, sits above the kernel API, which
is actively supported. Move the DAMON debugfs intro to the bottom, so
that readers are less likely to read it first.
zswap: change zswap's default allocator to zsmalloc
Out of zswap's 3 allocators, zsmalloc is the clear superior in terms of
memory utilization, both in theory and as observed in practice, with its
high storage density and low internal fragmentation. zsmalloc is also
more actively developed and maintained, since it is the allocator of
choice for zswap for many users, as well as the only allocator for zram.
A historical objection to selecting zsmalloc as the default allocator for
zswap is its lack of writeback capability. However, this has changed with
the zsmalloc writeback patchset and the subsequent zswap LRU refactor.
With this, there are not many good reasons left to keep zbud, an otherwise
inferior allocator, as the default instead of zsmalloc.
This patch changes the default allocator to zsmalloc. The only exception
is on configurations without an MMU, in which case zbud will remain the
default.
Joel Fernandes [Sun, 3 Sep 2023 15:13:28 +0000 (15:13 +0000)]
selftests: mm: add a test for moving from an offset from start of mapping
It is possible that the aligned address falls on no existing mapping;
however, that does not mean we can just align it down to that. This test
verifies that the "vma->vm_start != addr_to_align" check in
can_align_down() prevents disastrous results if aligning down when the
source and destination are mutually aligned within a PMD but the requested
source/destination addresses are not at the beginning of the respective
mappings containing these addresses.
selftests: mm: add a test for remapping within a range
Move a block of memory within a memory range. Any alignment optimization
on the source address may cause corruption. Verify using kselftest that
it works. I have also verified with tracing that such optimization does
not happen due to this check in can_align_down():
if (!for_stack && vma->vm_start != addr_to_align)
return false;
selftests: mm: add a test for remapping to area immediately after existing mapping
This patch adds support for verifying that we correctly handle the
situation where something is already mapped before the destination of the remap.
Any realignment of destination address and PMD-copy will destroy that
existing mapping. In such cases, we need to avoid doing the optimization.
To test this, we map an area called the preamble before the remap
region. Then we verify after the mremap operation that this region did not get
corrupted.
Putting some prints in the kernel, I verified that we optimize
correctly in different situations:
Optimize when there is alignment and no previous mapping (this is tested
by previous patch).
<prints>
can_align_down(old_vma->vm_start=2900000, old_addr=2900000, mask=-2097152): 0
can_align_down(new_vma->vm_start=2f00000, new_addr=2f00000, mask=-2097152): 0
=== Starting move_page_tables ===
Doing PUD move for 2800000 -> 2e00000 of extent=200000 <-- Optimization
Doing PUD move for 2a00000 -> 3000000 of extent=200000
Doing PUD move for 2c00000 -> 3200000 of extent=200000
</prints>
Don't optimize when there is alignment but there is previous mapping
(this is tested by this patch).
Notice that can_align_down() returns 1 for the destination mapping
as we detected there is something there.
<prints>
can_align_down(old_vma->vm_start=2900000, old_addr=2900000, mask=-2097152): 0
can_align_down(new_vma->vm_start=5700000, new_addr=5700000, mask=-2097152): 1
=== Starting move_page_tables ===
Doing move_ptes for 2900000 -> 5700000 of extent=100000 <-- Unoptimized
Doing PUD move for 2a00000 -> 5800000 of extent=200000
Doing PUD move for 2c00000 -> 5a00000 of extent=200000
</prints>
selftests: mm: add a test for mutually aligned moves > PMD size
This patch adds a test case to check if a PMD-alignment optimization
successfully happens.
I add support to make sure there is some room before the source mapping;
otherwise the optimization to trigger a PMD-aligned move will be disabled,
as the kernel will detect that a mapping before the source exists and such
optimization becomes impossible.
mm/mremap: allow moves within the same VMA for stack moves
For the stack move happening in shift_arg_pages(), the move is happening
within the same VMA which spans the old and new ranges.
In case the aligned address happens to fall within that VMA, allow such
moves and don't abort the mremap alignment optimization.
In the regular non-stack mremap case, we cannot allow any such moves, as
they will end up destroying some part of the mapping (either the source of
the move, or part of the existing mapping). So only allow such moves in
the stack case.
mm/mremap: optimize the start addresses in move_page_tables()
Patch series "Optimize mremap during mutual alignment within PMD", v6.
This patchset optimizes the start addresses in move_page_tables() and
tests the changes. It addresses a warning [1] that occurs due to a
downward, overlapping move on a mutually-aligned offset within a PMD
during exec. By initiating the copy process at the PMD level when such
alignment is present, we can prevent this warning and speed up the copying
process at the same time. Linus Torvalds suggested this idea. Check the
individual patches for more details. [1]
https://lore.kernel.org/all/ZB2GTBD%[email protected]/
This patch (of 7):
Recently, we have seen reports [1] of a warning that triggers due to
move_page_tables() doing a downward and overlapping move on a
mutually-aligned offset within a PMD. By mutual alignment, I mean the
source and destination addresses of the mremap are at the same offset
within a PMD.
This mutual alignment along with the fact that the move is downward is
sufficient to cause a warning related to having an allocated PMD that does
not have PTEs in it.
This warning will only trigger when there is mutual alignment in the move
operation. A solution, as suggested by Linus Torvalds [2], is to initiate
the copy process at the PMD level whenever such alignment is present.
Implementing this approach will not only prevent the warning from being
triggered, but it will also optimize the operation as this method should
enhance the speed of the copy process whenever there's a possibility to
start copying at the PMD level.
Some more points:
a. The optimization can be done only when both the source and
destination of the mremap do not have anything mapped below them up to a
PMD boundary. I add support to detect that.
b. #1 is not a problem for the call to move_page_tables() from exec.c as
nothing is expected to be mapped below the source. However, for
non-overlapping mutually aligned moves as triggered by mremap(2), I
added support for checking such cases.
c. I currently only optimize for PMD moves; in the future I/we can build
on this work and do PUD moves as well if there is a need for this. But I
want to take it one step at a time.
d. We need to be careful about mremap of ranges within the VMA itself.
For this purpose, I added checks to determine if the address after
alignment falls within its VMA itself.
The reason is that the hugetlb pages being released are allocated from
movable nodes, and with hugetlb_optimize_vmemmap enabled, vmemmap pages
need to be allocated from the same node while the hugetlb pages are being
released. With GFP_KERNEL and __GFP_THISNODE set, allocating from a
movable node always fails. Fix this problem by removing __GFP_THISNODE.
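A sketch of the resulting allocation (illustrative, not a verbatim
mm/hugetlb_vmemmap.c hunk):

	/* Without __GFP_THISNODE, the allocation may fall back to
	 * other nodes instead of failing outright; nid remains the
	 * preferred node. */
	gfp_t gfp_mask = GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN;

	walk.reuse_page = alloc_pages_node(nid, gfp_mask, 0);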
mm/vmstat: use this_cpu_try_cmpxchg in mod_{zone,node}_state
Use this_cpu_try_cmpxchg instead of this_cpu_cmpxchg (*ptr, old, new) ==
old in mod_zone_state and mod_node_state. x86 CMPXCHG instruction returns
success in ZF flag, so this change saves a compare after cmpxchg (and
related move instruction in front of cmpxchg).
Also, try_cmpxchg implicitly assigns old *ptr value to "old" when cmpxchg
fails. There is no need to re-read the value in the loop.
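The resulting loop pattern looks roughly like this sketch (illustrative,
not the exact mm/vmstat.c code):

	long old = this_cpu_read(*p);
	long new;

	do {
		new = old + delta;
		/* On failure, try_cmpxchg stores the current value of
		 * *p into "old", so no explicit re-read is needed. */
	} while (!this_cpu_try_cmpxchg(*p, &old, new));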
mm: convert DAX lock/unlock page to lock/unlock folio
The one caller of DAX lock/unlock page already calls compound_head(), so
use page_folio() instead, then use a folio throughout the DAX code to
remove uses of page->mapping and page->index.
Lorenzo Stoakes [Sun, 27 Aug 2023 11:08:48 +0000 (12:08 +0100)]
mm: refactor si_mem_available()
si_mem_available() needlessly places LRU statistics into an array before
retrieving only two of them; simply access those directly.
In addition, refactor the code so that the blocks of code which calculate
the page cache and reclaimable components each resemble one another to
clearly indicate we cap both against wmark_low in the same fashion.
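A sketch of the symmetric capping described above (the statistics calls
mirror mm code, but treat the details as illustrative):

	/* Cap the page cache component against wmark_low... */
	pagecache = global_node_page_state(NR_ACTIVE_FILE) +
		    global_node_page_state(NR_INACTIVE_FILE);
	pagecache -= min(pagecache / 2, wmark_low);
	available += pagecache;

	/* ...and cap the reclaimable component the same way. */
	reclaimable = global_node_page_state_pages(NR_SLAB_RECLAIMABLE_B) +
		      global_node_page_state(NR_KERNEL_MISC_RECLAIMABLE);
	reclaimable -= min(reclaimable / 2, wmark_low);
	available += reclaimable;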
Xueshi Hu [Tue, 29 Aug 2023 03:33:43 +0000 (11:33 +0800)]
mm/hugetlb: fix nodes huge page allocation when there are surplus pages
In set_nr_huge_pages(), the local variable "count" is used to record
persistent_huge_pages(), but when it comes to per-node huge page
allocation, the semantics change to nr_huge_pages. When surplus huge
pages exist and the interface under
/sys/devices/system/node/node*/hugepages is used to change the huge page
pool size, this difference can result in the allocation of an unexpected
number of huge pages.
Mike Kravetz [Tue, 29 Aug 2023 21:37:34 +0000 (14:37 -0700)]
hugetlb: set hugetlb page flag before optimizing vmemmap
Currently, vmemmap optimization of hugetlb pages is performed before the
hugetlb flag (previously hugetlb destructor) is set identifying it as a
hugetlb folio. This means there is a window of time where an ordinary
folio does not have all associated vmemmap present. The core mm only
expects vmemmap to be potentially optimized for hugetlb and device dax.
This can cause problems in code such as memory error handling that may
want to write to tail struct pages.
There is only one call to perform hugetlb vmemmap optimization today. To
fix this issue, simply set the hugetlb flag before that call.
There was a similar issue in the free hugetlb path that was previously
addressed. The two routines that optimize or restore hugetlb vmemmap
should only be passed hugetlb folios/pages. To catch any callers not
following this rule, add VM_WARN_ON calls to the routines. In the hugetlb
free code paths, some calls could be made to restore vmemmap after
clearing the hugetlb flag. This was 'safe' as in these cases vmemmap was
already present and the call was a NOOP. However, for consistency these
calls were eliminated so that we can add the VM_WARN_ON checks.
Kemeng Shi [Fri, 1 Sep 2023 15:51:41 +0000 (23:51 +0800)]
mm/compaction: factor out code to test if we should run compaction for target order
We always do the zone_watermark_ok check and the compaction_suitable check
together to test whether compaction for the target order should be run.
Factor this code out to remove the repetition.
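A sketch of the factored-out check; the helper name and return convention
here are assumptions for illustration:

	/* Combine the watermark and suitability checks that callers
	 * previously open-coded together. */
	static bool should_run_compaction(struct zone *zone,
					  unsigned int order,
					  int highest_zoneidx,
					  unsigned int alloc_flags)
	{
		unsigned long watermark;

		/* If the allocation can already succeed, compaction
		 * for this order is pointless. */
		watermark = wmark_pages(zone,
					alloc_flags & ALLOC_WMARK_MASK);
		if (zone_watermark_ok(zone, order, watermark,
				      highest_zoneidx, alloc_flags))
			return false;

		return compaction_suitable(zone, order, highest_zoneidx);
	}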
Kemeng Shi [Fri, 1 Sep 2023 15:51:40 +0000 (23:51 +0800)]
mm/compaction: improve comment of is_via_compact_memory
We do proactive compaction with order == -1 via
1. /proc/sys/vm/compact_memory
2. /sys/devices/system/node/nodex/compact
3. /proc/sys/vm/compaction_proactiveness
Add the missed situations in which order == -1 to the comment.
Kemeng Shi [Fri, 1 Sep 2023 15:51:39 +0000 (23:51 +0800)]
mm/compaction: remove repeat compact_blockskip_flush check in reset_isolation_suitable
We already have the compact_blockskip_flush check in
__reset_isolation_suitable(), so just remove the repeated
compact_blockskip_flush check done before calling
__reset_isolation_suitable() in reset_isolation_suitable().
Kemeng Shi [Fri, 1 Sep 2023 15:51:38 +0000 (23:51 +0800)]
mm/compaction: correctly return failure with bogus compound_order in strict mode
In strict mode, we should return 0 if there is any hole in the pageblock.
If we successfully isolate pages at the beginning of the pageblock and
then hit a bogus compound_order outside the pageblock in the next page, we
will abort the search loop with blockpfn > end_pfn. Although we limit
blockpfn to end_pfn, we treat this as a successful isolation in strict
mode, since blockpfn is not < end_pfn, and return the partially isolated
pages. isolate_freepages_range() may then succeed unexpectedly with a
hole in the isolated range.
Kemeng Shi [Fri, 1 Sep 2023 15:51:37 +0000 (23:51 +0800)]
mm/compaction: call list_is_{first}/{last} more intuitively in move_freelist_{head}/{tail}
We use move_freelist_head() after list_for_each_entry_reverse() to skip
recent pages, and there is no need to do an actual move if all freepages
were searched in list_for_each_entry_reverse(), i.e. the freepage points
to the first page in the freelist. It is more intuitive to call
list_is_first() with the list entry as the first argument and the list
head as the second argument to check whether the entry is the first one,
instead of calling list_is_last() with the entry and head passed in
reverse.
Similarly, calling list_is_last() in move_freelist_tail() is more
intuitive.
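For reference, the resulting helper reads roughly like this sketch (based
on mm/compaction.c after the series; details may differ):

	static void
	move_freelist_head(struct list_head *freelist, struct page *freepage)
	{
		LIST_HEAD(sublist);

		/* No actual move is needed when the reverse walk
		 * covered the whole list, i.e. freepage is already the
		 * first entry. */
		if (!list_is_first(&freepage->buddy_list, freelist)) {
			list_cut_before(&sublist, freelist,
					&freepage->buddy_list);
			list_splice_tail(&sublist, freelist);
		}
	}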
Kemeng Shi [Fri, 1 Sep 2023 15:51:36 +0000 (23:51 +0800)]
mm/compaction: use correct list in move_freelist_{head}/{tail}
Patch series "Fixes and cleanups to compaction", v3.
This is a series of fixes and cleanups for compaction.
Patches 1-2 fix and clean up freepage list operations.
Patches 3-4 fix and clean up isolation of freepages.
Patch 7 factors out the code that checks whether compaction is needed for
the allocation order.
More details can be found in the respective patches.
This patch (of 6):
Freepages are chained via buddy_list on the freelist head. Use buddy_list
instead of lru to correct the list operation.
Linus Torvalds [Sun, 1 Oct 2023 20:48:46 +0000 (13:48 -0700)]
Merge tag 'kbuild-fixes-v6.6-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild
Pull Kbuild fixes from Masahiro Yamada:
- Fix the module compression with xz so the in-kernel decompressor
works
- Document a kconfig idiom to express an optional dependency between
modules
- Make modpost, when W=1 is given, detect broken drivers that reference
.exit.* sections
- Remove unused code
* tag 'kbuild-fixes-v6.6-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
kbuild: remove stale code for 'source' symlink in packaging scripts
modpost: Don't let "driver"s reference .exit.*
vmlinux.lds.h: remove unused CPU_KEEP and CPU_DISCARD macros
modpost: add missing else to the "of" check
Documentation: kbuild: explain handling optional dependencies
kbuild: Use CRC32 and a 1MiB dictionary for XZ compressed modules
Linus Torvalds [Sun, 1 Oct 2023 20:33:25 +0000 (13:33 -0700)]
Merge tag 'mm-hotfixes-stable-2023-10-01-08-34' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull misc fixes from Andrew Morton:
"Fourteen hotfixes, eleven of which are cc:stable. The remainder
pertain to issues which were introduced after 6.5"
* tag 'mm-hotfixes-stable-2023-10-01-08-34' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
Crash: add lock to serialize crash hotplug handling
selftests/mm: fix awk usage in charge_reserved_hugetlb.sh and hugetlb_reparenting_test.sh that may cause error
mm: mempolicy: keep VMA walk if both MPOL_MF_STRICT and MPOL_MF_MOVE are specified
mm/damon/vaddr-test: fix memory leak in damon_do_test_apply_three_regions()
mm, memcg: reconsider kmem.limit_in_bytes deprecation
mm: zswap: fix potential memory corruption on duplicate store
arm64: hugetlb: fix set_huge_pte_at() to work with all swap entries
mm: hugetlb: add huge page size param to set_huge_pte_at()
maple_tree: add MAS_UNDERFLOW and MAS_OVERFLOW states
maple_tree: add mas_is_active() to detect in-tree walks
nilfs2: fix potential use after free in nilfs_gccache_submit_read_data()
mm: abstract moving to the next PFN
mm: report success more often from filemap_map_folio_range()
fs: binfmt_elf_efpic: fix personality for ELF-FDPIC
Linus Torvalds [Sun, 1 Oct 2023 19:50:04 +0000 (12:50 -0700)]
Merge tag 'char-misc-6.6-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc
Pull misc driver fix from Greg KH:
"Here is a single, much requested, fix for a set of misc drivers to
resolve a much reported regression in the -rc series that has also
propagated back to the stable releases. Sorry for the delay, lots of
conference travel for a few weeks put me very far behind in patch
wrangling.
It has been reported by many to resolve the reported problem, and has
been in linux-next with no reported issues"
* tag 'char-misc-6.6-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc:
misc: rtsx: Fix some platforms can not boot and move the l1ss judgment to probe
Linus Torvalds [Sun, 1 Oct 2023 19:44:45 +0000 (12:44 -0700)]
Merge tag 'tty-6.6-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty
Pull tty / serial driver fixes from Greg KH:
"Here are two tty/serial driver fixes for 6.6-rc4 that resolve some
reported regressions:
- revert a n_gsm change that ended up causing problems
- 8250_port fix for irq data
both have been in linux-next for over a week with no reported
problems"
* tag 'tty-6.6-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty:
Revert "tty: n_gsm: fix UAF in gsm_cleanup_mux"
serial: 8250_port: Check IRQ data before use
Linus Torvalds [Sun, 1 Oct 2023 16:50:58 +0000 (09:50 -0700)]
Merge tag 'x86-urgent-2023-10-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Ingo Molnar:
"Misc fixes: a kerneldoc build warning fix, add SRSO mitigation for
AMD-derived Hygon processors, and fix a SGX kernel crash in the page
fault handler that can trigger when ksgxd races to reclaim the SECS
special page, by making the SECS page unswappable"
* tag 'x86-urgent-2023-10-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/sgx: Resolves SECS reclaim vs. page fault for EAUG race
x86/srso: Add SRSO mitigation for Hygon processors
x86/kgdb: Fix a kerneldoc warning when build with W=1
Linus Torvalds [Sun, 1 Oct 2023 16:41:58 +0000 (09:41 -0700)]
Merge tag 'timers-urgent-2023-10-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull timer fix from Ingo Molnar:
"Fix a spurious kernel warning during CPU hotplug events that may
trigger when timer/hrtimer softirqs are pending, which are otherwise
hotplug-safe and don't merit a warning"
* tag 'timers-urgent-2023-10-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
timers: Tag (hr)timer softirq as hotplug safe
Linus Torvalds [Sun, 1 Oct 2023 16:38:05 +0000 (09:38 -0700)]
Merge tag 'sched-urgent-2023-10-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler fix from Ingo Molnar:
"Fix a RT tasks related lockup/live-lock during CPU offlining"
* tag 'sched-urgent-2023-10-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
sched/rt: Fix live lock between select_fallback_rq() and RT push
Linus Torvalds [Sun, 1 Oct 2023 16:34:53 +0000 (09:34 -0700)]
Merge tag 'perf-urgent-2023-10-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull perf event fixes from Ingo Molnar:
"Misc fixes: work around an AMD microcode bug on certain models, and
fix kexec kernel PMI handlers on AMD systems that get loaded on older
kernels that have an unexpected register state"
* tag 'perf-urgent-2023-10-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf/x86/amd: Do not WARN() on every IRQ
perf/x86/amd/core: Fix overflow reset on hotplug
Drivers must not reference functions marked with __exit as these likely
are not available when the code is built-in.
There are a few creative offenders, uncovered for example in ARCH=amd64
allmodconfig builds. So only trigger the section mismatch warning for
W=1 builds.
The dual rule that drivers must not reference .init.* is implemented
since commit 0db252452378 ("modpost: don't allow *driver to reference
.init.*") which however missed that .exit.* should be handled in the
same way.
Thanks to Masahiro Yamada and Arnd Bergmann who gave valuable hints to
find this improvement.
Without this 'else' statement, a "usb" name goes into two handlers:
the first/previous 'if' statement _AND_ the for-loop over 'devtable',
but the latter is useless as it has no 'usb' device_id entry anyway.
Tested with allmodconfig before/after patch; no changes to *.mod.c:
git checkout v6.6-rc3
make -j$(nproc) allmodconfig
make -j$(nproc) olddefconfig
make -j$(nproc)
find . -name '*.mod.c' | cpio -pd /tmp/before
# apply patch
make -j$(nproc)
find . -name '*.mod.c' | cpio -pd /tmp/after
diff -r /tmp/before/ /tmp/after/
# no difference
Fixes: acbef7b76629 ("modpost: fix module autoloading for OF devices with generic compatible property")
Signed-off-by: Mauricio Faria de Oliveira <[email protected]>
Signed-off-by: Masahiro Yamada <[email protected]>
Linus Torvalds [Sun, 1 Oct 2023 01:41:37 +0000 (18:41 -0700)]
Merge tag 'soc-fixes-6.6' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc
Pull ARM SoC fixes from Arnd Bergmann:
"These are the latest bug fixes that have come up in the soc tree. Most
of these are fairly minor. Most notably, the majority of changes this
time are not for dts files as usual.
- Updates to the addresses of the broadcom and aspeed entries in the
MAINTAINERS file.
- Defconfig updates to address a regression on samsung and a build
warning from an unknown Kconfig symbol
- Build fixes for the StrongARM and Uniphier platforms
- Code fixes for SCMI and FF-A firmware drivers, both of which had a
simple bug that resulted in invalid data, and a lesser fix for the
optee firmware driver
- Multiple fixes for the recently added loongson/loongarch "guts" soc
driver
- Devicetree fixes for RISC-V on the startfive platform, addressing
issues with NOR flash, usb and uart.
- Multiple fixes for NXP i.MX8/i.MX9 dts files, fixing problems with
clock, gpio, hdmi settings and the Makefile
- Bug fixes for i.MX firmware code and the OCOTP soc driver
- Multiple fixes for the TI sysc bus driver
- Minor dts updates for TI omap dts files, to address boot time
warnings and errors"
* tag 'soc-fixes-6.6' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (35 commits)
MAINTAINERS: Fix Florian Fainelli's email address
arm64: defconfig: enable syscon-poweroff driver
ARM: locomo: fix locomolcd_power declaration
soc: loongson: loongson2_guts: Remove unneeded semicolon
soc: loongson: loongson2_guts: Convert to devm_platform_ioremap_resource()
soc: loongson: loongson_pm2: Populate children syscon nodes
dt-bindings: soc: loongson,ls2k-pmc: Allow syscon-reboot/syscon-poweroff as child
soc: loongson: loongson_pm2: Drop useless of_device_id compatible
dt-bindings: soc: loongson,ls2k-pmc: Use fallbacks for ls2k-pmc compatible
soc: loongson: loongson_pm2: Add dependency for INPUT
arm64: defconfig: remove CONFIG_COMMON_CLK_NPCM8XX=y
ARM: uniphier: fix cache kernel-doc warnings
MAINTAINERS: aspeed: Update Andrew's email address
MAINTAINERS: aspeed: Update git tree URL
firmware: arm_ffa: Don't set the memory region attributes for MEM_LEND
arm64: dts: imx: Add imx8mm-prt8mm.dtb to build
arm64: dts: imx8mm-evk: Fix hdmi@3d node
soc: imx8m: Enable OCOTP clock for imx8mm before reading registers
arm64: dts: imx8mp-beacon-kit: Fix audio_pll2 clock
arm64: dts: imx8mp: Fix SDMA2/3 clocks
...
Linus Torvalds [Sun, 1 Oct 2023 01:19:02 +0000 (18:19 -0700)]
Merge tag 'trace-v6.6-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace
Pull tracing fixes from Steven Rostedt:
- Make sure 32-bit applications using user events have aligned access
when running on a 64-bit kernel.
- Add cond_resched() in the loop that handles converting enums in the
print_fmt string in trace events.
- Fix premature wake ups of polling processes in the tracing ring
buffer. When a task polls waiting for a percentage of the ring buffer
to be filled, the writer still will wake it up at every event. Add
the polling's percentage to the "shortest_full" list to tell the
writer when to wake it up.
- For eventfs dir lookups on dynamic events, an event system's only
event could be removed, leaving its dentry with no children. This is
totally legitimate. But in eventfs_release() it must not access the
children array, as it is only allocated when the dentry has children.
* tag 'trace-v6.6-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
eventfs: Test for dentries array allocated in eventfs_release()
tracing/user_events: Align set_bit() address for all archs
tracing: relax trace_event_eval_update() execution with cond_resched()
ring-buffer: Update "shortest_full" in polling
eventfs: Test for dentries array allocated in eventfs_release()
The dcache_dir_open_wrapper() could be called when a dynamic event is
being deleted, leaving a dentry with no children. In this case the
dlist->dentries array will never be allocated. This needs to be checked
for in eventfs_release(), otherwise it will trigger a NULL pointer
dereference.
tracing/user_events: Align set_bit() address for all archs
All architectures should use a long aligned address passed to set_bit().
User processes can pass either a 32-bit or 64-bit sized value to be
updated when tracing is enabled when on a 64-bit kernel. Both cases are
ensured to be naturally aligned, however, that is not enough. The
address must be long aligned without affecting checks on the value
within the user process which require different adjustments for the bit
for little and big endian CPUs.
Add a compat flag to user_event_enabler that indicates when a 32-bit
value is being used on a 64-bit kernel. Long align addresses and correct
the bit to be used by set_bit() to account for this alignment. Ensure
compat flags are copied during forks and used during deletion clears.
tracing: relax trace_event_eval_update() execution with cond_resched()
When the kernel is compiled without preemption, eval_map_work_func()
(which calls trace_event_eval_update()) will not be preempted until it
completes. This can actually cause a problem, since if another
CPU calls stop_machine(), the call will have to wait for the
eval_map_work_func() function to finish executing in the workqueue
before being able to be scheduled. This problem was observed on an SMP
system at boot time, when the CPU calling the initcalls executed
clocksource_done_booting(), which in the end calls stop_machine(). We
observed a 1 second delay because one CPU was executing
eval_map_work_func() and was not preempted by the stop_machine() task.
Adding a call to cond_resched() in trace_event_eval_update() allows
other tasks to be executed and thus continue working asynchronously
like before without blocking any pending task at boot time.
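The shape of the change, as a hedged sketch (the loop body is elided and
illustrative; only the added call matters):

	list_for_each_entry(call, &ftrace_events, list) {
		/* ... update the enums in this event's print_fmt ... */

		/* Yield the CPU so stop_machine() and other pending
		 * tasks are not stalled for the whole walk on
		 * !PREEMPT kernels. */
		cond_resched();
	}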
It was discovered that the ring buffer polling was incorrectly stating
that read would not block, but that's because polling did not take into
account that reads will block if the "buffer-percent" was set. Instead,
the ring buffer polling would say reads would not block if there was any
data in the ring buffer. This was incorrect behavior from a user space
point of view. This was fixed by commit 42fb0a1e84ff by having the polling
code check if the ring buffer had more data than what the user specified
"buffer percent" had.
The problem now is that the polling code did not register itself to the
writer that it wanted to wait for a specific "full" value of the ring
buffer. The result was that the writer would wake the polling waiter
whenever there was a new event. The polling waiter would then wake up, see
that there's not enough data in the ring buffer to notify user space and
then go back to sleep. The next event would wake it up again.
Before the polling fix was added, the code would wake up around 100 times
for a hackbench 30 benchmark. After the "fix", due to the writer
constantly waking it, it would wake up over 11,000 times! It would never leave
the kernel, so the user space behavior was still "correct", but this
definitely is not the desired effect.
To fix this, have the polling code add what it's waiting for to the
"shortest_full" variable, to tell the writer not to wake it up if the
buffer is not as full as it expects to be.
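A sketch of that registration on the poll path (simplified; not the
verbatim ring_buffer.c hunk):

	/* Record the smallest "full" threshold any poller is waiting
	 * for, so the writer skips wakeups until the buffer is at
	 * least that full. */
	if (!cpu_buffer->shortest_full ||
	    cpu_buffer->shortest_full > full)
		cpu_buffer->shortest_full = full;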
Note, after this fix, it appears that the waiter is now woken up around 2x
as often as before (~200 times). This is a tremendous improvement over the
11,000 times, but I will need to spend some time to see why polling is
more aggressive in its wakeups than the read blocking code.
Merge tag 'dma-mapping-6.6-2023-09-30' of git://git.infradead.org/users/hch/dma-mapping
Pull dma-mapping fixes from Christoph Hellwig:
- fix the nareas calculation in swiotlb initialization (Ross Lagerwall)
- fix the check whether a device has used swiotlb (Petr Tesarik)
* tag 'dma-mapping-6.6-2023-09-30' of git://git.infradead.org/users/hch/dma-mapping:
swiotlb: fix the check whether a device has used software IO TLB
swiotlb: use the calculated number of areas
Merge tag 'iomap-6.6-fixes-4' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux
Pull iomap fixes from Darrick Wong:
- Handle a race between writing and shrinking block devices by
returning EIO
- Fix a typo in a comment
* tag 'iomap-6.6-fixes-4' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux:
iomap: Spelling s/preceeding/preceding/g
iomap: add a workaround for racy i_size updates on block devices
Merge tag 'acpi-6.6-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull ACPI fix from Rafael Wysocki:
"Fix a possible NULL pointer dereference in the error path of
acpi_video_bus_add() resulting from recent changes (Dinghao Liu)"
* tag 'acpi-6.6-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
ACPI: video: Fix NULL pointer dereference in acpi_video_bus_add()
Merge tag 'powerpc-6.6-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Pull powerpc fixes from Michael Ellerman:
- Fix arch_stack_walk_reliable(), used by live patching
- Fix powerpc selftests to work with run_kselftest.sh
Thanks to Joe Lawrence and Petr Mladek.
* tag 'powerpc-6.6-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
selftests/powerpc: Fix emit_tests to work with run_kselftest.sh
powerpc/stacktrace: Fix arch_stack_walk_reliable()
Baoquan He [Tue, 26 Sep 2023 12:09:05 +0000 (20:09 +0800)]
Crash: add lock to serialize crash hotplug handling
Eric reported that handling the corresponding crash hotplug event can
easily fail when many memory hotplug events are notified in a short
period. The handling fails because kexec_trylock() fails to take
__kexec_lock.
=======
[ 78.714569] Fallback order for Node 0: 0
[ 78.714575] Built 1 zonelists, mobility grouping on. Total pages: 1817886
[ 78.717133] Policy zone: Normal
[ 78.724423] crash hp: kexec_trylock() failed, elfcorehdr may be inaccurate
[ 78.727207] crash hp: kexec_trylock() failed, elfcorehdr may be inaccurate
[ 80.056643] PEFILE: Unsigned PE binary
=======
The memory hotplug events are notified very quickly and in large numbers,
while the handling of crash hotplug is relatively much slower. So the atomic
variable __kexec_lock and kexec_trylock() can't guarantee the
serialization of crash hotplug handling.
Here, add a new mutex lock __crash_hotplug_lock to serialize crash hotplug
handling specifically. This doesn't impact the usage of __kexec_lock.
Yang Shi [Wed, 20 Sep 2023 22:32:42 +0000 (15:32 -0700)]
mm: mempolicy: keep VMA walk if both MPOL_MF_STRICT and MPOL_MF_MOVE are specified
When calling mbind() with MPOL_MF_{MOVE|MOVEALL} | MPOL_MF_STRICT, the
kernel should attempt to migrate all existing pages, and return -EIO if
there is a misplaced or unmovable page. Then commit 6f4576e3687b
("mempolicy: apply page table walker on queue_pages_range()") messed up
the return value and no longer broke the VMA scan early when
MPOL_MF_STRICT alone was specified. The return value problem was fixed by
commit a7f40cfe3b7a ("mm: mempolicy: make mbind() return -EIO when
MPOL_MF_STRICT is specified"), but it broke out of the VMA walk early when
an unmovable page was met, which may cause some pages not to be migrated
as expected.
The code should conceptually do:
if (MPOL_MF_MOVE|MOVEALL)
scan all vmas
try to migrate the existing pages
return success
else if (MPOL_MF_MOVE* | MPOL_MF_STRICT)
scan all vmas
try to migrate the existing pages
return -EIO if unmovable or migration failed
else /* MPOL_MF_STRICT alone */
break early if meets unmovable and don't call mbind_range() at all
else /* none of those flags */
check the ranges in test_walk, EFAULT without mbind_range() if discontig.
mm/damon/vaddr-test: fix memory leak in damon_do_test_apply_three_regions()
When CONFIG_DAMON_VADDR_KUNIT_TEST=y, CONFIG_DEBUG_KMEMLEAK=y and
CONFIG_DEBUG_KMEMLEAK_AUTO_SCAN=y are set, the below memory leak is
detected.
Since commit 9f86d624292c ("mm/damon/vaddr-test: remove unnecessary
variables") removed the damon_destroy_ctx() call, the test still calls
damon_new_target() and damon_new_region(), but the damon_region allocated
by kmem_cache_alloc() in damon_new_region() and the damon_target allocated
by kmalloc() in damon_new_target() are not freed. The damon_region
allocated in damon_new_region() from damon_set_regions() is also not
freed.
So use damon_destroy_target() to free all the damon_regions and the
damon_target.
Michal Hocko [Thu, 21 Sep 2023 07:38:29 +0000 (09:38 +0200)]
mm, memcg: reconsider kmem.limit_in_bytes deprecation
This reverts commits 86327e8eb94c ("memcg: drop kmem.limit_in_bytes") and
partially reverts 58056f77502f ("memcg, kmem: further deprecate
kmem.limit_in_bytes") which have incrementally removed support for the
kernel memory accounting hard limit. Unfortunately it has turned out that
there is still userspace depending on the existence of
memory.kmem.limit_in_bytes [1]. The underlying functionality is not
really required, but the non-existent file just confuses the userspace,
which then fails as a result. The patch to fix this on the userspace side
has been submitted, but it is hard to predict how it will propagate through
the maze of 3rd party consumers of the software.
Now, reverting alone 86327e8eb94c is not an option because there is
another set of userspace which cannot cope with ENOTSUPP returned when
writing to the file. Therefore we have to go and revisit 58056f77502f as
well. There are two ways to go ahead. Either we give up on the
deprecation and fully revert 58056f77502f as well or we can keep
kmem.limit_in_bytes but make the write a noop and warn about the fact.
This should work for both known breaking workloads which depend on the
existence but do not depend on the hard limit enforcement.
Note to backporters to stable trees. a8c49af3be5f ("memcg: add per-memcg
total kernel memory stat") introduced in 4.18 has added memcg_account_kmem
so the accounting is not done by obj_cgroup_charge_pages directly for v1
anymore. Prior kernels need to add it explicitly (thanks to Johannes for
pointing this out).
mm: zswap: fix potential memory corruption on duplicate store
While stress-testing zswap, a memory corruption was happening when writing
back pages. __frontswap_store() used to check for duplicate entries
before attempting to store a page in zswap; this was because, if the store
fails, the old entry isn't removed from the tree. This change removes
duplicate entries in zswap_store() before the actual attempt.
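A sketch of the up-front duplicate removal in zswap_store(), assuming the
existing zswap_rb_search()/zswap_invalidate_entry() helpers (illustrative,
not a verbatim hunk):

	/* If an old entry exists for this offset, drop it now; a
	 * failed store would otherwise leave it stale in the tree. */
	spin_lock(&tree->lock);
	dupentry = zswap_rb_search(&tree->rbroot, offset);
	if (dupentry) {
		zswap_duplicate_entry++;
		zswap_invalidate_entry(tree, dupentry);
	}
	spin_unlock(&tree->lock);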
Ryan Roberts [Fri, 22 Sep 2023 11:58:04 +0000 (12:58 +0100)]
arm64: hugetlb: fix set_huge_pte_at() to work with all swap entries
When called with a swap entry that does not embed a PFN (e.g.
PTE_MARKER_POISONED or PTE_MARKER_UFFD_WP), the previous implementation of
set_huge_pte_at() would either cause a BUG() to fire (if CONFIG_DEBUG_VM
is enabled) or cause a dereference of an invalid address and subsequent
panic.
arm64's huge pte implementation supports multiple huge page sizes, some of
which are implemented in the page table with multiple contiguous entries.
So set_huge_pte_at() needs to work out how big the logical pte is, so that
it can also work out how many physical ptes (or pmds) need to be written.
It previously did this by grabbing the folio out of the pte and querying
its size.
However, there are cases when the pte being set is actually a swap entry.
But this also used to work fine, because for huge ptes, we only ever saw
migration entries and hwpoison entries. And both of these types of swap
entries have a PFN embedded, so the code would grab that and everything
still worked out.
But over time, more calls to set_huge_pte_at() have been added that set
swap entry types that do not embed a PFN. And this causes the code to go
bang. The triggering case is the uffd poison test, commit 99aa77215ad0 ("selftests/mm: add uffd unit test for UFFDIO_POISON"), which
causes a PTE_MARKER_POISONED swap entry to be set, courtesy of commit 8a13897fb0da ("mm: userfaultfd: support UFFDIO_POISON for hugetlbfs") -
added in v6.5-rc7. Although review shows that there are other call sites
that set PTE_MARKER_UFFD_WP (which also has no PFN), these don't trigger
on arm64 because arm64 doesn't support UFFD WP.
Arguably, the root cause is really due to commit 18f3962953e4 ("mm:
hugetlb: kill set_huge_swap_pte_at()"), which aimed to simplify the
interface to the core code by removing set_huge_swap_pte_at() (which took
a page size parameter) and replacing it with calls to set_huge_pte_at()
where the size was inferred from the folio, as descibed above. While that
commit didn't break anything at the time, it did break the interface
because it couldn't handle swap entries without PFNs. And since then new
callers have come along which rely on this working. But given the
brokeness is only observable after commit 8a13897fb0da ("mm: userfaultfd:
support UFFDIO_POISON for hugetlbfs"), that one gets the Fixes tag.
Now that we have modified the set_huge_pte_at() interface to pass the huge
page size in the previous patch, we can trivially fix this issue.
Ryan Roberts [Fri, 22 Sep 2023 11:58:03 +0000 (12:58 +0100)]
mm: hugetlb: add huge page size param to set_huge_pte_at()
Patch series "Fix set_huge_pte_at() panic on arm64", v2.
This series fixes a bug in arm64's implementation of set_huge_pte_at(),
which can result in an unprivileged user causing a kernel panic. The
problem was triggered when running the new uffd poison mm selftest for
HUGETLB memory. This test (and the uffd poison feature) was merged for
v6.5-rc7.
Ideally, I'd like to get this fix in for v6.6 and I've cc'ed stable
(correctly this time) to get it backported to v6.5, where the issue first
showed up.
Description of Bug
==================
arm64's huge pte implementation supports multiple huge page sizes, some of
which are implemented in the page table with multiple contiguous entries.
So set_huge_pte_at() needs to work out how big the logical pte is, so that
it can also work out how many physical ptes (or pmds) need to be written.
It previously did this by grabbing the folio out of the pte and querying
its size.
However, there are cases when the pte being set is actually a swap entry.
But this also used to work fine, because for huge ptes, we only ever saw
migration entries and hwpoison entries. And both of these types of swap
entries have a PFN embedded, so the code would grab that and everything
still worked out.
But over time, more calls to set_huge_pte_at() have been added that set
swap entry types that do not embed a PFN. And this causes the code to go
bang. The triggering case is the uffd poison test, commit 99aa77215ad0 ("selftests/mm: add uffd unit test for UFFDIO_POISON"), which
causes a PTE_MARKER_POISONED swap entry to be set, courtesy of commit 8a13897fb0da ("mm: userfaultfd: support UFFDIO_POISON for hugetlbfs") -
added in v6.5-rc7. Although review shows that there are other call sites
that set PTE_MARKER_UFFD_WP (which also has no PFN), these don't trigger
on arm64 because arm64 doesn't support UFFD WP.
If CONFIG_DEBUG_VM is enabled, we do at least get a BUG(); but otherwise,
it will dereference a bad pointer in page_folio().
The simplest fix would have been to revert the dodgy cleanup commit 18f3962953e4 ("mm: hugetlb: kill set_huge_swap_pte_at()"), but since
things have moved on, this would have required an audit of all the new
set_huge_pte_at() call sites to see if they should be converted to
set_huge_swap_pte_at(). As per the original intent of the change, it
would also leave us open to future bugs when people invariably get it
wrong and call the wrong helper.
So instead, I've added a huge page size parameter to set_huge_pte_at().
This means that the arm64 code has the size in all cases. It's a bigger
change, due to needing to touch the arches that implement the function,
but it is entirely mechanical, so in my view, low risk.
I've compile-tested all touched arches; arm64, parisc, powerpc, riscv,
s390, sparc (and additionally x86_64). I've additionally booted and run
mm selftests against arm64, where I observe the uffd poison test is fixed,
and there are no other regressions.
This patch (of 2):
In order to fix a bug, arm64 needs to be told the size of the huge page
for which the pte is being set in set_huge_pte_at(). Provide for this by
adding an `unsigned long sz` parameter to the function. This follows the
same pattern as huge_pte_clear().
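Sketched from the description above, the core prototype becomes:

	void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
			     pte_t *ptep, pte_t pte, unsigned long sz);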
This commit makes the required interface modifications to the core mm as
well as all arches that implement this function (arm64, parisc, powerpc,
riscv, s390, sparc). The actual arm64 bug will be fixed in a separate
commit.
maple_tree: add MAS_UNDERFLOW and MAS_OVERFLOW states
When updating the maple tree iterator to avoid rewalks, an issue was
introduced when shifting beyond the limits. This can be seen by trying to
go to the previous address of 0, which would set the maple node to
MAS_NONE and keep the range as the last entry.
Subsequent calls to mas_find() would then search upwards from mas->last
and skip the value at mas->index/mas->last. This showed up as a bug in
mprotect which skips the actual VMA at the current range after attempting
to go to the previous VMA from 0.
Since MAS_NONE may already be set when searching for a value that isn't
contained within a node, changing the handling of MAS_NONE in mas_find()
would make the code more complicated and error prone. Furthermore, there
was no way to tell which limit was hit, and thus which action to take
(next or the entry at the current range).
This solution is to add two states to track what happened with the
previous iterator action. This allows for the expected behaviour of the
next command to return the correct item (either the item at the range
requested, or the next/previous).
maple_tree: add mas_is_active() to detect in-tree walks
Patch series "maple_tree: Fix mas_prev() state regression".
Pedro Falcato reported an mprotect regression [1] which was bisected back
to the iterator changes for the maple tree. Root cause analysis showed
that mas_prev() running off the end of the VMA space (previous from 0),
followed by mas_find(), would skip the first value.
This patchset introduces maple state underflow/overflow so the sequence of
calls on the maple state will return what the user expects.
Users who encounter this bug may see mprotect(), userfaultfd_register(),
and mlock() fail on VMAs mapped with address 0.
This patch (of 2):
Instead of constantly checking each possibility of the maple state,
create a fast path that will skip over checking unlikely states.
Pan Bian [Thu, 21 Sep 2023 14:17:31 +0000 (23:17 +0900)]
nilfs2: fix potential use after free in nilfs_gccache_submit_read_data()
In nilfs_gccache_submit_read_data(), brelse(bh) is called to drop the
reference count of bh when the call to nilfs_dat_translate() fails. If
the reference count hits 0 and its owner page gets unlocked, bh may be
freed. However, bh->b_page is dereferenced to put the page after that,
which may result in a use-after-free bug. This patch moves the release
operation after unlocking and putting the page.
NOTE: The function in question is only called in GC, and in combination
with current userland tools, address translation using DAT does not occur
in that function, so the code path that causes this issue will not be
executed. However, it is possible to run that code path by intentionally
modifying the userland GC library or by calling the GC ioctl directly.
In order to fix the L1TF vulnerability, x86 can invert the PTE bits for
PROT_NONE VMAs, which means we cannot move from one PTE to the next by
adding 1 to the PFN field of the PTE. This results in the BUG reported at
[1].
Abstract advancing the PTE to the next PFN through a pte_next_pfn()
function/macro.
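The generic fallback can be sketched as follows (assuming a PFN_PTE_SHIFT
constant for the PTE's PFN field; x86 overrides the macro to cope with
inverted PROT_NONE PTEs):

	#ifndef pte_next_pfn
	static inline pte_t pte_next_pfn(pte_t pte)
	{
		return __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
	}
	#endif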
mm: report success more often from filemap_map_folio_range()
Even though we had successfully mapped the relevant page, we would rarely
return success from filemap_map_folio_range(). That leads to falling back
from the VMA lock path to the mmap_lock path, which is a speed &
scalability issue. Found by inspection.
fs: binfmt_elf_efpic: fix personality for ELF-FDPIC
The elf-fdpic loader hard sets the process personality to either
PER_LINUX_FDPIC for true elf-fdpic binaries or to PER_LINUX for normal ELF
binaries (in this case they would be constant displacement compiled with
-pie for example). The problem with that is that it will lose any other
bits that may be in the ELF header personality (such as the "bug
emulation" bits).
On the ARM architecture the ADDR_LIMIT_32BIT flag is used to signify a
normal 32bit binary - as opposed to a legacy 26bit address binary. This
matters since start_thread() will set the ARM CPSR register as required
based on this flag. If the elf-fdpic loader loses this bit the process
will be mis-configured and crash out pretty quickly.
Modify elf-fdpic loader personality setting so that it preserves the upper
three bytes by using the SET_PERSONALITY macro to set it. This macro in
the generic case sets PER_LINUX and preserves the upper bytes.
Architectures can override this for their specific use case, and ARM does
exactly this.
The problem shows up quite easily running under qemu using the ARM
architecture, but not necessarily on all types of real ARM hardware. If
the underlying ARM processor does not support the legacy 26-bit addressing
mode then everything will work as expected.
Merge tag '6.6-rc3-ksmbd-server-fixes' of git://git.samba.org/ksmbd
Pull smb server fixes from Steve French:
"Two SMB3 server fixes for null pointer dereferences:
- invalid SMB3 request case (fixes issue found in testing the read
compound patch)
- iovec error case in response processing"
* tag '6.6-rc3-ksmbd-server-fixes' of git://git.samba.org/ksmbd:
ksmbd: check iov vector index in ksmbd_conn_write()
ksmbd: return invalid parameter error response if smb2 request is invalid
Merge tag 'ceph-for-6.6-rc4' of https://github.com/ceph/ceph-client
Pull ceph fixes from Ilya Dryomov:
"A series that fixes an involved 'double watch error' deadlock in RBD
marked for stable and two cleanups"
* tag 'ceph-for-6.6-rc4' of https://github.com/ceph/ceph-client:
rbd: take header_rwsem in rbd_dev_refresh() only when updating
rbd: decouple parent info read-in from updating rbd_dev
rbd: decouple header read-in from updating rbd_dev->header
rbd: move rbd_dev_refresh() definition
Revert "ceph: make members in struct ceph_mds_request_args_ext a union"
ceph: remove unnecessary check for NULL in parse_longname()
Commit 31345a0f5901 ("MAINTAINERS: Replace my email address") added 13
instances of [email protected] and one of only ...@broadcom. I didn't
double check if Broadcom really owns that TLD, but git send-email
doesn't accept it, so add ".com" to that one bogus(?) instance.
Merge tag 'ata-6.6-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/libata
Pull ATA fixes from Damien Le Moal:
"A larger than usual set of fixes for 6.6-rc4 due to the unexpected
number of fixes needed to address ATA disks suspend/resume issues.
In more detail:
- Add missing additionalProperties on child nodes to the pata-common
DT bindings (Rob)
- Fix handling of the REPORT SUPPORTED OPERATION CODES command to
ignore reserved bits (Niklas)
- Increase the port multiplier soft reset timeout to accommodate slow
devices and avoid issues on wakeup (Matthias)
- A couple of minor code fixes to avoid compilation warnings in
libata-core and libata-eh (me)
- Many patches from me to address suspend/resume issues, and in
particular a potential deadlock on resume due to the SCSI disk
driver resume operation not being synchronized with libata EH port
resume handling.
This is addressed by changing the scsi disk driver disk start/stop
control to allow libata to execute disk suspend (spin down) and
resume (spin up) on its own during system suspend/resume. Runtime
suspend/resume control remains with the SCSI disk driver.
Other fixes include:
- Fix libata power management request issuing to avoid races
- Establish a link between ATA ports and SCSI devices to order PM
operations
- Fix device removal to avoid issues with driver rmmod removal
- Fix synchronization of libata device rescan and SCSI disk resume
operation
- Remove libsas PM operations as suspend/resume is handled
directly by the sas controller resume
- Fix the SCSI disk driver to not issue commands to suspended
disks, thus avoiding potential system lock-up on resume"
* tag 'ata-6.6-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/libata:
ata: libata-eh: Fix compilation warning in ata_eh_link_report()
ata: libata-core: Fix compilation warning in ata_dev_config_ncq()
scsi: sd: Do not issue commands to suspended disks on shutdown
ata: libata-core: Do not register PM operations for SAS ports
ata: libata-scsi: Fix delayed scsi_rescan_device() execution
scsi: Do not attempt to rescan suspended devices
ata: libata-scsi: Disable scsi device manage_system_start_stop
scsi: sd: Differentiate system and runtime start/stop management
ata: libata-scsi: link ata port and scsi device
ata: libata-core: Fix port and device removal
ata: libata-core: Fix ata_port_request_pm() locking
ata: libata-sata: increase PMP SRST timeout to 10s
ata: libata-scsi: ignore reserved bits for REPORT SUPPORTED OPERATION CODES
dt-bindings: ata: pata-common: Add missing additionalProperties on child nodes
Merge tag 'block-6.6-2023-09-28' of git://git.kernel.dk/linux
Pull block fixes from Jens Axboe:
"Just two minor comment / documentation fixes for the block side"
* tag 'block-6.6-2023-09-28' of git://git.kernel.dk/linux:
block: fix kernel-doc for disk_force_media_change()
block: correct stale comment in rq_qos_wait
Merge tag 'slab-fixes-for-6.6-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab
Pull slab fixes from Vlastimil Babka:
- stable fix to prevent list corruption when destroying caches with
leftover objects (Rafael Aquini)
- fix for a gotcha in kmalloc_size_roundup() when calling it with too
high size, discovered when recently a networking call site had to be
fixed for a different issue (David Laight)
* tag 'slab-fixes-for-6.6-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab:
slab: kmalloc_size_roundup() must not return 0 for non-zero size
mm/slab_common: fix slab_caches list corruption after kmem_cache_destroy()
Merge tag 'drm-fixes-2023-09-29' of git://anongit.freedesktop.org/drm/drm
Pull drm fixes from Dave Airlie:
"Regular pull, this feel suspiciously light so I expect next week might
be a bit heavier? Let's see how we go. This is from a code point of
view ivpu and i915 fixes.
The only other patch is adding Danilo Krummrich to the nouveau
maintainers, he's agreed to take on more of the roll after Ben
retired.
MAINTAINERS:
- add Danilo for nouveau
ivpu:
- Add PCI ids for Arrow Lake
- Fix memory corruption during IPC
- Avoid dmesg flooding
- 40xx: Wait for clock resource
- 40xx: Fix interrupt usage
- 40xx: Support caching when loading firmware
i915:
- Fix a panic regression on gen8_ggtt_insert_entries
- Fix load issue due to reservation address in ggtt_reserve_guc_top
- Fix a possible deadlock with guc busyness worker"
* tag 'drm-fixes-2023-09-29' of git://anongit.freedesktop.org/drm/drm:
accel/ivpu: Use cached buffers for FW loading
accel/ivpu/40xx: Fix missing VPUIP interrupts
accel/ivpu/40xx: Disable frequency change interrupt
accel/ivpu/40xx: Ensure clock resource ownership Ack before Power-Up
accel/ivpu: Don't flood dmesg with VPU ready message
accel/ivpu: Do not use wait event interruptible
MAINTAINERS: update nouveau maintainers
i915/guc: Get runtime pm in busyness worker only if already active
drm/i915/gt: Fix reservation address in ggtt_reserve_guc_top
i915: Limit the length of an sg list to the requested length
accel/ivpu: Add Arrow Lake pci id