This slightly strengthens our write assertion when lockdep is disabled.
It also downgrades us from BUG_ON to WARN_ON, but I think that's an
improvement. I don't think dumping the mm_struct was all that valuable;
the call chain is what's important.
Peter Xu [Wed, 3 Apr 2024 01:32:47 +0000 (21:32 -0400)]
mm: allow anon exclusive check over hugetlb tail pages
PageAnonExclusive() used to forbid tail pages for hugetlbfs, as it used
to be called mostly in hugetlb-specific paths where the head page was
guaranteed.
As we move towards merging hugetlb paths into generic mm, we may start to
pass in tail hugetlb pages (when with cont-pte/cont-pmd huge pages) for
such a check. Allow it to properly fetch the head, in which case the
anon-exclusiveness of the head will always represent the tail page.
There's already a sign of this in GUP-fast, which already contains the
hugetlb processing altogether: we used to need the specific commit
5805192c7b72 ("mm/gup: handle cont-PTE hugetlb pages correctly in
gup_must_unshare() via GUP-fast") covering that area. Now with this more
generic change, that can also go away.
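As a rough sketch of that head-fetch (the real helper lives among the page
flag macros and carries extra sanity checks; this simplified form is only
illustrative):

/* hugetlb tracks anon-exclusiveness on the head page only */
static inline bool page_anon_exclusive_sketch(const struct page *page)
{
	if (PageHuge(page))
		page = compound_head(page);	/* tail -> head for hugetlb */
	return test_bit(PG_anon_exclusive, &page->flags);
}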
Peter Xu [Wed, 27 Mar 2024 15:23:32 +0000 (11:23 -0400)]
mm/gup: handle hugetlb in the generic follow_page_mask code
Now follow_page() is ready to handle hugetlb pages in whatever form, and
across all architectures. Switch to the generic code path.
Time to retire hugetlb_follow_page_mask(), following the previous
retirement of follow_hugetlb_page() in 4849807114b8.
There may be a slight difference in how the loops run when processing
slow GUP over a large hugetlb range on cont_pte/cont_pmd supported archs:
with the patch applied, each loop of __get_user_pages() will resolve one
pgtable entry, rather than relying on the size of the hugetlb hstate,
where the latter may cover multiple entries in one loop.
A quick performance test on an aarch64 VM on an M1 chip shows a 15%
degradation over a tight loop of slow gup after the path is switched.
That shouldn't be a problem because slow gup should not be a hot path for
GUP in general: when the page is commonly present, fast gup will already
succeed, while when the page is indeed missing and requires a follow-up
page fault, the slow gup degradation will probably be buried in the fault
paths anyway. It also explains why slow gup for THP used to be very slow
before 57edfcfd3419 ("mm/gup: accelerate thp gup even for "pages !=
NULL"") landed; the latter was not part of a performance analysis but a
side benefit. If performance becomes a concern, we can consider handling
CONT_PTE in follow_page().
Before that is justified to be necessary, keep everything clean and simple.
Peter Xu [Wed, 27 Mar 2024 15:23:31 +0000 (11:23 -0400)]
mm/gup: handle hugepd for follow_page()
Hugepd is so far only used in PowerPC on 4K page size kernels where the
hash MMU is used. follow_page_mask() used to leverage hugetlb APIs to
access hugepd entries. Teach follow_page_mask() to handle hugepd entries
itself.
With the previous refactoring of the fast-gup gup_huge_pd(), most of the
code can be leveraged. There are things that are not needed for
follow_page(); for example, gup_hugepte() tries to detect pgtable entry
changes, which will never happen with slow gup (which holds the pgtable
lock), but checking for them does no harm.
Since follow_page() always fetches only one page, setting the end to
"address + PAGE_SIZE" should suffice. We will still do the pgtable walk
only once for each hugetlb page by setting ctx->page_mask properly.
One thing worth mentioning is that some levels of the pgtable _bad()
helpers will report is_hugepd() entries as TRUE on Power8 hash MMUs. I
think it at least applies to PUD on Power8 with 4K pgsize, meaning that
feeding a hugepd entry to pud_bad() will report a false positive. Let's
leave that for now because it can be arch-specific and I am a bit
disinclined to touch it. In this patch it's not a problem as long as
hugepd is detected before any bad pgtable entries.
To allow slow-gup code like follow_*_page() to access the hugepd helpers,
the hugepd code is moved to the top. Besides that, the helper
record_subpages() will now be used by either hugepd or fast-gup. To avoid
"unused function" warnings we must provide an "#ifdef" for it,
unfortunately.
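For reference, record_subpages() is a small helper along these lines; the
#ifdef condition shown here is my guess at the guard and may not match the
exact upstream symbols:

#if defined(CONFIG_ARCH_HAS_HUGEPD) || defined(CONFIG_HAVE_FAST_GUP)
/* fill *pages with the subpages of a huge mapping covering [addr, end) */
static int record_subpages(struct page *page, unsigned long addr,
			   unsigned long end, struct page **pages)
{
	int nr;

	for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
		pages[nr] = nth_page(page, nr);

	return nr;
}
#endif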
Peter Xu [Wed, 27 Mar 2024 15:23:30 +0000 (11:23 -0400)]
mm/gup: handle huge pmd for follow_pmd_mask()
Replace pmd_trans_huge() with pmd_leaf() to also cover pmd_huge() when
that is enabled.
FOLL_TOUCH and FOLL_SPLIT_PMD only apply to THP, not to hugetlb.
Since follow_trans_huge_pmd() can now process hugetlb pages, rename it to
follow_huge_pmd() to match what it does. Move it into gup.c so it does
not depend on CONFIG_THP.
While at it, move the ctx->page_mask setup into follow_huge_pmd() and only
set it when the page is valid. It was not a bug to set it before even if
GUP failed (page==NULL), because follow_page_mask() callers always ignore
page_mask in that case. But doing so makes the code cleaner.
Peter Xu [Wed, 27 Mar 2024 15:23:29 +0000 (11:23 -0400)]
mm/gup: handle huge pud for follow_pud_mask()
Teach follow_pud_mask() to be able to handle normal PUD pages like
hugetlb.
Rename follow_devmap_pud() to follow_huge_pud() so that it can process
either huge devmap or hugetlb. Move it out of TRANSPARENT_HUGEPAGE_PUD
and huge_memory.c (which relies on CONFIG_THP). Switch to pud_leaf()
to detect both cases in the slow gup.
In the new follow_huge_pud(), take care of possible CoR for hugetlb if
necessary. touch_pud() needs to be moved out of huge_memory.c to be
accessible from gup.c even if !THP.
While at it, optimize the non-present check by adding a pud_present()
early check before taking the pgtable lock, failing the follow_page()
early if the PUD is not present: that is required by both devmap and
hugetlb. Use pud_huge() to also cover the pud_devmap() case.
One more trivial thing to mention: introduce "pud_t pud" in the code
paths along the way, so the code doesn't dereference *pudp multiple times.
Not only does the repeated dereference look less straightforward, but if
it really happened, it's also not clear whether there can be a race seeing
different *pudp values when it's being modified at the same time.
Set ctx->page_mask properly for a PUD entry. As a side effect, this patch
should also be able to optimize devmap GUP on PUD to jump over the whole
PUD range, but that is not yet verified. Hugetlb could already do so
prior to this patch.
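A minimal sketch of the early-check plus local-snapshot pattern described
above (simplified; the pgtable lock, devmap refcounting and CoR handling
are deliberately left out, and the function name is made up):

static struct page *follow_huge_pud_sketch(struct vm_area_struct *vma,
					   unsigned long addr, pud_t *pudp,
					   unsigned int flags)
{
	pud_t pud = READ_ONCE(*pudp);	/* snapshot once; never re-read *pudp */

	if (!pud_present(pud))
		return NULL;		/* required by both devmap and hugetlb */
	if (!pud_leaf(pud))
		return NULL;		/* not a huge mapping at this level */

	/* the real code continues under the pgtable lock from here */
	return pfn_to_page(pud_pfn(pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT));
}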
Peter Xu [Wed, 27 Mar 2024 15:23:28 +0000 (11:23 -0400)]
mm/gup: cache *pudp in follow_pud_mask()
Introduce "pud_t pud" in the function, so the code won't dereference *pudp
multiple time. Not only because that looks less straightforward, but also
because if the dereference really happened, it's not clear whether there
can be race to see different *pudp values if it's being modified at the
same time.
Peter Xu [Wed, 27 Mar 2024 15:23:25 +0000 (11:23 -0400)]
mm/gup: drop gup_fast_folio_allowed() in hugepd processing
The hugepd format for GUP is only used in PowerPC with hugetlbfs. There
is some kernel usage of hugepd (one can refer to hugepd_populate_kernel()
for PPC_8XX), however those pages are not candidates for GUP.
Commit a6e79df92e4a ("mm/gup: disallow FOLL_LONGTERM GUP-fast writing to
file-backed mappings") added a check to fail gup-fast if there's potential
risk of violating GUP over writeback file systems. That should never
apply to hugepd. Considering that hugepd is an old format (and even
software-only), there's no plan to extend hugepd into other file typed
memories that is prone to the same issue.
Drop that check, not only because it'll never be true for hugepd per any
known plan, but also because it paves the way for reusing the function
outside fast-gup.
To make sure we'll still remember this issue in case hugepd is ever
extended to support non-hugetlbfs memories, add a rich comment above
gup_huge_pd(), explaining the issue with proper references.
Peter Xu [Wed, 27 Mar 2024 15:23:24 +0000 (11:23 -0400)]
mm/arch: provide pud_pfn() fallback
The comment in the code explains the reasons. We took a different
approach compared to pmd_pfn() by providing a fallback function.
Another option is to provide some lower-level config options (compared to
HUGETLB_PAGE or THP) to identify which layer an arch can support for such
huge mappings. However, that would be overkill.
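The fallback is essentially a stub that should never be reached on archs
without huge PUD mappings; a sketch of its likely shape:

#ifndef pud_pfn
#define pud_pfn pud_pfn
static inline unsigned long pud_pfn(pud_t pud)
{
	/* only reachable if generic code walks a huge PUD the arch can't have */
	VM_WARN_ON_ONCE(1);
	return 0;
}
#endif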
Peter Xu [Wed, 27 Mar 2024 15:23:23 +0000 (11:23 -0400)]
mm: introduce vma_pgtable_walk_{begin|end}()
Introduce per-vma begin()/end() helpers for pgtable walks. This is
preparation work to merge hugetlb pgtable walkers with generic mm.
The helpers need to be called before and after a pgtable walk, and will
start to be needed once the pgtable walker code supports hugetlb pages.
It's a hook point for any type of VMA, but for now only hugetlb uses it to
stabilize the pgtable pages against going away (due to possible pmd
unsharing).
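For hugetlb the hook boils down to taking the per-VMA hugetlb lock for
reading; a sketch of what the helpers most likely look like:

void vma_pgtable_walk_begin(struct vm_area_struct *vma)
{
	if (is_vm_hugetlb_page(vma))
		hugetlb_vma_lock_read(vma);	/* blocks pmd unsharing */
}

void vma_pgtable_walk_end(struct vm_area_struct *vma)
{
	if (is_vm_hugetlb_page(vma))
		hugetlb_vma_unlock_read(vma);
}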
Peter Xu [Wed, 27 Mar 2024 15:23:22 +0000 (11:23 -0400)]
mm: make HPAGE_PXD_* macros even if !THP
These macros can be helpful when we plan to merge hugetlb code into
generic code. Move them out and define them as long as
PGTABLE_HAS_HUGE_LEAVES is selected, because there are systems that only
define HUGETLB_PAGE, not THP.
One note here is that HPAGE_PMD_SHIFT must be defined even if PMD_SHIFT is
not defined (e.g. the !CONFIG_MMU case); it (or other forms of it, like
HPAGE_PMD_NR) is already used in lots of common code without ifdef guards.
Use the old trick to let compilation work.
Here we only need to differentiate the HPAGE_PXD_SHIFT definitions. All
the rest of the macros will be defined based on it. While at it, move
HPAGE_PMD_NR / HPAGE_PMD_ORDER over together.
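The "old trick" is to let the SHIFT macro expand to a BUILD_BUG() whenever
it cannot legitimately be used, so unguarded references still compile but
any evaluation breaks the build. A sketch (the exact conditionals may
differ upstream):

#ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
#define HPAGE_PMD_SHIFT	PMD_SHIFT
#define HPAGE_PUD_SHIFT	PUD_SHIFT
#else
#define HPAGE_PMD_SHIFT	({ BUILD_BUG(); 0; })
#define HPAGE_PUD_SHIFT	({ BUILD_BUG(); 0; })
#endif

/* everything else derives from the SHIFT definitions above */
#define HPAGE_PMD_ORDER	(HPAGE_PMD_SHIFT - PAGE_SHIFT)
#define HPAGE_PMD_NR	(1 << HPAGE_PMD_ORDER)
#define HPAGE_PMD_SIZE	((1UL) << HPAGE_PMD_SHIFT)
#define HPAGE_PMD_MASK	(~(HPAGE_PMD_SIZE - 1))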
Peter Xu [Wed, 27 Mar 2024 15:23:20 +0000 (11:23 -0400)]
mm/Kconfig: CONFIG_PGTABLE_HAS_HUGE_LEAVES
Patch series "mm/gup: Unify hugetlb, part 2", v4.
The series removes the hugetlb slow gup path after a previous refactoring
[1], so that slow gup now uses the exact same path to process all kinds of
memory including hugetlb.
For the long term, we may want to remove most, if not all, call sites of
huge_pte_offset(). It would be ideal if that API could be completely
dropped from the arch hugetlb API. This series is one small step towards
merging hugetlb-specific code into generic mm paths. From that POV, this
series removes one reference to huge_pte_offset() out of many others.
One goal of such a route is that we can reconsider merging hugetlb
features like High Granularity Mapping (HGM). It was not accepted in the
past because it may add lots of hugetlb-specific code and make the mm code
even harder to maintain. With a merged codeset, features like HGM can
hopefully share some code with THP, whether legacy (PMD+) or modern
(contiguous PTEs).
To make it work, the generic slow gup code will need to at least
understand hugepd, as fast-gup already does. Because hugepd is a
software-only solution (no hardware recognizes the hugepd format; it is a
purely artificial structure), there is a chance we can merge some or all
hugepd formats with cont_pte in the future. That question is still
unsettled, pending an acknowledgement from the Power side. As of now, for
this series, I kept the hugepd handling because we may still need it
before getting a clearer picture of hugepd's future. The other reason is
simply that we did it already for fast-gup and most of the code is still
around to be reused. It makes more sense to keep slow/fast gup behaving
the same before a decision is made to remove hugepd.
There's one major difference for slow gup in cont_pte / cont_pmd handling,
currently supported on three architectures (aarch64, riscv, ppc). Before
the series, slow gup was able to recognize e.g. cont_pte entries with the
help of huge_pte_offset() when an hstate was around. Now that's gone, but
things still work by looking up pgtable entries one by one.
It's not ideal, but hopefully this change should not yet affect major
workloads. There's some more information in the commit message of the
last patch. If this becomes a concern, we can consider teaching slow gup
to recognize cont pte/pmd entries, and that should recover the lost
performance. But I doubt its necessity for now, so I kept it as simple as
it can be.
Patch layout
=============
Patch 1-8: Preparation works, or cleanups in relevant code paths
Patch 9-11: Teach slow gup with all kinds of huge entries (pXd, hugepd)
Patch 12: Drop hugetlb_follow_page_mask()
More information can be found in the commit messages of each patch.
Introduce a config option that will be selected as long as huge leaves are
involved in pgtables (THP or hugetlbfs). It would be useful to mark, with
this new config option, any code that can process either hugetlb or THP
pages at any level higher than the pte level.
Convert directly from a pmd to a folio without going through another
representation first. For now this is just a slightly shorter way to
write it, but it might end up being more efficient later.
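A plausible definition, reusing the existing pmd_page()/page_folio()
helpers (the exact upstream form may differ):

#define pmd_folio(pmd)	page_folio(pmd_page(pmd))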
This is the folio equivalent of is_huge_zero_page(). It doesn't add any
efficiency, but it does prevent the caller from passing a tail page and
getting confused when the predicate returns false.
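The folio predicate can simply compare against the global huge zero folio;
a hedged sketch, assuming a huge_zero_folio pointer mirroring the existing
huge_zero_page:

static inline bool is_huge_zero_folio(const struct folio *folio)
{
	return READ_ONCE(huge_zero_folio) == folio;
}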
Patch series "Convert huge_zero_page to huge_zero_folio".
Almost all the callers of is_huge_zero_page() already have a folio. And
they should -- is_huge_zero_page() will return false for tail pages, even
if they're tail pages of the huge zero page. That's confusing, and one of
the benefits of the folio conversion is to get rid of this confusion.
This patch (of 8):
There's no need to convert to a page, much less a folio. We can tell from
the pmd whether it is a huge zero page or not. Saves 60 bytes of text.
Chris Li [Tue, 26 Mar 2024 18:35:43 +0000 (11:35 -0700)]
zswap: replace RB tree with xarray
A very deep RB tree requires rebalancing at times, which contributes to
zswap fault latencies. The xarray does not need to perform tree
rebalancing. Replacing the RB tree with an xarray can give a small
performance gain.
One small difference is that an xarray insert might fail with ENOMEM,
while an RB tree insert does not allocate additional memory.
The zswap_entry size will shrink a bit due to removing the RB node, which
has two pointers and a color field. The xarray stores the pointer in the
xarray tree rather than in the zswap_entry. Every entry has one pointer
from the xarray tree. Overall, switching to the xarray should save some
memory if the swap entries are densely packed.
Notice that zswap_rb_search and zswap_rb_insert are often followed by
zswap_rb_erase. Use xa_erase and xa_store directly; that saves one tree
lookup as well.
Remove zswap_invalidate_entry since there is no need to call
zswap_rb_erase any more; use zswap_free_entry instead.
The "struct zswap_tree" has been replaced by "struct xarray". The tree
spin lock has been transferred to the xarray lock.
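As an illustration of the store-side pattern (a sketch only; the function
name is invented here and the real zswap store path does considerably
more):

static int zswap_xa_insert(struct xarray *tree, pgoff_t offset,
			   struct zswap_entry *entry)
{
	struct zswap_entry *old;

	/* insert (or replace) in a single walk; no separate search + erase */
	old = xa_store(tree, offset, entry, GFP_KERNEL);
	if (xa_is_err(old))
		return xa_err(old);	/* e.g. -ENOMEM from node allocation */
	if (old)
		zswap_free_entry(old);	/* a raced duplicate got displaced */
	return 0;
}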
Run the kernel build test 5 times for each version, averages:
(memory.max=2GB, zswap shrinker and writeback enabled, one 50GB swapfile,
24 HT cores, 32 jobs)

        mm-unstable-4aaccadb5c04    xarray v9
user    3548.902                    3534.375
sys     522.232                     520.976
real    202.796                     200.864
Baoquan He [Tue, 26 Mar 2024 06:11:33 +0000 (14:11 +0800)]
mm/page_alloc.c: change the array-length to MIGRATE_PCPTYPES
Earlier, in commit 1dd214b8f21c ("mm: page_alloc: avoid merging
non-fallbackable pageblocks with others"), migrate type MIGRATE_CMA and
MIGRATE_ISOLATE are removed from fallbacks list since they are never used.
Later on, in commit ("aa02d3c174ab mm/page_alloc: reduce fallbacks to
(MIGRATE_PCPTYPES - 1)"), the array column size is reduced to
'MIGRATE_PCPTYPES - 1'. In fact, the array row size need be reduced to
MIGRATE_PCPTYPES too since it's only covering rows of the number
MIGRATE_PCPTYPES. Even though the current code has handled cases
when the migratetype is CMA, HIGHATOMIC and MEMORY_ISOLATION, making
the row size right is still good to avoid future error and confusion.
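For context, the resulting array looks roughly like this (a sketch of the
shape rather than a verbatim copy of mm/page_alloc.c):

static int fallbacks[MIGRATE_PCPTYPES][MIGRATE_PCPTYPES - 1] = {
	[MIGRATE_UNMOVABLE]   = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE   },
	[MIGRATE_MOVABLE]     = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE },
	[MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE,   MIGRATE_MOVABLE   },
};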
Baoquan He [Tue, 26 Mar 2024 06:11:32 +0000 (14:11 +0800)]
mm/page_alloc.c: don't show protection in zone's ->lowmem_reserve[] for empty zone
On one node, a lower zone's ->lowmem_reserve[] shows how much memory is
reserved in that lower zone to avoid excessive page allocation from the
relevant higher zone's fallback allocation.
However, currently a lower zone's lowmem_reserve[] element will be filled
in even though the relevant higher zone is empty. That doesn't make sense
and can cause confusion.
E.g. on node 0 of one system as below, there are zones
DMA/DMA32/NORMAL/MOVABLE/DEVICE, among which the zones MOVABLE/DEVICE are
the highest and both are empty. In zone DMA/DMA32's protection array, we
can see that there are values for the zones MOVABLE and DEVICE.
Node 0, zone      DMA
  ......
  pages free     2816
        boost    0
        min      7
        low      10
        high     13
        spanned  4095
        present  3998
        managed  3840
        cma      0
        protection: (0, 1582, 23716, 23716, 23716)
  ......
Node 0, zone    DMA32
  pages free     403269
        boost    0
        min      753
        low      1158
        high     1563
        spanned  1044480
        present  487039
        managed  405070
        cma      0
        protection: (0, 0, 22134, 22134, 22134)
  ......
Node 0, zone   Normal
  pages free     5423879
        boost    0
        min      10539
        low      16205
        high     21871
        spanned  5767168
        present  5767168
        managed  5666438
        cma      0
        protection: (0, 0, 0, 0, 0)
  ......
Node 0, zone  Movable
  pages free     0
        boost    0
        min      32
        low      32
        high     32
        spanned  0
        present  0
        managed  0
        cma      0
        protection: (0, 0, 0, 0, 0)
Node 0, zone   Device
  pages free     0
        boost    0
        min      0
        low      0
        high     0
        spanned  0
        present  0
        managed  0
        cma      0
        protection: (0, 0, 0, 0, 0)
Here, clear out the element value in a lower zone's ->lowmem_reserve[] if
the relevant higher zone is empty.
Also replace a space with a tab in _deferred_grow_zone().
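A sketch of the idea as it would sit in setup_per_zone_lowmem_reserve()
(simplified and with an invented helper name; the real ratio handling is
more involved):

static void set_lowmem_reserve_for(struct pglist_data *pgdat, int i, int ratio)
{
	struct zone *zone = &pgdat->node_zones[i];
	unsigned long managed_pages = 0;
	int j;

	for (j = i + 1; j < MAX_NR_ZONES; j++) {
		struct zone *upper = &pgdat->node_zones[j];
		bool empty = !zone_managed_pages(upper);

		managed_pages += zone_managed_pages(upper);
		/* an empty higher zone reserves nothing in this lower zone */
		zone->lowmem_reserve[j] = empty ? 0 : managed_pages / ratio;
	}
}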
Baoquan He [Tue, 26 Mar 2024 06:11:31 +0000 (14:11 +0800)]
mm/mm_init.c: remove the outdated code comment above deferred_grow_zone()
The noinline attribute was taken off in commit 9420f89db2dd ("mm: move
most of core MM initialization to mm/mm_init.c"), so remove the now
unneeded code comment above deferred_grow_zone().
Also remove the unneeded bracket in deferred_init_pages().
Baoquan He [Tue, 26 Mar 2024 06:11:30 +0000 (14:11 +0800)]
mm/page_alloc.c: remove unneeded codes in !NUMA version of build_zonelists()
When CONFIG_NUMA=n, MAX_NUMNODES is always 1 because the Kconfig item
NODES_SHIFT depends on NUMA. So in the !NUMA version of build_zonelists(),
there is no need to bother with the two for loops because execution will
never enter them.
Here, remove that unneeded code from the !NUMA version of
build_zonelists().
Baoquan He [Tue, 26 Mar 2024 06:11:28 +0000 (14:11 +0800)]
mm/init: remove the unnecessary special treatment for memory-less node
Because a memory-less node's ->node_present_pages and its zones'
->present_pages are all 0, the check before calling node_set_state() to
set N_MEMORY, N_HIGH_MEMORY and N_NORMAL_MEMORY for a node is enough to
skip memory-less nodes. The 'continue;' statement inside the
for_each_node() loop of free_area_init() is gilding the lily.
Here, remove the special handling to make memory-less nodes share the same
code flow as normal nodes.
Also rephrase the code comments above the 'continue' statement and move
them above the line 'if (pgdat->node_present_pages)'.
Baoquan He [Tue, 26 Mar 2024 06:11:27 +0000 (14:11 +0800)]
mm: move array mem_section init code out of memory_present()
Patch series "mm/init: minor clean up and improvement".
These are all observed when going through code flow during mm init.
This patch (of 7):
When CONFIG_SPARSEMEM_EXTREME is enabled, mem_section needs to be
initialized to point at a two-dimensional array, and its first dimension,
of length NR_SECTION_ROOTS, will be dynamically allocated. Once the
allocation is done, it's available for all nodes.
So take the first-dimension initialization of mem_section out of
memory_present(), and put it into memblocks_present(), which is a more
appropriate place.
Vlastimil Babka [Tue, 26 Mar 2024 10:37:39 +0000 (11:37 +0100)]
mm, slab: move slab_memcg hooks to mm/memcontrol.c
The hooks make multiple calls to functions in mm/memcontrol.c, including
to current_obj_cgroup(), which is marked __always_inline. It might be
faster to make a single call to the hook in mm/memcontrol.c instead. The
hooks also use almost nothing from mm/slub.c. obj_full_size() can move
along with the hooks, and cache_vmstat_idx() to the internal mm/slab.h.
Vlastimil Babka [Tue, 26 Mar 2024 10:37:38 +0000 (11:37 +0100)]
mm, slab: move memcg charging to post-alloc hook
Patch series "memcg_kmem hooks refactoring", v3.
This patch (of 2):
The MEMCG_KMEM integration with slab currently relies on two hooks during
allocation. memcg_slab_pre_alloc_hook() determines the objcg and charges
it, and memcg_slab_post_alloc_hook() assigns the objcg pointer to the
allocated object(s).
As Linus pointed out, this is unnecessarily complex. Failing to charge
due to memcg limits should be rare, so we can optimistically allocate the
object(s) and do the charging together with assigning the objcg pointer in
a single post_alloc hook. In the rare case where charging fails, we can
free the object(s) back.
This simplifies the code (no need to pass around the objcg pointer) and
potentially allows separating charging from allocation in cases where it's
common that the allocation would be immediately freed, so the memcg
handling overhead could be saved.
Reduce the usage of PageFlag tests and reduce the number of
compound_head() calls.
For multi-page folios, we'll now show all pages as having the flags that
apply to them, e.g. if it's dirty, all pages will have the dirty flag set
instead of just the head page. The mapped flag is still per page, as is
the hwpoison flag.
mm: convert arch_clear_hugepage_flags to take a folio
All implementations that aren't no-ops just set a bit in the flags, and we
want to use the folio flags rather than the page flags for that. Rename
it to arch_clear_hugetlb_flags() while we're touching it so nobody thinks
it's used for THP.
xtensa: remove uses of PG_arch_1 on individual pages
Since switching to the new page table range API, we disregard the
PG_arch_1 (aka dcache dirty) flag on tail pages, and only pay attention to
it on the folio. Fix these two missed spots where we were setting it on
arbitrary pages.
The first two patches are bug fixes, although I'm not sure that either
architecture will have noticed. There aren't a lot of uses of page->flags
left! The big build-up here is to reworking stable_page_flags(), which
will definitely be a user-visible change. I think a welcome one, given the
special-casing we previously needed to spread the Slab flag into all tail
pages.
This patch (of 10):
Since switching to the new page table range API, we do not set the
PG_arch_1 (aka dcache clean) flag on tail pages, only on the folio. Test
it on the folio. Also use page_mapped() instead of page_mapcount() as it
is more efficient.
mm: merge folio_is_secretmem() and folio_fast_pin_allowed() into gup_fast_folio_allowed()
folio_is_secretmem() is currently only used during GUP-fast. Nowadays,
folio_fast_pin_allowed() performs similar checks during GUP-fast and
contains a lot of careful handling -- READ_ONCE() --, sanity checks --
lockdep_assert_irqs_disabled() -- and helpful comments on how this
handling is safe and correct.
So let's merge folio_is_secretmem() into folio_fast_pin_allowed(). Rename
folio_fast_pin_allowed() to gup_fast_folio_allowed(), to better match the
new semantics.
Let's add a simple reproducer for a scenario where GUP-fast could succeed
on secretmem folios, making vmsplice() succeed instead of failing. The
reproducer is based on a reproducer [1] by Miklos Szeredi.
We want to perform two tests: vmsplice() when a fresh page was just
faulted in, and vmsplice() on an existing page after munmap() that would
drain certain LRU caches/batches in the kernel.
In an ideal world, we could use fallocate(FALLOC_FL_PUNCH_HOLE) /
MADV_REMOVE to remove any existing page. As that is currently not
possible, run the test before any other tests that would allocate memory
in the secretmem fd.
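A minimal standalone program in the spirit of that reproducer (not the
actual selftest; it needs a kernel with secretmem enabled, and the
vmsplice() call is expected to fail):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
	int fd = syscall(SYS_memfd_secret, 0);
	int pipefd[2];
	struct iovec iov;
	char *mem;

	if (fd < 0 || pipe(pipefd) < 0 || ftruncate(fd, 4096) < 0)
		return 1;
	mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (mem == MAP_FAILED)
		return 1;
	memset(mem, 0x55, 4096);	/* fault the fresh page in */

	iov.iov_base = mem;
	iov.iov_len = 4096;
	/* GUP must not pin secretmem, so this vmsplice() has to fail */
	if (vmsplice(pipefd[1], &iov, 1, 0) > 0)
		fprintf(stderr, "BUG: vmsplice() succeeded on secretmem\n");
	return 0;
}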
Perform the ftruncate() only once, and check the return value.
follow_phys is only used by two callers in arch/x86/mm/pat/memtype.c.
Move it there and hardcode the two arguments that get the same values
passed by both callers.
This series open codes follow_pfn in the only remaining caller, although
the code there remains questionable. It then also moves follow_phys into
the only user and simplifies it a bit.
This patch (of 3):
Switch from follow_pfn to follow_pte so that we can get rid of follow_pfn.
Note that this doesn't fix any of the pre-existing raciness and lack of
permission checking in the code.
Yajun Deng [Mon, 25 Mar 2024 06:32:58 +0000 (14:32 +0800)]
mm/mmap: convert all mas except mas_detach to vma iterator
There are two types of iterators, mas and vmi, in the current code. If
the maple tree comes from the mm structure, we can use the vma iterator.
Avoid using mas directly where possible.
Keep using mas for the mt_detach tree, since it doesn't come from the mm
structure.
Remove as many uses of mas as possible, but we will still have a few that
must be passed through in unmap_vmas() and free_pgtables().
Also introduce the vma_iter_reset, vma_iter_{prev, next}_range_limit and
vma_iter_area_{lowest, highest} helper functions for using the vma
iterator.
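For readers unfamiliar with the two flavors, the vma iterator wraps a
maple state tied to an mm; a toy example of its use (purely illustrative,
caller must hold the mmap lock):

/* walk all VMAs of an mm with the vma iterator rather than a raw maple state */
static unsigned long count_vmas(struct mm_struct *mm)
{
	VMA_ITERATOR(vmi, mm, 0);
	struct vm_area_struct *vma;
	unsigned long nr = 0;

	for_each_vma(vmi, vma)
		nr++;

	return nr;
}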
Baoquan He [Mon, 25 Mar 2024 14:56:46 +0000 (22:56 +0800)]
mm/mm_init.c: remove arch_reserved_kernel_pages()
Since the current calculation in calc_nr_kernel_pages() already takes
kernel-reserved memory into consideration, there is no need for
arch_reserved_kernel_pages() any more.
Baoquan He [Mon, 25 Mar 2024 14:56:44 +0000 (22:56 +0800)]
mm/mm_init.c: remove meaningless calculation of zone->managed_pages in free_area_init_core()
Currently, in free_area_init_core(), when initializing a zone's fields, a
rough value is set for zone->managed_pages. That value is calculated as
(zone->present_pages - memmap_pages).
In the meantime, that value is added to nr_all_pages and nr_kernel_pages,
which represent all free pages of the system (including HIGHMEM memory, or
only low memory, respectively). Both of them are going to be used in
alloc_large_system_hash().
However, the rough calculation and setting of zone->managed_pages is
meaningless because
a) memmap pages are allocated in units of node in sparse_init() or
alloc_node_mem_map(pgdat); the simple (zone->present_pages -
memmap_pages) is too rough to make sense for a zone;
b) the zone->managed_pages set here will be zeroed out and reset with the
actual value in mem_init() via memblock_free_all(). Before the
resetting, no buddy allocation request is issued.
Here, remove the meaningless and complicated calculation of
(zone->present_pages - memmap_pages) and directly set zone->managed_pages
to zone->present_pages for now. It will be adjusted in mem_init().
Also remove the assignment of nr_all_pages and nr_kernel_pages in
free_area_init_core(). Instead, call the newly added
calc_nr_kernel_pages() to count up all free but not reserved memory in
memblock and assign it to nr_all_pages and nr_kernel_pages. The counting
excludes memmap_pages and other kernel-used data, which is more accurate
and simpler than the old way, and can also cover the ppc-required
arch_reserved_kernel_pages() case.
Also clean up the outdated code comment above free_area_init_core().
free_area_init_core() is easy to understand now, so there is no need to
add words to explain it.
Baoquan He [Mon, 25 Mar 2024 14:56:43 +0000 (22:56 +0800)]
mm/mm_init.c: add new function calc_nr_all_pages()
This is a preparation for calculating nr_kernel_pages and nr_all_pages,
both of which will be used later in alloc_large_system_hash().
nr_all_pages counts up all free but not reserved memory in the memblock
allocator, including HIGHMEM memory, while nr_kernel_pages counts up all
free but not reserved low memory in the memblock allocator, excluding
HIGHMEM memory.
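A hedged sketch of how such a counting pass over memblock could look (the
function name comes from the series; the body is illustrative and glosses
over the HIGHMEM config details):

static void __init calc_nr_kernel_pages(void)
{
	unsigned long start_pfn, end_pfn;
	phys_addr_t start, end;
	u64 i;

	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE, &start, &end, NULL) {
		start_pfn = PFN_UP(start);
		end_pfn = PFN_DOWN(end);
		if (start_pfn >= end_pfn)
			continue;
		nr_all_pages += end_pfn - start_pfn;

		/* low memory only: clamp to max_low_pfn where HIGHMEM exists */
		end_pfn = min(end_pfn, max_low_pfn);
		if (start_pfn < end_pfn)
			nr_kernel_pages += end_pfn - start_pfn;
	}
}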
Baoquan He [Mon, 25 Mar 2024 14:56:41 +0000 (22:56 +0800)]
x86: remove unneeded memblock_find_dma_reserve()
Patch series "mm/mm_init.c: refactor free_area_init_core()".
In the function free_area_init_core(), the code calculating
zone->managed_pages and the subtraction of dma_reserve from the DMA zone
look very confusing.
From the git history, the code calculating zone->managed_pages was
originally for zone->present_pages. The early rough assignment was meant
to optimize the zone's pcp and watermark setting. Later, managed_pages
was introduced into the zone to represent the number of pages managed by
buddy.
Now, zone->managed_pages is zeroed out and reset in mem_init() when
calling memblock_free_all(). The zone's pcp and wmark setting relying on
the actual zone->managed_pages is done later than the mem_init()
invocation. So we don't need to rush to calculate and set
zone->managed_pages early; just set it to zone->present_pages and adjust
it in mem_init().
Also add a new function calc_nr_kernel_pages() to count up free but not
reserved pages in memblock, then assign the result to nr_all_pages and
nr_kernel_pages after memmap pages are allocated.
This patch (of 6):
The variable dma_reserve and its usage were introduced in commit
0e0b864e069c ("[PATCH] Account for memmap and optionally the kernel image
as holes"). Its original purpose was to account for the reserved pages in
the DMA zone to make the DMA zone's watermark calculation more accurate on
x86.
However, currently there's zone->managed_pages to account for all
available pages for buddy, and zone->present_pages to account for all
present physical pages in the zone. More importantly, on x86, calculating
and setting zone->managed_pages is a temporary move: all zones'
managed_pages will be zeroed out and reset to the actual value according
to how many pages are added to the buddy allocator in mem_init(). Before
mem_init(), no buddy allocation is requested. And the zone's pcp and
watermark setting are all done after mem_init(). So there is no need to
worry about the DMA zone's setting accuracy during free_area_init().
Hence, remove memblock_find_dma_reserve() to stop calculating and setting
dma_reserve.
Kairui Song [Mon, 15 Apr 2024 17:18:56 +0000 (01:18 +0800)]
mm/filemap: optimize filemap folio adding
Instead of doing multiple tree walks, do one optimistic range check with
the lock held, and exit if we raced with another insertion. If a shadow
exists, check it with the new xas_get_order helper before releasing the
lock, to avoid a redundant tree walk for getting its order.
Drop the lock and do the allocation only if a split is needed.
In the best case, it only needs to walk the tree once. If it needs to
alloc and split, 3 walks are issued (one for the first ranged conflict
check and order retrieval, one for the second check after allocation, and
one for the insert after the split).
Testing with 4K pages, in an 8G cgroup, with 16G brd as block device:
Kairui Song [Mon, 15 Apr 2024 17:18:54 +0000 (01:18 +0800)]
mm/filemap: clean up hugetlb exclusion code
__filemap_add_folio only has two callers: one never passes a hugetlb
folio and one always passes in a hugetlb folio. So move the
hugetlb-related cgroup charging out of it to make the code cleaner.
Kairui Song [Mon, 15 Apr 2024 17:18:53 +0000 (01:18 +0800)]
mm/filemap: return early if failed to allocate memory for split
Patch series "mm/filemap: optimize folio adding and splitting", v4.
Currently, at least 3 tree walks are needed for filemap folio adding if
the folio was previously evicted: one for getting the order of the current
slot, one for the ranged conflict check, and one for another order
retrieval. If a split is needed, more walks are needed.
This series tries to merge these walks and speed up filemap_add_folio();
I see a 7.5% - 12.5% performance gain for the fio stress test.
So instead of doing multiple tree walks, do one optimistic range check
with the lock held, and exit if we raced with another insertion. If a
shadow exists, check it with the new xas_get_order helper before releasing
the lock, to avoid a redundant tree walk for getting its order.
Drop the lock and do the allocation only if a split is needed.
In the best case, it only needs to walk the tree once. If it needs to
alloc and split, 3 walks are issued (one for the first ranged conflict
check and order retrieval, one for the second check after allocation, and
one for the insert after the split).
Testing with 4K pages, in an 8G cgroup, with 16G brd as block device:
mm: convert folio_estimated_sharers() to folio_likely_mapped_shared()
Callers of folio_estimated_sharers() only care about "mapped shared vs.
mapped exclusively", not the exact estimate of sharers. Let's consolidate
and unify the condition users are checking. While at it clarify the
semantics and extend the discussion on the fuzziness.
Use the "likely mapped shared" terminology to better express what the
(adjusted) function actually checks.
Whether a partially-mappable folio is more likely to not be partially
mapped than partially mapped is debatable. In the future, we might be
able to improve our estimate for partially-mappable folios, though.
Note that we will now consistently detect "mapped shared" only if the
first subpage is actually mapped multiple times. When the first subpage
is not mapped, we will consistently detect it as "mapped exclusively".
This change should currently only affect the usage in
madvise_free_pte_range() and queue_folios_pte_range() for large folios: if
the first page was already unmapped, we would have skipped the folio.
Zi Yan [Fri, 22 Mar 2024 19:33:04 +0000 (15:33 -0400)]
mm/migrate: split source folio if it is on deferred split list
If the source folio is on deferred split list, it is likely some subpages
are not used. Split it before migration to avoid migrating unused
subpages.
Commit 616b8371539a6 ("mm: thp: enable thp migration in generic path") did
not check if a THP is on deferred split list before migration, thus, the
destination THP is never put on deferred split list even if the source THP
might be. The opportunity of reclaiming free pages in a partially mapped
THP during deferred list scanning is lost, but no other harmful
consequence is present[1].
Barry Song [Wed, 6 Mar 2024 09:52:19 +0000 (22:52 +1300)]
mm: hold PTL from the first PTE while reclaiming a large folio
Within try_to_unmap_one(), page_vma_mapped_walk() races with other PTE
modifications preceded by a pte clear. While iterating over the PTEs of a
large folio, it only starts acquiring the PTL from the first valid
(present) PTE. PTE modifications can temporarily set PTEs to pte_none.
Consequently, the initial PTEs of a large folio might be skipped in
try_to_unmap_one().
For example, for an anon folio, if we skip PTE0, we may end up with PTE0
still present while PTE1 ~ PTE(nr_pages - 1) are swap entries after
try_to_unmap_one().
So the folio will still be mapped; it fails to be reclaimed and is put
back on the LRU in this round.
This also breaks PTE optimizations such as CONT-PTE on this large folio
and may lead to an accidental folio_split() afterwards. And since some of
the PTEs are now swap entries, accessing those parts will introduce
overhead - do_swap_page. Although the kernel can withstand all of the
above issues, the situation still seems quite awkward and warrants making
it more ideal.
The same race also occurs with small folios, but they have only one PTE,
so it won't be possible for them to be partially unmapped.
This patch holds the PTL from PTE0, allowing us to avoid reading PTE
values that are in the process of being transformed. With stable PTE
values, we can ensure that this large folio is either completely reclaimed
or that all PTEs remain untouched in this round.
A corner case is that if we hold the PTL from PTE0 and most of the initial
PTEs have already been really unmapped before that, we may increase the
duration of holding the PTL. Thus we only apply this optimization to
folios which are still entirely mapped (not on the deferred_split list).
Baoquan He [Sat, 9 Mar 2024 04:44:54 +0000 (12:44 +0800)]
mm/vmalloc.c: optimize to reduce arguments of alloc_vmap_area()
If called by __get_vm_area_node(), by open coding the field assignments of
'struct vm_struct *vm', and move the vm->flags and vm->caller assignments
into __get_vm_area_node(), the passed in arguments 'flags' and 'caller'
can be removed.
This alleviates overloaded arguments passed in for alloc_vmap_area().
Liu Shixin [Fri, 22 Mar 2024 09:35:55 +0000 (17:35 +0800)]
mm/filemap: don't decrease mmap_miss when folio has workingset flag
If there are too many folios that are recently evicted in a file, then
they will probably continue to be evicted. In such situation, there is no
positive effect to read-ahead this file since it is only a waste of IO.
The mmap_miss is increased in do_sync_mmap_readahead() and decreased in
both do_async_mmap_readahead() and filemap_map_pages(). In order to skip
read-ahead in above scenario, the mmap_miss have to increased exceed
MMAP_LOTSAMISS. This can be done by stop decreased mmap_miss when folio
has workingset flag. The async path is not to care because in above
scenario, it's hard to run into the async path.
Liu Shixin [Fri, 22 Mar 2024 09:35:54 +0000 (17:35 +0800)]
mm/readahead: break read-ahead loop if filemap_add_folio return -ENOMEM
Patch series "Fix I/O high when memory almost met memcg limit", v2.
Recently, when install package in a docker which almost reached its memory
limit, the installer has no respond severely for more than 15 minutes.
During this period, I/O stays high(~1G/s) and influence the whole machine.
I've constructed a use case as follows:
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
/* SIZE is not shown in the original report; 1MB per allocation is assumed */
#define SIZE (1024 * 1024)
int main(void)
{
	for (int i = 0; i < 1024 * 6 - 50; i++) {
		void *addr = malloc(SIZE);
		if (!addr)
			return -1;
		memset(addr, 0, SIZE);
	}
	sleep(99999);
	return 0;
}
We found that this problem is caused by a lot of meaningless read-ahead.
Since the docker container has almost met its memory limit, pages are
reclaimed immediately after being read ahead, and then read ahead again
immediately. The program executes slowly and wastes a lot of I/O
resources.
These two patches aim to break the read-ahead in the above scenario.
Barry Song [Fri, 22 Mar 2024 11:41:36 +0000 (00:41 +1300)]
arm64: mm: swap: support THP_SWAP on hardware with MTE
Commit d0637c505f8a1 ("arm64: enable THP_SWAP for arm64") brings up
THP_SWAP on ARM64, but it doesn't enable THP_SWP on hardware with MTE as
the MTE code works with the assumption tags save/restore is always
handling a folio with only one page.
The limitation should be removed as more and more ARM64 SoCs have this
feature. Co-existence of MTE and THP_SWAP becomes more and more
important.
This patch makes MTE tag saving support large folios, so we no longer
need to split large folios into base pages for swapping out on ARM64 SoCs
with MTE.
arch_prepare_to_swap() should take a folio rather than a page as its
parameter because we support THP swap-out as a whole. It saves the tags
for all pages in a large folio.
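A sketch of the folio-based save side (simplified from the existing MTE
swap helpers; the error path in the real code also undoes tags that were
already saved):

int arch_prepare_to_swap(struct folio *folio)
{
	long i, nr;
	int err;

	if (!system_supports_mte())
		return 0;

	nr = folio_nr_pages(folio);
	for (i = 0; i < nr; i++) {
		err = mte_save_tags(folio_page(folio, i));
		if (err)
			return err;	/* real code unwinds the saved tags here */
	}
	return 0;
}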
As we now restore tags based on the folio, in arch_swap_restore() we may
add some extra loops and early exits while refaulting a large folio which
is still in the swapcache in do_swap_page(). In case a large folio has nr
pages, do_swap_page() will only set the PTE of the particular page which
is causing the page fault. Thus do_swap_page() runs nr times, and each
time arch_swap_restore() will loop nr times over the subpages in the
folio. So right now the algorithmic complexity becomes O(nr^2).
Once we support mapping large folios in do_swap_page(), the extra loops
and early exits will decrease, though they will not be completely removed,
as a large folio might get partially tagged in corner cases such as: 1. a
large folio in the swapcache can be partially unmapped, thus MTE tags for
the unmapped pages will be invalidated; 2. users might use mprotect() to
set MTE on part of a large folio.
arch_thp_swp_supported() is dropped since ARM64 MTE was the only user that
needed it.
Baolin Wang [Wed, 6 Mar 2024 10:13:27 +0000 (18:13 +0800)]
mm: hugetlb: make the hugetlb migration strategy consistent
As discussed in previous thread [1], there is an inconsistency when
handing hugetlb migration. When handling the migration of freed hugetlb,
it prevents fallback to other NUMA nodes in
alloc_and_dissolve_hugetlb_folio(). However, when dealing with in-use
hugetlb, it allows fallback to other NUMA nodes in
alloc_hugetlb_folio_nodemask(), which can break the per-node hugetlb pool
and might result in unexpected failures when node bound workloads doesn't
get what is asssumed available.
To make the hugetlb migration strategy clearer, we should list all the
scenarios of hugetlb migration and analyze whether allocation fallback is
permitted:
1) Memory offline: will call dissolve_free_huge_pages() to free the
freed hugetlb, and call do_migrate_range() to migrate the in-use
hugetlb. Both can break the per-node hugetlb pool, but as this is an
explicit offlining operation, no better choice. So should allow the
hugetlb allocation fallback.
2) Memory failure: same as memory offline. Fallback should be allowed, as
a different node might be the only option to handle it; otherwise the
impact of poisoned memory can be amplified.
3) Longterm pinning: will call migrate_longterm_unpinnable_pages() to
migrate in-use and not-longterm-pinnable hugetlb, which can break the
per-node pool. But we should fail the longterm pinning if we cannot
allocate on the current node, to avoid breaking the per-node pool.
4) Syscalls (mbind, migrate_pages, move_pages): these are explicit user
operations to move pages to other nodes, so fallback to other nodes
should not be prohibited.
5) alloc_contig_range: used by CMA allocation and virtio-mem
fake-offline to allocate given range of pages. Now the freed hugetlb
migration is not allowed to fallback, to keep consistency, the in-use
hugetlb migration should be also not allowed to fallback.
6) alloc_contig_pages: used by kfence, pgtable_debug etc. The strategy
should be consistent with that of alloc_contig_range().
Based on the analysis of the various scenarios above, introduce a new
helper to determine whether fallback is permitted according to the
migration reason.
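A sketch of such a helper, keyed off the migration reason recorded in the
previous patch (the helper name and exact case list are illustrative, not
the final upstream interface):

static inline bool hugetlb_migration_allows_fallback(int reason)
{
	switch (reason) {
	case MR_MEMORY_HOTPLUG:		/* memory offline */
	case MR_MEMORY_FAILURE:
	case MR_SYSCALL:		/* mbind, migrate_pages, move_pages */
	case MR_MEMPOLICY_MBIND:
		return true;
	case MR_LONGTERM_PIN:
	case MR_CONTIG_RANGE:		/* alloc_contig_range/pages */
	default:
		return false;
	}
}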
Baolin Wang [Wed, 6 Mar 2024 10:13:26 +0000 (18:13 +0800)]
mm: record the migration reason for struct migration_target_control
Patch series "make the hugetlb migration strategy consistent", v2.
As discussed in a previous thread [1], there is an inconsistency when
handling hugetlb migration. When handling the migration of freed hugetlb,
it prevents fallback to other NUMA nodes in
alloc_and_dissolve_hugetlb_folio(). However, when dealing with in-use
hugetlb, it allows fallback to other NUMA nodes in
alloc_hugetlb_folio_nodemask(), which can break the per-node hugetlb pool
and might result in unexpected failures when node-bound workloads don't
get what is assumed to be available.
This patchset tries to make the hugetlb migration strategy more clear
and consistent. Please find details in each patch.
To support different hugetlb allocation strategies during hugetlb
migration based on various migration reasons, record the migration reason
in the migration_target_control structure as a preparation.
rulinhuang [Thu, 7 Mar 2024 02:14:40 +0000 (21:14 -0500)]
mm/vmalloc: eliminated the lock contention from twice to once
When allocating a new memory area where the mapping address range is
known, it is observed that the vmap_node->busy.lock is acquired twice.
The first acquisition occurs in the alloc_vmap_area() function when
inserting the vm area into the vm mapping red-black tree. The second
acquisition occurs in the setup_vmalloc_vm() function when updating the
properties of the vm, such as flags and address, etc.
Combine these two operations in alloc_vmap_area(), which improves
scalability when the vmap_node->busy.lock is contended. By doing so, the
lock only needs to be acquired once instead of twice.
With the above change, tested on an Intel Sapphire Rapids platform (224
vcpus), a 4% performance improvement is gained on stress-ng/pthread
(https://github.com/ColinIanKing/stress-ng), which is a stress test of
thread creation.
Waiman Long [Thu, 7 Mar 2024 19:05:48 +0000 (14:05 -0500)]
mm/kmemleak: disable KASAN instrumentation in kmemleak
Kmemleak ia a memory leak checker. KASAN is also a memory checker but it
focuses more on finding out-of-bounds and use-after-free bugs. Since
kmemleak is inherently slow especially on systems with large number of
CPUs, adding KASAN instrumentation will make it slower even more. As
kmemleak is not for production use, the utility of enabling KASAN there is
questionable.
This patch disables KASAN instrumentation for configurations that enable
both of them to slightly reduce performance overhead.
Waiman Long [Thu, 7 Mar 2024 19:05:47 +0000 (14:05 -0500)]
mm/kmemleak: compact kmemleak_object further
Patch series "mm/kmemleak: Minor cleanup & performance tuning".
This series contains 2 simple cleanup patches to slightly reduce memory
and performance overhead.
This patch (of 2):
With commit 56a61617dd22 ("mm: use stack_depot for recording kmemleak's
backtrace"), the size of kmemleak_object has been reduced by 128 bytes for
64-bit arches. The replacement "depot_stack_handle_t trace_handle" is
actually just 4 bytes long leaving a hole of 4 bytes. By moving up
trace_handle to another existing 4-byte hold, we can save 8 more bytes
from kmemleak_object reducing its overall size from 248 to 240 bytes.
Yosry Ahmed [Fri, 22 Mar 2024 00:10:01 +0000 (00:10 +0000)]
mm: zswap: remove nr_zswap_stored atomic
nr_stored was introduced by commit b5ba474f3f51 ("zswap: shrink zswap pool
based on memory pressure") as a per zswap_pool counter of the number of
stored pages that are not same-filled pages. It is used in
zswap_shrinker_count() to scale the number of freeable compressed pages by
the compression ratio. That is, to reduce the amount of writeback from
zswap with higher compression ratios as the ROI from IO diminishes.
Later on, commit bf9b7df23cb3 ("mm/zswap: global lru and shrinker shared
by all zswap_pools") made the shrinker global (not per zswap_pool), and
replaced nr_stored with nr_zswap_stored (initially introduced as
zswap.nr_stored), which is now a global counter.
The counter is now awfully close to zswap_stored_pages. The only
difference is that the latter also includes same-filled pages. Also, when
memcgs are enabled, we use memcg_page_state(memcg, MEMCG_ZSWAPPED), which
includes same-filled pages anyway (i.e. equivalent to
zswap_stored_pages).
Use zswap_stored_pages instead in zswap_shrinker_count() to keep things
consistent whether memcgs are enabled or not, and add a comment about the
number of freeable pages possibly being scaled down more than it should if
we have lots of same-filled pages (i.e. inflated compression ratio).
Remove nr_zswap_stored and one atomic operation in the store and free
paths.
mm: page_alloc: change move_freepages() to __move_freepages_block()
The function is now supposed to be called only on a single pageblock and
checks start_pfn and end_pfn accordingly. Rename it to make this more
obvious and drop the end_pfn parameter which can be determined trivially
and none of the callers use it for anything else.
Also make the (now internal) end_pfn exclusive, which is more common.
Johannes Weiner [Wed, 20 Mar 2024 18:02:15 +0000 (14:02 -0400)]
mm: page_alloc: consolidate free page accounting
Free page accounting currently happens a bit too high up the call stack,
where it has to deal with guard pages, compaction capturing, block
stealing and even page isolation. This is subtle and fragile, and makes
it difficult to hack on the code.
Now that type violations on the freelists have been fixed, push the
accounting down to where pages enter and leave the freelist.
Johannes Weiner [Wed, 20 Mar 2024 18:02:14 +0000 (14:02 -0400)]
mm: page_isolation: prepare for hygienic freelists
Page isolation currently sets MIGRATE_ISOLATE on a block, then drops
zone->lock and scans the block for straddling buddies to split up.
Because this happens non-atomically wrt the page allocator, it's possible
for allocations to get a buddy whose first block is a regular pcp
migratetype but whose tail is isolated. This means that in certain cases
memory can still be allocated after isolation. It will also trigger the
freelist type hygiene warnings in subsequent patches.
Introduce a variant of move_freepages_block() provided by the allocator
specifically for page isolation; it moves free pages, converts the block,
and handles the splitting of straddling buddies while holding zone->lock.
The allocator knows that pageblocks and buddies are always naturally
aligned, which means that buddies can only straddle blocks if they're
actually >pageblock_order. This means the search-and-split part can be
simplified compared to what page isolation used to do.
Also tighten up the page isolation code around the expectations of which
pages can be large, and how they are freed.
Based on extensive discussions with and invaluable input from Zi Yan.
Zi Yan [Wed, 20 Mar 2024 18:02:13 +0000 (14:02 -0400)]
mm: page_alloc: set migratetype inside move_freepages()
This avoids changing migratetype after move_freepages() or
move_freepages_block(), which is error prone. It also prepares for
upcoming changes to fix move_freepages() not moving free pages partially
in the range.
Johannes Weiner [Wed, 20 Mar 2024 18:02:12 +0000 (14:02 -0400)]
mm: page_alloc: close migratetype race between freeing and stealing
There are three freeing paths that read the page's migratetype
optimistically before grabbing the zone lock. When this races with block
stealing, those pages go on the wrong freelist.
The paths in question are:
- when freeing >costly orders that aren't THP
- when freeing pages to the buddy upon pcp lock contention
- when freeing pages that are isolated
- when freeing pages initially during boot
- when freeing the remainder in alloc_pages_exact()
- when "accepting" unaccepted VM host memory before first use
- when freeing pages during unpoisoning
None of these are so hot that they would need this optimization at the
cost of hampering defrag efforts. Especially when contrasted with the
fact that the most common buddy freeing path - free_pcppages_bulk - is
checking the migratetype under the zone->lock just fine.
In addition, isolated pages need to look up the migratetype under the lock
anyway, which adds branches to the locked section, and results in a double
lookup when the pages are in fact isolated.
Johannes Weiner [Wed, 20 Mar 2024 18:02:11 +0000 (14:02 -0400)]
mm: page_alloc: fix freelist movement during block conversion
Currently, page block type conversion during fallbacks, atomic
reservations and isolation can strand various amounts of free pages on
incorrect freelists.
For example, fallback stealing moves free pages in the block to the new
type's freelists, but then may not actually claim the block for that type
if there aren't enough compatible pages already allocated.
In all cases, free page moving might fail if the block straddles more than
one zone, in which case no free pages are moved at all, but the block type
is changed anyway.
This is detrimental to type hygiene on the freelists. It encourages
incompatible page mixing down the line (ask for one type, get another) and
thus contributes to long-term fragmentation.
Split the process into a proper transaction: check first if conversion
will happen, then try to move the free pages, and only if that was
successful convert the block to the new type.
Johannes Weiner [Wed, 20 Mar 2024 18:02:10 +0000 (14:02 -0400)]
mm: page_alloc: fix move_freepages_block() range error
When a block is partially outside the zone of the cursor page, the
function cuts the range to the pivot page instead of the zone start. This
can leave large parts of the block behind, which encourages incompatible
page mixing down the line (ask for one type, get another), and thus
long-term fragmentation.
This triggers reliably on the first block in the DMA zone, whose start_pfn
is 1. The block is stolen, but everything before the pivot page (which
was often hundreds of pages) is left on the old list.
Johannes Weiner [Wed, 20 Mar 2024 18:02:09 +0000 (14:02 -0400)]
mm: page_alloc: move free pages when converting block during isolation
When claiming a block during compaction isolation, move any remaining free
pages to the correct freelists as well, instead of stranding them on the
wrong list. Otherwise, this encourages incompatible page mixing down the
line, and thus long-term fragmentation.
Johannes Weiner [Wed, 20 Mar 2024 18:02:08 +0000 (14:02 -0400)]
mm: page_alloc: fix up block types when merging compatible blocks
The buddy allocator coalesces compatible blocks during freeing, but it
doesn't update the types of the subblocks to match. When an allocation
later breaks the chunk down again, its pieces will be put on freelists of
the wrong type. This encourages incompatible page mixing (ask for one
type, get another), and thus long-term fragmentation.
Update the subblocks when merging a larger chunk, such that a later
expand() will maintain freelist type hygiene.
Patch series "mm: page_alloc: freelist migratetype hygiene", v4.
The page allocator's mobility grouping is intended to keep unmovable pages
separate from reclaimable/compactable ones to allow on-demand
defragmentation for higher-order allocations and huge pages.
Currently, there are several places where accidental type mixing occurs:
an allocation asks for a page of a certain migratetype and receives
another. This ruins pageblocks for compaction, which in turn makes
allocating huge pages more expensive and less reliable.
The series addresses those causes. The last patch adds type checks on all
freelist movements to prevent new violations being introduced.
The benefits can be seen in a mixed workload that stresses the machine
with a memcache-type workload and a kernel build job while periodically
attempting to allocate batches of THP. The following data is aggregated
over 50 consecutive defconfig builds:
Huge page success rate is higher, allocation latencies are shorter and
more predictable.
Stealing (fallback) rate is drastically reduced. Notably, while the
vanilla kernel keeps doing fallbacks on an ongoing basis, the patched
kernel enters a steady state once the distribution of block types is
adequate for the workload. Steals over 50 runs:
Type mismatches are down too. Those count every time an allocation
request asks for one migratetype and gets another. This can still occur
minimally in the patched kernel due to non-stealing fallbacks, but it's
quite rare and follows the pattern of overall fallbacks - once the block
type distribution settles, mismatches cease as well:
Compaction work is markedly reduced despite much better THP rates.
In the vanilla kernel, reclaim seems to have been driven primarily by
watermark boosting that happens as a result of fallbacks. With those all
but eliminated, watermarks average lower and kswapd does less work. The
uptick in direct reclaim is because THP requests have to fend for
themselves more often - which is intended policy right now. Aggregate
reclaim activity is lowered significantly, though.
This patch (of 10):
The idea behind the cache is to save get_pageblock_migratetype() lookups
during bulk freeing. A microbenchmark suggests this isn't helping,
though. The pcp migratetype can get stale, which means that bulk freeing
has an extra branch to check if the pageblock was isolated while on the
pcp.
While the variance overlaps, the cache write and the branch seem to make
this a net negative. The following test allocates and frees batches of
10,000 pages (~3x the pcp high marks to trigger flushing):
0.0846799 +- 0.0000412 seconds time elapsed ( +- 0.05% )
A side effect is that this also ensures that pages whose pageblock gets
stolen while on the pcplist end up on the right freelist and we don't
perform potentially type-incompatible buddy merges (or skip merges when we
shouldn't), which is likely beneficial to long-term fragmentation
management, although the effects would be harder to measure. Settle for
simpler and faster code as justification here.
Peter Xu [Thu, 21 Mar 2024 21:50:47 +0000 (17:50 -0400)]
selftests/mm: run_vmtests.sh: fix hugetlb mem size calculation
The script calculates a minimum required size of hugetlb memory, but it
stops working with <1MB huge page sizes, reporting all zeros even if huge
pages are available.
In reality, the calculation doesn't really need to be that complicated.
Make it simpler so it also works for KB-level hugepages.
Dev Jain [Thu, 21 Mar 2024 10:35:22 +0000 (16:05 +0530)]
selftests/mm: confirm VA exhaustion without reliance on correctness of mmap()
Currently, VA exhaustion is checked by passing a hint to mmap() and
expecting it to fail.
While populating the lower VA space, mmap() fails because we have
exhausted the space. Then, in validate_lower_address_hint(), because
mmap() fails, we confirm that we have indeed exhausted the space. There
is circular logic involved here.
Assume that there is a bug in mmap(), and assume that it exists
independent of whether you pass a hint address or not; that for some
reason it is not able to find a 1GB chunk. My idea is to assert the
exhaustion against some other method.
This patch makes the test stricter by writing /proc/self/maps out to a
dump file with successful write() calls, confirming that a free chunk is
indeed not available.
We no longer have destructors or dtors, merely a page flag (technically a
page type flag, but that's an implementation detail). Remove
__clear_hugetlb_destructor, fix up comments and the occasional variable
name.
For pages that have a page_type, set the mapcount to 0, which will reduce
the confusion in people reading page dumps ("Why does this page have a
mapcount of -128?"). Now that hugetlbfs is a page_type, read the
entire_mapcount for any large folio; this is fine for all folios as no
user reuses the entire_mapcount field.
For pages which do not have a page type, do not print it to reduce
clutter.
Reclaim the Slab page flag by using a spare bit in PageType. We are
perennially short of page flags for various purposes, and now that the
original SLAB allocator has been retired, SLUB does not use the
mapcount/page_type field. This lets us remove a number of special cases
for ignoring mapcount on Slab pages.
Now that prep_compound_page() initialises folio->_deferred_list,
folio_prep_large_rmappable()'s only purpose is to set the large_rmappable
flag, so inline it into the two callers. Take the opportunity to convert
the large_rmappable definition from PAGEFLAG to FOLIO_FLAG and remove the
existence of PageTestLargeRmappable and friends.
These patches all interact in annoying ways which make it tricky to send
them out in any way other than a big batch, even though there's not really
an overarching theme to connect them.
The big effects of this patch series are:
- folio_test_hugetlb() becomes reliable, even when called without a
page reference
- We free up PG_slab, and we could always use more page flags
- We no longer need to check PageSlab before calling page_mapcount()
This patch (of 9):
For compound pages which are at least order-2 (and hence have a
deferred_list), initialise it and then we can check at free that the page
is not part of a deferred list. We recently found this useful to rule out
a source of corruption.
The system will immediately fill up the stack and crash when both
CONFIG_DEBUG_KMEMLEAK and CONFIG_MEM_ALLOC_PROFILING are enabled. Avoid
allocation tagging of kmemleak caches, otherwise recursive allocation
tracking occurs.
alloc_tag: Tighten file permissions on /proc/allocinfo
The /proc/allocinfo file exposes a tremendous amount of information about
kernel build details, memory allocations (obviously), and potentially even
image layout (due to ordering). As this is intended to be consumed by
system owners (like /proc/slabinfo), use the same file permissions as
there: 0400.