mm, mmap: limit THP alignment of anonymous mappings to PMD-aligned sizes
author     Vlastimil Babka <[email protected]>
           Thu, 24 Oct 2024 15:12:29 +0000 (17:12 +0200)
committer  Andrew Morton <[email protected]>
           Fri, 1 Nov 2024 03:27:04 +0000 (20:27 -0700)
Since commit efa7df3e3bb5 ("mm: align larger anonymous mappings on THP
boundaries"), an mmap() of anonymous memory without a specific address
hint and of at least PMD_SIZE is aligned to a PMD boundary so that it
can benefit from a THP backing page.
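
For illustration only, here is a minimal userspace sketch of that
behaviour (not part of the patch; it assumes an x86-64 machine with
4 kB base pages, a 2 MB PMD_SIZE and a kernel containing
efa7df3e3bb5):

#include <stdio.h>
#include <sys/mman.h>

#define PMD_SIZE (2UL << 20)    /* assumed: 2 MB PMD, x86-64, 4 kB pages */

int main(void)
{
        /* anonymous, no address hint, size >= PMD_SIZE */
        void *p = mmap(NULL, PMD_SIZE, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
                return 1;

        printf("addr %p, PMD-aligned: %s\n", p,
               ((unsigned long)p & (PMD_SIZE - 1)) ? "no" : "yes");
        munmap(p, PMD_SIZE);
        return 0;
}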

However, this change has been shown to regress some workloads
significantly.  [1] reports regressions in various SPEC benchmarks,
with up to a 600% slowdown of the cactusBSSN benchmark on some
platforms.  The benchmark appears to create many mappings of 4632 kB,
which would have merged into a large THP-backed area before commit
efa7df3e3bb5 but are now fragmented into multiple areas, each aligned
to a PMD boundary with gaps between them.  The regression seems to be
caused mainly by the benchmark's memory access pattern suffering from
TLB or cache aliasing due to the aligned boundaries of the individual
areas.
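
The fragmentation can be observed with a rough sketch like the one
below (again illustrative only; it assumes a 2 MB PMD_SIZE and the
usual top-down mmap layout, and uses 4632 kB to mimic the benchmark's
mapping size from [1]).  On a kernel with efa7df3e3bb5 the printed
gaps are typically nonzero, so the areas cannot merge into one large
VMA; with this patch applied the mappings can again be placed back to
back and merged:

#include <stdio.h>
#include <sys/mman.h>

#define SZ (4632UL * 1024)      /* odd size, not a multiple of 2 MB */

int main(void)
{
        char *prev = NULL;

        for (int i = 0; i < 4; i++) {
                char *p = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

                if (p == MAP_FAILED)
                        return 1;
                /* with top-down layout the new mapping sits below the old */
                if (prev)
                        printf("gap between mappings: %ld bytes\n",
                               (long)(prev - (p + SZ)));
                prev = p;
        }
        return 0;
}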

Another known regression bisected to commit efa7df3e3bb5 is darktable
[2] [3], and early testing suggests this patch fixes that regression
as well.

To fix the regression while still trying to benefit from THP-friendly
anonymous mapping alignment, add the condition that the size of the
mapping must be a multiple of PMD size instead of merely at least PMD
size.  Many odd-sized mappings like the ones cactusBSSN creates will
then stop being aligned with gaps between them and will instead
naturally merge again.
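
Expressed in userspace terms, the new condition is roughly the
following (a sketch; PMD_SIZE is again assumed to be 2 MB and the
IS_ALIGNED() helper is reproduced here for power-of-two alignments
only):

#include <stdio.h>

#define PMD_SIZE (2UL << 20)
/* mirrors the kernel macro for power-of-two 'a' */
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

int main(void)
{
        unsigned long sizes[] = { 2048UL << 10, 4096UL << 10, 4632UL << 10 };

        for (int i = 0; i < 3; i++)
                printf("%4lu kB: %s\n", sizes[i] >> 10,
                       IS_ALIGNED(sizes[i], PMD_SIZE) ?
                       "still THP-aligned" : "left unaligned, free to merge");
        return 0;
}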

Link: https://lkml.kernel.org/r/[email protected]
Fixes: efa7df3e3bb5 ("mm: align larger anonymous mappings on THP boundaries")
Signed-off-by: Vlastimil Babka <[email protected]>
Reported-by: Michael Matz <[email protected]>
Debugged-by: Gabriel Krisman Bertazi <[email protected]>
Closes: https://bugzilla.suse.com/show_bug.cgi?id=1229012 [1]
Reported-by: Matthias Bodenbinder <[email protected]>
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=219366 [2]
Closes: https://lore.kernel.org/all/[email protected]/ [3]
Reviewed-by: Lorenzo Stoakes <[email protected]>
Reviewed-by: Yang Shi <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Jann Horn <[email protected]>
Cc: Liam R. Howlett <[email protected]>
Cc: Petr Tesarik <[email protected]>
Cc: Thorsten Leemhuis <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
diff --git a/mm/mmap.c b/mm/mmap.c
index 1e0e34cb993f18c8fe08534a2bbab605b3c70f6e..9841b41e3c7626f8df7a3f01d9e59c0925604f8a 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -900,7 +900,8 @@ __get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 
        if (get_area) {
                addr = get_area(file, addr, len, pgoff, flags);
-       } else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
+       } else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)
+                  && IS_ALIGNED(len, PMD_SIZE)) {
                /* Ensures that larger anonymous mappings are THP aligned. */
                addr = thp_get_unmapped_area_vmflags(file, addr, len,
                                                     pgoff, flags, vm_flags);