linux.git/log
3 years ago  userfaultfd/selftests: add pagemap uffd-wp test
Peter Xu [Thu, 1 Jul 2021 01:49:13 +0000 (18:49 -0700)]
userfaultfd/selftests: add pagemap uffd-wp test

Add one anonymous-memory specific test to start using pagemap.  With pagemap
support, we can directly read the uffd-wp bit from the pgtable without
triggering any fault, so it's easier to do sanity checks in unit tests.

Meanwhile this test also leverages the newly introduced MADV_PAGEOUT madvise
function to test swap ptes with the uffd-wp bit set, and across fork()s.
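
For context, a minimal userspace sketch of the idea (not the selftest itself;
the pagemap bit position and this tiny helper are assumptions for
illustration):

  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #ifndef MADV_PAGEOUT
  #define MADV_PAGEOUT 21        /* reclaim hint, in case libc headers lack it */
  #endif

  /* Read the 64-bit /proc/self/pagemap entry for one virtual address. */
  static uint64_t pagemap_entry(void *addr)
  {
          uint64_t ent = 0;
          int fd = open("/proc/self/pagemap", O_RDONLY);

          pread(fd, &ent, sizeof(ent),
                ((uintptr_t)addr / getpagesize()) * sizeof(ent));
          close(fd);
          return ent;
  }

  int main(void)
  {
          size_t len = getpagesize();
          char *page = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

          page[0] = 1;                            /* fault the page in */
          /* ... UFFDIO_WRITEPROTECT on this range would go here ... */
          madvise(page, len, MADV_PAGEOUT);       /* turn the pte into a swap pte */

          /* Bit 57 is assumed to be the uffd-wp bit exported by the pagemap patch. */
          printf("uffd-wp: %d\n", !!(pagemap_entry(page) & (1ULL << 57)));
          return 0;
  }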

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peter Xu <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Axel Rasmussen <[email protected]>
Cc: Brian Geffon <[email protected]>
Cc: "Dr . David Alan Gilbert" <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Jerome Glisse <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Lokesh Gidra <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Mina Almasry <[email protected]>
Cc: Oliver Upton <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Shuah Khan <[email protected]>
Cc: Stephen Rothwell <[email protected]>
Cc: Wang Qing <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm/pagemap: export uffd-wp protection information
Peter Xu [Thu, 1 Jul 2021 01:49:09 +0000 (18:49 -0700)]
mm/pagemap: export uffd-wp protection information

Export the PTE/PMD status of uffd-wp to pagemap too.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peter Xu <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Axel Rasmussen <[email protected]>
Cc: Brian Geffon <[email protected]>
Cc: "Dr . David Alan Gilbert" <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Jerome Glisse <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Lokesh Gidra <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Mina Almasry <[email protected]>
Cc: Oliver Upton <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Shuah Khan <[email protected]>
Cc: Stephen Rothwell <[email protected]>
Cc: Wang Qing <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm/userfaultfd: fail uffd-wp registration if not supported
Peter Xu [Thu, 1 Jul 2021 01:49:06 +0000 (18:49 -0700)]
mm/userfaultfd: fail uffd-wp registration if not supported

We should fail uffd-wp registration immediately if the arch does not even
have CONFIG_HAVE_ARCH_USERFAULTFD_WP defined.  That will also block the
relevant ioctls, e.g. UFFDIO_WRITEPROTECT, because they check against
VM_UFFD_WP, which can only be applied by a successful registration.

Remove the WP feature bit too for those archs when handling UFFDIO_API
ioctl.
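
For reference, a minimal sketch of how userspace would probe for WP support
before registering (error handling elided; the mapped range is left as a
placeholder):

  #include <fcntl.h>
  #include <linux/userfaultfd.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  int main(void)
  {
          int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
          struct uffdio_api api = { .api = UFFD_API, .features = 0 };
          struct uffdio_register reg;

          ioctl(uffd, UFFDIO_API, &api);

          /* Without arch uffd-wp support the feature bit is not advertised,
           * and (with this patch) a WP registration fails right away. */
          if (!(api.features & UFFD_FEATURE_PAGEFAULT_FLAG_WP)) {
                  fprintf(stderr, "uffd-wp not supported on this kernel/arch\n");
                  return 1;
          }

          reg.range.start = 0;    /* fill in an actually mapped range */
          reg.range.len = 0;
          reg.mode = UFFDIO_REGISTER_MODE_WP;
          return ioctl(uffd, UFFDIO_REGISTER, &reg) ? 1 : 0;
  }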

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peter Xu <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Axel Rasmussen <[email protected]>
Cc: Brian Geffon <[email protected]>
Cc: "Dr . David Alan Gilbert" <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Jerome Glisse <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Lokesh Gidra <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Mina Almasry <[email protected]>
Cc: Oliver Upton <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Shuah Khan <[email protected]>
Cc: Stephen Rothwell <[email protected]>
Cc: Wang Qing <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm/userfaultfd: fix uffd-wp special cases for fork()
Peter Xu [Thu, 1 Jul 2021 01:49:02 +0000 (18:49 -0700)]
mm/userfaultfd: fix uffd-wp special cases for fork()

We tried to do something similar in b569a1760782 ("userfaultfd: wp: drop
_PAGE_UFFD_WP properly when fork") previously, but it did not get everything
right.  A few fixes around the code path:

1. We were referencing VM_UFFD_WP vm_flags on the _old_ vma rather
   than the new vma.  That was overlooked in b569a1760782, so it won't work
   as expected.  Thanks to the recent rework of the fork code
   (7a4830c380f3a8b3), we can easily get the new vma now, so switch the
   checks to that.

2. Dropping the uffd-wp bit in copy_huge_pmd() could be wrong if the
   huge pmd is a migration huge pmd.  When it happens, instead of using
   pmd_uffd_wp(), we should use pmd_swp_uffd_wp().  The fix is simply to
   handle them separately.

3. We forgot to carry over the uffd-wp bit for a write migration huge pmd
   entry.  This also happens in copy_huge_pmd(), where we convert a
   write huge migration entry into a read one.

4. In copy_nonpresent_pte(), drop uffd-wp if necessary for swap ptes.

5. In copy_present_page(), when COW is enforced during fork(), we also
   need to pass over the uffd-wp bit if VM_UFFD_WP is armed on the new
   vma and the pte to be copied has the uffd-wp bit set.

Remove the comment in copy_present_pte() about this.  Commenting only there
would not help much, and commenting everywhere would be overkill.  Let's
assume the commit messages will suffice.

[[email protected]: fix a few thp pmd missing uffd-wp bit]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: b569a1760782f ("userfaultfd: wp: drop _PAGE_UFFD_WP properly when fork")
Signed-off-by: Peter Xu <[email protected]>
Cc: Jerome Glisse <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Axel Rasmussen <[email protected]>
Cc: Brian Geffon <[email protected]>
Cc: "Dr . David Alan Gilbert" <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Lokesh Gidra <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Mina Almasry <[email protected]>
Cc: Oliver Upton <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Shuah Khan <[email protected]>
Cc: Stephen Rothwell <[email protected]>
Cc: Wang Qing <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm/thp: simplify copying of huge zero page pmd when fork
Peter Xu [Thu, 1 Jul 2021 01:48:59 +0000 (18:48 -0700)]
mm/thp: simplify copying of huge zero page pmd when fork

Patch series "mm/uffd: Misc fix for uffd-wp and one more test".

This series tries to fix some corner case bugs for uffd-wp on either thp
or fork().  It then introduces a new test using pagemap/pageout.

Patch layout:

Patch 1:    cleanup for THP, it'll slightly simplify the follow up patches
Patch 2-4:  misc fixes for uffd-wp here and there; please refer to each patch
Patch 5:    add pagemap support for uffd-wp
Patch 6:    add pagemap/pageout test for uffd-wp

The last test introduced can also verify some of the fixes in the previous
patches, as the test will fail without them.  However, it's not easy to
verify all the changes in patches 2-4, but hopefully they can still be
properly reviewed.

Note that, considering the ongoing uffd-wp shmem & hugetlbfs work, patch 5
will be incomplete as it's missing e.g. the hugetlbfs part and the special
swap pte detection.  However, that's not needed in this series, and since
that series is still under review, this series does not depend on it (the
last test only runs with anonymous memory, not file-backed).  So this
series can be merged even before that one.

This patch (of 6):

The huge zero page is handled in a special path in copy_huge_pmd(); however,
it should share most of the code with a normal thp page.  Try to share more
code with it by removing the special path.  The only leftover so far is the
huge zero page refcounting (mm_get_huge_zero_page()), because that's done
separately with a global counter.

This prepares for a future patch to modify the huge pmd to be installed,
so that we don't need to duplicate it explicitly into huge zero page case
too.

Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peter Xu <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Mike Kravetz <[email protected]>, [email protected]
Cc: Mike Rapoport <[email protected]>
Cc: Axel Rasmussen <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Jerome Glisse <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Brian Geffon <[email protected]>
Cc: "Dr . David Alan Gilbert" <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: Lokesh Gidra <[email protected]>
Cc: Mina Almasry <[email protected]>
Cc: Oliver Upton <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Shuah Khan <[email protected]>
Cc: Stephen Rothwell <[email protected]>
Cc: Wang Qing <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  userfaultfd/selftests: unify error handling
Peter Xu [Thu, 1 Jul 2021 01:48:55 +0000 (18:48 -0700)]
userfaultfd/selftests: unify error handling

Introduce err()/_err() and replace all the different ways to fail the
program, mostly "fprintf" and "perror" with tons of exit() calls.  Always
stop the test program at any failure.
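
A sketch of the shape of such helpers (the exact bodies in the selftest may
differ; this is illustrative):

  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Print an error with errno context, but keep running. */
  #define _err(fmt, ...)                                                  \
          fprintf(stderr, "ERROR (%s, errno=%d, %s:%d): " fmt "\n",       \
                  strerror(errno), errno, __FILE__, __LINE__, ##__VA_ARGS__)

  /* Print an error and stop the test program at once. */
  #define err(fmt, ...)                           \
          do {                                    \
                  _err(fmt, ##__VA_ARGS__);       \
                  exit(1);                        \
          } while (0)

  /* Usage: replaces the old fprintf()/perror() + exit() pairs, e.g.
   *         if (posix_memalign(&area, page_size, size))
   *                 err("posix_memalign() failed");
   */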

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Axel Rasmussen <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Brian Geffon <[email protected]>
Cc: "Dr . David Alan Gilbert" <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Jerome Glisse <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Lokesh Gidra <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Mina Almasry <[email protected]>
Cc: Oliver Upton <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Shuah Khan <[email protected]>
Cc: Stephen Rothwell <[email protected]>
Cc: Wang Qing <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  userfaultfd/selftests: only dump counts if mode enabled
Peter Xu [Thu, 1 Jul 2021 01:48:52 +0000 (18:48 -0700)]
userfaultfd/selftests: only dump counts if mode enabled

WP and MINOR modes are conditionally enabled on specific memory types.
This patch avoids dumping tons of zeros for those cases when the modes are
not supported at all.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Axel Rasmussen <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Brian Geffon <[email protected]>
Cc: "Dr . David Alan Gilbert" <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Jerome Glisse <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Lokesh Gidra <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Mina Almasry <[email protected]>
Cc: Oliver Upton <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Shuah Khan <[email protected]>
Cc: Stephen Rothwell <[email protected]>
Cc: Wang Qing <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  userfaultfd/selftests: dropping VERIFY check in locking_thread
Peter Xu [Thu, 1 Jul 2021 01:48:48 +0000 (18:48 -0700)]
userfaultfd/selftests: dropping VERIFY check in locking_thread

The VERIFY check compares against all zeros and loops quite a few times.
However, after that we verify the same page with count_verify, and
count_verify can never be zero.  So if it is a zero page we will detect it
anyway with the code below.

There is yet another place where we conditionally check the fault flag - just
do it unconditionally.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Axel Rasmussen <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Brian Geffon <[email protected]>
Cc: "Dr . David Alan Gilbert" <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Jerome Glisse <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Lokesh Gidra <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Mina Almasry <[email protected]>
Cc: Oliver Upton <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Shuah Khan <[email protected]>
Cc: Stephen Rothwell <[email protected]>
Cc: Wang Qing <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  userfaultfd/selftests: remove the time() check on delayed uffd
Peter Xu [Thu, 1 Jul 2021 01:48:44 +0000 (18:48 -0700)]
userfaultfd/selftests: remove the time() check on delayed uffd

There is no guarantee that time() will return the same value for the two
calls even if there is no delay, e.g. when a fault happens to cross a second
boundary.  Meanwhile, this message is also not that helpful, since a delay
could happen for many reasons, e.g. scheduling latency of the resolving
thread; it does not necessarily indicate an issue with uffd.

Nor have I seen this error triggered in past runs.  Even if it triggers, it
will be drowned out by the rest of the test logs.  Remove it.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Axel Rasmussen <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Brian Geffon <[email protected]>
Cc: "Dr . David Alan Gilbert" <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Jerome Glisse <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Lokesh Gidra <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Mina Almasry <[email protected]>
Cc: Oliver Upton <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Shuah Khan <[email protected]>
Cc: Stephen Rothwell <[email protected]>
Cc: Wang Qing <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  userfaultfd/selftests: use user mode only
Peter Xu [Thu, 1 Jul 2021 01:48:41 +0000 (18:48 -0700)]
userfaultfd/selftests: use user mode only

Patch series "userfaultfd/selftests: A few cleanups", v2.

I have wanted to clean up the userfaultfd.c fault handling for a long time.
If it's not cleaned up, then as new code grows the file it will also grow
the amount that needs cleaning up later...  This is my attempt to clean up
the userfaultfd selftest's fault handling: use an err() macro instead of
either fprintf() or perror() followed by another exit() call.

The huge cleanup is done in the last patch.  The first 4 patches are some
other standalone cleanups for the same file, so I put them together.

This patch (of 5):

The userfaultfd selftest does not need to handle kernel-initiated faults.
Set user mode only so it can be run even if unprivileged_userfaultfd=0
(which is the default).
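
Concretely, this amounts to passing UFFD_USER_MODE_ONLY when creating the
userfaultfd; a sketch:

  #include <fcntl.h>
  #include <linux/userfaultfd.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  /* Open a userfaultfd restricted to user-mode faults only; this works
   * even with the vm.unprivileged_userfaultfd sysctl set to 0. */
  static int uffd_open(void)
  {
          return syscall(__NR_userfaultfd,
                         O_CLOEXEC | O_NONBLOCK | UFFD_USER_MODE_ONLY);
  }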

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Axel Rasmussen <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Brian Geffon <[email protected]>
Cc: "Dr . David Alan Gilbert" <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Jerome Glisse <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Lokesh Gidra <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Mina Almasry <[email protected]>
Cc: Oliver Upton <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Shuah Khan <[email protected]>
Cc: Stephen Rothwell <[email protected]>
Cc: Wang Qing <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm/hwpoison: disable pcp for page_handle_poison()
Naoya Horiguchi [Thu, 1 Jul 2021 01:48:38 +0000 (18:48 -0700)]
mm/hwpoison: disable pcp for page_handle_poison()

Recent changes by the patch "mm/page_alloc: allow high-order pages to be
stored on the per-cpu lists" make the kernel decide whether to use pcp via
pcp_allowed_order(), which breaks soft-offline for hugetlb pages.

Soft-offline dissolves a migration source page, then removes it from the
buddy free list, so it is assumed that any subpage of the soft-offlined
hugepage is recognized as a buddy page just after returning from
dissolve_free_huge_page().  pcp_allowed_order() returns true for hugetlb,
so this assumption no longer holds.

So disable pcp during dissolve_free_huge_page() and take_page_off_buddy()
to prevent soft-offlined hugepages from being linked onto pcp lists.
Soft-offline should not be a common event, so the impact on performance
should be minimal.  And I think that the optimization of Mel's patch could
benefit hugetlb, so zone_pcp_disable() is called only in the hwpoison
context.
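
Roughly, the change has this shape (a hedged sketch of the kernel-side fix,
not the exact hunk):

  /* Sketch: keep the zone's pcp lists out of the way while the hugepage
   * is dissolved and its subpages are taken off the buddy free lists. */
  static bool handle_poison_sketch(struct page *page)
  {
          struct zone *zone = page_zone(page);
          bool ok;

          zone_pcp_disable(zone);
          ok = !dissolve_free_huge_page(page) && take_page_off_buddy(page);
          zone_pcp_enable(zone);

          return ok;
  }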

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Naoya Horiguchi <[email protected]>
Acked-by: Mel Gorman <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Michal Hocko <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  hugetlb: address ref count racing in prep_compound_gigantic_page
Mike Kravetz [Thu, 1 Jul 2021 01:48:34 +0000 (18:48 -0700)]
hugetlb: address ref count racing in prep_compound_gigantic_page

In [1], Jann Horn points out a possible race between
prep_compound_gigantic_page and __page_cache_add_speculative.  The root
cause of the possible race is prep_compound_gigantic_page unconditionally
setting the ref count of pages to zero.  It does this because
prep_compound_gigantic_page is handed a 'group' of pages from an allocator
and needs to convert that group of pages to a compound page.  The ref
count of each page in this 'group' is one as set by the allocator.
However, the ref count of compound page tail pages must be zero.

The potential race comes about when ref counted pages are returned from
the allocator.  When this happens, other mm code could also take a
reference on the page.  __page_cache_add_speculative is one such example.
Therefore, prep_compound_gigantic_page can not just set the ref count of
pages to zero as it does today.  Doing so would lose the reference taken
by any other code.  This would lead to BUGs in code checking ref counts
and could possibly even lead to memory corruption.

There are two possible ways to address this issue.

1) Make all allocators of gigantic groups of pages be able to return a
   properly constructed compound page.

2) Make prep_compound_gigantic_page be more careful when constructing a
   compound page.

This patch takes approach 2.

In prep_compound_gigantic_page, use cmpxchg to only set the ref count to zero
if it is one.  If the cmpxchg fails, call synchronize_rcu() in the hope
that the extra ref count will be dropped during an rcu grace period.  This
is not a performance critical code path and the wait should be
acceptable.  If the ref count is still inflated after the grace period,
then undo any modifications made and return an error.
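
As an illustration of the "only zero the count if it is still one" step, here
is a small userspace analogue using C11 atomics (the kernel code uses its own
page refcount helpers):

  #include <stdatomic.h>
  #include <stdbool.h>

  /* Freeze a reference count at zero only if no extra reference has been
   * taken in the meantime; otherwise leave it alone and report failure. */
  static bool ref_freeze_if_one(atomic_int *refcount)
  {
          int expected = 1;

          return atomic_compare_exchange_strong(refcount, &expected, 0);
  }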

Currently prep_compound_gigantic_page is type void and does not return
errors.  Modify the two callers to check for and handle error returns.  On
error, the caller must free the 'group' of pages as they can not be used
to form a gigantic page.  After freeing pages, the runtime caller
(alloc_fresh_huge_page) will retry the allocation once.  Boot time
allocations can not be retried.

The routine prep_compound_page also unconditionally sets the ref count of
compound page tail pages to zero.  However, in this case the buddy
allocator is constructing a compound page from freshly allocated pages.
The ref count on those freshly allocated pages is already zero, so the
set_page_count(p, 0) is unnecessary and could lead to confusion.  Just
remove it.

[1] https://lore.kernel.org/linux-mm/CAG48ez23q0Jy9cuVnwAe7t_fdhMk2S7N5Hdi-GLcCeq5bsfLxw@mail.gmail.com/

Link: https://lkml.kernel.org/r/[email protected]
Fixes: 58a84aa92723 ("thp: set compound tail page _count to zero")
Signed-off-by: Mike Kravetz <[email protected]>
Reported-by: Jann Horn <[email protected]>
Cc: Youquan Song <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Jan Kara <[email protected]>
Cc: John Hubbard <[email protected]>
Cc: "Kirill A . Shutemov" <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Muchun Song <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  hugetlb: remove prep_compound_huge_page cleanup
Mike Kravetz [Thu, 1 Jul 2021 01:48:31 +0000 (18:48 -0700)]
hugetlb: remove prep_compound_huge_page cleanup

Patch series "Fix prep_compound_gigantic_page ref count adjustment".

These patches address the possible race between
prep_compound_gigantic_page and __page_cache_add_speculative as described
by Jann Horn in [1].

The first patch simply removes the unnecessary/obsolete helper routine
prep_compound_huge_page to make the actual fix a little simpler.

The second patch is the actual fix and has a detailed explanation in the
commit message.

This potential issue has existed for almost 10 years and I am unaware of
anyone actually hitting the race.  I did not cc stable, but would be happy
to squash the patches and send to stable if anyone thinks that is a good
idea.

[1] https://lore.kernel.org/linux-mm/CAG48ez23q0Jy9cuVnwAe7t_fdhMk2S7N5Hdi-GLcCeq5bsfLxw@mail.gmail.com/

This patch (of 2):

I could not think of a reliable way to recreate the issue for testing.
Rather, I 'simulated errors' to exercise all the error paths.

The routine prep_compound_huge_page is a simple wrapper to call either
prep_compound_gigantic_page or prep_compound_page.  However, it is only
called from gather_bootmem_prealloc which only processes gigantic pages.
Eliminate the routine and call prep_compound_gigantic_page directly.

Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Mike Kravetz <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Jan Kara <[email protected]>
Cc: Jann Horn <[email protected]>
Cc: John Hubbard <[email protected]>
Cc: "Kirill A . Shutemov" <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Youquan Song <[email protected]>
Cc: Muchun Song <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm: hugetlb: introduce CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON
Muchun Song [Thu, 1 Jul 2021 01:48:28 +0000 (18:48 -0700)]
mm: hugetlb: introduce CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON

When using HUGETLB_PAGE_FREE_VMEMMAP, freeing the unused vmemmap pages
associated with each HugeTLB page is off by default.  Now the vmemmap is PMD
mapped, so there is no side effect when this feature is enabled with no
HugeTLB pages in the system.  Someone may want to enable this feature at
compile time instead of via the boot command line, so add a config option to
make it default on for those who do not want to enable it via the command line.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Muchun Song <[email protected]>
Cc: Chen Huang <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Xiongchun Duan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm: sparsemem: use huge PMD mapping for vmemmap pages
Muchun Song [Thu, 1 Jul 2021 01:48:25 +0000 (18:48 -0700)]
mm: sparsemem: use huge PMD mapping for vmemmap pages

The preparation of splitting huge PMD mapping of vmemmap pages is ready,
so switch the mapping from PTE to PMD.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Muchun Song <[email protected]>
Reviewed-by: Mike Kravetz <[email protected]>
Cc: Chen Huang <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Xiongchun Duan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm: sparsemem: split the huge PMD mapping of vmemmap pages
Muchun Song [Thu, 1 Jul 2021 01:48:22 +0000 (18:48 -0700)]
mm: sparsemem: split the huge PMD mapping of vmemmap pages

Patch series "Split huge PMD mapping of vmemmap pages", v4.

In order to reduce the difficulty of code review, series [1] disabled huge
PMD mapping of vmemmap pages when that feature is enabled.  In this series,
we do not disable huge PMD mapping of vmemmap pages anymore; we split the
huge PMD mapping when needed.  When HugeTLB pages are freed from the pool,
we do not attempt to coalesce and move back to a PMD mapping because it is
much more complex.

[1] https://lore.kernel.org/linux-doc/20210510030027[email protected]/

This patch (of 3):

In [1], PMD mappings of vmemmap pages were disabled if the feature
hugetlb_free_vmemmap was enabled.  This was done to simplify the initial
implementation of vmemmap freeing for hugetlb pages.  Now, remove this
simplification by allowing PMD mapping and switching to PTE mappings as
needed for allocated hugetlb pages.

When a hugetlb page is allocated, the vmemmap page tables are walked to
free vmemmap pages.  During this walk, split huge PMD mappings to PTE
mappings as required.  In the unlikely case PTE pages cannot be allocated,
return an error (ENOMEM) and do not optimize the vmemmap of the hugetlb
page.

When HugeTLB pages are freed from the pool, we do not attempt to
coalesce and move back to a PMD mapping because it is much more complex.

[1] https://lkml.kernel.org/r/20210510030027[email protected]

Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Muchun Song <[email protected]>
Reviewed-by: Mike Kravetz <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Chen Huang <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Xiongchun Duan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm, hugetlb: fix racy resv_huge_pages underflow on UFFDIO_COPY
Mina Almasry [Thu, 1 Jul 2021 01:48:19 +0000 (18:48 -0700)]
mm, hugetlb: fix racy resv_huge_pages underflow on UFFDIO_COPY

On UFFDIO_COPY, if we fail to copy the page contents while holding the
hugetlb_fault_mutex, we will drop the mutex and return to the caller after
allocating a page that consumed a reservation.  In this case there may be
a fault that double consumes the reservation.  To handle this, we free the
allocated page, fix the reservations, and allocate a temporary hugetlb
page and return that to the caller.  When the caller does the copy outside
of the lock, we again check the cache, allocate a page consuming the
reservation, and copy over the contents.

Test:
Hacked the code locally such that resv_huge_pages underflows produce
a warning and the copy_huge_page_from_user() always fails, then:

./tools/testing/selftests/vm/userfaultfd hugetlb_shared 10
        2 /tmp/kokonut_test/huge/userfaultfd_test && echo test success
./tools/testing/selftests/vm/userfaultfd hugetlb 10
2 /tmp/kokonut_test/huge/userfaultfd_test && echo test success

Both tests succeed and produce no warnings.  After the
test runs, the number of free/resv hugepages is correct.

[[email protected]: remove set but not used variable 'vm_alloc_shared']
Link: https://lkml.kernel.org/r/[email protected]
[[email protected]: fix allocation error check and copy func name]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Mina Almasry <[email protected]>
Signed-off-by: YueHaibing <[email protected]>
Cc: Axel Rasmussen <[email protected]>
Cc: Peter Xu <[email protected]>
Cc: Mike Kravetz <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  khugepaged: selftests: remove debug_cow
Nanyong Sun [Thu, 1 Jul 2021 01:48:16 +0000 (18:48 -0700)]
khugepaged: selftests: remove debug_cow

The debug_cow attribute was removed by commit 4958e4d86ecb01 ("mm:
thp: remove debug_cow switch"), so remove it from the selftest code too,
otherwise the khugepaged test will fail.

Link: https://lkml.kernel.org/r/[email protected]
Fixes: 4958e4d86ecb01 ("mm: thp: remove debug_cow switch")
Signed-off-by: Nanyong Sun <[email protected]>
Cc: Yang Shi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Shuah Khan <[email protected]>
Cc: Kefeng Wang <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  powerpc/8xx: add support for huge pages on VMAP and VMALLOC
Christophe Leroy [Thu, 1 Jul 2021 01:48:12 +0000 (18:48 -0700)]
powerpc/8xx: add support for huge pages on VMAP and VMALLOC

powerpc 8xx has 4 page sizes:
- 4k
- 16k
- 512k
- 8M

At the time being, vmalloc and vmap only support huge pages that are leaf
at the PMD level.

Here the PMD level is 4M; it doesn't correspond to any supported page
size.

For now, implement the use of 16k and 512k pages, which is done at the PTE level.

Support for 8M pages will be implemented later; it requires vmalloc to
support hugepd tables.

Link: https://lkml.kernel.org/r/8b972f1c03fb6bd59953035f0a3e4d26659de4f8.1620795204.git.christophe.leroy@csgroup.eu
Signed-off-by: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Uladzislau Rezki <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm/vmalloc: enable mapping of huge pages at pte level in vmalloc
Christophe Leroy [Thu, 1 Jul 2021 01:48:09 +0000 (18:48 -0700)]
mm/vmalloc: enable mapping of huge pages at pte level in vmalloc

On some architectures like powerpc, there are huge pages that are mapped
at pte level.

Enable it in vmalloc.

For that, architectures can provide arch_vmap_pte_supported_shift() that
returns the shift for pages to map at pte level.

Link: https://lkml.kernel.org/r/2c717e3b1fba1894d890feb7669f83025bfa314d.1620795204.git.christophe.leroy@csgroup.eu
Signed-off-by: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Uladzislau Rezki <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm/vmalloc: enable mapping of huge pages at pte level in vmap
Christophe Leroy [Thu, 1 Jul 2021 01:48:06 +0000 (18:48 -0700)]
mm/vmalloc: enable mapping of huge pages at pte level in vmap

On some architectures like powerpc, there are huge pages that are mapped
at pte level.

Enable it in vmap.

For that, architectures can provide arch_vmap_pte_range_map_size() that
returns the size of pages to map at pte level.

Link: https://lkml.kernel.org/r/fb3ccc73377832ac6708181ec419128a2f98ce36.1620795204.git.christophe.leroy@csgroup.eu
Signed-off-by: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Uladzislau Rezki <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm/pgtable: add stubs for {pmd/pud}_{set/clear}_huge
Christophe Leroy [Thu, 1 Jul 2021 01:48:03 +0000 (18:48 -0700)]
mm/pgtable: add stubs for {pmd/pud}_{set/clear}_huge

For architectures with no PMD and/or no PUD, add stubs similar to what we
have for architectures without P4D.
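
The stubs are of this general shape (a sketch; the exact placement and guards
follow the existing P4D stubs):

  /* Sketch of the added stubs: when a page-table level is folded away,
   * huge mappings at that level can never be created or torn down. */
  static inline int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
  {
          return 0;       /* not supported: callers fall back to smaller pages */
  }

  static inline int pud_clear_huge(pud_t *pud)
  {
          return 0;       /* nothing was mapped huge at this level */
  }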

[[email protected]: arm64: define only {pud/pmd}_{set/clear}_huge when useful]
Link: https://lkml.kernel.org/r/73ec95f40cafbbb69bdfb43a7f53876fd845b0ce.1620990479.git.christophe.leroy@csgroup.eu
[[email protected]: x86: define only {pud/pmd}_{set/clear}_huge when useful]
Link: https://lkml.kernel.org/r/7fbf1b6bc3e15c07c24fa45278d57064f14c896b.1620930415.git.christophe.leroy@csgroup.eu
Link: https://lkml.kernel.org/r/5ac5976419350e8e048d463a64cae449eb3ba4b0.1620795204.git.christophe.leroy@csgroup.eu
Signed-off-by: Christophe Leroy <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Uladzislau Rezki <[email protected]>
Cc: Naresh Kamboju <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm/hugetlb: change parameters of arch_make_huge_pte()
Christophe Leroy [Thu, 1 Jul 2021 01:48:00 +0000 (18:48 -0700)]
mm/hugetlb: change parameters of arch_make_huge_pte()

Patch series "Subject: [PATCH v2 0/5] Implement huge VMAP and VMALLOC on powerpc 8xx", v2.

This series implements huge VMAP and VMALLOC on powerpc 8xx.

Powerpc 8xx has 4 page sizes:
- 4k
- 16k
- 512k
- 8M

At the time being, vmalloc and vmap only support huge pages that are
leaf at the PMD level.

Here the PMD level is 4M; it doesn't correspond to any supported
page size.

For now, implement the use of 16k and 512k pages, which is done
at the PTE level.

Support for 8M pages will be implemented later; it requires the use of
hugepd tables.

To allow this, the architecture provides two functions:
- arch_vmap_pte_range_map_size() which tells vmap_pte_range() what
page size to use. A stub returning PAGE_SIZE is provided when the
architecture doesn't provide this function.
- arch_vmap_pte_supported_shift() which tells __vmalloc_node_range()
what page shift to use for a given area size. A stub returning
PAGE_SHIFT is provided when the architecture doesn't provide this
function.
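
A sketch of what the generic stubs look like (the exact parameter lists are
an assumption here):

  /* Sketch of the default stubs used when an architecture does not
   * provide its own hooks: map one base page at a time at pte level. */
  #ifndef arch_vmap_pte_range_map_size
  static inline unsigned long arch_vmap_pte_range_map_size(unsigned long addr,
                  unsigned long end, u64 pfn, unsigned int max_page_shift)
  {
          return PAGE_SIZE;
  }
  #endif

  #ifndef arch_vmap_pte_supported_shift
  static inline int arch_vmap_pte_supported_shift(unsigned long size)
  {
          return PAGE_SHIFT;
  }
  #endif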

This patch (of 5):

At the time being, arch_make_huge_pte() has the following prototype:

  pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
   struct page *page, int writable);

vma is used to get the page shift or size.
vma is also used on Sparc to get vm_flags.
page is not used.
writable is not used.

In order to use this function without a vma, replace vma by shift and
flags.  Also remove the unused parameters.

Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/f4633ac6a7da2f22f31a04a89e0a7026bb78b15b.1620795204.git.christophe.leroy@csgroup.eu
Signed-off-by: Christophe Leroy <[email protected]>
Acked-by: Mike Kravetz <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Uladzislau Rezki <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm/huge_memory.c: don't discard hugepage if other processes are mapping it
Miaohe Lin [Thu, 1 Jul 2021 01:47:57 +0000 (18:47 -0700)]
mm/huge_memory.c: don't discard hugepage if other processes are mapping it

If other processes are mapping any other subpages of the hugepage, i.e. in
the pte-mapped thp case, page_mapcount() will incorrectly return 1.  Then
we would discard the page while other processes are still mapping it.  Fix
it by using total_mapcount(), which can tell whether other processes are
still mapping it.
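
The core of the fix is a one-line change of this shape (sketch):

  /* Before: page_mapcount(page) only counts this mapping and misses
   * pte-mapped THP users in other processes.  After: */
  if (total_mapcount(page) != 1)
          goto out;       /* someone else still maps a subpage: don't discard */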

Link: https://lkml.kernel.org/r/[email protected]
Fixes: b8d3c4c3009d ("mm/huge_memory.c: don't split THP page when MADV_FREE syscall is called")
Reviewed-by: Yang Shi <[email protected]>
Signed-off-by: Miaohe Lin <[email protected]>
Cc: Alexey Dobriyan <[email protected]>
Cc: "Aneesh Kumar K . V" <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Ralph Campbell <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Song Liu <[email protected]>
Cc: William Kucharski <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Mike Kravetz <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm/huge_memory.c: remove unnecessary tlb_remove_page_size() for huge zero pmd
Miaohe Lin [Thu, 1 Jul 2021 01:47:53 +0000 (18:47 -0700)]
mm/huge_memory.c: remove unnecessary tlb_remove_page_size() for huge zero pmd

Commit aa88b68c3b1d ("thp: keep huge zero page pinned until tlb flush")
introduced tlb_remove_page() for huge zero page to keep it pinned until
flush is complete and prevents the page from being split under us.  But
huge zero page is kept pinned until all relevant mm_users reach zero since
the commit 6fcb52a56ff6 ("thp: reduce usage of huge zero page's atomic
counter").  So tlb_remove_page_size() for huge zero pmd is unnecessary
now.

Link: https://lkml.kernel.org/r/[email protected]
Reviewed-by: Yang Shi <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
Signed-off-by: Miaohe Lin <[email protected]>
Cc: Alexey Dobriyan <[email protected]>
Cc: "Aneesh Kumar K . V" <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Ralph Campbell <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Song Liu <[email protected]>
Cc: William Kucharski <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Mike Kravetz <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm/huge_memory.c: add missing read-only THP checking in transparent_hugepage_enabled()
Miaohe Lin [Thu, 1 Jul 2021 01:47:50 +0000 (18:47 -0700)]
mm/huge_memory.c: add missing read-only THP checking in transparent_hugepage_enabled()

Since commit 99cb0dbd47a1 ("mm,thp: add read-only THP support for
(non-shmem) FS"), read-only THP file mappings are supported.  But that
commit forgot to add a check for them in transparent_hugepage_enabled().  To
fix it, add a check for read-only THP file mappings and also introduce the
helper transhuge_vma_enabled() to check whether thp is enabled for a
specified vma, which reduces duplicated code.  Rename
transparent_hugepage_enabled to transparent_hugepage_active to make the code
easier to follow, as suggested by David Hildenbrand.
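
The new helper is roughly of this shape (a sketch based on the description
above):

  /* Sketch: thp is enabled for this vma unless it was disabled explicitly,
   * either per-vma via madvise or process-wide via prctl. */
  static inline bool transhuge_vma_enabled(struct vm_area_struct *vma,
                                           unsigned long vm_flags)
  {
          if ((vm_flags & VM_NOHUGEPAGE) ||
              test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
                  return false;

          return true;
  }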

[[email protected]: define transhuge_vma_enabled next to transhuge_vma_suitable]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 99cb0dbd47a1 ("mm,thp: add read-only THP support for (non-shmem) FS")
Signed-off-by: Miaohe Lin <[email protected]>
Reviewed-by: Yang Shi <[email protected]>
Cc: Alexey Dobriyan <[email protected]>
Cc: "Aneesh Kumar K . V" <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Ralph Campbell <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Song Liu <[email protected]>
Cc: William Kucharski <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Mike Kravetz <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm/huge_memory.c: use page->deferred_list
Miaohe Lin [Thu, 1 Jul 2021 01:47:46 +0000 (18:47 -0700)]
mm/huge_memory.c: use page->deferred_list

Now that we can represent the location of ->deferred_list instead of
->mapping + ->index, make use of it to improve readability.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Miaohe Lin <[email protected]>
Reviewed-by: Yang Shi <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Cc: Alexey Dobriyan <[email protected]>
Cc: "Aneesh Kumar K . V" <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Ralph Campbell <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Song Liu <[email protected]>
Cc: William Kucharski <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Mike Kravetz <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm/huge_memory.c: remove dedicated macro HPAGE_CACHE_INDEX_MASK
Miaohe Lin [Thu, 1 Jul 2021 01:47:43 +0000 (18:47 -0700)]
mm/huge_memory.c: remove dedicated macro HPAGE_CACHE_INDEX_MASK

Patch series "Cleanup and fixup for huge_memory:, v3.

This series contains cleanups to remove a dedicated macro and an unnecessary
tlb_remove_page_size() for huge zero pmd.  It also adds the missing
read-only THP check to transparent_hugepage_enabled() and avoids discarding
a hugepage if other processes are mapping it.  More details can be found in
the respective changelogs.

This patch (of 5):

Rewrite the pgoff checking logic to remove the macro HPAGE_CACHE_INDEX_MASK,
which is only used here, to simplify the code.

Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Miaohe Lin <[email protected]>
Reviewed-by: Yang Shi <[email protected]>
Reviewed-by: Anshuman Khandual <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: William Kucharski <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: "Aneesh Kumar K . V" <[email protected]>
Cc: Ralph Campbell <[email protected]>
Cc: Song Liu <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Alexey Dobriyan <[email protected]>
Cc: Mike Kravetz <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm/debug_vm_pgtable: remove redundant pfn_{pmd/pte}() and fix one comment mistake
Shixin Liu [Thu, 1 Jul 2021 01:47:40 +0000 (18:47 -0700)]
mm/debug_vm_pgtable: remove redundant pfn_{pmd/pte}() and fix one comment mistake

Remove redundant pfn_{pmd/pte}() in {pmd/pte}_advanced_tests() and adjust
pfn_pud() in pud_advanced_tests() to make it similar with other two
functions.

In addition, the branch condition should be CONFIG_TRANSPARENT_HUGEPAGE
instead of CONFIG_ARCH_HAS_PTE_DEVMAP.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Shixin Liu <[email protected]>
Reviewed-by: Anshuman Khandual <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm/debug_vm_pgtable: move {pmd/pud}_huge_tests out of CONFIG_TRANSPARENT_HUGEPAGE
Shixin Liu [Thu, 1 Jul 2021 01:47:37 +0000 (18:47 -0700)]
mm/debug_vm_pgtable: move {pmd/pud}_huge_tests out of CONFIG_TRANSPARENT_HUGEPAGE

The functions {pmd/pud}_set_huge and {pmd/pud}_clear_huge are not
dependent on THP.  Hence move {pmd/pud}_huge_tests out of
CONFIG_TRANSPARENT_HUGEPAGE.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Shixin Liu <[email protected]>
Reviewed-by: Anshuman Khandual <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm: hugetlb: introduce nr_free_vmemmap_pages in the struct hstate
Muchun Song [Thu, 1 Jul 2021 01:47:33 +0000 (18:47 -0700)]
mm: hugetlb: introduce nr_free_vmemmap_pages in the struct hstate

All the infrastructure is ready, so we introduce the nr_free_vmemmap_pages
field in the hstate to indicate how many vmemmap pages associated with a
HugeTLB page can be freed to the buddy allocator, and initialize it in
hugetlb_vmemmap_init().  This patch is the actual enablement of the
feature.

There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct page
structs that can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is enabled,
so add a BUILD_BUG_ON to catch invalid usage of the tail struct page.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Muchun Song <[email protected]>
Acked-by: Mike Kravetz <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Reviewed-by: Miaohe Lin <[email protected]>
Tested-by: Chen Huang <[email protected]>
Tested-by: Bodeddula Balasubramaniam <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: Barry Song <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: HORIGUCHI NAOYA <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Joao Martins <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Mina Almasry <[email protected]>
Cc: Oliver Neukum <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Pawan Gupta <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Randy Dunlap <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Xiongchun Duan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm: memory_hotplug: disable memmap_on_memory when hugetlb_free_vmemmap enabled
Muchun Song [Thu, 1 Jul 2021 01:47:29 +0000 (18:47 -0700)]
mm: memory_hotplug: disable memmap_on_memory when hugetlb_free_vmemmap enabled

The parameter of memory_hotplug.memmap_on_memory is not compatible with
hugetlb_free_vmemmap.  So disable it when hugetlb_free_vmemmap is enabled.

[[email protected]: remove unneeded include, per Oscar]

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Muchun Song <[email protected]>
Acked-by: Mike Kravetz <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: Barry Song <[email protected]>
Cc: Bodeddula Balasubramaniam <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Chen Huang <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: HORIGUCHI NAOYA <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Joao Martins <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Miaohe Lin <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Mina Almasry <[email protected]>
Cc: Oliver Neukum <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Pawan Gupta <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Randy Dunlap <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Xiongchun Duan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap
Muchun Song [Thu, 1 Jul 2021 01:47:25 +0000 (18:47 -0700)]
mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap

Add a kernel parameter hugetlb_free_vmemmap to enable the feature of
freeing unused vmemmap pages associated with each hugetlb page on boot.

We disable the PMD mapping of vmemmap pages for the x86-64 arch when this
feature is enabled, because vmemmap_remap_free() depends on the vmemmap
being base page mapped.
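
The boot parameter is parsed with an early_param() hook; a sketch of its
shape (the helper and variable names here are illustrative):

  /* Sketch: accept "on"/"off" for the hugetlb_free_vmemmap boot parameter. */
  static bool vmemmap_free_enabled __ro_after_init;     /* illustrative name */

  static int __init early_hugetlb_free_vmemmap_param(char *buf)
  {
          if (!buf)
                  return -EINVAL;

          if (!strcmp(buf, "on"))
                  vmemmap_free_enabled = true;
          else if (!strcmp(buf, "off"))
                  vmemmap_free_enabled = false;
          else
                  return -EINVAL;

          return 0;
  }
  early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);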

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Muchun Song <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Reviewed-by: Barry Song <[email protected]>
Reviewed-by: Miaohe Lin <[email protected]>
Tested-by: Chen Huang <[email protected]>
Tested-by: Bodeddula Balasubramaniam <[email protected]>
Reviewed-by: Mike Kravetz <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: HORIGUCHI NAOYA <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Joao Martins <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Mina Almasry <[email protected]>
Cc: Oliver Neukum <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Pawan Gupta <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Randy Dunlap <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Xiongchun Duan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
Muchun Song [Thu, 1 Jul 2021 01:47:21 +0000 (18:47 -0700)]
mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page

When we free a HugeTLB page to the buddy allocator, we need to allocate
the vmemmap pages associated with it.  However, we may not be able to
allocate the vmemmap pages when the system is under memory pressure.  In
this case, we just refuse to free the HugeTLB page.  This changes behavior
in some corner cases as listed below:

 1) Failing to free a huge page triggered by the user (decrease nr_pages).

    User needs to try again later.

 2) Failing to free a surplus huge page when freed by the application.

    Try again later when freeing a huge page next time.

 3) Failing to dissolve a free huge page on ZONE_MOVABLE via
    offline_pages().

    This can happen when we have plenty of ZONE_MOVABLE memory, but
    not enough kernel memory to allocate vmemmap pages.  We may even
    be able to migrate huge page contents, but will not be able to
    dissolve the source huge page.  This will prevent an offline
    operation and is unfortunate as memory offlining is expected to
    succeed on movable zones.  Users that depend on memory hotplug
    to succeed for movable zones should carefully consider whether the
    memory savings gained from this feature are worth the risk of
    possibly not being able to offline memory in certain situations.

 4) Failing to dissolve a huge page on CMA/ZONE_MOVABLE via
    alloc_contig_range() - once we have that handling in place. Mainly
    affects CMA and virtio-mem.

    Similar to 3). virito-mem will handle migration errors gracefully.
    CMA might be able to fallback on other free areas within the CMA
    region.

Vmemmap pages are allocated from the page freeing context.  In order for
those allocations not to be disruptive (e.g. trigger the oom killer),
__GFP_NORETRY is used.  hugetlb_lock is dropped for the allocation because a
non-sleeping allocation would be too fragile and could fail too easily under
memory pressure.  GFP_ATOMIC or other modes that access memory reserves are
not used because we want to prevent consuming reserves under heavy hugetlb
freeing.
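
In code terms, the allocation policy described above boils down to a gfp mask
of roughly this shape (sketch; the actual mask in the patch may carry extra
placement flags):

  /* Sleepable (hugetlb_lock is dropped), but best effort: no OOM kill,
   * no retries, and no dipping into atomic reserves. */
  gfp_t gfp_mask = GFP_KERNEL | __GFP_NORETRY;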

[[email protected]: fix dissolve_free_huge_page use of tail/head page]
Link: https://lkml.kernel.org/r/[email protected]
[[email protected]: fix alloc_vmemmap_page_list documentation warning]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Muchun Song <[email protected]>
Signed-off-by: Mike Kravetz <[email protected]>
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Mike Kravetz <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: Barry Song <[email protected]>
Cc: Bodeddula Balasubramaniam <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Chen Huang <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: HORIGUCHI NAOYA <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Joao Martins <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Miaohe Lin <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Mina Almasry <[email protected]>
Cc: Oliver Neukum <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Pawan Gupta <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Randy Dunlap <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Xiongchun Duan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years ago  mm: hugetlb: defer freeing of HugeTLB pages
Muchun Song [Thu, 1 Jul 2021 01:47:17 +0000 (18:47 -0700)]
mm: hugetlb: defer freeing of HugeTLB pages

In the subsequent patch, we will allocate the vmemmap pages when freeing a
HugeTLB page.  But update_and_free_page() can be called from any context,
so we cannot use GFP_KERNEL to allocate vmemmap pages there.  However, we
can defer the actual freeing to a kworker to avoid having to use GFP_ATOMIC
to allocate the vmemmap pages.

The __update_and_free_page() is where the call to allocate vmemmap pages
will be inserted.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Muchun Song <[email protected]>
Reviewed-by: Mike Kravetz <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: Barry Song <[email protected]>
Cc: Bodeddula Balasubramaniam <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Chen Huang <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: HORIGUCHI NAOYA <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Joao Martins <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Miaohe Lin <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Mina Almasry <[email protected]>
Cc: Oliver Neukum <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Pawan Gupta <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Randy Dunlap <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Xiongchun Duan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years agomm: hugetlb: free the vmemmap pages associated with each HugeTLB page
Muchun Song [Thu, 1 Jul 2021 01:47:13 +0000 (18:47 -0700)]
mm: hugetlb: free the vmemmap pages associated with each HugeTLB page

Every HugeTLB page has more than one struct page structure.  We __know__
that we only use the first 4 (__NR_USED_SUBPAGE) struct page structures to
store metadata associated with each HugeTLB page.

There are a lot of struct page structures associated with each HugeTLB
page.  For tail pages, the value of compound_head is the same, so we can
reuse the first page of the tail page structures.  We map the virtual
addresses of the remaining tail page structures to that first tail page
struct and then free the underlying page frames.  Therefore, we only need
to reserve two pages as vmemmap areas.

When we allocate a HugeTLB page from the buddy allocator, we can free some
vmemmap pages associated with it.  It is most appropriate to do this in
prep_new_huge_page().

free_vmemmap_pages_per_hpage(), which indicates how many vmemmap pages
associated with a HugeTLB page can be freed, returns zero for now, which
means the feature is disabled.  We will enable it once all the
infrastructure is in place.
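
As a sketch, that kill switch can be as small as this (illustrative; the
real helper lives in the HugeTLB code):

    /* How many vmemmap pages may be freed per HugeTLB page.  Returning 0
     * keeps the optimisation disabled until the remaining infrastructure
     * lands, at which point this becomes a real per-hstate calculation. */
    static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
    {
        return 0;
    }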

[[email protected]: fix documentation warning]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Muchun Song <[email protected]>
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Tested-by: Chen Huang <[email protected]>
Tested-by: Bodeddula Balasubramaniam <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Reviewed-by: Mike Kravetz <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: Barry Song <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: HORIGUCHI NAOYA <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Joao Martins <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Miaohe Lin <[email protected]>
Cc: Mina Almasry <[email protected]>
Cc: Oliver Neukum <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Pawan Gupta <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Randy Dunlap <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Xiongchun Duan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years agomm: hugetlb: gather discrete indexes of tail page
Muchun Song [Thu, 1 Jul 2021 01:47:09 +0000 (18:47 -0700)]
mm: hugetlb: gather discrete indexes of tail page

For a HugeTLB page, there is more metadata to store than fits in the head
struct page, so we have to reuse other tail struct pages to store it.  To
avoid conflicts caused by subsequent use of more tail struct pages, gather
these discrete tail page indexes in one place.  That also makes it easier
to add a new tail page index later.
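
For illustration, gathering the indexes into one enum could look like the
sketch below (the index names are hypothetical; __NR_USED_SUBPAGE marks the
end of the used range):

    /* One central list of which tail struct page carries which piece of
     * HugeTLB metadata; adding a new index only touches this enum. */
    enum {
        SUBPAGE_INDEX_SUBPOOL = 1,      /* e.g. stored in the 1st tail page */
        SUBPAGE_INDEX_CGROUP,           /* e.g. stored in the 2nd tail page */
        SUBPAGE_INDEX_CGROUP_RSVD,      /* e.g. stored in the 3rd tail page */
        __NR_USED_SUBPAGE,
    };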

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Muchun Song <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Reviewed-by: Miaohe Lin <[email protected]>
Tested-by: Chen Huang <[email protected]>
Tested-by: Bodeddula Balasubramaniam <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Reviewed-by: Mike Kravetz <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: Barry Song <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: HORIGUCHI NAOYA <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Joao Martins <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Mina Almasry <[email protected]>
Cc: Oliver Neukum <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Pawan Gupta <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Randy Dunlap <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Xiongchun Duan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years agomm: hugetlb: introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
Muchun Song [Thu, 1 Jul 2021 01:47:04 +0000 (18:47 -0700)]
mm: hugetlb: introduce a new config HUGETLB_PAGE_FREE_VMEMMAP

The option HUGETLB_PAGE_FREE_VMEMMAP allows for the freeing of some vmemmap
pages associated with pre-allocated HugeTLB pages.  For example, on X86_64,
6 vmemmap pages of size 4KB each can be saved for each 2MB HugeTLB page,
and 4094 vmemmap pages of size 4KB each can be saved for each 1GB HugeTLB
page.
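
Those numbers follow directly from the struct page footprint; a quick
user-space check of the arithmetic (assuming sizeof(struct page) == 64, a
4KB base page, and 2 vmemmap pages kept resident per HugeTLB page):

    #include <stdio.h>

    int main(void)
    {
        const long page_size = 4096, struct_page_size = 64;
        const long sizes[] = { 2L << 20, 1L << 30 };   /* 2MB and 1GB HugeTLB */

        for (int i = 0; i < 2; i++) {
            long base_pages = sizes[i] / page_size;
            long vmemmap_pages = base_pages * struct_page_size / page_size;

            /* two vmemmap pages stay resident, the rest can be freed */
            printf("%ldMB huge page: %ld vmemmap pages, %ld freeable\n",
                   sizes[i] >> 20, vmemmap_pages, vmemmap_pages - 2);
        }
        return 0;
    }

This prints 8 vmemmap pages (6 freeable) for the 2MB case and 4096 vmemmap
pages (4094 freeable) for the 1GB case, matching the figures above.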

When a HugeTLB page is allocated or freed, the vmemmap array representing
the range associated with the page will need to be remapped.  When a page
is allocated, vmemmap pages are freed after remapping.  When a page is
freed, previously discarded vmemmap pages must be allocated before
remapping.

The config option is introduced early so that supporting code can be
written to depend on the option.  The initial version of the code only
provides support for x86-64.

If config HAVE_BOOTMEM_INFO_NODE is enabled, the vmemmap freeing code
depends on it to free the vmemmap pages.  Otherwise, free_reserved_page()
is used to free them.  The routine register_page_bootmem_info() is used to
register bootmem info.  Therefore, make sure register_page_bootmem_info is
enabled if HUGETLB_PAGE_FREE_VMEMMAP is defined.
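
A rough sketch of the two freeing paths described above (illustrative only;
the real helpers and config checks may differ):

    /* Free one vmemmap page: with bootmem info available the page goes
     * through the bootmem teardown, otherwise it is a plain reserved page. */
    static void free_vmemmap_page(struct page *page)
    {
    #ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
        free_bootmem_page(page);        /* drops bootmem info, then frees */
    #else
        free_reserved_page(page);       /* straight back to the buddy */
    #endif
    }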

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Muchun Song <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Acked-by: Mike Kravetz <[email protected]>
Reviewed-by: Miaohe Lin <[email protected]>
Tested-by: Chen Huang <[email protected]>
Tested-by: Bodeddula Balasubramaniam <[email protected]>
Reviewed-by: Balbir Singh <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Barry Song <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: HORIGUCHI NAOYA <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Joao Martins <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Mina Almasry <[email protected]>
Cc: Oliver Neukum <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Pawan Gupta <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Randy Dunlap <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Xiongchun Duan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years agomm: memory_hotplug: factor out bootmem core functions to bootmem_info.c
Muchun Song [Thu, 1 Jul 2021 01:47:00 +0000 (18:47 -0700)]
mm: memory_hotplug: factor out bootmem core functions to bootmem_info.c

Patch series "Free some vmemmap pages of HugeTLB page", v23.

This patch series frees some vmemmap pages (struct page structures)
associated with each HugeTLB page when it is preallocated, to save memory.

In order to reduce the difficulty of the first round of code review, this
version disables PMD/huge page mapping of the vmemmap when the feature is
enabled.  That eliminates a bunch of the complex code doing page table
manipulation.  When this patch series is solid, we can add the vmemmap page
table manipulation code in the future.

The struct page structures (page structs) are used to describe a physical
page frame.  By default, there is a one-to-one mapping from a page frame
to its corresponding page struct.

HugeTLB pages consist of multiple base page size pages and are supported
by many architectures.  See hugetlbpage.rst in the Documentation directory
for more details.  On the x86 architecture, HugeTLB pages of size 2MB and
1GB are currently supported.  Since the base page size on x86 is 4KB, a 2MB
HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
262144 base pages.  For each base page, there is a corresponding page
struct.

Within the HugeTLB subsystem, only the first 4 page structs are used to
contain unique information about a HugeTLB page.  HUGETLB_CGROUP_MIN_ORDER
provides this upper limit.  The only 'useful' information in the remaining
page structs is the compound_head field, and this field is the same for
all tail pages.

By removing redundant page structs for HugeTLB pages, memory can be
returned to the buddy allocator for other uses.

When the system boots up, every 2MB HugeTLB page has 512 struct page
structs, which occupy 8 pages (sizeof(struct page) * 512 / PAGE_SIZE).

    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
 +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
 |           |                     |     0     | -------------> |     0     |
 |           |                     +-----------+                +-----------+
 |           |                     |     1     | -------------> |     1     |
 |           |                     +-----------+                +-----------+
 |           |                     |     2     | -------------> |     2     |
 |           |                     +-----------+                +-----------+
 |           |                     |     3     | -------------> |     3     |
 |           |                     +-----------+                +-----------+
 |           |                     |     4     | -------------> |     4     |
 |    2MB    |                     +-----------+                +-----------+
 |           |                     |     5     | -------------> |     5     |
 |           |                     +-----------+                +-----------+
 |           |                     |     6     | -------------> |     6     |
 |           |                     +-----------+                +-----------+
 |           |                     |     7     | -------------> |     7     |
 |           |                     +-----------+                +-----------+
 |           |
 |           |
 |           |
 +-----------+

The value of page->compound_head is the same for all tail pages.  The
first page of page structs (page 0) associated with the HugeTLB page
contains the 4 page structs necessary to describe the HugeTLB.  The only
use of the remaining pages of page structs (page 1 to page 7) is to point
to page->compound_head.  Therefore, we can remap pages 2 to 7 to page 1.
Only 2 pages of page structs will be used for each HugeTLB page.  This
will allow us to free the remaining 6 pages to the buddy allocator.

Here is how things look after remapping.

    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
 +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
 |           |                     |     0     | -------------> |     0     |
 |           |                     +-----------+                +-----------+
 |           |                     |     1     | -------------> |     1     |
 |           |                     +-----------+                +-----------+
 |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
 |           |                     +-----------+                   | | | | |
 |           |                     |     3     | ------------------+ | | | |
 |           |                     +-----------+                     | | | |
 |           |                     |     4     | --------------------+ | | |
 |    2MB    |                     +-----------+                       | | |
 |           |                     |     5     | ----------------------+ | |
 |           |                     +-----------+                         | |
 |           |                     |     6     | ------------------------+ |
 |           |                     +-----------+                           |
 |           |                     |     7     | --------------------------+
 |           |                     +-----------+
 |           |
 |           |
 |           |
 +-----------+

When a HugeTLB page is freed back to the buddy system, we must allocate 6
pages for the vmemmap and restore the previous mapping relationship.

Apart from the 2MB HugeTLB page, we also have the 1GB HugeTLB page.  It is
handled similarly, and we can use the same approach to free its vmemmap
pages.

In this case, for the 1GB HugeTLB page, we can save 4094 pages.  This is a
very substantial gain.  On our server, we run SPDK/QEMU applications that
use 1024GB of HugeTLB pages.  With this feature enabled, we can save ~16GB
(with 1GB hugepages) or ~12GB (with 2MB hugepages) of memory.

Because the vmemmap page tables are reconstructed on the freeing/allocating
path, some overhead is added.  Here is an analysis of that overhead.

1) Allocating 10240 2MB HugeTLB pages.

   a) With this patch series applied:
   # time echo 10240 > /proc/sys/vm/nr_hugepages

   real     0m0.166s
   user     0m0.000s
   sys      0m0.166s

   # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; }
     kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs -
     @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [8K, 16K)           5476 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [16K, 32K)          4760 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@       |
   [32K, 64K)             4 |                                                    |

   b) Without this patch series:
   # time echo 10240 > /proc/sys/vm/nr_hugepages

   real     0m0.067s
   user     0m0.000s
   sys      0m0.067s

   # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; }
     kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs -
     @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [4K, 8K)           10147 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [8K, 16K)             93 |                                                    |

   Summary: allocation is about ~2x slower than before.

2) Freeing 10240 2MB HugeTLB pages.

   a) With this patch series applied:
   # time echo 0 > /proc/sys/vm/nr_hugepages

   real     0m0.213s
   user     0m0.000s
   sys      0m0.213s

   # bpftrace -e 'kprobe:free_pool_huge_page { @start[tid] = nsecs; }
     kretprobe:free_pool_huge_page /@start[tid]/ { @latency = hist(nsecs -
     @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [8K, 16K)              6 |                                                    |
   [16K, 32K)         10227 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [32K, 64K)             7 |                                                    |

   b) Without this patch series:
   # time echo 0 > /proc/sys/vm/nr_hugepages

   real     0m0.081s
   user     0m0.000s
   sys      0m0.081s

   # bpftrace -e 'kprobe:free_pool_huge_page { @start[tid] = nsecs; }
     kretprobe:free_pool_huge_page /@start[tid]/ { @latency = hist(nsecs -
     @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [4K, 8K)            6805 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [8K, 16K)           3427 |@@@@@@@@@@@@@@@@@@@@@@@@@@                          |
   [16K, 32K)             8 |                                                    |

   Summary: freeing (__free_hugepage) is about ~2-3x slower than before.

Although the overhead has increased, it is not significant.  As Mike said,
"However, remember that the majority of use cases create HugeTLB pages at
or shortly after boot time and add them to the pool.  So, additional
overhead is at pool creation time.  There is no change to 'normal run time'
operations of getting a page from or returning a page to the pool (think
page fault/unmap)".

Despite the overhead, and in addition to the memory gains from this series,
the following data was obtained by Joao Martins (many thanks for his
effort).

There is an additional benefit: page (un)pinners will see an improvement,
presumably (per Joao) because there are fewer memmap pages and thus the
tail/head pages stay in cache more often.

Out of the box Joao saw (when comparing linux-next against linux-next +
this series) with gup_test and pinning a 16G HugeTLB file (with 1G pages):

get_user_pages(): ~32k -> ~9k
unpin_user_pages(): ~75k -> ~70k

Usually any tight loop fetching compound_head(), or reading tail page data
(e.g. compound_head), benefits a lot.  There were some unpinning
inefficiencies Joao was fixing [2]; with those fixes added it shows even
more:

unpin_user_pages(): ~27k -> ~3.8k

[1] https://lore.kernel.org/linux-mm/20210409205254[email protected]/
[2] https://lore.kernel.org/linux-mm/20210204202500[email protected]/

This patch (of 9):

Move the common bootmem info registration API into its own file,
bootmem_info.c.  A later patch will use {get,put}_page_bootmem() to
initialize the pages backing the vmemmap, or to free the vmemmap pages to
the buddy allocator, so move them out of CONFIG_MEMORY_HOTPLUG_SPARSE.
This is just code movement without any functional change.

Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Muchun Song <[email protected]>
Acked-by: Mike Kravetz <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Reviewed-by: Miaohe Lin <[email protected]>
Tested-by: Chen Huang <[email protected]>
Tested-by: Bodeddula Balasubramaniam <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: [email protected]
Cc: "H. Peter Anvin" <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Pawan Gupta <[email protected]>
Cc: Randy Dunlap <[email protected]>
Cc: Oliver Neukum <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Mina Almasry <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Barry Song <[email protected]>
Cc: HORIGUCHI NAOYA <[email protected]>
Cc: Joao Martins <[email protected]>
Cc: Xiongchun Duan <[email protected]>
Cc: Balbir Singh <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
3 years agoMerge tag 'ext4_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso...
Linus Torvalds [Thu, 1 Jul 2021 02:37:39 +0000 (19:37 -0700)]
Merge tag 'ext4_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4

Pull ext4 updates from Ted Ts'o:
 "In addition to bug fixes and cleanups, there are two new features for
  ext4 in 5.14:

   - Allow applications to poll on changes to
     /sys/fs/ext4/*/errors_count

   - Add the ioctl EXT4_IOC_CHECKPOINT which allows the journal to be
     checkpointed, truncated and discarded or zero'ed"

* tag 'ext4_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4: (32 commits)
  jbd2: export jbd2_journal_[un]register_shrinker()
  ext4: notify sysfs on errors_count value change
  fs: remove bdev_try_to_free_page callback
  ext4: remove bdev_try_to_free_page() callback
  jbd2: simplify journal_clean_one_cp_list()
  jbd2,ext4: add a shrinker to release checkpointed buffers
  jbd2: remove redundant buffer io error checks
  jbd2: don't abort the journal when freeing buffers
  jbd2: ensure abort the journal if detect IO error when writing original buffer back
  jbd2: remove the out label in __jbd2_journal_remove_checkpoint()
  ext4: no need to verify new add extent block
  jbd2: clean up misleading comments for jbd2_fc_release_bufs
  ext4: add check to prevent attempting to resize an fs with sparse_super2
  ext4: consolidate checks for resize of bigalloc into ext4_resize_begin
  ext4: remove duplicate definition of ext4_xattr_ibody_inline_set()
  ext4: fsmap: fix the block/inode bitmap comment
  ext4: fix comment for s_hash_unsigned
  ext4: use local variable ei instead of EXT4_I() macro
  ext4: fix avefreec in find_group_orlov
  ext4: correct the cache_nr in tracepoint ext4_es_shrink_exit
  ...

3 years agoMerge tag 'for-5.14/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Thu, 1 Jul 2021 01:19:39 +0000 (18:19 -0700)]
Merge tag 'for-5.14/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm

Pull device mapper updates from Mike Snitzer:

 - Various DM persistent-data library improvements and fixes that
   benefit both the DM thinp and cache targets.

 - A few small DM kcopyd efficiency improvements.

 - Significant zoned related block core, DM core and DM zoned target
   changes that culminate with adding zoned append emulation (which is
   required to properly fix DM crypt's zoned support).

 - Various DM writecache target changes that improve efficiency. Adds an
   optional "metadata_only" feature that only promotes bios flagged with
   REQ_META. But the most significant improvement is writecache's
   ability to pause writeback, for a configurable time, if/when the
   working set is larger than the cache (and the cache is full) -- this
   ensures performance is no worse than the slower origin device.

* tag 'for-5.14/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm: (35 commits)
  dm writecache: make writeback pause configurable
  dm writecache: pause writeback if cache full and origin being written directly
  dm io tracker: factor out IO tracker
  dm btree remove: assign new_root only when removal succeeds
  dm zone: fix dm_revalidate_zones() memory allocation
  dm ps io affinity: remove redundant continue statement
  dm writecache: add optional "metadata_only" parameter
  dm writecache: add "cleaner" and "max_age" to Documentation
  dm writecache: write at least 4k when committing
  dm writecache: flush origin device when writing and cache is full
  dm writecache: have ssd writeback wait if the kcopyd workqueue is busy
  dm writecache: use list_move instead of list_del/list_add in writecache_writeback()
  dm writecache: commit just one block, not a full page
  dm writecache: remove unused gfp_t argument from wc_add_block()
  dm crypt: Fix zoned block device support
  dm: introduce zone append emulation
  dm: rearrange core declarations for extended use from dm-zone.c
  block: introduce BIO_ZONE_WRITE_LOCKED bio flag
  block: introduce bio zone helpers
  block: improve handling of all zones reset operation
  ...

3 years agoMerge tag 'net-next-5.14' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev...
Linus Torvalds [Wed, 30 Jun 2021 22:51:09 +0000 (15:51 -0700)]
Merge tag 'net-next-5.14' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next

Pull networking updates from Jakub Kicinski:
 "Core:

   - BPF:
      - add syscall program type and libbpf support for generating
        instructions and bindings for in-kernel BPF loaders (BPF loaders
        for BPF), this is a stepping stone for signed BPF programs
      - infrastructure to migrate TCP child sockets from one listener to
        another in the same reuseport group/map to improve flexibility
        of service hand-off/restart
      - add broadcast support to XDP redirect

   - allow bypass of the lockless qdisc to improve performance (for
     pktgen: +23% with one thread, +44% with 2 threads)

   - add a simpler version of "DO_ONCE()" which does not require jump
     labels, intended for slow-path usage

   - virtio/vsock: introduce SOCK_SEQPACKET support

   - add getsocketopt to retrieve netns cookie

   - ip: treat the lowest address of an IPv4 subnet as an ordinary unicast
     address, allowing reclaiming of precious IPv4 addresses

   - ipv6: use prandom_u32() for ID generation

   - ip: add support for more flexible field selection for hashing
     across multi-path routes (w/ offload to mlxsw)

   - icmp: add support for extended RFC 8335 PROBE (ping)

   - seg6: add support for SRv6 End.DT46 behavior

   - mptcp:
      - DSS checksum support (RFC 8684) to detect middlebox meddling
      - support Connection-time 'C' flag
      - time stamping support

   - sctp: packetization Layer Path MTU Discovery (RFC 8899)

   - xfrm: speed up state addition with seq set

   - WiFi:
      - hidden AP discovery on 6 GHz and other HE 6 GHz improvements
      - aggregation handling improvements for some drivers
      - minstrel improvements for no-ack frames
      - deferred rate control for TXQs to improve reaction times
      - switch from round robin to virtual time-based airtime scheduler

   - add trace points:
      - tcp checksum errors
      - openvswitch - action execution, upcalls
      - socket errors via sk_error_report

  Device APIs:

   - devlink: add rate API for hierarchical control of max egress rate
     of virtual devices (VFs, SFs etc.)

   - don't require RCU read lock to be held around BPF hooks in NAPI
     context

   - page_pool: generic buffer recycling

  New hardware/drivers:

   - mobile:
      - iosm: PCIe Driver for Intel M.2 Modem
      - support for Qualcomm MSM8998 (ipa)

   - WiFi: Qualcomm QCN9074 and WCN6855 PCI devices

   - sparx5: Microchip SparX-5 family of Enterprise Ethernet switches

   - Mellanox BlueField Gigabit Ethernet (control NIC of the DPU)

   - NXP SJA1110 Automotive Ethernet 10-port switch

   - Qualcomm QCA8327 switch support (qca8k)

   - Mikrotik 10/25G NIC (atl1c)

  Driver changes:

   - ACPI support for some MDIO, MAC and PHY devices from Marvell and
     NXP (our first foray into MAC/PHY description via ACPI)

   - HW timestamping (PTP) support: bnxt_en, ice, sja1105, hns3, tja11xx

   - Mellanox/Nvidia NIC (mlx5)
      - NIC VF offload of L2 bridging
      - support IRQ distribution to Sub-functions

   - Marvell (prestera):
      - add flower and match all
      - devlink trap
      - link aggregation

   - Netronome (nfp): connection tracking offload

   - Intel 1GE (igc): add AF_XDP support

   - Marvell DPU (octeontx2): ingress ratelimit offload

   - Google vNIC (gve): new ring/descriptor format support

   - Qualcomm mobile (rmnet & ipa): inline checksum offload support

   - MediaTek WiFi (mt76)
      - mt7915 MSI support
      - mt7915 Tx status reporting
      - mt7915 thermal sensors support
      - mt7921 decapsulation offload
      - mt7921 enable runtime pm and deep sleep

   - Realtek WiFi (rtw88)
      - beacon filter support
      - Tx antenna path diversity support
      - firmware crash information via devcoredump

   - Qualcomm WiFi (wcn36xx)
      - Wake-on-WLAN support with magic packets and GTK rekeying

   - Micrel PHY (ksz886x/ksz8081): add cable test support"

* tag 'net-next-5.14' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (2168 commits)
  tcp: change ICSK_CA_PRIV_SIZE definition
  tcp_yeah: check struct yeah size at compile time
  gve: DQO: Fix off by one in gve_rx_dqo()
  stmmac: intel: set PCI_D3hot in suspend
  stmmac: intel: Enable PHY WOL option in EHL
  net: stmmac: option to enable PHY WOL with PMT enabled
  net: say "local" instead of "static" addresses in ndo_dflt_fdb_{add,del}
  net: use netdev_info in ndo_dflt_fdb_{add,del}
  ptp: Set lookup cookie when creating a PTP PPS source.
  net: sock: add trace for socket errors
  net: sock: introduce sk_error_report
  net: dsa: replay the local bridge FDB entries pointing to the bridge dev too
  net: dsa: ensure during dsa_fdb_offload_notify that dev_hold and dev_put are on the same dev
  net: dsa: include fdb entries pointing to bridge in the host fdb list
  net: dsa: include bridge addresses which are local in the host fdb list
  net: dsa: sync static FDB entries on foreign interfaces to hardware
  net: dsa: install the host MDB and FDB entries in the master's RX filter
  net: dsa: reference count the FDB addresses at the cross-chip notifier level
  net: dsa: introduce a separate cross-chip notifier type for host FDBs
  net: dsa: reference count the MDB entries at the cross-chip notifier level
  ...

3 years agoMerge tag 'sched-urgent-2021-06-30' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Wed, 30 Jun 2021 22:37:49 +0000 (15:37 -0700)]
Merge tag 'sched-urgent-2021-06-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull scheduler fixes from Ingo Molnar:

 - Fix a small inconsistency (bug) in load tracking, caught by a new
   warning that several people reported.

 - Flip CONFIG_SCHED_CORE to default-disabled, and update the Kconfig
   help text.

* tag 'sched-urgent-2021-06-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched/core: Disable CONFIG_SCHED_CORE by default
  sched/fair: Ensure _sum and _avg values stay consistent

3 years agoMerge tag 'microblaze-v5.14' of git://git.monstr.eu/linux-2.6-microblaze
Linus Torvalds [Wed, 30 Jun 2021 22:36:48 +0000 (15:36 -0700)]
Merge tag 'microblaze-v5.14' of git://git.monstr.eu/linux-2.6-microblaze

Pull microblaze updates from Michal Simek:

 - Remove unused PAGE_UP/DOWN macros

 - Fix trivial spelling mistake

* tag 'microblaze-v5.14' of git://git.monstr.eu/linux-2.6-microblaze:
  arch: microblaze: Fix spelling mistake "vesion" -> "version"
  microblaze: Cleanup unused functions

3 years agoMerge tag 'safesetid-5.14' of git://github.com/micah-morton/linux
Linus Torvalds [Wed, 30 Jun 2021 22:30:47 +0000 (15:30 -0700)]
Merge tag 'safesetid-5.14' of git://github.com/micah-morton/linux

Pull SafeSetID update from Micah Morton:
 "One very minor code cleanup change that marks a variable as
  __initdata"

* tag 'safesetid-5.14' of git://github.com/micah-morton/linux:
  LSM: SafeSetID: Mark safesetid_initialized as __initdata

3 years agoMerge tag 'Smack-for-5.14' of git://github.com/cschaufler/smack-next
Linus Torvalds [Wed, 30 Jun 2021 22:28:43 +0000 (15:28 -0700)]
Merge tag 'Smack-for-5.14' of git://github.com/cschaufler/smack-next

Pull smack updates from Casey Schaufler:
 "There is nothing more significant than an improvement to a byte count
  check in smackfs.

  All changes have been in next for weeks"

* tag 'Smack-for-5.14' of git://github.com/cschaufler/smack-next:
  Smack: fix doc warning
  Revert "Smack: Handle io_uring kernel thread privileges"
  smackfs: restrict bytes count in smk_set_cipso()
  security/smack/: fix misspellings using codespell tool

3 years agoMerge tag 'audit-pr-20210629' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoor...
Linus Torvalds [Wed, 30 Jun 2021 22:22:05 +0000 (15:22 -0700)]
Merge tag 'audit-pr-20210629' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/audit

Pull audit updates from Paul Moore:
 "Another merge window, another small audit pull request.

  Four patches in total: one is cosmetic, one removes an unnecessary
  initialization, one renames some enum values to prevent name
  collisions, and one converts list_del()/list_add() to list_move().

  None of these are earth shattering and all pass the audit-testsuite
  tests while merging cleanly on top of your tree from earlier today"

* tag 'audit-pr-20210629' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/audit:
  audit: remove unnecessary 'ret' initialization
  audit: remove trailing spaces and tabs
  audit: Use list_move instead of list_del/list_add
  audit: Rename enum audit_state constants to avoid AUDIT_DISABLED redefinition
  audit: add blank line after variable declarations

3 years agoMerge tag 'selinux-pr-20210629' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Wed, 30 Jun 2021 21:55:42 +0000 (14:55 -0700)]
Merge tag 'selinux-pr-20210629' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/selinux

Pull SELinux updates from Paul Moore:

 - The slow_avc_audit() function is now non-blocking so we can remove
   the AVC_NONBLOCKING tricks; this also includes the 'flags' variant of
   avc_has_perm().

 - Use kmemdup() instead of kcalloc()+copy when copying parts of the
   SELinux policydb.

 - The InfiniBand device name is now passed by reference when possible
   in the SELinux code, removing a strncpy().

 - Minor cleanups including: constification of avtab function args,
   removal of useless LSM/XFRM function args, SELinux kdoc fixes, and
   removal of redundant assignments.

* tag 'selinux-pr-20210629' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/selinux:
  selinux: kill 'flags' argument in avc_has_perm_flags() and avc_audit()
  selinux: slow_avc_audit has become non-blocking
  selinux: Fix kernel-doc
  selinux: use __GFP_NOWARN with GFP_NOWAIT in the AVC
  lsm_audit,selinux: pass IB device name by reference
  selinux: Remove redundant assignment to rc
  selinux: Corrected comment to match kernel-doc comment
  selinux: delete selinux_xfrm_policy_lookup() useless argument
  selinux: constify some avtab function arguments
  selinux: simplify duplicate_policydb_cond_list() by using kmemdup()

3 years agoMerge tag 'clang-features-v5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Wed, 30 Jun 2021 21:33:25 +0000 (14:33 -0700)]
Merge tag 'clang-features-v5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux

Pull clang feature updates from Kees Cook:

 - Add CC_HAS_NO_PROFILE_FN_ATTR in preparation for PGO support in the
   face of the noinstr attribute, paving the way for PGO and fixing
   GCOV. (Nick Desaulniers)

 - x86_64 LTO coverage is expanded to 32-bit x86. (Nathan Chancellor)

 - Small fixes to CFI. (Mark Rutland, Nathan Chancellor)

* tag 'clang-features-v5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  qemu_fw_cfg: Make fw_cfg_rev_attr a proper kobj_attribute
  Kconfig: Introduce ARCH_WANTS_NO_INSTR and CC_HAS_NO_PROFILE_FN_ATTR
  compiler_attributes.h: cleanups for GCC 4.9+
  compiler_attributes.h: define __no_profile, add to noinstr
  x86, lto: Enable Clang LTO for 32-bit as well
  CFI: Move function_nocfi() into compiler.h
  MAINTAINERS: Add Clang CFI section

3 years agoio_uring: code clean for kiocb_done()
Hao Xu [Sun, 27 Jun 2021 21:48:05 +0000 (05:48 +0800)]
io_uring: code clean for kiocb_done()

A simple code cleanup for kiocb_done()

Signed-off-by: Hao Xu <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
3 years agoio_uring: spin in iopoll() only when reqs are in a single queue
Hao Xu [Sun, 27 Jun 2021 21:37:30 +0000 (05:37 +0800)]
io_uring: spin in iopoll() only when reqs are in a single queue

We currently spin in iopoll() when requests to be iopolled are for the
same file (device), while one device may have multiple hardware queues.
Consider an example:

hw_queue_0     |    hw_queue_1
req(30us)           req(10us)

If we first spin on iopolling for hw_queue_0, the avg latency would be
(30us + 30us) / 2 = 30us.  If we instead do round robin, the avg latency
would be (30us + 10us) / 2 = 20us, since we reap the request in hw_queue_1
in time.  So it's better to spin only when the requests are in the same
hardware queue.

Signed-off-by: Hao Xu <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
3 years agoio_uring: pre-initialise some of req fields
Pavel Begunkov [Sat, 26 Jun 2021 20:40:49 +0000 (21:40 +0100)]
io_uring: pre-initialise some of req fields

Most requests are allocated from an internal cache, so it's a waste of
time to fully initialise them every time.  Instead, let's pre-init some of
the fields we can during initial allocation (e.g. kmalloc(), see
io_alloc_req()) and keep them valid on request recycling.  There are four
of them in this patch:

->ctx always stays the same
->link is NULL on free; it's an invariant
->result doesn't even need to be initialised, just a precaution
->async_data we now clear in io_dismantle_req() as it's likely to
   never be allocated.

Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/892ba0e71309bba9fe9e0142472330bbf9d8f05d.1624739600.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
3 years agoio_uring: refactor io_submit_flush_completions
Pavel Begunkov [Sat, 26 Jun 2021 20:40:48 +0000 (21:40 +0100)]
io_uring: refactor io_submit_flush_completions

Don't init req_batch before we actually need it.  Also, add a small
cleanup for the req declaration.

Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/ad85512e12bd3a20d521e9782750300970e5afc8.1624739600.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
3 years agoio_uring: optimise hot path restricted checks
Pavel Begunkov [Sat, 26 Jun 2021 20:40:47 +0000 (21:40 +0100)]
io_uring: optimise hot path restricted checks

Move likely/unlikely from io_check_restriction() to the ctx->restricted
check specifically, because it doesn't do what it's supposed to there and
makes the common path take an extra jump.

Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/22bf70d0a543dfc935d7276bdc73081784e30698.1624739600.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
3 years agoio_uring: remove not needed PF_EXITING check
Pavel Begunkov [Sat, 26 Jun 2021 20:40:46 +0000 (21:40 +0100)]
io_uring: remove not needed PF_EXITING check

Since cancellation got moved before exit_signals(), there is no one left
who can call io_run_task_work() with PF_EXITING set, so remove the check.
Note that __io_req_task_submit() still needs a similar check.

Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/f7f305ececb1e6044ea649fb983ca754805bb884.1624739600.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
3 years agoio_uring: mainstream sqpoll task_work running
Pavel Begunkov [Sat, 26 Jun 2021 20:40:45 +0000 (21:40 +0100)]
io_uring: mainstream sqpoll task_work running

task_works are widely used, so place io_run_task_work() directly into
the main path of io_sq_thread(), and remove it from other places where
it's not needed anymore.

Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/24eb5e35d519c590d3dffbd694b4c61a5fe49029.1624739600.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
3 years agoio_uring: refactor io_arm_poll_handler()
Pavel Begunkov [Sat, 26 Jun 2021 20:40:44 +0000 (21:40 +0100)]
io_uring: refactor io_arm_poll_handler()

gcc 11 goes down a weird path and duplicates most of io_arm_poll_handler()
for the READ and WRITE cases.  Help it out and move all pollin vs pollout
specific bits under a single if-else, so there is no temptation for this
kind of unfolding.

before vs after:
   text    data     bss     dec     hex filename
  85362   12650       8   98020   17ee4 ./fs/io_uring.o
  85186   12650       8   97844   17e34 ./fs/io_uring.o

Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/1deea0037293a922a0358e2958384b2e42437885.1624739600.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
3 years agoio_uring: reduce latency by reissueing the operation
Olivier Langlois [Tue, 22 Jun 2021 12:17:39 +0000 (05:17 -0700)]
io_uring: reduce latency by reissueing the operation

It is quite frequent that when an operation fails and returns EAGAIN,
the data becomes available between that failure and the call to
vfs_poll() done by io_arm_poll_handler().

Detecting the situation and reissuing the operation is much faster than
going ahead and pushing the operation to the io-wq.

Performance improvement testing has been performed with:
Single thread, 1 TCP connection receiving a 5 Mbps stream, no sqpoll.

4 measurements have been taken:
1. The time it takes to process a read request when data is already available
2. The time it takes to process a read request by calling io_issue_sqe() twice, after vfs_poll() indicated that data was available
3. The time it takes to execute io_queue_async_work()
4. The time it takes to complete a read request asynchronously

2.25% of all the read operations did use the new path.

ready data (baseline)
avg 3657.94182918628
min 580
max 20098
stddev 1213.15975908162

reissue completion
average 7882.67567567568
min 2316
max 28811
stddev 1982.79172973284

insert io-wq time
average 8983.82276995305
min 3324
max 87816
stddev 2551.60056552038

async time completion
average 24670.4758861127
min 10758
max 102612
stddev 3483.92416873804

Conclusion:
On average, reissuing the sqe with the patched code is 1.1uSec faster, and
in the worst-case scenario 59uSec faster, than placing the request on
io-wq.

On average, completion time by reissuing the sqe with the patched code is
16.79uSec faster, and in the worst-case scenario 73.8uSec faster, than
async completion.

Signed-off-by: Olivier Langlois <[email protected]>
Reviewed-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/9e8441419bb1b8f3c3fcc607b2713efecdef2136.1624364038.git.olivier@trillion01.com
Signed-off-by: Jens Axboe <[email protected]>
3 years agoio_uring: add IOPOLL and reserved field checks to IORING_OP_UNLINKAT
Jens Axboe [Wed, 23 Jun 2021 15:07:45 +0000 (09:07 -0600)]
io_uring: add IOPOLL and reserved field checks to IORING_OP_UNLINKAT

We can't support IOPOLL with non-pollable request types, and we should
check for unused/reserved fields like we do for other request types.

Fixes: 14a1143b68ee ("io_uring: add support for IORING_OP_UNLINKAT")
Cc: [email protected]
Reported-by: Dmitry Kadashev <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
3 years agoio_uring: add IOPOLL and reserved field checks to IORING_OP_RENAMEAT
Jens Axboe [Wed, 23 Jun 2021 15:04:13 +0000 (09:04 -0600)]
io_uring: add IOPOLL and reserved field checks to IORING_OP_RENAMEAT

We can't support IOPOLL with non-pollable request types, and we should
check for unused/reserved fields like we do for other request types.

Fixes: 80a261fd0032 ("io_uring: add support for IORING_OP_RENAMEAT")
Cc: [email protected]
Reported-by: Dmitry Kadashev <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
3 years agoio_uring: refactor io_openat2()
Pavel Begunkov [Thu, 24 Jun 2021 14:10:00 +0000 (15:10 +0100)]
io_uring: refactor io_openat2()

Put the do_filp_open() failure path of io_openat2() under a single if,
deduplicating put_unused_fd(), making it look better and helping the hot
path.

Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/f4c84d25c049d0af2adc19c703bbfef607200209.1624543113.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
3 years agoio_uring: simplify struct io_uring_sqe layout
Pavel Begunkov [Thu, 24 Jun 2021 14:09:59 +0000 (15:09 +0100)]
io_uring: simplify struct io_uring_sqe layout

Flatten struct io_uring_sqe: the last union is exactly 64B, so move its
members out of union { struct { ... }} and decrease the __pad2 size.

Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/2e21ef7aed136293d654450bc3088973a8adc730.1624543113.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
3 years agoio_uring: update sqe layout build checks
Pavel Begunkov [Thu, 24 Jun 2021 14:09:58 +0000 (15:09 +0100)]
io_uring: update sqe layout build checks

Add missing BUILD_BUG_SQE_ELEM() for ->buf_group verifying that SQE
layout doesn't change.

Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/1f9d21bd74599b856b3a632be4c23ffa184a3ef0.1624543113.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
3 years agoio_uring: fix code style problems
Pavel Begunkov [Thu, 24 Jun 2021 14:09:57 +0000 (15:09 +0100)]
io_uring: fix code style problems

Fix a bunch of problems mostly found by checkpatch.pl

Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/cfaf9a2f27b43934144fe9422a916bd327099f44.1624543113.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
3 years agoio_uring: refactor io_sq_thread()
Pavel Begunkov [Thu, 24 Jun 2021 14:09:56 +0000 (15:09 +0100)]
io_uring: refactor io_sq_thread()

Move needs_sched declaration into the block where it's used, so it's
harder to misuse/wrongfully reuse.

Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/e4a07db1353ee38b924dd1b45394cf8e746130b4.1624543113.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
3 years agoio_uring: don't change sqpoll creds if not needed
Pavel Begunkov [Thu, 24 Jun 2021 14:09:55 +0000 (15:09 +0100)]
io_uring: don't change sqpoll creds if not needed

SQPOLL doesn't need to change creds if it's not submitting requests.
Move creds overriding into __io_sq_thread() after checking if there are
SQEs pending.

Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/c54368da2357ac539e0a333f7cfff70d5fb045b2.1624543113.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
3 years agoMerge tag 'for-5.14/drivers-2021-06-29' of git://git.kernel.dk/linux-block
Linus Torvalds [Wed, 30 Jun 2021 19:21:16 +0000 (12:21 -0700)]
Merge tag 'for-5.14/drivers-2021-06-29' of git://git.kernel.dk/linux-block

Pull block driver updates from Jens Axboe:
 "Pretty calm round, mostly just NVMe and a bit of MD:

   - NVMe updates (via Christoph)
        - improve the APST configuration algorithm (Alexey Bogoslavsky)
        - look for StorageD3Enable on companion ACPI device
          (Mario Limonciello)
        - allow selecting the network interface for TCP connections
          (Martin Belanger)
        - misc cleanups (Amit Engel, Chaitanya Kulkarni, Colin Ian King,
          Christoph)
        - move the ACPI StorageD3 code to drivers/acpi/ and add quirks
          for certain AMD CPUs (Mario Limonciello)
        - zoned device support for nvmet (Chaitanya Kulkarni)
        - fix the rules for changing the serial number in nvmet
          (Noam Gottlieb)
        - various small fixes and cleanups (Dan Carpenter, JK Kim,
          Chaitanya Kulkarni, Hannes Reinecke, Wesley Sheng, Geert
          Uytterhoeven, Daniel Wagner)

   - MD updates (Via Song)
        - iostats rewrite (Guoqing Jiang)
        - raid5 lock contention optimization (Gal Ofri)

   - Fall through warning fix (Gustavo)

   - Misc fixes (Gustavo, Jiapeng)"

* tag 'for-5.14/drivers-2021-06-29' of git://git.kernel.dk/linux-block: (78 commits)
  nvmet: use NVMET_MAX_NAMESPACES to set nn value
  loop: Fix missing discard support when using LOOP_CONFIGURE
  nvme.h: add missing nvme_lba_range_type endianness annotations
  nvme: remove zeroout memset call for struct
  nvme-pci: remove zeroout memset call for struct
  nvmet: remove zeroout memset call for struct
  nvmet: add ZBD over ZNS backend support
  nvmet: add Command Set Identifier support
  nvmet: add nvmet_req_bio put helper for backends
  nvmet: add req cns error complete helper
  block: export blk_next_bio()
  nvmet: remove local variable
  nvmet: use nvme status value directly
  nvmet: use u32 type for the local variable nsid
  nvmet: use u32 for nvmet_subsys max_nsid
  nvmet: use req->cmd directly in file-ns fast path
  nvmet: use req->cmd directly in bdev-ns fast path
  nvmet: make ver stable once connection established
  nvmet: allow mn change if subsys not discovered
  nvmet: make sn stable once connection was established
  ...

3 years agoMerge tag 'for-5.14/block-2021-06-29' of git://git.kernel.dk/linux-block
Linus Torvalds [Wed, 30 Jun 2021 19:12:56 +0000 (12:12 -0700)]
Merge tag 'for-5.14/block-2021-06-29' of git://git.kernel.dk/linux-block

Pull core block updates from Jens Axboe:

 - disk events cleanup (Christoph)

 - gendisk and request queue allocation simplifications (Christoph)

 - bdev_disk_changed cleanups (Christoph)

 - IO priority improvements (Bart)

 - Chained bio completion trace fix (Edward)

 - blk-wbt fixes (Jan)

 - blk-wbt enable/disable fix (Zhang)

 - Scheduler dispatch improvements (Jan, Ming)

 - Shared tagset scheduler improvements (John)

 - BFQ updates (Paolo, Luca, Pietro)

 - BFQ lock inversion fix (Jan)

 - Documentation improvements (Kir)

 - CLONE_IO block cgroup fix (Tejun)

 - Remove of ancient and deprecated block dump feature (zhangyi)

 - Discard merge fix (Ming)

 - Misc fixes or followup fixes (Colin, Damien, Dan, Long, Max, Thomas,
   Yang)

* tag 'for-5.14/block-2021-06-29' of git://git.kernel.dk/linux-block: (129 commits)
  block: fix discard request merge
  block/mq-deadline: Remove a WARN_ON_ONCE() call
  blk-mq: update hctx->dispatch_busy in case of real scheduler
  blk: Fix lock inversion between ioc lock and bfqd lock
  bfq: Remove merged request already in bfq_requests_merged()
  block: pass a gendisk to bdev_disk_changed
  block: move bdev_disk_changed
  block: add the events* attributes to disk_attrs
  block: move the disk events code to a separate file
  block: fix trace completion for chained bio
  block/partitions/msdos: Fix typo inidicator -> indicator
  block, bfq: reset waker pointer with shared queues
  block, bfq: check waker only for queues with no in-flight I/O
  block, bfq: avoid delayed merge of async queues
  block, bfq: boost throughput by extending queue-merging times
  block, bfq: consider also creation time in delayed stable merge
  block, bfq: fix delayed stable merge check
  block, bfq: let also stably merged queues enjoy weight raising
  blk-wbt: make sure throttle is enabled properly
  blk-wbt: introduce a new disable state to prevent false positive by rwb_enabled()
  ...

3 years agoMerge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid
Linus Torvalds [Wed, 30 Jun 2021 18:31:32 +0000 (11:31 -0700)]
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid

Pull HID updates from Jiri Kosina:

 - patch series that ensures that hid-multitouch driver disables touch
   and button-press reporting on hid-mt devices during suspend when the
   device is not configured as a wakeup-source, from Hans de Goede

 - support for ISH DMA on Intel EHL platform, from Even Xu

 - support for Renoir and Cezanne SoCs, Ambient Light Sensor and Human
   Presence Detection sensor for amd-sfh driver, from Basavaraj Natikar

 - other assorted code cleanups and device-specific fixes/quirks

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid: (45 commits)
  HID: thrustmaster: Switch to kmemdup() when allocate change_request
  HID: multitouch: Disable event reporting on suspend when the device is not a wakeup-source
  HID: logitech-dj: Implement may_wakeup ll-driver callback
  HID: usbhid: Implement may_wakeup ll-driver callback
  HID: core: Add hid_hw_may_wakeup() function
  HID: input: Add support for Programmable Buttons
  HID: wacom: Correct base usage for capacitive ExpressKey status bits
  HID: amd_sfh: Add initial support for HPD sensor
  HID: amd_sfh: Extend ALS support for newer AMD platform
  HID: amd_sfh: Extend driver capabilities for multi-generation support
  HID: surface-hid: Fix get-report request
  HID: sony: fix freeze when inserting ghlive ps3/wii dongles
  HID: usbkbd: Avoid GFP_ATOMIC when GFP_KERNEL is possible
  HID: amd_sfh: change in maintainer
  HID: intel-ish-hid: ipc: Specify that EHL no cache snooping
  HID: intel-ish-hid: ishtp: Add dma_no_cache_snooping() callback
  HID: intel-ish-hid: Set ISH driver depends on x86
  HID: hid-input: add Surface Go battery quirk
  HID: intel-ish-hid: Fix minor typos in comments
  HID: usbmouse: Avoid GFP_ATOMIC when GFP_KERNEL is possible
  ...

3 years agotools lib: Adopt bitmap_intersects() operation from the kernel sources
Alexey Bayduraev [Wed, 30 Jun 2021 15:54:48 +0000 (18:54 +0300)]
tools lib: Adopt bitmap_intersects() operation from the kernel sources

Adopt the bitmap_intersects() routine, which tests whether bitmaps bitmap1
and bitmap2 intersect.  This routine will be used during thread mask
initialization.
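
For reference, the semantics being adopted are straightforward; a
self-contained C equivalent of the check (not the tools/lib implementation
itself):

    #include <stdbool.h>
    #include <limits.h>

    /* True if any bit position is set in both bitmaps of nbits bits. */
    static bool bitmaps_intersect(const unsigned long *a, const unsigned long *b,
                                  unsigned int nbits)
    {
        unsigned int bits_per_long = sizeof(unsigned long) * CHAR_BIT;
        unsigned int longs = nbits / bits_per_long;
        unsigned int rem   = nbits % bits_per_long;

        for (unsigned int i = 0; i < longs; i++)
            if (a[i] & b[i])
                return true;
        /* mask off the unused high bits of the final partial word */
        if (rem && (a[longs] & b[longs] & ((1UL << rem) - 1)))
            return true;
        return false;
    }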

Signed-off-by: Alexey Bayduraev <[email protected]>
Acked-by: Andi Kleen <[email protected]>
Acked-by: Namhyung Kim <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexander Antonov <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Alexei Budankov <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Riccardo Mancini <[email protected]>
Link: http://lore.kernel.org/lkml/f75aa738d8ff8f9cffd7532d671f3ef3deb97a7c.1625065643.git.alexey.v.bayduraev@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
3 years agoMerge tag 'edac_updates_for_v5.14' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Wed, 30 Jun 2021 18:27:49 +0000 (11:27 -0700)]
Merge tag 'edac_updates_for_v5.14' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras

Pull EDAC updates from Tony Luck:
 "Various fixes and support for new CPUs:

   - Clean up error messages from thunderx_edac

   - Add MODULE_DEVICE_TABLE to ti_edac so it will autoload

   - Use %pR to print resources in aspeed_edac

   - Add Yazen Ghannam as MAINTAINER for AMD edac drivers

   - Fix Ice Lake and Sapphire Rapids drivers to report correct "near"
     or "far" device for errors in 2LM configurations

   - Add support of on package high bandwidth memory in Sapphire Rapids

   - New CPU support for three CPUs supporting in-band ECC (IOT SKUs for
     ICL-NNPI, Tiger Lake and Alder Lake)

   - Don't even try to load Intel EDAC drivers when running as a guest

   - Fix Kconfig dependency on X86_MCE_INTEL for EDAC_IGEN6"

* tag 'edac_updates_for_v5.14' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras:
  EDAC/igen6: fix core dependency
  EDAC/Intel: Do not load EDAC driver when running as a guest
  EDAC/igen6: Add Intel Alder Lake SoC support
  EDAC/igen6: Add Intel Tiger Lake SoC support
  EDAC/igen6: Add Intel ICL-NNPI SoC support
  EDAC/i10nm: Add support for high bandwidth memory
  EDAC/i10nm: Add detection of memory levels for ICX/SPR servers
  EDAC/skx_common: Add new ADXL components for 2-level memory
  MAINTAINERS: Make Yazen Ghannam maintainer for EDAC-AMD64
  EDAC/aspeed: Use proper format string for printing resource
  EDAC/ti: Add missing MODULE_DEVICE_TABLE
  EDAC/thunderx: Remove irrelevant variable from error messages

3 years agoMerge remote-tracking branch 'torvalds/master' into perf/core
Arnaldo Carvalho de Melo [Wed, 30 Jun 2021 18:27:32 +0000 (15:27 -0300)]
Merge remote-tracking branch 'torvalds/master' into perf/core

To pick up fixes.

Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
3 years agoMerge tag 'tpmdd-next-v5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Wed, 30 Jun 2021 18:23:33 +0000 (11:23 -0700)]
Merge tag 'tpmdd-next-v5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-tpmdd

Pull tpm driver updates from Jarkko Sakkinen:
 "Bug fixes for TPM"

[ This isn't actually the whole contents of the tag and thus doesn't
  contain Jarkko's signature - I dropped the two top commits that added
  support for signing modules using elliptic curve keys because there's
  a new series for that that fixes a few confusing things   - Linus ]

* tag 'tpmdd-next-v5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-tpmdd:
  tpm: Replace WARN_ONCE() with dev_err_once() in tpm_tis_status()
  tpm_tis: Use DEFINE_RES_MEM() to simplify code
  tpm: fix some doc warnings in tpm1-cmd.c
  tpm_tis_spi: add missing SPI device ID entries
  tpm: add longer timeout for TPM2_CC_VERIFY_SIGNATURE
  char: tpm: move to use request_irq by IRQF_NO_AUTOEN flag
  tpm_tis_spi: set default probe function if device id not match
  tpm_crb: Use IOMEM_ERR_PTR when function returns iomem

3 years agoMerge tag 'platform-drivers-x86-v5.14-1' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Wed, 30 Jun 2021 18:15:39 +0000 (11:15 -0700)]
Merge tag 'platform-drivers-x86-v5.14-1' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86

Pull x86 platform driver updates from Hans de Goede:
 "Highlights:

   - New think-lmi driver adding support for changing Lenovo Thinkpad
     BIOS settings from within Linux using the standard firmware-
     attributes class sysfs API

   - MS Surface aggregator-cdev now also supports forwarding events to
     user-space (for debugging / new driver development purposes only)

   - New intel_skl_int3472 driver; this provides the necessary glue to
     translate ACPI table information into GPIOs, regulators, etc. for
     camera sensors on Intel devices with IPU3-attached MIPI cameras

   - A whole bunch of other fixes + device-specific quirk additions

   - New devm_work_autocancel() devm-helpers.h function (see the usage
     sketch after the shortlog below)"

* tag 'platform-drivers-x86-v5.14-1' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86: (83 commits)
  platform/x86: dell-wmi-sysman: Change user experience when Admin/System Password is modified
  platform/x86: intel_skl_int3472: Uninitialized variable in skl_int3472_handle_gpio_resources()
  platform/x86: think-lmi: Move kfree(setting->possible_values) to tlmi_attr_setting_release()
  platform/x86: think-lmi: Split current_value to reflect only the value
  platform/x86: think-lmi: Fix issues with duplicate attributes
  platform/x86: think-lmi: Return EINVAL when kbdlang gets set to a 0 length string
  platform/x86: intel_cht_int33fe: Move to its own subfolder
  platform/x86: intel_skl_int3472: Move to intel/ subfolder
  platform/x86: intel_skl_int3472: Provide skl_int3472_unregister_clock()
  platform/x86: intel_skl_int3472: Provide skl_int3472_unregister_regulator()
  platform/x86: intel_skl_int3472: Use ACPI GPIO resource directly
  platform/x86: intel_skl_int3472: Fix dependencies (drop CLKDEV_LOOKUP)
  platform/x86: intel_skl_int3472: Free ACPI device resources after use
  platform/x86: Remove "default n" entries
  platform/x86: ISST: Use numa node id for cpu pci dev mapping
  platform/x86: ISST: Optimize CPU to PCI device mapping
  tools/power/x86/intel-speed-select: v1.10 release
  tools/power/x86/intel-speed-select: Fix uncore memory frequency display
  extcon: extcon-max8997: Simplify driver using devm
  extcon: extcon-max8997: Fix IRQ freeing at error path
  ...
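
On the devm_work_autocancel() helper called out in the summary above: it
initializes a work item and registers a devres action that runs
cancel_work_sync() when the device is unbound, so a driver cannot forget
the cancel on remove. A rough usage sketch (the foo_* names and the
private struct are hypothetical):

  #include <linux/device.h>
  #include <linux/devm-helpers.h>
  #include <linux/platform_device.h>
  #include <linux/workqueue.h>

  struct foo_priv {                       /* hypothetical driver data */
          struct work_struct work;
  };

  static void foo_work_fn(struct work_struct *work)
  {
          /* deferred processing would go here */
  }

  static int foo_probe(struct platform_device *pdev)
  {
          struct foo_priv *priv;
          int ret;

          priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
          if (!priv)
                  return -ENOMEM;

          /* INIT_WORK() plus automatic cancel_work_sync() on unbind. */
          ret = devm_work_autocancel(&pdev->dev, &priv->work, foo_work_fn);
          if (ret)
                  return ret;

          schedule_work(&priv->work);
          return 0;
  }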

3 years agoMerge tag 'mailbox-v5.14' of git://git.linaro.org/landing-teams/working/fujitsu/integ...
Linus Torvalds [Wed, 30 Jun 2021 18:11:47 +0000 (11:11 -0700)]
Merge tag 'mailbox-v5.14' of git://git.linaro.org/landing-teams/working/fujitsu/integration

Pull mailbox updates from Jassi Brar:

 - imx: add support for i.MX8ULP

 - mtk: code change around callback struct

 - qcom: add SM6125 and MSM8939 support, plus a fix for channel exhaustion

 - microchip: add support for polarfire controller

 - misc: cosmetic changes to bcm-2835, flexrm, pdc, arm-mhu and hisilicon

* tag 'mailbox-v5.14' of git://git.linaro.org/landing-teams/working/fujitsu/integration: (26 commits)
  MAINTAINERS: add entry for polarfire soc mailbox
  dt-bindings: add bindings for polarfire soc system controller
  mbox: add polarfire soc system controller mailbox
  dt-bindings: add bindings for polarfire soc mailbox
  mailbox: imx: Avoid using val uninitialized in imx_mu_isr()
  mailbox: qcom: Add MSM8939 APCS support
  mailbox: qcom: Use PLATFORM_DEVID_AUTO to register platform device
  dt-bindings: mailbox: qcom: Add MSM8939 APCS compatible
  mailbox: qcom-apcs: Add SM6125 compatible
  dt-bindings: mailbox: Add binding for sm6125
  mailbox: mtk-cmdq: Fix uninitialized variable in cmdq_mbox_flush()
  mailbox: bcm-flexrm-mailbox: Remove redundant dev_err call in flexrm_mbox_probe()
  mailbox: bcm2835: Remove redundant dev_err call in bcm2835_mbox_probe()
  mailbox: qcom-ipcc: Fix IPCC mbox channel exhaustion
  mailbox: mtk-cmdq: Add struct cmdq_pkt in struct cmdq_cb_data
  mailbox: mtk-cmdq: Use mailbox rx_callback
  mailbox: mtk-cmdq: Remove cmdq_cb_status
  mailbox: imx-mailbox: support i.MX8ULP MU
  mailbox: imx: add xSR/xCR register array
  mailbox: imx: replace the xTR/xRR array with single register
  ...

3 years agoMerge tag 'for-linus-5.14-1' of git://github.com/cminyard/linux-ipmi
Linus Torvalds [Wed, 30 Jun 2021 18:09:37 +0000 (11:09 -0700)]
Merge tag 'for-linus-5.14-1' of git://github.com/cminyard/linux-ipmi

Pull IPMI driver updates from Corey Minyard:
 "Mostly a restructure of the kcs_bmc driver to make it easier to use
  with different types of devices, and just to clean things up and
  improve things.

  Also some bug fixes for the kcs_bmc driver.

  One fix to the IPMI watchdog to stop the timer when the action is
  none. Not a big deal, but it's the right thing to do"

* tag 'for-linus-5.14-1' of git://github.com/cminyard/linux-ipmi:
  ipmi: kcs_bmc_aspeed: Fix less than zero comparison of a unsigned int
  ipmi: kcs_bmc_aspeed: Optionally apply status address
  ipmi: kcs_bmc_aspeed: Fix IBFIE typo from datasheet
  ipmi: kcs_bmc_aspeed: Implement KCS SerIRQ configuration
  dt-bindings: ipmi: Add optional SerIRQ property to ASPEED KCS devices
  dt-bindings: ipmi: Convert ASPEED KCS binding to schema
  ipmi: kcs_bmc: Add serio adaptor
  ipmi: kcs_bmc: Enable IBF on open
  ipmi: kcs_bmc: Allow clients to control KCS IRQ state
  ipmi: kcs_bmc: Decouple the IPMI chardev from the core
  ipmi: kcs_bmc: Strip private client data from struct kcs_bmc
  ipmi: kcs_bmc: Split headers into device and client
  ipmi: kcs_bmc: Turn the driver data-structures inside-out
  ipmi: kcs_bmc: Split out kcs_bmc_cdev_ipmi
  ipmi: kcs_bmc: Rename {read,write}_{status,data}() functions
  ipmi: kcs_bmc: Make status update atomic
  ipmi: kcs_bmc_aspeed: Use of match data to extract KCS properties
  ipmi/watchdog: Stop watchdog timer when the current action is 'none'

3 years agojbd2: export jbd2_journal_[un]register_shrinker()
Zhang Yi [Wed, 30 Jun 2021 08:36:38 +0000 (16:36 +0800)]
jbd2: export jbd2_journal_[un]register_shrinker()

Export jbd2_journal_[un]register_shrinker() to fix this error when
ext4 is built as a module:

  ERROR: modpost: "jbd2_journal_unregister_shrinker" undefined!
  ERROR: modpost: "jbd2_journal_register_shrinker" undefined!
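
For context, those modpost errors are what you get when a module (ext4
here) references symbols that jbd2 never exported. The fix boils down to
adding export annotations next to the definitions, roughly like this
(signatures approximated; the real patch may use EXPORT_SYMBOL_GPL or a
slightly different prototype):

  #include <linux/export.h>
  #include <linux/jbd2.h>

  int jbd2_journal_register_shrinker(journal_t *journal)
  {
          /* ... existing shrinker registration logic ... */
          return 0;
  }
  /* Make the symbol visible to modules such as ext4.ko. */
  EXPORT_SYMBOL(jbd2_journal_register_shrinker);

  void jbd2_journal_unregister_shrinker(journal_t *journal)
  {
          /* ... existing teardown logic ... */
  }
  EXPORT_SYMBOL(jbd2_journal_unregister_shrinker);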

Fixes: 4ba3fcdde7e3 ("jbd2,ext4: add a shrinker to release checkpointed buffers")
Signed-off-by: Zhang Yi <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Theodore Ts'o <[email protected]>
3 years agoMIPS: Fix PKMAP with 32-bit MIPS huge page support
Wei Li [Tue, 29 Jun 2021 14:14:20 +0000 (22:14 +0800)]
MIPS: Fix PKMAP with 32-bit MIPS huge page support

When 32-bit MIPS huge page support is enabled, we halve the number of
pointers a PTE page holds, making its last half go to waste.
Correspondingly, we should halve the number of kmap entries, as we just
initialized only a single pte table for that in pagetable_init().
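
The shape of the fix is easiest to see as a halved kmap constant; the
snippet below only illustrates the idea, and the exact macro and config
names in arch/mips/include/asm/highmem.h may differ:

  /* Illustrative sketch, not the literal kernel header. */
  #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT   /* 32-bit huge pages: PTE slots halved */
  #define LAST_PKMAP 512
  #else
  #define LAST_PKMAP 1024
  #endif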

Fixes: 35476311e529 ("MIPS: Add partial 32-bit huge page support")
Signed-off-by: Wei Li <[email protected]>
Signed-off-by: Thomas Bogendoerfer <[email protected]>
3 years agoMIPS: CI20: Add second percpu timer for SMP.
周琰杰 (Zhou Yanjie) [Sat, 26 Jun 2021 06:18:41 +0000 (14:18 +0800)]
MIPS: CI20: Add second percpu timer for SMP.

1. Add a new TCU channel as the percpu timer of core1; this is to
   prepare for the subsequent SMP support. The newly added channel
   will not adversely affect the current single-core state.
2. Adjust the position of the TCU node to make it consistent with the
   order in the jz4780.dtsi file.

Tested-by: Nikolaus Schaller <[email protected]> # on CI20
Signed-off-by: 周琰杰 (Zhou Yanjie) <[email protected]>
Acked-by: Paul Cercueil <[email protected]>
Signed-off-by: Thomas Bogendoerfer <[email protected]>
3 years agoMIPS: CI20: Reduce clocksource to 750 kHz.
周琰杰 (Zhou Yanjie) [Sat, 26 Jun 2021 06:18:40 +0000 (14:18 +0800)]
MIPS: CI20: Reduce clocksource to 750 kHz.

The original clock (3 MHz) is too fast for the clocksource;
there is a chance that the system may get stuck.

Reported-by: Nikolaus Schaller <[email protected]>
Tested-by: Nikolaus Schaller <[email protected]> # on CI20
Signed-off-by: 周琰杰 (Zhou Yanjie) <[email protected]>
Acked-by: Paul Cercueil <[email protected]>
Signed-off-by: Thomas Bogendoerfer <[email protected]>
3 years agoMIPS: Ingenic: Add MAC syscon nodes for Ingenic SoCs.
周琰杰 (Zhou Yanjie) [Sat, 26 Jun 2021 06:18:39 +0000 (14:18 +0800)]
MIPS: Ingenic: Add MAC syscon nodes for Ingenic SoCs.

Add MAC syscon nodes for X1000 SoC and X1830 SoC from Ingenic.

Signed-off-by: 周琰杰 (Zhou Yanjie) <[email protected]>
Acked-by: Paul Cercueil <[email protected]>
Signed-off-by: Thomas Bogendoerfer <[email protected]>
3 years agodt-bindings: clock: Add documentation for MAC PHY control bindings.
周琰杰 (Zhou Yanjie) [Sat, 26 Jun 2021 06:18:38 +0000 (14:18 +0800)]
dt-bindings: clock: Add documentation for MAC PHY control bindings.

Update the CGU binding documentation, add mac-phy-ctrl as a
pattern property.

Signed-off-by: 周琰杰 (Zhou Yanjie) <[email protected]>
Acked-by: Paul Cercueil <[email protected]>
Acked-by: Stephen Boyd <[email protected]>
Signed-off-by: Thomas Bogendoerfer <[email protected]>
3 years agoMIPS: X1830: Respect cell count of common properties.
周琰杰 (Zhou Yanjie) [Sat, 26 Jun 2021 06:18:37 +0000 (14:18 +0800)]
MIPS: X1830: Respect cell count of common properties.

If N fields of X cells should be provided, then that's what the
devicetree should represent, instead of having one single field of
(N * X) cells.

Signed-off-by: 周琰杰 (Zhou Yanjie) <[email protected]>
Acked-by: Paul Cercueil <[email protected]>
Signed-off-by: Thomas Bogendoerfer <[email protected]>
3 years agopowerpc/64s: move ret_from_fork etc above __end_soft_masked
Nicholas Piggin [Wed, 30 Jun 2021 07:46:21 +0000 (17:46 +1000)]
powerpc/64s: move ret_from_fork etc above __end_soft_masked

Code which runs with interrupts enabled should be moved above
__end_soft_masked where possible, because maskable interrupts that hit
below that symbol will need to consult the soft mask table, which is an
extra cost.

Signed-off-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
3 years agopowerpc/64s/interrupt: clean up interrupt return labels
Nicholas Piggin [Wed, 30 Jun 2021 07:46:20 +0000 (17:46 +1000)]
powerpc/64s/interrupt: clean up interrupt return labels

Normal kernel-interrupt exits can get interrupt_return_srr_user_restart
in their backtrace, which is an unusual and notable function, and it is
part of the user-interrupt exit path, which is doubly confusing.

Add non-local labels for both user and kernel interrupt exit cases to
address this and make the user and kernel cases more symmetric. Also get
rid of an unused label.

Signed-off-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
3 years agopowerpc/64/interrupt: add missing kprobe annotations on interrupt exit symbols
Nicholas Piggin [Wed, 30 Jun 2021 07:46:19 +0000 (17:46 +1000)]
powerpc/64/interrupt: add missing kprobe annotations on interrupt exit symbols

If one interrupt exit symbol must not be kprobed, none of them can be,
without more justification for why it's safe. Disallow kprobing on any
of the (non-local) labels in the exit paths.

Signed-off-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
3 years agopowerpc/64: enable MSR[EE] in irq replay pt_regs
Nicholas Piggin [Wed, 30 Jun 2021 07:46:18 +0000 (17:46 +1000)]
powerpc/64: enable MSR[EE] in irq replay pt_regs

Similar to commit 2b48e96be2f9f ("powerpc/64: fix irq replay
pt_regs->softe value"), enable MSR_EE in pt_regs->msr. This makes the
regs look more normal. It also allows some extra debug checks to be
added to interrupt handler entry.
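
Conceptually the replay path fabricates a pt_regs for the interrupt being
replayed, and this change also flips MSR_EE on in that fabricated msr so
the frame resembles an interrupt taken with interrupts enabled. A sketch
only, assuming the declarations already available in the irq replay code
(not the literal diff):

  #include <asm/hw_irq.h>
  #include <asm/ptrace.h>
  #include <asm/reg.h>

  /* Sketch: build regs for a replayed interrupt. */
  static void replay_build_regs(struct pt_regs *regs)
  {
          ppc_save_regs(regs);
          regs->softe = IRQS_ENABLED;
          regs->msr |= MSR_EE;    /* what this patch adds, conceptually */
  }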

Signed-off-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
3 years agopowerpc/64s/interrupt: preserve regs->softe for NMI interrupts
Nicholas Piggin [Wed, 30 Jun 2021 07:46:17 +0000 (17:46 +1000)]
powerpc/64s/interrupt: preserve regs->softe for NMI interrupts

If an NMI interrupt hits in an implicit soft-masked region, regs->softe
is modified to reflect that. This may not be necessary for correctness
at the moment, but it is less surprising and it's unhelpful when
debugging or adding checks.

Make sure this is changed back to how it was found before returning.
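
In outline the pairing looks like the sketch below: the NMI entry records
regs->softe, optionally overrides it while inside an implicit soft-masked
region, and the exit path puts the original value back. All names here are
illustrative, not the exact macros in asm/interrupt.h:

  #include <asm/hw_irq.h>
  #include <asm/ptrace.h>

  /* Illustrative sketch of the save/restore pairing. */
  static void nmi_enter_fixup(struct pt_regs *regs, unsigned long *saved_softe)
  {
          *saved_softe = regs->softe;
          if (is_implicit_soft_masked(regs))       /* assumed helper name */
                  regs->softe = IRQS_ALL_DISABLED; /* report as masked */
  }

  static void nmi_exit_fixup(struct pt_regs *regs, unsigned long saved_softe)
  {
          /* Restore regs->softe exactly as it was found on entry. */
          regs->softe = saved_softe;
  }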

Fixes: 4ec5feec1ad0 ("powerpc/64s: Make NMI record implicitly soft-masked code as irqs disabled")
Signed-off-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
3 years agopowerpc/64s: add a table of implicit soft-masked addresses
Nicholas Piggin [Wed, 30 Jun 2021 07:46:16 +0000 (17:46 +1000)]
powerpc/64s: add a table of implicit soft-masked addresses

Commit 9d1988ca87dd ("powerpc/64: treat low kernel text as irqs
soft-masked") ends up catching too much code, including ret_from_fork,
and parts of interrupt and syscall return that do not expect to be
interrupts to be soft-masked. If an interrupt gets marked pending,
and then the code proceeds out of the implicit soft-masked region it
will fail to deal with the pending interrupt.

Fix this by adding a new table of addresses which explicitly marks
the regions of code that are soft masked. This table is only checked
for interrupts that hit below __end_soft_masked, so most kernel interrupts
will not have the overhead of the table search.
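
The table itself is conceptually just a sorted list of text ranges plus a
lookup used by the interrupt entry code when regs->nip is below
__end_soft_masked. A sketch under assumed names (the real table is built
from assembler-emitted start/end labels):

  #include <linux/kernel.h>
  #include <linux/types.h>

  /* Sketch; the structure and symbol names are assumptions. */
  struct soft_mask_range {
          unsigned long start;
          unsigned long end;
  };

  /* Entries kept sorted so a linear scan or bsearch both work. */
  static const struct soft_mask_range soft_mask_table[] = {
          /* { start_label, end_label }, ... one pair per masked region */
  };

  static bool ip_in_soft_mask_table(unsigned long ip)
  {
          size_t i;

          for (i = 0; i < ARRAY_SIZE(soft_mask_table); i++)
                  if (ip >= soft_mask_table[i].start &&
                      ip < soft_mask_table[i].end)
                          return true;
          return false;
  }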

Fixes: 9d1988ca87dd ("powerpc/64: treat low kernel text as irqs soft-masked")
Reported-by: Sachin Sant <[email protected]>
Signed-off-by: Nicholas Piggin <[email protected]>
Tested-by: Sachin Sant <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
3 years agopowerpc/64e: remove implicit soft-masking and interrupt exit restart logic
Nicholas Piggin [Wed, 30 Jun 2021 07:46:15 +0000 (17:46 +1000)]
powerpc/64e: remove implicit soft-masking and interrupt exit restart logic

The implicit soft-masking to speed up interrupt return was going to be
used by 64e as well, but it has not been extensively tested on that
platform and is not considered ready. It was intended to be disabled
before merge. Disable it for now.

Most of the restart code is common with 64s, so with more correctness
and performance testing this could be re-enabled again by adding the
extra soft-mask checks to interrupt handlers and flipping
exit_must_hard_disable().

Fixes: 9d1988ca87dd ("powerpc/64: treat low kernel text as irqs soft-masked")
Signed-off-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
3 years agopowerpc/64e: fix CONFIG_RELOCATABLE build warnings
Nicholas Piggin [Wed, 30 Jun 2021 07:46:14 +0000 (17:46 +1000)]
powerpc/64e: fix CONFIG_RELOCATABLE build warnings

CONFIG_RELOCATABLE=y causes build warnings from unresolved relocations.
Fix these by using TOC addressing for these cases.

Commit 24d33ac5b8ff ("powerpc/64s: Make prom_init require RELOCATABLE")
caused some 64e configs to select RELOCATABLE resulting in these
warnings, but the underlying issue was already there.

This passes basic qemu testing.

Signed-off-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
3 years agopowerpc/64s: fix hash page fault interrupt handler
Nicholas Piggin [Wed, 30 Jun 2021 07:46:13 +0000 (17:46 +1000)]
powerpc/64s: fix hash page fault interrupt handler

The early bad fault or key fault test in do_hash_fault() ends up calling
into ___do_page_fault without having gone through an interrupt handler
wrapper (except the initial _RAW one). This can end up calling local irq
functions while the interrupt has not been reconciled, which will likely
cause crashes, and it trips up a later patch that adds more assertions.

pkey_exec_prot from selftests causes this path to be executed.

There is no real reason the key fault check needs to run before the
in_nmi() test. In fact, if a perf interrupt in the hash
fault code did a stack walk that was made to take a key fault somehow
then running ___do_page_fault could possibly cause another hash fault
causing problems. Move the in_nmi() test first, and then do everything
else inside the regular interrupt handler function.
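
In outline, the reordered flow looks like the pseudocode below: the
in_nmi() bail-out comes first, and the bad-access / key-fault handling then
happens inside the normal interrupt handler wrapper where the IRQ state has
already been reconciled. Helper names are assumptions, not the literal
patch:

  /* Pseudocode sketch of the reordered do_hash_fault() flow. */
  DEFINE_INTERRUPT_HANDLER(do_hash_fault)
  {
          unsigned long ea = regs->dar;

          if (in_nmi()) {
                  /* moved first: never touch the hash table from NMI context */
                  hash_fault_in_nmi(regs);                /* assumed helper */
                  return;
          }

          if (unlikely(bad_or_key_fault(regs, ea))) {     /* assumed helper */
                  /* runs with reconciled IRQ state inside the wrapper */
                  handle_bad_page_fault(regs, ea);        /* assumed helper */
                  return;
          }

          hash_page_fault(regs, ea);      /* normal hash table handling */
  }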

Fixes: 3a96570ffceb ("powerpc: convert interrupt handlers to use wrappers")
Reported-by: Sachin Sant <[email protected]>
Signed-off-by: Nicholas Piggin <[email protected]>
Tested-by: Sachin Sant <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
3 years agopowerpc/4xx: Fix setup_kuep() on SMP
Christophe Leroy [Tue, 29 Jun 2021 12:24:21 +0000 (12:24 +0000)]
powerpc/4xx: Fix setup_kuep() on SMP

On SMP, setup_kuep() is also called from start_secondary() since
commit 86f46f343272 ("powerpc/32s: Initialise KUAP and KUEP in C").

start_secondary() is not an __init function.

Remove the __init marker from setup_kuep() and bail out when it is
not called on the first CPU, as the work is already done.
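
In code the change amounts to something like the sketch below (the exact
guard used in the real patch may differ):

  #include <linux/printk.h>
  #include <linux/smp.h>
  #include <asm/smp.h>

  /* Sketch: no longer __init, and secondary CPUs return early because the
   * boot CPU has already done the global part of the setup. */
  void setup_kuep(bool disabled)
  {
          if (smp_processor_id() != boot_cpuid)
                  return;

          if (disabled)
                  return;

          pr_info("Activating Kernel Userspace Execution Prevention\n");
          /* ... hardware-specific enablement ... */
  }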

Fixes: 10248dcba120 ("powerpc/44x: Implement Kernel Userspace Exec Protection (KUEP)")
Fixes: 86f46f343272 ("powerpc/32s: Initialise KUAP and KUEP in C")
Reported-by: kernel test robot <[email protected]>
Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/8ee05934288994a65743a987acb1558f12c0c8c1.1624969450.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32s: Fix setup_{kuap/kuep}() on SMP
Christophe Leroy [Mon, 28 Jun 2021 06:56:11 +0000 (06:56 +0000)]
powerpc/32s: Fix setup_{kuap/kuep}() on SMP

On SMP, setup_kup() is also called from start_secondary().

start_secondary() is not an __init function.

Remove the __init marker from setup_kuep() and setup_kuap().

Fixes: 86f46f343272 ("powerpc/32s: Initialise KUAP and KUEP in C")
Reported-by: kernel test robot <[email protected]>
Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/42f4bd12b476942e4d5dc81c0e839d8871b20b1c.1624863319.git.christophe.leroy@csgroup.eu
3 years agoMerge branch 'sched/core' into sched/urgent, to pick up fix
Ingo Molnar [Wed, 30 Jun 2021 10:31:41 +0000 (12:31 +0200)]
Merge branch 'sched/core' into sched/urgent, to pick up fix

Pick up a fix for a warning that several people reported.

Signed-off-by: Ingo Molnar <[email protected]>
3 years agoMerge branch 'for-5.14/multitouch' into for-linus
Jiri Kosina [Wed, 30 Jun 2021 07:15:15 +0000 (09:15 +0200)]
Merge branch 'for-5.14/multitouch' into for-linus

- patch series that ensures that the hid-multitouch driver disables touch and
  button-press reporting on hid-mt devices during suspend when the device is
  not configured as a wakeup-source, from Hans de Goede

3 years agoMerge branch 'for-5.14/logitech' into for-linus
Jiri Kosina [Wed, 30 Jun 2021 07:12:44 +0000 (09:12 +0200)]
Merge branch 'for-5.14/logitech' into for-linus

- support for LCD menu keys + LCD brightness control on the Logitech Z-10
  speakers (with LCD) which use the same protocol as the G15 keyboards
  from Hans de Goede

3 years agoMerge branch 'for-5.14/intel-ish' into for-linus
Jiri Kosina [Wed, 30 Jun 2021 07:06:53 +0000 (09:06 +0200)]
Merge branch 'for-5.14/intel-ish' into for-linus

- support for ISH DMA on EHL platform from Even Xu
- various code style fixes and cleanups from Lee Jones and Uwe Kleine-König

3 years agoMerge branch 'for-5.14/google' into for-linus
Jiri Kosina [Wed, 30 Jun 2021 07:05:31 +0000 (09:05 +0200)]
Merge branch 'for-5.14/google' into for-linus

- device tree match for Google Whiskers device from Ikjoon Jang

3 years agoMerge branch 'for-5.14/core' into for-linus
Jiri Kosina [Wed, 30 Jun 2021 07:03:51 +0000 (09:03 +0200)]
Merge branch 'for-5.14/core' into for-linus

- device unbinding locking fix from Dmitry Torokhov
- support for programmable buttons (mapping to KEY_MACRO# event codes)
  from Thomas Weißschuh
- various other small fixes and code style improvements
