Edward Chron [Mon, 23 Sep 2019 22:37:11 +0000 (15:37 -0700)]
mm/oom: add oom_score_adj and pgtables to Killed process message
For an OOM event: print the oom_score_adj value for the OOM-killed process
to document what the oom score adjustment was at the time the process was
OOM killed. The adjustment value can be set by user code and it affects
the resulting oom_score, so it is used to influence kill process selection.
When eligible tasks are not printed (sysctl oom_dump_tasks = 0), printing
this value is the only record of the value for the process being killed.
Having this value in the Killed process message is useful to document
whether a misconfiguration occurred or to confirm that the oom_score_adj
configuration applies as expected.
An example which illustrates both misconfiguration and validation that the
oom_score_adj was applied as expected is:
Aug 14 23:00:02 testserver kernel: Out of memory: Killed process 2692
(systemd-udevd) total-vm:1056800kB, anon-rss:1052760kB, file-rss:4kB,
shmem-rss:0kB pgtables:22kB oom_score_adj:1000
systemd-udevd is a critical system application that should have an
oom_score_adj of -1000. Here it was misconfigured with an adjustment of
1000, making it a highly favored OOM kill target. The output documents
both the misconfiguration and the fact that the process was correctly
targeted by the OOM killer because of that misconfiguration. This can be
quite helpful for triage and problem determination.
The addition of pgtables_bytes shows page table usage by the process and
is a useful measure of the memory size of the process.
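As a hedged sketch (not the literal diff), the extended line can be built
from the standard mm counters; "victim" is an illustrative variable name
and K() is the kB-conversion macro already used in mm/oom_kill.c:
/*
 * Sketch of the extended "Killed process" message; the exact format
 * string in mm/oom_kill.c may differ slightly.
 */
pr_err("Out of memory: Killed process %d (%s) total-vm:%lukB, anon-rss:%lukB, "
       "file-rss:%lukB, shmem-rss:%lukB pgtables:%lukB oom_score_adj:%hd\n",
       task_pid_nr(victim), victim->comm,
       K(victim->mm->total_vm),
       K(get_mm_counter(victim->mm, MM_ANONPAGES)),
       K(get_mm_counter(victim->mm, MM_FILEPAGES)),
       K(get_mm_counter(victim->mm, MM_SHMEMPAGES)),
       mm_pgtables_bytes(victim->mm) >> 10,
       victim->signal->oom_score_adj);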
memcg, oom: don't require __GFP_FS when invoking memcg OOM killer
Masoud Sharbiani noticed that commit 29ef680ae7c21110 ("memcg, oom: move
out_of_memory back to the charge path") broke memcg OOM called from
__xfs_filemap_fault() path. It turned out that try_charge() is retrying
forever without making forward progress because mem_cgroup_oom(GFP_NOFS)
cannot invoke the OOM killer due to commit 3da88fb3bacfaa33 ("mm, oom:
move GFP_NOFS check to out_of_memory").
Allowing a forced charge because the memcg OOM killer cannot be invoked
would lead to a global OOM situation. Just returning -ENOMEM is also
risky because the OOM path is lost and some paths (e.g. get_user_pages())
will leak -ENOMEM. Therefore, invoking the memcg OOM killer (despite
GFP_NOFS) is the only choice we have for now.
Until 29ef680ae7c21110, we were able to invoke memcg OOM killer when
GFP_KERNEL reclaim failed [1]. But since 29ef680ae7c21110, we need to
invoke memcg OOM killer when GFP_NOFS reclaim failed [2]. Although in the
past we did invoke memcg OOM killer for GFP_NOFS [3], we might get
premature memcg OOM reports due to this patch.
Joel Savitz [Mon, 23 Sep 2019 22:37:04 +0000 (15:37 -0700)]
mm/oom_kill.c: add task UID to info message on an oom kill
In the event of an oom kill, useful information about the killed process
is printed to dmesg. Users, especially system administrators, will find
it useful to immediately see the UID of the process.
We already print uid when dumping eligible tasks so it is not overly hard
to find that information in the oom report. However this information is
unavailable when dumping of eligible tasks is disabled.
In the following example, abuse_the_ram is the name of a program that
attempts to iteratively allocate all available memory until it is stopped
by force.
Current message:
Out of memory: Killed process 35389 (abuse_the_ram)
total-vm:133718232kB, anon-rss:129624980kB, file-rss:0kB,
shmem-rss:0kB
Patched message:
Out of memory: Killed process 2739 (abuse_the_ram),
total-vm:133880028kB, anon-rss:129754836kB, file-rss:0kB,
shmem-rss:0kB, UID:0
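A hedged sketch of where the UID comes from ("victim" is illustrative;
from_kuid() and task_uid() are the standard helpers):
/* Translate the victim's kuid into a plain UID for the message. */
uid_t uid = from_kuid(&init_user_ns, task_uid(victim));
/* ... then append ", UID:%u" to the existing pr_err() format string. */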
mm/compaction.c: clear total_{migrate,free}_scanned before scanning a new zone
total_{migrate,free}_scanned will be added to COMPACTMIGRATE_SCANNED and
COMPACTFREE_SCANNED in compact_zone(). We should clear them before
scanning a new zone. In the proc-triggered compaction path, we forgot to
clear them.
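A minimal sketch of the fix; total_migrate_scanned/total_free_scanned are
the real compact_control counters, and placing the reset at the top of
compact_zone() follows the description above:
/* Reset the cumulative scan counters before scanning a new zone, so
 * proc-triggered compaction does not carry totals over between zones. */
cc->total_migrate_scanned = 0;
cc->total_free_scanned = 0;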
Currently there is a leak in init_z3fold_page() -- it allocates handles
from kmem cache even for headless pages, but then they are never used and
never freed, so eventually kmem cache may get exhausted. This patch
provides a fix for that.
Qian Cai [Mon, 23 Sep 2019 22:36:48 +0000 (15:36 -0700)]
mm: silence -Woverride-init/initializer-overrides
When compiling a kernel with W=1, there are several of those warnings
because arm64 overrides a field on purpose. Just disable those warnings
for both GCC and Clang for this file; this helps dig out the "gems"
hidden among the W=1 warnings by reducing the noise.
mm/init-mm.c:39:2: warning: initializer overrides prior initialization
of this subobject [-Winitializer-overrides]
INIT_MM_CONTEXT(init_mm)
^~~~~~~~~~~~~~~~~~~~~~~~
./arch/arm64/include/asm/mmu.h:133:9: note: expanded from macro
'INIT_MM_CONTEXT'
.pgd = init_pg_dir,
^~~~~~~~~~~
mm/init-mm.c:30:10: note: previous initialization is here
.pgd = swapper_pg_dir,
^~~~~~~~~~~~~~
Note: there is a side project trying to support explicitly allowing
specific initializer overrides in Clang, but there is no guarantee it
will happen.
mm/vmalloc: do not keep unpurged areas in the busy tree
The busy tree can be quite big; even though an area is freed or unmapped,
it stays there until the "purge" logic removes it.
1) Optimize and reduce the size of the "busy" tree by removing a node
from it right away as soon as a user triggers the free path. It is
possible to do so, because the allocation is done using another augmented
tree.
The vmalloc test driver shows the difference; for example, the
"fix_size_alloc_test" is ~11% better compared with the default
configuration:
2) Since the busy tree now contains only allocated areas and does not
interfere with lazily freed nodes, introduce a new function,
show_purge_info(), that dumps the "unpurged" areas; the information is
propagated through "/proc/vmallocinfo".
mm/sparse.c: remove NULL check in clear_hwpoisoned_pages()
There is no possibility for memmap to be NULL in the current codebase.
This check was added in commit 95a4774d055c ("memory-hotplug: update
mce_bad_pages when removing the memory") where memmap was originally
initialized to NULL, and only conditionally given a value.
The code that could have passed a NULL has been removed by commit ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug"), so there is no
longer a possibility that memmap can be NULL.
mm/sparse.c: fix ALIGN() without power of 2 in sparse_buffer_alloc()
The size argument passed into sparse_buffer_alloc() has already been
aligned with PAGE_SIZE or PMD_SIZE.
If the aligned size is not a power of 2 (e.g. 0x480000), PTR_ALIGN() will
return the wrong value. Use roundup() to round sparsemap_buf up to the
next multiple of size.
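A hedged sketch of the change, assuming the surrounding
sparse_buffer_alloc() bookkeeping; roundup() works for any multiple, not
just powers of two:
/* Advance sparsemap_buf to the next multiple of size; PTR_ALIGN() assumes
 * a power-of-two alignment and misbehaves for sizes like 0x480000. */
void *ptr = (void *)roundup((unsigned long)sparsemap_buf, size);

if (ptr + size > sparsemap_buf_end)     /* not enough room after rounding */
        ptr = NULL;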
mm/sparse.c: fix memory leak of sparsemap_buf in aligned memory
sparse_buffer_alloc(xsize) gets the size of memory from sparsemap_buf
after being aligned with the size. However, the size is at least
PAGE_ALIGN(sizeof(struct page) * PAGES_PER_SECTION) and usually larger
than PAGE_SIZE.
Also, sparse_buffer_fini() only frees memory between sparsemap_buf and
sparsemap_buf_end; since sparsemap_buf may have been advanced by
PTR_ALIGN() first, the aligned space before sparsemap_buf is wasted and
no one will touch it.
In our ARM32 platform (without SPARSEMEM_VMEMMAP):
  sparse_buffer_init    reserve d359c000 - d3e9c000 (9M)
  sparse_buffer_alloc   alloc   d3a00000 - d3e80000 (4.5M)
  sparse_buffer_fini    free    d3e80000 - d3e9c000 (~=100k)
The reserved memory between d359c000 - d3a00000 (~=4.4M) is unfreed.
mm/memory_hotplug: online_pages cannot be 0 in online_pages()
walk_system_ram_range() will fail with -EINVAL in case
online_pages_range() was never called (== no resource applicable in the
range). Otherwise, we will always call online_pages_range() with nr_pages
> 0 and, therefore, have online_pages > 0.
mm/memory_hotplug: make sure the pfn is aligned to the order when onlining
Commit a9cd410a3d29 ("mm/page_alloc.c: memory hotplug: free pages as
higher order") assumed that any PFN we get via memory resources is
aligned to MAX_ORDER - 1; I am not convinced that is always true. Let's
play safe, check the alignment and fall back to single pages.
akpm: warn in this situation so we get to find out if and why this ever
occurs.
mm/memory_hotplug: drop PageReserved() check in online_pages_range()
move_pfn_range_to_zone() will set all pages to PG_reserved via
memmap_init_zone(). The only way a page could no longer be reserved would
be if a MEM_GOING_ONLINE notifier would clear PG_reserved - which is not
done (the online_page callback is used for that purpose by e.g., Hyper-V
instead). walk_system_ram_range() will never call online_pages_range()
with duplicate PFNs, so drop the PageReserved() check.
This seems to be a leftover from ancient times where the memmap was
initialized when adding memory and we wanted to check for already onlined
memory.
mm/memory_hotplug.c: use PFN_UP / PFN_DOWN in walk_system_ram_range()
Patch series "mm/memory_hotplug: online_pages() cleanups", v2.
Some cleanups (+ one fix for a special case) in the context of
online_pages().
This patch (of 5):
This makes it clearer that we will never call func() with duplicate PFNs
in case we have multiple sub-page memory resources. All unaligned parts
of PFNs are completely discarded.
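A hedged sketch of the rounding ("res", "pfn" and "end_pfn" are
illustrative; PFN_UP()/PFN_DOWN() are the standard helpers):
/* Round the resource start up and the end down to whole pages, so the
 * unaligned sub-page parts are discarded and func() never sees a PFN
 * twice. */
unsigned long pfn = PFN_UP(res.start);
unsigned long end_pfn = PFN_DOWN(res.end + 1);

if (end_pfn > pfn)
        ret = (*func)(pfn, end_pfn - pfn, arg);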
Wei Yang [Mon, 23 Sep 2019 22:35:52 +0000 (15:35 -0700)]
mm/memory_hotplug.c: prevent memory leak when reusing pgdat
When offlining a node in try_offline_node(), the pgdat is not released,
so it can be reused later in hotadd_new_pgdat(). However, we reallocate
pgdat->per_cpu_nodestats even when the pgdat is reused, leaking the
previous allocation.
This patch prevents the memory leak by allocating per_cpu_nodestats only
when the pgdat is new.
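A hedged sketch of the intended hotadd_new_pgdat() flow (error handling
trimmed; helper names are the existing ones):
/* Only a freshly allocated pgdat gets new per-cpu node stats; a reused
 * pgdat keeps the allocation it already owns. */
pgdat = NODE_DATA(nid);
if (!pgdat) {
        pgdat = arch_alloc_nodedata(nid);
        if (!pgdat)
                return NULL;
        pgdat->per_cpu_nodestats = alloc_percpu(struct per_cpu_nodestat);
        arch_refresh_nodedata(nid, pgdat);
}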
drivers/base/memory.c: don't store end_section_nr in memory blocks
Each memory block spans the same amount of sections/pages/bytes. The size
is determined before the first memory block is created. No need to store
what we can easily calculate - and the calculations even look simpler now.
Michal brought up the idea of variable-sized memory blocks. However, if
we ever implement something like this, we will need an API compatibility
switch and reworks at various places (most code assumes a fixed memory
block size). So let's cleanup what we have right now.
While at it, fix the variable naming in register_mem_sect_under_node() -
we no longer talk about a single section.
driver/base/memory.c: validate memory block size early
Let's validate the memory block size early, when initializing the memory
device infrastructure. Fail hard in case the value is not suitable.
As nobody checks the return value of memory_dev_init(), turn it into a
void function and fail with a panic in all scenarios instead. Otherwise,
we'll crash later during boot when core/drivers expect that the memory
device infrastructure (including memory_block_size_bytes()) works as
expected.
I think long term, we should move the whole memory block size
configuration (set_memory_block_size_order() and
memory_block_size_bytes()) into drivers/base/memory.c.
We don't allow to offline memory block devices that belong to multiple
numa nodes. Therefore, such devices can never get removed. It is
sufficient to process a single node when removing the memory block. No
need to iterate over each and every PFN.
We already have the nid stored for each memory block. Make sure that the
nid always has a sane value.
Please note that checking for node_online(nid) is not required. If we had
a memory block belonging to a node that is no longer online, that would
be a BUG in the node offlining code.
Let's remove this indirection. We need the zone in the caller either way,
so let's just detect it there. Add some documentation for
move_pfn_range_to_zone() instead.
Mike Rapoport [Mon, 23 Sep 2019 22:35:31 +0000 (15:35 -0700)]
mm: consolidate pgtable_cache_init() and pgd_cache_init()
Both pgtable_cache_init() and pgd_cache_init() are used to initialize kmem
cache for page table allocations on several architectures that do not use
PAGE_SIZE tables for one or more levels of the page table hierarchy.
Most architectures do not implement these functions and use __weak default
NOP implementation of pgd_cache_init(). Since there is no such default
for pgtable_cache_init(), its empty stub is duplicated among most
architectures.
Rename the definitions of pgd_cache_init() to pgtable_cache_init() and
drop empty stubs of pgtable_cache_init().
Mike Rapoport [Mon, 23 Sep 2019 22:35:28 +0000 (15:35 -0700)]
microblaze: switch to generic version of pte allocation
The microblaze implementation of pte_alloc_one() has a provision to
allocate PTEs from high memory, but neither CONFIG_HIGHPTE nor the
pte_map*() variants suitable for HIGHPTE are defined.
Apart from that, the microblaze version of pte_alloc_one() is identical
to the generic one, as are the implementations of pte_free() and
pte_free_kernel().
Switch microblaze to use the generic versions of these functions. Also
remove pte_free_slow() that is not referenced anywhere in the code.
Mike Rapoport [Mon, 23 Sep 2019 22:35:25 +0000 (15:35 -0700)]
sh: switch to generic version of pte allocation
The sh implementations of pte_alloc_one(), pte_alloc_one_kernel(),
pte_free_kernel() and pte_free() are identical to the generic ones except
for the lack of __GFP_ACCOUNT for user PTE allocations.
Switch sh to use the generic versions of these functions.
Mike Rapoport [Mon, 23 Sep 2019 22:35:22 +0000 (15:35 -0700)]
ia64: switch to generic version of pte allocation
The ia64 implementations of pte_alloc_one(), pte_alloc_one_kernel(),
pte_free_kernel() and pte_free() are identical to the generic ones except
for the lack of __GFP_ACCOUNT for user PTE allocations.
Switch ia64 to use the generic versions of these functions.
Remove page table allocator "quicklists". These have been around for a
long time, but have not got much traction in the last decade and are only
used on ia64 and sh architectures.
The numbers in the initial commit look interesting but probably don't
apply anymore. If anybody wants to resurrect this it's in the git
history, but it's unhelpful to have this code and divergent allocator
behaviour for minor archs.
Also, it might be better to instead make more general improvements to the
page allocator if this is still so slow.
Minchan Kim [Tue, 24 Sep 2019 00:02:24 +0000 (00:02 +0000)]
mm: release the spinlock on zap_pte_range
In our testing (camera recording), Miguel and Wei found that
unmap_page_range() easily takes more than 6ms with preemption disabled.
The reason is that it holds the page table spinlock for the entire
512-page operation in a PMD. 6.2ms is never trivial for user experience
if an RT task cannot run during that time, since it can cause dropped
frames or audio glitches.
I took the time to benchmark it by adding some trace_printk hooks between
pte_offset_map_lock and pte_unmap_unlock in zap_pte_range. The testing
device is a 2018 premium mobile device.
I can get a 2ms delay rather easily when releasing 2M (i.e., 512 pages)
while the task runs on a little core, even without any IPI or LRU lock
contention. That is already too heavy.
If I remove activate_page, 35-40% of the zap_pte_range overhead is gone,
so most of the overhead (about 0.7ms) comes from activate_page via
mark_page_accessed. Thus, if there is LRU contention, that 0.7ms could
accumulate up to several ms.
So this patch adds a check for need_resched() in the loop, and a
preemption point if necessary.
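A hedged sketch of the shape of the change inside zap_pte_range() (not
the literal diff):
/* Inside the PTE loop, notice a pending reschedule and break out so the
 * page table lock is dropped via pte_unmap_unlock(); the caller can then
 * cond_resched() and resume, instead of holding the lock across all 512
 * PTEs of the PMD. */
do {
        if (need_resched())
                break;          /* unlock, reschedule, then continue */
        /* ... existing per-PTE zapping work ... */
} while (pte++, addr += PAGE_SIZE, addr != end);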
John Hubbard [Mon, 23 Sep 2019 22:35:10 +0000 (15:35 -0700)]
net/xdp: convert put_page() to put_user_page*()
For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().
This is part of a tree-wide conversion, as described in fc1d8e7cca2d ("mm:
introduce put_user_page*(), placeholder versions").
John Hubbard [Mon, 23 Sep 2019 22:35:07 +0000 (15:35 -0700)]
drivers/gpu/drm/via: convert put_page() to put_user_page*()
For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().
This is part of a tree-wide conversion, as described in fc1d8e7cca2d ("mm:
introduce put_user_page*(), placeholder versions").
Also reverse the order of a comparison, in order to placate checkpatch.pl.
mm/gup: add make_dirty arg to put_user_pages_dirty_lock()
Patch series "mm/gup: add make_dirty arg to put_user_pages_dirty_lock()",
v3.
There are about 50+ patches in my tree [2], and I'll be sending out the
remaining ones in a few more groups:
* The block/bio related changes (Jerome mostly wrote those, but I've had
to move stuff around extensively, and add a little code)
* mm/ changes
* other subsystem patches
* an RFC that shows the current state of the tracking patch set. That
can only be applied after all call sites are converted, but it's good to
get an early look at it.
This is part of a tree-wide conversion, as described in fc1d8e7cca2d ("mm:
introduce put_user_page*(), placeholder versions").
This patch (of 3):
Provide a more capable variant of put_user_pages_dirty_lock(), and delete
put_user_pages_dirty(). This is based on the following:
1. Lots of call sites become simpler if a bool is passed into
put_user_page*(), instead of making the call site choose which
put_user_page*() variant to call.
2. Christoph Hellwig's observation that set_page_dirty_lock() is
usually correct, and set_page_dirty() is usually a bug, or at least
questionable, within a put_user_page*() calling chain.
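The consolidated interface then looks like this (a sketch following the
description; make_dirty selects set_page_dirty_lock() before release):
/* Release pages obtained via get_user_pages*(), optionally marking them
 * dirty first. */
void put_user_pages_dirty_lock(struct page **pages, unsigned long npages,
                               bool make_dirty);
Call sites that previously had to pick a variant now simply pass true or
false.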
Johannes Weiner [Mon, 23 Sep 2019 22:35:01 +0000 (15:35 -0700)]
mm: vmscan: do not share cgroup iteration between reclaimers
One of our services observed a high rate of cgroup OOM kills in the
presence of large amounts of clean cache. Debugging showed that the
culprit is the shared cgroup iteration in page reclaim.
Under high allocation concurrency, multiple threads enter reclaim at the
same time. Fearing overreclaim when we first switched from the single
global LRU to cgrouped LRU lists, we introduced a shared iteration state
for reclaim invocations - whether 1 or 20 reclaimers are active
concurrently, we only walk the cgroup tree once: the 1st reclaimer
reclaims the first cgroup, the second the second one etc. With more
reclaimers than cgroups, we start another walk from the top.
This sounded reasonable at the time, but the problem is that reclaim
concurrency doesn't scale with allocation concurrency. As reclaim
concurrency increases, the amount of memory individual reclaimers get to
scan gets smaller and smaller. Individual reclaimers may only see one
cgroup per cycle, and that may not have much reclaimable memory. We see
individual reclaimers declare OOM when there is plenty of reclaimable
memory available in cgroups they didn't visit.
This patch does away with the shared iterator, and every reclaimer is
allowed to scan the full cgroup tree and see all of reclaimable memory,
just like it would on a non-cgrouped system. This way, when OOM is
declared, we know that the reclaimer actually had a chance.
To still maintain fairness in reclaim pressure, disallow cgroup reclaim
from bailing out of the tree walk early. Kswapd and regular direct
reclaim already don't bail, so it's not clear why limit reclaim would have
to, especially since it only walks subtrees to begin with.
This change completely eliminates the OOM kills on our service, while
showing no signs of overreclaim - no increased scan rates, %sys time, or
abrupt free memory spikes. I tested across 100 machines that have 64G of
RAM and host about 300 cgroups each.
[ It's possible overreclaim never was a *practical* issue to begin
with - it was simply a concern we had on the mailing lists at the
time, with no real data to back it up. But we have also added more
bail-out conditions deeper inside reclaim (e.g. the proportional
exit in shrink_node_memcg) since. Regardless, now we have data that
suggests full walks are more reliable and scale just fine. ]
Roman Gushchin [Mon, 23 Sep 2019 22:34:58 +0000 (15:34 -0700)]
mm: memcontrol: switch to rcu protection in drain_all_stock()
Commit 72f0184c8a00 ("mm, memcg: remove hotplug locking from try_charge")
introduced css_tryget()/css_put() calls in drain_all_stock(), which are
supposed to protect the target memory cgroup from being released during
the mem_cgroup_is_descendant() call.
However, it's not completely safe. In theory, memcg can go away between
reading stock->cached pointer and calling css_tryget().
This can happen if drain_all_stock() races with drain_local_stock()
performed on the remote cpu as a result of a work item scheduled by the
previous invocation of drain_all_stock().
The race is somewhat theoretical and there is little chance of triggering
it, but the current code looks a bit confusing, so it makes sense to fix
it anyway. The code reads as if css_tryget() and css_put() are used to
protect the stock drainage. That is not necessary, because stocked pages
hold references to the cached cgroup. And it obviously won't work for
work items scheduled on other cpus.
So, let's read the stock->cached pointer and evaluate the memory cgroup
inside an rcu read section, and get rid of the css_tryget()/css_put()
calls.
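A hedged sketch of the resulting per-cpu check (local names are
illustrative):
/* Look at the cached memcg under RCU; no css reference counting is
 * needed because stocked pages already pin the cached cgroup. */
bool flush = false;

rcu_read_lock();
memcg = stock->cached;
if (memcg && stock->nr_pages &&
    mem_cgroup_is_descendant(memcg, root_memcg))
        flush = true;
rcu_read_unlock();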
Chris Down [Mon, 23 Sep 2019 22:34:55 +0000 (15:34 -0700)]
mm, memcg: throttle allocators when failing reclaim over memory.high
We're trying to use memory.high to limit workloads, but have found that
containment can frequently fail completely and cause OOM situations
outside of the cgroup. This happens especially with swap space -- either
when none is configured, or swap is full. These failures often also don't
have enough warning to allow one to react, whether for a human or for a
daemon monitoring PSI.
Here is output from a simple program showing how long it takes in usec
(column 2) to allocate a megabyte of anonymous memory (column 1) when a
cgroup is already beyond its memory high setting, and no swap is
available:
The delay does go up extremely marginally past the 100MB memory.high
threshold, as now we spend time scanning before returning to usermode, but
it's nowhere near enough to contain growth. It also doesn't get worse the
more pages you have, since it only considers nr_pages.
The current situation goes against both the expectations of users of
memory.high, and our intentions as cgroup v2 developers. In
cgroup-v2.txt, we claim that we will throttle and only under "extreme
conditions" will memory.high protection be breached. Likewise, cgroup v2
users generally also expect that memory.high should throttle workloads as
they exceed their high threshold. However, as seen above, this isn't
always how it works in practice -- even on banal setups like those with no
swap, or where swap has become exhausted, we can end up with memory.high
being breached and us having no weapons left in our arsenal to combat
runaway growth with, since reclaim is futile.
It's also hard for system monitoring software or users to tell how bad the
situation is, as "high" events for the memcg may in some cases be benign,
and in others be catastrophic. The current status quo is that we fail
containment in a way that doesn't provide any advance warning that things
are about to go horribly wrong (for example, we are about to invoke the
kernel OOM killer).
This patch introduces explicit throttling when reclaim is failing to keep
memcg size contained at the memory.high setting. It does so by applying
an exponential delay curve derived from the memcg's overage compared to
memory.high. In the normal case where the memcg is either below or only
marginally over its memory.high setting, no throttling will be performed.
This composes well with system health monitoring and remediation, as these
allocator delays are factored into PSI's memory pressure calculations.
This both creates a mechanism for system administrators or applications
consuming the PSI interface to trivially see that the memcg in question
is struggling and to use that to make more reasonable decisions, and it
permits them enough time to act. Either of these can act with
significantly more nuance than we can provide using the system OOM
killer.
This is a similar idea to memory.oom_control in cgroup v1 which would put
the cgroup to sleep if the threshold was violated, but it's also
significantly improved as it results in visible memory pressure, and also
doesn't schedule indefinitely, which previously made tracing and other
introspection difficult (ie. it's clamped at 2*HZ per allocation through
MEMCG_MAX_HIGH_DELAY_JIFFIES).
Contrast the previous results with a kernel with this patch:
As you can see, in the normal case, memory allocation takes around 1000
usec. However, as we exceed our memory.high, things start to increase
exponentially, but fairly leniently at first. Our first megabyte over
memory.high takes us 0.16 seconds, then the next is 0.46 seconds, then the
next is almost an entire second. This gets worse until we reach our
eventual 2*HZ clamp per batch, resulting in 16 seconds per megabyte.
However, this is still making forward progress, so permits tracing or
further analysis with programs like GDB.
We use an exponential curve for our delay penalty for a few reasons:
1. We run mem_cgroup_handle_over_high to potentially do reclaim after
we've already performed allocations, which means that temporarily
going over memory.high by a small amount may be perfectly legitimate,
even for compliant workloads. We don't want to unduly penalise such
cases.
2. An exponential curve (as opposed to a static or linear delay) allows
ramping up memory pressure stats more gradually, which can be useful
to work out that you have set memory.high too low, without destroying
application performance entirely.
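A hedged sketch of such a curve; apart from MEMCG_MAX_HIGH_DELAY_JIFFIES
(mentioned above), the constant and variable names are assumptions, not
the exact upstream code:
/* Penalty grows with the square of the fractional overage above
 * memory.high and is clamped at 2*HZ per allocation batch. */
u64 overage = div64_u64((usage - high) << MEMCG_DELAY_PRECISION_SHIFT, high);
u64 penalty_jiffies = (overage * overage * HZ)
        >> (MEMCG_DELAY_PRECISION_SHIFT + MEMCG_DELAY_SCALING_SHIFT);

penalty_jiffies = min_t(u64, penalty_jiffies, MEMCG_MAX_HIGH_DELAY_JIFFIES);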
This patch expands on earlier work by Johannes Weiner. Thanks!
Transparent Huge Pages are currently stored in i_pages as pointers to
consecutive subpages. This patch changes that to storing consecutive
pointers to the head page in preparation for storing huge pages more
efficiently in i_pages.
mm/filemap.c: don't initiate writeback if mapping has no dirty pages
Functions like filemap_write_and_wait_range() should do nothing if the
inode has no dirty pages or pages currently under writeback. But they
construct a struct writeback_control anyway, and this performs some
atomic operations if CONFIG_CGROUP_WRITEBACK=y: on the fast path it locks
inode->i_lock and updates the writeback ownership state; the slow path
might do more work. Currently this path is safely avoided only when the
inode mapping has no pages.
For example, generic_file_read_iter() calls filemap_write_and_wait_range()
at each O_DIRECT read - a pretty hot path.
This patch skips starting new writeback if the mapping has no dirty tags
set. If writeback is already in progress, filemap_write_and_wait_range()
will wait for it.
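A hedged sketch of the check (mapping_tagged() is the real helper; the
surrounding logic is simplified):
/* If no pages are tagged dirty, skip constructing a writeback_control
 * and starting writeback; pages already under writeback are still
 * waited for by the caller. */
if (!mapping_tagged(mapping, PAGECACHE_TAG_DIRTY))
        return 0;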
mm, page_owner, debug_pagealloc: save and dump freeing stack trace
The debug_pagealloc functionality is useful to catch buggy page allocator
users that cause e.g. use-after-free or double free. When a page
inconsistency is detected, debugging is often simpler when the call stack
of the process that last allocated and freed the page is known. When
page_owner is also enabled, we record the allocation stack trace, but not
the freeing one.
This patch therefore adds recording of freeing process stack trace to page
owner info, if both page_owner and debug_pagealloc are configured and
enabled. With only page_owner enabled, this info is not useful for the
memory leak debugging use case. dump_page() is adjusted to print the
info. An example result of calling __free_pages() twice may look like
this (note the page last free stack trace):
BUG: Bad page state in process bash pfn:13d8f8
page:ffffc31984f63e00 refcount:-1 mapcount:0 mapping:0000000000000000 index:0x0
flags: 0x1affff800000000()
raw: 01affff800000000 dead000000000100 dead000000000122 0000000000000000
raw: 0000000000000000 0000000000000000 ffffffffffffffff 0000000000000000
page dumped because: nonzero _refcount
page_owner tracks the page as freed
page last allocated via order 0, migratetype Unmovable, gfp_mask 0xcc0(GFP_KERNEL)
prep_new_page+0x143/0x150
get_page_from_freelist+0x289/0x380
__alloc_pages_nodemask+0x13c/0x2d0
khugepaged+0x6e/0xc10
kthread+0xf9/0x130
ret_from_fork+0x3a/0x50
page last free stack trace:
free_pcp_prepare+0x134/0x1e0
free_unref_page+0x18/0x90
khugepaged+0x7b/0xc10
kthread+0xf9/0x130
ret_from_fork+0x3a/0x50
Modules linked in:
CPU: 3 PID: 271 Comm: bash Not tainted 5.3.0-rc4-2.g07a1a73-default+ #57
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.1-0-ga5cab58-prebuilt.qemu.org 04/01/2014
Call Trace:
dump_stack+0x85/0xc0
bad_page.cold+0xba/0xbf
rmqueue_pcplist.isra.0+0x6c5/0x6d0
rmqueue+0x2d/0x810
get_page_from_freelist+0x191/0x380
__alloc_pages_nodemask+0x13c/0x2d0
__get_free_pages+0xd/0x30
__pud_alloc+0x2c/0x110
copy_page_range+0x4f9/0x630
dup_mmap+0x362/0x480
dup_mm+0x68/0x110
copy_process+0x19e1/0x1b40
_do_fork+0x73/0x310
__x64_sys_clone+0x75/0x80
do_syscall_64+0x6e/0x1e0
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x7f10af854a10
...
mm, page_owner: keep owner info when freeing the page
For debugging purposes it might be useful to keep the owner info even
after the page has been freed, and include it in e.g. dump_page() when
detecting a bad page state. For that, change the meaning of the
PAGE_EXT_OWNER flag to "page owner info has been set at least once" and
add a new PAGE_EXT_OWNER_ACTIVE flag to track whether the page is
supposed to be currently allocated or free. Adjust dump_page()
accordingly, distinguishing free and allocated pages. In the page_owner
debugfs file,
keep printing only allocated pages so that existing scripts are not
confused, and also because free pages are irrelevant for the memory
statistics or leak detection that's the typical use case of the file,
anyway.
mm, page_owner: record page owner for each subpage
Patch series "debug_pagealloc improvements through page_owner", v2.
The debug_pagealloc functionality serves a similar purpose on the page
allocator level that slub_debug does on the kmalloc level, which is to
detect bad users. One notable feature that slub_debug has is storing
stack traces of who last allocated and freed the object. On page level we
track allocations via page_owner, but that info is discarded when freeing,
and we don't track freeing at all. This series improves those aspects.
With both debug_pagealloc and page_owner enabled, we can then get bug
reports such as the example in Patch 4.
SLUB debug tracking additionally stores cpu, pid and timestamp. This could
be added later, if deemed useful enough to justify the additional page_ext
structure size.
This patch (of 3):
Currently, page owner info is only recorded for the first page of a
high-order allocation, and copied to tail pages in the event of a split
page. With the plan to keep previous owner info after freeing the page,
it would be beneficial to record the page owner for each subpage upon
allocation. This increases the overhead for high orders, but that should
be acceptable for a debugging option.
The order stored for each subpage is the order of the whole allocation.
This makes it possible to calculate the "head" pfn and to recognize "tail"
pages (quoted because not all high-order allocations are compound pages
with true head and tail pages). When reading the page_owner debugfs file,
keep skipping the "tail" pages so that stats gathered by existing scripts
don't get inflated.
Walter Wu [Mon, 23 Sep 2019 22:34:13 +0000 (15:34 -0700)]
kasan: add memory corruption identification for software tag-based mode
Add memory corruption identification at bug report for software tag-based
mode. The report shows whether it is "use-after-free" or "out-of-bound"
error instead of "invalid-access" error. This will make it easier for
programmers to see the memory corruption problem.
We extend the slab to store the five most recent free pointer tags and
free backtraces, so we can check whether the tagged address is in the
slab record and make a good guess whether the object is more likely a
"use-after-free" or an "out-of-bound" access. Therefore, every slab
memory corruption can be identified as either "use-after-free" or
"out-of-bound".
Qian Cai [Mon, 23 Sep 2019 22:34:10 +0000 (15:34 -0700)]
mm/kmemleak: increase the max mem pool to 1M
There are some machines with slow disks and fast CPUs. When they are
under memory pressure, it could take a long time to swap before the OOM
kicks in to free up some memory. As a result, a large mem pool is needed
for kmemleak, or there is a higher chance of a kmemleak metadata
allocation failure. 524288 proves to be a good number for all
architectures here. Increase the upper bound to 1M to leave some room
for the future.
Qian Cai [Mon, 23 Sep 2019 22:34:07 +0000 (15:34 -0700)]
mm/kmemleak.c: record the current memory pool size
The only way to obtain the current memory pool size for a running kernel
is to check the kernel config file which is inconvenient. Record it in
the kernel messages.
mm: kmemleak: use the memory pool for early allocations
Currently kmemleak uses a static early_log buffer to trace all memory
allocation/freeing before the slab allocator is initialised. Such early
log is replayed during kmemleak_init() to properly initialise the kmemleak
metadata for objects allocated up to that point. With a memory pool that
does not rely on the slab allocator, it is possible to skip this early log
entirely.
In order to remove the early logging, consider kmemleak_enabled == 1 by
default while the kmem_cache availability is checked directly on the
object_cache and scan_area_cache variables. The RCU callback is only
invoked after object_cache has been initialised as we wouldn't have any
concurrent list traversal before this.
In order to reduce the number of callbacks before kmemleak is fully
initialised, move the kmemleak_init() call to mm_init().
mm: kmemleak: simple memory allocation pool for kmemleak objects
Add a memory pool for struct kmemleak_object in case the normal
kmem_cache_alloc() fails under the gfp constraints passed by the caller.
The mem_pool[] array size is currently fixed at 16000.
We are not using the existing mempool kernel API since this requires the
slab allocator to be available (for pool->elements allocation). A
subsequent kmemleak patch will replace the static early log buffer with
the pool allocation introduced here, and this functionality is required
to be available before the slab allocator is initialised.
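A hedged sketch of the fallback path implied above; mem_pool[] is from
the description, while mem_pool_free_count and the omitted locking are
assumptions:
/* Prefer the slab cache; if that fails under the caller's gfp
 * constraints, hand out an entry from the static pool instead. */
object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
if (!object && mem_pool_free_count)
        object = &mem_pool[--mem_pool_free_count];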
Object scan areas are an optimisation aimed at decreasing false positives
and slightly improving the scanning time of large objects known to only
have a few specific pointers. If a struct scan_area allocation fails,
kmemleak can still function normally by scanning the full object.
Introduce an OBJECT_FULL_SCAN flag and mark objects as such when scan_area
allocation fails.
tid_to_cpu() and tid_to_event() are only used in note_cmpxchg_failure()
when SLUB_DEBUG_CMPXCHG=y, so with the default SLUB_DEBUG_CMPXCHG=n,
Clang will complain about those unused functions.
Waiman Long [Mon, 23 Sep 2019 22:33:49 +0000 (15:33 -0700)]
mm, slab: move memcg_cache_params structure to mm/slab.h
The memcg_cache_params structure is only embedded into the kmem_cache of
the slab and slub allocators as defined in slab_def.h and slub_def.h and
used internally by mm code. There is no need to expose it in a public
header. So move it from include/linux/slab.h to mm/slab.h. It is just a
refactoring patch with no code change.
In fact both the slub_def.h and slab_def.h should be moved into the mm
directory as well, but that will probably cause many merge conflicts.
Waiman Long [Mon, 23 Sep 2019 22:33:46 +0000 (15:33 -0700)]
mm, slab: extend slab/shrink to shrink all memcg caches
Currently, a value of '1' is written to the /sys/kernel/slab/<slab>/shrink
file to shrink the slab by flushing out all the per-cpu slabs and free
slabs in partial lists. This can be useful to squeeze out a bit more
memory under extreme conditions, as well as making the active object
counts in /proc/slabinfo more accurate.
This usually applies only to the root caches, as the SLUB_MEMCG_SYSFS_ON
option is usually not enabled and "slub_memcg_sysfs=1" not set. Even if
memcg sysfs is turned on, it is too cumbersome and impractical to manage
all those per-memcg sysfs files in a real production system.
So there is no practical way to shrink memcg caches. Fix this by enabling
a proper write to the shrink sysfs file of the root cache to scan all the
available memcg caches and shrink them as well. For a non-root memcg
cache (when SLUB_MEMCG_SYSFS_ON or slub_memcg_sysfs is on), only that
cache will be shrunk when written.
On a 2-socket 64-core 256-thread arm64 system with 64k pages, after a
parallel kernel build the amount of memory occupied by slabs before
shrinking was:
Changwei Ge [Mon, 23 Sep 2019 22:33:40 +0000 (15:33 -0700)]
ocfs2: checkpoint appending truncate log transaction before flushing
Appending to the truncate log (TA) and flushing the truncate log (TF)
are two separate transactions. They can both be committed but not yet
checkpointed. If a crash occurs at that point, both transactions will be
replayed, with several clusters already released to the global bitmap.
The truncate log replay then results in a cluster double free.
To reproduce this issue, just crash the host while punching holes in
files.
Changwei Ge [Mon, 23 Sep 2019 22:33:37 +0000 (15:33 -0700)]
ocfs2: wait for recovering done after direct unlock request
There is a scenario causing ocfs2 umount to hang when multiple hosts are
rebooting at the same time.
NODE1                                 NODE2                NODE3
send unlock request to NODE2
                                      dies
                                                           become recovery master
                                                           recover NODE2
find NODE2 dead
mark resource RECOVERING
directly remove lock from grant list
calculate usage but RECOVERING marked
**miss the window of purging
clear RECOVERING
To reproduce this issue, crash a host and then umount ocfs2
from another node.
To solve this, just let the unlock path wait for recovery to finish.
fs/ocfs2/dir.c: In function ocfs2_dx_dir_transfer_leaf:
fs/ocfs2/dir.c:3653:42: warning: variable new_list set but not used [-Wunused-but-set-variable]
fs/ocfs2/namei.c: remove set but not used variables
Fixes gcc '-Wunused-but-set-variable' warning:
fs/ocfs2/namei.c: In function ocfs2_create_inode_in_orphan:
fs/ocfs2/namei.c:2503:23: warning: variable di set but not used [-Wunused-but-set-variable]
There is no need to check the return value of debugfs_create functions,
but the last sweep through ocfs2 missed a number of places where this was
still happening. There is also no need to save the individual dentries
for the debugfs files, as everything can just be removed at once when the
directory is removed.
By getting rid of the file dentries for the debugfs entries, a bit of
local memory can be saved as well.
Joseph Qi [Mon, 23 Sep 2019 22:33:08 +0000 (15:33 -0700)]
ocfs2: use jbd2_inode dirty range scoping
6ba0e7dc64a5 ("jbd2: introduce jbd2_inode dirty range scoping") allows us
to scope each of the inode dirty ranges associated with a given
transaction, and ext4 already works this way.
Now let's also use the newly introduced jbd2_inode dirty range scoping to
prevent us from waiting forever when trying to complete a journal
transaction in ocfs2.
Since 9e3596b0c653 ("kbuild: initramfs cleanup, set target from Kconfig")
"make clean" leaves behind compressed initramfs images. Example:
$ make defconfig
$ sed -i 's|CONFIG_INITRAMFS_SOURCE=""|CONFIG_INITRAMFS_SOURCE="/tmp/ir.cpio"|' .config
$ make olddefconfig
$ make -s
$ make -s clean
$ git clean -ndxf | grep initramfs
Would remove usr/initramfs_data.cpio.gz
clean rules do not have CONFIG_* context, so they do not know which
compression format was used. Thus they don't know which files to delete.
Tell clean to delete all possible compression formats.
Once patched, usr/initramfs_data.cpio.gz and friends are deleted by
"make clean".
z3fold_page_reclaim()'s retry mechanism is broken: on a second iteration
it will have zhdr from the first one so that zhdr is no longer in line
with struct page. That leads to crashes when the system is stressed.
Fix that by moving zhdr assignment up.
While at it, protect against using already freed handles by using own
local slots structure in z3fold_page_reclaim().
On kernels without CONFIG_MMU, we get a link error for the siw driver:
drivers/infiniband/sw/siw/siw_mem.o: In function `siw_umem_get':
siw_mem.c:(.text+0x4c8): undefined reference to `can_do_mlock'
This is probably not the only driver that needs the function and could
otherwise build correctly without CONFIG_MMU, so add a dummy variant that
always returns false.
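A hedged sketch of such a stub (the real declaration pair lives in
include/linux/mm.h):
/* Without an MMU there is no mlock, so the dummy always reports that
 * mlock is not possible. */
#ifdef CONFIG_MMU
extern bool can_do_mlock(void);
#else
static inline bool can_do_mlock(void)
{
        return false;
}
#endif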
Revert "mm/z3fold.c: fix race between migration and destruction"
With the original commit applied, z3fold_zpool_destroy() may get blocked
on wait_event() for an indefinite time. Revert the commit for the time
being to get rid of this problem, since the issue the original commit
addresses is less severe.
fat: work around race with userspace's read via blockdev while mounting
If userspace reads the buffer via blockdev while mounting,
sb_getblk()+modify can race with buffer read via blockdev.
For example,
FS                                  userspace
bh = sb_getblk()
modify bh->b_data
                                    read
                                    ll_rw_block(bh)
                                      fill bh->b_data by on-disk data
                                      /* lost modified data by FS */
                                      set_buffer_uptodate(bh)
set_buffer_uptodate(bh)
Userspace should not use the blockdev while mounting, though; udev seems
to already be doing this. Although I think udev should try to avoid this,
work around the race at the cost of a small overhead.
Merge tag 'for-v5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/sre/linux-power-supply
Pull power supply and reset updates from Sebastian Reichel:
"Core:
- Ensure HWMON devices are registered with valid names
- Fix device wakeup code
Drivers:
- bq25890_charger: Add BQ25895 support
- axp288_fuel_gauge: Add Minix Neo Z83-4 to blacklist
- sc27xx: improve battery calibration
- misc small fixes all over drivers"
* tag 'for-v5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/sre/linux-power-supply: (24 commits)
power: supply: cpcap-charger: Enable vbus boost voltage
power: supply: sc27xx: Add POWER_SUPPLY_PROP_CALIBRATE attribute
power: supply: sc27xx: Optimize the battery capacity calibration
power: supply: sc27xx: Make sure the alarm capacity is larger than 0
power: supply: sc27xx: Fix the the accuracy issue of coulomb calculation
power: supply: sc27xx: Fix conditon to enable the FGU interrupt
power: supply: sc27xx: Add POWER_SUPPLY_PROP_ENERGY_FULL_DESIGN attribute
power: supply: max77650: add MODULE_ALIAS()
power: supply: isp1704: remove redundant assignment to variable ret
power: supply: bq25890_charger: Add the BQ25895 part
power: supply: sc27xx: Replace devm_add_action() followed by failure action with devm_add_action_or_reset()
power: supply: sc27xx: Introduce local variable 'struct device *dev'
power: reset: reboot-mode: Fix author email format
power: supply: ab8500: remove set but not used variables 'vbup33_vrtcn' and 'bup_vch_range'
power: supply: max17042_battery: Fix a typo in function names
power: reset: gpio-restart: Fix typo when gpio reset is not found
power: supply: Init device wakeup after device_add()
power: supply: ab8500_charger: Mark expected switch fall-through
power: supply: sbs-battery: only return health when battery present
MAINTAINERS: N900: Remove isp1704_charger.h record
...
Merge tag 'hsi-for-5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/sre/linux-hsi
Pull HSI updates from Sebastian Reichel:
"Misc cleanups"
* tag 'hsi-for-5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/sre/linux-hsi:
HSI: Remove dev_err() usage after platform_get_irq()
HSI: ssi_protocol: Mark expected switch fall-throughs
firmware: bcm47xx_nvram: _really_ correct size_t printf format
Commit feb4eb060c3a ("firmware: bcm47xx_nvram: Correct size_t printf
format") was wrong, and changed a printout of 'header.len' - which is an
u32 type - to use '%zu'.
It apparently did pattern matching on the other case, where it printed
out 'nvram_len', which is indeed of type 'size_t'.
Rather than undoing the change, this just makes it use the variable that
the change seemed to expect to be used.
modules: make MODULE_IMPORT_NS() work even when modular builds are disabled
It's an unusual configuration, and was apparently never tested, and not
caught in linux-next because of a combination of travels and it making
it into the tree too late.
The fix is to simply move the #define outside the CONFIG_MODULES section,
since MODULE_INFO() will do the right thing.
Merge tag 'rtc-5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/abelloni/linux
Pull RTC updates from Alexandre Belloni:
"Two new drivers and the new pcf2127 feature make the bulk of the
additions. The rest are the usual fixes and new features.
Subsystem:
- add debug message when registration fails
New drivers:
- Amlogic Virtual Wake
- Freescale FlexTimer Module alarm
Drivers:
- remove superfluous error messages
- convert to i2c_new_dummy_device and devm_i2c_new_dummy_device
- Remove dev_err() usage after platform_get_irq()
- Set RTC range for: pcf2123, pcf8563, snvs.
- pcf2127: tamper detection and watchdog support
- pcf85363: fix regmap issue
- sun6i: H6 support
- remove w90x900/nuc900 driver"
* tag 'rtc-5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/abelloni/linux: (51 commits)
rtc: meson: mark PM functions as __maybe_unused
rtc: sc27xx: Remove clearing SPRD_RTC_POWEROFF_ALM_FLAG flag
dt-bindings: rtc: ds1307: add rx8130 compatible
rtc: sun6i: Allow using as wakeup source from suspend
rtc: pcf8563: let the core handle range offsetting
rtc: pcf8563: remove useless indirection
rtc: pcf8563: convert to devm_rtc_allocate_device
rtc: pcf8563: add Microcrystal RV8564 compatible
rtc: pcf8563: add Epson RTC8564 compatible
rtc: s35390a: convert to devm_i2c_new_dummy_device()
rtc: max77686: convert to devm_i2c_new_dummy_device()
rtc: pcf85363/pcf85263: fix regmap error in set_time
rtc: snvs: switch to rtc_time64_to_tm/rtc_tm_to_time64
rtc: snvs: set range
rtc: snvs: fix possible race condition
rtc: pcf2127: bugfix: watchdog build dependency
rtc: pcf2127: add tamper detection support
rtc: pcf2127: add watchdog feature support
rtc: pcf2127: bugfix: read rtc disables watchdog
rtc: pcf2127: cleanup register and bit defines
...
Merge tag 'rpmsg-v5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/andersson/remoteproc
Pull rpmsg updates from Bjorn Andersson:
"This contains updates to make the rpmsg sample driver more useful,
fixes the naming of GLINK devices to avoid naming collisions and a few
minor bug fixes. It also updates MAINTAINERS to reflect the move to
kernel.org"
* tag 'rpmsg-v5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/andersson/remoteproc:
rpmsg: glink-smem: Name the edge based on parent remoteproc
rpmsg: glink: Use struct_size() helper
rpmsg: virtio_rpmsg_bus: replace "%p" with "%pK"
MAINTAINERS: rpmsg: fix git tree location
rpmsg: core: fix comments
samples/rpmsg: Introduce a module parameter for message count
samples/rpmsg: Replace print_hex_dump() with print_hex_dump_debug()
Merge tag 'rproc-v5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/andersson/remoteproc
Pull remoteproc updates from Bjorn Andersson:
"This exposes the remoteproc's name in sysfs, allows stm32 to enter
platform standby and provides bug fixes for stm32 and Qualcomm's modem
remoteproc drivers. Finally it updates MAINTAINERS to reflect the move
to kernel.org"
* tag 'rproc-v5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/andersson/remoteproc:
MAINTAINERS: remoteproc: update git tree location
remoteproc: Remove dev_err() usage after platform_get_irq()
remoteproc: stm32: manage the get_irq probe defer case
remoteproc: stm32: clear MCU PDDS at firmware start
remoteproc: qcom: q6v5-mss: fixup q6v5_pds_enable error handling
remoteproc: Add a sysfs interface for name
remoteproc: qcom: Move glink_ssr notification after stop
Merge tag 'soundwire-5.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/soundwire
Pull soundwire updates from Vinod Koul:
"This includes DT support thanks to Srini and more work done by Intel
(Pierre) on improving cadence and intel support.
Summary:
- Add DT bindings and DT support in core
- Add debugfs support for soundwire properties
- Improvements on streaming handling to core
- Improved handling of Cadence module
- More updates and improvements to Intel driver"
* tag 'soundwire-5.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/soundwire: (30 commits)
soundwire: stream: make stream name a const pointer
soundwire: Add compute_params callback
soundwire: core: add device tree support for slave devices
dt-bindings: soundwire: add slave bindings
soundwire: bus: set initial value to port_status
soundwire: intel: handle disabled links
soundwire: intel: add debugfs register dump
soundwire: cadence_master: add debugfs register dump
soundwire: add debugfs support
soundwire: intel: remove unused variables
soundwire: intel: move shutdown() callback and don't export symbol
soundwire: cadence_master: add kernel parameter to override interrupt mask
soundwire: intel_init: add kernel module parameter to filter out links
soundwire: cadence_master: fix divider setting in clock register
soundwire: cadence_master: make use of mclk_freq property
soundwire: intel: read mclk_freq property from firmware
soundwire: add new mclk_freq field for properties
soundwire: stream: remove unnecessary variable initializations
soundwire: stream: fix disable sequence
soundwire: include mod_devicetable.h to avoid compiling warnings
...
Merge tag 'modules-for-v5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/jeyu/linux
Pull modules updates from Jessica Yu:
"The main bulk of this pull request introduces a new exported symbol
namespaces feature. The number of exported symbols is increasingly
growing with each release (we're at about 31k exports as of 5.3-rc7)
and we currently have no way of visualizing how these symbols are
"clustered" or making sense of this huge export surface.
Namespacing exported symbols allows kernel developers to more
explicitly partition and categorize exported symbols, as well as more
easily limiting the availability of namespaced symbols to other parts
of the kernel. For starters, we have introduced the USB_STORAGE
namespace to demonstrate the API's usage. I have briefly summarized
the feature and its main motivations in the tag below.
Summary:
- Introduce exported symbol namespaces.
This new feature allows subsystem maintainers to partition and
categorize their exported symbols into explicit namespaces. Module
authors are now required to import the namespaces they need.
Some of the main motivations of this feature include: allowing
kernel developers to better manage the export surface, allow
subsystem maintainers to explicitly state that usage of some
exported symbols should only be limited to certain users (think:
inter-module or inter-driver symbols, debugging symbols, etc), as
well as more easily limiting the availability of namespaced symbols
to other parts of the kernel.
With the module import requirement, it is also easier to spot the
misuse of exported symbols during patch review.
Two new macros are introduced: EXPORT_SYMBOL_NS() and
EXPORT_SYMBOL_NS_GPL(). The API is thoroughly documented in
Documentation/kbuild/namespaces.rst.
- Some small code and kbuild cleanups here and there"
* tag 'modules-for-v5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/jeyu/linux:
module: Remove leftover '#undef' from export header
module: remove unneeded casts in cmp_name()
module: move CONFIG_UNUSED_SYMBOLS to the sub-menu of MODULES
module: remove redundant 'depends on MODULES'
module: Fix link failure due to invalid relocation on namespace offset
usb-storage: export symbols in USB_STORAGE namespace
usb-storage: remove single-use define for debugging
docs: Add documentation for Symbol Namespaces
scripts: Coccinelle script for namespace dependencies.
modpost: add support for generating namespace dependencies
export: allow definition default namespaces in Makefiles or sources
module: add config option MODULE_ALLOW_MISSING_NAMESPACE_IMPORTS
modpost: add support for symbol namespaces
module: add support for symbol namespaces.
export: explicitly align struct kernel_symbol
module: support reading multiple values per modinfo tag
Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm
Pull ARM updates from Russell King:
- fix various clang build and cppcheck issues
- switch ARM to use new common outgoing-CPU-notification code
- add some additional explanation about the boot code
- kbuild "make clean" fixes
- get rid of another "(____ptrval____)", this time for the VDSO code
- avoid treating cache maintenance faults as a write
- add a frame pointer unwinder implementation for clang
- add EDAC support for Aurora L2 cache
- improve robustness of adjust_lowmem_bounds() finding the bounds of
lowmem.
- add reset control for AMBA primecell devices
* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (24 commits)
ARM: 8906/1: drivers/amba: add reset control to amba bus probe
ARM: 8905/1: Emit __gnu_mcount_nc when using Clang 10.0.0 or newer
ARM: 8904/1: skip nomap memblocks while finding the lowmem/highmem boundary
ARM: 8903/1: ensure that usable memory in bank 0 starts from a PMD-aligned address
ARM: 8891/1: EDAC: armada_xp: Add support for more SoCs
ARM: 8888/1: EDAC: Add driver for the Marvell Armada XP SDRAM and L2 cache ECC
ARM: 8892/1: EDAC: Add missing debugfs_create_x32 wrapper
ARM: 8890/1: l2x0: add marvell,ecc-enable property for aurora
ARM: 8889/1: dt-bindings: document marvell,ecc-enable binding
ARM: 8886/1: l2x0: support parity-enable/disable on aurora
ARM: 8885/1: aurora-l2: add defines for parity and ECC registers
ARM: 8887/1: aurora-l2: add prefix to MAX_RANGE_SIZE
ARM: 8902/1: l2c: move cache-aurora-l2.h to asm/hardware
ARM: 8900/1: UNWINDER_FRAME_POINTER implementation for Clang
ARM: 8898/1: mm: Don't treat faults reported from cache maintenance as writes
ARM: 8896/1: VDSO: Don't leak kernel addresses
ARM: 8895/1: visit mach-* and plat-* directories when cleaning
ARM: 8894/1: boot: Replace open-coded nop with macro
ARM: 8893/1: boot: Explain the 8 nops
ARM: 8876/1: fix O= building with CONFIG_FPE_FASTFPE
...
Merge tag 'mips_5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux
Pull MIPS updates from Paul Burton:
"Main MIPS changes:
- boot_mem_map is removed, providing a nice cleanup made possible by
the recent removal of bootmem.
- Some fixes to atomics, in general providing compiler barriers for
smp_mb__{before,after}_atomic plus fixes specific to Loongson CPUs
or MIPS32 systems using cmpxchg64().
- Conversion to the new generic VDSO infrastructure courtesy of
Vincenzo Frascino.
- Removal of undefined behavior in set_io_port_base(), fixing the
behavior of some MIPS kernel configurations when built with recent
clang versions.
- Initial MIPS32 huge page support, functional on at least Ingenic
SoCs.
- pte_special() is now supported for some configurations, allowing
among other things generic fast GUP to be used.
- Miscellaneous fixes & cleanups.
And platform specific changes:
- Major improvements to Ingenic SoC support from Paul Cercueil,
mostly enabled by the inclusion of the new TCU (timer-counter unit)
drivers he's spent a very patient year or so working on. Plus some
fixes for X1000 SoCs from Zhou Yanjie.
- Netgear R6200 v1 systems are now supported by the bcm47xx platform.
- DT updates for BMIPS, Lantiq & Microsemi Ocelot systems"
* tag 'mips_5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux: (89 commits)
MIPS: Detect bad _PFN_SHIFT values
MIPS: Disable pte_special() for MIPS32 with RiXi
MIPS: ralink: deactivate PCI support for SOC_MT7621
mips: compat: vdso: Use legacy syscalls as fallback
MIPS: Drop Loongson _CACHE_* definitions
MIPS: tlbex: Remove cpu_has_local_ebase
MIPS: tlbex: Simplify r3k check
MIPS: Select R3k-style TLB in Kconfig
MIPS: PCI: refactor ioc3 special handling
mips: remove ioremap_cachable
mips/atomic: Fix smp_mb__{before,after}_atomic()
mips/atomic: Fix loongson_llsc_mb() wreckage
mips/atomic: Fix cmpxchg64 barriers
MIPS: Octeon: remove duplicated include from dma-octeon.c
firmware: bcm47xx_nvram: Allow COMPILE_TEST
firmware: bcm47xx_nvram: Correct size_t printf format
MIPS: Treat Loongson Extensions as ASEs
MIPS: Remove dev_err() usage after platform_get_irq()
MIPS: dts: mscc: describe the PTP ready interrupt
MIPS: dts: mscc: describe the PTP register range
...
Merge tag 'gfs2-for-5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2
Pull gfs2 updates from Andreas Gruenbacher:
- Use asynchronous glocks and timeouts to recover from deadlocks during
rename and exchange: the lock ordering constraints the vfs uses are
not sufficient to prevent deadlocks across multiple nodes.
- Add support for IOMAP_ZERO and use iomap_zero_range to replace gfs2
specific code.
- Various other minor fixes and cleanups.
* tag 'gfs2-for-5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2:
gfs2: clear buf_in_tr when ending a transaction in sweep_bh_for_rgrps
gfs2: Improve mmap write vs. truncate consistency
gfs2: Use async glocks for rename
gfs2: create function gfs2_glock_update_hold_time
gfs2: separate holder for rgrps in gfs2_rename
gfs2: Delete an unnecessary check before brelse()
gfs2: Minor PAGE_SIZE arithmetic cleanups
gfs2: Fix recovery slot bumping
gfs2: Fix possible fs name overflows
gfs2: untangle the logic in gfs2_drevalidate
gfs2: Always mark inode dirty in fallocate
gfs2: Minor gfs2_alloc_inode cleanup
gfs2: implement gfs2_block_zero_range using iomap_zero_range
gfs2: Add support for IOMAP_ZERO
gfs2: gfs2_iomap_begin cleanup
Merge tag 'f2fs-for-5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs
Pull f2fs updates from Jaegeuk Kim:
"In this round, we introduced casefolding support in f2fs, and fixed
various bugs in individual features such as IO alignment,
checkpoint=disable, quota, and swapfile.
Enhancement:
- support casefolding w/ enhancement in ext4
- support fiemap for directory
- support FS_IO_GET|SET_FSLABEL
Bug fix:
- fix IO stuck during checkpoint=disable
- avoid infinite GC loop
- fix panic/overflow related to IO alignment feature
- fix livelock in swap file
- fix discard command leak
- disallow dio for atomic_write"
* tag 'f2fs-for-5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (51 commits)
f2fs: add a condition to detect overflow in f2fs_ioc_gc_range()
f2fs: fix to add missing F2FS_IO_ALIGNED() condition
f2fs: fix to fallback to buffered IO in IO aligned mode
f2fs: fix to handle error path correctly in f2fs_map_blocks
f2fs: fix extent corrupotion during directIO in LFS mode
f2fs: check all the data segments against all node ones
f2fs: Add a small clarification to CONFIG_FS_F2FS_FS_SECURITY
f2fs: fix inode rwsem regression
f2fs: fix to avoid accessing uninitialized field of inode page in is_alive()
f2fs: avoid infinite GC loop due to stale atomic files
f2fs: Fix indefinite loop in f2fs_gc()
f2fs: convert inline_data in prior to i_size_write
f2fs: fix error path of f2fs_convert_inline_page()
f2fs: add missing documents of reserve_root/resuid/resgid
f2fs: fix flushing node pages when checkpoint is disabled
f2fs: enhance f2fs_is_checkpoint_ready()'s readability
f2fs: clean up __bio_alloc()'s parameter
f2fs: fix wrong error injection path in inc_valid_block_count()
f2fs: fix to writeout dirty inode during node flush
f2fs: optimize case-insensitive lookups
...
Merge tag 'for_v5.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs
Pull ext2, quota, udf fixes and cleanups from Jan Kara:
- two small quota fixes (in grace time handling and possible missed
accounting of preallocated blocks beyond EOF).
- some ext2 cleanups
- udf fixes for better compatibility with Windows 10 generated media
(named streams, write-protection using domain-identifier, placement
of volume recognition sequence)
- some udf cleanups
* tag 'for_v5.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs:
quota: fix wrong condition in is_quota_modification()
fs-udf: Delete an unnecessary check before brelse()
ext2: Delete an unnecessary check before brelse()
udf: Drop forward function declarations
udf: Verify domain identifier fields
udf: augment UDF permissions on new inodes
udf: Use dynamic debug infrastructure
udf: reduce leakage of blocks related to named streams
udf: prevent allocation beyond UDF partition
quota: fix condition for resetting time limit in do_set_dqblk()
ext2: code cleanup for ext2_free_blocks()
ext2: fix block range in ext2_data_block_valid()
udf: support 2048-byte spacing of VRS descriptors on 4K media
udf: refactor VRS descriptor identification
Merge tag 'ext4_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4
Pull ext4 updates from Ted Ts'o:
"Added new ext4 debugging ioctls to allow userspace to get information
about the state of the extent status cache.
Dropped the workaround for pre-1970 dates which were encoded incorrectly
in pre-4.4 kernels. Since the kernel has correctly generated, and e2fsck
has detected and fixed, this issue for the past four years, it's time to
drop the workaround. (Also, it's not like files with dates in the
distant past were all that common in the first place.)
A lot of miscellaneous bug fixes and cleanups, including some ext4
Documentation fixes. Also included are two minor bug fixes in
fs/unicode"
* tag 'ext4_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4: (21 commits)
unicode: make array 'token' static const, makes object smaller
unicode: Move static keyword to the front of declarations
ext4: add missing bigalloc documentation.
ext4: fix kernel oops caused by spurious casefold flag
ext4: fix integer overflow when calculating commit interval
ext4: use percpu_counters for extent_status cache hits/misses
ext4: fix potential use after free after remounting with noblock_validity
jbd2: add missing tracepoint for reserved handle
ext4: fix punch hole for inline_data file systems
ext4: rework reserved cluster accounting when invalidating pages
ext4: documentation fixes
ext4: treat buffers with write errors as containing valid data
ext4: fix warning inside ext4_convert_unwritten_extents_endio
ext4: set error return correctly when ext4_htree_store_dirent fails
ext4: drop legacy pre-1970 encoding workaround
ext4: add new ioctl EXT4_IOC_GET_ES_CACHE
ext4: add a new ioctl EXT4_IOC_GETSTATE
ext4: add a new ioctl EXT4_IOC_CLEAR_ES_CACHE
jbd2: flush_descriptor(): Do not decrease buffer head's ref count
ext4: remove unnecessary error check
...
Merge tag 'upstream-5.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/ubifs
Pull UBI, UBIFS and JFFS2 updates from Richard Weinberger:
"UBI:
- Be less stupid when placing a fastmap anchor
- Try harder to get an empty PEB in case of contention
- Make ubiblock to warn if image is not a multiple of 512
UBIFS:
- Various fixes in error paths
JFFS2:
- Various fixes in error paths"
* tag 'upstream-5.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/ubifs:
jffs2: Fix memory leak in jffs2_scan_eraseblock() error path
jffs2: Remove jffs2_gc_fetch_page and jffs2_gc_release_page
jffs2: Fix possible null-pointer dereferences in jffs2_add_frag_to_fragtree()
ubi: block: Warn if volume size is not multiple of 512
ubifs: Fix memory leak bug in alloc_ubifs_info() error path
ubifs: Fix memory leak in __ubifs_node_verify_hmac error path
ubifs: Fix memory leak in read_znode() error path
ubi: ubi_wl_get_peb: Increase the number of attempts while getting PEB
ubi: Don't do anchor move within fastmap area
ubifs: Remove redundant assignment to pointer fname