mm/sparse.c: fix memory leak of sparsemap_buf in aligned memory
sparse_buffer_alloc(xsize) hands out memory from sparsemap_buf after
aligning sparsemap_buf to that size. However, the size is at least
PAGE_ALIGN(sizeof(struct page) * PAGES_PER_SECTION) and is usually larger
than PAGE_SIZE.
Also, sparse_buffer_fini() only frees the memory between sparsemap_buf and
sparsemap_buf_end. Since sparsemap_buf may have been advanced by
PTR_ALIGN() first, the aligned-away space before sparsemap_buf is wasted
and nothing will ever touch it.
In our ARM32 platform (without SPARSEMEM_VMEMMAP):
  sparse_buffer_init
    Reserve: d359c000 - d3e9c000 (9M)
  sparse_buffer_alloc
    Alloc:   d3a00000 - d3e80000 (4.5M)
  sparse_buffer_fini
    Free:    d3e80000 - d3e9c000 (~=100k)
The reserved memory between d359c000 - d3a00000 (~=4.4M) is unfreed.
In ARM64 platform (with SPARSEMEM_VMEMMAP):
  sparse_buffer_init
    Reserve: ffffffc07d623000 - ffffffc07f623000 (32M)
  sparse_buffer_alloc
    Alloc:   ffffffc07d800000 - ffffffc07f600000 (30M)
  sparse_buffer_fini
    Free:    ffffffc07f600000 - ffffffc07f623000 (140K)
The reserved memory between ffffffc07d623000 - ffffffc07d800000
(~=1.9M) is unfreed.
Let's explicitly free the redundant aligned memory.
[[email protected]: mark sparse_buffer_free as __meminit]
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Lecopzer Chen <[email protected]>
Signed-off-by: Mark-PK Tsai <[email protected]>
Signed-off-by: Arnd Bergmann <[email protected]>
Cc: YJ Chiang <[email protected]>
Cc: Lecopzer Chen <[email protected]>
Cc: Pavel Tatashin <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Mike Rapoport <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>