7 years agodrivers/scsi/megaraid: remove expensive inline from megasas_return_cmd
Andi Kleen [Mon, 8 May 2017 22:58:53 +0000 (15:58 -0700)]
drivers/scsi/megaraid: remove expensive inline from megasas_return_cmd

Remove an inline from a fairly big function that is used often.  It's
unlikely that inlining it or not makes a lot of difference.

Saves around 8k text in my kernel.

     text    data     bss     dec     hex filename
  9047801 5367568 11116544        25531913        1859609 vmlinux-before-megasas
  9039417 5367568 11116544        25523529        1857549 vmlinux-megasas
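
A rough sketch of the kind of change (the exact declaration in the driver
may differ slightly):

  -inline void megasas_return_cmd(struct megasas_instance *instance,
  -                               struct megasas_cmd *cmd)
  +void megasas_return_cmd(struct megasas_instance *instance,
  +                        struct megasas_cmd *cmd)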

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Andi Kleen <[email protected]>
Cc: Kashyap Desai <[email protected]>
Cc: Sumit Saxena <[email protected]>
Cc: James Bottomley <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agokref: remove WARN_ON for NULL release functions
Andi Kleen [Mon, 8 May 2017 22:58:50 +0000 (15:58 -0700)]
kref: remove WARN_ON for NULL release functions

The kref functions check for NULL release functions.  This WARN_ON seems
rather pointless.  We will eventually release and then just crash
nicely.  It is also somewhat expensive because these functions are
inlined in a lot of places.  Removing the WARN_ONs saves around 2.3k in
this kernel (likely more in others with more drivers).

     text    data     bss     dec     hex filename
  9083992 5367600 11116544        25568136        1862388 vmlinux-before-load-avg
  9070166 5367600 11116544        25554310        185ed86 vmlinux-load-avg
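
A rough before/after sketch of the change, assuming the refcount_t-based
kref of this kernel (illustrative, not necessarily the exact upstream hunk):

  static inline int kref_put(struct kref *kref,
                             void (*release)(struct kref *kref))
  {
          /* the WARN_ON(release == NULL) that used to be here is dropped */
          if (refcount_dec_and_test(&kref->refcount)) {
                  release(kref);
                  return 1;
          }
          return 0;
  }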

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Andi Kleen <[email protected]>
Acked-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agotreewide: decouple cacheflush.h and set_memory.h
Laura Abbott [Mon, 8 May 2017 22:58:47 +0000 (15:58 -0700)]
treewide: decouple cacheflush.h and set_memory.h

Now that all call sites have been converted to include set_memory.h
directly, completely decouple cacheflush.h and set_memory.h.

[[email protected]: kprobes/x86: merge fix for set_memory.h decoupling]
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Laura Abbott <[email protected]>
Signed-off-by: Stephen Rothwell <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agodrivers/staging/media/atomisp/pci/atomisp2: use set_memory.h
Andrew Morton [Mon, 8 May 2017 22:58:44 +0000 (15:58 -0700)]
drivers/staging/media/atomisp/pci/atomisp2: use set_memory.h

Cc: Laura Abbott <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agodrivers/video/fbdev/vermilion/vermilion.c: use set_memory.h header
Laura Abbott [Mon, 8 May 2017 22:58:41 +0000 (15:58 -0700)]
drivers/video/fbdev/vermilion/vermilion.c: use set_memory.h header

set_memory_* functions have moved to set_memory.h.  Switch to this
explicitly.
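
The typical hunk in these conversion patches looks roughly like this
(illustrative; whether the <asm/cacheflush.h> include remains depends on
the file):

   #include <asm/cacheflush.h>
  +#include <asm/set_memory.h>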

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Laura Abbott <[email protected]>
Acked-by: Bartlomiej Zolnierkiewicz <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agodrivers/misc/sram-exec.c: use set_memory.h header
Laura Abbott [Mon, 8 May 2017 22:58:38 +0000 (15:58 -0700)]
drivers/misc/sram-exec.c: use set_memory.h header

set_memory_* functions have moved to set_memory.h.  Switch to this
explicitly.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Laura Abbott <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agoalsa: use set_memory.h header
Laura Abbott [Mon, 8 May 2017 22:58:35 +0000 (15:58 -0700)]
alsa: use set_memory.h header

set_memory_* functions have moved to set_memory.h.  Switch to this
explicitly.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Laura Abbott <[email protected]>
Acked-by: Takashi Iwai <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agokernel/power/snapshot.c: use set_memory.h header
Laura Abbott [Mon, 8 May 2017 22:58:32 +0000 (15:58 -0700)]
kernel/power/snapshot.c: use set_memory.h header

set_memory_* functions have moved to set_memory.h.  Switch to this
explicitly.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Laura Abbott <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agokernel/module.c: use set_memory.h header
Laura Abbott [Mon, 8 May 2017 22:58:29 +0000 (15:58 -0700)]
kernel/module.c: use set_memory.h header

set_memory_* functions have moved to set_memory.h.  Switch to this
explicitly.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Laura Abbott <[email protected]>
Acked-by: Jessica Yu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agoinclude/linux/filter.h: use set_memory.h header
Laura Abbott [Mon, 8 May 2017 22:58:26 +0000 (15:58 -0700)]
include/linux/filter.h: use set_memory.h header

set_memory_* functions have moved to set_memory.h.  Switch to this
explicitly.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Laura Abbott <[email protected]>
Acked-by: Daniel Borkmann <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agodrivers/watchdog/hpwdt.c: use set_memory.h header
Laura Abbott [Mon, 8 May 2017 22:58:23 +0000 (15:58 -0700)]
drivers/watchdog/hpwdt.c: use set_memory.h header

set_memory_* functions have moved to set_memory.h.  Switch to this
explicitly.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Laura Abbott <[email protected]>
Acked-by: Guenter Roeck <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agodrivers/hwtracing/intel_th/msu.c: use set_memory.h header
Laura Abbott [Mon, 8 May 2017 22:58:20 +0000 (15:58 -0700)]
drivers/hwtracing/intel_th/msu.c: use set_memory.h header

set_memory_* functions have moved to set_memory.h.  Switch to this
explicitly.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Laura Abbott <[email protected]>
Acked-by: Alexander Shishkin <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agodrm: use set_memory.h header
Laura Abbott [Mon, 8 May 2017 22:58:17 +0000 (15:58 -0700)]
drm: use set_memory.h header

set_memory_* functions have moved to set_memory.h.  Switch to this
explicitly.

[[email protected]: track drivers/gpu/drm/i915/i915_gem_gtt.c linux-next changes]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Laura Abbott <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agoagp: use set_memory.h header
Laura Abbott [Mon, 8 May 2017 22:58:14 +0000 (15:58 -0700)]
agp: use set_memory.h header

set_memory_* functions have moved to set_memory.h.  Switch to this
explicitly.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Laura Abbott <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agox86: use set_memory.h header
Laura Abbott [Mon, 8 May 2017 22:58:11 +0000 (15:58 -0700)]
x86: use set_memory.h header

set_memory_* functions have moved to set_memory.h.  Switch to this
explicitly.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Laura Abbott <[email protected]>
Acked-by: Ingo Molnar <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agos390: use set_memory.h header
Laura Abbott [Mon, 8 May 2017 22:58:08 +0000 (15:58 -0700)]
s390: use set_memory.h header

set_memory_* functions have moved to set_memory.h.  Switch to this
explicitly.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Laura Abbott <[email protected]>
Acked-by: Heiko Carstens <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agoarm64: use set_memory.h header
Laura Abbott [Mon, 8 May 2017 22:58:05 +0000 (15:58 -0700)]
arm64: use set_memory.h header

The set_memory_* functions have moved to set_memory.h.  Use that header
explicitly.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Laura Abbott <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agoarm: use set_memory.h header
Laura Abbott [Mon, 8 May 2017 22:58:02 +0000 (15:58 -0700)]
arm: use set_memory.h header

set_memory_* functions have moved to set_memory.h.  Switch to this
explicitly.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Laura Abbott <[email protected]>
Acked-by: Russell King <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agotreewide: move set_memory_* functions away from cacheflush.h
Laura Abbott [Mon, 8 May 2017 22:57:59 +0000 (15:57 -0700)]
treewide: move set_memory_* functions away from cacheflush.h

Patch series "set_memory_* functions header refactor", v3.

The set_memory_* APIs came out of a desire to have a better way to
change memory attributes.  Many of these attributes were linked to cache
functionality so the prototypes were put in cacheflush.h.  These days,
the APIs have grown and have a much wider use than just cache APIs.  To
support this growth, split off set_memory_* and friends into a separate
header file to avoid growing cacheflush.h for APIs that have nothing to
do with caches.
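
For reference, the interfaces being moved are of this shape (the x86
prototypes are shown as an example):

  /* previously declared in <asm/cacheflush.h>, now in <asm/set_memory.h> */
  int set_memory_ro(unsigned long addr, int numpages);
  int set_memory_rw(unsigned long addr, int numpages);
  int set_memory_x(unsigned long addr, int numpages);
  int set_memory_nx(unsigned long addr, int numpages);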

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Laura Abbott <[email protected]>
Acked-by: Russell King <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agotreewide: spelling: correct diffrent[iate] and banlance typos
Joe Perches [Mon, 8 May 2017 22:57:56 +0000 (15:57 -0700)]
treewide: spelling: correct diffrent[iate] and banlance typos

Add these misspellings to scripts/spelling.txt too

Link: http://lkml.kernel.org/r/962aace119675e5fe87be2a88ddac1a5486f8e60.1490931810.git.joe@perches.com
Signed-off-by: Joe Perches <[email protected]>
Acked-by: Mauro Carvalho Chehab <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agoscripts/spelling.txt: add "intialise(d)" pattern and fix typo instances
Masahiro Yamada [Mon, 8 May 2017 22:57:53 +0000 (15:57 -0700)]
scripts/spelling.txt: add "intialise(d)" pattern and fix typo instances

Fix typos and add the following to the scripts/spelling.txt:

  intialisation||initialisation
  intialised||initialised
  intialise||initialise

This commit does not intend to change the British spelling itself.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Masahiro Yamada <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agoscripts/spelling.txt: add regsiter -> register spelling mistake
Stephen Boyd [Mon, 8 May 2017 22:57:50 +0000 (15:57 -0700)]
scripts/spelling.txt: add regsiter -> register spelling mistake

This typo is quite common.  Fix it and add it to the spelling file so
that checkpatch catches it earlier.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Stephen Boyd <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agoscripts/spelling.txt: add "memory" pattern and fix typos
Stephen Boyd [Mon, 8 May 2017 22:57:47 +0000 (15:57 -0700)]
scripts/spelling.txt: add "memory" pattern and fix typos

Fix typos and add the following to the scripts/spelling.txt:

      momery||memory

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Stephen Boyd <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agomm, vmalloc: use __GFP_HIGHMEM implicitly
Michal Hocko [Mon, 8 May 2017 22:57:44 +0000 (15:57 -0700)]
mm, vmalloc: use __GFP_HIGHMEM implicitly

__vmalloc* allows users to provide gfp flags for the underlying
allocation.  This API is quite popular

  $ git grep "=[[:space:]]__vmalloc\|return[[:space:]]*__vmalloc" | wc -l
  77

The only problem is that many people are not aware that they really want
to give __GFP_HIGHMEM along with other flags, because there is really no
reason to consume precious low memory on CONFIG_HIGHMEM systems for pages
which are mapped to the kernel vmalloc space.  About half of users don't
use this flag, though.  This signals that we make the API unnecessarily
complex.

This patch simply uses __GFP_HIGHMEM implicitly when allocating pages to
be mapped to the vmalloc space.  Current users which add __GFP_HIGHMEM
are simplified and drop the flag.
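
A conversion of an existing user then looks roughly like this (call site
and variable names are illustrative):

  -buf = __vmalloc(size, GFP_KERNEL | __GFP_HIGHMEM, PAGE_KERNEL);
  +buf = __vmalloc(size, GFP_KERNEL, PAGE_KERNEL);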

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Michal Hocko <[email protected]>
Reviewed-by: Matthew Wilcox <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Cristopher Lameter <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agomm, swap: use kvzalloc to allocate some swap data structures
Huang Ying [Mon, 8 May 2017 22:57:40 +0000 (15:57 -0700)]
mm, swap: use kvzalloc to allocate some swap data structures

Now vzalloc() is used in swap code to allocate various data structures,
such as swap cache, swap slots cache, cluster info, etc., because the
size may be too large on some systems, so normal kzalloc() may fail.
But kzalloc() has some advantages, for example, less memory
fragmentation, less TLB pressure, etc.  So change the data structure
allocation in swap code to use kvzalloc(), which will try kzalloc()
first and fall back to vzalloc() if kzalloc() fails.

In general, although kmalloc() will reduce the number of available
high-order pages in the short term, vmalloc() will cause more pain for
memory fragmentation in the long term.  And the swap data structure
allocations changed in this patch are expected to be long-term
allocations.

From Dave Hansen:
 "for example, we have a two-page data structure. vmalloc() takes two
  effectively random order-0 pages, probably from two different 2M pages
  and pins them. That "kills" two 2M pages. kmalloc(), allocating two
  *contiguous* pages, will not cross a 2M boundary. That means it will
  only "kill" the possibility of a single 2M page. More 2M pages == less
  fragmentation."

The allocations in this patch occur at swapon time, which is usually
done during system boot, so usually we have a good chance of allocating
the contiguous pages successfully.

The allocation for swap_map[] in struct swap_info_struct is not changed,
because that is usually quite large and vmalloc_to_page() is used for
it.  That makes it a little harder to change.
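
An illustrative sketch of one such conversion (variable names are
illustrative, not an exact hunk):

  -cluster_info = vzalloc(nr_clusters * sizeof(*cluster_info));
  +cluster_info = kvzalloc(nr_clusters * sizeof(*cluster_info), GFP_KERNEL);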

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Huang Ying <[email protected]>
Acked-by: Tim Chen <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Acked-by: Rik van Riel <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Minchan Kim <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agodrivers/md/bcache/super.c: use kvmalloc
Michal Hocko [Mon, 8 May 2017 22:57:37 +0000 (15:57 -0700)]
drivers/md/bcache/super.c: use kvmalloc

bcache_device_init uses kmalloc for small requests and vmalloc for those
which are larger than 64 pages.  This alone is a strange criterion.
Moreover kmalloc can fall back to vmalloc on failure.  Let's simply
use kvmalloc instead, as it knows how to handle the fallback properly.
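
A sketch of the simplification (the size cutoff and variable names are
illustrative):

  -buf = (size <= 64 * PAGE_SIZE) ? kmalloc(size, GFP_KERNEL) : vmalloc(size);
  +buf = kvmalloc(size, GFP_KERNEL);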

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Michal Hocko <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Cc: Kent Overstreet <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agodrivers/md/dm-ioctl.c: use kvmalloc rather than opencoded variant
Michal Hocko [Mon, 8 May 2017 22:57:34 +0000 (15:57 -0700)]
drivers/md/dm-ioctl.c: use kvmalloc rather than opencoded variant

copy_params uses kmalloc with a vmalloc fallback.  We already have a
helper for that - kvmalloc.  This caller requires GFP_NOIO semantics, so
it hasn't been converted along with many others by the previous patches.
All we need to achieve these semantics is to use the scoped
memalloc_noio_{save,restore} around kvmalloc.
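
A minimal sketch of the scoped-NOIO pattern described above (variable
names are illustrative):

  unsigned int noio_flag;

  noio_flag = memalloc_noio_save();
  buf = kvmalloc(size, GFP_KERNEL);
  memalloc_noio_restore(noio_flag);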

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Michal Hocko <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Cc: Mikulas Patocka <[email protected]>
Cc: Mike Snitzer <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agonet: use kvmalloc with __GFP_REPEAT rather than open coded variant
Michal Hocko [Mon, 8 May 2017 22:57:31 +0000 (15:57 -0700)]
net: use kvmalloc with __GFP_REPEAT rather than open coded variant

fq_alloc_node, alloc_netdev_mqs and netif_alloc* open code kmalloc with
vmalloc fallback.  Use the kvmalloc variant instead.  Keep the
__GFP_REPEAT flag based on explanation from Eric:

 "At the time, tests on the hardware I had in my labs showed that
  vmalloc() could deliver pages spread all over the memory and that was
  a small penalty (once memory is fragmented enough, not at boot time)"

The way the code is constructed means, however, that we prefer to go
and hit the OOM killer before we fall back to vmalloc for requests
<=32kB (with 4kB pages) in the current code.  This is rather disruptive
for something that can be achieved with the fallback.  On the other hand
__GFP_REPEAT doesn't have any useful semantic for these requests.  So
the effect of this patch is that requests which fit into 32kB will fall
back to vmalloc more easily now.
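
The shape of the conversion, with the flag preserved, is roughly this
(illustrative, not an exact hunk from fq_alloc_node or netif_alloc*):

  -ptr = kzalloc(size, GFP_KERNEL | __GFP_REPEAT | __GFP_NOWARN);
  -if (!ptr)
  -        ptr = vzalloc(size);
  +ptr = kvzalloc(size, GFP_KERNEL | __GFP_REPEAT);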

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Michal Hocko <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Cc: Eric Dumazet <[email protected]>
Cc: David Miller <[email protected]>
Cc: Shakeel Butt <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agotreewide: use kv[mz]alloc* rather than opencoded variants
Michal Hocko [Mon, 8 May 2017 22:57:27 +0000 (15:57 -0700)]
treewide: use kv[mz]alloc* rather than opencoded variants

There are many code paths opencoding kvmalloc.  Let's use the helper
instead.  The main difference to kvmalloc is that those users are
usually not considering all the aspects of the memory allocator.  E.g.
allocation requests <= 32kB (with 4kB pages) basically never fail and
will invoke the OOM killer to satisfy the allocation.  This sounds too
disruptive for something that has a reasonable fallback - the vmalloc.
On the other hand those requests might fall back to vmalloc even when
the memory allocator would have succeeded after several more
reclaim/compaction attempts.  There is no guarantee something like that
happens though.

This patch converts many of those places to kv[mz]alloc* helpers because
they are more conservative.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Michal Hocko <[email protected]>
Reviewed-by: Boris Ostrovsky <[email protected]> # Xen bits
Acked-by: Kees Cook <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Acked-by: Andreas Dilger <[email protected]> # Lustre
Acked-by: Christian Borntraeger <[email protected]> # KVM/s390
Acked-by: Dan Williams <[email protected]> # nvdim
Acked-by: David Sterba <[email protected]> # btrfs
Acked-by: Ilya Dryomov <[email protected]> # Ceph
Acked-by: Tariq Toukan <[email protected]> # mlx4
Acked-by: Leon Romanovsky <[email protected]> # mlx5
Cc: Martin Schwidefsky <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Herbert Xu <[email protected]>
Cc: Anton Vorontsov <[email protected]>
Cc: Colin Cross <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Ben Skeggs <[email protected]>
Cc: Kent Overstreet <[email protected]>
Cc: Santosh Raspatur <[email protected]>
Cc: Hariprasad S <[email protected]>
Cc: Yishai Hadas <[email protected]>
Cc: Oleg Drokin <[email protected]>
Cc: "Yan, Zheng" <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Eric Dumazet <[email protected]>
Cc: David Miller <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agofs/xattr.c: zero out memory copied to userspace in getxattr
Michal Hocko [Mon, 8 May 2017 22:57:24 +0000 (15:57 -0700)]
fs/xattr.c: zero out memory copied to userspace in getxattr

getxattr uses vmalloc to allocate memory if kzalloc fails.  This buffer
is filled by vfs_getxattr and then copied to userspace.  vmalloc,
however, doesn't zero out the memory, so if the specific implementation
of the xattr handler is sloppy we can theoretically expose kernel
memory.  There is no real sign this is actually the case, but let's make
sure this will not happen and use vzalloc instead.
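
The fix itself is a one-line switch to the zeroing variant (sketch):

  -kvalue = vmalloc(size);
  +kvalue = vzalloc(size);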

Fixes: 779302e67835 ("fs/xattr.c:getxattr(): improve handling of allocation failures")
Link: http://lkml.kernel.org/r/[email protected]
Acked-by: Kees Cook <[email protected]>
Reported-by: Vlastimil Babka <[email protected]>
Signed-off-by: Michal Hocko <[email protected]>
Cc: <[email protected]> [3.6+]
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agonet/ipv6/ila/ila_xlat.c: simplify a strange allocation pattern
Michal Hocko [Mon, 8 May 2017 22:57:21 +0000 (15:57 -0700)]
net/ipv6/ila/ila_xlat.c: simplify a strange allocation pattern

alloc_ila_locks seems to have been copied from the alloc_bucket_locks
allocation pattern, which is quite unusual.  The default allocation size
is 320 * sizeof(spinlock_t), which is sub-page unless lockdep is enabled,
in which case the performance benefit is really questionable and not
worth the subtle code IMHO.  Also note that the context in which we call
ila_init_net (modprobe or a task creating a net namespace) has to be
properly configured.

Let's just simplify the code and use the kvmalloc helper, which is a
transparent way to use kmalloc with a vmalloc fallback.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Michal Hocko <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Cc: Tom Herbert <[email protected]>
Cc: Eric Dumazet <[email protected]>
Cc: David Miller <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agolib/rhashtable.c: simplify a strange allocation pattern
Michal Hocko [Mon, 8 May 2017 22:57:18 +0000 (15:57 -0700)]
lib/rhashtable.c: simplify a strange allocation pattern

The alloc_bucket_locks allocation pattern is quite unusual.  We are
preferring vmalloc when CONFIG_NUMA is enabled.  The rationale is that
vmalloc will respect the memory policy of the current process and so the
backing memory will get distributed over multiple nodes if the requester
is configured properly.  At least that is the intention; in reality
rhashtable is shrunk and expanded from a kernel worker so no mempolicy
can be assumed.

Let's just simplify the code and use the kvmalloc helper, which is a
transparent way to use kmalloc with a vmalloc fallback, when the caller
is allowed to block, and plain kmalloc with the given gfp flag otherwise.
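
A simplified sketch of the resulting logic (not the exact upstream code):

  if (gfpflags_allow_blocking(gfp))
          tbl->locks = kvmalloc(size * sizeof(spinlock_t), gfp);
  else
          tbl->locks = kmalloc_array(size, sizeof(spinlock_t),
                                     gfp | __GFP_NOWARN | __GFP_NORETRY);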

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Michal Hocko <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Cc: Tom Herbert <[email protected]>
Cc: Eric Dumazet <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agomm: support __GFP_REPEAT in kvmalloc_node for >32kB
Michal Hocko [Mon, 8 May 2017 22:57:15 +0000 (15:57 -0700)]
mm: support __GFP_REPEAT in kvmalloc_node for >32kB

vhost code uses __GFP_REPEAT when allocating vhost_virtqueue resp.
vhost_vsock because it would really like to prefer kmalloc to the
vmalloc fallback - see 23cc5a991c7a ("vhost-net: extend device
allocation to vmalloc") for more context.  Michael Tsirkin has also
noted:

 "__GFP_REPEAT overhead is during allocation time. Using vmalloc means
  all accesses are slowed down. Allocation is not on data path, accesses
  are."

The same applies to other vhost_kvzalloc users.

Let's teach kvmalloc_node to handle __GFP_REPEAT properly.  There are
two things to be careful about.  First, we should prevent the OOM killer
from being invoked and so have to involve __GFP_NORETRY by default, and
secondly, override __GFP_REPEAT for !costly order requests because
__GFP_REPEAT is ignored for !costly orders.

Supporting __GFP_REPEAT-like semantics for !costly requests is possible,
but it would require changes in the page allocator.  This is out of
scope of this patch.

This patch shouldn't introduce any functional change.
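
A rough sketch of the resulting flag handling in kvmalloc_node (not the
exact upstream code):

  gfp_t kmalloc_flags = flags;

  if (size > PAGE_SIZE) {
          kmalloc_flags |= __GFP_NOWARN;

          /*
           * __GFP_REPEAT is ignored for !costly orders, so make the kmalloc
           * attempt fail quickly (and fall back to vmalloc) unless the
           * request is both costly and marked __GFP_REPEAT.
           */
          if (!(kmalloc_flags & __GFP_REPEAT) ||
              (size <= PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER))
                  kmalloc_flags |= __GFP_NORETRY;
  }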

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Michal Hocko <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Acked-by: Michael S. Tsirkin <[email protected]>
Cc: David Miller <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agomm, vmalloc: properly track vmalloc users
Michal Hocko [Mon, 8 May 2017 22:57:12 +0000 (15:57 -0700)]
mm, vmalloc: properly track vmalloc users

__vmalloc_node_flags used to be static inline but this was changed by
"mm: introduce kv[mz]alloc helpers" because kvmalloc_node needs to use
it as well and the code is outside of the vmalloc proper.  I hadn't
realized that changing this would lead to a subtle bug, though.  The
function is responsible for tracking the caller as well.  This caller is
then printed by /proc/vmallocinfo.  If __vmalloc_node_flags is not
inline then we would only get direct users of __vmalloc_node_flags as
callers (e.g.  v[mz]alloc), which reduces the usefulness of this
debugging feature considerably.  It simply doesn't help to see that the
given range belongs to vmalloc as a caller:

  0xffffc90002c79000-0xffffc90002c7d000   16384 vmalloc+0x16/0x18 pages=3 vmalloc N0=3
  0xffffc90002c81000-0xffffc90002c85000   16384 vmalloc+0x16/0x18 pages=3 vmalloc N1=3
  0xffffc90002c8d000-0xffffc90002c91000   16384 vmalloc+0x16/0x18 pages=3 vmalloc N1=3
  0xffffc90002c95000-0xffffc90002c99000   16384 vmalloc+0x16/0x18 pages=3 vmalloc N1=3

We really want to catch the _caller_ of the vmalloc function.  Fix this
issue by making __vmalloc_node_flags static inline again.
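
Roughly, keeping the wrapper inline is what lets __builtin_return_address(0)
resolve to the real caller (a sketch of the idea, not necessarily the
exact upstream definition):

  static inline void *__vmalloc_node_flags(unsigned long size, int node,
                                           gfp_t flags)
  {
          return __vmalloc_node(size, 1, flags, PAGE_KERNEL, node,
                                __builtin_return_address(0));
  }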

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Michal Hocko <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agomm: introduce kv[mz]alloc helpers
Michal Hocko [Mon, 8 May 2017 22:57:09 +0000 (15:57 -0700)]
mm: introduce kv[mz]alloc helpers

Patch series "kvmalloc", v5.

There are many open coded kmalloc with vmalloc fallback instances in the
tree.  Most of them are not careful enough or simply do not care about
the underlying semantics of the kmalloc/page allocator, which means that
a) some vmalloc fallbacks are basically unreachable because the kmalloc
part will keep retrying until it succeeds, and b) the page allocator can
invoke really disruptive steps like the OOM killer to move forward,
which doesn't sound appropriate when we consider that the vmalloc
fallback is available.

As can be seen, implementing kvmalloc requires quite an intimate
knowledge of the page allocator and the memory reclaim internals, which
strongly suggests that a helper should be implemented in the memory
subsystem proper.

Most callers I could find have been converted to use the helper
instead.  This is patch 6.  There are some more relying on __GFP_REPEAT
in the networking stack which I have converted as well, and Eric Dumazet
was not opposed [2] to converting them.

[1] http://lkml.kernel.org/r/20170130094940[email protected]
[2] http://lkml.kernel.org/r/1485273626[email protected]

This patch (of 9):

Using kmalloc with the vmalloc fallback for larger allocations is a
common pattern in the kernel code.  Yet we do not have any common helper
for that and so users have invented their own helpers.  Some of them are
really creative when doing so.  Let's just add kv[mz]alloc and make sure
it is implemented properly.  This implementation makes sure not to
create large memory pressure for > PAGE_SIZE requests (__GFP_NORETRY)
and also not to warn about allocation failures.  This also rules out the
OOM killer as the vmalloc is a more appropriate fallback than a
disruptive user visible action.

This patch also changes some existing users and removes helpers which
are specific to them.  In some cases this is not possible (e.g.
ext4_kvmalloc, libcfs_kvzalloc) because those seem to be broken and
require GFP_NO{FS,IO} context which is not vmalloc compatible in general
(note that the page table allocation is GFP_KERNEL).  Those need to be
fixed separately.

While we are at it, document the unsupported gfp mask of __vmalloc{_node}
because there seems to be a lot of confusion out there.
kvmalloc_node will warn about flags that are not GFP_KERNEL compatible
(i.e. not a superset of it) to catch new abusers.  Existing ones would
have to die slowly.
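
A minimal sketch of the helper's overall shape (illustrative, not the
exact upstream implementation):

  void *kvmalloc_node(size_t size, gfp_t flags, int node)
  {
          gfp_t kmalloc_flags = flags;
          void *ret;

          /* don't apply OOM-killer-level effort to the kmalloc attempt */
          if (size > PAGE_SIZE)
                  kmalloc_flags |= __GFP_NOWARN | __GFP_NORETRY;

          ret = kmalloc_node(size, kmalloc_flags, node);
          if (ret || size <= PAGE_SIZE)
                  return ret;

          return __vmalloc_node_flags(size, node, flags);
  }

  /* the zeroing variant simply adds __GFP_ZERO */
  static inline void *kvzalloc(size_t size, gfp_t flags)
  {
          return kvmalloc(size, flags | __GFP_ZERO);
  }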

[[email protected]: f2fs fixup]
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Michal Hocko <[email protected]>
Signed-off-by: Stephen Rothwell <[email protected]>
Reviewed-by: Andreas Dilger <[email protected]> [ext4 part]
Acked-by: Vlastimil Babka <[email protected]>
Cc: John Hubbard <[email protected]>
Cc: David Miller <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agosysv,ipc: cacheline align kern_ipc_perm
Davidlohr Bueso [Mon, 8 May 2017 22:57:06 +0000 (15:57 -0700)]
sysv,ipc: cacheline align kern_ipc_perm

Assign 'struct kern_ipc_perm' its own cacheline to avoid false sharing
with sysv ipc calls.

While the structure itself is rather read-mostly throughout the lifespan
of ipc, the spinlock causes most of the invalidations.  One example is
commit 31a7c4746e9 ("ipc/sem.c: cacheline align the ipc spinlock for
semaphores").  Therefore, extend this to all ipc.

The effect of cacheline alignment on sems can be seen in sembench, which
deals mostly with semtimedop wait/wakes: raw throughput (worker loops)
improves by between 8 and 12% on a 24-core x86 with over 4 threads.
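
The change itself is essentially an annotation on the structure (sketch):

  struct kern_ipc_perm {
          spinlock_t      lock;
          /* ... the remaining, mostly read-mostly, members ... */
  } ____cacheline_aligned_in_smp;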

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Davidlohr Bueso <[email protected]>
Cc: Manfred Spraul <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agoipc/shm: some shmat cleanups
Davidlohr Bueso [Mon, 8 May 2017 22:57:03 +0000 (15:57 -0700)]
ipc/shm: some shmat cleanups

Clean up early flag and address some minutia.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Davidlohr Bueso <[email protected]>
Cc: Manfred Spraul <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agoinitramfs: use vfs_stat/lstat directly
Arnd Bergmann [Mon, 8 May 2017 22:57:00 +0000 (15:57 -0700)]
initramfs: use vfs_stat/lstat directly

sys_newlstat is a system call implementation that is meant for user
space, and it copies the kernel-internal data structure to the user
format, which is not needed for in-kernel users.

Further, as we rearrange the system call implementation so we can extend
it with 64-bit time_t, the prototype for sys_newlstat changes.

This changes the initramfs code to use vfs_lstat directly, to get it out
of the way of the time_t changes, and make it slightly more efficient in
the process.  Along the same lines we also replace sys_stat and
sys_stat64 with vfs_stat.
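
The conversions are of this shape (sketch; the exact casts and field
names in init/initramfs.c may differ):

  -struct stat st;
  -if (!sys_newlstat(path, &st) && (st.st_mode ^ fmode) & S_IFMT) {
  +struct kstat st;
  +if (!vfs_lstat(path, &st) && (st.mode ^ fmode) & S_IFMT) {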

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnd Bergmann <[email protected]>
Cc: Alexander Viro <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agoinitramfs: provide a way to ignore image provided by bootloader
Daniel Thompson [Mon, 8 May 2017 22:56:57 +0000 (15:56 -0700)]
initramfs: provide a way to ignore image provided by bootloader

Many "embedded" architectures provide CMDLINE_FORCE to allow the kernel
to override the command line provided by an inflexible bootloader.
However there is currrently no way for the kernel to override the
initramfs image provided by the bootloader meaning there are still ways
for bootloaders to make things difficult for us.

Fix this by introducing INITRAMFS_FORCE which can prevent the kernel
from loading the bootloader supplied image.

We use CMDLINE_FORCE (and its friend CMDLINE_EXTEND) to imply that the
system has an inflexible bootloader.  This allows us to avoid presenting
this config option to users of systems where inflexible bootloaders
aren't usually a problem.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Daniel Thompson <[email protected]>
Cc: Al Viro <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agolib/zlib_inflate/inftrees.c: fix potential buffer overflow
Guenter Roeck [Mon, 8 May 2017 22:56:54 +0000 (15:56 -0700)]
lib/zlib_inflate/inftrees.c: fix potential buffer overflow

smatch says:

  WARNING: please, no spaces at the start of a line
  #30: FILE: lib/zlib_inflate/inftrees.c:112:
  +    for (min = 1; min < MAXBITS; min++)$

  total: 0 errors, 1 warnings, 8 lines checked

NOTE: For some of the reported defects, checkpatch may be able to
      mechanically convert to the typical style using --fix or --fix-inplace.

./patches/zlib-inflate-fix-potential-buffer-overflow.patch has style problems, please review.

NOTE: If any of the errors are false positives, please report
      them to the maintainer, see CHECKPATCH in MAINTAINERS.

Please run checkpatch prior to sending patches

Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agolib/fault-inject.c: use correct check for interrupts
Dmitry Vyukov [Mon, 8 May 2017 22:56:51 +0000 (15:56 -0700)]
lib/fault-inject.c: use correct check for interrupts

in_interrupt() also returns true when bh is disabled in task context.
That's not what fail_task() wants to check.  Use the new in_task()
predicate that does the right thing.
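
The check in fail_task() becomes roughly (sketch):

  -return !in_interrupt() && task->make_it_fail;
  +return in_task() && task->make_it_fail;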

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Dmitry Vyukov <[email protected]>
Reviewed-by: Akinobu Mita <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agokcov: simplify interrupt check
Dmitry Vyukov [Mon, 8 May 2017 22:56:48 +0000 (15:56 -0700)]
kcov: simplify interrupt check

in_interrupt() semantics are confusing and wrong for most users as it
also returns true when bh is disabled.  Thus we open coded a proper
check for interrupts in __sanitizer_cov_trace_pc() with a lengthy
explanatory comment.

Use the new in_task() predicate instead.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Dmitry Vyukov <[email protected]>
Cc: Kefeng Wang <[email protected]>
Cc: James Morse <[email protected]>
Cc: Alexander Popov <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Cc: Hillf Danton <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agotaskstats: add e/u/stime for TGID command
Zhang Xiao [Mon, 8 May 2017 22:56:45 +0000 (15:56 -0700)]
taskstats: add e/u/stime for TGID command

The elapsed time, user CPU time and system CPU time for the thread group
status request are presently left at zero.  Fill these in.
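
A rough sketch of the kind of per-thread accumulation this adds for the
thread-group case (helper and field names are meant as an illustration of
the taskstats interface, not an exact hunk):

  u64 utime, stime;

  task_cputime(tsk, &utime, &stime);
  stats->ac_utime += div_u64(utime, NSEC_PER_USEC);
  stats->ac_stime += div_u64(stime, NSEC_PER_USEC);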

[[email protected]: run ktime_get_ns() a single time]
[[email protected]: include linux/sched/cputime.h for task_cputime()]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Zhang Xiao <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: Oleg Nesterov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agopidns: expose task pid_ns_for_children to userspace
Kirill Tkhai [Mon, 8 May 2017 22:56:41 +0000 (15:56 -0700)]
pidns: expose task pid_ns_for_children to userspace

pid_ns_for_children set by a task is known only to the task itself, and
it's impossible to identify it from outside.

It's a big problem for checkpoint/restore software like CRIU, because it
can't correctly handle tasks that do setns(CLONE_NEWPID) in the process
of their work.

This patch solves the problem, and it exposes pid_ns_for_children in the
ns directory in the standard way with the name "pid_for_children":

  ~# ls /proc/5531/ns -l | grep pid
  lrwxrwxrwx 1 root root 0 Jan 14 16:38 pid -> pid:[4026531836]
  lrwxrwxrwx 1 root root 0 Jan 14 16:38 pid_for_children -> pid:[4026532286]

Link: http://lkml.kernel.org/r/149201123914.6007.2187327078064239572.stgit@localhost.localdomain
Signed-off-by: Kirill Tkhai <[email protected]>
Cc: Andrei Vagin <[email protected]>
Cc: Andreas Gruenbacher <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Michael Kerrisk <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Oleg Nesterov <[email protected]>
Cc: Paul Moore <[email protected]>
Cc: Eric Biederman <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Serge Hallyn <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agons: allow ns_entries to have custom symlink content
Kirill Tkhai [Mon, 8 May 2017 22:56:38 +0000 (15:56 -0700)]
ns: allow ns_entries to have custom symlink content

Patch series "Expose task pid_ns_for_children to userspace".

pid_ns_for_children set by a task is known only to the task itself, and
it's impossible to identify it from outside.

It's a big problem for checkpoint/restore software like CRIU, because it
can't correctly handle tasks that do setns(CLONE_NEWPID) in the process
of their work.  If they have a custom pid_ns_for_children before dump,
they must have the same ns after restore.  Otherwise, the restored task
bumps into an environment it does not expect.

This patchset solves the problem.  It exposes pid_ns_for_children in the
ns directory in the standard way with the name "pid_for_children":

  ~# ls /proc/5531/ns -l | grep pid
  lrwxrwxrwx 1 root root 0 Jan 14 16:38 pid -> pid:[4026531836]
  lrwxrwxrwx 1 root root 0 Jan 14 16:38 pid_for_children -> pid:[4026532286]

This patch (of 2):

Make it possible to have a link content prefix yyy that is different
from the link name xxx:

  $ readlink /proc/[pid]/ns/xxx
  yyy:[4026531838]

This will be used in next patch.

Link: http://lkml.kernel.org/r/149201120318.6007.7362655181033883000.stgit@localhost.localdomain
Signed-off-by: Kirill Tkhai <[email protected]>
Reviewed-by: Cyrill Gorcunov <[email protected]>
Acked-by: Andrei Vagin <[email protected]>
Cc: Andreas Gruenbacher <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Michael Kerrisk <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Oleg Nesterov <[email protected]>
Cc: Paul Moore <[email protected]>
Cc: Eric Biederman <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Serge Hallyn <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agopidns: disable pid allocation if pid_ns_prepare_proc() is failed in alloc_pid()
Kirill Tkhai [Mon, 8 May 2017 22:56:34 +0000 (15:56 -0700)]
pidns: disable pid allocation if pid_ns_prepare_proc() is failed in alloc_pid()

alloc_pidmap() advances pid_namespace::last_pid.  When the first pid
allocation fails, the next created process will have pid 2 and
pid_ns_prepare_proc() won't be called.  So pid_namespace::proc_mnt will
never be initialized (not to mention that there won't be a child
reaper).

I saw crash stack of such case on kernel 3.10:

    BUG: unable to handle kernel NULL pointer dereference at (null)
    IP: proc_flush_task+0x8f/0x1b0
    Call Trace:
        release_task+0x3f/0x490
        wait_consider_task.part.10+0x7ff/0xb00
        do_wait+0x11f/0x280
        SyS_wait4+0x7d/0x110

We may fix this by restoring last_pid to 0 or by prohibiting further
allocations.  Since a similar issue in Oleg Nesterov's commit
314a8ad0f18a ("pidns: fix free_pid() to handle the first fork failure")
was fixed via prohibiting allocation, let's follow this way and do the
same.

Link: http://lkml.kernel.org/r/149201021004.4863.6762095011554287922.stgit@localhost.localdomain
Signed-off-by: Kirill Tkhai <[email protected]>
Acked-by: Cyrill Gorcunov <[email protected]>
Cc: Andrei Vagin <[email protected]>
Cc: Andreas Gruenbacher <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Michael Kerrisk <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Oleg Nesterov <[email protected]>
Cc: Paul Moore <[email protected]>
Cc: Eric Biederman <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Serge Hallyn <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agopowerpc/fadump: update documentation about crashkernel parameter reuse
Hari Bathini [Mon, 8 May 2017 22:56:31 +0000 (15:56 -0700)]
powerpc/fadump: update documentation about crashkernel parameter reuse

As we are reusing crashkernel parameter instead of fadump_reserve_mem
parameter to specify the memory to reserve for fadump's crash kernel,
update the documentation accordingly.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Hari Bathini <[email protected]>
Acked-by: Michael Ellerman <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Dave Young <[email protected]>
Cc: Eric Biederman <[email protected]>
Cc: Mahesh Salgaonkar <[email protected]>
Cc: Vivek Goyal <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agopowerpc/fadump: reuse crashkernel parameter for fadump memory reservation
Hari Bathini [Mon, 8 May 2017 22:56:28 +0000 (15:56 -0700)]
powerpc/fadump: reuse crashkernel parameter for fadump memory reservation

fadump supports specifying memory to reserve for fadump's crash kernel
with fadump_reserve_mem kernel parameter.  This parameter currently
supports passing a fixed memory size, like fadump_reserve_mem=<size>
only.  This patch aims to add support for other syntaxes like
range-based memory size
<range1>:<size1>[,<range2>:<size2>,<range3>:<size3>,...] which allows
using the same parameter to boot the kernel with different system RAM
sizes.

As crashkernel parameter already supports the above mentioned syntaxes,
this patch deprecates fadump_reserve_mem parameter and reuses
crashkernel parameter instead, to specify memory for fadump's crash
kernel memory reservation as well.  If any offset is provided in
crashkernel parameter, it will be ignored in case of fadump, as fadump
reserves memory at end of RAM.

The advantages of using the crashkernel parameter instead of the
fadump_reserve_mem parameter are one less kernel parameter overall, code
reuse and support for multiple syntaxes to specify memory.
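
For example, a range-based setting of the form

  crashkernel=512M-2G:64M,2G-:128M

reserves 64M when system RAM falls in the 512M-2G range and 128M when it
is larger, and with this change the same line applies to fadump as well.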

Suggested-by: Dave Young <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Hari Bathini <[email protected]>
Reviewed-by: Mahesh Salgaonkar <[email protected]>
Acked-by: Michael Ellerman <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Dave Young <[email protected]>
Cc: Eric Biederman <[email protected]>
Cc: Vivek Goyal <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agopowerpc/fadump: remove dependency with CONFIG_KEXEC
Hari Bathini [Mon, 8 May 2017 22:56:24 +0000 (15:56 -0700)]
powerpc/fadump: remove dependency with CONFIG_KEXEC

Now that crashkernel parameter parsing and vmcoreinfo related code is
moved under CONFIG_CRASH_CORE instead of CONFIG_KEXEC_CORE, remove
dependency with CONFIG_KEXEC for CONFIG_FA_DUMP.  While here, get rid of
definitions of fadump_append_elf_note() & fadump_final_note() functions
to reuse similar functions compiled under CONFIG_CRASH_CORE.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Hari Bathini <[email protected]>
Reviewed-by: Mahesh Salgaonkar <[email protected]>
Acked-by: Michael Ellerman <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Dave Young <[email protected]>
Cc: Eric Biederman <[email protected]>
Cc: Vivek Goyal <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agoia64: reuse append_elf_note() and final_note() functions
Hari Bathini [Mon, 8 May 2017 22:56:21 +0000 (15:56 -0700)]
ia64: reuse append_elf_note() and final_note() functions

Get rid of multiple definitions of append_elf_note() & final_note()
functions.  Reuse these functions compiled under CONFIG_CRASH_CORE.
Also, define Elf_Word and use it instead of generic u32 or the more
specific Elf64_Word.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Hari Bathini <[email protected]>
Acked-by: Dave Young <[email protected]>
Acked-by: Tony Luck <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: Eric Biederman <[email protected]>
Cc: Mahesh Salgaonkar <[email protected]>
Cc: Vivek Goyal <[email protected]>
Cc: Michael Ellerman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agocrash: move crashkernel parsing and vmcore related code under CONFIG_CRASH_CORE
Hari Bathini [Mon, 8 May 2017 22:56:18 +0000 (15:56 -0700)]
crash: move crashkernel parsing and vmcore related code under CONFIG_CRASH_CORE

Patch series "kexec/fadump: remove dependency with CONFIG_KEXEC and
reuse crashkernel parameter for fadump", v4.

Traditionally, kdump is used to save vmcore in case of a crash.  Some
architectures like powerpc can save vmcore using architecture specific
support instead of kexec/kdump mechanism.  Such architecture specific
support also needs to reserve memory, to be used by dump capture kernel.
The crashkernel parameter can be reused, for memory reservation, by such
architecture specific infrastructure.

This patchset removes dependency with CONFIG_KEXEC for crashkernel
parameter and vmcoreinfo related code as it can be reused without kexec
support.  Also, crashkernel parameter is reused instead of
fadump_reserve_mem to reserve memory for fadump.

The first patch moves crashkernel parameter parsing and vmcoreinfo
related code under CONFIG_CRASH_CORE instead of CONFIG_KEXEC_CORE.  The
second patch reuses the definitions of append_elf_note() & final_note()
functions under CONFIG_CRASH_CORE in IA64 arch code.  The third patch
removes dependency on CONFIG_KEXEC for firmware-assisted dump (fadump)
in powerpc.  The next patch reuses crashkernel parameter for reserving
memory for fadump, instead of the fadump_reserve_mem parameter.  This
has the advantage of using all syntaxes crashkernel parameter supports,
for fadump as well.  The last patch updates fadump kernel documentation
about use of crashkernel parameter.

This patch (of 5):

Traditionally, kdump is used to save vmcore in case of a crash.  Some
architectures like powerpc can save vmcore using architecture specific
support instead of kexec/kdump mechanism.  Such architecture specific
support also needs to reserve memory, to be used by dump capture kernel.
The crashkernel parameter can be reused, for memory reservation, by such
architecture specific infrastructure.

But currently, code related to vmcoreinfo and parsing of crashkernel
parameter is built under CONFIG_KEXEC_CORE.  This patch introduces
CONFIG_CRASH_CORE and moves the above mentioned code under this config,
allowing code reuse without dependency on CONFIG_KEXEC.  There is no
functional change with this patch.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Hari Bathini <[email protected]>
Acked-by: Dave Young <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Eric Biederman <[email protected]>
Cc: Mahesh Salgaonkar <[email protected]>
Cc: Vivek Goyal <[email protected]>
Cc: Michael Ellerman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agocpumask: make "nr_cpumask_bits" unsigned
Alexey Dobriyan [Mon, 8 May 2017 22:56:15 +0000 (15:56 -0700)]
cpumask: make "nr_cpumask_bits" unsigned

Bit searching functions accept "unsigned long" indices but
"nr_cpumask_bits" is "int", which is signed, so inevitable sign
extensions occur on x86_64.  Those MOVSX instructions are the #1 source
of MOVSX bloat by number of uses across the whole kernel.

Change "nr_cpumask_bits" to unsigned; this number can't be negative
after all.  It allows implicit zero-extension on x86_64 without MOVSX.

Change signed comparisons into unsigned comparisons where necessary.

Other uses look fine because each is either an argument passed to a
function or a comparison that is already unsigned.

Net win on allyesconfig type of kernel: ~2.8 KB (!)

add/remove: 0/0 grow/shrink: 8/725 up/down: 93/-2926 (-2833)
function                                     old     new   delta
xen_exit_mmap                                691     735     +44
qstat_read                                   426     440     +14
__cpufreq_cooling_register                  1678    1687      +9
trace_rb_cpu_prepare                         447     455      +8
vermagic                                      54      60      +6
nfp_driver_version                            54      60      +6
rcu_torture_stats_print                     1147    1151      +4
find_next_push_cpu                           267     269      +2
xen_irq_resume                               961     960      -1
...
init_vp_index                                946     906     -40
od_set_powersave_bias                        328     281     -47
power_cpu_exit                               193     139     -54
arch_show_interrupts                        3538    3484     -54
select_idle_sibling                         1558    1471     -87
Total: Before=158358910, After=158356077, chg -0.00%

Same arguments apply to "nr_cpu_ids" but I haven't yet found enough
courage to delve into this issue (and proper fix may require new type
"cpu_t" which is whole separate story).

Link: http://lkml.kernel.org/r/20170309205322.GA1728@avx2
Signed-off-by: Alexey Dobriyan <[email protected]>
Cc: Rusty Russell <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agofork: free vmapped stacks in cache when cpus are offline
Hoeun Ryu [Mon, 8 May 2017 22:56:11 +0000 (15:56 -0700)]
fork: free vmapped stacks in cache when cpus are offline

With virtually mapped stacks, kernel stacks are allocated via vmalloc.

In the current implementation, two stacks per cpu can be cached when
tasks are freed and the cached stacks are used again in task
duplications.  But the cached stacks may remain unfreed even when cpus
are offline.  By adding a cpu hotplug callback to free the cached stacks
when a cpu goes offline, the pages of the cached stacks are not wasted.
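
A sketch of the hotplug callback and its registration (the per-cpu cache
and constant names are assumed from the existing vmapped-stack code in
kernel/fork.c and may differ in detail):

  static int free_vm_stack_cache(unsigned int cpu)
  {
          struct vm_struct **cached_vm_stacks =
                  per_cpu_ptr(cached_stacks, cpu);
          int i;

          for (i = 0; i < NR_CACHED_STACKS; i++) {
                  struct vm_struct *vm_stack = cached_vm_stacks[i];

                  if (!vm_stack)
                          continue;

                  vfree(vm_stack->addr);
                  cached_vm_stacks[i] = NULL;
          }

          return 0;
  }

  /* registered once at boot; the teardown runs when a cpu goes offline */
  cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "fork:vm_stack_cache",
                    NULL, free_vm_stack_cache);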

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Hoeun Ryu <[email protected]>
Reviewed-by: Thomas Gleixner <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: "Eric W. Biederman" <[email protected]>
Cc: Oleg Nesterov <[email protected]>
Cc: Mateusz Guzik <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agoreiserfs: use designated initializers
Kees Cook [Mon, 8 May 2017 22:56:08 +0000 (15:56 -0700)]
reiserfs: use designated initializers

Prepare to mark sensitive kernel structures for randomization by making
sure they're using designated initializers.  These were identified
during allyesconfig builds of x86, arm, and arm64, with most initializer
fixes extracted from grsecurity.
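
The conversions are of this general form (a hypothetical example, not a
reiserfs hunk):

  -static const struct foo_operations foo_ops = {
  -        foo_start, foo_next, foo_stop, foo_show
  -};
  +static const struct foo_operations foo_ops = {
  +        .start = foo_start,
  +        .next  = foo_next,
  +        .stop  = foo_stop,
  +        .show  = foo_show,
  +};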

Link: http://lkml.kernel.org/r/20170329210419.GA40066@beast
Signed-off-by: Kees Cook <[email protected]>
Cc: Jan Kara <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agocheckpatch: improve the SUSPECT_CODE_INDENT test
Joe Perches [Mon, 8 May 2017 22:56:05 +0000 (15:56 -0700)]
checkpatch: improve the SUSPECT_CODE_INDENT test

The current SUSPECT_CODE_INDENT test does not recognize several
code style defects where code following a logical test is
inappropriately indented.

Before this patch, for code like:

if (foo)
bar();

checkpatch would not emit a warning.

Improve the test to warn when code after a logical test has the same
indentation as the logical test.

Perform the same indentation test for "else" blocks too.

Link: http://lkml.kernel.org/r/df2374b68c4a68af2b7ef08afe486584811f610a.1493683942.git.joe@perches.com
Signed-off-by: Joe Perches <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agocheckpatch: improve the embedded function name test for patch contexts
Joe Perches [Mon, 8 May 2017 22:56:02 +0000 (15:56 -0700)]
checkpatch: improve the embedded function name test for patch contexts

The current test works only for a single patch context as it is done in
the foreach ($rawlines) loop that precedes the loop where the actual
$context_function variable is used.

Move the setting of $context_function into the foreach (@lines) loop where
it is useful for each patch context.

Link: http://lkml.kernel.org/r/6c675a31c74fbfad4fc45b9f462303d60ca2a283.1493486091.git.joe@perches.com
Signed-off-by: Joe Perches <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agocheckpatch: add --typedefsfile
Jerome Forissier [Mon, 8 May 2017 22:56:00 +0000 (15:56 -0700)]
checkpatch: add --typedefsfile

When using checkpatch on out-of-tree code, it may occur that some
project-specific types are used, which will cause spurious warnings.
Add the --typedefsfile option as a way to extend the known types and
deal with this issue.

This was developed for OP-TEE [1].  We run a Travis job on all pull
requests [2], and checkpatch is part of that.  The typical false warning
we get on a regular basis is with some pointers to functions returning
TEE_Result [3], which is a typedef from the GlobalPlatform APIs.  We
consider it is acceptable to use GP types in the OP-TEE core
implementation, that's why this patch would be helpful for us.

[1] https://github.com/OP-TEE/optee_os
[2] https://travis-ci.org/OP-TEE/optee_os/builds
[3] https://travis-ci.org/OP-TEE/optee_os/builds/193355335#L1733

Link: http://lkml.kernel.org/r/ba1124d6dfa599bb0dd1d8919dd45dd09ce541a4.1492702192.git.jerome.forissier@linaro.org
Signed-off-by: Jerome Forissier <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: Andy Whitcroft <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agocheckpatch: improve k.alloc with multiplication and sizeof test
Joe Perches [Mon, 8 May 2017 22:55:57 +0000 (15:55 -0700)]
checkpatch: improve k.alloc with multiplication and sizeof test

Find multi-line uses of k.alloc by using the $stat variable and not the
$line variable.  This can still --fix only the single line variant
though.

Link: http://lkml.kernel.org/r/3f4b23d37cd4c7d8628eefc25afe83ba8fb3ab55.1493167076.git.joe@perches.com
Signed-off-by: Joe Perches <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agocheckpatch: special audit for revert commit line
Wei Wang [Mon, 8 May 2017 22:55:54 +0000 (15:55 -0700)]
checkpatch: special audit for revert commit line

Currently checkpatch.pl does not recognize git's default commit revert
message and will complain about the hash format.  Add special audit for
revert commit message line to fix it.
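
For reference, git's default revert message has roughly this shape (the
hash below is a placeholder), which the commit-id format check used to
trip over:

  Revert "subsystem: original subject line"

  This reverts commit 0123456789abcdef0123456789abcdef01234567.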

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Wei Wang <[email protected]>
Acked-by: Joe Perches <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agocheckpatch: clarify the EMBEDDED_FUNCTION_NAME message
Joe Perches [Mon, 8 May 2017 22:55:51 +0000 (15:55 -0700)]
checkpatch: clarify the EMBEDDED_FUNCTION_NAME message

Try to make the conversion of embedded function names to "%s: ", __func__
a bit clearer.

Add a bit more information to the comment describing the test too.

Link: http://lkml.kernel.org/r/38f5d32f0aec1cd98cb9ceeedd6a736cc9a802db.1491759835.git.joe@perches.com
Signed-off-by: Joe Perches <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agocheckpatch: improve MULTISTATEMENT_MACRO_USE_DO_WHILE test
Joe Perches [Mon, 8 May 2017 22:55:48 +0000 (15:55 -0700)]
checkpatch: improve MULTISTATEMENT_MACRO_USE_DO_WHILE test

The logic currently misses macros that start with an if statement.

e.g.:    #define foo(bar)   if (bar) baz;

Add a test for macro content that starts with an if statement.
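
The conventional remedy (not part of this patch; names are placeholders)
is to wrap such a macro body in do/while (0):

  #define foo(bar)   do { if (bar) baz; } while (0)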

Link: http://lkml.kernel.org/r/a9d41aafe1673889caf1a9850208fb7fd74107a0.1491783914.git.joe@perches.com
Signed-off-by: Joe Perches <[email protected]>
Reported-by: Andreas Mohr <[email protected]>
Original-patch-by: Alfonso Lima <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agocheckpatch: avoid suggesting struct definitions should be const
Joe Perches [Mon, 8 May 2017 22:55:45 +0000 (15:55 -0700)]
checkpatch: avoid suggesting struct definitions should be const

Many structs are generally used const and there is a known list of these
structs.

struct definitions should generally not be declared const.

Add a test for the lack of an open brace immediately after the struct
name to avoid flagging definitions.

This avoids the false positive "struct foo should normally be const"
message only when the open brace is on the same line as the definition.
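
A hedged illustration with made-up names: the first block below is a
definition and should no longer trigger the message, while the last line
declares a variable of a type on the known-const list and still should:

  struct foo_ops {                /* definition: no const suggestion */
          int (*run)(void);
  };

  static struct file_operations foo_fops;   /* still warned: should be const */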

Link: http://lkml.kernel.org/r/0dce709150d712e66f1b90b03827634b53b28085.1491845946.git.joe@perches.com
Signed-off-by: Joe Perches <[email protected]>
Cc: Arthur Brainville <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agocheckpatch: allow space leading blank lines in email headers
Joe Perches [Mon, 8 May 2017 22:55:42 +0000 (15:55 -0700)]
checkpatch: allow space leading blank lines in email headers

Allow a leading space on an otherwise blank line in the email headers,
as it can be a line-wrapped Spamassassin multi-line string or any other
valid RFC 2822/5322 email header.

The line with a leading space causes checkpatch to erroneously think
that it is in the content body, as opposed to the headers, and thus flag
a mail header as an unwrapped long comment line.

Link: http://lkml.kernel.org/r/d75a9f0b78b3488078429f4037d9fff3bdfa3b78.1490247180.git.joe@perches.com
Signed-off-by: Joe Perches <[email protected]>
Reported-by: Darren Hart (VMware) <[email protected]>
Tested-by: Darren Hart (VMware) <[email protected]>
Reviewed-by: Darren Hart (VMware) <[email protected]>
Original-patch-by: John 'Warthog9' Hawley (VMware) <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agocheckpatch: improve EMBEDDED_FUNCTION_NAME test
Joe Perches [Mon, 8 May 2017 22:55:39 +0000 (15:55 -0700)]
checkpatch: improve EMBEDDED_FUNCTION_NAME test

The existing behavior relies on patch context to identify function
declarations.  Add the ability to find function declarations when there
is an open brace in column 1.

This finds function declarations only in specific single line forms
where the function name is on a single line like:

  int foo(args...)
  {

and

  int
  foo(args...)
  {

It does not recognize function declarations like:

  int foo(int bar,
          int baz)
  {

Link: http://lkml.kernel.org/r/738d74bbbe1a06b80f11ed504818107c68903095.1488155636.git.joe@perches.com
Signed-off-by: Joe Perches <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agocheckpatch: add ability to find bad uses of vsprintf %p<foo> extensions
Joe Perches [Mon, 8 May 2017 22:55:36 +0000 (15:55 -0700)]
checkpatch: add ability to find bad uses of vsprintf %p<foo> extensions

%pK was at least once misused as %pk in an out-of-tree module.  This
led to some security concerns.  Add the ability to track single and
multiple line statements for misuses of %p<foo>.
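
A minimal sketch of the kind of misuse the new test is meant to catch
(ptr is a placeholder):

  pr_info("addr: %pK\n", ptr);    /* valid extension: restricted pointer */
  pr_info("addr: %pk\n", ptr);    /* invalid lower-case extension: flagged */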

[[email protected]: add helpful comment into lib/vsprintf.c]
[[email protected]: text tweak]
Link: http://lkml.kernel.org/r/163a690510e636a23187c0dc9caa09ddac6d4cde.1488228427.git.joe@perches.com
Signed-off-by: Joe Perches <[email protected]>
Acked-by: Kees Cook <[email protected]>
Acked-by: William Roberts <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agocheckpatch: remove obsolete CONFIG_EXPERIMENTAL checks
Ruslan Bilovol [Mon, 8 May 2017 22:55:33 +0000 (15:55 -0700)]
checkpatch: remove obsolete CONFIG_EXPERIMENTAL checks

Config EXPERIMENTAL was removed from the kernel in 2013 (see commit
3d374d09f16f: "final removal of CONFIG_EXPERIMENTAL"), so there is no
reason to do these checks now.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ruslan Bilovol <[email protected]>
Acked-by: Kees Cook <[email protected]>
Acked-by: Joe Perches <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agofirmware/Makefile: force recompilation if makefile changes
Luis R. Rodriguez [Mon, 8 May 2017 22:55:30 +0000 (15:55 -0700)]
firmware/Makefile: force recompilation if makefile changes

If you modify the target asm, we currently do not force recompilation
of the firmware files.  The target asm is in firmware/Makefile, so peg
this file as a dependency to require recompilation of firmware targets
when the asm changes.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Luis R. Rodriguez <[email protected]>
Cc: Masahiro Yamada <[email protected]>
Cc: Michal Marek <[email protected]>
Cc: Ming Lei <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Tom Gundersen <[email protected]>
Cc: David Woodhouse <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agolib: add module support to linked list sorting tests
Geert Uytterhoeven [Mon, 8 May 2017 22:55:26 +0000 (15:55 -0700)]
lib: add module support to linked list sorting tests

Extract the linked list sorting test code into its own source file, to
allow compiling it either as a loadable module, or builtin into the
kernel.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Geert Uytterhoeven <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Paul Gortmaker <[email protected]>
Cc: Shuah Khan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agolib: add module support to array-based sort tests
Geert Uytterhoeven [Mon, 8 May 2017 22:55:23 +0000 (15:55 -0700)]
lib: add module support to array-based sort tests

Allow compiling the array-based sort test code either as a loadable
module, or builtin into the kernel.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Geert Uytterhoeven <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Paul Gortmaker <[email protected]>
Cc: Shuah Khan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agoRevert "lib/test_sort.c: make it explicitly non-modular"
Geert Uytterhoeven [Mon, 8 May 2017 22:55:20 +0000 (15:55 -0700)]
Revert "lib/test_sort.c: make it explicitly non-modular"

Patch series "lib: add module support to sort tests".

This patch series allows compiling the array-based and linked list sort
test code either as loadable modules, or builtin into the kernel.

It's very valuable to have modular tests, so you can run them just by
insmodding the test modules, instead of needing a separate kernel that
runs them at boot.

This patch (of 3):

This reverts commit 8893f519330bb073a49c5b4676fce4be6f1be15d.

It's very valuable to have modular tests, so you can run them just by
insmodding the test modules, instead of needing a separate kernel that
runs them at boot.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Geert Uytterhoeven <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Paul Gortmaker <[email protected]>
Cc: Shuah Khan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agodrivers/misc/c2port/c2port-duramar2150.c: checking for NULL instead of IS_ERR()
Dan Carpenter [Mon, 8 May 2017 22:55:17 +0000 (15:55 -0700)]
drivers/misc/c2port/c2port-duramar2150.c: checking for NULL instead of IS_ERR()

c2port_device_register() never returns NULL; it uses error pointers.
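
A minimal sketch of the corrected pattern (variable names abbreviated,
not the exact driver code):

  dev = c2port_device_register(name, ops, devdata);
  if (IS_ERR(dev))        /* previously tested for NULL, which never happens */
          return PTR_ERR(dev);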

Link: http://lkml.kernel.org/r/20170412083321.GC3250@mwanda
Fixes: 65131cd52b9e ("c2port: add c2port support for Eurotech Duramar 2150")
Signed-off-by: Dan Carpenter <[email protected]>
Acked-by: Rodolfo Giometti <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agodrivers/misc/vmw_vmci/vmci_queue_pair.c: fix a couple integer overflow tests
Dan Carpenter [Mon, 8 May 2017 22:55:14 +0000 (15:55 -0700)]
drivers/misc/vmw_vmci/vmci_queue_pair.c: fix a couple integer overflow tests

The "DIV_ROUND_UP(size, PAGE_SIZE)" operation can overflow if "size" is
more than ULLONG_MAX - PAGE_SIZE.
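
A hedged sketch of the guard implied by the description (the error code
shown is illustrative):

  if (size > ULLONG_MAX - PAGE_SIZE)
          return VMCI_ERROR_INVALID_ARGS;

  num_pages = DIV_ROUND_UP(size, PAGE_SIZE);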

Link: http://lkml.kernel.org/r/20170322111950.GA11279@mwanda
Signed-off-by: Dan Carpenter <[email protected]>
Cc: Jorgen Hansen <[email protected]>
Cc: Masahiro Yamada <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agokernel/hung_task.c: defer showing held locks
Tetsuo Handa [Mon, 8 May 2017 22:55:11 +0000 (15:55 -0700)]
kernel/hung_task.c: defer showing held locks

When I was running my testcase, which may block hundreds of threads on
fs locks, I got a lockup due to output from debug_show_all_locks() added
by commit b2d4c2edb2e4 ("locking/hung_task: Show all locks").

For example, if 1000 threads were blocked in TASK_UNINTERRUPTIBLE state
and 500 out of 1000 threads hold some lock, debug_show_all_locks() from
the for_each_process_thread() loop will report the locks held by 500
threads 1000 times.  This is too much noise.

In order to make sure rcu_lock_break() is called frequently, we should
avoid calling debug_show_all_locks() from for_each_process_thread() loop
because debug_show_all_locks() effectively calls for_each_process_thread()
loop.  Let's defer calling debug_show_all_locks() till before panic() or
leaving for_each_process_thread() loop.
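
A minimal sketch of the deferral described above (hung_task_show_lock is
illustrative and may not match the exact variable name used):

  static bool hung_task_show_lock;

  /* inside the for_each_process_thread() loop, instead of dumping locks */
  hung_task_show_lock = true;

  /* after leaving the loop, and before a possible panic() */
  if (hung_task_show_lock)
          debug_show_all_locks();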

Link: http://lkml.kernel.org/r/1489296834-60436-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp
Signed-off-by: Tetsuo Handa <[email protected]>
Reviewed-by: Vegard Nossum <[email protected]>
Cc: Ingo Molnar <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agomake help: add tools help target
Randy Dunlap [Mon, 8 May 2017 22:55:08 +0000 (15:55 -0700)]
make help: add tools help target

Add a top-level Makefile help target for Userspace tools.

Also make each help "heading" end with a colon ':'.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Randy Dunlap <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agojiffies.h: declare jiffies and jiffies_64 with ____cacheline_aligned_in_smp
Matthias Kaehlcke [Mon, 8 May 2017 22:55:05 +0000 (15:55 -0700)]
jiffies.h: declare jiffies and jiffies_64 with ____cacheline_aligned_in_smp

jiffies_64 is defined in kernel/time/timer.c with
____cacheline_aligned_in_smp; however, this macro is not part of the
declaration of jiffies and jiffies_64 in jiffies.h.

As a result clang generates the following warning:

  kernel/time/timer.c:57:26: error: section does not match previous declaration [-Werror,-Wsection]
  __visible u64 jiffies_64 __cacheline_aligned_in_smp = INITIAL_JIFFIES;
                           ^
  include/linux/cache.h:39:36: note: expanded from macro '__cacheline_aligned_in_smp'
                                     ^
  include/linux/cache.h:34:4: note: expanded from macro '__cacheline_aligned'
                   __section__(".data..cacheline_aligned")))
                   ^
  include/linux/jiffies.h:77:12: note: previous attribute is here
  extern u64 __jiffy_data jiffies_64;
             ^
  include/linux/jiffies.h:70:38: note: expanded from macro '__jiffy_data'
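
A hedged sketch of the kind of declaration change implied by the subject
line (the real declarations in jiffies.h carry additional attribute
macros):

  extern u64 ____cacheline_aligned_in_smp jiffies_64;
  extern unsigned long volatile ____cacheline_aligned_in_smp jiffies;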

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Matthias Kaehlcke <[email protected]>
Cc: "Jason A . Donenfeld" <[email protected]>
Cc: Grant Grundler <[email protected]>
Cc: Michael Davidson <[email protected]>
Cc: Greg Hackmann <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agodrivers/virt/fsl_hypervisor.c: use get_user_pages_unlocked()
Lorenzo Stoakes [Mon, 8 May 2017 22:55:02 +0000 (15:55 -0700)]
drivers/virt/fsl_hypervisor.c: use get_user_pages_unlocked()

Moving from get_user_pages() to get_user_pages_unlocked() simplifies the
code and takes advantage of VM_FAULT_RETRY functionality when faulting
in pages.
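
A rough before/after sketch of the conversion pattern (arguments
abbreviated, not the exact driver call sites):

  /* before */
  down_read(&current->mm->mmap_sem);
  ret = get_user_pages(addr, num_pages, FOLL_WRITE, pages, NULL);
  up_read(&current->mm->mmap_sem);

  /* after */
  ret = get_user_pages_unlocked(addr, num_pages, pages, FOLL_WRITE);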

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Lorenzo Stoakes <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Kumar Gala <[email protected]>
Cc: Mihai Caraman <[email protected]>
Cc: Greg KH <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agoproc/sysctl: fix the int overflow for jiffies conversion
Gao Feng [Mon, 8 May 2017 22:54:58 +0000 (15:54 -0700)]
proc/sysctl: fix the int overflow for jiffies conversion

do_proc_dointvec_jiffies_conv() uses LONG_MAX/HZ as the max value to
avoid overflow.  But *valp is actually of int type, so it still
overflows.

For example,

  echo 2147483647 > ./sys/net/ipv4/tcp_keepalive_time

Then,

  cat ./sys/net/ipv4/tcp_keepalive_time

The output is "-1", which is not expected.

Now use INT_MAX/HZ as the max value instead of LONG_MAX/HZ to fix it.
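
A minimal sketch of the bound change described (surrounding conversion
code omitted):

  /* before: values that overflow the int *valp once multiplied by HZ pass */
  if (*lvalp > LONG_MAX / HZ)
          return 1;

  /* after */
  if (*lvalp > INT_MAX / HZ)
          return 1;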

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Gao Feng <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Alexey Dobriyan <[email protected]>
Cc: Eric Dumazet <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agofs/proc/inode.c: remove cast from memory allocation
Tobin C. Harding [Mon, 8 May 2017 22:54:55 +0000 (15:54 -0700)]
fs/proc/inode.c: remove cast from memory allocation

Coccinelle emits this warning:

  WARNING: casting value returned by memory allocation function to (struct proc_inode *) is useless.

Remove unnecessary cast.
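
The pattern being cleaned up, sketched for the proc inode cache (exact
identifiers and flags may differ slightly):

  /* before */
  ei = (struct proc_inode *)kmem_cache_alloc(proc_inode_cachep, GFP_KERNEL);

  /* after */
  ei = kmem_cache_alloc(proc_inode_cachep, GFP_KERNEL);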

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Tobin C. Harding <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agomm, compaction: finish whole pageblock to reduce fragmentation
Vlastimil Babka [Mon, 8 May 2017 22:54:52 +0000 (15:54 -0700)]
mm, compaction: finish whole pageblock to reduce fragmentation

The main goal of direct compaction is to form a high-order page for
allocation, but it should also help against long-term fragmentation when
possible.

Most lower-than-pageblock-order compactions are for non-movable
allocations, which means that if we compact in a movable pageblock and
terminate as soon as we create the high-order page, it's unlikely that
the fallback heuristics will claim the whole block.  Instead there might
be a single unmovable page in a pageblock full of movable pages, and the
next unmovable allocation might pick another pageblock and increase
long-term fragmentation.

To help against such scenarios, this patch changes the termination
criteria for compaction so that the current pageblock is finished even
though the high-order page already exists.  Note that it might be
possible that the high-order page formed elsewhere in the zone due to
parallel activity, but this patch doesn't try to detect that.

This is only done with sync compaction, because async compaction is
limited to pageblock of the same migratetype, where it cannot result in
a migratetype fallback.  (Async compaction also eagerly skips
order-aligned blocks where isolation fails, which is against the goal of
migrating away as much of the pageblock as possible.)

As a result of this patch, long-term memory fragmentation should be
reduced.

In testing based on 4.9 kernel with stress-highalloc from mmtests
configured for order-4 GFP_KERNEL allocations, this patch has reduced
the number of unmovable allocations falling back to movable pageblocks
by 20%.  The number

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Vlastimil Babka <[email protected]>
Acked-by: Mel Gorman <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: David Rientjes <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agomm, compaction: restrict async compaction to pageblocks of same migratetype
Vlastimil Babka [Mon, 8 May 2017 22:54:49 +0000 (15:54 -0700)]
mm, compaction: restrict async compaction to pageblocks of same migratetype

The migrate scanner in async compaction is currently limited to
MIGRATE_MOVABLE pageblocks.  This is a heuristic intended to reduce
latency, based on the assumption that non-MOVABLE pageblocks are
unlikely to contain movable pages.

However, with the exception of THP's, most high-order allocations are
not movable.  Should the async compaction succeed, this increases the
chance that the non-MOVABLE allocations will fallback to a MOVABLE
pageblock, making the long-term fragmentation worse.

This patch attempts to help the situation by changing async direct
compaction so that the migrate scanner only scans the pageblocks of the
requested migratetype.  If it's a non-MOVABLE type and there are such
pageblocks that do contain movable pages, chances are that the
allocation can succeed within one of such pageblocks, removing the need
for a fallback.  If that fails, the subsequent sync attempt will ignore
this restriction.

In testing based on 4.9 kernel with stress-highalloc from mmtests
configured for order-4 GFP_KERNEL allocations, this patch has reduced
the number of unmovable allocations falling back to movable pageblocks
by 30%.  The number of movable allocations falling back is reduced by
12%.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Vlastimil Babka <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: David Rientjes <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agomm, compaction: add migratetype to compact_control
Vlastimil Babka [Mon, 8 May 2017 22:54:46 +0000 (15:54 -0700)]
mm, compaction: add migratetype to compact_control

Preparation patch.  We are going to need migratetype at lower layers
than compact_zone() and compact_finished().

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Vlastimil Babka <[email protected]>
Acked-by: Mel Gorman <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: David Rientjes <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agomm, compaction: change migrate_async_suitable() to suitable_migration_source()
Vlastimil Babka [Mon, 8 May 2017 22:54:43 +0000 (15:54 -0700)]
mm, compaction: change migrate_async_suitable() to suitable_migration_source()

Preparation for making the decisions more complex and depending on
compact_control flags.  No functional change.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Vlastimil Babka <[email protected]>
Acked-by: Mel Gorman <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: David Rientjes <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agomm, page_alloc: count movable pages when stealing from pageblock
Vlastimil Babka [Mon, 8 May 2017 22:54:40 +0000 (15:54 -0700)]
mm, page_alloc: count movable pages when stealing from pageblock

When stealing pages from pageblock of a different migratetype, we count
how many free pages were stolen, and change the pageblock's migratetype
if more than half of the pageblock was free.  This might be too
conservative, as there might be other pages that are not free, but were
allocated with the same migratetype as our allocation requested.

While we cannot determine the migratetype of allocated pages precisely
(at least without the page_owner functionality enabled), we can count
pages that compaction would try to isolate for migration - those are
either on LRU or __PageMovable().  The rest can be assumed to be
MIGRATE_RECLAIMABLE or MIGRATE_UNMOVABLE, which we cannot easily
distinguish.  This counting can be done as part of free page stealing
with little additional overhead.

The page stealing code is changed so that it considers free pages plus
pages of the "good" migratetype for the decision whether to change
pageblock's migratetype.

The result should be more accurate migratetype of pageblocks wrt the
actual pages in the pageblocks, when stealing from semi-occupied
pageblocks.  This should help the efficiency of page grouping by
mobility.

In testing based on 4.9 kernel with stress-highalloc from mmtests
configured for order-4 GFP_KERNEL allocations, this patch has reduced
the number of unmovable allocations falling back to movable pageblocks
by 47%.  The number of movable allocations falling back to other
pageblocks are increased by 55%, but these events don't cause permanent
fragmentation, so the tradeoff should be positive.  Later patches also
offset the movable fallback increase to some extent.

[[email protected]: merge fix]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Vlastimil Babka <[email protected]>
Acked-by: Mel Gorman <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: David Rientjes <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agomm, page_alloc: split smallest stolen page in fallback
Vlastimil Babka [Mon, 8 May 2017 22:54:37 +0000 (15:54 -0700)]
mm, page_alloc: split smallest stolen page in fallback

The __rmqueue_fallback() function is called when there's no free page of
requested migratetype, and we need to steal from a different one.

There are various heuristics to make this event infrequent and reduce
permanent fragmentation.  The main one is to try stealing from a
pageblock that has the most free pages, and possibly steal them all at
once and convert the whole pageblock.  Precise searching for such
pageblock would be expensive, so instead the heuristics walks the free
lists from MAX_ORDER down to requested order and assumes that the block
with highest-order free page is likely to also have the most free pages
in total.

Chances are that together with the highest-order page, we steal also
pages of lower orders from the same block.  But then we still split the
highest order page.  This is wasteful and can contribute to
fragmentation instead of avoiding it.

This patch thus changes __rmqueue_fallback() to just steal the page(s)
and put them on the freelist of the requested migratetype, and only
report whether it was successful.  Then we pick (and eventually split)
the smallest page with __rmqueue_smallest().  This all happens under
zone lock, so nobody can steal it from us in the process.  This should
reduce fragmentation due to fallbacks.  At worst we are only stealing a
single highest-order page and waste some cycles by moving it between
lists and then removing it, but fallback is not exactly hot path so that
should not be a concern.  As a side benefit the patch removes some
duplicate code by reusing __rmqueue_smallest().

[[email protected]: fix endless loop in the modified __rmqueue()]
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Vlastimil Babka <[email protected]>
Acked-by: Mel Gorman <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: David Rientjes <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agomm, compaction: remove redundant watermark check in compact_finished()
Vlastimil Babka [Mon, 8 May 2017 22:54:33 +0000 (15:54 -0700)]
mm, compaction: remove redundant watermark check in compact_finished()

When detecting whether compaction has succeeded in forming a high-order
page, __compact_finished() employs a watermark check, followed by an own
search for a suitable page in the freelists.  This is not ideal for two
reasons:

 - The watermark check also searches high-order freelists, but has a
   less strict criterion wrt fallback. It's therefore redundant and a
   waste of cycles. This was different in the past when the high-order
   watermark check attempted to apply reserves to high-order pages.

 - The watermark check might actually fail due to lack of order-0 pages.
   Compaction can't help with that, so there's no point in continuing
   because of that. It's possible that high-order page still exists and
   it terminates.

This patch therefore removes the watermark check.  This should save some
cycles and terminate compaction sooner in some cases.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Vlastimil Babka <[email protected]>
Acked-by: Mel Gorman <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: David Rientjes <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agomm, compaction: reorder fields in struct compact_control
Vlastimil Babka [Mon, 8 May 2017 22:54:30 +0000 (15:54 -0700)]
mm, compaction: reorder fields in struct compact_control

Patch series "try to reduce fragmenting fallbacks", v3.

Last year, Johannes Weiner has reported a regression in page mobility
grouping [1] and while the exact cause was not found, I've come up with
some ways to improve it by reducing the number of allocations falling
back to different migratetype and causing permanent fragmentation.

The series was tested with mmtests stress-highalloc modified to do
GFP_KERNEL order-4 allocations, on 4.9 with "mm, vmscan: fix zone
balance check in prepare_kswapd_sleep" (without that, kcompactd indeed
wasn't woken up) on UMA machine with 4GB memory.  There were 5 repeats
of each run, as the extfrag stats are quite volatile (note the stats
below are sums, not averages, as it was less perl hacking for me).

Success rate are the same, already high due to the low allocation order
used, so I'm not including them.

Compaction stats:
(the patches are stacked, and I haven't measured the non-functional-changes
patches separately)

                                     patch 1     patch 2     patch 3     patch 4     patch 7     patch 8
  Compaction stalls                    22449       24680       24846       19765       22059       17480
  Compaction success                   12971       14836       14608       10475       11632        8757
  Compaction failures                   9477        9843       10238        9290       10426        8722
  Page migrate success               3109022     3370438     3312164     1695105     1608435     2111379
  Page migrate failure                911588     1149065     1028264     1112675     1077251     1026367
  Compaction pages isolated          7242983     8015530     7782467     4629063     4402787     5377665
  Compaction migrate scanned       980838938   987367943   957690188   917647238   947155598  1018922197
  Compaction free scanned          557926893   598946443   602236894   594024490   541169699   763651731
  Compaction cost                      10243       10578       10304        8286        8398        9440

Compaction stats are mostly within noise until patch 4, which decreases
the number of compactions, and migrations.  Part of that could be due to
more pageblocks marked as unmovable, and async compaction skipping
those.  This changes a bit with patch 7, but not so much.  Patch 8
increases free scanner stats and migrations, which comes from the
changed termination criteria.  Interestingly number of compactions
decreases - probably the fully compacted pageblock satisfies multiple
subsequent allocations, so it amortizes.

Next comes the extfrag tracepoint, where "fragmenting" means that an
allocation had to fallback to a pageblock of another migratetype which
wasn't fully free (which is almost all of the fallbacks).  I have
locally added another tracepoint for "Page steal" into
steal_suitable_fallback() which triggers in situations where we are
allowed to do move_freepages_block().  If we decide to also do
set_pageblock_migratetype(), it's "Pages steal with pageblock" with
break down for which allocation migratetype we are stealing and from
which fallback migratetype.  The last part "due to counting" comes from
patch 4 and counts the events where the counting of movable pages
allowed us to change pageblock's migratetype, while the number of free
pages alone wouldn't be enough to cross the threshold.

                                                       patch 1     patch 2     patch 3     patch 4     patch 7     patch 8
  Page alloc extfrag event                            10155066     8522968    10164959    15622080    13727068    13140319
  Extfrag fragmenting                                 10149231     8517025    10159040    15616925    13721391    13134792
  Extfrag fragmenting for unmovable                     159504      168500      184177       97835       70625       56948
  Extfrag fragmenting unmovable placed with movable     153613      163549      172693       91740       64099       50917
  Extfrag fragmenting unmovable placed with reclaim.      5891        4951       11484        6095        6526        6031
  Extfrag fragmenting for reclaimable                     4738        4829        6345        4822        5640        5378
  Extfrag fragmenting reclaimable placed with movable     1836        1902        1851        1579        1739        1760
  Extfrag fragmenting reclaimable placed with unmov.      2902        2927        4494        3243        3901        3618
  Extfrag fragmenting for movable                      9984989     8343696     9968518    15514268    13645126    13072466
  Pages steal                                           179954      192291      210880      123254       94545       81486
  Pages steal with pageblock                             22153       18943       20154       33562       29969       33444
  Pages steal with pageblock for unmovable               14350       12858       13256       20660       19003       20852
  Pages steal with pageblock for unmovable from mov.     12812       11402       11683       19072       17467       19298
  Pages steal with pageblock for unmovable from recl.     1538        1456        1573        1588        1536        1554
  Pages steal with pageblock for movable                  7114        5489        5965       11787       10012       11493
  Pages steal with pageblock for movable from unmov.      6885        5291        5541       11179        9525       10885
  Pages steal with pageblock for movable from recl.        229         198         424         608         487         608
  Pages steal with pageblock for reclaimable               689         596         933        1115         954        1099
  Pages steal with pageblock for reclaimable from unmov.   273         219         537         658         547         667
  Pages steal with pageblock for reclaimable from mov.     416         377         396         457         407         432
  Pages steal with pageblock due to counting                                                 11834       10075        7530
  ... for unmovable                                                                           8993        7381        4616
  ... for movable                                                                             2792        2653        2851
  ... for reclaimable                                                                           49          41          63

What we can see is that "Extfrag fragmenting for unmovable" and "...
placed with movable" drops with almost each patch, which is good as we
are polluting less movable pageblocks with unmovable pages.

The most significant change is patch 4 with movable page counting.  On
the other hand it increases "Extfrag fragmenting for movable" by 50%.
"Pages steal" drops though, so these movable allocation fallbacks find
only small free pages and are not allowed to steal whole pageblocks
back.  "Pages steal with pageblock" raises, because the patch increases
the chances of pageblock migratetype changes to happen.  This affects
all migratetypes.

The summary is that patch 4 is not a clear win wrt these stats, but I
believe that the tradeoff it makes is a good one.  There's less
pollution of movable pageblocks by unmovable allocations.  There's less
stealing between pageblock, and those that remain have higher chance of
changing migratetype also the pageblock itself, so it should more
faithfully reflect the migratetype of the pages within the pageblock.
The increase of movable allocations falling back to unmovable pageblock
might look dramatic, but those allocations can be migrated by compaction
when needed, and other patches in the series (7-9) improve that aspect.

Patches 7 and 8 continue the trend of reduced unmovable fallbacks and
also reduce the impact on movable fallbacks from patch 4.

[1] https://www.spinics.net/lists/linux-mm/msg114237.html

This patch (of 8):

While currently there are (mostly by accident) no holes in struct
compact_control (on x86_64), we are going to add more bool flags, so
place them all together at the end of the structure.  While at it, just
order all fields from largest to smallest.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Vlastimil Babka <[email protected]>
Acked-by: Mel Gorman <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: David Rientjes <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
7 years agonet: mdio-mux: bcm-iproc: call mdiobus_free() in error path
Jon Mason [Mon, 8 May 2017 21:48:35 +0000 (17:48 -0400)]
net: mdio-mux: bcm-iproc: call mdiobus_free() in error path

If an error is encountered in mdio_mux_init(), the error path will call
mdiobus_free().  Since mdiobus_register() has been called prior to
mdio_mux_init(), the bus->state will not be MDIOBUS_UNREGISTERED.  This
causes a BUG_ON() in mdiobus_free().  To correct this issue, add an
error path for mdio_mux_init() which calls mdiobus_unregister() prior to
mdiobus_free().
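
A hedged sketch of the corrected error path (arguments elided, names
illustrative):

  /* bus has already been through mdiobus_register() at this point */
  err = mdio_mux_init(/* ... */);
  if (err) {
          mdiobus_unregister(bus);  /* new: avoid the BUG_ON() in mdiobus_free() */
          mdiobus_free(bus);
          return err;
  }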

Signed-off-by: Jon Mason <[email protected]>
Fixes: 98bc865a1ec8 ("net: mdio-mux: Add MDIO mux driver for iProc SoCs")
Acked-by: Florian Fainelli <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
7 years agoide: don't call memcpy with the same source and destination
Mikulas Patocka [Fri, 14 Apr 2017 18:35:33 +0000 (14:35 -0400)]
ide: don't call memcpy with the same source and destination

The parisc architecture recently reimplemented the memcpy function and
their reimplementation crashed when source and destination overlapped.

The crash happened in the function ide_complete_cmd where memcpy is called
with the same source and destination pointer. According to the C
specification, memcpy behavior is undefined if the source and destination
ranges overlap. This patch fixes the undefined behavior.
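
A hedged sketch of the kind of guard that avoids the overlapping copy
(cmd and orig_cmd are illustrative names):

  if (cmd != orig_cmd)
          memcpy(orig_cmd, cmd, sizeof(*cmd));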

Signed-off-by: Mikulas Patocka <[email protected]>
Reviewed-by: Bartlomiej Zolnierkiewicz <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
7 years agoide: use setup_timer
Geliang Tang [Sun, 9 Apr 2017 01:41:32 +0000 (09:41 +0800)]
ide: use setup_timer

Use setup_timer() instead of init_timer() to simplify the code.
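
The generic shape of such a conversion, sketched with the IDE port timer
as an example (field and callback names may not match the code exactly):

  /* before */
  init_timer(&hwif->timer);
  hwif->timer.function = ide_timer_expiry;
  hwif->timer.data = (unsigned long)hwif;

  /* after */
  setup_timer(&hwif->timer, ide_timer_expiry, (unsigned long)hwif);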

Signed-off-by: Geliang Tang <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
7 years agonet: ethernet: ti: cpsw: adjust cpsw fifos depth for fullduplex flow control
Grygorii Strashko [Mon, 8 May 2017 19:21:21 +0000 (14:21 -0500)]
net: ethernet: ti: cpsw: adjust cpsw fifos depth for fullduplex flow control

When users set flow control using ethtool the bits are set properly in the
CPGMAC_SL MACCONTROL register, but the FIFO depth in the respective Port n
Maximum FIFO Blocks (Pn_MAX_BLKS) registers remains set to the minimum size
reset value. When receive flow control is enabled on a port, the port's
associated FIFO block allocation must be adjusted. The port RX allocation
must increase to accommodate the flow control runout. The TRM recommends
numbers of 5 or 6.

Hence, apply required Port FIFO configuration to
Pn_MAX_BLKS.Pn_TX_MAX_BLKS=0xF and Pn_MAX_BLKS.Pn_RX_MAX_BLKS=0x5 during
interface initialization.

Cc: Schuyler Patton <[email protected]>
Signed-off-by: Grygorii Strashko <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
7 years agoipv6: reorder ip6_route_dev_notifier after ipv6_dev_notf
WANG Cong [Mon, 8 May 2017 17:12:13 +0000 (10:12 -0700)]
ipv6: reorder ip6_route_dev_notifier after ipv6_dev_notf

For each netns (except init_net), we initialize its null entry
in 3 places:

1) The template itself, as we use kmemdup()
2) Code around dst_init_metrics() in ip6_route_net_init()
3) ip6_route_dev_notify(), which is supposed to initialize it after
   loopback registers

Unfortunately the last one still happens in the wrong order, because
we expect to initialize net->ipv6.ip6_null_entry->rt6i_idev to
net->loopback_dev's idev, thus we have to do that after we add
idev to loopback. However, this notifier has priority == 0, the same as
ipv6_dev_notf, and ipv6_dev_notf is registered after
ip6_route_dev_notifier, so it is actually called after
ip6_route_dev_notifier. This is similar to commit 2f460933f58e
("ipv6: initialize route null entry in addrconf_init()") which
fixes init_net.

Fix it by picking a smaller priority for ip6_route_dev_notifier.
Also, we have to release the refcnt accordingly when unregistering
loopback_dev because device exit functions are called before subsys
exit functions.

Acked-by: David Ahern <[email protected]>
Tested-by: David Ahern <[email protected]>
Signed-off-by: Cong Wang <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
7 years agoMerge tag 'mac80211-for-davem-2017-05-08' of git://git.kernel.org/pub/scm/linux/kerne...
David S. Miller [Mon, 8 May 2017 20:02:23 +0000 (16:02 -0400)]
Merge tag 'mac80211-for-davem-2017-05-08' of git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211

Johannes Berg says:

====================
A couple more fixes:
 * don't try to authenticate during reconfiguration, which causes
   drivers to get confused
 * fix a kernel-doc warning for a recently merged change
 * fix MU-MIMO group configuration (relevant only for monitor mode)
 * more rate flags fix: remove stray RX_ENC_FLAG_40MHZ
 * fix IBSS probe response allocation size
====================

Signed-off-by: David S. Miller <[email protected]>
7 years agonet: cdc_ncm: Fix TX zero padding
Jim Baxter [Mon, 8 May 2017 12:49:57 +0000 (13:49 +0100)]
net: cdc_ncm: Fix TX zero padding

The zero padding that is added to NTBs does
not zero the memory correctly.
This is because skb_put() modifies the value
of skb_out->len, which results in the memset
call not setting any memory to zero, as
(ctx->tx_max - skb_out->len) == 0.

I have resolved this by storing the size of
the memory to be zeroed before the skb_put
and using this in the memset call.
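
A minimal sketch of the described fix (the
name of the saved size variable is
illustrative):

  /* before: skb_put() bumps skb_out->len,
   * so the length expression becomes 0
   */
  memset(skb_put(skb_out, ctx->tx_max - skb_out->len), 0,
         ctx->tx_max - skb_out->len);

  /* after: compute the size once, before
   * skb_put() modifies skb_out->len
   */
  padding_count = ctx->tx_max - skb_out->len;
  memset(skb_put(skb_out, padding_count), 0, padding_count);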

Signed-off-by: Jim Baxter <[email protected]>
Reviewed-by: Bjørn Mork <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
7 years agoMerge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Linus Torvalds [Mon, 8 May 2017 19:37:56 +0000 (12:37 -0700)]
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull KVM updates from Paolo Bonzini:
 "ARM:
   - HYP mode stub supports kexec/kdump on 32-bit
   - improved PMU support
   - virtual interrupt controller performance improvements
   - support for userspace virtual interrupt controller (slower, but
     necessary for KVM on the weird Broadcom SoCs used by the Raspberry
     Pi 3)

  MIPS:
   - basic support for hardware virtualization (ImgTec P5600/P6600/I6400
     and Cavium Octeon III)

  PPC:
   - in-kernel acceleration for VFIO

  s390:
   - support for guests without storage keys
   - adapter interruption suppression

  x86:
   - usual range of nVMX improvements, notably nested EPT support for
     accessed and dirty bits
   - emulation of CPL3 CPUID faulting

  generic:
   - first part of VCPU thread request API
   - kvm_stat improvements"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (227 commits)
  kvm: nVMX: Don't validate disabled secondary controls
  KVM: put back #ifndef CONFIG_S390 around kvm_vcpu_kick
  Revert "KVM: Support vCPU-based gfn->hva cache"
  tools/kvm: fix top level makefile
  KVM: x86: don't hold kvm->lock in KVM_SET_GSI_ROUTING
  KVM: Documentation: remove VM mmap documentation
  kvm: nVMX: Remove superfluous VMX instruction fault checks
  KVM: x86: fix emulation of RSM and IRET instructions
  KVM: mark requests that need synchronization
  KVM: return if kvm_vcpu_wake_up() did wake up the VCPU
  KVM: add explicit barrier to kvm_vcpu_kick
  KVM: perform a wake_up in kvm_make_all_cpus_request
  KVM: mark requests that do not need a wakeup
  KVM: remove #ifndef CONFIG_S390 around kvm_vcpu_wake_up
  KVM: x86: always use kvm_make_request instead of set_bit
  KVM: add kvm_{test,clear}_request to replace {test,clear}_bit
  s390: kvm: Cpu model support for msa6, msa7 and msa8
  KVM: x86: remove irq disablement around KVM_SET_CLOCK/KVM_GET_CLOCK
  kvm: better MWAIT emulation for guests
  KVM: x86: virtualize cpuid faulting
  ...

7 years agoMerge branch 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm
Linus Torvalds [Mon, 8 May 2017 19:32:00 +0000 (12:32 -0700)]
Merge branch 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm

Pull ARM updates from Russell King:
 "Lots of little things this time:

   - allow modules to be autoloaded according to the HWCAP feature bits
     (used primarily for crypto modules)

   - split module core and init PLT sections, since the core code and
     init code could be placed far apart, and the PLT sections need to
     be local to the code block.

   - three patches from Chris Brandt to allow Cortex-A9 L2 cache
     optimisations to be disabled where a SoC didn't wire up the out of
     band signals.

   - NoMMU compliance fixes, avoiding corruption of vector table which
     is not being used at this point, and avoiding possible register
     state corruption when switching mode.

   - fixmap memory attribute compliance update.

   - remove unnecessary locking from update_sections_early()

   - ftrace fix for DEBUG_RODATA with !FRAME_POINTER"

* 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm:
  ARM: 8672/1: mm: remove tasklist locking from update_sections_early()
  ARM: 8671/1: V7M: Preserve registers across switch from Thread to Handler mode
  ARM: 8670/1: V7M: Do not corrupt vector table around v7m_invalidate_l1 call
  ARM: 8668/1: ftrace: Fix dynamic ftrace with DEBUG_RODATA and !FRAME_POINTER
  ARM: 8667/3: Fix memory attribute inconsistencies when using fixmap
  ARM: 8663/1: wire up HWCAP/HWCAP2 feature bits to the CPU modalias
  ARM: 8666/1: mm: dump: Add domain to output
  ARM: 8662/1: module: split core and init PLT sections
  ARM: 8661/1: dts: r7s72100: add l2 cache
  ARM: 8660/1: shmobile: r7s72100: Enable L2 cache
  ARM: 8659/1: l2c: allow CA9 optimizations to be disabled

7 years agoMerge tag 'xtensa-20170507' of git://github.com/jcmvbkbc/linux-xtensa
Linus Torvalds [Mon, 8 May 2017 19:29:46 +0000 (12:29 -0700)]
Merge tag 'xtensa-20170507' of git://github.com/jcmvbkbc/linux-xtensa

Pull Xtensa updates from Max Filippov:

 - clearly mark references to spilled register locations with SPILL_SLOT
   macros

 - clean up xtensa ptrace: use generic tracehooks, move internal kernel
   definitions from uapi/asm to asm, make locally-used functions static,
   fix code style and alignment

 - use command line parameters passed to ISS as kernel command line.

* tag 'xtensa-20170507' of git://github.com/jcmvbkbc/linux-xtensa:
  xtensa: clean up access to spilled registers locations
  xtensa: use generic tracehooks
  xtensa: move internal ptrace definitions from uapi/asm to asm
  xtensa: clean up xtensa/kernel/ptrace.c
  xtensa: drop unused fast_io_protect function
  xtensa: use ITLB_HIT_BIT instead of hardcoded number
  xtensa: ISS: update kernel command line in platform_setup
  xtensa: ISS: add argc/argv simcall definitions
  xtensa: ISS: cleanup setup.c

7 years agoMerge tag 'for-f2fs-4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk...
Linus Torvalds [Mon, 8 May 2017 19:24:17 +0000 (12:24 -0700)]
Merge tag 'for-f2fs-4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs

Pull f2fs updates from Jaegeuk Kim:
 "In this round, we've focused on enhancing performance with regards to
  block allocation, GC, and discard/in-place-update IO controls. There
  are a bunch of clean-ups as well as minor bug fixes.

  Enhancements:
   - disable heap-based allocation by default
   - issue small-sized discard commands by default
   - change the policy of data hotness for logging
   - distinguish IOs in terms of size and wbc type
   - start SSR earlier to avoid foreground GC
   - enhance data structures managing discard commands
   - enhance in-place update flow
   - add some more fault injection routines
   - secure one more xattr entry

  Bug fixes:
   - calculate victim cost for GC correctly
   - remain correct victim segment number for GC
   - race condition in nid allocator and initializer
   - stale pointer produced by atomic_writes
   - fix missing REQ_SYNC for flush commands
   - handle missing errors in more corner cases"

* tag 'for-f2fs-4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (111 commits)
  f2fs: fix a mount fail for wrong next_scan_nid
  f2fs: enhance scalability of trace macro
  f2fs: relocate inode_{,un}lock in F2FS_IOC_SETFLAGS
  f2fs: Make flush bios explicitely sync
  f2fs: show available_nids in f2fs/status
  f2fs: flush dirty nats periodically
  f2fs: introduce CP_TRIMMED_FLAG to avoid unneeded discard
  f2fs: allow cpc->reason to indicate more than one reason
  f2fs: release cp and dnode lock before IPU
  f2fs: shrink size of struct discard_cmd
  f2fs: don't hold cmd_lock during waiting discard command
  f2fs: nullify fio->encrypted_page for each writes
  f2fs: sanity check segment count
  f2fs: introduce valid_ipu_blkaddr to clean up
  f2fs: lookup extent cache first under IPU scenario
  f2fs: reconstruct code to write a data page
  f2fs: introduce __wait_discard_cmd
  f2fs: introduce __issue_discard_cmd
  f2fs: enable small discard by default
  f2fs: delay awaking discard thread
  ...

7 years agoMerge branch 'stmmac-pci-fix-crash-on-Intel-Galileo-Gen2'
David S. Miller [Mon, 8 May 2017 19:15:03 +0000 (15:15 -0400)]
Merge branch 'stmmac-pci-fix-crash-on-Intel-Galileo-Gen2'

Andy Shevchenko says:

====================
stmmac: pci: Fix crash on Intel Galileo Gen2

Due to a misconfiguration of the PCI driver for Intel Quark, the user
will get a kernel crash:

udhcpc: started, v1.26.2
stmmaceth 0000:00:14.6 eth0: device MAC address 98:4f:ee:05:ac:47
Generic PHY stmmac-a6:01: attached PHY driver [Generic PHY] (mii_bus:phy_addr=stmmac-a6:01, irq=-1)
stmmaceth 0000:00:14.6 eth0: IEEE 1588-2008 Advanced Timestamp supported
stmmaceth 0000:00:14.6 eth0: registered PTP clock
IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready

udhcpc: sending discover

stmmaceth 0000:00:14.6 eth0: Link is Up - 100Mbps/Full - flow control off
IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
BUG: unable to handle kernel NULL pointer dereference at (null)
IP: stmmac_xmit+0xf1/0x1080

Fix this by adding necessary settings.

P.S. I split fix to three patches according to what each of them adds.
====================

Tested-by: Jan Kiszka <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
7 years agostmmac: pci: split out common_default_data() helper
Andy Shevchenko [Mon, 8 May 2017 14:14:22 +0000 (17:14 +0300)]
stmmac: pci: split out common_default_data() helper

A new helper is added in order to prevent the misconfiguration that
happened for one of the platforms when configuration data is expanded.

Signed-off-by: Andy Shevchenko <[email protected]>
Acked-by: Joao Pinto <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
7 years agostmmac: pci: RX queue routing configuration
Andy Shevchenko [Mon, 8 May 2017 14:14:21 +0000 (17:14 +0300)]
stmmac: pci: RX queue routing configuration

The commit abe80fdc6ee6

    ("net: stmmac: RX queue routing configuration")

missed Intel Quark configuration. Append it here.

Fixes: abe80fdc6ee6 ("net: stmmac: RX queue routing configuration")
Cc: Joao Pinto <[email protected]>
Signed-off-by: Andy Shevchenko <[email protected]>
Acked-by: Joao Pinto <[email protected]>
Signed-off-by: David S. Miller <[email protected]>