mm, slub: introduce static key for slub_debug()
author Vlastimil Babka <[email protected]>
Fri, 7 Aug 2020 06:18:51 +0000 (23:18 -0700)
committer Linus Torvalds <[email protected]>
Fri, 7 Aug 2020 18:33:22 +0000 (11:33 -0700)
One advantage of CONFIG_SLUB_DEBUG is that a generic distro kernel can be
built with the option enabled, while the debugging itself stays inactive
until it is enabled on boot, without rebuilding the kernel.  With a static
key, we can further eliminate the overhead of checking whether a cache has
a particular debug flag enabled if we know that there are no such caches
(slub_debug was not enabled during boot).  The same mechanism is already
used for e.g. page_owner, debug_pagealloc or kmemcg functionality.
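
For readers unfamiliar with the pattern, a minimal sketch of the generic
static key API is shown below (only the <linux/jump_label.h> calls are
real; the example_* names are made up for illustration):

#include <linux/jump_label.h>

/* The key starts out false; the guarded branch is patched out as a NOP. */
static DEFINE_STATIC_KEY_FALSE(example_debug_enabled);

static bool example_debug_active(unsigned long flags)
{
	/*
	 * While the key is false this test is a single NOP in the
	 * instruction stream and the flags check is never executed.
	 */
	if (static_branch_unlikely(&example_debug_enabled))
		return flags != 0;
	return false;
}

static void example_enable_from_boot_param(void)
{
	/* Flipping the key once (e.g. at boot) patches all branch sites. */
	static_branch_enable(&example_debug_enabled);
}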

This patch introduces the static key and makes kmem_cache_debug(), the
general check for per-cache debug flags, use it.  This benefits several
call sites, including the (slow path, but still rather frequent)
__slab_free().  The next patches will add more uses.
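
As a purely hypothetical illustration of the call-site shape (not the
actual __slab_free() code), a debug-only block guarded by the
kmem_cache_debug() helper from the diff below now costs only a patched-out
NOP when no cache has debugging enabled:

static void example_free_slow_path(struct kmem_cache *s, void *object)
{
	/*
	 * With the static key false, kmem_cache_debug() returns 0
	 * before s->flags is even read.
	 */
	if (kmem_cache_debug(s)) {
		/* consistency checks, poisoning, object tracking, ... */
	}
	/* the regular freeing work continues here */
}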

Signed-off-by: Vlastimil Babka <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Acked-by: Roman Gushchin <[email protected]>
Acked-by: Christoph Lameter <[email protected]>
Cc: Jann Horn <[email protected]>
Cc: Vijayanand Jitta <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Pekka Enberg <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
mm/slub.c

index 617cf1fff12854964445ae831231793098041c85..8adab4c5296d7884825faed829d945f46765c0a5 100644 (file)
--- a/mm/slub.c
+++ b/mm/slub.c
  *                     the fast path and disables lockless freelists.
  */
 
+#ifdef CONFIG_SLUB_DEBUG
+#ifdef CONFIG_SLUB_DEBUG_ON
+DEFINE_STATIC_KEY_TRUE(slub_debug_enabled);
+#else
+DEFINE_STATIC_KEY_FALSE(slub_debug_enabled);
+#endif
+#endif
+
 static inline int kmem_cache_debug(struct kmem_cache *s)
 {
 #ifdef CONFIG_SLUB_DEBUG
-       return unlikely(s->flags & SLAB_DEBUG_FLAGS);
-#else
-       return 0;
+       if (static_branch_unlikely(&slub_debug_enabled))
+               return s->flags & SLAB_DEBUG_FLAGS;
 #endif
+       return 0;
 }
 
 void *fixup_red_left(struct kmem_cache *s, void *p)
@@ -1389,6 +1397,8 @@ static int __init setup_slub_debug(char *str)
                slub_debug_string = saved_str;
        }
 out:
+       if (slub_debug != 0 || slub_debug_string)
+               static_branch_enable(&slub_debug_enabled);
        if ((static_branch_unlikely(&init_on_alloc) ||
             static_branch_unlikely(&init_on_free)) &&
            (slub_debug & SLAB_POISON))
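
For reference, a kernel booted with any slub_debug option (for example
slub_debug=FZ for sanity checks plus red zoning, or a per-cache
slub_debug=,<cache> string) goes through this path and enables the key;
without such a parameter, and without CONFIG_SLUB_DEBUG_ON, the key stays
false and every kmem_cache_debug() test remains a patched-out NOP.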