author    Will Deacon <[email protected]>
          Wed, 5 Jun 2013 10:20:33 +0000 (11:20 +0100)
committer Russell King <[email protected]>
          Wed, 5 Jun 2013 22:35:56 +0000 (23:35 +0100)
commit    509eb76ebf9771abc9fe51859382df2571f11447
tree      a5745368df4dbe458dc4de5fa63778e9b8cda2cc
parent    ced2a3b84965f1be8b6a142d6029faf241f109af
ARM: 7747/1: pcpu: ensure __my_cpu_offset cannot be re-ordered across barrier()

__my_cpu_offset is non-volatile, since we want its value to be cached
when we access several per-cpu variables in a row with preemption
disabled. This means that we rely on preempt_{en,dis}able to hazard
with the operation via the barrier() macro, so that we can't end up
migrating CPUs without reloading the per-cpu offset.

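For illustration, a minimal sketch of the hazard being relied on, simplified
from the kernel's actual definitions (the preempt-count bookkeeping is
elided here):

  /* barrier() is a compiler-level memory clobber. */
  #define barrier()	__asm__ __volatile__("" : : : "memory")

  /*
   * preempt_disable()/preempt_enable() each contain a barrier(), so a
   * compiler that honours the clobber must re-read any cached per-cpu
   * offset once the preemption state has changed.
   */
  #define preempt_disable()				\
  do {							\
  	/* ... increment the preempt count ... */	\
  	barrier();					\
  } while (0)

  #define preempt_enable()				\
  do {							\
  	barrier();					\
  	/* ... decrement the preempt count ... */	\
  } while (0)
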
Unfortunately, GCC doesn't treat a "memory" clobber on a non-volatile
asm block as a side-effect, and will happily re-order it before other
memory clobbers (including those in preempt_disable()) and cache the
value. This has been observed to break the cmpxchg logic in the slub
allocator, leading to livelock in kmem_cache_alloc in mainline kernels.

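As an illustration of the problematic pattern (a sketch of the pre-patch
shape, not necessarily a verbatim copy of it), the offset is read from
TPIDRPRW with a non-volatile asm that only names "memory" in its clobber
list:

  static inline unsigned long __my_cpu_offset(void)
  {
  	unsigned long off;

  	/*
  	 * Read TPIDRPRW. The asm is deliberately non-volatile so the
  	 * result can be cached, but GCC may also CSE/hoist it across
  	 * barrier(), since the "memory" clobber alone is not treated
  	 * as a side-effect.
  	 */
  	asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off) : : "memory");

  	return off;
  }
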
This patch adds a dummy memory input operand to __my_cpu_offset,
forcing it to be ordered with respect to the barrier() macro.

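The change is along the following lines (a sketch of the technique rather
than the literal diff): a register variable pinned to the stack pointer
provides a dummy "Q" memory input operand, which the compiler must order
against other memory clobbers such as barrier():

  static inline unsigned long __my_cpu_offset(void)
  {
  	unsigned long off;
  	register unsigned long *sp asm ("sp");

  	/*
  	 * Read TPIDRPRW. We still avoid volatile so the value can be
  	 * cached, and instead use a fake stack read ("Q" (*sp)) to
  	 * hazard against barrier().
  	 */
  	asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off) : "Q" (*sp));

  	return off;
  }
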
Cc: <[email protected]>
Cc: Rob Herring <[email protected]>
Reviewed-by: Nicolas Pitre <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Russell King <[email protected]>
arch/arm/include/asm/percpu.h