ARM: 7747/1: pcpu: ensure __my_cpu_offset cannot be re-ordered across barrier()
__my_cpu_offset is non-volatile, since we want its value to be cached when we
access several per-cpu variables in a row with preemption disabled. This means
that we rely on preempt_{en,dis}able to hazard with the operation via the
barrier() macro, so that we can't end up migrating CPUs without reloading the
per-cpu offset.

Unfortunately, GCC doesn't treat a "memory" clobber on a non-volatile asm block
as a side-effect, and will happily re-order it before other memory clobbers
(including those in preempt_disable()) and cache the value. This has been
observed to break the cmpxchg logic in the slub allocator, leading to livelock
in kmem_cache_alloc in mainline kernels.

This patch adds a dummy memory input operand to __my_cpu_offset, forcing it to
be ordered with respect to the barrier() macro.

Cc: <stable@vger.kernel.org>
Cc: Rob Herring <rob.herring@calxeda.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Parent: ced2a3b849
Commit: 509eb76ebf
@@ -30,8 +30,15 @@ static inline void set_my_cpu_offset(unsigned long off)
 static inline unsigned long __my_cpu_offset(void)
 {
         unsigned long off;
-        /* Read TPIDRPRW */
-        asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off) : : "memory");
+        register unsigned long *sp asm ("sp");
+
+        /*
+         * Read TPIDRPRW.
+         * We want to allow caching the value, so avoid using volatile and
+         * instead use a fake stack read to hazard against barrier().
+         */
+        asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off) : "Q" (*sp));
+
         return off;
 }
 #define __my_cpu_offset __my_cpu_offset()
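For illustration only, the sketch below (ordinary user-space C, not the kernel code above) shows the same trick in miniature: a non-volatile asm statement is given a dummy memory input operand so that a barrier()-style "memory" clobber orders it, while the compiler remains free to cache the result between barriers. The barrier() macro, fake_percpu_base variable and read_offset() helper are invented for the example and are not part of the patch.

/*
 * Stand-alone sketch of the ordering technique described in the commit
 * message.  All names here are made up for the example; this is not
 * the kernel implementation.
 */
#include <stdio.h>

#define barrier() asm volatile("" ::: "memory")

static unsigned long fake_percpu_base = 42;

static inline unsigned long read_offset(void)
{
        unsigned long off;

        /*
         * The empty asm simply passes fake_percpu_base through: the "0"
         * constraint ties the input to the output register.  The extra
         * "m" input is the dummy memory operand: it makes the asm depend
         * on memory state, so a "memory" clobber in barrier() acts as a
         * hazard and the value is re-read after it, analogous to the
         * "Q" (*sp) operand added by the patch above.
         */
        asm("" : "=r" (off)
               : "0" (fake_percpu_base), "m" (fake_percpu_base));
        return off;
}

int main(void)
{
        unsigned long a = read_offset();        /* may be merged ...     */
        unsigned long b = read_offset();        /* ... with this read    */

        barrier();                              /* models preempt_{en,dis}able() */

        unsigned long c = read_offset();        /* must be re-evaluated  */

        printf("%lu %lu %lu\n", a, b, c);
        return 0;
}

Because the asm is not volatile, the compiler may merge the two reads before the barrier into one; the read after the barrier, however, cannot be hoisted above it thanks to the dummy memory operand, which is the property the patch restores for __my_cpu_offset().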