Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into next

Pull core locking updates from Ingo Molnar:
 "The main changes in this cycle were:

   - reduced/streamlined smp_mb__*() interface that allows more usecases
     and makes the existing ones less buggy, especially in rarer
     architectures

   - add rwsem implementation comments

   - bump up lockdep limits"

* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (33 commits)
  rwsem: Add comments to explain the meaning of the rwsem's count field
  lockdep: Increase static allocations
  arch: Mass conversion of smp_mb__*()
  arch,doc: Convert smp_mb__*()
  arch,xtensa: Convert smp_mb__*()
  arch,x86: Convert smp_mb__*()
  arch,tile: Convert smp_mb__*()
  arch,sparc: Convert smp_mb__*()
  arch,sh: Convert smp_mb__*()
  arch,score: Convert smp_mb__*()
  arch,s390: Convert smp_mb__*()
  arch,powerpc: Convert smp_mb__*()
  arch,parisc: Convert smp_mb__*()
  arch,openrisc: Convert smp_mb__*()
  arch,mn10300: Convert smp_mb__*()
  arch,mips: Convert smp_mb__*()
  arch,metag: Convert smp_mb__*()
  arch,m68k: Convert smp_mb__*()
  arch,m32r: Convert smp_mb__*()
  arch,ia64: Convert smp_mb__*()
  ...
Committed by Linus Torvalds on 2014-06-03 12:57:53 -07:00
Parents: 59a3d4c363 3cf2f34e1a
Commit: 776edb5931
185 changed files: 549 additions and 650 deletions


@ -285,15 +285,13 @@ If a caller requires memory barrier semantics around an atomic_t
operation which does not return a value, a set of interfaces are
defined which accomplish this:
void smp_mb__before_atomic_dec(void);
void smp_mb__after_atomic_dec(void);
void smp_mb__before_atomic_inc(void);
void smp_mb__after_atomic_inc(void);
void smp_mb__before_atomic(void);
void smp_mb__after_atomic(void);
For example, smp_mb__before_atomic_dec() can be used like so:
For example, smp_mb__before_atomic() can be used like so:
obj->dead = 1;
smp_mb__before_atomic_dec();
smp_mb__before_atomic();
atomic_dec(&obj->ref_count);
It makes sure that all memory operations preceding the atomic_dec()
@ -302,15 +300,10 @@ operation. In the above example, it guarantees that the assignment of
"1" to obj->dead will be globally visible to other cpus before the
atomic counter decrement.
Without the explicit smp_mb__before_atomic_dec() call, the
Without the explicit smp_mb__before_atomic() call, the
implementation could legally allow the atomic counter update visible
to other cpus before the "obj->dead = 1;" assignment.
The other three interfaces listed are used to provide explicit
ordering with respect to memory operations after an atomic_dec() call
(smp_mb__after_atomic_dec()) and around atomic_inc() calls
(smp_mb__{before,after}_atomic_inc()).
A missing memory barrier in the cases where they are required by the
atomic_t implementation above can have disastrous results. Here is
an example, which follows a pattern occurring frequently in the Linux
@ -487,12 +480,12 @@ Finally there is the basic operation:
Which returns a boolean indicating if bit "nr" is set in the bitmask
pointed to by "addr".
If explicit memory barriers are required around clear_bit() (which
does not return a value, and thus does not need to provide memory
barrier semantics), two interfaces are provided:
If explicit memory barriers are required around {set,clear}_bit() (which do
not return a value, and thus do not need to provide memory barrier
semantics), two interfaces are provided:
void smp_mb__before_clear_bit(void);
void smp_mb__after_clear_bit(void);
void smp_mb__before_atomic(void);
void smp_mb__after_atomic(void);
They are used as follows, and are akin to their atomic_t operation
brothers:
@ -500,13 +493,13 @@ brothers:
/* All memory operations before this call will
* be globally visible before the clear_bit().
*/
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit( ... );
/* The clear_bit() will be visible before all
* subsequent memory operations.
*/
smp_mb__after_clear_bit();
smp_mb__after_atomic();
There are two special bitops with lock barrier semantics (acquire/release,
same as spinlocks). These operate in the same way as their non-_lock/unlock
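
The two renamed barriers combine with the examples above into a single
pattern. A minimal sketch, assuming a hypothetical struct obj carrying the
dead flag and ref_count fields from the example text:

	#include <linux/atomic.h>

	struct obj {
		int dead;
		atomic_t ref_count;
	};

	static void obj_mark_dead(struct obj *obj)
	{
		obj->dead = 1;
		/* make the store to obj->dead visible before the decrement */
		smp_mb__before_atomic();
		atomic_dec(&obj->ref_count);
		/* make the decrement visible before later memory operations */
		smp_mb__after_atomic();
	}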


@ -1583,20 +1583,21 @@ There are some more advanced barrier functions:
insert anything more than a compiler barrier in a UP compilation.
(*) smp_mb__before_atomic_dec();
(*) smp_mb__after_atomic_dec();
(*) smp_mb__before_atomic_inc();
(*) smp_mb__after_atomic_inc();
(*) smp_mb__before_atomic();
(*) smp_mb__after_atomic();
These are for use with atomic add, subtract, increment and decrement
functions that don't return a value, especially when used for reference
counting. These functions do not imply memory barriers.
These are for use with atomic (such as add, subtract, increment and
decrement) functions that don't return a value, especially when used for
reference counting. These functions do not imply memory barriers.
These are also used for atomic bitop functions that do not return a
value (such as set_bit and clear_bit).
As an example, consider a piece of code that marks an object as being dead
and then decrements the object's reference count:
obj->dead = 1;
smp_mb__before_atomic_dec();
smp_mb__before_atomic();
atomic_dec(&obj->ref_count);
This makes sure that the death mark on the object is perceived to be set
@ -1606,27 +1607,6 @@ There are some more advanced barrier functions:
operations" subsection for information on where to use these.
(*) smp_mb__before_clear_bit(void);
(*) smp_mb__after_clear_bit(void);
These are for use similar to the atomic inc/dec barriers. These are
typically used for bitwise unlocking operations, so care must be taken as
there are no implicit memory barriers here either.
Consider implementing an unlock operation of some nature by clearing a
locking bit. The clear_bit() would then need to be barriered like this:
smp_mb__before_clear_bit();
clear_bit( ... );
This prevents memory operations before the clear leaking to after it. See
the subsection on "Locking Functions" with reference to RELEASE operation
implications.
See Documentation/atomic_ops.txt for more information. See the "Atomic
operations" subsection for information on where to use these.
MMIO WRITE BARRIER
------------------
@ -2283,11 +2263,11 @@ operations:
change_bit();
With these the appropriate explicit memory barrier should be used if necessary
(smp_mb__before_clear_bit() for instance).
(smp_mb__before_atomic() for instance).
The following also do _not_ imply memory barriers, and so may require explicit
memory barriers under some circumstances (smp_mb__before_atomic_dec() for
memory barriers under some circumstances (smp_mb__before_atomic() for
instance):
atomic_add();
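
The bitwise-unlock guidance from the deleted smp_mb__{before,after}_clear_bit()
text carries over unchanged to the new names. A sketch, assuming a
hypothetical lock bit held in some flags word:

	#include <linux/bitops.h>

	/* release a hypothetical bit lock; bit 0 of *flags is illustrative */
	static void example_bit_unlock(unsigned long *flags)
	{
		/* prevent memory operations before the clear leaking past it */
		smp_mb__before_atomic();
		clear_bit(0, flags);
	}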


@ -292,9 +292,4 @@ static inline long atomic64_dec_if_positive(atomic64_t *v)
#define atomic_dec(v) atomic_sub(1,(v))
#define atomic64_dec(v) atomic64_sub(1,(v))
#define smp_mb__before_atomic_dec() smp_mb()
#define smp_mb__after_atomic_dec() smp_mb()
#define smp_mb__before_atomic_inc() smp_mb()
#define smp_mb__after_atomic_inc() smp_mb()
#endif /* _ALPHA_ATOMIC_H */
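
Every architecture in this series follows the shape visible here: the four
smp_mb__{before,after}_atomic_{dec,inc}() macros (and, in bitops.h, the
smp_mb__{before,after}_clear_bit() pair) are deleted, and one pair is defined
centrally in asm/barrier.h. Alpha's barrier.h hunk is not shown in this
truncated diff; judging by the smp_mb()-strength macros removed above, its
pair would presumably take the full-barrier form used by the other
strongly-ordered ports below:

	#define smp_mb__before_atomic()	smp_mb()
	#define smp_mb__after_atomic()	smp_mb()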


@ -53,9 +53,6 @@ __set_bit(unsigned long nr, volatile void * addr)
*m |= 1 << (nr & 31);
}
#define smp_mb__before_clear_bit() smp_mb()
#define smp_mb__after_clear_bit() smp_mb()
static inline void
clear_bit(unsigned long nr, volatile void * addr)
{


@ -190,11 +190,6 @@ static inline void atomic_clear_mask(unsigned long mask, unsigned long *addr)
#endif /* !CONFIG_ARC_HAS_LLSC */
#define smp_mb__before_atomic_dec() barrier()
#define smp_mb__after_atomic_dec() barrier()
#define smp_mb__before_atomic_inc() barrier()
#define smp_mb__after_atomic_inc() barrier()
/**
* __atomic_add_unless - add unless the number is a given value
* @v: pointer of type atomic_t


@ -19,6 +19,7 @@
#include <linux/types.h>
#include <linux/compiler.h>
#include <asm/barrier.h>
/*
* Hardware assisted read-modify-write using ARC700 LLOCK/SCOND insns.
@ -496,10 +497,6 @@ static inline __attribute__ ((const)) int __ffs(unsigned long word)
*/
#define ffz(x) __ffs(~(x))
/* TODO does this affect uni-processor code */
#define smp_mb__before_clear_bit() barrier()
#define smp_mb__after_clear_bit() barrier()
#include <asm-generic/bitops/hweight.h>
#include <asm-generic/bitops/fls64.h>
#include <asm-generic/bitops/sched.h>


@ -241,11 +241,6 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u)
#define atomic_add_negative(i,v) (atomic_add_return(i, v) < 0)
#define smp_mb__before_atomic_dec() smp_mb()
#define smp_mb__after_atomic_dec() smp_mb()
#define smp_mb__before_atomic_inc() smp_mb()
#define smp_mb__after_atomic_inc() smp_mb()
#ifndef CONFIG_GENERIC_ATOMIC64
typedef struct {
long long counter;


@ -79,5 +79,8 @@ do { \
#define set_mb(var, value) do { var = value; smp_mb(); } while (0)
#define smp_mb__before_atomic() smp_mb()
#define smp_mb__after_atomic() smp_mb()
#endif /* !__ASSEMBLY__ */
#endif /* __ASM_BARRIER_H */


@ -25,9 +25,7 @@
#include <linux/compiler.h>
#include <linux/irqflags.h>
#define smp_mb__before_clear_bit() smp_mb()
#define smp_mb__after_clear_bit() smp_mb()
#include <asm/barrier.h>
/*
* These functions are the basis of our bit ops.


@ -152,11 +152,6 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u)
#define atomic_add_negative(i,v) (atomic_add_return(i, v) < 0)
#define smp_mb__before_atomic_dec() smp_mb()
#define smp_mb__after_atomic_dec() smp_mb()
#define smp_mb__before_atomic_inc() smp_mb()
#define smp_mb__after_atomic_inc() smp_mb()
/*
* 64-bit atomic operations.
*/


@ -98,6 +98,9 @@ do { \
#define set_mb(var, value) do { var = value; smp_mb(); } while (0)
#define nop() asm volatile("nop");
#define smp_mb__before_atomic() smp_mb()
#define smp_mb__after_atomic() smp_mb()
#endif /* __ASSEMBLY__ */
#endif /* __ASM_BARRIER_H */


@ -17,17 +17,8 @@
#define __ASM_BITOPS_H
#include <linux/compiler.h>
#include <asm/barrier.h>
/*
* clear_bit may not imply a memory barrier
*/
#ifndef smp_mb__before_clear_bit
#define smp_mb__before_clear_bit() smp_mb()
#define smp_mb__after_clear_bit() smp_mb()
#endif
#ifndef _LINUX_BITOPS_H
#error only <linux/bitops.h> can be included directly
#endif


@ -183,9 +183,4 @@ static inline int atomic_sub_if_positive(int i, atomic_t *v)
#define atomic_dec_if_positive(v) atomic_sub_if_positive(1, v)
#define smp_mb__before_atomic_dec() barrier()
#define smp_mb__after_atomic_dec() barrier()
#define smp_mb__before_atomic_inc() barrier()
#define smp_mb__after_atomic_inc() barrier()
#endif /* __ASM_AVR32_ATOMIC_H */


@ -13,12 +13,7 @@
#endif
#include <asm/byteorder.h>
/*
* clear_bit() doesn't provide any barrier for the compiler
*/
#define smp_mb__before_clear_bit() barrier()
#define smp_mb__after_clear_bit() barrier()
#include <asm/barrier.h>
/*
* set_bit - Atomically set a bit in memory
@ -67,7 +62,7 @@ static inline void set_bit(int nr, volatile void * addr)
*
* clear_bit() is atomic and may not be reordered. However, it does
* not contain a memory barrier, so if it is used for locking purposes,
* you should call smp_mb__before_clear_bit() and/or smp_mb__after_clear_bit()
* you should call smp_mb__before_atomic() and/or smp_mb__after_atomic()
* in order to ensure changes are visible on other processors.
*/
static inline void clear_bit(int nr, volatile void * addr)


@ -27,6 +27,9 @@
#endif /* !CONFIG_SMP */
#define smp_mb__before_atomic() barrier()
#define smp_mb__after_atomic() barrier()
#include <asm-generic/barrier.h>
#endif /* _BLACKFIN_BARRIER_H */


@ -27,21 +27,17 @@
#include <asm-generic/bitops/ext2-atomic.h>
#include <asm/barrier.h>
#ifndef CONFIG_SMP
#include <linux/irqflags.h>
/*
* clear_bit may not imply a memory barrier
*/
#ifndef smp_mb__before_clear_bit
#define smp_mb__before_clear_bit() smp_mb()
#define smp_mb__after_clear_bit() smp_mb()
#endif
#include <asm-generic/bitops/atomic.h>
#include <asm-generic/bitops/non-atomic.h>
#else
#include <asm/barrier.h>
#include <asm/byteorder.h> /* swab32 */
#include <linux/linkage.h>
@ -101,12 +97,6 @@ static inline int test_and_change_bit(int nr, volatile unsigned long *addr)
return __raw_bit_test_toggle_asm(a, nr & 0x1f);
}
/*
* clear_bit() doesn't provide any barrier for the compiler.
*/
#define smp_mb__before_clear_bit() barrier()
#define smp_mb__after_clear_bit() barrier()
#define test_bit __skip_test_bit
#include <asm-generic/bitops/non-atomic.h>
#undef test_bit


@ -14,14 +14,8 @@
#ifdef __KERNEL__
#include <linux/bitops.h>
#include <asm/byteorder.h>
/*
* clear_bit() doesn't provide any barrier for the compiler.
*/
#define smp_mb__before_clear_bit() barrier()
#define smp_mb__after_clear_bit() barrier()
#include <asm/barrier.h>
/*
* We are lucky, DSP is perfect for bitops: do it in 3 cycles


@ -7,6 +7,8 @@
#include <linux/types.h>
#include <asm/cmpxchg.h>
#include <arch/atomic.h>
#include <arch/system.h>
#include <asm/barrier.h>
/*
* Atomic operations that C can't guarantee us. Useful for
@ -151,10 +153,4 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u)
return ret;
}
/* Atomic operations are already serializing */
#define smp_mb__before_atomic_dec() barrier()
#define smp_mb__after_atomic_dec() barrier()
#define smp_mb__before_atomic_inc() barrier()
#define smp_mb__after_atomic_inc() barrier()
#endif


@ -21,6 +21,7 @@
#include <arch/bitops.h>
#include <linux/atomic.h>
#include <linux/compiler.h>
#include <asm/barrier.h>
/*
* set_bit - Atomically set a bit in memory
@ -42,7 +43,7 @@
*
* clear_bit() is atomic and may not be reordered. However, it does
* not contain a memory barrier, so if it is used for locking purposes,
* you should call smp_mb__before_clear_bit() and/or smp_mb__after_clear_bit()
* you should call smp_mb__before_atomic() and/or smp_mb__after_atomic()
* in order to ensure changes are visible on other processors.
*/
@ -84,12 +85,6 @@ static inline int test_and_set_bit(int nr, volatile unsigned long *addr)
return retval;
}
/*
* clear_bit() doesn't provide any barrier for the compiler.
*/
#define smp_mb__before_clear_bit() barrier()
#define smp_mb__after_clear_bit() barrier()
/**
* test_and_clear_bit - Clear a bit and return its old value
* @nr: Bit to clear


@ -17,6 +17,7 @@
#include <linux/types.h>
#include <asm/spr-regs.h>
#include <asm/cmpxchg.h>
#include <asm/barrier.h>
#ifdef CONFIG_SMP
#error not SMP safe
@ -29,12 +30,6 @@
* We do not have SMP systems, so we don't have to deal with that.
*/
/* Atomic operations are already serializing */
#define smp_mb__before_atomic_dec() barrier()
#define smp_mb__after_atomic_dec() barrier()
#define smp_mb__before_atomic_inc() barrier()
#define smp_mb__after_atomic_inc() barrier()
#define ATOMIC_INIT(i) { (i) }
#define atomic_read(v) (*(volatile int *)&(v)->counter)
#define atomic_set(v, i) (((v)->counter) = (i))


@ -25,12 +25,6 @@
#include <asm-generic/bitops/ffz.h>
/*
* clear_bit() doesn't provide any barrier for the compiler.
*/
#define smp_mb__before_clear_bit() barrier()
#define smp_mb__after_clear_bit() barrier()
#ifndef CONFIG_FRV_OUTOFLINE_ATOMIC_OPS
static inline
unsigned long atomic_test_and_ANDNOT_mask(unsigned long mask, volatile unsigned long *v)


@ -24,6 +24,7 @@
#include <linux/types.h>
#include <asm/cmpxchg.h>
#include <asm/barrier.h>
#define ATOMIC_INIT(i) { (i) }
@ -176,9 +177,4 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u)
#define atomic_inc_return(v) (atomic_add_return(1, v))
#define atomic_dec_return(v) (atomic_sub_return(1, v))
#define smp_mb__before_atomic_dec() barrier()
#define smp_mb__after_atomic_dec() barrier()
#define smp_mb__before_atomic_inc() barrier()
#define smp_mb__after_atomic_inc() barrier()
#endif


@ -25,12 +25,10 @@
#include <linux/compiler.h>
#include <asm/byteorder.h>
#include <asm/atomic.h>
#include <asm/barrier.h>
#ifdef __KERNEL__
#define smp_mb__before_clear_bit() barrier()
#define smp_mb__after_clear_bit() barrier()
/*
* The offset calculations for these are based on BITS_PER_LONG == 32
* (i.e. I get to shift by #5-2 (32 bits per long, 4 bytes per access),


@ -15,6 +15,7 @@
#include <linux/types.h>
#include <asm/intrinsics.h>
#include <asm/barrier.h>
#define ATOMIC_INIT(i) { (i) }
@ -208,10 +209,4 @@ atomic64_add_negative (__s64 i, atomic64_t *v)
#define atomic64_inc(v) atomic64_add(1, (v))
#define atomic64_dec(v) atomic64_sub(1, (v))
/* Atomic operations are already serializing */
#define smp_mb__before_atomic_dec() barrier()
#define smp_mb__after_atomic_dec() barrier()
#define smp_mb__before_atomic_inc() barrier()
#define smp_mb__after_atomic_inc() barrier()
#endif /* _ASM_IA64_ATOMIC_H */


@ -55,6 +55,9 @@
#endif
#define smp_mb__before_atomic() barrier()
#define smp_mb__after_atomic() barrier()
/*
* IA64 GCC turns volatile stores into st.rel and volatile loads into ld.acq no
* need for asm trickery!


@ -16,6 +16,7 @@
#include <linux/compiler.h>
#include <linux/types.h>
#include <asm/intrinsics.h>
#include <asm/barrier.h>
/**
* set_bit - Atomically set a bit in memory
@ -65,12 +66,6 @@ __set_bit (int nr, volatile void *addr)
*((__u32 *) addr + (nr >> 5)) |= (1 << (nr & 31));
}
/*
* clear_bit() has "acquire" semantics.
*/
#define smp_mb__before_clear_bit() smp_mb()
#define smp_mb__after_clear_bit() do { /* skip */; } while (0)
/**
* clear_bit - Clears a bit in memory
* @nr: Bit to clear
@ -78,7 +73,7 @@ __set_bit (int nr, volatile void *addr)
*
* clear_bit() is atomic and may not be reordered. However, it does
* not contain a memory barrier, so if it is used for locking purposes,
* you should call smp_mb__before_clear_bit() and/or smp_mb__after_clear_bit()
* you should call smp_mb__before_atomic() and/or smp_mb__after_atomic()
* in order to ensure changes are visible on other processors.
*/
static __inline__ void


@ -118,6 +118,15 @@ extern long ia64_cmpxchg_called_with_bad_pointer(void);
#define cmpxchg_rel(ptr, o, n) \
ia64_cmpxchg(rel, (ptr), (o), (n), sizeof(*(ptr)))
/*
* Worse still - early processor implementations actually just ignored
* the acquire/release and did a full fence all the time. Unfortunately
* this meant a lot of badly written code that used .acq when they really
* wanted .rel became legacy out in the wild - so when we made a cpu
* that strictly did the .acq or .rel ... all that code started breaking - so
* we had to back-pedal and keep the "legacy" behavior of a full fence :-(
*/
/* for compatibility with other platforms: */
#define cmpxchg(ptr, o, n) cmpxchg_acq((ptr), (o), (n))
#define cmpxchg64(ptr, o, n) cmpxchg_acq((ptr), (o), (n))


@ -13,6 +13,7 @@
#include <asm/assembler.h>
#include <asm/cmpxchg.h>
#include <asm/dcache_clear.h>
#include <asm/barrier.h>
/*
* Atomic operations that C can't guarantee us. Useful for
@ -308,10 +309,4 @@ static __inline__ void atomic_set_mask(unsigned long mask, atomic_t *addr)
local_irq_restore(flags);
}
/* Atomic operations are already serializing on m32r */
#define smp_mb__before_atomic_dec() barrier()
#define smp_mb__after_atomic_dec() barrier()
#define smp_mb__before_atomic_inc() barrier()
#define smp_mb__after_atomic_inc() barrier()
#endif /* _ASM_M32R_ATOMIC_H */


@ -21,6 +21,7 @@
#include <asm/byteorder.h>
#include <asm/dcache_clear.h>
#include <asm/types.h>
#include <asm/barrier.h>
/*
* These have to be done with inline assembly: that way the bit-setting
@ -73,7 +74,7 @@ static __inline__ void set_bit(int nr, volatile void * addr)
*
* clear_bit() is atomic and may not be reordered. However, it does
* not contain a memory barrier, so if it is used for locking purposes,
* you should call smp_mb__before_clear_bit() and/or smp_mb__after_clear_bit()
* you should call smp_mb__before_atomic() and/or smp_mb__after_atomic()
* in order to ensure changes are visible on other processors.
*/
static __inline__ void clear_bit(int nr, volatile void * addr)
@ -103,9 +104,6 @@ static __inline__ void clear_bit(int nr, volatile void * addr)
local_irq_restore(flags);
}
#define smp_mb__before_clear_bit() barrier()
#define smp_mb__after_clear_bit() barrier()
/**
* change_bit - Toggle a bit in memory
* @nr: Bit to clear


@ -4,6 +4,7 @@
#include <linux/types.h>
#include <linux/irqflags.h>
#include <asm/cmpxchg.h>
#include <asm/barrier.h>
/*
* Atomic operations that C can't guarantee us. Useful for
@ -209,11 +210,4 @@ static __inline__ int __atomic_add_unless(atomic_t *v, int a, int u)
return c;
}
/* Atomic operations are already serializing */
#define smp_mb__before_atomic_dec() barrier()
#define smp_mb__after_atomic_dec() barrier()
#define smp_mb__before_atomic_inc() barrier()
#define smp_mb__after_atomic_inc() barrier()
#endif /* __ARCH_M68K_ATOMIC __ */


@ -13,6 +13,7 @@
#endif
#include <linux/compiler.h>
#include <asm/barrier.h>
/*
* Bit access functions vary across the ColdFire and 68k families.
@ -67,12 +68,6 @@ static inline void bfset_mem_set_bit(int nr, volatile unsigned long *vaddr)
#define __set_bit(nr, vaddr) set_bit(nr, vaddr)
/*
* clear_bit() doesn't provide any barrier for the compiler.
*/
#define smp_mb__before_clear_bit() barrier()
#define smp_mb__after_clear_bit() barrier()
static inline void bclr_reg_clear_bit(int nr, volatile unsigned long *vaddr)
{
char *p = (char *)vaddr + (nr ^ 31) / 8;


@ -4,6 +4,7 @@
#include <linux/compiler.h>
#include <linux/types.h>
#include <asm/cmpxchg.h>
#include <asm/barrier.h>
#if defined(CONFIG_METAG_ATOMICITY_IRQSOFF)
/* The simple UP case. */
@ -39,11 +40,6 @@
#define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0)
#define smp_mb__before_atomic_dec() barrier()
#define smp_mb__after_atomic_dec() barrier()
#define smp_mb__before_atomic_inc() barrier()
#define smp_mb__after_atomic_inc() barrier()
#endif
#define atomic_dec_if_positive(v) atomic_sub_if_positive(1, v)


@ -100,4 +100,7 @@ do { \
___p1; \
})
#define smp_mb__before_atomic() barrier()
#define smp_mb__after_atomic() barrier()
#endif /* _ASM_METAG_BARRIER_H */


@ -5,12 +5,6 @@
#include <asm/barrier.h>
#include <asm/global_lock.h>
/*
* clear_bit() doesn't provide any barrier for the compiler.
*/
#define smp_mb__before_clear_bit() barrier()
#define smp_mb__after_clear_bit() barrier()
#ifdef CONFIG_SMP
/*
* These functions are the basis of our bit ops.


@ -761,13 +761,4 @@ static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u)
#endif /* CONFIG_64BIT */
/*
* atomic*_return operations are serializing but not the non-*_return
* versions.
*/
#define smp_mb__before_atomic_dec() smp_mb__before_llsc()
#define smp_mb__after_atomic_dec() smp_llsc_mb()
#define smp_mb__before_atomic_inc() smp_mb__before_llsc()
#define smp_mb__after_atomic_inc() smp_llsc_mb()
#endif /* _ASM_ATOMIC_H */


@ -195,4 +195,7 @@ do { \
___p1; \
})
#define smp_mb__before_atomic() smp_mb__before_llsc()
#define smp_mb__after_atomic() smp_llsc_mb()
#endif /* __ASM_BARRIER_H */


@ -37,13 +37,6 @@
#define __EXT "dext "
#endif
/*
* clear_bit() doesn't provide any barrier for the compiler.
*/
#define smp_mb__before_clear_bit() smp_mb__before_llsc()
#define smp_mb__after_clear_bit() smp_llsc_mb()
/*
* These are the "slower" versions of the functions and are in bitops.c.
* These functions call raw_local_irq_{save,restore}().
@ -120,7 +113,7 @@ static inline void set_bit(unsigned long nr, volatile unsigned long *addr)
*
* clear_bit() is atomic and may not be reordered. However, it does
* not contain a memory barrier, so if it is used for locking purposes,
* you should call smp_mb__before_clear_bit() and/or smp_mb__after_clear_bit()
* you should call smp_mb__before_atomic() and/or smp_mb__after_atomic()
* in order to ensure changes are visible on other processors.
*/
static inline void clear_bit(unsigned long nr, volatile unsigned long *addr)
@ -175,7 +168,7 @@ static inline void clear_bit(unsigned long nr, volatile unsigned long *addr)
*/
static inline void clear_bit_unlock(unsigned long nr, volatile unsigned long *addr)
{
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit(nr, addr);
}


@ -62,9 +62,9 @@ void __init alloc_legacy_irqno(void)
void free_irqno(unsigned int irq)
{
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit(irq, irq_map);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
}
/*


@ -13,6 +13,7 @@
#include <asm/irqflags.h>
#include <asm/cmpxchg.h>
#include <asm/barrier.h>
#ifndef CONFIG_SMP
#include <asm-generic/atomic.h>
@ -234,12 +235,6 @@ static inline void atomic_set_mask(unsigned long mask, unsigned long *addr)
#endif
}
/* Atomic operations are already serializing on MN10300??? */
#define smp_mb__before_atomic_dec() barrier()
#define smp_mb__after_atomic_dec() barrier()
#define smp_mb__before_atomic_inc() barrier()
#define smp_mb__after_atomic_inc() barrier()
#endif /* __KERNEL__ */
#endif /* CONFIG_SMP */
#endif /* _ASM_ATOMIC_H */


@ -18,9 +18,7 @@
#define __ASM_BITOPS_H
#include <asm/cpu-regs.h>
#define smp_mb__before_clear_bit() barrier()
#define smp_mb__after_clear_bit() barrier()
#include <asm/barrier.h>
/*
* set bit


@ -78,9 +78,9 @@ void smp_flush_tlb(void *unused)
else
local_flush_tlb_page(flush_mm, flush_va);
smp_mb__before_clear_bit();
smp_mb__before_atomic();
cpumask_clear_cpu(cpu_id, &flush_cpumask);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
out:
put_cpu();
}


@ -27,14 +27,7 @@
#include <linux/irqflags.h>
#include <linux/compiler.h>
/*
* clear_bit may not imply a memory barrier
*/
#ifndef smp_mb__before_clear_bit
#define smp_mb__before_clear_bit() smp_mb()
#define smp_mb__after_clear_bit() smp_mb()
#endif
#include <asm/barrier.h>
#include <asm/bitops/__ffs.h>
#include <asm-generic/bitops/ffz.h>


@ -7,6 +7,7 @@
#include <linux/types.h>
#include <asm/cmpxchg.h>
#include <asm/barrier.h>
/*
* Atomic operations that C can't guarantee us. Useful for
@ -143,11 +144,6 @@ static __inline__ int __atomic_add_unless(atomic_t *v, int a, int u)
#define ATOMIC_INIT(i) { (i) }
#define smp_mb__before_atomic_dec() smp_mb()
#define smp_mb__after_atomic_dec() smp_mb()
#define smp_mb__before_atomic_inc() smp_mb()
#define smp_mb__after_atomic_inc() smp_mb()
#ifdef CONFIG_64BIT
#define ATOMIC64_INIT(i) { (i) }


@ -8,6 +8,7 @@
#include <linux/compiler.h>
#include <asm/types.h> /* for BITS_PER_LONG/SHIFT_PER_LONG */
#include <asm/byteorder.h>
#include <asm/barrier.h>
#include <linux/atomic.h>
/*
@ -19,9 +20,6 @@
#define CHOP_SHIFTCOUNT(x) (((unsigned long) (x)) & (BITS_PER_LONG - 1))
#define smp_mb__before_clear_bit() smp_mb()
#define smp_mb__after_clear_bit() smp_mb()
/* See http://marc.theaimsgroup.com/?t=108826637900003 for discussion
* on use of volatile and __*_bit() (set/clear/change):
* *_bit() want use of volatile.


@ -8,6 +8,7 @@
#ifdef __KERNEL__
#include <linux/types.h>
#include <asm/cmpxchg.h>
#include <asm/barrier.h>
#define ATOMIC_INIT(i) { (i) }
@ -270,11 +271,6 @@ static __inline__ int atomic_dec_if_positive(atomic_t *v)
}
#define atomic_dec_if_positive atomic_dec_if_positive
#define smp_mb__before_atomic_dec() smp_mb()
#define smp_mb__after_atomic_dec() smp_mb()
#define smp_mb__before_atomic_inc() smp_mb()
#define smp_mb__after_atomic_inc() smp_mb()
#ifdef __powerpc64__
#define ATOMIC64_INIT(i) { (i) }


@ -84,4 +84,7 @@ do { \
___p1; \
})
#define smp_mb__before_atomic() smp_mb()
#define smp_mb__after_atomic() smp_mb()
#endif /* _ASM_POWERPC_BARRIER_H */


@ -51,11 +51,7 @@
#define PPC_BIT(bit) (1UL << PPC_BITLSHIFT(bit))
#define PPC_BITMASK(bs, be) ((PPC_BIT(bs) - PPC_BIT(be)) | PPC_BIT(bs))
/*
* clear_bit doesn't imply a memory barrier
*/
#define smp_mb__before_clear_bit() smp_mb()
#define smp_mb__after_clear_bit() smp_mb()
#include <asm/barrier.h>
/* Macro for generating the ***_bits() functions */
#define DEFINE_BITOP(fn, op, prefix) \


@ -81,7 +81,7 @@ void crash_ipi_callback(struct pt_regs *regs)
}
atomic_inc(&cpus_in_crash);
smp_mb__after_atomic_inc();
smp_mb__after_atomic();
/*
* Starting the kdump boot.


@ -412,9 +412,4 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v)
#define atomic64_dec_and_test(_v) (atomic64_sub_return(1, _v) == 0)
#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
#define smp_mb__before_atomic_dec() smp_mb()
#define smp_mb__after_atomic_dec() smp_mb()
#define smp_mb__before_atomic_inc() smp_mb()
#define smp_mb__after_atomic_inc() smp_mb()
#endif /* __ARCH_S390_ATOMIC__ */


@ -27,8 +27,9 @@
#define smp_rmb() rmb()
#define smp_wmb() wmb()
#define smp_read_barrier_depends() read_barrier_depends()
#define smp_mb__before_clear_bit() smp_mb()
#define smp_mb__after_clear_bit() smp_mb()
#define smp_mb__before_atomic() smp_mb()
#define smp_mb__after_atomic() smp_mb()
#define set_mb(var, value) do { var = value; mb(); } while (0)


@ -2,12 +2,7 @@
#define _ASM_SCORE_BITOPS_H
#include <asm/byteorder.h> /* swab32 */
/*
* clear_bit() doesn't provide any barrier for the compiler.
*/
#define smp_mb__before_clear_bit() barrier()
#define smp_mb__after_clear_bit() barrier()
#include <asm/barrier.h>
#include <asm-generic/bitops.h>
#include <asm-generic/bitops/__fls.h>


@ -10,6 +10,7 @@
#include <linux/compiler.h>
#include <linux/types.h>
#include <asm/cmpxchg.h>
#include <asm/barrier.h>
#define ATOMIC_INIT(i) { (i) }
@ -62,9 +63,4 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u)
return c;
}
#define smp_mb__before_atomic_dec() smp_mb()
#define smp_mb__after_atomic_dec() smp_mb()
#define smp_mb__before_atomic_inc() smp_mb()
#define smp_mb__after_atomic_inc() smp_mb()
#endif /* __ASM_SH_ATOMIC_H */


@ -9,6 +9,7 @@
/* For __swab32 */
#include <asm/byteorder.h>
#include <asm/barrier.h>
#ifdef CONFIG_GUSA_RB
#include <asm/bitops-grb.h>
@ -22,12 +23,6 @@
#include <asm-generic/bitops/non-atomic.h>
#endif
/*
* clear_bit() doesn't provide any barrier for the compiler.
*/
#define smp_mb__before_clear_bit() smp_mb()
#define smp_mb__after_clear_bit() smp_mb()
#ifdef CONFIG_SUPERH32
static inline unsigned long ffz(unsigned long word)
{


@ -14,6 +14,7 @@
#include <linux/types.h>
#include <asm/cmpxchg.h>
#include <asm/barrier.h>
#include <asm-generic/atomic64.h>
@ -52,10 +53,4 @@ extern void atomic_set(atomic_t *, int);
#define atomic_dec_and_test(v) (atomic_dec_return(v) == 0)
#define atomic_sub_and_test(i, v) (atomic_sub_return(i, v) == 0)
/* Atomic operations are already serializing */
#define smp_mb__before_atomic_dec() barrier()
#define smp_mb__after_atomic_dec() barrier()
#define smp_mb__before_atomic_inc() barrier()
#define smp_mb__after_atomic_inc() barrier()
#endif /* !(__ARCH_SPARC_ATOMIC__) */


@ -9,6 +9,7 @@
#include <linux/types.h>
#include <asm/cmpxchg.h>
#include <asm/barrier.h>
#define ATOMIC_INIT(i) { (i) }
#define ATOMIC64_INIT(i) { (i) }
@ -108,10 +109,4 @@ static inline long atomic64_add_unless(atomic64_t *v, long a, long u)
extern long atomic64_dec_if_positive(atomic64_t *v);
/* Atomic operations are already serializing */
#define smp_mb__before_atomic_dec() barrier()
#define smp_mb__after_atomic_dec() barrier()
#define smp_mb__before_atomic_inc() barrier()
#define smp_mb__after_atomic_inc() barrier()
#endif /* !(__ARCH_SPARC64_ATOMIC__) */


@ -68,4 +68,7 @@ do { \
___p1; \
})
#define smp_mb__before_atomic() barrier()
#define smp_mb__after_atomic() barrier()
#endif /* !(__SPARC64_BARRIER_H) */


@ -90,9 +90,6 @@ static inline void change_bit(unsigned long nr, volatile unsigned long *addr)
#include <asm-generic/bitops/non-atomic.h>
#define smp_mb__before_clear_bit() do { } while(0)
#define smp_mb__after_clear_bit() do { } while(0)
#include <asm-generic/bitops/ffz.h>
#include <asm-generic/bitops/__ffs.h>
#include <asm-generic/bitops/sched.h>


@ -13,6 +13,7 @@
#include <linux/compiler.h>
#include <asm/byteorder.h>
#include <asm/barrier.h>
extern int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
extern int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
@ -23,9 +24,6 @@ extern void change_bit(unsigned long nr, volatile unsigned long *addr);
#include <asm-generic/bitops/non-atomic.h>
#define smp_mb__before_clear_bit() barrier()
#define smp_mb__after_clear_bit() barrier()
#include <asm-generic/bitops/fls.h>
#include <asm-generic/bitops/__fls.h>
#include <asm-generic/bitops/fls64.h>


@ -169,16 +169,6 @@ static inline void atomic64_set(atomic64_t *v, long long n)
#define atomic64_dec_and_test(v) (atomic64_dec_return((v)) == 0)
#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1LL, 0LL)
/*
* We need to barrier before modifying the word, since the _atomic_xxx()
* routines just tns the lock and then read/modify/write of the word.
* But after the word is updated, the routine issues an "mf" before returning,
* and since it's a function call, we don't even need a compiler barrier.
*/
#define smp_mb__before_atomic_dec() smp_mb()
#define smp_mb__before_atomic_inc() smp_mb()
#define smp_mb__after_atomic_dec() do { } while (0)
#define smp_mb__after_atomic_inc() do { } while (0)
#endif /* !__ASSEMBLY__ */


@ -105,12 +105,6 @@ static inline long atomic64_add_unless(atomic64_t *v, long a, long u)
#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
/* Atomic dec and inc don't implement barrier, so provide them if needed. */
#define smp_mb__before_atomic_dec() smp_mb()
#define smp_mb__after_atomic_dec() smp_mb()
#define smp_mb__before_atomic_inc() smp_mb()
#define smp_mb__after_atomic_inc() smp_mb()
/* Define this to indicate that cmpxchg is an efficient operation. */
#define __HAVE_ARCH_CMPXCHG


@ -72,6 +72,20 @@ mb_incoherent(void)
#define mb() fast_mb()
#define iob() fast_iob()
#ifndef __tilegx__ /* 32 bit */
/*
* We need to barrier before modifying the word, since the _atomic_xxx()
* routines just tns the lock and then read/modify/write of the word.
* But after the word is updated, the routine issues an "mf" before returning,
* and since it's a function call, we don't even need a compiler barrier.
*/
#define smp_mb__before_atomic() smp_mb()
#define smp_mb__after_atomic() do { } while (0)
#else /* 64 bit */
#define smp_mb__before_atomic() smp_mb()
#define smp_mb__after_atomic() smp_mb()
#endif
#include <asm-generic/barrier.h>
#endif /* !__ASSEMBLY__ */


@ -17,6 +17,7 @@
#define _ASM_TILE_BITOPS_H
#include <linux/types.h>
#include <asm/barrier.h>
#ifndef _LINUX_BITOPS_H
#error only <linux/bitops.h> can be included directly


@ -49,8 +49,8 @@ static inline void set_bit(unsigned nr, volatile unsigned long *addr)
* restricted to acting on a single-word quantity.
*
* clear_bit() may not contain a memory barrier, so if it is used for
* locking purposes, you should call smp_mb__before_clear_bit() and/or
* smp_mb__after_clear_bit() to ensure changes are visible on other cpus.
* locking purposes, you should call smp_mb__before_atomic() and/or
* smp_mb__after_atomic() to ensure changes are visible on other cpus.
*/
static inline void clear_bit(unsigned nr, volatile unsigned long *addr)
{
@ -121,10 +121,6 @@ static inline int test_and_change_bit(unsigned nr,
return (_atomic_xor(addr, mask) & mask) != 0;
}
/* See discussion at smp_mb__before_atomic_dec() in <asm/atomic_32.h>. */
#define smp_mb__before_clear_bit() smp_mb()
#define smp_mb__after_clear_bit() do {} while (0)
#include <asm-generic/bitops/ext2-atomic.h>
#endif /* _ASM_TILE_BITOPS_32_H */


@ -32,10 +32,6 @@ static inline void clear_bit(unsigned nr, volatile unsigned long *addr)
__insn_fetchand((void *)(addr + nr / BITS_PER_LONG), ~mask);
}
#define smp_mb__before_clear_bit() smp_mb()
#define smp_mb__after_clear_bit() smp_mb()
static inline void change_bit(unsigned nr, volatile unsigned long *addr)
{
unsigned long mask = (1UL << (nr % BITS_PER_LONG));


@ -7,6 +7,7 @@
#include <asm/alternative.h>
#include <asm/cmpxchg.h>
#include <asm/rmwcc.h>
#include <asm/barrier.h>
/*
* Atomic operations that C can't guarantee us. Useful for
@ -243,12 +244,6 @@ static inline void atomic_or_long(unsigned long *v1, unsigned long v2)
: : "r" ((unsigned)(mask)), "m" (*(addr)) \
: "memory")
/* Atomic operations are already serializing on x86 */
#define smp_mb__before_atomic_dec() barrier()
#define smp_mb__after_atomic_dec() barrier()
#define smp_mb__before_atomic_inc() barrier()
#define smp_mb__after_atomic_inc() barrier()
#ifdef CONFIG_X86_32
# include <asm/atomic64_32.h>
#else


@ -137,6 +137,10 @@ do { \
#endif
/* Atomic operations are already serializing on x86 */
#define smp_mb__before_atomic() barrier()
#define smp_mb__after_atomic() barrier()
/*
* Stop RDTSC speculation. This is needed when you need to use RDTSC
* (or get_cycles or vread that possibly accesses the TSC) in a defined


@ -15,6 +15,7 @@
#include <linux/compiler.h>
#include <asm/alternative.h>
#include <asm/rmwcc.h>
#include <asm/barrier.h>
#if BITS_PER_LONG == 32
# define _BITOPS_LONG_SHIFT 5
@ -102,7 +103,7 @@ static inline void __set_bit(long nr, volatile unsigned long *addr)
*
* clear_bit() is atomic and may not be reordered. However, it does
* not contain a memory barrier, so if it is used for locking purposes,
* you should call smp_mb__before_clear_bit() and/or smp_mb__after_clear_bit()
* you should call smp_mb__before_atomic() and/or smp_mb__after_atomic()
* in order to ensure changes are visible on other processors.
*/
static __always_inline void
@ -156,9 +157,6 @@ static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
__clear_bit(nr, addr);
}
#define smp_mb__before_clear_bit() barrier()
#define smp_mb__after_clear_bit() barrier()
/**
* __change_bit - Toggle a bit in memory
* @nr: the bit to change


@ -41,7 +41,7 @@ static inline void sync_set_bit(long nr, volatile unsigned long *addr)
*
* sync_clear_bit() is atomic and may not be reordered. However, it does
* not contain a memory barrier, so if it is used for locking purposes,
* you should call smp_mb__before_clear_bit() and/or smp_mb__after_clear_bit()
* you should call smp_mb__before_atomic() and/or smp_mb__after_atomic()
* in order to ensure changes are visible on other processors.
*/
static inline void sync_clear_bit(long nr, volatile unsigned long *addr)


@ -57,7 +57,7 @@ void arch_trigger_all_cpu_backtrace(void)
}
clear_bit(0, &backtrace_flag);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
}
static int __kprobes


@ -19,6 +19,7 @@
#ifdef __KERNEL__
#include <asm/processor.h>
#include <asm/cmpxchg.h>
#include <asm/barrier.h>
#define ATOMIC_INIT(i) { (i) }
@ -387,12 +388,6 @@ static inline void atomic_set_mask(unsigned int mask, atomic_t *v)
#endif
}
/* Atomic operations are already serializing */
#define smp_mb__before_atomic_dec() barrier()
#define smp_mb__after_atomic_dec() barrier()
#define smp_mb__before_atomic_inc() barrier()
#define smp_mb__after_atomic_inc() barrier()
#endif /* __KERNEL__ */
#endif /* _XTENSA_ATOMIC_H */


@ -13,6 +13,9 @@
#define rmb() barrier()
#define wmb() mb()
#define smp_mb__before_atomic() barrier()
#define smp_mb__after_atomic() barrier()
#include <asm-generic/barrier.h>
#endif /* _XTENSA_SYSTEM_H */


@ -21,9 +21,7 @@
#include <asm/processor.h>
#include <asm/byteorder.h>
#define smp_mb__before_clear_bit() smp_mb()
#define smp_mb__after_clear_bit() smp_mb()
#include <asm/barrier.h>
#include <asm-generic/bitops/non-atomic.h>


@ -49,7 +49,7 @@ EXPORT_SYMBOL(blk_iopoll_sched);
void __blk_iopoll_complete(struct blk_iopoll *iop)
{
list_del(&iop->list);
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit_unlock(IOPOLL_F_SCHED, &iop->state);
}
EXPORT_SYMBOL(__blk_iopoll_complete);
@ -161,7 +161,7 @@ EXPORT_SYMBOL(blk_iopoll_disable);
void blk_iopoll_enable(struct blk_iopoll *iop)
{
BUG_ON(!test_bit(IOPOLL_F_SCHED, &iop->state));
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit_unlock(IOPOLL_F_SCHED, &iop->state);
}
EXPORT_SYMBOL(blk_iopoll_enable);


@ -126,7 +126,7 @@ static int async_chainiv_schedule_work(struct async_chainiv_ctx *ctx)
int err = ctx->err;
if (!ctx->queue.qlen) {
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit(CHAINIV_STATE_INUSE, &ctx->state);
if (!ctx->queue.qlen ||


@ -105,7 +105,7 @@ static bool genpd_sd_counter_dec(struct generic_pm_domain *genpd)
static void genpd_sd_counter_inc(struct generic_pm_domain *genpd)
{
atomic_inc(&genpd->sd_count);
smp_mb__after_atomic_inc();
smp_mb__after_atomic();
}
static void genpd_acquire_lock(struct generic_pm_domain *genpd)


@ -159,7 +159,7 @@ void cpuidle_coupled_parallel_barrier(struct cpuidle_device *dev, atomic_t *a)
{
int n = dev->coupled->online_count;
smp_mb__before_atomic_inc();
smp_mb__before_atomic();
atomic_inc(a);
while (atomic_read(a) < n)


@ -3498,7 +3498,7 @@ static int ohci_flush_iso_completions(struct fw_iso_context *base)
}
clear_bit_unlock(0, &ctx->flushing_completions);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
}
tasklet_enable(&ctx->context.tasklet);


@ -156,7 +156,7 @@ static void vblank_disable_and_save(struct drm_device *dev, int crtc)
*/
if ((vblrc > 0) && (abs64(diff_ns) > 1000000)) {
atomic_inc(&dev->vblank[crtc].count);
smp_mb__after_atomic_inc();
smp_mb__after_atomic();
}
/* Invalidate all timestamps while vblank irq's are off. */
@ -864,9 +864,9 @@ static void drm_update_vblank_count(struct drm_device *dev, int crtc)
vblanktimestamp(dev, crtc, tslot) = t_vblank;
}
smp_mb__before_atomic_inc();
smp_mb__before_atomic();
atomic_add(diff, &dev->vblank[crtc].count);
smp_mb__after_atomic_inc();
smp_mb__after_atomic();
}
/**
@ -1330,9 +1330,9 @@ bool drm_handle_vblank(struct drm_device *dev, int crtc)
/* Increment cooked vblank count. This also atomically commits
* the timestamp computed above.
*/
smp_mb__before_atomic_inc();
smp_mb__before_atomic();
atomic_inc(&dev->vblank[crtc].count);
smp_mb__after_atomic_inc();
smp_mb__after_atomic();
} else {
DRM_DEBUG("crtc %d: Redundant vblirq ignored. diff_ns = %d\n",
crtc, (int) diff_ns);


@ -2157,7 +2157,7 @@ static void i915_error_work_func(struct work_struct *work)
* updates before
* the counter increment.
*/
smp_mb__before_atomic_inc();
smp_mb__before_atomic();
atomic_inc(&dev_priv->gpu_error.reset_counter);
kobject_uevent_env(&dev->primary->kdev->kobj,


@ -828,7 +828,7 @@ static inline bool cached_dev_get(struct cached_dev *dc)
return false;
/* Paired with the mb in cached_dev_attach */
smp_mb__after_atomic_inc();
smp_mb__after_atomic();
return true;
}


@ -243,7 +243,7 @@ static inline void set_closure_fn(struct closure *cl, closure_fn *fn,
cl->fn = fn;
cl->wq = wq;
/* between atomic_dec() in closure_put() */
smp_mb__before_atomic_dec();
smp_mb__before_atomic();
}
static inline void closure_queue(struct closure *cl)


@ -607,9 +607,9 @@ static void write_endio(struct bio *bio, int error)
BUG_ON(!test_bit(B_WRITING, &b->state));
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit(B_WRITING, &b->state);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
wake_up_bit(&b->state, B_WRITING);
}
@ -997,9 +997,9 @@ static void read_endio(struct bio *bio, int error)
BUG_ON(!test_bit(B_READING, &b->state));
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit(B_READING, &b->state);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
wake_up_bit(&b->state, B_READING);
}
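
The dm-bufio hunk above shows the waiter handshake that most of the driver
conversions in this merge repeat. Condensed into one hypothetical helper for
illustration (the state/bit names are placeholders, not a real dm-bufio API):

	#include <linux/bitops.h>
	#include <linux/wait.h>

	/* complete a bit-flagged operation and wake any waiters on that bit */
	static void complete_state_bit(unsigned long *state, int bit)
	{
		smp_mb__before_atomic();	/* order finished work before the clear */
		clear_bit(bit, state);
		smp_mb__after_atomic();		/* order the clear before the wakeup */
		wake_up_bit(state, bit);
	}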


@ -642,7 +642,7 @@ static void free_pending_exception(struct dm_snap_pending_exception *pe)
struct dm_snapshot *s = pe->snap;
mempool_free(pe, s->pending_pool);
smp_mb__before_atomic_dec();
smp_mb__before_atomic();
atomic_dec(&s->pending_exceptions_count);
}
@ -783,7 +783,7 @@ static int init_hash_tables(struct dm_snapshot *s)
static void merge_shutdown(struct dm_snapshot *s)
{
clear_bit_unlock(RUNNING_MERGE, &s->state_bits);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
wake_up_bit(&s->state_bits, RUNNING_MERGE);
}


@ -2446,7 +2446,7 @@ static void dm_wq_work(struct work_struct *work)
static void dm_queue_flush(struct mapped_device *md)
{
clear_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
queue_work(md->wq, &md->work);
}


@ -4400,7 +4400,7 @@ static void raid5_unplug(struct blk_plug_cb *blk_cb, bool from_schedule)
* STRIPE_ON_UNPLUG_LIST clear but the stripe
* is still in our list
*/
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit(STRIPE_ON_UNPLUG_LIST, &sh->state);
/*
* STRIPE_ON_RELEASE_LIST could be set here. In that


@ -399,7 +399,7 @@ static int dvb_usb_stop_feed(struct dvb_demux_feed *dvbdmxfeed)
/* clear 'streaming' status bit */
clear_bit(ADAP_STREAMING, &adap->state_bits);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
wake_up_bit(&adap->state_bits, ADAP_STREAMING);
skip_feed_stop:
@ -550,7 +550,7 @@ static int dvb_usb_fe_init(struct dvb_frontend *fe)
err:
if (!adap->suspend_resume_active) {
clear_bit(ADAP_INIT, &adap->state_bits);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
wake_up_bit(&adap->state_bits, ADAP_INIT);
}
@ -591,7 +591,7 @@ err:
if (!adap->suspend_resume_active) {
adap->active_fe = -1;
clear_bit(ADAP_SLEEP, &adap->state_bits);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
wake_up_bit(&adap->state_bits, ADAP_SLEEP);
}


@ -2781,7 +2781,7 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
case LOAD_OPEN:
netif_tx_start_all_queues(bp->dev);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
break;
case LOAD_DIAG:
@ -4939,9 +4939,9 @@ void bnx2x_update_coalesce_sb_index(struct bnx2x *bp, u8 fw_sb_id,
void bnx2x_schedule_sp_rtnl(struct bnx2x *bp, enum sp_rtnl_flag flag,
u32 verbose)
{
smp_mb__before_clear_bit();
smp_mb__before_atomic();
set_bit(flag, &bp->sp_rtnl_state);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
DP((BNX2X_MSG_SP | verbose), "Scheduling sp_rtnl task [Flag: %d]\n",
flag);
schedule_delayed_work(&bp->sp_rtnl_task, 0);

Просмотреть файл

@ -1858,10 +1858,10 @@ void bnx2x_sp_event(struct bnx2x_fastpath *fp, union eth_rx_cqe *rr_cqe)
return;
#endif
smp_mb__before_atomic_inc();
smp_mb__before_atomic();
atomic_inc(&bp->cq_spq_left);
/* push the change in bp->spq_left and towards the memory */
smp_mb__after_atomic_inc();
smp_mb__after_atomic();
DP(BNX2X_MSG_SP, "bp->cq_spq_left %x\n", atomic_read(&bp->cq_spq_left));
@ -1876,11 +1876,11 @@ void bnx2x_sp_event(struct bnx2x_fastpath *fp, union eth_rx_cqe *rr_cqe)
* sp_state is cleared, and this order prevents
* races
*/
smp_mb__before_clear_bit();
smp_mb__before_atomic();
set_bit(BNX2X_AFEX_PENDING_VIFSET_MCP_ACK, &bp->sp_state);
wmb();
clear_bit(BNX2X_AFEX_FCOE_Q_UPDATE_PENDING, &bp->sp_state);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
/* schedule the sp task as mcp ack is required */
bnx2x_schedule_sp_task(bp);
@ -5272,9 +5272,9 @@ static void bnx2x_after_function_update(struct bnx2x *bp)
__clear_bit(RAMROD_COMP_WAIT, &queue_params.ramrod_flags);
/* mark latest Q bit */
smp_mb__before_clear_bit();
smp_mb__before_atomic();
set_bit(BNX2X_AFEX_FCOE_Q_UPDATE_PENDING, &bp->sp_state);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
/* send Q update ramrod for FCoE Q */
rc = bnx2x_queue_state_change(bp, &queue_params);
@ -5500,7 +5500,7 @@ next_spqe:
spqe_cnt++;
} /* for */
smp_mb__before_atomic_inc();
smp_mb__before_atomic();
atomic_add(spqe_cnt, &bp->eq_spq_left);
bp->eq_cons = sw_cons;
@ -13875,9 +13875,9 @@ static int bnx2x_drv_ctl(struct net_device *dev, struct drv_ctl_info *ctl)
case DRV_CTL_RET_L2_SPQ_CREDIT_CMD: {
int count = ctl->data.credit.credit_count;
smp_mb__before_atomic_inc();
smp_mb__before_atomic();
atomic_add(count, &bp->cq_spq_left);
smp_mb__after_atomic_inc();
smp_mb__after_atomic();
break;
}
case DRV_CTL_ULP_REGISTER_CMD: {


@ -258,16 +258,16 @@ static bool bnx2x_raw_check_pending(struct bnx2x_raw_obj *o)
static void bnx2x_raw_clear_pending(struct bnx2x_raw_obj *o)
{
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit(o->state, o->pstate);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
}
static void bnx2x_raw_set_pending(struct bnx2x_raw_obj *o)
{
smp_mb__before_clear_bit();
smp_mb__before_atomic();
set_bit(o->state, o->pstate);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
}
/**
@ -2131,7 +2131,7 @@ static int bnx2x_set_rx_mode_e1x(struct bnx2x *bp,
/* The operation is completed */
clear_bit(p->state, p->pstate);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
return 0;
}
@ -3576,16 +3576,16 @@ error_exit1:
static void bnx2x_mcast_clear_sched(struct bnx2x_mcast_obj *o)
{
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit(o->sched_state, o->raw.pstate);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
}
static void bnx2x_mcast_set_sched(struct bnx2x_mcast_obj *o)
{
smp_mb__before_clear_bit();
smp_mb__before_atomic();
set_bit(o->sched_state, o->raw.pstate);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
}
static bool bnx2x_mcast_check_sched(struct bnx2x_mcast_obj *o)
@ -4200,7 +4200,7 @@ int bnx2x_queue_state_change(struct bnx2x *bp,
if (rc) {
o->next_state = BNX2X_Q_STATE_MAX;
clear_bit(pending_bit, pending);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
return rc;
}
@ -4288,7 +4288,7 @@ static int bnx2x_queue_comp_cmd(struct bnx2x *bp,
wmb();
clear_bit(cmd, &o->pending);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
return 0;
}
@ -5279,7 +5279,7 @@ static inline int bnx2x_func_state_change_comp(struct bnx2x *bp,
wmb();
clear_bit(cmd, &o->pending);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
return 0;
}
@ -5926,7 +5926,7 @@ int bnx2x_func_state_change(struct bnx2x *bp,
if (rc) {
o->next_state = BNX2X_F_STATE_MAX;
clear_bit(cmd, pending);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
return rc;
}


@ -1650,9 +1650,9 @@ static
void bnx2x_vf_handle_filters_eqe(struct bnx2x *bp,
struct bnx2x_virtf *vf)
{
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit(BNX2X_FILTER_RX_MODE_PENDING, &vf->filter_state);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
}
static void bnx2x_vf_handle_rss_update_eqe(struct bnx2x *bp,
@ -2990,9 +2990,9 @@ void bnx2x_iov_task(struct work_struct *work)
void bnx2x_schedule_iov_task(struct bnx2x *bp, enum bnx2x_iov_flag flag)
{
smp_mb__before_clear_bit();
smp_mb__before_atomic();
set_bit(flag, &bp->iov_task_state);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
DP(BNX2X_MSG_IOV, "Scheduling iov task [Flag: %d]\n", flag);
queue_delayed_work(bnx2x_iov_wq, &bp->iov_task, 0);
}


@ -436,7 +436,7 @@ static int cnic_offld_prep(struct cnic_sock *csk)
static int cnic_close_prep(struct cnic_sock *csk)
{
clear_bit(SK_F_CONNECT_START, &csk->flags);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
if (test_and_clear_bit(SK_F_OFFLD_COMPLETE, &csk->flags)) {
while (test_and_set_bit(SK_F_OFFLD_SCHED, &csk->flags))
@ -450,7 +450,7 @@ static int cnic_close_prep(struct cnic_sock *csk)
static int cnic_abort_prep(struct cnic_sock *csk)
{
clear_bit(SK_F_CONNECT_START, &csk->flags);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
while (test_and_set_bit(SK_F_OFFLD_SCHED, &csk->flags))
msleep(1);
@ -3646,7 +3646,7 @@ static int cnic_cm_destroy(struct cnic_sock *csk)
csk_hold(csk);
clear_bit(SK_F_INUSE, &csk->flags);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
while (atomic_read(&csk->ref_count) != 1)
msleep(1);
cnic_cm_cleanup(csk);
@ -4026,7 +4026,7 @@ static void cnic_cm_process_kcqe(struct cnic_dev *dev, struct kcqe *kcqe)
L4_KCQE_COMPLETION_STATUS_PARITY_ERROR)
set_bit(SK_F_HW_ERR, &csk->flags);
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit(SK_F_OFFLD_SCHED, &csk->flags);
cnic_cm_upcall(cp, csk, opcode);
break;


@ -249,7 +249,7 @@ bnad_tx_complete(struct bnad *bnad, struct bna_tcb *tcb)
if (likely(test_bit(BNAD_TXQ_TX_STARTED, &tcb->flags)))
bna_ib_ack(tcb->i_dbell, sent);
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit(BNAD_TXQ_FREE_SENT, &tcb->flags);
return sent;
@ -1126,7 +1126,7 @@ bnad_tx_cleanup(struct delayed_work *work)
bnad_txq_cleanup(bnad, tcb);
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit(BNAD_TXQ_FREE_SENT, &tcb->flags);
}
@ -2992,7 +2992,7 @@ bnad_start_xmit(struct sk_buff *skb, struct net_device *netdev)
sent = bnad_txcmpl_process(bnad, tcb);
if (likely(test_bit(BNAD_TXQ_TX_STARTED, &tcb->flags)))
bna_ib_ack(tcb->i_dbell, sent);
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit(BNAD_TXQ_FREE_SENT, &tcb->flags);
} else {
netif_stop_queue(netdev);


@ -281,7 +281,7 @@ static int cxgb_close(struct net_device *dev)
if (adapter->params.stats_update_period &&
!(adapter->open_device_map & PORT_MASK)) {
/* Stop statistics accumulation. */
smp_mb__after_clear_bit();
smp_mb__after_atomic();
spin_lock(&adapter->work_lock); /* sync with update task */
spin_unlock(&adapter->work_lock);
cancel_mac_stats_update(adapter);

Просмотреть файл

@ -1379,7 +1379,7 @@ static inline int check_desc_avail(struct adapter *adap, struct sge_txq *q,
struct sge_qset *qs = txq_to_qset(q, qid);
set_bit(qid, &qs->txq_stopped);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
if (should_restart_tx(q) &&
test_and_clear_bit(qid, &qs->txq_stopped))
@ -1492,7 +1492,7 @@ static void restart_ctrlq(unsigned long data)
if (!skb_queue_empty(&q->sendq)) {
set_bit(TXQ_CTRL, &qs->txq_stopped);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
if (should_restart_tx(q) &&
test_and_clear_bit(TXQ_CTRL, &qs->txq_stopped))
@ -1697,7 +1697,7 @@ again: reclaim_completed_tx(adap, q, TX_RECLAIM_CHUNK);
if (unlikely(q->size - q->in_use < ndesc)) {
set_bit(TXQ_OFLD, &qs->txq_stopped);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
if (should_restart_tx(q) &&
test_and_clear_bit(TXQ_OFLD, &qs->txq_stopped))


@ -2031,7 +2031,7 @@ static void sge_rx_timer_cb(unsigned long data)
struct sge_fl *fl = s->egr_map[id];
clear_bit(id, s->starving_fl);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
if (fl_starving(fl)) {
rxq = container_of(fl, struct sge_eth_rxq, fl);


@ -1951,7 +1951,7 @@ static void sge_rx_timer_cb(unsigned long data)
struct sge_fl *fl = s->egr_map[id];
clear_bit(id, s->starving_fl);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
/*
* Since we are accessing fl without a lock there's a


@ -1798,9 +1798,9 @@ void stop_gfar(struct net_device *dev)
netif_tx_stop_all_queues(dev);
smp_mb__before_clear_bit();
smp_mb__before_atomic();
set_bit(GFAR_DOWN, &priv->state);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
disable_napi(priv);
@ -2043,9 +2043,9 @@ int startup_gfar(struct net_device *ndev)
gfar_init_tx_rx_base(priv);
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit(GFAR_DOWN, &priv->state);
smp_mb__after_clear_bit();
smp_mb__after_atomic();
/* Start Rx/Tx DMA and enable the interrupts */
gfar_start(priv);


@ -4676,7 +4676,7 @@ static void i40e_service_event_complete(struct i40e_pf *pf)
BUG_ON(!test_bit(__I40E_SERVICE_SCHED, &pf->state));
/* flush memory to make sure state is correct before next watchdog */
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit(__I40E_SERVICE_SCHED, &pf->state);
}


@ -376,7 +376,7 @@ static void ixgbe_service_event_complete(struct ixgbe_adapter *adapter)
BUG_ON(!test_bit(__IXGBE_SERVICE_SCHED, &adapter->state));
/* flush memory to make sure state is correct before next watchdog */
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit(__IXGBE_SERVICE_SCHED, &adapter->state);
}
@ -4672,7 +4672,7 @@ static void ixgbe_up_complete(struct ixgbe_adapter *adapter)
if (hw->mac.ops.enable_tx_laser)
hw->mac.ops.enable_tx_laser(hw);
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit(__IXGBE_DOWN, &adapter->state);
ixgbe_napi_enable_all(adapter);
@ -5568,7 +5568,7 @@ static int ixgbe_resume(struct pci_dev *pdev)
e_dev_err("Cannot enable PCI device from suspend\n");
return err;
}
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit(__IXGBE_DISABLED, &adapter->state);
pci_set_master(pdev);
@ -8542,7 +8542,7 @@ static pci_ers_result_t ixgbe_io_slot_reset(struct pci_dev *pdev)
e_err(probe, "Cannot re-enable PCI device after reset.\n");
result = PCI_ERS_RESULT_DISCONNECT;
} else {
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit(__IXGBE_DISABLED, &adapter->state);
adapter->hw.hw_addr = adapter->io_addr;
pci_set_master(pdev);


@ -1668,7 +1668,7 @@ static void ixgbevf_up_complete(struct ixgbevf_adapter *adapter)
spin_unlock_bh(&adapter->mbx_lock);
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit(__IXGBEVF_DOWN, &adapter->state);
ixgbevf_napi_enable_all(adapter);
@ -3354,7 +3354,7 @@ static int ixgbevf_resume(struct pci_dev *pdev)
dev_err(&pdev->dev, "Cannot enable PCI device from suspend\n");
return err;
}
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit(__IXGBEVF_DISABLED, &adapter->state);
pci_set_master(pdev);
@ -3712,7 +3712,7 @@ static pci_ers_result_t ixgbevf_io_slot_reset(struct pci_dev *pdev)
return PCI_ERS_RESULT_DISCONNECT;
}
smp_mb__before_clear_bit();
smp_mb__before_atomic();
clear_bit(__IXGBEVF_DISABLED, &adapter->state);
pci_set_master(pdev);

Some files were not shown because too many files changed in this diff.