Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull core locking updates from Ingo Molnar:
 "The main changes in this cycle are:

   - Another attempt at enabling cross-release lockdep dependency
     tracking (automatically part of CONFIG_PROVE_LOCKING=y), this time
     with better performance and fewer false positives. (Byungchul Park)

   - Introduce lockdep_assert_irqs_enabled()/disabled() and convert
     open-coded equivalents to lockdep variants. (Frederic Weisbecker)

   - Add down_read_killable() and use it in the VFS's iterate_dir()
     method. (Kirill Tkhai)

   - Convert remaining uses of ACCESS_ONCE() to
     READ_ONCE()/WRITE_ONCE(). Most of the conversion was Coccinelle
     driven. (Mark Rutland, Paul E. McKenney)

   - Get rid of lockless_dereference(), by strengthening Alpha atomics,
     strengthening READ_ONCE() with smp_read_barrier_depends() and thus
     being able to convert users of lockless_dereference() to
     READ_ONCE(). (Will Deacon)

   - Various micro-optimizations:

         - better PV qspinlocks (Waiman Long)
         - better x86 barriers (Michael S. Tsirkin)
         - better x86 refcounts (Kees Cook)

   - ... plus other fixes and enhancements. (Borislav Petkov, Juergen
     Gross, Miguel Bernal Marin)"

* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (70 commits)
  locking/x86: Use LOCK ADD for smp_mb() instead of MFENCE
  rcu: Use lockdep to assert IRQs are disabled/enabled
  netpoll: Use lockdep to assert IRQs are disabled/enabled
  timers/posix-cpu-timers: Use lockdep to assert IRQs are disabled/enabled
  sched/clock, sched/cputime: Use lockdep to assert IRQs are disabled/enabled
  irq_work: Use lockdep to assert IRQs are disabled/enabled
  irq/timings: Use lockdep to assert IRQs are disabled/enabled
  perf/core: Use lockdep to assert IRQs are disabled/enabled
  x86: Use lockdep to assert IRQs are disabled/enabled
  smp/core: Use lockdep to assert IRQs are disabled/enabled
  timers/hrtimer: Use lockdep to assert IRQs are disabled/enabled
  timers/nohz: Use lockdep to assert IRQs are disabled/enabled
  workqueue: Use lockdep to assert IRQs are disabled/enabled
  irq/softirqs: Use lockdep to assert IRQs are disabled/enabled
  locking/lockdep: Add IRQs disabled/enabled assertion APIs: lockdep_assert_irqs_enabled()/disabled()
  locking/pvqspinlock: Implement hybrid PV queued/unfair locks
  locking/rwlocks: Fix comments
  x86/paravirt: Set up the virt_spin_lock_key after static keys get initialized
  block, locking/lockdep: Assign a lock_class per gendisk used for wait_for_completion()
  workqueue: Remove now redundant lock acquisitions wrt. workqueue flushes
  ...
Linus Torvalds 2017-11-13 12:38:26 -08:00
Parents 6098850e7e 450cbdd012
Commit 8e9a2dba86
307 files changed, 1252 insertions(+), 1672 deletions(-)

View file

@ -709,6 +709,9 @@
It will be ignored when crashkernel=X,high is not used It will be ignored when crashkernel=X,high is not used
or memory reserved is below 4G. or memory reserved is below 4G.
crossrelease_fullstack
[KNL] Allow to record full stack trace in cross-release
cryptomgr.notests cryptomgr.notests
[KNL] Disable crypto self-tests [KNL] Disable crypto self-tests

View file

@ -826,9 +826,9 @@ If the filesystem may need to revalidate dcache entries, then
*is* passed the dentry but does not have access to the `inode` or the *is* passed the dentry but does not have access to the `inode` or the
`seq` number from the `nameidata`, so it needs to be extra careful `seq` number from the `nameidata`, so it needs to be extra careful
when accessing fields in the dentry. This "extra care" typically when accessing fields in the dentry. This "extra care" typically
involves using `ACCESS_ONCE()` or the newer [`READ_ONCE()`] to access involves using [`READ_ONCE()`] to access fields, and verifying the
fields, and verifying the result is not NULL before using it. This result is not NULL before using it. This pattern can be seen in
pattern can be see in `nfs_lookup_revalidate()`. `nfs_lookup_revalidate()`.
A pair of patterns A pair of patterns
------------------ ------------------

View file

@ -1880,18 +1880,6 @@ There are some more advanced barrier functions:
See Documentation/atomic_{t,bitops}.txt for more information. See Documentation/atomic_{t,bitops}.txt for more information.
(*) lockless_dereference();
This can be thought of as a pointer-fetch wrapper around the
smp_read_barrier_depends() data-dependency barrier.
This is also similar to rcu_dereference(), but in cases where
object lifetime is handled by some mechanism other than RCU, for
example, when the objects removed only when the system goes down.
In addition, lockless_dereference() is used in some data structures
that can be used both with and without RCU.
(*) dma_wmb(); (*) dma_wmb();
(*) dma_rmb(); (*) dma_rmb();
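
With smp_read_barrier_depends() now folded into READ_ONCE() (and Alpha's atomics strengthened, below), the lockless_dereference() section removed above goes away because the API itself does: every remaining caller becomes a plain READ_ONCE(). A minimal before/after sketch in C; the structure, field and function names here are illustrative only, not taken from the series:

        struct foo { int a; };
        struct foo *gp;                 /* published with rcu_assign_pointer()/smp_store_release() */

        int read_foo(void)
        {
                /* old:  struct foo *p = lockless_dereference(gp); */
                struct foo *p = READ_ONCE(gp); /* dependency ordering is now implied */

                return p ? READ_ONCE(p->a) : -1;
        }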

View file

@ -1858,18 +1858,6 @@ Mandatory 배리어들은 SMP 시스템에서도 UP 시스템에서도 SMP 효
참고하세요. 참고하세요.
(*) lockless_dereference();
이 함수는 smp_read_barrier_depends() 데이터 의존성 배리어를 사용하는
포인터 읽어오기 래퍼(wrapper) 함수로 생각될 수 있습니다.
객체의 라이프타임이 RCU 외의 메커니즘으로 관리된다는 점을 제외하면
rcu_dereference() 와도 유사한데, 예를 들면 객체가 시스템이 꺼질 때에만
제거되는 경우 등입니다. 또한, lockless_dereference() 은 RCU 와 함께
사용될수도, RCU 없이 사용될 수도 있는 일부 데이터 구조에 사용되고
있습니다.
(*) dma_wmb(); (*) dma_wmb();
(*) dma_rmb(); (*) dma_rmb();

View file

@ -14,6 +14,15 @@
* than regular operations. * than regular operations.
*/ */
/*
* To ensure dependency ordering is preserved for the _relaxed and
* _release atomics, an smp_read_barrier_depends() is unconditionally
* inserted into the _relaxed variants, which are used to build the
* barriered versions. To avoid redundant back-to-back fences, we can
* define the _acquire and _fence versions explicitly.
*/
#define __atomic_op_acquire(op, args...) op##_relaxed(args)
#define __atomic_op_fence __atomic_op_release
#define ATOMIC_INIT(i) { (i) } #define ATOMIC_INIT(i) { (i) }
#define ATOMIC64_INIT(i) { (i) } #define ATOMIC64_INIT(i) { (i) }
@ -61,6 +70,7 @@ static inline int atomic_##op##_return_relaxed(int i, atomic_t *v) \
".previous" \ ".previous" \
:"=&r" (temp), "=m" (v->counter), "=&r" (result) \ :"=&r" (temp), "=m" (v->counter), "=&r" (result) \
:"Ir" (i), "m" (v->counter) : "memory"); \ :"Ir" (i), "m" (v->counter) : "memory"); \
smp_read_barrier_depends(); \
return result; \ return result; \
} }
@ -78,6 +88,7 @@ static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v) \
".previous" \ ".previous" \
:"=&r" (temp), "=m" (v->counter), "=&r" (result) \ :"=&r" (temp), "=m" (v->counter), "=&r" (result) \
:"Ir" (i), "m" (v->counter) : "memory"); \ :"Ir" (i), "m" (v->counter) : "memory"); \
smp_read_barrier_depends(); \
return result; \ return result; \
} }
@ -112,6 +123,7 @@ static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \
".previous" \ ".previous" \
:"=&r" (temp), "=m" (v->counter), "=&r" (result) \ :"=&r" (temp), "=m" (v->counter), "=&r" (result) \
:"Ir" (i), "m" (v->counter) : "memory"); \ :"Ir" (i), "m" (v->counter) : "memory"); \
smp_read_barrier_depends(); \
return result; \ return result; \
} }
@ -129,6 +141,7 @@ static __inline__ long atomic64_fetch_##op##_relaxed(long i, atomic64_t * v) \
".previous" \ ".previous" \
:"=&r" (temp), "=m" (v->counter), "=&r" (result) \ :"=&r" (temp), "=m" (v->counter), "=&r" (result) \
:"Ir" (i), "m" (v->counter) : "memory"); \ :"Ir" (i), "m" (v->counter) : "memory"); \
smp_read_barrier_depends(); \
return result; \ return result; \
} }

View file

@ -22,7 +22,7 @@
#define RWSEM_ACTIVE_READ_BIAS RWSEM_ACTIVE_BIAS #define RWSEM_ACTIVE_READ_BIAS RWSEM_ACTIVE_BIAS
#define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS) #define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)
static inline void __down_read(struct rw_semaphore *sem) static inline int ___down_read(struct rw_semaphore *sem)
{ {
long oldcount; long oldcount;
#ifndef CONFIG_SMP #ifndef CONFIG_SMP
@ -42,10 +42,24 @@ static inline void __down_read(struct rw_semaphore *sem)
:"=&r" (oldcount), "=m" (sem->count), "=&r" (temp) :"=&r" (oldcount), "=m" (sem->count), "=&r" (temp)
:"Ir" (RWSEM_ACTIVE_READ_BIAS), "m" (sem->count) : "memory"); :"Ir" (RWSEM_ACTIVE_READ_BIAS), "m" (sem->count) : "memory");
#endif #endif
if (unlikely(oldcount < 0)) return (oldcount < 0);
}
static inline void __down_read(struct rw_semaphore *sem)
{
if (unlikely(___down_read(sem)))
rwsem_down_read_failed(sem); rwsem_down_read_failed(sem);
} }
static inline int __down_read_killable(struct rw_semaphore *sem)
{
if (unlikely(___down_read(sem)))
if (IS_ERR(rwsem_down_read_failed_killable(sem)))
return -EINTR;
return 0;
}
/* /*
* trylock for reading -- returns 1 if successful, 0 if contention * trylock for reading -- returns 1 if successful, 0 if contention
*/ */
@ -95,9 +109,10 @@ static inline void __down_write(struct rw_semaphore *sem)
static inline int __down_write_killable(struct rw_semaphore *sem) static inline int __down_write_killable(struct rw_semaphore *sem)
{ {
if (unlikely(___down_write(sem))) if (unlikely(___down_write(sem))) {
if (IS_ERR(rwsem_down_write_failed_killable(sem))) if (IS_ERR(rwsem_down_write_failed_killable(sem)))
return -EINTR; return -EINTR;
}
return 0; return 0;
} }
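
On top of the arch plumbing above, down_read_killable() behaves like the existing down_write_killable(): it returns 0 with the semaphore held for reading, or -EINTR if a fatal signal arrived while sleeping, in which case the lock was not taken. A minimal caller sketch; the helper below is hypothetical, loosely following the iterate_dir() conversion mentioned in the pull summary:

        static int read_locked_op(struct rw_semaphore *sem)
        {
                int res = down_read_killable(sem);

                if (res)                /* -EINTR: killed while waiting, lock not held */
                        return res;

                /* ... reader-side work under the semaphore ... */

                up_read(sem);
                return 0;
        }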

View file

@ -14,7 +14,6 @@
* We make no fairness assumptions. They have a cost. * We make no fairness assumptions. They have a cost.
*/ */
#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
#define arch_spin_is_locked(x) ((x)->lock != 0) #define arch_spin_is_locked(x) ((x)->lock != 0)
static inline int arch_spin_value_unlocked(arch_spinlock_t lock) static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
@ -55,16 +54,6 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock)
/***********************************************************/ /***********************************************************/
static inline int arch_read_can_lock(arch_rwlock_t *lock)
{
return (lock->lock & 1) == 0;
}
static inline int arch_write_can_lock(arch_rwlock_t *lock)
{
return lock->lock == 0;
}
static inline void arch_read_lock(arch_rwlock_t *lock) static inline void arch_read_lock(arch_rwlock_t *lock)
{ {
long regx; long regx;
@ -171,7 +160,4 @@ static inline void arch_write_unlock(arch_rwlock_t * lock)
lock->lock = 0; lock->lock = 0;
} }
#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
#endif /* _ALPHA_SPINLOCK_H */ #endif /* _ALPHA_SPINLOCK_H */

View file

@ -14,7 +14,6 @@
#include <asm/barrier.h> #include <asm/barrier.h>
#define arch_spin_is_locked(x) ((x)->slock != __ARCH_SPIN_LOCK_UNLOCKED__) #define arch_spin_is_locked(x) ((x)->slock != __ARCH_SPIN_LOCK_UNLOCKED__)
#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
#ifdef CONFIG_ARC_HAS_LLSC #ifdef CONFIG_ARC_HAS_LLSC
@ -410,14 +409,4 @@ static inline void arch_write_unlock(arch_rwlock_t *rw)
#endif #endif
#define arch_read_can_lock(x) ((x)->counter > 0)
#define arch_write_can_lock(x) ((x)->counter == __ARCH_RW_LOCK_UNLOCKED__)
#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
#define arch_spin_relax(lock) cpu_relax()
#define arch_read_relax(lock) cpu_relax()
#define arch_write_relax(lock) cpu_relax()
#endif /* __ASM_SPINLOCK_H */ #endif /* __ASM_SPINLOCK_H */

View file

@ -250,7 +250,7 @@ static void ipi_send_msg_one(int cpu, enum ipi_msg_type msg)
* and read back old value * and read back old value
*/ */
do { do {
new = old = ACCESS_ONCE(*ipi_data_ptr); new = old = READ_ONCE(*ipi_data_ptr);
new |= 1U << msg; new |= 1U << msg;
} while (cmpxchg(ipi_data_ptr, old, new) != old); } while (cmpxchg(ipi_data_ptr, old, new) != old);

View file

@ -126,8 +126,7 @@ extern unsigned long profile_pc(struct pt_regs *regs);
/* /*
* kprobe-based event tracer support * kprobe-based event tracer support
*/ */
#include <linux/stddef.h> #include <linux/compiler.h>
#include <linux/types.h>
#define MAX_REG_OFFSET (offsetof(struct pt_regs, ARM_ORIG_r0)) #define MAX_REG_OFFSET (offsetof(struct pt_regs, ARM_ORIG_r0))
extern int regs_query_register_offset(const char *name); extern int regs_query_register_offset(const char *name);

View file

@ -53,8 +53,6 @@ static inline void dsb_sev(void)
* memory. * memory.
*/ */
#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
static inline void arch_spin_lock(arch_spinlock_t *lock) static inline void arch_spin_lock(arch_spinlock_t *lock)
{ {
unsigned long tmp; unsigned long tmp;
@ -74,7 +72,7 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
while (lockval.tickets.next != lockval.tickets.owner) { while (lockval.tickets.next != lockval.tickets.owner) {
wfe(); wfe();
lockval.tickets.owner = ACCESS_ONCE(lock->tickets.owner); lockval.tickets.owner = READ_ONCE(lock->tickets.owner);
} }
smp_mb(); smp_mb();
@ -194,9 +192,6 @@ static inline void arch_write_unlock(arch_rwlock_t *rw)
dsb_sev(); dsb_sev();
} }
/* write_can_lock - would write_trylock() succeed? */
#define arch_write_can_lock(x) (ACCESS_ONCE((x)->lock) == 0)
/* /*
* Read locks are a bit more hairy: * Read locks are a bit more hairy:
* - Exclusively load the lock value. * - Exclusively load the lock value.
@ -274,14 +269,4 @@ static inline int arch_read_trylock(arch_rwlock_t *rw)
} }
} }
/* read_can_lock - would read_trylock() succeed? */
#define arch_read_can_lock(x) (ACCESS_ONCE((x)->lock) < 0x80000000)
#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
#define arch_spin_relax(lock) cpu_relax()
#define arch_read_relax(lock) cpu_relax()
#define arch_write_relax(lock) cpu_relax()
#endif /* __ASM_SPINLOCK_H */ #endif /* __ASM_SPINLOCK_H */

View file

@ -179,7 +179,7 @@ static int tegra20_idle_lp2_coupled(struct cpuidle_device *dev,
bool entered_lp2 = false; bool entered_lp2 = false;
if (tegra_pending_sgi()) if (tegra_pending_sgi())
ACCESS_ONCE(abort_flag) = true; WRITE_ONCE(abort_flag, true);
cpuidle_coupled_parallel_barrier(dev, &abort_barrier); cpuidle_coupled_parallel_barrier(dev, &abort_barrier);

View file

@ -35,7 +35,7 @@ static notrace u32 __vdso_read_begin(const struct vdso_data *vdata)
{ {
u32 seq; u32 seq;
repeat: repeat:
seq = ACCESS_ONCE(vdata->seq_count); seq = READ_ONCE(vdata->seq_count);
if (seq & 1) { if (seq & 1) {
cpu_relax(); cpu_relax();
goto repeat; goto repeat;

View file

@ -22,7 +22,24 @@ config ARM64
select ARCH_HAS_STRICT_MODULE_RWX select ARCH_HAS_STRICT_MODULE_RWX
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAVE_NMI_SAFE_CMPXCHG if ACPI_APEI_SEA select ARCH_HAVE_NMI_SAFE_CMPXCHG if ACPI_APEI_SEA
select ARCH_INLINE_READ_LOCK if !PREEMPT
select ARCH_INLINE_READ_LOCK_BH if !PREEMPT
select ARCH_INLINE_READ_LOCK_IRQ if !PREEMPT
select ARCH_INLINE_READ_LOCK_IRQSAVE if !PREEMPT
select ARCH_INLINE_READ_UNLOCK if !PREEMPT
select ARCH_INLINE_READ_UNLOCK_BH if !PREEMPT
select ARCH_INLINE_READ_UNLOCK_IRQ if !PREEMPT
select ARCH_INLINE_READ_UNLOCK_IRQRESTORE if !PREEMPT
select ARCH_INLINE_WRITE_LOCK if !PREEMPT
select ARCH_INLINE_WRITE_LOCK_BH if !PREEMPT
select ARCH_INLINE_WRITE_LOCK_IRQ if !PREEMPT
select ARCH_INLINE_WRITE_LOCK_IRQSAVE if !PREEMPT
select ARCH_INLINE_WRITE_UNLOCK if !PREEMPT
select ARCH_INLINE_WRITE_UNLOCK_BH if !PREEMPT
select ARCH_INLINE_WRITE_UNLOCK_IRQ if !PREEMPT
select ARCH_INLINE_WRITE_UNLOCK_IRQRESTORE if !PREEMPT
select ARCH_USE_CMPXCHG_LOCKREF select ARCH_USE_CMPXCHG_LOCKREF
select ARCH_USE_QUEUED_RWLOCKS
select ARCH_SUPPORTS_MEMORY_FAILURE select ARCH_SUPPORTS_MEMORY_FAILURE
select ARCH_SUPPORTS_ATOMIC_RMW select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_SUPPORTS_NUMA_BALANCING select ARCH_SUPPORTS_NUMA_BALANCING

View file

@ -16,6 +16,7 @@ generic-y += mcs_spinlock.h
generic-y += mm-arch-hooks.h generic-y += mm-arch-hooks.h
generic-y += msi.h generic-y += msi.h
generic-y += preempt.h generic-y += preempt.h
generic-y += qrwlock.h
generic-y += rwsem.h generic-y += rwsem.h
generic-y += segment.h generic-y += segment.h
generic-y += serial.h generic-y += serial.h

View file

@ -27,8 +27,6 @@
* instructions. * instructions.
*/ */
#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
static inline void arch_spin_lock(arch_spinlock_t *lock) static inline void arch_spin_lock(arch_spinlock_t *lock)
{ {
unsigned int tmp; unsigned int tmp;
@ -139,176 +137,7 @@ static inline int arch_spin_is_contended(arch_spinlock_t *lock)
} }
#define arch_spin_is_contended arch_spin_is_contended #define arch_spin_is_contended arch_spin_is_contended
/* #include <asm/qrwlock.h>
* Write lock implementation.
*
* Write locks set bit 31. Unlocking, is done by writing 0 since the lock is
* exclusively held.
*
* The memory barriers are implicit with the load-acquire and store-release
* instructions.
*/
static inline void arch_write_lock(arch_rwlock_t *rw)
{
unsigned int tmp;
asm volatile(ARM64_LSE_ATOMIC_INSN(
/* LL/SC */
" sevl\n"
"1: wfe\n"
"2: ldaxr %w0, %1\n"
" cbnz %w0, 1b\n"
" stxr %w0, %w2, %1\n"
" cbnz %w0, 2b\n"
__nops(1),
/* LSE atomics */
"1: mov %w0, wzr\n"
"2: casa %w0, %w2, %1\n"
" cbz %w0, 3f\n"
" ldxr %w0, %1\n"
" cbz %w0, 2b\n"
" wfe\n"
" b 1b\n"
"3:")
: "=&r" (tmp), "+Q" (rw->lock)
: "r" (0x80000000)
: "memory");
}
static inline int arch_write_trylock(arch_rwlock_t *rw)
{
unsigned int tmp;
asm volatile(ARM64_LSE_ATOMIC_INSN(
/* LL/SC */
"1: ldaxr %w0, %1\n"
" cbnz %w0, 2f\n"
" stxr %w0, %w2, %1\n"
" cbnz %w0, 1b\n"
"2:",
/* LSE atomics */
" mov %w0, wzr\n"
" casa %w0, %w2, %1\n"
__nops(2))
: "=&r" (tmp), "+Q" (rw->lock)
: "r" (0x80000000)
: "memory");
return !tmp;
}
static inline void arch_write_unlock(arch_rwlock_t *rw)
{
asm volatile(ARM64_LSE_ATOMIC_INSN(
" stlr wzr, %0",
" swpl wzr, wzr, %0")
: "=Q" (rw->lock) :: "memory");
}
/* write_can_lock - would write_trylock() succeed? */
#define arch_write_can_lock(x) ((x)->lock == 0)
/*
* Read lock implementation.
*
* It exclusively loads the lock value, increments it and stores the new value
* back if positive and the CPU still exclusively owns the location. If the
* value is negative, the lock is already held.
*
* During unlocking there may be multiple active read locks but no write lock.
*
* The memory barriers are implicit with the load-acquire and store-release
* instructions.
*
* Note that in UNDEFINED cases, such as unlocking a lock twice, the LL/SC
* and LSE implementations may exhibit different behaviour (although this
* will have no effect on lockdep).
*/
static inline void arch_read_lock(arch_rwlock_t *rw)
{
unsigned int tmp, tmp2;
asm volatile(
" sevl\n"
ARM64_LSE_ATOMIC_INSN(
/* LL/SC */
"1: wfe\n"
"2: ldaxr %w0, %2\n"
" add %w0, %w0, #1\n"
" tbnz %w0, #31, 1b\n"
" stxr %w1, %w0, %2\n"
" cbnz %w1, 2b\n"
__nops(1),
/* LSE atomics */
"1: wfe\n"
"2: ldxr %w0, %2\n"
" adds %w1, %w0, #1\n"
" tbnz %w1, #31, 1b\n"
" casa %w0, %w1, %2\n"
" sbc %w0, %w1, %w0\n"
" cbnz %w0, 2b")
: "=&r" (tmp), "=&r" (tmp2), "+Q" (rw->lock)
:
: "cc", "memory");
}
static inline void arch_read_unlock(arch_rwlock_t *rw)
{
unsigned int tmp, tmp2;
asm volatile(ARM64_LSE_ATOMIC_INSN(
/* LL/SC */
"1: ldxr %w0, %2\n"
" sub %w0, %w0, #1\n"
" stlxr %w1, %w0, %2\n"
" cbnz %w1, 1b",
/* LSE atomics */
" movn %w0, #0\n"
" staddl %w0, %2\n"
__nops(2))
: "=&r" (tmp), "=&r" (tmp2), "+Q" (rw->lock)
:
: "memory");
}
static inline int arch_read_trylock(arch_rwlock_t *rw)
{
unsigned int tmp, tmp2;
asm volatile(ARM64_LSE_ATOMIC_INSN(
/* LL/SC */
" mov %w1, #1\n"
"1: ldaxr %w0, %2\n"
" add %w0, %w0, #1\n"
" tbnz %w0, #31, 2f\n"
" stxr %w1, %w0, %2\n"
" cbnz %w1, 1b\n"
"2:",
/* LSE atomics */
" ldr %w0, %2\n"
" adds %w1, %w0, #1\n"
" tbnz %w1, #31, 1f\n"
" casa %w0, %w1, %2\n"
" sbc %w1, %w1, %w0\n"
__nops(1)
"1:")
: "=&r" (tmp), "=&r" (tmp2), "+Q" (rw->lock)
:
: "cc", "memory");
return !tmp2;
}
/* read_can_lock - would read_trylock() succeed? */
#define arch_read_can_lock(x) ((x)->lock < 0x80000000)
#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
#define arch_spin_relax(lock) cpu_relax()
#define arch_read_relax(lock) cpu_relax()
#define arch_write_relax(lock) cpu_relax()
/* See include/linux/spinlock.h */ /* See include/linux/spinlock.h */
#define smp_mb__after_spinlock() smp_mb() #define smp_mb__after_spinlock() smp_mb()

View file

@ -36,10 +36,6 @@ typedef struct {
#define __ARCH_SPIN_LOCK_UNLOCKED { 0 , 0 } #define __ARCH_SPIN_LOCK_UNLOCKED { 0 , 0 }
typedef struct { #include <asm-generic/qrwlock_types.h>
volatile unsigned int lock;
} arch_rwlock_t;
#define __ARCH_RW_LOCK_UNLOCKED { 0 }
#endif #endif

View file

@ -36,8 +36,6 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
__raw_spin_lock_asm(&lock->lock); __raw_spin_lock_asm(&lock->lock);
} }
#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
static inline int arch_spin_trylock(arch_spinlock_t *lock) static inline int arch_spin_trylock(arch_spinlock_t *lock)
{ {
return __raw_spin_trylock_asm(&lock->lock); return __raw_spin_trylock_asm(&lock->lock);
@ -48,23 +46,11 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
__raw_spin_unlock_asm(&lock->lock); __raw_spin_unlock_asm(&lock->lock);
} }
static inline int arch_read_can_lock(arch_rwlock_t *rw)
{
return __raw_uncached_fetch_asm(&rw->lock) > 0;
}
static inline int arch_write_can_lock(arch_rwlock_t *rw)
{
return __raw_uncached_fetch_asm(&rw->lock) == RW_LOCK_BIAS;
}
static inline void arch_read_lock(arch_rwlock_t *rw) static inline void arch_read_lock(arch_rwlock_t *rw)
{ {
__raw_read_lock_asm(&rw->lock); __raw_read_lock_asm(&rw->lock);
} }
#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
static inline int arch_read_trylock(arch_rwlock_t *rw) static inline int arch_read_trylock(arch_rwlock_t *rw)
{ {
return __raw_read_trylock_asm(&rw->lock); return __raw_read_trylock_asm(&rw->lock);
@ -80,8 +66,6 @@ static inline void arch_write_lock(arch_rwlock_t *rw)
__raw_write_lock_asm(&rw->lock); __raw_write_lock_asm(&rw->lock);
} }
#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
static inline int arch_write_trylock(arch_rwlock_t *rw) static inline int arch_write_trylock(arch_rwlock_t *rw)
{ {
return __raw_write_trylock_asm(&rw->lock); return __raw_write_trylock_asm(&rw->lock);
@ -92,10 +76,6 @@ static inline void arch_write_unlock(arch_rwlock_t *rw)
__raw_write_unlock_asm(&rw->lock); __raw_write_unlock_asm(&rw->lock);
} }
#define arch_spin_relax(lock) cpu_relax()
#define arch_read_relax(lock) cpu_relax()
#define arch_write_relax(lock) cpu_relax()
#endif #endif
#endif /* !__BFIN_SPINLOCK_H */ #endif /* !__BFIN_SPINLOCK_H */

View file

@ -86,16 +86,6 @@ static inline int arch_read_trylock(arch_rwlock_t *lock)
return temp; return temp;
} }
static inline int arch_read_can_lock(arch_rwlock_t *rwlock)
{
return rwlock->lock == 0;
}
static inline int arch_write_can_lock(arch_rwlock_t *rwlock)
{
return rwlock->lock == 0;
}
/* Stuffs a -1 in the lock value? */ /* Stuffs a -1 in the lock value? */
static inline void arch_write_lock(arch_rwlock_t *lock) static inline void arch_write_lock(arch_rwlock_t *lock)
{ {
@ -177,11 +167,6 @@ static inline unsigned int arch_spin_trylock(arch_spinlock_t *lock)
/* /*
* SMP spinlocks are intended to allow only a single CPU at the lock * SMP spinlocks are intended to allow only a single CPU at the lock
*/ */
#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
#define arch_spin_is_locked(x) ((x)->lock != 0) #define arch_spin_is_locked(x) ((x)->lock != 0)
#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
#endif #endif

View file

@ -38,15 +38,31 @@
/* /*
* lock for reading * lock for reading
*/ */
static inline void static inline int
__down_read (struct rw_semaphore *sem) ___down_read (struct rw_semaphore *sem)
{ {
long result = ia64_fetchadd8_acq((unsigned long *)&sem->count.counter, 1); long result = ia64_fetchadd8_acq((unsigned long *)&sem->count.counter, 1);
if (result < 0) return (result < 0);
}
static inline void
__down_read (struct rw_semaphore *sem)
{
if (___down_read(sem))
rwsem_down_read_failed(sem); rwsem_down_read_failed(sem);
} }
static inline int
__down_read_killable (struct rw_semaphore *sem)
{
if (___down_read(sem))
if (IS_ERR(rwsem_down_read_failed_killable(sem)))
return -EINTR;
return 0;
}
/* /*
* lock for writing * lock for writing
*/ */
@ -73,9 +89,10 @@ __down_write (struct rw_semaphore *sem)
static inline int static inline int
__down_write_killable (struct rw_semaphore *sem) __down_write_killable (struct rw_semaphore *sem)
{ {
if (___down_write(sem)) if (___down_write(sem)) {
if (IS_ERR(rwsem_down_write_failed_killable(sem))) if (IS_ERR(rwsem_down_write_failed_killable(sem)))
return -EINTR; return -EINTR;
}
return 0; return 0;
} }

View file

@ -62,7 +62,7 @@ static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock) static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
{ {
int tmp = ACCESS_ONCE(lock->lock); int tmp = READ_ONCE(lock->lock);
if (!(((tmp >> TICKET_SHIFT) ^ tmp) & TICKET_MASK)) if (!(((tmp >> TICKET_SHIFT) ^ tmp) & TICKET_MASK))
return ia64_cmpxchg(acq, &lock->lock, tmp, tmp + 1, sizeof (tmp)) == tmp; return ia64_cmpxchg(acq, &lock->lock, tmp, tmp + 1, sizeof (tmp)) == tmp;
@ -74,19 +74,19 @@ static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
unsigned short *p = (unsigned short *)&lock->lock + 1, tmp; unsigned short *p = (unsigned short *)&lock->lock + 1, tmp;
asm volatile ("ld2.bias %0=[%1]" : "=r"(tmp) : "r"(p)); asm volatile ("ld2.bias %0=[%1]" : "=r"(tmp) : "r"(p));
ACCESS_ONCE(*p) = (tmp + 2) & ~1; WRITE_ONCE(*p, (tmp + 2) & ~1);
} }
static inline int __ticket_spin_is_locked(arch_spinlock_t *lock) static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
{ {
long tmp = ACCESS_ONCE(lock->lock); long tmp = READ_ONCE(lock->lock);
return !!(((tmp >> TICKET_SHIFT) ^ tmp) & TICKET_MASK); return !!(((tmp >> TICKET_SHIFT) ^ tmp) & TICKET_MASK);
} }
static inline int __ticket_spin_is_contended(arch_spinlock_t *lock) static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
{ {
long tmp = ACCESS_ONCE(lock->lock); long tmp = READ_ONCE(lock->lock);
return ((tmp - (tmp >> TICKET_SHIFT)) & TICKET_MASK) > 1; return ((tmp - (tmp >> TICKET_SHIFT)) & TICKET_MASK) > 1;
} }
@ -127,9 +127,7 @@ static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
{ {
arch_spin_lock(lock); arch_spin_lock(lock);
} }
#define arch_spin_lock_flags arch_spin_lock_flags
#define arch_read_can_lock(rw) (*(volatile int *)(rw) >= 0)
#define arch_write_can_lock(rw) (*(volatile int *)(rw) == 0)
#ifdef ASM_SUPPORTED #ifdef ASM_SUPPORTED
@ -157,6 +155,7 @@ arch_read_lock_flags(arch_rwlock_t *lock, unsigned long flags)
: "p6", "p7", "r2", "memory"); : "p6", "p7", "r2", "memory");
} }
#define arch_read_lock_flags arch_read_lock_flags
#define arch_read_lock(lock) arch_read_lock_flags(lock, 0) #define arch_read_lock(lock) arch_read_lock_flags(lock, 0)
#else /* !ASM_SUPPORTED */ #else /* !ASM_SUPPORTED */
@ -209,6 +208,7 @@ arch_write_lock_flags(arch_rwlock_t *lock, unsigned long flags)
: "ar.ccv", "p6", "p7", "r2", "r29", "memory"); : "ar.ccv", "p6", "p7", "r2", "r29", "memory");
} }
#define arch_write_lock_flags arch_write_lock_flags
#define arch_write_lock(rw) arch_write_lock_flags(rw, 0) #define arch_write_lock(rw) arch_write_lock_flags(rw, 0)
#define arch_write_trylock(rw) \ #define arch_write_trylock(rw) \
@ -232,8 +232,6 @@ static inline void arch_write_unlock(arch_rwlock_t *x)
#else /* !ASM_SUPPORTED */ #else /* !ASM_SUPPORTED */
#define arch_write_lock_flags(l, flags) arch_write_lock(l)
#define arch_write_lock(l) \ #define arch_write_lock(l) \
({ \ ({ \
__u64 ia64_val, ia64_set_val = ia64_dep_mi(-1, 0, 31, 1); \ __u64 ia64_val, ia64_set_val = ia64_dep_mi(-1, 0, 31, 1); \
@ -273,8 +271,4 @@ static inline int arch_read_trylock(arch_rwlock_t *x)
return (u32)ia64_cmpxchg4_acq((__u32 *)(x), new.word, old.word) == old.word; return (u32)ia64_cmpxchg4_acq((__u32 *)(x), new.word, old.word) == old.word;
} }
#define arch_spin_relax(lock) cpu_relax()
#define arch_read_relax(lock) cpu_relax()
#define arch_write_relax(lock) cpu_relax()
#endif /* _ASM_IA64_SPINLOCK_H */ #endif /* _ASM_IA64_SPINLOCK_H */

View file

@ -29,7 +29,6 @@
*/ */
#define arch_spin_is_locked(x) (*(volatile int *)(&(x)->slock) <= 0) #define arch_spin_is_locked(x) (*(volatile int *)(&(x)->slock) <= 0)
#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
/** /**
* arch_spin_trylock - Try spin lock and return a result * arch_spin_trylock - Try spin lock and return a result
@ -138,18 +137,6 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
* semaphore.h for details. -ben * semaphore.h for details. -ben
*/ */
/**
* read_can_lock - would read_trylock() succeed?
* @lock: the rwlock in question.
*/
#define arch_read_can_lock(x) ((int)(x)->lock > 0)
/**
* write_can_lock - would write_trylock() succeed?
* @lock: the rwlock in question.
*/
#define arch_write_can_lock(x) ((x)->lock == RW_LOCK_BIAS)
static inline void arch_read_lock(arch_rwlock_t *rw) static inline void arch_read_lock(arch_rwlock_t *rw)
{ {
unsigned long tmp0, tmp1; unsigned long tmp0, tmp1;
@ -318,11 +305,4 @@ static inline int arch_write_trylock(arch_rwlock_t *lock)
return 0; return 0;
} }
#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
#define arch_spin_relax(lock) cpu_relax()
#define arch_read_relax(lock) cpu_relax()
#define arch_write_relax(lock) cpu_relax()
#endif /* _ASM_M32R_SPINLOCK_H */ #endif /* _ASM_M32R_SPINLOCK_H */

View file

@ -16,13 +16,4 @@
* locked. * locked.
*/ */
#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
#define arch_spin_relax(lock) cpu_relax()
#define arch_read_relax(lock) cpu_relax()
#define arch_write_relax(lock) cpu_relax()
#endif /* __ASM_SPINLOCK_H */ #endif /* __ASM_SPINLOCK_H */

View file

@ -137,21 +137,6 @@ static inline void arch_write_unlock(arch_rwlock_t *rw)
: "memory"); : "memory");
} }
/* write_can_lock - would write_trylock() succeed? */
static inline int arch_write_can_lock(arch_rwlock_t *rw)
{
int ret;
asm volatile ("LNKGETD %0, [%1]\n"
"CMP %0, #0\n"
"MOV %0, #1\n"
"XORNZ %0, %0, %0\n"
: "=&d" (ret)
: "da" (&rw->lock)
: "cc");
return ret;
}
/* /*
* Read locks are a bit more hairy: * Read locks are a bit more hairy:
* - Exclusively load the lock value. * - Exclusively load the lock value.
@ -225,26 +210,4 @@ static inline int arch_read_trylock(arch_rwlock_t *rw)
return tmp; return tmp;
} }
/* read_can_lock - would read_trylock() succeed? */
static inline int arch_read_can_lock(arch_rwlock_t *rw)
{
int tmp;
asm volatile ("LNKGETD %0, [%1]\n"
"CMP %0, %2\n"
"MOV %0, #1\n"
"XORZ %0, %0, %0\n"
: "=&d" (tmp)
: "da" (&rw->lock), "bd" (0x80000000)
: "cc");
return tmp;
}
#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
#define arch_spin_relax(lock) cpu_relax()
#define arch_read_relax(lock) cpu_relax()
#define arch_write_relax(lock) cpu_relax()
#endif /* __ASM_SPINLOCK_LNKGET_H */ #endif /* __ASM_SPINLOCK_LNKGET_H */

View file

@ -105,16 +105,6 @@ static inline void arch_write_unlock(arch_rwlock_t *rw)
rw->lock = 0; rw->lock = 0;
} }
/* write_can_lock - would write_trylock() succeed? */
static inline int arch_write_can_lock(arch_rwlock_t *rw)
{
unsigned int ret;
barrier();
ret = rw->lock;
return (ret == 0);
}
/* /*
* Read locks are a bit more hairy: * Read locks are a bit more hairy:
* - Exclusively load the lock value. * - Exclusively load the lock value.
@ -172,14 +162,4 @@ static inline int arch_read_trylock(arch_rwlock_t *rw)
return (ret < 0x80000000); return (ret < 0x80000000);
} }
/* read_can_lock - would read_trylock() succeed? */
static inline int arch_read_can_lock(arch_rwlock_t *rw)
{
unsigned int ret;
barrier();
ret = rw->lock;
return (ret < 0x80000000);
}
#endif /* __ASM_SPINLOCK_LOCK1_H */ #endif /* __ASM_SPINLOCK_LOCK1_H */

View file

@ -13,11 +13,4 @@
#include <asm/qrwlock.h> #include <asm/qrwlock.h>
#include <asm/qspinlock.h> #include <asm/qspinlock.h>
#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
#define arch_spin_relax(lock) cpu_relax()
#define arch_read_relax(lock) cpu_relax()
#define arch_write_relax(lock) cpu_relax()
#endif /* _ASM_SPINLOCK_H */ #endif /* _ASM_SPINLOCK_H */

View file

@ -99,7 +99,7 @@ static inline u32 vdso_data_read_begin(const union mips_vdso_data *data)
u32 seq; u32 seq;
while (true) { while (true) {
seq = ACCESS_ONCE(data->seq_count); seq = READ_ONCE(data->seq_count);
if (likely(!(seq & 1))) { if (likely(!(seq & 1))) {
/* Paired with smp_wmb() in vdso_data_write_*(). */ /* Paired with smp_wmb() in vdso_data_write_*(). */
smp_rmb(); smp_rmb();

View file

@ -166,7 +166,7 @@ int cps_pm_enter_state(enum cps_pm_state state)
nc_core_ready_count = nc_addr; nc_core_ready_count = nc_addr;
/* Ensure ready_count is zero-initialised before the assembly runs */ /* Ensure ready_count is zero-initialised before the assembly runs */
ACCESS_ONCE(*nc_core_ready_count) = 0; WRITE_ONCE(*nc_core_ready_count, 0);
coupled_barrier(&per_cpu(pm_barrier, core), online); coupled_barrier(&per_cpu(pm_barrier, core), online);
/* Run the generated entry code */ /* Run the generated entry code */

View file

@ -84,6 +84,7 @@ static inline void arch_spin_lock_flags(arch_spinlock_t *lock,
: "d" (flags), "a"(&lock->slock), "i"(EPSW_IE | MN10300_CLI_LEVEL) : "d" (flags), "a"(&lock->slock), "i"(EPSW_IE | MN10300_CLI_LEVEL)
: "memory", "cc"); : "memory", "cc");
} }
#define arch_spin_lock_flags arch_spin_lock_flags
#ifdef __KERNEL__ #ifdef __KERNEL__
@ -98,18 +99,6 @@ static inline void arch_spin_lock_flags(arch_spinlock_t *lock,
* read-locks. * read-locks.
*/ */
/**
* read_can_lock - would read_trylock() succeed?
* @lock: the rwlock in question.
*/
#define arch_read_can_lock(x) ((int)(x)->lock > 0)
/**
* write_can_lock - would write_trylock() succeed?
* @lock: the rwlock in question.
*/
#define arch_write_can_lock(x) ((x)->lock == RW_LOCK_BIAS)
/* /*
* On mn10300, we implement read-write locks as a 32-bit counter * On mn10300, we implement read-write locks as a 32-bit counter
* with the high bit (sign) being the "contended" bit. * with the high bit (sign) being the "contended" bit.
@ -183,9 +172,6 @@ static inline int arch_write_trylock(arch_rwlock_t *lock)
return 0; return 0;
} }
#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
#define _raw_spin_relax(lock) cpu_relax() #define _raw_spin_relax(lock) cpu_relax()
#define _raw_read_relax(lock) cpu_relax() #define _raw_read_relax(lock) cpu_relax()
#define _raw_write_relax(lock) cpu_relax() #define _raw_write_relax(lock) cpu_relax()

View file

@ -543,7 +543,7 @@ static void mn10300_serial_receive_interrupt(struct mn10300_serial_port *port)
try_again: try_again:
/* pull chars out of the hat */ /* pull chars out of the hat */
ix = ACCESS_ONCE(port->rx_outp); ix = READ_ONCE(port->rx_outp);
if (CIRC_CNT(port->rx_inp, ix, MNSC_BUFFER_SIZE) == 0) { if (CIRC_CNT(port->rx_inp, ix, MNSC_BUFFER_SIZE) == 0) {
if (push && !tport->low_latency) if (push && !tport->low_latency)
tty_flip_buffer_push(tport); tty_flip_buffer_push(tport);
@ -1724,7 +1724,7 @@ static int mn10300_serial_poll_get_char(struct uart_port *_port)
if (mn10300_serial_int_tbl[port->rx_irq].port != NULL) { if (mn10300_serial_int_tbl[port->rx_irq].port != NULL) {
do { do {
/* pull chars out of the hat */ /* pull chars out of the hat */
ix = ACCESS_ONCE(port->rx_outp); ix = READ_ONCE(port->rx_outp);
if (CIRC_CNT(port->rx_inp, ix, MNSC_BUFFER_SIZE) == 0) if (CIRC_CNT(port->rx_inp, ix, MNSC_BUFFER_SIZE) == 0)
return NO_POLL_CHAR; return NO_POLL_CHAR;

View file

@ -261,7 +261,7 @@ atomic64_set(atomic64_t *v, s64 i)
static __inline__ s64 static __inline__ s64
atomic64_read(const atomic64_t *v) atomic64_read(const atomic64_t *v)
{ {
return ACCESS_ONCE((v)->counter); return READ_ONCE((v)->counter);
} }
#define atomic64_inc(v) (atomic64_add( 1,(v))) #define atomic64_inc(v) (atomic64_add( 1,(v)))

View file

@ -32,6 +32,7 @@ static inline void arch_spin_lock_flags(arch_spinlock_t *x,
cpu_relax(); cpu_relax();
mb(); mb();
} }
#define arch_spin_lock_flags arch_spin_lock_flags
static inline void arch_spin_unlock(arch_spinlock_t *x) static inline void arch_spin_unlock(arch_spinlock_t *x)
{ {
@ -169,25 +170,4 @@ static __inline__ int arch_write_trylock(arch_rwlock_t *rw)
return result; return result;
} }
/*
* read_can_lock - would read_trylock() succeed?
* @lock: the rwlock in question.
*/
static __inline__ int arch_read_can_lock(arch_rwlock_t *rw)
{
return rw->counter >= 0;
}
/*
* write_can_lock - would write_trylock() succeed?
* @lock: the rwlock in question.
*/
static __inline__ int arch_write_can_lock(arch_rwlock_t *rw)
{
return !rw->counter;
}
#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
#endif /* __ASM_SPINLOCK_H */ #endif /* __ASM_SPINLOCK_H */

View file

@ -161,6 +161,7 @@ void arch_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
local_irq_restore(flags_dis); local_irq_restore(flags_dis);
} }
} }
#define arch_spin_lock_flags arch_spin_lock_flags
static inline void arch_spin_unlock(arch_spinlock_t *lock) static inline void arch_spin_unlock(arch_spinlock_t *lock)
{ {
@ -181,9 +182,6 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
* read-locks. * read-locks.
*/ */
#define arch_read_can_lock(rw) ((rw)->lock >= 0)
#define arch_write_can_lock(rw) (!(rw)->lock)
#ifdef CONFIG_PPC64 #ifdef CONFIG_PPC64
#define __DO_SIGN_EXTEND "extsw %0,%0\n" #define __DO_SIGN_EXTEND "extsw %0,%0\n"
#define WRLOCK_TOKEN LOCK_TOKEN /* it's negative */ #define WRLOCK_TOKEN LOCK_TOKEN /* it's negative */
@ -302,9 +300,6 @@ static inline void arch_write_unlock(arch_rwlock_t *rw)
rw->lock = 0; rw->lock = 0;
} }
#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
#define arch_spin_relax(lock) __spin_yield(lock) #define arch_spin_relax(lock) __spin_yield(lock)
#define arch_read_relax(lock) __rw_yield(lock) #define arch_read_relax(lock) __rw_yield(lock)
#define arch_write_relax(lock) __rw_yield(lock) #define arch_write_relax(lock) __rw_yield(lock)

View file

@ -78,7 +78,7 @@ static unsigned long lock_rtas(void)
local_irq_save(flags); local_irq_save(flags);
preempt_disable(); preempt_disable();
arch_spin_lock_flags(&rtas.lock, flags); arch_spin_lock(&rtas.lock);
return flags; return flags;
} }

View file

@ -43,7 +43,7 @@ ssize_t opal_msglog_copy(char *to, loff_t pos, size_t count)
if (!opal_memcons) if (!opal_memcons)
return -ENODEV; return -ENODEV;
out_pos = be32_to_cpu(ACCESS_ONCE(opal_memcons->out_pos)); out_pos = be32_to_cpu(READ_ONCE(opal_memcons->out_pos));
/* Now we've read out_pos, put a barrier in before reading the new /* Now we've read out_pos, put a barrier in before reading the new
* data it points to in conbuf. */ * data it points to in conbuf. */

View file

@ -38,6 +38,7 @@ bool arch_vcpu_is_preempted(int cpu);
*/ */
void arch_spin_relax(arch_spinlock_t *lock); void arch_spin_relax(arch_spinlock_t *lock);
#define arch_spin_relax arch_spin_relax
void arch_spin_lock_wait(arch_spinlock_t *); void arch_spin_lock_wait(arch_spinlock_t *);
int arch_spin_trylock_retry(arch_spinlock_t *); int arch_spin_trylock_retry(arch_spinlock_t *);
@ -76,6 +77,7 @@ static inline void arch_spin_lock_flags(arch_spinlock_t *lp,
if (!arch_spin_trylock_once(lp)) if (!arch_spin_trylock_once(lp))
arch_spin_lock_wait(lp); arch_spin_lock_wait(lp);
} }
#define arch_spin_lock_flags arch_spin_lock_flags
static inline int arch_spin_trylock(arch_spinlock_t *lp) static inline int arch_spin_trylock(arch_spinlock_t *lp)
{ {
@ -105,20 +107,6 @@ static inline void arch_spin_unlock(arch_spinlock_t *lp)
* read-locks. * read-locks.
*/ */
/**
* read_can_lock - would read_trylock() succeed?
* @lock: the rwlock in question.
*/
#define arch_read_can_lock(x) (((x)->cnts & 0xffff0000) == 0)
/**
* write_can_lock - would write_trylock() succeed?
* @lock: the rwlock in question.
*/
#define arch_write_can_lock(x) ((x)->cnts == 0)
#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
#define arch_read_relax(rw) barrier() #define arch_read_relax(rw) barrier()
#define arch_write_relax(rw) barrier() #define arch_write_relax(rw) barrier()

View file

@ -215,7 +215,7 @@ static inline void arch_spin_lock_classic(arch_spinlock_t *lp)
lockval = SPINLOCK_LOCKVAL; /* cpu + 1 */ lockval = SPINLOCK_LOCKVAL; /* cpu + 1 */
/* Pass the virtual CPU to the lock holder if it is not running */ /* Pass the virtual CPU to the lock holder if it is not running */
owner = arch_spin_yield_target(ACCESS_ONCE(lp->lock), NULL); owner = arch_spin_yield_target(READ_ONCE(lp->lock), NULL);
if (owner && arch_vcpu_is_preempted(owner - 1)) if (owner && arch_vcpu_is_preempted(owner - 1))
smp_yield_cpu(owner - 1); smp_yield_cpu(owner - 1);

View file

@ -27,7 +27,6 @@ static inline unsigned __sl_cas(volatile unsigned *p, unsigned old, unsigned new
*/ */
#define arch_spin_is_locked(x) ((x)->lock <= 0) #define arch_spin_is_locked(x) ((x)->lock <= 0)
#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
static inline void arch_spin_lock(arch_spinlock_t *lock) static inline void arch_spin_lock(arch_spinlock_t *lock)
{ {
@ -53,18 +52,6 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock)
* read-locks. * read-locks.
*/ */
/**
* read_can_lock - would read_trylock() succeed?
* @lock: the rwlock in question.
*/
#define arch_read_can_lock(x) ((x)->lock > 0)
/**
* write_can_lock - would write_trylock() succeed?
* @lock: the rwlock in question.
*/
#define arch_write_can_lock(x) ((x)->lock == RW_LOCK_BIAS)
static inline void arch_read_lock(arch_rwlock_t *rw) static inline void arch_read_lock(arch_rwlock_t *rw)
{ {
unsigned old; unsigned old;
@ -102,11 +89,4 @@ static inline int arch_write_trylock(arch_rwlock_t *rw)
return __sl_cas(&rw->lock, RW_LOCK_BIAS, 0) == RW_LOCK_BIAS; return __sl_cas(&rw->lock, RW_LOCK_BIAS, 0) == RW_LOCK_BIAS;
} }
#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
#define arch_spin_relax(lock) cpu_relax()
#define arch_read_relax(lock) cpu_relax()
#define arch_write_relax(lock) cpu_relax()
#endif /* __ASM_SH_SPINLOCK_CAS_H */ #endif /* __ASM_SH_SPINLOCK_CAS_H */

View file

@ -19,7 +19,6 @@
*/ */
#define arch_spin_is_locked(x) ((x)->lock <= 0) #define arch_spin_is_locked(x) ((x)->lock <= 0)
#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
/* /*
* Simple spin lock operations. There are two variants, one clears IRQ's * Simple spin lock operations. There are two variants, one clears IRQ's
@ -89,18 +88,6 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock)
* read-locks. * read-locks.
*/ */
/**
* read_can_lock - would read_trylock() succeed?
* @lock: the rwlock in question.
*/
#define arch_read_can_lock(x) ((x)->lock > 0)
/**
* write_can_lock - would write_trylock() succeed?
* @lock: the rwlock in question.
*/
#define arch_write_can_lock(x) ((x)->lock == RW_LOCK_BIAS)
static inline void arch_read_lock(arch_rwlock_t *rw) static inline void arch_read_lock(arch_rwlock_t *rw)
{ {
unsigned long tmp; unsigned long tmp;
@ -209,11 +196,4 @@ static inline int arch_write_trylock(arch_rwlock_t *rw)
return (oldval > (RW_LOCK_BIAS - 1)); return (oldval > (RW_LOCK_BIAS - 1));
} }
#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
#define arch_spin_relax(lock) cpu_relax()
#define arch_read_relax(lock) cpu_relax()
#define arch_write_relax(lock) cpu_relax()
#endif /* __ASM_SH_SPINLOCK_LLSC_H */ #endif /* __ASM_SH_SPINLOCK_LLSC_H */

View file

@ -32,7 +32,7 @@ void atomic_set(atomic_t *, int);
#define atomic_set_release(v, i) atomic_set((v), (i)) #define atomic_set_release(v, i) atomic_set((v), (i))
#define atomic_read(v) ACCESS_ONCE((v)->counter) #define atomic_read(v) READ_ONCE((v)->counter)
#define atomic_add(i, v) ((void)atomic_add_return( (int)(i), (v))) #define atomic_add(i, v) ((void)atomic_add_return( (int)(i), (v)))
#define atomic_sub(i, v) ((void)atomic_add_return(-(int)(i), (v))) #define atomic_sub(i, v) ((void)atomic_add_return(-(int)(i), (v)))

View file

@ -7,6 +7,7 @@
#if defined(__sparc__) && defined(__arch64__) #if defined(__sparc__) && defined(__arch64__)
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
#include <linux/compiler.h>
#include <linux/threads.h> #include <linux/threads.h>
#include <asm/switch_to.h> #include <asm/switch_to.h>

View file

@ -183,17 +183,6 @@ static inline int __arch_read_trylock(arch_rwlock_t *rw)
res; \ res; \
}) })
#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
#define arch_read_lock_flags(rw, flags) arch_read_lock(rw)
#define arch_write_lock_flags(rw, flags) arch_write_lock(rw)
#define arch_spin_relax(lock) cpu_relax()
#define arch_read_relax(lock) cpu_relax()
#define arch_write_relax(lock) cpu_relax()
#define arch_read_can_lock(rw) (!((rw)->lock & 0xff))
#define arch_write_can_lock(rw) (!(rw)->lock)
#endif /* !(__ASSEMBLY__) */ #endif /* !(__ASSEMBLY__) */
#endif /* __SPARC_SPINLOCK_H */ #endif /* __SPARC_SPINLOCK_H */

View file

@ -14,13 +14,6 @@
#include <asm/qrwlock.h> #include <asm/qrwlock.h>
#include <asm/qspinlock.h> #include <asm/qspinlock.h>
#define arch_read_lock_flags(p, f) arch_read_lock(p)
#define arch_write_lock_flags(p, f) arch_write_lock(p)
#define arch_spin_relax(lock) cpu_relax()
#define arch_read_relax(lock) cpu_relax()
#define arch_write_relax(lock) cpu_relax()
#endif /* !(__ASSEMBLY__) */ #endif /* !(__ASSEMBLY__) */
#endif /* !(__SPARC64_SPINLOCK_H) */ #endif /* !(__SPARC64_SPINLOCK_H) */

View file

@ -163,14 +163,14 @@ int __gxio_dma_queue_is_complete(__gxio_dma_queue_t *dma_queue,
int64_t completion_slot, int update) int64_t completion_slot, int update)
{ {
if (update) { if (update) {
if (ACCESS_ONCE(dma_queue->hw_complete_count) > if (READ_ONCE(dma_queue->hw_complete_count) >
completion_slot) completion_slot)
return 1; return 1;
__gxio_dma_queue_update_credits(dma_queue); __gxio_dma_queue_update_credits(dma_queue);
} }
return ACCESS_ONCE(dma_queue->hw_complete_count) > completion_slot; return READ_ONCE(dma_queue->hw_complete_count) > completion_slot;
} }
EXPORT_SYMBOL_GPL(__gxio_dma_queue_is_complete); EXPORT_SYMBOL_GPL(__gxio_dma_queue_is_complete);

View file

@ -51,9 +51,6 @@ static inline int arch_spin_is_locked(arch_spinlock_t *lock)
void arch_spin_lock(arch_spinlock_t *lock); void arch_spin_lock(arch_spinlock_t *lock);
/* We cannot take an interrupt after getting a ticket, so don't enable them. */
#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
int arch_spin_trylock(arch_spinlock_t *lock); int arch_spin_trylock(arch_spinlock_t *lock);
static inline void arch_spin_unlock(arch_spinlock_t *lock) static inline void arch_spin_unlock(arch_spinlock_t *lock)
@ -79,22 +76,6 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
#define _RD_COUNT_SHIFT 24 #define _RD_COUNT_SHIFT 24
#define _RD_COUNT_WIDTH 8 #define _RD_COUNT_WIDTH 8
/**
* arch_read_can_lock() - would read_trylock() succeed?
*/
static inline int arch_read_can_lock(arch_rwlock_t *rwlock)
{
return (rwlock->lock << _RD_COUNT_WIDTH) == 0;
}
/**
* arch_write_can_lock() - would write_trylock() succeed?
*/
static inline int arch_write_can_lock(arch_rwlock_t *rwlock)
{
return rwlock->lock == 0;
}
/** /**
* arch_read_lock() - acquire a read lock. * arch_read_lock() - acquire a read lock.
*/ */
@ -125,7 +106,4 @@ void arch_read_unlock(arch_rwlock_t *rwlock);
*/ */
void arch_write_unlock(arch_rwlock_t *rwlock); void arch_write_unlock(arch_rwlock_t *rwlock);
#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
#endif /* _ASM_TILE_SPINLOCK_32_H */ #endif /* _ASM_TILE_SPINLOCK_32_H */

View file

@ -75,9 +75,6 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
/* Try to get the lock, and return whether we succeeded. */ /* Try to get the lock, and return whether we succeeded. */
int arch_spin_trylock(arch_spinlock_t *lock); int arch_spin_trylock(arch_spinlock_t *lock);
/* We cannot take an interrupt after getting a ticket, so don't enable them. */
#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
/* /*
* Read-write spinlocks, allowing multiple readers * Read-write spinlocks, allowing multiple readers
* but only one writer. * but only one writer.
@ -93,24 +90,6 @@ static inline int arch_write_val_locked(int val)
return val < 0; /* Optimize "val & __WRITE_LOCK_BIT". */ return val < 0; /* Optimize "val & __WRITE_LOCK_BIT". */
} }
/**
* read_can_lock - would read_trylock() succeed?
* @lock: the rwlock in question.
*/
static inline int arch_read_can_lock(arch_rwlock_t *rw)
{
return !arch_write_val_locked(rw->lock);
}
/**
* write_can_lock - would write_trylock() succeed?
* @lock: the rwlock in question.
*/
static inline int arch_write_can_lock(arch_rwlock_t *rw)
{
return rw->lock == 0;
}
extern void __read_lock_failed(arch_rwlock_t *rw); extern void __read_lock_failed(arch_rwlock_t *rw);
static inline void arch_read_lock(arch_rwlock_t *rw) static inline void arch_read_lock(arch_rwlock_t *rw)
@ -156,7 +135,4 @@ static inline int arch_write_trylock(arch_rwlock_t *rw)
return 0; return 0;
} }
#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
#endif /* _ASM_TILE_SPINLOCK_64_H */ #endif /* _ASM_TILE_SPINLOCK_64_H */

View file

@ -121,7 +121,7 @@ static inline int64_t __gxio_dma_queue_reserve(__gxio_dma_queue_t *dma_queue,
* if the result is LESS than "hw_complete_count". * if the result is LESS than "hw_complete_count".
*/ */
uint64_t complete; uint64_t complete;
complete = ACCESS_ONCE(dma_queue->hw_complete_count); complete = READ_ONCE(dma_queue->hw_complete_count);
slot |= (complete & 0xffffffffff000000); slot |= (complete & 0xffffffffff000000);
if (slot < complete) if (slot < complete)
slot += 0x1000000; slot += 0x1000000;

View file

@ -255,7 +255,7 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
int do_syscall_trace_enter(struct pt_regs *regs) int do_syscall_trace_enter(struct pt_regs *regs)
{ {
u32 work = ACCESS_ONCE(current_thread_info()->flags); u32 work = READ_ONCE(current_thread_info()->flags);
if ((work & _TIF_SYSCALL_TRACE) && if ((work & _TIF_SYSCALL_TRACE) &&
tracehook_report_syscall_entry(regs)) { tracehook_report_syscall_entry(regs)) {

View file

@ -41,7 +41,7 @@
typedef int (*initcall_t)(void); typedef int (*initcall_t)(void);
typedef void (*exitcall_t)(void); typedef void (*exitcall_t)(void);
#include <linux/compiler.h> #include <linux/compiler_types.h>
/* These are for everybody (although not all archs will actually /* These are for everybody (although not all archs will actually
discard it in modules) */ discard it in modules) */

View file

@ -56,7 +56,7 @@ config X86
select ARCH_HAS_KCOV if X86_64 select ARCH_HAS_KCOV if X86_64
select ARCH_HAS_PMEM_API if X86_64 select ARCH_HAS_PMEM_API if X86_64
# Causing hangs/crashes, see the commit that added this change for details. # Causing hangs/crashes, see the commit that added this change for details.
select ARCH_HAS_REFCOUNT if BROKEN select ARCH_HAS_REFCOUNT
select ARCH_HAS_UACCESS_FLUSHCACHE if X86_64 select ARCH_HAS_UACCESS_FLUSHCACHE if X86_64
select ARCH_HAS_SET_MEMORY select ARCH_HAS_SET_MEMORY
select ARCH_HAS_SG_CHAIN select ARCH_HAS_SG_CHAIN

View file

@ -75,7 +75,7 @@ static long syscall_trace_enter(struct pt_regs *regs)
if (IS_ENABLED(CONFIG_DEBUG_ENTRY)) if (IS_ENABLED(CONFIG_DEBUG_ENTRY))
BUG_ON(regs != task_pt_regs(current)); BUG_ON(regs != task_pt_regs(current));
work = ACCESS_ONCE(ti->flags) & _TIF_WORK_SYSCALL_ENTRY; work = READ_ONCE(ti->flags) & _TIF_WORK_SYSCALL_ENTRY;
if (unlikely(work & _TIF_SYSCALL_EMU)) if (unlikely(work & _TIF_SYSCALL_EMU))
emulated = true; emulated = true;
@ -186,9 +186,7 @@ __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
addr_limit_user_check(); addr_limit_user_check();
if (IS_ENABLED(CONFIG_PROVE_LOCKING) && WARN_ON(!irqs_disabled())) lockdep_assert_irqs_disabled();
local_irq_disable();
lockdep_sys_exit(); lockdep_sys_exit();
cached_flags = READ_ONCE(ti->flags); cached_flags = READ_ONCE(ti->flags);
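
The hunk above is the shape of the whole "Use lockdep to assert IRQs are disabled/enabled" series: an open-coded CONFIG_PROVE_LOCKING plus irqs_disabled() check becomes a lockdep assertion that compiles away when lockdep is off. A short usage sketch of the two new helpers; the functions below are hypothetical:

        static void per_cpu_fast_path(void)
        {
                lockdep_assert_irqs_disabled(); /* caller must run with IRQs off */
                /* ... touch per-CPU state that the IRQ handler also uses ... */
        }

        static void slow_path_that_may_sleep(void)
        {
                lockdep_assert_irqs_enabled();  /* sleeping with IRQs off is a bug */
                /* ... */
        }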

View file

@ -318,7 +318,7 @@ int gettimeofday(struct timeval *, struct timezone *)
notrace time_t __vdso_time(time_t *t) notrace time_t __vdso_time(time_t *t)
{ {
/* This is atomic on x86 so we don't need any locks. */ /* This is atomic on x86 so we don't need any locks. */
time_t result = ACCESS_ONCE(gtod->wall_time_sec); time_t result = READ_ONCE(gtod->wall_time_sec);
if (t) if (t)
*t = result; *t = result;

View file

@ -2118,7 +2118,7 @@ static int x86_pmu_event_init(struct perf_event *event)
event->destroy(event); event->destroy(event);
} }
if (ACCESS_ONCE(x86_pmu.attr_rdpmc)) if (READ_ONCE(x86_pmu.attr_rdpmc))
event->hw.flags |= PERF_X86_EVENT_RDPMC_ALLOWED; event->hw.flags |= PERF_X86_EVENT_RDPMC_ALLOWED;
return err; return err;
@ -2371,7 +2371,7 @@ static unsigned long get_segment_base(unsigned int segment)
struct ldt_struct *ldt; struct ldt_struct *ldt;
/* IRQs are off, so this synchronizes with smp_store_release */ /* IRQs are off, so this synchronizes with smp_store_release */
ldt = lockless_dereference(current->active_mm->context.ldt); ldt = READ_ONCE(current->active_mm->context.ldt);
if (!ldt || idx >= ldt->nr_entries) if (!ldt || idx >= ldt->nr_entries)
return 0; return 0;

View file

@ -12,11 +12,11 @@
*/ */
#ifdef CONFIG_X86_32 #ifdef CONFIG_X86_32
#define mb() asm volatile(ALTERNATIVE("lock; addl $0,0(%%esp)", "mfence", \ #define mb() asm volatile(ALTERNATIVE("lock; addl $0,-4(%%esp)", "mfence", \
X86_FEATURE_XMM2) ::: "memory", "cc") X86_FEATURE_XMM2) ::: "memory", "cc")
#define rmb() asm volatile(ALTERNATIVE("lock; addl $0,0(%%esp)", "lfence", \ #define rmb() asm volatile(ALTERNATIVE("lock; addl $0,-4(%%esp)", "lfence", \
X86_FEATURE_XMM2) ::: "memory", "cc") X86_FEATURE_XMM2) ::: "memory", "cc")
#define wmb() asm volatile(ALTERNATIVE("lock; addl $0,0(%%esp)", "sfence", \ #define wmb() asm volatile(ALTERNATIVE("lock; addl $0,-4(%%esp)", "sfence", \
X86_FEATURE_XMM2) ::: "memory", "cc") X86_FEATURE_XMM2) ::: "memory", "cc")
#else #else
#define mb() asm volatile("mfence":::"memory") #define mb() asm volatile("mfence":::"memory")
@ -31,7 +31,11 @@
#endif #endif
#define dma_wmb() barrier() #define dma_wmb() barrier()
#define __smp_mb() mb() #ifdef CONFIG_X86_32
#define __smp_mb() asm volatile("lock; addl $0,-4(%%esp)" ::: "memory", "cc")
#else
#define __smp_mb() asm volatile("lock; addl $0,-4(%%rsp)" ::: "memory", "cc")
#endif
#define __smp_rmb() dma_rmb() #define __smp_rmb() dma_rmb()
#define __smp_wmb() barrier() #define __smp_wmb() barrier()
#define __smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0) #define __smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)
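
A locked ADD to the stack is a cheaper full barrier than MFENCE for the CPU-to-CPU ordering that smp_mb() provides; mb() keeps MFENCE, which is still needed to order things like non-temporal stores. The store-buffering pattern below is the classic case smp_mb() exists for (a sketch; X, Y, r0 and r1 are illustrative shared/local variables):

        /* CPU 0 */                     /* CPU 1 */
        WRITE_ONCE(X, 1);               WRITE_ONCE(Y, 1);
        smp_mb();                       smp_mb();
        r0 = READ_ONCE(Y);              r1 = READ_ONCE(X);

        /* with both barriers, the outcome r0 == 0 && r1 == 0 is forbidden */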

View file

@ -73,8 +73,8 @@ static inline void load_mm_ldt(struct mm_struct *mm)
#ifdef CONFIG_MODIFY_LDT_SYSCALL #ifdef CONFIG_MODIFY_LDT_SYSCALL
struct ldt_struct *ldt; struct ldt_struct *ldt;
/* lockless_dereference synchronizes with smp_store_release */ /* READ_ONCE synchronizes with smp_store_release */
ldt = lockless_dereference(mm->context.ldt); ldt = READ_ONCE(mm->context.ldt);
/* /*
* Any change to mm->context.ldt is followed by an IPI to all * Any change to mm->context.ldt is followed by an IPI to all

Просмотреть файл

@ -2,6 +2,7 @@
#ifndef _ASM_X86_QSPINLOCK_H #ifndef _ASM_X86_QSPINLOCK_H
#define _ASM_X86_QSPINLOCK_H #define _ASM_X86_QSPINLOCK_H
#include <linux/jump_label.h>
#include <asm/cpufeature.h> #include <asm/cpufeature.h>
#include <asm-generic/qspinlock_types.h> #include <asm-generic/qspinlock_types.h>
#include <asm/paravirt.h> #include <asm/paravirt.h>
@ -47,10 +48,14 @@ static inline void queued_spin_unlock(struct qspinlock *lock)
#endif #endif
#ifdef CONFIG_PARAVIRT #ifdef CONFIG_PARAVIRT
DECLARE_STATIC_KEY_TRUE(virt_spin_lock_key);
void native_pv_lock_init(void) __init;
#define virt_spin_lock virt_spin_lock #define virt_spin_lock virt_spin_lock
static inline bool virt_spin_lock(struct qspinlock *lock) static inline bool virt_spin_lock(struct qspinlock *lock)
{ {
if (!static_cpu_has(X86_FEATURE_HYPERVISOR)) if (!static_branch_likely(&virt_spin_lock_key))
return false; return false;
/* /*
@ -66,6 +71,10 @@ static inline bool virt_spin_lock(struct qspinlock *lock)
return true; return true;
} }
#else
static inline void native_pv_lock_init(void)
{
}
#endif /* CONFIG_PARAVIRT */ #endif /* CONFIG_PARAVIRT */
#include <asm-generic/qspinlock.h> #include <asm-generic/qspinlock.h>
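
virt_spin_lock() now tests a static branch rather than re-reading X86_FEATURE_HYPERVISOR, so the paravirt setup code can turn the behaviour off once, after static keys have been initialized. A hedged sketch of that static-key pattern; the key name and helpers below are illustrative, not the kernel's symbols.

#include <linux/jump_label.h>
#include <linux/types.h>

DEFINE_STATIC_KEY_TRUE(example_virt_lock_key);

static void example_lock_init(bool on_hypervisor)
{
        /* Flipped once during boot; when disabled, the branch below is
         * patched to fall straight through to the native path. */
        if (!on_hypervisor)
                static_branch_disable(&example_virt_lock_key);
}

static bool example_virt_spin_lock(void)
{
        if (!static_branch_likely(&example_virt_lock_key))
                return false;   /* use the native qspinlock */

        /* A hypervisor-friendly test-and-set loop would go here. */
        return true;
}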


@ -15,7 +15,7 @@
* back to the regular execution flow in .text. * back to the regular execution flow in .text.
*/ */
#define _REFCOUNT_EXCEPTION \ #define _REFCOUNT_EXCEPTION \
".pushsection .text.unlikely\n" \ ".pushsection .text..refcount\n" \
"111:\tlea %[counter], %%" _ASM_CX "\n" \ "111:\tlea %[counter], %%" _ASM_CX "\n" \
"112:\t" ASM_UD0 "\n" \ "112:\t" ASM_UD0 "\n" \
ASM_UNREACHABLE \ ASM_UNREACHABLE \


@ -61,18 +61,33 @@
/* /*
* lock for reading * lock for reading
*/ */
#define ____down_read(sem, slow_path) \
({ \
struct rw_semaphore* ret; \
asm volatile("# beginning down_read\n\t" \
LOCK_PREFIX _ASM_INC "(%[sem])\n\t" \
/* adds 0x00000001 */ \
" jns 1f\n" \
" call " slow_path "\n" \
"1:\n\t" \
"# ending down_read\n\t" \
: "+m" (sem->count), "=a" (ret), \
ASM_CALL_CONSTRAINT \
: [sem] "a" (sem) \
: "memory", "cc"); \
ret; \
})
static inline void __down_read(struct rw_semaphore *sem) static inline void __down_read(struct rw_semaphore *sem)
{ {
asm volatile("# beginning down_read\n\t" ____down_read(sem, "call_rwsem_down_read_failed");
LOCK_PREFIX _ASM_INC "(%1)\n\t" }
/* adds 0x00000001 */
" jns 1f\n" static inline int __down_read_killable(struct rw_semaphore *sem)
" call call_rwsem_down_read_failed\n" {
"1:\n\t" if (IS_ERR(____down_read(sem, "call_rwsem_down_read_failed_killable")))
"# ending down_read\n\t" return -EINTR;
: "+m" (sem->count) return 0;
: "a" (sem)
: "memory", "cc");
} }
/* /*
@ -82,17 +97,18 @@ static inline bool __down_read_trylock(struct rw_semaphore *sem)
{ {
long result, tmp; long result, tmp;
asm volatile("# beginning __down_read_trylock\n\t" asm volatile("# beginning __down_read_trylock\n\t"
" mov %0,%1\n\t" " mov %[count],%[result]\n\t"
"1:\n\t" "1:\n\t"
" mov %1,%2\n\t" " mov %[result],%[tmp]\n\t"
" add %3,%2\n\t" " add %[inc],%[tmp]\n\t"
" jle 2f\n\t" " jle 2f\n\t"
LOCK_PREFIX " cmpxchg %2,%0\n\t" LOCK_PREFIX " cmpxchg %[tmp],%[count]\n\t"
" jnz 1b\n\t" " jnz 1b\n\t"
"2:\n\t" "2:\n\t"
"# ending __down_read_trylock\n\t" "# ending __down_read_trylock\n\t"
: "+m" (sem->count), "=&a" (result), "=&r" (tmp) : [count] "+m" (sem->count), [result] "=&a" (result),
: "i" (RWSEM_ACTIVE_READ_BIAS) [tmp] "=&r" (tmp)
: [inc] "i" (RWSEM_ACTIVE_READ_BIAS)
: "memory", "cc"); : "memory", "cc");
return result >= 0; return result >= 0;
} }
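
Most of the rwsem.h churn converts positional asm operands ("%1", "%4") to named ones, which stays readable as operands are added for the killable variant. A minimal, illustrative example of the named-operand syntax (not kernel code):

static inline int example_xadd_return(int i, int *v)
{
        int ret = i;

        /* %[name] refers to the operand declared as [name] below. */
        asm volatile("lock; xaddl %[ret],%[counter]"
                     : [ret] "+r" (ret), [counter] "+m" (*v)
                     : : "memory", "cc");
        return ret + i;         /* xadd left the old value in 'ret' */
}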
@ -106,7 +122,7 @@ static inline bool __down_read_trylock(struct rw_semaphore *sem)
struct rw_semaphore* ret; \ struct rw_semaphore* ret; \
\ \
asm volatile("# beginning down_write\n\t" \ asm volatile("# beginning down_write\n\t" \
LOCK_PREFIX " xadd %1,(%4)\n\t" \ LOCK_PREFIX " xadd %[tmp],(%[sem])\n\t" \
/* adds 0xffff0001, returns the old value */ \ /* adds 0xffff0001, returns the old value */ \
" test " __ASM_SEL(%w1,%k1) "," __ASM_SEL(%w1,%k1) "\n\t" \ " test " __ASM_SEL(%w1,%k1) "," __ASM_SEL(%w1,%k1) "\n\t" \
/* was the active mask 0 before? */\ /* was the active mask 0 before? */\
@ -114,9 +130,9 @@ static inline bool __down_read_trylock(struct rw_semaphore *sem)
" call " slow_path "\n" \ " call " slow_path "\n" \
"1:\n" \ "1:\n" \
"# ending down_write" \ "# ending down_write" \
: "+m" (sem->count), "=d" (tmp), \ : "+m" (sem->count), [tmp] "=d" (tmp), \
"=a" (ret), ASM_CALL_CONSTRAINT \ "=a" (ret), ASM_CALL_CONSTRAINT \
: "a" (sem), "1" (RWSEM_ACTIVE_WRITE_BIAS) \ : [sem] "a" (sem), "[tmp]" (RWSEM_ACTIVE_WRITE_BIAS) \
: "memory", "cc"); \ : "memory", "cc"); \
ret; \ ret; \
}) })
@ -142,21 +158,21 @@ static inline bool __down_write_trylock(struct rw_semaphore *sem)
bool result; bool result;
long tmp0, tmp1; long tmp0, tmp1;
asm volatile("# beginning __down_write_trylock\n\t" asm volatile("# beginning __down_write_trylock\n\t"
" mov %0,%1\n\t" " mov %[count],%[tmp0]\n\t"
"1:\n\t" "1:\n\t"
" test " __ASM_SEL(%w1,%k1) "," __ASM_SEL(%w1,%k1) "\n\t" " test " __ASM_SEL(%w1,%k1) "," __ASM_SEL(%w1,%k1) "\n\t"
/* was the active mask 0 before? */ /* was the active mask 0 before? */
" jnz 2f\n\t" " jnz 2f\n\t"
" mov %1,%2\n\t" " mov %[tmp0],%[tmp1]\n\t"
" add %4,%2\n\t" " add %[inc],%[tmp1]\n\t"
LOCK_PREFIX " cmpxchg %2,%0\n\t" LOCK_PREFIX " cmpxchg %[tmp1],%[count]\n\t"
" jnz 1b\n\t" " jnz 1b\n\t"
"2:\n\t" "2:\n\t"
CC_SET(e) CC_SET(e)
"# ending __down_write_trylock\n\t" "# ending __down_write_trylock\n\t"
: "+m" (sem->count), "=&a" (tmp0), "=&r" (tmp1), : [count] "+m" (sem->count), [tmp0] "=&a" (tmp0),
CC_OUT(e) (result) [tmp1] "=&r" (tmp1), CC_OUT(e) (result)
: "er" (RWSEM_ACTIVE_WRITE_BIAS) : [inc] "er" (RWSEM_ACTIVE_WRITE_BIAS)
: "memory"); : "memory");
return result; return result;
} }
@ -168,14 +184,14 @@ static inline void __up_read(struct rw_semaphore *sem)
{ {
long tmp; long tmp;
asm volatile("# beginning __up_read\n\t" asm volatile("# beginning __up_read\n\t"
LOCK_PREFIX " xadd %1,(%2)\n\t" LOCK_PREFIX " xadd %[tmp],(%[sem])\n\t"
/* subtracts 1, returns the old value */ /* subtracts 1, returns the old value */
" jns 1f\n\t" " jns 1f\n\t"
" call call_rwsem_wake\n" /* expects old value in %edx */ " call call_rwsem_wake\n" /* expects old value in %edx */
"1:\n" "1:\n"
"# ending __up_read\n" "# ending __up_read\n"
: "+m" (sem->count), "=d" (tmp) : "+m" (sem->count), [tmp] "=d" (tmp)
: "a" (sem), "1" (-RWSEM_ACTIVE_READ_BIAS) : [sem] "a" (sem), "[tmp]" (-RWSEM_ACTIVE_READ_BIAS)
: "memory", "cc"); : "memory", "cc");
} }
@ -186,14 +202,14 @@ static inline void __up_write(struct rw_semaphore *sem)
{ {
long tmp; long tmp;
asm volatile("# beginning __up_write\n\t" asm volatile("# beginning __up_write\n\t"
LOCK_PREFIX " xadd %1,(%2)\n\t" LOCK_PREFIX " xadd %[tmp],(%[sem])\n\t"
/* subtracts 0xffff0001, returns the old value */ /* subtracts 0xffff0001, returns the old value */
" jns 1f\n\t" " jns 1f\n\t"
" call call_rwsem_wake\n" /* expects old value in %edx */ " call call_rwsem_wake\n" /* expects old value in %edx */
"1:\n\t" "1:\n\t"
"# ending __up_write\n" "# ending __up_write\n"
: "+m" (sem->count), "=d" (tmp) : "+m" (sem->count), [tmp] "=d" (tmp)
: "a" (sem), "1" (-RWSEM_ACTIVE_WRITE_BIAS) : [sem] "a" (sem), "[tmp]" (-RWSEM_ACTIVE_WRITE_BIAS)
: "memory", "cc"); : "memory", "cc");
} }
@ -203,7 +219,7 @@ static inline void __up_write(struct rw_semaphore *sem)
static inline void __downgrade_write(struct rw_semaphore *sem) static inline void __downgrade_write(struct rw_semaphore *sem)
{ {
asm volatile("# beginning __downgrade_write\n\t" asm volatile("# beginning __downgrade_write\n\t"
LOCK_PREFIX _ASM_ADD "%2,(%1)\n\t" LOCK_PREFIX _ASM_ADD "%[inc],(%[sem])\n\t"
/* /*
* transitions 0xZZZZ0001 -> 0xYYYY0001 (i386) * transitions 0xZZZZ0001 -> 0xYYYY0001 (i386)
* 0xZZZZZZZZ00000001 -> 0xYYYYYYYY00000001 (x86_64) * 0xZZZZZZZZ00000001 -> 0xYYYYYYYY00000001 (x86_64)
@ -213,7 +229,7 @@ static inline void __downgrade_write(struct rw_semaphore *sem)
"1:\n\t" "1:\n\t"
"# ending __downgrade_write\n" "# ending __downgrade_write\n"
: "+m" (sem->count) : "+m" (sem->count)
: "a" (sem), "er" (-RWSEM_WAITING_BIAS) : [sem] "a" (sem), [inc] "er" (-RWSEM_WAITING_BIAS)
: "memory", "cc"); : "memory", "cc");
} }
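
The new ____down_read()/__down_read_killable() fast path backs the generic down_read_killable() API introduced in this series. A hedged usage sketch; the rwsem and data below are hypothetical.

#include <linux/rwsem.h>
#include <linux/errno.h>

static DECLARE_RWSEM(example_rwsem);
static int example_data;

static int example_read_data(int *out)
{
        int ret;

        /* Unlike down_read(), this sleep can be cut short by a fatal
         * signal, in which case -EINTR is returned and the lock is
         * not held. */
        ret = down_read_killable(&example_rwsem);
        if (ret)
                return ret;

        *out = example_data;
        up_read(&example_rwsem);
        return 0;
}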


@ -42,11 +42,4 @@
#include <asm/qrwlock.h> #include <asm/qrwlock.h>
#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
#define arch_spin_relax(lock) cpu_relax()
#define arch_read_relax(lock) cpu_relax()
#define arch_write_relax(lock) cpu_relax()
#endif /* _ASM_X86_SPINLOCK_H */ #endif /* _ASM_X86_SPINLOCK_H */


@ -49,7 +49,7 @@ static inline unsigned gtod_read_begin(const struct vsyscall_gtod_data *s)
unsigned ret; unsigned ret;
repeat: repeat:
ret = ACCESS_ONCE(s->seq); ret = READ_ONCE(s->seq);
if (unlikely(ret & 1)) { if (unlikely(ret & 1)) {
cpu_relax(); cpu_relax();
goto repeat; goto repeat;


@ -155,14 +155,14 @@ void init_espfix_ap(int cpu)
page = cpu/ESPFIX_STACKS_PER_PAGE; page = cpu/ESPFIX_STACKS_PER_PAGE;
/* Did another CPU already set this up? */ /* Did another CPU already set this up? */
stack_page = ACCESS_ONCE(espfix_pages[page]); stack_page = READ_ONCE(espfix_pages[page]);
if (likely(stack_page)) if (likely(stack_page))
goto done; goto done;
mutex_lock(&espfix_init_mutex); mutex_lock(&espfix_init_mutex);
/* Did we race on the lock? */ /* Did we race on the lock? */
stack_page = ACCESS_ONCE(espfix_pages[page]); stack_page = READ_ONCE(espfix_pages[page]);
if (stack_page) if (stack_page)
goto unlock_done; goto unlock_done;
@ -200,7 +200,7 @@ void init_espfix_ap(int cpu)
set_pte(&pte_p[n*PTE_STRIDE], pte); set_pte(&pte_p[n*PTE_STRIDE], pte);
/* Job is done for this CPU and any CPU which shares this page */ /* Job is done for this CPU and any CPU which shares this page */
ACCESS_ONCE(espfix_pages[page]) = stack_page; WRITE_ONCE(espfix_pages[page], stack_page);
unlock_done: unlock_done:
mutex_unlock(&espfix_init_mutex); mutex_unlock(&espfix_init_mutex);


@ -102,7 +102,7 @@ static void finalize_ldt_struct(struct ldt_struct *ldt)
static void install_ldt(struct mm_struct *current_mm, static void install_ldt(struct mm_struct *current_mm,
struct ldt_struct *ldt) struct ldt_struct *ldt)
{ {
/* Synchronizes with lockless_dereference in load_mm_ldt. */ /* Synchronizes with READ_ONCE in load_mm_ldt. */
smp_store_release(&current_mm->context.ldt, ldt); smp_store_release(&current_mm->context.ldt, ldt);
/* Activate the LDT for all CPUs using current_mm. */ /* Activate the LDT for all CPUs using current_mm. */


@ -105,7 +105,7 @@ static void nmi_max_handler(struct irq_work *w)
{ {
struct nmiaction *a = container_of(w, struct nmiaction, irq_work); struct nmiaction *a = container_of(w, struct nmiaction, irq_work);
int remainder_ns, decimal_msecs; int remainder_ns, decimal_msecs;
u64 whole_msecs = ACCESS_ONCE(a->max_duration); u64 whole_msecs = READ_ONCE(a->max_duration);
remainder_ns = do_div(whole_msecs, (1000 * 1000)); remainder_ns = do_div(whole_msecs, (1000 * 1000));
decimal_msecs = remainder_ns / 1000; decimal_msecs = remainder_ns / 1000;


@ -115,8 +115,18 @@ unsigned paravirt_patch_jmp(void *insnbuf, const void *target,
return 5; return 5;
} }
/* Neat trick to map patch type back to the call within the DEFINE_STATIC_KEY_TRUE(virt_spin_lock_key);
* corresponding structure. */
void __init native_pv_lock_init(void)
{
if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
static_branch_disable(&virt_spin_lock_key);
}
/*
* Neat trick to map patch type back to the call within the
* corresponding structure.
*/
static void *get_call_destination(u8 type) static void *get_call_destination(u8 type)
{ {
struct paravirt_patch_template tmpl = { struct paravirt_patch_template tmpl = {


@ -77,6 +77,7 @@
#include <asm/i8259.h> #include <asm/i8259.h>
#include <asm/realmode.h> #include <asm/realmode.h>
#include <asm/misc.h> #include <asm/misc.h>
#include <asm/qspinlock.h>
/* Number of siblings per CPU package */ /* Number of siblings per CPU package */
int smp_num_siblings = 1; int smp_num_siblings = 1;
@ -1095,7 +1096,7 @@ int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
unsigned long flags; unsigned long flags;
int err, ret = 0; int err, ret = 0;
WARN_ON(irqs_disabled()); lockdep_assert_irqs_enabled();
pr_debug("++++++++++++++++++++=_---CPU UP %u\n", cpu); pr_debug("++++++++++++++++++++=_---CPU UP %u\n", cpu);
@ -1358,6 +1359,8 @@ void __init native_smp_prepare_cpus(unsigned int max_cpus)
pr_info("CPU0: "); pr_info("CPU0: ");
print_cpu_info(&cpu_data(0)); print_cpu_info(&cpu_data(0));
native_pv_lock_init();
uv_system_init(); uv_system_init();
set_mtrr_aps_delayed_init(); set_mtrr_aps_delayed_init();
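
Open-coded WARN_ON(irqs_disabled()) style checks across the tree become lockdep_assert_irqs_enabled()/disabled(), which reuse lockdep's IRQ-state tracking and compile away when it is not built in. A hedged sketch; both functions are hypothetical.

#include <linux/lockdep.h>

static void example_requires_irqs_off(void)
{
        /* Compiles to nothing without lockdep's IRQ tracking; otherwise
         * warns if interrupts are unexpectedly enabled here. */
        lockdep_assert_irqs_disabled();

        /* ... work that depends on IRQs being masked ... */
}

static void example_may_sleep(void)
{
        lockdep_assert_irqs_enabled();

        /* ... work that may block ... */
}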


@ -443,7 +443,7 @@ static u64 __update_clear_spte_slow(u64 *sptep, u64 spte)
static u64 __get_spte_lockless(u64 *sptep) static u64 __get_spte_lockless(u64 *sptep)
{ {
return ACCESS_ONCE(*sptep); return READ_ONCE(*sptep);
} }
#else #else
union split_spte { union split_spte {
@ -4819,7 +4819,7 @@ static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
* If we don't have indirect shadow pages, it means no page is * If we don't have indirect shadow pages, it means no page is
* write-protected, so we can exit simply. * write-protected, so we can exit simply.
*/ */
if (!ACCESS_ONCE(vcpu->kvm->arch.indirect_shadow_pages)) if (!READ_ONCE(vcpu->kvm->arch.indirect_shadow_pages))
return; return;
remote_flush = local_flush = false; remote_flush = local_flush = false;


@ -157,7 +157,7 @@ bool kvm_page_track_is_active(struct kvm_vcpu *vcpu, gfn_t gfn,
return false; return false;
index = gfn_to_index(gfn, slot->base_gfn, PT_PAGE_TABLE_LEVEL); index = gfn_to_index(gfn, slot->base_gfn, PT_PAGE_TABLE_LEVEL);
return !!ACCESS_ONCE(slot->arch.gfn_track[mode][index]); return !!READ_ONCE(slot->arch.gfn_track[mode][index]);
} }
void kvm_page_track_cleanup(struct kvm *kvm) void kvm_page_track_cleanup(struct kvm *kvm)


@ -98,6 +98,18 @@ ENTRY(call_rwsem_down_read_failed)
ret ret
ENDPROC(call_rwsem_down_read_failed) ENDPROC(call_rwsem_down_read_failed)
ENTRY(call_rwsem_down_read_failed_killable)
FRAME_BEGIN
save_common_regs
__ASM_SIZE(push,) %__ASM_REG(dx)
movq %rax,%rdi
call rwsem_down_read_failed_killable
__ASM_SIZE(pop,) %__ASM_REG(dx)
restore_common_regs
FRAME_END
ret
ENDPROC(call_rwsem_down_read_failed_killable)
ENTRY(call_rwsem_down_write_failed) ENTRY(call_rwsem_down_write_failed)
FRAME_BEGIN FRAME_BEGIN
save_common_regs save_common_regs


@ -67,12 +67,17 @@ bool ex_handler_refcount(const struct exception_table_entry *fixup,
* wrapped around) will be set. Additionally, seeing the refcount * wrapped around) will be set. Additionally, seeing the refcount
* reach 0 will set ZF (Zero Flag: result was zero). In each of * reach 0 will set ZF (Zero Flag: result was zero). In each of
* these cases we want a report, since it's a boundary condition. * these cases we want a report, since it's a boundary condition.
* * The SF case is not reported since it indicates post-boundary
* manipulations below zero or above INT_MAX. And if none of the
* flags are set, something has gone very wrong, so report it.
*/ */
if (regs->flags & (X86_EFLAGS_OF | X86_EFLAGS_ZF)) { if (regs->flags & (X86_EFLAGS_OF | X86_EFLAGS_ZF)) {
bool zero = regs->flags & X86_EFLAGS_ZF; bool zero = regs->flags & X86_EFLAGS_ZF;
refcount_error_report(regs, zero ? "hit zero" : "overflow"); refcount_error_report(regs, zero ? "hit zero" : "overflow");
} else if ((regs->flags & X86_EFLAGS_SF) == 0) {
/* Report if none of OF, ZF, nor SF are set. */
refcount_error_report(regs, "unexpected saturation");
} }
return true; return true;
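
This fixup handler backs the inline x86 refcount_t checks (note the dedicated .text..refcount section earlier in the diff); it now also reports saturation when none of OF, ZF or SF is set. A minimal refcount_t usage sketch; struct example_obj and its helpers are illustrative only.

#include <linux/refcount.h>
#include <linux/slab.h>

struct example_obj {
        refcount_t ref;
};

static struct example_obj *example_alloc(void)
{
        struct example_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

        if (obj)
                refcount_set(&obj->ref, 1);
        return obj;
}

static struct example_obj *example_get(struct example_obj *obj)
{
        /* With the x86 inline checks, an overflow here traps into the
         * .text..refcount fixup instead of silently wrapping. */
        refcount_inc(&obj->ref);
        return obj;
}

static void example_put(struct example_obj *obj)
{
        if (refcount_dec_and_test(&obj->ref))
                kfree(obj);
}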


@ -547,7 +547,7 @@ int xen_alloc_p2m_entry(unsigned long pfn)
if (p2m_top_mfn && pfn < MAX_P2M_PFN) { if (p2m_top_mfn && pfn < MAX_P2M_PFN) {
topidx = p2m_top_index(pfn); topidx = p2m_top_index(pfn);
top_mfn_p = &p2m_top_mfn[topidx]; top_mfn_p = &p2m_top_mfn[topidx];
mid_mfn = ACCESS_ONCE(p2m_top_mfn_p[topidx]); mid_mfn = READ_ONCE(p2m_top_mfn_p[topidx]);
BUG_ON(virt_to_mfn(mid_mfn) != *top_mfn_p); BUG_ON(virt_to_mfn(mid_mfn) != *top_mfn_p);


@ -11,6 +11,7 @@
#include <linux/slab.h> #include <linux/slab.h>
#include <asm/paravirt.h> #include <asm/paravirt.h>
#include <asm/qspinlock.h>
#include <xen/interface/xen.h> #include <xen/interface/xen.h>
#include <xen/events.h> #include <xen/events.h>
@ -81,8 +82,11 @@ void xen_init_lock_cpu(int cpu)
int irq; int irq;
char *name; char *name;
if (!xen_pvspin) if (!xen_pvspin) {
if (cpu == 0)
static_branch_disable(&virt_spin_lock_key);
return; return;
}
WARN(per_cpu(lock_kicker_irq, cpu) >= 0, "spinlock on CPU%d exists on IRQ%d!\n", WARN(per_cpu(lock_kicker_irq, cpu) >= 0, "spinlock on CPU%d exists on IRQ%d!\n",
cpu, per_cpu(lock_kicker_irq, cpu)); cpu, per_cpu(lock_kicker_irq, cpu));


@ -33,8 +33,6 @@
#define arch_spin_is_locked(x) ((x)->slock != 0) #define arch_spin_is_locked(x) ((x)->slock != 0)
#define arch_spin_lock_flags(lock, flags) arch_spin_lock(lock)
static inline void arch_spin_lock(arch_spinlock_t *lock) static inline void arch_spin_lock(arch_spinlock_t *lock)
{ {
unsigned long tmp; unsigned long tmp;
@ -97,8 +95,6 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
* 0x80000000 one writer owns the rwlock, no other writers, no readers * 0x80000000 one writer owns the rwlock, no other writers, no readers
*/ */
#define arch_write_can_lock(x) ((x)->lock == 0)
static inline void arch_write_lock(arch_rwlock_t *rw) static inline void arch_write_lock(arch_rwlock_t *rw)
{ {
unsigned long tmp; unsigned long tmp;
@ -200,7 +196,4 @@ static inline void arch_read_unlock(arch_rwlock_t *rw)
: "memory"); : "memory");
} }
#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
#endif /* _XTENSA_SPINLOCK_H */ #endif /* _XTENSA_SPINLOCK_H */


@ -34,23 +34,23 @@
static void lcd_put_byte(u8 *addr, u8 data) static void lcd_put_byte(u8 *addr, u8 data)
{ {
#ifdef CONFIG_XTFPGA_LCD_8BIT_ACCESS #ifdef CONFIG_XTFPGA_LCD_8BIT_ACCESS
ACCESS_ONCE(*addr) = data; WRITE_ONCE(*addr, data);
#else #else
ACCESS_ONCE(*addr) = data & 0xf0; WRITE_ONCE(*addr, data & 0xf0);
ACCESS_ONCE(*addr) = (data << 4) & 0xf0; WRITE_ONCE(*addr, (data << 4) & 0xf0);
#endif #endif
} }
static int __init lcd_init(void) static int __init lcd_init(void)
{ {
ACCESS_ONCE(*LCD_INSTR_ADDR) = LCD_DISPLAY_MODE8BIT; WRITE_ONCE(*LCD_INSTR_ADDR, LCD_DISPLAY_MODE8BIT);
mdelay(5); mdelay(5);
ACCESS_ONCE(*LCD_INSTR_ADDR) = LCD_DISPLAY_MODE8BIT; WRITE_ONCE(*LCD_INSTR_ADDR, LCD_DISPLAY_MODE8BIT);
udelay(200); udelay(200);
ACCESS_ONCE(*LCD_INSTR_ADDR) = LCD_DISPLAY_MODE8BIT; WRITE_ONCE(*LCD_INSTR_ADDR, LCD_DISPLAY_MODE8BIT);
udelay(50); udelay(50);
#ifndef CONFIG_XTFPGA_LCD_8BIT_ACCESS #ifndef CONFIG_XTFPGA_LCD_8BIT_ACCESS
ACCESS_ONCE(*LCD_INSTR_ADDR) = LCD_DISPLAY_MODE4BIT; WRITE_ONCE(*LCD_INSTR_ADDR, LCD_DISPLAY_MODE4BIT);
udelay(50); udelay(50);
lcd_put_byte(LCD_INSTR_ADDR, LCD_DISPLAY_MODE4BIT); lcd_put_byte(LCD_INSTR_ADDR, LCD_DISPLAY_MODE4BIT);
udelay(50); udelay(50);


@ -917,17 +917,9 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
} }
EXPORT_SYMBOL_GPL(bio_iov_iter_get_pages); EXPORT_SYMBOL_GPL(bio_iov_iter_get_pages);
struct submit_bio_ret {
struct completion event;
int error;
};
static void submit_bio_wait_endio(struct bio *bio) static void submit_bio_wait_endio(struct bio *bio)
{ {
struct submit_bio_ret *ret = bio->bi_private; complete(bio->bi_private);
ret->error = blk_status_to_errno(bio->bi_status);
complete(&ret->event);
} }
/** /**
@ -943,16 +935,15 @@ static void submit_bio_wait_endio(struct bio *bio)
*/ */
int submit_bio_wait(struct bio *bio) int submit_bio_wait(struct bio *bio)
{ {
struct submit_bio_ret ret; DECLARE_COMPLETION_ONSTACK_MAP(done, bio->bi_disk->lockdep_map);
init_completion(&ret.event); bio->bi_private = &done;
bio->bi_private = &ret;
bio->bi_end_io = submit_bio_wait_endio; bio->bi_end_io = submit_bio_wait_endio;
bio->bi_opf |= REQ_SYNC; bio->bi_opf |= REQ_SYNC;
submit_bio(bio); submit_bio(bio);
wait_for_completion_io(&ret.event); wait_for_completion_io(&done);
return ret.error; return blk_status_to_errno(bio->bi_status);
} }
EXPORT_SYMBOL(submit_bio_wait); EXPORT_SYMBOL(submit_bio_wait);
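
submit_bio_wait() now sleeps on an on-stack completion whose lockdep_map is taken from the bio's gendisk, letting cross-release lockdep connect the waiter with the I/O completion path, and the return value is derived directly from bio->bi_status. A hedged caller-side sketch; the device, sector and page handling are schematic assumptions.

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/errno.h>

static int example_read_first_page(struct block_device *bdev,
                                   struct page *page)
{
        struct bio *bio;
        int ret;

        bio = bio_alloc(GFP_KERNEL, 1);
        if (!bio)
                return -ENOMEM;

        bio_set_dev(bio, bdev);                 /* sets bio->bi_disk too */
        bio->bi_iter.bi_sector = 0;
        bio->bi_opf = REQ_OP_READ;
        bio_add_page(bio, page, PAGE_SIZE, 0);

        /* Sleeps on a DECLARE_COMPLETION_ONSTACK_MAP() completion keyed
         * by the disk's lockdep_map, then converts bi_status to -Exx. */
        ret = submit_bio_wait(bio);

        bio_put(bio);
        return ret;
}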


@ -261,7 +261,7 @@ static inline bool stat_sample_valid(struct blk_rq_stat *stat)
static u64 rwb_sync_issue_lat(struct rq_wb *rwb) static u64 rwb_sync_issue_lat(struct rq_wb *rwb)
{ {
u64 now, issue = ACCESS_ONCE(rwb->sync_issue); u64 now, issue = READ_ONCE(rwb->sync_issue);
if (!issue || !rwb->sync_cookie) if (!issue || !rwb->sync_cookie)
return 0; return 0;


@ -1354,13 +1354,7 @@ dev_t blk_lookup_devt(const char *name, int partno)
} }
EXPORT_SYMBOL(blk_lookup_devt); EXPORT_SYMBOL(blk_lookup_devt);
struct gendisk *alloc_disk(int minors) struct gendisk *__alloc_disk_node(int minors, int node_id)
{
return alloc_disk_node(minors, NUMA_NO_NODE);
}
EXPORT_SYMBOL(alloc_disk);
struct gendisk *alloc_disk_node(int minors, int node_id)
{ {
struct gendisk *disk; struct gendisk *disk;
struct disk_part_tbl *ptbl; struct disk_part_tbl *ptbl;
@ -1411,7 +1405,7 @@ struct gendisk *alloc_disk_node(int minors, int node_id)
} }
return disk; return disk;
} }
EXPORT_SYMBOL(alloc_disk_node); EXPORT_SYMBOL(__alloc_disk_node);
struct kobject *get_disk(struct gendisk *disk) struct kobject *get_disk(struct gendisk *disk)
{ {
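
alloc_disk() and alloc_disk_node() are no longer exported as functions; callers now reach __alloc_disk_node(), presumably through header wrappers that give every call site its own lock class for the gendisk's completion lockdep_map used by submit_bio_wait() above. The macro below is a guess at that shape, not code copied from the patch.

#include <linux/genhd.h>        /* __alloc_disk_node(), struct gendisk */
#include <linux/lockdep.h>

#define example_alloc_disk_node(minors, node_id)                        \
({                                                                      \
        static struct lock_class_key __key;    /* one per call site */  \
        struct gendisk *__disk = __alloc_disk_node(minors, node_id);    \
                                                                        \
        if (__disk)                                                     \
                lockdep_init_map(&__disk->lockdep_map,                  \
                                 "(completion)" #minors, &__key, 0);    \
        __disk;                                                         \
})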


@ -668,7 +668,7 @@ const char *dev_driver_string(const struct device *dev)
* so be careful about accessing it. dev->bus and dev->class should * so be careful about accessing it. dev->bus and dev->class should
* never change once they are set, so they don't need special care. * never change once they are set, so they don't need special care.
*/ */
drv = ACCESS_ONCE(dev->driver); drv = READ_ONCE(dev->driver);
return drv ? drv->name : return drv ? drv->name :
(dev->bus ? dev->bus->name : (dev->bus ? dev->bus->name :
(dev->class ? dev->class->name : "")); (dev->class ? dev->class->name : ""));


@ -134,11 +134,11 @@ unsigned long pm_runtime_autosuspend_expiration(struct device *dev)
if (!dev->power.use_autosuspend) if (!dev->power.use_autosuspend)
goto out; goto out;
autosuspend_delay = ACCESS_ONCE(dev->power.autosuspend_delay); autosuspend_delay = READ_ONCE(dev->power.autosuspend_delay);
if (autosuspend_delay < 0) if (autosuspend_delay < 0)
goto out; goto out;
last_busy = ACCESS_ONCE(dev->power.last_busy); last_busy = READ_ONCE(dev->power.last_busy);
elapsed = jiffies - last_busy; elapsed = jiffies - last_busy;
if (elapsed < 0) if (elapsed < 0)
goto out; /* jiffies has wrapped around. */ goto out; /* jiffies has wrapped around. */


@ -641,7 +641,7 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
return; return;
retry: retry:
entropy_count = orig = ACCESS_ONCE(r->entropy_count); entropy_count = orig = READ_ONCE(r->entropy_count);
if (nfrac < 0) { if (nfrac < 0) {
/* Debit */ /* Debit */
entropy_count += nfrac; entropy_count += nfrac;
@ -1265,7 +1265,7 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min,
/* Can we pull enough? */ /* Can we pull enough? */
retry: retry:
entropy_count = orig = ACCESS_ONCE(r->entropy_count); entropy_count = orig = READ_ONCE(r->entropy_count);
ibytes = nbytes; ibytes = nbytes;
/* never pull more than available */ /* never pull more than available */
have_bytes = entropy_count >> (ENTROPY_SHIFT + 3); have_bytes = entropy_count >> (ENTROPY_SHIFT + 3);


@ -71,7 +71,7 @@ static irqreturn_t bcm2835_time_interrupt(int irq, void *dev_id)
if (readl_relaxed(timer->control) & timer->match_mask) { if (readl_relaxed(timer->control) & timer->match_mask) {
writel_relaxed(timer->match_mask, timer->control); writel_relaxed(timer->match_mask, timer->control);
event_handler = ACCESS_ONCE(timer->evt.event_handler); event_handler = READ_ONCE(timer->evt.event_handler);
if (event_handler) if (event_handler)
event_handler(&timer->evt); event_handler(&timer->evt);
return IRQ_HANDLED; return IRQ_HANDLED;


@ -172,7 +172,7 @@ static void caam_jr_dequeue(unsigned long devarg)
while (rd_reg32(&jrp->rregs->outring_used)) { while (rd_reg32(&jrp->rregs->outring_used)) {
head = ACCESS_ONCE(jrp->head); head = READ_ONCE(jrp->head);
spin_lock(&jrp->outlock); spin_lock(&jrp->outlock);
@ -341,7 +341,7 @@ int caam_jr_enqueue(struct device *dev, u32 *desc,
spin_lock_bh(&jrp->inplock); spin_lock_bh(&jrp->inplock);
head = jrp->head; head = jrp->head;
tail = ACCESS_ONCE(jrp->tail); tail = READ_ONCE(jrp->tail);
if (!rd_reg32(&jrp->rregs->inpring_avail) || if (!rd_reg32(&jrp->rregs->inpring_avail) ||
CIRC_SPACE(head, tail, JOBR_DEPTH) <= 0) { CIRC_SPACE(head, tail, JOBR_DEPTH) <= 0) {


@ -193,7 +193,7 @@ static int wait_for_csb(struct nx842_workmem *wmem,
ktime_t start = wmem->start, now = ktime_get(); ktime_t start = wmem->start, now = ktime_get();
ktime_t timeout = ktime_add_ms(start, CSB_WAIT_MAX); ktime_t timeout = ktime_add_ms(start, CSB_WAIT_MAX);
while (!(ACCESS_ONCE(csb->flags) & CSB_V)) { while (!(READ_ONCE(csb->flags) & CSB_V)) {
cpu_relax(); cpu_relax();
now = ktime_get(); now = ktime_get();
if (ktime_after(now, timeout)) if (ktime_after(now, timeout))


@ -175,11 +175,11 @@ static ssize_t altr_sdr_mc_err_inject_write(struct file *file,
/* /*
* To trigger the error, we need to read the data back * To trigger the error, we need to read the data back
* (the data was written with errors above). * (the data was written with errors above).
* The ACCESS_ONCE macros and printk are used to prevent the * The READ_ONCE macros and printk are used to prevent the
* the compiler optimizing these reads out. * the compiler optimizing these reads out.
*/ */
reg = ACCESS_ONCE(ptemp[0]); reg = READ_ONCE(ptemp[0]);
read_reg = ACCESS_ONCE(ptemp[1]); read_reg = READ_ONCE(ptemp[1]);
/* Force Read */ /* Force Read */
rmb(); rmb();
@ -618,7 +618,7 @@ static ssize_t altr_edac_device_trig(struct file *file,
for (i = 0; i < (priv->trig_alloc_sz / sizeof(*ptemp)); i++) { for (i = 0; i < (priv->trig_alloc_sz / sizeof(*ptemp)); i++) {
/* Read data so we're in the correct state */ /* Read data so we're in the correct state */
rmb(); rmb();
if (ACCESS_ONCE(ptemp[i])) if (READ_ONCE(ptemp[i]))
result = -1; result = -1;
/* Toggle Error bit (it is latched), leave ECC enabled */ /* Toggle Error bit (it is latched), leave ECC enabled */
writel(error_mask, (drvdata->base + priv->set_err_ofst)); writel(error_mask, (drvdata->base + priv->set_err_ofst));
@ -635,7 +635,7 @@ static ssize_t altr_edac_device_trig(struct file *file,
/* Read out written data. ECC error caused here */ /* Read out written data. ECC error caused here */
for (i = 0; i < ALTR_TRIGGER_READ_WRD_CNT; i++) for (i = 0; i < ALTR_TRIGGER_READ_WRD_CNT; i++)
if (ACCESS_ONCE(ptemp[i]) != i) if (READ_ONCE(ptemp[i]) != i)
edac_printk(KERN_ERR, EDAC_DEVICE, edac_printk(KERN_ERR, EDAC_DEVICE,
"Read doesn't match written data\n"); "Read doesn't match written data\n");


@ -734,7 +734,7 @@ static unsigned int ar_search_last_active_buffer(struct ar_context *ctx,
__le16 res_count, next_res_count; __le16 res_count, next_res_count;
i = ar_first_buffer_index(ctx); i = ar_first_buffer_index(ctx);
res_count = ACCESS_ONCE(ctx->descriptors[i].res_count); res_count = READ_ONCE(ctx->descriptors[i].res_count);
/* A buffer that is not yet completely filled must be the last one. */ /* A buffer that is not yet completely filled must be the last one. */
while (i != last && res_count == 0) { while (i != last && res_count == 0) {
@ -742,8 +742,7 @@ static unsigned int ar_search_last_active_buffer(struct ar_context *ctx,
/* Peek at the next descriptor. */ /* Peek at the next descriptor. */
next_i = ar_next_buffer_index(i); next_i = ar_next_buffer_index(i);
rmb(); /* read descriptors in order */ rmb(); /* read descriptors in order */
next_res_count = ACCESS_ONCE( next_res_count = READ_ONCE(ctx->descriptors[next_i].res_count);
ctx->descriptors[next_i].res_count);
/* /*
* If the next descriptor is still empty, we must stop at this * If the next descriptor is still empty, we must stop at this
* descriptor. * descriptor.
@ -759,8 +758,7 @@ static unsigned int ar_search_last_active_buffer(struct ar_context *ctx,
if (MAX_AR_PACKET_SIZE > PAGE_SIZE && i != last) { if (MAX_AR_PACKET_SIZE > PAGE_SIZE && i != last) {
next_i = ar_next_buffer_index(next_i); next_i = ar_next_buffer_index(next_i);
rmb(); rmb();
next_res_count = ACCESS_ONCE( next_res_count = READ_ONCE(ctx->descriptors[next_i].res_count);
ctx->descriptors[next_i].res_count);
if (next_res_count != cpu_to_le16(PAGE_SIZE)) if (next_res_count != cpu_to_le16(PAGE_SIZE))
goto next_buffer_is_active; goto next_buffer_is_active;
} }
@ -2812,7 +2810,7 @@ static int handle_ir_buffer_fill(struct context *context,
u32 buffer_dma; u32 buffer_dma;
req_count = le16_to_cpu(last->req_count); req_count = le16_to_cpu(last->req_count);
res_count = le16_to_cpu(ACCESS_ONCE(last->res_count)); res_count = le16_to_cpu(READ_ONCE(last->res_count));
completed = req_count - res_count; completed = req_count - res_count;
buffer_dma = le32_to_cpu(last->data_address); buffer_dma = le32_to_cpu(last->data_address);


@ -99,11 +99,11 @@ static inline bool tegra_ivc_empty(struct tegra_ivc *ivc,
{ {
/* /*
* This function performs multiple checks on the same values with * This function performs multiple checks on the same values with
* security implications, so create snapshots with ACCESS_ONCE() to * security implications, so create snapshots with READ_ONCE() to
* ensure that these checks use the same values. * ensure that these checks use the same values.
*/ */
u32 tx = ACCESS_ONCE(header->tx.count); u32 tx = READ_ONCE(header->tx.count);
u32 rx = ACCESS_ONCE(header->rx.count); u32 rx = READ_ONCE(header->rx.count);
/* /*
* Perform an over-full check to prevent denial of service attacks * Perform an over-full check to prevent denial of service attacks
@ -124,8 +124,8 @@ static inline bool tegra_ivc_empty(struct tegra_ivc *ivc,
static inline bool tegra_ivc_full(struct tegra_ivc *ivc, static inline bool tegra_ivc_full(struct tegra_ivc *ivc,
struct tegra_ivc_header *header) struct tegra_ivc_header *header)
{ {
u32 tx = ACCESS_ONCE(header->tx.count); u32 tx = READ_ONCE(header->tx.count);
u32 rx = ACCESS_ONCE(header->rx.count); u32 rx = READ_ONCE(header->rx.count);
/* /*
* Invalid cases where the counters indicate that the queue is over * Invalid cases where the counters indicate that the queue is over
@ -137,8 +137,8 @@ static inline bool tegra_ivc_full(struct tegra_ivc *ivc,
static inline u32 tegra_ivc_available(struct tegra_ivc *ivc, static inline u32 tegra_ivc_available(struct tegra_ivc *ivc,
struct tegra_ivc_header *header) struct tegra_ivc_header *header)
{ {
u32 tx = ACCESS_ONCE(header->tx.count); u32 tx = READ_ONCE(header->tx.count);
u32 rx = ACCESS_ONCE(header->rx.count); u32 rx = READ_ONCE(header->rx.count);
/* /*
* This function isn't expected to be used in scenarios where an * This function isn't expected to be used in scenarios where an
@ -151,8 +151,8 @@ static inline u32 tegra_ivc_available(struct tegra_ivc *ivc,
static inline void tegra_ivc_advance_tx(struct tegra_ivc *ivc) static inline void tegra_ivc_advance_tx(struct tegra_ivc *ivc)
{ {
ACCESS_ONCE(ivc->tx.channel->tx.count) = WRITE_ONCE(ivc->tx.channel->tx.count,
ACCESS_ONCE(ivc->tx.channel->tx.count) + 1; READ_ONCE(ivc->tx.channel->tx.count) + 1);
if (ivc->tx.position == ivc->num_frames - 1) if (ivc->tx.position == ivc->num_frames - 1)
ivc->tx.position = 0; ivc->tx.position = 0;
@ -162,8 +162,8 @@ static inline void tegra_ivc_advance_tx(struct tegra_ivc *ivc)
static inline void tegra_ivc_advance_rx(struct tegra_ivc *ivc) static inline void tegra_ivc_advance_rx(struct tegra_ivc *ivc)
{ {
ACCESS_ONCE(ivc->rx.channel->rx.count) = WRITE_ONCE(ivc->rx.channel->rx.count,
ACCESS_ONCE(ivc->rx.channel->rx.count) + 1; READ_ONCE(ivc->rx.channel->rx.count) + 1);
if (ivc->rx.position == ivc->num_frames - 1) if (ivc->rx.position == ivc->num_frames - 1)
ivc->rx.position = 0; ivc->rx.position = 0;
@ -428,7 +428,7 @@ int tegra_ivc_notified(struct tegra_ivc *ivc)
/* Copy the receiver's state out of shared memory. */ /* Copy the receiver's state out of shared memory. */
tegra_ivc_invalidate(ivc, ivc->rx.phys + offset); tegra_ivc_invalidate(ivc, ivc->rx.phys + offset);
state = ACCESS_ONCE(ivc->rx.channel->tx.state); state = READ_ONCE(ivc->rx.channel->tx.state);
if (state == TEGRA_IVC_STATE_SYNC) { if (state == TEGRA_IVC_STATE_SYNC) {
offset = offsetof(struct tegra_ivc_header, tx.count); offset = offsetof(struct tegra_ivc_header, tx.count);


@ -260,7 +260,7 @@ static void amdgpu_fence_fallback(unsigned long arg)
*/ */
int amdgpu_fence_wait_empty(struct amdgpu_ring *ring) int amdgpu_fence_wait_empty(struct amdgpu_ring *ring)
{ {
uint64_t seq = ACCESS_ONCE(ring->fence_drv.sync_seq); uint64_t seq = READ_ONCE(ring->fence_drv.sync_seq);
struct dma_fence *fence, **ptr; struct dma_fence *fence, **ptr;
int r; int r;
@ -300,7 +300,7 @@ unsigned amdgpu_fence_count_emitted(struct amdgpu_ring *ring)
amdgpu_fence_process(ring); amdgpu_fence_process(ring);
emitted = 0x100000000ull; emitted = 0x100000000ull;
emitted -= atomic_read(&ring->fence_drv.last_seq); emitted -= atomic_read(&ring->fence_drv.last_seq);
emitted += ACCESS_ONCE(ring->fence_drv.sync_seq); emitted += READ_ONCE(ring->fence_drv.sync_seq);
return lower_32_bits(emitted); return lower_32_bits(emitted);
} }


@ -788,11 +788,11 @@ static int amdgpu_debugfs_gem_bo_info(int id, void *ptr, void *data)
seq_printf(m, "\t0x%08x: %12ld byte %s", seq_printf(m, "\t0x%08x: %12ld byte %s",
id, amdgpu_bo_size(bo), placement); id, amdgpu_bo_size(bo), placement);
offset = ACCESS_ONCE(bo->tbo.mem.start); offset = READ_ONCE(bo->tbo.mem.start);
if (offset != AMDGPU_BO_INVALID_OFFSET) if (offset != AMDGPU_BO_INVALID_OFFSET)
seq_printf(m, " @ 0x%010Lx", offset); seq_printf(m, " @ 0x%010Lx", offset);
pin_count = ACCESS_ONCE(bo->pin_count); pin_count = READ_ONCE(bo->pin_count);
if (pin_count) if (pin_count)
seq_printf(m, " pin count %d", pin_count); seq_printf(m, " pin count %d", pin_count);
seq_printf(m, "\n"); seq_printf(m, "\n");


@ -187,7 +187,7 @@ static bool amd_sched_entity_is_ready(struct amd_sched_entity *entity)
if (kfifo_is_empty(&entity->job_queue)) if (kfifo_is_empty(&entity->job_queue))
return false; return false;
if (ACCESS_ONCE(entity->dependency)) if (READ_ONCE(entity->dependency))
return false; return false;
return true; return true;


@ -451,7 +451,7 @@ int radeon_gem_busy_ioctl(struct drm_device *dev, void *data,
else else
r = 0; r = 0;
cur_placement = ACCESS_ONCE(robj->tbo.mem.mem_type); cur_placement = READ_ONCE(robj->tbo.mem.mem_type);
args->domain = radeon_mem_type_to_domain(cur_placement); args->domain = radeon_mem_type_to_domain(cur_placement);
drm_gem_object_put_unlocked(gobj); drm_gem_object_put_unlocked(gobj);
return r; return r;
@ -481,7 +481,7 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
r = ret; r = ret;
/* Flush HDP cache via MMIO if necessary */ /* Flush HDP cache via MMIO if necessary */
cur_placement = ACCESS_ONCE(robj->tbo.mem.mem_type); cur_placement = READ_ONCE(robj->tbo.mem.mem_type);
if (rdev->asic->mmio_hdp_flush && if (rdev->asic->mmio_hdp_flush &&
radeon_mem_type_to_domain(cur_placement) == RADEON_GEM_DOMAIN_VRAM) radeon_mem_type_to_domain(cur_placement) == RADEON_GEM_DOMAIN_VRAM)
robj->rdev->asic->mmio_hdp_flush(rdev); robj->rdev->asic->mmio_hdp_flush(rdev);


@ -904,7 +904,7 @@ vmw_surface_handle_reference(struct vmw_private *dev_priv,
if (unlikely(drm_is_render_client(file_priv))) if (unlikely(drm_is_render_client(file_priv)))
require_exist = true; require_exist = true;
if (ACCESS_ONCE(vmw_fpriv(file_priv)->locked_master)) { if (READ_ONCE(vmw_fpriv(file_priv)->locked_master)) {
DRM_ERROR("Locked master refused legacy " DRM_ERROR("Locked master refused legacy "
"surface reference.\n"); "surface reference.\n");
return -EACCES; return -EACCES;


@ -380,7 +380,7 @@ static long hfi1_file_ioctl(struct file *fp, unsigned int cmd,
if (sc->flags & SCF_FROZEN) { if (sc->flags & SCF_FROZEN) {
wait_event_interruptible_timeout( wait_event_interruptible_timeout(
dd->event_queue, dd->event_queue,
!(ACCESS_ONCE(dd->flags) & HFI1_FROZEN), !(READ_ONCE(dd->flags) & HFI1_FROZEN),
msecs_to_jiffies(SEND_CTXT_HALT_TIMEOUT)); msecs_to_jiffies(SEND_CTXT_HALT_TIMEOUT));
if (dd->flags & HFI1_FROZEN) if (dd->flags & HFI1_FROZEN)
return -ENOLCK; return -ENOLCK;


@ -1423,14 +1423,14 @@ retry:
goto done; goto done;
} }
/* copy from receiver cache line and recalculate */ /* copy from receiver cache line and recalculate */
sc->alloc_free = ACCESS_ONCE(sc->free); sc->alloc_free = READ_ONCE(sc->free);
avail = avail =
(unsigned long)sc->credits - (unsigned long)sc->credits -
(sc->fill - sc->alloc_free); (sc->fill - sc->alloc_free);
if (blocks > avail) { if (blocks > avail) {
/* still no room, actively update */ /* still no room, actively update */
sc_release_update(sc); sc_release_update(sc);
sc->alloc_free = ACCESS_ONCE(sc->free); sc->alloc_free = READ_ONCE(sc->free);
trycount++; trycount++;
goto retry; goto retry;
} }
@ -1667,7 +1667,7 @@ void sc_release_update(struct send_context *sc)
/* call sent buffer callbacks */ /* call sent buffer callbacks */
code = -1; /* code not yet set */ code = -1; /* code not yet set */
head = ACCESS_ONCE(sc->sr_head); /* snapshot the head */ head = READ_ONCE(sc->sr_head); /* snapshot the head */
tail = sc->sr_tail; tail = sc->sr_tail;
while (head != tail) { while (head != tail) {
pbuf = &sc->sr[tail].pbuf; pbuf = &sc->sr[tail].pbuf;


@ -363,7 +363,7 @@ static void ruc_loopback(struct rvt_qp *sqp)
again: again:
smp_read_barrier_depends(); /* see post_one_send() */ smp_read_barrier_depends(); /* see post_one_send() */
if (sqp->s_last == ACCESS_ONCE(sqp->s_head)) if (sqp->s_last == READ_ONCE(sqp->s_head))
goto clr_busy; goto clr_busy;
wqe = rvt_get_swqe_ptr(sqp, sqp->s_last); wqe = rvt_get_swqe_ptr(sqp, sqp->s_last);


@ -1725,7 +1725,7 @@ retry:
swhead = sde->descq_head & sde->sdma_mask; swhead = sde->descq_head & sde->sdma_mask;
/* this code is really bad for cache line trading */ /* this code is really bad for cache line trading */
swtail = ACCESS_ONCE(sde->descq_tail) & sde->sdma_mask; swtail = READ_ONCE(sde->descq_tail) & sde->sdma_mask;
cnt = sde->descq_cnt; cnt = sde->descq_cnt;
if (swhead < swtail) if (swhead < swtail)
@ -1872,7 +1872,7 @@ retry:
if ((status & sde->idle_mask) && !idle_check_done) { if ((status & sde->idle_mask) && !idle_check_done) {
u16 swtail; u16 swtail;
swtail = ACCESS_ONCE(sde->descq_tail) & sde->sdma_mask; swtail = READ_ONCE(sde->descq_tail) & sde->sdma_mask;
if (swtail != hwhead) { if (swtail != hwhead) {
hwhead = (u16)read_sde_csr(sde, SD(HEAD)); hwhead = (u16)read_sde_csr(sde, SD(HEAD));
idle_check_done = 1; idle_check_done = 1;
@ -2222,7 +2222,7 @@ void sdma_seqfile_dump_sde(struct seq_file *s, struct sdma_engine *sde)
u16 len; u16 len;
head = sde->descq_head & sde->sdma_mask; head = sde->descq_head & sde->sdma_mask;
tail = ACCESS_ONCE(sde->descq_tail) & sde->sdma_mask; tail = READ_ONCE(sde->descq_tail) & sde->sdma_mask;
seq_printf(s, SDE_FMT, sde->this_idx, seq_printf(s, SDE_FMT, sde->this_idx,
sde->cpu, sde->cpu,
sdma_state_name(sde->state.current_state), sdma_state_name(sde->state.current_state),
@ -3305,7 +3305,7 @@ int sdma_ahg_alloc(struct sdma_engine *sde)
return -EINVAL; return -EINVAL;
} }
while (1) { while (1) {
nr = ffz(ACCESS_ONCE(sde->ahg_bits)); nr = ffz(READ_ONCE(sde->ahg_bits));
if (nr > 31) { if (nr > 31) {
trace_hfi1_ahg_allocate(sde, -ENOSPC); trace_hfi1_ahg_allocate(sde, -ENOSPC);
return -ENOSPC; return -ENOSPC;


@ -445,7 +445,7 @@ static inline u16 sdma_descq_freecnt(struct sdma_engine *sde)
{ {
return sde->descq_cnt - return sde->descq_cnt -
(sde->descq_tail - (sde->descq_tail -
ACCESS_ONCE(sde->descq_head)) - 1; READ_ONCE(sde->descq_head)) - 1;
} }
static inline u16 sdma_descq_inprocess(struct sdma_engine *sde) static inline u16 sdma_descq_inprocess(struct sdma_engine *sde)


@ -80,7 +80,7 @@ int hfi1_make_uc_req(struct rvt_qp *qp, struct hfi1_pkt_state *ps)
goto bail; goto bail;
/* We are in the error state, flush the work request. */ /* We are in the error state, flush the work request. */
smp_read_barrier_depends(); /* see post_one_send() */ smp_read_barrier_depends(); /* see post_one_send() */
if (qp->s_last == ACCESS_ONCE(qp->s_head)) if (qp->s_last == READ_ONCE(qp->s_head))
goto bail; goto bail;
/* If DMAs are in progress, we can't flush immediately. */ /* If DMAs are in progress, we can't flush immediately. */
if (iowait_sdma_pending(&priv->s_iowait)) { if (iowait_sdma_pending(&priv->s_iowait)) {
@ -121,7 +121,7 @@ int hfi1_make_uc_req(struct rvt_qp *qp, struct hfi1_pkt_state *ps)
goto bail; goto bail;
/* Check if send work queue is empty. */ /* Check if send work queue is empty. */
smp_read_barrier_depends(); /* see post_one_send() */ smp_read_barrier_depends(); /* see post_one_send() */
if (qp->s_cur == ACCESS_ONCE(qp->s_head)) { if (qp->s_cur == READ_ONCE(qp->s_head)) {
clear_ahg(qp); clear_ahg(qp);
goto bail; goto bail;
} }


@ -487,7 +487,7 @@ int hfi1_make_ud_req(struct rvt_qp *qp, struct hfi1_pkt_state *ps)
goto bail; goto bail;
/* We are in the error state, flush the work request. */ /* We are in the error state, flush the work request. */
smp_read_barrier_depends(); /* see post_one_send */ smp_read_barrier_depends(); /* see post_one_send */
if (qp->s_last == ACCESS_ONCE(qp->s_head)) if (qp->s_last == READ_ONCE(qp->s_head))
goto bail; goto bail;
/* If DMAs are in progress, we can't flush immediately. */ /* If DMAs are in progress, we can't flush immediately. */
if (iowait_sdma_pending(&priv->s_iowait)) { if (iowait_sdma_pending(&priv->s_iowait)) {
@ -501,7 +501,7 @@ int hfi1_make_ud_req(struct rvt_qp *qp, struct hfi1_pkt_state *ps)
/* see post_one_send() */ /* see post_one_send() */
smp_read_barrier_depends(); smp_read_barrier_depends();
if (qp->s_cur == ACCESS_ONCE(qp->s_head)) if (qp->s_cur == READ_ONCE(qp->s_head))
goto bail; goto bail;
wqe = rvt_get_swqe_ptr(qp, qp->s_cur); wqe = rvt_get_swqe_ptr(qp, qp->s_cur);


@ -276,7 +276,7 @@ int hfi1_user_sdma_free_queues(struct hfi1_filedata *fd,
/* Wait until all requests have been freed. */ /* Wait until all requests have been freed. */
wait_event_interruptible( wait_event_interruptible(
pq->wait, pq->wait,
(ACCESS_ONCE(pq->state) == SDMA_PKT_Q_INACTIVE)); (READ_ONCE(pq->state) == SDMA_PKT_Q_INACTIVE));
kfree(pq->reqs); kfree(pq->reqs);
kfree(pq->req_in_use); kfree(pq->req_in_use);
kmem_cache_destroy(pq->txreq_cache); kmem_cache_destroy(pq->txreq_cache);
@ -591,7 +591,7 @@ int hfi1_user_sdma_process_request(struct hfi1_filedata *fd,
if (ret != -EBUSY) { if (ret != -EBUSY) {
req->status = ret; req->status = ret;
WRITE_ONCE(req->has_error, 1); WRITE_ONCE(req->has_error, 1);
if (ACCESS_ONCE(req->seqcomp) == if (READ_ONCE(req->seqcomp) ==
req->seqsubmitted - 1) req->seqsubmitted - 1)
goto free_req; goto free_req;
return ret; return ret;
@ -825,7 +825,7 @@ static int user_sdma_send_pkts(struct user_sdma_request *req, unsigned maxpkts)
*/ */
if (req->data_len) { if (req->data_len) {
iovec = &req->iovs[req->iov_idx]; iovec = &req->iovs[req->iov_idx];
if (ACCESS_ONCE(iovec->offset) == iovec->iov.iov_len) { if (READ_ONCE(iovec->offset) == iovec->iov.iov_len) {
if (++req->iov_idx == req->data_iovs) { if (++req->iov_idx == req->data_iovs) {
ret = -EFAULT; ret = -EFAULT;
goto free_txreq; goto free_txreq;
@ -1390,7 +1390,7 @@ static void user_sdma_txreq_cb(struct sdma_txreq *txreq, int status)
} else { } else {
if (status != SDMA_TXREQ_S_OK) if (status != SDMA_TXREQ_S_OK)
req->status = status; req->status = status;
if (req->seqcomp == (ACCESS_ONCE(req->seqsubmitted) - 1) && if (req->seqcomp == (READ_ONCE(req->seqsubmitted) - 1) &&
(READ_ONCE(req->done) || (READ_ONCE(req->done) ||
READ_ONCE(req->has_error))) { READ_ONCE(req->has_error))) {
user_sdma_free_request(req, false); user_sdma_free_request(req, false);


@ -368,7 +368,7 @@ static void qib_ruc_loopback(struct rvt_qp *sqp)
again: again:
smp_read_barrier_depends(); /* see post_one_send() */ smp_read_barrier_depends(); /* see post_one_send() */
if (sqp->s_last == ACCESS_ONCE(sqp->s_head)) if (sqp->s_last == READ_ONCE(sqp->s_head))
goto clr_busy; goto clr_busy;
wqe = rvt_get_swqe_ptr(sqp, sqp->s_last); wqe = rvt_get_swqe_ptr(sqp, sqp->s_last);


@ -61,7 +61,7 @@ int qib_make_uc_req(struct rvt_qp *qp, unsigned long *flags)
goto bail; goto bail;
/* We are in the error state, flush the work request. */ /* We are in the error state, flush the work request. */
smp_read_barrier_depends(); /* see post_one_send() */ smp_read_barrier_depends(); /* see post_one_send() */
if (qp->s_last == ACCESS_ONCE(qp->s_head)) if (qp->s_last == READ_ONCE(qp->s_head))
goto bail; goto bail;
/* If DMAs are in progress, we can't flush immediately. */ /* If DMAs are in progress, we can't flush immediately. */
if (atomic_read(&priv->s_dma_busy)) { if (atomic_read(&priv->s_dma_busy)) {
@ -91,7 +91,7 @@ int qib_make_uc_req(struct rvt_qp *qp, unsigned long *flags)
goto bail; goto bail;
/* Check if send work queue is empty. */ /* Check if send work queue is empty. */
smp_read_barrier_depends(); /* see post_one_send() */ smp_read_barrier_depends(); /* see post_one_send() */
if (qp->s_cur == ACCESS_ONCE(qp->s_head)) if (qp->s_cur == READ_ONCE(qp->s_head))
goto bail; goto bail;
/* /*
* Start a new request. * Start a new request.

Some files were not shown because too many files changed in this diff.