Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull locking updates from Ingo Molnar:
 "The biggest part of this tree is the new auto-generated atomics API
  wrappers by Mark Rutland.

  The primary motivation was to allow instrumentation without uglifying
  the primary source code.

  The linecount increase comes from adding the auto-generated files to
  the Git space as well:

    include/asm-generic/atomic-instrumented.h     | 1689 ++++++++++++++++--
    include/asm-generic/atomic-long.h             | 1174 ++++++++++---
    include/linux/atomic-fallback.h               | 2295 +++++++++++++++++++++++++
    include/linux/atomic.h                        | 1241 +------------

  I preferred this approach, so that the full call stack of the (already
  complex) locking APIs is still fully visible in 'git grep'.

  But if this is excessive we could certainly hide them.

  There's a separate build-time mechanism to determine whether the
  headers are out of date (they should never be stale if we do our job
  right).

  Anyway, nothing from this should be visible to regular kernel
  developers.

  Other changes:

   - Add support for dynamic keys, which removes a source of false
     positives in the workqueue code, among other things (Bart Van
     Assche)

   - Updates to tools/memory-model (Andrea Parri, Paul E. McKenney)

   - qspinlock, wake_q and lockdep micro-optimizations (Waiman Long)

   - misc other updates and enhancements"

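For context: each generated wrapper in include/asm-generic/atomic-instrumented.h is a thin shim that instruments the memory access and then defers to the arch_atomic_*() implementation. A minimal sketch of the generated shape (the exact instrumentation hook, kasan_check_write() here, is an assumption):

static inline void
atomic_add(int i, atomic_t *v)
{
	kasan_check_write(v, sizeof(*v));
	arch_atomic_add(i, v);
}

static inline bool
atomic_dec_and_test(atomic_t *v)
{
	kasan_check_write(v, sizeof(*v));
	return arch_atomic_dec_and_test(v);
}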
* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (48 commits)
  locking/lockdep: Shrink struct lock_class_key
  locking/lockdep: Add module_param to enable consistency checks
  lockdep/lib/tests: Test dynamic key registration
  lockdep/lib/tests: Fix run_tests.sh
  kernel/workqueue: Use dynamic lockdep keys for workqueues
  locking/lockdep: Add support for dynamic keys
  locking/lockdep: Verify whether lock objects are small enough to be used as class keys
  locking/lockdep: Check data structure consistency
  locking/lockdep: Reuse lock chains that have been freed
  locking/lockdep: Fix a comment in add_chain_cache()
  locking/lockdep: Introduce lockdep_next_lockchain() and lock_chain_count()
  locking/lockdep: Reuse list entries that are no longer in use
  locking/lockdep: Free lock classes that are no longer in use
  locking/lockdep: Update two outdated comments
  locking/lockdep: Make it easy to detect whether or not inside a selftest
  locking/lockdep: Split lockdep_free_key_range() and lockdep_reset_lock()
  locking/lockdep: Initialize the locks_before and locks_after lists earlier
  locking/lockdep: Make zap_class() remove all matching lock order entries
  locking/lockdep: Reorder struct lock_class members
  locking/lockdep: Avoid that add_chain_cache() adds an invalid chain to the cache
  ...
Linus Torvalds 2019-03-06 07:17:17 -08:00
Parents: c8f5ed6ef9 28d49e2826
Commit: 3478588b51
60 changed files with 7193 additions and 2276 deletions


@ -54,6 +54,13 @@ must propagate to all other CPUs before the release operation
(A-cumulative property). This is implemented using
:c:func:`smp_store_release`.
An ACQUIRE memory ordering guarantees that all post loads and
stores (all po-later instructions) on the same CPU are
completed after the acquire operation. It also guarantees that all
po-later stores on the same CPU must propagate to all other CPUs
after the acquire operation executes. This is implemented using
:c:func:`smp_acquire__after_ctrl_dep`.
A control dependency (on success) for refcounters guarantees that
if a reference for an object was successfully obtained (reference
counter increment or addition happened, function returned true),
@ -119,13 +126,24 @@ Memory ordering guarantees changes:
result of obtaining pointer to the object!
case 5) - decrement-based RMW ops that return a value
-----------------------------------------------------
case 5) - generic dec/sub decrement-based RMW ops that return a value
---------------------------------------------------------------------
Function changes:
* :c:func:`atomic_dec_and_test` --> :c:func:`refcount_dec_and_test`
* :c:func:`atomic_sub_and_test` --> :c:func:`refcount_sub_and_test`
Memory ordering guarantees changes:
* fully ordered --> RELEASE ordering + ACQUIRE ordering on success
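A minimal sketch of why the ACQUIRE ordering on success matters for the release-then-free pattern (illustrative only; the struct and function names are hypothetical):

struct obj {
	refcount_t ref;
	/* ... payload ... */
};

static void obj_put(struct obj *o)
{
	/*
	 * RELEASE ordering publishes all prior stores to *o before the
	 * decrement; the ACQUIRE ordering on success keeps the free from
	 * being reordered before the counter actually reached zero.
	 */
	if (refcount_dec_and_test(&o->ref))
		kfree(o);
}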
case 6) other decrement-based RMW ops that return a value
---------------------------------------------------------
Function changes:
* no atomic counterpart --> :c:func:`refcount_dec_if_one`
* ``atomic_add_unless(&var, -1, 1)`` --> ``refcount_dec_not_one(&var)``
@ -136,7 +154,7 @@ Memory ordering guarantees changes:
.. note:: :c:func:`atomic_add_unless` only provides full order on success.
case 6) - lock-based RMW
case 7) - lock-based RMW
------------------------
Function changes:

Kbuild

@ -6,7 +6,8 @@
# 2) Generate timeconst.h
# 3) Generate asm-offsets.h (may need bounds.h and timeconst.h)
# 4) Check for missing system calls
# 5) Generate constants.py (may need bounds.h)
# 5) check atomics headers are up-to-date
# 6) Generate constants.py (may need bounds.h)
#####
# 1) Generate bounds.h
@ -59,7 +60,20 @@ missing-syscalls: scripts/checksyscalls.sh $(offsets-file) FORCE
$(call cmd,syscalls)
#####
# 5) Generate constants for Python GDB integration
# 5) Check atomic headers are up-to-date
#
always += old-atomics
targets += old-atomics
quiet_cmd_atomics = CALL $<
cmd_atomics = $(CONFIG_SHELL) $<
old-atomics: scripts/atomic/check-atomics.sh FORCE
$(call cmd,atomics)
#####
# 6) Generate constants for Python GDB integration
#
extra-$(CONFIG_GDB_SCRIPTS) += build_constants_py


@ -2608,6 +2608,7 @@ L: linux-kernel@vger.kernel.org
S: Maintained
F: arch/*/include/asm/atomic*.h
F: include/*/atomic*.h
F: scripts/atomic/
ATTO EXPRESSSAS SAS/SATA RAID SCSI DRIVER
M: Bradley Grove <linuxdrivers@attotech.com>


@ -42,124 +42,131 @@
#define ATOMIC_INIT(i) { (i) }
#define atomic_read(v) READ_ONCE((v)->counter)
#define atomic_set(v, i) WRITE_ONCE(((v)->counter), (i))
#define arch_atomic_read(v) READ_ONCE((v)->counter)
#define arch_atomic_set(v, i) WRITE_ONCE(((v)->counter), (i))
#define atomic_add_return_relaxed atomic_add_return_relaxed
#define atomic_add_return_acquire atomic_add_return_acquire
#define atomic_add_return_release atomic_add_return_release
#define atomic_add_return atomic_add_return
#define arch_atomic_add_return_relaxed arch_atomic_add_return_relaxed
#define arch_atomic_add_return_acquire arch_atomic_add_return_acquire
#define arch_atomic_add_return_release arch_atomic_add_return_release
#define arch_atomic_add_return arch_atomic_add_return
#define atomic_sub_return_relaxed atomic_sub_return_relaxed
#define atomic_sub_return_acquire atomic_sub_return_acquire
#define atomic_sub_return_release atomic_sub_return_release
#define atomic_sub_return atomic_sub_return
#define arch_atomic_sub_return_relaxed arch_atomic_sub_return_relaxed
#define arch_atomic_sub_return_acquire arch_atomic_sub_return_acquire
#define arch_atomic_sub_return_release arch_atomic_sub_return_release
#define arch_atomic_sub_return arch_atomic_sub_return
#define atomic_fetch_add_relaxed atomic_fetch_add_relaxed
#define atomic_fetch_add_acquire atomic_fetch_add_acquire
#define atomic_fetch_add_release atomic_fetch_add_release
#define atomic_fetch_add atomic_fetch_add
#define arch_atomic_fetch_add_relaxed arch_atomic_fetch_add_relaxed
#define arch_atomic_fetch_add_acquire arch_atomic_fetch_add_acquire
#define arch_atomic_fetch_add_release arch_atomic_fetch_add_release
#define arch_atomic_fetch_add arch_atomic_fetch_add
#define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed
#define atomic_fetch_sub_acquire atomic_fetch_sub_acquire
#define atomic_fetch_sub_release atomic_fetch_sub_release
#define atomic_fetch_sub atomic_fetch_sub
#define arch_atomic_fetch_sub_relaxed arch_atomic_fetch_sub_relaxed
#define arch_atomic_fetch_sub_acquire arch_atomic_fetch_sub_acquire
#define arch_atomic_fetch_sub_release arch_atomic_fetch_sub_release
#define arch_atomic_fetch_sub arch_atomic_fetch_sub
#define atomic_fetch_and_relaxed atomic_fetch_and_relaxed
#define atomic_fetch_and_acquire atomic_fetch_and_acquire
#define atomic_fetch_and_release atomic_fetch_and_release
#define atomic_fetch_and atomic_fetch_and
#define arch_atomic_fetch_and_relaxed arch_atomic_fetch_and_relaxed
#define arch_atomic_fetch_and_acquire arch_atomic_fetch_and_acquire
#define arch_atomic_fetch_and_release arch_atomic_fetch_and_release
#define arch_atomic_fetch_and arch_atomic_fetch_and
#define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed
#define atomic_fetch_andnot_acquire atomic_fetch_andnot_acquire
#define atomic_fetch_andnot_release atomic_fetch_andnot_release
#define atomic_fetch_andnot atomic_fetch_andnot
#define arch_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot_relaxed
#define arch_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
#define arch_atomic_fetch_andnot_release arch_atomic_fetch_andnot_release
#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
#define atomic_fetch_or_relaxed atomic_fetch_or_relaxed
#define atomic_fetch_or_acquire atomic_fetch_or_acquire
#define atomic_fetch_or_release atomic_fetch_or_release
#define atomic_fetch_or atomic_fetch_or
#define arch_atomic_fetch_or_relaxed arch_atomic_fetch_or_relaxed
#define arch_atomic_fetch_or_acquire arch_atomic_fetch_or_acquire
#define arch_atomic_fetch_or_release arch_atomic_fetch_or_release
#define arch_atomic_fetch_or arch_atomic_fetch_or
#define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed
#define atomic_fetch_xor_acquire atomic_fetch_xor_acquire
#define atomic_fetch_xor_release atomic_fetch_xor_release
#define atomic_fetch_xor atomic_fetch_xor
#define arch_atomic_fetch_xor_relaxed arch_atomic_fetch_xor_relaxed
#define arch_atomic_fetch_xor_acquire arch_atomic_fetch_xor_acquire
#define arch_atomic_fetch_xor_release arch_atomic_fetch_xor_release
#define arch_atomic_fetch_xor arch_atomic_fetch_xor
#define atomic_xchg_relaxed(v, new) xchg_relaxed(&((v)->counter), (new))
#define atomic_xchg_acquire(v, new) xchg_acquire(&((v)->counter), (new))
#define atomic_xchg_release(v, new) xchg_release(&((v)->counter), (new))
#define atomic_xchg(v, new) xchg(&((v)->counter), (new))
#define arch_atomic_xchg_relaxed(v, new) \
arch_xchg_relaxed(&((v)->counter), (new))
#define arch_atomic_xchg_acquire(v, new) \
arch_xchg_acquire(&((v)->counter), (new))
#define arch_atomic_xchg_release(v, new) \
arch_xchg_release(&((v)->counter), (new))
#define arch_atomic_xchg(v, new) \
arch_xchg(&((v)->counter), (new))
#define atomic_cmpxchg_relaxed(v, old, new) \
cmpxchg_relaxed(&((v)->counter), (old), (new))
#define atomic_cmpxchg_acquire(v, old, new) \
cmpxchg_acquire(&((v)->counter), (old), (new))
#define atomic_cmpxchg_release(v, old, new) \
cmpxchg_release(&((v)->counter), (old), (new))
#define atomic_cmpxchg(v, old, new) cmpxchg(&((v)->counter), (old), (new))
#define arch_atomic_cmpxchg_relaxed(v, old, new) \
arch_cmpxchg_relaxed(&((v)->counter), (old), (new))
#define arch_atomic_cmpxchg_acquire(v, old, new) \
arch_cmpxchg_acquire(&((v)->counter), (old), (new))
#define arch_atomic_cmpxchg_release(v, old, new) \
arch_cmpxchg_release(&((v)->counter), (old), (new))
#define arch_atomic_cmpxchg(v, old, new) \
arch_cmpxchg(&((v)->counter), (old), (new))
#define atomic_andnot atomic_andnot
#define arch_atomic_andnot arch_atomic_andnot
/*
* 64-bit atomic operations.
* 64-bit arch_atomic operations.
*/
#define ATOMIC64_INIT ATOMIC_INIT
#define atomic64_read atomic_read
#define atomic64_set atomic_set
#define ATOMIC64_INIT ATOMIC_INIT
#define arch_atomic64_read arch_atomic_read
#define arch_atomic64_set arch_atomic_set
#define atomic64_add_return_relaxed atomic64_add_return_relaxed
#define atomic64_add_return_acquire atomic64_add_return_acquire
#define atomic64_add_return_release atomic64_add_return_release
#define atomic64_add_return atomic64_add_return
#define arch_atomic64_add_return_relaxed arch_atomic64_add_return_relaxed
#define arch_atomic64_add_return_acquire arch_atomic64_add_return_acquire
#define arch_atomic64_add_return_release arch_atomic64_add_return_release
#define arch_atomic64_add_return arch_atomic64_add_return
#define atomic64_sub_return_relaxed atomic64_sub_return_relaxed
#define atomic64_sub_return_acquire atomic64_sub_return_acquire
#define atomic64_sub_return_release atomic64_sub_return_release
#define atomic64_sub_return atomic64_sub_return
#define arch_atomic64_sub_return_relaxed arch_atomic64_sub_return_relaxed
#define arch_atomic64_sub_return_acquire arch_atomic64_sub_return_acquire
#define arch_atomic64_sub_return_release arch_atomic64_sub_return_release
#define arch_atomic64_sub_return arch_atomic64_sub_return
#define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed
#define atomic64_fetch_add_acquire atomic64_fetch_add_acquire
#define atomic64_fetch_add_release atomic64_fetch_add_release
#define atomic64_fetch_add atomic64_fetch_add
#define arch_atomic64_fetch_add_relaxed arch_atomic64_fetch_add_relaxed
#define arch_atomic64_fetch_add_acquire arch_atomic64_fetch_add_acquire
#define arch_atomic64_fetch_add_release arch_atomic64_fetch_add_release
#define arch_atomic64_fetch_add arch_atomic64_fetch_add
#define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed
#define atomic64_fetch_sub_acquire atomic64_fetch_sub_acquire
#define atomic64_fetch_sub_release atomic64_fetch_sub_release
#define atomic64_fetch_sub atomic64_fetch_sub
#define arch_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub_relaxed
#define arch_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub_acquire
#define arch_atomic64_fetch_sub_release arch_atomic64_fetch_sub_release
#define arch_atomic64_fetch_sub arch_atomic64_fetch_sub
#define atomic64_fetch_and_relaxed atomic64_fetch_and_relaxed
#define atomic64_fetch_and_acquire atomic64_fetch_and_acquire
#define atomic64_fetch_and_release atomic64_fetch_and_release
#define atomic64_fetch_and atomic64_fetch_and
#define arch_atomic64_fetch_and_relaxed arch_atomic64_fetch_and_relaxed
#define arch_atomic64_fetch_and_acquire arch_atomic64_fetch_and_acquire
#define arch_atomic64_fetch_and_release arch_atomic64_fetch_and_release
#define arch_atomic64_fetch_and arch_atomic64_fetch_and
#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed
#define atomic64_fetch_andnot_acquire atomic64_fetch_andnot_acquire
#define atomic64_fetch_andnot_release atomic64_fetch_andnot_release
#define atomic64_fetch_andnot atomic64_fetch_andnot
#define arch_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot_relaxed
#define arch_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot_acquire
#define arch_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot_release
#define arch_atomic64_fetch_andnot arch_atomic64_fetch_andnot
#define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed
#define atomic64_fetch_or_acquire atomic64_fetch_or_acquire
#define atomic64_fetch_or_release atomic64_fetch_or_release
#define atomic64_fetch_or atomic64_fetch_or
#define arch_atomic64_fetch_or_relaxed arch_atomic64_fetch_or_relaxed
#define arch_atomic64_fetch_or_acquire arch_atomic64_fetch_or_acquire
#define arch_atomic64_fetch_or_release arch_atomic64_fetch_or_release
#define arch_atomic64_fetch_or arch_atomic64_fetch_or
#define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed
#define atomic64_fetch_xor_acquire atomic64_fetch_xor_acquire
#define atomic64_fetch_xor_release atomic64_fetch_xor_release
#define atomic64_fetch_xor atomic64_fetch_xor
#define arch_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor_relaxed
#define arch_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor_acquire
#define arch_atomic64_fetch_xor_release arch_atomic64_fetch_xor_release
#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor
#define atomic64_xchg_relaxed atomic_xchg_relaxed
#define atomic64_xchg_acquire atomic_xchg_acquire
#define atomic64_xchg_release atomic_xchg_release
#define atomic64_xchg atomic_xchg
#define arch_atomic64_xchg_relaxed arch_atomic_xchg_relaxed
#define arch_atomic64_xchg_acquire arch_atomic_xchg_acquire
#define arch_atomic64_xchg_release arch_atomic_xchg_release
#define arch_atomic64_xchg arch_atomic_xchg
#define atomic64_cmpxchg_relaxed atomic_cmpxchg_relaxed
#define atomic64_cmpxchg_acquire atomic_cmpxchg_acquire
#define atomic64_cmpxchg_release atomic_cmpxchg_release
#define atomic64_cmpxchg atomic_cmpxchg
#define arch_atomic64_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
#define arch_atomic64_cmpxchg_acquire arch_atomic_cmpxchg_acquire
#define arch_atomic64_cmpxchg_release arch_atomic_cmpxchg_release
#define arch_atomic64_cmpxchg arch_atomic_cmpxchg
#define atomic64_andnot atomic64_andnot
#define arch_atomic64_andnot arch_atomic64_andnot
#define atomic64_dec_if_positive atomic64_dec_if_positive
#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
#include <asm-generic/atomic-instrumented.h>
#endif
#endif


@ -39,7 +39,7 @@
#define ATOMIC_OP(op, asm_op) \
__LL_SC_INLINE void \
__LL_SC_PREFIX(atomic_##op(int i, atomic_t *v)) \
__LL_SC_PREFIX(arch_atomic_##op(int i, atomic_t *v)) \
{ \
unsigned long tmp; \
int result; \
@ -53,11 +53,11 @@ __LL_SC_PREFIX(atomic_##op(int i, atomic_t *v)) \
: "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
: "Ir" (i)); \
} \
__LL_SC_EXPORT(atomic_##op);
__LL_SC_EXPORT(arch_atomic_##op);
#define ATOMIC_OP_RETURN(name, mb, acq, rel, cl, op, asm_op) \
__LL_SC_INLINE int \
__LL_SC_PREFIX(atomic_##op##_return##name(int i, atomic_t *v)) \
__LL_SC_PREFIX(arch_atomic_##op##_return##name(int i, atomic_t *v)) \
{ \
unsigned long tmp; \
int result; \
@ -75,11 +75,11 @@ __LL_SC_PREFIX(atomic_##op##_return##name(int i, atomic_t *v)) \
\
return result; \
} \
__LL_SC_EXPORT(atomic_##op##_return##name);
__LL_SC_EXPORT(arch_atomic_##op##_return##name);
#define ATOMIC_FETCH_OP(name, mb, acq, rel, cl, op, asm_op) \
__LL_SC_INLINE int \
__LL_SC_PREFIX(atomic_fetch_##op##name(int i, atomic_t *v)) \
__LL_SC_PREFIX(arch_atomic_fetch_##op##name(int i, atomic_t *v)) \
{ \
unsigned long tmp; \
int val, result; \
@ -97,7 +97,7 @@ __LL_SC_PREFIX(atomic_fetch_##op##name(int i, atomic_t *v)) \
\
return result; \
} \
__LL_SC_EXPORT(atomic_fetch_##op##name);
__LL_SC_EXPORT(arch_atomic_fetch_##op##name);
#define ATOMIC_OPS(...) \
ATOMIC_OP(__VA_ARGS__) \
@ -133,7 +133,7 @@ ATOMIC_OPS(xor, eor)
#define ATOMIC64_OP(op, asm_op) \
__LL_SC_INLINE void \
__LL_SC_PREFIX(atomic64_##op(long i, atomic64_t *v)) \
__LL_SC_PREFIX(arch_atomic64_##op(long i, atomic64_t *v)) \
{ \
long result; \
unsigned long tmp; \
@ -147,11 +147,11 @@ __LL_SC_PREFIX(atomic64_##op(long i, atomic64_t *v)) \
: "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
: "Ir" (i)); \
} \
__LL_SC_EXPORT(atomic64_##op);
__LL_SC_EXPORT(arch_atomic64_##op);
#define ATOMIC64_OP_RETURN(name, mb, acq, rel, cl, op, asm_op) \
__LL_SC_INLINE long \
__LL_SC_PREFIX(atomic64_##op##_return##name(long i, atomic64_t *v)) \
__LL_SC_PREFIX(arch_atomic64_##op##_return##name(long i, atomic64_t *v))\
{ \
long result; \
unsigned long tmp; \
@ -169,11 +169,11 @@ __LL_SC_PREFIX(atomic64_##op##_return##name(long i, atomic64_t *v)) \
\
return result; \
} \
__LL_SC_EXPORT(atomic64_##op##_return##name);
__LL_SC_EXPORT(arch_atomic64_##op##_return##name);
#define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op) \
__LL_SC_INLINE long \
__LL_SC_PREFIX(atomic64_fetch_##op##name(long i, atomic64_t *v)) \
__LL_SC_PREFIX(arch_atomic64_fetch_##op##name(long i, atomic64_t *v)) \
{ \
long result, val; \
unsigned long tmp; \
@ -191,7 +191,7 @@ __LL_SC_PREFIX(atomic64_fetch_##op##name(long i, atomic64_t *v)) \
\
return result; \
} \
__LL_SC_EXPORT(atomic64_fetch_##op##name);
__LL_SC_EXPORT(arch_atomic64_fetch_##op##name);
#define ATOMIC64_OPS(...) \
ATOMIC64_OP(__VA_ARGS__) \
@ -226,7 +226,7 @@ ATOMIC64_OPS(xor, eor)
#undef ATOMIC64_OP
__LL_SC_INLINE long
__LL_SC_PREFIX(atomic64_dec_if_positive(atomic64_t *v))
__LL_SC_PREFIX(arch_atomic64_dec_if_positive(atomic64_t *v))
{
long result;
unsigned long tmp;
@ -246,7 +246,7 @@ __LL_SC_PREFIX(atomic64_dec_if_positive(atomic64_t *v))
return result;
}
__LL_SC_EXPORT(atomic64_dec_if_positive);
__LL_SC_EXPORT(arch_atomic64_dec_if_positive);
#define __CMPXCHG_CASE(w, sfx, name, sz, mb, acq, rel, cl) \
__LL_SC_INLINE u##sz \


@ -25,9 +25,9 @@
#error "please don't include this file directly"
#endif
#define __LL_SC_ATOMIC(op) __LL_SC_CALL(atomic_##op)
#define __LL_SC_ATOMIC(op) __LL_SC_CALL(arch_atomic_##op)
#define ATOMIC_OP(op, asm_op) \
static inline void atomic_##op(int i, atomic_t *v) \
static inline void arch_atomic_##op(int i, atomic_t *v) \
{ \
register int w0 asm ("w0") = i; \
register atomic_t *x1 asm ("x1") = v; \
@ -47,7 +47,7 @@ ATOMIC_OP(add, stadd)
#undef ATOMIC_OP
#define ATOMIC_FETCH_OP(name, mb, op, asm_op, cl...) \
static inline int atomic_fetch_##op##name(int i, atomic_t *v) \
static inline int arch_atomic_fetch_##op##name(int i, atomic_t *v) \
{ \
register int w0 asm ("w0") = i; \
register atomic_t *x1 asm ("x1") = v; \
@ -79,7 +79,7 @@ ATOMIC_FETCH_OPS(add, ldadd)
#undef ATOMIC_FETCH_OPS
#define ATOMIC_OP_ADD_RETURN(name, mb, cl...) \
static inline int atomic_add_return##name(int i, atomic_t *v) \
static inline int arch_atomic_add_return##name(int i, atomic_t *v) \
{ \
register int w0 asm ("w0") = i; \
register atomic_t *x1 asm ("x1") = v; \
@ -105,7 +105,7 @@ ATOMIC_OP_ADD_RETURN( , al, "memory")
#undef ATOMIC_OP_ADD_RETURN
static inline void atomic_and(int i, atomic_t *v)
static inline void arch_atomic_and(int i, atomic_t *v)
{
register int w0 asm ("w0") = i;
register atomic_t *x1 asm ("x1") = v;
@ -123,7 +123,7 @@ static inline void atomic_and(int i, atomic_t *v)
}
#define ATOMIC_FETCH_OP_AND(name, mb, cl...) \
static inline int atomic_fetch_and##name(int i, atomic_t *v) \
static inline int arch_atomic_fetch_and##name(int i, atomic_t *v) \
{ \
register int w0 asm ("w0") = i; \
register atomic_t *x1 asm ("x1") = v; \
@ -149,7 +149,7 @@ ATOMIC_FETCH_OP_AND( , al, "memory")
#undef ATOMIC_FETCH_OP_AND
static inline void atomic_sub(int i, atomic_t *v)
static inline void arch_atomic_sub(int i, atomic_t *v)
{
register int w0 asm ("w0") = i;
register atomic_t *x1 asm ("x1") = v;
@ -167,7 +167,7 @@ static inline void atomic_sub(int i, atomic_t *v)
}
#define ATOMIC_OP_SUB_RETURN(name, mb, cl...) \
static inline int atomic_sub_return##name(int i, atomic_t *v) \
static inline int arch_atomic_sub_return##name(int i, atomic_t *v) \
{ \
register int w0 asm ("w0") = i; \
register atomic_t *x1 asm ("x1") = v; \
@ -195,7 +195,7 @@ ATOMIC_OP_SUB_RETURN( , al, "memory")
#undef ATOMIC_OP_SUB_RETURN
#define ATOMIC_FETCH_OP_SUB(name, mb, cl...) \
static inline int atomic_fetch_sub##name(int i, atomic_t *v) \
static inline int arch_atomic_fetch_sub##name(int i, atomic_t *v) \
{ \
register int w0 asm ("w0") = i; \
register atomic_t *x1 asm ("x1") = v; \
@ -222,9 +222,9 @@ ATOMIC_FETCH_OP_SUB( , al, "memory")
#undef ATOMIC_FETCH_OP_SUB
#undef __LL_SC_ATOMIC
#define __LL_SC_ATOMIC64(op) __LL_SC_CALL(atomic64_##op)
#define __LL_SC_ATOMIC64(op) __LL_SC_CALL(arch_atomic64_##op)
#define ATOMIC64_OP(op, asm_op) \
static inline void atomic64_##op(long i, atomic64_t *v) \
static inline void arch_atomic64_##op(long i, atomic64_t *v) \
{ \
register long x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \
@ -244,7 +244,7 @@ ATOMIC64_OP(add, stadd)
#undef ATOMIC64_OP
#define ATOMIC64_FETCH_OP(name, mb, op, asm_op, cl...) \
static inline long atomic64_fetch_##op##name(long i, atomic64_t *v) \
static inline long arch_atomic64_fetch_##op##name(long i, atomic64_t *v)\
{ \
register long x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \
@ -276,7 +276,7 @@ ATOMIC64_FETCH_OPS(add, ldadd)
#undef ATOMIC64_FETCH_OPS
#define ATOMIC64_OP_ADD_RETURN(name, mb, cl...) \
static inline long atomic64_add_return##name(long i, atomic64_t *v) \
static inline long arch_atomic64_add_return##name(long i, atomic64_t *v)\
{ \
register long x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \
@ -302,7 +302,7 @@ ATOMIC64_OP_ADD_RETURN( , al, "memory")
#undef ATOMIC64_OP_ADD_RETURN
static inline void atomic64_and(long i, atomic64_t *v)
static inline void arch_atomic64_and(long i, atomic64_t *v)
{
register long x0 asm ("x0") = i;
register atomic64_t *x1 asm ("x1") = v;
@ -320,7 +320,7 @@ static inline void atomic64_and(long i, atomic64_t *v)
}
#define ATOMIC64_FETCH_OP_AND(name, mb, cl...) \
static inline long atomic64_fetch_and##name(long i, atomic64_t *v) \
static inline long arch_atomic64_fetch_and##name(long i, atomic64_t *v) \
{ \
register long x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \
@ -346,7 +346,7 @@ ATOMIC64_FETCH_OP_AND( , al, "memory")
#undef ATOMIC64_FETCH_OP_AND
static inline void atomic64_sub(long i, atomic64_t *v)
static inline void arch_atomic64_sub(long i, atomic64_t *v)
{
register long x0 asm ("x0") = i;
register atomic64_t *x1 asm ("x1") = v;
@ -364,7 +364,7 @@ static inline void atomic64_sub(long i, atomic64_t *v)
}
#define ATOMIC64_OP_SUB_RETURN(name, mb, cl...) \
static inline long atomic64_sub_return##name(long i, atomic64_t *v) \
static inline long arch_atomic64_sub_return##name(long i, atomic64_t *v)\
{ \
register long x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \
@ -392,7 +392,7 @@ ATOMIC64_OP_SUB_RETURN( , al, "memory")
#undef ATOMIC64_OP_SUB_RETURN
#define ATOMIC64_FETCH_OP_SUB(name, mb, cl...) \
static inline long atomic64_fetch_sub##name(long i, atomic64_t *v) \
static inline long arch_atomic64_fetch_sub##name(long i, atomic64_t *v) \
{ \
register long x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \
@ -418,7 +418,7 @@ ATOMIC64_FETCH_OP_SUB( , al, "memory")
#undef ATOMIC64_FETCH_OP_SUB
static inline long atomic64_dec_if_positive(atomic64_t *v)
static inline long arch_atomic64_dec_if_positive(atomic64_t *v)
{
register long x0 asm ("x0") = (long)v;


@ -110,10 +110,10 @@ __XCHG_GEN(_mb)
})
/* xchg */
#define xchg_relaxed(...) __xchg_wrapper( , __VA_ARGS__)
#define xchg_acquire(...) __xchg_wrapper(_acq, __VA_ARGS__)
#define xchg_release(...) __xchg_wrapper(_rel, __VA_ARGS__)
#define xchg(...) __xchg_wrapper( _mb, __VA_ARGS__)
#define arch_xchg_relaxed(...) __xchg_wrapper( , __VA_ARGS__)
#define arch_xchg_acquire(...) __xchg_wrapper(_acq, __VA_ARGS__)
#define arch_xchg_release(...) __xchg_wrapper(_rel, __VA_ARGS__)
#define arch_xchg(...) __xchg_wrapper( _mb, __VA_ARGS__)
#define __CMPXCHG_GEN(sfx) \
static inline unsigned long __cmpxchg##sfx(volatile void *ptr, \
@ -154,18 +154,18 @@ __CMPXCHG_GEN(_mb)
})
/* cmpxchg */
#define cmpxchg_relaxed(...) __cmpxchg_wrapper( , __VA_ARGS__)
#define cmpxchg_acquire(...) __cmpxchg_wrapper(_acq, __VA_ARGS__)
#define cmpxchg_release(...) __cmpxchg_wrapper(_rel, __VA_ARGS__)
#define cmpxchg(...) __cmpxchg_wrapper( _mb, __VA_ARGS__)
#define cmpxchg_local cmpxchg_relaxed
#define arch_cmpxchg_relaxed(...) __cmpxchg_wrapper( , __VA_ARGS__)
#define arch_cmpxchg_acquire(...) __cmpxchg_wrapper(_acq, __VA_ARGS__)
#define arch_cmpxchg_release(...) __cmpxchg_wrapper(_rel, __VA_ARGS__)
#define arch_cmpxchg(...) __cmpxchg_wrapper( _mb, __VA_ARGS__)
#define arch_cmpxchg_local arch_cmpxchg_relaxed
/* cmpxchg64 */
#define cmpxchg64_relaxed cmpxchg_relaxed
#define cmpxchg64_acquire cmpxchg_acquire
#define cmpxchg64_release cmpxchg_release
#define cmpxchg64 cmpxchg
#define cmpxchg64_local cmpxchg_local
#define arch_cmpxchg64_relaxed arch_cmpxchg_relaxed
#define arch_cmpxchg64_acquire arch_cmpxchg_acquire
#define arch_cmpxchg64_release arch_cmpxchg_release
#define arch_cmpxchg64 arch_cmpxchg
#define arch_cmpxchg64_local arch_cmpxchg_local
/* cmpxchg_double */
#define system_has_cmpxchg_double() 1
@ -177,24 +177,24 @@ __CMPXCHG_GEN(_mb)
VM_BUG_ON((unsigned long *)(ptr2) - (unsigned long *)(ptr1) != 1); \
})
#define cmpxchg_double(ptr1, ptr2, o1, o2, n1, n2) \
({\
int __ret;\
__cmpxchg_double_check(ptr1, ptr2); \
__ret = !__cmpxchg_double_mb((unsigned long)(o1), (unsigned long)(o2), \
(unsigned long)(n1), (unsigned long)(n2), \
ptr1); \
__ret; \
#define arch_cmpxchg_double(ptr1, ptr2, o1, o2, n1, n2) \
({ \
int __ret; \
__cmpxchg_double_check(ptr1, ptr2); \
__ret = !__cmpxchg_double_mb((unsigned long)(o1), (unsigned long)(o2), \
(unsigned long)(n1), (unsigned long)(n2), \
ptr1); \
__ret; \
})
#define cmpxchg_double_local(ptr1, ptr2, o1, o2, n1, n2) \
({\
int __ret;\
__cmpxchg_double_check(ptr1, ptr2); \
__ret = !__cmpxchg_double((unsigned long)(o1), (unsigned long)(o2), \
(unsigned long)(n1), (unsigned long)(n2), \
ptr1); \
__ret; \
#define arch_cmpxchg_double_local(ptr1, ptr2, o1, o2, n1, n2) \
({ \
int __ret; \
__cmpxchg_double_check(ptr1, ptr2); \
__ret = !__cmpxchg_double((unsigned long)(o1), (unsigned long)(o2), \
(unsigned long)(n1), (unsigned long)(n2), \
ptr1); \
__ret; \
})
#define __CMPWAIT_CASE(w, sfx, sz) \


@ -15,13 +15,13 @@
* ops which are SMP safe even on a UP kernel.
*/
#define sync_set_bit(nr, p) set_bit(nr, p)
#define sync_clear_bit(nr, p) clear_bit(nr, p)
#define sync_change_bit(nr, p) change_bit(nr, p)
#define sync_test_and_set_bit(nr, p) test_and_set_bit(nr, p)
#define sync_test_and_clear_bit(nr, p) test_and_clear_bit(nr, p)
#define sync_test_and_change_bit(nr, p) test_and_change_bit(nr, p)
#define sync_test_bit(nr, addr) test_bit(nr, addr)
#define sync_cmpxchg cmpxchg
#define sync_set_bit(nr, p) set_bit(nr, p)
#define sync_clear_bit(nr, p) clear_bit(nr, p)
#define sync_change_bit(nr, p) change_bit(nr, p)
#define sync_test_and_set_bit(nr, p) test_and_set_bit(nr, p)
#define sync_test_and_clear_bit(nr, p) test_and_clear_bit(nr, p)
#define sync_test_and_change_bit(nr, p) test_and_change_bit(nr, p)
#define sync_test_bit(nr, addr) test_bit(nr, addr)
#define arch_sync_cmpxchg arch_cmpxchg
#endif


@ -67,16 +67,30 @@ static __always_inline void refcount_dec(refcount_t *r)
static __always_inline __must_check
bool refcount_sub_and_test(unsigned int i, refcount_t *r)
{
return GEN_BINARY_SUFFIXED_RMWcc(LOCK_PREFIX "subl",
bool ret = GEN_BINARY_SUFFIXED_RMWcc(LOCK_PREFIX "subl",
REFCOUNT_CHECK_LT_ZERO,
r->refs.counter, e, "er", i, "cx");
if (ret) {
smp_acquire__after_ctrl_dep();
return true;
}
return false;
}
static __always_inline __must_check bool refcount_dec_and_test(refcount_t *r)
{
return GEN_UNARY_SUFFIXED_RMWcc(LOCK_PREFIX "decl",
REFCOUNT_CHECK_LT_ZERO,
r->refs.counter, e, "cx");
bool ret = GEN_UNARY_SUFFIXED_RMWcc(LOCK_PREFIX "decl",
REFCOUNT_CHECK_LT_ZERO,
r->refs.counter, e, "cx");
if (ret) {
smp_acquire__after_ctrl_dep();
return true;
}
return false;
}
static __always_inline __must_check


@ -1058,7 +1058,7 @@ static int flock_lock_inode(struct inode *inode, struct file_lock *request)
return -ENOMEM;
}
percpu_down_read_preempt_disable(&file_rwsem);
percpu_down_read(&file_rwsem);
spin_lock(&ctx->flc_lock);
if (request->fl_flags & FL_ACCESS)
goto find_conflict;
@ -1100,7 +1100,7 @@ find_conflict:
out:
spin_unlock(&ctx->flc_lock);
percpu_up_read_preempt_enable(&file_rwsem);
percpu_up_read(&file_rwsem);
if (new_fl)
locks_free_lock(new_fl);
locks_dispose_list(&dispose);
@ -1138,7 +1138,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
new_fl2 = locks_alloc_lock();
}
percpu_down_read_preempt_disable(&file_rwsem);
percpu_down_read(&file_rwsem);
spin_lock(&ctx->flc_lock);
/*
* New lock request. Walk all POSIX locks and look for conflicts. If
@ -1312,7 +1312,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
}
out:
spin_unlock(&ctx->flc_lock);
percpu_up_read_preempt_enable(&file_rwsem);
percpu_up_read(&file_rwsem);
/*
* Free any unused locks.
*/
@ -1584,7 +1584,7 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
return error;
}
percpu_down_read_preempt_disable(&file_rwsem);
percpu_down_read(&file_rwsem);
spin_lock(&ctx->flc_lock);
time_out_leases(inode, &dispose);
@ -1636,13 +1636,13 @@ restart:
locks_insert_block(fl, new_fl, leases_conflict);
trace_break_lease_block(inode, new_fl);
spin_unlock(&ctx->flc_lock);
percpu_up_read_preempt_enable(&file_rwsem);
percpu_up_read(&file_rwsem);
locks_dispose_list(&dispose);
error = wait_event_interruptible_timeout(new_fl->fl_wait,
!new_fl->fl_blocker, break_time);
percpu_down_read_preempt_disable(&file_rwsem);
percpu_down_read(&file_rwsem);
spin_lock(&ctx->flc_lock);
trace_break_lease_unblock(inode, new_fl);
locks_delete_block(new_fl);
@ -1659,7 +1659,7 @@ restart:
}
out:
spin_unlock(&ctx->flc_lock);
percpu_up_read_preempt_enable(&file_rwsem);
percpu_up_read(&file_rwsem);
locks_dispose_list(&dispose);
locks_free_lock(new_fl);
return error;
@ -1729,7 +1729,7 @@ int fcntl_getlease(struct file *filp)
ctx = smp_load_acquire(&inode->i_flctx);
if (ctx && !list_empty_careful(&ctx->flc_lease)) {
percpu_down_read_preempt_disable(&file_rwsem);
percpu_down_read(&file_rwsem);
spin_lock(&ctx->flc_lock);
time_out_leases(inode, &dispose);
list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
@ -1739,7 +1739,7 @@ int fcntl_getlease(struct file *filp)
break;
}
spin_unlock(&ctx->flc_lock);
percpu_up_read_preempt_enable(&file_rwsem);
percpu_up_read(&file_rwsem);
locks_dispose_list(&dispose);
}
@ -1813,7 +1813,7 @@ generic_add_lease(struct file *filp, long arg, struct file_lock **flp, void **pr
return -EINVAL;
}
percpu_down_read_preempt_disable(&file_rwsem);
percpu_down_read(&file_rwsem);
spin_lock(&ctx->flc_lock);
time_out_leases(inode, &dispose);
error = check_conflicting_open(dentry, arg, lease->fl_flags);
@ -1884,7 +1884,7 @@ out_setup:
lease->fl_lmops->lm_setup(lease, priv);
out:
spin_unlock(&ctx->flc_lock);
percpu_up_read_preempt_enable(&file_rwsem);
percpu_up_read(&file_rwsem);
locks_dispose_list(&dispose);
if (is_deleg)
inode_unlock(inode);
@ -1907,7 +1907,7 @@ static int generic_delete_lease(struct file *filp, void *owner)
return error;
}
percpu_down_read_preempt_disable(&file_rwsem);
percpu_down_read(&file_rwsem);
spin_lock(&ctx->flc_lock);
list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
if (fl->fl_file == filp &&
@ -1920,7 +1920,7 @@ static int generic_delete_lease(struct file *filp, void *owner)
if (victim)
error = fl->fl_lmops->lm_change(victim, F_UNLCK, &dispose);
spin_unlock(&ctx->flc_lock);
percpu_up_read_preempt_enable(&file_rwsem);
percpu_up_read(&file_rwsem);
locks_dispose_list(&dispose);
return error;
}
@ -2643,13 +2643,13 @@ locks_remove_lease(struct file *filp, struct file_lock_context *ctx)
if (list_empty(&ctx->flc_lease))
return;
percpu_down_read_preempt_disable(&file_rwsem);
percpu_down_read(&file_rwsem);
spin_lock(&ctx->flc_lock);
list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_list)
if (filp == fl->fl_file)
lease_modify(fl, F_UNLCK, &dispose);
spin_unlock(&ctx->flc_lock);
percpu_up_read_preempt_enable(&file_rwsem);
percpu_up_read(&file_rwsem);
locks_dispose_list(&dispose);
}

File diff suppressed because it is too large.

File diff suppressed because it is too large.

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@ -46,16 +46,22 @@ extern int lock_stat;
#define NR_LOCKDEP_CACHING_CLASSES 2
/*
* Lock-classes are keyed via unique addresses, by embedding the
* lockclass-key into the kernel (or module) .data section. (For
* static locks we use the lock address itself as the key.)
* A lockdep key is associated with each lock object. For static locks we use
* the lock address itself as the key. Dynamically allocated lock objects can
* have a statically or dynamically allocated key. Dynamically allocated lock
* keys must be registered before being used and must be unregistered before
* the key memory is freed.
*/
struct lockdep_subclass_key {
char __one_byte;
} __attribute__ ((__packed__));
/* hash_entry is used to keep track of dynamically allocated keys. */
struct lock_class_key {
struct lockdep_subclass_key subkeys[MAX_LOCKDEP_SUBCLASSES];
union {
struct hlist_node hash_entry;
struct lockdep_subclass_key subkeys[MAX_LOCKDEP_SUBCLASSES];
};
};
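A usage sketch for a dynamically allocated key (illustrative only; my_obj and its helpers are hypothetical, while lockdep_register_key()/lockdep_unregister_key() are the interfaces added by this series):

struct my_obj {
	spinlock_t lock;
	struct lock_class_key key;	/* key lives in dynamically allocated memory */
};

static struct my_obj *my_obj_create(void)
{
	struct my_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

	if (!obj)
		return NULL;
	/* Register the key before it is used as a lock class. */
	lockdep_register_key(&obj->key);
	spin_lock_init(&obj->lock);
	lockdep_set_class(&obj->lock, &obj->key);
	return obj;
}

static void my_obj_destroy(struct my_obj *obj)
{
	/* Unregister the key before the memory holding it is freed. */
	lockdep_unregister_key(&obj->key);
	kfree(obj);
}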
extern struct lock_class_key __lockdep_no_validate__;
@ -63,7 +69,8 @@ extern struct lock_class_key __lockdep_no_validate__;
#define LOCKSTAT_POINTS 4
/*
* The lock-class itself:
* The lock-class itself. The order of the structure members matters.
* reinit_class() zeroes the key member and all subsequent members.
*/
struct lock_class {
/*
@ -72,10 +79,19 @@ struct lock_class {
struct hlist_node hash_entry;
/*
* global list of all lock-classes:
* Entry in all_lock_classes when in use. Entry in free_lock_classes
* when not in use. Instances that are being freed are on one of the
* zapped_classes lists.
*/
struct list_head lock_entry;
/*
* These fields represent a directed graph of lock dependencies,
* to every node we attach a list of "forward" and a list of
* "backward" graph nodes.
*/
struct list_head locks_after, locks_before;
struct lockdep_subclass_key *key;
unsigned int subclass;
unsigned int dep_gen_id;
@ -86,13 +102,6 @@ struct lock_class {
unsigned long usage_mask;
struct stack_trace usage_traces[XXX_LOCK_USAGE_STATES];
/*
* These fields represent a directed graph of lock dependencies,
* to every node we attach a list of "forward" and a list of
* "backward" graph nodes.
*/
struct list_head locks_after, locks_before;
/*
* Generation counter, when doing certain classes of graph walking,
* to ensure that we check one node only once:
@ -104,7 +113,7 @@ struct lock_class {
unsigned long contention_point[LOCKSTAT_POINTS];
unsigned long contending_point[LOCKSTAT_POINTS];
#endif
};
} __no_randomize_layout;
#ifdef CONFIG_LOCK_STAT
struct lock_time {
@ -178,6 +187,7 @@ static inline void lockdep_copy_map(struct lockdep_map *to,
struct lock_list {
struct list_head entry;
struct lock_class *class;
struct lock_class *links_to;
struct stack_trace trace;
int distance;
@ -264,10 +274,14 @@ extern void lockdep_reset(void);
extern void lockdep_reset_lock(struct lockdep_map *lock);
extern void lockdep_free_key_range(void *start, unsigned long size);
extern asmlinkage void lockdep_sys_exit(void);
extern void lockdep_set_selftest_task(struct task_struct *task);
extern void lockdep_off(void);
extern void lockdep_on(void);
extern void lockdep_register_key(struct lock_class_key *key);
extern void lockdep_unregister_key(struct lock_class_key *key);
/*
* These methods are used by specific locking variants (spinlocks,
* rwlocks, mutexes and rwsems) to pass init/acquire/release events
@ -394,6 +408,10 @@ static inline void lockdep_on(void)
{
}
static inline void lockdep_set_selftest_task(struct task_struct *task)
{
}
# define lock_acquire(l, s, t, r, c, n, i) do { } while (0)
# define lock_release(l, n, i) do { } while (0)
# define lock_downgrade(l, i) do { } while (0)
@ -425,6 +443,14 @@ static inline void lockdep_on(void)
*/
struct lock_class_key { };
static inline void lockdep_register_key(struct lock_class_key *key)
{
}
static inline void lockdep_unregister_key(struct lock_class_key *key)
{
}
/*
* The lockdep_map takes no space if lockdep is disabled:
*/


@ -29,7 +29,7 @@ static struct percpu_rw_semaphore name = { \
extern int __percpu_down_read(struct percpu_rw_semaphore *, int);
extern void __percpu_up_read(struct percpu_rw_semaphore *);
static inline void percpu_down_read_preempt_disable(struct percpu_rw_semaphore *sem)
static inline void percpu_down_read(struct percpu_rw_semaphore *sem)
{
might_sleep();
@ -47,16 +47,10 @@ static inline void percpu_down_read_preempt_disable(struct percpu_rw_semaphore *
__this_cpu_inc(*sem->read_count);
if (unlikely(!rcu_sync_is_idle(&sem->rss)))
__percpu_down_read(sem, false); /* Unconditional memory barrier */
barrier();
/*
* The barrier() prevents the compiler from
* The preempt_enable() prevents the compiler from
* bleeding the critical section out.
*/
}
static inline void percpu_down_read(struct percpu_rw_semaphore *sem)
{
percpu_down_read_preempt_disable(sem);
preempt_enable();
}
@ -83,13 +77,9 @@ static inline int percpu_down_read_trylock(struct percpu_rw_semaphore *sem)
return ret;
}
static inline void percpu_up_read_preempt_enable(struct percpu_rw_semaphore *sem)
static inline void percpu_up_read(struct percpu_rw_semaphore *sem)
{
/*
* The barrier() prevents the compiler from
* bleeding the critical section out.
*/
barrier();
preempt_disable();
/*
* Same as in percpu_down_read().
*/
@ -102,12 +92,6 @@ static inline void percpu_up_read_preempt_enable(struct percpu_rw_semaphore *sem
rwsem_release(&sem->rw_sem.dep_map, 1, _RET_IP_);
}
static inline void percpu_up_read(struct percpu_rw_semaphore *sem)
{
preempt_disable();
percpu_up_read_preempt_enable(sem);
}
extern void percpu_down_write(struct percpu_rw_semaphore *);
extern void percpu_up_write(struct percpu_rw_semaphore *);
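With the _preempt_disable/_enable variants folded away, the reader side is just the plain pair below (sketch; real conversions are visible in the fs/locks.c hunks earlier in this diff):

static void example_reader(struct percpu_rw_semaphore *sem)
{
	percpu_down_read(sem);
	/* read-side critical section; preemption is re-enabled by down_read */
	percpu_up_read(sem);
}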


@ -51,8 +51,8 @@ static inline void wake_q_init(struct wake_q_head *head)
head->lastp = &head->first;
}
extern void wake_q_add(struct wake_q_head *head,
struct task_struct *task);
extern void wake_q_add(struct wake_q_head *head, struct task_struct *task);
extern void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task);
extern void wake_up_q(struct wake_q_head *head);
#endif /* _LINUX_SCHED_WAKE_Q_H */


@ -390,43 +390,23 @@ extern struct workqueue_struct *system_freezable_wq;
extern struct workqueue_struct *system_power_efficient_wq;
extern struct workqueue_struct *system_freezable_power_efficient_wq;
extern struct workqueue_struct *
__alloc_workqueue_key(const char *fmt, unsigned int flags, int max_active,
struct lock_class_key *key, const char *lock_name, ...) __printf(1, 6);
/**
* alloc_workqueue - allocate a workqueue
* @fmt: printf format for the name of the workqueue
* @flags: WQ_* flags
* @max_active: max in-flight work items, 0 for default
* @args...: args for @fmt
* remaining args: args for @fmt
*
* Allocate a workqueue with the specified parameters. For detailed
* information on WQ_* flags, please refer to
* Documentation/core-api/workqueue.rst.
*
* The __lock_name macro dance is to guarantee that single lock_class_key
* doesn't end up with different names, which isn't allowed by lockdep.
*
* RETURNS:
* Pointer to the allocated workqueue on success, %NULL on failure.
*/
#ifdef CONFIG_LOCKDEP
#define alloc_workqueue(fmt, flags, max_active, args...) \
({ \
static struct lock_class_key __key; \
const char *__lock_name; \
\
__lock_name = "(wq_completion)"#fmt#args; \
\
__alloc_workqueue_key((fmt), (flags), (max_active), \
&__key, __lock_name, ##args); \
})
#else
#define alloc_workqueue(fmt, flags, max_active, args...) \
__alloc_workqueue_key((fmt), (flags), (max_active), \
NULL, NULL, ##args)
#endif
struct workqueue_struct *alloc_workqueue(const char *fmt,
unsigned int flags,
int max_active, ...);
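A caller-side sketch (hypothetical driver code, not from this patch): call sites keep the same shape, but no longer expand a per-callsite lock_class_key behind a macro:

static struct workqueue_struct *example_wq;

static int __init example_init(void)
{
	example_wq = alloc_workqueue("example_wq", WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
	if (!example_wq)
		return -ENOMEM;
	return 0;
}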
/**
* alloc_ordered_workqueue - allocate an ordered workqueue


@ -313,6 +313,15 @@ void cpus_write_unlock(void)
void lockdep_assert_cpus_held(void)
{
/*
* We can't have hotplug operations before userspace starts running,
* and some init codepaths will knowingly not take the hotplug lock.
* This is all valid, so mute lockdep until it makes sense to report
* unheld locks.
*/
if (system_state < SYSTEM_RUNNING)
return;
percpu_rwsem_assert_held(&cpu_hotplug_lock);
}


@ -68,6 +68,7 @@
#include <linux/freezer.h>
#include <linux/memblock.h>
#include <linux/fault-inject.h>
#include <linux/refcount.h>
#include <asm/futex.h>
@ -212,7 +213,7 @@ struct futex_pi_state {
struct rt_mutex pi_mutex;
struct task_struct *owner;
atomic_t refcount;
refcount_t refcount;
union futex_key key;
} __randomize_layout;
@ -321,12 +322,8 @@ static int __init fail_futex_debugfs(void)
if (IS_ERR(dir))
return PTR_ERR(dir);
if (!debugfs_create_bool("ignore-private", mode, dir,
&fail_futex.ignore_private)) {
debugfs_remove_recursive(dir);
return -ENOMEM;
}
debugfs_create_bool("ignore-private", mode, dir,
&fail_futex.ignore_private);
return 0;
}
@ -803,7 +800,7 @@ static int refill_pi_state_cache(void)
INIT_LIST_HEAD(&pi_state->list);
/* pi_mutex gets initialized later */
pi_state->owner = NULL;
atomic_set(&pi_state->refcount, 1);
refcount_set(&pi_state->refcount, 1);
pi_state->key = FUTEX_KEY_INIT;
current->pi_state_cache = pi_state;
@ -823,7 +820,7 @@ static struct futex_pi_state *alloc_pi_state(void)
static void get_pi_state(struct futex_pi_state *pi_state)
{
WARN_ON_ONCE(!atomic_inc_not_zero(&pi_state->refcount));
WARN_ON_ONCE(!refcount_inc_not_zero(&pi_state->refcount));
}
/*
@ -835,7 +832,7 @@ static void put_pi_state(struct futex_pi_state *pi_state)
if (!pi_state)
return;
if (!atomic_dec_and_test(&pi_state->refcount))
if (!refcount_dec_and_test(&pi_state->refcount))
return;
/*
@ -865,7 +862,7 @@ static void put_pi_state(struct futex_pi_state *pi_state)
* refcount is at 0 - put it back to 1.
*/
pi_state->owner = NULL;
atomic_set(&pi_state->refcount, 1);
refcount_set(&pi_state->refcount, 1);
current->pi_state_cache = pi_state;
}
}
@ -908,7 +905,7 @@ void exit_pi_state_list(struct task_struct *curr)
* In that case; drop the locks to let put_pi_state() make
* progress and retry the loop.
*/
if (!atomic_inc_not_zero(&pi_state->refcount)) {
if (!refcount_inc_not_zero(&pi_state->refcount)) {
raw_spin_unlock_irq(&curr->pi_lock);
cpu_relax();
raw_spin_lock_irq(&curr->pi_lock);
@ -1064,7 +1061,7 @@ static int attach_to_pi_state(u32 __user *uaddr, u32 uval,
* and futex_wait_requeue_pi() as it cannot go to 0 and consequently
* free pi_state before we can take a reference ourselves.
*/
WARN_ON(!atomic_read(&pi_state->refcount));
WARN_ON(!refcount_read(&pi_state->refcount));
/*
* Now that we have a pi_state, we can acquire wait_lock
@ -1467,8 +1464,7 @@ static void mark_wake_futex(struct wake_q_head *wake_q, struct futex_q *q)
* Queue the task for later wakeup for after we've released
* the hb->lock. wake_q_add() grabs reference to p.
*/
wake_q_add(wake_q, p);
put_task_struct(p);
wake_q_add_safe(wake_q, p);
}
/*

File diff suppressed because it is too large.


@ -22,6 +22,10 @@ enum lock_usage_bit {
LOCK_USAGE_STATES
};
#define LOCK_USAGE_READ_MASK 1
#define LOCK_USAGE_DIR_MASK 2
#define LOCK_USAGE_STATE_MASK (~(LOCK_USAGE_READ_MASK | LOCK_USAGE_DIR_MASK))
/*
* Usage-state bitmasks:
*/
@ -96,7 +100,8 @@ struct lock_class *lock_chain_get_class(struct lock_chain *chain, int i);
extern unsigned long nr_lock_classes;
extern unsigned long nr_list_entries;
extern unsigned long nr_lock_chains;
long lockdep_next_lockchain(long i);
unsigned long lock_chain_count(void);
extern int nr_chain_hlocks;
extern unsigned long nr_stack_trace_entries;


@ -104,18 +104,18 @@ static const struct seq_operations lockdep_ops = {
#ifdef CONFIG_PROVE_LOCKING
static void *lc_start(struct seq_file *m, loff_t *pos)
{
if (*pos < 0)
return NULL;
if (*pos == 0)
return SEQ_START_TOKEN;
if (*pos - 1 < nr_lock_chains)
return lock_chains + (*pos - 1);
return NULL;
return lock_chains + (*pos - 1);
}
static void *lc_next(struct seq_file *m, void *v, loff_t *pos)
{
(*pos)++;
*pos = lockdep_next_lockchain(*pos - 1) + 1;
return lc_start(m, pos);
}
@ -268,7 +268,7 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
#ifdef CONFIG_PROVE_LOCKING
seq_printf(m, " dependency chains: %11lu [max: %lu]\n",
nr_lock_chains, MAX_LOCKDEP_CHAINS);
lock_chain_count(), MAX_LOCKDEP_CHAINS);
seq_printf(m, " dependency chain hlocks: %11d [max: %lu]\n",
nr_chain_hlocks, MAX_LOCKDEP_CHAIN_HLOCKS);
#endif


@ -124,9 +124,6 @@ static inline __pure u32 encode_tail(int cpu, int idx)
{
u32 tail;
#ifdef CONFIG_DEBUG_SPINLOCK
BUG_ON(idx > 3);
#endif
tail = (cpu + 1) << _Q_TAIL_CPU_OFFSET;
tail |= idx << _Q_TAIL_IDX_OFFSET; /* assume < 4 */
@ -412,12 +409,28 @@ pv_queue:
idx = node->count++;
tail = encode_tail(smp_processor_id(), idx);
/*
* 4 nodes are allocated based on the assumption that there will
* not be nested NMIs taking spinlocks. That may not be true in
* some architectures even though the chance of needing more than
* 4 nodes will still be extremely unlikely. When that happens,
* we fall back to spinning on the lock directly without using
* any MCS node. This is not the most elegant solution, but is
* simple enough.
*/
if (unlikely(idx >= MAX_NODES)) {
qstat_inc(qstat_lock_no_node, true);
while (!queued_spin_trylock(lock))
cpu_relax();
goto release;
}
node = grab_mcs_node(node, idx);
/*
* Keep counts of non-zero index values:
*/
qstat_inc(qstat_lock_idx1 + idx - 1, idx);
qstat_inc(qstat_lock_use_node2 + idx - 1, idx);
/*
* Ensure that we increment the head node->count before initialising


@ -30,6 +30,13 @@
* pv_wait_node - # of vCPU wait's at a non-head queue node
* lock_pending - # of locking operations via pending code
* lock_slowpath - # of locking operations via MCS lock queue
* lock_use_node2 - # of locking operations that use 2nd per-CPU node
* lock_use_node3 - # of locking operations that use 3rd per-CPU node
* lock_use_node4 - # of locking operations that use 4th per-CPU node
* lock_no_node - # of locking operations without using per-CPU node
*
* Subtracting lock_use_node[234] from lock_slowpath will give you
* lock_use_node1.
*
* Writing to the "reset_counters" file will reset all the above counter
* values.
@ -55,9 +62,10 @@ enum qlock_stats {
qstat_pv_wait_node,
qstat_lock_pending,
qstat_lock_slowpath,
qstat_lock_idx1,
qstat_lock_idx2,
qstat_lock_idx3,
qstat_lock_use_node2,
qstat_lock_use_node3,
qstat_lock_use_node4,
qstat_lock_no_node,
qstat_num, /* Total number of statistical counters */
qstat_reset_cnts = qstat_num,
};
@ -85,9 +93,10 @@ static const char * const qstat_names[qstat_num + 1] = {
[qstat_pv_wait_node] = "pv_wait_node",
[qstat_lock_pending] = "lock_pending",
[qstat_lock_slowpath] = "lock_slowpath",
[qstat_lock_idx1] = "lock_index1",
[qstat_lock_idx2] = "lock_index2",
[qstat_lock_idx3] = "lock_index3",
[qstat_lock_use_node2] = "lock_use_node2",
[qstat_lock_use_node3] = "lock_use_node3",
[qstat_lock_use_node4] = "lock_use_node4",
[qstat_lock_no_node] = "lock_no_node",
[qstat_reset_cnts] = "reset_counters",
};


@ -211,9 +211,7 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
* Ensure issuing the wakeup (either by us or someone else)
* after setting the reader waiter to nil.
*/
wake_q_add(wake_q, tsk);
/* wake_q_add() already take the task ref */
put_task_struct(tsk);
wake_q_add_safe(wake_q, tsk);
}
adjustment = woken * RWSEM_ACTIVE_READ_BIAS - adjustment;


@ -396,6 +396,30 @@ static bool set_nr_if_polling(struct task_struct *p)
#endif
#endif
static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
{
struct wake_q_node *node = &task->wake_q;
/*
* Atomically grab the task, if ->wake_q is !nil already it means
* its already queued (either by us or someone else) and will get the
* wakeup due to that.
*
* In order to ensure that a pending wakeup will observe our pending
* state, even in the failed case, an explicit smp_mb() must be used.
*/
smp_mb__before_atomic();
if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
return false;
/*
* The head is context local, there can be no concurrency.
*/
*head->lastp = node;
head->lastp = &node->next;
return true;
}
/**
* wake_q_add() - queue a wakeup for 'later' waking.
* @head: the wake_q_head to add @task to
@ -410,27 +434,31 @@ static bool set_nr_if_polling(struct task_struct *p)
*/
void wake_q_add(struct wake_q_head *head, struct task_struct *task)
{
struct wake_q_node *node = &task->wake_q;
if (__wake_q_add(head, task))
get_task_struct(task);
}
/*
* Atomically grab the task, if ->wake_q is !nil already it means
* its already queued (either by us or someone else) and will get the
* wakeup due to that.
*
* In order to ensure that a pending wakeup will observe our pending
* state, even in the failed case, an explicit smp_mb() must be used.
*/
smp_mb__before_atomic();
if (cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL))
return;
get_task_struct(task);
/*
* The head is context local, there can be no concurrency.
*/
*head->lastp = node;
head->lastp = &node->next;
/**
* wake_q_add_safe() - safely queue a wakeup for 'later' waking.
* @head: the wake_q_head to add @task to
* @task: the task to queue for 'later' wakeup
*
* Queue a task for later wakeup, most likely by the wake_up_q() call in the
* same context, _HOWEVER_ this is not guaranteed, the wakeup can come
* instantly.
*
* This function must be used as-if it were wake_up_process(); IOW the task
* must be ready to be woken at this location.
*
* This function is essentially a task-safe equivalent to wake_q_add(). Callers
* that already hold reference to @task can call the 'safe' version and trust
* wake_q to do the right thing depending whether or not the @task is already
* queued for wakeup.
*/
void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
{
if (!__wake_q_add(head, task))
put_task_struct(task);
}
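The caller-side conversion this enables, mirroring the futex and rwsem hunks earlier in this diff (sketch):

/* before: wake_q_add() takes its own reference, so the caller drops its own */
wake_q_add(wake_q, p);
put_task_struct(p);

/* after: the caller's existing reference is handed over to the wake_q */
wake_q_add_safe(wake_q, p);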
void wake_up_q(struct wake_q_head *head)
@ -5866,14 +5894,11 @@ void __init sched_init_smp(void)
/*
* There's no userspace yet to cause hotplug operations; hence all the
* CPU masks are stable and all blatant races in the below code cannot
* happen. The hotplug lock is nevertheless taken to satisfy lockdep,
* but there won't be any contention on it.
* happen.
*/
cpus_read_lock();
mutex_lock(&sched_domains_mutex);
sched_init_domains(cpu_active_mask);
mutex_unlock(&sched_domains_mutex);
cpus_read_unlock();
/* Move init over to a non-isolated CPU */
if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_FLAG_DOMAIN)) < 0)


@ -259,6 +259,8 @@ struct workqueue_struct {
struct wq_device *wq_dev; /* I: for sysfs interface */
#endif
#ifdef CONFIG_LOCKDEP
char *lock_name;
struct lock_class_key key;
struct lockdep_map lockdep_map;
#endif
char name[WQ_NAME_LEN]; /* I: workqueue name */
@ -3337,11 +3339,49 @@ static int init_worker_pool(struct worker_pool *pool)
return 0;
}
#ifdef CONFIG_LOCKDEP
static void wq_init_lockdep(struct workqueue_struct *wq)
{
char *lock_name;
lockdep_register_key(&wq->key);
lock_name = kasprintf(GFP_KERNEL, "%s%s", "(wq_completion)", wq->name);
if (!lock_name)
lock_name = wq->name;
lockdep_init_map(&wq->lockdep_map, lock_name, &wq->key, 0);
}
static void wq_unregister_lockdep(struct workqueue_struct *wq)
{
lockdep_unregister_key(&wq->key);
}
static void wq_free_lockdep(struct workqueue_struct *wq)
{
if (wq->lock_name != wq->name)
kfree(wq->lock_name);
}
#else
static void wq_init_lockdep(struct workqueue_struct *wq)
{
}
static void wq_unregister_lockdep(struct workqueue_struct *wq)
{
}
static void wq_free_lockdep(struct workqueue_struct *wq)
{
}
#endif
static void rcu_free_wq(struct rcu_head *rcu)
{
struct workqueue_struct *wq =
container_of(rcu, struct workqueue_struct, rcu);
wq_free_lockdep(wq);
if (!(wq->flags & WQ_UNBOUND))
free_percpu(wq->cpu_pwqs);
else
@ -3532,8 +3572,10 @@ static void pwq_unbound_release_workfn(struct work_struct *work)
* If we're the last pwq going away, @wq is already dead and no one
* is gonna access it anymore. Schedule RCU free.
*/
if (is_last)
if (is_last) {
wq_unregister_lockdep(wq);
call_rcu(&wq->rcu, rcu_free_wq);
}
}
/**
@ -4067,11 +4109,9 @@ static int init_rescuer(struct workqueue_struct *wq)
return 0;
}
struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
unsigned int flags,
int max_active,
struct lock_class_key *key,
const char *lock_name, ...)
struct workqueue_struct *alloc_workqueue(const char *fmt,
unsigned int flags,
int max_active, ...)
{
size_t tbl_size = 0;
va_list args;
@ -4106,7 +4146,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
goto err_free_wq;
}
va_start(args, lock_name);
va_start(args, max_active);
vsnprintf(wq->name, sizeof(wq->name), fmt, args);
va_end(args);
@ -4123,7 +4163,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
INIT_LIST_HEAD(&wq->flusher_overflow);
INIT_LIST_HEAD(&wq->maydays);
lockdep_init_map(&wq->lockdep_map, lock_name, key, 0);
wq_init_lockdep(wq);
INIT_LIST_HEAD(&wq->list);
if (alloc_and_link_pwqs(wq) < 0)
@ -4161,7 +4201,7 @@ err_destroy:
destroy_workqueue(wq);
return NULL;
}
EXPORT_SYMBOL_GPL(__alloc_workqueue_key);
EXPORT_SYMBOL_GPL(alloc_workqueue);
/**
* destroy_workqueue - safely terminate a workqueue
@ -4214,6 +4254,7 @@ void destroy_workqueue(struct workqueue_struct *wq)
kthread_stop(wq->rescuer->task);
if (!(wq->flags & WQ_UNBOUND)) {
wq_unregister_lockdep(wq);
/*
* The base ref is never dropped on per-cpu pwqs. Directly
* schedule RCU free.


@ -1989,6 +1989,7 @@ void locking_selftest(void)
init_shared_classes();
debug_locks_silent = !debug_locks_verbose;
lockdep_set_selftest_task(current);
DO_TESTCASE_6R("A-A deadlock", AA);
DO_TESTCASE_6R("A-B-B-A deadlock", ABBA);
@ -2097,5 +2098,6 @@ void locking_selftest(void)
printk("---------------------------------\n");
debug_locks = 1;
}
lockdep_set_selftest_task(NULL);
debug_locks_silent = 0;
}


@ -33,6 +33,9 @@
* Note that the allocator is responsible for ordering things between free()
* and alloc().
*
* The decrements dec_and_test() and sub_and_test() also provide acquire
* ordering on success.
*
*/
#include <linux/mutex.h>
@ -164,8 +167,8 @@ EXPORT_SYMBOL(refcount_inc_checked);
* at UINT_MAX.
*
* Provides release memory ordering, such that prior loads and stores are done
* before, and provides a control dependency such that free() must come after.
* See the comment on top.
* before, and provides an acquire ordering on success such that free()
* must come after.
*
* Use of this function is not recommended for the normal reference counting
* use case in which references are taken and released one at a time. In these
@ -190,7 +193,12 @@ bool refcount_sub_and_test_checked(unsigned int i, refcount_t *r)
} while (!atomic_try_cmpxchg_release(&r->refs, &val, new));
return !new;
if (!new) {
smp_acquire__after_ctrl_dep();
return true;
}
return false;
}
EXPORT_SYMBOL(refcount_sub_and_test_checked);
@ -202,8 +210,8 @@ EXPORT_SYMBOL(refcount_sub_and_test_checked);
* decrement when saturated at UINT_MAX.
*
* Provides release memory ordering, such that prior loads and stores are done
* before, and provides a control dependency such that free() must come after.
* See the comment on top.
* before, and provides an acquire ordering on success such that free()
* must come after.
*
* Return: true if the resulting refcount is 0, false otherwise
*/
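For illustration, these guarantees matter on the final-reference path, where the successful decrement must be ordered before the object is torn down. A minimal sketch (hypothetical struct foo and foo_put(), not part of this diff):

#include <linux/refcount.h>
#include <linux/slab.h>

struct foo {
	refcount_t refs;
	/* payload protected by the reference count */
};

static void foo_put(struct foo *f)
{
	/*
	 * The acquire ordering on the successful (final) decrement ensures
	 * all earlier accesses to *f are complete before kfree() runs.
	 */
	if (refcount_dec_and_test(&f->refs))
		kfree(f);
}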

scripts/atomic/atomic-tbl.sh (new executable file)

@ -0,0 +1,186 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# helpers for dealing with atomics.tbl
#meta_in(meta, match)
meta_in()
{
case "$1" in
[$2]) return 0;;
esac
return 1
}
#meta_has_ret(meta)
meta_has_ret()
{
meta_in "$1" "bBiIfFlR"
}
#meta_has_acquire(meta)
meta_has_acquire()
{
meta_in "$1" "BFIlR"
}
#meta_has_release(meta)
meta_has_release()
{
meta_in "$1" "BFIRs"
}
#meta_has_relaxed(meta)
meta_has_relaxed()
{
meta_in "$1" "BFIR"
}
#find_fallback_template(pfx, name, sfx, order)
find_fallback_template()
{
local pfx="$1"; shift
local name="$1"; shift
local sfx="$1"; shift
local order="$1"; shift
local base=""
local file=""
# We may have fallbacks for a specific case (e.g. read_acquire()), or
# an entire class, e.g. *inc*().
#
# Start at the most specific, and fall back to the most general. Once
# we find a specific fallback, don't bother looking for more.
for base in "${pfx}${name}${sfx}${order}" "${name}"; do
file="${ATOMICDIR}/fallbacks/${base}"
if [ -f "${file}" ]; then
printf "${file}"
break
fi
done
}
#gen_ret_type(meta, int)
gen_ret_type() {
local meta="$1"; shift
local int="$1"; shift
case "${meta}" in
[sv]) printf "void";;
[bB]) printf "bool";;
[aiIfFlR]) printf "${int}";;
esac
}
#gen_ret_stmt(meta)
gen_ret_stmt()
{
if meta_has_ret "${meta}"; then
printf "return ";
fi
}
# gen_param_name(arg)
gen_param_name()
{
# strip off the leading 'c' for 'cv'
local name="${1#c}"
printf "${name#*:}"
}
# gen_param_type(arg, int, atomic)
gen_param_type()
{
local type="${1%%:*}"; shift
local int="$1"; shift
local atomic="$1"; shift
case "${type}" in
i) type="${int} ";;
p) type="${int} *";;
v) type="${atomic}_t *";;
cv) type="const ${atomic}_t *";;
esac
printf "${type}"
}
#gen_param(arg, int, atomic)
gen_param()
{
local arg="$1"; shift
local int="$1"; shift
local atomic="$1"; shift
local name="$(gen_param_name "${arg}")"
local type="$(gen_param_type "${arg}" "${int}" "${atomic}")"
printf "${type}${name}"
}
#gen_params(int, atomic, arg...)
gen_params()
{
local int="$1"; shift
local atomic="$1"; shift
while [ "$#" -gt 0 ]; do
gen_param "$1" "${int}" "${atomic}"
[ "$#" -gt 1 ] && printf ", "
shift;
done
}
#gen_args(arg...)
gen_args()
{
while [ "$#" -gt 0 ]; do
printf "$(gen_param_name "$1")"
[ "$#" -gt 1 ] && printf ", "
shift;
done
}
#gen_proto_order_variants(meta, pfx, name, sfx, ...)
gen_proto_order_variants()
{
local meta="$1"; shift
local pfx="$1"; shift
local name="$1"; shift
local sfx="$1"; shift
gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
if meta_has_acquire "${meta}"; then
gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
fi
if meta_has_release "${meta}"; then
gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
fi
if meta_has_relaxed "${meta}"; then
gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_relaxed" "$@"
fi
}
#gen_proto_variants(meta, name, ...)
gen_proto_variants()
{
local meta="$1"; shift
local name="$1"; shift
local pfx=""
local sfx=""
meta_in "${meta}" "fF" && pfx="fetch_"
meta_in "${meta}" "R" && sfx="_return"
gen_proto_order_variants "${meta}" "${pfx}" "${name}" "${sfx}" "$@"
}
#gen_proto(meta, ...)
gen_proto() {
local meta="$1"; shift
for m in $(echo "${meta}" | grep -o .); do
gen_proto_variants "${m}" "$@"
done
}

scripts/atomic/atomics.tbl (new executable file)

@ -0,0 +1,41 @@
# name meta args...
#
# Where meta contains a string of variants to generate.
# Upper-case implies _{acquire,release,relaxed} variants.
# Valid meta values are:
# * B/b - bool: returns bool
# * v - void: returns void
# * I/i - int: returns base type
# * R - return: returns base type (has _return variants)
# * F/f - fetch: returns base type (has fetch_ variants)
# * l - load: returns base type (has _acquire order variant)
# * s - store: returns void (has _release order variant)
#
# Where args contains list of type[:name], where type is:
# * cv - const pointer to atomic base type (atomic_t/atomic64_t/atomic_long_t)
# * v - pointer to atomic base type (atomic_t/atomic64_t/atomic_long_t)
# * i - base type (int/s64/long)
# * p - pointer to base type (int/s64/long)
#
read l cv
set s v i
add vRF i v
sub vRF i v
inc vRF v
dec vRF v
and vF i v
andnot vF i v
or vF i v
xor vF i v
xchg I v i
cmpxchg I v i:old i:new
try_cmpxchg B v p:old i:new
sub_and_test b i v
dec_and_test b v
inc_and_test b v
add_negative b i v
add_unless fb v i:a i:u
inc_not_zero b v
inc_unless_negative b v
dec_unless_positive b v
dec_if_positive i v

scripts/atomic/check-atomics.sh (new executable file)

@ -0,0 +1,33 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
#
# Check if atomic headers are up-to-date
ATOMICDIR=$(dirname $0)
ATOMICTBL=${ATOMICDIR}/atomics.tbl
LINUXDIR=${ATOMICDIR}/../..
echo '' | sha1sum - > /dev/null 2>&1
if [ $? -ne 0 ]; then
printf "sha1sum not available, skipping atomic header checks.\n"
exit 0
fi
cat <<EOF |
asm-generic/atomic-instrumented.h
asm-generic/atomic-long.h
linux/atomic-fallback.h
EOF
while read header; do
OLDSUM="$(tail -n 1 ${LINUXDIR}/include/${header})"
OLDSUM="${OLDSUM#// }"
NEWSUM="$(head -n -1 ${LINUXDIR}/include/${header} | sha1sum)"
NEWSUM="${NEWSUM%% *}"
if [ "${OLDSUM}" != "${NEWSUM}" ]; then
printf "warning: generated include/${header} has been modified.\n"
fi
done
exit 0


@ -0,0 +1,9 @@
cat <<EOF
static inline ${ret}
${atomic}_${pfx}${name}${sfx}_acquire(${params})
{
${ret} ret = ${atomic}_${pfx}${name}${sfx}_relaxed(${args});
__atomic_acquire_fence();
return ret;
}
EOF
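Instantiated for, say, fetch_add on atomic_t (ret "int", pfx "fetch_", name "add", empty sfx), the template above expands to roughly:

static inline int
atomic_fetch_add_acquire(int i, atomic_t *v)
{
	int ret = atomic_fetch_add_relaxed(i, v);
	__atomic_acquire_fence();
	return ret;
}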


@ -0,0 +1,16 @@
cat <<EOF
/**
* ${atomic}_add_negative - add and test if negative
* @i: integer value to add
* @v: pointer of type ${atomic}_t
*
* Atomically adds @i to @v and returns true
 * if the result is negative, or false when the
 * result is greater than or equal to zero.
*/
static inline bool
${atomic}_add_negative(${int} i, ${atomic}_t *v)
{
return ${atomic}_add_return(i, v) < 0;
}
EOF


@ -0,0 +1,16 @@
cat << EOF
/**
* ${atomic}_add_unless - add unless the number is already a given value
* @v: pointer of type ${atomic}_t
* @a: the amount to add to v...
* @u: ...unless v is equal to u.
*
* Atomically adds @a to @v, if @v was not already @u.
* Returns true if the addition was done.
*/
static inline bool
${atomic}_add_unless(${atomic}_t *v, ${int} a, ${int} u)
{
return ${atomic}_fetch_add_unless(v, a, u) != u;
}
EOF


@ -0,0 +1,7 @@
cat <<EOF
static inline ${ret}
${atomic}_${pfx}andnot${sfx}${order}(${int} i, ${atomic}_t *v)
{
${retstmt}${atomic}_${pfx}and${sfx}${order}(~i, v);
}
EOF

scripts/atomic/fallbacks/dec (new executable file)

@ -0,0 +1,7 @@
cat <<EOF
static inline ${ret}
${atomic}_${pfx}dec${sfx}${order}(${atomic}_t *v)
{
${retstmt}${atomic}_${pfx}sub${sfx}${order}(1, v);
}
EOF


@ -0,0 +1,15 @@
cat <<EOF
/**
* ${atomic}_dec_and_test - decrement and test
* @v: pointer of type ${atomic}_t
*
* Atomically decrements @v by 1 and
* returns true if the result is 0, or false for all other
* cases.
*/
static inline bool
${atomic}_dec_and_test(${atomic}_t *v)
{
return ${atomic}_dec_return(v) == 0;
}
EOF


@ -0,0 +1,15 @@
cat <<EOF
static inline ${ret}
${atomic}_dec_if_positive(${atomic}_t *v)
{
${int} dec, c = ${atomic}_read(v);
do {
dec = c - 1;
if (unlikely(dec < 0))
break;
} while (!${atomic}_try_cmpxchg(v, &c, dec));
return dec;
}
EOF
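A typical consumer of dec_if_positive() is a counting-resource fast path; an illustrative (hypothetical) helper, relying on the fact that the new value is returned and is negative when no decrement took place:

#include <linux/atomic.h>
#include <linux/types.h>

/* Take one token if any remain; returns true on success. */
static bool try_take_token(atomic_t *tokens)
{
	return atomic_dec_if_positive(tokens) >= 0;
}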


@ -0,0 +1,14 @@
cat <<EOF
static inline bool
${atomic}_dec_unless_positive(${atomic}_t *v)
{
${int} c = ${atomic}_read(v);
do {
if (unlikely(c > 0))
return false;
} while (!${atomic}_try_cmpxchg(v, &c, c - 1));
return true;
}
EOF

scripts/atomic/fallbacks/fence (new executable file)

@ -0,0 +1,11 @@
cat <<EOF
static inline ${ret}
${atomic}_${pfx}${name}${sfx}(${params})
{
${ret} ret;
__atomic_pre_full_fence();
ret = ${atomic}_${pfx}${name}${sfx}_relaxed(${args});
__atomic_post_full_fence();
return ret;
}
EOF
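Likewise, the fully ordered fallback brackets the relaxed operation with a fence on each side; for inc_return it expands to roughly:

static inline int
atomic_inc_return(atomic_t *v)
{
	int ret;
	__atomic_pre_full_fence();
	ret = atomic_inc_return_relaxed(v);
	__atomic_post_full_fence();
	return ret;
}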


@ -0,0 +1,23 @@
cat << EOF
/**
* ${atomic}_fetch_add_unless - add unless the number is already a given value
* @v: pointer of type ${atomic}_t
* @a: the amount to add to v...
* @u: ...unless v is equal to u.
*
* Atomically adds @a to @v, so long as @v was not already @u.
 * Returns the original value of @v.
*/
static inline ${int}
${atomic}_fetch_add_unless(${atomic}_t *v, ${int} a, ${int} u)
{
${int} c = ${atomic}_read(v);
do {
if (unlikely(c == u))
break;
} while (!${atomic}_try_cmpxchg(v, &c, c + a));
return c;
}
EOF

scripts/atomic/fallbacks/inc (new executable file)

@ -0,0 +1,7 @@
cat <<EOF
static inline ${ret}
${atomic}_${pfx}inc${sfx}${order}(${atomic}_t *v)
{
${retstmt}${atomic}_${pfx}add${sfx}${order}(1, v);
}
EOF


@ -0,0 +1,15 @@
cat <<EOF
/**
* ${atomic}_inc_and_test - increment and test
* @v: pointer of type ${atomic}_t
*
* Atomically increments @v by 1
* and returns true if the result is zero, or false for all
* other cases.
*/
static inline bool
${atomic}_inc_and_test(${atomic}_t *v)
{
return ${atomic}_inc_return(v) == 0;
}
EOF


@ -0,0 +1,14 @@
cat <<EOF
/**
* ${atomic}_inc_not_zero - increment unless the number is zero
* @v: pointer of type ${atomic}_t
*
* Atomically increments @v by 1, if @v is non-zero.
* Returns true if the increment was done.
*/
static inline bool
${atomic}_inc_not_zero(${atomic}_t *v)
{
return ${atomic}_add_unless(v, 1, 0);
}
EOF


@ -0,0 +1,14 @@
cat <<EOF
static inline bool
${atomic}_inc_unless_negative(${atomic}_t *v)
{
${int} c = ${atomic}_read(v);
do {
if (unlikely(c < 0))
return false;
} while (!${atomic}_try_cmpxchg(v, &c, c + 1));
return true;
}
EOF


@ -0,0 +1,7 @@
cat <<EOF
static inline ${ret}
${atomic}_read_acquire(const ${atomic}_t *v)
{
return smp_load_acquire(&(v)->counter);
}
EOF


@ -0,0 +1,8 @@
cat <<EOF
static inline ${ret}
${atomic}_${pfx}${name}${sfx}_release(${params})
{
__atomic_release_fence();
${retstmt}${atomic}_${pfx}${name}${sfx}_relaxed(${args});
}
EOF


@ -0,0 +1,7 @@
cat <<EOF
static inline void
${atomic}_set_release(${atomic}_t *v, ${int} i)
{
smp_store_release(&(v)->counter, i);
}
EOF


@ -0,0 +1,16 @@
cat <<EOF
/**
* ${atomic}_sub_and_test - subtract value from variable and test result
* @i: integer value to subtract
* @v: pointer of type ${atomic}_t
*
* Atomically subtracts @i from @v and returns
* true if the result is zero, or false for all
* other cases.
*/
static inline bool
${atomic}_sub_and_test(${int} i, ${atomic}_t *v)
{
return ${atomic}_sub_return(i, v) == 0;
}
EOF


@ -0,0 +1,11 @@
cat <<EOF
static inline bool
${atomic}_try_cmpxchg${order}(${atomic}_t *v, ${int} *old, ${int} new)
{
${int} r, o = *old;
r = ${atomic}_cmpxchg${order}(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
}
EOF
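Because this fallback writes the observed value back through *old on failure, retry loops need no explicit re-read of the atomic. An illustrative (hypothetical) saturating increment built on it:

#include <linux/atomic.h>
#include <linux/kernel.h>

static void atomic_inc_saturating(atomic_t *v)
{
	int old = atomic_read(v);

	do {
		if (old == INT_MAX)
			return;	/* already saturated */
	} while (!atomic_try_cmpxchg(v, &old, old + 1));
}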


@ -0,0 +1,181 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
ATOMICDIR=$(dirname $0)
. ${ATOMICDIR}/atomic-tbl.sh
#gen_template_fallback(template, meta, pfx, name, sfx, order, atomic, int, args...)
gen_template_fallback()
{
local template="$1"; shift
local meta="$1"; shift
local pfx="$1"; shift
local name="$1"; shift
local sfx="$1"; shift
local order="$1"; shift
local atomic="$1"; shift
local int="$1"; shift
local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
local ret="$(gen_ret_type "${meta}" "${int}")"
local retstmt="$(gen_ret_stmt "${meta}")"
local params="$(gen_params "${int}" "${atomic}" "$@")"
local args="$(gen_args "$@")"
if [ ! -z "${template}" ]; then
printf "#ifndef ${atomicname}\n"
. ${template}
printf "#define ${atomicname} ${atomicname}\n"
printf "#endif\n\n"
fi
}
#gen_proto_fallback(meta, pfx, name, sfx, order, atomic, int, args...)
gen_proto_fallback()
{
local meta="$1"; shift
local pfx="$1"; shift
local name="$1"; shift
local sfx="$1"; shift
local order="$1"; shift
local tmpl="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "${order}")"
gen_template_fallback "${tmpl}" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@"
}
#gen_basic_fallbacks(basename)
gen_basic_fallbacks()
{
local basename="$1"; shift
cat << EOF
#define ${basename}_acquire ${basename}
#define ${basename}_release ${basename}
#define ${basename}_relaxed ${basename}
EOF
}
#gen_proto_order_variants(meta, pfx, name, sfx, atomic, int, args...)
gen_proto_order_variants()
{
local meta="$1"; shift
local pfx="$1"; shift
local name="$1"; shift
local sfx="$1"; shift
local atomic="$1"
local basename="${atomic}_${pfx}${name}${sfx}"
local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "${order}")"
# If we don't have relaxed atomics, then we don't bother with ordering fallbacks
# read_acquire and set_release need to be templated, though
if ! meta_has_relaxed "${meta}"; then
gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
if meta_has_acquire "${meta}"; then
gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
fi
if meta_has_release "${meta}"; then
gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
fi
return
fi
printf "#ifndef ${basename}_relaxed\n"
if [ ! -z "${template}" ]; then
printf "#ifdef ${basename}\n"
fi
gen_basic_fallbacks "${basename}"
if [ ! -z "${template}" ]; then
printf "#endif /* ${atomic}_${pfx}${name}${sfx} */\n\n"
gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_relaxed" "$@"
fi
printf "#else /* ${basename}_relaxed */\n\n"
gen_template_fallback "${ATOMICDIR}/fallbacks/acquire" "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
gen_template_fallback "${ATOMICDIR}/fallbacks/release" "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
gen_template_fallback "${ATOMICDIR}/fallbacks/fence" "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
printf "#endif /* ${basename}_relaxed */\n\n"
}
gen_xchg_fallbacks()
{
local xchg="$1"; shift
cat <<EOF
#ifndef ${xchg}_relaxed
#define ${xchg}_relaxed ${xchg}
#define ${xchg}_acquire ${xchg}
#define ${xchg}_release ${xchg}
#else /* ${xchg}_relaxed */
#ifndef ${xchg}_acquire
#define ${xchg}_acquire(...) \\
__atomic_op_acquire(${xchg}, __VA_ARGS__)
#endif
#ifndef ${xchg}_release
#define ${xchg}_release(...) \\
__atomic_op_release(${xchg}, __VA_ARGS__)
#endif
#ifndef ${xchg}
#define ${xchg}(...) \\
__atomic_op_fence(${xchg}, __VA_ARGS__)
#endif
#endif /* ${xchg}_relaxed */
EOF
}
cat << EOF
// SPDX-License-Identifier: GPL-2.0
// Generated by $0
// DO NOT MODIFY THIS FILE DIRECTLY
#ifndef _LINUX_ATOMIC_FALLBACK_H
#define _LINUX_ATOMIC_FALLBACK_H
EOF
for xchg in "xchg" "cmpxchg" "cmpxchg64"; do
gen_xchg_fallbacks "${xchg}"
done
grep '^[a-z]' "$1" | while read name meta args; do
gen_proto "${meta}" "${name}" "atomic" "int" ${args}
done
cat <<EOF
#define atomic_cond_read_acquire(v, c) smp_cond_load_acquire(&(v)->counter, (c))
#define atomic_cond_read_relaxed(v, c) smp_cond_load_relaxed(&(v)->counter, (c))
#ifdef CONFIG_GENERIC_ATOMIC64
#include <asm-generic/atomic64.h>
#endif
EOF
grep '^[a-z]' "$1" | while read name meta args; do
gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
done
cat <<EOF
#define atomic64_cond_read_acquire(v, c) smp_cond_load_acquire(&(v)->counter, (c))
#define atomic64_cond_read_relaxed(v, c) smp_cond_load_relaxed(&(v)->counter, (c))
#endif /* _LINUX_ATOMIC_FALLBACK_H */
EOF


@ -0,0 +1,182 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
ATOMICDIR=$(dirname $0)
. ${ATOMICDIR}/atomic-tbl.sh
#gen_param_check(arg)
gen_param_check()
{
local arg="$1"; shift
local type="${arg%%:*}"
local name="$(gen_param_name "${arg}")"
local rw="write"
case "${type#c}" in
i) return;;
esac
# We don't write to constant parameters
[ ${type#c} != ${type} ] && rw="read"
printf "\tkasan_check_${rw}(${name}, sizeof(*${name}));\n"
}
#gen_params_checks(arg...)
gen_params_checks()
{
while [ "$#" -gt 0 ]; do
gen_param_check "$1"
shift;
done
}
# gen_guard(meta, atomic, pfx, name, sfx, order)
gen_guard()
{
local meta="$1"; shift
local atomic="$1"; shift
local pfx="$1"; shift
local name="$1"; shift
local sfx="$1"; shift
local order="$1"; shift
local atomicname="arch_${atomic}_${pfx}${name}${sfx}${order}"
local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "${order}")"
# We definitely need a preprocessor symbol for this atomic if it is an
# ordering variant, or if there's a generic fallback.
if [ ! -z "${order}" ] || [ ! -z "${template}" ]; then
printf "defined(${atomicname})"
return
fi
# If this is a base variant, but a relaxed variant *may* exist, then we
# only have a preprocessor symbol if the relaxed variant isn't defined
if meta_has_relaxed "${meta}"; then
printf "!defined(${atomicname}_relaxed) || defined(${atomicname})"
fi
}
#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, arg...)
gen_proto_order_variant()
{
local meta="$1"; shift
local pfx="$1"; shift
local name="$1"; shift
local sfx="$1"; shift
local order="$1"; shift
local atomic="$1"; shift
local int="$1"; shift
local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
local guard="$(gen_guard "${meta}" "${atomic}" "${pfx}" "${name}" "${sfx}" "${order}")"
local ret="$(gen_ret_type "${meta}" "${int}")"
local params="$(gen_params "${int}" "${atomic}" "$@")"
local checks="$(gen_params_checks "$@")"
local args="$(gen_args "$@")"
local retstmt="$(gen_ret_stmt "${meta}")"
[ ! -z "${guard}" ] && printf "#if ${guard}\n"
cat <<EOF
static inline ${ret}
${atomicname}(${params})
{
${checks}
${retstmt}arch_${atomicname}(${args});
}
#define ${atomicname} ${atomicname}
EOF
[ ! -z "${guard}" ] && printf "#endif\n"
printf "\n"
}
gen_xchg()
{
local xchg="$1"; shift
local mult="$1"; shift
cat <<EOF
#define ${xchg}(ptr, ...) \\
({ \\
typeof(ptr) __ai_ptr = (ptr); \\
kasan_check_write(__ai_ptr, ${mult}sizeof(*__ai_ptr)); \\
arch_${xchg}(__ai_ptr, __VA_ARGS__); \\
})
EOF
}
gen_optional_xchg()
{
local name="$1"; shift
local sfx="$1"; shift
local guard="defined(arch_${name}${sfx})"
[ -z "${sfx}" ] && guard="!defined(arch_${name}_relaxed) || defined(arch_${name})"
printf "#if ${guard}\n"
gen_xchg "${name}${sfx}" ""
printf "#endif\n\n"
}
cat << EOF
// SPDX-License-Identifier: GPL-2.0
// Generated by $0
// DO NOT MODIFY THIS FILE DIRECTLY
/*
* This file provides wrappers with KASAN instrumentation for atomic operations.
* To use this functionality an arch's atomic.h file needs to define all
* atomic operations with arch_ prefix (e.g. arch_atomic_read()) and include
* this file at the end. This file provides atomic_read() that forwards to
* arch_atomic_read() for actual atomic operation.
* Note: if an arch atomic operation is implemented by means of other atomic
* operations (e.g. atomic_read()/atomic_cmpxchg() loop), then it needs to use
* arch_ variants (i.e. arch_atomic_read()/arch_atomic_cmpxchg()) to avoid
* double instrumentation.
*/
#ifndef _ASM_GENERIC_ATOMIC_INSTRUMENTED_H
#define _ASM_GENERIC_ATOMIC_INSTRUMENTED_H
#include <linux/build_bug.h>
#include <linux/kasan-checks.h>
EOF
grep '^[a-z]' "$1" | while read name meta args; do
gen_proto "${meta}" "${name}" "atomic" "int" ${args}
done
grep '^[a-z]' "$1" | while read name meta args; do
gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
done
for xchg in "xchg" "cmpxchg" "cmpxchg64"; do
for order in "" "_acquire" "_release" "_relaxed"; do
gen_optional_xchg "${xchg}" "${order}"
done
done
for xchg in "cmpxchg_local" "cmpxchg64_local" "sync_cmpxchg"; do
gen_xchg "${xchg}" ""
printf "\n"
done
gen_xchg "cmpxchg_double" "2 * "
printf "\n\n"
gen_xchg "cmpxchg_double_local" "2 * "
cat <<EOF
#endif /* _ASM_GENERIC_ATOMIC_INSTRUMENTED_H */
EOF
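To make the result concrete, the instrumented wrapper this script emits for a plain read looks approximately like:

static inline int
atomic_read(const atomic_t *v)
{
	kasan_check_read(v, sizeof(*v));
	return arch_atomic_read(v);
}
#define atomic_read atomic_read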

scripts/atomic/gen-atomic-long.sh (new executable file)

@ -0,0 +1,101 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
ATOMICDIR=$(dirname $0)
. ${ATOMICDIR}/atomic-tbl.sh
#gen_cast(arg, int, atomic)
gen_cast()
{
local arg="$1"; shift
local int="$1"; shift
local atomic="$1"; shift
[ "${arg%%:*}" = "p" ] || return
printf "($(gen_param_type "${arg}" "${int}" "${atomic}"))"
}
#gen_args_cast(int, atomic, arg...)
gen_args_cast()
{
local int="$1"; shift
local atomic="$1"; shift
while [ "$#" -gt 0 ]; do
local cast="$(gen_cast "$1" "${int}" "${atomic}")"
local arg="$(gen_param_name "$1")"
printf "${cast}${arg}"
[ "$#" -gt 1 ] && printf ", "
shift;
done
}
#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, arg...)
gen_proto_order_variant()
{
local meta="$1"; shift
local name="$1$2$3$4"; shift; shift; shift; shift
local atomic="$1"; shift
local int="$1"; shift
local ret="$(gen_ret_type "${meta}" "long")"
local params="$(gen_params "long" "atomic_long" "$@")"
local argscast="$(gen_args_cast "${int}" "${atomic}" "$@")"
local retstmt="$(gen_ret_stmt "${meta}")"
cat <<EOF
static inline ${ret}
atomic_long_${name}(${params})
{
${retstmt}${atomic}_${name}(${argscast});
}
EOF
}
cat << EOF
// SPDX-License-Identifier: GPL-2.0
// Generated by $0
// DO NOT MODIFY THIS FILE DIRECTLY
#ifndef _ASM_GENERIC_ATOMIC_LONG_H
#define _ASM_GENERIC_ATOMIC_LONG_H
#include <asm/types.h>
#ifdef CONFIG_64BIT
typedef atomic64_t atomic_long_t;
#define ATOMIC_LONG_INIT(i) ATOMIC64_INIT(i)
#define atomic_long_cond_read_acquire atomic64_cond_read_acquire
#define atomic_long_cond_read_relaxed atomic64_cond_read_relaxed
#else
typedef atomic_t atomic_long_t;
#define ATOMIC_LONG_INIT(i) ATOMIC_INIT(i)
#define atomic_long_cond_read_acquire atomic_cond_read_acquire
#define atomic_long_cond_read_relaxed atomic_cond_read_relaxed
#endif
#ifdef CONFIG_64BIT
EOF
grep '^[a-z]' "$1" | while read name meta args; do
gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
done
cat <<EOF
#else /* CONFIG_64BIT */
EOF
grep '^[a-z]' "$1" | while read name meta args; do
gen_proto "${meta}" "${name}" "atomic" "int" ${args}
done
cat <<EOF
#endif /* CONFIG_64BIT */
#endif /* _ASM_GENERIC_ATOMIC_LONG_H */
EOF
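The generated atomic_long_t wrappers are thin forwarding functions (with casts only for pointer arguments); on 64-bit kernels, for example, the emitted read is approximately:

static inline long
atomic_long_read(const atomic_long_t *v)
{
	return atomic64_read(v);
}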


@ -0,0 +1,20 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
#
# Generate atomic headers
ATOMICDIR=$(dirname $0)
ATOMICTBL=${ATOMICDIR}/atomics.tbl
LINUXDIR=${ATOMICDIR}/../..
cat <<EOF |
gen-atomic-instrumented.sh asm-generic/atomic-instrumented.h
gen-atomic-long.sh asm-generic/atomic-long.h
gen-atomic-fallback.sh linux/atomic-fallback.h
EOF
while read script header; do
${ATOMICDIR}/${script} ${ATOMICTBL} > ${LINUXDIR}/include/${header}
HASH="$(sha1sum ${LINUXDIR}/include/${header})"
HASH="${HASH%% *}"
printf "// %s\n" "${HASH}" >> ${LINUXDIR}/include/${header}
done


@ -45,6 +45,8 @@ void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
void lock_release(struct lockdep_map *lock, int nested,
unsigned long ip);
void lockdep_reset_lock(struct lockdep_map *lock);
void lockdep_register_key(struct lock_class_key *key);
void lockdep_unregister_key(struct lock_class_key *key);
extern void debug_check_no_locks_freed(const void *from, unsigned long len);
#define STATIC_LOCKDEP_MAP_INIT(_name, _key) \


@ -7,6 +7,7 @@
struct liblockdep_pthread_mutex {
pthread_mutex_t mutex;
struct lock_class_key key;
struct lockdep_map dep_map;
};
@ -27,11 +28,10 @@ static inline int __mutex_init(liblockdep_pthread_mutex_t *lock,
return pthread_mutex_init(&lock->mutex, __mutexattr);
}
#define liblockdep_pthread_mutex_init(mutex, mutexattr) \
({ \
static struct lock_class_key __key; \
\
__mutex_init((mutex), #mutex, &__key, (mutexattr)); \
#define liblockdep_pthread_mutex_init(mutex, mutexattr) \
({ \
lockdep_register_key(&(mutex)->key); \
__mutex_init((mutex), #mutex, &(mutex)->key, (mutexattr)); \
})
static inline int liblockdep_pthread_mutex_lock(liblockdep_pthread_mutex_t *lock)
@ -55,6 +55,7 @@ static inline int liblockdep_pthread_mutex_trylock(liblockdep_pthread_mutex_t *l
static inline int liblockdep_pthread_mutex_destroy(liblockdep_pthread_mutex_t *lock)
{
lockdep_reset_lock(&lock->dep_map);
lockdep_unregister_key(&lock->key);
return pthread_mutex_destroy(&lock->mutex);
}
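The same register/unregister pairing applies to in-kernel users of the dynamic key API; a minimal kernel-side sketch (hypothetical struct my_obj, not taken from this diff):

#include <linux/lockdep.h>
#include <linux/spinlock.h>

struct my_obj {
	spinlock_t lock;
	struct lock_class_key key;	/* dynamically registered class key */
};

static void my_obj_init(struct my_obj *o)
{
	lockdep_register_key(&o->key);
	spin_lock_init(&o->lock);
	lockdep_set_class(&o->lock, &o->key);
}

static void my_obj_destroy(struct my_obj *o)
{
	/* the lock must no longer be in use when the key is unregistered */
	lockdep_unregister_key(&o->key);
}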


@ -11,7 +11,7 @@ find tests -name '*.c' | sort | while read -r i; do
testname=$(basename "$i" .c)
echo -ne "$testname... "
if gcc -o "tests/$testname" -pthread "$i" liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &&
timeout 1 "tests/$testname" 2>&1 | "tests/${testname}.sh"; then
timeout 1 "tests/$testname" 2>&1 | /bin/bash "tests/${testname}.sh"; then
echo "PASSED!"
else
echo "FAILED!"
@ -24,7 +24,7 @@ find tests -name '*.c' | sort | while read -r i; do
echo -ne "(PRELOAD) $testname... "
if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
timeout 1 ./lockdep "tests/$testname" 2>&1 |
"tests/${testname}.sh"; then
/bin/bash "tests/${testname}.sh"; then
echo "PASSED!"
else
echo "FAILED!"
@ -37,7 +37,7 @@ find tests -name '*.c' | sort | while read -r i; do
echo -ne "(PRELOAD + Valgrind) $testname... "
if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
{ timeout 10 valgrind --read-var-info=yes ./lockdep "./tests/$testname" >& "tests/${testname}.vg.out"; true; } &&
"tests/${testname}.sh" < "tests/${testname}.vg.out" &&
/bin/bash "tests/${testname}.sh" < "tests/${testname}.vg.out" &&
! grep -Eq '(^==[0-9]*== (Invalid |Uninitialised ))|Mismatched free|Source and destination overlap| UME ' "tests/${testname}.vg.out"; then
echo "PASSED!"
else


@ -14,4 +14,13 @@ void main(void)
pthread_mutex_destroy(&b);
pthread_mutex_destroy(&a);
pthread_mutex_init(&a, NULL);
pthread_mutex_init(&b, NULL);
LOCK_UNLOCK_2(a, b);
LOCK_UNLOCK_2(b, a);
pthread_mutex_destroy(&b);
pthread_mutex_destroy(&a);
}