WSL2-Linux-Kernel/kernel/locking
Peter Zijlstra 4ed1549129 locking/rtmutex: Fix task->pi_waiters integrity
[ Upstream commit f7853c3424 ]

Henry reported that rt_mutex_adjust_prio_chain() has an ordering
problem and puts the lie to the comment in [7]. Sharing the sort key
between lock->waiters and owner->pi_waiters *does* create problems,
since, unlike what the comment claims, holding [L] is insufficient.

Notably, consider:

	A
      /   \
     M1   M2
     |     |
     B     C

That is, task A owns both M1 and M2, and B and C block on them. In this
case a concurrent chain walk (B & C) will modify their respective sort
keys in [7] while holding M1->wait_lock and M2->wait_lock respectively.
So holding [L] is meaningless; they're different Ls.

This then gives rise to a race condition between [7] and [11], where
the requeue of pi_waiters will observe an inconsistent tree order.

	B				C

  (holds M1->wait_lock,		(holds M2->wait_lock,
   holds B->pi_lock)		 holds A->pi_lock)

  [7]
  waiter_update_prio();
  ...
  [8]
  raw_spin_unlock(B->pi_lock);
  ...
  [10]
  raw_spin_lock(A->pi_lock);

				[11]
				rt_mutex_enqueue_pi();
				// observes inconsistent A->pi_waiters
				// tree order
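
Concretely, the pre-fix layout keys both rb-trees off one shared copy of
the priority/deadline. A minimal illustrative sketch (simplified, not the
kernel source; the real waiter layout and comparator live in
rtmutex_common.h and rtmutex.c):

	struct rt_mutex_waiter {
		struct rb_node	tree_entry;	/* node in lock->waiters     */
		struct rb_node	pi_tree_entry;	/* node in owner->pi_waiters */
		int		prio;		/* single shared sort key ...            */
		u64		deadline;	/* ... updated under lock->wait_lock [7] */
	};

	/* Both trees order waiters with the same comparison: */
	static bool rt_waiter_less(struct rt_mutex_waiter *a,
				   struct rt_mutex_waiter *b)
	{
		if (a->prio != b->prio)
			return a->prio < b->prio;
		return a->deadline < b->deadline;
	}

B updates B's prio/deadline at [7] under M1->wait_lock, while C requeues
A->pi_waiters at [11] under M2->wait_lock and A->pi_lock. Nothing orders
the key update against the pi_waiters requeue, so that rb-tree can be
rebalanced around a key that is changing underneath it.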

Fixing this means either extending the range of the owner lock from
[10-13] to [6-13], with the immediate problem that this means [6-8]
hold both blocked and owner locks, or duplicating the sort key.

Since the locking in chain walk is horrible enough without having to
consider pi_lock nesting rules, duplicate the sort key instead.
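
A sketch of the shape that duplication takes: each tree gets its own
rb node *and* its own copy of the key, so each copy is serialized by the
lock that protects that tree. Names here are illustrative; the
authoritative layout is in rtmutex_common.h:

	struct rt_waiter_node {
		struct rb_node	entry;
		int		prio;
		u64		deadline;
	};

	struct rt_mutex_waiter {
		struct rt_waiter_node	tree;		/* lock->waiters, under lock->wait_lock    */
		struct rt_waiter_node	pi_tree;	/* owner->pi_waiters, under owner->pi_lock */
		struct task_struct	*task;
		struct rt_mutex_base	*lock;
		/* ... */
	};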

By giving each tree its own sort key, the above race becomes
harmless: if C sees B at the old location, then B will correct things
(if they need correcting) when it walks up the chain and reaches A.
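
In code terms, "B will correct things" is a resync of the pi_tree copy
from the tree copy once the walk reaches A; a hedged sketch, assuming a
helper along these lines (name illustrative):

	/*
	 * Refresh the owner-tree copy of the sort key from the lock-tree
	 * copy. Done while holding both lock->wait_lock and the owner's
	 * pi_lock, so owner->pi_waiters only ever orders on keys that are
	 * stable under owner->pi_lock.
	 */
	static void waiter_clone_prio(struct rt_mutex_waiter *waiter,
				      struct task_struct *owner)
	{
		lockdep_assert_held(&waiter->lock->wait_lock);
		lockdep_assert_held(&owner->pi_lock);

		waiter->pi_tree.prio     = waiter->tree.prio;
		waiter->pi_tree.deadline = waiter->tree.deadline;
	}

If C enqueued B's waiter into A->pi_waiters with the old pi_tree key,
that key at least cannot change under A->pi_waiters without A->pi_lock,
and B's own walk requeues the waiter with a fresh copy when it reaches A.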

Fixes: fb00aca474 ("rtmutex: Turn the plist into an rb-tree")
Reported-by: Henry Wu <triangletrap12@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Henry Wu <triangletrap12@gmail.com>
Link: https://lkml.kernel.org/r/20230707161052.GF2883469%40hirez.programming.kicks-ass.net
Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-08-03 10:22:45 +02:00
Makefile locking/ww_mutex: Implement rtmutex based ww_mutex API functions 2021-08-17 19:05:26 +02:00
irqflag-debug.c lockdep: Noinstr annotate warn_bogus_irq_restore() 2021-02-10 14:44:39 +01:00
lock_events.c
lock_events.h locking/lock_events: Use raw_cpu_{add,inc}() for stats 2019-06-03 12:32:56 +02:00
lock_events_list.h locking/rwsem: Remove reader optimistic spinning 2020-12-09 17:08:48 +01:00
lockdep.c lockdep: Fix -Wunused-parameter for _THIS_IP_ 2022-09-20 12:39:42 +02:00
lockdep_internals.h locking/lockdep: Iterate lock_classes directly when reading lockdep files 2022-04-08 14:23:57 +02:00
lockdep_proc.c locking/lockdep: Iterate lock_classes directly when reading lockdep files 2022-04-08 14:23:57 +02:00
lockdep_states.h
locktorture.c locktorture: Count lock readers 2021-07-27 11:39:30 -07:00
mcs_spinlock.h locking: Fix typos in comments 2021-03-22 02:45:52 +01:00
mutex-debug.c locking/ww_mutex: Gather mutex_waiter initialization 2021-08-17 19:04:41 +02:00
mutex.c locking/ww_mutex: Initialize waiter.ww_ctx properly 2021-08-20 12:15:54 +02:00
mutex.h locking/mutex: Move the 'struct mutex_waiter' definition from <linux/mutex.h> to the internal header 2021-08-17 18:24:31 +02:00
osq_lock.c locking: Fix typos in comments 2021-03-22 02:45:52 +01:00
percpu-rwsem.c locking/percpu-rwsem: Use this_cpu_{inc,dec}() for read_count 2020-09-16 16:26:56 +02:00
qrwlock.c locking/qrwlock: Cleanup queued_write_lock_slowpath() 2021-05-06 15:33:49 +02:00
qspinlock.c x86/kvm: Add "nopvspin" parameter to disable PV spinlocks 2020-07-08 16:21:57 -04:00
qspinlock_paravirt.h Revert "locking/pvqspinlock: Don't wait if vCPU is preempted" 2019-09-25 10:22:37 +02:00
qspinlock_stat.h treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 157 2019-05-30 11:26:37 -07:00
rtmutex.c locking/rtmutex: Fix task->pi_waiters integrity 2023-08-03 10:22:45 +02:00
rtmutex_api.c locking/rtmutex: Fix task->pi_waiters integrity 2023-08-03 10:22:45 +02:00
rtmutex_common.h locking/rtmutex: Fix task->pi_waiters integrity 2023-08-03 10:22:45 +02:00
rwbase_rt.c locking/rwbase: Take care of ordering guarantee for fastpath reader 2021-09-15 17:49:16 +02:00
rwsem.c locking/rwsem: Add __always_inline annotation to __down_read_common() and inlined callers 2023-05-17 11:50:29 +02:00
semaphore.c locking/semaphore: Add might_sleep() to down_*() family 2021-08-20 12:33:17 +02:00
spinlock.c locking/rwlock: Provide RT variant 2021-08-17 17:50:51 +02:00
spinlock_debug.c locking/rwlock: Provide RT variant 2021-08-17 17:50:51 +02:00
spinlock_rt.c locking/spinlock/rt: Prepare for RT local_lock 2021-08-17 19:06:13 +02:00
test-ww_mutex.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 9 2019-05-21 11:28:40 +02:00
ww_mutex.h locking/rtmutex: Fix task->pi_waiters integrity 2023-08-03 10:22:45 +02:00
ww_rt_mutex.c locking/ww_mutex: Implement rtmutex based ww_mutex API functions 2021-08-17 19:05:26 +02:00