License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
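For a C source file this amounts to a single first line of the form:

  // SPDX-License-Identifier: GPL-2.0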
This patch is based on work done by Thomas Gleixner, Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to licenses
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
  lines of source.
- Files that already had some variant of a license header in them (even
  if <5 lines) were also included.
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
  considered to have no license information in it, and the top-level
  COPYING file license was applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". Results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
  of the */uapi/* ones, it was denoted with the Linux-syscall-note if
  any GPL-family license was found in the file, or if it had no
  licensing in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review of the spreadsheet was
done by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some
cases by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based in part on an older version of FOSSology, so
the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifiers.
Additionally, Philippe spent 10 hours doing a detailed manual inspection
and review of the 12,461 files patched in the initial patch version
earlier this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
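The actual helper script is not reproduced here, but a minimal sketch of
the idea -- read a .csv list of files, pick the comment style the file
type expects, and prepend the SPDX line -- could look like the following.
The "path,license-id" CSV layout, the default files.csv name and the
spdx_tag.c file are hypothetical illustrations, not the real tooling:

// spdx_tag.c: illustrative sketch only (hypothetical, not the script that
// was actually used for this series). Reads "path,license-id" pairs from a
// .csv list and prepends an SPDX line, using C++-style comments for .c
// files and C-style comments for everything else (e.g. headers).
#include <stdio.h>
#include <string.h>

static int tag_file(const char *path, const char *id)
{
	char tmp[4096];
	FILE *in, *out;
	int ch;
	const char *dot = strrchr(path, '.');
	int is_c_source = dot && !strcmp(dot, ".c");

	in = fopen(path, "r");
	if (!in)
		return -1;

	snprintf(tmp, sizeof(tmp), "%s.spdx-tmp", path);
	out = fopen(tmp, "w");
	if (!out) {
		fclose(in);
		return -1;
	}

	/* Emit the identifier in the comment style the file type expects. */
	if (is_c_source)
		fprintf(out, "// SPDX-License-Identifier: %s\n", id);
	else
		fprintf(out, "/* SPDX-License-Identifier: %s */\n", id);

	/* Copy the original contents below the new first line. */
	while ((ch = fgetc(in)) != EOF)
		fputc(ch, out);

	fclose(in);
	fclose(out);
	return rename(tmp, path);
}

int main(int argc, char **argv)
{
	char line[4096];
	FILE *list = fopen(argc > 1 ? argv[1] : "files.csv", "r");

	if (!list)
		return 1;

	while (fgets(line, sizeof(line), list)) {
		char *id = strchr(line, ',');

		if (!id)
			continue;
		*id++ = '\0';
		id[strcspn(id, "\r\n")] = '\0';	/* strip trailing newline */
		if (tag_file(line, id))
			fprintf(stderr, "failed to tag %s\n", line);
	}

	fclose(list);
	return 0;
}

A real version would also need to skip files that already carry an SPDX
line and preserve things like "#!" interpreter lines in scripts.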
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

// SPDX-License-Identifier: GPL-2.0
/* kernel/rwsem.c: R/W semaphores, public implementation
 *
 * Written by David Howells (dhowells@redhat.com).
 * Derived from asm-i386/semaphore.h
 *
 * Writer lock-stealing by Alex Shi <alex.shi@intel.com>
 * and Michel Lespinasse <walken@google.com>
 *
 * Optimistic spinning by Tim Chen <tim.c.chen@intel.com>
 * and Davidlohr Bueso <davidlohr@hp.com>. Based on mutexes.
 *
 * Rwsem count bit fields re-definition and rwsem rearchitecture
 * by Waiman Long <longman@redhat.com>.
 */

#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/sched/rt.h>
#include <linux/sched/task.h>
#include <linux/sched/debug.h>
#include <linux/sched/wake_q.h>
#include <linux/sched/signal.h>
#include <linux/export.h>
#include <linux/rwsem.h>
#include <linux/atomic.h>

#include "rwsem.h"
#include "lock_events.h"

/*
 * The least significant 2 bits of the owner value have the following
 * meanings when set.
 *  - RWSEM_READER_OWNED (bit 0): The rwsem is owned by readers
 *  - RWSEM_ANONYMOUSLY_OWNED (bit 1): The rwsem is anonymously owned,
 *    i.e. the owner(s) cannot be readily determined. It can be reader
 *    owned or the owning writer is indeterminate.
 *
 * When a writer acquires a rwsem, it puts its task_struct pointer
 * into the owner field. It is cleared after an unlock.
 *
 * When a reader acquires a rwsem, it will also put its task_struct
 * pointer into the owner field with both the RWSEM_READER_OWNED and
 * RWSEM_ANONYMOUSLY_OWNED bits set. On unlock, the owner field will
 * largely be left untouched. So for a free or reader-owned rwsem,
 * the owner value may contain information about the last reader that
 * acquired the rwsem. The anonymous bit is set because that particular
 * reader may or may not still own the lock.
 *
 * That information may be helpful in debugging cases where the system
 * seems to hang on a reader-owned rwsem, especially if only one reader
 * is involved. Ideally we would like to track all the readers that own
 * a rwsem, but the overhead is simply too big.
 */
#define RWSEM_READER_OWNED	(1UL << 0)
#define RWSEM_ANONYMOUSLY_OWNED	(1UL << 1)

#ifdef CONFIG_DEBUG_RWSEMS
# define DEBUG_RWSEMS_WARN_ON(c, sem)	do {			\
	if (!debug_locks_silent &&				\
	    WARN_ONCE(c, "DEBUG_RWSEMS_WARN_ON(%s): count = 0x%lx, owner = 0x%lx, curr 0x%lx, list %sempty\n",\
		#c, atomic_long_read(&(sem)->count),		\
		(long)((sem)->owner), (long)current,		\
		list_empty(&(sem)->wait_list) ? "" : "not "))	\
			debug_locks_off();			\
	} while (0)
#else
# define DEBUG_RWSEMS_WARN_ON(c, sem)
#endif

/*
 * The definition of the atomic counter in the semaphore:
 *
 * Bit  0    - writer locked bit
 * Bit  1    - waiters present bit
 * Bits 2-7  - reserved
 * Bits 8-X  - 24-bit (32-bit) or 56-bit reader count
 *
 * atomic_long_fetch_add() is used to obtain reader lock, whereas
 * atomic_long_cmpxchg() will be used to obtain writer lock.
 */
#define RWSEM_WRITER_LOCKED	(1UL << 0)
#define RWSEM_FLAG_WAITERS	(1UL << 1)
#define RWSEM_READER_SHIFT	8
#define RWSEM_READER_BIAS	(1UL << RWSEM_READER_SHIFT)
#define RWSEM_READER_MASK	(~(RWSEM_READER_BIAS - 1))
#define RWSEM_WRITER_MASK	RWSEM_WRITER_LOCKED
#define RWSEM_LOCK_MASK		(RWSEM_WRITER_MASK|RWSEM_READER_MASK)
#define RWSEM_READ_FAILED_MASK	(RWSEM_WRITER_MASK|RWSEM_FLAG_WAITERS)
|
|
|
|
|
|
|
|
/*
|
|
|
|
* All writes to owner are protected by WRITE_ONCE() to make sure that
|
|
|
|
* store tearing can't happen as optimistic spinners may read and use
|
|
|
|
* the owner value concurrently without lock. Read from owner, however,
|
|
|
|
* may not need READ_ONCE() as long as the pointer value is only used
|
|
|
|
* for comparison and isn't being dereferenced.
|
|
|
|
*/
|
|
|
|
static inline void rwsem_set_owner(struct rw_semaphore *sem)
|
|
|
|
{
|
|
|
|
WRITE_ONCE(sem->owner, current);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void rwsem_clear_owner(struct rw_semaphore *sem)
|
|
|
|
{
|
|
|
|
WRITE_ONCE(sem->owner, NULL);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The task_struct pointer of the last owning reader will be left in
|
|
|
|
* the owner field.
|
|
|
|
*
|
|
|
|
* Note that the owner value just indicates the task has owned the rwsem
|
|
|
|
* previously, it may not be the real owner or one of the real owners
|
|
|
|
* anymore when that field is examined, so take it with a grain of salt.
|
|
|
|
*/
|
|
|
|
static inline void __rwsem_set_reader_owned(struct rw_semaphore *sem,
|
|
|
|
struct task_struct *owner)
|
|
|
|
{
|
|
|
|
unsigned long val = (unsigned long)owner | RWSEM_READER_OWNED
|
|
|
|
| RWSEM_ANONYMOUSLY_OWNED;
|
|
|
|
|
|
|
|
WRITE_ONCE(sem->owner, (struct task_struct *)val);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void rwsem_set_reader_owned(struct rw_semaphore *sem)
|
|
|
|
{
|
|
|
|
__rwsem_set_reader_owned(sem, current);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Return true if the a rwsem waiter can spin on the rwsem's owner
|
|
|
|
* and steal the lock, i.e. the lock is not anonymously owned.
|
|
|
|
* N.B. !owner is considered spinnable.
|
|
|
|
*/
|
|
|
|
static inline bool is_rwsem_owner_spinnable(struct task_struct *owner)
|
|
|
|
{
|
|
|
|
return !((unsigned long)owner & RWSEM_ANONYMOUSLY_OWNED);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Return true if rwsem is owned by an anonymous writer or readers.
|
|
|
|
*/
|
|
|
|
static inline bool rwsem_has_anonymous_owner(struct task_struct *owner)
|
|
|
|
{
|
|
|
|
return (unsigned long)owner & RWSEM_ANONYMOUSLY_OWNED;
|
|
|
|
}
|
|
|
|
|
|
|
|
#ifdef CONFIG_DEBUG_RWSEMS
|
|
|
|
/*
|
|
|
|
* With CONFIG_DEBUG_RWSEMS configured, it will make sure that if there
|
|
|
|
* is a task pointer in owner of a reader-owned rwsem, it will be the
|
|
|
|
* real owner or one of the real owners. The only exception is when the
|
|
|
|
* unlock is done by up_read_non_owner().
|
|
|
|
*/
|
|
|
|
static inline void rwsem_clear_reader_owned(struct rw_semaphore *sem)
|
|
|
|
{
|
|
|
|
unsigned long val = (unsigned long)current | RWSEM_READER_OWNED
|
|
|
|
| RWSEM_ANONYMOUSLY_OWNED;
|
|
|
|
if (READ_ONCE(sem->owner) == (struct task_struct *)val)
|
|
|
|
cmpxchg_relaxed((unsigned long *)&sem->owner, val,
|
|
|
|
RWSEM_READER_OWNED | RWSEM_ANONYMOUSLY_OWNED);
|
|
|
|
}
|
|
|
|
#else
|
|
|
|
static inline void rwsem_clear_reader_owned(struct rw_semaphore *sem)
|
|
|
|
{
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Guide to the rw_semaphore's count field.
|
|
|
|
*
|
|
|
|
* When the RWSEM_WRITER_LOCKED bit in count is set, the lock is owned
|
|
|
|
* by a writer.
|
|
|
|
*
|
|
|
|
* The lock is owned by readers when
|
|
|
|
* (1) the RWSEM_WRITER_LOCKED isn't set in count,
|
|
|
|
* (2) some of the reader bits are set in count, and
|
|
|
|
* (3) the owner field has the RWSEM_READER_OWNED bit set.
|
|
|
|
*
|
|
|
|
* Having some reader bits set is not enough to guarantee a readers owned
|
|
|
|
* lock as the readers may be in the process of backing out from the count
|
|
|
|
* and a writer has just released the lock. So another writer may steal
|
|
|
|
* the lock immediately after that.
|
|
|
|
*/
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Initialize an rwsem:
|
|
|
|
*/
|
|
|
|
void __init_rwsem(struct rw_semaphore *sem, const char *name,
|
|
|
|
struct lock_class_key *key)
|
|
|
|
{
|
|
|
|
#ifdef CONFIG_DEBUG_LOCK_ALLOC
|
|
|
|
/*
|
|
|
|
* Make sure we are not reinitializing a held semaphore:
|
|
|
|
*/
|
|
|
|
debug_check_no_locks_freed((void *)sem, sizeof(*sem));
|
|
|
|
lockdep_init_map(&sem->dep_map, name, key, 0);
|
|
|
|
#endif
|
|
|
|
atomic_long_set(&sem->count, RWSEM_UNLOCKED_VALUE);
|
|
|
|
raw_spin_lock_init(&sem->wait_lock);
|
|
|
|
INIT_LIST_HEAD(&sem->wait_list);
|
|
|
|
sem->owner = NULL;
|
|
|
|
#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
|
|
|
|
osq_lock_init(&sem->osq);
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(__init_rwsem);
|
|
|
|
|
|
|
|
enum rwsem_waiter_type {
|
|
|
|
RWSEM_WAITING_FOR_WRITE,
|
|
|
|
RWSEM_WAITING_FOR_READ
|
|
|
|
};
|
|
|
|
|
|
|
|
struct rwsem_waiter {
|
|
|
|
struct list_head list;
|
|
|
|
struct task_struct *task;
|
|
|
|
enum rwsem_waiter_type type;
|
|
|
|
};
|
|
|
|
|
|
|
|
enum rwsem_wake_type {
|
|
|
|
RWSEM_WAKE_ANY, /* Wake whatever's at head of wait list */
|
|
|
|
RWSEM_WAKE_READERS, /* Wake readers only */
|
|
|
|
RWSEM_WAKE_READ_OWNED /* Waker thread holds the read lock */
|
|
|
|
};
|
|
|
|
|
|
|
|
/*
|
|
|
|
* handle the lock release when processes blocked on it that can now run
|
|
|
|
* - if we come here from up_xxxx(), then the RWSEM_FLAG_WAITERS bit must
|
|
|
|
* have been set.
|
|
|
|
* - there must be someone on the queue
|
|
|
|
* - the wait_lock must be held by the caller
|
|
|
|
* - tasks are marked for wakeup, the caller must later invoke wake_up_q()
|
|
|
|
* to actually wakeup the blocked task(s) and drop the reference count,
|
|
|
|
* preferably when the wait_lock is released
|
|
|
|
* - woken process blocks are discarded from the list after having task zeroed
|
|
|
|
* - writers are only marked woken if downgrading is false
|
|
|
|
*/
|
2019-05-20 23:59:04 +03:00
|
|
|
static void rwsem_mark_wake(struct rw_semaphore *sem,
|
|
|
|
enum rwsem_wake_type wake_type,
|
|
|
|
struct wake_q_head *wake_q)
|
2019-05-20 23:59:03 +03:00
|
|
|
{
|
|
|
|
struct rwsem_waiter *waiter, *tmp;
|
|
|
|
long oldcount, woken = 0, adjustment = 0;
|
|
|
|
struct list_head wlist;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Take a peek at the queue head waiter such that we can determine
|
|
|
|
* the wakeup(s) to perform.
|
|
|
|
*/
|
|
|
|
waiter = list_first_entry(&sem->wait_list, struct rwsem_waiter, list);
|
|
|
|
|
|
|
|
if (waiter->type == RWSEM_WAITING_FOR_WRITE) {
|
|
|
|
if (wake_type == RWSEM_WAKE_ANY) {
|
|
|
|
/*
|
|
|
|
* Mark writer at the front of the queue for wakeup.
|
|
|
|
* Until the task is actually awoken later by
|
|
|
|
* the caller, other writers are able to steal it.
|
|
|
|
* Readers, on the other hand, will block as they
|
|
|
|
* will notice the queued writer.
|
|
|
|
*/
|
|
|
|
wake_q_add(wake_q, waiter->task);
|
|
|
|
lockevent_inc(rwsem_wake_writer);
|
|
|
|
}
|
|
|
|
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Writers might steal the lock before we grant it to the next reader.
|
|
|
|
* We prefer to do the first reader grant before counting readers
|
|
|
|
* so we can bail out early if a writer stole the lock.
|
|
|
|
*/
|
|
|
|
if (wake_type != RWSEM_WAKE_READ_OWNED) {
|
|
|
|
adjustment = RWSEM_READER_BIAS;
|
|
|
|
oldcount = atomic_long_fetch_add(adjustment, &sem->count);
|
|
|
|
if (unlikely(oldcount & RWSEM_WRITER_MASK)) {
|
|
|
|
atomic_long_sub(adjustment, &sem->count);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
/*
|
|
|
|
* Set it to reader-owned to give spinners an early
|
|
|
|
* indication that readers now have the lock.
|
|
|
|
*/
|
|
|
|
__rwsem_set_reader_owned(sem, waiter->task);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Grant an infinite number of read locks to the readers at the front
|
|
|
|
* of the queue. We know that woken will be at least 1 as we accounted
|
|
|
|
* for above. Note we increment the 'active part' of the count by the
|
|
|
|
* number of readers before waking any processes up.
|
|
|
|
*
|
|
|
|
* We have to do wakeup in 2 passes to prevent the possibility that
|
|
|
|
* the reader count may be decremented before it is incremented. It
|
|
|
|
* is because the to-be-woken waiter may not have slept yet. So it
|
|
|
|
* may see waiter->task got cleared, finish its critical section and
|
|
|
|
* do an unlock before the reader count increment.
|
|
|
|
*
|
|
|
|
* 1) Collect the read-waiters in a separate list, count them and
|
|
|
|
* fully increment the reader count in rwsem.
|
|
|
|
* 2) For each waiters in the new list, clear waiter->task and
|
|
|
|
* put them into wake_q to be woken up later.
|
|
|
|
*/
|
|
|
|
list_for_each_entry(waiter, &sem->wait_list, list) {
|
|
|
|
if (waiter->type == RWSEM_WAITING_FOR_WRITE)
|
|
|
|
break;
|
|
|
|
|
|
|
|
woken++;
|
|
|
|
}
|
|
|
|
list_cut_before(&wlist, &sem->wait_list, &waiter->list);
|
|
|
|
|
|
|
|
adjustment = woken * RWSEM_READER_BIAS - adjustment;
|
|
|
|
lockevent_cond_inc(rwsem_wake_reader, woken);
|
|
|
|
if (list_empty(&sem->wait_list)) {
|
|
|
|
/* hit end of list above */
|
|
|
|
adjustment -= RWSEM_FLAG_WAITERS;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (adjustment)
|
|
|
|
atomic_long_add(adjustment, &sem->count);
|
|
|
|
|
|
|
|
/* 2nd pass */
|
|
|
|
list_for_each_entry_safe(waiter, tmp, &wlist, list) {
|
|
|
|
struct task_struct *tsk;
|
|
|
|
|
|
|
|
tsk = waiter->task;
|
|
|
|
get_task_struct(tsk);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Ensure calling get_task_struct() before setting the reader
|
2019-05-20 23:59:04 +03:00
|
|
|
* waiter to nil such that rwsem_down_read_slowpath() cannot
|
2019-05-20 23:59:03 +03:00
|
|
|
* race with do_exit() by always holding a reference count
|
|
|
|
* to the task to wakeup.
|
|
|
|
*/
|
|
|
|
smp_store_release(&waiter->task, NULL);
|
|
|
|
/*
|
|
|
|
* Ensure issuing the wakeup (either by us or someone else)
|
|
|
|
* after setting the reader waiter to nil.
|
|
|
|
*/
|
|
|
|
wake_q_add_safe(wake_q, tsk);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* This function must be called with the sem->wait_lock held to prevent
|
|
|
|
* race conditions between checking the rwsem wait list and setting the
|
|
|
|
* sem->count accordingly.
|
|
|
|
*/
|
|
|
|
static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
|
|
|
|
{
|
|
|
|
long new;
|
|
|
|
|
|
|
|
if (count & RWSEM_LOCK_MASK)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
new = count + RWSEM_WRITER_LOCKED -
|
|
|
|
(list_is_singular(&sem->wait_list) ? RWSEM_FLAG_WAITERS : 0);
|
|
|
|
|
|
|
|
if (atomic_long_try_cmpxchg_acquire(&sem->count, &count, new)) {
|
|
|
|
rwsem_set_owner(sem);
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
|
|
|
#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
|
|
|
|
/*
|
|
|
|
* Try to acquire write lock before the writer has been put on wait queue.
|
|
|
|
*/
|
|
|
|
static inline bool rwsem_try_write_lock_unqueued(struct rw_semaphore *sem)
|
|
|
|
{
|
|
|
|
long count = atomic_long_read(&sem->count);
|
|
|
|
|
|
|
|
while (!(count & RWSEM_LOCK_MASK)) {
|
|
|
|
if (atomic_long_try_cmpxchg_acquire(&sem->count, &count,
|
|
|
|
count + RWSEM_WRITER_LOCKED)) {
|
|
|
|
rwsem_set_owner(sem);
|
|
|
|
lockevent_inc(rwsem_opt_wlock);
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline bool owner_on_cpu(struct task_struct *owner)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* As a lock holder preemption issue, we skip spinning if the
|
|
|
|
* task is not on a CPU or its CPU is preempted.
|
|
|
|
*/
|
|
|
|
return owner->on_cpu && !vcpu_is_preempted(task_cpu(owner));
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
|
|
|
|
{
|
|
|
|
struct task_struct *owner;
|
|
|
|
bool ret = true;
|
|
|
|
|
|
|
|
BUILD_BUG_ON(!rwsem_has_anonymous_owner(RWSEM_OWNER_UNKNOWN));
|
|
|
|
|
|
|
|
if (need_resched())
|
|
|
|
return false;
|
|
|
|
|
|
|
|
rcu_read_lock();
|
|
|
|
owner = READ_ONCE(sem->owner);
|
|
|
|
if (owner) {
|
|
|
|
ret = is_rwsem_owner_spinnable(owner) &&
|
|
|
|
owner_on_cpu(owner);
|
|
|
|
}
|
|
|
|
rcu_read_unlock();
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2019-05-20 23:59:05 +03:00
|
|
|
* The rwsem_spin_on_owner() function returns the following 4 values
|
|
|
|
* depending on the lock owner state.
|
|
|
|
* OWNER_NULL : owner is currently NULL
|
|
|
|
* OWNER_WRITER: when owner changes and is a writer
|
|
|
|
* OWNER_READER: when owner changes and the new owner may be a reader.
|
|
|
|
* OWNER_NONSPINNABLE:
|
|
|
|
* when optimistic spinning has to stop because either the
|
|
|
|
* owner stops running, is unknown, or its timeslice has
|
|
|
|
* been used up.
|
2019-05-20 23:59:03 +03:00
|
|
|
*/
|
2019-05-20 23:59:05 +03:00
|
|
|
enum owner_state {
|
|
|
|
OWNER_NULL = 1 << 0,
|
|
|
|
OWNER_WRITER = 1 << 1,
|
|
|
|
OWNER_READER = 1 << 2,
|
|
|
|
OWNER_NONSPINNABLE = 1 << 3,
|
|
|
|
};
|
|
|
|
#define OWNER_SPINNABLE (OWNER_NULL | OWNER_WRITER)
|
|
|
|
|
|
|
|
static inline enum owner_state rwsem_owner_state(unsigned long owner)
|
2019-05-20 23:59:03 +03:00
|
|
|
{
|
2019-05-20 23:59:05 +03:00
|
|
|
if (!owner)
|
|
|
|
return OWNER_NULL;
|
2019-05-20 23:59:03 +03:00
|
|
|
|
2019-05-20 23:59:05 +03:00
|
|
|
if (owner & RWSEM_ANONYMOUSLY_OWNED)
|
|
|
|
return OWNER_NONSPINNABLE;
|
|
|
|
|
|
|
|
if (owner & RWSEM_READER_OWNED)
|
|
|
|
return OWNER_READER;
|
|
|
|
|
|
|
|
return OWNER_WRITER;
|
|
|
|
}
|
|
|
|
|
|
|
|
static noinline enum owner_state rwsem_spin_on_owner(struct rw_semaphore *sem)
|
|
|
|
{
|
|
|
|
struct task_struct *tmp, *owner = READ_ONCE(sem->owner);
|
|
|
|
enum owner_state state = rwsem_owner_state((unsigned long)owner);
|
|
|
|
|
|
|
|
if (state != OWNER_WRITER)
|
|
|
|
return state;
|
2019-05-20 23:59:03 +03:00
|
|
|
|
|
|
|
rcu_read_lock();
|
2019-05-20 23:59:05 +03:00
|
|
|
for (;;) {
|
|
|
|
tmp = READ_ONCE(sem->owner);
|
|
|
|
if (tmp != owner) {
|
|
|
|
state = rwsem_owner_state((unsigned long)tmp);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
2019-05-20 23:59:03 +03:00
|
|
|
/*
|
|
|
|
* Ensure we emit the owner->on_cpu, dereference _after_
|
|
|
|
* checking sem->owner still matches owner, if that fails,
|
|
|
|
* owner might point to free()d memory, if it still matches,
|
|
|
|
* the rcu_read_lock() ensures the memory stays valid.
|
|
|
|
*/
|
|
|
|
barrier();
|
|
|
|
|
|
|
|
if (need_resched() || !owner_on_cpu(owner)) {
|
2019-05-20 23:59:05 +03:00
|
|
|
state = OWNER_NONSPINNABLE;
|
|
|
|
break;
|
2019-05-20 23:59:03 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
cpu_relax();
|
|
|
|
}
|
|
|
|
rcu_read_unlock();
|
|
|
|
|
2019-05-20 23:59:05 +03:00
|
|
|
return state;
|
2019-05-20 23:59:03 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
|
|
|
|
{
|
|
|
|
bool taken = false;
|
|
|
|
|
|
|
|
preempt_disable();
|
|
|
|
|
|
|
|
/* sem->wait_lock should not be held when doing optimistic spinning */
|
|
|
|
if (!rwsem_can_spin_on_owner(sem))
|
|
|
|
goto done;
|
|
|
|
|
|
|
|
if (!osq_lock(&sem->osq))
|
|
|
|
goto done;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Optimistically spin on the owner field and attempt to acquire the
|
|
|
|
* lock whenever the owner changes. Spinning will be stopped when:
|
|
|
|
* 1) the owning writer isn't running; or
|
|
|
|
* 2) readers own the lock as we can't determine if they are
|
|
|
|
* actively running or not.
|
|
|
|
*/
|
2019-05-20 23:59:05 +03:00
|
|
|
while (rwsem_spin_on_owner(sem) & OWNER_SPINNABLE) {
|
2019-05-20 23:59:03 +03:00
|
|
|
/*
|
|
|
|
* Try to acquire the lock
|
|
|
|
*/
|
|
|
|
if (rwsem_try_write_lock_unqueued(sem)) {
|
|
|
|
taken = true;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* When there's no owner, we might have preempted between the
|
|
|
|
* owner acquiring the lock and setting the owner field. If
|
|
|
|
* we're an RT task that will live-lock because we won't let
|
|
|
|
* the owner complete.
|
|
|
|
*/
|
|
|
|
if (!sem->owner && (need_resched() || rt_task(current)))
|
|
|
|
break;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The cpu_relax() call is a compiler barrier which forces
|
|
|
|
* everything in this loop to be re-loaded. We don't need
|
|
|
|
* memory barriers as we'll eventually observe the right
|
|
|
|
* values at the cost of a few extra spins.
|
|
|
|
*/
|
|
|
|
cpu_relax();
|
|
|
|
}
|
|
|
|
osq_unlock(&sem->osq);
|
|
|
|
done:
|
|
|
|
preempt_enable();
|
|
|
|
lockevent_cond_inc(rwsem_opt_fail, !taken);
|
|
|
|
return taken;
|
|
|
|
}
|
|
|
|
#else
|
|
|
|
static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
|
|
|
|
{
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Wait for the read lock to be granted
|
|
|
|
*/
|
2019-05-20 23:59:04 +03:00
|
|
|
static struct rw_semaphore __sched *
|
|
|
|
rwsem_down_read_slowpath(struct rw_semaphore *sem, int state)
|
2019-05-20 23:59:03 +03:00
|
|
|
{
|
|
|
|
long count, adjustment = -RWSEM_READER_BIAS;
|
|
|
|
struct rwsem_waiter waiter;
|
|
|
|
DEFINE_WAKE_Q(wake_q);
|
|
|
|
|
|
|
|
waiter.task = current;
|
|
|
|
waiter.type = RWSEM_WAITING_FOR_READ;
|
|
|
|
|
|
|
|
raw_spin_lock_irq(&sem->wait_lock);
|
|
|
|
if (list_empty(&sem->wait_list)) {
|
|
|
|
/*
|
|
|
|
* In case the wait queue is empty and the lock isn't owned
|
|
|
|
* by a writer, this reader can exit the slowpath and return
|
|
|
|
* immediately as its RWSEM_READER_BIAS has already been
|
|
|
|
* set in the count.
|
|
|
|
*/
|
|
|
|
if (!(atomic_long_read(&sem->count) & RWSEM_WRITER_MASK)) {
|
|
|
|
raw_spin_unlock_irq(&sem->wait_lock);
|
|
|
|
rwsem_set_reader_owned(sem);
|
|
|
|
lockevent_inc(rwsem_rlock_fast);
|
|
|
|
return sem;
|
|
|
|
}
|
|
|
|
adjustment += RWSEM_FLAG_WAITERS;
|
|
|
|
}
|
|
|
|
list_add_tail(&waiter.list, &sem->wait_list);
|
|
|
|
|
|
|
|
/* we're now waiting on the lock, but no longer actively locking */
|
|
|
|
count = atomic_long_add_return(adjustment, &sem->count);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If there are no active locks, wake the front queued process(es).
|
|
|
|
*
|
|
|
|
* If there are no writers and we are first in the queue,
|
|
|
|
* wake our own waiter to join the existing active readers !
|
|
|
|
*/
|
|
|
|
if (!(count & RWSEM_LOCK_MASK) ||
|
|
|
|
(!(count & RWSEM_WRITER_MASK) && (adjustment & RWSEM_FLAG_WAITERS)))
|
2019-05-20 23:59:04 +03:00
|
|
|
rwsem_mark_wake(sem, RWSEM_WAKE_ANY, &wake_q);
|
2019-05-20 23:59:03 +03:00
|
|
|
|
|
|
|
raw_spin_unlock_irq(&sem->wait_lock);
|
|
|
|
wake_up_q(&wake_q);
|
|
|
|
|
|
|
|
/* wait to be given the lock */
|
|
|
|
while (true) {
|
|
|
|
set_current_state(state);
|
|
|
|
if (!waiter.task)
|
|
|
|
break;
|
|
|
|
if (signal_pending_state(state, current)) {
|
|
|
|
raw_spin_lock_irq(&sem->wait_lock);
|
|
|
|
if (waiter.task)
|
|
|
|
goto out_nolock;
|
|
|
|
raw_spin_unlock_irq(&sem->wait_lock);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
schedule();
|
|
|
|
lockevent_inc(rwsem_sleep_reader);
|
|
|
|
}
|
|
|
|
|
|
|
|
__set_current_state(TASK_RUNNING);
|
|
|
|
lockevent_inc(rwsem_rlock);
|
|
|
|
return sem;
|
|
|
|
out_nolock:
|
|
|
|
list_del(&waiter.list);
|
|
|
|
if (list_empty(&sem->wait_list))
|
|
|
|
atomic_long_andnot(RWSEM_FLAG_WAITERS, &sem->count);
|
|
|
|
raw_spin_unlock_irq(&sem->wait_lock);
|
|
|
|
__set_current_state(TASK_RUNNING);
|
|
|
|
lockevent_inc(rwsem_rlock_fail);
|
|
|
|
return ERR_PTR(-EINTR);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Wait until we successfully acquire the write lock
|
|
|
|
*/
|
2019-05-20 23:59:04 +03:00
|
|
|
static struct rw_semaphore *
|
|
|
|
rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
|
2019-05-20 23:59:03 +03:00
|
|
|
{
|
|
|
|
long count;
|
|
|
|
bool waiting = true; /* any queued threads before us */
|
|
|
|
struct rwsem_waiter waiter;
|
|
|
|
struct rw_semaphore *ret = sem;
|
|
|
|
DEFINE_WAKE_Q(wake_q);
|
|
|
|
|
|
|
|
/* do optimistic spinning and steal lock if possible */
|
|
|
|
if (rwsem_optimistic_spin(sem))
|
|
|
|
return sem;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Optimistic spinning failed, proceed to the slowpath
|
|
|
|
* and block until we can acquire the sem.
|
|
|
|
*/
|
|
|
|
waiter.task = current;
|
|
|
|
waiter.type = RWSEM_WAITING_FOR_WRITE;
|
|
|
|
|
|
|
|
raw_spin_lock_irq(&sem->wait_lock);
|
|
|
|
|
|
|
|
/* account for this before adding a new element to the list */
|
|
|
|
if (list_empty(&sem->wait_list))
|
|
|
|
waiting = false;
|
|
|
|
|
|
|
|
list_add_tail(&waiter.list, &sem->wait_list);
|
|
|
|
|
|
|
|
/* we're now waiting on the lock */
|
|
|
|
if (waiting) {
|
|
|
|
count = atomic_long_read(&sem->count);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If there were already threads queued before us and there are
|
|
|
|
* no active writers and some readers, the lock must be read
|
|
|
|
* owned; so we try to wake any read locks that were queued ahead
|
|
|
|
* of us.
|
|
|
|
*/
|
|
|
|
if (!(count & RWSEM_WRITER_MASK) &&
|
|
|
|
(count & RWSEM_READER_MASK)) {
|
2019-05-20 23:59:04 +03:00
|
|
|
rwsem_mark_wake(sem, RWSEM_WAKE_READERS, &wake_q);
|
2019-05-20 23:59:03 +03:00
|
|
|
/*
|
|
|
|
* The wakeup is normally called _after_ the wait_lock
|
|
|
|
* is released, but given that we are proactively waking
|
|
|
|
* readers we can deal with the wake_q overhead as it is
|
|
|
|
* similar to releasing and taking the wait_lock again
|
|
|
|
* for attempting rwsem_try_write_lock().
|
|
|
|
*/
|
|
|
|
wake_up_q(&wake_q);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Reinitialize wake_q after use.
|
|
|
|
*/
|
|
|
|
wake_q_init(&wake_q);
|
|
|
|
}
|
|
|
|
|
|
|
|
} else {
|
|
|
|
count = atomic_long_add_return(RWSEM_FLAG_WAITERS, &sem->count);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* wait until we successfully acquire the lock */
|
|
|
|
set_current_state(state);
|
|
|
|
while (true) {
|
|
|
|
if (rwsem_try_write_lock(count, sem))
|
|
|
|
break;
|
|
|
|
raw_spin_unlock_irq(&sem->wait_lock);
|
|
|
|
|
|
|
|
/* Block until there are no active lockers. */
|
|
|
|
do {
|
|
|
|
if (signal_pending_state(state, current))
|
|
|
|
goto out_nolock;
|
|
|
|
|
|
|
|
schedule();
|
|
|
|
lockevent_inc(rwsem_sleep_writer);
|
|
|
|
set_current_state(state);
|
|
|
|
count = atomic_long_read(&sem->count);
|
|
|
|
} while (count & RWSEM_LOCK_MASK);
|
|
|
|
|
|
|
|
raw_spin_lock_irq(&sem->wait_lock);
|
|
|
|
}
|
|
|
|
__set_current_state(TASK_RUNNING);
|
|
|
|
list_del(&waiter.list);
|
|
|
|
raw_spin_unlock_irq(&sem->wait_lock);
|
|
|
|
lockevent_inc(rwsem_wlock);
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
|
|
|
|
out_nolock:
|
|
|
|
__set_current_state(TASK_RUNNING);
|
|
|
|
raw_spin_lock_irq(&sem->wait_lock);
|
|
|
|
list_del(&waiter.list);
|
|
|
|
if (list_empty(&sem->wait_list))
|
|
|
|
atomic_long_andnot(RWSEM_FLAG_WAITERS, &sem->count);
|
|
|
|
else
|
2019-05-20 23:59:04 +03:00
|
|
|
rwsem_mark_wake(sem, RWSEM_WAKE_ANY, &wake_q);
|
2019-05-20 23:59:03 +03:00
|
|
|
raw_spin_unlock_irq(&sem->wait_lock);
|
|
|
|
wake_up_q(&wake_q);
|
|
|
|
lockevent_inc(rwsem_wlock_fail);
|
|
|
|
|
|
|
|
return ERR_PTR(-EINTR);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* handle waking up a waiter on the semaphore
|
|
|
|
* - up_read/up_write has decremented the active part of count if we come here
|
|
|
|
*/
|
2019-05-20 23:59:04 +03:00
|
|
|
static struct rw_semaphore *rwsem_wake(struct rw_semaphore *sem)
|
2019-05-20 23:59:03 +03:00
|
|
|
{
|
|
|
|
unsigned long flags;
|
|
|
|
DEFINE_WAKE_Q(wake_q);
|
|
|
|
|
|
|
|
raw_spin_lock_irqsave(&sem->wait_lock, flags);
|
|
|
|
|
|
|
|
if (!list_empty(&sem->wait_list))
|
2019-05-20 23:59:04 +03:00
|
|
|
rwsem_mark_wake(sem, RWSEM_WAKE_ANY, &wake_q);
|
2019-05-20 23:59:03 +03:00
|
|
|
|
|
|
|
raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
|
|
|
|
wake_up_q(&wake_q);
|
|
|
|
|
|
|
|
return sem;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* downgrade a write lock into a read lock
|
|
|
|
* - caller incremented waiting part of count and discovered it still negative
|
|
|
|
* - just wake up any readers at the front of the queue
|
|
|
|
*/
|
2019-05-20 23:59:04 +03:00
|
|
|
static struct rw_semaphore *rwsem_downgrade_wake(struct rw_semaphore *sem)
|
2019-05-20 23:59:03 +03:00
|
|
|
{
|
|
|
|
unsigned long flags;
|
|
|
|
DEFINE_WAKE_Q(wake_q);
|
|
|
|
|
|
|
|
raw_spin_lock_irqsave(&sem->wait_lock, flags);
|
|
|
|
|
|
|
|
if (!list_empty(&sem->wait_list))
|
2019-05-20 23:59:04 +03:00
|
|
|
rwsem_mark_wake(sem, RWSEM_WAKE_READ_OWNED, &wake_q);
|
2019-05-20 23:59:03 +03:00
|
|
|
|
|
|
|
raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
|
|
|
|
wake_up_q(&wake_q);
|
|
|
|
|
|
|
|
return sem;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* lock for reading
|
|
|
|
*/
|
|
|
|
inline void __down_read(struct rw_semaphore *sem)
|
|
|
|
{
|
|
|
|
if (unlikely(atomic_long_fetch_add_acquire(RWSEM_READER_BIAS,
|
|
|
|
&sem->count) & RWSEM_READ_FAILED_MASK)) {
|
2019-05-20 23:59:04 +03:00
|
|
|
rwsem_down_read_slowpath(sem, TASK_UNINTERRUPTIBLE);
|
2019-05-20 23:59:03 +03:00
|
|
|
DEBUG_RWSEMS_WARN_ON(!((unsigned long)sem->owner &
|
|
|
|
RWSEM_READER_OWNED), sem);
|
|
|
|
} else {
|
|
|
|
rwsem_set_reader_owned(sem);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int __down_read_killable(struct rw_semaphore *sem)
|
|
|
|
{
|
|
|
|
if (unlikely(atomic_long_fetch_add_acquire(RWSEM_READER_BIAS,
|
|
|
|
&sem->count) & RWSEM_READ_FAILED_MASK)) {
|
2019-05-20 23:59:04 +03:00
|
|
|
if (IS_ERR(rwsem_down_read_slowpath(sem, TASK_KILLABLE)))
|
2019-05-20 23:59:03 +03:00
|
|
|
return -EINTR;
|
|
|
|
DEBUG_RWSEMS_WARN_ON(!((unsigned long)sem->owner &
|
|
|
|
RWSEM_READER_OWNED), sem);
|
|
|
|
} else {
|
|
|
|
rwsem_set_reader_owned(sem);
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int __down_read_trylock(struct rw_semaphore *sem)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* Optimize for the case when the rwsem is not locked at all.
|
|
|
|
*/
|
|
|
|
long tmp = RWSEM_UNLOCKED_VALUE;
|
|
|
|
|
|
|
|
do {
|
|
|
|
if (atomic_long_try_cmpxchg_acquire(&sem->count, &tmp,
|
|
|
|
tmp + RWSEM_READER_BIAS)) {
|
|
|
|
rwsem_set_reader_owned(sem);
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
} while (!(tmp & RWSEM_READ_FAILED_MASK));
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* lock for writing
|
|
|
|
*/
|
|
|
|
static inline void __down_write(struct rw_semaphore *sem)
|
|
|
|
{
|
2019-05-20 23:59:04 +03:00
|
|
|
long tmp = RWSEM_UNLOCKED_VALUE;
|
|
|
|
|
|
|
|
if (unlikely(!atomic_long_try_cmpxchg_acquire(&sem->count, &tmp,
|
|
|
|
RWSEM_WRITER_LOCKED)))
|
|
|
|
rwsem_down_write_slowpath(sem, TASK_UNINTERRUPTIBLE);
|
2019-05-20 23:59:03 +03:00
|
|
|
rwsem_set_owner(sem);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int __down_write_killable(struct rw_semaphore *sem)
|
|
|
|
{
|
2019-05-20 23:59:04 +03:00
|
|
|
long tmp = RWSEM_UNLOCKED_VALUE;
|
|
|
|
|
|
|
|
if (unlikely(!atomic_long_try_cmpxchg_acquire(&sem->count, &tmp,
|
|
|
|
RWSEM_WRITER_LOCKED))) {
|
|
|
|
if (IS_ERR(rwsem_down_write_slowpath(sem, TASK_KILLABLE)))
|
2019-05-20 23:59:03 +03:00
|
|
|
return -EINTR;
|
2019-05-20 23:59:04 +03:00
|
|
|
}
|
2019-05-20 23:59:03 +03:00
|
|
|
rwsem_set_owner(sem);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int __down_write_trylock(struct rw_semaphore *sem)
|
|
|
|
{
|
2019-05-20 23:59:04 +03:00
|
|
|
long tmp = RWSEM_UNLOCKED_VALUE;
|
2019-05-20 23:59:03 +03:00
|
|
|
|
2019-05-20 23:59:04 +03:00
|
|
|
if (atomic_long_try_cmpxchg_acquire(&sem->count, &tmp,
|
|
|
|
RWSEM_WRITER_LOCKED)) {
|
2019-05-20 23:59:03 +03:00
|
|
|
rwsem_set_owner(sem);
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* unlock after reading
|
|
|
|
*/
|
|
|
|
inline void __up_read(struct rw_semaphore *sem)
|
|
|
|
{
|
|
|
|
long tmp;
|
|
|
|
|
2019-05-20 23:59:04 +03:00
|
|
|
DEBUG_RWSEMS_WARN_ON(!((unsigned long)sem->owner & RWSEM_READER_OWNED), sem);
|
2019-05-20 23:59:03 +03:00
|
|
|
rwsem_clear_reader_owned(sem);
|
|
|
|
tmp = atomic_long_add_return_release(-RWSEM_READER_BIAS, &sem->count);
|
2019-05-20 23:59:04 +03:00
|
|
|
if (unlikely((tmp & (RWSEM_LOCK_MASK|RWSEM_FLAG_WAITERS)) ==
|
|
|
|
RWSEM_FLAG_WAITERS))
|
2019-05-20 23:59:03 +03:00
|
|
|
rwsem_wake(sem);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* unlock after writing
|
|
|
|
*/
|
|
|
|
static inline void __up_write(struct rw_semaphore *sem)
|
|
|
|
{
|
2019-05-20 23:59:04 +03:00
|
|
|
long tmp;
|
|
|
|
|
2019-05-20 23:59:03 +03:00
|
|
|
DEBUG_RWSEMS_WARN_ON(sem->owner != current, sem);
|
|
|
|
rwsem_clear_owner(sem);
|
2019-05-20 23:59:04 +03:00
|
|
|
tmp = atomic_long_fetch_add_release(-RWSEM_WRITER_LOCKED, &sem->count);
|
|
|
|
if (unlikely(tmp & RWSEM_FLAG_WAITERS))
|
2019-05-20 23:59:03 +03:00
|
|
|
rwsem_wake(sem);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* downgrade write lock to read lock
|
|
|
|
*/
|
|
|
|
static inline void __downgrade_write(struct rw_semaphore *sem)
|
|
|
|
{
|
|
|
|
long tmp;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* When downgrading from exclusive to shared ownership,
|
|
|
|
* anything inside the write-locked region cannot leak
|
|
|
|
* into the read side. In contrast, anything in the
|
|
|
|
* read-locked region is ok to be re-ordered into the
|
|
|
|
* write side. As such, rely on RELEASE semantics.
|
|
|
|
*/
|
|
|
|
DEBUG_RWSEMS_WARN_ON(sem->owner != current, sem);
|
|
|
|
tmp = atomic_long_fetch_add_release(
|
|
|
|
-RWSEM_WRITER_LOCKED+RWSEM_READER_BIAS, &sem->count);
|
|
|
|
rwsem_set_reader_owned(sem);
|
|
|
|
if (tmp & RWSEM_FLAG_WAITERS)
|
|
|
|
rwsem_downgrade_wake(sem);
|
|
|
|
}
|
locking/rwsem: Support optimistic spinning
We have reached the point where our mutexes are quite fine tuned
for a number of situations. This includes the use of heuristics
and optimistic spinning, based on MCS locking techniques.
Exclusive ownership of read-write semaphores is, conceptually,
just about the same as mutexes, making them close cousins. To
this end we need to make them both perform similarly, and
right now, rwsems are simply not up to it. This was discovered
by both reverting commit 4fc3f1d6 (mm/rmap, migration: Make
rmap_walk_anon() and try_to_unmap_anon() more scalable) and
similarly, converting some other mutexes (ie: i_mmap_mutex) to
rwsems. This creates a situation where users have to choose
between a rwsem and mutex taking into account this important
performance difference. Specifically, the biggest difference between
both locks is that when we fail to acquire a mutex in the fastpath,
optimistic spinning comes into play and we can avoid a large
amount of unnecessary sleeping and the overhead of moving tasks in
and out of the wait queue. Rwsems do not have such logic.
This patch, based on the work from Tim Chen and me, adds support
for write-side optimistic spinning when the lock is contended.
It also includes support for the recently added cancelable MCS
locking for adaptive spinning. Note that this is only applicable
to the xadd method, and the spinlock rwsem variant remains intact.
Allowing optimistic spinning before putting the writer on the wait
queue reduces wait queue contention and provides a greater chance
for the rwsem to get acquired. With these changes, rwsem is on par
with mutex. The performance benefits can be seen on a number of
workloads. For instance, on an 8-socket, 80-core, 64-bit Westmere box,
aim7 shows the following improvements in throughput:
+--------------+---------------------+-----------------+
| Workload | throughput-increase | number of users |
+--------------+---------------------+-----------------+
| alltests | 20% | >1000 |
| custom | 27%, 60% | 10-100, >1000 |
| high_systime | 36%, 30% | >100, >1000 |
| shared | 58%, 29% | 10-100, >1000 |
+--------------+---------------------+-----------------+
There was also improvement on smaller systems, such as a quad-core
x86-64 laptop running a 30Gb PostgreSQL (pgbench) workload for up
to +60% in throughput for over 50 clients. Additionally, benefits
were also noticed in exim (mail server) workloads. Furthermore, no
performance regressions have been seen at all.
Based-on-work-from: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
[peterz: rej fixup due to comment patches, sched/rt.h header]
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Alex Shi <alex.shi@linaro.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Michel Lespinasse <walken@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: "Paul E.McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Jason Low <jason.low2@hp.com>
Cc: Aswin Chandramouleeswaran <aswin@hp.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: "Scott J Norton" <scott.norton@hp.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Chris Mason <clm@fb.com>
Cc: Josef Bacik <jbacik@fusionio.com>
Link: http://lkml.kernel.org/r/1399055055.6275.15.camel@buesod1.americas.hpqcorp.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-05-02 22:24:15 +04:00
|
|
|
|
2006-07-03 11:24:29 +04:00
|
|
|
/*
|
|
|
|
* lock for reading
|
|
|
|
*/
|
2007-12-18 17:21:13 +03:00
|
|
|
void __sched down_read(struct rw_semaphore *sem)
|
2006-07-03 11:24:29 +04:00
|
|
|
{
|
|
|
|
might_sleep();
|
|
|
|
rwsem_acquire_read(&sem->dep_map, 0, 0, _RET_IP_);
|
|
|
|
|
2007-07-19 12:48:58 +04:00
|
|
|
LOCK_CONTENDED(sem, __down_read_trylock, __down_read);
|
2006-07-03 11:24:29 +04:00
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(down_read);
|
|
|
|
|
2017-09-29 19:06:38 +03:00
|
|
|
int __sched down_read_killable(struct rw_semaphore *sem)
|
|
|
|
{
|
|
|
|
might_sleep();
|
|
|
|
rwsem_acquire_read(&sem->dep_map, 0, 0, _RET_IP_);
|
|
|
|
|
|
|
|
if (LOCK_CONTENDED_RETURN(sem, __down_read_trylock, __down_read_killable)) {
|
|
|
|
rwsem_release(&sem->dep_map, 1, _RET_IP_);
|
|
|
|
return -EINTR;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(down_read_killable);
|
|
|
|
|
2006-07-03 11:24:29 +04:00
|
|
|
/*
|
|
|
|
* trylock for reading -- returns 1 if successful, 0 if contention
|
|
|
|
*/
|
|
|
|
int down_read_trylock(struct rw_semaphore *sem)
|
|
|
|
{
|
|
|
|
int ret = __down_read_trylock(sem);
|
|
|
|
|
2019-04-04 20:43:11 +03:00
|
|
|
if (ret == 1)
|
2006-07-03 11:24:29 +04:00
|
|
|
rwsem_acquire_read(&sem->dep_map, 0, 1, _RET_IP_);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(down_read_trylock);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* lock for writing
|
|
|
|
*/
|
2007-12-18 17:21:13 +03:00
|
|
|
void __sched down_write(struct rw_semaphore *sem)
|
2006-07-03 11:24:29 +04:00
|
|
|
{
|
|
|
|
might_sleep();
|
|
|
|
rwsem_acquire(&sem->dep_map, 0, 0, _RET_IP_);
|
2007-07-19 12:48:58 +04:00
|
|
|
LOCK_CONTENDED(sem, __down_write_trylock, __down_write);
|
2006-07-03 11:24:29 +04:00
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(down_write);
|
|
|
|
|
2016-04-07 18:12:31 +03:00
|
|
|
/*
|
|
|
|
* lock for writing
|
|
|
|
*/
|
|
|
|
int __sched down_write_killable(struct rw_semaphore *sem)
|
|
|
|
{
|
|
|
|
might_sleep();
|
|
|
|
rwsem_acquire(&sem->dep_map, 0, 0, _RET_IP_);
|
|
|
|
|
2019-05-20 23:59:04 +03:00
|
|
|
if (LOCK_CONTENDED_RETURN(sem, __down_write_trylock,
|
|
|
|
__down_write_killable)) {
|
2016-04-07 18:12:31 +03:00
|
|
|
rwsem_release(&sem->dep_map, 1, _RET_IP_);
|
|
|
|
return -EINTR;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(down_write_killable);
|
|
|
|
|
2006-07-03 11:24:29 +04:00
|
|
|
/*
|
|
|
|
* trylock for writing -- returns 1 if successful, 0 if contention
|
|
|
|
*/
|
|
|
|
int down_write_trylock(struct rw_semaphore *sem)
|
|
|
|
{
|
|
|
|
int ret = __down_write_trylock(sem);
|
|
|
|
|
2019-04-04 20:43:11 +03:00
|
|
|
if (ret == 1)
|
2007-05-08 11:29:10 +04:00
|
|
|
rwsem_acquire(&sem->dep_map, 0, 1, _RET_IP_);
|
|
|
|
|
2006-07-03 11:24:29 +04:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(down_write_trylock);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* release a read lock
|
|
|
|
*/
|
|
|
|
void up_read(struct rw_semaphore *sem)
|
|
|
|
{
|
|
|
|
rwsem_release(&sem->dep_map, 1, _RET_IP_);
|
|
|
|
__up_read(sem);
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(up_read);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* release a write lock
|
|
|
|
*/
|
|
|
|
void up_write(struct rw_semaphore *sem)
|
|
|
|
{
|
|
|
|
rwsem_release(&sem->dep_map, 1, _RET_IP_);
|
|
|
|
__up_write(sem);
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(up_write);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* downgrade write lock to read lock
|
|
|
|
*/
|
|
|
|
void downgrade_write(struct rw_semaphore *sem)
|
|
|
|
{
|
2017-02-02 19:38:17 +03:00
|
|
|
lock_downgrade(&sem->dep_map, _RET_IP_);
|
2006-07-03 11:24:29 +04:00
|
|
|
__downgrade_write(sem);
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(downgrade_write);
|
2006-07-03 11:24:53 +04:00
|
|
|
|
|
|
|
#ifdef CONFIG_DEBUG_LOCK_ALLOC
|
|
|
|
|
|
|
|
void down_read_nested(struct rw_semaphore *sem, int subclass)
|
|
|
|
{
|
|
|
|
might_sleep();
|
|
|
|
rwsem_acquire_read(&sem->dep_map, subclass, 0, _RET_IP_);
|
2007-07-19 12:48:58 +04:00
|
|
|
LOCK_CONTENDED(sem, __down_read_trylock, __down_read);
|
2006-07-03 11:24:53 +04:00
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(down_read_nested);
|
|
|
|
|
2013-01-12 02:31:56 +04:00
|
|
|
void _down_write_nest_lock(struct rw_semaphore *sem, struct lockdep_map *nest)
|
|
|
|
{
|
|
|
|
might_sleep();
|
|
|
|
rwsem_acquire_nest(&sem->dep_map, 0, 0, nest, _RET_IP_);
|
|
|
|
LOCK_CONTENDED(sem, __down_write_trylock, __down_write);
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(_down_write_nest_lock);
|
|
|
|
|
2011-09-22 08:43:05 +04:00
|
|
|
void down_read_non_owner(struct rw_semaphore *sem)
|
|
|
|
{
|
|
|
|
might_sleep();
|
|
|
|
__down_read(sem);
|
2018-09-06 23:18:34 +03:00
|
|
|
__rwsem_set_reader_owned(sem, NULL);
|
2011-09-22 08:43:05 +04:00
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(down_read_non_owner);
|
|
|
|
|
2006-07-03 11:24:53 +04:00
|
|
|
void down_write_nested(struct rw_semaphore *sem, int subclass)
|
|
|
|
{
|
|
|
|
might_sleep();
|
|
|
|
rwsem_acquire(&sem->dep_map, subclass, 0, _RET_IP_);
|
2007-07-19 12:48:58 +04:00
|
|
|
LOCK_CONTENDED(sem, __down_write_trylock, __down_write);
|
2006-07-03 11:24:53 +04:00
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(down_write_nested);
|
|
|
|
|
2016-05-26 07:04:58 +03:00
|
|
|
int __sched down_write_killable_nested(struct rw_semaphore *sem, int subclass)
|
|
|
|
{
|
|
|
|
might_sleep();
|
|
|
|
rwsem_acquire(&sem->dep_map, subclass, 0, _RET_IP_);
|
|
|
|
|
2019-05-20 23:59:04 +03:00
|
|
|
if (LOCK_CONTENDED_RETURN(sem, __down_write_trylock,
|
|
|
|
__down_write_killable)) {
|
2016-05-26 07:04:58 +03:00
|
|
|
rwsem_release(&sem->dep_map, 1, _RET_IP_);
|
|
|
|
return -EINTR;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(down_write_killable_nested);
|
|
|
|
|
2011-09-22 08:43:05 +04:00
|
|
|
void up_read_non_owner(struct rw_semaphore *sem)
|
|
|
|
{
|
2019-04-04 20:43:15 +03:00
|
|
|
DEBUG_RWSEMS_WARN_ON(!((unsigned long)sem->owner & RWSEM_READER_OWNED),
|
|
|
|
sem);
|
2011-09-22 08:43:05 +04:00
|
|
|
__up_read(sem);
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(up_read_non_owner);
|
|
|
|
|
2006-07-03 11:24:53 +04:00
|
|
|
#endif
|