License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
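For example, a single comment line at the top of a source file:

	// SPDX-License-Identifier: GPL-2.0

stands in for the dozen or so lines of boilerplate that would otherwise
open with "This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License version 2...".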
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and where references to a
license had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX license identifier should be applied
to a file was done in a spreadsheet of side-by-side results of the output
of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis, with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
  lines of source.
- Files that already had some variant of a license header in them (even
  if <5 lines) were included.
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when neither scanner could find any license traces, the file was
  considered to have no license information in it, and the top-level
  COPYING file license was applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
  of the */uapi/* ones, it was annotated with the Linux-syscall-note if
  any GPL-family license was found in the file, or if it had no licensing
  in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
  the file was flagged for further research and revisited later.
In total, Kate, Philippe, and Thomas logged over 70 hours of manual
review of the spreadsheet to determine the SPDX license identifiers to
apply to the source files, with confirmation in some cases by lawyers
working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based in part on an older version of FOSSology, so
the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to
have copy/paste license identifier errors, and they have been fixed to
reflect the correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual inspection
and review of the 12,461 files patched in the initial patch version,
with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the .csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between headers and .c source files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
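For reference, the comment style the script emits differs by file type,
matching the kernel's convention for SPDX tags:

	// SPDX-License-Identifier: GPL-2.0	(.c source files)
	/* SPDX-License-Identifier: GPL-2.0 */	(headers and other files)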
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
// SPDX-License-Identifier: GPL-2.0
/*
 * Functions related to softirq rq completions
 */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/interrupt.h>
#include <linux/cpu.h>
#include <linux/sched.h>
#include <linux/sched/topology.h>

#include "blk.h"

static DEFINE_PER_CPU(struct list_head, blk_cpu_done);

/*
 * Softirq action handler - move entries to local list and loop over them
 * while passing them to the queue registered handler.
 */
static __latent_entropy void blk_done_softirq(struct softirq_action *h)
{
	struct list_head *cpu_list, local_list;

	local_irq_disable();
block: Replace __get_cpu_var uses
__get_cpu_var() is used for multiple purposes in the kernel source. One of
them is address calculation via the form &__get_cpu_var(x). This calculates
the address of the current processor's instance of the percpu variable
based on an offset.
Other use cases are storing data to and retrieving data from the current
processor's percpu area. __get_cpu_var() can be used as an lvalue when
writing data or on the right side of an assignment.
__get_cpu_var() is defined as:
	#define __get_cpu_var(var) (*this_cpu_ptr(&(var)))
__get_cpu_var() only ever performs an address determination. However, store
and retrieve operations could use a segment prefix (or a global register on
other platforms) to avoid the address calculation.
this_cpu_write() and this_cpu_read() can directly take an offset into a
percpu area and use optimized assembly code to read and write per cpu
variables.
This patch converts __get_cpu_var into either an explicit address
calculation using this_cpu_ptr() or into a use of this_cpu operations that
use the offset. Thereby address calculations are avoided and fewer
registers are used when code is generated.
At the end of the patch set all uses of __get_cpu_var have been removed, so
the macro is removed too.
The patch set includes passes over all arches as well. Once these operations
are used throughout, specialized macros can be defined in non-x86 arches
as well in order to optimize per cpu access, e.g. by using a global
register that may be set to the per cpu base.
Transformations done to __get_cpu_var():
1. Determine the address of the percpu instance of the current processor.
	DEFINE_PER_CPU(int, y);
	int *x = &__get_cpu_var(y);
   Converts to:
	int *x = this_cpu_ptr(&y);
2. Same as #1, but this time an array structure is involved.
	DEFINE_PER_CPU(int, y[20]);
	int *x = __get_cpu_var(y);
   Converts to:
	int *x = this_cpu_ptr(y);
3. Retrieve the content of the current processor's instance of a per cpu
   variable.
	DEFINE_PER_CPU(int, y);
	int x = __get_cpu_var(y);
   Converts to:
	int x = __this_cpu_read(y);
4. Retrieve the content of a percpu struct.
	DEFINE_PER_CPU(struct mystruct, y);
	struct mystruct x = __get_cpu_var(y);
   Converts to:
	memcpy(&x, this_cpu_ptr(&y), sizeof(x));
5. Assignment to a per cpu variable.
	DEFINE_PER_CPU(int, y);
	__get_cpu_var(y) = x;
   Converts to:
	this_cpu_write(y, x);
6. Increment/decrement etc. of a per cpu variable.
	DEFINE_PER_CPU(int, y);
	__get_cpu_var(y)++;
   Converts to:
	this_cpu_inc(y);
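As a rough illustration of the replacement API (a minimal sketch using a
hypothetical percpu counter, not code from this patch set):

	#include <linux/percpu.h>

	/* Hypothetical per-CPU counter, for illustration only. */
	static DEFINE_PER_CPU(unsigned long, hits);

	static void record_hit(void)
	{
		/* Was: __get_cpu_var(hits)++; which computed an address
		 * first. this_cpu_inc() lets x86 emit one segment-prefixed
		 * increment with no separate address calculation. */
		this_cpu_inc(hits);
	}

	static unsigned long *hits_slot(void)
	{
		/* Explicit address of this CPU's instance, as in
		 * transformation #1 above. */
		return this_cpu_ptr(&hits);
	}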
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
	cpu_list = this_cpu_ptr(&blk_cpu_done);
	list_replace_init(cpu_list, &local_list);
	local_irq_enable();

	while (!list_empty(&local_list)) {
		struct request *rq;

		rq = list_entry(local_list.next, struct request, ipi_list);
		list_del_init(&rq->ipi_list);
		rq->q->mq_ops->complete(rq);
	}
}

#ifdef CONFIG_SMP
static void trigger_softirq(void *data)
{
	struct request *rq = data;
	struct list_head *list;

	list = this_cpu_ptr(&blk_cpu_done);
	list_add_tail(&rq->ipi_list, list);

	if (list->next == &rq->ipi_list)
		raise_softirq_irqoff(BLOCK_SOFTIRQ);
}

/*
 * Setup and invoke a run of 'trigger_softirq' on the given cpu.
 */
static int raise_blk_irq(int cpu, struct request *rq)
{
	if (cpu_online(cpu)) {
smp: Avoid using two cache lines for struct call_single_data
struct call_single_data is used in IPIs to transfer information between
CPUs. Its size is bigger than sizeof(unsigned long) and less than the
cache line size. Currently it is not allocated with any explicit alignment
requirement. This makes it possible for an allocated call_single_data to
cross two cache lines, which doubles the number of cache lines that need
to be transferred among CPUs.
This can be fixed by requiring call_single_data to be aligned to the size
of call_single_data. Currently the size of call_single_data is a power of
2. If we add new fields to call_single_data, we may need to add padding to
make sure the size of the new definition is a power of 2 as well.
Fortunately, this is enforced by GCC, which will report bad sizes.
To set the alignment requirement of call_single_data to the size of
call_single_data, a struct definition and a typedef are used, as in the
sketch below.
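A minimal sketch of that pattern (the field list here is illustrative; the
real definition lives in include/linux/smp.h):

	struct __call_single_data {
		struct llist_node llist;
		smp_call_func_t func;
		void *info;
		unsigned int flags;
	};

	/* Aligning the object to its own power-of-2 size guarantees that a
	 * single instance can never straddle a cache-line boundary. */
	typedef struct __call_single_data call_single_data_t
		__aligned(sizeof(struct __call_single_data));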
To test the effect of the patch, I used the vm-scalability multiple
thread swap test case (swap-w-seq-mt). The test will create multiple
threads and each thread will eat memory until all RAM and part of swap
is used, so that huge number of IPIs are triggered when unmapping
memory. In the test, the throughput of memory writing improves ~5%
compared with misaligned call_single_data, because of faster IPIs.
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Huang, Ying <ying.huang@intel.com>
[ Add call_single_data_t and align with size of call_single_data. ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Aaron Lu <aaron.lu@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/87bmnqd6lz.fsf@yhuang-mobile.sh.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
		call_single_data_t *data = &rq->csd;

		data->func = trigger_softirq;
		data->info = rq;
		data->flags = 0;

		smp_call_function_single_async(cpu, data);
		return 0;
	}

	return 1;
}
#else /* CONFIG_SMP */
static int raise_blk_irq(int cpu, struct request *rq)
{
	return 1;
}
#endif

static int blk_softirq_cpu_dead(unsigned int cpu)
{
	/*
	 * If a CPU goes away, splice its entries to the current CPU
	 * and trigger a run of the softirq
	 */
	local_irq_disable();
	list_splice_init(&per_cpu(blk_cpu_done, cpu),
			 this_cpu_ptr(&blk_cpu_done));
	raise_softirq_irqoff(BLOCK_SOFTIRQ);
	local_irq_enable();

	return 0;
}

void __blk_complete_request(struct request *req)
{
	struct request_queue *q = req->q;
	int cpu, ccpu = req->mq_ctx->cpu;
	unsigned long flags;
	bool shared = false;

	BUG_ON(!q->mq_ops->complete);

	local_irq_save(flags);
	cpu = smp_processor_id();

	/*
	 * Select completion CPU
	 */
	if (test_bit(QUEUE_FLAG_SAME_COMP, &q->queue_flags) && ccpu != -1) {
		if (!test_bit(QUEUE_FLAG_SAME_FORCE, &q->queue_flags))
			shared = cpus_share_cache(cpu, ccpu);
	} else
		ccpu = cpu;

block: improve rq_affinity placement
This patch reverts commit 35ae66e0a09ab70ed ("block: Make rq_affinity = 1
work as expected"). The purpose is to avoid an unnecessary IPI.
Let's take an example. My test box has cpu 0-7, one socket. Say a request
is added from CPU 1 and blk_complete_request() occurs at CPU 7. Without the
reverted patch, the softirq will be done at CPU 7. With it, an IPI will be
directed to CPU 0, and the softirq will be done at CPU 0. In this case,
doing the softirq at CPU 0 or CPU 7 makes no difference from a
cache-sharing point of view, and we can avoid an IPI by doing it on CPU 7.
An immediate concern is that this is just like QUEUE_FLAG_SAME_FORCE, but
it actually is not. blk_complete_request() runs in an interrupt handler,
and currently the I/O controller doesn't support multiple interrupts (I
checked several LSI cards and AHCI), so only one CPU can run
blk_complete_request(). This is still quite different from
QUEUE_FLAG_SAME_FORCE.
Since only one CPU runs the softirq, the only difference with the patch
below is that the softirq does not always run at the first CPU of a group.
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
	/*
	 * If current CPU and requested CPU share a cache, run the softirq on
	 * the current CPU. One might worry this is just like
	 * QUEUE_FLAG_SAME_FORCE, but it actually is not. blk_complete_request()
	 * runs in an interrupt handler, and currently the I/O controller
	 * doesn't support multiple interrupts, so the current CPU is
	 * effectively unique. This avoids sending an IPI from the current
	 * CPU to the first CPU of a group.
	 */
	if (ccpu == cpu || shared) {
		struct list_head *list;
do_local:
		list = this_cpu_ptr(&blk_cpu_done);
		list_add_tail(&req->ipi_list, list);

		/*
		 * if the list only contains our just added request,
		 * signal a raise of the softirq. If there are already
		 * entries there, someone already raised the irq but it
		 * hasn't run yet.
		 */
		if (list->next == &req->ipi_list)
			raise_softirq_irqoff(BLOCK_SOFTIRQ);
	} else if (raise_blk_irq(ccpu, req))
		goto do_local;

	local_irq_restore(flags);
}

static __init int blk_softirq_init(void)
{
	int i;

	for_each_possible_cpu(i)
		INIT_LIST_HEAD(&per_cpu(blk_cpu_done, i));

	open_softirq(BLOCK_SOFTIRQ, blk_done_softirq);
	cpuhp_setup_state_nocalls(CPUHP_BLOCK_SOFTIRQ_DEAD,
				  "block/softirq:dead", NULL,
				  blk_softirq_cpu_dead);
	return 0;
}
subsys_initcall(blk_softirq_init);