License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and where references to a
license had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier to apply to
a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- the file had to be a source code file,
- Make and config files were included as candidates if they contained >5
lines of source,
- the file already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when neither scanner could find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it and was one
of the */uapi/* ones, the Linux-syscall-note was added if any GPL-family
license was found in the file, or if it had no licensing in
it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research, to be revisited later.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some cases
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual
inspection and review of the 12,461 files patched in the initial version
earlier this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
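For reference, the two comment styles the script distinguishes look like this (a sketch of the kernel's SPDX placement convention; the file names are illustrative):

/* first line of a header file (e.g. include/linux/example.h): */
/* SPDX-License-Identifier: GPL-2.0 */

// first line of a .c source file:
// SPDX-License-Identifier: GPL-2.0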
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_LIST_H
#define _LINUX_LIST_H

#include <linux/types.h>
#include <linux/stddef.h>
#include <linux/poison.h>
#include <linux/const.h>
#include <linux/kernel.h>

/*
 * Simple doubly linked list implementation.
 *
 * Some of the internal functions ("__xxx") are useful when
 * manipulating whole lists rather than single entries, as
 * sometimes we already know the next/prev entries and we can
 * generate better code by using them directly rather than
 * using the generic single-entry routines.
 */

#define LIST_HEAD_INIT(name) { &(name), &(name) }

#define LIST_HEAD(name) \
        struct list_head name = LIST_HEAD_INIT(name)

/**
 * INIT_LIST_HEAD - Initialize a list_head structure
 * @list: list_head structure to be initialized.
 *
 * Initializes the list_head to point to itself. If it is a list header,
 * the result is an empty list.
 */
static inline void INIT_LIST_HEAD(struct list_head *list)
{
        WRITE_ONCE(list->next, list);
        list->prev = list;
}
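A usage sketch (struct my_ctx and my_ctx_init() are hypothetical names, not part of list.h): a file-scope head is declared and initialized in one step with LIST_HEAD(), while a head embedded in another object is initialized at runtime with INIT_LIST_HEAD().

struct my_ctx {
        struct list_head items;         /* head of a list of items */
};

static LIST_HEAD(global_items);         /* statically initialized, empty head */

static void my_ctx_init(struct my_ctx *ctx)
{
        INIT_LIST_HEAD(&ctx->items);    /* now empty: head points at itself */
}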
#ifdef CONFIG_DEBUG_LIST
extern bool __list_add_valid(struct list_head *new,
                             struct list_head *prev,
                             struct list_head *next);
extern bool __list_del_entry_valid(struct list_head *entry);
#else
static inline bool __list_add_valid(struct list_head *new,
                                    struct list_head *prev,
                                    struct list_head *next)
{
        return true;
}
static inline bool __list_del_entry_valid(struct list_head *entry)
{
        return true;
}
#endif

/*
 * Insert a new entry between two known consecutive entries.
 *
 * This is only for internal list manipulation where we know
 * the prev/next entries already!
 */
static inline void __list_add(struct list_head *new,
                              struct list_head *prev,
                              struct list_head *next)
{
        if (!__list_add_valid(new, prev, next))
                return;

        next->prev = new;
        new->next = next;
        new->prev = prev;
        WRITE_ONCE(prev->next, new);
}

/**
 * list_add - add a new entry
 * @new: new entry to be added
 * @head: list head to add it after
 *
 * Insert a new entry after the specified head.
 * This is good for implementing stacks.
 */
static inline void list_add(struct list_head *new, struct list_head *head)
{
        __list_add(new, head, head->next);
}

/**
 * list_add_tail - add a new entry
 * @new: new entry to be added
 * @head: list head to add it before
 *
 * Insert a new entry before the specified head.
 * This is useful for implementing queues.
 */
static inline void list_add_tail(struct list_head *new, struct list_head *head)
{
        __list_add(new, head->prev, head);
}
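A short sketch of the stack/queue distinction noted in the comments above (struct my_item and the 'pending' head are hypothetical): list_add() pushes at the front, list_add_tail() appends at the back.

struct my_item {
        int value;
        struct list_head node;          /* linkage into 'pending' */
};

static LIST_HEAD(pending);

static void push_item(struct my_item *item)
{
        list_add(&item->node, &pending);        /* LIFO: newest first */
}

static void queue_item(struct my_item *item)
{
        list_add_tail(&item->node, &pending);   /* FIFO: oldest first */
}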
/*
 * Delete a list entry by making the prev/next entries
 * point to each other.
 *
 * This is only for internal list manipulation where we know
 * the prev/next entries already!
 */
static inline void __list_del(struct list_head *prev, struct list_head *next)
{
        next->prev = prev;
        WRITE_ONCE(prev->next, next);
}

/*
 * Delete a list entry and clear the 'prev' pointer.
 *
 * This is a special-purpose list clearing method used in the networking code
 * for lists allocated as per-cpu, where we don't want to incur the extra
 * WRITE_ONCE() overhead of a regular list_del_init(). The code that uses this
 * needs to check the node 'prev' pointer instead of calling list_empty().
 */
static inline void __list_del_clearprev(struct list_head *entry)
{
        __list_del(entry->prev, entry->next);
        entry->prev = NULL;
}

static inline void __list_del_entry(struct list_head *entry)
{
        if (!__list_del_entry_valid(entry))
                return;

        __list_del(entry->prev, entry->next);
}

/**
 * list_del - deletes entry from list.
 * @entry: the element to delete from the list.
 * Note: list_empty() on entry does not return true after this, the entry is
 * in an undefined state.
 */
static inline void list_del(struct list_head *entry)
{
        __list_del_entry(entry);
        entry->next = LIST_POISON1;
        entry->prev = LIST_POISON2;
}

/**
 * list_replace - replace old entry by new one
 * @old : the element to be replaced
 * @new : the new element to insert
 *
 * If @old was empty, it will be overwritten.
 */
static inline void list_replace(struct list_head *old,
                                struct list_head *new)
{
        new->next = old->next;
        new->next->prev = new;
        new->prev = old->prev;
        new->prev->next = new;
}

/**
 * list_replace_init - replace old entry by new one and initialize the old one
 * @old : the element to be replaced
 * @new : the new element to insert
 *
 * If @old was empty, it will be overwritten.
 */
static inline void list_replace_init(struct list_head *old,
                                     struct list_head *new)
{
        list_replace(old, new);
        INIT_LIST_HEAD(old);
}
mm: shuffle initial free memory to improve memory-side-cache utilization
Patch series "mm: Randomize free memory", v10.
This patch (of 3):
Randomization of the page allocator improves the average utilization of
a direct-mapped memory-side-cache. Memory side caching is a platform
capability that Linux has been previously exposed to in HPC
(high-performance computing) environments on specialty platforms. In
that instance it was a smaller pool of high-bandwidth-memory relative to
higher-capacity / lower-bandwidth DRAM. Now, this capability is going
to be found on general purpose server platforms where DRAM is a cache in
front of higher latency persistent memory [1].
Robert offered an explanation of the state of the art of Linux
interactions with memory-side-caches [2], and I copy it here:
It's been a problem in the HPC space:
http://www.nersc.gov/research-and-development/knl-cache-mode-performance-coe/
A kernel module called zonesort is available to try to help:
https://software.intel.com/en-us/articles/xeon-phi-software
and this abandoned patch series proposed that for the kernel:
https://lkml.kernel.org/r/20170823100205.17311-1-lukasz.daniluk@intel.com
Dan's patch series doesn't attempt to ensure buffers won't conflict, but
it does reduce the chance that they will. This will make performance
more consistent, albeit slower than "optimal" (which is near impossible
to attain in a general-purpose kernel). That's better than forcing
users to deploy remedies like:
"To eliminate this gradual degradation, we have added a Stream
measurement to the Node Health Check that follows each job;
nodes are rebooted whenever their measured memory bandwidth
falls below 300 GB/s."
A replacement for zonesort was merged upstream in commit cc9aec03e58f
("x86/numa_emulation: Introduce uniform split capability"). With this
numa_emulation capability, memory can be split into cache sized
("near-memory" sized) numa nodes. A bind operation to such a node, and
disabling workloads on other nodes, enables full cache performance.
However, once the workload exceeds the cache size then cache conflicts
are unavoidable. While HPC environments might be able to tolerate
time-scheduling of cache sized workloads, for general purpose server
platforms, the oversubscribed cache case will be the common case.
The worst case scenario is that a server system owner benchmarks a
workload at boot with an un-contended cache only to see that performance
degrade over time, even below the average cache performance due to
excessive conflicts. Randomization clips the peaks and fills in the
valleys of cache utilization to yield steady average performance.
Here are some performance impact details of the patches:
1/ An Intel internal synthetic memory bandwidth measurement tool saw a
3X speedup in a contrived case that tries to force cache conflicts.
The contrived case used the numa_emulation capability to force an
instance of the benchmark to be run in two of the near-memory sized
numa nodes. If both instances were placed on the same emulated node they
would fit and cause zero conflicts. On separate emulated nodes
without randomization they underutilized the cache and conflicted
unnecessarily due to the in-order allocation per node.
2/ A well known Java server application benchmark was run with a heap
size that exceeded cache size by 3X. The cache conflict rate was 8%
for the first run and degraded to 21% after page allocator aging. With
randomization enabled the rate levelled out at 11%.
3/ A MongoDB workload did not observe a measurable difference in
cache-conflict rates, but the overall throughput dropped by 7% with
randomization in one case.
4/ Mel Gorman ran his suite of performance workloads with randomization
enabled on platforms without a memory-side-cache and saw a mix of some
improvements and some losses [3].
While there is potentially significant improvement for applications that
depend on low latency access across a wide working-set, the performance
may be negligible to negative for other workloads. For this reason the
shuffle capability defaults to off unless a direct-mapped
memory-side-cache is detected. Even then, the page_alloc.shuffle=0
parameter can be specified to disable the randomization on those systems.
Outside of memory-side-cache utilization concerns, there is potentially
a security benefit from randomization. Some data exfiltration and
return-oriented-programming attacks rely on the ability to infer the
location of sensitive data objects. The kernel page allocator, especially
early in system boot, has predictable first-in-first out behavior for
physical pages. Pages are freed in physical address order when first
onlined.
Quoting Kees:
"While we already have a base-address randomization
(CONFIG_RANDOMIZE_MEMORY), attacks against the same hardware and
memory layouts would certainly be using the predictability of
allocation ordering (i.e. for attacks where the base address isn't
important: only the relative positions between allocated memory).
This is common in lots of heap-style attacks. They try to gain
control over ordering by spraying allocations, etc.
I'd really like to see this because it gives us something similar
to CONFIG_SLAB_FREELIST_RANDOM but for the page allocator."
While SLAB_FREELIST_RANDOM reduces the predictability of some local slab
caches, it leaves the vast bulk of memory predictably allocated in order.
However, it should be noted that the concrete security benefits are hard to
quantify, and no known CVE is mitigated by this randomization.
Introduce shuffle_free_memory(), and its helper shuffle_zone(), to perform
a Fisher-Yates shuffle of the page allocator 'free_area' lists when they
are initially populated with free memory at boot and at hotplug time. Do
this based on either the presence of a page_alloc.shuffle=Y command line
parameter, or autodetection of a memory-side-cache (to be added in a
follow-on patch).
The shuffling is done in terms of CONFIG_SHUFFLE_PAGE_ORDER sized free
pages, where the default CONFIG_SHUFFLE_PAGE_ORDER is MAX_ORDER-1, i.e. 10
(4MB); this trades off randomization granularity for time spent shuffling.
MAX_ORDER-1 was chosen to be minimally invasive to the page allocator
while still showing memory-side cache behavior improvements, and the
expectation is that the security implications of finer granularity
randomization are mitigated by CONFIG_SLAB_FREELIST_RANDOM. The
performance impact of the shuffling appears to be in the noise compared to
other memory initialization work.
This initial randomization can be undone over time so a follow-on patch is
introduced to inject entropy on page free decisions. It is reasonable to
ask if the page free entropy is sufficient, but it is not enough due to
the in-order initial freeing of pages. At the start of that process,
putting page1 in front of or behind page0 still keeps them close together;
page2 is still near page1 and has a high chance of being adjacent. As
more pages are added ordering diversity improves, but there is still high
page locality for the low address pages and this leads to no significant
impact to the cache conflict rate.
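For readers unfamiliar with it, the Fisher-Yates step itself is small. This is a minimal userspace sketch of the idea only, not the kernel's shuffle_zone() code; it uses rand() purely for illustration where the kernel would draw from a proper entropy source.

#include <stdlib.h>

/* Fisher-Yates: walk backwards, swapping each slot with a uniformly
 * chosen slot at or below it; every permutation ends up equally likely
 * (given a uniform random source). */
static void shuffle(void **slots, size_t n)
{
        size_t i, j;
        void *tmp;

        for (i = n - 1; i > 0; i--) {
                j = (size_t)rand() % (i + 1);   /* illustrative only */
                tmp = slots[i];
                slots[i] = slots[j];
                slots[j] = tmp;
        }
}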
[1]: https://itpeernetwork.intel.com/intel-optane-dc-persistent-memory-operating-modes/
[2]: https://lkml.kernel.org/r/AT5PR8401MB1169D656C8B5E121752FC0F8AB120@AT5PR8401MB1169.NAMPRD84.PROD.OUTLOOK.COM
[3]: https://lkml.org/lkml/2018/10/12/309
[dan.j.williams@intel.com: fix shuffle enable]
Link: http://lkml.kernel.org/r/154943713038.3858443.4125180191382062871.stgit@dwillia2-desk3.amr.corp.intel.com
[cai@lca.pw: fix SHUFFLE_PAGE_ALLOCATOR help texts]
Link: http://lkml.kernel.org/r/20190425201300.75650-1-cai@lca.pw
Link: http://lkml.kernel.org/r/154899811738.3165233.12325692939590944259.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Qian Cai <cai@lca.pw>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Robert Elliott <elliott@hpe.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
/**
 * list_swap - replace entry1 with entry2 and re-add entry1 at entry2's position
 * @entry1: the location to place entry2
 * @entry2: the location to place entry1
 */
static inline void list_swap(struct list_head *entry1,
                             struct list_head *entry2)
{
        struct list_head *pos = entry2->prev;

        list_del(entry2);
        list_replace(entry1, entry2);
        if (pos == entry1)
                pos = entry2;
        list_add(entry1, pos);
}

/**
 * list_del_init - deletes entry from list and reinitialize it.
 * @entry: the element to delete from the list.
 */
static inline void list_del_init(struct list_head *entry)
{
        __list_del_entry(entry);
        INIT_LIST_HEAD(entry);
}

/**
 * list_move - delete from one list and add as another's head
 * @list: the entry to move
 * @head: the head that will precede our entry
 */
static inline void list_move(struct list_head *list, struct list_head *head)
{
        __list_del_entry(list);
        list_add(list, head);
}

/**
 * list_move_tail - delete from one list and add as another's tail
 * @list: the entry to move
 * @head: the head that will follow our entry
 */
static inline void list_move_tail(struct list_head *list,
                                  struct list_head *head)
{
        __list_del_entry(list);
        list_add_tail(list, head);
}

/**
 * list_bulk_move_tail - move a subsection of a list to its tail
 * @head: the head that will follow our entry
 * @first: first entry to move
 * @last: last entry to move, can be the same as first
 *
 * Move all entries between @first and including @last before @head.
 * All three entries must belong to the same linked list.
 */
static inline void list_bulk_move_tail(struct list_head *head,
                                       struct list_head *first,
                                       struct list_head *last)
{
        first->prev->next = last->next;
        last->next->prev = first->prev;

        head->prev->next = first;
        first->prev = head->prev;

        last->next = head;
        head->prev = last;
}

/**
 * list_is_first -- tests whether @list is the first entry in list @head
 * @list: the entry to test
 * @head: the head of the list
 */
static inline int list_is_first(const struct list_head *list,
                                const struct list_head *head)
{
        return list->prev == head;
}

/**
 * list_is_last - tests whether @list is the last entry in list @head
 * @list: the entry to test
 * @head: the head of the list
 */
static inline int list_is_last(const struct list_head *list,
                               const struct list_head *head)
{
        return list->next == head;
}

/**
 * list_empty - tests whether a list is empty
 * @head: the list to test.
 */
static inline int list_empty(const struct list_head *head)
{
        return READ_ONCE(head->next) == head;
}

/**
 * list_del_init_careful - deletes entry from list and reinitialize it.
 * @entry: the element to delete from the list.
 *
 * This is the same as list_del_init(), except designed to be used
 * together with list_empty_careful() in a way to guarantee ordering
 * of other memory operations.
 *
 * Any memory operations done before a list_del_init_careful() are
 * guaranteed to be visible after a list_empty_careful() test.
 */
static inline void list_del_init_careful(struct list_head *entry)
{
        __list_del_entry(entry);
        entry->prev = entry;
        smp_store_release(&entry->next, entry);
}

/**
 * list_empty_careful - tests whether a list is empty and not being modified
 * @head: the list to test
 *
 * Description:
 * tests whether a list is empty _and_ checks that no other CPU might be
 * in the process of modifying either member (next or prev)
 *
 * NOTE: using list_empty_careful() without synchronization
 * can only be safe if the only activity that can happen
 * to the list entry is list_del_init(). Eg. it cannot be used
 * if another CPU could re-list_add() it.
 */
static inline int list_empty_careful(const struct list_head *head)
{
        struct list_head *next = smp_load_acquire(&head->next);
        return (next == head) && (next == head->prev);
}

/**
 * list_rotate_left - rotate the list to the left
 * @head: the head of the list
 */
static inline void list_rotate_left(struct list_head *head)
{
        struct list_head *first;

        if (!list_empty(head)) {
                first = head->next;
                list_move_tail(first, head);
        }
}

/**
 * list_rotate_to_front() - Rotate list to specific item.
 * @list: The desired new front of the list.
 * @head: The head of the list.
 *
 * Rotates list so that @list becomes the new front of the list.
 */
static inline void list_rotate_to_front(struct list_head *list,
                                        struct list_head *head)
{
        /*
         * Deletes the list head from the list denoted by @head and
         * places it as the tail of @list, this effectively rotates the
         * list so that @list is at the front.
         */
        list_move_tail(head, list);
}

/**
 * list_is_singular - tests whether a list has just one entry.
 * @head: the list to test.
 */
static inline int list_is_singular(const struct list_head *head)
{
        return !list_empty(head) && (head->next == head->prev);
}

static inline void __list_cut_position(struct list_head *list,
                struct list_head *head, struct list_head *entry)
{
        struct list_head *new_first = entry->next;
        list->next = head->next;
        list->next->prev = list;
        list->prev = entry;
        entry->next = list;
        head->next = new_first;
        new_first->prev = head;
}

/**
 * list_cut_position - cut a list into two
 * @list: a new list to add all removed entries
 * @head: a list with entries
 * @entry: an entry within head, could be the head itself
 *      and if so we won't cut the list
 *
 * This helper moves the initial part of @head, up to and
 * including @entry, from @head to @list. You should
 * pass on @entry an element you know is on @head. @list
 * should be an empty list or a list you do not care about
 * losing its data.
 *
 */
static inline void list_cut_position(struct list_head *list,
                struct list_head *head, struct list_head *entry)
{
        if (list_empty(head))
                return;
        if (list_is_singular(head) &&
                (head->next != entry && head != entry))
                return;
        if (entry == head)
                INIT_LIST_HEAD(list);
        else
                __list_cut_position(list, head, entry);
}

/**
 * list_cut_before - cut a list into two, before given entry
 * @list: a new list to add all removed entries
 * @head: a list with entries
 * @entry: an entry within head, could be the head itself
 *
 * This helper moves the initial part of @head, up to but
 * excluding @entry, from @head to @list. You should pass
 * in @entry an element you know is on @head. @list should
 * be an empty list or a list you do not care about losing
 * its data.
 * If @entry == @head, all entries on @head are moved to
 * @list.
 */
static inline void list_cut_before(struct list_head *list,
                                   struct list_head *head,
                                   struct list_head *entry)
{
        if (head->next == entry) {
                INIT_LIST_HEAD(list);
                return;
        }
        list->next = head->next;
        list->next->prev = list;
        list->prev = entry->prev;
        list->prev->next = list;
        head->next = entry;
        entry->prev = head;
}

static inline void __list_splice(const struct list_head *list,
                                 struct list_head *prev,
                                 struct list_head *next)
{
        struct list_head *first = list->next;
        struct list_head *last = list->prev;

        first->prev = prev;
        prev->next = first;

        last->next = next;
        next->prev = last;
}

/**
 * list_splice - join two lists, this is designed for stacks
 * @list: the new list to add.
 * @head: the place to add it in the first list.
 */
static inline void list_splice(const struct list_head *list,
                               struct list_head *head)
{
        if (!list_empty(list))
                __list_splice(list, head, head->next);
}

/**
 * list_splice_tail - join two lists, each list being a queue
 * @list: the new list to add.
 * @head: the place to add it in the first list.
 */
static inline void list_splice_tail(struct list_head *list,
                                    struct list_head *head)
{
        if (!list_empty(list))
                __list_splice(list, head->prev, head);
}

/**
 * list_splice_init - join two lists and reinitialise the emptied list.
 * @list: the new list to add.
 * @head: the place to add it in the first list.
 *
 * The list at @list is reinitialised
 */
static inline void list_splice_init(struct list_head *list,
                                    struct list_head *head)
{
        if (!list_empty(list)) {
                __list_splice(list, head, head->next);
                INIT_LIST_HEAD(list);
        }
}
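A common pattern built on list_splice_init() (all names here hypothetical) is draining a shared list onto a private one under a lock, then walking the private copy without the lock:

static LIST_HEAD(incoming);
static DEFINE_SPINLOCK(incoming_lock);

static void drain_incoming(void)
{
        LIST_HEAD(todo);                        /* local, empty head */

        spin_lock(&incoming_lock);
        list_splice_init(&incoming, &todo);     /* move everything; 'incoming' is left empty */
        spin_unlock(&incoming_lock);

        /* 'todo' can now be processed without holding the lock */
}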
/**
 * list_splice_tail_init - join two lists and reinitialise the emptied list
 * @list: the new list to add.
 * @head: the place to add it in the first list.
 *
 * Each of the lists is a queue.
 * The list at @list is reinitialised
 */
static inline void list_splice_tail_init(struct list_head *list,
                                         struct list_head *head)
{
        if (!list_empty(list)) {
                __list_splice(list, head->prev, head);
                INIT_LIST_HEAD(list);
        }
}

/**
 * list_entry - get the struct for this entry
 * @ptr: the &struct list_head pointer.
 * @type: the type of the struct this is embedded in.
 * @member: the name of the list_head within the struct.
 */
#define list_entry(ptr, type, member) \
        container_of(ptr, type, member)
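list_entry() is just container_of(): given a pointer to the embedded list_head, it subtracts the member offset to recover the containing object. A sketch reusing the hypothetical struct my_item from the earlier example:

static struct my_item *node_to_item(struct list_head *pos)
{
        /* 'pos' must point at the 'node' member of some struct my_item */
        return list_entry(pos, struct my_item, node);
}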
Introduce a handy list_first_entry macro
There are many places in the kernel where a construction like
foo = list_entry(head->next, struct foo_struct, list);
is used.
The code might look more descriptive and neat if using the macro
list_first_entry(head, type, member) \
list_entry((head)->next, type, member)
Here is the macro itself and the examples of its usage in the generic code.
If it turns out to be useful, I can prepare the set of patches to
inject it into arch-specific code, drivers, networking, etc.
Signed-off-by: Pavel Emelianov <xemul@openvz.org>
Signed-off-by: Kirill Korotaev <dev@openvz.org>
Cc: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Zach Brown <zach.brown@oracle.com>
Cc: Davide Libenzi <davidel@xmailserver.org>
Cc: John McCutchan <ttb@tentacle.dhs.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: john stultz <johnstul@us.ibm.com>
Cc: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
/**
 * list_first_entry - get the first element from a list
 * @ptr: the list head to take the element from.
 * @type: the type of the struct this is embedded in.
 * @member: the name of the list_head within the struct.
 *
 * Note, that list is expected to be not empty.
 */
#define list_first_entry(ptr, type, member) \
        list_entry((ptr)->next, type, member)

/**
 * list_last_entry - get the last element from a list
 * @ptr: the list head to take the element from.
 * @type: the type of the struct this is embedded in.
 * @member: the name of the list_head within the struct.
 *
 * Note, that list is expected to be not empty.
 */
#define list_last_entry(ptr, type, member) \
        list_entry((ptr)->prev, type, member)
/**
 * list_first_entry_or_null - get the first element from a list
 * @ptr: the list head to take the element from.
 * @type: the type of the struct this is embedded in.
 * @member: the name of the list_head within the struct.
 *
 * Note that if the list is empty, it returns NULL.
 */
#define list_first_entry_or_null(ptr, type, member) ({ \
        struct list_head *head__ = (ptr); \
        struct list_head *pos__ = READ_ONCE(head__->next); \
        pos__ != head__ ? list_entry(pos__, type, member) : NULL; \
})

/**
 * list_next_entry - get the next element in list
 * @pos: the type * to cursor
 * @member: the name of the list_head within the struct.
 */
#define list_next_entry(pos, member) \
        list_entry((pos)->member.next, typeof(*(pos)), member)

/**
 * list_prev_entry - get the prev element in list
 * @pos: the type * to cursor
 * @member: the name of the list_head within the struct.
 */
#define list_prev_entry(pos, member) \
        list_entry((pos)->member.prev, typeof(*(pos)), member)

/**
 * list_for_each - iterate over a list
 * @pos: the &struct list_head to use as a loop cursor.
 * @head: the head for your list.
 */
#define list_for_each(pos, head) \
        for (pos = (head)->next; pos != (head); pos = pos->next)

/**
 * list_for_each_continue - continue iteration over a list
 * @pos: the &struct list_head to use as a loop cursor.
 * @head: the head for your list.
 *
 * Continue to iterate over a list, continuing after the current position.
 */
#define list_for_each_continue(pos, head) \
        for (pos = pos->next; pos != (head); pos = pos->next)

/**
 * list_for_each_prev - iterate over a list backwards
 * @pos: the &struct list_head to use as a loop cursor.
 * @head: the head for your list.
 */
#define list_for_each_prev(pos, head) \
        for (pos = (head)->prev; pos != (head); pos = pos->prev)

/**
 * list_for_each_safe - iterate over a list safe against removal of list entry
 * @pos: the &struct list_head to use as a loop cursor.
 * @n: another &struct list_head to use as temporary storage
 * @head: the head for your list.
 */
#define list_for_each_safe(pos, n, head) \
        for (pos = (head)->next, n = pos->next; pos != (head); \
                pos = n, n = pos->next)

/**
 * list_for_each_prev_safe - iterate over a list backwards safe against removal of list entry
 * @pos: the &struct list_head to use as a loop cursor.
 * @n: another &struct list_head to use as temporary storage
 * @head: the head for your list.
 */
#define list_for_each_prev_safe(pos, n, head) \
        for (pos = (head)->prev, n = pos->prev; \
             pos != (head); \
             pos = n, n = pos->prev)

/**
 * list_for_each_entry - iterate over list of given type
 * @pos: the type * to use as a loop cursor.
 * @head: the head for your list.
 * @member: the name of the list_head within the struct.
 */
#define list_for_each_entry(pos, head, member) \
        for (pos = list_first_entry(head, typeof(*pos), member); \
             &pos->member != (head); \
             pos = list_next_entry(pos, member))
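Usage sketch, again with the hypothetical struct my_item from earlier: the cursor is a typed pointer, and the member name ties the iteration to the embedded list_head.

static int sum_items(struct list_head *items)
{
        struct my_item *item;
        int sum = 0;

        list_for_each_entry(item, items, node) /* 'node' is the list_head member */
                sum += item->value;

        return sum;
}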
/**
 * list_for_each_entry_reverse - iterate backwards over list of given type.
 * @pos: the type * to use as a loop cursor.
 * @head: the head for your list.
 * @member: the name of the list_head within the struct.
 */
#define list_for_each_entry_reverse(pos, head, member) \
        for (pos = list_last_entry(head, typeof(*pos), member); \
             &pos->member != (head); \
             pos = list_prev_entry(pos, member))

/**
 * list_prepare_entry - prepare a pos entry for use in list_for_each_entry_continue()
 * @pos: the type * to use as a start point
 * @head: the head of the list
 * @member: the name of the list_head within the struct.
 *
 * Prepares a pos entry for use as a start point in list_for_each_entry_continue().
 */
#define list_prepare_entry(pos, head, member) \
        ((pos) ? : list_entry(head, typeof(*pos), member))

/**
 * list_for_each_entry_continue - continue iteration over list of given type
 * @pos: the type * to use as a loop cursor.
 * @head: the head for your list.
 * @member: the name of the list_head within the struct.
 *
 * Continue to iterate over list of given type, continuing after
 * the current position.
 */
#define list_for_each_entry_continue(pos, head, member) \
        for (pos = list_next_entry(pos, member); \
             &pos->member != (head); \
             pos = list_next_entry(pos, member))

/**
 * list_for_each_entry_continue_reverse - iterate backwards from the given point
 * @pos: the type * to use as a loop cursor.
 * @head: the head for your list.
 * @member: the name of the list_head within the struct.
 *
 * Start to iterate over list of given type backwards, continuing after
 * the current position.
 */
#define list_for_each_entry_continue_reverse(pos, head, member) \
        for (pos = list_prev_entry(pos, member); \
             &pos->member != (head); \
             pos = list_prev_entry(pos, member))

/**
 * list_for_each_entry_from - iterate over list of given type from the current point
 * @pos: the type * to use as a loop cursor.
 * @head: the head for your list.
 * @member: the name of the list_head within the struct.
 *
 * Iterate over list of given type, continuing from current position.
 */
#define list_for_each_entry_from(pos, head, member) \
        for (; &pos->member != (head); \
             pos = list_next_entry(pos, member))

/**
 * list_for_each_entry_from_reverse - iterate backwards over list of given type
 *                                    from the current point
 * @pos: the type * to use as a loop cursor.
 * @head: the head for your list.
 * @member: the name of the list_head within the struct.
 *
 * Iterate backwards over list of given type, continuing from current position.
 */
#define list_for_each_entry_from_reverse(pos, head, member) \
        for (; &pos->member != (head); \
             pos = list_prev_entry(pos, member))

/**
 * list_for_each_entry_safe - iterate over list of given type safe against removal of list entry
 * @pos: the type * to use as a loop cursor.
 * @n: another type * to use as temporary storage
 * @head: the head for your list.
 * @member: the name of the list_head within the struct.
 */
#define list_for_each_entry_safe(pos, n, head, member) \
        for (pos = list_first_entry(head, typeof(*pos), member), \
                n = list_next_entry(pos, member); \
             &pos->member != (head); \
             pos = n, n = list_next_entry(n, member))
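The _safe variant matters when the loop body deletes the current entry: 'n' holds the next element before 'pos' is unlinked. A sketch (hypothetical struct my_item; assumes a kernel context where kfree() from <linux/slab.h> is available):

static void drop_negative(struct list_head *items)
{
        struct my_item *item, *tmp;

        list_for_each_entry_safe(item, tmp, items, node) {
                if (item->value < 0) {
                        list_del(&item->node); /* safe: 'tmp' already saved */
                        kfree(item);
                }
        }
}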
/**
 * list_for_each_entry_safe_continue - continue list iteration safe against removal
 * @pos: the type * to use as a loop cursor.
 * @n: another type * to use as temporary storage
 * @head: the head for your list.
 * @member: the name of the list_head within the struct.
 *
 * Iterate over list of given type, continuing after current point,
 * safe against removal of list entry.
 */
#define list_for_each_entry_safe_continue(pos, n, head, member) \
        for (pos = list_next_entry(pos, member), \
                n = list_next_entry(pos, member); \
             &pos->member != (head); \
             pos = n, n = list_next_entry(n, member))

/**
 * list_for_each_entry_safe_from - iterate over list from current point safe against removal
 * @pos: the type * to use as a loop cursor.
 * @n: another type * to use as temporary storage
 * @head: the head for your list.
 * @member: the name of the list_head within the struct.
 *
 * Iterate over list of given type from current point, safe against
 * removal of list entry.
 */
#define list_for_each_entry_safe_from(pos, n, head, member) \
        for (n = list_next_entry(pos, member); \
             &pos->member != (head); \
             pos = n, n = list_next_entry(n, member))

/**
 * list_for_each_entry_safe_reverse - iterate backwards over list safe against removal
 * @pos: the type * to use as a loop cursor.
 * @n: another type * to use as temporary storage
 * @head: the head for your list.
 * @member: the name of the list_head within the struct.
 *
 * Iterate backwards over list of given type, safe against removal
 * of list entry.
 */
#define list_for_each_entry_safe_reverse(pos, n, head, member) \
        for (pos = list_last_entry(head, typeof(*pos), member), \
                n = list_prev_entry(pos, member); \
             &pos->member != (head); \
             pos = n, n = list_prev_entry(n, member))

/**
 * list_safe_reset_next - reset a stale list_for_each_entry_safe loop
 * @pos: the loop cursor used in the list_for_each_entry_safe loop
 * @n: temporary storage used in list_for_each_entry_safe
 * @member: the name of the list_head within the struct.
 *
 * list_safe_reset_next is not safe to use in general if the list may be
 * modified concurrently (eg. the lock is dropped in the loop body). An
 * exception to this is if the cursor element (pos) is pinned in the list,
 * and list_safe_reset_next is called after re-taking the lock and before
 * completing the current iteration of the loop body.
 */
#define list_safe_reset_next(pos, n, member) \
        n = list_next_entry(pos, member)

/*
 * Double linked lists with a single pointer list head.
 * Mostly useful for hash tables where the two pointer list head is
 * too wasteful.
 * You lose the ability to access the tail in O(1).
 */

#define HLIST_HEAD_INIT { .first = NULL }
#define HLIST_HEAD(name) struct hlist_head name = { .first = NULL }
#define INIT_HLIST_HEAD(ptr) ((ptr)->first = NULL)
static inline void INIT_HLIST_NODE(struct hlist_node *h)
{
        h->next = NULL;
        h->pprev = NULL;
}

/**
 * hlist_unhashed - Has node been removed from list and reinitialized?
 * @h: Node to be checked
 *
 * Note that not all removal functions will leave a node in unhashed
 * state. For example, hlist_nulls_del_init_rcu() does leave the
 * node in unhashed state, but hlist_nulls_del() does not.
 */
static inline int hlist_unhashed(const struct hlist_node *h)
{
        return !h->pprev;
}

/**
 * hlist_unhashed_lockless - Version of hlist_unhashed for lockless use
 * @h: Node to be checked
 *
 * This variant of hlist_unhashed() must be used in lockless contexts
 * to avoid potential load-tearing. The READ_ONCE() is paired with the
 * various WRITE_ONCE() in hlist helpers that are defined below.
 */
static inline int hlist_unhashed_lockless(const struct hlist_node *h)
{
        return !READ_ONCE(h->pprev);
}

/**
 * hlist_empty - Is the specified hlist_head structure an empty hlist?
 * @h: Structure to check.
 */
static inline int hlist_empty(const struct hlist_head *h)
{
        return !READ_ONCE(h->first);
}

static inline void __hlist_del(struct hlist_node *n)
{
        struct hlist_node *next = n->next;
        struct hlist_node **pprev = n->pprev;

        WRITE_ONCE(*pprev, next);
        if (next)
                WRITE_ONCE(next->pprev, pprev);
}

/**
 * hlist_del - Delete the specified hlist_node from its list
 * @n: Node to delete.
 *
 * Note that this function leaves the node in hashed state. Use
 * hlist_del_init() or similar instead to unhash @n.
 */
static inline void hlist_del(struct hlist_node *n)
{
        __hlist_del(n);
        n->next = LIST_POISON1;
        n->pprev = LIST_POISON2;
}

/**
 * hlist_del_init - Delete the specified hlist_node from its list and initialize
 * @n: Node to delete.
 *
 * Note that this function leaves the node in unhashed state.
 */
static inline void hlist_del_init(struct hlist_node *n)
{
        if (!hlist_unhashed(n)) {
                __hlist_del(n);
                INIT_HLIST_NODE(n);
        }
}

/**
 * hlist_add_head - add a new entry at the beginning of the hlist
 * @n: new entry to be added
 * @h: hlist head to add it after
 *
 * Insert a new entry after the specified head.
 * This is good for implementing stacks.
 */
static inline void hlist_add_head(struct hlist_node *n, struct hlist_head *h)
{
        struct hlist_node *first = h->first;
        WRITE_ONCE(n->next, first);
        if (first)
                WRITE_ONCE(first->pprev, &n->next);
        WRITE_ONCE(h->first, n);
        WRITE_ONCE(n->pprev, &h->first);
}
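The typical hlist consumer is a hash table: one hlist_head per bucket (a single pointer, so the bucket array stays small), with nodes inserted at the head. A sketch with hypothetical names:

#define MY_TABLE_SIZE 64 /* hypothetical bucket count */

static struct hlist_head my_table[MY_TABLE_SIZE];

struct my_obj {
        unsigned long key;
        struct hlist_node link;
};

static void my_table_insert(struct my_obj *obj)
{
        hlist_add_head(&obj->link, &my_table[obj->key % MY_TABLE_SIZE]);
}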
|
|
|
|
|
2019-11-09 21:35:13 +03:00
|
|
|
/**
|
|
|
|
* hlist_add_before - add a new entry before the one specified
|
|
|
|
* @n: new entry to be added
|
|
|
|
* @next: hlist node to add it before, which must be non-NULL
|
|
|
|
*/
|
2005-04-17 02:20:36 +04:00
|
|
|
static inline void hlist_add_before(struct hlist_node *n,
|
2019-11-09 21:35:13 +03:00
|
|
|
struct hlist_node *next)
|
2005-04-17 02:20:36 +04:00
|
|
|
{
|
2019-11-07 22:37:37 +03:00
|
|
|
WRITE_ONCE(n->pprev, next->pprev);
|
|
|
|
WRITE_ONCE(n->next, next);
|
|
|
|
WRITE_ONCE(next->pprev, &n->next);
|
2015-09-21 08:02:17 +03:00
|
|
|
WRITE_ONCE(*(n->pprev), n);
|
2005-04-17 02:20:36 +04:00
|
|
|
}

/**
 * hlist_add_behind - add a new entry after the one specified
 * @n: new entry to be added
 * @prev: hlist node to add it after, which must be non-NULL
 */
static inline void hlist_add_behind(struct hlist_node *n,
				    struct hlist_node *prev)
{
	WRITE_ONCE(n->next, prev->next);
	WRITE_ONCE(prev->next, n);
	WRITE_ONCE(n->pprev, &prev->next);

	if (n->next)
		WRITE_ONCE(n->next->pprev, &n->next);
}
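
/*
 * Illustrative sketch, not part of this header: keeping a bucket sorted by
 * key with hlist_add_before()/hlist_add_behind(), walking it with
 * hlist_for_each_entry() (defined below). struct foo and
 * foo_insert_sorted() are made up for the example.
 *
 *	static void foo_insert_sorted(struct foo *new, struct hlist_head *head)
 *	{
 *		struct foo *cur, *last = NULL;
 *
 *		hlist_for_each_entry(cur, head, node) {
 *			if (cur->key >= new->key) {
 *				hlist_add_before(&new->node, &cur->node);
 *				return;
 *			}
 *			last = cur;
 *		}
 *		if (last)
 *			hlist_add_behind(&new->node, &last->node);
 *		else
 *			hlist_add_head(&new->node, head);
 *	}
 */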

/**
 * hlist_add_fake - create a fake hlist consisting of a single headless node
 * @n: Node to make a fake list out of
 *
 * This makes @n appear to be its own predecessor on a headless hlist.
 * The point of this is to allow things like hlist_del() to work correctly
 * in cases where there is no list.
 */
static inline void hlist_add_fake(struct hlist_node *n)
{
	n->pprev = &n->next;
}

/**
 * hlist_fake - Is this node a fake hlist?
 * @h: Node to check for being a self-referential fake hlist.
 */
static inline bool hlist_fake(struct hlist_node *h)
{
	return h->pprev == &h->next;
}
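
/*
 * Illustrative sketch, not part of this header: a fake node lets teardown
 * code call hlist_del() unconditionally on an object that may never have
 * been hashed. foo_init() and foo_destroy() are made up for the example.
 *
 *	static void foo_init(struct foo *f)
 *	{
 *		INIT_HLIST_NODE(&f->node);
 *		hlist_add_fake(&f->node);	// self-referential, "on a list"
 *	}
 *
 *	static void foo_destroy(struct foo *f)
 *	{
 *		hlist_del(&f->node);	// works whether or not f was ever
 *					// moved to a real list
 *	}
 */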

/**
 * hlist_is_singular_node - is node the only element of the specified hlist?
 * @n: Node to check for singularity.
 * @h: Header for potentially singular list.
 *
 * Check whether the node is the only node of the head without
 * accessing head, thus avoiding unnecessary cache misses.
 */
static inline bool
hlist_is_singular_node(struct hlist_node *n, struct hlist_head *h)
{
	return !n->next && n->pprev == &h->first;
}
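
/*
 * Illustrative sketch, not part of this header: detecting "the last entry
 * is being removed" without dereferencing the head, only comparing against
 * its address. foo_remove() is made up for the example.
 *
 *	static bool foo_remove(struct foo *f, struct hlist_head *bucket)
 *	{
 *		bool was_last = hlist_is_singular_node(&f->node, bucket);
 *
 *		hlist_del_init(&f->node);
 *		return was_last;	// caller may retire the empty bucket
 *	}
 */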

/**
 * hlist_move_list - Move an hlist
 * @old: hlist_head for old list.
 * @new: hlist_head for new list.
 *
 * Move a list from one list head to another. Fixup the pprev
 * reference of the first entry if it exists.
 */
static inline void hlist_move_list(struct hlist_head *old,
				   struct hlist_head *new)
{
	new->first = old->first;
	if (new->first)
		new->first->pprev = &new->first;
	old->first = NULL;
}
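
/*
 * Illustrative sketch, not part of this header: moving a whole chain from
 * one head to another in O(1), without touching the individual entries.
 * rehash_bucket() is made up for the example.
 *
 *	static void rehash_bucket(struct hlist_head *old_bucket,
 *				  struct hlist_head *new_bucket)
 *	{
 *		hlist_move_list(old_bucket, new_bucket);
 *		// old_bucket is now empty; the first entry of new_bucket
 *		// has its pprev fixed up to point at new_bucket->first.
 *	}
 */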

#define hlist_entry(ptr, type, member) container_of(ptr, type, member)

#define hlist_for_each(pos, head) \
	for (pos = (head)->first; pos ; pos = pos->next)

#define hlist_for_each_safe(pos, n, head) \
	for (pos = (head)->first; pos && ({ n = pos->next; 1; }); \
	     pos = n)
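
/*
 * Illustrative sketch, not part of this header: raw node iteration; pos is
 * a struct hlist_node *, so hlist_entry() recovers the container. The
 * _safe variant caches pos->next in n, so the current node may be freed.
 * "bucket" and the "stale" field are made up for the example.
 *
 *	struct hlist_node *pos, *n;
 *
 *	hlist_for_each_safe(pos, n, &bucket) {
 *		struct foo *f = hlist_entry(pos, struct foo, node);
 *
 *		if (f->stale) {
 *			hlist_del(&f->node);
 *			kfree(f);
 *		}
 *	}
 */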

/*
 * Note: the entry iterators below take the same (pos, head, member)
 * arguments as list_for_each_entry(); the extra node-cursor parameter
 * that the hlist iterators once carried has been dropped, so hlist
 * iteration looks exactly like list iteration.
 */

/*
 * hlist_entry_safe() fetches @ptr only once, via a GCC statement
 * expression, so the NULL test and the container_of() computation cannot
 * observe two different values. That matters when the pointer is
 * RCU-protected and can change between fetches; it remains the caller's
 * responsibility to use rcu_dereference() as needed.
 */
#define hlist_entry_safe(ptr, type, member) \
	({ typeof(ptr) ____ptr = (ptr); \
	   ____ptr ? hlist_entry(____ptr, type, member) : NULL; \
	})

/**
 * hlist_for_each_entry - iterate over list of given type
 * @pos: the type * to use as a loop cursor.
 * @head: the head for your list.
 * @member: the name of the hlist_node within the struct.
 */
#define hlist_for_each_entry(pos, head, member) \
	for (pos = hlist_entry_safe((head)->first, typeof(*(pos)), member);\
	     pos; \
	     pos = hlist_entry_safe((pos)->member.next, typeof(*(pos)), member))
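
/*
 * Illustrative sketch, not part of this header: a typed hash lookup built
 * on hlist_for_each_entry(). foo_lookup() and its fields are made up.
 *
 *	static struct foo *foo_lookup(struct hlist_head *bucket, int key)
 *	{
 *		struct foo *f;
 *
 *		hlist_for_each_entry(f, bucket, node) {
 *			if (f->key == key)
 *				return f;
 *		}
 *		return NULL;
 *	}
 */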

/**
 * hlist_for_each_entry_continue - iterate over a hlist continuing after current point
 * @pos: the type * to use as a loop cursor.
 * @member: the name of the hlist_node within the struct.
 */
#define hlist_for_each_entry_continue(pos, member) \
	for (pos = hlist_entry_safe((pos)->member.next, typeof(*(pos)), member);\
	     pos; \
	     pos = hlist_entry_safe((pos)->member.next, typeof(*(pos)), member))
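
/*
 * Illustrative sketch, not part of this header: _continue resumes after a
 * cursor found earlier, which is useful when several entries can share a
 * key. "f" and "key" come from a prior foo_lookup() as sketched above;
 * process_duplicate() is made up.
 *
 *	hlist_for_each_entry_continue(f, node) {
 *		if (f->key != key)
 *			break;
 *		process_duplicate(f);
 *	}
 */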

/**
 * hlist_for_each_entry_from - iterate over a hlist continuing from current point
 * @pos: the type * to use as a loop cursor.
 * @member: the name of the hlist_node within the struct.
 */
#define hlist_for_each_entry_from(pos, member) \
	for (; pos; \
	     pos = hlist_entry_safe((pos)->member.next, typeof(*(pos)), member))
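
/*
 * Illustrative sketch, not part of this header: unlike _continue, _from
 * includes the current position itself, so it can resume an interrupted
 * walk at the saved cursor. "cursor" and process_one() are made up.
 *
 *	hlist_for_each_entry_from(cursor, node) {
 *		if (!process_one(cursor))
 *			break;	// stop again; cursor stays at this entry
 *	}
 */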

/**
 * hlist_for_each_entry_safe - iterate over list of given type safe against removal of list entry
 * @pos: the type * to use as a loop cursor.
 * @n: a &struct hlist_node to use as temporary storage
 * @head: the head for your list.
 * @member: the name of the hlist_node within the struct.
 */
#define hlist_for_each_entry_safe(pos, n, head, member) \
	for (pos = hlist_entry_safe((head)->first, typeof(*pos), member);\
	     pos && ({ n = pos->member.next; 1; }); \
	     pos = hlist_entry_safe(n, typeof(*pos), member))
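
/*
 * Illustrative sketch, not part of this header: draining a bucket with the
 * _safe variant, which caches the next node in @n so the current entry can
 * be deleted and freed mid-walk. foo_bucket_drain() is made up.
 *
 *	static void foo_bucket_drain(struct hlist_head *bucket)
 *	{
 *		struct foo *f;
 *		struct hlist_node *tmp;
 *
 *		hlist_for_each_entry_safe(f, tmp, bucket, node) {
 *			hlist_del(&f->node);
 *			kfree(f);
 *		}
 *	}
 */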

#endif