License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and where references to a
license had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be
applied to a file was done in a spreadsheet of side-by-side results from
the output of two independent scanners (ScanCode & Windriver) producing
SPDX tag:value files, created by Philippe Ombredanne. Philippe prepared
the base worksheet and did an initial spot review of a few thousand
files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging
were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- The file already had some variant of a license header in it (even if
<5 lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If such a file was a */uapi/* path one, it was tagged "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was tagged "GPL-2.0". The results of
that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review of the spreadsheet was
done by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, in some cases with
confirmation by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights.
The Windriver scanner is based in part on an older version of FOSSology,
so the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in the
files he inspected. For the non-uapi files, Thomas did random spot
checks in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual inspection
and review of the 12,461 files patched in the initial version of this
patch earlier this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the .csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg,
based on the output, to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __LINUX_BACKING_DEV_DEFS_H
#define __LINUX_BACKING_DEV_DEFS_H

#include <linux/list.h>
#include <linux/radix-tree.h>
#include <linux/rbtree.h>
#include <linux/spinlock.h>
#include <linux/percpu_counter.h>
#include <linux/percpu-refcount.h>
#include <linux/flex_proportions.h>
#include <linux/timer.h>
#include <linux/workqueue.h>
#include <linux/kref.h>
#include <linux/refcount.h>

struct page;
struct device;
struct dentry;

/*
 * Bits in bdi_writeback.state
 */
enum wb_state {
	WB_registered,		/* bdi_register() was done */
	WB_writeback_running,	/* Writeback is in progress */
	WB_has_dirty_io,	/* Dirty inodes on ->b_{dirty|io|more_io} */
	WB_start_all,		/* nr_pages == 0 (all) work pending */
};

enum wb_congested_state {
	WB_async_congested,	/* The async (write) queue is getting full */
	WB_sync_congested,	/* The sync queue is getting full */
};

typedef int (congested_fn)(void *, int);

enum wb_stat_item {
	WB_RECLAIMABLE,
	WB_WRITEBACK,
	WB_DIRTIED,
	WB_WRITTEN,
	NR_WB_STAT_ITEMS
};

#define WB_STAT_BATCH (8*(1+ilog2(nr_cpu_ids)))

/*
 * why some writeback work was initiated
 */
enum wb_reason {
	WB_REASON_BACKGROUND,
	WB_REASON_VMSCAN,
	WB_REASON_SYNC,
	WB_REASON_PERIODIC,
	WB_REASON_LAPTOP_TIMER,
	WB_REASON_FS_FREE_SPACE,
	/*
	 * There is no bdi forker thread any more and works are done
	 * by emergency worker, however, this is TPs userland visible
	 * and we'll be exposing exactly the same information,
	 * so it has a mismatch name.
	 */
	WB_REASON_FORKER_THREAD,
writeback, memcg: Implement foreign dirty flushing
There's an inherent mismatch between memcg and writeback. The former
tracks ownership per-page while the latter tracks it per-inode. This was
a deliberate design decision because honoring per-page ownership in the
writeback path is complicated, may lead to higher CPU and IO overheads,
and was deemed unnecessary given that write-sharing an inode across
different cgroups isn't a common use case.
Combined with inode majority-writer ownership switching, this works
well enough in most cases but there are some pathological cases. For
example, let's say there are two cgroups A and B which keep writing to
different but confined parts of the same inode. B owns the inode and
A's memory is limited far below B's. A's dirty ratio can rise enough
to trigger balance_dirty_pages() sleeps but B's can be low enough to
avoid triggering background writeback. A will be slowed down without
a way to make writeback of the dirty pages happen.
This patch implements foreign dirty recording and a foreign flushing
mechanism so that when a memcg encounters a condition as above it can
trigger flushes on the bdi_writebacks which can clean its pages. Please
see the comment on top of mem_cgroup_track_foreign_dirty_slowpath() for
details.
A reproducer follows.
write-range.c::

  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <fcntl.h>
  #include <sys/types.h>

  static const char *usage = "write-range FILE START SIZE\n";

  int main(int argc, char **argv)
  {
	int fd;
	unsigned long start, size, end, pos;
	char *endp;
	char buf[4096];

	if (argc < 4) {
		fprintf(stderr, usage);
		return 1;
	}

	fd = open(argv[1], O_WRONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	start = strtoul(argv[2], &endp, 0);
	if (*endp != '\0') {
		fprintf(stderr, usage);
		return 1;
	}

	size = strtoul(argv[3], &endp, 0);
	if (*endp != '\0') {
		fprintf(stderr, usage);
		return 1;
	}

	end = start + size;

	while (1) {
		for (pos = start; pos < end; ) {
			long bread, bwritten = 0;

			if (lseek(fd, pos, SEEK_SET) < 0) {
				perror("lseek");
				return 1;
			}

			bread = read(0, buf, sizeof(buf) < end - pos ?
				     sizeof(buf) : end - pos);
			if (bread < 0) {
				perror("read");
				return 1;
			}
			if (bread == 0)
				return 0;

			while (bwritten < bread) {
				long this;

				this = write(fd, buf + bwritten,
					     bread - bwritten);
				if (this < 0) {
					perror("write");
					return 1;
				}

				bwritten += this;
				pos += bwritten;
			}
		}
	}
  }
repro.sh::
#!/bin/bash
set -e
set -x
sysctl -w vm.dirty_expire_centisecs=300000
sysctl -w vm.dirty_writeback_centisecs=300000
sysctl -w vm.dirtytime_expire_seconds=300000
echo 3 > /proc/sys/vm/drop_caches
TEST=/sys/fs/cgroup/test
A=$TEST/A
B=$TEST/B
mkdir -p $A $B
echo "+memory +io" > $TEST/cgroup.subtree_control
echo $((1<<30)) > $A/memory.high
echo $((32<<30)) > $B/memory.high
rm -f testfile
touch testfile
fallocate -l 4G testfile
echo "Starting B"
(echo $BASHPID > $B/cgroup.procs
pv -q --rate-limit 70M < /dev/urandom | ./write-range testfile $((2<<30)) $((2<<30))) &
echo "Waiting 10s to ensure B claims the testfile inode"
sleep 5
sync
sleep 5
sync
echo "Starting A"
(echo $BASHPID > $A/cgroup.procs
pv < /dev/urandom | ./write-range testfile 0 $((2<<30)))
v2: Added comments explaining why the specific intervals are being used.
v3: Use 0 @nr when calling cgroup_writeback_by_id() to use best-effort
flushing while avoiding possible livelocks.
v4: Use get_jiffies_64() and time_before/after64() instead of raw
jiffies_64 and arithmetic comparisons as suggested by Jan.
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
	WB_REASON_FOREIGN_FLUSH,

	WB_REASON_MAX,
};

struct wb_completion {
	atomic_t		cnt;
	wait_queue_head_t	*waitq;
};

#define __WB_COMPLETION_INIT(_waitq)	\
	(struct wb_completion){ .cnt = ATOMIC_INIT(1), .waitq = (_waitq) }

/*
 * If one wants to wait for one or more wb_writeback_works, each work's
 * ->done should be set to a wb_completion defined using the following
 * macro. Once all work items are issued with wb_queue_work(), the caller
 * can wait for the completion of all using wb_wait_for_completion(). Work
 * items which are waited upon aren't freed automatically on completion.
 */
#define WB_COMPLETION_INIT(bdi)		__WB_COMPLETION_INIT(&(bdi)->wb_waitq)

#define DEFINE_WB_COMPLETION(cmpl, bdi)	\
	struct wb_completion cmpl = WB_COMPLETION_INIT(bdi)

/*
 * For cgroup writeback, multiple wb's may map to the same blkcg. Those
 * wb's can operate mostly independently but should share the congested
 * state. To facilitate such sharing, the congested state is tracked using
 * the following struct which is created on demand, indexed by blkcg ID on
 * its bdi, and refcounted.
 */
struct bdi_writeback_congested {
	unsigned long state;		/* WB_[a]sync_congested flags */
	refcount_t refcnt;		/* nr of attached wb's and blkg */

#ifdef CONFIG_CGROUP_WRITEBACK
	struct backing_dev_info *__bdi;	/* the associated bdi, set to NULL
					 * on bdi unregistration. For memcg-wb
					 * internal use only! */
	int blkcg_id;			/* ID of the associated blkcg */
	struct rb_node rb_node;		/* on bdi->cgwb_congested_tree */
#endif
};

/*
 * Each wb (bdi_writeback) can perform writeback operations, is measured
 * and throttled, independently. Without cgroup writeback, each bdi
 * (bdi_writeback) is served by its embedded bdi->wb.
 *
 * On the default hierarchy, blkcg implicitly enables memcg. This allows
 * using memcg's page ownership for attributing writeback IOs, and every
 * memcg - blkcg combination can be served by its own wb by assigning a
 * dedicated wb to each memcg, which enables isolation across different
 * cgroups and propagation of IO back pressure down from the IO layer upto
 * the tasks which are generating the dirty pages to be written back.
 *
 * A cgroup wb is indexed on its bdi by the ID of the associated memcg,
 * refcounted with the number of inodes attached to it, and pins the memcg
 * and the corresponding blkcg. As the corresponding blkcg for a memcg may
 * change as blkcg is disabled and enabled higher up in the hierarchy, a wb
 * is tested for blkcg after lookup and removed from index on mismatch so
 * that a new wb for the combination can be created.
 */
struct bdi_writeback {
	struct backing_dev_info *bdi;	/* our parent bdi */

	unsigned long state;		/* Always use atomic bitops on this */
	unsigned long last_old_flush;	/* last old data flush */

	struct list_head b_dirty;	/* dirty inodes */
	struct list_head b_io;		/* parked for writeback */
	struct list_head b_more_io;	/* parked for more writeback */
	struct list_head b_dirty_time;	/* time stamps are dirty */
	spinlock_t list_lock;		/* protects the b_* lists */

	struct percpu_counter stat[NR_WB_STAT_ITEMS];

	struct bdi_writeback_congested *congested;

	unsigned long bw_time_stamp;	/* last time write bw is updated */
	unsigned long dirtied_stamp;
	unsigned long written_stamp;	/* pages written at bw_time_stamp */
	unsigned long write_bandwidth;	/* the estimated write bandwidth */
	unsigned long avg_write_bandwidth; /* further smoothed write bw, > 0 */

	/*
	 * The base dirty throttle rate, re-calculated on every 200ms.
	 * All the bdi tasks' dirty rate will be curbed under it.
	 * @dirty_ratelimit tracks the estimated @balanced_dirty_ratelimit
	 * in small steps and is much more smooth/stable than the latter.
	 */
	unsigned long dirty_ratelimit;
	unsigned long balanced_dirty_ratelimit;

	struct fprop_local_percpu completions;
	int dirty_exceeded;
	enum wb_reason start_all_reason;

	spinlock_t work_lock;		/* protects work_list & dwork scheduling */
	struct list_head work_list;
	struct delayed_work dwork;	/* work item used for writeback */

	unsigned long dirty_sleep;	/* last wait */

	struct list_head bdi_node;	/* anchored at bdi->wb_list */

#ifdef CONFIG_CGROUP_WRITEBACK
	struct percpu_ref refcnt;	/* used only for !root wb's */
	struct fprop_local_percpu memcg_completions;
	struct cgroup_subsys_state *memcg_css; /* the associated memcg */
	struct cgroup_subsys_state *blkcg_css; /* and blkcg */
	struct list_head memcg_node;	/* anchored at memcg->cgwb_list */
	struct list_head blkcg_node;	/* anchored at blkcg->cgwb_list */

	union {
		struct work_struct release_work;
		struct rcu_head rcu;
	};
#endif
};

struct backing_dev_info {
	u64 id;
	struct rb_node rb_node;	/* keyed by ->id */
	struct list_head bdi_list;
	unsigned long ra_pages;	/* max readahead in PAGE_SIZE units */
mm: don't cap request size based on read-ahead setting
We ran into a funky issue, where someone doing 256K buffered reads saw
128K requests at the device level. It turns out read-ahead was capping
the request size, since we use 128K as the default setting. This
doesn't make a lot of sense - if someone is issuing 256K reads, they
should see 256K reads, regardless of the read-ahead setting, if the
underlying device can support a 256K read in a single command.
This patch introduces a bdi hint, io_pages. This is the soft max IO
size for the lower level; I've hooked it up to the bdev settings here.
Read-ahead is modified to issue the maximum of the user request size,
and the read-ahead max size, but capped to the max request size on the
device side. The latter is done to avoid reading ahead too much, if the
application asks for a huge read. With this patch, the kernel behaves
like the application expects.
Link: http://lkml.kernel.org/r/1479498073-8657-1-git-send-email-axboe@fb.com
Signed-off-by: Jens Axboe <axboe@fb.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
	unsigned long io_pages;	/* max allowed IO size */
	congested_fn *congested_fn; /* Function pointer if device is md/dm */
	void *congested_data;	/* Pointer to aux data for congested func */

	struct kref refcnt;	/* Reference counter for the structure */
	unsigned int capabilities; /* Device capabilities */
	unsigned int min_ratio;
	unsigned int max_ratio, max_prop_frac;

	/*
	 * Sum of avg_write_bw of wbs with dirty inodes. > 0 if there are
	 * any dirty wbs, which is depended upon by bdi_has_dirty().
	 */
	atomic_long_t tot_write_bandwidth;

	struct bdi_writeback wb;  /* the root writeback info for this bdi */
	struct list_head wb_list; /* list of all wbs */
#ifdef CONFIG_CGROUP_WRITEBACK
	struct radix_tree_root cgwb_tree; /* radix tree of active cgroup wbs */
	struct rb_root cgwb_congested_tree; /* their congested states */
	struct mutex cgwb_release_mutex;  /* protect shutdown of wb structs */
	struct rw_semaphore wb_switch_rwsem; /* no cgwb switch while syncing */
#else
	struct bdi_writeback_congested *wb_congested;
#endif
	wait_queue_head_t wb_waitq;

	struct device *dev;
	char dev_name[64];
	struct device *owner;

	struct timer_list laptop_mode_wb_timer;

#ifdef CONFIG_DEBUG_FS
	struct dentry *debug_dir;
#endif
};

enum {
	BLK_RW_ASYNC	= 0,
	BLK_RW_SYNC	= 1,
};

void clear_wb_congested(struct bdi_writeback_congested *congested, int sync);
void set_wb_congested(struct bdi_writeback_congested *congested, int sync);

static inline void clear_bdi_congested(struct backing_dev_info *bdi, int sync)
{
	clear_wb_congested(bdi->wb.congested, sync);
}

static inline void set_bdi_congested(struct backing_dev_info *bdi, int sync)
{
	set_wb_congested(bdi->wb.congested, sync);
}

struct wb_lock_cookie {
	bool locked;
	unsigned long flags;
};

#ifdef CONFIG_CGROUP_WRITEBACK

/**
 * wb_tryget - try to increment a wb's refcount
 * @wb: bdi_writeback to get
 */
static inline bool wb_tryget(struct bdi_writeback *wb)
{
	if (wb != &wb->bdi->wb)
		return percpu_ref_tryget(&wb->refcnt);
	return true;
}

/**
 * wb_get - increment a wb's refcount
 * @wb: bdi_writeback to get
 */
static inline void wb_get(struct bdi_writeback *wb)
{
	if (wb != &wb->bdi->wb)
		percpu_ref_get(&wb->refcnt);
}

/**
 * wb_put - decrement a wb's refcount
 * @wb: bdi_writeback to put
 */
static inline void wb_put(struct bdi_writeback *wb)
{
	if (WARN_ON_ONCE(!wb->bdi)) {
		/*
		 * A driver bug might cause a file to be removed before bdi
		 * was initialized.
		 */
		return;
	}

	if (wb != &wb->bdi->wb)
		percpu_ref_put(&wb->refcnt);
}

/**
 * wb_dying - is a wb dying?
 * @wb: bdi_writeback of interest
 *
 * Returns whether @wb is unlinked and being drained.
 */
static inline bool wb_dying(struct bdi_writeback *wb)
{
	return percpu_ref_is_dying(&wb->refcnt);
}

#else	/* CONFIG_CGROUP_WRITEBACK */

static inline bool wb_tryget(struct bdi_writeback *wb)
{
	return true;
}

static inline void wb_get(struct bdi_writeback *wb)
{
}

static inline void wb_put(struct bdi_writeback *wb)
{
}

static inline bool wb_dying(struct bdi_writeback *wb)
{
	return false;
}

#endif	/* CONFIG_CGROUP_WRITEBACK */

#endif	/* __LINUX_BACKING_DEV_DEFS_H */