WSL2-Linux-Kernel/fs/btrfs/ref-cache.c

/*
 * Copyright (C) 2008 Oracle. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public
 * License v2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public
 * License along with this program; if not, write to the
 * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
 * Boston, MA 021110-1307, USA.
 */
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/sort.h>
#include "ctree.h"
#include "ref-cache.h"
#include "transaction.h"

/*
 * leaf refs are used to cache the information about which extents
 * a given leaf has references on.  This allows us to process that leaf
 * in btrfs_drop_snapshot without needing to read it back from disk.
 */

/*
 * kmalloc a leaf reference struct and update the counters for the
 * total ref cache size
 */
struct btrfs_leaf_ref *btrfs_alloc_leaf_ref(struct btrfs_root *root,
					    int nr_extents)
{
	struct btrfs_leaf_ref *ref;
	size_t size = btrfs_leaf_ref_size(nr_extents);

	ref = kmalloc(size, GFP_NOFS);
	if (ref) {
		spin_lock(&root->fs_info->ref_cache_lock);
		root->fs_info->total_ref_cache_size += size;
		spin_unlock(&root->fs_info->ref_cache_lock);

		memset(ref, 0, sizeof(*ref));
		atomic_set(&ref->usage, 1);
		INIT_LIST_HEAD(&ref->list);
	}
	return ref;
}

/*
 * free a leaf reference struct and update the counters for the
 * total ref cache size
 */
void btrfs_free_leaf_ref(struct btrfs_root *root, struct btrfs_leaf_ref *ref)
{
	if (!ref)
		return;

	WARN_ON(atomic_read(&ref->usage) == 0);
	if (atomic_dec_and_test(&ref->usage)) {
		size_t size = btrfs_leaf_ref_size(ref->nritems);

		BUG_ON(ref->in_tree);
		kfree(ref);

		spin_lock(&root->fs_info->ref_cache_lock);
		root->fs_info->total_ref_cache_size -= size;
		spin_unlock(&root->fs_info->ref_cache_lock);
	}
}
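
/*
 * insert a ref into the rbtree, keyed by bytenr.  If a ref with the same
 * bytenr is already present its node is returned and nothing is inserted;
 * otherwise the new node is linked in and NULL is returned.
 */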
static struct rb_node *tree_insert(struct rb_root *root, u64 bytenr,
				   struct rb_node *node)
{
	struct rb_node **p = &root->rb_node;
	struct rb_node *parent = NULL;
	struct btrfs_leaf_ref *entry;

	while (*p) {
		parent = *p;
		entry = rb_entry(parent, struct btrfs_leaf_ref, rb_node);

		if (bytenr < entry->bytenr)
			p = &(*p)->rb_left;
		else if (bytenr > entry->bytenr)
			p = &(*p)->rb_right;
		else
			return parent;
	}

	entry = rb_entry(node, struct btrfs_leaf_ref, rb_node);
	rb_link_node(node, parent, p);
	rb_insert_color(node, root);
	return NULL;
}
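
/*
 * find the rbtree node for a given bytenr, or NULL if that extent is
 * not in the cache
 */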
static struct rb_node *tree_search(struct rb_root *root, u64 bytenr)
{
	struct rb_node *n = root->rb_node;
	struct btrfs_leaf_ref *entry;

	while (n) {
		entry = rb_entry(n, struct btrfs_leaf_ref, rb_node);
		WARN_ON(!entry->in_tree);

		if (bytenr < entry->bytenr)
			n = n->rb_left;
		else if (bytenr > entry->bytenr)
			n = n->rb_right;
		else
			return n;
	}
	return NULL;
}
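
/*
 * walk the list of cached leaf refs for the per-root tree (or the shared
 * tree when 'shared' is set), removing and freeing entries until one with
 * a root generation newer than max_root_gen is found
 */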
int btrfs_remove_leaf_refs(struct btrfs_root *root, u64 max_root_gen,
			   int shared)
{
	struct btrfs_leaf_ref *ref = NULL;
	struct btrfs_leaf_ref_tree *tree = root->ref_tree;

	if (shared)
		tree = &root->fs_info->shared_ref_tree;
	if (!tree)
		return 0;

	spin_lock(&tree->lock);
	while (!list_empty(&tree->list)) {
		ref = list_entry(tree->list.next, struct btrfs_leaf_ref, list);
		BUG_ON(ref->tree != tree);
		if (ref->root_gen > max_root_gen)
			break;
		if (!xchg(&ref->in_tree, 0)) {
			cond_resched_lock(&tree->lock);
			continue;
		}

		rb_erase(&ref->rb_node, &tree->root);
		list_del_init(&ref->list);

		spin_unlock(&tree->lock);
		btrfs_free_leaf_ref(root, ref);
		cond_resched();
		spin_lock(&tree->lock);
	}
	spin_unlock(&tree->lock);
	return 0;
}

/*
 * find the leaf ref for a given extent.  This returns the ref struct with
 * a usage reference incremented
 */
struct btrfs_leaf_ref *btrfs_lookup_leaf_ref(struct btrfs_root *root,
					     u64 bytenr)
{
	struct rb_node *rb;
	struct btrfs_leaf_ref *ref = NULL;
	struct btrfs_leaf_ref_tree *tree = root->ref_tree;

again:
	if (tree) {
		spin_lock(&tree->lock);
		rb = tree_search(&tree->root, bytenr);
		if (rb)
			ref = rb_entry(rb, struct btrfs_leaf_ref, rb_node);
		if (ref)
			atomic_inc(&ref->usage);
		spin_unlock(&tree->lock);
		if (ref)
			return ref;
	}
	if (tree != &root->fs_info->shared_ref_tree) {
		tree = &root->fs_info->shared_ref_tree;
		goto again;
	}
	return NULL;
}
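
/*
 * Illustrative sketch only, not part of the original file: a hypothetical
 * caller that consumes a cached leaf ref.  btrfs_lookup_leaf_ref() returns
 * the ref with its usage count already incremented, so the caller must
 * drop that count with btrfs_free_leaf_ref() when it is done.
 */
static void __maybe_unused use_cached_leaf(struct btrfs_root *root,
					   u64 bytenr)
{
	struct btrfs_leaf_ref *ref;

	ref = btrfs_lookup_leaf_ref(root, bytenr);
	if (!ref)
		return;		/* not cached; the caller would read the leaf */

	/* ... process the cached extent references here ... */

	/* drop the usage count taken by the lookup */
	btrfs_free_leaf_ref(root, ref);
}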

/*
 * add a fully filled in leaf ref struct to the cache.  The tree takes its
 * own usage reference on success; -EEXIST is returned if a ref for the
 * same bytenr is already present.
 */
int btrfs_add_leaf_ref(struct btrfs_root *root, struct btrfs_leaf_ref *ref,
		       int shared)
{
	int ret = 0;
	struct rb_node *rb;
	struct btrfs_leaf_ref_tree *tree = root->ref_tree;

	if (shared)
		tree = &root->fs_info->shared_ref_tree;

	spin_lock(&tree->lock);
	rb = tree_insert(&tree->root, ref->bytenr, &ref->rb_node);
	if (rb) {
		ret = -EEXIST;
	} else {
		atomic_inc(&ref->usage);
		ref->tree = tree;
		ref->in_tree = 1;
		list_add_tail(&ref->list, &tree->list);
	}
	spin_unlock(&tree->lock);
	return ret;
}
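
/*
 * Illustrative sketch only, not part of the original file: a hypothetical
 * helper showing the expected alloc/fill/add pattern.  Once
 * btrfs_add_leaf_ref() succeeds the tree holds its own usage reference,
 * so the caller drops the reference it got from btrfs_alloc_leaf_ref().
 */
static int __maybe_unused cache_one_leaf(struct btrfs_root *root,
					 u64 bytenr, u64 root_gen,
					 int nr_extents, int shared)
{
	struct btrfs_leaf_ref *ref;
	int ret;

	ref = btrfs_alloc_leaf_ref(root, nr_extents);
	if (!ref)
		return -ENOMEM;

	ref->bytenr = bytenr;
	ref->root_gen = root_gen;
	ref->nritems = nr_extents;
	/* the per-extent entries would be filled in from the leaf here */

	ret = btrfs_add_leaf_ref(root, ref, shared);

	/* drop the count taken at allocation time; the tree keeps its own */
	btrfs_free_leaf_ref(root, ref);
	return ret;
}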

/*
 * remove a single leaf ref from the tree.  This drops the ref held by
 * the tree only
 */
int btrfs_remove_leaf_ref(struct btrfs_root *root, struct btrfs_leaf_ref *ref)
{
	struct btrfs_leaf_ref_tree *tree;

	if (!xchg(&ref->in_tree, 0))
		return 0;

	tree = ref->tree;
	spin_lock(&tree->lock);

	rb_erase(&ref->rb_node, &tree->root);
	list_del_init(&ref->list);

	spin_unlock(&tree->lock);

	btrfs_free_leaf_ref(root, ref);
	return 0;
}