/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
  Red Black Trees
  (C) 1999  Andrea Arcangeli <andrea@suse.de>
  (C) 2002  David Woodhouse <dwmw2@infradead.org>
  (C) 2012  Michel Lespinasse <walken@google.com>

  linux/include/linux/rbtree_augmented.h
*/

#ifndef _LINUX_RBTREE_AUGMENTED_H
#define _LINUX_RBTREE_AUGMENTED_H

#include <linux/compiler.h>
#include <linux/rbtree.h>
#include <linux/rcupdate.h>

/*
 * Please note - only struct rb_augment_callbacks and the prototypes for
 * rb_insert_augmented() and rb_erase_augmented() are intended to be public.
 * The rest are implementation details you are not expected to depend on.
 *
 * See Documentation/core-api/rbtree.rst for documentation and samples.
 */

struct rb_augment_callbacks {
	void (*propagate)(struct rb_node *node, struct rb_node *stop);
	void (*copy)(struct rb_node *old, struct rb_node *new);
	void (*rotate)(struct rb_node *old, struct rb_node *new);
};

extern void __rb_insert_augmented(struct rb_node *node, struct rb_root *root,
	void (*augment_rotate)(struct rb_node *old, struct rb_node *new));

/*
 * Fixup the rbtree and update the augmented information when rebalancing.
 *
 * On insertion, the user must update the augmented information on the path
 * leading to the inserted node, then call rb_link_node() as usual and
 * rb_insert_augmented() instead of the usual rb_insert_color() call.
 * If rb_insert_augmented() rebalances the rbtree, it will call back into
 * a user-provided function to update the augmented information on the
 * affected subtrees.
 */
static inline void
rb_insert_augmented(struct rb_node *node, struct rb_root *root,
		    const struct rb_augment_callbacks *augment)
{
	__rb_insert_augmented(node, root, augment->rotate);
}

static inline void
rb_insert_augmented_cached(struct rb_node *node,
			   struct rb_root_cached *root, bool newleft,
			   const struct rb_augment_callbacks *augment)
{
	if (newleft)
		root->rb_leftmost = node;
	rb_insert_augmented(node, &root->rb_root, augment);
}
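The insertion protocol described above, seen from the user side, might look like the following sketch. This is kernel-side illustration only, not a compilable standalone program; `struct mynode`, its `value`, `subtree_max` and `rb` fields, and `mytree_callbacks` are invented names, and it assumes the callbacks were declared elsewhere (e.g. via RB_DECLARE_CALLBACKS_MAX):

```c
/* Hypothetical augmented-tree user; all names here are invented. */
static void mytree_insert(struct mynode *node, struct rb_root *root)
{
	struct rb_node **link = &root->rb_node, *parent = NULL;

	while (*link) {
		struct mynode *entry = rb_entry(*link, struct mynode, rb);

		parent = *link;
		/* Update augmented data on the path to the insertion point. */
		if (entry->subtree_max < node->value)
			entry->subtree_max = node->value;
		if (node->value < entry->value)
			link = &entry->rb.rb_left;
		else
			link = &entry->rb.rb_right;
	}

	node->subtree_max = node->value;
	rb_link_node(&node->rb, parent, link);
	/* Instead of rb_insert_color(): rebalance and fix augmented data. */
	rb_insert_augmented(&node->rb, root, &mytree_callbacks);
}
```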

/*
 * Template for declaring augmented rbtree callbacks (generic case)
 *
 * RBSTATIC:    'static' or empty
 * RBNAME:      name of the rb_augment_callbacks structure
 * RBSTRUCT:    struct type of the tree nodes
 * RBFIELD:     name of struct rb_node field within RBSTRUCT
 * RBAUGMENTED: name of field within RBSTRUCT holding data for subtree
 * RBCOMPUTE:   name of function that recomputes the RBAUGMENTED data
 */

#define RB_DECLARE_CALLBACKS(RBSTATIC, RBNAME,				\
			     RBSTRUCT, RBFIELD, RBAUGMENTED, RBCOMPUTE)	\
static inline void							\
RBNAME ## _propagate(struct rb_node *rb, struct rb_node *stop)		\
{									\
	while (rb != stop) {						\
		RBSTRUCT *node = rb_entry(rb, RBSTRUCT, RBFIELD);	\
		if (RBCOMPUTE(node, true))				\
			break;						\
		rb = rb_parent(&node->RBFIELD);				\
	}								\
}									\
static inline void							\
RBNAME ## _copy(struct rb_node *rb_old, struct rb_node *rb_new)		\
{									\
	RBSTRUCT *old = rb_entry(rb_old, RBSTRUCT, RBFIELD);		\
	RBSTRUCT *new = rb_entry(rb_new, RBSTRUCT, RBFIELD);		\
	new->RBAUGMENTED = old->RBAUGMENTED;				\
}									\
static void								\
RBNAME ## _rotate(struct rb_node *rb_old, struct rb_node *rb_new)	\
{									\
	RBSTRUCT *old = rb_entry(rb_old, RBSTRUCT, RBFIELD);		\
	RBSTRUCT *new = rb_entry(rb_new, RBSTRUCT, RBFIELD);		\
	new->RBAUGMENTED = old->RBAUGMENTED;				\
	RBCOMPUTE(old, false);						\
}									\
RBSTATIC const struct rb_augment_callbacks RBNAME = {			\
	.propagate = RBNAME ## _propagate,				\
	.copy = RBNAME ## _copy,					\
	.rotate = RBNAME ## _rotate					\
};

/*
 * Template for declaring augmented rbtree callbacks,
 * computing RBAUGMENTED scalar as max(RBCOMPUTE(node)) for all subtree nodes.
 *
 * RBSTATIC:    'static' or empty
 * RBNAME:      name of the rb_augment_callbacks structure
 * RBSTRUCT:    struct type of the tree nodes
 * RBFIELD:     name of struct rb_node field within RBSTRUCT
 * RBTYPE:      type of the RBAUGMENTED field
 * RBAUGMENTED: name of RBTYPE field within RBSTRUCT holding data for subtree
 * RBCOMPUTE:   name of function that returns the per-node RBTYPE scalar
 */

#define RB_DECLARE_CALLBACKS_MAX(RBSTATIC, RBNAME, RBSTRUCT, RBFIELD,	      \
				 RBTYPE, RBAUGMENTED, RBCOMPUTE)	      \
static inline bool RBNAME ## _compute_max(RBSTRUCT *node, bool exit)	      \
{									      \
	RBSTRUCT *child;						      \
	RBTYPE max = RBCOMPUTE(node);					      \
	if (node->RBFIELD.rb_left) {					      \
		child = rb_entry(node->RBFIELD.rb_left, RBSTRUCT, RBFIELD);   \
		if (child->RBAUGMENTED > max)				      \
			max = child->RBAUGMENTED;			      \
	}								      \
	if (node->RBFIELD.rb_right) {					      \
		child = rb_entry(node->RBFIELD.rb_right, RBSTRUCT, RBFIELD);  \
		if (child->RBAUGMENTED > max)				      \
			max = child->RBAUGMENTED;			      \
	}								      \
	if (exit && node->RBAUGMENTED == max)				      \
		return true;						      \
	node->RBAUGMENTED = max;					      \
	return false;							      \
}									      \
RB_DECLARE_CALLBACKS(RBSTATIC, RBNAME,					      \
		     RBSTRUCT, RBFIELD, RBAUGMENTED, RBNAME ## _compute_max)

#define	RB_RED		0
#define	RB_BLACK	1

#define __rb_parent(pc)    ((struct rb_node *)(pc & ~3))

#define __rb_color(pc)     ((pc) & 1)
#define __rb_is_black(pc)  __rb_color(pc)
#define __rb_is_red(pc)    (!__rb_color(pc))
#define rb_color(rb)       __rb_color((rb)->__rb_parent_color)
#define rb_is_red(rb)      __rb_is_red((rb)->__rb_parent_color)
#define rb_is_black(rb)    __rb_is_black((rb)->__rb_parent_color)

static inline void rb_set_parent(struct rb_node *rb, struct rb_node *p)
{
	rb->__rb_parent_color = rb_color(rb) | (unsigned long)p;
}

static inline void rb_set_parent_color(struct rb_node *rb,
				       struct rb_node *p, int color)
{
	rb->__rb_parent_color = (unsigned long)p | color;
}

static inline void
__rb_change_child(struct rb_node *old, struct rb_node *new,
		  struct rb_node *parent, struct rb_root *root)
{
	if (parent) {
		if (parent->rb_left == old)
			WRITE_ONCE(parent->rb_left, new);
		else
			WRITE_ONCE(parent->rb_right, new);
	} else
		WRITE_ONCE(root->rb_node, new);
}

static inline void
__rb_change_child_rcu(struct rb_node *old, struct rb_node *new,
		      struct rb_node *parent, struct rb_root *root)
{
	if (parent) {
		if (parent->rb_left == old)
			rcu_assign_pointer(parent->rb_left, new);
		else
			rcu_assign_pointer(parent->rb_right, new);
	} else
		rcu_assign_pointer(root->rb_node, new);
}

extern void __rb_erase_color(struct rb_node *parent, struct rb_root *root,
	void (*augment_rotate)(struct rb_node *old, struct rb_node *new));

static __always_inline struct rb_node *
__rb_erase_augmented(struct rb_node *node, struct rb_root *root,
		     const struct rb_augment_callbacks *augment)
{
	struct rb_node *child = node->rb_right;
	struct rb_node *tmp = node->rb_left;
	struct rb_node *parent, *rebalance;
	unsigned long pc;

	if (!tmp) {
		/*
		 * Case 1: node to erase has no more than 1 child (easy!)
		 *
		 * Note that if there is one child it must be red due to 5)
		 * and node must be black due to 4). We adjust colors locally
		 * so as to bypass __rb_erase_color() later on.
		 */
		pc = node->__rb_parent_color;
		parent = __rb_parent(pc);
		__rb_change_child(node, child, parent, root);
		if (child) {
			child->__rb_parent_color = pc;
			rebalance = NULL;
		} else
			rebalance = __rb_is_black(pc) ? parent : NULL;
		tmp = parent;
	} else if (!child) {
		/* Still case 1, but this time the child is node->rb_left */
		tmp->__rb_parent_color = pc = node->__rb_parent_color;
		parent = __rb_parent(pc);
		__rb_change_child(node, tmp, parent, root);
		rebalance = NULL;
		tmp = parent;
	} else {
		struct rb_node *successor = child, *child2;

		tmp = child->rb_left;
		if (!tmp) {
			/*
			 * Case 2: node's successor is its right child
			 *
			 *    (n)          (s)
			 *    / \          / \
			 *  (x) (s)  ->  (x) (c)
			 *        \
			 *        (c)
			 */
			parent = successor;
			child2 = successor->rb_right;

			augment->copy(node, successor);
		} else {
			/*
			 * Case 3: node's successor is leftmost under
			 * node's right child subtree
			 *
			 *    (n)          (s)
			 *    / \          / \
			 *  (x) (y)  ->  (x) (y)
			 *      /            /
			 *    (p)          (p)
			 *    /            /
			 *  (s)          (c)
			 *    \
			 *    (c)
			 */
			do {
				parent = successor;
				successor = tmp;
				tmp = tmp->rb_left;
			} while (tmp);
			child2 = successor->rb_right;
			WRITE_ONCE(parent->rb_left, child2);
			WRITE_ONCE(successor->rb_right, child);
			rb_set_parent(child, successor);

			augment->copy(node, successor);
			augment->propagate(parent, successor);
		}

		tmp = node->rb_left;
		WRITE_ONCE(successor->rb_left, tmp);
		rb_set_parent(tmp, successor);

		pc = node->__rb_parent_color;
		tmp = __rb_parent(pc);
		__rb_change_child(node, successor, tmp, root);

		if (child2) {
			rb_set_parent_color(child2, parent, RB_BLACK);
			rebalance = NULL;
		} else {
			rebalance = rb_is_black(successor) ? parent : NULL;
		}
		successor->__rb_parent_color = pc;
		tmp = successor;
	}

	augment->propagate(tmp, NULL);
	return rebalance;
}

static __always_inline void
rb_erase_augmented(struct rb_node *node, struct rb_root *root,
		   const struct rb_augment_callbacks *augment)
{
	struct rb_node *rebalance = __rb_erase_augmented(node, root, augment);
	if (rebalance)
		__rb_erase_color(rebalance, root, augment->rotate);
}
static __always_inline void
rb_erase_augmented_cached(struct rb_node *node, struct rb_root_cached *root,
			  const struct rb_augment_callbacks *augment)
{
	if (root->rb_leftmost == node)
		root->rb_leftmost = rb_next(node);
	rb_erase_augmented(node, &root->rb_root, augment);
}

#endif	/* _LINUX_RBTREE_AUGMENTED_H */