Merge branch 'classifier-no-rtnl'

Vlad Buslov says:

====================
Refactor classifier API to work with chain/classifiers without rtnl lock

Currently, all netlink protocol handlers for updating rules, actions and
qdiscs are protected with a single global rtnl lock, which removes any
possibility for parallelism. This patch set is the third step in removing
the rtnl lock dependency from the TC rules update path.

Recently, a new rtnl registration flag, RTNL_FLAG_DOIT_UNLOCKED, was added.
Handlers registered with this flag are called without the RTNL lock taken.
The end goal is to have the rule update handlers (RTM_NEWTFILTER,
RTM_DELTFILTER, etc.) registered with the UNLOCKED flag to allow parallel
execution. However, there is no intention to completely remove or split the
rtnl lock itself. This patch set addresses specific problems in the
implementation of the classifier API that prevent its control path from
being executed concurrently, and completes the refactoring of the cls API
rules update handlers by removing the rtnl lock dependency from the code
that handles chains and classifiers. The rule update handlers are registered
with the RTNL_FLAG_DOIT_UNLOCKED flag.
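
As an illustration only (a minimal sketch, not an excerpt from the patches),
registering a doit handler without the implicit rtnl lock looks roughly like
this; tc_new_tfilter() is the cls API handler name used in the diff below:

    /* Sketch: register the rule update handler as rtnl-unlocked. */
    rtnl_register(PF_UNSPEC, RTM_NEWTFILTER, tc_new_tfilter, NULL,
                  RTNL_FLAG_DOIT_UNLOCKED);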

This patch set removes the global rtnl lock dependency on the rules update
path in the cls API by extending its data structures with the following locks:
- tcf_block with a 'lock' mutex. It is used to protect block state and the
  lifetime-management fields of chains on the block (chain->refcnt,
  chain->action_refcnt, chain->explicitly_created, etc.).
- tcf_chain with a 'filter_chain_lock' mutex, which is used to protect the
  list of classifier instances attached to the chain.
  chain0->filter_chain_lock serializes calls to head change callbacks and
  allows them to rely on filter_chain_lock for serialization instead of the
  rtnl lock.
- tcf_proto with a 'lock' spinlock that is intended to be used to
  synchronize access to classifiers that support unlocked execution.
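
To illustrate the intended usage (a rough sketch based on the helpers added
in the diff below, not code from the patches), walking the classifiers on a
chain under the new lock could look like:

    /* Sketch: traverse the chain's filter list while holding the new
     * filter_chain_lock; tcf_chain_dereference() is the lockdep-checked
     * helper introduced by this series, do_something() is a placeholder.
     */
    mutex_lock(&chain->filter_chain_lock);
    for (tp = tcf_chain_dereference(chain->filter_chain, chain); tp;
         tp = tcf_chain_dereference(tp->next, chain))
            do_something(tp);
    mutex_unlock(&chain->filter_chain_lock);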

Classifiers are extended with reference counting to accommodate parallel
access by the unlocked cls API. The classifier ops structure is extended
with an additional 'put' function to allow reference counting of filters;
it is intended to be used by classifiers that implement the rtnl-unlocked
API. Users of classifiers and of individual filter instances are modified
to always hold a reference while working with them.
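
Conceptually (a hedged sketch; the op names come from the ops table in the
diff below), a user of an individual filter instance now does something like:

    /* Sketch: hold a reference on the filter while operating on it. The
     * 'put' op is optional and only implemented by unlocked classifiers.
     */
    fh = tp->ops->get(tp, handle);
    if (fh) {
            /* ... operate on the filter instance ... */
            if (tp->ops->put)
                    tp->ops->put(tp, fh);
    }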

Classifiers that support unlocked execution still need to know the status of
the rtnl lock, so their API is extended with an additional 'rtnl_held'
argument that indicates whether the caller holds the rtnl lock. The cls API
propagates the rtnl lock status across its helper functions and passes it to
the classifier.
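
For example (illustrative only; the full signatures appear in the diff
below), the cls API forwards the flag when it invokes classifier ops:

    /* Sketch: rtnl_held is threaded from the rule update handler down to
     * the classifier so it knows whether it may rely on the rtnl lock.
     */
    tp->ops->destroy(tp, rtnl_held, extack);
    err = tp->ops->change(net, in_skb, tp, base, handle, tca, &fh, ovr,
                          rtnl_held, extack);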

Changes from V3 to V4:
- Patch 1:
  - Extract code that manages chain 'explicitly_created' flag into
    standalone patch.
- Patch 2 - new.

Changes from V2 to V3:
- Change block->lock and chain->filter_chain_lock type to mutex. This
  removes the need for async miniqp refactoring and allows calling
  sleeping functions while holding the block->lock and
  chain->filter_chain_lock locks.
- Previous patch 1 - async miniqp is no longer needed, remove the patch.
- Patch 1:
  - Change block->lock type to mutex.
  - Implement tcf_block_destroy() helper function that destroys
    block->lock mutex before deallocating the block.
  - Revert the GFP_KERNEL->GFP_ATOMIC memory allocation flag change for
    tcf_chain, which is no longer needed after the block->lock type change.
- Patch 6:
  - Change chain->filter_chain_lock type to mutex.
  - Rely on chain0->filter_chain_lock synchronization instead of the rtnl
    lock in the mini_qdisc_pair_swap() function, which is called from the
    head change callback of the ingress Qdisc. With the filter_chain_lock
    type changed to mutex it is now possible to call sleeping functions
    while holding it, so it is used instead of the async implementation
    from previous versions of this patch set.
- Patch 7:
  - Add local tp_next var to tcf_chain_flush() and use it to store
    tp->next pointer dereferenced with rcu_dereference_protected() to
    satisfy kbuild test robot.
  - Reset tp pointer to NULL at the beginning of tc_new_tfilter() to
    prevent its uninitialized usage in error handling code. This code
    was already implemented in patch 10, but must be in patch 8 to
    preserve code bisectability.
  - Put the parent chain in tcf_proto_destroy(). In the previous version
    this code was implemented in patch 1, which was removed in V3.

Changes from V1 to V2:
- Patch 1:
  - Use smp_store_release() instead of xchg() for setting
    miniqp->tp_head.
  - Move Qdisc deallocation to tc_proto_wq ordered workqueue that is
    used to destroy tcf proto instances. This is necessary to ensure
    that Qdisc is destroyed after all instances of chain/proto that it
    contains in order to prevent use-after-free error in
    tc_chain_notify_delete().
  - Cache parent net device ifindex in block to prevent use-after-free
    of dev queue in tc_chain_notify_delete().
- Patch 2:
  - Use lockdep_assert_held() instead of spin_is_locked() for assertion.
  - Use the 'free_block' argument in tcf_chain_destroy() instead of checking
    the block's reference count and chain_list a second time.
- Patch 7:
  - Refactor tcf_chain0_head_change_cb_add() to take block->lock and
    chain0->filter_chain_lock in the correct order.
- Patch 10:
  - Always set 'tp_created' flag when creating tp to prevent releasing
    the chain twice when tp with same priority was inserted
    concurrently.
- Patch 11:
  - Add additional check to prevent creation of new proto instance when
    parent chain is being flushed to reduce CPU usage.
  - Don't call tcf_chain_delete_empty() if tp insertion failed.
- Patch 16 - new.
- Patch 17:
  - Refactor to take the rtnl lock only once (at the beginning of rule
    update handlers).
  - Always release rtnl mutex in the same function that locked it.
    Remove unlock code from tcf_block_release().
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller 2019-02-12 13:41:33 -05:00
Parents: b67de691f6 470502de5b
Commit: ef718bc309
16 changed files, 1110 additions and 373 deletions


@ -44,6 +44,10 @@ bool tcf_queue_work(struct rcu_work *rwork, work_func_t func);
struct tcf_chain *tcf_chain_get_by_act(struct tcf_block *block,
u32 chain_index);
void tcf_chain_put_by_act(struct tcf_chain *chain);
struct tcf_chain *tcf_get_next_chain(struct tcf_block *block,
struct tcf_chain *chain);
struct tcf_proto *tcf_get_next_proto(struct tcf_chain *chain,
struct tcf_proto *tp, bool rtnl_held);
void tcf_block_netif_keep_dst(struct tcf_block *block);
int tcf_block_get(struct tcf_block **p_block,
struct tcf_proto __rcu **p_filter_chain, struct Qdisc *q,
@ -412,7 +416,7 @@ tcf_exts_exec(struct sk_buff *skb, struct tcf_exts *exts,
int tcf_exts_validate(struct net *net, struct tcf_proto *tp,
struct nlattr **tb, struct nlattr *rate_tlv,
struct tcf_exts *exts, bool ovr,
struct tcf_exts *exts, bool ovr, bool rtnl_held,
struct netlink_ext_ack *extack);
void tcf_exts_destroy(struct tcf_exts *exts);
void tcf_exts_change(struct tcf_exts *dst, struct tcf_exts *src);


@ -12,6 +12,7 @@
#include <linux/list.h>
#include <linux/refcount.h>
#include <linux/workqueue.h>
#include <linux/mutex.h>
#include <net/gen_stats.h>
#include <net/rtnetlink.h>
@ -178,6 +179,7 @@ static inline int qdisc_avail_bulklimit(const struct netdev_queue *txq)
}
struct Qdisc_class_ops {
unsigned int flags;
/* Child qdisc manipulation */
struct netdev_queue * (*select_queue)(struct Qdisc *, struct tcmsg *);
int (*graft)(struct Qdisc *, unsigned long cl,
@ -209,6 +211,13 @@ struct Qdisc_class_ops {
struct gnet_dump *);
};
/* Qdisc_class_ops flag values */
/* Implements API that doesn't require rtnl lock */
enum qdisc_class_ops_flags {
QDISC_CLASS_OPS_DOIT_UNLOCKED = 1,
};
struct Qdisc_ops {
struct Qdisc_ops *next;
const struct Qdisc_class_ops *cl_ops;
@ -272,19 +281,21 @@ struct tcf_proto_ops {
const struct tcf_proto *,
struct tcf_result *);
int (*init)(struct tcf_proto*);
void (*destroy)(struct tcf_proto *tp,
void (*destroy)(struct tcf_proto *tp, bool rtnl_held,
struct netlink_ext_ack *extack);
void* (*get)(struct tcf_proto*, u32 handle);
void (*put)(struct tcf_proto *tp, void *f);
int (*change)(struct net *net, struct sk_buff *,
struct tcf_proto*, unsigned long,
u32 handle, struct nlattr **,
void **, bool,
void **, bool, bool,
struct netlink_ext_ack *);
int (*delete)(struct tcf_proto *tp, void *arg,
bool *last,
bool *last, bool rtnl_held,
struct netlink_ext_ack *);
void (*walk)(struct tcf_proto*, struct tcf_walker *arg);
void (*walk)(struct tcf_proto *tp,
struct tcf_walker *arg, bool rtnl_held);
int (*reoffload)(struct tcf_proto *tp, bool add,
tc_setup_cb_t *cb, void *cb_priv,
struct netlink_ext_ack *extack);
@ -297,12 +308,18 @@ struct tcf_proto_ops {
/* rtnetlink specific */
int (*dump)(struct net*, struct tcf_proto*, void *,
struct sk_buff *skb, struct tcmsg*);
struct sk_buff *skb, struct tcmsg*,
bool);
int (*tmplt_dump)(struct sk_buff *skb,
struct net *net,
void *tmplt_priv);
struct module *owner;
int flags;
};
enum tcf_proto_ops_flags {
TCF_PROTO_OPS_DOIT_UNLOCKED = 1,
};
struct tcf_proto {
@ -321,6 +338,12 @@ struct tcf_proto {
void *data;
const struct tcf_proto_ops *ops;
struct tcf_chain *chain;
/* Lock protects tcf_proto shared state and can be used by unlocked
* classifiers to protect their private data.
*/
spinlock_t lock;
bool deleting;
refcount_t refcnt;
struct rcu_head rcu;
};
@ -340,6 +363,8 @@ struct qdisc_skb_cb {
typedef void tcf_chain_head_change_t(struct tcf_proto *tp_head, void *priv);
struct tcf_chain {
/* Protects filter_chain. */
struct mutex filter_chain_lock;
struct tcf_proto __rcu *filter_chain;
struct list_head list;
struct tcf_block *block;
@ -347,11 +372,16 @@ struct tcf_chain {
unsigned int refcnt;
unsigned int action_refcnt;
bool explicitly_created;
bool flushing;
const struct tcf_proto_ops *tmplt_ops;
void *tmplt_priv;
};
struct tcf_block {
/* Lock protects tcf_block and lifetime-management data of chains
* attached to the block (refcnt, action_refcnt, explicitly_created).
*/
struct mutex lock;
struct list_head chain_list;
u32 index; /* block index for shared blocks */
refcount_t refcnt;
@ -369,6 +399,34 @@ struct tcf_block {
struct rcu_head rcu;
};
#ifdef CONFIG_PROVE_LOCKING
static inline bool lockdep_tcf_chain_is_locked(struct tcf_chain *chain)
{
return lockdep_is_held(&chain->filter_chain_lock);
}
static inline bool lockdep_tcf_proto_is_locked(struct tcf_proto *tp)
{
return lockdep_is_held(&tp->lock);
}
#else
static inline bool lockdep_tcf_chain_is_locked(struct tcf_block *chain)
{
return true;
}
static inline bool lockdep_tcf_proto_is_locked(struct tcf_proto *tp)
{
return true;
}
#endif /* #ifdef CONFIG_PROVE_LOCKING */
#define tcf_chain_dereference(p, chain) \
rcu_dereference_protected(p, lockdep_tcf_chain_is_locked(chain))
#define tcf_proto_dereference(p, tp) \
rcu_dereference_protected(p, lockdep_tcf_proto_is_locked(tp))
static inline void tcf_block_offload_inc(struct tcf_block *block, u32 *flags)
{
if (*flags & TCA_CLS_FLAGS_IN_HW)

The diff for one file is not shown because of its large size.


@ -107,7 +107,8 @@ static void basic_delete_filter_work(struct work_struct *work)
rtnl_unlock();
}
static void basic_destroy(struct tcf_proto *tp, struct netlink_ext_ack *extack)
static void basic_destroy(struct tcf_proto *tp, bool rtnl_held,
struct netlink_ext_ack *extack)
{
struct basic_head *head = rtnl_dereference(tp->root);
struct basic_filter *f, *n;
@ -126,7 +127,7 @@ static void basic_destroy(struct tcf_proto *tp, struct netlink_ext_ack *extack)
}
static int basic_delete(struct tcf_proto *tp, void *arg, bool *last,
struct netlink_ext_ack *extack)
bool rtnl_held, struct netlink_ext_ack *extack)
{
struct basic_head *head = rtnl_dereference(tp->root);
struct basic_filter *f = arg;
@ -153,7 +154,7 @@ static int basic_set_parms(struct net *net, struct tcf_proto *tp,
{
int err;
err = tcf_exts_validate(net, tp, tb, est, &f->exts, ovr, extack);
err = tcf_exts_validate(net, tp, tb, est, &f->exts, ovr, true, extack);
if (err < 0)
return err;
@ -173,7 +174,7 @@ static int basic_set_parms(struct net *net, struct tcf_proto *tp,
static int basic_change(struct net *net, struct sk_buff *in_skb,
struct tcf_proto *tp, unsigned long base, u32 handle,
struct nlattr **tca, void **arg, bool ovr,
struct netlink_ext_ack *extack)
bool rtnl_held, struct netlink_ext_ack *extack)
{
int err;
struct basic_head *head = rtnl_dereference(tp->root);
@ -247,7 +248,8 @@ errout:
return err;
}
static void basic_walk(struct tcf_proto *tp, struct tcf_walker *arg)
static void basic_walk(struct tcf_proto *tp, struct tcf_walker *arg,
bool rtnl_held)
{
struct basic_head *head = rtnl_dereference(tp->root);
struct basic_filter *f;
@ -274,7 +276,7 @@ static void basic_bind_class(void *fh, u32 classid, unsigned long cl)
}
static int basic_dump(struct net *net, struct tcf_proto *tp, void *fh,
struct sk_buff *skb, struct tcmsg *t)
struct sk_buff *skb, struct tcmsg *t, bool rtnl_held)
{
struct tc_basic_pcnt gpf = {};
struct basic_filter *f = fh;


@ -298,7 +298,7 @@ static void __cls_bpf_delete(struct tcf_proto *tp, struct cls_bpf_prog *prog,
}
static int cls_bpf_delete(struct tcf_proto *tp, void *arg, bool *last,
struct netlink_ext_ack *extack)
bool rtnl_held, struct netlink_ext_ack *extack)
{
struct cls_bpf_head *head = rtnl_dereference(tp->root);
@ -307,7 +307,7 @@ static int cls_bpf_delete(struct tcf_proto *tp, void *arg, bool *last,
return 0;
}
static void cls_bpf_destroy(struct tcf_proto *tp,
static void cls_bpf_destroy(struct tcf_proto *tp, bool rtnl_held,
struct netlink_ext_ack *extack)
{
struct cls_bpf_head *head = rtnl_dereference(tp->root);
@ -417,7 +417,8 @@ static int cls_bpf_set_parms(struct net *net, struct tcf_proto *tp,
if ((!is_bpf && !is_ebpf) || (is_bpf && is_ebpf))
return -EINVAL;
ret = tcf_exts_validate(net, tp, tb, est, &prog->exts, ovr, extack);
ret = tcf_exts_validate(net, tp, tb, est, &prog->exts, ovr, true,
extack);
if (ret < 0)
return ret;
@ -455,7 +456,8 @@ static int cls_bpf_set_parms(struct net *net, struct tcf_proto *tp,
static int cls_bpf_change(struct net *net, struct sk_buff *in_skb,
struct tcf_proto *tp, unsigned long base,
u32 handle, struct nlattr **tca,
void **arg, bool ovr, struct netlink_ext_ack *extack)
void **arg, bool ovr, bool rtnl_held,
struct netlink_ext_ack *extack)
{
struct cls_bpf_head *head = rtnl_dereference(tp->root);
struct cls_bpf_prog *oldprog = *arg;
@ -575,7 +577,7 @@ static int cls_bpf_dump_ebpf_info(const struct cls_bpf_prog *prog,
}
static int cls_bpf_dump(struct net *net, struct tcf_proto *tp, void *fh,
struct sk_buff *skb, struct tcmsg *tm)
struct sk_buff *skb, struct tcmsg *tm, bool rtnl_held)
{
struct cls_bpf_prog *prog = fh;
struct nlattr *nest;
@ -635,7 +637,8 @@ static void cls_bpf_bind_class(void *fh, u32 classid, unsigned long cl)
prog->res.class = cl;
}
static void cls_bpf_walk(struct tcf_proto *tp, struct tcf_walker *arg)
static void cls_bpf_walk(struct tcf_proto *tp, struct tcf_walker *arg,
bool rtnl_held)
{
struct cls_bpf_head *head = rtnl_dereference(tp->root);
struct cls_bpf_prog *prog;


@ -78,7 +78,7 @@ static void cls_cgroup_destroy_work(struct work_struct *work)
static int cls_cgroup_change(struct net *net, struct sk_buff *in_skb,
struct tcf_proto *tp, unsigned long base,
u32 handle, struct nlattr **tca,
void **arg, bool ovr,
void **arg, bool ovr, bool rtnl_held,
struct netlink_ext_ack *extack)
{
struct nlattr *tb[TCA_CGROUP_MAX + 1];
@ -110,7 +110,7 @@ static int cls_cgroup_change(struct net *net, struct sk_buff *in_skb,
goto errout;
err = tcf_exts_validate(net, tp, tb, tca[TCA_RATE], &new->exts, ovr,
extack);
true, extack);
if (err < 0)
goto errout;
@ -130,7 +130,7 @@ errout:
return err;
}
static void cls_cgroup_destroy(struct tcf_proto *tp,
static void cls_cgroup_destroy(struct tcf_proto *tp, bool rtnl_held,
struct netlink_ext_ack *extack)
{
struct cls_cgroup_head *head = rtnl_dereference(tp->root);
@ -145,12 +145,13 @@ static void cls_cgroup_destroy(struct tcf_proto *tp,
}
static int cls_cgroup_delete(struct tcf_proto *tp, void *arg, bool *last,
struct netlink_ext_ack *extack)
bool rtnl_held, struct netlink_ext_ack *extack)
{
return -EOPNOTSUPP;
}
static void cls_cgroup_walk(struct tcf_proto *tp, struct tcf_walker *arg)
static void cls_cgroup_walk(struct tcf_proto *tp, struct tcf_walker *arg,
bool rtnl_held)
{
struct cls_cgroup_head *head = rtnl_dereference(tp->root);
@ -166,7 +167,7 @@ skip:
}
static int cls_cgroup_dump(struct net *net, struct tcf_proto *tp, void *fh,
struct sk_buff *skb, struct tcmsg *t)
struct sk_buff *skb, struct tcmsg *t, bool rtnl_held)
{
struct cls_cgroup_head *head = rtnl_dereference(tp->root);
struct nlattr *nest;


@ -391,7 +391,8 @@ static void flow_destroy_filter_work(struct work_struct *work)
static int flow_change(struct net *net, struct sk_buff *in_skb,
struct tcf_proto *tp, unsigned long base,
u32 handle, struct nlattr **tca,
void **arg, bool ovr, struct netlink_ext_ack *extack)
void **arg, bool ovr, bool rtnl_held,
struct netlink_ext_ack *extack)
{
struct flow_head *head = rtnl_dereference(tp->root);
struct flow_filter *fold, *fnew;
@ -445,7 +446,7 @@ static int flow_change(struct net *net, struct sk_buff *in_skb,
goto err2;
err = tcf_exts_validate(net, tp, tb, tca[TCA_RATE], &fnew->exts, ovr,
extack);
true, extack);
if (err < 0)
goto err2;
@ -566,7 +567,7 @@ err1:
}
static int flow_delete(struct tcf_proto *tp, void *arg, bool *last,
struct netlink_ext_ack *extack)
bool rtnl_held, struct netlink_ext_ack *extack)
{
struct flow_head *head = rtnl_dereference(tp->root);
struct flow_filter *f = arg;
@ -590,7 +591,8 @@ static int flow_init(struct tcf_proto *tp)
return 0;
}
static void flow_destroy(struct tcf_proto *tp, struct netlink_ext_ack *extack)
static void flow_destroy(struct tcf_proto *tp, bool rtnl_held,
struct netlink_ext_ack *extack)
{
struct flow_head *head = rtnl_dereference(tp->root);
struct flow_filter *f, *next;
@ -617,7 +619,7 @@ static void *flow_get(struct tcf_proto *tp, u32 handle)
}
static int flow_dump(struct net *net, struct tcf_proto *tp, void *fh,
struct sk_buff *skb, struct tcmsg *t)
struct sk_buff *skb, struct tcmsg *t, bool rtnl_held)
{
struct flow_filter *f = fh;
struct nlattr *nest;
@ -677,7 +679,8 @@ nla_put_failure:
return -1;
}
static void flow_walk(struct tcf_proto *tp, struct tcf_walker *arg)
static void flow_walk(struct tcf_proto *tp, struct tcf_walker *arg,
bool rtnl_held)
{
struct flow_head *head = rtnl_dereference(tp->root);
struct flow_filter *f;


@ -465,7 +465,8 @@ static void fl_destroy_sleepable(struct work_struct *work)
module_put(THIS_MODULE);
}
static void fl_destroy(struct tcf_proto *tp, struct netlink_ext_ack *extack)
static void fl_destroy(struct tcf_proto *tp, bool rtnl_held,
struct netlink_ext_ack *extack)
{
struct cls_fl_head *head = rtnl_dereference(tp->root);
struct fl_flow_mask *mask, *next_mask;
@ -1272,7 +1273,8 @@ static int fl_set_parms(struct net *net, struct tcf_proto *tp,
{
int err;
err = tcf_exts_validate(net, tp, tb, est, &f->exts, ovr, extack);
err = tcf_exts_validate(net, tp, tb, est, &f->exts, ovr, true,
extack);
if (err < 0)
return err;
@ -1299,7 +1301,8 @@ static int fl_set_parms(struct net *net, struct tcf_proto *tp,
static int fl_change(struct net *net, struct sk_buff *in_skb,
struct tcf_proto *tp, unsigned long base,
u32 handle, struct nlattr **tca,
void **arg, bool ovr, struct netlink_ext_ack *extack)
void **arg, bool ovr, bool rtnl_held,
struct netlink_ext_ack *extack)
{
struct cls_fl_head *head = rtnl_dereference(tp->root);
struct cls_fl_filter *fold = *arg;
@ -1436,7 +1439,7 @@ errout_mask_alloc:
}
static int fl_delete(struct tcf_proto *tp, void *arg, bool *last,
struct netlink_ext_ack *extack)
bool rtnl_held, struct netlink_ext_ack *extack)
{
struct cls_fl_head *head = rtnl_dereference(tp->root);
struct cls_fl_filter *f = arg;
@ -1448,7 +1451,8 @@ static int fl_delete(struct tcf_proto *tp, void *arg, bool *last,
return 0;
}
static void fl_walk(struct tcf_proto *tp, struct tcf_walker *arg)
static void fl_walk(struct tcf_proto *tp, struct tcf_walker *arg,
bool rtnl_held)
{
struct cls_fl_head *head = rtnl_dereference(tp->root);
struct cls_fl_filter *f;
@ -2043,7 +2047,7 @@ nla_put_failure:
}
static int fl_dump(struct net *net, struct tcf_proto *tp, void *fh,
struct sk_buff *skb, struct tcmsg *t)
struct sk_buff *skb, struct tcmsg *t, bool rtnl_held)
{
struct cls_fl_filter *f = fh;
struct nlattr *nest;


@ -139,7 +139,8 @@ static void fw_delete_filter_work(struct work_struct *work)
rtnl_unlock();
}
static void fw_destroy(struct tcf_proto *tp, struct netlink_ext_ack *extack)
static void fw_destroy(struct tcf_proto *tp, bool rtnl_held,
struct netlink_ext_ack *extack)
{
struct fw_head *head = rtnl_dereference(tp->root);
struct fw_filter *f;
@ -163,7 +164,7 @@ static void fw_destroy(struct tcf_proto *tp, struct netlink_ext_ack *extack)
}
static int fw_delete(struct tcf_proto *tp, void *arg, bool *last,
struct netlink_ext_ack *extack)
bool rtnl_held, struct netlink_ext_ack *extack)
{
struct fw_head *head = rtnl_dereference(tp->root);
struct fw_filter *f = arg;
@ -217,7 +218,7 @@ static int fw_set_parms(struct net *net, struct tcf_proto *tp,
int err;
err = tcf_exts_validate(net, tp, tb, tca[TCA_RATE], &f->exts, ovr,
extack);
true, extack);
if (err < 0)
return err;
@ -250,7 +251,8 @@ static int fw_set_parms(struct net *net, struct tcf_proto *tp,
static int fw_change(struct net *net, struct sk_buff *in_skb,
struct tcf_proto *tp, unsigned long base,
u32 handle, struct nlattr **tca, void **arg,
bool ovr, struct netlink_ext_ack *extack)
bool ovr, bool rtnl_held,
struct netlink_ext_ack *extack)
{
struct fw_head *head = rtnl_dereference(tp->root);
struct fw_filter *f = *arg;
@ -354,7 +356,8 @@ errout:
return err;
}
static void fw_walk(struct tcf_proto *tp, struct tcf_walker *arg)
static void fw_walk(struct tcf_proto *tp, struct tcf_walker *arg,
bool rtnl_held)
{
struct fw_head *head = rtnl_dereference(tp->root);
int h;
@ -384,7 +387,7 @@ static void fw_walk(struct tcf_proto *tp, struct tcf_walker *arg)
}
static int fw_dump(struct net *net, struct tcf_proto *tp, void *fh,
struct sk_buff *skb, struct tcmsg *t)
struct sk_buff *skb, struct tcmsg *t, bool rtnl_held)
{
struct fw_head *head = rtnl_dereference(tp->root);
struct fw_filter *f = fh;


@ -109,7 +109,8 @@ static int mall_replace_hw_filter(struct tcf_proto *tp,
return 0;
}
static void mall_destroy(struct tcf_proto *tp, struct netlink_ext_ack *extack)
static void mall_destroy(struct tcf_proto *tp, bool rtnl_held,
struct netlink_ext_ack *extack)
{
struct cls_mall_head *head = rtnl_dereference(tp->root);
@ -145,7 +146,8 @@ static int mall_set_parms(struct net *net, struct tcf_proto *tp,
{
int err;
err = tcf_exts_validate(net, tp, tb, est, &head->exts, ovr, extack);
err = tcf_exts_validate(net, tp, tb, est, &head->exts, ovr, true,
extack);
if (err < 0)
return err;
@ -159,7 +161,8 @@ static int mall_set_parms(struct net *net, struct tcf_proto *tp,
static int mall_change(struct net *net, struct sk_buff *in_skb,
struct tcf_proto *tp, unsigned long base,
u32 handle, struct nlattr **tca,
void **arg, bool ovr, struct netlink_ext_ack *extack)
void **arg, bool ovr, bool rtnl_held,
struct netlink_ext_ack *extack)
{
struct cls_mall_head *head = rtnl_dereference(tp->root);
struct nlattr *tb[TCA_MATCHALL_MAX + 1];
@ -232,12 +235,13 @@ err_exts_init:
}
static int mall_delete(struct tcf_proto *tp, void *arg, bool *last,
struct netlink_ext_ack *extack)
bool rtnl_held, struct netlink_ext_ack *extack)
{
return -EOPNOTSUPP;
}
static void mall_walk(struct tcf_proto *tp, struct tcf_walker *arg)
static void mall_walk(struct tcf_proto *tp, struct tcf_walker *arg,
bool rtnl_held)
{
struct cls_mall_head *head = rtnl_dereference(tp->root);
@ -279,7 +283,7 @@ static int mall_reoffload(struct tcf_proto *tp, bool add, tc_setup_cb_t *cb,
}
static int mall_dump(struct net *net, struct tcf_proto *tp, void *fh,
struct sk_buff *skb, struct tcmsg *t)
struct sk_buff *skb, struct tcmsg *t, bool rtnl_held)
{
struct tc_matchall_pcnt gpf = {};
struct cls_mall_head *head = fh;


@ -276,7 +276,8 @@ static void route4_queue_work(struct route4_filter *f)
tcf_queue_work(&f->rwork, route4_delete_filter_work);
}
static void route4_destroy(struct tcf_proto *tp, struct netlink_ext_ack *extack)
static void route4_destroy(struct tcf_proto *tp, bool rtnl_held,
struct netlink_ext_ack *extack)
{
struct route4_head *head = rtnl_dereference(tp->root);
int h1, h2;
@ -312,7 +313,7 @@ static void route4_destroy(struct tcf_proto *tp, struct netlink_ext_ack *extack)
}
static int route4_delete(struct tcf_proto *tp, void *arg, bool *last,
struct netlink_ext_ack *extack)
bool rtnl_held, struct netlink_ext_ack *extack)
{
struct route4_head *head = rtnl_dereference(tp->root);
struct route4_filter *f = arg;
@ -393,7 +394,7 @@ static int route4_set_parms(struct net *net, struct tcf_proto *tp,
struct route4_bucket *b;
int err;
err = tcf_exts_validate(net, tp, tb, est, &f->exts, ovr, extack);
err = tcf_exts_validate(net, tp, tb, est, &f->exts, ovr, true, extack);
if (err < 0)
return err;
@ -468,7 +469,7 @@ static int route4_set_parms(struct net *net, struct tcf_proto *tp,
static int route4_change(struct net *net, struct sk_buff *in_skb,
struct tcf_proto *tp, unsigned long base, u32 handle,
struct nlattr **tca, void **arg, bool ovr,
struct netlink_ext_ack *extack)
bool rtnl_held, struct netlink_ext_ack *extack)
{
struct route4_head *head = rtnl_dereference(tp->root);
struct route4_filter __rcu **fp;
@ -560,7 +561,8 @@ errout:
return err;
}
static void route4_walk(struct tcf_proto *tp, struct tcf_walker *arg)
static void route4_walk(struct tcf_proto *tp, struct tcf_walker *arg,
bool rtnl_held)
{
struct route4_head *head = rtnl_dereference(tp->root);
unsigned int h, h1;
@ -597,7 +599,7 @@ static void route4_walk(struct tcf_proto *tp, struct tcf_walker *arg)
}
static int route4_dump(struct net *net, struct tcf_proto *tp, void *fh,
struct sk_buff *skb, struct tcmsg *t)
struct sk_buff *skb, struct tcmsg *t, bool rtnl_held)
{
struct route4_filter *f = fh;
struct nlattr *nest;


@ -312,7 +312,8 @@ static void rsvp_delete_filter(struct tcf_proto *tp, struct rsvp_filter *f)
__rsvp_delete_filter(f);
}
static void rsvp_destroy(struct tcf_proto *tp, struct netlink_ext_ack *extack)
static void rsvp_destroy(struct tcf_proto *tp, bool rtnl_held,
struct netlink_ext_ack *extack)
{
struct rsvp_head *data = rtnl_dereference(tp->root);
int h1, h2;
@ -341,7 +342,7 @@ static void rsvp_destroy(struct tcf_proto *tp, struct netlink_ext_ack *extack)
}
static int rsvp_delete(struct tcf_proto *tp, void *arg, bool *last,
struct netlink_ext_ack *extack)
bool rtnl_held, struct netlink_ext_ack *extack)
{
struct rsvp_head *head = rtnl_dereference(tp->root);
struct rsvp_filter *nfp, *f = arg;
@ -477,7 +478,8 @@ static int rsvp_change(struct net *net, struct sk_buff *in_skb,
struct tcf_proto *tp, unsigned long base,
u32 handle,
struct nlattr **tca,
void **arg, bool ovr, struct netlink_ext_ack *extack)
void **arg, bool ovr, bool rtnl_held,
struct netlink_ext_ack *extack)
{
struct rsvp_head *data = rtnl_dereference(tp->root);
struct rsvp_filter *f, *nfp;
@ -502,7 +504,8 @@ static int rsvp_change(struct net *net, struct sk_buff *in_skb,
err = tcf_exts_init(&e, TCA_RSVP_ACT, TCA_RSVP_POLICE);
if (err < 0)
return err;
err = tcf_exts_validate(net, tp, tb, tca[TCA_RATE], &e, ovr, extack);
err = tcf_exts_validate(net, tp, tb, tca[TCA_RATE], &e, ovr, true,
extack);
if (err < 0)
goto errout2;
@ -654,7 +657,8 @@ errout2:
return err;
}
static void rsvp_walk(struct tcf_proto *tp, struct tcf_walker *arg)
static void rsvp_walk(struct tcf_proto *tp, struct tcf_walker *arg,
bool rtnl_held)
{
struct rsvp_head *head = rtnl_dereference(tp->root);
unsigned int h, h1;
@ -688,7 +692,7 @@ static void rsvp_walk(struct tcf_proto *tp, struct tcf_walker *arg)
}
static int rsvp_dump(struct net *net, struct tcf_proto *tp, void *fh,
struct sk_buff *skb, struct tcmsg *t)
struct sk_buff *skb, struct tcmsg *t, bool rtnl_held)
{
struct rsvp_filter *f = fh;
struct rsvp_session *s;


@ -173,7 +173,7 @@ static void tcindex_destroy_fexts_work(struct work_struct *work)
}
static int tcindex_delete(struct tcf_proto *tp, void *arg, bool *last,
struct netlink_ext_ack *extack)
bool rtnl_held, struct netlink_ext_ack *extack)
{
struct tcindex_data *p = rtnl_dereference(tp->root);
struct tcindex_filter_result *r = arg;
@ -226,7 +226,7 @@ static int tcindex_destroy_element(struct tcf_proto *tp,
{
bool last;
return tcindex_delete(tp, arg, &last, NULL);
return tcindex_delete(tp, arg, &last, false, NULL);
}
static void __tcindex_destroy(struct rcu_head *head)
@ -314,7 +314,7 @@ tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base,
err = tcf_exts_init(&e, TCA_TCINDEX_ACT, TCA_TCINDEX_POLICE);
if (err < 0)
return err;
err = tcf_exts_validate(net, tp, tb, est, &e, ovr, extack);
err = tcf_exts_validate(net, tp, tb, est, &e, ovr, true, extack);
if (err < 0)
goto errout;
@ -499,7 +499,7 @@ static int
tcindex_change(struct net *net, struct sk_buff *in_skb,
struct tcf_proto *tp, unsigned long base, u32 handle,
struct nlattr **tca, void **arg, bool ovr,
struct netlink_ext_ack *extack)
bool rtnl_held, struct netlink_ext_ack *extack)
{
struct nlattr *opt = tca[TCA_OPTIONS];
struct nlattr *tb[TCA_TCINDEX_MAX + 1];
@ -522,7 +522,8 @@ tcindex_change(struct net *net, struct sk_buff *in_skb,
tca[TCA_RATE], ovr, extack);
}
static void tcindex_walk(struct tcf_proto *tp, struct tcf_walker *walker)
static void tcindex_walk(struct tcf_proto *tp, struct tcf_walker *walker,
bool rtnl_held)
{
struct tcindex_data *p = rtnl_dereference(tp->root);
struct tcindex_filter *f, *next;
@ -558,7 +559,7 @@ static void tcindex_walk(struct tcf_proto *tp, struct tcf_walker *walker)
}
}
static void tcindex_destroy(struct tcf_proto *tp,
static void tcindex_destroy(struct tcf_proto *tp, bool rtnl_held,
struct netlink_ext_ack *extack)
{
struct tcindex_data *p = rtnl_dereference(tp->root);
@ -568,14 +569,14 @@ static void tcindex_destroy(struct tcf_proto *tp,
walker.count = 0;
walker.skip = 0;
walker.fn = tcindex_destroy_element;
tcindex_walk(tp, &walker);
tcindex_walk(tp, &walker, true);
call_rcu(&p->rcu, __tcindex_destroy);
}
static int tcindex_dump(struct net *net, struct tcf_proto *tp, void *fh,
struct sk_buff *skb, struct tcmsg *t)
struct sk_buff *skb, struct tcmsg *t, bool rtnl_held)
{
struct tcindex_data *p = rtnl_dereference(tp->root);
struct tcindex_filter_result *r = fh;


@ -629,7 +629,8 @@ static int u32_destroy_hnode(struct tcf_proto *tp, struct tc_u_hnode *ht,
return -ENOENT;
}
static void u32_destroy(struct tcf_proto *tp, struct netlink_ext_ack *extack)
static void u32_destroy(struct tcf_proto *tp, bool rtnl_held,
struct netlink_ext_ack *extack)
{
struct tc_u_common *tp_c = tp->data;
struct tc_u_hnode *root_ht = rtnl_dereference(tp->root);
@ -663,7 +664,7 @@ static void u32_destroy(struct tcf_proto *tp, struct netlink_ext_ack *extack)
}
static int u32_delete(struct tcf_proto *tp, void *arg, bool *last,
struct netlink_ext_ack *extack)
bool rtnl_held, struct netlink_ext_ack *extack)
{
struct tc_u_hnode *ht = arg;
struct tc_u_common *tp_c = tp->data;
@ -726,7 +727,7 @@ static int u32_set_parms(struct net *net, struct tcf_proto *tp,
{
int err;
err = tcf_exts_validate(net, tp, tb, est, &n->exts, ovr, extack);
err = tcf_exts_validate(net, tp, tb, est, &n->exts, ovr, true, extack);
if (err < 0)
return err;
@ -858,7 +859,7 @@ static struct tc_u_knode *u32_init_knode(struct tcf_proto *tp,
static int u32_change(struct net *net, struct sk_buff *in_skb,
struct tcf_proto *tp, unsigned long base, u32 handle,
struct nlattr **tca, void **arg, bool ovr,
struct nlattr **tca, void **arg, bool ovr, bool rtnl_held,
struct netlink_ext_ack *extack)
{
struct tc_u_common *tp_c = tp->data;
@ -1123,7 +1124,8 @@ erridr:
return err;
}
static void u32_walk(struct tcf_proto *tp, struct tcf_walker *arg)
static void u32_walk(struct tcf_proto *tp, struct tcf_walker *arg,
bool rtnl_held)
{
struct tc_u_common *tp_c = tp->data;
struct tc_u_hnode *ht;
@ -1281,7 +1283,7 @@ static void u32_bind_class(void *fh, u32 classid, unsigned long cl)
}
static int u32_dump(struct net *net, struct tcf_proto *tp, void *fh,
struct sk_buff *skb, struct tcmsg *t)
struct sk_buff *skb, struct tcmsg *t, bool rtnl_held)
{
struct tc_u_knode *n = fh;
struct tc_u_hnode *ht_up, *ht_down;


@ -1909,17 +1909,19 @@ static void tc_bind_tclass(struct Qdisc *q, u32 portid, u32 clid,
block = cops->tcf_block(q, cl, NULL);
if (!block)
return;
list_for_each_entry(chain, &block->chain_list, list) {
for (chain = tcf_get_next_chain(block, NULL);
chain;
chain = tcf_get_next_chain(block, chain)) {
struct tcf_proto *tp;
for (tp = rtnl_dereference(chain->filter_chain);
tp; tp = rtnl_dereference(tp->next)) {
for (tp = tcf_get_next_proto(chain, NULL, true);
tp; tp = tcf_get_next_proto(chain, tp, true)) {
struct tcf_bind_args arg = {};
arg.w.fn = tcf_node_bind;
arg.classid = clid;
arg.cl = new_cl;
tp->ops->walk(tp, &arg.w);
tp->ops->walk(tp, &arg.w, true);
}
}
}


@ -1366,7 +1366,11 @@ static void mini_qdisc_rcu_func(struct rcu_head *head)
void mini_qdisc_pair_swap(struct mini_Qdisc_pair *miniqp,
struct tcf_proto *tp_head)
{
struct mini_Qdisc *miniq_old = rtnl_dereference(*miniqp->p_miniq);
/* Protected with chain0->filter_chain_lock.
* Can't access chain directly because tp_head can be NULL.
*/
struct mini_Qdisc *miniq_old =
rcu_dereference_protected(*miniqp->p_miniq, 1);
struct mini_Qdisc *miniq;
if (!tp_head) {