/*
 * Virtual network driver for conversing with remote driver backends.
 *
 * Copyright (c) 2002-2005, K A Fraser
 * Copyright (c) 2005, XenSource Ltd
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License version 2
 * as published by the Free Software Foundation; or, when distributed
 * separately from the Linux kernel or incorporated into other
 * software packages, subject to the following license:
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this source file (the "Software"), to deal in the Software without
 * restriction, including without limitation the rights to use, copy, modify,
 * merge, publish, distribute, sublicense, and/or sell copies of the Software,
 * and to permit persons to whom the Software is furnished to do so, subject to
 * the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 * IN THE SOFTWARE.
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>
#include <linux/ethtool.h>
#include <linux/if_ether.h>
#include <net/tcp.h>
#include <linux/udp.h>
#include <linux/moduleparam.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <net/ip.h>
#include <linux/bpf.h>
#include <net/page_pool.h>
#include <linux/bpf_trace.h>

#include <xen/xen.h>
#include <xen/xenbus.h>
#include <xen/events.h>
#include <xen/page.h>
#include <xen/platform_pci.h>
#include <xen/grant_table.h>

#include <xen/interface/io/netif.h>
#include <xen/interface/memory.h>
#include <xen/interface/grant_table.h>

/* Module parameters */
#define MAX_QUEUES_DEFAULT 8
static unsigned int xennet_max_queues;
module_param_named(max_queues, xennet_max_queues, uint, 0644);
MODULE_PARM_DESC(max_queues,
		 "Maximum number of queues per virtual interface");

#define XENNET_TIMEOUT	(5 * HZ)

static const struct ethtool_ops xennet_ethtool_ops;

struct netfront_cb {
	int pull_to;
};

#define NETFRONT_SKB_CB(skb)	((struct netfront_cb *)((skb)->cb))

#define RX_COPY_THRESHOLD 256

#define GRANT_INVALID_REF	0

#define NET_TX_RING_SIZE __CONST_RING_SIZE(xen_netif_tx, XEN_PAGE_SIZE)
#define NET_RX_RING_SIZE __CONST_RING_SIZE(xen_netif_rx, XEN_PAGE_SIZE)

/* Minimum number of Rx slots (includes slot for GSO metadata). */
#define NET_RX_SLOTS_MIN (XEN_NETIF_NR_SLOTS_MIN + 1)

/* Queue name is interface name with "-qNNN" appended */
#define QUEUE_NAME_SIZE (IFNAMSIZ + 6)

/* IRQ name is queue name with "-tx" or "-rx" appended */
#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 3)

static DECLARE_WAIT_QUEUE_HEAD(module_wq);

struct netfront_stats {
	u64			packets;
	u64			bytes;
	struct u64_stats_sync	syncp;
};

struct netfront_info;

struct netfront_queue {
	unsigned int id; /* Queue ID, 0-based */
	char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
	struct netfront_info *info;

	struct bpf_prog __rcu *xdp_prog;

	struct napi_struct napi;

	/* Split event channels support, tx_* == rx_* when using
	 * single event channel.
	 */
	unsigned int tx_evtchn, rx_evtchn;
	unsigned int tx_irq, rx_irq;
	/* Only used when split event channels support is enabled */
	char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */

	spinlock_t tx_lock;
	struct xen_netif_tx_front_ring tx;
	int tx_ring_ref;

	/*
	 * {tx,rx}_skbs store outstanding skbuffs. Free tx_skb entries
	 * are linked from tx_skb_freelist through tx_link.
	 */
	struct sk_buff *tx_skbs[NET_TX_RING_SIZE];
	unsigned short tx_link[NET_TX_RING_SIZE];
#define TX_LINK_NONE 0xffff
#define TX_PENDING   0xfffe
	grant_ref_t gref_tx_head;
	grant_ref_t grant_tx_ref[NET_TX_RING_SIZE];
	struct page *grant_tx_page[NET_TX_RING_SIZE];
	unsigned tx_skb_freelist;
	unsigned int tx_pend_queue;

	spinlock_t rx_lock ____cacheline_aligned_in_smp;
	struct xen_netif_rx_front_ring rx;
	int rx_ring_ref;

	struct timer_list rx_refill_timer;

	struct sk_buff *rx_skbs[NET_RX_RING_SIZE];
	grant_ref_t gref_rx_head;
	grant_ref_t grant_rx_ref[NET_RX_RING_SIZE];

	unsigned int rx_rsp_unconsumed;
	spinlock_t rx_cons_lock;

	struct page_pool *page_pool;
	struct xdp_rxq_info xdp_rxq;
};

struct netfront_info {
	struct list_head list;
	struct net_device *netdev;

	struct xenbus_device *xbdev;

	/* Multi-queue support */
	struct netfront_queue *queues;

	/* Statistics */
	struct netfront_stats __percpu *rx_stats;
	struct netfront_stats __percpu *tx_stats;

	/* XDP state */
	bool netback_has_xdp_headroom;
	bool netfront_xdp_enabled;

	/* Is the device behaving sanely? */
	bool broken;

	atomic_t rx_gso_checksum_fixup;
};

struct netfront_rx_info {
	struct xen_netif_rx_response rx;
	struct xen_netif_extra_info extras[XEN_NETIF_EXTRA_TYPE_MAX - 1];
};

/*
 * Helpers for acquiring and freeing slots in tx_skbs[].
 */
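/*
 * Note on the free-list encoding (derived from the helpers below): the
 * list is threaded through tx_link[] itself.  Each free entry stores the
 * index of the next free entry, *head points at the first free index,
 * and TX_LINK_NONE terminates the chain.  For example, with free slots
 * 3 and 7: head == 3, list[3] == 7, list[7] == TX_LINK_NONE; popping via
 * get_id_from_list() returns 3 and leaves head == 7.  Slots whose
 * request has been pushed to the ring hold TX_PENDING instead of a link
 * (see xennet_mark_tx_pending() below).
 */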
static void add_id_to_list(unsigned *head, unsigned short *list,
			   unsigned short id)
{
	list[id] = *head;
	*head = id;
}

static unsigned short get_id_from_list(unsigned *head, unsigned short *list)
{
	unsigned int id = *head;

	if (id != TX_LINK_NONE) {
		*head = list[id];
		list[id] = TX_LINK_NONE;
	}
	return id;
}
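/*
 * Note: NET_RX_RING_SIZE comes from __CONST_RING_SIZE(), which rounds
 * the slot count down to a power of two, so masking with
 * NET_RX_RING_SIZE - 1 below is a cheap modulo: e.g. with 256 slots,
 * ring index 260 maps to array slot 4.
 */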
static int xennet_rxidx(RING_IDX idx)
{
	return idx & (NET_RX_RING_SIZE - 1);
}

static struct sk_buff *xennet_get_rx_skb(struct netfront_queue *queue,
					 RING_IDX ri)
{
	int i = xennet_rxidx(ri);
	struct sk_buff *skb = queue->rx_skbs[i];
	queue->rx_skbs[i] = NULL;
	return skb;
}

static grant_ref_t xennet_get_rx_ref(struct netfront_queue *queue,
				     RING_IDX ri)
{
	int i = xennet_rxidx(ri);
	grant_ref_t ref = queue->grant_rx_ref[i];
	queue->grant_rx_ref[i] = GRANT_INVALID_REF;
	return ref;
}

#ifdef CONFIG_SYSFS
static const struct attribute_group xennet_dev_group;
#endif

static bool xennet_can_sg(struct net_device *dev)
{
	return dev->features & NETIF_F_SG;
}


static void rx_refill_timeout(struct timer_list *t)
{
	struct netfront_queue *queue = from_timer(queue, t, rx_refill_timer);

	napi_schedule(&queue->napi);
}

static int netfront_tx_slot_available(struct netfront_queue *queue)
{
	return (queue->tx.req_prod_pvt - queue->tx.rsp_cons) <
		(NET_TX_RING_SIZE - XEN_NETIF_NR_SLOTS_MIN - 1);
}
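/*
 * Restart a stopped transmit queue once enough slots have been freed.
 * The completion path (xennet_tx_buf_gc()) calls this after reaping
 * responses; the queue is expected to have been stopped by the transmit
 * path when the ring ran short of slots, making this the wake half of
 * that flow control.
 */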
static void xennet_maybe_wake_tx(struct netfront_queue *queue)
{
	struct net_device *dev = queue->info->netdev;
	struct netdev_queue *dev_queue = netdev_get_tx_queue(dev, queue->id);

	if (unlikely(netif_tx_queue_stopped(dev_queue)) &&
	    netfront_tx_slot_available(queue) &&
	    likely(netif_running(dev)))
		netif_tx_wake_queue(netdev_get_tx_queue(dev, queue->id));
}


static struct sk_buff *xennet_alloc_one_rx_buffer(struct netfront_queue *queue)
{
	struct sk_buff *skb;
	struct page *page;

	skb = __netdev_alloc_skb(queue->info->netdev,
				 RX_COPY_THRESHOLD + NET_IP_ALIGN,
				 GFP_ATOMIC | __GFP_NOWARN);
	if (unlikely(!skb))
		return NULL;

	page = page_pool_dev_alloc_pages(queue->page_pool);
	if (unlikely(!page)) {
		kfree_skb(skb);
		return NULL;
	}
	skb_add_rx_frag(skb, 0, page, 0, 0, PAGE_SIZE);

	/* Align the IP header to a 16-byte boundary */
	skb_reserve(skb, NET_IP_ALIGN);
	skb->dev = queue->info->netdev;

	return skb;
}
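/*
 * Refill the rx ring in bulk: post as many request slots as fit, each
 * backed by a freshly granted page-pool page.  If an allocation fails,
 * or fewer than NET_RX_SLOTS_MIN slots end up posted, the refill is
 * retried from rx_refill_timer (roughly 1/10 s later) rather than
 * notifying the backend with too little work.
 */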
static void xennet_alloc_rx_buffers(struct netfront_queue *queue)
{
	RING_IDX req_prod = queue->rx.req_prod_pvt;
	int notify;
	int err = 0;

	if (unlikely(!netif_carrier_ok(queue->info->netdev)))
		return;

	for (req_prod = queue->rx.req_prod_pvt;
	     req_prod - queue->rx.rsp_cons < NET_RX_RING_SIZE;
	     req_prod++) {
		struct sk_buff *skb;
		unsigned short id;
		grant_ref_t ref;
		struct page *page;
		struct xen_netif_rx_request *req;

		skb = xennet_alloc_one_rx_buffer(queue);
		if (!skb) {
			err = -ENOMEM;
			break;
		}

		id = xennet_rxidx(req_prod);

		BUG_ON(queue->rx_skbs[id]);
		queue->rx_skbs[id] = skb;

		ref = gnttab_claim_grant_reference(&queue->gref_rx_head);
		WARN_ON_ONCE(IS_ERR_VALUE((unsigned long)(int)ref));
		queue->grant_rx_ref[id] = ref;

		page = skb_frag_page(&skb_shinfo(skb)->frags[0]);

		req = RING_GET_REQUEST(&queue->rx, req_prod);
		gnttab_page_grant_foreign_access_ref_one(ref,
							 queue->info->xbdev->otherend_id,
							 page,
							 0);
		req->id = id;
		req->gref = ref;
	}

	queue->rx.req_prod_pvt = req_prod;

	/* Try again later if there are not enough requests or skb allocation
	 * failed.
	 * Enough requests is quantified as the sum of newly created slots and
	 * the unconsumed slots at the backend.
	 */
	if (req_prod - queue->rx.rsp_cons < NET_RX_SLOTS_MIN ||
	    unlikely(err)) {
		mod_timer(&queue->rx_refill_timer, jiffies + (HZ/10));
		return;
	}

	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->rx, notify);
	if (notify)
		notify_remote_via_irq(queue->rx_irq);
}

static int xennet_open(struct net_device *dev)
{
	struct netfront_info *np = netdev_priv(dev);
	unsigned int num_queues = dev->real_num_tx_queues;
	unsigned int i = 0;
	struct netfront_queue *queue = NULL;

	if (!np->queues || np->broken)
		return -ENODEV;

	for (i = 0; i < num_queues; ++i) {
		queue = &np->queues[i];
		napi_enable(&queue->napi);

		spin_lock_bh(&queue->rx_lock);
		if (netif_carrier_ok(dev)) {
			xennet_alloc_rx_buffers(queue);
			queue->rx.sring->rsp_event = queue->rx.rsp_cons + 1;
			if (RING_HAS_UNCONSUMED_RESPONSES(&queue->rx))
				napi_schedule(&queue->napi);
		}
		spin_unlock_bh(&queue->rx_lock);
	}

	netif_tx_start_all_queues(dev);

	return 0;
}
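/*
 * Reap tx completions from the shared ring.  Each response is copied
 * out with RING_COPY_RESPONSE() and validated (id in range, slot in
 * TX_PENDING state, grant actually ended) before use, so a misbehaving
 * backend cannot change a response between validation and use; on any
 * inconsistency the device is marked broken and disabled.
 */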
static bool xennet_tx_buf_gc(struct netfront_queue *queue)
{
	RING_IDX cons, prod;
	unsigned short id;
	struct sk_buff *skb;
	bool more_to_do;
	bool work_done = false;
	const struct device *dev = &queue->info->netdev->dev;

	BUG_ON(!netif_carrier_ok(queue->info->netdev));

	do {
		prod = queue->tx.sring->rsp_prod;
		if (RING_RESPONSE_PROD_OVERFLOW(&queue->tx, prod)) {
			dev_alert(dev, "Illegal number of responses %u\n",
				  prod - queue->tx.rsp_cons);
			goto err;
		}
		rmb(); /* Ensure we see responses up to 'rp'. */

		for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
			struct xen_netif_tx_response txrsp;

			work_done = true;

			RING_COPY_RESPONSE(&queue->tx, cons, &txrsp);
			if (txrsp.status == XEN_NETIF_RSP_NULL)
				continue;

			id = txrsp.id;
			if (id >= RING_SIZE(&queue->tx)) {
				dev_alert(dev,
					  "Response has incorrect id (%u)\n",
					  id);
				goto err;
			}
			if (queue->tx_link[id] != TX_PENDING) {
				dev_alert(dev,
					  "Response for inactive request\n");
				goto err;
			}

			queue->tx_link[id] = TX_LINK_NONE;
			skb = queue->tx_skbs[id];
			queue->tx_skbs[id] = NULL;
			if (unlikely(!gnttab_end_foreign_access_ref(
				queue->grant_tx_ref[id], GNTMAP_readonly))) {
				dev_alert(dev,
					  "Grant still in use by backend domain\n");
				goto err;
			}
			gnttab_release_grant_reference(
				&queue->gref_tx_head, queue->grant_tx_ref[id]);
			queue->grant_tx_ref[id] = GRANT_INVALID_REF;
			queue->grant_tx_page[id] = NULL;
			add_id_to_list(&queue->tx_skb_freelist, queue->tx_link, id);
			dev_kfree_skb_irq(skb);
		}

		queue->tx.rsp_cons = prod;

		RING_FINAL_CHECK_FOR_RESPONSES(&queue->tx, more_to_do);
	} while (more_to_do);

	xennet_maybe_wake_tx(queue);

	return work_done;

 err:
	queue->info->broken = true;
	dev_alert(dev, "Disabled for further use\n");

	return work_done;
}

struct xennet_gnttab_make_txreq {
	struct netfront_queue *queue;
	struct sk_buff *skb;
	struct page *page;
	struct xen_netif_tx_request *tx;      /* Last request on ring page */
	struct xen_netif_tx_request tx_local; /* Last request local copy */
	unsigned int size;
};

static void xennet_tx_setup_grant(unsigned long gfn, unsigned int offset,
				  unsigned int len, void *data)
{
	struct xennet_gnttab_make_txreq *info = data;
	unsigned int id;
	struct xen_netif_tx_request *tx;
	grant_ref_t ref;
	/* convenient aliases */
	struct page *page = info->page;
	struct netfront_queue *queue = info->queue;
	struct sk_buff *skb = info->skb;

	id = get_id_from_list(&queue->tx_skb_freelist, queue->tx_link);
	tx = RING_GET_REQUEST(&queue->tx, queue->tx.req_prod_pvt++);
	ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
	WARN_ON_ONCE(IS_ERR_VALUE((unsigned long)(int)ref));

	gnttab_grant_foreign_access_ref(ref, queue->info->xbdev->otherend_id,
					gfn, GNTMAP_readonly);

	queue->tx_skbs[id] = skb;
	queue->grant_tx_page[id] = page;
	queue->grant_tx_ref[id] = ref;

	info->tx_local.id = id;
	info->tx_local.gref = ref;
	info->tx_local.offset = offset;
	info->tx_local.size = len;
	info->tx_local.flags = 0;

	*tx = info->tx_local;

	/*
	 * Put the request in the pending queue; it will be marked pending
	 * when the producer index is about to be raised.
	 */
	add_id_to_list(&queue->tx_pend_queue, queue->tx_link, id);

	info->tx = tx;
	info->size += info->tx_local.size;
}

static struct xen_netif_tx_request *xennet_make_first_txreq(
	struct xennet_gnttab_make_txreq *info,
	unsigned int offset, unsigned int len)
{
	info->size = 0;

	gnttab_for_one_grant(info->page, offset, len, xennet_tx_setup_grant, info);

	return info->tx;
}

static void xennet_make_one_txreq(unsigned long gfn, unsigned int offset,
				  unsigned int len, void *data)
{
	struct xennet_gnttab_make_txreq *info = data;

	info->tx->flags |= XEN_NETTXF_more_data;
	skb_get(info->skb);
	xennet_tx_setup_grant(gfn, offset, len, data);
}
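/*
 * Queue tx requests for a (possibly compound) page range.  The range is
 * walked one XEN_PAGE_SIZE grant at a time via
 * gnttab_foreach_grant_in_range(); every slot after the first sets
 * XEN_NETTXF_more_data on its predecessor and takes an extra skb
 * reference (see xennet_make_one_txreq() above).
 */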
static void xennet_make_txreqs(
	struct xennet_gnttab_make_txreq *info,
	struct page *page,
	unsigned int offset, unsigned int len)
{
	/* Skip unused frames from start of page */
	page += offset >> PAGE_SHIFT;
	offset &= ~PAGE_MASK;

	while (len) {
		info->page = page;
		info->size = 0;

		gnttab_foreach_grant_in_range(page, offset, len,
					      xennet_make_one_txreq,
					      info);

		page++;
		offset = 0;
		len -= info->size;
	}
}

/*
 * Count how many ring slots are required to send this skb. Each frag
 * might be a compound page.
 */
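/*
 * Example: on 4 KiB pages, an 8000-byte frag starting 4000 bytes into
 * its first page covers bytes 4000-11999 and therefore needs three
 * grants, even though it holds less than two pages of data.
 */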
static int xennet_count_skb_slots(struct sk_buff *skb)
{
	int i, frags = skb_shinfo(skb)->nr_frags;
	int slots;

	slots = gnttab_count_grant(offset_in_page(skb->data),
				   skb_headlen(skb));

	for (i = 0; i < frags; i++) {
		skb_frag_t *frag = skb_shinfo(skb)->frags + i;
		unsigned long size = skb_frag_size(frag);
		unsigned long offset = skb_frag_off(frag);

		/* Skip unused frames from start of page */
		offset &= ~PAGE_MASK;

		slots += gnttab_count_grant(offset, size);
	}

	return slots;
}

static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb,
			       struct net_device *sb_dev)
{
	unsigned int num_queues = dev->real_num_tx_queues;
	u32 hash;
	u16 queue_idx;

	/* First, check if there is only one queue */
	if (num_queues == 1) {
		queue_idx = 0;
	} else {
		hash = skb_get_hash(skb);
		queue_idx = hash % num_queues;
	}

	return queue_idx;
}
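/*
 * Flip every request queued on tx_pend_queue to TX_PENDING.  This is
 * done just before the producer index is pushed, so xennet_tx_buf_gc()
 * only accepts responses for ids that really have a request on the
 * ring.
 */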
static void xennet_mark_tx_pending(struct netfront_queue *queue)
{
	unsigned int i;

	while ((i = get_id_from_list(&queue->tx_pend_queue, queue->tx_link)) !=
	       TX_LINK_NONE)
		queue->tx_link[i] = TX_PENDING;
}

static int xennet_xdp_xmit_one(struct net_device *dev,
			       struct netfront_queue *queue,
			       struct xdp_frame *xdpf)
{
	struct netfront_info *np = netdev_priv(dev);
	struct netfront_stats *tx_stats = this_cpu_ptr(np->tx_stats);
	struct xennet_gnttab_make_txreq info = {
		.queue = queue,
		.skb = NULL,
		.page = virt_to_page(xdpf->data),
	};
	int notify;

	xennet_make_first_txreq(&info,
				offset_in_page(xdpf->data),
				xdpf->len);

	xennet_mark_tx_pending(queue);

	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->tx, notify);
	if (notify)
		notify_remote_via_irq(queue->tx_irq);

	u64_stats_update_begin(&tx_stats->syncp);
	tx_stats->bytes += xdpf->len;
	tx_stats->packets++;
	u64_stats_update_end(&tx_stats->syncp);

	xennet_tx_buf_gc(queue);

	return 0;
}
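/*
 * ndo_xdp_xmit() handler.  Returns the number of frames actually sent;
 * per the usual contract for this hook, the caller owns (and frees) any
 * frames beyond that count.
 */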
static int xennet_xdp_xmit(struct net_device *dev, int n,
			   struct xdp_frame **frames, u32 flags)
{
	unsigned int num_queues = dev->real_num_tx_queues;
	struct netfront_info *np = netdev_priv(dev);
	struct netfront_queue *queue = NULL;
	unsigned long irq_flags;
	int nxmit = 0;
	int i;

	if (unlikely(np->broken))
		return -ENODEV;
	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
		return -EINVAL;

	queue = &np->queues[smp_processor_id() % num_queues];

	spin_lock_irqsave(&queue->tx_lock, irq_flags);
	for (i = 0; i < n; i++) {
		struct xdp_frame *xdpf = frames[i];

		if (!xdpf)
			continue;
		if (xennet_xdp_xmit_one(dev, queue, xdpf))
			break;
		nxmit++;
	}
	spin_unlock_irqrestore(&queue->tx_lock, irq_flags);

	return nxmit;
}
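/*
 * Worst-case number of grant-sized slots for a maximum-size (64 KiB)
 * skb: 65536 / XEN_PAGE_SIZE full pages, plus one extra slot in case
 * the data is not page aligned - i.e. 17 with 4 KiB pages.
 */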
|
|
|
|
|
|
|
|
|
2015-04-10 16:42:21 +03:00
|
|
|
#define MAX_XEN_SKB_FRAGS (65536 / XEN_PAGE_SIZE + 1)
|
|
|
|
|
2018-04-24 16:18:14 +03:00
|
|
|
static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
|
2007-07-18 05:37:06 +04:00
|
|
|
{
|
|
|
|
struct netfront_info *np = netdev_priv(dev);
|
2015-01-13 19:42:42 +03:00
|
|
|
struct netfront_stats *tx_stats = this_cpu_ptr(np->tx_stats);
|
2021-08-24 13:28:07 +03:00
|
|
|
struct xen_netif_tx_request *first_tx;
|
2015-01-13 20:16:44 +03:00
|
|
|
unsigned int i;
|
2007-07-18 05:37:06 +04:00
|
|
|
int notify;
|
2012-11-21 06:02:16 +04:00
|
|
|
int slots;
|
2015-01-13 20:16:44 +03:00
|
|
|
struct page *page;
|
|
|
|
unsigned int offset;
|
|
|
|
unsigned int len;
|
2012-01-23 12:24:43 +04:00
|
|
|
unsigned long flags;
|
2014-06-04 13:30:44 +04:00
|
|
|
struct netfront_queue *queue = NULL;
|
2021-08-24 13:28:07 +03:00
|
|
|
struct xennet_gnttab_make_txreq info = { };
|
2014-06-04 13:30:44 +04:00
|
|
|
unsigned int num_queues = dev->real_num_tx_queues;
|
|
|
|
u16 queue_index;
|
2016-09-19 13:53:40 +03:00
|
|
|
struct sk_buff *nskb;
|
2014-06-04 13:30:44 +04:00
|
|
|
|
|
|
|
/* Drop the packet if no queues are set up */
|
|
|
|
if (num_queues < 1)
|
|
|
|
goto drop;
|
2021-08-24 13:28:09 +03:00
|
|
|
if (unlikely(np->broken))
|
|
|
|
goto drop;
|
2014-06-04 13:30:44 +04:00
|
|
|
/* Determine which queue to transmit this SKB on */
|
|
|
|
queue_index = skb_get_queue_mapping(skb);
|
|
|
|
queue = &np->queues[queue_index];
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2013-04-22 06:20:41 +04:00
|
|
|
/* If skb->len is too big for wire format, drop skb and alert
|
|
|
|
* user about misconfiguration.
|
|
|
|
*/
|
|
|
|
if (unlikely(skb->len > XEN_NETIF_MAX_TX_SIZE)) {
|
|
|
|
net_alert_ratelimited(
|
|
|
|
"xennet: skb->len = %u, too big for wire format\n",
|
|
|
|
skb->len);
|
|
|
|
goto drop;
|
|
|
|
}
|
|
|
|
|
2015-01-13 20:16:43 +03:00
|
|
|
slots = xennet_count_skb_slots(skb);
|
2015-04-10 16:42:21 +03:00
|
|
|
if (unlikely(slots > MAX_XEN_SKB_FRAGS + 1)) {
|
xen-netfront: Fix handling packets on compound pages with skb_linearize
There is a long-known problem with the netfront/netback interface: if the guest
tries to send a packet which constitutes more than MAX_SKB_FRAGS + 1 ring slots,
it gets dropped. The reason is that netback maps these slots to a frag in the
frags array, which is limited by size. Having so many slots can occur since
compound pages were introduced, as the ring protocol slice them up into
individual (non-compound) page aligned slots. The theoretical worst case
scenario looks like this (note: skbs are limited to 64 KB here):
linear buffer: at most PAGE_SIZE - 17 * 2 bytes, overlapping page boundary,
using 2 slots
first 15 frags: 1 + PAGE_SIZE + 1 bytes long, first and last bytes are at the
end and the beginning of a page, therefore they use 3 * 15 = 45 slots
last 2 frags: 1 + 1 bytes, overlapping page boundary, 2 * 2 = 4 slots
Although I don't think this 51-slot skb can really happen, we need a solution
which can deal with every scenario. In real life only a few slots are usually
over the limit, but that is enough to block the TCP stream, as the retry will
most likely have the same buffer layout.
This patch solves this problem by linearizing the packet. This is not the
fastest way, and it can fail more easily since it tries to allocate one big
linear area for the whole packet, but it is probably simpler by an order of
magnitude than anything else. This code path is probably not exercised very
often anyway.
Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Paul Durrant <paul.durrant@citrix.com>
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: xen-devel@lists.xenproject.org
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-08-11 21:32:23 +04:00
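/* Worked example from the commit message above, for 4 KiB pages:
 * 2 slots for the linear area + 3 * 15 = 45 slots for the first
 * 15 frags + 2 * 2 = 4 slots for the last 2 frags gives 51 slots,
 * well above MAX_XEN_SKB_FRAGS + 1 (= 18 with 4 KiB pages), so the
 * skb is linearized below.
 */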
|
|
|
net_dbg_ratelimited("xennet: skb rides the rocket: %d slots, %d bytes\n",
|
|
|
|
slots, skb->len);
|
|
|
|
if (skb_linearize(skb))
|
|
|
|
goto drop;
|
2007-07-18 05:37:06 +04:00
|
|
|
}
|
|
|
|
|
2015-01-13 20:16:44 +03:00
|
|
|
page = virt_to_page(skb->data);
|
|
|
|
offset = offset_in_page(skb->data);
|
2016-09-19 13:53:40 +03:00
|
|
|
|
|
|
|
/* The first req should be at least ETH_HLEN size or the packet will be
|
|
|
|
* dropped by netback.
|
|
|
|
*/
|
|
|
|
if (unlikely(PAGE_SIZE - offset < ETH_HLEN)) {
|
|
|
|
nskb = skb_copy(skb, GFP_ATOMIC);
|
|
|
|
if (!nskb)
|
|
|
|
goto drop;
|
2017-08-30 20:32:58 +03:00
|
|
|
dev_consume_skb_any(skb);
|
2016-09-19 13:53:40 +03:00
|
|
|
skb = nskb;
|
|
|
|
page = virt_to_page(skb->data);
|
|
|
|
offset = offset_in_page(skb->data);
|
|
|
|
}
|
|
|
|
|
2015-01-13 20:16:44 +03:00
|
|
|
len = skb_headlen(skb);
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
spin_lock_irqsave(&queue->tx_lock, flags);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
|
|
|
if (unlikely(!netif_carrier_ok(dev) ||
|
2012-11-21 06:02:16 +04:00
|
|
|
(slots > 1 && !xennet_can_sg(dev)) ||
|
2015-04-17 16:45:04 +03:00
|
|
|
netif_needs_gso(skb, netif_skb_features(skb)))) {
|
2014-06-04 13:30:44 +04:00
|
|
|
spin_unlock_irqrestore(&queue->tx_lock, flags);
|
2007-07-18 05:37:06 +04:00
|
|
|
goto drop;
|
|
|
|
}
|
|
|
|
|
2015-01-13 20:16:44 +03:00
|
|
|
/* First request for the linear area. */
|
2021-08-24 13:28:07 +03:00
|
|
|
info.queue = queue;
|
|
|
|
info.skb = skb;
|
|
|
|
info.page = page;
|
|
|
|
first_tx = xennet_make_first_txreq(&info, offset, len);
|
|
|
|
offset += info.tx_local.size;
|
2015-04-10 16:42:21 +03:00
|
|
|
if (offset == PAGE_SIZE) {
|
|
|
|
page++;
|
|
|
|
offset = 0;
|
|
|
|
}
|
2021-08-24 13:28:07 +03:00
|
|
|
len -= info.tx_local.size;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
|
|
|
if (skb->ip_summed == CHECKSUM_PARTIAL)
|
|
|
|
/* local packet? */
|
2021-08-24 13:28:07 +03:00
|
|
|
first_tx->flags |= XEN_NETTXF_csum_blank |
|
|
|
|
XEN_NETTXF_data_validated;
|
2007-07-18 05:37:06 +04:00
|
|
|
else if (skb->ip_summed == CHECKSUM_UNNECESSARY)
|
|
|
|
/* remote but checksummed. */
|
2021-08-24 13:28:07 +03:00
|
|
|
first_tx->flags |= XEN_NETTXF_data_validated;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2015-01-13 20:16:44 +03:00
|
|
|
/* Optional extra info after the first request. */
|
2007-07-18 05:37:06 +04:00
|
|
|
if (skb_shinfo(skb)->gso_size) {
|
|
|
|
struct xen_netif_extra_info *gso;
|
|
|
|
|
|
|
|
gso = (struct xen_netif_extra_info *)
|
2015-01-13 20:16:44 +03:00
|
|
|
RING_GET_REQUEST(&queue->tx, queue->tx.req_prod_pvt++);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2021-08-24 13:28:07 +03:00
|
|
|
first_tx->flags |= XEN_NETTXF_extra_info;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
|
|
|
gso->u.gso.size = skb_shinfo(skb)->gso_size;
|
2014-01-15 21:30:33 +04:00
|
|
|
gso->u.gso.type = (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) ?
|
|
|
|
XEN_NETIF_GSO_TYPE_TCPV6 :
|
|
|
|
XEN_NETIF_GSO_TYPE_TCPV4;
|
2007-07-18 05:37:06 +04:00
|
|
|
gso->u.gso.pad = 0;
|
|
|
|
gso->u.gso.features = 0;
|
|
|
|
|
|
|
|
gso->type = XEN_NETIF_EXTRA_TYPE_GSO;
|
|
|
|
gso->flags = 0;
|
|
|
|
}
|
|
|
|
|
2015-01-13 20:16:44 +03:00
|
|
|
/* Requests for the rest of the linear area. */
|
2021-08-24 13:28:07 +03:00
|
|
|
xennet_make_txreqs(&info, page, offset, len);
|
2015-01-13 20:16:44 +03:00
|
|
|
|
|
|
|
/* Requests for all the frags. */
|
|
|
|
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
|
|
|
|
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
|
2021-08-24 13:28:07 +03:00
|
|
|
xennet_make_txreqs(&info, skb_frag_page(frag),
|
2019-07-30 17:40:33 +03:00
|
|
|
skb_frag_off(frag),
|
2015-01-13 20:16:44 +03:00
|
|
|
skb_frag_size(frag));
|
|
|
|
}
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2015-01-13 20:16:44 +03:00
|
|
|
/* First request has the packet length. */
|
|
|
|
first_tx->size = skb->len;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2020-07-03 09:22:34 +03:00
|
|
|
/* timestamp packet in software */
|
|
|
|
skb_tx_timestamp(skb);
|
|
|
|
|
2021-08-24 13:28:09 +03:00
|
|
|
xennet_mark_tx_pending(queue);
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->tx, notify);
|
2007-07-18 05:37:06 +04:00
|
|
|
if (notify)
|
2014-06-04 13:30:44 +04:00
|
|
|
notify_remote_via_irq(queue->tx_irq);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2015-01-13 19:42:42 +03:00
|
|
|
u64_stats_update_begin(&tx_stats->syncp);
|
|
|
|
tx_stats->bytes += skb->len;
|
|
|
|
tx_stats->packets++;
|
|
|
|
u64_stats_update_end(&tx_stats->syncp);
|
2007-08-13 23:54:37 +04:00
|
|
|
|
|
|
|
/* Note: It is not safe to access skb after xennet_tx_buf_gc()! */
|
2014-06-04 13:30:44 +04:00
|
|
|
xennet_tx_buf_gc(queue);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
if (!netfront_tx_slot_available(queue))
|
|
|
|
netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
spin_unlock_irqrestore(&queue->tx_lock, flags);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2009-06-23 10:03:08 +04:00
|
|
|
return NETDEV_TX_OK;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
|
|
|
drop:
|
2007-10-04 04:41:50 +04:00
|
|
|
dev->stats.tx_dropped++;
|
2014-03-16 05:33:04 +04:00
|
|
|
dev_kfree_skb_any(skb);
|
2009-06-23 10:03:08 +04:00
|
|
|
return NETDEV_TX_OK;
|
2007-07-18 05:37:06 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
static int xennet_close(struct net_device *dev)
|
|
|
|
{
|
|
|
|
struct netfront_info *np = netdev_priv(dev);
|
2014-06-04 13:30:44 +04:00
|
|
|
unsigned int num_queues = dev->real_num_tx_queues;
|
|
|
|
unsigned int i;
|
|
|
|
struct netfront_queue *queue;
|
|
|
|
netif_tx_stop_all_queues(np->netdev);
|
|
|
|
for (i = 0; i < num_queues; ++i) {
|
|
|
|
queue = &np->queues[i];
|
|
|
|
napi_disable(&queue->napi);
|
|
|
|
}
|
2007-07-18 05:37:06 +04:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2022-02-24 00:19:54 +03:00
|
|
|
static void xennet_destroy_queues(struct netfront_info *info)
|
|
|
|
{
|
|
|
|
unsigned int i;
|
|
|
|
|
|
|
|
for (i = 0; i < info->netdev->real_num_tx_queues; i++) {
|
|
|
|
struct netfront_queue *queue = &info->queues[i];
|
|
|
|
|
|
|
|
if (netif_running(info->netdev))
|
|
|
|
napi_disable(&queue->napi);
|
|
|
|
netif_napi_del(&queue->napi);
|
|
|
|
}
|
|
|
|
|
|
|
|
kfree(info->queues);
|
|
|
|
info->queues = NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void xennet_uninit(struct net_device *dev)
|
|
|
|
{
|
|
|
|
struct netfront_info *np = netdev_priv(dev);
|
|
|
|
xennet_destroy_queues(np);
|
|
|
|
}
|
|
|
|
|
2021-12-16 10:24:08 +03:00
|
|
|
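/* Update rx.rsp_cons together with the cached count of unconsumed
 * responses under rx_cons_lock, so the interrupt handler always sees a
 * consistent pair.
 */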
static void xennet_set_rx_rsp_cons(struct netfront_queue *queue, RING_IDX val)
|
|
|
|
{
|
|
|
|
unsigned long flags;
|
|
|
|
|
|
|
|
spin_lock_irqsave(&queue->rx_cons_lock, flags);
|
|
|
|
queue->rx.rsp_cons = val;
|
|
|
|
queue->rx_rsp_unconsumed = RING_HAS_UNCONSUMED_RESPONSES(&queue->rx);
|
|
|
|
spin_unlock_irqrestore(&queue->rx_cons_lock, flags);
|
|
|
|
}
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
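/* Re-post an rx buffer whose response could not be used: hand the skb
 * and its grant reference back to the ring as a fresh request.
 */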
static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb,
|
2007-07-18 05:37:06 +04:00
|
|
|
grant_ref_t ref)
|
|
|
|
{
|
2014-06-04 13:30:44 +04:00
|
|
|
int new = xennet_rxidx(queue->rx.req_prod_pvt);
|
|
|
|
|
|
|
|
BUG_ON(queue->rx_skbs[new]);
|
|
|
|
queue->rx_skbs[new] = skb;
|
|
|
|
queue->grant_rx_ref[new] = ref;
|
|
|
|
RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->id = new;
|
|
|
|
RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->gref = ref;
|
|
|
|
queue->rx.req_prod_pvt++;
|
2007-07-18 05:37:06 +04:00
|
|
|
}
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
static int xennet_get_extras(struct netfront_queue *queue,
|
2007-07-18 05:37:06 +04:00
|
|
|
struct xen_netif_extra_info *extras,
|
|
|
|
RING_IDX rp)
|
|
|
|
|
|
|
|
{
|
2021-08-24 13:28:06 +03:00
|
|
|
struct xen_netif_extra_info extra;
|
2014-06-04 13:30:44 +04:00
|
|
|
struct device *dev = &queue->info->netdev->dev;
|
|
|
|
RING_IDX cons = queue->rx.rsp_cons;
|
2007-07-18 05:37:06 +04:00
|
|
|
int err = 0;
|
|
|
|
|
|
|
|
do {
|
|
|
|
struct sk_buff *skb;
|
|
|
|
grant_ref_t ref;
|
|
|
|
|
|
|
|
if (unlikely(cons + 1 == rp)) {
|
|
|
|
if (net_ratelimit())
|
|
|
|
dev_warn(dev, "Missing extra info\n");
|
|
|
|
err = -EBADR;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
2021-08-24 13:28:06 +03:00
|
|
|
RING_COPY_RESPONSE(&queue->rx, ++cons, &extra);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2021-08-24 13:28:06 +03:00
|
|
|
if (unlikely(!extra.type ||
|
|
|
|
extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
|
2007-07-18 05:37:06 +04:00
|
|
|
if (net_ratelimit())
|
|
|
|
dev_warn(dev, "Invalid extra type: %d\n",
|
2021-08-24 13:28:06 +03:00
|
|
|
extra.type);
|
2007-07-18 05:37:06 +04:00
|
|
|
err = -EINVAL;
|
|
|
|
} else {
|
2021-08-24 13:28:06 +03:00
|
|
|
extras[extra.type - 1] = extra;
|
2007-07-18 05:37:06 +04:00
|
|
|
}
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
skb = xennet_get_rx_skb(queue, cons);
|
|
|
|
ref = xennet_get_rx_ref(queue, cons);
|
|
|
|
xennet_move_rx_slot(queue, skb, ref);
|
2021-08-24 13:28:06 +03:00
|
|
|
} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2021-12-16 10:24:08 +03:00
|
|
|
xennet_set_rx_rsp_cons(queue, cons);
|
2007-07-18 05:37:06 +04:00
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
2020-06-29 16:13:28 +03:00
|
|
|
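/* Run the attached XDP program on one received page and carry out its
 * verdict (XDP_TX, XDP_REDIRECT, XDP_PASS, XDP_DROP or XDP_ABORTED).
 */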
static u32 xennet_run_xdp(struct netfront_queue *queue, struct page *pdata,
|
|
|
|
struct xen_netif_rx_response *rx, struct bpf_prog *prog,
|
|
|
|
struct xdp_buff *xdp, bool *need_xdp_flush)
|
|
|
|
{
|
|
|
|
struct xdp_frame *xdpf;
|
|
|
|
u32 len = rx->status;
|
2020-07-02 17:22:23 +03:00
|
|
|
u32 act;
|
2020-06-29 16:13:28 +03:00
|
|
|
int err;
|
|
|
|
|
2020-12-23 00:09:28 +03:00
|
|
|
xdp_init_buff(xdp, XEN_PAGE_SIZE - XDP_PACKET_HEADROOM,
|
|
|
|
&queue->xdp_rxq);
|
2020-12-23 00:09:29 +03:00
|
|
|
xdp_prepare_buff(xdp, page_address(pdata), XDP_PACKET_HEADROOM,
|
|
|
|
len, false);
|
2020-06-29 16:13:28 +03:00
|
|
|
|
|
|
|
act = bpf_prog_run_xdp(prog, xdp);
|
|
|
|
switch (act) {
|
|
|
|
case XDP_TX:
|
|
|
|
get_page(pdata);
|
|
|
|
xdpf = xdp_convert_buff_to_frame(xdp);
|
|
|
|
err = xennet_xdp_xmit(queue->info->netdev, 1, &xdpf, 0);
|
2021-03-08 14:06:58 +03:00
|
|
|
if (unlikely(!err))
|
|
|
|
xdp_return_frame_rx_napi(xdpf);
|
|
|
|
else if (unlikely(err < 0))
|
2020-06-29 16:13:28 +03:00
|
|
|
trace_xdp_exception(queue->info->netdev, prog, act);
|
|
|
|
break;
|
|
|
|
case XDP_REDIRECT:
|
|
|
|
get_page(pdata);
|
|
|
|
err = xdp_do_redirect(queue->info->netdev, xdp, prog);
|
|
|
|
*need_xdp_flush = true;
|
|
|
|
if (unlikely(err))
|
|
|
|
trace_xdp_exception(queue->info->netdev, prog, act);
|
|
|
|
break;
|
|
|
|
case XDP_PASS:
|
|
|
|
case XDP_DROP:
|
|
|
|
break;
|
|
|
|
|
|
|
|
case XDP_ABORTED:
|
|
|
|
trace_xdp_exception(queue->info->netdev, prog, act);
|
|
|
|
break;
|
|
|
|
|
|
|
|
default:
|
|
|
|
bpf_warn_invalid_xdp_action(act);
|
|
|
|
}
|
|
|
|
|
|
|
|
return act;
|
|
|
|
}
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
static int xennet_get_responses(struct netfront_queue *queue,
|
2007-07-18 05:37:06 +04:00
|
|
|
struct netfront_rx_info *rinfo, RING_IDX rp,
|
2020-06-29 16:13:28 +03:00
|
|
|
struct sk_buff_head *list,
|
|
|
|
bool *need_xdp_flush)
|
2007-07-18 05:37:06 +04:00
|
|
|
{
|
2021-08-24 13:28:06 +03:00
|
|
|
struct xen_netif_rx_response *rx = &rinfo->rx, rx_local;
|
2020-06-29 16:13:28 +03:00
|
|
|
int max = XEN_NETIF_NR_SLOTS_MIN + (rx->status <= RX_COPY_THRESHOLD);
|
2014-06-04 13:30:44 +04:00
|
|
|
RING_IDX cons = queue->rx.rsp_cons;
|
|
|
|
struct sk_buff *skb = xennet_get_rx_skb(queue, cons);
|
2020-06-29 16:13:28 +03:00
|
|
|
struct xen_netif_extra_info *extras = rinfo->extras;
|
2014-06-04 13:30:44 +04:00
|
|
|
grant_ref_t ref = xennet_get_rx_ref(queue, cons);
|
2020-06-29 16:13:28 +03:00
|
|
|
struct device *dev = &queue->info->netdev->dev;
|
|
|
|
struct bpf_prog *xdp_prog;
|
|
|
|
struct xdp_buff xdp;
|
|
|
|
unsigned long ret;
|
2013-03-25 05:08:19 +04:00
|
|
|
int slots = 1;
|
2007-07-18 05:37:06 +04:00
|
|
|
int err = 0;
|
2020-06-29 16:13:28 +03:00
|
|
|
u32 verdict;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2011-03-15 03:06:18 +03:00
|
|
|
if (rx->flags & XEN_NETRXF_extra_info) {
|
2014-06-04 13:30:44 +04:00
|
|
|
err = xennet_get_extras(queue, extras, rp);
|
2020-06-29 16:13:28 +03:00
|
|
|
if (!err) {
|
|
|
|
if (extras[XEN_NETIF_EXTRA_TYPE_XDP - 1].type) {
|
|
|
|
struct xen_netif_extra_info *xdp;
|
|
|
|
|
|
|
|
xdp = &extras[XEN_NETIF_EXTRA_TYPE_XDP - 1];
|
|
|
|
rx->offset = xdp->u.xdp.headroom;
|
|
|
|
}
|
|
|
|
}
|
2014-06-04 13:30:44 +04:00
|
|
|
cons = queue->rx.rsp_cons;
|
2007-07-18 05:37:06 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
for (;;) {
|
|
|
|
if (unlikely(rx->status < 0 ||
|
2015-04-10 16:42:21 +03:00
|
|
|
rx->offset + rx->status > XEN_PAGE_SIZE)) {
|
2007-07-18 05:37:06 +04:00
|
|
|
if (net_ratelimit())
|
2015-06-16 22:10:46 +03:00
|
|
|
dev_warn(dev, "rx->offset: %u, size: %d\n",
|
2007-07-18 05:37:06 +04:00
|
|
|
rx->offset, rx->status);
|
2014-06-04 13:30:44 +04:00
|
|
|
xennet_move_rx_slot(queue, skb, ref);
|
2007-07-18 05:37:06 +04:00
|
|
|
err = -EINVAL;
|
|
|
|
goto next;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* This definitely indicates a bug, either in this driver or in
|
|
|
|
* the backend driver. In future this should flag the bad
|
2013-04-22 06:20:40 +04:00
|
|
|
* situation to the system controller to reboot the backend.
|
2007-07-18 05:37:06 +04:00
|
|
|
*/
|
|
|
|
if (ref == GRANT_INVALID_REF) {
|
|
|
|
if (net_ratelimit())
|
|
|
|
dev_warn(dev, "Bad rx response id %d.\n",
|
|
|
|
rx->id);
|
|
|
|
err = -EINVAL;
|
|
|
|
goto next;
|
|
|
|
}
|
|
|
|
|
|
|
|
ret = gnttab_end_foreign_access_ref(ref, 0);
|
|
|
|
BUG_ON(!ret);
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
gnttab_release_grant_reference(&queue->gref_rx_head, ref);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2020-06-29 16:13:28 +03:00
|
|
|
rcu_read_lock();
|
|
|
|
xdp_prog = rcu_dereference(queue->xdp_prog);
|
|
|
|
if (xdp_prog) {
|
|
|
|
if (!(rx->flags & XEN_NETRXF_more_data)) {
|
|
|
|
/* currently only a single page contains data */
|
|
|
|
verdict = xennet_run_xdp(queue,
|
|
|
|
skb_frag_page(&skb_shinfo(skb)->frags[0]),
|
|
|
|
rx, xdp_prog, &xdp, need_xdp_flush);
|
|
|
|
if (verdict != XDP_PASS)
|
|
|
|
err = -EINVAL;
|
|
|
|
} else {
|
|
|
|
/* drop the frame */
|
|
|
|
err = -EINVAL;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
rcu_read_unlock();
|
2007-07-18 05:37:06 +04:00
|
|
|
next:
|
2020-06-29 16:13:28 +03:00
|
|
|
__skb_queue_tail(list, skb);
|
2011-03-15 03:06:18 +03:00
|
|
|
if (!(rx->flags & XEN_NETRXF_more_data))
|
2007-07-18 05:37:06 +04:00
|
|
|
break;
|
|
|
|
|
2013-03-25 05:08:19 +04:00
|
|
|
if (cons + slots == rp) {
|
2007-07-18 05:37:06 +04:00
|
|
|
if (net_ratelimit())
|
2013-03-25 05:08:19 +04:00
|
|
|
dev_warn(dev, "Need more slots\n");
|
2007-07-18 05:37:06 +04:00
|
|
|
err = -ENOENT;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
2021-08-24 13:28:06 +03:00
|
|
|
RING_COPY_RESPONSE(&queue->rx, cons + slots, &rx_local);
|
|
|
|
rx = &rx_local;
|
2014-06-04 13:30:44 +04:00
|
|
|
skb = xennet_get_rx_skb(queue, cons + slots);
|
|
|
|
ref = xennet_get_rx_ref(queue, cons + slots);
|
2013-03-25 05:08:19 +04:00
|
|
|
slots++;
|
2007-07-18 05:37:06 +04:00
|
|
|
}
|
|
|
|
|
2013-03-25 05:08:19 +04:00
|
|
|
if (unlikely(slots > max)) {
|
2007-07-18 05:37:06 +04:00
|
|
|
if (net_ratelimit())
|
2013-04-22 06:20:40 +04:00
|
|
|
dev_warn(dev, "Too many slots\n");
|
2007-07-18 05:37:06 +04:00
|
|
|
err = -E2BIG;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (unlikely(err))
|
2021-12-16 10:24:08 +03:00
|
|
|
xennet_set_rx_rsp_cons(queue, cons + slots);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int xennet_set_skb_gso(struct sk_buff *skb,
|
|
|
|
struct xen_netif_extra_info *gso)
|
|
|
|
{
|
|
|
|
if (!gso->u.gso.size) {
|
|
|
|
if (net_ratelimit())
|
2013-06-28 08:57:49 +04:00
|
|
|
pr_warn("GSO size must not be zero\n");
|
2007-07-18 05:37:06 +04:00
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
2014-01-15 21:30:33 +04:00
|
|
|
if (gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV4 &&
|
|
|
|
gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV6) {
|
2007-07-18 05:37:06 +04:00
|
|
|
if (net_ratelimit())
|
2013-06-28 08:57:49 +04:00
|
|
|
pr_warn("Bad GSO type %d\n", gso->u.gso.type);
|
2007-07-18 05:37:06 +04:00
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
skb_shinfo(skb)->gso_size = gso->u.gso.size;
|
2014-01-15 21:30:33 +04:00
|
|
|
skb_shinfo(skb)->gso_type =
|
|
|
|
(gso->u.gso.type == XEN_NETIF_GSO_TYPE_TCPV4) ?
|
|
|
|
SKB_GSO_TCPV4 :
|
|
|
|
SKB_GSO_TCPV6;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
|
|
|
/* Header must be checked, and gso_segs computed. */
|
|
|
|
skb_shinfo(skb)->gso_type |= SKB_GSO_DODGY;
|
|
|
|
skb_shinfo(skb)->gso_segs = 0;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2019-10-01 16:56:41 +03:00
|
|
|
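/* Attach the extra response buffers queued on @list to @skb as frags,
 * pulling data into the linear area first when the frag array would
 * otherwise overflow.
 */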
static int xennet_fill_frags(struct netfront_queue *queue,
|
|
|
|
struct sk_buff *skb,
|
|
|
|
struct sk_buff_head *list)
|
2007-07-18 05:37:06 +04:00
|
|
|
{
|
2014-06-04 13:30:44 +04:00
|
|
|
RING_IDX cons = queue->rx.rsp_cons;
|
2007-07-18 05:37:06 +04:00
|
|
|
struct sk_buff *nskb;
|
|
|
|
|
|
|
|
while ((nskb = __skb_dequeue(list))) {
|
2021-08-24 13:28:06 +03:00
|
|
|
struct xen_netif_rx_response rx;
|
2011-10-05 04:28:47 +04:00
|
|
|
skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0];
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2021-08-24 13:28:06 +03:00
|
|
|
RING_COPY_RESPONSE(&queue->rx, ++cons, &rx);
|
|
|
|
|
2018-08-09 17:42:16 +03:00
|
|
|
if (skb_shinfo(skb)->nr_frags == MAX_SKB_FRAGS) {
|
2013-07-17 11:09:37 +04:00
|
|
|
unsigned int pull_to = NETFRONT_SKB_CB(skb)->pull_to;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2018-12-18 18:06:19 +03:00
|
|
|
BUG_ON(pull_to < skb_headlen(skb));
|
2013-07-17 11:09:37 +04:00
|
|
|
__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
|
|
|
|
}
|
2018-09-11 10:04:48 +03:00
|
|
|
if (unlikely(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)) {
|
2021-12-16 10:24:08 +03:00
|
|
|
xennet_set_rx_rsp_cons(queue,
|
|
|
|
++cons + skb_queue_len(list));
|
2018-09-11 10:04:48 +03:00
|
|
|
kfree_skb(nskb);
|
2019-10-01 16:56:41 +03:00
|
|
|
return -ENOENT;
|
2018-09-11 10:04:48 +03:00
|
|
|
}
|
2013-07-17 11:09:37 +04:00
|
|
|
|
2018-08-09 17:42:16 +03:00
|
|
|
skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
|
|
|
|
skb_frag_page(nfrag),
|
2021-08-24 13:28:06 +03:00
|
|
|
rx.offset, rx.status, PAGE_SIZE);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
|
|
|
skb_shinfo(nskb)->nr_frags = 0;
|
|
|
|
kfree_skb(nskb);
|
|
|
|
}
|
|
|
|
|
2021-12-16 10:24:08 +03:00
|
|
|
xennet_set_rx_rsp_cons(queue, cons);
|
2019-10-01 16:56:41 +03:00
|
|
|
|
|
|
|
return 0;
|
2007-07-18 05:37:06 +04:00
|
|
|
}
|
|
|
|
|
2011-01-27 07:14:03 +03:00
|
|
|
static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
|
2007-07-18 05:37:06 +04:00
|
|
|
{
|
2014-01-09 14:02:48 +04:00
|
|
|
bool recalculate_partial_csum = false;
|
2011-01-27 07:14:03 +03:00
|
|
|
|
|
|
|
/*
|
|
|
|
* A GSO SKB must be CHECKSUM_PARTIAL. However some buggy
|
|
|
|
* peers can fail to set NETRXF_csum_blank when sending a GSO
|
|
|
|
* frame. In this case force the SKB to CHECKSUM_PARTIAL and
|
|
|
|
* recalculate the partial checksum.
|
|
|
|
*/
|
|
|
|
if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
|
|
|
|
struct netfront_info *np = netdev_priv(dev);
|
2014-06-04 13:30:44 +04:00
|
|
|
atomic_inc(&np->rx_gso_checksum_fixup);
|
2011-01-27 07:14:03 +03:00
|
|
|
skb->ip_summed = CHECKSUM_PARTIAL;
|
2014-01-09 14:02:48 +04:00
|
|
|
recalculate_partial_csum = true;
|
2011-01-27 07:14:03 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
/* A non-CHECKSUM_PARTIAL SKB does not require setup. */
|
|
|
|
if (skb->ip_summed != CHECKSUM_PARTIAL)
|
|
|
|
return 0;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2014-01-09 14:02:48 +04:00
|
|
|
return skb_checksum_setup(skb, recalculate_partial_csum);
|
2007-07-18 05:37:06 +04:00
|
|
|
}
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
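/* Hand fully assembled skbs to the stack, fixing up checksum state
 * first; returns how many packets had to be dropped.
 */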
static int handle_incoming_queue(struct netfront_queue *queue,
|
2007-10-04 04:41:50 +04:00
|
|
|
struct sk_buff_head *rxq)
|
2007-07-18 05:37:06 +04:00
|
|
|
{
|
2015-01-13 19:42:42 +03:00
|
|
|
struct netfront_stats *rx_stats = this_cpu_ptr(queue->info->rx_stats);
|
2007-07-18 05:37:06 +04:00
|
|
|
int packets_dropped = 0;
|
|
|
|
struct sk_buff *skb;
|
|
|
|
|
|
|
|
while ((skb = __skb_dequeue(rxq)) != NULL) {
|
2012-08-22 04:26:47 +04:00
|
|
|
int pull_to = NETFRONT_SKB_CB(skb)->pull_to;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2013-07-17 11:09:37 +04:00
|
|
|
if (pull_to > skb_headlen(skb))
|
|
|
|
__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
|
2007-07-18 05:37:06 +04:00
|
|
|
|
|
|
|
/* Ethernet work: Delayed to here as it peeks the header. */
|
2014-06-04 13:30:44 +04:00
|
|
|
skb->protocol = eth_type_trans(skb, queue->info->netdev);
|
2014-02-19 22:48:34 +04:00
|
|
|
skb_reset_network_header(skb);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
if (checksum_setup(queue->info->netdev, skb)) {
|
2011-01-27 07:14:03 +03:00
|
|
|
kfree_skb(skb);
|
|
|
|
packets_dropped++;
|
2014-06-04 13:30:44 +04:00
|
|
|
queue->info->netdev->stats.rx_errors++;
|
2011-01-27 07:14:03 +03:00
|
|
|
continue;
|
2007-07-18 05:37:06 +04:00
|
|
|
}
|
|
|
|
|
2015-01-13 19:42:42 +03:00
|
|
|
u64_stats_update_begin(&rx_stats->syncp);
|
|
|
|
rx_stats->packets++;
|
|
|
|
rx_stats->bytes += skb->len;
|
|
|
|
u64_stats_update_end(&rx_stats->syncp);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
|
|
|
/* Pass it up. */
|
2014-06-04 13:30:44 +04:00
|
|
|
napi_gro_receive(&queue->napi, skb);
|
2007-07-18 05:37:06 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
return packets_dropped;
|
|
|
|
}
|
|
|
|
|
[NET]: Make NAPI polling independent of struct net_device objects.
Several devices have multiple independent RX queues per net
device, and some have a single interrupt doorbell for several
queues.
In either case, it's easier to support layouts like that if the
structure representing the poll is independent from the net
device itself.
The signature of the ->poll() call back goes from:
int foo_poll(struct net_device *dev, int *budget)
to
int foo_poll(struct napi_struct *napi, int budget)
The caller is returned the number of RX packets processed (or
the number of "NAPI credits" consumed if you want to get
abstract). The callee no longer messes around bumping
dev->quota, *budget, etc. because that is all handled in the
caller upon return.
The napi_struct is to be embedded in the device driver private data
structures.
Furthermore, it is the driver's responsibility to disable all NAPI
instances in its ->stop() device close handler. Since the
napi_struct is privatized into the driver's private data structures,
only the driver knows how to get at all of the napi_struct instances
it may have per-device.
With lots of help and suggestions from Rusty Russell, Roland Dreier,
Michael Chan, Jeff Garzik, and Jamal Hadi Salim.
Bug fixes from Thomas Graf, Roland Dreier, Peter Zijlstra,
Joseph Fannin, Scott Wood, Hans J. Koch, and Michael Chan.
[ Ported to current tree and all drivers converted. Integrated
Stephen's follow-on kerneldoc additions, and restored poll_list
handling to the old style to fix mutual exclusion issues. -DaveM ]
Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-10-04 03:41:36 +04:00
|
|
|
static int xennet_poll(struct napi_struct *napi, int budget)
|
2007-07-18 05:37:06 +04:00
|
|
|
{
|
2014-06-04 13:30:44 +04:00
|
|
|
struct netfront_queue *queue = container_of(napi, struct netfront_queue, napi);
|
|
|
|
struct net_device *dev = queue->info->netdev;
|
2007-07-18 05:37:06 +04:00
|
|
|
struct sk_buff *skb;
|
|
|
|
struct netfront_rx_info rinfo;
|
|
|
|
struct xen_netif_rx_response *rx = &rinfo.rx;
|
|
|
|
struct xen_netif_extra_info *extras = rinfo.extras;
|
|
|
|
RING_IDX i, rp;
|
2007-10-04 03:41:36 +04:00
|
|
|
int work_done;
|
2007-07-18 05:37:06 +04:00
|
|
|
struct sk_buff_head rxq;
|
|
|
|
struct sk_buff_head errq;
|
|
|
|
struct sk_buff_head tmpq;
|
|
|
|
int err;
|
2020-06-29 16:13:28 +03:00
|
|
|
bool need_xdp_flush = false;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
spin_lock(&queue->rx_lock);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
|
|
|
skb_queue_head_init(&rxq);
|
|
|
|
skb_queue_head_init(&errq);
|
|
|
|
skb_queue_head_init(&tmpq);
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
rp = queue->rx.sring->rsp_prod;
|
2021-08-24 13:28:09 +03:00
|
|
|
if (RING_RESPONSE_PROD_OVERFLOW(&queue->rx, rp)) {
|
|
|
|
dev_alert(&dev->dev, "Illegal number of responses %u\n",
|
|
|
|
rp - queue->rx.rsp_cons);
|
|
|
|
queue->info->broken = true;
|
|
|
|
spin_unlock(&queue->rx_lock);
|
|
|
|
return 0;
|
|
|
|
}
|
2007-07-18 05:37:06 +04:00
|
|
|
rmb(); /* Ensure we see queued responses up to 'rp'. */
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
i = queue->rx.rsp_cons;
|
2007-07-18 05:37:06 +04:00
|
|
|
work_done = 0;
|
|
|
|
while ((i != rp) && (work_done < budget)) {
|
2021-08-24 13:28:06 +03:00
|
|
|
RING_COPY_RESPONSE(&queue->rx, i, rx);
|
2007-07-18 05:37:06 +04:00
|
|
|
memset(extras, 0, sizeof(rinfo.extras));
|
|
|
|
|
2020-06-29 16:13:28 +03:00
|
|
|
err = xennet_get_responses(queue, &rinfo, rp, &tmpq,
|
|
|
|
&need_xdp_flush);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
|
|
|
if (unlikely(err)) {
|
|
|
|
err:
|
|
|
|
while ((skb = __skb_dequeue(&tmpq)))
|
|
|
|
__skb_queue_tail(&errq, skb);
|
2007-10-04 04:41:50 +04:00
|
|
|
dev->stats.rx_errors++;
|
2014-06-04 13:30:44 +04:00
|
|
|
i = queue->rx.rsp_cons;
|
2007-07-18 05:37:06 +04:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
skb = __skb_dequeue(&tmpq);
|
|
|
|
|
|
|
|
if (extras[XEN_NETIF_EXTRA_TYPE_GSO - 1].type) {
|
|
|
|
struct xen_netif_extra_info *gso;
|
|
|
|
gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
|
|
|
|
|
|
|
|
if (unlikely(xennet_set_skb_gso(skb, gso))) {
|
|
|
|
__skb_queue_head(&tmpq, skb);
|
2021-12-16 10:24:08 +03:00
|
|
|
xennet_set_rx_rsp_cons(queue,
|
|
|
|
queue->rx.rsp_cons +
|
|
|
|
skb_queue_len(&tmpq));
|
2007-07-18 05:37:06 +04:00
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2012-08-22 04:26:47 +04:00
|
|
|
NETFRONT_SKB_CB(skb)->pull_to = rx->status;
|
|
|
|
if (NETFRONT_SKB_CB(skb)->pull_to > RX_COPY_THRESHOLD)
|
|
|
|
NETFRONT_SKB_CB(skb)->pull_to = RX_COPY_THRESHOLD;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2019-07-30 17:40:33 +03:00
|
|
|
skb_frag_off_set(&skb_shinfo(skb)->frags[0], rx->offset);
|
2012-08-22 04:26:47 +04:00
|
|
|
skb_frag_size_set(&skb_shinfo(skb)->frags[0], rx->status);
|
|
|
|
skb->data_len = rx->status;
|
2013-07-17 11:09:37 +04:00
|
|
|
skb->len += rx->status;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2019-10-01 16:56:41 +03:00
|
|
|
if (unlikely(xennet_fill_frags(queue, skb, &tmpq)))
|
2018-09-11 10:04:48 +03:00
|
|
|
goto err;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2011-03-15 03:06:18 +03:00
|
|
|
if (rx->flags & XEN_NETRXF_csum_blank)
|
2007-07-18 05:37:06 +04:00
|
|
|
skb->ip_summed = CHECKSUM_PARTIAL;
|
2011-03-15 03:06:18 +03:00
|
|
|
else if (rx->flags & XEN_NETRXF_data_validated)
|
2007-07-18 05:37:06 +04:00
|
|
|
skb->ip_summed = CHECKSUM_UNNECESSARY;
|
|
|
|
|
|
|
|
__skb_queue_tail(&rxq, skb);
|
|
|
|
|
2021-12-16 10:24:08 +03:00
|
|
|
i = queue->rx.rsp_cons + 1;
|
|
|
|
xennet_set_rx_rsp_cons(queue, i);
|
2007-07-18 05:37:06 +04:00
|
|
|
work_done++;
|
|
|
|
}
|
2020-06-29 16:13:28 +03:00
|
|
|
if (need_xdp_flush)
|
|
|
|
xdp_do_flush();
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2008-05-22 14:09:06 +04:00
|
|
|
__skb_queue_purge(&errq);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
work_done -= handle_incoming_queue(queue, &rxq);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
xennet_alloc_rx_buffers(queue);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
|
|
|
if (work_done < budget) {
|
2007-10-04 03:41:36 +04:00
|
|
|
int more_to_do = 0;
|
|
|
|
|
2017-01-30 19:22:01 +03:00
|
|
|
napi_complete_done(napi, work_done);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
RING_FINAL_CHECK_FOR_RESPONSES(&queue->rx, more_to_do);
|
2014-12-16 21:59:46 +03:00
|
|
|
if (more_to_do)
|
|
|
|
napi_schedule(napi);
|
2007-07-18 05:37:06 +04:00
|
|
|
}
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
spin_unlock(&queue->rx_lock);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2007-10-04 03:41:36 +04:00
|
|
|
return work_done;
|
2007-07-18 05:37:06 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
static int xennet_change_mtu(struct net_device *dev, int mtu)
|
|
|
|
{
|
xen-netfront: transmit fully GSO-sized packets
xen-netfront limits transmitted skbs to at most 44 segments. However,
GSO permits up to 65536 bytes, which means a maximum of 45 segments of 1448
bytes each. This slight reduction in the size of packets means a slight loss in
efficiency.
Since c/s 9ecd1a75d, xen-netfront sets gso_max_size to
XEN_NETIF_MAX_TX_SIZE - MAX_TCP_HEADER,
where XEN_NETIF_MAX_TX_SIZE is 65535 bytes.
The calculation used by tcp_tso_autosize (and also tcp_xmit_size_goal since c/s
6c09fa09d) in determining when to split an skb into two is
sk->sk_gso_max_size - 1 - MAX_TCP_HEADER.
So the maximum permitted size of an skb is calculated to be
(XEN_NETIF_MAX_TX_SIZE - MAX_TCP_HEADER) - 1 - MAX_TCP_HEADER.
Intuitively, this looks like the wrong formula -- we don't need two TCP headers.
Instead, there is no need to deviate from the default gso_max_size of 65536 as
this already accommodates the size of the header.
Currently, the largest skb transmitted by netfront is 63712 bytes (44 segments
of 1448 bytes each), as observed via tcpdump. This patch makes netfront send
skbs of up to 65160 bytes (45 segments of 1448 bytes each).
Similarly, the maximum allowable mtu does not need to subtract MAX_TCP_HEADER as
it relates to the size of the whole packet, including the header.
Fixes: 9ecd1a75d977 ("xen-netfront: reduce gso_max_size to account for max TCP header")
Signed-off-by: Jonathan Davies <jonathan.davies@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-03-31 13:05:15 +03:00
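/* With the numbers from the commit message above and a 1448-byte MSS:
 * the old gso_max_size formula allowed 44 segments (63712 bytes on the
 * wire), while the default gso_max_size of 65536 already accounts for
 * the headers and allows 45 segments (65160 bytes).
 */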
|
|
|
int max = xennet_can_sg(dev) ? XEN_NETIF_MAX_TX_SIZE : ETH_DATA_LEN;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
|
|
|
if (mtu > max)
|
|
|
|
return -EINVAL;
|
|
|
|
dev->mtu = mtu;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2017-01-07 06:12:52 +03:00
|
|
|
static void xennet_get_stats64(struct net_device *dev,
|
|
|
|
struct rtnl_link_stats64 *tot)
|
2011-06-21 09:35:31 +04:00
|
|
|
{
|
|
|
|
struct netfront_info *np = netdev_priv(dev);
|
|
|
|
int cpu;
|
|
|
|
|
|
|
|
for_each_possible_cpu(cpu) {
|
2015-01-13 19:42:42 +03:00
|
|
|
struct netfront_stats *rx_stats = per_cpu_ptr(np->rx_stats, cpu);
|
|
|
|
struct netfront_stats *tx_stats = per_cpu_ptr(np->tx_stats, cpu);
|
2011-06-21 09:35:31 +04:00
|
|
|
u64 rx_packets, rx_bytes, tx_packets, tx_bytes;
|
|
|
|
unsigned int start;
|
|
|
|
|
|
|
|
do {
|
2015-01-13 19:42:42 +03:00
|
|
|
start = u64_stats_fetch_begin_irq(&tx_stats->syncp);
|
|
|
|
tx_packets = tx_stats->packets;
|
|
|
|
tx_bytes = tx_stats->bytes;
|
|
|
|
} while (u64_stats_fetch_retry_irq(&tx_stats->syncp, start));
|
2011-06-21 09:35:31 +04:00
|
|
|
|
2015-01-13 19:42:42 +03:00
|
|
|
do {
|
|
|
|
start = u64_stats_fetch_begin_irq(&rx_stats->syncp);
|
|
|
|
rx_packets = rx_stats->packets;
|
|
|
|
rx_bytes = rx_stats->bytes;
|
|
|
|
} while (u64_stats_fetch_retry_irq(&rx_stats->syncp, start));
|
2011-06-21 09:35:31 +04:00
|
|
|
|
|
|
|
tot->rx_packets += rx_packets;
|
|
|
|
tot->tx_packets += tx_packets;
|
|
|
|
tot->rx_bytes += rx_bytes;
|
|
|
|
tot->tx_bytes += tx_bytes;
|
|
|
|
}
|
|
|
|
|
|
|
|
tot->rx_errors = dev->stats.rx_errors;
|
|
|
|
tot->tx_dropped = dev->stats.tx_dropped;
|
|
|
|
}
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
static void xennet_release_tx_bufs(struct netfront_queue *queue)
|
2007-07-18 05:37:06 +04:00
|
|
|
{
|
|
|
|
struct sk_buff *skb;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
for (i = 0; i < NET_TX_RING_SIZE; i++) {
|
|
|
|
/* Skip over entries which are actually freelist references */
|
2021-08-24 13:28:08 +03:00
|
|
|
if (!queue->tx_skbs[i])
|
2007-07-18 05:37:06 +04:00
|
|
|
continue;
|
|
|
|
|
2021-08-24 13:28:08 +03:00
|
|
|
skb = queue->tx_skbs[i];
|
|
|
|
queue->tx_skbs[i] = NULL;
|
2014-06-04 13:30:44 +04:00
|
|
|
get_page(queue->grant_tx_page[i]);
|
|
|
|
gnttab_end_foreign_access(queue->grant_tx_ref[i],
|
2014-01-28 07:35:42 +04:00
|
|
|
GNTMAP_readonly,
|
2014-06-04 13:30:44 +04:00
|
|
|
(unsigned long)page_address(queue->grant_tx_page[i]));
|
|
|
|
queue->grant_tx_page[i] = NULL;
|
|
|
|
queue->grant_tx_ref[i] = GRANT_INVALID_REF;
|
2021-08-24 13:28:08 +03:00
|
|
|
add_id_to_list(&queue->tx_skb_freelist, queue->tx_link, i);
|
2007-07-18 05:37:06 +04:00
|
|
|
dev_kfree_skb_irq(skb);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
static void xennet_release_rx_bufs(struct netfront_queue *queue)
|
2007-07-18 05:37:06 +04:00
|
|
|
{
|
|
|
|
int id, ref;
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
spin_lock_bh(&queue->rx_lock);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
|
|
|
for (id = 0; id < NET_RX_RING_SIZE; id++) {
|
2014-01-28 07:35:42 +04:00
|
|
|
struct sk_buff *skb;
|
|
|
|
struct page *page;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
skb = queue->rx_skbs[id];
|
2014-01-28 07:35:42 +04:00
|
|
|
if (!skb)
|
2007-07-18 05:37:06 +04:00
|
|
|
continue;
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
ref = queue->grant_rx_ref[id];
|
2014-01-28 07:35:42 +04:00
|
|
|
if (ref == GRANT_INVALID_REF)
|
|
|
|
continue;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2014-01-28 07:35:42 +04:00
|
|
|
page = skb_frag_page(&skb_shinfo(skb)->frags[0]);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2014-01-28 07:35:42 +04:00
|
|
|
/* gnttab_end_foreign_access() needs a page ref until
|
|
|
|
* foreign access is ended (which may be deferred).
|
|
|
|
*/
|
|
|
|
get_page(page);
|
|
|
|
gnttab_end_foreign_access(ref, 0,
|
|
|
|
(unsigned long)page_address(page));
|
2014-06-04 13:30:44 +04:00
|
|
|
queue->grant_rx_ref[id] = GRANT_INVALID_REF;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2014-01-28 07:35:42 +04:00
|
|
|
kfree_skb(skb);
|
2007-07-18 05:37:06 +04:00
|
|
|
}
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
spin_unlock_bh(&queue->rx_lock);
|
2007-07-18 05:37:06 +04:00
|
|
|
}
|
|
|
|
|
2011-11-15 19:29:55 +04:00
|
|
|
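/* Drop offload features the backend did not advertise in xenstore. */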
static netdev_features_t xennet_fix_features(struct net_device *dev,
|
|
|
|
netdev_features_t features)
|
2011-04-04 04:21:00 +04:00
|
|
|
{
|
|
|
|
struct netfront_info *np = netdev_priv(dev);
|
|
|
|
|
2016-10-31 16:58:41 +03:00
|
|
|
if (features & NETIF_F_SG &&
|
|
|
|
!xenbus_read_unsigned(np->xbdev->otherend, "feature-sg", 0))
|
|
|
|
features &= ~NETIF_F_SG;
|
2011-04-04 04:21:00 +04:00
|
|
|
|
2016-10-31 16:58:41 +03:00
|
|
|
if (features & NETIF_F_IPV6_CSUM &&
|
|
|
|
!xenbus_read_unsigned(np->xbdev->otherend,
|
|
|
|
"feature-ipv6-csum-offload", 0))
|
|
|
|
features &= ~NETIF_F_IPV6_CSUM;
|
2011-04-04 04:21:00 +04:00
|
|
|
|
2016-10-31 16:58:41 +03:00
|
|
|
if (features & NETIF_F_TSO &&
|
|
|
|
!xenbus_read_unsigned(np->xbdev->otherend, "feature-gso-tcpv4", 0))
|
|
|
|
features &= ~NETIF_F_TSO;
|
2011-04-04 04:21:00 +04:00
|
|
|
|
2016-10-31 16:58:41 +03:00
|
|
|
if (features & NETIF_F_TSO6 &&
|
|
|
|
!xenbus_read_unsigned(np->xbdev->otherend, "feature-gso-tcpv6", 0))
|
|
|
|
features &= ~NETIF_F_TSO6;
|
2014-01-15 21:30:33 +04:00
|
|
|
|
2011-04-04 04:21:00 +04:00
|
|
|
return features;
|
|
|
|
}
|
|
|
|
|
2011-11-15 19:29:55 +04:00
|
|
|
static int xennet_set_features(struct net_device *dev,
|
|
|
|
netdev_features_t features)
|
2011-04-04 04:21:00 +04:00
|
|
|
{
|
|
|
|
if (!(features & NETIF_F_SG) && dev->mtu > ETH_DATA_LEN) {
|
|
|
|
netdev_info(dev, "Reducing MTU because no SG offload");
|
|
|
|
dev->mtu = ETH_DATA_LEN;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2021-12-16 10:24:08 +03:00
|
|
|
static bool xennet_handle_tx(struct netfront_queue *queue, unsigned int *eoi)
|
2012-01-23 12:24:43 +04:00
|
|
|
{
|
|
|
|
unsigned long flags;
|
|
|
|
|
2021-12-16 10:24:08 +03:00
|
|
|
if (unlikely(queue->info->broken))
|
|
|
|
return false;
|
2021-08-24 13:28:09 +03:00
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
spin_lock_irqsave(&queue->tx_lock, flags);
|
2021-12-16 10:24:08 +03:00
|
|
|
if (xennet_tx_buf_gc(queue))
|
|
|
|
*eoi = 0;
|
2014-06-04 13:30:44 +04:00
|
|
|
spin_unlock_irqrestore(&queue->tx_lock, flags);
|
2012-01-23 12:24:43 +04:00
|
|
|
|
2021-12-16 10:24:08 +03:00
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
|
|
|
|
{
|
|
|
|
unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
|
|
|
|
|
|
|
|
if (likely(xennet_handle_tx(dev_id, &eoiflag)))
|
|
|
|
xen_irq_lateeoi(irq, eoiflag);
|
|
|
|
|
2013-05-22 10:34:46 +04:00
|
|
|
return IRQ_HANDLED;
|
|
|
|
}
|
|
|
|
|
2021-12-16 10:24:08 +03:00
|
|
|
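/* Check the rx ring for new responses and schedule NAPI if there are
 * any; a shrinking count of unconsumed responses means the backend
 * moved the producer index backwards, so mark the device broken.
 */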
static bool xennet_handle_rx(struct netfront_queue *queue, unsigned int *eoi)
|
2013-05-22 10:34:46 +04:00
|
|
|
{
|
2021-12-16 10:24:08 +03:00
|
|
|
unsigned int work_queued;
|
|
|
|
unsigned long flags;
|
2013-05-22 10:34:46 +04:00
|
|
|
|
2021-12-16 10:24:08 +03:00
|
|
|
if (unlikely(queue->info->broken))
|
|
|
|
return false;
|
|
|
|
|
|
|
|
spin_lock_irqsave(&queue->rx_cons_lock, flags);
|
|
|
|
work_queued = RING_HAS_UNCONSUMED_RESPONSES(&queue->rx);
|
|
|
|
if (work_queued > queue->rx_rsp_unconsumed) {
|
|
|
|
queue->rx_rsp_unconsumed = work_queued;
|
|
|
|
*eoi = 0;
|
|
|
|
} else if (unlikely(work_queued < queue->rx_rsp_unconsumed)) {
|
|
|
|
const struct device *dev = &queue->info->netdev->dev;
|
|
|
|
|
|
|
|
spin_unlock_irqrestore(&queue->rx_cons_lock, flags);
|
|
|
|
dev_alert(dev, "RX producer index going backwards\n");
|
|
|
|
dev_alert(dev, "Disabled for further use\n");
|
|
|
|
queue->info->broken = true;
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
spin_unlock_irqrestore(&queue->rx_cons_lock, flags);
|
2021-08-24 13:28:09 +03:00
|
|
|
|
2021-12-16 10:24:08 +03:00
|
|
|
if (likely(netif_carrier_ok(queue->info->netdev) && work_queued))
|
2014-06-18 13:47:27 +04:00
|
|
|
napi_schedule(&queue->napi);
|
2012-01-23 12:24:43 +04:00
|
|
|
|
2021-12-16 10:24:08 +03:00
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
|
|
|
|
{
|
|
|
|
unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
|
|
|
|
|
|
|
|
if (likely(xennet_handle_rx(dev_id, &eoiflag)))
|
|
|
|
xen_irq_lateeoi(irq, eoiflag);
|
|
|
|
|
2013-05-22 10:34:46 +04:00
|
|
|
return IRQ_HANDLED;
|
|
|
|
}
|
2012-01-23 12:24:43 +04:00
|
|
|
|
2013-05-22 10:34:46 +04:00
|
|
|
static irqreturn_t xennet_interrupt(int irq, void *dev_id)
|
|
|
|
{
|
2021-12-16 10:24:08 +03:00
|
|
|
unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
|
|
|
|
|
|
|
|
if (xennet_handle_tx(dev_id, &eoiflag) &&
|
|
|
|
xennet_handle_rx(dev_id, &eoiflag))
|
|
|
|
xen_irq_lateeoi(irq, eoiflag);
|
|
|
|
|
2012-01-23 12:24:43 +04:00
|
|
|
return IRQ_HANDLED;
|
|
|
|
}
|
|
|
|
|
|
|
|
#ifdef CONFIG_NET_POLL_CONTROLLER
|
|
|
|
static void xennet_poll_controller(struct net_device *dev)
|
|
|
|
{
|
2014-06-04 13:30:44 +04:00
|
|
|
/* Poll each queue */
|
|
|
|
struct netfront_info *info = netdev_priv(dev);
|
|
|
|
unsigned int num_queues = dev->real_num_tx_queues;
|
|
|
|
unsigned int i;
|
2021-08-24 13:28:09 +03:00
|
|
|
|
|
|
|
if (info->broken)
|
|
|
|
return;
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
for (i = 0; i < num_queues; ++i)
|
|
|
|
xennet_interrupt(0, &info->queues[i]);
|
2012-01-23 12:24:43 +04:00
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2020-06-29 16:13:28 +03:00
|
|
|
#define NETBACK_XDP_HEADROOM_DISABLE 0
|
|
|
|
#define NETBACK_XDP_HEADROOM_ENABLE 1
|
|
|
|
|
|
|
|
static int talk_to_netback_xdp(struct netfront_info *np, int xdp)
|
|
|
|
{
|
|
|
|
int err;
|
|
|
|
unsigned short headroom;
|
|
|
|
|
|
|
|
headroom = xdp ? XDP_PACKET_HEADROOM : 0;
|
|
|
|
err = xenbus_printf(XBT_NIL, np->xbdev->nodename,
|
|
|
|
"xdp-headroom", "%hu",
|
|
|
|
headroom);
|
|
|
|
if (err)
|
|
|
|
pr_warn("Error writing xdp-headroom\n");
|
|
|
|
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int xennet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
|
|
|
|
struct netlink_ext_ack *extack)
|
|
|
|
{
|
|
|
|
unsigned long max_mtu = XEN_PAGE_SIZE - XDP_PACKET_HEADROOM;
|
|
|
|
struct netfront_info *np = netdev_priv(dev);
|
|
|
|
struct bpf_prog *old_prog;
|
|
|
|
unsigned int i, err;
|
|
|
|
|
|
|
|
if (dev->mtu > max_mtu) {
|
|
|
|
netdev_warn(dev, "XDP requires MTU less than %lu\n", max_mtu);
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!np->netback_has_xdp_headroom)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
xenbus_switch_state(np->xbdev, XenbusStateReconfiguring);
|
|
|
|
|
|
|
|
err = talk_to_netback_xdp(np, prog ? NETBACK_XDP_HEADROOM_ENABLE :
|
|
|
|
NETBACK_XDP_HEADROOM_DISABLE);
|
|
|
|
if (err)
|
|
|
|
return err;
|
|
|
|
|
|
|
|
/* avoid the race with XDP headroom adjustment */
|
|
|
|
wait_event(module_wq,
|
|
|
|
xenbus_read_driver_state(np->xbdev->otherend) ==
|
|
|
|
XenbusStateReconfigured);
|
|
|
|
np->netfront_xdp_enabled = true;
|
|
|
|
|
|
|
|
old_prog = rtnl_dereference(np->queues[0].xdp_prog);
|
|
|
|
|
|
|
|
if (prog)
|
|
|
|
bpf_prog_add(prog, dev->real_num_tx_queues);
|
|
|
|
|
|
|
|
for (i = 0; i < dev->real_num_tx_queues; ++i)
|
|
|
|
rcu_assign_pointer(np->queues[i].xdp_prog, prog);
|
|
|
|
|
|
|
|
if (old_prog)
|
|
|
|
for (i = 0; i < dev->real_num_tx_queues; ++i)
|
|
|
|
bpf_prog_put(old_prog);
|
|
|
|
|
|
|
|
xenbus_switch_state(np->xbdev, XenbusStateConnected);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int xennet_xdp(struct net_device *dev, struct netdev_bpf *xdp)
|
|
|
|
{
|
2021-08-24 13:28:09 +03:00
|
|
|
struct netfront_info *np = netdev_priv(dev);
|
|
|
|
|
|
|
|
if (np->broken)
|
|
|
|
return -ENODEV;
|
|
|
|
|
2020-06-29 16:13:28 +03:00
|
|
|
switch (xdp->command) {
|
|
|
|
case XDP_SETUP_PROG:
|
|
|
|
return xennet_xdp_set(dev, xdp->prog, xdp->extack);
|
|
|
|
default:
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2009-01-06 21:44:55 +03:00
|
|
|
static const struct net_device_ops xennet_netdev_ops = {
|
2022-02-24 00:19:54 +03:00
|
|
|
.ndo_uninit = xennet_uninit,
|
2009-01-06 21:44:55 +03:00
|
|
|
.ndo_open = xennet_open,
|
|
|
|
.ndo_stop = xennet_close,
|
|
|
|
.ndo_start_xmit = xennet_start_xmit,
|
|
|
|
.ndo_change_mtu = xennet_change_mtu,
|
2011-06-21 09:35:31 +04:00
|
|
|
.ndo_get_stats64 = xennet_get_stats64,
|
2009-01-06 21:44:55 +03:00
|
|
|
.ndo_set_mac_address = eth_mac_addr,
|
|
|
|
.ndo_validate_addr = eth_validate_addr,
|
2011-03-31 05:01:35 +04:00
|
|
|
.ndo_fix_features = xennet_fix_features,
|
|
|
|
.ndo_set_features = xennet_set_features,
|
2014-06-04 13:30:44 +04:00
|
|
|
.ndo_select_queue = xennet_select_queue,
|
2020-06-29 16:13:28 +03:00
|
|
|
.ndo_bpf = xennet_xdp,
|
|
|
|
.ndo_xdp_xmit = xennet_xdp_xmit,
|
2012-01-23 12:24:43 +04:00
|
|
|
#ifdef CONFIG_NET_POLL_CONTROLLER
|
|
|
|
.ndo_poll_controller = xennet_poll_controller,
|
|
|
|
#endif
|
2009-01-06 21:44:55 +03:00
|
|
|
};
|
|
|
|
|
2015-01-13 19:42:42 +03:00
|
|
|
static void xennet_free_netdev(struct net_device *netdev)
|
|
|
|
{
|
|
|
|
struct netfront_info *np = netdev_priv(netdev);
|
|
|
|
|
|
|
|
free_percpu(np->rx_stats);
|
|
|
|
free_percpu(np->tx_stats);
|
|
|
|
free_netdev(netdev);
|
|
|
|
}
|
|
|
|
|
2012-12-03 18:24:22 +04:00
|
|
|
static struct net_device *xennet_create_dev(struct xenbus_device *dev)
|
2007-07-18 05:37:06 +04:00
|
|
|
{
|
2014-06-04 13:30:44 +04:00
|
|
|
int err;
|
2007-07-18 05:37:06 +04:00
|
|
|
struct net_device *netdev;
|
|
|
|
struct netfront_info *np;
|
|
|
|
|
2014-06-04 13:30:45 +04:00
|
|
|
netdev = alloc_etherdev_mq(sizeof(struct netfront_info), xennet_max_queues);
|
2012-01-29 17:47:52 +04:00
|
|
|
if (!netdev)
|
2007-07-18 05:37:06 +04:00
|
|
|
return ERR_PTR(-ENOMEM);
|
|
|
|
|
|
|
|
np = netdev_priv(netdev);
|
|
|
|
np->xbdev = dev;
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
np->queues = NULL;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2011-06-21 09:35:31 +04:00
|
|
|
err = -ENOMEM;
|
2015-01-13 19:42:42 +03:00
|
|
|
np->rx_stats = netdev_alloc_pcpu_stats(struct netfront_stats);
|
|
|
|
if (np->rx_stats == NULL)
|
|
|
|
goto exit;
|
|
|
|
np->tx_stats = netdev_alloc_pcpu_stats(struct netfront_stats);
|
|
|
|
if (np->tx_stats == NULL)
|
2011-06-21 09:35:31 +04:00
|
|
|
goto exit;
|
|
|
|
|
2009-01-06 21:44:55 +03:00
|
|
|
netdev->netdev_ops = &xennet_netdev_ops;
|
|
|
|
|
2011-03-31 05:01:35 +04:00
|
|
|
netdev->features = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
|
|
|
|
NETIF_F_GSO_ROBUST;
|
2014-01-15 21:30:33 +04:00
|
|
|
netdev->hw_features = NETIF_F_SG |
|
|
|
|
NETIF_F_IPV6_CSUM |
|
|
|
|
NETIF_F_TSO | NETIF_F_TSO6;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2011-04-04 22:07:57 +04:00
|
|
|
/*
|
|
|
|
* Assume that all hw features are available for now. This set
|
|
|
|
* will be adjusted by the call to netdev_update_features() in
|
|
|
|
* xennet_connect() which is the earliest point where we can
|
|
|
|
* negotiate with the backend regarding supported features.
|
|
|
|
*/
|
|
|
|
netdev->features |= netdev->hw_features;
|
|
|
|
|
2014-05-11 04:12:32 +04:00
|
|
|
netdev->ethtool_ops = &xennet_ethtool_ops;
|
2017-10-16 16:20:32 +03:00
|
|
|
netdev->min_mtu = ETH_MIN_MTU;
|
2016-10-20 20:55:21 +03:00
|
|
|
netdev->max_mtu = XEN_NETIF_MAX_TX_SIZE;
|
2007-07-18 05:37:06 +04:00
|
|
|
SET_NETDEV_DEV(netdev, &dev->dev);
|
|
|
|
|
|
|
|
np->netdev = netdev;
|
2020-06-29 16:13:28 +03:00
|
|
|
np->netfront_xdp_enabled = false;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
|
|
|
netif_carrier_off(netdev);
|
|
|
|
|
2020-07-24 11:59:10 +03:00
|
|
|
do {
|
|
|
|
xenbus_switch_state(dev, XenbusStateInitialising);
|
|
|
|
err = wait_event_timeout(module_wq,
|
|
|
|
xenbus_read_driver_state(dev->otherend) !=
|
|
|
|
XenbusStateClosed &&
|
|
|
|
xenbus_read_driver_state(dev->otherend) !=
|
|
|
|
XenbusStateUnknown, XENNET_TIMEOUT);
|
|
|
|
} while (!err);
|
|
|
|
|
2007-07-18 05:37:06 +04:00
|
|
|
return netdev;
|
|
|
|
|
|
|
|
exit:
|
2015-01-13 19:42:42 +03:00
|
|
|
xennet_free_netdev(netdev);
|
2007-07-18 05:37:06 +04:00
|
|
|
return ERR_PTR(err);
|
|
|
|
}
|
|
|
|
|
2021-01-15 23:09:03 +03:00
|
|
|
/*
|
2007-07-18 05:37:06 +04:00
|
|
|
* Entry point to this code when a new device is created. Allocate the basic
|
|
|
|
* structures and the ring buffers for communication with the backend, and
|
|
|
|
* inform the backend of the appropriate details for those.
|
|
|
|
*/
|
2012-12-03 18:24:22 +04:00
|
|
|
static int netfront_probe(struct xenbus_device *dev,
|
2012-12-06 18:30:56 +04:00
|
|
|
const struct xenbus_device_id *id)
|
2007-07-18 05:37:06 +04:00
|
|
|
{
|
|
|
|
int err;
|
|
|
|
struct net_device *netdev;
|
|
|
|
struct netfront_info *info;
|
|
|
|
|
|
|
|
netdev = xennet_create_dev(dev);
|
|
|
|
if (IS_ERR(netdev)) {
|
|
|
|
err = PTR_ERR(netdev);
|
|
|
|
xenbus_dev_fatal(dev, err, "creating netdev");
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
|
|
|
info = netdev_priv(netdev);
|
2009-05-04 23:40:54 +04:00
|
|
|
dev_set_drvdata(&dev->dev, info);
|
2015-02-04 16:38:55 +03:00
|
|
|
#ifdef CONFIG_SYSFS
|
|
|
|
info->netdev->sysfs_groups[0] = &xennet_dev_group;
|
|
|
|
#endif
|
2007-07-18 05:37:06 +04:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void xennet_end_access(int ref, void *page)
|
|
|
|
{
|
|
|
|
/* This frees the page as a side-effect */
|
|
|
|
if (ref != GRANT_INVALID_REF)
|
|
|
|
gnttab_end_foreign_access(ref, 0, (unsigned long)page);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void xennet_disconnect_backend(struct netfront_info *info)
|
|
|
|
{
|
2014-06-04 13:30:44 +04:00
|
|
|
unsigned int i = 0;
|
|
|
|
unsigned int num_queues = info->netdev->real_num_tx_queues;
|
|
|
|
|
2014-07-02 19:09:15 +04:00
|
|
|
netif_carrier_off(info->netdev);
|
|
|
|
|
2015-08-20 02:14:20 +03:00
|
|
|
for (i = 0; i < num_queues && info->queues; ++i) {
|
2014-06-18 13:47:27 +04:00
|
|
|
struct netfront_queue *queue = &info->queues[i];
|
|
|
|
|
2017-01-30 20:45:46 +03:00
|
|
|
del_timer_sync(&queue->rx_refill_timer);
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
if (queue->tx_irq && (queue->tx_irq == queue->rx_irq))
|
|
|
|
unbind_from_irqhandler(queue->tx_irq, queue);
|
|
|
|
if (queue->tx_irq && (queue->tx_irq != queue->rx_irq)) {
|
|
|
|
unbind_from_irqhandler(queue->tx_irq, queue);
|
|
|
|
unbind_from_irqhandler(queue->rx_irq, queue);
|
|
|
|
}
|
|
|
|
queue->tx_evtchn = queue->rx_evtchn = 0;
|
|
|
|
queue->tx_irq = queue->rx_irq = 0;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2015-08-27 19:28:46 +03:00
|
|
|
if (netif_running(info->netdev))
|
|
|
|
napi_synchronize(&queue->napi);
|
2014-07-02 19:09:15 +04:00
|
|
|
|
2014-07-31 20:38:23 +04:00
|
|
|
xennet_release_tx_bufs(queue);
|
|
|
|
xennet_release_rx_bufs(queue);
|
|
|
|
gnttab_free_grant_references(queue->gref_tx_head);
|
|
|
|
gnttab_free_grant_references(queue->gref_rx_head);
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
/* End access and free the pages */
|
|
|
|
xennet_end_access(queue->tx_ring_ref, queue->tx.sring);
|
|
|
|
xennet_end_access(queue->rx_ring_ref, queue->rx.sring);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
queue->tx_ring_ref = GRANT_INVALID_REF;
|
|
|
|
queue->rx_ring_ref = GRANT_INVALID_REF;
|
|
|
|
queue->tx.sring = NULL;
|
|
|
|
queue->rx.sring = NULL;
|
2020-06-29 16:13:28 +03:00
|
|
|
|
|
|
|
page_pool_destroy(queue->page_pool);
|
2014-06-04 13:30:44 +04:00
|
|
|
}
|
2007-07-18 05:37:06 +04:00
|
|
|
}
|
|
|
|
|
2021-01-15 23:09:03 +03:00
|
|
|
/*
|
2007-07-18 05:37:06 +04:00
|
|
|
* We are reconnecting to the backend, due to a suspend/resume, or a backend
|
|
|
|
* driver restart. We tear down our netif structure and recreate it, but
|
|
|
|
* leave the device-layer structures intact so that this is transparent to the
|
|
|
|
* rest of the kernel.
|
|
|
|
*/
|
|
|
|
static int netfront_resume(struct xenbus_device *dev)
|
|
|
|
{
|
2009-05-04 23:40:54 +04:00
|
|
|
struct netfront_info *info = dev_get_drvdata(&dev->dev);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
|
|
|
dev_dbg(&dev->dev, "%s\n", dev->nodename);
|
|
|
|
|
2021-10-23 02:31:39 +03:00
|
|
|
netif_tx_lock_bh(info->netdev);
|
|
|
|
netif_device_detach(info->netdev);
|
|
|
|
netif_tx_unlock_bh(info->netdev);
|
|
|
|
|
2007-07-18 05:37:06 +04:00
|
|
|
xennet_disconnect_backend(info);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
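/* Parse the frontend's XenStore "mac" node: six colon-separated hex
 * octets, e.g. "00:16:3e:12:34:56" (illustrative value; 00:16:3e is the
 * Xen-assigned OUI).
 */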
static int xen_net_read_mac(struct xenbus_device *dev, u8 mac[])
|
|
|
|
{
|
|
|
|
char *s, *e, *macstr;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
macstr = s = xenbus_read(XBT_NIL, dev->nodename, "mac", NULL);
|
|
|
|
if (IS_ERR(macstr))
|
|
|
|
return PTR_ERR(macstr);
|
|
|
|
|
|
|
|
for (i = 0; i < ETH_ALEN; i++) {
|
|
|
|
mac[i] = simple_strtoul(s, &e, 16);
|
|
|
|
if ((s == e) || (*e != ((i == ETH_ALEN-1) ? '\0' : ':'))) {
|
|
|
|
kfree(macstr);
|
|
|
|
return -ENOENT;
|
|
|
|
}
|
|
|
|
s = e+1;
|
|
|
|
}
|
|
|
|
|
|
|
|
kfree(macstr);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
static int setup_netfront_single(struct netfront_queue *queue)
|
2013-05-22 10:34:46 +04:00
|
|
|
{
|
|
|
|
int err;
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
|
2013-05-22 10:34:46 +04:00
|
|
|
if (err < 0)
|
|
|
|
goto fail;
|
|
|
|
|
2021-12-16 10:24:08 +03:00
|
|
|
err = bind_evtchn_to_irqhandler_lateeoi(queue->tx_evtchn,
|
|
|
|
xennet_interrupt, 0,
|
|
|
|
queue->info->netdev->name,
|
|
|
|
queue);
|
2013-05-22 10:34:46 +04:00
|
|
|
if (err < 0)
|
|
|
|
goto bind_fail;
|
2014-06-04 13:30:44 +04:00
|
|
|
queue->rx_evtchn = queue->tx_evtchn;
|
|
|
|
queue->rx_irq = queue->tx_irq = err;
|
2013-05-22 10:34:46 +04:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
bind_fail:
|
2014-06-04 13:30:44 +04:00
|
|
|
xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
|
|
|
|
queue->tx_evtchn = 0;
|
2013-05-22 10:34:46 +04:00
|
|
|
fail:
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
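/* Split mode binds separate TX and RX event channels so the two IRQs can
 * be steered to different CPUs; they appear under the names built below,
 * e.g. "vif0-q0-tx" and "vif0-q0-rx" (illustrative device/queue ids).
 */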
static int setup_netfront_split(struct netfront_queue *queue)
|
2013-05-22 10:34:46 +04:00
|
|
|
{
|
|
|
|
int err;
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
|
2013-05-22 10:34:46 +04:00
|
|
|
if (err < 0)
|
|
|
|
goto fail;
|
2014-06-04 13:30:44 +04:00
|
|
|
err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->rx_evtchn);
|
2013-05-22 10:34:46 +04:00
|
|
|
if (err < 0)
|
|
|
|
goto alloc_rx_evtchn_fail;
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
|
|
|
|
"%s-tx", queue->name);
|
2021-12-16 10:24:08 +03:00
|
|
|
err = bind_evtchn_to_irqhandler_lateeoi(queue->tx_evtchn,
|
|
|
|
xennet_tx_interrupt, 0,
|
|
|
|
queue->tx_irq_name, queue);
|
2013-05-22 10:34:46 +04:00
|
|
|
if (err < 0)
|
|
|
|
goto bind_tx_fail;
|
2014-06-04 13:30:44 +04:00
|
|
|
queue->tx_irq = err;
|
2013-05-22 10:34:46 +04:00
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
|
|
|
|
"%s-rx", queue->name);
|
2021-12-16 10:24:08 +03:00
|
|
|
err = bind_evtchn_to_irqhandler_lateeoi(queue->rx_evtchn,
|
|
|
|
xennet_rx_interrupt, 0,
|
|
|
|
queue->rx_irq_name, queue);
|
2013-05-22 10:34:46 +04:00
|
|
|
if (err < 0)
|
|
|
|
goto bind_rx_fail;
|
2014-06-04 13:30:44 +04:00
|
|
|
queue->rx_irq = err;
|
2013-05-22 10:34:46 +04:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
bind_rx_fail:
|
2014-06-04 13:30:44 +04:00
|
|
|
unbind_from_irqhandler(queue->tx_irq, queue);
|
|
|
|
queue->tx_irq = 0;
|
2013-05-22 10:34:46 +04:00
|
|
|
bind_tx_fail:
|
2014-06-04 13:30:44 +04:00
|
|
|
xenbus_free_evtchn(queue->info->xbdev, queue->rx_evtchn);
|
|
|
|
queue->rx_evtchn = 0;
|
2013-05-22 10:34:46 +04:00
|
|
|
alloc_rx_evtchn_fail:
|
2014-06-04 13:30:44 +04:00
|
|
|
xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
|
|
|
|
queue->tx_evtchn = 0;
|
2013-05-22 10:34:46 +04:00
|
|
|
fail:
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
static int setup_netfront(struct xenbus_device *dev,
|
|
|
|
struct netfront_queue *queue, unsigned int feature_split_evtchn)
|
2007-07-18 05:37:06 +04:00
|
|
|
{
|
|
|
|
struct xen_netif_tx_sring *txs;
|
|
|
|
struct xen_netif_rx_sring *rxs;
|
2015-04-03 09:44:59 +03:00
|
|
|
grant_ref_t gref;
|
2007-07-18 05:37:06 +04:00
|
|
|
int err;
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
queue->tx_ring_ref = GRANT_INVALID_REF;
|
|
|
|
queue->rx_ring_ref = GRANT_INVALID_REF;
|
|
|
|
queue->rx.sring = NULL;
|
|
|
|
queue->tx.sring = NULL;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2008-06-17 12:47:08 +04:00
|
|
|
txs = (struct xen_netif_tx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
|
2007-07-18 05:37:06 +04:00
|
|
|
if (!txs) {
|
|
|
|
err = -ENOMEM;
|
|
|
|
xenbus_dev_fatal(dev, err, "allocating tx ring page");
|
|
|
|
goto fail;
|
|
|
|
}
|
|
|
|
SHARED_RING_INIT(txs);
|
2015-04-10 16:42:21 +03:00
|
|
|
FRONT_RING_INIT(&queue->tx, txs, XEN_PAGE_SIZE);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2015-04-03 09:44:59 +03:00
|
|
|
err = xenbus_grant_ring(dev, txs, 1, &gref);
|
2013-05-20 05:05:12 +04:00
|
|
|
if (err < 0)
|
|
|
|
goto grant_tx_ring_fail;
|
2015-04-03 09:44:59 +03:00
|
|
|
queue->tx_ring_ref = gref;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2008-06-17 12:47:08 +04:00
|
|
|
rxs = (struct xen_netif_rx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
|
2007-07-18 05:37:06 +04:00
|
|
|
if (!rxs) {
|
|
|
|
err = -ENOMEM;
|
|
|
|
xenbus_dev_fatal(dev, err, "allocating rx ring page");
|
2013-05-20 05:05:12 +04:00
|
|
|
goto alloc_rx_ring_fail;
|
2007-07-18 05:37:06 +04:00
|
|
|
}
|
|
|
|
SHARED_RING_INIT(rxs);
|
2015-04-10 16:42:21 +03:00
|
|
|
FRONT_RING_INIT(&queue->rx, rxs, XEN_PAGE_SIZE);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2015-04-03 09:44:59 +03:00
|
|
|
err = xenbus_grant_ring(dev, rxs, 1, &gref);
|
2013-05-20 05:05:12 +04:00
|
|
|
if (err < 0)
|
|
|
|
goto grant_rx_ring_fail;
|
2015-04-03 09:44:59 +03:00
|
|
|
queue->rx_ring_ref = gref;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2013-05-22 10:34:46 +04:00
|
|
|
if (feature_split_evtchn)
|
2014-06-04 13:30:44 +04:00
|
|
|
err = setup_netfront_split(queue);
|
2013-05-22 10:34:46 +04:00
|
|
|
/* Set up a single event channel if
|
|
|
|
* a) feature-split-event-channels == 0
|
|
|
|
* b) feature-split-event-channels == 1 but the split setup failed
|
|
|
|
*/
|
2021-02-02 13:17:49 +03:00
|
|
|
if (!feature_split_evtchn || err)
|
2014-06-04 13:30:44 +04:00
|
|
|
err = setup_netfront_single(queue);
|
2013-05-22 10:34:46 +04:00
|
|
|
|
2007-07-18 05:37:06 +04:00
|
|
|
if (err)
|
2013-05-20 05:05:12 +04:00
|
|
|
goto alloc_evtchn_fail;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
|
2013-05-20 05:05:12 +04:00
|
|
|
/* If we fail to set up netfront, it is safe to just revoke access to
|
|
|
|
* granted pages because the backend is not accessing them at this point.
|
|
|
|
*/
|
|
|
|
alloc_evtchn_fail:
|
2014-06-04 13:30:44 +04:00
|
|
|
gnttab_end_foreign_access_ref(queue->rx_ring_ref, 0);
|
2013-05-20 05:05:12 +04:00
|
|
|
grant_rx_ring_fail:
|
|
|
|
free_page((unsigned long)rxs);
|
|
|
|
alloc_rx_ring_fail:
|
2014-06-04 13:30:44 +04:00
|
|
|
gnttab_end_foreign_access_ref(queue->tx_ring_ref, 0);
|
2013-05-20 05:05:12 +04:00
|
|
|
grant_tx_ring_fail:
|
|
|
|
free_page((unsigned long)txs);
|
|
|
|
fail:
|
2007-07-18 05:37:06 +04:00
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
/* Queue-specific initialisation
|
|
|
|
* This used to be done in xennet_create_dev() but must now
|
|
|
|
* be run per-queue.
|
|
|
|
*/
|
|
|
|
static int xennet_init_queue(struct netfront_queue *queue)
|
|
|
|
{
|
|
|
|
unsigned short i;
|
|
|
|
int err = 0;
|
2018-08-14 18:21:28 +03:00
|
|
|
char *devid;
|
2014-06-04 13:30:44 +04:00
|
|
|
|
|
|
|
spin_lock_init(&queue->tx_lock);
|
|
|
|
spin_lock_init(&queue->rx_lock);
|
2021-12-16 10:24:08 +03:00
|
|
|
spin_lock_init(&queue->rx_cons_lock);
|
2014-06-04 13:30:44 +04:00
|
|
|
|
2017-10-17 00:43:17 +03:00
|
|
|
timer_setup(&queue->rx_refill_timer, rx_refill_timeout, 0);
|
2014-06-04 13:30:44 +04:00
|
|
|
|
2018-08-14 18:21:28 +03:00
|
|
|
devid = strrchr(queue->info->xbdev->nodename, '/') + 1;
|
|
|
|
snprintf(queue->name, sizeof(queue->name), "vif%s-q%u",
|
|
|
|
devid, queue->id);
|
2014-06-04 13:30:47 +04:00
|
|
|
|
2021-08-24 13:28:08 +03:00
|
|
|
/* Initialise tx_skb_freelist as a free chain containing every entry. */
|
2014-06-04 13:30:44 +04:00
|
|
|
queue->tx_skb_freelist = 0;
|
2021-08-24 13:28:09 +03:00
|
|
|
queue->tx_pend_queue = TX_LINK_NONE;
|
2014-06-04 13:30:44 +04:00
|
|
|
for (i = 0; i < NET_TX_RING_SIZE; i++) {
|
2021-08-24 13:28:08 +03:00
|
|
|
queue->tx_link[i] = i + 1;
|
2014-06-04 13:30:44 +04:00
|
|
|
queue->grant_tx_ref[i] = GRANT_INVALID_REF;
|
|
|
|
queue->grant_tx_page[i] = NULL;
|
|
|
|
}
|
2021-08-24 13:28:08 +03:00
|
|
|
queue->tx_link[NET_TX_RING_SIZE - 1] = TX_LINK_NONE;
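/* The chain above reads 0 -> 1 -> ... -> NET_TX_RING_SIZE-1 -> TX_LINK_NONE,
 * so allocations from tx_skb_freelist hand out the lowest free slot first.
 */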
|
2014-06-04 13:30:44 +04:00
|
|
|
|
|
|
|
/* Clear out rx_skbs */
|
|
|
|
for (i = 0; i < NET_RX_RING_SIZE; i++) {
|
|
|
|
queue->rx_skbs[i] = NULL;
|
|
|
|
queue->grant_rx_ref[i] = GRANT_INVALID_REF;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* A grant for every tx ring slot */
|
2014-10-22 14:17:06 +04:00
|
|
|
if (gnttab_alloc_grant_references(NET_TX_RING_SIZE,
|
2014-06-04 13:30:44 +04:00
|
|
|
&queue->gref_tx_head) < 0) {
|
|
|
|
pr_alert("can't alloc tx grant refs\n");
|
|
|
|
err = -ENOMEM;
|
|
|
|
goto exit;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* A grant for every rx ring slot */
|
2014-10-22 14:17:06 +04:00
|
|
|
if (gnttab_alloc_grant_references(NET_RX_RING_SIZE,
|
2014-06-04 13:30:44 +04:00
|
|
|
&queue->gref_rx_head) < 0) {
|
|
|
|
pr_alert("can't alloc rx grant refs\n");
|
|
|
|
err = -ENOMEM;
|
|
|
|
goto exit_free_tx;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
exit_free_tx:
|
|
|
|
gnttab_free_grant_references(queue->gref_tx_head);
|
|
|
|
exit:
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
2014-06-04 13:30:45 +04:00
|
|
|
static int write_queue_xenstore_keys(struct netfront_queue *queue,
|
|
|
|
struct xenbus_transaction *xbt, int write_hierarchical)
|
|
|
|
{
|
|
|
|
/* Write the queue-specific keys into XenStore in the traditional
|
|
|
|
* way for a single queue, or in per-queue subkeys for multiple
|
|
|
|
* queues.
|
|
|
|
*/
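/* Illustratively, a two-queue vif gains per-queue subtrees such as
 * <nodename>/queue-0/tx-ring-ref and <nodename>/queue-0/event-channel-tx,
 * while a single-queue vif keeps the flat <nodename>/tx-ring-ref layout.
 */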
|
|
|
|
struct xenbus_device *dev = queue->info->xbdev;
|
|
|
|
int err;
|
|
|
|
const char *message;
|
|
|
|
char *path;
|
|
|
|
size_t pathsize;
|
|
|
|
|
|
|
|
/* Choose the correct place to write the keys */
|
|
|
|
if (write_hierarchical) {
|
|
|
|
pathsize = strlen(dev->nodename) + 10;
|
|
|
|
path = kzalloc(pathsize, GFP_KERNEL);
|
|
|
|
if (!path) {
|
|
|
|
err = -ENOMEM;
|
|
|
|
message = "out of memory while writing ring references";
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
snprintf(path, pathsize, "%s/queue-%u",
|
|
|
|
dev->nodename, queue->id);
|
|
|
|
} else {
|
|
|
|
path = (char *)dev->nodename;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Write ring references */
|
|
|
|
err = xenbus_printf(*xbt, path, "tx-ring-ref", "%u",
|
|
|
|
queue->tx_ring_ref);
|
|
|
|
if (err) {
|
|
|
|
message = "writing tx-ring-ref";
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
|
|
|
|
err = xenbus_printf(*xbt, path, "rx-ring-ref", "%u",
|
|
|
|
queue->rx_ring_ref);
|
|
|
|
if (err) {
|
|
|
|
message = "writing rx-ring-ref";
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Write event channels; taking into account both shared
|
|
|
|
* and split event channel scenarios.
|
|
|
|
*/
|
|
|
|
if (queue->tx_evtchn == queue->rx_evtchn) {
|
|
|
|
/* Shared event channel */
|
|
|
|
err = xenbus_printf(*xbt, path,
|
|
|
|
"event-channel", "%u", queue->tx_evtchn);
|
|
|
|
if (err) {
|
|
|
|
message = "writing event-channel";
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
/* Split event channels */
|
|
|
|
err = xenbus_printf(*xbt, path,
|
|
|
|
"event-channel-tx", "%u", queue->tx_evtchn);
|
|
|
|
if (err) {
|
|
|
|
message = "writing event-channel-tx";
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
|
|
|
|
err = xenbus_printf(*xbt, path,
|
|
|
|
"event-channel-rx", "%u", queue->rx_evtchn);
|
|
|
|
if (err) {
|
|
|
|
message = "writing event-channel-rx";
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (write_hierarchical)
|
|
|
|
kfree(path);
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
error:
|
|
|
|
if (write_hierarchical)
|
|
|
|
kfree(path);
|
|
|
|
xenbus_dev_fatal(dev, err, "%s", message);
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
2020-06-29 16:13:28 +03:00
|
|
|
|
|
|
|
|
|
|
|
static int xennet_create_page_pool(struct netfront_queue *queue)
|
|
|
|
{
|
|
|
|
int err;
|
|
|
|
struct page_pool_params pp_params = {
|
|
|
|
.order = 0,
|
|
|
|
.flags = 0,
|
|
|
|
.pool_size = NET_RX_RING_SIZE,
|
|
|
|
.nid = NUMA_NO_NODE,
|
|
|
|
.dev = &queue->info->netdev->dev,
|
|
|
|
.offset = XDP_PACKET_HEADROOM,
|
|
|
|
.max_len = XEN_PAGE_SIZE - XDP_PACKET_HEADROOM,
|
|
|
|
};
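/* The parameters above request order-0 pages sized to the RX ring, with
 * XDP_PACKET_HEADROOM reserved at the front of each page so an attached
 * XDP program can grow packet headers without copying the frame.
 */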
|
|
|
|
|
|
|
|
queue->page_pool = page_pool_create(&pp_params);
|
|
|
|
if (IS_ERR(queue->page_pool)) {
|
|
|
|
err = PTR_ERR(queue->page_pool);
|
|
|
|
queue->page_pool = NULL;
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
|
|
|
err = xdp_rxq_info_reg(&queue->xdp_rxq, queue->info->netdev,
|
2020-11-30 21:52:01 +03:00
|
|
|
queue->id, 0);
|
2020-06-29 16:13:28 +03:00
|
|
|
if (err) {
|
|
|
|
netdev_err(queue->info->netdev, "xdp_rxq_info_reg failed\n");
|
|
|
|
goto err_free_pp;
|
|
|
|
}
|
|
|
|
|
|
|
|
err = xdp_rxq_info_reg_mem_model(&queue->xdp_rxq,
|
|
|
|
MEM_TYPE_PAGE_POOL, queue->page_pool);
|
|
|
|
if (err) {
|
|
|
|
netdev_err(queue->info->netdev, "xdp_rxq_info_reg_mem_model failed\n");
|
|
|
|
goto err_unregister_rxq;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
err_unregister_rxq:
|
|
|
|
xdp_rxq_info_unreg(&queue->xdp_rxq);
|
|
|
|
err_free_pp:
|
|
|
|
page_pool_destroy(queue->page_pool);
|
|
|
|
queue->page_pool = NULL;
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
2014-06-18 13:47:28 +04:00
|
|
|
static int xennet_create_queues(struct netfront_info *info,
|
2015-10-19 08:37:17 +03:00
|
|
|
unsigned int *num_queues)
|
2014-06-18 13:47:28 +04:00
|
|
|
{
|
|
|
|
unsigned int i;
|
|
|
|
int ret;
|
|
|
|
|
2015-10-19 08:37:17 +03:00
|
|
|
info->queues = kcalloc(*num_queues, sizeof(struct netfront_queue),
|
2014-06-18 13:47:28 +04:00
|
|
|
GFP_KERNEL);
|
|
|
|
if (!info->queues)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
2015-10-19 08:37:17 +03:00
|
|
|
for (i = 0; i < *num_queues; i++) {
|
2014-06-18 13:47:28 +04:00
|
|
|
struct netfront_queue *queue = &info->queues[i];
|
|
|
|
|
|
|
|
queue->id = i;
|
|
|
|
queue->info = info;
|
|
|
|
|
|
|
|
ret = xennet_init_queue(queue);
|
|
|
|
if (ret < 0) {
|
2018-01-11 12:36:38 +03:00
|
|
|
dev_warn(&info->xbdev->dev,
|
2014-07-31 20:38:24 +04:00
|
|
|
"only created %d queues\n", i);
|
2015-10-19 08:37:17 +03:00
|
|
|
*num_queues = i;
|
2014-06-18 13:47:28 +04:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
2020-06-29 16:13:28 +03:00
|
|
|
/* use page pool recycling instead of buddy allocator */
|
|
|
|
ret = xennet_create_page_pool(queue);
|
|
|
|
if (ret < 0) {
|
|
|
|
dev_err(&info->xbdev->dev, "can't allocate page pool\n");
|
|
|
|
*num_queues = i;
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2014-06-18 13:47:28 +04:00
|
|
|
netif_napi_add(queue->info->netdev, &queue->napi,
|
|
|
|
xennet_poll, 64);
|
|
|
|
if (netif_running(info->netdev))
|
|
|
|
napi_enable(&queue->napi);
|
|
|
|
}
|
|
|
|
|
2015-10-19 08:37:17 +03:00
|
|
|
netif_set_real_num_tx_queues(info->netdev, *num_queues);
|
2014-06-18 13:47:28 +04:00
|
|
|
|
2015-10-19 08:37:17 +03:00
|
|
|
if (*num_queues == 0) {
|
2018-01-11 12:36:38 +03:00
|
|
|
dev_err(&info->xbdev->dev, "no queues\n");
|
2014-06-18 13:47:28 +04:00
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2007-07-18 05:37:06 +04:00
|
|
|
/* Common code used when first setting up, and when resuming. */
|
2010-08-19 03:27:49 +04:00
|
|
|
static int talk_to_netback(struct xenbus_device *dev,
|
2007-07-18 05:37:06 +04:00
|
|
|
struct netfront_info *info)
|
|
|
|
{
|
|
|
|
const char *message;
|
|
|
|
struct xenbus_transaction xbt;
|
|
|
|
int err;
|
2014-06-04 13:30:44 +04:00
|
|
|
unsigned int feature_split_evtchn;
|
|
|
|
unsigned int i = 0;
|
2014-06-04 13:30:45 +04:00
|
|
|
unsigned int max_queues = 0;
|
2014-06-04 13:30:44 +04:00
|
|
|
struct netfront_queue *queue = NULL;
|
|
|
|
unsigned int num_queues = 1;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
info->netdev->irq = 0;
|
|
|
|
|
2014-06-04 13:30:45 +04:00
|
|
|
/* Check if backend supports multiple queues */
|
2016-10-31 16:58:41 +03:00
|
|
|
max_queues = xenbus_read_unsigned(info->xbdev->otherend,
|
|
|
|
"multi-queue-max-queues", 1);
|
2014-06-04 13:30:45 +04:00
|
|
|
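/* Cap the queue count at the smaller of the backend's advertised
 * maximum and the frontend's xennet_max_queues module parameter.
 */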
num_queues = min(max_queues, xennet_max_queues);
|
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
/* Check feature-split-event-channels */
|
2016-10-31 16:58:41 +03:00
|
|
|
feature_split_evtchn = xenbus_read_unsigned(info->xbdev->otherend,
|
|
|
|
"feature-split-event-channels", 0);
|
2014-06-04 13:30:44 +04:00
|
|
|
|
|
|
|
/* Read mac addr. */
|
|
|
|
err = xen_net_read_mac(dev, info->netdev->dev_addr);
|
|
|
|
if (err) {
|
|
|
|
xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename);
|
2018-06-21 16:00:20 +03:00
|
|
|
goto out_unlocked;
|
2014-06-04 13:30:44 +04:00
|
|
|
}
|
|
|
|
|
2020-06-29 16:13:28 +03:00
|
|
|
info->netback_has_xdp_headroom = xenbus_read_unsigned(info->xbdev->otherend,
|
|
|
|
"feature-xdp-headroom", 0);
|
|
|
|
if (info->netback_has_xdp_headroom) {
|
|
|
|
/* set the current xen-netfront xdp state */
|
|
|
|
err = talk_to_netback_xdp(info, info->netfront_xdp_enabled ?
|
|
|
|
NETBACK_XDP_HEADROOM_ENABLE :
|
|
|
|
NETBACK_XDP_HEADROOM_DISABLE);
|
|
|
|
if (err)
|
|
|
|
goto out_unlocked;
|
|
|
|
}
|
|
|
|
|
2018-01-11 12:36:38 +03:00
|
|
|
rtnl_lock();
|
2014-06-18 13:47:28 +04:00
|
|
|
if (info->queues)
|
|
|
|
xennet_destroy_queues(info);
|
|
|
|
|
2021-08-24 13:28:09 +03:00
|
|
|
/* In the case of a reconnect, reset the "broken" indicator. */
|
|
|
|
info->broken = false;
|
|
|
|
|
2015-10-19 08:37:17 +03:00
|
|
|
err = xennet_create_queues(info, &num_queues);
|
2017-02-08 13:57:37 +03:00
|
|
|
if (err < 0) {
|
|
|
|
xenbus_dev_fatal(dev, err, "creating queues");
|
|
|
|
kfree(info->queues);
|
|
|
|
info->queues = NULL;
|
|
|
|
goto out;
|
|
|
|
}
|
2018-01-11 12:36:38 +03:00
|
|
|
rtnl_unlock();
|
2014-06-04 13:30:44 +04:00
|
|
|
|
|
|
|
/* Create shared ring, alloc event channel -- for each queue */
|
|
|
|
for (i = 0; i < num_queues; ++i) {
|
|
|
|
queue = &info->queues[i];
|
|
|
|
err = setup_netfront(dev, queue, feature_split_evtchn);
|
2017-02-08 13:57:37 +03:00
|
|
|
if (err)
|
|
|
|
goto destroy_ring;
|
2014-06-04 13:30:44 +04:00
|
|
|
}
|
2007-07-18 05:37:06 +04:00
|
|
|
|
|
|
|
again:
|
|
|
|
err = xenbus_transaction_start(&xbt);
|
|
|
|
if (err) {
|
|
|
|
xenbus_dev_fatal(dev, err, "starting transaction");
|
|
|
|
goto destroy_ring;
|
|
|
|
}
|
|
|
|
|
2015-09-16 23:28:25 +03:00
|
|
|
if (xenbus_exists(XBT_NIL,
|
|
|
|
info->xbdev->otherend, "multi-queue-max-queues")) {
|
2014-06-04 13:30:45 +04:00
|
|
|
/* Write the number of queues */
|
2015-09-16 23:28:25 +03:00
|
|
|
err = xenbus_printf(xbt, dev->nodename,
|
|
|
|
"multi-queue-num-queues", "%u", num_queues);
|
2013-05-22 10:34:46 +04:00
|
|
|
if (err) {
|
2014-06-04 13:30:45 +04:00
|
|
|
message = "writing multi-queue-num-queues";
|
|
|
|
goto abort_transaction_no_dev_fatal;
|
2013-05-22 10:34:46 +04:00
|
|
|
}
|
2015-09-16 23:28:25 +03:00
|
|
|
}
|
2014-06-04 13:30:45 +04:00
|
|
|
|
2015-09-16 23:28:25 +03:00
|
|
|
if (num_queues == 1) {
|
|
|
|
err = write_queue_xenstore_keys(&info->queues[0], &xbt, 0); /* flat */
|
|
|
|
if (err)
|
|
|
|
goto abort_transaction_no_dev_fatal;
|
|
|
|
} else {
|
2014-06-04 13:30:45 +04:00
|
|
|
/* Write the keys for each queue */
|
|
|
|
for (i = 0; i < num_queues; ++i) {
|
|
|
|
queue = &info->queues[i];
|
|
|
|
err = write_queue_xenstore_keys(queue, &xbt, 1); /* hierarchical */
|
|
|
|
if (err)
|
|
|
|
goto abort_transaction_no_dev_fatal;
|
2013-05-22 10:34:46 +04:00
|
|
|
}
|
2007-07-18 05:37:06 +04:00
|
|
|
}
|
|
|
|
|
2014-06-04 13:30:45 +04:00
|
|
|
/* The remaining keys are not queue-specific */
|
2007-07-18 05:37:06 +04:00
|
|
|
err = xenbus_printf(xbt, dev->nodename, "request-rx-copy", "%u",
|
|
|
|
1);
|
|
|
|
if (err) {
|
|
|
|
message = "writing request-rx-copy";
|
|
|
|
goto abort_transaction;
|
|
|
|
}
|
|
|
|
|
|
|
|
err = xenbus_printf(xbt, dev->nodename, "feature-rx-notify", "%d", 1);
|
|
|
|
if (err) {
|
|
|
|
message = "writing feature-rx-notify";
|
|
|
|
goto abort_transaction;
|
|
|
|
}
|
|
|
|
|
|
|
|
err = xenbus_printf(xbt, dev->nodename, "feature-sg", "%d", 1);
|
|
|
|
if (err) {
|
|
|
|
message = "writing feature-sg";
|
|
|
|
goto abort_transaction;
|
|
|
|
}
|
|
|
|
|
|
|
|
err = xenbus_printf(xbt, dev->nodename, "feature-gso-tcpv4", "%d", 1);
|
|
|
|
if (err) {
|
|
|
|
message = "writing feature-gso-tcpv4";
|
|
|
|
goto abort_transaction;
|
|
|
|
}
|
|
|
|
|
2014-01-15 21:30:33 +04:00
|
|
|
err = xenbus_write(xbt, dev->nodename, "feature-gso-tcpv6", "1");
|
|
|
|
if (err) {
|
|
|
|
message = "writing feature-gso-tcpv6";
|
|
|
|
goto abort_transaction;
|
|
|
|
}
|
|
|
|
|
|
|
|
err = xenbus_write(xbt, dev->nodename, "feature-ipv6-csum-offload",
|
|
|
|
"1");
|
|
|
|
if (err) {
|
|
|
|
message = "writing feature-ipv6-csum-offload";
|
|
|
|
goto abort_transaction;
|
|
|
|
}
|
|
|
|
|
2007-07-18 05:37:06 +04:00
|
|
|
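/* xenbus_transaction_end() returns -EAGAIN if the transaction raced with
 * a concurrent XenStore update; in that case replay it from "again:".
 */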
err = xenbus_transaction_end(xbt, 0);
|
|
|
|
if (err) {
|
|
|
|
if (err == -EAGAIN)
|
|
|
|
goto again;
|
|
|
|
xenbus_dev_fatal(dev, err, "completing transaction");
|
|
|
|
goto destroy_ring;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
abort_transaction:
|
|
|
|
xenbus_dev_fatal(dev, err, "%s", message);
|
2014-06-04 13:30:45 +04:00
|
|
|
abort_transaction_no_dev_fatal:
|
|
|
|
xenbus_transaction_end(xbt, 1);
|
2007-07-18 05:37:06 +04:00
|
|
|
destroy_ring:
|
|
|
|
xennet_disconnect_backend(info);
|
2018-01-11 12:36:38 +03:00
|
|
|
rtnl_lock();
|
2017-02-08 13:57:37 +03:00
|
|
|
xennet_destroy_queues(info);
|
2007-07-18 05:37:06 +04:00
|
|
|
out:
|
2018-01-11 12:36:38 +03:00
|
|
|
rtnl_unlock();
|
2018-06-21 16:00:20 +03:00
|
|
|
out_unlocked:
|
2017-05-11 14:58:06 +03:00
|
|
|
device_unregister(&dev->dev);
|
2007-07-18 05:37:06 +04:00
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int xennet_connect(struct net_device *dev)
|
|
|
|
{
|
|
|
|
struct netfront_info *np = netdev_priv(dev);
|
2014-06-04 13:30:44 +04:00
|
|
|
unsigned int num_queues = 0;
|
2014-07-31 20:38:23 +04:00
|
|
|
int err;
|
2014-06-04 13:30:44 +04:00
|
|
|
unsigned int j = 0;
|
|
|
|
struct netfront_queue *queue = NULL;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2016-10-31 16:58:41 +03:00
|
|
|
if (!xenbus_read_unsigned(np->xbdev->otherend, "feature-rx-copy", 0)) {
|
2007-07-18 05:37:06 +04:00
|
|
|
dev_info(&dev->dev,
|
2007-10-18 14:06:30 +04:00
|
|
|
"backend does not support copying receive path\n");
|
2007-07-18 05:37:06 +04:00
|
|
|
return -ENODEV;
|
|
|
|
}
|
|
|
|
|
2010-08-19 03:27:49 +04:00
|
|
|
err = talk_to_netback(np->xbdev, np);
|
2007-07-18 05:37:06 +04:00
|
|
|
if (err)
|
|
|
|
return err;
|
2020-06-29 16:13:28 +03:00
|
|
|
if (np->netback_has_xdp_headroom)
|
|
|
|
pr_info("backend supports XDP headroom\n");
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
/* talk_to_netback() sets the correct number of queues */
|
|
|
|
num_queues = dev->real_num_tx_queues;
|
|
|
|
|
2018-01-11 12:36:38 +03:00
|
|
|
if (dev->reg_state == NETREG_UNINITIALIZED) {
|
|
|
|
err = register_netdev(dev);
|
|
|
|
if (err) {
|
|
|
|
pr_warn("%s: register_netdev err=%d\n", __func__, err);
|
|
|
|
device_unregister(&np->xbdev->dev);
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2018-06-21 16:00:21 +03:00
|
|
|
rtnl_lock();
|
|
|
|
netdev_update_features(dev);
|
|
|
|
rtnl_unlock();
|
|
|
|
|
2007-07-18 05:37:06 +04:00
|
|
|
/*
|
2014-07-31 20:38:23 +04:00
|
|
|
* All public and private state should now be sane. Get
|
2007-07-18 05:37:06 +04:00
|
|
|
* ready to start sending and receiving packets and give the driver
|
|
|
|
* domain a kick because we've probably just requeued some
|
|
|
|
* packets.
|
|
|
|
*/
|
2021-10-23 02:31:39 +03:00
|
|
|
netif_tx_lock_bh(np->netdev);
|
|
|
|
netif_device_attach(np->netdev);
|
|
|
|
netif_tx_unlock_bh(np->netdev);
|
|
|
|
|
2007-07-18 05:37:06 +04:00
|
|
|
netif_carrier_on(np->netdev);
|
2014-06-04 13:30:44 +04:00
|
|
|
for (j = 0; j < num_queues; ++j) {
|
|
|
|
queue = &np->queues[j];
|
2014-07-02 19:09:14 +04:00
|
|
|
|
2014-06-04 13:30:44 +04:00
|
|
|
notify_remote_via_irq(queue->tx_irq);
|
|
|
|
if (queue->tx_irq != queue->rx_irq)
|
|
|
|
notify_remote_via_irq(queue->rx_irq);
|
|
|
|
|
2014-07-02 19:09:14 +04:00
|
|
|
spin_lock_irq(&queue->tx_lock);
|
|
|
|
xennet_tx_buf_gc(queue);
|
2014-06-04 13:30:44 +04:00
|
|
|
spin_unlock_irq(&queue->tx_lock);
|
2014-07-02 19:09:14 +04:00
|
|
|
|
|
|
|
spin_lock_bh(&queue->rx_lock);
|
|
|
|
xennet_alloc_rx_buffers(queue);
|
2014-06-04 13:30:44 +04:00
|
|
|
spin_unlock_bh(&queue->rx_lock);
|
|
|
|
}
|
2007-07-18 05:37:06 +04:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2021-01-15 23:09:03 +03:00
|
|
|
/*
|
2007-07-18 05:37:06 +04:00
|
|
|
* Callback received when the backend's state changes.
|
|
|
|
*/
|
2010-08-19 03:27:49 +04:00
|
|
|
static void netback_changed(struct xenbus_device *dev,
|
2007-07-18 05:37:06 +04:00
|
|
|
enum xenbus_state backend_state)
|
|
|
|
{
|
2009-05-04 23:40:54 +04:00
|
|
|
struct netfront_info *np = dev_get_drvdata(&dev->dev);
|
2007-07-18 05:37:06 +04:00
|
|
|
struct net_device *netdev = np->netdev;
|
|
|
|
|
|
|
|
dev_dbg(&dev->dev, "%s\n", xenbus_strstate(backend_state));
|
|
|
|
|
2018-09-07 15:21:30 +03:00
|
|
|
wake_up_all(&module_wq);
|
|
|
|
|
2007-07-18 05:37:06 +04:00
|
|
|
switch (backend_state) {
|
|
|
|
case XenbusStateInitialising:
|
|
|
|
case XenbusStateInitialised:
|
2009-10-14 01:22:29 +04:00
|
|
|
case XenbusStateReconfiguring:
|
|
|
|
case XenbusStateReconfigured:
|
2007-07-18 05:37:06 +04:00
|
|
|
case XenbusStateUnknown:
|
|
|
|
break;
|
|
|
|
|
|
|
|
case XenbusStateInitWait:
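/* The backend is waiting for the frontend: connect, then advertise
 * Connected so it can complete its half of the handshake.
 */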
|
|
|
|
if (dev->state != XenbusStateInitialising)
|
|
|
|
break;
|
|
|
|
if (xennet_connect(netdev) != 0)
|
|
|
|
break;
|
|
|
|
xenbus_switch_state(dev, XenbusStateConnected);
|
2011-12-11 05:48:59 +04:00
|
|
|
break;
|
|
|
|
|
|
|
|
case XenbusStateConnected:
|
2012-08-10 02:14:56 +04:00
|
|
|
netdev_notify_peers(netdev);
|
2007-07-18 05:37:06 +04:00
|
|
|
break;
|
|
|
|
|
2014-02-04 22:50:26 +04:00
|
|
|
case XenbusStateClosed:
|
|
|
|
if (dev->state == XenbusStateClosed)
|
|
|
|
break;
|
2020-08-24 01:36:59 +03:00
|
|
|
fallthrough; /* Missed the backend's CLOSING state */
|
2007-07-18 05:37:06 +04:00
|
|
|
case XenbusStateClosing:
|
|
|
|
xenbus_frontend_closed(dev);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2011-01-27 07:14:03 +03:00
|
|
|
static const struct xennet_stat {
|
|
|
|
char name[ETH_GSTRING_LEN];
|
|
|
|
u16 offset;
|
|
|
|
} xennet_stats[] = {
|
|
|
|
{
|
|
|
|
"rx_gso_checksum_fixup",
|
|
|
|
offsetof(struct netfront_info, rx_gso_checksum_fixup)
|
|
|
|
},
|
|
|
|
};
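/* Each entry above stores the counter's byte offset within struct
 * netfront_info; xennet_get_ethtool_stats() adds that offset to the
 * private-data pointer and reads the value with atomic_read(), so a new
 * stat only needs a new table row.
 */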
|
|
|
|
|
|
|
|
static int xennet_get_sset_count(struct net_device *dev, int string_set)
|
|
|
|
{
|
|
|
|
switch (string_set) {
|
|
|
|
case ETH_SS_STATS:
|
|
|
|
return ARRAY_SIZE(xennet_stats);
|
|
|
|
default:
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static void xennet_get_ethtool_stats(struct net_device *dev,
|
|
|
|
struct ethtool_stats *stats, u64 *data)
|
|
|
|
{
|
|
|
|
void *np = netdev_priv(dev);
|
|
|
|
int i;
|
|
|
|
|
|
|
|
for (i = 0; i < ARRAY_SIZE(xennet_stats); i++)
|
2014-06-04 13:30:44 +04:00
|
|
|
data[i] = atomic_read((atomic_t *)(np + xennet_stats[i].offset));
|
2011-01-27 07:14:03 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static void xennet_get_strings(struct net_device *dev, u32 stringset, u8 *data)
|
|
|
|
{
|
|
|
|
int i;
|
|
|
|
|
|
|
|
switch (stringset) {
|
|
|
|
case ETH_SS_STATS:
|
|
|
|
for (i = 0; i < ARRAY_SIZE(xennet_stats); i++)
|
|
|
|
memcpy(data + i * ETH_GSTRING_LEN,
|
|
|
|
xennet_stats[i].name, ETH_GSTRING_LEN);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2009-09-02 12:03:33 +04:00
|
|
|
static const struct ethtool_ops xennet_ethtool_ops =
|
2007-07-18 05:37:06 +04:00
|
|
|
{
|
|
|
|
.get_link = ethtool_op_get_link,
|
2011-01-27 07:14:03 +03:00
|
|
|
|
|
|
|
.get_sset_count = xennet_get_sset_count,
|
|
|
|
.get_ethtool_stats = xennet_get_ethtool_stats,
|
|
|
|
.get_strings = xennet_get_strings,
|
2020-07-03 09:22:34 +03:00
|
|
|
.get_ts_info = ethtool_op_get_ts_info,
|
2007-07-18 05:37:06 +04:00
|
|
|
};
|
|
|
|
|
|
|
|
#ifdef CONFIG_SYSFS
|
2014-10-22 14:17:06 +04:00
|
|
|
static ssize_t show_rxbuf(struct device *dev,
|
|
|
|
struct device_attribute *attr, char *buf)
|
2007-07-18 05:37:06 +04:00
|
|
|
{
|
2014-10-22 14:17:06 +04:00
|
|
|
return sprintf(buf, "%lu\n", NET_RX_RING_SIZE);
|
2007-07-18 05:37:06 +04:00
|
|
|
}
|
|
|
|
|
2014-10-22 14:17:06 +04:00
|
|
|
static ssize_t store_rxbuf(struct device *dev,
|
|
|
|
struct device_attribute *attr,
|
|
|
|
const char *buf, size_t len)
|
2007-07-18 05:37:06 +04:00
|
|
|
{
|
|
|
|
char *endp;
|
|
|
|
|
|
|
|
if (!capable(CAP_NET_ADMIN))
|
|
|
|
return -EPERM;
|
|
|
|
|
2020-10-31 21:04:35 +03:00
|
|
|
simple_strtoul(buf, &endp, 0);
|
2007-07-18 05:37:06 +04:00
|
|
|
if (endp == buf)
|
|
|
|
return -EBADMSG;
|
|
|
|
|
2014-10-22 14:17:06 +04:00
|
|
|
/* rxbuf_min and rxbuf_max are no longer configurable. */
|
2007-07-18 05:37:06 +04:00
|
|
|
|
|
|
|
return len;
|
|
|
|
}
|
|
|
|
|
2018-03-24 01:54:39 +03:00
|
|
|
static DEVICE_ATTR(rxbuf_min, 0644, show_rxbuf, store_rxbuf);
|
|
|
|
static DEVICE_ATTR(rxbuf_max, 0644, show_rxbuf, store_rxbuf);
|
|
|
|
static DEVICE_ATTR(rxbuf_cur, 0444, show_rxbuf, NULL);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2015-02-04 16:38:55 +03:00
|
|
|
static struct attribute *xennet_dev_attrs[] = {
|
|
|
|
&dev_attr_rxbuf_min.attr,
|
|
|
|
&dev_attr_rxbuf_max.attr,
|
|
|
|
&dev_attr_rxbuf_cur.attr,
|
|
|
|
NULL
|
|
|
|
};
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2015-02-04 16:38:55 +03:00
|
|
|
static const struct attribute_group xennet_dev_group = {
|
|
|
|
.attrs = xennet_dev_attrs
|
|
|
|
};
|
2007-07-18 05:37:06 +04:00
|
|
|
#endif /* CONFIG_SYSFS */
|
|
|
|
|
2020-07-24 11:59:10 +03:00
|
|
|
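/* Walk the backend through Closing and then Closed, repeating each state
 * switch on timeout until the backend (or XenbusStateUnknown, i.e. a
 * vanished backend) acknowledges it, so teardown does not begin while the
 * rings are still in use.
 */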
static void xennet_bus_close(struct xenbus_device *dev)
|
2007-07-18 05:37:06 +04:00
|
|
|
{
|
2020-07-24 11:59:10 +03:00
|
|
|
int ret;
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2020-07-24 11:59:10 +03:00
|
|
|
if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)
|
|
|
|
return;
|
|
|
|
do {
|
2017-11-23 17:18:35 +03:00
|
|
|
xenbus_switch_state(dev, XenbusStateClosing);
|
2020-07-24 11:59:10 +03:00
|
|
|
ret = wait_event_timeout(module_wq,
|
|
|
|
xenbus_read_driver_state(dev->otherend) ==
|
|
|
|
XenbusStateClosing ||
|
|
|
|
xenbus_read_driver_state(dev->otherend) ==
|
|
|
|
XenbusStateClosed ||
|
|
|
|
xenbus_read_driver_state(dev->otherend) ==
|
|
|
|
XenbusStateUnknown,
|
|
|
|
XENNET_TIMEOUT);
|
|
|
|
} while (!ret);
|
|
|
|
|
|
|
|
if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)
|
|
|
|
return;
|
2017-11-23 17:18:35 +03:00
|
|
|
|
2020-07-24 11:59:10 +03:00
|
|
|
do {
|
2017-11-23 17:18:35 +03:00
|
|
|
xenbus_switch_state(dev, XenbusStateClosed);
|
2020-07-24 11:59:10 +03:00
|
|
|
ret = wait_event_timeout(module_wq,
|
|
|
|
xenbus_read_driver_state(dev->otherend) ==
|
|
|
|
XenbusStateClosed ||
|
|
|
|
xenbus_read_driver_state(dev->otherend) ==
|
|
|
|
XenbusStateUnknown,
|
|
|
|
XENNET_TIMEOUT);
|
|
|
|
} while (!ret);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int xennet_remove(struct xenbus_device *dev)
|
|
|
|
{
|
|
|
|
struct netfront_info *info = dev_get_drvdata(&dev->dev);
|
2017-11-23 17:18:35 +03:00
|
|
|
|
2020-07-24 11:59:10 +03:00
|
|
|
xennet_bus_close(dev);
|
2007-07-18 05:37:06 +04:00
|
|
|
xennet_disconnect_backend(info);
|
|
|
|
|
2018-01-11 12:36:38 +03:00
|
|
|
if (info->netdev->reg_state == NETREG_REGISTERED)
|
|
|
|
unregister_netdev(info->netdev);
|
2012-06-26 02:48:41 +04:00
|
|
|
|
2018-01-11 12:36:38 +03:00
|
|
|
if (info->queues) {
|
|
|
|
rtnl_lock();
|
2015-08-20 02:14:20 +03:00
|
|
|
xennet_destroy_queues(info);
|
2018-01-11 12:36:38 +03:00
|
|
|
rtnl_unlock();
|
|
|
|
}
|
2015-01-13 19:42:42 +03:00
|
|
|
xennet_free_netdev(info->netdev);
|
2007-07-18 05:37:06 +04:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2014-09-08 20:30:41 +04:00
|
|
|
static const struct xenbus_device_id netfront_ids[] = {
|
|
|
|
{ "vif" },
|
|
|
|
{ "" }
|
|
|
|
};
|
|
|
|
|
|
|
|
static struct xenbus_driver netfront_driver = {
|
|
|
|
.ids = netfront_ids,
|
2007-07-18 05:37:06 +04:00
|
|
|
.probe = netfront_probe,
|
2012-12-03 18:24:22 +04:00
|
|
|
.remove = xennet_remove,
|
2007-07-18 05:37:06 +04:00
|
|
|
.resume = netfront_resume,
|
2010-08-19 03:27:49 +04:00
|
|
|
.otherend_changed = netback_changed,
|
2014-09-08 20:30:41 +04:00
|
|
|
};
|
2007-07-18 05:37:06 +04:00
|
|
|
|
|
|
|
static int __init netif_init(void)
|
|
|
|
{
|
2008-08-20 00:16:17 +04:00
|
|
|
if (!xen_domain())
|
2007-07-18 05:37:06 +04:00
|
|
|
return -ENODEV;
|
|
|
|
|
2013-11-27 00:05:40 +04:00
|
|
|
if (!xen_has_pv_nic_devices())
|
2012-03-21 18:08:38 +04:00
|
|
|
return -ENODEV;
|
|
|
|
|
2013-06-28 08:57:49 +04:00
|
|
|
pr_info("Initialising Xen virtual ethernet driver\n");
|
2007-07-18 05:37:06 +04:00
|
|
|
|
2017-01-10 16:32:51 +03:00
|
|
|
/* Allow as many queues as there are CPUs but max. 8 if the user has not
|
2015-09-10 13:18:58 +03:00
|
|
|
* specified a value.
|
|
|
|
*/
|
|
|
|
if (xennet_max_queues == 0)
|
2017-01-10 16:32:51 +03:00
|
|
|
xennet_max_queues = min_t(unsigned int, MAX_QUEUES_DEFAULT,
|
|
|
|
num_online_cpus());
|
2014-06-04 13:30:45 +04:00
|
|
|
|
2008-11-22 20:38:14 +03:00
|
|
|
return xenbus_register_frontend(&netfront_driver);
|
2007-07-18 05:37:06 +04:00
|
|
|
}
|
|
|
|
module_init(netif_init);
|
|
|
|
|
|
|
|
|
|
|
|
static void __exit netif_exit(void)
|
|
|
|
{
|
2008-11-22 20:38:14 +03:00
|
|
|
xenbus_unregister_driver(&netfront_driver);
|
2007-07-18 05:37:06 +04:00
|
|
|
}
|
|
|
|
module_exit(netif_exit);
|
|
|
|
|
|
|
|
MODULE_DESCRIPTION("Xen virtual network device frontend");
|
|
|
|
MODULE_LICENSE("GPL");
|
2008-04-02 21:54:05 +04:00
|
|
|
MODULE_ALIAS("xen:vif");
|
2008-04-02 21:54:06 +04:00
|
|
|
MODULE_ALIAS("xennet");
|