// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * Authors:
 * Copyright 2001, 2002 by Robert Olsson <robert.olsson@its.uu.se>
 *                             Uppsala University and
 *                             Swedish University of Agricultural Sciences
 *
 * Alexey Kuznetsov  <kuznet@ms2.inr.ac.ru>
 * Ben Greear <greearb@candelatech.com>
 * Jens Låås <jens.laas@data.slu.se>
 *
 * A tool for loading the network with preconfigured packets.
 * The tool is implemented as a linux module. Parameters are the output
 * device, delay (to hard_xmit), number of packets, and whether
 * to use multiple SKBs or just the same one.
 * pktgen uses the installed interface's output routine.
 *
 * Additional hacking by:
 *
 * Jens.Laas@data.slu.se
 * Improved by ANK. 010120.
 * Improved by ANK even more. 010212.
 * MAC address typo fixed. 010417 --ro
 * Integrated. 020301 --DaveM
 * Added multiskb option 020301 --DaveM
 * Scaling of results. 020417--sigurdur@linpro.no
 * Significant re-work of the module:
 *   * Convert to threaded model to more efficiently be able to transmit
 *     and receive on multiple interfaces at once.
 *   * Converted many counters to __u64 to allow longer runs.
 *   * Allow configuration of ranges, like min/max IP address, MACs,
 *     and UDP-ports, for both source and destination, and can
 *     set to use a random distribution or sequentially walk the range.
 *   * Can now change most values after starting.
 *   * Place 12-byte packet in UDP payload with magic number,
 *     sequence number, and timestamp.
 *   * Add receiver code that detects dropped pkts, re-ordered pkts, and
 *     latencies (with micro-second precision).
 *   * Add IOCTL interface to easily get counters & configuration.
 *   --Ben Greear <greearb@candelatech.com>
 *
 * Renamed multiskb to clone_skb and cleaned up sending core for two distinct
 * skb modes. A clone_skb=0 mode for Ben "ranges" work and a clone_skb != 0
 * mode as a "fastpath" with a configurable number of clones after alloc's.
 * clone_skb=0 means all packets are allocated; this also means ranges, time
 * stamps etc. can be used. clone_skb=100 means 1 malloc is followed by 100
 * clones.
 *
 * Also moved to /proc/net/pktgen/
 * --ro
 *
 * Sept 10: Fixed threading/locking.  Lots of bone-headed and more clever
 * mistakes.  Also merged in DaveM's patch in the -pre6 patch.
 * --Ben Greear <greearb@candelatech.com>
 *
 * Integrated to 2.5.x 021029 --Lucio Maciel (luciomaciel@zipmail.com.br)
 *
 * 021124 Finished major redesign and rewrite for new functionality.
 * See Documentation/networking/pktgen.rst for how to use this.
 *
 * The new operation:
 * For each CPU one thread/process is created at start. This thread checks
 * for running devices in the if_list and sends packets until count is 0.
 * The thread also checks thread->control, which is used for inter-process
 * communication; the controlling process "posts" operations to the threads
 * this way.
 * The if_list is RCU protected, and the if_lock remains to protect updating
 * of if_list, from "add_device" as it is invoked from userspace (via proc
 * write).
 *
 * By design there should only be *one* "controlling" process. In practice
 * multiple write accesses give unpredictable results. Every "write" to
 * /proc returns a result code that should be read by the "writer".
 * For practical use this should be no problem.
 *
 * Note: when adding devices to a specific CPU it is a good idea to also
 * assign /proc/irq/XX/smp_affinity so TX interrupts get bound to the
 * same CPU.
 * --ro
 *
 * Fix refcount off by one if first packet fails, potential null deref,
 * memleak 030710- KJP
 *
 * First "ranges" functionality for ipv6 030726 --ro
 *
 * Included flow support.  030802 ANK.
 *
 * Fixed unaligned access on IA-64 Grant Grundler <grundler@parisc-linux.org>
 *
 * Remove if fix from added Harald Welte <laforge@netfilter.org> 040419
 * ia64 compilation fix from  Aron Griffis <aron@hp.com> 040604
 *
 * New xmit() return, do_div and misc clean up by Stephen Hemminger
 * <shemminger@osdl.org> 040923
 *
 * Randy Dunlap fixed u64 printk compiler warning
 *
 * Remove FCS from BW calculation.  Lennert Buytenhek <buytenh@wantstofly.org>
 * New time handling. Lennert Buytenhek <buytenh@wantstofly.org> 041213
 *
 * Corrections from Nikolai Malykh (nmalykh@bilim.com)
 * Removed unused flags F_SET_SRCMAC & F_SET_SRCIP 041230
 *
 * interruptible_sleep_on_timeout() replaced Nishanth Aravamudan <nacc@us.ibm.com>
 * 050103
 *
 * MPLS support by Steven Whitehouse <steve@chygwyn.com>
 *
 * 802.1Q/Q-in-Q support by Francesco Fondelli (FF) <francesco.fondelli@gmail.com>
 *
 * Fixed src_mac command to set source mac of packet to value specified in
 * command by Adit Ranadive <adit.262@gmail.com>
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/sys.h>
#include <linux/types.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/kernel.h>
#include <linux/mutex.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/unistd.h>
#include <linux/string.h>
#include <linux/ptrace.h>
#include <linux/errno.h>
#include <linux/ioport.h>
#include <linux/interrupt.h>
#include <linux/capability.h>
#include <linux/hrtimer.h>
#include <linux/freezer.h>
#include <linux/delay.h>
#include <linux/timer.h>
#include <linux/list.h>
#include <linux/init.h>
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/inet.h>
#include <linux/inetdevice.h>
#include <linux/rtnetlink.h>
#include <linux/if_arp.h>
#include <linux/if_vlan.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/ipv6.h>
#include <linux/udp.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/wait.h>
#include <linux/etherdevice.h>
#include <linux/kthread.h>
#include <linux/prefetch.h>
#include <linux/mmzone.h>
#include <net/net_namespace.h>
#include <net/checksum.h>
#include <net/ipv6.h>
#include <net/udp.h>
#include <net/ip6_checksum.h>
#include <net/addrconf.h>
#ifdef CONFIG_XFRM
#include <net/xfrm.h>
#endif
#include <net/netns/generic.h>
#include <asm/byteorder.h>
#include <linux/rcupdate.h>
#include <linux/bitops.h>
#include <linux/io.h>
#include <linux/timex.h>
#include <linux/uaccess.h>
#include <asm/dma.h>
#include <asm/div64.h>		/* do_div */

#define VERSION	"2.75"
#define IP_NAME_SZ 32
#define MAX_MPLS_LABELS 16 /* This is the max label stack depth */
#define MPLS_STACK_BOTTOM htonl(0x00000100)
/* Max number of internet mix entries that can be specified in imix_weights. */
#define MAX_IMIX_ENTRIES 20
#define IMIX_PRECISION 100 /* Precision of IMIX distribution */

#define func_enter() pr_debug("entering %s\n", __func__);

#define PKT_FLAGS							\
	pf(IPV6)		/* Interface in IPV6 Mode */		\
	pf(IPSRC_RND)		/* IP-Src Random */			\
	pf(IPDST_RND)		/* IP-Dst Random */			\
	pf(TXSIZE_RND)		/* Transmit size is random */		\
	pf(UDPSRC_RND)		/* UDP-Src Random */			\
	pf(UDPDST_RND)		/* UDP-Dst Random */			\
	pf(UDPCSUM)		/* Include UDP checksum */		\
	pf(NO_TIMESTAMP)	/* Don't timestamp packets (default TS) */ \
	pf(MPLS_RND)		/* Random MPLS labels */		\
	pf(QUEUE_MAP_RND)	/* queue map Random */			\
	pf(QUEUE_MAP_CPU)	/* queue map mirrors smp_processor_id() */ \
	pf(FLOW_SEQ)		/* Sequential flows */			\
	pf(IPSEC)		/* ipsec on for flows */		\
	pf(MACSRC_RND)		/* MAC-Src Random */			\
	pf(MACDST_RND)		/* MAC-Dst Random */			\
	pf(VID_RND)		/* Random VLAN ID */			\
	pf(SVID_RND)		/* Random SVLAN ID */			\
	pf(NODE)		/* Node memory alloc */			\

#define pf(flag)		flag##_SHIFT,
enum pkt_flags {
	PKT_FLAGS
};
#undef pf

/* Device flag bits */
#define pf(flag)		static const __u32 F_##flag = (1<<flag##_SHIFT);
PKT_FLAGS
#undef pf

#define pf(flag)		__stringify(flag),
static char *pkt_flag_names[] = {
	PKT_FLAGS
};
#undef pf

#define NR_PKT_FLAGS		ARRAY_SIZE(pkt_flag_names)

/* Thread control flag bits */
#define T_STOP        (1<<0)	/* Stop run */
#define T_RUN         (1<<1)	/* Start run */
#define T_REMDEVALL   (1<<2)	/* Remove all devs */
#define T_REMDEV      (1<<3)	/* Remove one dev */

/* Xmit modes */
#define M_START_XMIT		0	/* Default normal TX */
#define M_NETIF_RECEIVE 	1	/* Inject packets into stack */
#define M_QUEUE_XMIT		2	/* Inject packet into qdisc */
/* If lock -- protects updating of if_list */
#define if_lock(t)           mutex_lock(&(t->if_lock));
#define if_unlock(t)           mutex_unlock(&(t->if_lock));

/* Used to help with determining the pkts on receive */
#define PKTGEN_MAGIC 0xbe9be955
#define PG_PROC_DIR "pktgen"
#define PGCTRL	    "pgctrl"

#define MAX_CFLOWS  65536

#define VLAN_TAG_SIZE(x) ((x)->vlan_id == 0xffff ? 0 : 4)
#define SVLAN_TAG_SIZE(x) ((x)->svlan_id == 0xffff ? 0 : 4)

struct imix_pkt {
	u64 size;
	u64 weight;
	u64 count_so_far;
};

struct flow_state {
	__be32 cur_daddr;
	int count;
#ifdef CONFIG_XFRM
	struct xfrm_state *x;
#endif
	__u32 flags;
};

/* flow flag bits */
#define F_INIT   (1<<0)		/* flow has been initialized */

struct pktgen_dev {
	/*
	 * Try to keep frequent/infrequent used vars. separated.
	 */
	struct proc_dir_entry *entry;	/* proc file */
	struct pktgen_thread *pg_thread;/* the owner */
	struct list_head list;		/* chaining in the thread's run-queue */
	struct rcu_head	 rcu;		/* freed by RCU */

	int running;		/* if false, the test will stop */

	/* If min != max, then we will either do a linear iteration, or
	 * we will do a random selection from within the range.
	 */
	__u32 flags;
	int xmit_mode;
	int min_pkt_size;
	int max_pkt_size;
	int pkt_overhead;	/* overhead for MPLS, VLANs, IPSEC etc */
	int nfrags;
	int removal_mark;	/* non-zero => the device is marked for
				 * removal by worker thread */

	struct page *page;
	u64 delay;		/* nano-seconds */

	__u64 count;		/* Default No packets to send */
	__u64 sofar;		/* How many pkts we've sent so far */
	__u64 tx_bytes;		/* How many bytes we've transmitted */
	__u64 errors;		/* Errors when trying to transmit, */

	/* runtime counters relating to clone_skb */

	__u32 clone_count;
	int last_ok;		/* Was last skb sent?
				 * Or a failed transmit of some sort?
				 * This will keep sequence numbers in order
				 */
	ktime_t next_tx;
	ktime_t started_at;
	ktime_t stopped_at;
	u64	idle_acc;	/* nano-seconds */

	__u32 seq_num;

	int clone_skb;		/*
				 * Use multiple SKBs during packet gen.
				 * If this number is greater than 1, then
				 * that many copies of the same packet will be
				 * sent before a new packet is allocated.
				 * If you want to send 1024 identical packets
				 * before creating a new packet,
				 * set clone_skb to 1024.
				 */

	char dst_min[IP_NAME_SZ];	/* IP, ie 1.2.3.4 */
	char dst_max[IP_NAME_SZ];	/* IP, ie 1.2.3.4 */
	char src_min[IP_NAME_SZ];	/* IP, ie 1.2.3.4 */
	char src_max[IP_NAME_SZ];	/* IP, ie 1.2.3.4 */

	struct in6_addr in6_saddr;
	struct in6_addr in6_daddr;
	struct in6_addr cur_in6_daddr;
	struct in6_addr cur_in6_saddr;
	/* For ranges */
	struct in6_addr min_in6_daddr;
	struct in6_addr max_in6_daddr;
	struct in6_addr min_in6_saddr;
	struct in6_addr max_in6_saddr;

	/* If we're doing ranges, random or incremental, then this
	 * defines the min/max for those ranges.
	 */
	__be32 saddr_min;	/* inclusive, source IP address */
	__be32 saddr_max;	/* exclusive, source IP address */
	__be32 daddr_min;	/* inclusive, dest IP address */
	__be32 daddr_max;	/* exclusive, dest IP address */

	__u16 udp_src_min;	/* inclusive, source UDP port */
	__u16 udp_src_max;	/* exclusive, source UDP port */
	__u16 udp_dst_min;	/* inclusive, dest UDP port */
	__u16 udp_dst_max;	/* exclusive, dest UDP port */

	/* DSCP + ECN */
	__u8 tos;		/* six MSB of (former) IPv4 TOS
				   are for dscp codepoint */
	__u8 traffic_class;	/* ditto for the (former) Traffic Class in IPv6
				   (see RFC 3260, sec. 4) */

	/* IMIX */
	unsigned int n_imix_entries;
	struct imix_pkt imix_entries[MAX_IMIX_ENTRIES];
	/* Maps 0-IMIX_PRECISION range to imix_entry based on probability */
	__u8 imix_distribution[IMIX_PRECISION];

	/* MPLS */
	unsigned int nr_labels;	/* Depth of stack, 0 = no MPLS */
	__be32 labels[MAX_MPLS_LABELS];

	/* VLAN/SVLAN (802.1Q/Q-in-Q) */
	__u8  vlan_p;
	__u8  vlan_cfi;
	__u16 vlan_id;  /* 0xffff means no vlan tag */

	__u8  svlan_p;
	__u8  svlan_cfi;
	__u16 svlan_id; /* 0xffff means no svlan tag */

	__u32 src_mac_count;	/* How many MACs to iterate through */
	__u32 dst_mac_count;	/* How many MACs to iterate through */

	unsigned char dst_mac[ETH_ALEN];
	unsigned char src_mac[ETH_ALEN];

	__u32 cur_dst_mac_offset;
	__u32 cur_src_mac_offset;
	__be32 cur_saddr;
	__be32 cur_daddr;
	__u16 ip_id;
	__u16 cur_udp_dst;
	__u16 cur_udp_src;
	__u16 cur_queue_map;
	__u32 cur_pkt_size;
	__u32 last_pkt_size;

	__u8 hh[14];
	/* = {
	   0x00, 0x80, 0xC8, 0x79, 0xB3, 0xCB,

	   We fill in SRC address later
	   0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
	   0x08, 0x00
	   };
	 */
	__u16 pad;		/* pad out the hh struct to an even 16 bytes */

	struct sk_buff *skb;	/* skb we are to transmit next, used for when we
				 * are transmitting the same one multiple times
				 */
	struct net_device *odev; /* The out-going device.
				  * Note that the device should have its
				  * pg_info pointer pointing back to this
				  * device.
				  * Set when the user specifies the out-going
				  * device name (not when the inject is
				  * started as it used to do.)
				  */
	netdevice_tracker dev_tracker;
	char odevname[32];
	struct flow_state *flows;
	unsigned int cflows;	/* Concurrent flows (config) */
	unsigned int lflow;	/* Flow length (config) */
	unsigned int nflows;	/* accumulated flows (stats) */
	unsigned int curfl;	/* current sequenced flow (state) */

	u16 queue_map_min;
	u16 queue_map_max;
	__u32 skb_priority;	/* skb priority field */
	unsigned int burst;	/* number of duplicated packets to burst */
	int node;		/* Memory node */

#ifdef CONFIG_XFRM
	__u8	ipsmode;	/* IPSEC mode (config) */
	__u8	ipsproto;	/* IPSEC type (config) */
	__u32	spi;
	struct xfrm_dst xdst;
	struct dst_ops dstops;
#endif
	char result[512];
};

struct pktgen_hdr {
	__be32 pgh_magic;
	__be32 seq_num;
	__be32 tv_sec;
	__be32 tv_usec;
};
static unsigned int pg_net_id __read_mostly;

struct pktgen_net {
	struct net *net;
	struct proc_dir_entry *proc_dir;
	struct list_head pktgen_threads;
	bool pktgen_exiting;
};

struct pktgen_thread {
	struct mutex if_lock;		/* for list of devices */
	struct list_head if_list;	/* All device here */
	struct list_head th_list;
	struct task_struct *tsk;
	char result[512];

	/* Field for thread to receive "posted" events terminate,
	 * stop ifs etc.
	 */
	u32 control;
	int cpu;

	wait_queue_head_t queue;
	struct completion start_done;
	struct pktgen_net *net;
};

#define REMOVE 1
#define FIND   0

static const char version[] =
	"Packet Generator for packet performance testing. "
	"Version: " VERSION "\n";

static int pktgen_remove_device(struct pktgen_thread *t, struct pktgen_dev *i);
static int pktgen_add_device(struct pktgen_thread *t, const char *ifname);
static struct pktgen_dev *pktgen_find_dev(struct pktgen_thread *t,
					  const char *ifname, bool exact);
static int pktgen_device_event(struct notifier_block *, unsigned long, void *);
static void pktgen_run_all_threads(struct pktgen_net *pn);
static void pktgen_reset_all_threads(struct pktgen_net *pn);
static void pktgen_stop_all_threads(struct pktgen_net *pn);

static void pktgen_stop(struct pktgen_thread *t);
static void pktgen_clear_counters(struct pktgen_dev *pkt_dev);
static void fill_imix_distribution(struct pktgen_dev *pkt_dev);

/* Module parameters, defaults. */
static int pg_count_d __read_mostly = 1000;
static int pg_delay_d __read_mostly;
static int pg_clone_skb_d __read_mostly;
static int debug __read_mostly;

static DEFINE_MUTEX(pktgen_thread_lock);

static struct notifier_block pktgen_notifier_block = {
	.notifier_call = pktgen_device_event,
};

/*
 * /proc handling functions
 *
 */

static int pgctrl_show(struct seq_file *seq, void *v)
{
	seq_puts(seq, version);
	return 0;
}

static ssize_t pgctrl_write(struct file *file, const char __user *buf,
			    size_t count, loff_t *ppos)
{
	char data[128];
	struct pktgen_net *pn = net_generic(current->nsproxy->net_ns, pg_net_id);

	if (!capable(CAP_NET_ADMIN))
		return -EPERM;

	if (count == 0)
		return -EINVAL;

	if (count > sizeof(data))
		count = sizeof(data);

	if (copy_from_user(data, buf, count))
		return -EFAULT;

	data[count - 1] = 0;	/* Strip trailing '\n' and terminate string */

	if (!strcmp(data, "stop"))
		pktgen_stop_all_threads(pn);
	else if (!strcmp(data, "start"))
		pktgen_run_all_threads(pn);
	else if (!strcmp(data, "reset"))
		pktgen_reset_all_threads(pn);
	else
		return -EINVAL;

	return count;
}

static int pgctrl_open(struct inode *inode, struct file *file)
{
	return single_open(file, pgctrl_show, pde_data(inode));
}

static const struct proc_ops pktgen_proc_ops = {
	.proc_open	= pgctrl_open,
	.proc_read	= seq_read,
	.proc_lseek	= seq_lseek,
	.proc_write	= pgctrl_write,
	.proc_release	= single_release,
};

static int pktgen_if_show(struct seq_file *seq, void *v)
{
	const struct pktgen_dev *pkt_dev = seq->private;
	ktime_t stopped;
	unsigned int i;
	u64 idle;

	seq_printf(seq,
		   "Params: count %llu min_pkt_size: %u max_pkt_size: %u\n",
		   (unsigned long long)pkt_dev->count, pkt_dev->min_pkt_size,
		   pkt_dev->max_pkt_size);

	if (pkt_dev->n_imix_entries > 0) {
		seq_puts(seq, " imix_weights: ");
		for (i = 0; i < pkt_dev->n_imix_entries; i++) {
			seq_printf(seq, "%llu,%llu ",
				   pkt_dev->imix_entries[i].size,
				   pkt_dev->imix_entries[i].weight);
		}
		seq_puts(seq, "\n");
	}

	seq_printf(seq,
		   " frags: %d delay: %llu clone_skb: %d ifname: %s\n",
		   pkt_dev->nfrags, (unsigned long long) pkt_dev->delay,
		   pkt_dev->clone_skb, pkt_dev->odevname);

	seq_printf(seq, " flows: %u flowlen: %u\n", pkt_dev->cflows,
		   pkt_dev->lflow);

	seq_printf(seq,
		   " queue_map_min: %u queue_map_max: %u\n",
		   pkt_dev->queue_map_min,
		   pkt_dev->queue_map_max);

	if (pkt_dev->skb_priority)
		seq_printf(seq, " skb_priority: %u\n",
			   pkt_dev->skb_priority);

	if (pkt_dev->flags & F_IPV6) {
		seq_printf(seq,
			   " saddr: %pI6c min_saddr: %pI6c max_saddr: %pI6c\n"
			   " daddr: %pI6c min_daddr: %pI6c max_daddr: %pI6c\n",
			   &pkt_dev->in6_saddr,
			   &pkt_dev->min_in6_saddr, &pkt_dev->max_in6_saddr,
			   &pkt_dev->in6_daddr,
			   &pkt_dev->min_in6_daddr, &pkt_dev->max_in6_daddr);
	} else {
		seq_printf(seq,
			   " dst_min: %s dst_max: %s\n",
			   pkt_dev->dst_min, pkt_dev->dst_max);
		seq_printf(seq,
			   " src_min: %s src_max: %s\n",
			   pkt_dev->src_min, pkt_dev->src_max);
	}

	seq_puts(seq, " src_mac: ");

	seq_printf(seq, "%pM ",
		   is_zero_ether_addr(pkt_dev->src_mac) ?
		   pkt_dev->odev->dev_addr : pkt_dev->src_mac);

	seq_puts(seq, "dst_mac: ");
	seq_printf(seq, "%pM\n", pkt_dev->dst_mac);

	seq_printf(seq,
		   " udp_src_min: %d udp_src_max: %d"
		   " udp_dst_min: %d udp_dst_max: %d\n",
		   pkt_dev->udp_src_min, pkt_dev->udp_src_max,
		   pkt_dev->udp_dst_min, pkt_dev->udp_dst_max);

	seq_printf(seq,
		   " src_mac_count: %d dst_mac_count: %d\n",
		   pkt_dev->src_mac_count, pkt_dev->dst_mac_count);

	if (pkt_dev->nr_labels) {
		seq_puts(seq, " mpls: ");
		for (i = 0; i < pkt_dev->nr_labels; i++)
			seq_printf(seq, "%08x%s", ntohl(pkt_dev->labels[i]),
				   i == pkt_dev->nr_labels-1 ? "\n" : ", ");
	}

	if (pkt_dev->vlan_id != 0xffff)
		seq_printf(seq, " vlan_id: %u vlan_p: %u vlan_cfi: %u\n",
			   pkt_dev->vlan_id, pkt_dev->vlan_p,
			   pkt_dev->vlan_cfi);

	if (pkt_dev->svlan_id != 0xffff)
		seq_printf(seq, " svlan_id: %u vlan_p: %u vlan_cfi: %u\n",
			   pkt_dev->svlan_id, pkt_dev->svlan_p,
			   pkt_dev->svlan_cfi);

	if (pkt_dev->tos)
		seq_printf(seq, " tos: 0x%02x\n", pkt_dev->tos);

	if (pkt_dev->traffic_class)
		seq_printf(seq, " traffic_class: 0x%02x\n", pkt_dev->traffic_class);

	if (pkt_dev->burst > 1)
		seq_printf(seq, " burst: %d\n", pkt_dev->burst);

	if (pkt_dev->node >= 0)
		seq_printf(seq, " node: %d\n", pkt_dev->node);

	if (pkt_dev->xmit_mode == M_NETIF_RECEIVE)
		seq_puts(seq, " xmit_mode: netif_receive\n");
	else if (pkt_dev->xmit_mode == M_QUEUE_XMIT)
		seq_puts(seq, " xmit_mode: xmit_queue\n");

	seq_puts(seq, " Flags: ");

	for (i = 0; i < NR_PKT_FLAGS; i++) {
		if (i == F_FLOW_SEQ)
			if (!pkt_dev->cflows)
				continue;

		if (pkt_dev->flags & (1 << i))
			seq_printf(seq, "%s ", pkt_flag_names[i]);
		else if (i == F_FLOW_SEQ)
			seq_puts(seq, "FLOW_RND ");

#ifdef CONFIG_XFRM
		if (i == F_IPSEC && pkt_dev->spi)
			seq_printf(seq, "spi:%u", pkt_dev->spi);
#endif
	}

	seq_puts(seq, "\n");

	/* not really stopped, more like last-running-at */
	stopped = pkt_dev->running ? ktime_get() : pkt_dev->stopped_at;
	idle = pkt_dev->idle_acc;
	do_div(idle, NSEC_PER_USEC);

	seq_printf(seq,
		   "Current:\n pkts-sofar: %llu errors: %llu\n",
		   (unsigned long long)pkt_dev->sofar,
		   (unsigned long long)pkt_dev->errors);

	if (pkt_dev->n_imix_entries > 0) {
		int i;

		seq_puts(seq, " imix_size_counts: ");
		for (i = 0; i < pkt_dev->n_imix_entries; i++) {
			seq_printf(seq, "%llu,%llu ",
				   pkt_dev->imix_entries[i].size,
				   pkt_dev->imix_entries[i].count_so_far);
		}
		seq_puts(seq, "\n");
	}

	seq_printf(seq,
		   " started: %lluus stopped: %lluus idle: %lluus\n",
		   (unsigned long long) ktime_to_us(pkt_dev->started_at),
		   (unsigned long long) ktime_to_us(stopped),
		   (unsigned long long) idle);

	seq_printf(seq,
		   " seq_num: %d cur_dst_mac_offset: %d cur_src_mac_offset: %d\n",
		   pkt_dev->seq_num, pkt_dev->cur_dst_mac_offset,
		   pkt_dev->cur_src_mac_offset);

	if (pkt_dev->flags & F_IPV6) {
		seq_printf(seq, " cur_saddr: %pI6c cur_daddr: %pI6c\n",
			   &pkt_dev->cur_in6_saddr,
			   &pkt_dev->cur_in6_daddr);
	} else
		seq_printf(seq, " cur_saddr: %pI4 cur_daddr: %pI4\n",
			   &pkt_dev->cur_saddr, &pkt_dev->cur_daddr);

	seq_printf(seq, " cur_udp_dst: %d cur_udp_src: %d\n",
		   pkt_dev->cur_udp_dst, pkt_dev->cur_udp_src);

	seq_printf(seq, " cur_queue_map: %u\n", pkt_dev->cur_queue_map);

	seq_printf(seq, " flows: %u\n", pkt_dev->nflows);

	if (pkt_dev->result[0])
		seq_printf(seq, "Result: %s\n", pkt_dev->result);
	else
		seq_puts(seq, "Result: Idle\n");

	return 0;
}

static int hex32_arg(const char __user *user_buffer, unsigned long maxlen,
		     __u32 *num)
{
	int i = 0;

	*num = 0;

	for (; i < maxlen; i++) {
		int value;
		char c;

		*num <<= 4;
		if (get_user(c, &user_buffer[i]))
			return -EFAULT;
		value = hex_to_bin(c);
		if (value >= 0)
			*num |= value;
		else
			break;
	}
	return i;
}

static int count_trail_chars(const char __user *user_buffer,
			     unsigned int maxlen)
{
	int i;

	for (i = 0; i < maxlen; i++) {
		char c;

		if (get_user(c, &user_buffer[i]))
			return -EFAULT;
		switch (c) {
		case '\"':
		case '\n':
		case '\r':
		case '\t':
		case ' ':
		case '=':
			break;
		default:
			goto done;
		}
	}
done:
	return i;
}

static long num_arg(const char __user *user_buffer, unsigned long maxlen,
		    unsigned long *num)
{
	int i;

	*num = 0;

	for (i = 0; i < maxlen; i++) {
		char c;

		if (get_user(c, &user_buffer[i]))
			return -EFAULT;
		if ((c >= '0') && (c <= '9')) {
			*num *= 10;
			*num += c - '0';
		} else
			break;
	}
	return i;
}

static int strn_len(const char __user *user_buffer, unsigned int maxlen)
{
	int i;

	for (i = 0; i < maxlen; i++) {
		char c;

		if (get_user(c, &user_buffer[i]))
			return -EFAULT;
		switch (c) {
		case '\"':
		case '\n':
		case '\r':
		case '\t':
		case ' ':
			goto done_str;
		default:
			break;
		}
	}
done_str:
	return i;
}

/* Parses imix entries from user buffer.
 * The user buffer should consist of imix entries separated by spaces
 * where each entry consists of size and weight delimited by commas.
 * "size1,weight_1 size2,weight_2 ... size_n,weight_n" for example,
 * e.g. "256,1 859,3 205,2".
 */
static ssize_t get_imix_entries(const char __user *buffer,
				struct pktgen_dev *pkt_dev)
{
	const int max_digits = 10;
	int i = 0;
	long len;
	char c;

	pkt_dev->n_imix_entries = 0;

	do {
		unsigned long weight;
		unsigned long size;

		len = num_arg(&buffer[i], max_digits, &size);
		if (len < 0)
			return len;
		i += len;
		if (get_user(c, &buffer[i]))
			return -EFAULT;
		/* Check for comma between size_i and weight_i */
		if (c != ',')
			return -EINVAL;
		i++;

		if (size < 14 + 20 + 8)
			size = 14 + 20 + 8;

		len = num_arg(&buffer[i], max_digits, &weight);
		if (len < 0)
			return len;
		if (weight <= 0)
			return -EINVAL;

		pkt_dev->imix_entries[pkt_dev->n_imix_entries].size = size;
		pkt_dev->imix_entries[pkt_dev->n_imix_entries].weight = weight;

		i += len;
		if (get_user(c, &buffer[i]))
			return -EFAULT;

		i++;
		pkt_dev->n_imix_entries++;

		if (pkt_dev->n_imix_entries > MAX_IMIX_ENTRIES)
			return -E2BIG;
	} while (c == ' ');

	return i;
}

static ssize_t get_labels(const char __user *buffer, struct pktgen_dev *pkt_dev)
{
	unsigned int n = 0;
	char c;
	ssize_t i = 0;
	int len;

	pkt_dev->nr_labels = 0;
	do {
		__u32 tmp;

		len = hex32_arg(&buffer[i], 8, &tmp);
		if (len <= 0)
			return len;
		pkt_dev->labels[n] = htonl(tmp);
		if (pkt_dev->labels[n] & MPLS_STACK_BOTTOM)
			pkt_dev->flags |= F_MPLS_RND;
		i += len;
		if (get_user(c, &buffer[i]))
			return -EFAULT;
		i++;
		n++;
		if (n >= MAX_MPLS_LABELS)
			return -E2BIG;
	} while (c == ',');

	pkt_dev->nr_labels = n;
	return i;
}

static __u32 pktgen_read_flag(const char *f, bool *disable)
{
	__u32 i;

	if (f[0] == '!') {
		*disable = true;
		f++;
	}

	for (i = 0; i < NR_PKT_FLAGS; i++) {
		if (!IS_ENABLED(CONFIG_XFRM) && i == IPSEC_SHIFT)
			continue;

		/* allow only disabling ipv6 flag */
		if (!*disable && i == IPV6_SHIFT)
			continue;

		if (strcmp(f, pkt_flag_names[i]) == 0)
			return 1 << i;
	}

	if (strcmp(f, "FLOW_RND") == 0) {
		*disable = !*disable;
		return F_FLOW_SEQ;
	}

	return 0;
}

static ssize_t pktgen_if_write(struct file *file,
			       const char __user *user_buffer, size_t count,
			       loff_t *offset)
{
	struct seq_file *seq = file->private_data;
	struct pktgen_dev *pkt_dev = seq->private;
	int i, max, len;
	char name[16], valstr[32];
	unsigned long value = 0;
	char *pg_result = NULL;
	int tmp = 0;
	char buf[128];

	pg_result = &(pkt_dev->result[0]);

	if (count < 1) {
		pr_warn("wrong command format\n");
		return -EINVAL;
	}

	max = count;
	tmp = count_trail_chars(user_buffer, max);
	if (tmp < 0) {
		pr_warn("illegal format\n");
		return tmp;
	}
	i = tmp;

	/* Read variable name */

	len = strn_len(&user_buffer[i], sizeof(name) - 1);
	if (len < 0)
		return len;

	memset(name, 0, sizeof(name));
	if (copy_from_user(name, &user_buffer[i], len))
		return -EFAULT;
	i += len;

	max = count - i;
	len = count_trail_chars(&user_buffer[i], max);
	if (len < 0)
		return len;

	i += len;

	if (debug) {
		size_t copy = min_t(size_t, count + 1, 1024);
		char *tp = strndup_user(user_buffer, copy);

		if (IS_ERR(tp))
			return PTR_ERR(tp);

		pr_debug("%s,%zu buffer -:%s:-\n", name, count, tp);
		kfree(tp);
	}

	if (!strcmp(name, "min_pkt_size")) {
		len = num_arg(&user_buffer[i], 10, &value);
		if (len < 0)
			return len;

		i += len;
		if (value < 14 + 20 + 8)
			value = 14 + 20 + 8;
		if (value != pkt_dev->min_pkt_size) {
			pkt_dev->min_pkt_size = value;
			pkt_dev->cur_pkt_size = value;
		}
		sprintf(pg_result, "OK: min_pkt_size=%d",
			pkt_dev->min_pkt_size);
		return count;
	}

	if (!strcmp(name, "max_pkt_size")) {
		len = num_arg(&user_buffer[i], 10, &value);
		if (len < 0)
			return len;

		i += len;
		if (value < 14 + 20 + 8)
			value = 14 + 20 + 8;
		if (value != pkt_dev->max_pkt_size) {
			pkt_dev->max_pkt_size = value;
			pkt_dev->cur_pkt_size = value;
		}
		sprintf(pg_result, "OK: max_pkt_size=%d",
			pkt_dev->max_pkt_size);
		return count;
	}

	/* Shortcut for min = max */

	if (!strcmp(name, "pkt_size")) {
		len = num_arg(&user_buffer[i], 10, &value);
		if (len < 0)
			return len;

		i += len;
		if (value < 14 + 20 + 8)
			value = 14 + 20 + 8;
		if (value != pkt_dev->min_pkt_size) {
			pkt_dev->min_pkt_size = value;
			pkt_dev->max_pkt_size = value;
			pkt_dev->cur_pkt_size = value;
		}
		sprintf(pg_result, "OK: pkt_size=%d", pkt_dev->min_pkt_size);
		return count;
	}

	if (!strcmp(name, "imix_weights")) {
		if (pkt_dev->clone_skb > 0)
			return -EINVAL;

		len = get_imix_entries(&user_buffer[i], pkt_dev);
		if (len < 0)
			return len;

		fill_imix_distribution(pkt_dev);

		i += len;
		return count;
	}

	if (!strcmp(name, "debug")) {
		len = num_arg(&user_buffer[i], 10, &value);
		if (len < 0)
			return len;

		i += len;
		debug = value;
		sprintf(pg_result, "OK: debug=%u", debug);
		return count;
	}

	if (!strcmp(name, "frags")) {
		len = num_arg(&user_buffer[i], 10, &value);
		if (len < 0)
			return len;

		i += len;
		pkt_dev->nfrags = value;
		sprintf(pg_result, "OK: frags=%d", pkt_dev->nfrags);
		return count;
	}
	if (!strcmp(name, "delay")) {
		len = num_arg(&user_buffer[i], 10, &value);
		if (len < 0)
			return len;

		i += len;
		if (value == 0x7FFFFFFF)
			pkt_dev->delay = ULLONG_MAX;
		else
			pkt_dev->delay = (u64)value;

		sprintf(pg_result, "OK: delay=%llu",
			(unsigned long long) pkt_dev->delay);
		return count;
	}
	if (!strcmp(name, "rate")) {
		len = num_arg(&user_buffer[i], 10, &value);
		if (len < 0)
			return len;

		i += len;
		if (!value)
			return len;
		pkt_dev->delay = pkt_dev->min_pkt_size*8*NSEC_PER_USEC/value;
		if (debug)
			pr_info("Delay set at: %llu ns\n", pkt_dev->delay);

		sprintf(pg_result, "OK: rate=%lu", value);
		return count;
	}
	if (!strcmp(name, "ratep")) {
		len = num_arg(&user_buffer[i], 10, &value);
		if (len < 0)
			return len;

		i += len;
		if (!value)
			return len;
		pkt_dev->delay = NSEC_PER_SEC/value;
		if (debug)
			pr_info("Delay set at: %llu ns\n", pkt_dev->delay);

		sprintf(pg_result, "OK: rate=%lu", value);
		return count;
	}
	if (!strcmp(name, "udp_src_min")) {
		len = num_arg(&user_buffer[i], 10, &value);
		if (len < 0)
			return len;

		i += len;
		if (value != pkt_dev->udp_src_min) {
			pkt_dev->udp_src_min = value;
			pkt_dev->cur_udp_src = value;
		}
		sprintf(pg_result, "OK: udp_src_min=%u", pkt_dev->udp_src_min);
		return count;
	}
	if (!strcmp(name, "udp_dst_min")) {
		len = num_arg(&user_buffer[i], 10, &value);
		if (len < 0)
			return len;

		i += len;
		if (value != pkt_dev->udp_dst_min) {
			pkt_dev->udp_dst_min = value;
			pkt_dev->cur_udp_dst = value;
		}
		sprintf(pg_result, "OK: udp_dst_min=%u", pkt_dev->udp_dst_min);
		return count;
	}
	if (!strcmp(name, "udp_src_max")) {
		len = num_arg(&user_buffer[i], 10, &value);
		if (len < 0)
			return len;

		i += len;
		if (value != pkt_dev->udp_src_max) {
			pkt_dev->udp_src_max = value;
			pkt_dev->cur_udp_src = value;
		}
		sprintf(pg_result, "OK: udp_src_max=%u", pkt_dev->udp_src_max);
		return count;
	}
	if (!strcmp(name, "udp_dst_max")) {
		len = num_arg(&user_buffer[i], 10, &value);
		if (len < 0)
			return len;

		i += len;
		if (value != pkt_dev->udp_dst_max) {
			pkt_dev->udp_dst_max = value;
			pkt_dev->cur_udp_dst = value;
		}
		sprintf(pg_result, "OK: udp_dst_max=%u", pkt_dev->udp_dst_max);
		return count;
	}
|
|
|
|
	if (!strcmp(name, "clone_skb")) {
		len = num_arg(&user_buffer[i], 10, &value);
		if (len < 0)
			return len;
		/* clone_skb is not supported for netif_receive xmit_mode and
		 * IMIX mode.
		 */
		if ((value > 0) &&
		    ((pkt_dev->xmit_mode == M_NETIF_RECEIVE) ||
		     !(pkt_dev->odev->priv_flags & IFF_TX_SKB_SHARING)))
			return -ENOTSUPP;
		if (value > 0 && pkt_dev->n_imix_entries > 0)
			return -EINVAL;

		i += len;
		pkt_dev->clone_skb = value;

		sprintf(pg_result, "OK: clone_skb=%d", pkt_dev->clone_skb);
		return count;
	}
	if (!strcmp(name, "count")) {
		len = num_arg(&user_buffer[i], 10, &value);
		if (len < 0)
			return len;

		i += len;
		pkt_dev->count = value;
		sprintf(pg_result, "OK: count=%llu",
			(unsigned long long)pkt_dev->count);
		return count;
	}
	if (!strcmp(name, "src_mac_count")) {
		len = num_arg(&user_buffer[i], 10, &value);
		if (len < 0)
			return len;

		i += len;
		if (pkt_dev->src_mac_count != value) {
			pkt_dev->src_mac_count = value;
			pkt_dev->cur_src_mac_offset = 0;
		}
		sprintf(pg_result, "OK: src_mac_count=%d",
			pkt_dev->src_mac_count);
		return count;
	}
	if (!strcmp(name, "dst_mac_count")) {
		len = num_arg(&user_buffer[i], 10, &value);
		if (len < 0)
			return len;

		i += len;
		if (pkt_dev->dst_mac_count != value) {
			pkt_dev->dst_mac_count = value;
			pkt_dev->cur_dst_mac_offset = 0;
		}
		sprintf(pg_result, "OK: dst_mac_count=%d",
			pkt_dev->dst_mac_count);
		return count;
	}
	if (!strcmp(name, "burst")) {
		len = num_arg(&user_buffer[i], 10, &value);
		if (len < 0)
			return len;

		i += len;
		if ((value > 1) &&
		    ((pkt_dev->xmit_mode == M_QUEUE_XMIT) ||
		     ((pkt_dev->xmit_mode == M_START_XMIT) &&
		     (!(pkt_dev->odev->priv_flags & IFF_TX_SKB_SHARING)))))
			return -ENOTSUPP;
		pkt_dev->burst = value < 1 ? 1 : value;
		sprintf(pg_result, "OK: burst=%u", pkt_dev->burst);
		return count;
	}
	if (!strcmp(name, "node")) {
		len = num_arg(&user_buffer[i], 10, &value);
		if (len < 0)
			return len;

		i += len;

		if (node_possible(value)) {
			pkt_dev->node = value;
			sprintf(pg_result, "OK: node=%d", pkt_dev->node);
			if (pkt_dev->page) {
				put_page(pkt_dev->page);
				pkt_dev->page = NULL;
			}
		}
		else
			sprintf(pg_result, "ERROR: node not possible");
		return count;
	}
	if (!strcmp(name, "xmit_mode")) {
		char f[32];

		memset(f, 0, 32);
		len = strn_len(&user_buffer[i], sizeof(f) - 1);
		if (len < 0)
			return len;

		if (copy_from_user(f, &user_buffer[i], len))
			return -EFAULT;
		i += len;

		if (strcmp(f, "start_xmit") == 0) {
			pkt_dev->xmit_mode = M_START_XMIT;
		} else if (strcmp(f, "netif_receive") == 0) {
			/* clone_skb set earlier, not supported in this mode */
			if (pkt_dev->clone_skb > 0)
				return -ENOTSUPP;

			pkt_dev->xmit_mode = M_NETIF_RECEIVE;

			/* make sure new packet is allocated every time
			 * pktgen_xmit() is called
			 */
			pkt_dev->last_ok = 1;
		} else if (strcmp(f, "queue_xmit") == 0) {
			pkt_dev->xmit_mode = M_QUEUE_XMIT;
			pkt_dev->last_ok = 1;
		} else {
			sprintf(pg_result,
				"xmit_mode -:%s:- unknown\nAvailable modes: %s",
				f, "start_xmit, netif_receive\n");
			return count;
		}
		sprintf(pg_result, "OK: xmit_mode=%s", f);
		return count;
	}
	if (!strcmp(name, "flag")) {
		__u32 flag;
		char f[32];
		bool disable = false;

		memset(f, 0, 32);
		len = strn_len(&user_buffer[i], sizeof(f) - 1);
		if (len < 0)
			return len;

		if (copy_from_user(f, &user_buffer[i], len))
			return -EFAULT;
		i += len;

		flag = pktgen_read_flag(f, &disable);

		if (flag) {
			if (disable)
				pkt_dev->flags &= ~flag;
			else
				pkt_dev->flags |= flag;
		} else {
			sprintf(pg_result,
				"Flag -:%s:- unknown\nAvailable flags, (prepend ! to un-set flag):\n%s",
				f,
				"IPSRC_RND, IPDST_RND, UDPSRC_RND, UDPDST_RND, "
				"MACSRC_RND, MACDST_RND, TXSIZE_RND, IPV6, "
				"MPLS_RND, VID_RND, SVID_RND, FLOW_SEQ, "
				"QUEUE_MAP_RND, QUEUE_MAP_CPU, UDPCSUM, "
				"NO_TIMESTAMP, "
#ifdef CONFIG_XFRM
				"IPSEC, "
#endif
				"NODE_ALLOC\n");
			return count;
		}
		sprintf(pg_result, "OK: flags=0x%x", pkt_dev->flags);
		return count;
	}
	if (!strcmp(name, "dst_min") || !strcmp(name, "dst")) {
		len = strn_len(&user_buffer[i], sizeof(pkt_dev->dst_min) - 1);
		if (len < 0)
			return len;

		if (copy_from_user(buf, &user_buffer[i], len))
			return -EFAULT;
		buf[len] = 0;
		if (strcmp(buf, pkt_dev->dst_min) != 0) {
			memset(pkt_dev->dst_min, 0, sizeof(pkt_dev->dst_min));
			strcpy(pkt_dev->dst_min, buf);
			pkt_dev->daddr_min = in_aton(pkt_dev->dst_min);
			pkt_dev->cur_daddr = pkt_dev->daddr_min;
		}
		if (debug)
			pr_debug("dst_min set to: %s\n", pkt_dev->dst_min);
		i += len;
		sprintf(pg_result, "OK: dst_min=%s", pkt_dev->dst_min);
		return count;
	}
	if (!strcmp(name, "dst_max")) {
		len = strn_len(&user_buffer[i], sizeof(pkt_dev->dst_max) - 1);
		if (len < 0)
			return len;

		if (copy_from_user(buf, &user_buffer[i], len))
			return -EFAULT;
		buf[len] = 0;
		if (strcmp(buf, pkt_dev->dst_max) != 0) {
			memset(pkt_dev->dst_max, 0, sizeof(pkt_dev->dst_max));
			strcpy(pkt_dev->dst_max, buf);
			pkt_dev->daddr_max = in_aton(pkt_dev->dst_max);
			pkt_dev->cur_daddr = pkt_dev->daddr_max;
		}
		if (debug)
			pr_debug("dst_max set to: %s\n", pkt_dev->dst_max);
		i += len;
		sprintf(pg_result, "OK: dst_max=%s", pkt_dev->dst_max);
		return count;
	}
	if (!strcmp(name, "dst6")) {
		len = strn_len(&user_buffer[i], sizeof(buf) - 1);
		if (len < 0)
			return len;

		pkt_dev->flags |= F_IPV6;

		if (copy_from_user(buf, &user_buffer[i], len))
			return -EFAULT;
		buf[len] = 0;

		in6_pton(buf, -1, pkt_dev->in6_daddr.s6_addr, -1, NULL);
		snprintf(buf, sizeof(buf), "%pI6c", &pkt_dev->in6_daddr);

		pkt_dev->cur_in6_daddr = pkt_dev->in6_daddr;

		if (debug)
			pr_debug("dst6 set to: %s\n", buf);

		i += len;
		sprintf(pg_result, "OK: dst6=%s", buf);
		return count;
	}
	if (!strcmp(name, "dst6_min")) {
		len = strn_len(&user_buffer[i], sizeof(buf) - 1);
		if (len < 0)
			return len;

		pkt_dev->flags |= F_IPV6;

		if (copy_from_user(buf, &user_buffer[i], len))
			return -EFAULT;
		buf[len] = 0;

		in6_pton(buf, -1, pkt_dev->min_in6_daddr.s6_addr, -1, NULL);
		snprintf(buf, sizeof(buf), "%pI6c", &pkt_dev->min_in6_daddr);

		pkt_dev->cur_in6_daddr = pkt_dev->min_in6_daddr;
		if (debug)
			pr_debug("dst6_min set to: %s\n", buf);

		i += len;
		sprintf(pg_result, "OK: dst6_min=%s", buf);
		return count;
	}
	if (!strcmp(name, "dst6_max")) {
		len = strn_len(&user_buffer[i], sizeof(buf) - 1);
		if (len < 0)
			return len;

		pkt_dev->flags |= F_IPV6;

		if (copy_from_user(buf, &user_buffer[i], len))
			return -EFAULT;
		buf[len] = 0;

		in6_pton(buf, -1, pkt_dev->max_in6_daddr.s6_addr, -1, NULL);
		snprintf(buf, sizeof(buf), "%pI6c", &pkt_dev->max_in6_daddr);

		if (debug)
			pr_debug("dst6_max set to: %s\n", buf);

		i += len;
		sprintf(pg_result, "OK: dst6_max=%s", buf);
		return count;
	}
	if (!strcmp(name, "src6")) {
		len = strn_len(&user_buffer[i], sizeof(buf) - 1);
		if (len < 0)
			return len;

		pkt_dev->flags |= F_IPV6;

		if (copy_from_user(buf, &user_buffer[i], len))
			return -EFAULT;
		buf[len] = 0;

		in6_pton(buf, -1, pkt_dev->in6_saddr.s6_addr, -1, NULL);
		snprintf(buf, sizeof(buf), "%pI6c", &pkt_dev->in6_saddr);

		pkt_dev->cur_in6_saddr = pkt_dev->in6_saddr;

		if (debug)
			pr_debug("src6 set to: %s\n", buf);

		i += len;
		sprintf(pg_result, "OK: src6=%s", buf);
		return count;
	}
	if (!strcmp(name, "src_min")) {
		len = strn_len(&user_buffer[i], sizeof(pkt_dev->src_min) - 1);
		if (len < 0)
			return len;

		if (copy_from_user(buf, &user_buffer[i], len))
			return -EFAULT;
		buf[len] = 0;
		if (strcmp(buf, pkt_dev->src_min) != 0) {
			memset(pkt_dev->src_min, 0, sizeof(pkt_dev->src_min));
			strcpy(pkt_dev->src_min, buf);
			pkt_dev->saddr_min = in_aton(pkt_dev->src_min);
			pkt_dev->cur_saddr = pkt_dev->saddr_min;
		}
		if (debug)
			pr_debug("src_min set to: %s\n", pkt_dev->src_min);
		i += len;
		sprintf(pg_result, "OK: src_min=%s", pkt_dev->src_min);
		return count;
	}
	if (!strcmp(name, "src_max")) {
		len = strn_len(&user_buffer[i], sizeof(pkt_dev->src_max) - 1);
		if (len < 0)
			return len;

		if (copy_from_user(buf, &user_buffer[i], len))
			return -EFAULT;
		buf[len] = 0;
		if (strcmp(buf, pkt_dev->src_max) != 0) {
			memset(pkt_dev->src_max, 0, sizeof(pkt_dev->src_max));
			strcpy(pkt_dev->src_max, buf);
			pkt_dev->saddr_max = in_aton(pkt_dev->src_max);
			pkt_dev->cur_saddr = pkt_dev->saddr_max;
		}
		if (debug)
			pr_debug("src_max set to: %s\n", pkt_dev->src_max);
		i += len;
		sprintf(pg_result, "OK: src_max=%s", pkt_dev->src_max);
		return count;
	}
	if (!strcmp(name, "dst_mac")) {
		len = strn_len(&user_buffer[i], sizeof(valstr) - 1);
		if (len < 0)
			return len;

		memset(valstr, 0, sizeof(valstr));
		if (copy_from_user(valstr, &user_buffer[i], len))
			return -EFAULT;

		if (!mac_pton(valstr, pkt_dev->dst_mac))
			return -EINVAL;
		/* Set up Dest MAC */
		ether_addr_copy(&pkt_dev->hh[0], pkt_dev->dst_mac);

		sprintf(pg_result, "OK: dstmac %pM", pkt_dev->dst_mac);
		return count;
	}
	if (!strcmp(name, "src_mac")) {
		len = strn_len(&user_buffer[i], sizeof(valstr) - 1);
		if (len < 0)
			return len;

		memset(valstr, 0, sizeof(valstr));
		if (copy_from_user(valstr, &user_buffer[i], len))
			return -EFAULT;

		if (!mac_pton(valstr, pkt_dev->src_mac))
			return -EINVAL;
		/* Set up Src MAC */
		ether_addr_copy(&pkt_dev->hh[6], pkt_dev->src_mac);

		sprintf(pg_result, "OK: srcmac %pM", pkt_dev->src_mac);
		return count;
	}

	if (!strcmp(name, "clear_counters")) {
		pktgen_clear_counters(pkt_dev);
		sprintf(pg_result, "OK: Clearing counters.\n");
		return count;
	}

	if (!strcmp(name, "flows")) {
		len = num_arg(&user_buffer[i], 10, &value);
		if (len < 0)
			return len;

		i += len;
		if (value > MAX_CFLOWS)
			value = MAX_CFLOWS;

		pkt_dev->cflows = value;
		sprintf(pg_result, "OK: flows=%u", pkt_dev->cflows);
		return count;
	}
#ifdef CONFIG_XFRM
	if (!strcmp(name, "spi")) {
		len = num_arg(&user_buffer[i], 10, &value);
		if (len < 0)
			return len;

		i += len;
		pkt_dev->spi = value;
		sprintf(pg_result, "OK: spi=%u", pkt_dev->spi);
		return count;
	}
#endif
	if (!strcmp(name, "flowlen")) {
		len = num_arg(&user_buffer[i], 10, &value);
		if (len < 0)
			return len;

		i += len;
		pkt_dev->lflow = value;
		sprintf(pg_result, "OK: flowlen=%u", pkt_dev->lflow);
		return count;
	}

	if (!strcmp(name, "queue_map_min")) {
		len = num_arg(&user_buffer[i], 5, &value);
		if (len < 0)
			return len;

		i += len;
		pkt_dev->queue_map_min = value;
		sprintf(pg_result, "OK: queue_map_min=%u", pkt_dev->queue_map_min);
		return count;
	}

	if (!strcmp(name, "queue_map_max")) {
		len = num_arg(&user_buffer[i], 5, &value);
		if (len < 0)
			return len;

		i += len;
		pkt_dev->queue_map_max = value;
		sprintf(pg_result, "OK: queue_map_max=%u", pkt_dev->queue_map_max);
		return count;
	}

	if (!strcmp(name, "mpls")) {
		unsigned int n, cnt;

		len = get_labels(&user_buffer[i], pkt_dev);
		if (len < 0)
			return len;
		i += len;
		cnt = sprintf(pg_result, "OK: mpls=");
		for (n = 0; n < pkt_dev->nr_labels; n++)
			cnt += sprintf(pg_result + cnt,
				       "%08x%s", ntohl(pkt_dev->labels[n]),
				       n == pkt_dev->nr_labels-1 ? "" : ",");

		if (pkt_dev->nr_labels && pkt_dev->vlan_id != 0xffff) {
			pkt_dev->vlan_id = 0xffff; /* turn off VLAN/SVLAN */
			pkt_dev->svlan_id = 0xffff;

			if (debug)
				pr_debug("VLAN/SVLAN auto turned off\n");
		}
		return count;
	}

	if (!strcmp(name, "vlan_id")) {
		len = num_arg(&user_buffer[i], 4, &value);
		if (len < 0)
			return len;

		i += len;
		if (value <= 4095) {
			pkt_dev->vlan_id = value; /* turn on VLAN */

			if (debug)
				pr_debug("VLAN turned on\n");

			if (debug && pkt_dev->nr_labels)
				pr_debug("MPLS auto turned off\n");

			pkt_dev->nr_labels = 0; /* turn off MPLS */
			sprintf(pg_result, "OK: vlan_id=%u", pkt_dev->vlan_id);
		} else {
			pkt_dev->vlan_id = 0xffff; /* turn off VLAN/SVLAN */
			pkt_dev->svlan_id = 0xffff;

			if (debug)
				pr_debug("VLAN/SVLAN turned off\n");
		}
		return count;
	}

	if (!strcmp(name, "vlan_p")) {
		len = num_arg(&user_buffer[i], 1, &value);
		if (len < 0)
			return len;

		i += len;
		if ((value <= 7) && (pkt_dev->vlan_id != 0xffff)) {
			pkt_dev->vlan_p = value;
			sprintf(pg_result, "OK: vlan_p=%u", pkt_dev->vlan_p);
		} else {
			sprintf(pg_result, "ERROR: vlan_p must be 0-7");
		}
		return count;
	}

	if (!strcmp(name, "vlan_cfi")) {
		len = num_arg(&user_buffer[i], 1, &value);
		if (len < 0)
			return len;

		i += len;
		if ((value <= 1) && (pkt_dev->vlan_id != 0xffff)) {
			pkt_dev->vlan_cfi = value;
			sprintf(pg_result, "OK: vlan_cfi=%u", pkt_dev->vlan_cfi);
		} else {
			sprintf(pg_result, "ERROR: vlan_cfi must be 0-1");
		}
		return count;
	}

	if (!strcmp(name, "svlan_id")) {
		len = num_arg(&user_buffer[i], 4, &value);
		if (len < 0)
			return len;

		i += len;
		if ((value <= 4095) && ((pkt_dev->vlan_id != 0xffff))) {
			pkt_dev->svlan_id = value; /* turn on SVLAN */

			if (debug)
				pr_debug("SVLAN turned on\n");

			if (debug && pkt_dev->nr_labels)
				pr_debug("MPLS auto turned off\n");

			pkt_dev->nr_labels = 0; /* turn off MPLS */
			sprintf(pg_result, "OK: svlan_id=%u", pkt_dev->svlan_id);
		} else {
			pkt_dev->vlan_id = 0xffff; /* turn off VLAN/SVLAN */
			pkt_dev->svlan_id = 0xffff;

			if (debug)
				pr_debug("VLAN/SVLAN turned off\n");
		}
		return count;
	}

	if (!strcmp(name, "svlan_p")) {
		len = num_arg(&user_buffer[i], 1, &value);
		if (len < 0)
			return len;

		i += len;
		if ((value <= 7) && (pkt_dev->svlan_id != 0xffff)) {
			pkt_dev->svlan_p = value;
			sprintf(pg_result, "OK: svlan_p=%u", pkt_dev->svlan_p);
		} else {
			sprintf(pg_result, "ERROR: svlan_p must be 0-7");
		}
		return count;
	}

if (!strcmp(name, "svlan_cfi")) {
|
|
|
|
len = num_arg(&user_buffer[i], 1, &value);
|
2009-08-27 17:55:19 +04:00
|
|
|
if (len < 0)
|
2006-09-28 03:30:44 +04:00
|
|
|
return len;
|
2009-08-27 17:55:19 +04:00
|
|
|
|
2006-09-28 03:30:44 +04:00
|
|
|
i += len;
|
|
|
|
if ((value <= 1) && (pkt_dev->svlan_id != 0xffff)) {
|
|
|
|
pkt_dev->svlan_cfi = value;
|
|
|
|
sprintf(pg_result, "OK: svlan_cfi=%u", pkt_dev->svlan_cfi);
|
|
|
|
} else {
|
|
|
|
sprintf(pg_result, "ERROR: svlan_cfi must be 0-1");
|
|
|
|
}
|
2006-03-23 12:10:26 +03:00
|
|
|
return count;
|
|
|
|
}

	if (!strcmp(name, "tos")) {
		__u32 tmp_value = 0;
		len = hex32_arg(&user_buffer[i], 2, &tmp_value);
		if (len < 0)
			return len;

		i += len;
		if (len == 2) {
			pkt_dev->tos = tmp_value;
			sprintf(pg_result, "OK: tos=0x%02x", pkt_dev->tos);
		} else {
			sprintf(pg_result, "ERROR: tos must be 00-ff");
		}
		return count;
	}

	if (!strcmp(name, "traffic_class")) {
		__u32 tmp_value = 0;
		len = hex32_arg(&user_buffer[i], 2, &tmp_value);
		if (len < 0)
			return len;

		i += len;
		if (len == 2) {
			pkt_dev->traffic_class = tmp_value;
			sprintf(pg_result, "OK: traffic_class=0x%02x", pkt_dev->traffic_class);
		} else {
			sprintf(pg_result, "ERROR: traffic_class must be 00-ff");
		}
		return count;
	}

	if (!strcmp(name, "skb_priority")) {
		len = num_arg(&user_buffer[i], 9, &value);
		if (len < 0)
			return len;

		i += len;
		pkt_dev->skb_priority = value;
		sprintf(pg_result, "OK: skb_priority=%i",
			pkt_dev->skb_priority);
		return count;
	}

	sprintf(pkt_dev->result, "No such parameter \"%s\"", name);
	return -EINVAL;
}

static int pktgen_if_open(struct inode *inode, struct file *file)
{
	return single_open(file, pktgen_if_show, pde_data(inode));
}

static const struct proc_ops pktgen_if_proc_ops = {
	.proc_open	= pktgen_if_open,
	.proc_read	= seq_read,
	.proc_lseek	= seq_lseek,
	.proc_write	= pktgen_if_write,
	.proc_release	= single_release,
};

static int pktgen_thread_show(struct seq_file *seq, void *v)
{
	struct pktgen_thread *t = seq->private;
	const struct pktgen_dev *pkt_dev;

	BUG_ON(!t);

	seq_puts(seq, "Running: ");

	/* "if_list" is RCU-protected so that next_to_run() can walk it
	 * lock-free in the transmit loop; readers like this one take
	 * rcu_read_lock() instead of if_lock().
	 */
	rcu_read_lock();
	list_for_each_entry_rcu(pkt_dev, &t->if_list, list)
		if (pkt_dev->running)
			seq_printf(seq, "%s ", pkt_dev->odevname);

	seq_puts(seq, "\nStopped: ");
	list_for_each_entry_rcu(pkt_dev, &t->if_list, list)
		if (!pkt_dev->running)
			seq_printf(seq, "%s ", pkt_dev->odevname);

	if (t->result[0])
		seq_printf(seq, "\nResult: %s\n", t->result);
	else
		seq_puts(seq, "\nResult: NA\n");

	rcu_read_unlock();

	return 0;
}

static ssize_t pktgen_thread_write(struct file *file,
				   const char __user * user_buffer,
				   size_t count, loff_t * offset)
{
	struct seq_file *seq = file->private_data;
	struct pktgen_thread *t = seq->private;
	int i, max, len, ret;
	char name[40];
	char *pg_result;

	if (count < 1) {
		//      sprintf(pg_result, "Wrong command format");
		return -EINVAL;
	}

	max = count;
	len = count_trail_chars(user_buffer, max);
	if (len < 0)
		return len;

	i = len;

	/* Read variable name */

	len = strn_len(&user_buffer[i], sizeof(name) - 1);
	if (len < 0)
		return len;

	memset(name, 0, sizeof(name));
	if (copy_from_user(name, &user_buffer[i], len))
		return -EFAULT;
	i += len;

	max = count - i;
	len = count_trail_chars(&user_buffer[i], max);
	if (len < 0)
		return len;

	i += len;

	if (debug)
		pr_debug("t=%s, count=%lu\n", name, (unsigned long)count);

	if (!t) {
		pr_err("ERROR: No thread\n");
		ret = -EINVAL;
		goto out;
	}

	pg_result = &(t->result[0]);

	if (!strcmp(name, "add_device")) {
		char f[32];
		memset(f, 0, 32);
		len = strn_len(&user_buffer[i], sizeof(f) - 1);
		if (len < 0) {
			ret = len;
			goto out;
		}
		if (copy_from_user(f, &user_buffer[i], len))
			return -EFAULT;
		i += len;
		mutex_lock(&pktgen_thread_lock);
		ret = pktgen_add_device(t, f);
		mutex_unlock(&pktgen_thread_lock);
		if (!ret) {
			ret = count;
			sprintf(pg_result, "OK: add_device=%s", f);
		} else
			sprintf(pg_result, "ERROR: can not add device %s", f);
		goto out;
	}

	if (!strcmp(name, "rem_device_all")) {
		mutex_lock(&pktgen_thread_lock);
		t->control |= T_REMDEVALL;
		mutex_unlock(&pktgen_thread_lock);
		schedule_timeout_interruptible(msecs_to_jiffies(125));	/* Propagate thread->control  */
		ret = count;
		sprintf(pg_result, "OK: rem_device_all");
		goto out;
	}

	if (!strcmp(name, "max_before_softirq")) {
		sprintf(pg_result, "OK: Note! max_before_softirq is obsoleted -- Do not use");
		ret = count;
		goto out;
	}

	ret = -EINVAL;
out:
	return ret;
}

static int pktgen_thread_open(struct inode *inode, struct file *file)
{
	return single_open(file, pktgen_thread_show, pde_data(inode));
}

static const struct proc_ops pktgen_thread_proc_ops = {
	.proc_open	= pktgen_thread_open,
	.proc_read	= seq_read,
	.proc_lseek	= seq_lseek,
	.proc_write	= pktgen_thread_write,
	.proc_release	= single_release,
};

/* Think find or remove for NN */
static struct pktgen_dev *__pktgen_NN_threads(const struct pktgen_net *pn,
					      const char *ifname, int remove)
{
	struct pktgen_thread *t;
	struct pktgen_dev *pkt_dev = NULL;
	bool exact = (remove == FIND);

	list_for_each_entry(t, &pn->pktgen_threads, th_list) {
		pkt_dev = pktgen_find_dev(t, ifname, exact);
		if (pkt_dev) {
			if (remove) {
				pkt_dev->removal_mark = 1;
				t->control |= T_REMDEV;
			}
			break;
		}
	}
	return pkt_dev;
}

/*
 * mark a device for removal
 */
static void pktgen_mark_device(const struct pktgen_net *pn, const char *ifname)
{
	struct pktgen_dev *pkt_dev = NULL;
	const int max_tries = 10, msec_per_try = 125;
	int i = 0;

	mutex_lock(&pktgen_thread_lock);
	pr_debug("%s: marking %s for removal\n", __func__, ifname);

	while (1) {

		pkt_dev = __pktgen_NN_threads(pn, ifname, REMOVE);
		if (pkt_dev == NULL)
			break;	/* success */

		mutex_unlock(&pktgen_thread_lock);
		pr_debug("%s: waiting for %s to disappear....\n",
			 __func__, ifname);
		schedule_timeout_interruptible(msecs_to_jiffies(msec_per_try));
		mutex_lock(&pktgen_thread_lock);

		if (++i >= max_tries) {
			pr_err("%s: timed out after waiting %d msec for device %s to be removed\n",
			       __func__, msec_per_try * i, ifname);
			break;
		}

	}

	mutex_unlock(&pktgen_thread_lock);
}

static void pktgen_change_name(const struct pktgen_net *pn, struct net_device *dev)
{
	struct pktgen_thread *t;

	mutex_lock(&pktgen_thread_lock);

	list_for_each_entry(t, &pn->pktgen_threads, th_list) {
		struct pktgen_dev *pkt_dev;

		if_lock(t);
		list_for_each_entry(pkt_dev, &t->if_list, list) {
			if (pkt_dev->odev != dev)
				continue;

			proc_remove(pkt_dev->entry);

			pkt_dev->entry = proc_create_data(dev->name, 0600,
							  pn->proc_dir,
							  &pktgen_if_proc_ops,
							  pkt_dev);
			if (!pkt_dev->entry)
				pr_err("can't move proc entry for '%s'\n",
				       dev->name);
			break;
		}
		if_unlock(t);
	}
	mutex_unlock(&pktgen_thread_lock);
}

static int pktgen_device_event(struct notifier_block *unused,
			       unsigned long event, void *ptr)
{
	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
	struct pktgen_net *pn = net_generic(dev_net(dev), pg_net_id);

	if (pn->pktgen_exiting)
		return NOTIFY_DONE;

	/* It is OK that we do not hold the group lock right now,
	 * as we run under the RTNL lock.
	 */

	switch (event) {
	case NETDEV_CHANGENAME:
		pktgen_change_name(pn, dev);
		break;

	case NETDEV_UNREGISTER:
		pktgen_mark_device(pn, dev->name);
		break;
	}

	return NOTIFY_DONE;
}

static struct net_device *pktgen_dev_get_by_name(const struct pktgen_net *pn,
						 struct pktgen_dev *pkt_dev,
						 const char *ifname)
{
	char b[IFNAMSIZ+5];
	int i;

	for (i = 0; ifname[i] != '@'; i++) {
		if (i == IFNAMSIZ)
			break;

		b[i] = ifname[i];
	}
	b[i] = 0;

	return dev_get_by_name(pn->net, b);
}

/* Associate pktgen_dev with a device. */
static int pktgen_setup_dev(const struct pktgen_net *pn,
			    struct pktgen_dev *pkt_dev, const char *ifname)
{
	struct net_device *odev;
	int err;

	/* Clean old setups */
	if (pkt_dev->odev) {
		netdev_put(pkt_dev->odev, &pkt_dev->dev_tracker);
		pkt_dev->odev = NULL;
	}

	odev = pktgen_dev_get_by_name(pn, pkt_dev, ifname);
	if (!odev) {
		pr_err("no such netdevice: \"%s\"\n", ifname);
		return -ENODEV;
	}

	if (odev->type != ARPHRD_ETHER && odev->type != ARPHRD_LOOPBACK) {
		pr_err("not an ethernet or loopback device: \"%s\"\n", ifname);
		err = -EINVAL;
	} else if (!netif_running(odev)) {
		pr_err("device is down: \"%s\"\n", ifname);
		err = -ENETDOWN;
	} else {
		pkt_dev->odev = odev;
		netdev_tracker_alloc(odev, &pkt_dev->dev_tracker, GFP_KERNEL);
		return 0;
	}

	dev_put(odev);
	return err;
}

/* Read pkt_dev from the interface and set up internal pktgen_dev
 * structure to have the right information to create/send packets
 */
static void pktgen_setup_inject(struct pktgen_dev *pkt_dev)
{
	int ntxq;

	if (!pkt_dev->odev) {
		pr_err("ERROR: pkt_dev->odev == NULL in setup_inject\n");
		sprintf(pkt_dev->result,
			"ERROR: pkt_dev->odev == NULL in setup_inject.\n");
		return;
	}

	/* make sure that we don't pick a non-existing transmit queue */
	ntxq = pkt_dev->odev->real_num_tx_queues;

	if (ntxq <= pkt_dev->queue_map_min) {
		pr_warn("WARNING: Requested queue_map_min (zero-based) (%d) exceeds valid range [0 - %d] for (%d) queues on %s, resetting\n",
			pkt_dev->queue_map_min, (ntxq ?: 1) - 1, ntxq,
			pkt_dev->odevname);
		pkt_dev->queue_map_min = (ntxq ?: 1) - 1;
	}
	if (pkt_dev->queue_map_max >= ntxq) {
		pr_warn("WARNING: Requested queue_map_max (zero-based) (%d) exceeds valid range [0 - %d] for (%d) queues on %s, resetting\n",
			pkt_dev->queue_map_max, (ntxq ?: 1) - 1, ntxq,
			pkt_dev->odevname);
		pkt_dev->queue_map_max = (ntxq ?: 1) - 1;
	}

	/* Default to the interface's mac if not explicitly set. */

	if (is_zero_ether_addr(pkt_dev->src_mac))
		ether_addr_copy(&(pkt_dev->hh[6]), pkt_dev->odev->dev_addr);

	/* Set up Dest MAC */
	ether_addr_copy(&(pkt_dev->hh[0]), pkt_dev->dst_mac);

	if (pkt_dev->flags & F_IPV6) {
		int i, set = 0, err = 1;
		struct inet6_dev *idev;

		if (pkt_dev->min_pkt_size == 0) {
			pkt_dev->min_pkt_size = 14 + sizeof(struct ipv6hdr)
						+ sizeof(struct udphdr)
						+ sizeof(struct pktgen_hdr)
						+ pkt_dev->pkt_overhead;
		}

		for (i = 0; i < sizeof(struct in6_addr); i++)
			if (pkt_dev->cur_in6_saddr.s6_addr[i]) {
				set = 1;
				break;
			}

		if (!set) {

			/*
			 * Use linklevel address if unconfigured.
			 *
			 * use ipv6_get_lladdr if/when it's get exported
			 */

			rcu_read_lock();
			idev = __in6_dev_get(pkt_dev->odev);
			if (idev) {
				struct inet6_ifaddr *ifp;

				read_lock_bh(&idev->lock);
				list_for_each_entry(ifp, &idev->addr_list, if_list) {
					if ((ifp->scope & IFA_LINK) &&
					    !(ifp->flags & IFA_F_TENTATIVE)) {
						pkt_dev->cur_in6_saddr = ifp->addr;
						err = 0;
						break;
					}
				}
				read_unlock_bh(&idev->lock);
			}
			rcu_read_unlock();
			if (err)
				pr_err("ERROR: IPv6 link address not available\n");
		}
	} else {
		if (pkt_dev->min_pkt_size == 0) {
			pkt_dev->min_pkt_size = 14 + sizeof(struct iphdr)
						+ sizeof(struct udphdr)
						+ sizeof(struct pktgen_hdr)
						+ pkt_dev->pkt_overhead;
		}

		pkt_dev->saddr_min = 0;
		pkt_dev->saddr_max = 0;
		if (strlen(pkt_dev->src_min) == 0) {

			struct in_device *in_dev;

			rcu_read_lock();
			in_dev = __in_dev_get_rcu(pkt_dev->odev);
			if (in_dev) {
				const struct in_ifaddr *ifa;

				ifa = rcu_dereference(in_dev->ifa_list);
				if (ifa) {
					pkt_dev->saddr_min = ifa->ifa_address;
					pkt_dev->saddr_max = pkt_dev->saddr_min;
				}
			}
			rcu_read_unlock();
		} else {
			pkt_dev->saddr_min = in_aton(pkt_dev->src_min);
			pkt_dev->saddr_max = in_aton(pkt_dev->src_max);
		}

		pkt_dev->daddr_min = in_aton(pkt_dev->dst_min);
		pkt_dev->daddr_max = in_aton(pkt_dev->dst_max);
	}
	/* Initialize current values. */
	pkt_dev->cur_pkt_size = pkt_dev->min_pkt_size;
	if (pkt_dev->min_pkt_size > pkt_dev->max_pkt_size)
		pkt_dev->max_pkt_size = pkt_dev->min_pkt_size;

	pkt_dev->cur_dst_mac_offset = 0;
	pkt_dev->cur_src_mac_offset = 0;
	pkt_dev->cur_saddr = pkt_dev->saddr_min;
	pkt_dev->cur_daddr = pkt_dev->daddr_min;
	pkt_dev->cur_udp_dst = pkt_dev->udp_dst_min;
	pkt_dev->cur_udp_src = pkt_dev->udp_src_min;
	pkt_dev->nflows = 0;
}

static void spin(struct pktgen_dev *pkt_dev, ktime_t spin_until)
{
	ktime_t start_time, end_time;
	s64 remaining;
	struct hrtimer_sleeper t;

	hrtimer_init_sleeper_on_stack(&t, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
	hrtimer_set_expires(&t.timer, spin_until);

	remaining = ktime_to_ns(hrtimer_expires_remaining(&t.timer));
	if (remaining <= 0)
		goto out;

	start_time = ktime_get();
	if (remaining < 100000) {
		/* for small delays (<100us), just loop until limit is reached */
		do {
			end_time = ktime_get();
		} while (ktime_compare(end_time, spin_until) < 0);
	} else {
		do {
			set_current_state(TASK_INTERRUPTIBLE);
			hrtimer_sleeper_start_expires(&t, HRTIMER_MODE_ABS);

			if (likely(t.task))
				schedule();

			hrtimer_cancel(&t.timer);
		} while (t.task && pkt_dev->running && !signal_pending(current));
		__set_current_state(TASK_RUNNING);
		end_time = ktime_get();
	}

	pkt_dev->idle_acc += ktime_to_ns(ktime_sub(end_time, start_time));
out:
	pkt_dev->next_tx = ktime_add_ns(spin_until, pkt_dev->delay);
	destroy_hrtimer_on_stack(&t.timer);
}

static inline void set_pkt_overhead(struct pktgen_dev *pkt_dev)
{
	pkt_dev->pkt_overhead = 0;
	pkt_dev->pkt_overhead += pkt_dev->nr_labels * sizeof(u32);
	pkt_dev->pkt_overhead += VLAN_TAG_SIZE(pkt_dev);
	pkt_dev->pkt_overhead += SVLAN_TAG_SIZE(pkt_dev);
}

static inline int f_seen(const struct pktgen_dev *pkt_dev, int flow)
{
	return !!(pkt_dev->flows[flow].flags & F_INIT);
}

static inline int f_pick(struct pktgen_dev *pkt_dev)
{
	int flow = pkt_dev->curfl;

	if (pkt_dev->flags & F_FLOW_SEQ) {
		if (pkt_dev->flows[flow].count >= pkt_dev->lflow) {
			/* reset time */
			pkt_dev->flows[flow].count = 0;
			pkt_dev->flows[flow].flags = 0;
			pkt_dev->curfl += 1;
			if (pkt_dev->curfl >= pkt_dev->cflows)
				pkt_dev->curfl = 0; /* reset */
		}
	} else {
		flow = prandom_u32_max(pkt_dev->cflows);
		pkt_dev->curfl = flow;

		if (pkt_dev->flows[flow].count > pkt_dev->lflow) {
			pkt_dev->flows[flow].count = 0;
			pkt_dev->flows[flow].flags = 0;
		}
	}

	return pkt_dev->curfl;
}

#ifdef CONFIG_XFRM
/* If there was already an IPSEC SA, we keep it as is, else
 * we go look for it ...
 */
#define DUMMY_MARK 0
static void get_ipsec_sa(struct pktgen_dev *pkt_dev, int flow)
{
	struct xfrm_state *x = pkt_dev->flows[flow].x;
	struct pktgen_net *pn = net_generic(dev_net(pkt_dev->odev), pg_net_id);

	if (!x) {

		if (pkt_dev->spi) {
			/* We need to find the right SA as quickly as possible,
			 * so search with minimum criteria to achieve this.
			 */
			x = xfrm_state_lookup_byspi(pn->net, htonl(pkt_dev->spi), AF_INET);
		} else {
			/* slow path: we don't already have xfrm_state */
			x = xfrm_stateonly_find(pn->net, DUMMY_MARK, 0,
						(xfrm_address_t *)&pkt_dev->cur_daddr,
						(xfrm_address_t *)&pkt_dev->cur_saddr,
						AF_INET,
						pkt_dev->ipsmode,
						pkt_dev->ipsproto, 0);
		}
		if (x) {
			pkt_dev->flows[flow].x = x;
			set_pkt_overhead(pkt_dev);
			pkt_dev->pkt_overhead += x->props.header_len;
		}
	}
}
#endif

static void set_cur_queue_map(struct pktgen_dev *pkt_dev)
{
	if (pkt_dev->flags & F_QUEUE_MAP_CPU)
		pkt_dev->cur_queue_map = smp_processor_id();

	else if (pkt_dev->queue_map_min <= pkt_dev->queue_map_max) {
		__u16 t;

		if (pkt_dev->flags & F_QUEUE_MAP_RND) {
			t = prandom_u32_max(pkt_dev->queue_map_max -
					    pkt_dev->queue_map_min + 1) +
			    pkt_dev->queue_map_min;
		} else {
			t = pkt_dev->cur_queue_map + 1;
			if (t > pkt_dev->queue_map_max)
				t = pkt_dev->queue_map_min;
		}
		pkt_dev->cur_queue_map = t;
	}
	pkt_dev->cur_queue_map = pkt_dev->cur_queue_map % pkt_dev->odev->real_num_tx_queues;
}

/* Increment/randomize headers according to flags and current values
 * for IP src/dest, UDP src/dst port, MAC-Addr src/dst
 */
static void mod_cur_headers(struct pktgen_dev *pkt_dev)
{
	__u32 imn;
	__u32 imx;
	int flow = 0;

	if (pkt_dev->cflows)
		flow = f_pick(pkt_dev);

	/* Deal with source MAC */
	if (pkt_dev->src_mac_count > 1) {
		__u32 mc;
		__u32 tmp;

		if (pkt_dev->flags & F_MACSRC_RND)
			mc = prandom_u32_max(pkt_dev->src_mac_count);
		else {
			mc = pkt_dev->cur_src_mac_offset++;
			if (pkt_dev->cur_src_mac_offset >=
			    pkt_dev->src_mac_count)
				pkt_dev->cur_src_mac_offset = 0;
		}

		tmp = pkt_dev->src_mac[5] + (mc & 0xFF);
		pkt_dev->hh[11] = tmp;
		tmp = (pkt_dev->src_mac[4] + ((mc >> 8) & 0xFF) + (tmp >> 8));
		pkt_dev->hh[10] = tmp;
		tmp = (pkt_dev->src_mac[3] + ((mc >> 16) & 0xFF) + (tmp >> 8));
		pkt_dev->hh[9] = tmp;
		tmp = (pkt_dev->src_mac[2] + ((mc >> 24) & 0xFF) + (tmp >> 8));
		pkt_dev->hh[8] = tmp;
		tmp = (pkt_dev->src_mac[1] + (tmp >> 8));
		pkt_dev->hh[7] = tmp;
	}

	/* Deal with Destination MAC */
	if (pkt_dev->dst_mac_count > 1) {
		__u32 mc;
		__u32 tmp;

		if (pkt_dev->flags & F_MACDST_RND)
			mc = prandom_u32_max(pkt_dev->dst_mac_count);

		else {
			mc = pkt_dev->cur_dst_mac_offset++;
			if (pkt_dev->cur_dst_mac_offset >=
			    pkt_dev->dst_mac_count) {
				pkt_dev->cur_dst_mac_offset = 0;
			}
		}

		tmp = pkt_dev->dst_mac[5] + (mc & 0xFF);
		pkt_dev->hh[5] = tmp;
		tmp = (pkt_dev->dst_mac[4] + ((mc >> 8) & 0xFF) + (tmp >> 8));
		pkt_dev->hh[4] = tmp;
		tmp = (pkt_dev->dst_mac[3] + ((mc >> 16) & 0xFF) + (tmp >> 8));
		pkt_dev->hh[3] = tmp;
		tmp = (pkt_dev->dst_mac[2] + ((mc >> 24) & 0xFF) + (tmp >> 8));
		pkt_dev->hh[2] = tmp;
		tmp = (pkt_dev->dst_mac[1] + (tmp >> 8));
		pkt_dev->hh[1] = tmp;
	}

	if (pkt_dev->flags & F_MPLS_RND) {
		unsigned int i;

		for (i = 0; i < pkt_dev->nr_labels; i++)
			if (pkt_dev->labels[i] & MPLS_STACK_BOTTOM)
				pkt_dev->labels[i] = MPLS_STACK_BOTTOM |
					((__force __be32)get_random_u32() &
						 htonl(0x000fffff));
	}

	if ((pkt_dev->flags & F_VID_RND) && (pkt_dev->vlan_id != 0xffff)) {
		pkt_dev->vlan_id = prandom_u32_max(4096);
	}

	if ((pkt_dev->flags & F_SVID_RND) && (pkt_dev->svlan_id != 0xffff)) {
		pkt_dev->svlan_id = prandom_u32_max(4096);
	}

	if (pkt_dev->udp_src_min < pkt_dev->udp_src_max) {
		if (pkt_dev->flags & F_UDPSRC_RND)
			pkt_dev->cur_udp_src = prandom_u32_max(
				pkt_dev->udp_src_max - pkt_dev->udp_src_min) +
				pkt_dev->udp_src_min;

		else {
			pkt_dev->cur_udp_src++;
			if (pkt_dev->cur_udp_src >= pkt_dev->udp_src_max)
				pkt_dev->cur_udp_src = pkt_dev->udp_src_min;
		}
	}

	if (pkt_dev->udp_dst_min < pkt_dev->udp_dst_max) {
		if (pkt_dev->flags & F_UDPDST_RND) {
			pkt_dev->cur_udp_dst = prandom_u32_max(
				pkt_dev->udp_dst_max - pkt_dev->udp_dst_min) +
				pkt_dev->udp_dst_min;
		} else {
			pkt_dev->cur_udp_dst++;
			if (pkt_dev->cur_udp_dst >= pkt_dev->udp_dst_max)
				pkt_dev->cur_udp_dst = pkt_dev->udp_dst_min;
		}
	}

	if (!(pkt_dev->flags & F_IPV6)) {

		imn = ntohl(pkt_dev->saddr_min);
		imx = ntohl(pkt_dev->saddr_max);
		if (imn < imx) {
			__u32 t;

			if (pkt_dev->flags & F_IPSRC_RND)
				t = prandom_u32_max(imx - imn) + imn;
			else {
				t = ntohl(pkt_dev->cur_saddr);
				t++;
				if (t > imx)
					t = imn;

			}
			pkt_dev->cur_saddr = htonl(t);
		}

		if (pkt_dev->cflows && f_seen(pkt_dev, flow)) {
			pkt_dev->cur_daddr = pkt_dev->flows[flow].cur_daddr;
		} else {
			imn = ntohl(pkt_dev->daddr_min);
			imx = ntohl(pkt_dev->daddr_max);
			if (imn < imx) {
				__u32 t;
				__be32 s;

				if (pkt_dev->flags & F_IPDST_RND) {

					do {
						t = prandom_u32_max(imx - imn) +
						    imn;
						s = htonl(t);
					} while (ipv4_is_loopback(s) ||
						 ipv4_is_multicast(s) ||
						 ipv4_is_lbcast(s) ||
						 ipv4_is_zeronet(s) ||
						 ipv4_is_local_multicast(s));
					pkt_dev->cur_daddr = s;
				} else {
					t = ntohl(pkt_dev->cur_daddr);
					t++;
					if (t > imx) {
						t = imn;
					}
					pkt_dev->cur_daddr = htonl(t);
				}
			}
			if (pkt_dev->cflows) {
				pkt_dev->flows[flow].flags |= F_INIT;
				pkt_dev->flows[flow].cur_daddr =
				    pkt_dev->cur_daddr;
#ifdef CONFIG_XFRM
				if (pkt_dev->flags & F_IPSEC)
					get_ipsec_sa(pkt_dev, flow);
#endif
				pkt_dev->nflows++;
			}
		}
	} else {		/* IPV6 */

		if (!ipv6_addr_any(&pkt_dev->min_in6_daddr)) {
			int i;

			/* Only random destinations yet */

			for (i = 0; i < 4; i++) {
				pkt_dev->cur_in6_daddr.s6_addr32[i] =
				    (((__force __be32)get_random_u32() |
				      pkt_dev->min_in6_daddr.s6_addr32[i]) &
				     pkt_dev->max_in6_daddr.s6_addr32[i]);
			}
		}
	}

	if (pkt_dev->min_pkt_size < pkt_dev->max_pkt_size) {
		__u32 t;

		if (pkt_dev->flags & F_TXSIZE_RND) {
			t = prandom_u32_max(pkt_dev->max_pkt_size -
					    pkt_dev->min_pkt_size) +
			    pkt_dev->min_pkt_size;
		} else {
			t = pkt_dev->cur_pkt_size + 1;
			if (t > pkt_dev->max_pkt_size)
				t = pkt_dev->min_pkt_size;
		}
		pkt_dev->cur_pkt_size = t;
	} else if (pkt_dev->n_imix_entries > 0) {
		struct imix_pkt *entry;
		__u32 t = prandom_u32_max(IMIX_PRECISION);
		__u8 entry_index = pkt_dev->imix_distribution[t];

		entry = &pkt_dev->imix_entries[entry_index];
		entry->count_so_far++;
		pkt_dev->cur_pkt_size = entry->size;
	}

	set_cur_queue_map(pkt_dev);

	pkt_dev->flows[flow].count++;
}

static void fill_imix_distribution(struct pktgen_dev *pkt_dev)
{
	int cumulative_probabilities[MAX_IMIX_ENTRIES];
	int j = 0;
	__u64 cumulative_prob = 0;
	__u64 total_weight = 0;
	int i = 0;

	for (i = 0; i < pkt_dev->n_imix_entries; i++)
		total_weight += pkt_dev->imix_entries[i].weight;

	/* Fill cumulative_probabilities with sum of normalized probabilities */
	for (i = 0; i < pkt_dev->n_imix_entries - 1; i++) {
		cumulative_prob += div64_u64(pkt_dev->imix_entries[i].weight *
						 IMIX_PRECISION,
					     total_weight);
		cumulative_probabilities[i] = cumulative_prob;
	}
	cumulative_probabilities[pkt_dev->n_imix_entries - 1] = 100;

	for (i = 0; i < IMIX_PRECISION; i++) {
		if (i == cumulative_probabilities[j])
			j++;
		pkt_dev->imix_distribution[i] = j;
	}
}

#ifdef CONFIG_XFRM
static u32 pktgen_dst_metrics[RTAX_MAX + 1] = {
	[RTAX_HOPLIMIT] = 0x5, /* Set a static hoplimit */
};

static int pktgen_output_ipsec(struct sk_buff *skb, struct pktgen_dev *pkt_dev)
{
	struct xfrm_state *x = pkt_dev->flows[pkt_dev->curfl].x;
	int err = 0;
	struct net *net = dev_net(pkt_dev->odev);

	if (!x)
		return 0;
	/* XXX: we don't support tunnel mode for now until
	 * we resolve the dst issue */
	if ((x->props.mode != XFRM_MODE_TRANSPORT) && (pkt_dev->spi == 0))
		return 0;

	/* But when the user specifies a valid SPI, the transformation
	 * supports both transport/tunnel mode + ESP/AH type.
	 */
	if ((x->props.mode == XFRM_MODE_TUNNEL) && (pkt_dev->spi != 0))
		skb->_skb_refdst = (unsigned long)&pkt_dev->xdst.u.dst | SKB_DST_NOREF;

	rcu_read_lock_bh();
	err = pktgen_xfrm_outer_mode_output(x, skb);
	rcu_read_unlock_bh();
	if (err) {
		XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTSTATEMODEERROR);
		goto error;
	}
	err = x->type->output(x, skb);
	if (err) {
		XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTSTATEPROTOERROR);
		goto error;
	}
	spin_lock_bh(&x->lock);
	x->curlft.bytes += skb->len;
	x->curlft.packets++;
	spin_unlock_bh(&x->lock);
error:
	return err;
}

static void free_SAs(struct pktgen_dev *pkt_dev)
{
	if (pkt_dev->cflows) {
		/* let go of the SAs if we have them */
		int i;

		for (i = 0; i < pkt_dev->cflows; i++) {
			struct xfrm_state *x = pkt_dev->flows[i].x;

			if (x) {
				xfrm_state_put(x);
				pkt_dev->flows[i].x = NULL;
			}
		}
	}
}

static int process_ipsec(struct pktgen_dev *pkt_dev,
			 struct sk_buff *skb, __be16 protocol)
{
	if (pkt_dev->flags & F_IPSEC) {
		struct xfrm_state *x = pkt_dev->flows[pkt_dev->curfl].x;
		int nhead = 0;

		if (x) {
			struct ethhdr *eth;
			struct iphdr *iph;
			int ret;

			nhead = x->props.header_len - skb_headroom(skb);
			if (nhead > 0) {
				ret = pskb_expand_head(skb, nhead, 0, GFP_ATOMIC);
				if (ret < 0) {
					pr_err("Error expanding ipsec packet %d\n",
					       ret);
					goto err;
				}
			}

			/* ipsec is not expecting ll header */
			skb_pull(skb, ETH_HLEN);
			ret = pktgen_output_ipsec(skb, pkt_dev);
			if (ret) {
				pr_err("Error creating ipsec packet %d\n", ret);
				goto err;
			}
			/* restore ll */
			eth = skb_push(skb, ETH_HLEN);
			memcpy(eth, pkt_dev->hh, 2 * ETH_ALEN);
			eth->h_proto = protocol;

			/* Update IPv4 header len as well as checksum value */
			iph = ip_hdr(skb);
			iph->tot_len = htons(skb->len - ETH_HLEN);
			ip_send_check(iph);
		}
	}
	return 1;
err:
	kfree_skb(skb);
	return 0;
}
#endif

static void mpls_push(__be32 *mpls, struct pktgen_dev *pkt_dev)
{
	unsigned int i;

	for (i = 0; i < pkt_dev->nr_labels; i++)
		*mpls++ = pkt_dev->labels[i] & ~MPLS_STACK_BOTTOM;

	mpls--;
	*mpls |= MPLS_STACK_BOTTOM;
}

static inline __be16 build_tci(unsigned int id, unsigned int cfi,
			       unsigned int prio)
{
	return htons(id | (cfi << 12) | (prio << 13));
}

static void pktgen_finalize_skb(struct pktgen_dev *pkt_dev, struct sk_buff *skb,
				int datalen)
{
	struct timespec64 timestamp;
	struct pktgen_hdr *pgh;

	pgh = skb_put(skb, sizeof(*pgh));
	datalen -= sizeof(*pgh);

	if (pkt_dev->nfrags <= 0) {
		skb_put_zero(skb, datalen);
	} else {
		int frags = pkt_dev->nfrags;
		int i, len;
		int frag_len;

		if (frags > MAX_SKB_FRAGS)
			frags = MAX_SKB_FRAGS;
		len = datalen - frags * PAGE_SIZE;
		if (len > 0) {
			skb_put_zero(skb, len);
			datalen = frags * PAGE_SIZE;
		}

		i = 0;
		frag_len = (datalen / frags) < PAGE_SIZE ?
			   (datalen / frags) : PAGE_SIZE;
		while (datalen > 0) {
			if (unlikely(!pkt_dev->page)) {
				int node = numa_node_id();

				if (pkt_dev->node >= 0 && (pkt_dev->flags & F_NODE))
					node = pkt_dev->node;
				pkt_dev->page = alloc_pages_node(node, GFP_KERNEL | __GFP_ZERO, 0);
				if (!pkt_dev->page)
					break;
			}
			get_page(pkt_dev->page);
			skb_frag_set_page(skb, i, pkt_dev->page);
			skb_frag_off_set(&skb_shinfo(skb)->frags[i], 0);
			/* last fragment, fill rest of data */
			if (i == (frags - 1))
				skb_frag_size_set(&skb_shinfo(skb)->frags[i],
						  (datalen < PAGE_SIZE ? datalen : PAGE_SIZE));
			else
				skb_frag_size_set(&skb_shinfo(skb)->frags[i], frag_len);
			datalen -= skb_frag_size(&skb_shinfo(skb)->frags[i]);
			skb->len += skb_frag_size(&skb_shinfo(skb)->frags[i]);
			skb->data_len += skb_frag_size(&skb_shinfo(skb)->frags[i]);
			i++;
			skb_shinfo(skb)->nr_frags = i;
		}
	}

	/* Stamp the time, and sequence number,
	 * convert them to network byte order
	 */
	pgh->pgh_magic = htonl(PKTGEN_MAGIC);
	pgh->seq_num = htonl(pkt_dev->seq_num);

	if (pkt_dev->flags & F_NO_TIMESTAMP) {
		pgh->tv_sec = 0;
		pgh->tv_usec = 0;
	} else {
		/*
		 * pgh->tv_sec wraps in y2106 when interpreted as unsigned
		 * as done by wireshark, or y2038 when interpreted as signed.
		 * This is probably harmless, but if anyone wants to improve
		 * it, we could introduce a variant that puts 64-bit nanoseconds
		 * into the respective header bytes.
		 * This would also be slightly faster to read.
		 */
		ktime_get_real_ts64(&timestamp);
		pgh->tv_sec = htonl(timestamp.tv_sec);
		pgh->tv_usec = htonl(timestamp.tv_nsec / NSEC_PER_USEC);
	}
}

static struct sk_buff *pktgen_alloc_skb(struct net_device *dev,
					struct pktgen_dev *pkt_dev)
{
	unsigned int extralen = LL_RESERVED_SPACE(dev);
	struct sk_buff *skb = NULL;
	unsigned int size;

	size = pkt_dev->cur_pkt_size + 64 + extralen + pkt_dev->pkt_overhead;
	if (pkt_dev->flags & F_NODE) {
		int node = pkt_dev->node >= 0 ? pkt_dev->node : numa_node_id();

		skb = __alloc_skb(NET_SKB_PAD + size, GFP_NOWAIT, 0, node);
		if (likely(skb)) {
			skb_reserve(skb, NET_SKB_PAD);
			skb->dev = dev;
		}
	} else {
		skb = __netdev_alloc_skb(dev, size, GFP_NOWAIT);
	}

	/* the caller pre-fetches from skb->data and reserves for the mac hdr */
	if (likely(skb))
		skb_reserve(skb, extralen - 16);

	return skb;
}
|
|
|
|
|
2006-03-21 09:16:13 +03:00
|
|
|
static struct sk_buff *fill_packet_ipv4(struct net_device *odev,
|
|
|
|
struct pktgen_dev *pkt_dev)
|
2005-04-17 02:20:36 +04:00
|
|
|
{
|
|
|
|
struct sk_buff *skb = NULL;
|
|
|
|
__u8 *eth;
|
|
|
|
struct udphdr *udph;
|
|
|
|
int datalen, iplen;
|
|
|
|
struct iphdr *iph;
|
2007-03-05 03:08:08 +03:00
|
|
|
	__be16 protocol = htons(ETH_P_IP);
	__be32 *mpls;
	__be16 *vlan_tci = NULL;                 /* Encapsulates priority and VLAN ID */
	__be16 *vlan_encapsulated_proto = NULL;  /* packet type ID field (or len) for VLAN tag */
	__be16 *svlan_tci = NULL;                /* Encapsulates priority and SVLAN ID */
	__be16 *svlan_encapsulated_proto = NULL; /* packet type ID field (or len) for SVLAN tag */
	u16 queue_map;

	if (pkt_dev->nr_labels)
		protocol = htons(ETH_P_MPLS_UC);

	if (pkt_dev->vlan_id != 0xffff)
		protocol = htons(ETH_P_8021Q);

	/* Update any of the values, used when we're incrementing various
	 * fields.
	 */
	mod_cur_headers(pkt_dev);
	queue_map = pkt_dev->cur_queue_map;

	skb = pktgen_alloc_skb(odev, pkt_dev);
	if (!skb) {
		sprintf(pkt_dev->result, "No memory");
		return NULL;
	}

	prefetchw(skb->data);
	skb_reserve(skb, 16);

	/* Reserve for ethernet and IP header */
	eth = skb_push(skb, 14);
	mpls = skb_put(skb, pkt_dev->nr_labels * sizeof(__u32));
	if (pkt_dev->nr_labels)
		mpls_push(mpls, pkt_dev);

	if (pkt_dev->vlan_id != 0xffff) {
		if (pkt_dev->svlan_id != 0xffff) {
			svlan_tci = skb_put(skb, sizeof(__be16));
			*svlan_tci = build_tci(pkt_dev->svlan_id,
					       pkt_dev->svlan_cfi,
					       pkt_dev->svlan_p);
			svlan_encapsulated_proto = skb_put(skb,
							   sizeof(__be16));
			*svlan_encapsulated_proto = htons(ETH_P_8021Q);
		}
		vlan_tci = skb_put(skb, sizeof(__be16));
		*vlan_tci = build_tci(pkt_dev->vlan_id,
				      pkt_dev->vlan_cfi,
				      pkt_dev->vlan_p);
		vlan_encapsulated_proto = skb_put(skb, sizeof(__be16));
		*vlan_encapsulated_proto = htons(ETH_P_IP);
	}

	skb_reset_mac_header(skb);
	skb_set_network_header(skb, skb->len);
	iph = skb_put(skb, sizeof(struct iphdr));

	skb_set_transport_header(skb, skb->len);
	udph = skb_put(skb, sizeof(struct udphdr));
	skb_set_queue_mapping(skb, queue_map);
	skb->priority = pkt_dev->skb_priority;

	memcpy(eth, pkt_dev->hh, 12);
	*(__be16 *)&eth[12] = protocol;

	/* Eth + IPh + UDPh + mpls */
	datalen = pkt_dev->cur_pkt_size - 14 - 20 - 8 -
		  pkt_dev->pkt_overhead;
	if (datalen < 0 || datalen < sizeof(struct pktgen_hdr))
		datalen = sizeof(struct pktgen_hdr);

	udph->source = htons(pkt_dev->cur_udp_src);
	udph->dest = htons(pkt_dev->cur_udp_dst);
	udph->len = htons(datalen + 8);	/* DATA + udphdr */
	udph->check = 0;

	iph->ihl = 5;
	iph->version = 4;
	iph->ttl = 32;
	iph->tos = pkt_dev->tos;
	iph->protocol = IPPROTO_UDP;	/* UDP */
	iph->saddr = pkt_dev->cur_saddr;
	iph->daddr = pkt_dev->cur_daddr;
	iph->id = htons(pkt_dev->ip_id);
	pkt_dev->ip_id++;
	iph->frag_off = 0;
	iplen = 20 + 8 + datalen;
	iph->tot_len = htons(iplen);
	ip_send_check(iph);
	skb->protocol = protocol;
	skb->dev = odev;
	skb->pkt_type = PACKET_HOST;

	pktgen_finalize_skb(pkt_dev, skb, datalen);

	if (!(pkt_dev->flags & F_UDPCSUM)) {
		skb->ip_summed = CHECKSUM_NONE;
	} else if (odev->features & (NETIF_F_HW_CSUM | NETIF_F_IP_CSUM)) {
		skb->ip_summed = CHECKSUM_PARTIAL;
		skb->csum = 0;
		udp4_hwcsum(skb, iph->saddr, iph->daddr);
	} else {
		__wsum csum = skb_checksum(skb, skb_transport_offset(skb),
					   datalen + 8, 0);

		/* add protocol-dependent pseudo-header */
		udph->check = csum_tcpudp_magic(iph->saddr, iph->daddr,
						datalen + 8, IPPROTO_UDP, csum);

		if (udph->check == 0)
			udph->check = CSUM_MANGLED_0;
	}

#ifdef CONFIG_XFRM
	if (!process_ipsec(pkt_dev, skb, protocol))
		return NULL;
#endif

	return skb;
}

static struct sk_buff *fill_packet_ipv6(struct net_device *odev,
					struct pktgen_dev *pkt_dev)
{
	struct sk_buff *skb = NULL;
	__u8 *eth;
	struct udphdr *udph;
	int datalen, udplen;
	struct ipv6hdr *iph;
	__be16 protocol = htons(ETH_P_IPV6);
	__be32 *mpls;
	__be16 *vlan_tci = NULL;                 /* Encapsulates priority and VLAN ID */
	__be16 *vlan_encapsulated_proto = NULL;  /* packet type ID field (or len) for VLAN tag */
	__be16 *svlan_tci = NULL;                /* Encapsulates priority and SVLAN ID */
	__be16 *svlan_encapsulated_proto = NULL; /* packet type ID field (or len) for SVLAN tag */
	u16 queue_map;

	if (pkt_dev->nr_labels)
		protocol = htons(ETH_P_MPLS_UC);

	if (pkt_dev->vlan_id != 0xffff)
		protocol = htons(ETH_P_8021Q);

	/* Update any of the values, used when we're incrementing various
	 * fields.
	 */
	mod_cur_headers(pkt_dev);
	queue_map = pkt_dev->cur_queue_map;

	skb = pktgen_alloc_skb(odev, pkt_dev);
	if (!skb) {
		sprintf(pkt_dev->result, "No memory");
		return NULL;
	}

	prefetchw(skb->data);
	skb_reserve(skb, 16);

	/* Reserve for ethernet and IP header */
	eth = skb_push(skb, 14);
	mpls = skb_put(skb, pkt_dev->nr_labels * sizeof(__u32));
	if (pkt_dev->nr_labels)
		mpls_push(mpls, pkt_dev);

	if (pkt_dev->vlan_id != 0xffff) {
		if (pkt_dev->svlan_id != 0xffff) {
			svlan_tci = skb_put(skb, sizeof(__be16));
			*svlan_tci = build_tci(pkt_dev->svlan_id,
					       pkt_dev->svlan_cfi,
					       pkt_dev->svlan_p);
			svlan_encapsulated_proto = skb_put(skb,
							   sizeof(__be16));
			*svlan_encapsulated_proto = htons(ETH_P_8021Q);
		}
		vlan_tci = skb_put(skb, sizeof(__be16));
		*vlan_tci = build_tci(pkt_dev->vlan_id,
				      pkt_dev->vlan_cfi,
				      pkt_dev->vlan_p);
		vlan_encapsulated_proto = skb_put(skb, sizeof(__be16));
		*vlan_encapsulated_proto = htons(ETH_P_IPV6);
	}

	skb_reset_mac_header(skb);
	skb_set_network_header(skb, skb->len);
	iph = skb_put(skb, sizeof(struct ipv6hdr));

	skb_set_transport_header(skb, skb->len);
	udph = skb_put(skb, sizeof(struct udphdr));
	skb_set_queue_mapping(skb, queue_map);
	skb->priority = pkt_dev->skb_priority;

	memcpy(eth, pkt_dev->hh, 12);
	*(__be16 *)&eth[12] = protocol;

	/* Eth + IPh + UDPh + mpls */
	datalen = pkt_dev->cur_pkt_size - 14 -
		  sizeof(struct ipv6hdr) - sizeof(struct udphdr) -
		  pkt_dev->pkt_overhead;

	if (datalen < 0 || datalen < sizeof(struct pktgen_hdr)) {
		datalen = sizeof(struct pktgen_hdr);
		net_info_ratelimited("increased datalen to %d\n", datalen);
	}

	udplen = datalen + sizeof(struct udphdr);
	udph->source = htons(pkt_dev->cur_udp_src);
	udph->dest = htons(pkt_dev->cur_udp_dst);
	udph->len = htons(udplen);
	udph->check = 0;

	*(__be32 *) iph = htonl(0x60000000);	/* Version + flow */

	if (pkt_dev->traffic_class) {
		/* Version + traffic class + flow (0) */
		*(__be32 *)iph |= htonl(0x60000000 | (pkt_dev->traffic_class << 20));
	}

	iph->hop_limit = 32;

	iph->payload_len = htons(udplen);
	iph->nexthdr = IPPROTO_UDP;

	iph->daddr = pkt_dev->cur_in6_daddr;
	iph->saddr = pkt_dev->cur_in6_saddr;

	skb->protocol = protocol;
	skb->dev = odev;
	skb->pkt_type = PACKET_HOST;

	pktgen_finalize_skb(pkt_dev, skb, datalen);

	if (!(pkt_dev->flags & F_UDPCSUM)) {
		skb->ip_summed = CHECKSUM_NONE;
	} else if (odev->features & (NETIF_F_HW_CSUM | NETIF_F_IPV6_CSUM)) {
		skb->ip_summed = CHECKSUM_PARTIAL;
		skb->csum_start = skb_transport_header(skb) - skb->head;
		skb->csum_offset = offsetof(struct udphdr, check);
		udph->check = ~csum_ipv6_magic(&iph->saddr, &iph->daddr,
					       udplen, IPPROTO_UDP, 0);
	} else {
		__wsum csum = skb_checksum(skb, skb_transport_offset(skb),
					   udplen, 0);

		/* add protocol-dependent pseudo-header */
		udph->check = csum_ipv6_magic(&iph->saddr, &iph->daddr,
					      udplen, IPPROTO_UDP, csum);

		if (udph->check == 0)
			udph->check = CSUM_MANGLED_0;
	}

	return skb;
}

static struct sk_buff *fill_packet(struct net_device *odev,
				   struct pktgen_dev *pkt_dev)
{
	if (pkt_dev->flags & F_IPV6)
		return fill_packet_ipv6(odev, pkt_dev);
	else
		return fill_packet_ipv4(odev, pkt_dev);
}

static void pktgen_clear_counters(struct pktgen_dev *pkt_dev)
{
	pkt_dev->seq_num = 1;
	pkt_dev->idle_acc = 0;
	pkt_dev->sofar = 0;
	pkt_dev->tx_bytes = 0;
	pkt_dev->errors = 0;
}
|
|
|
|
|
|
|
|
/* Set up structure for sending pkts, clear counters */
|
|
|
|
|
|
|
|
static void pktgen_run(struct pktgen_thread *t)
|
|
|
|
{
|
2006-03-21 09:18:16 +03:00
|
|
|
struct pktgen_dev *pkt_dev;
|
2005-04-17 02:20:36 +04:00
|
|
|
int started = 0;
|
|
|
|
|
2010-06-21 16:29:14 +04:00
|
|
|
func_enter();
|
2005-04-17 02:20:36 +04:00
|
|
|
|
pktgen: RCU-ify "if_list" to remove lock in next_to_run()
The if_lock()/if_unlock() in next_to_run() adds a significant
overhead, because its called for every packet in busy loop of
pktgen_thread_worker(). (Thomas Graf originally pointed me
at this lock problem).
Removing these two "LOCK" operations should in theory save us approx
16ns (8ns x 2), as illustrated below we do save 16ns when removing
the locks and introducing RCU protection.
Performance data with CLONE_SKB==100000, TX-size=512, rx-usecs=30:
(single CPU performance, ixgbe 10Gbit/s, E5-2630)
* Prev : 5684009 pps --> 175.93ns (1/5684009*10^9)
* RCU-fix: 6272204 pps --> 159.43ns (1/6272204*10^9)
* Diff : +588195 pps --> -16.50ns
To understand this RCU patch, I describe the pktgen thread model
below.
In pktgen there are several kernel threads, but only one CPU runs
each kernel thread. Communication with the kernel threads is done
through thread control flags. This allows a thread to change data
structures at a known synchronization point; see the main thread
function pktgen_thread_worker().
Userspace changes are communicated through proc-file writes. There
are three types of changes: general control changes "pgctrl"
(func:pgctrl_write), thread changes "kpktgend_X"
(func:pktgen_thread_write), and interface config changes "ethX@N"
(func:pktgen_if_write).
Userspace "pgctrl" and "thread" changes are synchronized via the mutex
pktgen_thread_lock, thus only a single userspace instance can run.
The mutex is taken while the packet generator is running, by pgctrl
"start". Thus e.g. "add_device" cannot be invoked while pktgen is
running/started.
All "pgctrl" and all "thread" changes, except thread "add_device",
communicate via the thread control flags. The main problem is the
exception "add_device", which modifies the thread's "if_list" directly.
Fortunately "add_device" cannot be invoked while pktgen is running.
But there exists a race between "rem_device_all" and "add_device"
(which normally doesn't occur, because "rem_device_all" waits 125ms
before returning). Backgrounding "rem_device_all" and running
"add_device" immediately allows the race to occur.
The race affects the thread's list of devices, "if_list". The if_lock
is used for protecting this "if_list". Other readers are given
lock-free access to the list under RCU read sections.
Note, interface config changes (via proc) can occur while pktgen is
running, which worries me a bit. I'm assuming proc_remove() takes
appropriate locks, to ensure no writers exist after proc_remove()
finishes.
I've been running a script exercising the race condition (leading me
to fix the proc_remove order) without any issues. The script also
exercises concurrent proc writes while the interface config is
getting removed.
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Reviewed-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-06-26 15:16:59 +04:00
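The control-flag handshake described above can be sketched in userspace C. This is a hypothetical model, not pktgen's actual definitions: the T_STOP/T_RUN bit values, the struct layout, and the function names request_flags()/sync_point() are all illustrative.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical userspace model of pktgen's thread control flags:
 * a controller ORs request bits into t->control, and the worker
 * consumes them at a known synchronization point in its main loop. */
#define T_STOP (1u << 0)
#define T_RUN  (1u << 1)

struct thread_state {
        uint32_t control;       /* request bits, set by the controller */
        int running;            /* worker-owned state */
};

/* Controller side: request a state change. */
static void request_flags(struct thread_state *t, uint32_t flags)
{
        t->control |= flags;
}

/* Worker side: one pass through the synchronization point. */
static void sync_point(struct thread_state *t)
{
        if (t->control & T_RUN) {
                t->running = 1;
                t->control &= ~T_RUN;   /* acknowledge by clearing the bit */
        }
        if (t->control & T_STOP) {
                t->running = 0;
                t->control &= ~T_STOP;
        }
}
```

In the real module the worker checks these bits between packet bursts in pktgen_thread_worker(), which is why no lock is needed on the data the flags guard.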
        rcu_read_lock();
        list_for_each_entry_rcu(pkt_dev, &t->if_list, list) {

                /*
                 * setup odev and create initial packet.
                 */
                pktgen_setup_inject(pkt_dev);

                if (pkt_dev->odev) {
                        pktgen_clear_counters(pkt_dev);
                        pkt_dev->skb = NULL;
                        pkt_dev->started_at = pkt_dev->next_tx = ktime_get();

                        set_pkt_overhead(pkt_dev);

                        strcpy(pkt_dev->result, "Starting");
                        pkt_dev->running = 1;   /* Cranke yeself! */
                        started++;
                } else
                        strcpy(pkt_dev->result, "Error starting");
        }
        rcu_read_unlock();

        if (started)
                t->control &= ~(T_STOP);
}

static void pktgen_handle_all_threads(struct pktgen_net *pn, u32 flags)
{
        struct pktgen_thread *t;

        mutex_lock(&pktgen_thread_lock);

        list_for_each_entry(t, &pn->pktgen_threads, th_list)
                t->control |= (flags);

        mutex_unlock(&pktgen_thread_lock);
}

static void pktgen_stop_all_threads(struct pktgen_net *pn)
{
        func_enter();

        pktgen_handle_all_threads(pn, T_STOP);
}

static int thread_is_running(const struct pktgen_thread *t)
{
        const struct pktgen_dev *pkt_dev;

        rcu_read_lock();
        list_for_each_entry_rcu(pkt_dev, &t->if_list, list)
                if (pkt_dev->running) {
                        rcu_read_unlock();
                        return 1;
                }
        rcu_read_unlock();
        return 0;
}
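thread_is_running() walks the RCU-protected if_list without taking if_lock. A userspace approximation of that walk, with the caveat that this is a hypothetical sketch: an ordinary singly linked list stands in for the kernel's RCU list (since the point here is the early-exit scan, not the RCU machinery), and struct dev_node / any_running() are illustrative names.

```c
#include <assert.h>
#include <stddef.h>

/* Userspace approximation of thread_is_running(): scan the device
 * list and report whether any device is still transmitting. In the
 * kernel the walk runs under rcu_read_lock() with
 * list_for_each_entry_rcu(), so no lock is taken on this fast path. */
struct dev_node {
        int running;
        struct dev_node *next;
};

static int any_running(const struct dev_node *if_list)
{
        const struct dev_node *d;

        for (d = if_list; d; d = d->next)
                if (d->running)
                        return 1;       /* kernel drops rcu_read_lock() here */
        return 0;
}
```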

static int pktgen_wait_thread_run(struct pktgen_thread *t)
{
        while (thread_is_running(t)) {
pktgen: do not sleep with the thread lock held.
Currently, the process issuing a "start" command on the pktgen procfs
interface acquires the pktgen thread lock and never releases it until
all pktgen threads have completed. This can block indefinitely any
other pktgen command and any (even unrelated) netdevice removal, as
the pktgen netdev notifier acquires the same lock.
The issue is demonstrated by the following script, reported by Matteo:
ip -b - <<'EOF'
link add type dummy
link add type veth
link set dummy0 up
EOF
modprobe pktgen
echo reset >/proc/net/pktgen/pgctrl
{
echo rem_device_all
echo add_device dummy0
} >/proc/net/pktgen/kpktgend_0
echo count 0 >/proc/net/pktgen/dummy0
echo start >/proc/net/pktgen/pgctrl &
sleep 1
rmmod veth
Fix the above by releasing the thread lock around the sleep call.
Additionally, we must prevent racing with a forceful rmmod, as the
thread lock no longer protects against it. Instead, acquire a
self-reference before waiting for any thread. As a side effect, running
rmmod pktgen
while some thread is running now fails with a "module in use" error;
before this patch such a command hung indefinitely.
Note: the issue predates the commit reported in the Fixes tag, but
this fix can't be applied before the mentioned commit.
v1 -> v2:
- no need to check for thread existence after flipping the lock,
pktgen threads are freed only at net exit time
Fixes: 6146e6a43b35 ("[PKTGEN]: Removes thread_{un,}lock() macros.")
Reported-and-tested-by: Matteo Croce <mcroce@redhat.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-06 16:45:03 +03:00
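The fix — drop the shared lock around the sleep so other users can get in — can be sketched with a pthread mutex. This is a hypothetical userspace model: the function names are illustrative, and msleep_interruptible(100) is reduced to the window between unlock and relock, probed here with pthread_mutex_trylock().

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t thread_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for another pktgen command (or the netdev notifier):
 * it only needs the lock briefly, and must not be blocked forever. */
static int other_user_can_enter(void)
{
        if (pthread_mutex_trylock(&thread_lock) != 0)
                return 0;       /* still blocked: the pre-fix behaviour */
        pthread_mutex_unlock(&thread_lock);
        return 1;
}

/* Model of pktgen_wait_thread_run()'s loop body after the fix:
 * release the lock, sleep (here: probe), then re-acquire. */
static int wait_step_with_lock_dropped(void)
{
        int ok;

        pthread_mutex_lock(&thread_lock);
        /* ... thread_is_running() would be checked here ... */
        pthread_mutex_unlock(&thread_lock);     /* the actual fix */
        ok = other_user_can_enter();    /* the msleep_interruptible() slot */
        pthread_mutex_lock(&thread_lock);
        pthread_mutex_unlock(&thread_lock);
        return ok;
}
```

Before the fix the unlock in the middle was missing, so other_user_can_enter() would fail for the whole (potentially unbounded) wait.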
                /* note: 't' will still be around even after the unlock/lock
                 * cycle because pktgen_thread threads are only cleared at
                 * net exit
                 */
                mutex_unlock(&pktgen_thread_lock);
                msleep_interruptible(100);
                mutex_lock(&pktgen_thread_lock);

                if (signal_pending(current))
                        goto signal;
        }
        return 1;
signal:
        return 0;
}

static int pktgen_wait_all_threads_run(struct pktgen_net *pn)
{
        struct pktgen_thread *t;
        int sig = 1;

        /* prevent from racing with rmmod */
        if (!try_module_get(THIS_MODULE))
                return sig;

        mutex_lock(&pktgen_thread_lock);

        list_for_each_entry(t, &pn->pktgen_threads, th_list) {
                sig = pktgen_wait_thread_run(t);
                if (sig == 0)
                        break;
        }

        if (sig == 0)
                list_for_each_entry(t, &pn->pktgen_threads, th_list)
                        t->control |= (T_STOP);

        mutex_unlock(&pktgen_thread_lock);
        module_put(THIS_MODULE);
        return sig;
}

static void pktgen_run_all_threads(struct pktgen_net *pn)
{
        func_enter();

        pktgen_handle_all_threads(pn, T_RUN);

        /* Propagate thread->control */
        schedule_timeout_interruptible(msecs_to_jiffies(125));

        pktgen_wait_all_threads_run(pn);
}

static void pktgen_reset_all_threads(struct pktgen_net *pn)
{
        func_enter();

        pktgen_handle_all_threads(pn, T_REMDEVALL);

        /* Propagate thread->control */
        schedule_timeout_interruptible(msecs_to_jiffies(125));

        pktgen_wait_all_threads_run(pn);
}

static void show_results(struct pktgen_dev *pkt_dev, int nr_frags)
{
        __u64 bps, mbps, pps;
        char *p = pkt_dev->result;
        ktime_t elapsed = ktime_sub(pkt_dev->stopped_at,
                                    pkt_dev->started_at);
        ktime_t idle = ns_to_ktime(pkt_dev->idle_acc);

        p += sprintf(p, "OK: %llu(c%llu+d%llu) usec, %llu (%dbyte,%dfrags)\n",
                     (unsigned long long)ktime_to_us(elapsed),
                     (unsigned long long)ktime_to_us(ktime_sub(elapsed, idle)),
                     (unsigned long long)ktime_to_us(idle),
                     (unsigned long long)pkt_dev->sofar,
                     pkt_dev->cur_pkt_size, nr_frags);

        pps = div64_u64(pkt_dev->sofar * NSEC_PER_SEC,
                        ktime_to_ns(elapsed));

pktgen: Add output for imix results
The bps for imix mode is calculated by:
sum(imix_entry.size) / time_elapsed
The actual counts of each imix_entry are displayed under the
"Current:" section of the interface output in the following format:
imix_size_counts: size_1,count_1 size_2,count_2 ... size_n,count_n
Example (count = 200000):
imix_weights: 256,1 859,3 205,2
imix_size_counts: 256,32082 859,99796 205,68122
Result: OK: 17992362(c17964678+d27684) usec, 200000 (859byte,0frags)
11115pps 47Mb/sec (47977140bps) errors: 0
Summary of changes:
Calculate bps based on imix counters when in IMIX mode.
Add output for IMIX counters.
Signed-off-by: Nick Richardson <richardsonnick@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-10 22:01:55 +03:00
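The arithmetic above can be checked with plain integer math. This is a userspace sketch: packets_per_sec() and imix_bps() mirror the div64_u64() computations in show_results(), and struct imix_entry is a reduced stand-in for the kernel's struct imix_pkt.

```c
#include <assert.h>
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ull

struct imix_entry { uint64_t size, count; };

/* pps = packets_sent * NSEC_PER_SEC / elapsed_ns */
static uint64_t packets_per_sec(uint64_t sofar, uint64_t elapsed_ns)
{
        return sofar * NSEC_PER_SEC / elapsed_ns;
}

/* IMIX bps = sum(size_i * count_i) * 8 * NSEC_PER_SEC / elapsed_ns */
static uint64_t imix_bps(const struct imix_entry *e, int n,
                         uint64_t elapsed_ns)
{
        uint64_t bytes = 0;
        int i;

        for (i = 0; i < n; i++)
                bytes += e[i].size * e[i].count;
        return bytes * 8 * NSEC_PER_SEC / elapsed_ns;
}
```

With the commit's example (200000 packets in 17992362 usec; sizes 256/859/205 with counts 32082/99796/68122) this reproduces 11115 pps and approximately the quoted 47977140 bps; the last digits depend on the exact nanosecond elapsed time, which the printed usec figure truncates.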
        if (pkt_dev->n_imix_entries > 0) {
                int i;
                struct imix_pkt *entry;

                bps = 0;
                for (i = 0; i < pkt_dev->n_imix_entries; i++) {
                        entry = &pkt_dev->imix_entries[i];
                        bps += entry->size * entry->count_so_far;
                }
                bps = div64_u64(bps * 8 * NSEC_PER_SEC, ktime_to_ns(elapsed));
        } else {
                bps = pps * 8 * pkt_dev->cur_pkt_size;
        }

        mbps = bps;
        do_div(mbps, 1000000);
        p += sprintf(p, "  %llupps %lluMb/sec (%llubps) errors: %llu",
                     (unsigned long long)pps,
                     (unsigned long long)mbps,
                     (unsigned long long)bps,
                     (unsigned long long)pkt_dev->errors);
}

/* Set stopped-at timer, remove from running list, do counters & statistics */
static int pktgen_stop_device(struct pktgen_dev *pkt_dev)
{
        int nr_frags = pkt_dev->skb ? skb_shinfo(pkt_dev->skb)->nr_frags : -1;

        if (!pkt_dev->running) {
                pr_warn("interface: %s is already stopped\n",
                        pkt_dev->odevname);
                return -EINVAL;
        }

        pkt_dev->running = 0;
        kfree_skb(pkt_dev->skb);
        pkt_dev->skb = NULL;
        pkt_dev->stopped_at = ktime_get();

        show_results(pkt_dev, nr_frags);

        return 0;
}

static struct pktgen_dev *next_to_run(struct pktgen_thread *t)
{
        struct pktgen_dev *pkt_dev, *best = NULL;

	rcu_read_lock();
	list_for_each_entry_rcu(pkt_dev, &t->if_list, list) {
		if (!pkt_dev->running)
			continue;
		if (best == NULL)
			best = pkt_dev;
		else if (ktime_compare(pkt_dev->next_tx, best->next_tx) < 0)
			best = pkt_dev;
	}
	rcu_read_unlock();

	return best;
}

static void pktgen_stop(struct pktgen_thread *t)
{
	struct pktgen_dev *pkt_dev;

	func_enter();

	rcu_read_lock();
	list_for_each_entry_rcu(pkt_dev, &t->if_list, list) {
		pktgen_stop_device(pkt_dev);
	}
	rcu_read_unlock();
}

/*
 * one of our devices needs to be removed - find it
 * and remove it
 */
static void pktgen_rem_one_if(struct pktgen_thread *t)
{
	struct list_head *q, *n;
	struct pktgen_dev *cur;

	func_enter();

	list_for_each_safe(q, n, &t->if_list) {
		cur = list_entry(q, struct pktgen_dev, list);

		if (!cur->removal_mark)
			continue;

		kfree_skb(cur->skb);
		cur->skb = NULL;

		pktgen_remove_device(t, cur);

		break;
	}
}

static void pktgen_rem_all_ifs(struct pktgen_thread *t)
{
	struct list_head *q, *n;
	struct pktgen_dev *cur;

	func_enter();

	/* Remove all devices, free mem */

	list_for_each_safe(q, n, &t->if_list) {
		cur = list_entry(q, struct pktgen_dev, list);

		kfree_skb(cur->skb);
		cur->skb = NULL;

		pktgen_remove_device(t, cur);
	}
}

static void pktgen_rem_thread(struct pktgen_thread *t)
{
	/* Remove from the thread list */
	remove_proc_entry(t->tsk->comm, t->net->proc_dir);
}

static void pktgen_resched(struct pktgen_dev *pkt_dev)
{
	ktime_t idle_start = ktime_get();

	schedule();
	pkt_dev->idle_acc += ktime_to_ns(ktime_sub(ktime_get(), idle_start));
}

static void pktgen_wait_for_skb(struct pktgen_dev *pkt_dev)
{
	ktime_t idle_start = ktime_get();

	while (refcount_read(&(pkt_dev->skb->users)) != 1) {
		if (signal_pending(current))
			break;

		if (need_resched())
			pktgen_resched(pkt_dev);
		else
			cpu_relax();
	}
	pkt_dev->idle_acc += ktime_to_ns(ktime_sub(ktime_get(), idle_start));
}

static void pktgen_xmit(struct pktgen_dev *pkt_dev)
{
locking/atomics: COCCINELLE/treewide: Convert trivial ACCESS_ONCE() patterns to READ_ONCE()/WRITE_ONCE()
Please do not apply this to mainline directly, instead please re-run the
coccinelle script shown below and apply its output.
For several reasons, it is desirable to use {READ,WRITE}_ONCE() in
preference to ACCESS_ONCE(), and new code is expected to use one of the
former. So far, there's been no reason to change most existing uses of
ACCESS_ONCE(), as these aren't harmful, and changing them results in
churn.
However, for some features, the read/write distinction is critical to
correct operation. To distinguish these cases, separate read/write
accessors must be used. This patch migrates (most) remaining
ACCESS_ONCE() instances to {READ,WRITE}_ONCE(), using the following
coccinelle script:
----
// Convert trivial ACCESS_ONCE() uses to equivalent READ_ONCE() and
// WRITE_ONCE()
// $ make coccicheck COCCI=/home/mark/once.cocci SPFLAGS="--include-headers" MODE=patch
virtual patch
@ depends on patch @
expression E1, E2;
@@
- ACCESS_ONCE(E1) = E2
+ WRITE_ONCE(E1, E2)
@ depends on patch @
expression E;
@@
- ACCESS_ONCE(E)
+ READ_ONCE(E)
----
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: davem@davemloft.net
Cc: linux-arch@vger.kernel.org
Cc: mpe@ellerman.id.au
Cc: shuah@kernel.org
Cc: snitzer@redhat.com
Cc: thor.thayer@linux.intel.com
Cc: tj@kernel.org
Cc: viro@zeniv.linux.org.uk
Cc: will.deacon@arm.com
Link: http://lkml.kernel.org/r/1508792849-3115-19-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-10-24 00:07:29 +03:00
	unsigned int burst = READ_ONCE(pkt_dev->burst);
	struct net_device *odev = pkt_dev->odev;
	struct netdev_queue *txq;
	struct sk_buff *skb;
	int ret;

	/* If device is offline, then don't send */
	if (unlikely(!netif_running(odev) || !netif_carrier_ok(odev))) {
		pktgen_stop_device(pkt_dev);
		return;
	}

	/* This is max DELAY, this has special meaning of
	 * "never transmit"
	 */
	if (unlikely(pkt_dev->delay == ULLONG_MAX)) {
		pkt_dev->next_tx = ktime_add_ns(ktime_get(), ULONG_MAX);
		return;
	}

	/* If no skb or clone count exhausted then get new one */
	if (!pkt_dev->skb || (pkt_dev->last_ok &&
			      ++pkt_dev->clone_count >= pkt_dev->clone_skb)) {
		/* build a new pkt */
		kfree_skb(pkt_dev->skb);

		pkt_dev->skb = fill_packet(odev, pkt_dev);
		if (pkt_dev->skb == NULL) {
			pr_err("ERROR: couldn't allocate skb in fill_packet\n");
			schedule();
			pkt_dev->clone_count--;	/* back out increment, OOM */
			return;
		}
		pkt_dev->last_pkt_size = pkt_dev->skb->len;
		pkt_dev->clone_count = 0;	/* reset counter */
	}
	if (pkt_dev->delay && pkt_dev->last_ok)
		spin(pkt_dev, pkt_dev->next_tx);

	if (pkt_dev->xmit_mode == M_NETIF_RECEIVE) {
		skb = pkt_dev->skb;
		skb->protocol = eth_type_trans(skb, skb->dev);
		refcount_add(burst, &skb->users);
		local_bh_disable();
		do {
			ret = netif_receive_skb(skb);
			if (ret == NET_RX_DROP)
				pkt_dev->errors++;
			pkt_dev->sofar++;
			pkt_dev->seq_num++;
			if (refcount_read(&skb->users) != burst) {
				/* skb was queued by rps/rfs or taps,
				 * so cannot reuse this skb
				 */
				WARN_ON(refcount_sub_and_test(burst - 1, &skb->users));
				/* get out of the loop and wait
				 * until skb is consumed
				 */
				break;
			}
			/* skb was 'freed' by stack, so clean few
			 * bits and reuse it
			 */
			skb_reset_redirect(skb);
		} while (--burst > 0);
		goto out; /* Skips xmit_mode M_START_XMIT */
	} else if (pkt_dev->xmit_mode == M_QUEUE_XMIT) {
		local_bh_disable();
		refcount_inc(&pkt_dev->skb->users);

		ret = dev_queue_xmit(pkt_dev->skb);
		switch (ret) {
		case NET_XMIT_SUCCESS:
			pkt_dev->sofar++;
			pkt_dev->seq_num++;
			pkt_dev->tx_bytes += pkt_dev->last_pkt_size;
			break;
		case NET_XMIT_DROP:
		case NET_XMIT_CN:
			/* These are all valid return codes for a qdisc but
			 * indicate packets are being dropped or will likely
			 * be dropped soon.
			 */
		case NETDEV_TX_BUSY:
			/* qdisc may call dev_hard_start_xmit directly in cases
			 * where no queues exist e.g. loopback device, virtual
			 * devices, etc. In this case we need to handle
			 * NETDEV_TX_ codes.
			 */
		default:
			pkt_dev->errors++;
			net_info_ratelimited("%s xmit error: %d\n",
					     pkt_dev->odevname, ret);
			break;
		}
		goto out;
	}
	txq = skb_get_tx_queue(odev, pkt_dev->skb);

	local_bh_disable();

	HARD_TX_LOCK(odev, txq, smp_processor_id());

	if (unlikely(netif_xmit_frozen_or_drv_stopped(txq))) {
		pkt_dev->last_ok = 0;
		goto unlock;
	}
	refcount_add(burst, &pkt_dev->skb->users);

xmit_more:
	ret = netdev_start_xmit(pkt_dev->skb, odev, txq, --burst > 0);

	switch (ret) {
	case NETDEV_TX_OK:
		pkt_dev->last_ok = 1;
		pkt_dev->sofar++;
		pkt_dev->seq_num++;
		pkt_dev->tx_bytes += pkt_dev->last_pkt_size;
		if (burst > 0 && !netif_xmit_frozen_or_drv_stopped(txq))
			goto xmit_more;
		break;
	case NET_XMIT_DROP:
	case NET_XMIT_CN:
		/* skb has been consumed */
		pkt_dev->errors++;
		break;
	default: /* Drivers are not supposed to return other values! */
		net_info_ratelimited("%s xmit error: %d\n",
				     pkt_dev->odevname, ret);
		pkt_dev->errors++;
		fallthrough;
	case NETDEV_TX_BUSY:
		/* Retry it next time */
		refcount_dec(&(pkt_dev->skb->users));
		pkt_dev->last_ok = 0;
	}
	if (unlikely(burst))
		WARN_ON(refcount_sub_and_test(burst, &pkt_dev->skb->users));
unlock:
	HARD_TX_UNLOCK(odev, txq);

out:
	local_bh_enable();

	/* If pkt_dev->count is zero, then run forever */
	if ((pkt_dev->count != 0) && (pkt_dev->sofar >= pkt_dev->count)) {
		pktgen_wait_for_skb(pkt_dev);

		/* Done with this */
		pktgen_stop_device(pkt_dev);
	}
}
/*
 * Main loop of the thread goes here
 */
static int pktgen_thread_worker(void *arg)
{
	struct pktgen_thread *t = arg;
	struct pktgen_dev *pkt_dev = NULL;
	int cpu = t->cpu;

	WARN_ON(smp_processor_id() != cpu);

	init_waitqueue_head(&t->queue);
	complete(&t->start_done);

	pr_debug("starting pktgen/%d: pid=%d\n", cpu, task_pid_nr(current));

	set_freezable();

	while (!kthread_should_stop()) {
		pkt_dev = next_to_run(t);

		if (unlikely(!pkt_dev && t->control == 0)) {
			if (t->net->pktgen_exiting)
				break;
			wait_event_interruptible_timeout(t->queue,
							 t->control != 0,
							 HZ/10);
			try_to_freeze();
			continue;
		}

		if (likely(pkt_dev)) {
			pktgen_xmit(pkt_dev);

			if (need_resched())
				pktgen_resched(pkt_dev);
			else
				cpu_relax();
		}

		if (t->control & T_STOP) {
			pktgen_stop(t);
			t->control &= ~(T_STOP);
		}

		if (t->control & T_RUN) {
			pktgen_run(t);
			t->control &= ~(T_RUN);
		}

		if (t->control & T_REMDEVALL) {
			pktgen_rem_all_ifs(t);
			t->control &= ~(T_REMDEVALL);
		}

		if (t->control & T_REMDEV) {
			pktgen_rem_one_if(t);
			t->control &= ~(T_REMDEV);
		}

		try_to_freeze();
	}

	pr_debug("%s stopping all device\n", t->tsk->comm);
	pktgen_stop(t);

	pr_debug("%s removing all device\n", t->tsk->comm);
	pktgen_rem_all_ifs(t);

	pr_debug("%s removing thread\n", t->tsk->comm);
	pktgen_rem_thread(t);

	return 0;
}
static struct pktgen_dev *pktgen_find_dev(struct pktgen_thread *t,
					  const char *ifname, bool exact)
{
	struct pktgen_dev *p, *pkt_dev = NULL;
	size_t len = strlen(ifname);
	rcu_read_lock();
	list_for_each_entry_rcu(p, &t->if_list, list)
		if (strncmp(p->odevname, ifname, len) == 0) {
			if (p->odevname[len]) {
				if (exact || p->odevname[len] != '@')
					continue;
			}
			pkt_dev = p;
			break;
		}

	rcu_read_unlock();
	pr_debug("find_dev(%s) returning %p\n", ifname, pkt_dev);
	return pkt_dev;
}

/*
 * Adds a dev at front of if_list.
 */
static int add_dev_to_thread(struct pktgen_thread *t,
			     struct pktgen_dev *pkt_dev)
{
	int rv = 0;

	/* This function cannot be called concurrently, as it's called
	 * under the pktgen_thread_lock mutex, but it can run from
	 * userspace on another CPU than the kthread.  The if_lock()
	 * is used here to sync with concurrent instances of
	 * _rem_dev_from_if_list() invoked via kthread, which is also
	 * updating the if_list.
	 */
	if_lock(t);

	if (pkt_dev->pg_thread) {
		pr_err("ERROR: already assigned to a thread\n");
		rv = -EBUSY;
		goto out;
	}

	pkt_dev->running = 0;
	pkt_dev->pg_thread = t;
	list_add_rcu(&pkt_dev->list, &t->if_list);

out:
	if_unlock(t);
	return rv;
}

/* Called under thread lock */
static int pktgen_add_device(struct pktgen_thread *t, const char *ifname)
{
	struct pktgen_dev *pkt_dev;
	int err;
	int node = cpu_to_node(t->cpu);

	/* We don't allow a device to be on several threads */
	pkt_dev = __pktgen_NN_threads(t->net, ifname, FIND);
	if (pkt_dev) {
		pr_err("ERROR: interface already used\n");
		return -EBUSY;
	}

	pkt_dev = kzalloc_node(sizeof(struct pktgen_dev), GFP_KERNEL, node);
	if (!pkt_dev)
		return -ENOMEM;

	strcpy(pkt_dev->odevname, ifname);
	pkt_dev->flows = vzalloc_node(array_size(MAX_CFLOWS,
						 sizeof(struct flow_state)),
				      node);
	if (pkt_dev->flows == NULL) {
		kfree(pkt_dev);
		return -ENOMEM;
	}

	pkt_dev->removal_mark = 0;
	pkt_dev->nfrags = 0;
	pkt_dev->delay = pg_delay_d;
	pkt_dev->count = pg_count_d;
	pkt_dev->sofar = 0;
	pkt_dev->udp_src_min = 9;	/* sink port */
	pkt_dev->udp_src_max = 9;
	pkt_dev->udp_dst_min = 9;
	pkt_dev->udp_dst_max = 9;
	pkt_dev->vlan_p = 0;
	pkt_dev->vlan_cfi = 0;
	pkt_dev->vlan_id = 0xffff;
	pkt_dev->svlan_p = 0;
	pkt_dev->svlan_cfi = 0;
	pkt_dev->svlan_id = 0xffff;
	pkt_dev->burst = 1;
	pkt_dev->node = NUMA_NO_NODE;

	err = pktgen_setup_dev(t->net, pkt_dev, ifname);
	if (err)
		goto out1;
	if (pkt_dev->odev->priv_flags & IFF_TX_SKB_SHARING)
		pkt_dev->clone_skb = pg_clone_skb_d;

	pkt_dev->entry = proc_create_data(ifname, 0600, t->net->proc_dir,
					  &pktgen_if_proc_ops, pkt_dev);
	if (!pkt_dev->entry) {
		pr_err("cannot create %s/%s procfs entry\n",
		       PG_PROC_DIR, ifname);
		err = -EINVAL;
		goto out2;
	}
#ifdef CONFIG_XFRM
	pkt_dev->ipsmode = XFRM_MODE_TRANSPORT;
	pkt_dev->ipsproto = IPPROTO_ESP;

	/* xfrm tunnel mode needs an additional dst to extract the outer
	 * ip header protocol/ttl/id fields, so create a phony one here
	 * instead of looking up a valid rt, which would definitely hurt
	 * performance in this situation.
	 */
	pkt_dev->dstops.family = AF_INET;
	pkt_dev->xdst.u.dst.dev = pkt_dev->odev;
	dst_init_metrics(&pkt_dev->xdst.u.dst, pktgen_dst_metrics, false);
	pkt_dev->xdst.child = &pkt_dev->xdst.u.dst;
	pkt_dev->xdst.u.dst.ops = &pkt_dev->dstops;
#endif

	return add_dev_to_thread(t, pkt_dev);
out2:
	netdev_put(pkt_dev->odev, &pkt_dev->dev_tracker);
out1:
#ifdef CONFIG_XFRM
	free_SAs(pkt_dev);
#endif
	vfree(pkt_dev->flows);
	kfree(pkt_dev);
	return err;
}

static int __net_init pktgen_create_thread(int cpu, struct pktgen_net *pn)
{
	struct pktgen_thread *t;
	struct proc_dir_entry *pe;
	struct task_struct *p;

	t = kzalloc_node(sizeof(struct pktgen_thread), GFP_KERNEL,
			 cpu_to_node(cpu));
	if (!t) {
		pr_err("ERROR: out of memory, can't create new thread\n");
		return -ENOMEM;
	}

	mutex_init(&t->if_lock);
	t->cpu = cpu;

	INIT_LIST_HEAD(&t->if_list);

	list_add_tail(&t->th_list, &pn->pktgen_threads);
	init_completion(&t->start_done);

	p = kthread_create_on_node(pktgen_thread_worker,
				   t,
				   cpu_to_node(cpu),
				   "kpktgend_%d", cpu);
	if (IS_ERR(p)) {
		pr_err("kthread_create_on_node() failed for cpu %d\n", t->cpu);
		list_del(&t->th_list);
		kfree(t);
		return PTR_ERR(p);
	}
	kthread_bind(p, cpu);
	t->tsk = p;

	pe = proc_create_data(t->tsk->comm, 0600, pn->proc_dir,
			      &pktgen_thread_proc_ops, t);
	if (!pe) {
		pr_err("cannot create %s/%s procfs entry\n",
		       PG_PROC_DIR, t->tsk->comm);
		kthread_stop(p);
		list_del(&t->th_list);
		kfree(t);
		return -EINVAL;
	}

	t->net = pn;
	get_task_struct(p);
	wake_up_process(p);
	wait_for_completion(&t->start_done);

	return 0;
}

/*
 * Removes a device from the thread if_list.
 */
static void _rem_dev_from_if_list(struct pktgen_thread *t,
				  struct pktgen_dev *pkt_dev)
{
	struct list_head *q, *n;
	struct pktgen_dev *p;

	if_lock(t);
	list_for_each_safe(q, n, &t->if_list) {
		p = list_entry(q, struct pktgen_dev, list);
		if (p == pkt_dev)
			list_del_rcu(&p->list);
	}
pktgen: RCU-ify "if_list" to remove lock in next_to_run()
The if_lock()/if_unlock() in next_to_run() adds a significant
overhead, because its called for every packet in busy loop of
pktgen_thread_worker(). (Thomas Graf originally pointed me
at this lock problem).
Removing these two "LOCK" operations should in theory save us approx
16ns (8ns x 2), as illustrated below we do save 16ns when removing
the locks and introducing RCU protection.
Performance data with CLONE_SKB==100000, TX-size=512, rx-usecs=30:
(single CPU performance, ixgbe 10Gbit/s, E5-2630)
* Prev : 5684009 pps --> 175.93ns (1/5684009*10^9)
* RCU-fix: 6272204 pps --> 159.43ns (1/6272204*10^9)
* Diff : +588195 pps --> -16.50ns
To understand this RCU patch, I describe the pktgen thread model
below.
In pktgen there is several kernel threads, but there is only one CPU
running each kernel thread. Communication with the kernel threads are
done through some thread control flags. This allow the thread to
change data structures at a know synchronization point, see main
thread func pktgen_thread_worker().
Userspace changes are communicated through proc-file writes. There
are three types of changes, general control changes "pgctrl"
(func:pgctrl_write), thread changes "kpktgend_X"
(func:pktgen_thread_write), and interface config changes "etcX@N"
(func:pktgen_if_write).
Userspace "pgctrl" and "thread" changes are synchronized via the mutex
pktgen_thread_lock, thus only a single userspace instance can run.
The mutex is taken while the packet generator is running, by pgctrl
"start". Thus e.g. "add_device" cannot be invoked when pktgen is
running/started.
All "pgctrl" and all "thread" changes, except thread "add_device",
communicate via the thread control flags. The main problem is the
exception "add_device", which modifies a thread's "if_list" directly.
Fortunately, "add_device" cannot be invoked while pktgen is running.
But there exists a race between "rem_device_all" and "add_device"
(which normally doesn't occur, because "rem_device_all" waits 125ms
before returning). Backgrounding "rem_device_all" and running
"add_device" immediately allows the race to occur.
The race affects a thread's list of devices, "if_list". The if_lock
is used for protecting this "if_list". Other readers are given
lock-free access to the list under RCU read sections.
Note, interface config changes (via proc) can occur while pktgen is
running, which worries me a bit. I'm assuming proc_remove() takes
appropriate locks, to ensure no writers exist after proc_remove()
finishes.
I've been running a script exercising the race condition (leading me
to fix the proc_remove order) without any issues. The script also
exercises concurrent proc writes while the interface config is
being removed.
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Reviewed-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-06-26 15:16:59 +04:00
	if_unlock(t);
}

static int pktgen_remove_device(struct pktgen_thread *t,
				struct pktgen_dev *pkt_dev)
{
	pr_debug("remove_device pkt_dev=%p\n", pkt_dev);

	if (pkt_dev->running) {
		pr_warn("WARNING: trying to remove a running interface, stopping it now\n");
		pktgen_stop_device(pkt_dev);
	}

	/* Dis-associate from the interface */

	if (pkt_dev->odev) {
		netdev_put(pkt_dev->odev, &pkt_dev->dev_tracker);
		pkt_dev->odev = NULL;
	}

	/* Remove proc before if_list entry, because add_device uses
	 * list to determine if interface already exist, avoid race
	 * with proc_create_data() */
	proc_remove(pkt_dev->entry);

	/* And update the thread if_list */
	_rem_dev_from_if_list(t, pkt_dev);

#ifdef CONFIG_XFRM
	free_SAs(pkt_dev);
#endif
	vfree(pkt_dev->flows);
	if (pkt_dev->page)
		put_page(pkt_dev->page);
	kfree_rcu(pkt_dev, rcu);
	return 0;
}

static int __net_init pg_net_init(struct net *net)
{
	struct pktgen_net *pn = net_generic(net, pg_net_id);
	struct proc_dir_entry *pe;
	int cpu, ret = 0;

	pn->net = net;
	INIT_LIST_HEAD(&pn->pktgen_threads);
	pn->pktgen_exiting = false;
	pn->proc_dir = proc_mkdir(PG_PROC_DIR, pn->net->proc_net);
	if (!pn->proc_dir) {
		pr_warn("cannot create /proc/net/%s\n", PG_PROC_DIR);
		return -ENODEV;
	}
	pe = proc_create(PGCTRL, 0600, pn->proc_dir, &pktgen_proc_ops);
	if (pe == NULL) {
		pr_err("cannot create %s procfs entry\n", PGCTRL);
		ret = -EINVAL;
		goto remove;
	}

	for_each_online_cpu(cpu) {
		int err;

		err = pktgen_create_thread(cpu, pn);
		if (err)
			pr_warn("Cannot create thread for cpu %d (%d)\n",
				   cpu, err);
	}

	if (list_empty(&pn->pktgen_threads)) {
		pr_err("Initialization failed for all threads\n");
		ret = -ENODEV;
		goto remove_entry;
	}

	return 0;

remove_entry:
	remove_proc_entry(PGCTRL, pn->proc_dir);
remove:
	remove_proc_entry(PG_PROC_DIR, pn->net->proc_net);
	return ret;
}

static void __net_exit pg_net_exit(struct net *net)
{
	struct pktgen_net *pn = net_generic(net, pg_net_id);
	struct pktgen_thread *t;
	struct list_head *q, *n;
	LIST_HEAD(list);

	/* Stop all interfaces & threads */
	pn->pktgen_exiting = true;

	mutex_lock(&pktgen_thread_lock);
	list_splice_init(&pn->pktgen_threads, &list);
	mutex_unlock(&pktgen_thread_lock);

	list_for_each_safe(q, n, &list) {
		t = list_entry(q, struct pktgen_thread, th_list);
		list_del(&t->th_list);
		kthread_stop(t->tsk);
		put_task_struct(t->tsk);
		kfree(t);
	}

	remove_proc_entry(PGCTRL, pn->proc_dir);
	remove_proc_entry(PG_PROC_DIR, pn->net->proc_net);
}

static struct pernet_operations pg_net_ops = {
	.init = pg_net_init,
	.exit = pg_net_exit,
	.id   = &pg_net_id,
	.size = sizeof(struct pktgen_net),
};

static int __init pg_init(void)
{
	int ret = 0;

	pr_info("%s", version);
	ret = register_pernet_subsys(&pg_net_ops);
	if (ret)
		return ret;
	ret = register_netdevice_notifier(&pktgen_notifier_block);
	if (ret)
		unregister_pernet_subsys(&pg_net_ops);

	return ret;
}

static void __exit pg_cleanup(void)
{
	unregister_netdevice_notifier(&pktgen_notifier_block);
	unregister_pernet_subsys(&pg_net_ops);
	/* Don't need rcu_barrier() due to use of kfree_rcu() */
}

module_init(pg_init);
module_exit(pg_cleanup);

MODULE_AUTHOR("Robert Olsson <robert.olsson@its.uu.se>");
MODULE_DESCRIPTION("Packet Generator tool");
MODULE_LICENSE("GPL");
MODULE_VERSION(VERSION);
module_param(pg_count_d, int, 0);
MODULE_PARM_DESC(pg_count_d, "Default number of packets to inject");
module_param(pg_delay_d, int, 0);
MODULE_PARM_DESC(pg_delay_d, "Default delay between packets (nanoseconds)");
module_param(pg_clone_skb_d, int, 0);
MODULE_PARM_DESC(pg_clone_skb_d, "Default number of copies of the same packet");
module_param(debug, int, 0);
MODULE_PARM_DESC(debug, "Enable debugging of pktgen module");