In ethtool_get_phy_stats(), the phydev variable is set to
dev->phydev, but dev->phydev is still used afterwards. Replace the
remaining dev->phydev uses with phydev.
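A minimal, hypothetical sketch of the pattern (the helper call below is a
placeholder for illustration, not the actual ethtool code):

  struct phy_device *phydev = dev->phydev;

  if (!phydev)
          return -EOPNOTSUPP;

  /* was: some_phy_helper(dev->phydev, data); */
  some_phy_helper(phydev, data);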
Signed-off-by: Tom Rix <trix@redhat.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert the PCS selection to use mac_select_pcs, which allows the PCS
to perform any validation it needs.
We must use separate phylink_pcs instances for the USX and SGMII PCS,
rather than just changing the "ops" pointer before re-setting it to
phylink as this interface queries the PCS, rather than requesting it
to be changed.
Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet suggested to allow users to modify max GRO packet size.
We have seen GRO being disabled by users of appliances (such as
wifi access points) because of claimed bufferbloat issues,
or workarounds in sch_cake to split GRO/GSO packets.
Instead of disabling GRO completely, one can choose to limit
the maximum packet size of GRO packets, depending on their
latency constraints.
This patch adds a per device gro_max_size attribute
that can be changed with ip link command.
ip link set dev eth0 gro_max_size 16000
Suggested-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Coco Li <lixiaoyan@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When multiple sockets using the SOF_TIMESTAMPING_BIND_PHC flag received
a packet with a hardware timestamp (e.g. multiple PTP instances in
different PTP domains using the UDPv4/v6 multicast or L2 transport),
the timestamps received on some sockets were corrupted due to repeated
conversion of the same timestamp (by the same or different vclocks).
Fix ptp_convert_timestamp() to not modify the shared skb timestamp
and return the converted timestamp as a ktime_t instead. If the
conversion fails, return 0 to not confuse the application with
timestamps corresponding to an unexpected PHC.
Fixes: d7c0882655 ("net: socket: support hardware timestamp conversion to PHC bound")
Signed-off-by: Miroslav Lichvar <mlichvar@redhat.com>
Cc: Yangbo Lu <yangbo.lu@nxp.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
As discussed during review here:
https://patchwork.kernel.org/project/netdevbpf/patch/20220105132141.2648876-3-vladimir.oltean@nxp.com/
we should inform developers about pitfalls of concurrent access to the
boolean properties of dsa_switch and dsa_port, now that they've been
converted to bit fields. No other measure than a comment needs to be
taken, since the code paths that update these bit fields are not
concurrent with each other.
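To illustrate the pitfall with a generic C sketch (not DSA code): a
bit-field store is a read-modify-write of the containing word, so two
unserialized writers can lose each other's update.

  struct example_flags {
          unsigned int setup:1;
          unsigned int vlan_filtering:1;  /* shares storage with 'setup' */
  };

  static void set_setup(struct example_flags *f)
  {
          /* Compiles to: load word, set bit, store word back. A write to
           * 'vlan_filtering' from another context between the load and
           * the store would be silently overwritten, hence the comment
           * about non-concurrent updates. */
          f->setup = 1;
  }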
Suggested-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This is a cosmetic incremental fixup to commits
7787ff7763 ("net: dsa: merge all bools of struct dsa_switch into a single u32")
bde82f389a ("net: dsa: merge all bools of struct dsa_port into a single u8")
The desire to make this change was enunciated after posting these
patches here:
https://patchwork.kernel.org/project/netdevbpf/cover/20220105132141.2648876-1-vladimir.oltean@nxp.com/
but due to a slight timing overlap (message posted at 2:28 p.m. UTC,
merge commit is at 2:46 p.m. UTC), that comment was missed and the
changes were applied as-is.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean says:
====================
DSA initialization cleanups
These patches contain miscellaneous work that makes the DSA init code
path symmetric with the teardown path, and some additional patches
carried by Ansuel Smith for his register access over Ethernet work, but
those patches can be applied as-is too.
https://patchwork.kernel.org/project/netdevbpf/patch/20211214224409.5770-3-ansuelsmth@gmail.com/
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
It is said that as soon as a network interface is registered, all its
resources should have already been prepared, so that it is available for
sending and receiving traffic. One of the resources needed by a DSA
slave interface is the master.
dsa_tree_setup
-> dsa_tree_setup_ports
   -> dsa_port_setup
      -> dsa_slave_create
         -> register_netdevice
-> dsa_tree_setup_master
   -> dsa_master_setup
      -> sets up master->dsa_ptr, which enables reception
Therefore, there is a short period of time after register_netdevice()
during which the master isn't prepared to pass traffic to the DSA layer
(master->dsa_ptr is checked by eth_type_trans). The same applies during
unregistration: there is a time frame in which packets might be missed.
Note that this change opens us to another race: dsa_master_find_slave()
will get invoked potentially earlier than the slave creation, and later
than the slave deletion. Since dp->slave starts off as a NULL pointer,
the earlier calls aren't a problem, but the later calls are. To avoid
use-after-free, we should zeroize dp->slave before calling
dsa_slave_destroy().
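As a sketch, the teardown order described above looks like this
(simplified, not the exact diff):

  /* Let concurrent dsa_master_find_slave() callers see NULL instead of
   * a soon-to-be-freed net_device, then destroy the slave. */
  dp->slave = NULL;
  dsa_slave_destroy(slave_dev);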
In practice I cannot really test real life improvements brought by this
change, since in my systems, netdevice creation races with PHY autoneg
which takes a few seconds to complete, and that masks quite a few races.
Effects might be noticeable in a setup with fixed links all the way to
an external system.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
After commit a57d8c217a ("net: dsa: flush switchdev workqueue before
tearing down CPU/DSA ports"), the port setup and teardown procedure
became asymmetric.
The fact of the matter is that user ports need the shared ports to be up
before they can be used for CPU-initiated termination. And since we
register net devices for the user ports, those won't be functional until
we also call the setup for the shared (CPU, DSA) ports. But we may do
that later, depending on the port numbering scheme of the hardware we
are dealing with.
It just makes sense that all shared ports are brought up before any user
port is. I can't pinpoint any issue due to the current behavior, but
let's change it nonetheless, for consistency's sake.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
DSA needs to simulate master tracking events when a binding with a
DSA master is first established and torn down, in order to give drivers
the simplifying guarantee that ->master_state_change calls are made
only when the master's readiness state to pass traffic changes.
master_state_change() provides an operational bool that DSA drivers can
use to understand whether the DSA master is operational or not.
To avoid races, we need to block the reception of
NETDEV_UP/NETDEV_CHANGE/NETDEV_GOING_DOWN events in the netdev notifier
chain while we are changing the master's dev->dsa_ptr (this changes what
netdev_uses_dsa(dev) reports).
The dsa_master_setup() and dsa_master_teardown() functions optionally
require the rtnl_mutex to be held: if the tagger needs the master to be
promiscuous, these functions call dev_set_promiscuity(). Move the
rtnl_lock() out of those functions and make it top-level.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
At present there are two paths for changing the MTU of the DSA master.
The first is:
dsa_tree_setup
-> dsa_tree_setup_ports
   -> dsa_port_setup
      -> dsa_slave_create
         -> dsa_slave_change_mtu
            -> dev_set_mtu(master)
The second is:
dsa_tree_setup
-> dsa_tree_setup_master
   -> dsa_master_setup
      -> dev_set_mtu(dev)
So the dev_set_mtu() call from dsa_master_setup() has been effectively
superseded by the dsa_slave_change_mtu(slave_dev, ETH_DATA_LEN) that is
done from dsa_slave_create() for each user port. The latter function also
updates the master MTU according to the largest user port MTU from the
tree. Therefore, updating the master MTU through a separate code path
isn't needed.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently dsa_slave_create() has two sequences of rtnl_lock/rtnl_unlock
in a row. Remove the rtnl_unlock() and rtnl_lock() in between, such that
the operation can execute slightly faster.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In dsa_slave_create() there are 2 sections that take rtnl_lock():
MTU change and netdev registration. They are separated by PHY
initialization.
There isn't any strict ordering requirement except for the fact that
netdev registration should be last. Therefore, we can perform the MTU
change a bit later, after the PHY setup. A future change will then be
able to merge the two rtnl_lock sections into one.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Steffen Klassert says:
====================
pull request (net-next): ipsec-next 2022-01-06
1) Fix some clang_analyzer warnings about never read variables.
From luo penghao.
2) Check for pols[0] only once in xfrm_expand_policies().
From Jean Sacren.
3) The SA curlft.use_time was updated only at SA creation time.
Update it whenever the SA is used. From Antony Antony
4) Add support for SM3 secure hash.
From Xu Jia.
5) Add support for SM4 symmetric cipher algorithm.
From Xu Jia.
6) Add a rate limit for SA mapping change messages.
From Antony Antony.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Add an xdp_do_redirect_frame() variant which supports pre-computed
xdp_frame structures. This will be used in bpf_prog_run() to avoid having
to write to the xdp_frame structure when the XDP program doesn't modify the
frame boundaries.
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220103150812.87914-6-toke@redhat.com
All map redirect functions except XSK maps convert xdp_buff to xdp_frame
before enqueueing it. So move this conversion out of the map functions
and into xdp_do_redirect(). This removes a bit of duplicated code, but more
importantly it makes it possible to support caller-allocated xdp_frame
structures, which will be added in a subsequent commit.
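Roughly, the centralized conversion looks like this (a sketch; the exact
error handling is an assumption):

  /* In xdp_do_redirect(): convert once, then hand the frame to the
   * target map type, instead of each map doing its own conversion. */
  struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp);

  if (unlikely(!xdpf))
          return -EOVERFLOW;      /* assumed error code, for illustration */

  /* ...enqueue xdpf to the devmap/cpumap/etc. target as before... */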
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220103150812.87914-5-toke@redhat.com
Store the XDP mem ID inside the page_pool struct so it can be retrieved
later for use in bpf_prog_run().
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/bpf/20220103150812.87914-4-toke@redhat.com
Add a new callback function to page_pool that, if set, will be called every
time a new page is allocated. This will be used from bpf_test_run() to
initialise the page data with the data provided by userspace when running
XDP programs with redirect turned on.
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/bpf/20220103150812.87914-3-toke@redhat.com
The functions that register an XDP memory model take a struct xdp_rxq as
parameter, but the RXQ is not actually used for anything other than pulling
out the struct xdp_mem_info that it embeds. So refactor the register
functions and export variants that just take a pointer to the xdp_mem_info.
This is in preparation for enabling XDP_REDIRECT in bpf_prog_run(), using a
page_pool instance that is not connected to any network device.
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220103150812.87914-2-toke@redhat.com
Ong Boon says:
====================
First of all, sorry for taking more time to get back to this series, and
thanks for all the valuable feedback on series-1 at [1] from Jesper and
Song Liu.
Since then I have looked into what Jesper suggested in [2] and worked on
revising the patch series into several patches for ease of review:
v1->v2:
1/7: [No change]. Add VLAN tag (ID & Priority) to the generated Tx-Only
frames.
2/7: [No change]. Add DMAC and SMAC setting to the generated Tx-Only
frames. If parameters are not set, previous DMAC and SMAC are used.
3/7: [New]. Add support for selecting different CLOCK for clock_gettime()
used in get_nsecs.
4/7: [New]. This is a total rework of series-1 3/4-patch [3]. It uses
clock_nanosleep(), as suggested by Jesper. In addition, added a statistic
for Tx schedule variance under the application stats (-a|--app-stats).
Made the cyclic Tx operation and --poll mode mutually exclusive.
Still, the ability to specify the TX cycle time and use it together
with batch size and packet count remains the same.
5/7: [New]. Add support for TX process schedule policy and priority
setting. By default, the SCHED_OTHER policy is used. This too matches
the schedule policy setting in [2].
6/7: [Change]. This is an update of series-1 4/4-patch [4]. Added a TX
clean process time-out with 1s granularity and a configurable retry
count (-O|--retries).
7/7: [New]. Added a timestamp to the TX packets following the pktgen_hdr
format, matching the implementation in [2]. However, the sequence ID
remains as-is instead of the process schedule diff used in [2].
To summarize the program options that have been added in the v2 series,
using the example below:
DMAC (-G) = fa:8d:f1:e2:0b:e8
SMAC (-H) = ce:17:07:17:3e:3a
VLAN tagged (-V)
VLAN ID (-J) = 12
VLAN Pri (-K) = 3
Tx Queue (-q) = 3
Cycle Time in us (-T) = 1000
Batch (-b) = 2
Packet Count = 6
Tx schedule policy (-W) = FIFO
Tx schedule priority (-U) = 50
Clock selection (-w) = REALTIME
Tx timeout retries(-O) = 5
Tx timestamp (-y)
Cyclic Tx schedule stat (-a)
Note: xdpsock sets UDP dest-port and src-port to 0x1000 as default.
Sending Board
=============
$ xdpsock -i eth0 -t -N -z -H ce:17:07:17:3e:3a -G fa:8d:f1:e2:0b:e8 \
-V -J 12 -K 3 -q 3 \
-T 1000 -b 2 -C 6 -W FIFO -U 50 -w REALTIME \
-O 5 -y -a
sock0@eth0:3 txonly xdp-drv
pps pkts 0.00
rx 0 0
tx 0 6
calls/s count
rx empty polls 0 0
fill fail polls 0 0
copy tx sendtos 0 0
tx wakeup sendtos 0 5
opt polls 0 0
period min ave max cycle
Cyclic TX 1000000 31033 32009 33397 3
Receiving Board
===============
$ tcpdump -nei eth0 udp port 0x1000 -vv -Q in -X \
--time-stamp-precision nano
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
03:46:40.520111580 ce:17:07:17:3e:3a > fa:8d:f1:e2:0b:e8, ethertype 802.1Q (0x8100), length 62: vlan 12, p 3, ethertype IPv4, (tos 0x0, ttl 64, id 0, offset 0, flags [none], proto UDP (17), length 44)
10.10.10.16.4096 > 10.10.10.32.4096: [udp sum ok] UDP, length 16
0x0000: 4500 002c 0000 0000 4011 527e 0a0a 0a10 E..,....@.R~....
0x0010: 0a0a 0a20 1000 1000 0018 e997 be9b e955 ...............U
0x0020: 0000 0000 61cd 2ba1 0006 987c ....a.+....|
03:46:40.520112163 ce:17:07:17:3e:3a > fa:8d:f1:e2:0b:e8, ethertype 802.1Q (0x8100), length 62: vlan 12, p 3, ethertype IPv4, (tos 0x0, ttl 64, id 0, offset 0, flags [none], proto UDP (17), length 44)
10.10.10.16.4096 > 10.10.10.32.4096: [udp sum ok] UDP, length 16
0x0000: 4500 002c 0000 0000 4011 527e 0a0a 0a10 E..,....@.R~....
0x0010: 0a0a 0a20 1000 1000 0018 e996 be9b e955 ...............U
0x0020: 0000 0001 61cd 2ba1 0006 987c ....a.+....|
03:46:40.521066860 ce:17:07:17:3e:3a > fa:8d:f1:e2:0b:e8, ethertype 802.1Q (0x8100), length 62: vlan 12, p 3, ethertype IPv4, (tos 0x0, ttl 64, id 0, offset 0, flags [none], proto UDP (17), length 44)
10.10.10.16.4096 > 10.10.10.32.4096: [udp sum ok] UDP, length 16
0x0000: 4500 002c 0000 0000 4011 527e 0a0a 0a10 E..,....@.R~....
0x0010: 0a0a 0a20 1000 1000 0018 e5af be9b e955 ...............U
0x0020: 0000 0002 61cd 2ba1 0006 9c62 ....a.+....b
03:46:40.521067012 ce:17:07:17:3e:3a > fa:8d:f1:e2:0b:e8, ethertype 802.1Q (0x8100), length 62: vlan 12, p 3, ethertype IPv4, (tos 0x0, ttl 64, id 0, offset 0, flags [none], proto UDP (17), length 44)
10.10.10.16.4096 > 10.10.10.32.4096: [udp sum ok] UDP, length 16
0x0000: 4500 002c 0000 0000 4011 527e 0a0a 0a10 E..,....@.R~....
0x0010: 0a0a 0a20 1000 1000 0018 e5ae be9b e955 ...............U
0x0020: 0000 0003 61cd 2ba1 0006 9c62 ....a.+....b
03:46:40.522061935 ce:17:07:17:3e:3a > fa:8d:f1:e2:0b:e8, ethertype 802.1Q (0x8100), length 62: vlan 12, p 3, ethertype IPv4, (tos 0x0, ttl 64, id 0, offset 0, flags [none], proto UDP (17), length 44)
10.10.10.16.4096 > 10.10.10.32.4096: [udp sum ok] UDP, length 16
0x0000: 4500 002c 0000 0000 4011 527e 0a0a 0a10 E..,....@.R~....
0x0010: 0a0a 0a20 1000 1000 0018 e1c5 be9b e955 ...............U
0x0020: 0000 0004 61cd 2ba1 0006 a04a ....a.+....J
03:46:40.522062173 ce:17:07:17:3e:3a > fa:8d:f1:e2:0b:e8, ethertype 802.1Q (0x8100), length 62: vlan 12, p 3, ethertype IPv4, (tos 0x0, ttl 64, id 0, offset 0, flags [none], proto UDP (17), length 44)
10.10.10.16.4096 > 10.10.10.32.4096: [udp sum ok] UDP, length 16
0x0000: 4500 002c 0000 0000 4011 527e 0a0a 0a10 E..,....@.R~....
0x0010: 0a0a 0a20 1000 1000 0018 e1c4 be9b e955 ...............U
0x0020: 0000 0005 61cd 2ba1 0006 a04a ....a.+....J
I have tested the above with both tagged and untagged packet formats and,
based on the timestamps in tcpdump, found that the timing of the batched
cyclic transmission is correct.
I would appreciate it if the community could give the patch series v2 a
try and point out any gaps.
Thanks
Boon Leong
[1] https://patchwork.kernel.org/project/netdevbpf/cover/20211124091821.3916046-1-boon.leong.ong@intel.com/
[2] https://github.com/netoptimizer/network-testing/blob/master/src/udp_pacer.c
[3] https://patchwork.kernel.org/project/netdevbpf/patch/20211124091821.3916046-4-boon.leong.ong@intel.com/
[4] https://patchwork.kernel.org/project/netdevbpf/patch/20211124091821.3916046-5-boon.leong.ong@intel.com/
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
It may be useful to add a timestamp to Tx packets for continuous or cyclic
transmit operation. The timestamp and sequence ID of a Tx packet are
stored according to the pktgen header format. To enable the per-packet
timestamp, use the -y|--tstamp option. If the timestamp is off, the
pktgen header is not included in the UDP payload, so the receiving side
can use the pktgen magic number for differentiation.
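For reference, a sketch of the pktgen-style header carried in the UDP
payload (field layout follows the pktgen convention; the magic value is
what shows up as 'be9b e955' in the hex dumps in the cover letter):

  #include <stdint.h>
  #include <arpa/inet.h>

  #define PKTGEN_MAGIC 0xbe9be955

  struct pktgen_hdr {
          uint32_t pgh_magic;     /* PKTGEN_MAGIC, identifies the payload */
          uint32_t seq_num;       /* per-packet sequence ID */
          uint32_t tv_sec;        /* Tx timestamp, seconds */
          uint32_t tv_usec;       /* Tx timestamp, microseconds */
  };

  static void fill_pktgen_hdr(struct pktgen_hdr *pgh, uint32_t seq,
                              uint32_t sec, uint32_t usec)
  {
          pgh->pgh_magic = htonl(PKTGEN_MAGIC);
          pgh->seq_num = htonl(seq);
          pgh->tv_sec = htonl(sec);
          pgh->tv_usec = htonl(usec);
  }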
The implementation supports both the VLAN-tagged and untagged formats. By
default, the minimum packet size is set to 64B. However, if VLAN tagging
is on (-V), the minimum packet size is increased to 66B so that the
pktgen_hdr still fits.
Added hex_dump() into the code path just for future cross-checking.
As before, simply change to "#define DEBUG_HEXDUMP 1" to inspect the
accuracy of the TX packets.
Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211230035447.523177-8-boon.leong.ong@intel.com
When the user sets tx-pkt-count and there are invalid Tx frames, the
complete_tx_only_all() process polls indefinitely. So, this patch adds a
time-out mechanism into the process so that the application can terminate
automatically after retrying for 3 * the polling interval.
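A sketch of the retry idea (the helper names below are hypothetical):

  /* Poll for Tx completions once per second and give up after
   * 'opt_retries' consecutive timeouts with no progress, instead of
   * polling forever on a frame that will never complete. */
  int budget = opt_retries;

  while (tx_frames_outstanding() > 0) {
          if (poll_tx_completions_for_one_second() > 0) {
                  budget = opt_retries;   /* progress made, reset budget */
                  continue;
          }
          if (--budget == 0)
                  break;                  /* stuck: terminate the app */
  }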
v1->v2:
Thanks to Jesper's and Song Liu's suggestion.
- clean-up git message to remove polling log
- make the Tx time-out retries configurable with 1s granularity
Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211230035447.523177-7-boon.leong.ong@intel.com
By default, TX schedule policy is SCHED_OTHER (round-robin time-sharing).
To improve TX cyclic scheduling, we add SCHED_FIFO policy and its priority
by using -W FIFO or --policy=FIFO and -U <PRIO> or --schpri=<PRIO>.
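What -W FIFO -U 50 boils down to, roughly (a standalone sketch using the
standard POSIX call, not xdpsock's exact code; SCHED_FIFO requires
CAP_SYS_NICE or root):

  #include <sched.h>
  #include <stdio.h>

  /* Switch the calling process to the SCHED_FIFO real-time policy with
   * the given priority, as selected by -W FIFO and -U <PRIO>. */
  static int set_fifo_priority(int prio)
  {
          struct sched_param sp = { .sched_priority = prio };

          if (sched_setscheduler(0, SCHED_FIFO, &sp)) {
                  perror("sched_setscheduler");
                  return -1;
          }
          return 0;
  }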
A) From xdpsock --app-stats, for SCHED_OTHER policy:
$ xdpsock -i eth0 -t -N -z -T 1000 -b 16 -C 100000 -a
period min ave max cycle
Cyclic TX 1000000 53507 75334 712642 6250
B) For SCHED_FIFO policy and schpri=50:
$ xdpsock -i eth0 -t -N -z -T 1000 -b 16 -C 100000 -a -W FIFO -U 50
period min ave max cycle
Cyclic TX 1000000 3699 24859 54397 6250
Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211230035447.523177-6-boon.leong.ong@intel.com
The Tx cycle time is specified in microseconds. By combining the batch
size (-b M) and the Tx cycle time (-T|--tx-cycle N), xdpsock can now
transmit a batch of packets every N us periodically. Cyclic TX operation
is not applicable if --poll mode is used.
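A minimal sketch of the cyclic schedule built on clock_nanosleep() with
an absolute deadline (the batch-send call is a placeholder, not
xdpsock's function):

  #include <time.h>

  #define NSEC_PER_SEC 1000000000L

  void send_one_batch(void);      /* placeholder for the xdpsock batch Tx */

  /* Advance an absolute deadline by cycle_us each iteration and sleep
   * until it, so scheduling jitter does not accumulate as drift. */
  static void tx_cycle_loop(long cycle_us, long iterations)
  {
          struct timespec next;

          clock_gettime(CLOCK_MONOTONIC, &next);
          while (iterations--) {
                  next.tv_nsec += cycle_us * 1000L;
                  while (next.tv_nsec >= NSEC_PER_SEC) {
                          next.tv_sec++;
                          next.tv_nsec -= NSEC_PER_SEC;
                  }
                  clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
                  send_one_batch();
          }
  }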
To transmit 16 packets every 1ms cycle time for a total of 100000 packets
silently:
$ xdpsock -i eth0 -t -N -z -T 1000 -b 16 -C 100000
To print cyclic TX schedule variance stats, use --app-stats|-a:
$ xdpsock -i eth0 -t -N -z -T 1000 -b 16 -C 100000 -a
sock0@eth0:0 txonly xdp-drv
pps pkts 0.00
rx 0 0
tx 0 100000
calls/s count
rx empty polls 0 0
fill fail polls 0 0
copy tx sendtos 0 0
tx wakeup sendtos 0 6254
opt polls 0 0
period min ave max cycle
Cyclic TX 1000000 53507 75334 712642 6250
Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211230035447.523177-5-boon.leong.ong@intel.com
User specifies the clock selection by using -w CLOCK or --clock=CLOCK
where CLOCK=[REALTIME, TAI, BOOTTIME, MONOTONIC].
The default CLOCK selection is MONOTONIC.
The implementation of clock selection parsing is borrowed from
iproute2/tc/q_taprio.c
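A small sketch of what that parsing amounts to (illustrative table, not a
copy of the taprio code):

  #include <strings.h>
  #include <time.h>

  /* Map the -w/--clock argument to a clockid_t; MONOTONIC is the default. */
  static clockid_t parse_clockid(const char *name)
  {
          static const struct {
                  const char *name;
                  clockid_t id;
          } clocks[] = {
                  { "REALTIME",  CLOCK_REALTIME  },
                  { "TAI",       CLOCK_TAI       },
                  { "BOOTTIME",  CLOCK_BOOTTIME  },
                  { "MONOTONIC", CLOCK_MONOTONIC },
          };
          size_t i;

          for (i = 0; i < sizeof(clocks) / sizeof(clocks[0]); i++)
                  if (!strcasecmp(name, clocks[i].name))
                          return clocks[i].id;

          return CLOCK_MONOTONIC;
  }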
Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211230035447.523177-4-boon.leong.ong@intel.com
To set Dest MAC address (-G|--tx-dmac) only:
$ xdpsock -i eth0 -t -N -z -G aa:bb:cc:dd:ee:ff
To set Source MAC address (-H|--tx-smac) only:
$ xdpsock -i eth0 -t -N -z -H 11:22:33:44:55:66
To set both Dest and Source MAC address:
$ xdpsock -i eth0 -t -N -z -G aa:bb:cc:dd:ee:ff \
-H 11:22:33:44:55:66
The default Dest and Source MAC address remain the same as before.
Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/bpf/20211230035447.523177-3-boon.leong.ong@intel.com
In multi-queue environment testing, support for VLAN-tag based
steering is useful. So, this patch adds the capability to add a
VLAN tag (VLAN ID and Priority) to the generated Tx frames.
To set the VLAN ID=10 and Priority=2 for Tx only through TxQ=3:
$ xdpsock -i eth0 -t -N -z -q 3 -V -J 10 -K 2
If VLAN ID (-J) and Priority (-K) are not specified, they default to
VLAN ID = 1
VLAN Priority = 0.
For example, VLAN-tagged Tx only, xdp copy mode through TxQ=1:
$ xdpsock -i eth0 -t -N -c -q 1 -V
Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211230035447.523177-2-boon.leong.ong@intel.com
Aleksander Jan Bajkowski says:
====================
net: lantiq_xrx200: improve ethernet performance
This patchset improves Ethernet performance by 15%.
NAT Performance results on BT Home Hub 5A (kernel 5.10.89, mtu 1500):
Down Up
Before 539 Mbps 599 Mbps
After 624 Mbps 695 Mbps
====================
Link: https://lore.kernel.org/r/20220104151144.181736-1-olek2@wp.pl
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
We can increase the efficiency of the rx path by using buffers to receive
packets and then building SKBs around them just before passing them into
the network stack. In contrast, preallocating SKBs too early reduces CPU
cache efficiency.
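Roughly the pattern involved, as a hedged fragment (not the driver's exact
code; alloc_size must also leave room for the skb_shared_info that
build_skb() places at the end of the buffer):

  /* Refill the RX ring with plain page-fragment buffers... */
  void *buf = napi_alloc_frag(alloc_size);

  /* ...and wrap a buffer in an skb only once the packet is complete and
   * about to be handed to the stack, keeping skb metadata cache-hot. */
  struct sk_buff *skb = build_skb(buf, alloc_size);

  skb_reserve(skb, NET_SKB_PAD + NET_IP_ALIGN);
  skb_put(skb, len);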
NAT Performance results on BT Home Hub 5A (kernel 5.10.89, mtu 1500):
Down Up
Before 577 Mbps 648 Mbps
After 624 Mbps 695 Mbps
Signed-off-by: Aleksander Jan Bajkowski <olek2@wp.pl>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
NAT Performance results on BT Home Hub 5A (kernel 5.10.89, mtu 1500):
Down Up
Before 545 Mbps 625 Mbps
After 577 Mbps 648 Mbps
Signed-off-by: Aleksander Jan Bajkowski <olek2@wp.pl>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
NAT Performance results on BT Home Hub 5A (kernel 5.10.89, mtu 1500):
Down Up
Before 539 Mbps 599 Mbps
After 545 Mbps 625 Mbps
Signed-off-by: Aleksander Jan Bajkowski <olek2@wp.pl>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
When the -L option of the testptp utility is specified together with
other options (e.g. -p to enable PPS output), the user probably wants
them to apply to the pin configured by the -L option.
Reorder the code to set the pin function before other function requests
to avoid confusing users.
Signed-off-by: Miroslav Lichvar <mlichvar@redhat.com>
Reviewed-by: Vladimir Oltean <olteanv@gmail.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Link: https://lore.kernel.org/r/20220105152506.3256026-1-mlichvar@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
If legacy kprobes are attached repeatedly to the same function in one
process, libbpf registers them using the same probe name and gets an
-EBUSY error. So append an index to the probe name format to fix this
problem.
Co-developed-by: Chengming Zhou <zhouchengming@bytedance.com>
Signed-off-by: Qiang Wang <wangqiang.wq.frank@bytedance.com>
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211227130713.66933-2-wangqiang.wq.frank@bytedance.com
With perf_buffer__poll() and perf_buffer__consume() APIs available,
there is no reason to expose bpf_perf_event_read_simple() API to
users. If users need a custom perf buffer, they can re-implement
the function.
Mark bpf_perf_event_read_simple() as deprecated and move the logic to a
new static function so it can still be called by other functions in the
same file.
[0] Closes: https://github.com/libbpf/libbpf/issues/310
Signed-off-by: Christy Lee <christylee@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211229204156.13569-1-christylee@fb.com
The commit 4057765f2d ("sock: consistent handling of extreme
SO_SNDBUF/SO_RCVBUF values") added a change to prevent underflow
in setsockopt() around SO_SNDBUF/SO_RCVBUF.
This patch adds the same change to _bpf_setsockopt().
Fixes: 4057765f2d ("sock: consistent handling of extreme SO_SNDBUF/SO_RCVBUF values")
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220104013153.97906-2-kuniyu@amazon.co.jp
Merge tag 'net-5.16-final' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Pull networking fixes from Jakub Kicinski:
"Networking fixes, including fixes from bpf, and WiFi. One last pull
request, turns out some of the recent fixes did more harm than good.
Current release - regressions:
- Revert "xsk: Do not sleep in poll() when need_wakeup set", made the
problem worse
- Revert "net: phy: fixed_phy: Fix NULL vs IS_ERR() checking in
__fixed_phy_register", broke EPROBE_DEFER handling
- Revert "net: usb: r8152: Add MAC pass-through support for more
Lenovo Docks", broke setups without a Lenovo dock
Current release - new code bugs:
- selftests: set amt.sh executable
Previous releases - regressions:
- batman-adv: mcast: don't send link-local multicast to mcast routers
Previous releases - always broken:
- ipv4/ipv6: check attribute length for RTA_FLOW / RTA_GATEWAY
- sctp: hold endpoint before calling cb in
sctp_transport_lookup_process
- mac80211: mesh: embed mesh_paths and mpp_paths into
ieee80211_if_mesh to avoid complicated handling of sub-object
allocation failures
- seg6: fix traceroute in the presence of SRv6
- tipc: fix a kernel-infoleak in __tipc_sendmsg()"
* tag 'net-5.16-final' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (36 commits)
selftests: set amt.sh executable
Revert "net: usb: r8152: Add MAC passthrough support for more Lenovo Docks"
sfc: The RX page_ring is optional
iavf: Fix limit of total number of queues to active queues of VF
i40e: Fix incorrect netdev's real number of RX/TX queues
i40e: Fix for displaying message regarding NVM version
i40e: fix use-after-free in i40e_sync_filters_subtask()
i40e: Fix to not show opcode msg on unsuccessful VF MAC change
ieee802154: atusb: fix uninit value in atusb_set_extended_addr
mac80211: mesh: embedd mesh_paths and mpp_paths into ieee80211_if_mesh
mac80211: initialize variable have_higher_than_11mbit
sch_qfq: prevent shift-out-of-bounds in qfq_init_qdisc
netrom: fix copying in user data in nr_setsockopt
udp6: Use Segment Routing Header for dest address if present
icmp: ICMPV6: Examine invoking packet for Segment Route Headers.
seg6: export get_srh() for ICMP handling
Revert "net: phy: fixed_phy: Fix NULL vs IS_ERR() checking in __fixed_phy_register"
ipv6: Do cleanup if attribute validation fails in multipath route
ipv6: Continue processing multipath route even if gateway attribute is invalid
net/fsl: Remove leftover definition in xgmac_mdio
...
Commit bfc6bb74e4 ("bpf: Implement verifier support for validation of async callbacks.")
added support for BPF_FUNC_timer_set_callback to
the __check_func_call() function. The test in __check_func_call() is
flawed because it can mis-interpret a regular BPF-to-BPF pseudo-call
as a BPF_FUNC_timer_set_callback callback call.
Consider the conditional in the code:
  if (insn->code == (BPF_JMP | BPF_CALL) &&
      insn->imm == BPF_FUNC_timer_set_callback) {
The BPF_FUNC_timer_set_callback has value 170. This means that if you
have a BPF program that contains a pseudo-call with an instruction delta
of 170, this conditional will be found to be true by the verifier, and
it will interpret the pseudo-call as a callback. This leads to a mess
with the verification of the program because it makes the wrong
assumptions about the nature of this call.
Solution: include an explicit check to ensure that insn->src_reg == 0.
This ensures that calls cannot be mis-interpreted as an async callback
call.
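In other words, the conditional becomes something like:

  if (insn->code == (BPF_JMP | BPF_CALL) &&
      insn->src_reg == 0 &&       /* real helper call, not a pseudo-call */
      insn->imm == BPF_FUNC_timer_set_callback) {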
Fixes: bfc6bb74e4 ("bpf: Implement verifier support for validation of async callbacks.")
Signed-off-by: Kris Van Hees <kris.van.hees@oracle.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220105210150.GH1559@oracle.com
Add a little more structure to the ALU/JMP documentation with sections and
improve the example text.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220103183556.41040-3-hch@lst.de
The eBPF instruction set document does not currently document the basic
instruction encoding. Add a section to do that.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220103183556.41040-2-hch@lst.de
Add a new test case with mem_or_null typed register with off > 0 to ensure
it gets rejected by the verifier:
# ./test_verifier 1011
#1009/u check with invalid reg offset 0 OK
#1009/p check with invalid reg offset 0 OK
Summary: 2 PASSED, 0 SKIPPED, 0 FAILED
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
If we ever get to a point again where we convert a bogus looking <ptr>_or_null
typed register containing a non-zero fixed or variable offset, then let's not
reset these bounds to zero (since they are not zero) and also not promote the
register to a <ptr> type, but instead leave it as <ptr>_or_null. Converting it
to an unknown register could be an avenue as well, but if we run into this case
it would allow leaking a kernel pointer this way.
Fixes: f1174f77b5 ("bpf/verifier: rework value tracking")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>