Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:

 - fix failure to add bond interfaces to a bridge, the offload-handling
   code was too defensive there and recent refactoring unearthed that.
   Users complained (Ido)

 - fix unnecessarily reflecting ECN bits within TOS values / QoS marking
   in TCP ACK and reset packets (Wei); see the masking sketch after this
   list

 - fix a deadlock with bpf iterator. Hopefully we're in the clear on
   this front now... (Yonghong)

 - BPF fix for clobbering r2 in bpf_gen_ld_abs (Daniel)

 - fix AQL on mt76 devices with FW rate control and fix a couple of AQL
   issues in mac80211 code (Felix)

 - fix authentication issue with mwifiex (Maximilian)

 - WiFi connectivity fix: revert IGTK support in ti/wlcore (Mauro)

 - fix exception handling for multipath routes via same device (David
   Ahern)

 - revert back to a BH spin lock flavor for nsid_lock: there are paths
   which do require the BH context protection (Taehee)

 - fix interrupt / queue / NAPI handling in the lantiq driver (Hauke)

 - fix ife module load deadlock (Cong)

 - make an adjustment to netlink reply message type for code added in
   this release (the sole change touching uAPI here) (Michal)

 - a number of fixes for small NXP and Microchip switches (Vladimir)
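
For the ECN/TOS item above, a minimal hedged sketch of the masking involved
(the helper is hypothetical and only illustrates the idea; INET_ECN_MASK is
the real constant from include/net/inet_ecn.h):

    #include <net/inet_ecn.h>	/* INET_ECN_MASK == 3 */

    /* Hypothetical helper, not from the patches: ECN state occupies the
     * low two bits of the TOS byte, so a TOS value reflected into ACK or
     * RST packets can be stripped of ECN bits with a simple mask.
     */
    static inline u8 tos_without_ecn(u8 tos)
    {
    	return tos & ~INET_ECN_MASK;
    }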

[ Pull request acked by David: "you can expect more of this in the
  future as I try to delegate more things to Jakub" ]

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (167 commits)
  net: mscc: ocelot: fix some key offsets for IP4_TCP_UDP VCAP IS2 entries
  net: dsa: seville: fix some key offsets for IP4_TCP_UDP VCAP IS2 entries
  net: dsa: felix: fix some key offsets for IP4_TCP_UDP VCAP IS2 entries
  inet_diag: validate INET_DIAG_REQ_PROTOCOL attribute
  net: bridge: br_vlan_get_pvid_rcu() should dereference the VLAN group under RCU
  net: Update MAINTAINERS for MediaTek switch driver
  net/mlx5e: mlx5e_fec_in_caps() returns a boolean
  net/mlx5e: kTLS, Avoid kzalloc(GFP_KERNEL) under spinlock
  net/mlx5e: kTLS, Fix leak on resync error flow
  net/mlx5e: kTLS, Add missing dma_unmap in RX resync
  net/mlx5e: kTLS, Fix napi sync and possible use-after-free
  net/mlx5e: TLS, Do not expose FPGA TLS counter if not supported
  net/mlx5e: Fix using wrong stats_grps in mlx5e_update_ndo_stats()
  net/mlx5e: Fix multicast counter not up-to-date in "ip -s"
  net/mlx5e: Fix endianness when calculating pedit mask first bit
  net/mlx5e: Enable adding peer miss rules only if merged eswitch is supported
  net/mlx5e: CT: Fix freeing ct_label mapping
  net/mlx5e: Fix memory leak of tunnel info when rule under multipath not ready
  net/mlx5e: Use synchronize_rcu to sync with NAPI
  net/mlx5e: Use RCU to protect rq->xdp_prog
  ...
Committed by Linus Torvalds, 2020-09-22 14:43:50 -07:00
Commit: d3017135c4
Parents: 0baca07006 b334ec66d4
165 changed files: 1709 additions and 828 deletions


@@ -182,9 +182,6 @@ in the order of reservations, but only after all previous records where
 already committed. It is thus possible for slow producers to temporarily hold
 off submitted records, that were reserved later.
 
-Reservation/commit/consumer protocol is verified by litmus tests in
-Documentation/litmus_tests/bpf-rb/_.
-
 One interesting implementation bit, that significantly simplifies (and thus
 speeds up as well) implementation of both producers and consumers is how data
 area is mapped twice contiguously back-to-back in the virtual memory. This
@@ -200,7 +197,7 @@ a self-pacing notifications of new data being availability.
 being available after commit only if consumer has already caught up right up to
 the record being committed. If not, consumer still has to catch up and thus
 will see new data anyways without needing an extra poll notification.
-Benchmarks (see tools/testing/selftests/bpf/benchs/bench_ringbuf.c_) show that
+Benchmarks (see tools/testing/selftests/bpf/benchs/bench_ringbufs.c) show that
 this allows to achieve a very high throughput without having to resort to
 tricks like "notify only every Nth sample", which are necessary with perf
 buffer. For extreme cases, when BPF program wants more manual control of
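
The reserve/commit protocol this documentation describes is what the BPF-side
ring buffer helpers implement. A minimal hedged sketch (the map name, size,
and event struct are illustrative, not from this merge; the helpers are the
standard libbpf ones):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct event {
    	int pid;
    };

    struct {
    	__uint(type, BPF_MAP_TYPE_RINGBUF);
    	__uint(max_entries, 256 * 1024); /* power-of-2 multiple of page size */
    } rb SEC(".maps");

    SEC("tracepoint/sched/sched_process_exec")
    int handle_exec(void *ctx)
    {
    	/* Reserve space; the record stays invisible to consumers. */
    	struct event *e = bpf_ringbuf_reserve(&rb, sizeof(*e), 0);

    	if (!e)
    		return 0; /* ring buffer full; drop */

    	e->pid = bpf_get_current_pid_tgid() >> 32;
    	/* Commit: the record becomes visible, but consumers will only
    	 * see it once all earlier reservations are also committed. */
    	bpf_ringbuf_submit(e, 0);
    	return 0;
    }

    char LICENSE[] SEC("license") = "GPL";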


@@ -206,6 +206,7 @@ Userspace to kernel:
   ``ETHTOOL_MSG_TSINFO_GET``            get timestamping info
   ``ETHTOOL_MSG_CABLE_TEST_ACT``        action start cable test
   ``ETHTOOL_MSG_CABLE_TEST_TDR_ACT``    action start raw TDR cable test
+  ``ETHTOOL_MSG_TUNNEL_INFO_GET``       get tunnel offload info
   ===================================== ================================
 
 Kernel to userspace:
@@ -239,6 +240,7 @@ Kernel to userspace:
   ``ETHTOOL_MSG_TSINFO_GET_REPLY``      timestamping info
   ``ETHTOOL_MSG_CABLE_TEST_NTF``        Cable test results
   ``ETHTOOL_MSG_CABLE_TEST_TDR_NTF``    Cable test TDR results
+  ``ETHTOOL_MSG_TUNNEL_INFO_GET_REPLY`` tunnel offload info
   ===================================== =================================
 
 ``GET`` requests are sent by userspace applications to retrieve device
@@ -1363,4 +1365,5 @@ are netlink only.
   ``ETHTOOL_SFECPARAM``               n/a
   n/a                                 ``ETHTOOL_MSG_CABLE_TEST_ACT``
   n/a                                 ``ETHTOOL_MSG_CABLE_TEST_TDR_ACT``
+  n/a                                 ``ETHTOOL_MSG_TUNNEL_INFO_GET``
   =================================== =====================================


@@ -4408,12 +4408,6 @@ T:	git git://git.infradead.org/users/hch/configfs.git
 F:	fs/configfs/
 F:	include/linux/configfs.h
 
-CONNECTOR
-M:	Evgeniy Polyakov <zbr@ioremap.net>
-L:	netdev@vger.kernel.org
-S:	Maintained
-F:	drivers/connector/
-
 CONSOLE SUBSYSTEM
 M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 S:	Supported
@@ -8329,8 +8323,9 @@ S:	Supported
 F:	drivers/pci/hotplug/rpaphp*
 
 IBM Power SRIOV Virtual NIC Device Driver
-M:	Thomas Falcon <tlfalcon@linux.ibm.com>
-M:	John Allen <jallen@linux.ibm.com>
+M:	Dany Madden <drt@linux.ibm.com>
+M:	Lijun Pan <ljp@linux.ibm.com>
+M:	Sukadev Bhattiprolu <sukadev@linux.ibm.com>
 L:	netdev@vger.kernel.org
 S:	Supported
 F:	drivers/net/ethernet/ibm/ibmvnic.*
@@ -8344,7 +8339,7 @@ F:	arch/powerpc/platforms/powernv/copy-paste.h
 F:	arch/powerpc/platforms/powernv/vas*
 
 IBM Power Virtual Ethernet Device Driver
-M:	Thomas Falcon <tlfalcon@linux.ibm.com>
+M:	Cristobal Forno <cforno12@linux.ibm.com>
 L:	netdev@vger.kernel.org
 S:	Supported
 F:	drivers/net/ethernet/ibm/ibmveth.*
@@ -11042,6 +11037,7 @@ F:	drivers/char/hw_random/mtk-rng.c
 
 MEDIATEK SWITCH DRIVER
 M:	Sean Wang <sean.wang@mediatek.com>
+M:	Landen Chao <Landen.Chao@mediatek.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	drivers/net/dsa/mt7530.*
@@ -12055,6 +12051,7 @@ Q:	http://patchwork.ozlabs.org/project/netdev/list/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
 F:	Documentation/devicetree/bindings/net/
+F:	drivers/connector/
 F:	drivers/net/
 F:	include/linux/etherdevice.h
 F:	include/linux/fcdevice.h


@@ -116,7 +116,6 @@
 			switch0: ksz8563@0 {
 				compatible = "microchip,ksz8563";
 				reg = <0>;
-				phy-mode = "mii";
 				reset-gpios = <&pioA PIN_PD4 GPIO_ACTIVE_LOW>;
 
 				spi-max-frequency = <500000>;
@@ -140,6 +139,7 @@
 						reg = <2>;
 						label = "cpu";
 						ethernet = <&macb0>;
+						phy-mode = "mii";
 						fixed-link {
 							speed = <100>;
 							full-duplex;


@@ -2224,7 +2224,7 @@ static int eni_init_one(struct pci_dev *pci_dev,
 
 	rc = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(32));
 	if (rc < 0)
-		goto out;
+		goto err_disable;
 
 	rc = -ENOMEM;
 	eni_dev = kmalloc(sizeof(struct eni_dev), GFP_KERNEL);


@@ -932,11 +932,19 @@ static void ksz8795_port_setup(struct ksz_device *dev, int port, bool cpu_port)
 	ksz_port_cfg(dev, port, P_PRIO_CTRL, PORT_802_1P_ENABLE, true);
 
 	if (cpu_port) {
+		if (!p->interface && dev->compat_interface) {
+			dev_warn(dev->dev,
+				 "Using legacy switch \"phy-mode\" property, because it is missing on port %d node. "
+				 "Please update your device tree.\n",
+				 port);
+			p->interface = dev->compat_interface;
+		}
+
 		/* Configure MII interface for proper network communication. */
 		ksz_read8(dev, REG_PORT_5_CTRL_6, &data8);
 		data8 &= ~PORT_INTERFACE_TYPE;
 		data8 &= ~PORT_GMII_1GPS_MODE;
-		switch (dev->interface) {
+		switch (p->interface) {
 		case PHY_INTERFACE_MODE_MII:
 			p->phydev.speed = SPEED_100;
 			break;
@@ -952,11 +960,11 @@ static void ksz8795_port_setup(struct ksz_device *dev, int port, bool cpu_port)
 		default:
 			data8 &= ~PORT_RGMII_ID_IN_ENABLE;
 			data8 &= ~PORT_RGMII_ID_OUT_ENABLE;
-			if (dev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
-			    dev->interface == PHY_INTERFACE_MODE_RGMII_RXID)
+			if (p->interface == PHY_INTERFACE_MODE_RGMII_ID ||
+			    p->interface == PHY_INTERFACE_MODE_RGMII_RXID)
 				data8 |= PORT_RGMII_ID_IN_ENABLE;
-			if (dev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
-			    dev->interface == PHY_INTERFACE_MODE_RGMII_TXID)
+			if (p->interface == PHY_INTERFACE_MODE_RGMII_ID ||
+			    p->interface == PHY_INTERFACE_MODE_RGMII_TXID)
 				data8 |= PORT_RGMII_ID_OUT_ENABLE;
 			data8 |= PORT_GMII_1GPS_MODE;
 			data8 |= PORT_INTERFACE_RGMII;
@@ -1252,7 +1260,7 @@ static int ksz8795_switch_init(struct ksz_device *dev)
 	}
 
 	/* set the real number of ports */
-	dev->ds->num_ports = dev->port_cnt;
+	dev->ds->num_ports = dev->port_cnt + 1;
 
 	return 0;
 }


@@ -1208,7 +1208,7 @@ static void ksz9477_port_setup(struct ksz_device *dev, int port, bool cpu_port)
 
 		/* configure MAC to 1G & RGMII mode */
 		ksz_pread8(dev, port, REG_PORT_XMII_CTRL_1, &data8);
-		switch (dev->interface) {
+		switch (p->interface) {
 		case PHY_INTERFACE_MODE_MII:
 			ksz9477_set_xmii(dev, 0, &data8);
 			ksz9477_set_gbit(dev, false, &data8);
@@ -1229,11 +1229,11 @@ static void ksz9477_port_setup(struct ksz_device *dev, int port, bool cpu_port)
 			ksz9477_set_gbit(dev, true, &data8);
 			data8 &= ~PORT_RGMII_ID_IG_ENABLE;
 			data8 &= ~PORT_RGMII_ID_EG_ENABLE;
-			if (dev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
-			    dev->interface == PHY_INTERFACE_MODE_RGMII_RXID)
+			if (p->interface == PHY_INTERFACE_MODE_RGMII_ID ||
+			    p->interface == PHY_INTERFACE_MODE_RGMII_RXID)
 				data8 |= PORT_RGMII_ID_IG_ENABLE;
-			if (dev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
-			    dev->interface == PHY_INTERFACE_MODE_RGMII_TXID)
+			if (p->interface == PHY_INTERFACE_MODE_RGMII_ID ||
+			    p->interface == PHY_INTERFACE_MODE_RGMII_TXID)
 				data8 |= PORT_RGMII_ID_EG_ENABLE;
 			p->phydev.speed = SPEED_1000;
 			break;
@@ -1269,23 +1269,32 @@ static void ksz9477_config_cpu_port(struct dsa_switch *ds)
 			dev->cpu_port = i;
 			dev->host_mask = (1 << dev->cpu_port);
 			dev->port_mask |= dev->host_mask;
+			p = &dev->ports[i];
 
 			/* Read from XMII register to determine host port
 			 * interface.  If set specifically in device tree
 			 * note the difference to help debugging.
 			 */
 			interface = ksz9477_get_interface(dev, i);
-			if (!dev->interface)
-				dev->interface = interface;
-			if (interface && interface != dev->interface)
+			if (!p->interface) {
+				if (dev->compat_interface) {
+					dev_warn(dev->dev,
+						 "Using legacy switch \"phy-mode\" property, because it is missing on port %d node. "
+						 "Please update your device tree.\n",
+						 i);
+					p->interface = dev->compat_interface;
+				} else {
+					p->interface = interface;
+				}
+			}
+			if (interface && interface != p->interface)
 				dev_info(dev->dev,
 					 "use %s instead of %s\n",
-					 phy_modes(dev->interface),
+					 phy_modes(p->interface),
 					 phy_modes(interface));
 
 			/* enable cpu port */
 			ksz9477_port_setup(dev, i, true);
-			p = &dev->ports[dev->cpu_port];
 			p->vid_member = dev->port_mask;
 			p->on = 1;
 		}


@@ -388,6 +388,8 @@ int ksz_switch_register(struct ksz_device *dev,
 			const struct ksz_dev_ops *ops)
 {
 	phy_interface_t interface;
+	struct device_node *port;
+	unsigned int port_num;
 	int ret;
 
 	if (dev->pdata)
@@ -421,10 +423,19 @@ int ksz_switch_register(struct ksz_device *dev,
 	/* Host port interface will be self detected, or specifically set in
 	 * device tree.
 	 */
+	for (port_num = 0; port_num < dev->port_cnt; ++port_num)
+		dev->ports[port_num].interface = PHY_INTERFACE_MODE_NA;
 	if (dev->dev->of_node) {
 		ret = of_get_phy_mode(dev->dev->of_node, &interface);
 		if (ret == 0)
-			dev->interface = interface;
+			dev->compat_interface = interface;
+		for_each_available_child_of_node(dev->dev->of_node, port) {
+			if (of_property_read_u32(port, "reg", &port_num))
+				continue;
+			if (port_num >= dev->port_cnt)
+				return -EINVAL;
+			of_get_phy_mode(port, &dev->ports[port_num].interface);
+		}
 		dev->synclko_125 = of_property_read_bool(dev->dev->of_node,
 							 "microchip,synclko-125");
 	}


@@ -39,6 +39,7 @@ struct ksz_port {
 	u32 freeze:1;			/* MIB counter freeze is enabled */
 
 	struct ksz_port_mib mib;
+	phy_interface_t interface;
 };
 
 struct ksz_device {
@@ -72,7 +73,7 @@ struct ksz_device {
 	int mib_cnt;
 	int mib_port_cnt;
 	int last_port;			/* ports after that not used */
-	phy_interface_t interface;
+	phy_interface_t compat_interface;
 	u32 regs_size;
 	bool phy_errata_9477;
 	bool synclko_125;


@@ -585,7 +585,10 @@ static int felix_setup(struct dsa_switch *ds)
 	if (err)
 		return err;
 
-	ocelot_init(ocelot);
+	err = ocelot_init(ocelot);
+	if (err)
+		return err;
+
 	if (ocelot->ptp) {
 		err = ocelot_init_timestamp(ocelot, &ocelot_ptp_clock_info);
 		if (err) {
@@ -640,10 +643,13 @@ static void felix_teardown(struct dsa_switch *ds)
 {
 	struct ocelot *ocelot = ds->priv;
 	struct felix *felix = ocelot_to_felix(ocelot);
+	int port;
 
 	if (felix->info->mdio_bus_free)
 		felix->info->mdio_bus_free(ocelot);
 
+	for (port = 0; port < ocelot->num_phys_ports; port++)
+		ocelot_deinit_port(ocelot, port);
 	ocelot_deinit_timestamp(ocelot);
 	/* stop workqueue thread */
 	ocelot_deinit(ocelot);


@@ -645,17 +645,17 @@ static struct vcap_field vsc9959_vcap_is2_keys[] = {
 	[VCAP_IS2_HK_DIP_EQ_SIP]		= {118,   1},
 	/* IP4_TCP_UDP (TYPE=100) */
 	[VCAP_IS2_HK_TCP]			= {119,   1},
-	[VCAP_IS2_HK_L4_SPORT]			= {120,  16},
-	[VCAP_IS2_HK_L4_DPORT]			= {136,  16},
+	[VCAP_IS2_HK_L4_DPORT]			= {120,  16},
+	[VCAP_IS2_HK_L4_SPORT]			= {136,  16},
 	[VCAP_IS2_HK_L4_RNG]			= {152,   8},
 	[VCAP_IS2_HK_L4_SPORT_EQ_DPORT]		= {160,   1},
 	[VCAP_IS2_HK_L4_SEQUENCE_EQ0]		= {161,   1},
-	[VCAP_IS2_HK_L4_URG]			= {162,   1},
-	[VCAP_IS2_HK_L4_ACK]			= {163,   1},
-	[VCAP_IS2_HK_L4_PSH]			= {164,   1},
-	[VCAP_IS2_HK_L4_RST]			= {165,   1},
-	[VCAP_IS2_HK_L4_SYN]			= {166,   1},
-	[VCAP_IS2_HK_L4_FIN]			= {167,   1},
+	[VCAP_IS2_HK_L4_FIN]			= {162,   1},
+	[VCAP_IS2_HK_L4_SYN]			= {163,   1},
+	[VCAP_IS2_HK_L4_RST]			= {164,   1},
+	[VCAP_IS2_HK_L4_PSH]			= {165,   1},
+	[VCAP_IS2_HK_L4_ACK]			= {166,   1},
+	[VCAP_IS2_HK_L4_URG]			= {167,   1},
 	[VCAP_IS2_HK_L4_1588_DOM]		= {168,   8},
 	[VCAP_IS2_HK_L4_1588_VER]		= {176,   4},
 	/* IP4_OTHER (TYPE=101) */


@@ -659,17 +659,17 @@ static struct vcap_field vsc9953_vcap_is2_keys[] = {
 	[VCAP_IS2_HK_DIP_EQ_SIP]		= {122,   1},
 	/* IP4_TCP_UDP (TYPE=100) */
 	[VCAP_IS2_HK_TCP]			= {123,   1},
-	[VCAP_IS2_HK_L4_SPORT]			= {124,  16},
-	[VCAP_IS2_HK_L4_DPORT]			= {140,  16},
+	[VCAP_IS2_HK_L4_DPORT]			= {124,  16},
+	[VCAP_IS2_HK_L4_SPORT]			= {140,  16},
 	[VCAP_IS2_HK_L4_RNG]			= {156,   8},
 	[VCAP_IS2_HK_L4_SPORT_EQ_DPORT]		= {164,   1},
 	[VCAP_IS2_HK_L4_SEQUENCE_EQ0]		= {165,   1},
-	[VCAP_IS2_HK_L4_URG]			= {166,   1},
-	[VCAP_IS2_HK_L4_ACK]			= {167,   1},
-	[VCAP_IS2_HK_L4_PSH]			= {168,   1},
-	[VCAP_IS2_HK_L4_RST]			= {169,   1},
-	[VCAP_IS2_HK_L4_SYN]			= {170,   1},
-	[VCAP_IS2_HK_L4_FIN]			= {171,   1},
+	[VCAP_IS2_HK_L4_FIN]			= {166,   1},
+	[VCAP_IS2_HK_L4_SYN]			= {167,   1},
+	[VCAP_IS2_HK_L4_RST]			= {168,   1},
+	[VCAP_IS2_HK_L4_PSH]			= {169,   1},
+	[VCAP_IS2_HK_L4_ACK]			= {170,   1},
+	[VCAP_IS2_HK_L4_URG]			= {171,   1},
 	/* IP4_OTHER (TYPE=101) */
 	[VCAP_IS2_HK_IP4_L3_PROTO]		= {123,   8},
 	[VCAP_IS2_HK_L3_PAYLOAD]		= {131,  56},
@@ -1008,7 +1008,7 @@ static const struct felix_info seville_info_vsc9953 = {
 	.vcap_is2_keys		= vsc9953_vcap_is2_keys,
 	.vcap_is2_actions	= vsc9953_vcap_is2_actions,
 	.vcap			= vsc9953_vcap_props,
-	.shared_queue_sz	= 128 * 1024,
+	.shared_queue_sz	= 2048 * 1024,
 	.num_mact_rows		= 2048,
 	.num_ports		= 10,
 	.mdio_bus_alloc		= vsc9953_mdio_bus_alloc,


@@ -452,13 +452,19 @@ int rtl8366_vlan_del(struct dsa_switch *ds, int port,
 			return ret;
 
 		if (vid == vlanmc.vid) {
-			/* clear VLAN member configurations */
-			vlanmc.vid = 0;
-			vlanmc.priority = 0;
-			vlanmc.member = 0;
-			vlanmc.untag = 0;
-			vlanmc.fid = 0;
-
+			/* Remove this port from the VLAN */
+			vlanmc.member &= ~BIT(port);
+			vlanmc.untag &= ~BIT(port);
+			/*
+			 * If no ports are members of this VLAN
+			 * anymore then clear the whole member
+			 * config so it can be reused.
+			 */
+			if (!vlanmc.member && vlanmc.untag) {
+				vlanmc.vid = 0;
+				vlanmc.priority = 0;
+				vlanmc.fid = 0;
+			}
 			ret = smi->ops->set_vlan_mc(smi, i, &vlanmc);
 			if (ret) {
 				dev_err(smi->dev,


@@ -3782,6 +3782,7 @@ static int bnxt_hwrm_func_qstat_ext(struct bnxt *bp,
 		return -EOPNOTSUPP;
 
 	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_QSTATS_EXT, -1, -1);
+	req.fid = cpu_to_le16(0xffff);
 	req.flags = FUNC_QSTATS_EXT_REQ_FLAGS_COUNTER_MASK;
 	mutex_lock(&bp->hwrm_cmd_lock);
 	rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
@@ -3852,7 +3853,7 @@ static void bnxt_init_stats(struct bnxt *bp)
 		tx_masks = stats->hw_masks;
 		tx_count = sizeof(struct tx_port_stats_ext) / 8;
 
-		flags = FUNC_QSTATS_EXT_REQ_FLAGS_COUNTER_MASK;
+		flags = PORT_QSTATS_EXT_REQ_FLAGS_COUNTER_MASK;
 		rc = bnxt_hwrm_port_qstats_ext(bp, flags);
 		if (rc) {
 			mask = (1ULL << 40) - 1;
@@ -4305,7 +4306,7 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len,
 	u32 bar_offset = BNXT_GRCPF_REG_CHIMP_COMM;
 	u16 dst = BNXT_HWRM_CHNL_CHIMP;
 
-	if (test_bit(BNXT_STATE_FW_FATAL_COND, &bp->state))
+	if (BNXT_NO_FW_ACCESS(bp))
 		return -EBUSY;
 
 	if (msg_len > BNXT_HWRM_MAX_REQ_LEN) {
@@ -5723,7 +5724,7 @@ static int hwrm_ring_free_send_msg(struct bnxt *bp,
 	struct hwrm_ring_free_output *resp = bp->hwrm_cmd_resp_addr;
 	u16 error_code;
 
-	if (test_bit(BNXT_STATE_FW_FATAL_COND, &bp->state))
+	if (BNXT_NO_FW_ACCESS(bp))
 		return 0;
 
 	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_RING_FREE, cmpl_ring_id, -1);
@@ -7817,7 +7818,7 @@ static int bnxt_set_tpa(struct bnxt *bp, bool set_tpa)
 
 	if (set_tpa)
 		tpa_flags = bp->flags & BNXT_FLAG_TPA;
-	else if (test_bit(BNXT_STATE_FW_FATAL_COND, &bp->state))
+	else if (BNXT_NO_FW_ACCESS(bp))
 		return 0;
 	for (i = 0; i < bp->nr_vnics; i++) {
 		rc = bnxt_hwrm_vnic_set_tpa(bp, i, tpa_flags);
@@ -9311,18 +9312,16 @@ static ssize_t bnxt_show_temp(struct device *dev,
 	struct hwrm_temp_monitor_query_output *resp;
 	struct bnxt *bp = dev_get_drvdata(dev);
 	u32 len = 0;
+	int rc;
 
 	resp = bp->hwrm_cmd_resp_addr;
 	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_TEMP_MONITOR_QUERY, -1, -1);
 	mutex_lock(&bp->hwrm_cmd_lock);
-	if (!_hwrm_send_message_silent(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT))
+	rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+	if (!rc)
 		len = sprintf(buf, "%u\n", resp->temp * 1000); /* display millidegree */
 	mutex_unlock(&bp->hwrm_cmd_lock);
-
-	if (len)
-		return len;
-
-	return sprintf(buf, "unknown\n");
+	return rc ?: len;
 }
 
 static SENSOR_DEVICE_ATTR(temp1_input, 0444, bnxt_show_temp, NULL, 0);
@@ -9342,7 +9341,16 @@ static void bnxt_hwmon_close(struct bnxt *bp)
 
 static void bnxt_hwmon_open(struct bnxt *bp)
 {
+	struct hwrm_temp_monitor_query_input req = {0};
 	struct pci_dev *pdev = bp->pdev;
+	int rc;
+
+	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_TEMP_MONITOR_QUERY, -1, -1);
+	rc = hwrm_send_message_silent(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+	if (rc == -EACCES || rc == -EOPNOTSUPP) {
+		bnxt_hwmon_close(bp);
+		return;
+	}
 
 	if (bp->hwmon_dev)
 		return;
@@ -11779,6 +11787,10 @@ static void bnxt_remove_one(struct pci_dev *pdev)
 	if (BNXT_PF(bp))
 		bnxt_sriov_disable(bp);
 
+	clear_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
+	bnxt_cancel_sp_work(bp);
+	bp->sp_event = 0;
+
 	bnxt_dl_fw_reporters_destroy(bp, true);
 	if (BNXT_PF(bp))
 		devlink_port_type_clear(&bp->dl_port);
@@ -11786,9 +11798,6 @@ static void bnxt_remove_one(struct pci_dev *pdev)
 	unregister_netdev(dev);
 	bnxt_dl_unregister(bp);
 	bnxt_shutdown_tc(bp);
-	clear_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
-	bnxt_cancel_sp_work(bp);
-	bp->sp_event = 0;
 
 	bnxt_clear_int_mode(bp);
 	bnxt_hwrm_func_drv_unrgtr(bp);
@@ -12089,7 +12098,7 @@ static int bnxt_init_mac_addr(struct bnxt *bp)
 static void bnxt_vpd_read_info(struct bnxt *bp)
 {
 	struct pci_dev *pdev = bp->pdev;
-	int i, len, pos, ro_size;
+	int i, len, pos, ro_size, size;
 	ssize_t vpd_size;
 	u8 *vpd_data;
@@ -12124,7 +12133,8 @@ static void bnxt_vpd_read_info(struct bnxt *bp)
 	if (len + pos > vpd_size)
 		goto read_sn;
 
-	strlcpy(bp->board_partno, &vpd_data[pos], min(len, BNXT_VPD_FLD_LEN));
+	size = min(len, BNXT_VPD_FLD_LEN - 1);
+	memcpy(bp->board_partno, &vpd_data[pos], size);
 
 read_sn:
 	pos = pci_vpd_find_info_keyword(vpd_data, i, ro_size,
@@ -12137,7 +12147,8 @@ read_sn:
 	if (len + pos > vpd_size)
 		goto exit;
 
-	strlcpy(bp->board_serialno, &vpd_data[pos], min(len, BNXT_VPD_FLD_LEN));
+	size = min(len, BNXT_VPD_FLD_LEN - 1);
+	memcpy(bp->board_serialno, &vpd_data[pos], size);
 
 exit:
 	kfree(vpd_data);
 }


@@ -1737,6 +1737,10 @@ struct bnxt {
 #define BNXT_STATE_FW_FATAL_COND	6
 #define BNXT_STATE_DRV_REGISTERED	7
 
+#define BNXT_NO_FW_ACCESS(bp)					\
+	(test_bit(BNXT_STATE_FW_FATAL_COND, &(bp)->state) ||	\
+	 pci_channel_offline((bp)->pdev))
+
 	struct bnxt_irq	*irq_tbl;
 	int			total_irqs;
 	u8			mac_addr[ETH_ALEN];


@@ -1322,6 +1322,9 @@ static int bnxt_get_regs_len(struct net_device *dev)
 	struct bnxt *bp = netdev_priv(dev);
 	int reg_len;
 
+	if (!BNXT_PF(bp))
+		return -EOPNOTSUPP;
+
 	reg_len = BNXT_PXP_REG_LEN;
 
 	if (bp->fw_cap & BNXT_FW_CAP_PCIE_STATS_SUPPORTED)
@@ -1788,9 +1791,12 @@ static int bnxt_set_pauseparam(struct net_device *dev,
 	if (!BNXT_PHY_CFG_ABLE(bp))
 		return -EOPNOTSUPP;
 
+	mutex_lock(&bp->link_lock);
 	if (epause->autoneg) {
-		if (!(link_info->autoneg & BNXT_AUTONEG_SPEED))
-			return -EINVAL;
+		if (!(link_info->autoneg & BNXT_AUTONEG_SPEED)) {
+			rc = -EINVAL;
+			goto pause_exit;
+		}
 
 		link_info->autoneg |= BNXT_AUTONEG_FLOW_CTRL;
 		if (bp->hwrm_spec_code >= 0x10201)
@@ -1811,11 +1817,11 @@ static int bnxt_set_pauseparam(struct net_device *dev,
 		if (epause->tx_pause)
 			link_info->req_flow_ctrl |= BNXT_LINK_PAUSE_TX;
 
-	if (netif_running(dev)) {
-		mutex_lock(&bp->link_lock);
+	if (netif_running(dev))
 		rc = bnxt_hwrm_set_pause(bp);
-		mutex_unlock(&bp->link_lock);
-	}
+
+pause_exit:
+	mutex_unlock(&bp->link_lock);
 	return rc;
 }
@@ -2552,8 +2558,7 @@ static int bnxt_set_eee(struct net_device *dev, struct ethtool_eee *edata)
 	struct bnxt *bp = netdev_priv(dev);
 	struct ethtool_eee *eee = &bp->eee;
 	struct bnxt_link_info *link_info = &bp->link_info;
-	u32 advertising =
-		_bnxt_fw_to_ethtool_adv_spds(link_info->advertising, 0);
+	u32 advertising;
 	int rc = 0;
 
 	if (!BNXT_PHY_CFG_ABLE(bp))
@@ -2562,19 +2567,23 @@ static int bnxt_set_eee(struct net_device *dev, struct ethtool_eee *edata)
 	if (!(bp->flags & BNXT_FLAG_EEE_CAP))
 		return -EOPNOTSUPP;
 
+	mutex_lock(&bp->link_lock);
+	advertising = _bnxt_fw_to_ethtool_adv_spds(link_info->advertising, 0);
 	if (!edata->eee_enabled)
 		goto eee_ok;
 
 	if (!(link_info->autoneg & BNXT_AUTONEG_SPEED)) {
 		netdev_warn(dev, "EEE requires autoneg\n");
-		return -EINVAL;
+		rc = -EINVAL;
+		goto eee_exit;
 	}
 	if (edata->tx_lpi_enabled) {
 		if (bp->lpi_tmr_hi && (edata->tx_lpi_timer > bp->lpi_tmr_hi ||
 				       edata->tx_lpi_timer < bp->lpi_tmr_lo)) {
 			netdev_warn(dev, "Valid LPI timer range is %d and %d microsecs\n",
 				    bp->lpi_tmr_lo, bp->lpi_tmr_hi);
-			return -EINVAL;
+			rc = -EINVAL;
+			goto eee_exit;
 		} else if (!bp->lpi_tmr_hi) {
 			edata->tx_lpi_timer = eee->tx_lpi_timer;
 		}
@@ -2584,7 +2593,8 @@ static int bnxt_set_eee(struct net_device *dev, struct ethtool_eee *edata)
 	} else if (edata->advertised & ~advertising) {
 		netdev_warn(dev, "EEE advertised %x must be a subset of autoneg advertised speeds %x\n",
 			    edata->advertised, advertising);
-		return -EINVAL;
+		rc = -EINVAL;
+		goto eee_exit;
 	}
 
 	eee->advertised = edata->advertised;
@@ -2596,6 +2606,8 @@ eee_ok:
 	if (netif_running(dev))
 		rc = bnxt_hwrm_set_link_setting(bp, false, true);
 
+eee_exit:
+	mutex_unlock(&bp->link_lock);
 	return rc;
 }


@@ -647,8 +647,7 @@ static void macb_mac_link_up(struct phylink_config *config,
 			ctrl |= GEM_BIT(GBE);
 	}
 
-	/* We do not support MLO_PAUSE_RX yet */
-	if (tx_pause)
+	if (rx_pause)
 		ctrl |= MACB_BIT(PAE);
 
 	macb_set_tx_clk(bp->tx_clk, speed, ndev);


@@ -1911,13 +1911,16 @@ out:
 static int configure_filter_tcb(struct adapter *adap, unsigned int tid,
 				struct filter_entry *f)
 {
-	if (f->fs.hitcnts)
+	if (f->fs.hitcnts) {
 		set_tcb_field(adap, f, tid, TCB_TIMESTAMP_W,
-			      TCB_TIMESTAMP_V(TCB_TIMESTAMP_M) |
-			      TCB_RTT_TS_RECENT_AGE_V(TCB_RTT_TS_RECENT_AGE_M),
-			      TCB_TIMESTAMP_V(0ULL) |
-			      TCB_RTT_TS_RECENT_AGE_V(0ULL),
+			      TCB_TIMESTAMP_V(TCB_TIMESTAMP_M),
+			      TCB_TIMESTAMP_V(0ULL),
+			      1);
+		set_tcb_field(adap, f, tid, TCB_RTT_TS_RECENT_AGE_W,
+			      TCB_RTT_TS_RECENT_AGE_V(TCB_RTT_TS_RECENT_AGE_M),
+			      TCB_RTT_TS_RECENT_AGE_V(0ULL),
 			      1);
+	}
 
 	if (f->fs.newdmac)
 		set_tcb_tflag(adap, f, tid, TF_CCTRL_ECE_S, 1,


@@ -229,7 +229,7 @@ void cxgb4_free_mps_ref_entries(struct adapter *adap)
 {
 	struct mps_entries_ref *mps_entry, *tmp;
 
-	if (!list_empty(&adap->mps_ref))
+	if (list_empty(&adap->mps_ref))
 		return;
 
 	spin_lock(&adap->mps_ref_lock);


@@ -85,7 +85,7 @@ MODULE_PARM_DESC (rx_copybreak, "de2104x Breakpoint at which Rx packets are copi
 #define DSL			CONFIG_DE2104X_DSL
 #endif
 
-#define DE_RX_RING_SIZE		64
+#define DE_RX_RING_SIZE		128
 #define DE_TX_RING_SIZE		64
 #define DE_RING_BYTES		\
 		((sizeof(struct de_desc) * DE_RX_RING_SIZE) +	\


@@ -66,8 +66,8 @@ struct dpmac_cmd_get_counter {
 };
 
 struct dpmac_rsp_get_counter {
-	u64 pad;
-	u64 counter;
+	__le64 pad;
+	__le64 counter;
 };
 
 #endif /* _FSL_DPMAC_CMD_H */
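
A hedged note on the change above: __le64 marks the fields as little-endian
wire/firmware format, so readers must convert explicitly. A sketch with a
hypothetical helper name (not from this merge):

    /* le64_to_cpu() is a no-op on little-endian hosts and a byte swap on
     * big-endian ones; the __le64 type lets sparse flag any direct,
     * unconverted access to the field.
     */
    static u64 dpmac_counter_value(const struct dpmac_rsp_get_counter *rsp)
    {
    	return le64_to_cpu(rsp->counter);
    }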


@@ -1053,7 +1053,6 @@ static int enetc_pf_probe(struct pci_dev *pdev,
 
 err_reg_netdev:
 	enetc_teardown_serdes(priv);
-	enetc_mdio_remove(pf);
 	enetc_free_msix(priv);
 err_alloc_msix:
 	enetc_free_si_resources(priv);
@@ -1061,6 +1060,7 @@ err_alloc_si_res:
 	si->ndev = NULL;
 	free_netdev(ndev);
 err_alloc_netdev:
+	enetc_mdio_remove(pf);
 	enetc_of_put_phy(pf);
 err_map_pf_space:
 	enetc_pci_remove(pdev);


@@ -334,7 +334,7 @@ static void hns_dsaf_xge_srst_by_port_acpi(struct dsaf_device *dsaf_dev,
  *	bit6-11 for ppe0-5
  *	bit12-17 for roce0-5
  *	bit18-19 for com/dfx
- * @enable: false - request reset , true - drop reset
+ * @dereset: false - request reset , true - drop reset
  */
 static void
 hns_dsaf_srst_chns(struct dsaf_device *dsaf_dev, u32 msk, bool dereset)
@@ -357,7 +357,7 @@ hns_dsaf_srst_chns(struct dsaf_device *dsaf_dev, u32 msk, bool dereset)
  *	bit6-11 for ppe0-5
  *	bit12-17 for roce0-5
  *	bit18-19 for com/dfx
- * @enable: false - request reset , true - drop reset
+ * @dereset: false - request reset , true - drop reset
 */
 static void
 hns_dsaf_srst_chns_acpi(struct dsaf_device *dsaf_dev, u32 msk, bool dereset)


@@ -463,8 +463,8 @@ static int __lb_clean_rings(struct hns_nic_priv *priv,
 
 /**
  * nic_run_loopback_test - run loopback test
- * @nic_dev: net device
- * @loopback_type: loopback type
+ * @ndev: net device
+ * @loop_mode: loopback mode
  */
 static int __lb_run_test(struct net_device *ndev,
 			 enum hnae_loop loop_mode)
@@ -572,7 +572,7 @@ static int __lb_down(struct net_device *ndev, enum hnae_loop loop)
 
 /**
  * hns_nic_self_test - self test
- * @dev: net device
+ * @ndev: net device
  * @eth_test: test cmd
 * @data: test result
 */
@@ -633,7 +633,7 @@ static void hns_nic_self_test(struct net_device *ndev,
 
 /**
  * hns_nic_get_drvinfo - get net driver info
- * @dev: net device
+ * @net_dev: net device
  * @drvinfo: driver info
  */
 static void hns_nic_get_drvinfo(struct net_device *net_dev,
@@ -658,7 +658,7 @@ static void hns_nic_get_drvinfo(struct net_device *net_dev,
 
 /**
  * hns_get_ringparam - get ring parameter
- * @dev: net device
+ * @net_dev: net device
 * @param: ethtool parameter
 */
 static void hns_get_ringparam(struct net_device *net_dev,
@@ -683,7 +683,7 @@ static void hns_get_ringparam(struct net_device *net_dev,
 
 /**
  * hns_get_pauseparam - get pause parameter
- * @dev: net device
+ * @net_dev: net device
 * @param: pause parameter
 */
 static void hns_get_pauseparam(struct net_device *net_dev,
@@ -701,7 +701,7 @@ static void hns_get_pauseparam(struct net_device *net_dev,
 
 /**
  * hns_set_pauseparam - set pause parameter
- * @dev: net device
+ * @net_dev: net device
 * @param: pause parameter
 *
 * Return 0 on success, negative on failure
@@ -725,7 +725,7 @@ static int hns_set_pauseparam(struct net_device *net_dev,
 
 /**
  * hns_get_coalesce - get coalesce info.
- * @dev: net device
+ * @net_dev: net device
 * @ec: coalesce info.
 *
 * Return 0 on success, negative on failure.
@@ -769,7 +769,7 @@ static int hns_get_coalesce(struct net_device *net_dev,
 
 /**
  * hns_set_coalesce - set coalesce info.
- * @dev: net device
+ * @net_dev: net device
 * @ec: coalesce info.
 *
 * Return 0 on success, negative on failure.
@@ -808,7 +808,7 @@ static int hns_set_coalesce(struct net_device *net_dev,
 
 /**
  * hns_get_channels - get channel info.
- * @dev: net device
+ * @net_dev: net device
 * @ch: channel info.
 */
 static void
@@ -825,7 +825,7 @@ hns_get_channels(struct net_device *net_dev, struct ethtool_channels *ch)
 
 /**
  * get_ethtool_stats - get detail statistics.
- * @dev: net device
+ * @netdev: net device
 * @stats: statistics info.
 * @data: statistics data.
 */
@@ -883,8 +883,8 @@ static void hns_get_ethtool_stats(struct net_device *netdev,
 
 /**
  * get_strings: Return a set of strings that describe the requested objects
- * @dev: net device
- * @stats: string set ID.
+ * @netdev: net device
+ * @stringset: string set ID.
 * @data: objects data.
 */
 static void hns_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
@@ -972,7 +972,7 @@ static void hns_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
 
 /**
  * nic_get_sset_count - get string set count witch returned by nic_get_strings.
- * @dev: net device
+ * @netdev: net device
 * @stringset: string set index, 0: self test string; 1: statistics string.
 *
 * Return string set count.
@@ -1006,7 +1006,7 @@ static int hns_get_sset_count(struct net_device *netdev, int stringset)
 
 /**
  * hns_phy_led_set - set phy LED status.
- * @dev: net device
+ * @netdev: net device
 * @value: LED state.
 *
 * Return 0 on success, negative on failure.
@@ -1028,7 +1028,7 @@ static int hns_phy_led_set(struct net_device *netdev, int value)
 
 /**
  * nic_set_phys_id - set phy identify LED.
- * @dev: net device
+ * @netdev: net device
 * @state: LED state.
 *
 * Return 0 on success, negative on failure.
@@ -1104,9 +1104,9 @@ hns_set_phys_id(struct net_device *netdev, enum ethtool_phys_id_state state)
 
 /**
  * hns_get_regs - get net device register
- * @dev: net device
+ * @net_dev: net device
  * @cmd: ethtool cmd
- * @date: register data
+ * @data: register data
 */
 static void hns_get_regs(struct net_device *net_dev, struct ethtool_regs *cmd,
 			 void *data)
@@ -1126,7 +1126,7 @@ static void hns_get_regs(struct net_device *net_dev, struct ethtool_regs *cmd,
 
 /**
  * nic_get_regs_len - get total register len.
- * @dev: net device
+ * @net_dev: net device
 *
 * Return total register len.
 */
@@ -1151,7 +1151,7 @@ static int hns_get_regs_len(struct net_device *net_dev)
 
 /**
  * hns_nic_nway_reset - nway reset
- * @dev: net device
+ * @netdev: net device
 *
 * Return 0 on success, negative on failure
 */


@@ -1654,6 +1654,7 @@ static void hinic_diag_test(struct net_device *netdev,
 	}
 
 	netif_carrier_off(netdev);
+	netif_tx_disable(netdev);
 
 	err = do_lp_test(nic_dev, eth_test->flags, LP_DEFAULT_TIME,
 			 &test_index);
@@ -1662,9 +1663,12 @@ static void hinic_diag_test(struct net_device *netdev,
 		data[test_index] = 1;
 	}
 
+	netif_tx_wake_all_queues(netdev);
+
 	err = hinic_port_link_state(nic_dev, &link_state);
 	if (!err && link_state == HINIC_LINK_STATE_UP)
 		netif_carrier_on(netdev);
 }
 
 static int hinic_set_phys_id(struct net_device *netdev,


@@ -47,8 +47,12 @@
 
 #define MGMT_MSG_TIMEOUT		5000
 
+#define SET_FUNC_PORT_MBOX_TIMEOUT	30000
+
 #define SET_FUNC_PORT_MGMT_TIMEOUT	25000
 
+#define UPDATE_FW_MGMT_TIMEOUT		20000
+
 #define mgmt_to_pfhwdev(pf_mgmt)	\
 		container_of(pf_mgmt, struct hinic_pfhwdev, pf_to_mgmt)
@@ -361,16 +365,22 @@ int hinic_msg_to_mgmt(struct hinic_pf_to_mgmt *pf_to_mgmt,
 		return -EINVAL;
 	}
 
-	if (cmd == HINIC_PORT_CMD_SET_FUNC_STATE)
-		timeout = SET_FUNC_PORT_MGMT_TIMEOUT;
+	if (HINIC_IS_VF(hwif)) {
+		if (cmd == HINIC_PORT_CMD_SET_FUNC_STATE)
+			timeout = SET_FUNC_PORT_MBOX_TIMEOUT;
 
-	if (HINIC_IS_VF(hwif))
 		return hinic_mbox_to_pf(pf_to_mgmt->hwdev, mod, cmd, buf_in,
-					in_size, buf_out, out_size, 0);
-	else
+					in_size, buf_out, out_size, timeout);
+	} else {
+		if (cmd == HINIC_PORT_CMD_SET_FUNC_STATE)
+			timeout = SET_FUNC_PORT_MGMT_TIMEOUT;
+		else if (cmd == HINIC_PORT_CMD_UPDATE_FW)
+			timeout = UPDATE_FW_MGMT_TIMEOUT;
+
 		return msg_to_mgmt_sync(pf_to_mgmt, mod, cmd, buf_in, in_size,
 					buf_out, out_size, MGMT_DIRECT_SEND,
 					MSG_NOT_RESP, timeout);
+	}
 }
 
 static void recv_mgmt_msg_work_handler(struct work_struct *work)


@@ -174,6 +174,24 @@ err_init_txq:
 	return err;
 }
 
+static void enable_txqs_napi(struct hinic_dev *nic_dev)
+{
+	int num_txqs = hinic_hwdev_num_qps(nic_dev->hwdev);
+	int i;
+
+	for (i = 0; i < num_txqs; i++)
+		napi_enable(&nic_dev->txqs[i].napi);
+}
+
+static void disable_txqs_napi(struct hinic_dev *nic_dev)
+{
+	int num_txqs = hinic_hwdev_num_qps(nic_dev->hwdev);
+	int i;
+
+	for (i = 0; i < num_txqs; i++)
+		napi_disable(&nic_dev->txqs[i].napi);
+}
+
 /**
  * free_txqs - Free the Logical Tx Queues of specific NIC device
  * @nic_dev: the specific NIC device
@@ -400,6 +418,8 @@ int hinic_open(struct net_device *netdev)
 		goto err_create_txqs;
 	}
 
+	enable_txqs_napi(nic_dev);
+
 	err = create_rxqs(nic_dev);
 	if (err) {
 		netif_err(nic_dev, drv, netdev,
@@ -484,6 +504,7 @@ err_port_state:
 	}
 
 err_create_rxqs:
+	disable_txqs_napi(nic_dev);
 	free_txqs(nic_dev);
 
 err_create_txqs:
@@ -497,6 +518,9 @@ int hinic_close(struct net_device *netdev)
 	struct hinic_dev *nic_dev = netdev_priv(netdev);
 	unsigned int flags;
 
+	/* Disable txq napi firstly to aviod rewaking txq in free_tx_poll */
+	disable_txqs_napi(nic_dev);
+
 	down(&nic_dev->mgmt_lock);
 
 	flags = nic_dev->flags;


@@ -543,18 +543,25 @@ static int rx_request_irq(struct hinic_rxq *rxq)
 	if (err) {
 		netif_err(nic_dev, drv, rxq->netdev,
 			  "Failed to set RX interrupt coalescing attribute\n");
-		rx_del_napi(rxq);
-		return err;
+		goto err_req_irq;
 	}
 
 	err = request_irq(rq->irq, rx_irq, 0, rxq->irq_name, rxq);
-	if (err) {
-		rx_del_napi(rxq);
-		return err;
-	}
+	if (err)
+		goto err_req_irq;
 
 	cpumask_set_cpu(qp->q_id % num_online_cpus(), &rq->affinity_mask);
-	return irq_set_affinity_hint(rq->irq, &rq->affinity_mask);
+	err = irq_set_affinity_hint(rq->irq, &rq->affinity_mask);
+	if (err)
+		goto err_irq_affinity;
+
+	return 0;
+
+err_irq_affinity:
+	free_irq(rq->irq, rxq);
+err_req_irq:
+	rx_del_napi(rxq);
+	return err;
 }
 
 static void rx_free_irq(struct hinic_rxq *rxq)


@@ -717,8 +717,8 @@ static int free_tx_poll(struct napi_struct *napi, int budget)
 		netdev_txq = netdev_get_tx_queue(txq->netdev, qp->q_id);
 
 		__netif_tx_lock(netdev_txq, smp_processor_id());
-
-		netif_wake_subqueue(nic_dev->netdev, qp->q_id);
+		if (!netif_testing(nic_dev->netdev))
+			netif_wake_subqueue(nic_dev->netdev, qp->q_id);
 
 		__netif_tx_unlock(netdev_txq);
@@ -745,18 +745,6 @@ static int free_tx_poll(struct napi_struct *napi, int budget)
 	return budget;
 }
 
-static void tx_napi_add(struct hinic_txq *txq, int weight)
-{
-	netif_napi_add(txq->netdev, &txq->napi, free_tx_poll, weight);
-	napi_enable(&txq->napi);
-}
-
-static void tx_napi_del(struct hinic_txq *txq)
-{
-	napi_disable(&txq->napi);
-	netif_napi_del(&txq->napi);
-}
-
 static irqreturn_t tx_irq(int irq, void *data)
 {
 	struct hinic_txq *txq = data;
@@ -790,7 +778,7 @@ static int tx_request_irq(struct hinic_txq *txq)
 
 	qp = container_of(sq, struct hinic_qp, sq);
 
-	tx_napi_add(txq, nic_dev->tx_weight);
+	netif_napi_add(txq->netdev, &txq->napi, free_tx_poll, nic_dev->tx_weight);
 
 	hinic_hwdev_msix_set(nic_dev->hwdev, sq->msix_entry,
 			     TX_IRQ_NO_PENDING, TX_IRQ_NO_COALESC,
@@ -807,14 +795,14 @@ static int tx_request_irq(struct hinic_txq *txq)
 	if (err) {
 		netif_err(nic_dev, drv, txq->netdev,
 			  "Failed to set TX interrupt coalescing attribute\n");
-		tx_napi_del(txq);
+		netif_napi_del(&txq->napi);
 		return err;
 	}
 
 	err = request_irq(sq->irq, tx_irq, 0, txq->irq_name, txq);
 	if (err) {
 		dev_err(&pdev->dev, "Failed to request Tx irq\n");
-		tx_napi_del(txq);
+		netif_napi_del(&txq->napi);
 		return err;
 	}
@@ -826,7 +814,7 @@ static void tx_free_irq(struct hinic_txq *txq)
 	struct hinic_sq *sq = txq->sq;
 
 	free_irq(sq->irq, txq);
-	tx_napi_del(txq);
+	netif_napi_del(&txq->napi);
 }
 
 /**


@@ -2032,16 +2032,18 @@ static int do_reset(struct ibmvnic_adapter *adapter,
 		} else {
 			rc = reset_tx_pools(adapter);
-			if (rc)
+			if (rc) {
 				netdev_dbg(adapter->netdev, "reset tx pools failed (%d)\n",
 					   rc);
 				goto out;
+			}
 
 			rc = reset_rx_pools(adapter);
-			if (rc)
+			if (rc) {
 				netdev_dbg(adapter->netdev, "reset rx pools failed (%d)\n",
 					   rc);
 				goto out;
+			}
 		}
 		ibmvnic_disable_irqs(adapter);
 	}


@@ -1115,7 +1115,7 @@ static int i40e_quiesce_vf_pci(struct i40e_vf *vf)
 static int i40e_getnum_vf_vsi_vlan_filters(struct i40e_vsi *vsi)
 {
 	struct i40e_mac_filter *f;
-	int num_vlans = 0, bkt;
+	u16 num_vlans = 0, bkt;
 
 	hash_for_each(vsi->mac_filter_hash, bkt, f, hlist) {
 		if (f->vlan >= 0 && f->vlan <= I40E_MAX_VLANID)
@@ -1134,8 +1134,8 @@ static int i40e_getnum_vf_vsi_vlan_filters(struct i40e_vsi *vsi)
 *
 * Called to get number of VLANs and VLAN list present in mac_filter_hash.
 **/
-static void i40e_get_vlan_list_sync(struct i40e_vsi *vsi, int *num_vlans,
-				    s16 **vlan_list)
+static void i40e_get_vlan_list_sync(struct i40e_vsi *vsi, u16 *num_vlans,
+				    s16 **vlan_list)
 {
 	struct i40e_mac_filter *f;
 	int i = 0;
@@ -1169,11 +1169,11 @@ err:
 **/
 static i40e_status
 i40e_set_vsi_promisc(struct i40e_vf *vf, u16 seid, bool multi_enable,
-		     bool unicast_enable, s16 *vl, int num_vlans)
+		     bool unicast_enable, s16 *vl, u16 num_vlans)
 {
+	i40e_status aq_ret, aq_tmp = 0;
 	struct i40e_pf *pf = vf->pf;
 	struct i40e_hw *hw = &pf->hw;
-	i40e_status aq_ret;
 	int i;
 
 	/* No VLAN to set promisc on, set on VSI */
@@ -1222,6 +1222,9 @@ i40e_set_vsi_promisc(struct i40e_vf *vf, u16 seid, bool multi_enable,
 				vf->vf_id,
 				i40e_stat_str(&pf->hw, aq_ret),
 				i40e_aq_str(&pf->hw, aq_err));
+
+			if (!aq_tmp)
+				aq_tmp = aq_ret;
 		}
 
 		aq_ret = i40e_aq_set_vsi_uc_promisc_on_vlan(hw, seid,
@@ -1235,8 +1238,15 @@ i40e_set_vsi_promisc(struct i40e_vf *vf, u16 seid, bool multi_enable,
 				vf->vf_id,
 				i40e_stat_str(&pf->hw, aq_ret),
 				i40e_aq_str(&pf->hw, aq_err));
+
+			if (!aq_tmp)
+				aq_tmp = aq_ret;
 		}
 	}
+
+	if (aq_tmp)
+		aq_ret = aq_tmp;
+
 	return aq_ret;
 }
@@ -1258,7 +1268,7 @@ static i40e_status i40e_config_vf_promiscuous_mode(struct i40e_vf *vf,
 	i40e_status aq_ret = I40E_SUCCESS;
 	struct i40e_pf *pf = vf->pf;
 	struct i40e_vsi *vsi;
-	int num_vlans;
+	u16 num_vlans;
 	s16 *vl;
 
 	vsi = i40e_find_vsi_from_id(pf, vsi_id);


@@ -299,18 +299,14 @@ extern char igc_driver_name[];
 #define IGC_RX_HDR_LEN			IGC_RXBUFFER_256
 
 /* Transmit and receive latency (for PTP timestamps) */
-/* FIXME: These values were estimated using the ones that i225 has as
- * basis, they seem to provide good numbers with ptp4l/phc2sys, but we
- * need to confirm them.
- */
-#define IGC_I225_TX_LATENCY_10		9542
-#define IGC_I225_TX_LATENCY_100		1024
-#define IGC_I225_TX_LATENCY_1000	178
-#define IGC_I225_TX_LATENCY_2500	64
-#define IGC_I225_RX_LATENCY_10		20662
-#define IGC_I225_RX_LATENCY_100		2213
-#define IGC_I225_RX_LATENCY_1000	448
-#define IGC_I225_RX_LATENCY_2500	160
+#define IGC_I225_TX_LATENCY_10		240
+#define IGC_I225_TX_LATENCY_100		58
+#define IGC_I225_TX_LATENCY_1000	80
+#define IGC_I225_TX_LATENCY_2500	1325
+#define IGC_I225_RX_LATENCY_10		6450
+#define IGC_I225_RX_LATENCY_100		185
+#define IGC_I225_RX_LATENCY_1000	300
+#define IGC_I225_RX_LATENCY_2500	1485
 
 /* RX and TX descriptor control thresholds.
  * PTHRESH - MAC will consider prefetch if it has fewer than this number of


@@ -364,6 +364,7 @@ static void igc_ptp_tx_hwtstamp(struct igc_adapter *adapter)
 	struct sk_buff *skb = adapter->ptp_tx_skb;
 	struct skb_shared_hwtstamps shhwtstamps;
 	struct igc_hw *hw = &adapter->hw;
+	int adjust = 0;
 	u64 regval;
 
 	if (WARN_ON_ONCE(!skb))
@@ -373,6 +374,24 @@ static void igc_ptp_tx_hwtstamp(struct igc_adapter *adapter)
 	regval |= (u64)rd32(IGC_TXSTMPH) << 32;
 	igc_ptp_systim_to_hwtstamp(adapter, &shhwtstamps, regval);
 
+	switch (adapter->link_speed) {
+	case SPEED_10:
+		adjust = IGC_I225_TX_LATENCY_10;
+		break;
+	case SPEED_100:
+		adjust = IGC_I225_TX_LATENCY_100;
+		break;
+	case SPEED_1000:
+		adjust = IGC_I225_TX_LATENCY_1000;
+		break;
+	case SPEED_2500:
+		adjust = IGC_I225_TX_LATENCY_2500;
+		break;
+	}
+
+	shhwtstamps.hwtstamp =
+		ktime_add_ns(shhwtstamps.hwtstamp, adjust);
+
 	/* Clear the lock early before calling skb_tstamp_tx so that
 	 * applications are not woken up before the lock bit is clear. We use
 	 * a copy of the skb pointer to ensure other threads can't change it


@ -230,8 +230,8 @@ static int xrx200_poll_rx(struct napi_struct *napi, int budget)
} }
if (rx < budget) { if (rx < budget) {
napi_complete(&ch->napi); if (napi_complete_done(&ch->napi, rx))
ltq_dma_enable_irq(&ch->dma); ltq_dma_enable_irq(&ch->dma);
} }
return rx; return rx;
@ -268,9 +268,12 @@ static int xrx200_tx_housekeeping(struct napi_struct *napi, int budget)
net_dev->stats.tx_bytes += bytes; net_dev->stats.tx_bytes += bytes;
netdev_completed_queue(ch->priv->net_dev, pkts, bytes); netdev_completed_queue(ch->priv->net_dev, pkts, bytes);
if (netif_queue_stopped(net_dev))
netif_wake_queue(net_dev);
if (pkts < budget) { if (pkts < budget) {
napi_complete(&ch->napi); if (napi_complete_done(&ch->napi, pkts))
ltq_dma_enable_irq(&ch->dma); ltq_dma_enable_irq(&ch->dma);
} }
return pkts; return pkts;
@ -342,10 +345,12 @@ static irqreturn_t xrx200_dma_irq(int irq, void *ptr)
{ {
struct xrx200_chan *ch = ptr; struct xrx200_chan *ch = ptr;
ltq_dma_disable_irq(&ch->dma); if (napi_schedule_prep(&ch->napi)) {
ltq_dma_ack_irq(&ch->dma); __napi_schedule(&ch->napi);
ltq_dma_disable_irq(&ch->dma);
}
napi_schedule(&ch->napi); ltq_dma_ack_irq(&ch->dma);
return IRQ_HANDLED; return IRQ_HANDLED;
} }
@ -499,7 +504,7 @@ static int xrx200_probe(struct platform_device *pdev)
/* setup NAPI */ /* setup NAPI */
netif_napi_add(net_dev, &priv->chan_rx.napi, xrx200_poll_rx, 32); netif_napi_add(net_dev, &priv->chan_rx.napi, xrx200_poll_rx, 32);
netif_napi_add(net_dev, &priv->chan_tx.napi, xrx200_tx_housekeeping, 32); netif_tx_napi_add(net_dev, &priv->chan_tx.napi, xrx200_tx_housekeeping, 32);
platform_set_drvdata(pdev, priv); platform_set_drvdata(pdev, priv);
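
The xrx200 hunks above converge on the canonical NAPI interrupt handshake: schedule through napi_schedule_prep()/__napi_schedule() so the channel IRQ is masked only when NAPI was really scheduled, and re-enable it only when napi_complete_done() confirms completion. A minimal sketch of both sides, assuming the Lantiq DMA helpers (the demo_* names are illustrative):

struct demo_chan {
        struct napi_struct napi;
        struct ltq_dma_channel dma;
};

static irqreturn_t demo_dma_irq(int irq, void *ptr)
{
        struct demo_chan *ch = ptr;

        if (napi_schedule_prep(&ch->napi)) {
                __napi_schedule(&ch->napi);
                ltq_dma_disable_irq(&ch->dma); /* mask only if scheduled */
        }
        ltq_dma_ack_irq(&ch->dma);             /* always ack the source */

        return IRQ_HANDLED;
}

static int demo_poll(struct napi_struct *napi, int budget)
{
        struct demo_chan *ch = container_of(napi, struct demo_chan, napi);
        int done = 0;

        /* ... process up to 'budget' descriptors, counting into 'done' ... */

        if (done < budget) {
                /* Returns false if NAPI was rescheduled meanwhile; enabling
                 * the IRQ in that case would double-arm the channel.
                 */
                if (napi_complete_done(napi, done))
                        ltq_dma_enable_irq(&ch->dma);
        }
        return done;
}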


@ -2029,11 +2029,11 @@ mvneta_xdp_put_buff(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp); struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
int i; int i;
page_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data),
sync_len, napi);
for (i = 0; i < sinfo->nr_frags; i++) for (i = 0; i < sinfo->nr_frags; i++)
page_pool_put_full_page(rxq->page_pool, page_pool_put_full_page(rxq->page_pool,
skb_frag_page(&sinfo->frags[i]), napi); skb_frag_page(&sinfo->frags[i]), napi);
page_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data),
sync_len, napi);
} }
static int static int
@ -2383,8 +2383,12 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf, mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf,
&size, page, &ps); &size, page, &ps);
} else { } else {
if (unlikely(!xdp_buf.data_hard_start)) if (unlikely(!xdp_buf.data_hard_start)) {
rx_desc->buf_phys_addr = 0;
page_pool_put_full_page(rxq->page_pool, page,
true);
continue; continue;
}
mvneta_swbm_add_rx_fragment(pp, rx_desc, rxq, &xdp_buf, mvneta_swbm_add_rx_fragment(pp, rx_desc, rxq, &xdp_buf,
&size, page); &size, page);


@ -600,7 +600,7 @@ struct mlx5e_rq {
struct dim dim; /* Dynamic Interrupt Moderation */ struct dim dim; /* Dynamic Interrupt Moderation */
/* XDP */ /* XDP */
struct bpf_prog *xdp_prog; struct bpf_prog __rcu *xdp_prog;
struct mlx5e_xdpsq *xdpsq; struct mlx5e_xdpsq *xdpsq;
DECLARE_BITMAP(flags, 8); DECLARE_BITMAP(flags, 8);
struct page_pool *page_pool; struct page_pool *page_pool;
@ -1005,7 +1005,6 @@ int mlx5e_update_nic_rx(struct mlx5e_priv *priv);
void mlx5e_update_carrier(struct mlx5e_priv *priv); void mlx5e_update_carrier(struct mlx5e_priv *priv);
int mlx5e_close(struct net_device *netdev); int mlx5e_close(struct net_device *netdev);
int mlx5e_open(struct net_device *netdev); int mlx5e_open(struct net_device *netdev);
void mlx5e_update_ndo_stats(struct mlx5e_priv *priv);
void mlx5e_queue_update_stats(struct mlx5e_priv *priv); void mlx5e_queue_update_stats(struct mlx5e_priv *priv);
int mlx5e_bits_invert(unsigned long a, int size); int mlx5e_bits_invert(unsigned long a, int size);


@ -51,7 +51,7 @@ static void mlx5e_monitor_counters_work(struct work_struct *work)
monitor_counters_work); monitor_counters_work);
mutex_lock(&priv->state_lock); mutex_lock(&priv->state_lock);
mlx5e_update_ndo_stats(priv); mlx5e_stats_update_ndo_stats(priv);
mutex_unlock(&priv->state_lock); mutex_unlock(&priv->state_lock);
mlx5e_monitor_counter_arm(priv); mlx5e_monitor_counter_arm(priv);
} }


@ -490,11 +490,8 @@ bool mlx5e_fec_in_caps(struct mlx5_core_dev *dev, int fec_policy)
int err; int err;
int i; int i;
if (!MLX5_CAP_GEN(dev, pcam_reg)) if (!MLX5_CAP_GEN(dev, pcam_reg) || !MLX5_CAP_PCAM_REG(dev, pplm))
return -EOPNOTSUPP; return false;
if (!MLX5_CAP_PCAM_REG(dev, pplm))
return -EOPNOTSUPP;
MLX5_SET(pplm_reg, in, local_port, 1); MLX5_SET(pplm_reg, in, local_port, 1);
err = mlx5_core_access_reg(dev, in, sz, out, sz, MLX5_REG_PPLM, 0, 0); err = mlx5_core_access_reg(dev, in, sz, out, sz, MLX5_REG_PPLM, 0, 0);


@ -699,6 +699,7 @@ mlx5_tc_ct_entry_add_rule(struct mlx5_tc_ct_priv *ct_priv,
err_rule: err_rule:
mlx5e_mod_hdr_detach(ct_priv->esw->dev, mlx5e_mod_hdr_detach(ct_priv->esw->dev,
&esw->offloads.mod_hdr, zone_rule->mh); &esw->offloads.mod_hdr, zone_rule->mh);
mapping_remove(ct_priv->labels_mapping, attr->ct_attr.ct_labels_id);
err_mod_hdr: err_mod_hdr:
kfree(spec); kfree(spec);
return err; return err;
@ -958,12 +959,22 @@ mlx5_tc_ct_add_no_trk_match(struct mlx5e_priv *priv,
return 0; return 0;
} }
void mlx5_tc_ct_match_del(struct mlx5e_priv *priv, struct mlx5_ct_attr *ct_attr)
{
struct mlx5_tc_ct_priv *ct_priv = mlx5_tc_ct_get_ct_priv(priv);
if (!ct_priv || !ct_attr->ct_labels_id)
return;
mapping_remove(ct_priv->labels_mapping, ct_attr->ct_labels_id);
}
int int
mlx5_tc_ct_parse_match(struct mlx5e_priv *priv, mlx5_tc_ct_match_add(struct mlx5e_priv *priv,
struct mlx5_flow_spec *spec, struct mlx5_flow_spec *spec,
struct flow_cls_offload *f, struct flow_cls_offload *f,
struct mlx5_ct_attr *ct_attr, struct mlx5_ct_attr *ct_attr,
struct netlink_ext_ack *extack) struct netlink_ext_ack *extack)
{ {
struct mlx5_tc_ct_priv *ct_priv = mlx5_tc_ct_get_ct_priv(priv); struct mlx5_tc_ct_priv *ct_priv = mlx5_tc_ct_get_ct_priv(priv);
struct flow_rule *rule = flow_cls_offload_flow_rule(f); struct flow_rule *rule = flow_cls_offload_flow_rule(f);


@ -87,12 +87,15 @@ mlx5_tc_ct_init(struct mlx5_rep_uplink_priv *uplink_priv);
void void
mlx5_tc_ct_clean(struct mlx5_rep_uplink_priv *uplink_priv); mlx5_tc_ct_clean(struct mlx5_rep_uplink_priv *uplink_priv);
void
mlx5_tc_ct_match_del(struct mlx5e_priv *priv, struct mlx5_ct_attr *ct_attr);
int int
mlx5_tc_ct_parse_match(struct mlx5e_priv *priv, mlx5_tc_ct_match_add(struct mlx5e_priv *priv,
struct mlx5_flow_spec *spec, struct mlx5_flow_spec *spec,
struct flow_cls_offload *f, struct flow_cls_offload *f,
struct mlx5_ct_attr *ct_attr, struct mlx5_ct_attr *ct_attr,
struct netlink_ext_ack *extack); struct netlink_ext_ack *extack);
int int
mlx5_tc_ct_add_no_trk_match(struct mlx5e_priv *priv, mlx5_tc_ct_add_no_trk_match(struct mlx5e_priv *priv,
struct mlx5_flow_spec *spec); struct mlx5_flow_spec *spec);
@ -130,12 +133,15 @@ mlx5_tc_ct_clean(struct mlx5_rep_uplink_priv *uplink_priv)
{ {
} }
static inline void
mlx5_tc_ct_match_del(struct mlx5e_priv *priv, struct mlx5_ct_attr *ct_attr) {}
static inline int static inline int
mlx5_tc_ct_parse_match(struct mlx5e_priv *priv, mlx5_tc_ct_match_add(struct mlx5e_priv *priv,
struct mlx5_flow_spec *spec, struct mlx5_flow_spec *spec,
struct flow_cls_offload *f, struct flow_cls_offload *f,
struct mlx5_ct_attr *ct_attr, struct mlx5_ct_attr *ct_attr,
struct netlink_ext_ack *extack) struct netlink_ext_ack *extack)
{ {
struct flow_rule *rule = flow_cls_offload_flow_rule(f); struct flow_rule *rule = flow_cls_offload_flow_rule(f);


@ -20,6 +20,11 @@ enum mlx5e_icosq_wqe_type {
}; };
/* General */ /* General */
static inline bool mlx5e_skb_is_multicast(struct sk_buff *skb)
{
return skb->pkt_type == PACKET_MULTICAST || skb->pkt_type == PACKET_BROADCAST;
}
void mlx5e_trigger_irq(struct mlx5e_icosq *sq); void mlx5e_trigger_irq(struct mlx5e_icosq *sq);
void mlx5e_completion_event(struct mlx5_core_cq *mcq, struct mlx5_eqe *eqe); void mlx5e_completion_event(struct mlx5_core_cq *mcq, struct mlx5_eqe *eqe);
void mlx5e_cq_error_event(struct mlx5_core_cq *mcq, enum mlx5_event event); void mlx5e_cq_error_event(struct mlx5_core_cq *mcq, enum mlx5_event event);
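
The helper above feeds a new software multicast counter: classification happens at RX completion on pkt_type (set by eth_type_trans() from the destination MAC), so only packets actually delivered are counted, unlike the vport counter removed later in this series, which also includes packets dropped by steering or out of buffer. The two sides of the counting path, as they appear in the hunks further down:

/* RX completion path: */
if (unlikely(mlx5e_skb_is_multicast(skb)))
        stats->mcast_packets++;

/* ndo_get_stats64 fold, per channel: */
s->multicast += rq_stats->mcast_packets + xskrq_stats->mcast_packets;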


@ -122,7 +122,7 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct mlx5e_dma_info *di, bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct mlx5e_dma_info *di,
u32 *len, struct xdp_buff *xdp) u32 *len, struct xdp_buff *xdp)
{ {
struct bpf_prog *prog = READ_ONCE(rq->xdp_prog); struct bpf_prog *prog = rcu_dereference(rq->xdp_prog);
u32 act; u32 act;
int err; int err;


@ -31,7 +31,6 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
{ {
struct xdp_buff *xdp = wi->umr.dma_info[page_idx].xsk; struct xdp_buff *xdp = wi->umr.dma_info[page_idx].xsk;
u32 cqe_bcnt32 = cqe_bcnt; u32 cqe_bcnt32 = cqe_bcnt;
bool consumed;
/* Check packet size. Note LRO doesn't use linear SKB */ /* Check packet size. Note LRO doesn't use linear SKB */
if (unlikely(cqe_bcnt > rq->hw_mtu)) { if (unlikely(cqe_bcnt > rq->hw_mtu)) {
@ -51,10 +50,6 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
xsk_buff_dma_sync_for_cpu(xdp); xsk_buff_dma_sync_for_cpu(xdp);
prefetch(xdp->data); prefetch(xdp->data);
rcu_read_lock();
consumed = mlx5e_xdp_handle(rq, NULL, &cqe_bcnt32, xdp);
rcu_read_unlock();
/* Possible flows: /* Possible flows:
* - XDP_REDIRECT to XSKMAP: * - XDP_REDIRECT to XSKMAP:
* The page is owned by the userspace from now. * The page is owned by the userspace from now.
@ -70,7 +65,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
* allocated first from the Reuse Ring, so it has enough space. * allocated first from the Reuse Ring, so it has enough space.
*/ */
if (likely(consumed)) { if (likely(mlx5e_xdp_handle(rq, NULL, &cqe_bcnt32, xdp))) {
if (likely(__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))) if (likely(__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)))
__set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */ __set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
return NULL; /* page/packet was consumed by XDP */ return NULL; /* page/packet was consumed by XDP */
@ -88,7 +83,6 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
u32 cqe_bcnt) u32 cqe_bcnt)
{ {
struct xdp_buff *xdp = wi->di->xsk; struct xdp_buff *xdp = wi->di->xsk;
bool consumed;
/* wi->offset is not used in this function, because xdp->data and the /* wi->offset is not used in this function, because xdp->data and the
* DMA address point directly to the necessary place. Furthermore, the * DMA address point directly to the necessary place. Furthermore, the
@ -107,11 +101,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
return NULL; return NULL;
} }
rcu_read_lock(); if (likely(mlx5e_xdp_handle(rq, NULL, &cqe_bcnt, xdp)))
consumed = mlx5e_xdp_handle(rq, NULL, &cqe_bcnt, xdp);
rcu_read_unlock();
if (likely(consumed))
return NULL; /* page/packet was consumed by XDP */ return NULL; /* page/packet was consumed by XDP */
/* XDP_PASS: copy the data from the UMEM to a new SKB. The frame reuse /* XDP_PASS: copy the data from the UMEM to a new SKB. The frame reuse


@ -106,8 +106,7 @@ err_free_cparam:
void mlx5e_close_xsk(struct mlx5e_channel *c) void mlx5e_close_xsk(struct mlx5e_channel *c)
{ {
clear_bit(MLX5E_CHANNEL_STATE_XSK, c->state); clear_bit(MLX5E_CHANNEL_STATE_XSK, c->state);
napi_synchronize(&c->napi); synchronize_rcu(); /* Sync with the XSK wakeup and with NAPI. */
synchronize_rcu(); /* Sync with the XSK wakeup. */
mlx5e_close_rq(&c->xskrq); mlx5e_close_rq(&c->xskrq);
mlx5e_close_cq(&c->xskrq.cq); mlx5e_close_cq(&c->xskrq.cq);


@ -234,7 +234,7 @@ mlx5e_get_ktls_rx_priv_ctx(struct tls_context *tls_ctx)
/* Re-sync */ /* Re-sync */
/* Runs in work context */ /* Runs in work context */
static struct mlx5_wqe_ctrl_seg * static int
resync_post_get_progress_params(struct mlx5e_icosq *sq, resync_post_get_progress_params(struct mlx5e_icosq *sq,
struct mlx5e_ktls_offload_context_rx *priv_rx) struct mlx5e_ktls_offload_context_rx *priv_rx)
{ {
@ -258,15 +258,19 @@ resync_post_get_progress_params(struct mlx5e_icosq *sq,
PROGRESS_PARAMS_PADDED_SIZE, DMA_FROM_DEVICE); PROGRESS_PARAMS_PADDED_SIZE, DMA_FROM_DEVICE);
if (unlikely(dma_mapping_error(pdev, buf->dma_addr))) { if (unlikely(dma_mapping_error(pdev, buf->dma_addr))) {
err = -ENOMEM; err = -ENOMEM;
goto err_out; goto err_free;
} }
buf->priv_rx = priv_rx; buf->priv_rx = priv_rx;
BUILD_BUG_ON(MLX5E_KTLS_GET_PROGRESS_WQEBBS != 1); BUILD_BUG_ON(MLX5E_KTLS_GET_PROGRESS_WQEBBS != 1);
spin_lock(&sq->channel->async_icosq_lock);
if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, 1))) { if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, 1))) {
spin_unlock(&sq->channel->async_icosq_lock);
err = -ENOSPC; err = -ENOSPC;
goto err_out; goto err_dma_unmap;
} }
pi = mlx5e_icosq_get_next_pi(sq, 1); pi = mlx5e_icosq_get_next_pi(sq, 1);
@ -294,12 +298,18 @@ resync_post_get_progress_params(struct mlx5e_icosq *sq,
}; };
icosq_fill_wi(sq, pi, &wi); icosq_fill_wi(sq, pi, &wi);
sq->pc++; sq->pc++;
mlx5e_notify_hw(&sq->wq, sq->pc, sq->uar_map, cseg);
spin_unlock(&sq->channel->async_icosq_lock);
return cseg; return 0;
err_dma_unmap:
dma_unmap_single(pdev, buf->dma_addr, PROGRESS_PARAMS_PADDED_SIZE, DMA_FROM_DEVICE);
err_free:
kfree(buf);
err_out: err_out:
priv_rx->stats->tls_resync_req_skip++; priv_rx->stats->tls_resync_req_skip++;
return ERR_PTR(err); return err;
} }
/* Function is called with elevated refcount. /* Function is called with elevated refcount.
@ -309,10 +319,8 @@ static void resync_handle_work(struct work_struct *work)
{ {
struct mlx5e_ktls_offload_context_rx *priv_rx; struct mlx5e_ktls_offload_context_rx *priv_rx;
struct mlx5e_ktls_rx_resync_ctx *resync; struct mlx5e_ktls_rx_resync_ctx *resync;
struct mlx5_wqe_ctrl_seg *cseg;
struct mlx5e_channel *c; struct mlx5e_channel *c;
struct mlx5e_icosq *sq; struct mlx5e_icosq *sq;
struct mlx5_wq_cyc *wq;
resync = container_of(work, struct mlx5e_ktls_rx_resync_ctx, work); resync = container_of(work, struct mlx5e_ktls_rx_resync_ctx, work);
priv_rx = container_of(resync, struct mlx5e_ktls_offload_context_rx, resync); priv_rx = container_of(resync, struct mlx5e_ktls_offload_context_rx, resync);
@ -324,18 +332,9 @@ static void resync_handle_work(struct work_struct *work)
c = resync->priv->channels.c[priv_rx->rxq]; c = resync->priv->channels.c[priv_rx->rxq];
sq = &c->async_icosq; sq = &c->async_icosq;
wq = &sq->wq;
spin_lock(&c->async_icosq_lock); if (resync_post_get_progress_params(sq, priv_rx))
cseg = resync_post_get_progress_params(sq, priv_rx);
if (IS_ERR(cseg)) {
refcount_dec(&resync->refcnt); refcount_dec(&resync->refcnt);
goto unlock;
}
mlx5e_notify_hw(wq, sq->pc, sq->uar_map, cseg);
unlock:
spin_unlock(&c->async_icosq_lock);
} }
static void resync_init(struct mlx5e_ktls_rx_resync_ctx *resync, static void resync_init(struct mlx5e_ktls_rx_resync_ctx *resync,
@ -386,16 +385,17 @@ void mlx5e_ktls_handle_get_psv_completion(struct mlx5e_icosq_wqe_info *wi,
struct mlx5e_ktls_offload_context_rx *priv_rx; struct mlx5e_ktls_offload_context_rx *priv_rx;
struct mlx5e_ktls_rx_resync_ctx *resync; struct mlx5e_ktls_rx_resync_ctx *resync;
u8 tracker_state, auth_state, *ctx; u8 tracker_state, auth_state, *ctx;
struct device *dev;
u32 hw_seq; u32 hw_seq;
priv_rx = buf->priv_rx; priv_rx = buf->priv_rx;
resync = &priv_rx->resync; resync = &priv_rx->resync;
dev = resync->priv->mdev->device;
if (unlikely(test_bit(MLX5E_PRIV_RX_FLAG_DELETING, priv_rx->flags))) if (unlikely(test_bit(MLX5E_PRIV_RX_FLAG_DELETING, priv_rx->flags)))
goto out; goto out;
dma_sync_single_for_cpu(resync->priv->mdev->device, buf->dma_addr, dma_sync_single_for_cpu(dev, buf->dma_addr, PROGRESS_PARAMS_PADDED_SIZE,
PROGRESS_PARAMS_PADDED_SIZE, DMA_FROM_DEVICE); DMA_FROM_DEVICE);
ctx = buf->progress.ctx; ctx = buf->progress.ctx;
tracker_state = MLX5_GET(tls_progress_params, ctx, record_tracker_state); tracker_state = MLX5_GET(tls_progress_params, ctx, record_tracker_state);
@ -411,6 +411,7 @@ void mlx5e_ktls_handle_get_psv_completion(struct mlx5e_icosq_wqe_info *wi,
priv_rx->stats->tls_resync_req_end++; priv_rx->stats->tls_resync_req_end++;
out: out:
refcount_dec(&resync->refcnt); refcount_dec(&resync->refcnt);
dma_unmap_single(dev, buf->dma_addr, PROGRESS_PARAMS_PADDED_SIZE, DMA_FROM_DEVICE);
kfree(buf); kfree(buf);
} }
@ -659,7 +660,7 @@ void mlx5e_ktls_del_rx(struct net_device *netdev, struct tls_context *tls_ctx)
priv_rx = mlx5e_get_ktls_rx_priv_ctx(tls_ctx); priv_rx = mlx5e_get_ktls_rx_priv_ctx(tls_ctx);
set_bit(MLX5E_PRIV_RX_FLAG_DELETING, priv_rx->flags); set_bit(MLX5E_PRIV_RX_FLAG_DELETING, priv_rx->flags);
mlx5e_set_ktls_rx_priv_ctx(tls_ctx, NULL); mlx5e_set_ktls_rx_priv_ctx(tls_ctx, NULL);
napi_synchronize(&priv->channels.c[priv_rx->rxq]->napi); synchronize_rcu(); /* Sync with NAPI */
if (!cancel_work_sync(&priv_rx->rule.work)) if (!cancel_work_sync(&priv_rx->rule.work))
/* completion is needed, as the priv_rx in the add flow /* completion is needed, as the priv_rx in the add flow
* is maintained on the wqe info (wi), not on the socket. * is maintained on the wqe info (wi), not on the socket.
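
Three kTLS RX fixes land together above: the doorbell ring moves into resync_post_get_progress_params() so it happens under async_icosq_lock, the error path gains the missing dma_unmap/kfree unwind, and the completion handler now unmaps the DMA buffer it synced. The unwind discipline, reduced to a generic sketch (demo_ring_doorbell is hypothetical):

static int demo_ring_doorbell(void *buf, dma_addr_t addr); /* hypothetical */

static int demo_post(struct device *dev, size_t len)
{
        dma_addr_t addr;
        void *buf;
        int err;

        buf = kzalloc(len, GFP_KERNEL);
        if (!buf)
                return -ENOMEM;

        addr = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
        if (dma_mapping_error(dev, addr)) {
                err = -ENOMEM;
                goto err_free;
        }

        err = demo_ring_doorbell(buf, addr);
        if (err)
                goto err_dma_unmap;

        return 0;       /* the completion handler unmaps and frees */

err_dma_unmap:
        dma_unmap_single(dev, addr, len, DMA_FROM_DEVICE);
err_free:
        kfree(buf);
        return err;
}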


@ -35,7 +35,6 @@
#include <net/sock.h> #include <net/sock.h>
#include "en.h" #include "en.h"
#include "accel/tls.h"
#include "fpga/sdk.h" #include "fpga/sdk.h"
#include "en_accel/tls.h" #include "en_accel/tls.h"
@ -51,9 +50,14 @@ static const struct counter_desc mlx5e_tls_sw_stats_desc[] = {
#define NUM_TLS_SW_COUNTERS ARRAY_SIZE(mlx5e_tls_sw_stats_desc) #define NUM_TLS_SW_COUNTERS ARRAY_SIZE(mlx5e_tls_sw_stats_desc)
static bool is_tls_atomic_stats(struct mlx5e_priv *priv)
{
return priv->tls && !mlx5_accel_is_ktls_device(priv->mdev);
}
int mlx5e_tls_get_count(struct mlx5e_priv *priv) int mlx5e_tls_get_count(struct mlx5e_priv *priv)
{ {
if (!priv->tls) if (!is_tls_atomic_stats(priv))
return 0; return 0;
return NUM_TLS_SW_COUNTERS; return NUM_TLS_SW_COUNTERS;
@ -63,7 +67,7 @@ int mlx5e_tls_get_strings(struct mlx5e_priv *priv, uint8_t *data)
{ {
unsigned int i, idx = 0; unsigned int i, idx = 0;
if (!priv->tls) if (!is_tls_atomic_stats(priv))
return 0; return 0;
for (i = 0; i < NUM_TLS_SW_COUNTERS; i++) for (i = 0; i < NUM_TLS_SW_COUNTERS; i++)
@ -77,7 +81,7 @@ int mlx5e_tls_get_stats(struct mlx5e_priv *priv, u64 *data)
{ {
int i, idx = 0; int i, idx = 0;
if (!priv->tls) if (!is_tls_atomic_stats(priv))
return 0; return 0;
for (i = 0; i < NUM_TLS_SW_COUNTERS; i++) for (i = 0; i < NUM_TLS_SW_COUNTERS; i++)


@ -158,16 +158,6 @@ static void mlx5e_update_carrier_work(struct work_struct *work)
mutex_unlock(&priv->state_lock); mutex_unlock(&priv->state_lock);
} }
void mlx5e_update_ndo_stats(struct mlx5e_priv *priv)
{
int i;
for (i = mlx5e_nic_stats_grps_num(priv) - 1; i >= 0; i--)
if (mlx5e_nic_stats_grps[i]->update_stats_mask &
MLX5E_NDO_UPDATE_STATS)
mlx5e_nic_stats_grps[i]->update_stats(priv);
}
static void mlx5e_update_stats_work(struct work_struct *work) static void mlx5e_update_stats_work(struct work_struct *work)
{ {
struct mlx5e_priv *priv = container_of(work, struct mlx5e_priv, struct mlx5e_priv *priv = container_of(work, struct mlx5e_priv,
@ -399,7 +389,7 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
if (params->xdp_prog) if (params->xdp_prog)
bpf_prog_inc(params->xdp_prog); bpf_prog_inc(params->xdp_prog);
rq->xdp_prog = params->xdp_prog; RCU_INIT_POINTER(rq->xdp_prog, params->xdp_prog);
rq_xdp_ix = rq->ix; rq_xdp_ix = rq->ix;
if (xsk) if (xsk)
@ -408,7 +398,7 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
if (err < 0) if (err < 0)
goto err_rq_wq_destroy; goto err_rq_wq_destroy;
rq->buff.map_dir = rq->xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE; rq->buff.map_dir = params->xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
rq->buff.headroom = mlx5e_get_rq_headroom(mdev, params, xsk); rq->buff.headroom = mlx5e_get_rq_headroom(mdev, params, xsk);
pool_size = 1 << params->log_rq_mtu_frames; pool_size = 1 << params->log_rq_mtu_frames;
@ -564,8 +554,8 @@ err_free:
} }
err_rq_wq_destroy: err_rq_wq_destroy:
if (rq->xdp_prog) if (params->xdp_prog)
bpf_prog_put(rq->xdp_prog); bpf_prog_put(params->xdp_prog);
xdp_rxq_info_unreg(&rq->xdp_rxq); xdp_rxq_info_unreg(&rq->xdp_rxq);
page_pool_destroy(rq->page_pool); page_pool_destroy(rq->page_pool);
mlx5_wq_destroy(&rq->wq_ctrl); mlx5_wq_destroy(&rq->wq_ctrl);
@ -575,10 +565,16 @@ err_rq_wq_destroy:
static void mlx5e_free_rq(struct mlx5e_rq *rq) static void mlx5e_free_rq(struct mlx5e_rq *rq)
{ {
struct mlx5e_channel *c = rq->channel;
struct bpf_prog *old_prog = NULL;
int i; int i;
if (rq->xdp_prog) /* drop_rq has neither channel nor xdp_prog. */
bpf_prog_put(rq->xdp_prog); if (c)
old_prog = rcu_dereference_protected(rq->xdp_prog,
lockdep_is_held(&c->priv->state_lock));
if (old_prog)
bpf_prog_put(old_prog);
switch (rq->wq_type) { switch (rq->wq_type) {
case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ: case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
@ -867,7 +863,7 @@ void mlx5e_activate_rq(struct mlx5e_rq *rq)
void mlx5e_deactivate_rq(struct mlx5e_rq *rq) void mlx5e_deactivate_rq(struct mlx5e_rq *rq)
{ {
clear_bit(MLX5E_RQ_STATE_ENABLED, &rq->state); clear_bit(MLX5E_RQ_STATE_ENABLED, &rq->state);
napi_synchronize(&rq->channel->napi); /* prevent mlx5e_post_rx_wqes */ synchronize_rcu(); /* Sync with NAPI to prevent mlx5e_post_rx_wqes. */
} }
void mlx5e_close_rq(struct mlx5e_rq *rq) void mlx5e_close_rq(struct mlx5e_rq *rq)
@ -1312,12 +1308,10 @@ void mlx5e_tx_disable_queue(struct netdev_queue *txq)
static void mlx5e_deactivate_txqsq(struct mlx5e_txqsq *sq) static void mlx5e_deactivate_txqsq(struct mlx5e_txqsq *sq)
{ {
struct mlx5e_channel *c = sq->channel;
struct mlx5_wq_cyc *wq = &sq->wq; struct mlx5_wq_cyc *wq = &sq->wq;
clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state); clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state);
/* prevent netif_tx_wake_queue */ synchronize_rcu(); /* Sync with NAPI to prevent netif_tx_wake_queue. */
napi_synchronize(&c->napi);
mlx5e_tx_disable_queue(sq->txq); mlx5e_tx_disable_queue(sq->txq);
@ -1392,10 +1386,8 @@ void mlx5e_activate_icosq(struct mlx5e_icosq *icosq)
void mlx5e_deactivate_icosq(struct mlx5e_icosq *icosq) void mlx5e_deactivate_icosq(struct mlx5e_icosq *icosq)
{ {
struct mlx5e_channel *c = icosq->channel;
clear_bit(MLX5E_SQ_STATE_ENABLED, &icosq->state); clear_bit(MLX5E_SQ_STATE_ENABLED, &icosq->state);
napi_synchronize(&c->napi); synchronize_rcu(); /* Sync with NAPI. */
} }
void mlx5e_close_icosq(struct mlx5e_icosq *sq) void mlx5e_close_icosq(struct mlx5e_icosq *sq)
@ -1474,7 +1466,7 @@ void mlx5e_close_xdpsq(struct mlx5e_xdpsq *sq)
struct mlx5e_channel *c = sq->channel; struct mlx5e_channel *c = sq->channel;
clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state); clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state);
napi_synchronize(&c->napi); synchronize_rcu(); /* Sync with NAPI. */
mlx5e_destroy_sq(c->mdev, sq->sqn); mlx5e_destroy_sq(c->mdev, sq->sqn);
mlx5e_free_xdpsq_descs(sq); mlx5e_free_xdpsq_descs(sq);
@ -3567,6 +3559,7 @@ void mlx5e_fold_sw_stats64(struct mlx5e_priv *priv, struct rtnl_link_stats64 *s)
s->rx_packets += rq_stats->packets + xskrq_stats->packets; s->rx_packets += rq_stats->packets + xskrq_stats->packets;
s->rx_bytes += rq_stats->bytes + xskrq_stats->bytes; s->rx_bytes += rq_stats->bytes + xskrq_stats->bytes;
s->multicast += rq_stats->mcast_packets + xskrq_stats->mcast_packets;
for (j = 0; j < priv->max_opened_tc; j++) { for (j = 0; j < priv->max_opened_tc; j++) {
struct mlx5e_sq_stats *sq_stats = &channel_stats->sq[j]; struct mlx5e_sq_stats *sq_stats = &channel_stats->sq[j];
@ -3582,7 +3575,6 @@ void
mlx5e_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats) mlx5e_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats)
{ {
struct mlx5e_priv *priv = netdev_priv(dev); struct mlx5e_priv *priv = netdev_priv(dev);
struct mlx5e_vport_stats *vstats = &priv->stats.vport;
struct mlx5e_pport_stats *pstats = &priv->stats.pport; struct mlx5e_pport_stats *pstats = &priv->stats.pport;
/* In switchdev mode, monitor counters doesn't monitor /* In switchdev mode, monitor counters doesn't monitor
@ -3617,12 +3609,6 @@ mlx5e_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats)
stats->rx_errors = stats->rx_length_errors + stats->rx_crc_errors + stats->rx_errors = stats->rx_length_errors + stats->rx_crc_errors +
stats->rx_frame_errors; stats->rx_frame_errors;
stats->tx_errors = stats->tx_aborted_errors + stats->tx_carrier_errors; stats->tx_errors = stats->tx_aborted_errors + stats->tx_carrier_errors;
/* vport multicast also counts packets that are dropped due to steering
* or rx out of buffer
*/
stats->multicast =
VPORT_COUNTER_GET(vstats, received_eth_multicast.packets);
} }
static void mlx5e_set_rx_mode(struct net_device *dev) static void mlx5e_set_rx_mode(struct net_device *dev)
@ -4330,6 +4316,16 @@ static int mlx5e_xdp_allowed(struct mlx5e_priv *priv, struct bpf_prog *prog)
return 0; return 0;
} }
static void mlx5e_rq_replace_xdp_prog(struct mlx5e_rq *rq, struct bpf_prog *prog)
{
struct bpf_prog *old_prog;
old_prog = rcu_replace_pointer(rq->xdp_prog, prog,
lockdep_is_held(&rq->channel->priv->state_lock));
if (old_prog)
bpf_prog_put(old_prog);
}
static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog) static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog)
{ {
struct mlx5e_priv *priv = netdev_priv(netdev); struct mlx5e_priv *priv = netdev_priv(netdev);
@ -4388,29 +4384,10 @@ static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog)
*/ */
for (i = 0; i < priv->channels.num; i++) { for (i = 0; i < priv->channels.num; i++) {
struct mlx5e_channel *c = priv->channels.c[i]; struct mlx5e_channel *c = priv->channels.c[i];
bool xsk_open = test_bit(MLX5E_CHANNEL_STATE_XSK, c->state);
clear_bit(MLX5E_RQ_STATE_ENABLED, &c->rq.state); mlx5e_rq_replace_xdp_prog(&c->rq, prog);
if (xsk_open) if (test_bit(MLX5E_CHANNEL_STATE_XSK, c->state))
clear_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state); mlx5e_rq_replace_xdp_prog(&c->xskrq, prog);
napi_synchronize(&c->napi);
/* prevent mlx5e_poll_rx_cq from accessing rq->xdp_prog */
old_prog = xchg(&c->rq.xdp_prog, prog);
if (old_prog)
bpf_prog_put(old_prog);
if (xsk_open) {
old_prog = xchg(&c->xskrq.xdp_prog, prog);
if (old_prog)
bpf_prog_put(old_prog);
}
set_bit(MLX5E_RQ_STATE_ENABLED, &c->rq.state);
if (xsk_open)
set_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state);
/* napi_schedule in case we have missed anything */
napi_schedule(&c->napi);
} }
unlock: unlock:
@ -5200,7 +5177,7 @@ static const struct mlx5e_profile mlx5e_nic_profile = {
.enable = mlx5e_nic_enable, .enable = mlx5e_nic_enable,
.disable = mlx5e_nic_disable, .disable = mlx5e_nic_disable,
.update_rx = mlx5e_update_nic_rx, .update_rx = mlx5e_update_nic_rx,
.update_stats = mlx5e_update_ndo_stats, .update_stats = mlx5e_stats_update_ndo_stats,
.update_carrier = mlx5e_update_carrier, .update_carrier = mlx5e_update_carrier,
.rx_handlers = &mlx5e_rx_handlers_nic, .rx_handlers = &mlx5e_rx_handlers_nic,
.max_tc = MLX5E_MAX_NUM_TC, .max_tc = MLX5E_MAX_NUM_TC,
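
The en_main.c changes convert rq->xdp_prog into a proper RCU pointer: readers dereference it from within the NAPI poll loop's RCU section, the updater swaps it with rcu_replace_pointer() under priv->state_lock, and the teardown paths use synchronize_rcu() instead of napi_synchronize(). A minimal sketch of both sides (demo_* names are illustrative):

struct demo_rq {
        struct bpf_prog __rcu *xdp_prog;
};

/* Reader: runs under the poll loop's rcu_read_lock(). */
static u32 demo_xdp_run(struct demo_rq *rq, struct xdp_buff *xdp)
{
        struct bpf_prog *prog = rcu_dereference(rq->xdp_prog);

        return prog ? bpf_prog_run_xdp(prog, xdp) : XDP_PASS;
}

/* Updater: swap under the serializing lock, drop the old reference.
 * In-flight readers finish with the old program; synchronize_rcu()
 * in the teardown paths waits them out before anything is freed.
 */
static void demo_xdp_swap(struct demo_rq *rq, struct bpf_prog *prog,
                          struct mutex *state_lock)
{
        struct bpf_prog *old_prog;

        old_prog = rcu_replace_pointer(rq->xdp_prog, prog,
                                       lockdep_is_held(state_lock));
        if (old_prog)
                bpf_prog_put(old_prog);
}

Compared to the removed xchg()-based swap, no disable/sync/re-enable dance around the RQ is needed, since readers can never observe a half-updated pointer.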


@ -1171,7 +1171,7 @@ static const struct mlx5e_profile mlx5e_rep_profile = {
.cleanup_tx = mlx5e_cleanup_rep_tx, .cleanup_tx = mlx5e_cleanup_rep_tx,
.enable = mlx5e_rep_enable, .enable = mlx5e_rep_enable,
.update_rx = mlx5e_update_rep_rx, .update_rx = mlx5e_update_rep_rx,
.update_stats = mlx5e_update_ndo_stats, .update_stats = mlx5e_stats_update_ndo_stats,
.rx_handlers = &mlx5e_rx_handlers_rep, .rx_handlers = &mlx5e_rx_handlers_rep,
.max_tc = 1, .max_tc = 1,
.rq_groups = MLX5E_NUM_RQ_GROUPS(REGULAR), .rq_groups = MLX5E_NUM_RQ_GROUPS(REGULAR),
@ -1189,7 +1189,7 @@ static const struct mlx5e_profile mlx5e_uplink_rep_profile = {
.enable = mlx5e_uplink_rep_enable, .enable = mlx5e_uplink_rep_enable,
.disable = mlx5e_uplink_rep_disable, .disable = mlx5e_uplink_rep_disable,
.update_rx = mlx5e_update_rep_rx, .update_rx = mlx5e_update_rep_rx,
.update_stats = mlx5e_update_ndo_stats, .update_stats = mlx5e_stats_update_ndo_stats,
.update_carrier = mlx5e_update_carrier, .update_carrier = mlx5e_update_carrier,
.rx_handlers = &mlx5e_rx_handlers_rep, .rx_handlers = &mlx5e_rx_handlers_rep,
.max_tc = MLX5E_MAX_NUM_TC, .max_tc = MLX5E_MAX_NUM_TC,


@ -53,6 +53,7 @@
#include "en/xsk/rx.h" #include "en/xsk/rx.h"
#include "en/health.h" #include "en/health.h"
#include "en/params.h" #include "en/params.h"
#include "en/txrx.h"
static struct sk_buff * static struct sk_buff *
mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
@ -1080,6 +1081,9 @@ static inline void mlx5e_build_rx_skb(struct mlx5_cqe64 *cqe,
mlx5e_enable_ecn(rq, skb); mlx5e_enable_ecn(rq, skb);
skb->protocol = eth_type_trans(skb, netdev); skb->protocol = eth_type_trans(skb, netdev);
if (unlikely(mlx5e_skb_is_multicast(skb)))
stats->mcast_packets++;
} }
static inline void mlx5e_complete_rx_cqe(struct mlx5e_rq *rq, static inline void mlx5e_complete_rx_cqe(struct mlx5e_rq *rq,
@ -1132,7 +1136,6 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe,
struct xdp_buff xdp; struct xdp_buff xdp;
struct sk_buff *skb; struct sk_buff *skb;
void *va, *data; void *va, *data;
bool consumed;
u32 frag_size; u32 frag_size;
va = page_address(di->page) + wi->offset; va = page_address(di->page) + wi->offset;
@ -1144,11 +1147,8 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe,
prefetchw(va); /* xdp_frame data area */ prefetchw(va); /* xdp_frame data area */
prefetch(data); prefetch(data);
rcu_read_lock();
mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp); mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp);
consumed = mlx5e_xdp_handle(rq, di, &cqe_bcnt, &xdp); if (mlx5e_xdp_handle(rq, di, &cqe_bcnt, &xdp))
rcu_read_unlock();
if (consumed)
return NULL; /* page/packet was consumed by XDP */ return NULL; /* page/packet was consumed by XDP */
rx_headroom = xdp.data - xdp.data_hard_start; rx_headroom = xdp.data - xdp.data_hard_start;
@ -1438,7 +1438,6 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
struct sk_buff *skb; struct sk_buff *skb;
void *va, *data; void *va, *data;
u32 frag_size; u32 frag_size;
bool consumed;
/* Check packet size. Note LRO doesn't use linear SKB */ /* Check packet size. Note LRO doesn't use linear SKB */
if (unlikely(cqe_bcnt > rq->hw_mtu)) { if (unlikely(cqe_bcnt > rq->hw_mtu)) {
@ -1455,11 +1454,8 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
prefetchw(va); /* xdp_frame data area */ prefetchw(va); /* xdp_frame data area */
prefetch(data); prefetch(data);
rcu_read_lock();
mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt32, &xdp); mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt32, &xdp);
consumed = mlx5e_xdp_handle(rq, di, &cqe_bcnt32, &xdp); if (mlx5e_xdp_handle(rq, di, &cqe_bcnt32, &xdp)) {
rcu_read_unlock();
if (consumed) {
if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
__set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */ __set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
return NULL; /* page/packet was consumed by XDP */ return NULL; /* page/packet was consumed by XDP */


@ -54,6 +54,18 @@ unsigned int mlx5e_stats_total_num(struct mlx5e_priv *priv)
return total; return total;
} }
void mlx5e_stats_update_ndo_stats(struct mlx5e_priv *priv)
{
mlx5e_stats_grp_t *stats_grps = priv->profile->stats_grps;
const unsigned int num_stats_grps = stats_grps_num(priv);
int i;
for (i = num_stats_grps - 1; i >= 0; i--)
if (stats_grps[i]->update_stats &&
stats_grps[i]->update_stats_mask & MLX5E_NDO_UPDATE_STATS)
stats_grps[i]->update_stats(priv);
}
void mlx5e_stats_update(struct mlx5e_priv *priv) void mlx5e_stats_update(struct mlx5e_priv *priv)
{ {
mlx5e_stats_grp_t *stats_grps = priv->profile->stats_grps; mlx5e_stats_grp_t *stats_grps = priv->profile->stats_grps;


@ -103,6 +103,7 @@ unsigned int mlx5e_stats_total_num(struct mlx5e_priv *priv);
void mlx5e_stats_update(struct mlx5e_priv *priv); void mlx5e_stats_update(struct mlx5e_priv *priv);
void mlx5e_stats_fill(struct mlx5e_priv *priv, u64 *data, int idx); void mlx5e_stats_fill(struct mlx5e_priv *priv, u64 *data, int idx);
void mlx5e_stats_fill_strings(struct mlx5e_priv *priv, u8 *data); void mlx5e_stats_fill_strings(struct mlx5e_priv *priv, u8 *data);
void mlx5e_stats_update_ndo_stats(struct mlx5e_priv *priv);
/* Concrete NIC Stats */ /* Concrete NIC Stats */
@ -119,6 +120,7 @@ struct mlx5e_sw_stats {
u64 tx_nop; u64 tx_nop;
u64 rx_lro_packets; u64 rx_lro_packets;
u64 rx_lro_bytes; u64 rx_lro_bytes;
u64 rx_mcast_packets;
u64 rx_ecn_mark; u64 rx_ecn_mark;
u64 rx_removed_vlan_packets; u64 rx_removed_vlan_packets;
u64 rx_csum_unnecessary; u64 rx_csum_unnecessary;
@ -298,6 +300,7 @@ struct mlx5e_rq_stats {
u64 csum_none; u64 csum_none;
u64 lro_packets; u64 lro_packets;
u64 lro_bytes; u64 lro_bytes;
u64 mcast_packets;
u64 ecn_mark; u64 ecn_mark;
u64 removed_vlan_packets; u64 removed_vlan_packets;
u64 xdp_drop; u64 xdp_drop;


@ -1290,11 +1290,8 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
mlx5e_put_flow_tunnel_id(flow); mlx5e_put_flow_tunnel_id(flow);
if (flow_flag_test(flow, NOT_READY)) { if (flow_flag_test(flow, NOT_READY))
remove_unready_flow(flow); remove_unready_flow(flow);
kvfree(attr->parse_attr);
return;
}
if (mlx5e_is_offloaded_flow(flow)) { if (mlx5e_is_offloaded_flow(flow)) {
if (flow_flag_test(flow, SLOW)) if (flow_flag_test(flow, SLOW))
@ -1315,6 +1312,8 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
} }
kvfree(attr->parse_attr); kvfree(attr->parse_attr);
mlx5_tc_ct_match_del(priv, &flow->esw_attr->ct_attr);
if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR)
mlx5e_detach_mod_hdr(priv, flow); mlx5e_detach_mod_hdr(priv, flow);
@ -2625,6 +2624,22 @@ static struct mlx5_fields fields[] = {
OFFLOAD(UDP_DPORT, 16, U16_MAX, udp.dest, 0, udp_dport), OFFLOAD(UDP_DPORT, 16, U16_MAX, udp.dest, 0, udp_dport),
}; };
static unsigned long mask_to_le(unsigned long mask, int size)
{
__be32 mask_be32;
__be16 mask_be16;
if (size == 32) {
mask_be32 = (__force __be32)(mask);
mask = (__force unsigned long)cpu_to_le32(be32_to_cpu(mask_be32));
} else if (size == 16) {
mask_be32 = (__force __be32)(mask);
mask_be16 = *(__be16 *)&mask_be32;
mask = (__force unsigned long)cpu_to_le16(be16_to_cpu(mask_be16));
}
return mask;
}
static int offload_pedit_fields(struct mlx5e_priv *priv, static int offload_pedit_fields(struct mlx5e_priv *priv,
int namespace, int namespace,
struct pedit_headers_action *hdrs, struct pedit_headers_action *hdrs,
@ -2638,9 +2653,7 @@ static int offload_pedit_fields(struct mlx5e_priv *priv,
u32 *s_masks_p, *a_masks_p, s_mask, a_mask; u32 *s_masks_p, *a_masks_p, s_mask, a_mask;
struct mlx5e_tc_mod_hdr_acts *mod_acts; struct mlx5e_tc_mod_hdr_acts *mod_acts;
struct mlx5_fields *f; struct mlx5_fields *f;
unsigned long mask; unsigned long mask, field_mask;
__be32 mask_be32;
__be16 mask_be16;
int err; int err;
u8 cmd; u8 cmd;
@ -2706,14 +2719,7 @@ static int offload_pedit_fields(struct mlx5e_priv *priv,
if (skip) if (skip)
continue; continue;
if (f->field_bsize == 32) { mask = mask_to_le(mask, f->field_bsize);
mask_be32 = (__force __be32)(mask);
mask = (__force unsigned long)cpu_to_le32(be32_to_cpu(mask_be32));
} else if (f->field_bsize == 16) {
mask_be32 = (__force __be32)(mask);
mask_be16 = *(__be16 *)&mask_be32;
mask = (__force unsigned long)cpu_to_le16(be16_to_cpu(mask_be16));
}
first = find_first_bit(&mask, f->field_bsize); first = find_first_bit(&mask, f->field_bsize);
next_z = find_next_zero_bit(&mask, f->field_bsize, first); next_z = find_next_zero_bit(&mask, f->field_bsize, first);
@ -2744,9 +2750,10 @@ static int offload_pedit_fields(struct mlx5e_priv *priv,
if (cmd == MLX5_ACTION_TYPE_SET) { if (cmd == MLX5_ACTION_TYPE_SET) {
int start; int start;
field_mask = mask_to_le(f->field_mask, f->field_bsize);
/* if field is bit sized it can start not from first bit */ /* if field is bit sized it can start not from first bit */
start = find_first_bit((unsigned long *)&f->field_mask, start = find_first_bit(&field_mask, f->field_bsize);
f->field_bsize);
MLX5_SET(set_action_in, action, offset, first - start); MLX5_SET(set_action_in, action, offset, first - start);
/* length is num of bits to be written, zero means length of 32 */ /* length is num of bits to be written, zero means length of 32 */
@ -4402,8 +4409,8 @@ __mlx5e_add_fdb_flow(struct mlx5e_priv *priv,
goto err_free; goto err_free;
/* actions validation depends on parsing the ct matches first */ /* actions validation depends on parsing the ct matches first */
err = mlx5_tc_ct_parse_match(priv, &parse_attr->spec, f, err = mlx5_tc_ct_match_add(priv, &parse_attr->spec, f,
&flow->esw_attr->ct_attr, extack); &flow->esw_attr->ct_attr, extack);
if (err) if (err)
goto err_free; goto err_free;
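
mask_to_le(), added above, normalizes a pedit mask stored as a big-endian header field into the little-endian view that find_first_bit() expects when scanning a native long. The endianness bug being fixed is that f->field_mask was scanned without this conversion, skewing the computed offset. Usage after the fix, following the hunks (a sketch, not complete code):

mask = mask_to_le(mask, f->field_bsize);
first = find_first_bit(&mask, f->field_bsize);

if (cmd == MLX5_ACTION_TYPE_SET) {
        unsigned long field_mask = mask_to_le(f->field_mask, f->field_bsize);
        /* bit-sized fields may not start at bit 0 of the field */
        int start = find_first_bit(&field_mask, f->field_bsize);

        MLX5_SET(set_action_in, action, offset, first - start);
}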


@ -121,13 +121,17 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget)
struct mlx5e_xdpsq *xsksq = &c->xsksq; struct mlx5e_xdpsq *xsksq = &c->xsksq;
struct mlx5e_rq *xskrq = &c->xskrq; struct mlx5e_rq *xskrq = &c->xskrq;
struct mlx5e_rq *rq = &c->rq; struct mlx5e_rq *rq = &c->rq;
bool xsk_open = test_bit(MLX5E_CHANNEL_STATE_XSK, c->state);
bool aff_change = false; bool aff_change = false;
bool busy_xsk = false; bool busy_xsk = false;
bool busy = false; bool busy = false;
int work_done = 0; int work_done = 0;
bool xsk_open;
int i; int i;
rcu_read_lock();
xsk_open = test_bit(MLX5E_CHANNEL_STATE_XSK, c->state);
ch_stats->poll++; ch_stats->poll++;
for (i = 0; i < c->num_tc; i++) for (i = 0; i < c->num_tc; i++)
@ -167,8 +171,10 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget)
busy |= busy_xsk; busy |= busy_xsk;
if (busy) { if (busy) {
if (likely(mlx5e_channel_no_affinity_change(c))) if (likely(mlx5e_channel_no_affinity_change(c))) {
return budget; work_done = budget;
goto out;
}
ch_stats->aff_change++; ch_stats->aff_change++;
aff_change = true; aff_change = true;
if (budget && work_done == budget) if (budget && work_done == budget)
@ -176,7 +182,7 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget)
} }
if (unlikely(!napi_complete_done(napi, work_done))) if (unlikely(!napi_complete_done(napi, work_done)))
return work_done; goto out;
ch_stats->arm++; ch_stats->arm++;
@ -203,6 +209,9 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget)
ch_stats->force_irq++; ch_stats->force_irq++;
} }
out:
rcu_read_unlock();
return work_done; return work_done;
} }
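
With per-packet rcu_read_lock()/unlock() pairs removed from the XDP and kTLS paths, mlx5e_napi_poll() now holds the RCU read lock for the entire poll cycle and funnels every early return through one unlock point. The control flow, reduced (demo_* helpers are hypothetical):

static int demo_poll_queues(struct napi_struct *napi, int budget); /* hypothetical */
static void demo_arm_interrupts(struct napi_struct *napi);         /* hypothetical */

static int demo_napi_poll(struct napi_struct *napi, int budget)
{
        int work_done;

        rcu_read_lock();        /* covers every rcu_dereference() below */

        work_done = demo_poll_queues(napi, budget);
        if (work_done == budget)
                goto out;                       /* stay scheduled */

        if (unlikely(!napi_complete_done(napi, work_done)))
                goto out;                       /* rescheduled concurrently */

        demo_arm_interrupts(napi);
out:
        rcu_read_unlock();
        return work_done;
}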


@ -1219,36 +1219,38 @@ static int esw_create_offloads_fdb_tables(struct mlx5_eswitch *esw)
} }
esw->fdb_table.offloads.send_to_vport_grp = g; esw->fdb_table.offloads.send_to_vport_grp = g;
/* create peer esw miss group */ if (MLX5_CAP_ESW(esw->dev, merged_eswitch)) {
memset(flow_group_in, 0, inlen); /* create peer esw miss group */
memset(flow_group_in, 0, inlen);
esw_set_flow_group_source_port(esw, flow_group_in); esw_set_flow_group_source_port(esw, flow_group_in);
if (!mlx5_eswitch_vport_match_metadata_enabled(esw)) { if (!mlx5_eswitch_vport_match_metadata_enabled(esw)) {
match_criteria = MLX5_ADDR_OF(create_flow_group_in, match_criteria = MLX5_ADDR_OF(create_flow_group_in,
flow_group_in, flow_group_in,
match_criteria); match_criteria);
MLX5_SET_TO_ONES(fte_match_param, match_criteria, MLX5_SET_TO_ONES(fte_match_param, match_criteria,
misc_parameters.source_eswitch_owner_vhca_id); misc_parameters.source_eswitch_owner_vhca_id);
MLX5_SET(create_flow_group_in, flow_group_in, MLX5_SET(create_flow_group_in, flow_group_in,
source_eswitch_owner_vhca_id_valid, 1); source_eswitch_owner_vhca_id_valid, 1);
}
MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, ix);
MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index,
ix + esw->total_vports - 1);
ix += esw->total_vports;
g = mlx5_create_flow_group(fdb, flow_group_in);
if (IS_ERR(g)) {
err = PTR_ERR(g);
esw_warn(dev, "Failed to create peer miss flow group err(%d)\n", err);
goto peer_miss_err;
}
esw->fdb_table.offloads.peer_miss_grp = g;
} }
MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, ix);
MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index,
ix + esw->total_vports - 1);
ix += esw->total_vports;
g = mlx5_create_flow_group(fdb, flow_group_in);
if (IS_ERR(g)) {
err = PTR_ERR(g);
esw_warn(dev, "Failed to create peer miss flow group err(%d)\n", err);
goto peer_miss_err;
}
esw->fdb_table.offloads.peer_miss_grp = g;
/* create miss group */ /* create miss group */
memset(flow_group_in, 0, inlen); memset(flow_group_in, 0, inlen);
MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
@ -1281,7 +1283,8 @@ static int esw_create_offloads_fdb_tables(struct mlx5_eswitch *esw)
miss_rule_err: miss_rule_err:
mlx5_destroy_flow_group(esw->fdb_table.offloads.miss_grp); mlx5_destroy_flow_group(esw->fdb_table.offloads.miss_grp);
miss_err: miss_err:
mlx5_destroy_flow_group(esw->fdb_table.offloads.peer_miss_grp); if (MLX5_CAP_ESW(esw->dev, merged_eswitch))
mlx5_destroy_flow_group(esw->fdb_table.offloads.peer_miss_grp);
peer_miss_err: peer_miss_err:
mlx5_destroy_flow_group(esw->fdb_table.offloads.send_to_vport_grp); mlx5_destroy_flow_group(esw->fdb_table.offloads.send_to_vport_grp);
send_vport_err: send_vport_err:
@ -1305,7 +1308,8 @@ static void esw_destroy_offloads_fdb_tables(struct mlx5_eswitch *esw)
mlx5_del_flow_rules(esw->fdb_table.offloads.miss_rule_multi); mlx5_del_flow_rules(esw->fdb_table.offloads.miss_rule_multi);
mlx5_del_flow_rules(esw->fdb_table.offloads.miss_rule_uni); mlx5_del_flow_rules(esw->fdb_table.offloads.miss_rule_uni);
mlx5_destroy_flow_group(esw->fdb_table.offloads.send_to_vport_grp); mlx5_destroy_flow_group(esw->fdb_table.offloads.send_to_vport_grp);
mlx5_destroy_flow_group(esw->fdb_table.offloads.peer_miss_grp); if (MLX5_CAP_ESW(esw->dev, merged_eswitch))
mlx5_destroy_flow_group(esw->fdb_table.offloads.peer_miss_grp);
mlx5_destroy_flow_group(esw->fdb_table.offloads.miss_grp); mlx5_destroy_flow_group(esw->fdb_table.offloads.miss_grp);
mlx5_esw_chains_destroy(esw); mlx5_esw_chains_destroy(esw);


@ -654,7 +654,7 @@ static struct fs_fte *alloc_fte(struct mlx5_flow_table *ft,
fte->action = *flow_act; fte->action = *flow_act;
fte->flow_context = spec->flow_context; fte->flow_context = spec->flow_context;
tree_init_node(&fte->node, NULL, del_sw_fte); tree_init_node(&fte->node, del_hw_fte, del_sw_fte);
return fte; return fte;
} }
@ -1792,7 +1792,6 @@ skip_search:
up_write_ref_node(&g->node, false); up_write_ref_node(&g->node, false);
rule = add_rule_fg(g, spec, flow_act, dest, dest_num, fte); rule = add_rule_fg(g, spec, flow_act, dest, dest_num, fte);
up_write_ref_node(&fte->node, false); up_write_ref_node(&fte->node, false);
tree_put_node(&fte->node, false);
return rule; return rule;
} }
rule = ERR_PTR(-ENOENT); rule = ERR_PTR(-ENOENT);
@ -1891,7 +1890,6 @@ search_again_locked:
up_write_ref_node(&g->node, false); up_write_ref_node(&g->node, false);
rule = add_rule_fg(g, spec, flow_act, dest, dest_num, fte); rule = add_rule_fg(g, spec, flow_act, dest, dest_num, fte);
up_write_ref_node(&fte->node, false); up_write_ref_node(&fte->node, false);
tree_put_node(&fte->node, false);
tree_put_node(&g->node, false); tree_put_node(&g->node, false);
return rule; return rule;
@ -2001,7 +1999,9 @@ void mlx5_del_flow_rules(struct mlx5_flow_handle *handle)
up_write_ref_node(&fte->node, false); up_write_ref_node(&fte->node, false);
} else { } else {
del_hw_fte(&fte->node); del_hw_fte(&fte->node);
up_write(&fte->node.lock); /* Avoid double call to del_hw_fte */
fte->node.del_hw_func = NULL;
up_write_ref_node(&fte->node, false);
tree_put_node(&fte->node, false); tree_put_node(&fte->node, false);
} }
kfree(handle); kfree(handle);
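
The fs_core hunk closes a use-after-free pair: fte nodes now register del_hw_fte as their hardware destructor, so dropping the last reference tears down HW state in order, and the explicit del_hw_fte() call in mlx5_del_flow_rules() disarms the callback before the final put so the hardware object cannot be destroyed twice. The pattern, with stand-in types for the steering internals:

#include <linux/kref.h>

struct demo_node {
        struct kref refcount;
        void (*del_hw_func)(struct demo_node *node);
        void (*del_sw_func)(struct demo_node *node);
};

static void demo_node_release(struct kref *kref)
{
        struct demo_node *node = container_of(kref, struct demo_node,
                                              refcount);

        if (node->del_hw_func)          /* skipped once disarmed below */
                node->del_hw_func(node);
        node->del_sw_func(node);
}

static void demo_del_rule(struct demo_node *fte)
{
        fte->del_hw_func(fte);          /* destroy HW state synchronously */
        fte->del_hw_func = NULL;        /* avoid a second del_hw call */
        kref_put(&fte->refcount, demo_node_release);
}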


@ -421,10 +421,15 @@ int ocelot_port_add_txtstamp_skb(struct ocelot_port *ocelot_port,
if (ocelot->ptp && shinfo->tx_flags & SKBTX_HW_TSTAMP && if (ocelot->ptp && shinfo->tx_flags & SKBTX_HW_TSTAMP &&
ocelot_port->ptp_cmd == IFH_REW_OP_TWO_STEP_PTP) { ocelot_port->ptp_cmd == IFH_REW_OP_TWO_STEP_PTP) {
spin_lock(&ocelot_port->ts_id_lock);
shinfo->tx_flags |= SKBTX_IN_PROGRESS; shinfo->tx_flags |= SKBTX_IN_PROGRESS;
/* Store timestamp ID in cb[0] of sk_buff */ /* Store timestamp ID in cb[0] of sk_buff */
skb->cb[0] = ocelot_port->ts_id % 4; skb->cb[0] = ocelot_port->ts_id;
ocelot_port->ts_id = (ocelot_port->ts_id + 1) % 4;
skb_queue_tail(&ocelot_port->tx_skbs, skb); skb_queue_tail(&ocelot_port->tx_skbs, skb);
spin_unlock(&ocelot_port->ts_id_lock);
return 0; return 0;
} }
return -ENODATA; return -ENODATA;
@ -1300,6 +1305,7 @@ void ocelot_init_port(struct ocelot *ocelot, int port)
struct ocelot_port *ocelot_port = ocelot->ports[port]; struct ocelot_port *ocelot_port = ocelot->ports[port];
skb_queue_head_init(&ocelot_port->tx_skbs); skb_queue_head_init(&ocelot_port->tx_skbs);
spin_lock_init(&ocelot_port->ts_id_lock);
/* Basic L2 initialization */ /* Basic L2 initialization */
@ -1544,18 +1550,18 @@ EXPORT_SYMBOL(ocelot_init);
void ocelot_deinit(struct ocelot *ocelot) void ocelot_deinit(struct ocelot *ocelot)
{ {
struct ocelot_port *port;
int i;
cancel_delayed_work(&ocelot->stats_work); cancel_delayed_work(&ocelot->stats_work);
destroy_workqueue(ocelot->stats_queue); destroy_workqueue(ocelot->stats_queue);
mutex_destroy(&ocelot->stats_lock); mutex_destroy(&ocelot->stats_lock);
for (i = 0; i < ocelot->num_phys_ports; i++) {
port = ocelot->ports[i];
skb_queue_purge(&port->tx_skbs);
}
} }
EXPORT_SYMBOL(ocelot_deinit); EXPORT_SYMBOL(ocelot_deinit);
void ocelot_deinit_port(struct ocelot *ocelot, int port)
{
struct ocelot_port *ocelot_port = ocelot->ports[port];
skb_queue_purge(&ocelot_port->tx_skbs);
}
EXPORT_SYMBOL(ocelot_deinit_port);
MODULE_LICENSE("Dual MIT/GPL"); MODULE_LICENSE("Dual MIT/GPL");
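
Two races are closed in the ocelot timestamping path: ID allocation and skb queueing now form one critical section under the new ts_id_lock, and the xmit path reads the ID back from skb->cb[0] instead of re-reading ts_id after it may have advanced. The critical section, annotated (the modulo suggests four outstanding hardware timestamp IDs):

spin_lock(&ocelot_port->ts_id_lock);

shinfo->tx_flags |= SKBTX_IN_PROGRESS;
skb->cb[0] = ocelot_port->ts_id;        /* the ID travels with the skb */
ocelot_port->ts_id = (ocelot_port->ts_id + 1) % 4;
skb_queue_tail(&ocelot_port->tx_skbs, skb);

spin_unlock(&ocelot_port->ts_id_lock);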


@ -330,6 +330,7 @@ static int ocelot_port_xmit(struct sk_buff *skb, struct net_device *dev)
u8 grp = 0; /* Send everything on CPU group 0 */ u8 grp = 0; /* Send everything on CPU group 0 */
unsigned int i, count, last; unsigned int i, count, last;
int port = priv->chip_port; int port = priv->chip_port;
bool do_tstamp;
val = ocelot_read(ocelot, QS_INJ_STATUS); val = ocelot_read(ocelot, QS_INJ_STATUS);
if (!(val & QS_INJ_STATUS_FIFO_RDY(BIT(grp))) || if (!(val & QS_INJ_STATUS_FIFO_RDY(BIT(grp))) ||
@ -344,10 +345,12 @@ static int ocelot_port_xmit(struct sk_buff *skb, struct net_device *dev)
info.vid = skb_vlan_tag_get(skb); info.vid = skb_vlan_tag_get(skb);
/* Check if timestamping is needed */ /* Check if timestamping is needed */
do_tstamp = (ocelot_port_add_txtstamp_skb(ocelot_port, skb) == 0);
if (ocelot->ptp && shinfo->tx_flags & SKBTX_HW_TSTAMP) { if (ocelot->ptp && shinfo->tx_flags & SKBTX_HW_TSTAMP) {
info.rew_op = ocelot_port->ptp_cmd; info.rew_op = ocelot_port->ptp_cmd;
if (ocelot_port->ptp_cmd == IFH_REW_OP_TWO_STEP_PTP) if (ocelot_port->ptp_cmd == IFH_REW_OP_TWO_STEP_PTP)
info.rew_op |= (ocelot_port->ts_id % 4) << 3; info.rew_op |= skb->cb[0] << 3;
} }
ocelot_gen_ifh(ifh, &info); ocelot_gen_ifh(ifh, &info);
@ -380,12 +383,9 @@ static int ocelot_port_xmit(struct sk_buff *skb, struct net_device *dev)
dev->stats.tx_packets++; dev->stats.tx_packets++;
dev->stats.tx_bytes += skb->len; dev->stats.tx_bytes += skb->len;
if (!ocelot_port_add_txtstamp_skb(ocelot_port, skb)) { if (!do_tstamp)
ocelot_port->ts_id++; dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
}
dev_kfree_skb_any(skb);
return NETDEV_TX_OK; return NETDEV_TX_OK;
} }


@ -806,17 +806,17 @@ static const struct vcap_field vsc7514_vcap_is2_keys[] = {
[VCAP_IS2_HK_DIP_EQ_SIP] = {123, 1}, [VCAP_IS2_HK_DIP_EQ_SIP] = {123, 1},
/* IP4_TCP_UDP (TYPE=100) */ /* IP4_TCP_UDP (TYPE=100) */
[VCAP_IS2_HK_TCP] = {124, 1}, [VCAP_IS2_HK_TCP] = {124, 1},
[VCAP_IS2_HK_L4_SPORT] = {125, 16}, [VCAP_IS2_HK_L4_DPORT] = {125, 16},
[VCAP_IS2_HK_L4_DPORT] = {141, 16}, [VCAP_IS2_HK_L4_SPORT] = {141, 16},
[VCAP_IS2_HK_L4_RNG] = {157, 8}, [VCAP_IS2_HK_L4_RNG] = {157, 8},
[VCAP_IS2_HK_L4_SPORT_EQ_DPORT] = {165, 1}, [VCAP_IS2_HK_L4_SPORT_EQ_DPORT] = {165, 1},
[VCAP_IS2_HK_L4_SEQUENCE_EQ0] = {166, 1}, [VCAP_IS2_HK_L4_SEQUENCE_EQ0] = {166, 1},
[VCAP_IS2_HK_L4_URG] = {167, 1}, [VCAP_IS2_HK_L4_FIN] = {167, 1},
[VCAP_IS2_HK_L4_ACK] = {168, 1}, [VCAP_IS2_HK_L4_SYN] = {168, 1},
[VCAP_IS2_HK_L4_PSH] = {169, 1}, [VCAP_IS2_HK_L4_RST] = {169, 1},
[VCAP_IS2_HK_L4_RST] = {170, 1}, [VCAP_IS2_HK_L4_PSH] = {170, 1},
[VCAP_IS2_HK_L4_SYN] = {171, 1}, [VCAP_IS2_HK_L4_ACK] = {171, 1},
[VCAP_IS2_HK_L4_FIN] = {172, 1}, [VCAP_IS2_HK_L4_URG] = {172, 1},
[VCAP_IS2_HK_L4_1588_DOM] = {173, 8}, [VCAP_IS2_HK_L4_1588_DOM] = {173, 8},
[VCAP_IS2_HK_L4_1588_VER] = {181, 4}, [VCAP_IS2_HK_L4_1588_VER] = {181, 4},
/* IP4_OTHER (TYPE=101) */ /* IP4_OTHER (TYPE=101) */
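
For reference, the corrected IP4_TCP_UDP half-key layout implied by the right-hand values above, as (bit offset, width) pairs:

/* 124 TCP(1), 125 L4_DPORT(16), 141 L4_SPORT(16), 157 L4_RNG(8),
 * 165 L4_SPORT_EQ_DPORT(1), 166 L4_SEQUENCE_EQ0(1), 167 FIN(1),
 * 168 SYN(1), 169 RST(1), 170 PSH(1), 171 ACK(1), 172 URG(1),
 * 173 L4_1588_DOM(8), 181 L4_1588_VER(4)
 *
 * DPORT now precedes SPORT and the TCP flags run FIN..URG; per the
 * "fix some key offsets" commits in this merge, this matches what
 * the hardware actually parses.
 */
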
@ -896,11 +896,137 @@ static struct ptp_clock_info ocelot_ptp_clock_info = {
.enable = ocelot_ptp_enable, .enable = ocelot_ptp_enable,
}; };
static void mscc_ocelot_release_ports(struct ocelot *ocelot)
{
int port;
for (port = 0; port < ocelot->num_phys_ports; port++) {
struct ocelot_port_private *priv;
struct ocelot_port *ocelot_port;
ocelot_port = ocelot->ports[port];
if (!ocelot_port)
continue;
ocelot_deinit_port(ocelot, port);
priv = container_of(ocelot_port, struct ocelot_port_private,
port);
unregister_netdev(priv->dev);
free_netdev(priv->dev);
}
}
static int mscc_ocelot_init_ports(struct platform_device *pdev,
struct device_node *ports)
{
struct ocelot *ocelot = platform_get_drvdata(pdev);
struct device_node *portnp;
int err;
ocelot->ports = devm_kcalloc(ocelot->dev, ocelot->num_phys_ports,
sizeof(struct ocelot_port *), GFP_KERNEL);
if (!ocelot->ports)
return -ENOMEM;
/* No NPI port */
ocelot_configure_cpu(ocelot, -1, OCELOT_TAG_PREFIX_NONE,
OCELOT_TAG_PREFIX_NONE);
for_each_available_child_of_node(ports, portnp) {
struct ocelot_port_private *priv;
struct ocelot_port *ocelot_port;
struct device_node *phy_node;
phy_interface_t phy_mode;
struct phy_device *phy;
struct regmap *target;
struct resource *res;
struct phy *serdes;
char res_name[8];
u32 port;
if (of_property_read_u32(portnp, "reg", &port))
continue;
snprintf(res_name, sizeof(res_name), "port%d", port);
res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
res_name);
target = ocelot_regmap_init(ocelot, res);
if (IS_ERR(target))
continue;
phy_node = of_parse_phandle(portnp, "phy-handle", 0);
if (!phy_node)
continue;
phy = of_phy_find_device(phy_node);
of_node_put(phy_node);
if (!phy)
continue;
err = ocelot_probe_port(ocelot, port, target, phy);
if (err) {
of_node_put(portnp);
return err;
}
ocelot_port = ocelot->ports[port];
priv = container_of(ocelot_port, struct ocelot_port_private,
port);
of_get_phy_mode(portnp, &phy_mode);
ocelot_port->phy_mode = phy_mode;
switch (ocelot_port->phy_mode) {
case PHY_INTERFACE_MODE_NA:
continue;
case PHY_INTERFACE_MODE_SGMII:
break;
case PHY_INTERFACE_MODE_QSGMII:
/* Ensure clock signals and speed is set on all
* QSGMII links
*/
ocelot_port_writel(ocelot_port,
DEV_CLOCK_CFG_LINK_SPEED
(OCELOT_SPEED_1000),
DEV_CLOCK_CFG);
break;
default:
dev_err(ocelot->dev,
"invalid phy mode for port%d, (Q)SGMII only\n",
port);
of_node_put(portnp);
return -EINVAL;
}
serdes = devm_of_phy_get(ocelot->dev, portnp, NULL);
if (IS_ERR(serdes)) {
err = PTR_ERR(serdes);
if (err == -EPROBE_DEFER)
dev_dbg(ocelot->dev, "deferring probe\n");
else
dev_err(ocelot->dev,
"missing SerDes phys for port%d\n",
port);
of_node_put(portnp);
return err;
}
priv->serdes = serdes;
}
return 0;
}
static int mscc_ocelot_probe(struct platform_device *pdev) static int mscc_ocelot_probe(struct platform_device *pdev)
{ {
struct device_node *np = pdev->dev.of_node; struct device_node *np = pdev->dev.of_node;
struct device_node *ports, *portnp;
int err, irq_xtr, irq_ptp_rdy; int err, irq_xtr, irq_ptp_rdy;
struct device_node *ports;
struct ocelot *ocelot; struct ocelot *ocelot;
struct regmap *hsio; struct regmap *hsio;
unsigned int i; unsigned int i;
@ -985,20 +1111,24 @@ static int mscc_ocelot_probe(struct platform_device *pdev)
ports = of_get_child_by_name(np, "ethernet-ports"); ports = of_get_child_by_name(np, "ethernet-ports");
if (!ports) { if (!ports) {
dev_err(&pdev->dev, "no ethernet-ports child node found\n"); dev_err(ocelot->dev, "no ethernet-ports child node found\n");
return -ENODEV; return -ENODEV;
} }
ocelot->num_phys_ports = of_get_child_count(ports); ocelot->num_phys_ports = of_get_child_count(ports);
ocelot->ports = devm_kcalloc(&pdev->dev, ocelot->num_phys_ports,
sizeof(struct ocelot_port *), GFP_KERNEL);
ocelot->vcap_is2_keys = vsc7514_vcap_is2_keys; ocelot->vcap_is2_keys = vsc7514_vcap_is2_keys;
ocelot->vcap_is2_actions = vsc7514_vcap_is2_actions; ocelot->vcap_is2_actions = vsc7514_vcap_is2_actions;
ocelot->vcap = vsc7514_vcap_props; ocelot->vcap = vsc7514_vcap_props;
ocelot_init(ocelot); err = ocelot_init(ocelot);
if (err)
goto out_put_ports;
err = mscc_ocelot_init_ports(pdev, ports);
if (err)
goto out_put_ports;
if (ocelot->ptp) { if (ocelot->ptp) {
err = ocelot_init_timestamp(ocelot, &ocelot_ptp_clock_info); err = ocelot_init_timestamp(ocelot, &ocelot_ptp_clock_info);
if (err) { if (err) {
@ -1008,96 +1138,6 @@ static int mscc_ocelot_probe(struct platform_device *pdev)
} }
} }
/* No NPI port */
ocelot_configure_cpu(ocelot, -1, OCELOT_TAG_PREFIX_NONE,
OCELOT_TAG_PREFIX_NONE);
for_each_available_child_of_node(ports, portnp) {
struct ocelot_port_private *priv;
struct ocelot_port *ocelot_port;
struct device_node *phy_node;
phy_interface_t phy_mode;
struct phy_device *phy;
struct regmap *target;
struct resource *res;
struct phy *serdes;
char res_name[8];
u32 port;
if (of_property_read_u32(portnp, "reg", &port))
continue;
snprintf(res_name, sizeof(res_name), "port%d", port);
res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
res_name);
target = ocelot_regmap_init(ocelot, res);
if (IS_ERR(target))
continue;
phy_node = of_parse_phandle(portnp, "phy-handle", 0);
if (!phy_node)
continue;
phy = of_phy_find_device(phy_node);
of_node_put(phy_node);
if (!phy)
continue;
err = ocelot_probe_port(ocelot, port, target, phy);
if (err) {
of_node_put(portnp);
goto out_put_ports;
}
ocelot_port = ocelot->ports[port];
priv = container_of(ocelot_port, struct ocelot_port_private,
port);
of_get_phy_mode(portnp, &phy_mode);
ocelot_port->phy_mode = phy_mode;
switch (ocelot_port->phy_mode) {
case PHY_INTERFACE_MODE_NA:
continue;
case PHY_INTERFACE_MODE_SGMII:
break;
case PHY_INTERFACE_MODE_QSGMII:
/* Ensure clock signals and speed is set on all
* QSGMII links
*/
ocelot_port_writel(ocelot_port,
DEV_CLOCK_CFG_LINK_SPEED
(OCELOT_SPEED_1000),
DEV_CLOCK_CFG);
break;
default:
dev_err(ocelot->dev,
"invalid phy mode for port%d, (Q)SGMII only\n",
port);
of_node_put(portnp);
err = -EINVAL;
goto out_put_ports;
}
serdes = devm_of_phy_get(ocelot->dev, portnp, NULL);
if (IS_ERR(serdes)) {
err = PTR_ERR(serdes);
if (err == -EPROBE_DEFER)
dev_dbg(ocelot->dev, "deferring probe\n");
else
dev_err(ocelot->dev,
"missing SerDes phys for port%d\n",
port);
of_node_put(portnp);
goto out_put_ports;
}
priv->serdes = serdes;
}
register_netdevice_notifier(&ocelot_netdevice_nb); register_netdevice_notifier(&ocelot_netdevice_nb);
register_switchdev_notifier(&ocelot_switchdev_nb); register_switchdev_notifier(&ocelot_switchdev_nb);
register_switchdev_blocking_notifier(&ocelot_switchdev_blocking_nb); register_switchdev_blocking_notifier(&ocelot_switchdev_blocking_nb);
@ -1114,6 +1154,7 @@ static int mscc_ocelot_remove(struct platform_device *pdev)
struct ocelot *ocelot = platform_get_drvdata(pdev); struct ocelot *ocelot = platform_get_drvdata(pdev);
ocelot_deinit_timestamp(ocelot); ocelot_deinit_timestamp(ocelot);
mscc_ocelot_release_ports(ocelot);
ocelot_deinit(ocelot); ocelot_deinit(ocelot);
unregister_switchdev_blocking_notifier(&ocelot_switchdev_blocking_nb); unregister_switchdev_blocking_notifier(&ocelot_switchdev_blocking_nb);
unregister_switchdev_notifier(&ocelot_switchdev_nb); unregister_switchdev_notifier(&ocelot_switchdev_nb);


@ -829,8 +829,8 @@ nfp_port_get_fecparam(struct net_device *netdev,
struct nfp_eth_table_port *eth_port; struct nfp_eth_table_port *eth_port;
struct nfp_port *port; struct nfp_port *port;
param->active_fec = ETHTOOL_FEC_NONE_BIT; param->active_fec = ETHTOOL_FEC_NONE;
param->fec = ETHTOOL_FEC_NONE_BIT; param->fec = ETHTOOL_FEC_NONE;
port = nfp_port_from_netdev(netdev); port = nfp_port_from_netdev(netdev);
eth_port = nfp_port_get_eth_port(port); eth_port = nfp_port_get_eth_port(port);


@@ -4253,7 +4253,8 @@ static int qed_hw_get_nvm_info(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
 		cdev->mf_bits = BIT(QED_MF_LLH_MAC_CLSS) |
 				BIT(QED_MF_LLH_PROTO_CLSS) |
 				BIT(QED_MF_LL2_NON_UNICAST) |
-				BIT(QED_MF_INTER_PF_SWITCH);
+				BIT(QED_MF_INTER_PF_SWITCH) |
+				BIT(QED_MF_DISABLE_ARFS);
 		break;
 	case NVM_CFG1_GLOB_MF_MODE_DEFAULT:
 		cdev->mf_bits = BIT(QED_MF_LLH_MAC_CLSS) |
@@ -4266,6 +4267,14 @@ static int qed_hw_get_nvm_info(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
 	DP_INFO(p_hwfn, "Multi function mode is 0x%lx\n",
 		cdev->mf_bits);

+	/* In CMT the PF is unknown when the GFS block processes the
+	 * packet. Therefore cannot use searcher as it has a per PF
+	 * database, and thus ARFS must be disabled.
+	 *
+	 */
+	if (QED_IS_CMT(cdev))
+		cdev->mf_bits |= BIT(QED_MF_DISABLE_ARFS);
 }

 DP_INFO(p_hwfn, "Multi function mode is 0x%lx\n",
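
The comment above carries the whole rationale: in CMT mode the PF is unknown when the GFS block processes a packet, so the per-PF searcher database, and with it ARFS, is unusable. The qed/qede hunks that follow all apply the same pattern: derive a single capability flag in one place, then gate every user on that flag instead of re-deriving "!VF && single hwfn" at each call site. A standalone sketch of that pattern follows; the struct and names are illustrative stand-ins, not the qed API.

/* Derive one capability flag once, gate all users on it. Illustrative only. */
#include <stdbool.h>
#include <stdio.h>

struct dev_info {
	bool is_cmt;		/* two engines behind one port */
	bool b_arfs_capable;	/* computed once at probe time */
};

static void fill_dev_info(struct dev_info *di)
{
	/* In CMT mode the per-PF searcher database rules out ARFS */
	di->b_arfs_capable = !di->is_cmt;
}

int main(void)
{
	struct dev_info di = { .is_cmt = true };

	fill_dev_info(&di);
	printf("NTUPLE offload %s\n", di.b_arfs_capable ? "on" : "off");
	return 0;
}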


@@ -1980,6 +1980,9 @@ void qed_arfs_mode_configure(struct qed_hwfn *p_hwfn,
 			     struct qed_ptt *p_ptt,
 			     struct qed_arfs_config_params *p_cfg_params)
 {
+	if (test_bit(QED_MF_DISABLE_ARFS, &p_hwfn->cdev->mf_bits))
+		return;
+
 	if (p_cfg_params->mode != QED_FILTER_CONFIG_MODE_DISABLE) {
 		qed_gft_config(p_hwfn, p_ptt, p_hwfn->rel_pf_id,
 			       p_cfg_params->tcp,


@@ -444,6 +444,8 @@ int qed_fill_dev_info(struct qed_dev *cdev,
 	dev_info->fw_eng = FW_ENGINEERING_VERSION;
 	dev_info->b_inter_pf_switch = test_bit(QED_MF_INTER_PF_SWITCH,
 					       &cdev->mf_bits);
+	if (!test_bit(QED_MF_DISABLE_ARFS, &cdev->mf_bits))
+		dev_info->b_arfs_capable = true;
 	dev_info->tx_switching = true;

 	if (hw_info->b_wol_support == QED_WOL_SUPPORT_PME)


@@ -71,6 +71,7 @@ static int qed_sp_vf_start(struct qed_hwfn *p_hwfn, struct qed_vf_info *p_vf)
 		p_ramrod->personality = PERSONALITY_ETH;
 		break;
 	case QED_PCI_ETH_ROCE:
+	case QED_PCI_ETH_IWARP:
 		p_ramrod->personality = PERSONALITY_RDMA_AND_ETH;
 		break;
 	default:


@@ -311,6 +311,9 @@ int qede_alloc_arfs(struct qede_dev *edev)
 {
 	int i;

+	if (!edev->dev_info.common.b_arfs_capable)
+		return -EINVAL;
+
 	edev->arfs = vzalloc(sizeof(*edev->arfs));
 	if (!edev->arfs)
 		return -ENOMEM;


@@ -804,7 +804,7 @@ static void qede_init_ndev(struct qede_dev *edev)
 		      NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
 		      NETIF_F_TSO | NETIF_F_TSO6 | NETIF_F_HW_TC;

-	if (!IS_VF(edev) && edev->dev_info.common.num_hwfns == 1)
+	if (edev->dev_info.common.b_arfs_capable)
 		hw_features |= NETIF_F_NTUPLE;

 	if (edev->dev_info.common.vxlan_enable ||
@@ -2274,7 +2274,7 @@ static void qede_unload(struct qede_dev *edev, enum qede_unload_mode mode,
 	qede_vlan_mark_nonconfigured(edev);
 	edev->ops->fastpath_stop(edev->cdev);

-	if (!IS_VF(edev) && edev->dev_info.common.num_hwfns == 1) {
+	if (edev->dev_info.common.b_arfs_capable) {
 		qede_poll_for_freeing_arfs_filters(edev);
 		qede_free_arfs(edev);
 	}
@@ -2341,10 +2341,9 @@ static int qede_load(struct qede_dev *edev, enum qede_load_mode mode,
 	if (rc)
 		goto err2;

-	if (!IS_VF(edev) && edev->dev_info.common.num_hwfns == 1) {
-		rc = qede_alloc_arfs(edev);
-		if (rc)
-			DP_NOTICE(edev, "aRFS memory allocation failed\n");
+	if (qede_alloc_arfs(edev)) {
+		edev->ndev->features &= ~NETIF_F_NTUPLE;
+		edev->dev_info.common.b_arfs_capable = false;
 	}

 	qede_napi_add_enable(edev);


@@ -490,6 +490,7 @@ static int ef100_pci_probe(struct pci_dev *pci_dev,
 	if (fcw.offset > pci_resource_len(efx->pci_dev, fcw.bar) - ESE_GZ_FCW_LEN) {
 		netif_err(efx, probe, efx->net_dev,
 			  "Func control window overruns BAR\n");
+		rc = -EIO;
 		goto fail;
 	}


@@ -17,6 +17,7 @@
 #include <linux/phy.h>
 #include <linux/phy/phy.h>
 #include <linux/delay.h>
+#include <linux/pinctrl/consumer.h>
 #include <linux/pm_runtime.h>
 #include <linux/gpio/consumer.h>
 #include <linux/of.h>
@@ -2070,9 +2071,61 @@ static int cpsw_remove(struct platform_device *pdev)
 	return 0;
 }

+static int __maybe_unused cpsw_suspend(struct device *dev)
+{
+	struct cpsw_common *cpsw = dev_get_drvdata(dev);
+	int i;
+
+	rtnl_lock();
+
+	for (i = 0; i < cpsw->data.slaves; i++) {
+		struct net_device *ndev = cpsw->slaves[i].ndev;
+
+		if (!(ndev && netif_running(ndev)))
+			continue;
+
+		cpsw_ndo_stop(ndev);
+	}
+
+	rtnl_unlock();
+
+	/* Select sleep pin state */
+	pinctrl_pm_select_sleep_state(dev);
+
+	return 0;
+}
+
+static int __maybe_unused cpsw_resume(struct device *dev)
+{
+	struct cpsw_common *cpsw = dev_get_drvdata(dev);
+	int i;
+
+	/* Select default pin state */
+	pinctrl_pm_select_default_state(dev);
+
+	/* shut up ASSERT_RTNL() warning in netif_set_real_num_tx/rx_queues */
+	rtnl_lock();
+
+	for (i = 0; i < cpsw->data.slaves; i++) {
+		struct net_device *ndev = cpsw->slaves[i].ndev;
+
+		if (!(ndev && netif_running(ndev)))
+			continue;
+
+		cpsw_ndo_open(ndev);
+	}
+
+	rtnl_unlock();
+
+	return 0;
+}
+
+static SIMPLE_DEV_PM_OPS(cpsw_pm_ops, cpsw_suspend, cpsw_resume);
+
 static struct platform_driver cpsw_driver = {
 	.driver = {
 		.name	 = "cpsw-switch",
+		.pm	 = &cpsw_pm_ops,
 		.of_match_table = cpsw_of_mtable,
 	},
 	.probe = cpsw_probe,


@@ -777,7 +777,8 @@ static struct rtable *geneve_get_v4_rt(struct sk_buff *skb,
 				       struct net_device *dev,
 				       struct geneve_sock *gs4,
 				       struct flowi4 *fl4,
-				       const struct ip_tunnel_info *info)
+				       const struct ip_tunnel_info *info,
+				       __be16 dport, __be16 sport)
 {
 	bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
 	struct geneve_dev *geneve = netdev_priv(dev);
@@ -793,6 +794,8 @@ static struct rtable *geneve_get_v4_rt(struct sk_buff *skb,
 	fl4->flowi4_proto = IPPROTO_UDP;
 	fl4->daddr = info->key.u.ipv4.dst;
 	fl4->saddr = info->key.u.ipv4.src;
+	fl4->fl4_dport = dport;
+	fl4->fl4_sport = sport;

 	tos = info->key.tos;
 	if ((tos == 1) && !geneve->cfg.collect_md) {
@@ -827,7 +830,8 @@ static struct dst_entry *geneve_get_v6_dst(struct sk_buff *skb,
 					   struct net_device *dev,
 					   struct geneve_sock *gs6,
 					   struct flowi6 *fl6,
-					   const struct ip_tunnel_info *info)
+					   const struct ip_tunnel_info *info,
+					   __be16 dport, __be16 sport)
 {
 	bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
 	struct geneve_dev *geneve = netdev_priv(dev);
@@ -843,6 +847,9 @@ static struct dst_entry *geneve_get_v6_dst(struct sk_buff *skb,
 	fl6->flowi6_proto = IPPROTO_UDP;
 	fl6->daddr = info->key.u.ipv6.dst;
 	fl6->saddr = info->key.u.ipv6.src;
+	fl6->fl6_dport = dport;
+	fl6->fl6_sport = sport;
+
 	prio = info->key.tos;
 	if ((prio == 1) && !geneve->cfg.collect_md) {
 		prio = ip_tunnel_get_dsfield(ip_hdr(skb), skb);
@@ -889,7 +896,9 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
 	__be16 sport;
 	int err;

-	rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info);
+	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+	rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
+			      geneve->cfg.info.key.tp_dst, sport);
 	if (IS_ERR(rt))
 		return PTR_ERR(rt);

@@ -919,7 +928,6 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
 		return -EMSGSIZE;
 	}

-	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
 	if (geneve->cfg.collect_md) {
 		tos = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb);
 		ttl = key->ttl;
@@ -974,7 +982,9 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
 	__be16 sport;
 	int err;

-	dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info);
+	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+	dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info,
+				geneve->cfg.info.key.tp_dst, sport);
 	if (IS_ERR(dst))
 		return PTR_ERR(dst);

@@ -1003,7 +1013,6 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
 		return -EMSGSIZE;
 	}

-	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
 	if (geneve->cfg.collect_md) {
 		prio = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb);
 		ttl = key->ttl;
@@ -1085,13 +1094,18 @@ static int geneve_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
 {
 	struct ip_tunnel_info *info = skb_tunnel_info(skb);
 	struct geneve_dev *geneve = netdev_priv(dev);
+	__be16 sport;

 	if (ip_tunnel_info_af(info) == AF_INET) {
 		struct rtable *rt;
 		struct flowi4 fl4;
-		struct geneve_sock *gs4 = rcu_dereference(geneve->sock4);

-		rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info);
+		struct geneve_sock *gs4 = rcu_dereference(geneve->sock4);
+		sport = udp_flow_src_port(geneve->net, skb,
+					  1, USHRT_MAX, true);
+
+		rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
+				      geneve->cfg.info.key.tp_dst, sport);
 		if (IS_ERR(rt))
 			return PTR_ERR(rt);

@@ -1101,9 +1115,13 @@ static int geneve_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
 	} else if (ip_tunnel_info_af(info) == AF_INET6) {
 		struct dst_entry *dst;
 		struct flowi6 fl6;
-		struct geneve_sock *gs6 = rcu_dereference(geneve->sock6);

-		dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info);
+		struct geneve_sock *gs6 = rcu_dereference(geneve->sock6);
+		sport = udp_flow_src_port(geneve->net, skb,
+					  1, USHRT_MAX, true);
+
+		dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info,
+					geneve->cfg.info.key.tp_dst, sport);
 		if (IS_ERR(dst))
 			return PTR_ERR(dst);

@@ -1114,8 +1132,7 @@ static int geneve_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
 		return -EINVAL;
 	}

-	info->key.tp_src = udp_flow_src_port(geneve->net, skb,
-					     1, USHRT_MAX, true);
+	info->key.tp_src = sport;
 	info->key.tp_dst = geneve->cfg.info.key.tp_dst;
 	return 0;
 }
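
The thread running through all the geneve hunks above: the UDP source port used to be chosen after the route lookup, so multipath hashing and port-based routing policy never saw the real 4-tuple. Computing the port once, feeding it into the flowi4/flowi6 before the lookup, and reusing the same value when building the header keeps the lookup and the emitted packet consistent. A standalone sketch of that ordering follows; udp_flow_src_port() is stubbed with a fixed hash, and all values are illustrative.

/* Compute the source port once; the route lookup and the header share it. */
#include <stdint.h>
#include <stdio.h>

static uint16_t udp_flow_src_port_stub(uint32_t flow_hash)
{
	return (uint16_t)(flow_hash % (65535 - 1) + 1);	/* 1..USHRT_MAX */
}

static void route_lookup(uint16_t dport, uint16_t sport)
{
	printf("fib lookup with dport %u sport %u\n", dport, sport);
}

int main(void)
{
	uint16_t sport = udp_flow_src_port_stub(0xdeadbeef);

	route_lookup(6081, sport);		/* the lookup sees the ports */
	printf("send hdr sport %u\n", sport);	/* the header uses the same */
	return 0;
}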


@@ -847,6 +847,10 @@ struct nvsp_message {

 #define NETVSC_XDP_HDRM 256

+#define NETVSC_XFER_HEADER_SIZE(rng_cnt) \
+		(offsetof(struct vmtransfer_page_packet_header, ranges) + \
+		(rng_cnt) * sizeof(struct vmtransfer_page_range))
+
 struct multi_send_data {
 	struct sk_buff *skb; /* skb containing the pkt */
 	struct hv_netvsc_packet *pkt; /* netvsc pkt pending */
@@ -974,6 +978,9 @@ struct net_device_context {
 	/* Serial number of the VF to team with */
 	u32 vf_serial;

+	/* Is the current data path through the VF NIC? */
+	bool data_path_is_vf;
+
 	/* Used to temporarily save the config info across hibernation */
 	struct netvsc_device_info *saved_netvsc_dev_info;
 };


@@ -388,6 +388,15 @@ static int netvsc_init_buf(struct hv_device *device,
 	net_device->recv_section_size = resp->sections[0].sub_alloc_size;
 	net_device->recv_section_cnt = resp->sections[0].num_sub_allocs;

+	/* Ensure buffer will not overflow */
+	if (net_device->recv_section_size < NETVSC_MTU_MIN || (u64)net_device->recv_section_size *
+	    (u64)net_device->recv_section_cnt > (u64)buf_size) {
+		netdev_err(ndev, "invalid recv_section_size %u\n",
+			   net_device->recv_section_size);
+		ret = -EINVAL;
+		goto cleanup;
+	}
+
 	/* Setup receive completion ring.
 	 * Add 1 to the recv_section_cnt because at least one entry in a
 	 * ring buffer has to be empty.
@@ -460,6 +469,12 @@ static int netvsc_init_buf(struct hv_device *device,
 	/* Parse the response */
 	net_device->send_section_size = init_packet->msg.
 				v1_msg.send_send_buf_complete.section_size;
+	if (net_device->send_section_size < NETVSC_MTU_MIN) {
+		netdev_err(ndev, "invalid send_section_size %u\n",
+			   net_device->send_section_size);
+		ret = -EINVAL;
+		goto cleanup;
+	}

 	/* Section count is simply the size divided by the section size. */
 	net_device->send_section_cnt = buf_size / net_device->send_section_size;
@@ -731,12 +746,49 @@ static void netvsc_send_completion(struct net_device *ndev,
 				   int budget)
 {
 	const struct nvsp_message *nvsp_packet = hv_pkt_data(desc);
+	u32 msglen = hv_pkt_datalen(desc);
+
+	/* Ensure packet is big enough to read header fields */
+	if (msglen < sizeof(struct nvsp_message_header)) {
+		netdev_err(ndev, "nvsp_message length too small: %u\n", msglen);
+		return;
+	}

 	switch (nvsp_packet->hdr.msg_type) {
 	case NVSP_MSG_TYPE_INIT_COMPLETE:
+		if (msglen < sizeof(struct nvsp_message_header) +
+				sizeof(struct nvsp_message_init_complete)) {
+			netdev_err(ndev, "nvsp_msg length too small: %u\n",
+				   msglen);
+			return;
+		}
+		fallthrough;
+
 	case NVSP_MSG1_TYPE_SEND_RECV_BUF_COMPLETE:
+		if (msglen < sizeof(struct nvsp_message_header) +
+				sizeof(struct nvsp_1_message_send_receive_buffer_complete)) {
+			netdev_err(ndev, "nvsp_msg1 length too small: %u\n",
+				   msglen);
+			return;
+		}
+		fallthrough;
+
 	case NVSP_MSG1_TYPE_SEND_SEND_BUF_COMPLETE:
+		if (msglen < sizeof(struct nvsp_message_header) +
+				sizeof(struct nvsp_1_message_send_send_buffer_complete)) {
+			netdev_err(ndev, "nvsp_msg1 length too small: %u\n",
+				   msglen);
+			return;
+		}
+		fallthrough;
+
 	case NVSP_MSG5_TYPE_SUBCHANNEL:
+		if (msglen < sizeof(struct nvsp_message_header) +
+				sizeof(struct nvsp_5_subchannel_complete)) {
+			netdev_err(ndev, "nvsp_msg5 length too small: %u\n",
+				   msglen);
+			return;
+		}
+
 		/* Copy the response back */
 		memcpy(&net_device->channel_init_pkt, nvsp_packet,
 		       sizeof(struct nvsp_message));
@@ -1117,19 +1169,28 @@ static void enq_receive_complete(struct net_device *ndev,
 static int netvsc_receive(struct net_device *ndev,
 			  struct netvsc_device *net_device,
 			  struct netvsc_channel *nvchan,
-			  const struct vmpacket_descriptor *desc,
-			  const struct nvsp_message *nvsp)
+			  const struct vmpacket_descriptor *desc)
 {
 	struct net_device_context *net_device_ctx = netdev_priv(ndev);
 	struct vmbus_channel *channel = nvchan->channel;
 	const struct vmtransfer_page_packet_header *vmxferpage_packet
 		= container_of(desc, const struct vmtransfer_page_packet_header, d);
+	const struct nvsp_message *nvsp = hv_pkt_data(desc);
+	u32 msglen = hv_pkt_datalen(desc);
 	u16 q_idx = channel->offermsg.offer.sub_channel_index;
 	char *recv_buf = net_device->recv_buf;
 	u32 status = NVSP_STAT_SUCCESS;
 	int i;
 	int count = 0;

+	/* Ensure packet is big enough to read header fields */
+	if (msglen < sizeof(struct nvsp_message_header)) {
+		netif_err(net_device_ctx, rx_err, ndev,
+			  "invalid nvsp header, length too small: %u\n",
+			  msglen);
+		return 0;
+	}
+
 	/* Make sure this is a valid nvsp packet */
 	if (unlikely(nvsp->hdr.msg_type != NVSP_MSG1_TYPE_SEND_RNDIS_PKT)) {
 		netif_err(net_device_ctx, rx_err, ndev,
@@ -1138,6 +1199,14 @@ static int netvsc_receive(struct net_device *ndev,
 		return 0;
 	}

+	/* Validate xfer page pkt header */
+	if ((desc->offset8 << 3) < sizeof(struct vmtransfer_page_packet_header)) {
+		netif_err(net_device_ctx, rx_err, ndev,
+			  "Invalid xfer page pkt, offset too small: %u\n",
+			  desc->offset8 << 3);
+		return 0;
+	}
+
 	if (unlikely(vmxferpage_packet->xfer_pageset_id != NETVSC_RECEIVE_BUFFER_ID)) {
 		netif_err(net_device_ctx, rx_err, ndev,
 			  "Invalid xfer page set id - expecting %x got %x\n",
@@ -1148,6 +1217,14 @@ static int netvsc_receive(struct net_device *ndev,

 	count = vmxferpage_packet->range_cnt;

+	/* Check count for a valid value */
+	if (NETVSC_XFER_HEADER_SIZE(count) > desc->offset8 << 3) {
+		netif_err(net_device_ctx, rx_err, ndev,
+			  "Range count is not valid: %d\n",
+			  count);
+		return 0;
+	}
+
 	/* Each range represents 1 RNDIS pkt that contains 1 ethernet frame */
 	for (i = 0; i < count; i++) {
 		u32 offset = vmxferpage_packet->ranges[i].byte_offset;
@@ -1155,7 +1232,8 @@ static int netvsc_receive(struct net_device *ndev,
 		void *data;
 		int ret;

-		if (unlikely(offset + buflen > net_device->recv_buf_size)) {
+		if (unlikely(offset > net_device->recv_buf_size ||
+			     buflen > net_device->recv_buf_size - offset)) {
 			nvchan->rsc.cnt = 0;
 			status = NVSP_STAT_FAIL;
 			netif_err(net_device_ctx, rx_err, ndev,
@@ -1194,6 +1272,13 @@ static void netvsc_send_table(struct net_device *ndev,
 	u32 count, offset, *tab;
 	int i;

+	/* Ensure packet is big enough to read send_table fields */
+	if (msglen < sizeof(struct nvsp_message_header) +
+		     sizeof(struct nvsp_5_send_indirect_table)) {
+		netdev_err(ndev, "nvsp_v5_msg length too small: %u\n", msglen);
+		return;
+	}
+
 	count = nvmsg->msg.v5_msg.send_table.count;
 	offset = nvmsg->msg.v5_msg.send_table.offset;

@@ -1225,10 +1310,18 @@ static void netvsc_send_table(struct net_device *ndev,
 }

 static void netvsc_send_vf(struct net_device *ndev,
-			   const struct nvsp_message *nvmsg)
+			   const struct nvsp_message *nvmsg,
+			   u32 msglen)
 {
 	struct net_device_context *net_device_ctx = netdev_priv(ndev);

+	/* Ensure packet is big enough to read its fields */
+	if (msglen < sizeof(struct nvsp_message_header) +
+		     sizeof(struct nvsp_4_send_vf_association)) {
+		netdev_err(ndev, "nvsp_v4_msg length too small: %u\n", msglen);
+		return;
+	}
+
 	net_device_ctx->vf_alloc = nvmsg->msg.v4_msg.vf_assoc.allocated;
 	net_device_ctx->vf_serial = nvmsg->msg.v4_msg.vf_assoc.serial;
 	netdev_info(ndev, "VF slot %u %s\n",
@@ -1238,16 +1331,24 @@ static void netvsc_send_vf(struct net_device *ndev,

 static void netvsc_receive_inband(struct net_device *ndev,
 				  struct netvsc_device *nvscdev,
-				  const struct nvsp_message *nvmsg,
-				  u32 msglen)
+				  const struct vmpacket_descriptor *desc)
 {
+	const struct nvsp_message *nvmsg = hv_pkt_data(desc);
+	u32 msglen = hv_pkt_datalen(desc);
+
+	/* Ensure packet is big enough to read header fields */
+	if (msglen < sizeof(struct nvsp_message_header)) {
+		netdev_err(ndev, "inband nvsp_message length too small: %u\n", msglen);
+		return;
+	}
+
 	switch (nvmsg->hdr.msg_type) {
 	case NVSP_MSG5_TYPE_SEND_INDIRECTION_TABLE:
 		netvsc_send_table(ndev, nvscdev, nvmsg, msglen);
 		break;

 	case NVSP_MSG4_TYPE_SEND_VF_ASSOCIATION:
-		netvsc_send_vf(ndev, nvmsg);
+		netvsc_send_vf(ndev, nvmsg, msglen);
 		break;
 	}
 }
@@ -1261,23 +1362,20 @@ static int netvsc_process_raw_pkt(struct hv_device *device,
 {
 	struct vmbus_channel *channel = nvchan->channel;
 	const struct nvsp_message *nvmsg = hv_pkt_data(desc);
-	u32 msglen = hv_pkt_datalen(desc);

 	trace_nvsp_recv(ndev, channel, nvmsg);

 	switch (desc->type) {
 	case VM_PKT_COMP:
-		netvsc_send_completion(ndev, net_device, channel,
-				       desc, budget);
+		netvsc_send_completion(ndev, net_device, channel, desc, budget);
 		break;

 	case VM_PKT_DATA_USING_XFER_PAGES:
-		return netvsc_receive(ndev, net_device, nvchan,
-				      desc, nvmsg);
+		return netvsc_receive(ndev, net_device, nvchan, desc);
 		break;

 	case VM_PKT_DATA_INBAND:
-		netvsc_receive_inband(ndev, net_device, nvmsg, msglen);
+		netvsc_receive_inband(ndev, net_device, desc);
 		break;

 	default:
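
Note the shape of the rewritten per-range check above: with 32-bit sizes, offset + buflen can wrap around and slip past an "offset + buflen > size" test, while the two-step "offset > size || buflen > size - offset" form cannot. A standalone demonstration with made-up values:

/* Why the bounds check was split in two: u32 addition can wrap. */
#include <stdint.h>
#include <stdio.h>

static int bad_check(uint32_t off, uint32_t len, uint32_t size)
{
	return off + len > size;	/* off + len may wrap to a small value */
}

static int good_check(uint32_t off, uint32_t len, uint32_t size)
{
	return off > size || len > size - off;	/* no wraparound possible */
}

int main(void)
{
	uint32_t size = 0x10000, off = 0x10, len = 0xfffffff8;

	/* off + len wraps to 0x8, so the naive check accepts the pair */
	printf("bad_check rejects: %d\n", bad_check(off, len, size));   /* 0 */
	printf("good_check rejects: %d\n", good_check(off, len, size)); /* 1 */
	return 0;
}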


@@ -748,6 +748,13 @@ void netvsc_linkstatus_callback(struct net_device *net,
 	struct netvsc_reconfig *event;
 	unsigned long flags;

+	/* Ensure the packet is big enough to access its fields */
+	if (resp->msg_len - RNDIS_HEADER_SIZE < sizeof(struct rndis_indicate_status)) {
+		netdev_err(net, "invalid rndis_indicate_status packet, len: %u\n",
+			   resp->msg_len);
+		return;
+	}
+
 	/* Update the physical link speed when changing to another vSwitch */
 	if (indicate->status == RNDIS_STATUS_LINK_SPEED_CHANGE) {
 		u32 speed;
@@ -2366,7 +2373,16 @@ static int netvsc_register_vf(struct net_device *vf_netdev)
 	return NOTIFY_OK;
 }

-/* VF up/down change detected, schedule to change data path */
+/* Change the data path when VF UP/DOWN/CHANGE are detected.
+ *
+ * Typically a UP or DOWN event is followed by a CHANGE event, so
+ * net_device_ctx->data_path_is_vf is used to cache the current data path
+ * to avoid the duplicate call of netvsc_switch_datapath() and the duplicate
+ * message.
+ *
+ * During hibernation, if a VF NIC driver (e.g. mlx5) preserves the network
+ * interface, there is only the CHANGE event and no UP or DOWN event.
+ */
 static int netvsc_vf_changed(struct net_device *vf_netdev)
 {
 	struct net_device_context *net_device_ctx;
@@ -2383,6 +2399,10 @@ static int netvsc_vf_changed(struct net_device *vf_netdev)
 	if (!netvsc_dev)
 		return NOTIFY_DONE;

+	if (net_device_ctx->data_path_is_vf == vf_is_up)
+		return NOTIFY_OK;
+
+	net_device_ctx->data_path_is_vf = vf_is_up;
 	netvsc_switch_datapath(ndev, vf_is_up);
 	netdev_info(ndev, "Data path switched %s VF: %s\n",
 		    vf_is_up ? "to" : "from", vf_netdev->name);
@@ -2587,8 +2607,8 @@ static int netvsc_remove(struct hv_device *dev)
 static int netvsc_suspend(struct hv_device *dev)
 {
 	struct net_device_context *ndev_ctx;
-	struct net_device *vf_netdev, *net;
 	struct netvsc_device *nvdev;
+	struct net_device *net;
 	int ret;

 	net = hv_get_drvdata(dev);
@@ -2604,10 +2624,6 @@ static int netvsc_suspend(struct hv_device *dev)
 		goto out;
 	}

-	vf_netdev = rtnl_dereference(ndev_ctx->vf_netdev);
-	if (vf_netdev)
-		netvsc_unregister_vf(vf_netdev);
-
 	/* Save the current config info */
 	ndev_ctx->saved_netvsc_dev_info = netvsc_devinfo_get(nvdev);

@@ -2628,6 +2644,12 @@ static int netvsc_resume(struct hv_device *dev)
 	rtnl_lock();

 	net_device_ctx = netdev_priv(net);
+
+	/* Reset the data path to the netvsc NIC before re-opening the vmbus
+	 * channel. Later netvsc_netdev_event() will switch the data path to
+	 * the VF upon the UP or CHANGE event.
+	 */
+	net_device_ctx->data_path_is_vf = false;
 	device_info = net_device_ctx->saved_netvsc_dev_info;

 	ret = netvsc_attach(net, device_info);
@@ -2695,6 +2717,7 @@ static int netvsc_netdev_event(struct notifier_block *this,
 		return netvsc_unregister_vf(event_dev);
 	case NETDEV_UP:
 	case NETDEV_DOWN:
+	case NETDEV_CHANGE:
 		return netvsc_vf_changed(event_dev);
 	default:
 		return NOTIFY_DONE;
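
The comment block added above explains the caching: UP or DOWN is normally followed by CHANGE, so the handler compares the cached state first and only switches the data path on a real transition. A standalone sketch of that dedup logic follows; the event loop is hypothetical, not the Hyper-V code.

/* Dedup repeated notifier events with one cached state bit. */
#include <stdbool.h>
#include <stdio.h>

static bool data_path_is_vf;

static void vf_changed(bool vf_is_up)
{
	if (data_path_is_vf == vf_is_up)
		return;			/* duplicate event: nothing to do */

	data_path_is_vf = vf_is_up;
	printf("data path switched %s VF\n", vf_is_up ? "to" : "from");
}

int main(void)
{
	vf_changed(true);	/* NETDEV_UP: switches */
	vf_changed(true);	/* NETDEV_CHANGE right after: ignored */
	vf_changed(false);	/* NETDEV_DOWN: switches back */
	return 0;
}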


@@ -275,6 +275,16 @@ static void rndis_filter_receive_response(struct net_device *ndev,
 		return;
 	}

+	/* Ensure the packet is big enough to read req_id. Req_id is the 1st
+	 * field in any request/response message, so the payload should have at
+	 * least sizeof(u32) bytes
+	 */
+	if (resp->msg_len - RNDIS_HEADER_SIZE < sizeof(u32)) {
+		netdev_err(ndev, "rndis msg_len too small: %u\n",
+			   resp->msg_len);
+		return;
+	}
+
 	spin_lock_irqsave(&dev->request_lock, flags);
 	list_for_each_entry(request, &dev->req_list, list_ent) {
 		/*
@@ -331,8 +341,9 @@ static void rndis_filter_receive_response(struct net_device *ndev,
  * Get the Per-Packet-Info with the specified type
  * return NULL if not found.
  */
-static inline void *rndis_get_ppi(struct rndis_packet *rpkt,
-				  u32 type, u8 internal)
+static inline void *rndis_get_ppi(struct net_device *ndev,
+				  struct rndis_packet *rpkt,
+				  u32 rpkt_len, u32 type, u8 internal)
 {
 	struct rndis_per_packet_info *ppi;
 	int len;
@@ -340,11 +351,36 @@ static inline void *rndis_get_ppi(struct rndis_packet *rpkt,
 	if (rpkt->per_pkt_info_offset == 0)
 		return NULL;

+	/* Validate info_offset and info_len */
+	if (rpkt->per_pkt_info_offset < sizeof(struct rndis_packet) ||
+	    rpkt->per_pkt_info_offset > rpkt_len) {
+		netdev_err(ndev, "Invalid per_pkt_info_offset: %u\n",
+			   rpkt->per_pkt_info_offset);
+		return NULL;
+	}
+
+	if (rpkt->per_pkt_info_len > rpkt_len - rpkt->per_pkt_info_offset) {
+		netdev_err(ndev, "Invalid per_pkt_info_len: %u\n",
+			   rpkt->per_pkt_info_len);
+		return NULL;
+	}
+
 	ppi = (struct rndis_per_packet_info *)((ulong)rpkt +
 		rpkt->per_pkt_info_offset);
 	len = rpkt->per_pkt_info_len;

 	while (len > 0) {
+		/* Validate ppi_offset and ppi_size */
+		if (ppi->size > len) {
+			netdev_err(ndev, "Invalid ppi size: %u\n", ppi->size);
+			continue;
+		}
+
+		if (ppi->ppi_offset >= ppi->size) {
+			netdev_err(ndev, "Invalid ppi_offset: %u\n", ppi->ppi_offset);
+			continue;
+		}
+
 		if (ppi->type == type && ppi->internal == internal)
 			return (void *)((ulong)ppi + ppi->ppi_offset);
 		len -= ppi->size;
@@ -388,14 +424,29 @@ static int rndis_filter_receive_data(struct net_device *ndev,
 	const struct ndis_pkt_8021q_info *vlan;
 	const struct rndis_pktinfo_id *pktinfo_id;
 	const u32 *hash_info;
-	u32 data_offset;
+	u32 data_offset, rpkt_len;
 	void *data;
 	bool rsc_more = false;
 	int ret;

+	/* Ensure data_buflen is big enough to read header fields */
+	if (data_buflen < RNDIS_HEADER_SIZE + sizeof(struct rndis_packet)) {
+		netdev_err(ndev, "invalid rndis pkt, data_buflen too small: %u\n",
+			   data_buflen);
+		return NVSP_STAT_FAIL;
+	}
+
+	/* Validate rndis_pkt offset */
+	if (rndis_pkt->data_offset >= data_buflen - RNDIS_HEADER_SIZE) {
+		netdev_err(ndev, "invalid rndis packet offset: %u\n",
+			   rndis_pkt->data_offset);
+		return NVSP_STAT_FAIL;
+	}
+
 	/* Remove the rndis header and pass it back up the stack */
 	data_offset = RNDIS_HEADER_SIZE + rndis_pkt->data_offset;

+	rpkt_len = data_buflen - RNDIS_HEADER_SIZE;
 	data_buflen -= data_offset;

 	/*
@@ -410,13 +461,13 @@ static int rndis_filter_receive_data(struct net_device *ndev,
 		return NVSP_STAT_FAIL;
 	}

-	vlan = rndis_get_ppi(rndis_pkt, IEEE_8021Q_INFO, 0);
+	vlan = rndis_get_ppi(ndev, rndis_pkt, rpkt_len, IEEE_8021Q_INFO, 0);

-	csum_info = rndis_get_ppi(rndis_pkt, TCPIP_CHKSUM_PKTINFO, 0);
+	csum_info = rndis_get_ppi(ndev, rndis_pkt, rpkt_len, TCPIP_CHKSUM_PKTINFO, 0);

-	hash_info = rndis_get_ppi(rndis_pkt, NBL_HASH_VALUE, 0);
+	hash_info = rndis_get_ppi(ndev, rndis_pkt, rpkt_len, NBL_HASH_VALUE, 0);

-	pktinfo_id = rndis_get_ppi(rndis_pkt, RNDIS_PKTINFO_ID, 1);
+	pktinfo_id = rndis_get_ppi(ndev, rndis_pkt, rpkt_len, RNDIS_PKTINFO_ID, 1);

 	data = (void *)msg + data_offset;
@@ -474,6 +525,14 @@ int rndis_filter_receive(struct net_device *ndev,
 	if (netif_msg_rx_status(net_device_ctx))
 		dump_rndis_message(ndev, rndis_msg);

+	/* Validate incoming rndis_message packet */
+	if (buflen < RNDIS_HEADER_SIZE || rndis_msg->msg_len < RNDIS_HEADER_SIZE ||
+	    buflen < rndis_msg->msg_len) {
+		netdev_err(ndev, "Invalid rndis_msg (buflen: %u, msg_len: %u)\n",
+			   buflen, rndis_msg->msg_len);
+		return NVSP_STAT_FAIL;
+	}
+
 	switch (rndis_msg->ndis_msg_type) {
 	case RNDIS_MSG_PACKET:
 		return rndis_filter_receive_data(ndev, net_dev, nvchan,


@@ -882,7 +882,9 @@ static int adf7242_rx(struct adf7242_local *lp)
 	int ret;
 	u8 lqi, len_u8, *data;

-	adf7242_read_reg(lp, 0, &len_u8);
+	ret = adf7242_read_reg(lp, 0, &len_u8);
+	if (ret)
+		return ret;

 	len = len_u8;


@@ -2925,6 +2925,7 @@ static int ca8210_dev_com_init(struct ca8210_priv *priv)
 	);
 	if (!priv->irq_workqueue) {
 		dev_crit(&priv->spi->dev, "alloc of irq_workqueue failed!\n");
+		destroy_workqueue(priv->mlme_workqueue);
 		return -ENOMEM;
 	}


@@ -521,7 +521,7 @@ static void ipa_filter_tuple_zero(struct ipa_endpoint *endpoint)
 	val = ioread32(endpoint->ipa->reg_virt + offset);

 	/* Zero all filter-related fields, preserving the rest */
-	u32_replace_bits(val, 0, IPA_REG_ENDP_FILTER_HASH_MSK_ALL);
+	u32p_replace_bits(&val, 0, IPA_REG_ENDP_FILTER_HASH_MSK_ALL);

 	iowrite32(val, endpoint->ipa->reg_virt + offset);
 }
@@ -573,7 +573,7 @@ static void ipa_route_tuple_zero(struct ipa *ipa, u32 route_id)
 	val = ioread32(ipa->reg_virt + offset);

 	/* Zero all route-related fields, preserving the rest */
-	u32_replace_bits(val, 0, IPA_REG_ENDP_ROUTER_HASH_MSK_ALL);
+	u32p_replace_bits(&val, 0, IPA_REG_ENDP_ROUTER_HASH_MSK_ALL);

 	iowrite32(val, ipa->reg_virt + offset);
 }


@@ -996,7 +996,7 @@ void phy_stop(struct phy_device *phydev)
 {
 	struct net_device *dev = phydev->attached_dev;

-	if (!phy_is_started(phydev)) {
+	if (!phy_is_started(phydev) && phydev->state != PHY_DOWN) {
 		WARN(1, "called from state %s\n",
 		     phy_state_to_str(phydev->state));
 		return;


@@ -1143,10 +1143,6 @@ int phy_init_hw(struct phy_device *phydev)
 	if (ret < 0)
 		return ret;

-	ret = phy_disable_interrupts(phydev);
-	if (ret)
-		return ret;
-
 	if (phydev->drv->config_init)
 		ret = phydev->drv->config_init(phydev);

@@ -1423,6 +1419,10 @@ int phy_attach_direct(struct net_device *dev, struct phy_device *phydev,
 	if (err)
 		goto error;

+	err = phy_disable_interrupts(phydev);
+	if (err)
+		return err;
+
 	phy_resume(phydev);
 	phy_led_triggers_register(phydev);

@@ -1682,7 +1682,8 @@ void phy_detach(struct phy_device *phydev)
 	phy_led_triggers_unregister(phydev);

-	module_put(phydev->mdio.dev.driver->owner);
+	if (phydev->mdio.dev.driver)
+		module_put(phydev->mdio.dev.driver->owner);

 	/* If the device had no specific driver before (i.e. - it
 	 * was using the generic driver), we unbind the device


@@ -201,7 +201,7 @@ int rndis_command(struct usbnet *dev, struct rndis_msg_hdr *buf, int buflen)
 			dev_dbg(&info->control->dev,
 				"rndis response error, code %d\n", retval);
 		}
-		msleep(20);
+		msleep(40);
 	}
 	dev_dbg(&info->control->dev, "rndis response timeout\n");
 	return -ETIMEDOUT;


@@ -118,6 +118,7 @@ static void cisco_keepalive_send(struct net_device *dev, u32 type,
 	skb_put(skb, sizeof(struct cisco_packet));
 	skb->priority = TC_PRIO_CONTROL;
 	skb->dev = dev;
+	skb->protocol = htons(ETH_P_HDLC);
 	skb_reset_network_header(skb);

 	dev_queue_xmit(skb);


@@ -433,6 +433,8 @@ static netdev_tx_t pvc_xmit(struct sk_buff *skb, struct net_device *dev)
 		if (pvc->state.fecn) /* TX Congestion counter */
 			dev->stats.tx_compressed++;
 		skb->dev = pvc->frad;
+		skb->protocol = htons(ETH_P_HDLC);
+		skb_reset_network_header(skb);
 		dev_queue_xmit(skb);
 		return NETDEV_TX_OK;
 	}
@@ -555,6 +557,7 @@ static void fr_lmi_send(struct net_device *dev, int fullrep)
 	skb_put(skb, i);
 	skb->priority = TC_PRIO_CONTROL;
 	skb->dev = dev;
+	skb->protocol = htons(ETH_P_HDLC);
 	skb_reset_network_header(skb);

 	dev_queue_xmit(skb);
@@ -1041,7 +1044,7 @@ static void pvc_setup(struct net_device *dev)
 {
 	dev->type = ARPHRD_DLCI;
 	dev->flags = IFF_POINTOPOINT;
-	dev->hard_header_len = 10;
+	dev->hard_header_len = 0;
 	dev->addr_len = 2;
 	netif_keep_dst(dev);
 }
@@ -1093,6 +1096,7 @@ static int fr_add_pvc(struct net_device *frad, unsigned int dlci, int type)
 	dev->mtu = HDLC_MAX_MTU;
 	dev->min_mtu = 68;
 	dev->max_mtu = HDLC_MAX_MTU;
+	dev->needed_headroom = 10;
 	dev->priv_flags |= IFF_NO_QUEUE;
 	dev->ml_priv = pvc;


@@ -251,6 +251,7 @@ static void ppp_tx_cp(struct net_device *dev, u16 pid, u8 code,

 	skb->priority = TC_PRIO_CONTROL;
 	skb->dev = dev;
+	skb->protocol = htons(ETH_P_HDLC);
 	skb_reset_network_header(skb);
 	skb_queue_tail(&tx_queue, skb);
 }
@@ -383,11 +384,8 @@ static void ppp_cp_parse_cr(struct net_device *dev, u16 pid, u8 id,
 	}

 	for (opt = data; len; len -= opt[1], opt += opt[1]) {
-		if (len < 2 || len < opt[1]) {
-			dev->stats.rx_errors++;
-			kfree(out);
-			return; /* bad packet, drop silently */
-		}
+		if (len < 2 || opt[1] < 2 || len < opt[1])
+			goto err_out;

 		if (pid == PID_LCP)
 			switch (opt[0]) {
@@ -395,6 +393,8 @@ static void ppp_cp_parse_cr(struct net_device *dev, u16 pid, u8 id,
 				continue; /* MRU always OK and > 1500 bytes? */

 			case LCP_OPTION_ACCM: /* async control character map */
+				if (opt[1] < sizeof(valid_accm))
+					goto err_out;
 				if (!memcmp(opt, valid_accm,
 					    sizeof(valid_accm)))
 					continue;
@@ -406,6 +406,8 @@ static void ppp_cp_parse_cr(struct net_device *dev, u16 pid, u8 id,
 				}
 				break;
 			case LCP_OPTION_MAGIC:
+				if (len < 6)
+					goto err_out;
 				if (opt[1] != 6 || (!opt[2] && !opt[3] &&
 						    !opt[4] && !opt[5]))
 					break; /* reject invalid magic number */
@@ -424,6 +426,11 @@ static void ppp_cp_parse_cr(struct net_device *dev, u16 pid, u8 id,
 		ppp_cp_event(dev, pid, RCR_GOOD, CP_CONF_ACK, id, req_len, data);

 	kfree(out);
+	return;
+
+err_out:
+	dev->stats.rx_errors++;
+	kfree(out);
 }

 static int ppp_rx(struct sk_buff *skb)
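
The key bound the hdlc_ppp fix enforces: a config-option TLV must have length >= 2 (one type byte plus one length byte) and must fit in the remaining buffer, otherwise "opt += opt[1]" can loop forever when opt[1] is 0, or read past the end of the packet. A standalone sketch of that walk follows; the demo data is hypothetical.

/* Bounds-checked TLV option walk, mirroring the fixed loop condition. */
#include <stdint.h>
#include <stdio.h>

static int walk_options(const uint8_t *data, unsigned int len)
{
	const uint8_t *opt;

	for (opt = data; len; len -= opt[1], opt += opt[1]) {
		if (len < 2 || opt[1] < 2 || len < opt[1])
			return -1;	/* malformed: drop the packet */
		printf("option type %u, length %u\n", opt[0], opt[1]);
	}
	return 0;
}

int main(void)
{
	const uint8_t good[] = { 1, 4, 0x05, 0xdc, 5, 6, 1, 2, 3, 4 };
	const uint8_t evil[] = { 1, 0 };	/* length 0: would loop forever */

	printf("good: %d\n", walk_options(good, sizeof(good)));	/*  0 */
	printf("evil: %d\n", walk_options(evil, sizeof(evil)));	/* -1 */
	return 0;
}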


@@ -198,8 +198,6 @@ static void lapbeth_data_transmit(struct net_device *ndev, struct sk_buff *skb)
 	struct net_device *dev;
 	int size = skb->len;

-	skb->protocol = htons(ETH_P_X25);
-
 	ptr = skb_push(skb, 2);

 	*ptr++ = size % 256;
@@ -210,6 +208,8 @@ static void lapbeth_data_transmit(struct net_device *ndev, struct sk_buff *skb)

 	skb->dev = dev = lapbeth->ethdev;

+	skb->protocol = htons(ETH_P_DEC);
+
 	skb_reset_network_header(skb);

 	dev_hard_header(skb, dev, ETH_P_DEC, bcast_addr, NULL, 0);


@@ -87,15 +87,12 @@ static void handshake_zero(struct noise_handshake *handshake)

 void wg_noise_handshake_clear(struct noise_handshake *handshake)
 {
+	down_write(&handshake->lock);
 	wg_index_hashtable_remove(
 			handshake->entry.peer->device->index_hashtable,
 			&handshake->entry);
-	down_write(&handshake->lock);
 	handshake_zero(handshake);
 	up_write(&handshake->lock);
-
-	wg_index_hashtable_remove(
-			handshake->entry.peer->device->index_hashtable,
-			&handshake->entry);
 }

 static struct noise_keypair *keypair_create(struct wg_peer *peer)


@@ -167,9 +167,13 @@ bool wg_index_hashtable_replace(struct index_hashtable *table,
 				struct index_hashtable_entry *old,
 				struct index_hashtable_entry *new)
 {
-	if (unlikely(hlist_unhashed(&old->index_hash)))
-		return false;
+	bool ret;

 	spin_lock_bh(&table->lock);
+	ret = !hlist_unhashed(&old->index_hash);
+	if (unlikely(!ret))
+		goto out;
+
 	new->index = old->index;
 	hlist_replace_rcu(&old->index_hash, &new->index_hash);

@@ -180,8 +184,9 @@ bool wg_index_hashtable_replace(struct index_hashtable *table,
 	 * simply gets dropped, which isn't terrible.
 	 */
 	INIT_HLIST_NODE(&old->index_hash);
+out:
 	spin_unlock_bh(&table->lock);
-	return true;
+	return ret;
 }

 void wg_index_hashtable_remove(struct index_hashtable *table,
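
The race this closes: the "is the entry still hashed?" test used to run before taking the table lock, so a concurrent remove landing between the test and the lock went unnoticed. Moving the test under the same lock as the replacement makes the decision and the mutation atomic. A standalone sketch of the check-under-lock pattern follows, with a pthread mutex standing in for the kernel spinlock and a plain flag standing in for the hlist state; illustrative only.

/* Decide and mutate under one lock, never before it. */
#include <pthread.h>
#include <stdbool.h>

struct entry {
	bool hashed;
};

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

static bool replace(struct entry *old, struct entry *new)
{
	bool ret;

	pthread_mutex_lock(&table_lock);
	ret = old->hashed;		/* decided under the lock */
	if (ret) {
		old->hashed = false;	/* unhash old ... */
		new->hashed = true;	/* ... and splice in new */
	}
	pthread_mutex_unlock(&table_lock);
	return ret;
}

int main(void)
{
	struct entry old = { .hashed = true }, new_e = { .hashed = false };

	return replace(&old, &new_e) ? 0 : 1;
}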


@@ -664,9 +664,15 @@ static void pkt_align(struct sk_buff *p, int len, int align)
 /* To check if there's window offered */
 static bool data_ok(struct brcmf_sdio *bus)
 {
-	/* Reserve TXCTL_CREDITS credits for txctl */
-	return (bus->tx_max - bus->tx_seq) > TXCTL_CREDITS &&
-	       ((bus->tx_max - bus->tx_seq) & 0x80) == 0;
+	u8 tx_rsv = 0;
+
+	/* Reserve TXCTL_CREDITS credits for txctl when it is ready to send */
+	if (bus->ctrl_frame_stat)
+		tx_rsv = TXCTL_CREDITS;
+
+	return (bus->tx_max - bus->tx_seq - tx_rsv) != 0 &&
+	       ((bus->tx_max - bus->tx_seq - tx_rsv) & 0x80) == 0;
 }

 /* To check if there's window offered */
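
The arithmetic above is worth spelling out: tx_max and tx_seq live in u8 sequence space, so the window (tx_max - tx_seq - tx_rsv) wraps modulo 256 and bit 0x80 flags an underflowed window. The fix reserves the control-frame credits only while a control frame is actually pending, so data frames can use the full window otherwise. A standalone sketch with made-up values:

/* u8 sequence-space credit window, as in data_ok(). */
#include <stdint.h>
#include <stdio.h>

#define TXCTL_CREDITS 2

static int data_ok(uint8_t tx_max, uint8_t tx_seq, int ctrl_frame_stat)
{
	uint8_t tx_rsv = ctrl_frame_stat ? TXCTL_CREDITS : 0;
	uint8_t window = tx_max - tx_seq - tx_rsv;	/* wraps mod 256 */

	return window != 0 && (window & 0x80) == 0;
}

int main(void)
{
	/* two credits left, control frame pending: data must wait */
	printf("%d\n", data_ok(12, 10, 1));	/* 0 */
	/* same credits, nothing pending: data may use them */
	printf("%d\n", data_ok(12, 10, 0));	/* 1 */
	return 0;
}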


@@ -954,7 +954,7 @@ struct mwifiex_tkip_param {
 struct mwifiex_aes_param {
 	u8 pn[WPA_PN_SIZE];
 	__le16 key_len;
-	u8 key[WLAN_KEY_LEN_CCMP];
+	u8 key[WLAN_KEY_LEN_CCMP_256];
 } __packed;

 struct mwifiex_wapi_param {


@@ -619,7 +619,7 @@ static int mwifiex_ret_802_11_key_material_v2(struct mwifiex_private *priv,
 	key_v2 = &resp->params.key_material_v2;

 	len = le16_to_cpu(key_v2->key_param_set.key_params.aes.key_len);
-	if (len > WLAN_KEY_LEN_CCMP)
+	if (len > sizeof(key_v2->key_param_set.key_params.aes.key))
 		return -EINVAL;

 	if (le16_to_cpu(key_v2->action) == HostCmd_ACT_GEN_SET) {
@@ -635,7 +635,7 @@ static int mwifiex_ret_802_11_key_material_v2(struct mwifiex_private *priv,
 		return 0;

 	memset(priv->aes_key_v2.key_param_set.key_params.aes.key, 0,
-	       WLAN_KEY_LEN_CCMP);
+	       sizeof(key_v2->key_param_set.key_params.aes.key));
 	priv->aes_key_v2.key_param_set.key_params.aes.key_len =
 		cpu_to_le16(len);
 	memcpy(priv->aes_key_v2.key_param_set.key_params.aes.key,


@@ -2128,7 +2128,8 @@ static int mt7615_load_n9(struct mt7615_dev *dev, const char *name)
 		 sizeof(dev->mt76.hw->wiphy->fw_version),
 		 "%.10s-%.15s", hdr->fw_ver, hdr->build_date);

-	if (!strncmp(hdr->fw_ver, "2.0", sizeof(hdr->fw_ver))) {
+	if (!is_mt7615(&dev->mt76) &&
+	    !strncmp(hdr->fw_ver, "2.0", sizeof(hdr->fw_ver))) {
 		dev->fw_ver = MT7615_FIRMWARE_V2;
 		dev->mcu_ops = &sta_update_ops;
 	} else {

Просмотреть файл

@@ -699,8 +699,12 @@ void mt7915_unregister_device(struct mt7915_dev *dev)
 	spin_lock_bh(&dev->token_lock);
 	idr_for_each_entry(&dev->token, txwi, id) {
 		mt7915_txp_skb_unmap(&dev->mt76, txwi);
-		if (txwi->skb)
-			dev_kfree_skb_any(txwi->skb);
+		if (txwi->skb) {
+			struct ieee80211_hw *hw;
+
+			hw = mt76_tx_status_get_hw(&dev->mt76, txwi->skb);
+			ieee80211_free_txskb(hw, txwi->skb);
+		}
 		mt76_put_txwi(&dev->mt76, txwi);
 	}
 	spin_unlock_bh(&dev->token_lock);

Просмотреть файл

@@ -841,7 +841,7 @@ mt7915_tx_complete_status(struct mt76_dev *mdev, struct sk_buff *skb,
 	if (sta || !(info->flags & IEEE80211_TX_CTL_NO_ACK))
 		mt7915_tx_status(sta, hw, info, NULL);

-	dev_kfree_skb(skb);
+	ieee80211_free_txskb(hw, skb);
 }

 void mt7915_txp_skb_unmap(struct mt76_dev *dev,


@@ -458,7 +458,6 @@ enum wl1271_cmd_key_type {
 	KEY_TKIP = 2,
 	KEY_AES  = 3,
 	KEY_GEM  = 4,
-	KEY_IGTK = 5,
 };

 struct wl1271_cmd_set_keys {


@@ -3559,9 +3559,6 @@ int wlcore_set_key(struct wl1271 *wl, enum set_key_cmd cmd,
 	case WL1271_CIPHER_SUITE_GEM:
 		key_type = KEY_GEM;
 		break;
-	case WLAN_CIPHER_SUITE_AES_CMAC:
-		key_type = KEY_IGTK;
-		break;
 	default:
 		wl1271_error("Unknown key algo 0x%x", key_conf->cipher);
@@ -6231,7 +6228,6 @@ static int wl1271_init_ieee80211(struct wl1271 *wl)
 		WLAN_CIPHER_SUITE_TKIP,
 		WLAN_CIPHER_SUITE_CCMP,
 		WL1271_CIPHER_SUITE_GEM,
-		WLAN_CIPHER_SUITE_AES_CMAC,
 	};

 	/* The tx descriptor buffer */


@@ -284,11 +284,11 @@ static void qeth_l2_stop_card(struct qeth_card *card)

 	if (card->state == CARD_STATE_SOFTSETUP) {
 		qeth_clear_ipacmd_list(card);
-		qeth_drain_output_queues(card);
 		card->state = CARD_STATE_DOWN;
 	}

 	qeth_qdio_clear_card(card, 0);
+	qeth_drain_output_queues(card);
 	qeth_clear_working_pool_list(card);
 	flush_workqueue(card->event_wq);
 	qeth_flush_local_addrs(card);


@@ -1168,11 +1168,11 @@ static void qeth_l3_stop_card(struct qeth_card *card)
 	if (card->state == CARD_STATE_SOFTSETUP) {
 		qeth_l3_clear_ip_htable(card, 1);
 		qeth_clear_ipacmd_list(card);
-		qeth_drain_output_queues(card);
 		card->state = CARD_STATE_DOWN;
 	}

 	qeth_qdio_clear_card(card, 0);
+	qeth_drain_output_queues(card);
 	qeth_clear_working_pool_list(card);
 	flush_workqueue(card->event_wq);
 	qeth_flush_local_addrs(card);


@@ -193,7 +193,7 @@ static inline int find_next_netdev_feature(u64 feature, unsigned long start)
 #define NETIF_F_GSO_MASK	(__NETIF_F_BIT(NETIF_F_GSO_LAST + 1) - \
 		__NETIF_F_BIT(NETIF_F_GSO_SHIFT))

-/* List of IP checksum features. Note that NETIF_F_ HW_CSUM should not be
+/* List of IP checksum features. Note that NETIF_F_HW_CSUM should not be
  * set in features when NETIF_F_IP_CSUM or NETIF_F_IPV6_CSUM are set--
  * this would be contradictory
  */

Просмотреть файл

@@ -1784,6 +1784,7 @@ enum netdev_priv_flags {
  *			the watchdog (see dev_watchdog())
  *	@watchdog_timer:	List of timers
  *
+ *	@proto_down_reason:	reason a netdev interface is held down
  *	@pcpu_refcnt:		Number of references to this device
  *	@todo_list:		Delayed register/unregister
  *	@link_watch_list:	XXX: need comments on this one
@@ -1848,6 +1849,7 @@ enum netdev_priv_flags {
  *	@udp_tunnel_nic_info:	static structure describing the UDP tunnel
  *				offload capabilities of the device
  *	@udp_tunnel_nic:	UDP tunnel offload state
+ *	@xdp_state:		stores info on attached XDP BPF programs
  *
  *	FIXME: cleanup struct net_device such that network protocol info
  *	moves out.


@@ -623,6 +623,7 @@ struct qed_dev_info {
 #define QED_MFW_VERSION_3_OFFSET	24

 	u32 flash_size;
+	bool b_arfs_capable;
 	bool b_inter_pf_switch;
 	bool tx_switching;
 	bool rdma_supported;


@@ -3223,8 +3223,9 @@ static inline int skb_padto(struct sk_buff *skb, unsigned int len)
 *	is untouched. Otherwise it is extended. Returns zero on
 *	success. The skb is freed on error if @free_on_error is true.
 */
-static inline int __skb_put_padto(struct sk_buff *skb, unsigned int len,
-				  bool free_on_error)
+static inline int __must_check __skb_put_padto(struct sk_buff *skb,
+					       unsigned int len,
+					       bool free_on_error)
 {
 	unsigned int size = skb->len;

@@ -3247,7 +3248,7 @@ static inline int __skb_put_padto(struct sk_buff *skb, unsigned int len,
 *	is untouched. Otherwise it is extended. Returns zero on
 *	success. The skb is freed on error.
 */
-static inline int skb_put_padto(struct sk_buff *skb, unsigned int len)
+static inline int __must_check skb_put_padto(struct sk_buff *skb, unsigned int len)
 {
 	return __skb_put_padto(skb, len, true);
 }
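
Marking both helpers __must_check forces callers to handle the fact that the skb is already freed when padding fails; ignoring the return value and touching the skb afterwards would be a use-after-free. A minimal sketch of the resulting calling convention, assuming the usual netdev boilerplate; the driver and function are hypothetical, not any real in-tree code.

/* Hypothetical xmit path: skb_put_padto() frees the skb on failure,
 * so on error we must return without touching the skb again.
 */
#include <linux/etherdevice.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static netdev_tx_t toy_xmit(struct sk_buff *skb, struct net_device *dev)
{
	if (skb_put_padto(skb, ETH_ZLEN))
		return NETDEV_TX_OK;	/* skb already freed; just drop out */

	/* ... queue the now minimum-length frame to the hardware ... */
	return NETDEV_TX_OK;
}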


@@ -116,6 +116,7 @@ static inline void flowi4_init_output(struct flowi4 *fl4, int oif,
 	fl4->saddr = saddr;
 	fl4->fl4_dport = dport;
 	fl4->fl4_sport = sport;
+	fl4->flowi4_multipath_hash = 0;
 }

 /* Reset some input parameters after previous lookup */


@@ -726,7 +726,6 @@ static inline int __nlmsg_parse(const struct nlmsghdr *nlh, int hdrlen,
 * @hdrlen: length of family specific header
 * @tb: destination array with maxtype+1 elements
 * @maxtype: maximum attribute type to be expected
- * @validate: validation strictness
 * @extack: extended ACK report struct
 *
 * See nla_parse()
@@ -824,7 +823,6 @@ static inline int nla_validate_deprecated(const struct nlattr *head, int len,
 * @len: length of attribute stream
 * @maxtype: maximum attribute type to be expected
 * @policy: validation policy
- * @validate: validation strictness
 * @extack: extended ACK report struct
 *
 * Validates all attributes in the specified attribute stream against the

Some files were not shown because too many files changed in this diff.