Merge tag 'net-6.3-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from netfilter, wifi and ipsec.

   A few more changes than usual, but it's pretty normal for us that the
   rc3/rc4 PRs are oversized as people start testing in earnest.

   Possibly an extra boost from people deploying the 6.1 LTS, but that's
   more of an unscientific hunch.

  Current release - regressions:

   - phy: mscc: fix deadlock in phy_ethtool_{get,set}_wol()

   - virtio: vsock: don't use skbuff state to account credit

   - virtio: vsock: don't drop skbuff on copy failure

   - virtio_net: fix page_to_skb() miscalculating the memory size

  Current release - new code bugs:

   - eth: correct xdp_features after device reconfig

   - wifi: nl80211: fix the puncturing bitmap policy

   - net/mlx5e: flower:
      - fix raw counter initialization
      - fix missing error code
      - fix cloned flow attribute

   - ipa:
      - fix some register validity checks
      - fix a surprising number of bad offsets
      - kill FILT_ROUT_CACHE_CFG IPA register

  Previous releases - regressions:

   - tcp: fix bind() conflict check for dual-stack wildcard address

   - veth: fix use after free in XDP_REDIRECT when skb headroom is small

   - ipv4: fix incorrect table ID in IOCTL path

   - ipvlan: make skb->skb_iif track skb->dev for l3s mode

   - mptcp:
      - fix possible deadlock in subflow_error_report
      - fix UaFs when destroying unaccepted and listening sockets

   - dsa: mv88e6xxx: fix max_mtu of 1492 on 6165, 6191, 6220, 6250, 6290

  Previous releases - always broken:

   - tcp: tcp_make_synack() can be called from process context, don't
     assume preemption is disabled when updating stats

   - netfilter: correct length for loading protocol registers

   - virtio_net: add checking sq is full inside xdp xmit

   - bonding: restore IFF_MASTER/SLAVE flags on bond enslave Ethertype
     change

   - phy: nxp-c45-tja11xx: fix MII_BASIC_CONFIG_REV bit number

   - eth: i40e: fix crash during reboot when adapter is in recovery mode

   - eth: ice: avoid deadlock on rtnl lock when auxiliary device
     plug/unplug meets bonding

   - dsa: mt7530:
      - remove now incorrect comment regarding port 5
      - set PLL frequency and trgmii only when trgmii is used

   - eth: mtk_eth_soc: reset PCS state when changing interface types

  Misc:

   - ynl: another license adjustment

   - move the TCA_EXT_WARN_MSG attribute for tc action"

* tag 'net-6.3-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (108 commits)
  selftests: bonding: add tests for ether type changes
  bonding: restore bond's IFF_SLAVE flag if a non-eth dev enslave fails
  bonding: restore IFF_MASTER/SLAVE flags on bond enslave ether type change
  net: renesas: rswitch: Fix GWTSDIE register handling
  net: renesas: rswitch: Fix the output value of quote from rswitch_rx()
  ethernet: sun: add check for the mdesc_grab()
  net: ipa: fix some register validity checks
  net: ipa: kill FILT_ROUT_CACHE_CFG IPA register
  net: ipa: add two missing declarations
  net: ipa: reg: include <linux/bug.h>
  net: xdp: don't call notifiers during driver init
  net/sched: act_api: add specific EXT_WARN_MSG for tc action
  Revert "net/sched: act_api: move TCA_EXT_WARN_MSG to the correct hierarchy"
  net: dsa: microchip: fix RGMII delay configuration on KSZ8765/KSZ8794/KSZ8795
  ynl: make the tooling check the license
  ynl: broaden the license even more
  tools: ynl: make definitions optional again
  hsr: ratelimit only when errors are printed
  qed/qed_mng_tlv: correctly zero out ->min instead of ->hour
  selftests: net: devlink_port_split.py: skip test if no suitable device available
  ...
Linus Torvalds 2023-03-17 13:31:16 -07:00
Parent 8d3c682a5e f5e305e63b
Commit 478a351ce0
132 changed files with 1325 additions and 670 deletions


@ -215,6 +215,9 @@ Jens Axboe <axboe@suse.de>
Jens Osterkamp <Jens.Osterkamp@de.ibm.com>
Jernej Skrabec <jernej.skrabec@gmail.com> <jernej.skrabec@siol.net>
Jessica Zhang <quic_jesszhan@quicinc.com> <jesszhan@codeaurora.org>
Jiri Pirko <jiri@resnulli.us> <jiri@nvidia.com>
Jiri Pirko <jiri@resnulli.us> <jiri@mellanox.com>
Jiri Pirko <jiri@resnulli.us> <jpirko@redhat.com>
Jiri Slaby <jirislaby@kernel.org> <jirislaby@gmail.com>
Jiri Slaby <jirislaby@kernel.org> <jslaby@novell.com>
Jiri Slaby <jirislaby@kernel.org> <jslaby@suse.com>


@ -1,4 +1,4 @@
# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
# SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
%YAML 1.2
---
$id: http://kernel.org/schemas/netlink/genetlink-c.yaml#


@ -1,4 +1,4 @@
# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
# SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
%YAML 1.2
---
$id: http://kernel.org/schemas/netlink/genetlink-legacy.yaml#


@ -1,4 +1,4 @@
# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
# SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
%YAML 1.2
---
$id: http://kernel.org/schemas/netlink/genetlink-legacy.yaml#


@ -1,4 +1,4 @@
# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
# SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
name: ethtool


@ -1,4 +1,4 @@
# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
# SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
name: fou


@ -1,4 +1,4 @@
# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
# SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
name: netdev
@ -9,6 +9,7 @@ definitions:
-
type: flags
name: xdp-act
render-max: true
entries:
-
name: basic


@ -24,7 +24,8 @@ YAML specifications can be found under ``Documentation/netlink/specs/``
This document describes details of the schema.
See :doc:`intro-specs` for a practical starting guide.
All specs must be licensed under ``GPL-2.0-only OR BSD-3-Clause``
All specs must be licensed under
``((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)``
to allow for easy adoption in user space code.
Compatibility levels


@ -5971,7 +5971,7 @@ F: include/linux/dm-*.h
F: include/uapi/linux/dm-*.h
DEVLINK
M: Jiri Pirko <jiri@nvidia.com>
M: Jiri Pirko <jiri@resnulli.us>
L: netdev@vger.kernel.org
S: Supported
F: Documentation/networking/devlink
@ -15079,7 +15079,7 @@ F: Documentation/hwmon/nzxt-smart2.rst
F: drivers/hwmon/nzxt-smart2.c
OBJAGG
M: Jiri Pirko <jiri@nvidia.com>
M: Jiri Pirko <jiri@resnulli.us>
L: netdev@vger.kernel.org
S: Supported
F: include/linux/objagg.h
@ -15853,7 +15853,7 @@ F: drivers/video/logo/logo_parisc*
F: include/linux/hp_sdc.h
PARMAN
M: Jiri Pirko <jiri@nvidia.com>
M: Jiri Pirko <jiri@resnulli.us>
L: netdev@vger.kernel.org
S: Supported
F: include/linux/parman.h


@ -1775,6 +1775,19 @@ void bond_lower_state_changed(struct slave *slave)
slave_err(bond_dev, slave_dev, "Error: %s\n", errmsg); \
} while (0)
/* The bonding driver uses ether_setup() to convert a master bond device
* to ARPHRD_ETHER, that resets the target netdevice's flags so we always
* have to restore the IFF_MASTER flag, and only restore IFF_SLAVE if it was set
*/
static void bond_ether_setup(struct net_device *bond_dev)
{
unsigned int slave_flag = bond_dev->flags & IFF_SLAVE;
ether_setup(bond_dev);
bond_dev->flags |= IFF_MASTER | slave_flag;
bond_dev->priv_flags &= ~IFF_TX_SKB_SHARING;
}
/* enslave device <slave> to bond device <master> */
int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
struct netlink_ext_ack *extack)
@ -1866,10 +1879,8 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
if (slave_dev->type != ARPHRD_ETHER)
bond_setup_by_slave(bond_dev, slave_dev);
else {
ether_setup(bond_dev);
bond_dev->priv_flags &= ~IFF_TX_SKB_SHARING;
}
else
bond_ether_setup(bond_dev);
call_netdevice_notifiers(NETDEV_POST_TYPE_CHANGE,
bond_dev);
@ -2289,9 +2300,7 @@ err_undo_flags:
eth_hw_addr_random(bond_dev);
if (bond_dev->type != ARPHRD_ETHER) {
dev_close(bond_dev);
ether_setup(bond_dev);
bond_dev->flags |= IFF_MASTER;
bond_dev->priv_flags &= ~IFF_TX_SKB_SHARING;
bond_ether_setup(bond_dev);
}
}


@ -93,20 +93,20 @@ static int cc770_get_of_node_data(struct platform_device *pdev,
if (priv->can.clock.freq > 8000000)
priv->cpu_interface |= CPUIF_DMC;
if (of_get_property(np, "bosch,divide-memory-clock", NULL))
if (of_property_read_bool(np, "bosch,divide-memory-clock"))
priv->cpu_interface |= CPUIF_DMC;
if (of_get_property(np, "bosch,iso-low-speed-mux", NULL))
if (of_property_read_bool(np, "bosch,iso-low-speed-mux"))
priv->cpu_interface |= CPUIF_MUX;
if (!of_get_property(np, "bosch,no-comperator-bypass", NULL))
priv->bus_config |= BUSCFG_CBY;
if (of_get_property(np, "bosch,disconnect-rx0-input", NULL))
if (of_property_read_bool(np, "bosch,disconnect-rx0-input"))
priv->bus_config |= BUSCFG_DR0;
if (of_get_property(np, "bosch,disconnect-rx1-input", NULL))
if (of_property_read_bool(np, "bosch,disconnect-rx1-input"))
priv->bus_config |= BUSCFG_DR1;
if (of_get_property(np, "bosch,disconnect-tx1-output", NULL))
if (of_property_read_bool(np, "bosch,disconnect-tx1-output"))
priv->bus_config |= BUSCFG_DT1;
if (of_get_property(np, "bosch,polarity-dominant", NULL))
if (of_property_read_bool(np, "bosch,polarity-dominant"))
priv->bus_config |= BUSCFG_POL;
prop = of_get_property(np, "bosch,clock-out-frequency", &prop_size);


@ -319,7 +319,7 @@ static const u16 ksz8795_regs[] = {
[S_BROADCAST_CTRL] = 0x06,
[S_MULTICAST_CTRL] = 0x04,
[P_XMII_CTRL_0] = 0x06,
[P_XMII_CTRL_1] = 0x56,
[P_XMII_CTRL_1] = 0x06,
};
static const u32 ksz8795_masks[] = {


@ -430,8 +430,6 @@ mt7530_pad_clk_setup(struct dsa_switch *ds, phy_interface_t interface)
switch (interface) {
case PHY_INTERFACE_MODE_RGMII:
trgint = 0;
/* PLL frequency: 125MHz */
ncpo1 = 0x0c80;
break;
case PHY_INTERFACE_MODE_TRGMII:
trgint = 1;
@ -462,38 +460,40 @@ mt7530_pad_clk_setup(struct dsa_switch *ds, phy_interface_t interface)
mt7530_rmw(priv, MT7530_P6ECR, P6_INTF_MODE_MASK,
P6_INTF_MODE(trgint));
/* Lower Tx Driving for TRGMII path */
for (i = 0 ; i < NUM_TRGMII_CTRL ; i++)
mt7530_write(priv, MT7530_TRGMII_TD_ODT(i),
TD_DM_DRVP(8) | TD_DM_DRVN(8));
if (trgint) {
/* Lower Tx Driving for TRGMII path */
for (i = 0 ; i < NUM_TRGMII_CTRL ; i++)
mt7530_write(priv, MT7530_TRGMII_TD_ODT(i),
TD_DM_DRVP(8) | TD_DM_DRVN(8));
/* Disable MT7530 core and TRGMII Tx clocks */
core_clear(priv, CORE_TRGMII_GSW_CLK_CG,
REG_GSWCK_EN | REG_TRGMIICK_EN);
/* Disable MT7530 core and TRGMII Tx clocks */
core_clear(priv, CORE_TRGMII_GSW_CLK_CG,
REG_GSWCK_EN | REG_TRGMIICK_EN);
/* Setup the MT7530 TRGMII Tx Clock */
core_write(priv, CORE_PLL_GROUP5, RG_LCDDS_PCW_NCPO1(ncpo1));
core_write(priv, CORE_PLL_GROUP6, RG_LCDDS_PCW_NCPO0(0));
core_write(priv, CORE_PLL_GROUP10, RG_LCDDS_SSC_DELTA(ssc_delta));
core_write(priv, CORE_PLL_GROUP11, RG_LCDDS_SSC_DELTA1(ssc_delta));
core_write(priv, CORE_PLL_GROUP4,
RG_SYSPLL_DDSFBK_EN | RG_SYSPLL_BIAS_EN |
RG_SYSPLL_BIAS_LPF_EN);
core_write(priv, CORE_PLL_GROUP2,
RG_SYSPLL_EN_NORMAL | RG_SYSPLL_VODEN |
RG_SYSPLL_POSDIV(1));
core_write(priv, CORE_PLL_GROUP7,
RG_LCDDS_PCW_NCPO_CHG | RG_LCCDS_C(3) |
RG_LCDDS_PWDB | RG_LCDDS_ISO_EN);
/* Setup the MT7530 TRGMII Tx Clock */
core_write(priv, CORE_PLL_GROUP5, RG_LCDDS_PCW_NCPO1(ncpo1));
core_write(priv, CORE_PLL_GROUP6, RG_LCDDS_PCW_NCPO0(0));
core_write(priv, CORE_PLL_GROUP10, RG_LCDDS_SSC_DELTA(ssc_delta));
core_write(priv, CORE_PLL_GROUP11, RG_LCDDS_SSC_DELTA1(ssc_delta));
core_write(priv, CORE_PLL_GROUP4,
RG_SYSPLL_DDSFBK_EN | RG_SYSPLL_BIAS_EN |
RG_SYSPLL_BIAS_LPF_EN);
core_write(priv, CORE_PLL_GROUP2,
RG_SYSPLL_EN_NORMAL | RG_SYSPLL_VODEN |
RG_SYSPLL_POSDIV(1));
core_write(priv, CORE_PLL_GROUP7,
RG_LCDDS_PCW_NCPO_CHG | RG_LCCDS_C(3) |
RG_LCDDS_PWDB | RG_LCDDS_ISO_EN);
/* Enable MT7530 core and TRGMII Tx clocks */
core_set(priv, CORE_TRGMII_GSW_CLK_CG,
REG_GSWCK_EN | REG_TRGMIICK_EN);
if (!trgint)
/* Enable MT7530 core and TRGMII Tx clocks */
core_set(priv, CORE_TRGMII_GSW_CLK_CG,
REG_GSWCK_EN | REG_TRGMIICK_EN);
} else {
for (i = 0 ; i < NUM_TRGMII_CTRL; i++)
mt7530_rmw(priv, MT7530_TRGMII_RD(i),
RD_TAP_MASK, RD_TAP(16));
}
return 0;
}
@ -2201,7 +2201,7 @@ mt7530_setup(struct dsa_switch *ds)
mt7530_pll_setup(priv);
/* Enable Port 6 only; P5 as GMAC5 which currently is not supported */
/* Enable port 6 */
val = mt7530_read(priv, MT7530_MHWTRAP);
val &= ~MHWTRAP_P6_DIS & ~MHWTRAP_PHY_ACCESS;
val |= MHWTRAP_MANUAL;


@ -3549,7 +3549,7 @@ static int mv88e6xxx_get_max_mtu(struct dsa_switch *ds, int port)
return 10240 - VLAN_ETH_HLEN - EDSA_HLEN - ETH_FCS_LEN;
else if (chip->info->ops->set_max_frame_size)
return 1632 - VLAN_ETH_HLEN - EDSA_HLEN - ETH_FCS_LEN;
return 1522 - VLAN_ETH_HLEN - EDSA_HLEN - ETH_FCS_LEN;
return ETH_DATA_LEN;
}
static int mv88e6xxx_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
@ -3557,6 +3557,17 @@ static int mv88e6xxx_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
struct mv88e6xxx_chip *chip = ds->priv;
int ret = 0;
/* For families where we don't know how to alter the MTU,
* just accept any value up to ETH_DATA_LEN
*/
if (!chip->info->ops->port_set_jumbo_size &&
!chip->info->ops->set_max_frame_size) {
if (new_mtu > ETH_DATA_LEN)
return -EINVAL;
return 0;
}
if (dsa_is_dsa_port(ds, port) || dsa_is_cpu_port(ds, port))
new_mtu += EDSA_HLEN;
@ -3565,9 +3576,6 @@ static int mv88e6xxx_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
ret = chip->info->ops->port_set_jumbo_size(chip, port, new_mtu);
else if (chip->info->ops->set_max_frame_size)
ret = chip->info->ops->set_max_frame_size(chip, new_mtu);
else
if (new_mtu > 1522)
ret = -EINVAL;
mv88e6xxx_reg_unlock(chip);
return ret;


@ -850,11 +850,20 @@ static int ena_set_channels(struct net_device *netdev,
struct ena_adapter *adapter = netdev_priv(netdev);
u32 count = channels->combined_count;
/* The check for max value is already done in ethtool */
if (count < ENA_MIN_NUM_IO_QUEUES ||
(ena_xdp_present(adapter) &&
!ena_xdp_legal_queue_count(adapter, count)))
if (count < ENA_MIN_NUM_IO_QUEUES)
return -EINVAL;
if (!ena_xdp_legal_queue_count(adapter, count)) {
if (ena_xdp_present(adapter))
return -EINVAL;
xdp_clear_features_flag(netdev);
} else {
xdp_set_features_flag(netdev,
NETDEV_XDP_ACT_BASIC |
NETDEV_XDP_ACT_REDIRECT);
}
return ena_update_queue_count(adapter, count);
}


@ -4105,8 +4105,6 @@ static void ena_set_conf_feat_params(struct ena_adapter *adapter,
/* Set offload features */
ena_set_dev_offloads(feat, netdev);
netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT;
adapter->max_mtu = feat->dev_attr.max_mtu;
netdev->max_mtu = adapter->max_mtu;
netdev->min_mtu = ENA_MIN_MTU;
@ -4393,6 +4391,10 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
ena_config_debug_area(adapter);
if (ena_xdp_legal_queue_count(adapter, adapter->num_io_queues))
netdev->xdp_features = NETDEV_XDP_ACT_BASIC |
NETDEV_XDP_ACT_REDIRECT;
memcpy(adapter->netdev->perm_addr, adapter->mac_addr, netdev->addr_len);
netif_carrier_off(netdev);


@ -412,6 +412,25 @@ int aq_xdp_xmit(struct net_device *dev, int num_frames,
return num_frames - drop;
}
static struct sk_buff *aq_xdp_build_skb(struct xdp_buff *xdp,
struct net_device *dev,
struct aq_ring_buff_s *buff)
{
struct xdp_frame *xdpf;
struct sk_buff *skb;
xdpf = xdp_convert_buff_to_frame(xdp);
if (unlikely(!xdpf))
return NULL;
skb = xdp_build_skb_from_frame(xdpf, dev);
if (!skb)
return NULL;
aq_get_rxpages_xdp(buff, xdp);
return skb;
}
static struct sk_buff *aq_xdp_run_prog(struct aq_nic_s *aq_nic,
struct xdp_buff *xdp,
struct aq_ring_s *rx_ring,
@ -431,7 +450,7 @@ static struct sk_buff *aq_xdp_run_prog(struct aq_nic_s *aq_nic,
prog = READ_ONCE(rx_ring->xdp_prog);
if (!prog)
goto pass;
return aq_xdp_build_skb(xdp, aq_nic->ndev, buff);
prefetchw(xdp->data_hard_start); /* xdp_frame write */
@ -442,17 +461,12 @@ static struct sk_buff *aq_xdp_run_prog(struct aq_nic_s *aq_nic,
act = bpf_prog_run_xdp(prog, xdp);
switch (act) {
case XDP_PASS:
pass:
xdpf = xdp_convert_buff_to_frame(xdp);
if (unlikely(!xdpf))
goto out_aborted;
skb = xdp_build_skb_from_frame(xdpf, aq_nic->ndev);
skb = aq_xdp_build_skb(xdp, aq_nic->ndev, buff);
if (!skb)
goto out_aborted;
u64_stats_update_begin(&rx_ring->stats.rx.syncp);
++rx_ring->stats.rx.xdp_pass;
u64_stats_update_end(&rx_ring->stats.rx.syncp);
aq_get_rxpages_xdp(buff, xdp);
return skb;
case XDP_TX:
xdpf = xdp_convert_buff_to_frame(xdp);


@ -6990,11 +6990,9 @@ static int bnxt_hwrm_func_qcfg(struct bnxt *bp)
if (flags & FUNC_QCFG_RESP_FLAGS_FW_DCBX_AGENT_ENABLED)
bp->fw_cap |= BNXT_FW_CAP_DCBX_AGENT;
}
if (BNXT_PF(bp) && (flags & FUNC_QCFG_RESP_FLAGS_MULTI_HOST)) {
if (BNXT_PF(bp) && (flags & FUNC_QCFG_RESP_FLAGS_MULTI_HOST))
bp->flags |= BNXT_FLAG_MULTI_HOST;
if (bp->fw_cap & BNXT_FW_CAP_PTP_RTC)
bp->fw_cap &= ~BNXT_FW_CAP_PTP_RTC;
}
if (flags & FUNC_QCFG_RESP_FLAGS_RING_MONITOR_ENABLED)
bp->fw_cap |= BNXT_FW_CAP_RING_MONITOR;


@ -2000,6 +2000,8 @@ struct bnxt {
u32 fw_dbg_cap;
#define BNXT_NEW_RM(bp) ((bp)->fw_cap & BNXT_FW_CAP_NEW_RM)
#define BNXT_PTP_USE_RTC(bp) (!BNXT_MH(bp) && \
((bp)->fw_cap & BNXT_FW_CAP_PTP_RTC))
u32 hwrm_spec_code;
u16 hwrm_cmd_seq;
u16 hwrm_cmd_kong_seq;


@ -63,7 +63,7 @@ static int bnxt_ptp_settime(struct ptp_clock_info *ptp_info,
ptp_info);
u64 ns = timespec64_to_ns(ts);
if (ptp->bp->fw_cap & BNXT_FW_CAP_PTP_RTC)
if (BNXT_PTP_USE_RTC(ptp->bp))
return bnxt_ptp_cfg_settime(ptp->bp, ns);
spin_lock_bh(&ptp->ptp_lock);
@ -196,7 +196,7 @@ static int bnxt_ptp_adjtime(struct ptp_clock_info *ptp_info, s64 delta)
struct bnxt_ptp_cfg *ptp = container_of(ptp_info, struct bnxt_ptp_cfg,
ptp_info);
if (ptp->bp->fw_cap & BNXT_FW_CAP_PTP_RTC)
if (BNXT_PTP_USE_RTC(ptp->bp))
return bnxt_ptp_adjphc(ptp, delta);
spin_lock_bh(&ptp->ptp_lock);
@ -205,34 +205,39 @@ static int bnxt_ptp_adjtime(struct ptp_clock_info *ptp_info, s64 delta)
return 0;
}
static int bnxt_ptp_adjfine_rtc(struct bnxt *bp, long scaled_ppm)
{
s32 ppb = scaled_ppm_to_ppb(scaled_ppm);
struct hwrm_port_mac_cfg_input *req;
int rc;
rc = hwrm_req_init(bp, req, HWRM_PORT_MAC_CFG);
if (rc)
return rc;
req->ptp_freq_adj_ppb = cpu_to_le32(ppb);
req->enables = cpu_to_le32(PORT_MAC_CFG_REQ_ENABLES_PTP_FREQ_ADJ_PPB);
rc = hwrm_req_send(bp, req);
if (rc)
netdev_err(bp->dev,
"ptp adjfine failed. rc = %d\n", rc);
return rc;
}
static int bnxt_ptp_adjfine(struct ptp_clock_info *ptp_info, long scaled_ppm)
{
struct bnxt_ptp_cfg *ptp = container_of(ptp_info, struct bnxt_ptp_cfg,
ptp_info);
struct hwrm_port_mac_cfg_input *req;
struct bnxt *bp = ptp->bp;
int rc = 0;
if (!(ptp->bp->fw_cap & BNXT_FW_CAP_PTP_RTC)) {
spin_lock_bh(&ptp->ptp_lock);
timecounter_read(&ptp->tc);
ptp->cc.mult = adjust_by_scaled_ppm(ptp->cmult, scaled_ppm);
spin_unlock_bh(&ptp->ptp_lock);
} else {
s32 ppb = scaled_ppm_to_ppb(scaled_ppm);
if (BNXT_PTP_USE_RTC(bp))
return bnxt_ptp_adjfine_rtc(bp, scaled_ppm);
rc = hwrm_req_init(bp, req, HWRM_PORT_MAC_CFG);
if (rc)
return rc;
req->ptp_freq_adj_ppb = cpu_to_le32(ppb);
req->enables = cpu_to_le32(PORT_MAC_CFG_REQ_ENABLES_PTP_FREQ_ADJ_PPB);
rc = hwrm_req_send(ptp->bp, req);
if (rc)
netdev_err(ptp->bp->dev,
"ptp adjfine failed. rc = %d\n", rc);
}
return rc;
spin_lock_bh(&ptp->ptp_lock);
timecounter_read(&ptp->tc);
ptp->cc.mult = adjust_by_scaled_ppm(ptp->cmult, scaled_ppm);
spin_unlock_bh(&ptp->ptp_lock);
return 0;
}
void bnxt_ptp_pps_event(struct bnxt *bp, u32 data1, u32 data2)
@ -879,7 +884,7 @@ int bnxt_ptp_init_rtc(struct bnxt *bp, bool phc_cfg)
u64 ns;
int rc;
if (!bp->ptp_cfg || !(bp->fw_cap & BNXT_FW_CAP_PTP_RTC))
if (!bp->ptp_cfg || !BNXT_PTP_USE_RTC(bp))
return -ENODEV;
if (!phc_cfg) {
@ -932,13 +937,14 @@ int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg)
atomic_set(&ptp->tx_avail, BNXT_MAX_TX_TS);
spin_lock_init(&ptp->ptp_lock);
if (bp->fw_cap & BNXT_FW_CAP_PTP_RTC) {
if (BNXT_PTP_USE_RTC(bp)) {
bnxt_ptp_timecounter_init(bp, false);
rc = bnxt_ptp_init_rtc(bp, phc_cfg);
if (rc)
goto out;
} else {
bnxt_ptp_timecounter_init(bp, true);
bnxt_ptp_adjfine_rtc(bp, 0);
}
ptp->ptp_info = bnxt_ptp_caps;


@ -4990,7 +4990,7 @@ static int macb_probe(struct platform_device *pdev)
bp->jumbo_max_len = macb_config->jumbo_max_len;
bp->wol = 0;
if (of_get_property(np, "magic-packet", NULL))
if (of_property_read_bool(np, "magic-packet"))
bp->wol |= MACB_WOL_HAS_MAGIC_PACKET;
device_set_wakeup_capable(&pdev->dev, bp->wol & MACB_WOL_HAS_MAGIC_PACKET);


@ -735,12 +735,17 @@ static int nicvf_set_channels(struct net_device *dev,
if (channel->tx_count > nic->max_queues)
return -EINVAL;
if (nic->xdp_prog &&
((channel->tx_count + channel->rx_count) > nic->max_queues)) {
netdev_err(nic->netdev,
"XDP mode, RXQs + TXQs > Max %d\n",
nic->max_queues);
return -EINVAL;
if (channel->tx_count + channel->rx_count > nic->max_queues) {
if (nic->xdp_prog) {
netdev_err(nic->netdev,
"XDP mode, RXQs + TXQs > Max %d\n",
nic->max_queues);
return -EINVAL;
}
xdp_clear_features_flag(nic->netdev);
} else if (!pass1_silicon(nic->pdev)) {
xdp_set_features_flag(dev, NETDEV_XDP_ACT_BASIC);
}
if (if_up)


@ -2218,7 +2218,9 @@ static int nicvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
netdev->netdev_ops = &nicvf_netdev_ops;
netdev->watchdog_timeo = NICVF_TX_TIMEOUT;
netdev->xdp_features = NETDEV_XDP_ACT_BASIC;
if (!pass1_silicon(nic->pdev) &&
nic->rx_queues + nic->tx_queues <= nic->max_queues)
netdev->xdp_features = NETDEV_XDP_ACT_BASIC;
/* MTU range: 64 - 9200 */
netdev->min_mtu = NIC_HW_MIN_FRS;


@ -1393,9 +1393,9 @@ static struct dm9000_plat_data *dm9000_parse_dt(struct device *dev)
if (!pdata)
return ERR_PTR(-ENOMEM);
if (of_find_property(np, "davicom,ext-phy", NULL))
if (of_property_read_bool(np, "davicom,ext-phy"))
pdata->flags |= DM9000_PLATF_EXT_PHY;
if (of_find_property(np, "davicom,no-eeprom", NULL))
if (of_property_read_bool(np, "davicom,no-eeprom"))
pdata->flags |= DM9000_PLATF_NO_EEPROM;
ret = of_get_mac_address(np, pdata->dev_addr);


@ -4251,7 +4251,7 @@ fec_probe(struct platform_device *pdev)
if (ret)
goto failed_ipc_init;
if (of_get_property(np, "fsl,magic-packet", NULL))
if (of_property_read_bool(np, "fsl,magic-packet"))
fep->wol_flag |= FEC_WOL_HAS_MAGIC_PACKET;
ret = fec_enet_init_stop_mode(fep, np);


@ -937,7 +937,7 @@ static int mpc52xx_fec_probe(struct platform_device *op)
priv->phy_node = of_parse_phandle(np, "phy-handle", 0);
/* the 7-wire property means don't use MII mode */
if (of_find_property(np, "fsl,7-wire-mode", NULL)) {
if (of_property_read_bool(np, "fsl,7-wire-mode")) {
priv->seven_wire_mode = 1;
dev_info(&ndev->dev, "using 7-wire PHY mode\n");
}


@ -787,10 +787,10 @@ static int gfar_of_init(struct platform_device *ofdev, struct net_device **pdev)
else
priv->interface = gfar_get_interface(dev);
if (of_find_property(np, "fsl,magic-packet", NULL))
if (of_property_read_bool(np, "fsl,magic-packet"))
priv->device_flags |= FSL_GIANFAR_DEV_HAS_MAGIC_PACKET;
if (of_get_property(np, "fsl,wake-on-filer", NULL))
if (of_property_read_bool(np, "fsl,wake-on-filer"))
priv->device_flags |= FSL_GIANFAR_DEV_HAS_WAKE_ON_FILER;
priv->phy_node = of_parse_phandle(np, "phy-handle", 0);


@ -78,6 +78,7 @@ static int sni_82596_probe(struct platform_device *dev)
void __iomem *mpu_addr;
void __iomem *ca_addr;
u8 __iomem *eth_addr;
u8 mac[ETH_ALEN];
res = platform_get_resource(dev, IORESOURCE_MEM, 0);
ca = platform_get_resource(dev, IORESOURCE_MEM, 1);
@ -109,12 +110,13 @@ static int sni_82596_probe(struct platform_device *dev)
goto probe_failed;
/* someone seems to like messed up stuff */
netdevice->dev_addr[0] = readb(eth_addr + 0x0b);
netdevice->dev_addr[1] = readb(eth_addr + 0x0a);
netdevice->dev_addr[2] = readb(eth_addr + 0x09);
netdevice->dev_addr[3] = readb(eth_addr + 0x08);
netdevice->dev_addr[4] = readb(eth_addr + 0x07);
netdevice->dev_addr[5] = readb(eth_addr + 0x06);
mac[0] = readb(eth_addr + 0x0b);
mac[1] = readb(eth_addr + 0x0a);
mac[2] = readb(eth_addr + 0x09);
mac[3] = readb(eth_addr + 0x08);
mac[4] = readb(eth_addr + 0x07);
mac[5] = readb(eth_addr + 0x06);
eth_hw_addr_set(netdevice, mac);
iounmap(eth_addr);
if (netdevice->irq < 0) {


@ -2939,9 +2939,9 @@ static int emac_init_config(struct emac_instance *dev)
}
/* Fixup some feature bits based on the device tree */
if (of_get_property(np, "has-inverted-stacr-oc", NULL))
if (of_property_read_bool(np, "has-inverted-stacr-oc"))
dev->features |= EMAC_FTR_STACR_OC_INVERT;
if (of_get_property(np, "has-new-stacr-staopc", NULL))
if (of_property_read_bool(np, "has-new-stacr-staopc"))
dev->features |= EMAC_FTR_HAS_NEW_STACR;
/* CAB lacks the appropriate properties */
@ -3042,7 +3042,7 @@ static int emac_probe(struct platform_device *ofdev)
* property here for now, but new flat device trees should set a
* status property to "disabled" instead.
*/
if (of_get_property(np, "unused", NULL) || !of_device_is_available(np))
if (of_property_read_bool(np, "unused") || !of_device_is_available(np))
return -ENODEV;
/* Find ourselves in the bootlist if we are there */
@ -3333,7 +3333,7 @@ static void __init emac_make_bootlist(void)
if (of_match_node(emac_match, np) == NULL)
continue;
if (of_get_property(np, "unused", NULL))
if (of_property_read_bool(np, "unused"))
continue;
idx = of_get_property(np, "cell-index", NULL);
if (idx == NULL)


@ -242,7 +242,7 @@ static int rgmii_probe(struct platform_device *ofdev)
}
/* Check for RGMII flags */
if (of_get_property(ofdev->dev.of_node, "has-mdio", NULL))
if (of_property_read_bool(ofdev->dev.of_node, "has-mdio"))
dev->flags |= EMAC_RGMII_FLAG_HAS_MDIO;
/* CAB lacks the right properties, fix this up */


@ -15525,6 +15525,7 @@ static int i40e_init_recovery_mode(struct i40e_pf *pf, struct i40e_hw *hw)
int err;
int v_idx;
pci_set_drvdata(pf->pdev, pf);
pci_save_state(pf->pdev);
/* set up periodic task facility */


@ -509,6 +509,7 @@ enum ice_pf_flags {
ICE_FLAG_VF_VLAN_PRUNING,
ICE_FLAG_LINK_LENIENT_MODE_ENA,
ICE_FLAG_PLUG_AUX_DEV,
ICE_FLAG_UNPLUG_AUX_DEV,
ICE_FLAG_MTU_CHANGED,
ICE_FLAG_GNSS, /* GNSS successfully initialized */
ICE_PF_FLAGS_NBITS /* must be last */
@ -955,16 +956,11 @@ static inline void ice_set_rdma_cap(struct ice_pf *pf)
*/
static inline void ice_clear_rdma_cap(struct ice_pf *pf)
{
/* We can directly unplug aux device here only if the flag bit
* ICE_FLAG_PLUG_AUX_DEV is not set because ice_unplug_aux_dev()
* could race with ice_plug_aux_dev() called from
* ice_service_task(). In this case we only clear that bit now and
* aux device will be unplugged later once ice_plug_aux_device()
* called from ice_service_task() finishes (see ice_service_task()).
/* defer unplug to service task to avoid RTNL lock and
* clear PLUG bit so that pending plugs don't interfere
*/
if (!test_and_clear_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags))
ice_unplug_aux_dev(pf);
clear_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags);
set_bit(ICE_FLAG_UNPLUG_AUX_DEV, pf->flags);
clear_bit(ICE_FLAG_RDMA_ENA, pf->flags);
}
#endif /* _ICE_H_ */


@ -2316,18 +2316,15 @@ static void ice_service_task(struct work_struct *work)
}
}
if (test_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags)) {
/* Plug aux device per request */
ice_plug_aux_dev(pf);
/* unplug aux dev per request, if an unplug request came in
* while processing a plug request, this will handle it
*/
if (test_and_clear_bit(ICE_FLAG_UNPLUG_AUX_DEV, pf->flags))
ice_unplug_aux_dev(pf);
/* Mark plugging as done but check whether unplug was
* requested during ice_plug_aux_dev() call
* (e.g. from ice_clear_rdma_cap()) and if so then
* plug aux device.
*/
if (!test_and_clear_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags))
ice_unplug_aux_dev(pf);
}
/* Plug aux device per request */
if (test_and_clear_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags))
ice_plug_aux_dev(pf);
if (test_and_clear_bit(ICE_FLAG_MTU_CHANGED, pf->flags)) {
struct iidc_event *event;


@ -184,8 +184,6 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
}
netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
ice_qvec_dis_irq(vsi, rx_ring, q_vector);
ice_fill_txq_meta(vsi, tx_ring, &txq_meta);
err = ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, 0, tx_ring, &txq_meta);
if (err)
@ -200,10 +198,11 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
if (err)
return err;
}
ice_qvec_dis_irq(vsi, rx_ring, q_vector);
err = ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, true);
if (err)
return err;
ice_clean_rx_ring(rx_ring);
ice_qvec_toggle_napi(vsi, q_vector, false);
ice_qp_clean_rings(vsi, q_idx);


@ -4996,6 +4996,14 @@ static int mvpp2_bm_switch_buffers(struct mvpp2 *priv, bool percpu)
for (i = 0; i < priv->port_count; i++) {
port = priv->port_list[i];
if (percpu && port->ntxqs >= num_possible_cpus() * 2)
xdp_set_features_flag(port->dev,
NETDEV_XDP_ACT_BASIC |
NETDEV_XDP_ACT_REDIRECT |
NETDEV_XDP_ACT_NDO_XMIT);
else
xdp_clear_features_flag(port->dev);
mvpp2_swf_bm_pool_init(port);
if (status[i])
mvpp2_open(port->dev);
@ -6863,13 +6871,14 @@ static int mvpp2_port_probe(struct platform_device *pdev,
if (!port->priv->percpu_pools)
mvpp2_set_hw_csum(port, port->pool_long->id);
else if (port->ntxqs >= num_possible_cpus() * 2)
dev->xdp_features = NETDEV_XDP_ACT_BASIC |
NETDEV_XDP_ACT_REDIRECT |
NETDEV_XDP_ACT_NDO_XMIT;
dev->vlan_features |= features;
netif_set_tso_max_segs(dev, MVPP2_MAX_TSO_SEGS);
dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
NETDEV_XDP_ACT_NDO_XMIT;
dev->priv_flags |= IFF_UNICAST_FLT;
/* MTU range: 68 - 9704 */


@ -542,6 +542,10 @@
#define SGMII_SEND_AN_ERROR_EN BIT(11)
#define SGMII_IF_MODE_MASK GENMASK(5, 1)
/* Register to reset SGMII design */
#define SGMII_RESERVED_0 0x34
#define SGMII_SW_RESET BIT(0)
/* Register to set SGMII speed, ANA RG_ Control Signals III*/
#define SGMSYS_ANA_RG_CS3 0x2028
#define RG_PHY_SPEED_MASK (BIT(2) | BIT(3))


@ -38,20 +38,16 @@ static int mtk_pcs_config(struct phylink_pcs *pcs, unsigned int mode,
const unsigned long *advertising,
bool permit_pause_to_mac)
{
bool mode_changed = false, changed, use_an;
struct mtk_pcs *mpcs = pcs_to_mtk_pcs(pcs);
unsigned int rgc3, sgm_mode, bmcr;
int advertise, link_timer;
bool changed, use_an;
advertise = phylink_mii_c22_pcs_encode_advertisement(interface,
advertising);
if (advertise < 0)
return advertise;
link_timer = phylink_get_link_timer_ns(interface);
if (link_timer < 0)
return link_timer;
/* Clearing IF_MODE_BIT0 switches the PCS to BASE-X mode, and
* we assume that fixes it's speed at bitrate = line rate (in
* other words, 1000Mbps or 2500Mbps).
@ -77,17 +73,24 @@ static int mtk_pcs_config(struct phylink_pcs *pcs, unsigned int mode,
}
if (use_an) {
/* FIXME: Do we need to set AN_RESTART here? */
bmcr = SGMII_AN_RESTART | SGMII_AN_ENABLE;
bmcr = SGMII_AN_ENABLE;
} else {
bmcr = 0;
}
if (mpcs->interface != interface) {
link_timer = phylink_get_link_timer_ns(interface);
if (link_timer < 0)
return link_timer;
/* PHYA power down */
regmap_update_bits(mpcs->regmap, SGMSYS_QPHY_PWR_STATE_CTRL,
SGMII_PHYA_PWD, SGMII_PHYA_PWD);
/* Reset SGMII PCS state */
regmap_update_bits(mpcs->regmap, SGMII_RESERVED_0,
SGMII_SW_RESET, SGMII_SW_RESET);
if (interface == PHY_INTERFACE_MODE_2500BASEX)
rgc3 = RG_PHY_SPEED_3_125G;
else
@ -97,16 +100,17 @@ static int mtk_pcs_config(struct phylink_pcs *pcs, unsigned int mode,
regmap_update_bits(mpcs->regmap, mpcs->ana_rgc3,
RG_PHY_SPEED_3_125G, rgc3);
/* Setup the link timer */
regmap_write(mpcs->regmap, SGMSYS_PCS_LINK_TIMER, link_timer / 2 / 8);
mpcs->interface = interface;
mode_changed = true;
}
/* Update the advertisement, noting whether it has changed */
regmap_update_bits_check(mpcs->regmap, SGMSYS_PCS_ADVERTISE,
SGMII_ADVERTISE, advertise, &changed);
/* Setup the link timer and QPHY power up inside SGMIISYS */
regmap_write(mpcs->regmap, SGMSYS_PCS_LINK_TIMER, link_timer / 2 / 8);
/* Update the sgmsys mode register */
regmap_update_bits(mpcs->regmap, SGMSYS_SGMII_MODE,
SGMII_REMOTE_FAULT_DIS | SGMII_SPEED_DUPLEX_AN |
@ -114,7 +118,7 @@ static int mtk_pcs_config(struct phylink_pcs *pcs, unsigned int mode,
/* Update the BMCR */
regmap_update_bits(mpcs->regmap, SGMSYS_PCS_CONTROL_1,
SGMII_AN_RESTART | SGMII_AN_ENABLE, bmcr);
SGMII_AN_ENABLE, bmcr);
/* Release PHYA power down state
* Only removing bit SGMII_PHYA_PWD isn't enough.
@ -128,7 +132,7 @@ static int mtk_pcs_config(struct phylink_pcs *pcs, unsigned int mode,
usleep_range(50, 100);
regmap_write(mpcs->regmap, SGMSYS_QPHY_PWR_STATE_CTRL, 0);
return changed;
return changed || mode_changed;
}
static void mtk_pcs_restart_an(struct phylink_pcs *pcs)


@ -313,7 +313,6 @@ struct mlx5e_params {
} channel;
} mqprio;
bool rx_cqe_compress_def;
bool tunneled_offload_en;
struct dim_cq_moder rx_cq_moderation;
struct dim_cq_moder tx_cq_moderation;
struct mlx5e_packet_merge_param packet_merge;
@ -1243,6 +1242,7 @@ void mlx5e_build_nic_params(struct mlx5e_priv *priv, struct mlx5e_xsk *xsk, u16
void mlx5e_rx_dim_work(struct work_struct *work);
void mlx5e_tx_dim_work(struct work_struct *work);
void mlx5e_set_xdp_feature(struct net_device *netdev);
netdev_features_t mlx5e_features_check(struct sk_buff *skb,
struct net_device *netdev,
netdev_features_t features);


@ -178,7 +178,6 @@ tc_act_police_stats(struct mlx5e_priv *priv,
meter = mlx5e_tc_meter_get(priv->mdev, &params);
if (IS_ERR(meter)) {
NL_SET_ERR_MSG_MOD(fl_act->extack, "Failed to get flow meter");
mlx5_core_err(priv->mdev, "Failed to get flow meter %d\n", params.index);
return PTR_ERR(meter);
}


@ -64,6 +64,7 @@ mlx5e_tc_act_stats_add(struct mlx5e_tc_act_stats_handle *handle,
{
struct mlx5e_tc_act_stats *act_stats, *old_act_stats;
struct rhashtable *ht = &handle->ht;
u64 lastused;
int err = 0;
act_stats = kvzalloc(sizeof(*act_stats), GFP_KERNEL);
@ -73,6 +74,10 @@ mlx5e_tc_act_stats_add(struct mlx5e_tc_act_stats_handle *handle,
act_stats->tc_act_cookie = act_cookie;
act_stats->counter = counter;
mlx5_fc_query_cached_raw(counter,
&act_stats->lastbytes,
&act_stats->lastpackets, &lastused);
rcu_read_lock();
old_act_stats = rhashtable_lookup_get_insert_fast(ht,
&act_stats->hash,


@ -621,15 +621,6 @@ int mlx5e_ktls_add_rx(struct net_device *netdev, struct sock *sk,
if (unlikely(!priv_rx))
return -ENOMEM;
dek = mlx5_ktls_create_key(priv->tls->dek_pool, crypto_info);
if (IS_ERR(dek)) {
err = PTR_ERR(dek);
goto err_create_key;
}
priv_rx->dek = dek;
INIT_LIST_HEAD(&priv_rx->list);
spin_lock_init(&priv_rx->lock);
switch (crypto_info->cipher_type) {
case TLS_CIPHER_AES_GCM_128:
priv_rx->crypto_info.crypto_info_128 =
@ -642,9 +633,20 @@ int mlx5e_ktls_add_rx(struct net_device *netdev, struct sock *sk,
default:
WARN_ONCE(1, "Unsupported cipher type %u\n",
crypto_info->cipher_type);
return -EOPNOTSUPP;
err = -EOPNOTSUPP;
goto err_cipher_type;
}
dek = mlx5_ktls_create_key(priv->tls->dek_pool, crypto_info);
if (IS_ERR(dek)) {
err = PTR_ERR(dek);
goto err_cipher_type;
}
priv_rx->dek = dek;
INIT_LIST_HEAD(&priv_rx->list);
spin_lock_init(&priv_rx->lock);
rxq = mlx5e_ktls_sk_get_rxq(sk);
priv_rx->rxq = rxq;
priv_rx->sk = sk;
@ -677,7 +679,7 @@ err_post_wqes:
mlx5e_tir_destroy(&priv_rx->tir);
err_create_tir:
mlx5_ktls_destroy_key(priv->tls->dek_pool, priv_rx->dek);
err_create_key:
err_cipher_type:
kfree(priv_rx);
return err;
}


@ -469,14 +469,6 @@ int mlx5e_ktls_add_tx(struct net_device *netdev, struct sock *sk,
if (IS_ERR(priv_tx))
return PTR_ERR(priv_tx);
dek = mlx5_ktls_create_key(priv->tls->dek_pool, crypto_info);
if (IS_ERR(dek)) {
err = PTR_ERR(dek);
goto err_create_key;
}
priv_tx->dek = dek;
priv_tx->expected_seq = start_offload_tcp_sn;
switch (crypto_info->cipher_type) {
case TLS_CIPHER_AES_GCM_128:
priv_tx->crypto_info.crypto_info_128 =
@ -489,8 +481,18 @@ int mlx5e_ktls_add_tx(struct net_device *netdev, struct sock *sk,
default:
WARN_ONCE(1, "Unsupported cipher type %u\n",
crypto_info->cipher_type);
return -EOPNOTSUPP;
err = -EOPNOTSUPP;
goto err_pool_push;
}
dek = mlx5_ktls_create_key(priv->tls->dek_pool, crypto_info);
if (IS_ERR(dek)) {
err = PTR_ERR(dek);
goto err_pool_push;
}
priv_tx->dek = dek;
priv_tx->expected_seq = start_offload_tcp_sn;
priv_tx->tx_ctx = tls_offload_ctx_tx(tls_ctx);
mlx5e_set_ktls_tx_priv_ctx(tls_ctx, priv_tx);
@ -500,7 +502,7 @@ int mlx5e_ktls_add_tx(struct net_device *netdev, struct sock *sk,
return 0;
err_create_key:
err_pool_push:
pool_push(pool, priv_tx);
return err;
}


@ -89,8 +89,8 @@ struct mlx5e_macsec_rx_sc {
};
struct mlx5e_macsec_umr {
u8 __aligned(64) ctx[MLX5_ST_SZ_BYTES(macsec_aso)];
dma_addr_t dma_addr;
u8 ctx[MLX5_ST_SZ_BYTES(macsec_aso)];
u32 mkey;
};


@ -1985,6 +1985,7 @@ static int set_pflag_rx_striding_rq(struct net_device *netdev, bool enable)
struct mlx5e_priv *priv = netdev_priv(netdev);
struct mlx5_core_dev *mdev = priv->mdev;
struct mlx5e_params new_params;
int err;
if (enable) {
/* Checking the regular RQ here; mlx5e_validate_xsk_param called
@ -2005,7 +2006,14 @@ static int set_pflag_rx_striding_rq(struct net_device *netdev, bool enable)
MLX5E_SET_PFLAG(&new_params, MLX5E_PFLAG_RX_STRIDING_RQ, enable);
mlx5e_set_rq_type(mdev, &new_params);
return mlx5e_safe_switch_params(priv, &new_params, NULL, NULL, true);
err = mlx5e_safe_switch_params(priv, &new_params, NULL, NULL, true);
if (err)
return err;
/* update XDP supported features */
mlx5e_set_xdp_feature(netdev);
return 0;
}
static int set_pflag_rx_no_csum_complete(struct net_device *netdev, bool enable)


@ -4004,6 +4004,25 @@ static int mlx5e_handle_feature(struct net_device *netdev,
return 0;
}
void mlx5e_set_xdp_feature(struct net_device *netdev)
{
struct mlx5e_priv *priv = netdev_priv(netdev);
struct mlx5e_params *params = &priv->channels.params;
xdp_features_t val;
if (params->packet_merge.type != MLX5E_PACKET_MERGE_NONE) {
xdp_clear_features_flag(netdev);
return;
}
val = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
NETDEV_XDP_ACT_XSK_ZEROCOPY |
NETDEV_XDP_ACT_NDO_XMIT;
if (params->rq_wq_type == MLX5_WQ_TYPE_CYCLIC)
val |= NETDEV_XDP_ACT_RX_SG;
xdp_set_features_flag(netdev, val);
}
int mlx5e_set_features(struct net_device *netdev, netdev_features_t features)
{
netdev_features_t oper_features = features;
@ -4030,6 +4049,9 @@ int mlx5e_set_features(struct net_device *netdev, netdev_features_t features)
return -EINVAL;
}
/* update XDP supported features */
mlx5e_set_xdp_feature(netdev);
return 0;
}
@ -4147,13 +4169,17 @@ static bool mlx5e_xsk_validate_mtu(struct net_device *netdev,
struct xsk_buff_pool *xsk_pool =
mlx5e_xsk_get_pool(&chs->params, chs->params.xsk, ix);
struct mlx5e_xsk_param xsk;
int max_xdp_mtu;
if (!xsk_pool)
continue;
mlx5e_build_xsk_param(xsk_pool, &xsk);
max_xdp_mtu = mlx5e_xdp_max_mtu(new_params, &xsk);
if (!mlx5e_validate_xsk_param(new_params, &xsk, mdev)) {
/* Validate XSK params and XDP MTU in advance */
if (!mlx5e_validate_xsk_param(new_params, &xsk, mdev) ||
new_params->sw_mtu > max_xdp_mtu) {
u32 hr = mlx5e_get_linear_rq_headroom(new_params, &xsk);
int max_mtu_frame, max_mtu_page, max_mtu;
@ -4163,9 +4189,9 @@ static bool mlx5e_xsk_validate_mtu(struct net_device *netdev,
*/
max_mtu_frame = MLX5E_HW2SW_MTU(new_params, xsk.chunk_size - hr);
max_mtu_page = MLX5E_HW2SW_MTU(new_params, SKB_MAX_HEAD(0));
max_mtu = min(max_mtu_frame, max_mtu_page);
max_mtu = min3(max_mtu_frame, max_mtu_page, max_xdp_mtu);
netdev_err(netdev, "MTU %d is too big for an XSK running on channel %u. Try MTU <= %d\n",
netdev_err(netdev, "MTU %d is too big for an XSK running on channel %u or its redirection XDP program. Try MTU <= %d\n",
new_params->sw_mtu, ix, max_mtu);
return false;
}
@ -4761,13 +4787,6 @@ static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog)
if (old_prog)
bpf_prog_put(old_prog);
if (reset) {
if (prog)
xdp_features_set_redirect_target(netdev, true);
else
xdp_features_clear_redirect_target(netdev);
}
if (!test_bit(MLX5E_STATE_OPENED, &priv->state) || reset)
goto unlock;
@ -4964,8 +4983,6 @@ void mlx5e_build_nic_params(struct mlx5e_priv *priv, struct mlx5e_xsk *xsk, u16
/* TX inline */
mlx5_query_min_inline(mdev, &params->tx_min_inline_mode);
params->tunneled_offload_en = mlx5_tunnel_inner_ft_supported(mdev);
/* AF_XDP */
params->xsk = xsk;
@ -5163,13 +5180,10 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
netdev->features |= NETIF_F_HIGHDMA;
netdev->features |= NETIF_F_HW_VLAN_STAG_FILTER;
netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
NETDEV_XDP_ACT_XSK_ZEROCOPY |
NETDEV_XDP_ACT_RX_SG;
netdev->priv_flags |= IFF_UNICAST_FLT;
netif_set_tso_max_size(netdev, GSO_MAX_SIZE);
mlx5e_set_xdp_feature(netdev);
mlx5e_set_netdev_dev_addr(netdev);
mlx5e_macsec_build_netdev(priv);
mlx5e_ipsec_build_netdev(priv);
@ -5241,6 +5255,9 @@ static int mlx5e_nic_init(struct mlx5_core_dev *mdev,
mlx5_core_err(mdev, "TLS initialization failed, %d\n", err);
mlx5e_health_create_reporters(priv);
/* update XDP supported features */
mlx5e_set_xdp_feature(netdev);
return 0;
}
@ -5270,7 +5287,7 @@ static int mlx5e_init_nic_rx(struct mlx5e_priv *priv)
}
features = MLX5E_RX_RES_FEATURE_PTP;
if (priv->channels.params.tunneled_offload_en)
if (mlx5_tunnel_inner_ft_supported(mdev))
features |= MLX5E_RX_RES_FEATURE_INNER_FT;
err = mlx5e_rx_res_init(priv->rx_res, priv->mdev, features,
priv->max_nch, priv->drop_rq.rqn,


@ -747,12 +747,14 @@ static void mlx5e_build_rep_params(struct net_device *netdev)
/* RQ */
mlx5e_build_rq_params(mdev, params);
/* update XDP supported features */
mlx5e_set_xdp_feature(netdev);
/* CQ moderation params */
params->rx_dim_enabled = MLX5_CAP_GEN(mdev, cq_moderation);
mlx5e_set_rx_cq_mode_params(params, cq_period_mode);
params->mqprio.num_tc = 1;
params->tunneled_offload_en = false;
if (rep->vport != MLX5_VPORT_UPLINK)
params->vlan_strip_disable = true;


@ -3752,7 +3752,7 @@ mlx5e_clone_flow_attr_for_post_act(struct mlx5_flow_attr *attr,
parse_attr->filter_dev = attr->parse_attr->filter_dev;
attr2->action = 0;
attr2->counter = NULL;
attr->tc_act_cookies_count = 0;
attr2->tc_act_cookies_count = 0;
attr2->flags = 0;
attr2->parse_attr = parse_attr;
attr2->dest_chain = 0;
@ -4304,6 +4304,7 @@ int mlx5e_set_fwd_to_int_port_actions(struct mlx5e_priv *priv,
esw_attr->dest_int_port = dest_int_port;
esw_attr->dests[out_index].flags |= MLX5_ESW_DEST_CHAIN_WITH_SRC_PORT_CHANGE;
esw_attr->split_count = out_index;
/* Forward to root fdb for matching against the new source vport */
attr->dest_chain = 0;
@ -5304,8 +5305,10 @@ int mlx5e_tc_nic_init(struct mlx5e_priv *priv)
mlx5e_tc_debugfs_init(tc, mlx5e_fs_get_debugfs_root(priv->fs));
tc->action_stats_handle = mlx5e_tc_act_stats_create();
if (IS_ERR(tc->action_stats_handle))
if (IS_ERR(tc->action_stats_handle)) {
err = PTR_ERR(tc->action_stats_handle);
goto err_act_stats;
}
return 0;
@ -5440,8 +5443,10 @@ int mlx5e_tc_esw_init(struct mlx5_rep_uplink_priv *uplink_priv)
}
uplink_priv->action_stats_handle = mlx5e_tc_act_stats_create();
if (IS_ERR(uplink_priv->action_stats_handle))
if (IS_ERR(uplink_priv->action_stats_handle)) {
err = PTR_ERR(uplink_priv->action_stats_handle);
goto err_action_counter;
}
return 0;
@ -5463,6 +5468,16 @@ err_tun_mapping:
void mlx5e_tc_esw_cleanup(struct mlx5_rep_uplink_priv *uplink_priv)
{
struct mlx5e_rep_priv *rpriv;
struct mlx5_eswitch *esw;
struct mlx5e_priv *priv;
rpriv = container_of(uplink_priv, struct mlx5e_rep_priv, uplink_priv);
priv = netdev_priv(rpriv->netdev);
esw = priv->mdev->priv.eswitch;
mlx5e_tc_clean_fdb_peer_flows(esw);
mlx5e_tc_tun_cleanup(uplink_priv->encap);
mapping_destroy(uplink_priv->tunnel_enc_opts_mapping);


@ -723,11 +723,11 @@ mlx5_eswitch_add_fwd_rule(struct mlx5_eswitch *esw,
flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
for (i = 0; i < esw_attr->split_count; i++) {
if (esw_is_indir_table(esw, attr))
err = esw_setup_indir_table(dest, &flow_act, esw, attr, false, &i);
else if (esw_is_chain_src_port_rewrite(esw, esw_attr))
err = esw_setup_chain_src_port_rewrite(dest, &flow_act, esw, chains, attr,
&i);
if (esw_attr->dests[i].flags & MLX5_ESW_DEST_CHAIN_WITH_SRC_PORT_CHANGE)
/* Source port rewrite (forward to ovs internal port or statck device) isn't
* supported in the rule of split action.
*/
err = -EOPNOTSUPP;
else
esw_setup_vport_dest(dest, &flow_act, esw, esw_attr, i, i, false);


@ -70,7 +70,6 @@ static void mlx5i_build_nic_params(struct mlx5_core_dev *mdev,
params->packet_merge.type = MLX5E_PACKET_MERGE_NONE;
params->hard_mtu = MLX5_IB_GRH_BYTES + MLX5_IPOIB_HARD_LEN;
params->tunneled_offload_en = false;
/* CQE compression is not supported for IPoIB */
params->rx_cqe_compress_def = false;


@ -1364,8 +1364,8 @@ static void mlx5_unload(struct mlx5_core_dev *dev)
{
mlx5_devlink_traps_unregister(priv_to_devlink(dev));
mlx5_sf_dev_table_destroy(dev);
mlx5_sriov_detach(dev);
mlx5_eswitch_disable(dev->priv.eswitch);
mlx5_sriov_detach(dev);
mlx5_lag_remove_mdev(dev);
mlx5_ec_cleanup(dev);
mlx5_sf_hw_table_destroy(dev);
@ -1789,11 +1789,11 @@ static void remove_one(struct pci_dev *pdev)
struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
struct devlink *devlink = priv_to_devlink(dev);
set_bit(MLX5_BREAK_FW_WAIT, &dev->intf_state);
/* mlx5_drain_fw_reset() is using devlink APIs. Hence, we must drain
* fw_reset before unregistering the devlink.
*/
mlx5_drain_fw_reset(dev);
set_bit(MLX5_BREAK_FW_WAIT, &dev->intf_state);
devlink_unregister(devlink);
mlx5_sriov_disable(pdev);
mlx5_crdump_disable(dev);


@ -82,6 +82,16 @@ static u16 func_id_to_type(struct mlx5_core_dev *dev, u16 func_id, bool ec_funct
return func_id <= mlx5_core_max_vfs(dev) ? MLX5_VF : MLX5_SF;
}
static u32 mlx5_get_ec_function(u32 function)
{
return function >> 16;
}
static u32 mlx5_get_func_id(u32 function)
{
return function & 0xffff;
}
static struct rb_root *page_root_per_function(struct mlx5_core_dev *dev, u32 function)
{
struct rb_root *root;
@ -665,20 +675,22 @@ static int optimal_reclaimed_pages(void)
}
static int mlx5_reclaim_root_pages(struct mlx5_core_dev *dev,
struct rb_root *root, u16 func_id)
struct rb_root *root, u32 function)
{
u64 recl_pages_to_jiffies = msecs_to_jiffies(mlx5_tout_ms(dev, RECLAIM_PAGES));
unsigned long end = jiffies + recl_pages_to_jiffies;
while (!RB_EMPTY_ROOT(root)) {
u32 ec_function = mlx5_get_ec_function(function);
u32 function_id = mlx5_get_func_id(function);
int nclaimed;
int err;
err = reclaim_pages(dev, func_id, optimal_reclaimed_pages(),
&nclaimed, false, mlx5_core_is_ecpf(dev));
err = reclaim_pages(dev, function_id, optimal_reclaimed_pages(),
&nclaimed, false, ec_function);
if (err) {
mlx5_core_warn(dev, "failed reclaiming pages (%d) for func id 0x%x\n",
err, func_id);
mlx5_core_warn(dev, "reclaim_pages err (%d) func_id=0x%x ec_func=0x%x\n",
err, function_id, ec_function);
return err;
}


@ -2937,6 +2937,7 @@ static int mlxsw_sp_netdevice_event(struct notifier_block *unused,
static void mlxsw_sp_parsing_init(struct mlxsw_sp *mlxsw_sp)
{
refcount_set(&mlxsw_sp->parsing.parsing_depth_ref, 0);
mlxsw_sp->parsing.parsing_depth = MLXSW_SP_DEFAULT_PARSING_DEPTH;
mlxsw_sp->parsing.vxlan_udp_dport = MLXSW_SP_DEFAULT_VXLAN_UDP_DPORT;
mutex_init(&mlxsw_sp->parsing.lock);
@ -2945,6 +2946,7 @@ static void mlxsw_sp_parsing_init(struct mlxsw_sp *mlxsw_sp)
static void mlxsw_sp_parsing_fini(struct mlxsw_sp *mlxsw_sp)
{
mutex_destroy(&mlxsw_sp->parsing.lock);
WARN_ON_ONCE(refcount_read(&mlxsw_sp->parsing.parsing_depth_ref));
}
struct mlxsw_sp_ipv6_addr_node {


@ -10381,11 +10381,23 @@ err_reg_write:
old_inc_parsing_depth);
return err;
}
static void mlxsw_sp_mp_hash_fini(struct mlxsw_sp *mlxsw_sp)
{
bool old_inc_parsing_depth = mlxsw_sp->router->inc_parsing_depth;
mlxsw_sp_mp_hash_parsing_depth_adjust(mlxsw_sp, old_inc_parsing_depth,
false);
}
#else
static int mlxsw_sp_mp_hash_init(struct mlxsw_sp *mlxsw_sp)
{
return 0;
}
static void mlxsw_sp_mp_hash_fini(struct mlxsw_sp *mlxsw_sp)
{
}
#endif
static int mlxsw_sp_dscp_init(struct mlxsw_sp *mlxsw_sp)
@ -10615,6 +10627,7 @@ err_register_inet6addr_notifier:
err_register_inetaddr_notifier:
mlxsw_core_flush_owq();
err_dscp_init:
mlxsw_sp_mp_hash_fini(mlxsw_sp);
err_mp_hash_init:
mlxsw_sp_neigh_fini(mlxsw_sp);
err_neigh_init:
@ -10655,6 +10668,7 @@ void mlxsw_sp_router_fini(struct mlxsw_sp *mlxsw_sp)
unregister_inet6addr_notifier(&mlxsw_sp->router->inet6addr_nb);
unregister_inetaddr_notifier(&mlxsw_sp->router->inetaddr_nb);
mlxsw_core_flush_owq();
mlxsw_sp_mp_hash_fini(mlxsw_sp);
mlxsw_sp_neigh_fini(mlxsw_sp);
mlxsw_sp_lb_rif_fini(mlxsw_sp);
mlxsw_sp_vrs_fini(mlxsw_sp);


@ -5083,6 +5083,11 @@ static int qed_init_wfq_param(struct qed_hwfn *p_hwfn,
num_vports = p_hwfn->qm_info.num_vports;
if (num_vports < 2) {
DP_NOTICE(p_hwfn, "Unexpected num_vports: %d\n", num_vports);
return -EINVAL;
}
/* Accounting for the vports which are configured for WFQ explicitly */
for (i = 0; i < num_vports; i++) {
u32 tmp_speed;


@ -422,7 +422,7 @@ qed_mfw_get_tlv_time_value(struct qed_mfw_tlv_time *p_time,
if (p_time->hour > 23)
p_time->hour = 0;
if (p_time->min > 59)
p_time->hour = 0;
p_time->min = 0;
if (p_time->msec > 999)
p_time->msec = 0;
if (p_time->usec > 999)


@ -1455,8 +1455,6 @@ static int ravb_phy_init(struct net_device *ndev)
phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Half_BIT);
}
/* Indicate that the MAC is responsible for managing PHY PM */
phydev->mac_managed_pm = true;
phy_attached_info(phydev);
return 0;
@ -2379,6 +2377,8 @@ static int ravb_mdio_init(struct ravb_private *priv)
{
struct platform_device *pdev = priv->pdev;
struct device *dev = &pdev->dev;
struct phy_device *phydev;
struct device_node *pn;
int error;
/* Bitbang init */
@ -2400,6 +2400,14 @@ static int ravb_mdio_init(struct ravb_private *priv)
if (error)
goto out_free_bus;
pn = of_parse_phandle(dev->of_node, "phy-handle", 0);
phydev = of_phy_find_device(pn);
if (phydev) {
phydev->mac_managed_pm = true;
put_device(&phydev->mdio.dev);
}
of_node_put(pn);
return 0;
out_free_bus:


@ -702,13 +702,14 @@ static bool rswitch_rx(struct net_device *ndev, int *quota)
u16 pkt_len;
u32 get_ts;
if (*quota <= 0)
return true;
boguscnt = min_t(int, gq->ring_size, *quota);
limit = boguscnt;
desc = &gq->rx_ring[gq->cur];
while ((desc->desc.die_dt & DT_MASK) != DT_FEMPTY) {
if (--boguscnt < 0)
break;
dma_rmb();
pkt_len = le16_to_cpu(desc->desc.info_ds) & RX_DS;
skb = gq->skbs[gq->cur];
@ -734,6 +735,9 @@ static bool rswitch_rx(struct net_device *ndev, int *quota)
gq->cur = rswitch_next_queue_index(gq, true, 1);
desc = &gq->rx_ring[gq->cur];
if (--boguscnt <= 0)
break;
}
num = rswitch_get_num_cur_queues(gq);
@ -745,7 +749,7 @@ static bool rswitch_rx(struct net_device *ndev, int *quota)
goto err;
gq->dirty = rswitch_next_queue_index(gq, false, num);
*quota -= limit - (++boguscnt);
*quota -= limit - boguscnt;
return boguscnt <= 0;
@ -1437,7 +1441,10 @@ static int rswitch_open(struct net_device *ndev)
rswitch_enadis_data_irq(rdev->priv, rdev->tx_queue->index, true);
rswitch_enadis_data_irq(rdev->priv, rdev->rx_queue->index, true);
iowrite32(GWCA_TS_IRQ_BIT, rdev->priv->addr + GWTSDIE);
if (bitmap_empty(rdev->priv->opened_ports, RSWITCH_NUM_PORTS))
iowrite32(GWCA_TS_IRQ_BIT, rdev->priv->addr + GWTSDIE);
bitmap_set(rdev->priv->opened_ports, rdev->port, 1);
return 0;
};
@ -1448,8 +1455,10 @@ static int rswitch_stop(struct net_device *ndev)
struct rswitch_gwca_ts_info *ts_info, *ts_info2;
netif_tx_stop_all_queues(ndev);
bitmap_clear(rdev->priv->opened_ports, rdev->port, 1);
iowrite32(GWCA_TS_IRQ_BIT, rdev->priv->addr + GWTSDID);
if (bitmap_empty(rdev->priv->opened_ports, RSWITCH_NUM_PORTS))
iowrite32(GWCA_TS_IRQ_BIT, rdev->priv->addr + GWTSDID);
list_for_each_entry_safe(ts_info, ts_info2, &rdev->priv->gwca.ts_info_list, list) {
if (ts_info->port != rdev->port)


@ -998,6 +998,7 @@ struct rswitch_private {
struct rcar_gen4_ptp_private *ptp_priv;
struct rswitch_device *rdev[RSWITCH_NUM_PORTS];
DECLARE_BITMAP(opened_ports, RSWITCH_NUM_PORTS);
struct rswitch_gwca gwca;
struct rswitch_etha etha[RSWITCH_NUM_PORTS];


@ -2029,8 +2029,6 @@ static int sh_eth_phy_init(struct net_device *ndev)
if (mdp->cd->register_type != SH_ETH_REG_GIGABIT)
phy_set_max_speed(phydev, SPEED_100);
/* Indicate that the MAC is responsible for managing PHY PM */
phydev->mac_managed_pm = true;
phy_attached_info(phydev);
return 0;
@ -3097,6 +3095,8 @@ static int sh_mdio_init(struct sh_eth_private *mdp,
struct bb_info *bitbang;
struct platform_device *pdev = mdp->pdev;
struct device *dev = &mdp->pdev->dev;
struct phy_device *phydev;
struct device_node *pn;
/* create bit control struct for PHY */
bitbang = devm_kzalloc(dev, sizeof(struct bb_info), GFP_KERNEL);
@ -3133,6 +3133,14 @@ static int sh_mdio_init(struct sh_eth_private *mdp,
if (ret)
goto out_free_bus;
pn = of_parse_phandle(dev->of_node, "phy-handle", 0);
phydev = of_phy_find_device(pn);
if (phydev) {
phydev->mac_managed_pm = true;
put_device(&phydev->mdio.dev);
}
of_node_put(pn);
return 0;
out_free_bus:


@ -213,8 +213,7 @@ imx_dwmac_parse_dt(struct imx_priv_data *dwmac, struct device *dev)
struct device_node *np = dev->of_node;
int err = 0;
if (of_get_property(np, "snps,rmii_refclk_ext", NULL))
dwmac->rmii_refclk_ext = true;
dwmac->rmii_refclk_ext = of_property_read_bool(np, "snps,rmii_refclk_ext");
dwmac->clk_tx = devm_clk_get(dev, "tx");
if (IS_ERR(dwmac->clk_tx)) {
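
Many of the hunks in this pull apply the same mechanical cleanup shown above: an open-coded of_get_property()/of_find_property() presence test followed by a manual flag assignment collapses into a single of_property_read_bool() call (the later niu, cpsw-phy-sel, netcp, via-velocity, ll_temac, fsl_ucc_hdlc and wlcore hunks repeat it). A minimal sketch of the before/after shape, using a made-up "vendor,example-flag" property:

#include <linux/of.h>

/* Illustrative only: "vendor,example-flag" is not a real binding. */

/* Old pattern being removed by these hunks */
static bool example_flag_old(const struct device_node *np)
{
	return of_get_property(np, "vendor,example-flag", NULL) != NULL;
}

/* New pattern: of_property_read_bool() is true iff the property exists */
static bool example_flag_new(const struct device_node *np)
{
	return of_property_read_bool(np, "vendor,example-flag");
}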


@ -287,6 +287,9 @@ static int vsw_port_probe(struct vio_dev *vdev, const struct vio_device_id *id)
hp = mdesc_grab();
if (!hp)
return -ENODEV;
rmac = mdesc_get_property(hp, vdev->mp, remote_macaddr_prop, &len);
err = -ENODEV;
if (!rmac) {


@ -9271,7 +9271,7 @@ static int niu_get_of_props(struct niu *np)
if (model)
strcpy(np->vpd.model, model);
if (of_find_property(dp, "hot-swappable-phy", NULL)) {
if (of_property_read_bool(dp, "hot-swappable-phy")) {
np->flags |= (NIU_FLAGS_10G | NIU_FLAGS_FIBER |
NIU_FLAGS_HOTPLUG_PHY);
}


@ -433,6 +433,9 @@ static int vnet_port_probe(struct vio_dev *vdev, const struct vio_device_id *id)
hp = mdesc_grab();
if (!hp)
return -ENODEV;
vp = vnet_find_parent(hp, vdev->mp, vdev);
if (IS_ERR(vp)) {
pr_err("Cannot find port parent vnet\n");


@ -226,8 +226,7 @@ static int cpsw_phy_sel_probe(struct platform_device *pdev)
if (IS_ERR(priv->gmii_sel))
return PTR_ERR(priv->gmii_sel);
if (of_find_property(pdev->dev.of_node, "rmii-clock-ext", NULL))
priv->rmii_clock_external = true;
priv->rmii_clock_external = of_property_read_bool(pdev->dev.of_node, "rmii-clock-ext");
dev_set_drvdata(&pdev->dev, priv);


@ -3583,13 +3583,11 @@ static int gbe_probe(struct netcp_device *netcp_device, struct device *dev,
/* init the hw stats lock */
spin_lock_init(&gbe_dev->hw_stats_lock);
if (of_find_property(node, "enable-ale", NULL)) {
gbe_dev->enable_ale = true;
gbe_dev->enable_ale = of_property_read_bool(node, "enable-ale");
if (gbe_dev->enable_ale)
dev_info(dev, "ALE enabled\n");
} else {
gbe_dev->enable_ale = false;
else
dev_dbg(dev, "ALE bypass enabled*\n");
}
ret = of_property_read_u32(node, "tx-queue",
&gbe_dev->tx_queue_id);


@ -2709,8 +2709,7 @@ static int velocity_get_platform_info(struct velocity_info *vptr)
struct resource res;
int ret;
if (of_get_property(vptr->dev->of_node, "no-eeprom", NULL))
vptr->no_eeprom = 1;
vptr->no_eeprom = of_property_read_bool(vptr->dev->of_node, "no-eeprom");
ret = of_address_to_resource(vptr->dev->of_node, 0, &res);
if (ret) {


@ -1383,7 +1383,7 @@ struct velocity_info {
struct device *dev;
struct pci_dev *pdev;
struct net_device *netdev;
int no_eeprom;
bool no_eeprom;
unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
u8 ip_addr[4];


@ -1455,12 +1455,11 @@ static int temac_probe(struct platform_device *pdev)
* endianness mode. Default for OF devices is big-endian.
*/
little_endian = false;
if (temac_np) {
if (of_get_property(temac_np, "little-endian", NULL))
little_endian = true;
} else if (pdata) {
if (temac_np)
little_endian = of_property_read_bool(temac_np, "little-endian");
else if (pdata)
little_endian = pdata->reg_little_endian;
}
if (little_endian) {
lp->temac_ior = _temac_ior_le;
lp->temac_iow = _temac_iow_le;


@ -15,6 +15,14 @@ static bool gsi_reg_id_valid(struct gsi *gsi, enum gsi_reg_id reg_id)
switch (reg_id) {
case INTER_EE_SRC_CH_IRQ_MSK:
case INTER_EE_SRC_EV_CH_IRQ_MSK:
return gsi->version >= IPA_VERSION_3_5;
case HW_PARAM_2:
return gsi->version >= IPA_VERSION_3_5_1;
case HW_PARAM_4:
return gsi->version >= IPA_VERSION_5_0;
case CH_C_CNTXT_0:
case CH_C_CNTXT_1:
case CH_C_CNTXT_2:
@ -43,7 +51,6 @@ static bool gsi_reg_id_valid(struct gsi *gsi, enum gsi_reg_id reg_id)
case CH_CMD:
case EV_CH_CMD:
case GENERIC_CMD:
case HW_PARAM_2:
case CNTXT_TYPE_IRQ:
case CNTXT_TYPE_IRQ_MSK:
case CNTXT_SRC_CH_IRQ:


@ -10,6 +10,10 @@
#include <linux/bits.h>
struct platform_device;
struct gsi;
/**
* DOC: GSI Registers
*


@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
* Copyright (C) 2019-2022 Linaro Ltd.
* Copyright (C) 2019-2023 Linaro Ltd.
*/
#include <linux/io.h>
@ -15,6 +15,17 @@ static bool ipa_reg_id_valid(struct ipa *ipa, enum ipa_reg_id reg_id)
enum ipa_version version = ipa->version;
switch (reg_id) {
case FILT_ROUT_HASH_EN:
return version == IPA_VERSION_4_2;
case FILT_ROUT_HASH_FLUSH:
return version < IPA_VERSION_5_0 && version != IPA_VERSION_4_2;
case FILT_ROUT_CACHE_FLUSH:
case ENDP_FILTER_CACHE_CFG:
case ENDP_ROUTER_CACHE_CFG:
return version >= IPA_VERSION_5_0;
case IPA_BCR:
case COUNTER_CFG:
return version < IPA_VERSION_4_5;
@ -32,14 +43,17 @@ static bool ipa_reg_id_valid(struct ipa *ipa, enum ipa_reg_id reg_id)
case SRC_RSRC_GRP_45_RSRC_TYPE:
case DST_RSRC_GRP_45_RSRC_TYPE:
return version <= IPA_VERSION_3_1 ||
version == IPA_VERSION_4_5;
version == IPA_VERSION_4_5 ||
version == IPA_VERSION_5_0;
case SRC_RSRC_GRP_67_RSRC_TYPE:
case DST_RSRC_GRP_67_RSRC_TYPE:
return version <= IPA_VERSION_3_1;
return version <= IPA_VERSION_3_1 ||
version == IPA_VERSION_5_0;
case ENDP_FILTER_ROUTER_HSH_CFG:
return version != IPA_VERSION_4_2;
return version < IPA_VERSION_5_0 &&
version != IPA_VERSION_4_2;
case IRQ_SUSPEND_EN:
case IRQ_SUSPEND_CLR:
@ -51,10 +65,6 @@ static bool ipa_reg_id_valid(struct ipa *ipa, enum ipa_reg_id reg_id)
case SHARED_MEM_SIZE:
case QSB_MAX_WRITES:
case QSB_MAX_READS:
case FILT_ROUT_HASH_EN:
case FILT_ROUT_CACHE_CFG:
case FILT_ROUT_HASH_FLUSH:
case FILT_ROUT_CACHE_FLUSH:
case STATE_AGGR_ACTIVE:
case LOCAL_PKT_PROC_CNTXT:
case AGGR_FORCE_CLOSE:
@ -76,8 +86,6 @@ static bool ipa_reg_id_valid(struct ipa *ipa, enum ipa_reg_id reg_id)
case ENDP_INIT_RSRC_GRP:
case ENDP_INIT_SEQ:
case ENDP_STATUS:
case ENDP_FILTER_CACHE_CFG:
case ENDP_ROUTER_CACHE_CFG:
case IPA_IRQ_STTS:
case IPA_IRQ_EN:
case IPA_IRQ_CLR:


@ -60,9 +60,8 @@ enum ipa_reg_id {
SHARED_MEM_SIZE,
QSB_MAX_WRITES,
QSB_MAX_READS,
FILT_ROUT_HASH_EN, /* Not IPA v5.0+ */
FILT_ROUT_CACHE_CFG, /* IPA v5.0+ */
FILT_ROUT_HASH_FLUSH, /* Not IPA v5.0+ */
FILT_ROUT_HASH_EN, /* IPA v4.2 */
FILT_ROUT_HASH_FLUSH, /* Not IPA v4.2 nor IPA v5.0+ */
FILT_ROUT_CACHE_FLUSH, /* IPA v5.0+ */
STATE_AGGR_ACTIVE,
IPA_BCR, /* Not IPA v4.5+ */
@ -77,12 +76,12 @@ enum ipa_reg_id {
TIMERS_PULSE_GRAN_CFG, /* IPA v4.5+ */
SRC_RSRC_GRP_01_RSRC_TYPE,
SRC_RSRC_GRP_23_RSRC_TYPE,
SRC_RSRC_GRP_45_RSRC_TYPE, /* Not IPA v3.5+, IPA v4.5 */
SRC_RSRC_GRP_67_RSRC_TYPE, /* Not IPA v3.5+ */
SRC_RSRC_GRP_45_RSRC_TYPE, /* Not IPA v3.5+; IPA v4.5, IPA v5.0 */
SRC_RSRC_GRP_67_RSRC_TYPE, /* Not IPA v3.5+; IPA v5.0 */
DST_RSRC_GRP_01_RSRC_TYPE,
DST_RSRC_GRP_23_RSRC_TYPE,
DST_RSRC_GRP_45_RSRC_TYPE, /* Not IPA v3.5+, IPA v4.5 */
DST_RSRC_GRP_67_RSRC_TYPE, /* Not IPA v3.5+ */
DST_RSRC_GRP_45_RSRC_TYPE, /* Not IPA v3.5+; IPA v4.5, IPA v5.0 */
DST_RSRC_GRP_67_RSRC_TYPE, /* Not IPA v3.5+; IPA v5.0 */
ENDP_INIT_CTRL, /* Not IPA v4.2+ for TX, not IPA v4.0+ for RX */
ENDP_INIT_CFG,
ENDP_INIT_NAT, /* TX only */
@ -206,14 +205,6 @@ enum ipa_reg_qsb_max_reads_field_id {
GEN_QMB_1_MAX_READS_BEATS, /* IPA v4.0+ */
};
/* FILT_ROUT_CACHE_CFG register */
enum ipa_reg_filt_rout_cache_cfg_field_id {
ROUTER_CACHE_EN,
FILTER_CACHE_EN,
LOW_PRI_HASH_HIT_DISABLE,
LRU_EVICTION_THRESHOLD,
};
/* FILT_ROUT_HASH_EN and FILT_ROUT_HASH_FLUSH registers */
enum ipa_reg_filt_rout_hash_field_id {
IPV6_ROUTER_HASH,


@ -6,7 +6,8 @@
#define _REG_H_
#include <linux/types.h>
#include <linux/bits.h>
#include <linux/log2.h>
#include <linux/bug.h>
/**
* struct reg - A register descriptor


@ -137,17 +137,17 @@ REG_STRIDE(EV_CH_E_SCRATCH_1, ev_ch_e_scratch_1,
0x0001004c + 0x4000 * GSI_EE_AP, 0x80);
REG_STRIDE(CH_C_DOORBELL_0, ch_c_doorbell_0,
0x0001e000 + 0x4000 * GSI_EE_AP, 0x08);
0x00011000 + 0x4000 * GSI_EE_AP, 0x08);
REG_STRIDE(EV_CH_E_DOORBELL_0, ev_ch_e_doorbell_0,
0x0001e100 + 0x4000 * GSI_EE_AP, 0x08);
0x00011100 + 0x4000 * GSI_EE_AP, 0x08);
static const u32 reg_gsi_status_fmask[] = {
[ENABLED] = BIT(0),
/* Bits 1-31 reserved */
};
REG_FIELDS(GSI_STATUS, gsi_status, 0x0001f000 + 0x4000 * GSI_EE_AP);
REG_FIELDS(GSI_STATUS, gsi_status, 0x00012000 + 0x4000 * GSI_EE_AP);
static const u32 reg_ch_cmd_fmask[] = {
[CH_CHID] = GENMASK(7, 0),
@ -155,7 +155,7 @@ static const u32 reg_ch_cmd_fmask[] = {
[CH_OPCODE] = GENMASK(31, 24),
};
REG_FIELDS(CH_CMD, ch_cmd, 0x0001f008 + 0x4000 * GSI_EE_AP);
REG_FIELDS(CH_CMD, ch_cmd, 0x00012008 + 0x4000 * GSI_EE_AP);
static const u32 reg_ev_ch_cmd_fmask[] = {
[EV_CHID] = GENMASK(7, 0),
@ -163,7 +163,7 @@ static const u32 reg_ev_ch_cmd_fmask[] = {
[EV_OPCODE] = GENMASK(31, 24),
};
REG_FIELDS(EV_CH_CMD, ev_ch_cmd, 0x0001f010 + 0x4000 * GSI_EE_AP);
REG_FIELDS(EV_CH_CMD, ev_ch_cmd, 0x00012010 + 0x4000 * GSI_EE_AP);
static const u32 reg_generic_cmd_fmask[] = {
[GENERIC_OPCODE] = GENMASK(4, 0),
@ -172,7 +172,7 @@ static const u32 reg_generic_cmd_fmask[] = {
/* Bits 14-31 reserved */
};
REG_FIELDS(GENERIC_CMD, generic_cmd, 0x0001f018 + 0x4000 * GSI_EE_AP);
REG_FIELDS(GENERIC_CMD, generic_cmd, 0x00012018 + 0x4000 * GSI_EE_AP);
static const u32 reg_hw_param_2_fmask[] = {
[IRAM_SIZE] = GENMASK(2, 0),
@ -188,58 +188,58 @@ static const u32 reg_hw_param_2_fmask[] = {
[GSI_USE_INTER_EE] = BIT(31),
};
REG_FIELDS(HW_PARAM_2, hw_param_2, 0x0001f040 + 0x4000 * GSI_EE_AP);
REG_FIELDS(HW_PARAM_2, hw_param_2, 0x00012040 + 0x4000 * GSI_EE_AP);
REG(CNTXT_TYPE_IRQ, cntxt_type_irq, 0x0001f080 + 0x4000 * GSI_EE_AP);
REG(CNTXT_TYPE_IRQ, cntxt_type_irq, 0x00012080 + 0x4000 * GSI_EE_AP);
REG(CNTXT_TYPE_IRQ_MSK, cntxt_type_irq_msk, 0x0001f088 + 0x4000 * GSI_EE_AP);
REG(CNTXT_TYPE_IRQ_MSK, cntxt_type_irq_msk, 0x00012088 + 0x4000 * GSI_EE_AP);
REG(CNTXT_SRC_CH_IRQ, cntxt_src_ch_irq, 0x0001f090 + 0x4000 * GSI_EE_AP);
REG(CNTXT_SRC_CH_IRQ, cntxt_src_ch_irq, 0x00012090 + 0x4000 * GSI_EE_AP);
REG(CNTXT_SRC_EV_CH_IRQ, cntxt_src_ev_ch_irq, 0x0001f094 + 0x4000 * GSI_EE_AP);
REG(CNTXT_SRC_EV_CH_IRQ, cntxt_src_ev_ch_irq, 0x00012094 + 0x4000 * GSI_EE_AP);
REG(CNTXT_SRC_CH_IRQ_MSK, cntxt_src_ch_irq_msk,
0x0001f098 + 0x4000 * GSI_EE_AP);
0x00012098 + 0x4000 * GSI_EE_AP);
REG(CNTXT_SRC_EV_CH_IRQ_MSK, cntxt_src_ev_ch_irq_msk,
0x0001f09c + 0x4000 * GSI_EE_AP);
0x0001209c + 0x4000 * GSI_EE_AP);
REG(CNTXT_SRC_CH_IRQ_CLR, cntxt_src_ch_irq_clr,
0x0001f0a0 + 0x4000 * GSI_EE_AP);
0x000120a0 + 0x4000 * GSI_EE_AP);
REG(CNTXT_SRC_EV_CH_IRQ_CLR, cntxt_src_ev_ch_irq_clr,
0x0001f0a4 + 0x4000 * GSI_EE_AP);
0x000120a4 + 0x4000 * GSI_EE_AP);
REG(CNTXT_SRC_IEOB_IRQ, cntxt_src_ieob_irq, 0x0001f0b0 + 0x4000 * GSI_EE_AP);
REG(CNTXT_SRC_IEOB_IRQ, cntxt_src_ieob_irq, 0x000120b0 + 0x4000 * GSI_EE_AP);
REG(CNTXT_SRC_IEOB_IRQ_MSK, cntxt_src_ieob_irq_msk,
0x0001f0b8 + 0x4000 * GSI_EE_AP);
0x000120b8 + 0x4000 * GSI_EE_AP);
REG(CNTXT_SRC_IEOB_IRQ_CLR, cntxt_src_ieob_irq_clr,
0x0001f0c0 + 0x4000 * GSI_EE_AP);
0x000120c0 + 0x4000 * GSI_EE_AP);
REG(CNTXT_GLOB_IRQ_STTS, cntxt_glob_irq_stts, 0x0001f100 + 0x4000 * GSI_EE_AP);
REG(CNTXT_GLOB_IRQ_STTS, cntxt_glob_irq_stts, 0x00012100 + 0x4000 * GSI_EE_AP);
REG(CNTXT_GLOB_IRQ_EN, cntxt_glob_irq_en, 0x0001f108 + 0x4000 * GSI_EE_AP);
REG(CNTXT_GLOB_IRQ_EN, cntxt_glob_irq_en, 0x00012108 + 0x4000 * GSI_EE_AP);
REG(CNTXT_GLOB_IRQ_CLR, cntxt_glob_irq_clr, 0x0001f110 + 0x4000 * GSI_EE_AP);
REG(CNTXT_GLOB_IRQ_CLR, cntxt_glob_irq_clr, 0x00012110 + 0x4000 * GSI_EE_AP);
REG(CNTXT_GSI_IRQ_STTS, cntxt_gsi_irq_stts, 0x0001f118 + 0x4000 * GSI_EE_AP);
REG(CNTXT_GSI_IRQ_STTS, cntxt_gsi_irq_stts, 0x00012118 + 0x4000 * GSI_EE_AP);
REG(CNTXT_GSI_IRQ_EN, cntxt_gsi_irq_en, 0x0001f120 + 0x4000 * GSI_EE_AP);
REG(CNTXT_GSI_IRQ_EN, cntxt_gsi_irq_en, 0x00012120 + 0x4000 * GSI_EE_AP);
REG(CNTXT_GSI_IRQ_CLR, cntxt_gsi_irq_clr, 0x0001f128 + 0x4000 * GSI_EE_AP);
REG(CNTXT_GSI_IRQ_CLR, cntxt_gsi_irq_clr, 0x00012128 + 0x4000 * GSI_EE_AP);
static const u32 reg_cntxt_intset_fmask[] = {
[INTYPE] = BIT(0)
/* Bits 1-31 reserved */
};
REG_FIELDS(CNTXT_INTSET, cntxt_intset, 0x0001f180 + 0x4000 * GSI_EE_AP);
REG_FIELDS(CNTXT_INTSET, cntxt_intset, 0x00012180 + 0x4000 * GSI_EE_AP);
REG_FIELDS(ERROR_LOG, error_log, 0x0001f200 + 0x4000 * GSI_EE_AP);
REG_FIELDS(ERROR_LOG, error_log, 0x00012200 + 0x4000 * GSI_EE_AP);
REG(ERROR_LOG_CLR, error_log_clr, 0x0001f210 + 0x4000 * GSI_EE_AP);
REG(ERROR_LOG_CLR, error_log_clr, 0x00012210 + 0x4000 * GSI_EE_AP);
static const u32 reg_cntxt_scratch_0_fmask[] = {
[INTER_EE_RESULT] = GENMASK(2, 0),
@ -248,7 +248,7 @@ static const u32 reg_cntxt_scratch_0_fmask[] = {
/* Bits 8-31 reserved */
};
REG_FIELDS(CNTXT_SCRATCH_0, cntxt_scratch_0, 0x0001f400 + 0x4000 * GSI_EE_AP);
REG_FIELDS(CNTXT_SCRATCH_0, cntxt_scratch_0, 0x00012400 + 0x4000 * GSI_EE_AP);
static const struct reg *reg_array[] = {
[INTER_EE_SRC_CH_IRQ_MSK] = &reg_inter_ee_src_ch_irq_msk,


@ -27,7 +27,7 @@ static const u32 reg_ch_c_cntxt_0_fmask[] = {
};
REG_STRIDE_FIELDS(CH_C_CNTXT_0, ch_c_cntxt_0,
0x0001c000 + 0x4000 * GSI_EE_AP, 0x80);
0x0000f000 + 0x4000 * GSI_EE_AP, 0x80);
static const u32 reg_ch_c_cntxt_1_fmask[] = {
[CH_R_LENGTH] = GENMASK(19, 0),
@ -35,11 +35,11 @@ static const u32 reg_ch_c_cntxt_1_fmask[] = {
};
REG_STRIDE_FIELDS(CH_C_CNTXT_1, ch_c_cntxt_1,
0x0001c004 + 0x4000 * GSI_EE_AP, 0x80);
0x0000f004 + 0x4000 * GSI_EE_AP, 0x80);
REG_STRIDE(CH_C_CNTXT_2, ch_c_cntxt_2, 0x0001c008 + 0x4000 * GSI_EE_AP, 0x80);
REG_STRIDE(CH_C_CNTXT_2, ch_c_cntxt_2, 0x0000f008 + 0x4000 * GSI_EE_AP, 0x80);
REG_STRIDE(CH_C_CNTXT_3, ch_c_cntxt_3, 0x0001c00c + 0x4000 * GSI_EE_AP, 0x80);
REG_STRIDE(CH_C_CNTXT_3, ch_c_cntxt_3, 0x0000f00c + 0x4000 * GSI_EE_AP, 0x80);
static const u32 reg_ch_c_qos_fmask[] = {
[WRR_WEIGHT] = GENMASK(3, 0),
@ -53,7 +53,7 @@ static const u32 reg_ch_c_qos_fmask[] = {
/* Bits 25-31 reserved */
};
REG_STRIDE_FIELDS(CH_C_QOS, ch_c_qos, 0x0001c05c + 0x4000 * GSI_EE_AP, 0x80);
REG_STRIDE_FIELDS(CH_C_QOS, ch_c_qos, 0x0000f05c + 0x4000 * GSI_EE_AP, 0x80);
static const u32 reg_error_log_fmask[] = {
[ERR_ARG3] = GENMASK(3, 0),
@ -67,16 +67,16 @@ static const u32 reg_error_log_fmask[] = {
};
REG_STRIDE(CH_C_SCRATCH_0, ch_c_scratch_0,
0x0001c060 + 0x4000 * GSI_EE_AP, 0x80);
0x0000f060 + 0x4000 * GSI_EE_AP, 0x80);
REG_STRIDE(CH_C_SCRATCH_1, ch_c_scratch_1,
0x0001c064 + 0x4000 * GSI_EE_AP, 0x80);
0x0000f064 + 0x4000 * GSI_EE_AP, 0x80);
REG_STRIDE(CH_C_SCRATCH_2, ch_c_scratch_2,
0x0001c068 + 0x4000 * GSI_EE_AP, 0x80);
0x0000f068 + 0x4000 * GSI_EE_AP, 0x80);
REG_STRIDE(CH_C_SCRATCH_3, ch_c_scratch_3,
0x0001c06c + 0x4000 * GSI_EE_AP, 0x80);
0x0000f06c + 0x4000 * GSI_EE_AP, 0x80);
static const u32 reg_ev_ch_e_cntxt_0_fmask[] = {
[EV_CHTYPE] = GENMASK(3, 0),
@ -89,23 +89,23 @@ static const u32 reg_ev_ch_e_cntxt_0_fmask[] = {
};
REG_STRIDE_FIELDS(EV_CH_E_CNTXT_0, ev_ch_e_cntxt_0,
0x0001d000 + 0x4000 * GSI_EE_AP, 0x80);
0x00010000 + 0x4000 * GSI_EE_AP, 0x80);
static const u32 reg_ev_ch_e_cntxt_1_fmask[] = {
[R_LENGTH] = GENMASK(15, 0),
};
REG_STRIDE_FIELDS(EV_CH_E_CNTXT_1, ev_ch_e_cntxt_1,
0x0001d004 + 0x4000 * GSI_EE_AP, 0x80);
0x00010004 + 0x4000 * GSI_EE_AP, 0x80);
REG_STRIDE(EV_CH_E_CNTXT_2, ev_ch_e_cntxt_2,
0x0001d008 + 0x4000 * GSI_EE_AP, 0x80);
0x00010008 + 0x4000 * GSI_EE_AP, 0x80);
REG_STRIDE(EV_CH_E_CNTXT_3, ev_ch_e_cntxt_3,
0x0001d00c + 0x4000 * GSI_EE_AP, 0x80);
0x0001000c + 0x4000 * GSI_EE_AP, 0x80);
REG_STRIDE(EV_CH_E_CNTXT_4, ev_ch_e_cntxt_4,
0x0001d010 + 0x4000 * GSI_EE_AP, 0x80);
0x00010010 + 0x4000 * GSI_EE_AP, 0x80);
static const u32 reg_ev_ch_e_cntxt_8_fmask[] = {
[EV_MODT] = GENMASK(15, 0),
@ -114,28 +114,28 @@ static const u32 reg_ev_ch_e_cntxt_8_fmask[] = {
};
REG_STRIDE_FIELDS(EV_CH_E_CNTXT_8, ev_ch_e_cntxt_8,
0x0001d020 + 0x4000 * GSI_EE_AP, 0x80);
0x00010020 + 0x4000 * GSI_EE_AP, 0x80);
REG_STRIDE(EV_CH_E_CNTXT_9, ev_ch_e_cntxt_9,
0x0001d024 + 0x4000 * GSI_EE_AP, 0x80);
0x00010024 + 0x4000 * GSI_EE_AP, 0x80);
REG_STRIDE(EV_CH_E_CNTXT_10, ev_ch_e_cntxt_10,
0x0001d028 + 0x4000 * GSI_EE_AP, 0x80);
0x00010028 + 0x4000 * GSI_EE_AP, 0x80);
REG_STRIDE(EV_CH_E_CNTXT_11, ev_ch_e_cntxt_11,
0x0001d02c + 0x4000 * GSI_EE_AP, 0x80);
0x0001002c + 0x4000 * GSI_EE_AP, 0x80);
REG_STRIDE(EV_CH_E_CNTXT_12, ev_ch_e_cntxt_12,
0x0001d030 + 0x4000 * GSI_EE_AP, 0x80);
0x00010030 + 0x4000 * GSI_EE_AP, 0x80);
REG_STRIDE(EV_CH_E_CNTXT_13, ev_ch_e_cntxt_13,
0x0001d034 + 0x4000 * GSI_EE_AP, 0x80);
0x00010034 + 0x4000 * GSI_EE_AP, 0x80);
REG_STRIDE(EV_CH_E_SCRATCH_0, ev_ch_e_scratch_0,
0x0001d048 + 0x4000 * GSI_EE_AP, 0x80);
0x00010048 + 0x4000 * GSI_EE_AP, 0x80);
REG_STRIDE(EV_CH_E_SCRATCH_1, ev_ch_e_scratch_1,
0x0001d04c + 0x4000 * GSI_EE_AP, 0x80);
0x0001004c + 0x4000 * GSI_EE_AP, 0x80);
REG_STRIDE(CH_C_DOORBELL_0, ch_c_doorbell_0,
0x00011000 + 0x4000 * GSI_EE_AP, 0x08);


@ -101,6 +101,7 @@ static unsigned int ipvlan_nf_input(void *priv, struct sk_buff *skb,
goto out;
skb->dev = addr->master->dev;
skb->skb_iif = skb->dev->ifindex;
len = skb->len + ETH_HLEN;
ipvlan_count_rx(addr->master, len, true, false);
out:


@ -280,12 +280,9 @@ static int vsc85xx_wol_set(struct phy_device *phydev,
u16 pwd[3] = {0, 0, 0};
struct ethtool_wolinfo *wol_conf = wol;
mutex_lock(&phydev->lock);
rc = phy_select_page(phydev, MSCC_PHY_PAGE_EXTENDED_2);
if (rc < 0) {
rc = phy_restore_page(phydev, rc, rc);
goto out_unlock;
}
if (rc < 0)
return phy_restore_page(phydev, rc, rc);
if (wol->wolopts & WAKE_MAGIC) {
/* Store the device address for the magic packet */
@ -323,7 +320,7 @@ static int vsc85xx_wol_set(struct phy_device *phydev,
rc = phy_restore_page(phydev, rc, rc > 0 ? 0 : rc);
if (rc < 0)
goto out_unlock;
return rc;
if (wol->wolopts & WAKE_MAGIC) {
/* Enable the WOL interrupt */
@ -331,22 +328,19 @@ static int vsc85xx_wol_set(struct phy_device *phydev,
reg_val |= MII_VSC85XX_INT_MASK_WOL;
rc = phy_write(phydev, MII_VSC85XX_INT_MASK, reg_val);
if (rc)
goto out_unlock;
return rc;
} else {
/* Disable the WOL interrupt */
reg_val = phy_read(phydev, MII_VSC85XX_INT_MASK);
reg_val &= (~MII_VSC85XX_INT_MASK_WOL);
rc = phy_write(phydev, MII_VSC85XX_INT_MASK, reg_val);
if (rc)
goto out_unlock;
return rc;
}
/* Clear WOL iterrupt status */
reg_val = phy_read(phydev, MII_VSC85XX_INT_STATUS);
out_unlock:
mutex_unlock(&phydev->lock);
return rc;
return 0;
}
static void vsc85xx_wol_get(struct phy_device *phydev,
@ -358,10 +352,9 @@ static void vsc85xx_wol_get(struct phy_device *phydev,
u16 pwd[3] = {0, 0, 0};
struct ethtool_wolinfo *wol_conf = wol;
mutex_lock(&phydev->lock);
rc = phy_select_page(phydev, MSCC_PHY_PAGE_EXTENDED_2);
if (rc < 0)
goto out_unlock;
goto out_restore_page;
reg_val = __phy_read(phydev, MSCC_PHY_WOL_MAC_CONTROL);
if (reg_val & SECURE_ON_ENABLE)
@ -377,9 +370,8 @@ static void vsc85xx_wol_get(struct phy_device *phydev,
}
}
out_unlock:
out_restore_page:
phy_restore_page(phydev, rc, rc > 0 ? 0 : rc);
mutex_unlock(&phydev->lock);
}
#if IS_ENABLED(CONFIG_OF_MDIO)
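
The hunk above removes the driver-private phydev->lock from the WoL paths (the phy_ethtool_{get,set}_wol() entry points already hold that lock, which is the reported deadlock) and routes every exit through phy_restore_page(). A minimal sketch of the phy_select_page()/phy_restore_page() pairing, with made-up EXAMPLE_PAGE/EXAMPLE_REG values:

#include <linux/phy.h>

#define EXAMPLE_PAGE	2	/* made-up page number, illustration only */
#define EXAMPLE_REG	29	/* made-up register address, illustration only */

static int example_paged_read(struct phy_device *phydev)
{
	int oldpage, ret = 0;

	/* Takes the MDIO bus lock and switches pages; may fail */
	oldpage = phy_select_page(phydev, EXAMPLE_PAGE);
	if (oldpage >= 0)
		ret = __phy_read(phydev, EXAMPLE_REG);

	/* Must always run: restores the page, drops the bus lock, and
	 * returns ret or the first error encountered.
	 */
	return phy_restore_page(phydev, oldpage, ret);
}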


@ -79,7 +79,7 @@
#define SGMII_ABILITY BIT(0)
#define VEND1_MII_BASIC_CONFIG 0xAFC6
#define MII_BASIC_CONFIG_REV BIT(8)
#define MII_BASIC_CONFIG_REV BIT(4)
#define MII_BASIC_CONFIG_SGMII 0x9
#define MII_BASIC_CONFIG_RGMII 0x7
#define MII_BASIC_CONFIG_RMII 0x5


@ -199,8 +199,11 @@ static int lan95xx_config_aneg_ext(struct phy_device *phydev)
static int lan87xx_read_status(struct phy_device *phydev)
{
struct smsc_phy_priv *priv = phydev->priv;
int err;
int err = genphy_read_status(phydev);
err = genphy_read_status(phydev);
if (err)
return err;
if (!phydev->link && priv->energy_enable && phydev->irq == PHY_POLL) {
/* Disable EDPD to wake up PHY */


@ -2200,6 +2200,13 @@ static int smsc75xx_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
size = (rx_cmd_a & RX_CMD_A_LEN) - RXW_PADDING;
align_count = (4 - ((size + RXW_PADDING) % 4)) % 4;
if (unlikely(size > skb->len)) {
netif_dbg(dev, rx_err, dev->net,
"size err rx_cmd_a=0x%08x\n",
rx_cmd_a);
return 0;
}
if (unlikely(rx_cmd_a & RX_CMD_A_RED)) {
netif_dbg(dev, rx_err, dev->net,
"Error rx_cmd_a=0x%08x\n", rx_cmd_a);


@ -708,7 +708,8 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
u32 frame_sz;
if (skb_shared(skb) || skb_head_is_locked(skb) ||
skb_shinfo(skb)->nr_frags) {
skb_shinfo(skb)->nr_frags ||
skb_headroom(skb) < XDP_PACKET_HEADROOM) {
u32 size, len, max_head_size, off;
struct sk_buff *nskb;
struct page *page;
@ -773,9 +774,6 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
consume_skb(skb);
skb = nskb;
} else if (skb_headroom(skb) < XDP_PACKET_HEADROOM &&
pskb_expand_head(skb, VETH_XDP_HEADROOM, 0, GFP_ATOMIC)) {
goto drop;
}
/* SKB "head" area always have tailroom for skb_shared_info */
@ -1257,6 +1255,26 @@ static int veth_enable_range_safe(struct net_device *dev, int start, int end)
return 0;
}
static void veth_set_xdp_features(struct net_device *dev)
{
struct veth_priv *priv = netdev_priv(dev);
struct net_device *peer;
peer = rtnl_dereference(priv->peer);
if (peer && peer->real_num_tx_queues <= dev->real_num_rx_queues) {
xdp_features_t val = NETDEV_XDP_ACT_BASIC |
NETDEV_XDP_ACT_REDIRECT |
NETDEV_XDP_ACT_RX_SG;
if (priv->_xdp_prog || veth_gro_requested(dev))
val |= NETDEV_XDP_ACT_NDO_XMIT |
NETDEV_XDP_ACT_NDO_XMIT_SG;
xdp_set_features_flag(dev, val);
} else {
xdp_clear_features_flag(dev);
}
}
static int veth_set_channels(struct net_device *dev,
struct ethtool_channels *ch)
{
@ -1323,6 +1341,12 @@ out:
if (peer)
netif_carrier_on(peer);
}
/* update XDP supported features */
veth_set_xdp_features(dev);
if (peer)
veth_set_xdp_features(peer);
return err;
revert:
@ -1489,7 +1513,10 @@ static int veth_set_features(struct net_device *dev,
err = veth_napi_enable(dev);
if (err)
return err;
xdp_features_set_redirect_target(dev, true);
} else {
xdp_features_clear_redirect_target(dev);
veth_napi_del(dev);
}
return 0;
@ -1570,10 +1597,15 @@ static int veth_xdp_set(struct net_device *dev, struct bpf_prog *prog,
peer->hw_features &= ~NETIF_F_GSO_SOFTWARE;
peer->max_mtu = max_mtu;
}
xdp_features_set_redirect_target(dev, true);
}
if (old_prog) {
if (!prog) {
if (!veth_gro_requested(dev))
xdp_features_clear_redirect_target(dev);
if (dev->flags & IFF_UP)
veth_disable_xdp(dev);
@ -1686,10 +1718,6 @@ static void veth_setup(struct net_device *dev)
dev->hw_enc_features = VETH_FEATURES;
dev->mpls_features = NETIF_F_HW_CSUM | NETIF_F_GSO_SOFTWARE;
netif_set_tso_max_size(dev, GSO_MAX_SIZE);
dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
NETDEV_XDP_ACT_NDO_XMIT | NETDEV_XDP_ACT_RX_SG |
NETDEV_XDP_ACT_NDO_XMIT_SG;
}
/*
@ -1857,6 +1885,10 @@ static int veth_newlink(struct net *src_net, struct net_device *dev,
goto err_queues;
veth_disable_gro(dev);
/* update XDP supported features */
veth_set_xdp_features(dev);
veth_set_xdp_features(peer);
return 0;
err_queues:


@ -446,7 +446,8 @@ static unsigned int mergeable_ctx_to_truesize(void *mrg_ctx)
static struct sk_buff *page_to_skb(struct virtnet_info *vi,
struct receive_queue *rq,
struct page *page, unsigned int offset,
unsigned int len, unsigned int truesize)
unsigned int len, unsigned int truesize,
unsigned int headroom)
{
struct sk_buff *skb;
struct virtio_net_hdr_mrg_rxbuf *hdr;
@ -464,11 +465,11 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
else
hdr_padded_len = sizeof(struct padded_vnet_hdr);
buf = p;
buf = p - headroom;
len -= hdr_len;
offset += hdr_padded_len;
p += hdr_padded_len;
tailroom = truesize - hdr_padded_len - len;
tailroom = truesize - headroom - hdr_padded_len - len;
shinfo_size = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
@ -545,6 +546,87 @@ ok:
return skb;
}
static void free_old_xmit_skbs(struct send_queue *sq, bool in_napi)
{
unsigned int len;
unsigned int packets = 0;
unsigned int bytes = 0;
void *ptr;
while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
if (likely(!is_xdp_frame(ptr))) {
struct sk_buff *skb = ptr;
pr_debug("Sent skb %p\n", skb);
bytes += skb->len;
napi_consume_skb(skb, in_napi);
} else {
struct xdp_frame *frame = ptr_to_xdp(ptr);
bytes += xdp_get_frame_len(frame);
xdp_return_frame(frame);
}
packets++;
}
/* Avoid overhead when no packets have been processed
* happens when called speculatively from start_xmit.
*/
if (!packets)
return;
u64_stats_update_begin(&sq->stats.syncp);
sq->stats.bytes += bytes;
sq->stats.packets += packets;
u64_stats_update_end(&sq->stats.syncp);
}
static bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)
{
if (q < (vi->curr_queue_pairs - vi->xdp_queue_pairs))
return false;
else if (q < vi->curr_queue_pairs)
return true;
else
return false;
}
static void check_sq_full_and_disable(struct virtnet_info *vi,
struct net_device *dev,
struct send_queue *sq)
{
bool use_napi = sq->napi.weight;
int qnum;
qnum = sq - vi->sq;
/* If running out of space, stop queue to avoid getting packets that we
* are then unable to transmit.
* An alternative would be to force queuing layer to requeue the skb by
* returning NETDEV_TX_BUSY. However, NETDEV_TX_BUSY should not be
* returned in a normal path of operation: it means that driver is not
* maintaining the TX queue stop/start state properly, and causes
* the stack to do a non-trivial amount of useless work.
* Since most packets only take 1 or 2 ring slots, stopping the queue
* early means 16 slots are typically wasted.
*/
if (sq->vq->num_free < 2+MAX_SKB_FRAGS) {
netif_stop_subqueue(dev, qnum);
if (use_napi) {
if (unlikely(!virtqueue_enable_cb_delayed(sq->vq)))
virtqueue_napi_schedule(&sq->napi, sq->vq);
} else if (unlikely(!virtqueue_enable_cb_delayed(sq->vq))) {
/* More just got used, free them then recheck. */
free_old_xmit_skbs(sq, false);
if (sq->vq->num_free >= 2+MAX_SKB_FRAGS) {
netif_start_subqueue(dev, qnum);
virtqueue_disable_cb(sq->vq);
}
}
}
}
static int __virtnet_xdp_xmit_one(struct virtnet_info *vi,
struct send_queue *sq,
struct xdp_frame *xdpf)
@ -686,6 +768,9 @@ static int virtnet_xdp_xmit(struct net_device *dev,
}
ret = nxmit;
if (!is_xdp_raw_buffer_queue(vi, sq - vi->sq))
check_sq_full_and_disable(vi, dev, sq);
if (flags & XDP_XMIT_FLUSH) {
if (virtqueue_kick_prepare(sq->vq) && virtqueue_notify(sq->vq))
kicks = 1;
@ -925,7 +1010,7 @@ static struct sk_buff *receive_big(struct net_device *dev,
{
struct page *page = buf;
struct sk_buff *skb =
page_to_skb(vi, rq, page, 0, len, PAGE_SIZE);
page_to_skb(vi, rq, page, 0, len, PAGE_SIZE, 0);
stats->bytes += len - vi->hdr_len;
if (unlikely(!skb))
@ -1188,9 +1273,12 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
switch (act) {
case XDP_PASS:
head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
if (unlikely(!head_skb))
goto err_xdp_frags;
if (unlikely(xdp_page != page))
put_page(page);
head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
rcu_read_unlock();
return head_skb;
case XDP_TX:
@ -1248,7 +1336,7 @@ err_xdp_frags:
rcu_read_unlock();
skip_xdp:
head_skb = page_to_skb(vi, rq, page, offset, len, truesize);
head_skb = page_to_skb(vi, rq, page, offset, len, truesize, headroom);
curr_skb = head_skb;
if (unlikely(!curr_skb))
@ -1714,52 +1802,6 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
return stats.packets;
}
static void free_old_xmit_skbs(struct send_queue *sq, bool in_napi)
{
unsigned int len;
unsigned int packets = 0;
unsigned int bytes = 0;
void *ptr;
while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
if (likely(!is_xdp_frame(ptr))) {
struct sk_buff *skb = ptr;
pr_debug("Sent skb %p\n", skb);
bytes += skb->len;
napi_consume_skb(skb, in_napi);
} else {
struct xdp_frame *frame = ptr_to_xdp(ptr);
bytes += xdp_get_frame_len(frame);
xdp_return_frame(frame);
}
packets++;
}
/* Avoid overhead when no packets have been processed
* happens when called speculatively from start_xmit.
*/
if (!packets)
return;
u64_stats_update_begin(&sq->stats.syncp);
sq->stats.bytes += bytes;
sq->stats.packets += packets;
u64_stats_update_end(&sq->stats.syncp);
}
static bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)
{
if (q < (vi->curr_queue_pairs - vi->xdp_queue_pairs))
return false;
else if (q < vi->curr_queue_pairs)
return true;
else
return false;
}
static void virtnet_poll_cleantx(struct receive_queue *rq)
{
struct virtnet_info *vi = rq->vq->vdev->priv;
@ -1989,30 +2031,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
nf_reset_ct(skb);
}
/* If running out of space, stop queue to avoid getting packets that we
* are then unable to transmit.
* An alternative would be to force queuing layer to requeue the skb by
* returning NETDEV_TX_BUSY. However, NETDEV_TX_BUSY should not be
* returned in a normal path of operation: it means that driver is not
* maintaining the TX queue stop/start state properly, and causes
* the stack to do a non-trivial amount of useless work.
* Since most packets only take 1 or 2 ring slots, stopping the queue
* early means 16 slots are typically wasted.
*/
if (sq->vq->num_free < 2+MAX_SKB_FRAGS) {
netif_stop_subqueue(dev, qnum);
if (use_napi) {
if (unlikely(!virtqueue_enable_cb_delayed(sq->vq)))
virtqueue_napi_schedule(&sq->napi, sq->vq);
} else if (unlikely(!virtqueue_enable_cb_delayed(sq->vq))) {
/* More just got used, free them then recheck. */
free_old_xmit_skbs(sq, false);
if (sq->vq->num_free >= 2+MAX_SKB_FRAGS) {
netif_start_subqueue(dev, qnum);
virtqueue_disable_cb(sq->vq);
}
}
}
check_sq_full_and_disable(vi, dev, sq);
if (kick || netif_xmit_stopped(txq)) {
if (virtqueue_kick_prepare(sq->vq) && virtqueue_notify(sq->vq)) {


@ -1177,14 +1177,9 @@ static int ucc_hdlc_probe(struct platform_device *pdev)
uhdlc_priv->dev = &pdev->dev;
uhdlc_priv->ut_info = ut_info;
if (of_get_property(np, "fsl,tdm-interface", NULL))
uhdlc_priv->tsa = 1;
if (of_get_property(np, "fsl,ucc-internal-loopback", NULL))
uhdlc_priv->loopback = 1;
if (of_get_property(np, "fsl,hdlc-bus", NULL))
uhdlc_priv->hdlc_bus = 1;
uhdlc_priv->tsa = of_property_read_bool(np, "fsl,tdm-interface");
uhdlc_priv->loopback = of_property_read_bool(np, "fsl,ucc-internal-loopback");
uhdlc_priv->hdlc_bus = of_property_read_bool(np, "fsl,hdlc-bus");
if (uhdlc_priv->tsa == 1) {
utdm = kzalloc(sizeof(*utdm), GFP_KERNEL);


@ -447,8 +447,7 @@ static int wlcore_probe_of(struct spi_device *spi, struct wl12xx_spi_glue *glue,
dev_info(&spi->dev, "selected chip family is %s\n",
pdev_data->family->name);
if (of_find_property(dt_node, "clock-xtal", NULL))
pdev_data->ref_clock_xtal = true;
pdev_data->ref_clock_xtal = of_property_read_bool(dt_node, "clock-xtal");
/* optional clock frequency params */
of_property_read_u32(dt_node, "ref-clock-frequency",


@ -175,6 +175,7 @@ static int pn533_usb_send_frame(struct pn533 *dev,
print_hex_dump_debug("PN533 TX: ", DUMP_PREFIX_NONE, 16, 1,
out->data, out->len, false);
arg.phy = phy;
init_completion(&arg.done);
cntx = phy->out_urb->context;
phy->out_urb->context = &arg;
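
The hunk above initializes the on-stack completion before the URB that will eventually complete it is submitted. A minimal sketch of that pattern; example_ctx, example_async_done() and example_wait_for_async() are illustrative names, not driver code:

#include <linux/completion.h>

struct example_ctx {
	struct completion done;
};

/* Runs from the asynchronous callback once the work finishes */
static void example_async_done(struct example_ctx *ctx)
{
	complete(&ctx->done);
}

static void example_wait_for_async(struct example_ctx *ctx)
{
	/* Must be initialized before anything can call complete() on it */
	init_completion(&ctx->done);

	/* ... submit the asynchronous work that ends in example_async_done() ... */

	wait_for_completion(&ctx->done);
}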


@ -282,13 +282,15 @@ EXPORT_SYMBOL(ndlc_probe);
void ndlc_remove(struct llt_ndlc *ndlc)
{
st_nci_remove(ndlc->ndev);
/* cancel timers */
del_timer_sync(&ndlc->t1_timer);
del_timer_sync(&ndlc->t2_timer);
ndlc->t2_active = false;
ndlc->t1_active = false;
/* cancel work */
cancel_work_sync(&ndlc->sm_work);
st_nci_remove(ndlc->ndev);
skb_queue_purge(&ndlc->rcv_q);
skb_queue_purge(&ndlc->send_q);


@ -297,9 +297,11 @@ struct hh_cache {
* relationship HH alignment <= LL alignment.
*/
#define LL_RESERVED_SPACE(dev) \
((((dev)->hard_header_len+(dev)->needed_headroom)&~(HH_DATA_MOD - 1)) + HH_DATA_MOD)
((((dev)->hard_header_len + READ_ONCE((dev)->needed_headroom)) \
& ~(HH_DATA_MOD - 1)) + HH_DATA_MOD)
#define LL_RESERVED_SPACE_EXTRA(dev,extra) \
((((dev)->hard_header_len+(dev)->needed_headroom+(extra))&~(HH_DATA_MOD - 1)) + HH_DATA_MOD)
((((dev)->hard_header_len + READ_ONCE((dev)->needed_headroom) + (extra)) \
& ~(HH_DATA_MOD - 1)) + HH_DATA_MOD)
struct header_ops {
int (*create) (struct sk_buff *skb, struct net_device *dev,
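
LL_RESERVED_SPACE() is evaluated locklessly on transmit paths while dev->needed_headroom can be updated concurrently (tunnel devices grow it at runtime), which is what the READ_ONCE() annotation above addresses. A minimal sketch of the usual consumer pattern, with len standing in for a hypothetical payload size:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static struct sk_buff *example_alloc_output_skb(struct net_device *dev,
						unsigned int len)
{
	/* One lockless snapshot of the headroom the device wants */
	unsigned int hh_len = LL_RESERVED_SPACE(dev);
	struct sk_buff *skb;

	skb = alloc_skb(hh_len + len, GFP_ATOMIC);
	if (!skb)
		return NULL;

	/* Leave room for the link-layer header the device will push later */
	skb_reserve(skb, hh_len);
	return skb;
}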


@ -428,12 +428,18 @@ MAX_XDP_METADATA_KFUNC,
#ifdef CONFIG_NET
u32 bpf_xdp_metadata_kfunc_id(int id);
bool bpf_dev_bound_kfunc_id(u32 btf_id);
void xdp_set_features_flag(struct net_device *dev, xdp_features_t val);
void xdp_features_set_redirect_target(struct net_device *dev, bool support_sg);
void xdp_features_clear_redirect_target(struct net_device *dev);
#else
static inline u32 bpf_xdp_metadata_kfunc_id(int id) { return 0; }
static inline bool bpf_dev_bound_kfunc_id(u32 btf_id) { return false; }
static inline void
xdp_set_features_flag(struct net_device *dev, xdp_features_t val)
{
}
static inline void
xdp_features_set_redirect_target(struct net_device *dev, bool support_sg)
{
@ -445,4 +451,9 @@ xdp_features_clear_redirect_target(struct net_device *dev)
}
#endif
static inline void xdp_clear_features_flag(struct net_device *dev)
{
xdp_set_features_flag(dev, 0);
}
#endif /* __LINUX_NET_XDP_H__ */


@ -1,4 +1,4 @@
/* SPDX-License-Identifier: (GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause */
/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
/* Do not edit directly, auto-generated from: */
/* Documentation/netlink/specs/fou.yaml */
/* YNL-GEN uapi header */


@ -1,4 +1,4 @@
/* SPDX-License-Identifier: (GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause */
/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
/* Do not edit directly, auto-generated from: */
/* Documentation/netlink/specs/netdev.yaml */
/* YNL-GEN uapi header */
@ -33,6 +33,8 @@ enum netdev_xdp_act {
NETDEV_XDP_ACT_HW_OFFLOAD = 16,
NETDEV_XDP_ACT_RX_SG = 32,
NETDEV_XDP_ACT_NDO_XMIT_SG = 64,
NETDEV_XDP_ACT_MASK = 127,
};
enum {


@ -789,6 +789,7 @@ enum {
TCA_ROOT_FLAGS,
TCA_ROOT_COUNT,
TCA_ROOT_TIME_DELTA, /* in msecs */
TCA_ROOT_EXT_WARN_MSG,
__TCA_ROOT_MAX,
#define TCA_ROOT_MAX (__TCA_ROOT_MAX - 1)
};


@ -1,4 +1,4 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
// SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
/* Do not edit directly, auto-generated from: */
/* Documentation/netlink/specs/netdev.yaml */
/* YNL-GEN kernel source */


@ -1,4 +1,4 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
/* Do not edit directly, auto-generated from: */
/* Documentation/netlink/specs/netdev.yaml */
/* YNL-GEN kernel header */


@ -774,20 +774,34 @@ static int __init xdp_metadata_init(void)
}
late_initcall(xdp_metadata_init);
void xdp_set_features_flag(struct net_device *dev, xdp_features_t val)
{
val &= NETDEV_XDP_ACT_MASK;
if (dev->xdp_features == val)
return;
dev->xdp_features = val;
if (dev->reg_state == NETREG_REGISTERED)
call_netdevice_notifiers(NETDEV_XDP_FEAT_CHANGE, dev);
}
EXPORT_SYMBOL_GPL(xdp_set_features_flag);
void xdp_features_set_redirect_target(struct net_device *dev, bool support_sg)
{
dev->xdp_features |= NETDEV_XDP_ACT_NDO_XMIT;
if (support_sg)
dev->xdp_features |= NETDEV_XDP_ACT_NDO_XMIT_SG;
xdp_features_t val = (dev->xdp_features | NETDEV_XDP_ACT_NDO_XMIT);
call_netdevice_notifiers(NETDEV_XDP_FEAT_CHANGE, dev);
if (support_sg)
val |= NETDEV_XDP_ACT_NDO_XMIT_SG;
xdp_set_features_flag(dev, val);
}
EXPORT_SYMBOL_GPL(xdp_features_set_redirect_target);
void xdp_features_clear_redirect_target(struct net_device *dev)
{
dev->xdp_features &= ~(NETDEV_XDP_ACT_NDO_XMIT |
NETDEV_XDP_ACT_NDO_XMIT_SG);
call_netdevice_notifiers(NETDEV_XDP_FEAT_CHANGE, dev);
xdp_features_t val = dev->xdp_features;
val &= ~(NETDEV_XDP_ACT_NDO_XMIT | NETDEV_XDP_ACT_NDO_XMIT_SG);
xdp_set_features_flag(dev, val);
}
EXPORT_SYMBOL_GPL(xdp_features_clear_redirect_target);
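
With xdp_set_features_flag() as the single entry point, drivers no longer write dev->xdp_features directly: the helper masks the value against NETDEV_XDP_ACT_MASK, skips no-op updates, and raises NETDEV_XDP_FEAT_CHANGE only once the device is registered; the redirect-target helpers above are now thin wrappers around it (the earlier veth hunk consumes them the same way). A minimal sketch of a driver-side caller; example_set_caps() and has_ndo_xmit are hypothetical:

#include <linux/netdevice.h>
#include <net/xdp.h>

static void example_set_caps(struct net_device *dev, bool has_ndo_xmit)
{
	xdp_features_t val = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT;

	/* Masked against NETDEV_XDP_ACT_MASK; a no-op if nothing changed */
	xdp_set_features_flag(dev, val);

	/* The NDO_XMIT/NDO_XMIT_SG bits can be toggled separately, e.g.
	 * when an XDP program is attached or torn down.
	 */
	if (has_ndo_xmit)
		xdp_features_set_redirect_target(dev, false);
	else
		xdp_features_clear_redirect_target(dev);
}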


@ -1933,6 +1933,7 @@ int dsa_slave_change_mtu(struct net_device *dev, int new_mtu)
int new_master_mtu;
int old_master_mtu;
int mtu_limit;
int overhead;
int cpu_mtu;
int err;
@ -1961,9 +1962,10 @@ int dsa_slave_change_mtu(struct net_device *dev, int new_mtu)
largest_mtu = slave_mtu;
}
mtu_limit = min_t(int, master->max_mtu, dev->max_mtu);
overhead = dsa_tag_protocol_overhead(cpu_dp->tag_ops);
mtu_limit = min_t(int, master->max_mtu, dev->max_mtu + overhead);
old_master_mtu = master->mtu;
new_master_mtu = largest_mtu + dsa_tag_protocol_overhead(cpu_dp->tag_ops);
new_master_mtu = largest_mtu + overhead;
if (new_master_mtu > mtu_limit)
return -ERANGE;
@ -1998,8 +2000,7 @@ int dsa_slave_change_mtu(struct net_device *dev, int new_mtu)
out_port_failed:
if (new_master_mtu != old_master_mtu)
dsa_port_mtu_change(cpu_dp, old_master_mtu -
dsa_tag_protocol_overhead(cpu_dp->tag_ops));
dsa_port_mtu_change(cpu_dp, old_master_mtu - overhead);
out_cpu_failed:
if (new_master_mtu != old_master_mtu)
dev_set_mtu(master, old_master_mtu);
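
The effect of folding the tag protocol overhead into mtu_limit is easiest to see with numbers, assuming purely for illustration an 8-byte tag, a slave max_mtu of 1500 and a master max_mtu of at least 1508: previously mtu_limit = min(master->max_mtu, 1500) = 1500 while new_master_mtu = 1500 + 8 = 1508, so setting the slave MTU to 1500 failed with -ERANGE even though the master could carry the tagged frame; with the overhead added to the slave's max_mtu, mtu_limit becomes min(master->max_mtu, 1508) = 1508 and the same request succeeds.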


@ -415,7 +415,7 @@ void hsr_addr_subst_dest(struct hsr_node *node_src, struct sk_buff *skb,
node_dst = find_node_by_addr_A(&port->hsr->node_db,
eth_hdr(skb)->h_dest);
if (!node_dst) {
if (net_ratelimit())
if (port->hsr->prot_version != PRP_V1 && net_ratelimit())
netdev_err(skb->dev, "%s: Unknown node\n", __func__);
return;
}


@ -576,6 +576,9 @@ static int rtentry_to_fib_config(struct net *net, int cmd, struct rtentry *rt,
cfg->fc_scope = RT_SCOPE_UNIVERSE;
}
if (!cfg->fc_table)
cfg->fc_table = RT_TABLE_MAIN;
if (cmd == SIOCDELRT)
return 0;


@ -1,4 +1,4 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
// SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
/* Do not edit directly, auto-generated from: */
/* Documentation/netlink/specs/fou.yaml */
/* YNL-GEN kernel source */


@ -1,4 +1,4 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
/* Do not edit directly, auto-generated from: */
/* Documentation/netlink/specs/fou.yaml */
/* YNL-GEN kernel header */


@ -828,8 +828,14 @@ bool inet_bind2_bucket_match_addr_any(const struct inet_bind2_bucket *tb, const
#if IS_ENABLED(CONFIG_IPV6)
struct in6_addr addr_any = {};
if (sk->sk_family != tb->family)
if (sk->sk_family != tb->family) {
if (sk->sk_family == AF_INET)
return net_eq(ib2_net(tb), net) && tb->port == port &&
tb->l3mdev == l3mdev &&
ipv6_addr_equal(&tb->v6_rcv_saddr, &addr_any);
return false;
}
if (sk->sk_family == AF_INET6)
return net_eq(ib2_net(tb), net) && tb->port == port &&

Some files were not shown because too many files changed in this diff.