Merge tag 'net-5.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Networking fixes for 5.13-rc1, including fixes from the bpf, can and
  netfilter trees. Self-contained fixes, nothing risky.

  Current release - new code bugs:
   - dsa: ksz: fix a few bugs found by the static checker in the new driver
   - stmmac: fix frame preemption handshake not triggering after
     interface restart

  Previous releases - regressions:
   - make nla_strcmp handle more than one trailing null character
   - fix stack OOB reads while fragmenting IPv4 packets in openvswitch
     and net/sched
   - sctp: do asoc update earlier in sctp_sf_do_dupcook_a
   - sctp: delay auto_asconf init until binding the first addr
   - stmmac: clear receive all (RA) bit when promiscuous mode is off
   - can: mcp251x: fix resume from sleep before interface was brought up

  Previous releases - always broken:
   - bpf: fix leakage of uninitialized bpf stack under speculation
   - bpf: fix masking negation logic upon negative dst register
   - netfilter: don't assume that skb_header_pointer() will never fail
   - only allow init netns to set default tcp cong to a restricted algo
   - xsk: fix xp_aligned_validate_desc() when len == chunk_size to
     avoid false positive errors
   - ethtool: fix missing NLM_F_MULTI flag when dumping
   - can: m_can: m_can_tx_work_queue(): fix tx_skb race condition
   - sctp: fix a SCTP_MIB_CURRESTAB leak in sctp_sf_do_dupcook_b
   - bridge: fix NULL-deref caused by a race between assigning
     rx_handler_data and setting the IFF_BRIDGE_PORT bit

  Latecomer:
   - seg6: add counters support for SRv6 Behaviors"

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

* tag 'net-5.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (73 commits)
  atm: firestream: Use fallthrough pseudo-keyword
  net: stmmac: Do not enable RX FIFO overflow interrupts
  mptcp: fix splat when closing unaccepted socket
  i40e: Remove LLDP frame filters
  i40e: Fix PHY type identifiers for 2.5G and 5G adapters
  i40e: fix the restart auto-negotiation after FEC modified
  i40e: Fix use-after-free in i40e_client_subtask()
  i40e: fix broken XDP support
  netfilter: nftables: avoid potential overflows on 32bit arches
  netfilter: nftables: avoid overflows in nft_hash_buckets()
  tcp: Specify cmsgbuf is user pointer for receive zerocopy.
  mlxsw: spectrum_mr: Update egress RIF list before route's action
  net: ipa: fix inter-EE IRQ register definitions
  can: m_can: m_can_tx_work_queue(): fix tx_skb race condition
  can: mcp251x: fix resume from sleep before interface was brought up
  can: mcp251xfd: mcp251xfd_probe(): add missing can_rx_offload_del() in error path
  can: mcp251xfd: mcp251xfd_probe(): fix an error pointer dereference in probe
  netfilter: nftables: Fix a memleak from userdata error path in new objects
  netfilter: remove BUG_ON() after skb_header_pointer()
  netfilter: nfnetlink_osf: Fix a missing skb_header_pointer() NULL check
  ...
CREDITS | 5 +++++

@@ -1874,6 +1874,11 @@ S: Krosenska' 543
 S: 181 00 Praha 8
 S: Czech Republic

+N: Murali Karicheri
+E: m-karicheri2@ti.com
+D: Keystone NetCP driver
+D: Keystone PCIe host controller driver
+
 N: Jan "Yenya" Kasprzak
 E: kas@fi.muni.cz
 D: Author of the COSA/SRP sync serial board driver.
Documentation/ABI/testing/sysfs-class-net-qmi | 16 ++++++++++++++++

@@ -58,3 +58,19 @@ Description:
		Indicates the mux id associated to the qmimux network interface
		during its creation.
+
+What:		/sys/class/net/<iface>/qmi/pass_through
+Date:		January 2021
+KernelVersion:	5.12
+Contact:	Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
+Description:
+		Boolean.  Default: 'N'
+
+		Set this to 'Y' to enable 'pass-through' mode, allowing packets
+		in MAP format to be passed on to the stack.
+
+		Normally the rmnet driver (CONFIG_RMNET) is then used to process
+		and demultiplex these packets.
+
+		'Pass-through' mode can be enabled when the device is in
+		'raw-ip' mode only.
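In practice the new attribute is toggled at runtime, e.g. with
"echo Y > /sys/class/net/<iface>/qmi/pass_through"; per the description
above this is only valid once the device has been switched to 'raw-ip'
mode.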
MAINTAINERS | 16 ++--------------

@@ -624,6 +624,7 @@ F: fs/affs/
 AFS FILESYSTEM
 M: David Howells <dhowells@redhat.com>
+M: Marc Dionne <marc.dionne@auristor.com>
 L: linux-afs@lists.infradead.org
 S: Supported
 W: https://www.infradead.org/~dhowells/kafs/

@@ -14099,13 +14100,6 @@ F: Documentation/devicetree/bindings/pci/ti-pci.txt
 F: drivers/pci/controller/cadence/pci-j721e.c
 F: drivers/pci/controller/dwc/pci-dra7xx.c

-PCI DRIVER FOR TI KEYSTONE
-M: Murali Karicheri <m-karicheri2@ti.com>
-L: linux-pci@vger.kernel.org
-L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-S: Maintained
-F: drivers/pci/controller/dwc/pci-keystone.c
-
 PCI DRIVER FOR V3 SEMICONDUCTOR V360EPC
 M: Linus Walleij <linus.walleij@linaro.org>
 L: linux-pci@vger.kernel.org

@@ -15891,6 +15885,7 @@ F: drivers/infiniband/ulp/rtrs/
 RXRPC SOCKETS (AF_RXRPC)
 M: David Howells <dhowells@redhat.com>
+M: Marc Dionne <marc.dionne@auristor.com>
 L: linux-afs@lists.infradead.org
 S: Supported
 W: https://www.infradead.org/~dhowells/kafs/

@@ -18307,13 +18302,6 @@ S: Maintained
 F: sound/soc/codecs/isabelle*
 F: sound/soc/codecs/lm49453*

-TI NETCP ETHERNET DRIVER
-M: Wingman Kwok <w-kwok2@ti.com>
-M: Murali Karicheri <m-karicheri2@ti.com>
-L: netdev@vger.kernel.org
-S: Maintained
-F: drivers/net/ethernet/ti/netcp*
-
 TI PCM3060 ASoC CODEC DRIVER
 M: Kirill Marinushkin <kmarinushkin@birdec.com>
 L: alsa-devel@alsa-project.org (moderated for non-subscribers)
drivers/atm/firestream.c

@@ -795,6 +795,7 @@ static void process_incoming (struct fs_dev *dev, struct queue *q)
	switch (STATUS_CODE (qe)) {
	case 0x1:
		/* Fall through for streaming mode */
+		fallthrough;
	case 0x2:	/* Packet received OK.... */
		if (atm_vcc) {
			skb = pe->skb;
drivers/net/can/m_can/m_can.c

@@ -1562,6 +1562,8 @@ static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev)
	int i;
	int putidx;

+	cdev->tx_skb = NULL;
+
	/* Generate ID field for TX buffer Element */
	/* Common to all supported M_CAN versions */
	if (cf->can_id & CAN_EFF_FLAG) {

@@ -1678,7 +1680,6 @@ static void m_can_tx_work_queue(struct work_struct *ws)
						tx_work);

	m_can_tx_handler(cdev);
-	cdev->tx_skb = NULL;
 }

 static netdev_tx_t m_can_start_xmit(struct sk_buff *skb,
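The two m_can hunks close a small window: m_can_start_xmit() stores the
next skb in cdev->tx_skb and queues the work item, so clearing tx_skb in
the worker only after m_can_tx_handler() returned could wipe out an skb
that a concurrent xmit had just stored. A minimal model of the hazard (a
sketch with simplified types; hw_submit() is illustrative, not the
driver's real API):

    extern void hw_submit(void *skb);   /* illustrative hardware hand-off */

    struct tx_ctx {
        void *tx_skb;                   /* single shared TX slot */
    };

    static void tx_handler(struct tx_ctx *c)
    {
        void *skb = c->tx_skb;

        c->tx_skb = NULL;               /* claim the slot first... */
        hw_submit(skb);                 /* ...then push the frame out */
    }

    static void tx_work_queue(struct tx_ctx *c)
    {
        tx_handler(c);
        /* The pre-fix code cleared c->tx_skb here as well; if
         * ndo_start_xmit had already stored the next skb, that
         * frame was silently dropped.
         */
    }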
drivers/net/can/spi/mcp251x.c

@@ -956,8 +956,6 @@ static int mcp251x_stop(struct net_device *net)

	priv->force_quit = 1;
	free_irq(spi->irq, priv);
-	destroy_workqueue(priv->wq);
-	priv->wq = NULL;

	mutex_lock(&priv->mcp_lock);

@@ -1224,24 +1222,15 @@ static int mcp251x_open(struct net_device *net)
		goto out_close;
	}

-	priv->wq = alloc_workqueue("mcp251x_wq", WQ_FREEZABLE | WQ_MEM_RECLAIM,
-				   0);
-	if (!priv->wq) {
-		ret = -ENOMEM;
-		goto out_clean;
-	}
-	INIT_WORK(&priv->tx_work, mcp251x_tx_work_handler);
-	INIT_WORK(&priv->restart_work, mcp251x_restart_work_handler);
-
	ret = mcp251x_hw_wake(spi);
	if (ret)
-		goto out_free_wq;
+		goto out_free_irq;
	ret = mcp251x_setup(net, spi);
	if (ret)
-		goto out_free_wq;
+		goto out_free_irq;
	ret = mcp251x_set_normal_mode(spi);
	if (ret)
-		goto out_free_wq;
+		goto out_free_irq;

	can_led_event(net, CAN_LED_EVENT_OPEN);

@@ -1250,9 +1239,7 @@ static int mcp251x_open(struct net_device *net)

	return 0;

-out_free_wq:
-	destroy_workqueue(priv->wq);
-out_clean:
+out_free_irq:
	free_irq(spi->irq, priv);
	mcp251x_hw_sleep(spi);
 out_close:

@@ -1373,6 +1360,15 @@ static int mcp251x_can_probe(struct spi_device *spi)
	if (ret)
		goto out_clk;

+	priv->wq = alloc_workqueue("mcp251x_wq", WQ_FREEZABLE | WQ_MEM_RECLAIM,
+				   0);
+	if (!priv->wq) {
+		ret = -ENOMEM;
+		goto out_clk;
+	}
+	INIT_WORK(&priv->tx_work, mcp251x_tx_work_handler);
+	INIT_WORK(&priv->restart_work, mcp251x_restart_work_handler);
+
	priv->spi = spi;
	mutex_init(&priv->mcp_lock);

@@ -1417,6 +1413,8 @@ static int mcp251x_can_probe(struct spi_device *spi)
	return 0;

 error_probe:
+	destroy_workqueue(priv->wq);
+	priv->wq = NULL;
	mcp251x_power_enable(priv->power, 0);

 out_clk:

@@ -1438,6 +1436,9 @@ static int mcp251x_can_remove(struct spi_device *spi)

	mcp251x_power_enable(priv->power, 0);

+	destroy_workqueue(priv->wq);
+	priv->wq = NULL;
+
	clk_disable_unprepare(priv->clk);

	free_candev(net);
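All of the mcp251x churn follows from one lifetime rule: the resume
handler schedules priv->restart_work, so the workqueue and its work
items must exist for the whole device lifetime (probe() to remove()),
not just while the interface is open. That is what fixes resume from
sleep when the interface was never brought up; the out_free_wq/out_clean
to out_free_irq relabeling in mcp251x_open() is just fallout from open()
no longer owning the workqueue.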
drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c

@@ -2885,8 +2885,8 @@ static int mcp251xfd_probe(struct spi_device *spi)

	clk = devm_clk_get(&spi->dev, NULL);
	if (IS_ERR(clk))
-		dev_err_probe(&spi->dev, PTR_ERR(clk),
-			      "Failed to get Oscillator (clock)!\n");
+		return dev_err_probe(&spi->dev, PTR_ERR(clk),
+				     "Failed to get Oscillator (clock)!\n");
	freq = clk_get_rate(clk);

	/* Sanity check */

@@ -2986,10 +2986,12 @@ static int mcp251xfd_probe(struct spi_device *spi)

	err = mcp251xfd_register(priv);
	if (err)
-		goto out_free_candev;
+		goto out_can_rx_offload_del;

	return 0;

+out_can_rx_offload_del:
+	can_rx_offload_del(&priv->offload);
 out_free_candev:
	spi->max_speed_hz = priv->spi_max_speed_hz_orig;
drivers/net/dsa/microchip/ksz8795_spi.c

@@ -41,6 +41,9 @@ static int ksz8795_spi_probe(struct spi_device *spi)
	int i, ret = 0;

	ksz8 = devm_kzalloc(&spi->dev, sizeof(struct ksz8), GFP_KERNEL);
+	if (!ksz8)
+		return -ENOMEM;
+
	ksz8->priv = spi;

	dev = ksz_switch_alloc(&spi->dev, ksz8);
drivers/net/dsa/microchip/ksz8863_smi.c

@@ -147,11 +147,14 @@ static int ksz8863_smi_probe(struct mdio_device *mdiodev)
	int i;

	ksz8 = devm_kzalloc(&mdiodev->dev, sizeof(struct ksz8), GFP_KERNEL);
+	if (!ksz8)
+		return -ENOMEM;
+
	ksz8->priv = mdiodev;

	dev = ksz_switch_alloc(&mdiodev->dev, ksz8);
	if (!dev)
-		return -EINVAL;
+		return -ENOMEM;

	for (i = 0; i < ARRAY_SIZE(ksz8863_regmap_config); i++) {
		rc = ksz8863_regmap_config[i];
drivers/net/ethernet/atheros/alx/main.c

@@ -2016,7 +2016,7 @@ static struct pci_driver alx_driver = {
 module_pci_driver(alx_driver);
 MODULE_DEVICE_TABLE(pci, alx_pci_tbl);
 MODULE_AUTHOR("Johannes Berg <johannes@sipsolutions.net>");
-MODULE_AUTHOR("Qualcomm Corporation, <nic-devel@qualcomm.com>");
+MODULE_AUTHOR("Qualcomm Corporation");
 MODULE_DESCRIPTION(
	"Qualcomm Atheros(R) AR816x/AR817x PCI-E Ethernet Network Driver");
 MODULE_LICENSE("GPL");
drivers/net/ethernet/atheros/atl1c/atl1c_main.c

@@ -32,7 +32,7 @@ static const struct pci_device_id atl1c_pci_tbl[] = {
 MODULE_DEVICE_TABLE(pci, atl1c_pci_tbl);

 MODULE_AUTHOR("Jie Yang");
-MODULE_AUTHOR("Qualcomm Atheros Inc., <nic-devel@qualcomm.com>");
+MODULE_AUTHOR("Qualcomm Atheros Inc.");
 MODULE_DESCRIPTION("Qualcomm Atheros 100/1000M Ethernet Network Driver");
 MODULE_LICENSE("GPL");
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c

@@ -1192,7 +1192,6 @@ int bnx2x_iov_init_one(struct bnx2x *bp, int int_mode_param,
		return 0;
	}

-	err = -EIO;
	/* verify ari is enabled */
	if (!pci_ari_enabled(bp->pdev->bus)) {
		BNX2X_ERR("ARI not supported (check pci bridge ARI forwarding), SRIOV can not be enabled\n");
drivers/net/ethernet/brocade/bna/bnad.c

@@ -1764,7 +1764,7 @@ bnad_dim_timeout(struct timer_list *t)
		}
	}

-	/* Check for BNAD_CF_DIM_ENABLED, does not eleminate a race */
+	/* Check for BNAD_CF_DIM_ENABLED, does not eliminate a race */
	if (test_bit(BNAD_RF_DIM_TIMER_RUNNING, &bnad->run_flags))
		mod_timer(&bnad->dim_timer,
			  jiffies + msecs_to_jiffies(BNAD_DIM_TIMER_FREQ));
drivers/net/ethernet/cadence/macb_main.c

@@ -4852,7 +4852,7 @@ static int __maybe_unused macb_suspend(struct device *dev)
 {
	struct net_device *netdev = dev_get_drvdata(dev);
	struct macb *bp = netdev_priv(netdev);
-	struct macb_queue *queue = bp->queues;
+	struct macb_queue *queue;
	unsigned long flags;
	unsigned int q;
	int err;

@@ -4939,7 +4939,7 @@ static int __maybe_unused macb_resume(struct device *dev)
 {
	struct net_device *netdev = dev_get_drvdata(dev);
	struct macb *bp = netdev_priv(netdev);
-	struct macb_queue *queue = bp->queues;
+	struct macb_queue *queue;
	unsigned long flags;
	unsigned int q;
	int err;
drivers/net/ethernet/chelsio/cxgb4/sge.c

@@ -2563,12 +2563,12 @@ int cxgb4_ethofld_send_flowc(struct net_device *dev, u32 eotid, u32 tc)
	spin_lock_bh(&eosw_txq->lock);
	if (tc != FW_SCHED_CLS_NONE) {
		if (eosw_txq->state != CXGB4_EO_STATE_CLOSED)
-			goto out_unlock;
+			goto out_free_skb;

		next_state = CXGB4_EO_STATE_FLOWC_OPEN_SEND;
	} else {
		if (eosw_txq->state != CXGB4_EO_STATE_ACTIVE)
-			goto out_unlock;
+			goto out_free_skb;

		next_state = CXGB4_EO_STATE_FLOWC_CLOSE_SEND;
	}

@@ -2604,17 +2604,19 @@ int cxgb4_ethofld_send_flowc(struct net_device *dev, u32 eotid, u32 tc)
	eosw_txq_flush_pending_skbs(eosw_txq);

	ret = eosw_txq_enqueue(eosw_txq, skb);
-	if (ret) {
-		dev_consume_skb_any(skb);
-		goto out_unlock;
-	}
+	if (ret)
+		goto out_free_skb;

	eosw_txq->state = next_state;
	eosw_txq->flowc_idx = eosw_txq->pidx;
	eosw_txq_advance(eosw_txq, 1);
	ethofld_xmit(dev, eosw_txq);

 out_unlock:
	spin_unlock_bh(&eosw_txq->lock);
	return 0;
+
+out_free_skb:
+	dev_consume_skb_any(skb);
+	spin_unlock_bh(&eosw_txq->lock);
+	return ret;
 }
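Net effect of the cxgb4 hunks: every early exit from
cxgb4_ethofld_send_flowc() now drops the lock and consumes the FLOWC skb
exactly once, and the failure exits propagate ret instead of returning
0; previously the state-check bailouts reported success while leaking
the skb.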
drivers/net/ethernet/cisco/enic/enic_main.c

@@ -768,7 +768,7 @@ static inline int enic_queue_wq_skb_encap(struct enic *enic, struct vnic_wq *wq,
	return err;
 }

-static inline void enic_queue_wq_skb(struct enic *enic,
+static inline int enic_queue_wq_skb(struct enic *enic,
	struct vnic_wq *wq, struct sk_buff *skb)
 {
	unsigned int mss = skb_shinfo(skb)->gso_size;

@@ -814,6 +814,7 @@ static inline int enic_queue_wq_skb(struct enic *enic,
		wq->to_use = buf->next;
		dev_kfree_skb(skb);
	}
+	return err;
 }

 /* netif_tx_lock held, process context with BHs disabled, or BH */

@@ -857,7 +858,8 @@ static netdev_tx_t enic_hard_start_xmit(struct sk_buff *skb,
		return NETDEV_TX_BUSY;
	}

-	enic_queue_wq_skb(enic, wq, skb);
+	if (enic_queue_wq_skb(enic, wq, skb))
+		goto error;

	if (vnic_wq_desc_avail(wq) < MAX_SKB_FRAGS + ENIC_DESC_MAX_SPLITS)
		netif_tx_stop_queue(txq);

@@ -865,6 +867,7 @@ static netdev_tx_t enic_hard_start_xmit(struct sk_buff *skb,
	if (!netdev_xmit_more() || netif_xmit_stopped(txq))
		vnic_wq_doorbell(wq);

+error:
	spin_unlock(&enic->wq_lock[txq_map]);

	return NETDEV_TX_OK;
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c

@@ -575,8 +575,8 @@ static int hns3_nic_net_stop(struct net_device *netdev)
	if (h->ae_algo->ops->set_timer_task)
		h->ae_algo->ops->set_timer_task(priv->ae_handle, false);

-	netif_tx_stop_all_queues(netdev);
	netif_carrier_off(netdev);
+	netif_tx_disable(netdev);

	hns3_nic_net_down(netdev);

@@ -824,7 +824,7 @@ static int hns3_get_l4_protocol(struct sk_buff *skb, u8 *ol4_proto,
 * and it is udp packet, which has a dest port as the IANA assigned.
 * the hardware is expected to do the checksum offload, but the
 * hardware will not do the checksum offload when udp dest port is
- * 4789 or 6081.
+ * 4789, 4790 or 6081.
 */
 static bool hns3_tunnel_csum_bug(struct sk_buff *skb)
 {

@@ -842,7 +842,8 @@ static bool hns3_tunnel_csum_bug(struct sk_buff *skb)

	if (!(!skb->encapsulation &&
	      (l4.udp->dest == htons(IANA_VXLAN_UDP_PORT) ||
-	      l4.udp->dest == htons(GENEVE_UDP_PORT))))
+	      l4.udp->dest == htons(GENEVE_UDP_PORT) ||
+	      l4.udp->dest == htons(4790))))
		return false;

	skb_checksum_help(skb);

@@ -4616,6 +4617,11 @@ static int hns3_reset_notify_up_enet(struct hnae3_handle *handle)
	struct hns3_nic_priv *priv = netdev_priv(kinfo->netdev);
	int ret = 0;

+	if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state)) {
+		netdev_err(kinfo->netdev, "device is not initialized yet\n");
+		return -EFAULT;
+	}
+
	clear_bit(HNS3_NIC_STATE_RESETTING, &priv->state);

	if (netif_running(kinfo->netdev)) {
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c

@@ -753,8 +753,9 @@ static int hclge_config_igu_egu_hw_err_int(struct hclge_dev *hdev, bool en)

	/* configure IGU,EGU error interrupts */
	hclge_cmd_setup_basic_desc(&desc, HCLGE_IGU_COMMON_INT_EN, false);
+	desc.data[0] = cpu_to_le32(HCLGE_IGU_ERR_INT_TYPE);
	if (en)
-		desc.data[0] = cpu_to_le32(HCLGE_IGU_ERR_INT_EN);
+		desc.data[0] |= cpu_to_le32(HCLGE_IGU_ERR_INT_EN);

	desc.data[1] = cpu_to_le32(HCLGE_IGU_ERR_INT_EN_MASK);

drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h

@@ -32,7 +32,8 @@
 #define HCLGE_TQP_ECC_ERR_INT_EN_MASK	0x0FFF
 #define HCLGE_MSIX_SRAM_ECC_ERR_INT_EN_MASK	0x0F000000
 #define HCLGE_MSIX_SRAM_ECC_ERR_INT_EN	0x0F000000
-#define HCLGE_IGU_ERR_INT_EN	0x0000066F
+#define HCLGE_IGU_ERR_INT_EN	0x0000000F
+#define HCLGE_IGU_ERR_INT_TYPE	0x00000660
 #define HCLGE_IGU_ERR_INT_EN_MASK	0x000F
 #define HCLGE_IGU_TNL_ERR_INT_EN	0x0002AABF
 #define HCLGE_IGU_TNL_ERR_INT_EN_MASK	0x003F

drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c

@@ -3978,6 +3978,12 @@ static void hclge_update_reset_level(struct hclge_dev *hdev)
	struct hnae3_ae_dev *ae_dev = pci_get_drvdata(hdev->pdev);
	enum hnae3_reset_type reset_level;

+	/* reset request will not be set during reset, so clear
+	 * pending reset request to avoid unnecessary reset
+	 * caused by the same reason.
+	 */
+	hclge_get_reset_level(ae_dev, &hdev->reset_request);
+
	/* if default_reset_request has a higher level reset request,
	 * it should be handled as soon as possible. since some errors
	 * need this kind of reset to fix.

drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c

@@ -533,7 +533,7 @@ static void hclge_get_link_mode(struct hclge_vport *vport,
	unsigned long advertising;
	unsigned long supported;
	unsigned long send_data;
-	u8 msg_data[10];
+	u8 msg_data[10] = {};
	u8 dest_vfid;

	advertising = hdev->hw.mac.advertising[0];

drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c

@@ -255,6 +255,8 @@ void hclge_mac_start_phy(struct hclge_dev *hdev)
	if (!phydev)
		return;

+	phy_loopback(phydev, false);
+
	phy_start(phydev);
 }
drivers/net/ethernet/intel/i40e/i40e.h

@@ -1144,7 +1144,6 @@ static inline bool i40e_is_sw_dcb(struct i40e_pf *pf)
	return !!(pf->flags & I40E_FLAG_DISABLE_FW_LLDP);
 }

-void i40e_set_lldp_forwarding(struct i40e_pf *pf, bool enable);
 #ifdef CONFIG_I40E_DCB
 void i40e_dcbnl_flush_apps(struct i40e_pf *pf,
			   struct i40e_dcbx_config *old_cfg,

drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h

@@ -1566,8 +1566,10 @@ enum i40e_aq_phy_type {
	I40E_PHY_TYPE_25GBASE_LR		= 0x22,
	I40E_PHY_TYPE_25GBASE_AOC		= 0x23,
	I40E_PHY_TYPE_25GBASE_ACC		= 0x24,
-	I40E_PHY_TYPE_2_5GBASE_T		= 0x30,
-	I40E_PHY_TYPE_5GBASE_T			= 0x31,
+	I40E_PHY_TYPE_2_5GBASE_T		= 0x26,
+	I40E_PHY_TYPE_5GBASE_T			= 0x27,
+	I40E_PHY_TYPE_2_5GBASE_T_LINK_STATUS	= 0x30,
+	I40E_PHY_TYPE_5GBASE_T_LINK_STATUS	= 0x31,
	I40E_PHY_TYPE_MAX,
	I40E_PHY_TYPE_NOT_SUPPORTED_HIGH_TEMP	= 0xFD,
	I40E_PHY_TYPE_EMPTY			= 0xFE,
drivers/net/ethernet/intel/i40e/i40e_client.c

@@ -375,6 +375,7 @@ void i40e_client_subtask(struct i40e_pf *pf)
				clear_bit(__I40E_CLIENT_INSTANCE_OPENED,
					  &cdev->state);
+				i40e_client_del_instance(pf);
				return;
			}
		}
	}

drivers/net/ethernet/intel/i40e/i40e_common.c

@@ -1154,8 +1154,8 @@ static enum i40e_media_type i40e_get_media_type(struct i40e_hw *hw)
		break;
	case I40E_PHY_TYPE_100BASE_TX:
	case I40E_PHY_TYPE_1000BASE_T:
-	case I40E_PHY_TYPE_2_5GBASE_T:
-	case I40E_PHY_TYPE_5GBASE_T:
+	case I40E_PHY_TYPE_2_5GBASE_T_LINK_STATUS:
+	case I40E_PHY_TYPE_5GBASE_T_LINK_STATUS:
	case I40E_PHY_TYPE_10GBASE_T:
		media = I40E_MEDIA_TYPE_BASET;
		break;
drivers/net/ethernet/intel/i40e/i40e_ethtool.c

@@ -841,8 +841,8 @@ static void i40e_get_settings_link_up(struct i40e_hw *hw,
					     10000baseT_Full);
		break;
	case I40E_PHY_TYPE_10GBASE_T:
-	case I40E_PHY_TYPE_5GBASE_T:
-	case I40E_PHY_TYPE_2_5GBASE_T:
+	case I40E_PHY_TYPE_5GBASE_T_LINK_STATUS:
+	case I40E_PHY_TYPE_2_5GBASE_T_LINK_STATUS:
	case I40E_PHY_TYPE_1000BASE_T:
	case I40E_PHY_TYPE_100BASE_TX:
		ethtool_link_ksettings_add_link_mode(ks, supported, Autoneg);

@@ -1409,7 +1409,8 @@ static int i40e_set_fec_cfg(struct net_device *netdev, u8 fec_cfg)

	memset(&config, 0, sizeof(config));
	config.phy_type = abilities.phy_type;
-	config.abilities = abilities.abilities;
+	config.abilities = abilities.abilities |
+			   I40E_AQ_PHY_ENABLE_ATOMIC_LINK;
	config.phy_type_ext = abilities.phy_type_ext;
	config.link_speed = abilities.link_speed;
	config.eee_capability = abilities.eee_capability;

@@ -5281,7 +5282,6 @@ flags_complete:
		i40e_aq_cfg_lldp_mib_change_event(&pf->hw, false, NULL);
		i40e_aq_stop_lldp(&pf->hw, true, false, NULL);
	} else {
-		i40e_set_lldp_forwarding(pf, false);
		status = i40e_aq_start_lldp(&pf->hw, false, NULL);
		if (status) {
			adq_err = pf->hw.aq.asq_last_status;
drivers/net/ethernet/intel/i40e/i40e_main.c

@@ -6879,40 +6879,6 @@ out:
 }
 #endif /* CONFIG_I40E_DCB */

-/**
- * i40e_set_lldp_forwarding - set forwarding of lldp frames
- * @pf: PF being configured
- * @enable: if forwarding to OS shall be enabled
- *
- * Toggle forwarding of lldp frames behavior,
- * When passing DCB control from firmware to software
- * lldp frames must be forwarded to the software based
- * lldp agent.
- */
-void i40e_set_lldp_forwarding(struct i40e_pf *pf, bool enable)
-{
-	if (pf->lan_vsi == I40E_NO_VSI)
-		return;
-
-	if (!pf->vsi[pf->lan_vsi])
-		return;
-
-	/* No need to check the outcome, commands may fail
-	 * if desired value is already set
-	 */
-	i40e_aq_add_rem_control_packet_filter(&pf->hw, NULL, ETH_P_LLDP,
-					      I40E_AQC_ADD_CONTROL_PACKET_FLAGS_TX |
-					      I40E_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC,
-					      pf->vsi[pf->lan_vsi]->seid, 0,
-					      enable, NULL, NULL);
-
-	i40e_aq_add_rem_control_packet_filter(&pf->hw, NULL, ETH_P_LLDP,
-					      I40E_AQC_ADD_CONTROL_PACKET_FLAGS_RX |
-					      I40E_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC,
-					      pf->vsi[pf->lan_vsi]->seid, 0,
-					      enable, NULL, NULL);
-}
-
 /**
  * i40e_print_link_message - print link up or down
  * @vsi: the VSI for which link needs a message

@@ -10736,10 +10702,6 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
	 */
	i40e_add_filter_to_drop_tx_flow_control_frames(&pf->hw,
						       pf->main_vsi_seid);
-#ifdef CONFIG_I40E_DCB
-	if (pf->flags & I40E_FLAG_DISABLE_FW_LLDP)
-		i40e_set_lldp_forwarding(pf, true);
-#endif /* CONFIG_I40E_DCB */

	/* restart the VSIs that were rebuilt and running before the reset */
	i40e_pf_unquiesce_all_vsi(pf);

@@ -15772,10 +15734,6 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
	 */
	i40e_add_filter_to_drop_tx_flow_control_frames(&pf->hw,
						       pf->main_vsi_seid);
-#ifdef CONFIG_I40E_DCB
-	if (pf->flags & I40E_FLAG_DISABLE_FW_LLDP)
-		i40e_set_lldp_forwarding(pf, true);
-#endif /* CONFIG_I40E_DCB */

	if ((pf->hw.device_id == I40E_DEV_ID_10G_BASE_T) ||
	    (pf->hw.device_id == I40E_DEV_ID_10G_BASE_T4))
drivers/net/ethernet/intel/i40e/i40e_txrx.c

@@ -1961,10 +1961,6 @@ static bool i40e_cleanup_headers(struct i40e_ring *rx_ring, struct sk_buff *skb,
				 union i40e_rx_desc *rx_desc)

 {
-	/* XDP packets use error pointer so abort at this point */
-	if (IS_ERR(skb))
-		return true;
-
	/* ERR_MASK will only have valid bits if EOP set, and
	 * what we are doing here is actually checking
	 * I40E_RX_DESC_ERROR_RXE_SHIFT, since it is the zeroth bit in

@@ -2534,7 +2530,7 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
		}

		/* exit if we failed to retrieve a buffer */
-		if (!skb) {
+		if (!xdp_res && !skb) {
			rx_ring->rx_stats.alloc_buff_failed++;
			rx_buffer->pagecnt_bias++;
			break;

@@ -2547,7 +2543,7 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
		if (i40e_is_non_eop(rx_ring, rx_desc))
			continue;

-		if (i40e_cleanup_headers(rx_ring, skb, rx_desc)) {
+		if (xdp_res || i40e_cleanup_headers(rx_ring, skb, rx_desc)) {
			skb = NULL;
			continue;
		}
|
|||
#define I40E_CAP_PHY_TYPE_25GBASE_ACC BIT_ULL(I40E_PHY_TYPE_25GBASE_ACC + \
|
||||
I40E_PHY_TYPE_OFFSET)
|
||||
/* Offset for 2.5G/5G PHY Types value to bit number conversion */
|
||||
#define I40E_PHY_TYPE_OFFSET2 (-10)
|
||||
#define I40E_CAP_PHY_TYPE_2_5GBASE_T BIT_ULL(I40E_PHY_TYPE_2_5GBASE_T + \
|
||||
I40E_PHY_TYPE_OFFSET2)
|
||||
#define I40E_CAP_PHY_TYPE_5GBASE_T BIT_ULL(I40E_PHY_TYPE_5GBASE_T + \
|
||||
I40E_PHY_TYPE_OFFSET2)
|
||||
#define I40E_CAP_PHY_TYPE_2_5GBASE_T BIT_ULL(I40E_PHY_TYPE_2_5GBASE_T)
|
||||
#define I40E_CAP_PHY_TYPE_5GBASE_T BIT_ULL(I40E_PHY_TYPE_5GBASE_T)
|
||||
#define I40E_HW_CAP_MAX_GPIO 30
|
||||
/* Capabilities of a PF or a VF or the whole device */
|
||||
struct i40e_hw_capabilities {
|
||||
|
|
|
drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c

@@ -535,6 +535,16 @@ mlxsw_sp_mr_route_evif_resolve(struct mlxsw_sp_mr_table *mr_table,
	u16 erif_index = 0;
	int err;

+	/* Add the eRIF */
+	if (mlxsw_sp_mr_vif_valid(rve->mr_vif)) {
+		erif_index = mlxsw_sp_rif_index(rve->mr_vif->rif);
+		err = mr->mr_ops->route_erif_add(mlxsw_sp,
+						 rve->mr_route->route_priv,
+						 erif_index);
+		if (err)
+			return err;
+	}
+
	/* Update the route action, as the new eVIF can be a tunnel or a pimreg
	 * device which will require updating the action.
	 */

@@ -544,17 +554,7 @@ mlxsw_sp_mr_route_evif_resolve(struct mlxsw_sp_mr_table *mr_table,
						      rve->mr_route->route_priv,
						      route_action);
		if (err)
-			return err;
-	}
-
-	/* Add the eRIF */
-	if (mlxsw_sp_mr_vif_valid(rve->mr_vif)) {
-		erif_index = mlxsw_sp_rif_index(rve->mr_vif->rif);
-		err = mr->mr_ops->route_erif_add(mlxsw_sp,
-						 rve->mr_route->route_priv,
-						 erif_index);
-		if (err)
-			goto err_route_erif_add;
+			goto err_route_action_update;
	}

	/* Update the minimum MTU */

@@ -572,14 +572,14 @@ mlxsw_sp_mr_route_evif_resolve(struct mlxsw_sp_mr_table *mr_table,
	return 0;

 err_route_min_mtu_update:
-	if (mlxsw_sp_mr_vif_valid(rve->mr_vif))
-		mr->mr_ops->route_erif_del(mlxsw_sp, rve->mr_route->route_priv,
-					   erif_index);
-err_route_erif_add:
	if (route_action != rve->mr_route->route_action)
		mr->mr_ops->route_action_update(mlxsw_sp,
						rve->mr_route->route_priv,
						rve->mr_route->route_action);
+err_route_action_update:
+	if (mlxsw_sp_mr_vif_valid(rve->mr_vif))
+		mr->mr_ops->route_erif_del(mlxsw_sp, rve->mr_route->route_priv,
+					   erif_index);
	return err;
 }
drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c

@@ -642,6 +642,7 @@ static void dwmac4_set_filter(struct mac_device_info *hw,
	value &= ~GMAC_PACKET_FILTER_PCF;
	value &= ~GMAC_PACKET_FILTER_PM;
	value &= ~GMAC_PACKET_FILTER_PR;
+	value &= ~GMAC_PACKET_FILTER_RA;
	if (dev->flags & IFF_PROMISC) {
		/* VLAN Tag Filter Fail Packets Queuing */
		if (hw->vlan_fail_q_en) {

drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c

@@ -232,7 +232,7 @@ static void dwmac4_dma_rx_chan_op_mode(void __iomem *ioaddr, int mode,
				       u32 channel, int fifosz, u8 qmode)
 {
	unsigned int rqs = fifosz / 256 - 1;
-	u32 mtl_rx_op, mtl_rx_int;
+	u32 mtl_rx_op;

	mtl_rx_op = readl(ioaddr + MTL_CHAN_RX_OP_MODE(channel));

@@ -293,11 +293,6 @@ static void dwmac4_dma_rx_chan_op_mode(void __iomem *ioaddr, int mode,
	}

	writel(mtl_rx_op, ioaddr + MTL_CHAN_RX_OP_MODE(channel));
-
-	/* Enable MTL RX overflow */
-	mtl_rx_int = readl(ioaddr + MTL_CHAN_INT_CTRL(channel));
-	writel(mtl_rx_int | MTL_RX_OVERFLOW_INT_EN,
-	       ioaddr + MTL_CHAN_INT_CTRL(channel));
 }

 static void dwmac4_dma_tx_chan_op_mode(void __iomem *ioaddr, int mode,
|
|||
#define stmmac_clean_desc3(__priv, __args...) \
|
||||
stmmac_do_void_callback(__priv, mode, clean_desc3, __args)
|
||||
|
||||
struct stmmac_priv;
|
||||
struct tc_cls_u32_offload;
|
||||
struct tc_cbs_qopt_offload;
|
||||
struct flow_cls_offload;
|
||||
|
|
|
@ -3180,6 +3180,7 @@ static int stmmac_fpe_start_wq(struct stmmac_priv *priv)
|
|||
char *name;
|
||||
|
||||
clear_bit(__FPE_TASK_SCHED, &priv->fpe_task_state);
|
||||
clear_bit(__FPE_REMOVING, &priv->fpe_task_state);
|
||||
|
||||
name = priv->wq_name;
|
||||
sprintf(name, "%s-fpe", priv->dev->name);
|
||||
|
@ -5586,7 +5587,6 @@ static void stmmac_common_interrupt(struct stmmac_priv *priv)
|
|||
/* To handle GMAC own interrupts */
|
||||
if ((priv->plat->has_gmac) || xmac) {
|
||||
int status = stmmac_host_irq_status(priv, priv->hw, &priv->xstats);
|
||||
int mtl_status;
|
||||
|
||||
if (unlikely(status)) {
|
||||
/* For LPI we need to save the tx status */
|
||||
|
@ -5597,17 +5597,8 @@ static void stmmac_common_interrupt(struct stmmac_priv *priv)
|
|||
}
|
||||
|
||||
for (queue = 0; queue < queues_count; queue++) {
|
||||
struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
|
||||
|
||||
mtl_status = stmmac_host_mtl_irq_status(priv, priv->hw,
|
||||
queue);
|
||||
if (mtl_status != -EINVAL)
|
||||
status |= mtl_status;
|
||||
|
||||
if (status & CORE_IRQ_MTL_RX_OVERFLOW)
|
||||
stmmac_set_rx_tail_ptr(priv, priv->ioaddr,
|
||||
rx_q->rx_tail_addr,
|
||||
queue);
|
||||
status = stmmac_host_mtl_irq_status(priv, priv->hw,
|
||||
queue);
|
||||
}
|
||||
|
||||
/* PCS link status */
|
||||
|
|
|
@ -211,8 +211,8 @@ static void gsi_irq_setup(struct gsi *gsi)
|
|||
iowrite32(0, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET);
|
||||
|
||||
/* The inter-EE registers are in the non-adjusted address range */
|
||||
iowrite32(0, gsi->virt_raw + GSI_INTER_EE_SRC_CH_IRQ_OFFSET);
|
||||
iowrite32(0, gsi->virt_raw + GSI_INTER_EE_SRC_EV_CH_IRQ_OFFSET);
|
||||
iowrite32(0, gsi->virt_raw + GSI_INTER_EE_SRC_CH_IRQ_MSK_OFFSET);
|
||||
iowrite32(0, gsi->virt_raw + GSI_INTER_EE_SRC_EV_CH_IRQ_MSK_OFFSET);
|
||||
|
||||
iowrite32(0, gsi->virt + GSI_CNTXT_GSI_IRQ_EN_OFFSET);
|
||||
}
|
||||
|
|
|
@ -53,15 +53,15 @@
|
|||
#define GSI_EE_REG_ADJUST 0x0000d000 /* IPA v4.5+ */
|
||||
|
||||
/* The two inter-EE IRQ register offsets are relative to gsi->virt_raw */
|
||||
#define GSI_INTER_EE_SRC_CH_IRQ_OFFSET \
|
||||
GSI_INTER_EE_N_SRC_CH_IRQ_OFFSET(GSI_EE_AP)
|
||||
#define GSI_INTER_EE_N_SRC_CH_IRQ_OFFSET(ee) \
|
||||
(0x0000c018 + 0x1000 * (ee))
|
||||
#define GSI_INTER_EE_SRC_CH_IRQ_MSK_OFFSET \
|
||||
GSI_INTER_EE_N_SRC_CH_IRQ_MSK_OFFSET(GSI_EE_AP)
|
||||
#define GSI_INTER_EE_N_SRC_CH_IRQ_MSK_OFFSET(ee) \
|
||||
(0x0000c020 + 0x1000 * (ee))
|
||||
|
||||
#define GSI_INTER_EE_SRC_EV_CH_IRQ_OFFSET \
|
||||
GSI_INTER_EE_N_SRC_EV_CH_IRQ_OFFSET(GSI_EE_AP)
|
||||
#define GSI_INTER_EE_N_SRC_EV_CH_IRQ_OFFSET(ee) \
|
||||
(0x0000c01c + 0x1000 * (ee))
|
||||
#define GSI_INTER_EE_SRC_EV_CH_IRQ_MSK_OFFSET \
|
||||
GSI_INTER_EE_N_SRC_EV_CH_IRQ_MSK_OFFSET(GSI_EE_AP)
|
||||
#define GSI_INTER_EE_N_SRC_EV_CH_IRQ_MSK_OFFSET(ee) \
|
||||
(0x0000c024 + 0x1000 * (ee))
|
||||
|
||||
/* All other register offsets are relative to gsi->virt */
|
||||
|
||||
|
|
|
drivers/net/phy/marvell.c

@@ -1088,6 +1088,38 @@ static int m88e1011_set_tunable(struct phy_device *phydev,
	}
 }

+static int m88e1112_config_init(struct phy_device *phydev)
+{
+	int err;
+
+	err = m88e1011_set_downshift(phydev, 3);
+	if (err < 0)
+		return err;
+
+	return m88e1111_config_init(phydev);
+}
+
+static int m88e1111gbe_config_init(struct phy_device *phydev)
+{
+	int err;
+
+	err = m88e1111_set_downshift(phydev, 3);
+	if (err < 0)
+		return err;
+
+	return m88e1111_config_init(phydev);
+}
+
+static int marvell_1011gbe_config_init(struct phy_device *phydev)
+{
+	int err;
+
+	err = m88e1011_set_downshift(phydev, 3);
+	if (err < 0)
+		return err;
+
+	return marvell_config_init(phydev);
+}
+
 static int m88e1116r_config_init(struct phy_device *phydev)
 {
	int err;

@@ -1168,6 +1200,9 @@ static int m88e1510_config_init(struct phy_device *phydev)
		if (err < 0)
			return err;
	}
+	err = m88e1011_set_downshift(phydev, 3);
+	if (err < 0)
+		return err;

	return m88e1318_config_init(phydev);
 }

@@ -1320,6 +1355,9 @@ static int m88e1145_config_init(struct phy_device *phydev)
		if (err < 0)
			return err;
	}
+	err = m88e1111_set_downshift(phydev, 3);
+	if (err < 0)
+		return err;

	err = marvell_of_reg_init(phydev);
	if (err < 0)

@@ -2698,7 +2736,7 @@ static struct phy_driver marvell_drivers[] = {
		.name = "Marvell 88E1112",
		/* PHY_GBIT_FEATURES */
		.probe = marvell_probe,
-		.config_init = m88e1111_config_init,
+		.config_init = m88e1112_config_init,
		.config_aneg = marvell_config_aneg,
		.config_intr = marvell_config_intr,
		.handle_interrupt = marvell_handle_interrupt,

@@ -2718,7 +2756,7 @@ static struct phy_driver marvell_drivers[] = {
		.name = "Marvell 88E1111",
		/* PHY_GBIT_FEATURES */
		.probe = marvell_probe,
-		.config_init = m88e1111_config_init,
+		.config_init = m88e1111gbe_config_init,
		.config_aneg = m88e1111_config_aneg,
		.read_status = marvell_read_status,
		.config_intr = marvell_config_intr,

@@ -2739,7 +2777,7 @@ static struct phy_driver marvell_drivers[] = {
		.name = "Marvell 88E1111 (Finisar)",
		/* PHY_GBIT_FEATURES */
		.probe = marvell_probe,
-		.config_init = m88e1111_config_init,
+		.config_init = m88e1111gbe_config_init,
		.config_aneg = m88e1111_config_aneg,
		.read_status = marvell_read_status,
		.config_intr = marvell_config_intr,

@@ -2779,7 +2817,7 @@ static struct phy_driver marvell_drivers[] = {
		.driver_data = DEF_MARVELL_HWMON_OPS(m88e1121_hwmon_ops),
		/* PHY_GBIT_FEATURES */
		.probe = marvell_probe,
-		.config_init = marvell_config_init,
+		.config_init = marvell_1011gbe_config_init,
		.config_aneg = m88e1121_config_aneg,
		.read_status = marvell_read_status,
		.config_intr = marvell_config_intr,

@@ -2859,7 +2897,7 @@ static struct phy_driver marvell_drivers[] = {
		.name = "Marvell 88E1240",
		/* PHY_GBIT_FEATURES */
		.probe = marvell_probe,
-		.config_init = m88e1111_config_init,
+		.config_init = m88e1112_config_init,
		.config_aneg = marvell_config_aneg,
		.config_intr = marvell_config_intr,
		.handle_interrupt = marvell_handle_interrupt,

@@ -2929,7 +2967,7 @@ static struct phy_driver marvell_drivers[] = {
		/* PHY_GBIT_FEATURES */
		.flags = PHY_POLL_CABLE_TEST,
		.probe = marvell_probe,
-		.config_init = marvell_config_init,
+		.config_init = marvell_1011gbe_config_init,
		.config_aneg = m88e1510_config_aneg,
		.read_status = marvell_read_status,
		.config_intr = marvell_config_intr,

@@ -2955,7 +2993,7 @@ static struct phy_driver marvell_drivers[] = {
		.probe = marvell_probe,
		/* PHY_GBIT_FEATURES */
		.flags = PHY_POLL_CABLE_TEST,
-		.config_init = marvell_config_init,
+		.config_init = marvell_1011gbe_config_init,
		.config_aneg = m88e1510_config_aneg,
		.read_status = marvell_read_status,
		.config_intr = marvell_config_intr,

@@ -3000,7 +3038,7 @@ static struct phy_driver marvell_drivers[] = {
		/* PHY_GBIT_FEATURES */
		.flags = PHY_POLL_CABLE_TEST,
		.probe = marvell_probe,
-		.config_init = marvell_config_init,
+		.config_init = marvell_1011gbe_config_init,
		.config_aneg = m88e6390_config_aneg,
		.read_status = marvell_read_status,
		.config_intr = marvell_config_intr,

@@ -3026,7 +3064,7 @@ static struct phy_driver marvell_drivers[] = {
		/* PHY_GBIT_FEATURES */
		.flags = PHY_POLL_CABLE_TEST,
		.probe = marvell_probe,
-		.config_init = marvell_config_init,
+		.config_init = marvell_1011gbe_config_init,
		.config_aneg = m88e6390_config_aneg,
		.read_status = marvell_read_status,
		.config_intr = marvell_config_intr,

@@ -3052,7 +3090,7 @@ static struct phy_driver marvell_drivers[] = {
		/* PHY_GBIT_FEATURES */
		.flags = PHY_POLL_CABLE_TEST,
		.probe = marvell_probe,
-		.config_init = marvell_config_init,
+		.config_init = marvell_1011gbe_config_init,
		.config_aneg = m88e1510_config_aneg,
		.read_status = marvell_read_status,
		.config_intr = marvell_config_intr,

@@ -3077,7 +3115,7 @@ static struct phy_driver marvell_drivers[] = {
		.driver_data = DEF_MARVELL_HWMON_OPS(m88e1510_hwmon_ops),
		.probe = marvell_probe,
		/* PHY_GBIT_FEATURES */
-		.config_init = marvell_config_init,
+		.config_init = marvell_1011gbe_config_init,
		.config_aneg = m88e1510_config_aneg,
		.read_status = marvell_read_status,
		.config_intr = marvell_config_intr,

@@ -3099,7 +3137,7 @@ static struct phy_driver marvell_drivers[] = {
		.driver_data = DEF_MARVELL_HWMON_OPS(m88e1510_hwmon_ops),
		.probe = marvell_probe,
		.features = PHY_GBIT_FIBRE_FEATURES,
-		.config_init = marvell_config_init,
+		.config_init = marvell_1011gbe_config_init,
		.config_aneg = m88e1510_config_aneg,
		.read_status = marvell_read_status,
		.config_intr = marvell_config_intr,
|
|||
|
||||
if (pad > 0) { /* Pad the frame with zeros */
|
||||
if (__skb_pad(skb, pad, false))
|
||||
goto out;
|
||||
goto drop;
|
||||
skb_put(skb, pad);
|
||||
}
|
||||
}
|
||||
|
@ -448,9 +448,8 @@ static netdev_tx_t pvc_xmit(struct sk_buff *skb, struct net_device *dev)
|
|||
return NETDEV_TX_OK;
|
||||
|
||||
drop:
|
||||
kfree_skb(skb);
|
||||
out:
|
||||
dev->stats.tx_dropped++;
|
||||
kfree_skb(skb);
|
||||
return NETDEV_TX_OK;
|
||||
}
|
||||
|
||||
|
|
|
include/linux/bpf_verifier.h

@@ -302,10 +302,11 @@ struct bpf_verifier_state_list {
 };

 /* Possible states for alu_state member. */
-#define BPF_ALU_SANITIZE_SRC	1U
-#define BPF_ALU_SANITIZE_DST	2U
+#define BPF_ALU_SANITIZE_SRC	(1U << 0)
+#define BPF_ALU_SANITIZE_DST	(1U << 1)
 #define BPF_ALU_NEG_VALUE	(1U << 2)
 #define BPF_ALU_NON_POINTER	(1U << 3)
+#define BPF_ALU_IMMEDIATE	(1U << 4)
 #define BPF_ALU_SANITIZE	(BPF_ALU_SANITIZE_SRC | \
				 BPF_ALU_SANITIZE_DST)
include/linux/netfilter_arp/arp_tables.h

@@ -53,8 +53,7 @@ int arpt_register_table(struct net *net, const struct xt_table *table,
			const struct arpt_replace *repl,
			const struct nf_hook_ops *ops);
 void arpt_unregister_table(struct net *net, const char *name);
-void arpt_unregister_table_pre_exit(struct net *net, const char *name,
-				    const struct nf_hook_ops *ops);
+void arpt_unregister_table_pre_exit(struct net *net, const char *name);
 extern unsigned int arpt_do_table(struct sk_buff *skb,
				  const struct nf_hook_state *state,
				  struct xt_table *table);
include/net/sctp/command.h

@@ -68,7 +68,6 @@ enum sctp_verb {
	SCTP_CMD_ASSOC_FAILED,	 /* Handle association failure. */
	SCTP_CMD_DISCARD_PACKET, /* Discard the whole packet. */
	SCTP_CMD_GEN_SHUTDOWN,	 /* Generate a SHUTDOWN chunk. */
-	SCTP_CMD_UPDATE_ASSOC,   /* Update association information. */
	SCTP_CMD_PURGE_OUTQUEUE, /* Purge all data waiting to be sent. */
	SCTP_CMD_SETUP_T2,       /* Hi-level, setup T2-shutdown parms. */
	SCTP_CMD_RTO_PENDING,	 /* Set transport's rto_pending. */
include/uapi/linux/netfilter/xt_SECMARK.h

@@ -20,4 +20,10 @@ struct xt_secmark_target_info {
	char secctx[SECMARK_SECCTX_MAX];
 };

+struct xt_secmark_target_info_v1 {
+	__u8 mode;
+	char secctx[SECMARK_SECCTX_MAX];
+	__u32 secid;
+};
+
 #endif /*_XT_SECMARK_H_target */
include/uapi/linux/seg6_local.h

@@ -27,6 +27,7 @@ enum {
	SEG6_LOCAL_OIF,
	SEG6_LOCAL_BPF,
	SEG6_LOCAL_VRFTABLE,
+	SEG6_LOCAL_COUNTERS,
	__SEG6_LOCAL_MAX,
 };
 #define SEG6_LOCAL_MAX (__SEG6_LOCAL_MAX - 1)

@@ -78,4 +79,33 @@ enum {

 #define SEG6_LOCAL_BPF_PROG_MAX (__SEG6_LOCAL_BPF_PROG_MAX - 1)

+/* SRv6 Behavior counters are encoded as netlink attributes guaranteeing the
+ * correct alignment.
+ * Each counter is identified by a different attribute type (i.e.
+ * SEG6_LOCAL_CNT_PACKETS).
+ *
+ * - SEG6_LOCAL_CNT_PACKETS: identifies a counter that counts the number of
+ *   packets that have been CORRECTLY processed by an SRv6 Behavior instance
+ *   (i.e., packets that generate errors or are dropped are NOT counted).
+ *
+ * - SEG6_LOCAL_CNT_BYTES: identifies a counter that counts the total amount
+ *   of traffic in bytes of all packets that have been CORRECTLY processed by
+ *   an SRv6 Behavior instance (i.e., packets that generate errors or are
+ *   dropped are NOT counted).
+ *
+ * - SEG6_LOCAL_CNT_ERRORS: identifies a counter that counts the number of
+ *   packets that have NOT been properly processed by an SRv6 Behavior instance
+ *   (i.e., packets that generate errors or are dropped).
+ */
+enum {
+	SEG6_LOCAL_CNT_UNSPEC,
+	SEG6_LOCAL_CNT_PAD,	/* pad for 64 bits values */
+	SEG6_LOCAL_CNT_PACKETS,
+	SEG6_LOCAL_CNT_BYTES,
+	SEG6_LOCAL_CNT_ERRORS,
+	__SEG6_LOCAL_CNT_MAX,
+};
+
+#define SEG6_LOCAL_CNT_MAX (__SEG6_LOCAL_CNT_MAX - 1)
+
 #endif
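The implementation side (net/ipv6/seg6_local.c, further down) keeps
these counters per-CPU and folds them into a single total before filling
the attributes defined here. A sketch of that folding under the usual
u64_stats pattern (fold_counters is an illustrative name, not a function
from the patch):

    static void fold_counters(struct seg6_local_lwt *slwt,
                              struct seg6_local_counters *out)
    {
        int cpu;

        for_each_possible_cpu(cpu) {
            struct pcpu_seg6_local_counters *p =
                    per_cpu_ptr(slwt->pcpu_counters, cpu);
            unsigned int start;
            u64 packets, bytes, errors;

            do {    /* retry if a writer updated the counters meanwhile */
                start = u64_stats_fetch_begin(&p->syncp);
                packets = u64_stats_read(&p->packets);
                bytes = u64_stats_read(&p->bytes);
                errors = u64_stats_read(&p->errors);
            } while (u64_stats_fetch_retry(&p->syncp, start));

            out->packets += packets;
            out->bytes += bytes;
            out->errors += errors;
        }
    }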
kernel/bpf/verifier.c

@@ -6496,6 +6496,7 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
 {
	struct bpf_insn_aux_data *aux = commit_window ? cur_aux(env) : tmp_aux;
	struct bpf_verifier_state *vstate = env->cur_state;
+	bool off_is_imm = tnum_is_const(off_reg->var_off);
	bool off_is_neg = off_reg->smin_value < 0;
	bool ptr_is_dst_reg = ptr_reg == dst_reg;
	u8 opcode = BPF_OP(insn->code);

@@ -6526,6 +6527,7 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
		alu_limit = abs(tmp_aux->alu_limit - alu_limit);
	} else {
		alu_state  = off_is_neg ? BPF_ALU_NEG_VALUE : 0;
+		alu_state |= off_is_imm ? BPF_ALU_IMMEDIATE : 0;
		alu_state |= ptr_is_dst_reg ?
			     BPF_ALU_SANITIZE_SRC : BPF_ALU_SANITIZE_DST;
	}

@@ -12371,7 +12373,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
			const u8 code_add = BPF_ALU64 | BPF_ADD | BPF_X;
			const u8 code_sub = BPF_ALU64 | BPF_SUB | BPF_X;
			struct bpf_insn *patch = &insn_buf[0];
-			bool issrc, isneg;
+			bool issrc, isneg, isimm;
			u32 off_reg;

			aux = &env->insn_aux_data[i + delta];

@@ -12382,28 +12384,29 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
			isneg = aux->alu_state & BPF_ALU_NEG_VALUE;
			issrc = (aux->alu_state & BPF_ALU_SANITIZE) ==
				BPF_ALU_SANITIZE_SRC;
+			isimm = aux->alu_state & BPF_ALU_IMMEDIATE;

			off_reg = issrc ? insn->src_reg : insn->dst_reg;
-			if (isneg)
-				*patch++ = BPF_ALU64_IMM(BPF_MUL, off_reg, -1);
-			*patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit);
-			*patch++ = BPF_ALU64_REG(BPF_SUB, BPF_REG_AX, off_reg);
-			*patch++ = BPF_ALU64_REG(BPF_OR, BPF_REG_AX, off_reg);
-			*patch++ = BPF_ALU64_IMM(BPF_NEG, BPF_REG_AX, 0);
-			*patch++ = BPF_ALU64_IMM(BPF_ARSH, BPF_REG_AX, 63);
-			if (issrc) {
-				*patch++ = BPF_ALU64_REG(BPF_AND, BPF_REG_AX,
-							 off_reg);
-				insn->src_reg = BPF_REG_AX;
+			if (isimm) {
+				*patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit);
			} else {
-				*patch++ = BPF_ALU64_REG(BPF_AND, off_reg,
-							 BPF_REG_AX);
+				if (isneg)
+					*patch++ = BPF_ALU64_IMM(BPF_MUL, off_reg, -1);
+				*patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit);
+				*patch++ = BPF_ALU64_REG(BPF_SUB, BPF_REG_AX, off_reg);
+				*patch++ = BPF_ALU64_REG(BPF_OR, BPF_REG_AX, off_reg);
+				*patch++ = BPF_ALU64_IMM(BPF_NEG, BPF_REG_AX, 0);
+				*patch++ = BPF_ALU64_IMM(BPF_ARSH, BPF_REG_AX, 63);
+				*patch++ = BPF_ALU64_REG(BPF_AND, BPF_REG_AX, off_reg);
			}
+			if (!issrc)
+				*patch++ = BPF_MOV64_REG(insn->dst_reg, insn->src_reg);
+			insn->src_reg = BPF_REG_AX;
			if (isneg)
				insn->code = insn->code == code_add ?
					     code_sub : code_add;
			*patch++ = *insn;
-			if (issrc && isneg)
+			if (issrc && isneg && !isimm)
				*patch++ = BPF_ALU64_IMM(BPF_MUL, off_reg, -1);
			cnt = patch - insn_buf;
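The rewritten patchlet emits the Spectre-style masking sequence only
when the offset is a variable; for a constant offset (BPF_ALU_IMMEDIATE)
the verifier-computed limit is loaded directly and the multi-instruction
dance is skipped. A user-space model of what the masking computes (a
sketch of the ALU sequence, not the exact emitted BPF):

    #include <stdint.h>

    /* Returns off if 0 <= off <= limit, else 0 -- mirroring the
     * SUB/OR/NEG/ARSH/AND sequence the verifier patches in.
     */
    static uint64_t mask_offset(uint64_t off, uint64_t limit)
    {
        int64_t ax = (int64_t)(limit - off);  /* BPF_SUB */

        ax |= (int64_t)off;   /* BPF_OR: sign bit set if off is out of range */
        ax = -ax;             /* BPF_NEG */
        ax >>= 63;            /* BPF_ARSH: all-ones when offset is in range */
        return off & (uint64_t)ax;            /* BPF_AND */
    }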
lib/nlattr.c

@@ -828,7 +828,7 @@ int nla_strcmp(const struct nlattr *nla, const char *str)
	int attrlen = nla_len(nla);
	int d;

-	if (attrlen > 0 && buf[attrlen - 1] == '\0')
+	while (attrlen > 0 && buf[attrlen - 1] == '\0')
		attrlen--;

	d = attrlen - len;
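A quick user-space check of the semantics change (a sketch of
nla_strcmp's comparison logic, not the kernel function itself): a single
`if` strips at most one trailing NUL, so an attribute padded with
"tcp\0\0" would not compare equal to "tcp"; the `while` strips them all.

    #include <stdio.h>
    #include <string.h>

    static int strcmp_attr(const char *buf, int attrlen, const char *str)
    {
        int len = strlen(str);
        int d;

        while (attrlen > 0 && buf[attrlen - 1] == '\0')   /* the fix */
            attrlen--;

        d = attrlen - len;
        if (d == 0)
            d = memcmp(buf, str, len);
        return d;
    }

    int main(void)
    {
        /* attribute payload "tcp" followed by two trailing NULs */
        printf("%d\n", strcmp_attr("tcp\0\0", 5, "tcp")); /* prints 0 */
        return 0;
    }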
net/bridge/br_netlink.c

@@ -103,8 +103,9 @@ static size_t br_get_link_af_size_filtered(const struct net_device *dev,

	rcu_read_lock();
	if (netif_is_bridge_port(dev)) {
-		p = br_port_get_rcu(dev);
-		vg = nbp_vlan_group_rcu(p);
+		p = br_port_get_check_rcu(dev);
+		if (p)
+			vg = nbp_vlan_group_rcu(p);
	} else if (dev->priv_flags & IFF_EBRIDGE) {
		br = netdev_priv(dev);
		vg = br_vlan_group_rcu(br);
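The race here is the one named in the pull summary: IFF_BRIDGE_PORT is
set before rx_handler_data is assigned, so a reader can observe the flag
while the port pointer is still NULL and the old unconditional
nbp_vlan_group_rcu(p) dereferenced NULL. Using the checked accessor plus
the if (p) test closes that window.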
net/ethtool/netlink.c

@@ -387,7 +387,8 @@ static int ethnl_default_dump_one(struct sk_buff *skb, struct net_device *dev,
	int ret;

	ehdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
-			   &ethtool_genl_family, 0, ctx->ops->reply_cmd);
+			   &ethtool_genl_family, NLM_F_MULTI,
+			   ctx->ops->reply_cmd);
	if (!ehdr)
		return -EMSGSIZE;
net/hsr/hsr_forward.c

@@ -520,6 +520,10 @@ static int fill_frame_info(struct hsr_frame_info *frame,
	struct ethhdr *ethhdr;
	__be16 proto;

+	/* Check if skb contains hsr_ethhdr */
+	if (skb->mac_len < sizeof(struct hsr_ethhdr))
+		return -EINVAL;
+
	memset(frame, 0, sizeof(*frame));
	frame->is_supervision = is_supervision_frame(port->hsr, skb);
	frame->node_src = hsr_get_node(port, &hsr->node_db, skb,
net/ipv4/netfilter/arp_tables.c

@@ -1556,13 +1556,12 @@ out_free:
	return ret;
 }

-void arpt_unregister_table_pre_exit(struct net *net, const char *name,
-				    const struct nf_hook_ops *ops)
+void arpt_unregister_table_pre_exit(struct net *net, const char *name)
 {
	struct xt_table *table = xt_find_table(net, NFPROTO_ARP, name);

	if (table)
-		nf_unregister_net_hooks(net, ops, hweight32(table->valid_hooks));
+		nf_unregister_net_hooks(net, table->ops, hweight32(table->valid_hooks));
 }
 EXPORT_SYMBOL(arpt_unregister_table_pre_exit);

net/ipv4/netfilter/arptable_filter.c

@@ -54,7 +54,7 @@ static int __net_init arptable_filter_table_init(struct net *net)

 static void __net_exit arptable_filter_net_pre_exit(struct net *net)
 {
-	arpt_unregister_table_pre_exit(net, "filter", arpfilter_ops);
+	arpt_unregister_table_pre_exit(net, "filter");
 }

 static void __net_exit arptable_filter_net_exit(struct net *net)
net/ipv4/tcp.c

@@ -2039,6 +2039,7 @@ static void tcp_zc_finalize_rx_tstamp(struct sock *sk,
			(__kernel_size_t)zc->msg_controllen;
		cmsg_dummy.msg_flags = in_compat_syscall()
					? MSG_CMSG_COMPAT : 0;
+		cmsg_dummy.msg_control_is_user = true;
		zc->msg_flags = 0;
		if (zc->msg_control == msg_control_addr &&
		    zc->msg_controllen == cmsg_dummy.msg_controllen) {
net/ipv4/tcp_cong.c

@@ -230,6 +230,10 @@ int tcp_set_default_congestion_control(struct net *net, const char *name)
		ret = -ENOENT;
	} else if (!bpf_try_module_get(ca, ca->owner)) {
		ret = -EBUSY;
+	} else if (!net_eq(net, &init_net) &&
+		   !(ca->flags & TCP_CONG_NON_RESTRICTED)) {
+		/* Only init netns can set default to a restricted algorithm */
+		ret = -EPERM;
	} else {
		prev = xchg(&net->ipv4.tcp_congestion_control, ca);
		if (prev)
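Concretely: inside a non-init network namespace, writing
net.ipv4.tcp_congestion_control now fails with EPERM unless the chosen
module is flagged TCP_CONG_NON_RESTRICTED (as reno and cubic are);
before this check a container could switch its default to a restricted
algorithm.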
net/ipv6/seg6.c

@@ -122,9 +122,6 @@ static int seg6_genl_sethmac(struct sk_buff *skb, struct genl_info *info)
	hinfo = seg6_hmac_info_lookup(net, hmackeyid);

	if (!slen) {
-		if (!hinfo)
-			err = -ENOENT;
-
		err = seg6_hmac_info_del(net, hmackeyid);

		goto out_unlock;
net/ipv6/seg6_local.c

@@ -93,6 +93,35 @@ struct seg6_end_dt_info {
	int hdrlen;
 };

+struct pcpu_seg6_local_counters {
+	u64_stats_t packets;
+	u64_stats_t bytes;
+	u64_stats_t errors;
+
+	struct u64_stats_sync syncp;
+};
+
+/* This struct groups all the SRv6 Behavior counters supported so far.
+ *
+ * put_nla_counters() makes use of this data structure to collect all counter
+ * values after the per-CPU counter evaluation has been performed.
+ * Finally, each counter value (in seg6_local_counters) is stored in the
+ * corresponding netlink attribute and sent to user space.
+ *
+ * NB: we don't want to expose this structure to user space!
+ */
+struct seg6_local_counters {
+	__u64 packets;
+	__u64 bytes;
+	__u64 errors;
+};
+
+#define seg6_local_alloc_pcpu_counters(__gfp) \
+	__netdev_alloc_pcpu_stats(struct pcpu_seg6_local_counters, \
+				  ((__gfp) | __GFP_ZERO))
+
+#define SEG6_F_LOCAL_COUNTERS	SEG6_F_ATTR(SEG6_LOCAL_COUNTERS)
+
 struct seg6_local_lwt {
	int action;
	struct ipv6_sr_hdr *srh;

@@ -105,6 +134,7 @@ struct seg6_local_lwt {
 #ifdef CONFIG_NET_L3_MASTER_DEV
	struct seg6_end_dt_info dt_info;
 #endif
+	struct pcpu_seg6_local_counters __percpu *pcpu_counters;

	int headroom;
	struct seg6_action_desc *desc;

@@ -878,36 +908,43 @@ static struct seg6_action_desc seg6_action_table[] = {
	{
		.action		= SEG6_LOCAL_ACTION_END,
		.attrs		= 0,
+		.optattrs	= SEG6_F_LOCAL_COUNTERS,
		.input		= input_action_end,
	},
	{
		.action		= SEG6_LOCAL_ACTION_END_X,
		.attrs		= SEG6_F_ATTR(SEG6_LOCAL_NH6),
+		.optattrs	= SEG6_F_LOCAL_COUNTERS,
		.input		= input_action_end_x,
	},
	{
		.action		= SEG6_LOCAL_ACTION_END_T,
		.attrs		= SEG6_F_ATTR(SEG6_LOCAL_TABLE),
+		.optattrs	= SEG6_F_LOCAL_COUNTERS,
		.input		= input_action_end_t,
	},
	{
		.action		= SEG6_LOCAL_ACTION_END_DX2,
		.attrs		= SEG6_F_ATTR(SEG6_LOCAL_OIF),
+		.optattrs	= SEG6_F_LOCAL_COUNTERS,
		.input		= input_action_end_dx2,
	},
	{
		.action		= SEG6_LOCAL_ACTION_END_DX6,
		.attrs		= SEG6_F_ATTR(SEG6_LOCAL_NH6),
+		.optattrs	= SEG6_F_LOCAL_COUNTERS,
		.input		= input_action_end_dx6,
	},
	{
		.action		= SEG6_LOCAL_ACTION_END_DX4,
		.attrs		= SEG6_F_ATTR(SEG6_LOCAL_NH4),
+		.optattrs	= SEG6_F_LOCAL_COUNTERS,
		.input		= input_action_end_dx4,
	},
	{
		.action		= SEG6_LOCAL_ACTION_END_DT4,
		.attrs		= SEG6_F_ATTR(SEG6_LOCAL_VRFTABLE),
+		.optattrs	= SEG6_F_LOCAL_COUNTERS,
 #ifdef CONFIG_NET_L3_MASTER_DEV
		.input		= input_action_end_dt4,
		.slwt_ops	= {

@@ -919,30 +956,35 @@ static struct seg6_action_desc seg6_action_table[] = {
		.action		= SEG6_LOCAL_ACTION_END_DT6,
 #ifdef CONFIG_NET_L3_MASTER_DEV
		.attrs		= 0,
-		.optattrs	= SEG6_F_ATTR(SEG6_LOCAL_TABLE) |
+		.optattrs	= SEG6_F_LOCAL_COUNTERS		|
+				  SEG6_F_ATTR(SEG6_LOCAL_TABLE) |
				  SEG6_F_ATTR(SEG6_LOCAL_VRFTABLE),
		.slwt_ops	= {
			.build_state = seg6_end_dt6_build,
		},
 #else
		.attrs		= SEG6_F_ATTR(SEG6_LOCAL_TABLE),
+		.optattrs	= SEG6_F_LOCAL_COUNTERS,
 #endif
		.input		= input_action_end_dt6,
	},
	{
		.action		= SEG6_LOCAL_ACTION_END_B6,
		.attrs		= SEG6_F_ATTR(SEG6_LOCAL_SRH),
+		.optattrs	= SEG6_F_LOCAL_COUNTERS,
		.input		= input_action_end_b6,
	},
	{
		.action		= SEG6_LOCAL_ACTION_END_B6_ENCAP,
		.attrs		= SEG6_F_ATTR(SEG6_LOCAL_SRH),
+		.optattrs	= SEG6_F_LOCAL_COUNTERS,
		.input		= input_action_end_b6_encap,
		.static_headroom = sizeof(struct ipv6hdr),
	},
	{
		.action		= SEG6_LOCAL_ACTION_END_BPF,
		.attrs		= SEG6_F_ATTR(SEG6_LOCAL_BPF),
+		.optattrs	= SEG6_F_LOCAL_COUNTERS,
		.input		= input_action_end_bpf,
	},

@@ -963,11 +1005,36 @@ static struct seg6_action_desc *__get_action_desc(int action)
	return NULL;
 }

+static bool seg6_lwtunnel_counters_enabled(struct seg6_local_lwt *slwt)
+{
+	return slwt->parsed_optattrs & SEG6_F_LOCAL_COUNTERS;
+}
+
+static void seg6_local_update_counters(struct seg6_local_lwt *slwt,
+				       unsigned int len, int err)
+{
+	struct pcpu_seg6_local_counters *pcounters;
+
+	pcounters = this_cpu_ptr(slwt->pcpu_counters);
+	u64_stats_update_begin(&pcounters->syncp);
+
+	if (likely(!err)) {
+		u64_stats_inc(&pcounters->packets);
+		u64_stats_add(&pcounters->bytes, len);
+	} else {
+		u64_stats_inc(&pcounters->errors);
+	}
+
+	u64_stats_update_end(&pcounters->syncp);
+}
+
 static int seg6_local_input(struct sk_buff *skb)
 {
	struct dst_entry *orig_dst = skb_dst(skb);
	struct seg6_action_desc *desc;
	struct seg6_local_lwt *slwt;
+	unsigned int len = skb->len;
+	int rc;

	if (skb->protocol != htons(ETH_P_IPV6)) {
		kfree_skb(skb);

@@ -977,7 +1044,14 @@ static int seg6_local_input(struct sk_buff *skb)
	slwt = seg6_local_lwtunnel(orig_dst->lwtstate);
	desc = slwt->desc;

-	return desc->input(skb, slwt);
+	rc = desc->input(skb, slwt);
+
+	if (!seg6_lwtunnel_counters_enabled(slwt))
+		return rc;
+
+	seg6_local_update_counters(slwt, len, rc);
+
+	return rc;
 }

 static const struct nla_policy seg6_local_policy[SEG6_LOCAL_MAX + 1] = {

@@ -992,6 +1066,7 @@ static const struct nla_policy seg6_local_policy[SEG6_LOCAL_MAX + 1] = {
	[SEG6_LOCAL_IIF]	= { .type = NLA_U32 },
	[SEG6_LOCAL_OIF]	= { .type = NLA_U32 },
	[SEG6_LOCAL_BPF]	= { .type = NLA_NESTED },
+	[SEG6_LOCAL_COUNTERS]	= { .type = NLA_NESTED },
 };

 static int parse_nla_srh(struct nlattr **attrs, struct seg6_local_lwt *slwt)

@@ -1296,6 +1371,112 @@ static void destroy_attr_bpf(struct seg6_local_lwt *slwt)
	bpf_prog_put(slwt->bpf.prog);
 }

+static const struct
+nla_policy seg6_local_counters_policy[SEG6_LOCAL_CNT_MAX + 1] = {
+	[SEG6_LOCAL_CNT_PACKETS]	= { .type = NLA_U64 },
+	[SEG6_LOCAL_CNT_BYTES]		= { .type = NLA_U64 },
+	[SEG6_LOCAL_CNT_ERRORS]		= { .type = NLA_U64 },
+};
+
+static int parse_nla_counters(struct nlattr **attrs,
+			      struct seg6_local_lwt *slwt)
+{
+	struct pcpu_seg6_local_counters __percpu *pcounters;
+	struct nlattr *tb[SEG6_LOCAL_CNT_MAX + 1];
+	int ret;
+
+	ret = nla_parse_nested_deprecated(tb, SEG6_LOCAL_CNT_MAX,
+					  attrs[SEG6_LOCAL_COUNTERS],
+					  seg6_local_counters_policy, NULL);
+	if (ret < 0)
+		return ret;
+
+	/* basic support for SRv6 Behavior counters requires at least:
+	 * packets, bytes and errors.
+	 */
+	if (!tb[SEG6_LOCAL_CNT_PACKETS] || !tb[SEG6_LOCAL_CNT_BYTES] ||
+	    !tb[SEG6_LOCAL_CNT_ERRORS])
+		return -EINVAL;
+
+	/* counters are always zero initialized */
+	pcounters = seg6_local_alloc_pcpu_counters(GFP_KERNEL);
+	if (!pcounters)
+		return -ENOMEM;
+
+	slwt->pcpu_counters = pcounters;
+
+	return 0;
+}
+
+static int seg6_local_fill_nla_counters(struct sk_buff *skb,
+					struct seg6_local_counters *counters)
+{
+	if (nla_put_u64_64bit(skb, SEG6_LOCAL_CNT_PACKETS, counters->packets,
+			      SEG6_LOCAL_CNT_PAD))
+		return -EMSGSIZE;
+
+	if (nla_put_u64_64bit(skb, SEG6_LOCAL_CNT_BYTES, counters->bytes,
+			      SEG6_LOCAL_CNT_PAD))
+		return -EMSGSIZE;
+
+	if (nla_put_u64_64bit(skb, SEG6_LOCAL_CNT_ERRORS, counters->errors,
+			      SEG6_LOCAL_CNT_PAD))
+		return -EMSGSIZE;
+
+	return 0;
+}
static int put_nla_counters(struct sk_buff *skb, struct seg6_local_lwt *slwt)
|
||||
{
|
||||
struct seg6_local_counters counters = { 0, 0, 0 };
|
||||
struct nlattr *nest;
|
||||
int rc, i;
|
||||
|
||||
nest = nla_nest_start(skb, SEG6_LOCAL_COUNTERS);
|
||||
if (!nest)
|
||||
return -EMSGSIZE;
|
||||
|
||||
for_each_possible_cpu(i) {
|
||||
struct pcpu_seg6_local_counters *pcounters;
|
||||
u64 packets, bytes, errors;
|
||||
unsigned int start;
|
||||
|
||||
pcounters = per_cpu_ptr(slwt->pcpu_counters, i);
|
||||
do {
|
||||
start = u64_stats_fetch_begin_irq(&pcounters->syncp);
|
||||
|
||||
packets = u64_stats_read(&pcounters->packets);
|
||||
bytes = u64_stats_read(&pcounters->bytes);
|
||||
errors = u64_stats_read(&pcounters->errors);
|
||||
|
||||
} while (u64_stats_fetch_retry_irq(&pcounters->syncp, start));
|
||||
|
||||
counters.packets += packets;
|
||||
counters.bytes += bytes;
|
||||
counters.errors += errors;
|
||||
}
|
||||
|
||||
rc = seg6_local_fill_nla_counters(skb, &counters);
|
||||
if (rc < 0) {
|
||||
nla_nest_cancel(skb, nest);
|
||||
return rc;
|
||||
}
|
||||
|
||||
return nla_nest_end(skb, nest);
|
||||
}
|
||||
|
||||
static int cmp_nla_counters(struct seg6_local_lwt *a, struct seg6_local_lwt *b)
|
||||
{
|
||||
/* a and b are equal if both have pcpu_counters set or not */
|
||||
return (!!((unsigned long)a->pcpu_counters)) ^
|
||||
(!!((unsigned long)b->pcpu_counters));
|
||||
}
|
||||
|
||||
static void destroy_attr_counters(struct seg6_local_lwt *slwt)
|
||||
{
|
||||
free_percpu(slwt->pcpu_counters);
|
||||
}
|
||||
|
||||
struct seg6_action_param {
|
||||
int (*parse)(struct nlattr **attrs, struct seg6_local_lwt *slwt);
|
||||
int (*put)(struct sk_buff *skb, struct seg6_local_lwt *slwt);
|
||||
|
@ -1343,6 +1524,10 @@ static struct seg6_action_param seg6_action_params[SEG6_LOCAL_MAX + 1] = {
|
|||
.put = put_nla_vrftable,
|
||||
.cmp = cmp_nla_vrftable },
|
||||
|
||||
[SEG6_LOCAL_COUNTERS] = { .parse = parse_nla_counters,
|
||||
.put = put_nla_counters,
|
||||
.cmp = cmp_nla_counters,
|
||||
.destroy = destroy_attr_counters },
|
||||
};
|
||||
|
||||
/* call the destroy() callback (if available) for each set attribute in
|
||||
|
@ -1645,6 +1830,15 @@ static int seg6_local_get_encap_size(struct lwtunnel_state *lwt)
|
|||
if (attrs & SEG6_F_ATTR(SEG6_LOCAL_VRFTABLE))
|
||||
nlsize += nla_total_size(4);
|
||||
|
||||
if (attrs & SEG6_F_LOCAL_COUNTERS)
|
||||
nlsize += nla_total_size(0) + /* nest SEG6_LOCAL_COUNTERS */
|
||||
/* SEG6_LOCAL_CNT_PACKETS */
|
||||
nla_total_size_64bit(sizeof(__u64)) +
|
||||
/* SEG6_LOCAL_CNT_BYTES */
|
||||
nla_total_size_64bit(sizeof(__u64)) +
|
||||
/* SEG6_LOCAL_CNT_ERRORS */
|
||||
nla_total_size_64bit(sizeof(__u64));
|
||||
|
||||
return nlsize;
|
||||
}
|
||||
|
||||
|
|
|
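The counter machinery above is the kernel's u64_stats per-CPU pattern: the forwarding path only touches its own CPU's counters inside a u64_stats_update_begin()/end() writer section, and the netlink dump sums every CPU under a seqcount retry loop so 32-bit readers never observe torn 64-bit values. A minimal sketch of that writer/reader pairing follows; the names (my_counters, my_count_one, my_sum) are hypothetical illustrations, not mainline code:

	/* Hypothetical sketch of the u64_stats per-CPU pattern used above. */
	#include <linux/percpu.h>
	#include <linux/cpumask.h>
	#include <linux/u64_stats_sync.h>

	struct my_counters {
		u64_stats_t packets;
		struct u64_stats_sync syncp;
	};

	static void my_count_one(struct my_counters __percpu *stats)
	{
		struct my_counters *c = this_cpu_ptr(stats);

		u64_stats_update_begin(&c->syncp);	/* writer side, per CPU */
		u64_stats_inc(&c->packets);
		u64_stats_update_end(&c->syncp);
	}

	static u64 my_sum(struct my_counters __percpu *stats)
	{
		u64 total = 0;
		int cpu;

		for_each_possible_cpu(cpu) {
			struct my_counters *c = per_cpu_ptr(stats, cpu);
			unsigned int start;
			u64 v;

			do {	/* retry if a writer raced with us (matters on 32 bit) */
				start = u64_stats_fetch_begin_irq(&c->syncp);
				v = u64_stats_read(&c->packets);
			} while (u64_stats_fetch_retry_irq(&c->syncp, start));
			total += v;
		}
		return total;
	}

On 64-bit hosts the seqcount collapses to nothing; the cost is paid only where a 64-bit load can tear.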
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -546,8 +546,7 @@ static void mptcp_sock_destruct(struct sock *sk)
 	 * ESTABLISHED state and will not have the SOCK_DEAD flag.
 	 * Both result in warnings from inet_sock_destruct.
 	 */
-
-	if (sk->sk_state == TCP_ESTABLISHED) {
+	if ((1 << sk->sk_state) & (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT)) {
 		sk->sk_state = TCP_CLOSE;
 		WARN_ON_ONCE(sk->sk_socket);
 		sock_orphan(sk);
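The one-line change swaps an equality test for the kernel's state-mask idiom, so unaccepted sockets in either ESTABLISHED or CLOSE_WAIT are reset before destruction. A standalone illustration of the idiom, with the two constants copied from include/net/tcp_states.h (userspace-only demo, not kernel code):

	#include <stdio.h>

	enum { TCP_ESTABLISHED = 1, TCP_CLOSE_WAIT = 8 };	/* as in tcp_states.h */
	#define TCPF_ESTABLISHED (1 << TCP_ESTABLISHED)
	#define TCPF_CLOSE_WAIT  (1 << TCP_CLOSE_WAIT)

	static int matches(int state)
	{
		/* true for any state whose bit is set in the mask */
		return !!((1 << state) & (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT));
	}

	int main(void)
	{
		printf("ESTABLISHED=%d CLOSE_WAIT=%d other=%d\n",
		       matches(TCP_ESTABLISHED), matches(TCP_CLOSE_WAIT), matches(7));
		return 0;
	}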
--- a/net/netfilter/nf_conntrack_ftp.c
+++ b/net/netfilter/nf_conntrack_ftp.c
@@ -413,7 +413,10 @@ static int help(struct sk_buff *skb,
 
 	spin_lock_bh(&nf_ftp_lock);
 	fb_ptr = skb_header_pointer(skb, dataoff, datalen, ftp_buffer);
-	BUG_ON(fb_ptr == NULL);
+	if (!fb_ptr) {
+		spin_unlock_bh(&nf_ftp_lock);
+		return NF_ACCEPT;
+	}
 
 	ends_in_nl = (fb_ptr[datalen - 1] == '\n');
 	seq = ntohl(th->seq) + datalen;
--- a/net/netfilter/nf_conntrack_h323_main.c
+++ b/net/netfilter/nf_conntrack_h323_main.c
@@ -146,7 +146,8 @@ static int get_tpkt_data(struct sk_buff *skb, unsigned int protoff,
 		/* Get first TPKT pointer */
 		tpkt = skb_header_pointer(skb, tcpdataoff, tcpdatalen,
 					  h323_buffer);
-		BUG_ON(tpkt == NULL);
+		if (!tpkt)
+			goto clear_out;
 
 		/* Validate TPKT identifier */
 		if (tcpdatalen < 4 || tpkt[0] != 0x03 || tpkt[1] != 0) {
--- a/net/netfilter/nf_conntrack_irc.c
+++ b/net/netfilter/nf_conntrack_irc.c
@@ -143,7 +143,10 @@ static int help(struct sk_buff *skb, unsigned int protoff,
 	spin_lock_bh(&irc_buffer_lock);
 	ib_ptr = skb_header_pointer(skb, dataoff, skb->len - dataoff,
 				    irc_buffer);
-	BUG_ON(ib_ptr == NULL);
+	if (!ib_ptr) {
+		spin_unlock_bh(&irc_buffer_lock);
+		return NF_ACCEPT;
+	}
 
 	data = ib_ptr;
 	data_limit = ib_ptr + skb->len - dataoff;
--- a/net/netfilter/nf_conntrack_pptp.c
+++ b/net/netfilter/nf_conntrack_pptp.c
@@ -544,7 +544,9 @@ conntrack_pptp_help(struct sk_buff *skb, unsigned int protoff,
 
 	nexthdr_off = protoff;
 	tcph = skb_header_pointer(skb, nexthdr_off, sizeof(_tcph), &_tcph);
-	BUG_ON(!tcph);
+	if (!tcph)
+		return NF_ACCEPT;
+
 	nexthdr_off += tcph->doff * 4;
 	datalen = tcplen - tcph->doff * 4;
--- a/net/netfilter/nf_conntrack_proto_tcp.c
+++ b/net/netfilter/nf_conntrack_proto_tcp.c
@@ -338,7 +338,8 @@ static void tcp_options(const struct sk_buff *skb,
 
 	ptr = skb_header_pointer(skb, dataoff + sizeof(struct tcphdr),
 				 length, buff);
-	BUG_ON(ptr == NULL);
+	if (!ptr)
+		return;
 
 	state->td_scale =
 	state->flags = 0;
@@ -394,7 +395,8 @@ static void tcp_sack(const struct sk_buff *skb, unsigned int dataoff,
 
 	ptr = skb_header_pointer(skb, dataoff + sizeof(struct tcphdr),
 				 length, buff);
-	BUG_ON(ptr == NULL);
+	if (!ptr)
+		return;
 
 	/* Fast path for timestamp-only option */
 	if (length == TCPOLEN_TSTAMP_ALIGNED
--- a/net/netfilter/nf_conntrack_sane.c
+++ b/net/netfilter/nf_conntrack_sane.c
@@ -95,7 +95,10 @@ static int help(struct sk_buff *skb,
 
 	spin_lock_bh(&nf_sane_lock);
 	sb_ptr = skb_header_pointer(skb, dataoff, datalen, sane_buffer);
-	BUG_ON(sb_ptr == NULL);
+	if (!sb_ptr) {
+		spin_unlock_bh(&nf_sane_lock);
+		return NF_ACCEPT;
+	}
 
 	if (dir == IP_CT_DIR_ORIGINAL) {
 		if (datalen != sizeof(struct sane_request))
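All of the conntrack helper fixes above replace a BUG_ON() with a graceful bail-out: skb_header_pointer() legitimately returns NULL whenever the requested [offset, offset + len) range does not fit inside the skb, so a malformed packet must not be allowed to crash the machine. A generic sketch of the resulting pattern (example_parse is hypothetical, not one of the real helpers):

	#include <linux/skbuff.h>
	#include <linux/netfilter.h>

	static int example_parse(struct sk_buff *skb, unsigned int dataoff,
				 unsigned int datalen, char *buffer)
	{
		const char *ptr;

		ptr = skb_header_pointer(skb, dataoff, datalen, buffer);
		if (!ptr)		/* truncated/malformed packet: let it pass */
			return NF_ACCEPT;

		/* ... parse ptr[0 .. datalen - 1] ... */
		return NF_ACCEPT;
	}

Note that helpers holding a lock (ftp, irc, sane) must also drop it on the error path, which is exactly what the hunks above add.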
--- a/net/netfilter/nf_tables_api.c
+++ b/net/netfilter/nf_tables_api.c
@@ -4184,6 +4184,7 @@ static int nf_tables_newset(struct sk_buff *skb, const struct nfnl_info *info,
 	unsigned char *udata;
 	struct nft_set *set;
 	struct nft_ctx ctx;
+	size_t alloc_size;
 	u64 timeout;
 	char *name;
 	int err, i;
@@ -4329,8 +4330,10 @@ static int nf_tables_newset(struct sk_buff *skb, const struct nfnl_info *info,
 	size = 0;
 	if (ops->privsize != NULL)
 		size = ops->privsize(nla, &desc);
-
-	set = kvzalloc(sizeof(*set) + size + udlen, GFP_KERNEL);
+	alloc_size = sizeof(*set) + size + udlen;
+	if (alloc_size < size)
+		return -ENOMEM;
+	set = kvzalloc(alloc_size, GFP_KERNEL);
 	if (!set)
 		return -ENOMEM;
 
@@ -6615,9 +6618,9 @@ err_obj_ht:
 	INIT_LIST_HEAD(&obj->list);
 	return err;
 err_trans:
-	kfree(obj->key.name);
-err_userdata:
 	kfree(obj->udata);
+err_userdata:
+	kfree(obj->key.name);
 err_strdup:
 	if (obj->ops->destroy)
 		obj->ops->destroy(&ctx, obj);
--- a/net/netfilter/nfnetlink.c
+++ b/net/netfilter/nfnetlink.c
@@ -295,6 +295,7 @@ replay:
 			nfnl_unlock(subsys_id);
 			break;
 		default:
+			rcu_read_unlock();
 			err = -EINVAL;
 			break;
 		}
--- a/net/netfilter/nfnetlink_osf.c
+++ b/net/netfilter/nfnetlink_osf.c
@@ -186,6 +186,8 @@ static const struct tcphdr *nf_osf_hdr_ctx_init(struct nf_osf_hdr_ctx *ctx,
 
 		ctx->optp = skb_header_pointer(skb, ip_hdrlen(skb) +
 				sizeof(struct tcphdr), ctx->optsize, opts);
+		if (!ctx->optp)
+			return NULL;
 	}
 
 	return tcp;
--- a/net/netfilter/nft_set_hash.c
+++ b/net/netfilter/nft_set_hash.c
@@ -412,9 +412,17 @@ static void nft_rhash_destroy(const struct nft_set *set)
 				    (void *)set);
 }
 
+/* Number of buckets is stored in u32, so cap our result to 1U<<31 */
+#define NFT_MAX_BUCKETS (1U << 31)
+
 static u32 nft_hash_buckets(u32 size)
 {
-	return roundup_pow_of_two(size * 4 / 3);
+	u64 val = div_u64((u64)size * 4, 3);
+
+	if (val >= NFT_MAX_BUCKETS)
+		return NFT_MAX_BUCKETS;
+
+	return roundup_pow_of_two(val);
 }
 
 static bool nft_rhash_estimate(const struct nft_set_desc *desc, u32 features,
@@ -615,7 +623,7 @@ static u64 nft_hash_privsize(const struct nlattr * const nla[],
 			     const struct nft_set_desc *desc)
 {
 	return sizeof(struct nft_hash) +
-	       nft_hash_buckets(desc->size) * sizeof(struct hlist_head);
+	       (u64)nft_hash_buckets(desc->size) * sizeof(struct hlist_head);
 }
 
 static int nft_hash_init(const struct nft_set *set,
@@ -655,8 +663,8 @@ static bool nft_hash_estimate(const struct nft_set_desc *desc, u32 features,
 		return false;
 
 	est->size   = sizeof(struct nft_hash) +
-		      nft_hash_buckets(desc->size) * sizeof(struct hlist_head) +
-		      desc->size * sizeof(struct nft_hash_elem);
+		      (u64)nft_hash_buckets(desc->size) * sizeof(struct hlist_head) +
+		      (u64)desc->size * sizeof(struct nft_hash_elem);
 	est->lookup = NFT_SET_CLASS_O_1;
 	est->space  = NFT_SET_CLASS_O_N;
 
@@ -673,8 +681,8 @@ static bool nft_hash_fast_estimate(const struct nft_set_desc *desc, u32 features
 		return false;
 
 	est->size   = sizeof(struct nft_hash) +
-		      nft_hash_buckets(desc->size) * sizeof(struct hlist_head) +
-		      desc->size * sizeof(struct nft_hash_elem);
+		      (u64)nft_hash_buckets(desc->size) * sizeof(struct hlist_head) +
+		      (u64)desc->size * sizeof(struct nft_hash_elem);
 	est->lookup = NFT_SET_CLASS_O_1;
 	est->space  = NFT_SET_CLASS_O_N;
 
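The nft_hash_buckets() change matters because the old expression multiplied in 32 bits before dividing. The wrap is easy to demonstrate with a requested set size of 2^30 entries (userspace arithmetic only, no kernel APIs):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint32_t size = 0x40000000u;             /* 2^30 requested elements */
		uint32_t old = size * 4 / 3;             /* size * 4 wraps to 0, so 0 */
		uint64_t fixed = (uint64_t)size * 4 / 3; /* 1431655765, no wrap */

		/* the fixed path then rounds up and caps the result at 1U << 31 */
		printf("old=%u fixed=%llu\n", old, (unsigned long long)fixed);
		return 0;
	}

The same reasoning drives the (u64) casts in the privsize/estimate hunks: bucket count times sizeof(struct hlist_head) can exceed 32 bits long before the allocation size is computed.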
--- a/net/netfilter/xt_SECMARK.c
+++ b/net/netfilter/xt_SECMARK.c
@@ -24,10 +24,9 @@ MODULE_ALIAS("ip6t_SECMARK");
 static u8 mode;
 
 static unsigned int
-secmark_tg(struct sk_buff *skb, const struct xt_action_param *par)
+secmark_tg(struct sk_buff *skb, const struct xt_secmark_target_info_v1 *info)
 {
 	u32 secmark = 0;
-	const struct xt_secmark_target_info *info = par->targinfo;
 
 	switch (mode) {
 	case SECMARK_MODE_SEL:
@@ -41,7 +40,7 @@ secmark_tg(struct sk_buff *skb, const struct xt_action_param *par)
 	return XT_CONTINUE;
 }
 
-static int checkentry_lsm(struct xt_secmark_target_info *info)
+static int checkentry_lsm(struct xt_secmark_target_info_v1 *info)
 {
 	int err;
 
@@ -73,15 +72,15 @@ static int checkentry_lsm(struct xt_secmark_target_info *info)
 	return 0;
 }
 
-static int secmark_tg_check(const struct xt_tgchk_param *par)
+static int
+secmark_tg_check(const char *table, struct xt_secmark_target_info_v1 *info)
 {
-	struct xt_secmark_target_info *info = par->targinfo;
 	int err;
 
-	if (strcmp(par->table, "mangle") != 0 &&
-	    strcmp(par->table, "security") != 0) {
+	if (strcmp(table, "mangle") != 0 &&
+	    strcmp(table, "security") != 0) {
 		pr_info_ratelimited("only valid in \'mangle\' or \'security\' table, not \'%s\'\n",
-				    par->table);
+				    table);
 		return -EINVAL;
 	}
 
@@ -116,25 +115,76 @@ static void secmark_tg_destroy(const struct xt_tgdtor_param *par)
 	}
 }
 
-static struct xt_target secmark_tg_reg __read_mostly = {
-	.name       = "SECMARK",
-	.revision   = 0,
-	.family     = NFPROTO_UNSPEC,
-	.checkentry = secmark_tg_check,
-	.destroy    = secmark_tg_destroy,
-	.target     = secmark_tg,
-	.targetsize = sizeof(struct xt_secmark_target_info),
-	.me         = THIS_MODULE,
+static int secmark_tg_check_v0(const struct xt_tgchk_param *par)
+{
+	struct xt_secmark_target_info *info = par->targinfo;
+	struct xt_secmark_target_info_v1 newinfo = {
+		.mode	= info->mode,
+	};
+	int ret;
+
+	memcpy(newinfo.secctx, info->secctx, SECMARK_SECCTX_MAX);
+
+	ret = secmark_tg_check(par->table, &newinfo);
+	info->secid = newinfo.secid;
+
+	return ret;
+}
+
+static unsigned int
+secmark_tg_v0(struct sk_buff *skb, const struct xt_action_param *par)
+{
+	const struct xt_secmark_target_info *info = par->targinfo;
+	struct xt_secmark_target_info_v1 newinfo = {
+		.secid	= info->secid,
+	};
+
+	return secmark_tg(skb, &newinfo);
+}
+
+static int secmark_tg_check_v1(const struct xt_tgchk_param *par)
+{
+	return secmark_tg_check(par->table, par->targinfo);
+}
+
+static unsigned int
+secmark_tg_v1(struct sk_buff *skb, const struct xt_action_param *par)
+{
+	return secmark_tg(skb, par->targinfo);
+}
+
+static struct xt_target secmark_tg_reg[] __read_mostly = {
+	{
+		.name		= "SECMARK",
+		.revision	= 0,
+		.family		= NFPROTO_UNSPEC,
+		.checkentry	= secmark_tg_check_v0,
+		.destroy	= secmark_tg_destroy,
+		.target		= secmark_tg_v0,
+		.targetsize	= sizeof(struct xt_secmark_target_info),
+		.me		= THIS_MODULE,
+	},
+	{
+		.name		= "SECMARK",
+		.revision	= 1,
+		.family		= NFPROTO_UNSPEC,
+		.checkentry	= secmark_tg_check_v1,
+		.destroy	= secmark_tg_destroy,
+		.target		= secmark_tg_v1,
+		.targetsize	= sizeof(struct xt_secmark_target_info_v1),
+		.usersize	= offsetof(struct xt_secmark_target_info_v1, secid),
+		.me		= THIS_MODULE,
+	},
 };
 
 static int __init secmark_tg_init(void)
 {
-	return xt_register_target(&secmark_tg_reg);
+	return xt_register_targets(secmark_tg_reg, ARRAY_SIZE(secmark_tg_reg));
 }
 
 static void __exit secmark_tg_exit(void)
 {
-	xt_unregister_target(&secmark_tg_reg);
+	xt_unregister_targets(secmark_tg_reg, ARRAY_SIZE(secmark_tg_reg));
 }
 
 module_init(secmark_tg_init);
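Revision 1 of SECMARK exists so that the kernel-private secid, which checkentry_lsm() fills in from the LSM, is no longer part of the payload copied back to userspace: the new .usersize field stops x_tables from copying anything at or beyond offsetof(..., secid) when rules are dumped, while the revision 0 shims keep existing iptables binaries (rules of the form -j SECMARK --selctx <context>) working unchanged. A hypothetical userspace-only sketch of what a usersize-limited copy achieves; struct info_v1 here is a stand-in, not the real uapi layout:

	#include <stddef.h>
	#include <stdio.h>
	#include <string.h>

	struct info_v1 {
		unsigned char mode;
		char secctx[256];
		unsigned int secid;	/* kernel-private, filled at checkentry time */
	};

	int main(void)
	{
		struct info_v1 kern = { .mode = 1, .secid = 42 };
		struct info_v1 out;

		strcpy(kern.secctx, "system_u:object_r:ssh_server_packet_t:s0");

		memset(&out, 0, sizeof(out));
		/* copy only the user-visible prefix, i.e. the "usersize" bytes */
		memcpy(&out, &kern, offsetof(struct info_v1, secid));

		printf("mode=%u secid=%u\n", out.mode, out.secid);	/* secid stays 0 */
		return 0;
	}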
--- a/net/nfc/llcp_sock.c
+++ b/net/nfc/llcp_sock.c
@@ -109,12 +109,14 @@ static int llcp_sock_bind(struct socket *sock, struct sockaddr *addr, int alen)
 						  GFP_KERNEL);
 	if (!llcp_sock->service_name) {
 		nfc_llcp_local_put(llcp_sock->local);
+		llcp_sock->local = NULL;
 		ret = -ENOMEM;
 		goto put_dev;
 	}
 	llcp_sock->ssap = nfc_llcp_get_sdp_ssap(local, llcp_sock);
 	if (llcp_sock->ssap == LLCP_SAP_MAX) {
 		nfc_llcp_local_put(llcp_sock->local);
+		llcp_sock->local = NULL;
 		kfree(llcp_sock->service_name);
 		llcp_sock->service_name = NULL;
 		ret = -EADDRINUSE;
@@ -709,6 +711,7 @@ static int llcp_sock_connect(struct socket *sock, struct sockaddr *_addr,
 	llcp_sock->ssap = nfc_llcp_get_local_ssap(local);
 	if (llcp_sock->ssap == LLCP_SAP_MAX) {
 		nfc_llcp_local_put(llcp_sock->local);
+		llcp_sock->local = NULL;
 		ret = -ENOMEM;
 		goto put_dev;
 	}
@@ -756,6 +759,7 @@ sock_unlink:
 sock_llcp_release:
 	nfc_llcp_put_ssap(local, llcp_sock->ssap);
 	nfc_llcp_local_put(llcp_sock->local);
+	llcp_sock->local = NULL;
 
 put_dev:
 	nfc_put_device(dev);
--- a/net/openvswitch/actions.c
+++ b/net/openvswitch/actions.c
@@ -827,17 +827,17 @@ static void ovs_fragment(struct net *net, struct vport *vport,
 	}
 
 	if (key->eth.type == htons(ETH_P_IP)) {
-		struct dst_entry ovs_dst;
+		struct rtable ovs_rt = { 0 };
 		unsigned long orig_dst;
 
 		prepare_frag(vport, skb, orig_network_offset,
 			     ovs_key_mac_proto(key));
-		dst_init(&ovs_dst, &ovs_dst_ops, NULL, 1,
+		dst_init(&ovs_rt.dst, &ovs_dst_ops, NULL, 1,
 			 DST_OBSOLETE_NONE, DST_NOCOUNT);
-		ovs_dst.dev = vport->dev;
+		ovs_rt.dst.dev = vport->dev;
 
 		orig_dst = skb->_skb_refdst;
-		skb_dst_set_noref(skb, &ovs_dst);
+		skb_dst_set_noref(skb, &ovs_rt.dst);
 		IPCB(skb)->frag_max_size = mru;
 
 		ip_do_fragment(net, skb->sk, skb, ovs_vport_output);
--- a/net/sched/sch_frag.c
+++ b/net/sched/sch_frag.c
@@ -90,16 +90,16 @@ static int sch_fragment(struct net *net, struct sk_buff *skb,
 	}
 
 	if (skb_protocol(skb, true) == htons(ETH_P_IP)) {
-		struct dst_entry sch_frag_dst;
+		struct rtable sch_frag_rt = { 0 };
 		unsigned long orig_dst;
 
 		sch_frag_prepare_frag(skb, xmit);
-		dst_init(&sch_frag_dst, &sch_frag_dst_ops, NULL, 1,
+		dst_init(&sch_frag_rt.dst, &sch_frag_dst_ops, NULL, 1,
 			 DST_OBSOLETE_NONE, DST_NOCOUNT);
-		sch_frag_dst.dev = skb->dev;
+		sch_frag_rt.dst.dev = skb->dev;
 
 		orig_dst = skb->_skb_refdst;
-		skb_dst_set_noref(skb, &sch_frag_dst);
+		skb_dst_set_noref(skb, &sch_frag_rt.dst);
 		IPCB(skb)->frag_max_size = mru;
 
 		ret = ip_do_fragment(net, skb->sk, skb, sch_frag_xmit);
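Both fragmentation fixes replace a bare struct dst_entry on the stack with one embedded in a struct rtable for the same reason: the IPv4 output path up-casts the skb's dst to its containing rtable and then reads rtable fields that live beyond the end of dst_entry, which previously meant reading past the end of the on-stack object. The cast in question is the skb_rtable() helper, quoted here from include/linux/skbuff.h as I recall it:

	static inline struct rtable *skb_rtable(const struct sk_buff *skb)
	{
		/* valid only if the dst really is the first member of a rtable */
		return (struct rtable *)skb_dst(skb);
	}

With the dst embedded as sch_frag_rt.dst (or ovs_rt.dst), that cast lands inside a fully zero-initialized rtable instead of adjacent stack memory.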
--- a/net/sctp/sm_make_chunk.c
+++ b/net/sctp/sm_make_chunk.c
@@ -858,11 +858,7 @@ struct sctp_chunk *sctp_make_shutdown(const struct sctp_association *asoc,
 	struct sctp_chunk *retval;
 	__u32 ctsn;
 
-	if (chunk && chunk->asoc)
-		ctsn = sctp_tsnmap_get_ctsn(&chunk->asoc->peer.tsn_map);
-	else
-		ctsn = sctp_tsnmap_get_ctsn(&asoc->peer.tsn_map);
-
+	ctsn = sctp_tsnmap_get_ctsn(&asoc->peer.tsn_map);
 	shut.cum_tsn_ack = htonl(ctsn);
 
 	retval = sctp_make_control(asoc, SCTP_CID_SHUTDOWN, 0,
--- a/net/sctp/sm_sideeffect.c
+++ b/net/sctp/sm_sideeffect.c
@@ -826,28 +826,6 @@ static void sctp_cmd_setup_t2(struct sctp_cmd_seq *cmds,
 	asoc->timeouts[SCTP_EVENT_TIMEOUT_T2_SHUTDOWN] = t->rto;
 }
 
-static void sctp_cmd_assoc_update(struct sctp_cmd_seq *cmds,
-				  struct sctp_association *asoc,
-				  struct sctp_association *new)
-{
-	struct net *net = asoc->base.net;
-	struct sctp_chunk *abort;
-
-	if (!sctp_assoc_update(asoc, new))
-		return;
-
-	abort = sctp_make_abort(asoc, NULL, sizeof(struct sctp_errhdr));
-	if (abort) {
-		sctp_init_cause(abort, SCTP_ERROR_RSRC_LOW, 0);
-		sctp_add_cmd_sf(cmds, SCTP_CMD_REPLY, SCTP_CHUNK(abort));
-	}
-	sctp_add_cmd_sf(cmds, SCTP_CMD_SET_SK_ERR, SCTP_ERROR(ECONNABORTED));
-	sctp_add_cmd_sf(cmds, SCTP_CMD_ASSOC_FAILED,
-			SCTP_PERR(SCTP_ERROR_RSRC_LOW));
-	SCTP_INC_STATS(net, SCTP_MIB_ABORTEDS);
-	SCTP_DEC_STATS(net, SCTP_MIB_CURRESTAB);
-}
-
 /* Helper function to change the state of an association. */
 static void sctp_cmd_new_state(struct sctp_cmd_seq *cmds,
 			       struct sctp_association *asoc,
@@ -1301,10 +1279,6 @@ static int sctp_cmd_interpreter(enum sctp_event_type event_type,
 		sctp_endpoint_add_asoc(ep, asoc);
 		break;
 
-	case SCTP_CMD_UPDATE_ASSOC:
-		sctp_cmd_assoc_update(commands, asoc, cmd->obj.asoc);
-		break;
-
 	case SCTP_CMD_PURGE_OUTQUEUE:
 		sctp_outq_teardown(&asoc->outqueue);
 		break;
--- a/net/sctp/sm_statefuns.c
+++ b/net/sctp/sm_statefuns.c
@@ -1773,6 +1773,30 @@ enum sctp_disposition sctp_sf_do_5_2_3_initack(
 	return sctp_sf_discard_chunk(net, ep, asoc, type, arg, commands);
 }
 
+static int sctp_sf_do_assoc_update(struct sctp_association *asoc,
+				   struct sctp_association *new,
+				   struct sctp_cmd_seq *cmds)
+{
+	struct net *net = asoc->base.net;
+	struct sctp_chunk *abort;
+
+	if (!sctp_assoc_update(asoc, new))
+		return 0;
+
+	abort = sctp_make_abort(asoc, NULL, sizeof(struct sctp_errhdr));
+	if (abort) {
+		sctp_init_cause(abort, SCTP_ERROR_RSRC_LOW, 0);
+		sctp_add_cmd_sf(cmds, SCTP_CMD_REPLY, SCTP_CHUNK(abort));
+	}
+	sctp_add_cmd_sf(cmds, SCTP_CMD_SET_SK_ERR, SCTP_ERROR(ECONNABORTED));
+	sctp_add_cmd_sf(cmds, SCTP_CMD_ASSOC_FAILED,
+			SCTP_PERR(SCTP_ERROR_RSRC_LOW));
+	SCTP_INC_STATS(net, SCTP_MIB_ABORTEDS);
+	SCTP_DEC_STATS(net, SCTP_MIB_CURRESTAB);
+
+	return -ENOMEM;
+}
+
 /* Unexpected COOKIE-ECHO handler for peer restart (Table 2, action 'A')
  *
  * Section 5.2.4
@@ -1852,20 +1876,22 @@ static enum sctp_disposition sctp_sf_do_dupcook_a(
 			SCTP_TO(SCTP_EVENT_TIMEOUT_T4_RTO));
 	sctp_add_cmd_sf(commands, SCTP_CMD_PURGE_ASCONF_QUEUE, SCTP_NULL());
 
-	repl = sctp_make_cookie_ack(new_asoc, chunk);
+	/* Update the content of current association. */
+	if (sctp_sf_do_assoc_update((struct sctp_association *)asoc, new_asoc, commands))
+		goto nomem;
+
+	repl = sctp_make_cookie_ack(asoc, chunk);
 	if (!repl)
 		goto nomem;
 
 	/* Report association restart to upper layer. */
 	ev = sctp_ulpevent_make_assoc_change(asoc, 0, SCTP_RESTART, 0,
-					     new_asoc->c.sinit_num_ostreams,
-					     new_asoc->c.sinit_max_instreams,
+					     asoc->c.sinit_num_ostreams,
+					     asoc->c.sinit_max_instreams,
 					     NULL, GFP_ATOMIC);
 	if (!ev)
 		goto nomem_ev;
 
-	/* Update the content of current association. */
-	sctp_add_cmd_sf(commands, SCTP_CMD_UPDATE_ASSOC, SCTP_ASOC(new_asoc));
 	sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP, SCTP_ULPEVENT(ev));
 	if ((sctp_state(asoc, SHUTDOWN_PENDING) ||
 	     sctp_state(asoc, SHUTDOWN_SENT)) &&
@@ -1925,14 +1951,17 @@ static enum sctp_disposition sctp_sf_do_dupcook_b(
 	if (!sctp_auth_chunk_verify(net, chunk, new_asoc))
 		return SCTP_DISPOSITION_DISCARD;
 
-	/* Update the content of current association. */
-	sctp_add_cmd_sf(commands, SCTP_CMD_UPDATE_ASSOC, SCTP_ASOC(new_asoc));
 	sctp_add_cmd_sf(commands, SCTP_CMD_NEW_STATE,
 			SCTP_STATE(SCTP_STATE_ESTABLISHED));
-	SCTP_INC_STATS(net, SCTP_MIB_CURRESTAB);
+	if (asoc->state < SCTP_STATE_ESTABLISHED)
+		SCTP_INC_STATS(net, SCTP_MIB_CURRESTAB);
 	sctp_add_cmd_sf(commands, SCTP_CMD_HB_TIMERS_START, SCTP_NULL());
 
-	repl = sctp_make_cookie_ack(new_asoc, chunk);
+	/* Update the content of current association. */
+	if (sctp_sf_do_assoc_update((struct sctp_association *)asoc, new_asoc, commands))
+		goto nomem;
+
+	repl = sctp_make_cookie_ack(asoc, chunk);
 	if (!repl)
 		goto nomem;
 
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -357,6 +357,18 @@ static struct sctp_af *sctp_sockaddr_af(struct sctp_sock *opt,
 	return af;
 }
 
+static void sctp_auto_asconf_init(struct sctp_sock *sp)
+{
+	struct net *net = sock_net(&sp->inet.sk);
+
+	if (net->sctp.default_auto_asconf) {
+		spin_lock(&net->sctp.addr_wq_lock);
+		list_add_tail(&sp->auto_asconf_list, &net->sctp.auto_asconf_splist);
+		spin_unlock(&net->sctp.addr_wq_lock);
+		sp->do_auto_asconf = 1;
+	}
+}
+
 /* Bind a local address either to an endpoint or to an association. */
 static int sctp_do_bind(struct sock *sk, union sctp_addr *addr, int len)
 {
@@ -418,8 +430,10 @@ static int sctp_do_bind(struct sock *sk, union sctp_addr *addr, int len)
 		return -EADDRINUSE;
 
 	/* Refresh ephemeral port. */
-	if (!bp->port)
+	if (!bp->port) {
 		bp->port = inet_sk(sk)->inet_num;
+		sctp_auto_asconf_init(sp);
+	}
 
 	/* Add the address to the bind address list.
 	 * Use GFP_ATOMIC since BHs will be disabled.
@@ -1520,9 +1534,11 @@ static void sctp_close(struct sock *sk, long timeout)
 
 	/* Supposedly, no process has access to the socket, but
 	 * the net layers still may.
+	 * Also, sctp_destroy_sock() needs to be called with addr_wq_lock
+	 * held and that should be grabbed before socket lock.
 	 */
-	local_bh_disable();
-	bh_lock_sock(sk);
+	spin_lock_bh(&net->sctp.addr_wq_lock);
+	bh_lock_sock_nested(sk);
 
 	/* Hold the sock, since sk_common_release() will put sock_put()
 	 * and we have just a little more cleanup.
@@ -1531,7 +1547,7 @@ static void sctp_close(struct sock *sk, long timeout)
 	sk_common_release(sk);
 
 	bh_unlock_sock(sk);
-	local_bh_enable();
+	spin_unlock_bh(&net->sctp.addr_wq_lock);
 
 	sock_put(sk);
 
@@ -4991,16 +5007,6 @@ static int sctp_init_sock(struct sock *sk)
 	sk_sockets_allocated_inc(sk);
 	sock_prot_inuse_add(net, sk->sk_prot, 1);
 
-	if (net->sctp.default_auto_asconf) {
-		spin_lock(&sock_net(sk)->sctp.addr_wq_lock);
-		list_add_tail(&sp->auto_asconf_list,
-			      &net->sctp.auto_asconf_splist);
-		sp->do_auto_asconf = 1;
-		spin_unlock(&sock_net(sk)->sctp.addr_wq_lock);
-	} else {
-		sp->do_auto_asconf = 0;
-	}
-
 	local_bh_enable();
 
 	return 0;
@@ -5025,9 +5031,7 @@ static void sctp_destroy_sock(struct sock *sk)
 
 	if (sp->do_auto_asconf) {
 		sp->do_auto_asconf = 0;
-		spin_lock_bh(&sock_net(sk)->sctp.addr_wq_lock);
 		list_del(&sp->auto_asconf_list);
-		spin_unlock_bh(&sock_net(sk)->sctp.addr_wq_lock);
 	}
 	sctp_endpoint_free(sp->ep);
 	local_bh_disable();
@@ -9398,6 +9402,8 @@ static int sctp_sock_migrate(struct sock *oldsk, struct sock *newsk,
 		return err;
 	}
 
+	sctp_auto_asconf_init(newsp);
+
 	/* Move any messages in the old socket's receive queue that are for the
 	 * peeled off association to the new socket's receive queue.
 	 */
--- a/net/smc/af_smc.c
+++ b/net/smc/af_smc.c
@@ -2161,6 +2161,9 @@ static int smc_setsockopt(struct socket *sock, int level, int optname,
 	struct smc_sock *smc;
 	int val, rc;
 
+	if (level == SOL_TCP && optname == TCP_ULP)
+		return -EOPNOTSUPP;
+
 	smc = smc_sk(sk);
 
 	/* generic setsockopts reaching us here always apply to the
@@ -2185,7 +2188,6 @@ static int smc_setsockopt(struct socket *sock, int level, int optname,
 	if (rc || smc->use_fallback)
 		goto out;
 	switch (optname) {
-	case TCP_ULP:
 	case TCP_FASTOPEN:
 	case TCP_FASTOPEN_CONNECT:
 	case TCP_FASTOPEN_KEY:
--- a/net/vmw_vsock/vmci_transport.c
+++ b/net/vmw_vsock/vmci_transport.c
@@ -944,8 +944,6 @@ static int vmci_transport_recv_listen(struct sock *sk,
 	bool old_request = false;
 	bool old_pkt_proto = false;
 
-	err = 0;
-
 	/* Because we are in the listen state, we could be receiving a packet
 	 * for ourself or any previous connection requests that we received.
 	 * If it's the latter, we try to find a socket in our list of pending
--- a/net/xdp/xsk_queue.h
+++ b/net/xdp/xsk_queue.h
@@ -128,13 +128,12 @@ static inline bool xskq_cons_read_addr_unchecked(struct xsk_queue *q, u64 *addr)
 static inline bool xp_aligned_validate_desc(struct xsk_buff_pool *pool,
 					    struct xdp_desc *desc)
 {
-	u64 chunk, chunk_end;
+	u64 chunk;
 
-	chunk = xp_aligned_extract_addr(pool, desc->addr);
-	chunk_end = xp_aligned_extract_addr(pool, desc->addr + desc->len);
-	if (chunk != chunk_end)
+	if (desc->len > pool->chunk_size)
 		return false;
 
+	chunk = xp_aligned_extract_addr(pool, desc->addr);
 	if (chunk >= pool->addrs_cnt)
 		return false;
 
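The xsk change is easiest to see with concrete numbers: a descriptor that exactly fills one aligned chunk used to be rejected, because addr + len lands on the first byte of the next chunk. A userspace re-run of both checks (the mask mirrors aligned-mode address extraction; no kernel APIs involved):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t chunk_size = 2048, chunk_mask = ~(chunk_size - 1);
		uint64_t addr = 0, len = 2048;	/* fills one chunk exactly */

		uint64_t chunk = addr & chunk_mask;		/* 0 */
		uint64_t chunk_end = (addr + len) & chunk_mask;	/* 2048: next chunk */

		printf("old check rejects: %d\n", chunk != chunk_end);	/* 1: false positive */
		printf("new check rejects: %d\n", len > chunk_size);	/* 0: correct */
		return 0;
	}

Descriptors genuinely spanning two chunks are still caught, since their length necessarily exceeds chunk_size.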
--- a/tools/lib/bpf/ringbuf.c
+++ b/tools/lib/bpf/ringbuf.c
@@ -202,9 +202,11 @@ static inline int roundup_len(__u32 len)
 	return (len + 7) / 8 * 8;
 }
 
-static int ringbuf_process_ring(struct ring* r)
+static int64_t ringbuf_process_ring(struct ring* r)
 {
-	int *len_ptr, len, err, cnt = 0;
+	int *len_ptr, len, err;
+	/* 64-bit to avoid overflow in case of extreme application behavior */
+	int64_t cnt = 0;
 	unsigned long cons_pos, prod_pos;
 	bool got_new_data;
 	void *sample;
@@ -244,12 +246,14 @@ done:
 }
 
 /* Consume available ring buffer(s) data without event polling.
- * Returns number of records consumed across all registered ring buffers, or
- * negative number if any of the callbacks return error.
+ * Returns number of records consumed across all registered ring buffers (or
+ * INT_MAX, whichever is less), or negative number if any of the callbacks
+ * return error.
 */
 int ring_buffer__consume(struct ring_buffer *rb)
 {
-	int i, err, res = 0;
+	int64_t err, res = 0;
+	int i;
 
 	for (i = 0; i < rb->ring_cnt; i++) {
 		struct ring *ring = &rb->rings[i];
@@ -259,18 +263,24 @@ int ring_buffer__consume(struct ring_buffer *rb)
 			return err;
 		res += err;
 	}
+	if (res > INT_MAX)
+		return INT_MAX;
 	return res;
 }
 
 /* Poll for available data and consume records, if any are available.
- * Returns number of records consumed, or negative number, if any of the
- * registered callbacks returned error.
+ * Returns number of records consumed (or INT_MAX, whichever is less), or
+ * negative number, if any of the registered callbacks returned error.
 */
 int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms)
 {
-	int i, cnt, err, res = 0;
+	int i, cnt;
+	int64_t err, res = 0;
 
 	cnt = epoll_wait(rb->epoll_fd, rb->events, rb->ring_cnt, timeout_ms);
+	if (cnt < 0)
+		return -errno;
+
 	for (i = 0; i < cnt; i++) {
 		__u32 ring_id = rb->events[i].data.fd;
 		struct ring *ring = &rb->rings[ring_id];
@@ -280,7 +290,9 @@ int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms)
 			return err;
 		res += err;
 	}
-	return cnt < 0 ? -errno : res;
+	if (res > INT_MAX)
+		return INT_MAX;
+	return res;
 }
 
 /* Get an fd that can be used to sleep until data is available in the ring(s) */
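Since ring_buffer__consume() and ring_buffer__poll() still return int, the int64_t accumulator is clamped to INT_MAX on return, while the per-record callbacks still run for every record. A hypothetical consumer loop showing where those return values surface; this is a sketch assuming a valid fd for a BPF_MAP_TYPE_RINGBUF map, not libbpf documentation:

	#include <bpf/libbpf.h>
	#include <stdio.h>

	static int handle_event(void *ctx, void *data, size_t len)
	{
		return 0;	/* a non-zero return aborts consumption with an error */
	}

	int consume_loop(int map_fd)
	{
		struct ring_buffer *rb;
		int n;

		rb = ring_buffer__new(map_fd, handle_event, NULL, NULL);
		if (!rb)
			return -1;

		/* n is the record count for this call, capped at INT_MAX */
		while ((n = ring_buffer__poll(rb, 100 /* ms */)) >= 0)
			printf("consumed %d records\n", n);

		ring_buffer__free(rb);
		return n;	/* negative errno-style error from poll/callback */
	}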
--- a/tools/testing/selftests/bpf/prog_tests/snprintf.c
+++ b/tools/testing/selftests/bpf/prog_tests/snprintf.c
@@ -43,6 +43,8 @@ void test_snprintf_positive(void)
 	if (!ASSERT_OK_PTR(skel, "skel_open"))
 		return;
 
+	skel->bss->pid = getpid();
+
 	if (!ASSERT_OK(test_snprintf__attach(skel), "skel_attach"))
 		goto cleanup;
 
--- a/tools/testing/selftests/bpf/progs/test_snprintf.c
+++ b/tools/testing/selftests/bpf/progs/test_snprintf.c
@@ -5,6 +5,8 @@
 #include <bpf/bpf_helpers.h>
 #include <bpf/bpf_tracing.h>
 
+__u32 pid = 0;
+
 char num_out[64] = {};
 long num_ret = 0;
 
@@ -42,6 +44,9 @@ int handler(const void *ctx)
 	static const char str1[] = "str1";
 	static const char longstr[] = "longstr";
 
+	if ((int)bpf_get_current_pid_tgid() != pid)
+		return 0;
+
 	/* Integer types */
 	num_ret  = BPF_SNPRINTF(num_out, sizeof(num_out),
 				"%d %u %x %li %llu %lX",