Merge tag 'net-6.5-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from bluetooth, bpf and wireguard.

  Current release - regressions:

   - nvme-tcp: fix comma-related oops after sendpage changes

  Current release - new code bugs:

   - ptp: make max_phase_adjustment sysfs device attribute invisible
     when not supported

  Previous releases - regressions:

   - sctp: fix potential deadlock on &net->sctp.addr_wq_lock

   - mptcp:
      - ensure subflow is unhashed before cleaning the backlog
      - do not rely on implicit state check in mptcp_listen()

  Previous releases - always broken:

   - net: fix net_dev_start_xmit trace event vs skb_transport_offset()

   - Bluetooth:
      - fix use-bdaddr-property quirk
      - L2CAP: fix multiple UaFs
      - ISO: use hci_sync for setting CIG parameters
      - hci_event: fix Set CIG Parameters error status handling
      - hci_event: fix parsing of CIS Established Event
      - MGMT: fix marking SCAN_RSP as not connectable

   - wireguard: queuing: use saner cpu selection wrapping

   - sched: act_ipt: various bug fixes for iptables <> TC interactions

   - sched: act_pedit: add size check for TCA_PEDIT_PARMS_EX

   - dsa: fixes for receiving PTP packets with 8021q and sja1105 tagging

   - eth: sfc: fix null-deref in devlink port without MAE access

   - eth: ibmvnic: do not reset dql stats on NON_FATAL err

  Misc:

   - xsk: honor SO_BINDTODEVICE on bind"

* tag 'net-6.5-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (70 commits)
  nfp: clean mc addresses in application firmware when closing port
  selftests: mptcp: pm_nl_ctl: fix 32-bit support
  selftests: mptcp: depend on SYN_COOKIES
  selftests: mptcp: userspace_pm: report errors with 'remove' tests
  selftests: mptcp: userspace_pm: use correct server port
  selftests: mptcp: sockopt: return error if wrong mark
  selftests: mptcp: sockopt: use 'iptables-legacy' if available
  selftests: mptcp: connect: fail if nft supposed to work
  mptcp: do not rely on implicit state check in mptcp_listen()
  mptcp: ensure subflow is unhashed before cleaning the backlog
  s390/qeth: Fix vipa deletion
  octeontx-af: fix hardware timestamp configuration
  net: dsa: sja1105: always enable the send_meta options
  net: dsa: tag_sja1105: fix MAC DA patching from meta frames
  net: Replace strlcpy with strscpy
  pptp: Fix fib lookup calls.
  mlxsw: spectrum_router: Fix an IS_ERR() vs NULL check
  net/sched: act_pedit: Add size check for TCA_PEDIT_PARMS_EX
  xsk: Honor SO_BINDTODEVICE on bind
  ptp: Make max_phase_adjustment sysfs device attribute invisible when not supported
  ...
Linus Torvalds 2023-07-05 15:44:45 -07:00
Parents 73a3fcdaa7 cc7eab25b1
Commit 6843306689
80 changed files: 639 additions and 329 deletions


@ -62,7 +62,7 @@ Description:
What: /sys/class/net/<iface>/qmi/pass_through
Date: January 2021
KernelVersion: 5.12
Contact: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Contact: Subash Abhinov Kasiviswanathan <quic_subashab@quicinc.com>
Description:
Boolean. Default: 'N'


@ -433,6 +433,15 @@ start N bytes into the buffer leaving the first N bytes for the
application to use. The final option is the flags field, but it will
be dealt with in separate sections for each UMEM flag.
SO_BINDTODEVICE setsockopt
--------------------------
This is a generic SOL_SOCKET option that can be used to tie AF_XDP
socket to a particular network interface. It is useful when a socket
is created by a privileged process and passed to a non-privileged one.
Once the option is set, kernel will refuse attempts to bind that socket
to a different interface. Updating the value requires CAP_NET_RAW.
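An illustrative userspace sketch of the behaviour described above (added for clarity, not part of the patch; the interface name "eth0" is an assumption, and the usual UMEM/ring setup plus full error handling are omitted):

    #include <sys/socket.h>   /* socket(), setsockopt(), SOL_SOCKET, SO_BINDTODEVICE */

    static int xsk_socket_bound_to_eth0(void)
    {
            /* AF_XDP sockets are created with socket(AF_XDP, SOCK_RAW, 0) */
            int fd = socket(AF_XDP, SOCK_RAW, 0);
            static const char ifname[] = "eth0";

            if (fd < 0)
                    return -1;

            /* Generic SOL_SOCKET option: once set, later attempts to bind
             * this socket to a different interface are refused, and
             * changing the value again requires CAP_NET_RAW.
             */
            if (setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE,
                           ifname, sizeof(ifname)) < 0)
                    return -1;

            return fd;
    }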
XDP_STATISTICS getsockopt
-------------------------


@ -190,8 +190,7 @@ MAP header|IP Packet|Optional padding|MAP header|Command Packet|Optional pad...
3. Userspace configuration
==========================
rmnet userspace configuration is done through netlink library librmnetctl
and command line utility rmnetcli. Utility is hosted in codeaurora forum git.
The driver uses rtnl_link_ops for communication.
rmnet userspace configuration is done through netlink using iproute2
https://git.kernel.org/pub/scm/network/iproute2/iproute2.git/
https://source.codeaurora.org/quic/la/platform/vendor/qcom-opensource/dataservices/tree/rmnetctl
The driver uses rtnl_link_ops for communication.


@ -149,8 +149,11 @@ themselves. No email should ever be sent to the list with the main purpose
of communicating with the bot, the bot commands should be seen as metadata.
The use of the bot is restricted to authors of the patches (the ``From:``
header on patch submission and command must match!), maintainers themselves
and a handful of senior reviewers. Bot records its activity here:
header on patch submission and command must match!), maintainers of
the modified code according to the MAINTAINERS file (again, ``From:``
must match the MAINTAINERS entry) and a handful of senior reviewers.
Bot records its activity here:
https://patchwork.hopto.org/pw-bot.html


@ -637,7 +637,7 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
snprintf(config.fwname, sizeof(config.fwname),
"qca/%s", firmware_name);
else if (qca_is_wcn399x(soc_type)) {
if (ver.soc_id == QCA_WCN3991_SOC_ID) {
if (le32_to_cpu(ver.soc_id) == QCA_WCN3991_SOC_ID) {
snprintf(config.fwname, sizeof(config.fwname),
"qca/crnv%02xu.bin", rom_ver);
} else {


@ -1367,14 +1367,30 @@ MODULE_FIRMWARE("rtl_bt/rtl8723cs_vf_fw.bin");
MODULE_FIRMWARE("rtl_bt/rtl8723cs_vf_config.bin");
MODULE_FIRMWARE("rtl_bt/rtl8723cs_xx_fw.bin");
MODULE_FIRMWARE("rtl_bt/rtl8723cs_xx_config.bin");
MODULE_FIRMWARE("rtl_bt/rtl8723d_fw.bin");
MODULE_FIRMWARE("rtl_bt/rtl8723d_config.bin");
MODULE_FIRMWARE("rtl_bt/rtl8723ds_fw.bin");
MODULE_FIRMWARE("rtl_bt/rtl8723ds_config.bin");
MODULE_FIRMWARE("rtl_bt/rtl8761a_fw.bin");
MODULE_FIRMWARE("rtl_bt/rtl8761a_config.bin");
MODULE_FIRMWARE("rtl_bt/rtl8761b_fw.bin");
MODULE_FIRMWARE("rtl_bt/rtl8761b_config.bin");
MODULE_FIRMWARE("rtl_bt/rtl8761bu_fw.bin");
MODULE_FIRMWARE("rtl_bt/rtl8761bu_config.bin");
MODULE_FIRMWARE("rtl_bt/rtl8821a_fw.bin");
MODULE_FIRMWARE("rtl_bt/rtl8821a_config.bin");
MODULE_FIRMWARE("rtl_bt/rtl8821c_fw.bin");
MODULE_FIRMWARE("rtl_bt/rtl8821c_config.bin");
MODULE_FIRMWARE("rtl_bt/rtl8821cs_fw.bin");
MODULE_FIRMWARE("rtl_bt/rtl8821cs_config.bin");
MODULE_FIRMWARE("rtl_bt/rtl8822b_fw.bin");
MODULE_FIRMWARE("rtl_bt/rtl8822b_config.bin");
MODULE_FIRMWARE("rtl_bt/rtl8822cs_fw.bin");
MODULE_FIRMWARE("rtl_bt/rtl8822cs_config.bin");
MODULE_FIRMWARE("rtl_bt/rtl8822cu_fw.bin");
MODULE_FIRMWARE("rtl_bt/rtl8822cu_config.bin");
MODULE_FIRMWARE("rtl_bt/rtl8851bu_fw.bin");
MODULE_FIRMWARE("rtl_bt/rtl8851bu_config.bin");
MODULE_FIRMWARE("rtl_bt/rtl8852au_fw.bin");
MODULE_FIRMWARE("rtl_bt/rtl8852au_config.bin");
MODULE_FIRMWARE("rtl_bt/rtl8852bs_fw.bin");
@ -1383,5 +1399,3 @@ MODULE_FIRMWARE("rtl_bt/rtl8852bu_fw.bin");
MODULE_FIRMWARE("rtl_bt/rtl8852bu_config.bin");
MODULE_FIRMWARE("rtl_bt/rtl8852cu_fw.bin");
MODULE_FIRMWARE("rtl_bt/rtl8852cu_config.bin");
MODULE_FIRMWARE("rtl_bt/rtl8851bu_fw.bin");
MODULE_FIRMWARE("rtl_bt/rtl8851bu_config.bin");


@ -613,6 +613,9 @@ static const struct usb_device_id blacklist_table[] = {
{ USB_DEVICE(0x0489, 0xe0d9), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
{ USB_DEVICE(0x0489, 0xe0f5), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
{ USB_DEVICE(0x13d3, 0x3568), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
@ -655,6 +658,8 @@ static const struct usb_device_id blacklist_table[] = {
BTUSB_WIDEBAND_SPEECH },
{ USB_DEVICE(0x0bda, 0x8771), .driver_info = BTUSB_REALTEK |
BTUSB_WIDEBAND_SPEECH },
{ USB_DEVICE(0x6655, 0x8771), .driver_info = BTUSB_REALTEK |
BTUSB_WIDEBAND_SPEECH },
{ USB_DEVICE(0x7392, 0xc611), .driver_info = BTUSB_REALTEK |
BTUSB_WIDEBAND_SPEECH },
{ USB_DEVICE(0x2b89, 0x8761), .driver_info = BTUSB_REALTEK |


@ -643,6 +643,7 @@ static int bcm_setup(struct hci_uart *hu)
* Allow the bootloader to set a valid address through the
* device tree.
*/
if (test_bit(HCI_QUIRK_INVALID_BDADDR, &hu->hdev->quirks))
set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hu->hdev->quirks);
if (!bcm_request_irq(bcm))


@ -1725,6 +1725,18 @@ static bool felix_rxtstamp(struct dsa_switch *ds, int port,
u32 tstamp_hi;
u64 tstamp;
switch (type & PTP_CLASS_PMASK) {
case PTP_CLASS_L2:
if (!(ocelot->ports[port]->trap_proto & OCELOT_PROTO_PTP_L2))
return false;
break;
case PTP_CLASS_IPV4:
case PTP_CLASS_IPV6:
if (!(ocelot->ports[port]->trap_proto & OCELOT_PROTO_PTP_L4))
return false;
break;
}
/* If the "no XTR IRQ" workaround is in use, tell DSA to defer this skb
* for RX timestamping. Then free it, and poll for its copy through
* MMIO in the CPU port module, and inject that into the stack from


@ -252,6 +252,7 @@ struct sja1105_private {
unsigned long ucast_egress_floods;
unsigned long bcast_egress_floods;
unsigned long hwts_tx_en;
unsigned long hwts_rx_en;
const struct sja1105_info *info;
size_t max_xfer_len;
struct spi_device *spidev;
@ -289,7 +290,6 @@ struct sja1105_spi_message {
/* From sja1105_main.c */
enum sja1105_reset_reason {
SJA1105_VLAN_FILTERING = 0,
SJA1105_RX_HWTSTAMPING,
SJA1105_AGEING_TIME,
SJA1105_SCHEDULING,
SJA1105_BEST_EFFORT_POLICING,


@ -866,12 +866,12 @@ static int sja1105_init_general_params(struct sja1105_private *priv)
.hostprio = 7,
.mac_fltres1 = SJA1105_LINKLOCAL_FILTER_A,
.mac_flt1 = SJA1105_LINKLOCAL_FILTER_A_MASK,
.incl_srcpt1 = false,
.send_meta1 = false,
.incl_srcpt1 = true,
.send_meta1 = true,
.mac_fltres0 = SJA1105_LINKLOCAL_FILTER_B,
.mac_flt0 = SJA1105_LINKLOCAL_FILTER_B_MASK,
.incl_srcpt0 = false,
.send_meta0 = false,
.incl_srcpt0 = true,
.send_meta0 = true,
/* Default to an invalid value */
.mirr_port = priv->ds->num_ports,
/* No TTEthernet */
@ -2215,7 +2215,6 @@ static int sja1105_reload_cbs(struct sja1105_private *priv)
static const char * const sja1105_reset_reasons[] = {
[SJA1105_VLAN_FILTERING] = "VLAN filtering",
[SJA1105_RX_HWTSTAMPING] = "RX timestamping",
[SJA1105_AGEING_TIME] = "Ageing time",
[SJA1105_SCHEDULING] = "Time-aware scheduling",
[SJA1105_BEST_EFFORT_POLICING] = "Best-effort policing",
@ -2405,11 +2404,6 @@ int sja1105_vlan_filtering(struct dsa_switch *ds, int port, bool enabled,
general_params->tpid = tpid;
/* EtherType used to identify outer tagged (S-tag) VLAN traffic */
general_params->tpid2 = tpid2;
/* When VLAN filtering is on, we need to at least be able to
* decode management traffic through the "backup plan".
*/
general_params->incl_srcpt1 = enabled;
general_params->incl_srcpt0 = enabled;
for (port = 0; port < ds->num_ports; port++) {
if (dsa_is_unused_port(ds, port))


@ -58,35 +58,10 @@ enum sja1105_ptp_clk_mode {
#define ptp_data_to_sja1105(d) \
container_of((d), struct sja1105_private, ptp_data)
/* Must be called only while the RX timestamping state of the tagger
* is turned off
*/
static int sja1105_change_rxtstamping(struct sja1105_private *priv,
bool on)
{
struct sja1105_ptp_data *ptp_data = &priv->ptp_data;
struct sja1105_general_params_entry *general_params;
struct sja1105_table *table;
table = &priv->static_config.tables[BLK_IDX_GENERAL_PARAMS];
general_params = table->entries;
general_params->send_meta1 = on;
general_params->send_meta0 = on;
ptp_cancel_worker_sync(ptp_data->clock);
skb_queue_purge(&ptp_data->skb_txtstamp_queue);
skb_queue_purge(&ptp_data->skb_rxtstamp_queue);
return sja1105_static_config_reload(priv, SJA1105_RX_HWTSTAMPING);
}
int sja1105_hwtstamp_set(struct dsa_switch *ds, int port, struct ifreq *ifr)
{
struct sja1105_tagger_data *tagger_data = sja1105_tagger_data(ds);
struct sja1105_private *priv = ds->priv;
struct hwtstamp_config config;
bool rx_on;
int rc;
if (copy_from_user(&config, ifr->ifr_data, sizeof(config)))
return -EFAULT;
@ -104,26 +79,13 @@ int sja1105_hwtstamp_set(struct dsa_switch *ds, int port, struct ifreq *ifr)
switch (config.rx_filter) {
case HWTSTAMP_FILTER_NONE:
rx_on = false;
priv->hwts_rx_en &= ~BIT(port);
break;
default:
rx_on = true;
priv->hwts_rx_en |= BIT(port);
break;
}
if (rx_on != tagger_data->rxtstamp_get_state(ds)) {
tagger_data->rxtstamp_set_state(ds, false);
rc = sja1105_change_rxtstamping(priv, rx_on);
if (rc < 0) {
dev_err(ds->dev,
"Failed to change RX timestamping: %d\n", rc);
return rc;
}
if (rx_on)
tagger_data->rxtstamp_set_state(ds, true);
}
if (copy_to_user(ifr->ifr_data, &config, sizeof(config)))
return -EFAULT;
return 0;
@ -131,7 +93,6 @@ int sja1105_hwtstamp_set(struct dsa_switch *ds, int port, struct ifreq *ifr)
int sja1105_hwtstamp_get(struct dsa_switch *ds, int port, struct ifreq *ifr)
{
struct sja1105_tagger_data *tagger_data = sja1105_tagger_data(ds);
struct sja1105_private *priv = ds->priv;
struct hwtstamp_config config;
@ -140,7 +101,7 @@ int sja1105_hwtstamp_get(struct dsa_switch *ds, int port, struct ifreq *ifr)
config.tx_type = HWTSTAMP_TX_ON;
else
config.tx_type = HWTSTAMP_TX_OFF;
if (tagger_data->rxtstamp_get_state(ds))
if (priv->hwts_rx_en & BIT(port))
config.rx_filter = HWTSTAMP_FILTER_PTP_V2_L2_EVENT;
else
config.rx_filter = HWTSTAMP_FILTER_NONE;
@ -413,11 +374,10 @@ static long sja1105_rxtstamp_work(struct ptp_clock_info *ptp)
bool sja1105_rxtstamp(struct dsa_switch *ds, int port, struct sk_buff *skb)
{
struct sja1105_tagger_data *tagger_data = sja1105_tagger_data(ds);
struct sja1105_private *priv = ds->priv;
struct sja1105_ptp_data *ptp_data = &priv->ptp_data;
if (!tagger_data->rxtstamp_get_state(ds))
if (!(priv->hwts_rx_en & BIT(port)))
return false;
/* We need to read the full PTP clock to reconstruct the Rx


@ -1025,17 +1025,17 @@ static int vsc73xx_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
struct vsc73xx *vsc = ds->priv;
return vsc73xx_write(vsc, VSC73XX_BLOCK_MAC, port,
VSC73XX_MAXLEN, new_mtu);
VSC73XX_MAXLEN, new_mtu + ETH_HLEN + ETH_FCS_LEN);
}
/* According to application not "VSC7398 Jumbo Frames" setting
* up the MTU to 9.6 KB does not affect the performance on standard
* up the frame size to 9.6 KB does not affect the performance on standard
* frames. It is clear from the application note that
* "9.6 kilobytes" == 9600 bytes.
*/
static int vsc73xx_get_max_mtu(struct dsa_switch *ds, int port)
{
return 9600;
return 9600 - ETH_HLEN - ETH_FCS_LEN;
}
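For reference (not part of the patch): with ETH_HLEN = 14 and ETH_FCS_LEN = 4, the advertised maximum MTU becomes 9600 - 14 - 4 = 9582 bytes, while a requested MTU of 1500 now programs VSC73XX_MAXLEN with 1500 + 14 + 4 = 1518, i.e. the full frame size that the application note's 9.6 KB figure refers to.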
static const struct dsa_switch_ops vsc73xx_ds_ops = {


@ -225,6 +225,7 @@ MODULE_AUTHOR("David S. Miller (davem@redhat.com) and Jeff Garzik (jgarzik@pobox
MODULE_DESCRIPTION("Broadcom Tigon3 ethernet driver");
MODULE_LICENSE("GPL");
MODULE_FIRMWARE(FIRMWARE_TG3);
MODULE_FIRMWARE(FIRMWARE_TG357766);
MODULE_FIRMWARE(FIRMWARE_TG3TSO);
MODULE_FIRMWARE(FIRMWARE_TG3TSO5);


@ -1816,6 +1816,13 @@ static int __ibmvnic_open(struct net_device *netdev)
if (prev_state == VNIC_CLOSED)
enable_irq(adapter->tx_scrq[i]->irq);
enable_scrq_irq(adapter, adapter->tx_scrq[i]);
/* netdev_tx_reset_queue will reset dql stats. During NON_FATAL
* resets, don't reset the stats because there could be batched
* skb's waiting to be sent. If we reset dql stats, we risk
* num_completed being greater than num_queued. This will cause
* a BUG_ON in dql_completed().
*/
if (adapter->reset_reason != VNIC_RESET_NON_FATAL)
netdev_tx_reset_queue(netdev_get_tx_queue(netdev, i));
}


@ -169,6 +169,9 @@ void cgx_lmac_write(int cgx_id, int lmac_id, u64 offset, u64 val)
{
struct cgx *cgx_dev = cgx_get_pdata(cgx_id);
/* Software must not access disabled LMAC registers */
if (!is_lmac_valid(cgx_dev, lmac_id))
return;
cgx_write(cgx_dev, lmac_id, offset, val);
}
@ -176,6 +179,10 @@ u64 cgx_lmac_read(int cgx_id, int lmac_id, u64 offset)
{
struct cgx *cgx_dev = cgx_get_pdata(cgx_id);
/* Software must not access disabled LMAC registers */
if (!is_lmac_valid(cgx_dev, lmac_id))
return 0;
return cgx_read(cgx_dev, lmac_id, offset);
}
@ -530,14 +537,15 @@ static u32 cgx_get_lmac_fifo_len(void *cgxd, int lmac_id)
int cgx_lmac_internal_loopback(void *cgxd, int lmac_id, bool enable)
{
struct cgx *cgx = cgxd;
u8 lmac_type;
struct lmac *lmac;
u64 cfg;
if (!is_lmac_valid(cgx, lmac_id))
return -ENODEV;
lmac_type = cgx->mac_ops->get_lmac_type(cgx, lmac_id);
if (lmac_type == LMAC_MODE_SGMII || lmac_type == LMAC_MODE_QSGMII) {
lmac = lmac_pdata(lmac_id, cgx);
if (lmac->lmac_type == LMAC_MODE_SGMII ||
lmac->lmac_type == LMAC_MODE_QSGMII) {
cfg = cgx_read(cgx, lmac_id, CGXX_GMP_PCS_MRX_CTL);
if (enable)
cfg |= CGXX_GMP_PCS_MRX_CTL_LBK;
@ -1556,6 +1564,23 @@ int cgx_lmac_linkup_start(void *cgxd)
return 0;
}
int cgx_lmac_reset(void *cgxd, int lmac_id, u8 pf_req_flr)
{
struct cgx *cgx = cgxd;
u64 cfg;
if (!is_lmac_valid(cgx, lmac_id))
return -ENODEV;
/* Resetting PFC related CSRs */
cfg = 0xff;
cgx_write(cgxd, lmac_id, CGXX_CMRX_RX_LOGL_XON, cfg);
if (pf_req_flr)
cgx_lmac_internal_loopback(cgxd, lmac_id, false);
return 0;
}
static int cgx_configure_interrupt(struct cgx *cgx, struct lmac *lmac,
int cnt, bool req_free)
{
@ -1675,6 +1700,7 @@ static int cgx_lmac_init(struct cgx *cgx)
cgx->lmac_idmap[lmac->lmac_id] = lmac;
set_bit(lmac->lmac_id, &cgx->lmac_bmap);
cgx->mac_ops->mac_pause_frm_config(cgx, lmac->lmac_id, true);
lmac->lmac_type = cgx->mac_ops->get_lmac_type(cgx, lmac->lmac_id);
}
return cgx_lmac_verify_fwi_version(cgx);
@ -1771,6 +1797,7 @@ static struct mac_ops cgx_mac_ops = {
.mac_tx_enable = cgx_lmac_tx_enable,
.pfc_config = cgx_lmac_pfc_config,
.mac_get_pfc_frm_cfg = cgx_lmac_get_pfc_frm_cfg,
.mac_reset = cgx_lmac_reset,
};
static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id)


@ -35,6 +35,7 @@
#define CGXX_CMRX_INT_ENA_W1S 0x058
#define CGXX_CMRX_RX_ID_MAP 0x060
#define CGXX_CMRX_RX_STAT0 0x070
#define CGXX_CMRX_RX_LOGL_XON 0x100
#define CGXX_CMRX_RX_LMACS 0x128
#define CGXX_CMRX_RX_DMAC_CTL0 (0x1F8 + mac_ops->csr_offset)
#define CGX_DMAC_CTL0_CAM_ENABLE BIT_ULL(3)
@ -181,4 +182,5 @@ int cgx_lmac_get_pfc_frm_cfg(void *cgxd, int lmac_id, u8 *tx_pause,
u8 *rx_pause);
int verify_lmac_fc_cfg(void *cgxd, int lmac_id, u8 tx_pause, u8 rx_pause,
int pfvf_idx);
int cgx_lmac_reset(void *cgxd, int lmac_id, u8 pf_req_flr);
#endif /* CGX_H */


@ -24,6 +24,7 @@
* @cgx: parent cgx port
* @mcast_filters_count: Number of multicast filters installed
* @lmac_id: lmac port id
* @lmac_type: lmac type like SGMII/XAUI
* @cmd_pend: flag set before new command is started
* flag cleared after command response is received
* @name: lmac port name
@ -43,6 +44,7 @@ struct lmac {
struct cgx *cgx;
u8 mcast_filters_count;
u8 lmac_id;
u8 lmac_type;
bool cmd_pend;
char *name;
};
@ -125,6 +127,7 @@ struct mac_ops {
int (*mac_get_pfc_frm_cfg)(void *cgxd, int lmac_id,
u8 *tx_pause, u8 *rx_pause);
int (*mac_reset)(void *cgxd, int lmac_id, u8 pf_req_flr);
/* FEC stats */
int (*get_fec_stats)(void *cgxd, int lmac_id,


@ -37,6 +37,7 @@ static struct mac_ops rpm_mac_ops = {
.mac_tx_enable = rpm_lmac_tx_enable,
.pfc_config = rpm_lmac_pfc_config,
.mac_get_pfc_frm_cfg = rpm_lmac_get_pfc_frm_cfg,
.mac_reset = rpm_lmac_reset,
};
static struct mac_ops rpm2_mac_ops = {
@ -47,7 +48,7 @@ static struct mac_ops rpm2_mac_ops = {
.int_set_reg = RPM2_CMRX_SW_INT_ENA_W1S,
.irq_offset = 1,
.int_ena_bit = BIT_ULL(0),
.lmac_fwi = RPM_LMAC_FWI,
.lmac_fwi = RPM2_LMAC_FWI,
.non_contiguous_serdes_lane = true,
.rx_stats_cnt = 43,
.tx_stats_cnt = 34,
@ -68,6 +69,7 @@ static struct mac_ops rpm2_mac_ops = {
.mac_tx_enable = rpm_lmac_tx_enable,
.pfc_config = rpm_lmac_pfc_config,
.mac_get_pfc_frm_cfg = rpm_lmac_get_pfc_frm_cfg,
.mac_reset = rpm_lmac_reset,
};
bool is_dev_rpm2(void *rpmd)
@ -537,14 +539,15 @@ u32 rpm2_get_lmac_fifo_len(void *rpmd, int lmac_id)
int rpm_lmac_internal_loopback(void *rpmd, int lmac_id, bool enable)
{
rpm_t *rpm = rpmd;
u8 lmac_type;
struct lmac *lmac;
u64 cfg;
if (!is_lmac_valid(rpm, lmac_id))
return -ENODEV;
lmac_type = rpm->mac_ops->get_lmac_type(rpm, lmac_id);
if (lmac_type == LMAC_MODE_QSGMII || lmac_type == LMAC_MODE_SGMII) {
lmac = lmac_pdata(lmac_id, rpm);
if (lmac->lmac_type == LMAC_MODE_QSGMII ||
lmac->lmac_type == LMAC_MODE_SGMII) {
dev_err(&rpm->pdev->dev, "loopback not supported for LPC mode\n");
return 0;
}
@ -713,3 +716,24 @@ int rpm_get_fec_stats(void *rpmd, int lmac_id, struct cgx_fec_stats_rsp *rsp)
return 0;
}
int rpm_lmac_reset(void *rpmd, int lmac_id, u8 pf_req_flr)
{
u64 rx_logl_xon, cfg;
rpm_t *rpm = rpmd;
if (!is_lmac_valid(rpm, lmac_id))
return -ENODEV;
/* Resetting PFC related CSRs */
rx_logl_xon = is_dev_rpm2(rpm) ? RPM2_CMRX_RX_LOGL_XON :
RPMX_CMRX_RX_LOGL_XON;
cfg = 0xff;
rpm_write(rpm, lmac_id, rx_logl_xon, cfg);
if (pf_req_flr)
rpm_lmac_internal_loopback(rpm, lmac_id, false);
return 0;
}


@ -74,6 +74,7 @@
#define RPMX_MTI_MAC100X_CL01_PAUSE_QUANTA 0x80A8
#define RPMX_MTI_MAC100X_CL89_PAUSE_QUANTA 0x8108
#define RPM_DEFAULT_PAUSE_TIME 0x7FF
#define RPMX_CMRX_RX_LOGL_XON 0x4100
#define RPMX_MTI_MAC100X_XIF_MODE 0x8100
#define RPMX_ONESTEP_ENABLE BIT_ULL(5)
@ -94,7 +95,8 @@
/* CN10KB CSR Declaration */
#define RPM2_CMRX_SW_INT 0x1b0
#define RPM2_CMRX_SW_INT_ENA_W1S 0x1b8
#define RPM2_CMRX_SW_INT_ENA_W1S 0x1c8
#define RPM2_LMAC_FWI 0x12
#define RPM2_CMR_CHAN_MSK_OR 0x3120
#define RPM2_CMR_RX_OVR_BP_EN BIT_ULL(2)
#define RPM2_CMR_RX_OVR_BP_BP BIT_ULL(1)
@ -131,4 +133,5 @@ int rpm_lmac_get_pfc_frm_cfg(void *rpmd, int lmac_id, u8 *tx_pause,
int rpm2_get_nr_lmacs(void *rpmd);
bool is_dev_rpm2(void *rpmd);
int rpm_get_fec_stats(void *cgxd, int lmac_id, struct cgx_fec_stats_rsp *rsp);
int rpm_lmac_reset(void *rpmd, int lmac_id, u8 pf_req_flr);
#endif /* RPM_H */


@ -2629,6 +2629,7 @@ static void __rvu_flr_handler(struct rvu *rvu, u16 pcifunc)
* Since LF is detached use LF number as -1.
*/
rvu_npc_free_mcam_entries(rvu, pcifunc, -1);
rvu_mac_reset(rvu, pcifunc);
mutex_unlock(&rvu->flr_lock);
}


@ -23,6 +23,7 @@
#define PCI_DEVID_OCTEONTX2_LBK 0xA061
/* Subsystem Device ID */
#define PCI_SUBSYS_DEVID_98XX 0xB100
#define PCI_SUBSYS_DEVID_96XX 0xB200
#define PCI_SUBSYS_DEVID_CN10K_A 0xB900
#define PCI_SUBSYS_DEVID_CNF10K_B 0xBC00
@ -686,6 +687,16 @@ static inline u16 rvu_nix_chan_cpt(struct rvu *rvu, u8 chan)
return rvu->hw->cpt_chan_base + chan;
}
static inline bool is_rvu_supports_nix1(struct rvu *rvu)
{
struct pci_dev *pdev = rvu->pdev;
if (pdev->subsystem_device == PCI_SUBSYS_DEVID_98XX)
return true;
return false;
}
/* Function Prototypes
* RVU
*/
@ -884,6 +895,7 @@ int rvu_cgx_config_tx(void *cgxd, int lmac_id, bool enable);
int rvu_cgx_prio_flow_ctrl_cfg(struct rvu *rvu, u16 pcifunc, u8 tx_pause, u8 rx_pause,
u16 pfc_en);
int rvu_cgx_cfg_pause_frm(struct rvu *rvu, u16 pcifunc, u8 tx_pause, u8 rx_pause);
void rvu_mac_reset(struct rvu *rvu, u16 pcifunc);
u32 rvu_cgx_get_lmac_fifolen(struct rvu *rvu, int cgx, int lmac);
int npc_get_nixlf_mcam_index(struct npc_mcam *mcam, u16 pcifunc, int nixlf,
int type);


@ -114,7 +114,7 @@ static void rvu_map_cgx_nix_block(struct rvu *rvu, int pf,
p2x = cgx_lmac_get_p2x(cgx_id, lmac_id);
/* Firmware sets P2X_SELECT as either NIX0 or NIX1 */
pfvf->nix_blkaddr = BLKADDR_NIX0;
if (p2x == CMR_P2X_SEL_NIX1)
if (is_rvu_supports_nix1(rvu) && p2x == CMR_P2X_SEL_NIX1)
pfvf->nix_blkaddr = BLKADDR_NIX1;
}
@ -763,7 +763,7 @@ static int rvu_cgx_ptp_rx_cfg(struct rvu *rvu, u16 pcifunc, bool enable)
cgxd = rvu_cgx_pdata(cgx_id, rvu);
mac_ops = get_mac_ops(cgxd);
mac_ops->mac_enadis_ptp_config(cgxd, lmac_id, true);
mac_ops->mac_enadis_ptp_config(cgxd, lmac_id, enable);
/* If PTP is enabled then inform NPC that packets to be
* parsed by this PF will have their data shifted by 8 bytes
* and if PTP is disabled then no shift is required
@ -1250,3 +1250,21 @@ int rvu_mbox_handler_cgx_prio_flow_ctrl_cfg(struct rvu *rvu,
mac_ops->mac_get_pfc_frm_cfg(cgxd, lmac_id, &rsp->tx_pause, &rsp->rx_pause);
return err;
}
void rvu_mac_reset(struct rvu *rvu, u16 pcifunc)
{
int pf = rvu_get_pf(pcifunc);
struct mac_ops *mac_ops;
struct cgx *cgxd;
u8 cgx, lmac;
if (!is_pf_cgxmapped(rvu, pf))
return;
rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx, &lmac);
cgxd = rvu_cgx_pdata(cgx, rvu);
mac_ops = get_mac_ops(cgxd);
if (mac_ops->mac_reset(cgxd, lmac, !is_vf(pcifunc)))
dev_err(rvu->dev, "Failed to reset MAC\n");
}


@ -417,6 +417,7 @@ static int mlxsw_m_linecards_init(struct mlxsw_m *mlxsw_m)
err_kmalloc_array:
for (i--; i >= 0; i--)
kfree(mlxsw_m->line_cards[i]);
kfree(mlxsw_m->line_cards);
err_kcalloc:
kfree(mlxsw_m->ports);
return err;


@ -10794,8 +10794,8 @@ static int mlxsw_sp_lb_rif_init(struct mlxsw_sp *mlxsw_sp,
int err;
router->lb_crif = mlxsw_sp_crif_alloc(NULL);
if (IS_ERR(router->lb_crif))
return PTR_ERR(router->lb_crif);
if (!router->lb_crif)
return -ENOMEM;
/* Create a generic loopback RIF associated with the main table
* (default VRF). Any table can be used, but the main table exists


@ -144,6 +144,18 @@ static int lan743x_csr_light_reset(struct lan743x_adapter *adapter)
!(data & HW_CFG_LRST_), 100000, 10000000);
}
static int lan743x_csr_wait_for_bit_atomic(struct lan743x_adapter *adapter,
int offset, u32 bit_mask,
int target_value, int udelay_min,
int udelay_max, int count)
{
u32 data;
return readx_poll_timeout_atomic(LAN743X_CSR_READ_OP, offset, data,
target_value == !!(data & bit_mask),
udelay_max, udelay_min * count);
}
static int lan743x_csr_wait_for_bit(struct lan743x_adapter *adapter,
int offset, u32 bit_mask,
int target_value, int usleep_min,
@ -736,7 +748,7 @@ static int lan743x_dp_write(struct lan743x_adapter *adapter,
u32 dp_sel;
int i;
if (lan743x_csr_wait_for_bit(adapter, DP_SEL, DP_SEL_DPRDY_,
if (lan743x_csr_wait_for_bit_atomic(adapter, DP_SEL, DP_SEL_DPRDY_,
1, 40, 100, 100))
return -EIO;
dp_sel = lan743x_csr_read(adapter, DP_SEL);
@ -748,7 +760,8 @@ static int lan743x_dp_write(struct lan743x_adapter *adapter,
lan743x_csr_write(adapter, DP_ADDR, addr + i);
lan743x_csr_write(adapter, DP_DATA_0, buf[i]);
lan743x_csr_write(adapter, DP_CMD, DP_CMD_WRITE_);
if (lan743x_csr_wait_for_bit(adapter, DP_SEL, DP_SEL_DPRDY_,
if (lan743x_csr_wait_for_bit_atomic(adapter, DP_SEL,
DP_SEL_DPRDY_,
1, 40, 100, 100))
return -EIO;
}


@ -2925,7 +2925,6 @@ int ocelot_init(struct ocelot *ocelot)
}
}
mutex_init(&ocelot->ptp_lock);
mutex_init(&ocelot->mact_lock);
mutex_init(&ocelot->fwd_domain_lock);
mutex_init(&ocelot->tas_lock);


@ -439,8 +439,12 @@ static int ocelot_ipv6_ptp_trap_del(struct ocelot *ocelot, int port)
static int ocelot_setup_ptp_traps(struct ocelot *ocelot, int port,
bool l2, bool l4)
{
struct ocelot_port *ocelot_port = ocelot->ports[port];
int err;
ocelot_port->trap_proto &= ~(OCELOT_PROTO_PTP_L2 |
OCELOT_PROTO_PTP_L4);
if (l2)
err = ocelot_l2_ptp_trap_add(ocelot, port);
else
@ -464,6 +468,11 @@ static int ocelot_setup_ptp_traps(struct ocelot *ocelot, int port,
if (err)
return err;
if (l2)
ocelot_port->trap_proto |= OCELOT_PROTO_PTP_L2;
if (l4)
ocelot_port->trap_proto |= OCELOT_PROTO_PTP_L4;
return 0;
err_ipv6:
@ -474,10 +483,38 @@ err_ipv4:
return err;
}
static int ocelot_traps_to_ptp_rx_filter(unsigned int proto)
{
if ((proto & OCELOT_PROTO_PTP_L2) && (proto & OCELOT_PROTO_PTP_L4))
return HWTSTAMP_FILTER_PTP_V2_EVENT;
else if (proto & OCELOT_PROTO_PTP_L2)
return HWTSTAMP_FILTER_PTP_V2_L2_EVENT;
else if (proto & OCELOT_PROTO_PTP_L4)
return HWTSTAMP_FILTER_PTP_V2_L4_EVENT;
return HWTSTAMP_FILTER_NONE;
}
int ocelot_hwstamp_get(struct ocelot *ocelot, int port, struct ifreq *ifr)
{
return copy_to_user(ifr->ifr_data, &ocelot->hwtstamp_config,
sizeof(ocelot->hwtstamp_config)) ? -EFAULT : 0;
struct ocelot_port *ocelot_port = ocelot->ports[port];
struct hwtstamp_config cfg = {};
switch (ocelot_port->ptp_cmd) {
case IFH_REW_OP_TWO_STEP_PTP:
cfg.tx_type = HWTSTAMP_TX_ON;
break;
case IFH_REW_OP_ORIGIN_PTP:
cfg.tx_type = HWTSTAMP_TX_ONESTEP_SYNC;
break;
default:
cfg.tx_type = HWTSTAMP_TX_OFF;
break;
}
cfg.rx_filter = ocelot_traps_to_ptp_rx_filter(ocelot_port->trap_proto);
return copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg)) ? -EFAULT : 0;
}
EXPORT_SYMBOL(ocelot_hwstamp_get);
@ -509,8 +546,6 @@ int ocelot_hwstamp_set(struct ocelot *ocelot, int port, struct ifreq *ifr)
return -ERANGE;
}
mutex_lock(&ocelot->ptp_lock);
switch (cfg.rx_filter) {
case HWTSTAMP_FILTER_NONE:
break;
@ -531,28 +566,14 @@ int ocelot_hwstamp_set(struct ocelot *ocelot, int port, struct ifreq *ifr)
l4 = true;
break;
default:
mutex_unlock(&ocelot->ptp_lock);
return -ERANGE;
}
err = ocelot_setup_ptp_traps(ocelot, port, l2, l4);
if (err) {
mutex_unlock(&ocelot->ptp_lock);
if (err)
return err;
}
if (l2 && l4)
cfg.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
else if (l2)
cfg.rx_filter = HWTSTAMP_FILTER_PTP_V2_L2_EVENT;
else if (l4)
cfg.rx_filter = HWTSTAMP_FILTER_PTP_V2_L4_EVENT;
else
cfg.rx_filter = HWTSTAMP_FILTER_NONE;
/* Commit back the result & save it */
memcpy(&ocelot->hwtstamp_config, &cfg, sizeof(cfg));
mutex_unlock(&ocelot->ptp_lock);
cfg.rx_filter = ocelot_traps_to_ptp_rx_filter(ocelot_port->trap_proto);
return copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg)) ? -EFAULT : 0;
}
@ -824,11 +845,6 @@ int ocelot_init_timestamp(struct ocelot *ocelot,
ocelot_write(ocelot, PTP_CFG_MISC_PTP_EN, PTP_CFG_MISC);
/* There is no device reconfiguration, PTP Rx stamping is always
* enabled.
*/
ocelot->hwtstamp_config.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
return 0;
}
EXPORT_SYMBOL(ocelot_init_timestamp);


@ -53,6 +53,8 @@
#include "crypto/crypto.h"
#include "crypto/fw.h"
static int nfp_net_mc_unsync(struct net_device *netdev, const unsigned char *addr);
/**
* nfp_net_get_fw_version() - Read and parse the FW version
* @fw_ver: Output fw_version structure to read to
@ -1084,6 +1086,9 @@ static int nfp_net_netdev_close(struct net_device *netdev)
/* Step 2: Tell NFP
*/
if (nn->cap_w1 & NFP_NET_CFG_CTRL_MCAST_FILTER)
__dev_mc_unsync(netdev, nfp_net_mc_unsync);
nfp_net_clear_config_and_disable(nn);
nfp_port_configure(netdev, false);


@ -626,6 +626,9 @@ static struct devlink_port *ef100_set_devlink_port(struct efx_nic *efx, u32 idx)
u32 id;
int rc;
if (!efx->mae)
return NULL;
if (efx_mae_lookup_mport(efx, idx, &id)) {
/* This should not happen. */
if (idx == MAE_MPORT_DESC_VF_IDX_NULL)


@ -24,6 +24,7 @@
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/rcupdate.h>
#include <linux/security.h>
#include <linux/spinlock.h>
#include <net/sock.h>
@ -128,6 +129,23 @@ static void del_chan(struct pppox_sock *sock)
spin_unlock(&chan_lock);
}
static struct rtable *pptp_route_output(struct pppox_sock *po,
struct flowi4 *fl4)
{
struct sock *sk = &po->sk;
struct net *net;
net = sock_net(sk);
flowi4_init_output(fl4, sk->sk_bound_dev_if, sk->sk_mark, 0,
RT_SCOPE_UNIVERSE, IPPROTO_GRE, 0,
po->proto.pptp.dst_addr.sin_addr.s_addr,
po->proto.pptp.src_addr.sin_addr.s_addr,
0, 0, sock_net_uid(net, sk));
security_sk_classify_flow(sk, flowi4_to_flowi_common(fl4));
return ip_route_output_flow(net, fl4, sk);
}
static int pptp_xmit(struct ppp_channel *chan, struct sk_buff *skb)
{
struct sock *sk = (struct sock *) chan->private;
@ -151,11 +169,7 @@ static int pptp_xmit(struct ppp_channel *chan, struct sk_buff *skb)
if (sk_pppox(po)->sk_state & PPPOX_DEAD)
goto tx_error;
rt = ip_route_output_ports(net, &fl4, NULL,
opt->dst_addr.sin_addr.s_addr,
opt->src_addr.sin_addr.s_addr,
0, 0, IPPROTO_GRE,
RT_TOS(0), sk->sk_bound_dev_if);
rt = pptp_route_output(po, &fl4);
if (IS_ERR(rt))
goto tx_error;
@ -438,12 +452,7 @@ static int pptp_connect(struct socket *sock, struct sockaddr *uservaddr,
po->chan.private = sk;
po->chan.ops = &pptp_chan_ops;
rt = ip_route_output_ports(sock_net(sk), &fl4, sk,
opt->dst_addr.sin_addr.s_addr,
opt->src_addr.sin_addr.s_addr,
0, 0,
IPPROTO_GRE, RT_CONN_FLAGS(sk),
sk->sk_bound_dev_if);
rt = pptp_route_output(po, &fl4);
if (IS_ERR(rt)) {
error = -EHOSTUNREACH;
goto end;


@ -875,6 +875,12 @@ static const struct usb_device_id products[] = {
USB_CDC_SUBCLASS_ETHERNET,
USB_CDC_PROTO_NONE),
.driver_info = (unsigned long)&wwan_info,
}, {
/* U-blox LARA-R6 01B */
USB_DEVICE_AND_INTERFACE_INFO(UBLOX_VENDOR_ID, 0x1313, USB_CLASS_COMM,
USB_CDC_SUBCLASS_ETHERNET,
USB_CDC_PROTO_NONE),
.driver_info = (unsigned long)&wwan_info,
}, {
/* U-blox LARA-L6 */
USB_DEVICE_AND_INTERFACE_INFO(UBLOX_VENDOR_ID, 0x1343, USB_CLASS_COMM,


@ -546,6 +546,7 @@ static int wg_set_device(struct sk_buff *skb, struct genl_info *info)
u8 *private_key = nla_data(info->attrs[WGDEVICE_A_PRIVATE_KEY]);
u8 public_key[NOISE_PUBLIC_KEY_LEN];
struct wg_peer *peer, *temp;
bool send_staged_packets;
if (!crypto_memneq(wg->static_identity.static_private,
private_key, NOISE_PUBLIC_KEY_LEN))
@ -564,14 +565,17 @@ static int wg_set_device(struct sk_buff *skb, struct genl_info *info)
}
down_write(&wg->static_identity.lock);
wg_noise_set_static_identity_private_key(&wg->static_identity,
private_key);
list_for_each_entry_safe(peer, temp, &wg->peer_list,
peer_list) {
send_staged_packets = !wg->static_identity.has_identity && netif_running(wg->dev);
wg_noise_set_static_identity_private_key(&wg->static_identity, private_key);
send_staged_packets = send_staged_packets && wg->static_identity.has_identity;
wg_cookie_checker_precompute_device_keys(&wg->cookie_checker);
list_for_each_entry_safe(peer, temp, &wg->peer_list, peer_list) {
wg_noise_precompute_static_static(peer);
wg_noise_expire_current_peer_keypairs(peer);
if (send_staged_packets)
wg_packet_send_staged_packets(peer);
}
wg_cookie_checker_precompute_device_keys(&wg->cookie_checker);
up_write(&wg->static_identity.lock);
}
skip_set_private_key:


@ -28,6 +28,7 @@ int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function,
int ret;
memset(queue, 0, sizeof(*queue));
queue->last_cpu = -1;
ret = ptr_ring_init(&queue->ring, len, GFP_KERNEL);
if (ret)
return ret;


@ -117,20 +117,17 @@ static inline int wg_cpumask_choose_online(int *stored_cpu, unsigned int id)
return cpu;
}
/* This function is racy, in the sense that next is unlocked, so it could return
* the same CPU twice. A race-free version of this would be to instead store an
* atomic sequence number, do an increment-and-return, and then iterate through
* every possible CPU until we get to that index -- choose_cpu. However that's
* a bit slower, and it doesn't seem like this potential race actually
* introduces any performance loss, so we live with it.
/* This function is racy, in the sense that it's called while last_cpu is
* unlocked, so it could return the same CPU twice. Adding locking or using
* atomic sequence numbers is slower though, and the consequences of racing are
* harmless, so live with it.
*/
static inline int wg_cpumask_next_online(int *next)
static inline int wg_cpumask_next_online(int *last_cpu)
{
int cpu = *next;
while (unlikely(!cpumask_test_cpu(cpu, cpu_online_mask)))
cpu = cpumask_next(cpu, cpu_online_mask) % nr_cpumask_bits;
*next = cpumask_next(cpu, cpu_online_mask) % nr_cpumask_bits;
int cpu = cpumask_next(*last_cpu, cpu_online_mask);
if (cpu >= nr_cpu_ids)
cpu = cpumask_first(cpu_online_mask);
*last_cpu = cpu;
return cpu;
}
@ -159,7 +156,7 @@ static inline void wg_prev_queue_drop_peeked(struct prev_queue *queue)
static inline int wg_queue_enqueue_per_device_and_peer(
struct crypt_queue *device_queue, struct prev_queue *peer_queue,
struct sk_buff *skb, struct workqueue_struct *wq, int *next_cpu)
struct sk_buff *skb, struct workqueue_struct *wq)
{
int cpu;
@ -173,7 +170,7 @@ static inline int wg_queue_enqueue_per_device_and_peer(
/* Then we queue it up in the device queue, which consumes the
* packet as soon as it can.
*/
cpu = wg_cpumask_next_online(next_cpu);
cpu = wg_cpumask_next_online(&device_queue->last_cpu);
if (unlikely(ptr_ring_produce_bh(&device_queue->ring, skb)))
return -EPIPE;
queue_work_on(cpu, wq, &per_cpu_ptr(device_queue->worker, cpu)->work);


@ -524,7 +524,7 @@ static void wg_packet_consume_data(struct wg_device *wg, struct sk_buff *skb)
goto err;
ret = wg_queue_enqueue_per_device_and_peer(&wg->decrypt_queue, &peer->rx_queue, skb,
wg->packet_crypt_wq, &wg->decrypt_queue.last_cpu);
wg->packet_crypt_wq);
if (unlikely(ret == -EPIPE))
wg_queue_enqueue_per_peer_rx(skb, PACKET_STATE_DEAD);
if (likely(!ret || ret == -EPIPE)) {


@ -318,7 +318,7 @@ static void wg_packet_create_data(struct wg_peer *peer, struct sk_buff *first)
goto err;
ret = wg_queue_enqueue_per_device_and_peer(&wg->encrypt_queue, &peer->tx_queue, first,
wg->packet_crypt_wq, &wg->encrypt_queue.last_cpu);
wg->packet_crypt_wq);
if (unlikely(ret == -EPIPE))
wg_queue_enqueue_per_peer_tx(first, PACKET_STATE_DEAD);
err:


@ -234,10 +234,10 @@ void wg_timers_init(struct wg_peer *peer)
void wg_timers_stop(struct wg_peer *peer)
{
del_timer_sync(&peer->timer_retransmit_handshake);
del_timer_sync(&peer->timer_send_keepalive);
del_timer_sync(&peer->timer_new_handshake);
del_timer_sync(&peer->timer_zero_key_material);
del_timer_sync(&peer->timer_persistent_keepalive);
timer_delete_sync(&peer->timer_retransmit_handshake);
timer_delete_sync(&peer->timer_send_keepalive);
timer_delete_sync(&peer->timer_new_handshake);
timer_delete_sync(&peer->timer_zero_key_material);
timer_delete_sync(&peer->timer_persistent_keepalive);
flush_work(&peer->clear_peer_work);
}


@ -1014,7 +1014,7 @@ static int nvme_tcp_try_send_data(struct nvme_tcp_request *req)
msg.msg_flags |= MSG_MORE;
if (!sendpage_ok(page))
msg.msg_flags &= ~MSG_SPLICE_PAGES,
msg.msg_flags &= ~MSG_SPLICE_PAGES;
bvec_set_page(&bvec, page, len, offset);
iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, len);


@ -358,6 +358,9 @@ static umode_t ptp_is_attribute_visible(struct kobject *kobj,
attr == &dev_attr_max_vclocks.attr) {
if (ptp->is_virtual_clock)
mode = 0;
} else if (attr == &dev_attr_max_phase_adjustment.attr) {
if (!info->adjphase || !info->getmaxphase)
mode = 0;
}
return mode;


@ -625,7 +625,7 @@ static QETH_DEVICE_ATTR(vipa_add4, add4, 0644,
static ssize_t qeth_l3_dev_vipa_del4_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
return qeth_l3_vipa_store(dev, buf, true, count, QETH_PROT_IPV4);
return qeth_l3_vipa_store(dev, buf, false, count, QETH_PROT_IPV4);
}
static QETH_DEVICE_ATTR(vipa_del4, del4, 0200, NULL,


@ -48,13 +48,9 @@ struct sja1105_deferred_xmit_work {
/* Global tagger data */
struct sja1105_tagger_data {
/* Tagger to switch */
void (*xmit_work_fn)(struct kthread_work *work);
void (*meta_tstamp_handler)(struct dsa_switch *ds, int port, u8 ts_id,
enum sja1110_meta_tstamp dir, u64 tstamp);
/* Switch to tagger */
bool (*rxtstamp_get_state)(struct dsa_switch *ds);
void (*rxtstamp_set_state)(struct dsa_switch *ds, bool on);
};
struct sja1105_skb_cb {


@ -185,7 +185,7 @@ struct bt_iso_ucast_qos {
struct bt_iso_bcast_qos {
__u8 big;
__u8 bis;
__u8 sync_interval;
__u8 sync_factor;
__u8 packing;
__u8 framing;
struct bt_iso_io_qos in;


@ -979,6 +979,7 @@ struct mgmt_ev_auth_failed {
#define MGMT_DEV_FOUND_NOT_CONNECTABLE BIT(2)
#define MGMT_DEV_FOUND_INITIATED_CONN BIT(3)
#define MGMT_DEV_FOUND_NAME_REQUEST_FAILED BIT(4)
#define MGMT_DEV_FOUND_SCAN_RSP BIT(5)
#define MGMT_EV_DEVICE_FOUND 0x0012
struct mgmt_ev_device_found {


@ -730,6 +730,11 @@ enum macaccess_entry_type {
ENTRYTYPE_MACv6,
};
enum ocelot_proto {
OCELOT_PROTO_PTP_L2 = BIT(0),
OCELOT_PROTO_PTP_L4 = BIT(1),
};
#define OCELOT_QUIRK_PCS_PERFORMS_RATE_ADAPTATION BIT(0)
#define OCELOT_QUIRK_QSGMII_PORTS_MUST_BE_UP BIT(1)
@ -775,6 +780,8 @@ struct ocelot_port {
unsigned int ptp_skbs_in_flight;
struct sk_buff_head tx_skbs;
unsigned int trap_proto;
u16 mrp_ring_id;
u8 ptp_cmd;
@ -868,12 +875,9 @@ struct ocelot {
u8 mm_supported:1;
struct ptp_clock *ptp_clock;
struct ptp_clock_info ptp_info;
struct hwtstamp_config hwtstamp_config;
unsigned int ptp_skbs_in_flight;
/* Protects the 2-step TX timestamp ID logic */
spinlock_t ts_id_lock;
/* Protects the PTP interface state */
struct mutex ptp_lock;
/* Protects the PTP clock */
spinlock_t ptp_clock_lock;
struct ptp_pin_desc ptp_pins[OCELOT_PTP_PINS_NUM];


@ -65,7 +65,7 @@ TRACE_EVENT(fib_table_lookup,
}
dev = nhc ? nhc->nhc_dev : NULL;
strlcpy(__entry->name, dev ? dev->name : "-", IFNAMSIZ);
strscpy(__entry->name, dev ? dev->name : "-", IFNAMSIZ);
if (nhc) {
if (nhc->nhc_gw_family == AF_INET) {


@ -63,7 +63,7 @@ TRACE_EVENT(fib6_table_lookup,
}
if (res->nh && res->nh->fib_nh_dev) {
strlcpy(__entry->name, res->nh->fib_nh_dev->name, IFNAMSIZ);
strscpy(__entry->name, res->nh->fib_nh_dev->name, IFNAMSIZ);
} else {
strcpy(__entry->name, "-");
}


@ -51,7 +51,8 @@ TRACE_EVENT(net_dev_start_xmit,
__entry->network_offset = skb_network_offset(skb);
__entry->transport_offset_valid =
skb_transport_header_was_set(skb);
__entry->transport_offset = skb_transport_offset(skb);
__entry->transport_offset = skb_transport_header_was_set(skb) ?
skb_transport_offset(skb) : 0;
__entry->tx_flags = skb_shinfo(skb)->tx_flags;
__entry->gso_size = skb_shinfo(skb)->gso_size;
__entry->gso_segs = skb_shinfo(skb)->gso_segs;


@ -7891,10 +7891,8 @@ static int __register_btf_kfunc_id_set(enum btf_kfunc_hook hook,
pr_err("missing vmlinux BTF, cannot register kfuncs\n");
return -ENOENT;
}
if (kset->owner && IS_ENABLED(CONFIG_DEBUG_INFO_BTF_MODULES)) {
pr_err("missing module BTF, cannot register kfuncs\n");
return -ENOENT;
}
if (kset->owner && IS_ENABLED(CONFIG_DEBUG_INFO_BTF_MODULES))
pr_warn("missing module BTF, cannot register kfuncs\n");
return 0;
}
if (IS_ERR(btf))


@ -775,6 +775,11 @@ static void le_conn_timeout(struct work_struct *work)
hci_abort_conn(conn, HCI_ERROR_REMOTE_USER_TERM);
}
struct iso_cig_params {
struct hci_cp_le_set_cig_params cp;
struct hci_cis_params cis[0x1f];
};
struct iso_list_data {
union {
u8 cig;
@ -786,10 +791,7 @@ struct iso_list_data {
u16 sync_handle;
};
int count;
struct {
struct hci_cp_le_set_cig_params cp;
struct hci_cis_params cis[0x11];
} pdu;
struct iso_cig_params pdu;
};
static void bis_list(struct hci_conn *conn, void *data)
@ -1764,10 +1766,33 @@ static int hci_le_create_big(struct hci_conn *conn, struct bt_iso_qos *qos)
return hci_send_cmd(hdev, HCI_OP_LE_CREATE_BIG, sizeof(cp), &cp);
}
static void set_cig_params_complete(struct hci_dev *hdev, void *data, int err)
{
struct iso_cig_params *pdu = data;
bt_dev_dbg(hdev, "");
if (err)
bt_dev_err(hdev, "Unable to set CIG parameters: %d", err);
kfree(pdu);
}
static int set_cig_params_sync(struct hci_dev *hdev, void *data)
{
struct iso_cig_params *pdu = data;
u32 plen;
plen = sizeof(pdu->cp) + pdu->cp.num_cis * sizeof(pdu->cis[0]);
return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_CIG_PARAMS, plen, pdu,
HCI_CMD_TIMEOUT);
}
static bool hci_le_set_cig_params(struct hci_conn *conn, struct bt_iso_qos *qos)
{
struct hci_dev *hdev = conn->hdev;
struct iso_list_data data;
struct iso_cig_params *pdu;
memset(&data, 0, sizeof(data));
@ -1837,12 +1862,16 @@ static bool hci_le_set_cig_params(struct hci_conn *conn, struct bt_iso_qos *qos)
if (qos->ucast.cis == BT_ISO_QOS_CIS_UNSET || !data.pdu.cp.num_cis)
return false;
if (hci_send_cmd(hdev, HCI_OP_LE_SET_CIG_PARAMS,
sizeof(data.pdu.cp) +
(data.pdu.cp.num_cis * sizeof(*data.pdu.cis)),
&data.pdu) < 0)
pdu = kmemdup(&data.pdu, sizeof(*pdu), GFP_KERNEL);
if (!pdu)
return false;
if (hci_cmd_sync_queue(hdev, set_cig_params_sync, pdu,
set_cig_params_complete) < 0) {
kfree(pdu);
return false;
}
return true;
}
@ -2044,10 +2073,10 @@ static int create_big_sync(struct hci_dev *hdev, void *data)
flags |= MGMT_ADV_FLAG_SEC_2M;
/* Align intervals */
interval = qos->bcast.out.interval / 1250;
interval = (qos->bcast.out.interval / 1250) * qos->bcast.sync_factor;
if (qos->bcast.bis)
sync_interval = qos->bcast.sync_interval * 1600;
sync_interval = interval * 4;
err = hci_start_per_adv_sync(hdev, qos->bcast.bis, conn->le_per_adv_data_len,
conn->le_per_adv_data, flags, interval,


@ -3812,7 +3812,8 @@ static u8 hci_cc_le_set_cig_params(struct hci_dev *hdev, void *data,
bt_dev_dbg(hdev, "status 0x%2.2x", rp->status);
cp = hci_sent_cmd_data(hdev, HCI_OP_LE_SET_CIG_PARAMS);
if (!cp || rp->num_handles != cp->num_cis || rp->cig_id != cp->cig_id) {
if (!rp->status && (!cp || rp->num_handles != cp->num_cis ||
rp->cig_id != cp->cig_id)) {
bt_dev_err(hdev, "unexpected Set CIG Parameters response data");
status = HCI_ERROR_UNSPECIFIED;
}
@ -6316,23 +6317,18 @@ static void process_adv_report(struct hci_dev *hdev, u8 type, bdaddr_t *bdaddr,
return;
}
/* When receiving non-connectable or scannable undirected
* advertising reports, this means that the remote device is
* not connectable and then clearly indicate this in the
* device found event.
*
* When receiving a scan response, then there is no way to
/* When receiving a scan response, then there is no way to
* know if the remote device is connectable or not. However
* since scan responses are merged with a previously seen
* advertising report, the flags field from that report
* will be used.
*
* In the really unlikely case that a controller get confused
* and just sends a scan response event, then it is marked as
* not connectable as well.
* In the unlikely case that a controller just sends a scan
* response event that doesn't match the pending report, then
* it is marked as a standalone SCAN_RSP.
*/
if (type == LE_ADV_SCAN_RSP)
flags = MGMT_DEV_FOUND_NOT_CONNECTABLE;
flags = MGMT_DEV_FOUND_SCAN_RSP;
/* If there's nothing pending either store the data from this
* event or send an immediate device found event if the data
@ -6790,6 +6786,7 @@ static void hci_le_cis_estabilished_evt(struct hci_dev *hdev, void *data,
{
struct hci_evt_le_cis_established *ev = data;
struct hci_conn *conn;
struct bt_iso_qos *qos;
u16 handle = __le16_to_cpu(ev->handle);
bt_dev_dbg(hdev, "status 0x%2.2x", ev->status);
@ -6811,21 +6808,39 @@ static void hci_le_cis_estabilished_evt(struct hci_dev *hdev, void *data,
goto unlock;
}
if (conn->role == HCI_ROLE_SLAVE) {
__le32 interval;
qos = &conn->iso_qos;
memset(&interval, 0, sizeof(interval));
/* Convert ISO Interval (1.25 ms slots) to SDU Interval (us) */
qos->ucast.in.interval = le16_to_cpu(ev->interval) * 1250;
qos->ucast.out.interval = qos->ucast.in.interval;
memcpy(&interval, ev->c_latency, sizeof(ev->c_latency));
conn->iso_qos.ucast.in.interval = le32_to_cpu(interval);
memcpy(&interval, ev->p_latency, sizeof(ev->p_latency));
conn->iso_qos.ucast.out.interval = le32_to_cpu(interval);
conn->iso_qos.ucast.in.latency = le16_to_cpu(ev->interval);
conn->iso_qos.ucast.out.latency = le16_to_cpu(ev->interval);
conn->iso_qos.ucast.in.sdu = le16_to_cpu(ev->c_mtu);
conn->iso_qos.ucast.out.sdu = le16_to_cpu(ev->p_mtu);
conn->iso_qos.ucast.in.phy = ev->c_phy;
conn->iso_qos.ucast.out.phy = ev->p_phy;
switch (conn->role) {
case HCI_ROLE_SLAVE:
/* Convert Transport Latency (us) to Latency (msec) */
qos->ucast.in.latency =
DIV_ROUND_CLOSEST(get_unaligned_le24(ev->c_latency),
1000);
qos->ucast.out.latency =
DIV_ROUND_CLOSEST(get_unaligned_le24(ev->p_latency),
1000);
qos->ucast.in.sdu = le16_to_cpu(ev->c_mtu);
qos->ucast.out.sdu = le16_to_cpu(ev->p_mtu);
qos->ucast.in.phy = ev->c_phy;
qos->ucast.out.phy = ev->p_phy;
break;
case HCI_ROLE_MASTER:
/* Convert Transport Latency (us) to Latency (msec) */
qos->ucast.out.latency =
DIV_ROUND_CLOSEST(get_unaligned_le24(ev->c_latency),
1000);
qos->ucast.in.latency =
DIV_ROUND_CLOSEST(get_unaligned_le24(ev->p_latency),
1000);
qos->ucast.out.sdu = le16_to_cpu(ev->c_mtu);
qos->ucast.in.sdu = le16_to_cpu(ev->p_mtu);
qos->ucast.out.phy = ev->c_phy;
qos->ucast.in.phy = ev->p_phy;
break;
}
if (!ev->status) {


@ -4623,28 +4623,20 @@ static int hci_dev_setup_sync(struct hci_dev *hdev)
* BD_ADDR invalid before creating the HCI device or in
* its setup callback.
*/
invalid_bdaddr = test_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
invalid_bdaddr = test_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks) ||
test_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks);
if (!ret) {
if (test_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks)) {
if (!bacmp(&hdev->public_addr, BDADDR_ANY))
if (test_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks) &&
!bacmp(&hdev->public_addr, BDADDR_ANY))
hci_dev_get_bd_addr_from_property(hdev);
if (bacmp(&hdev->public_addr, BDADDR_ANY) &&
if (invalid_bdaddr && bacmp(&hdev->public_addr, BDADDR_ANY) &&
hdev->set_bdaddr) {
ret = hdev->set_bdaddr(hdev,
&hdev->public_addr);
/* If setting of the BD_ADDR from the device
* property succeeds, then treat the address
* as valid even if the invalid BD_ADDR
* quirk indicates otherwise.
*/
ret = hdev->set_bdaddr(hdev, &hdev->public_addr);
if (!ret)
invalid_bdaddr = false;
}
}
}
/* The transport driver can set these quirks before
* creating the HCI device or in its setup callback.


@ -6,7 +6,9 @@
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>
static struct class *bt_class;
static const struct class bt_class = {
.name = "bluetooth",
};
static void bt_link_release(struct device *dev)
{
@ -36,7 +38,7 @@ void hci_conn_init_sysfs(struct hci_conn *conn)
BT_DBG("conn %p", conn);
conn->dev.type = &bt_link;
conn->dev.class = bt_class;
conn->dev.class = &bt_class;
conn->dev.parent = &hdev->dev;
device_initialize(&conn->dev);
@ -104,7 +106,7 @@ void hci_init_sysfs(struct hci_dev *hdev)
struct device *dev = &hdev->dev;
dev->type = &bt_host;
dev->class = bt_class;
dev->class = &bt_class;
__module_get(THIS_MODULE);
device_initialize(dev);
@ -112,12 +114,10 @@ void hci_init_sysfs(struct hci_dev *hdev)
int __init bt_sysfs_init(void)
{
bt_class = class_create("bluetooth");
return PTR_ERR_OR_ZERO(bt_class);
return class_register(&bt_class);
}
void bt_sysfs_cleanup(void)
{
class_destroy(bt_class);
class_unregister(&bt_class);
}


@ -704,7 +704,7 @@ static struct bt_iso_qos default_qos = {
.bcast = {
.big = BT_ISO_QOS_BIG_UNSET,
.bis = BT_ISO_QOS_BIS_UNSET,
.sync_interval = 0x00,
.sync_factor = 0x01,
.packing = 0x00,
.framing = 0x00,
.in = DEFAULT_IO_QOS,
@ -1213,7 +1213,7 @@ static bool check_ucast_qos(struct bt_iso_qos *qos)
static bool check_bcast_qos(struct bt_iso_qos *qos)
{
if (qos->bcast.sync_interval > 0x07)
if (qos->bcast.sync_factor == 0x00)
return false;
if (qos->bcast.packing > 0x01)


@ -6374,9 +6374,14 @@ static inline int l2cap_le_command_rej(struct l2cap_conn *conn,
if (!chan)
goto done;
chan = l2cap_chan_hold_unless_zero(chan);
if (!chan)
goto done;
l2cap_chan_lock(chan);
l2cap_chan_del(chan, ECONNREFUSED);
l2cap_chan_unlock(chan);
l2cap_chan_put(chan);
done:
mutex_unlock(&conn->chan_lock);


@ -46,6 +46,7 @@ static const struct proto_ops l2cap_sock_ops;
static void l2cap_sock_init(struct sock *sk, struct sock *parent);
static struct sock *l2cap_sock_alloc(struct net *net, struct socket *sock,
int proto, gfp_t prio, int kern);
static void l2cap_sock_cleanup_listen(struct sock *parent);
bool l2cap_is_socket(struct socket *sock)
{
@ -1415,6 +1416,7 @@ static int l2cap_sock_release(struct socket *sock)
if (!sk)
return 0;
l2cap_sock_cleanup_listen(sk);
bt_sock_unlink(&l2cap_sk_list, sk);
err = l2cap_sock_shutdown(sock, SHUT_RDWR);


@ -166,8 +166,9 @@ void br_manage_promisc(struct net_bridge *br)
* This lets us disable promiscuous mode and write
* this config to hw.
*/
if (br->auto_cnt == 0 ||
(br->auto_cnt == 1 && br_auto_port(p)))
if ((p->dev->priv_flags & IFF_UNICAST_FLT) &&
(br->auto_cnt == 0 ||
(br->auto_cnt == 1 && br_auto_port(p))))
br_port_clear_promisc(p);
else
br_port_set_promisc(p);


@ -58,11 +58,8 @@
#define SJA1110_TX_TRAILER_LEN 4
#define SJA1110_MAX_PADDING_LEN 15
#define SJA1105_HWTS_RX_EN 0
struct sja1105_tagger_private {
struct sja1105_tagger_data data; /* Must be first */
unsigned long state;
/* Protects concurrent access to the meta state machine
* from taggers running on multiple ports on SMP systems
*/
@ -118,8 +115,8 @@ static void sja1105_meta_unpack(const struct sk_buff *skb,
* a unified unpacking command for both device series.
*/
packing(buf, &meta->tstamp, 31, 0, 4, UNPACK, 0);
packing(buf + 4, &meta->dmac_byte_4, 7, 0, 1, UNPACK, 0);
packing(buf + 5, &meta->dmac_byte_3, 7, 0, 1, UNPACK, 0);
packing(buf + 4, &meta->dmac_byte_3, 7, 0, 1, UNPACK, 0);
packing(buf + 5, &meta->dmac_byte_4, 7, 0, 1, UNPACK, 0);
packing(buf + 6, &meta->source_port, 7, 0, 1, UNPACK, 0);
packing(buf + 7, &meta->switch_id, 7, 0, 1, UNPACK, 0);
}
@ -392,10 +389,6 @@ static struct sk_buff
priv = sja1105_tagger_private(ds);
if (!test_bit(SJA1105_HWTS_RX_EN, &priv->state))
/* Do normal processing. */
return skb;
spin_lock(&priv->meta_lock);
/* Was this a link-local frame instead of the meta
* that we were expecting?
@ -431,12 +424,6 @@ static struct sk_buff
priv = sja1105_tagger_private(ds);
/* Drop the meta frame if we're not in the right state
* to process it.
*/
if (!test_bit(SJA1105_HWTS_RX_EN, &priv->state))
return NULL;
spin_lock(&priv->meta_lock);
stampable_skb = priv->stampable_skb;
@ -472,30 +459,6 @@ static struct sk_buff
return skb;
}
static bool sja1105_rxtstamp_get_state(struct dsa_switch *ds)
{
struct sja1105_tagger_private *priv = sja1105_tagger_private(ds);
return test_bit(SJA1105_HWTS_RX_EN, &priv->state);
}
static void sja1105_rxtstamp_set_state(struct dsa_switch *ds, bool on)
{
struct sja1105_tagger_private *priv = sja1105_tagger_private(ds);
if (on)
set_bit(SJA1105_HWTS_RX_EN, &priv->state);
else
clear_bit(SJA1105_HWTS_RX_EN, &priv->state);
/* Initialize the meta state machine to a known state */
if (!priv->stampable_skb)
return;
kfree_skb(priv->stampable_skb);
priv->stampable_skb = NULL;
}
static bool sja1105_skb_has_tag_8021q(const struct sk_buff *skb)
{
u16 tpid = ntohs(eth_hdr(skb)->h_proto);
@ -545,33 +508,53 @@ static struct sk_buff *sja1105_rcv(struct sk_buff *skb,
is_link_local = sja1105_is_link_local(skb);
is_meta = sja1105_is_meta_frame(skb);
if (sja1105_skb_has_tag_8021q(skb)) {
/* Normal traffic path. */
sja1105_vlan_rcv(skb, &source_port, &switch_id, &vbid, &vid);
} else if (is_link_local) {
if (is_link_local) {
/* Management traffic path. Switch embeds the switch ID and
* port ID into bytes of the destination MAC, courtesy of
* the incl_srcpt options.
*/
source_port = hdr->h_dest[3];
switch_id = hdr->h_dest[4];
/* Clear the DMAC bytes that were mangled by the switch */
hdr->h_dest[3] = 0;
hdr->h_dest[4] = 0;
} else if (is_meta) {
sja1105_meta_unpack(skb, &meta);
source_port = meta.source_port;
switch_id = meta.switch_id;
} else {
}
/* Normal data plane traffic and link-local frames are tagged with
* a tag_8021q VLAN which we have to strip
*/
if (sja1105_skb_has_tag_8021q(skb)) {
int tmp_source_port = -1, tmp_switch_id = -1;
sja1105_vlan_rcv(skb, &tmp_source_port, &tmp_switch_id, &vbid,
&vid);
/* Preserve the source information from the INCL_SRCPT option,
* if available. This allows us to not overwrite a valid source
* port and switch ID with zeroes when receiving link-local
* frames from a VLAN-unaware bridged port (non-zero vbid) or a
* VLAN-aware bridged port (non-zero vid). Furthermore, the
* tag_8021q source port information is only of trust when the
* vbid is 0 (precise port). Otherwise, tmp_source_port and
* tmp_switch_id will be zeroes.
*/
if (vbid == 0 && source_port == -1)
source_port = tmp_source_port;
if (vbid == 0 && switch_id == -1)
switch_id = tmp_switch_id;
} else if (source_port == -1 && switch_id == -1) {
/* Packets with no source information have no chance of
* getting accepted, drop them straight away.
*/
return NULL;
}
if (vbid >= 1)
skb->dev = dsa_tag_8021q_find_port_by_vbid(netdev, vbid);
else if (source_port == -1 || switch_id == -1)
skb->dev = dsa_find_designated_bridge_port_by_vid(netdev, vid);
else
if (source_port != -1 && switch_id != -1)
skb->dev = dsa_master_find_slave(netdev, switch_id, source_port);
else if (vbid >= 1)
skb->dev = dsa_tag_8021q_find_port_by_vbid(netdev, vbid);
else
skb->dev = dsa_find_designated_bridge_port_by_vid(netdev, vid);
if (!skb->dev) {
netdev_warn(netdev, "Couldn't decode source port\n");
return NULL;
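With the reordered lookups above, precise source information wins over the vbid, which in turn wins over the VID-based designated-port fallback. A compact sketch of that decision order (plain C; the enum and resolve() helper are illustrative, not the DSA API):

    #include <stdio.h>

    enum lookup { BY_PORT, BY_VBID, BY_VID };

    /* Mirrors the order after the fix: trust precise (switch_id, source_port)
     * when both are known, then fall back to the vbid, then to the vid. */
    static enum lookup resolve(int source_port, int switch_id, int vbid)
    {
        if (source_port != -1 && switch_id != -1)
            return BY_PORT;
        if (vbid >= 1)
            return BY_VBID;
        return BY_VID;
    }

    int main(void)
    {
        /* Link-local frame: the DMAC supplied port/switch, vbid also set. */
        printf("%d\n", resolve(2, 0, 3) == BY_PORT);    /* 1 */
        /* VLAN-unaware bridged traffic: only the vbid is known. */
        printf("%d\n", resolve(-1, -1, 3) == BY_VBID);  /* 1 */
        return 0;
    }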
@ -762,7 +745,6 @@ static void sja1105_disconnect(struct dsa_switch *ds)
static int sja1105_connect(struct dsa_switch *ds)
{
struct sja1105_tagger_data *tagger_data;
struct sja1105_tagger_private *priv;
struct kthread_worker *xmit_worker;
int err;
@ -782,10 +764,6 @@ static int sja1105_connect(struct dsa_switch *ds)
}
priv->xmit_worker = xmit_worker;
/* Export functions for switch driver use */
tagger_data = &priv->data;
tagger_data->rxtstamp_get_state = sja1105_rxtstamp_get_state;
tagger_data->rxtstamp_set_state = sja1105_rxtstamp_set_state;
ds->tagger_data = priv;
return 0;

@ -3590,8 +3590,11 @@ static int tcp_ack_update_window(struct sock *sk, const struct sk_buff *skb, u32
static bool __tcp_oow_rate_limited(struct net *net, int mib_idx,
u32 *last_oow_ack_time)
{
if (*last_oow_ack_time) {
s32 elapsed = (s32)(tcp_jiffies32 - *last_oow_ack_time);
/* Paired with the WRITE_ONCE() in this function. */
u32 val = READ_ONCE(*last_oow_ack_time);
if (val) {
s32 elapsed = (s32)(tcp_jiffies32 - val);
if (0 <= elapsed &&
elapsed < READ_ONCE(net->ipv4.sysctl_tcp_invalid_ratelimit)) {
@ -3600,7 +3603,10 @@ static bool __tcp_oow_rate_limited(struct net *net, int mib_idx,
}
}
*last_oow_ack_time = tcp_jiffies32;
/* Paired with the prior READ_ONCE() and with itself,
* as we might be lockless.
*/
WRITE_ONCE(*last_oow_ack_time, tcp_jiffies32);
return false; /* not rate-limited: go ahead, send dupack now! */
}
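The READ_ONCE()/WRITE_ONCE() annotations make the lockless access to *last_oow_ack_time a single load and a single store, so the compiler can neither tear nor refetch the value between the check and the decision. A rough userspace analogue (C11 relaxed atomics standing in for the kernel macros; names and values are illustrative):

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    static _Atomic uint32_t last_ack_time;   /* shared, possibly updated locklessly */

    static int rate_limited(uint32_t now, uint32_t min_delta)
    {
        /* Read the shared timestamp once and decide on the local copy. */
        uint32_t val = atomic_load_explicit(&last_ack_time, memory_order_relaxed);

        if (val && (int32_t)(now - val) >= 0 && now - val < min_delta)
            return 1;   /* too soon: rate-limit this ACK */

        /* Publish the new timestamp with one store. */
        atomic_store_explicit(&last_ack_time, now, memory_order_relaxed);
        return 0;
    }

    int main(void)
    {
        printf("%d %d\n", rate_limited(100, 50), rate_limited(120, 50));  /* 0 1 */
        return 0;
    }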

@ -2909,10 +2909,10 @@ static void mptcp_check_listen_stop(struct sock *sk)
return;
lock_sock_nested(ssk, SINGLE_DEPTH_NESTING);
tcp_set_state(ssk, TCP_CLOSE);
mptcp_subflow_queue_clean(sk, ssk);
inet_csk_listen_stop(ssk);
mptcp_event_pm_listener(ssk, MPTCP_EVENT_LISTENER_CLOSED);
tcp_set_state(ssk, TCP_CLOSE);
release_sock(ssk);
}
@ -3703,6 +3703,11 @@ static int mptcp_listen(struct socket *sock, int backlog)
pr_debug("msk=%p", msk);
lock_sock(sk);
err = -EINVAL;
if (sock->state != SS_UNCONNECTED || sock->type != SOCK_STREAM)
goto unlock;
ssock = __mptcp_nmpc_socket(msk);
if (IS_ERR(ssock)) {
err = PTR_ERR(ssock);

@ -21,6 +21,7 @@
#include <linux/tc_act/tc_ipt.h>
#include <net/tc_act/tc_ipt.h>
#include <net/tc_wrapper.h>
#include <net/ip.h>
#include <linux/netfilter_ipv4/ip_tables.h>
@ -48,7 +49,7 @@ static int ipt_init_target(struct net *net, struct xt_entry_target *t,
par.entryinfo = &e;
par.target = target;
par.targinfo = t->data;
par.hook_mask = hook;
par.hook_mask = 1 << hook;
par.family = NFPROTO_IPV4;
ret = xt_check_target(&par, t->u.target_size - sizeof(*t), 0, false);
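xt_check_target() validates the hook against the target's allowed-hooks bitmask, so par.hook_mask has to be a bit (1 << hook), not the raw hook number. A tiny standalone illustration (plain C; the enum mirrors the NF_INET_* numbering):

    #include <stdio.h>

    enum { PRE_ROUTING = 0, LOCAL_IN, FORWARD, LOCAL_OUT, POST_ROUTING };

    int main(void)
    {
        unsigned int hook = POST_ROUTING;
        unsigned int hook_mask = 1u << hook;   /* what the validation expects */

        /* A target restricted to PRE/POST routing would be checked like this: */
        unsigned int supported = (1u << PRE_ROUTING) | (1u << POST_ROUTING);

        printf("mask=0x%x allowed=%d\n", hook_mask, !!(hook_mask & supported));
        return 0;
    }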
@ -85,7 +86,8 @@ static void tcf_ipt_release(struct tc_action *a)
static const struct nla_policy ipt_policy[TCA_IPT_MAX + 1] = {
[TCA_IPT_TABLE] = { .type = NLA_STRING, .len = IFNAMSIZ },
[TCA_IPT_HOOK] = { .type = NLA_U32 },
[TCA_IPT_HOOK] = NLA_POLICY_RANGE(NLA_U32, NF_INET_PRE_ROUTING,
NF_INET_NUMHOOKS),
[TCA_IPT_INDEX] = { .type = NLA_U32 },
[TCA_IPT_TARG] = { .len = sizeof(struct xt_entry_target) },
};
@ -158,15 +160,27 @@ static int __tcf_ipt_init(struct net *net, unsigned int id, struct nlattr *nla,
return -EEXIST;
}
}
hook = nla_get_u32(tb[TCA_IPT_HOOK]);
err = -ENOMEM;
tname = kmalloc(IFNAMSIZ, GFP_KERNEL);
err = -EINVAL;
hook = nla_get_u32(tb[TCA_IPT_HOOK]);
switch (hook) {
case NF_INET_PRE_ROUTING:
break;
case NF_INET_POST_ROUTING:
break;
default:
goto err1;
}
if (tb[TCA_IPT_TABLE]) {
/* mangle only for now */
if (nla_strcmp(tb[TCA_IPT_TABLE], "mangle"))
goto err1;
}
tname = kstrdup("mangle", GFP_KERNEL);
if (unlikely(!tname))
goto err1;
if (tb[TCA_IPT_TABLE] == NULL ||
nla_strscpy(tname, tb[TCA_IPT_TABLE], IFNAMSIZ) >= IFNAMSIZ)
strcpy(tname, "mangle");
t = kmemdup(td, td->u.target_size, GFP_KERNEL);
if (unlikely(!t))
@ -217,10 +231,31 @@ static int tcf_xt_init(struct net *net, struct nlattr *nla,
a, &act_xt_ops, tp, flags);
}
static bool tcf_ipt_act_check(struct sk_buff *skb)
{
const struct iphdr *iph;
unsigned int nhoff, len;
if (!pskb_may_pull(skb, sizeof(struct iphdr)))
return false;
nhoff = skb_network_offset(skb);
iph = ip_hdr(skb);
if (iph->ihl < 5 || iph->version != 4)
return false;
len = skb_ip_totlen(skb);
if (skb->len < nhoff + len || len < (iph->ihl * 4u))
return false;
return pskb_may_pull(skb, iph->ihl * 4u);
}
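tcf_ipt_act_check() drops packets with a malformed IPv4 header before they reach the xt target. The same sanity checks restated over a flat buffer instead of an sk_buff (plain C; ipv4_sane() is an illustrative helper, not kernel code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define IPV4_MIN_HDR 20u   /* sizeof(struct iphdr) */

    /* Header must be present, version 4, ihl >= 5, and the total length must
     * cover at least the header without exceeding the data actually held. */
    static bool ipv4_sane(const uint8_t *buf, unsigned int len)
    {
        unsigned int version, ihl, tot_len;

        if (len < IPV4_MIN_HDR)
            return false;
        version = buf[0] >> 4;
        ihl     = buf[0] & 0x0f;
        tot_len = ((unsigned int)buf[2] << 8) | buf[3];   /* skb_ip_totlen() */
        if (version != 4 || ihl < 5)
            return false;
        if (tot_len < ihl * 4u || len < tot_len)
            return false;
        return true;
    }

    int main(void)
    {
        uint8_t pkt[20] = { 0x45, 0, 0, 20 };   /* v4, ihl=5, tot_len=20 */

        printf("%d %d\n", ipv4_sane(pkt, sizeof(pkt)), ipv4_sane(pkt, 10));
        return 0;
    }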
TC_INDIRECT_SCOPE int tcf_ipt_act(struct sk_buff *skb,
const struct tc_action *a,
struct tcf_result *res)
{
char saved_cb[sizeof_field(struct sk_buff, cb)];
int ret = 0, result = 0;
struct tcf_ipt *ipt = to_ipt(a);
struct xt_action_param par;
@ -231,9 +266,24 @@ TC_INDIRECT_SCOPE int tcf_ipt_act(struct sk_buff *skb,
.pf = NFPROTO_IPV4,
};
if (skb_protocol(skb, false) != htons(ETH_P_IP))
return TC_ACT_UNSPEC;
if (skb_unclone(skb, GFP_ATOMIC))
return TC_ACT_UNSPEC;
if (!tcf_ipt_act_check(skb))
return TC_ACT_UNSPEC;
if (state.hook == NF_INET_POST_ROUTING) {
if (!skb_dst(skb))
return TC_ACT_UNSPEC;
state.out = skb->dev;
}
memcpy(saved_cb, skb->cb, sizeof(saved_cb));
spin_lock(&ipt->tcf_lock);
tcf_lastuse_update(&ipt->tcf_tm);
@ -246,6 +296,9 @@ TC_INDIRECT_SCOPE int tcf_ipt_act(struct sk_buff *skb,
par.state = &state;
par.target = ipt->tcfi_t->u.kernel.target;
par.targinfo = ipt->tcfi_t->data;
memset(IPCB(skb), 0, sizeof(struct inet_skb_parm));
ret = par.target->target(skb, &par);
switch (ret) {
@ -266,6 +319,9 @@ TC_INDIRECT_SCOPE int tcf_ipt_act(struct sk_buff *skb,
break;
}
spin_unlock(&ipt->tcf_lock);
memcpy(skb->cb, saved_cb, sizeof(skb->cb));
return result;
}

@ -29,6 +29,7 @@ static struct tc_action_ops act_pedit_ops;
static const struct nla_policy pedit_policy[TCA_PEDIT_MAX + 1] = {
[TCA_PEDIT_PARMS] = { .len = sizeof(struct tc_pedit) },
[TCA_PEDIT_PARMS_EX] = { .len = sizeof(struct tc_pedit) },
[TCA_PEDIT_KEYS_EX] = { .type = NLA_NESTED },
};

@ -364,9 +364,9 @@ static void sctp_auto_asconf_init(struct sctp_sock *sp)
struct net *net = sock_net(&sp->inet.sk);
if (net->sctp.default_auto_asconf) {
spin_lock(&net->sctp.addr_wq_lock);
spin_lock_bh(&net->sctp.addr_wq_lock);
list_add_tail(&sp->auto_asconf_list, &net->sctp.auto_asconf_splist);
spin_unlock(&net->sctp.addr_wq_lock);
spin_unlock_bh(&net->sctp.addr_wq_lock);
sp->do_auto_asconf = 1;
}
}

@ -886,6 +886,7 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
struct sock *sk = sock->sk;
struct xdp_sock *xs = xdp_sk(sk);
struct net_device *dev;
int bound_dev_if;
u32 flags, qid;
int err = 0;
@ -899,6 +900,10 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
XDP_USE_NEED_WAKEUP))
return -EINVAL;
bound_dev_if = READ_ONCE(sk->sk_bound_dev_if);
if (bound_dev_if && bound_dev_if != sxdp->sxdp_ifindex)
return -EINVAL;
rtnl_lock();
mutex_lock(&xs->mutex);
if (xs->state != XSK_READY) {
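The added lines make AF_XDP bind honor SO_BINDTODEVICE: if the socket is already bound to a device, the ifindex in the bind address must match it. A minimal restatement of the check (plain C; the values are illustrative):

    #include <errno.h>
    #include <stdio.h>

    /* A non-zero bound_dev_if (set via SO_BINDTODEVICE) must match the
     * ifindex passed in the AF_XDP bind address. */
    static int check_bound_dev(int bound_dev_if, int sxdp_ifindex)
    {
        if (bound_dev_if && bound_dev_if != sxdp_ifindex)
            return -EINVAL;
        return 0;
    }

    int main(void)
    {
        printf("%d %d %d\n",
               check_bound_dev(0, 3),    /* not bound: ok */
               check_bound_dev(3, 3),    /* same device: ok */
               check_bound_dev(2, 3));   /* mismatch: -EINVAL */
        return 0;
    }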

@ -108,12 +108,13 @@ function pgset() {
fi
}
if [[ -z "$APPEND" ]]; then
if [[ $EUID -eq 0 ]]; then
function trap_exit()
{
# Cleanup pktgen setup on exit if thats not "append mode"
if [[ -z "$APPEND" ]] && [[ $EUID -eq 0 ]]; then
trap 'pg_ctrl "reset"' EXIT
fi
fi
}
## -- General shell tricks --

@ -33,6 +33,10 @@ root_check_run_with_sudo "$@"
# Parameter parsing via include
source ${basedir}/parameters.sh
# Trap EXIT first
trap_exit
# Using invalid DST_MAC will cause the packets to get dropped in
# ip_rcv() which is part of the test
if [ -z "$DEST_IP" ]; then

@ -14,6 +14,10 @@ root_check_run_with_sudo "$@"
# Parameter parsing via include
source ${basedir}/parameters.sh
# Trap EXIT first
trap_exit
if [ -z "$DEST_IP" ]; then
[ -z "$IP6" ] && DEST_IP="198.18.0.42" || DEST_IP="FD00::1"
fi

@ -13,6 +13,10 @@ root_check_run_with_sudo "$@"
# - go look in parameters.sh to see which setting are avail
# - required param is the interface "-i" stored in $DEV
source ${basedir}/parameters.sh
# Trap EXIT first
trap_exit
#
# Set some default params, if they didn't get set
if [ -z "$DEST_IP" ]; then

@ -14,6 +14,9 @@ root_check_run_with_sudo "$@"
# Required param: -i dev in $DEV
source ${basedir}/parameters.sh
# Trap EXIT first
trap_exit
[ -z "$COUNT" ] && COUNT="100000" # Zero means indefinitely
# Base Config

@ -25,6 +25,10 @@ root_check_run_with_sudo "$@"
# Parameter parsing via include
source ${basedir}/parameters.sh
# Trap EXIT first
trap_exit
# Set some default params, if they didn't get set
if [ -z "$DEST_IP" ]; then
[ -z "$IP6" ] && DEST_IP="198.18.0.42" || DEST_IP="FD00::1"

@ -12,6 +12,10 @@ root_check_run_with_sudo "$@"
# Parameter parsing via include
source ${basedir}/parameters.sh
# Trap EXIT first
trap_exit
# Set some default params, if they didn't get set
if [ -z "$DEST_IP" ]; then
[ -z "$IP6" ] && DEST_IP="198.18.0.42" || DEST_IP="FD00::1"

@ -16,6 +16,10 @@ root_check_run_with_sudo "$@"
# Parameter parsing via include
source ${basedir}/parameters.sh
# Trap EXIT first
trap_exit
# Set some default params, if they didn't get set
if [ -z "$DEST_IP" ]; then
[ -z "$IP6" ] && DEST_IP="198.18.0.42" || DEST_IP="FD00::1"

@ -14,6 +14,9 @@ root_check_run_with_sudo "$@"
# Required param: -i dev in $DEV
source ${basedir}/parameters.sh
# Trap EXIT first
trap_exit
# Base Config
[ -z "$COUNT" ] && COUNT="20000000" # Zero means indefinitely
[ -z "$CLONE_SKB" ] && CLONE_SKB="0"

@ -50,3 +50,4 @@ CONFIG_CRYPTO_SM4_GENERIC=y
CONFIG_AMT=m
CONFIG_VXLAN=m
CONFIG_IP_SCTP=m
CONFIG_NETFILTER_XT_MATCH_POLICY=m

@ -6,6 +6,7 @@ CONFIG_INET_DIAG=m
CONFIG_INET_MPTCP_DIAG=m
CONFIG_VETH=y
CONFIG_NET_SCH_NETEM=m
CONFIG_SYN_COOKIES=y
CONFIG_NETFILTER=y
CONFIG_NETFILTER_ADVANCED=y
CONFIG_NETFILTER_NETLINK=m

@ -718,6 +718,7 @@ table inet mangle {
EOF
if [ $? -ne 0 ]; then
echo "SKIP: $msg, could not load nft ruleset"
mptcp_lib_fail_if_expected_feature "nft rules"
return
fi
@ -733,6 +734,7 @@ EOF
if [ $? -ne 0 ]; then
ip netns exec "$listener_ns" nft flush ruleset
echo "SKIP: $msg, ip $r6flag rule failed"
mptcp_lib_fail_if_expected_feature "ip rule"
return
fi
@ -741,6 +743,7 @@ EOF
ip netns exec "$listener_ns" nft flush ruleset
ip -net "$listener_ns" $r6flag rule del fwmark 1 lookup 100
echo "SKIP: $msg, ip route add local $local_addr failed"
mptcp_lib_fail_if_expected_feature "ip route"
return
fi

@ -12,6 +12,8 @@ ksft_skip=4
timeout_poll=30
timeout_test=$((timeout_poll * 2 + 1))
mptcp_connect=""
iptables="iptables"
ip6tables="ip6tables"
sec=$(date +%s)
rndh=$(printf %x $sec)-$(mktemp -u XXXXXX)
@ -25,7 +27,7 @@ add_mark_rules()
local m=$2
local t
for t in iptables ip6tables; do
for t in ${iptables} ${ip6tables}; do
# just to debug: check we have multiple subflows connection requests
ip netns exec $ns $t -A OUTPUT -p tcp --syn -m mark --mark $m -j ACCEPT
@ -95,14 +97,14 @@ if [ $? -ne 0 ];then
exit $ksft_skip
fi
iptables -V > /dev/null 2>&1
if [ $? -ne 0 ];then
# Use the legacy version if available to support old kernel versions
if iptables-legacy -V &> /dev/null; then
iptables="iptables-legacy"
ip6tables="ip6tables-legacy"
elif ! iptables -V &> /dev/null; then
echo "SKIP: Could not run all tests without iptables tool"
exit $ksft_skip
fi
ip6tables -V > /dev/null 2>&1
if [ $? -ne 0 ];then
elif ! ip6tables -V &> /dev/null; then
echo "SKIP: Could not run all tests without ip6tables tool"
exit $ksft_skip
fi
@ -112,10 +114,10 @@ check_mark()
local ns=$1
local af=$2
local tables=iptables
local tables=${iptables}
if [ $af -eq 6 ];then
tables=ip6tables
tables=${ip6tables}
fi
local counters values
@ -126,6 +128,7 @@ check_mark()
for v in $values; do
if [ $v -ne 0 ]; then
echo "FAIL: got $tables $values in ns $ns , not 0 - not all expected packets marked" 1>&2
ret=1
return 1
fi
done
@ -225,11 +228,11 @@ do_transfer()
fi
if [ $local_addr = "::" ];then
check_mark $listener_ns 6
check_mark $connector_ns 6
check_mark $listener_ns 6 || retc=1
check_mark $connector_ns 6 || retc=1
else
check_mark $listener_ns 4
check_mark $connector_ns 4
check_mark $listener_ns 4 || retc=1
check_mark $connector_ns 4 || retc=1
fi
check_transfer $cin $sout "file received by server"

@ -425,7 +425,7 @@ int dsf(int fd, int pm_family, int argc, char *argv[])
}
/* token */
token = atoi(params[4]);
token = strtoul(params[4], NULL, 10);
rta = (void *)(data + off);
rta->rta_type = MPTCP_PM_ATTR_TOKEN;
rta->rta_len = RTA_LENGTH(4);
@ -551,7 +551,7 @@ int csf(int fd, int pm_family, int argc, char *argv[])
}
/* token */
token = atoi(params[4]);
token = strtoul(params[4], NULL, 10);
rta = (void *)(data + off);
rta->rta_type = MPTCP_PM_ATTR_TOKEN;
rta->rta_len = RTA_LENGTH(4);
@ -598,7 +598,7 @@ int remove_addr(int fd, int pm_family, int argc, char *argv[])
if (++arg >= argc)
error(1, 0, " missing token value");
token = atoi(argv[arg]);
token = strtoul(argv[arg], NULL, 10);
rta = (void *)(data + off);
rta->rta_type = MPTCP_PM_ATTR_TOKEN;
rta->rta_len = RTA_LENGTH(4);
@ -710,7 +710,7 @@ int announce_addr(int fd, int pm_family, int argc, char *argv[])
if (++arg >= argc)
error(1, 0, " missing token value");
token = atoi(argv[arg]);
token = strtoul(argv[arg], NULL, 10);
} else
error(1, 0, "unknown keyword %s", argv[arg]);
}
@ -1347,7 +1347,7 @@ int set_flags(int fd, int pm_family, int argc, char *argv[])
error(1, 0, " missing token value");
/* token */
token = atoi(argv[arg]);
token = strtoul(argv[arg], NULL, 10);
} else if (!strcmp(argv[arg], "flags")) {
char *tok, *str;
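The atoi() to strtoul() conversion matters because MPTCP tokens are unsigned 32-bit values and atoi() has to squeeze them through int first, which is undefined for anything above INT_MAX (in practice it saturates or truncates). A short standalone illustration (plain C):

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Half of the u32 token space lies above INT_MAX. */
        const char *token = "3735928559";                /* 0xdeadbeef */
        uint32_t parsed = strtoul(token, NULL, 10);      /* covers the full range */

        printf("strtoul: %u (INT_MAX is %d)\n", parsed, INT_MAX);
        return 0;
    }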

@ -423,6 +423,7 @@ test_remove()
stdbuf -o0 -e0 printf "[OK]\n"
else
stdbuf -o0 -e0 printf "[FAIL]\n"
exit 1
fi
# RM_ADDR using an invalid addr id should result in no action
@ -437,6 +438,7 @@ test_remove()
stdbuf -o0 -e0 printf "[OK]\n"
else
stdbuf -o0 -e0 printf "[FAIL]\n"
exit 1
fi
# RM_ADDR from the client to server machine
@ -848,7 +850,7 @@ test_prio()
local count
# Send MP_PRIO signal from client to server machine
ip netns exec "$ns2" ./pm_nl_ctl set 10.0.1.2 port "$client4_port" flags backup token "$client4_token" rip 10.0.1.1 rport "$server4_port"
ip netns exec "$ns2" ./pm_nl_ctl set 10.0.1.2 port "$client4_port" flags backup token "$client4_token" rip 10.0.1.1 rport "$app4_port"
sleep 0.5
# Check TX

@ -514,10 +514,32 @@ n2 bash -c 'printf 0 > /proc/sys/net/ipv4/conf/all/rp_filter'
n1 ping -W 1 -c 1 192.168.241.2
[[ $(n2 wg show wg0 endpoints) == "$pub1 10.0.0.3:1" ]]
ip1 link del veth1
ip1 link del veth3
ip1 link del wg0
ip2 link del wg0
ip1 link del dev veth3
ip1 link del dev wg0
ip2 link del dev wg0
# Make sure persistent keep alives are sent when an adapter comes up
ip1 link add dev wg0 type wireguard
n1 wg set wg0 private-key <(echo "$key1") peer "$pub2" endpoint 10.0.0.1:1 persistent-keepalive 1
read _ _ tx_bytes < <(n1 wg show wg0 transfer)
[[ $tx_bytes -eq 0 ]]
ip1 link set dev wg0 up
read _ _ tx_bytes < <(n1 wg show wg0 transfer)
[[ $tx_bytes -gt 0 ]]
ip1 link del dev wg0
# This should also happen even if the private key is set later
ip1 link add dev wg0 type wireguard
n1 wg set wg0 peer "$pub2" endpoint 10.0.0.1:1 persistent-keepalive 1
read _ _ tx_bytes < <(n1 wg show wg0 transfer)
[[ $tx_bytes -eq 0 ]]
ip1 link set dev wg0 up
read _ _ tx_bytes < <(n1 wg show wg0 transfer)
[[ $tx_bytes -eq 0 ]]
n1 wg set wg0 private-key <(echo "$key1")
read _ _ tx_bytes < <(n1 wg show wg0 transfer)
[[ $tx_bytes -gt 0 ]]
ip1 link del dev veth1
ip1 link del dev wg0
# We test that Netlink/IPC is working properly by doing things that usually cause split responses
ip0 link add dev wg0 type wireguard