wireless-drivers-next patches for v5.13

Merge tag 'wireless-drivers-next-2021-04-18' of git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next

Kalle Valo says:

====================
wireless-drivers-next patches for v5.13

Second set of patches for v5.13. A lot of iwlwifi and mt76 patches this
time, and also smaller features and fixes all over.

mt76
* mt7915/mt7615 decapsulation offload support
* threaded NAPI support
* new device IDs
* mt7921 device reset support
* rx timestamp support

iwlwifi
* passive scan support for 6GHz
* new hardware support

wilc1000
* CRC support for SPI bus
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
This commit is contained in:
Commit 56aa7b21a5
@ -1,24 +0,0 @@
|
|||
Common IEEE 802.11 properties
|
||||
|
||||
This provides documentation of common properties that are valid for all wireless
|
||||
devices.
|
||||
|
||||
Optional properties:
|
||||
- ieee80211-freq-limit : list of supported frequency ranges in KHz. This can be
|
||||
used for devices that in a given config support less channels than
|
||||
normally. It may happen chipset supports a wide wireless band but it is
|
||||
limited to some part of it due to used antennas or power amplifier.
|
||||
An example case for this can be tri-band wireless router with two
|
||||
identical chipsets used for two different 5 GHz subbands. Using them
|
||||
incorrectly could not work or decrease performance noticeably.
|
||||
|
||||
Example:
|
||||
|
||||
pcie@0,0 {
|
||||
reg = <0x0000 0 0 0 0>;
|
||||
wifi@0,0 {
|
||||
reg = <0x0000 0 0 0 0>;
|
||||
ieee80211-freq-limit = <2402000 2482000>,
|
||||
<5170000 5250000>;
|
||||
};
|
||||
};
|
|
@ -0,0 +1,45 @@
|
|||
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
|
||||
# Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
|
||||
|
||||
%YAML 1.2
|
||||
---
|
||||
$id: http://devicetree.org/schemas/net/wireless/ieee80211.yaml#
|
||||
$schema: http://devicetree.org/meta-schemas/core.yaml#
|
||||
|
||||
title: Common IEEE 802.11 Binding
|
||||
|
||||
maintainers:
|
||||
- Lorenzo Bianconi <lorenzo@kernel.org>
|
||||
|
||||
description: |
|
||||
This provides documentation of common properties that are valid for
|
||||
all wireless devices
|
||||
|
||||
properties:
|
||||
ieee80211-freq-limit:
|
||||
$ref: /schemas/types.yaml#/definitions/uint32-matrix
|
||||
items:
|
||||
minItems: 2
|
||||
maxItems: 2
|
||||
description:
|
||||
List of supported frequency ranges in KHz. This can be used for devices
|
||||
that in a given config support less channels than normally. It may happen
|
||||
chipset supports a wide wireless band but it is limited to some part of
|
||||
it due to used antennas or power amplifier. An example case for this
|
||||
can be tri-band wireless router with two identical chipsets used for two
|
||||
different 5 GHz subbands. Using them incorrectly could not work or
|
||||
decrease performance noticeably
|
||||
|
||||
additionalProperties: true
|
||||
|
||||
examples:
|
||||
- |
|
||||
pcie0 {
|
||||
#address-cells = <3>;
|
||||
#size-cells = <2>;
|
||||
wifi@0,0 {
|
||||
reg = <0x0000 0 0 0 0>;
|
||||
ieee80211-freq-limit = <2402000 2482000>,
|
||||
<5170000 5250000>;
|
||||
};
|
||||
};
|
|
@ -1,78 +0,0 @@
-* MediaTek mt76xx devices
-
-This node provides properties for configuring the MediaTek mt76xx wireless
-device. The node is expected to be specified as a child node of the PCI
-controller to which the wireless chip is connected.
-
-Alternatively, it can specify the wireless part of the MT7628/MT7688 or
-MT7622 SoC. For SoC, use the following compatible strings:
-
-compatible:
-- "mediatek,mt7628-wmac" for MT7628/MT7688
-- "mediatek,mt7622-wmac" for MT7622
-
-properties:
-- reg: Address and length of the register set for the device.
-- interrupts: Main device interrupt
-
-MT7622 specific properties:
-- power-domains: phandle to the power domain that the WMAC is part of
-- mediatek,infracfg: phandle to the infrastructure bus fabric syscon node
-
-Optional properties:
-
-- ieee80211-freq-limit: See ieee80211.txt
-- mediatek,mtd-eeprom: Specify a MTD partition + offset containing EEPROM data
-- big-endian: if the radio eeprom partition is written in big-endian, specify
-  this property
-- mediatek,eeprom-merge-otp: Merge EEPROM data with OTP data. Can be used on
-  boards where the flash calibration data is generic and specific calibration
-  data should be pulled from the OTP ROM
-
-The MAC address can as well be set with corresponding optional properties
-defined in net/ethernet.txt.
-
-Optional nodes:
-- led: Properties for a connected LED
-  Optional properties:
-  - led-sources: See Documentation/devicetree/bindings/leds/common.txt
-
-&pcie {
-	pcie0 {
-		wifi@0,0 {
-			compatible = "mediatek,mt76";
-			reg = <0x0000 0 0 0 0>;
-			ieee80211-freq-limit = <5000000 6000000>;
-			mediatek,mtd-eeprom = <&factory 0x8000>;
-			big-endian;
-
-			led {
-				led-sources = <2>;
-			};
-		};
-	};
-};
-
-MT7628 example:
-
-wmac: wmac@10300000 {
-	compatible = "mediatek,mt7628-wmac";
-	reg = <0x10300000 0x100000>;
-
-	interrupt-parent = <&cpuintc>;
-	interrupts = <6>;
-
-	mediatek,mtd-eeprom = <&factory 0x0000>;
-};
-
-MT7622 example:
-
-wmac: wmac@18000000 {
-	compatible = "mediatek,mt7622-wmac";
-	reg = <0 0x18000000 0 0x100000>;
-	interrupts = <GIC_SPI 211 IRQ_TYPE_LEVEL_LOW>;
-
-	mediatek,infracfg = <&infracfg>;
-
-	power-domains = <&scpsys MT7622_POWER_DOMAIN_WB>;
-};
@ -0,0 +1,121 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+# Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
+
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/net/wireless/mediatek,mt76.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: MediaTek mt76 wireless devices Generic Binding
+
+maintainers:
+  - Felix Fietkau <nbd@nbd.name>
+  - Lorenzo Bianconi <lorenzo@kernel.org>
+  - Ryder Lee <ryder.lee@mediatek.com>
+
+description: |
+  This node provides properties for configuring the MediaTek mt76xx
+  wireless device. The node is expected to be specified as a child
+  node of the PCI controller to which the wireless chip is connected.
+  Alternatively, it can specify the wireless part of the MT7628/MT7688
+  or MT7622 SoC.
+
+allOf:
+  - $ref: ieee80211.yaml#
+
+properties:
+  compatible:
+    enum:
+      - mediatek,mt76
+      - mediatek,mt7628-wmac
+      - mediatek,mt7622-wmac
+
+  reg:
+    maxItems: 1
+
+  interrupts:
+    maxItems: 1
+
+  power-domains:
+    maxItems: 1
+
+  mediatek,infracfg:
+    $ref: /schemas/types.yaml#/definitions/phandle
+    description:
+      Phandle to the infrastructure bus fabric syscon node.
+      This property is MT7622 specific
+
+  ieee80211-freq-limit: true
+
+  mediatek,mtd-eeprom:
+    $ref: /schemas/types.yaml#/definitions/phandle-array
+    description:
+      Phandle to a MTD partition + offset containing EEPROM data
+
+  big-endian:
+    $ref: /schemas/types.yaml#/definitions/flag
+    description:
+      Specify if the radio eeprom partition is written in big-endian
+
+  mediatek,eeprom-merge-otp:
+    type: boolean
+    description:
+      Merge EEPROM data with OTP data. Can be used on boards where the flash
+      calibration data is generic and specific calibration data should be
+      pulled from the OTP ROM
+
+  led:
+    type: object
+    $ref: /schemas/leds/common.yaml#
+    additionalProperties: false
+    properties:
+      led-sources:
+        maxItems: 1
+
+required:
+  - compatible
+  - reg
+
+additionalProperties: false

+examples:
+  - |
+    pcie0 {
+      #address-cells = <3>;
+      #size-cells = <2>;
+      wifi@0,0 {
+        compatible = "mediatek,mt76";
+        reg = <0x0000 0 0 0 0>;
+        ieee80211-freq-limit = <5000000 6000000>;
+        mediatek,mtd-eeprom = <&factory 0x8000>;
+        big-endian;
+
+        led {
+          led-sources = <2>;
+        };
+      };
+    };
+
+  - |
+    wifi@10300000 {
+      compatible = "mediatek,mt7628-wmac";
+      reg = <0x10300000 0x100000>;
+
+      interrupt-parent = <&cpuintc>;
+      interrupts = <6>;
+
+      mediatek,mtd-eeprom = <&factory 0x0>;
+    };
+
+  - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+    #include <dt-bindings/interrupt-controller/irq.h>
+    wifi@18000000 {
+      compatible = "mediatek,mt7622-wmac";
+      reg = <0x10300000 0x100000>;
+      interrupts = <GIC_SPI 211 IRQ_TYPE_LEVEL_LOW>;
+
+      mediatek,infracfg = <&infracfg>;
+
+      power-domains = <&scpsys 3>;
+    };
@ -52,13 +52,6 @@ static inline u32 mips_read32(struct bcma_drv_mips *mcore,
 	return bcma_read32(mcore->core, offset);
 }
 
-static inline void mips_write32(struct bcma_drv_mips *mcore,
-				u16 offset,
-				u32 value)
-{
-	bcma_write32(mcore->core, offset, value);
-}
-
 static u32 bcma_core_mips_irqflag(struct bcma_device *dev)
 {
 	u32 flag;
@ -631,14 +631,9 @@ static inline u16 carl9170_get_seq(struct sk_buff *skb)
 	return get_seq_h(carl9170_get_hdr(skb));
 }
 
-static inline u16 get_tid_h(struct ieee80211_hdr *hdr)
-{
-	return (ieee80211_get_qos_ctl(hdr))[0] & IEEE80211_QOS_CTL_TID_MASK;
-}
-
 static inline u16 carl9170_get_tid(struct sk_buff *skb)
 {
-	return get_tid_h(carl9170_get_hdr(skb));
+	return ieee80211_get_tid(carl9170_get_hdr(skb));
 }
 
 static inline struct ieee80211_vif *
@ -394,7 +394,7 @@ static void carl9170_tx_status_process_ampdu(struct ar9170 *ar,
 	if (unlikely(!sta))
 		goto out_rcu;
 
-	tid = get_tid_h(hdr);
+	tid = ieee80211_get_tid(hdr);
 
 	sta_info = (void *) sta->drv_priv;
 	tid_info = rcu_dereference(sta_info->agg[tid]);
@ -1456,7 +1456,7 @@ static void wil_link_stats_store_basic(struct wil6210_vif *vif,
 	u8 cid = basic->cid;
 	struct wil_sta_info *sta;
 
-	if (cid < 0 || cid >= wil->max_assoc_sta) {
+	if (cid >= wil->max_assoc_sta) {
 		wil_err(wil, "invalid cid %d\n", cid);
 		return;
 	}
@ -112,7 +112,6 @@ do { \
 
 extern int brcmf_msg_level;
 
-struct brcmf_bus;
 struct brcmf_pub;
 #ifdef DEBUG
 struct dentry *brcmf_debugfs_get_devdir(struct brcmf_pub *drvr);
@ -28,7 +28,7 @@ struct brcmf_usbdev {
 	int ntxq, nrxq, rxsize;
 	u32 bus_mtu;
 	int devid;
-	int chiprev; /* chip revsion number */
+	int chiprev; /* chip revision number */
 };
 
 /* IO Request Block (IRB) */
@ -3817,6 +3817,68 @@ static inline void set_auth_type(struct airo_info *local, int auth_type)
 	local->last_auth = auth_type;
 }
 
+static int noinline_for_stack airo_readconfig(struct airo_info *ai, u8 *mac, int lock)
+{
+	int i, status;
+	/* large variables, so don't inline this function,
+	 * maybe change to kmalloc
+	 */
+	tdsRssiRid rssi_rid;
+	CapabilityRid cap_rid;
+
+	kfree(ai->SSID);
+	ai->SSID = NULL;
+	// general configuration (read/modify/write)
+	status = readConfigRid(ai, lock);
+	if (status != SUCCESS) return ERROR;
+
+	status = readCapabilityRid(ai, &cap_rid, lock);
+	if (status != SUCCESS) return ERROR;
+
+	status = PC4500_readrid(ai, RID_RSSI, &rssi_rid, sizeof(rssi_rid), lock);
+	if (status == SUCCESS) {
+		if (ai->rssi || (ai->rssi = kmalloc(512, GFP_KERNEL)) != NULL)
+			memcpy(ai->rssi, (u8*)&rssi_rid + 2, 512); /* Skip RID length member */
+	}
+	else {
+		kfree(ai->rssi);
+		ai->rssi = NULL;
+		if (cap_rid.softCap & cpu_to_le16(8))
+			ai->config.rmode |= RXMODE_NORMALIZED_RSSI;
+		else
+			airo_print_warn(ai->dev->name, "unknown received signal "
+					"level scale");
+	}
+	ai->config.opmode = adhoc ? MODE_STA_IBSS : MODE_STA_ESS;
+	set_auth_type(ai, AUTH_OPEN);
+	ai->config.modulation = MOD_CCK;
+
+	if (le16_to_cpu(cap_rid.len) >= sizeof(cap_rid) &&
+	    (cap_rid.extSoftCap & cpu_to_le16(1)) &&
+	    micsetup(ai) == SUCCESS) {
+		ai->config.opmode |= MODE_MIC;
+		set_bit(FLAG_MIC_CAPABLE, &ai->flags);
+	}
+
+	/* Save off the MAC */
+	for (i = 0; i < ETH_ALEN; i++) {
+		mac[i] = ai->config.macAddr[i];
+	}
+
+	/* Check to see if there are any insmod configured
+	   rates to add */
+	if (rates[0]) {
+		memset(ai->config.rates, 0, sizeof(ai->config.rates));
+		for (i = 0; i < 8 && rates[i]; i++) {
+			ai->config.rates[i] = rates[i];
+		}
+	}
+	set_bit (FLAG_COMMIT, &ai->flags);
+
+	return SUCCESS;
+}
+
+
 static u16 setup_card(struct airo_info *ai, u8 *mac, int lock)
 {
 	Cmd cmd;
@ -3863,58 +3925,9 @@ static u16 setup_card(struct airo_info *ai, u8 *mac, int lock)
 	if (lock)
 		up(&ai->sem);
 	if (ai->config.len == 0) {
-		int i;
-		tdsRssiRid rssi_rid;
-		CapabilityRid cap_rid;
-
-		kfree(ai->SSID);
-		ai->SSID = NULL;
-		// general configuration (read/modify/write)
-		status = readConfigRid(ai, lock);
-		if (status != SUCCESS) return ERROR;
-
-		status = readCapabilityRid(ai, &cap_rid, lock);
-		if (status != SUCCESS) return ERROR;
-
-		status = PC4500_readrid(ai, RID_RSSI,&rssi_rid, sizeof(rssi_rid), lock);
-		if (status == SUCCESS) {
-			if (ai->rssi || (ai->rssi = kmalloc(512, GFP_KERNEL)) != NULL)
-				memcpy(ai->rssi, (u8*)&rssi_rid + 2, 512); /* Skip RID length member */
-		}
-		else {
-			kfree(ai->rssi);
-			ai->rssi = NULL;
-			if (cap_rid.softCap & cpu_to_le16(8))
-				ai->config.rmode |= RXMODE_NORMALIZED_RSSI;
-			else
-				airo_print_warn(ai->dev->name, "unknown received signal "
-						"level scale");
-		}
-		ai->config.opmode = adhoc ? MODE_STA_IBSS : MODE_STA_ESS;
-		set_auth_type(ai, AUTH_OPEN);
-		ai->config.modulation = MOD_CCK;
-
-		if (le16_to_cpu(cap_rid.len) >= sizeof(cap_rid) &&
-		    (cap_rid.extSoftCap & cpu_to_le16(1)) &&
-		    micsetup(ai) == SUCCESS) {
-			ai->config.opmode |= MODE_MIC;
-			set_bit(FLAG_MIC_CAPABLE, &ai->flags);
-		}
-
-		/* Save off the MAC */
-		for (i = 0; i < ETH_ALEN; i++) {
-			mac[i] = ai->config.macAddr[i];
-		}
-
-		/* Check to see if there are any insmod configured
-		   rates to add */
-		if (rates[0]) {
-			memset(ai->config.rates, 0, sizeof(ai->config.rates));
-			for (i = 0; i < 8 && rates[i]; i++) {
-				ai->config.rates[i] = rates[i];
-			}
-		}
-		set_bit (FLAG_COMMIT, &ai->flags);
+		status = airo_readconfig(ai, mac, lock);
+		if (status != SUCCESS)
+			return ERROR;
 	}
 
 	/* Setup the SSIDs if present */
@ -633,8 +633,10 @@ int libipw_wx_set_encodeext(struct libipw_device *ieee,
 	}
 
 	if (ext->alg != IW_ENCODE_ALG_NONE) {
-		memcpy(sec.keys[idx], ext->key, ext->key_len);
-		sec.key_sizes[idx] = ext->key_len;
+		int key_len = clamp_val(ext->key_len, 0, SCM_KEY_LEN);
+
+		memcpy(sec.keys[idx], ext->key, key_len);
+		sec.key_sizes[idx] = key_len;
 		sec.flags |= (1 << idx);
 		if (ext->alg == IW_ENCODE_ALG_WEP) {
 			sec.encode_alg[idx] = SEC_ALG_WEP;
@ -9,7 +9,7 @@
 #include "iwl-prph.h"
 
 /* Highest firmware API version supported */
-#define IWL_22000_UCODE_API_MAX	62
+#define IWL_22000_UCODE_API_MAX	63
 
 /* Lowest firmware API version supported */
 #define IWL_22000_UCODE_API_MIN	39
@ -48,6 +48,10 @@
 #define IWL_MA_A_GF4_A_FW_PRE		"iwlwifi-ma-a0-gf4-a0-"
 #define IWL_MA_A_MR_A_FW_PRE		"iwlwifi-ma-a0-mr-a0-"
 #define IWL_SNJ_A_MR_A_FW_PRE		"iwlwifi-SoSnj-a0-mr-a0-"
+#define IWL_BZ_A_HR_B_FW_PRE		"iwlwifi-bz-a0-hr-b0-"
+#define IWL_BZ_A_GF_A_FW_PRE		"iwlwifi-bz-a0-gf-a0-"
+#define IWL_BZ_A_GF4_A_FW_PRE		"iwlwifi-bz-a0-gf4-a0-"
+#define IWL_BZ_A_MR_A_FW_PRE		"iwlwifi-bz-a0-mr-a0-"
 
 #define IWL_QU_B_HR_B_MODULE_FIRMWARE(api) \
 	IWL_QU_B_HR_B_FW_PRE __stringify(api) ".ucode"
@ -91,6 +95,14 @@
 	IWL_MA_A_MR_A_FW_PRE __stringify(api) ".ucode"
 #define IWL_SNJ_A_MR_A_MODULE_FIRMWARE(api) \
 	IWL_SNJ_A_MR_A_FW_PRE __stringify(api) ".ucode"
+#define IWL_BZ_A_HR_B_MODULE_FIRMWARE(api) \
+	IWL_BZ_A_HR_B_FW_PRE __stringify(api) ".ucode"
+#define IWL_BZ_A_GF_A_MODULE_FIRMWARE(api) \
+	IWL_BZ_A_GF_A_FW_PRE __stringify(api) ".ucode"
+#define IWL_BZ_A_GF4_A_MODULE_FIRMWARE(api) \
+	IWL_BZ_A_GF4_A_FW_PRE __stringify(api) ".ucode"
+#define IWL_BZ_A_MR_A_MODULE_FIRMWARE(api) \
+	IWL_BZ_A_MR_A_FW_PRE __stringify(api) ".ucode"
 
 static const struct iwl_base_params iwl_22000_base_params = {
 	.eeprom_size = OTP_LOW_IMAGE_SIZE_32K,
@ -357,13 +369,27 @@ const struct iwl_cfg_trans_params iwl_ma_trans_cfg = {
 	.umac_prph_offset = 0x300000
 };
 
+const struct iwl_cfg_trans_params iwl_bz_trans_cfg = {
+	.device_family = IWL_DEVICE_FAMILY_AX210,
+	.base_params = &iwl_ax210_base_params,
+	.mq_rx_supported = true,
+	.use_tfh = true,
+	.rf_id = true,
+	.gen2 = true,
+	.integrated = true,
+	.umac_prph_offset = 0x300000,
+	.xtal_latency = 12000,
+	.low_latency_xtal = true,
+	.ltr_delay = IWL_CFG_TRANS_LTR_DELAY_2500US,
+};
+
 const char iwl_ax101_name[] = "Intel(R) Wi-Fi 6 AX101";
 const char iwl_ax200_name[] = "Intel(R) Wi-Fi 6 AX200 160MHz";
 const char iwl_ax201_name[] = "Intel(R) Wi-Fi 6 AX201 160MHz";
 const char iwl_ax203_name[] = "Intel(R) Wi-Fi 6 AX203";
-const char iwl_ax211_name[] = "Intel(R) Wi-Fi 6 AX211 160MHz";
-const char iwl_ax411_name[] = "Intel(R) Wi-Fi 6 AX411 160MHz";
 const char iwl_ma_name[] = "Intel(R) Wi-Fi 6";
+const char iwl_ax211_name[] = "Intel(R) Wi-Fi 6E AX211 160MHz";
+const char iwl_ax221_name[] = "Intel(R) Wi-Fi 6E AX221 160MHz";
+const char iwl_ax411_name[] = "Intel(R) Wi-Fi 6E AX411 160MHz";
 
 const char iwl_ax200_killer_1650w_name[] =
 	"Killer(R) Wi-Fi 6 AX1650w 160MHz Wireless Network Adapter (200D2W)";
@ -373,6 +399,10 @@ const char iwl_ax201_killer_1650s_name[] =
 	"Killer(R) Wi-Fi 6 AX1650s 160MHz Wireless Network Adapter (201D2W)";
 const char iwl_ax201_killer_1650i_name[] =
 	"Killer(R) Wi-Fi 6 AX1650i 160MHz Wireless Network Adapter (201NGW)";
+const char iwl_ax210_killer_1675w_name[] =
+	"Killer(R) Wi-Fi 6E AX1675w 160MHz Wireless Network Adapter (210D2W)";
+const char iwl_ax210_killer_1675x_name[] =
+	"Killer(R) Wi-Fi 6E AX1675x 160MHz Wireless Network Adapter (210NGW)";
 
 const struct iwl_cfg iwl_qu_b0_hr1_b0 = {
 	.fw_name_pre = IWL_QU_B_HR_B_FW_PRE,
@ -578,7 +608,7 @@ const struct iwl_cfg iwl_qnj_b0_hr_b0_cfg = {
 	.num_rbds = IWL_NUM_RBDS_22000_HE,
 };
 
-const struct iwl_cfg iwlax210_2ax_cfg_so_jf_a0 = {
+const struct iwl_cfg iwlax210_2ax_cfg_so_jf_b0 = {
 	.name = "Intel(R) Wireless-AC 9560 160MHz",
 	.fw_name_pre = IWL_SO_A_JF_B_FW_PRE,
 	IWL_DEVICE_AX210,
@ -719,6 +749,34 @@ const struct iwl_cfg iwl_cfg_quz_a0_hr_b0 = {
 	.num_rbds = IWL_NUM_RBDS_22000_HE,
 };
 
+const struct iwl_cfg iwl_cfg_bz_a0_hr_b0 = {
+	.fw_name_pre = IWL_BZ_A_HR_B_FW_PRE,
+	.uhb_supported = true,
+	IWL_DEVICE_AX210,
+	.num_rbds = IWL_NUM_RBDS_AX210_HE,
+};
+
+const struct iwl_cfg iwl_cfg_bz_a0_gf_a0 = {
+	.fw_name_pre = IWL_BZ_A_GF_A_FW_PRE,
+	.uhb_supported = true,
+	IWL_DEVICE_AX210,
+	.num_rbds = IWL_NUM_RBDS_AX210_HE,
+};
+
+const struct iwl_cfg iwl_cfg_bz_a0_gf4_a0 = {
+	.fw_name_pre = IWL_BZ_A_GF4_A_FW_PRE,
+	.uhb_supported = true,
+	IWL_DEVICE_AX210,
+	.num_rbds = IWL_NUM_RBDS_AX210_HE,
+};
+
+const struct iwl_cfg iwl_cfg_bz_a0_mr_a0 = {
+	.fw_name_pre = IWL_BZ_A_MR_A_FW_PRE,
+	.uhb_supported = true,
+	IWL_DEVICE_AX210,
+	.num_rbds = IWL_NUM_RBDS_AX210_HE,
+};
+
 MODULE_FIRMWARE(IWL_QU_B_HR_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
 MODULE_FIRMWARE(IWL_QNJ_B_HR_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
 MODULE_FIRMWARE(IWL_QU_C_HR_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
@ -740,3 +798,7 @@ MODULE_FIRMWARE(IWL_MA_A_GF_A_FW_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
 MODULE_FIRMWARE(IWL_MA_A_GF4_A_FW_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
 MODULE_FIRMWARE(IWL_MA_A_MR_A_FW_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
 MODULE_FIRMWARE(IWL_SNJ_A_MR_A_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+MODULE_FIRMWARE(IWL_BZ_A_HR_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+MODULE_FIRMWARE(IWL_BZ_A_GF_A_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+MODULE_FIRMWARE(IWL_BZ_A_GF4_A_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+MODULE_FIRMWARE(IWL_BZ_A_MR_A_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
@ -168,7 +168,7 @@ const char iwl9462_160_name[] = "Intel(R) Wireless-AC 9462 160MHz";
 const char iwl9560_160_name[] = "Intel(R) Wireless-AC 9560 160MHz";
 
 const char iwl9260_killer_1550_name[] =
-	"Killer (R) Wireless-AC 1550 Wireless Network Adapter (9260NGW)";
+	"Killer (R) Wireless-AC 1550 Wireless Network Adapter (9260NGW) 160MHz";
 const char iwl9560_killer_1550i_name[] =
 	"Killer (R) Wireless-AC 1550i Wireless Network Adapter (9560NGW)";
 const char iwl9560_killer_1550s_name[] =
@ -1,7 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
 /*
  * Copyright (C) 2017 Intel Deutschland GmbH
- * Copyright (C) 2019-2020 Intel Corporation
+ * Copyright (C) 2019-2021 Intel Corporation
  */
 #include <linux/uuid.h>
 #include "iwl-drv.h"
@ -181,14 +181,13 @@ union acpi_object *iwl_acpi_get_wifi_pkg(struct device *dev,
 	/*
 	 * We need at least two packages, one for the revision and one
 	 * for the data itself.  Also check that the revision is valid
-	 * (i.e. it is an integer smaller than 2, as we currently support only
-	 * 2 revisions).
+	 * (i.e. it is an integer (each caller has to check by itself
+	 * if the returned revision is supported)).
 	 */
 	if (data->type != ACPI_TYPE_PACKAGE ||
 	    data->package.count < 2 ||
-	    data->package.elements[0].type != ACPI_TYPE_INTEGER ||
-	    data->package.elements[0].integer.value > 1) {
-		IWL_DEBUG_DEV_RADIO(dev, "Unsupported packages structure\n");
+	    data->package.elements[0].type != ACPI_TYPE_INTEGER) {
+		IWL_DEBUG_DEV_RADIO(dev, "Invalid packages structure\n");
 		return ERR_PTR(-EINVAL);
 	}
 
@ -696,3 +695,70 @@ int iwl_sar_geo_init(struct iwl_fw_runtime *fwrt,
 	return 0;
 }
 IWL_EXPORT_SYMBOL(iwl_sar_geo_init);
+
+static u32 iwl_acpi_eval_dsm_func(struct device *dev, enum iwl_dsm_funcs_rev_0 eval_func)
+{
+	union acpi_object *obj;
+	u32 ret;
+
+	obj = iwl_acpi_get_dsm_object(dev, 0,
+				      eval_func, NULL,
+				      &iwl_guid);
+
+	if (IS_ERR(obj)) {
+		IWL_DEBUG_DEV_RADIO(dev,
+				    "ACPI: DSM func '%d': Got Error in obj = %ld\n",
+				    eval_func,
+				    PTR_ERR(obj));
+		return 0;
+	}
+
+	if (obj->type != ACPI_TYPE_INTEGER) {
+		IWL_DEBUG_DEV_RADIO(dev,
+				    "ACPI: DSM func '%d' did not return a valid object, type=%d\n",
+				    eval_func,
+				    obj->type);
+		ret = 0;
+		goto out;
+	}
+
+	ret = obj->integer.value;
+	IWL_DEBUG_DEV_RADIO(dev,
+			    "ACPI: DSM method evaluated: func='%d', ret=%d\n",
+			    eval_func,
+			    ret);
+out:
+	ACPI_FREE(obj);
+	return ret;
+}
+
+__le32 iwl_acpi_get_lari_config_bitmap(struct iwl_fw_runtime *fwrt)
+{
+	u32 ret;
+	__le32 config_bitmap = 0;
+
+	/*
+	 ** Evaluate func 'DSM_FUNC_ENABLE_INDONESIA_5G2'
+	 */
+	ret = iwl_acpi_eval_dsm_func(fwrt->dev, DSM_FUNC_ENABLE_INDONESIA_5G2);
+
+	if (ret == DSM_VALUE_INDONESIA_ENABLE)
+		config_bitmap |=
+			cpu_to_le32(LARI_CONFIG_ENABLE_5G2_IN_INDONESIA_MSK);
+
+	/*
+	 ** Evaluate func 'DSM_FUNC_DISABLE_SRD'
+	 */
+	ret = iwl_acpi_eval_dsm_func(fwrt->dev, DSM_FUNC_DISABLE_SRD);
+
+	if (ret == DSM_VALUE_SRD_PASSIVE)
+		config_bitmap |=
+			cpu_to_le32(LARI_CONFIG_CHANGE_ETSI_TO_PASSIVE_MSK);
+
+	else if (ret == DSM_VALUE_SRD_DISABLE)
+		config_bitmap |=
+			cpu_to_le32(LARI_CONFIG_CHANGE_ETSI_TO_DISABLED_MSK);
+
+	return config_bitmap;
+}
+IWL_EXPORT_SYMBOL(iwl_acpi_get_lari_config_bitmap);
@ -53,8 +53,8 @@
 
 #define ACPI_WGDS_TABLE_SIZE 3
 
-#define ACPI_PPAG_WIFI_DATA_SIZE	((IWL_NUM_CHAIN_LIMITS * \
-					  IWL_NUM_SUB_BANDS) + 2)
+#define ACPI_PPAG_WIFI_DATA_SIZE_V1	((IWL_NUM_CHAIN_LIMITS * \
+					  IWL_NUM_SUB_BANDS_V1) + 2)
 #define ACPI_PPAG_WIFI_DATA_SIZE_V2	((IWL_NUM_CHAIN_LIMITS * \
 					  IWL_NUM_SUB_BANDS_V2) + 2)
 
@ -77,6 +77,7 @@ enum iwl_dsm_funcs_rev_0 {
 	DSM_FUNC_QUERY = 0,
 	DSM_FUNC_DISABLE_SRD = 1,
 	DSM_FUNC_ENABLE_INDONESIA_5G2 = 2,
+	DSM_FUNC_11AX_ENABLEMENT = 6,
 };
 
 enum iwl_dsm_values_srd {
@ -160,6 +161,8 @@ int iwl_sar_geo_init(struct iwl_fw_runtime *fwrt,
 int iwl_acpi_get_tas(struct iwl_fw_runtime *fwrt, __le32 *block_list_array,
 		     int *block_list_size);
 
+__le32 iwl_acpi_get_lari_config_bitmap(struct iwl_fw_runtime *fwrt);
+
 #else /* CONFIG_ACPI */
 
 static inline void *iwl_acpi_get_object(struct device *dev, acpi_string method)
@ -235,5 +238,11 @@ static inline int iwl_acpi_get_tas(struct iwl_fw_runtime *fwrt,
 {
 	return -ENOENT;
 }
+
+static inline __le32 iwl_acpi_get_lari_config_bitmap(struct iwl_fw_runtime *fwrt)
+{
+	return 0;
+}
+
 #endif /* CONFIG_ACPI */
 #endif /* __iwl_fw_acpi__ */
@ -147,6 +147,10 @@ enum iwl_tof_mcsi_enable {
 * @IWL_TOF_RESPONDER_CMD_VALID_RETRY_ON_ALGO_FAIL: retry on algorithm failure
 *	is valid
 * @IWL_TOF_RESPONDER_CMD_VALID_STA_ID: station ID is valid
+* @IWL_TOF_RESPONDER_CMD_VALID_NDP_SUPPORT: enable/disable NDP ranging support
+*	is valid
+* @IWL_TOF_RESPONDER_CMD_VALID_NDP_PARAMS: NDP parameters are valid
+* @IWL_TOF_RESPONDER_CMD_VALID_LMR_FEEDBACK: LMR feedback support is valid
 */
 enum iwl_tof_responder_cmd_valid_field {
 	IWL_TOF_RESPONDER_CMD_VALID_CHAN_INFO = BIT(0),
@ -162,6 +166,9 @@ enum iwl_tof_responder_cmd_valid_field {
 	IWL_TOF_RESPONDER_CMD_VALID_FAST_ALGO_SUPPORT = BIT(10),
 	IWL_TOF_RESPONDER_CMD_VALID_RETRY_ON_ALGO_FAIL = BIT(11),
 	IWL_TOF_RESPONDER_CMD_VALID_STA_ID = BIT(12),
+	IWL_TOF_RESPONDER_CMD_VALID_NDP_SUPPORT = BIT(22),
+	IWL_TOF_RESPONDER_CMD_VALID_NDP_PARAMS = BIT(23),
+	IWL_TOF_RESPONDER_CMD_VALID_LMR_FEEDBACK = BIT(24),
 };
 
 /**
@ -176,6 +183,9 @@ enum iwl_tof_responder_cmd_valid_field {
 * @IWL_TOF_RESPONDER_FLAGS_FAST_ALGO_SUPPORT: fast algorithm support
 * @IWL_TOF_RESPONDER_FLAGS_RETRY_ON_ALGO_FAIL: retry on algorithm fail
 * @IWL_TOF_RESPONDER_FLAGS_FTM_TX_ANT: TX antenna mask
+* @IWL_TOF_RESPONDER_FLAGS_NDP_SUPPORT: support NDP ranging
+* @IWL_TOF_RESPONDER_FLAGS_LMR_FEEDBACK: request for LMR feedback if the
+*	initiator supports it
 */
 enum iwl_tof_responder_cfg_flags {
 	IWL_TOF_RESPONDER_FLAGS_NON_ASAP_SUPPORT = BIT(0),
@ -188,6 +198,8 @@ enum iwl_tof_responder_cfg_flags {
 	IWL_TOF_RESPONDER_FLAGS_FAST_ALGO_SUPPORT = BIT(9),
 	IWL_TOF_RESPONDER_FLAGS_RETRY_ON_ALGO_FAIL = BIT(10),
 	IWL_TOF_RESPONDER_FLAGS_FTM_TX_ANT = RATE_MCS_ANT_ABC_MSK,
+	IWL_TOF_RESPONDER_FLAGS_NDP_SUPPORT = BIT(24),
+	IWL_TOF_RESPONDER_FLAGS_LMR_FEEDBACK = BIT(25),
 };
 
 /**
@ -226,7 +238,7 @@ struct iwl_tof_responder_config_cmd_v6 {
 } __packed; /* TOF_RESPONDER_CONFIG_CMD_API_S_VER_6 */
 
 /**
- * struct iwl_tof_responder_config_cmd - ToF AP mode (for debug)
+ * struct iwl_tof_responder_config_cmd_v7 - ToF AP mode (for debug)
 * @cmd_valid_fields: &iwl_tof_responder_cmd_valid_field
 * @responder_cfg_flags: &iwl_tof_responder_cfg_flags
 * @format_bw: bits 0 - 3: &enum iwl_location_frame_format.
@ -245,7 +257,7 @@ struct iwl_tof_responder_config_cmd_v6 {
 * @bssid: Current AP BSSID
 * @reserved2: reserved
 */
-struct iwl_tof_responder_config_cmd {
+struct iwl_tof_responder_config_cmd_v7 {
 	__le32 cmd_valid_fields;
 	__le32 responder_cfg_flags;
 	u8 format_bw;
@ -259,7 +271,56 @@ struct iwl_tof_responder_config_cmd {
 	__le16 specific_calib;
 	u8 bssid[ETH_ALEN];
 	__le16 reserved2;
-} __packed; /* TOF_RESPONDER_CONFIG_CMD_API_S_VER_6 */
+} __packed; /* TOF_RESPONDER_CONFIG_CMD_API_S_VER_7 */
+
+#define IWL_RESPONDER_STS_POS	3
+#define IWL_RESPONDER_TOTAL_LTF_POS	6
+
+/**
+ * struct iwl_tof_responder_config_cmd_v8 - ToF AP mode (for debug)
+ * @cmd_valid_fields: &iwl_tof_responder_cmd_valid_field
+ * @responder_cfg_flags: &iwl_tof_responder_cfg_flags
+ * @format_bw: bits 0 - 3: &enum iwl_location_frame_format.
+ *	bits 4 - 7: &enum iwl_location_bw.
+ * @rate: current AP rate
+ * @channel_num: current AP Channel
+ * @ctrl_ch_position: coding of the control channel position relative to
+ *	the center frequency, see iwl_mvm_get_ctrl_pos()
+ * @sta_id: index of the AP STA when in AP mode
+ * @reserved1: reserved
+ * @toa_offset: Artificial addition [pSec] for the ToA - to be used for debug
+ *	purposes, simulating station movement by adding various values
+ *	to this field
+ * @common_calib: XVT: common calibration value
+ * @specific_calib: XVT: specific calibration value
+ * @bssid: Current AP BSSID
+ * @r2i_ndp_params: parameters for R2I NDP.
+ *	bits 0 - 2: max number of LTF repetitions
+ *	bits 3 - 5: max number of spatial streams (supported values are < 2)
+ *	bits 6 - 7: max number of total LTFs
+ *	(&enum ieee80211_range_params_max_total_ltf)
+ * @i2r_ndp_params: parameters for I2R NDP.
+ *	bits 0 - 2: max number of LTF repetitions
+ *	bits 3 - 5: max number of spatial streams
+ *	bits 6 - 7: max number of total LTFs
+ *	(&enum ieee80211_range_params_max_total_ltf)
+ */
+struct iwl_tof_responder_config_cmd_v8 {
+	__le32 cmd_valid_fields;
+	__le32 responder_cfg_flags;
+	u8 format_bw;
+	u8 rate;
+	u8 channel_num;
+	u8 ctrl_ch_position;
+	u8 sta_id;
+	u8 reserved1;
+	__le16 toa_offset;
+	__le16 common_calib;
+	__le16 specific_calib;
+	u8 bssid[ETH_ALEN];
+	u8 r2i_ndp_params;
+	u8 i2r_ndp_params;
+} __packed; /* TOF_RESPONDER_CONFIG_CMD_API_S_VER_8 */
 
 #define IWL_LCI_CIVIC_IE_MAX_SIZE	400
@ -422,10 +483,12 @@ struct iwl_tof_range_req_ap_entry_v2 {
|
|||
* driver.
|
||||
* @IWL_INITIATOR_AP_FLAGS_NON_TB: Use non trigger based flow
|
||||
* @IWL_INITIATOR_AP_FLAGS_TB: Use trigger based flow
|
||||
* @IWL_INITIATOR_AP_FLAGS_SECURED: request secured measurement
|
||||
* @IWL_INITIATOR_AP_FLAGS_SECURED: request secure LTF measurement
|
||||
* @IWL_INITIATOR_AP_FLAGS_LMR_FEEDBACK: Send LMR feedback
|
||||
* @IWL_INITIATOR_AP_FLAGS_USE_CALIB: Use calibration values from the request
|
||||
* instead of fw internal values.
|
||||
* @IWL_INITIATOR_AP_FLAGS_PMF: request to protect the negotiation and LMR
|
||||
* frames with protected management frames.
|
||||
*/
|
||||
enum iwl_initiator_ap_flags {
|
||||
IWL_INITIATOR_AP_FLAGS_ASAP = BIT(1),
|
||||
|
@ -440,6 +503,7 @@ enum iwl_initiator_ap_flags {
|
|||
IWL_INITIATOR_AP_FLAGS_SECURED = BIT(11),
|
||||
IWL_INITIATOR_AP_FLAGS_LMR_FEEDBACK = BIT(12),
|
||||
IWL_INITIATOR_AP_FLAGS_USE_CALIB = BIT(13),
|
||||
IWL_INITIATOR_AP_FLAGS_PMF = BIT(14),
|
||||
};
|
||||
|
||||
/**
|
||||
|
@ -657,6 +721,79 @@ struct iwl_tof_range_req_ap_entry_v7 {
|
|||
u8 tx_pn[IEEE80211_CCMP_PN_LEN];
|
||||
} __packed; /* LOCATION_RANGE_REQ_AP_ENTRY_CMD_API_S_VER_7 */
|
||||
|
||||
#define IWL_LOCATION_MAX_STS_POS 3
|
||||
|
||||
/**
|
||||
* struct iwl_tof_range_req_ap_entry_v8 - AP configuration parameters
|
||||
* @initiator_ap_flags: see &enum iwl_initiator_ap_flags.
|
||||
* @channel_num: AP Channel number
|
||||
* @format_bw: bits 0 - 3: &enum iwl_location_frame_format.
|
||||
* bits 4 - 7: &enum iwl_location_bw.
|
||||
* @ctrl_ch_position: Coding of the control channel position relative to the
|
||||
* center frequency, see iwl_mvm_get_ctrl_pos().
|
||||
* @ftmr_max_retries: Max number of retries to send the FTMR in case of no
|
||||
* reply from the AP.
|
||||
* @bssid: AP's BSSID
|
||||
* @burst_period: Recommended value to be sent to the AP. Measurement
|
||||
* periodicity In units of 100ms. ignored if num_of_bursts_exp = 0
|
||||
* @samples_per_burst: the number of FTMs pairs in single Burst (1-31);
|
||||
* @num_of_bursts: Recommended value to be sent to the AP. 2s Exponent of
|
||||
* the number of measurement iterations (min 2^0 = 1, max 2^14)
|
||||
* @sta_id: the station id of the AP. Only relevant when associated to the AP,
|
||||
* otherwise should be set to &IWL_MVM_INVALID_STA.
|
||||
* @cipher: pairwise cipher suite for secured measurement.
|
||||
* &enum iwl_location_cipher.
|
||||
* @hltk: HLTK to be used for secured 11az measurement
|
||||
* @tk: TK to be used for secured 11az measurement
|
||||
* @calib: An array of calibration values per FTM rx bandwidth.
|
||||
* If &IWL_INITIATOR_AP_FLAGS_USE_CALIB is set, the fw will use the
|
||||
* calibration value that corresponds to the rx bandwidth of the FTM
|
||||
* frame.
|
||||
* @beacon_interval: beacon interval of the AP in TUs. Only required if
|
||||
* &IWL_INITIATOR_AP_FLAGS_TB is set.
|
||||
* @rx_pn: the next expected PN for protected management frames Rx. LE byte
|
||||
* order. Only valid if &IWL_INITIATOR_AP_FLAGS_SECURED is set and sta_id
|
||||
* is set to &IWL_MVM_INVALID_STA.
|
||||
* @tx_pn: the next PN to use for protected management frames Tx. LE byte
|
||||
* order. Only valid if &IWL_INITIATOR_AP_FLAGS_SECURED is set and sta_id
|
||||
* is set to &IWL_MVM_INVALID_STA.
|
||||
* @r2i_ndp_params: parameters for R2I NDP ranging negotiation.
|
||||
* bits 0 - 2: max LTF repetitions
|
||||
* bits 3 - 5: max number of spatial streams
|
||||
* bits 6 - 7: reserved
|
||||
* @i2r_ndp_params: parameters for I2R NDP ranging negotiation.
|
||||
* bits 0 - 2: max LTF repetitions
|
||||
* bits 3 - 5: max number of spatial streams (supported values are < 2)
|
||||
* bits 6 - 7: reserved
|
||||
* @r2i_max_total_ltf: R2I Max Total LTFs for NDP ranging negotiation.
|
||||
* One of &enum ieee80211_range_params_max_total_ltf.
|
||||
* @i2r_max_total_ltf: I2R Max Total LTFs for NDP ranging negotiation.
|
||||
* One of &enum ieee80211_range_params_max_total_ltf.
|
||||
*/
|
||||
struct iwl_tof_range_req_ap_entry_v8 {
|
||||
__le32 initiator_ap_flags;
|
||||
u8 channel_num;
|
||||
u8 format_bw;
|
||||
u8 ctrl_ch_position;
|
||||
u8 ftmr_max_retries;
|
||||
u8 bssid[ETH_ALEN];
|
||||
__le16 burst_period;
|
||||
u8 samples_per_burst;
|
||||
u8 num_of_bursts;
|
||||
u8 sta_id;
|
||||
u8 cipher;
|
||||
u8 hltk[HLTK_11AZ_LEN];
|
||||
u8 tk[TK_11AZ_LEN];
|
||||
__le16 calib[IWL_TOF_BW_NUM];
|
||||
__le16 beacon_interval;
|
||||
u8 rx_pn[IEEE80211_CCMP_PN_LEN];
|
||||
u8 tx_pn[IEEE80211_CCMP_PN_LEN];
|
||||
u8 r2i_ndp_params;
|
||||
u8 i2r_ndp_params;
|
||||
u8 r2i_max_total_ltf;
|
||||
u8 i2r_max_total_ltf;
|
||||
} __packed; /* LOCATION_RANGE_REQ_AP_ENTRY_CMD_API_S_VER_8 */
|
||||
|
||||
/**
|
||||
* enum iwl_tof_response_mode
|
||||
* @IWL_MVM_TOF_RESPONSE_ASAP: report each AP measurement separately as soon as
|
||||
|
@ -878,6 +1015,34 @@ struct iwl_tof_range_req_cmd_v11 {
|
|||
struct iwl_tof_range_req_ap_entry_v7 ap[IWL_MVM_TOF_MAX_APS];
|
||||
} __packed; /* LOCATION_RANGE_REQ_CMD_API_S_VER_11 */
|
||||
|
||||
/**
|
||||
* struct iwl_tof_range_req_cmd_v12 - start measurement cmd
|
||||
* @initiator_flags: see flags @ iwl_tof_initiator_flags
|
||||
* @request_id: A Token incremented per request. The same Token will be
|
||||
* sent back in the range response
|
||||
* @num_of_ap: Number of APs to measure (error if > IWL_MVM_TOF_MAX_APS)
|
||||
* @range_req_bssid: ranging request BSSID
|
||||
* @macaddr_mask: Bits set to 0 shall be copied from the MAC address template.
|
||||
* Bits set to 1 shall be randomized by the UMAC
|
||||
* @macaddr_template: MAC address template to use for non-randomized bits
|
||||
* @req_timeout_ms: Requested timeout of the response in units of milliseconds.
|
||||
* This is the session time for completing the measurement.
|
||||
* @tsf_mac_id: report the measurement start time for each ap in terms of the
|
||||
* TSF of this mac id. 0xff to disable TSF reporting.
|
||||
* @ap: per-AP request data, see &struct iwl_tof_range_req_ap_entry_v2.
|
||||
*/
|
||||
struct iwl_tof_range_req_cmd_v12 {
|
||||
__le32 initiator_flags;
|
||||
u8 request_id;
|
||||
u8 num_of_ap;
|
||||
u8 range_req_bssid[ETH_ALEN];
|
||||
u8 macaddr_mask[ETH_ALEN];
|
||||
u8 macaddr_template[ETH_ALEN];
|
||||
__le32 req_timeout_ms;
|
||||
__le32 tsf_mac_id;
|
||||
struct iwl_tof_range_req_ap_entry_v8 ap[IWL_MVM_TOF_MAX_APS];
|
||||
} __packed; /* LOCATION_RANGE_REQ_CMD_API_S_VER_12 */
|
||||
|
||||
/*
|
||||
* enum iwl_tof_range_request_status - status of the sent request
|
||||
* @IWL_TOF_RANGE_REQUEST_STATUS_SUCCESSFUL - FW successfully received the
|
||||
|
|
|
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2012-2014, 2018-2020 Intel Corporation
+ * Copyright (C) 2012-2014, 2018-2021 Intel Corporation
 * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
 * Copyright (C) 2016-2017 Intel Deutschland GmbH
 */

@@ -414,6 +414,9 @@ enum iwl_lari_config_masks {
        LARI_CONFIG_ENABLE_5G2_IN_INDONESIA_MSK = BIT(3),
};

#define IWL_11AX_UKRAINE_MASK 3
#define IWL_11AX_UKRAINE_SHIFT 8

/**
 * struct iwl_lari_config_change_cmd_v1 - change LARI configuration
 * @config_bitmap: bit map of the config commands. each bit will trigger a

@@ -434,6 +437,21 @@ struct iwl_lari_config_change_cmd_v2 {
        __le32 oem_uhb_allow_bitmap;
} __packed; /* LARI_CHANGE_CONF_CMD_S_VER_2 */

/**
 * struct iwl_lari_config_change_cmd_v3 - change LARI configuration
 * @config_bitmap: bit map of the config commands. each bit will trigger a
 *     different predefined FW config operation
 * @oem_uhb_allow_bitmap: bitmap of UHB enabled MCC sets
 * @oem_11ax_allow_bitmap: bitmap of 11ax allowed MCCs.
 *     For each supported country, a pair of regulatory override bit and 11ax mode exist
 *     in the bit field.
 */
struct iwl_lari_config_change_cmd_v3 {
        __le32 config_bitmap;
        __le32 oem_uhb_allow_bitmap;
        __le32 oem_11ax_allow_bitmap;
} __packed; /* LARI_CHANGE_CONF_CMD_S_VER_3 */

/**
 * struct iwl_pnvm_init_complete_ntfy - PNVM initialization complete
 * @status: PNVM image loading status

@@ -274,7 +274,7 @@ enum iwl_dev_tx_power_cmd_mode {
#define IWL_NUM_CHAIN_TABLES	1
#define IWL_NUM_CHAIN_TABLES_V2	2
#define IWL_NUM_CHAIN_LIMITS	2
-#define IWL_NUM_SUB_BANDS	5
+#define IWL_NUM_SUB_BANDS_V1	5
+#define IWL_NUM_SUB_BANDS_V2	11

/**

@@ -300,7 +300,7 @@ struct iwl_dev_tx_power_common {
 * @per_chain: per chain restrictions
 */
struct iwl_dev_tx_power_cmd_v3 {
-       __le16 per_chain[IWL_NUM_CHAIN_TABLES][IWL_NUM_CHAIN_LIMITS][IWL_NUM_SUB_BANDS];
+       __le16 per_chain[IWL_NUM_CHAIN_TABLES][IWL_NUM_CHAIN_LIMITS][IWL_NUM_SUB_BANDS_V1];
} __packed; /* TX_REDUCED_POWER_API_S_VER_3 */

#define IWL_DEV_MAX_TX_POWER 0x7FFF

@@ -313,7 +313,7 @@ struct iwl_dev_tx_power_cmd_v3 {
 * @reserved: reserved (padding)
 */
struct iwl_dev_tx_power_cmd_v4 {
-       __le16 per_chain[IWL_NUM_CHAIN_TABLES][IWL_NUM_CHAIN_LIMITS][IWL_NUM_SUB_BANDS];
+       __le16 per_chain[IWL_NUM_CHAIN_TABLES][IWL_NUM_CHAIN_LIMITS][IWL_NUM_SUB_BANDS_V1];
        u8 enable_ack_reduction;
        u8 reserved[3];
} __packed; /* TX_REDUCED_POWER_API_S_VER_4 */

@@ -332,7 +332,7 @@ struct iwl_dev_tx_power_cmd_v4 {
 *     BIOS values. relevant if setMode is IWL_TX_POWER_MODE_SET_SAR_TIMER
 */
struct iwl_dev_tx_power_cmd_v5 {
-       __le16 per_chain[IWL_NUM_CHAIN_TABLES][IWL_NUM_CHAIN_LIMITS][IWL_NUM_SUB_BANDS];
+       __le16 per_chain[IWL_NUM_CHAIN_TABLES][IWL_NUM_CHAIN_LIMITS][IWL_NUM_SUB_BANDS_V1];
        u8 enable_ack_reduction;
        u8 per_chain_restriction_changed;
        u8 reserved[2];

@@ -454,21 +454,23 @@ struct iwl_geo_tx_power_profiles_resp {

/**
 * union iwl_ppag_table_cmd - union for all versions of PPAG command
- * @v1: version 1, table revision = 0
- * @v2: version 2, table revision = 1
+ * @v1: version 1
+ * @v2: version 2
 *
- * @enabled: 1 if PPAG is enabled, 0 otherwise
+ * @flags: bit 0 - indicates enablement of PPAG for ETSI
+ *     bit 1 - indicates enablement of PPAG for CHINA BIOS
+ *     bit 1 can be used only in v3 (identical to v2)
 * @gain: table of antenna gain values per chain and sub-band
 * @reserved: reserved
 */
union iwl_ppag_table_cmd {
        struct {
-               __le32 enabled;
-               s8 gain[IWL_NUM_CHAIN_LIMITS][IWL_NUM_SUB_BANDS];
+               __le32 flags;
+               s8 gain[IWL_NUM_CHAIN_LIMITS][IWL_NUM_SUB_BANDS_V1];
                s8 reserved[2];
        } v1;
        struct {
-               __le32 enabled;
+               __le32 flags;
                s8 gain[IWL_NUM_CHAIN_LIMITS][IWL_NUM_SUB_BANDS_V2];
                s8 reserved[2];
        } v2;

@@ -779,36 +779,6 @@ struct iwl_rxq_sync_notification {
        u8 payload[];
} __packed; /* MULTI_QUEUE_DRV_SYNC_HDR_CMD_API_S_VER_1 */

-/**
- * enum iwl_mvm_rxq_notif_type - Internal message identifier
- *
- * @IWL_MVM_RXQ_EMPTY: empty sync notification
- * @IWL_MVM_RXQ_NOTIF_DEL_BA: notify RSS queues of delBA
- * @IWL_MVM_RXQ_NSSN_SYNC: notify all the RSS queues with the new NSSN
- */
-enum iwl_mvm_rxq_notif_type {
-       IWL_MVM_RXQ_EMPTY,
-       IWL_MVM_RXQ_NOTIF_DEL_BA,
-       IWL_MVM_RXQ_NSSN_SYNC,
-};
-
-/**
- * struct iwl_mvm_internal_rxq_notif - Internal representation of the data sent
- *     in &iwl_rxq_sync_cmd. Should be DWORD aligned.
- *     FW is agnostic to the payload, so there are no endianity requirements.
- *
- * @type: value from &iwl_mvm_rxq_notif_type
- * @sync: ctrl path is waiting for all notifications to be received
- * @cookie: internal cookie to identify old notifications
- * @data: payload
- */
-struct iwl_mvm_internal_rxq_notif {
-       u16 type;
-       u16 sync;
-       u32 cookie;
-       u8 data[];
-} __packed;
-
/**
 * enum iwl_mvm_pm_event - type of station PM event
 * @IWL_MVM_PM_EVENT_AWAKE: station woke up

@@ -634,6 +634,12 @@ enum iwl_umac_scan_general_flags2 {
 * @IWL_UMAC_SCAN_GEN_FLAGS_V2_TRIGGER_UHB_SCAN: at the end of 2.4GHz and
 *     5.2Ghz bands scan, trigger scan on 6GHz band to discover
 *     the reported collocated APs
 * @IWL_UMAC_SCAN_GEN_FLAGS_V2_6GHZ_PASSIVE_SCAN: at the end of 2.4GHz and 5GHz
 *     bands scan, if no APs were discovered, allow the scan to continue and
 *     scan 6GHz PSC channels in order to discover country information.
 * @IWL_UMAC_SCAN_GEN_FLAGS_V2_6GHZ_PASSIVE_SCAN_FILTER_IN: in case
 *     &IWL_UMAC_SCAN_GEN_FLAGS_V2_6GHZ_PASSIVE_SCAN is enabled and scan is
 *     activated over 6GHz PSC channels, filter in beacons and probe responses.
 */
enum iwl_umac_scan_general_flags_v2 {
        IWL_UMAC_SCAN_GEN_FLAGS_V2_PERIODIC = BIT(0),

@@ -649,6 +655,8 @@ enum iwl_umac_scan_general_flags_v2 {
        IWL_UMAC_SCAN_GEN_FLAGS_V2_MULTI_SSID = BIT(10),
        IWL_UMAC_SCAN_GEN_FLAGS_V2_FORCE_PASSIVE = BIT(11),
        IWL_UMAC_SCAN_GEN_FLAGS_V2_TRIGGER_UHB_SCAN = BIT(12),
        IWL_UMAC_SCAN_GEN_FLAGS_V2_6GHZ_PASSIVE_SCAN = BIT(13),
        IWL_UMAC_SCAN_GEN_FLAGS_V2_6GHZ_PASSIVE_SCAN_FILTER_IN = BIT(14),
};

/**

@@ -2559,7 +2559,9 @@ int iwl_fw_dbg_ini_collect(struct iwl_fw_runtime *fwrt,

        fwrt->dump.wks[idx].dump_data = *dump_data;

-       IWL_WARN(fwrt, "WRT: Collecting data: ini trigger %d fired.\n", tp_id);
+       IWL_WARN(fwrt,
+                "WRT: Collecting data: ini trigger %d fired (delay=%dms).\n",
+                tp_id, (u32)(delay / USEC_PER_MSEC));

        schedule_delayed_work(&fwrt->dump.wks[idx].wk, usecs_to_jiffies(delay));

@@ -362,6 +362,8 @@ typedef unsigned int __bitwise iwl_ucode_tlv_capa_t;
 * @IWL_UCODE_TLV_CAPA_PROTECTED_TWT: Supports protection of TWT action frames
 * @IWL_UCODE_TLV_CAPA_FW_RESET_HANDSHAKE: Supports the firmware handshake in
 *     reset flow
 * @IWL_UCODE_TLV_CAPA_PASSIVE_6GHZ_SCAN: Support for passive scan on 6GHz PSC
 *     channels even when these are not enabled.
 *
 * @NUM_IWL_UCODE_TLV_CAPA: number of bits used
 */

@@ -408,6 +410,7 @@ enum iwl_ucode_tlv_capa {
        IWL_UCODE_TLV_CAPA_SESSION_PROT_CMD = (__force iwl_ucode_tlv_capa_t)54,
        IWL_UCODE_TLV_CAPA_PROTECTED_TWT = (__force iwl_ucode_tlv_capa_t)56,
        IWL_UCODE_TLV_CAPA_FW_RESET_HANDSHAKE = (__force iwl_ucode_tlv_capa_t)57,
        IWL_UCODE_TLV_CAPA_PASSIVE_6GHZ_SCAN = (__force iwl_ucode_tlv_capa_t)58,

        /* set 2 */
        IWL_UCODE_TLV_CAPA_EXTENDED_DTS_MEASURE = (__force iwl_ucode_tlv_capa_t)64,

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2005-2014, 2018-2020 Intel Corporation
+ * Copyright (C) 2005-2014, 2018-2021 Intel Corporation
 * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
 * Copyright (C) 2016 Intel Deutschland GmbH
 */

@@ -116,6 +116,9 @@ struct fw_img {
#define PAGING_CMD_NUM_OF_PAGES_IN_LAST_GRP_POS	0
#define PAGING_TLV_SECURE_MASK	1

/* FW MSB Mask for regions/cache_control */
#define FW_ADDR_CACHE_CONTROL 0xC0000000UL

/**
 * struct iwl_fw_paging
 * @fw_paging_phys: page phy pointer

@@ -10,6 +10,8 @@

#include "fw/api/soc.h"
#include "fw/api/commands.h"
#include "fw/api/rx.h"
#include "fw/api/datapath.h"

void iwl_fw_runtime_init(struct iwl_fw_runtime *fwrt, struct iwl_trans *trans,
                         const struct iwl_fw *fw,

@@ -95,3 +97,60 @@ int iwl_set_soc_latency(struct iwl_fw_runtime *fwrt)
        return ret;
}
IWL_EXPORT_SYMBOL(iwl_set_soc_latency);

int iwl_configure_rxq(struct iwl_fw_runtime *fwrt)
{
        int i, num_queues, size, ret;
        struct iwl_rfh_queue_config *cmd;
        struct iwl_host_cmd hcmd = {
                .id = WIDE_ID(DATA_PATH_GROUP, RFH_QUEUE_CONFIG_CMD),
                .dataflags[0] = IWL_HCMD_DFL_NOCOPY,
        };

        /*
         * The default queue is configured via context info, so if we
         * have a single queue, there's nothing to do here.
         */
        if (fwrt->trans->num_rx_queues == 1)
                return 0;

        if (fwrt->trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_22000)
                return 0;

        /* skip the default queue */
        num_queues = fwrt->trans->num_rx_queues - 1;

        size = struct_size(cmd, data, num_queues);

        cmd = kzalloc(size, GFP_KERNEL);
        if (!cmd)
                return -ENOMEM;

        cmd->num_queues = num_queues;

        for (i = 0; i < num_queues; i++) {
                struct iwl_trans_rxq_dma_data data;

                cmd->data[i].q_num = i + 1;
                iwl_trans_get_rxq_dma_data(fwrt->trans, i + 1, &data);

                cmd->data[i].fr_bd_cb = cpu_to_le64(data.fr_bd_cb);
                cmd->data[i].urbd_stts_wrptr =
                        cpu_to_le64(data.urbd_stts_wrptr);
                cmd->data[i].ur_bd_cb = cpu_to_le64(data.ur_bd_cb);
                cmd->data[i].fr_bd_wid = cpu_to_le32(data.fr_bd_wid);
        }

        hcmd.data[0] = cmd;
        hcmd.len[0] = size;

        ret = iwl_trans_send_cmd(fwrt->trans, &hcmd);

        kfree(cmd);

        if (ret)
                IWL_ERR(fwrt, "Failed to configure RX queues: %d\n", ret);

        return ret;
}
IWL_EXPORT_SYMBOL(iwl_configure_rxq);
@@ -190,5 +190,6 @@ void iwl_free_fw_paging(struct iwl_fw_runtime *fwrt);

void iwl_get_shared_mem_conf(struct iwl_fw_runtime *fwrt);
int iwl_set_soc_latency(struct iwl_fw_runtime *fwrt);
int iwl_configure_rxq(struct iwl_fw_runtime *fwrt);

#endif /* __iwl_fw_runtime_h__ */

@@ -416,6 +416,7 @@ struct iwl_cfg {
#define IWL_CFG_MAC_TYPE_SNJ		0x42
#define IWL_CFG_MAC_TYPE_SOF		0x43
#define IWL_CFG_MAC_TYPE_MA		0x44
#define IWL_CFG_MAC_TYPE_BZ		0x46

#define IWL_CFG_RF_TYPE_TH		0x105
#define IWL_CFG_RF_TYPE_TH1		0x108

@@ -477,6 +478,7 @@ extern const struct iwl_cfg_trans_params iwl_snj_trans_cfg;
extern const struct iwl_cfg_trans_params iwl_so_trans_cfg;
extern const struct iwl_cfg_trans_params iwl_so_long_latency_trans_cfg;
extern const struct iwl_cfg_trans_params iwl_ma_trans_cfg;
extern const struct iwl_cfg_trans_params iwl_bz_trans_cfg;
extern const char iwl9162_name[];
extern const char iwl9260_name[];
extern const char iwl9260_1_name[];

@@ -501,8 +503,10 @@ extern const char iwl_ax200_killer_1650w_name[];
extern const char iwl_ax200_killer_1650x_name[];
extern const char iwl_ax201_killer_1650s_name[];
extern const char iwl_ax201_killer_1650i_name[];
extern const char iwl_ma_name[];
extern const char iwl_ax210_killer_1675w_name[];
extern const char iwl_ax210_killer_1675x_name[];
extern const char iwl_ax211_name[];
extern const char iwl_ax221_name[];
extern const char iwl_ax411_name[];
#if IS_ENABLED(CONFIG_IWLDVM)
extern const struct iwl_cfg iwl5300_agn_cfg;

@@ -594,7 +598,7 @@ extern const struct iwl_cfg killer1650i_2ax_cfg_qu_c0_hr_b0;
extern const struct iwl_cfg killer1650x_2ax_cfg;
extern const struct iwl_cfg killer1650w_2ax_cfg;
extern const struct iwl_cfg iwl_qnj_b0_hr_b0_cfg;
-extern const struct iwl_cfg iwlax210_2ax_cfg_so_jf_a0;
+extern const struct iwl_cfg iwlax210_2ax_cfg_so_jf_b0;
extern const struct iwl_cfg iwlax210_2ax_cfg_so_hr_a0;
extern const struct iwl_cfg iwlax211_2ax_cfg_so_gf_a0;
extern const struct iwl_cfg iwlax211_2ax_cfg_so_gf_a0_long;

@@ -612,6 +616,10 @@ extern const struct iwl_cfg iwl_cfg_ma_a0_mr_a0;
extern const struct iwl_cfg iwl_cfg_snj_a0_mr_a0;
extern const struct iwl_cfg iwl_cfg_so_a0_hr_a0;
extern const struct iwl_cfg iwl_cfg_quz_a0_hr_b0;
extern const struct iwl_cfg iwl_cfg_bz_a0_hr_b0;
extern const struct iwl_cfg iwl_cfg_bz_a0_gf_a0;
extern const struct iwl_cfg iwl_cfg_bz_a0_gf4_a0;
extern const struct iwl_cfg iwl_cfg_bz_a0_mr_a0;
#endif /* CONFIG_IWLMVM */

#endif /* __IWL_CONFIG_H__ */

@@ -578,6 +578,9 @@ enum msix_fh_int_causes {
        MSIX_FH_INT_CAUSES_FH_ERR = BIT(21),
};

/* The low 16 bits are for rx data queue indication */
#define MSIX_FH_INT_CAUSES_DATA_QUEUE 0xffff

/*
 * Causes for the HW register interrupts
 */

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
- * Copyright (C) 2018-2020 Intel Corporation
+ * Copyright (C) 2018-2021 Intel Corporation
 */
#include <linux/firmware.h>
#include "iwl-drv.h"

@@ -426,7 +426,8 @@ void iwl_dbg_tlv_load_bin(struct device *dev, struct iwl_trans *trans)
        const struct firmware *fw;
        int res;

-       if (!iwlwifi_mod_params.enable_ini)
+       if (!iwlwifi_mod_params.enable_ini ||
+           trans->trans_cfg->device_family <= IWL_DEVICE_FAMILY_9000)
                return;

        res = firmware_request_nowarn(&fw, "iwl-debug-yoyo.bin", dev);

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
- * Copyright (C) 2005-2014, 2018-2020 Intel Corporation
+ * Copyright (C) 2005-2014, 2018-2021 Intel Corporation
 * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
 * Copyright (C) 2016-2017 Intel Deutschland GmbH
 */

@@ -550,8 +550,6 @@ static int iwl_parse_v1_v2_firmware(struct iwl_drv *drv,
        return 0;
}

-#define FW_ADDR_CACHE_CONTROL 0xC0000000
-
static int iwl_parse_tlv_firmware(struct iwl_drv *drv,
                                  const struct firmware *ucode_raw,
                                  struct iwl_firmware_pieces *pieces,

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2005-2014, 2018-2020 Intel Corporation
+ * Copyright (C) 2005-2014, 2018-2021 Intel Corporation
 * Copyright (C) 2013-2014 Intel Mobile Communications GmbH
 * Copyright (C) 2015 Intel Deutschland GmbH
 */

@@ -176,6 +176,8 @@ iwl_op_mode_hw_rf_kill(struct iwl_op_mode *op_mode, bool state)
static inline void iwl_op_mode_free_skb(struct iwl_op_mode *op_mode,
                                        struct sk_buff *skb)
{
        if (WARN_ON_ONCE(!op_mode))
                return;
        op_mode->ops->free_skb(op_mode, skb);
}

@@ -2,7 +2,7 @@
/*
 * Copyright (C) 2015 Intel Mobile Communications GmbH
 * Copyright (C) 2016-2017 Intel Deutschland GmbH
- * Copyright (C) 2019-2020 Intel Corporation
+ * Copyright (C) 2019-2021 Intel Corporation
 */
#include <linux/kernel.h>
#include <linux/bsearch.h>

@@ -21,7 +21,6 @@ struct iwl_trans *iwl_trans_alloc(unsigned int priv_size,
                                  const struct iwl_cfg_trans_params *cfg_trans)
{
        struct iwl_trans *trans;
-       int txcmd_size, txcmd_align;
#ifdef CONFIG_LOCKDEP
        static struct lock_class_key __key;
#endif

@@ -31,23 +30,6 @@ struct iwl_trans *iwl_trans_alloc(unsigned int priv_size,
                return NULL;

        trans->trans_cfg = cfg_trans;
-       if (!cfg_trans->gen2) {
-               txcmd_size = sizeof(struct iwl_tx_cmd);
-               txcmd_align = sizeof(void *);
-       } else if (cfg_trans->device_family < IWL_DEVICE_FAMILY_AX210) {
-               txcmd_size = sizeof(struct iwl_tx_cmd_gen2);
-               txcmd_align = 64;
-       } else {
-               txcmd_size = sizeof(struct iwl_tx_cmd_gen3);
-               txcmd_align = 128;
-       }
-
-       txcmd_size += sizeof(struct iwl_cmd_header);
-       txcmd_size += 36; /* biggest possible 802.11 header */
-
-       /* Ensure device TX cmd cannot reach/cross a page boundary in gen2 */
-       if (WARN_ON(cfg_trans->gen2 && txcmd_size >= txcmd_align))
-               return ERR_PTR(-EINVAL);

#ifdef CONFIG_LOCKDEP
        lockdep_init_map(&trans->sync_cmd_lockdep_map, "sync_cmd_lockdep_map",

@@ -58,22 +40,7 @@ struct iwl_trans *iwl_trans_alloc(unsigned int priv_size,
        trans->ops = ops;
        trans->num_rx_queues = 1;

-       if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210)
-               trans->txqs.bc_tbl_size = sizeof(struct iwl_gen3_bc_tbl);
-       else
-               trans->txqs.bc_tbl_size = sizeof(struct iwlagn_scd_bc_tbl);
-       /*
-        * For gen2 devices, we use a single allocation for each byte-count
-        * table, but they're pretty small (1k) so use a DMA pool that we
-        * allocate here.
-        */
-       if (trans->trans_cfg->gen2) {
-               trans->txqs.bc_pool = dmam_pool_create("iwlwifi:bc", dev,
-                                                      trans->txqs.bc_tbl_size,
-                                                      256, 0);
-               if (!trans->txqs.bc_pool)
-                       return NULL;
-       }
        WARN_ON(!ops->wait_txq_empty && !ops->wait_tx_queues_empty);

        if (trans->trans_cfg->use_tfh) {
                trans->txqs.tfd.addr_size = 64;

@@ -86,6 +53,52 @@ struct iwl_trans *iwl_trans_alloc(unsigned int priv_size,
        }
        trans->max_skb_frags = IWL_TRANS_MAX_FRAGS(trans);

        return trans;
}

int iwl_trans_init(struct iwl_trans *trans)
{
        int txcmd_size, txcmd_align;

        if (!trans->trans_cfg->gen2) {
                txcmd_size = sizeof(struct iwl_tx_cmd);
                txcmd_align = sizeof(void *);
        } else if (trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_AX210) {
                txcmd_size = sizeof(struct iwl_tx_cmd_gen2);
                txcmd_align = 64;
        } else {
                txcmd_size = sizeof(struct iwl_tx_cmd_gen3);
                txcmd_align = 128;
        }

        txcmd_size += sizeof(struct iwl_cmd_header);
        txcmd_size += 36; /* biggest possible 802.11 header */

        /* Ensure device TX cmd cannot reach/cross a page boundary in gen2 */
        if (WARN_ON(trans->trans_cfg->gen2 && txcmd_size >= txcmd_align))
                return -EINVAL;

        if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210)
                trans->txqs.bc_tbl_size = sizeof(struct iwl_gen3_bc_tbl);
        else
                trans->txqs.bc_tbl_size = sizeof(struct iwlagn_scd_bc_tbl);
        /*
         * For gen2 devices, we use a single allocation for each byte-count
         * table, but they're pretty small (1k) so use a DMA pool that we
         * allocate here.
         */
        if (trans->trans_cfg->gen2) {
                trans->txqs.bc_pool = dmam_pool_create("iwlwifi:bc", trans->dev,
                                                       trans->txqs.bc_tbl_size,
                                                       256, 0);
                if (!trans->txqs.bc_pool)
                        return -ENOMEM;
        }

        /* Some things must not change even if the config does */
        WARN_ON(trans->txqs.tfd.addr_size !=
                (trans->trans_cfg->use_tfh ? 64 : 36));

        snprintf(trans->dev_cmd_pool_name, sizeof(trans->dev_cmd_pool_name),
                 "iwl_cmd_pool:%s", dev_name(trans->dev));
        trans->dev_cmd_pool =

@@ -93,36 +106,36 @@ struct iwl_trans *iwl_trans_alloc(unsigned int priv_size,
                                  txcmd_size, txcmd_align,
                                  SLAB_HWCACHE_ALIGN, NULL);
        if (!trans->dev_cmd_pool)
-               return NULL;
-
-       WARN_ON(!ops->wait_txq_empty && !ops->wait_tx_queues_empty);
+               return -ENOMEM;

        trans->txqs.tso_hdr_page = alloc_percpu(struct iwl_tso_hdr_page);
        if (!trans->txqs.tso_hdr_page) {
                kmem_cache_destroy(trans->dev_cmd_pool);
-               return NULL;
+               return -ENOMEM;
        }

        /* Initialize the wait queue for commands */
        init_waitqueue_head(&trans->wait_command_queue);

-       return trans;
+       return 0;
}

void iwl_trans_free(struct iwl_trans *trans)
{
        int i;

-       for_each_possible_cpu(i) {
-               struct iwl_tso_hdr_page *p =
-                       per_cpu_ptr(trans->txqs.tso_hdr_page, i);
+       if (trans->txqs.tso_hdr_page) {
+               for_each_possible_cpu(i) {
+                       struct iwl_tso_hdr_page *p =
+                               per_cpu_ptr(trans->txqs.tso_hdr_page, i);

-               if (p->page)
-                       __free_page(p->page);
+                       if (p && p->page)
+                               __free_page(p->page);
+               }
+
+               free_percpu(trans->txqs.tso_hdr_page);
        }

-       free_percpu(trans->txqs.tso_hdr_page);
-
        kmem_cache_destroy(trans->dev_cmd_pool);
}

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2005-2014, 2018-2020 Intel Corporation
+ * Copyright (C) 2005-2014, 2018-2021 Intel Corporation
 * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
 * Copyright (C) 2016-2017 Intel Deutschland GmbH
 */

@@ -1267,7 +1267,8 @@ static inline int iwl_trans_wait_tx_queues_empty(struct iwl_trans *trans,
        if (WARN_ON_ONCE(!trans->ops->wait_tx_queues_empty))
                return -ENOTSUPP;

-       if (WARN_ON_ONCE(trans->state != IWL_TRANS_FW_ALIVE)) {
+       /* No need to wait if the firmware is not alive */
+       if (trans->state != IWL_TRANS_FW_ALIVE) {
                IWL_ERR(trans, "%s bad state = %d\n", __func__, trans->state);
                return -EIO;
        }

@@ -1438,6 +1439,7 @@ struct iwl_trans *iwl_trans_alloc(unsigned int priv_size,
                                  struct device *dev,
                                  const struct iwl_trans_ops *ops,
                                  const struct iwl_cfg_trans_params *cfg_trans);
int iwl_trans_init(struct iwl_trans *trans);
void iwl_trans_free(struct iwl_trans *trans);

/*****************************************************
@@ -93,6 +93,15 @@
 #define IWL_MVM_ENABLE_EBS			1
 #define IWL_MVM_FTM_INITIATOR_ALGO		IWL_TOF_ALGO_TYPE_MAX_LIKE
 #define IWL_MVM_FTM_INITIATOR_DYNACK		true
+#define IWL_MVM_FTM_R2I_MAX_REP			7
+#define IWL_MVM_FTM_I2R_MAX_REP			7
+#define IWL_MVM_FTM_R2I_MAX_STS			1
+#define IWL_MVM_FTM_I2R_MAX_STS			1
+#define IWL_MVM_FTM_R2I_MAX_TOTAL_LTF		3
+#define IWL_MVM_FTM_I2R_MAX_TOTAL_LTF		3
+#define IWL_MVM_FTM_INITIATOR_SECURE_LTF	false
+#define IWL_MVM_FTM_RESP_NDP_SUPPORT		true
+#define IWL_MVM_FTM_RESP_LMR_FEEDBACK_SUPPORT	true
 #define IWL_MVM_D3_DEBUG			false
 #define IWL_MVM_USE_TWT				true
 #define IWL_MVM_AMPDU_CONSEC_DROPS_DELBA	10
@@ -108,5 +117,7 @@
 #define IWL_MVM_FTM_INITIATOR_SMOOTH_OVERSHOOT	20016
 #define IWL_MVM_FTM_INITIATOR_SMOOTH_AGE_SEC	2
 #define IWL_MVM_DISABLE_AP_FILS			false
+#define IWL_MVM_6GHZ_PASSIVE_SCAN_TIMEOUT	3000 /* in seconds */
+#define IWL_MVM_6GHZ_PASSIVE_SCAN_ASSOC_TIMEOUT	60 /* in seconds */
 
 #endif /* __MVM_CONSTANTS_H */

@@ -2028,6 +2028,8 @@ static int __iwl_mvm_resume(struct iwl_mvm *mvm, bool test)
 
 	mutex_lock(&mvm->mutex);
 
+	mvm->last_reset_or_resume_time_jiffies = jiffies;
+
 	/* get the BSS vif pointer again */
 	vif = iwl_mvm_get_bss_vif(mvm);
 	if (IS_ERR_OR_NULL(vif))

@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
 /*
- * Copyright (C) 2012-2014, 2018-2020 Intel Corporation
+ * Copyright (C) 2012-2014, 2018-2021 Intel Corporation
 * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
 * Copyright (C) 2016-2017 Intel Deutschland GmbH
 */
@@ -1210,10 +1210,10 @@ static int _iwl_dbgfs_inject_beacon_ie(struct iwl_mvm *mvm, char *bin, int len)
 		      IWL_UCODE_TLV_API_NEW_BEACON_TEMPLATE))
 		return -EINVAL;
 
-	rcu_read_lock();
+	mutex_lock(&mvm->mutex);
 
 	for (i = 0; i < NUM_MAC_INDEX_DRIVER; i++) {
-		vif = iwl_mvm_rcu_dereference_vif_id(mvm, i, true);
+		vif = iwl_mvm_rcu_dereference_vif_id(mvm, i, false);
 		if (!vif)
 			continue;
 
@@ -1253,18 +1253,16 @@ static int _iwl_dbgfs_inject_beacon_ie(struct iwl_mvm *mvm, char *bin, int len)
 					 &beacon_cmd.tim_size,
 					 beacon->data, beacon->len);
 
-	mutex_lock(&mvm->mutex);
 	iwl_mvm_mac_ctxt_send_beacon_cmd(mvm, beacon, &beacon_cmd,
 					 sizeof(beacon_cmd));
 	mutex_unlock(&mvm->mutex);
 
 	dev_kfree_skb(beacon);
 
-	rcu_read_unlock();
 	return 0;
 
 out_err:
-	rcu_read_unlock();
+	mutex_unlock(&mvm->mutex);
 	return -EINVAL;
 }

@@ -490,6 +490,15 @@ iwl_mvm_ftm_put_target(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
 	if (vif->bss_conf.assoc &&
 	    !memcmp(peer->addr, vif->bss_conf.bssid, ETH_ALEN)) {
 		struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+		struct ieee80211_sta *sta;
+
+		rcu_read_lock();
+
+		sta = rcu_dereference(mvm->fw_id_to_mac_id[mvmvif->ap_sta_id]);
+		if (sta->mfp)
+			FTM_PUT_FLAG(PMF);
+
+		rcu_read_unlock();
 
 		target->sta_id = mvmvif->ap_sta_id;
 	} else {
@@ -684,6 +693,19 @@ iwl_mvm_ftm_set_secured_ranging(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
 	}
 }
 
+static int
+iwl_mvm_ftm_put_target_v7(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+			  struct cfg80211_pmsr_request_peer *peer,
+			  struct iwl_tof_range_req_ap_entry_v7 *target)
+{
+	int err = iwl_mvm_ftm_put_target(mvm, vif, peer, (void *)target);
+
+	if (err)
+		return err;
+
+	iwl_mvm_ftm_set_secured_ranging(mvm, vif, target);
+	return err;
+}
+
 static int iwl_mvm_ftm_start_v11(struct iwl_mvm *mvm,
 				 struct ieee80211_vif *vif,
 				 struct cfg80211_pmsr_request *req)
@@ -704,11 +726,67 @@ static int iwl_mvm_ftm_start_v11(struct iwl_mvm *mvm,
 		struct cfg80211_pmsr_request_peer *peer = &req->peers[i];
 		struct iwl_tof_range_req_ap_entry_v7 *target = &cmd.ap[i];
 
-		err = iwl_mvm_ftm_put_target(mvm, vif, peer, (void *)target);
+		err = iwl_mvm_ftm_put_target_v7(mvm, vif, peer, target);
 		if (err)
 			return err;
 	}
 
 	return iwl_mvm_ftm_send_cmd(mvm, &hcmd);
 }
 
+static void
+iwl_mvm_ftm_set_ndp_params(struct iwl_mvm *mvm,
+			   struct iwl_tof_range_req_ap_entry_v8 *target)
+{
+	/* Only 2 STS are supported on Tx */
+	u32 i2r_max_sts = IWL_MVM_FTM_I2R_MAX_STS > 1 ? 1 :
+		IWL_MVM_FTM_I2R_MAX_STS;
+
+	target->r2i_ndp_params = IWL_MVM_FTM_R2I_MAX_REP |
+		(IWL_MVM_FTM_R2I_MAX_STS << IWL_LOCATION_MAX_STS_POS);
+	target->i2r_ndp_params = IWL_MVM_FTM_I2R_MAX_REP |
+		(i2r_max_sts << IWL_LOCATION_MAX_STS_POS);
+	target->r2i_max_total_ltf = IWL_MVM_FTM_R2I_MAX_TOTAL_LTF;
+	target->i2r_max_total_ltf = IWL_MVM_FTM_I2R_MAX_TOTAL_LTF;
+}
+
+static int iwl_mvm_ftm_start_v12(struct iwl_mvm *mvm,
+				 struct ieee80211_vif *vif,
+				 struct cfg80211_pmsr_request *req)
+{
+	struct iwl_tof_range_req_cmd_v12 cmd;
+	struct iwl_host_cmd hcmd = {
+		.id = iwl_cmd_id(TOF_RANGE_REQ_CMD, LOCATION_GROUP, 0),
+		.dataflags[0] = IWL_HCMD_DFL_DUP,
+		.data[0] = &cmd,
+		.len[0] = sizeof(cmd),
+	};
+	u8 i;
+	int err;
+
+	iwl_mvm_ftm_cmd_common(mvm, vif, (void *)&cmd, req);
+
+	for (i = 0; i < cmd.num_of_ap; i++) {
+		struct cfg80211_pmsr_request_peer *peer = &req->peers[i];
+		struct iwl_tof_range_req_ap_entry_v8 *target = &cmd.ap[i];
+		u32 flags;
+
+		err = iwl_mvm_ftm_put_target_v7(mvm, vif, peer, (void *)target);
+		if (err)
+			return err;
+
+		iwl_mvm_ftm_set_secured_ranging(mvm, vif, target);
+		iwl_mvm_ftm_set_ndp_params(mvm, target);
+
+		/*
+		 * If secure LTF is turned off, replace the flag with PMF only
+		 */
+		flags = le32_to_cpu(target->initiator_ap_flags);
+		if ((flags & IWL_INITIATOR_AP_FLAGS_SECURED) &&
+		    !IWL_MVM_FTM_INITIATOR_SECURE_LTF) {
+			flags &= ~IWL_INITIATOR_AP_FLAGS_SECURED;
+			flags |= IWL_INITIATOR_AP_FLAGS_PMF;
+			target->initiator_ap_flags = cpu_to_le32(flags);
+		}
+	}
+
+	return iwl_mvm_ftm_send_cmd(mvm, &hcmd);
+}
@@ -732,6 +810,9 @@ int iwl_mvm_ftm_start(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
 					   IWL_FW_CMD_VER_UNKNOWN);
 
 	switch (cmd_ver) {
+	case 12:
+		err = iwl_mvm_ftm_start_v12(mvm, vif, req);
+		break;
 	case 11:
 		err = iwl_mvm_ftm_start_v11(mvm, vif, req);
 		break;

@@ -75,6 +75,24 @@ static int iwl_mvm_ftm_responder_set_bw_v2(struct cfg80211_chan_def *chandef,
 	return 0;
 }
 
+static void
+iwl_mvm_ftm_responder_set_ndp(struct iwl_mvm *mvm,
+			      struct iwl_tof_responder_config_cmd_v8 *cmd)
+{
+	/* Up to 2 R2I STS are allowed on the responder */
+	u32 r2i_max_sts = IWL_MVM_FTM_R2I_MAX_STS < 2 ?
+		IWL_MVM_FTM_R2I_MAX_STS : 1;
+
+	cmd->r2i_ndp_params = IWL_MVM_FTM_R2I_MAX_REP |
+		(r2i_max_sts << IWL_RESPONDER_STS_POS) |
+		(IWL_MVM_FTM_R2I_MAX_TOTAL_LTF << IWL_RESPONDER_TOTAL_LTF_POS);
+	cmd->i2r_ndp_params = IWL_MVM_FTM_I2R_MAX_REP |
+		(IWL_MVM_FTM_I2R_MAX_STS << IWL_RESPONDER_STS_POS) |
+		(IWL_MVM_FTM_I2R_MAX_TOTAL_LTF << IWL_RESPONDER_TOTAL_LTF_POS);
+	cmd->cmd_valid_fields |=
+		cpu_to_le32(IWL_TOF_RESPONDER_CMD_VALID_NDP_PARAMS);
+}
+
 static int
 iwl_mvm_ftm_responder_cmd(struct iwl_mvm *mvm,
 			  struct ieee80211_vif *vif,
@@ -82,11 +100,11 @@ iwl_mvm_ftm_responder_cmd(struct iwl_mvm *mvm,
 {
 	struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
 	/*
-	 * The command structure is the same for versions 6 and 7, (only the
+	 * The command structure is the same for versions 6, 7 and 8 (only the
 	 * field interpretation is different), so the same struct can be use
 	 * for all cases.
 	 */
-	struct iwl_tof_responder_config_cmd cmd = {
+	struct iwl_tof_responder_config_cmd_v8 cmd = {
 		.channel_num = chandef->chan->hw_value,
 		.cmd_valid_fields =
 			cpu_to_le32(IWL_TOF_RESPONDER_CMD_VALID_CHAN_INFO |
@@ -100,7 +118,10 @@ iwl_mvm_ftm_responder_cmd(struct iwl_mvm *mvm,
 
 	lockdep_assert_held(&mvm->mutex);
 
-	if (cmd_ver == 7)
+	if (cmd_ver == 8)
+		iwl_mvm_ftm_responder_set_ndp(mvm, &cmd);
+
+	if (cmd_ver >= 7)
 		err = iwl_mvm_ftm_responder_set_bw_v2(chandef, &cmd.format_bw,
 						      &cmd.ctrl_ch_position);
 	else

@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
 /*
- * Copyright (C) 2012-2014, 2018-2020 Intel Corporation
+ * Copyright (C) 2012-2014, 2018-2021 Intel Corporation
 * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
 * Copyright (C) 2016-2017 Intel Deutschland GmbH
 */
@@ -29,6 +29,9 @@
 
 #define UCODE_VALID_OK	cpu_to_le32(0x1)
 
+#define IWL_PPAG_MASK 3
+#define IWL_PPAG_ETSI_MASK BIT(0)
+
 struct iwl_mvm_alive_data {
 	bool valid;
 	u32 scd_base_addr;
@@ -70,56 +73,6 @@ static int iwl_send_rss_cfg_cmd(struct iwl_mvm *mvm)
 	return iwl_mvm_send_cmd_pdu(mvm, RSS_CONFIG_CMD, 0, sizeof(cmd), &cmd);
 }
 
-static int iwl_configure_rxq(struct iwl_mvm *mvm)
-{
-	int i, num_queues, size, ret;
-	struct iwl_rfh_queue_config *cmd;
-	struct iwl_host_cmd hcmd = {
-		.id = WIDE_ID(DATA_PATH_GROUP, RFH_QUEUE_CONFIG_CMD),
-		.dataflags[0] = IWL_HCMD_DFL_NOCOPY,
-	};
-
-	/*
-	 * The default queue is configured via context info, so if we
-	 * have a single queue, there's nothing to do here.
-	 */
-	if (mvm->trans->num_rx_queues == 1)
-		return 0;
-
-	/* skip the default queue */
-	num_queues = mvm->trans->num_rx_queues - 1;
-
-	size = struct_size(cmd, data, num_queues);
-
-	cmd = kzalloc(size, GFP_KERNEL);
-	if (!cmd)
-		return -ENOMEM;
-
-	cmd->num_queues = num_queues;
-
-	for (i = 0; i < num_queues; i++) {
-		struct iwl_trans_rxq_dma_data data;
-
-		cmd->data[i].q_num = i + 1;
-		iwl_trans_get_rxq_dma_data(mvm->trans, i + 1, &data);
-
-		cmd->data[i].fr_bd_cb = cpu_to_le64(data.fr_bd_cb);
-		cmd->data[i].urbd_stts_wrptr =
-			cpu_to_le64(data.urbd_stts_wrptr);
-		cmd->data[i].ur_bd_cb = cpu_to_le64(data.ur_bd_cb);
-		cmd->data[i].fr_bd_wid = cpu_to_le32(data.fr_bd_wid);
-	}
-
-	hcmd.data[0] = cmd;
-	hcmd.len[0] = size;
-
-	ret = iwl_mvm_send_cmd(mvm, &hcmd);
-
-	kfree(cmd);
-
-	return ret;
-}
-
 static int iwl_mvm_send_dqa_cmd(struct iwl_mvm *mvm)
 {
 	struct iwl_dqa_enable_cmd dqa_cmd = {
@@ -233,7 +186,8 @@ static bool iwl_alive_fn(struct iwl_notif_wait_data *notif_wait,
 		mvm->trans->dbg.lmac_error_event_table[1] =
 			le32_to_cpu(lmac2->dbg_ptrs.error_event_table_ptr);
 
-	umac_error_table = le32_to_cpu(umac->dbg_ptrs.error_info_addr);
+	umac_error_table = le32_to_cpu(umac->dbg_ptrs.error_info_addr) &
+		~FW_ADDR_CACHE_CONTROL;
 
 	if (umac_error_table) {
 		if (umac_error_table >=
@@ -773,16 +727,16 @@ int iwl_mvm_sar_select_profile(struct iwl_mvm *mvm, int prof_a, int prof_b)
 	} else if (fw_has_api(&mvm->fw->ucode_capa,
 			      IWL_UCODE_TLV_API_REDUCE_TX_POWER)) {
 		len = sizeof(cmd.v5);
-		n_subbands = IWL_NUM_SUB_BANDS;
+		n_subbands = IWL_NUM_SUB_BANDS_V1;
 		per_chain = cmd.v5.per_chain[0][0];
 	} else if (fw_has_capa(&mvm->fw->ucode_capa,
 			       IWL_UCODE_TLV_CAPA_TX_POWER_ACK)) {
 		len = sizeof(cmd.v4);
-		n_subbands = IWL_NUM_SUB_BANDS;
+		n_subbands = IWL_NUM_SUB_BANDS_V1;
 		per_chain = cmd.v4.per_chain[0][0];
 	} else {
 		len = sizeof(cmd.v3);
-		n_subbands = IWL_NUM_SUB_BANDS;
+		n_subbands = IWL_NUM_SUB_BANDS_V1;
 		per_chain = cmd.v3.per_chain[0][0];
 	}
 
@@ -909,46 +863,50 @@ static int iwl_mvm_sar_geo_init(struct iwl_mvm *mvm)
 
 static int iwl_mvm_get_ppag_table(struct iwl_mvm *mvm)
 {
-	union acpi_object *wifi_pkg, *data, *enabled;
+	union acpi_object *wifi_pkg, *data, *flags;
 	int i, j, ret, tbl_rev, num_sub_bands;
 	int idx = 2;
 	s8 *gain;
 
 	/*
-	 * The 'enabled' field is the same in v1 and v2 so we can just
+	 * The 'flags' field is the same in v1 and in v2 so we can just
 	 * use v1 to access it.
 	 */
-	mvm->fwrt.ppag_table.v1.enabled = cpu_to_le32(0);
+	mvm->fwrt.ppag_table.v1.flags = cpu_to_le32(0);
 
 	data = iwl_acpi_get_object(mvm->dev, ACPI_PPAG_METHOD);
 	if (IS_ERR(data))
 		return PTR_ERR(data);
 
-	/* try to read ppag table revision 1 */
+	/* try to read ppag table rev 2 or 1 (both have the same data size) */
 	wifi_pkg = iwl_acpi_get_wifi_pkg(mvm->dev, data,
 					 ACPI_PPAG_WIFI_DATA_SIZE_V2, &tbl_rev);
 	if (!IS_ERR(wifi_pkg)) {
-		if (tbl_rev != 1) {
+		if (tbl_rev == 1 || tbl_rev == 2) {
+			num_sub_bands = IWL_NUM_SUB_BANDS_V2;
+			gain = mvm->fwrt.ppag_table.v2.gain[0];
+			mvm->fwrt.ppag_ver = tbl_rev;
+			IWL_DEBUG_RADIO(mvm,
+					"Reading PPAG table v2 (tbl_rev=%d)\n",
+					tbl_rev);
+			goto read_table;
+		} else {
 			ret = -EINVAL;
 			goto out_free;
 		}
-		num_sub_bands = IWL_NUM_SUB_BANDS_V2;
-		gain = mvm->fwrt.ppag_table.v2.gain[0];
-		mvm->fwrt.ppag_ver = 2;
-		IWL_DEBUG_RADIO(mvm, "Reading PPAG table v2 (tbl_rev=1)\n");
-		goto read_table;
 	}
 
 	/* try to read ppag table revision 0 */
 	wifi_pkg = iwl_acpi_get_wifi_pkg(mvm->dev, data,
-					 ACPI_PPAG_WIFI_DATA_SIZE, &tbl_rev);
+					 ACPI_PPAG_WIFI_DATA_SIZE_V1, &tbl_rev);
 	if (!IS_ERR(wifi_pkg)) {
 		if (tbl_rev != 0) {
 			ret = -EINVAL;
 			goto out_free;
 		}
-		num_sub_bands = IWL_NUM_SUB_BANDS;
+		num_sub_bands = IWL_NUM_SUB_BANDS_V1;
 		gain = mvm->fwrt.ppag_table.v1.gain[0];
-		mvm->fwrt.ppag_ver = 1;
+		mvm->fwrt.ppag_ver = 0;
 		IWL_DEBUG_RADIO(mvm, "Reading PPAG table v1 (tbl_rev=0)\n");
 		goto read_table;
 	}
@@ -956,15 +914,17 @@ static int iwl_mvm_get_ppag_table(struct iwl_mvm *mvm)
 		goto out_free;
 
 read_table:
-	enabled = &wifi_pkg->package.elements[1];
-	if (enabled->type != ACPI_TYPE_INTEGER ||
-	    (enabled->integer.value != 0 && enabled->integer.value != 1)) {
+	flags = &wifi_pkg->package.elements[1];
+
+	if (flags->type != ACPI_TYPE_INTEGER) {
 		ret = -EINVAL;
 		goto out_free;
 	}
 
-	mvm->fwrt.ppag_table.v1.enabled = cpu_to_le32(enabled->integer.value);
-	if (!mvm->fwrt.ppag_table.v1.enabled) {
+	mvm->fwrt.ppag_table.v1.flags = cpu_to_le32(flags->integer.value &
+						    IWL_PPAG_MASK);
+
+	if (!mvm->fwrt.ppag_table.v1.flags) {
 		ret = 0;
 		goto out_free;
 	}
@@ -992,12 +952,13 @@ read_table:
 			    (j != 0 &&
 			     (gain[i * num_sub_bands + j] > ACPI_PPAG_MAX_HB ||
 			      gain[i * num_sub_bands + j] < ACPI_PPAG_MIN_HB))) {
-				mvm->fwrt.ppag_table.v1.enabled = cpu_to_le32(0);
+				mvm->fwrt.ppag_table.v1.flags = cpu_to_le32(0);
 				ret = -EINVAL;
 				goto out_free;
 			}
 		}
 	}
 
 	ret = 0;
 out_free:
 	kfree(data);
@@ -1015,7 +976,7 @@ int iwl_mvm_ppag_send_cmd(struct iwl_mvm *mvm)
 				"PPAG capability not supported by FW, command not sent.\n");
 		return 0;
 	}
-	if (!mvm->fwrt.ppag_table.v1.enabled) {
+	if (!mvm->fwrt.ppag_table.v1.flags) {
 		IWL_DEBUG_RADIO(mvm, "PPAG not enabled, command not sent.\n");
 		return 0;
 	}
@@ -1024,20 +985,28 @@ int iwl_mvm_ppag_send_cmd(struct iwl_mvm *mvm)
 					PER_PLATFORM_ANT_GAIN_CMD,
 					IWL_FW_CMD_VER_UNKNOWN);
 	if (cmd_ver == 1) {
-		num_sub_bands = IWL_NUM_SUB_BANDS;
+		num_sub_bands = IWL_NUM_SUB_BANDS_V1;
 		gain = mvm->fwrt.ppag_table.v1.gain[0];
 		cmd_size = sizeof(mvm->fwrt.ppag_table.v1);
-		if (mvm->fwrt.ppag_ver == 2) {
+		if (mvm->fwrt.ppag_ver == 1 || mvm->fwrt.ppag_ver == 2) {
 			IWL_DEBUG_RADIO(mvm,
-					"PPAG table is v2 but FW supports v1, sending truncated table\n");
+					"PPAG table rev is %d but FW supports v1, sending truncated table\n",
+					mvm->fwrt.ppag_ver);
+			mvm->fwrt.ppag_table.v1.flags &=
+				cpu_to_le32(IWL_PPAG_ETSI_MASK);
 		}
-	} else if (cmd_ver == 2) {
+	} else if (cmd_ver == 2 || cmd_ver == 3) {
 		num_sub_bands = IWL_NUM_SUB_BANDS_V2;
 		gain = mvm->fwrt.ppag_table.v2.gain[0];
 		cmd_size = sizeof(mvm->fwrt.ppag_table.v2);
-		if (mvm->fwrt.ppag_ver == 1) {
+		if (mvm->fwrt.ppag_ver == 0) {
 			IWL_DEBUG_RADIO(mvm,
 					"PPAG table is v1 but FW supports v2, sending padded table\n");
+		} else if (cmd_ver == 2 && mvm->fwrt.ppag_ver == 2) {
+			IWL_DEBUG_RADIO(mvm,
+					"PPAG table is v3 but FW supports v2, sending partial bitmap.\n");
+			mvm->fwrt.ppag_table.v1.flags &=
+				cpu_to_le32(IWL_PPAG_ETSI_MASK);
 		}
 	} else {
 		IWL_DEBUG_RADIO(mvm, "Unsupported PPAG command version\n");
@@ -1102,7 +1071,7 @@ static int iwl_mvm_ppag_init(struct iwl_mvm *mvm)
 		IWL_DEBUG_RADIO(mvm,
 				"System vendor '%s' is not in the approved list, disabling PPAG.\n",
 				dmi_get_system_info(DMI_SYS_VENDOR));
-		mvm->fwrt.ppag_table.v1.enabled = cpu_to_le32(0);
+		mvm->fwrt.ppag_table.v1.flags = cpu_to_le32(0);
 		return 0;
 	}
 
@@ -1144,33 +1113,6 @@ static void iwl_mvm_tas_init(struct iwl_mvm *mvm)
 		IWL_DEBUG_RADIO(mvm, "failed to send TAS_CONFIG (%d)\n", ret);
 }
 
-static u8 iwl_mvm_eval_dsm_indonesia_5g2(struct iwl_mvm *mvm)
-{
-	u8 value;
-
-	int ret = iwl_acpi_get_dsm_u8((&mvm->fwrt)->dev, 0,
-				      DSM_FUNC_ENABLE_INDONESIA_5G2,
-				      &iwl_guid, &value);
-
-	if (ret < 0)
-		IWL_DEBUG_RADIO(mvm,
-				"Failed to evaluate DSM function ENABLE_INDONESIA_5G2, ret=%d\n",
-				ret);
-
-	else if (value >= DSM_VALUE_INDONESIA_MAX)
-		IWL_DEBUG_RADIO(mvm,
-				"DSM function ENABLE_INDONESIA_5G2 return invalid value, value=%d\n",
-				value);
-
-	else if (value == DSM_VALUE_INDONESIA_ENABLE) {
-		IWL_DEBUG_RADIO(mvm,
-				"Evaluated DSM function ENABLE_INDONESIA_5G2: Enabling 5g2\n");
-		return DSM_VALUE_INDONESIA_ENABLE;
-	}
-	/* default behaviour is disabled */
-	return DSM_VALUE_INDONESIA_DISABLE;
-}
-
 static u8 iwl_mvm_eval_dsm_rfi(struct iwl_mvm *mvm)
 {
 	u8 value;
@@ -1195,64 +1137,27 @@ static u8 iwl_mvm_eval_dsm_rfi(struct iwl_mvm *mvm)
 	return DSM_VALUE_RFI_DISABLE;
 }
 
-static u8 iwl_mvm_eval_dsm_disable_srd(struct iwl_mvm *mvm)
-{
-	u8 value;
-	int ret = iwl_acpi_get_dsm_u8((&mvm->fwrt)->dev, 0,
-				      DSM_FUNC_DISABLE_SRD,
-				      &iwl_guid, &value);
-
-	if (ret < 0)
-		IWL_DEBUG_RADIO(mvm,
-				"Failed to evaluate DSM function DISABLE_SRD, ret=%d\n",
-				ret);
-
-	else if (value >= DSM_VALUE_SRD_MAX)
-		IWL_DEBUG_RADIO(mvm,
-				"DSM function DISABLE_SRD return invalid value, value=%d\n",
-				value);
-
-	else if (value == DSM_VALUE_SRD_PASSIVE) {
-		IWL_DEBUG_RADIO(mvm,
-				"Evaluated DSM function DISABLE_SRD: setting SRD to passive\n");
-		return DSM_VALUE_SRD_PASSIVE;
-
-	} else if (value == DSM_VALUE_SRD_DISABLE) {
-		IWL_DEBUG_RADIO(mvm,
-				"Evaluated DSM function DISABLE_SRD: disabling SRD\n");
-		return DSM_VALUE_SRD_DISABLE;
-	}
-	/* default behaviour is active */
-	return DSM_VALUE_SRD_ACTIVE;
-}
-
 static void iwl_mvm_lari_cfg(struct iwl_mvm *mvm)
 {
-	u8 ret;
 	int cmd_ret;
-	struct iwl_lari_config_change_cmd_v2 cmd = {};
+	struct iwl_lari_config_change_cmd_v3 cmd = {};
 
-	if (iwl_mvm_eval_dsm_indonesia_5g2(mvm) == DSM_VALUE_INDONESIA_ENABLE)
-		cmd.config_bitmap |=
-			cpu_to_le32(LARI_CONFIG_ENABLE_5G2_IN_INDONESIA_MSK);
-
-	ret = iwl_mvm_eval_dsm_disable_srd(mvm);
-	if (ret == DSM_VALUE_SRD_PASSIVE)
-		cmd.config_bitmap |=
-			cpu_to_le32(LARI_CONFIG_CHANGE_ETSI_TO_PASSIVE_MSK);
-
-	else if (ret == DSM_VALUE_SRD_DISABLE)
-		cmd.config_bitmap |=
-			cpu_to_le32(LARI_CONFIG_CHANGE_ETSI_TO_DISABLED_MSK);
+	cmd.config_bitmap = iwl_acpi_get_lari_config_bitmap(&mvm->fwrt);
 
 	/* apply more config masks here */
 
 	if (cmd.config_bitmap) {
-		size_t cmd_size = iwl_fw_lookup_cmd_ver(mvm->fw,
-							REGULATORY_AND_NVM_GROUP,
-							LARI_CONFIG_CHANGE, 1) == 2 ?
-			sizeof(struct iwl_lari_config_change_cmd_v2) :
-			sizeof(struct iwl_lari_config_change_cmd_v1);
+		size_t cmd_size;
+		u8 cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw,
+						   REGULATORY_AND_NVM_GROUP,
+						   LARI_CONFIG_CHANGE, 1);
+		if (cmd_ver == 3)
+			cmd_size = sizeof(struct iwl_lari_config_change_cmd_v3);
+		else if (cmd_ver == 2)
+			cmd_size = sizeof(struct iwl_lari_config_change_cmd_v2);
+		else
+			cmd_size = sizeof(struct iwl_lari_config_change_cmd_v1);
 
 		IWL_DEBUG_RADIO(mvm,
 				"sending LARI_CONFIG_CHANGE, config_bitmap=0x%x\n",
 				le32_to_cpu(cmd.config_bitmap));
@@ -1485,14 +1390,9 @@ int iwl_mvm_up(struct iwl_mvm *mvm)
 	}
 
 	/* Init RSS configuration */
-	if (mvm->trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_22000) {
-		ret = iwl_configure_rxq(mvm);
-		if (ret) {
-			IWL_ERR(mvm, "Failed to configure RX queues: %d\n",
-				ret);
-			goto error;
-		}
-	}
+	ret = iwl_configure_rxq(&mvm->fwrt);
+	if (ret)
+		goto error;
 
 	if (iwl_mvm_has_new_rx_api(mvm)) {
 		ret = iwl_send_rss_cfg_cmd(mvm);

@@ -1099,6 +1099,8 @@ int __iwl_mvm_mac_start(struct iwl_mvm *mvm)
 	iwl_dbg_tlv_time_point(&mvm->fwrt, IWL_FW_INI_TIME_POINT_PERIODIC,
 			       NULL);
 
+	mvm->last_reset_or_resume_time_jiffies = jiffies;
+
 	if (ret && test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status)) {
 		/* Something went wrong - we need to finish some cleanup
 		 * that normally iwl_mvm_mac_restart_complete() below
@@ -4610,6 +4612,16 @@ static int iwl_mvm_pre_channel_switch(struct ieee80211_hw *hw,
 
 		break;
 	case NL80211_IFTYPE_STATION:
+		/*
+		 * We haven't configured the firmware to be associated yet since
+		 * we don't know the dtim period. In this case, the firmware can't
+		 * track the beacons.
+		 */
+		if (!vif->bss_conf.assoc || !vif->bss_conf.dtim_period) {
+			ret = -EBUSY;
+			goto out_unlock;
+		}
+
 		if (chsw->delay > IWL_MAX_CSA_BLOCK_TX)
 			schedule_delayed_work(&mvmvif->csa_work, 0);
 
@@ -5134,28 +5146,50 @@ static void iwl_mvm_mac_event_callback(struct ieee80211_hw *hw,
 }
 
 void iwl_mvm_sync_rx_queues_internal(struct iwl_mvm *mvm,
-				     struct iwl_mvm_internal_rxq_notif *notif,
-				     u32 size)
+				     enum iwl_mvm_rxq_notif_type type,
+				     bool sync,
+				     const void *data, u32 size)
 {
-	u32 qmask = BIT(mvm->trans->num_rx_queues) - 1;
+	struct {
+		struct iwl_rxq_sync_cmd cmd;
+		struct iwl_mvm_internal_rxq_notif notif;
+	} __packed cmd = {
+		.cmd.rxq_mask = cpu_to_le32(BIT(mvm->trans->num_rx_queues) - 1),
+		.cmd.count =
+			cpu_to_le32(sizeof(struct iwl_mvm_internal_rxq_notif) +
+				    size),
+		.notif.type = type,
+		.notif.sync = sync,
+	};
+	struct iwl_host_cmd hcmd = {
+		.id = WIDE_ID(DATA_PATH_GROUP, TRIGGER_RX_QUEUES_NOTIF_CMD),
+		.data[0] = &cmd,
+		.len[0] = sizeof(cmd),
+		.data[1] = data,
+		.len[1] = size,
+		.flags = sync ? 0 : CMD_ASYNC,
+	};
 	int ret;
 
+	/* size must be a multiple of DWORD */
+	if (WARN_ON(cmd.cmd.count & cpu_to_le32(3)))
+		return;
+
 	if (!iwl_mvm_has_new_rx_api(mvm))
 		return;
 
-	if (notif->sync) {
-		notif->cookie = mvm->queue_sync_cookie;
+	if (sync) {
+		cmd.notif.cookie = mvm->queue_sync_cookie;
 		mvm->queue_sync_state = (1 << mvm->trans->num_rx_queues) - 1;
 	}
 
-	ret = iwl_mvm_notify_rx_queue(mvm, qmask, notif, size, !notif->sync);
+	ret = iwl_mvm_send_cmd(mvm, &hcmd);
 	if (ret) {
 		IWL_ERR(mvm, "Failed to trigger RX queues sync (%d)\n", ret);
 		goto out;
 	}
 
-	if (notif->sync) {
+	if (sync) {
 		lockdep_assert_held(&mvm->mutex);
 		ret = wait_event_timeout(mvm->rx_sync_waitq,
 					 READ_ONCE(mvm->queue_sync_state) == 0 ||
@@ -5167,21 +5201,18 @@ void iwl_mvm_sync_rx_queues_internal(struct iwl_mvm *mvm,
 	}
 
 out:
-	mvm->queue_sync_state = 0;
-	if (notif->sync)
+	if (sync) {
+		mvm->queue_sync_state = 0;
 		mvm->queue_sync_cookie++;
+	}
 }
 
 static void iwl_mvm_sync_rx_queues(struct ieee80211_hw *hw)
 {
 	struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
-	struct iwl_mvm_internal_rxq_notif data = {
-		.type = IWL_MVM_RXQ_EMPTY,
-		.sync = 1,
-	};
 
 	mutex_lock(&mvm->mutex);
-	iwl_mvm_sync_rx_queues_internal(mvm, &data, sizeof(data));
+	iwl_mvm_sync_rx_queues_internal(mvm, IWL_MVM_RXQ_EMPTY, true, NULL, 0);
 	mutex_unlock(&mvm->mutex);
 }

@@ -591,7 +591,6 @@ struct iwl_mvm_tcm {
 		enum iwl_mvm_traffic_load global_load;
 		bool low_latency[NUM_MAC_INDEX_DRIVER];
 		bool change[NUM_MAC_INDEX_DRIVER];
-		bool global_change;
 	} result;
 };
@@ -1096,6 +1095,9 @@ struct iwl_mvm {
 	/* sniffer data to include in radiotap */
 	__le16 cur_aid;
 	u8 cur_bssid[ETH_ALEN];
+
+	unsigned long last_6ghz_passive_scan_jiffies;
+	unsigned long last_reset_or_resume_time_jiffies;
 };
 
 /* Extract MVM priv from op_mode and _hw */
@@ -1570,9 +1572,6 @@ void iwl_mvm_rx_frame_release(struct iwl_mvm *mvm, struct napi_struct *napi,
 			      struct iwl_rx_cmd_buffer *rxb, int queue);
 void iwl_mvm_rx_bar_frame_release(struct iwl_mvm *mvm, struct napi_struct *napi,
 				  struct iwl_rx_cmd_buffer *rxb, int queue);
-int iwl_mvm_notify_rx_queue(struct iwl_mvm *mvm, u32 rxq_mask,
-			    const struct iwl_mvm_internal_rxq_notif *notif,
-			    u32 notif_size, bool async);
 void iwl_mvm_rx_queue_notif(struct iwl_mvm *mvm, struct napi_struct *napi,
 			    struct iwl_rx_cmd_buffer *rxb, int queue);
 void iwl_mvm_rx_tx_cmd(struct iwl_mvm *mvm, struct iwl_rx_cmd_buffer *rxb);
@@ -2001,8 +2000,9 @@ void iwl_mvm_rx_tdls_notif(struct iwl_mvm *mvm, struct iwl_rx_cmd_buffer *rxb);
 void iwl_mvm_tdls_ch_switch_work(struct work_struct *work);
 
 void iwl_mvm_sync_rx_queues_internal(struct iwl_mvm *mvm,
-				     struct iwl_mvm_internal_rxq_notif *notif,
-				     u32 size);
+				     enum iwl_mvm_rxq_notif_type type,
+				     bool sync,
+				     const void *data, u32 size);
 void iwl_mvm_reorder_timer_expired(struct timer_list *t);
 struct ieee80211_vif *iwl_mvm_get_bss_vif(struct iwl_mvm *mvm);
 struct ieee80211_vif *iwl_mvm_get_vif_by_macid(struct iwl_mvm *mvm, u32 macid);

@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
 /*
 * Copyright (C) 2017 Intel Deutschland GmbH
- * Copyright (C) 2018-2020 Intel Corporation
+ * Copyright (C) 2018-2021 Intel Corporation
 */
 #include "rs.h"
 #include "fw-api.h"
@@ -72,19 +72,15 @@ static u16 rs_fw_get_config_flags(struct iwl_mvm *mvm,
 	bool vht_ena = vht_cap->vht_supported;
 	u16 flags = 0;
 
 	/* get STBC flags */
 	if (mvm->cfg->ht_params->stbc &&
 	    (num_of_ant(iwl_mvm_get_valid_tx_ant(mvm)) > 1)) {
-		if (he_cap->has_he) {
-			if (he_cap->he_cap_elem.phy_cap_info[2] &
-			    IEEE80211_HE_PHY_CAP2_STBC_RX_UNDER_80MHZ)
-				flags |= IWL_TLC_MNG_CFG_FLAGS_STBC_MSK;
-
-			if (he_cap->he_cap_elem.phy_cap_info[7] &
-			    IEEE80211_HE_PHY_CAP7_STBC_RX_ABOVE_80MHZ)
-				flags |= IWL_TLC_MNG_CFG_FLAGS_HE_STBC_160MHZ_MSK;
-		} else if ((ht_cap->cap & IEEE80211_HT_CAP_RX_STBC) ||
-			   (vht_ena &&
-			    (vht_cap->cap & IEEE80211_VHT_CAP_RXSTBC_MASK)))
+		if (he_cap->has_he && he_cap->he_cap_elem.phy_cap_info[2] &
+				      IEEE80211_HE_PHY_CAP2_STBC_RX_UNDER_80MHZ)
 			flags |= IWL_TLC_MNG_CFG_FLAGS_STBC_MSK;
+		else if (vht_cap->cap & IEEE80211_VHT_CAP_RXSTBC_MASK)
+			flags |= IWL_TLC_MNG_CFG_FLAGS_STBC_MSK;
+		else if (ht_cap->cap & IEEE80211_HT_CAP_RX_STBC)
+			flags |= IWL_TLC_MNG_CFG_FLAGS_STBC_MSK;
 	}

@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /******************************************************************************
 *
- * Copyright(c) 2005 - 2014, 2018 - 2020 Intel Corporation. All rights reserved.
+ * Copyright(c) 2005 - 2014, 2018 - 2021 Intel Corporation. All rights reserved.
 * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
 * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
 *
@@ -1926,9 +1926,7 @@ static bool rs_tpc_allowed(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
 	if (is_ht(rate))
 		return index == IWL_RATE_MCS_7_INDEX;
 	if (is_vht(rate))
-		return index == IWL_RATE_MCS_7_INDEX ||
-		       index == IWL_RATE_MCS_8_INDEX ||
-		       index == IWL_RATE_MCS_9_INDEX;
+		return index == IWL_RATE_MCS_9_INDEX;
 
 	WARN_ON_ONCE(1);
 	return false;

@@ -527,37 +527,6 @@ static bool iwl_mvm_is_dup(struct ieee80211_sta *sta, int queue,
 	return false;
 }
 
-int iwl_mvm_notify_rx_queue(struct iwl_mvm *mvm, u32 rxq_mask,
-			    const struct iwl_mvm_internal_rxq_notif *notif,
-			    u32 notif_size, bool async)
-{
-	u8 buf[sizeof(struct iwl_rxq_sync_cmd) +
-	       sizeof(struct iwl_mvm_rss_sync_notif)];
-	struct iwl_rxq_sync_cmd *cmd = (void *)buf;
-	u32 data_size = sizeof(*cmd) + notif_size;
-	int ret;
-
-	/*
-	 * size must be a multiple of DWORD
-	 * Ensure we don't overflow buf
-	 */
-	if (WARN_ON(notif_size & 3 ||
-		    notif_size > sizeof(struct iwl_mvm_rss_sync_notif)))
-		return -EINVAL;
-
-	cmd->rxq_mask = cpu_to_le32(rxq_mask);
-	cmd->count = cpu_to_le32(notif_size);
-	cmd->flags = 0;
-	memcpy(cmd->payload, notif, notif_size);
-
-	ret = iwl_mvm_send_cmd_pdu(mvm,
-				   WIDE_ID(DATA_PATH_GROUP,
-					   TRIGGER_RX_QUEUES_NOTIF_CMD),
-				   async ? CMD_ASYNC : 0, data_size, cmd);
-
-	return ret;
-}
-
 /*
  * Returns true if sn2 - buffer_size < sn1 < sn2.
  * To be used only in order to compare reorder buffer head with NSSN.
@@ -573,15 +542,13 @@ static bool iwl_mvm_is_sn_less(u16 sn1, u16 sn2, u16 buffer_size)
 static void iwl_mvm_sync_nssn(struct iwl_mvm *mvm, u8 baid, u16 nssn)
 {
 	if (IWL_MVM_USE_NSSN_SYNC) {
-		struct iwl_mvm_rss_sync_notif notif = {
-			.metadata.type = IWL_MVM_RXQ_NSSN_SYNC,
-			.metadata.sync = 0,
-			.nssn_sync.baid = baid,
-			.nssn_sync.nssn = nssn,
+		struct iwl_mvm_nssn_sync_data notif = {
+			.baid = baid,
+			.nssn = nssn,
 		};
 
-		iwl_mvm_sync_rx_queues_internal(mvm, (void *)&notif,
-						sizeof(notif));
+		iwl_mvm_sync_rx_queues_internal(mvm, IWL_MVM_RXQ_NSSN_SYNC, false,
+						&notif, sizeof(notif));
 	}
 }
@@ -830,8 +797,7 @@ void iwl_mvm_rx_queue_notif(struct iwl_mvm *mvm, struct napi_struct *napi,
 		      "invalid notification size %d (%d)",
 		      len, (int)(sizeof(*notif) + sizeof(*internal_notif))))
 		return;
-	/* remove only the firmware header, we want all of our payload below */
-	len -= sizeof(*notif);
+	len -= sizeof(*notif) + sizeof(*internal_notif);
 
 	if (internal_notif->sync &&
 	    mvm->queue_sync_cookie != internal_notif->cookie) {
@ -841,21 +807,19 @@ void iwl_mvm_rx_queue_notif(struct iwl_mvm *mvm, struct napi_struct *napi,
|
|||
|
||||
switch (internal_notif->type) {
|
||||
case IWL_MVM_RXQ_EMPTY:
|
||||
WARN_ONCE(len != sizeof(*internal_notif),
|
||||
"invalid empty notification size %d (%d)",
|
||||
len, (int)sizeof(*internal_notif));
|
||||
WARN_ONCE(len, "invalid empty notification size %d", len);
|
||||
break;
|
||||
case IWL_MVM_RXQ_NOTIF_DEL_BA:
|
||||
if (WARN_ONCE(len != sizeof(struct iwl_mvm_rss_sync_notif),
|
||||
if (WARN_ONCE(len != sizeof(struct iwl_mvm_delba_data),
|
||||
"invalid delba notification size %d (%d)",
|
||||
len, (int)sizeof(struct iwl_mvm_rss_sync_notif)))
|
||||
len, (int)sizeof(struct iwl_mvm_delba_data)))
|
||||
break;
|
||||
iwl_mvm_del_ba(mvm, queue, (void *)internal_notif->data);
|
||||
break;
|
||||
case IWL_MVM_RXQ_NSSN_SYNC:
|
||||
if (WARN_ONCE(len != sizeof(struct iwl_mvm_rss_sync_notif),
|
||||
if (WARN_ONCE(len != sizeof(struct iwl_mvm_nssn_sync_data),
|
||||
"invalid nssn sync notification size %d (%d)",
|
||||
len, (int)sizeof(struct iwl_mvm_rss_sync_notif)))
|
||||
len, (int)sizeof(struct iwl_mvm_nssn_sync_data)))
|
||||
break;
|
||||
iwl_mvm_nssn_sync(mvm, napi, queue,
|
||||
(void *)internal_notif->data);
|
||||
|
|
|
@@ -43,6 +43,9 @@
 /* adaptive dwell number of APs override for social channels */
 #define IWL_SCAN_ADWELL_N_APS_SOCIAL_CHS 2

+/* minimal number of 2GHz and 5GHz channels in the regular scan request */
+#define IWL_MVM_6GHZ_PASSIVE_SCAN_MIN_CHANS 4
+
 struct iwl_mvm_scan_timing_params {
 	u32 suspend_time;
 	u32 max_out_time;
@@ -94,6 +97,7 @@ struct iwl_mvm_scan_params {
 	struct cfg80211_scan_6ghz_params *scan_6ghz_params;
 	u32 n_6ghz_params;
 	bool scan_6ghz;
+	bool enable_6ghz_passive;
 };

 static inline void *iwl_mvm_get_scan_req_umac_data(struct iwl_mvm *mvm)
@@ -1873,6 +1877,98 @@ static u8 iwl_mvm_scan_umac_chan_flags_v2(struct iwl_mvm *mvm,
 	return flags;
 }

+static void iwl_mvm_scan_6ghz_passive_scan(struct iwl_mvm *mvm,
+					   struct iwl_mvm_scan_params *params,
+					   struct ieee80211_vif *vif)
+{
+	struct ieee80211_supported_band *sband =
+		&mvm->nvm_data->bands[NL80211_BAND_6GHZ];
+	u32 n_disabled, i;
+
+	params->enable_6ghz_passive = false;
+
+	if (params->scan_6ghz)
+		return;
+
+	if (!fw_has_capa(&mvm->fw->ucode_capa,
+			 IWL_UCODE_TLV_CAPA_PASSIVE_6GHZ_SCAN)) {
+		IWL_DEBUG_SCAN(mvm,
+			       "6GHz passive scan: Not supported by FW\n");
+		return;
+	}
+
+	/* 6GHz passive scan allowed only on station interface */
+	if (vif->type != NL80211_IFTYPE_STATION) {
+		IWL_DEBUG_SCAN(mvm,
+			       "6GHz passive scan: not station interface\n");
+		return;
+	}
+
+	/*
+	 * 6GHz passive scan is allowed while associated in a defined time
+	 * interval following HW reset or resume flow
+	 */
+	if (vif->bss_conf.assoc &&
+	    (time_before(mvm->last_reset_or_resume_time_jiffies +
+			 (IWL_MVM_6GHZ_PASSIVE_SCAN_ASSOC_TIMEOUT * HZ),
+			 jiffies))) {
+		IWL_DEBUG_SCAN(mvm, "6GHz passive scan: associated\n");
+		return;
+	}
+
+	/* No need for 6GHz passive scan if not enough time elapsed */
+	if (time_after(mvm->last_6ghz_passive_scan_jiffies +
+		       (IWL_MVM_6GHZ_PASSIVE_SCAN_TIMEOUT * HZ), jiffies)) {
+		IWL_DEBUG_SCAN(mvm,
+			       "6GHz passive scan: timeout did not expire\n");
+		return;
+	}
+
+	/* not enough channels in the regular scan request */
+	if (params->n_channels < IWL_MVM_6GHZ_PASSIVE_SCAN_MIN_CHANS) {
+		IWL_DEBUG_SCAN(mvm,
+			       "6GHz passive scan: not enough channels\n");
+		return;
+	}
+
+	for (i = 0; i < params->n_ssids; i++) {
+		if (!params->ssids[i].ssid_len)
+			break;
+	}
+
+	/* not a wildcard scan, so cannot enable passive 6GHz scan */
+	if (i == params->n_ssids) {
+		IWL_DEBUG_SCAN(mvm,
+			       "6GHz passive scan: no wildcard SSID\n");
+		return;
+	}
+
+	if (!sband || !sband->n_channels) {
+		IWL_DEBUG_SCAN(mvm,
+			       "6GHz passive scan: no 6GHz channels\n");
+		return;
+	}
+
+	for (i = 0, n_disabled = 0; i < sband->n_channels; i++) {
+		if (sband->channels[i].flags & (IEEE80211_CHAN_DISABLED))
+			n_disabled++;
+	}
+
+	/*
+	 * Not all the 6GHz channels are disabled, so no need for 6GHz passive
+	 * scan
+	 */
+	if (n_disabled != sband->n_channels) {
+		IWL_DEBUG_SCAN(mvm,
+			       "6GHz passive scan: 6GHz channels enabled\n");
+		return;
+	}
+
+	/* all conditions to enable 6ghz passive scan are satisfied */
+	IWL_DEBUG_SCAN(mvm, "6GHz passive scan: can be enabled\n");
+	params->enable_6ghz_passive = true;
+}
+
 static u16 iwl_mvm_scan_umac_flags_v2(struct iwl_mvm *mvm,
 				      struct iwl_mvm_scan_params *params,
 				      struct ieee80211_vif *vif,
@@ -1911,6 +2007,9 @@ static u16 iwl_mvm_scan_umac_flags_v2(struct iwl_mvm *mvm,
 	    params->flags & NL80211_SCAN_FLAG_COLOCATED_6GHZ)
 		flags |= IWL_UMAC_SCAN_GEN_FLAGS_V2_TRIGGER_UHB_SCAN;

+	if (params->enable_6ghz_passive)
+		flags |= IWL_UMAC_SCAN_GEN_FLAGS_V2_6GHZ_PASSIVE_SCAN;
+
 	return flags;
 }

@@ -2183,6 +2282,30 @@ iwl_mvm_scan_umac_fill_ch_p_v6(struct iwl_mvm *mvm,
 					params->n_channels,
 					channel_cfg_flags,
 					vif->type);
+
+	if (params->enable_6ghz_passive) {
+		struct ieee80211_supported_band *sband =
+			&mvm->nvm_data->bands[NL80211_BAND_6GHZ];
+		u32 i;
+
+		for (i = 0; i < sband->n_channels; i++) {
+			struct ieee80211_channel *channel =
+				&sband->channels[i];
+
+			struct iwl_scan_channel_cfg_umac *cfg =
+				&cp->channel_config[cp->count];
+
+			if (!cfg80211_channel_is_psc(channel))
+				continue;
+
+			cfg->flags = 0;
+			cfg->v2.channel_num = channel->hw_value;
+			cfg->v2.band = PHY_BAND_6;
+			cfg->v2.iter_count = 1;
+			cfg->v2.iter_interval = 0;
+			cp->count++;
+		}
+	}
 }

 static int iwl_mvm_scan_umac_v12(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
@@ -2500,6 +2623,8 @@ int iwl_mvm_reg_scan_start(struct iwl_mvm *mvm, struct ieee80211_vif *vif,

 	iwl_mvm_build_scan_probe(mvm, vif, ies, &params);

+	iwl_mvm_scan_6ghz_passive_scan(mvm, &params, vif);
+
 	uid = iwl_mvm_build_scan_cmd(mvm, vif, &hcmd, &params,
 				     IWL_MVM_SCAN_REGULAR);

@@ -2524,6 +2649,9 @@ int iwl_mvm_reg_scan_start(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
 	mvm->scan_status |= IWL_MVM_SCAN_REGULAR;
 	mvm->scan_vif = iwl_mvm_vif_from_mac80211(vif);

+	if (params.enable_6ghz_passive)
+		mvm->last_6ghz_passive_scan_jiffies = jiffies;
+
 	schedule_delayed_work(&mvm->scan_timeout_dwork,
 			      msecs_to_jiffies(SCAN_TIMEOUT));
@@ -2441,12 +2441,12 @@ int iwl_mvm_rm_mcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)

 static void iwl_mvm_sync_rxq_del_ba(struct iwl_mvm *mvm, u8 baid)
 {
-	struct iwl_mvm_rss_sync_notif notif = {
-		.metadata.type = IWL_MVM_RXQ_NOTIF_DEL_BA,
-		.metadata.sync = 1,
-		.delba.baid = baid,
+	struct iwl_mvm_delba_data notif = {
+		.baid = baid,
 	};
-	iwl_mvm_sync_rx_queues_internal(mvm, (void *)&notif, sizeof(notif));
+
+	iwl_mvm_sync_rx_queues_internal(mvm, IWL_MVM_RXQ_NOTIF_DEL_BA, true,
+					&notif, sizeof(notif));
 };

 static void iwl_mvm_free_reorder(struct iwl_mvm *mvm,
@@ -281,6 +281,36 @@ struct iwl_mvm_key_pn {
 	} ____cacheline_aligned_in_smp q[];
 };

+/**
+ * enum iwl_mvm_rxq_notif_type - Internal message identifier
+ *
+ * @IWL_MVM_RXQ_EMPTY: empty sync notification
+ * @IWL_MVM_RXQ_NOTIF_DEL_BA: notify RSS queues of delBA
+ * @IWL_MVM_RXQ_NSSN_SYNC: notify all the RSS queues with the new NSSN
+ */
+enum iwl_mvm_rxq_notif_type {
+	IWL_MVM_RXQ_EMPTY,
+	IWL_MVM_RXQ_NOTIF_DEL_BA,
+	IWL_MVM_RXQ_NSSN_SYNC,
+};
+
+/**
+ * struct iwl_mvm_internal_rxq_notif - Internal representation of the data sent
+ * in &iwl_rxq_sync_cmd. Should be DWORD aligned.
+ * FW is agnostic to the payload, so there are no endianity requirements.
+ *
+ * @type: value from &iwl_mvm_rxq_notif_type
+ * @sync: ctrl path is waiting for all notifications to be received
+ * @cookie: internal cookie to identify old notifications
+ * @data: payload
+ */
+struct iwl_mvm_internal_rxq_notif {
+	u16 type;
+	u16 sync;
+	u32 cookie;
+	u8 data[];
+} __packed;
+
+struct iwl_mvm_delba_data {
+	u32 baid;
+} __packed;
+
@@ -290,14 +320,6 @@ struct iwl_mvm_nssn_sync_data {
 	u32 nssn;
 } __packed;

-struct iwl_mvm_rss_sync_notif {
-	struct iwl_mvm_internal_rxq_notif metadata;
-	union {
-		struct iwl_mvm_delba_data delba;
-		struct iwl_mvm_nssn_sync_data nssn_sync;
-	};
-} __packed;
-
 /**
  * struct iwl_mvm_rxq_dup_data - per station per rx queue data
  * @last_seq: last sequence per tid for duplicate packet detection
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
 /*
- * Copyright (C) 2012-2014, 2018-2020 Intel Corporation
+ * Copyright (C) 2012-2014, 2018-2021 Intel Corporation
  * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
  * Copyright (C) 2017 Intel Deutschland GmbH
  */
@@ -151,6 +151,16 @@ static bool iwl_mvm_te_check_disconnect(struct iwl_mvm *mvm,
 	if (errmsg)
 		IWL_ERR(mvm, "%s\n", errmsg);

+	if (mvmvif->csa_bcn_pending) {
+		struct iwl_mvm_sta *mvmsta;
+
+		rcu_read_lock();
+		mvmsta = iwl_mvm_sta_from_staid_rcu(mvm, mvmvif->ap_sta_id);
+		if (!WARN_ON(!mvmsta))
+			iwl_mvm_sta_modify_disable_tx(mvm, mvmsta, false);
+		rcu_read_unlock();
+	}
+
 	iwl_mvm_connection_loss(mvm, vif, errmsg);
 	return true;
 }
@@ -284,6 +294,17 @@ static void iwl_mvm_te_handle_notif(struct iwl_mvm *mvm,
 		iwl_mvm_roc_finished(mvm);
 		break;
 	case NL80211_IFTYPE_STATION:
+		/*
+		 * If we are switching channel, don't disconnect
+		 * if the time event is already done. Beacons can
+		 * be delayed a bit after the switch.
+		 */
+		if (te_data->id == TE_CHANNEL_SWITCH_PERIOD) {
+			IWL_DEBUG_TE(mvm,
+				     "No beacon heard and the CS time event is over, don't disconnect\n");
+			break;
+		}
+
 		/*
 		 * By now, we should have finished association
 		 * and know the dtim period.
@@ -713,8 +734,8 @@ void iwl_mvm_remove_time_event(struct iwl_mvm *mvm,
 	IWL_DEBUG_TE(mvm, "Removing TE 0x%x\n", le32_to_cpu(time_cmd.id));
 	ret = iwl_mvm_send_cmd_pdu(mvm, TIME_EVENT_CMD, 0,
 				   sizeof(time_cmd), &time_cmd);
-	if (WARN_ON(ret))
-		return;
+	if (ret)
+		IWL_ERR(mvm, "Couldn't remove the time event\n");
 }

 /*
@@ -1030,15 +1030,9 @@ iwl_mvm_tcm_load(struct iwl_mvm *mvm, u32 airtime, unsigned long elapsed)
 		return IWL_MVM_TRAFFIC_LOW;
 }

-struct iwl_mvm_tcm_iter_data {
-	struct iwl_mvm *mvm;
-	bool any_sent;
-};
-
 static void iwl_mvm_tcm_iter(void *_data, u8 *mac, struct ieee80211_vif *vif)
 {
-	struct iwl_mvm_tcm_iter_data *data = _data;
-	struct iwl_mvm *mvm = data->mvm;
+	struct iwl_mvm *mvm = _data;
 	struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
 	bool low_latency, prev = mvmvif->low_latency & LOW_LATENCY_TRAFFIC;

@@ -1060,22 +1054,15 @@ static void iwl_mvm_tcm_iter(void *_data, u8 *mac, struct ieee80211_vif *vif)
 	} else {
 		iwl_mvm_update_quotas(mvm, false, NULL);
 	}
-
-	data->any_sent = true;
 }

 static void iwl_mvm_tcm_results(struct iwl_mvm *mvm)
 {
-	struct iwl_mvm_tcm_iter_data data = {
-		.mvm = mvm,
-		.any_sent = false,
-	};
-
 	mutex_lock(&mvm->mutex);

 	ieee80211_iterate_active_interfaces(
 		mvm->hw, IEEE80211_IFACE_ITER_NORMAL,
-		iwl_mvm_tcm_iter, &data);
+		iwl_mvm_tcm_iter, mvm);

 	if (fw_has_capa(&mvm->fw->ucode_capa, IWL_UCODE_TLV_CAPA_UMAC_SCAN))
 		iwl_mvm_config_scan(mvm);
@@ -1257,7 +1244,6 @@ static unsigned long iwl_mvm_calc_tcm_stats(struct iwl_mvm *mvm,
 	}

 	load = iwl_mvm_tcm_load(mvm, total_airtime, elapsed);
-	mvm->tcm.result.global_change = load != mvm->tcm.result.global_load;
 	mvm->tcm.result.global_load = load;

 	for (i = 0; i < NUM_NL80211_BANDS; i++) {
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
 /*
- * Copyright (C) 2005-2014, 2018-2020 Intel Corporation
+ * Copyright (C) 2005-2014, 2018-2021 Intel Corporation
  * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
  * Copyright (C) 2016-2017 Intel Deutschland GmbH
  */
@@ -17,10 +17,20 @@
 #include "iwl-prph.h"
 #include "internal.h"

+#define TRANS_CFG_MARKER BIT(0)
+#define _IS_A(cfg, _struct) __builtin_types_compatible_p(typeof(cfg),	\
+							 struct _struct)
+extern int _invalid_type;
+#define _TRANS_CFG_MARKER(cfg)						\
+	(__builtin_choose_expr(_IS_A(cfg, iwl_cfg_trans_params),	\
+			       TRANS_CFG_MARKER,			\
+	 __builtin_choose_expr(_IS_A(cfg, iwl_cfg), 0, _invalid_type)))
+#define _ASSIGN_CFG(cfg) (_TRANS_CFG_MARKER(cfg) + (kernel_ulong_t)&(cfg))
+
 #define IWL_PCI_DEVICE(dev, subdev, cfg) \
 	.vendor = PCI_VENDOR_ID_INTEL,  .device = (dev), \
 	.subvendor = PCI_ANY_ID, .subdevice = (subdev), \
-	.driver_data = (kernel_ulong_t)&(cfg)
+	.driver_data = _ASSIGN_CFG(cfg)

 /* Hardware specific file defines the PCI IDs table for that hardware module */
 static const struct pci_device_id iwl_hw_card_ids[] = {
@@ -490,6 +500,8 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
 	{IWL_PCI_DEVICE(0x2729, PCI_ANY_ID, iwl_ma_trans_cfg)},
 	{IWL_PCI_DEVICE(0x7E40, PCI_ANY_ID, iwl_ma_trans_cfg)},

+/* Bz devices */
+	{IWL_PCI_DEVICE(0x2727, PCI_ANY_ID, iwl_bz_trans_cfg)},
 #endif /* CONFIG_IWLMVM */

 	{0}
@@ -607,6 +619,8 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
 	IWL_DEV_INFO(0x2725, 0x4020, iwlax210_2ax_cfg_ty_gf_a0, NULL),
 	IWL_DEV_INFO(0x2725, 0x6020, iwlax210_2ax_cfg_ty_gf_a0, NULL),
 	IWL_DEV_INFO(0x2725, 0x6024, iwlax210_2ax_cfg_ty_gf_a0, NULL),
+	IWL_DEV_INFO(0x2725, 0x1673, iwlax210_2ax_cfg_ty_gf_a0, iwl_ax210_killer_1675w_name),
+	IWL_DEV_INFO(0x2725, 0x1674, iwlax210_2ax_cfg_ty_gf_a0, iwl_ax210_killer_1675x_name),
 	IWL_DEV_INFO(0x7A70, 0x0090, iwlax211_2ax_cfg_so_gf_a0_long, NULL),
 	IWL_DEV_INFO(0x7A70, 0x0098, iwlax211_2ax_cfg_so_gf_a0_long, NULL),
 	IWL_DEV_INFO(0x7A70, 0x00B0, iwlax411_2ax_cfg_so_gf4_a0_long, NULL),
@@ -1014,12 +1028,12 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
 		      IWL_CFG_MAC_TYPE_MA, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_MR, IWL_CFG_ANY,
 		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB,
-		      iwl_cfg_ma_a0_mr_a0, iwl_ma_name),
+		      iwl_cfg_ma_a0_mr_a0, iwl_ax221_name),
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
 		      IWL_CFG_MAC_TYPE_SNJ, IWL_CFG_ANY,
 		      IWL_CFG_RF_TYPE_MR, IWL_CFG_ANY,
 		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB,
-		      iwl_cfg_snj_a0_mr_a0, iwl_ma_name),
+		      iwl_cfg_snj_a0_mr_a0, iwl_ax221_name),

 /* So with Hr */
 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
@@ -1067,6 +1081,35 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
 		      IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
 		      iwlax211_2ax_cfg_so_gf_a0, iwl_ax211_name),

+/* Bz */
+	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
+		      IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      iwl_cfg_bz_a0_hr_b0, iwl_ax201_name),
+	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
+		      IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      iwl_cfg_bz_a0_gf_a0, iwl_ax211_name),
+	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
+		      IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_CDB,
+		      iwl_cfg_bz_a0_gf4_a0, iwl_ax211_name),
+	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
+		      IWL_CFG_RF_TYPE_MR, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      iwl_cfg_bz_a0_mr_a0, iwl_ax211_name),
+
+/* So with GF */
+	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_MAC_TYPE_SO, IWL_CFG_ANY,
+		      IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY,
+		      IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+		      iwlax211_2ax_cfg_so_gf_a0, iwl_ax211_name)
+
 #endif /* CONFIG_IWLMVM */
 };

@@ -1075,19 +1118,22 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {

 static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 {
-	const struct iwl_cfg_trans_params *trans =
-		(struct iwl_cfg_trans_params *)(ent->driver_data);
+	const struct iwl_cfg_trans_params *trans;
 	const struct iwl_cfg *cfg_7265d __maybe_unused = NULL;
 	struct iwl_trans *iwl_trans;
 	struct iwl_trans_pcie *trans_pcie;
 	int i, ret;
+	const struct iwl_cfg *cfg;
+
+	trans = (void *)(ent->driver_data & ~TRANS_CFG_MARKER);
+
 	/*
 	 * This is needed for backwards compatibility with the old
 	 * tables, so we don't need to change all the config structs
 	 * at the same time. The cfg is used to compare with the old
 	 * full cfg structs.
 	 */
-	const struct iwl_cfg *cfg = (struct iwl_cfg *)(ent->driver_data);
+	cfg = (void *)(ent->driver_data & ~TRANS_CFG_MARKER);

 	/* make sure trans is the first element in iwl_cfg */
 	BUILD_BUG_ON(offsetof(struct iwl_cfg, trans));
@@ -1165,7 +1211,7 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 			iwl_trans->cfg = &iwlax210_2ax_cfg_ty_gf_a0;
 		} else if (CSR_HW_RF_ID_TYPE_CHIP_ID(iwl_trans->hw_rf_id) ==
 			   CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_JF)) {
-			iwl_trans->cfg = &iwlax210_2ax_cfg_so_jf_a0;
+			iwl_trans->cfg = &iwlax210_2ax_cfg_so_jf_b0;
 		} else if (CSR_HW_RF_ID_TYPE_CHIP_ID(iwl_trans->hw_rf_id) ==
 			   CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_GF)) {
 			iwl_trans->cfg = &iwlax211_2ax_cfg_so_gf_a0;
@@ -1202,11 +1248,19 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)

 #endif
 	/*
-	 * If we didn't set the cfg yet, assume the trans is actually
-	 * a full cfg from the old tables.
+	 * If we didn't set the cfg yet, the PCI ID table entry should have
+	 * been a full config - if yes, use it, otherwise fail.
 	 */
-	if (!iwl_trans->cfg)
+	if (!iwl_trans->cfg) {
+		if (ent->driver_data & TRANS_CFG_MARKER) {
+			pr_err("No config found for PCI dev %04x/%04x, rev=0x%x, rfid=0x%x\n",
+			       pdev->device, pdev->subsystem_device,
+			       iwl_trans->hw_rev, iwl_trans->hw_rf_id);
+			ret = -EINVAL;
+			goto out_free_trans;
+		}
 		iwl_trans->cfg = cfg;
+	}

 	/* if we don't have a name yet, copy name from the old cfg */
 	if (!iwl_trans->name)
@@ -1222,6 +1276,10 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		trans_pcie->num_rx_bufs = RX_QUEUE_SIZE;
 	}

+	ret = iwl_trans_init(iwl_trans);
+	if (ret)
+		goto out_free_trans;
+
 	pci_set_drvdata(pdev, iwl_trans);
 	iwl_trans->drv = iwl_drv_start(iwl_trans);
@@ -447,6 +447,11 @@ struct iwl_trans
 				       const struct iwl_cfg_trans_params *cfg_trans);
 void iwl_trans_pcie_free(struct iwl_trans *trans);

+bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans);
+#define _iwl_trans_pcie_grab_nic_access(trans)			\
+	__cond_lock(nic_access_nobh,				\
+		    likely(__iwl_trans_pcie_grab_nic_access(trans)))
+
 /*****************************************************
 * RX
 ******************************************************/
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
 /*
- * Copyright (C) 2003-2014, 2018-2020 Intel Corporation
+ * Copyright (C) 2003-2014, 2018-2021 Intel Corporation
  * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
  * Copyright (C) 2016-2017 Intel Deutschland GmbH
  */
@@ -1023,6 +1023,9 @@ static int iwl_pcie_napi_poll(struct napi_struct *napi, int budget)

 	ret = iwl_pcie_rx_handle(trans, rxq->id, budget);

+	IWL_DEBUG_ISR(trans, "[%d] handled %d, budget %d\n",
+		      rxq->id, ret, budget);
+
 	if (ret < budget) {
 		spin_lock(&trans_pcie->irq_lock);
 		if (test_bit(STATUS_INT_ENABLED, &trans->status))
@@ -1046,33 +1049,19 @@ static int iwl_pcie_napi_poll_msix(struct napi_struct *napi, int budget)
 	trans = trans_pcie->trans;

 	ret = iwl_pcie_rx_handle(trans, rxq->id, budget);
+	IWL_DEBUG_ISR(trans, "[%d] handled %d, budget %d\n", rxq->id, ret,
+		      budget);

 	if (ret < budget) {
+		int irq_line = rxq->id;
+
+		/* FIRST_RSS is shared with line 0 */
+		if (trans_pcie->shared_vec_mask & IWL_SHARED_IRQ_FIRST_RSS &&
+		    rxq->id == 1)
+			irq_line = 0;
+
 		spin_lock(&trans_pcie->irq_lock);
-		iwl_pcie_clear_irq(trans, rxq->id);
-		spin_unlock(&trans_pcie->irq_lock);
-
-		napi_complete_done(&rxq->napi, ret);
-	}
-
-	return ret;
-}
-
-static int iwl_pcie_napi_poll_msix_shared(struct napi_struct *napi, int budget)
-{
-	struct iwl_rxq *rxq = container_of(napi, struct iwl_rxq, napi);
-	struct iwl_trans_pcie *trans_pcie;
-	struct iwl_trans *trans;
-	int ret;
-
-	trans_pcie = container_of(napi->dev, struct iwl_trans_pcie, napi_dev);
-	trans = trans_pcie->trans;
-
-	ret = iwl_pcie_rx_handle(trans, rxq->id, budget);
-
-	if (ret < budget) {
-		spin_lock(&trans_pcie->irq_lock);
-		iwl_pcie_clear_irq(trans, 0);
+		iwl_pcie_clear_irq(trans, irq_line);
 		spin_unlock(&trans_pcie->irq_lock);

 		napi_complete_done(&rxq->napi, ret);
@@ -1134,18 +1123,9 @@ static int _iwl_pcie_rx_init(struct iwl_trans *trans)
 		if (!rxq->napi.poll) {
 			int (*poll)(struct napi_struct *, int) = iwl_pcie_napi_poll;

-			if (trans_pcie->msix_enabled) {
+			if (trans_pcie->msix_enabled)
 				poll = iwl_pcie_napi_poll_msix;
-
-				if (trans_pcie->shared_vec_mask & IWL_SHARED_IRQ_NON_RX &&
-				    i == 0)
-					poll = iwl_pcie_napi_poll_msix_shared;
-
-				if (trans_pcie->shared_vec_mask & IWL_SHARED_IRQ_FIRST_RSS &&
-				    i == 1)
-					poll = iwl_pcie_napi_poll_msix_shared;
-			}

 			netif_napi_add(&trans_pcie->napi_dev, &rxq->napi,
 				       poll, NAPI_POLL_WEIGHT);
 			napi_enable(&rxq->napi);
@@ -1659,10 +1639,13 @@ irqreturn_t iwl_pcie_irq_rx_msix_handler(int irq, void *dev_id)
 	if (WARN_ON(entry->entry >= trans->num_rx_queues))
 		return IRQ_NONE;

-	if (WARN_ONCE(!rxq, "Got MSI-X interrupt before we have Rx queues"))
+	if (WARN_ONCE(!rxq,
+		      "[%d] Got MSI-X interrupt before we have Rx queues",
+		      entry->entry))
 		return IRQ_NONE;

 	lock_map_acquire(&trans->sync_cmd_lockdep_map);
+	IWL_DEBUG_ISR(trans, "[%d] Got interrupt\n", entry->entry);

 	local_bh_disable();
 	if (napi_schedule_prep(&rxq->napi))
@@ -2194,9 +2177,16 @@ irqreturn_t iwl_pcie_irq_msix_handler(int irq, void *dev_id)
 	struct iwl_trans_pcie *trans_pcie = iwl_pcie_get_trans_pcie(entry);
 	struct iwl_trans *trans = trans_pcie->trans;
 	struct isr_statistics *isr_stats = &trans_pcie->isr_stats;
+	u32 inta_fh_msk = ~MSIX_FH_INT_CAUSES_DATA_QUEUE;
 	u32 inta_fh, inta_hw;
 	bool polling = false;

+	if (trans_pcie->shared_vec_mask & IWL_SHARED_IRQ_NON_RX)
+		inta_fh_msk |= MSIX_FH_INT_CAUSES_Q0;
+
+	if (trans_pcie->shared_vec_mask & IWL_SHARED_IRQ_FIRST_RSS)
+		inta_fh_msk |= MSIX_FH_INT_CAUSES_Q1;
+
 	lock_map_acquire(&trans->sync_cmd_lockdep_map);

 	spin_lock_bh(&trans_pcie->irq_lock);
@@ -2205,7 +2195,7 @@ irqreturn_t iwl_pcie_irq_msix_handler(int irq, void *dev_id)
 	/*
 	 * Clear causes registers to avoid being handling the same cause.
 	 */
-	iwl_write32(trans, CSR_MSIX_FH_INT_CAUSES_AD, inta_fh);
+	iwl_write32(trans, CSR_MSIX_FH_INT_CAUSES_AD, inta_fh & inta_fh_msk);
 	iwl_write32(trans, CSR_MSIX_HW_INT_CAUSES_AD, inta_hw);
 	spin_unlock_bh(&trans_pcie->irq_lock);

@@ -2219,8 +2209,8 @@ irqreturn_t iwl_pcie_irq_msix_handler(int irq, void *dev_id)

 	if (iwl_have_debug_level(IWL_DL_ISR)) {
 		IWL_DEBUG_ISR(trans,
-			      "ISR inta_fh 0x%08x, enabled (sw) 0x%08x (hw) 0x%08x\n",
-			      inta_fh, trans_pcie->fh_mask,
+			      "ISR[%d] inta_fh 0x%08x, enabled (sw) 0x%08x (hw) 0x%08x\n",
+			      entry->entry, inta_fh, trans_pcie->fh_mask,
 			      iwl_read32(trans, CSR_MSIX_FH_INT_MASK_AD));
 		if (inta_fh & ~trans_pcie->fh_mask)
 			IWL_DEBUG_ISR(trans,
@@ -2275,8 +2265,8 @@ irqreturn_t iwl_pcie_irq_msix_handler(int irq, void *dev_id)
 	/* After checking FH register check HW register */
 	if (iwl_have_debug_level(IWL_DL_ISR)) {
 		IWL_DEBUG_ISR(trans,
-			      "ISR inta_hw 0x%08x, enabled (sw) 0x%08x (hw) 0x%08x\n",
-			      inta_hw, trans_pcie->hw_mask,
+			      "ISR[%d] inta_hw 0x%08x, enabled (sw) 0x%08x (hw) 0x%08x\n",
+			      entry->entry, inta_hw, trans_pcie->hw_mask,
 			      iwl_read32(trans, CSR_MSIX_HW_INT_MASK_AD));
 		if (inta_hw & ~trans_pcie->hw_mask)
 			IWL_DEBUG_ISR(trans,
@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
 /*
  * Copyright (C) 2017 Intel Deutschland GmbH
- * Copyright (C) 2018-2020 Intel Corporation
+ * Copyright (C) 2018-2021 Intel Corporation
  */
 #include "iwl-trans.h"
 #include "iwl-prph.h"
@@ -108,8 +108,8 @@ static void iwl_trans_pcie_fw_reset_handshake(struct iwl_trans *trans)
 	ret = wait_event_timeout(trans_pcie->fw_reset_waitq,
 				 trans_pcie->fw_reset_done, FW_RESET_TIMEOUT);
 	if (!ret)
-		IWL_ERR(trans,
-			"firmware didn't ACK the reset - continue anyway\n");
+		IWL_INFO(trans,
+			 "firmware didn't ACK the reset - continue anyway\n");
 }

 void _iwl_trans_pcie_gen2_stop_device(struct iwl_trans *trans)
@@ -143,7 +143,7 @@ void _iwl_trans_pcie_gen2_stop_device(struct iwl_trans *trans)
 	if (test_and_clear_bit(STATUS_DEVICE_ENABLED, &trans->status)) {
 		IWL_DEBUG_INFO(trans,
 			       "DEVICE_ENABLED bit was set and is now cleared\n");
-		iwl_txq_gen2_tx_stop(trans);
+		iwl_txq_gen2_tx_free(trans);
 		iwl_pcie_rx_stop(trans);
 	}
@@ -1604,6 +1604,11 @@ iwl_pcie_set_interrupt_capa(struct pci_dev *pdev,
 	} else {
 		trans_pcie->trans->num_rx_queues = num_irqs - 1;
 	}

+	IWL_DEBUG_INFO(trans,
+		       "MSI-X enabled with rx queues %d, vec mask 0x%x\n",
+		       trans_pcie->trans->num_rx_queues, trans_pcie->shared_vec_mask);
+
 	WARN_ON(trans_pcie->trans->num_rx_queues > IWL_MAX_RX_HW_QUEUES);

 	trans_pcie->alloc_vecs = num_irqs;
@@ -1973,12 +1978,16 @@ static void iwl_trans_pcie_removal_wk(struct work_struct *wk)
 	module_put(THIS_MODULE);
 }

-static bool iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans)
+/*
+ * This version doesn't disable BHs but rather assumes they're
+ * already disabled.
+ */
+bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans)
 {
 	int ret;
 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);

-	spin_lock_bh(&trans_pcie->reg_lock);
+	spin_lock(&trans_pcie->reg_lock);

 	if (trans_pcie->cmd_hold_nic_awake)
 		goto out;
@@ -2063,7 +2072,7 @@ static bool iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans)
 	}

 err:
-	spin_unlock_bh(&trans_pcie->reg_lock);
+	spin_unlock(&trans_pcie->reg_lock);
 	return false;
 }

@@ -2076,6 +2085,20 @@ out:
 	return true;
 }

+static bool iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans)
+{
+	bool ret;
+
+	local_bh_disable();
+	ret = __iwl_trans_pcie_grab_nic_access(trans);
+	if (ret) {
+		/* keep BHs disabled until iwl_trans_pcie_release_nic_access */
+		return ret;
+	}
+	local_bh_enable();
+	return false;
+}
+
 static void iwl_trans_pcie_release_nic_access(struct iwl_trans *trans)
 {
 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
 /*
- * Copyright (C) 2003-2014, 2018-2020 Intel Corporation
+ * Copyright (C) 2003-2014, 2018-2021 Intel Corporation
  * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
  * Copyright (C) 2016-2017 Intel Deutschland GmbH
 */
@@ -181,16 +181,20 @@ static void iwl_pcie_clear_cmd_in_flight(struct iwl_trans *trans)
 {
 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);

-	lockdep_assert_held(&trans_pcie->reg_lock);
-
 	if (!trans->trans_cfg->base_params->apmg_wake_up_wa)
 		return;
-	if (WARN_ON(!trans_pcie->cmd_hold_nic_awake))
+
+	spin_lock(&trans_pcie->reg_lock);
+
+	if (WARN_ON(!trans_pcie->cmd_hold_nic_awake)) {
+		spin_unlock(&trans_pcie->reg_lock);
 		return;
+	}

 	trans_pcie->cmd_hold_nic_awake = false;
 	__iwl_trans_pcie_clear_bit(trans, CSR_GP_CNTRL,
 				   CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
+	spin_unlock(&trans_pcie->reg_lock);
 }

 /*
@@ -198,7 +202,6 @@ static void iwl_pcie_clear_cmd_in_flight(struct iwl_trans *trans)
 */
 static void iwl_pcie_txq_unmap(struct iwl_trans *trans, int txq_id)
 {
-	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
 	struct iwl_txq *txq = trans->txqs.txq[txq_id];

 	if (!txq) {
@@ -222,12 +225,9 @@ static void iwl_pcie_txq_unmap(struct iwl_trans *trans, int txq_id)
 		iwl_txq_free_tfd(trans, txq);
 		txq->read_ptr = iwl_txq_inc_wrap(trans, txq->read_ptr);

-		if (txq->read_ptr == txq->write_ptr) {
-			spin_lock(&trans_pcie->reg_lock);
-			if (txq_id == trans->txqs.cmd.q_id)
-				iwl_pcie_clear_cmd_in_flight(trans);
-			spin_unlock(&trans_pcie->reg_lock);
-		}
+		if (txq->read_ptr == txq->write_ptr &&
+		    txq_id == trans->txqs.cmd.q_id)
+			iwl_pcie_clear_cmd_in_flight(trans);
 	}

 	while (!skb_queue_empty(&txq->overflow_q)) {
@@ -629,38 +629,30 @@ static int iwl_pcie_set_cmd_in_flight(struct iwl_trans *trans,
 				      const struct iwl_host_cmd *cmd)
 {
 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
-	int ret;
-
-	lockdep_assert_held(&trans_pcie->reg_lock);

 	/* Make sure the NIC is still alive in the bus */
 	if (test_bit(STATUS_TRANS_DEAD, &trans->status))
 		return -ENODEV;

+	if (!trans->trans_cfg->base_params->apmg_wake_up_wa)
+		return 0;
+
 	/*
 	 * wake up the NIC to make sure that the firmware will see the host
 	 * command - we will let the NIC sleep once all the host commands
 	 * returned. This needs to be done only on NICs that have
-	 * apmg_wake_up_wa set.
+	 * apmg_wake_up_wa set (see above.)
 	 */
-	if (trans->trans_cfg->base_params->apmg_wake_up_wa &&
-	    !trans_pcie->cmd_hold_nic_awake) {
-		__iwl_trans_pcie_set_bit(trans, CSR_GP_CNTRL,
-					 CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
+	if (!_iwl_trans_pcie_grab_nic_access(trans))
+		return -EIO;

-		ret = iwl_poll_bit(trans, CSR_GP_CNTRL,
-				   CSR_GP_CNTRL_REG_VAL_MAC_ACCESS_EN,
-				   (CSR_GP_CNTRL_REG_FLAG_MAC_CLOCK_READY |
-				    CSR_GP_CNTRL_REG_FLAG_GOING_TO_SLEEP),
-				   15000);
-		if (ret < 0) {
-			__iwl_trans_pcie_clear_bit(trans, CSR_GP_CNTRL,
-					CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
-			IWL_ERR(trans, "Failed to wake NIC for hcmd\n");
-			return -EIO;
-		}
-		trans_pcie->cmd_hold_nic_awake = true;
-	}
+	/*
+	 * In iwl_trans_grab_nic_access(), we've acquired the reg_lock.
+	 * There, we also returned immediately if cmd_hold_nic_awake is
+	 * already true, so it's OK to unconditionally set it to true.
+	 */
+	trans_pcie->cmd_hold_nic_awake = true;
spin_unlock(&trans_pcie->reg_lock);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -674,7 +666,6 @@ static int iwl_pcie_set_cmd_in_flight(struct iwl_trans *trans,
|
|||
*/
|
||||
static void iwl_pcie_cmdq_reclaim(struct iwl_trans *trans, int txq_id, int idx)
|
||||
{
|
||||
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
|
||||
struct iwl_txq *txq = trans->txqs.txq[txq_id];
|
||||
int nfreed = 0;
|
||||
u16 r;
|
||||
|
@ -705,12 +696,8 @@ static void iwl_pcie_cmdq_reclaim(struct iwl_trans *trans, int txq_id, int idx)
|
|||
}
|
||||
}
|
||||
|
||||
if (txq->read_ptr == txq->write_ptr) {
|
||||
/* BHs are also disabled due to txq->lock */
|
||||
spin_lock(&trans_pcie->reg_lock);
|
||||
if (txq->read_ptr == txq->write_ptr)
|
||||
iwl_pcie_clear_cmd_in_flight(trans);
|
||||
spin_unlock(&trans_pcie->reg_lock);
|
||||
}
|
||||
|
||||
iwl_txq_progress(txq);
|
||||
}
|
||||
|
@ -914,7 +901,6 @@ void iwl_trans_pcie_txq_disable(struct iwl_trans *trans, int txq_id,
|
|||
int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
|
||||
struct iwl_host_cmd *cmd)
|
||||
{
|
||||
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
|
||||
struct iwl_txq *txq = trans->txqs.txq[trans->txqs.cmd.q_id];
|
||||
struct iwl_device_cmd *out_cmd;
|
||||
struct iwl_cmd_meta *out_meta;
|
||||
|
@ -1161,19 +1147,16 @@ int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
|
|||
if (txq->read_ptr == txq->write_ptr && txq->wd_timeout)
|
||||
mod_timer(&txq->stuck_timer, jiffies + txq->wd_timeout);
|
||||
|
||||
spin_lock(&trans_pcie->reg_lock);
|
||||
ret = iwl_pcie_set_cmd_in_flight(trans, cmd);
|
||||
if (ret < 0) {
|
||||
idx = ret;
|
||||
goto unlock_reg;
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* Increment and update queue's write index */
|
||||
txq->write_ptr = iwl_txq_inc_wrap(trans, txq->write_ptr);
|
||||
iwl_pcie_txq_inc_wr_ptr(trans, txq);
|
||||
|
||||
unlock_reg:
|
||||
spin_unlock(&trans_pcie->reg_lock);
|
||||
out:
|
||||
spin_unlock_irqrestore(&txq->lock, flags);
|
||||
free_dup_buf:
|
||||
|
@ -1367,7 +1350,6 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb,
|
|||
/* this is the data left for this subframe */
|
||||
unsigned int data_left =
|
||||
min_t(unsigned int, mss, total_len);
|
||||
struct sk_buff *csum_skb = NULL;
|
||||
unsigned int hdr_tb_len;
|
||||
dma_addr_t hdr_tb_phys;
|
||||
u8 *subf_hdrs_start = hdr_page->pos;
|
||||
|
@ -1398,10 +1380,8 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb,
|
|||
hdr_tb_len = hdr_page->pos - start_hdr;
|
||||
hdr_tb_phys = dma_map_single(trans->dev, start_hdr,
|
||||
hdr_tb_len, DMA_TO_DEVICE);
|
||||
if (unlikely(dma_mapping_error(trans->dev, hdr_tb_phys))) {
|
||||
dev_kfree_skb(csum_skb);
|
||||
if (unlikely(dma_mapping_error(trans->dev, hdr_tb_phys)))
|
||||
return -EINVAL;
|
||||
}
|
||||
iwl_pcie_txq_build_tfd(trans, txq, hdr_tb_phys,
|
||||
hdr_tb_len, false);
|
||||
trace_iwlwifi_dev_tx_tb(trans->dev, skb, start_hdr,
|
||||
|
@ -1420,10 +1400,8 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb,
|
|||
|
||||
tb_phys = dma_map_single(trans->dev, tso.data,
|
||||
size, DMA_TO_DEVICE);
|
||||
if (unlikely(dma_mapping_error(trans->dev, tb_phys))) {
|
||||
dev_kfree_skb(csum_skb);
|
||||
if (unlikely(dma_mapping_error(trans->dev, tb_phys)))
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
iwl_pcie_txq_build_tfd(trans, txq, tb_phys,
|
||||
size, false);
|
||||
|
|
|
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
 /*
- * Copyright (C) 2020 Intel Corporation
+ * Copyright (C) 2020-2021 Intel Corporation
  */
 #include <net/tso.h>
 #include <linux/tcp.h>
@@ -13,30 +13,6 @@
 #include "iwl-scd.h"
 #include <linux/dmapool.h>
 
-/*
- * iwl_txq_gen2_tx_stop - Stop all Tx DMA channels
- */
-void iwl_txq_gen2_tx_stop(struct iwl_trans *trans)
-{
-	int txq_id;
-
-	/*
-	 * This function can be called before the op_mode disabled the
-	 * queues. This happens when we have an rfkill interrupt.
-	 * Since we stop Tx altogether - mark the queues as stopped.
-	 */
-	memset(trans->txqs.queue_stopped, 0,
-	       sizeof(trans->txqs.queue_stopped));
-	memset(trans->txqs.queue_used, 0, sizeof(trans->txqs.queue_used));
-
-	/* Unmap DMA from host system and free skb's */
-	for (txq_id = 0; txq_id < ARRAY_SIZE(trans->txqs.txq); txq_id++) {
-		if (!trans->txqs.txq[txq_id])
-			continue;
-		iwl_txq_gen2_unmap(trans, txq_id);
-	}
-}
-
 /*
  * iwl_txq_update_byte_tbl - Set up entry in Tx byte-count array
  */
@@ -399,7 +375,6 @@ static int iwl_txq_gen2_build_amsdu(struct iwl_trans *trans,
 	while (total_len) {
 		/* this is the data left for this subframe */
 		unsigned int data_left = min_t(unsigned int, mss, total_len);
-		struct sk_buff *csum_skb = NULL;
 		unsigned int tb_len;
 		dma_addr_t tb_phys;
 		u8 *subf_hdrs_start = hdr_page->pos;
@@ -430,10 +405,8 @@ static int iwl_txq_gen2_build_amsdu(struct iwl_trans *trans,
 		tb_len = hdr_page->pos - start_hdr;
 		tb_phys = dma_map_single(trans->dev, start_hdr,
 					 tb_len, DMA_TO_DEVICE);
-		if (unlikely(dma_mapping_error(trans->dev, tb_phys))) {
-			dev_kfree_skb(csum_skb);
+		if (unlikely(dma_mapping_error(trans->dev, tb_phys)))
 			goto out_err;
-		}
 		/*
 		 * No need for _with_wa, this is from the TSO page and
 		 * we leave some space at the end of it so can't hit
@@ -458,10 +431,8 @@ static int iwl_txq_gen2_build_amsdu(struct iwl_trans *trans,
 			ret = iwl_txq_gen2_set_tb_with_wa(trans, skb, tfd,
 							  tb_phys, tso.data,
 							  tb_len, NULL);
-			if (ret) {
-				dev_kfree_skb(csum_skb);
+			if (ret)
 				goto out_err;
-			}
 
 			data_left -= tb_len;
 			tso_build_data(skb, &tso, tb_len);
@@ -1189,6 +1160,12 @@ static int iwl_txq_alloc_response(struct iwl_trans *trans, struct iwl_txq *txq,
 		goto error_free_resp;
 	}
 
+	if (WARN_ONCE(trans->txqs.txq[qid],
+		      "queue %d already allocated\n", qid)) {
+		ret = -EIO;
+		goto error_free_resp;
+	}
+
 	txq->id = qid;
 	trans->txqs.txq[qid] = txq;
 	wr_ptr &= (trans->trans_cfg->base_params->max_tfd_queue_size - 1);

@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
 /*
- * Copyright (C) 2020 Intel Corporation
+ * Copyright (C) 2020-2021 Intel Corporation
  */
 #ifndef __iwl_trans_queue_tx_h__
 #define __iwl_trans_queue_tx_h__
@@ -123,7 +123,6 @@ int iwl_txq_gen2_tx(struct iwl_trans *trans, struct sk_buff *skb,
 void iwl_txq_dyn_free(struct iwl_trans *trans, int queue);
 void iwl_txq_gen2_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq);
 void iwl_txq_inc_wr_ptr(struct iwl_trans *trans, struct iwl_txq *txq);
-void iwl_txq_gen2_tx_stop(struct iwl_trans *trans);
 void iwl_txq_gen2_tx_free(struct iwl_trans *trans);
 int iwl_txq_init(struct iwl_trans *trans, struct iwl_txq *txq, int slots_num,
 		 bool cmd_queue);

@@ -23,7 +23,6 @@ struct lbs_private;
 typedef void (*lbs_fw_cb)(struct lbs_private *priv, int ret,
 	const struct firmware *helper, const struct firmware *mainfw);
 
-struct lbs_private;
 struct sk_buff;
 struct net_device;
 struct cmd_ds_command;

@@ -60,13 +60,13 @@ void lbs_mesh_ethtool_get_strings(struct net_device *dev,
 
 #else
 
-#define lbs_init_mesh(priv)
-#define lbs_deinit_mesh(priv)
-#define lbs_start_mesh(priv)
-#define lbs_add_mesh(priv)
-#define lbs_remove_mesh(priv)
+#define lbs_init_mesh(priv)	do { } while (0)
+#define lbs_deinit_mesh(priv)	do { } while (0)
+#define lbs_start_mesh(priv)	do { } while (0)
+#define lbs_add_mesh(priv)	do { } while (0)
+#define lbs_remove_mesh(priv)	do { } while (0)
 #define lbs_mesh_set_dev(priv, dev, rxpd) (dev)
-#define lbs_mesh_set_txpd(priv, dev, txpd)
+#define lbs_mesh_set_txpd(priv, dev, txpd) do { } while (0)
 #define lbs_mesh_set_channel(priv, channel) (0)
 #define lbs_mesh_activated(priv) (false)
 

@@ -2300,8 +2300,7 @@ done:
 		is_scanning_required = 1;
 	} else {
 		mwifiex_dbg(priv->adapter, MSG,
-			    "info: trying to associate to '%.*s' bssid %pM\n",
-			    req_ssid.ssid_len, (char *)req_ssid.ssid,
+			    "info: trying to associate to bssid %pM\n",
 			    bss->bssid);
 		memcpy(&priv->cfg_bssid, bss->bssid, ETH_ALEN);
 		break;
@@ -2378,8 +2377,7 @@ mwifiex_cfg80211_connect(struct wiphy *wiphy, struct net_device *dev,
 	}
 
 	mwifiex_dbg(adapter, INFO,
-		    "info: Trying to associate to %.*s and bssid %pM\n",
-		    (int)sme->ssid_len, (char *)sme->ssid, sme->bssid);
+		    "info: Trying to associate to bssid %pM\n", sme->bssid);
 
 	if (!mwifiex_stop_bg_scan(priv))
 		cfg80211_sched_scan_stopped_locked(priv->wdev.wiphy, 0);
@@ -2512,9 +2510,8 @@ mwifiex_cfg80211_join_ibss(struct wiphy *wiphy, struct net_device *dev,
 		goto done;
 	}
 
-	mwifiex_dbg(priv->adapter, MSG,
-		    "info: trying to join to %.*s and bssid %pM\n",
-		    params->ssid_len, (char *)params->ssid, params->bssid);
+	mwifiex_dbg(priv->adapter, MSG, "info: trying to join to bssid %pM\n",
+		    params->bssid);
 
 	mwifiex_set_ibss_params(priv, params);
 

@@ -1211,7 +1211,6 @@ mwifiex_ret_802_11_scan_get_tlv_ptrs(struct mwifiex_adapter *adapter,
 int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
 				    struct mwifiex_bssdescriptor *bss_entry)
 {
-	int ret = 0;
 	u8 element_id;
 	struct ieee_types_fh_param_set *fh_param_set;
 	struct ieee_types_ds_param_set *ds_param_set;
@@ -1464,7 +1463,7 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
 		bytes_left -= total_ie_len;
 
 	}	/* while (bytes_left > 2) */
-	return ret;
+	return 0;
 }
 
 /*

@@ -1473,6 +1473,7 @@ static int mwl8k_txq_init(struct ieee80211_hw *hw, int index)
 	if (txq->skb == NULL) {
 		dma_free_coherent(&priv->pdev->dev, size, txq->txd,
 				  txq->txd_dma);
+		txq->txd = NULL;
 		return -ENOMEM;
 	}
 

@@ -76,9 +76,9 @@ mt76_rx_aggr_check_release(struct mt76_rx_tid *tid, struct sk_buff_head *frames)
 
 		nframes--;
 		status = (struct mt76_rx_status *)skb->cb;
-		if (!time_after(jiffies,
-				status->reorder_time +
-				mt76_aggr_tid_to_timeo(tid->num)))
+		if (!time_after32(jiffies,
+				  status->reorder_time +
+				  mt76_aggr_tid_to_timeo(tid->num)))
 			continue;
 
 		mt76_rx_aggr_release_frames(tid, frames, status->seqno);
@@ -122,6 +122,7 @@ mt76_rx_aggr_check_ctl(struct sk_buff *skb, struct sk_buff_head *frames)
 	struct ieee80211_bar *bar = mt76_skb_get_hdr(skb);
 	struct mt76_wcid *wcid = status->wcid;
 	struct mt76_rx_tid *tid;
+	u8 tidno = status->qos_ctl & IEEE80211_QOS_CTL_TID_MASK;
 	u16 seqno;
 
 	if (!ieee80211_is_ctl(bar->frame_control))
@@ -130,9 +131,9 @@ mt76_rx_aggr_check_ctl(struct sk_buff *skb, struct sk_buff_head *frames)
 	if (!ieee80211_is_back_req(bar->frame_control))
 		return;
 
-	status->tid = le16_to_cpu(bar->control) >> 12;
+	status->qos_ctl = tidno = le16_to_cpu(bar->control) >> 12;
 	seqno = IEEE80211_SEQ_TO_SN(le16_to_cpu(bar->start_seq_num));
-	tid = rcu_dereference(wcid->aggr[status->tid]);
+	tid = rcu_dereference(wcid->aggr[tidno]);
 	if (!tid)
 		return;
 
@@ -147,12 +148,12 @@ mt76_rx_aggr_check_ctl(struct sk_buff *skb, struct sk_buff_head *frames)
 void mt76_rx_aggr_reorder(struct sk_buff *skb, struct sk_buff_head *frames)
 {
 	struct mt76_rx_status *status = (struct mt76_rx_status *)skb->cb;
-	struct ieee80211_hdr *hdr = mt76_skb_get_hdr(skb);
 	struct mt76_wcid *wcid = status->wcid;
 	struct ieee80211_sta *sta;
 	struct mt76_rx_tid *tid;
 	bool sn_less;
 	u16 seqno, head, size, idx;
+	u8 tidno = status->qos_ctl & IEEE80211_QOS_CTL_TID_MASK;
 	u8 ackp;
 
 	__skb_queue_tail(frames, skb);
@@ -161,18 +162,18 @@ void mt76_rx_aggr_reorder(struct sk_buff *skb, struct sk_buff_head *frames)
 	if (!sta)
 		return;
 
-	if (!status->aggr) {
+	if (!status->aggr && !(status->flag & RX_FLAG_8023)) {
 		mt76_rx_aggr_check_ctl(skb, frames);
 		return;
 	}
 
 	/* not part of a BA session */
-	ackp = *ieee80211_get_qos_ctl(hdr) & IEEE80211_QOS_CTL_ACK_POLICY_MASK;
+	ackp = status->qos_ctl & IEEE80211_QOS_CTL_ACK_POLICY_MASK;
 	if (ackp != IEEE80211_QOS_CTL_ACK_POLICY_BLOCKACK &&
 	    ackp != IEEE80211_QOS_CTL_ACK_POLICY_NORMAL)
 		return;
 
-	tid = rcu_dereference(wcid->aggr[status->tid]);
+	tid = rcu_dereference(wcid->aggr[tidno]);
 	if (!tid)
 		return;
 

@@ -79,13 +79,38 @@ mt76_free_pending_txwi(struct mt76_dev *dev)
 	local_bh_enable();
 }
 
+static void
+mt76_dma_sync_idx(struct mt76_dev *dev, struct mt76_queue *q)
+{
+	writel(q->desc_dma, &q->regs->desc_base);
+	writel(q->ndesc, &q->regs->ring_size);
+	q->head = readl(&q->regs->dma_idx);
+	q->tail = q->head;
+}
+
+static void
+mt76_dma_queue_reset(struct mt76_dev *dev, struct mt76_queue *q)
+{
+	int i;
+
+	if (!q)
+		return;
+
+	/* clear descriptors */
+	for (i = 0; i < q->ndesc; i++)
+		q->desc[i].ctrl = cpu_to_le32(MT_DMA_CTL_DMA_DONE);
+
+	writel(0, &q->regs->cpu_idx);
+	writel(0, &q->regs->dma_idx);
+	mt76_dma_sync_idx(dev, q);
+}
+
 static int
 mt76_dma_alloc_queue(struct mt76_dev *dev, struct mt76_queue *q,
 		     int idx, int n_desc, int bufsize,
 		     u32 ring_base)
 {
 	int size;
-	int i;
 
 	spin_lock_init(&q->lock);
 	spin_lock_init(&q->cleanup_lock);
@@ -105,14 +130,7 @@ mt76_dma_alloc_queue(struct mt76_dev *dev, struct mt76_queue *q,
 	if (!q->entry)
 		return -ENOMEM;
 
-	/* clear descriptors */
-	for (i = 0; i < q->ndesc; i++)
-		q->desc[i].ctrl = cpu_to_le32(MT_DMA_CTL_DMA_DONE);
-
-	writel(q->desc_dma, &q->regs->desc_base);
-	writel(0, &q->regs->cpu_idx);
-	writel(0, &q->regs->dma_idx);
-	writel(q->ndesc, &q->regs->ring_size);
+	mt76_dma_queue_reset(dev, q);
 
 	return 0;
 }
@@ -201,15 +219,6 @@ mt76_dma_tx_cleanup_idx(struct mt76_dev *dev, struct mt76_queue *q, int idx,
 	memset(e, 0, sizeof(*e));
 }
 
-static void
-mt76_dma_sync_idx(struct mt76_dev *dev, struct mt76_queue *q)
-{
-	writel(q->desc_dma, &q->regs->desc_base);
-	writel(q->ndesc, &q->regs->ring_size);
-	q->head = readl(&q->regs->dma_idx);
-	q->tail = q->head;
-}
-
 static void
 mt76_dma_kick_queue(struct mt76_dev *dev, struct mt76_queue *q)
 {
@@ -309,7 +318,7 @@ static int
 mt76_dma_tx_queue_skb_raw(struct mt76_dev *dev, struct mt76_queue *q,
 			  struct sk_buff *skb, u32 tx_info)
 {
-	struct mt76_queue_buf buf;
+	struct mt76_queue_buf buf = {};
 	dma_addr_t addr;
 
 	if (q->queued + 1 >= q->ndesc - 1)
@@ -602,7 +611,6 @@ mt76_dma_rx_poll(struct napi_struct *napi, int budget)
 	dev = container_of(napi->dev, struct mt76_dev, napi_dev);
 	qid = napi - dev->napi;
 
-	local_bh_disable();
 	rcu_read_lock();
 
 	do {
@@ -612,7 +620,6 @@ mt76_dma_rx_poll(struct napi_struct *napi, int budget)
 	} while (cur && done < budget);
 
 	rcu_read_unlock();
-	local_bh_enable();
 
 	if (done < budget && napi_complete(napi))
 		dev->drv->rx_poll_complete(dev, qid);
@@ -626,6 +633,10 @@ mt76_dma_init(struct mt76_dev *dev)
 	int i;
 
 	init_dummy_netdev(&dev->napi_dev);
+	init_dummy_netdev(&dev->tx_napi_dev);
+	snprintf(dev->napi_dev.name, sizeof(dev->napi_dev.name), "%s",
+		 wiphy_name(dev->hw->wiphy));
+	dev->napi_dev.threaded = 1;
 
 	mt76_for_each_q_rx(dev, i) {
 		netif_napi_add(&dev->napi_dev, &dev->napi[i], mt76_dma_rx_poll,
@@ -640,9 +651,11 @@ mt76_dma_init(struct mt76_dev *dev)
 static const struct mt76_queue_ops mt76_dma_ops = {
 	.init = mt76_dma_init,
 	.alloc = mt76_dma_alloc_queue,
+	.reset_q = mt76_dma_queue_reset,
 	.tx_queue_skb_raw = mt76_dma_tx_queue_skb_raw,
 	.tx_queue_skb = mt76_dma_tx_queue_skb,
 	.tx_cleanup = mt76_dma_tx_cleanup,
+	.rx_cleanup = mt76_dma_rx_cleanup,
 	.rx_reset = mt76_dma_rx_reset,
 	.kick = mt76_dma_kick_queue,
 };

@@ -508,6 +508,39 @@ void mt76_free_device(struct mt76_dev *dev)
 }
 EXPORT_SYMBOL_GPL(mt76_free_device);
 
+static void mt76_rx_release_amsdu(struct mt76_phy *phy, enum mt76_rxq_id q)
+{
+	struct sk_buff *skb = phy->rx_amsdu[q].head;
+	struct mt76_dev *dev = phy->dev;
+
+	phy->rx_amsdu[q].head = NULL;
+	phy->rx_amsdu[q].tail = NULL;
+	__skb_queue_tail(&dev->rx_skb[q], skb);
+}
+
+static void mt76_rx_release_burst(struct mt76_phy *phy, enum mt76_rxq_id q,
+				  struct sk_buff *skb)
+{
+	struct mt76_rx_status *status = (struct mt76_rx_status *)skb->cb;
+
+	if (phy->rx_amsdu[q].head &&
+	    (!status->amsdu || status->first_amsdu ||
+	     status->seqno != phy->rx_amsdu[q].seqno))
+		mt76_rx_release_amsdu(phy, q);
+
+	if (!phy->rx_amsdu[q].head) {
+		phy->rx_amsdu[q].tail = &skb_shinfo(skb)->frag_list;
+		phy->rx_amsdu[q].seqno = status->seqno;
+		phy->rx_amsdu[q].head = skb;
+	} else {
+		*phy->rx_amsdu[q].tail = skb;
+		phy->rx_amsdu[q].tail = &skb->next;
+	}
+
+	if (!status->amsdu || status->last_amsdu)
+		mt76_rx_release_amsdu(phy, q);
+}
+
 void mt76_rx(struct mt76_dev *dev, enum mt76_rxq_id q, struct sk_buff *skb)
 {
 	struct mt76_rx_status *status = (struct mt76_rx_status *)skb->cb;
@@ -525,7 +558,8 @@ void mt76_rx(struct mt76_dev *dev, enum mt76_rxq_id q, struct sk_buff *skb)
 		phy->test.rx_stats.fcs_error[q]++;
 	}
 #endif
-	__skb_queue_tail(&dev->rx_skb[q], skb);
+
+	mt76_rx_release_burst(phy, q, skb);
 }
 EXPORT_SYMBOL_GPL(mt76_rx);
 
@@ -720,6 +754,8 @@ mt76_rx_convert(struct mt76_dev *dev, struct sk_buff *skb,
 	status->signal = mstat.signal;
 	status->chains = mstat.chains;
 	status->ampdu_reference = mstat.ampdu_ref;
+	status->device_timestamp = mstat.timestamp;
+	status->mactime = mstat.timestamp;
 
 	BUILD_BUG_ON(sizeof(mstat) > sizeof(skb->cb));
 	BUILD_BUG_ON(sizeof(status->chain_signal) !=
@@ -737,6 +773,7 @@ mt76_check_ccmp_pn(struct sk_buff *skb)
 	struct mt76_rx_status *status = (struct mt76_rx_status *)skb->cb;
 	struct mt76_wcid *wcid = status->wcid;
 	struct ieee80211_hdr *hdr;
+	u8 tidno = status->qos_ctl & IEEE80211_QOS_CTL_TID_MASK;
 	int ret;
 
 	if (!(status->flag & RX_FLAG_DECRYPTED))
@@ -757,12 +794,12 @@ mt76_check_ccmp_pn(struct sk_buff *skb)
 	}
 
 	BUILD_BUG_ON(sizeof(status->iv) != sizeof(wcid->rx_key_pn[0]));
-	ret = memcmp(status->iv, wcid->rx_key_pn[status->tid],
+	ret = memcmp(status->iv, wcid->rx_key_pn[tidno],
 		     sizeof(status->iv));
 	if (ret <= 0)
 		return -EINVAL; /* replay */
 
-	memcpy(wcid->rx_key_pn[status->tid], status->iv, sizeof(status->iv));
+	memcpy(wcid->rx_key_pn[tidno], status->iv, sizeof(status->iv));
 
 	if (status->flag & RX_FLAG_IV_STRIPPED)
 		status->flag |= RX_FLAG_PN_VALIDATED;
@@ -785,6 +822,7 @@ mt76_airtime_report(struct mt76_dev *dev, struct mt76_rx_status *status,
 	};
 	struct ieee80211_sta *sta;
 	u32 airtime;
+	u8 tidno = status->qos_ctl & IEEE80211_QOS_CTL_TID_MASK;
 
 	airtime = ieee80211_calc_rx_airtime(dev->hw, &info, len);
 	spin_lock(&dev->cc_lock);
@@ -795,7 +833,7 @@ mt76_airtime_report(struct mt76_dev *dev, struct mt76_rx_status *status,
 		return;
 
 	sta = container_of((void *)wcid, struct ieee80211_sta, drv_priv);
-	ieee80211_sta_register_airtime(sta, status->tid, 0, airtime);
+	ieee80211_sta_register_airtime(sta, tidno, 0, airtime);
 }
 
 static void
@@ -823,7 +861,6 @@ mt76_airtime_flush_ampdu(struct mt76_dev *dev)
 static void
 mt76_airtime_check(struct mt76_dev *dev, struct sk_buff *skb)
 {
-	struct ieee80211_hdr *hdr = mt76_skb_get_hdr(skb);
 	struct mt76_rx_status *status = (struct mt76_rx_status *)skb->cb;
 	struct mt76_wcid *wcid = status->wcid;
 
@@ -831,6 +868,11 @@ mt76_airtime_check(struct mt76_dev *dev, struct sk_buff *skb)
 		return;
 
 	if (!wcid || !wcid->sta) {
+		struct ieee80211_hdr *hdr = mt76_skb_get_hdr(skb);
+
+		if (status->flag & RX_FLAG_8023)
+			return;
+
 		if (!ether_addr_equal(hdr->addr1, dev->phy.macaddr))
 			return;
 
@@ -864,10 +906,12 @@ mt76_check_sta(struct mt76_dev *dev, struct sk_buff *skb)
 	struct ieee80211_sta *sta;
 	struct ieee80211_hw *hw;
 	struct mt76_wcid *wcid = status->wcid;
+	u8 tidno = status->qos_ctl & IEEE80211_QOS_CTL_TID_MASK;
 	bool ps;
 
 	hw = mt76_phy_hw(dev, status->ext_phy);
-	if (ieee80211_is_pspoll(hdr->frame_control) && !wcid) {
+	if (ieee80211_is_pspoll(hdr->frame_control) && !wcid &&
+	    !(status->flag & RX_FLAG_8023)) {
 		sta = ieee80211_find_sta_by_ifaddr(hw, hdr->addr2, NULL);
 		if (sta)
 			wcid = status->wcid = (struct mt76_wcid *)sta->drv_priv;
@@ -885,6 +929,9 @@ mt76_check_sta(struct mt76_dev *dev, struct sk_buff *skb)
 
 	wcid->inactive_count = 0;
 
+	if (status->flag & RX_FLAG_8023)
+		return;
+
 	if (!test_bit(MT_WCID_FLAG_CHECK_PS, &wcid->flags))
 		return;
 
@@ -902,7 +949,7 @@ mt76_check_sta(struct mt76_dev *dev, struct sk_buff *skb)
 
 	if (ps && (ieee80211_is_data_qos(hdr->frame_control) ||
 		   ieee80211_is_qos_nullfunc(hdr->frame_control)))
-		ieee80211_sta_uapsd_trigger(sta, status->tid);
+		ieee80211_sta_uapsd_trigger(sta, tidno);
 
 	if (!!test_bit(MT_WCID_FLAG_PS, &wcid->flags) == ps)
 		return;
@@ -926,13 +973,26 @@ void mt76_rx_complete(struct mt76_dev *dev, struct sk_buff_head *frames,
 
 	spin_lock(&dev->rx_lock);
 	while ((skb = __skb_dequeue(frames)) != NULL) {
+		struct sk_buff *nskb = skb_shinfo(skb)->frag_list;
+
 		if (mt76_check_ccmp_pn(skb)) {
 			dev_kfree_skb(skb);
 			continue;
 		}
 
+		skb_shinfo(skb)->frag_list = NULL;
 		mt76_rx_convert(dev, skb, &hw, &sta);
 		ieee80211_rx_list(hw, sta, skb, &list);
+
+		/* subsequent amsdu frames */
+		while (nskb) {
+			skb = nskb;
+			nskb = nskb->next;
+			skb->next = NULL;
+
+			mt76_rx_convert(dev, skb, &hw, &sta);
+			ieee80211_rx_list(hw, sta, skb, &list);
+		}
 	}
 	spin_unlock(&dev->rx_lock);
 

@@ -99,6 +99,10 @@ int mt76_mcu_skb_send_and_get_msg(struct mt76_dev *dev, struct sk_buff *skb,
 			dev_kfree_skb(skb);
 	} while (ret == -EAGAIN);
 
+	/* notify driver code to reset the mcu */
+	if (ret == -ETIMEDOUT && dev->mcu_ops->mcu_reset)
+		dev->mcu_ops->mcu_reset(dev);
+
 out:
 	mutex_unlock(&dev->mcu.mutex);
 

@@ -166,6 +166,7 @@ struct mt76_mcu_ops {
 	int (*mcu_rd_rp)(struct mt76_dev *dev, u32 base,
 			 struct mt76_reg_pair *rp, int len);
 	int (*mcu_restart)(struct mt76_dev *dev);
+	void (*mcu_reset)(struct mt76_dev *dev);
 };
 
 struct mt76_queue_ops {
@@ -190,13 +191,18 @@ struct mt76_queue_ops {
 	void (*tx_cleanup)(struct mt76_dev *dev, struct mt76_queue *q,
 			   bool flush);
 
+	void (*rx_cleanup)(struct mt76_dev *dev, struct mt76_queue *q);
+
 	void (*kick)(struct mt76_dev *dev, struct mt76_queue *q);
+
+	void (*reset_q)(struct mt76_dev *dev, struct mt76_queue *q);
 };
 
 enum mt76_wcid_flags {
 	MT_WCID_FLAG_CHECK_PS,
 	MT_WCID_FLAG_PS,
 	MT_WCID_FLAG_4ADDR,
+	MT_WCID_FLAG_HDR_TRANS,
 };
 
 #define MT76_N_WCIDS 288
@@ -222,6 +228,7 @@ struct mt76_wcid {
 
 	u16 idx;
 	u8 hw_key_idx;
+	u8 hw_key_idx2;
 
 	u8 sta:1;
 	u8 ext_phy:1;
@@ -491,15 +498,16 @@ struct mt76_rx_status {
 		u16 wcid_idx;
 	};
 
-	unsigned long reorder_time;
+	u32 reorder_time;
 
 	u32 ampdu_ref;
+	u32 timestamp;
 
 	u8 iv[6];
 
 	u8 ext_phy:1;
 	u8 aggr:1;
-	u8 tid;
+	u8 qos_ctl;
 	u16 seqno;
 
 	u16 freq;
@@ -507,6 +515,7 @@ struct mt76_rx_status {
 	u8 enc_flags;
 	u8 encoding:2, bw:3, he_ru:3;
 	u8 he_gi:2, he_dcm:1;
+	u8 amsdu:1, first_amsdu:1, last_amsdu:1;
 	u8 rate_idx;
 	u8 nss;
 	u8 band;
@@ -600,6 +609,12 @@ struct mt76_phy {
 
 	struct delayed_work mac_work;
 	u8 mac_work_count;
+
+	struct {
+		struct sk_buff *head;
+		struct sk_buff **tail;
+		u16 seqno;
+	} rx_amsdu[__MT_RXQ_MAX];
 };
 
 struct mt76_dev {
@@ -628,6 +643,7 @@ struct mt76_dev {
 	struct mt76_mcu mcu;
 
 	struct net_device napi_dev;
+	struct net_device tx_napi_dev;
 	spinlock_t rx_lock;
 	struct napi_struct napi[__MT_RXQ_MAX];
 	struct sk_buff_head rx_skb[__MT_RXQ_MAX];
@@ -783,8 +799,10 @@ static inline u16 mt76_rev(struct mt76_dev *dev)
 #define mt76_tx_queue_skb_raw(dev, ...)	(dev)->mt76.queue_ops->tx_queue_skb_raw(&((dev)->mt76), __VA_ARGS__)
 #define mt76_tx_queue_skb(dev, ...)	(dev)->mt76.queue_ops->tx_queue_skb(&((dev)->mt76), __VA_ARGS__)
 #define mt76_queue_rx_reset(dev, ...)	(dev)->mt76.queue_ops->rx_reset(&((dev)->mt76), __VA_ARGS__)
 #define mt76_queue_tx_cleanup(dev, ...)	(dev)->mt76.queue_ops->tx_cleanup(&((dev)->mt76), __VA_ARGS__)
+#define mt76_queue_rx_cleanup(dev, ...)	(dev)->mt76.queue_ops->rx_cleanup(&((dev)->mt76), __VA_ARGS__)
 #define mt76_queue_kick(dev, ...)	(dev)->mt76.queue_ops->kick(&((dev)->mt76), __VA_ARGS__)
+#define mt76_queue_reset(dev, ...)	(dev)->mt76.queue_ops->reset_q(&((dev)->mt76), __VA_ARGS__)
 
 #define mt76_for_each_q_rx(dev, i)	\
 	for (i = 0; i < ARRAY_SIZE((dev)->q_rx) && \

@@ -223,7 +223,7 @@ int mt7603_dma_init(struct mt7603_dev *dev)
 	if (ret)
 		return ret;
 
-	netif_tx_napi_add(&dev->mt76.napi_dev, &dev->mt76.tx_napi,
+	netif_tx_napi_add(&dev->mt76.tx_napi_dev, &dev->mt76.tx_napi,
 			  mt7603_poll_tx, NAPI_POLL_WEIGHT);
 	napi_enable(&dev->mt76.tx_napi);
 

|
@ -548,6 +548,9 @@ int mt7603_register_device(struct mt7603_dev *dev)
|
|||
hw->max_report_rates = 7;
|
||||
hw->max_rate_tries = 11;
|
||||
|
||||
hw->radiotap_timestamp.units_pos =
|
||||
IEEE80211_RADIOTAP_TIMESTAMP_UNIT_US;
|
||||
|
||||
hw->sta_data_size = sizeof(struct mt7603_sta);
|
||||
hw->vif_data_size = sizeof(struct mt7603_vif);
|
||||
|
||||
|
|
|
@@ -532,20 +532,6 @@ mt7603_mac_fill_rx(struct mt7603_dev *dev, struct sk_buff *skb)
 		status->flag |= RX_FLAG_MMIC_STRIPPED | RX_FLAG_MIC_STRIPPED;
 	}
 
-	if (!(rxd2 & (MT_RXD2_NORMAL_NON_AMPDU_SUB |
-		      MT_RXD2_NORMAL_NON_AMPDU))) {
-		status->flag |= RX_FLAG_AMPDU_DETAILS;
-
-		/* all subframes of an A-MPDU have the same timestamp */
-		if (dev->rx_ampdu_ts != rxd[12]) {
-			if (!++dev->ampdu_ref)
-				dev->ampdu_ref++;
-		}
-		dev->rx_ampdu_ts = rxd[12];
-
-		status->ampdu_ref = dev->ampdu_ref;
-	}
-
 	remove_pad = rxd1 & MT_RXD1_NORMAL_HDR_OFFSET;
 
 	if (rxd2 & MT_RXD2_NORMAL_MAX_LEN_ERROR)
@@ -579,6 +565,23 @@ mt7603_mac_fill_rx(struct mt7603_dev *dev, struct sk_buff *skb)
 			return -EINVAL;
 	}
 	if (rxd0 & MT_RXD0_NORMAL_GROUP_2) {
+		status->timestamp = le32_to_cpu(rxd[0]);
+		status->flag |= RX_FLAG_MACTIME_START;
+
+		if (!(rxd2 & (MT_RXD2_NORMAL_NON_AMPDU_SUB |
+			      MT_RXD2_NORMAL_NON_AMPDU))) {
+			status->flag |= RX_FLAG_AMPDU_DETAILS;
+
+			/* all subframes of an A-MPDU have the same timestamp */
+			if (dev->rx_ampdu_ts != status->timestamp) {
+				if (!++dev->ampdu_ref)
+					dev->ampdu_ref++;
+			}
+			dev->rx_ampdu_ts = status->timestamp;
+
+			status->ampdu_ref = dev->ampdu_ref;
+		}
+
 		rxd += 2;
 		if ((u8 *)rxd - skb->data >= skb->len)
 			return -EINVAL;
@@ -651,7 +654,7 @@ mt7603_mac_fill_rx(struct mt7603_dev *dev, struct sk_buff *skb)
 
 	status->aggr = unicast &&
 		       !ieee80211_is_qos_nullfunc(hdr->frame_control);
-	status->tid = *ieee80211_get_qos_ctl(hdr) & IEEE80211_QOS_CTL_TID_MASK;
+	status->qos_ctl = *ieee80211_get_qos_ctl(hdr);
 	status->seqno = IEEE80211_SEQ_TO_SN(le16_to_cpu(hdr->seq_ctrl));
 
 	return 0;

@@ -120,7 +120,7 @@ struct mt7603_dev {
 	unsigned long last_cca_adj;
 
 	u32 ampdu_ref;
-	__le32 rx_ampdu_ts;
+	u32 rx_ampdu_ts;
 	u8 rssi_offset[3];
 
 	u8 slottime;
@@ -7,7 +7,7 @@
 #include "mt7603.h"
 
 static const struct pci_device_id mt76pci_device_table[] = {
-	{ PCI_DEVICE(0x14c3, 0x7603) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x7603) },
 	{ },
 };
 
@@ -517,18 +517,23 @@ int mt7615_init_debugfs(struct mt7615_dev *dev)
 			    &fops_pm_idle_timeout);
 	debugfs_create_devm_seqfile(dev->mt76.dev, "radio", dir,
 				    mt7615_radio_read);
-	debugfs_create_u32("dfs_hw_pattern", 0400, dir, &dev->hw_pattern);
-	/* test pattern knobs */
-	debugfs_create_u8("pattern_len", 0600, dir,
-			  &dev->radar_pattern.n_pulses);
-	debugfs_create_u32("pulse_period", 0600, dir,
-			   &dev->radar_pattern.period);
-	debugfs_create_u16("pulse_width", 0600, dir,
-			   &dev->radar_pattern.width);
-	debugfs_create_u16("pulse_power", 0600, dir,
-			   &dev->radar_pattern.power);
-	debugfs_create_file("radar_trigger", 0200, dir, dev,
-			    &fops_radar_pattern);
+
+	if (is_mt7615(&dev->mt76)) {
+		debugfs_create_u32("dfs_hw_pattern", 0400, dir,
+				   &dev->hw_pattern);
+		/* test pattern knobs */
+		debugfs_create_u8("pattern_len", 0600, dir,
+				  &dev->radar_pattern.n_pulses);
+		debugfs_create_u32("pulse_period", 0600, dir,
+				   &dev->radar_pattern.period);
+		debugfs_create_u16("pulse_width", 0600, dir,
+				   &dev->radar_pattern.width);
+		debugfs_create_u16("pulse_power", 0600, dir,
+				   &dev->radar_pattern.power);
+		debugfs_create_file("radar_trigger", 0200, dir, dev,
+				    &fops_radar_pattern);
+	}
+
 	debugfs_create_file("reset_test", 0200, dir, dev,
 			    &fops_reset_test);
 	debugfs_create_devm_seqfile(dev->mt76.dev, "temperature", dir,
@@ -176,6 +176,21 @@ static void mt7663_dma_sched_init(struct mt7615_dev *dev)
 	mt76_wr(dev, MT_DMA_SHDL(MT_DMASHDL_SCHED_SET1), 0xedcba987);
 }
 
+void mt7615_dma_start(struct mt7615_dev *dev)
+{
+	/* start dma engine */
+	mt76_set(dev, MT_WPDMA_GLO_CFG,
+		 MT_WPDMA_GLO_CFG_TX_DMA_EN |
+		 MT_WPDMA_GLO_CFG_RX_DMA_EN |
+		 MT_WPDMA_GLO_CFG_TX_WRITEBACK_DONE);
+
+	if (is_mt7622(&dev->mt76))
+		mt7622_dma_sched_init(dev);
+
+	if (is_mt7663(&dev->mt76))
+		mt7663_dma_sched_init(dev);
+}
+
 int mt7615_dma_init(struct mt7615_dev *dev)
 {
 	int rx_ring_size = MT7615_RX_RING_SIZE;
@@ -245,7 +260,7 @@ int mt7615_dma_init(struct mt7615_dev *dev)
 	if (ret < 0)
 		return ret;
 
-	netif_tx_napi_add(&dev->mt76.napi_dev, &dev->mt76.tx_napi,
+	netif_tx_napi_add(&dev->mt76.tx_napi_dev, &dev->mt76.tx_napi,
 			  mt7615_poll_tx, NAPI_POLL_WEIGHT);
 	napi_enable(&dev->mt76.tx_napi);
 
@@ -253,20 +268,11 @@ int mt7615_dma_init(struct mt7615_dev *dev)
 		  MT_WPDMA_GLO_CFG_TX_DMA_BUSY |
 		  MT_WPDMA_GLO_CFG_RX_DMA_BUSY, 0, 1000);
 
-	/* start dma engine */
-	mt76_set(dev, MT_WPDMA_GLO_CFG,
-		 MT_WPDMA_GLO_CFG_TX_DMA_EN |
-		 MT_WPDMA_GLO_CFG_RX_DMA_EN);
-
 	/* enable interrupts for TX/RX rings */
 	mt7615_irq_enable(dev, MT_INT_RX_DONE_ALL | mt7615_tx_mcu_int_mask(dev) |
 			  MT_INT_MCU_CMD);
 
-	if (is_mt7622(&dev->mt76))
-		mt7622_dma_sched_init(dev);
-
-	if (is_mt7663(&dev->mt76))
-		mt7663_dma_sched_init(dev);
+	mt7615_dma_start(dev);
 
 	return 0;
 }
@@ -86,6 +86,7 @@ static int mt7615_check_eeprom(struct mt76_dev *dev)
 	switch (val) {
 	case 0x7615:
 	case 0x7622:
+	case 0x7663:
 		return 0;
 	default:
 		return -EINVAL;
@@ -116,10 +116,10 @@ mt7615_mac_init(struct mt7615_dev *dev)
 	mt76_set(dev, MT_WF_RMAC_MIB_TIME0, MT_WF_RMAC_MIB_RXTIME_EN);
 	mt76_set(dev, MT_WF_RMAC_MIB_AIRTIME0, MT_WF_RMAC_MIB_RXTIME_EN);
 
-	/* disable hdr translation and hw AMSDU */
 	mt76_wr(dev, MT_DMA_DCR0,
 		FIELD_PREP(MT_DMA_DCR0_MAX_RX_LEN, 3072) |
-		MT_DMA_DCR0_RX_VEC_DROP);
+		MT_DMA_DCR0_RX_VEC_DROP | MT_DMA_DCR0_DAMSDU_EN |
+		MT_DMA_DCR0_RX_HDR_TRANS_EN);
 	/* disable TDLS filtering */
 	mt76_clear(dev, MT_WF_PFCR, MT_WF_PFCR_TDLS_EN);
 	mt76_set(dev, MT_WF_MIB_SCR0, MT_MIB_SCR0_AGG_CNT_RANGE_EN);
@@ -129,6 +129,7 @@ mt7615_mac_init(struct mt7615_dev *dev)
 	} else {
 		mt7615_init_mac_chain(dev, 1);
 	}
+	mt7615_mcu_set_rx_hdr_trans_blacklist(dev);
 }
 
 static void
@@ -330,6 +331,10 @@ mt7615_init_wiphy(struct ieee80211_hw *hw)
 	hw->max_rates = 3;
 	hw->max_report_rates = 7;
 	hw->max_rate_tries = 11;
+	hw->netdev_features = NETIF_F_RXCSUM;
+
+	hw->radiotap_timestamp.units_pos =
+		IEEE80211_RADIOTAP_TIMESTAMP_UNIT_US;
 
 	phy->slottime = 9;
 
@@ -360,11 +365,17 @@ mt7615_init_wiphy(struct ieee80211_hw *hw)
 	ieee80211_hw_set(hw, SINGLE_SCAN_ON_ALL_BANDS);
 	ieee80211_hw_set(hw, TX_STATUS_NO_AMPDU_LEN);
 	ieee80211_hw_set(hw, WANT_MONITOR_VIF);
+	ieee80211_hw_set(hw, SUPPORTS_RX_DECAP_OFFLOAD);
 
 	if (is_mt7615(&phy->dev->mt76))
 		hw->max_tx_fragments = MT_TXP_MAX_BUF_NUM;
 	else
 		hw->max_tx_fragments = MT_HW_TXP_MAX_BUF_NUM;
+
+	phy->mt76->sband_2g.sband.ht_cap.cap |= IEEE80211_HT_CAP_LDPC_CODING;
+	phy->mt76->sband_5g.sband.ht_cap.cap |= IEEE80211_HT_CAP_LDPC_CODING;
+	phy->mt76->sband_5g.sband.vht_cap.cap |=
+		IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK;
 }
 
 static void
@@ -496,16 +507,11 @@ void mt7615_init_device(struct mt7615_dev *dev)
 	init_waitqueue_head(&dev->reset_wait);
 	init_waitqueue_head(&dev->phy.roc_wait);
 
 	INIT_WORK(&dev->reset_work, mt7615_mac_reset_work);
 	INIT_WORK(&dev->phy.roc_work, mt7615_roc_work);
 	timer_setup(&dev->phy.roc_timer, mt7615_roc_timer, 0);
 
 	mt7615_init_wiphy(hw);
 	dev->pm.idle_timeout = MT7615_PM_TIMEOUT;
-	dev->mphy.sband_2g.sband.ht_cap.cap |= IEEE80211_HT_CAP_LDPC_CODING;
-	dev->mphy.sband_5g.sband.ht_cap.cap |= IEEE80211_HT_CAP_LDPC_CODING;
-	dev->mphy.sband_5g.sband.vht_cap.cap |=
-		IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK;
 	mt7615_cap_dbdc_disable(dev);
 	dev->phy.dfs_state = -1;
 
@@ -234,11 +234,13 @@ static int mt7615_mac_fill_rx(struct mt7615_dev *dev, struct sk_buff *skb)
 	u32 rxd0 = le32_to_cpu(rxd[0]);
 	u32 rxd1 = le32_to_cpu(rxd[1]);
 	u32 rxd2 = le32_to_cpu(rxd[2]);
-	__le32 rxd12 = rxd[12];
-	bool unicast, remove_pad, insert_ccmp_hdr = false;
+	u32 csum_mask = MT_RXD0_NORMAL_IP_SUM | MT_RXD0_NORMAL_UDP_TCP_SUM;
+	bool unicast, hdr_trans, remove_pad, insert_ccmp_hdr = false;
 	int phy_idx;
 	int i, idx;
-	u8 chfreq;
+	u8 chfreq, amsdu_info, qos_ctl = 0;
+	u16 seq_ctrl = 0;
+	__le16 fc = 0;
 
 	memset(status, 0, sizeof(*status));
 
@@ -254,8 +256,12 @@ static int mt7615_mac_fill_rx(struct mt7615_dev *dev, struct sk_buff *skb)
 	else
 		phy_idx = -1;
 
+	if (rxd2 & MT_RXD2_NORMAL_AMSDU_ERR)
+		return -EINVAL;
+
 	unicast = (rxd1 & MT_RXD1_NORMAL_ADDR_TYPE) == MT_RXD1_NORMAL_U2M;
 	idx = FIELD_GET(MT_RXD2_NORMAL_WLAN_IDX, rxd2);
+	hdr_trans = rxd1 & MT_RXD1_NORMAL_HDR_TRANS;
 	status->wcid = mt7615_rx_get_wcid(dev, idx, unicast);
 
 	if (status->wcid) {
@@ -268,6 +274,9 @@ static int mt7615_mac_fill_rx(struct mt7615_dev *dev, struct sk_buff *skb)
 		spin_unlock_bh(&dev->sta_poll_lock);
 	}
 
+	if ((rxd0 & csum_mask) == csum_mask)
+		skb->ip_summed = CHECKSUM_UNNECESSARY;
+
 	if (rxd2 & MT_RXD2_NORMAL_FCS_ERR)
 		status->flag |= RX_FLAG_FAILED_FCS_CRC;
 
@@ -288,6 +297,13 @@ static int mt7615_mac_fill_rx(struct mt7615_dev *dev, struct sk_buff *skb)
 
 	rxd += 4;
 	if (rxd0 & MT_RXD0_NORMAL_GROUP_4) {
+		u32 v0 = le32_to_cpu(rxd[0]);
+		u32 v2 = le32_to_cpu(rxd[2]);
+
+		fc = cpu_to_le16(FIELD_GET(MT_RXD4_FRAME_CONTROL, v0));
+		qos_ctl = FIELD_GET(MT_RXD6_QOS_CTL, v2);
+		seq_ctrl = FIELD_GET(MT_RXD6_SEQ_CTRL, v2);
+
 		rxd += 4;
 		if ((u8 *)rxd - skb->data >= skb->len)
 			return -EINVAL;
@@ -312,6 +328,23 @@ static int mt7615_mac_fill_rx(struct mt7615_dev *dev, struct sk_buff *skb)
 	}
 
 	if (rxd0 & MT_RXD0_NORMAL_GROUP_2) {
+		status->timestamp = le32_to_cpu(rxd[0]);
+		status->flag |= RX_FLAG_MACTIME_START;
+
+		if (!(rxd2 & (MT_RXD2_NORMAL_NON_AMPDU_SUB |
+			      MT_RXD2_NORMAL_NON_AMPDU))) {
+			status->flag |= RX_FLAG_AMPDU_DETAILS;
+
+			/* all subframes of an A-MPDU have the same timestamp */
+			if (phy->rx_ampdu_ts != status->timestamp) {
+				if (!++phy->ampdu_ref)
+					phy->ampdu_ref++;
+			}
+			phy->rx_ampdu_ts = status->timestamp;
+
+			status->ampdu_ref = phy->ampdu_ref;
+		}
+
 		rxd += 2;
 		if ((u8 *)rxd - skb->data >= skb->len)
 			return -EINVAL;
@@ -355,20 +388,6 @@ static int mt7615_mac_fill_rx(struct mt7615_dev *dev, struct sk_buff *skb)
 	if (!sband->channels)
 		return -EINVAL;
 
-	if (!(rxd2 & (MT_RXD2_NORMAL_NON_AMPDU_SUB |
-		      MT_RXD2_NORMAL_NON_AMPDU))) {
-		status->flag |= RX_FLAG_AMPDU_DETAILS;
-
-		/* all subframes of an A-MPDU have the same timestamp */
-		if (phy->rx_ampdu_ts != rxd12) {
-			if (!++phy->ampdu_ref)
-				phy->ampdu_ref++;
-		}
-		phy->rx_ampdu_ts = rxd12;
-
-		status->ampdu_ref = phy->ampdu_ref;
-	}
-
 	if (rxd0 & MT_RXD0_NORMAL_GROUP_3) {
 		u32 rxdg0 = le32_to_cpu(rxd[0]);
 		u32 rxdg1 = le32_to_cpu(rxd[1]);
@@ -446,20 +465,42 @@ static int mt7615_mac_fill_rx(struct mt7615_dev *dev, struct sk_buff *skb)
 
 	skb_pull(skb, (u8 *)rxd - skb->data + 2 * remove_pad);
 
-	if (insert_ccmp_hdr) {
+	amsdu_info = FIELD_GET(MT_RXD1_NORMAL_PAYLOAD_FORMAT, rxd1);
+	status->amsdu = !!amsdu_info;
+	if (status->amsdu) {
+		status->first_amsdu = amsdu_info == MT_RXD1_FIRST_AMSDU_FRAME;
+		status->last_amsdu = amsdu_info == MT_RXD1_LAST_AMSDU_FRAME;
+		if (!hdr_trans) {
+			memmove(skb->data + 2, skb->data,
+				ieee80211_get_hdrlen_from_skb(skb));
+			skb_pull(skb, 2);
+		}
+	}
+
+	if (insert_ccmp_hdr && !hdr_trans) {
 		u8 key_id = FIELD_GET(MT_RXD1_NORMAL_KEY_ID, rxd1);
 
 		mt76_insert_ccmp_hdr(skb, key_id);
 	}
 
-	hdr = (struct ieee80211_hdr *)skb->data;
-	if (!status->wcid || !ieee80211_is_data_qos(hdr->frame_control))
+	if (!hdr_trans) {
+		hdr = (struct ieee80211_hdr *)skb->data;
+		fc = hdr->frame_control;
+		if (ieee80211_is_data_qos(fc)) {
+			seq_ctrl = le16_to_cpu(hdr->seq_ctrl);
+			qos_ctl = *ieee80211_get_qos_ctl(hdr);
+		}
+	} else {
+		status->flag |= RX_FLAG_8023;
+	}
+
+	if (!status->wcid || !ieee80211_is_data_qos(fc))
 		return 0;
 
 	status->aggr = unicast &&
-		       !ieee80211_is_qos_nullfunc(hdr->frame_control);
-	status->tid = *ieee80211_get_qos_ctl(hdr) & IEEE80211_QOS_CTL_TID_MASK;
-	status->seqno = IEEE80211_SEQ_TO_SN(le16_to_cpu(hdr->seq_ctrl));
+		       !ieee80211_is_qos_nullfunc(fc);
+	status->qos_ctl = qos_ctl;
+	status->seqno = IEEE80211_SEQ_TO_SN(seq_ctrl);
 
 	return 0;
 }
@@ -690,7 +731,7 @@ mt7615_txp_skb_unmap_fw(struct mt76_dev *dev, struct mt7615_fw_txp *txp)
 {
 	int i;
 
-	for (i = 1; i < txp->nbuf; i++)
+	for (i = 0; i < txp->nbuf; i++)
 		dma_unmap_single(dev->dev, le32_to_cpu(txp->buf[i]),
 				 le16_to_cpu(txp->len[i]), DMA_TO_DEVICE);
 }
@@ -966,6 +1007,7 @@ void mt7615_mac_set_rates(struct mt7615_phy *phy, struct mt7615_sta *sta,
 	struct mt7615_dev *dev = phy->dev;
 	struct mt7615_rate_desc rd;
 	u32 w5, w27, addr;
+	u16 idx = sta->vif->mt76.omac_idx;
 
 	if (!mt76_is_mmio(&dev->mt76)) {
 		mt7615_mac_queue_rate_update(phy, sta, probe_rate, rates);
@@ -1017,7 +1059,10 @@ void mt7615_mac_set_rates(struct mt7615_phy *phy, struct mt7615_sta *sta,
 
 	mt76_wr(dev, addr + 27 * 4, w27);
 
-	mt76_set(dev, MT_LPON_T0CR, MT_LPON_T0CR_MODE); /* TSF read */
+	idx = idx > HW_BSSID_MAX ? HW_BSSID_0 : idx;
+	addr = idx > 1 ? MT_LPON_TCR2(idx): MT_LPON_TCR0(idx);
+
+	mt76_set(dev, addr, MT_LPON_TCR_MODE); /* TSF read */
 	sta->rate_set_tsf = mt76_rr(dev, MT_LPON_UTTR0) & ~BIT(0);
 	sta->rate_set_tsf |= rd.rateset;
 
@@ -1033,7 +1078,7 @@ EXPORT_SYMBOL_GPL(mt7615_mac_set_rates);
 static int
 mt7615_mac_wtbl_update_key(struct mt7615_dev *dev, struct mt76_wcid *wcid,
 			   struct ieee80211_key_conf *key,
-			   enum mt7615_cipher_type cipher,
+			   enum mt7615_cipher_type cipher, u16 cipher_mask,
 			   enum set_key_cmd cmd)
 {
 	u32 addr = mt7615_mac_wtbl_addr(dev, wcid->idx) + 30 * 4;
@@ -1050,22 +1095,22 @@ mt7615_mac_wtbl_update_key(struct mt7615_dev *dev, struct mt76_wcid *wcid,
 			memcpy(data + 16, key->key + 24, 8);
 			memcpy(data + 24, key->key + 16, 8);
 		} else {
-			if (cipher != MT_CIPHER_BIP_CMAC_128 && wcid->cipher)
-				memmove(data + 16, data, 16);
-			if (cipher != MT_CIPHER_BIP_CMAC_128 || !wcid->cipher)
+			if (cipher_mask == BIT(cipher))
 				memcpy(data, key->key, key->keylen);
-			else if (cipher == MT_CIPHER_BIP_CMAC_128)
+			else if (cipher != MT_CIPHER_BIP_CMAC_128)
+				memcpy(data, key->key, 16);
+			if (cipher == MT_CIPHER_BIP_CMAC_128)
 				memcpy(data + 16, key->key, 16);
 		}
 	} else {
-		if (wcid->cipher & ~BIT(cipher)) {
-			if (cipher != MT_CIPHER_BIP_CMAC_128)
-				memmove(data, data + 16, 16);
+		if (cipher == MT_CIPHER_BIP_CMAC_128)
 			memset(data + 16, 0, 16);
-		} else {
+		else if (cipher_mask)
+			memset(data, 0, 16);
+		if (!cipher_mask)
 			memset(data, 0, sizeof(data));
-		}
 	}
 
 	mt76_wr_copy(dev, addr, data, sizeof(data));
 
 	return 0;
@@ -1073,7 +1118,7 @@ mt7615_mac_wtbl_update_key(struct mt7615_dev *dev, struct mt76_wcid *wcid,
 
 static int
 mt7615_mac_wtbl_update_pk(struct mt7615_dev *dev, struct mt76_wcid *wcid,
-			  enum mt7615_cipher_type cipher,
+			  enum mt7615_cipher_type cipher, u16 cipher_mask,
 			  int keyidx, enum set_key_cmd cmd)
 {
 	u32 addr = mt7615_mac_wtbl_addr(dev, wcid->idx), w0, w1;
@@ -1083,20 +1128,23 @@ mt7615_mac_wtbl_update_pk(struct mt7615_dev *dev, struct mt76_wcid *wcid,
 
 	w0 = mt76_rr(dev, addr);
 	w1 = mt76_rr(dev, addr + 4);
-	if (cmd == SET_KEY) {
-		w0 |= MT_WTBL_W0_RX_KEY_VALID |
-		      FIELD_PREP(MT_WTBL_W0_RX_IK_VALID,
-				 cipher == MT_CIPHER_BIP_CMAC_128);
-		if (cipher != MT_CIPHER_BIP_CMAC_128 ||
-		    !wcid->cipher)
-			w0 |= FIELD_PREP(MT_WTBL_W0_KEY_IDX, keyidx);
-	} else {
-		if (!(wcid->cipher & ~BIT(cipher)))
-			w0 &= ~(MT_WTBL_W0_RX_KEY_VALID |
-				MT_WTBL_W0_KEY_IDX);
-		if (cipher == MT_CIPHER_BIP_CMAC_128)
-			w0 &= ~MT_WTBL_W0_RX_IK_VALID;
-	}
+
+	if (cipher_mask)
+		w0 |= MT_WTBL_W0_RX_KEY_VALID;
+	else
+		w0 &= ~(MT_WTBL_W0_RX_KEY_VALID | MT_WTBL_W0_KEY_IDX);
+	if (cipher_mask & BIT(MT_CIPHER_BIP_CMAC_128))
+		w0 |= MT_WTBL_W0_RX_IK_VALID;
+	else
+		w0 &= ~MT_WTBL_W0_RX_IK_VALID;
+
+	if (cmd == SET_KEY &&
+	    (cipher != MT_CIPHER_BIP_CMAC_128 ||
+	     cipher_mask == BIT(cipher))) {
+		w0 &= ~MT_WTBL_W0_KEY_IDX;
+		w0 |= FIELD_PREP(MT_WTBL_W0_KEY_IDX, keyidx);
+	}
+
 	mt76_wr(dev, MT_WTBL_RICR0, w0);
 	mt76_wr(dev, MT_WTBL_RICR1, w1);
 
@@ -1109,24 +1157,25 @@ mt7615_mac_wtbl_update_pk(struct mt7615_dev *dev, struct mt76_wcid *wcid,
 
 static void
 mt7615_mac_wtbl_update_cipher(struct mt7615_dev *dev, struct mt76_wcid *wcid,
-			      enum mt7615_cipher_type cipher,
+			      enum mt7615_cipher_type cipher, u16 cipher_mask,
 			      enum set_key_cmd cmd)
 {
 	u32 addr = mt7615_mac_wtbl_addr(dev, wcid->idx);
 
-	if (cmd == SET_KEY) {
-		if (cipher != MT_CIPHER_BIP_CMAC_128 || !wcid->cipher)
-			mt76_rmw(dev, addr + 2 * 4, MT_WTBL_W2_KEY_TYPE,
-				 FIELD_PREP(MT_WTBL_W2_KEY_TYPE, cipher));
-	} else {
-		if (cipher != MT_CIPHER_BIP_CMAC_128 &&
-		    wcid->cipher & BIT(MT_CIPHER_BIP_CMAC_128))
-			mt76_rmw(dev, addr + 2 * 4, MT_WTBL_W2_KEY_TYPE,
-				 FIELD_PREP(MT_WTBL_W2_KEY_TYPE,
-					    MT_CIPHER_BIP_CMAC_128));
-		else if (!(wcid->cipher & ~BIT(cipher)))
-			mt76_clear(dev, addr + 2 * 4, MT_WTBL_W2_KEY_TYPE);
+	if (!cipher_mask) {
+		mt76_clear(dev, addr + 2 * 4, MT_WTBL_W2_KEY_TYPE);
+		return;
 	}
+
+	if (cmd != SET_KEY)
+		return;
+
+	if (cipher == MT_CIPHER_BIP_CMAC_128 &&
+	    cipher_mask & ~BIT(MT_CIPHER_BIP_CMAC_128))
+		return;
+
+	mt76_rmw(dev, addr + 2 * 4, MT_WTBL_W2_KEY_TYPE,
+		 FIELD_PREP(MT_WTBL_W2_KEY_TYPE, cipher));
 }
 
 int __mt7615_mac_wtbl_set_key(struct mt7615_dev *dev,
|
@ -1135,25 +1184,30 @@ int __mt7615_mac_wtbl_set_key(struct mt7615_dev *dev,
|
|||
enum set_key_cmd cmd)
|
||||
{
|
||||
enum mt7615_cipher_type cipher;
|
||||
u16 cipher_mask = wcid->cipher;
|
||||
int err;
|
||||
|
||||
cipher = mt7615_mac_get_cipher(key->cipher);
|
||||
if (cipher == MT_CIPHER_NONE)
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
mt7615_mac_wtbl_update_cipher(dev, wcid, cipher, cmd);
|
||||
err = mt7615_mac_wtbl_update_key(dev, wcid, key, cipher, cmd);
|
||||
if (err < 0)
|
||||
return err;
|
||||
|
||||
err = mt7615_mac_wtbl_update_pk(dev, wcid, cipher, key->keyidx, cmd);
|
||||
if (err < 0)
|
||||
return err;
|
||||
|
||||
if (cmd == SET_KEY)
|
||||
wcid->cipher |= BIT(cipher);
|
||||
cipher_mask |= BIT(cipher);
|
||||
else
|
||||
wcid->cipher &= ~BIT(cipher);
|
||||
cipher_mask &= ~BIT(cipher);
|
||||
|
||||
mt7615_mac_wtbl_update_cipher(dev, wcid, cipher, cipher_mask, cmd);
|
||||
err = mt7615_mac_wtbl_update_key(dev, wcid, key, cipher, cipher_mask,
|
||||
cmd);
|
||||
if (err < 0)
|
||||
return err;
|
||||
|
||||
err = mt7615_mac_wtbl_update_pk(dev, wcid, cipher, cipher_mask,
|
||||
key->keyidx, cmd);
|
||||
if (err < 0)
|
||||
return err;
|
||||
|
||||
wcid->cipher = cipher_mask;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@@ -1821,10 +1875,8 @@ mt7615_mac_update_mib_stats(struct mt7615_phy *phy)
 	int i, aggr;
 	u32 val, val2;
 
-	memset(mib, 0, sizeof(*mib));
-
-	mib->fcs_err_cnt = mt76_get_field(dev, MT_MIB_SDR3(ext_phy),
-					  MT_MIB_SDR3_FCS_ERR_MASK);
+	mib->fcs_err_cnt += mt76_get_field(dev, MT_MIB_SDR3(ext_phy),
+					   MT_MIB_SDR3_FCS_ERR_MASK);
 
 	val = mt76_get_field(dev, MT_MIB_SDR14(ext_phy),
 			     MT_MIB_AMPDU_MPDU_COUNT);
@@ -1837,24 +1889,16 @@ mt7615_mac_update_mib_stats(struct mt7615_phy *phy)
 	aggr = ext_phy ? ARRAY_SIZE(dev->mt76.aggr_stats) / 2 : 0;
 	for (i = 0; i < 4; i++) {
 		val = mt76_rr(dev, MT_MIB_MB_SDR1(ext_phy, i));
-
-		val2 = FIELD_GET(MT_MIB_ACK_FAIL_COUNT_MASK, val);
-		if (val2 > mib->ack_fail_cnt)
-			mib->ack_fail_cnt = val2;
-
-		val2 = FIELD_GET(MT_MIB_BA_MISS_COUNT_MASK, val);
-		if (val2 > mib->ba_miss_cnt)
-			mib->ba_miss_cnt = val2;
+		mib->ba_miss_cnt += FIELD_GET(MT_MIB_BA_MISS_COUNT_MASK, val);
+		mib->ack_fail_cnt += FIELD_GET(MT_MIB_ACK_FAIL_COUNT_MASK,
					       val);
 
 		val = mt76_rr(dev, MT_MIB_MB_SDR0(ext_phy, i));
-		val2 = FIELD_GET(MT_MIB_RTS_RETRIES_COUNT_MASK, val);
-		if (val2 > mib->rts_retries_cnt) {
-			mib->rts_cnt = FIELD_GET(MT_MIB_RTS_COUNT_MASK, val);
-			mib->rts_retries_cnt = val2;
-		}
+		mib->rts_cnt += FIELD_GET(MT_MIB_RTS_COUNT_MASK, val);
+		mib->rts_retries_cnt += FIELD_GET(MT_MIB_RTS_RETRIES_COUNT_MASK,
						  val);
 
 		val = mt76_rr(dev, MT_TX_AGG_CNT(ext_phy, i));
 
 		dev->mt76.aggr_stats[aggr++] += val & 0xffff;
 		dev->mt76.aggr_stats[aggr++] += val >> 16;
 	}
@@ -1924,74 +1968,6 @@ void mt7615_mac_work(struct work_struct *work)
 				     MT7615_WATCHDOG_TIME);
 }
 
-static bool
-mt7615_wait_reset_state(struct mt7615_dev *dev, u32 state)
-{
-	bool ret;
-
-	ret = wait_event_timeout(dev->reset_wait,
-				 (READ_ONCE(dev->reset_state) & state),
-				 MT7615_RESET_TIMEOUT);
-	WARN(!ret, "Timeout waiting for MCU reset state %x\n", state);
-	return ret;
-}
-
-static void
-mt7615_update_vif_beacon(void *priv, u8 *mac, struct ieee80211_vif *vif)
-{
-	struct ieee80211_hw *hw = priv;
-	struct mt7615_dev *dev = mt7615_hw_dev(hw);
-
-	switch (vif->type) {
-	case NL80211_IFTYPE_MESH_POINT:
-	case NL80211_IFTYPE_ADHOC:
-	case NL80211_IFTYPE_AP:
-		mt7615_mcu_add_beacon(dev, hw, vif,
-				      vif->bss_conf.enable_beacon);
-		break;
-	default:
-		break;
-	}
-}
-
-static void
-mt7615_update_beacons(struct mt7615_dev *dev)
-{
-	ieee80211_iterate_active_interfaces(dev->mt76.hw,
-		IEEE80211_IFACE_ITER_RESUME_ALL,
-		mt7615_update_vif_beacon, dev->mt76.hw);
-
-	if (!dev->mt76.phy2)
-		return;
-
-	ieee80211_iterate_active_interfaces(dev->mt76.phy2->hw,
-		IEEE80211_IFACE_ITER_RESUME_ALL,
-		mt7615_update_vif_beacon, dev->mt76.phy2->hw);
-}
-
-void mt7615_dma_reset(struct mt7615_dev *dev)
-{
-	int i;
-
-	mt76_clear(dev, MT_WPDMA_GLO_CFG,
-		   MT_WPDMA_GLO_CFG_RX_DMA_EN | MT_WPDMA_GLO_CFG_TX_DMA_EN |
-		   MT_WPDMA_GLO_CFG_TX_WRITEBACK_DONE);
-	usleep_range(1000, 2000);
-
-	mt76_queue_tx_cleanup(dev, dev->mt76.q_mcu[MT_MCUQ_WM], true);
-	for (i = 0; i < __MT_TXQ_MAX; i++)
-		mt76_queue_tx_cleanup(dev, dev->mphy.q_tx[i], true);
-
-	mt76_for_each_q_rx(&dev->mt76, i) {
-		mt76_queue_rx_reset(dev, i);
-	}
-
-	mt76_set(dev, MT_WPDMA_GLO_CFG,
-		 MT_WPDMA_GLO_CFG_RX_DMA_EN | MT_WPDMA_GLO_CFG_TX_DMA_EN |
-		 MT_WPDMA_GLO_CFG_TX_WRITEBACK_DONE);
-}
-EXPORT_SYMBOL_GPL(mt7615_dma_reset);
-
 void mt7615_tx_token_put(struct mt7615_dev *dev)
 {
 	struct mt76_txwi_cache *txwi;
@@ -2009,94 +1985,6 @@ void mt7615_tx_token_put(struct mt7615_dev *dev)
 }
 EXPORT_SYMBOL_GPL(mt7615_tx_token_put);
 
-void mt7615_mac_reset_work(struct work_struct *work)
-{
-	struct mt7615_phy *phy2;
-	struct mt76_phy *ext_phy;
-	struct mt7615_dev *dev;
-
-	dev = container_of(work, struct mt7615_dev, reset_work);
-	ext_phy = dev->mt76.phy2;
-	phy2 = ext_phy ? ext_phy->priv : NULL;
-
-	if (!(READ_ONCE(dev->reset_state) & MT_MCU_CMD_STOP_PDMA))
-		return;
-
-	ieee80211_stop_queues(mt76_hw(dev));
-	if (ext_phy)
-		ieee80211_stop_queues(ext_phy->hw);
-
-	set_bit(MT76_RESET, &dev->mphy.state);
-	set_bit(MT76_MCU_RESET, &dev->mphy.state);
-	wake_up(&dev->mt76.mcu.wait);
-	cancel_delayed_work_sync(&dev->mphy.mac_work);
-	del_timer_sync(&dev->phy.roc_timer);
-	cancel_work_sync(&dev->phy.roc_work);
-	if (phy2) {
-		cancel_delayed_work_sync(&phy2->mt76->mac_work);
-		del_timer_sync(&phy2->roc_timer);
-		cancel_work_sync(&phy2->roc_work);
-	}
-
-	/* lock/unlock all queues to ensure that no tx is pending */
-	mt76_txq_schedule_all(&dev->mphy);
-	if (ext_phy)
-		mt76_txq_schedule_all(ext_phy);
-
-	mt76_worker_disable(&dev->mt76.tx_worker);
-	napi_disable(&dev->mt76.napi[0]);
-	napi_disable(&dev->mt76.napi[1]);
-	napi_disable(&dev->mt76.tx_napi);
-
-	mt7615_mutex_acquire(dev);
-
-	mt76_wr(dev, MT_MCU_INT_EVENT, MT_MCU_INT_EVENT_PDMA_STOPPED);
-
-	mt7615_tx_token_put(dev);
-	idr_init(&dev->token);
-
-	if (mt7615_wait_reset_state(dev, MT_MCU_CMD_RESET_DONE)) {
-		mt7615_dma_reset(dev);
-
-		mt76_wr(dev, MT_WPDMA_MEM_RNG_ERR, 0);
-
-		mt76_wr(dev, MT_MCU_INT_EVENT, MT_MCU_INT_EVENT_PDMA_INIT);
-		mt7615_wait_reset_state(dev, MT_MCU_CMD_RECOVERY_DONE);
-	}
-
-	clear_bit(MT76_MCU_RESET, &dev->mphy.state);
-	clear_bit(MT76_RESET, &dev->mphy.state);
-
-	mt76_worker_enable(&dev->mt76.tx_worker);
-	napi_enable(&dev->mt76.tx_napi);
-	napi_schedule(&dev->mt76.tx_napi);
-
-	napi_enable(&dev->mt76.napi[0]);
-	napi_schedule(&dev->mt76.napi[0]);
-
-	napi_enable(&dev->mt76.napi[1]);
-	napi_schedule(&dev->mt76.napi[1]);
-
-	ieee80211_wake_queues(mt76_hw(dev));
-	if (ext_phy)
-		ieee80211_wake_queues(ext_phy->hw);
-
-	mt76_wr(dev, MT_MCU_INT_EVENT, MT_MCU_INT_EVENT_RESET_DONE);
-	mt7615_wait_reset_state(dev, MT_MCU_CMD_NORMAL_STATE);
-
-	mt7615_update_beacons(dev);
-
-	mt7615_mutex_release(dev);
-
-	ieee80211_queue_delayed_work(mt76_hw(dev), &dev->mphy.mac_work,
-				     MT7615_WATCHDOG_TIME);
-	if (phy2)
-		ieee80211_queue_delayed_work(ext_phy->hw,
-					     &phy2->mt76->mac_work,
-					     MT7615_WATCHDOG_TIME);
-
-}
-
 static void mt7615_dfs_stop_radar_detector(struct mt7615_phy *phy)
 {
 	struct mt7615_dev *dev = phy->dev;
@@ -2304,8 +2192,10 @@ void mt7615_coredump_work(struct work_struct *work)
 			break;
 
 		skb_pull(skb, sizeof(struct mt7615_mcu_rxd));
-		if (data + skb->len - dump > MT76_CONNAC_COREDUMP_SZ)
-			break;
+		if (data + skb->len - dump > MT76_CONNAC_COREDUMP_SZ) {
+			dev_kfree_skb(skb);
+			continue;
+		}
 
 		memcpy(data, skb->data, skb->len);
 		data += skb->len;
@@ -33,6 +33,9 @@ enum rx_pkt_type {
 
 #define MT_RXD1_NORMAL_BSSID		GENMASK(31, 26)
 #define MT_RXD1_NORMAL_PAYLOAD_FORMAT	GENMASK(25, 24)
+#define MT_RXD1_FIRST_AMSDU_FRAME	GENMASK(1, 0)
+#define MT_RXD1_MID_AMSDU_FRAME		BIT(1)
+#define MT_RXD1_LAST_AMSDU_FRAME	BIT(0)
 #define MT_RXD1_NORMAL_HDR_TRANS	BIT(23)
 #define MT_RXD1_NORMAL_HDR_OFFSET	BIT(22)
 #define MT_RXD1_NORMAL_MAC_HDR_LEN	GENMASK(21, 16)
@@ -78,6 +81,11 @@ enum rx_pkt_type {
 #define MT_RXD3_NORMAL_TSF_COMPARE_LOSS	BIT(8)
 #define MT_RXD3_NORMAL_RXV_SEQ		GENMASK(7, 0)
 
+#define MT_RXD4_FRAME_CONTROL		GENMASK(15, 0)
+
+#define MT_RXD6_SEQ_CTRL		GENMASK(15, 0)
+#define MT_RXD6_QOS_CTL			GENMASK(31, 16)
+
 #define MT_RXV1_ACID_DET_H		BIT(31)
 #define MT_RXV1_ACID_DET_L		BIT(30)
 #define MT_RXV1_VHTA2_B8_B3		GENMASK(29, 24)
@@ -29,6 +29,7 @@ static int mt7615_start(struct ieee80211_hw *hw)
 	struct mt7615_dev *dev = mt7615_hw_dev(hw);
 	struct mt7615_phy *phy = mt7615_hw_phy(hw);
 	bool running;
+	int ret;
 
 	if (!mt7615_wait_for_mcu_init(dev))
 		return -EIO;
@@ -38,21 +39,38 @@ static int mt7615_start(struct ieee80211_hw *hw)
 	running = mt7615_dev_running(dev);
 
 	if (!running) {
-		mt7615_mcu_set_pm(dev, 0, 0);
-		mt76_connac_mcu_set_mac_enable(&dev->mt76, 0, true, false);
+		ret = mt7615_mcu_set_pm(dev, 0, 0);
+		if (ret)
+			goto out;
+
+		ret = mt76_connac_mcu_set_mac_enable(&dev->mt76, 0, true, false);
+		if (ret)
+			goto out;
+
 		mt7615_mac_enable_nf(dev, 0);
 	}
 
 	if (phy != &dev->phy) {
-		mt7615_mcu_set_pm(dev, 1, 0);
-		mt76_connac_mcu_set_mac_enable(&dev->mt76, 1, true, false);
+		ret = mt7615_mcu_set_pm(dev, 1, 0);
+		if (ret)
+			goto out;
+
+		ret = mt76_connac_mcu_set_mac_enable(&dev->mt76, 1, true, false);
+		if (ret)
+			goto out;
+
 		mt7615_mac_enable_nf(dev, 1);
 	}
 
-	if (mt7615_firmware_offload(dev))
-		mt76_connac_mcu_set_channel_domain(phy->mt76);
+	if (mt7615_firmware_offload(dev)) {
+		ret = mt76_connac_mcu_set_channel_domain(phy->mt76);
+		if (ret)
+			goto out;
+	}
 
-	mt7615_mcu_set_chan_info(phy, MCU_EXT_CMD_SET_RX_PATH);
+	ret = mt7615_mcu_set_chan_info(phy, MCU_EXT_CMD_SET_RX_PATH);
+	if (ret)
+		goto out;
 
 	set_bit(MT76_STATE_RUNNING, &phy->mt76->state);
 
@@ -62,9 +80,10 @@ static int mt7615_start(struct ieee80211_hw *hw)
 	if (!running)
 		mt7615_mac_reset_counters(dev);
 
+out:
 	mt7615_mutex_release(dev);
 
-	return 0;
+	return ret;
 }
 
 static void mt7615_stop(struct ieee80211_hw *hw)
@@ -197,7 +216,9 @@ static int mt7615_add_interface(struct ieee80211_hw *hw,
 	dev->omac_mask |= BIT_ULL(mvif->mt76.omac_idx);
 	phy->omac_mask |= BIT_ULL(mvif->mt76.omac_idx);
 
-	mt7615_mcu_set_dbdc(dev);
+	ret = mt7615_mcu_set_dbdc(dev);
+	if (ret)
+		goto out;
 
 	idx = MT7615_WTBL_RESERVED - mvif->mt76.idx;
 
@@ -217,8 +238,6 @@ static int mt7615_add_interface(struct ieee80211_hw *hw,
 	ret = mt7615_mcu_add_dev_info(phy, vif, true);
 	if (ret)
 		goto out;
-
-	mt7615_mac_set_beacon_filter(phy, vif, true);
 out:
 	mt7615_mutex_release(dev);
 
@@ -234,17 +253,17 @@ static void mt7615_remove_interface(struct ieee80211_hw *hw,
 	struct mt7615_phy *phy = mt7615_hw_phy(hw);
 	int idx = msta->wcid.idx;
 
-	/* TODO: disable beacon for the bss */
-
 	mt7615_mutex_acquire(dev);
 
+	mt7615_mcu_add_bss_info(phy, vif, NULL, false);
+	mt7615_mcu_sta_add(phy, vif, NULL, false);
+
 	mt76_testmode_reset(phy->mt76, true);
 	if (vif == phy->monitor_vif)
 		phy->monitor_vif = NULL;
 
 	mt76_connac_free_pending_tx_skbs(&dev->pm, &msta->wcid);
 
 	mt7615_mac_set_beacon_filter(phy, vif, false);
 	mt7615_mcu_add_dev_info(phy, vif, false);
 
 	rcu_assign_pointer(dev->mt76.wcid[idx], NULL);
@@ -296,8 +315,13 @@ int mt7615_set_channel(struct mt7615_phy *phy)
 	mt76_set_channel(phy->mt76);
 
 	if (is_mt7615(&dev->mt76) && dev->flash_eeprom) {
-		mt7615_mcu_apply_rx_dcoc(phy);
-		mt7615_mcu_apply_tx_dpd(phy);
+		ret = mt7615_mcu_apply_rx_dcoc(phy);
+		if (ret)
+			goto out;
+
+		ret = mt7615_mcu_apply_tx_dpd(phy);
+		if (ret)
+			goto out;
 	}
 
 	ret = mt7615_mcu_set_chan_info(phy, MCU_EXT_CMD_CHANNEL_SWITCH);
@@ -306,8 +330,13 @@ int mt7615_set_channel(struct mt7615_phy *phy)
 
 	mt7615_mac_set_timing(phy);
 	ret = mt7615_dfs_init_radar_detector(phy);
+	if (ret)
+		goto out;
+
 	mt7615_mac_cca_stats_reset(phy);
-	mt7615_mcu_set_sku_en(phy, true);
+	ret = mt7615_mcu_set_sku_en(phy, true);
+	if (ret)
+		goto out;
 
 	mt7615_mac_reset_counters(dev);
 	phy->noise = 0;
@@ -337,7 +366,8 @@ static int mt7615_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
 	struct mt7615_sta *msta = sta ? (struct mt7615_sta *)sta->drv_priv :
 				       &mvif->sta;
 	struct mt76_wcid *wcid = &msta->wcid;
-	int idx = key->keyidx, err;
+	int idx = key->keyidx, err = 0;
+	u8 *wcid_keyidx = &wcid->hw_key_idx;
 
 	/* The hardware does not support per-STA RX GTK, fallback
 	 * to software mode for these.
@ -352,6 +382,7 @@ static int mt7615_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
|
|||
/* fall back to sw encryption for unsupported ciphers */
|
||||
switch (key->cipher) {
|
||||
case WLAN_CIPHER_SUITE_AES_CMAC:
|
||||
wcid_keyidx = &wcid->hw_key_idx2;
|
||||
key->flags |= IEEE80211_KEY_FLAG_GENERATE_MMIE;
|
||||
break;
|
||||
case WLAN_CIPHER_SUITE_TKIP:
|
||||
|
@ -369,12 +400,13 @@ static int mt7615_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
|
|||
|
||||
mt7615_mutex_acquire(dev);
|
||||
|
||||
if (cmd == SET_KEY) {
|
||||
key->hw_key_idx = wcid->idx;
|
||||
wcid->hw_key_idx = idx;
|
||||
} else if (idx == wcid->hw_key_idx) {
|
||||
wcid->hw_key_idx = -1;
|
||||
}
|
||||
if (cmd == SET_KEY)
|
||||
*wcid_keyidx = idx;
|
||||
else if (idx == *wcid_keyidx)
|
||||
*wcid_keyidx = -1;
|
||||
else
|
||||
goto out;
|
||||
|
||||
mt76_wcid_key_setup(&dev->mt76, wcid,
|
||||
cmd == SET_KEY ? key : NULL);
|
||||
|
||||
|
@ -383,6 +415,7 @@ static int mt7615_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
|
|||
else
|
||||
err = __mt7615_mac_wtbl_set_key(dev, wcid, key, cmd);
|
||||
|
||||
out:
|
||||
mt7615_mutex_release(dev);
|
||||
|
||||
return err;
|
||||
|
@ -526,11 +559,11 @@ static void mt7615_bss_info_changed(struct ieee80211_hw *hw,
|
|||
}
|
||||
}
|
||||
|
||||
if (changed & BSS_CHANGED_BEACON_ENABLED) {
|
||||
mt7615_mcu_add_bss_info(phy, vif, NULL, info->enable_beacon);
|
||||
mt7615_mcu_sta_add(phy, vif, NULL, info->enable_beacon);
|
||||
if (changed & BSS_CHANGED_BEACON_ENABLED && info->enable_beacon) {
|
||||
mt7615_mcu_add_bss_info(phy, vif, NULL, true);
|
||||
mt7615_mcu_sta_add(phy, vif, NULL, true);
|
||||
|
||||
if (vif->p2p && info->enable_beacon)
|
||||
if (vif->p2p)
|
||||
mt7615_mcu_set_p2p_oppps(hw, vif);
|
||||
}
|
||||
|
||||
|
@ -544,6 +577,9 @@ static void mt7615_bss_info_changed(struct ieee80211_hw *hw,
|
|||
if (changed & BSS_CHANGED_ARP_FILTER)
|
||||
mt7615_mcu_update_arp_filter(hw, vif, info);
|
||||
|
||||
if (changed & BSS_CHANGED_ASSOC)
|
||||
mt7615_mac_set_beacon_filter(phy, vif, info->assoc);
|
||||
|
||||
mt7615_mutex_release(dev);
|
||||
}
|
||||
|
||||
|
@ -583,15 +619,21 @@ int mt7615_mac_sta_add(struct mt76_dev *mdev, struct ieee80211_vif *vif,
|
|||
if (err)
|
||||
return err;
|
||||
|
||||
if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls)
|
||||
mt7615_mcu_add_bss_info(phy, vif, sta, true);
|
||||
if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls) {
|
||||
err = mt7615_mcu_add_bss_info(phy, vif, sta, true);
|
||||
if (err)
|
||||
return err;
|
||||
}
|
||||
|
||||
mt7615_mac_wtbl_update(dev, idx,
|
||||
MT_WTBL_UPDATE_ADM_COUNT_CLEAR);
|
||||
mt7615_mcu_sta_add(&dev->phy, vif, sta, true);
|
||||
err = mt7615_mcu_sta_add(&dev->phy, vif, sta, true);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
mt76_connac_power_save_sched(phy->mt76, &dev->pm);
|
||||
|
||||
return 0;
|
||||
return err;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(mt7615_mac_sta_add);
|
||||
|
||||
|
@ -711,13 +753,13 @@ static int mt7615_set_rts_threshold(struct ieee80211_hw *hw, u32 val)
|
|||
{
|
||||
struct mt7615_dev *dev = mt7615_hw_dev(hw);
|
||||
struct mt7615_phy *phy = mt7615_hw_phy(hw);
|
||||
int band = phy != &dev->phy;
|
||||
int err, band = phy != &dev->phy;
|
||||
|
||||
mt7615_mutex_acquire(dev);
|
||||
mt76_connac_mcu_set_rts_thresh(&dev->mt76, val, band);
|
||||
err = mt76_connac_mcu_set_rts_thresh(&dev->mt76, val, band);
|
||||
mt7615_mutex_release(dev);
|
||||
|
||||
return 0;
|
||||
return err;
|
||||
}
|
||||
|
||||
static int
|
||||
|
@ -745,16 +787,16 @@ mt7615_ampdu_action(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
|
|||
case IEEE80211_AMPDU_RX_START:
|
||||
mt76_rx_aggr_start(&dev->mt76, &msta->wcid, tid, ssn,
|
||||
params->buf_size);
|
||||
mt7615_mcu_add_rx_ba(dev, params, true);
|
||||
ret = mt7615_mcu_add_rx_ba(dev, params, true);
|
||||
break;
|
||||
case IEEE80211_AMPDU_RX_STOP:
|
||||
mt76_rx_aggr_stop(&dev->mt76, &msta->wcid, tid);
|
||||
mt7615_mcu_add_rx_ba(dev, params, false);
|
||||
ret = mt7615_mcu_add_rx_ba(dev, params, false);
|
||||
break;
|
||||
case IEEE80211_AMPDU_TX_OPERATIONAL:
|
||||
mtxq->aggr = true;
|
||||
mtxq->send_bar = false;
|
||||
mt7615_mcu_add_tx_ba(dev, params, true);
|
||||
ret = mt7615_mcu_add_tx_ba(dev, params, true);
|
||||
ssn = mt7615_mac_get_sta_tid_sn(dev, msta->wcid.idx, tid);
|
||||
ieee80211_send_bar(vif, sta->addr, tid,
|
||||
IEEE80211_SN_TO_SEQ(ssn));
|
||||
|
@ -762,7 +804,7 @@ mt7615_ampdu_action(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
|
|||
case IEEE80211_AMPDU_TX_STOP_FLUSH:
|
||||
case IEEE80211_AMPDU_TX_STOP_FLUSH_CONT:
|
||||
mtxq->aggr = false;
|
||||
mt7615_mcu_add_tx_ba(dev, params, false);
|
||||
ret = mt7615_mcu_add_tx_ba(dev, params, false);
|
||||
break;
|
||||
case IEEE80211_AMPDU_TX_START:
|
||||
ssn = mt7615_mac_get_sta_tid_sn(dev, msta->wcid.idx, tid);
|
||||
|
@ -771,7 +813,7 @@ mt7615_ampdu_action(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
|
|||
break;
|
||||
case IEEE80211_AMPDU_TX_STOP_CONT:
|
||||
mtxq->aggr = false;
|
||||
mt7615_mcu_add_tx_ba(dev, params, false);
|
||||
ret = mt7615_mcu_add_tx_ba(dev, params, false);
|
||||
ieee80211_stop_tx_ba_cb_irqsafe(vif, sta->addr, tid);
|
||||
break;
|
||||
}
|
||||
|
@ -803,26 +845,38 @@ mt7615_get_stats(struct ieee80211_hw *hw,
|
|||
struct mt7615_phy *phy = mt7615_hw_phy(hw);
|
||||
struct mib_stats *mib = &phy->mib;
|
||||
|
||||
mt7615_mutex_acquire(phy->dev);
|
||||
|
||||
stats->dot11RTSSuccessCount = mib->rts_cnt;
|
||||
stats->dot11RTSFailureCount = mib->rts_retries_cnt;
|
||||
stats->dot11FCSErrorCount = mib->fcs_err_cnt;
|
||||
stats->dot11ACKFailureCount = mib->ack_fail_cnt;
|
||||
|
||||
memset(mib, 0, sizeof(*mib));
|
||||
|
||||
mt7615_mutex_release(phy->dev);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static u64
|
||||
mt7615_get_tsf(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
|
||||
{
|
||||
struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
|
||||
struct mt7615_dev *dev = mt7615_hw_dev(hw);
|
||||
union {
|
||||
u64 t64;
|
||||
u32 t32[2];
|
||||
} tsf;
|
||||
u16 idx = mvif->mt76.omac_idx;
|
||||
u32 reg;
|
||||
|
||||
idx = idx > HW_BSSID_MAX ? HW_BSSID_0 : idx;
|
||||
reg = idx > 1 ? MT_LPON_TCR2(idx): MT_LPON_TCR0(idx);
|
||||
|
||||
mt7615_mutex_acquire(dev);
|
||||
|
||||
mt76_set(dev, MT_LPON_T0CR, MT_LPON_T0CR_MODE); /* TSF read */
|
||||
mt76_set(dev, reg, MT_LPON_TCR_MODE); /* TSF read */
|
||||
tsf.t32[0] = mt76_rr(dev, MT_LPON_UTTR0);
|
||||
tsf.t32[1] = mt76_rr(dev, MT_LPON_UTTR1);
|
||||
|
||||
|
@ -835,18 +889,24 @@ static void
|
|||
mt7615_set_tsf(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
|
||||
u64 timestamp)
|
||||
{
|
||||
struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
|
||||
struct mt7615_dev *dev = mt7615_hw_dev(hw);
|
||||
union {
|
||||
u64 t64;
|
||||
u32 t32[2];
|
||||
} tsf = { .t64 = timestamp, };
|
||||
u16 idx = mvif->mt76.omac_idx;
|
||||
u32 reg;
|
||||
|
||||
idx = idx > HW_BSSID_MAX ? HW_BSSID_0 : idx;
|
||||
reg = idx > 1 ? MT_LPON_TCR2(idx): MT_LPON_TCR0(idx);
|
||||
|
||||
mt7615_mutex_acquire(dev);
|
||||
|
||||
mt76_wr(dev, MT_LPON_UTTR0, tsf.t32[0]);
|
||||
mt76_wr(dev, MT_LPON_UTTR1, tsf.t32[1]);
|
||||
/* TSF software overwrite */
|
||||
mt76_set(dev, MT_LPON_T0CR, MT_LPON_T0CR_WRITE);
|
||||
mt76_set(dev, reg, MT_LPON_TCR_WRITE);
|
||||
|
||||
mt7615_mutex_release(dev);
|
||||
}
|
||||
|
@ -1069,6 +1129,7 @@ static int mt7615_cancel_remain_on_channel(struct ieee80211_hw *hw,
|
|||
struct ieee80211_vif *vif)
|
||||
{
|
||||
struct mt7615_phy *phy = mt7615_hw_phy(hw);
|
||||
int err;
|
||||
|
||||
if (!test_and_clear_bit(MT76_STATE_ROC, &phy->mt76->state))
|
||||
return 0;
|
||||
|
@ -1077,10 +1138,26 @@ static int mt7615_cancel_remain_on_channel(struct ieee80211_hw *hw,
|
|||
cancel_work_sync(&phy->roc_work);
|
||||
|
||||
mt7615_mutex_acquire(phy->dev);
|
||||
mt7615_mcu_set_roc(phy, vif, NULL, 0);
|
||||
err = mt7615_mcu_set_roc(phy, vif, NULL, 0);
|
||||
mt7615_mutex_release(phy->dev);
|
||||
|
||||
return 0;
|
||||
return err;
|
||||
}
|
||||
|
||||
static void mt7615_sta_set_decap_offload(struct ieee80211_hw *hw,
|
||||
struct ieee80211_vif *vif,
|
||||
struct ieee80211_sta *sta,
|
||||
bool enabled)
|
||||
{
|
||||
struct mt7615_dev *dev = mt7615_hw_dev(hw);
|
||||
struct mt7615_sta *msta = (struct mt7615_sta *)sta->drv_priv;
|
||||
|
||||
if (enabled)
|
||||
set_bit(MT_WCID_FLAG_HDR_TRANS, &msta->wcid.flags);
|
||||
else
|
||||
clear_bit(MT_WCID_FLAG_HDR_TRANS, &msta->wcid.flags);
|
||||
|
||||
mt7615_mcu_sta_update_hdr_trans(dev, vif, sta);
|
||||
}
|
||||
|
||||
#ifdef CONFIG_PM
|
||||
|
@ -1183,6 +1260,7 @@ const struct ieee80211_ops mt7615_ops = {
|
|||
.sta_remove = mt7615_sta_remove,
|
||||
.sta_pre_rcu_remove = mt76_sta_pre_rcu_remove,
|
||||
.set_key = mt7615_set_key,
|
||||
.sta_set_decap_offload = mt7615_sta_set_decap_offload,
|
||||
.ampdu_action = mt7615_ampdu_action,
|
||||
.set_rts_threshold = mt7615_set_rts_threshold,
|
||||
.wake_tx_queue = mt7615_wake_tx_queue,
|
||||
|
|
|
@ -274,7 +274,7 @@ int mt7615_rf_wr(struct mt7615_dev *dev, u32 wf, u32 reg, u32 val)
|
|||
sizeof(req), false);
|
||||
}
|
||||
|
||||
static void mt7622_trigger_hif_int(struct mt7615_dev *dev, bool en)
|
||||
void mt7622_trigger_hif_int(struct mt7615_dev *dev, bool en)
|
||||
{
|
||||
if (!is_mt7622(&dev->mt76))
|
||||
return;
|
||||
|
@ -283,6 +283,7 @@ static void mt7622_trigger_hif_int(struct mt7615_dev *dev, bool en)
|
|||
MT_INFRACFG_MISC_AP2CONN_WAKE,
|
||||
!en * MT_INFRACFG_MISC_AP2CONN_WAKE);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(mt7622_trigger_hif_int);
|
||||
|
||||
static int mt7615_mcu_drv_pmctrl(struct mt7615_dev *dev)
|
||||
{
|
||||
|
@ -372,6 +373,23 @@ mt7615_mcu_csa_finish(void *priv, u8 *mac, struct ieee80211_vif *vif)
|
|||
ieee80211_csa_finish(vif);
|
||||
}
|
||||
|
||||
static void
|
||||
mt7615_mcu_rx_csa_notify(struct mt7615_dev *dev, struct sk_buff *skb)
|
||||
{
|
||||
struct mt7615_phy *ext_phy = mt7615_ext_phy(dev);
|
||||
struct mt76_phy *mphy = &dev->mt76.phy;
|
||||
struct mt7615_mcu_csa_notify *c;
|
||||
|
||||
c = (struct mt7615_mcu_csa_notify *)skb->data;
|
||||
|
||||
if (ext_phy && ext_phy->omac_mask & BIT_ULL(c->omac_idx))
|
||||
mphy = dev->mt76.phy2;
|
||||
|
||||
ieee80211_iterate_active_interfaces_atomic(mphy->hw,
|
||||
IEEE80211_IFACE_ITER_RESUME_ALL,
|
||||
mt7615_mcu_csa_finish, mphy->hw);
|
||||
}
|
||||
|
||||
static void
|
||||
mt7615_mcu_rx_radar_detected(struct mt7615_dev *dev, struct sk_buff *skb)
|
||||
{
|
||||
|
@ -380,7 +398,7 @@ mt7615_mcu_rx_radar_detected(struct mt7615_dev *dev, struct sk_buff *skb)
|
|||
|
||||
r = (struct mt7615_mcu_rdd_report *)skb->data;
|
||||
|
||||
if (r->idx && dev->mt76.phy2)
|
||||
if (r->band_idx && dev->mt76.phy2)
|
||||
mphy = dev->mt76.phy2;
|
||||
|
||||
ieee80211_radar_detected(mphy->hw);
|
||||
|
@ -406,7 +424,8 @@ mt7615_mcu_rx_log_message(struct mt7615_dev *dev, struct sk_buff *skb)
|
|||
break;
|
||||
}
|
||||
|
||||
wiphy_info(mt76_hw(dev)->wiphy, "%s: %s", type, data);
|
||||
wiphy_info(mt76_hw(dev)->wiphy, "%s: %*s", type,
|
||||
(int)(skb->len - sizeof(*rxd)), data);
|
||||
}
|
||||
|
||||
static void
|
||||
|
@ -419,9 +438,7 @@ mt7615_mcu_rx_ext_event(struct mt7615_dev *dev, struct sk_buff *skb)
|
|||
mt7615_mcu_rx_radar_detected(dev, skb);
|
||||
break;
|
||||
case MCU_EXT_EVENT_CSA_NOTIFY:
|
||||
ieee80211_iterate_active_interfaces_atomic(dev->mt76.hw,
|
||||
IEEE80211_IFACE_ITER_RESUME_ALL,
|
||||
mt7615_mcu_csa_finish, dev);
|
||||
mt7615_mcu_rx_csa_notify(dev, skb);
|
||||
break;
|
||||
case MCU_EXT_EVENT_FW_LOG_2_HOST:
|
||||
mt7615_mcu_rx_log_message(dev, skb);
|
||||
|
@ -685,6 +702,9 @@ mt7615_mcu_add_beacon_offload(struct mt7615_dev *dev,
|
|||
};
|
||||
struct sk_buff *skb;
|
||||
|
||||
if (!enable)
|
||||
goto out;
|
||||
|
||||
skb = ieee80211_beacon_get_template(hw, vif, &offs);
|
||||
if (!skb)
|
||||
return -EINVAL;
|
||||
|
@ -714,6 +734,7 @@ mt7615_mcu_add_beacon_offload(struct mt7615_dev *dev,
|
|||
}
|
||||
dev_kfree_skb(skb);
|
||||
|
||||
out:
|
||||
return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_BCN_OFFLOAD, &req,
|
||||
sizeof(req), true);
|
||||
}
|
||||
|
@ -973,7 +994,7 @@ mt7615_mcu_wtbl_sta_add(struct mt7615_phy *phy, struct ieee80211_vif *vif,
|
|||
|
||||
mt76_connac_mcu_sta_basic_tlv(sskb, vif, sta, enable);
|
||||
if (enable && sta)
|
||||
mt76_connac_mcu_sta_tlv(phy->mt76, sskb, sta, vif);
|
||||
mt76_connac_mcu_sta_tlv(phy->mt76, sskb, sta, vif, 0);
|
||||
|
||||
wtbl_hdr = mt76_connac_mcu_alloc_wtbl_req(&dev->mt76, &msta->wcid,
|
||||
WTBL_RESET_AND_SET, NULL,
|
||||
|
@ -987,6 +1008,8 @@ mt7615_mcu_wtbl_sta_add(struct mt7615_phy *phy, struct ieee80211_vif *vif,
|
|||
if (sta)
|
||||
mt76_connac_mcu_wtbl_ht_tlv(&dev->mt76, wskb, sta,
|
||||
NULL, wtbl_hdr);
|
||||
mt76_connac_mcu_wtbl_hdr_trans_tlv(wskb, &msta->wcid, NULL,
|
||||
wtbl_hdr);
|
||||
}
|
||||
|
||||
cmd = enable ? MCU_EXT_CMD_WTBL_UPDATE : MCU_EXT_CMD_STA_REC_UPDATE;
|
||||
|
@ -1040,6 +1063,9 @@ mt7615_mcu_sta_ba(struct mt7615_dev *dev,
|
|||
|
||||
wtbl_hdr = mt76_connac_mcu_alloc_wtbl_req(&dev->mt76, &msta->wcid,
|
||||
WTBL_SET, sta_wtbl, &skb);
|
||||
if (IS_ERR(wtbl_hdr))
|
||||
return PTR_ERR(wtbl_hdr);
|
||||
|
||||
mt76_connac_mcu_wtbl_ba_tlv(&dev->mt76, skb, params, enable, tx,
|
||||
sta_wtbl, wtbl_hdr);
|
||||
|
||||
|
@ -1068,10 +1094,15 @@ __mt7615_mcu_add_sta(struct mt76_phy *phy, struct ieee80211_vif *vif,
|
|||
struct ieee80211_sta *sta, bool enable, int cmd)
|
||||
{
|
||||
struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
|
||||
struct mt76_wcid *wcid;
|
||||
struct mt76_sta_cmd_info info = {
|
||||
.sta = sta,
|
||||
.vif = vif,
|
||||
.enable = enable,
|
||||
.cmd = cmd,
|
||||
};
|
||||
|
||||
wcid = sta ? (struct mt76_wcid *)sta->drv_priv : &mvif->sta.wcid;
|
||||
return mt76_connac_mcu_add_sta_cmd(phy, vif, sta, wcid, enable, cmd);
|
||||
info.wcid = sta ? (struct mt76_wcid *)sta->drv_priv : &mvif->sta.wcid;
|
||||
return mt76_connac_mcu_add_sta_cmd(phy, &info);
|
||||
}
|
||||
|
||||
static int
|
||||
|
@ -1094,6 +1125,25 @@ static const struct mt7615_mcu_ops sta_update_ops = {
|
|||
.set_fw_ctrl = mt7615_mcu_fw_pmctrl,
|
||||
};
|
||||
|
||||
int mt7615_mcu_sta_update_hdr_trans(struct mt7615_dev *dev,
|
||||
struct ieee80211_vif *vif,
|
||||
struct ieee80211_sta *sta)
|
||||
{
|
||||
struct mt7615_sta *msta = (struct mt7615_sta *)sta->drv_priv;
|
||||
struct wtbl_req_hdr *wtbl_hdr;
|
||||
struct sk_buff *skb = NULL;
|
||||
|
||||
wtbl_hdr = mt76_connac_mcu_alloc_wtbl_req(&dev->mt76, &msta->wcid,
|
||||
WTBL_SET, NULL, &skb);
|
||||
if (IS_ERR(wtbl_hdr))
|
||||
return PTR_ERR(wtbl_hdr);
|
||||
|
||||
mt76_connac_mcu_wtbl_hdr_trans_tlv(skb, &msta->wcid, NULL, wtbl_hdr);
|
||||
|
||||
return mt76_mcu_skb_send_msg(&dev->mt76, skb, MCU_EXT_CMD_WTBL_UPDATE,
|
||||
true);
|
||||
}
|
||||
|
||||
static int
|
||||
mt7615_mcu_uni_ctrl_pm_state(struct mt7615_dev *dev, int band, int state)
|
||||
{
|
||||
|
@ -1120,8 +1170,8 @@ mt7615_mcu_uni_add_beacon_offload(struct mt7615_dev *dev,
|
|||
__le16 tim_ie_pos;
|
||||
__le16 csa_ie_pos;
|
||||
__le16 bcc_ie_pos;
|
||||
/* 0: enable beacon offload
|
||||
* 1: disable beacon offload
|
||||
/* 0: disable beacon offload
|
||||
* 1: enable beacon offload
|
||||
* 2: update probe respond offload
|
||||
*/
|
||||
u8 enable;
|
||||
|
@ -1144,6 +1194,9 @@ mt7615_mcu_uni_add_beacon_offload(struct mt7615_dev *dev,
|
|||
};
|
||||
struct sk_buff *skb;
|
||||
|
||||
if (!enable)
|
||||
goto out;
|
||||
|
||||
skb = ieee80211_beacon_get_template(mt76_hw(dev), vif, &offs);
|
||||
if (!skb)
|
||||
return -EINVAL;
|
||||
|
@ -1168,6 +1221,7 @@ mt7615_mcu_uni_add_beacon_offload(struct mt7615_dev *dev,
|
|||
}
|
||||
dev_kfree_skb(skb);
|
||||
|
||||
out:
|
||||
return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD_BSS_INFO_UPDATE,
|
||||
&req, sizeof(req), true);
|
||||
}
|
||||
|
@ -1427,8 +1481,7 @@ static int mt7615_load_n9(struct mt7615_dev *dev, const char *name)
|
|||
sizeof(dev->mt76.hw->wiphy->fw_version),
|
||||
"%.10s-%.15s", hdr->fw_ver, hdr->build_date);
|
||||
|
||||
if (!is_mt7615(&dev->mt76) &&
|
||||
!strncmp(hdr->fw_ver, "2.0", sizeof(hdr->fw_ver))) {
|
||||
if (!is_mt7615(&dev->mt76)) {
|
||||
dev->fw_ver = MT7615_FIRMWARE_V2;
|
||||
dev->mcu_ops = &sta_update_ops;
|
||||
} else {
|
||||
|
@ -2155,7 +2208,7 @@ int mt7615_mcu_set_chan_info(struct mt7615_phy *phy, int cmd)
|
|||
.center_chan2 = ieee80211_frequency_to_channel(freq2),
|
||||
};
|
||||
|
||||
if (dev->mt76.hw->conf.flags & IEEE80211_CONF_OFFCHANNEL)
|
||||
if (phy->mt76->hw->conf.flags & IEEE80211_CONF_OFFCHANNEL)
|
||||
req.switch_reason = CH_SWITCH_SCAN_BYPASS_DPD;
|
||||
else if ((chandef->chan->flags & IEEE80211_CHAN_RADAR) &&
|
||||
chandef->chan->dfs_state != NL80211_DFS_AVAILABLE)
|
||||
|
@ -2497,6 +2550,26 @@ out:
|
|||
return ret;
|
||||
}
|
||||
|
||||
int mt7615_mcu_set_rx_hdr_trans_blacklist(struct mt7615_dev *dev)
|
||||
{
|
||||
struct {
|
||||
u8 operation;
|
||||
u8 count;
|
||||
u8 _rsv[2];
|
||||
u8 index;
|
||||
u8 enable;
|
||||
__le16 etype;
|
||||
} req = {
|
||||
.operation = 1,
|
||||
.count = 1,
|
||||
.enable = 1,
|
||||
.etype = cpu_to_le16(ETH_P_PAE),
|
||||
};
|
||||
|
||||
return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD_RX_HDR_TRANS,
|
||||
&req, sizeof(req), false);
|
||||
}
|
||||
|
||||
int mt7615_mcu_set_bss_pm(struct mt7615_dev *dev, struct ieee80211_vif *vif,
|
||||
bool enable)
|
||||
{
|
||||
|
|
|
@ -176,10 +176,18 @@ struct mt7615_mcu_rxd {
|
|||
u8 s2d_index;
|
||||
};
|
||||
|
||||
struct mt7615_mcu_csa_notify {
|
||||
struct mt7615_mcu_rxd rxd;
|
||||
|
||||
u8 omac_idx;
|
||||
u8 csa_count;
|
||||
u8 rsv[2];
|
||||
} __packed;
|
||||
|
||||
struct mt7615_mcu_rdd_report {
|
||||
struct mt7615_mcu_rxd rxd;
|
||||
|
||||
u8 idx;
|
||||
u8 band_idx;
|
||||
u8 long_detected;
|
||||
u8 constant_prf_detected;
|
||||
u8 staggered_prf_detected;
|
||||
|
@ -362,30 +370,6 @@ enum {
|
|||
BSS_INFO_MAX_NUM
|
||||
};
|
||||
|
||||
#define MT7615_WTBL_UPDATE_MAX_SIZE (sizeof(struct wtbl_req_hdr) + \
|
||||
sizeof(struct wtbl_generic) + \
|
||||
sizeof(struct wtbl_rx) + \
|
||||
sizeof(struct wtbl_ht) + \
|
||||
sizeof(struct wtbl_vht) + \
|
||||
sizeof(struct wtbl_tx_ps) + \
|
||||
sizeof(struct wtbl_hdr_trans) +\
|
||||
sizeof(struct wtbl_ba) + \
|
||||
sizeof(struct wtbl_bf) + \
|
||||
sizeof(struct wtbl_smps) + \
|
||||
sizeof(struct wtbl_pn) + \
|
||||
sizeof(struct wtbl_spe))
|
||||
|
||||
#define MT7615_STA_UPDATE_MAX_SIZE (sizeof(struct sta_req_hdr) + \
|
||||
sizeof(struct sta_rec_basic) + \
|
||||
sizeof(struct sta_rec_ht) + \
|
||||
sizeof(struct sta_rec_vht) + \
|
||||
sizeof(struct sta_rec_uapsd) + \
|
||||
sizeof(struct tlv) + \
|
||||
MT7615_WTBL_UPDATE_MAX_SIZE)
|
||||
|
||||
#define MT7615_WTBL_UPDATE_BA_SIZE (sizeof(struct wtbl_req_hdr) + \
|
||||
sizeof(struct wtbl_ba))
|
||||
|
||||
enum {
|
||||
CH_SWITCH_NORMAL = 0,
|
||||
CH_SWITCH_SCAN = 3,
|
||||
|
|
|
@ -1,3 +1,6 @@
|
|||
// SPDX-License-Identifier: ISC
|
||||
/* Copyright (C) 2020 MediaTek Inc. */
|
||||
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/platform_device.h>
|
||||
|
|
|
@ -133,11 +133,11 @@ struct mt7615_vif {
|
|||
};
|
||||
|
||||
struct mib_stats {
|
||||
u16 ack_fail_cnt;
|
||||
u16 fcs_err_cnt;
|
||||
u16 rts_cnt;
|
||||
u16 rts_retries_cnt;
|
||||
u16 ba_miss_cnt;
|
||||
u32 ack_fail_cnt;
|
||||
u32 fcs_err_cnt;
|
||||
u32 rts_cnt;
|
||||
u32 rts_retries_cnt;
|
||||
u32 ba_miss_cnt;
|
||||
unsigned long aggr_per;
|
||||
};
|
||||
|
||||
|
@ -168,7 +168,7 @@ struct mt7615_phy {
|
|||
u8 rdd_state;
|
||||
int dfs_state;
|
||||
|
||||
__le32 rx_ampdu_ts;
|
||||
u32 rx_ampdu_ts;
|
||||
u32 ampdu_ref;
|
||||
|
||||
struct mib_stats mib;
|
||||
|
@ -376,6 +376,7 @@ int mt7615_eeprom_get_power_delta_index(struct mt7615_dev *dev,
|
|||
enum nl80211_band band);
|
||||
int mt7615_wait_pdma_busy(struct mt7615_dev *dev);
|
||||
int mt7615_dma_init(struct mt7615_dev *dev);
|
||||
void mt7615_dma_start(struct mt7615_dev *dev);
|
||||
void mt7615_dma_cleanup(struct mt7615_dev *dev);
|
||||
int mt7615_mcu_init(struct mt7615_dev *dev);
|
||||
bool mt7615_wait_for_mcu_init(struct mt7615_dev *dev);
|
||||
|
@ -408,11 +409,6 @@ static inline bool is_mt7615(struct mt76_dev *dev)
|
|||
return mt76_chip(dev) == 0x7615 || mt76_chip(dev) == 0x7611;
|
||||
}
|
||||
|
||||
static inline bool is_mt7663(struct mt76_dev *dev)
|
||||
{
|
||||
return mt76_chip(dev) == 0x7663;
|
||||
}
|
||||
|
||||
static inline bool is_mt7611(struct mt76_dev *dev)
|
||||
{
|
||||
return mt76_chip(dev) == 0x7611;
|
||||
|
@ -524,6 +520,10 @@ void mt7615_mac_sta_remove(struct mt76_dev *mdev, struct ieee80211_vif *vif,
|
|||
void mt7615_mac_work(struct work_struct *work);
|
||||
void mt7615_txp_skb_unmap(struct mt76_dev *dev,
|
||||
struct mt76_txwi_cache *txwi);
|
||||
int mt7615_mcu_sta_update_hdr_trans(struct mt7615_dev *dev,
|
||||
struct ieee80211_vif *vif,
|
||||
struct ieee80211_sta *sta);
|
||||
int mt7615_mcu_set_rx_hdr_trans_blacklist(struct mt7615_dev *dev);
|
||||
int mt7615_mcu_set_fcc5_lpn(struct mt7615_dev *dev, int val);
|
||||
int mt7615_mcu_set_pulse_th(struct mt7615_dev *dev,
|
||||
const struct mt7615_dfs_pulse *pulse);
|
||||
|
@ -557,6 +557,8 @@ u32 mt7615_mcu_reg_rr(struct mt76_dev *dev, u32 offset);
|
|||
void mt7615_mcu_reg_wr(struct mt76_dev *dev, u32 offset, u32 val);
|
||||
void mt7615_coredump_work(struct work_struct *work);
|
||||
|
||||
void mt7622_trigger_hif_int(struct mt7615_dev *dev, bool en);
|
||||
|
||||
/* usb */
|
||||
int mt7663_usb_sdio_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
|
||||
enum mt76_txq_id qid, struct mt76_wcid *wcid,
|
||||
|
|
|
@ -13,9 +13,9 @@
|
|||
#include "mcu.h"
|
||||
|
||||
static const struct pci_device_id mt7615_pci_device_table[] = {
|
||||
{ PCI_DEVICE(0x14c3, 0x7615) },
|
||||
{ PCI_DEVICE(0x14c3, 0x7663) },
|
||||
{ PCI_DEVICE(0x14c3, 0x7611) },
|
||||
{ PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x7615) },
|
||||
{ PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x7663) },
|
||||
{ PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x7611) },
|
||||
{ },
|
||||
};
|
||||
|
||||
|
|
|
@ -126,6 +126,7 @@ int mt7615_register_device(struct mt7615_dev *dev)
|
|||
int ret;
|
||||
|
||||
mt7615_init_device(dev);
|
||||
INIT_WORK(&dev->reset_work, mt7615_mac_reset_work);
|
||||
|
||||
/* init led callbacks */
|
||||
if (IS_ENABLED(CONFIG_MT76_LEDS)) {
|
||||
|
|
|
@ -181,3 +181,171 @@ int mt7615_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
|
|||
|
||||
return 0;
|
||||
}
|
||||
|
||||
void mt7615_dma_reset(struct mt7615_dev *dev)
|
||||
{
|
||||
int i;
|
||||
|
||||
mt76_clear(dev, MT_WPDMA_GLO_CFG,
|
||||
MT_WPDMA_GLO_CFG_RX_DMA_EN | MT_WPDMA_GLO_CFG_TX_DMA_EN |
|
||||
MT_WPDMA_GLO_CFG_TX_WRITEBACK_DONE);
|
||||
|
||||
usleep_range(1000, 2000);
|
||||
|
||||
for (i = 0; i < __MT_TXQ_MAX; i++)
|
||||
mt76_queue_tx_cleanup(dev, dev->mphy.q_tx[i], true);
|
||||
|
||||
for (i = 0; i < __MT_MCUQ_MAX; i++)
|
||||
mt76_queue_tx_cleanup(dev, dev->mt76.q_mcu[i], true);
|
||||
|
||||
mt76_for_each_q_rx(&dev->mt76, i)
|
||||
mt76_queue_rx_reset(dev, i);
|
||||
|
||||
mt7615_dma_start(dev);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(mt7615_dma_reset);
|
||||
|
||||
static void
|
||||
mt7615_hif_int_event_trigger(struct mt7615_dev *dev, u8 event)
|
||||
{
|
||||
mt76_wr(dev, MT_MCU_INT_EVENT, event);
|
||||
|
||||
mt7622_trigger_hif_int(dev, true);
|
||||
mt7622_trigger_hif_int(dev, false);
|
||||
}
|
||||
|
||||
static bool
|
||||
mt7615_wait_reset_state(struct mt7615_dev *dev, u32 state)
|
||||
{
|
||||
bool ret;
|
||||
|
||||
ret = wait_event_timeout(dev->reset_wait,
|
||||
(READ_ONCE(dev->reset_state) & state),
|
||||
MT7615_RESET_TIMEOUT);
|
||||
WARN(!ret, "Timeout waiting for MCU reset state %x\n", state);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void
|
||||
mt7615_update_vif_beacon(void *priv, u8 *mac, struct ieee80211_vif *vif)
|
||||
{
|
||||
struct ieee80211_hw *hw = priv;
|
||||
struct mt7615_dev *dev = mt7615_hw_dev(hw);
|
||||
|
||||
switch (vif->type) {
|
||||
case NL80211_IFTYPE_MESH_POINT:
|
||||
case NL80211_IFTYPE_ADHOC:
|
||||
case NL80211_IFTYPE_AP:
|
||||
mt7615_mcu_add_beacon(dev, hw, vif,
|
||||
vif->bss_conf.enable_beacon);
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
static void
|
||||
mt7615_update_beacons(struct mt7615_dev *dev)
|
||||
{
|
||||
ieee80211_iterate_active_interfaces(dev->mt76.hw,
|
||||
IEEE80211_IFACE_ITER_RESUME_ALL,
|
||||
mt7615_update_vif_beacon, dev->mt76.hw);
|
||||
|
||||
if (!dev->mt76.phy2)
|
||||
return;
|
||||
|
||||
ieee80211_iterate_active_interfaces(dev->mt76.phy2->hw,
|
||||
IEEE80211_IFACE_ITER_RESUME_ALL,
|
||||
mt7615_update_vif_beacon, dev->mt76.phy2->hw);
|
||||
}
|
||||
|
||||
void mt7615_mac_reset_work(struct work_struct *work)
|
||||
{
|
||||
struct mt7615_phy *phy2;
|
||||
struct mt76_phy *ext_phy;
|
||||
struct mt7615_dev *dev;
|
||||
|
||||
dev = container_of(work, struct mt7615_dev, reset_work);
|
||||
ext_phy = dev->mt76.phy2;
|
||||
phy2 = ext_phy ? ext_phy->priv : NULL;
|
||||
|
||||
if (!(READ_ONCE(dev->reset_state) & MT_MCU_CMD_STOP_PDMA))
|
||||
return;
|
||||
|
||||
ieee80211_stop_queues(mt76_hw(dev));
|
||||
if (ext_phy)
|
||||
ieee80211_stop_queues(ext_phy->hw);
|
||||
|
||||
set_bit(MT76_RESET, &dev->mphy.state);
|
||||
set_bit(MT76_MCU_RESET, &dev->mphy.state);
|
||||
wake_up(&dev->mt76.mcu.wait);
|
||||
cancel_delayed_work_sync(&dev->mphy.mac_work);
|
||||
del_timer_sync(&dev->phy.roc_timer);
|
||||
cancel_work_sync(&dev->phy.roc_work);
|
||||
if (phy2) {
|
||||
set_bit(MT76_RESET, &phy2->mt76->state);
|
||||
cancel_delayed_work_sync(&phy2->mt76->mac_work);
|
||||
del_timer_sync(&phy2->roc_timer);
|
||||
cancel_work_sync(&phy2->roc_work);
|
||||
}
|
||||
|
||||
/* lock/unlock all queues to ensure that no tx is pending */
|
||||
mt76_txq_schedule_all(&dev->mphy);
|
||||
if (ext_phy)
|
||||
mt76_txq_schedule_all(ext_phy);
|
||||
|
||||
mt76_worker_disable(&dev->mt76.tx_worker);
|
||||
napi_disable(&dev->mt76.napi[0]);
|
||||
napi_disable(&dev->mt76.napi[1]);
|
||||
napi_disable(&dev->mt76.tx_napi);
|
||||
|
||||
mt7615_mutex_acquire(dev);
|
||||
|
||||
mt7615_hif_int_event_trigger(dev, MT_MCU_INT_EVENT_PDMA_STOPPED);
|
||||
|
||||
mt7615_tx_token_put(dev);
|
||||
idr_init(&dev->token);
|
||||
|
||||
if (mt7615_wait_reset_state(dev, MT_MCU_CMD_RESET_DONE)) {
|
||||
mt7615_dma_reset(dev);
|
||||
|
||||
mt76_wr(dev, MT_WPDMA_MEM_RNG_ERR, 0);
|
||||
|
||||
mt7615_hif_int_event_trigger(dev, MT_MCU_INT_EVENT_PDMA_INIT);
|
||||
mt7615_wait_reset_state(dev, MT_MCU_CMD_RECOVERY_DONE);
|
||||
}
|
||||
|
||||
clear_bit(MT76_MCU_RESET, &dev->mphy.state);
|
||||
clear_bit(MT76_RESET, &dev->mphy.state);
|
||||
if (phy2)
|
||||
clear_bit(MT76_RESET, &phy2->mt76->state);
|
||||
|
||||
mt76_worker_enable(&dev->mt76.tx_worker);
|
||||
napi_enable(&dev->mt76.tx_napi);
|
||||
napi_schedule(&dev->mt76.tx_napi);
|
||||
|
||||
napi_enable(&dev->mt76.napi[0]);
|
||||
napi_schedule(&dev->mt76.napi[0]);
|
||||
|
||||
napi_enable(&dev->mt76.napi[1]);
|
||||
napi_schedule(&dev->mt76.napi[1]);
|
||||
|
||||
ieee80211_wake_queues(mt76_hw(dev));
|
||||
if (ext_phy)
|
||||
ieee80211_wake_queues(ext_phy->hw);
|
||||
|
||||
mt7615_hif_int_event_trigger(dev, MT_MCU_INT_EVENT_RESET_DONE);
|
||||
mt7615_wait_reset_state(dev, MT_MCU_CMD_NORMAL_STATE);
|
||||
|
||||
mt7615_update_beacons(dev);
|
||||
|
||||
mt7615_mutex_release(dev);
|
||||
|
||||
ieee80211_queue_delayed_work(mt76_hw(dev), &dev->mphy.mac_work,
|
||||
MT7615_WATCHDOG_TIME);
|
||||
if (phy2)
|
||||
ieee80211_queue_delayed_work(ext_phy->hw,
|
||||
&phy2->mt76->mac_work,
|
||||
MT7615_WATCHDOG_TIME);
|
||||
|
||||
}
|
||||
|
|
|
@ -368,7 +368,9 @@ enum mt7615_reg_base {
|
|||
|
||||
#define MT_DMA_DCR0 MT_WF_DMA(0x000)
|
||||
#define MT_DMA_DCR0_MAX_RX_LEN GENMASK(15, 2)
|
||||
#define MT_DMA_DCR0_DAMSDU_EN BIT(16)
|
||||
#define MT_DMA_DCR0_RX_VEC_DROP BIT(17)
|
||||
#define MT_DMA_DCR0_RX_HDR_TRANS_EN BIT(19)
|
||||
|
||||
#define MT_DMA_RCFR0(_band) MT_WF_DMA(0x070 + (_band) * 0x40)
|
||||
#define MT_DMA_RCFR0_MCU_RX_MGMT BIT(2)
|
||||
|
@ -447,9 +449,10 @@ enum mt7615_reg_base {
|
|||
|
||||
#define MT_LPON(_n) ((dev)->reg_map[MT_LPON_BASE] + (_n))
|
||||
|
||||
#define MT_LPON_T0CR MT_LPON(0x010)
|
||||
#define MT_LPON_T0CR_MODE GENMASK(1, 0)
|
||||
#define MT_LPON_T0CR_WRITE BIT(0)
|
||||
#define MT_LPON_TCR0(_n) MT_LPON(0x010 + ((_n) * 4))
|
||||
#define MT_LPON_TCR2(_n) MT_LPON(0x0f8 + ((_n) - 2) * 4)
|
||||
#define MT_LPON_TCR_MODE GENMASK(1, 0)
|
||||
#define MT_LPON_TCR_WRITE BIT(0)
|
||||
|
||||
#define MT_LPON_UTTR0 MT_LPON(0x018)
|
||||
#define MT_LPON_UTTR1 MT_LPON(0x01c)
|
||||
|
|
|
@ -218,12 +218,15 @@ static int mt7663s_tx_run_queue(struct mt76_dev *dev, struct mt76_queue *q)
|
|||
int qid, err, nframes = 0, len = 0, pse_sz = 0, ple_sz = 0;
|
||||
bool mcu = q == dev->q_mcu[MT_MCUQ_WM];
|
||||
struct mt76_sdio *sdio = &dev->sdio;
|
||||
u8 pad;
|
||||
|
||||
qid = mcu ? ARRAY_SIZE(sdio->xmit_buf) - 1 : q->qid;
|
||||
while (q->first != q->head) {
|
||||
struct mt76_queue_entry *e = &q->entry[q->first];
|
||||
struct sk_buff *iter;
|
||||
|
||||
smp_rmb();
|
||||
|
||||
if (!test_bit(MT76_STATE_MCU_RUNNING, &dev->phy.state)) {
|
||||
__skb_put_zero(e->skb, 4);
|
||||
err = __mt7663s_xmit_queue(dev, e->skb->data,
|
||||
|
@ -234,7 +237,8 @@ static int mt7663s_tx_run_queue(struct mt76_dev *dev, struct mt76_queue *q)
|
|||
goto next;
|
||||
}
|
||||
|
||||
if (len + e->skb->len + 4 > MT76S_XMIT_BUF_SZ)
|
||||
pad = roundup(e->skb->len, 4) - e->skb->len;
|
||||
if (len + e->skb->len + pad + 4 > MT76S_XMIT_BUF_SZ)
|
||||
break;
|
||||
|
||||
if (mt7663s_tx_pick_quota(sdio, mcu, e->buf_sz, &pse_sz,
|
||||
|
@ -252,6 +256,11 @@ static int mt7663s_tx_run_queue(struct mt76_dev *dev, struct mt76_queue *q)
|
|||
len += iter->len;
|
||||
nframes++;
|
||||
}
|
||||
|
||||
if (unlikely(pad)) {
|
||||
memset(sdio->xmit_buf[qid] + len, 0, pad);
|
||||
len += pad;
|
||||
}
|
||||
next:
|
||||
q->first = (q->first + 1) % q->ndesc;
|
||||
e->done = true;
|
||||
|
|
|
@ -67,6 +67,7 @@ static int mt7663_usb_sdio_set_rates(struct mt7615_dev *dev,
|
|||
struct mt7615_rate_desc *rate = &wrd->rate;
|
||||
struct mt7615_sta *sta = wrd->sta;
|
||||
u32 w5, w27, addr, val;
|
||||
u16 idx = sta->vif->mt76.omac_idx;
|
||||
|
||||
lockdep_assert_held(&dev->mt76.mutex);
|
||||
|
||||
|
@ -118,7 +119,10 @@ static int mt7663_usb_sdio_set_rates(struct mt7615_dev *dev,
|
|||
|
||||
sta->rate_probe = sta->rateset[rate->rateset].probe_rate.idx != -1;
|
||||
|
||||
mt76_set(dev, MT_LPON_T0CR, MT_LPON_T0CR_MODE); /* TSF read */
|
||||
idx = idx > HW_BSSID_MAX ? HW_BSSID_0 : idx;
|
||||
addr = idx > 1 ? MT_LPON_TCR2(idx): MT_LPON_TCR0(idx);
|
||||
|
||||
mt76_set(dev, addr, MT_LPON_TCR_MODE); /* TSF read */
|
||||
val = mt76_rr(dev, MT_LPON_UTTR0);
|
||||
sta->rate_set_tsf = (val & ~BIT(0)) | rate->rateset;
|
||||
|
||||
|
|
|
@ -73,6 +73,11 @@ static inline bool is_mt7921(struct mt76_dev *dev)
|
|||
return mt76_chip(dev) == 0x7961;
|
||||
}
|
||||
|
||||
static inline bool is_mt7663(struct mt76_dev *dev)
|
||||
{
|
||||
return mt76_chip(dev) == 0x7663;
|
||||
}
|
||||
|
||||
int mt76_connac_pm_wake(struct mt76_phy *phy, struct mt76_connac_pm *pm);
|
||||
void mt76_connac_power_save_sched(struct mt76_phy *phy,
|
||||
struct mt76_connac_pm *pm);
|
||||
|
|
|
@@ -287,7 +287,7 @@ mt76_connac_mcu_alloc_wtbl_req(struct mt76_dev *dev, struct mt76_wcid *wcid,
 			      &hdr.wlan_idx_hi);
 	if (!nskb) {
 		nskb = mt76_mcu_msg_alloc(dev, NULL,
-					  MT76_CONNAC_WTBL_UPDATE_BA_SIZE);
+					  MT76_CONNAC_WTBL_UPDATE_MAX_SIZE);
 		if (!nskb)
 			return ERR_PTR(-ENOMEM);
 
@@ -392,6 +392,21 @@ mt76_connac_mcu_sta_uapsd(struct sk_buff *skb, struct ieee80211_vif *vif,
 	uapsd->max_sp = sta->max_sp;
 }
 
+void mt76_connac_mcu_wtbl_hdr_trans_tlv(struct sk_buff *skb,
+					struct mt76_wcid *wcid,
+					void *sta_wtbl, void *wtbl_tlv)
+{
+	struct wtbl_hdr_trans *htr;
+	struct tlv *tlv;
+
+	tlv = mt76_connac_mcu_add_nested_tlv(skb, WTBL_HDR_TRANS,
+					     sizeof(*htr),
+					     wtbl_tlv, sta_wtbl);
+	htr = (struct wtbl_hdr_trans *)tlv;
+	htr->no_rx_trans = !test_bit(MT_WCID_FLAG_HDR_TRANS, &wcid->flags);
+}
+EXPORT_SYMBOL_GPL(mt76_connac_mcu_wtbl_hdr_trans_tlv);
+
 void mt76_connac_mcu_wtbl_generic_tlv(struct mt76_dev *dev,
 				      struct sk_buff *skb,
 				      struct ieee80211_vif *vif,
@@ -655,7 +670,8 @@ mt76_connac_get_phy_mode_v2(struct mt76_phy *mphy, struct ieee80211_vif *vif,
 
 void mt76_connac_mcu_sta_tlv(struct mt76_phy *mphy, struct sk_buff *skb,
 			     struct ieee80211_sta *sta,
-			     struct ieee80211_vif *vif)
+			     struct ieee80211_vif *vif,
+			     u8 rcpi)
 {
 	struct cfg80211_chan_def *chandef = &mphy->chandef;
 	enum nl80211_band band = chandef->chan->band;
@@ -704,6 +720,7 @@ void mt76_connac_mcu_sta_tlv(struct mt76_phy *mphy, struct sk_buff *skb,
 	phy = (struct sta_rec_phy *)tlv;
 	phy->phy_type = mt76_connac_get_phy_mode_v2(mphy, vif, band, sta);
 	phy->basic_rate = cpu_to_le16((u16)vif->bss_conf.basic_rates);
+	phy->rcpi = rcpi;
 
 	tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_RA, sizeof(*ra_info));
 	ra_info = (struct sta_rec_ra_info *)tlv;
@@ -808,40 +825,42 @@ void mt76_connac_mcu_wtbl_ht_tlv(struct mt76_dev *dev, struct sk_buff *skb,
 EXPORT_SYMBOL_GPL(mt76_connac_mcu_wtbl_ht_tlv);
 
 int mt76_connac_mcu_add_sta_cmd(struct mt76_phy *phy,
-				struct ieee80211_vif *vif,
-				struct ieee80211_sta *sta,
-				struct mt76_wcid *wcid,
-				bool enable, int cmd)
+				struct mt76_sta_cmd_info *info)
 {
-	struct mt76_vif *mvif = (struct mt76_vif *)vif->drv_priv;
+	struct mt76_vif *mvif = (struct mt76_vif *)info->vif->drv_priv;
 	struct mt76_dev *dev = phy->dev;
 	struct wtbl_req_hdr *wtbl_hdr;
 	struct tlv *sta_wtbl;
 	struct sk_buff *skb;
 
-	skb = mt76_connac_mcu_alloc_sta_req(dev, mvif, wcid);
+	skb = mt76_connac_mcu_alloc_sta_req(dev, mvif, info->wcid);
 	if (IS_ERR(skb))
 		return PTR_ERR(skb);
 
-	mt76_connac_mcu_sta_basic_tlv(skb, vif, sta, enable);
-	if (enable && sta)
-		mt76_connac_mcu_sta_tlv(phy, skb, sta, vif);
+	mt76_connac_mcu_sta_basic_tlv(skb, info->vif, info->sta, info->enable);
+	if (info->enable && info->sta)
+		mt76_connac_mcu_sta_tlv(phy, skb, info->sta, info->vif,
+					info->rcpi);
 
 	sta_wtbl = mt76_connac_mcu_add_tlv(skb, STA_REC_WTBL,
 					   sizeof(struct tlv));
 
-	wtbl_hdr = mt76_connac_mcu_alloc_wtbl_req(dev, wcid,
+	wtbl_hdr = mt76_connac_mcu_alloc_wtbl_req(dev, info->wcid,
 						  WTBL_RESET_AND_SET,
 						  sta_wtbl, &skb);
-	if (enable) {
-		mt76_connac_mcu_wtbl_generic_tlv(dev, skb, vif, sta, sta_wtbl,
+	if (IS_ERR(wtbl_hdr))
+		return PTR_ERR(wtbl_hdr);
+
+	if (info->enable) {
+		mt76_connac_mcu_wtbl_generic_tlv(dev, skb, info->vif,
+						 info->sta, sta_wtbl,
 						 wtbl_hdr);
-		if (sta)
-			mt76_connac_mcu_wtbl_ht_tlv(dev, skb, sta, sta_wtbl,
-						    wtbl_hdr);
+		if (info->sta)
+			mt76_connac_mcu_wtbl_ht_tlv(dev, skb, info->sta,
						    sta_wtbl, wtbl_hdr);
 	}
 
-	return mt76_mcu_skb_send_msg(dev, skb, cmd, true);
+	return mt76_mcu_skb_send_msg(dev, skb, info->cmd, true);
 }
 EXPORT_SYMBOL_GPL(mt76_connac_mcu_add_sta_cmd);
 
@@ -946,6 +965,7 @@ int mt76_connac_mcu_uni_add_dev(struct mt76_phy *phy,
 
 	switch (vif->type) {
 	case NL80211_IFTYPE_MESH_POINT:
+	case NL80211_IFTYPE_MONITOR:
 	case NL80211_IFTYPE_AP:
 		basic_req.basic.conn_type = cpu_to_le32(CONNECTION_INFRA_AP);
 		break;
@@ -1195,6 +1215,7 @@ int mt76_connac_mcu_uni_add_bss(struct mt76_phy *phy,
 			.center_chan = ieee80211_frequency_to_channel(freq1),
 			.center_chan2 = ieee80211_frequency_to_channel(freq2),
 			.tx_streams = hweight8(phy->antenna_mask),
+			.ht_op_info = 4, /* set HT 40M allowed */
 			.rx_streams = phy->chainmask,
 			.short_st = true,
 		},
@@ -1287,6 +1308,7 @@ int mt76_connac_mcu_uni_add_bss(struct mt76_phy *phy,
 	case NL80211_CHAN_WIDTH_20:
 	default:
 		rlm_req.rlm.bw = CMD_CBW_20MHZ;
+		rlm_req.rlm.ht_op_info = 0;
 		break;
 	}
 
@@ -1306,7 +1328,7 @@ int mt76_connac_mcu_hw_scan(struct mt76_phy *phy, struct ieee80211_vif *vif,
 {
 	struct mt76_vif *mvif = (struct mt76_vif *)vif->drv_priv;
 	struct cfg80211_scan_request *sreq = &scan_req->req;
-	int n_ssids = 0, err, i, duration = MT76_CONNAC_SCAN_CHANNEL_TIME;
+	int n_ssids = 0, err, i, duration;
 	int ext_channels_num = max_t(int, sreq->n_channels - 32, 0);
 	struct ieee80211_channel **scan_list = sreq->channels;
 	struct mt76_dev *mdev = phy->dev;
@@ -1343,6 +1365,7 @@ int mt76_connac_mcu_hw_scan(struct mt76_phy *phy, struct ieee80211_vif *vif,
 	req->ssid_type_ext = n_ssids ? BIT(0) : 0;
 	req->ssids_num = n_ssids;
 
+	duration = is_mt7921(phy->dev) ? 0 : MT76_CONNAC_SCAN_CHANNEL_TIME;
 	/* increase channel time for passive scan */
 	if (!sreq->n_ssids)
 		duration *= 2;
@@ -1368,11 +1391,14 @@ int mt76_connac_mcu_hw_scan(struct mt76_phy *phy, struct ieee80211_vif *vif,
 		req->ies_len = cpu_to_le16(sreq->ie_len);
 	}
 
+	if (is_mt7921(phy->dev))
+		req->scan_func |= SCAN_FUNC_SPLIT_SCAN;
+
 	memcpy(req->bssid, sreq->bssid, ETH_ALEN);
 	if (sreq->flags & NL80211_SCAN_FLAG_RANDOM_ADDR) {
 		get_random_mask_addr(req->random_mac, sreq->mac_addr,
 				     sreq->mac_addr_mask);
-		req->scan_func = 1;
+		req->scan_func |= SCAN_FUNC_RANDOM_MAC;
 	}
 
 	err = mt76_mcu_skb_send_msg(mdev, skb, MCU_CMD_START_HW_SCAN, false);
@@ -1433,10 +1459,13 @@ int mt76_connac_mcu_sched_scan_req(struct mt76_phy *phy,
 	req->version = 1;
 	req->seq_num = mvif->scan_seq_num | ext_phy << 7;
 
-	if (sreq->flags & NL80211_SCAN_FLAG_RANDOM_ADDR) {
-		get_random_mask_addr(req->random_mac, sreq->mac_addr,
+	if (is_mt7663(phy->dev) &&
+	    (sreq->flags & NL80211_SCAN_FLAG_RANDOM_ADDR)) {
+		get_random_mask_addr(req->mt7663.random_mac, sreq->mac_addr,
 				     sreq->mac_addr_mask);
 		req->scan_func = 1;
+	} else if (is_mt7921(phy->dev)) {
+		req->mt7921.bss_idx = mvif->idx;
 	}
 
 	req->ssids_num = sreq->n_ssids;
@@ -210,7 +210,7 @@ struct wtbl_hdr_trans {
 	__le16 len;
 	u8 to_ds;
 	u8 from_ds;
-	u8 disable_rx_trans;
+	u8 no_rx_trans;
 	u8 rsv;
 } __packed;
 
@@ -304,9 +304,6 @@ struct wtbl_raw {
 					 sizeof(struct tlv) +		\
 					 MT76_CONNAC_WTBL_UPDATE_MAX_SIZE)
 
-#define MT76_CONNAC_WTBL_UPDATE_BA_SIZE	(sizeof(struct wtbl_req_hdr) +	\
-					 sizeof(struct wtbl_ba))
-
 enum {
 	STA_REC_BASIC,
 	STA_REC_RA,
@@ -365,6 +362,9 @@ enum {
 #define NETWORK_IBSS			BIT(18)
 #define NETWORK_WDS			BIT(21)
 
+#define SCAN_FUNC_RANDOM_MAC		BIT(0)
+#define SCAN_FUNC_SPLIT_SCAN		BIT(5)
+
 #define CONNECTION_INFRA_STA		(STA_TYPE_STA | NETWORK_INFRA)
 #define CONNECTION_INFRA_AP		(STA_TYPE_AP | NETWORK_INFRA)
 #define CONNECTION_P2P_GC		(STA_TYPE_STA | NETWORK_P2P)
@@ -759,11 +759,19 @@ struct mt76_connac_sched_scan_req {
 	u8 channel_type;
 	u8 channels_num;
 	u8 intervals_num;
-	u8 scan_func; /* BIT(0) eable random mac address */
+	u8 scan_func; /* MT7663: BIT(0) eable random mac address */
 	struct mt76_connac_mcu_scan_channel channels[64];
 	__le16 intervals[MT76_CONNAC_MAX_SCHED_SCAN_INTERVAL];
-	u8 random_mac[ETH_ALEN]; /* valid when BIT(0) in scan_func is set */
-	u8 pad2[58];
+	union {
+		struct {
+			u8 random_mac[ETH_ALEN];
+			u8 pad2[58];
+		} mt7663;
+		struct {
+			u8 bss_idx;
+			u8 pad2[63];
+		} mt7921;
+	};
 } __packed;
 
 struct mt76_connac_sched_scan_done {
@@ -876,6 +884,17 @@ struct mt76_connac_suspend_tlv {
 	u8 pad[5];
 } __packed;
 
+struct mt76_sta_cmd_info {
+	struct ieee80211_sta *sta;
+	struct mt76_wcid *wcid;
+
+	struct ieee80211_vif *vif;
+
+	bool enable;
+	int cmd;
+	u8 rcpi;
+};
+
 #define to_wcid_lo(id)			FIELD_GET(GENMASK(7, 0), (u16)id)
 #define to_wcid_hi(id)			FIELD_GET(GENMASK(9, 8), (u16)id)
 
@@ -917,9 +936,13 @@ void mt76_connac_mcu_wtbl_generic_tlv(struct mt76_dev *dev, struct sk_buff *skb,
 				      struct ieee80211_vif *vif,
 				      struct ieee80211_sta *sta, void *sta_wtbl,
 				      void *wtbl_tlv);
+void mt76_connac_mcu_wtbl_hdr_trans_tlv(struct sk_buff *skb,
+					struct mt76_wcid *wcid,
+					void *sta_wtbl, void *wtbl_tlv);
 void mt76_connac_mcu_sta_tlv(struct mt76_phy *mphy, struct sk_buff *skb,
 			     struct ieee80211_sta *sta,
-			     struct ieee80211_vif *vif);
+			     struct ieee80211_vif *vif,
+			     u8 rcpi);
 void mt76_connac_mcu_wtbl_ht_tlv(struct mt76_dev *dev, struct sk_buff *skb,
 				 struct ieee80211_sta *sta, void *sta_wtbl,
 				 void *wtbl_tlv);
@@ -942,10 +965,7 @@ int mt76_connac_mcu_uni_add_bss(struct mt76_phy *phy,
 				struct mt76_wcid *wcid,
 				bool enable);
 int mt76_connac_mcu_add_sta_cmd(struct mt76_phy *phy,
-				struct ieee80211_vif *vif,
-				struct ieee80211_sta *sta,
-				struct mt76_wcid *wcid,
-				bool enable, int cmd);
+				struct mt76_sta_cmd_info *info);
 void mt76_connac_mcu_beacon_loss_iter(void *priv, u8 *mac,
 				      struct ieee80211_vif *vif);
 int mt76_connac_mcu_set_rts_thresh(struct mt76_dev *dev, u32 val, u8 band);
@@ -221,9 +221,9 @@ mt76x0e_remove(struct pci_dev *pdev)
 }
 
 static const struct pci_device_id mt76x0e_device_table[] = {
-	{ PCI_DEVICE(0x14c3, 0x7610) },
-	{ PCI_DEVICE(0x14c3, 0x7630) },
-	{ PCI_DEVICE(0x14c3, 0x7650) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x7610) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x7630) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x7650) },
 	{ },
 };
 
@@ -32,7 +32,8 @@ static struct usb_device_id mt76x0_device_table[] = {
 	{ USB_DEVICE(0x20f4, 0x806b) },	/* TRENDnet TEW-806UBH */
 	{ USB_DEVICE(0x7392, 0xc711) }, /* Devolo Wifi ac Stick */
 	{ USB_DEVICE(0x0df6, 0x0079) }, /* Sitecom Europe B.V. ac Stick */
-	{ USB_DEVICE(0x2357, 0x0123) }, /* TP-LINK T2UHP */
+	{ USB_DEVICE(0x2357, 0x0123) }, /* TP-LINK T2UHP_US_v1 */
+	{ USB_DEVICE(0x2357, 0x010b) }, /* TP-LINK T2UHP_UN_v1 */
 	/* TP-LINK Archer T1U */
 	{ USB_DEVICE(0x2357, 0x0105), .driver_info = 1, },
 	/* MT7630U */
@@ -770,6 +770,7 @@ int mt76x02_mac_process_rx(struct mt76x02_dev *dev, struct sk_buff *skb,
 			   void *rxi)
 {
 	struct mt76_rx_status *status = (struct mt76_rx_status *)skb->cb;
+	struct ieee80211_hdr *hdr;
 	struct mt76x02_rxwi *rxwi = rxi;
 	struct mt76x02_sta *sta;
 	u32 rxinfo = le32_to_cpu(rxwi->rxinfo);
@@ -864,7 +865,8 @@ int mt76x02_mac_process_rx(struct mt76x02_dev *dev, struct sk_buff *skb,
 	status->freq = dev->mphy.chandef.chan->center_freq;
 	status->band = dev->mphy.chandef.chan->band;
 
-	status->tid = FIELD_GET(MT_RXWI_TID, tid_sn);
+	hdr = (struct ieee80211_hdr *)skb->data;
+	status->qos_ctl = *ieee80211_get_qos_ctl(hdr);
 	status->seqno = FIELD_GET(MT_RXWI_SN, tid_sn);
 
 	return mt76x02_mac_process_rate(dev, status, rate);
@@ -230,7 +230,7 @@ int mt76x02_dma_init(struct mt76x02_dev *dev)
 	if (ret)
 		return ret;
 
-	netif_tx_napi_add(&dev->mt76.napi_dev, &dev->mt76.tx_napi,
+	netif_tx_napi_add(&dev->mt76.tx_napi_dev, &dev->mt76.tx_napi,
 			  mt76x02_poll_tx, NAPI_POLL_WEIGHT);
 	napi_enable(&dev->mt76.tx_napi);
 
@@ -447,6 +447,10 @@ int mt76x02_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
 	    !(key->flags & IEEE80211_KEY_FLAG_PAIRWISE))
 		return -EOPNOTSUPP;
 
+	/* MT76x0 GTK offloading does not work with more than one VIF */
+	if (is_mt76x0(dev) && !(key->flags & IEEE80211_KEY_FLAG_PAIRWISE))
+		return -EOPNOTSUPP;
+
 	msta = sta ? (struct mt76x02_sta *)sta->drv_priv : NULL;
 	wcid = msta ? &msta->wcid : &mvif->group_wcid;
 
@@ -10,9 +10,9 @@
 #include "mt76x2.h"
 
 static const struct pci_device_id mt76x2e_device_table[] = {
-	{ PCI_DEVICE(0x14c3, 0x7662) },
-	{ PCI_DEVICE(0x14c3, 0x7612) },
-	{ PCI_DEVICE(0x14c3, 0x7602) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x7662) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x7612) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x7602) },
 	{ },
 };
 
@@ -3,6 +3,6 @@
 obj-$(CONFIG_MT7915E) += mt7915e.o
 
 mt7915e-y := pci.o init.o dma.o eeprom.o main.o mcu.o mac.o \
-	     debugfs.o
+	     debugfs.o mmio.o
 
 mt7915e-$(CONFIG_NL80211_TESTMODE) += testmode.o
@@ -124,7 +124,7 @@ mt7915_ampdu_stat_read_phy(struct mt7915_phy *phy,
 		range[i] = mt76_rr(dev, MT_MIB_ARNG(ext_phy, i));
 
 	for (i = 0; i < ARRAY_SIZE(bound); i++)
-		bound[i] = MT_MIB_ARNCR_RANGE(range[i / 4], i) + 1;
+		bound[i] = MT_MIB_ARNCR_RANGE(range[i / 4], i % 4) + 1;
 
 	seq_printf(file, "\nPhy %d\n", ext_phy);
 
(Some files were not shown because too many files changed in this diff.)