
Merge tag 'wireless-next-2023-03-16' of git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next

Johannes Berg says:

====================
Major stack changes:
 * EHT channel puncturing support (client & AP)
 * some support for AP MLD without mac80211
 * fixes for A-MSDU on mesh connections

Major driver changes:

iwlwifi
 * EHT rate reporting
 * Bump FW API to 74 for AX devices
 * STEP equalizer support: transfer some STEP (connection to radio
   on platforms with integrated wifi) related parameters from the
   BIOS to the firmware

mt76
 * switch to using page pool allocator
 * mt7996 EHT (Wi-Fi 7) support
 * Wireless Ethernet Dispatch (WED) reset support

libertas
 * WPS enrollee support

brcmfmac
 * Rename Cypress 89459 to BCM4355
 * BCM4355 and BCM4377 support

mwifiex
 * SD8978 chipset support

rtl8xxxu
 * LED support

ath12k
 * new driver for Qualcomm Wi-Fi 7 devices

ath11k
 * IPQ5018 support
 * Fine Timing Measurement (FTM) responder role support
 * channel 177 support

ath10k
 * store WLAN firmware version in SMEM image table

* tag 'wireless-next-2023-03-16' of git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next: (207 commits)
  wifi: brcmfmac: p2p: Introduce generic flexible array frame member
  wifi: mac80211: add documentation for amsdu_mesh_control
  wifi: cfg80211: remove gfp parameter from cfg80211_obss_color_collision_notify description
  wifi: mac80211: always initialize link_sta with sta
  wifi: mac80211: pass 'sta' to ieee80211_rx_data_set_sta()
  wifi: cfg80211: Set SSID if it is not already set
  wifi: rtw89: move H2C of del_pkt_offload before polling FW status ready
  wifi: rtw89: use readable return 0 in rtw89_mac_cfg_ppdu_status()
  wifi: rtw88: usb: drop now unnecessary URB size check
  wifi: rtw88: usb: send Zero length packets if necessary
  wifi: rtw88: usb: Set qsel correctly
  wifi: mac80211: fix off-by-one link setting
  wifi: mac80211: Fix for Rx fragmented action frames
  wifi: mac80211: avoid u32_encode_bits() warning
  wifi: mac80211: Don't translate MLD addresses for multicast
  wifi: cfg80211: call reg_notifier for self managed wiphy from driver hint
  wifi: cfg80211: get rid of gfp in cfg80211_bss_color_notify
  wifi: nl80211: Allow authentication frames and set keys on NAN interface
  wifi: mac80211: fix non-MLO station association
  wifi: mac80211: Allow NSS change only up to capability
  ...
====================

Link: https://lore.kernel.org/r/20230216105406.208416-1-johannes@sipsolutions.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
This commit is contained in:
Jakub Kicinski 2023-02-16 11:33:10 -08:00
Parents: 40967f77df 1a30a6b25f
Commit: ca0df43d21
246 changed files: 58089 additions and 2061 deletions


@@ -29,15 +29,15 @@ additionalProperties: false
examples:
- |
mmc {
#address-cells = <1>;
#size-cells = <0>;
mmc {
#address-cells = <1>;
#size-cells = <0>;
wifi@1 {
compatible = "esp,esp8089";
reg = <1>;
esp,crystal-26M-en = <2>;
};
};
wifi@1 {
compatible = "esp,esp8089";
reg = <1>;
esp,crystal-26M-en = <2>;
};
};
...


@@ -1,6 +1,5 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
# Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
%YAML 1.2
---
$id: http://devicetree.org/schemas/net/wireless/ieee80211.yaml#


@@ -1,4 +1,4 @@
Marvell 8787/8897/8997 (sd8787/sd8897/sd8997/pcie8997) SDIO/PCIE devices
Marvell 8787/8897/8978/8997 (sd8787/sd8897/sd8978/sd8997/pcie8997) SDIO/PCIE devices
------
This node provides properties for controlling the Marvell SDIO/PCIE wireless device.
@@ -10,7 +10,9 @@ Required properties:
- compatible : should be one of the following:
* "marvell,sd8787"
* "marvell,sd8897"
* "marvell,sd8978"
* "marvell,sd8997"
* "nxp,iw416"
* "pci11ab,2b42"
* "pci1b4b,2b42"


@@ -1,6 +1,5 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
# Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
%YAML 1.2
---
$id: http://devicetree.org/schemas/net/wireless/mediatek,mt76.yaml#


@@ -1,6 +1,5 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
# Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
%YAML 1.2
---
$id: http://devicetree.org/schemas/net/wireless/qcom,ath11k.yaml#
@@ -21,6 +20,7 @@ properties:
- qcom,ipq8074-wifi
- qcom,ipq6018-wifi
- qcom,wcn6750-wifi
- qcom,ipq5018-wifi
reg:
maxItems: 1
@@ -262,10 +262,10 @@ allOf:
examples:
- |
q6v5_wcss: q6v5_wcss@CD00000 {
q6v5_wcss: remoteproc@cd00000 {
compatible = "qcom,ipq8074-wcss-pil";
reg = <0xCD00000 0x4040>,
<0x4AB000 0x20>;
reg = <0xcd00000 0x4040>,
<0x4ab000 0x20>;
reg-names = "qdsp6",
"rmb";
};
@@ -386,7 +386,7 @@ examples:
#address-cells = <2>;
#size-cells = <2>;
qcn9074_0: qcn9074_0@51100000 {
qcn9074_0: wifi@51100000 {
no-map;
reg = <0x0 0x51100000 0x0 0x03500000>;
};
@@ -463,6 +463,6 @@ examples:
qcom,smem-states = <&wlan_smp2p_out 0>;
qcom,smem-state-names = "wlan-smp2p-out";
wifi-firmware {
iommus = <&apps_smmu 0x1c02 0x1>;
iommus = <&apps_smmu 0x1c02 0x1>;
};
};


@@ -2,7 +2,6 @@
# Copyright (c) 2020, Silicon Laboratories, Inc.
%YAML 1.2
---
$id: http://devicetree.org/schemas/net/wireless/silabs,wfx.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#


@@ -90,47 +90,47 @@ examples:
// For wl12xx family:
spi1 {
#address-cells = <1>;
#size-cells = <0>;
#address-cells = <1>;
#size-cells = <0>;
wlcore1: wlcore@1 {
compatible = "ti,wl1271";
reg = <1>;
spi-max-frequency = <48000000>;
interrupts = <8 IRQ_TYPE_LEVEL_HIGH>;
vwlan-supply = <&vwlan_fixed>;
clock-xtal;
ref-clock-frequency = <38400000>;
};
wlcore1: wlcore@1 {
compatible = "ti,wl1271";
reg = <1>;
spi-max-frequency = <48000000>;
interrupts = <8 IRQ_TYPE_LEVEL_HIGH>;
vwlan-supply = <&vwlan_fixed>;
clock-xtal;
ref-clock-frequency = <38400000>;
};
};
// For wl18xx family:
spi2 {
#address-cells = <1>;
#size-cells = <0>;
#address-cells = <1>;
#size-cells = <0>;
wlcore2: wlcore@0 {
compatible = "ti,wl1835";
reg = <0>;
spi-max-frequency = <48000000>;
interrupts = <27 IRQ_TYPE_EDGE_RISING>;
vwlan-supply = <&vwlan_fixed>;
};
wlcore2: wlcore@0 {
compatible = "ti,wl1835";
reg = <0>;
spi-max-frequency = <48000000>;
interrupts = <27 IRQ_TYPE_EDGE_RISING>;
vwlan-supply = <&vwlan_fixed>;
};
};
// SDIO example:
mmc3 {
vmmc-supply = <&wlan_en_reg>;
bus-width = <4>;
cap-power-off-card;
keep-power-in-suspend;
vmmc-supply = <&wlan_en_reg>;
bus-width = <4>;
cap-power-off-card;
keep-power-in-suspend;
#address-cells = <1>;
#size-cells = <0>;
#address-cells = <1>;
#size-cells = <0>;
wlcore3: wlcore@2 {
compatible = "ti,wl1835";
reg = <2>;
interrupts = <19 IRQ_TYPE_LEVEL_HIGH>;
};
wlcore3: wlcore@2 {
compatible = "ti,wl1835";
reg = <2>;
interrupts = <19 IRQ_TYPE_LEVEL_HIGH>;
};
};


@@ -17254,6 +17254,13 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git
F: Documentation/devicetree/bindings/net/wireless/qcom,ath11k.yaml
F: drivers/net/wireless/ath/ath11k/
QUALCOMM ATH12K WIRELESS DRIVER
M: Kalle Valo <kvalo@kernel.org>
L: ath12k@lists.infradead.org
S: Supported
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git
F: drivers/net/wireless/ath/ath12k/
QUALCOMM ATHEROS ATH9K WIRELESS DRIVER
M: Toke Høiland-Jørgensen <toke@toke.dk>
L: linux-wireless@vger.kernel.org


@@ -63,5 +63,6 @@ source "drivers/net/wireless/ath/wil6210/Kconfig"
source "drivers/net/wireless/ath/ath10k/Kconfig"
source "drivers/net/wireless/ath/wcn36xx/Kconfig"
source "drivers/net/wireless/ath/ath11k/Kconfig"
source "drivers/net/wireless/ath/ath12k/Kconfig"
endif


@@ -8,6 +8,7 @@ obj-$(CONFIG_WIL6210) += wil6210/
obj-$(CONFIG_ATH10K) += ath10k/
obj-$(CONFIG_WCN36XX) += wcn36xx/
obj-$(CONFIG_ATH11K) += ath11k/
obj-$(CONFIG_ATH12K) += ath12k/
obj-$(CONFIG_ATH_COMMON) += ath.o


@@ -208,14 +208,6 @@ ath10k_ce_shadow_src_ring_write_index_set(struct ath10k *ar,
ath10k_ce_write32(ar, shadow_sr_wr_ind_addr(ar, ce_state), value);
}
static inline void
ath10k_ce_shadow_dest_ring_write_index_set(struct ath10k *ar,
struct ath10k_ce_pipe *ce_state,
unsigned int value)
{
ath10k_ce_write32(ar, shadow_dst_wr_ind_addr(ar, ce_state), value);
}
static inline void ath10k_ce_src_ring_base_addr_set(struct ath10k *ar,
u32 ce_id,
u64 addr)


@@ -32,6 +32,9 @@ static const struct of_device_id ath11k_ahb_of_match[] = {
{ .compatible = "qcom,wcn6750-wifi",
.data = (void *)ATH11K_HW_WCN6750_HW10,
},
{ .compatible = "qcom,ipq5018-wifi",
.data = (void *)ATH11K_HW_IPQ5018_HW10,
},
{ }
};
@@ -267,30 +270,42 @@ static void ath11k_ahb_clearbit32(struct ath11k_base *ab, u8 bit, u32 offset)
static void ath11k_ahb_ce_irq_enable(struct ath11k_base *ab, u16 ce_id)
{
const struct ce_attr *ce_attr;
const struct ce_ie_addr *ce_ie_addr = ab->hw_params.ce_ie_addr;
u32 ie1_reg_addr, ie2_reg_addr, ie3_reg_addr;
ie1_reg_addr = ce_ie_addr->ie1_reg_addr + ATH11K_CE_OFFSET(ab);
ie2_reg_addr = ce_ie_addr->ie2_reg_addr + ATH11K_CE_OFFSET(ab);
ie3_reg_addr = ce_ie_addr->ie3_reg_addr + ATH11K_CE_OFFSET(ab);
ce_attr = &ab->hw_params.host_ce_config[ce_id];
if (ce_attr->src_nentries)
ath11k_ahb_setbit32(ab, ce_id, CE_HOST_IE_ADDRESS);
ath11k_ahb_setbit32(ab, ce_id, ie1_reg_addr);
if (ce_attr->dest_nentries) {
ath11k_ahb_setbit32(ab, ce_id, CE_HOST_IE_2_ADDRESS);
ath11k_ahb_setbit32(ab, ce_id, ie2_reg_addr);
ath11k_ahb_setbit32(ab, ce_id + CE_HOST_IE_3_SHIFT,
CE_HOST_IE_3_ADDRESS);
ie3_reg_addr);
}
}
static void ath11k_ahb_ce_irq_disable(struct ath11k_base *ab, u16 ce_id)
{
const struct ce_attr *ce_attr;
const struct ce_ie_addr *ce_ie_addr = ab->hw_params.ce_ie_addr;
u32 ie1_reg_addr, ie2_reg_addr, ie3_reg_addr;
ie1_reg_addr = ce_ie_addr->ie1_reg_addr + ATH11K_CE_OFFSET(ab);
ie2_reg_addr = ce_ie_addr->ie2_reg_addr + ATH11K_CE_OFFSET(ab);
ie3_reg_addr = ce_ie_addr->ie3_reg_addr + ATH11K_CE_OFFSET(ab);
ce_attr = &ab->hw_params.host_ce_config[ce_id];
if (ce_attr->src_nentries)
ath11k_ahb_clearbit32(ab, ce_id, CE_HOST_IE_ADDRESS);
ath11k_ahb_clearbit32(ab, ce_id, ie1_reg_addr);
if (ce_attr->dest_nentries) {
ath11k_ahb_clearbit32(ab, ce_id, CE_HOST_IE_2_ADDRESS);
ath11k_ahb_clearbit32(ab, ce_id, ie2_reg_addr);
ath11k_ahb_clearbit32(ab, ce_id + CE_HOST_IE_3_SHIFT,
CE_HOST_IE_3_ADDRESS);
ie3_reg_addr);
}
}
@@ -1150,6 +1165,22 @@ static int ath11k_ahb_probe(struct platform_device *pdev)
if (ret)
goto err_core_free;
ab->mem_ce = ab->mem;
if (ab->hw_params.ce_remap) {
const struct ce_remap *ce_remap = ab->hw_params.ce_remap;
/* ce register space is moved out of wcss unlike ipq8074 or ipq6018
* and the space is not contiguous, hence remapping the CE registers
* to a new space for accessing them.
*/
ab->mem_ce = ioremap(ce_remap->base, ce_remap->size);
if (IS_ERR(ab->mem_ce)) {
dev_err(&pdev->dev, "ce ioremap error\n");
ret = -ENOMEM;
goto err_core_free;
}
}
ret = ath11k_ahb_fw_resources_init(ab);
if (ret)
goto err_core_free;
@@ -1236,6 +1267,10 @@ static void ath11k_ahb_free_resources(struct ath11k_base *ab)
ath11k_ahb_release_smp2p_handle(ab);
ath11k_ahb_fw_resource_deinit(ab);
ath11k_ce_free_pipes(ab);
if (ab->hw_params.ce_remap)
iounmap(ab->mem_ce);
ath11k_core_free(ab);
platform_set_drvdata(pdev, NULL);
}


@@ -49,6 +49,11 @@ void ath11k_ce_byte_swap(void *mem, u32 len);
#define CE_HOST_IE_2_ADDRESS 0x00A18040
#define CE_HOST_IE_3_ADDRESS CE_HOST_IE_ADDRESS
/* CE IE registers are different for IPQ5018 */
#define CE_HOST_IPQ5018_IE_ADDRESS 0x0841804C
#define CE_HOST_IPQ5018_IE_2_ADDRESS 0x08418050
#define CE_HOST_IPQ5018_IE_3_ADDRESS CE_HOST_IPQ5018_IE_ADDRESS
#define CE_HOST_IE_3_SHIFT 0xC
#define CE_RING_IDX_INCR(nentries_mask, idx) (((idx) + 1) & (nentries_mask))
@@ -84,6 +89,17 @@ struct ce_pipe_config {
__le32 reserved;
};
struct ce_ie_addr {
u32 ie1_reg_addr;
u32 ie2_reg_addr;
u32 ie3_reg_addr;
};
struct ce_remap {
u32 base;
u32 size;
};
struct ce_attr {
/* CE_ATTR_* values */
unsigned int flags;


@@ -54,6 +54,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.target_ce_count = 11,
.svc_to_ce_map = ath11k_target_service_to_ce_map_wlan_ipq8074,
.svc_to_ce_map_len = 21,
.ce_ie_addr = &ath11k_ce_ie_addr_ipq8074,
.single_pdev_only = false,
.rxdma1_enable = true,
.num_rxmda_per_pdev = 1,
@@ -115,6 +116,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.tcl_ring_retry = true,
.tx_ring_size = DP_TCL_DATA_RING_SIZE,
.smp2p_wow_exit = false,
.ftm_responder = true,
},
{
.hw_rev = ATH11K_HW_IPQ6018_HW10,
@@ -137,6 +139,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.target_ce_count = 11,
.svc_to_ce_map = ath11k_target_service_to_ce_map_wlan_ipq6018,
.svc_to_ce_map_len = 19,
.ce_ie_addr = &ath11k_ce_ie_addr_ipq8074,
.single_pdev_only = false,
.rxdma1_enable = true,
.num_rxmda_per_pdev = 1,
@@ -196,6 +199,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.tx_ring_size = DP_TCL_DATA_RING_SIZE,
.smp2p_wow_exit = false,
.support_fw_mac_sequence = false,
.ftm_responder = true,
},
{
.name = "qca6390 hw2.0",
@@ -218,6 +222,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.target_ce_count = 9,
.svc_to_ce_map = ath11k_target_service_to_ce_map_wlan_qca6390,
.svc_to_ce_map_len = 14,
.ce_ie_addr = &ath11k_ce_ie_addr_ipq8074,
.single_pdev_only = true,
.rxdma1_enable = false,
.num_rxmda_per_pdev = 2,
@@ -279,6 +284,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.tx_ring_size = DP_TCL_DATA_RING_SIZE,
.smp2p_wow_exit = false,
.support_fw_mac_sequence = true,
.ftm_responder = false,
},
{
.name = "qcn9074 hw1.0",
@@ -301,6 +307,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.target_ce_count = 9,
.svc_to_ce_map = ath11k_target_service_to_ce_map_wlan_qcn9074,
.svc_to_ce_map_len = 18,
.ce_ie_addr = &ath11k_ce_ie_addr_ipq8074,
.rxdma1_enable = true,
.num_rxmda_per_pdev = 1,
.rx_mac_buf_ring = false,
@@ -359,6 +366,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.tx_ring_size = DP_TCL_DATA_RING_SIZE,
.smp2p_wow_exit = false,
.support_fw_mac_sequence = false,
.ftm_responder = true,
},
{
.name = "wcn6855 hw2.0",
@@ -381,6 +389,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.target_ce_count = 9,
.svc_to_ce_map = ath11k_target_service_to_ce_map_wlan_qca6390,
.svc_to_ce_map_len = 14,
.ce_ie_addr = &ath11k_ce_ie_addr_ipq8074,
.single_pdev_only = true,
.rxdma1_enable = false,
.num_rxmda_per_pdev = 2,
@@ -442,6 +451,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.tx_ring_size = DP_TCL_DATA_RING_SIZE,
.smp2p_wow_exit = false,
.support_fw_mac_sequence = true,
.ftm_responder = false,
},
{
.name = "wcn6855 hw2.1",
@@ -524,6 +534,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.tx_ring_size = DP_TCL_DATA_RING_SIZE,
.smp2p_wow_exit = false,
.support_fw_mac_sequence = true,
.ftm_responder = false,
},
{
.name = "wcn6750 hw1.0",
@@ -546,6 +557,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.target_ce_count = 9,
.svc_to_ce_map = ath11k_target_service_to_ce_map_wlan_qca6390,
.svc_to_ce_map_len = 14,
.ce_ie_addr = &ath11k_ce_ie_addr_ipq8074,
.single_pdev_only = true,
.rxdma1_enable = false,
.num_rxmda_per_pdev = 1,
@@ -603,6 +615,87 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
.tx_ring_size = DP_TCL_DATA_RING_SIZE_WCN6750,
.smp2p_wow_exit = true,
.support_fw_mac_sequence = true,
.ftm_responder = false,
},
{
.hw_rev = ATH11K_HW_IPQ5018_HW10,
.name = "ipq5018 hw1.0",
.fw = {
.dir = "IPQ5018/hw1.0",
.board_size = 256 * 1024,
.cal_offset = 128 * 1024,
},
.max_radios = MAX_RADIOS_5018,
.bdf_addr = 0x4BA00000,
/* hal_desc_sz and hw ops are similar to qcn9074 */
.hal_desc_sz = sizeof(struct hal_rx_desc_qcn9074),
.qmi_service_ins_id = ATH11K_QMI_WLFW_SERVICE_INS_ID_V01_IPQ8074,
.ring_mask = &ath11k_hw_ring_mask_ipq8074,
.credit_flow = false,
.max_tx_ring = 1,
.spectral = {
.fft_sz = 2,
.fft_pad_sz = 0,
.summary_pad_sz = 16,
.fft_hdr_len = 24,
.max_fft_bins = 1024,
},
.internal_sleep_clock = false,
.regs = &ipq5018_regs,
.hw_ops = &ipq5018_ops,
.host_ce_config = ath11k_host_ce_config_qcn9074,
.ce_count = CE_CNT_5018,
.target_ce_config = ath11k_target_ce_config_wlan_ipq5018,
.target_ce_count = TARGET_CE_CNT_5018,
.svc_to_ce_map = ath11k_target_service_to_ce_map_wlan_ipq5018,
.svc_to_ce_map_len = SVC_CE_MAP_LEN_5018,
.ce_ie_addr = &ath11k_ce_ie_addr_ipq5018,
.ce_remap = &ath11k_ce_remap_ipq5018,
.rxdma1_enable = true,
.num_rxmda_per_pdev = RXDMA_PER_PDEV_5018,
.rx_mac_buf_ring = false,
.vdev_start_delay = false,
.htt_peer_map_v2 = true,
.interface_modes = BIT(NL80211_IFTYPE_STATION) |
BIT(NL80211_IFTYPE_AP) |
BIT(NL80211_IFTYPE_MESH_POINT),
.supports_monitor = false,
.supports_sta_ps = false,
.supports_shadow_regs = false,
.fw_mem_mode = 0,
.num_vdevs = 16 + 1,
.num_peers = 512,
.supports_regdb = false,
.idle_ps = false,
.supports_suspend = false,
.hal_params = &ath11k_hw_hal_params_ipq8074,
.single_pdev_only = false,
.cold_boot_calib = true,
.fix_l1ss = true,
.supports_dynamic_smps_6ghz = false,
.alloc_cacheable_memory = true,
.supports_rssi_stats = false,
.fw_wmi_diag_event = false,
.current_cc_support = false,
.dbr_debug_support = true,
.global_reset = false,
.bios_sar_capa = NULL,
.m3_fw_support = false,
.fixed_bdf_addr = true,
.fixed_mem_region = true,
.static_window_map = false,
.hybrid_bus_type = false,
.fixed_fw_mem = false,
.support_off_channel_tx = false,
.supports_multi_bssid = false,
.sram_dump = {},
.tcl_ring_retry = true,
.tx_ring_size = DP_TCL_DATA_RING_SIZE,
.smp2p_wow_exit = false,
.support_fw_mac_sequence = false,
.ftm_responder = true,
},
};


@@ -142,6 +142,7 @@ enum ath11k_hw_rev {
ATH11K_HW_WCN6855_HW20,
ATH11K_HW_WCN6855_HW21,
ATH11K_HW_WCN6750_HW10,
ATH11K_HW_IPQ5018_HW10,
};
enum ath11k_firmware_mode {
@@ -230,6 +231,13 @@ struct ath11k_he {
#define MAX_RADIOS 3
/* ipq5018 hw param macros */
#define MAX_RADIOS_5018 1
#define CE_CNT_5018 6
#define TARGET_CE_CNT_5018 9
#define SVC_CE_MAP_LEN_5018 17
#define RXDMA_PER_PDEV_5018 1
enum {
WMI_HOST_TP_SCALE_MAX = 0,
WMI_HOST_TP_SCALE_50 = 1,
@@ -338,6 +346,7 @@ struct ath11k_vif {
bool is_started;
bool is_up;
bool ftm_responder;
bool spectral_enabled;
bool ps;
u32 aid;
@@ -512,8 +521,8 @@ struct ath11k_sta {
#define ATH11K_MIN_5G_FREQ 4150
#define ATH11K_MIN_6G_FREQ 5925
#define ATH11K_MAX_6G_FREQ 7115
#define ATH11K_NUM_CHANS 101
#define ATH11K_MAX_5G_CHAN 173
#define ATH11K_NUM_CHANS 102
#define ATH11K_MAX_5G_CHAN 177
enum ath11k_state {
ATH11K_STATE_OFF,
@@ -843,6 +852,7 @@ struct ath11k_base {
struct ath11k_dp dp;
void __iomem *mem;
void __iomem *mem_ce;
unsigned long mem_len;
struct {
@@ -912,7 +922,6 @@ struct ath11k_base {
enum ath11k_dfs_region dfs_region;
#ifdef CONFIG_ATH11K_DEBUGFS
struct dentry *debugfs_soc;
struct dentry *debugfs_ath11k;
#endif
struct ath11k_soc_dp_stats soc_stats;
@@ -1138,6 +1147,9 @@ extern const struct service_to_pipe ath11k_target_service_to_ce_map_wlan_ipq6018
extern const struct ce_pipe_config ath11k_target_ce_config_wlan_qca6390[];
extern const struct service_to_pipe ath11k_target_service_to_ce_map_wlan_qca6390[];
extern const struct ce_pipe_config ath11k_target_ce_config_wlan_ipq5018[];
extern const struct service_to_pipe ath11k_target_service_to_ce_map_wlan_ipq5018[];
extern const struct ce_pipe_config ath11k_target_ce_config_wlan_qcn9074[];
extern const struct service_to_pipe ath11k_target_service_to_ce_map_wlan_qcn9074[];
int ath11k_core_qmi_firmware_ready(struct ath11k_base *ab);


@@ -976,10 +976,6 @@ int ath11k_debugfs_pdev_create(struct ath11k_base *ab)
if (test_bit(ATH11K_FLAG_REGISTERED, &ab->dev_flags))
return 0;
ab->debugfs_soc = debugfs_create_dir(ab->hw_params.name, ab->debugfs_ath11k);
if (IS_ERR(ab->debugfs_soc))
return PTR_ERR(ab->debugfs_soc);
debugfs_create_file("simulate_fw_crash", 0600, ab->debugfs_soc, ab,
&fops_simulate_fw_crash);
@@ -1001,15 +997,51 @@ void ath11k_debugfs_pdev_destroy(struct ath11k_base *ab)
int ath11k_debugfs_soc_create(struct ath11k_base *ab)
{
ab->debugfs_ath11k = debugfs_create_dir("ath11k", NULL);
struct dentry *root;
bool dput_needed;
char name[64];
int ret;
return PTR_ERR_OR_ZERO(ab->debugfs_ath11k);
root = debugfs_lookup("ath11k", NULL);
if (!root) {
root = debugfs_create_dir("ath11k", NULL);
if (IS_ERR_OR_NULL(root))
return PTR_ERR(root);
dput_needed = false;
} else {
/* a dentry from lookup() needs dput() after we don't use it */
dput_needed = true;
}
scnprintf(name, sizeof(name), "%s-%s", ath11k_bus_str(ab->hif.bus),
dev_name(ab->dev));
ab->debugfs_soc = debugfs_create_dir(name, root);
if (IS_ERR_OR_NULL(ab->debugfs_soc)) {
ret = PTR_ERR(ab->debugfs_soc);
goto out;
}
ret = 0;
out:
if (dput_needed)
dput(root);
return ret;
}
void ath11k_debugfs_soc_destroy(struct ath11k_base *ab)
{
debugfs_remove_recursive(ab->debugfs_ath11k);
ab->debugfs_ath11k = NULL;
debugfs_remove_recursive(ab->debugfs_soc);
ab->debugfs_soc = NULL;
/* We are not removing ath11k directory on purpose, even if it
* would be empty. This simplifies the directory handling and it's
* a minor cosmetic issue to leave an empty ath11k directory to
* debugfs.
*/
}
EXPORT_SYMBOL(ath11k_debugfs_soc_destroy);


@@ -1535,13 +1535,12 @@ struct htt_ppdu_stats_info *ath11k_dp_htt_get_ppdu_desc(struct ath11k *ar,
{
struct htt_ppdu_stats_info *ppdu_info;
spin_lock_bh(&ar->data_lock);
lockdep_assert_held(&ar->data_lock);
if (!list_empty(&ar->ppdu_stats_info)) {
list_for_each_entry(ppdu_info, &ar->ppdu_stats_info, list) {
if (ppdu_info->ppdu_id == ppdu_id) {
spin_unlock_bh(&ar->data_lock);
if (ppdu_info->ppdu_id == ppdu_id)
return ppdu_info;
}
}
if (ar->ppdu_stat_list_depth > HTT_PPDU_DESC_MAX_DEPTH) {
@@ -1553,16 +1552,13 @@
kfree(ppdu_info);
}
}
spin_unlock_bh(&ar->data_lock);
ppdu_info = kzalloc(sizeof(*ppdu_info), GFP_ATOMIC);
if (!ppdu_info)
return NULL;
spin_lock_bh(&ar->data_lock);
list_add_tail(&ppdu_info->list, &ar->ppdu_stats_info);
ar->ppdu_stat_list_depth++;
spin_unlock_bh(&ar->data_lock);
return ppdu_info;
}
@@ -1586,16 +1582,17 @@ static int ath11k_htt_pull_ppdu_stats(struct ath11k_base *ab,
ar = ath11k_mac_get_ar_by_pdev_id(ab, pdev_id);
if (!ar) {
ret = -EINVAL;
goto exit;
goto out;
}
if (ath11k_debugfs_is_pktlog_lite_mode_enabled(ar))
trace_ath11k_htt_ppdu_stats(ar, skb->data, len);
spin_lock_bh(&ar->data_lock);
ppdu_info = ath11k_dp_htt_get_ppdu_desc(ar, ppdu_id);
if (!ppdu_info) {
ret = -EINVAL;
goto exit;
goto out_unlock_data;
}
ppdu_info->ppdu_id = ppdu_id;
@@ -1604,10 +1601,13 @@
(void *)ppdu_info);
if (ret) {
ath11k_warn(ab, "Failed to parse tlv %d\n", ret);
goto exit;
goto out_unlock_data;
}
exit:
out_unlock_data:
spin_unlock_bh(&ar->data_lock);
out:
rcu_read_unlock();
return ret;
@@ -3126,6 +3126,7 @@ int ath11k_peer_rx_frag_setup(struct ath11k *ar, const u8 *peer_mac, int vdev_id
if (!peer) {
ath11k_warn(ab, "failed to find the peer to set up fragment info\n");
spin_unlock_bh(&ab->base_lock);
crypto_free_shash(tfm);
return -ENOENT;
}
@@ -5022,6 +5023,7 @@ static int ath11k_dp_rx_mon_deliver(struct ath11k *ar, u32 mac_id,
} else {
rxs->flag |= RX_FLAG_ALLOW_SAME_PN;
}
rxs->flag |= RX_FLAG_ONLY_MONITOR;
ath11k_update_radiotap(ar, ppduinfo, mon_skb, rxs);
ath11k_dp_rx_deliver_msdu(ar, napi, mon_skb, rxs);


@@ -1220,16 +1220,20 @@ static int ath11k_hal_srng_create_config(struct ath11k_base *ab)
s->reg_start[1] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL_STATUS_RING_HP;
s = &hal->srng_config[HAL_CE_SRC];
s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab) + HAL_CE_DST_RING_BASE_LSB;
s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab) + HAL_CE_DST_RING_HP;
s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab) + HAL_CE_DST_RING_BASE_LSB +
ATH11K_CE_OFFSET(ab);
s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab) + HAL_CE_DST_RING_HP +
ATH11K_CE_OFFSET(ab);
s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG(ab) -
HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab);
s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG(ab) -
HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab);
s = &hal->srng_config[HAL_CE_DST];
s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_RING_BASE_LSB;
s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_RING_HP;
s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_RING_BASE_LSB +
ATH11K_CE_OFFSET(ab);
s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_RING_HP +
ATH11K_CE_OFFSET(ab);
s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -
HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab);
s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -
@@ -1237,8 +1241,9 @@ static int ath11k_hal_srng_create_config(struct ath11k_base *ab)
s = &hal->srng_config[HAL_CE_DST_STATUS];
s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) +
HAL_CE_DST_STATUS_RING_BASE_LSB;
s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_STATUS_RING_HP;
HAL_CE_DST_STATUS_RING_BASE_LSB + ATH11K_CE_OFFSET(ab);
s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_STATUS_RING_HP +
ATH11K_CE_OFFSET(ab);
s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -
HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab);
s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -


@@ -321,6 +321,10 @@ struct ath11k_base;
#define HAL_WBM2SW_RELEASE_RING_BASE_MSB_RING_SIZE 0x000fffff
#define HAL_RXDMA_RING_MAX_SIZE 0x0000ffff
/* IPQ5018 ce registers */
#define HAL_IPQ5018_CE_WFSS_REG_BASE 0x08400000
#define HAL_IPQ5018_CE_SIZE 0x200000
/* Add any other errors here and return them in
* ath11k_hal_rx_desc_get_err().
*/
@@ -519,6 +523,7 @@ enum hal_srng_dir {
#define HAL_SRNG_FLAGS_MSI_INTR 0x00020000
#define HAL_SRNG_FLAGS_CACHED 0x20000000
#define HAL_SRNG_FLAGS_LMAC_RING 0x80000000
#define HAL_SRNG_FLAGS_REMAP_CE_RING 0x10000000
#define HAL_SRNG_TLV_HDR_TAG GENMASK(9, 1)
#define HAL_SRNG_TLV_HDR_LEN GENMASK(25, 10)


@@ -791,6 +791,49 @@ static void ath11k_hw_wcn6855_reo_setup(struct ath11k_base *ab)
ring_hash_map);
}
static void ath11k_hw_ipq5018_reo_setup(struct ath11k_base *ab)
{
u32 reo_base = HAL_SEQ_WCSS_UMAC_REO_REG;
u32 val;
/* Each hash entry uses three bits to map to a particular ring. */
u32 ring_hash_map = HAL_HASH_ROUTING_RING_SW1 << 0 |
HAL_HASH_ROUTING_RING_SW2 << 4 |
HAL_HASH_ROUTING_RING_SW3 << 8 |
HAL_HASH_ROUTING_RING_SW4 << 12 |
HAL_HASH_ROUTING_RING_SW1 << 16 |
HAL_HASH_ROUTING_RING_SW2 << 20 |
HAL_HASH_ROUTING_RING_SW3 << 24 |
HAL_HASH_ROUTING_RING_SW4 << 28;
val = ath11k_hif_read32(ab, reo_base + HAL_REO1_GEN_ENABLE);
val &= ~HAL_REO1_GEN_ENABLE_FRAG_DST_RING;
val |= FIELD_PREP(HAL_REO1_GEN_ENABLE_FRAG_DST_RING,
HAL_SRNG_RING_ID_REO2SW1) |
FIELD_PREP(HAL_REO1_GEN_ENABLE_AGING_LIST_ENABLE, 1) |
FIELD_PREP(HAL_REO1_GEN_ENABLE_AGING_FLUSH_ENABLE, 1);
ath11k_hif_write32(ab, reo_base + HAL_REO1_GEN_ENABLE, val);
ath11k_hif_write32(ab, reo_base + HAL_REO1_AGING_THRESH_IX_0(ab),
HAL_DEFAULT_REO_TIMEOUT_USEC);
ath11k_hif_write32(ab, reo_base + HAL_REO1_AGING_THRESH_IX_1(ab),
HAL_DEFAULT_REO_TIMEOUT_USEC);
ath11k_hif_write32(ab, reo_base + HAL_REO1_AGING_THRESH_IX_2(ab),
HAL_DEFAULT_REO_TIMEOUT_USEC);
ath11k_hif_write32(ab, reo_base + HAL_REO1_AGING_THRESH_IX_3(ab),
HAL_DEFAULT_REO_TIMEOUT_USEC);
ath11k_hif_write32(ab, reo_base + HAL_REO1_DEST_RING_CTRL_IX_0,
ring_hash_map);
ath11k_hif_write32(ab, reo_base + HAL_REO1_DEST_RING_CTRL_IX_1,
ring_hash_map);
ath11k_hif_write32(ab, reo_base + HAL_REO1_DEST_RING_CTRL_IX_2,
ring_hash_map);
ath11k_hif_write32(ab, reo_base + HAL_REO1_DEST_RING_CTRL_IX_3,
ring_hash_map);
}
static u16 ath11k_hw_ipq8074_mpdu_info_get_peerid(u8 *tlv_data)
{
u16 peer_id = 0;
@@ -1084,6 +1127,47 @@ const struct ath11k_hw_ops wcn6750_ops = {
.get_ring_selector = ath11k_hw_wcn6750_get_tcl_ring_selector,
};
/* IPQ5018 hw ops is similar to QCN9074 except for the dest ring remap */
const struct ath11k_hw_ops ipq5018_ops = {
.get_hw_mac_from_pdev_id = ath11k_hw_ipq6018_mac_from_pdev_id,
.wmi_init_config = ath11k_init_wmi_config_ipq8074,
.mac_id_to_pdev_id = ath11k_hw_mac_id_to_pdev_id_ipq8074,
.mac_id_to_srng_id = ath11k_hw_mac_id_to_srng_id_ipq8074,
.tx_mesh_enable = ath11k_hw_qcn9074_tx_mesh_enable,
.rx_desc_get_first_msdu = ath11k_hw_qcn9074_rx_desc_get_first_msdu,
.rx_desc_get_last_msdu = ath11k_hw_qcn9074_rx_desc_get_last_msdu,
.rx_desc_get_l3_pad_bytes = ath11k_hw_qcn9074_rx_desc_get_l3_pad_bytes,
.rx_desc_get_hdr_status = ath11k_hw_qcn9074_rx_desc_get_hdr_status,
.rx_desc_encrypt_valid = ath11k_hw_qcn9074_rx_desc_encrypt_valid,
.rx_desc_get_encrypt_type = ath11k_hw_qcn9074_rx_desc_get_encrypt_type,
.rx_desc_get_decap_type = ath11k_hw_qcn9074_rx_desc_get_decap_type,
.rx_desc_get_mesh_ctl = ath11k_hw_qcn9074_rx_desc_get_mesh_ctl,
.rx_desc_get_ldpc_support = ath11k_hw_qcn9074_rx_desc_get_ldpc_support,
.rx_desc_get_mpdu_seq_ctl_vld = ath11k_hw_qcn9074_rx_desc_get_mpdu_seq_ctl_vld,
.rx_desc_get_mpdu_fc_valid = ath11k_hw_qcn9074_rx_desc_get_mpdu_fc_valid,
.rx_desc_get_mpdu_start_seq_no = ath11k_hw_qcn9074_rx_desc_get_mpdu_start_seq_no,
.rx_desc_get_msdu_len = ath11k_hw_qcn9074_rx_desc_get_msdu_len,
.rx_desc_get_msdu_sgi = ath11k_hw_qcn9074_rx_desc_get_msdu_sgi,
.rx_desc_get_msdu_rate_mcs = ath11k_hw_qcn9074_rx_desc_get_msdu_rate_mcs,
.rx_desc_get_msdu_rx_bw = ath11k_hw_qcn9074_rx_desc_get_msdu_rx_bw,
.rx_desc_get_msdu_freq = ath11k_hw_qcn9074_rx_desc_get_msdu_freq,
.rx_desc_get_msdu_pkt_type = ath11k_hw_qcn9074_rx_desc_get_msdu_pkt_type,
.rx_desc_get_msdu_nss = ath11k_hw_qcn9074_rx_desc_get_msdu_nss,
.rx_desc_get_mpdu_tid = ath11k_hw_qcn9074_rx_desc_get_mpdu_tid,
.rx_desc_get_mpdu_peer_id = ath11k_hw_qcn9074_rx_desc_get_mpdu_peer_id,
.rx_desc_copy_attn_end_tlv = ath11k_hw_qcn9074_rx_desc_copy_attn_end,
.rx_desc_get_mpdu_start_tag = ath11k_hw_qcn9074_rx_desc_get_mpdu_start_tag,
.rx_desc_get_mpdu_ppdu_id = ath11k_hw_qcn9074_rx_desc_get_mpdu_ppdu_id,
.rx_desc_set_msdu_len = ath11k_hw_qcn9074_rx_desc_set_msdu_len,
.rx_desc_get_attention = ath11k_hw_qcn9074_rx_desc_get_attention,
.reo_setup = ath11k_hw_ipq5018_reo_setup,
.rx_desc_get_msdu_payload = ath11k_hw_qcn9074_rx_desc_get_msdu_payload,
.mpdu_info_get_peerid = ath11k_hw_ipq8074_mpdu_info_get_peerid,
.rx_desc_mac_addr2_valid = ath11k_hw_ipq9074_rx_desc_mac_addr2_valid,
.rx_desc_mpdu_start_addr2 = ath11k_hw_ipq9074_rx_desc_mpdu_start_addr2,
};
#define ATH11K_TX_RING_MASK_0 BIT(0)
#define ATH11K_TX_RING_MASK_1 BIT(1)
#define ATH11K_TX_RING_MASK_2 BIT(2)
@@ -1972,6 +2056,214 @@ const struct ath11k_hw_ring_mask ath11k_hw_ring_mask_wcn6750 = {
},
};
/* Target firmware's Copy Engine configuration for IPQ5018 */
const struct ce_pipe_config ath11k_target_ce_config_wlan_ipq5018[] = {
/* CE0: host->target HTC control and raw streams */
{
.pipenum = __cpu_to_le32(0),
.pipedir = __cpu_to_le32(PIPEDIR_OUT),
.nentries = __cpu_to_le32(32),
.nbytes_max = __cpu_to_le32(2048),
.flags = __cpu_to_le32(CE_ATTR_FLAGS),
.reserved = __cpu_to_le32(0),
},
/* CE1: target->host HTT + HTC control */
{
.pipenum = __cpu_to_le32(1),
.pipedir = __cpu_to_le32(PIPEDIR_IN),
.nentries = __cpu_to_le32(32),
.nbytes_max = __cpu_to_le32(2048),
.flags = __cpu_to_le32(CE_ATTR_FLAGS),
.reserved = __cpu_to_le32(0),
},
/* CE2: target->host WMI */
{
.pipenum = __cpu_to_le32(2),
.pipedir = __cpu_to_le32(PIPEDIR_IN),
.nentries = __cpu_to_le32(32),
.nbytes_max = __cpu_to_le32(2048),
.flags = __cpu_to_le32(CE_ATTR_FLAGS),
.reserved = __cpu_to_le32(0),
},
/* CE3: host->target WMI */
{
.pipenum = __cpu_to_le32(3),
.pipedir = __cpu_to_le32(PIPEDIR_OUT),
.nentries = __cpu_to_le32(32),
.nbytes_max = __cpu_to_le32(2048),
.flags = __cpu_to_le32(CE_ATTR_FLAGS),
.reserved = __cpu_to_le32(0),
},
/* CE4: host->target HTT */
{
.pipenum = __cpu_to_le32(4),
.pipedir = __cpu_to_le32(PIPEDIR_OUT),
.nentries = __cpu_to_le32(256),
.nbytes_max = __cpu_to_le32(256),
.flags = __cpu_to_le32(CE_ATTR_FLAGS | CE_ATTR_DIS_INTR),
.reserved = __cpu_to_le32(0),
},
/* CE5: target->host Pktlog */
{
.pipenum = __cpu_to_le32(5),
.pipedir = __cpu_to_le32(PIPEDIR_IN),
.nentries = __cpu_to_le32(32),
.nbytes_max = __cpu_to_le32(2048),
.flags = __cpu_to_le32(CE_ATTR_FLAGS),
.reserved = __cpu_to_le32(0),
},
/* CE6: Reserved for target autonomous hif_memcpy */
{
.pipenum = __cpu_to_le32(6),
.pipedir = __cpu_to_le32(PIPEDIR_INOUT),
.nentries = __cpu_to_le32(32),
.nbytes_max = __cpu_to_le32(16384),
.flags = __cpu_to_le32(CE_ATTR_FLAGS),
.reserved = __cpu_to_le32(0),
},
/* CE7 used only by Host */
{
.pipenum = __cpu_to_le32(7),
.pipedir = __cpu_to_le32(PIPEDIR_OUT),
.nentries = __cpu_to_le32(32),
.nbytes_max = __cpu_to_le32(2048),
.flags = __cpu_to_le32(0x2000),
.reserved = __cpu_to_le32(0),
},
/* CE8 target->host used only by IPA */
{
.pipenum = __cpu_to_le32(8),
.pipedir = __cpu_to_le32(PIPEDIR_INOUT),
.nentries = __cpu_to_le32(32),
.nbytes_max = __cpu_to_le32(16384),
.flags = __cpu_to_le32(CE_ATTR_FLAGS),
.reserved = __cpu_to_le32(0),
},
};
/* Map from service/endpoint to Copy Engine for IPQ5018.
* This table is derived from the CE TABLE, above.
* It is passed to the Target at startup for use by firmware.
*/
const struct service_to_pipe ath11k_target_service_to_ce_map_wlan_ipq5018[] = {
{
.service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_WMI_DATA_VO),
.pipedir = __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
.pipenum = __cpu_to_le32(3),
},
{
.service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_WMI_DATA_VO),
.pipedir = __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
.pipenum = __cpu_to_le32(2),
},
{
.service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_WMI_DATA_BK),
.pipedir = __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
.pipenum = __cpu_to_le32(3),
},
{
.service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_WMI_DATA_BK),
.pipedir = __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
.pipenum = __cpu_to_le32(2),
},
{
.service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_WMI_DATA_BE),
.pipedir = __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
.pipenum = __cpu_to_le32(3),
},
{
.service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_WMI_DATA_BE),
.pipedir = __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
.pipenum = __cpu_to_le32(2),
},
{
.service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_WMI_DATA_VI),
.pipedir = __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
.pipenum = __cpu_to_le32(3),
},
{
.service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_WMI_DATA_VI),
.pipedir = __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
.pipenum = __cpu_to_le32(2),
},
{
.service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_WMI_CONTROL),
.pipedir = __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
.pipenum = __cpu_to_le32(3),
},
{
.service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_WMI_CONTROL),
.pipedir = __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
.pipenum = __cpu_to_le32(2),
},
{
.service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_RSVD_CTRL),
.pipedir = __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
.pipenum = __cpu_to_le32(0),
},
{
.service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_RSVD_CTRL),
.pipedir = __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
.pipenum = __cpu_to_le32(1),
},
{
.service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_TEST_RAW_STREAMS),
.pipedir = __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
.pipenum = __cpu_to_le32(0),
},
{
.service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_TEST_RAW_STREAMS),
.pipedir = __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
.pipenum = __cpu_to_le32(1),
},
{
.service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_HTT_DATA_MSG),
.pipedir = __cpu_to_le32(PIPEDIR_OUT), /* out = UL = host -> target */
.pipenum = __cpu_to_le32(4),
},
{
.service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_HTT_DATA_MSG),
.pipedir = __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
.pipenum = __cpu_to_le32(1),
},
{
.service_id = __cpu_to_le32(ATH11K_HTC_SVC_ID_PKT_LOG),
.pipedir = __cpu_to_le32(PIPEDIR_IN), /* in = DL = target -> host */
.pipenum = __cpu_to_le32(5),
},
/* (Additions here) */
{ /* terminator entry */ }
};
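The service-to-pipe table above is scanned linearly to resolve an HTC service (in a given direction) to a Copy Engine pipe. A minimal userspace sketch of that lookup, using simplified stand-in types (`svc_entry` and `svc_to_pipe` are illustrative names, not the driver's):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the driver's service_to_pipe entries
 * (illustrative, not the actual ath11k types or service IDs). */
struct svc_entry {
	unsigned int service_id;
	int ul;              /* 1 = host->target (PIPEDIR_OUT), 0 = target->host */
	unsigned int pipenum;
};

static const struct svc_entry map[] = {
	{ .service_id = 1, .ul = 1, .pipenum = 3 }, /* e.g. WMI control, UL */
	{ .service_id = 1, .ul = 0, .pipenum = 2 }, /* WMI control, DL */
	{ .service_id = 2, .ul = 1, .pipenum = 4 }, /* HTT data, UL */
	{ .service_id = 2, .ul = 0, .pipenum = 1 }, /* HTT data, DL */
};

/* Linear scan of the table, as done against the map above. */
static int svc_to_pipe(unsigned int service_id, int ul, unsigned int *pipenum)
{
	size_t i;

	for (i = 0; i < sizeof(map) / sizeof(map[0]); i++) {
		if (map[i].service_id == service_id && map[i].ul == ul) {
			*pipenum = map[i].pipenum;
			return 0;
		}
	}
	return -1; /* no mapping for this service/direction */
}
```

Each service appears once per direction; the terminator entry marks the end of the real table when it is handed to firmware.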
const struct ce_ie_addr ath11k_ce_ie_addr_ipq8074 = {
.ie1_reg_addr = CE_HOST_IE_ADDRESS,
.ie2_reg_addr = CE_HOST_IE_2_ADDRESS,
.ie3_reg_addr = CE_HOST_IE_3_ADDRESS,
};
const struct ce_ie_addr ath11k_ce_ie_addr_ipq5018 = {
.ie1_reg_addr = CE_HOST_IPQ5018_IE_ADDRESS - HAL_IPQ5018_CE_WFSS_REG_BASE,
.ie2_reg_addr = CE_HOST_IPQ5018_IE_2_ADDRESS - HAL_IPQ5018_CE_WFSS_REG_BASE,
.ie3_reg_addr = CE_HOST_IPQ5018_IE_3_ADDRESS - HAL_IPQ5018_CE_WFSS_REG_BASE,
};
const struct ce_remap ath11k_ce_remap_ipq5018 = {
.base = HAL_IPQ5018_CE_WFSS_REG_BASE,
.size = HAL_IPQ5018_CE_SIZE,
};
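On IPQ5018 the CE registers sit in a separately remapped window, so the tables above store offsets computed as "absolute address minus window base". A small sketch of that arithmetic (the constant value here is illustrative; the real one is HAL_IPQ5018_CE_WFSS_REG_BASE from the register map):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative base only; the driver uses HAL_IPQ5018_CE_WFSS_REG_BASE. */
#define WFSS_REG_BASE 0x08400000u

/* Convert an absolute CE register address into an offset within the
 * remapped window, mirroring the "addr - base" pattern used in
 * ath11k_ce_ie_addr_ipq5018 and ipq5018_regs above. */
static uint32_t ce_reg_offset(uint32_t abs_addr)
{
	return abs_addr - WFSS_REG_BASE;
}
```

The offset is then applied on top of the ioremap'd base described by `ath11k_ce_remap_ipq5018`.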
const struct ath11k_hw_regs ipq8074_regs = {
/* SW2TCL(x) R0 ring configuration address */
.hal_tcl1_ring_base_lsb = 0x00000510,
@@ -2437,6 +2729,85 @@ static const struct ath11k_hw_tcl2wbm_rbm_map ath11k_hw_tcl2wbm_rbm_map_wcn6750[
},
};
const struct ath11k_hw_regs ipq5018_regs = {
/* SW2TCL(x) R0 ring configuration address */
.hal_tcl1_ring_base_lsb = 0x00000694,
.hal_tcl1_ring_base_msb = 0x00000698,
.hal_tcl1_ring_id = 0x0000069c,
.hal_tcl1_ring_misc = 0x000006a4,
.hal_tcl1_ring_tp_addr_lsb = 0x000006b0,
.hal_tcl1_ring_tp_addr_msb = 0x000006b4,
.hal_tcl1_ring_consumer_int_setup_ix0 = 0x000006c4,
.hal_tcl1_ring_consumer_int_setup_ix1 = 0x000006c8,
.hal_tcl1_ring_msi1_base_lsb = 0x000006dc,
.hal_tcl1_ring_msi1_base_msb = 0x000006e0,
.hal_tcl1_ring_msi1_data = 0x000006e4,
.hal_tcl2_ring_base_lsb = 0x000006ec,
.hal_tcl_ring_base_lsb = 0x0000079c,
/* TCL STATUS ring address */
.hal_tcl_status_ring_base_lsb = 0x000008a4,
/* REO2SW(x) R0 ring configuration address */
.hal_reo1_ring_base_lsb = 0x000001ec,
.hal_reo1_ring_base_msb = 0x000001f0,
.hal_reo1_ring_id = 0x000001f4,
.hal_reo1_ring_misc = 0x000001fc,
.hal_reo1_ring_hp_addr_lsb = 0x00000200,
.hal_reo1_ring_hp_addr_msb = 0x00000204,
.hal_reo1_ring_producer_int_setup = 0x00000210,
.hal_reo1_ring_msi1_base_lsb = 0x00000234,
.hal_reo1_ring_msi1_base_msb = 0x00000238,
.hal_reo1_ring_msi1_data = 0x0000023c,
.hal_reo2_ring_base_lsb = 0x00000244,
.hal_reo1_aging_thresh_ix_0 = 0x00000564,
.hal_reo1_aging_thresh_ix_1 = 0x00000568,
.hal_reo1_aging_thresh_ix_2 = 0x0000056c,
.hal_reo1_aging_thresh_ix_3 = 0x00000570,
/* REO2SW(x) R2 ring pointers (head/tail) address */
.hal_reo1_ring_hp = 0x00003028,
.hal_reo1_ring_tp = 0x0000302c,
.hal_reo2_ring_hp = 0x00003030,
/* REO2TCL R0 ring configuration address */
.hal_reo_tcl_ring_base_lsb = 0x000003fc,
.hal_reo_tcl_ring_hp = 0x00003058,
/* SW2REO ring address */
.hal_sw2reo_ring_base_lsb = 0x0000013c,
.hal_sw2reo_ring_hp = 0x00003018,
/* REO CMD ring address */
.hal_reo_cmd_ring_base_lsb = 0x000000e4,
.hal_reo_cmd_ring_hp = 0x00003010,
/* REO status address */
.hal_reo_status_ring_base_lsb = 0x00000504,
.hal_reo_status_hp = 0x00003070,
/* WCSS relative address */
.hal_seq_wcss_umac_ce0_src_reg = 0x08400000
- HAL_IPQ5018_CE_WFSS_REG_BASE,
.hal_seq_wcss_umac_ce0_dst_reg = 0x08401000
- HAL_IPQ5018_CE_WFSS_REG_BASE,
.hal_seq_wcss_umac_ce1_src_reg = 0x08402000
- HAL_IPQ5018_CE_WFSS_REG_BASE,
.hal_seq_wcss_umac_ce1_dst_reg = 0x08403000
- HAL_IPQ5018_CE_WFSS_REG_BASE,
/* WBM Idle address */
.hal_wbm_idle_link_ring_base_lsb = 0x00000874,
.hal_wbm_idle_link_ring_misc = 0x00000884,
/* SW2WBM release address */
.hal_wbm_release_ring_base_lsb = 0x000001ec,
/* WBM2SW release address */
.hal_wbm0_release_ring_base_lsb = 0x00000924,
.hal_wbm1_release_ring_base_lsb = 0x0000097c,
};
const struct ath11k_hw_hal_params ath11k_hw_hal_params_ipq8074 = {
.rx_buf_rbm = HAL_RX_BUF_RBM_SW3_BM,
.tcl2wbm_rbm_map = ath11k_hw_tcl2wbm_rbm_map_ipq8074,


@@ -80,6 +80,8 @@
#define ATH11K_M3_FILE "m3.bin"
#define ATH11K_REGDB_FILE_NAME "regdb.bin"
#define ATH11K_CE_OFFSET(ab) (ab->mem_ce - ab->mem)
enum ath11k_hw_rate_cck {
ATH11K_HW_RATE_CCK_LP_11M = 0,
ATH11K_HW_RATE_CCK_LP_5_5M,
@@ -158,6 +160,8 @@ struct ath11k_hw_params {
u32 target_ce_count;
const struct service_to_pipe *svc_to_ce_map;
u32 svc_to_ce_map_len;
const struct ce_ie_addr *ce_ie_addr;
const struct ce_remap *ce_remap;
bool single_pdev_only;
@@ -220,6 +224,7 @@ struct ath11k_hw_params {
u32 tx_ring_size;
bool smp2p_wow_exit;
bool support_fw_mac_sequence;
bool ftm_responder;
};
struct ath11k_hw_ops {
@@ -271,12 +276,18 @@ extern const struct ath11k_hw_ops qca6390_ops;
extern const struct ath11k_hw_ops qcn9074_ops;
extern const struct ath11k_hw_ops wcn6855_ops;
extern const struct ath11k_hw_ops wcn6750_ops;
extern const struct ath11k_hw_ops ipq5018_ops;
extern const struct ath11k_hw_ring_mask ath11k_hw_ring_mask_ipq8074;
extern const struct ath11k_hw_ring_mask ath11k_hw_ring_mask_qca6390;
extern const struct ath11k_hw_ring_mask ath11k_hw_ring_mask_qcn9074;
extern const struct ath11k_hw_ring_mask ath11k_hw_ring_mask_wcn6750;
extern const struct ce_ie_addr ath11k_ce_ie_addr_ipq8074;
extern const struct ce_ie_addr ath11k_ce_ie_addr_ipq5018;
extern const struct ce_remap ath11k_ce_remap_ipq5018;
extern const struct ath11k_hw_hal_params ath11k_hw_hal_params_ipq8074;
extern const struct ath11k_hw_hal_params ath11k_hw_hal_params_qca6390;
extern const struct ath11k_hw_hal_params ath11k_hw_hal_params_wcn6750;
@@ -406,6 +417,7 @@ extern const struct ath11k_hw_regs qca6390_regs;
extern const struct ath11k_hw_regs qcn9074_regs;
extern const struct ath11k_hw_regs wcn6855_regs;
extern const struct ath11k_hw_regs wcn6750_regs;
extern const struct ath11k_hw_regs ipq5018_regs;
static inline const char *ath11k_bd_ie_type_str(enum ath11k_bd_ie_type type)
{


@@ -96,6 +96,7 @@ static const struct ieee80211_channel ath11k_5ghz_channels[] = {
CHAN5G(165, 5825, 0),
CHAN5G(169, 5845, 0),
CHAN5G(173, 5865, 0),
CHAN5G(177, 5885, 0),
};
static const struct ieee80211_channel ath11k_6ghz_channels[] = {
@@ -3110,7 +3111,7 @@ static void ath11k_mac_op_bss_info_changed(struct ieee80211_hw *hw,
u16 bitrate;
int ret = 0;
u8 rateidx;
u32 rate;
u32 rate, param;
u32 ipv4_cnt;
mutex_lock(&ar->conf_mutex);
@@ -3412,6 +3413,20 @@ static void ath11k_mac_op_bss_info_changed(struct ieee80211_hw *hw,
}
}
if (changed & BSS_CHANGED_FTM_RESPONDER &&
arvif->ftm_responder != info->ftm_responder &&
ar->ab->hw_params.ftm_responder &&
(vif->type == NL80211_IFTYPE_AP ||
vif->type == NL80211_IFTYPE_MESH_POINT)) {
arvif->ftm_responder = info->ftm_responder;
param = WMI_VDEV_PARAM_ENABLE_DISABLE_RTT_RESPONDER_ROLE;
ret = ath11k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id, param,
arvif->ftm_responder);
if (ret)
ath11k_warn(ar->ab, "Failed to set ftm responder %i: %d\n",
arvif->vdev_id, ret);
}
if (changed & BSS_CHANGED_FILS_DISCOVERY ||
changed & BSS_CHANGED_UNSOL_BCAST_PROBE_RESP)
ath11k_mac_fils_discovery(arvif, info);
@@ -3612,7 +3627,7 @@ static int ath11k_mac_op_hw_scan(struct ieee80211_hw *hw,
struct ath11k *ar = hw->priv;
struct ath11k_vif *arvif = ath11k_vif_to_arvif(vif);
struct cfg80211_scan_request *req = &hw_req->req;
struct scan_req_params arg;
struct scan_req_params *arg = NULL;
int ret = 0;
int i;
u32 scan_timeout;
@@ -3640,72 +3655,78 @@ static int ath11k_mac_op_hw_scan(struct ieee80211_hw *hw,
if (ret)
goto exit;
memset(&arg, 0, sizeof(arg));
ath11k_wmi_start_scan_init(ar, &arg);
arg.vdev_id = arvif->vdev_id;
arg.scan_id = ATH11K_SCAN_ID;
arg = kzalloc(sizeof(*arg), GFP_KERNEL);
if (!arg) {
ret = -ENOMEM;
goto exit;
}
ath11k_wmi_start_scan_init(ar, arg);
arg->vdev_id = arvif->vdev_id;
arg->scan_id = ATH11K_SCAN_ID;
if (req->ie_len) {
arg.extraie.ptr = kmemdup(req->ie, req->ie_len, GFP_KERNEL);
if (!arg.extraie.ptr) {
arg->extraie.ptr = kmemdup(req->ie, req->ie_len, GFP_KERNEL);
if (!arg->extraie.ptr) {
ret = -ENOMEM;
goto exit;
}
arg.extraie.len = req->ie_len;
arg->extraie.len = req->ie_len;
}
if (req->n_ssids) {
arg.num_ssids = req->n_ssids;
for (i = 0; i < arg.num_ssids; i++) {
arg.ssid[i].length = req->ssids[i].ssid_len;
memcpy(&arg.ssid[i].ssid, req->ssids[i].ssid,
arg->num_ssids = req->n_ssids;
for (i = 0; i < arg->num_ssids; i++) {
arg->ssid[i].length = req->ssids[i].ssid_len;
memcpy(&arg->ssid[i].ssid, req->ssids[i].ssid,
req->ssids[i].ssid_len);
}
} else {
arg.scan_flags |= WMI_SCAN_FLAG_PASSIVE;
arg->scan_flags |= WMI_SCAN_FLAG_PASSIVE;
}
if (req->n_channels) {
arg.num_chan = req->n_channels;
arg.chan_list = kcalloc(arg.num_chan, sizeof(*arg.chan_list),
GFP_KERNEL);
arg->num_chan = req->n_channels;
arg->chan_list = kcalloc(arg->num_chan, sizeof(*arg->chan_list),
GFP_KERNEL);
if (!arg.chan_list) {
if (!arg->chan_list) {
ret = -ENOMEM;
goto exit;
}
for (i = 0; i < arg.num_chan; i++)
arg.chan_list[i] = req->channels[i]->center_freq;
for (i = 0; i < arg->num_chan; i++)
arg->chan_list[i] = req->channels[i]->center_freq;
}
if (req->flags & NL80211_SCAN_FLAG_RANDOM_ADDR) {
arg.scan_f_add_spoofed_mac_in_probe = 1;
ether_addr_copy(arg.mac_addr.addr, req->mac_addr);
ether_addr_copy(arg.mac_mask.addr, req->mac_addr_mask);
arg->scan_f_add_spoofed_mac_in_probe = 1;
ether_addr_copy(arg->mac_addr.addr, req->mac_addr);
ether_addr_copy(arg->mac_mask.addr, req->mac_addr_mask);
}
/* if duration is set, default dwell times will be overwritten */
if (req->duration) {
arg.dwell_time_active = req->duration;
arg.dwell_time_active_2g = req->duration;
arg.dwell_time_active_6g = req->duration;
arg.dwell_time_passive = req->duration;
arg.dwell_time_passive_6g = req->duration;
arg.burst_duration = req->duration;
arg->dwell_time_active = req->duration;
arg->dwell_time_active_2g = req->duration;
arg->dwell_time_active_6g = req->duration;
arg->dwell_time_passive = req->duration;
arg->dwell_time_passive_6g = req->duration;
arg->burst_duration = req->duration;
scan_timeout = min_t(u32, arg.max_rest_time *
(arg.num_chan - 1) + (req->duration +
scan_timeout = min_t(u32, arg->max_rest_time *
(arg->num_chan - 1) + (req->duration +
ATH11K_SCAN_CHANNEL_SWITCH_WMI_EVT_OVERHEAD) *
arg.num_chan, arg.max_scan_time);
arg->num_chan, arg->max_scan_time);
} else {
scan_timeout = arg.max_scan_time;
scan_timeout = arg->max_scan_time;
}
/* Add a margin to account for event/command processing */
scan_timeout += ATH11K_MAC_SCAN_CMD_EVT_OVERHEAD;
ret = ath11k_start_scan(ar, &arg);
ret = ath11k_start_scan(ar, arg);
if (ret) {
ath11k_warn(ar->ab, "failed to start hw scan: %d\n", ret);
spin_lock_bh(&ar->data_lock);
@@ -3717,10 +3738,11 @@ static int ath11k_mac_op_hw_scan(struct ieee80211_hw *hw,
msecs_to_jiffies(scan_timeout));
exit:
kfree(arg.chan_list);
if (req->ie_len)
kfree(arg.extraie.ptr);
if (arg) {
kfree(arg->chan_list);
kfree(arg->extraie.ptr);
kfree(arg);
}
mutex_unlock(&ar->conf_mutex);
@@ -9106,6 +9128,10 @@ static int __ath11k_mac_register(struct ath11k *ar)
wiphy_ext_feature_set(ar->hw->wiphy,
NL80211_EXT_FEATURE_SET_SCAN_DWELL);
if (ab->hw_params.ftm_responder)
wiphy_ext_feature_set(ar->hw->wiphy,
NL80211_EXT_FEATURE_ENABLE_FTM_RESPONDER);
ath11k_reg_init(ar);
if (!test_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags)) {


@@ -543,6 +543,8 @@ static int ath11k_pci_claim(struct ath11k_pci *ab_pci, struct pci_dev *pdev)
goto clear_master;
}
ab->mem_ce = ab->mem;
ath11k_dbg(ab, ATH11K_DBG_BOOT, "boot pci_mem 0x%pK\n", ab->mem);
return 0;


@@ -1073,6 +1073,7 @@ enum wmi_tlv_vdev_param {
WMI_VDEV_PARAM_ENABLE_BCAST_PROBE_RESPONSE,
WMI_VDEV_PARAM_FILS_MAX_CHANNEL_GUARD_TIME,
WMI_VDEV_PARAM_HE_LTF = 0x74,
WMI_VDEV_PARAM_ENABLE_DISABLE_RTT_RESPONDER_ROLE = 0x7d,
WMI_VDEV_PARAM_BA_MODE = 0x7e,
WMI_VDEV_PARAM_AUTORATE_MISC_CFG = 0x80,
WMI_VDEV_PARAM_SET_HE_SOUNDING_MODE = 0x87,


@@ -0,0 +1,34 @@
# SPDX-License-Identifier: BSD-3-Clause-Clear
config ATH12K
tristate "Qualcomm Technologies Wi-Fi 7 support (ath12k)"
depends on MAC80211 && HAS_DMA && PCI
depends on CRYPTO_MICHAEL_MIC
select QCOM_QMI_HELPERS
select MHI_BUS
select QRTR
select QRTR_MHI
help
Enable support for Qualcomm Technologies Wi-Fi 7 (IEEE
802.11be) family of chipsets, for example WCN7850 and
QCN9274.
If you choose to build a module, it'll be called ath12k.
config ATH12K_DEBUG
bool "ath12k debugging"
depends on ATH12K
help
Enable debug support, for example debug messages which must
be enabled separately using the debug_mask module parameter.
If unsure, say Y to make it easier to debug problems. But if
you want optimal performance choose N.
config ATH12K_TRACING
bool "ath12k tracing support"
depends on ATH12K && EVENT_TRACING
help
Enable ath12k tracing infrastructure.
If unsure, say Y to make it easier to debug problems. But if
you want optimal performance choose N.


@@ -0,0 +1,27 @@
# SPDX-License-Identifier: BSD-3-Clause-Clear
obj-$(CONFIG_ATH12K) += ath12k.o
ath12k-y += core.o \
hal.o \
hal_tx.o \
hal_rx.o \
wmi.o \
mac.o \
reg.o \
htc.o \
qmi.o \
dp.o \
dp_tx.o \
dp_rx.o \
debug.o \
ce.o \
peer.o \
dbring.o \
hw.o \
mhi.o \
pci.o \
dp_mon.o
ath12k-$(CONFIG_ATH12K_TRACING) += trace.o
# for tracing framework to find trace.h
CFLAGS_trace.o := -I$(src)


@@ -0,0 +1,964 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include "dp_rx.h"
#include "debug.h"
#include "hif.h"
const struct ce_attr ath12k_host_ce_config_qcn9274[] = {
/* CE0: host->target HTC control and raw streams */
{
.flags = CE_ATTR_FLAGS,
.src_nentries = 16,
.src_sz_max = 2048,
.dest_nentries = 0,
},
/* CE1: target->host HTT + HTC control */
{
.flags = CE_ATTR_FLAGS,
.src_nentries = 0,
.src_sz_max = 2048,
.dest_nentries = 512,
.recv_cb = ath12k_htc_rx_completion_handler,
},
/* CE2: target->host WMI */
{
.flags = CE_ATTR_FLAGS,
.src_nentries = 0,
.src_sz_max = 2048,
.dest_nentries = 128,
.recv_cb = ath12k_htc_rx_completion_handler,
},
/* CE3: host->target WMI (mac0) */
{
.flags = CE_ATTR_FLAGS,
.src_nentries = 32,
.src_sz_max = 2048,
.dest_nentries = 0,
},
/* CE4: host->target HTT */
{
.flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
.src_nentries = 2048,
.src_sz_max = 256,
.dest_nentries = 0,
},
/* CE5: target->host pktlog */
{
.flags = CE_ATTR_FLAGS,
.src_nentries = 0,
.src_sz_max = 2048,
.dest_nentries = 512,
.recv_cb = ath12k_dp_htt_htc_t2h_msg_handler,
},
/* CE6: target autonomous hif_memcpy */
{
.flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
.src_nentries = 0,
.src_sz_max = 0,
.dest_nentries = 0,
},
/* CE7: host->target WMI (mac1) */
{
.flags = CE_ATTR_FLAGS,
.src_nentries = 32,
.src_sz_max = 2048,
.dest_nentries = 0,
},
/* CE8: target autonomous hif_memcpy */
{
.flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
.src_nentries = 0,
.src_sz_max = 0,
.dest_nentries = 0,
},
/* CE9: MHI */
{
.flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
.src_nentries = 0,
.src_sz_max = 0,
.dest_nentries = 0,
},
/* CE10: MHI */
{
.flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
.src_nentries = 0,
.src_sz_max = 0,
.dest_nentries = 0,
},
/* CE11: MHI */
{
.flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
.src_nentries = 0,
.src_sz_max = 0,
.dest_nentries = 0,
},
/* CE12: CV Prefetch */
{
.flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
.src_nentries = 0,
.src_sz_max = 0,
.dest_nentries = 0,
},
/* CE13: CV Prefetch */
{
.flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
.src_nentries = 0,
.src_sz_max = 0,
.dest_nentries = 0,
},
/* CE14: target->host dbg log */
{
.flags = CE_ATTR_FLAGS,
.src_nentries = 0,
.src_sz_max = 2048,
.dest_nentries = 512,
.recv_cb = ath12k_htc_rx_completion_handler,
},
/* CE15: reserved for future use */
{
.flags = (CE_ATTR_FLAGS | CE_ATTR_DIS_INTR),
.src_nentries = 0,
.src_sz_max = 0,
.dest_nentries = 0,
},
};
const struct ce_attr ath12k_host_ce_config_wcn7850[] = {
/* CE0: host->target HTC control and raw streams */
{
.flags = CE_ATTR_FLAGS,
.src_nentries = 16,
.src_sz_max = 2048,
.dest_nentries = 0,
},
/* CE1: target->host HTT + HTC control */
{
.flags = CE_ATTR_FLAGS,
.src_nentries = 0,
.src_sz_max = 2048,
.dest_nentries = 512,
.recv_cb = ath12k_htc_rx_completion_handler,
},
/* CE2: target->host WMI */
{
.flags = CE_ATTR_FLAGS,
.src_nentries = 0,
.src_sz_max = 2048,
.dest_nentries = 64,
.recv_cb = ath12k_htc_rx_completion_handler,
},
/* CE3: host->target WMI (mac0) */
{
.flags = CE_ATTR_FLAGS,
.src_nentries = 32,
.src_sz_max = 2048,
.dest_nentries = 0,
},
/* CE4: host->target HTT */
{
.flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
.src_nentries = 2048,
.src_sz_max = 256,
.dest_nentries = 0,
},
/* CE5: target->host pktlog */
{
.flags = CE_ATTR_FLAGS,
.src_nentries = 0,
.src_sz_max = 0,
.dest_nentries = 0,
},
/* CE6: target autonomous hif_memcpy */
{
.flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
.src_nentries = 0,
.src_sz_max = 0,
.dest_nentries = 0,
},
/* CE7: host->target WMI (mac1) */
{
.flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
.src_nentries = 0,
.src_sz_max = 2048,
.dest_nentries = 0,
},
/* CE8: target autonomous hif_memcpy */
{
.flags = CE_ATTR_FLAGS | CE_ATTR_DIS_INTR,
.src_nentries = 0,
.src_sz_max = 0,
.dest_nentries = 0,
},
};
static int ath12k_ce_rx_buf_enqueue_pipe(struct ath12k_ce_pipe *pipe,
struct sk_buff *skb, dma_addr_t paddr)
{
struct ath12k_base *ab = pipe->ab;
struct ath12k_ce_ring *ring = pipe->dest_ring;
struct hal_srng *srng;
unsigned int write_index;
unsigned int nentries_mask = ring->nentries_mask;
struct hal_ce_srng_dest_desc *desc;
int ret;
lockdep_assert_held(&ab->ce.ce_lock);
write_index = ring->write_index;
srng = &ab->hal.srng_list[ring->hal_ring_id];
spin_lock_bh(&srng->lock);
ath12k_hal_srng_access_begin(ab, srng);
if (unlikely(ath12k_hal_srng_src_num_free(ab, srng, false) < 1)) {
ret = -ENOSPC;
goto exit;
}
desc = ath12k_hal_srng_src_get_next_entry(ab, srng);
if (!desc) {
ret = -ENOSPC;
goto exit;
}
ath12k_hal_ce_dst_set_desc(desc, paddr);
ring->skb[write_index] = skb;
write_index = CE_RING_IDX_INCR(nentries_mask, write_index);
ring->write_index = write_index;
pipe->rx_buf_needed--;
ret = 0;
exit:
ath12k_hal_srng_access_end(ab, srng);
spin_unlock_bh(&srng->lock);
return ret;
}
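The write index above advances with CE_RING_IDX_INCR, which relies on CE rings being sized to a power of two so the wraparound is a mask rather than a modulo. A standalone sketch of that increment (the helper name here is illustrative):

```c
#include <assert.h>

/* CE rings hold a power-of-two number of entries, so nentries_mask is
 * nentries - 1 and the next index wraps with a single AND; this mirrors
 * the CE_RING_IDX_INCR pattern used in the enqueue path above. */
static unsigned int ring_idx_incr(unsigned int nentries_mask, unsigned int idx)
{
	return (idx + 1) & nentries_mask;
}
```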
static int ath12k_ce_rx_post_pipe(struct ath12k_ce_pipe *pipe)
{
struct ath12k_base *ab = pipe->ab;
struct sk_buff *skb;
dma_addr_t paddr;
int ret = 0;
if (!(pipe->dest_ring || pipe->status_ring))
return 0;
spin_lock_bh(&ab->ce.ce_lock);
while (pipe->rx_buf_needed) {
skb = dev_alloc_skb(pipe->buf_sz);
if (!skb) {
ret = -ENOMEM;
goto exit;
}
WARN_ON_ONCE(!IS_ALIGNED((unsigned long)skb->data, 4));
paddr = dma_map_single(ab->dev, skb->data,
skb->len + skb_tailroom(skb),
DMA_FROM_DEVICE);
if (unlikely(dma_mapping_error(ab->dev, paddr))) {
ath12k_warn(ab, "failed to dma map ce rx buf\n");
dev_kfree_skb_any(skb);
ret = -EIO;
goto exit;
}
ATH12K_SKB_RXCB(skb)->paddr = paddr;
ret = ath12k_ce_rx_buf_enqueue_pipe(pipe, skb, paddr);
if (ret) {
ath12k_warn(ab, "failed to enqueue rx buf: %d\n", ret);
dma_unmap_single(ab->dev, paddr,
skb->len + skb_tailroom(skb),
DMA_FROM_DEVICE);
dev_kfree_skb_any(skb);
goto exit;
}
}
exit:
spin_unlock_bh(&ab->ce.ce_lock);
return ret;
}
static int ath12k_ce_completed_recv_next(struct ath12k_ce_pipe *pipe,
struct sk_buff **skb, int *nbytes)
{
struct ath12k_base *ab = pipe->ab;
struct hal_ce_srng_dst_status_desc *desc;
struct hal_srng *srng;
unsigned int sw_index;
unsigned int nentries_mask;
int ret = 0;
spin_lock_bh(&ab->ce.ce_lock);
sw_index = pipe->dest_ring->sw_index;
nentries_mask = pipe->dest_ring->nentries_mask;
srng = &ab->hal.srng_list[pipe->status_ring->hal_ring_id];
spin_lock_bh(&srng->lock);
ath12k_hal_srng_access_begin(ab, srng);
desc = ath12k_hal_srng_dst_get_next_entry(ab, srng);
if (!desc) {
ret = -EIO;
goto err;
}
*nbytes = ath12k_hal_ce_dst_status_get_length(desc);
if (*nbytes == 0) {
ret = -EIO;
goto err;
}
*skb = pipe->dest_ring->skb[sw_index];
pipe->dest_ring->skb[sw_index] = NULL;
sw_index = CE_RING_IDX_INCR(nentries_mask, sw_index);
pipe->dest_ring->sw_index = sw_index;
pipe->rx_buf_needed++;
err:
ath12k_hal_srng_access_end(ab, srng);
spin_unlock_bh(&srng->lock);
spin_unlock_bh(&ab->ce.ce_lock);
return ret;
}
static void ath12k_ce_recv_process_cb(struct ath12k_ce_pipe *pipe)
{
struct ath12k_base *ab = pipe->ab;
struct sk_buff *skb;
struct sk_buff_head list;
unsigned int nbytes, max_nbytes;
int ret;
__skb_queue_head_init(&list);
while (ath12k_ce_completed_recv_next(pipe, &skb, &nbytes) == 0) {
max_nbytes = skb->len + skb_tailroom(skb);
dma_unmap_single(ab->dev, ATH12K_SKB_RXCB(skb)->paddr,
max_nbytes, DMA_FROM_DEVICE);
if (unlikely(max_nbytes < nbytes)) {
ath12k_warn(ab, "rxed more than expected (nbytes %d, max %d)",
nbytes, max_nbytes);
dev_kfree_skb_any(skb);
continue;
}
skb_put(skb, nbytes);
__skb_queue_tail(&list, skb);
}
while ((skb = __skb_dequeue(&list))) {
ath12k_dbg(ab, ATH12K_DBG_AHB, "rx ce pipe %d len %d\n",
pipe->pipe_num, skb->len);
pipe->recv_cb(ab, skb);
}
ret = ath12k_ce_rx_post_pipe(pipe);
if (ret && ret != -ENOSPC) {
ath12k_warn(ab, "failed to post rx buf to pipe: %d err: %d\n",
pipe->pipe_num, ret);
mod_timer(&ab->rx_replenish_retry,
jiffies + ATH12K_CE_RX_POST_RETRY_JIFFIES);
}
}
static struct sk_buff *ath12k_ce_completed_send_next(struct ath12k_ce_pipe *pipe)
{
struct ath12k_base *ab = pipe->ab;
struct hal_ce_srng_src_desc *desc;
struct hal_srng *srng;
unsigned int sw_index;
unsigned int nentries_mask;
struct sk_buff *skb;
spin_lock_bh(&ab->ce.ce_lock);
sw_index = pipe->src_ring->sw_index;
nentries_mask = pipe->src_ring->nentries_mask;
srng = &ab->hal.srng_list[pipe->src_ring->hal_ring_id];
spin_lock_bh(&srng->lock);
ath12k_hal_srng_access_begin(ab, srng);
desc = ath12k_hal_srng_src_reap_next(ab, srng);
if (!desc) {
skb = ERR_PTR(-EIO);
goto err_unlock;
}
skb = pipe->src_ring->skb[sw_index];
pipe->src_ring->skb[sw_index] = NULL;
sw_index = CE_RING_IDX_INCR(nentries_mask, sw_index);
pipe->src_ring->sw_index = sw_index;
err_unlock:
spin_unlock_bh(&srng->lock);
spin_unlock_bh(&ab->ce.ce_lock);
return skb;
}
static void ath12k_ce_send_done_cb(struct ath12k_ce_pipe *pipe)
{
struct ath12k_base *ab = pipe->ab;
struct sk_buff *skb;
while (!IS_ERR(skb = ath12k_ce_completed_send_next(pipe))) {
if (!skb)
continue;
dma_unmap_single(ab->dev, ATH12K_SKB_CB(skb)->paddr, skb->len,
DMA_TO_DEVICE);
dev_kfree_skb_any(skb);
}
}
static void ath12k_ce_srng_msi_ring_params_setup(struct ath12k_base *ab, u32 ce_id,
struct hal_srng_params *ring_params)
{
u32 msi_data_start;
u32 msi_data_count, msi_data_idx;
u32 msi_irq_start;
u32 addr_lo;
u32 addr_hi;
int ret;
ret = ath12k_hif_get_user_msi_vector(ab, "CE",
&msi_data_count, &msi_data_start,
&msi_irq_start);
if (ret)
return;
ath12k_hif_get_msi_address(ab, &addr_lo, &addr_hi);
ath12k_hif_get_ce_msi_idx(ab, ce_id, &msi_data_idx);
ring_params->msi_addr = addr_lo;
ring_params->msi_addr |= (dma_addr_t)(((uint64_t)addr_hi) << 32);
ring_params->msi_data = (msi_data_idx % msi_data_count) + msi_data_start;
ring_params->flags |= HAL_SRNG_FLAGS_MSI_INTR;
}
static int ath12k_ce_init_ring(struct ath12k_base *ab,
struct ath12k_ce_ring *ce_ring,
int ce_id, enum hal_ring_type type)
{
struct hal_srng_params params = { 0 };
int ret;
params.ring_base_paddr = ce_ring->base_addr_ce_space;
params.ring_base_vaddr = ce_ring->base_addr_owner_space;
params.num_entries = ce_ring->nentries;
if (!(CE_ATTR_DIS_INTR & ab->hw_params->host_ce_config[ce_id].flags))
ath12k_ce_srng_msi_ring_params_setup(ab, ce_id, &params);
switch (type) {
case HAL_CE_SRC:
if (!(CE_ATTR_DIS_INTR & ab->hw_params->host_ce_config[ce_id].flags))
params.intr_batch_cntr_thres_entries = 1;
break;
case HAL_CE_DST:
params.max_buffer_len = ab->hw_params->host_ce_config[ce_id].src_sz_max;
if (!(ab->hw_params->host_ce_config[ce_id].flags & CE_ATTR_DIS_INTR)) {
params.intr_timer_thres_us = 1024;
params.flags |= HAL_SRNG_FLAGS_LOW_THRESH_INTR_EN;
params.low_threshold = ce_ring->nentries - 3;
}
break;
case HAL_CE_DST_STATUS:
if (!(ab->hw_params->host_ce_config[ce_id].flags & CE_ATTR_DIS_INTR)) {
params.intr_batch_cntr_thres_entries = 1;
params.intr_timer_thres_us = 0x1000;
}
break;
default:
ath12k_warn(ab, "Invalid CE ring type %d\n", type);
return -EINVAL;
}
/* TODO: Init other params needed by HAL to init the ring */
ret = ath12k_hal_srng_setup(ab, type, ce_id, 0, &params);
if (ret < 0) {
ath12k_warn(ab, "failed to setup srng: %d ring_id %d\n",
ret, ce_id);
return ret;
}
ce_ring->hal_ring_id = ret;
return 0;
}
static struct ath12k_ce_ring *
ath12k_ce_alloc_ring(struct ath12k_base *ab, int nentries, int desc_sz)
{
struct ath12k_ce_ring *ce_ring;
dma_addr_t base_addr;
ce_ring = kzalloc(struct_size(ce_ring, skb, nentries), GFP_KERNEL);
if (!ce_ring)
return ERR_PTR(-ENOMEM);
ce_ring->nentries = nentries;
ce_ring->nentries_mask = nentries - 1;
/* Legacy platforms that do not support cache
* coherent DMA are unsupported
*/
ce_ring->base_addr_owner_space_unaligned =
dma_alloc_coherent(ab->dev,
nentries * desc_sz + CE_DESC_RING_ALIGN,
&base_addr, GFP_KERNEL);
if (!ce_ring->base_addr_owner_space_unaligned) {
kfree(ce_ring);
return ERR_PTR(-ENOMEM);
}
ce_ring->base_addr_ce_space_unaligned = base_addr;
ce_ring->base_addr_owner_space =
PTR_ALIGN(ce_ring->base_addr_owner_space_unaligned,
CE_DESC_RING_ALIGN);
ce_ring->base_addr_ce_space = ALIGN(ce_ring->base_addr_ce_space_unaligned,
CE_DESC_RING_ALIGN);
return ce_ring;
}
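The ring allocation above over-allocates by CE_DESC_RING_ALIGN and then rounds both the CPU and device addresses up with PTR_ALIGN()/ALIGN(). A userspace sketch of that rounding, using a local macro in place of the kernel helpers (ALIGN_UP and ring_base_align are illustrative names):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-in for the kernel's ALIGN()/PTR_ALIGN(): round x up
 * to the next multiple of a, where a is a power of two. */
#define ALIGN_UP(x, a) (((uintptr_t)(x) + ((a) - 1)) & ~((uintptr_t)(a) - 1))

/* Mirrors how the unaligned coherent-DMA base is padded up to
 * CE_DESC_RING_ALIGN in ath12k_ce_alloc_ring above. */
static uintptr_t ring_base_align(uintptr_t unaligned, unsigned int align)
{
	return ALIGN_UP(unaligned, align);
}
```

The same rounding is applied to the owner-space pointer and the CE-space DMA address so descriptors start on the required boundary.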
static int ath12k_ce_alloc_pipe(struct ath12k_base *ab, int ce_id)
{
struct ath12k_ce_pipe *pipe = &ab->ce.ce_pipe[ce_id];
const struct ce_attr *attr = &ab->hw_params->host_ce_config[ce_id];
struct ath12k_ce_ring *ring;
int nentries;
int desc_sz;
pipe->attr_flags = attr->flags;
if (attr->src_nentries) {
pipe->send_cb = ath12k_ce_send_done_cb;
nentries = roundup_pow_of_two(attr->src_nentries);
desc_sz = ath12k_hal_ce_get_desc_size(HAL_CE_DESC_SRC);
ring = ath12k_ce_alloc_ring(ab, nentries, desc_sz);
if (IS_ERR(ring))
return PTR_ERR(ring);
pipe->src_ring = ring;
}
if (attr->dest_nentries) {
pipe->recv_cb = attr->recv_cb;
nentries = roundup_pow_of_two(attr->dest_nentries);
desc_sz = ath12k_hal_ce_get_desc_size(HAL_CE_DESC_DST);
ring = ath12k_ce_alloc_ring(ab, nentries, desc_sz);
if (IS_ERR(ring))
return PTR_ERR(ring);
pipe->dest_ring = ring;
desc_sz = ath12k_hal_ce_get_desc_size(HAL_CE_DESC_DST_STATUS);
ring = ath12k_ce_alloc_ring(ab, nentries, desc_sz);
if (IS_ERR(ring))
return PTR_ERR(ring);
pipe->status_ring = ring;
}
return 0;
}
void ath12k_ce_per_engine_service(struct ath12k_base *ab, u16 ce_id)
{
struct ath12k_ce_pipe *pipe = &ab->ce.ce_pipe[ce_id];
if (pipe->send_cb)
pipe->send_cb(pipe);
if (pipe->recv_cb)
ath12k_ce_recv_process_cb(pipe);
}
void ath12k_ce_poll_send_completed(struct ath12k_base *ab, u8 pipe_id)
{
struct ath12k_ce_pipe *pipe = &ab->ce.ce_pipe[pipe_id];
if ((pipe->attr_flags & CE_ATTR_DIS_INTR) && pipe->send_cb)
pipe->send_cb(pipe);
}
int ath12k_ce_send(struct ath12k_base *ab, struct sk_buff *skb, u8 pipe_id,
u16 transfer_id)
{
struct ath12k_ce_pipe *pipe = &ab->ce.ce_pipe[pipe_id];
struct hal_ce_srng_src_desc *desc;
struct hal_srng *srng;
unsigned int write_index, sw_index;
unsigned int nentries_mask;
int ret = 0;
u8 byte_swap_data = 0;
int num_used;
/* Check if some entries can be regained by handling tx completions,
 * in case the CE has interrupts disabled and the number of used
 * entries is above the defined usage threshold.
 */
if (pipe->attr_flags & CE_ATTR_DIS_INTR) {
spin_lock_bh(&ab->ce.ce_lock);
write_index = pipe->src_ring->write_index;
sw_index = pipe->src_ring->sw_index;
if (write_index >= sw_index)
num_used = write_index - sw_index;
else
num_used = pipe->src_ring->nentries - sw_index +
write_index;
spin_unlock_bh(&ab->ce.ce_lock);
if (num_used > ATH12K_CE_USAGE_THRESHOLD)
ath12k_ce_poll_send_completed(ab, pipe->pipe_num);
}
if (test_bit(ATH12K_FLAG_CRASH_FLUSH, &ab->dev_flags))
return -ESHUTDOWN;
spin_lock_bh(&ab->ce.ce_lock);
write_index = pipe->src_ring->write_index;
nentries_mask = pipe->src_ring->nentries_mask;
srng = &ab->hal.srng_list[pipe->src_ring->hal_ring_id];
spin_lock_bh(&srng->lock);
ath12k_hal_srng_access_begin(ab, srng);
if (unlikely(ath12k_hal_srng_src_num_free(ab, srng, false) < 1)) {
ath12k_hal_srng_access_end(ab, srng);
ret = -ENOBUFS;
goto unlock;
}
desc = ath12k_hal_srng_src_get_next_reaped(ab, srng);
if (!desc) {
ath12k_hal_srng_access_end(ab, srng);
ret = -ENOBUFS;
goto unlock;
}
if (pipe->attr_flags & CE_ATTR_BYTE_SWAP_DATA)
byte_swap_data = 1;
ath12k_hal_ce_src_set_desc(desc, ATH12K_SKB_CB(skb)->paddr,
skb->len, transfer_id, byte_swap_data);
pipe->src_ring->skb[write_index] = skb;
pipe->src_ring->write_index = CE_RING_IDX_INCR(nentries_mask,
write_index);
ath12k_hal_srng_access_end(ab, srng);
unlock:
spin_unlock_bh(&srng->lock);
spin_unlock_bh(&ab->ce.ce_lock);
return ret;
}
static void ath12k_ce_rx_pipe_cleanup(struct ath12k_ce_pipe *pipe)
{
struct ath12k_base *ab = pipe->ab;
struct ath12k_ce_ring *ring = pipe->dest_ring;
struct sk_buff *skb;
int i;
if (!(ring && pipe->buf_sz))
return;
for (i = 0; i < ring->nentries; i++) {
skb = ring->skb[i];
if (!skb)
continue;
ring->skb[i] = NULL;
dma_unmap_single(ab->dev, ATH12K_SKB_RXCB(skb)->paddr,
skb->len + skb_tailroom(skb), DMA_FROM_DEVICE);
dev_kfree_skb_any(skb);
}
}
void ath12k_ce_cleanup_pipes(struct ath12k_base *ab)
{
struct ath12k_ce_pipe *pipe;
int pipe_num;
for (pipe_num = 0; pipe_num < ab->hw_params->ce_count; pipe_num++) {
pipe = &ab->ce.ce_pipe[pipe_num];
ath12k_ce_rx_pipe_cleanup(pipe);
/* Cleanup any src CE's which have interrupts disabled */
ath12k_ce_poll_send_completed(ab, pipe_num);
/* NOTE: Should we also clean up tx buffer in all pipes? */
}
}
void ath12k_ce_rx_post_buf(struct ath12k_base *ab)
{
struct ath12k_ce_pipe *pipe;
int i;
int ret;
for (i = 0; i < ab->hw_params->ce_count; i++) {
pipe = &ab->ce.ce_pipe[i];
ret = ath12k_ce_rx_post_pipe(pipe);
if (ret) {
if (ret == -ENOSPC)
continue;
ath12k_warn(ab, "failed to post rx buf to pipe: %d err: %d\n",
i, ret);
mod_timer(&ab->rx_replenish_retry,
jiffies + ATH12K_CE_RX_POST_RETRY_JIFFIES);
return;
}
}
}
void ath12k_ce_rx_replenish_retry(struct timer_list *t)
{
struct ath12k_base *ab = from_timer(ab, t, rx_replenish_retry);
ath12k_ce_rx_post_buf(ab);
}
static void ath12k_ce_shadow_config(struct ath12k_base *ab)
{
int i;
for (i = 0; i < ab->hw_params->ce_count; i++) {
if (ab->hw_params->host_ce_config[i].src_nentries)
ath12k_hal_srng_update_shadow_config(ab, HAL_CE_SRC, i);
if (ab->hw_params->host_ce_config[i].dest_nentries) {
ath12k_hal_srng_update_shadow_config(ab, HAL_CE_DST, i);
ath12k_hal_srng_update_shadow_config(ab, HAL_CE_DST_STATUS, i);
}
}
}
void ath12k_ce_get_shadow_config(struct ath12k_base *ab,
u32 **shadow_cfg, u32 *shadow_cfg_len)
{
if (!ab->hw_params->supports_shadow_regs)
return;
ath12k_hal_srng_get_shadow_config(ab, shadow_cfg, shadow_cfg_len);
/* shadow is already configured */
if (*shadow_cfg_len)
return;
/* shadow isn't configured yet, so configure it now:
 * non-CE srngs are configured first, then all CE srngs.
 */
ath12k_hal_srng_shadow_config(ab);
ath12k_ce_shadow_config(ab);
/* get the shadow configuration */
ath12k_hal_srng_get_shadow_config(ab, shadow_cfg, shadow_cfg_len);
}
int ath12k_ce_init_pipes(struct ath12k_base *ab)
{
struct ath12k_ce_pipe *pipe;
int i;
int ret;
ath12k_ce_get_shadow_config(ab, &ab->qmi.ce_cfg.shadow_reg_v3,
&ab->qmi.ce_cfg.shadow_reg_v3_len);
for (i = 0; i < ab->hw_params->ce_count; i++) {
pipe = &ab->ce.ce_pipe[i];
if (pipe->src_ring) {
ret = ath12k_ce_init_ring(ab, pipe->src_ring, i,
HAL_CE_SRC);
if (ret) {
ath12k_warn(ab, "failed to init src ring: %d\n",
ret);
/* Should we clear any partial init */
return ret;
}
pipe->src_ring->write_index = 0;
pipe->src_ring->sw_index = 0;
}
if (pipe->dest_ring) {
ret = ath12k_ce_init_ring(ab, pipe->dest_ring, i,
HAL_CE_DST);
if (ret) {
ath12k_warn(ab, "failed to init dest ring: %d\n",
ret);
/* Should we clear any partial init */
return ret;
}
pipe->rx_buf_needed = pipe->dest_ring->nentries ?
pipe->dest_ring->nentries - 2 : 0;
pipe->dest_ring->write_index = 0;
pipe->dest_ring->sw_index = 0;
}
if (pipe->status_ring) {
ret = ath12k_ce_init_ring(ab, pipe->status_ring, i,
HAL_CE_DST_STATUS);
if (ret) {
ath12k_warn(ab, "failed to init dest status ring: %d\n",
ret);
/* Should we clear any partial init */
return ret;
}
pipe->status_ring->write_index = 0;
pipe->status_ring->sw_index = 0;
}
}
return 0;
}
void ath12k_ce_free_pipes(struct ath12k_base *ab)
{
struct ath12k_ce_pipe *pipe;
int desc_sz;
int i;
for (i = 0; i < ab->hw_params->ce_count; i++) {
pipe = &ab->ce.ce_pipe[i];
if (pipe->src_ring) {
desc_sz = ath12k_hal_ce_get_desc_size(HAL_CE_DESC_SRC);
dma_free_coherent(ab->dev,
pipe->src_ring->nentries * desc_sz +
CE_DESC_RING_ALIGN,
pipe->src_ring->base_addr_owner_space,
pipe->src_ring->base_addr_ce_space);
kfree(pipe->src_ring);
pipe->src_ring = NULL;
}
if (pipe->dest_ring) {
desc_sz = ath12k_hal_ce_get_desc_size(HAL_CE_DESC_DST);
dma_free_coherent(ab->dev,
pipe->dest_ring->nentries * desc_sz +
CE_DESC_RING_ALIGN,
pipe->dest_ring->base_addr_owner_space,
pipe->dest_ring->base_addr_ce_space);
kfree(pipe->dest_ring);
pipe->dest_ring = NULL;
}
if (pipe->status_ring) {
desc_sz =
ath12k_hal_ce_get_desc_size(HAL_CE_DESC_DST_STATUS);
dma_free_coherent(ab->dev,
pipe->status_ring->nentries * desc_sz +
CE_DESC_RING_ALIGN,
pipe->status_ring->base_addr_owner_space,
pipe->status_ring->base_addr_ce_space);
kfree(pipe->status_ring);
pipe->status_ring = NULL;
}
}
}
int ath12k_ce_alloc_pipes(struct ath12k_base *ab)
{
struct ath12k_ce_pipe *pipe;
int i;
int ret;
const struct ce_attr *attr;
spin_lock_init(&ab->ce.ce_lock);
for (i = 0; i < ab->hw_params->ce_count; i++) {
attr = &ab->hw_params->host_ce_config[i];
pipe = &ab->ce.ce_pipe[i];
pipe->pipe_num = i;
pipe->ab = ab;
pipe->buf_sz = attr->src_sz_max;
ret = ath12k_ce_alloc_pipe(ab, i);
if (ret) {
/* Free any partially successful allocation */
ath12k_ce_free_pipes(ab);
return ret;
}
}
return 0;
}
int ath12k_ce_get_attr_flags(struct ath12k_base *ab, int ce_id)
{
if (ce_id >= ab->hw_params->ce_count)
return -EINVAL;
return ab->hw_params->host_ce_config[ce_id].flags;
}

/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH12K_CE_H
#define ATH12K_CE_H
#define CE_COUNT_MAX 16
/* Byte swap data words */
#define CE_ATTR_BYTE_SWAP_DATA 2
/* no interrupt on copy completion */
#define CE_ATTR_DIS_INTR 8
/* Host software's Copy Engine configuration. */
#define CE_ATTR_FLAGS 0
/* Threshold to poll for tx completion in case of Interrupt disabled CE's */
#define ATH12K_CE_USAGE_THRESHOLD 32
/* Directions for interconnect pipe configuration.
* These definitions may be used during configuration and are shared
* between Host and Target.
*
* Pipe Directions are relative to the Host, so PIPEDIR_IN means
* "coming IN over air through Target to Host" as with a WiFi Rx operation.
* Conversely, PIPEDIR_OUT means "going OUT from Host through Target over air"
* as with a WiFi Tx operation. This is somewhat awkward for the "middle-man"
* Target since things that are "PIPEDIR_OUT" are coming IN to the Target
* over the interconnect.
*/
#define PIPEDIR_NONE 0
#define PIPEDIR_IN 1 /* Target-->Host, WiFi Rx direction */
#define PIPEDIR_OUT 2 /* Host->Target, WiFi Tx direction */
#define PIPEDIR_INOUT 3 /* bidirectional */
#define PIPEDIR_INOUT_H2H 4 /* bidirectional, host to host */
/* CE address/mask */
#define CE_HOST_IE_ADDRESS 0x00A1803C
#define CE_HOST_IE_2_ADDRESS 0x00A18040
#define CE_HOST_IE_3_ADDRESS CE_HOST_IE_ADDRESS
#define CE_HOST_IE_3_SHIFT 0xC
#define CE_RING_IDX_INCR(nentries_mask, idx) (((idx) + 1) & (nentries_mask))
#define ATH12K_CE_RX_POST_RETRY_JIFFIES 50
struct ath12k_base;
/* Establish a mapping between a service/direction and a pipe.
* Configuration information for a Copy Engine pipe and services.
* Passed from Host to Target through QMI message and must be in
* little endian format.
*/
struct service_to_pipe {
__le32 service_id;
__le32 pipedir;
__le32 pipenum;
};
/* Configuration information for a Copy Engine pipe.
* Passed from Host to Target through QMI message during startup (one per CE).
*
* NOTE: Structure is shared between Host software and Target firmware!
*/
struct ce_pipe_config {
__le32 pipenum;
__le32 pipedir;
__le32 nentries;
__le32 nbytes_max;
__le32 flags;
__le32 reserved;
};
struct ce_attr {
/* CE_ATTR_* values */
unsigned int flags;
/* #entries in source ring - Must be a power of 2 */
unsigned int src_nentries;
/* Max source send size for this CE.
* This is also the minimum size of a destination buffer.
*/
unsigned int src_sz_max;
/* #entries in destination ring - Must be a power of 2 */
unsigned int dest_nentries;
void (*recv_cb)(struct ath12k_base *ab, struct sk_buff *skb);
};
#define CE_DESC_RING_ALIGN 8
struct ath12k_ce_ring {
/* Number of entries in this ring; must be power of 2 */
unsigned int nentries;
unsigned int nentries_mask;
/* For dest ring, this is the next index to be processed
* by software after it was/is received into.
*
* For src ring, this is the last descriptor that was sent
* and completion processed by software.
*
* Regardless of src or dest ring, this is an invariant
* (modulo ring size):
* write index >= read index >= sw_index
*/
unsigned int sw_index;
/* cached copy */
unsigned int write_index;
/* Start of DMA-coherent area reserved for descriptors */
/* Host address space */
void *base_addr_owner_space_unaligned;
/* CE address space */
u32 base_addr_ce_space_unaligned;
/* Actual start of descriptors.
* Aligned to descriptor-size boundary.
* Points into reserved DMA-coherent area, above.
*/
/* Host address space */
void *base_addr_owner_space;
/* CE address space */
u32 base_addr_ce_space;
/* HAL ring id */
u32 hal_ring_id;
/* keep last */
struct sk_buff *skb[];
};
struct ath12k_ce_pipe {
struct ath12k_base *ab;
u16 pipe_num;
unsigned int attr_flags;
unsigned int buf_sz;
unsigned int rx_buf_needed;
void (*send_cb)(struct ath12k_ce_pipe *pipe);
void (*recv_cb)(struct ath12k_base *ab, struct sk_buff *skb);
struct tasklet_struct intr_tq;
struct ath12k_ce_ring *src_ring;
struct ath12k_ce_ring *dest_ring;
struct ath12k_ce_ring *status_ring;
u64 timestamp;
};
struct ath12k_ce {
struct ath12k_ce_pipe ce_pipe[CE_COUNT_MAX];
/* Protects rings of all ce pipes */
spinlock_t ce_lock;
struct ath12k_hp_update_timer hp_timer[CE_COUNT_MAX];
};
extern const struct ce_attr ath12k_host_ce_config_qcn9274[];
extern const struct ce_attr ath12k_host_ce_config_wcn7850[];
void ath12k_ce_cleanup_pipes(struct ath12k_base *ab);
void ath12k_ce_rx_replenish_retry(struct timer_list *t);
void ath12k_ce_per_engine_service(struct ath12k_base *ab, u16 ce_id);
int ath12k_ce_send(struct ath12k_base *ab, struct sk_buff *skb, u8 pipe_id,
u16 transfer_id);
void ath12k_ce_rx_post_buf(struct ath12k_base *ab);
int ath12k_ce_init_pipes(struct ath12k_base *ab);
int ath12k_ce_alloc_pipes(struct ath12k_base *ab);
void ath12k_ce_free_pipes(struct ath12k_base *ab);
int ath12k_ce_get_attr_flags(struct ath12k_base *ab, int ce_id);
void ath12k_ce_poll_send_completed(struct ath12k_base *ab, u8 pipe_id);
int ath12k_ce_map_service_to_pipe(struct ath12k_base *ab, u16 service_id,
u8 *ul_pipe, u8 *dl_pipe);
int ath12k_ce_attr_attach(struct ath12k_base *ab);
void ath12k_ce_get_shadow_config(struct ath12k_base *ab,
u32 **shadow_cfg, u32 *shadow_cfg_len);
#endif

// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/remoteproc.h>
#include <linux/firmware.h>
#include <linux/of.h>
#include "core.h"
#include "dp_tx.h"
#include "dp_rx.h"
#include "debug.h"
#include "hif.h"
unsigned int ath12k_debug_mask;
module_param_named(debug_mask, ath12k_debug_mask, uint, 0644);
MODULE_PARM_DESC(debug_mask, "Debugging mask");
int ath12k_core_suspend(struct ath12k_base *ab)
{
int ret;
if (!ab->hw_params->supports_suspend)
return -EOPNOTSUPP;
/* TODO: there can be frames in the queues, so for now add a delay as a
 * hack. Handle the pending frames properly and remove this delay.
 */
msleep(500);
ret = ath12k_dp_rx_pktlog_stop(ab, true);
if (ret) {
ath12k_warn(ab, "failed to stop dp rx (and timer) pktlog during suspend: %d\n",
ret);
return ret;
}
ret = ath12k_dp_rx_pktlog_stop(ab, false);
if (ret) {
ath12k_warn(ab, "failed to stop dp rx pktlog during suspend: %d\n",
ret);
return ret;
}
ath12k_hif_irq_disable(ab);
ath12k_hif_ce_irq_disable(ab);
ret = ath12k_hif_suspend(ab);
if (ret) {
ath12k_warn(ab, "failed to suspend hif: %d\n", ret);
return ret;
}
return 0;
}
int ath12k_core_resume(struct ath12k_base *ab)
{
int ret;
if (!ab->hw_params->supports_suspend)
return -EOPNOTSUPP;
ret = ath12k_hif_resume(ab);
if (ret) {
ath12k_warn(ab, "failed to resume hif during resume: %d\n", ret);
return ret;
}
ath12k_hif_ce_irq_enable(ab);
ath12k_hif_irq_enable(ab);
ret = ath12k_dp_rx_pktlog_start(ab);
if (ret) {
ath12k_warn(ab, "failed to start rx pktlog during resume: %d\n",
ret);
return ret;
}
return 0;
}
static int ath12k_core_create_board_name(struct ath12k_base *ab, char *name,
size_t name_len)
{
/* strlen(',variant=') + strlen(ab->qmi.target.bdf_ext) */
char variant[9 + ATH12K_QMI_BDF_EXT_STR_LENGTH] = { 0 };
if (ab->qmi.target.bdf_ext[0] != '\0')
scnprintf(variant, sizeof(variant), ",variant=%s",
ab->qmi.target.bdf_ext);
scnprintf(name, name_len,
"bus=%s,qmi-chip-id=%d,qmi-board-id=%d%s",
ath12k_bus_str(ab->hif.bus),
ab->qmi.target.chip_id,
ab->qmi.target.board_id, variant);
ath12k_dbg(ab, ATH12K_DBG_BOOT, "boot using board name '%s'\n", name);
return 0;
}
const struct firmware *ath12k_core_firmware_request(struct ath12k_base *ab,
const char *file)
{
const struct firmware *fw;
char path[100];
int ret;
if (!file)
return ERR_PTR(-ENOENT);
ath12k_core_create_firmware_path(ab, file, path, sizeof(path));
ret = firmware_request_nowarn(&fw, path, ab->dev);
if (ret)
return ERR_PTR(ret);
ath12k_dbg(ab, ATH12K_DBG_BOOT, "boot firmware request %s size %zu\n",
path, fw->size);
return fw;
}
void ath12k_core_free_bdf(struct ath12k_base *ab, struct ath12k_board_data *bd)
{
if (!IS_ERR(bd->fw))
release_firmware(bd->fw);
memset(bd, 0, sizeof(*bd));
}
static int ath12k_core_parse_bd_ie_board(struct ath12k_base *ab,
struct ath12k_board_data *bd,
const void *buf, size_t buf_len,
const char *boardname,
int bd_ie_type)
{
const struct ath12k_fw_ie *hdr;
bool name_match_found;
int ret, board_ie_id;
size_t board_ie_len;
const void *board_ie_data;
name_match_found = false;
/* go through ATH12K_BD_IE_BOARD_ elements */
while (buf_len > sizeof(struct ath12k_fw_ie)) {
hdr = buf;
board_ie_id = le32_to_cpu(hdr->id);
board_ie_len = le32_to_cpu(hdr->len);
board_ie_data = hdr->data;
buf_len -= sizeof(*hdr);
buf += sizeof(*hdr);
if (buf_len < ALIGN(board_ie_len, 4)) {
ath12k_err(ab, "invalid ATH12K_BD_IE_BOARD length: %zu < %zu\n",
buf_len, ALIGN(board_ie_len, 4));
ret = -EINVAL;
goto out;
}
switch (board_ie_id) {
case ATH12K_BD_IE_BOARD_NAME:
ath12k_dbg_dump(ab, ATH12K_DBG_BOOT, "board name", "",
board_ie_data, board_ie_len);
if (board_ie_len != strlen(boardname))
break;
ret = memcmp(board_ie_data, boardname, strlen(boardname));
if (ret)
break;
name_match_found = true;
ath12k_dbg(ab, ATH12K_DBG_BOOT,
"boot found match for name '%s'",
boardname);
break;
case ATH12K_BD_IE_BOARD_DATA:
if (!name_match_found)
/* no match found */
break;
ath12k_dbg(ab, ATH12K_DBG_BOOT,
"boot found board data for '%s'", boardname);
bd->data = board_ie_data;
bd->len = board_ie_len;
ret = 0;
goto out;
default:
ath12k_warn(ab, "unknown ATH12K_BD_IE_BOARD found: %d\n",
board_ie_id);
break;
}
/* jump over the padding */
board_ie_len = ALIGN(board_ie_len, 4);
buf_len -= board_ie_len;
buf += board_ie_len;
}
/* no match found */
ret = -ENOENT;
out:
return ret;
}
static int ath12k_core_fetch_board_data_api_n(struct ath12k_base *ab,
struct ath12k_board_data *bd,
const char *boardname)
{
size_t len, magic_len;
const u8 *data;
char *filename, filepath[100];
size_t ie_len;
struct ath12k_fw_ie *hdr;
int ret, ie_id;
filename = ATH12K_BOARD_API2_FILE;
if (!bd->fw)
bd->fw = ath12k_core_firmware_request(ab, filename);
if (IS_ERR(bd->fw))
return PTR_ERR(bd->fw);
data = bd->fw->data;
len = bd->fw->size;
ath12k_core_create_firmware_path(ab, filename,
filepath, sizeof(filepath));
/* magic has extra null byte padded */
magic_len = strlen(ATH12K_BOARD_MAGIC) + 1;
if (len < magic_len) {
ath12k_err(ab, "failed to find magic value in %s, file too short: %zu\n",
filepath, len);
ret = -EINVAL;
goto err;
}
if (memcmp(data, ATH12K_BOARD_MAGIC, magic_len)) {
ath12k_err(ab, "found invalid board magic\n");
ret = -EINVAL;
goto err;
}
/* magic is padded to 4 bytes */
magic_len = ALIGN(magic_len, 4);
if (len < magic_len) {
ath12k_err(ab, "failed: %s too small to contain board data, len: %zu\n",
filepath, len);
ret = -EINVAL;
goto err;
}
data += magic_len;
len -= magic_len;
while (len > sizeof(struct ath12k_fw_ie)) {
hdr = (struct ath12k_fw_ie *)data;
ie_id = le32_to_cpu(hdr->id);
ie_len = le32_to_cpu(hdr->len);
len -= sizeof(*hdr);
data = hdr->data;
if (len < ALIGN(ie_len, 4)) {
ath12k_err(ab, "invalid length for board ie_id %d ie_len %zu len %zu\n",
ie_id, ie_len, len);
ret = -EINVAL;
goto err;
}
switch (ie_id) {
case ATH12K_BD_IE_BOARD:
ret = ath12k_core_parse_bd_ie_board(ab, bd, data,
ie_len,
boardname,
ATH12K_BD_IE_BOARD);
if (ret == -ENOENT)
/* no match found, continue */
break;
else if (ret)
/* there was an error, bail out */
goto err;
/* either found or error, so stop searching */
goto out;
}
/* jump over the padding */
ie_len = ALIGN(ie_len, 4);
len -= ie_len;
data += ie_len;
}
out:
if (!bd->data || !bd->len) {
ath12k_err(ab,
"failed to fetch board data for %s from %s\n",
boardname, filepath);
ret = -ENODATA;
goto err;
}
return 0;
err:
ath12k_core_free_bdf(ab, bd);
return ret;
}
int ath12k_core_fetch_board_data_api_1(struct ath12k_base *ab,
struct ath12k_board_data *bd,
char *filename)
{
bd->fw = ath12k_core_firmware_request(ab, filename);
if (IS_ERR(bd->fw))
return PTR_ERR(bd->fw);
bd->data = bd->fw->data;
bd->len = bd->fw->size;
return 0;
}
#define BOARD_NAME_SIZE 100
int ath12k_core_fetch_bdf(struct ath12k_base *ab, struct ath12k_board_data *bd)
{
char boardname[BOARD_NAME_SIZE];
int ret;
ret = ath12k_core_create_board_name(ab, boardname, BOARD_NAME_SIZE);
if (ret) {
ath12k_err(ab, "failed to create board name: %d", ret);
return ret;
}
ab->bd_api = 2;
ret = ath12k_core_fetch_board_data_api_n(ab, bd, boardname);
if (!ret)
goto success;
ab->bd_api = 1;
ret = ath12k_core_fetch_board_data_api_1(ab, bd, ATH12K_DEFAULT_BOARD_FILE);
if (ret) {
ath12k_err(ab, "failed to fetch board-2.bin or board.bin from %s\n",
ab->hw_params->fw.dir);
return ret;
}
success:
ath12k_dbg(ab, ATH12K_DBG_BOOT, "using board api %d\n", ab->bd_api);
return 0;
}
static void ath12k_core_stop(struct ath12k_base *ab)
{
if (!test_bit(ATH12K_FLAG_CRASH_FLUSH, &ab->dev_flags))
ath12k_qmi_firmware_stop(ab);
ath12k_hif_stop(ab);
ath12k_wmi_detach(ab);
ath12k_dp_rx_pdev_reo_cleanup(ab);
/* De-Init of components as needed */
}
static int ath12k_core_soc_create(struct ath12k_base *ab)
{
int ret;
ret = ath12k_qmi_init_service(ab);
if (ret) {
ath12k_err(ab, "failed to initialize qmi :%d\n", ret);
return ret;
}
ret = ath12k_hif_power_up(ab);
if (ret) {
ath12k_err(ab, "failed to power up :%d\n", ret);
goto err_qmi_deinit;
}
return 0;
err_qmi_deinit:
ath12k_qmi_deinit_service(ab);
return ret;
}
static void ath12k_core_soc_destroy(struct ath12k_base *ab)
{
ath12k_dp_free(ab);
ath12k_reg_free(ab);
ath12k_qmi_deinit_service(ab);
}
static int ath12k_core_pdev_create(struct ath12k_base *ab)
{
int ret;
ret = ath12k_mac_register(ab);
if (ret) {
ath12k_err(ab, "failed to register the radio with mac80211: %d\n", ret);
return ret;
}
ret = ath12k_dp_pdev_alloc(ab);
if (ret) {
ath12k_err(ab, "failed to attach DP pdev: %d\n", ret);
goto err_mac_unregister;
}
return 0;
err_mac_unregister:
ath12k_mac_unregister(ab);
return ret;
}
static void ath12k_core_pdev_destroy(struct ath12k_base *ab)
{
ath12k_mac_unregister(ab);
ath12k_hif_irq_disable(ab);
ath12k_dp_pdev_free(ab);
}
static int ath12k_core_start(struct ath12k_base *ab,
enum ath12k_firmware_mode mode)
{
int ret;
ret = ath12k_wmi_attach(ab);
if (ret) {
ath12k_err(ab, "failed to attach wmi: %d\n", ret);
return ret;
}
ret = ath12k_htc_init(ab);
if (ret) {
ath12k_err(ab, "failed to init htc: %d\n", ret);
goto err_wmi_detach;
}
ret = ath12k_hif_start(ab);
if (ret) {
ath12k_err(ab, "failed to start HIF: %d\n", ret);
goto err_wmi_detach;
}
ret = ath12k_htc_wait_target(&ab->htc);
if (ret) {
ath12k_err(ab, "failed to connect to HTC: %d\n", ret);
goto err_hif_stop;
}
ret = ath12k_dp_htt_connect(&ab->dp);
if (ret) {
ath12k_err(ab, "failed to connect to HTT: %d\n", ret);
goto err_hif_stop;
}
ret = ath12k_wmi_connect(ab);
if (ret) {
ath12k_err(ab, "failed to connect wmi: %d\n", ret);
goto err_hif_stop;
}
ret = ath12k_htc_start(&ab->htc);
if (ret) {
ath12k_err(ab, "failed to start HTC: %d\n", ret);
goto err_hif_stop;
}
ret = ath12k_wmi_wait_for_service_ready(ab);
if (ret) {
ath12k_err(ab, "failed to receive wmi service ready event: %d\n",
ret);
goto err_hif_stop;
}
ret = ath12k_mac_allocate(ab);
if (ret) {
ath12k_err(ab, "failed to create new hw device with mac80211 :%d\n",
ret);
goto err_hif_stop;
}
ath12k_dp_cc_config(ab);
ath12k_dp_pdev_pre_alloc(ab);
ret = ath12k_dp_rx_pdev_reo_setup(ab);
if (ret) {
ath12k_err(ab, "failed to initialize reo destination rings: %d\n", ret);
goto err_mac_destroy;
}
ret = ath12k_wmi_cmd_init(ab);
if (ret) {
ath12k_err(ab, "failed to send wmi init cmd: %d\n", ret);
goto err_reo_cleanup;
}
ret = ath12k_wmi_wait_for_unified_ready(ab);
if (ret) {
ath12k_err(ab, "failed to receive wmi unified ready event: %d\n",
ret);
goto err_reo_cleanup;
}
/* put hardware to DBS mode */
if (ab->hw_params->single_pdev_only) {
ret = ath12k_wmi_set_hw_mode(ab, WMI_HOST_HW_MODE_DBS);
if (ret) {
ath12k_err(ab, "failed to send dbs mode: %d\n", ret);
goto err_reo_cleanup;
}
}
ret = ath12k_dp_tx_htt_h2t_ver_req_msg(ab);
if (ret) {
ath12k_err(ab, "failed to send htt version request message: %d\n",
ret);
goto err_reo_cleanup;
}
return 0;
err_reo_cleanup:
ath12k_dp_rx_pdev_reo_cleanup(ab);
err_mac_destroy:
ath12k_mac_destroy(ab);
err_hif_stop:
ath12k_hif_stop(ab);
err_wmi_detach:
ath12k_wmi_detach(ab);
return ret;
}
static int ath12k_core_start_firmware(struct ath12k_base *ab,
enum ath12k_firmware_mode mode)
{
int ret;
ath12k_ce_get_shadow_config(ab, &ab->qmi.ce_cfg.shadow_reg_v3,
&ab->qmi.ce_cfg.shadow_reg_v3_len);
ret = ath12k_qmi_firmware_start(ab, mode);
if (ret) {
ath12k_err(ab, "failed to send firmware start: %d\n", ret);
return ret;
}
return ret;
}
int ath12k_core_qmi_firmware_ready(struct ath12k_base *ab)
{
int ret;
ret = ath12k_core_start_firmware(ab, ATH12K_FIRMWARE_MODE_NORMAL);
if (ret) {
ath12k_err(ab, "failed to start firmware: %d\n", ret);
return ret;
}
ret = ath12k_ce_init_pipes(ab);
if (ret) {
ath12k_err(ab, "failed to initialize CE: %d\n", ret);
goto err_firmware_stop;
}
ret = ath12k_dp_alloc(ab);
if (ret) {
ath12k_err(ab, "failed to init DP: %d\n", ret);
goto err_firmware_stop;
}
mutex_lock(&ab->core_lock);
ret = ath12k_core_start(ab, ATH12K_FIRMWARE_MODE_NORMAL);
if (ret) {
ath12k_err(ab, "failed to start core: %d\n", ret);
goto err_dp_free;
}
ret = ath12k_core_pdev_create(ab);
if (ret) {
ath12k_err(ab, "failed to create pdev core: %d\n", ret);
goto err_core_stop;
}
ath12k_hif_irq_enable(ab);
mutex_unlock(&ab->core_lock);
return 0;
err_core_stop:
ath12k_core_stop(ab);
ath12k_mac_destroy(ab);
err_dp_free:
ath12k_dp_free(ab);
mutex_unlock(&ab->core_lock);
err_firmware_stop:
ath12k_qmi_firmware_stop(ab);
return ret;
}
static int ath12k_core_reconfigure_on_crash(struct ath12k_base *ab)
{
int ret;
mutex_lock(&ab->core_lock);
ath12k_hif_irq_disable(ab);
ath12k_dp_pdev_free(ab);
ath12k_hif_stop(ab);
ath12k_wmi_detach(ab);
ath12k_dp_rx_pdev_reo_cleanup(ab);
mutex_unlock(&ab->core_lock);
ath12k_dp_free(ab);
ath12k_hal_srng_deinit(ab);
ab->free_vdev_map = (1LL << (ab->num_radios * TARGET_NUM_VDEVS)) - 1;
ret = ath12k_hal_srng_init(ab);
if (ret)
return ret;
clear_bit(ATH12K_FLAG_CRASH_FLUSH, &ab->dev_flags);
ret = ath12k_core_qmi_firmware_ready(ab);
if (ret)
goto err_hal_srng_deinit;
clear_bit(ATH12K_FLAG_RECOVERY, &ab->dev_flags);
return 0;
err_hal_srng_deinit:
ath12k_hal_srng_deinit(ab);
return ret;
}
void ath12k_core_halt(struct ath12k *ar)
{
struct ath12k_base *ab = ar->ab;
lockdep_assert_held(&ar->conf_mutex);
ar->num_created_vdevs = 0;
ar->allocated_vdev_map = 0;
ath12k_mac_scan_finish(ar);
ath12k_mac_peer_cleanup_all(ar);
cancel_delayed_work_sync(&ar->scan.timeout);
cancel_work_sync(&ar->regd_update_work);
rcu_assign_pointer(ab->pdevs_active[ar->pdev_idx], NULL);
synchronize_rcu();
INIT_LIST_HEAD(&ar->arvifs);
idr_init(&ar->txmgmt_idr);
}
static void ath12k_core_pre_reconfigure_recovery(struct ath12k_base *ab)
{
struct ath12k *ar;
struct ath12k_pdev *pdev;
int i;
spin_lock_bh(&ab->base_lock);
ab->stats.fw_crash_counter++;
spin_unlock_bh(&ab->base_lock);
for (i = 0; i < ab->num_radios; i++) {
pdev = &ab->pdevs[i];
ar = pdev->ar;
if (!ar || ar->state == ATH12K_STATE_OFF)
continue;
ieee80211_stop_queues(ar->hw);
ath12k_mac_drain_tx(ar);
complete(&ar->scan.started);
complete(&ar->scan.completed);
complete(&ar->peer_assoc_done);
complete(&ar->peer_delete_done);
complete(&ar->install_key_done);
complete(&ar->vdev_setup_done);
complete(&ar->vdev_delete_done);
complete(&ar->bss_survey_done);
wake_up(&ar->dp.tx_empty_waitq);
idr_for_each(&ar->txmgmt_idr,
ath12k_mac_tx_mgmt_pending_free, ar);
idr_destroy(&ar->txmgmt_idr);
}
wake_up(&ab->wmi_ab.tx_credits_wq);
wake_up(&ab->peer_mapping_wq);
}
static void ath12k_core_post_reconfigure_recovery(struct ath12k_base *ab)
{
struct ath12k *ar;
struct ath12k_pdev *pdev;
int i;
for (i = 0; i < ab->num_radios; i++) {
pdev = &ab->pdevs[i];
ar = pdev->ar;
if (!ar || ar->state == ATH12K_STATE_OFF)
continue;
mutex_lock(&ar->conf_mutex);
switch (ar->state) {
case ATH12K_STATE_ON:
ar->state = ATH12K_STATE_RESTARTING;
ath12k_core_halt(ar);
ieee80211_restart_hw(ar->hw);
break;
case ATH12K_STATE_OFF:
ath12k_warn(ab,
"cannot restart radio %d that hasn't been started\n",
i);
break;
case ATH12K_STATE_RESTARTING:
break;
case ATH12K_STATE_RESTARTED:
ar->state = ATH12K_STATE_WEDGED;
fallthrough;
case ATH12K_STATE_WEDGED:
ath12k_warn(ab,
"device is wedged, will not restart radio %d\n", i);
break;
}
mutex_unlock(&ar->conf_mutex);
}
complete(&ab->driver_recovery);
}
static void ath12k_core_restart(struct work_struct *work)
{
struct ath12k_base *ab = container_of(work, struct ath12k_base, restart_work);
int ret;
if (!ab->is_reset)
ath12k_core_pre_reconfigure_recovery(ab);
ret = ath12k_core_reconfigure_on_crash(ab);
if (ret) {
ath12k_err(ab, "failed to reconfigure driver on crash recovery\n");
return;
}
if (ab->is_reset)
complete_all(&ab->reconfigure_complete);
if (!ab->is_reset)
ath12k_core_post_reconfigure_recovery(ab);
}
static void ath12k_core_reset(struct work_struct *work)
{
struct ath12k_base *ab = container_of(work, struct ath12k_base, reset_work);
int reset_count, fail_cont_count;
long time_left;
if (!(test_bit(ATH12K_FLAG_REGISTERED, &ab->dev_flags))) {
ath12k_warn(ab, "ignore reset dev flags 0x%lx\n", ab->dev_flags);
return;
}
/* Sometimes a recovery fails and every subsequent attempt fails too;
 * bail out here to avoid looping forever on a recovery that cannot
 * succeed.
 */
fail_cont_count = atomic_read(&ab->fail_cont_count);
if (fail_cont_count >= ATH12K_RESET_MAX_FAIL_COUNT_FINAL)
return;
if (fail_cont_count >= ATH12K_RESET_MAX_FAIL_COUNT_FIRST &&
time_before(jiffies, ab->reset_fail_timeout))
return;
reset_count = atomic_inc_return(&ab->reset_count);
if (reset_count > 1) {
/* Sometimes another reset worker is scheduled before the previous one
 * has completed; the second worker would then tear down the state of
 * the first, so wait here to avoid that.
 */
ath12k_warn(ab, "already resetting count %d\n", reset_count);
reinit_completion(&ab->reset_complete);
time_left = wait_for_completion_timeout(&ab->reset_complete,
ATH12K_RESET_TIMEOUT_HZ);
if (time_left) {
ath12k_dbg(ab, ATH12K_DBG_BOOT, "previous reset completed, skip reset\n");
atomic_dec(&ab->reset_count);
return;
}
ab->reset_fail_timeout = jiffies + ATH12K_RESET_FAIL_TIMEOUT_HZ;
/* Record the continuous recovery failure count when recovery fails */
fail_cont_count = atomic_inc_return(&ab->fail_cont_count);
}
ath12k_dbg(ab, ATH12K_DBG_BOOT, "reset starting\n");
ab->is_reset = true;
atomic_set(&ab->recovery_count, 0);
ath12k_core_pre_reconfigure_recovery(ab);
reinit_completion(&ab->reconfigure_complete);
ath12k_core_post_reconfigure_recovery(ab);
reinit_completion(&ab->recovery_start);
atomic_set(&ab->recovery_start_count, 0);
ath12k_dbg(ab, ATH12K_DBG_BOOT, "waiting for recovery to start...\n");
time_left = wait_for_completion_timeout(&ab->recovery_start,
ATH12K_RECOVER_START_TIMEOUT_HZ);
ath12k_hif_power_down(ab);
ath12k_hif_power_up(ab);
ath12k_dbg(ab, ATH12K_DBG_BOOT, "reset started\n");
}
int ath12k_core_pre_init(struct ath12k_base *ab)
{
int ret;
ret = ath12k_hw_init(ab);
if (ret) {
ath12k_err(ab, "failed to init hw params: %d\n", ret);
return ret;
}
return 0;
}
int ath12k_core_init(struct ath12k_base *ab)
{
int ret;
ret = ath12k_core_soc_create(ab);
if (ret) {
ath12k_err(ab, "failed to create soc core: %d\n", ret);
return ret;
}
return 0;
}
void ath12k_core_deinit(struct ath12k_base *ab)
{
mutex_lock(&ab->core_lock);
ath12k_core_pdev_destroy(ab);
ath12k_core_stop(ab);
mutex_unlock(&ab->core_lock);
ath12k_hif_power_down(ab);
ath12k_mac_destroy(ab);
ath12k_core_soc_destroy(ab);
}
void ath12k_core_free(struct ath12k_base *ab)
{
destroy_workqueue(ab->workqueue_aux);
destroy_workqueue(ab->workqueue);
kfree(ab);
}
struct ath12k_base *ath12k_core_alloc(struct device *dev, size_t priv_size,
enum ath12k_bus bus)
{
struct ath12k_base *ab;
ab = kzalloc(sizeof(*ab) + priv_size, GFP_KERNEL);
if (!ab)
return NULL;
init_completion(&ab->driver_recovery);
ab->workqueue = create_singlethread_workqueue("ath12k_wq");
if (!ab->workqueue)
goto err_sc_free;
ab->workqueue_aux = create_singlethread_workqueue("ath12k_aux_wq");
if (!ab->workqueue_aux)
goto err_free_wq;
mutex_init(&ab->core_lock);
spin_lock_init(&ab->base_lock);
init_completion(&ab->reset_complete);
init_completion(&ab->reconfigure_complete);
init_completion(&ab->recovery_start);
INIT_LIST_HEAD(&ab->peers);
init_waitqueue_head(&ab->peer_mapping_wq);
init_waitqueue_head(&ab->wmi_ab.tx_credits_wq);
INIT_WORK(&ab->restart_work, ath12k_core_restart);
INIT_WORK(&ab->reset_work, ath12k_core_reset);
timer_setup(&ab->rx_replenish_retry, ath12k_ce_rx_replenish_retry, 0);
init_completion(&ab->htc_suspend);
ab->dev = dev;
ab->hif.bus = bus;
return ab;
err_free_wq:
destroy_workqueue(ab->workqueue);
err_sc_free:
kfree(ab);
return NULL;
}
MODULE_DESCRIPTION("Core module for Qualcomm Atheros 802.11be wireless LAN cards.");
MODULE_LICENSE("Dual BSD/GPL");


@@ -0,0 +1,822 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH12K_CORE_H
#define ATH12K_CORE_H
#include <linux/types.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/bitfield.h>
#include "qmi.h"
#include "htc.h"
#include "wmi.h"
#include "hal.h"
#include "dp.h"
#include "ce.h"
#include "mac.h"
#include "hw.h"
#include "hal_rx.h"
#include "reg.h"
#include "dbring.h"
#define SM(_v, _f) (((_v) << _f##_LSB) & _f##_MASK)
#define ATH12K_TX_MGMT_NUM_PENDING_MAX 512
#define ATH12K_TX_MGMT_TARGET_MAX_SUPPORT_WMI 64
/* Pending management packets threshold for dropping probe responses */
#define ATH12K_PRB_RSP_DROP_THRESHOLD ((ATH12K_TX_MGMT_TARGET_MAX_SUPPORT_WMI * 3) / 4)
#define ATH12K_INVALID_HW_MAC_ID 0xFF
#define ATH12K_RX_RATE_TABLE_NUM 320
#define ATH12K_RX_RATE_TABLE_11AX_NUM 576
#define ATH12K_MON_TIMER_INTERVAL 10
#define ATH12K_RESET_TIMEOUT_HZ (20 * HZ)
#define ATH12K_RESET_MAX_FAIL_COUNT_FIRST 3
#define ATH12K_RESET_MAX_FAIL_COUNT_FINAL 5
#define ATH12K_RESET_FAIL_TIMEOUT_HZ (20 * HZ)
#define ATH12K_RECONFIGURE_TIMEOUT_HZ (10 * HZ)
#define ATH12K_RECOVER_START_TIMEOUT_HZ (20 * HZ)
enum wme_ac {
WME_AC_BE,
WME_AC_BK,
WME_AC_VI,
WME_AC_VO,
WME_NUM_AC
};
#define ATH12K_HT_MCS_MAX 7
#define ATH12K_VHT_MCS_MAX 9
#define ATH12K_HE_MCS_MAX 11
enum ath12k_crypt_mode {
/* Only use hardware crypto engine */
ATH12K_CRYPT_MODE_HW,
/* Only use software crypto */
ATH12K_CRYPT_MODE_SW,
};
static inline enum wme_ac ath12k_tid_to_ac(u32 tid)
{
return (((tid == 0) || (tid == 3)) ? WME_AC_BE :
((tid == 1) || (tid == 2)) ? WME_AC_BK :
((tid == 4) || (tid == 5)) ? WME_AC_VI :
WME_AC_VO);
}
enum ath12k_skb_flags {
ATH12K_SKB_HW_80211_ENCAP = BIT(0),
ATH12K_SKB_CIPHER_SET = BIT(1),
};
struct ath12k_skb_cb {
dma_addr_t paddr;
struct ath12k *ar;
struct ieee80211_vif *vif;
dma_addr_t paddr_ext_desc;
u32 cipher;
u8 flags;
};
struct ath12k_skb_rxcb {
dma_addr_t paddr;
bool is_first_msdu;
bool is_last_msdu;
bool is_continuation;
bool is_mcbc;
bool is_eapol;
struct hal_rx_desc *rx_desc;
u8 err_rel_src;
u8 err_code;
u8 mac_id;
u8 unmapped;
u8 is_frag;
u8 tid;
u16 peer_id;
};
enum ath12k_hw_rev {
ATH12K_HW_QCN9274_HW10,
ATH12K_HW_QCN9274_HW20,
ATH12K_HW_WCN7850_HW20
};
enum ath12k_firmware_mode {
/* the default mode, standard 802.11 functionality */
ATH12K_FIRMWARE_MODE_NORMAL,
/* factory tests etc */
ATH12K_FIRMWARE_MODE_FTM,
};
#define ATH12K_IRQ_NUM_MAX 57
#define ATH12K_EXT_IRQ_NUM_MAX 16
struct ath12k_ext_irq_grp {
struct ath12k_base *ab;
u32 irqs[ATH12K_EXT_IRQ_NUM_MAX];
u32 num_irq;
u32 grp_id;
u64 timestamp;
struct napi_struct napi;
struct net_device napi_ndev;
};
#define HEHANDLE_CAP_PHYINFO_SIZE 3
#define HECAP_PHYINFO_SIZE 9
#define HECAP_MACINFO_SIZE 5
#define HECAP_TXRX_MCS_NSS_SIZE 2
#define HECAP_PPET16_PPET8_MAX_SIZE 25
#define HE_PPET16_PPET8_SIZE 8
/* 802.11ax PPE (PPDU packet Extension) threshold */
struct he_ppe_threshold {
u32 numss_m1;
u32 ru_mask;
u32 ppet16_ppet8_ru3_ru0[HE_PPET16_PPET8_SIZE];
};
struct ath12k_he {
u8 hecap_macinfo[HECAP_MACINFO_SIZE];
u32 hecap_rxmcsnssmap;
u32 hecap_txmcsnssmap;
u32 hecap_phyinfo[HEHANDLE_CAP_PHYINFO_SIZE];
struct he_ppe_threshold hecap_ppet;
u32 heop_param;
};
#define MAX_RADIOS 3
enum {
WMI_HOST_TP_SCALE_MAX = 0,
WMI_HOST_TP_SCALE_50 = 1,
WMI_HOST_TP_SCALE_25 = 2,
WMI_HOST_TP_SCALE_12 = 3,
WMI_HOST_TP_SCALE_MIN = 4,
WMI_HOST_TP_SCALE_SIZE = 5,
};
enum ath12k_scan_state {
ATH12K_SCAN_IDLE,
ATH12K_SCAN_STARTING,
ATH12K_SCAN_RUNNING,
ATH12K_SCAN_ABORTING,
};
enum ath12k_dev_flags {
ATH12K_CAC_RUNNING,
ATH12K_FLAG_CRASH_FLUSH,
ATH12K_FLAG_RAW_MODE,
ATH12K_FLAG_HW_CRYPTO_DISABLED,
ATH12K_FLAG_RECOVERY,
ATH12K_FLAG_UNREGISTERING,
ATH12K_FLAG_REGISTERED,
ATH12K_FLAG_QMI_FAIL,
ATH12K_FLAG_HTC_SUSPEND_COMPLETE,
};
enum ath12k_monitor_flags {
ATH12K_FLAG_MONITOR_ENABLED,
};
struct ath12k_vif {
u32 vdev_id;
enum wmi_vdev_type vdev_type;
enum wmi_vdev_subtype vdev_subtype;
u32 beacon_interval;
u32 dtim_period;
u16 ast_hash;
u16 ast_idx;
u16 tcl_metadata;
u8 hal_addr_search_flags;
u8 search_type;
struct ath12k *ar;
struct ieee80211_vif *vif;
int bank_id;
u8 vdev_id_check_en;
struct wmi_wmm_params_all_arg wmm_params;
struct list_head list;
union {
struct {
u32 uapsd;
} sta;
struct {
/* 127 stations; wmi limit */
u8 tim_bitmap[16];
u8 tim_len;
u32 ssid_len;
u8 ssid[IEEE80211_MAX_SSID_LEN];
bool hidden_ssid;
/* P2P_IE with NoA attribute for P2P_GO case */
u32 noa_len;
u8 *noa_data;
} ap;
} u;
bool is_started;
bool is_up;
u32 aid;
u8 bssid[ETH_ALEN];
struct cfg80211_bitrate_mask bitrate_mask;
int num_legacy_stations;
int rtscts_prot_mode;
int txpower;
bool rsnie_present;
bool wpaie_present;
struct ieee80211_chanctx_conf chanctx;
u32 key_cipher;
u8 tx_encap_type;
u8 vdev_stats_id;
};
struct ath12k_vif_iter {
u32 vdev_id;
struct ath12k_vif *arvif;
};
#define HAL_AST_IDX_INVALID 0xFFFF
#define HAL_RX_MAX_MCS 12
#define HAL_RX_MAX_MCS_HT 31
#define HAL_RX_MAX_MCS_VHT 9
#define HAL_RX_MAX_MCS_HE 11
#define HAL_RX_MAX_NSS 8
#define HAL_RX_MAX_NUM_LEGACY_RATES 12
struct ath12k_rx_peer_rate_stats {
u64 ht_mcs_count[HAL_RX_MAX_MCS_HT + 1];
u64 vht_mcs_count[HAL_RX_MAX_MCS_VHT + 1];
u64 he_mcs_count[HAL_RX_MAX_MCS_HE + 1];
u64 nss_count[HAL_RX_MAX_NSS];
u64 bw_count[HAL_RX_BW_MAX];
u64 gi_count[HAL_RX_GI_MAX];
u64 legacy_count[HAL_RX_MAX_NUM_LEGACY_RATES];
u64 rx_rate[ATH12K_RX_RATE_TABLE_11AX_NUM];
};
struct ath12k_rx_peer_stats {
u64 num_msdu;
u64 num_mpdu_fcs_ok;
u64 num_mpdu_fcs_err;
u64 tcp_msdu_count;
u64 udp_msdu_count;
u64 other_msdu_count;
u64 ampdu_msdu_count;
u64 non_ampdu_msdu_count;
u64 stbc_count;
u64 beamformed_count;
u64 mcs_count[HAL_RX_MAX_MCS + 1];
u64 nss_count[HAL_RX_MAX_NSS];
u64 bw_count[HAL_RX_BW_MAX];
u64 gi_count[HAL_RX_GI_MAX];
u64 coding_count[HAL_RX_SU_MU_CODING_MAX];
u64 tid_count[IEEE80211_NUM_TIDS + 1];
u64 pream_cnt[HAL_RX_PREAMBLE_MAX];
u64 reception_type[HAL_RX_RECEPTION_TYPE_MAX];
u64 rx_duration;
u64 dcm_count;
u64 ru_alloc_cnt[HAL_RX_RU_ALLOC_TYPE_MAX];
struct ath12k_rx_peer_rate_stats pkt_stats;
struct ath12k_rx_peer_rate_stats byte_stats;
};
#define ATH12K_HE_MCS_NUM 12
#define ATH12K_VHT_MCS_NUM 10
#define ATH12K_BW_NUM 5
#define ATH12K_NSS_NUM 4
#define ATH12K_LEGACY_NUM 12
#define ATH12K_GI_NUM 4
#define ATH12K_HT_MCS_NUM 32
enum ath12k_pkt_rx_err {
ATH12K_PKT_RX_ERR_FCS,
ATH12K_PKT_RX_ERR_TKIP,
ATH12K_PKT_RX_ERR_CRYPT,
ATH12K_PKT_RX_ERR_PEER_IDX_INVAL,
ATH12K_PKT_RX_ERR_MAX,
};
enum ath12k_ampdu_subfrm_num {
ATH12K_AMPDU_SUBFRM_NUM_10,
ATH12K_AMPDU_SUBFRM_NUM_20,
ATH12K_AMPDU_SUBFRM_NUM_30,
ATH12K_AMPDU_SUBFRM_NUM_40,
ATH12K_AMPDU_SUBFRM_NUM_50,
ATH12K_AMPDU_SUBFRM_NUM_60,
ATH12K_AMPDU_SUBFRM_NUM_MORE,
ATH12K_AMPDU_SUBFRM_NUM_MAX,
};
enum ath12k_amsdu_subfrm_num {
ATH12K_AMSDU_SUBFRM_NUM_1,
ATH12K_AMSDU_SUBFRM_NUM_2,
ATH12K_AMSDU_SUBFRM_NUM_3,
ATH12K_AMSDU_SUBFRM_NUM_4,
ATH12K_AMSDU_SUBFRM_NUM_MORE,
ATH12K_AMSDU_SUBFRM_NUM_MAX,
};
enum ath12k_counter_type {
ATH12K_COUNTER_TYPE_BYTES,
ATH12K_COUNTER_TYPE_PKTS,
ATH12K_COUNTER_TYPE_MAX,
};
enum ath12k_stats_type {
ATH12K_STATS_TYPE_SUCC,
ATH12K_STATS_TYPE_FAIL,
ATH12K_STATS_TYPE_RETRY,
ATH12K_STATS_TYPE_AMPDU,
ATH12K_STATS_TYPE_MAX,
};
struct ath12k_htt_data_stats {
u64 legacy[ATH12K_COUNTER_TYPE_MAX][ATH12K_LEGACY_NUM];
u64 ht[ATH12K_COUNTER_TYPE_MAX][ATH12K_HT_MCS_NUM];
u64 vht[ATH12K_COUNTER_TYPE_MAX][ATH12K_VHT_MCS_NUM];
u64 he[ATH12K_COUNTER_TYPE_MAX][ATH12K_HE_MCS_NUM];
u64 bw[ATH12K_COUNTER_TYPE_MAX][ATH12K_BW_NUM];
u64 nss[ATH12K_COUNTER_TYPE_MAX][ATH12K_NSS_NUM];
u64 gi[ATH12K_COUNTER_TYPE_MAX][ATH12K_GI_NUM];
u64 transmit_type[ATH12K_COUNTER_TYPE_MAX][HAL_RX_RECEPTION_TYPE_MAX];
u64 ru_loc[ATH12K_COUNTER_TYPE_MAX][HAL_RX_RU_ALLOC_TYPE_MAX];
};
struct ath12k_htt_tx_stats {
struct ath12k_htt_data_stats stats[ATH12K_STATS_TYPE_MAX];
u64 tx_duration;
u64 ba_fails;
u64 ack_fails;
u16 ru_start;
u16 ru_tones;
u32 mu_group[MAX_MU_GROUP_ID];
};
struct ath12k_per_ppdu_tx_stats {
u16 succ_pkts;
u16 failed_pkts;
u16 retry_pkts;
u32 succ_bytes;
u32 failed_bytes;
u32 retry_bytes;
};
struct ath12k_wbm_tx_stats {
u64 wbm_tx_comp_stats[HAL_WBM_REL_HTT_TX_COMP_STATUS_MAX];
};
struct ath12k_sta {
struct ath12k_vif *arvif;
/* the following are protected by ar->data_lock */
u32 changed; /* IEEE80211_RC_* */
u32 bw;
u32 nss;
u32 smps;
enum hal_pn_type pn_type;
struct work_struct update_wk;
struct rate_info txrate;
struct rate_info last_txrate;
u64 rx_duration;
u64 tx_duration;
u8 rssi_comb;
struct ath12k_rx_peer_stats *rx_stats;
struct ath12k_wbm_tx_stats *wbm_tx_stats;
};
#define ATH12K_MIN_5G_FREQ 4150
#define ATH12K_MIN_6G_FREQ 5945
#define ATH12K_MAX_6G_FREQ 7115
#define ATH12K_NUM_CHANS 100
#define ATH12K_MAX_5G_CHAN 173
enum ath12k_state {
ATH12K_STATE_OFF,
ATH12K_STATE_ON,
ATH12K_STATE_RESTARTING,
ATH12K_STATE_RESTARTED,
ATH12K_STATE_WEDGED,
/* Add other states as required */
};
/* Antenna noise floor */
#define ATH12K_DEFAULT_NOISE_FLOOR -95
struct ath12k_fw_stats {
u32 pdev_id;
u32 stats_id;
struct list_head pdevs;
struct list_head vdevs;
struct list_head bcn;
};
struct ath12k_per_peer_tx_stats {
u32 succ_bytes;
u32 retry_bytes;
u32 failed_bytes;
u32 duration;
u16 succ_pkts;
u16 retry_pkts;
u16 failed_pkts;
u16 ru_start;
u16 ru_tones;
u8 ba_fails;
u8 ppdu_type;
u32 mu_grpid;
u32 mu_pos;
bool is_ampdu;
};
#define ATH12K_FLUSH_TIMEOUT (5 * HZ)
#define ATH12K_VDEV_DELETE_TIMEOUT_HZ (5 * HZ)
struct ath12k {
struct ath12k_base *ab;
struct ath12k_pdev *pdev;
struct ieee80211_hw *hw;
struct ieee80211_ops *ops;
struct ath12k_wmi_pdev *wmi;
struct ath12k_pdev_dp dp;
u8 mac_addr[ETH_ALEN];
u32 ht_cap_info;
u32 vht_cap_info;
struct ath12k_he ar_he;
enum ath12k_state state;
bool supports_6ghz;
struct {
struct completion started;
struct completion completed;
struct completion on_channel;
struct delayed_work timeout;
enum ath12k_scan_state state;
bool is_roc;
int vdev_id;
int roc_freq;
bool roc_notify;
} scan;
struct {
struct ieee80211_supported_band sbands[NUM_NL80211_BANDS];
struct ieee80211_sband_iftype_data
iftype[NUM_NL80211_BANDS][NUM_NL80211_IFTYPES];
} mac;
unsigned long dev_flags;
unsigned int filter_flags;
unsigned long monitor_flags;
u32 min_tx_power;
u32 max_tx_power;
u32 txpower_limit_2g;
u32 txpower_limit_5g;
u32 txpower_scale;
u32 power_scale;
u32 chan_tx_pwr;
u32 num_stations;
u32 max_num_stations;
bool monitor_present;
/* To synchronize concurrent synchronous mac80211 callback operations,
* concurrent debugfs configuration and concurrent FW statistics events.
*/
struct mutex conf_mutex;
/* protects the radio specific data like debug stats, ppdu_stats_info stats,
* vdev_stop_status info, scan data, ath12k_sta info, ath12k_vif info,
* channel context data, survey info, test mode data.
*/
spinlock_t data_lock;
struct list_head arvifs;
/* should never be NULL; needed for regular htt rx */
struct ieee80211_channel *rx_channel;
/* valid during scan; needed for mgmt rx during scan */
struct ieee80211_channel *scan_channel;
u8 cfg_tx_chainmask;
u8 cfg_rx_chainmask;
u8 num_rx_chains;
u8 num_tx_chains;
/* pdev_idx starts from 0 whereas pdev->pdev_id starts with 1 */
u8 pdev_idx;
u8 lmac_id;
struct completion peer_assoc_done;
struct completion peer_delete_done;
int install_key_status;
struct completion install_key_done;
int last_wmi_vdev_start_status;
struct completion vdev_setup_done;
struct completion vdev_delete_done;
int num_peers;
int max_num_peers;
u32 num_started_vdevs;
u32 num_created_vdevs;
unsigned long long allocated_vdev_map;
struct idr txmgmt_idr;
/* protects txmgmt_idr data */
spinlock_t txmgmt_idr_lock;
atomic_t num_pending_mgmt_tx;
/* cycle count is reported twice for each visited channel during scan.
* access protected by data_lock
*/
u32 survey_last_rx_clear_count;
u32 survey_last_cycle_count;
/* Channel info events are expected to come in pairs without and with
* COMPLETE flag set respectively for each channel visit during scan.
*
* However there are deviations from this rule. This flag is used to
* avoid reporting garbage data.
*/
bool ch_info_can_report_survey;
struct survey_info survey[ATH12K_NUM_CHANS];
struct completion bss_survey_done;
struct work_struct regd_update_work;
struct work_struct wmi_mgmt_tx_work;
struct sk_buff_head wmi_mgmt_tx_queue;
struct ath12k_per_peer_tx_stats peer_tx_stats;
struct list_head ppdu_stats_info;
u32 ppdu_stat_list_depth;
struct ath12k_per_peer_tx_stats cached_stats;
u32 last_ppdu_id;
u32 cached_ppdu_id;
bool dfs_block_radar_events;
bool monitor_conf_enabled;
bool monitor_vdev_created;
bool monitor_started;
int monitor_vdev_id;
};
struct ath12k_band_cap {
u32 phy_id;
u32 max_bw_supported;
u32 ht_cap_info;
u32 he_cap_info[2];
u32 he_mcs;
u32 he_cap_phy_info[PSOC_HOST_MAX_PHY_SIZE];
struct ath12k_wmi_ppe_threshold_arg he_ppet;
u16 he_6ghz_capa;
};
struct ath12k_pdev_cap {
u32 supported_bands;
u32 ampdu_density;
u32 vht_cap;
u32 vht_mcs;
u32 he_mcs;
u32 tx_chain_mask;
u32 rx_chain_mask;
u32 tx_chain_mask_shift;
u32 rx_chain_mask_shift;
struct ath12k_band_cap band[NUM_NL80211_BANDS];
};
struct mlo_timestamp {
u32 info;
u32 sync_timestamp_lo_us;
u32 sync_timestamp_hi_us;
u32 mlo_offset_lo;
u32 mlo_offset_hi;
u32 mlo_offset_clks;
u32 mlo_comp_clks;
u32 mlo_comp_timer;
};
struct ath12k_pdev {
struct ath12k *ar;
u32 pdev_id;
struct ath12k_pdev_cap cap;
u8 mac_addr[ETH_ALEN];
struct mlo_timestamp timestamp;
};
struct ath12k_board_data {
const struct firmware *fw;
const void *data;
size_t len;
};
struct ath12k_soc_dp_tx_err_stats {
/* TCL Ring Descriptor unavailable */
u32 desc_na[DP_TCL_NUM_RING_MAX];
/* Other failures during dp_tx due to mem allocation failure
* idr unavailable etc.
*/
atomic_t misc_fail;
};
struct ath12k_soc_dp_stats {
u32 err_ring_pkts;
u32 invalid_rbm;
u32 rxdma_error[HAL_REO_ENTR_RING_RXDMA_ECODE_MAX];
u32 reo_error[HAL_REO_DEST_RING_ERROR_CODE_MAX];
u32 hal_reo_error[DP_REO_DST_RING_MAX];
struct ath12k_soc_dp_tx_err_stats tx_err;
};
/* Master structure to hold the hw data which may be used in core module */
struct ath12k_base {
enum ath12k_hw_rev hw_rev;
struct platform_device *pdev;
struct device *dev;
struct ath12k_qmi qmi;
struct ath12k_wmi_base wmi_ab;
struct completion fw_ready;
int num_radios;
/* HW channel counters frequency value in hertz common to all MACs */
u32 cc_freq_hz;
struct ath12k_htc htc;
struct ath12k_dp dp;
void __iomem *mem;
unsigned long mem_len;
struct {
enum ath12k_bus bus;
const struct ath12k_hif_ops *ops;
} hif;
struct ath12k_ce ce;
struct timer_list rx_replenish_retry;
struct ath12k_hal hal;
/* To synchronize core_start/core_stop */
struct mutex core_lock;
/* Protects data like peers */
spinlock_t base_lock;
struct ath12k_pdev pdevs[MAX_RADIOS];
struct ath12k_pdev __rcu *pdevs_active[MAX_RADIOS];
struct ath12k_wmi_hal_reg_capabilities_ext_arg hal_reg_cap[MAX_RADIOS];
unsigned long long free_vdev_map;
unsigned long long free_vdev_stats_id_map;
struct list_head peers;
wait_queue_head_t peer_mapping_wq;
u8 mac_addr[ETH_ALEN];
bool wmi_ready;
u32 wlan_init_status;
int irq_num[ATH12K_IRQ_NUM_MAX];
struct ath12k_ext_irq_grp ext_irq_grp[ATH12K_EXT_IRQ_GRP_NUM_MAX];
struct napi_struct *napi;
struct ath12k_wmi_target_cap_arg target_caps;
u32 ext_service_bitmap[WMI_SERVICE_EXT_BM_SIZE];
bool pdevs_macaddr_valid;
int bd_api;
const struct ath12k_hw_params *hw_params;
const struct firmware *cal_file;
/* Below regd's are protected by ab->data_lock */
/* This is the regd set for every radio
* by the firmware during initialization
*/
struct ieee80211_regdomain *default_regd[MAX_RADIOS];
/* This regd is set during dynamic country setting
* This may or may not be used during the runtime
*/
struct ieee80211_regdomain *new_regd[MAX_RADIOS];
/* Current DFS Regulatory */
enum ath12k_dfs_region dfs_region;
struct ath12k_soc_dp_stats soc_stats;
unsigned long dev_flags;
struct completion driver_recovery;
struct workqueue_struct *workqueue;
struct work_struct restart_work;
struct workqueue_struct *workqueue_aux;
struct work_struct reset_work;
atomic_t reset_count;
atomic_t recovery_count;
atomic_t recovery_start_count;
bool is_reset;
struct completion reset_complete;
struct completion reconfigure_complete;
struct completion recovery_start;
/* continuous recovery fail count */
atomic_t fail_cont_count;
unsigned long reset_fail_timeout;
struct {
/* protected by data_lock */
u32 fw_crash_counter;
} stats;
u32 pktlog_defs_checksum;
struct ath12k_dbring_cap *db_caps;
u32 num_db_cap;
struct timer_list mon_reap_timer;
struct completion htc_suspend;
u64 fw_soc_drop_count;
bool static_window_map;
/* must be last */
u8 drv_priv[] __aligned(sizeof(void *));
};
int ath12k_core_qmi_firmware_ready(struct ath12k_base *ab);
int ath12k_core_pre_init(struct ath12k_base *ab);
int ath12k_core_init(struct ath12k_base *ath12k);
void ath12k_core_deinit(struct ath12k_base *ath12k);
struct ath12k_base *ath12k_core_alloc(struct device *dev, size_t priv_size,
enum ath12k_bus bus);
void ath12k_core_free(struct ath12k_base *ath12k);
int ath12k_core_fetch_board_data_api_1(struct ath12k_base *ab,
struct ath12k_board_data *bd,
char *filename);
int ath12k_core_fetch_bdf(struct ath12k_base *ath12k,
struct ath12k_board_data *bd);
void ath12k_core_free_bdf(struct ath12k_base *ab, struct ath12k_board_data *bd);
int ath12k_core_check_dt(struct ath12k_base *ath12k);
void ath12k_core_halt(struct ath12k *ar);
int ath12k_core_resume(struct ath12k_base *ab);
int ath12k_core_suspend(struct ath12k_base *ab);
const struct firmware *ath12k_core_firmware_request(struct ath12k_base *ab,
const char *filename);
static inline const char *ath12k_scan_state_str(enum ath12k_scan_state state)
{
switch (state) {
case ATH12K_SCAN_IDLE:
return "idle";
case ATH12K_SCAN_STARTING:
return "starting";
case ATH12K_SCAN_RUNNING:
return "running";
case ATH12K_SCAN_ABORTING:
return "aborting";
}
return "unknown";
}
static inline struct ath12k_skb_cb *ATH12K_SKB_CB(struct sk_buff *skb)
{
BUILD_BUG_ON(sizeof(struct ath12k_skb_cb) >
IEEE80211_TX_INFO_DRIVER_DATA_SIZE);
return (struct ath12k_skb_cb *)&IEEE80211_SKB_CB(skb)->driver_data;
}
static inline struct ath12k_skb_rxcb *ATH12K_SKB_RXCB(struct sk_buff *skb)
{
BUILD_BUG_ON(sizeof(struct ath12k_skb_rxcb) > sizeof(skb->cb));
return (struct ath12k_skb_rxcb *)skb->cb;
}
static inline struct ath12k_vif *ath12k_vif_to_arvif(struct ieee80211_vif *vif)
{
return (struct ath12k_vif *)vif->drv_priv;
}
static inline struct ath12k *ath12k_ab_to_ar(struct ath12k_base *ab,
int mac_id)
{
return ab->pdevs[ath12k_hw_mac_id_to_pdev_id(ab->hw_params, mac_id)].ar;
}
static inline void ath12k_core_create_firmware_path(struct ath12k_base *ab,
const char *filename,
void *buf, size_t buf_len)
{
snprintf(buf, buf_len, "%s/%s/%s", ATH12K_FW_DIR,
ab->hw_params->fw.dir, filename);
}
static inline const char *ath12k_bus_str(enum ath12k_bus bus)
{
switch (bus) {
case ATH12K_BUS_PCI:
return "pci";
}
return "unknown";
}
#endif /* ATH12K_CORE_H */


@@ -0,0 +1,357 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include "core.h"
#include "debug.h"
static int ath12k_dbring_bufs_replenish(struct ath12k *ar,
struct ath12k_dbring *ring,
struct ath12k_dbring_element *buff,
gfp_t gfp)
{
struct ath12k_base *ab = ar->ab;
struct hal_srng *srng;
dma_addr_t paddr;
void *ptr_aligned, *ptr_unaligned, *desc;
int ret;
int buf_id;
u32 cookie;
srng = &ab->hal.srng_list[ring->refill_srng.ring_id];
lockdep_assert_held(&srng->lock);
ath12k_hal_srng_access_begin(ab, srng);
ptr_unaligned = buff->payload;
ptr_aligned = PTR_ALIGN(ptr_unaligned, ring->buf_align);
paddr = dma_map_single(ab->dev, ptr_aligned, ring->buf_sz,
DMA_FROM_DEVICE);
ret = dma_mapping_error(ab->dev, paddr);
if (ret)
goto err;
spin_lock_bh(&ring->idr_lock);
buf_id = idr_alloc(&ring->bufs_idr, buff, 0, ring->bufs_max, gfp);
spin_unlock_bh(&ring->idr_lock);
if (buf_id < 0) {
ret = -ENOBUFS;
goto err_dma_unmap;
}
desc = ath12k_hal_srng_src_get_next_entry(ab, srng);
if (!desc) {
ret = -ENOENT;
goto err_idr_remove;
}
buff->paddr = paddr;
cookie = u32_encode_bits(ar->pdev_idx, DP_RXDMA_BUF_COOKIE_PDEV_ID) |
u32_encode_bits(buf_id, DP_RXDMA_BUF_COOKIE_BUF_ID);
ath12k_hal_rx_buf_addr_info_set(desc, paddr, cookie, 0);
ath12k_hal_srng_access_end(ab, srng);
return 0;
err_idr_remove:
spin_lock_bh(&ring->idr_lock);
idr_remove(&ring->bufs_idr, buf_id);
spin_unlock_bh(&ring->idr_lock);
err_dma_unmap:
dma_unmap_single(ab->dev, paddr, ring->buf_sz,
DMA_FROM_DEVICE);
err:
ath12k_hal_srng_access_end(ab, srng);
return ret;
}
static int ath12k_dbring_fill_bufs(struct ath12k *ar,
struct ath12k_dbring *ring,
gfp_t gfp)
{
struct ath12k_dbring_element *buff;
struct hal_srng *srng;
struct ath12k_base *ab = ar->ab;
int num_remain, req_entries, num_free;
u32 align;
int size, ret;
srng = &ab->hal.srng_list[ring->refill_srng.ring_id];
spin_lock_bh(&srng->lock);
num_free = ath12k_hal_srng_src_num_free(ab, srng, true);
req_entries = min(num_free, ring->bufs_max);
num_remain = req_entries;
align = ring->buf_align;
size = sizeof(*buff) + ring->buf_sz + align - 1;
while (num_remain > 0) {
buff = kzalloc(size, gfp);
if (!buff)
break;
ret = ath12k_dbring_bufs_replenish(ar, ring, buff, gfp);
if (ret) {
ath12k_warn(ab, "failed to replenish db ring num_remain %d req_ent %d\n",
num_remain, req_entries);
kfree(buff);
break;
}
num_remain--;
}
spin_unlock_bh(&srng->lock);
return num_remain;
}
int ath12k_dbring_wmi_cfg_setup(struct ath12k *ar,
struct ath12k_dbring *ring,
enum wmi_direct_buffer_module id)
{
struct ath12k_wmi_pdev_dma_ring_cfg_arg arg = {0};
int ret;
if (id >= WMI_DIRECT_BUF_MAX)
return -EINVAL;
arg.pdev_id = DP_SW2HW_MACID(ring->pdev_id);
arg.module_id = id;
arg.base_paddr_lo = lower_32_bits(ring->refill_srng.paddr);
arg.base_paddr_hi = upper_32_bits(ring->refill_srng.paddr);
arg.head_idx_paddr_lo = lower_32_bits(ring->hp_addr);
arg.head_idx_paddr_hi = upper_32_bits(ring->hp_addr);
arg.tail_idx_paddr_lo = lower_32_bits(ring->tp_addr);
arg.tail_idx_paddr_hi = upper_32_bits(ring->tp_addr);
arg.num_elems = ring->bufs_max;
arg.buf_size = ring->buf_sz;
arg.num_resp_per_event = ring->num_resp_per_event;
arg.event_timeout_ms = ring->event_timeout_ms;
ret = ath12k_wmi_pdev_dma_ring_cfg(ar, &arg);
if (ret) {
ath12k_warn(ar->ab, "failed to setup db ring cfg\n");
return ret;
}
return 0;
}
int ath12k_dbring_set_cfg(struct ath12k *ar, struct ath12k_dbring *ring,
u32 num_resp_per_event, u32 event_timeout_ms,
int (*handler)(struct ath12k *,
struct ath12k_dbring_data *))
{
if (WARN_ON(!ring))
return -EINVAL;
ring->num_resp_per_event = num_resp_per_event;
ring->event_timeout_ms = event_timeout_ms;
ring->handler = handler;
return 0;
}
int ath12k_dbring_buf_setup(struct ath12k *ar,
struct ath12k_dbring *ring,
struct ath12k_dbring_cap *db_cap)
{
struct ath12k_base *ab = ar->ab;
struct hal_srng *srng;
int ret;
srng = &ab->hal.srng_list[ring->refill_srng.ring_id];
ring->bufs_max = ring->refill_srng.size /
ath12k_hal_srng_get_entrysize(ab, HAL_RXDMA_DIR_BUF);
ring->buf_sz = db_cap->min_buf_sz;
ring->buf_align = db_cap->min_buf_align;
ring->pdev_id = db_cap->pdev_id;
ring->hp_addr = ath12k_hal_srng_get_hp_addr(ab, srng);
ring->tp_addr = ath12k_hal_srng_get_tp_addr(ab, srng);
ret = ath12k_dbring_fill_bufs(ar, ring, GFP_KERNEL);
return ret;
}
int ath12k_dbring_srng_setup(struct ath12k *ar, struct ath12k_dbring *ring,
int ring_num, int num_entries)
{
int ret;
ret = ath12k_dp_srng_setup(ar->ab, &ring->refill_srng, HAL_RXDMA_DIR_BUF,
ring_num, ar->pdev_idx, num_entries);
if (ret < 0) {
ath12k_warn(ar->ab, "failed to setup srng: %d ring_id %d\n",
ret, ring_num);
goto err;
}
return 0;
err:
ath12k_dp_srng_cleanup(ar->ab, &ring->refill_srng);
return ret;
}
int ath12k_dbring_get_cap(struct ath12k_base *ab,
u8 pdev_idx,
enum wmi_direct_buffer_module id,
struct ath12k_dbring_cap *db_cap)
{
int i;
if (!ab->num_db_cap || !ab->db_caps)
return -ENOENT;
if (id >= WMI_DIRECT_BUF_MAX)
return -EINVAL;
for (i = 0; i < ab->num_db_cap; i++) {
if (pdev_idx == ab->db_caps[i].pdev_id &&
id == ab->db_caps[i].id) {
*db_cap = ab->db_caps[i];
return 0;
}
}
return -ENOENT;
}
int ath12k_dbring_buffer_release_event(struct ath12k_base *ab,
struct ath12k_dbring_buf_release_event *ev)
{
struct ath12k_dbring *ring = NULL;
struct hal_srng *srng;
struct ath12k *ar;
struct ath12k_dbring_element *buff;
struct ath12k_dbring_data handler_data;
struct ath12k_buffer_addr desc;
u8 *vaddr_unalign;
u32 num_entry, num_buff_reaped;
u8 pdev_idx, rbm;
u32 cookie;
int buf_id;
int size;
dma_addr_t paddr;
int ret = 0;
pdev_idx = le32_to_cpu(ev->fixed.pdev_id);
if (pdev_idx >= ab->num_radios) {
ath12k_warn(ab, "Invalid pdev id %d\n", pdev_idx);
return -EINVAL;
}
if (ev->fixed.num_buf_release_entry !=
ev->fixed.num_meta_data_entry) {
ath12k_warn(ab, "Buffer entry %d mismatch meta entry %d\n",
ev->fixed.num_buf_release_entry,
ev->fixed.num_meta_data_entry);
return -EINVAL;
}
ar = ab->pdevs[pdev_idx].ar;
rcu_read_lock();
if (!rcu_dereference(ab->pdevs_active[pdev_idx])) {
ret = -EINVAL;
goto rcu_unlock;
}
switch (ev->fixed.module_id) {
case WMI_DIRECT_BUF_SPECTRAL:
break;
default:
ring = NULL;
ath12k_warn(ab, "Recv dma buffer release ev on unsupp module %d\n",
ev->fixed.module_id);
break;
}
if (!ring) {
ret = -EINVAL;
goto rcu_unlock;
}
srng = &ab->hal.srng_list[ring->refill_srng.ring_id];
num_entry = le32_to_cpu(ev->fixed.num_buf_release_entry);
size = sizeof(*buff) + ring->buf_sz + ring->buf_align - 1;
num_buff_reaped = 0;
spin_lock_bh(&srng->lock);
while (num_buff_reaped < num_entry) {
desc.info0 = ev->buf_entry[num_buff_reaped].paddr_lo;
desc.info1 = ev->buf_entry[num_buff_reaped].paddr_hi;
handler_data.meta = ev->meta_data[num_buff_reaped];
num_buff_reaped++;
ath12k_hal_rx_buf_addr_info_get(&desc, &paddr, &cookie, &rbm);
buf_id = u32_get_bits(cookie, DP_RXDMA_BUF_COOKIE_BUF_ID);
spin_lock_bh(&ring->idr_lock);
buff = idr_find(&ring->bufs_idr, buf_id);
if (!buff) {
spin_unlock_bh(&ring->idr_lock);
continue;
}
idr_remove(&ring->bufs_idr, buf_id);
spin_unlock_bh(&ring->idr_lock);
dma_unmap_single(ab->dev, buff->paddr, ring->buf_sz,
DMA_FROM_DEVICE);
if (ring->handler) {
vaddr_unalign = buff->payload;
handler_data.data = PTR_ALIGN(vaddr_unalign,
ring->buf_align);
handler_data.data_sz = ring->buf_sz;
ring->handler(ar, &handler_data);
}
memset(buff, 0, size);
ath12k_dbring_bufs_replenish(ar, ring, buff, GFP_ATOMIC);
}
spin_unlock_bh(&srng->lock);
rcu_unlock:
rcu_read_unlock();
return ret;
}
void ath12k_dbring_srng_cleanup(struct ath12k *ar, struct ath12k_dbring *ring)
{
ath12k_dp_srng_cleanup(ar->ab, &ring->refill_srng);
}
void ath12k_dbring_buf_cleanup(struct ath12k *ar, struct ath12k_dbring *ring)
{
struct ath12k_dbring_element *buff;
int buf_id;
spin_lock_bh(&ring->idr_lock);
idr_for_each_entry(&ring->bufs_idr, buff, buf_id) {
idr_remove(&ring->bufs_idr, buf_id);
dma_unmap_single(ar->ab->dev, buff->paddr,
ring->buf_sz, DMA_FROM_DEVICE);
kfree(buff);
}
idr_destroy(&ring->bufs_idr);
spin_unlock_bh(&ring->idr_lock);
}


@@ -0,0 +1,80 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH12K_DBRING_H
#define ATH12K_DBRING_H
#include <linux/types.h>
#include <linux/idr.h>
#include <linux/spinlock.h>
#include "dp.h"
struct ath12k_dbring_element {
dma_addr_t paddr;
u8 payload[];
};
struct ath12k_dbring_data {
void *data;
u32 data_sz;
struct ath12k_wmi_dma_buf_release_meta_data_params meta;
};
struct ath12k_dbring_buf_release_event {
struct ath12k_wmi_dma_buf_release_fixed_params fixed;
const struct ath12k_wmi_dma_buf_release_entry_params *buf_entry;
const struct ath12k_wmi_dma_buf_release_meta_data_params *meta_data;
u32 num_buf_entry;
u32 num_meta;
};
struct ath12k_dbring_cap {
u32 pdev_id;
enum wmi_direct_buffer_module id;
u32 min_elem;
u32 min_buf_sz;
u32 min_buf_align;
};
struct ath12k_dbring {
struct dp_srng refill_srng;
struct idr bufs_idr;
/* Protects bufs_idr */
spinlock_t idr_lock;
dma_addr_t tp_addr;
dma_addr_t hp_addr;
int bufs_max;
u32 pdev_id;
u32 buf_sz;
u32 buf_align;
u32 num_resp_per_event;
u32 event_timeout_ms;
int (*handler)(struct ath12k *ar, struct ath12k_dbring_data *data);
};
int ath12k_dbring_set_cfg(struct ath12k *ar,
struct ath12k_dbring *ring,
u32 num_resp_per_event,
u32 event_timeout_ms,
int (*handler)(struct ath12k *,
struct ath12k_dbring_data *));
int ath12k_dbring_wmi_cfg_setup(struct ath12k *ar,
struct ath12k_dbring *ring,
enum wmi_direct_buffer_module id);
int ath12k_dbring_buf_setup(struct ath12k *ar,
struct ath12k_dbring *ring,
struct ath12k_dbring_cap *db_cap);
int ath12k_dbring_srng_setup(struct ath12k *ar, struct ath12k_dbring *ring,
int ring_num, int num_entries);
int ath12k_dbring_buffer_release_event(struct ath12k_base *ab,
struct ath12k_dbring_buf_release_event *ev);
int ath12k_dbring_get_cap(struct ath12k_base *ab,
u8 pdev_idx,
enum wmi_direct_buffer_module id,
struct ath12k_dbring_cap *db_cap);
void ath12k_dbring_srng_cleanup(struct ath12k *ar, struct ath12k_dbring *ring);
void ath12k_dbring_buf_cleanup(struct ath12k *ar, struct ath12k_dbring *ring);
#endif /* ATH12K_DBRING_H */


@@ -0,0 +1,102 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include <linux/vmalloc.h>
#include "core.h"
#include "debug.h"
void ath12k_info(struct ath12k_base *ab, const char *fmt, ...)
{
struct va_format vaf = {
.fmt = fmt,
};
va_list args;
va_start(args, fmt);
vaf.va = &args;
dev_info(ab->dev, "%pV", &vaf);
/* TODO: Trace the log */
va_end(args);
}
void ath12k_err(struct ath12k_base *ab, const char *fmt, ...)
{
struct va_format vaf = {
.fmt = fmt,
};
va_list args;
va_start(args, fmt);
vaf.va = &args;
dev_err(ab->dev, "%pV", &vaf);
/* TODO: Trace the log */
va_end(args);
}
void ath12k_warn(struct ath12k_base *ab, const char *fmt, ...)
{
struct va_format vaf = {
.fmt = fmt,
};
va_list args;
va_start(args, fmt);
vaf.va = &args;
dev_warn_ratelimited(ab->dev, "%pV", &vaf);
/* TODO: Trace the log */
va_end(args);
}
#ifdef CONFIG_ATH12K_DEBUG
void __ath12k_dbg(struct ath12k_base *ab, enum ath12k_debug_mask mask,
const char *fmt, ...)
{
struct va_format vaf;
va_list args;
va_start(args, fmt);
vaf.fmt = fmt;
vaf.va = &args;
if (ath12k_debug_mask & mask)
dev_dbg(ab->dev, "%pV", &vaf);
/* TODO: trace log */
va_end(args);
}
void ath12k_dbg_dump(struct ath12k_base *ab,
enum ath12k_debug_mask mask,
const char *msg, const char *prefix,
const void *buf, size_t len)
{
char linebuf[256];
size_t linebuflen;
const void *ptr;
if (ath12k_debug_mask & mask) {
if (msg)
__ath12k_dbg(ab, mask, "%s\n", msg);
for (ptr = buf; (ptr - buf) < len; ptr += 16) {
linebuflen = 0;
linebuflen += scnprintf(linebuf + linebuflen,
sizeof(linebuf) - linebuflen,
"%s%08x: ",
(prefix ? prefix : ""),
(unsigned int)(ptr - buf));
hex_dump_to_buffer(ptr, len - (ptr - buf), 16, 1,
linebuf + linebuflen,
sizeof(linebuf) - linebuflen, true);
dev_dbg(ab->dev, "%s\n", linebuf);
}
}
}
#endif /* CONFIG_ATH12K_DEBUG */


@@ -0,0 +1,67 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef _ATH12K_DEBUG_H_
#define _ATH12K_DEBUG_H_
#include "trace.h"
enum ath12k_debug_mask {
ATH12K_DBG_AHB = 0x00000001,
ATH12K_DBG_WMI = 0x00000002,
ATH12K_DBG_HTC = 0x00000004,
ATH12K_DBG_DP_HTT = 0x00000008,
ATH12K_DBG_MAC = 0x00000010,
ATH12K_DBG_BOOT = 0x00000020,
ATH12K_DBG_QMI = 0x00000040,
ATH12K_DBG_DATA = 0x00000080,
ATH12K_DBG_MGMT = 0x00000100,
ATH12K_DBG_REG = 0x00000200,
ATH12K_DBG_TESTMODE = 0x00000400,
ATH12K_DBG_HAL = 0x00000800,
ATH12K_DBG_PCI = 0x00001000,
ATH12K_DBG_DP_TX = 0x00002000,
ATH12K_DBG_DP_RX = 0x00004000,
ATH12K_DBG_ANY = 0xffffffff,
};
__printf(2, 3) void ath12k_info(struct ath12k_base *ab, const char *fmt, ...);
__printf(2, 3) void ath12k_err(struct ath12k_base *ab, const char *fmt, ...);
__printf(2, 3) void ath12k_warn(struct ath12k_base *ab, const char *fmt, ...);
extern unsigned int ath12k_debug_mask;
#ifdef CONFIG_ATH12K_DEBUG
__printf(3, 4) void __ath12k_dbg(struct ath12k_base *ab,
enum ath12k_debug_mask mask,
const char *fmt, ...);
void ath12k_dbg_dump(struct ath12k_base *ab,
enum ath12k_debug_mask mask,
const char *msg, const char *prefix,
const void *buf, size_t len);
#else /* CONFIG_ATH12K_DEBUG */
static inline void __ath12k_dbg(struct ath12k_base *ab,
enum ath12k_debug_mask dbg_mask,
const char *fmt, ...)
{
}
static inline void ath12k_dbg_dump(struct ath12k_base *ab,
enum ath12k_debug_mask mask,
const char *msg, const char *prefix,
const void *buf, size_t len)
{
}
#endif /* CONFIG_ATH12K_DEBUG */
#define ath12k_dbg(ar, dbg_mask, fmt, ...) \
do { \
typeof(dbg_mask) mask = (dbg_mask); \
if (ath12k_debug_mask & mask) \
__ath12k_dbg(ar, mask, fmt, ##__VA_ARGS__); \
} while (0)
#endif /* _ATH12K_DEBUG_H_ */

(Diff not shown because of its large size)

(Diff not shown because of its large size)

(Diff not shown because of its large size)

@@ -0,0 +1,106 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH12K_DP_MON_H
#define ATH12K_DP_MON_H
#include "core.h"
enum dp_monitor_mode {
ATH12K_DP_TX_MONITOR_MODE,
ATH12K_DP_RX_MONITOR_MODE
};
enum dp_mon_tx_ppdu_info_type {
DP_MON_TX_PROT_PPDU_INFO,
DP_MON_TX_DATA_PPDU_INFO
};
enum dp_mon_tx_tlv_status {
DP_MON_TX_FES_SETUP,
DP_MON_TX_FES_STATUS_END,
DP_MON_RX_RESPONSE_REQUIRED_INFO,
DP_MON_RESPONSE_END_STATUS_INFO,
DP_MON_TX_MPDU_START,
DP_MON_TX_MSDU_START,
DP_MON_TX_BUFFER_ADDR,
DP_MON_TX_DATA,
DP_MON_TX_STATUS_PPDU_NOT_DONE,
};
enum dp_mon_tx_medium_protection_type {
DP_MON_TX_MEDIUM_NO_PROTECTION,
DP_MON_TX_MEDIUM_RTS_LEGACY,
DP_MON_TX_MEDIUM_RTS_11AC_STATIC_BW,
DP_MON_TX_MEDIUM_RTS_11AC_DYNAMIC_BW,
DP_MON_TX_MEDIUM_CTS2SELF,
DP_MON_TX_MEDIUM_QOS_NULL_NO_ACK_3ADDR,
DP_MON_TX_MEDIUM_QOS_NULL_NO_ACK_4ADDR
};
struct dp_mon_qosframe_addr4 {
__le16 frame_control;
__le16 duration;
u8 addr1[ETH_ALEN];
u8 addr2[ETH_ALEN];
u8 addr3[ETH_ALEN];
__le16 seq_ctrl;
u8 addr4[ETH_ALEN];
__le16 qos_ctrl;
} __packed;
struct dp_mon_frame_min_one {
__le16 frame_control;
__le16 duration;
u8 addr1[ETH_ALEN];
} __packed;
struct dp_mon_packet_info {
u64 cookie;
u16 dma_length;
bool msdu_continuation;
bool truncated;
};
struct dp_mon_tx_ppdu_info {
u32 ppdu_id;
u8 num_users;
bool is_used;
struct hal_rx_mon_ppdu_info rx_status;
struct list_head dp_tx_mon_mpdu_list;
struct dp_mon_mpdu *tx_mon_mpdu;
};
enum hal_rx_mon_status
ath12k_dp_mon_rx_parse_mon_status(struct ath12k *ar,
struct ath12k_mon_data *pmon,
int mac_id, struct sk_buff *skb,
struct napi_struct *napi);
int ath12k_dp_mon_buf_replenish(struct ath12k_base *ab,
struct dp_rxdma_ring *buf_ring,
int req_entries);
int ath12k_dp_mon_srng_process(struct ath12k *ar, int mac_id,
int *budget, enum dp_monitor_mode monitor_mode,
struct napi_struct *napi);
int ath12k_dp_mon_process_ring(struct ath12k_base *ab, int mac_id,
struct napi_struct *napi, int budget,
enum dp_monitor_mode monitor_mode);
struct sk_buff *ath12k_dp_mon_tx_alloc_skb(void);
enum dp_mon_tx_tlv_status
ath12k_dp_mon_tx_status_get_num_user(u16 tlv_tag,
struct hal_tlv_hdr *tx_tlv,
u8 *num_users);
enum hal_rx_mon_status
ath12k_dp_mon_tx_parse_mon_status(struct ath12k *ar,
struct ath12k_mon_data *pmon,
int mac_id,
struct sk_buff *skb,
struct napi_struct *napi,
u32 ppdu_id);
void ath12k_dp_mon_rx_process_ulofdma(struct hal_rx_mon_ppdu_info *ppdu_info);
int ath12k_dp_mon_rx_process_stats(struct ath12k *ar, int mac_id,
struct napi_struct *napi, int *budget);
#endif

(Diff not shown because of its large size)

@@ -0,0 +1,145 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH12K_DP_RX_H
#define ATH12K_DP_RX_H
#include "core.h"
#include "rx_desc.h"
#include "debug.h"
#define DP_MAX_NWIFI_HDR_LEN 30
struct ath12k_dp_rx_tid {
u8 tid;
u32 *vaddr;
dma_addr_t paddr;
u32 size;
u32 ba_win_sz;
bool active;
/* Info related to rx fragments */
u32 cur_sn;
u16 last_frag_no;
u16 rx_frag_bitmap;
struct sk_buff_head rx_frags;
struct hal_reo_dest_ring *dst_ring_desc;
/* Timer info related to fragments */
struct timer_list frag_timer;
struct ath12k_base *ab;
};
struct ath12k_dp_rx_reo_cache_flush_elem {
struct list_head list;
struct ath12k_dp_rx_tid data;
unsigned long ts;
};
struct ath12k_dp_rx_reo_cmd {
struct list_head list;
struct ath12k_dp_rx_tid data;
int cmd_num;
void (*handler)(struct ath12k_dp *dp, void *ctx,
enum hal_reo_cmd_status status);
};
#define ATH12K_DP_RX_REO_DESC_FREE_THRES 64
#define ATH12K_DP_RX_REO_DESC_FREE_TIMEOUT_MS 1000
enum ath12k_dp_rx_decap_type {
DP_RX_DECAP_TYPE_RAW,
DP_RX_DECAP_TYPE_NATIVE_WIFI,
DP_RX_DECAP_TYPE_ETHERNET2_DIX,
DP_RX_DECAP_TYPE_8023,
};
struct ath12k_dp_rx_rfc1042_hdr {
u8 llc_dsap;
u8 llc_ssap;
u8 llc_ctrl;
u8 snap_oui[3];
__be16 snap_type;
} __packed;
static inline u32 ath12k_he_gi_to_nl80211_he_gi(u8 sgi)
{
u32 ret = 0;
switch (sgi) {
case RX_MSDU_START_SGI_0_8_US:
ret = NL80211_RATE_INFO_HE_GI_0_8;
break;
case RX_MSDU_START_SGI_1_6_US:
ret = NL80211_RATE_INFO_HE_GI_1_6;
break;
case RX_MSDU_START_SGI_3_2_US:
ret = NL80211_RATE_INFO_HE_GI_3_2;
break;
}
return ret;
}
int ath12k_dp_rx_ampdu_start(struct ath12k *ar,
struct ieee80211_ampdu_params *params);
int ath12k_dp_rx_ampdu_stop(struct ath12k *ar,
struct ieee80211_ampdu_params *params);
int ath12k_dp_rx_peer_pn_replay_config(struct ath12k_vif *arvif,
const u8 *peer_addr,
enum set_key_cmd key_cmd,
struct ieee80211_key_conf *key);
void ath12k_dp_rx_peer_tid_cleanup(struct ath12k *ar, struct ath12k_peer *peer);
void ath12k_dp_rx_peer_tid_delete(struct ath12k *ar,
struct ath12k_peer *peer, u8 tid);
int ath12k_dp_rx_peer_tid_setup(struct ath12k *ar, const u8 *peer_mac, int vdev_id,
u8 tid, u32 ba_win_sz, u16 ssn,
enum hal_pn_type pn_type);
void ath12k_dp_htt_htc_t2h_msg_handler(struct ath12k_base *ab,
struct sk_buff *skb);
int ath12k_dp_rx_pdev_reo_setup(struct ath12k_base *ab);
void ath12k_dp_rx_pdev_reo_cleanup(struct ath12k_base *ab);
int ath12k_dp_rx_htt_setup(struct ath12k_base *ab);
int ath12k_dp_rx_alloc(struct ath12k_base *ab);
void ath12k_dp_rx_free(struct ath12k_base *ab);
int ath12k_dp_rx_pdev_alloc(struct ath12k_base *ab, int pdev_idx);
void ath12k_dp_rx_pdev_free(struct ath12k_base *ab, int pdev_idx);
void ath12k_dp_rx_reo_cmd_list_cleanup(struct ath12k_base *ab);
void ath12k_dp_rx_process_reo_status(struct ath12k_base *ab);
int ath12k_dp_rx_process_wbm_err(struct ath12k_base *ab,
struct napi_struct *napi, int budget);
int ath12k_dp_rx_process_err(struct ath12k_base *ab, struct napi_struct *napi,
int budget);
int ath12k_dp_rx_process(struct ath12k_base *ab, int mac_id,
struct napi_struct *napi,
int budget);
int ath12k_dp_rx_bufs_replenish(struct ath12k_base *ab, int mac_id,
struct dp_rxdma_ring *rx_ring,
int req_entries,
enum hal_rx_buf_return_buf_manager mgr,
bool hw_cc);
int ath12k_dp_rx_pdev_mon_attach(struct ath12k *ar);
int ath12k_dp_rx_peer_frag_setup(struct ath12k *ar, const u8 *peer_mac, int vdev_id);
int ath12k_dp_rx_pktlog_start(struct ath12k_base *ab);
int ath12k_dp_rx_pktlog_stop(struct ath12k_base *ab, bool stop_timer);
u8 ath12k_dp_rx_h_l3pad(struct ath12k_base *ab,
struct hal_rx_desc *desc);
struct ath12k_peer *
ath12k_dp_rx_h_find_peer(struct ath12k_base *ab, struct sk_buff *msdu);
u8 ath12k_dp_rx_h_decap_type(struct ath12k_base *ab,
struct hal_rx_desc *desc);
u32 ath12k_dp_rx_h_mpdu_err(struct ath12k_base *ab,
struct hal_rx_desc *desc);
void ath12k_dp_rx_h_ppdu(struct ath12k *ar, struct hal_rx_desc *rx_desc,
struct ieee80211_rx_status *rx_status);
int ath12k_dp_rxdma_ring_sel_config_qcn9274(struct ath12k_base *ab);
int ath12k_dp_rxdma_ring_sel_config_wcn7850(struct ath12k_base *ab);
#endif /* ATH12K_DP_RX_H */

(Diff not shown because of its large size)

@@ -0,0 +1,41 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH12K_DP_TX_H
#define ATH12K_DP_TX_H
#include "core.h"
#include "hal_tx.h"
struct ath12k_dp_htt_wbm_tx_status {
bool acked;
int ack_rssi;
};
int ath12k_dp_tx_htt_h2t_ver_req_msg(struct ath12k_base *ab);
int ath12k_dp_tx(struct ath12k *ar, struct ath12k_vif *arvif,
struct sk_buff *skb);
void ath12k_dp_tx_completion_handler(struct ath12k_base *ab, int ring_id);
int ath12k_dp_tx_htt_h2t_ppdu_stats_req(struct ath12k *ar, u32 mask);
int
ath12k_dp_tx_htt_h2t_ext_stats_req(struct ath12k *ar, u8 type,
struct htt_ext_stats_cfg_params *cfg_params,
u64 cookie);
int ath12k_dp_tx_htt_rx_monitor_mode_ring_config(struct ath12k *ar, bool reset);
int ath12k_dp_tx_htt_rx_filter_setup(struct ath12k_base *ab, u32 ring_id,
int mac_id, enum hal_ring_type ring_type,
int rx_buf_size,
struct htt_rx_ring_tlv_filter *tlv_filter);
void ath12k_dp_tx_put_bank_profile(struct ath12k_dp *dp, u8 bank_id);
int ath12k_dp_tx_htt_tx_filter_setup(struct ath12k_base *ab, u32 ring_id,
int mac_id, enum hal_ring_type ring_type,
int tx_buf_size,
struct htt_tx_ring_tlv_filter *htt_tlv_filter);
int ath12k_dp_tx_htt_tx_monitor_mode_ring_config(struct ath12k *ar, bool reset);
int ath12k_dp_tx_htt_monitor_mode_ring_config(struct ath12k *ar, bool reset);
#endif

(Diff not shown because of its large size)

(Diff not shown because of its large size)

(Diff not shown because of its large size)

@@ -0,0 +1,850 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include "debug.h"
#include "hal.h"
#include "hal_tx.h"
#include "hal_rx.h"
#include "hal_desc.h"
#include "hif.h"
static void ath12k_hal_reo_set_desc_hdr(struct hal_desc_header *hdr,
u8 owner, u8 buffer_type, u32 magic)
{
hdr->info0 = le32_encode_bits(owner, HAL_DESC_HDR_INFO0_OWNER) |
le32_encode_bits(buffer_type, HAL_DESC_HDR_INFO0_BUF_TYPE);
/* Magic pattern in reserved bits for debugging */
hdr->info0 |= le32_encode_bits(magic, HAL_DESC_HDR_INFO0_DBG_RESERVED);
}
static int ath12k_hal_reo_cmd_queue_stats(struct hal_tlv_64_hdr *tlv,
struct ath12k_hal_reo_cmd *cmd)
{
struct hal_reo_get_queue_stats *desc;
tlv->tl = u32_encode_bits(HAL_REO_GET_QUEUE_STATS, HAL_TLV_HDR_TAG) |
u32_encode_bits(sizeof(*desc), HAL_TLV_HDR_LEN);
desc = (struct hal_reo_get_queue_stats *)tlv->value;
memset_startat(desc, 0, queue_addr_lo);
desc->cmd.info0 &= ~cpu_to_le32(HAL_REO_CMD_HDR_INFO0_STATUS_REQUIRED);
if (cmd->flag & HAL_REO_CMD_FLG_NEED_STATUS)
desc->cmd.info0 |= cpu_to_le32(HAL_REO_CMD_HDR_INFO0_STATUS_REQUIRED);
desc->queue_addr_lo = cpu_to_le32(cmd->addr_lo);
desc->info0 = le32_encode_bits(cmd->addr_hi,
HAL_REO_GET_QUEUE_STATS_INFO0_QUEUE_ADDR_HI);
if (cmd->flag & HAL_REO_CMD_FLG_STATS_CLEAR)
desc->info0 |= cpu_to_le32(HAL_REO_GET_QUEUE_STATS_INFO0_CLEAR_STATS);
return le32_get_bits(desc->cmd.info0, HAL_REO_CMD_HDR_INFO0_CMD_NUMBER);
}
static int ath12k_hal_reo_cmd_flush_cache(struct ath12k_hal *hal,
struct hal_tlv_64_hdr *tlv,
struct ath12k_hal_reo_cmd *cmd)
{
struct hal_reo_flush_cache *desc;
u8 avail_slot = ffz(hal->avail_blk_resource);
if (cmd->flag & HAL_REO_CMD_FLG_FLUSH_BLOCK_LATER) {
if (avail_slot >= HAL_MAX_AVAIL_BLK_RES)
return -ENOSPC;
hal->current_blk_index = avail_slot;
}
tlv->tl = u32_encode_bits(HAL_REO_FLUSH_CACHE, HAL_TLV_HDR_TAG) |
u32_encode_bits(sizeof(*desc), HAL_TLV_HDR_LEN);
desc = (struct hal_reo_flush_cache *)tlv->value;
memset_startat(desc, 0, cache_addr_lo);
desc->cmd.info0 &= ~cpu_to_le32(HAL_REO_CMD_HDR_INFO0_STATUS_REQUIRED);
if (cmd->flag & HAL_REO_CMD_FLG_NEED_STATUS)
desc->cmd.info0 |= cpu_to_le32(HAL_REO_CMD_HDR_INFO0_STATUS_REQUIRED);
desc->cache_addr_lo = cpu_to_le32(cmd->addr_lo);
desc->info0 = le32_encode_bits(cmd->addr_hi,
HAL_REO_FLUSH_CACHE_INFO0_CACHE_ADDR_HI);
if (cmd->flag & HAL_REO_CMD_FLG_FLUSH_FWD_ALL_MPDUS)
desc->info0 |= cpu_to_le32(HAL_REO_FLUSH_CACHE_INFO0_FWD_ALL_MPDUS);
if (cmd->flag & HAL_REO_CMD_FLG_FLUSH_BLOCK_LATER) {
desc->info0 |= cpu_to_le32(HAL_REO_FLUSH_CACHE_INFO0_BLOCK_CACHE_USAGE);
desc->info0 |=
le32_encode_bits(avail_slot,
HAL_REO_FLUSH_CACHE_INFO0_BLOCK_RESRC_IDX);
}
if (cmd->flag & HAL_REO_CMD_FLG_FLUSH_NO_INVAL)
desc->info0 |= cpu_to_le32(HAL_REO_FLUSH_CACHE_INFO0_FLUSH_WO_INVALIDATE);
if (cmd->flag & HAL_REO_CMD_FLG_FLUSH_ALL)
desc->info0 |= cpu_to_le32(HAL_REO_FLUSH_CACHE_INFO0_FLUSH_ALL);
return le32_get_bits(desc->cmd.info0, HAL_REO_CMD_HDR_INFO0_CMD_NUMBER);
}
static int ath12k_hal_reo_cmd_update_rx_queue(struct hal_tlv_64_hdr *tlv,
struct ath12k_hal_reo_cmd *cmd)
{
struct hal_reo_update_rx_queue *desc;
tlv->tl = u32_encode_bits(HAL_REO_UPDATE_RX_REO_QUEUE, HAL_TLV_HDR_TAG) |
u32_encode_bits(sizeof(*desc), HAL_TLV_HDR_LEN);
desc = (struct hal_reo_update_rx_queue *)tlv->value;
memset_startat(desc, 0, queue_addr_lo);
desc->cmd.info0 &= ~cpu_to_le32(HAL_REO_CMD_HDR_INFO0_STATUS_REQUIRED);
if (cmd->flag & HAL_REO_CMD_FLG_NEED_STATUS)
desc->cmd.info0 |= cpu_to_le32(HAL_REO_CMD_HDR_INFO0_STATUS_REQUIRED);
desc->queue_addr_lo = cpu_to_le32(cmd->addr_lo);
desc->info0 =
le32_encode_bits(cmd->addr_hi,
HAL_REO_UPD_RX_QUEUE_INFO0_QUEUE_ADDR_HI) |
le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_RX_QUEUE_NUM),
HAL_REO_UPD_RX_QUEUE_INFO0_UPD_RX_QUEUE_NUM) |
le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_VLD),
HAL_REO_UPD_RX_QUEUE_INFO0_UPD_VLD) |
le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_ALDC),
HAL_REO_UPD_RX_QUEUE_INFO0_UPD_ASSOC_LNK_DESC_CNT) |
le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_DIS_DUP_DETECTION),
HAL_REO_UPD_RX_QUEUE_INFO0_UPD_DIS_DUP_DETECTION) |
le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_SOFT_REORDER_EN),
HAL_REO_UPD_RX_QUEUE_INFO0_UPD_SOFT_REORDER_EN) |
le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_AC),
HAL_REO_UPD_RX_QUEUE_INFO0_UPD_AC) |
le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_BAR),
HAL_REO_UPD_RX_QUEUE_INFO0_UPD_BAR) |
le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_RETRY),
HAL_REO_UPD_RX_QUEUE_INFO0_UPD_RETRY) |
le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_CHECK_2K_MODE),
HAL_REO_UPD_RX_QUEUE_INFO0_UPD_CHECK_2K_MODE) |
le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_OOR_MODE),
HAL_REO_UPD_RX_QUEUE_INFO0_UPD_OOR_MODE) |
le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_BA_WINDOW_SIZE),
HAL_REO_UPD_RX_QUEUE_INFO0_UPD_BA_WINDOW_SIZE) |
le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_PN_CHECK),
HAL_REO_UPD_RX_QUEUE_INFO0_UPD_PN_CHECK) |
le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_EVEN_PN),
HAL_REO_UPD_RX_QUEUE_INFO0_UPD_EVEN_PN) |
le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_UNEVEN_PN),
HAL_REO_UPD_RX_QUEUE_INFO0_UPD_UNEVEN_PN) |
le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_PN_HANDLE_ENABLE),
HAL_REO_UPD_RX_QUEUE_INFO0_UPD_PN_HANDLE_ENABLE) |
le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_PN_SIZE),
HAL_REO_UPD_RX_QUEUE_INFO0_UPD_PN_SIZE) |
le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_IGNORE_AMPDU_FLG),
HAL_REO_UPD_RX_QUEUE_INFO0_UPD_IGNORE_AMPDU_FLG) |
le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_SVLD),
HAL_REO_UPD_RX_QUEUE_INFO0_UPD_SVLD) |
le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_SSN),
HAL_REO_UPD_RX_QUEUE_INFO0_UPD_SSN) |
le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_SEQ_2K_ERR),
HAL_REO_UPD_RX_QUEUE_INFO0_UPD_SEQ_2K_ERR) |
le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_PN_VALID),
HAL_REO_UPD_RX_QUEUE_INFO0_UPD_PN_VALID) |
le32_encode_bits(!!(cmd->upd0 & HAL_REO_CMD_UPD0_PN),
HAL_REO_UPD_RX_QUEUE_INFO0_UPD_PN);
desc->info1 =
le32_encode_bits(cmd->rx_queue_num,
HAL_REO_UPD_RX_QUEUE_INFO1_RX_QUEUE_NUMBER) |
le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_VLD),
HAL_REO_UPD_RX_QUEUE_INFO1_VLD) |
le32_encode_bits(u32_get_bits(cmd->upd1, HAL_REO_CMD_UPD1_ALDC),
HAL_REO_UPD_RX_QUEUE_INFO1_ASSOC_LNK_DESC_COUNTER) |
le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_DIS_DUP_DETECTION),
HAL_REO_UPD_RX_QUEUE_INFO1_DIS_DUP_DETECTION) |
le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_SOFT_REORDER_EN),
HAL_REO_UPD_RX_QUEUE_INFO1_SOFT_REORDER_EN) |
le32_encode_bits(u32_get_bits(cmd->upd1, HAL_REO_CMD_UPD1_AC),
HAL_REO_UPD_RX_QUEUE_INFO1_AC) |
le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_BAR),
HAL_REO_UPD_RX_QUEUE_INFO1_BAR) |
le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_CHECK_2K_MODE),
HAL_REO_UPD_RX_QUEUE_INFO1_CHECK_2K_MODE) |
le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_RETRY),
HAL_REO_UPD_RX_QUEUE_INFO1_RETRY) |
le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_OOR_MODE),
HAL_REO_UPD_RX_QUEUE_INFO1_OOR_MODE) |
le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_PN_CHECK),
HAL_REO_UPD_RX_QUEUE_INFO1_PN_CHECK) |
le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_EVEN_PN),
HAL_REO_UPD_RX_QUEUE_INFO1_EVEN_PN) |
le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_UNEVEN_PN),
HAL_REO_UPD_RX_QUEUE_INFO1_UNEVEN_PN) |
le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_PN_HANDLE_ENABLE),
HAL_REO_UPD_RX_QUEUE_INFO1_PN_HANDLE_ENABLE) |
le32_encode_bits(!!(cmd->upd1 & HAL_REO_CMD_UPD1_IGNORE_AMPDU_FLG),
HAL_REO_UPD_RX_QUEUE_INFO1_IGNORE_AMPDU_FLG);
if (cmd->pn_size == 24)
cmd->pn_size = HAL_RX_REO_QUEUE_PN_SIZE_24;
else if (cmd->pn_size == 48)
cmd->pn_size = HAL_RX_REO_QUEUE_PN_SIZE_48;
else if (cmd->pn_size == 128)
cmd->pn_size = HAL_RX_REO_QUEUE_PN_SIZE_128;
if (cmd->ba_window_size < 1)
cmd->ba_window_size = 1;
if (cmd->ba_window_size == 1)
cmd->ba_window_size++;
desc->info2 =
le32_encode_bits(cmd->ba_window_size - 1,
HAL_REO_UPD_RX_QUEUE_INFO2_BA_WINDOW_SIZE) |
le32_encode_bits(cmd->pn_size, HAL_REO_UPD_RX_QUEUE_INFO2_PN_SIZE) |
le32_encode_bits(!!(cmd->upd2 & HAL_REO_CMD_UPD2_SVLD),
HAL_REO_UPD_RX_QUEUE_INFO2_SVLD) |
le32_encode_bits(u32_get_bits(cmd->upd2, HAL_REO_CMD_UPD2_SSN),
HAL_REO_UPD_RX_QUEUE_INFO2_SSN) |
le32_encode_bits(!!(cmd->upd2 & HAL_REO_CMD_UPD2_SEQ_2K_ERR),
HAL_REO_UPD_RX_QUEUE_INFO2_SEQ_2K_ERR) |
le32_encode_bits(!!(cmd->upd2 & HAL_REO_CMD_UPD2_PN_ERR),
HAL_REO_UPD_RX_QUEUE_INFO2_PN_ERR);
return le32_get_bits(desc->cmd.info0, HAL_REO_CMD_HDR_INFO0_CMD_NUMBER);
}
int ath12k_hal_reo_cmd_send(struct ath12k_base *ab, struct hal_srng *srng,
enum hal_reo_cmd_type type,
struct ath12k_hal_reo_cmd *cmd)
{
struct hal_tlv_64_hdr *reo_desc;
int ret;
spin_lock_bh(&srng->lock);
ath12k_hal_srng_access_begin(ab, srng);
reo_desc = ath12k_hal_srng_src_get_next_entry(ab, srng);
if (!reo_desc) {
ret = -ENOBUFS;
goto out;
}
switch (type) {
case HAL_REO_CMD_GET_QUEUE_STATS:
ret = ath12k_hal_reo_cmd_queue_stats(reo_desc, cmd);
break;
case HAL_REO_CMD_FLUSH_CACHE:
ret = ath12k_hal_reo_cmd_flush_cache(&ab->hal, reo_desc, cmd);
break;
case HAL_REO_CMD_UPDATE_RX_QUEUE:
ret = ath12k_hal_reo_cmd_update_rx_queue(reo_desc, cmd);
break;
case HAL_REO_CMD_FLUSH_QUEUE:
case HAL_REO_CMD_UNBLOCK_CACHE:
case HAL_REO_CMD_FLUSH_TIMEOUT_LIST:
ath12k_warn(ab, "Unsupported reo command %d\n", type);
ret = -ENOTSUPP;
break;
default:
ath12k_warn(ab, "Unknown reo command %d\n", type);
ret = -EINVAL;
break;
}
out:
ath12k_hal_srng_access_end(ab, srng);
spin_unlock_bh(&srng->lock);
return ret;
}
void ath12k_hal_rx_buf_addr_info_set(struct ath12k_buffer_addr *binfo,
dma_addr_t paddr, u32 cookie, u8 manager)
{
u32 paddr_lo, paddr_hi;
paddr_lo = lower_32_bits(paddr);
paddr_hi = upper_32_bits(paddr);
binfo->info0 = le32_encode_bits(paddr_lo, BUFFER_ADDR_INFO0_ADDR);
binfo->info1 = le32_encode_bits(paddr_hi, BUFFER_ADDR_INFO1_ADDR) |
le32_encode_bits(cookie, BUFFER_ADDR_INFO1_SW_COOKIE) |
le32_encode_bits(manager, BUFFER_ADDR_INFO1_RET_BUF_MGR);
}
void ath12k_hal_rx_buf_addr_info_get(struct ath12k_buffer_addr *binfo,
dma_addr_t *paddr,
u32 *cookie, u8 *rbm)
{
*paddr = (((u64)le32_get_bits(binfo->info1, BUFFER_ADDR_INFO1_ADDR)) << 32) |
le32_get_bits(binfo->info0, BUFFER_ADDR_INFO0_ADDR);
*cookie = le32_get_bits(binfo->info1, BUFFER_ADDR_INFO1_SW_COOKIE);
*rbm = le32_get_bits(binfo->info1, BUFFER_ADDR_INFO1_RET_BUF_MGR);
}
void ath12k_hal_rx_msdu_link_info_get(struct hal_rx_msdu_link *link, u32 *num_msdus,
u32 *msdu_cookies,
enum hal_rx_buf_return_buf_manager *rbm)
{
struct hal_rx_msdu_details *msdu;
u32 val;
int i;
*num_msdus = HAL_NUM_RX_MSDUS_PER_LINK_DESC;
msdu = &link->msdu_link[0];
*rbm = le32_get_bits(msdu->buf_addr_info.info1,
BUFFER_ADDR_INFO1_RET_BUF_MGR);
for (i = 0; i < *num_msdus; i++) {
msdu = &link->msdu_link[i];
val = le32_get_bits(msdu->buf_addr_info.info0,
BUFFER_ADDR_INFO0_ADDR);
if (val == 0) {
*num_msdus = i;
break;
}
*msdu_cookies = le32_get_bits(msdu->buf_addr_info.info1,
BUFFER_ADDR_INFO1_SW_COOKIE);
msdu_cookies++;
}
}
int ath12k_hal_desc_reo_parse_err(struct ath12k_base *ab,
struct hal_reo_dest_ring *desc,
dma_addr_t *paddr, u32 *desc_bank)
{
enum hal_reo_dest_ring_push_reason push_reason;
enum hal_reo_dest_ring_error_code err_code;
u32 cookie, val;
push_reason = le32_get_bits(desc->info0,
HAL_REO_DEST_RING_INFO0_PUSH_REASON);
err_code = le32_get_bits(desc->info0,
HAL_REO_DEST_RING_INFO0_ERROR_CODE);
ab->soc_stats.reo_error[err_code]++;
if (push_reason != HAL_REO_DEST_RING_PUSH_REASON_ERR_DETECTED &&
push_reason != HAL_REO_DEST_RING_PUSH_REASON_ROUTING_INSTRUCTION) {
ath12k_warn(ab, "expected error push reason code, received %d\n",
push_reason);
return -EINVAL;
}
val = le32_get_bits(desc->info0, HAL_REO_DEST_RING_INFO0_BUFFER_TYPE);
if (val != HAL_REO_DEST_RING_BUFFER_TYPE_LINK_DESC) {
ath12k_warn(ab, "expected buffer type link_desc");
return -EINVAL;
}
ath12k_hal_rx_reo_ent_paddr_get(ab, &desc->buf_addr_info, paddr, &cookie);
*desc_bank = u32_get_bits(cookie, DP_LINK_DESC_BANK_MASK);
return 0;
}
int ath12k_hal_wbm_desc_parse_err(struct ath12k_base *ab, void *desc,
struct hal_rx_wbm_rel_info *rel_info)
{
struct hal_wbm_release_ring *wbm_desc = desc;
struct hal_wbm_release_ring_cc_rx *wbm_cc_desc = desc;
enum hal_wbm_rel_desc_type type;
enum hal_wbm_rel_src_module rel_src;
bool hw_cc_done;
u64 desc_va;
u32 val;
type = le32_get_bits(wbm_desc->info0, HAL_WBM_RELEASE_INFO0_DESC_TYPE);
/* We expect only WBM_REL buffer type */
if (type != HAL_WBM_REL_DESC_TYPE_REL_MSDU) {
WARN_ON(1);
return -EINVAL;
}
rel_src = le32_get_bits(wbm_desc->info0,
HAL_WBM_RELEASE_INFO0_REL_SRC_MODULE);
if (rel_src != HAL_WBM_REL_SRC_MODULE_RXDMA &&
rel_src != HAL_WBM_REL_SRC_MODULE_REO)
return -EINVAL;
/* The format of wbm rel ring desc changes based on the
* hw cookie conversion status
*/
hw_cc_done = le32_get_bits(wbm_desc->info0,
HAL_WBM_RELEASE_RX_INFO0_CC_STATUS);
if (!hw_cc_done) {
val = le32_get_bits(wbm_desc->buf_addr_info.info1,
BUFFER_ADDR_INFO1_RET_BUF_MGR);
if (val != HAL_RX_BUF_RBM_SW3_BM) {
ab->soc_stats.invalid_rbm++;
return -EINVAL;
}
rel_info->cookie = le32_get_bits(wbm_desc->buf_addr_info.info1,
BUFFER_ADDR_INFO1_SW_COOKIE);
rel_info->rx_desc = NULL;
} else {
val = le32_get_bits(wbm_cc_desc->info0,
HAL_WBM_RELEASE_RX_CC_INFO0_RBM);
if (val != HAL_RX_BUF_RBM_SW3_BM) {
ab->soc_stats.invalid_rbm++;
return -EINVAL;
}
rel_info->cookie = le32_get_bits(wbm_cc_desc->info1,
HAL_WBM_RELEASE_RX_CC_INFO1_COOKIE);
desc_va = ((u64)le32_to_cpu(wbm_cc_desc->buf_va_hi) << 32 |
le32_to_cpu(wbm_cc_desc->buf_va_lo));
rel_info->rx_desc =
(struct ath12k_rx_desc_info *)((unsigned long)desc_va);
}
rel_info->err_rel_src = rel_src;
rel_info->hw_cc_done = hw_cc_done;
rel_info->first_msdu = le32_get_bits(wbm_desc->info3,
HAL_WBM_RELEASE_INFO3_FIRST_MSDU);
rel_info->last_msdu = le32_get_bits(wbm_desc->info3,
HAL_WBM_RELEASE_INFO3_LAST_MSDU);
rel_info->continuation = le32_get_bits(wbm_desc->info3,
HAL_WBM_RELEASE_INFO3_CONTINUATION);
if (rel_info->err_rel_src == HAL_WBM_REL_SRC_MODULE_REO) {
rel_info->push_reason =
le32_get_bits(wbm_desc->info0,
HAL_WBM_RELEASE_INFO0_REO_PUSH_REASON);
rel_info->err_code =
le32_get_bits(wbm_desc->info0,
HAL_WBM_RELEASE_INFO0_REO_ERROR_CODE);
} else {
rel_info->push_reason =
le32_get_bits(wbm_desc->info0,
HAL_WBM_RELEASE_INFO0_RXDMA_PUSH_REASON);
rel_info->err_code =
le32_get_bits(wbm_desc->info0,
HAL_WBM_RELEASE_INFO0_RXDMA_ERROR_CODE);
}
return 0;
}
void ath12k_hal_rx_reo_ent_paddr_get(struct ath12k_base *ab,
struct ath12k_buffer_addr *buff_addr,
dma_addr_t *paddr, u32 *cookie)
{
*paddr = ((u64)(le32_get_bits(buff_addr->info1,
BUFFER_ADDR_INFO1_ADDR)) << 32) |
le32_get_bits(buff_addr->info0, BUFFER_ADDR_INFO0_ADDR);
*cookie = le32_get_bits(buff_addr->info1, BUFFER_ADDR_INFO1_SW_COOKIE);
}
void ath12k_hal_rx_msdu_link_desc_set(struct ath12k_base *ab,
struct hal_wbm_release_ring *dst_desc,
struct hal_wbm_release_ring *src_desc,
enum hal_wbm_rel_bm_act action)
{
dst_desc->buf_addr_info = src_desc->buf_addr_info;
dst_desc->info0 |= le32_encode_bits(HAL_WBM_REL_SRC_MODULE_SW,
HAL_WBM_RELEASE_INFO0_REL_SRC_MODULE) |
le32_encode_bits(action, HAL_WBM_RELEASE_INFO0_BM_ACTION) |
le32_encode_bits(HAL_WBM_REL_DESC_TYPE_MSDU_LINK,
HAL_WBM_RELEASE_INFO0_DESC_TYPE);
}
void ath12k_hal_reo_status_queue_stats(struct ath12k_base *ab, struct hal_tlv_64_hdr *tlv,
struct hal_reo_status *status)
{
struct hal_reo_get_queue_stats_status *desc =
(struct hal_reo_get_queue_stats_status *)tlv->value;
status->uniform_hdr.cmd_num =
le32_get_bits(desc->hdr.info0,
HAL_REO_STATUS_HDR_INFO0_STATUS_NUM);
status->uniform_hdr.cmd_status =
le32_get_bits(desc->hdr.info0,
HAL_REO_STATUS_HDR_INFO0_EXEC_STATUS);
ath12k_dbg(ab, ATH12K_DBG_HAL, "Queue stats status:\n");
ath12k_dbg(ab, ATH12K_DBG_HAL, "header: cmd_num %d status %d\n",
status->uniform_hdr.cmd_num,
status->uniform_hdr.cmd_status);
ath12k_dbg(ab, ATH12K_DBG_HAL, "ssn %u cur_idx %u\n",
le32_get_bits(desc->info0,
HAL_REO_GET_QUEUE_STATS_STATUS_INFO0_SSN),
le32_get_bits(desc->info0,
HAL_REO_GET_QUEUE_STATS_STATUS_INFO0_CUR_IDX));
ath12k_dbg(ab, ATH12K_DBG_HAL, "pn = [%08x, %08x, %08x, %08x]\n",
desc->pn[0], desc->pn[1], desc->pn[2], desc->pn[3]);
ath12k_dbg(ab, ATH12K_DBG_HAL, "last_rx: enqueue_tstamp %08x dequeue_tstamp %08x\n",
desc->last_rx_enqueue_timestamp,
desc->last_rx_dequeue_timestamp);
ath12k_dbg(ab, ATH12K_DBG_HAL, "rx_bitmap [%08x %08x %08x %08x %08x %08x %08x %08x]\n",
desc->rx_bitmap[0], desc->rx_bitmap[1], desc->rx_bitmap[2],
desc->rx_bitmap[3], desc->rx_bitmap[4], desc->rx_bitmap[5],
desc->rx_bitmap[6], desc->rx_bitmap[7]);
ath12k_dbg(ab, ATH12K_DBG_HAL, "count: cur_mpdu %u cur_msdu %u\n",
le32_get_bits(desc->info1,
HAL_REO_GET_QUEUE_STATS_STATUS_INFO1_MPDU_COUNT),
le32_get_bits(desc->info1,
HAL_REO_GET_QUEUE_STATS_STATUS_INFO1_MSDU_COUNT));
ath12k_dbg(ab, ATH12K_DBG_HAL, "fwd_timeout %u fwd_bar %u dup_count %u\n",
le32_get_bits(desc->info2,
HAL_REO_GET_QUEUE_STATS_STATUS_INFO2_TIMEOUT_COUNT),
le32_get_bits(desc->info2,
HAL_REO_GET_QUEUE_STATS_STATUS_INFO2_FDTB_COUNT),
le32_get_bits(desc->info2,
HAL_REO_GET_QUEUE_STATS_STATUS_INFO2_DUPLICATE_COUNT));
ath12k_dbg(ab, ATH12K_DBG_HAL, "frames_in_order %u bar_rcvd %u\n",
le32_get_bits(desc->info3,
HAL_REO_GET_QUEUE_STATS_STATUS_INFO3_FIO_COUNT),
le32_get_bits(desc->info3,
HAL_REO_GET_QUEUE_STATS_STATUS_INFO3_BAR_RCVD_CNT));
ath12k_dbg(ab, ATH12K_DBG_HAL, "num_mpdus %d num_msdus %d total_bytes %d\n",
desc->num_mpdu_frames, desc->num_msdu_frames,
desc->total_bytes);
ath12k_dbg(ab, ATH12K_DBG_HAL, "late_rcvd %u win_jump_2k %u hole_cnt %u\n",
le32_get_bits(desc->info4,
HAL_REO_GET_QUEUE_STATS_STATUS_INFO4_LATE_RX_MPDU),
le32_get_bits(desc->info2,
HAL_REO_GET_QUEUE_STATS_STATUS_INFO2_WINDOW_JMP2K),
le32_get_bits(desc->info4,
HAL_REO_GET_QUEUE_STATS_STATUS_INFO4_HOLE_COUNT));
ath12k_dbg(ab, ATH12K_DBG_HAL, "looping count %u\n",
le32_get_bits(desc->info5,
HAL_REO_GET_QUEUE_STATS_STATUS_INFO5_LOOPING_CNT));
}
void ath12k_hal_reo_flush_queue_status(struct ath12k_base *ab, struct hal_tlv_64_hdr *tlv,
struct hal_reo_status *status)
{
struct hal_reo_flush_queue_status *desc =
(struct hal_reo_flush_queue_status *)tlv->value;
status->uniform_hdr.cmd_num =
le32_get_bits(desc->hdr.info0,
HAL_REO_STATUS_HDR_INFO0_STATUS_NUM);
status->uniform_hdr.cmd_status =
le32_get_bits(desc->hdr.info0,
HAL_REO_STATUS_HDR_INFO0_EXEC_STATUS);
status->u.flush_queue.err_detected =
le32_get_bits(desc->info0,
HAL_REO_FLUSH_QUEUE_INFO0_ERR_DETECTED);
}
void ath12k_hal_reo_flush_cache_status(struct ath12k_base *ab, struct hal_tlv_64_hdr *tlv,
struct hal_reo_status *status)
{
struct ath12k_hal *hal = &ab->hal;
struct hal_reo_flush_cache_status *desc =
(struct hal_reo_flush_cache_status *)tlv->value;
status->uniform_hdr.cmd_num =
le32_get_bits(desc->hdr.info0,
HAL_REO_STATUS_HDR_INFO0_STATUS_NUM);
status->uniform_hdr.cmd_status =
le32_get_bits(desc->hdr.info0,
HAL_REO_STATUS_HDR_INFO0_EXEC_STATUS);
status->u.flush_cache.err_detected =
le32_get_bits(desc->info0,
HAL_REO_FLUSH_CACHE_STATUS_INFO0_IS_ERR);
status->u.flush_cache.err_code =
le32_get_bits(desc->info0,
HAL_REO_FLUSH_CACHE_STATUS_INFO0_BLOCK_ERR_CODE);
if (!status->u.flush_cache.err_code)
hal->avail_blk_resource |= BIT(hal->current_blk_index);
status->u.flush_cache.cache_controller_flush_status_hit =
le32_get_bits(desc->info0,
HAL_REO_FLUSH_CACHE_STATUS_INFO0_FLUSH_STATUS_HIT);
status->u.flush_cache.cache_controller_flush_status_desc_type =
le32_get_bits(desc->info0,
HAL_REO_FLUSH_CACHE_STATUS_INFO0_FLUSH_DESC_TYPE);
status->u.flush_cache.cache_controller_flush_status_client_id =
le32_get_bits(desc->info0,
HAL_REO_FLUSH_CACHE_STATUS_INFO0_FLUSH_CLIENT_ID);
status->u.flush_cache.cache_controller_flush_status_err =
le32_get_bits(desc->info0,
HAL_REO_FLUSH_CACHE_STATUS_INFO0_FLUSH_ERR);
status->u.flush_cache.cache_controller_flush_status_cnt =
le32_get_bits(desc->info0,
HAL_REO_FLUSH_CACHE_STATUS_INFO0_FLUSH_COUNT);
}
void ath12k_hal_reo_unblk_cache_status(struct ath12k_base *ab, struct hal_tlv_64_hdr *tlv,
struct hal_reo_status *status)
{
struct ath12k_hal *hal = &ab->hal;
struct hal_reo_unblock_cache_status *desc =
(struct hal_reo_unblock_cache_status *)tlv->value;
status->uniform_hdr.cmd_num =
le32_get_bits(desc->hdr.info0,
HAL_REO_STATUS_HDR_INFO0_STATUS_NUM);
status->uniform_hdr.cmd_status =
le32_get_bits(desc->hdr.info0,
HAL_REO_STATUS_HDR_INFO0_EXEC_STATUS);
status->u.unblock_cache.err_detected =
le32_get_bits(desc->info0,
HAL_REO_UNBLOCK_CACHE_STATUS_INFO0_IS_ERR);
status->u.unblock_cache.unblock_type =
le32_get_bits(desc->info0,
HAL_REO_UNBLOCK_CACHE_STATUS_INFO0_TYPE);
if (!status->u.unblock_cache.err_detected &&
status->u.unblock_cache.unblock_type ==
HAL_REO_STATUS_UNBLOCK_BLOCKING_RESOURCE)
hal->avail_blk_resource &= ~BIT(hal->current_blk_index);
}
void ath12k_hal_reo_flush_timeout_list_status(struct ath12k_base *ab,
struct hal_tlv_64_hdr *tlv,
struct hal_reo_status *status)
{
struct hal_reo_flush_timeout_list_status *desc =
(struct hal_reo_flush_timeout_list_status *)tlv->value;
status->uniform_hdr.cmd_num =
le32_get_bits(desc->hdr.info0,
HAL_REO_STATUS_HDR_INFO0_STATUS_NUM);
status->uniform_hdr.cmd_status =
le32_get_bits(desc->hdr.info0,
HAL_REO_STATUS_HDR_INFO0_EXEC_STATUS);
status->u.timeout_list.err_detected =
le32_get_bits(desc->info0,
HAL_REO_FLUSH_TIMEOUT_STATUS_INFO0_IS_ERR);
status->u.timeout_list.list_empty =
le32_get_bits(desc->info0,
HAL_REO_FLUSH_TIMEOUT_STATUS_INFO0_LIST_EMPTY);
status->u.timeout_list.release_desc_cnt =
le32_get_bits(desc->info1,
HAL_REO_FLUSH_TIMEOUT_STATUS_INFO1_REL_DESC_COUNT);
status->u.timeout_list.fwd_buf_cnt =
le32_get_bits(desc->info1,
HAL_REO_FLUSH_TIMEOUT_STATUS_INFO1_FWD_BUF_COUNT);
}
void ath12k_hal_reo_desc_thresh_reached_status(struct ath12k_base *ab,
struct hal_tlv_64_hdr *tlv,
struct hal_reo_status *status)
{
struct hal_reo_desc_thresh_reached_status *desc =
(struct hal_reo_desc_thresh_reached_status *)tlv->value;
status->uniform_hdr.cmd_num =
le32_get_bits(desc->hdr.info0,
HAL_REO_STATUS_HDR_INFO0_STATUS_NUM);
status->uniform_hdr.cmd_status =
le32_get_bits(desc->hdr.info0,
HAL_REO_STATUS_HDR_INFO0_EXEC_STATUS);
status->u.desc_thresh_reached.threshold_idx =
le32_get_bits(desc->info0,
HAL_REO_DESC_THRESH_STATUS_INFO0_THRESH_INDEX);
status->u.desc_thresh_reached.link_desc_counter0 =
le32_get_bits(desc->info1,
HAL_REO_DESC_THRESH_STATUS_INFO1_LINK_DESC_COUNTER0);
status->u.desc_thresh_reached.link_desc_counter1 =
le32_get_bits(desc->info2,
HAL_REO_DESC_THRESH_STATUS_INFO2_LINK_DESC_COUNTER1);
status->u.desc_thresh_reached.link_desc_counter2 =
le32_get_bits(desc->info3,
HAL_REO_DESC_THRESH_STATUS_INFO3_LINK_DESC_COUNTER2);
status->u.desc_thresh_reached.link_desc_counter_sum =
le32_get_bits(desc->info4,
HAL_REO_DESC_THRESH_STATUS_INFO4_LINK_DESC_COUNTER_SUM);
}
void ath12k_hal_reo_update_rx_reo_queue_status(struct ath12k_base *ab,
struct hal_tlv_64_hdr *tlv,
struct hal_reo_status *status)
{
struct hal_reo_status_hdr *desc =
(struct hal_reo_status_hdr *)tlv->value;
status->uniform_hdr.cmd_num =
le32_get_bits(desc->info0,
HAL_REO_STATUS_HDR_INFO0_STATUS_NUM);
status->uniform_hdr.cmd_status =
le32_get_bits(desc->info0,
HAL_REO_STATUS_HDR_INFO0_EXEC_STATUS);
}
u32 ath12k_hal_reo_qdesc_size(u32 ba_window_size, u8 tid)
{
u32 num_ext_desc;
if (ba_window_size <= 1) {
if (tid != HAL_DESC_REO_NON_QOS_TID)
num_ext_desc = 1;
else
num_ext_desc = 0;
} else if (ba_window_size <= 105) {
num_ext_desc = 1;
} else if (ba_window_size <= 210) {
num_ext_desc = 2;
} else {
num_ext_desc = 3;
}
return sizeof(struct hal_rx_reo_queue) +
(num_ext_desc * sizeof(struct hal_rx_reo_queue_ext));
}
void ath12k_hal_reo_qdesc_setup(struct hal_rx_reo_queue *qdesc,
int tid, u32 ba_window_size,
u32 start_seq, enum hal_pn_type type)
{
struct hal_rx_reo_queue_ext *ext_desc;
memset(qdesc, 0, sizeof(*qdesc));
ath12k_hal_reo_set_desc_hdr(&qdesc->desc_hdr, HAL_DESC_REO_OWNED,
HAL_DESC_REO_QUEUE_DESC,
REO_QUEUE_DESC_MAGIC_DEBUG_PATTERN_0);
qdesc->rx_queue_num = le32_encode_bits(tid, HAL_RX_REO_QUEUE_RX_QUEUE_NUMBER);
qdesc->info0 =
le32_encode_bits(1, HAL_RX_REO_QUEUE_INFO0_VLD) |
le32_encode_bits(1, HAL_RX_REO_QUEUE_INFO0_ASSOC_LNK_DESC_COUNTER) |
le32_encode_bits(ath12k_tid_to_ac(tid), HAL_RX_REO_QUEUE_INFO0_AC);
if (ba_window_size < 1)
ba_window_size = 1;
if (ba_window_size == 1 && tid != HAL_DESC_REO_NON_QOS_TID)
ba_window_size++;
if (ba_window_size == 1)
qdesc->info0 |= le32_encode_bits(1, HAL_RX_REO_QUEUE_INFO0_RETRY);
qdesc->info0 |= le32_encode_bits(ba_window_size - 1,
HAL_RX_REO_QUEUE_INFO0_BA_WINDOW_SIZE);
switch (type) {
case HAL_PN_TYPE_NONE:
case HAL_PN_TYPE_WAPI_EVEN:
case HAL_PN_TYPE_WAPI_UNEVEN:
break;
case HAL_PN_TYPE_WPA:
qdesc->info0 |=
le32_encode_bits(1, HAL_RX_REO_QUEUE_INFO0_PN_CHECK) |
le32_encode_bits(HAL_RX_REO_QUEUE_PN_SIZE_48,
HAL_RX_REO_QUEUE_INFO0_PN_SIZE);
break;
}
/* TODO: Set Ignore ampdu flags based on BA window size and/or
* AMPDU capabilities
*/
qdesc->info0 |= le32_encode_bits(1, HAL_RX_REO_QUEUE_INFO0_IGNORE_AMPDU_FLG);
qdesc->info1 |= le32_encode_bits(0, HAL_RX_REO_QUEUE_INFO1_SVLD);
if (start_seq <= 0xfff)
qdesc->info1 = le32_encode_bits(start_seq,
HAL_RX_REO_QUEUE_INFO1_SSN);
if (tid == HAL_DESC_REO_NON_QOS_TID)
return;
ext_desc = qdesc->ext_desc;
/* TODO: HW queue descriptors are currently allocated for the max BA
* window size for all QOS TIDs so that the same descriptor can be
* reused later when an ADDBA request is received. This should be
* changed to allocate HW queue descriptors based on the BA window
* size being negotiated (0 for non-BA cases), reallocate when the BA
* window size changes, and also send a WMI message to the FW to update
* the REO queue descriptor in the Rx peer entry as part of
* dp_rx_tid_update.
*/
memset(ext_desc, 0, 3 * sizeof(*ext_desc));
ath12k_hal_reo_set_desc_hdr(&ext_desc->desc_hdr, HAL_DESC_REO_OWNED,
HAL_DESC_REO_QUEUE_EXT_DESC,
REO_QUEUE_DESC_MAGIC_DEBUG_PATTERN_1);
ext_desc++;
ath12k_hal_reo_set_desc_hdr(&ext_desc->desc_hdr, HAL_DESC_REO_OWNED,
HAL_DESC_REO_QUEUE_EXT_DESC,
REO_QUEUE_DESC_MAGIC_DEBUG_PATTERN_2);
ext_desc++;
ath12k_hal_reo_set_desc_hdr(&ext_desc->desc_hdr, HAL_DESC_REO_OWNED,
HAL_DESC_REO_QUEUE_EXT_DESC,
REO_QUEUE_DESC_MAGIC_DEBUG_PATTERN_3);
}
void ath12k_hal_reo_init_cmd_ring(struct ath12k_base *ab,
struct hal_srng *srng)
{
struct hal_srng_params params;
struct hal_tlv_64_hdr *tlv;
struct hal_reo_get_queue_stats *desc;
int i, cmd_num = 1;
int entry_size;
u8 *entry;
memset(&params, 0, sizeof(params));
entry_size = ath12k_hal_srng_get_entrysize(ab, HAL_REO_CMD);
ath12k_hal_srng_get_params(ab, srng, &params);
entry = (u8 *)params.ring_base_vaddr;
for (i = 0; i < params.num_entries; i++) {
tlv = (struct hal_tlv_64_hdr *)entry;
desc = (struct hal_reo_get_queue_stats *)tlv->value;
desc->cmd.info0 = le32_encode_bits(cmd_num++,
HAL_REO_CMD_HDR_INFO0_CMD_NUMBER);
entry += entry_size;
}
}
void ath12k_hal_reo_hw_setup(struct ath12k_base *ab, u32 ring_hash_map)
{
u32 reo_base = HAL_SEQ_WCSS_UMAC_REO_REG;
u32 val;
val = ath12k_hif_read32(ab, reo_base + HAL_REO1_GEN_ENABLE);
val |= u32_encode_bits(1, HAL_REO1_GEN_ENABLE_AGING_LIST_ENABLE) |
u32_encode_bits(1, HAL_REO1_GEN_ENABLE_AGING_FLUSH_ENABLE);
ath12k_hif_write32(ab, reo_base + HAL_REO1_GEN_ENABLE, val);
val = ath12k_hif_read32(ab, reo_base + HAL_REO1_MISC_CTRL_ADDR(ab));
val &= ~(HAL_REO1_MISC_CTL_FRAG_DST_RING |
HAL_REO1_MISC_CTL_BAR_DST_RING);
val |= u32_encode_bits(HAL_SRNG_RING_ID_REO2SW0,
HAL_REO1_MISC_CTL_FRAG_DST_RING);
val |= u32_encode_bits(HAL_SRNG_RING_ID_REO2SW0,
HAL_REO1_MISC_CTL_BAR_DST_RING);
ath12k_hif_write32(ab, reo_base + HAL_REO1_MISC_CTRL_ADDR(ab), val);
ath12k_hif_write32(ab, reo_base + HAL_REO1_AGING_THRESH_IX_0(ab),
HAL_DEFAULT_BE_BK_VI_REO_TIMEOUT_USEC);
ath12k_hif_write32(ab, reo_base + HAL_REO1_AGING_THRESH_IX_1(ab),
HAL_DEFAULT_BE_BK_VI_REO_TIMEOUT_USEC);
ath12k_hif_write32(ab, reo_base + HAL_REO1_AGING_THRESH_IX_2(ab),
HAL_DEFAULT_BE_BK_VI_REO_TIMEOUT_USEC);
ath12k_hif_write32(ab, reo_base + HAL_REO1_AGING_THRESH_IX_3(ab),
HAL_DEFAULT_VO_REO_TIMEOUT_USEC);
ath12k_hif_write32(ab, reo_base + HAL_REO1_DEST_RING_CTRL_IX_2,
ring_hash_map);
ath12k_hif_write32(ab, reo_base + HAL_REO1_DEST_RING_CTRL_IX_3,
ring_hash_map);
}


@@ -0,0 +1,704 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH12K_HAL_RX_H
#define ATH12K_HAL_RX_H
struct hal_rx_wbm_rel_info {
u32 cookie;
enum hal_wbm_rel_src_module err_rel_src;
enum hal_reo_dest_ring_push_reason push_reason;
u32 err_code;
bool first_msdu;
bool last_msdu;
bool continuation;
void *rx_desc;
bool hw_cc_done;
};
#define HAL_INVALID_PEERID 0xffff
#define VHT_SIG_SU_NSS_MASK 0x7
#define HAL_RX_MAX_MCS 12
#define HAL_RX_MAX_NSS 8
#define HAL_RX_MPDU_INFO_PN_GET_BYTE1(__val) \
le32_get_bits((__val), GENMASK(7, 0))
#define HAL_RX_MPDU_INFO_PN_GET_BYTE2(__val) \
le32_get_bits((__val), GENMASK(15, 8))
#define HAL_RX_MPDU_INFO_PN_GET_BYTE3(__val) \
le32_get_bits((__val), GENMASK(23, 16))
#define HAL_RX_MPDU_INFO_PN_GET_BYTE4(__val) \
le32_get_bits((__val), GENMASK(31, 24))
struct hal_rx_mon_status_tlv_hdr {
u32 hdr;
u8 value[];
};
enum hal_rx_su_mu_coding {
HAL_RX_SU_MU_CODING_BCC,
HAL_RX_SU_MU_CODING_LDPC,
HAL_RX_SU_MU_CODING_MAX,
};
enum hal_rx_gi {
HAL_RX_GI_0_8_US,
HAL_RX_GI_0_4_US,
HAL_RX_GI_1_6_US,
HAL_RX_GI_3_2_US,
HAL_RX_GI_MAX,
};
enum hal_rx_bw {
HAL_RX_BW_20MHZ,
HAL_RX_BW_40MHZ,
HAL_RX_BW_80MHZ,
HAL_RX_BW_160MHZ,
HAL_RX_BW_MAX,
};
enum hal_rx_preamble {
HAL_RX_PREAMBLE_11A,
HAL_RX_PREAMBLE_11B,
HAL_RX_PREAMBLE_11N,
HAL_RX_PREAMBLE_11AC,
HAL_RX_PREAMBLE_11AX,
HAL_RX_PREAMBLE_MAX,
};
enum hal_rx_reception_type {
HAL_RX_RECEPTION_TYPE_SU,
HAL_RX_RECEPTION_TYPE_MU_MIMO,
HAL_RX_RECEPTION_TYPE_MU_OFDMA,
HAL_RX_RECEPTION_TYPE_MU_OFDMA_MIMO,
HAL_RX_RECEPTION_TYPE_MAX,
};
enum hal_rx_legacy_rate {
HAL_RX_LEGACY_RATE_1_MBPS,
HAL_RX_LEGACY_RATE_2_MBPS,
HAL_RX_LEGACY_RATE_5_5_MBPS,
HAL_RX_LEGACY_RATE_6_MBPS,
HAL_RX_LEGACY_RATE_9_MBPS,
HAL_RX_LEGACY_RATE_11_MBPS,
HAL_RX_LEGACY_RATE_12_MBPS,
HAL_RX_LEGACY_RATE_18_MBPS,
HAL_RX_LEGACY_RATE_24_MBPS,
HAL_RX_LEGACY_RATE_36_MBPS,
HAL_RX_LEGACY_RATE_48_MBPS,
HAL_RX_LEGACY_RATE_54_MBPS,
HAL_RX_LEGACY_RATE_INVALID,
};
#define HAL_TLV_STATUS_PPDU_NOT_DONE 0
#define HAL_TLV_STATUS_PPDU_DONE 1
#define HAL_TLV_STATUS_BUF_DONE 2
#define HAL_TLV_STATUS_PPDU_NON_STD_DONE 3
#define HAL_RX_FCS_LEN 4
enum hal_rx_mon_status {
HAL_RX_MON_STATUS_PPDU_NOT_DONE,
HAL_RX_MON_STATUS_PPDU_DONE,
HAL_RX_MON_STATUS_BUF_DONE,
};
#define HAL_RX_MAX_MPDU 256
#define HAL_RX_NUM_WORDS_PER_PPDU_BITMAP (HAL_RX_MAX_MPDU >> 5)
struct hal_rx_user_status {
u32 mcs:4,
nss:3,
ofdma_info_valid:1,
ul_ofdma_ru_start_index:7,
ul_ofdma_ru_width:7,
ul_ofdma_ru_size:8;
u32 ul_ofdma_user_v0_word0;
u32 ul_ofdma_user_v0_word1;
u32 ast_index;
u32 tid;
u16 tcp_msdu_count;
u16 tcp_ack_msdu_count;
u16 udp_msdu_count;
u16 other_msdu_count;
u16 frame_control;
u8 frame_control_info_valid;
u8 data_sequence_control_info_valid;
u16 first_data_seq_ctrl;
u32 preamble_type;
u16 ht_flags;
u16 vht_flags;
u16 he_flags;
u8 rs_flags;
u8 ldpc;
u32 mpdu_cnt_fcs_ok;
u32 mpdu_cnt_fcs_err;
u32 mpdu_fcs_ok_bitmap[HAL_RX_NUM_WORDS_PER_PPDU_BITMAP];
u32 mpdu_ok_byte_count;
u32 mpdu_err_byte_count;
};
#define HAL_MAX_UL_MU_USERS 37
struct hal_rx_mon_ppdu_info {
u32 ppdu_id;
u32 last_ppdu_id;
u64 ppdu_ts;
u32 num_mpdu_fcs_ok;
u32 num_mpdu_fcs_err;
u32 preamble_type;
u32 mpdu_len;
u16 chan_num;
u16 tcp_msdu_count;
u16 tcp_ack_msdu_count;
u16 udp_msdu_count;
u16 other_msdu_count;
u16 peer_id;
u8 rate;
u8 mcs;
u8 nss;
u8 bw;
u8 vht_flag_values1;
u8 vht_flag_values2;
u8 vht_flag_values3[4];
u8 vht_flag_values4;
u8 vht_flag_values5;
u16 vht_flag_values6;
u8 is_stbc;
u8 gi;
u8 sgi;
u8 ldpc;
u8 beamformed;
u8 rssi_comb;
u16 tid;
u8 fc_valid;
u16 ht_flags;
u16 vht_flags;
u16 he_flags;
u16 he_mu_flags;
u8 dcm;
u8 ru_alloc;
u8 reception_type;
u64 tsft;
u64 rx_duration;
u16 frame_control;
u32 ast_index;
u8 rs_fcs_err;
u8 rs_flags;
u8 cck_flag;
u8 ofdm_flag;
u8 ulofdma_flag;
u8 frame_control_info_valid;
u16 he_per_user_1;
u16 he_per_user_2;
u8 he_per_user_position;
u8 he_per_user_known;
u16 he_flags1;
u16 he_flags2;
u8 he_RU[4];
u16 he_data1;
u16 he_data2;
u16 he_data3;
u16 he_data4;
u16 he_data5;
u16 he_data6;
u32 ppdu_len;
u32 prev_ppdu_id;
u32 device_id;
u16 first_data_seq_ctrl;
u8 monitor_direct_used;
u8 data_sequence_control_info_valid;
u8 ltf_size;
u8 rxpcu_filter_pass;
s8 rssi_chain[8][8];
u32 num_users;
u32 mpdu_fcs_ok_bitmap[HAL_RX_NUM_WORDS_PER_PPDU_BITMAP];
u8 addr1[ETH_ALEN];
u8 addr2[ETH_ALEN];
u8 addr3[ETH_ALEN];
u8 addr4[ETH_ALEN];
struct hal_rx_user_status userstats[HAL_MAX_UL_MU_USERS];
u8 userid;
u16 ampdu_id[HAL_MAX_UL_MU_USERS];
bool first_msdu_in_mpdu;
bool is_ampdu;
u8 medium_prot_type;
};
#define HAL_RX_PPDU_START_INFO0_PPDU_ID GENMASK(15, 0)
struct hal_rx_ppdu_start {
__le32 info0;
__le32 chan_num;
__le32 ppdu_start_ts;
} __packed;
#define HAL_RX_PPDU_END_USER_STATS_INFO0_MPDU_CNT_FCS_ERR GENMASK(25, 16)
#define HAL_RX_PPDU_END_USER_STATS_INFO1_MPDU_CNT_FCS_OK GENMASK(8, 0)
#define HAL_RX_PPDU_END_USER_STATS_INFO1_FC_VALID BIT(9)
#define HAL_RX_PPDU_END_USER_STATS_INFO1_QOS_CTRL_VALID BIT(10)
#define HAL_RX_PPDU_END_USER_STATS_INFO1_HT_CTRL_VALID BIT(11)
#define HAL_RX_PPDU_END_USER_STATS_INFO1_PKT_TYPE GENMASK(23, 20)
#define HAL_RX_PPDU_END_USER_STATS_INFO2_AST_INDEX GENMASK(15, 0)
#define HAL_RX_PPDU_END_USER_STATS_INFO2_FRAME_CTRL GENMASK(31, 16)
#define HAL_RX_PPDU_END_USER_STATS_INFO3_QOS_CTRL GENMASK(31, 16)
#define HAL_RX_PPDU_END_USER_STATS_INFO4_UDP_MSDU_CNT GENMASK(15, 0)
#define HAL_RX_PPDU_END_USER_STATS_INFO4_TCP_MSDU_CNT GENMASK(31, 16)
#define HAL_RX_PPDU_END_USER_STATS_INFO5_OTHER_MSDU_CNT GENMASK(15, 0)
#define HAL_RX_PPDU_END_USER_STATS_INFO5_TCP_ACK_MSDU_CNT GENMASK(31, 16)
#define HAL_RX_PPDU_END_USER_STATS_INFO6_TID_BITMAP GENMASK(15, 0)
#define HAL_RX_PPDU_END_USER_STATS_INFO6_TID_EOSP_BITMAP GENMASK(31, 16)
#define HAL_RX_PPDU_END_USER_STATS_MPDU_DELIM_OK_BYTE_COUNT GENMASK(24, 0)
#define HAL_RX_PPDU_END_USER_STATS_MPDU_DELIM_ERR_BYTE_COUNT GENMASK(24, 0)
struct hal_rx_ppdu_end_user_stats {
__le32 rsvd0[2];
__le32 info0;
__le32 info1;
__le32 info2;
__le32 info3;
__le32 ht_ctrl;
__le32 rsvd1[2];
__le32 info4;
__le32 info5;
__le32 usr_resp_ref;
__le32 info6;
__le32 rsvd3[4];
__le32 mpdu_ok_cnt;
__le32 rsvd4;
__le32 mpdu_err_cnt;
__le32 rsvd5[2];
__le32 usr_resp_ref_ext;
__le32 rsvd6;
} __packed;
struct hal_rx_ppdu_end_user_stats_ext {
__le32 info0;
__le32 info1;
__le32 info2;
__le32 info3;
__le32 info4;
__le32 info5;
__le32 info6;
} __packed;
#define HAL_RX_HT_SIG_INFO_INFO0_MCS GENMASK(6, 0)
#define HAL_RX_HT_SIG_INFO_INFO0_BW BIT(7)
#define HAL_RX_HT_SIG_INFO_INFO1_STBC GENMASK(5, 4)
#define HAL_RX_HT_SIG_INFO_INFO1_FEC_CODING BIT(6)
#define HAL_RX_HT_SIG_INFO_INFO1_GI BIT(7)
struct hal_rx_ht_sig_info {
__le32 info0;
__le32 info1;
} __packed;
#define HAL_RX_LSIG_B_INFO_INFO0_RATE GENMASK(3, 0)
#define HAL_RX_LSIG_B_INFO_INFO0_LEN GENMASK(15, 4)
struct hal_rx_lsig_b_info {
__le32 info0;
} __packed;
#define HAL_RX_LSIG_A_INFO_INFO0_RATE GENMASK(3, 0)
#define HAL_RX_LSIG_A_INFO_INFO0_LEN GENMASK(16, 5)
#define HAL_RX_LSIG_A_INFO_INFO0_PKT_TYPE GENMASK(27, 24)
struct hal_rx_lsig_a_info {
__le32 info0;
} __packed;
#define HAL_RX_VHT_SIG_A_INFO_INFO0_BW GENMASK(1, 0)
#define HAL_RX_VHT_SIG_A_INFO_INFO0_STBC BIT(3)
#define HAL_RX_VHT_SIG_A_INFO_INFO0_GROUP_ID GENMASK(9, 4)
#define HAL_RX_VHT_SIG_A_INFO_INFO0_NSTS GENMASK(21, 10)
#define HAL_RX_VHT_SIG_A_INFO_INFO1_GI_SETTING GENMASK(1, 0)
#define HAL_RX_VHT_SIG_A_INFO_INFO1_SU_MU_CODING BIT(2)
#define HAL_RX_VHT_SIG_A_INFO_INFO1_MCS GENMASK(7, 4)
#define HAL_RX_VHT_SIG_A_INFO_INFO1_BEAMFORMED BIT(8)
struct hal_rx_vht_sig_a_info {
__le32 info0;
__le32 info1;
} __packed;
enum hal_rx_vht_sig_a_gi_setting {
HAL_RX_VHT_SIG_A_NORMAL_GI = 0,
HAL_RX_VHT_SIG_A_SHORT_GI = 1,
HAL_RX_VHT_SIG_A_SHORT_GI_AMBIGUITY = 3,
};
#define HE_GI_0_8 0
#define HE_GI_0_4 1
#define HE_GI_1_6 2
#define HE_GI_3_2 3
#define HE_LTF_1_X 0
#define HE_LTF_2_X 1
#define HE_LTF_4_X 2
#define HAL_RX_HE_SIG_A_SU_INFO_INFO0_TRANSMIT_MCS GENMASK(6, 3)
#define HAL_RX_HE_SIG_A_SU_INFO_INFO0_DCM BIT(7)
#define HAL_RX_HE_SIG_A_SU_INFO_INFO0_TRANSMIT_BW GENMASK(20, 19)
#define HAL_RX_HE_SIG_A_SU_INFO_INFO0_CP_LTF_SIZE GENMASK(22, 21)
#define HAL_RX_HE_SIG_A_SU_INFO_INFO0_NSTS GENMASK(25, 23)
#define HAL_RX_HE_SIG_A_SU_INFO_INFO0_BSS_COLOR GENMASK(13, 8)
#define HAL_RX_HE_SIG_A_SU_INFO_INFO0_SPATIAL_REUSE GENMASK(18, 15)
#define HAL_RX_HE_SIG_A_SU_INFO_INFO0_FORMAT_IND BIT(0)
#define HAL_RX_HE_SIG_A_SU_INFO_INFO0_BEAM_CHANGE BIT(1)
#define HAL_RX_HE_SIG_A_SU_INFO_INFO0_DL_UL_FLAG BIT(2)
#define HAL_RX_HE_SIG_A_SU_INFO_INFO1_TXOP_DURATION GENMASK(6, 0)
#define HAL_RX_HE_SIG_A_SU_INFO_INFO1_CODING BIT(7)
#define HAL_RX_HE_SIG_A_SU_INFO_INFO1_LDPC_EXTRA BIT(8)
#define HAL_RX_HE_SIG_A_SU_INFO_INFO1_STBC BIT(9)
#define HAL_RX_HE_SIG_A_SU_INFO_INFO1_TXBF BIT(10)
#define HAL_RX_HE_SIG_A_SU_INFO_INFO1_PKT_EXT_FACTOR GENMASK(12, 11)
#define HAL_RX_HE_SIG_A_SU_INFO_INFO1_PKT_EXT_PE_DISAM BIT(13)
#define HAL_RX_HE_SIG_A_SU_INFO_INFO1_DOPPLER_IND BIT(15)
struct hal_rx_he_sig_a_su_info {
__le32 info0;
__le32 info1;
} __packed;
#define HAL_RX_HE_SIG_A_MU_DL_INFO0_UL_FLAG BIT(1)
#define HAL_RX_HE_SIG_A_MU_DL_INFO0_MCS_OF_SIGB GENMASK(3, 1)
#define HAL_RX_HE_SIG_A_MU_DL_INFO0_DCM_OF_SIGB BIT(4)
#define HAL_RX_HE_SIG_A_MU_DL_INFO0_BSS_COLOR GENMASK(10, 5)
#define HAL_RX_HE_SIG_A_MU_DL_INFO0_SPATIAL_REUSE GENMASK(14, 11)
#define HAL_RX_HE_SIG_A_MU_DL_INFO0_TRANSMIT_BW GENMASK(17, 15)
#define HAL_RX_HE_SIG_A_MU_DL_INFO0_NUM_SIGB_SYMB GENMASK(21, 18)
#define HAL_RX_HE_SIG_A_MU_DL_INFO0_COMP_MODE_SIGB BIT(22)
#define HAL_RX_HE_SIG_A_MU_DL_INFO0_CP_LTF_SIZE GENMASK(24, 23)
#define HAL_RX_HE_SIG_A_MU_DL_INFO0_DOPPLER_INDICATION BIT(25)
#define HAL_RX_HE_SIG_A_MU_DL_INFO1_TXOP_DURATION GENMASK(6, 0)
#define HAL_RX_HE_SIG_A_MU_DL_INFO1_CODING BIT(7)
#define HAL_RX_HE_SIG_A_MU_DL_INFO1_NUM_LTF_SYMB GENMASK(10, 8)
#define HAL_RX_HE_SIG_A_MU_DL_INFO1_LDPC_EXTRA BIT(11)
#define HAL_RX_HE_SIG_A_MU_DL_INFO1_STBC BIT(12)
#define HAL_RX_HE_SIG_A_MU_DL_INFO1_TXBF BIT(10)
#define HAL_RX_HE_SIG_A_MU_DL_INFO1_PKT_EXT_FACTOR GENMASK(14, 13)
#define HAL_RX_HE_SIG_A_MU_DL_INFO1_PKT_EXT_PE_DISAM BIT(15)
struct hal_rx_he_sig_a_mu_dl_info {
__le32 info0;
__le32 info1;
} __packed;
#define HAL_RX_HE_SIG_B1_MU_INFO_INFO0_RU_ALLOCATION GENMASK(7, 0)
struct hal_rx_he_sig_b1_mu_info {
__le32 info0;
} __packed;
#define HAL_RX_HE_SIG_B2_MU_INFO_INFO0_STA_ID GENMASK(10, 0)
#define HAL_RX_HE_SIG_B2_MU_INFO_INFO0_STA_MCS GENMASK(18, 15)
#define HAL_RX_HE_SIG_B2_MU_INFO_INFO0_STA_CODING BIT(20)
#define HAL_RX_HE_SIG_B2_MU_INFO_INFO0_STA_NSTS GENMASK(31, 29)
struct hal_rx_he_sig_b2_mu_info {
__le32 info0;
} __packed;
#define HAL_RX_HE_SIG_B2_OFDMA_INFO_INFO0_STA_ID GENMASK(10, 0)
#define HAL_RX_HE_SIG_B2_OFDMA_INFO_INFO0_STA_NSTS GENMASK(13, 11)
#define HAL_RX_HE_SIG_B2_OFDMA_INFO_INFO0_STA_TXBF BIT(19)
#define HAL_RX_HE_SIG_B2_OFDMA_INFO_INFO0_STA_MCS GENMASK(18, 15)
#define HAL_RX_HE_SIG_B2_OFDMA_INFO_INFO0_STA_DCM BIT(19)
#define HAL_RX_HE_SIG_B2_OFDMA_INFO_INFO0_STA_CODING BIT(20)
struct hal_rx_he_sig_b2_ofdma_info {
__le32 info0;
} __packed;
enum hal_rx_ul_reception_type {
HAL_RECEPTION_TYPE_ULOFMDA,
HAL_RECEPTION_TYPE_ULMIMO,
HAL_RECEPTION_TYPE_OTHER,
HAL_RECEPTION_TYPE_FRAMELESS
};
#define HAL_RX_PHYRX_RSSI_LEGACY_INFO_INFO0_RSSI_COMB GENMASK(15, 8)
#define HAL_RX_PHYRX_RSSI_LEGACY_INFO_RSVD1_RECEPTION GENMASK(3, 0)
struct hal_rx_phyrx_rssi_legacy_info {
__le32 rsvd[35];
__le32 info0;
} __packed;
#define HAL_RX_MPDU_START_INFO0_PPDU_ID GENMASK(31, 16)
#define HAL_RX_MPDU_START_INFO1_PEERID GENMASK(31, 16)
#define HAL_RX_MPDU_START_INFO2_MPDU_LEN GENMASK(13, 0)
struct hal_rx_mpdu_start {
__le32 info0;
__le32 info1;
__le32 rsvd1[11];
__le32 info2;
__le32 rsvd2[9];
} __packed;
#define HAL_RX_PPDU_END_DURATION GENMASK(23, 0)
struct hal_rx_ppdu_end_duration {
__le32 rsvd0[9];
__le32 info0;
__le32 rsvd1[4];
} __packed;
struct hal_rx_rxpcu_classification_overview {
u32 rsvd0;
} __packed;
struct hal_rx_msdu_desc_info {
u32 msdu_flags;
u16 msdu_len; /* 14 bits for length */
};
#define HAL_RX_NUM_MSDU_DESC 6
struct hal_rx_msdu_list {
struct hal_rx_msdu_desc_info msdu_info[HAL_RX_NUM_MSDU_DESC];
u32 sw_cookie[HAL_RX_NUM_MSDU_DESC];
u8 rbm[HAL_RX_NUM_MSDU_DESC];
};
#define HAL_RX_FBM_ACK_INFO0_ADDR1_31_0 GENMASK(31, 0)
#define HAL_RX_FBM_ACK_INFO1_ADDR1_47_32 GENMASK(15, 0)
#define HAL_RX_FBM_ACK_INFO1_ADDR2_15_0 GENMASK(31, 16)
#define HAL_RX_FBM_ACK_INFO2_ADDR2_47_16 GENMASK(31, 0)
struct hal_rx_frame_bitmap_ack {
__le32 reserved;
__le32 info0;
__le32 info1;
__le32 info2;
__le32 reserved1[10];
} __packed;
#define HAL_RX_RESP_REQ_INFO0_PPDU_ID GENMASK(15, 0)
#define HAL_RX_RESP_REQ_INFO0_RECEPTION_TYPE BIT(16)
#define HAL_RX_RESP_REQ_INFO1_DURATION GENMASK(15, 0)
#define HAL_RX_RESP_REQ_INFO1_RATE_MCS GENMASK(24, 21)
#define HAL_RX_RESP_REQ_INFO1_SGI GENMASK(26, 25)
#define HAL_RX_RESP_REQ_INFO1_STBC BIT(27)
#define HAL_RX_RESP_REQ_INFO1_LDPC BIT(28)
#define HAL_RX_RESP_REQ_INFO1_IS_AMPDU BIT(29)
#define HAL_RX_RESP_REQ_INFO2_NUM_USER GENMASK(6, 0)
#define HAL_RX_RESP_REQ_INFO3_ADDR1_31_0 GENMASK(31, 0)
#define HAL_RX_RESP_REQ_INFO4_ADDR1_47_32 GENMASK(15, 0)
#define HAL_RX_RESP_REQ_INFO4_ADDR1_15_0 GENMASK(31, 16)
#define HAL_RX_RESP_REQ_INFO5_ADDR1_47_16 GENMASK(31, 0)
struct hal_rx_resp_req_info {
__le32 info0;
__le32 reserved[1];
__le32 info1;
__le32 info2;
__le32 reserved1[2];
__le32 info3;
__le32 info4;
__le32 info5;
__le32 reserved2[5];
} __packed;
#define REO_QUEUE_DESC_MAGIC_DEBUG_PATTERN_0 0xDDBEEF
#define REO_QUEUE_DESC_MAGIC_DEBUG_PATTERN_1 0xADBEEF
#define REO_QUEUE_DESC_MAGIC_DEBUG_PATTERN_2 0xBDBEEF
#define REO_QUEUE_DESC_MAGIC_DEBUG_PATTERN_3 0xCDBEEF
#define HAL_RX_UL_OFDMA_USER_INFO_V0_W0_VALID BIT(30)
#define HAL_RX_UL_OFDMA_USER_INFO_V0_W0_VER BIT(31)
#define HAL_RX_UL_OFDMA_USER_INFO_V0_W1_NSS GENMASK(2, 0)
#define HAL_RX_UL_OFDMA_USER_INFO_V0_W1_MCS GENMASK(6, 3)
#define HAL_RX_UL_OFDMA_USER_INFO_V0_W1_LDPC BIT(7)
#define HAL_RX_UL_OFDMA_USER_INFO_V0_W1_DCM BIT(8)
#define HAL_RX_UL_OFDMA_USER_INFO_V0_W1_RU_START GENMASK(15, 9)
#define HAL_RX_UL_OFDMA_USER_INFO_V0_W1_RU_SIZE GENMASK(18, 16)
/* HE Radiotap data1 Mask */
#define HE_SU_FORMAT_TYPE 0x0000
#define HE_EXT_SU_FORMAT_TYPE 0x0001
#define HE_MU_FORMAT_TYPE 0x0002
#define HE_TRIG_FORMAT_TYPE 0x0003
#define HE_BEAM_CHANGE_KNOWN 0x0008
#define HE_DL_UL_KNOWN 0x0010
#define HE_MCS_KNOWN 0x0020
#define HE_DCM_KNOWN 0x0040
#define HE_CODING_KNOWN 0x0080
#define HE_LDPC_EXTRA_SYMBOL_KNOWN 0x0100
#define HE_STBC_KNOWN 0x0200
#define HE_DATA_BW_RU_KNOWN 0x4000
#define HE_DOPPLER_KNOWN 0x8000
#define HE_BSS_COLOR_KNOWN 0x0004
/* HE Radiotap data2 Mask */
#define HE_GI_KNOWN 0x0002
#define HE_TXBF_KNOWN 0x0010
#define HE_PE_DISAMBIGUITY_KNOWN 0x0020
#define HE_TXOP_KNOWN 0x0040
#define HE_LTF_SYMBOLS_KNOWN 0x0004
#define HE_PRE_FEC_PADDING_KNOWN 0x0008
#define HE_MIDABLE_PERIODICITY_KNOWN 0x0080
/* HE radiotap data3 shift values */
#define HE_BEAM_CHANGE_SHIFT 6
#define HE_DL_UL_SHIFT 7
#define HE_TRANSMIT_MCS_SHIFT 8
#define HE_DCM_SHIFT 12
#define HE_CODING_SHIFT 13
#define HE_LDPC_EXTRA_SYMBOL_SHIFT 14
#define HE_STBC_SHIFT 15
/* HE radiotap data4 shift values */
#define HE_STA_ID_SHIFT 4
/* HE radiotap data5 */
#define HE_GI_SHIFT 4
#define HE_LTF_SIZE_SHIFT 6
#define HE_LTF_SYM_SHIFT 8
#define HE_TXBF_SHIFT 14
#define HE_PE_DISAMBIGUITY_SHIFT 15
#define HE_PRE_FEC_PAD_SHIFT 12
/* HE radiotap data6 */
#define HE_DOPPLER_SHIFT 4
#define HE_TXOP_SHIFT 8
/* HE radiotap HE-MU flags1 */
#define HE_SIG_B_MCS_KNOWN 0x0010
#define HE_SIG_B_DCM_KNOWN 0x0040
#define HE_SIG_B_SYM_NUM_KNOWN 0x8000
#define HE_RU_0_KNOWN 0x0100
#define HE_RU_1_KNOWN 0x0200
#define HE_RU_2_KNOWN 0x0400
#define HE_RU_3_KNOWN 0x0800
#define HE_DCM_FLAG_1_SHIFT 5
#define HE_SPATIAL_REUSE_MU_KNOWN 0x0100
#define HE_SIG_B_COMPRESSION_FLAG_1_KNOWN 0x4000
/* HE radiotap HE-MU flags2 */
#define HE_SIG_B_COMPRESSION_FLAG_2_SHIFT 3
#define HE_BW_KNOWN 0x0004
#define HE_NUM_SIG_B_SYMBOLS_SHIFT 4
#define HE_SIG_B_COMPRESSION_FLAG_2_KNOWN 0x0100
#define HE_NUM_SIG_B_FLAG_2_SHIFT 9
#define HE_LTF_FLAG_2_SYMBOLS_SHIFT 12
#define HE_LTF_KNOWN 0x8000
/* HE radiotap per_user_1 */
#define HE_STA_SPATIAL_SHIFT 11
#define HE_TXBF_SHIFT 14
#define HE_RESERVED_SET_TO_1_SHIFT 19
#define HE_STA_CODING_SHIFT 20
/* HE radiotap per_user_2 */
#define HE_STA_MCS_SHIFT 4
#define HE_STA_DCM_SHIFT 5
/* HE radiotap per user known */
#define HE_USER_FIELD_POSITION_KNOWN 0x01
#define HE_STA_ID_PER_USER_KNOWN 0x02
#define HE_STA_NSTS_KNOWN 0x04
#define HE_STA_TX_BF_KNOWN 0x08
#define HE_STA_SPATIAL_CONFIG_KNOWN 0x10
#define HE_STA_MCS_KNOWN 0x20
#define HE_STA_DCM_KNOWN 0x40
#define HE_STA_CODING_KNOWN 0x80
#define HAL_RX_MPDU_ERR_FCS BIT(0)
#define HAL_RX_MPDU_ERR_DECRYPT BIT(1)
#define HAL_RX_MPDU_ERR_TKIP_MIC BIT(2)
#define HAL_RX_MPDU_ERR_AMSDU_ERR BIT(3)
#define HAL_RX_MPDU_ERR_OVERFLOW BIT(4)
#define HAL_RX_MPDU_ERR_MSDU_LEN BIT(5)
#define HAL_RX_MPDU_ERR_MPDU_LEN BIT(6)
#define HAL_RX_MPDU_ERR_UNENCRYPTED_FRAME BIT(7)
static inline
enum nl80211_he_ru_alloc ath12k_he_ru_tones_to_nl80211_he_ru_alloc(u16 ru_tones)
{
enum nl80211_he_ru_alloc ret;
switch (ru_tones) {
case RU_52:
ret = NL80211_RATE_INFO_HE_RU_ALLOC_52;
break;
case RU_106:
ret = NL80211_RATE_INFO_HE_RU_ALLOC_106;
break;
case RU_242:
ret = NL80211_RATE_INFO_HE_RU_ALLOC_242;
break;
case RU_484:
ret = NL80211_RATE_INFO_HE_RU_ALLOC_484;
break;
case RU_996:
ret = NL80211_RATE_INFO_HE_RU_ALLOC_996;
break;
case RU_26:
fallthrough;
default:
ret = NL80211_RATE_INFO_HE_RU_ALLOC_26;
break;
}
return ret;
}
void ath12k_hal_reo_status_queue_stats(struct ath12k_base *ab,
struct hal_tlv_64_hdr *tlv,
struct hal_reo_status *status);
void ath12k_hal_reo_flush_queue_status(struct ath12k_base *ab,
struct hal_tlv_64_hdr *tlv,
struct hal_reo_status *status);
void ath12k_hal_reo_flush_cache_status(struct ath12k_base *ab,
struct hal_tlv_64_hdr *tlv,
struct hal_reo_status *status);
void ath12k_hal_reo_unblk_cache_status(struct ath12k_base *ab,
struct hal_tlv_64_hdr *tlv,
struct hal_reo_status *status);
void ath12k_hal_reo_flush_timeout_list_status(struct ath12k_base *ab,
struct hal_tlv_64_hdr *tlv,
struct hal_reo_status *status);
void ath12k_hal_reo_desc_thresh_reached_status(struct ath12k_base *ab,
struct hal_tlv_64_hdr *tlv,
struct hal_reo_status *status);
void ath12k_hal_reo_update_rx_reo_queue_status(struct ath12k_base *ab,
struct hal_tlv_64_hdr *tlv,
struct hal_reo_status *status);
void ath12k_hal_rx_msdu_link_info_get(struct hal_rx_msdu_link *link, u32 *num_msdus,
u32 *msdu_cookies,
enum hal_rx_buf_return_buf_manager *rbm);
void ath12k_hal_rx_msdu_link_desc_set(struct ath12k_base *ab,
struct hal_wbm_release_ring *dst_desc,
struct hal_wbm_release_ring *src_desc,
enum hal_wbm_rel_bm_act action);
void ath12k_hal_rx_buf_addr_info_set(struct ath12k_buffer_addr *binfo,
dma_addr_t paddr, u32 cookie, u8 manager);
void ath12k_hal_rx_buf_addr_info_get(struct ath12k_buffer_addr *binfo,
dma_addr_t *paddr,
u32 *cookie, u8 *rbm);
int ath12k_hal_desc_reo_parse_err(struct ath12k_base *ab,
struct hal_reo_dest_ring *desc,
dma_addr_t *paddr, u32 *desc_bank);
int ath12k_hal_wbm_desc_parse_err(struct ath12k_base *ab, void *desc,
struct hal_rx_wbm_rel_info *rel_info);
void ath12k_hal_rx_reo_ent_paddr_get(struct ath12k_base *ab,
struct ath12k_buffer_addr *buff_addr,
dma_addr_t *paddr, u32 *cookie);
#endif


@@ -0,0 +1,145 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include "hal_desc.h"
#include "hal.h"
#include "hal_tx.h"
#include "hif.h"
#define DSCP_TID_MAP_TBL_ENTRY_SIZE 64
/* dscp_tid_map - Default DSCP-TID mapping
*=================
* DSCP TID
*=================
* 000xxx 0
* 001xxx 1
* 010xxx 2
* 011xxx 3
* 100xxx 4
* 101xxx 5
* 110xxx 6
* 111xxx 7
*/
static inline u8 dscp2tid(u8 dscp)
{
return dscp >> 3;
}
void ath12k_hal_tx_cmd_desc_setup(struct ath12k_base *ab,
struct hal_tcl_data_cmd *tcl_cmd,
struct hal_tx_info *ti)
{
tcl_cmd->buf_addr_info.info0 =
le32_encode_bits(ti->paddr, BUFFER_ADDR_INFO0_ADDR);
tcl_cmd->buf_addr_info.info1 =
le32_encode_bits(((uint64_t)ti->paddr >> HAL_ADDR_MSB_REG_SHIFT),
BUFFER_ADDR_INFO1_ADDR);
tcl_cmd->buf_addr_info.info1 |=
le32_encode_bits((ti->rbm_id), BUFFER_ADDR_INFO1_RET_BUF_MGR) |
le32_encode_bits(ti->desc_id, BUFFER_ADDR_INFO1_SW_COOKIE);
tcl_cmd->info0 =
le32_encode_bits(ti->type, HAL_TCL_DATA_CMD_INFO0_DESC_TYPE) |
le32_encode_bits(ti->bank_id, HAL_TCL_DATA_CMD_INFO0_BANK_ID);
tcl_cmd->info1 =
le32_encode_bits(ti->meta_data_flags,
HAL_TCL_DATA_CMD_INFO1_CMD_NUM);
tcl_cmd->info2 = cpu_to_le32(ti->flags0) |
le32_encode_bits(ti->data_len, HAL_TCL_DATA_CMD_INFO2_DATA_LEN) |
le32_encode_bits(ti->pkt_offset, HAL_TCL_DATA_CMD_INFO2_PKT_OFFSET);
tcl_cmd->info3 = cpu_to_le32(ti->flags1) |
le32_encode_bits(ti->tid, HAL_TCL_DATA_CMD_INFO3_TID) |
le32_encode_bits(ti->lmac_id, HAL_TCL_DATA_CMD_INFO3_PMAC_ID) |
le32_encode_bits(ti->vdev_id, HAL_TCL_DATA_CMD_INFO3_VDEV_ID);
tcl_cmd->info4 = le32_encode_bits(ti->bss_ast_idx,
HAL_TCL_DATA_CMD_INFO4_SEARCH_INDEX) |
le32_encode_bits(ti->bss_ast_hash,
HAL_TCL_DATA_CMD_INFO4_CACHE_SET_NUM);
tcl_cmd->info5 = 0;
}
void ath12k_hal_tx_set_dscp_tid_map(struct ath12k_base *ab, int id)
{
u32 ctrl_reg_val;
u32 addr;
u8 hw_map_val[HAL_DSCP_TID_TBL_SIZE], dscp, tid;
int i;
u32 value;
ctrl_reg_val = ath12k_hif_read32(ab, HAL_SEQ_WCSS_UMAC_TCL_REG +
HAL_TCL1_RING_CMN_CTRL_REG);
/* Enable read/write access */
ctrl_reg_val |= HAL_TCL1_RING_CMN_CTRL_DSCP_TID_MAP_PROG_EN;
ath12k_hif_write32(ab, HAL_SEQ_WCSS_UMAC_TCL_REG +
HAL_TCL1_RING_CMN_CTRL_REG, ctrl_reg_val);
addr = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL1_RING_DSCP_TID_MAP +
(4 * id * (HAL_DSCP_TID_TBL_SIZE / 4));
/* Each DSCP-TID mapping occupies three bits, so each iteration
* packs eight mappings into three bytes.
*/
for (i = 0, dscp = 0; i < HAL_DSCP_TID_TBL_SIZE; i += 3) {
tid = dscp2tid(dscp);
value = u32_encode_bits(tid, HAL_TCL1_RING_FIELD_DSCP_TID_MAP0);
dscp++;
tid = dscp2tid(dscp);
value |= u32_encode_bits(tid, HAL_TCL1_RING_FIELD_DSCP_TID_MAP1);
dscp++;
tid = dscp2tid(dscp);
value |= u32_encode_bits(tid, HAL_TCL1_RING_FIELD_DSCP_TID_MAP2);
dscp++;
tid = dscp2tid(dscp);
value |= u32_encode_bits(tid, HAL_TCL1_RING_FIELD_DSCP_TID_MAP3);
dscp++;
tid = dscp2tid(dscp);
value |= u32_encode_bits(tid, HAL_TCL1_RING_FIELD_DSCP_TID_MAP4);
dscp++;
tid = dscp2tid(dscp);
value |= u32_encode_bits(tid, HAL_TCL1_RING_FIELD_DSCP_TID_MAP5);
dscp++;
tid = dscp2tid(dscp);
value |= u32_encode_bits(tid, HAL_TCL1_RING_FIELD_DSCP_TID_MAP6);
dscp++;
tid = dscp2tid(dscp);
value |= u32_encode_bits(tid, HAL_TCL1_RING_FIELD_DSCP_TID_MAP7);
dscp++;
memcpy(&hw_map_val[i], &value, 3);
}
for (i = 0; i < HAL_DSCP_TID_TBL_SIZE; i += 4) {
ath12k_hif_write32(ab, addr, *(u32 *)&hw_map_val[i]);
addr += 4;
}
/* Disable read/write access */
ctrl_reg_val = ath12k_hif_read32(ab, HAL_SEQ_WCSS_UMAC_TCL_REG +
HAL_TCL1_RING_CMN_CTRL_REG);
ctrl_reg_val &= ~HAL_TCL1_RING_CMN_CTRL_DSCP_TID_MAP_PROG_EN;
ath12k_hif_write32(ab, HAL_SEQ_WCSS_UMAC_TCL_REG +
HAL_TCL1_RING_CMN_CTRL_REG,
ctrl_reg_val);
}
void ath12k_hal_tx_configure_bank_register(struct ath12k_base *ab, u32 bank_config,
u8 bank_id)
{
ath12k_hif_write32(ab, HAL_TCL_SW_CONFIG_BANK_ADDR + 4 * bank_id,
bank_config);
}


@@ -0,0 +1,194 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH12K_HAL_TX_H
#define ATH12K_HAL_TX_H
#include "hal_desc.h"
#include "core.h"
#define HAL_TX_ADDRX_EN 1
#define HAL_TX_ADDRY_EN 2
#define HAL_TX_ADDR_SEARCH_DEFAULT 0
#define HAL_TX_ADDR_SEARCH_INDEX 1
/* TODO: check whether all these fields can be managed with struct ath12k_tx_desc_info for perf */
struct hal_tx_info {
u16 meta_data_flags; /* %HAL_TCL_DATA_CMD_INFO0_META_ */
u8 ring_id;
u8 rbm_id;
u32 desc_id;
enum hal_tcl_desc_type type;
enum hal_tcl_encap_type encap_type;
dma_addr_t paddr;
u32 data_len;
u32 pkt_offset;
enum hal_encrypt_type encrypt_type;
u32 flags0; /* %HAL_TCL_DATA_CMD_INFO1_ */
u32 flags1; /* %HAL_TCL_DATA_CMD_INFO2_ */
u16 addr_search_flags; /* %HAL_TCL_DATA_CMD_INFO0_ADDR(X/Y)_ */
u16 bss_ast_hash;
u16 bss_ast_idx;
u8 tid;
u8 search_type; /* %HAL_TX_ADDR_SEARCH_ */
u8 lmac_id;
u8 vdev_id;
u8 dscp_tid_tbl_idx;
bool enable_mesh;
int bank_id;
};
/* TODO: Check if the actual desc macros can be used instead */
#define HAL_TX_STATUS_FLAGS_FIRST_MSDU BIT(0)
#define HAL_TX_STATUS_FLAGS_LAST_MSDU BIT(1)
#define HAL_TX_STATUS_FLAGS_MSDU_IN_AMSDU BIT(2)
#define HAL_TX_STATUS_FLAGS_RATE_STATS_VALID BIT(3)
#define HAL_TX_STATUS_FLAGS_RATE_LDPC BIT(4)
#define HAL_TX_STATUS_FLAGS_RATE_STBC BIT(5)
#define HAL_TX_STATUS_FLAGS_OFDMA BIT(6)
#define HAL_TX_STATUS_DESC_LEN sizeof(struct hal_wbm_release_ring)
/* Tx status parsed from srng desc */
struct hal_tx_status {
enum hal_wbm_rel_src_module buf_rel_source;
enum hal_wbm_tqm_rel_reason status;
u8 ack_rssi;
u32 flags; /* %HAL_TX_STATUS_FLAGS_ */
u32 ppdu_id;
u8 try_cnt;
u8 tid;
u16 peer_id;
u32 rate_stats;
};
#define HAL_TX_PHY_DESC_INFO0_BF_TYPE GENMASK(17, 16)
#define HAL_TX_PHY_DESC_INFO0_PREAMBLE_11B BIT(20)
#define HAL_TX_PHY_DESC_INFO0_PKT_TYPE GENMASK(24, 21)
#define HAL_TX_PHY_DESC_INFO0_BANDWIDTH GENMASK(30, 28)
#define HAL_TX_PHY_DESC_INFO1_MCS GENMASK(3, 0)
#define HAL_TX_PHY_DESC_INFO1_STBC BIT(6)
#define HAL_TX_PHY_DESC_INFO2_NSS GENMASK(23, 21)
#define HAL_TX_PHY_DESC_INFO3_AP_PKT_BW GENMASK(6, 4)
#define HAL_TX_PHY_DESC_INFO3_LTF_SIZE GENMASK(20, 19)
#define HAL_TX_PHY_DESC_INFO3_ACTIVE_CHANNEL GENMASK(17, 15)
struct hal_tx_phy_desc {
__le32 info0;
__le32 info1;
__le32 info2;
__le32 info3;
} __packed;
#define HAL_TX_FES_STAT_PROT_INFO0_STRT_FRM_TS_15_0 GENMASK(15, 0)
#define HAL_TX_FES_STAT_PROT_INFO0_STRT_FRM_TS_31_16 GENMASK(31, 16)
#define HAL_TX_FES_STAT_PROT_INFO1_END_FRM_TS_15_0 GENMASK(15, 0)
#define HAL_TX_FES_STAT_PROT_INFO1_END_FRM_TS_31_16 GENMASK(31, 16)
struct hal_tx_fes_status_prot {
__le64 reserved;
__le32 info0;
__le32 info1;
__le32 reserved1[11];
} __packed;
#define HAL_TX_FES_STAT_USR_PPDU_INFO0_DURATION GENMASK(15, 0)
struct hal_tx_fes_status_user_ppdu {
__le64 reserved;
__le32 info0;
__le32 reserved1[3];
} __packed;
#define HAL_TX_FES_STAT_STRT_INFO0_PROT_TS_LOWER_32 GENMASK(31, 0)
#define HAL_TX_FES_STAT_STRT_INFO1_PROT_TS_UPPER_32 GENMASK(31, 0)
struct hal_tx_fes_status_start_prot {
__le32 info0;
__le32 info1;
__le64 reserved;
} __packed;
#define HAL_TX_FES_STATUS_START_INFO0_MEDIUM_PROT_TYPE GENMASK(29, 27)
struct hal_tx_fes_status_start {
__le32 reserved;
__le32 info0;
__le64 reserved1;
} __packed;
#define HAL_TX_Q_EXT_INFO0_FRAME_CTRL GENMASK(15, 0)
#define HAL_TX_Q_EXT_INFO0_QOS_CTRL GENMASK(31, 16)
#define HAL_TX_Q_EXT_INFO1_AMPDU_FLAG BIT(0)
struct hal_tx_queue_exten {
__le32 info0;
__le32 info1;
} __packed;
#define HAL_TX_FES_SETUP_INFO0_NUM_OF_USERS GENMASK(28, 23)
struct hal_tx_fes_setup {
__le32 schedule_id;
__le32 info0;
__le64 reserved;
} __packed;
#define HAL_TX_PPDU_SETUP_INFO0_MEDIUM_PROT_TYPE GENMASK(2, 0)
#define HAL_TX_PPDU_SETUP_INFO1_PROT_FRAME_ADDR1_31_0 GENMASK(31, 0)
#define HAL_TX_PPDU_SETUP_INFO2_PROT_FRAME_ADDR1_47_32 GENMASK(15, 0)
#define HAL_TX_PPDU_SETUP_INFO2_PROT_FRAME_ADDR2_15_0 GENMASK(31, 16)
#define HAL_TX_PPDU_SETUP_INFO3_PROT_FRAME_ADDR2_47_16 GENMASK(31, 0)
#define HAL_TX_PPDU_SETUP_INFO4_PROT_FRAME_ADDR3_31_0 GENMASK(31, 0)
#define HAL_TX_PPDU_SETUP_INFO5_PROT_FRAME_ADDR3_47_32 GENMASK(15, 0)
#define HAL_TX_PPDU_SETUP_INFO5_PROT_FRAME_ADDR4_15_0 GENMASK(31, 16)
#define HAL_TX_PPDU_SETUP_INFO6_PROT_FRAME_ADDR4_47_16 GENMASK(31, 0)
struct hal_tx_pcu_ppdu_setup_init {
__le32 info0;
__le32 info1;
__le32 info2;
__le32 info3;
__le32 reserved;
__le32 info4;
__le32 info5;
__le32 info6;
} __packed;
#define HAL_TX_FES_STATUS_END_INFO0_START_TIMESTAMP_15_0 GENMASK(15, 0)
#define HAL_TX_FES_STATUS_END_INFO0_START_TIMESTAMP_31_16 GENMASK(31, 16)
struct hal_tx_fes_status_end {
__le32 reserved[2];
__le32 info0;
__le32 reserved1[19];
} __packed;
#define HAL_TX_BANK_CONFIG_EPD BIT(0)
#define HAL_TX_BANK_CONFIG_ENCAP_TYPE GENMASK(2, 1)
#define HAL_TX_BANK_CONFIG_ENCRYPT_TYPE GENMASK(6, 3)
#define HAL_TX_BANK_CONFIG_SRC_BUFFER_SWAP BIT(7)
#define HAL_TX_BANK_CONFIG_LINK_META_SWAP BIT(8)
#define HAL_TX_BANK_CONFIG_INDEX_LOOKUP_EN BIT(9)
#define HAL_TX_BANK_CONFIG_ADDRX_EN BIT(10)
#define HAL_TX_BANK_CONFIG_ADDRY_EN BIT(11)
#define HAL_TX_BANK_CONFIG_MESH_EN GENMASK(13, 12)
#define HAL_TX_BANK_CONFIG_VDEV_ID_CHECK_EN BIT(14)
#define HAL_TX_BANK_CONFIG_PMAC_ID GENMASK(16, 15)
/* STA mode will have MCAST_PKT_CTRL instead of DSCP_TID_MAP bitfield */
#define HAL_TX_BANK_CONFIG_DSCP_TIP_MAP_ID GENMASK(22, 17)
void ath12k_hal_tx_cmd_desc_setup(struct ath12k_base *ab,
struct hal_tcl_data_cmd *tcl_cmd,
struct hal_tx_info *ti);
void ath12k_hal_tx_set_dscp_tid_map(struct ath12k_base *ab, int id);
int ath12k_hal_reo_cmd_send(struct ath12k_base *ab, struct hal_srng *srng,
enum hal_reo_cmd_type type,
struct ath12k_hal_reo_cmd *cmd);
void ath12k_hal_tx_configure_bank_register(struct ath12k_base *ab, u32 bank_config,
u8 bank_id);
#endif


@@ -0,0 +1,144 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH12K_HIF_H
#define ATH12K_HIF_H
#include "core.h"
struct ath12k_hif_ops {
u32 (*read32)(struct ath12k_base *sc, u32 address);
void (*write32)(struct ath12k_base *sc, u32 address, u32 data);
void (*irq_enable)(struct ath12k_base *sc);
void (*irq_disable)(struct ath12k_base *sc);
int (*start)(struct ath12k_base *sc);
void (*stop)(struct ath12k_base *sc);
int (*power_up)(struct ath12k_base *sc);
void (*power_down)(struct ath12k_base *sc);
int (*suspend)(struct ath12k_base *ab);
int (*resume)(struct ath12k_base *ab);
int (*map_service_to_pipe)(struct ath12k_base *sc, u16 service_id,
u8 *ul_pipe, u8 *dl_pipe);
int (*get_user_msi_vector)(struct ath12k_base *ab, char *user_name,
int *num_vectors, u32 *user_base_data,
u32 *base_vector);
void (*get_msi_address)(struct ath12k_base *ab, u32 *msi_addr_lo,
u32 *msi_addr_hi);
void (*ce_irq_enable)(struct ath12k_base *ab);
void (*ce_irq_disable)(struct ath12k_base *ab);
void (*get_ce_msi_idx)(struct ath12k_base *ab, u32 ce_id, u32 *msi_idx);
};
static inline int ath12k_hif_map_service_to_pipe(struct ath12k_base *ab, u16 service_id,
u8 *ul_pipe, u8 *dl_pipe)
{
return ab->hif.ops->map_service_to_pipe(ab, service_id,
ul_pipe, dl_pipe);
}
static inline int ath12k_hif_get_user_msi_vector(struct ath12k_base *ab,
char *user_name,
int *num_vectors,
u32 *user_base_data,
u32 *base_vector)
{
if (!ab->hif.ops->get_user_msi_vector)
return -EOPNOTSUPP;
return ab->hif.ops->get_user_msi_vector(ab, user_name, num_vectors,
user_base_data,
base_vector);
}
static inline void ath12k_hif_get_msi_address(struct ath12k_base *ab,
u32 *msi_addr_lo,
u32 *msi_addr_hi)
{
if (!ab->hif.ops->get_msi_address)
return;
ab->hif.ops->get_msi_address(ab, msi_addr_lo, msi_addr_hi);
}
static inline void ath12k_hif_get_ce_msi_idx(struct ath12k_base *ab, u32 ce_id,
u32 *msi_data_idx)
{
if (ab->hif.ops->get_ce_msi_idx)
ab->hif.ops->get_ce_msi_idx(ab, ce_id, msi_data_idx);
else
*msi_data_idx = ce_id;
}
static inline void ath12k_hif_ce_irq_enable(struct ath12k_base *ab)
{
if (ab->hif.ops->ce_irq_enable)
ab->hif.ops->ce_irq_enable(ab);
}
static inline void ath12k_hif_ce_irq_disable(struct ath12k_base *ab)
{
if (ab->hif.ops->ce_irq_disable)
ab->hif.ops->ce_irq_disable(ab);
}
static inline void ath12k_hif_irq_enable(struct ath12k_base *ab)
{
ab->hif.ops->irq_enable(ab);
}
static inline void ath12k_hif_irq_disable(struct ath12k_base *ab)
{
ab->hif.ops->irq_disable(ab);
}
static inline int ath12k_hif_suspend(struct ath12k_base *ab)
{
if (ab->hif.ops->suspend)
return ab->hif.ops->suspend(ab);
return 0;
}
static inline int ath12k_hif_resume(struct ath12k_base *ab)
{
if (ab->hif.ops->resume)
return ab->hif.ops->resume(ab);
return 0;
}
static inline int ath12k_hif_start(struct ath12k_base *ab)
{
return ab->hif.ops->start(ab);
}
static inline void ath12k_hif_stop(struct ath12k_base *ab)
{
ab->hif.ops->stop(ab);
}
static inline u32 ath12k_hif_read32(struct ath12k_base *ab, u32 address)
{
return ab->hif.ops->read32(ab, address);
}
static inline void ath12k_hif_write32(struct ath12k_base *ab, u32 address,
u32 data)
{
ab->hif.ops->write32(ab, address, data);
}
static inline int ath12k_hif_power_up(struct ath12k_base *ab)
{
return ab->hif.ops->power_up(ab);
}
static inline void ath12k_hif_power_down(struct ath12k_base *ab)
{
ab->hif.ops->power_down(ab);
}
#endif /* ATH12K_HIF_H */


@@ -0,0 +1,789 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include <linux/skbuff.h>
#include <linux/ctype.h>
#include "debug.h"
#include "hif.h"
struct sk_buff *ath12k_htc_alloc_skb(struct ath12k_base *ab, int size)
{
struct sk_buff *skb;
skb = dev_alloc_skb(size + sizeof(struct ath12k_htc_hdr));
if (!skb)
return NULL;
skb_reserve(skb, sizeof(struct ath12k_htc_hdr));
/* FW/HTC requires 4-byte aligned streams */
if (!IS_ALIGNED((unsigned long)skb->data, 4))
ath12k_warn(ab, "Unaligned HTC tx skb\n");
return skb;
}
static void ath12k_htc_control_tx_complete(struct ath12k_base *ab,
struct sk_buff *skb)
{
kfree_skb(skb);
}
static struct sk_buff *ath12k_htc_build_tx_ctrl_skb(void)
{
struct sk_buff *skb;
struct ath12k_skb_cb *skb_cb;
skb = dev_alloc_skb(ATH12K_HTC_CONTROL_BUFFER_SIZE);
if (!skb)
return NULL;
skb_reserve(skb, sizeof(struct ath12k_htc_hdr));
WARN_ON_ONCE(!IS_ALIGNED((unsigned long)skb->data, 4));
skb_cb = ATH12K_SKB_CB(skb);
memset(skb_cb, 0, sizeof(*skb_cb));
return skb;
}
static void ath12k_htc_prepare_tx_skb(struct ath12k_htc_ep *ep,
struct sk_buff *skb)
{
struct ath12k_htc_hdr *hdr;
hdr = (struct ath12k_htc_hdr *)skb->data;
memset(hdr, 0, sizeof(*hdr));
hdr->htc_info = le32_encode_bits(ep->eid, HTC_HDR_ENDPOINTID) |
le32_encode_bits((skb->len - sizeof(*hdr)),
HTC_HDR_PAYLOADLEN);
if (ep->tx_credit_flow_enabled)
hdr->htc_info |= le32_encode_bits(ATH12K_HTC_FLAG_NEED_CREDIT_UPDATE,
HTC_HDR_FLAGS);
spin_lock_bh(&ep->htc->tx_lock);
hdr->ctrl_info = le32_encode_bits(ep->seq_no++, HTC_HDR_CONTROLBYTES1);
spin_unlock_bh(&ep->htc->tx_lock);
}
int ath12k_htc_send(struct ath12k_htc *htc,
enum ath12k_htc_ep_id eid,
struct sk_buff *skb)
{
struct ath12k_htc_ep *ep = &htc->endpoint[eid];
struct ath12k_skb_cb *skb_cb = ATH12K_SKB_CB(skb);
struct device *dev = htc->ab->dev;
struct ath12k_base *ab = htc->ab;
int credits = 0;
int ret;
if (eid >= ATH12K_HTC_EP_COUNT) {
ath12k_warn(ab, "Invalid endpoint id: %d\n", eid);
return -ENOENT;
}
skb_push(skb, sizeof(struct ath12k_htc_hdr));
if (ep->tx_credit_flow_enabled) {
credits = DIV_ROUND_UP(skb->len, htc->target_credit_size);
spin_lock_bh(&htc->tx_lock);
if (ep->tx_credits < credits) {
ath12k_dbg(ab, ATH12K_DBG_HTC,
"htc insufficient credits ep %d required %d available %d\n",
eid, credits, ep->tx_credits);
spin_unlock_bh(&htc->tx_lock);
ret = -EAGAIN;
goto err_pull;
}
ep->tx_credits -= credits;
ath12k_dbg(ab, ATH12K_DBG_HTC,
"htc ep %d consumed %d credits (total %d)\n",
eid, credits, ep->tx_credits);
spin_unlock_bh(&htc->tx_lock);
}
ath12k_htc_prepare_tx_skb(ep, skb);
skb_cb->paddr = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);
ret = dma_mapping_error(dev, skb_cb->paddr);
if (ret) {
ret = -EIO;
goto err_credits;
}
ret = ath12k_ce_send(htc->ab, skb, ep->ul_pipe_id, ep->eid);
if (ret)
goto err_unmap;
return 0;
err_unmap:
dma_unmap_single(dev, skb_cb->paddr, skb->len, DMA_TO_DEVICE);
err_credits:
if (ep->tx_credit_flow_enabled) {
spin_lock_bh(&htc->tx_lock);
ep->tx_credits += credits;
ath12k_dbg(ab, ATH12K_DBG_HTC,
"htc ep %d reverted %d credits back (total %d)\n",
eid, credits, ep->tx_credits);
spin_unlock_bh(&htc->tx_lock);
if (ep->ep_ops.ep_tx_credits)
ep->ep_ops.ep_tx_credits(htc->ab);
}
err_pull:
skb_pull(skb, sizeof(struct ath12k_htc_hdr));
return ret;
}
static void
ath12k_htc_process_credit_report(struct ath12k_htc *htc,
const struct ath12k_htc_credit_report *report,
int len,
enum ath12k_htc_ep_id eid)
{
struct ath12k_base *ab = htc->ab;
struct ath12k_htc_ep *ep;
int i, n_reports;
if (len % sizeof(*report))
ath12k_warn(ab, "Uneven credit report len %d", len);
n_reports = len / sizeof(*report);
spin_lock_bh(&htc->tx_lock);
for (i = 0; i < n_reports; i++, report++) {
if (report->eid >= ATH12K_HTC_EP_COUNT)
break;
ep = &htc->endpoint[report->eid];
ep->tx_credits += report->credits;
ath12k_dbg(ab, ATH12K_DBG_HTC, "htc ep %d got %d credits (total %d)\n",
report->eid, report->credits, ep->tx_credits);
if (ep->ep_ops.ep_tx_credits) {
spin_unlock_bh(&htc->tx_lock);
ep->ep_ops.ep_tx_credits(htc->ab);
spin_lock_bh(&htc->tx_lock);
}
}
spin_unlock_bh(&htc->tx_lock);
}
static int ath12k_htc_process_trailer(struct ath12k_htc *htc,
u8 *buffer,
int length,
enum ath12k_htc_ep_id src_eid)
{
struct ath12k_base *ab = htc->ab;
int status = 0;
struct ath12k_htc_record *record;
size_t len;
while (length > 0) {
record = (struct ath12k_htc_record *)buffer;
if (length < sizeof(record->hdr)) {
status = -EINVAL;
break;
}
if (record->hdr.len > length) {
/* no room left in buffer for record */
ath12k_warn(ab, "Invalid record length: %d\n",
record->hdr.len);
status = -EINVAL;
break;
}
switch (record->hdr.id) {
case ATH12K_HTC_RECORD_CREDITS:
len = sizeof(struct ath12k_htc_credit_report);
if (record->hdr.len < len) {
ath12k_warn(ab, "Credit report too long\n");
status = -EINVAL;
break;
}
ath12k_htc_process_credit_report(htc,
record->credit_report,
record->hdr.len,
src_eid);
break;
default:
ath12k_warn(ab, "Unhandled record: id:%d length:%d\n",
record->hdr.id, record->hdr.len);
break;
}
if (status)
break;
/* multiple records may be present in a trailer */
buffer += sizeof(record->hdr) + record->hdr.len;
length -= sizeof(record->hdr) + record->hdr.len;
}
return status;
}
static void ath12k_htc_suspend_complete(struct ath12k_base *ab, bool ack)
{
ath12k_dbg(ab, ATH12K_DBG_BOOT, "boot suspend complete %d\n", ack);
if (ack)
set_bit(ATH12K_FLAG_HTC_SUSPEND_COMPLETE, &ab->dev_flags);
else
clear_bit(ATH12K_FLAG_HTC_SUSPEND_COMPLETE, &ab->dev_flags);
complete(&ab->htc_suspend);
}
void ath12k_htc_rx_completion_handler(struct ath12k_base *ab,
struct sk_buff *skb)
{
int status = 0;
struct ath12k_htc *htc = &ab->htc;
struct ath12k_htc_hdr *hdr;
struct ath12k_htc_ep *ep;
u16 payload_len;
u32 trailer_len = 0;
size_t min_len;
u8 eid;
bool trailer_present;
hdr = (struct ath12k_htc_hdr *)skb->data;
skb_pull(skb, sizeof(*hdr));
eid = le32_get_bits(hdr->htc_info, HTC_HDR_ENDPOINTID);
if (eid >= ATH12K_HTC_EP_COUNT) {
ath12k_warn(ab, "HTC Rx: invalid eid %d\n", eid);
goto out;
}
ep = &htc->endpoint[eid];
payload_len = le32_get_bits(hdr->htc_info, HTC_HDR_PAYLOADLEN);
if (payload_len + sizeof(*hdr) > ATH12K_HTC_MAX_LEN) {
ath12k_warn(ab, "HTC rx frame too long, len: %zu\n",
payload_len + sizeof(*hdr));
goto out;
}
if (skb->len < payload_len) {
ath12k_warn(ab, "HTC Rx: insufficient length, got %d, expected %d\n",
skb->len, payload_len);
goto out;
}
/* get flags to check for trailer */
trailer_present = le32_get_bits(hdr->htc_info, HTC_HDR_FLAGS) &
ATH12K_HTC_FLAG_TRAILER_PRESENT;
if (trailer_present) {
u8 *trailer;
trailer_len = le32_get_bits(hdr->ctrl_info,
HTC_HDR_CONTROLBYTES0);
min_len = sizeof(struct ath12k_htc_record_hdr);
if ((trailer_len < min_len) ||
(trailer_len > payload_len)) {
ath12k_warn(ab, "Invalid trailer length: %d\n",
trailer_len);
goto out;
}
trailer = (u8 *)hdr;
trailer += sizeof(*hdr);
trailer += payload_len;
trailer -= trailer_len;
status = ath12k_htc_process_trailer(htc, trailer,
trailer_len, eid);
if (status)
goto out;
skb_trim(skb, skb->len - trailer_len);
}
if (trailer_len >= payload_len)
/* zero length packet with trailer data, just drop these */
goto out;
if (eid == ATH12K_HTC_EP_0) {
struct ath12k_htc_msg *msg = (struct ath12k_htc_msg *)skb->data;
switch (le32_get_bits(msg->msg_svc_id, HTC_MSG_MESSAGEID)) {
case ATH12K_HTC_MSG_READY_ID:
case ATH12K_HTC_MSG_CONNECT_SERVICE_RESP_ID:
/* handle HTC control message */
if (completion_done(&htc->ctl_resp)) {
/* this is a fatal error, target should not be
* sending unsolicited messages on the ep 0
*/
ath12k_warn(ab, "HTC rx ctrl still processing\n");
complete(&htc->ctl_resp);
goto out;
}
htc->control_resp_len =
min_t(int, skb->len,
ATH12K_HTC_MAX_CTRL_MSG_LEN);
memcpy(htc->control_resp_buffer, skb->data,
htc->control_resp_len);
complete(&htc->ctl_resp);
break;
case ATH12K_HTC_MSG_SEND_SUSPEND_COMPLETE:
ath12k_htc_suspend_complete(ab, true);
break;
case ATH12K_HTC_MSG_NACK_SUSPEND:
ath12k_htc_suspend_complete(ab, false);
break;
case ATH12K_HTC_MSG_WAKEUP_FROM_SUSPEND_ID:
break;
default:
ath12k_warn(ab, "ignoring unsolicited htc ep0 event %u\n",
le32_get_bits(msg->msg_svc_id, HTC_MSG_MESSAGEID));
break;
}
goto out;
}
ath12k_dbg(ab, ATH12K_DBG_HTC, "htc rx completion ep %d skb %pK\n",
eid, skb);
ep->ep_ops.ep_rx_complete(ab, skb);
/* poll tx completion for interrupt disabled CE's */
ath12k_ce_poll_send_completed(ab, ep->ul_pipe_id);
/* skb is now owned by the rx completion handler */
skb = NULL;
out:
kfree_skb(skb);
}
static void ath12k_htc_control_rx_complete(struct ath12k_base *ab,
struct sk_buff *skb)
{
/* This is unexpected. FW is not supposed to send regular rx on this
* endpoint.
*/
ath12k_warn(ab, "unexpected htc rx\n");
kfree_skb(skb);
}
static const char *htc_service_name(enum ath12k_htc_svc_id id)
{
switch (id) {
case ATH12K_HTC_SVC_ID_RESERVED:
return "Reserved";
case ATH12K_HTC_SVC_ID_RSVD_CTRL:
return "Control";
case ATH12K_HTC_SVC_ID_WMI_CONTROL:
return "WMI";
case ATH12K_HTC_SVC_ID_WMI_DATA_BE:
return "DATA BE";
case ATH12K_HTC_SVC_ID_WMI_DATA_BK:
return "DATA BK";
case ATH12K_HTC_SVC_ID_WMI_DATA_VI:
return "DATA VI";
case ATH12K_HTC_SVC_ID_WMI_DATA_VO:
return "DATA VO";
case ATH12K_HTC_SVC_ID_WMI_CONTROL_MAC1:
return "WMI MAC1";
case ATH12K_HTC_SVC_ID_WMI_CONTROL_MAC2:
return "WMI MAC2";
case ATH12K_HTC_SVC_ID_NMI_CONTROL:
return "NMI Control";
case ATH12K_HTC_SVC_ID_NMI_DATA:
return "NMI Data";
case ATH12K_HTC_SVC_ID_HTT_DATA_MSG:
return "HTT Data";
case ATH12K_HTC_SVC_ID_TEST_RAW_STREAMS:
return "RAW";
case ATH12K_HTC_SVC_ID_IPA_TX:
return "IPA TX";
case ATH12K_HTC_SVC_ID_PKT_LOG:
return "PKT LOG";
case ATH12K_HTC_SVC_ID_WMI_CONTROL_DIAG:
return "WMI DIAG";
}
return "Unknown";
}
static void ath12k_htc_reset_endpoint_states(struct ath12k_htc *htc)
{
struct ath12k_htc_ep *ep;
int i;
for (i = ATH12K_HTC_EP_0; i < ATH12K_HTC_EP_COUNT; i++) {
ep = &htc->endpoint[i];
ep->service_id = ATH12K_HTC_SVC_ID_UNUSED;
ep->max_ep_message_len = 0;
ep->max_tx_queue_depth = 0;
ep->eid = i;
ep->htc = htc;
ep->tx_credit_flow_enabled = true;
}
}
static u8 ath12k_htc_get_credit_allocation(struct ath12k_htc *htc,
u16 service_id)
{
struct ath12k_htc_svc_tx_credits *serv_entry;
u8 i, allocation = 0;
serv_entry = htc->service_alloc_table;
for (i = 0; i < ATH12K_HTC_MAX_SERVICE_ALLOC_ENTRIES; i++) {
if (serv_entry[i].service_id == service_id) {
allocation = serv_entry[i].credit_allocation;
break;
}
}
return allocation;
}
static int ath12k_htc_setup_target_buffer_assignments(struct ath12k_htc *htc)
{
struct ath12k_htc_svc_tx_credits *serv_entry;
static const u32 svc_id[] = {
ATH12K_HTC_SVC_ID_WMI_CONTROL,
ATH12K_HTC_SVC_ID_WMI_CONTROL_MAC1,
ATH12K_HTC_SVC_ID_WMI_CONTROL_MAC2,
};
int i, credits;
credits = htc->total_transmit_credits;
serv_entry = htc->service_alloc_table;
if ((htc->wmi_ep_count == 0) ||
(htc->wmi_ep_count > ARRAY_SIZE(svc_id)))
return -EINVAL;
/* Divide credits among number of endpoints for WMI */
credits = credits / htc->wmi_ep_count;
for (i = 0; i < htc->wmi_ep_count; i++) {
serv_entry[i].service_id = svc_id[i];
serv_entry[i].credit_allocation = credits;
}
return 0;
}
int ath12k_htc_wait_target(struct ath12k_htc *htc)
{
int i, status = 0;
struct ath12k_base *ab = htc->ab;
unsigned long time_left;
struct ath12k_htc_ready *ready;
u16 message_id;
u16 credit_count;
u16 credit_size;
time_left = wait_for_completion_timeout(&htc->ctl_resp,
ATH12K_HTC_WAIT_TIMEOUT_HZ);
if (!time_left) {
ath12k_warn(ab, "failed to receive control response completion, polling..\n");
for (i = 0; i < ab->hw_params->ce_count; i++)
ath12k_ce_per_engine_service(htc->ab, i);
time_left =
wait_for_completion_timeout(&htc->ctl_resp,
ATH12K_HTC_WAIT_TIMEOUT_HZ);
if (!time_left)
status = -ETIMEDOUT;
}
if (status < 0) {
ath12k_warn(ab, "ctl_resp never came in (%d)\n", status);
return status;
}
if (htc->control_resp_len < sizeof(*ready)) {
ath12k_warn(ab, "Invalid HTC ready msg len:%d\n",
htc->control_resp_len);
return -ECOMM;
}
ready = (struct ath12k_htc_ready *)htc->control_resp_buffer;
message_id = le32_get_bits(ready->id_credit_count, HTC_MSG_MESSAGEID);
credit_count = le32_get_bits(ready->id_credit_count,
HTC_READY_MSG_CREDITCOUNT);
credit_size = le32_get_bits(ready->size_ep, HTC_READY_MSG_CREDITSIZE);
if (message_id != ATH12K_HTC_MSG_READY_ID) {
ath12k_warn(ab, "Invalid HTC ready msg: 0x%x\n", message_id);
return -ECOMM;
}
htc->total_transmit_credits = credit_count;
htc->target_credit_size = credit_size;
ath12k_dbg(ab, ATH12K_DBG_HTC,
"Target ready! transmit resources: %d size:%d\n",
htc->total_transmit_credits, htc->target_credit_size);
if ((htc->total_transmit_credits == 0) ||
(htc->target_credit_size == 0)) {
ath12k_warn(ab, "Invalid credit size received\n");
return -ECOMM;
}
ath12k_htc_setup_target_buffer_assignments(htc);
return 0;
}
int ath12k_htc_connect_service(struct ath12k_htc *htc,
struct ath12k_htc_svc_conn_req *conn_req,
struct ath12k_htc_svc_conn_resp *conn_resp)
{
struct ath12k_base *ab = htc->ab;
struct ath12k_htc_conn_svc *req_msg;
struct ath12k_htc_conn_svc_resp resp_msg_dummy;
struct ath12k_htc_conn_svc_resp *resp_msg = &resp_msg_dummy;
enum ath12k_htc_ep_id assigned_eid = ATH12K_HTC_EP_COUNT;
struct ath12k_htc_ep *ep;
struct sk_buff *skb;
unsigned int max_msg_size = 0;
int length, status;
unsigned long time_left;
bool disable_credit_flow_ctrl = false;
u16 message_id, service_id, flags = 0;
u8 tx_alloc = 0;
/* special case for HTC pseudo control service */
if (conn_req->service_id == ATH12K_HTC_SVC_ID_RSVD_CTRL) {
disable_credit_flow_ctrl = true;
assigned_eid = ATH12K_HTC_EP_0;
max_msg_size = ATH12K_HTC_MAX_CTRL_MSG_LEN;
memset(&resp_msg_dummy, 0, sizeof(resp_msg_dummy));
goto setup;
}
tx_alloc = ath12k_htc_get_credit_allocation(htc,
conn_req->service_id);
if (!tx_alloc)
ath12k_dbg(ab, ATH12K_DBG_BOOT,
"boot htc service %s does not allocate target credits\n",
htc_service_name(conn_req->service_id));
skb = ath12k_htc_build_tx_ctrl_skb();
if (!skb) {
ath12k_warn(ab, "Failed to allocate HTC packet\n");
return -ENOMEM;
}
length = sizeof(*req_msg);
skb_put(skb, length);
memset(skb->data, 0, length);
req_msg = (struct ath12k_htc_conn_svc *)skb->data;
req_msg->msg_svc_id = le32_encode_bits(ATH12K_HTC_MSG_CONNECT_SERVICE_ID,
HTC_MSG_MESSAGEID);
flags |= u32_encode_bits(tx_alloc, ATH12K_HTC_CONN_FLAGS_RECV_ALLOC);
/* Only enable credit flow control for WMI ctrl service */
if (!(conn_req->service_id == ATH12K_HTC_SVC_ID_WMI_CONTROL ||
conn_req->service_id == ATH12K_HTC_SVC_ID_WMI_CONTROL_MAC1 ||
conn_req->service_id == ATH12K_HTC_SVC_ID_WMI_CONTROL_MAC2)) {
flags |= ATH12K_HTC_CONN_FLAGS_DISABLE_CREDIT_FLOW_CTRL;
disable_credit_flow_ctrl = true;
}
req_msg->flags_len = le32_encode_bits(flags, HTC_SVC_MSG_CONNECTIONFLAGS);
req_msg->msg_svc_id |= le32_encode_bits(conn_req->service_id,
HTC_SVC_MSG_SERVICE_ID);
reinit_completion(&htc->ctl_resp);
status = ath12k_htc_send(htc, ATH12K_HTC_EP_0, skb);
if (status) {
kfree_skb(skb);
return status;
}
/* wait for response */
time_left = wait_for_completion_timeout(&htc->ctl_resp,
ATH12K_HTC_CONN_SVC_TIMEOUT_HZ);
if (!time_left) {
ath12k_err(ab, "Service connect timeout\n");
return -ETIMEDOUT;
}
/* we controlled the buffer creation, it's aligned */
resp_msg = (struct ath12k_htc_conn_svc_resp *)htc->control_resp_buffer;
message_id = le32_get_bits(resp_msg->msg_svc_id, HTC_MSG_MESSAGEID);
service_id = le32_get_bits(resp_msg->msg_svc_id,
HTC_SVC_RESP_MSG_SERVICEID);
if ((message_id != ATH12K_HTC_MSG_CONNECT_SERVICE_RESP_ID) ||
(htc->control_resp_len < sizeof(*resp_msg))) {
ath12k_err(ab, "Invalid resp message ID 0x%x", message_id);
return -EPROTO;
}
ath12k_dbg(ab, ATH12K_DBG_HTC,
"HTC Service %s connect response: status: %u, assigned ep: %u\n",
htc_service_name(service_id),
le32_get_bits(resp_msg->flags_len, HTC_SVC_RESP_MSG_STATUS),
le32_get_bits(resp_msg->flags_len, HTC_SVC_RESP_MSG_ENDPOINTID));
conn_resp->connect_resp_code = le32_get_bits(resp_msg->flags_len,
HTC_SVC_RESP_MSG_STATUS);
/* check response status */
if (conn_resp->connect_resp_code != ATH12K_HTC_CONN_SVC_STATUS_SUCCESS) {
ath12k_err(ab, "HTC Service %s connect request failed: 0x%x\n",
htc_service_name(service_id),
conn_resp->connect_resp_code);
return -EPROTO;
}
assigned_eid = le32_get_bits(resp_msg->flags_len,
HTC_SVC_RESP_MSG_ENDPOINTID);
max_msg_size = le32_get_bits(resp_msg->flags_len,
HTC_SVC_RESP_MSG_MAXMSGSIZE);
setup:
if (assigned_eid >= ATH12K_HTC_EP_COUNT)
return -EPROTO;
if (max_msg_size == 0)
return -EPROTO;
ep = &htc->endpoint[assigned_eid];
ep->eid = assigned_eid;
if (ep->service_id != ATH12K_HTC_SVC_ID_UNUSED)
return -EPROTO;
/* return assigned endpoint to caller */
conn_resp->eid = assigned_eid;
conn_resp->max_msg_len = le32_get_bits(resp_msg->flags_len,
HTC_SVC_RESP_MSG_MAXMSGSIZE);
/* setup the endpoint */
ep->service_id = conn_req->service_id;
ep->max_tx_queue_depth = conn_req->max_send_queue_depth;
ep->max_ep_message_len = le32_get_bits(resp_msg->flags_len,
HTC_SVC_RESP_MSG_MAXMSGSIZE);
ep->tx_credits = tx_alloc;
/* copy all the callbacks */
ep->ep_ops = conn_req->ep_ops;
status = ath12k_hif_map_service_to_pipe(htc->ab,
ep->service_id,
&ep->ul_pipe_id,
&ep->dl_pipe_id);
if (status)
return status;
ath12k_dbg(ab, ATH12K_DBG_BOOT,
"boot htc service '%s' ul pipe %d dl pipe %d eid %d ready\n",
htc_service_name(ep->service_id), ep->ul_pipe_id,
ep->dl_pipe_id, ep->eid);
if (disable_credit_flow_ctrl && ep->tx_credit_flow_enabled) {
ep->tx_credit_flow_enabled = false;
ath12k_dbg(ab, ATH12K_DBG_BOOT,
"boot htc service '%s' eid %d TX flow control disabled\n",
htc_service_name(ep->service_id), assigned_eid);
}
return status;
}
int ath12k_htc_start(struct ath12k_htc *htc)
{
struct sk_buff *skb;
int status;
struct ath12k_base *ab = htc->ab;
struct ath12k_htc_setup_complete_extended *msg;
skb = ath12k_htc_build_tx_ctrl_skb();
if (!skb)
return -ENOMEM;
skb_put(skb, sizeof(*msg));
memset(skb->data, 0, skb->len);
msg = (struct ath12k_htc_setup_complete_extended *)skb->data;
msg->msg_id = le32_encode_bits(ATH12K_HTC_MSG_SETUP_COMPLETE_EX_ID,
HTC_MSG_MESSAGEID);
ath12k_dbg(ab, ATH12K_DBG_HTC, "HTC is using TX credit flow control\n");
status = ath12k_htc_send(htc, ATH12K_HTC_EP_0, skb);
if (status) {
kfree_skb(skb);
return status;
}
return 0;
}
int ath12k_htc_init(struct ath12k_base *ab)
{
struct ath12k_htc *htc = &ab->htc;
struct ath12k_htc_svc_conn_req conn_req = { };
struct ath12k_htc_svc_conn_resp conn_resp = { };
int ret;
spin_lock_init(&htc->tx_lock);
ath12k_htc_reset_endpoint_states(htc);
htc->ab = ab;
switch (ab->wmi_ab.preferred_hw_mode) {
case WMI_HOST_HW_MODE_SINGLE:
htc->wmi_ep_count = 1;
break;
case WMI_HOST_HW_MODE_DBS:
case WMI_HOST_HW_MODE_DBS_OR_SBS:
htc->wmi_ep_count = 2;
break;
case WMI_HOST_HW_MODE_DBS_SBS:
htc->wmi_ep_count = 3;
break;
default:
htc->wmi_ep_count = ab->hw_params->max_radios;
break;
}
/* setup our pseudo HTC control endpoint connection */
conn_req.ep_ops.ep_tx_complete = ath12k_htc_control_tx_complete;
conn_req.ep_ops.ep_rx_complete = ath12k_htc_control_rx_complete;
conn_req.max_send_queue_depth = ATH12K_NUM_CONTROL_TX_BUFFERS;
conn_req.service_id = ATH12K_HTC_SVC_ID_RSVD_CTRL;
/* connect fake service */
ret = ath12k_htc_connect_service(htc, &conn_req, &conn_resp);
if (ret) {
ath12k_err(ab, "could not connect to htc service (%d)\n", ret);
return ret;
}
init_completion(&htc->ctl_resp);
return 0;
}


@@ -0,0 +1,316 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH12K_HTC_H
#define ATH12K_HTC_H
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/bug.h>
#include <linux/skbuff.h>
#include <linux/timer.h>
struct ath12k_base;
#define HTC_HDR_ENDPOINTID GENMASK(7, 0)
#define HTC_HDR_FLAGS GENMASK(15, 8)
#define HTC_HDR_PAYLOADLEN GENMASK(31, 16)
#define HTC_HDR_CONTROLBYTES0 GENMASK(7, 0)
#define HTC_HDR_CONTROLBYTES1 GENMASK(15, 8)
#define HTC_HDR_RESERVED GENMASK(31, 16)
#define HTC_SVC_MSG_SERVICE_ID GENMASK(31, 16)
#define HTC_SVC_MSG_CONNECTIONFLAGS GENMASK(15, 0)
#define HTC_SVC_MSG_SERVICEMETALENGTH GENMASK(23, 16)
#define HTC_READY_MSG_CREDITCOUNT GENMASK(31, 16)
#define HTC_READY_MSG_CREDITSIZE GENMASK(15, 0)
#define HTC_READY_MSG_MAXENDPOINTS GENMASK(23, 16)
#define HTC_READY_EX_MSG_HTCVERSION GENMASK(7, 0)
#define HTC_READY_EX_MSG_MAXMSGSPERHTCBUNDLE GENMASK(15, 8)
#define HTC_SVC_RESP_MSG_SERVICEID GENMASK(31, 16)
#define HTC_SVC_RESP_MSG_STATUS GENMASK(7, 0)
#define HTC_SVC_RESP_MSG_ENDPOINTID GENMASK(15, 8)
#define HTC_SVC_RESP_MSG_MAXMSGSIZE GENMASK(31, 16)
#define HTC_SVC_RESP_MSG_SERVICEMETALENGTH GENMASK(7, 0)
#define HTC_MSG_MESSAGEID GENMASK(15, 0)
#define HTC_SETUP_COMPLETE_EX_MSG_SETUPFLAGS GENMASK(31, 0)
#define HTC_SETUP_COMPLETE_EX_MSG_MAXMSGSPERBUNDLEDRECV GENMASK(7, 0)
#define HTC_SETUP_COMPLETE_EX_MSG_RSVD0 GENMASK(15, 8)
#define HTC_SETUP_COMPLETE_EX_MSG_RSVD1 GENMASK(23, 16)
#define HTC_SETUP_COMPLETE_EX_MSG_RSVD2 GENMASK(31, 24)
enum ath12k_htc_tx_flags {
ATH12K_HTC_FLAG_NEED_CREDIT_UPDATE = 0x01,
ATH12K_HTC_FLAG_SEND_BUNDLE = 0x02
};
enum ath12k_htc_rx_flags {
ATH12K_HTC_FLAG_TRAILER_PRESENT = 0x02,
ATH12K_HTC_FLAG_BUNDLE_MASK = 0xF0
};
struct ath12k_htc_hdr {
__le32 htc_info;
__le32 ctrl_info;
} __packed __aligned(4);
enum ath12k_htc_msg_id {
ATH12K_HTC_MSG_READY_ID = 1,
ATH12K_HTC_MSG_CONNECT_SERVICE_ID = 2,
ATH12K_HTC_MSG_CONNECT_SERVICE_RESP_ID = 3,
ATH12K_HTC_MSG_SETUP_COMPLETE_ID = 4,
ATH12K_HTC_MSG_SETUP_COMPLETE_EX_ID = 5,
ATH12K_HTC_MSG_SEND_SUSPEND_COMPLETE = 6,
ATH12K_HTC_MSG_NACK_SUSPEND = 7,
ATH12K_HTC_MSG_WAKEUP_FROM_SUSPEND_ID = 8,
};
enum ath12k_htc_version {
ATH12K_HTC_VERSION_2P0 = 0x00, /* 2.0 */
ATH12K_HTC_VERSION_2P1 = 0x01, /* 2.1 */
};
enum ath12k_htc_conn_flag_threshold_level {
ATH12K_HTC_CONN_FLAGS_THRESHOLD_LEVEL_ONE_FOURTH,
ATH12K_HTC_CONN_FLAGS_THRESHOLD_LEVEL_ONE_HALF,
ATH12K_HTC_CONN_FLAGS_THRESHOLD_LEVEL_THREE_FOURTHS,
ATH12K_HTC_CONN_FLAGS_THRESHOLD_LEVEL_UNITY,
};
#define ATH12K_HTC_CONN_FLAGS_THRESHOLD_LEVEL_MASK GENMASK(1, 0)
#define ATH12K_HTC_CONN_FLAGS_REDUCE_CREDIT_DRIBBLE BIT(2)
#define ATH12K_HTC_CONN_FLAGS_DISABLE_CREDIT_FLOW_CTRL BIT(3)
#define ATH12K_HTC_CONN_FLAGS_RECV_ALLOC GENMASK(15, 8)
enum ath12k_htc_conn_svc_status {
ATH12K_HTC_CONN_SVC_STATUS_SUCCESS = 0,
ATH12K_HTC_CONN_SVC_STATUS_NOT_FOUND = 1,
ATH12K_HTC_CONN_SVC_STATUS_FAILED = 2,
ATH12K_HTC_CONN_SVC_STATUS_NO_RESOURCES = 3,
ATH12K_HTC_CONN_SVC_STATUS_NO_MORE_EP = 4
};
struct ath12k_htc_ready {
__le32 id_credit_count;
__le32 size_ep;
} __packed;
struct ath12k_htc_ready_extended {
struct ath12k_htc_ready base;
__le32 ver_bundle;
} __packed;
struct ath12k_htc_conn_svc {
__le32 msg_svc_id;
__le32 flags_len;
} __packed;
struct ath12k_htc_conn_svc_resp {
__le32 msg_svc_id;
__le32 flags_len;
__le32 svc_meta_pad;
} __packed;
struct ath12k_htc_setup_complete_extended {
__le32 msg_id;
__le32 flags;
__le32 max_msgs_per_bundled_recv;
} __packed;
struct ath12k_htc_msg {
__le32 msg_svc_id;
__le32 flags_len;
} __packed __aligned(4);
enum ath12k_htc_record_id {
ATH12K_HTC_RECORD_NULL = 0,
ATH12K_HTC_RECORD_CREDITS = 1
};
struct ath12k_htc_record_hdr {
u8 id; /* @enum ath12k_htc_record_id */
u8 len;
u8 pad0;
u8 pad1;
} __packed;
struct ath12k_htc_credit_report {
u8 eid; /* @enum ath12k_htc_ep_id */
u8 credits;
u8 pad0;
u8 pad1;
} __packed;
struct ath12k_htc_record {
struct ath12k_htc_record_hdr hdr;
struct ath12k_htc_credit_report credit_report[];
} __packed __aligned(4);
/* HTC FRAME structure layout draft
*
* note: the trailer offset is dynamic depending
* on payload length. this is only a struct layout draft
*
*=======================================================
*
* HTC HEADER
*
*=======================================================
* |
* HTC message | payload
* (variable length) | (variable length)
*=======================================================
*
* HTC Record
*
*=======================================================
*/
enum ath12k_htc_svc_gid {
ATH12K_HTC_SVC_GRP_RSVD = 0,
ATH12K_HTC_SVC_GRP_WMI = 1,
ATH12K_HTC_SVC_GRP_NMI = 2,
ATH12K_HTC_SVC_GRP_HTT = 3,
ATH12K_HTC_SVC_GRP_CFG = 4,
ATH12K_HTC_SVC_GRP_IPA = 5,
ATH12K_HTC_SVC_GRP_PKTLOG = 6,
ATH12K_HTC_SVC_GRP_TEST = 254,
ATH12K_HTC_SVC_GRP_LAST = 255,
};
#define SVC(group, idx) \
(int)(((int)(group) << 8) | (int)(idx))
enum ath12k_htc_svc_id {
/* NOTE: service ID of 0x0000 is reserved and should never be used */
ATH12K_HTC_SVC_ID_RESERVED = 0x0000,
ATH12K_HTC_SVC_ID_UNUSED = ATH12K_HTC_SVC_ID_RESERVED,
ATH12K_HTC_SVC_ID_RSVD_CTRL = SVC(ATH12K_HTC_SVC_GRP_RSVD, 1),
ATH12K_HTC_SVC_ID_WMI_CONTROL = SVC(ATH12K_HTC_SVC_GRP_WMI, 0),
ATH12K_HTC_SVC_ID_WMI_DATA_BE = SVC(ATH12K_HTC_SVC_GRP_WMI, 1),
ATH12K_HTC_SVC_ID_WMI_DATA_BK = SVC(ATH12K_HTC_SVC_GRP_WMI, 2),
ATH12K_HTC_SVC_ID_WMI_DATA_VI = SVC(ATH12K_HTC_SVC_GRP_WMI, 3),
ATH12K_HTC_SVC_ID_WMI_DATA_VO = SVC(ATH12K_HTC_SVC_GRP_WMI, 4),
ATH12K_HTC_SVC_ID_WMI_CONTROL_MAC1 = SVC(ATH12K_HTC_SVC_GRP_WMI, 5),
ATH12K_HTC_SVC_ID_WMI_CONTROL_MAC2 = SVC(ATH12K_HTC_SVC_GRP_WMI, 6),
ATH12K_HTC_SVC_ID_WMI_CONTROL_DIAG = SVC(ATH12K_HTC_SVC_GRP_WMI, 7),
ATH12K_HTC_SVC_ID_NMI_CONTROL = SVC(ATH12K_HTC_SVC_GRP_NMI, 0),
ATH12K_HTC_SVC_ID_NMI_DATA = SVC(ATH12K_HTC_SVC_GRP_NMI, 1),
ATH12K_HTC_SVC_ID_HTT_DATA_MSG = SVC(ATH12K_HTC_SVC_GRP_HTT, 0),
/* raw stream service (i.e. flash, tcmd, calibration apps) */
ATH12K_HTC_SVC_ID_TEST_RAW_STREAMS = SVC(ATH12K_HTC_SVC_GRP_TEST, 0),
ATH12K_HTC_SVC_ID_IPA_TX = SVC(ATH12K_HTC_SVC_GRP_IPA, 0),
ATH12K_HTC_SVC_ID_PKT_LOG = SVC(ATH12K_HTC_SVC_GRP_PKTLOG, 0),
};
#undef SVC
enum ath12k_htc_ep_id {
ATH12K_HTC_EP_UNUSED = -1,
ATH12K_HTC_EP_0 = 0,
ATH12K_HTC_EP_1 = 1,
ATH12K_HTC_EP_2,
ATH12K_HTC_EP_3,
ATH12K_HTC_EP_4,
ATH12K_HTC_EP_5,
ATH12K_HTC_EP_6,
ATH12K_HTC_EP_7,
ATH12K_HTC_EP_8,
ATH12K_HTC_EP_COUNT,
};
struct ath12k_htc_ep_ops {
void (*ep_tx_complete)(struct ath12k_base *ab, struct sk_buff *skb);
void (*ep_rx_complete)(struct ath12k_base *ab, struct sk_buff *skb);
void (*ep_tx_credits)(struct ath12k_base *ab);
};
/* service connection information */
struct ath12k_htc_svc_conn_req {
u16 service_id;
struct ath12k_htc_ep_ops ep_ops;
int max_send_queue_depth;
};
/* service connection response information */
struct ath12k_htc_svc_conn_resp {
u8 buffer_len;
u8 actual_len;
enum ath12k_htc_ep_id eid;
unsigned int max_msg_len;
u8 connect_resp_code;
};
#define ATH12K_NUM_CONTROL_TX_BUFFERS 2
#define ATH12K_HTC_MAX_LEN 4096
#define ATH12K_HTC_MAX_CTRL_MSG_LEN 256
#define ATH12K_HTC_WAIT_TIMEOUT_HZ (1 * HZ)
#define ATH12K_HTC_CONTROL_BUFFER_SIZE (ATH12K_HTC_MAX_CTRL_MSG_LEN + \
sizeof(struct ath12k_htc_hdr))
#define ATH12K_HTC_CONN_SVC_TIMEOUT_HZ (1 * HZ)
#define ATH12K_HTC_MAX_SERVICE_ALLOC_ENTRIES 8
struct ath12k_htc_ep {
struct ath12k_htc *htc;
enum ath12k_htc_ep_id eid;
enum ath12k_htc_svc_id service_id;
struct ath12k_htc_ep_ops ep_ops;
int max_tx_queue_depth;
int max_ep_message_len;
u8 ul_pipe_id;
u8 dl_pipe_id;
u8 seq_no; /* for debugging */
int tx_credits;
bool tx_credit_flow_enabled;
};
struct ath12k_htc_svc_tx_credits {
u16 service_id;
u8 credit_allocation;
};
struct ath12k_htc {
struct ath12k_base *ab;
struct ath12k_htc_ep endpoint[ATH12K_HTC_EP_COUNT];
/* protects endpoints */
spinlock_t tx_lock;
u8 control_resp_buffer[ATH12K_HTC_MAX_CTRL_MSG_LEN];
int control_resp_len;
struct completion ctl_resp;
int total_transmit_credits;
struct ath12k_htc_svc_tx_credits
service_alloc_table[ATH12K_HTC_MAX_SERVICE_ALLOC_ENTRIES];
int target_credit_size;
u8 wmi_ep_count;
};
int ath12k_htc_init(struct ath12k_base *ar);
int ath12k_htc_wait_target(struct ath12k_htc *htc);
int ath12k_htc_start(struct ath12k_htc *htc);
int ath12k_htc_connect_service(struct ath12k_htc *htc,
struct ath12k_htc_svc_conn_req *conn_req,
struct ath12k_htc_svc_conn_resp *conn_resp);
int ath12k_htc_send(struct ath12k_htc *htc, enum ath12k_htc_ep_id eid,
struct sk_buff *packet);
struct sk_buff *ath12k_htc_alloc_skb(struct ath12k_base *ar, int size);
void ath12k_htc_rx_completion_handler(struct ath12k_base *ar,
struct sk_buff *skb);
#endif


@@ -0,0 +1,312 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH12K_HW_H
#define ATH12K_HW_H
#include <linux/mhi.h>
#include "wmi.h"
#include "hal.h"
/* Target configuration defines */
/* Num VDEVS per radio */
#define TARGET_NUM_VDEVS (16 + 1)
#define TARGET_NUM_PEERS_PDEV (512 + TARGET_NUM_VDEVS)
/* Num of peers for Single Radio mode */
#define TARGET_NUM_PEERS_SINGLE (TARGET_NUM_PEERS_PDEV)
/* Num of peers for DBS */
#define TARGET_NUM_PEERS_DBS (2 * TARGET_NUM_PEERS_PDEV)
/* Num of peers for DBS_SBS */
#define TARGET_NUM_PEERS_DBS_SBS (3 * TARGET_NUM_PEERS_PDEV)
/* Max num of stations (per radio) */
#define TARGET_NUM_STATIONS 512
#define TARGET_NUM_PEERS(x) TARGET_NUM_PEERS_##x
#define TARGET_NUM_PEER_KEYS 2
#define TARGET_NUM_TIDS(x) (2 * TARGET_NUM_PEERS(x) + \
4 * TARGET_NUM_VDEVS + 8)
#define TARGET_AST_SKID_LIMIT 16
#define TARGET_NUM_OFFLD_PEERS 4
#define TARGET_NUM_OFFLD_REORDER_BUFFS 4
#define TARGET_TX_CHAIN_MASK (BIT(0) | BIT(1) | BIT(2) | BIT(4))
#define TARGET_RX_CHAIN_MASK (BIT(0) | BIT(1) | BIT(2) | BIT(4))
#define TARGET_RX_TIMEOUT_LO_PRI 100
#define TARGET_RX_TIMEOUT_HI_PRI 40
#define TARGET_DECAP_MODE_RAW 0
#define TARGET_DECAP_MODE_NATIVE_WIFI 1
#define TARGET_DECAP_MODE_ETH 2
#define TARGET_SCAN_MAX_PENDING_REQS 4
#define TARGET_BMISS_OFFLOAD_MAX_VDEV 3
#define TARGET_ROAM_OFFLOAD_MAX_VDEV 3
#define TARGET_ROAM_OFFLOAD_MAX_AP_PROFILES 8
#define TARGET_GTK_OFFLOAD_MAX_VDEV 3
#define TARGET_NUM_MCAST_GROUPS 12
#define TARGET_NUM_MCAST_TABLE_ELEMS 64
#define TARGET_MCAST2UCAST_MODE 2
#define TARGET_TX_DBG_LOG_SIZE 1024
#define TARGET_RX_SKIP_DEFRAG_TIMEOUT_DUP_DETECTION_CHECK 1
#define TARGET_VOW_CONFIG 0
#define TARGET_NUM_MSDU_DESC (2500)
#define TARGET_MAX_FRAG_ENTRIES 6
#define TARGET_MAX_BCN_OFFLD 16
#define TARGET_NUM_WDS_ENTRIES 32
#define TARGET_DMA_BURST_SIZE 1
#define TARGET_RX_BATCHMODE 1
#define ATH12K_HW_MAX_QUEUES 4
#define ATH12K_QUEUE_LEN 4096
#define ATH12K_HW_RATECODE_CCK_SHORT_PREAM_MASK 0x4
#define ATH12K_FW_DIR "ath12k"
#define ATH12K_BOARD_MAGIC "QCA-ATH12K-BOARD"
#define ATH12K_BOARD_API2_FILE "board-2.bin"
#define ATH12K_DEFAULT_BOARD_FILE "board.bin"
#define ATH12K_DEFAULT_CAL_FILE "caldata.bin"
#define ATH12K_AMSS_FILE "amss.bin"
#define ATH12K_M3_FILE "m3.bin"
#define ATH12K_REGDB_FILE_NAME "regdb.bin"
enum ath12k_hw_rate_cck {
ATH12K_HW_RATE_CCK_LP_11M = 0,
ATH12K_HW_RATE_CCK_LP_5_5M,
ATH12K_HW_RATE_CCK_LP_2M,
ATH12K_HW_RATE_CCK_LP_1M,
ATH12K_HW_RATE_CCK_SP_11M,
ATH12K_HW_RATE_CCK_SP_5_5M,
ATH12K_HW_RATE_CCK_SP_2M,
};
enum ath12k_hw_rate_ofdm {
ATH12K_HW_RATE_OFDM_48M = 0,
ATH12K_HW_RATE_OFDM_24M,
ATH12K_HW_RATE_OFDM_12M,
ATH12K_HW_RATE_OFDM_6M,
ATH12K_HW_RATE_OFDM_54M,
ATH12K_HW_RATE_OFDM_36M,
ATH12K_HW_RATE_OFDM_18M,
ATH12K_HW_RATE_OFDM_9M,
};
enum ath12k_bus {
ATH12K_BUS_PCI,
};
#define ATH12K_EXT_IRQ_GRP_NUM_MAX 11
struct hal_rx_desc;
struct hal_tcl_data_cmd;
struct htt_rx_ring_tlv_filter;
enum hal_encrypt_type;
struct ath12k_hw_ring_mask {
u8 tx[ATH12K_EXT_IRQ_GRP_NUM_MAX];
u8 rx_mon_dest[ATH12K_EXT_IRQ_GRP_NUM_MAX];
u8 rx[ATH12K_EXT_IRQ_GRP_NUM_MAX];
u8 rx_err[ATH12K_EXT_IRQ_GRP_NUM_MAX];
u8 rx_wbm_rel[ATH12K_EXT_IRQ_GRP_NUM_MAX];
u8 reo_status[ATH12K_EXT_IRQ_GRP_NUM_MAX];
u8 host2rxdma[ATH12K_EXT_IRQ_GRP_NUM_MAX];
u8 tx_mon_dest[ATH12K_EXT_IRQ_GRP_NUM_MAX];
};
struct ath12k_hw_hal_params {
enum hal_rx_buf_return_buf_manager rx_buf_rbm;
u32 wbm2sw_cc_enable;
};
struct ath12k_hw_params {
const char *name;
u16 hw_rev;
struct {
const char *dir;
size_t board_size;
size_t cal_offset;
} fw;
u8 max_radios;
bool single_pdev_only:1;
u32 qmi_service_ins_id;
bool internal_sleep_clock:1;
const struct ath12k_hw_ops *hw_ops;
const struct ath12k_hw_ring_mask *ring_mask;
const struct ath12k_hw_regs *regs;
const struct ce_attr *host_ce_config;
u32 ce_count;
const struct ce_pipe_config *target_ce_config;
u32 target_ce_count;
const struct service_to_pipe *svc_to_ce_map;
u32 svc_to_ce_map_len;
const struct ath12k_hw_hal_params *hal_params;
bool rxdma1_enable:1;
int num_rxmda_per_pdev;
int num_rxdma_dst_ring;
bool rx_mac_buf_ring:1;
bool vdev_start_delay:1;
u16 interface_modes;
bool supports_monitor:1;
bool idle_ps:1;
bool download_calib:1;
bool supports_suspend:1;
bool tcl_ring_retry:1;
bool reoq_lut_support:1;
bool supports_shadow_regs:1;
u32 hal_desc_sz;
u32 num_tcl_banks;
u32 max_tx_ring;
const struct mhi_controller_config *mhi_config;
void (*wmi_init)(struct ath12k_base *ab,
struct ath12k_wmi_resource_config_arg *config);
const struct hal_ops *hal_ops;
};
struct ath12k_hw_ops {
u8 (*get_hw_mac_from_pdev_id)(int pdev_id);
int (*mac_id_to_pdev_id)(const struct ath12k_hw_params *hw, int mac_id);
int (*mac_id_to_srng_id)(const struct ath12k_hw_params *hw, int mac_id);
int (*rxdma_ring_sel_config)(struct ath12k_base *ab);
u8 (*get_ring_selector)(struct sk_buff *skb);
bool (*dp_srng_is_tx_comp_ring)(int ring_num);
};
static inline
int ath12k_hw_get_mac_from_pdev_id(const struct ath12k_hw_params *hw,
int pdev_idx)
{
if (hw->hw_ops->get_hw_mac_from_pdev_id)
return hw->hw_ops->get_hw_mac_from_pdev_id(pdev_idx);
return 0;
}
static inline int ath12k_hw_mac_id_to_pdev_id(const struct ath12k_hw_params *hw,
int mac_id)
{
if (hw->hw_ops->mac_id_to_pdev_id)
return hw->hw_ops->mac_id_to_pdev_id(hw, mac_id);
return 0;
}
static inline int ath12k_hw_mac_id_to_srng_id(const struct ath12k_hw_params *hw,
int mac_id)
{
if (hw->hw_ops->mac_id_to_srng_id)
return hw->hw_ops->mac_id_to_srng_id(hw, mac_id);
return 0;
}
struct ath12k_fw_ie {
__le32 id;
__le32 len;
u8 data[];
};
enum ath12k_bd_ie_board_type {
ATH12K_BD_IE_BOARD_NAME = 0,
ATH12K_BD_IE_BOARD_DATA = 1,
};
enum ath12k_bd_ie_type {
/* contains sub IEs of enum ath12k_bd_ie_board_type */
ATH12K_BD_IE_BOARD = 0,
ATH12K_BD_IE_BOARD_EXT = 1,
};
struct ath12k_hw_regs {
u32 hal_tcl1_ring_id;
u32 hal_tcl1_ring_misc;
u32 hal_tcl1_ring_tp_addr_lsb;
u32 hal_tcl1_ring_tp_addr_msb;
u32 hal_tcl1_ring_consumer_int_setup_ix0;
u32 hal_tcl1_ring_consumer_int_setup_ix1;
u32 hal_tcl1_ring_msi1_base_lsb;
u32 hal_tcl1_ring_msi1_base_msb;
u32 hal_tcl1_ring_msi1_data;
u32 hal_tcl_ring_base_lsb;
u32 hal_tcl_status_ring_base_lsb;
u32 hal_wbm_idle_ring_base_lsb;
u32 hal_wbm_idle_ring_misc_addr;
u32 hal_wbm_r0_idle_list_cntl_addr;
u32 hal_wbm_r0_idle_list_size_addr;
u32 hal_wbm_scattered_ring_base_lsb;
u32 hal_wbm_scattered_ring_base_msb;
u32 hal_wbm_scattered_desc_head_info_ix0;
u32 hal_wbm_scattered_desc_head_info_ix1;
u32 hal_wbm_scattered_desc_tail_info_ix0;
u32 hal_wbm_scattered_desc_tail_info_ix1;
u32 hal_wbm_scattered_desc_ptr_hp_addr;
u32 hal_wbm_sw_release_ring_base_lsb;
u32 hal_wbm_sw1_release_ring_base_lsb;
u32 hal_wbm0_release_ring_base_lsb;
u32 hal_wbm1_release_ring_base_lsb;
u32 pcie_qserdes_sysclk_en_sel;
u32 pcie_pcs_osc_dtct_config_base;
u32 hal_ppe_rel_ring_base;
u32 hal_reo2_ring_base;
u32 hal_reo1_misc_ctrl_addr;
u32 hal_reo1_sw_cookie_cfg0;
u32 hal_reo1_sw_cookie_cfg1;
u32 hal_reo1_qdesc_lut_base0;
u32 hal_reo1_qdesc_lut_base1;
u32 hal_reo1_ring_base_lsb;
u32 hal_reo1_ring_base_msb;
u32 hal_reo1_ring_id;
u32 hal_reo1_ring_misc;
u32 hal_reo1_ring_hp_addr_lsb;
u32 hal_reo1_ring_hp_addr_msb;
u32 hal_reo1_ring_producer_int_setup;
u32 hal_reo1_ring_msi1_base_lsb;
u32 hal_reo1_ring_msi1_base_msb;
u32 hal_reo1_ring_msi1_data;
u32 hal_reo1_aging_thres_ix0;
u32 hal_reo1_aging_thres_ix1;
u32 hal_reo1_aging_thres_ix2;
u32 hal_reo1_aging_thres_ix3;
u32 hal_reo2_sw0_ring_base;
u32 hal_sw2reo_ring_base;
u32 hal_sw2reo1_ring_base;
u32 hal_reo_cmd_ring_base;
u32 hal_reo_status_ring_base;
};
int ath12k_hw_init(struct ath12k_base *ab);
#endif


@@ -0,0 +1,76 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH12K_MAC_H
#define ATH12K_MAC_H
#include <net/mac80211.h>
#include <net/cfg80211.h>
struct ath12k;
struct ath12k_base;
struct ath12k_generic_iter {
struct ath12k *ar;
int ret;
};
/* number of failed packets (20 packets with 16 sw retries each) */
#define ATH12K_KICKOUT_THRESHOLD (20 * 16)
/* Use insanely high numbers to make sure that the firmware implementation
* won't start, we have the same functionality already in hostapd. Unit
* is seconds.
*/
#define ATH12K_KEEPALIVE_MIN_IDLE 3747
#define ATH12K_KEEPALIVE_MAX_IDLE 3895
#define ATH12K_KEEPALIVE_MAX_UNRESPONSIVE 3900
/* FIXME: should these be in ieee80211.h? */
#define IEEE80211_VHT_MCS_SUPPORT_0_11_MASK GENMASK(23, 16)
#define IEEE80211_DISABLE_VHT_MCS_SUPPORT_0_11 BIT(24)
#define ATH12K_CHAN_WIDTH_NUM 8
#define ATH12K_TX_POWER_MAX_VAL 70
#define ATH12K_TX_POWER_MIN_VAL 0
enum ath12k_supported_bw {
ATH12K_BW_20 = 0,
ATH12K_BW_40 = 1,
ATH12K_BW_80 = 2,
ATH12K_BW_160 = 3,
};
extern const struct htt_rx_ring_tlv_filter ath12k_mac_mon_status_filter_default;
void ath12k_mac_destroy(struct ath12k_base *ab);
void ath12k_mac_unregister(struct ath12k_base *ab);
int ath12k_mac_register(struct ath12k_base *ab);
int ath12k_mac_allocate(struct ath12k_base *ab);
int ath12k_mac_hw_ratecode_to_legacy_rate(u8 hw_rc, u8 preamble, u8 *rateidx,
u16 *rate);
u8 ath12k_mac_bitrate_to_idx(const struct ieee80211_supported_band *sband,
u32 bitrate);
u8 ath12k_mac_hw_rate_to_idx(const struct ieee80211_supported_band *sband,
u8 hw_rate, bool cck);
void __ath12k_mac_scan_finish(struct ath12k *ar);
void ath12k_mac_scan_finish(struct ath12k *ar);
struct ath12k_vif *ath12k_mac_get_arvif(struct ath12k *ar, u32 vdev_id);
struct ath12k_vif *ath12k_mac_get_arvif_by_vdev_id(struct ath12k_base *ab,
u32 vdev_id);
struct ath12k *ath12k_mac_get_ar_by_vdev_id(struct ath12k_base *ab, u32 vdev_id);
struct ath12k *ath12k_mac_get_ar_by_pdev_id(struct ath12k_base *ab, u32 pdev_id);
void ath12k_mac_drain_tx(struct ath12k *ar);
void ath12k_mac_peer_cleanup_all(struct ath12k *ar);
int ath12k_mac_tx_mgmt_pending_free(int buf_id, void *skb, void *ctx);
enum rate_info_bw ath12k_mac_bw_to_mac80211_bw(enum ath12k_supported_bw bw);
enum ath12k_supported_bw ath12k_mac_mac80211_bw_to_ath12k_bw(enum rate_info_bw bw);
enum hal_encrypt_type ath12k_dp_tx_get_encrypt_type(u32 cipher);
#endif


@@ -0,0 +1,616 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2020-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include <linux/msi.h>
#include <linux/pci.h>
#include "core.h"
#include "debug.h"
#include "mhi.h"
#include "pci.h"
#define MHI_TIMEOUT_DEFAULT_MS 90000
static const struct mhi_channel_config ath12k_mhi_channels_qcn9274[] = {
{
.num = 0,
.name = "LOOPBACK",
.num_elements = 32,
.event_ring = 1,
.dir = DMA_TO_DEVICE,
.ee_mask = 0x4,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
},
{
.num = 1,
.name = "LOOPBACK",
.num_elements = 32,
.event_ring = 1,
.dir = DMA_FROM_DEVICE,
.ee_mask = 0x4,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
},
{
.num = 20,
.name = "IPCR",
.num_elements = 32,
.event_ring = 1,
.dir = DMA_TO_DEVICE,
.ee_mask = 0x4,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
},
{
.num = 21,
.name = "IPCR",
.num_elements = 32,
.event_ring = 1,
.dir = DMA_FROM_DEVICE,
.ee_mask = 0x4,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = true,
},
};
static struct mhi_event_config ath12k_mhi_events_qcn9274[] = {
{
.num_elements = 32,
.irq_moderation_ms = 0,
.irq = 1,
.data_type = MHI_ER_CTRL,
.mode = MHI_DB_BRST_DISABLE,
.hardware_event = false,
.client_managed = false,
.offload_channel = false,
},
{
.num_elements = 256,
.irq_moderation_ms = 1,
.irq = 2,
.mode = MHI_DB_BRST_DISABLE,
.priority = 1,
.hardware_event = false,
.client_managed = false,
.offload_channel = false,
},
};
const struct mhi_controller_config ath12k_mhi_config_qcn9274 = {
.max_channels = 30,
.timeout_ms = 10000,
.use_bounce_buf = false,
.buf_len = 0,
.num_channels = ARRAY_SIZE(ath12k_mhi_channels_qcn9274),
.ch_cfg = ath12k_mhi_channels_qcn9274,
.num_events = ARRAY_SIZE(ath12k_mhi_events_qcn9274),
.event_cfg = ath12k_mhi_events_qcn9274,
};
static const struct mhi_channel_config ath12k_mhi_channels_wcn7850[] = {
{
.num = 0,
.name = "LOOPBACK",
.num_elements = 32,
.event_ring = 0,
.dir = DMA_TO_DEVICE,
.ee_mask = 0x4,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
},
{
.num = 1,
.name = "LOOPBACK",
.num_elements = 32,
.event_ring = 0,
.dir = DMA_FROM_DEVICE,
.ee_mask = 0x4,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
},
{
.num = 20,
.name = "IPCR",
.num_elements = 64,
.event_ring = 1,
.dir = DMA_TO_DEVICE,
.ee_mask = 0x4,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
},
{
.num = 21,
.name = "IPCR",
.num_elements = 64,
.event_ring = 1,
.dir = DMA_FROM_DEVICE,
.ee_mask = 0x4,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = true,
},
};
static struct mhi_event_config ath12k_mhi_events_wcn7850[] = {
{
.num_elements = 32,
.irq_moderation_ms = 0,
.irq = 1,
.mode = MHI_DB_BRST_DISABLE,
.data_type = MHI_ER_CTRL,
.hardware_event = false,
.client_managed = false,
.offload_channel = false,
},
{
.num_elements = 256,
.irq_moderation_ms = 1,
.irq = 2,
.mode = MHI_DB_BRST_DISABLE,
.priority = 1,
.hardware_event = false,
.client_managed = false,
.offload_channel = false,
},
};
const struct mhi_controller_config ath12k_mhi_config_wcn7850 = {
.max_channels = 128,
.timeout_ms = 2000,
.use_bounce_buf = false,
.buf_len = 0,
.num_channels = ARRAY_SIZE(ath12k_mhi_channels_wcn7850),
.ch_cfg = ath12k_mhi_channels_wcn7850,
.num_events = ARRAY_SIZE(ath12k_mhi_events_wcn7850),
.event_cfg = ath12k_mhi_events_wcn7850,
};
void ath12k_mhi_set_mhictrl_reset(struct ath12k_base *ab)
{
u32 val;
val = ath12k_pci_read32(ab, MHISTATUS);
ath12k_dbg(ab, ATH12K_DBG_PCI, "MHISTATUS 0x%x\n", val);
/* Observed on some targets that after SOC_GLOBAL_RESET, MHISTATUS
* has SYSERR bit set and thus need to set MHICTRL_RESET
* to clear SYSERR.
*/
ath12k_pci_write32(ab, MHICTRL, MHICTRL_RESET_MASK);
mdelay(10);
}
static void ath12k_mhi_reset_txvecdb(struct ath12k_base *ab)
{
ath12k_pci_write32(ab, PCIE_TXVECDB, 0);
}
static void ath12k_mhi_reset_txvecstatus(struct ath12k_base *ab)
{
ath12k_pci_write32(ab, PCIE_TXVECSTATUS, 0);
}
static void ath12k_mhi_reset_rxvecdb(struct ath12k_base *ab)
{
ath12k_pci_write32(ab, PCIE_RXVECDB, 0);
}
static void ath12k_mhi_reset_rxvecstatus(struct ath12k_base *ab)
{
ath12k_pci_write32(ab, PCIE_RXVECSTATUS, 0);
}
void ath12k_mhi_clear_vector(struct ath12k_base *ab)
{
ath12k_mhi_reset_txvecdb(ab);
ath12k_mhi_reset_txvecstatus(ab);
ath12k_mhi_reset_rxvecdb(ab);
ath12k_mhi_reset_rxvecstatus(ab);
}
static int ath12k_mhi_get_msi(struct ath12k_pci *ab_pci)
{
struct ath12k_base *ab = ab_pci->ab;
u32 user_base_data, base_vector;
int ret, num_vectors, i;
int *irq;
ret = ath12k_pci_get_user_msi_assignment(ab,
"MHI", &num_vectors,
&user_base_data, &base_vector);
if (ret)
return ret;
ath12k_dbg(ab, ATH12K_DBG_PCI, "Number of assigned MSI for MHI is %d, base vector is %d\n",
num_vectors, base_vector);
irq = kcalloc(num_vectors, sizeof(*irq), GFP_KERNEL);
if (!irq)
return -ENOMEM;
for (i = 0; i < num_vectors; i++)
irq[i] = ath12k_pci_get_msi_irq(ab->dev,
base_vector + i);
ab_pci->mhi_ctrl->irq = irq;
ab_pci->mhi_ctrl->nr_irqs = num_vectors;
return 0;
}
static int ath12k_mhi_op_runtime_get(struct mhi_controller *mhi_cntrl)
{
return 0;
}
static void ath12k_mhi_op_runtime_put(struct mhi_controller *mhi_cntrl)
{
}
static char *ath12k_mhi_op_callback_to_str(enum mhi_callback reason)
{
switch (reason) {
case MHI_CB_IDLE:
return "MHI_CB_IDLE";
case MHI_CB_PENDING_DATA:
return "MHI_CB_PENDING_DATA";
case MHI_CB_LPM_ENTER:
return "MHI_CB_LPM_ENTER";
case MHI_CB_LPM_EXIT:
return "MHI_CB_LPM_EXIT";
case MHI_CB_EE_RDDM:
return "MHI_CB_EE_RDDM";
case MHI_CB_EE_MISSION_MODE:
return "MHI_CB_EE_MISSION_MODE";
case MHI_CB_SYS_ERROR:
return "MHI_CB_SYS_ERROR";
case MHI_CB_FATAL_ERROR:
return "MHI_CB_FATAL_ERROR";
case MHI_CB_BW_REQ:
return "MHI_CB_BW_REQ";
default:
return "UNKNOWN";
}
}
static void ath12k_mhi_op_status_cb(struct mhi_controller *mhi_cntrl,
enum mhi_callback cb)
{
struct ath12k_base *ab = dev_get_drvdata(mhi_cntrl->cntrl_dev);
ath12k_dbg(ab, ATH12K_DBG_BOOT, "mhi notify status reason %s\n",
ath12k_mhi_op_callback_to_str(cb));
switch (cb) {
case MHI_CB_SYS_ERROR:
ath12k_warn(ab, "firmware crashed: MHI_CB_SYS_ERROR\n");
break;
case MHI_CB_EE_RDDM:
if (!(test_bit(ATH12K_FLAG_UNREGISTERING, &ab->dev_flags)))
queue_work(ab->workqueue_aux, &ab->reset_work);
break;
default:
break;
}
}
static int ath12k_mhi_op_read_reg(struct mhi_controller *mhi_cntrl,
void __iomem *addr,
u32 *out)
{
*out = readl(addr);
return 0;
}
static void ath12k_mhi_op_write_reg(struct mhi_controller *mhi_cntrl,
void __iomem *addr,
u32 val)
{
writel(val, addr);
}
int ath12k_mhi_register(struct ath12k_pci *ab_pci)
{
struct ath12k_base *ab = ab_pci->ab;
struct mhi_controller *mhi_ctrl;
int ret;
mhi_ctrl = mhi_alloc_controller();
if (!mhi_ctrl)
return -ENOMEM;
ath12k_core_create_firmware_path(ab, ATH12K_AMSS_FILE,
ab_pci->amss_path,
sizeof(ab_pci->amss_path));
ab_pci->mhi_ctrl = mhi_ctrl;
mhi_ctrl->cntrl_dev = ab->dev;
mhi_ctrl->fw_image = ab_pci->amss_path;
mhi_ctrl->regs = ab->mem;
mhi_ctrl->reg_len = ab->mem_len;
ret = ath12k_mhi_get_msi(ab_pci);
if (ret) {
ath12k_err(ab, "failed to get msi for mhi\n");
mhi_free_controller(mhi_ctrl);
return ret;
}
mhi_ctrl->iova_start = 0;
mhi_ctrl->iova_stop = 0xffffffff;
mhi_ctrl->sbl_size = SZ_512K;
mhi_ctrl->seg_len = SZ_512K;
mhi_ctrl->fbc_download = true;
mhi_ctrl->runtime_get = ath12k_mhi_op_runtime_get;
mhi_ctrl->runtime_put = ath12k_mhi_op_runtime_put;
mhi_ctrl->status_cb = ath12k_mhi_op_status_cb;
mhi_ctrl->read_reg = ath12k_mhi_op_read_reg;
mhi_ctrl->write_reg = ath12k_mhi_op_write_reg;
ret = mhi_register_controller(mhi_ctrl, ab->hw_params->mhi_config);
if (ret) {
ath12k_err(ab, "failed to register to mhi bus, err = %d\n", ret);
mhi_free_controller(mhi_ctrl);
return ret;
}
return 0;
}
void ath12k_mhi_unregister(struct ath12k_pci *ab_pci)
{
struct mhi_controller *mhi_ctrl = ab_pci->mhi_ctrl;
mhi_unregister_controller(mhi_ctrl);
kfree(mhi_ctrl->irq);
mhi_free_controller(mhi_ctrl);
ab_pci->mhi_ctrl = NULL;
}
static char *ath12k_mhi_state_to_str(enum ath12k_mhi_state mhi_state)
{
switch (mhi_state) {
case ATH12K_MHI_INIT:
return "INIT";
case ATH12K_MHI_DEINIT:
return "DEINIT";
case ATH12K_MHI_POWER_ON:
return "POWER_ON";
case ATH12K_MHI_POWER_OFF:
return "POWER_OFF";
case ATH12K_MHI_FORCE_POWER_OFF:
return "FORCE_POWER_OFF";
case ATH12K_MHI_SUSPEND:
return "SUSPEND";
case ATH12K_MHI_RESUME:
return "RESUME";
case ATH12K_MHI_TRIGGER_RDDM:
return "TRIGGER_RDDM";
case ATH12K_MHI_RDDM_DONE:
return "RDDM_DONE";
default:
return "UNKNOWN";
}
}
static void ath12k_mhi_set_state_bit(struct ath12k_pci *ab_pci,
enum ath12k_mhi_state mhi_state)
{
struct ath12k_base *ab = ab_pci->ab;
switch (mhi_state) {
case ATH12K_MHI_INIT:
set_bit(ATH12K_MHI_INIT, &ab_pci->mhi_state);
break;
case ATH12K_MHI_DEINIT:
clear_bit(ATH12K_MHI_INIT, &ab_pci->mhi_state);
break;
case ATH12K_MHI_POWER_ON:
set_bit(ATH12K_MHI_POWER_ON, &ab_pci->mhi_state);
break;
case ATH12K_MHI_POWER_OFF:
case ATH12K_MHI_FORCE_POWER_OFF:
clear_bit(ATH12K_MHI_POWER_ON, &ab_pci->mhi_state);
clear_bit(ATH12K_MHI_TRIGGER_RDDM, &ab_pci->mhi_state);
clear_bit(ATH12K_MHI_RDDM_DONE, &ab_pci->mhi_state);
break;
case ATH12K_MHI_SUSPEND:
set_bit(ATH12K_MHI_SUSPEND, &ab_pci->mhi_state);
break;
case ATH12K_MHI_RESUME:
clear_bit(ATH12K_MHI_SUSPEND, &ab_pci->mhi_state);
break;
case ATH12K_MHI_TRIGGER_RDDM:
set_bit(ATH12K_MHI_TRIGGER_RDDM, &ab_pci->mhi_state);
break;
case ATH12K_MHI_RDDM_DONE:
set_bit(ATH12K_MHI_RDDM_DONE, &ab_pci->mhi_state);
break;
default:
ath12k_err(ab, "unhandled mhi state (%d)\n", mhi_state);
}
}
static int ath12k_mhi_check_state_bit(struct ath12k_pci *ab_pci,
enum ath12k_mhi_state mhi_state)
{
struct ath12k_base *ab = ab_pci->ab;
switch (mhi_state) {
case ATH12K_MHI_INIT:
if (!test_bit(ATH12K_MHI_INIT, &ab_pci->mhi_state))
return 0;
break;
case ATH12K_MHI_DEINIT:
case ATH12K_MHI_POWER_ON:
if (test_bit(ATH12K_MHI_INIT, &ab_pci->mhi_state) &&
!test_bit(ATH12K_MHI_POWER_ON, &ab_pci->mhi_state))
return 0;
break;
case ATH12K_MHI_FORCE_POWER_OFF:
if (test_bit(ATH12K_MHI_POWER_ON, &ab_pci->mhi_state))
return 0;
break;
case ATH12K_MHI_POWER_OFF:
case ATH12K_MHI_SUSPEND:
if (test_bit(ATH12K_MHI_POWER_ON, &ab_pci->mhi_state) &&
!test_bit(ATH12K_MHI_SUSPEND, &ab_pci->mhi_state))
return 0;
break;
case ATH12K_MHI_RESUME:
if (test_bit(ATH12K_MHI_SUSPEND, &ab_pci->mhi_state))
return 0;
break;
case ATH12K_MHI_TRIGGER_RDDM:
if (test_bit(ATH12K_MHI_POWER_ON, &ab_pci->mhi_state) &&
!test_bit(ATH12K_MHI_TRIGGER_RDDM, &ab_pci->mhi_state))
return 0;
break;
case ATH12K_MHI_RDDM_DONE:
return 0;
default:
ath12k_err(ab, "unhandled mhi state: %s(%d)\n",
ath12k_mhi_state_to_str(mhi_state), mhi_state);
}
ath12k_err(ab, "failed to set mhi state %s(%d) in current mhi state (0x%lx)\n",
ath12k_mhi_state_to_str(mhi_state), mhi_state,
ab_pci->mhi_state);
return -EINVAL;
}
static int ath12k_mhi_set_state(struct ath12k_pci *ab_pci,
enum ath12k_mhi_state mhi_state)
{
struct ath12k_base *ab = ab_pci->ab;
int ret;
ret = ath12k_mhi_check_state_bit(ab_pci, mhi_state);
if (ret)
goto out;
ath12k_dbg(ab, ATH12K_DBG_PCI, "setting mhi state: %s(%d)\n",
ath12k_mhi_state_to_str(mhi_state), mhi_state);
switch (mhi_state) {
case ATH12K_MHI_INIT:
ret = mhi_prepare_for_power_up(ab_pci->mhi_ctrl);
break;
case ATH12K_MHI_DEINIT:
mhi_unprepare_after_power_down(ab_pci->mhi_ctrl);
ret = 0;
break;
case ATH12K_MHI_POWER_ON:
ret = mhi_async_power_up(ab_pci->mhi_ctrl);
break;
case ATH12K_MHI_POWER_OFF:
mhi_power_down(ab_pci->mhi_ctrl, true);
ret = 0;
break;
case ATH12K_MHI_FORCE_POWER_OFF:
mhi_power_down(ab_pci->mhi_ctrl, false);
ret = 0;
break;
case ATH12K_MHI_SUSPEND:
ret = mhi_pm_suspend(ab_pci->mhi_ctrl);
break;
case ATH12K_MHI_RESUME:
ret = mhi_pm_resume(ab_pci->mhi_ctrl);
break;
case ATH12K_MHI_TRIGGER_RDDM:
ret = mhi_force_rddm_mode(ab_pci->mhi_ctrl);
break;
case ATH12K_MHI_RDDM_DONE:
break;
default:
ath12k_err(ab, "unhandled MHI state (%d)\n", mhi_state);
ret = -EINVAL;
}
if (ret)
goto out;
ath12k_mhi_set_state_bit(ab_pci, mhi_state);
return 0;
out:
ath12k_err(ab, "failed to set mhi state: %s(%d)\n",
ath12k_mhi_state_to_str(mhi_state), mhi_state);
return ret;
}
int ath12k_mhi_start(struct ath12k_pci *ab_pci)
{
int ret;
ab_pci->mhi_ctrl->timeout_ms = MHI_TIMEOUT_DEFAULT_MS;
ret = ath12k_mhi_set_state(ab_pci, ATH12K_MHI_INIT);
if (ret)
goto out;
ret = ath12k_mhi_set_state(ab_pci, ATH12K_MHI_POWER_ON);
if (ret)
goto out;
return 0;
out:
return ret;
}
void ath12k_mhi_stop(struct ath12k_pci *ab_pci)
{
ath12k_mhi_set_state(ab_pci, ATH12K_MHI_POWER_OFF);
ath12k_mhi_set_state(ab_pci, ATH12K_MHI_DEINIT);
}
void ath12k_mhi_suspend(struct ath12k_pci *ab_pci)
{
ath12k_mhi_set_state(ab_pci, ATH12K_MHI_SUSPEND);
}
void ath12k_mhi_resume(struct ath12k_pci *ab_pci)
{
ath12k_mhi_set_state(ab_pci, ATH12K_MHI_RESUME);
}


@@ -0,0 +1,46 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2020-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef _ATH12K_MHI_H
#define _ATH12K_MHI_H
#include "pci.h"
#define PCIE_TXVECDB 0x360
#define PCIE_TXVECSTATUS 0x368
#define PCIE_RXVECDB 0x394
#define PCIE_RXVECSTATUS 0x39C
#define MHISTATUS 0x48
#define MHICTRL 0x38
#define MHICTRL_RESET_MASK 0x2
enum ath12k_mhi_state {
ATH12K_MHI_INIT,
ATH12K_MHI_DEINIT,
ATH12K_MHI_POWER_ON,
ATH12K_MHI_POWER_OFF,
ATH12K_MHI_FORCE_POWER_OFF,
ATH12K_MHI_SUSPEND,
ATH12K_MHI_RESUME,
ATH12K_MHI_TRIGGER_RDDM,
ATH12K_MHI_RDDM,
ATH12K_MHI_RDDM_DONE,
};
extern const struct mhi_controller_config ath12k_mhi_config_qcn9274;
extern const struct mhi_controller_config ath12k_mhi_config_wcn7850;
int ath12k_mhi_start(struct ath12k_pci *ar_pci);
void ath12k_mhi_stop(struct ath12k_pci *ar_pci);
int ath12k_mhi_register(struct ath12k_pci *ar_pci);
void ath12k_mhi_unregister(struct ath12k_pci *ar_pci);
void ath12k_mhi_set_mhictrl_reset(struct ath12k_base *ab);
void ath12k_mhi_clear_vector(struct ath12k_base *ab);
void ath12k_mhi_suspend(struct ath12k_pci *ar_pci);
void ath12k_mhi_resume(struct ath12k_pci *ar_pci);
#endif


@@ -0,0 +1,135 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH12K_PCI_H
#define ATH12K_PCI_H
#include <linux/mhi.h>
#include "core.h"
#define PCIE_SOC_GLOBAL_RESET 0x3008
#define PCIE_SOC_GLOBAL_RESET_V 1
#define WLAON_WARM_SW_ENTRY 0x1f80504
#define WLAON_SOC_RESET_CAUSE_REG 0x01f8060c
#define PCIE_Q6_COOKIE_ADDR 0x01f80500
#define PCIE_Q6_COOKIE_DATA 0xc0000000
/* register to wake the UMAC from power collapse */
#define PCIE_SCRATCH_0_SOC_PCIE_REG 0x4040
/* register used for handshake mechanism to validate UMAC is awake */
#define PCIE_SOC_WAKE_PCIE_LOCAL_REG 0x3004
#define PCIE_PCIE_PARF_LTSSM 0x1e081b0
#define PARM_LTSSM_VALUE 0x111
#define GCC_GCC_PCIE_HOT_RST 0x1e38338
#define GCC_GCC_PCIE_HOT_RST_VAL 0x10
#define PCIE_PCIE_INT_ALL_CLEAR 0x1e08228
#define PCIE_SMLH_REQ_RST_LINK_DOWN 0x2
#define PCIE_INT_CLEAR_ALL 0xffffffff
#define PCIE_QSERDES_COM_SYSCLK_EN_SEL_REG(ab) \
((ab)->hw_params->regs->pcie_qserdes_sysclk_en_sel)
#define PCIE_QSERDES_COM_SYSCLK_EN_SEL_VAL 0x10
#define PCIE_QSERDES_COM_SYSCLK_EN_SEL_MSK 0xffffffff
#define PCIE_PCS_OSC_DTCT_CONFIG1_REG(ab) \
((ab)->hw_params->regs->pcie_pcs_osc_dtct_config_base)
#define PCIE_PCS_OSC_DTCT_CONFIG1_VAL 0x02
#define PCIE_PCS_OSC_DTCT_CONFIG2_REG(ab) \
((ab)->hw_params->regs->pcie_pcs_osc_dtct_config_base + 0x4)
#define PCIE_PCS_OSC_DTCT_CONFIG2_VAL 0x52
#define PCIE_PCS_OSC_DTCT_CONFIG4_REG(ab) \
((ab)->hw_params->regs->pcie_pcs_osc_dtct_config_base + 0xc)
#define PCIE_PCS_OSC_DTCT_CONFIG4_VAL 0xff
#define PCIE_PCS_OSC_DTCT_CONFIG_MSK 0x000000ff
#define WLAON_QFPROM_PWR_CTRL_REG 0x01f8031c
#define QFPROM_PWR_CTRL_VDD4BLOW_MASK 0x4
#define PCI_BAR_WINDOW0_BASE 0x1E00000
#define PCI_BAR_WINDOW0_END 0x1E7FFFC
#define PCI_SOC_RANGE_MASK 0x3FFF
#define PCI_SOC_PCI_REG_BASE 0x1E04000
#define PCI_SOC_PCI_REG_END 0x1E07FFC
#define PCI_PARF_BASE 0x1E08000
#define PCI_PARF_END 0x1E0BFFC
#define PCI_MHIREGLEN_REG 0x1E0E100
#define PCI_MHI_REGION_END 0x1E0EFFC
#define QRTR_PCI_DOMAIN_NR_MASK GENMASK(7, 4)
#define QRTR_PCI_BUS_NUMBER_MASK GENMASK(3, 0)
#define ATH12K_PCI_SOC_HW_VERSION_1 1
#define ATH12K_PCI_SOC_HW_VERSION_2 2
struct ath12k_msi_user {
const char *name;
int num_vectors;
u32 base_vector;
};
struct ath12k_msi_config {
int total_vectors;
int total_users;
const struct ath12k_msi_user *users;
};
enum ath12k_pci_flags {
ATH12K_PCI_FLAG_INIT_DONE,
ATH12K_PCI_FLAG_IS_MSI_64,
ATH12K_PCI_ASPM_RESTORE,
};
struct ath12k_pci {
struct pci_dev *pdev;
struct ath12k_base *ab;
u16 dev_id;
char amss_path[100];
u32 msi_ep_base_data;
struct mhi_controller *mhi_ctrl;
const struct ath12k_msi_config *msi_config;
unsigned long mhi_state;
u32 register_window;
/* protects register_window above */
spinlock_t window_lock;
/* enum ath12k_pci_flags */
unsigned long flags;
u16 link_ctl;
};
static inline struct ath12k_pci *ath12k_pci_priv(struct ath12k_base *ab)
{
return (struct ath12k_pci *)ab->drv_priv;
}
int ath12k_pci_get_user_msi_assignment(struct ath12k_base *ab, char *user_name,
int *num_vectors, u32 *user_base_data,
u32 *base_vector);
int ath12k_pci_get_msi_irq(struct device *dev, unsigned int vector);
void ath12k_pci_write32(struct ath12k_base *ab, u32 offset, u32 value);
u32 ath12k_pci_read32(struct ath12k_base *ab, u32 offset);
int ath12k_pci_map_service_to_pipe(struct ath12k_base *ab, u16 service_id,
u8 *ul_pipe, u8 *dl_pipe);
void ath12k_pci_get_msi_address(struct ath12k_base *ab, u32 *msi_addr_lo,
u32 *msi_addr_hi);
void ath12k_pci_get_ce_msi_idx(struct ath12k_base *ab, u32 ce_id,
u32 *msi_idx);
void ath12k_pci_hif_ce_irq_enable(struct ath12k_base *ab);
void ath12k_pci_hif_ce_irq_disable(struct ath12k_base *ab);
void ath12k_pci_ext_irq_enable(struct ath12k_base *ab);
void ath12k_pci_ext_irq_disable(struct ath12k_base *ab);
int ath12k_pci_hif_suspend(struct ath12k_base *ab);
int ath12k_pci_hif_resume(struct ath12k_base *ab);
void ath12k_pci_stop(struct ath12k_base *ab);
int ath12k_pci_start(struct ath12k_base *ab);
int ath12k_pci_power_up(struct ath12k_base *ab);
void ath12k_pci_power_down(struct ath12k_base *ab);
#endif /* ATH12K_PCI_H */


@@ -0,0 +1,342 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include "core.h"
#include "peer.h"
#include "debug.h"
struct ath12k_peer *ath12k_peer_find(struct ath12k_base *ab, int vdev_id,
const u8 *addr)
{
struct ath12k_peer *peer;
lockdep_assert_held(&ab->base_lock);
list_for_each_entry(peer, &ab->peers, list) {
if (peer->vdev_id != vdev_id)
continue;
if (!ether_addr_equal(peer->addr, addr))
continue;
return peer;
}
return NULL;
}
static struct ath12k_peer *ath12k_peer_find_by_pdev_idx(struct ath12k_base *ab,
u8 pdev_idx, const u8 *addr)
{
struct ath12k_peer *peer;
lockdep_assert_held(&ab->base_lock);
list_for_each_entry(peer, &ab->peers, list) {
if (peer->pdev_idx != pdev_idx)
continue;
if (!ether_addr_equal(peer->addr, addr))
continue;
return peer;
}
return NULL;
}
struct ath12k_peer *ath12k_peer_find_by_addr(struct ath12k_base *ab,
const u8 *addr)
{
struct ath12k_peer *peer;
lockdep_assert_held(&ab->base_lock);
list_for_each_entry(peer, &ab->peers, list) {
if (!ether_addr_equal(peer->addr, addr))
continue;
return peer;
}
return NULL;
}
struct ath12k_peer *ath12k_peer_find_by_id(struct ath12k_base *ab,
int peer_id)
{
struct ath12k_peer *peer;
lockdep_assert_held(&ab->base_lock);
list_for_each_entry(peer, &ab->peers, list)
if (peer_id == peer->peer_id)
return peer;
return NULL;
}
bool ath12k_peer_exist_by_vdev_id(struct ath12k_base *ab, int vdev_id)
{
struct ath12k_peer *peer;
spin_lock_bh(&ab->base_lock);
list_for_each_entry(peer, &ab->peers, list) {
if (vdev_id == peer->vdev_id) {
spin_unlock_bh(&ab->base_lock);
return true;
}
}
spin_unlock_bh(&ab->base_lock);
return false;
}
struct ath12k_peer *ath12k_peer_find_by_ast(struct ath12k_base *ab,
int ast_hash)
{
struct ath12k_peer *peer;
lockdep_assert_held(&ab->base_lock);
list_for_each_entry(peer, &ab->peers, list)
if (ast_hash == peer->ast_hash)
return peer;
return NULL;
}
void ath12k_peer_unmap_event(struct ath12k_base *ab, u16 peer_id)
{
struct ath12k_peer *peer;
spin_lock_bh(&ab->base_lock);
peer = ath12k_peer_find_by_id(ab, peer_id);
if (!peer) {
ath12k_warn(ab, "peer-unmap-event: unknown peer id %d\n",
peer_id);
goto exit;
}
ath12k_dbg(ab, ATH12K_DBG_DP_HTT, "htt peer unmap vdev %d peer %pM id %d\n",
peer->vdev_id, peer->addr, peer_id);
list_del(&peer->list);
kfree(peer);
wake_up(&ab->peer_mapping_wq);
exit:
spin_unlock_bh(&ab->base_lock);
}
void ath12k_peer_map_event(struct ath12k_base *ab, u8 vdev_id, u16 peer_id,
u8 *mac_addr, u16 ast_hash, u16 hw_peer_id)
{
struct ath12k_peer *peer;
spin_lock_bh(&ab->base_lock);
peer = ath12k_peer_find(ab, vdev_id, mac_addr);
if (!peer) {
peer = kzalloc(sizeof(*peer), GFP_ATOMIC);
if (!peer)
goto exit;
peer->vdev_id = vdev_id;
peer->peer_id = peer_id;
peer->ast_hash = ast_hash;
peer->hw_peer_id = hw_peer_id;
ether_addr_copy(peer->addr, mac_addr);
list_add(&peer->list, &ab->peers);
wake_up(&ab->peer_mapping_wq);
}
ath12k_dbg(ab, ATH12K_DBG_DP_HTT, "htt peer map vdev %d peer %pM id %d\n",
vdev_id, mac_addr, peer_id);
exit:
spin_unlock_bh(&ab->base_lock);
}
static int ath12k_wait_for_peer_common(struct ath12k_base *ab, int vdev_id,
const u8 *addr, bool expect_mapped)
{
int ret;
ret = wait_event_timeout(ab->peer_mapping_wq, ({
bool mapped;
spin_lock_bh(&ab->base_lock);
mapped = !!ath12k_peer_find(ab, vdev_id, addr);
spin_unlock_bh(&ab->base_lock);
(mapped == expect_mapped ||
test_bit(ATH12K_FLAG_CRASH_FLUSH, &ab->dev_flags));
}), 3 * HZ);
if (ret <= 0)
return -ETIMEDOUT;
return 0;
}
void ath12k_peer_cleanup(struct ath12k *ar, u32 vdev_id)
{
struct ath12k_peer *peer, *tmp;
struct ath12k_base *ab = ar->ab;
lockdep_assert_held(&ar->conf_mutex);
spin_lock_bh(&ab->base_lock);
list_for_each_entry_safe(peer, tmp, &ab->peers, list) {
if (peer->vdev_id != vdev_id)
continue;
ath12k_warn(ab, "removing stale peer %pM from vdev_id %d\n",
peer->addr, vdev_id);
list_del(&peer->list);
kfree(peer);
ar->num_peers--;
}
spin_unlock_bh(&ab->base_lock);
}
static int ath12k_wait_for_peer_deleted(struct ath12k *ar, int vdev_id, const u8 *addr)
{
return ath12k_wait_for_peer_common(ar->ab, vdev_id, addr, false);
}
int ath12k_wait_for_peer_delete_done(struct ath12k *ar, u32 vdev_id,
const u8 *addr)
{
int ret;
unsigned long time_left;
ret = ath12k_wait_for_peer_deleted(ar, vdev_id, addr);
if (ret) {
ath12k_warn(ar->ab, "failed wait for peer deleted");
return ret;
}
time_left = wait_for_completion_timeout(&ar->peer_delete_done,
3 * HZ);
if (time_left == 0) {
ath12k_warn(ar->ab, "Timeout in receiving peer delete response\n");
return -ETIMEDOUT;
}
return 0;
}
int ath12k_peer_delete(struct ath12k *ar, u32 vdev_id, u8 *addr)
{
int ret;
lockdep_assert_held(&ar->conf_mutex);
reinit_completion(&ar->peer_delete_done);
ret = ath12k_wmi_send_peer_delete_cmd(ar, addr, vdev_id);
if (ret) {
ath12k_warn(ar->ab,
"failed to delete peer vdev_id %d addr %pM ret %d\n",
vdev_id, addr, ret);
return ret;
}
ret = ath12k_wait_for_peer_delete_done(ar, vdev_id, addr);
if (ret)
return ret;
ar->num_peers--;
return 0;
}
static int ath12k_wait_for_peer_created(struct ath12k *ar, int vdev_id, const u8 *addr)
{
return ath12k_wait_for_peer_common(ar->ab, vdev_id, addr, true);
}
int ath12k_peer_create(struct ath12k *ar, struct ath12k_vif *arvif,
struct ieee80211_sta *sta,
struct ath12k_wmi_peer_create_arg *arg)
{
struct ath12k_peer *peer;
int ret;
lockdep_assert_held(&ar->conf_mutex);
if (ar->num_peers > (ar->max_num_peers - 1)) {
ath12k_warn(ar->ab,
"failed to create peer due to insufficient peer entry resource in firmware\n");
return -ENOBUFS;
}
spin_lock_bh(&ar->ab->base_lock);
peer = ath12k_peer_find_by_pdev_idx(ar->ab, ar->pdev_idx, arg->peer_addr);
if (peer) {
spin_unlock_bh(&ar->ab->base_lock);
return -EINVAL;
}
spin_unlock_bh(&ar->ab->base_lock);
ret = ath12k_wmi_send_peer_create_cmd(ar, arg);
if (ret) {
ath12k_warn(ar->ab,
"failed to send peer create vdev_id %d ret %d\n",
arg->vdev_id, ret);
return ret;
}
ret = ath12k_wait_for_peer_created(ar, arg->vdev_id,
arg->peer_addr);
if (ret)
return ret;
spin_lock_bh(&ar->ab->base_lock);
peer = ath12k_peer_find(ar->ab, arg->vdev_id, arg->peer_addr);
if (!peer) {
spin_unlock_bh(&ar->ab->base_lock);
ath12k_warn(ar->ab, "failed to find peer %pM on vdev %i after creation\n",
arg->peer_addr, arg->vdev_id);
reinit_completion(&ar->peer_delete_done);
ret = ath12k_wmi_send_peer_delete_cmd(ar, arg->peer_addr,
arg->vdev_id);
if (ret) {
ath12k_warn(ar->ab, "failed to delete peer vdev_id %d addr %pM\n",
arg->vdev_id, arg->peer_addr);
return ret;
}
ret = ath12k_wait_for_peer_delete_done(ar, arg->vdev_id,
arg->peer_addr);
if (ret)
return ret;
return -ENOENT;
}
peer->pdev_idx = ar->pdev_idx;
peer->sta = sta;
if (arvif->vif->type == NL80211_IFTYPE_STATION) {
arvif->ast_hash = peer->ast_hash;
arvif->ast_idx = peer->hw_peer_id;
}
peer->sec_type = HAL_ENCRYPT_TYPE_OPEN;
peer->sec_type_grp = HAL_ENCRYPT_TYPE_OPEN;
ar->num_peers++;
spin_unlock_bh(&ar->ab->base_lock);
return 0;
}


@@ -0,0 +1,67 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH12K_PEER_H
#define ATH12K_PEER_H
#include "dp_rx.h"
struct ppdu_user_delayba {
u16 sw_peer_id;
u32 info0;
u16 ru_end;
u16 ru_start;
u32 info1;
u32 rate_flags;
u32 resp_rate_flags;
};
struct ath12k_peer {
struct list_head list;
struct ieee80211_sta *sta;
int vdev_id;
u8 addr[ETH_ALEN];
int peer_id;
u16 ast_hash;
u8 pdev_idx;
u16 hw_peer_id;
/* protected by ab->data_lock */
struct ieee80211_key_conf *keys[WMI_MAX_KEY_INDEX + 1];
struct ath12k_dp_rx_tid rx_tid[IEEE80211_NUM_TIDS + 1];
/* Info used in MMIC verification of
* RX fragments
*/
struct crypto_shash *tfm_mmic;
u8 mcast_keyidx;
u8 ucast_keyidx;
u16 sec_type;
u16 sec_type_grp;
struct ppdu_user_delayba ppdu_stats_delayba;
bool delayba_flag;
bool is_authorized;
};
void ath12k_peer_unmap_event(struct ath12k_base *ab, u16 peer_id);
void ath12k_peer_map_event(struct ath12k_base *ab, u8 vdev_id, u16 peer_id,
u8 *mac_addr, u16 ast_hash, u16 hw_peer_id);
struct ath12k_peer *ath12k_peer_find(struct ath12k_base *ab, int vdev_id,
const u8 *addr);
struct ath12k_peer *ath12k_peer_find_by_addr(struct ath12k_base *ab,
const u8 *addr);
struct ath12k_peer *ath12k_peer_find_by_id(struct ath12k_base *ab, int peer_id);
void ath12k_peer_cleanup(struct ath12k *ar, u32 vdev_id);
int ath12k_peer_delete(struct ath12k *ar, u32 vdev_id, u8 *addr);
int ath12k_peer_create(struct ath12k *ar, struct ath12k_vif *arvif,
struct ieee80211_sta *sta,
struct ath12k_wmi_peer_create_arg *arg);
int ath12k_wait_for_peer_delete_done(struct ath12k *ar, u32 vdev_id,
const u8 *addr);
bool ath12k_peer_exist_by_vdev_id(struct ath12k_base *ab, int vdev_id);
struct ath12k_peer *ath12k_peer_find_by_ast(struct ath12k_base *ab, int ast_hash);
#endif /* ATH12K_PEER_H */

Diff for one file not shown because of its size.


@@ -0,0 +1,569 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH12K_QMI_H
#define ATH12K_QMI_H
#include <linux/mutex.h>
#include <linux/soc/qcom/qmi.h>
#define ATH12K_HOST_VERSION_STRING "WIN"
#define ATH12K_QMI_WLANFW_TIMEOUT_MS 10000
#define ATH12K_QMI_MAX_BDF_FILE_NAME_SIZE 64
#define ATH12K_QMI_CALDB_ADDRESS 0x4BA00000
#define ATH12K_QMI_WLANFW_MAX_BUILD_ID_LEN_V01 128
#define ATH12K_QMI_WLFW_NODE_ID_BASE 0x07
#define ATH12K_QMI_WLFW_SERVICE_ID_V01 0x45
#define ATH12K_QMI_WLFW_SERVICE_VERS_V01 0x01
#define ATH12K_QMI_WLFW_SERVICE_INS_ID_V01 0x02
#define ATH12K_QMI_WLFW_SERVICE_INS_ID_V01_WCN7850 0x1
#define ATH12K_QMI_WLFW_SERVICE_INS_ID_V01_QCN9274 0x07
#define ATH12K_QMI_WLANFW_MAX_TIMESTAMP_LEN_V01 32
#define ATH12K_QMI_RESP_LEN_MAX 8192
#define ATH12K_QMI_WLANFW_MAX_NUM_MEM_SEG_V01 52
#define ATH12K_QMI_CALDB_SIZE 0x480000
#define ATH12K_QMI_BDF_EXT_STR_LENGTH 0x20
#define ATH12K_QMI_FW_MEM_REQ_SEGMENT_CNT 3
#define ATH12K_QMI_WLFW_MAX_DEV_MEM_NUM_V01 4
#define ATH12K_QMI_DEVMEM_CMEM_INDEX 0
#define QMI_WLFW_REQUEST_MEM_IND_V01 0x0035
#define QMI_WLFW_FW_MEM_READY_IND_V01 0x0037
#define QMI_WLFW_FW_READY_IND_V01 0x0038
#define QMI_WLANFW_MAX_DATA_SIZE_V01 6144
#define ATH12K_FIRMWARE_MODE_OFF 4
#define ATH12K_QMI_TARGET_MEM_MODE_DEFAULT 0
#define ATH12K_BOARD_ID_DEFAULT 0xFF
struct ath12k_base;
enum ath12k_qmi_file_type {
ATH12K_QMI_FILE_TYPE_BDF_GOLDEN = 0,
ATH12K_QMI_FILE_TYPE_CALDATA = 2,
ATH12K_QMI_FILE_TYPE_EEPROM = 3,
ATH12K_QMI_MAX_FILE_TYPE = 4,
};
enum ath12k_qmi_bdf_type {
ATH12K_QMI_BDF_TYPE_BIN = 0,
ATH12K_QMI_BDF_TYPE_ELF = 1,
ATH12K_QMI_BDF_TYPE_REGDB = 4,
ATH12K_QMI_BDF_TYPE_CALIBRATION = 5,
};
enum ath12k_qmi_event_type {
ATH12K_QMI_EVENT_SERVER_ARRIVE,
ATH12K_QMI_EVENT_SERVER_EXIT,
ATH12K_QMI_EVENT_REQUEST_MEM,
ATH12K_QMI_EVENT_FW_MEM_READY,
ATH12K_QMI_EVENT_FW_READY,
ATH12K_QMI_EVENT_REGISTER_DRIVER,
ATH12K_QMI_EVENT_UNREGISTER_DRIVER,
ATH12K_QMI_EVENT_RECOVERY,
ATH12K_QMI_EVENT_FORCE_FW_ASSERT,
ATH12K_QMI_EVENT_POWER_UP,
ATH12K_QMI_EVENT_POWER_DOWN,
ATH12K_QMI_EVENT_MAX,
};
struct ath12k_qmi_driver_event {
struct list_head list;
enum ath12k_qmi_event_type type;
void *data;
};
struct ath12k_qmi_ce_cfg {
const struct ce_pipe_config *tgt_ce;
int tgt_ce_len;
const struct service_to_pipe *svc_to_ce_map;
int svc_to_ce_map_len;
const u8 *shadow_reg;
int shadow_reg_len;
u32 *shadow_reg_v3;
int shadow_reg_v3_len;
};
struct ath12k_qmi_event_msg {
struct list_head list;
enum ath12k_qmi_event_type type;
};
struct target_mem_chunk {
u32 size;
u32 type;
dma_addr_t paddr;
union {
void __iomem *ioaddr;
void *addr;
} v;
};
struct target_info {
u32 chip_id;
u32 chip_family;
u32 board_id;
u32 soc_id;
u32 fw_version;
u32 eeprom_caldata;
char fw_build_timestamp[ATH12K_QMI_WLANFW_MAX_TIMESTAMP_LEN_V01 + 1];
char fw_build_id[ATH12K_QMI_WLANFW_MAX_BUILD_ID_LEN_V01 + 1];
char bdf_ext[ATH12K_QMI_BDF_EXT_STR_LENGTH];
};
struct m3_mem_region {
u32 size;
dma_addr_t paddr;
void *vaddr;
};
struct dev_mem_info {
u64 start;
u64 size;
};
struct ath12k_qmi {
struct ath12k_base *ab;
struct qmi_handle handle;
struct sockaddr_qrtr sq;
struct work_struct event_work;
struct workqueue_struct *event_wq;
struct list_head event_list;
spinlock_t event_lock; /* spinlock for qmi event list */
struct ath12k_qmi_ce_cfg ce_cfg;
struct target_mem_chunk target_mem[ATH12K_QMI_WLANFW_MAX_NUM_MEM_SEG_V01];
u32 mem_seg_count;
u32 target_mem_mode;
bool target_mem_delayed;
u8 cal_done;
struct target_info target;
struct m3_mem_region m3_mem;
unsigned int service_ins_id;
struct dev_mem_info dev_mem[ATH12K_QMI_WLFW_MAX_DEV_MEM_NUM_V01];
};
#define QMI_WLANFW_HOST_CAP_REQ_MSG_V01_MAX_LEN 261
#define QMI_WLANFW_HOST_CAP_REQ_V01 0x0034
#define QMI_WLANFW_HOST_CAP_RESP_MSG_V01_MAX_LEN 7
#define QMI_WLFW_HOST_CAP_RESP_V01 0x0034
#define QMI_WLFW_MAX_NUM_GPIO_V01 32
#define QMI_WLANFW_MAX_PLATFORM_NAME_LEN_V01 64
#define QMI_WLANFW_MAX_HOST_DDR_RANGE_SIZE_V01 3
struct qmi_wlanfw_host_ddr_range {
u64 start;
u64 size;
};
enum ath12k_qmi_target_mem {
HOST_DDR_REGION_TYPE = 0x1,
BDF_MEM_REGION_TYPE = 0x2,
M3_DUMP_REGION_TYPE = 0x3,
CALDB_MEM_REGION_TYPE = 0x4,
PAGEABLE_MEM_REGION_TYPE = 0x9,
};
enum qmi_wlanfw_host_build_type {
WLANFW_HOST_BUILD_TYPE_ENUM_MIN_VAL_V01 = INT_MIN,
QMI_WLANFW_HOST_BUILD_TYPE_UNSPECIFIED_V01 = 0,
QMI_WLANFW_HOST_BUILD_TYPE_PRIMARY_V01 = 1,
QMI_WLANFW_HOST_BUILD_TYPE_SECONDARY_V01 = 2,
WLANFW_HOST_BUILD_TYPE_ENUM_MAX_VAL_V01 = INT_MAX,
};
#define QMI_WLFW_MAX_NUM_MLO_CHIPS_V01 3
#define QMI_WLFW_MAX_NUM_MLO_LINKS_PER_CHIP_V01 2
struct wlfw_host_mlo_chip_info_s_v01 {
u8 chip_id;
u8 num_local_links;
u8 hw_link_id[QMI_WLFW_MAX_NUM_MLO_LINKS_PER_CHIP_V01];
u8 valid_mlo_link_id[QMI_WLFW_MAX_NUM_MLO_LINKS_PER_CHIP_V01];
};
enum ath12k_qmi_cnss_feature {
CNSS_FEATURE_MIN_ENUM_VAL_V01 = INT_MIN,
CNSS_QDSS_CFG_MISS_V01 = 3,
CNSS_MAX_FEATURE_V01 = 64,
CNSS_FEATURE_MAX_ENUM_VAL_V01 = INT_MAX,
};
struct qmi_wlanfw_host_cap_req_msg_v01 {
u8 num_clients_valid;
u32 num_clients;
u8 wake_msi_valid;
u32 wake_msi;
u8 gpios_valid;
u32 gpios_len;
u32 gpios[QMI_WLFW_MAX_NUM_GPIO_V01];
u8 nm_modem_valid;
u8 nm_modem;
u8 bdf_support_valid;
u8 bdf_support;
u8 bdf_cache_support_valid;
u8 bdf_cache_support;
u8 m3_support_valid;
u8 m3_support;
u8 m3_cache_support_valid;
u8 m3_cache_support;
u8 cal_filesys_support_valid;
u8 cal_filesys_support;
u8 cal_cache_support_valid;
u8 cal_cache_support;
u8 cal_done_valid;
u8 cal_done;
u8 mem_bucket_valid;
u32 mem_bucket;
u8 mem_cfg_mode_valid;
u8 mem_cfg_mode;
u8 cal_duration_valid;
u16 cal_duration;
u8 platform_name_valid;
char platform_name[QMI_WLANFW_MAX_PLATFORM_NAME_LEN_V01 + 1];
u8 ddr_range_valid;
struct qmi_wlanfw_host_ddr_range ddr_range[QMI_WLANFW_MAX_HOST_DDR_RANGE_SIZE_V01];
u8 host_build_type_valid;
enum qmi_wlanfw_host_build_type host_build_type;
u8 mlo_capable_valid;
u8 mlo_capable;
u8 mlo_chip_id_valid;
u16 mlo_chip_id;
u8 mlo_group_id_valid;
u8 mlo_group_id;
u8 max_mlo_peer_valid;
u16 max_mlo_peer;
u8 mlo_num_chips_valid;
u8 mlo_num_chips;
u8 mlo_chip_info_valid;
struct wlfw_host_mlo_chip_info_s_v01 mlo_chip_info[QMI_WLFW_MAX_NUM_MLO_CHIPS_V01];
u8 feature_list_valid;
u64 feature_list;
};
struct qmi_wlanfw_host_cap_resp_msg_v01 {
struct qmi_response_type_v01 resp;
};
#define QMI_WLANFW_IND_REGISTER_REQ_MSG_V01_MAX_LEN 54
#define QMI_WLANFW_IND_REGISTER_REQ_V01 0x0020
#define QMI_WLANFW_IND_REGISTER_RESP_MSG_V01_MAX_LEN 18
#define QMI_WLANFW_IND_REGISTER_RESP_V01 0x0020
#define QMI_WLANFW_CLIENT_ID 0x4b4e454c
struct qmi_wlanfw_ind_register_req_msg_v01 {
u8 fw_ready_enable_valid;
u8 fw_ready_enable;
u8 initiate_cal_download_enable_valid;
u8 initiate_cal_download_enable;
u8 initiate_cal_update_enable_valid;
u8 initiate_cal_update_enable;
u8 msa_ready_enable_valid;
u8 msa_ready_enable;
u8 pin_connect_result_enable_valid;
u8 pin_connect_result_enable;
u8 client_id_valid;
u32 client_id;
u8 request_mem_enable_valid;
u8 request_mem_enable;
u8 fw_mem_ready_enable_valid;
u8 fw_mem_ready_enable;
u8 fw_init_done_enable_valid;
u8 fw_init_done_enable;
u8 rejuvenate_enable_valid;
u32 rejuvenate_enable;
u8 xo_cal_enable_valid;
u8 xo_cal_enable;
u8 cal_done_enable_valid;
u8 cal_done_enable;
};
struct qmi_wlanfw_ind_register_resp_msg_v01 {
struct qmi_response_type_v01 resp;
u8 fw_status_valid;
u64 fw_status;
};
#define QMI_WLANFW_REQUEST_MEM_IND_MSG_V01_MAX_LEN 1824
#define QMI_WLANFW_RESPOND_MEM_REQ_MSG_V01_MAX_LEN 888
#define QMI_WLANFW_RESPOND_MEM_RESP_MSG_V01_MAX_LEN 7
#define QMI_WLANFW_REQUEST_MEM_IND_V01 0x0035
#define QMI_WLANFW_RESPOND_MEM_REQ_V01 0x0036
#define QMI_WLANFW_RESPOND_MEM_RESP_V01 0x0036
#define QMI_WLANFW_MAX_NUM_MEM_CFG_V01 2
#define QMI_WLANFW_MAX_STR_LEN_V01 16
struct qmi_wlanfw_mem_cfg_s_v01 {
u64 offset;
u32 size;
u8 secure_flag;
};
enum qmi_wlanfw_mem_type_enum_v01 {
WLANFW_MEM_TYPE_ENUM_MIN_VAL_V01 = INT_MIN,
QMI_WLANFW_MEM_TYPE_MSA_V01 = 0,
QMI_WLANFW_MEM_TYPE_DDR_V01 = 1,
QMI_WLANFW_MEM_BDF_V01 = 2,
QMI_WLANFW_MEM_M3_V01 = 3,
QMI_WLANFW_MEM_CAL_V01 = 4,
QMI_WLANFW_MEM_DPD_V01 = 5,
WLANFW_MEM_TYPE_ENUM_MAX_VAL_V01 = INT_MAX,
};
struct qmi_wlanfw_mem_seg_s_v01 {
u32 size;
enum qmi_wlanfw_mem_type_enum_v01 type;
u32 mem_cfg_len;
struct qmi_wlanfw_mem_cfg_s_v01 mem_cfg[QMI_WLANFW_MAX_NUM_MEM_CFG_V01];
};
struct qmi_wlanfw_request_mem_ind_msg_v01 {
u32 mem_seg_len;
struct qmi_wlanfw_mem_seg_s_v01 mem_seg[ATH12K_QMI_WLANFW_MAX_NUM_MEM_SEG_V01];
};
struct qmi_wlanfw_mem_seg_resp_s_v01 {
u64 addr;
u32 size;
enum qmi_wlanfw_mem_type_enum_v01 type;
u8 restore;
};
struct qmi_wlanfw_respond_mem_req_msg_v01 {
u32 mem_seg_len;
struct qmi_wlanfw_mem_seg_resp_s_v01 mem_seg[ATH12K_QMI_WLANFW_MAX_NUM_MEM_SEG_V01];
};
struct qmi_wlanfw_respond_mem_resp_msg_v01 {
struct qmi_response_type_v01 resp;
};
struct qmi_wlanfw_fw_mem_ready_ind_msg_v01 {
char placeholder;
};
struct qmi_wlanfw_fw_ready_ind_msg_v01 {
char placeholder;
};
#define QMI_WLANFW_CAP_REQ_MSG_V01_MAX_LEN 0
#define QMI_WLANFW_CAP_RESP_MSG_V01_MAX_LEN 207
#define QMI_WLANFW_CAP_REQ_V01 0x0024
#define QMI_WLANFW_CAP_RESP_V01 0x0024
enum qmi_wlanfw_pipedir_enum_v01 {
QMI_WLFW_PIPEDIR_NONE_V01 = 0,
QMI_WLFW_PIPEDIR_IN_V01 = 1,
QMI_WLFW_PIPEDIR_OUT_V01 = 2,
QMI_WLFW_PIPEDIR_INOUT_V01 = 3,
};
struct qmi_wlanfw_ce_tgt_pipe_cfg_s_v01 {
__le32 pipe_num;
__le32 pipe_dir;
__le32 nentries;
__le32 nbytes_max;
__le32 flags;
};
struct qmi_wlanfw_ce_svc_pipe_cfg_s_v01 {
__le32 service_id;
__le32 pipe_dir;
__le32 pipe_num;
};
struct qmi_wlanfw_shadow_reg_cfg_s_v01 {
u16 id;
u16 offset;
};
struct qmi_wlanfw_shadow_reg_v3_cfg_s_v01 {
u32 addr;
};
struct qmi_wlanfw_memory_region_info_s_v01 {
u64 region_addr;
u32 size;
u8 secure_flag;
};
struct qmi_wlanfw_rf_chip_info_s_v01 {
u32 chip_id;
u32 chip_family;
};
struct qmi_wlanfw_rf_board_info_s_v01 {
u32 board_id;
};
struct qmi_wlanfw_soc_info_s_v01 {
u32 soc_id;
};
struct qmi_wlanfw_fw_version_info_s_v01 {
u32 fw_version;
char fw_build_timestamp[ATH12K_QMI_WLANFW_MAX_TIMESTAMP_LEN_V01 + 1];
};
struct qmi_wlanfw_dev_mem_info_s_v01 {
u64 start;
u64 size;
};
enum qmi_wlanfw_cal_temp_id_enum_v01 {
QMI_WLANFW_CAL_TEMP_IDX_0_V01 = 0,
QMI_WLANFW_CAL_TEMP_IDX_1_V01 = 1,
QMI_WLANFW_CAL_TEMP_IDX_2_V01 = 2,
QMI_WLANFW_CAL_TEMP_IDX_3_V01 = 3,
QMI_WLANFW_CAL_TEMP_IDX_4_V01 = 4,
QMI_WLANFW_CAL_TEMP_ID_MAX_V01 = 0xFF,
};
enum qmi_wlanfw_rd_card_chain_cap_v01 {
WLFW_RD_CARD_CHAIN_CAP_MIN_VAL_V01 = INT_MIN,
WLFW_RD_CARD_CHAIN_CAP_UNSPECIFIED_V01 = 0,
WLFW_RD_CARD_CHAIN_CAP_1x1_V01 = 1,
WLFW_RD_CARD_CHAIN_CAP_2x2_V01 = 2,
WLFW_RD_CARD_CHAIN_CAP_MAX_VAL_V01 = INT_MAX,
};
struct qmi_wlanfw_cap_resp_msg_v01 {
struct qmi_response_type_v01 resp;
u8 chip_info_valid;
struct qmi_wlanfw_rf_chip_info_s_v01 chip_info;
u8 board_info_valid;
struct qmi_wlanfw_rf_board_info_s_v01 board_info;
u8 soc_info_valid;
struct qmi_wlanfw_soc_info_s_v01 soc_info;
u8 fw_version_info_valid;
struct qmi_wlanfw_fw_version_info_s_v01 fw_version_info;
u8 fw_build_id_valid;
char fw_build_id[ATH12K_QMI_WLANFW_MAX_BUILD_ID_LEN_V01 + 1];
u8 num_macs_valid;
u8 num_macs;
u8 voltage_mv_valid;
u32 voltage_mv;
u8 time_freq_hz_valid;
u32 time_freq_hz;
u8 otp_version_valid;
u32 otp_version;
u8 eeprom_caldata_read_timeout_valid;
u32 eeprom_caldata_read_timeout;
u8 fw_caps_valid;
u64 fw_caps;
u8 rd_card_chain_cap_valid;
enum qmi_wlanfw_rd_card_chain_cap_v01 rd_card_chain_cap;
u8 dev_mem_info_valid;
struct qmi_wlanfw_dev_mem_info_s_v01 dev_mem[ATH12K_QMI_WLFW_MAX_DEV_MEM_NUM_V01];
};
struct qmi_wlanfw_cap_req_msg_v01 {
char placeholder;
};
#define QMI_WLANFW_BDF_DOWNLOAD_REQ_MSG_V01_MAX_LEN 6182
#define QMI_WLANFW_BDF_DOWNLOAD_RESP_MSG_V01_MAX_LEN 7
#define QMI_WLANFW_BDF_DOWNLOAD_RESP_V01 0x0025
#define QMI_WLANFW_BDF_DOWNLOAD_REQ_V01 0x0025
/* TODO: Need to check with MCL and FW team that data can be pointer and
* can be last element in structure
*/
struct qmi_wlanfw_bdf_download_req_msg_v01 {
u8 valid;
u8 file_id_valid;
enum qmi_wlanfw_cal_temp_id_enum_v01 file_id;
u8 total_size_valid;
u32 total_size;
u8 seg_id_valid;
u32 seg_id;
u8 data_valid;
u32 data_len;
u8 data[QMI_WLANFW_MAX_DATA_SIZE_V01];
u8 end_valid;
u8 end;
u8 bdf_type_valid;
u8 bdf_type;
};
struct qmi_wlanfw_bdf_download_resp_msg_v01 {
struct qmi_response_type_v01 resp;
};
#define QMI_WLANFW_M3_INFO_REQ_MSG_V01_MAX_MSG_LEN 18
#define QMI_WLANFW_M3_INFO_RESP_MSG_V01_MAX_MSG_LEN 7
#define QMI_WLANFW_M3_INFO_RESP_V01 0x003C
#define QMI_WLANFW_M3_INFO_REQ_V01 0x003C
struct qmi_wlanfw_m3_info_req_msg_v01 {
u64 addr;
u32 size;
};
struct qmi_wlanfw_m3_info_resp_msg_v01 {
struct qmi_response_type_v01 resp;
};
#define QMI_WLANFW_WLAN_MODE_REQ_MSG_V01_MAX_LEN 11
#define QMI_WLANFW_WLAN_MODE_RESP_MSG_V01_MAX_LEN 7
#define QMI_WLANFW_WLAN_CFG_REQ_MSG_V01_MAX_LEN 803
#define QMI_WLANFW_WLAN_CFG_RESP_MSG_V01_MAX_LEN 7
#define QMI_WLANFW_WLAN_MODE_REQ_V01 0x0022
#define QMI_WLANFW_WLAN_MODE_RESP_V01 0x0022
#define QMI_WLANFW_WLAN_CFG_REQ_V01 0x0023
#define QMI_WLANFW_WLAN_CFG_RESP_V01 0x0023
#define QMI_WLANFW_MAX_STR_LEN_V01 16
#define QMI_WLANFW_MAX_NUM_CE_V01 12
#define QMI_WLANFW_MAX_NUM_SVC_V01 24
#define QMI_WLANFW_MAX_NUM_SHADOW_REG_V01 24
#define QMI_WLANFW_MAX_NUM_SHADOW_REG_V3_V01 60
struct qmi_wlanfw_wlan_mode_req_msg_v01 {
u32 mode;
u8 hw_debug_valid;
u8 hw_debug;
};
struct qmi_wlanfw_wlan_mode_resp_msg_v01 {
struct qmi_response_type_v01 resp;
};
struct qmi_wlanfw_wlan_cfg_req_msg_v01 {
u8 host_version_valid;
char host_version[QMI_WLANFW_MAX_STR_LEN_V01 + 1];
u8 tgt_cfg_valid;
u32 tgt_cfg_len;
struct qmi_wlanfw_ce_tgt_pipe_cfg_s_v01
tgt_cfg[QMI_WLANFW_MAX_NUM_CE_V01];
u8 svc_cfg_valid;
u32 svc_cfg_len;
struct qmi_wlanfw_ce_svc_pipe_cfg_s_v01
svc_cfg[QMI_WLANFW_MAX_NUM_SVC_V01];
u8 shadow_reg_valid;
u32 shadow_reg_len;
struct qmi_wlanfw_shadow_reg_cfg_s_v01
shadow_reg[QMI_WLANFW_MAX_NUM_SHADOW_REG_V01];
u8 shadow_reg_v3_valid;
u32 shadow_reg_v3_len;
struct qmi_wlanfw_shadow_reg_v3_cfg_s_v01
shadow_reg_v3[QMI_WLANFW_MAX_NUM_SHADOW_REG_V3_V01];
};
struct qmi_wlanfw_wlan_cfg_resp_msg_v01 {
struct qmi_response_type_v01 resp;
};
int ath12k_qmi_firmware_start(struct ath12k_base *ab,
u32 mode);
void ath12k_qmi_firmware_stop(struct ath12k_base *ab);
void ath12k_qmi_event_work(struct work_struct *work);
void ath12k_qmi_msg_recv_work(struct work_struct *work);
void ath12k_qmi_deinit_service(struct ath12k_base *ab);
int ath12k_qmi_init_service(struct ath12k_base *ab);
#endif


@@ -0,0 +1,732 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include <linux/rtnetlink.h>
#include "core.h"
#include "debug.h"
/* World regdom to be used in case default regd from fw is unavailable */
#define ATH12K_2GHZ_CH01_11 REG_RULE(2412 - 10, 2462 + 10, 40, 0, 20, 0)
#define ATH12K_5GHZ_5150_5350 REG_RULE(5150 - 10, 5350 + 10, 80, 0, 30,\
NL80211_RRF_NO_IR)
#define ATH12K_5GHZ_5725_5850 REG_RULE(5725 - 10, 5850 + 10, 80, 0, 30,\
NL80211_RRF_NO_IR)
#define ETSI_WEATHER_RADAR_BAND_LOW 5590
#define ETSI_WEATHER_RADAR_BAND_HIGH 5650
#define ETSI_WEATHER_RADAR_BAND_CAC_TIMEOUT 600000
static const struct ieee80211_regdomain ath12k_world_regd = {
.n_reg_rules = 3,
.alpha2 = "00",
.reg_rules = {
ATH12K_2GHZ_CH01_11,
ATH12K_5GHZ_5150_5350,
ATH12K_5GHZ_5725_5850,
}
};
static bool ath12k_regdom_changes(struct ath12k *ar, char *alpha2)
{
const struct ieee80211_regdomain *regd;
regd = rcu_dereference_rtnl(ar->hw->wiphy->regd);
/* This can happen during wiphy registration where the previous
* user request is received before we update the regd received
* from firmware.
*/
if (!regd)
return true;
return memcmp(regd->alpha2, alpha2, 2) != 0;
}
static void
ath12k_reg_notifier(struct wiphy *wiphy, struct regulatory_request *request)
{
struct ieee80211_hw *hw = wiphy_to_ieee80211_hw(wiphy);
struct ath12k_wmi_init_country_arg arg;
struct ath12k *ar = hw->priv;
int ret;
ath12k_dbg(ar->ab, ATH12K_DBG_REG,
"Regulatory Notification received for %s\n", wiphy_name(wiphy));
/* Currently supporting only General User Hints. Cell-based user
 * hints are to be handled later.
 * Hints from other sources like Core and Beacons are not expected
 * for self-managed wiphys.
 */
if (!(request->initiator == NL80211_REGDOM_SET_BY_USER &&
request->user_reg_hint_type == NL80211_USER_REG_HINT_USER)) {
ath12k_warn(ar->ab, "Unexpected Regulatory event for this wiphy\n");
return;
}
if (!IS_ENABLED(CONFIG_ATH_REG_DYNAMIC_USER_REG_HINTS)) {
ath12k_dbg(ar->ab, ATH12K_DBG_REG,
"Country Setting is not allowed\n");
return;
}
if (!ath12k_regdom_changes(ar, request->alpha2)) {
ath12k_dbg(ar->ab, ATH12K_DBG_REG, "Country is already set\n");
return;
}
/* Set the country code to the firmware and wait for
* the WMI_REG_CHAN_LIST_CC EVENT for updating the
* reg info
*/
arg.flags = ALPHA_IS_SET;
memcpy(&arg.cc_info.alpha2, request->alpha2, 2);
arg.cc_info.alpha2[2] = 0;
ret = ath12k_wmi_send_init_country_cmd(ar, &arg);
if (ret)
ath12k_warn(ar->ab,
"INIT Country code set to fw failed : %d\n", ret);
}
int ath12k_reg_update_chan_list(struct ath12k *ar)
{
struct ieee80211_supported_band **bands;
struct ath12k_wmi_scan_chan_list_arg *arg;
struct ieee80211_channel *channel;
struct ieee80211_hw *hw = ar->hw;
struct ath12k_wmi_channel_arg *ch;
enum nl80211_band band;
int num_channels = 0;
int i, ret;
bands = hw->wiphy->bands;
for (band = 0; band < NUM_NL80211_BANDS; band++) {
if (!bands[band])
continue;
for (i = 0; i < bands[band]->n_channels; i++) {
if (bands[band]->channels[i].flags &
IEEE80211_CHAN_DISABLED)
continue;
num_channels++;
}
}
if (WARN_ON(!num_channels))
return -EINVAL;
arg = kzalloc(struct_size(arg, channel, num_channels), GFP_KERNEL);
if (!arg)
return -ENOMEM;
arg->pdev_id = ar->pdev->pdev_id;
arg->nallchans = num_channels;
ch = arg->channel;
for (band = 0; band < NUM_NL80211_BANDS; band++) {
if (!bands[band])
continue;
for (i = 0; i < bands[band]->n_channels; i++) {
channel = &bands[band]->channels[i];
if (channel->flags & IEEE80211_CHAN_DISABLED)
continue;
/* TODO: Set to true/false based on some condition? */
ch->allow_ht = true;
ch->allow_vht = true;
ch->allow_he = true;
ch->dfs_set =
!!(channel->flags & IEEE80211_CHAN_RADAR);
ch->is_chan_passive = !!(channel->flags &
IEEE80211_CHAN_NO_IR);
ch->is_chan_passive |= ch->dfs_set;
ch->mhz = channel->center_freq;
ch->cfreq1 = channel->center_freq;
ch->minpower = 0;
ch->maxpower = channel->max_power * 2;
ch->maxregpower = channel->max_reg_power * 2;
ch->antennamax = channel->max_antenna_gain * 2;
/* TODO: Use appropriate phymodes */
if (channel->band == NL80211_BAND_2GHZ)
ch->phy_mode = MODE_11G;
else
ch->phy_mode = MODE_11A;
if (channel->band == NL80211_BAND_6GHZ &&
cfg80211_channel_is_psc(channel))
ch->psc_channel = true;
ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
"mac channel [%d/%d] freq %d maxpower %d regpower %d antenna %d mode %d\n",
i, arg->nallchans,
ch->mhz, ch->maxpower, ch->maxregpower,
ch->antennamax, ch->phy_mode);
ch++;
/* TODO: use quarter/half rate, cfreq12, dfs_cfreq2,
 * set_agile, reg_class_idx
 */
}
}
ret = ath12k_wmi_send_scan_chan_list_cmd(ar, arg);
kfree(arg);
return ret;
}
static void ath12k_copy_regd(struct ieee80211_regdomain *regd_orig,
struct ieee80211_regdomain *regd_copy)
{
u8 i;
/* The caller should have checked error conditions */
memcpy(regd_copy, regd_orig, sizeof(*regd_orig));
for (i = 0; i < regd_orig->n_reg_rules; i++)
memcpy(&regd_copy->reg_rules[i], &regd_orig->reg_rules[i],
sizeof(struct ieee80211_reg_rule));
}
int ath12k_regd_update(struct ath12k *ar, bool init)
{
struct ieee80211_regdomain *regd, *regd_copy = NULL;
int ret, regd_len, pdev_id;
struct ath12k_base *ab;
ab = ar->ab;
pdev_id = ar->pdev_idx;
spin_lock_bh(&ab->base_lock);
if (init) {
/* Apply the regd received during init through
* WMI_REG_CHAN_LIST_CC event. In case of failure to
* receive the regd, initialize with a default world
* regulatory.
*/
if (ab->default_regd[pdev_id]) {
regd = ab->default_regd[pdev_id];
} else {
ath12k_warn(ab,
"failed to receive default regd during init\n");
regd = (struct ieee80211_regdomain *)&ath12k_world_regd;
}
} else {
regd = ab->new_regd[pdev_id];
}
if (!regd) {
ret = -EINVAL;
spin_unlock_bh(&ab->base_lock);
goto err;
}
regd_len = sizeof(*regd) + (regd->n_reg_rules *
sizeof(struct ieee80211_reg_rule));
regd_copy = kzalloc(regd_len, GFP_ATOMIC);
if (regd_copy)
ath12k_copy_regd(regd, regd_copy);
spin_unlock_bh(&ab->base_lock);
if (!regd_copy) {
ret = -ENOMEM;
goto err;
}
rtnl_lock();
wiphy_lock(ar->hw->wiphy);
ret = regulatory_set_wiphy_regd_sync(ar->hw->wiphy, regd_copy);
wiphy_unlock(ar->hw->wiphy);
rtnl_unlock();
kfree(regd_copy);
if (ret)
goto err;
if (ar->state == ATH12K_STATE_ON) {
ret = ath12k_reg_update_chan_list(ar);
if (ret)
goto err;
}
return 0;
err:
ath12k_warn(ab, "failed to perform regd update : %d\n", ret);
return ret;
}
static enum nl80211_dfs_regions
ath12k_map_fw_dfs_region(enum ath12k_dfs_region dfs_region)
{
switch (dfs_region) {
case ATH12K_DFS_REG_FCC:
case ATH12K_DFS_REG_CN:
return NL80211_DFS_FCC;
case ATH12K_DFS_REG_ETSI:
case ATH12K_DFS_REG_KR:
return NL80211_DFS_ETSI;
case ATH12K_DFS_REG_MKK:
case ATH12K_DFS_REG_MKK_N:
return NL80211_DFS_JP;
default:
return NL80211_DFS_UNSET;
}
}
static u32 ath12k_map_fw_reg_flags(u16 reg_flags)
{
u32 flags = 0;
if (reg_flags & REGULATORY_CHAN_NO_IR)
flags = NL80211_RRF_NO_IR;
if (reg_flags & REGULATORY_CHAN_RADAR)
flags |= NL80211_RRF_DFS;
if (reg_flags & REGULATORY_CHAN_NO_OFDM)
flags |= NL80211_RRF_NO_OFDM;
if (reg_flags & REGULATORY_CHAN_INDOOR_ONLY)
flags |= NL80211_RRF_NO_OUTDOOR;
if (reg_flags & REGULATORY_CHAN_NO_HT40)
flags |= NL80211_RRF_NO_HT40;
if (reg_flags & REGULATORY_CHAN_NO_80MHZ)
flags |= NL80211_RRF_NO_80MHZ;
if (reg_flags & REGULATORY_CHAN_NO_160MHZ)
flags |= NL80211_RRF_NO_160MHZ;
return flags;
}
static bool
ath12k_reg_can_intersect(struct ieee80211_reg_rule *rule1,
struct ieee80211_reg_rule *rule2)
{
u32 start_freq1, end_freq1;
u32 start_freq2, end_freq2;
start_freq1 = rule1->freq_range.start_freq_khz;
start_freq2 = rule2->freq_range.start_freq_khz;
end_freq1 = rule1->freq_range.end_freq_khz;
end_freq2 = rule2->freq_range.end_freq_khz;
if ((start_freq1 >= start_freq2 &&
start_freq1 < end_freq2) ||
(start_freq2 > start_freq1 &&
start_freq2 < end_freq1))
return true;
/* TODO: Should we restrict intersection feasibility
* based on min bandwidth of the intersected region also,
* say the intersected rule should have a min bandwidth
* of 20MHz?
*/
return false;
}
static void ath12k_reg_intersect_rules(struct ieee80211_reg_rule *rule1,
struct ieee80211_reg_rule *rule2,
struct ieee80211_reg_rule *new_rule)
{
u32 start_freq1, end_freq1;
u32 start_freq2, end_freq2;
u32 freq_diff, max_bw;
start_freq1 = rule1->freq_range.start_freq_khz;
start_freq2 = rule2->freq_range.start_freq_khz;
end_freq1 = rule1->freq_range.end_freq_khz;
end_freq2 = rule2->freq_range.end_freq_khz;
new_rule->freq_range.start_freq_khz = max_t(u32, start_freq1,
start_freq2);
new_rule->freq_range.end_freq_khz = min_t(u32, end_freq1, end_freq2);
freq_diff = new_rule->freq_range.end_freq_khz -
new_rule->freq_range.start_freq_khz;
max_bw = min_t(u32, rule1->freq_range.max_bandwidth_khz,
rule2->freq_range.max_bandwidth_khz);
new_rule->freq_range.max_bandwidth_khz = min_t(u32, max_bw, freq_diff);
new_rule->power_rule.max_antenna_gain =
min_t(u32, rule1->power_rule.max_antenna_gain,
rule2->power_rule.max_antenna_gain);
new_rule->power_rule.max_eirp = min_t(u32, rule1->power_rule.max_eirp,
rule2->power_rule.max_eirp);
/* Use the flags of both the rules */
new_rule->flags = rule1->flags | rule2->flags;
/* To be safe, let's use the max CAC timeout of both rules */
new_rule->dfs_cac_ms = max_t(u32, rule1->dfs_cac_ms,
rule2->dfs_cac_ms);
}
static struct ieee80211_regdomain *
ath12k_regd_intersect(struct ieee80211_regdomain *default_regd,
struct ieee80211_regdomain *curr_regd)
{
u8 num_old_regd_rules, num_curr_regd_rules, num_new_regd_rules;
struct ieee80211_reg_rule *old_rule, *curr_rule, *new_rule;
struct ieee80211_regdomain *new_regd = NULL;
u8 i, j, k;
num_old_regd_rules = default_regd->n_reg_rules;
num_curr_regd_rules = curr_regd->n_reg_rules;
num_new_regd_rules = 0;
/* Find the number of intersecting rules to allocate new regd memory */
for (i = 0; i < num_old_regd_rules; i++) {
old_rule = default_regd->reg_rules + i;
for (j = 0; j < num_curr_regd_rules; j++) {
curr_rule = curr_regd->reg_rules + j;
if (ath12k_reg_can_intersect(old_rule, curr_rule))
num_new_regd_rules++;
}
}
if (!num_new_regd_rules)
return NULL;
new_regd = kzalloc(sizeof(*new_regd) + (num_new_regd_rules *
sizeof(struct ieee80211_reg_rule)),
GFP_ATOMIC);
if (!new_regd)
return NULL;
/* We set the new country and dfs region directly and only trim
* the freq, power, antenna gain by intersecting with the
* default regdomain. Also MAX of the dfs cac timeout is selected.
*/
new_regd->n_reg_rules = num_new_regd_rules;
memcpy(new_regd->alpha2, curr_regd->alpha2, sizeof(new_regd->alpha2));
new_regd->dfs_region = curr_regd->dfs_region;
new_rule = new_regd->reg_rules;
for (i = 0, k = 0; i < num_old_regd_rules; i++) {
old_rule = default_regd->reg_rules + i;
for (j = 0; j < num_curr_regd_rules; j++) {
curr_rule = curr_regd->reg_rules + j;
if (ath12k_reg_can_intersect(old_rule, curr_rule))
ath12k_reg_intersect_rules(old_rule, curr_rule,
(new_rule + k++));
}
}
return new_regd;
}
static const char *
ath12k_reg_get_regdom_str(enum nl80211_dfs_regions dfs_region)
{
switch (dfs_region) {
case NL80211_DFS_FCC:
return "FCC";
case NL80211_DFS_ETSI:
return "ETSI";
case NL80211_DFS_JP:
return "JP";
default:
return "UNSET";
}
}
static u16
ath12k_reg_adjust_bw(u16 start_freq, u16 end_freq, u16 max_bw)
{
u16 bw;
bw = end_freq - start_freq;
bw = min_t(u16, bw, max_bw);
if (bw >= 80 && bw < 160)
bw = 80;
else if (bw >= 40 && bw < 80)
bw = 40;
else if (bw < 40)
bw = 20;
return bw;
}
static void
ath12k_reg_update_rule(struct ieee80211_reg_rule *reg_rule, u32 start_freq,
u32 end_freq, u32 bw, u32 ant_gain, u32 reg_pwr,
u32 reg_flags)
{
reg_rule->freq_range.start_freq_khz = MHZ_TO_KHZ(start_freq);
reg_rule->freq_range.end_freq_khz = MHZ_TO_KHZ(end_freq);
reg_rule->freq_range.max_bandwidth_khz = MHZ_TO_KHZ(bw);
reg_rule->power_rule.max_antenna_gain = DBI_TO_MBI(ant_gain);
reg_rule->power_rule.max_eirp = DBM_TO_MBM(reg_pwr);
reg_rule->flags = reg_flags;
}
static void
ath12k_reg_update_weather_radar_band(struct ath12k_base *ab,
struct ieee80211_regdomain *regd,
struct ath12k_reg_rule *reg_rule,
u8 *rule_idx, u32 flags, u16 max_bw)
{
u32 end_freq;
u16 bw;
u8 i;
i = *rule_idx;
bw = ath12k_reg_adjust_bw(reg_rule->start_freq,
ETSI_WEATHER_RADAR_BAND_LOW, max_bw);
ath12k_reg_update_rule(regd->reg_rules + i, reg_rule->start_freq,
ETSI_WEATHER_RADAR_BAND_LOW, bw,
reg_rule->ant_gain, reg_rule->reg_power,
flags);
ath12k_dbg(ab, ATH12K_DBG_REG,
"\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
i + 1, reg_rule->start_freq, ETSI_WEATHER_RADAR_BAND_LOW,
bw, reg_rule->ant_gain, reg_rule->reg_power,
regd->reg_rules[i].dfs_cac_ms,
flags);
if (reg_rule->end_freq > ETSI_WEATHER_RADAR_BAND_HIGH)
end_freq = ETSI_WEATHER_RADAR_BAND_HIGH;
else
end_freq = reg_rule->end_freq;
bw = ath12k_reg_adjust_bw(ETSI_WEATHER_RADAR_BAND_LOW, end_freq,
max_bw);
i++;
ath12k_reg_update_rule(regd->reg_rules + i,
ETSI_WEATHER_RADAR_BAND_LOW, end_freq, bw,
reg_rule->ant_gain, reg_rule->reg_power,
flags);
regd->reg_rules[i].dfs_cac_ms = ETSI_WEATHER_RADAR_BAND_CAC_TIMEOUT;
ath12k_dbg(ab, ATH12K_DBG_REG,
"\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
i + 1, ETSI_WEATHER_RADAR_BAND_LOW, end_freq,
bw, reg_rule->ant_gain, reg_rule->reg_power,
regd->reg_rules[i].dfs_cac_ms,
flags);
if (end_freq == reg_rule->end_freq) {
regd->n_reg_rules--;
*rule_idx = i;
return;
}
bw = ath12k_reg_adjust_bw(ETSI_WEATHER_RADAR_BAND_HIGH,
reg_rule->end_freq, max_bw);
i++;
ath12k_reg_update_rule(regd->reg_rules + i, ETSI_WEATHER_RADAR_BAND_HIGH,
reg_rule->end_freq, bw,
reg_rule->ant_gain, reg_rule->reg_power,
flags);
ath12k_dbg(ab, ATH12K_DBG_REG,
"\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
i + 1, ETSI_WEATHER_RADAR_BAND_HIGH, reg_rule->end_freq,
bw, reg_rule->ant_gain, reg_rule->reg_power,
regd->reg_rules[i].dfs_cac_ms,
flags);
*rule_idx = i;
}
struct ieee80211_regdomain *
ath12k_reg_build_regd(struct ath12k_base *ab,
struct ath12k_reg_info *reg_info, bool intersect)
{
struct ieee80211_regdomain *tmp_regd, *default_regd, *new_regd = NULL;
struct ath12k_reg_rule *reg_rule;
u8 i = 0, j = 0, k = 0;
u8 num_rules;
u16 max_bw;
u32 flags;
char alpha2[3];
num_rules = reg_info->num_5g_reg_rules + reg_info->num_2g_reg_rules;
/* FIXME: Currently taking reg rules for 6G only from Indoor AP mode list.
* This can be updated to choose the combination dynamically based on AP
* type and client type, after complete 6G regulatory support is added.
*/
if (reg_info->is_ext_reg_event)
num_rules += reg_info->num_6g_reg_rules_ap[WMI_REG_INDOOR_AP];
if (!num_rules)
goto ret;
/* Add max additional rules to accommodate weather radar band */
if (reg_info->dfs_region == ATH12K_DFS_REG_ETSI)
num_rules += 2;
tmp_regd = kzalloc(sizeof(*tmp_regd) +
(num_rules * sizeof(struct ieee80211_reg_rule)),
GFP_ATOMIC);
if (!tmp_regd)
goto ret;
memcpy(tmp_regd->alpha2, reg_info->alpha2, REG_ALPHA2_LEN + 1);
memcpy(alpha2, reg_info->alpha2, REG_ALPHA2_LEN + 1);
alpha2[2] = '\0';
tmp_regd->dfs_region = ath12k_map_fw_dfs_region(reg_info->dfs_region);
ath12k_dbg(ab, ATH12K_DBG_REG,
"\r\nCountry %s, CFG Regdomain %s FW Regdomain %d, num_reg_rules %d\n",
alpha2, ath12k_reg_get_regdom_str(tmp_regd->dfs_region),
reg_info->dfs_region, num_rules);
/* Update reg_rules[] below. Firmware is expected to
* send these rules in order (2G rules first and then 5G)
*/
for (; i < num_rules; i++) {
if (reg_info->num_2g_reg_rules &&
(i < reg_info->num_2g_reg_rules)) {
reg_rule = reg_info->reg_rules_2g_ptr + i;
max_bw = min_t(u16, reg_rule->max_bw,
reg_info->max_bw_2g);
flags = 0;
} else if (reg_info->num_5g_reg_rules &&
(j < reg_info->num_5g_reg_rules)) {
reg_rule = reg_info->reg_rules_5g_ptr + j++;
max_bw = min_t(u16, reg_rule->max_bw,
reg_info->max_bw_5g);
/* FW doesn't pass NL80211_RRF_AUTO_BW flag for
* BW Auto correction, we can enable this by default
* for all 5G rules here. The regulatory core performs
* BW correction if required and applies flags as
* per other BW rule flags we pass from here
*/
flags = NL80211_RRF_AUTO_BW;
} else if (reg_info->is_ext_reg_event &&
reg_info->num_6g_reg_rules_ap[WMI_REG_INDOOR_AP] &&
(k < reg_info->num_6g_reg_rules_ap[WMI_REG_INDOOR_AP])) {
reg_rule = reg_info->reg_rules_6g_ap_ptr[WMI_REG_INDOOR_AP] + k++;
max_bw = min_t(u16, reg_rule->max_bw,
reg_info->max_bw_6g_ap[WMI_REG_INDOOR_AP]);
flags = NL80211_RRF_AUTO_BW;
} else {
break;
}
flags |= ath12k_map_fw_reg_flags(reg_rule->flags);
ath12k_reg_update_rule(tmp_regd->reg_rules + i,
reg_rule->start_freq,
reg_rule->end_freq, max_bw,
reg_rule->ant_gain, reg_rule->reg_power,
flags);
/* Update dfs cac timeout if the dfs domain is ETSI and the
* new rule covers weather radar band.
* Default value of '0' corresponds to 60s timeout, so no
* need to update that for other rules.
*/
if (flags & NL80211_RRF_DFS &&
reg_info->dfs_region == ATH12K_DFS_REG_ETSI &&
(reg_rule->end_freq > ETSI_WEATHER_RADAR_BAND_LOW &&
reg_rule->start_freq < ETSI_WEATHER_RADAR_BAND_HIGH)) {
ath12k_reg_update_weather_radar_band(ab, tmp_regd,
reg_rule, &i,
flags, max_bw);
continue;
}
if (reg_info->is_ext_reg_event) {
ath12k_dbg(ab, ATH12K_DBG_REG, "\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d) (%d, %d)\n",
i + 1, reg_rule->start_freq, reg_rule->end_freq,
max_bw, reg_rule->ant_gain, reg_rule->reg_power,
tmp_regd->reg_rules[i].dfs_cac_ms,
flags, reg_rule->psd_flag, reg_rule->psd_eirp);
} else {
ath12k_dbg(ab, ATH12K_DBG_REG,
"\t%d. (%d - %d @ %d) (%d, %d) (%d ms) (FLAGS %d)\n",
i + 1, reg_rule->start_freq, reg_rule->end_freq,
max_bw, reg_rule->ant_gain, reg_rule->reg_power,
tmp_regd->reg_rules[i].dfs_cac_ms,
flags);
}
}
tmp_regd->n_reg_rules = i;
if (intersect) {
default_regd = ab->default_regd[reg_info->phy_id];
/* Get a new regd by intersecting the received regd with
* our default regd.
*/
new_regd = ath12k_regd_intersect(default_regd, tmp_regd);
kfree(tmp_regd);
if (!new_regd) {
ath12k_warn(ab, "Unable to create intersected regdomain\n");
goto ret;
}
} else {
new_regd = tmp_regd;
}
ret:
return new_regd;
}
void ath12k_regd_update_work(struct work_struct *work)
{
struct ath12k *ar = container_of(work, struct ath12k,
regd_update_work);
int ret;
ret = ath12k_regd_update(ar, false);
if (ret) {
/* Firmware has already moved to the new regd. We need
* to maintain channel consistency across FW, Host driver
* and userspace. Hence as a fallback mechanism we can set
* the prev or default country code to the firmware.
*/
/* TODO: Implement Fallback Mechanism */
}
}
void ath12k_reg_init(struct ath12k *ar)
{
ar->hw->wiphy->regulatory_flags = REGULATORY_WIPHY_SELF_MANAGED;
ar->hw->wiphy->reg_notifier = ath12k_reg_notifier;
}
void ath12k_reg_free(struct ath12k_base *ab)
{
int i;
for (i = 0; i < ab->hw_params->max_radios; i++) {
kfree(ab->default_regd[i]);
kfree(ab->new_regd[i]);
}
}


@@ -0,0 +1,95 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH12K_REG_H
#define ATH12K_REG_H
#include <linux/kernel.h>
#include <net/regulatory.h>
struct ath12k_base;
struct ath12k;
/* DFS regdomains supported by Firmware */
enum ath12k_dfs_region {
ATH12K_DFS_REG_UNSET,
ATH12K_DFS_REG_FCC,
ATH12K_DFS_REG_ETSI,
ATH12K_DFS_REG_MKK,
ATH12K_DFS_REG_CN,
ATH12K_DFS_REG_KR,
ATH12K_DFS_REG_MKK_N,
ATH12K_DFS_REG_UNDEF,
};
enum ath12k_reg_cc_code {
REG_SET_CC_STATUS_PASS = 0,
REG_CURRENT_ALPHA2_NOT_FOUND = 1,
REG_INIT_ALPHA2_NOT_FOUND = 2,
REG_SET_CC_CHANGE_NOT_ALLOWED = 3,
REG_SET_CC_STATUS_NO_MEMORY = 4,
REG_SET_CC_STATUS_FAIL = 5,
};
struct ath12k_reg_rule {
u16 start_freq;
u16 end_freq;
u16 max_bw;
u8 reg_power;
u8 ant_gain;
u16 flags;
bool psd_flag;
u16 psd_eirp;
};
struct ath12k_reg_info {
enum ath12k_reg_cc_code status_code;
u8 num_phy;
u8 phy_id;
u16 reg_dmn_pair;
u16 ctry_code;
u8 alpha2[REG_ALPHA2_LEN + 1];
u32 dfs_region;
u32 phybitmap;
bool is_ext_reg_event;
u32 min_bw_2g;
u32 max_bw_2g;
u32 min_bw_5g;
u32 max_bw_5g;
u32 num_2g_reg_rules;
u32 num_5g_reg_rules;
struct ath12k_reg_rule *reg_rules_2g_ptr;
struct ath12k_reg_rule *reg_rules_5g_ptr;
enum wmi_reg_6g_client_type client_type;
bool rnr_tpe_usable;
bool unspecified_ap_usable;
/* TODO: All 6G related info can be stored only for required
* combination instead of all types, to optimize memory usage.
*/
u8 domain_code_6g_ap[WMI_REG_CURRENT_MAX_AP_TYPE];
u8 domain_code_6g_client[WMI_REG_CURRENT_MAX_AP_TYPE][WMI_REG_MAX_CLIENT_TYPE];
u32 domain_code_6g_super_id;
u32 min_bw_6g_ap[WMI_REG_CURRENT_MAX_AP_TYPE];
u32 max_bw_6g_ap[WMI_REG_CURRENT_MAX_AP_TYPE];
u32 min_bw_6g_client[WMI_REG_CURRENT_MAX_AP_TYPE][WMI_REG_MAX_CLIENT_TYPE];
u32 max_bw_6g_client[WMI_REG_CURRENT_MAX_AP_TYPE][WMI_REG_MAX_CLIENT_TYPE];
u32 num_6g_reg_rules_ap[WMI_REG_CURRENT_MAX_AP_TYPE];
u32 num_6g_reg_rules_cl[WMI_REG_CURRENT_MAX_AP_TYPE][WMI_REG_MAX_CLIENT_TYPE];
struct ath12k_reg_rule *reg_rules_6g_ap_ptr[WMI_REG_CURRENT_MAX_AP_TYPE];
struct ath12k_reg_rule *reg_rules_6g_client_ptr
[WMI_REG_CURRENT_MAX_AP_TYPE][WMI_REG_MAX_CLIENT_TYPE];
};
void ath12k_reg_init(struct ath12k *ar);
void ath12k_reg_free(struct ath12k_base *ab);
void ath12k_regd_update_work(struct work_struct *work);
struct ieee80211_regdomain *ath12k_reg_build_regd(struct ath12k_base *ab,
struct ath12k_reg_info *reg_info,
bool intersect);
int ath12k_regd_update(struct ath12k *ar, bool init);
int ath12k_reg_update_chan_list(struct ath12k *ar);
#endif

[diff not shown due to its large size]


@@ -0,0 +1,10 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include <linux/module.h>
#define CREATE_TRACE_POINTS
#include "trace.h"


@@ -0,0 +1,152 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#if !defined(_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
#include <linux/tracepoint.h>
#include "core.h"
#define _TRACE_H_
/* create empty functions when tracing is disabled */
#if !defined(CONFIG_ATH12K_TRACING) || defined(__CHECKER__)
#undef TRACE_EVENT
#define TRACE_EVENT(name, proto, ...) \
static inline void trace_ ## name(proto) {}
#endif /* !CONFIG_ATH12K_TRACING || __CHECKER__ */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM ath12k
TRACE_EVENT(ath12k_htt_pktlog,
TP_PROTO(struct ath12k *ar, const void *buf, u16 buf_len,
u32 pktlog_checksum),
TP_ARGS(ar, buf, buf_len, pktlog_checksum),
TP_STRUCT__entry(
__string(device, dev_name(ar->ab->dev))
__string(driver, dev_driver_string(ar->ab->dev))
__field(u16, buf_len)
__field(u32, pktlog_checksum)
__dynamic_array(u8, pktlog, buf_len)
),
TP_fast_assign(
__assign_str(device, dev_name(ar->ab->dev));
__assign_str(driver, dev_driver_string(ar->ab->dev));
__entry->buf_len = buf_len;
__entry->pktlog_checksum = pktlog_checksum;
memcpy(__get_dynamic_array(pktlog), buf, buf_len);
),
TP_printk(
"%s %s size %u pktlog_checksum %d",
__get_str(driver),
__get_str(device),
__entry->buf_len,
__entry->pktlog_checksum
)
);
TRACE_EVENT(ath12k_htt_ppdu_stats,
TP_PROTO(struct ath12k *ar, const void *data, size_t len),
TP_ARGS(ar, data, len),
TP_STRUCT__entry(
__string(device, dev_name(ar->ab->dev))
__string(driver, dev_driver_string(ar->ab->dev))
__field(u16, len)
__field(u32, info)
__field(u32, sync_tstmp_lo_us)
__field(u32, sync_tstmp_hi_us)
__field(u32, mlo_offset_lo)
__field(u32, mlo_offset_hi)
__field(u32, mlo_offset_clks)
__field(u32, mlo_comp_clks)
__field(u32, mlo_comp_timer)
__dynamic_array(u8, ppdu, len)
),
TP_fast_assign(
__assign_str(device, dev_name(ar->ab->dev));
__assign_str(driver, dev_driver_string(ar->ab->dev));
__entry->len = len;
__entry->info = ar->pdev->timestamp.info;
__entry->sync_tstmp_lo_us = ar->pdev->timestamp.sync_timestamp_lo_us;
__entry->sync_tstmp_hi_us = ar->pdev->timestamp.sync_timestamp_hi_us;
__entry->mlo_offset_lo = ar->pdev->timestamp.mlo_offset_lo;
__entry->mlo_offset_hi = ar->pdev->timestamp.mlo_offset_hi;
__entry->mlo_offset_clks = ar->pdev->timestamp.mlo_offset_clks;
__entry->mlo_comp_clks = ar->pdev->timestamp.mlo_comp_clks;
__entry->mlo_comp_timer = ar->pdev->timestamp.mlo_comp_timer;
memcpy(__get_dynamic_array(ppdu), data, len);
),
TP_printk(
"%s %s ppdu len %d",
__get_str(driver),
__get_str(device),
__entry->len
)
);
TRACE_EVENT(ath12k_htt_rxdesc,
TP_PROTO(struct ath12k *ar, const void *data, size_t type, size_t len),
TP_ARGS(ar, data, type, len),
TP_STRUCT__entry(
__string(device, dev_name(ar->ab->dev))
__string(driver, dev_driver_string(ar->ab->dev))
__field(u16, len)
__field(u16, type)
__field(u32, info)
__field(u32, sync_tstmp_lo_us)
__field(u32, sync_tstmp_hi_us)
__field(u32, mlo_offset_lo)
__field(u32, mlo_offset_hi)
__field(u32, mlo_offset_clks)
__field(u32, mlo_comp_clks)
__field(u32, mlo_comp_timer)
__dynamic_array(u8, rxdesc, len)
),
TP_fast_assign(
__assign_str(device, dev_name(ar->ab->dev));
__assign_str(driver, dev_driver_string(ar->ab->dev));
__entry->len = len;
__entry->type = type;
__entry->info = ar->pdev->timestamp.info;
__entry->sync_tstmp_lo_us = ar->pdev->timestamp.sync_timestamp_lo_us;
__entry->sync_tstmp_hi_us = ar->pdev->timestamp.sync_timestamp_hi_us;
__entry->mlo_offset_lo = ar->pdev->timestamp.mlo_offset_lo;
__entry->mlo_offset_hi = ar->pdev->timestamp.mlo_offset_hi;
__entry->mlo_offset_clks = ar->pdev->timestamp.mlo_offset_clks;
__entry->mlo_comp_clks = ar->pdev->timestamp.mlo_comp_clks;
__entry->mlo_comp_timer = ar->pdev->timestamp.mlo_comp_timer;
memcpy(__get_dynamic_array(rxdesc), data, len);
),
TP_printk(
"%s %s rxdesc len %d",
__get_str(driver),
__get_str(device),
__entry->len
)
);
#endif /* _TRACE_H_ || TRACE_HEADER_MULTI_READ */
/* we don't want to use include/trace/events */
#undef TRACE_INCLUDE_PATH
#define TRACE_INCLUDE_PATH .
#undef TRACE_INCLUDE_FILE
#define TRACE_INCLUDE_FILE trace
/* This part must be outside protection */
#include <trace/define_trace.h>

[diff not shown due to its large size]

[diff not shown due to its large size]


@@ -1119,7 +1119,7 @@ void ath6kl_cfg80211_ch_switch_notify(struct ath6kl_vif *vif, int freq,
NL80211_CHAN_HT20 : NL80211_CHAN_NO_HT);
mutex_lock(&vif->wdev.mtx);
cfg80211_ch_switch_notify(vif->ndev, &chandef, 0);
cfg80211_ch_switch_notify(vif->ndev, &chandef, 0, 0);
mutex_unlock(&vif->wdev.mtx);
}


@@ -1277,13 +1277,13 @@ static void ar5008_hw_set_radar_conf(struct ath_hw *ah)
static void ar5008_hw_init_txpower_cck(struct ath_hw *ah, int16_t *rate_array)
{
#define CCK_DELTA(x) ((OLC_FOR_AR9280_20_LATER) ? max((x) - 2, 0) : (x))
ah->tx_power[0] = CCK_DELTA(rate_array[rate1l]);
ah->tx_power[1] = CCK_DELTA(min(rate_array[rate2l],
#define CCK_DELTA(_ah, x) ((OLC_FOR_AR9280_20_LATER(_ah)) ? max((x) - 2, 0) : (x))
ah->tx_power[0] = CCK_DELTA(ah, rate_array[rate1l]);
ah->tx_power[1] = CCK_DELTA(ah, min(rate_array[rate2l],
rate_array[rate2s]));
ah->tx_power[2] = CCK_DELTA(min(rate_array[rate5_5l],
ah->tx_power[2] = CCK_DELTA(ah, min(rate_array[rate5_5l],
rate_array[rate5_5s]));
ah->tx_power[3] = CCK_DELTA(min(rate_array[rate11l],
ah->tx_power[3] = CCK_DELTA(ah, min(rate_array[rate11l],
rate_array[rate11s]));
#undef CCK_DELTA
}


@@ -659,9 +659,9 @@ static void ar9002_hw_pa_cal(struct ath_hw *ah, bool is_reset)
static void ar9002_hw_olc_temp_compensation(struct ath_hw *ah)
{
if (OLC_FOR_AR9287_10_LATER)
if (OLC_FOR_AR9287_10_LATER(ah))
ar9287_hw_olc_temp_compensation(ah);
else if (OLC_FOR_AR9280_20_LATER)
else if (OLC_FOR_AR9280_20_LATER(ah))
ar9280_hw_olc_temp_compensation(ah);
}
@@ -672,7 +672,7 @@ static int ar9002_hw_calibrate(struct ath_hw *ah, struct ath9k_channel *chan,
bool nfcal, nfcal_pending = false, percal_pending;
int ret;
nfcal = !!(REG_READ(ah, AR_PHY_AGC_CONTROL) & AR_PHY_AGC_CONTROL_NF);
nfcal = !!(REG_READ(ah, AR_PHY_AGC_CONTROL(ah)) & AR_PHY_AGC_CONTROL_NF);
if (ah->caldata) {
nfcal_pending = test_bit(NFCAL_PENDING, &ah->caldata->cal_flags);
if (longcal) /* Remember to not miss */
@@ -752,11 +752,11 @@ static bool ar9285_hw_cl_cal(struct ath_hw *ah, struct ath9k_channel *chan)
if (IS_CHAN_HT20(chan)) {
REG_SET_BIT(ah, AR_PHY_CL_CAL_CTL, AR_PHY_PARALLEL_CAL_ENABLE);
REG_SET_BIT(ah, AR_PHY_TURBO, AR_PHY_FC_DYN2040_EN);
REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL,
REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_FLTR_CAL);
REG_CLR_BIT(ah, AR_PHY_TPCRG1, AR_PHY_TPCRG1_PD_CAL_ENABLE);
REG_SET_BIT(ah, AR_PHY_AGC_CONTROL, AR_PHY_AGC_CONTROL_CAL);
if (!ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL,
REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah), AR_PHY_AGC_CONTROL_CAL);
if (!ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_CAL, 0, AH_WAIT_TIMEOUT)) {
ath_dbg(common, CALIBRATE,
"offset calibration failed to complete in %d ms; noisy environment?\n",
@@ -768,10 +768,10 @@ static bool ar9285_hw_cl_cal(struct ath_hw *ah, struct ath9k_channel *chan)
REG_CLR_BIT(ah, AR_PHY_CL_CAL_CTL, AR_PHY_CL_CAL_ENABLE);
}
REG_CLR_BIT(ah, AR_PHY_ADC_CTL, AR_PHY_ADC_CTL_OFF_PWDADC);
REG_SET_BIT(ah, AR_PHY_AGC_CONTROL, AR_PHY_AGC_CONTROL_FLTR_CAL);
REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah), AR_PHY_AGC_CONTROL_FLTR_CAL);
REG_SET_BIT(ah, AR_PHY_TPCRG1, AR_PHY_TPCRG1_PD_CAL_ENABLE);
REG_SET_BIT(ah, AR_PHY_AGC_CONTROL, AR_PHY_AGC_CONTROL_CAL);
if (!ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL, AR_PHY_AGC_CONTROL_CAL,
REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah), AR_PHY_AGC_CONTROL_CAL);
if (!ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL(ah), AR_PHY_AGC_CONTROL_CAL,
0, AH_WAIT_TIMEOUT)) {
ath_dbg(common, CALIBRATE,
"offset calibration failed to complete in %d ms; noisy environment?\n",
@@ -781,7 +781,7 @@ static bool ar9285_hw_cl_cal(struct ath_hw *ah, struct ath9k_channel *chan)
REG_SET_BIT(ah, AR_PHY_ADC_CTL, AR_PHY_ADC_CTL_OFF_PWDADC);
REG_CLR_BIT(ah, AR_PHY_CL_CAL_CTL, AR_PHY_CL_CAL_ENABLE);
REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL, AR_PHY_AGC_CONTROL_FLTR_CAL);
REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL(ah), AR_PHY_AGC_CONTROL_FLTR_CAL);
return true;
}
@@ -857,17 +857,17 @@ static bool ar9002_hw_init_cal(struct ath_hw *ah, struct ath9k_channel *chan)
if (!AR_SREV_9287_11_OR_LATER(ah))
REG_CLR_BIT(ah, AR_PHY_ADC_CTL,
AR_PHY_ADC_CTL_OFF_PWDADC);
REG_SET_BIT(ah, AR_PHY_AGC_CONTROL,
REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_FLTR_CAL);
}
/* Calibrate the AGC */
REG_WRITE(ah, AR_PHY_AGC_CONTROL,
REG_READ(ah, AR_PHY_AGC_CONTROL) |
REG_WRITE(ah, AR_PHY_AGC_CONTROL(ah),
REG_READ(ah, AR_PHY_AGC_CONTROL(ah)) |
AR_PHY_AGC_CONTROL_CAL);
/* Poll for offset calibration complete */
if (!ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL,
if (!ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_CAL,
0, AH_WAIT_TIMEOUT)) {
ath_dbg(common, CALIBRATE,
@@ -880,7 +880,7 @@ static bool ar9002_hw_init_cal(struct ath_hw *ah, struct ath9k_channel *chan)
if (!AR_SREV_9287_11_OR_LATER(ah))
REG_SET_BIT(ah, AR_PHY_ADC_CTL,
AR_PHY_ADC_CTL_OFF_PWDADC);
REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL,
REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_FLTR_CAL);
}
}


@@ -249,9 +249,9 @@ static void ar9002_hw_configpcipowersave(struct ath_hw *ah,
if (power_off) {
/* clear bit 19 to disable L1 */
REG_CLR_BIT(ah, AR_PCIE_PM_CTRL, AR_PCIE_PM_CTRL_ENA);
REG_CLR_BIT(ah, AR_PCIE_PM_CTRL(ah), AR_PCIE_PM_CTRL_ENA);
val = REG_READ(ah, AR_WA);
val = REG_READ(ah, AR_WA(ah));
/*
* Set PCIe workaround bits
@@ -286,7 +286,7 @@ static void ar9002_hw_configpcipowersave(struct ath_hw *ah,
if (AR_SREV_9285E_20(ah))
val |= AR_WA_BIT23;
REG_WRITE(ah, AR_WA, val);
REG_WRITE(ah, AR_WA(ah), val);
} else {
if (ah->config.pcie_waen) {
val = ah->config.pcie_waen;
@@ -314,10 +314,10 @@ static void ar9002_hw_configpcipowersave(struct ath_hw *ah,
if (AR_SREV_9285E_20(ah))
val |= AR_WA_BIT23;
REG_WRITE(ah, AR_WA, val);
REG_WRITE(ah, AR_WA(ah), val);
/* set bit 19 to allow forcing of pcie core into L1 state */
REG_SET_BIT(ah, AR_PCIE_PM_CTRL, AR_PCIE_PM_CTRL_ENA);
REG_SET_BIT(ah, AR_PCIE_PM_CTRL(ah), AR_PCIE_PM_CTRL_ENA);
}
}


@@ -21,7 +21,7 @@
static void ar9002_hw_rx_enable(struct ath_hw *ah)
{
REG_WRITE(ah, AR_CR, AR_CR_RXE);
REG_WRITE(ah, AR_CR, AR_CR_RXE(ah));
}
static void ar9002_hw_set_desc_link(void *ds, u32 ds_link)
@@ -40,14 +40,14 @@ static bool ar9002_hw_get_isr(struct ath_hw *ah, enum ath9k_int *masked,
struct ath_common *common = ath9k_hw_common(ah);
if (!AR_SREV_9100(ah)) {
if (REG_READ(ah, AR_INTR_ASYNC_CAUSE) & AR_INTR_MAC_IRQ) {
if ((REG_READ(ah, AR_RTC_STATUS) & AR_RTC_STATUS_M)
if (REG_READ(ah, AR_INTR_ASYNC_CAUSE(ah)) & AR_INTR_MAC_IRQ) {
if ((REG_READ(ah, AR_RTC_STATUS(ah)) & AR_RTC_STATUS_M(ah))
== AR_RTC_STATUS_ON) {
isr = REG_READ(ah, AR_ISR);
}
}
sync_cause = REG_READ(ah, AR_INTR_SYNC_CAUSE) &
sync_cause = REG_READ(ah, AR_INTR_SYNC_CAUSE(ah)) &
AR_INTR_SYNC_DEFAULT;
*masked = 0;
@@ -138,7 +138,7 @@ static bool ar9002_hw_get_isr(struct ath_hw *ah, enum ath9k_int *masked,
u32 s5_s;
if (pCap->hw_caps & ATH9K_HW_CAP_RAC_SUPPORTED) {
s5_s = REG_READ(ah, AR_ISR_S5_S);
s5_s = REG_READ(ah, AR_ISR_S5_S(ah));
} else {
s5_s = REG_READ(ah, AR_ISR_S5);
}
@@ -201,8 +201,8 @@ static bool ar9002_hw_get_isr(struct ath_hw *ah, enum ath9k_int *masked,
"AR_INTR_SYNC_LOCAL_TIMEOUT\n");
}
REG_WRITE(ah, AR_INTR_SYNC_CAUSE_CLR, sync_cause);
(void) REG_READ(ah, AR_INTR_SYNC_CAUSE_CLR);
REG_WRITE(ah, AR_INTR_SYNC_CAUSE_CLR(ah), sync_cause);
(void) REG_READ(ah, AR_INTR_SYNC_CAUSE_CLR(ah));
}
return true;


@@ -281,10 +281,10 @@ static void ar9002_olc_init(struct ath_hw *ah)
{
u32 i;
if (!OLC_FOR_AR9280_20_LATER)
if (!OLC_FOR_AR9280_20_LATER(ah))
return;
if (OLC_FOR_AR9287_10_LATER) {
if (OLC_FOR_AR9287_10_LATER(ah)) {
REG_SET_BIT(ah, AR_PHY_TX_PWRCTRL9,
AR_PHY_TX_PWRCTRL9_RES_DC_REMOVAL);
ath9k_hw_analog_shift_rmw(ah, AR9287_AN_TXPC0,


@@ -346,14 +346,14 @@ static bool ar9003_hw_dynamic_osdac_selection(struct ath_hw *ah,
/*
* Clear offset and IQ calibration, run AGC cal.
*/
REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL,
REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_OFFSET_CAL);
REG_CLR_BIT(ah, AR_PHY_TX_IQCAL_CONTROL_0,
REG_CLR_BIT(ah, AR_PHY_TX_IQCAL_CONTROL_0(ah),
AR_PHY_TX_IQCAL_CONTROL_0_ENABLE_TXIQ_CAL);
REG_WRITE(ah, AR_PHY_AGC_CONTROL,
REG_READ(ah, AR_PHY_AGC_CONTROL) | AR_PHY_AGC_CONTROL_CAL);
REG_WRITE(ah, AR_PHY_AGC_CONTROL(ah),
REG_READ(ah, AR_PHY_AGC_CONTROL(ah)) | AR_PHY_AGC_CONTROL_CAL);
status = ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL,
status = ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_CAL,
0, AH_WAIT_TIMEOUT);
if (!status) {
@@ -367,13 +367,13 @@ static bool ar9003_hw_dynamic_osdac_selection(struct ath_hw *ah,
* (Carrier Leak calibration, TX Filter calibration and
* Peak Detector offset calibration).
*/
REG_SET_BIT(ah, AR_PHY_AGC_CONTROL,
REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_OFFSET_CAL);
REG_CLR_BIT(ah, AR_PHY_CL_CAL_CTL,
AR_PHY_CL_CAL_ENABLE);
REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL,
REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_FLTR_CAL);
REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL,
REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_PKDET_CAL);
ch0_done = 0;
@@ -387,10 +387,10 @@ static bool ar9003_hw_dynamic_osdac_selection(struct ath_hw *ah,
REG_SET_BIT(ah, AR_PHY_ACTIVE, AR_PHY_ACTIVE_EN);
REG_WRITE(ah, AR_PHY_AGC_CONTROL,
REG_READ(ah, AR_PHY_AGC_CONTROL) | AR_PHY_AGC_CONTROL_CAL);
REG_WRITE(ah, AR_PHY_AGC_CONTROL(ah),
REG_READ(ah, AR_PHY_AGC_CONTROL(ah)) | AR_PHY_AGC_CONTROL_CAL);
status = ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL,
status = ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_CAL,
0, AH_WAIT_TIMEOUT);
if (!status) {
@@ -531,7 +531,7 @@ static bool ar9003_hw_dynamic_osdac_selection(struct ath_hw *ah,
}
}
REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL,
REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_OFFSET_CAL);
REG_SET_BIT(ah, AR_PHY_ACTIVE, AR_PHY_ACTIVE_EN);
@@ -539,7 +539,7 @@ static bool ar9003_hw_dynamic_osdac_selection(struct ath_hw *ah,
* We don't need to check txiqcal_done here since it is always
* set for AR9550.
*/
REG_SET_BIT(ah, AR_PHY_TX_IQCAL_CONTROL_0,
REG_SET_BIT(ah, AR_PHY_TX_IQCAL_CONTROL_0(ah),
AR_PHY_TX_IQCAL_CONTROL_0_ENABLE_TXIQ_CAL);
return true;
@@ -897,7 +897,7 @@ static void ar9003_hw_tx_iq_cal_outlier_detection(struct ath_hw *ah,
memset(tx_corr_coeff, 0, sizeof(tx_corr_coeff));
for (i = 0; i < MAX_MEASUREMENT / 2; i++) {
tx_corr_coeff[i * 2][0] = tx_corr_coeff[(i * 2) + 1][0] =
AR_PHY_TX_IQCAL_CORR_COEFF_B0(i);
AR_PHY_TX_IQCAL_CORR_COEFF_B0(ah, i);
if (!AR_SREV_9485(ah)) {
tx_corr_coeff[i * 2][1] =
tx_corr_coeff[(i * 2) + 1][1] =
@@ -914,7 +914,7 @@ static void ar9003_hw_tx_iq_cal_outlier_detection(struct ath_hw *ah,
if (!(ah->txchainmask & (1 << i)))
continue;
nmeasurement = REG_READ_FIELD(ah,
AR_PHY_TX_IQCAL_STATUS_B0,
AR_PHY_TX_IQCAL_STATUS_B0(ah),
AR_PHY_CALIBRATED_GAINS_0);
if (nmeasurement > MAX_MEASUREMENT)
@@ -988,10 +988,10 @@ static bool ar9003_hw_tx_iq_cal_run(struct ath_hw *ah)
REG_RMW_FIELD(ah, AR_PHY_TX_FORCED_GAIN,
AR_PHY_TXGAIN_FORCE, 0);
REG_RMW_FIELD(ah, AR_PHY_TX_IQCAL_START,
REG_RMW_FIELD(ah, AR_PHY_TX_IQCAL_START(ah),
AR_PHY_TX_IQCAL_START_DO_CAL, 1);
if (!ath9k_hw_wait(ah, AR_PHY_TX_IQCAL_START,
if (!ath9k_hw_wait(ah, AR_PHY_TX_IQCAL_START(ah),
AR_PHY_TX_IQCAL_START_DO_CAL, 0,
AH_WAIT_TIMEOUT)) {
ath_dbg(common, CALIBRATE, "Tx IQ Cal is not completed\n");
@@ -1056,7 +1056,7 @@ static void ar9003_hw_tx_iq_cal_post_proc(struct ath_hw *ah,
{
struct ath_common *common = ath9k_hw_common(ah);
const u32 txiqcal_status[AR9300_MAX_CHAINS] = {
AR_PHY_TX_IQCAL_STATUS_B0,
AR_PHY_TX_IQCAL_STATUS_B0(ah),
AR_PHY_TX_IQCAL_STATUS_B1,
AR_PHY_TX_IQCAL_STATUS_B2,
};
@@ -1076,7 +1076,7 @@ static void ar9003_hw_tx_iq_cal_post_proc(struct ath_hw *ah,
continue;
nmeasurement = REG_READ_FIELD(ah,
AR_PHY_TX_IQCAL_STATUS_B0,
AR_PHY_TX_IQCAL_STATUS_B0(ah),
AR_PHY_CALIBRATED_GAINS_0);
if (nmeasurement > MAX_MEASUREMENT)
nmeasurement = MAX_MEASUREMENT;
@@ -1096,7 +1096,7 @@ static void ar9003_hw_tx_iq_cal_post_proc(struct ath_hw *ah,
u32 idx = 2 * j, offset = 4 * (3 * im + j);
REG_RMW_FIELD(ah,
AR_PHY_CHAN_INFO_MEMORY,
AR_PHY_CHAN_INFO_MEMORY(ah),
AR_PHY_CHAN_INFO_TAB_S2_READ,
0);
@@ -1106,7 +1106,7 @@ static void ar9003_hw_tx_iq_cal_post_proc(struct ath_hw *ah,
offset);
REG_RMW_FIELD(ah,
AR_PHY_CHAN_INFO_MEMORY,
AR_PHY_CHAN_INFO_MEMORY(ah),
AR_PHY_CHAN_INFO_TAB_S2_READ,
1);
@@ -1161,7 +1161,7 @@ static void ar9003_hw_tx_iq_cal_reload(struct ath_hw *ah)
memset(tx_corr_coeff, 0, sizeof(tx_corr_coeff));
for (i = 0; i < MAX_MEASUREMENT / 2; i++) {
tx_corr_coeff[i * 2][0] = tx_corr_coeff[(i * 2) + 1][0] =
AR_PHY_TX_IQCAL_CORR_COEFF_B0(i);
AR_PHY_TX_IQCAL_CORR_COEFF_B0(ah, i);
if (!AR_SREV_9485(ah)) {
tx_corr_coeff[i * 2][1] =
tx_corr_coeff[(i * 2) + 1][1] =
@@ -1346,7 +1346,7 @@ static void ar9003_hw_cl_cal_post_proc(struct ath_hw *ah, bool is_reusable)
if (!caldata || !(ah->enabled_cals & TX_CL_CAL))
return;
txclcal_done = !!(REG_READ(ah, AR_PHY_AGC_CONTROL) &
txclcal_done = !!(REG_READ(ah, AR_PHY_AGC_CONTROL(ah)) &
AR_PHY_AGC_CONTROL_CLC_SUCCESS);
if (test_bit(TXCLCAL_DONE, &caldata->cal_flags)) {
@@ -1424,12 +1424,12 @@ static bool ar9003_hw_init_cal_pcoem(struct ath_hw *ah,
if (rtt) {
if (!run_rtt_cal) {
agc_ctrl = REG_READ(ah, AR_PHY_AGC_CONTROL);
agc_ctrl = REG_READ(ah, AR_PHY_AGC_CONTROL(ah));
agc_supp_cals &= agc_ctrl;
agc_ctrl &= ~(AR_PHY_AGC_CONTROL_OFFSET_CAL |
AR_PHY_AGC_CONTROL_FLTR_CAL |
AR_PHY_AGC_CONTROL_PKDET_CAL);
REG_WRITE(ah, AR_PHY_AGC_CONTROL, agc_ctrl);
REG_WRITE(ah, AR_PHY_AGC_CONTROL(ah), agc_ctrl);
} else {
if (ah->ah_flags & AH_FASTCC)
run_agc_cal = true;
@@ -1452,7 +1452,7 @@ static bool ar9003_hw_init_cal_pcoem(struct ath_hw *ah,
goto skip_tx_iqcal;
/* Do Tx IQ Calibration */
REG_RMW_FIELD(ah, AR_PHY_TX_IQCAL_CONTROL_1,
REG_RMW_FIELD(ah, AR_PHY_TX_IQCAL_CONTROL_1(ah),
AR_PHY_TX_IQCAL_CONTROL_1_IQCORR_I_Q_COFF_DELPT,
DELPT);
@@ -1462,10 +1462,10 @@ static bool ar9003_hw_init_cal_pcoem(struct ath_hw *ah,
*/
if (ah->enabled_cals & TX_IQ_ON_AGC_CAL) {
if (caldata && !test_bit(TXIQCAL_DONE, &caldata->cal_flags))
REG_SET_BIT(ah, AR_PHY_TX_IQCAL_CONTROL_0,
REG_SET_BIT(ah, AR_PHY_TX_IQCAL_CONTROL_0(ah),
AR_PHY_TX_IQCAL_CONTROL_0_ENABLE_TXIQ_CAL);
else
REG_CLR_BIT(ah, AR_PHY_TX_IQCAL_CONTROL_0,
REG_CLR_BIT(ah, AR_PHY_TX_IQCAL_CONTROL_0(ah),
AR_PHY_TX_IQCAL_CONTROL_0_ENABLE_TXIQ_CAL);
txiqcal_done = run_agc_cal = true;
}
@@ -1485,12 +1485,12 @@ skip_tx_iqcal:
if (run_agc_cal || !(ah->ah_flags & AH_FASTCC)) {
/* Calibrate the AGC */
REG_WRITE(ah, AR_PHY_AGC_CONTROL,
REG_READ(ah, AR_PHY_AGC_CONTROL) |
REG_WRITE(ah, AR_PHY_AGC_CONTROL(ah),
REG_READ(ah, AR_PHY_AGC_CONTROL(ah)) |
AR_PHY_AGC_CONTROL_CAL);
/* Poll for offset calibration complete */
status = ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL,
status = ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_CAL,
0, AH_WAIT_TIMEOUT);
@@ -1507,7 +1507,7 @@ skip_tx_iqcal:
if (rtt && !run_rtt_cal) {
agc_ctrl |= agc_supp_cals;
REG_WRITE(ah, AR_PHY_AGC_CONTROL, agc_ctrl);
REG_WRITE(ah, AR_PHY_AGC_CONTROL(ah), agc_ctrl);
}
if (!status) {
@@ -1558,11 +1558,11 @@ static bool do_ar9003_agc_cal(struct ath_hw *ah)
struct ath_common *common = ath9k_hw_common(ah);
bool status;
REG_WRITE(ah, AR_PHY_AGC_CONTROL,
REG_READ(ah, AR_PHY_AGC_CONTROL) |
REG_WRITE(ah, AR_PHY_AGC_CONTROL(ah),
REG_READ(ah, AR_PHY_AGC_CONTROL(ah)) |
AR_PHY_AGC_CONTROL_CAL);
status = ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL,
status = ath9k_hw_wait(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_CAL,
0, AH_WAIT_TIMEOUT);
if (!status) {
@@ -1596,7 +1596,7 @@ static bool ar9003_hw_init_cal_soc(struct ath_hw *ah,
goto skip_tx_iqcal;
/* Do Tx IQ Calibration */
REG_RMW_FIELD(ah, AR_PHY_TX_IQCAL_CONTROL_1,
REG_RMW_FIELD(ah, AR_PHY_TX_IQCAL_CONTROL_1(ah),
AR_PHY_TX_IQCAL_CONTROL_1_IQCORR_I_Q_COFF_DELPT,
DELPT);
@@ -1605,7 +1605,7 @@ static bool ar9003_hw_init_cal_soc(struct ath_hw *ah,
* AGC calibration. Specifically, AR9550 in SoC chips.
*/
if (ah->enabled_cals & TX_IQ_ON_AGC_CAL) {
if (REG_READ_FIELD(ah, AR_PHY_TX_IQCAL_CONTROL_0,
if (REG_READ_FIELD(ah, AR_PHY_TX_IQCAL_CONTROL_0(ah),
AR_PHY_TX_IQCAL_CONTROL_0_ENABLE_TXIQ_CAL)) {
txiqcal_done = true;
} else {


@@ -3084,13 +3084,13 @@ error:
static bool ar9300_otp_read_word(struct ath_hw *ah, int addr, u32 *data)
{
REG_READ(ah, AR9300_OTP_BASE + (4 * addr));
REG_READ(ah, AR9300_OTP_BASE(ah) + (4 * addr));
if (!ath9k_hw_wait(ah, AR9300_OTP_STATUS, AR9300_OTP_STATUS_TYPE,
if (!ath9k_hw_wait(ah, AR9300_OTP_STATUS(ah), AR9300_OTP_STATUS_TYPE,
AR9300_OTP_STATUS_VALID, 1000))
return false;
*data = REG_READ(ah, AR9300_OTP_READ_DATA);
*data = REG_READ(ah, AR9300_OTP_READ_DATA(ah));
return true;
}
@@ -3607,15 +3607,15 @@ static void ar9003_hw_xpa_bias_level_apply(struct ath_hw *ah, bool is2ghz)
if (AR_SREV_9485(ah) || AR_SREV_9330(ah) || AR_SREV_9340(ah) ||
AR_SREV_9531(ah) || AR_SREV_9561(ah))
REG_RMW_FIELD(ah, AR_CH0_TOP2, AR_CH0_TOP2_XPABIASLVL, bias);
REG_RMW_FIELD(ah, AR_CH0_TOP2(ah), AR_CH0_TOP2_XPABIASLVL, bias);
else if (AR_SREV_9462(ah) || AR_SREV_9550(ah) || AR_SREV_9565(ah))
REG_RMW_FIELD(ah, AR_CH0_TOP, AR_CH0_TOP_XPABIASLVL, bias);
REG_RMW_FIELD(ah, AR_CH0_TOP(ah), AR_CH0_TOP_XPABIASLVL, bias);
else {
REG_RMW_FIELD(ah, AR_CH0_TOP, AR_CH0_TOP_XPABIASLVL, bias);
REG_RMW_FIELD(ah, AR_CH0_THERM,
REG_RMW_FIELD(ah, AR_CH0_TOP(ah), AR_CH0_TOP_XPABIASLVL, bias);
REG_RMW_FIELD(ah, AR_CH0_THERM(ah),
AR_CH0_THERM_XPABIASLVL_MSB,
bias >> 2);
REG_RMW_FIELD(ah, AR_CH0_THERM,
REG_RMW_FIELD(ah, AR_CH0_THERM(ah),
AR_CH0_THERM_XPASHORT2GND, 1);
}
}
@@ -3960,9 +3960,9 @@ void ar9003_hw_internal_regulator_apply(struct ath_hw *ah)
if (AR_SREV_9330(ah) || AR_SREV_9485(ah)) {
int reg_pmu_set;
reg_pmu_set = REG_READ(ah, AR_PHY_PMU2) & ~AR_PHY_PMU2_PGM;
REG_WRITE(ah, AR_PHY_PMU2, reg_pmu_set);
if (!is_pmu_set(ah, AR_PHY_PMU2, reg_pmu_set))
reg_pmu_set = REG_READ(ah, AR_PHY_PMU2(ah)) & ~AR_PHY_PMU2_PGM;
REG_WRITE(ah, AR_PHY_PMU2(ah), reg_pmu_set);
if (!is_pmu_set(ah, AR_PHY_PMU2(ah), reg_pmu_set))
return;
if (AR_SREV_9330(ah)) {
@@ -3984,28 +3984,28 @@ void ar9003_hw_internal_regulator_apply(struct ath_hw *ah)
(3 << 24) | (1 << 28);
}
REG_WRITE(ah, AR_PHY_PMU1, reg_pmu_set);
if (!is_pmu_set(ah, AR_PHY_PMU1, reg_pmu_set))
REG_WRITE(ah, AR_PHY_PMU1(ah), reg_pmu_set);
if (!is_pmu_set(ah, AR_PHY_PMU1(ah), reg_pmu_set))
return;
reg_pmu_set = (REG_READ(ah, AR_PHY_PMU2) & ~0xFFC00000)
reg_pmu_set = (REG_READ(ah, AR_PHY_PMU2(ah)) & ~0xFFC00000)
| (4 << 26);
REG_WRITE(ah, AR_PHY_PMU2, reg_pmu_set);
if (!is_pmu_set(ah, AR_PHY_PMU2, reg_pmu_set))
REG_WRITE(ah, AR_PHY_PMU2(ah), reg_pmu_set);
if (!is_pmu_set(ah, AR_PHY_PMU2(ah), reg_pmu_set))
return;
reg_pmu_set = (REG_READ(ah, AR_PHY_PMU2) & ~0x00200000)
reg_pmu_set = (REG_READ(ah, AR_PHY_PMU2(ah)) & ~0x00200000)
| (1 << 21);
REG_WRITE(ah, AR_PHY_PMU2, reg_pmu_set);
if (!is_pmu_set(ah, AR_PHY_PMU2, reg_pmu_set))
REG_WRITE(ah, AR_PHY_PMU2(ah), reg_pmu_set);
if (!is_pmu_set(ah, AR_PHY_PMU2(ah), reg_pmu_set))
return;
} else if (AR_SREV_9462(ah) || AR_SREV_9565(ah) ||
AR_SREV_9561(ah)) {
reg_val = le32_to_cpu(pBase->swreg);
REG_WRITE(ah, AR_PHY_PMU1, reg_val);
REG_WRITE(ah, AR_PHY_PMU1(ah), reg_val);
if (AR_SREV_9561(ah))
REG_WRITE(ah, AR_PHY_PMU2, 0x10200000);
REG_WRITE(ah, AR_PHY_PMU2(ah), 0x10200000);
} else {
/* Internal regulator is ON. Write swreg register. */
reg_val = le32_to_cpu(pBase->swreg);
@@ -4021,25 +4021,25 @@ void ar9003_hw_internal_regulator_apply(struct ath_hw *ah)
}
} else {
if (AR_SREV_9330(ah) || AR_SREV_9485(ah)) {
REG_RMW_FIELD(ah, AR_PHY_PMU2, AR_PHY_PMU2_PGM, 0);
while (REG_READ_FIELD(ah, AR_PHY_PMU2,
REG_RMW_FIELD(ah, AR_PHY_PMU2(ah), AR_PHY_PMU2_PGM, 0);
while (REG_READ_FIELD(ah, AR_PHY_PMU2(ah),
AR_PHY_PMU2_PGM))
udelay(10);
REG_RMW_FIELD(ah, AR_PHY_PMU1, AR_PHY_PMU1_PWD, 0x1);
while (!REG_READ_FIELD(ah, AR_PHY_PMU1,
REG_RMW_FIELD(ah, AR_PHY_PMU1(ah), AR_PHY_PMU1_PWD, 0x1);
while (!REG_READ_FIELD(ah, AR_PHY_PMU1(ah),
AR_PHY_PMU1_PWD))
udelay(10);
REG_RMW_FIELD(ah, AR_PHY_PMU2, AR_PHY_PMU2_PGM, 0x1);
while (!REG_READ_FIELD(ah, AR_PHY_PMU2,
REG_RMW_FIELD(ah, AR_PHY_PMU2(ah), AR_PHY_PMU2_PGM, 0x1);
while (!REG_READ_FIELD(ah, AR_PHY_PMU2(ah),
AR_PHY_PMU2_PGM))
udelay(10);
} else if (AR_SREV_9462(ah) || AR_SREV_9565(ah))
REG_RMW_FIELD(ah, AR_PHY_PMU1, AR_PHY_PMU1_PWD, 0x1);
REG_RMW_FIELD(ah, AR_PHY_PMU1(ah), AR_PHY_PMU1_PWD, 0x1);
else {
reg_val = REG_READ(ah, AR_RTC_SLEEP_CLK) |
reg_val = REG_READ(ah, AR_RTC_SLEEP_CLK(ah)) |
AR_RTC_FORCE_SWREG_PRD;
REG_WRITE(ah, AR_RTC_SLEEP_CLK, reg_val);
REG_WRITE(ah, AR_RTC_SLEEP_CLK(ah), reg_val);
}
}
@@ -4055,9 +4055,9 @@ static void ar9003_hw_apply_tuning_caps(struct ath_hw *ah)
if (eep->baseEepHeader.featureEnable & 0x40) {
tuning_caps_param &= 0x7f;
REG_RMW_FIELD(ah, AR_CH0_XTAL, AR_CH0_XTAL_CAPINDAC,
REG_RMW_FIELD(ah, AR_CH0_XTAL(ah), AR_CH0_XTAL_CAPINDAC,
tuning_caps_param);
REG_RMW_FIELD(ah, AR_CH0_XTAL, AR_CH0_XTAL_CAPOUTDAC,
REG_RMW_FIELD(ah, AR_CH0_XTAL(ah), AR_CH0_XTAL_CAPOUTDAC,
tuning_caps_param);
}
}


@@ -82,16 +82,16 @@
/* AR5416_EEPMISC_BIG_ENDIAN not set indicates little endian */
#define AR9300_EEPMISC_LITTLE_ENDIAN 0
#define AR9300_OTP_BASE \
((AR_SREV_9340(ah) || AR_SREV_9550(ah)) ? 0x30000 : 0x14000)
#define AR9300_OTP_STATUS \
((AR_SREV_9340(ah) || AR_SREV_9550(ah)) ? 0x31018 : 0x15f18)
#define AR9300_OTP_BASE(_ah) \
((AR_SREV_9340(_ah) || AR_SREV_9550(_ah)) ? 0x30000 : 0x14000)
#define AR9300_OTP_STATUS(_ah) \
((AR_SREV_9340(_ah) || AR_SREV_9550(_ah)) ? 0x31018 : 0x15f18)
#define AR9300_OTP_STATUS_TYPE 0x7
#define AR9300_OTP_STATUS_VALID 0x4
#define AR9300_OTP_STATUS_ACCESS_BUSY 0x2
#define AR9300_OTP_STATUS_SM_BUSY 0x1
#define AR9300_OTP_READ_DATA \
((AR_SREV_9340(ah) || AR_SREV_9550(ah)) ? 0x3101c : 0x15f1c)
#define AR9300_OTP_READ_DATA(_ah) \
((AR_SREV_9340(_ah) || AR_SREV_9550(_ah)) ? 0x3101c : 0x15f1c)
enum targetPowerHTRates {
HT_TARGET_RATE_0_8_16,


@@ -1032,8 +1032,8 @@ static void ar9003_hw_configpcipowersave(struct ath_hw *ah,
/* Nothing to do on restore for 11N */
if (!power_off /* !restore */) {
/* set bit 19 to allow forcing of pcie core into L1 state */
REG_SET_BIT(ah, AR_PCIE_PM_CTRL, AR_PCIE_PM_CTRL_ENA);
REG_WRITE(ah, AR_WA, ah->WARegVal);
REG_SET_BIT(ah, AR_PCIE_PM_CTRL(ah), AR_PCIE_PM_CTRL_ENA);
REG_WRITE(ah, AR_WA(ah), ah->WARegVal);
}
/*


@@ -193,16 +193,16 @@ static bool ar9003_hw_get_isr(struct ath_hw *ah, enum ath9k_int *masked,
if (ath9k_hw_mci_is_enabled(ah))
async_mask |= AR_INTR_ASYNC_MASK_MCI;
async_cause = REG_READ(ah, AR_INTR_ASYNC_CAUSE);
async_cause = REG_READ(ah, AR_INTR_ASYNC_CAUSE(ah));
if (async_cause & async_mask) {
if ((REG_READ(ah, AR_RTC_STATUS) & AR_RTC_STATUS_M)
if ((REG_READ(ah, AR_RTC_STATUS(ah)) & AR_RTC_STATUS_M(ah))
== AR_RTC_STATUS_ON)
isr = REG_READ(ah, AR_ISR);
}
sync_cause = REG_READ(ah, AR_INTR_SYNC_CAUSE) & AR_INTR_SYNC_DEFAULT;
sync_cause = REG_READ(ah, AR_INTR_SYNC_CAUSE(ah)) & AR_INTR_SYNC_DEFAULT;
*masked = 0;
@@ -280,7 +280,7 @@ static bool ar9003_hw_get_isr(struct ath_hw *ah, enum ath9k_int *masked,
u32 s5;
if (pCap->hw_caps & ATH9K_HW_CAP_RAC_SUPPORTED)
s5 = REG_READ(ah, AR_ISR_S5_S);
s5 = REG_READ(ah, AR_ISR_S5_S(ah));
else
s5 = REG_READ(ah, AR_ISR_S5);
@@ -345,8 +345,8 @@ static bool ar9003_hw_get_isr(struct ath_hw *ah, enum ath9k_int *masked,
ath_dbg(common, INTERRUPT,
"AR_INTR_SYNC_LOCAL_TIMEOUT\n");
REG_WRITE(ah, AR_INTR_SYNC_CAUSE_CLR, sync_cause);
(void) REG_READ(ah, AR_INTR_SYNC_CAUSE_CLR);
REG_WRITE(ah, AR_INTR_SYNC_CAUSE_CLR(ah), sync_cause);
(void) REG_READ(ah, AR_INTR_SYNC_CAUSE_CLR(ah));
}
return true;


@@ -458,7 +458,7 @@ static void ar9003_mci_observation_set_up(struct ath_hw *ah)
} else
return;
REG_SET_BIT(ah, AR_GPIO_INPUT_EN_VAL, AR_GPIO_JTAG_DISABLE);
REG_SET_BIT(ah, AR_GPIO_INPUT_EN_VAL(ah), AR_GPIO_JTAG_DISABLE);
REG_RMW_FIELD(ah, AR_PHY_GLB_CONTROL, AR_GLB_DS_JTAG_DISABLE, 1);
REG_RMW_FIELD(ah, AR_PHY_GLB_CONTROL, AR_GLB_WLAN_UART_INTF_EN, 0);
@@ -466,12 +466,12 @@ static void ar9003_mci_observation_set_up(struct ath_hw *ah)
REG_RMW_FIELD(ah, AR_BTCOEX_CTRL2, AR_BTCOEX_CTRL2_GPIO_OBS_SEL, 0);
REG_RMW_FIELD(ah, AR_BTCOEX_CTRL2, AR_BTCOEX_CTRL2_MAC_BB_OBS_SEL, 1);
REG_WRITE(ah, AR_OBS, 0x4b);
REG_WRITE(ah, AR_OBS(ah), 0x4b);
REG_RMW_FIELD(ah, AR_DIAG_SW, AR_DIAG_OBS_PT_SEL1, 0x03);
REG_RMW_FIELD(ah, AR_DIAG_SW, AR_DIAG_OBS_PT_SEL2, 0x01);
REG_RMW_FIELD(ah, AR_MACMISC, AR_MACMISC_MISC_OBS_BUS_LSB, 0x02);
REG_RMW_FIELD(ah, AR_MACMISC, AR_MACMISC_MISC_OBS_BUS_MSB, 0x03);
REG_RMW_FIELD(ah, AR_PHY_TEST_CTL_STATUS,
REG_RMW_FIELD(ah, AR_PHY_TEST_CTL_STATUS(ah),
AR_PHY_TEST_CTL_DEBUGPORT_SEL, 0x07);
}


@@ -201,19 +201,19 @@ static int ar9003_paprd_setup_single_table(struct ath_hw *ah)
ar9003_paprd_enable(ah, false);
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1,
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1(ah),
AR_PHY_PAPRD_TRAINER_CNTL1_CF_PAPRD_LB_SKIP, 0x30);
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1,
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1(ah),
AR_PHY_PAPRD_TRAINER_CNTL1_CF_PAPRD_LB_ENABLE, 1);
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1,
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1(ah),
AR_PHY_PAPRD_TRAINER_CNTL1_CF_PAPRD_TX_GAIN_FORCE, 1);
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1,
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1(ah),
AR_PHY_PAPRD_TRAINER_CNTL1_CF_PAPRD_RX_BB_GAIN_FORCE, 0);
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1,
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1(ah),
AR_PHY_PAPRD_TRAINER_CNTL1_CF_PAPRD_IQCORR_ENABLE, 0);
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1,
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1(ah),
AR_PHY_PAPRD_TRAINER_CNTL1_CF_PAPRD_AGC2_SETTLING, 28);
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1,
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL1(ah),
AR_PHY_PAPRD_TRAINER_CNTL1_CF_CF_PAPRD_TRAIN_ENABLE, 1);
if (AR_SREV_9485(ah)) {
@@ -229,15 +229,15 @@ static int ar9003_paprd_setup_single_table(struct ath_hw *ah)
}
}
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL2,
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL2(ah),
AR_PHY_PAPRD_TRAINER_CNTL2_CF_PAPRD_INIT_RX_BB_GAIN, val);
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3,
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3(ah),
AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_FINE_CORR_LEN, 4);
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3,
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3(ah),
AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_COARSE_CORR_LEN, 4);
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3,
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3(ah),
AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_NUM_CORR_STAGES, 7);
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3,
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3(ah),
AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_MIN_LOOPBACK_DEL, 1);
if (AR_SREV_9485(ah) ||
@@ -246,10 +246,10 @@ static int ar9003_paprd_setup_single_table(struct ath_hw *ah)
AR_SREV_9550(ah) ||
AR_SREV_9330(ah) ||
AR_SREV_9340(ah))
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3,
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3(ah),
AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_QUICK_DROP, -3);
else
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3,
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3(ah),
AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_QUICK_DROP, -6);
val = -10;
@@ -257,16 +257,16 @@ static int ar9003_paprd_setup_single_table(struct ath_hw *ah)
if (IS_CHAN_2GHZ(ah->curchan) && !AR_SREV_9462(ah) && !AR_SREV_9565(ah))
val = -15;
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3,
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3(ah),
AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_ADC_DESIRED_SIZE,
val);
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3,
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3(ah),
AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_BBTXMIX_DISABLE, 1);
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL4,
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL4(ah),
AR_PHY_PAPRD_TRAINER_CNTL4_CF_PAPRD_SAFETY_DELTA, 0);
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL4,
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL4(ah),
AR_PHY_PAPRD_TRAINER_CNTL4_CF_PAPRD_MIN_CORR, 400);
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL4,
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL4(ah),
AR_PHY_PAPRD_TRAINER_CNTL4_CF_PAPRD_NUM_TRAIN_SAMPLES,
100);
REG_RMW_FIELD(ah, AR_PHY_PAPRD_PRE_POST_SCALE_0_B0,
@@ -313,7 +313,7 @@ static unsigned int ar9003_get_desired_gain(struct ath_hw *ah, int chain,
int desired_scale, desired_gain = 0;
u32 reg_olpc = 0, reg_cl_gain = 0;
REG_CLR_BIT(ah, AR_PHY_PAPRD_TRAINER_STAT1,
REG_CLR_BIT(ah, AR_PHY_PAPRD_TRAINER_STAT1(ah),
AR_PHY_PAPRD_TRAINER_STAT1_PAPRD_TRAIN_DONE);
desired_scale = REG_READ_FIELD(ah, AR_PHY_TPC_12,
AR_PHY_TPC_12_DESIRED_SCALE_HT40_5);
@@ -812,7 +812,7 @@ void ar9003_paprd_setup_gain_table(struct ath_hw *ah, int chain)
ar9003_tx_force_gain(ah, gain_index);
REG_CLR_BIT(ah, AR_PHY_PAPRD_TRAINER_STAT1,
REG_CLR_BIT(ah, AR_PHY_PAPRD_TRAINER_STAT1(ah),
AR_PHY_PAPRD_TRAINER_STAT1_PAPRD_TRAIN_DONE);
}
EXPORT_SYMBOL(ar9003_paprd_setup_gain_table);
@@ -833,7 +833,7 @@ static bool ar9003_paprd_retrain_pa_in(struct ath_hw *ah,
capdiv2g = REG_READ_FIELD(ah, AR_PHY_65NM_CH0_TXRF3,
AR_PHY_65NM_CH0_TXRF3_CAPDIV2G);
quick_drop = REG_READ_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3,
quick_drop = REG_READ_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3(ah),
AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_QUICK_DROP);
if (quick_drop)
@@ -906,7 +906,7 @@ static bool ar9003_paprd_retrain_pa_in(struct ath_hw *ah,
REG_RMW_FIELD(ah, AR_PHY_65NM_CH0_TXRF3,
AR_PHY_65NM_CH0_TXRF3_CAPDIV2G, capdiv2g);
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3,
REG_RMW_FIELD(ah, AR_PHY_PAPRD_TRAINER_CNTL3(ah),
AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_QUICK_DROP,
quick_drop);
@@ -932,14 +932,14 @@ int ar9003_paprd_create_curve(struct ath_hw *ah,
data_L = &buf[0];
data_U = &buf[48];
REG_CLR_BIT(ah, AR_PHY_CHAN_INFO_MEMORY,
REG_CLR_BIT(ah, AR_PHY_CHAN_INFO_MEMORY(ah),
AR_PHY_CHAN_INFO_MEMORY_CHANINFOMEM_S2_READ);
reg = AR_PHY_CHAN_INFO_TAB_0;
for (i = 0; i < 48; i++)
data_L[i] = REG_READ(ah, reg + (i << 2));
REG_SET_BIT(ah, AR_PHY_CHAN_INFO_MEMORY,
REG_SET_BIT(ah, AR_PHY_CHAN_INFO_MEMORY(ah),
AR_PHY_CHAN_INFO_MEMORY_CHANINFOMEM_S2_READ);
for (i = 0; i < 48; i++)
@@ -951,7 +951,7 @@ int ar9003_paprd_create_curve(struct ath_hw *ah,
if (ar9003_paprd_retrain_pa_in(ah, caldata, chain))
status = -EINPROGRESS;
REG_CLR_BIT(ah, AR_PHY_PAPRD_TRAINER_STAT1,
REG_CLR_BIT(ah, AR_PHY_PAPRD_TRAINER_STAT1(ah),
AR_PHY_PAPRD_TRAINER_STAT1_PAPRD_TRAIN_DONE);
kfree(buf);
@@ -977,14 +977,14 @@ bool ar9003_paprd_is_done(struct ath_hw *ah)
{
int paprd_done, agc2_pwr;
paprd_done = REG_READ_FIELD(ah, AR_PHY_PAPRD_TRAINER_STAT1,
paprd_done = REG_READ_FIELD(ah, AR_PHY_PAPRD_TRAINER_STAT1(ah),
AR_PHY_PAPRD_TRAINER_STAT1_PAPRD_TRAIN_DONE);
if (AR_SREV_9485(ah))
goto exit;
if (paprd_done == 0x1) {
agc2_pwr = REG_READ_FIELD(ah, AR_PHY_PAPRD_TRAINER_STAT1,
agc2_pwr = REG_READ_FIELD(ah, AR_PHY_PAPRD_TRAINER_STAT1(ah),
AR_PHY_PAPRD_TRAINER_STAT1_PAPRD_AGC2_PWR);
ath_dbg(ath9k_hw_common(ah), CALIBRATE,


@@ -296,7 +296,7 @@ static void ar9003_hw_spur_mitigate_mrc_cck(struct ath_hw *ah,
cck_spur_freq = cck_spur_freq & 0xfffff;
REG_RMW_FIELD(ah, AR_PHY_AGC_CONTROL,
REG_RMW_FIELD(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_YCOK_MAX, 0x7);
REG_RMW_FIELD(ah, AR_PHY_CCK_SPUR_MIT,
AR_PHY_CCK_SPUR_MIT_SPUR_RSSI_THR, 0x7f);
@@ -314,7 +314,7 @@ static void ar9003_hw_spur_mitigate_mrc_cck(struct ath_hw *ah,
}
}
REG_RMW_FIELD(ah, AR_PHY_AGC_CONTROL,
REG_RMW_FIELD(ah, AR_PHY_AGC_CONTROL(ah),
AR_PHY_AGC_CONTROL_YCOK_MAX, 0x5);
REG_RMW_FIELD(ah, AR_PHY_CCK_SPUR_MIT,
AR_PHY_CCK_SPUR_MIT_USE_CCK_SPUR_MIT, 0x0);
@@ -352,7 +352,7 @@ static void ar9003_hw_spur_ofdm_clear(struct ath_hw *ah)
AR_PHY_TIMING4_ENABLE_CHAN_MASK, 0);
REG_RMW_FIELD(ah, AR_PHY_PILOT_SPUR_MASK,
AR_PHY_PILOT_SPUR_MASK_CF_PILOT_MASK_IDX_A, 0);
REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_A,
REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_A(ah),
AR_PHY_SPUR_MASK_A_CF_PUNC_MASK_IDX_A, 0);
REG_RMW_FIELD(ah, AR_PHY_CHAN_SPUR_MASK,
AR_PHY_CHAN_SPUR_MASK_CF_CHAN_MASK_IDX_A, 0);
@@ -360,7 +360,7 @@ static void ar9003_hw_spur_ofdm_clear(struct ath_hw *ah)
AR_PHY_PILOT_SPUR_MASK_CF_PILOT_MASK_A, 0);
REG_RMW_FIELD(ah, AR_PHY_CHAN_SPUR_MASK,
AR_PHY_CHAN_SPUR_MASK_CF_CHAN_MASK_A, 0);
REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_A,
REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_A(ah),
AR_PHY_SPUR_MASK_A_CF_PUNC_MASK_A, 0);
REG_RMW_FIELD(ah, AR_PHY_SPUR_REG,
AR_PHY_SPUR_REG_MASK_RATE_CNTL, 0);
@@ -419,7 +419,7 @@ static void ar9003_hw_spur_ofdm(struct ath_hw *ah,
AR_PHY_TIMING4_ENABLE_CHAN_MASK, 0x1);
REG_RMW_FIELD(ah, AR_PHY_PILOT_SPUR_MASK,
AR_PHY_PILOT_SPUR_MASK_CF_PILOT_MASK_IDX_A, mask_index);
REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_A,
REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_A(ah),
AR_PHY_SPUR_MASK_A_CF_PUNC_MASK_IDX_A, mask_index);
REG_RMW_FIELD(ah, AR_PHY_CHAN_SPUR_MASK,
AR_PHY_CHAN_SPUR_MASK_CF_CHAN_MASK_IDX_A, mask_index);
@@ -427,7 +427,7 @@ static void ar9003_hw_spur_ofdm(struct ath_hw *ah,
AR_PHY_PILOT_SPUR_MASK_CF_PILOT_MASK_A, 0xc);
REG_RMW_FIELD(ah, AR_PHY_CHAN_SPUR_MASK,
AR_PHY_CHAN_SPUR_MASK_CF_CHAN_MASK_A, 0xc);
REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_A,
REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_A(ah),
AR_PHY_SPUR_MASK_A_CF_PUNC_MASK_A, 0xa0);
REG_RMW_FIELD(ah, AR_PHY_SPUR_REG,
AR_PHY_SPUR_REG_MASK_RATE_CNTL, 0xff);
@@ -449,7 +449,7 @@ static void ar9003_hw_spur_ofdm_9565(struct ath_hw *ah,
mask_index);
/* A == B */
REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_B,
REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_B(ah),
AR_PHY_SPUR_MASK_A_CF_PUNC_MASK_IDX_A,
mask_index);
@@ -462,7 +462,7 @@ static void ar9003_hw_spur_ofdm_9565(struct ath_hw *ah,
AR_PHY_CHAN_SPUR_MASK_CF_CHAN_MASK_B, 0xe);
/* A == B */
REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_B,
REG_RMW_FIELD(ah, AR_PHY_SPUR_MASK_B(ah),
AR_PHY_SPUR_MASK_A_CF_PUNC_MASK_A, 0xa0);
}
@@ -710,7 +710,7 @@ static void ar9003_hw_override_ini(struct ath_hw *ah)
REG_WRITE(ah, AR_GLB_SWREG_DISCONT_MODE,
AR_GLB_SWREG_DISCONT_EN_BT_WLAN);
if (REG_READ_FIELD(ah, AR_PHY_TX_IQCAL_CONTROL_0,
if (REG_READ_FIELD(ah, AR_PHY_TX_IQCAL_CONTROL_0(ah),
AR_PHY_TX_IQCAL_CONTROL_0_ENABLE_TXIQ_CAL))
ah->enabled_cals |= TX_IQ_CAL;
else
@@ -726,11 +726,11 @@ static void ar9003_hw_override_ini(struct ath_hw *ah)
if (AR_SREV_9340(ah) || AR_SREV_9531(ah) || AR_SREV_9550(ah) ||
AR_SREV_9561(ah)) {
if (ah->is_clk_25mhz) {
REG_WRITE(ah, AR_RTC_DERIVED_CLK, 0x17c << 1);
REG_WRITE(ah, AR_RTC_DERIVED_CLK(ah), 0x17c << 1);
REG_WRITE(ah, AR_SLP32_MODE, 0x0010f3d7);
REG_WRITE(ah, AR_SLP32_INC, 0x0001e7ae);
} else {
REG_WRITE(ah, AR_RTC_DERIVED_CLK, 0x261 << 1);
REG_WRITE(ah, AR_RTC_DERIVED_CLK(ah), 0x261 << 1);
REG_WRITE(ah, AR_SLP32_MODE, 0x0010f400);
REG_WRITE(ah, AR_SLP32_INC, 0x0001e800);
}
@@ -1795,7 +1795,7 @@ static void ar9003_hw_spectral_scan_wait(struct ath_hw *ah)
static void ar9003_hw_tx99_start(struct ath_hw *ah, u32 qnum)
{
REG_SET_BIT(ah, AR_PHY_TEST, PHY_AGC_CLR);
REG_SET_BIT(ah, AR_PHY_TEST(ah), PHY_AGC_CLR);
REG_CLR_BIT(ah, AR_DIAG_SW, AR_DIAG_RX_DIS);
REG_WRITE(ah, AR_CR, AR_CR_RXD);
REG_WRITE(ah, AR_DLCL_IFS(qnum), 0);
@@ -1808,7 +1808,7 @@ static void ar9003_hw_tx99_stop(struct ath_hw *ah)
static void ar9003_hw_tx99_stop(struct ath_hw *ah)
{
REG_CLR_BIT(ah, AR_PHY_TEST, PHY_AGC_CLR);
REG_CLR_BIT(ah, AR_PHY_TEST(ah), PHY_AGC_CLR);
REG_SET_BIT(ah, AR_DIAG_SW, AR_DIAG_RX_DIS);
}


@@ -454,8 +454,8 @@
#define AR_PHY_GEN_CTRL (AR_SM_BASE + 0x4)
#define AR_PHY_MODE (AR_SM_BASE + 0x8)
#define AR_PHY_ACTIVE (AR_SM_BASE + 0xc)
#define AR_PHY_SPUR_MASK_A (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x18 : 0x20))
#define AR_PHY_SPUR_MASK_B (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x1c : 0x24))
#define AR_PHY_SPUR_MASK_A(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x18 : 0x20))
#define AR_PHY_SPUR_MASK_B(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x1c : 0x24))
#define AR_PHY_SPECTRAL_SCAN (AR_SM_BASE + 0x28)
#define AR_PHY_RADAR_BW_FILTER (AR_SM_BASE + 0x2c)
#define AR_PHY_SEARCH_START_DELAY (AR_SM_BASE + 0x30)
@@ -498,7 +498,7 @@
#define AR_PHY_SPUR_MASK_A_CF_PUNC_MASK_A 0x3FF
#define AR_PHY_SPUR_MASK_A_CF_PUNC_MASK_A_S 0
#define AR_PHY_TEST (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x15c : 0x160))
#define AR_PHY_TEST(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x15c : 0x160))
#define AR_PHY_TEST_BBB_OBS_SEL 0x780000
#define AR_PHY_TEST_BBB_OBS_SEL_S 19
@@ -509,7 +509,7 @@
#define AR_PHY_TEST_CHAIN_SEL 0xC0000000
#define AR_PHY_TEST_CHAIN_SEL_S 30
#define AR_PHY_TEST_CTL_STATUS (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x160 : 0x164))
#define AR_PHY_TEST_CTL_STATUS(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x160 : 0x164))
#define AR_PHY_TEST_CTL_TSTDAC_EN 0x1
#define AR_PHY_TEST_CTL_TSTDAC_EN_S 0
#define AR_PHY_TEST_CTL_TX_OBS_SEL 0x1C
@@ -524,22 +524,22 @@
#define AR_PHY_TEST_CTL_DEBUGPORT_SEL_S 29
#define AR_PHY_TSTDAC (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x164 : 0x168))
#define AR_PHY_TSTDAC(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x164 : 0x168))
#define AR_PHY_CHAN_STATUS (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x168 : 0x16c))
#define AR_PHY_CHAN_STATUS(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x168 : 0x16c))
#define AR_PHY_CHAN_INFO_MEMORY (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x16c : 0x170))
#define AR_PHY_CHAN_INFO_MEMORY(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x16c : 0x170))
#define AR_PHY_CHAN_INFO_MEMORY_CHANINFOMEM_S2_READ 0x00000008
#define AR_PHY_CHAN_INFO_MEMORY_CHANINFOMEM_S2_READ_S 3
#define AR_PHY_CHNINFO_NOISEPWR (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x170 : 0x174))
#define AR_PHY_CHNINFO_GAINDIFF (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x174 : 0x178))
#define AR_PHY_CHNINFO_FINETIM (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x178 : 0x17c))
#define AR_PHY_CHAN_INFO_GAIN_0 (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x17c : 0x180))
#define AR_PHY_SCRAMBLER_SEED (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x184 : 0x190))
#define AR_PHY_CCK_TX_CTRL (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x188 : 0x194))
#define AR_PHY_CHNINFO_NOISEPWR(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x170 : 0x174))
#define AR_PHY_CHNINFO_GAINDIFF(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x174 : 0x178))
#define AR_PHY_CHNINFO_FINETIM(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x178 : 0x17c))
#define AR_PHY_CHAN_INFO_GAIN_0(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x17c : 0x180))
#define AR_PHY_SCRAMBLER_SEED(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x184 : 0x190))
#define AR_PHY_CCK_TX_CTRL(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x188 : 0x194))
#define AR_PHY_HEAVYCLIP_CTL (AR_SM_BASE + (AR_SREV_9561(ah) ? 0x198 : 0x1a4))
#define AR_PHY_HEAVYCLIP_CTL(_ah) (AR_SM_BASE + (AR_SREV_9561(_ah) ? 0x198 : 0x1a4))
#define AR_PHY_HEAVYCLIP_20 (AR_SM_BASE + 0x1a8)
#define AR_PHY_HEAVYCLIP_40 (AR_SM_BASE + 0x1ac)
#define AR_PHY_HEAVYCLIP_1 (AR_SM_BASE + 0x19c)
@@ -611,16 +611,16 @@
#define AR_PHY_TXGAIN_TABLE (AR_SM_BASE + 0x300)
#define AR_PHY_TX_IQCAL_CONTROL_0 (AR_SM_BASE + (AR_SREV_9485(ah) ? \
#define AR_PHY_TX_IQCAL_CONTROL_0(_ah) (AR_SM_BASE + (AR_SREV_9485(_ah) ? \
0x3c4 : 0x444))
#define AR_PHY_TX_IQCAL_CONTROL_1 (AR_SM_BASE + (AR_SREV_9485(ah) ? \
#define AR_PHY_TX_IQCAL_CONTROL_1(_ah) (AR_SM_BASE + (AR_SREV_9485(_ah) ? \
0x3c8 : 0x448))
#define AR_PHY_TX_IQCAL_START (AR_SM_BASE + (AR_SREV_9485(ah) ? \
#define AR_PHY_TX_IQCAL_START(_ah) (AR_SM_BASE + (AR_SREV_9485(_ah) ? \
0x3c4 : 0x440))
#define AR_PHY_TX_IQCAL_STATUS_B0 (AR_SM_BASE + (AR_SREV_9485(ah) ? \
#define AR_PHY_TX_IQCAL_STATUS_B0(_ah) (AR_SM_BASE + (AR_SREV_9485(_ah) ? \
0x3f0 : 0x48c))
#define AR_PHY_TX_IQCAL_CORR_COEFF_B0(_i) (AR_SM_BASE + \
(AR_SREV_9485(ah) ? \
#define AR_PHY_TX_IQCAL_CORR_COEFF_B0(_ah, _i) (AR_SM_BASE + \
(AR_SREV_9485(_ah) ? \
0x3d0 : 0x450) + ((_i) << 2))
#define AR_PHY_RTT_CTRL (AR_SM_BASE + 0x380)
@@ -684,8 +684,8 @@
#define AR_PHY_65NM_CH0_RXTX2_SYNTHOVR_MASK 0x00000008
#define AR_PHY_65NM_CH0_RXTX2_SYNTHOVR_MASK_S 3
#define AR_CH0_TOP (AR_SREV_9300(ah) ? 0x16288 : \
(((AR_SREV_9462(ah) || AR_SREV_9565(ah)) ? 0x1628c : 0x16280)))
#define AR_CH0_TOP(_ah) (AR_SREV_9300(_ah) ? 0x16288 : \
(((AR_SREV_9462(_ah) || AR_SREV_9565(_ah)) ? 0x1628c : 0x16280)))
#define AR_CH0_TOP_XPABIASLVL (AR_SREV_9550(ah) ? 0x3c0 : 0x300)
#define AR_CH0_TOP_XPABIASLVL_S (AR_SREV_9550(ah) ? 6 : 8)
@@ -705,8 +705,8 @@
#define AR_SWITCH_TABLE_ALL (0xfff)
#define AR_SWITCH_TABLE_ALL_S (0)
#define AR_CH0_THERM (AR_SREV_9300(ah) ? 0x16290 :\
((AR_SREV_9462(ah) || AR_SREV_9565(ah)) ? 0x16294 : 0x1628c))
#define AR_CH0_THERM(_ah) (AR_SREV_9300(_ah) ? 0x16290 :\
((AR_SREV_9462(_ah) || AR_SREV_9565(_ah)) ? 0x16294 : 0x1628c))
#define AR_CH0_THERM_XPABIASLVL_MSB 0x3
#define AR_CH0_THERM_XPABIASLVL_MSB_S 0
#define AR_CH0_THERM_XPASHORT2GND 0x4
@@ -717,26 +717,26 @@
#define AR_CH0_THERM_SAR_ADC_OUT 0x0000ff00
#define AR_CH0_THERM_SAR_ADC_OUT_S 8
#define AR_CH0_TOP2 (AR_SREV_9300(ah) ? 0x1628c : \
(AR_SREV_9462(ah) ? 0x16290 : 0x16284))
#define AR_CH0_TOP2(_ah) (AR_SREV_9300(_ah) ? 0x1628c : \
(AR_SREV_9462(_ah) ? 0x16290 : 0x16284))
#define AR_CH0_TOP2_XPABIASLVL (AR_SREV_9561(ah) ? 0x1e00 : 0xf000)
#define AR_CH0_TOP2_XPABIASLVL_S (AR_SREV_9561(ah) ? 9 : 12)
#define AR_CH0_XTAL (AR_SREV_9300(ah) ? 0x16294 : \
((AR_SREV_9462(ah) || AR_SREV_9565(ah)) ? 0x16298 : \
(AR_SREV_9561(ah) ? 0x162c0 : 0x16290)))
#define AR_CH0_XTAL(_ah) (AR_SREV_9300(_ah) ? 0x16294 : \
((AR_SREV_9462(_ah) || AR_SREV_9565(_ah)) ? 0x16298 : \
(AR_SREV_9561(_ah) ? 0x162c0 : 0x16290)))
#define AR_CH0_XTAL_CAPINDAC 0x7f000000
#define AR_CH0_XTAL_CAPINDAC_S 24
#define AR_CH0_XTAL_CAPOUTDAC 0x00fe0000
#define AR_CH0_XTAL_CAPOUTDAC_S 17
#define AR_PHY_PMU1 ((AR_SREV_9462(ah) || AR_SREV_9565(ah)) ? 0x16340 : \
(AR_SREV_9561(ah) ? 0x16cc0 : 0x16c40))
#define AR_PHY_PMU1(_ah) ((AR_SREV_9462(_ah) || AR_SREV_9565(_ah)) ? 0x16340 : \
(AR_SREV_9561(_ah) ? 0x16cc0 : 0x16c40))
#define AR_PHY_PMU1_PWD 0x1
#define AR_PHY_PMU1_PWD_S 0
#define AR_PHY_PMU2 ((AR_SREV_9462(ah) || AR_SREV_9565(ah)) ? 0x16344 : \
(AR_SREV_9561(ah) ? 0x16cc4 : 0x16c44))
#define AR_PHY_PMU2(_ah) ((AR_SREV_9462(_ah) || AR_SREV_9565(_ah)) ? 0x16344 : \
(AR_SREV_9561(_ah) ? 0x16cc4 : 0x16c44))
#define AR_PHY_PMU2_PGM 0x00200000
#define AR_PHY_PMU2_PGM_S 21
@@ -974,7 +974,7 @@
#define AR_PHY_TPC_5_B1 (AR_SM1_BASE + 0x208)
#define AR_PHY_TPC_6_B1 (AR_SM1_BASE + 0x20c)
#define AR_PHY_TPC_11_B1 (AR_SM1_BASE + 0x220)
#define AR_PHY_PDADC_TAB_1 (AR_SM1_BASE + (AR_SREV_9462_20_OR_LATER(ah) ? \
#define AR_PHY_PDADC_TAB_1(_ah) (AR_SM1_BASE + (AR_SREV_9462_20_OR_LATER(_ah) ? \
0x280 : 0x240))
#define AR_PHY_TPC_19_B1 (AR_SM1_BASE + 0x240)
#define AR_PHY_TPC_19_B1_ALPHA_THERM 0xff
@@ -1152,7 +1152,7 @@
#define AR_PHY_PAPRD_CTRL1_PAPRD_MAG_SCALE_FACT 0x0ffe0000
#define AR_PHY_PAPRD_CTRL1_PAPRD_MAG_SCALE_FACT_S 17
#define AR_PHY_PAPRD_TRAINER_CNTL1 (AR_SM_BASE + (AR_SREV_9485(ah) ? 0x580 : 0x490))
#define AR_PHY_PAPRD_TRAINER_CNTL1(_ah) (AR_SM_BASE + (AR_SREV_9485(_ah) ? 0x580 : 0x490))
#define AR_PHY_PAPRD_TRAINER_CNTL1_CF_CF_PAPRD_TRAIN_ENABLE 0x00000001
#define AR_PHY_PAPRD_TRAINER_CNTL1_CF_CF_PAPRD_TRAIN_ENABLE_S 0
@@ -1169,12 +1169,12 @@
#define AR_PHY_PAPRD_TRAINER_CNTL1_CF_PAPRD_LB_SKIP 0x0003f000
#define AR_PHY_PAPRD_TRAINER_CNTL1_CF_PAPRD_LB_SKIP_S 12
#define AR_PHY_PAPRD_TRAINER_CNTL2 (AR_SM_BASE + (AR_SREV_9485(ah) ? 0x584 : 0x494))
#define AR_PHY_PAPRD_TRAINER_CNTL2(_ah) (AR_SM_BASE + (AR_SREV_9485(_ah) ? 0x584 : 0x494))
#define AR_PHY_PAPRD_TRAINER_CNTL2_CF_PAPRD_INIT_RX_BB_GAIN 0xFFFFFFFF
#define AR_PHY_PAPRD_TRAINER_CNTL2_CF_PAPRD_INIT_RX_BB_GAIN_S 0
#define AR_PHY_PAPRD_TRAINER_CNTL3 (AR_SM_BASE + (AR_SREV_9485(ah) ? 0x588 : 0x498))
#define AR_PHY_PAPRD_TRAINER_CNTL3(_ah) (AR_SM_BASE + (AR_SREV_9485(_ah) ? 0x588 : 0x498))
#define AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_ADC_DESIRED_SIZE 0x0000003f
#define AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_ADC_DESIRED_SIZE_S 0
@@ -1191,7 +1191,7 @@
#define AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_BBTXMIX_DISABLE 0x20000000
#define AR_PHY_PAPRD_TRAINER_CNTL3_CF_PAPRD_BBTXMIX_DISABLE_S 29
#define AR_PHY_PAPRD_TRAINER_CNTL4 (AR_SM_BASE + (AR_SREV_9485(ah) ? 0x58c : 0x49c))
#define AR_PHY_PAPRD_TRAINER_CNTL4(_ah) (AR_SM_BASE + (AR_SREV_9485(_ah) ? 0x58c : 0x49c))
#define AR_PHY_PAPRD_TRAINER_CNTL4_CF_PAPRD_NUM_TRAIN_SAMPLES 0x03ff0000
#define AR_PHY_PAPRD_TRAINER_CNTL4_CF_PAPRD_NUM_TRAIN_SAMPLES_S 16
@@ -1211,7 +1211,7 @@
#define AR_PHY_PAPRD_PRE_POST_SCALING 0x3FFFF
#define AR_PHY_PAPRD_PRE_POST_SCALING_S 0
#define AR_PHY_PAPRD_TRAINER_STAT1 (AR_SM_BASE + (AR_SREV_9485(ah) ? 0x590 : 0x4a0))
#define AR_PHY_PAPRD_TRAINER_STAT1(_ah) (AR_SM_BASE + (AR_SREV_9485(_ah) ? 0x590 : 0x4a0))
#define AR_PHY_PAPRD_TRAINER_STAT1_PAPRD_TRAIN_DONE 0x00000001
#define AR_PHY_PAPRD_TRAINER_STAT1_PAPRD_TRAIN_DONE_S 0
@@ -1226,7 +1226,7 @@
#define AR_PHY_PAPRD_TRAINER_STAT1_PAPRD_AGC2_PWR 0x0001fe00
#define AR_PHY_PAPRD_TRAINER_STAT1_PAPRD_AGC2_PWR_S 9
#define AR_PHY_PAPRD_TRAINER_STAT2 (AR_SM_BASE + (AR_SREV_9485(ah) ? 0x594 : 0x4a4))
#define AR_PHY_PAPRD_TRAINER_STAT2(_ah) (AR_SM_BASE + (AR_SREV_9485(_ah) ? 0x594 : 0x4a4))
#define AR_PHY_PAPRD_TRAINER_STAT2_PAPRD_FINE_VAL 0x0000ffff
#define AR_PHY_PAPRD_TRAINER_STAT2_PAPRD_FINE_VAL_S 0
@@ -1235,7 +1235,7 @@
#define AR_PHY_PAPRD_TRAINER_STAT2_PAPRD_FINE_IDX 0x00600000
#define AR_PHY_PAPRD_TRAINER_STAT2_PAPRD_FINE_IDX_S 21
#define AR_PHY_PAPRD_TRAINER_STAT3 (AR_SM_BASE + (AR_SREV_9485(ah) ? 0x598 : 0x4a8))
#define AR_PHY_PAPRD_TRAINER_STAT3(_ah) (AR_SM_BASE + (AR_SREV_9485(_ah) ? 0x598 : 0x4a8))
#define AR_PHY_PAPRD_TRAINER_STAT3_PAPRD_TRAIN_SAMPLES_CNT 0x000fffff
#define AR_PHY_PAPRD_TRAINER_STAT3_PAPRD_TRAIN_SAMPLES_CNT_S 0


@@ -43,7 +43,7 @@ static void ath9k_hw_set_powermode_wow_sleep(struct ath_hw *ah)
/* set rx disable bit */
REG_WRITE(ah, AR_CR, AR_CR_RXD);
if (!ath9k_hw_wait(ah, AR_CR, AR_CR_RXE, 0, AH_WAIT_TIMEOUT)) {
if (!ath9k_hw_wait(ah, AR_CR, AR_CR_RXE(ah), 0, AH_WAIT_TIMEOUT)) {
ath_err(common, "Failed to stop Rx DMA in 10ms AR_CR=0x%08x AR_DIAG_SW=0x%08x\n",
REG_READ(ah, AR_CR), REG_READ(ah, AR_DIAG_SW));
return;
@ -61,7 +61,7 @@ static void ath9k_hw_set_powermode_wow_sleep(struct ath_hw *ah)
if (ath9k_hw_mci_is_enabled(ah))
REG_WRITE(ah, AR_RTC_KEEP_AWAKE, 0x2);
REG_WRITE(ah, AR_RTC_FORCE_WAKE, AR_RTC_FORCE_WAKE_ON_INT);
REG_WRITE(ah, AR_RTC_FORCE_WAKE(ah), AR_RTC_FORCE_WAKE_ON_INT);
}
static void ath9k_wow_create_keep_alive_pattern(struct ath_hw *ah)
@ -226,7 +226,7 @@ u32 ath9k_hw_wow_wakeup(struct ath_hw *ah)
*/
/* do we need to check the bit value 0x01000000 (7-10) ?? */
REG_RMW(ah, AR_PCIE_PM_CTRL, AR_PMCTRL_WOW_PME_CLR,
REG_RMW(ah, AR_PCIE_PM_CTRL(ah), AR_PMCTRL_WOW_PME_CLR,
AR_PMCTRL_PWR_STATE_D1D3);
/*
@ -278,12 +278,12 @@ static void ath9k_hw_wow_set_arwr_reg(struct ath_hw *ah)
* to the external PCI-E reset. We also need to tie
* the PCI-E Phy reset to the PCI-E reset.
*/
wa_reg = REG_READ(ah, AR_WA);
wa_reg = REG_READ(ah, AR_WA(ah));
wa_reg &= ~AR_WA_UNTIE_RESET_EN;
wa_reg |= AR_WA_RESET_EN;
wa_reg |= AR_WA_POR_SHORT;
REG_WRITE(ah, AR_WA, wa_reg);
REG_WRITE(ah, AR_WA(ah), wa_reg);
}
void ath9k_hw_wow_enable(struct ath_hw *ah, u32 pattern_enable)
@ -309,11 +309,11 @@ void ath9k_hw_wow_enable(struct ath_hw *ah, u32 pattern_enable)
* Set and clear WOW_PME_CLEAR for the chip
* to generate next wow signal.
*/
REG_SET_BIT(ah, AR_PCIE_PM_CTRL, AR_PMCTRL_HOST_PME_EN |
REG_SET_BIT(ah, AR_PCIE_PM_CTRL(ah), AR_PMCTRL_HOST_PME_EN |
AR_PMCTRL_PWR_PM_CTRL_ENA |
AR_PMCTRL_AUX_PWR_DET |
AR_PMCTRL_WOW_PME_CLR);
REG_CLR_BIT(ah, AR_PCIE_PM_CTRL, AR_PMCTRL_WOW_PME_CLR);
REG_CLR_BIT(ah, AR_PCIE_PM_CTRL(ah), AR_PMCTRL_WOW_PME_CLR);
/*
* Random Backoff.
@ -414,7 +414,7 @@ void ath9k_hw_wow_enable(struct ath_hw *ah, u32 pattern_enable)
/*
* Set the power states appropriately and enable PME.
*/
host_pm_ctrl = REG_READ(ah, AR_PCIE_PM_CTRL);
host_pm_ctrl = REG_READ(ah, AR_PCIE_PM_CTRL(ah));
host_pm_ctrl |= AR_PMCTRL_PWR_STATE_D1D3 |
AR_PMCTRL_HOST_PME_EN |
AR_PMCTRL_PWR_PM_CTRL_ENA;
@ -430,7 +430,7 @@ void ath9k_hw_wow_enable(struct ath_hw *ah, u32 pattern_enable)
host_pm_ctrl |= AR_PMCTRL_PWR_STATE_D1D3_REAL;
}
REG_WRITE(ah, AR_PCIE_PM_CTRL, host_pm_ctrl);
REG_WRITE(ah, AR_PCIE_PM_CTRL(ah), host_pm_ctrl);
/*
* Enable sequence number generation when asleep.


@@ -173,16 +173,16 @@ void ath9k_hw_btcoex_init_2wire(struct ath_hw *ah)
 struct ath_btcoex_hw *btcoex_hw = &ah->btcoex_hw;
 /* connect bt_active to baseband */
-REG_CLR_BIT(ah, AR_GPIO_INPUT_EN_VAL,
+REG_CLR_BIT(ah, AR_GPIO_INPUT_EN_VAL(ah),
 (AR_GPIO_INPUT_EN_VAL_BT_PRIORITY_DEF |
 AR_GPIO_INPUT_EN_VAL_BT_FREQUENCY_DEF));
-REG_SET_BIT(ah, AR_GPIO_INPUT_EN_VAL,
+REG_SET_BIT(ah, AR_GPIO_INPUT_EN_VAL(ah),
 AR_GPIO_INPUT_EN_VAL_BT_ACTIVE_BB);
 /* Set input mux for bt_active to gpio pin */
 if (!AR_SREV_SOC(ah))
-REG_RMW_FIELD(ah, AR_GPIO_INPUT_MUX1,
+REG_RMW_FIELD(ah, AR_GPIO_INPUT_MUX1(ah),
 AR_GPIO_INPUT_MUX1_BT_ACTIVE,
 btcoex_hw->btactive_gpio);
@@ -197,17 +197,17 @@ void ath9k_hw_btcoex_init_3wire(struct ath_hw *ah)
 struct ath_btcoex_hw *btcoex_hw = &ah->btcoex_hw;
 /* btcoex 3-wire */
-REG_SET_BIT(ah, AR_GPIO_INPUT_EN_VAL,
+REG_SET_BIT(ah, AR_GPIO_INPUT_EN_VAL(ah),
 (AR_GPIO_INPUT_EN_VAL_BT_PRIORITY_BB |
 AR_GPIO_INPUT_EN_VAL_BT_ACTIVE_BB));
 /* Set input mux for bt_prority_async and
 * bt_active_async to GPIO pins */
 if (!AR_SREV_SOC(ah)) {
-REG_RMW_FIELD(ah, AR_GPIO_INPUT_MUX1,
+REG_RMW_FIELD(ah, AR_GPIO_INPUT_MUX1(ah),
 AR_GPIO_INPUT_MUX1_BT_ACTIVE,
 btcoex_hw->btactive_gpio);
-REG_RMW_FIELD(ah, AR_GPIO_INPUT_MUX1,
+REG_RMW_FIELD(ah, AR_GPIO_INPUT_MUX1(ah),
 AR_GPIO_INPUT_MUX1_BT_PRIORITY,
 btcoex_hw->btpriority_gpio);
 }
@@ -404,7 +404,7 @@ void ath9k_hw_btcoex_enable(struct ath_hw *ah)
 if (ath9k_hw_get_btcoex_scheme(ah) != ATH_BTCOEX_CFG_MCI &&
 !AR_SREV_SOC(ah)) {
-REG_RMW(ah, AR_GPIO_PDPU,
+REG_RMW(ah, AR_GPIO_PDPU(ah),
 (0x2 << (btcoex_hw->btactive_gpio * 2)),
 (0x3 << (btcoex_hw->btactive_gpio * 2)));
 }


@@ -231,17 +231,17 @@ void ath9k_hw_start_nfcal(struct ath_hw *ah, bool update)
 if (ah->caldata)
 set_bit(NFCAL_PENDING, &ah->caldata->cal_flags);
-REG_SET_BIT(ah, AR_PHY_AGC_CONTROL,
+REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah),
 AR_PHY_AGC_CONTROL_ENABLE_NF);
 if (update)
-REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL,
+REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL(ah),
 AR_PHY_AGC_CONTROL_NO_UPDATE_NF);
 else
-REG_SET_BIT(ah, AR_PHY_AGC_CONTROL,
+REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah),
 AR_PHY_AGC_CONTROL_NO_UPDATE_NF);
-REG_SET_BIT(ah, AR_PHY_AGC_CONTROL, AR_PHY_AGC_CONTROL_NF);
+REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah), AR_PHY_AGC_CONTROL_NF);
 }
 int ath9k_hw_loadnf(struct ath_hw *ah, struct ath9k_channel *chan)
@@ -251,7 +251,7 @@ int ath9k_hw_loadnf(struct ath_hw *ah, struct ath9k_channel *chan)
 u8 chainmask = (ah->rxchainmask << 3) | ah->rxchainmask;
 struct ath_common *common = ath9k_hw_common(ah);
 s16 default_nf = ath9k_hw_get_nf_limits(ah, chan)->nominal;
-u32 bb_agc_ctl = REG_READ(ah, AR_PHY_AGC_CONTROL);
+u32 bb_agc_ctl = REG_READ(ah, AR_PHY_AGC_CONTROL(ah));
 if (ah->caldata)
 h = ah->caldata->nfCalHist;
@@ -286,7 +286,7 @@ int ath9k_hw_loadnf(struct ath_hw *ah, struct ath9k_channel *chan)
 * (or after end rx/tx frame if ongoing)
 */
 if (bb_agc_ctl & AR_PHY_AGC_CONTROL_NF) {
-REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL, AR_PHY_AGC_CONTROL_NF);
+REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL(ah), AR_PHY_AGC_CONTROL_NF);
 REG_RMW_BUFFER_FLUSH(ah);
 ENABLE_REG_RMW_BUFFER(ah);
 }
@@ -295,11 +295,11 @@ int ath9k_hw_loadnf(struct ath_hw *ah, struct ath9k_channel *chan)
 * Load software filtered NF value into baseband internal minCCApwr
 * variable.
 */
-REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL,
+REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL(ah),
 AR_PHY_AGC_CONTROL_ENABLE_NF);
-REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL,
+REG_CLR_BIT(ah, AR_PHY_AGC_CONTROL(ah),
 AR_PHY_AGC_CONTROL_NO_UPDATE_NF);
-REG_SET_BIT(ah, AR_PHY_AGC_CONTROL, AR_PHY_AGC_CONTROL_NF);
+REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah), AR_PHY_AGC_CONTROL_NF);
 REG_RMW_BUFFER_FLUSH(ah);
 /*
@@ -309,7 +309,7 @@ int ath9k_hw_loadnf(struct ath_hw *ah, struct ath9k_channel *chan)
 * (11n max length 22.1 msec)
 */
 for (j = 0; j < 22200; j++) {
-if ((REG_READ(ah, AR_PHY_AGC_CONTROL) &
+if ((REG_READ(ah, AR_PHY_AGC_CONTROL(ah)) &
 AR_PHY_AGC_CONTROL_NF) == 0)
 break;
 udelay(10);
@@ -321,12 +321,12 @@ int ath9k_hw_loadnf(struct ath_hw *ah, struct ath9k_channel *chan)
 if (bb_agc_ctl & AR_PHY_AGC_CONTROL_NF) {
 ENABLE_REG_RMW_BUFFER(ah);
 if (bb_agc_ctl & AR_PHY_AGC_CONTROL_ENABLE_NF)
-REG_SET_BIT(ah, AR_PHY_AGC_CONTROL,
+REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah),
 AR_PHY_AGC_CONTROL_ENABLE_NF);
 if (bb_agc_ctl & AR_PHY_AGC_CONTROL_NO_UPDATE_NF)
-REG_SET_BIT(ah, AR_PHY_AGC_CONTROL,
+REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah),
 AR_PHY_AGC_CONTROL_NO_UPDATE_NF);
-REG_SET_BIT(ah, AR_PHY_AGC_CONTROL, AR_PHY_AGC_CONTROL_NF);
+REG_SET_BIT(ah, AR_PHY_AGC_CONTROL(ah), AR_PHY_AGC_CONTROL_NF);
 REG_RMW_BUFFER_FLUSH(ah);
 }
@@ -342,7 +342,7 @@ int ath9k_hw_loadnf(struct ath_hw *ah, struct ath9k_channel *chan)
 if (j == 22200) {
 ath_dbg(common, ANY,
 "Timeout while waiting for nf to load: AR_PHY_AGC_CONTROL=0x%x\n",
-REG_READ(ah, AR_PHY_AGC_CONTROL));
+REG_READ(ah, AR_PHY_AGC_CONTROL(ah)));
 return -ETIMEDOUT;
 }
@@ -410,7 +410,7 @@ bool ath9k_hw_getnf(struct ath_hw *ah, struct ath9k_channel *chan)
 struct ieee80211_channel *c = chan->chan;
 struct ath9k_hw_cal_data *caldata = ah->caldata;
-if (REG_READ(ah, AR_PHY_AGC_CONTROL) & AR_PHY_AGC_CONTROL_NF) {
+if (REG_READ(ah, AR_PHY_AGC_CONTROL(ah)) & AR_PHY_AGC_CONTROL_NF) {
 ath_dbg(common, CALIBRATE,
 "NF did not complete in calibration window\n");
 return false;
@@ -478,7 +478,7 @@ void ath9k_hw_bstuck_nfcal(struct ath_hw *ah)
 */
 if (!test_bit(NFCAL_PENDING, &caldata->cal_flags))
 ath9k_hw_start_nfcal(ah, true);
-else if (!(REG_READ(ah, AR_PHY_AGC_CONTROL) & AR_PHY_AGC_CONTROL_NF))
+else if (!(REG_READ(ah, AR_PHY_AGC_CONTROL(ah)) & AR_PHY_AGC_CONTROL_NF))
 ath9k_hw_getnf(ah, ah->curchan);
 set_bit(NFCAL_INTF, &caldata->cal_flags);


@@ -68,8 +68,8 @@
 #define AR5416_EEPROM_OFFSET 0x2000
 #define AR5416_EEPROM_MAX 0xae0
-#define AR5416_EEPROM_START_ADDR \
-(AR_SREV_9100(ah)) ? 0x1fff1000 : 0x503f1200
+#define AR5416_EEPROM_START_ADDR(_ah) \
+(AR_SREV_9100(_ah)) ? 0x1fff1000 : 0x503f1200
 #define SD_NO_CTL 0xE0
 #define NO_CTL 0xff
@@ -110,10 +110,10 @@
 #define FBIN2FREQ(x, y) ((y) ? (2300 + x) : (4800 + 5 * x))
 #define ath9k_hw_use_flash(_ah) (!(_ah->ah_flags & AH_USE_EEPROM))
-#define OLC_FOR_AR9280_20_LATER (AR_SREV_9280_20_OR_LATER(ah) && \
-ah->eep_ops->get_eeprom(ah, EEP_OL_PWRCTRL))
-#define OLC_FOR_AR9287_10_LATER (AR_SREV_9287_11_OR_LATER(ah) && \
-ah->eep_ops->get_eeprom(ah, EEP_OL_PWRCTRL))
+#define OLC_FOR_AR9280_20_LATER(_ah) (AR_SREV_9280_20_OR_LATER(_ah) && \
+_ah->eep_ops->get_eeprom(_ah, EEP_OL_PWRCTRL))
+#define OLC_FOR_AR9287_10_LATER(_ah) (AR_SREV_9287_11_OR_LATER(_ah) && \
+_ah->eep_ops->get_eeprom(_ah, EEP_OL_PWRCTRL))
 #define EEP_RFSILENT_ENABLED 0x0001
 #define EEP_RFSILENT_ENABLED_S 0


@@ -800,7 +800,7 @@ static void ath9k_hw_set_def_power_cal_table(struct ath_hw *ah,
 numPiers = AR5416_NUM_5G_CAL_PIERS;
 }
-if (OLC_FOR_AR9280_20_LATER && IS_CHAN_2GHZ(chan)) {
+if (OLC_FOR_AR9280_20_LATER(ah) && IS_CHAN_2GHZ(chan)) {
 pRawDataset = pEepData->calPierData2G[0];
 ah->initPDADC = ((struct calDataPerFreqOpLoop *)
 pRawDataset)->vpdPdg[0][0];
@@ -841,7 +841,7 @@ static void ath9k_hw_set_def_power_cal_table(struct ath_hw *ah,
 pRawDataset = pEepData->calPierData5G[i];
-if (OLC_FOR_AR9280_20_LATER) {
+if (OLC_FOR_AR9280_20_LATER(ah)) {
 u8 pcdacIdx;
 u8 txPower;
@@ -869,7 +869,7 @@ static void ath9k_hw_set_def_power_cal_table(struct ath_hw *ah,
 ENABLE_REGWRITE_BUFFER(ah);
-if (OLC_FOR_AR9280_20_LATER) {
+if (OLC_FOR_AR9280_20_LATER(ah)) {
 REG_WRITE(ah,
 AR_PHY_TPCRG5 + regChainOffset,
 SM(0x6,
@@ -1203,7 +1203,7 @@ static void ath9k_hw_def_set_txpower(struct ath_hw *ah,
 | ATH9K_POW_SM(ratesArray[rate24mb], 0));
 if (IS_CHAN_2GHZ(chan)) {
-if (OLC_FOR_AR9280_20_LATER) {
+if (OLC_FOR_AR9280_20_LATER(ah)) {
 cck_ofdm_delta = 2;
 REG_WRITE(ah, AR_PHY_POWER_TX_RATE3,
 ATH9K_POW_SM(RT_AR_DELTA(rate2s), 24)
@@ -1259,7 +1259,7 @@ static void ath9k_hw_def_set_txpower(struct ath_hw *ah,
 ht40PowerIncForPdadc, 8)
 | ATH9K_POW_SM(ratesArray[rateHt40_4] +
 ht40PowerIncForPdadc, 0));
-if (OLC_FOR_AR9280_20_LATER) {
+if (OLC_FOR_AR9280_20_LATER(ah)) {
 REG_WRITE(ah, AR_PHY_POWER_TX_RATE9,
 ATH9K_POW_SM(ratesArray[rateExtOfdm], 24)
 | ATH9K_POW_SM(RT_AR_DELTA(rateExtCck), 16)


@@ -561,11 +561,11 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
 memcpy(ptr, skb->data, rx_remain_len);
 rx_pkt_len += rx_remain_len;
-hif_dev->rx_remain_len = 0;
 skb_put(remain_skb, rx_pkt_len);
 skb_pool[pool_index++] = remain_skb;
+hif_dev->remain_skb = NULL;
+hif_dev->rx_remain_len = 0;
 } else {
 index = rx_remain_len;
 }
@@ -584,16 +584,21 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
 pkt_len = get_unaligned_le16(ptr + index);
 pkt_tag = get_unaligned_le16(ptr + index + 2);
+/* It is supposed that if we have an invalid pkt_tag or
+ * pkt_len then the whole input SKB is considered invalid
+ * and dropped; the associated packets already in skb_pool
+ * are dropped, too.
+ */
 if (pkt_tag != ATH_USB_RX_STREAM_MODE_TAG) {
 RX_STAT_INC(hif_dev, skb_dropped);
-return;
+goto invalid_pkt;
 }
 if (pkt_len > 2 * MAX_RX_BUF_SIZE) {
 dev_err(&hif_dev->udev->dev,
 "ath9k_htc: invalid pkt_len (%x)\n", pkt_len);
 RX_STAT_INC(hif_dev, skb_dropped);
-return;
+goto invalid_pkt;
 }
 pad_len = 4 - (pkt_len & 0x3);
@@ -605,11 +610,6 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
 if (index > MAX_RX_BUF_SIZE) {
 spin_lock(&hif_dev->rx_lock);
-hif_dev->rx_remain_len = index - MAX_RX_BUF_SIZE;
-hif_dev->rx_transfer_len =
-MAX_RX_BUF_SIZE - chk_idx - 4;
-hif_dev->rx_pad_len = pad_len;
 nskb = __dev_alloc_skb(pkt_len + 32, GFP_ATOMIC);
 if (!nskb) {
 dev_err(&hif_dev->udev->dev,
@@ -617,6 +617,12 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
 spin_unlock(&hif_dev->rx_lock);
 goto err;
 }
+hif_dev->rx_remain_len = index - MAX_RX_BUF_SIZE;
+hif_dev->rx_transfer_len =
+MAX_RX_BUF_SIZE - chk_idx - 4;
+hif_dev->rx_pad_len = pad_len;
 skb_reserve(nskb, 32);
 RX_STAT_INC(hif_dev, skb_allocated);
@@ -654,6 +660,13 @@ err:
 skb_pool[i]->len, USB_WLAN_RX_PIPE);
 RX_STAT_INC(hif_dev, skb_completed);
 }
+return;
+invalid_pkt:
+for (i = 0; i < pool_index; i++) {
+dev_kfree_skb_any(skb_pool[i]);
+RX_STAT_INC(hif_dev, skb_dropped);
+}
+return;
 }
 static void ath9k_hif_usb_rx_cb(struct urb *urb)
@@ -1411,8 +1424,6 @@ static void ath9k_hif_usb_disconnect(struct usb_interface *interface)
 if (hif_dev->flags & HIF_USB_READY) {
 ath9k_htc_hw_deinit(hif_dev->htc_handle, unplugged);
 ath9k_hif_usb_dev_deinit(hif_dev);
-ath9k_destroy_wmi(hif_dev->htc_handle->drv_priv);
-ath9k_htc_hw_free(hif_dev->htc_handle);
 }


@@ -523,13 +523,13 @@ static bool ath_usb_eeprom_read(struct ath_common *common, u32 off, u16 *data)
 (void)REG_READ(ah, AR5416_EEPROM_OFFSET + (off << AR5416_EEPROM_S));
 if (!ath9k_hw_wait(ah,
-AR_EEPROM_STATUS_DATA,
+AR_EEPROM_STATUS_DATA(ah),
 AR_EEPROM_STATUS_DATA_BUSY |
 AR_EEPROM_STATUS_DATA_PROT_ACCESS, 0,
 AH_WAIT_TIMEOUT))
 return false;
-*data = MS(REG_READ(ah, AR_EEPROM_STATUS_DATA),
+*data = MS(REG_READ(ah, AR_EEPROM_STATUS_DATA(ah)),
 AR_EEPROM_STATUS_DATA_VAL);
 return true;
@@ -988,6 +988,8 @@ void ath9k_htc_disconnect_device(struct htc_target *htc_handle, bool hotunplug)
 ath9k_deinit_device(htc_handle->drv_priv);
 ath9k_stop_wmi(htc_handle->drv_priv);
+ath9k_hif_usb_dealloc_urbs((struct hif_device_usb *)htc_handle->hif_dev);
+ath9k_destroy_wmi(htc_handle->drv_priv);
 ieee80211_free_hw(htc_handle->drv_priv->hw);
 }
 }


@@ -391,7 +391,7 @@ static void ath9k_htc_fw_panic_report(struct htc_target *htc_handle,
 * HTC Messages are handled directly here and the obtained SKB
 * is freed.
 *
-* Service messages (Data, WMI) passed to the corresponding
+* Service messages (Data, WMI) are passed to the corresponding
 * endpoint RX handlers, which have to free the SKB.
 */
 void ath9k_htc_rx_msg(struct htc_target *htc_handle,
@@ -478,6 +478,8 @@ invalid:
 if (endpoint->ep_callbacks.rx)
 endpoint->ep_callbacks.rx(endpoint->ep_callbacks.priv,
 skb, epid);
+else
+goto invalid;
 }
 }


@@ -266,7 +266,7 @@ static bool ath9k_hw_read_revisions(struct ath_hw *ah)
 case AR9300_DEVID_AR9330:
 ah->hw_version.macVersion = AR_SREV_VERSION_9330;
 if (!ah->get_mac_revision) {
-val = REG_READ(ah, AR_SREV);
+val = REG_READ(ah, AR_SREV(ah));
 ah->hw_version.macRev = MS(val, AR_SREV_REVISION2);
 }
 return true;
@@ -284,7 +284,7 @@ static bool ath9k_hw_read_revisions(struct ath_hw *ah)
 return true;
 }
-srev = REG_READ(ah, AR_SREV);
+srev = REG_READ(ah, AR_SREV(ah));
 if (srev == -1) {
 ath_err(ath9k_hw_common(ah),
@@ -292,7 +292,7 @@ static bool ath9k_hw_read_revisions(struct ath_hw *ah)
 return false;
 }
-val = srev & AR_SREV_ID;
+val = srev & AR_SREV_ID(ah);
 if (val == 0xFF) {
 val = srev;
@@ -601,12 +601,12 @@ static int __ath9k_hw_init(struct ath_hw *ah)
 }
 /*
-* Read back AR_WA into a permanent copy and set bits 14 and 17.
+* Read back AR_WA(ah) into a permanent copy and set bits 14 and 17.
 * We need to do this to avoid RMW of this register. We cannot
 * read the reg when chip is asleep.
 */
 if (AR_SREV_9300_20_OR_LATER(ah)) {
-ah->WARegVal = REG_READ(ah, AR_WA);
+ah->WARegVal = REG_READ(ah, AR_WA(ah));
 ah->WARegVal |= (AR_WA_D3_L1_DISABLE |
 AR_WA_ASPM_TIMER_BASED_DISABLE);
 }
@@ -618,7 +618,7 @@ static int __ath9k_hw_init(struct ath_hw *ah)
 if (AR_SREV_9565(ah)) {
 ah->WARegVal |= AR_WA_BIT22;
-REG_WRITE(ah, AR_WA, ah->WARegVal);
+REG_WRITE(ah, AR_WA(ah), ah->WARegVal);
 }
 ath9k_hw_init_defaults(ah);
@@ -814,7 +814,7 @@ static void ath9k_hw_init_pll(struct ath_hw *ah,
 REG_RMW_FIELD(ah, AR_CH0_DDR_DPLL3,
 AR_CH0_DPLL3_PHASE_SHIFT, 0x1);
-REG_WRITE(ah, AR_RTC_PLL_CONTROL,
+REG_WRITE(ah, AR_RTC_PLL_CONTROL(ah),
 pll | AR_RTC_9300_PLL_BYPASS);
 udelay(1000);
@@ -832,7 +832,7 @@ static void ath9k_hw_init_pll(struct ath_hw *ah,
 AR_SREV_9561(ah)) {
 u32 regval, pll2_divint, pll2_divfrac, refdiv;
-REG_WRITE(ah, AR_RTC_PLL_CONTROL,
+REG_WRITE(ah, AR_RTC_PLL_CONTROL(ah),
 pll | AR_RTC_9300_SOC_PLL_BYPASS);
 udelay(1000);
@@ -911,7 +911,7 @@ static void ath9k_hw_init_pll(struct ath_hw *ah,
 if (AR_SREV_9565(ah))
 pll |= 0x40000;
-REG_WRITE(ah, AR_RTC_PLL_CONTROL, pll);
+REG_WRITE(ah, AR_RTC_PLL_CONTROL(ah), pll);
 if (AR_SREV_9485(ah) || AR_SREV_9340(ah) || AR_SREV_9330(ah) ||
 AR_SREV_9550(ah))
@@ -925,7 +925,7 @@ static void ath9k_hw_init_pll(struct ath_hw *ah,
 udelay(RTC_PLL_SETTLE_DELAY);
-REG_WRITE(ah, AR_RTC_SLEEP_CLK, AR_RTC_FORCE_DERIVED_CLK);
+REG_WRITE(ah, AR_RTC_SLEEP_CLK(ah), AR_RTC_FORCE_DERIVED_CLK);
 }
 static void ath9k_hw_init_interrupt_masks(struct ath_hw *ah,
@@ -977,7 +977,7 @@ static void ath9k_hw_init_interrupt_masks(struct ath_hw *ah,
 REG_WRITE(ah, AR_IMR_S2, ah->imrs2_reg);
 if (ah->msi_enabled) {
-ah->msi_reg = REG_READ(ah, AR_PCIE_MSI);
+ah->msi_reg = REG_READ(ah, AR_PCIE_MSI(ah));
 ah->msi_reg |= AR_PCIE_MSI_HW_DBI_WR_EN;
 ah->msi_reg &= AR_PCIE_MSI_HW_INT_PENDING_ADDR_MSI_64;
 REG_WRITE(ah, AR_INTCFG, msi_cfg);
@@ -987,18 +987,18 @@ static void ath9k_hw_init_interrupt_masks(struct ath_hw *ah,
 }
 if (!AR_SREV_9100(ah)) {
-REG_WRITE(ah, AR_INTR_SYNC_CAUSE, 0xFFFFFFFF);
-REG_WRITE(ah, AR_INTR_SYNC_ENABLE, sync_default);
-REG_WRITE(ah, AR_INTR_SYNC_MASK, 0);
+REG_WRITE(ah, AR_INTR_SYNC_CAUSE(ah), 0xFFFFFFFF);
+REG_WRITE(ah, AR_INTR_SYNC_ENABLE(ah), sync_default);
+REG_WRITE(ah, AR_INTR_SYNC_MASK(ah), 0);
 }
 REGWRITE_BUFFER_FLUSH(ah);
 if (AR_SREV_9300_20_OR_LATER(ah)) {
-REG_WRITE(ah, AR_INTR_PRIO_ASYNC_ENABLE, 0);
-REG_WRITE(ah, AR_INTR_PRIO_ASYNC_MASK, 0);
-REG_WRITE(ah, AR_INTR_PRIO_SYNC_ENABLE, 0);
-REG_WRITE(ah, AR_INTR_PRIO_SYNC_MASK, 0);
+REG_WRITE(ah, AR_INTR_PRIO_ASYNC_ENABLE(ah), 0);
+REG_WRITE(ah, AR_INTR_PRIO_ASYNC_MASK(ah), 0);
+REG_WRITE(ah, AR_INTR_PRIO_SYNC_ENABLE(ah), 0);
+REG_WRITE(ah, AR_INTR_PRIO_SYNC_MASK(ah), 0);
 }
 }
@@ -1341,7 +1341,7 @@ static bool ath9k_hw_ar9330_reset_war(struct ath_hw *ah, int type)
 return false;
 }
-REG_WRITE(ah, AR_RTC_RESET, 1);
+REG_WRITE(ah, AR_RTC_RESET(ah), 1);
 }
 return true;
@@ -1353,26 +1353,26 @@ static bool ath9k_hw_set_reset(struct ath_hw *ah, int type)
 u32 tmpReg;
 if (AR_SREV_9100(ah)) {
-REG_RMW_FIELD(ah, AR_RTC_DERIVED_CLK,
+REG_RMW_FIELD(ah, AR_RTC_DERIVED_CLK(ah),
 AR_RTC_DERIVED_CLK_PERIOD, 1);
-(void)REG_READ(ah, AR_RTC_DERIVED_CLK);
+(void)REG_READ(ah, AR_RTC_DERIVED_CLK(ah));
 }
 ENABLE_REGWRITE_BUFFER(ah);
 if (AR_SREV_9300_20_OR_LATER(ah)) {
-REG_WRITE(ah, AR_WA, ah->WARegVal);
+REG_WRITE(ah, AR_WA(ah), ah->WARegVal);
 udelay(10);
 }
-REG_WRITE(ah, AR_RTC_FORCE_WAKE, AR_RTC_FORCE_WAKE_EN |
+REG_WRITE(ah, AR_RTC_FORCE_WAKE(ah), AR_RTC_FORCE_WAKE_EN |
 AR_RTC_FORCE_WAKE_ON_INT);
 if (AR_SREV_9100(ah)) {
 rst_flags = AR_RTC_RC_MAC_WARM | AR_RTC_RC_MAC_COLD |
 AR_RTC_RC_COLD_RESET | AR_RTC_RC_WARM_RESET;
 } else {
-tmpReg = REG_READ(ah, AR_INTR_SYNC_CAUSE);
+tmpReg = REG_READ(ah, AR_INTR_SYNC_CAUSE(ah));
 if (AR_SREV_9340(ah))
 tmpReg &= AR9340_INTR_SYNC_LOCAL_TIMEOUT;
 else
@@ -1381,7 +1381,7 @@ static bool ath9k_hw_set_reset(struct ath_hw *ah, int type)
 if (tmpReg) {
 u32 val;
-REG_WRITE(ah, AR_INTR_SYNC_ENABLE, 0);
+REG_WRITE(ah, AR_INTR_SYNC_ENABLE(ah), 0);
 val = AR_RC_HOSTIF;
 if (!AR_SREV_9300_20_OR_LATER(ah))
@@ -1414,7 +1414,7 @@ static bool ath9k_hw_set_reset(struct ath_hw *ah, int type)
 REG_CLR_BIT(ah, AR_CFG, AR_CFG_HALT_REQ);
 }
-REG_WRITE(ah, AR_RTC_RC, rst_flags);
+REG_WRITE(ah, AR_RTC_RC(ah), rst_flags);
 REGWRITE_BUFFER_FLUSH(ah);
@@ -1425,8 +1425,8 @@ static bool ath9k_hw_set_reset(struct ath_hw *ah, int type)
 else
 udelay(100);
-REG_WRITE(ah, AR_RTC_RC, 0);
-if (!ath9k_hw_wait(ah, AR_RTC_RC, AR_RTC_RC_M, 0, AH_WAIT_TIMEOUT)) {
+REG_WRITE(ah, AR_RTC_RC(ah), 0);
+if (!ath9k_hw_wait(ah, AR_RTC_RC(ah), AR_RTC_RC_M, 0, AH_WAIT_TIMEOUT)) {
 ath_dbg(ath9k_hw_common(ah), RESET, "RTC stuck in MAC reset\n");
 return false;
 }
@@ -1445,17 +1445,17 @@ static bool ath9k_hw_set_reset_power_on(struct ath_hw *ah)
 ENABLE_REGWRITE_BUFFER(ah);
 if (AR_SREV_9300_20_OR_LATER(ah)) {
-REG_WRITE(ah, AR_WA, ah->WARegVal);
+REG_WRITE(ah, AR_WA(ah), ah->WARegVal);
 udelay(10);
 }
-REG_WRITE(ah, AR_RTC_FORCE_WAKE, AR_RTC_FORCE_WAKE_EN |
+REG_WRITE(ah, AR_RTC_FORCE_WAKE(ah), AR_RTC_FORCE_WAKE_EN |
 AR_RTC_FORCE_WAKE_ON_INT);
 if (!AR_SREV_9100(ah) && !AR_SREV_9300_20_OR_LATER(ah))
 REG_WRITE(ah, AR_RC, AR_RC_AHB);
-REG_WRITE(ah, AR_RTC_RESET, 0);
+REG_WRITE(ah, AR_RTC_RESET(ah), 0);
 REGWRITE_BUFFER_FLUSH(ah);
@@ -1464,11 +1464,11 @@ static bool ath9k_hw_set_reset_power_on(struct ath_hw *ah)
 if (!AR_SREV_9100(ah) && !AR_SREV_9300_20_OR_LATER(ah))
 REG_WRITE(ah, AR_RC, 0);
-REG_WRITE(ah, AR_RTC_RESET, 1);
+REG_WRITE(ah, AR_RTC_RESET(ah), 1);
 if (!ath9k_hw_wait(ah,
-AR_RTC_STATUS,
-AR_RTC_STATUS_M,
+AR_RTC_STATUS(ah),
+AR_RTC_STATUS_M(ah),
 AR_RTC_STATUS_ON,
 AH_WAIT_TIMEOUT)) {
 ath_dbg(ath9k_hw_common(ah), RESET, "RTC not waking up\n");
@@ -1483,11 +1483,11 @@ static bool ath9k_hw_set_reset_reg(struct ath_hw *ah, u32 type)
 bool ret = false;
 if (AR_SREV_9300_20_OR_LATER(ah)) {
-REG_WRITE(ah, AR_WA, ah->WARegVal);
+REG_WRITE(ah, AR_WA(ah), ah->WARegVal);
 udelay(10);
 }
-REG_WRITE(ah, AR_RTC_FORCE_WAKE,
+REG_WRITE(ah, AR_RTC_FORCE_WAKE(ah),
 AR_RTC_FORCE_WAKE_EN | AR_RTC_FORCE_WAKE_ON_INT);
 if (!ah->reset_power_on)
@@ -1521,7 +1521,7 @@ static bool ath9k_hw_chip_reset(struct ath_hw *ah,
 else
 reset_type = ATH9K_RESET_COLD;
 } else if (ah->chip_fullsleep || REG_READ(ah, AR_Q_TXE) ||
-(REG_READ(ah, AR_CR) & AR_CR_RXE))
+(REG_READ(ah, AR_CR) & AR_CR_RXE(ah)))
 reset_type = ATH9K_RESET_COLD;
 if (!ath9k_hw_set_reset_reg(ah, reset_type))
@@ -1955,7 +1955,7 @@ int ath9k_hw_reset(struct ath_hw *ah, struct ath9k_channel *chan,
 ath9k_hw_settsf64(ah, tsf + tsf_offset);
 if (AR_SREV_9280_20_OR_LATER(ah))
-REG_SET_BIT(ah, AR_GPIO_INPUT_EN_VAL, AR_GPIO_JTAG_DISABLE);
+REG_SET_BIT(ah, AR_GPIO_INPUT_EN_VAL(ah), AR_GPIO_JTAG_DISABLE);
 if (!AR_SREV_9300_20_OR_LATER(ah))
 ar9002_hw_enable_async_fifo(ah);
@@ -2017,7 +2017,7 @@ int ath9k_hw_reset(struct ath_hw *ah, struct ath9k_channel *chan,
 ath9k_hw_set_dma(ah);
 if (!ath9k_hw_mci_is_enabled(ah))
-REG_WRITE(ah, AR_OBS, 8);
+REG_WRITE(ah, AR_OBS(ah), 8);
 ENABLE_REG_RMW_BUFFER(ah);
 if (ah->config.rx_intr_mitigation) {
@@ -2111,7 +2111,7 @@ static void ath9k_set_power_sleep(struct ath_hw *ah)
 * Clear the RTC force wake bit to allow the
 * mac to go to sleep.
 */
-REG_CLR_BIT(ah, AR_RTC_FORCE_WAKE, AR_RTC_FORCE_WAKE_EN);
+REG_CLR_BIT(ah, AR_RTC_FORCE_WAKE(ah), AR_RTC_FORCE_WAKE_EN);
 if (ath9k_hw_mci_is_enabled(ah))
 udelay(100);
@@ -2121,13 +2121,13 @@ static void ath9k_set_power_sleep(struct ath_hw *ah)
 /* Shutdown chip. Active low */
 if (!AR_SREV_5416(ah) && !AR_SREV_9271(ah)) {
-REG_CLR_BIT(ah, AR_RTC_RESET, AR_RTC_RESET_EN);
+REG_CLR_BIT(ah, AR_RTC_RESET(ah), AR_RTC_RESET_EN);
 udelay(2);
 }
-/* Clear Bit 14 of AR_WA after putting chip into Full Sleep mode. */
+/* Clear Bit 14 of AR_WA(ah) after putting chip into Full Sleep mode. */
 if (AR_SREV_9300_20_OR_LATER(ah))
-REG_WRITE(ah, AR_WA, ah->WARegVal & ~AR_WA_D3_L1_DISABLE);
+REG_WRITE(ah, AR_WA(ah), ah->WARegVal & ~AR_WA_D3_L1_DISABLE);
 }
 /*
@@ -2143,7 +2143,7 @@ static void ath9k_set_power_network_sleep(struct ath_hw *ah)
 if (!(pCap->hw_caps & ATH9K_HW_CAP_AUTOSLEEP)) {
 /* Set WakeOnInterrupt bit; clear ForceWake bit */
-REG_WRITE(ah, AR_RTC_FORCE_WAKE,
+REG_WRITE(ah, AR_RTC_FORCE_WAKE(ah),
 AR_RTC_FORCE_WAKE_ON_INT);
 } else {
@@ -2163,15 +2163,15 @@ static void ath9k_set_power_network_sleep(struct ath_hw *ah)
 * Clear the RTC force wake bit to allow the
 * mac to go to sleep.
 */
-REG_CLR_BIT(ah, AR_RTC_FORCE_WAKE, AR_RTC_FORCE_WAKE_EN);
+REG_CLR_BIT(ah, AR_RTC_FORCE_WAKE(ah), AR_RTC_FORCE_WAKE_EN);
 if (ath9k_hw_mci_is_enabled(ah))
 udelay(30);
 }
-/* Clear Bit 14 of AR_WA after putting chip into Net Sleep mode. */
+/* Clear Bit 14 of AR_WA(ah) after putting chip into Net Sleep mode. */
 if (AR_SREV_9300_20_OR_LATER(ah))
-REG_WRITE(ah, AR_WA, ah->WARegVal & ~AR_WA_D3_L1_DISABLE);
+REG_WRITE(ah, AR_WA(ah), ah->WARegVal & ~AR_WA_D3_L1_DISABLE);
 }
 static bool ath9k_hw_set_power_awake(struct ath_hw *ah)
@@ -2179,14 +2179,14 @@ static bool ath9k_hw_set_power_awake(struct ath_hw *ah)
 u32 val;
 int i;
-/* Set Bits 14 and 17 of AR_WA before powering on the chip. */
+/* Set Bits 14 and 17 of AR_WA(ah) before powering on the chip. */
 if (AR_SREV_9300_20_OR_LATER(ah)) {
-REG_WRITE(ah, AR_WA, ah->WARegVal);
+REG_WRITE(ah, AR_WA(ah), ah->WARegVal);
 udelay(10);
 }
-if ((REG_READ(ah, AR_RTC_STATUS) &
-AR_RTC_STATUS_M) == AR_RTC_STATUS_SHUTDOWN) {
+if ((REG_READ(ah, AR_RTC_STATUS(ah)) &
+AR_RTC_STATUS_M(ah)) == AR_RTC_STATUS_SHUTDOWN) {
 if (!ath9k_hw_set_reset_reg(ah, ATH9K_RESET_POWER_ON)) {
 return false;
 }
@@ -2194,10 +2194,10 @@ static bool ath9k_hw_set_power_awake(struct ath_hw *ah)
 ath9k_hw_init_pll(ah, NULL);
 }
 if (AR_SREV_9100(ah))
-REG_SET_BIT(ah, AR_RTC_RESET,
+REG_SET_BIT(ah, AR_RTC_RESET(ah),
 AR_RTC_RESET_EN);
-REG_SET_BIT(ah, AR_RTC_FORCE_WAKE,
+REG_SET_BIT(ah, AR_RTC_FORCE_WAKE(ah),
 AR_RTC_FORCE_WAKE_EN);
 if (AR_SREV_9100(ah))
 mdelay(10);
@@ -2205,11 +2205,11 @@ static bool ath9k_hw_set_power_awake(struct ath_hw *ah)
 udelay(50);
 for (i = POWER_UP_TIME / 50; i > 0; i--) {
-val = REG_READ(ah, AR_RTC_STATUS) & AR_RTC_STATUS_M;
+val = REG_READ(ah, AR_RTC_STATUS(ah)) & AR_RTC_STATUS_M(ah);
 if (val == AR_RTC_STATUS_ON)
 break;
 udelay(50);
-REG_SET_BIT(ah, AR_RTC_FORCE_WAKE,
+REG_SET_BIT(ah, AR_RTC_FORCE_WAKE(ah),
 AR_RTC_FORCE_WAKE_EN);
 }
 if (i == 0) {
@@ -2701,16 +2701,16 @@ static void ath9k_hw_gpio_cfg_output_mux(struct ath_hw *ah, u32 gpio, u32 type)
 u32 gpio_shift, tmp;
 if (gpio > 11)
-addr = AR_GPIO_OUTPUT_MUX3;
+addr = AR_GPIO_OUTPUT_MUX3(ah);
 else if (gpio > 5)
-addr = AR_GPIO_OUTPUT_MUX2;
+addr = AR_GPIO_OUTPUT_MUX2(ah);
 else
-addr = AR_GPIO_OUTPUT_MUX1;
+addr = AR_GPIO_OUTPUT_MUX1(ah);
 gpio_shift = (gpio % 6) * 5;
 if (AR_SREV_9280_20_OR_LATER(ah) ||
-(addr != AR_GPIO_OUTPUT_MUX1)) {
+(addr != AR_GPIO_OUTPUT_MUX1(ah))) {
 REG_RMW(ah, addr, (type << gpio_shift),
 (0x1f << gpio_shift));
 } else {
@@ -2754,13 +2754,13 @@ static void ath9k_hw_gpio_cfg_wmac(struct ath_hw *ah, u32 gpio, bool out,
 AR7010_GPIO_OE_MASK << gpio_shift);
 } else if (AR_SREV_SOC(ah)) {
 gpio_set = out ? 1 : 0;
-REG_RMW(ah, AR_GPIO_OE_OUT, gpio_set << gpio_shift,
+REG_RMW(ah, AR_GPIO_OE_OUT(ah), gpio_set << gpio_shift,
 gpio_set << gpio_shift);
 } else {
 gpio_shift = gpio << 1;
 gpio_set = out ?
 AR_GPIO_OE_OUT_DRV_ALL : AR_GPIO_OE_OUT_DRV_NO;
-REG_RMW(ah, AR_GPIO_OE_OUT, gpio_set << gpio_shift,
+REG_RMW(ah, AR_GPIO_OE_OUT(ah), gpio_set << gpio_shift,
 AR_GPIO_OE_OUT_DRV << gpio_shift);
 if (out)
@@ -2813,7 +2813,7 @@ u32 ath9k_hw_gpio_get(struct ath_hw *ah, u32 gpio)
 u32 val = 0xffffffff;
 #define MS_REG_READ(x, y) \
-(MS(REG_READ(ah, AR_GPIO_IN_OUT), x##_GPIO_IN_VAL) & BIT(y))
+(MS(REG_READ(ah, AR_GPIO_IN_OUT(ah)), x##_GPIO_IN_VAL) & BIT(y))
 WARN_ON(gpio >= ah->caps.num_gpio_pins);
@@ -2829,7 +2829,7 @@ u32 ath9k_hw_gpio_get(struct ath_hw *ah, u32 gpio)
 else if (AR_DEVID_7010(ah))
 val = REG_READ(ah, AR7010_GPIO_IN) & BIT(gpio);
 else if (AR_SREV_9300_20_OR_LATER(ah))
-val = REG_READ(ah, AR_GPIO_IN) & BIT(gpio);
+val = REG_READ(ah, AR_GPIO_IN(ah)) & BIT(gpio);
 else
 val = MS_REG_READ(AR, gpio);
 } else if (BIT(gpio) & ah->caps.gpio_requested) {
@@ -2853,7 +2853,7 @@ void ath9k_hw_set_gpio(struct ath_hw *ah, u32 gpio, u32 val)
 if (BIT(gpio) & ah->caps.gpio_mask) {
 u32 out_addr = AR_DEVID_7010(ah) ?
-AR7010_GPIO_OUT : AR_GPIO_IN_OUT;
+AR7010_GPIO_OUT : AR_GPIO_IN_OUT(ah);
 REG_RMW(ah, out_addr, val << gpio, BIT(gpio));
 } else if (BIT(gpio) & ah->caps.gpio_requested) {


@@ -707,7 +707,7 @@ bool ath9k_hw_stopdmarecv(struct ath_hw *ah, bool *reset)
 /* Wait for rx enable bit to go low */
 for (i = AH_RX_STOP_DMA_TIMEOUT / AH_TIME_QUANTUM; i != 0; i--) {
-if ((REG_READ(ah, AR_CR) & AR_CR_RXE) == 0)
+if ((REG_READ(ah, AR_CR) & AR_CR_RXE(ah)) == 0)
 break;
 if (!AR_SREV_9300_20_OR_LATER(ah)) {
@@ -762,14 +762,14 @@ bool ath9k_hw_intrpend(struct ath_hw *ah)
 if (AR_SREV_9100(ah))
 return true;
-host_isr = REG_READ(ah, AR_INTR_ASYNC_CAUSE);
+host_isr = REG_READ(ah, AR_INTR_ASYNC_CAUSE(ah));
 if (((host_isr & AR_INTR_MAC_IRQ) ||
 (host_isr & AR_INTR_ASYNC_MASK_MCI)) &&
 (host_isr != AR_INTR_SPURIOUS))
 return true;
-host_isr = REG_READ(ah, AR_INTR_SYNC_CAUSE);
+host_isr = REG_READ(ah, AR_INTR_SYNC_CAUSE(ah));
 if ((host_isr & AR_INTR_SYNC_DEFAULT)
 && (host_isr != AR_INTR_SPURIOUS))
 return true;
@@ -786,11 +786,11 @@ void ath9k_hw_kill_interrupts(struct ath_hw *ah)
 REG_WRITE(ah, AR_IER, AR_IER_DISABLE);
 (void) REG_READ(ah, AR_IER);
 if (!AR_SREV_9100(ah)) {
-REG_WRITE(ah, AR_INTR_ASYNC_ENABLE, 0);
-(void) REG_READ(ah, AR_INTR_ASYNC_ENABLE);
+REG_WRITE(ah, AR_INTR_ASYNC_ENABLE(ah), 0);
+(void) REG_READ(ah, AR_INTR_ASYNC_ENABLE(ah));
-REG_WRITE(ah, AR_INTR_SYNC_ENABLE, 0);
-(void) REG_READ(ah, AR_INTR_SYNC_ENABLE);
+REG_WRITE(ah, AR_INTR_SYNC_ENABLE(ah), 0);
+(void) REG_READ(ah, AR_INTR_SYNC_ENABLE(ah));
 }
 }
 EXPORT_SYMBOL(ath9k_hw_kill_interrupts);
@@ -824,11 +824,11 @@ static void __ath9k_hw_enable_interrupts(struct ath_hw *ah)
 ath_dbg(common, INTERRUPT, "enable IER\n");
 REG_WRITE(ah, AR_IER, AR_IER_ENABLE);
 if (!AR_SREV_9100(ah)) {
-REG_WRITE(ah, AR_INTR_ASYNC_ENABLE, async_mask);
-REG_WRITE(ah, AR_INTR_ASYNC_MASK, async_mask);
+REG_WRITE(ah, AR_INTR_ASYNC_ENABLE(ah), async_mask);
+REG_WRITE(ah, AR_INTR_ASYNC_MASK(ah), async_mask);
-REG_WRITE(ah, AR_INTR_SYNC_ENABLE, sync_default);
-REG_WRITE(ah, AR_INTR_SYNC_MASK, sync_default);
+REG_WRITE(ah, AR_INTR_SYNC_ENABLE(ah), sync_default);
+REG_WRITE(ah, AR_INTR_SYNC_MASK(ah), sync_default);
 }
 ath_dbg(common, INTERRUPT, "AR_IMR 0x%x IER 0x%x\n",
 REG_READ(ah, AR_IMR), REG_READ(ah, AR_IER));
@@ -841,26 +841,26 @@ static void __ath9k_hw_enable_interrupts(struct ath_hw *ah)
 ath_dbg(ath9k_hw_common(ah), INTERRUPT,
 "Enabling MSI, msi_mask=0x%X\n", ah->msi_mask);
-REG_WRITE(ah, AR_INTR_PRIO_ASYNC_ENABLE, ah->msi_mask);
-REG_WRITE(ah, AR_INTR_PRIO_ASYNC_MASK, ah->msi_mask);
+REG_WRITE(ah, AR_INTR_PRIO_ASYNC_ENABLE(ah), ah->msi_mask);
+REG_WRITE(ah, AR_INTR_PRIO_ASYNC_MASK(ah), ah->msi_mask);
 ath_dbg(ath9k_hw_common(ah), INTERRUPT,
 "AR_INTR_PRIO_ASYNC_ENABLE=0x%X, AR_INTR_PRIO_ASYNC_MASK=0x%X\n",
-REG_READ(ah, AR_INTR_PRIO_ASYNC_ENABLE),
-REG_READ(ah, AR_INTR_PRIO_ASYNC_MASK));
+REG_READ(ah, AR_INTR_PRIO_ASYNC_ENABLE(ah)),
+REG_READ(ah, AR_INTR_PRIO_ASYNC_MASK(ah)));
 if (ah->msi_reg == 0)
-ah->msi_reg = REG_READ(ah, AR_PCIE_MSI);
+ah->msi_reg = REG_READ(ah, AR_PCIE_MSI(ah));
 ath_dbg(ath9k_hw_common(ah), INTERRUPT,
 "AR_PCIE_MSI=0x%X, ah->msi_reg = 0x%X\n",
-AR_PCIE_MSI, ah->msi_reg);
+AR_PCIE_MSI(ah), ah->msi_reg);
 i = 0;
 do {
-REG_WRITE(ah, AR_PCIE_MSI,
+REG_WRITE(ah, AR_PCIE_MSI(ah),
 (ah->msi_reg | AR_PCIE_MSI_ENABLE)
 & msi_pend_addr_mask);
-_msi_reg = REG_READ(ah, AR_PCIE_MSI);
+_msi_reg = REG_READ(ah, AR_PCIE_MSI(ah));
 i++;
 } while ((_msi_reg & AR_PCIE_MSI_ENABLE) == 0 && i < 200);
@@ -918,8 +918,8 @@ void ath9k_hw_set_interrupts(struct ath_hw *ah)
 if (ah->msi_enabled) {
 ath_dbg(common, INTERRUPT, "Clearing AR_INTR_PRIO_ASYNC_ENABLE\n");
-REG_WRITE(ah, AR_INTR_PRIO_ASYNC_ENABLE, 0);
-REG_READ(ah, AR_INTR_PRIO_ASYNC_ENABLE);
+REG_WRITE(ah, AR_INTR_PRIO_ASYNC_ENABLE(ah), 0);
+REG_READ(ah, AR_INTR_PRIO_ASYNC_ENABLE(ah));
 }
 ath_dbg(common, INTERRUPT, "New interrupt mask 0x%x\n", ints);


@@ -804,14 +804,14 @@ static bool ath_pci_eeprom_read(struct ath_common *common, u32 off, u16 *data)
 common->ops->read(ah, AR5416_EEPROM_OFFSET + (off << AR5416_EEPROM_S));
 if (!ath9k_hw_wait(ah,
-AR_EEPROM_STATUS_DATA,
+AR_EEPROM_STATUS_DATA(ah),
 AR_EEPROM_STATUS_DATA_BUSY |
 AR_EEPROM_STATUS_DATA_PROT_ACCESS, 0,
 AH_WAIT_TIMEOUT)) {
 return false;
 }
-*data = MS(common->ops->read(ah, AR_EEPROM_STATUS_DATA),
+*data = MS(common->ops->read(ah, AR_EEPROM_STATUS_DATA(ah)),
 AR_EEPROM_STATUS_DATA_VAL);
 return true;


@ -20,7 +20,7 @@
#include "../reg.h"
#define AR_CR 0x0008
#define AR_CR_RXE (AR_SREV_9300_20_OR_LATER(ah) ? 0x0000000c : 0x00000004)
#define AR_CR_RXE(_ah) (AR_SREV_9300_20_OR_LATER(_ah) ? 0x0000000c : 0x00000004)
#define AR_CR_RXD 0x00000020
#define AR_CR_SWI 0x00000040
@@ -352,10 +352,10 @@
 #define AR_ISR_S1_QCU_TXEOL 0x03FF0000
 #define AR_ISR_S1_QCU_TXEOL_S 16
-#define AR_ISR_S2_S (AR_SREV_9300_20_OR_LATER(ah) ? 0x00d0 : 0x00cc)
-#define AR_ISR_S3_S (AR_SREV_9300_20_OR_LATER(ah) ? 0x00d4 : 0x00d0)
-#define AR_ISR_S4_S (AR_SREV_9300_20_OR_LATER(ah) ? 0x00d8 : 0x00d4)
-#define AR_ISR_S5_S (AR_SREV_9300_20_OR_LATER(ah) ? 0x00dc : 0x00d8)
+#define AR_ISR_S2_S(_ah) (AR_SREV_9300_20_OR_LATER(_ah) ? 0x00d0 : 0x00cc)
+#define AR_ISR_S3_S(_ah) (AR_SREV_9300_20_OR_LATER(_ah) ? 0x00d4 : 0x00d0)
+#define AR_ISR_S4_S(_ah) (AR_SREV_9300_20_OR_LATER(_ah) ? 0x00d8 : 0x00d4)
+#define AR_ISR_S5_S(_ah) (AR_SREV_9300_20_OR_LATER(_ah) ? 0x00dc : 0x00d8)
 #define AR_DMADBG_0 0x00e0
 #define AR_DMADBG_1 0x00e4
 #define AR_DMADBG_2 0x00e8
@@ -699,7 +699,7 @@
 #define AR_RC_APB 0x00000002
 #define AR_RC_HOSTIF 0x00000100
-#define AR_WA (AR_SREV_9340(ah) ? 0x40c4 : 0x4004)
+#define AR_WA(_ah) (AR_SREV_9340(_ah) ? 0x40c4 : 0x4004)
 #define AR_WA_BIT6 (1 << 6)
 #define AR_WA_BIT7 (1 << 7)
 #define AR_WA_BIT23 (1 << 23)
@@ -721,7 +721,7 @@
 #define AR_PM_STATE 0x4008
 #define AR_PM_STATE_PME_D3COLD_VAUX 0x00100000
-#define AR_HOST_TIMEOUT (AR_SREV_9340(ah) ? 0x4008 : 0x4018)
+#define AR_HOST_TIMEOUT(_ah) (AR_SREV_9340(_ah) ? 0x4008 : 0x4018)
 #define AR_HOST_TIMEOUT_APB_CNTR 0x0000FFFF
 #define AR_HOST_TIMEOUT_APB_CNTR_S 0
 #define AR_HOST_TIMEOUT_LCL_CNTR 0xFFFF0000
@@ -750,12 +750,12 @@
 #define EEPROM_PROTECT_RP_1024_2047 0x4000
 #define EEPROM_PROTECT_WP_1024_2047 0x8000
-#define AR_SREV \
-	((AR_SREV_9100(ah)) ? 0x0600 : (AR_SREV_9340(ah) \
+#define AR_SREV(_ah) \
+	((AR_SREV_9100(_ah)) ? 0x0600 : (AR_SREV_9340(_ah) \
 		? 0x400c : 0x4020))
-#define AR_SREV_ID \
-	((AR_SREV_9100(ah)) ? 0x00000FFF : 0x000000FF)
+#define AR_SREV_ID(_ah) \
+	((AR_SREV_9100(_ah)) ? 0x00000FFF : 0x000000FF)
 #define AR_SREV_VERSION 0x000000F0
 #define AR_SREV_VERSION_S 4
 #define AR_SREV_REVISION 0x00000007
@@ -1038,11 +1038,11 @@ enum ath_usb_dev {
 #define AR_INTR_SPURIOUS 0xFFFFFFFF
-#define AR_INTR_SYNC_CAUSE (AR_SREV_9340(ah) ? 0x4010 : 0x4028)
-#define AR_INTR_SYNC_CAUSE_CLR (AR_SREV_9340(ah) ? 0x4010 : 0x4028)
+#define AR_INTR_SYNC_CAUSE(_ah) (AR_SREV_9340(_ah) ? 0x4010 : 0x4028)
+#define AR_INTR_SYNC_CAUSE_CLR(_ah) (AR_SREV_9340(_ah) ? 0x4010 : 0x4028)
-#define AR_INTR_SYNC_ENABLE (AR_SREV_9340(ah) ? 0x4014 : 0x402c)
+#define AR_INTR_SYNC_ENABLE(_ah) (AR_SREV_9340(_ah) ? 0x4014 : 0x402c)
 #define AR_INTR_SYNC_ENABLE_GPIO 0xFFFC0000
 #define AR_INTR_SYNC_ENABLE_GPIO_S 18
@@ -1084,18 +1084,18 @@ enum {
 };
-#define AR_INTR_ASYNC_MASK (AR_SREV_9340(ah) ? 0x4018 : 0x4030)
+#define AR_INTR_ASYNC_MASK(_ah) (AR_SREV_9340(_ah) ? 0x4018 : 0x4030)
 #define AR_INTR_ASYNC_MASK_GPIO 0xFFFC0000
 #define AR_INTR_ASYNC_MASK_GPIO_S 18
 #define AR_INTR_ASYNC_MASK_MCI 0x00000080
 #define AR_INTR_ASYNC_MASK_MCI_S 7
-#define AR_INTR_SYNC_MASK (AR_SREV_9340(ah) ? 0x401c : 0x4034)
+#define AR_INTR_SYNC_MASK(_ah) (AR_SREV_9340(_ah) ? 0x401c : 0x4034)
 #define AR_INTR_SYNC_MASK_GPIO 0xFFFC0000
 #define AR_INTR_SYNC_MASK_GPIO_S 18
-#define AR_INTR_ASYNC_CAUSE_CLR (AR_SREV_9340(ah) ? 0x4020 : 0x4038)
-#define AR_INTR_ASYNC_CAUSE (AR_SREV_9340(ah) ? 0x4020 : 0x4038)
+#define AR_INTR_ASYNC_CAUSE_CLR(_ah) (AR_SREV_9340(_ah) ? 0x4020 : 0x4038)
+#define AR_INTR_ASYNC_CAUSE(_ah) (AR_SREV_9340(_ah) ? 0x4020 : 0x4038)
 #define AR_INTR_ASYNC_CAUSE_MCI 0x00000080
 #define AR_INTR_ASYNC_USED (AR_INTR_MAC_IRQ | \
 			    AR_INTR_ASYNC_CAUSE_MCI)
@@ -1105,13 +1105,13 @@ enum {
 #define AR_INTR_ASYNC_ENABLE_MCI_S 7
-#define AR_INTR_ASYNC_ENABLE (AR_SREV_9340(ah) ? 0x4024 : 0x403c)
+#define AR_INTR_ASYNC_ENABLE(_ah) (AR_SREV_9340(_ah) ? 0x4024 : 0x403c)
 #define AR_INTR_ASYNC_ENABLE_GPIO 0xFFFC0000
 #define AR_INTR_ASYNC_ENABLE_GPIO_S 18
 #define AR_PCIE_SERDES 0x4040
 #define AR_PCIE_SERDES2 0x4044
-#define AR_PCIE_PM_CTRL (AR_SREV_9340(ah) ? 0x4004 : 0x4014)
+#define AR_PCIE_PM_CTRL(_ah) (AR_SREV_9340(_ah) ? 0x4004 : 0x4014)
 #define AR_PCIE_PM_CTRL_ENA 0x00080000
 #define AR_PCIE_PHY_REG3 0x18c08
@@ -1156,7 +1156,7 @@ enum {
 #define AR9580_GPIO_MASK 0x0000F4FF
 #define AR7010_GPIO_MASK 0x0000FFFF
-#define AR_GPIO_IN_OUT (AR_SREV_9340(ah) ? 0x4028 : 0x4048)
+#define AR_GPIO_IN_OUT(_ah) (AR_SREV_9340(_ah) ? 0x4028 : 0x4048)
 #define AR_GPIO_IN_VAL 0x0FFFC000
 #define AR_GPIO_IN_VAL_S 14
 #define AR928X_GPIO_IN_VAL 0x000FFC00
@@ -1170,12 +1170,12 @@ enum {
 #define AR7010_GPIO_IN_VAL 0x0000FFFF
 #define AR7010_GPIO_IN_VAL_S 0
-#define AR_GPIO_IN (AR_SREV_9340(ah) ? 0x402c : 0x404c)
+#define AR_GPIO_IN(_ah) (AR_SREV_9340(_ah) ? 0x402c : 0x404c)
 #define AR9300_GPIO_IN_VAL 0x0001FFFF
 #define AR9300_GPIO_IN_VAL_S 0
-#define AR_GPIO_OE_OUT (AR_SREV_9340(ah) ? 0x4030 : \
-	(AR_SREV_9300_20_OR_LATER(ah) ? 0x4050 : 0x404c))
+#define AR_GPIO_OE_OUT(_ah) (AR_SREV_9340(_ah) ? 0x4030 : \
+	(AR_SREV_9300_20_OR_LATER(_ah) ? 0x4050 : 0x404c))
 #define AR_GPIO_OE_OUT_DRV 0x3
 #define AR_GPIO_OE_OUT_DRV_NO 0x0
 #define AR_GPIO_OE_OUT_DRV_LOW 0x1
@@ -1197,13 +1197,13 @@ enum {
 #define AR7010_GPIO_INT_MASK 0x52024
 #define AR7010_GPIO_FUNCTION 0x52028
-#define AR_GPIO_INTR_POL (AR_SREV_9340(ah) ? 0x4038 : \
-	(AR_SREV_9300_20_OR_LATER(ah) ? 0x4058 : 0x4050))
+#define AR_GPIO_INTR_POL(_ah) (AR_SREV_9340(_ah) ? 0x4038 : \
+	(AR_SREV_9300_20_OR_LATER(_ah) ? 0x4058 : 0x4050))
 #define AR_GPIO_INTR_POL_VAL 0x0001FFFF
 #define AR_GPIO_INTR_POL_VAL_S 0
-#define AR_GPIO_INPUT_EN_VAL (AR_SREV_9340(ah) ? 0x403c : \
-	(AR_SREV_9300_20_OR_LATER(ah) ? 0x405c : 0x4054))
+#define AR_GPIO_INPUT_EN_VAL(_ah) (AR_SREV_9340(_ah) ? 0x403c : \
+	(AR_SREV_9300_20_OR_LATER(_ah) ? 0x405c : 0x4054))
 #define AR_GPIO_INPUT_EN_VAL_BT_PRIORITY_DEF 0x00000004
 #define AR_GPIO_INPUT_EN_VAL_BT_PRIORITY_S 2
 #define AR_GPIO_INPUT_EN_VAL_BT_FREQUENCY_DEF 0x00000008
@@ -1221,15 +1221,15 @@ enum {
 #define AR_GPIO_RTC_RESET_OVERRIDE_ENABLE 0x00010000
 #define AR_GPIO_JTAG_DISABLE 0x00020000
-#define AR_GPIO_INPUT_MUX1 (AR_SREV_9340(ah) ? 0x4040 : \
-	(AR_SREV_9300_20_OR_LATER(ah) ? 0x4060 : 0x4058))
+#define AR_GPIO_INPUT_MUX1(_ah) (AR_SREV_9340(_ah) ? 0x4040 : \
+	(AR_SREV_9300_20_OR_LATER(_ah) ? 0x4060 : 0x4058))
 #define AR_GPIO_INPUT_MUX1_BT_ACTIVE 0x000f0000
 #define AR_GPIO_INPUT_MUX1_BT_ACTIVE_S 16
 #define AR_GPIO_INPUT_MUX1_BT_PRIORITY 0x00000f00
 #define AR_GPIO_INPUT_MUX1_BT_PRIORITY_S 8
-#define AR_GPIO_INPUT_MUX2 (AR_SREV_9340(ah) ? 0x4044 : \
-	(AR_SREV_9300_20_OR_LATER(ah) ? 0x4064 : 0x405c))
+#define AR_GPIO_INPUT_MUX2(_ah) (AR_SREV_9340(_ah) ? 0x4044 : \
+	(AR_SREV_9300_20_OR_LATER(_ah) ? 0x4064 : 0x405c))
 #define AR_GPIO_INPUT_MUX2_CLK25 0x0000000f
 #define AR_GPIO_INPUT_MUX2_CLK25_S 0
 #define AR_GPIO_INPUT_MUX2_RFSILENT 0x000000f0
@@ -1237,18 +1237,18 @@ enum {
 #define AR_GPIO_INPUT_MUX2_RTC_RESET 0x00000f00
 #define AR_GPIO_INPUT_MUX2_RTC_RESET_S 8
-#define AR_GPIO_OUTPUT_MUX1 (AR_SREV_9340(ah) ? 0x4048 : \
-	(AR_SREV_9300_20_OR_LATER(ah) ? 0x4068 : 0x4060))
-#define AR_GPIO_OUTPUT_MUX2 (AR_SREV_9340(ah) ? 0x404c : \
-	(AR_SREV_9300_20_OR_LATER(ah) ? 0x406c : 0x4064))
-#define AR_GPIO_OUTPUT_MUX3 (AR_SREV_9340(ah) ? 0x4050 : \
-	(AR_SREV_9300_20_OR_LATER(ah) ? 0x4070 : 0x4068))
+#define AR_GPIO_OUTPUT_MUX1(_ah) (AR_SREV_9340(_ah) ? 0x4048 : \
+	(AR_SREV_9300_20_OR_LATER(_ah) ? 0x4068 : 0x4060))
+#define AR_GPIO_OUTPUT_MUX2(_ah) (AR_SREV_9340(_ah) ? 0x404c : \
+	(AR_SREV_9300_20_OR_LATER(_ah) ? 0x406c : 0x4064))
+#define AR_GPIO_OUTPUT_MUX3(_ah) (AR_SREV_9340(_ah) ? 0x4050 : \
+	(AR_SREV_9300_20_OR_LATER(_ah) ? 0x4070 : 0x4068))
-#define AR_INPUT_STATE (AR_SREV_9340(ah) ? 0x4054 : \
-	(AR_SREV_9300_20_OR_LATER(ah) ? 0x4074 : 0x406c))
+#define AR_INPUT_STATE(_ah) (AR_SREV_9340(_ah) ? 0x4054 : \
+	(AR_SREV_9300_20_OR_LATER(_ah) ? 0x4074 : 0x406c))
-#define AR_EEPROM_STATUS_DATA (AR_SREV_9340(ah) ? 0x40c8 : \
-	(AR_SREV_9300_20_OR_LATER(ah) ? 0x4084 : 0x407c))
+#define AR_EEPROM_STATUS_DATA(_ah) (AR_SREV_9340(_ah) ? 0x40c8 : \
+	(AR_SREV_9300_20_OR_LATER(_ah) ? 0x4084 : 0x407c))
 #define AR_EEPROM_STATUS_DATA_VAL 0x0000ffff
 #define AR_EEPROM_STATUS_DATA_VAL_S 0
 #define AR_EEPROM_STATUS_DATA_BUSY 0x00010000
@@ -1256,13 +1256,13 @@ enum {
 #define AR_EEPROM_STATUS_DATA_PROT_ACCESS 0x00040000
 #define AR_EEPROM_STATUS_DATA_ABSENT_ACCESS 0x00080000
-#define AR_OBS (AR_SREV_9340(ah) ? 0x405c : \
-	(AR_SREV_9300_20_OR_LATER(ah) ? 0x4088 : 0x4080))
+#define AR_OBS(_ah) (AR_SREV_9340(_ah) ? 0x405c : \
+	(AR_SREV_9300_20_OR_LATER(_ah) ? 0x4088 : 0x4080))
-#define AR_GPIO_PDPU (AR_SREV_9300_20_OR_LATER(ah) ? 0x4090 : 0x4088)
+#define AR_GPIO_PDPU(_ah) (AR_SREV_9300_20_OR_LATER(_ah) ? 0x4090 : 0x4088)
-#define AR_PCIE_MSI (AR_SREV_9340(ah) ? 0x40d8 : \
-	(AR_SREV_9300_20_OR_LATER(ah) ? 0x40a4 : 0x4094))
+#define AR_PCIE_MSI(_ah) (AR_SREV_9340(_ah) ? 0x40d8 : \
+	(AR_SREV_9300_20_OR_LATER(_ah) ? 0x40a4 : 0x4094))
 #define AR_PCIE_MSI_ENABLE 0x00000001
 #define AR_PCIE_MSI_HW_DBI_WR_EN 0x02000000
 #define AR_PCIE_MSI_HW_INT_PENDING_ADDR 0xFFA0C1FF /* bits 8..11: value must be 0x5060 */
@@ -1272,10 +1272,10 @@ enum {
 #define AR_INTR_PRIO_RXLP 0x00000002
 #define AR_INTR_PRIO_RXHP 0x00000004
-#define AR_INTR_PRIO_SYNC_ENABLE (AR_SREV_9340(ah) ? 0x4088 : 0x40c4)
-#define AR_INTR_PRIO_ASYNC_MASK (AR_SREV_9340(ah) ? 0x408c : 0x40c8)
-#define AR_INTR_PRIO_SYNC_MASK (AR_SREV_9340(ah) ? 0x4090 : 0x40cc)
-#define AR_INTR_PRIO_ASYNC_ENABLE (AR_SREV_9340(ah) ? 0x4094 : 0x40d4)
+#define AR_INTR_PRIO_SYNC_ENABLE(_ah) (AR_SREV_9340(_ah) ? 0x4088 : 0x40c4)
+#define AR_INTR_PRIO_ASYNC_MASK(_ah) (AR_SREV_9340(_ah) ? 0x408c : 0x40c8)
+#define AR_INTR_PRIO_SYNC_MASK(_ah) (AR_SREV_9340(_ah) ? 0x4090 : 0x40cc)
+#define AR_INTR_PRIO_ASYNC_ENABLE(_ah) (AR_SREV_9340(_ah) ? 0x4094 : 0x40d4)
 #define AR_ENT_OTP 0x40d8
 #define AR_ENT_OTP_CHAIN2_DISABLE 0x00020000
 #define AR_ENT_OTP_49GHZ_DISABLE 0x00100000
@@ -1339,8 +1339,8 @@ enum {
 #define AR_RTC_9160_PLL_CLKSEL_S 14
 #define AR_RTC_BASE 0x00020000
-#define AR_RTC_RC \
-	((AR_SREV_9100(ah)) ? (AR_RTC_BASE + 0x0000) : 0x7000)
+#define AR_RTC_RC(_ah) \
+	((AR_SREV_9100(_ah)) ? (AR_RTC_BASE + 0x0000) : 0x7000)
 #define AR_RTC_RC_M 0x00000003
 #define AR_RTC_RC_MAC_WARM 0x00000001
 #define AR_RTC_RC_MAC_COLD 0x00000002
@@ -1357,8 +1357,8 @@ enum {
 #define AR_RTC_REG_CONTROL1 0x700c
 #define AR_RTC_REG_CONTROL1_SWREG_PROGRAM 0x00000001
-#define AR_RTC_PLL_CONTROL \
-	((AR_SREV_9100(ah)) ? (AR_RTC_BASE + 0x0014) : 0x7014)
+#define AR_RTC_PLL_CONTROL(_ah) \
+	((AR_SREV_9100(_ah)) ? (AR_RTC_BASE + 0x0014) : 0x7014)
 #define AR_RTC_PLL_CONTROL2 0x703c
@@ -1378,15 +1378,15 @@ enum {
 #define PLL4_MEAS_DONE 0x8
 #define SQSUM_DVC_MASK 0x007ffff8
-#define AR_RTC_RESET \
-	((AR_SREV_9100(ah)) ? (AR_RTC_BASE + 0x0040) : 0x7040)
+#define AR_RTC_RESET(_ah) \
+	((AR_SREV_9100(_ah)) ? (AR_RTC_BASE + 0x0040) : 0x7040)
 #define AR_RTC_RESET_EN (0x00000001)
-#define AR_RTC_STATUS \
-	((AR_SREV_9100(ah)) ? (AR_RTC_BASE + 0x0044) : 0x7044)
+#define AR_RTC_STATUS(_ah) \
+	((AR_SREV_9100(_ah)) ? (AR_RTC_BASE + 0x0044) : 0x7044)
-#define AR_RTC_STATUS_M \
-	((AR_SREV_9100(ah)) ? 0x0000003f : 0x0000000f)
+#define AR_RTC_STATUS_M(_ah) \
+	((AR_SREV_9100(_ah)) ? 0x0000003f : 0x0000000f)
 #define AR_RTC_PM_STATUS_M 0x0000000f
@@ -1395,32 +1395,32 @@ enum {
 #define AR_RTC_STATUS_SLEEP 0x00000004
 #define AR_RTC_STATUS_WAKEUP 0x00000008
-#define AR_RTC_SLEEP_CLK \
-	((AR_SREV_9100(ah)) ? (AR_RTC_BASE + 0x0048) : 0x7048)
+#define AR_RTC_SLEEP_CLK(_ah) \
+	((AR_SREV_9100(_ah)) ? (AR_RTC_BASE + 0x0048) : 0x7048)
 #define AR_RTC_FORCE_DERIVED_CLK 0x2
 #define AR_RTC_FORCE_SWREG_PRD 0x00000004
-#define AR_RTC_FORCE_WAKE \
-	((AR_SREV_9100(ah)) ? (AR_RTC_BASE + 0x004c) : 0x704c)
+#define AR_RTC_FORCE_WAKE(_ah) \
+	((AR_SREV_9100(_ah)) ? (AR_RTC_BASE + 0x004c) : 0x704c)
 #define AR_RTC_FORCE_WAKE_EN 0x00000001
 #define AR_RTC_FORCE_WAKE_ON_INT 0x00000002
-#define AR_RTC_INTR_CAUSE \
-	((AR_SREV_9100(ah)) ? (AR_RTC_BASE + 0x0050) : 0x7050)
+#define AR_RTC_INTR_CAUSE(_ah) \
+	((AR_SREV_9100(_ah)) ? (AR_RTC_BASE + 0x0050) : 0x7050)
-#define AR_RTC_INTR_ENABLE \
-	((AR_SREV_9100(ah)) ? (AR_RTC_BASE + 0x0054) : 0x7054)
+#define AR_RTC_INTR_ENABLE(_ah) \
+	((AR_SREV_9100(_ah)) ? (AR_RTC_BASE + 0x0054) : 0x7054)
-#define AR_RTC_INTR_MASK \
-	((AR_SREV_9100(ah)) ? (AR_RTC_BASE + 0x0058) : 0x7058)
+#define AR_RTC_INTR_MASK(_ah) \
+	((AR_SREV_9100(_ah)) ? (AR_RTC_BASE + 0x0058) : 0x7058)
 #define AR_RTC_KEEP_AWAKE 0x7034
 /* RTC_DERIVED_* - only for AR9100 */
-#define AR_RTC_DERIVED_CLK \
-	(AR_SREV_9100(ah) ? (AR_RTC_BASE + 0x0038) : 0x7038)
+#define AR_RTC_DERIVED_CLK(_ah) \
+	(AR_SREV_9100(_ah) ? (AR_RTC_BASE + 0x0038) : 0x7038)
 #define AR_RTC_DERIVED_CLK_PERIOD 0x0000fffe
 #define AR_RTC_DERIVED_CLK_PERIOD_S 1
@@ -2114,7 +2114,7 @@ enum {
 #define AR9300_SM_BASE 0xa200
 #define AR9002_PHY_AGC_CONTROL 0x9860
 #define AR9003_PHY_AGC_CONTROL AR9300_SM_BASE + 0xc4
-#define AR_PHY_AGC_CONTROL (AR_SREV_9300_20_OR_LATER(ah) ? AR9003_PHY_AGC_CONTROL : AR9002_PHY_AGC_CONTROL)
+#define AR_PHY_AGC_CONTROL(_ah) (AR_SREV_9300_20_OR_LATER(_ah) ? AR9003_PHY_AGC_CONTROL : AR9002_PHY_AGC_CONTROL)
 #define AR_PHY_AGC_CONTROL_CAL 0x00000001 /* do internal calibration */
 #define AR_PHY_AGC_CONTROL_NF 0x00000002 /* do noise-floor calibration */
 #define AR_PHY_AGC_CONTROL_OFFSET_CAL 0x00000800 /* allow offset calibration */
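The reg.h hunks above all make the same change: offset macros that used to expand a bare `ah` picked up from the caller's scope now take the hardware handle as an explicit `_ah` parameter. A minimal before/after sketch of that pattern, using a hypothetical `struct hw` and `IS_9340()` stand-in rather than the real ath9k types:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for struct ath_hw; only the revision flag
 * needed by the offset selection is modeled here. */
struct hw {
	int is_9340;
};

#define IS_9340(_ah) ((_ah)->is_9340)

/* Old style, shown for contrast: the macro silently expanded a
 * variable literally named `ah` from the caller's scope:
 *
 *   #define REG_WA (IS_9340(ah) ? 0x40c4 : 0x4004)
 *
 * New style, as in the diff: the handle is an explicit parameter,
 * so the dependency is visible and any local variable name works. */
#define REG_WA(_ah) (IS_9340(_ah) ? 0x40c4 : 0x4004)

static uint32_t wa_offset(int is_9340)
{
	struct hw chip = { .is_9340 = is_9340 };
	struct hw *handle = &chip; /* not named `ah`, still compiles */

	return REG_WA(handle);
}
```

With the old style, `wa_offset()` would not compile at all, since no variable named `ah` is in scope; that hidden coupling is exactly what the conversion removes.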


@@ -29,9 +29,9 @@ static int ath9k_rng_data_read(struct ath_softc *sc, u32 *buf, u32 buf_size)
 	ath9k_ps_wakeup(sc);
-	REG_RMW_FIELD(ah, AR_PHY_TEST, AR_PHY_TEST_BBB_OBS_SEL, 1);
-	REG_CLR_BIT(ah, AR_PHY_TEST, AR_PHY_TEST_RX_OBS_SEL_BIT5);
-	REG_RMW_FIELD(ah, AR_PHY_TEST_CTL_STATUS, AR_PHY_TEST_CTL_RX_OBS_SEL, 0);
+	REG_RMW_FIELD(ah, AR_PHY_TEST(ah), AR_PHY_TEST_BBB_OBS_SEL, 1);
+	REG_CLR_BIT(ah, AR_PHY_TEST(ah), AR_PHY_TEST_RX_OBS_SEL_BIT5);
+	REG_RMW_FIELD(ah, AR_PHY_TEST_CTL_STATUS(ah), AR_PHY_TEST_CTL_RX_OBS_SEL, 0);
 	for (i = 0, j = 0; i < buf_size; i++) {
 		v1 = REG_READ(ah, AR_PHY_TST_ADC) & 0xffff;


@@ -341,6 +341,7 @@ int ath9k_wmi_cmd(struct wmi *wmi, enum wmi_cmd_id cmd_id,
 	if (!time_left) {
 		ath_dbg(common, WMI, "Timeout waiting for WMI command: %s\n",
 			wmi_cmd_to_name(cmd_id));
+		wmi->last_seq_id = 0;
 		mutex_unlock(&wmi->op_mutex);
 		return -ETIMEDOUT;
 	}
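The one-line wmi.c change above clears `wmi->last_seq_id` on timeout, so a response that arrives after the caller has given up can no longer be matched to the abandoned command. A simplified model of that bookkeeping (the struct and helpers here are illustrative, not the driver's actual code):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model of the WMI bookkeeping: last_seq_id records the
 * command currently awaited; the completion path accepts a response
 * only if its sequence id matches, with 0 meaning "nothing pending". */
struct wmi_state {
	uint16_t last_seq_id;
};

/* Timeout path, with the fix applied: clear the pending id so a late
 * reply cannot be matched against a command the caller abandoned. */
static void wmi_cmd_timeout(struct wmi_state *wmi)
{
	wmi->last_seq_id = 0;
}

/* Completion path: returns 1 if the response is accepted. */
static int wmi_handle_response(struct wmi_state *wmi, uint16_t seq_id)
{
	return seq_id != 0 && seq_id == wmi->last_seq_id;
}

static int late_reply_accepted(void)
{
	struct wmi_state wmi = { .last_seq_id = 42 };

	wmi_cmd_timeout(&wmi);		      /* command timed out */
	return wmi_handle_response(&wmi, 42); /* reply arrives late */
}
```

Without the `last_seq_id = 0` step, the late reply would still match and could complete into state the timed-out caller has already torn down.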
