pci-v5.7-changes

-----BEGIN PGP SIGNATURE-----

iQJIBAABCgAyFiEEgMe7l+5h9hnxdsnuWYigwDrT+vwFAl6GTQMUHGJoZWxnYWFz
QGdvb2dsZS5jb20ACgkQWYigwDrT+vy3PhAAmqpYBRobOsG8QbmKDjoJEFtkqdvD
z6+4zf/R+hF11RyXjMDwihIe8d+tkQ4eAaYu6Oh5PrTyanz0G0PgeCrivZeytULk
thqQIWzDQMVA5vN/2/Vy8s5s+3HzP8z/MZOFScJ7+xA1MndXptPRTNmFUbjx+GAv
x8/pTp0u9AF6m7itX65DxXvwkzjWamt+Ar4Yx2IcuKAU/M5RtfuZO3PpDnqn7/wk
JFlkRoYeFB6qNnnkPdeyPHl9dALhuhzgdTyklQEnKVW3nf3xThYDhcEwdh6kBQgl
0dH8lL5LXy7PKGN8RES4wB0Vqndw/HlsCF5O4wkkfItbnbJxGJtS139e5973m0ud
sgWvF4yJAT2jCKhIeNz34sePQJMyWALhv0XzZCsJ0YeGHsrV1jrHELkwUT1+eIsT
3UV0iZ6aL06zQJDyKUbbIcQzEQ/wwBC+x9VgsyL54K1quCQZ1N1Nl/dvrb4cRG9m
m9EhJK/brDf4c0uFlOmMTSxV1t5J+z6ZSQnh1ShD/o5yBsxqN6q5brDT6LEs+jbM
LsIkA18jJOd4OyiDs98YiFKvIfFQbQ0LEBQpJwhF0snvfBFMMbUYN/T/NYneWON/
F0TpkFoP7PXDuq55iNaLdnObfzrpC9kdzUyWvePUvjxIl55bkf+/qtUny+H48t4L
dNggvW052d7BHes=
=deWu
-----END PGP SIGNATURE-----

Merge tag 'pci-v5.7-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:

 "Enumeration:

   - Revert sysfs "rescan" renames that broke apps (Kelsey Skunberg)

   - Add more 32 GT/s link speed decoding and improve the implementation
     (Yicong Yang)

  Resource management:

   - Add support for sizing programmable host bridge apertures and fix a
     related alpha Nautilus regression (Ivan Kokshaysky)

  Interrupts:

   - Add boot interrupt quirk mechanism for Xeon chipsets and document
     boot interrupts (Sean V Kelley)

  PCIe native device hotplug:

   - When possible, disable in-band presence detect and use PDS
     (Alexandru Gagniuc)

   - Add DMI table for devices that don't use in-band presence detection
     but don't advertise that correctly (Stuart Hayes)

   - Fix hang when powering slots up/down via sysfs (Lukas Wunner)

   - Fix an MSI interrupt race (Stuart Hayes)

  Virtualization:

   - Add ACS quirks for Zhaoxin devices (Raymond Pang)

  Error handling:

   - Add Error Disconnect Recover (EDR) support so firmware can report
     devices disconnected via DPC and we can try to recover (Kuppuswamy
     Sathyanarayanan)

  Peer-to-peer DMA:

   - Add Intel Sky Lake-E Root Ports B, C, D to the whitelist (Andrew
     Maier)

  ASPM:

   - Reduce severity of common clock config message (Chris Packham)

   - Clear the correct bits when enabling L1 substates, so we don't go
     to the wrong state (Yicong Yang)

  Endpoint framework:

   - Replace EPF linkup ops with notifier call chain and improve locking
     (Kishon Vijay Abraham I)

   - Fix concurrent memory allocation in OB address region (Kishon Vijay
     Abraham I)

   - Move PF function number assignment to EPC core to support multiple
     function creation methods (Kishon Vijay Abraham I)

   - Fix issue with clearing configfs "start" entry (Kunihiko Hayashi)

   - Fix issue with endpoint MSI-X ignoring BAR Indicator and Table
     Offset (Kishon Vijay Abraham I)

   - Add support for testing DMA transfers (Kishon Vijay Abraham I)

   - Add support for testing > 10 endpoint devices (Kishon Vijay
     Abraham I)

   - Add support for tests to clear IRQ (Kishon Vijay Abraham I)

   - Add common DT schema for endpoint controllers (Kishon Vijay
     Abraham I)

  Amlogic Meson PCIe controller driver:

   - Add DT bindings for AXG PCIe PHY, shared MIPI/PCIe analog PHY (Remi
     Pommarel)

   - Add Amlogic AXG PCIe PHY, AXG MIPI/PCIe analog PHY drivers (Remi
     Pommarel)

  Cadence PCIe controller driver:

   - Add Root Complex/Endpoint DT schema for Cadence PCIe (Kishon Vijay
     Abraham I)

  Intel VMD host bridge driver:

   - Add two VMD Device IDs that require bus restriction mode (Sushma
     Kalakota)

  Mobiveil PCIe controller driver:

   - Refactor and modularize mobiveil driver (Hou Zhiqiang)

   - Add support for Mobiveil GPEX Gen4 host (Hou Zhiqiang)

  Microsoft Hyper-V host bridge driver:

   - Add support for Hyper-V PCI protocol version 1.3 and
     PCI_BUS_RELATIONS2 (Long Li)

   - Refactor to prepare for virtual PCI on non-x86 architectures (Boqun
     Feng)

   - Fix memory leak in hv_pci_probe()'s error path (Dexuan Cui)

  NVIDIA Tegra PCIe controller driver:

   - Use pci_parse_request_of_pci_ranges() (Rob Herring)

   - Add support for endpoint mode and related DT updates (Vidya Sagar)

   - Reduce -EPROBE_DEFER error message log level (Thierry Reding)

  Qualcomm PCIe controller driver:

   - Restrict class fixup to specific Qualcomm devices (Bjorn Andersson)

  Synopsys DesignWare PCIe controller driver:

   - Refactor core initialization code for endpoint mode (Vidya Sagar)

   - Fix endpoint MSI-X to use correct table address (Kishon Vijay
     Abraham I)

  TI DRA7xx PCIe controller driver:

   - Fix MSI IRQ handling (Vignesh Raghavendra)

  TI Keystone PCIe controller driver:

   - Allow AM654 endpoint to raise MSI-X interrupt (Kishon Vijay
     Abraham I)

  Miscellaneous:

   - Quirk ASMedia XHCI USB to avoid "PME# from D0" defect (Kai-Heng
     Feng)

   - Use ioremap(), not phys_to_virt(), for platform ROM to fix video
     ROM mapping with CONFIG_HIGHMEM (Mikel Rychliski)"

* tag 'pci-v5.7-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (96 commits)
  misc: pci_endpoint_test: remove duplicate macro PCI_ENDPOINT_TEST_STATUS
  PCI: tegra: Print -EPROBE_DEFER error message at debug level
  misc: pci_endpoint_test: Use full pci-endpoint-test name in request_irq()
  misc: pci_endpoint_test: Fix to support > 10 pci-endpoint-test devices
  tools: PCI: Add 'e' to clear IRQ
  misc: pci_endpoint_test: Add ioctl to clear IRQ
  misc: pci_endpoint_test: Avoid using module parameter to determine irqtype
  PCI: keystone: Allow AM654 PCIe Endpoint to raise MSI-X interrupt
  PCI: dwc: Fix dw_pcie_ep_raise_msix_irq() to get correct MSI-X table address
  PCI: endpoint: Fix ->set_msix() to take BIR and offset as arguments
  misc: pci_endpoint_test: Add support to get DMA option from userspace
  tools: PCI: Add 'd' command line option to support DMA
  misc: pci_endpoint_test: Use streaming DMA APIs for buffer allocation
  PCI: endpoint: functions/pci-epf-test: Print throughput information
  PCI: endpoint: functions/pci-epf-test: Add DMA support to transfer data
  PCI: pciehp: Fix MSI interrupt race
  PCI: pciehp: Fix indefinite wait on sysfs requests
  PCI: endpoint: Fix clearing start entry in configfs
  PCI: tegra: Add support for PCIe endpoint mode in Tegra194
  PCI: sysfs: Revert "rescan" file renames
  ...
Commit 86f26a77cb
@@ -0,0 +1,155 @@
.. SPDX-License-Identifier: GPL-2.0

===============
Boot Interrupts
===============

:Author: - Sean V Kelley <sean.v.kelley@linux.intel.com>

Overview
========

On PCI Express, interrupts are represented with either MSI or inbound
interrupt messages (Assert_INTx/Deassert_INTx). The integrated IO-APIC in a
given Core IO converts the legacy interrupt messages from PCI Express to
MSI interrupts. If the IO-APIC is disabled (via the mask bits in the
IO-APIC table entries), the messages are routed to the legacy PCH. This
in-band interrupt mechanism was traditionally necessary for systems that
did not support the IO-APIC and for boot. Intel in the past has used the
term "boot interrupts" to describe this mechanism. Further, the PCI Express
protocol describes this in-band legacy wire-interrupt INTx mechanism for
I/O devices to signal PCI-style level interrupts. The subsequent paragraphs
describe problems with the Core IO handling of INTx message routing to the
PCH and mitigation within BIOS and the OS.

Issue
=====

When in-band legacy INTx messages are forwarded to the PCH, they in turn
trigger a new interrupt for which the OS likely lacks a handler. When an
interrupt goes unhandled over time, it is tracked by the Linux kernel as a
spurious interrupt. The IRQ will be disabled by the Linux kernel after it
reaches a specific count with the error "nobody cared". This disabled IRQ
now prevents valid usage by an existing interrupt which may happen to share
the IRQ line::

  irq 19: nobody cared (try booting with the "irqpoll" option)
  CPU: 0 PID: 2988 Comm: irq/34-nipalk Tainted: 4.14.87-rt49-02410-g4a640ec-dirty #1
  Hardware name: National Instruments NI PXIe-8880/NI PXIe-8880, BIOS 2.1.5f1 01/09/2020
  Call Trace:

  <IRQ>
   ? dump_stack+0x46/0x5e
   ? __report_bad_irq+0x2e/0xb0
   ? note_interrupt+0x242/0x290
   ? nNIKAL100_memoryRead16+0x8/0x10 [nikal]
   ? handle_irq_event_percpu+0x55/0x70
   ? handle_irq_event+0x4f/0x80
   ? handle_fasteoi_irq+0x81/0x180
   ? handle_irq+0x1c/0x30
   ? do_IRQ+0x41/0xd0
   ? common_interrupt+0x84/0x84
  </IRQ>

  handlers:
  irq_default_primary_handler threaded usb_hcd_irq
  Disabling IRQ #19

Conditions
==========

The use of threaded interrupts is the most likely condition to trigger
this problem today. Threaded interrupts may not be reenabled after the IRQ
handler wakes. These "one shot" conditions mean that the threaded interrupt
needs to keep the interrupt line masked until the threaded handler has run.
Especially when dealing with high data rate interrupts, the thread needs to
run to completion; otherwise some handlers will end up in stack overflows
since the interrupt of the issuing device is still active.
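
For illustration, the "one shot" behavior described above maps onto the
threaded-IRQ pattern sketched below. This is a hedged sketch rather than
code from this series: the demo_* names are hypothetical, and a NULL
primary handler (which falls back to the irq_default_primary_handler seen
in the log above) requires IRQF_ONESHOT so the line stays masked until the
thread finishes::

  #include <linux/interrupt.h>

  static irqreturn_t demo_thread_fn(int irq, void *data)
  {
          /* Long-running device work; the IRQ line is still masked here. */
          return IRQ_HANDLED;
  }

  static int demo_setup(unsigned int irq, void *data)
  {
          /* NULL primary handler + IRQF_ONESHOT: masked until the thread runs. */
          return request_threaded_irq(irq, NULL, demo_thread_fn,
                                      IRQF_ONESHOT, "demo-dev", data);
  }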

Affected Chipsets
=================

The legacy interrupt forwarding mechanism exists today in a number of
devices including but not limited to chipsets from AMD/ATI, Broadcom, and
Intel. Changes made through the mitigations below have been applied to
drivers/pci/quirks.c.

Starting with ICX there are no longer any IO-APICs in the Core IO's
devices. The IO-APIC is only in the PCH. Devices connected to the Core IO's
PCIe Root Ports will use native MSI/MSI-X mechanisms.

Mitigations
===========

The mitigations take the form of PCI quirks. The preference has been to
first identify and make use of a means to disable the routing to the PCH.
In such a case a quirk to disable boot interrupt generation can be
added.[1]
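
A hedged sketch of the general quirk shape follows; the device ID, config
offset, and bit below are illustrative placeholders, not values taken from
drivers/pci/quirks.c::

  #include <linux/pci.h>

  #define DEMO_ALT_BASE 0x54  /* hypothetical "Alternate Base" register */
  #define DEMO_BIE      0x1   /* hypothetical BIE bit: 1 = boot intr disabled */

  static void quirk_demo_disable_boot_interrupt(struct pci_dev *dev)
  {
          u32 val;

          pci_read_config_dword(dev, DEMO_ALT_BASE, &val);
          pci_write_config_dword(dev, DEMO_ALT_BASE, val | DEMO_BIE);
          pci_info(dev, "boot interrupt generation disabled\n");
  }
  DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1234, /* placeholder ID */
                          quirk_demo_disable_boot_interrupt);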

Intel® 6300ESB I/O Controller Hub
  Alternate Base Address Register:
    BIE: Boot Interrupt Enable
      0 = Boot interrupt is enabled.
      1 = Boot interrupt is disabled.

Intel® Sandy Bridge through Sky Lake based Xeon servers:
  Coherent Interface Protocol Interrupt Control
    dis_intx_route2pch/dis_intx_route2ich/dis_intx_route2dmi2:
      When this bit is set, Local INTx messages received from the
      Intel® Quick Data DMA/PCI Express ports are not routed to legacy
      PCH - they are either converted into MSI via the integrated IO-APIC
      (if the IO-APIC mask bit is clear in the appropriate entries)
      or cause no further action (when the mask bit is set).

In the absence of a way to directly disable the routing, another approach
has been to make use of PCI Interrupt pin to INTx routing tables for
purposes of redirecting the interrupt handler to the rerouted interrupt
line by default. Therefore, on chipsets where this INTx routing cannot be
disabled, the Linux kernel will reroute the valid interrupt to its legacy
interrupt. This redirection of the handler will prevent the occurrence of
the spurious interrupt detection which would ordinarily disable the IRQ
line due to excessive unhandled counts.[2]

The config option X86_REROUTE_FOR_BROKEN_BOOT_IRQS exists to enable (or
disable) the redirection of the interrupt handler to the PCH interrupt
line. The option can be overridden by either pci=ioapicreroute or
pci=noioapicreroute.[3]
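
For example, on a kernel built with X86_REROUTE_FOR_BROKEN_BOOT_IRQS, the
rerouting can be switched off from the boot loader without rebuilding, by
appending the documented parameter to the kernel command line::

  pci=noioapicreroute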

More Documentation
==================

There is an overview of the legacy interrupt handling in several datasheets
(6300ESB and 6700PXH below). While largely the same, they provide insight
into the evolution of its handling with chipsets.

Example of disabling of the boot interrupt
------------------------------------------

Intel® 6300ESB I/O Controller Hub (Document # 300641-004US)
  5.7.3 Boot Interrupt
  https://www.intel.com/content/dam/doc/datasheet/6300esb-io-controller-hub-datasheet.pdf

Intel® Xeon® Processor E5-1600/2400/2600/4600 v3 Product Families
Datasheet - Volume 2: Registers (Document # 330784-003)
  6.6.41 cipintrc Coherent Interface Protocol Interrupt Control
  https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/xeon-e5-v3-datasheet-vol-2.pdf

Example of handler rerouting
----------------------------

Intel® 6700PXH 64-bit PCI Hub (Document # 302628)
  2.15.2 PCI Express Legacy INTx Support and Boot Interrupt
  https://www.intel.com/content/dam/doc/datasheet/6700pxh-64-bit-pci-hub-datasheet.pdf


If you have any legacy PCI interrupt questions that aren't answered, email me.

Cheers,
    Sean V Kelley
    sean.v.kelley@linux.intel.com

[1] https://lore.kernel.org/r/12131949181903-git-send-email-sassmann@suse.de/
[2] https://lore.kernel.org/r/12131949182094-git-send-email-sassmann@suse.de/
[3] https://lore.kernel.org/r/487C8EA7.6020205@suse.de/
@@ -16,3 +16,4 @@ Linux PCI Bus Subsystem
   pci-error-recovery
   pcieaer-howto
   endpoint/index
   boot-interrupts
@@ -156,12 +156,6 @@ default reset_link function, but different upstream ports might
have different specifications to reset pci express link, so all
upstream ports should provide their own reset_link functions.

In struct pcie_port_service_driver, a new pointer, reset_link, is
added.
::

        pci_ers_result_t (*reset_link) (struct pci_dev *dev);

Section 3.2.2.2 provides more detailed info on when to call
reset_link.
@@ -212,15 +206,10 @@ error_detected(dev, pci_channel_io_frozen) to all drivers within
a hierarchy in question. Then, performing link reset at upstream is
necessary. As different kinds of devices might use different approaches
to reset link, AER port service driver is required to provide the
function to reset link. Firstly, kernel looks for if the upstream
component has an aer driver. If it has, kernel uses the reset_link
callback of the aer driver. If the upstream component has no aer driver
and the port is downstream port, we will perform a hot reset as the
default by setting the Secondary Bus Reset bit of the Bridge Control
register associated with the downstream port. As for upstream ports,
they should provide their own aer service drivers with reset_link
function. If error_detected returns PCI_ERS_RESULT_CAN_RECOVER and
reset_link returns PCI_ERS_RESULT_RECOVERED, the error handling goes
function to reset link via callback parameter of pcie_do_recovery()
function. If reset_link is not NULL, recovery function will use it
to reset the link. If error_detected returns PCI_ERS_RESULT_CAN_RECOVER
and reset_link returns PCI_ERS_RESULT_RECOVERED, the error handling goes
to mmio_enabled.

helper functions
@@ -243,9 +232,9 @@ messages to root port when an error is detected.

::

        int pci_cleanup_aer_uncorrect_error_status(struct pci_dev *dev);
        int pci_aer_clear_nonfatal_status(struct pci_dev *dev);

pci_cleanup_aer_uncorrect_error_status cleanups the uncorrectable
pci_aer_clear_nonfatal_status clears non-fatal errors in the uncorrectable
error status register.
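
A minimal usage sketch of the renamed helper, using only names shown above
(the surrounding driver and its slot_reset method are hypothetical)::

        static pci_ers_result_t demo_slot_reset(struct pci_dev *pdev)
        {
                /* ... re-initialize the device after the link reset ... */

                /* Clear non-fatal uncorrectable status left by the error. */
                pci_aer_clear_nonfatal_status(pdev);
                return PCI_ERS_RESULT_RECOVERED;
        }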

Frequently Asked Questions
@@ -18,7 +18,6 @@ Required properties:
- reg-names: Must be
  - "elbi"    External local bus interface registers
  - "cfg"     Meson specific registers
  - "phy"     Meson PCIE PHY registers for AXG SoC Family
  - "config"  PCIe configuration space
- reset-gpios: The GPIO to generate PCIe PERST# assert and deassert signal.
- clocks: Must contain an entry for each entry in clock-names.
@@ -26,13 +25,13 @@ Required properties:
  - "pclk"    PCIe GEN 100M PLL clock
  - "port"    PCIe_x(A or B) RC clock gate
  - "general" PCIe Phy clock
  - "mipi"    PCIe_x(A or B) 100M ref clock gate for AXG SoC Family
- resets: phandle to the reset lines.
- reset-names: must contain "phy" "port" and "apb"
  - "phy"     Share PHY reset for AXG SoC Family
- reset-names: must contain "port" and "apb"
  - "port"    Port A or B reset
  - "apb"     Share APB reset
- phys: should contain a phandle to the shared phy for G12A SoC Family
- phys: should contain a phandle to the PCIE phy
- phy-names: must contain "pcie"

- device_type:
  should be "pci". As specified in designware-pcie.txt
@@ -43,9 +42,8 @@ Example configuration:
        compatible = "amlogic,axg-pcie", "snps,dw-pcie";
        reg = <0x0 0xf9800000 0x0 0x400000
               0x0 0xff646000 0x0 0x2000
               0x0 0xff644000 0x0 0x2000
               0x0 0xf9f00000 0x0 0x100000>;
        reg-names = "elbi", "cfg", "phy", "config";
        reg-names = "elbi", "cfg", "config";
        reset-gpios = <&gpio GPIOX_19 GPIO_ACTIVE_HIGH>;
        interrupts = <GIC_SPI 177 IRQ_TYPE_EDGE_RISING>;
        #interrupt-cells = <1>;
@@ -58,17 +56,15 @@ Example configuration:
        ranges = <0x82000000 0 0 0x0 0xf9c00000 0 0x00300000>;

        clocks = <&clkc CLKID_USB
                  &clkc CLKID_MIPI_ENABLE
                  &clkc CLKID_PCIE_A
                  &clkc CLKID_PCIE_CML_EN0>;
        clock-names = "general",
                      "mipi",
                      "pclk",
                      "port";
        resets = <&reset RESET_PCIE_PHY>,
                 <&reset RESET_PCIE_A>,
        resets = <&reset RESET_PCIE_A>,
                 <&reset RESET_PCIE_APB>;
        reset-names = "phy",
                      "port",
        reset-names = "port",
                      "apb";
        phys = <&pcie_phy>;
        phy-names = "pcie";
};
@@ -1,27 +0,0 @@
* Cadence PCIe endpoint controller

Required properties:
- compatible: Should contain "cdns,cdns-pcie-ep" to identify the IP used.
- reg: Should contain the controller register base address and AXI interface
  region base address respectively.
- reg-names: Must be "reg" and "mem" respectively.
- cdns,max-outbound-regions: Set to maximum number of outbound regions

Optional properties:
- max-functions: Maximum number of functions that can be configured (default 1).
- phys: From PHY bindings: List of Generic PHY phandles. One per lane if more
  than one in the list. If only one PHY listed it must manage all lanes.
- phy-names: List of names to identify the PHY.

Example:

pcie@fc000000 {
        compatible = "cdns,cdns-pcie-ep";
        reg = <0x0 0xfc000000 0x0 0x01000000>,
              <0x0 0x80000000 0x0 0x40000000>;
        reg-names = "reg", "mem";
        cdns,max-outbound-regions = <16>;
        max-functions = /bits/ 8 <8>;
        phys = <&ep_phy0 &ep_phy1>;
        phy-names = "pcie-lane0","pcie-lane1";
};
@@ -0,0 +1,49 @@
# SPDX-License-Identifier: GPL-2.0-only
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/cdns,cdns-pcie-ep.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Cadence PCIe EP Controller

maintainers:
  - Tom Joseph <tjoseph@cadence.com>

allOf:
  - $ref: "cdns-pcie.yaml#"
  - $ref: "pci-ep.yaml#"

properties:
  compatible:
    const: cdns,cdns-pcie-ep

  reg:
    maxItems: 2

  reg-names:
    items:
      - const: reg
      - const: mem

required:
  - reg
  - reg-names

examples:
  - |
    bus {
        #address-cells = <2>;
        #size-cells = <2>;

        pcie-ep@fc000000 {
            compatible = "cdns,cdns-pcie-ep";
            reg = <0x0 0xfc000000 0x0 0x01000000>,
                  <0x0 0x80000000 0x0 0x40000000>;
            reg-names = "reg", "mem";
            cdns,max-outbound-regions = <16>;
            max-functions = /bits/ 8 <8>;
            phys = <&pcie_phy0>;
            phy-names = "pcie-phy";
        };
    };
...
@@ -1,66 +0,0 @@
* Cadence PCIe host controller

This PCIe controller inherits the base properties defined in
host-generic-pci.txt.

Required properties:
- compatible: Should contain "cdns,cdns-pcie-host" to identify the IP used.
- reg: Should contain the controller register base address, PCIe configuration
  window base address, and AXI interface region base address respectively.
- reg-names: Must be "reg", "cfg" and "mem" respectively.
- #address-cells: Set to <3>
- #size-cells: Set to <2>
- device_type: Set to "pci"
- ranges: Ranges for the PCI memory and I/O regions
- #interrupt-cells: Set to <1>
- interrupt-map-mask and interrupt-map: Standard PCI properties to define the
  mapping of the PCIe interface to interrupt numbers.

Optional properties:
- cdns,max-outbound-regions: Set to maximum number of outbound regions
  (default 32)
- cdns,no-bar-match-nbits: Set into the no BAR match register to configure the
  number of least significant bits kept during inbound (PCIe -> AXI) address
  translations (default 32)
- vendor-id: The PCI vendor ID (16 bits, default is design dependent)
- device-id: The PCI device ID (16 bits, default is design dependent)
- phys: From PHY bindings: List of Generic PHY phandles. One per lane if more
  than one in the list. If only one PHY listed it must manage all lanes.
- phy-names: List of names to identify the PHY.

Example:

pcie@fb000000 {
        compatible = "cdns,cdns-pcie-host";
        device_type = "pci";
        #address-cells = <3>;
        #size-cells = <2>;
        bus-range = <0x0 0xff>;
        linux,pci-domain = <0>;
        cdns,max-outbound-regions = <16>;
        cdns,no-bar-match-nbits = <32>;
        vendor-id = /bits/ 16 <0x17cd>;
        device-id = /bits/ 16 <0x0200>;

        reg = <0x0 0xfb000000 0x0 0x01000000>,
              <0x0 0x41000000 0x0 0x00001000>,
              <0x0 0x40000000 0x0 0x04000000>;
        reg-names = "reg", "cfg", "mem";

        ranges = <0x02000000 0x0 0x42000000 0x0 0x42000000 0x0 0x1000000>,
                 <0x01000000 0x0 0x43000000 0x0 0x43000000 0x0 0x0010000>;

        #interrupt-cells = <0x1>;

        interrupt-map = <0x0 0x0 0x0 0x1 &gic 0x0 0x0 0x0 14 0x1
                         0x0 0x0 0x0 0x2 &gic 0x0 0x0 0x0 15 0x1
                         0x0 0x0 0x0 0x3 &gic 0x0 0x0 0x0 16 0x1
                         0x0 0x0 0x0 0x4 &gic 0x0 0x0 0x0 17 0x1>;

        interrupt-map-mask = <0x0 0x0 0x0 0x7>;

        msi-parent = <&its_pci>;

        phys = <&pcie_phy0>;
        phy-names = "pcie-phy";
};
@@ -0,0 +1,76 @@
# SPDX-License-Identifier: GPL-2.0-only
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/cdns,cdns-pcie-host.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Cadence PCIe host controller

maintainers:
  - Tom Joseph <tjoseph@cadence.com>

allOf:
  - $ref: /schemas/pci/pci-bus.yaml#
  - $ref: "cdns-pcie-host.yaml#"

properties:
  compatible:
    const: cdns,cdns-pcie-host

  reg:
    maxItems: 3

  reg-names:
    items:
      - const: reg
      - const: cfg
      - const: mem

  msi-parent: true

required:
  - reg
  - reg-names

examples:
  - |
    bus {
        #address-cells = <2>;
        #size-cells = <2>;

        pcie@fb000000 {
            compatible = "cdns,cdns-pcie-host";
            device_type = "pci";
            #address-cells = <3>;
            #size-cells = <2>;
            bus-range = <0x0 0xff>;
            linux,pci-domain = <0>;
            cdns,max-outbound-regions = <16>;
            cdns,no-bar-match-nbits = <32>;
            vendor-id = <0x17cd>;
            device-id = <0x0200>;

            reg = <0x0 0xfb000000 0x0 0x01000000>,
                  <0x0 0x41000000 0x0 0x00001000>,
                  <0x0 0x40000000 0x0 0x04000000>;
            reg-names = "reg", "cfg", "mem";

            ranges = <0x02000000 0x0 0x42000000 0x0 0x42000000 0x0 0x1000000>,
                     <0x01000000 0x0 0x43000000 0x0 0x43000000 0x0 0x0010000>;

            #interrupt-cells = <0x1>;

            interrupt-map = <0x0 0x0 0x0 0x1 &gic 0x0 0x0 0x0 14 0x1>,
                            <0x0 0x0 0x0 0x2 &gic 0x0 0x0 0x0 15 0x1>,
                            <0x0 0x0 0x0 0x3 &gic 0x0 0x0 0x0 16 0x1>,
                            <0x0 0x0 0x0 0x4 &gic 0x0 0x0 0x0 17 0x1>;

            interrupt-map-mask = <0x0 0x0 0x0 0x7>;

            msi-parent = <&its_pci>;

            phys = <&pcie_phy0>;
            phy-names = "pcie-phy";
        };
    };
...
@@ -0,0 +1,27 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: "http://devicetree.org/schemas/pci/cdns-pcie-host.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"

title: Cadence PCIe Host

maintainers:
  - Tom Joseph <tjoseph@cadence.com>

allOf:
  - $ref: "/schemas/pci/pci-bus.yaml#"
  - $ref: "cdns-pcie.yaml#"

properties:
  cdns,no-bar-match-nbits:
    description:
      Set into the no BAR match register to configure the number of least
      significant bits kept during inbound (PCIe -> AXI) address translations
    allOf:
      - $ref: /schemas/types.yaml#/definitions/uint32
    minimum: 0
    maximum: 64
    default: 32

  msi-parent: true
@@ -0,0 +1,31 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: "http://devicetree.org/schemas/pci/cdns-pcie.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"

title: Cadence PCIe Core

maintainers:
  - Tom Joseph <tjoseph@cadence.com>

properties:
  cdns,max-outbound-regions:
    description: maximum number of outbound regions
    allOf:
      - $ref: /schemas/types.yaml#/definitions/uint32
    minimum: 1
    maximum: 32
    default: 32

  phys:
    description:
      One per lane if more than one in the list. If only one PHY listed it
      must manage all lanes.
    minItems: 1
    maxItems: 16

  phy-names:
    items:
      - const: pcie-phy
        # FIXME: names when more than 1
@@ -0,0 +1,52 @@
NXP Layerscape PCIe Gen4 controller

This PCIe controller is based on the Mobiveil PCIe IP and thus inherits all
the common properties defined in mobiveil-pcie.txt.

Required properties:
- compatible: should contain the platform identifier such as:
  "fsl,lx2160a-pcie"
- reg: base addresses and lengths of the PCIe controller register blocks.
  "csr_axi_slave": Bridge config registers
  "config_axi_slave": PCIe controller registers
- interrupts: A list of interrupt outputs of the controller. Must contain an
  entry for each entry in the interrupt-names property.
- interrupt-names: It could include the following entries:
  "intr": The interrupt that is asserted for controller interrupts
  "aer": Asserted for the AER interrupt when the chip reports AER through a
  dedicated interrupt line rather than MSI/MSI-X/INTx.
  "pme": Asserted for the PME interrupt when the chip reports PME through a
  dedicated interrupt line rather than MSI/MSI-X/INTx.
- dma-coherent: Indicates that the hardware IP block can ensure the coherency
  of the data transferred from/to the IP block. This can avoid the software
  cache flush/invalidate actions and improve performance significantly.
- msi-parent : See the generic MSI binding described in
  Documentation/devicetree/bindings/interrupt-controller/msi.txt.

Example:

pcie@3400000 {
        compatible = "fsl,lx2160a-pcie";
        reg = <0x00 0x03400000 0x0 0x00100000   /* controller registers */
               0x80 0x00000000 0x0 0x00001000>; /* configuration space */
        reg-names = "csr_axi_slave", "config_axi_slave";
        interrupts = <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>, /* AER interrupt */
                     <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>, /* PME interrupt */
                     <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>; /* controller interrupt */
        interrupt-names = "aer", "pme", "intr";
        #address-cells = <3>;
        #size-cells = <2>;
        device_type = "pci";
        apio-wins = <8>;
        ppio-wins = <8>;
        dma-coherent;
        bus-range = <0x0 0xff>;
        msi-parent = <&its>;
        ranges = <0x82000000 0x0 0x40000000 0x80 0x40000000 0x0 0x40000000>;
        #interrupt-cells = <1>;
        interrupt-map-mask = <0 0 0 7>;
        interrupt-map = <0000 0 0 1 &gic 0 0 GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>,
                        <0000 0 0 2 &gic 0 0 GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>,
                        <0000 0 0 3 &gic 0 0 GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH>,
                        <0000 0 0 4 &gic 0 0 GIC_SPI 112 IRQ_TYPE_LEVEL_HIGH>;
};
@@ -1,11 +1,11 @@
NVIDIA Tegra PCIe controller (Synopsys DesignWare Core based)

This PCIe host controller is based on the Synopsys DesignWare PCIe IP
This PCIe controller is based on the Synopsys DesignWare PCIe IP
and thus inherits all the common properties defined in designware-pcie.txt.
Some of the controller instances are dual mode, wherein they can work either
in root port mode or endpoint mode, but only one at a time.

Required properties:
- compatible: For Tegra19x, must contain "nvidia,tegra194-pcie".
- device_type: Must be "pci"
- power-domains: A phandle to the node that controls power to the respective
  PCIe controller and a specifier name for the PCIe controller. Following are
  the specifiers for the different PCIe controllers
@@ -32,6 +32,32 @@ Required properties:
  entry for each entry in the interrupt-names property.
- interrupt-names: Must include the following entries:
  "intr": The Tegra interrupt that is asserted for controller interrupts
- clocks: Must contain an entry for each entry in clock-names.
  See ../clocks/clock-bindings.txt for details.
- clock-names: Must include the following entries:
  - core
- resets: Must contain an entry for each entry in reset-names.
  See ../reset/reset.txt for details.
- reset-names: Must include the following entries:
  - apb
  - core
- phys: Must contain a phandle to P2U PHY for each entry in phy-names.
- phy-names: Must include an entry for each active lane.
  "p2u-N": where N ranges from 0 to one less than the total number of lanes
- nvidia,bpmp: Must contain a pair of phandle to BPMP controller node followed
  by controller-id. Following are the controller ids for each controller.
  0: C0
  1: C1
  2: C2
  3: C3
  4: C4
  5: C5
- vddio-pex-ctl-supply: Regulator supply for PCIe side band signals

RC mode:
- compatible: Tegra19x must contain "nvidia,tegra194-pcie"
- device_type: Must be "pci" for RC mode
- interrupt-names: Must include the following entries:
  "msi": The Tegra interrupt that is asserted when an MSI is received
- bus-range: Range of bus numbers associated with this controller
- #address-cells: Address representation for root ports (must be 3)
@@ -60,27 +86,15 @@ Required properties:
- interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties
  Please refer to the standard PCI bus binding document for a more detailed
  explanation.
- clocks: Must contain an entry for each entry in clock-names.
  See ../clocks/clock-bindings.txt for details.
- clock-names: Must include the following entries:
  - core
- resets: Must contain an entry for each entry in reset-names.
  See ../reset/reset.txt for details.
- reset-names: Must include the following entries:
  - apb
  - core
- phys: Must contain a phandle to P2U PHY for each entry in phy-names.
- phy-names: Must include an entry for each active lane.
  "p2u-N": where N ranges from 0 to one less than the total number of lanes
- nvidia,bpmp: Must contain a pair of phandle to BPMP controller node followed
  by controller-id. Following are the controller ids for each controller.
  0: C0
  1: C1
  2: C2
  3: C3
  4: C4
  5: C5
- vddio-pex-ctl-supply: Regulator supply for PCIe side band signals

EP mode:
In Tegra194, only controllers C0, C4 & C5 support EP mode.
- compatible: Tegra19x must contain "nvidia,tegra194-pcie-ep"
- reg-names: Must include the following entries:
  "addr_space": Used to map remote RC address space
- reset-gpios: Must contain a phandle to a GPIO controller followed by
  GPIO that is being used as PERST input signal. Please refer to pci.txt
  document.

Optional properties:
- pinctrl-names: A list of pinctrl state names.
@@ -104,6 +118,8 @@ Optional properties:
  specified in microseconds
- nvidia,aspm-l0s-entrance-latency-us: ASPM L0s entrance latency to be
  specified in microseconds

RC mode:
- vpcie3v3-supply: A phandle to the regulator node that supplies 3.3V to the slot
  if the platform has one such slot. (Ex:- x16 slot owned by C5 controller
  in p2972-0000 platform).
@@ -111,11 +127,18 @@ Optional properties:
  if the platform has one such slot. (Ex:- x16 slot owned by C5 controller
  in p2972-0000 platform).

EP mode:
- nvidia,refclk-select-gpios: Must contain a phandle to a GPIO controller
  followed by GPIO that is being used to enable REFCLK to controller from host

NOTE:- On Tegra194's P2972-0000 platform, only C5 controller can be enabled to
operate in the endpoint mode because of the way the platform is designed.

Examples:
=========

Tegra194:
--------
Tegra194 RC mode:
-----------------

pcie@14180000 {
        compatible = "nvidia,tegra194-pcie", "snps,dw-pcie";
@@ -169,3 +192,53 @@ Tegra194:
               <&p2u_hsio_5>;
        phy-names = "p2u-0", "p2u-1", "p2u-2", "p2u-3";
};

Tegra194 EP mode:
-----------------

pcie_ep@141a0000 {
        compatible = "nvidia,tegra194-pcie-ep", "snps,dw-pcie-ep";
        power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8A>;
        reg = <0x00 0x141a0000 0x0 0x00020000   /* appl registers (128K)     */
               0x00 0x3a040000 0x0 0x00040000   /* iATU_DMA reg space (256K) */
               0x00 0x3a080000 0x0 0x00040000   /* DBI reg space (256K)      */
               0x1c 0x00000000 0x4 0x00000000>; /* Address Space (16G)       */
        reg-names = "appl", "atu_dma", "dbi", "addr_space";

        num-lanes = <8>;
        num-ib-windows = <2>;
        num-ob-windows = <8>;

        pinctrl-names = "default";
        pinctrl-0 = <&clkreq_c5_bi_dir_state>;

        clocks = <&bpmp TEGRA194_CLK_PEX1_CORE_5>;
        clock-names = "core";

        resets = <&bpmp TEGRA194_RESET_PEX1_CORE_5_APB>,
                 <&bpmp TEGRA194_RESET_PEX1_CORE_5>;
        reset-names = "apb", "core";

        interrupts = <GIC_SPI 53 IRQ_TYPE_LEVEL_HIGH>; /* controller interrupt */
        interrupt-names = "intr";

        nvidia,bpmp = <&bpmp 5>;

        nvidia,aspm-cmrt-us = <60>;
        nvidia,aspm-pwr-on-t-us = <20>;
        nvidia,aspm-l0s-entrance-latency-us = <3>;

        vddio-pex-ctl-supply = <&vdd_1v8ao>;

        reset-gpios = <&gpio TEGRA194_MAIN_GPIO(GG, 1) GPIO_ACTIVE_LOW>;

        nvidia,refclk-select-gpios = <&gpio_aon TEGRA194_AON_GPIO(AA, 5)
                                      GPIO_ACTIVE_HIGH>;

        phys = <&p2u_nvhs_0>, <&p2u_nvhs_1>, <&p2u_nvhs_2>,
               <&p2u_nvhs_3>, <&p2u_nvhs_4>, <&p2u_nvhs_5>,
               <&p2u_nvhs_6>, <&p2u_nvhs_7>;

        phy-names = "p2u-0", "p2u-1", "p2u-2", "p2u-3", "p2u-4",
                    "p2u-5", "p2u-6", "p2u-7";
};
@@ -0,0 +1,41 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/pci-ep.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: PCI Endpoint Controller Schema

description: |
  Common properties for PCI Endpoint Controller Nodes.

maintainers:
  - Kishon Vijay Abraham I <kishon@ti.com>

properties:
  $nodename:
    pattern: "^pcie-ep@"

  max-functions:
    description: Maximum number of functions that can be configured
    allOf:
      - $ref: /schemas/types.yaml#/definitions/uint8
    minimum: 1
    default: 1
    maximum: 255

  max-link-speed:
    allOf:
      - $ref: /schemas/types.yaml#/definitions/uint32
    enum: [ 1, 2, 3, 4 ]

  num-lanes:
    description: maximum number of lanes
    allOf:
      - $ref: /schemas/types.yaml#/definitions/uint32
    minimum: 1
    default: 1
    maximum: 16

required:
  - compatible
@@ -0,0 +1,35 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: "http://devicetree.org/schemas/phy/amlogic,meson-axg-mipi-pcie-analog.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"

title: Amlogic AXG shared MIPI/PCIE analog PHY

maintainers:
  - Remi Pommarel <repk@triplefau.lt>

properties:
  compatible:
    const: amlogic,axg-mipi-pcie-analog-phy

  reg:
    maxItems: 1

  "#phy-cells":
    const: 1

required:
  - compatible
  - reg
  - "#phy-cells"

additionalProperties: false

examples:
  - |
    mpphy: phy@0 {
        compatible = "amlogic,axg-mipi-pcie-analog-phy";
        reg = <0x0 0x0 0x0 0xc>;
        #phy-cells = <1>;
    };
@@ -0,0 +1,52 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: "http://devicetree.org/schemas/phy/amlogic,meson-axg-pcie.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"

title: Amlogic AXG PCIE PHY

maintainers:
  - Remi Pommarel <repk@triplefau.lt>

properties:
  compatible:
    const: amlogic,axg-pcie-phy

  reg:
    maxItems: 1

  resets:
    maxItems: 1

  phys:
    maxItems: 1

  phy-names:
    const: analog

  "#phy-cells":
    const: 0

required:
  - compatible
  - reg
  - phys
  - phy-names
  - resets
  - "#phy-cells"

additionalProperties: false

examples:
  - |
    #include <dt-bindings/reset/amlogic,meson-axg-reset.h>
    #include <dt-bindings/phy/phy.h>
    pcie_phy: pcie-phy@ff644000 {
        compatible = "amlogic,axg-pcie-phy";
        reg = <0x0 0xff644000 0x0 0x1c>;
        resets = <&reset RESET_PCIE_PHY>;
        phys = <&mipi_analog_phy PHY_TYPE_PCIE>;
        phy-names = "analog";
        #phy-cells = <0>;
    };
MAINTAINERS
@@ -12857,7 +12857,7 @@ PCI DRIVER FOR CADENCE PCIE IP
M: Tom Joseph <tjoseph@cadence.com>
L: linux-pci@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/pci/cdns,*.txt
F: Documentation/devicetree/bindings/pci/cdns,*
F: drivers/pci/controller/cadence/

PCI DRIVER FOR FREESCALE LAYERSCAPE
@@ -12870,6 +12870,14 @@ L: linux-arm-kernel@lists.infradead.org
S: Maintained
F: drivers/pci/controller/dwc/*layerscape*

PCI DRIVER FOR NXP LAYERSCAPE GEN4 CONTROLLER
M: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
L: linux-pci@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org
S: Maintained
F: Documentation/devicetree/bindings/pci/layerscape-pcie-gen4.txt
F: drivers/pci/controller/mobiveil/pcie-layerscape-gen4.c

PCI DRIVER FOR GENERIC OF HOSTS
M: Will Deacon <will@kernel.org>
L: linux-pci@vger.kernel.org
@@ -12912,7 +12920,7 @@ M: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
L: linux-pci@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/pci/mobiveil-pcie.txt
F: drivers/pci/controller/pcie-mobiveil.c
F: drivers/pci/controller/mobiveil/pcie-mobiveil*

PCI DRIVER FOR MVEBU (Marvell Armada 370 and Armada XP SOC support)
M: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
@@ -187,10 +187,6 @@ nautilus_machine_check(unsigned long vector, unsigned long la_ptr)

extern void pcibios_claim_one_bus(struct pci_bus *);

static struct resource irongate_io = {
        .name = "Irongate PCI IO",
        .flags = IORESOURCE_IO,
};
static struct resource irongate_mem = {
        .name = "Irongate PCI MEM",
        .flags = IORESOURCE_MEM,
@@ -208,17 +204,19 @@ nautilus_init_pci(void)
        struct pci_controller *hose = hose_head;
        struct pci_host_bridge *bridge;
        struct pci_bus *bus;
        struct pci_dev *irongate;
        unsigned long bus_align, bus_size, pci_mem;
        unsigned long memtop = max_low_pfn << PAGE_SHIFT;
        int ret;

        bridge = pci_alloc_host_bridge(0);
        if (!bridge)
                return;

        /* Use default IO. */
        pci_add_resource(&bridge->windows, &ioport_resource);
        pci_add_resource(&bridge->windows, &iomem_resource);
        /* Irongate PCI memory aperture, calculate required size before
           setting it up. */
        pci_add_resource(&bridge->windows, &irongate_mem);

        pci_add_resource(&bridge->windows, &busn_resource);
        bridge->dev.parent = NULL;
        bridge->sysdata = hose;
@@ -226,59 +224,49 @@ nautilus_init_pci(void)
        bridge->ops = alpha_mv.pci_ops;
        bridge->swizzle_irq = alpha_mv.pci_swizzle;
        bridge->map_irq = alpha_mv.pci_map_irq;
        bridge->size_windows = 1;

        /* Scan our single hose. */
        ret = pci_scan_root_bus_bridge(bridge);
        if (ret) {
        if (pci_scan_root_bus_bridge(bridge)) {
                pci_free_host_bridge(bridge);
                return;
        }

        bus = hose->bus = bridge->bus;
        pcibios_claim_one_bus(bus);

        irongate = pci_get_domain_bus_and_slot(pci_domain_nr(bus), 0, 0);
        bus->self = irongate;
        bus->resource[0] = &irongate_io;
        bus->resource[1] = &irongate_mem;

        pci_bus_size_bridges(bus);

        /* IO port range. */
        bus->resource[0]->start = 0;
        bus->resource[0]->end = 0xffff;

        /* Set up PCI memory range - limit is hardwired to 0xffffffff,
           base must be aligned to 16Mb. */
        bus_align = bus->resource[1]->start;
        bus_size = bus->resource[1]->end + 1 - bus_align;
        /* Now we've got the size and alignment of PCI memory resources
           stored in irongate_mem. Set up the PCI memory range: limit is
           hardwired to 0xffffffff, base must be aligned to 16Mb. */
        bus_align = irongate_mem.start;
        bus_size = irongate_mem.end + 1 - bus_align;
        if (bus_align < 0x1000000UL)
                bus_align = 0x1000000UL;

        pci_mem = (0x100000000UL - bus_size) & -bus_align;
        irongate_mem.start = pci_mem;
        irongate_mem.end = 0xffffffffUL;

        bus->resource[1]->start = pci_mem;
        bus->resource[1]->end = 0xffffffffUL;
        if (request_resource(&iomem_resource, bus->resource[1]) < 0)
        /* Register our newly calculated PCI memory window in the resource
           tree. */
        if (request_resource(&iomem_resource, &irongate_mem) < 0)
                printk(KERN_ERR "Failed to request MEM on hose 0\n");

        printk(KERN_INFO "Irongate pci_mem %pR\n", &irongate_mem);

        if (pci_mem < memtop)
                memtop = pci_mem;
        if (memtop > alpha_mv.min_mem_address) {
                free_reserved_area(__va(alpha_mv.min_mem_address),
                                   __va(memtop), -1, NULL);
                printk("nautilus_init_pci: %ldk freed\n",
                printk(KERN_INFO "nautilus_init_pci: %ldk freed\n",
                       (memtop - alpha_mv.min_mem_address) >> 10);
        }

        if ((IRONGATE0->dev_vendor >> 16) > 0x7006) /* Albacore? */
                IRONGATE0->pci_mem = pci_mem;

        pci_bus_assign_resources(bus);

        /* pci_common_swizzle() relies on bus->self being NULL
           for the root bus, so just clear it. */
        bus->self = NULL;
        pci_bus_add_devices(bus);
}
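
As a worked example of the aperture placement computed above (hedged:
hypothetical sizes, and assuming a 64-bit host so 0x100000000UL does not
overflow unsigned long):

        #include <stdio.h>

        int main(void)
        {
                unsigned long bus_size  = 0x08000000UL; /* assume 128 MB of BARs */
                unsigned long bus_align = 0x01000000UL; /* 16 MB minimum alignment */
                /* The window grows down from 4 GB, rounded to the alignment. */
                unsigned long pci_mem = (0x100000000UL - bus_size) & -bus_align;

                printf("window [%#lx, 0xffffffff]\n", pci_mem); /* 0xf8000000 */
                return 0;
        }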
|
@ -376,6 +376,7 @@ struct hv_tsc_emulation_status {
|
|||
#define HVCALL_SEND_IPI_EX 0x0015
|
||||
#define HVCALL_POST_MESSAGE 0x005c
|
||||
#define HVCALL_SIGNAL_EVENT 0x005d
|
||||
#define HVCALL_RETARGET_INTERRUPT 0x007e
|
||||
#define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE 0x00af
|
||||
#define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST 0x00b0
|
||||
|
||||
|
@ -405,6 +406,8 @@ enum HV_GENERIC_SET_FORMAT {
|
|||
HV_GENERIC_SET_ALL,
|
||||
};
|
||||
|
||||
#define HV_PARTITION_ID_SELF ((u64)-1)
|
||||
|
||||
#define HV_HYPERCALL_RESULT_MASK GENMASK_ULL(15, 0)
|
||||
#define HV_HYPERCALL_FAST_BIT BIT(16)
|
||||
#define HV_HYPERCALL_VARHEAD_OFFSET 17
|
||||
|
@ -909,4 +912,42 @@ struct hv_tlb_flush_ex {
|
|||
struct hv_partition_assist_pg {
|
||||
u32 tlb_lock_count;
|
||||
};
|
||||
|
||||
union hv_msi_entry {
|
||||
u64 as_uint64;
|
||||
struct {
|
||||
u32 address;
|
||||
u32 data;
|
||||
} __packed;
|
||||
};
|
||||
|
||||
struct hv_interrupt_entry {
|
||||
u32 source; /* 1 for MSI(-X) */
|
||||
u32 reserved1;
|
||||
union hv_msi_entry msi_entry;
|
||||
} __packed;
|
||||
|
||||
/*
|
||||
* flags for hv_device_interrupt_target.flags
|
||||
*/
|
||||
#define HV_DEVICE_INTERRUPT_TARGET_MULTICAST 1
|
||||
#define HV_DEVICE_INTERRUPT_TARGET_PROCESSOR_SET 2
|
||||
|
||||
struct hv_device_interrupt_target {
|
||||
u32 vector;
|
||||
u32 flags;
|
||||
union {
|
||||
u64 vp_mask;
|
||||
struct hv_vpset vp_set;
|
||||
};
|
||||
} __packed;
|
||||
|
||||
/* HvRetargetDeviceInterrupt hypercall */
|
||||
struct hv_retarget_device_interrupt {
|
||||
u64 partition_id; /* use "self" */
|
||||
u64 device_id;
|
||||
struct hv_interrupt_entry int_entry;
|
||||
u64 reserved2;
|
||||
struct hv_device_interrupt_target int_target;
|
||||
} __packed __aligned(8);
|
||||
#endif
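
A hedged sketch of how these structures compose into a
HvRetargetDeviceInterrupt input block; demo_fill_retarget is hypothetical,
and the device_id encoding plus the hypercall invocation itself are
host-specific and omitted (hv_set_msi_entry_from_desc is the helper added
in the mshyperv.h hunk below):

        static void demo_fill_retarget(struct hv_retarget_device_interrupt *in,
                                       u64 device_id, struct msi_desc *desc,
                                       u32 vector)
        {
                memset(in, 0, sizeof(*in));
                in->partition_id = HV_PARTITION_ID_SELF;
                in->device_id = device_id;
                in->int_entry.source = 1;       /* MSI(-X), per the field comment */
                hv_set_msi_entry_from_desc(&in->int_entry.msi_entry, desc);
                in->int_target.vector = vector;
        }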
@@ -4,6 +4,7 @@

#include <linux/types.h>
#include <linux/nmi.h>
#include <linux/msi.h>
#include <asm/io.h>
#include <asm/hyperv-tlfs.h>
#include <asm/nospec-branch.h>
@@ -242,6 +243,13 @@ bool hv_vcpu_is_preempted(int vcpu);
static inline void hv_apic_init(void) {}
#endif

static inline void hv_set_msi_entry_from_desc(union hv_msi_entry *msi_entry,
                                              struct msi_desc *msi_desc)
{
        msi_entry->address = msi_desc->msg.address_lo;
        msi_entry->data = msi_desc->msg.data;
}

#else /* CONFIG_HYPERV */
static inline void hyperv_init(void) {}
static inline void hyperv_setup_mmu_ops(void) {}
@@ -131,6 +131,7 @@ static struct pci_osc_bit_struct pci_osc_support_bit[] = {
        { OSC_PCI_CLOCK_PM_SUPPORT, "ClockPM" },
        { OSC_PCI_SEGMENT_GROUPS_SUPPORT, "Segments" },
        { OSC_PCI_MSI_SUPPORT, "MSI" },
        { OSC_PCI_EDR_SUPPORT, "EDR" },
        { OSC_PCI_HPX_TYPE_3_SUPPORT, "HPX-Type3" },
};

@@ -141,6 +142,7 @@ static struct pci_osc_bit_struct pci_osc_control_bit[] = {
        { OSC_PCI_EXPRESS_AER_CONTROL, "AER" },
        { OSC_PCI_EXPRESS_CAPABILITY_CONTROL, "PCIeCapability" },
        { OSC_PCI_EXPRESS_LTR_CONTROL, "LTR" },
        { OSC_PCI_EXPRESS_DPC_CONTROL, "DPC" },
};

static void decode_osc_bits(struct acpi_pci_root *root, char *msg, u32 word,
@@ -440,6 +442,8 @@ static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm,
                support |= OSC_PCI_ASPM_SUPPORT | OSC_PCI_CLOCK_PM_SUPPORT;
        if (pci_msi_enabled())
                support |= OSC_PCI_MSI_SUPPORT;
        if (IS_ENABLED(CONFIG_PCIE_EDR))
                support |= OSC_PCI_EDR_SUPPORT;

        decode_osc_support(root, "OS supports", support);
        status = acpi_pci_osc_support(root, support);
@@ -487,6 +491,15 @@ static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm,
                        control |= OSC_PCI_EXPRESS_AER_CONTROL;
        }

        /*
         * Per the Downstream Port Containment Related Enhancements ECN to
         * the PCI Firmware Spec, r3.2, sec 4.5.1, table 4-5,
         * OSC_PCI_EXPRESS_DPC_CONTROL indicates the OS supports both DPC
         * and EDR.
         */
        if (IS_ENABLED(CONFIG_PCIE_DPC) && IS_ENABLED(CONFIG_PCIE_EDR))
                control |= OSC_PCI_EXPRESS_DPC_CONTROL;

        requested = control;
        status = acpi_pci_osc_control_set(handle, &control,
                                          OSC_PCI_EXPRESS_CAPABILITY_CONTROL);
@@ -916,6 +929,8 @@ struct pci_bus *acpi_pci_root_create(struct acpi_pci_root *root,
                host_bridge->native_pme = 0;
        if (!(root->osc_control_set & OSC_PCI_EXPRESS_LTR_CONTROL))
                host_bridge->native_ltr = 0;
        if (!(root->osc_control_set & OSC_PCI_EXPRESS_DPC_CONTROL))
                host_bridge->native_dpc = 0;

        /*
         * Evaluate the "PCI Boot Configuration" _DSM Function. If it
@@ -192,30 +192,35 @@ static bool amdgpu_read_bios_from_rom(struct amdgpu_device *adev)

static bool amdgpu_read_platform_bios(struct amdgpu_device *adev)
{
        uint8_t __iomem *bios;
        size_t size;
        phys_addr_t rom = adev->pdev->rom;
        size_t romlen = adev->pdev->romlen;
        void __iomem *bios;

        adev->bios = NULL;

        bios = pci_platform_rom(adev->pdev, &size);
        if (!bios) {
                return false;
        }

        adev->bios = kzalloc(size, GFP_KERNEL);
        if (adev->bios == NULL)
        if (!rom || romlen == 0)
                return false;

        memcpy_fromio(adev->bios, bios, size);

        if (!check_atom_bios(adev->bios, size)) {
                kfree(adev->bios);
        adev->bios = kzalloc(romlen, GFP_KERNEL);
        if (!adev->bios)
                return false;
        }

        adev->bios_size = size;
        bios = ioremap(rom, romlen);
        if (!bios)
                goto free_bios;

        memcpy_fromio(adev->bios, bios, romlen);
        iounmap(bios);

        if (!check_atom_bios(adev->bios, romlen))
                goto free_bios;

        adev->bios_size = romlen;

        return true;
free_bios:
        kfree(adev->bios);
        return false;
}

#ifdef CONFIG_ACPI
@@ -101,9 +101,13 @@ platform_init(struct nvkm_bios *bios, const char *name)
        else
                return ERR_PTR(-ENODEV);

        if (!pdev->rom || pdev->romlen == 0)
                return ERR_PTR(-ENODEV);

        if ((priv = kmalloc(sizeof(*priv), GFP_KERNEL))) {
                priv->size = pdev->romlen;
                if (ret = -ENODEV,
                    (priv->rom = pci_platform_rom(pdev, &priv->size)))
                    (priv->rom = ioremap(pdev->rom, pdev->romlen)))
                        return priv;
                kfree(priv);
        }
@@ -111,11 +115,20 @@ platform_init(struct nvkm_bios *bios, const char *name)
        return ERR_PTR(ret);
}

static void
platform_fini(void *data)
{
        struct priv *priv = data;

        iounmap(priv->rom);
        kfree(priv);
}

const struct nvbios_source
nvbios_platform = {
        .name = "PLATFORM",
        .init = platform_init,
        .fini = (void(*)(void *))kfree,
        .fini = platform_fini,
        .read = pcirom_read,
        .rw = true,
};
@@ -108,25 +108,33 @@ static bool radeon_read_bios(struct radeon_device *rdev)

static bool radeon_read_platform_bios(struct radeon_device *rdev)
{
        uint8_t __iomem *bios;
        size_t size;
        phys_addr_t rom = rdev->pdev->rom;
        size_t romlen = rdev->pdev->romlen;
        void __iomem *bios;

        rdev->bios = NULL;

        bios = pci_platform_rom(rdev->pdev, &size);
        if (!bios) {
        if (!rom || romlen == 0)
                return false;
        }

        if (size == 0 || bios[0] != 0x55 || bios[1] != 0xaa) {
        rdev->bios = kzalloc(romlen, GFP_KERNEL);
        if (!rdev->bios)
                return false;
        }
        rdev->bios = kmemdup(bios, size, GFP_KERNEL);
        if (rdev->bios == NULL) {
                return false;
        }

        bios = ioremap(rom, romlen);
        if (!bios)
                goto free_bios;

        memcpy_fromio(rdev->bios, bios, romlen);
        iounmap(bios);

        if (rdev->bios[0] != 0x55 || rdev->bios[1] != 0xaa)
                goto free_bios;

        return true;
free_bios:
        kfree(rdev->bios);
        return false;
}

#ifdef CONFIG_ACPI
@@ -17,6 +17,7 @@
#include <linux/mutex.h>
#include <linux/random.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/pci.h>
#include <linux/pci_ids.h>

@@ -64,6 +65,9 @@
#define PCI_ENDPOINT_TEST_IRQ_TYPE		0x24
#define PCI_ENDPOINT_TEST_IRQ_NUMBER		0x28

#define PCI_ENDPOINT_TEST_FLAGS			0x2c
#define FLAG_USE_DMA				BIT(0)

#define PCI_DEVICE_ID_TI_AM654			0xb00c

#define is_am654_pci_dev(pdev) \
@@ -98,11 +102,13 @@ struct pci_endpoint_test {
	struct completion irq_raised;
	int last_irq;
	int num_irqs;
	int irq_type;
	/* mutex to protect the ioctls */
	struct mutex mutex;
	struct miscdevice miscdev;
	enum pci_barno test_reg_bar;
	size_t alignment;
	const char *name;
};

struct pci_endpoint_test_data {
@@ -157,6 +163,7 @@ static void pci_endpoint_test_free_irq_vectors(struct pci_endpoint_test *test)
	struct pci_dev *pdev = test->pdev;

	pci_free_irq_vectors(pdev);
	test->irq_type = IRQ_TYPE_UNDEFINED;
}

static bool pci_endpoint_test_alloc_irq_vectors(struct pci_endpoint_test *test,
@@ -191,6 +198,8 @@ static bool pci_endpoint_test_alloc_irq_vectors(struct pci_endpoint_test *test,
		irq = 0;
		res = false;
	}

	test->irq_type = type;
	test->num_irqs = irq;

	return res;
@@ -218,7 +227,7 @@ static bool pci_endpoint_test_request_irq(struct pci_endpoint_test *test)
	for (i = 0; i < test->num_irqs; i++) {
		err = devm_request_irq(dev, pci_irq_vector(pdev, i),
				       pci_endpoint_test_irqhandler,
				       IRQF_SHARED, DRV_MODULE_NAME, test);
				       IRQF_SHARED, test->name, test);
		if (err)
			goto fail;
	}
@@ -315,11 +324,16 @@ static bool pci_endpoint_test_msi_irq(struct pci_endpoint_test *test,
	return false;
}

static bool pci_endpoint_test_copy(struct pci_endpoint_test *test, size_t size)
static bool pci_endpoint_test_copy(struct pci_endpoint_test *test,
				   unsigned long arg)
{
	struct pci_endpoint_test_xfer_param param;
	bool ret = false;
	void *src_addr;
	void *dst_addr;
	u32 flags = 0;
	bool use_dma;
	size_t size;
	dma_addr_t src_phys_addr;
	dma_addr_t dst_phys_addr;
	struct pci_dev *pdev = test->pdev;
@@ -330,25 +344,46 @@ static bool pci_endpoint_test_copy(struct pci_endpoint_test *test, size_t size)
	dma_addr_t orig_dst_phys_addr;
	size_t offset;
	size_t alignment = test->alignment;
	int irq_type = test->irq_type;
	u32 src_crc32;
	u32 dst_crc32;
	int err;

	err = copy_from_user(&param, (void __user *)arg, sizeof(param));
	if (err) {
		dev_err(dev, "Failed to get transfer param\n");
		return false;
	}

	size = param.size;
	if (size > SIZE_MAX - alignment)
		goto err;

	use_dma = !!(param.flags & PCITEST_FLAGS_USE_DMA);
	if (use_dma)
		flags |= FLAG_USE_DMA;

	if (irq_type < IRQ_TYPE_LEGACY || irq_type > IRQ_TYPE_MSIX) {
		dev_err(dev, "Invalid IRQ type option\n");
		goto err;
	}

	orig_src_addr = dma_alloc_coherent(dev, size + alignment,
					   &orig_src_phys_addr, GFP_KERNEL);
	orig_src_addr = kzalloc(size + alignment, GFP_KERNEL);
	if (!orig_src_addr) {
		dev_err(dev, "Failed to allocate source buffer\n");
		ret = false;
		goto err;
	}

	get_random_bytes(orig_src_addr, size + alignment);
	orig_src_phys_addr = dma_map_single(dev, orig_src_addr,
					    size + alignment, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, orig_src_phys_addr)) {
		dev_err(dev, "failed to map source buffer address\n");
		ret = false;
		goto err_src_phys_addr;
	}

	if (alignment && !IS_ALIGNED(orig_src_phys_addr, alignment)) {
		src_phys_addr = PTR_ALIGN(orig_src_phys_addr, alignment);
		offset = src_phys_addr - orig_src_phys_addr;
@@ -364,15 +399,21 @@ static bool pci_endpoint_test_copy(struct pci_endpoint_test *test, size_t size)
	pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_UPPER_SRC_ADDR,
				 upper_32_bits(src_phys_addr));

	get_random_bytes(src_addr, size);
	src_crc32 = crc32_le(~0, src_addr, size);

	orig_dst_addr = dma_alloc_coherent(dev, size + alignment,
					   &orig_dst_phys_addr, GFP_KERNEL);
	orig_dst_addr = kzalloc(size + alignment, GFP_KERNEL);
	if (!orig_dst_addr) {
		dev_err(dev, "Failed to allocate destination address\n");
		ret = false;
		goto err_orig_src_addr;
		goto err_dst_addr;
	}

	orig_dst_phys_addr = dma_map_single(dev, orig_dst_addr,
					    size + alignment, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, orig_dst_phys_addr)) {
		dev_err(dev, "failed to map destination buffer address\n");
		ret = false;
		goto err_dst_phys_addr;
	}

	if (alignment && !IS_ALIGNED(orig_dst_phys_addr, alignment)) {
@@ -392,6 +433,7 @@ static bool pci_endpoint_test_copy(struct pci_endpoint_test *test, size_t size)
	pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_SIZE,
				 size);

	pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_FLAGS, flags);
	pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_TYPE, irq_type);
	pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_NUMBER, 1);
	pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_COMMAND,
@@ -399,24 +441,34 @@ static bool pci_endpoint_test_copy(struct pci_endpoint_test *test, size_t size)

	wait_for_completion(&test->irq_raised);

	dma_unmap_single(dev, orig_dst_phys_addr, size + alignment,
			 DMA_FROM_DEVICE);

	dst_crc32 = crc32_le(~0, dst_addr, size);
	if (dst_crc32 == src_crc32)
		ret = true;

	dma_free_coherent(dev, size + alignment, orig_dst_addr,
			  orig_dst_phys_addr);
err_dst_phys_addr:
	kfree(orig_dst_addr);

err_orig_src_addr:
	dma_free_coherent(dev, size + alignment, orig_src_addr,
			  orig_src_phys_addr);
err_dst_addr:
	dma_unmap_single(dev, orig_src_phys_addr, size + alignment,
			 DMA_TO_DEVICE);

err_src_phys_addr:
	kfree(orig_src_addr);

err:
	return ret;
}

static bool pci_endpoint_test_write(struct pci_endpoint_test *test, size_t size)
static bool pci_endpoint_test_write(struct pci_endpoint_test *test,
				    unsigned long arg)
{
	struct pci_endpoint_test_xfer_param param;
	bool ret = false;
	u32 flags = 0;
	bool use_dma;
	u32 reg;
	void *addr;
	dma_addr_t phys_addr;
@@ -426,24 +478,47 @@ static bool pci_endpoint_test_write(struct pci_endpoint_test *test, size_t size)
	dma_addr_t orig_phys_addr;
	size_t offset;
	size_t alignment = test->alignment;
	int irq_type = test->irq_type;
	size_t size;
	u32 crc32;
	int err;

	err = copy_from_user(&param, (void __user *)arg, sizeof(param));
	if (err != 0) {
		dev_err(dev, "Failed to get transfer param\n");
		return false;
	}

	size = param.size;
	if (size > SIZE_MAX - alignment)
		goto err;

	use_dma = !!(param.flags & PCITEST_FLAGS_USE_DMA);
	if (use_dma)
		flags |= FLAG_USE_DMA;

	if (irq_type < IRQ_TYPE_LEGACY || irq_type > IRQ_TYPE_MSIX) {
		dev_err(dev, "Invalid IRQ type option\n");
		goto err;
	}

	orig_addr = dma_alloc_coherent(dev, size + alignment, &orig_phys_addr,
				       GFP_KERNEL);
	orig_addr = kzalloc(size + alignment, GFP_KERNEL);
	if (!orig_addr) {
		dev_err(dev, "Failed to allocate address\n");
		ret = false;
		goto err;
	}

	get_random_bytes(orig_addr, size + alignment);

	orig_phys_addr = dma_map_single(dev, orig_addr, size + alignment,
					DMA_TO_DEVICE);
	if (dma_mapping_error(dev, orig_phys_addr)) {
		dev_err(dev, "failed to map source buffer address\n");
		ret = false;
		goto err_phys_addr;
	}

	if (alignment && !IS_ALIGNED(orig_phys_addr, alignment)) {
		phys_addr = PTR_ALIGN(orig_phys_addr, alignment);
		offset = phys_addr - orig_phys_addr;
@@ -453,8 +528,6 @@ static bool pci_endpoint_test_write(struct pci_endpoint_test *test, size_t size)
		addr = orig_addr;
	}

	get_random_bytes(addr, size);

	crc32 = crc32_le(~0, addr, size);
	pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_CHECKSUM,
				 crc32);
@@ -466,6 +539,7 @@ static bool pci_endpoint_test_write(struct pci_endpoint_test *test, size_t size)

	pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_SIZE, size);

	pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_FLAGS, flags);
	pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_TYPE, irq_type);
	pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_NUMBER, 1);
	pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_COMMAND,
@@ -477,15 +551,24 @@ static bool pci_endpoint_test_write(struct pci_endpoint_test *test, size_t size)
	if (reg & STATUS_READ_SUCCESS)
		ret = true;

	dma_free_coherent(dev, size + alignment, orig_addr, orig_phys_addr);
	dma_unmap_single(dev, orig_phys_addr, size + alignment,
			 DMA_TO_DEVICE);

err_phys_addr:
	kfree(orig_addr);

err:
	return ret;
}

static bool pci_endpoint_test_read(struct pci_endpoint_test *test, size_t size)
static bool pci_endpoint_test_read(struct pci_endpoint_test *test,
				   unsigned long arg)
{
	struct pci_endpoint_test_xfer_param param;
	bool ret = false;
	u32 flags = 0;
	bool use_dma;
	size_t size;
	void *addr;
	dma_addr_t phys_addr;
	struct pci_dev *pdev = test->pdev;
@@ -494,24 +577,44 @@ static bool pci_endpoint_test_read(struct pci_endpoint_test *test, size_t size)
	dma_addr_t orig_phys_addr;
	size_t offset;
	size_t alignment = test->alignment;
	int irq_type = test->irq_type;
	u32 crc32;
	int err;

	err = copy_from_user(&param, (void __user *)arg, sizeof(param));
	if (err) {
		dev_err(dev, "Failed to get transfer param\n");
		return false;
	}

	size = param.size;
	if (size > SIZE_MAX - alignment)
		goto err;

	use_dma = !!(param.flags & PCITEST_FLAGS_USE_DMA);
	if (use_dma)
		flags |= FLAG_USE_DMA;

	if (irq_type < IRQ_TYPE_LEGACY || irq_type > IRQ_TYPE_MSIX) {
		dev_err(dev, "Invalid IRQ type option\n");
		goto err;
	}

	orig_addr = dma_alloc_coherent(dev, size + alignment, &orig_phys_addr,
				       GFP_KERNEL);
	orig_addr = kzalloc(size + alignment, GFP_KERNEL);
	if (!orig_addr) {
		dev_err(dev, "Failed to allocate destination address\n");
		ret = false;
		goto err;
	}

	orig_phys_addr = dma_map_single(dev, orig_addr, size + alignment,
					DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, orig_phys_addr)) {
		dev_err(dev, "failed to map source buffer address\n");
		ret = false;
		goto err_phys_addr;
	}

	if (alignment && !IS_ALIGNED(orig_phys_addr, alignment)) {
		phys_addr = PTR_ALIGN(orig_phys_addr, alignment);
		offset = phys_addr - orig_phys_addr;
@@ -528,6 +631,7 @@ static bool pci_endpoint_test_read(struct pci_endpoint_test *test, size_t size)

	pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_SIZE, size);

	pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_FLAGS, flags);
	pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_TYPE, irq_type);
	pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_NUMBER, 1);
	pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_COMMAND,
@@ -535,15 +639,26 @@ static bool pci_endpoint_test_read(struct pci_endpoint_test *test, size_t size)

	wait_for_completion(&test->irq_raised);

	dma_unmap_single(dev, orig_phys_addr, size + alignment,
			 DMA_FROM_DEVICE);

	crc32 = crc32_le(~0, addr, size);
	if (crc32 == pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_CHECKSUM))
		ret = true;

	dma_free_coherent(dev, size + alignment, orig_addr, orig_phys_addr);
err_phys_addr:
	kfree(orig_addr);
err:
	return ret;
}

static bool pci_endpoint_test_clear_irq(struct pci_endpoint_test *test)
{
	pci_endpoint_test_release_irq(test);
	pci_endpoint_test_free_irq_vectors(test);
	return true;
}

static bool pci_endpoint_test_set_irq(struct pci_endpoint_test *test,
				      int req_irq_type)
{
@@ -555,7 +670,7 @@ static bool pci_endpoint_test_set_irq(struct pci_endpoint_test *test,
		return false;
	}

	if (irq_type == req_irq_type)
	if (test->irq_type == req_irq_type)
		return true;

	pci_endpoint_test_release_irq(test);
@@ -567,12 +682,10 @@ static bool pci_endpoint_test_set_irq(struct pci_endpoint_test *test,
	if (!pci_endpoint_test_request_irq(test))
		goto err;

	irq_type = req_irq_type;
	return true;

err:
	pci_endpoint_test_free_irq_vectors(test);
	irq_type = IRQ_TYPE_UNDEFINED;
	return false;
}

@@ -616,6 +729,9 @@ static long pci_endpoint_test_ioctl(struct file *file, unsigned int cmd,
	case PCITEST_GET_IRQTYPE:
		ret = irq_type;
		break;
	case PCITEST_CLEAR_IRQ:
		ret = pci_endpoint_test_clear_irq(test);
		break;
	}

ret:
@@ -633,7 +749,7 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
{
	int err;
	int id;
	char name[20];
	char name[24];
	enum pci_barno bar;
	void __iomem *base;
	struct device *dev = &pdev->dev;
@@ -652,6 +768,7 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
	test->test_reg_bar = 0;
	test->alignment = 0;
	test->pdev = pdev;
	test->irq_type = IRQ_TYPE_UNDEFINED;

	if (no_msi)
		irq_type = IRQ_TYPE_LEGACY;
@@ -667,6 +784,12 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
	init_completion(&test->irq_raised);
	mutex_init(&test->mutex);

	if ((dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(48)) != 0) &&
	    dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)) != 0) {
		dev_err(dev, "Cannot set DMA mask\n");
		return -EINVAL;
	}

	err = pci_enable_device(pdev);
	if (err) {
		dev_err(dev, "Cannot enable PCI device\n");
@@ -684,9 +807,6 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
	if (!pci_endpoint_test_alloc_irq_vectors(test, irq_type))
		goto err_disable_irq;

	if (!pci_endpoint_test_request_irq(test))
		goto err_disable_irq;

	for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
		if (pci_resource_flags(pdev, bar) & IORESOURCE_MEM) {
			base = pci_ioremap_bar(pdev, bar);
@@ -716,12 +836,21 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
	}

	snprintf(name, sizeof(name), DRV_MODULE_NAME ".%d", id);
	test->name = kstrdup(name, GFP_KERNEL);
	if (!test->name) {
		err = -ENOMEM;
		goto err_ida_remove;
	}

	if (!pci_endpoint_test_request_irq(test))
		goto err_kfree_test_name;

	misc_device = &test->miscdev;
	misc_device->minor = MISC_DYNAMIC_MINOR;
	misc_device->name = kstrdup(name, GFP_KERNEL);
	if (!misc_device->name) {
		err = -ENOMEM;
		goto err_ida_remove;
		goto err_release_irq;
	}
	misc_device->fops = &pci_endpoint_test_fops,

@@ -736,6 +865,12 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
err_kfree_name:
	kfree(misc_device->name);

err_release_irq:
	pci_endpoint_test_release_irq(test);

err_kfree_test_name:
	kfree(test->name);

err_ida_remove:
	ida_simple_remove(&pci_endpoint_test_ida, id);

@@ -744,7 +879,6 @@ err_iounmap:
		if (test->bar[bar])
			pci_iounmap(pdev, test->bar[bar]);
	}
	pci_endpoint_test_release_irq(test);

err_disable_irq:
	pci_endpoint_test_free_irq_vectors(test);
@@ -770,6 +904,7 @@ static void pci_endpoint_test_remove(struct pci_dev *pdev)

	misc_deregister(&test->miscdev);
	kfree(misc_device->name);
	kfree(test->name);
	ida_simple_remove(&pci_endpoint_test_ida, id);
	for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
		if (test->bar[bar])
@@ -783,6 +918,12 @@ static void pci_endpoint_test_remove(struct pci_dev *pdev)
	pci_disable_device(pdev);
}

static const struct pci_endpoint_test_data default_data = {
	.test_reg_bar = BAR_0,
	.alignment = SZ_4K,
	.irq_type = IRQ_TYPE_MSI,
};

static const struct pci_endpoint_test_data am654_data = {
	.test_reg_bar = BAR_2,
	.alignment = SZ_64K,
@@ -790,8 +931,12 @@ static const struct pci_endpoint_test_data am654_data = {
};

static const struct pci_device_id pci_endpoint_test_tbl[] = {
	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA74x) },
	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA72x) },
	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA74x),
	  .driver_data = (kernel_ulong_t)&default_data,
	},
	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA72x),
	  .driver_data = (kernel_ulong_t)&default_data,
	},
	{ PCI_DEVICE(PCI_VENDOR_ID_FREESCALE, 0x81c0) },
	{ PCI_DEVICE_DATA(SYNOPSYS, EDDA, NULL) },
	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_AM654),
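The pci_endpoint_test rework above replaces coherent allocations (dma_alloc_coherent()) with plain kzalloc() buffers mapped on demand via the streaming DMA API, so the same memory can be handed to the device and then CRC-checked by the CPU with an explicit ownership handover at dma_unmap_single(). A minimal sketch of that map/unmap round trip, with hypothetical names ("demo_dma_roundtrip" and the missing device-programming step are illustrative, not from the patch):

#include <linux/dma-mapping.h>
#include <linux/slab.h>

/* Sketch of the streaming-DMA pattern adopted above. */
static bool demo_dma_roundtrip(struct device *dev, size_t size)
{
	dma_addr_t phys;
	void *buf;
	bool ok = false;

	buf = kzalloc(size, GFP_KERNEL);
	if (!buf)
		return false;

	/* Map for device writes; the device owns the buffer from here. */
	phys = dma_map_single(dev, buf, size, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, phys))
		goto free_buf;

	/* ... program the device with "phys" and wait for completion ... */

	/* Unmap before the CPU reads: this syncs caches back to the CPU. */
	dma_unmap_single(dev, phys, size, DMA_FROM_DEVICE);
	ok = true;

free_buf:
	kfree(buf);
	return ok;
}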
@@ -3508,9 +3508,9 @@ static pci_ers_result_t ice_pci_err_slot_reset(struct pci_dev *pdev)
		result = PCI_ERS_RESULT_DISCONNECT;
	}

	err = pci_cleanup_aer_uncorrect_error_status(pdev);
	err = pci_aer_clear_nonfatal_status(pdev);
	if (err)
		dev_dbg(&pdev->dev, "pci_cleanup_aer_uncorrect_error_status failed, error %d\n",
		dev_dbg(&pdev->dev, "pci_aer_clear_nonfatal_status() failed, error %d\n",
			err);
		/* non-fatal, continue */
@@ -2674,8 +2674,8 @@ static int idt_init_pci(struct idt_ntb_dev *ndev)
	ret = pci_enable_pcie_error_reporting(pdev);
	if (ret != 0)
		dev_warn(&pdev->dev, "PCIe AER capability disabled\n");
	else /* Cleanup uncorrectable error status before getting to init */
		pci_cleanup_aer_uncorrect_error_status(pdev);
	else /* Cleanup nonfatal error status before getting to init */
		pci_aer_clear_nonfatal_status(pdev);

	/* First enable the PCI device */
	ret = pcim_enable_device(pdev);
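Both hunks above are mechanical renames: pci_cleanup_aer_uncorrect_error_status() becomes pci_aer_clear_nonfatal_status(), with unchanged semantics (clearing the device's non-fatal uncorrectable AER status bits). A hedged sketch of a recovery callback using the new name; the "demo" function name is illustrative only:

/* Sketch of a slot_reset error-recovery callback with the renamed helper. */
static pci_ers_result_t demo_slot_reset(struct pci_dev *pdev)
{
	if (pci_enable_device_mem(pdev))
		return PCI_ERS_RESULT_DISCONNECT;

	pci_set_master(pdev);

	/* Clears the same bits the old helper name did; failure is benign. */
	if (pci_aer_clear_nonfatal_status(pdev))
		dev_dbg(&pdev->dev, "clearing non-fatal AER status failed\n");

	return PCI_ERS_RESULT_RECOVERED;
}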
@@ -213,16 +213,6 @@ config PCIE_MEDIATEK
	  Say Y here if you want to enable PCIe controller support on
	  MediaTek SoCs.

config PCIE_MOBIVEIL
	bool "Mobiveil AXI PCIe controller"
	depends on ARCH_ZYNQMP || COMPILE_TEST
	depends on OF
	depends on PCI_MSI_IRQ_DOMAIN
	help
	  Say Y here if you want to enable support for the Mobiveil AXI PCIe
	  Soft IP. It has up to 8 outbound and inbound windows
	  for address translation and it is a PCIe Gen4 IP.

config PCIE_TANGO_SMP8759
	bool "Tango SMP8759 PCIe controller (DANGEROUS)"
	depends on ARCH_TANGO && PCI_MSI && OF
@@ -269,5 +259,6 @@ config PCI_HYPERV_INTERFACE
	  have a common interface with the Hyper-V PCI frontend driver.

source "drivers/pci/controller/dwc/Kconfig"
source "drivers/pci/controller/mobiveil/Kconfig"
source "drivers/pci/controller/cadence/Kconfig"
endmenu
@@ -25,12 +25,12 @@ obj-$(CONFIG_PCIE_ROCKCHIP) += pcie-rockchip.o
obj-$(CONFIG_PCIE_ROCKCHIP_EP) += pcie-rockchip-ep.o
obj-$(CONFIG_PCIE_ROCKCHIP_HOST) += pcie-rockchip-host.o
obj-$(CONFIG_PCIE_MEDIATEK) += pcie-mediatek.o
obj-$(CONFIG_PCIE_MOBIVEIL) += pcie-mobiveil.o
obj-$(CONFIG_PCIE_TANGO_SMP8759) += pcie-tango.o
obj-$(CONFIG_VMD) += vmd.o
obj-$(CONFIG_PCIE_BRCMSTB) += pcie-brcmstb.o
# pcie-hisi.o quirks are needed even without CONFIG_PCIE_DW
obj-y += dwc/
obj-y += mobiveil/

# The following drivers are for devices that use the generic ACPI
@@ -248,14 +248,37 @@ config PCI_MESON
	  implement the driver.

config PCIE_TEGRA194
	tristate "NVIDIA Tegra194 (and later) PCIe controller"
	tristate

config PCIE_TEGRA194_HOST
	tristate "NVIDIA Tegra194 (and later) PCIe controller - Host Mode"
	depends on ARCH_TEGRA_194_SOC || COMPILE_TEST
	depends on PCI_MSI_IRQ_DOMAIN
	select PCIE_DW_HOST
	select PHY_TEGRA194_P2U
	select PCIE_TEGRA194
	help
	  Say Y here if you want support for DesignWare core based PCIe host
	  controller found in NVIDIA Tegra194 SoC.
	  Enables support for the PCIe controller in the NVIDIA Tegra194 SoC to
	  work in host mode. There are two instances of PCIe controllers in
	  Tegra194. This controller can work either as EP or RC. In order to
	  enable host-specific features PCIE_TEGRA194_HOST must be selected and
	  in order to enable device-specific features PCIE_TEGRA194_EP must be
	  selected. This uses the DesignWare core.

config PCIE_TEGRA194_EP
	tristate "NVIDIA Tegra194 (and later) PCIe controller - Endpoint Mode"
	depends on ARCH_TEGRA_194_SOC || COMPILE_TEST
	depends on PCI_ENDPOINT
	select PCIE_DW_EP
	select PHY_TEGRA194_P2U
	select PCIE_TEGRA194
	help
	  Enables support for the PCIe controller in the NVIDIA Tegra194 SoC to
	  work in endpoint mode. There are two instances of PCIe controllers in
	  Tegra194. This controller can work either as EP or RC. In order to
	  enable host-specific features PCIE_TEGRA194_HOST must be selected and
	  in order to enable device-specific features PCIE_TEGRA194_EP must be
	  selected. This uses the DesignWare core.

config PCIE_UNIPHIER
	bool "Socionext UniPhier PCIe controllers"
@@ -215,10 +215,6 @@ static int dra7xx_pcie_host_init(struct pcie_port *pp)
	return 0;
}

static const struct dw_pcie_host_ops dra7xx_pcie_host_ops = {
	.host_init = dra7xx_pcie_host_init,
};

static int dra7xx_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
				irq_hw_number_t hwirq)
{
@@ -233,43 +229,77 @@ static const struct irq_domain_ops intx_domain_ops = {
	.xlate = pci_irqd_intx_xlate,
};

static int dra7xx_pcie_init_irq_domain(struct pcie_port *pp)
static int dra7xx_pcie_handle_msi(struct pcie_port *pp, int index)
{
	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
	struct device *dev = pci->dev;
	struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pci);
	struct device_node *node = dev->of_node;
	struct device_node *pcie_intc_node = of_get_next_child(node, NULL);
	unsigned long val;
	int pos, irq;

	if (!pcie_intc_node) {
		dev_err(dev, "No PCIe Intc node found\n");
		return -ENODEV;
	val = dw_pcie_readl_dbi(pci, PCIE_MSI_INTR0_STATUS +
				(index * MSI_REG_CTRL_BLOCK_SIZE));
	if (!val)
		return 0;

	pos = find_next_bit(&val, MAX_MSI_IRQS_PER_CTRL, 0);
	while (pos != MAX_MSI_IRQS_PER_CTRL) {
		irq = irq_find_mapping(pp->irq_domain,
				       (index * MAX_MSI_IRQS_PER_CTRL) + pos);
		generic_handle_irq(irq);
		pos++;
		pos = find_next_bit(&val, MAX_MSI_IRQS_PER_CTRL, pos);
	}

	dra7xx->irq_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
						   &intx_domain_ops, pp);
	of_node_put(pcie_intc_node);
	if (!dra7xx->irq_domain) {
		dev_err(dev, "Failed to get a INTx IRQ domain\n");
		return -ENODEV;
	}

	return 0;
	return 1;
}

static irqreturn_t dra7xx_pcie_msi_irq_handler(int irq, void *arg)
static void dra7xx_pcie_handle_msi_irq(struct pcie_port *pp)
{
	struct dra7xx_pcie *dra7xx = arg;
	struct dw_pcie *pci = dra7xx->pci;
	struct pcie_port *pp = &pci->pp;
	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
	int ret, i, count, num_ctrls;

	num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;

	/**
	 * Need to make sure all MSI status bits read 0 before exiting.
	 * Else, new MSI IRQs are not registered by the wrapper. Have an
	 * upperbound for the loop and exit the IRQ in case of IRQ flood
	 * to avoid locking up system in interrupt context.
	 */
	count = 0;
	do {
		ret = 0;

		for (i = 0; i < num_ctrls; i++)
			ret |= dra7xx_pcie_handle_msi(pp, i);
		count++;
	} while (ret && count <= 1000);

	if (count > 1000)
		dev_warn_ratelimited(pci->dev,
				     "Too many MSI IRQs to handle\n");
}

static void dra7xx_pcie_msi_irq_handler(struct irq_desc *desc)
{
	struct irq_chip *chip = irq_desc_get_chip(desc);
	struct dra7xx_pcie *dra7xx;
	struct dw_pcie *pci;
	struct pcie_port *pp;
	unsigned long reg;
	u32 virq, bit;

	chained_irq_enter(chip, desc);

	pp = irq_desc_get_handler_data(desc);
	pci = to_dw_pcie_from_pp(pp);
	dra7xx = to_dra7xx_pcie(pci);

	reg = dra7xx_pcie_readl(dra7xx, PCIECTRL_DRA7XX_CONF_IRQSTATUS_MSI);
	dra7xx_pcie_writel(dra7xx, PCIECTRL_DRA7XX_CONF_IRQSTATUS_MSI, reg);

	switch (reg) {
	case MSI:
		dw_handle_msi_irq(pp);
		dra7xx_pcie_handle_msi_irq(pp);
		break;
	case INTA:
	case INTB:
@@ -283,9 +313,7 @@ static irqreturn_t dra7xx_pcie_msi_irq_handler(int irq, void *arg)
		break;
	}

	dra7xx_pcie_writel(dra7xx, PCIECTRL_DRA7XX_CONF_IRQSTATUS_MSI, reg);

	return IRQ_HANDLED;
	chained_irq_exit(chip, desc);
}

static irqreturn_t dra7xx_pcie_irq_handler(int irq, void *arg)
@@ -347,6 +375,145 @@ static irqreturn_t dra7xx_pcie_irq_handler(int irq, void *arg)
	return IRQ_HANDLED;
}

static int dra7xx_pcie_init_irq_domain(struct pcie_port *pp)
{
	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
	struct device *dev = pci->dev;
	struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pci);
	struct device_node *node = dev->of_node;
	struct device_node *pcie_intc_node = of_get_next_child(node, NULL);

	if (!pcie_intc_node) {
		dev_err(dev, "No PCIe Intc node found\n");
		return -ENODEV;
	}

	irq_set_chained_handler_and_data(pp->irq, dra7xx_pcie_msi_irq_handler,
					 pp);
	dra7xx->irq_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
						   &intx_domain_ops, pp);
	of_node_put(pcie_intc_node);
	if (!dra7xx->irq_domain) {
		dev_err(dev, "Failed to get a INTx IRQ domain\n");
		return -ENODEV;
	}

	return 0;
}

static void dra7xx_pcie_setup_msi_msg(struct irq_data *d, struct msi_msg *msg)
{
	struct pcie_port *pp = irq_data_get_irq_chip_data(d);
	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
	u64 msi_target;

	msi_target = (u64)pp->msi_data;

	msg->address_lo = lower_32_bits(msi_target);
	msg->address_hi = upper_32_bits(msi_target);

	msg->data = d->hwirq;

	dev_dbg(pci->dev, "msi#%d address_hi %#x address_lo %#x\n",
		(int)d->hwirq, msg->address_hi, msg->address_lo);
}

static int dra7xx_pcie_msi_set_affinity(struct irq_data *d,
					const struct cpumask *mask,
					bool force)
{
	return -EINVAL;
}

static void dra7xx_pcie_bottom_mask(struct irq_data *d)
{
	struct pcie_port *pp = irq_data_get_irq_chip_data(d);
	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
	unsigned int res, bit, ctrl;
	unsigned long flags;

	raw_spin_lock_irqsave(&pp->lock, flags);

	ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL;
	res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
	bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL;

	pp->irq_mask[ctrl] |= BIT(bit);
	dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_MASK + res,
			   pp->irq_mask[ctrl]);

	raw_spin_unlock_irqrestore(&pp->lock, flags);
}

static void dra7xx_pcie_bottom_unmask(struct irq_data *d)
{
	struct pcie_port *pp = irq_data_get_irq_chip_data(d);
	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
	unsigned int res, bit, ctrl;
	unsigned long flags;

	raw_spin_lock_irqsave(&pp->lock, flags);

	ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL;
	res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
	bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL;

	pp->irq_mask[ctrl] &= ~BIT(bit);
	dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_MASK + res,
			   pp->irq_mask[ctrl]);

	raw_spin_unlock_irqrestore(&pp->lock, flags);
}

static void dra7xx_pcie_bottom_ack(struct irq_data *d)
{
	struct pcie_port *pp = irq_data_get_irq_chip_data(d);
	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
	unsigned int res, bit, ctrl;

	ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL;
	res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
	bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL;

	dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_STATUS + res, BIT(bit));
}

static struct irq_chip dra7xx_pci_msi_bottom_irq_chip = {
	.name = "DRA7XX-PCI-MSI",
	.irq_ack = dra7xx_pcie_bottom_ack,
	.irq_compose_msi_msg = dra7xx_pcie_setup_msi_msg,
	.irq_set_affinity = dra7xx_pcie_msi_set_affinity,
	.irq_mask = dra7xx_pcie_bottom_mask,
	.irq_unmask = dra7xx_pcie_bottom_unmask,
};

static int dra7xx_pcie_msi_host_init(struct pcie_port *pp)
{
	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
	u32 ctrl, num_ctrls;

	pp->msi_irq_chip = &dra7xx_pci_msi_bottom_irq_chip;

	num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
	/* Initialize IRQ Status array */
	for (ctrl = 0; ctrl < num_ctrls; ctrl++) {
		pp->irq_mask[ctrl] = ~0;
		dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_MASK +
				   (ctrl * MSI_REG_CTRL_BLOCK_SIZE),
				   pp->irq_mask[ctrl]);
		dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_ENABLE +
				   (ctrl * MSI_REG_CTRL_BLOCK_SIZE),
				   ~0);
	}

	return dw_pcie_allocate_domains(pp);
}

static const struct dw_pcie_host_ops dra7xx_pcie_host_ops = {
	.host_init = dra7xx_pcie_host_init,
	.msi_host_init = dra7xx_pcie_msi_host_init,
};

static void dra7xx_pcie_ep_init(struct dw_pcie_ep *ep)
{
	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
@@ -467,14 +634,6 @@ static int __init dra7xx_add_pcie_port(struct dra7xx_pcie *dra7xx,
		return pp->irq;
	}

	ret = devm_request_irq(dev, pp->irq, dra7xx_pcie_msi_irq_handler,
			       IRQF_SHARED | IRQF_NO_THREAD,
			       "dra7-pcie-msi",	dra7xx);
	if (ret) {
		dev_err(dev, "failed to request irq\n");
		return ret;
	}

	ret = dra7xx_pcie_init_irq_domain(pp);
	if (ret < 0)
		return ret;
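The dra7xx rework above drops the devm_request_irq() MSI handler in favor of a chained handler installed with irq_set_chained_handler_and_data(). A chained handler runs in the parent interrupt's hard-IRQ context and must bracket its demultiplexing with chained_irq_enter()/chained_irq_exit(), which is exactly the shape of the new dra7xx_pcie_msi_irq_handler(). A skeleton of the pattern, with the demux body elided and "demo" names illustrative:

#include <linux/irq.h>
#include <linux/irqchip/chained_irq.h>

/* Skeleton of the chained-IRQ demultiplexer pattern adopted above. */
static void demo_chained_handler(struct irq_desc *desc)
{
	struct irq_chip *chip = irq_desc_get_chip(desc);
	void *ctx = irq_desc_get_handler_data(desc);

	chained_irq_enter(chip, desc);	/* ack/mask the parent interrupt */

	/* ... read the controller's status register via ctx and call
	 * generic_handle_irq() for each pending child interrupt ... */

	chained_irq_exit(chip, desc);	/* eoi/unmask the parent interrupt */
}

/* Installed at probe time instead of devm_request_irq():
 *	irq_set_chained_handler_and_data(parent_irq, demo_chained_handler, pp);
 */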
@@ -959,6 +959,9 @@ static int ks_pcie_am654_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
	case PCI_EPC_IRQ_MSI:
		dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num);
		break;
	case PCI_EPC_IRQ_MSIX:
		dw_pcie_ep_raise_msix_irq(ep, func_no, interrupt_num);
		break;
	default:
		dev_err(pci->dev, "UNKNOWN IRQ type\n");
		return -EINVAL;
@@ -970,7 +973,7 @@ static int ks_pcie_am654_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
static const struct pci_epc_features ks_pcie_am654_epc_features = {
	.linkup_notifier = false,
	.msi_capable = true,
	.msix_capable = false,
	.msix_capable = true,
	.reserved_bar = 1 << BAR_0 | 1 << BAR_1,
	.bar_fixed_64bit = 1 << BAR_0,
	.bar_fixed_size[2] = SZ_1M,
@@ -66,7 +66,6 @@
#define PORT_CLK_RATE			100000000UL
#define MAX_PAYLOAD_SIZE		256
#define MAX_READ_REQ_SIZE		256
#define MESON_PCIE_PHY_POWERUP		0x1c
#define PCIE_RESET_DELAY		500
#define PCIE_SHARED_RESET		1
#define PCIE_NORMAL_RESET		0
@@ -81,26 +80,19 @@ enum pcie_data_rate {
struct meson_pcie_mem_res {
	void __iomem *elbi_base;
	void __iomem *cfg_base;
	void __iomem *phy_base;
};

struct meson_pcie_clk_res {
	struct clk *clk;
	struct clk *mipi_gate;
	struct clk *port_clk;
	struct clk *general_clk;
};

struct meson_pcie_rc_reset {
	struct reset_control *phy;
	struct reset_control *port;
	struct reset_control *apb;
};

struct meson_pcie_param {
	bool has_shared_phy;
};

struct meson_pcie {
	struct dw_pcie pci;
	struct meson_pcie_mem_res mem_res;
@@ -108,7 +100,6 @@ struct meson_pcie {
	struct meson_pcie_rc_reset mrst;
	struct gpio_desc *reset_gpio;
	struct phy *phy;
	const struct meson_pcie_param *param;
};

static struct reset_control *meson_pcie_get_reset(struct meson_pcie *mp,
@@ -130,13 +121,6 @@ static int meson_pcie_get_resets(struct meson_pcie *mp)
{
	struct meson_pcie_rc_reset *mrst = &mp->mrst;

	if (!mp->param->has_shared_phy) {
		mrst->phy = meson_pcie_get_reset(mp, "phy", PCIE_SHARED_RESET);
		if (IS_ERR(mrst->phy))
			return PTR_ERR(mrst->phy);
		reset_control_deassert(mrst->phy);
	}

	mrst->port = meson_pcie_get_reset(mp, "port", PCIE_NORMAL_RESET);
	if (IS_ERR(mrst->port))
		return PTR_ERR(mrst->port);
@@ -162,22 +146,6 @@ static void __iomem *meson_pcie_get_mem(struct platform_device *pdev,
	return devm_ioremap_resource(dev, res);
}

static void __iomem *meson_pcie_get_mem_shared(struct platform_device *pdev,
					       struct meson_pcie *mp,
					       const char *id)
{
	struct device *dev = mp->pci.dev;
	struct resource *res;

	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, id);
	if (!res) {
		dev_err(dev, "No REG resource %s\n", id);
		return ERR_PTR(-ENXIO);
	}

	return devm_ioremap(dev, res->start, resource_size(res));
}

static int meson_pcie_get_mems(struct platform_device *pdev,
			       struct meson_pcie *mp)
{
@@ -189,14 +157,6 @@ static int meson_pcie_get_mems(struct platform_device *pdev,
	if (IS_ERR(mp->mem_res.cfg_base))
		return PTR_ERR(mp->mem_res.cfg_base);

	/* Meson AXG SoC has two PCI controllers use same phy register */
	if (!mp->param->has_shared_phy) {
		mp->mem_res.phy_base =
			meson_pcie_get_mem_shared(pdev, mp, "phy");
		if (IS_ERR(mp->mem_res.phy_base))
			return PTR_ERR(mp->mem_res.phy_base);
	}

	return 0;
}

@@ -204,37 +164,33 @@ static int meson_pcie_power_on(struct meson_pcie *mp)
{
	int ret = 0;

	if (mp->param->has_shared_phy) {
		ret = phy_init(mp->phy);
		if (ret)
			return ret;
	ret = phy_init(mp->phy);
	if (ret)
		return ret;

		ret = phy_power_on(mp->phy);
		if (ret) {
			phy_exit(mp->phy);
			return ret;
		}
	} else
		writel(MESON_PCIE_PHY_POWERUP, mp->mem_res.phy_base);
	ret = phy_power_on(mp->phy);
	if (ret) {
		phy_exit(mp->phy);
		return ret;
	}

	return 0;
}

static void meson_pcie_power_off(struct meson_pcie *mp)
{
	phy_power_off(mp->phy);
	phy_exit(mp->phy);
}

static int meson_pcie_reset(struct meson_pcie *mp)
{
	struct meson_pcie_rc_reset *mrst = &mp->mrst;
	int ret = 0;

	if (mp->param->has_shared_phy) {
		ret = phy_reset(mp->phy);
		if (ret)
			return ret;
	} else {
		reset_control_assert(mrst->phy);
		udelay(PCIE_RESET_DELAY);
		reset_control_deassert(mrst->phy);
		udelay(PCIE_RESET_DELAY);
	}
	ret = phy_reset(mp->phy);
	if (ret)
		return ret;

	reset_control_assert(mrst->port);
	reset_control_assert(mrst->apb);
@@ -286,12 +242,6 @@ static int meson_pcie_probe_clocks(struct meson_pcie *mp)
	if (IS_ERR(res->port_clk))
		return PTR_ERR(res->port_clk);

	if (!mp->param->has_shared_phy) {
		res->mipi_gate = meson_pcie_probe_clock(dev, "mipi", 0);
		if (IS_ERR(res->mipi_gate))
			return PTR_ERR(res->mipi_gate);
	}

	res->general_clk = meson_pcie_probe_clock(dev, "general", 0);
	if (IS_ERR(res->general_clk))
		return PTR_ERR(res->general_clk);
@@ -562,7 +512,6 @@ static const struct dw_pcie_ops dw_pcie_ops = {

static int meson_pcie_probe(struct platform_device *pdev)
{
	const struct meson_pcie_param *match_data;
	struct device *dev = &pdev->dev;
	struct dw_pcie *pci;
	struct meson_pcie *mp;
@@ -576,17 +525,10 @@ static int meson_pcie_probe(struct platform_device *pdev)
	pci->dev = dev;
	pci->ops = &dw_pcie_ops;

	match_data = of_device_get_match_data(dev);
	if (!match_data) {
		dev_err(dev, "failed to get match data\n");
		return -ENODEV;
	}
	mp->param = match_data;

	if (mp->param->has_shared_phy) {
		mp->phy = devm_phy_get(dev, "pcie");
		if (IS_ERR(mp->phy))
			return PTR_ERR(mp->phy);
	mp->phy = devm_phy_get(dev, "pcie");
	if (IS_ERR(mp->phy)) {
		dev_err(dev, "get phy failed, %ld\n", PTR_ERR(mp->phy));
		return PTR_ERR(mp->phy);
	}

	mp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
@@ -636,30 +578,16 @@ static int meson_pcie_probe(struct platform_device *pdev)
	return 0;

err_phy:
	if (mp->param->has_shared_phy) {
		phy_power_off(mp->phy);
		phy_exit(mp->phy);
	}

	meson_pcie_power_off(mp);
	return ret;
}

static struct meson_pcie_param meson_pcie_axg_param = {
	.has_shared_phy = false,
};

static struct meson_pcie_param meson_pcie_g12a_param = {
	.has_shared_phy = true,
};

static const struct of_device_id meson_pcie_of_match[] = {
	{
		.compatible = "amlogic,axg-pcie",
		.data = &meson_pcie_axg_param,
	},
	{
		.compatible = "amlogic,g12a-pcie",
		.data = &meson_pcie_g12a_param,
	},
	{},
};
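With the AXG-specific register pokes and per-SoC match data removed, the meson driver relies entirely on the generic PHY framework for both SoCs (the AXG PHY now lives in a dedicated PHY driver, per the merge summary). The surviving bring-up order is phy_init() then phy_power_on(), with phy_exit() unwinding a failed power-on, mirrored in reverse at power-off. A hedged sketch of that sequence; "demo" names are illustrative:

#include <linux/phy/phy.h>

/* Sketch of the generic-PHY bring-up/unwind order used above. */
static int demo_phy_power_on(struct phy *phy)
{
	int ret;

	ret = phy_init(phy);		/* one-time initialization */
	if (ret)
		return ret;

	ret = phy_power_on(phy);	/* actually power the lanes */
	if (ret)
		phy_exit(phy);		/* unwind init on failure */

	return ret;
}

static void demo_phy_power_off(struct phy *phy)
{
	phy_power_off(phy);
	phy_exit(phy);
}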
@@ -18,6 +18,15 @@ void dw_pcie_ep_linkup(struct dw_pcie_ep *ep)

	pci_epc_linkup(epc);
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_linkup);

void dw_pcie_ep_init_notify(struct dw_pcie_ep *ep)
{
	struct pci_epc *epc = ep->epc;

	pci_epc_init_notify(epc);
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_init_notify);

static void __dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar,
				   int flags)
@@ -125,6 +134,7 @@ static void dw_pcie_ep_clear_bar(struct pci_epc *epc, u8 func_no,

	dw_pcie_disable_atu(pci, atu_index, DW_PCIE_REGION_INBOUND);
	clear_bit(atu_index, ep->ib_window_map);
	ep->epf_bar[bar] = NULL;
}

static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no,
@@ -158,6 +168,7 @@ static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no,
		dw_pcie_writel_dbi(pci, reg + 4, 0);
	}

	ep->epf_bar[bar] = epf_bar;
	dw_pcie_dbi_ro_wr_dis(pci);

	return 0;
@@ -269,7 +280,8 @@ static int dw_pcie_ep_get_msix(struct pci_epc *epc, u8 func_no)
	return val;
}

static int dw_pcie_ep_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts)
static int dw_pcie_ep_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts,
			       enum pci_barno bir, u32 offset)
{
	struct dw_pcie_ep *ep = epc_get_drvdata(epc);
	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
@@ -278,12 +290,22 @@ static int dw_pcie_ep_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts)
	if (!ep->msix_cap)
		return -EINVAL;

	dw_pcie_dbi_ro_wr_en(pci);

	reg = ep->msix_cap + PCI_MSIX_FLAGS;
	val = dw_pcie_readw_dbi(pci, reg);
	val &= ~PCI_MSIX_FLAGS_QSIZE;
	val |= interrupts;
	dw_pcie_dbi_ro_wr_en(pci);
	dw_pcie_writew_dbi(pci, reg, val);

	reg = ep->msix_cap + PCI_MSIX_TABLE;
	val = offset | bir;
	dw_pcie_writel_dbi(pci, reg, val);

	reg = ep->msix_cap + PCI_MSIX_PBA;
	val = (offset + (interrupts * PCI_MSIX_ENTRY_SIZE)) | bir;
	dw_pcie_writel_dbi(pci, reg, val);

	dw_pcie_dbi_ro_wr_dis(pci);

	return 0;
@@ -409,55 +431,41 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
			      u16 interrupt_num)
{
	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
	struct pci_epf_msix_tbl *msix_tbl;
	struct pci_epc *epc = ep->epc;
	u16 tbl_offset, bir;
	u32 bar_addr_upper, bar_addr_lower;
	u32 msg_addr_upper, msg_addr_lower;
	struct pci_epf_bar *epf_bar;
	u32 reg, msg_data, vec_ctrl;
	u64 tbl_addr, msg_addr, reg_u64;
	void __iomem *msix_tbl;
	unsigned int aligned_offset;
	u32 tbl_offset;
	u64 msg_addr;
	int ret;
	u8 bir;

	reg = ep->msix_cap + PCI_MSIX_TABLE;
	tbl_offset = dw_pcie_readl_dbi(pci, reg);
	bir = (tbl_offset & PCI_MSIX_TABLE_BIR);
	tbl_offset &= PCI_MSIX_TABLE_OFFSET;

	reg = PCI_BASE_ADDRESS_0 + (4 * bir);
	bar_addr_upper = 0;
	bar_addr_lower = dw_pcie_readl_dbi(pci, reg);
	reg_u64 = (bar_addr_lower & PCI_BASE_ADDRESS_MEM_TYPE_MASK);
	if (reg_u64 == PCI_BASE_ADDRESS_MEM_TYPE_64)
		bar_addr_upper = dw_pcie_readl_dbi(pci, reg + 4);
	epf_bar = ep->epf_bar[bir];
	msix_tbl = epf_bar->addr;
	msix_tbl = (struct pci_epf_msix_tbl *)((char *)msix_tbl + tbl_offset);

	tbl_addr = ((u64) bar_addr_upper) << 32 | bar_addr_lower;
	tbl_addr += (tbl_offset + ((interrupt_num - 1) * PCI_MSIX_ENTRY_SIZE));
	tbl_addr &= PCI_BASE_ADDRESS_MEM_MASK;

	msix_tbl = ioremap(ep->phys_base + tbl_addr,
			   PCI_MSIX_ENTRY_SIZE);
	if (!msix_tbl)
		return -EINVAL;

	msg_addr_lower = readl(msix_tbl + PCI_MSIX_ENTRY_LOWER_ADDR);
	msg_addr_upper = readl(msix_tbl + PCI_MSIX_ENTRY_UPPER_ADDR);
	msg_addr = ((u64) msg_addr_upper) << 32 | msg_addr_lower;
	msg_data = readl(msix_tbl + PCI_MSIX_ENTRY_DATA);
	vec_ctrl = readl(msix_tbl + PCI_MSIX_ENTRY_VECTOR_CTRL);

	iounmap(msix_tbl);
	msg_addr = msix_tbl[(interrupt_num - 1)].msg_addr;
	msg_data = msix_tbl[(interrupt_num - 1)].msg_data;
	vec_ctrl = msix_tbl[(interrupt_num - 1)].vector_ctrl;

	if (vec_ctrl & PCI_MSIX_ENTRY_CTRL_MASKBIT) {
		dev_dbg(pci->dev, "MSI-X entry ctrl set\n");
		return -EPERM;
	}

	ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys, msg_addr,
	aligned_offset = msg_addr & (epc->mem->page_size - 1);
	ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys, msg_addr,
				  epc->mem->page_size);
	if (ret)
		return ret;

	writel(msg_data, ep->msi_mem);
	writel(msg_data, ep->msi_mem + aligned_offset);

	dw_pcie_ep_unmap_addr(epc, func_no, ep->msi_mem_phys);

@@ -492,19 +500,54 @@ static unsigned int dw_pcie_ep_find_ext_capability(struct dw_pcie *pci, int cap)
	return 0;
}

int dw_pcie_ep_init_complete(struct dw_pcie_ep *ep)
{
	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
	unsigned int offset;
	unsigned int nbars;
	u8 hdr_type;
	u32 reg;
	int i;

	hdr_type = dw_pcie_readb_dbi(pci, PCI_HEADER_TYPE);
	if (hdr_type != PCI_HEADER_TYPE_NORMAL) {
		dev_err(pci->dev,
			"PCIe controller is not set to EP mode (hdr_type:0x%x)!\n",
			hdr_type);
		return -EIO;
	}

	ep->msi_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSI);

	ep->msix_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSIX);

	offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
	if (offset) {
		reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL);
		nbars = (reg & PCI_REBAR_CTRL_NBAR_MASK) >>
			PCI_REBAR_CTRL_NBAR_SHIFT;

		dw_pcie_dbi_ro_wr_en(pci);
		for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL)
			dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, 0x0);
		dw_pcie_dbi_ro_wr_dis(pci);
	}

	dw_pcie_setup(pci);

	return 0;
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_init_complete);

int dw_pcie_ep_init(struct dw_pcie_ep *ep)
{
	int i;
	int ret;
	u32 reg;
	void *addr;
	u8 hdr_type;
	unsigned int nbars;
	unsigned int offset;
	struct pci_epc *epc;
	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
	struct device *dev = pci->dev;
	struct device_node *np = dev->of_node;
	const struct pci_epc_features *epc_features;

	if (!pci->dbi_base || !pci->dbi_base2) {
		dev_err(dev, "dbi_base/dbi_base2 is not populated\n");
@@ -563,13 +606,6 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
	if (ep->ops->ep_init)
		ep->ops->ep_init(ep);

	hdr_type = dw_pcie_readb_dbi(pci, PCI_HEADER_TYPE);
	if (hdr_type != PCI_HEADER_TYPE_NORMAL) {
		dev_err(pci->dev, "PCIe controller is not set to EP mode (hdr_type:0x%x)!\n",
			hdr_type);
		return -EIO;
	}

	ret = of_property_read_u8(np, "max-functions", &epc->max_functions);
	if (ret < 0)
		epc->max_functions = 1;
@@ -587,23 +623,13 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
		dev_err(dev, "Failed to reserve memory for MSI/MSI-X\n");
		return -ENOMEM;
	}
	ep->msi_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSI);

	ep->msix_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSIX);

	offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
	if (offset) {
		reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL);
		nbars = (reg & PCI_REBAR_CTRL_NBAR_MASK) >>
			PCI_REBAR_CTRL_NBAR_SHIFT;

		dw_pcie_dbi_ro_wr_en(pci);
		for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL)
			dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, 0x0);
		dw_pcie_dbi_ro_wr_dis(pci);
	if (ep->ops->get_features) {
		epc_features = ep->ops->get_features(ep);
		if (epc_features->core_init_notifier)
			return 0;
	}

	dw_pcie_setup(pci);

	return 0;
	return dw_pcie_ep_init_complete(ep);
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_init);
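The dw_pcie_ep_raise_msix_irq() change above stops ioremapping the MSI-X table on every interrupt; instead it indexes the table through the endpoint-function BAR that was cached in ep->epf_bar[] at set_bar time, overlaying a struct on the mapped memory. A hedged sketch of that struct-overlay access; the layout here mirrors the pci_epf_msix_tbl added to the endpoint framework in this cycle (64-bit message address, 32-bit data, 32-bit vector control), and the "demo" names are illustrative:

#include <linux/types.h>
#include <uapi/linux/pci_regs.h>

/* Assumed entry layout; matches one MSI-X table entry (16 bytes). */
struct demo_msix_tbl {
	u64 msg_addr;
	u32 msg_data;
	u32 vector_ctrl;
};

/* Read the entry for a 1-based vector number from an already-mapped BAR. */
static int demo_read_msix_entry(void *bar_base, u32 tbl_offset,
				u16 interrupt_num, u64 *addr, u32 *data)
{
	struct demo_msix_tbl *tbl = bar_base + tbl_offset;
	struct demo_msix_tbl *e = &tbl[interrupt_num - 1];

	if (e->vector_ctrl & PCI_MSIX_ENTRY_CTRL_MASKBIT)
		return -EPERM;	/* vector is masked; don't fire it */

	*addr = e->msg_addr;
	*data = e->msg_data;
	return 0;
}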
@@ -233,6 +233,7 @@ struct dw_pcie_ep {
	phys_addr_t msi_mem_phys;
	u8 msi_cap;	/* MSI capability offset */
	u8 msix_cap;	/* MSI-X capability offset */
	struct pci_epf_bar *epf_bar[PCI_STD_NUM_BARS];
};

struct dw_pcie_ops {
@@ -411,6 +412,8 @@ static inline int dw_pcie_allocate_domains(struct pcie_port *pp)
#ifdef CONFIG_PCIE_DW_EP
void dw_pcie_ep_linkup(struct dw_pcie_ep *ep);
int dw_pcie_ep_init(struct dw_pcie_ep *ep);
int dw_pcie_ep_init_complete(struct dw_pcie_ep *ep);
void dw_pcie_ep_init_notify(struct dw_pcie_ep *ep);
void dw_pcie_ep_exit(struct dw_pcie_ep *ep);
int dw_pcie_ep_raise_legacy_irq(struct dw_pcie_ep *ep, u8 func_no);
int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
@@ -428,6 +431,15 @@ static inline int dw_pcie_ep_init(struct dw_pcie_ep *ep)
	return 0;
}

static inline int dw_pcie_ep_init_complete(struct dw_pcie_ep *ep)
{
	return 0;
}

static inline void dw_pcie_ep_init_notify(struct dw_pcie_ep *ep)
{
}

static inline void dw_pcie_ep_exit(struct dw_pcie_ep *ep)
{
}
@@ -1439,7 +1439,13 @@ static void qcom_fixup_class(struct pci_dev *dev)
{
	dev->class = PCI_CLASS_BRIDGE_PCI << 8;
}
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, PCI_ANY_ID, qcom_fixup_class);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0101, qcom_fixup_class);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0104, qcom_fixup_class);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0106, qcom_fixup_class);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0107, qcom_fixup_class);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0302, qcom_fixup_class);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1000, qcom_fixup_class);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1001, qcom_fixup_class);

static struct platform_driver qcom_pcie_driver = {
	.probe = qcom_pcie_probe,
@@ -11,6 +11,7 @@
#include <linux/debugfs.h>
#include <linux/delay.h>
#include <linux/gpio.h>
#include <linux/gpio/consumer.h>
#include <linux/interrupt.h>
#include <linux/iopoll.h>
#include <linux/kernel.h>
@@ -53,6 +54,7 @@
#define APPL_INTR_EN_L0_0_LINK_STATE_INT_EN	BIT(0)
#define APPL_INTR_EN_L0_0_MSI_RCV_INT_EN	BIT(4)
#define APPL_INTR_EN_L0_0_INT_INT_EN		BIT(8)
#define APPL_INTR_EN_L0_0_PCI_CMD_EN_INT_EN	BIT(15)
#define APPL_INTR_EN_L0_0_CDM_REG_CHK_INT_EN	BIT(19)
#define APPL_INTR_EN_L0_0_SYS_INTR_EN		BIT(30)
#define APPL_INTR_EN_L0_0_SYS_MSI_INTR_EN	BIT(31)
@@ -60,19 +62,26 @@
#define APPL_INTR_STATUS_L0			0xC
#define APPL_INTR_STATUS_L0_LINK_STATE_INT	BIT(0)
#define APPL_INTR_STATUS_L0_INT_INT		BIT(8)
#define APPL_INTR_STATUS_L0_PCI_CMD_EN_INT	BIT(15)
#define APPL_INTR_STATUS_L0_PEX_RST_INT		BIT(16)
#define APPL_INTR_STATUS_L0_CDM_REG_CHK_INT	BIT(18)

#define APPL_INTR_EN_L1_0_0				0x1C
#define APPL_INTR_EN_L1_0_0_LINK_REQ_RST_NOT_INT_EN	BIT(1)
#define APPL_INTR_EN_L1_0_0_RDLH_LINK_UP_INT_EN		BIT(3)
#define APPL_INTR_EN_L1_0_0_HOT_RESET_DONE_INT_EN	BIT(30)

#define APPL_INTR_STATUS_L1_0_0				0x20
#define APPL_INTR_STATUS_L1_0_0_LINK_REQ_RST_NOT_CHGED	BIT(1)
#define APPL_INTR_STATUS_L1_0_0_RDLH_LINK_UP_CHGED	BIT(3)
#define APPL_INTR_STATUS_L1_0_0_HOT_RESET_DONE		BIT(30)

#define APPL_INTR_STATUS_L1_1			0x2C
#define APPL_INTR_STATUS_L1_2			0x30
#define APPL_INTR_STATUS_L1_3			0x34
#define APPL_INTR_STATUS_L1_6			0x3C
#define APPL_INTR_STATUS_L1_7			0x40
#define APPL_INTR_STATUS_L1_15_CFG_BME_CHGED	BIT(1)

#define APPL_INTR_EN_L1_8_0			0x44
#define APPL_INTR_EN_L1_8_BW_MGT_INT_EN		BIT(2)
@@ -103,8 +112,12 @@
#define APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMP_ERR	BIT(1)
#define APPL_INTR_STATUS_L1_18_CDM_REG_CHK_LOGIC_ERR	BIT(0)

#define APPL_MSI_CTRL_1				0xAC

#define APPL_MSI_CTRL_2				0xB0

#define APPL_LEGACY_INTX			0xB8

#define APPL_LTR_MSG_1				0xC4
#define LTR_MSG_REQ				BIT(15)
#define LTR_MST_NO_SNOOP_SHIFT			16
@@ -205,6 +218,13 @@
#define AMBA_ERROR_RESPONSE_CRS_OKAY_FFFFFFFF	1
#define AMBA_ERROR_RESPONSE_CRS_OKAY_FFFF0001	2

#define MSIX_ADDR_MATCH_LOW_OFF			0x940
#define MSIX_ADDR_MATCH_LOW_OFF_EN		BIT(0)
#define MSIX_ADDR_MATCH_LOW_OFF_MASK		GENMASK(31, 2)

#define MSIX_ADDR_MATCH_HIGH_OFF		0x944
#define MSIX_ADDR_MATCH_HIGH_OFF_MASK		GENMASK(31, 0)

#define PORT_LOGIC_MSIX_DOORBELL		0x948

#define CAP_SPCIE_CAP_OFF			0x154
@@ -223,6 +243,13 @@
#define GEN3_CORE_CLK_FREQ	250000000
#define GEN4_CORE_CLK_FREQ	500000000

#define LTR_MSG_TIMEOUT		(100 * 1000)

#define PERST_DEBOUNCE_TIME	(5 * 1000)

#define EP_STATE_DISABLED	0
#define EP_STATE_ENABLED	1

static const unsigned int pcie_gen_freq[] = {
	GEN1_CORE_CLK_FREQ,
	GEN2_CORE_CLK_FREQ,
@@ -260,6 +287,8 @@ struct tegra_pcie_dw {
	struct dw_pcie pci;
	struct tegra_bpmp *bpmp;

	enum dw_pcie_device_mode mode;

	bool supports_clkreq;
	bool enable_cdm_check;
	bool link_state;
@@ -283,6 +312,16 @@ struct tegra_pcie_dw {
	struct phy **phys;

	struct dentry *debugfs;

	/* Endpoint mode specific */
	struct gpio_desc *pex_rst_gpiod;
	struct gpio_desc *pex_refclk_sel_gpiod;
	unsigned int pex_rst_irq;
	int ep_state;
};

struct tegra_pcie_dw_of_data {
	enum dw_pcie_device_mode mode;
};

static inline struct tegra_pcie_dw *to_tegra_pcie(struct dw_pcie *pci)
@@ -339,8 +378,9 @@ static void apply_bad_link_workaround(struct pcie_port *pp)
	}
}

static irqreturn_t tegra_pcie_rp_irq_handler(struct tegra_pcie_dw *pcie)
static irqreturn_t tegra_pcie_rp_irq_handler(int irq, void *arg)
{
	struct tegra_pcie_dw *pcie = arg;
	struct dw_pcie *pci = &pcie->pci;
	struct pcie_port *pp = &pci->pp;
	u32 val, tmp;
@@ -411,11 +451,121 @@ static irqreturn_t tegra_pcie_rp_irq_handler(struct tegra_pcie_dw *pcie)
	return IRQ_HANDLED;
}

static irqreturn_t tegra_pcie_irq_handler(int irq, void *arg)
static void pex_ep_event_hot_rst_done(struct tegra_pcie_dw *pcie)
{
	u32 val;

	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L0);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_0_0);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_1);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_2);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_3);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_6);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_7);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_8_0);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_9);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_10);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_11);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_13);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_14);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_15);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_17);
	appl_writel(pcie, 0xFFFFFFFF, APPL_MSI_CTRL_2);

	val = appl_readl(pcie, APPL_CTRL);
	val |= APPL_CTRL_LTSSM_EN;
	appl_writel(pcie, val, APPL_CTRL);
}

static irqreturn_t tegra_pcie_ep_irq_thread(int irq, void *arg)
{
	struct tegra_pcie_dw *pcie = arg;
	struct dw_pcie *pci = &pcie->pci;
	u32 val, speed;

	return tegra_pcie_rp_irq_handler(pcie);
	speed = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA) &
		PCI_EXP_LNKSTA_CLS;
	clk_set_rate(pcie->core_clk, pcie_gen_freq[speed - 1]);

	/* If EP doesn't advertise L1SS, just return */
	val = dw_pcie_readl_dbi(pci, pcie->cfg_link_cap_l1sub);
	if (!(val & (PCI_L1SS_CAP_ASPM_L1_1 | PCI_L1SS_CAP_ASPM_L1_2)))
		return IRQ_HANDLED;

	/* Check if BME is set to '1' */
	val = dw_pcie_readl_dbi(pci, PCI_COMMAND);
	if (val & PCI_COMMAND_MASTER) {
		ktime_t timeout;

		/* 110us for both snoop and no-snoop */
		val = 110 | (2 << PCI_LTR_SCALE_SHIFT) | LTR_MSG_REQ;
		val |= (val << LTR_MST_NO_SNOOP_SHIFT);
		appl_writel(pcie, val, APPL_LTR_MSG_1);

		/* Send LTR upstream */
		val = appl_readl(pcie, APPL_LTR_MSG_2);
		val |= APPL_LTR_MSG_2_LTR_MSG_REQ_STATE;
		appl_writel(pcie, val, APPL_LTR_MSG_2);

		timeout = ktime_add_us(ktime_get(), LTR_MSG_TIMEOUT);
		for (;;) {
			val = appl_readl(pcie, APPL_LTR_MSG_2);
			if (!(val & APPL_LTR_MSG_2_LTR_MSG_REQ_STATE))
				break;
			if (ktime_after(ktime_get(), timeout))
				break;
			usleep_range(1000, 1100);
		}
		if (val & APPL_LTR_MSG_2_LTR_MSG_REQ_STATE)
			dev_err(pcie->dev, "Failed to send LTR message\n");
	}

	return IRQ_HANDLED;
}

static irqreturn_t tegra_pcie_ep_hard_irq(int irq, void *arg)
{
	struct tegra_pcie_dw *pcie = arg;
	struct dw_pcie_ep *ep = &pcie->pci.ep;
	int spurious = 1;
	u32 val, tmp;

	val = appl_readl(pcie, APPL_INTR_STATUS_L0);
	if (val & APPL_INTR_STATUS_L0_LINK_STATE_INT) {
		val = appl_readl(pcie, APPL_INTR_STATUS_L1_0_0);
		appl_writel(pcie, val, APPL_INTR_STATUS_L1_0_0);

		if (val & APPL_INTR_STATUS_L1_0_0_HOT_RESET_DONE)
			pex_ep_event_hot_rst_done(pcie);

		if (val & APPL_INTR_STATUS_L1_0_0_RDLH_LINK_UP_CHGED) {
			tmp = appl_readl(pcie, APPL_LINK_STATUS);
			if (tmp & APPL_LINK_STATUS_RDLH_LINK_UP) {
				dev_dbg(pcie->dev, "Link is up with Host\n");
				dw_pcie_ep_linkup(ep);
			}
		}

		spurious = 0;
	}

	if (val & APPL_INTR_STATUS_L0_PCI_CMD_EN_INT) {
		val = appl_readl(pcie, APPL_INTR_STATUS_L1_15);
		appl_writel(pcie, val, APPL_INTR_STATUS_L1_15);

		if (val & APPL_INTR_STATUS_L1_15_CFG_BME_CHGED)
			return IRQ_WAKE_THREAD;

		spurious = 0;
	}

	if (spurious) {
		dev_warn(pcie->dev, "Random interrupt (STATUS = 0x%08X)\n",
			 val);
		appl_writel(pcie, val, APPL_INTR_STATUS_L0);
	}

	return IRQ_HANDLED;
}
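Note the IRQ_WAKE_THREAD return in tegra_pcie_ep_hard_irq() above: work such as the LTR handshake in tegra_pcie_ep_irq_thread() sleeps (usleep_range(), clk_set_rate()) and therefore cannot run in hard-IRQ context, so the hard handler only acks and classifies, then defers. A hedged skeleton of that hard/threaded split; "demo" names are illustrative:

#include <linux/interrupt.h>

/* Hard handler: runs with interrupts off, must not sleep. */
static irqreturn_t demo_hard_irq(int irq, void *arg)
{
	bool needs_sleepable_work;

	/* Read/clear status registers here; decide what to defer. */
	needs_sleepable_work = true;	/* derived from the status bits */

	return needs_sleepable_work ? IRQ_WAKE_THREAD : IRQ_HANDLED;
}

/* Threaded handler: process context, sleeping is allowed. */
static irqreturn_t demo_irq_thread(int irq, void *arg)
{
	/* clk_set_rate(), usleep_range(), register polling, etc. */
	return IRQ_HANDLED;
}

/* Registered at probe time with:
 *	request_threaded_irq(irq, demo_hard_irq, demo_irq_thread,
 *			     IRQF_SHARED, "demo", pcie);
 */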
static int tegra_pcie_dw_rd_own_conf(struct pcie_port *pp, int where, int size,
|
||||
|
@ -884,8 +1034,26 @@ static void tegra_pcie_set_msi_vec_num(struct pcie_port *pp)
|
|||
	pp->num_vectors = MAX_MSI_IRQS;
}

static int tegra_pcie_dw_start_link(struct dw_pcie *pci)
{
	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);

	enable_irq(pcie->pex_rst_irq);

	return 0;
}

static void tegra_pcie_dw_stop_link(struct dw_pcie *pci)
{
	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);

	disable_irq(pcie->pex_rst_irq);
}

static const struct dw_pcie_ops tegra_dw_pcie_ops = {
	.link_up = tegra_pcie_dw_link_up,
	.start_link = tegra_pcie_dw_start_link,
	.stop_link = tegra_pcie_dw_stop_link,
};

static struct dw_pcie_host_ops tegra_pcie_dw_host_ops = {

@@ -986,6 +1154,40 @@ static int tegra_pcie_dw_parse_dt(struct tegra_pcie_dw *pcie)
	pcie->enable_cdm_check =
		of_property_read_bool(np, "snps,enable-cdm-check");

	if (pcie->mode == DW_PCIE_RC_TYPE)
		return 0;

	/* Endpoint mode specific DT entries */
	pcie->pex_rst_gpiod = devm_gpiod_get(pcie->dev, "reset", GPIOD_IN);
	if (IS_ERR(pcie->pex_rst_gpiod)) {
		int err = PTR_ERR(pcie->pex_rst_gpiod);
		const char *level = KERN_ERR;

		if (err == -EPROBE_DEFER)
			level = KERN_DEBUG;

		dev_printk(level, pcie->dev,
			   dev_fmt("Failed to get PERST GPIO: %d\n"),
			   err);
		return err;
	}

	pcie->pex_refclk_sel_gpiod = devm_gpiod_get(pcie->dev,
						    "nvidia,refclk-select",
						    GPIOD_OUT_HIGH);
	if (IS_ERR(pcie->pex_refclk_sel_gpiod)) {
		int err = PTR_ERR(pcie->pex_refclk_sel_gpiod);
		const char *level = KERN_ERR;

		if (err == -EPROBE_DEFER)
			level = KERN_DEBUG;

		dev_printk(level, pcie->dev,
			   dev_fmt("Failed to get REFCLK select GPIOs: %d\n"),
			   err);
		pcie->pex_refclk_sel_gpiod = NULL;
	}

	return 0;
}

@@ -1017,6 +1219,34 @@ static int tegra_pcie_bpmp_set_ctrl_state(struct tegra_pcie_dw *pcie,
	return tegra_bpmp_transfer(pcie->bpmp, &msg);
}

static int tegra_pcie_bpmp_set_pll_state(struct tegra_pcie_dw *pcie,
					 bool enable)
{
	struct mrq_uphy_response resp;
	struct tegra_bpmp_message msg;
	struct mrq_uphy_request req;

	memset(&req, 0, sizeof(req));
	memset(&resp, 0, sizeof(resp));

	if (enable) {
		req.cmd = CMD_UPHY_PCIE_EP_CONTROLLER_PLL_INIT;
		req.ep_ctrlr_pll_init.ep_controller = pcie->cid;
	} else {
		req.cmd = CMD_UPHY_PCIE_EP_CONTROLLER_PLL_OFF;
		req.ep_ctrlr_pll_off.ep_controller = pcie->cid;
	}

	memset(&msg, 0, sizeof(msg));
	msg.mrq = MRQ_UPHY;
	msg.tx.data = &req;
	msg.tx.size = sizeof(req);
	msg.rx.data = &resp;
	msg.rx.size = sizeof(resp);

	return tegra_bpmp_transfer(pcie->bpmp, &msg);
}

static void tegra_pcie_downstream_dev_to_D0(struct tegra_pcie_dw *pcie)
{
	struct pcie_port *pp = &pcie->pci.pp;

@@ -1427,8 +1657,396 @@ fail_pm_get_sync:
	return ret;
}

static void pex_ep_event_pex_rst_assert(struct tegra_pcie_dw *pcie)
{
	u32 val;
	int ret;

	if (pcie->ep_state == EP_STATE_DISABLED)
		return;

	/* Disable LTSSM */
	val = appl_readl(pcie, APPL_CTRL);
	val &= ~APPL_CTRL_LTSSM_EN;
	appl_writel(pcie, val, APPL_CTRL);

	ret = readl_poll_timeout(pcie->appl_base + APPL_DEBUG, val,
				 ((val & APPL_DEBUG_LTSSM_STATE_MASK) >>
				 APPL_DEBUG_LTSSM_STATE_SHIFT) ==
				 LTSSM_STATE_PRE_DETECT,
				 1, LTSSM_TIMEOUT);
	if (ret)
		dev_err(pcie->dev, "Failed to go Detect state: %d\n", ret);

	reset_control_assert(pcie->core_rst);

	tegra_pcie_disable_phy(pcie);

	reset_control_assert(pcie->core_apb_rst);

	clk_disable_unprepare(pcie->core_clk);

	pm_runtime_put_sync(pcie->dev);

	ret = tegra_pcie_bpmp_set_pll_state(pcie, false);
	if (ret)
		dev_err(pcie->dev, "Failed to turn off UPHY: %d\n", ret);

	pcie->ep_state = EP_STATE_DISABLED;
	dev_dbg(pcie->dev, "Uninitialization of endpoint is completed\n");
}

static void pex_ep_event_pex_rst_deassert(struct tegra_pcie_dw *pcie)
{
	struct dw_pcie *pci = &pcie->pci;
	struct dw_pcie_ep *ep = &pci->ep;
	struct device *dev = pcie->dev;
	u32 val;
	int ret;

	if (pcie->ep_state == EP_STATE_ENABLED)
		return;

	ret = pm_runtime_get_sync(dev);
	if (ret < 0) {
		dev_err(dev, "Failed to get runtime sync for PCIe dev: %d\n",
			ret);
		return;
	}

	ret = tegra_pcie_bpmp_set_pll_state(pcie, true);
	if (ret) {
		dev_err(dev, "Failed to init UPHY for PCIe EP: %d\n", ret);
		goto fail_pll_init;
	}

	ret = clk_prepare_enable(pcie->core_clk);
	if (ret) {
		dev_err(dev, "Failed to enable core clock: %d\n", ret);
		goto fail_core_clk_enable;
	}

	ret = reset_control_deassert(pcie->core_apb_rst);
	if (ret) {
		dev_err(dev, "Failed to deassert core APB reset: %d\n", ret);
		goto fail_core_apb_rst;
	}

	ret = tegra_pcie_enable_phy(pcie);
	if (ret) {
		dev_err(dev, "Failed to enable PHY: %d\n", ret);
		goto fail_phy;
	}

	/* Clear any stale interrupt statuses */
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L0);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_0_0);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_1);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_2);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_3);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_6);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_7);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_8_0);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_9);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_10);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_11);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_13);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_14);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_15);
	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_17);

	/* configure this core for EP mode operation */
	val = appl_readl(pcie, APPL_DM_TYPE);
	val &= ~APPL_DM_TYPE_MASK;
	val |= APPL_DM_TYPE_EP;
	appl_writel(pcie, val, APPL_DM_TYPE);

	appl_writel(pcie, 0x0, APPL_CFG_SLCG_OVERRIDE);

	val = appl_readl(pcie, APPL_CTRL);
	val |= APPL_CTRL_SYS_PRE_DET_STATE;
	val |= APPL_CTRL_HW_HOT_RST_EN;
	appl_writel(pcie, val, APPL_CTRL);

	val = appl_readl(pcie, APPL_CFG_MISC);
	val |= APPL_CFG_MISC_SLV_EP_MODE;
	val |= (APPL_CFG_MISC_ARCACHE_VAL << APPL_CFG_MISC_ARCACHE_SHIFT);
	appl_writel(pcie, val, APPL_CFG_MISC);

	val = appl_readl(pcie, APPL_PINMUX);
	val |= APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE_EN;
	val |= APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE;
	appl_writel(pcie, val, APPL_PINMUX);

	appl_writel(pcie, pcie->dbi_res->start & APPL_CFG_BASE_ADDR_MASK,
		    APPL_CFG_BASE_ADDR);

	appl_writel(pcie, pcie->atu_dma_res->start &
		    APPL_CFG_IATU_DMA_BASE_ADDR_MASK,
		    APPL_CFG_IATU_DMA_BASE_ADDR);

	val = appl_readl(pcie, APPL_INTR_EN_L0_0);
	val |= APPL_INTR_EN_L0_0_SYS_INTR_EN;
	val |= APPL_INTR_EN_L0_0_LINK_STATE_INT_EN;
	val |= APPL_INTR_EN_L0_0_PCI_CMD_EN_INT_EN;
	appl_writel(pcie, val, APPL_INTR_EN_L0_0);

	val = appl_readl(pcie, APPL_INTR_EN_L1_0_0);
	val |= APPL_INTR_EN_L1_0_0_HOT_RESET_DONE_INT_EN;
	val |= APPL_INTR_EN_L1_0_0_RDLH_LINK_UP_INT_EN;
	appl_writel(pcie, val, APPL_INTR_EN_L1_0_0);

	reset_control_deassert(pcie->core_rst);

	if (pcie->update_fc_fixup) {
		val = dw_pcie_readl_dbi(pci, CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF);
		val |= 0x1 << CFG_TIMER_CTRL_ACK_NAK_SHIFT;
		dw_pcie_writel_dbi(pci, CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF, val);
	}

	config_gen3_gen4_eq_presets(pcie);

	init_host_aspm(pcie);

	/* Disable ASPM-L1SS advertisement if there is no CLKREQ routing */
	if (!pcie->supports_clkreq) {
		disable_aspm_l11(pcie);
		disable_aspm_l12(pcie);
	}

	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
	val &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL;
	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);

	/* Configure N_FTS & FTS */
	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL);
	val &= ~(N_FTS_MASK << N_FTS_SHIFT);
	val |= N_FTS_VAL << N_FTS_SHIFT;
	dw_pcie_writel_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL, val);

	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_GEN2_CTRL);
	val &= ~FTS_MASK;
	val |= FTS_VAL;
	dw_pcie_writel_dbi(pci, PORT_LOGIC_GEN2_CTRL, val);

	/* Configure Max Speed from DT */
	if (pcie->max_speed && pcie->max_speed != -EINVAL) {
		val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base +
					PCI_EXP_LNKCAP);
		val &= ~PCI_EXP_LNKCAP_SLS;
		val |= pcie->max_speed;
		dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP,
				   val);
	}

	pcie->pcie_cap_base = dw_pcie_find_capability(&pcie->pci,
						      PCI_CAP_ID_EXP);
	clk_set_rate(pcie->core_clk, GEN4_CORE_CLK_FREQ);

	val = (ep->msi_mem_phys & MSIX_ADDR_MATCH_LOW_OFF_MASK);
	val |= MSIX_ADDR_MATCH_LOW_OFF_EN;
	dw_pcie_writel_dbi(pci, MSIX_ADDR_MATCH_LOW_OFF, val);
	val = (upper_32_bits(ep->msi_mem_phys) & MSIX_ADDR_MATCH_HIGH_OFF_MASK);
	dw_pcie_writel_dbi(pci, MSIX_ADDR_MATCH_HIGH_OFF, val);

	ret = dw_pcie_ep_init_complete(ep);
	if (ret) {
		dev_err(dev, "Failed to complete initialization: %d\n", ret);
		goto fail_init_complete;
	}

	dw_pcie_ep_init_notify(ep);

	/* Enable LTSSM */
	val = appl_readl(pcie, APPL_CTRL);
	val |= APPL_CTRL_LTSSM_EN;
	appl_writel(pcie, val, APPL_CTRL);

	pcie->ep_state = EP_STATE_ENABLED;
	dev_dbg(dev, "Initialization of endpoint is completed\n");

	return;

fail_init_complete:
	reset_control_assert(pcie->core_rst);
	tegra_pcie_disable_phy(pcie);
fail_phy:
	reset_control_assert(pcie->core_apb_rst);
fail_core_apb_rst:
	clk_disable_unprepare(pcie->core_clk);
fail_core_clk_enable:
	tegra_pcie_bpmp_set_pll_state(pcie, false);
fail_pll_init:
	pm_runtime_put_sync(dev);
}

static irqreturn_t tegra_pcie_ep_pex_rst_irq(int irq, void *arg)
{
	struct tegra_pcie_dw *pcie = arg;

	if (gpiod_get_value(pcie->pex_rst_gpiod))
		pex_ep_event_pex_rst_assert(pcie);
	else
		pex_ep_event_pex_rst_deassert(pcie);

	return IRQ_HANDLED;
}

static int tegra_pcie_ep_raise_legacy_irq(struct tegra_pcie_dw *pcie, u16 irq)
{
	/* Tegra194 supports only INTA */
	if (irq > 1)
		return -EINVAL;

	appl_writel(pcie, 1, APPL_LEGACY_INTX);
	usleep_range(1000, 2000);
	appl_writel(pcie, 0, APPL_LEGACY_INTX);
	return 0;
}

static int tegra_pcie_ep_raise_msi_irq(struct tegra_pcie_dw *pcie, u16 irq)
{
	if (unlikely(irq > 31))
		return -EINVAL;

	appl_writel(pcie, (1 << irq), APPL_MSI_CTRL_1);

	return 0;
}

static int tegra_pcie_ep_raise_msix_irq(struct tegra_pcie_dw *pcie, u16 irq)
{
	struct dw_pcie_ep *ep = &pcie->pci.ep;

	writel(irq, ep->msi_mem);

	return 0;
}

static int tegra_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
				   enum pci_epc_irq_type type,
				   u16 interrupt_num)
{
	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);

	switch (type) {
	case PCI_EPC_IRQ_LEGACY:
		return tegra_pcie_ep_raise_legacy_irq(pcie, interrupt_num);

	case PCI_EPC_IRQ_MSI:
		return tegra_pcie_ep_raise_msi_irq(pcie, interrupt_num);

	case PCI_EPC_IRQ_MSIX:
		return tegra_pcie_ep_raise_msix_irq(pcie, interrupt_num);

	default:
		dev_err(pci->dev, "Unknown IRQ type\n");
		return -EPERM;
	}

	return 0;
}

static const struct pci_epc_features tegra_pcie_epc_features = {
	.linkup_notifier = true,
	.core_init_notifier = true,
	.msi_capable = false,
	.msix_capable = false,
	.reserved_bar = 1 << BAR_2 | 1 << BAR_3 | 1 << BAR_4 | 1 << BAR_5,
	.bar_fixed_64bit = 1 << BAR_0,
	.bar_fixed_size[0] = SZ_1M,
};

static const struct pci_epc_features*
tegra_pcie_ep_get_features(struct dw_pcie_ep *ep)
{
	return &tegra_pcie_epc_features;
}

static struct dw_pcie_ep_ops pcie_ep_ops = {
	.raise_irq = tegra_pcie_ep_raise_irq,
	.get_features = tegra_pcie_ep_get_features,
};

static int tegra_pcie_config_ep(struct tegra_pcie_dw *pcie,
				struct platform_device *pdev)
{
	struct dw_pcie *pci = &pcie->pci;
	struct device *dev = pcie->dev;
	struct dw_pcie_ep *ep;
	struct resource *res;
	char *name;
	int ret;

	ep = &pci->ep;
	ep->ops = &pcie_ep_ops;

	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "addr_space");
	if (!res)
		return -EINVAL;

	ep->phys_base = res->start;
	ep->addr_size = resource_size(res);
	ep->page_size = SZ_64K;

	ret = gpiod_set_debounce(pcie->pex_rst_gpiod, PERST_DEBOUNCE_TIME);
	if (ret < 0) {
		dev_err(dev, "Failed to set PERST GPIO debounce time: %d\n",
			ret);
		return ret;
	}

	ret = gpiod_to_irq(pcie->pex_rst_gpiod);
	if (ret < 0) {
		dev_err(dev, "Failed to get IRQ for PERST GPIO: %d\n", ret);
		return ret;
	}
	pcie->pex_rst_irq = (unsigned int)ret;

	name = devm_kasprintf(dev, GFP_KERNEL, "tegra_pcie_%u_pex_rst_irq",
			      pcie->cid);
	if (!name) {
		dev_err(dev, "Failed to create PERST IRQ string\n");
		return -ENOMEM;
	}

	irq_set_status_flags(pcie->pex_rst_irq, IRQ_NOAUTOEN);

	pcie->ep_state = EP_STATE_DISABLED;

	ret = devm_request_threaded_irq(dev, pcie->pex_rst_irq, NULL,
					tegra_pcie_ep_pex_rst_irq,
					IRQF_TRIGGER_RISING |
					IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
					name, (void *)pcie);
	if (ret < 0) {
		dev_err(dev, "Failed to request IRQ for PERST: %d\n", ret);
		return ret;
	}

	name = devm_kasprintf(dev, GFP_KERNEL, "tegra_pcie_%u_ep_work",
			      pcie->cid);
	if (!name) {
		dev_err(dev, "Failed to create PCIe EP work thread string\n");
		return -ENOMEM;
	}

	pm_runtime_enable(dev);

	ret = dw_pcie_ep_init(ep);
	if (ret) {
		dev_err(dev, "Failed to initialize DWC Endpoint subsystem: %d\n",
			ret);
		return ret;
	}

	return 0;
}

static int tegra_pcie_dw_probe(struct platform_device *pdev)
{
	const struct tegra_pcie_dw_of_data *data;
	struct device *dev = &pdev->dev;
	struct resource *atu_dma_res;
	struct tegra_pcie_dw *pcie;
@@ -1440,6 +2058,8 @@ static int tegra_pcie_dw_probe(struct platform_device *pdev)
	int ret;
	u32 i;

	data = of_device_get_match_data(dev);

	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
	if (!pcie)
		return -ENOMEM;
@@ -1449,19 +2069,37 @@ static int tegra_pcie_dw_probe(struct platform_device *pdev)
	pci->ops = &tegra_dw_pcie_ops;
	pp = &pci->pp;
	pcie->dev = &pdev->dev;
	pcie->mode = (enum dw_pcie_device_mode)data->mode;

	ret = tegra_pcie_dw_parse_dt(pcie);
	if (ret < 0) {
		dev_err(dev, "Failed to parse device tree: %d\n", ret);
		const char *level = KERN_ERR;

		if (ret == -EPROBE_DEFER)
			level = KERN_DEBUG;

		dev_printk(level, dev,
			   dev_fmt("Failed to parse device tree: %d\n"),
			   ret);
		return ret;
	}

	ret = tegra_pcie_get_slot_regulators(pcie);
	if (ret < 0) {
		dev_err(dev, "Failed to get slot regulators: %d\n", ret);
		const char *level = KERN_ERR;

		if (ret == -EPROBE_DEFER)
			level = KERN_DEBUG;

		dev_printk(level, dev,
			   dev_fmt("Failed to get slot regulators: %d\n"),
			   ret);
		return ret;
	}

	if (pcie->pex_refclk_sel_gpiod)
		gpiod_set_value(pcie->pex_refclk_sel_gpiod, 1);

	pcie->pex_ctl_supply = devm_regulator_get(dev, "vddio-pex-ctl");
	if (IS_ERR(pcie->pex_ctl_supply)) {
		ret = PTR_ERR(pcie->pex_ctl_supply);
@@ -1557,24 +2195,49 @@ static int tegra_pcie_dw_probe(struct platform_device *pdev)
		return -ENODEV;
	}

	ret = devm_request_irq(dev, pp->irq, tegra_pcie_irq_handler,
			       IRQF_SHARED, "tegra-pcie-intr", pcie);
	if (ret) {
		dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq, ret);
		return ret;
	}

	pcie->bpmp = tegra_bpmp_get(dev);
	if (IS_ERR(pcie->bpmp))
		return PTR_ERR(pcie->bpmp);

	platform_set_drvdata(pdev, pcie);

	ret = tegra_pcie_config_rp(pcie);
	if (ret && ret != -ENOMEDIUM)
		goto fail;
	else
		return 0;
	switch (pcie->mode) {
	case DW_PCIE_RC_TYPE:
		ret = devm_request_irq(dev, pp->irq, tegra_pcie_rp_irq_handler,
				       IRQF_SHARED, "tegra-pcie-intr", pcie);
		if (ret) {
			dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq,
				ret);
			goto fail;
		}

		ret = tegra_pcie_config_rp(pcie);
		if (ret && ret != -ENOMEDIUM)
			goto fail;
		else
			return 0;
		break;

	case DW_PCIE_EP_TYPE:
		ret = devm_request_threaded_irq(dev, pp->irq,
						tegra_pcie_ep_hard_irq,
						tegra_pcie_ep_irq_thread,
						IRQF_SHARED | IRQF_ONESHOT,
						"tegra-pcie-ep-intr", pcie);
		if (ret) {
			dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq,
				ret);
			goto fail;
		}

		ret = tegra_pcie_config_ep(pcie, pdev);
		if (ret < 0)
			goto fail;
		break;

	default:
		dev_err(dev, "Invalid PCIe device type %d\n", pcie->mode);
	}

fail:
	tegra_bpmp_put(pcie->bpmp);
@@ -1593,6 +2256,8 @@ static int tegra_pcie_dw_remove(struct platform_device *pdev)
	pm_runtime_put_sync(pcie->dev);
	pm_runtime_disable(pcie->dev);
	tegra_bpmp_put(pcie->bpmp);
	if (pcie->pex_refclk_sel_gpiod)
		gpiod_set_value(pcie->pex_refclk_sel_gpiod, 0);

	return 0;
}
@@ -1697,9 +2362,22 @@ static void tegra_pcie_dw_shutdown(struct platform_device *pdev)
	__deinit_controller(pcie);
}

static const struct tegra_pcie_dw_of_data tegra_pcie_dw_rc_of_data = {
	.mode = DW_PCIE_RC_TYPE,
};

static const struct tegra_pcie_dw_of_data tegra_pcie_dw_ep_of_data = {
	.mode = DW_PCIE_EP_TYPE,
};

static const struct of_device_id tegra_pcie_dw_of_match[] = {
	{
		.compatible = "nvidia,tegra194-pcie",
		.data = &tegra_pcie_dw_rc_of_data,
	},
	{
		.compatible = "nvidia,tegra194-pcie-ep",
		.data = &tegra_pcie_dw_ep_of_data,
	},
	{},
};
@@ -0,0 +1,34 @@
# SPDX-License-Identifier: GPL-2.0

menu "Mobiveil PCIe Core Support"
	depends on PCI

config PCIE_MOBIVEIL
	bool

config PCIE_MOBIVEIL_HOST
	bool
	depends on PCI_MSI_IRQ_DOMAIN
	select PCIE_MOBIVEIL

config PCIE_MOBIVEIL_PLAT
	bool "Mobiveil AXI PCIe controller"
	depends on ARCH_ZYNQMP || COMPILE_TEST
	depends on OF
	depends on PCI_MSI_IRQ_DOMAIN
	select PCIE_MOBIVEIL_HOST
	help
	  Say Y here if you want to enable support for the Mobiveil AXI PCIe
	  Soft IP. It has up to 8 outbound and inbound windows
	  for address translation and it is a PCIe Gen4 IP.

config PCIE_LAYERSCAPE_GEN4
	bool "Freescale Layerscape PCIe Gen4 controller"
	depends on PCI
	depends on OF && (ARM64 || ARCH_LAYERSCAPE)
	depends on PCI_MSI_IRQ_DOMAIN
	select PCIE_MOBIVEIL_HOST
	help
	  Say Y here if you want PCIe Gen4 controller support on
	  Layerscape SoCs.
endmenu
@@ -0,0 +1,5 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_PCIE_MOBIVEIL) += pcie-mobiveil.o
obj-$(CONFIG_PCIE_MOBIVEIL_HOST) += pcie-mobiveil-host.o
obj-$(CONFIG_PCIE_MOBIVEIL_PLAT) += pcie-mobiveil-plat.o
obj-$(CONFIG_PCIE_LAYERSCAPE_GEN4) += pcie-layerscape-gen4.o
@@ -0,0 +1,267 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * PCIe Gen4 host controller driver for NXP Layerscape SoCs
 *
 * Copyright 2019-2020 NXP
 *
 * Author: Zhiqiang Hou <Zhiqiang.Hou@nxp.com>
 */

#include <linux/kernel.h>
#include <linux/interrupt.h>
#include <linux/init.h>
#include <linux/of_pci.h>
#include <linux/of_platform.h>
#include <linux/of_irq.h>
#include <linux/of_address.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/resource.h>
#include <linux/mfd/syscon.h>
#include <linux/regmap.h>

#include "pcie-mobiveil.h"

/* LUT and PF control registers */
#define PCIE_LUT_OFF			0x80000
#define PCIE_PF_OFF			0xc0000
#define PCIE_PF_INT_STAT		0x18
#define PF_INT_STAT_PABRST		BIT(31)

#define PCIE_PF_DBG			0x7fc
#define PF_DBG_LTSSM_MASK		0x3f
#define PF_DBG_LTSSM_L0			0x2d /* L0 state */
#define PF_DBG_WE			BIT(31)
#define PF_DBG_PABR			BIT(27)

#define to_ls_pcie_g4(x)		platform_get_drvdata((x)->pdev)

struct ls_pcie_g4 {
	struct mobiveil_pcie pci;
	struct delayed_work dwork;
	int irq;
};

static inline u32 ls_pcie_g4_lut_readl(struct ls_pcie_g4 *pcie, u32 off)
{
	return ioread32(pcie->pci.csr_axi_slave_base + PCIE_LUT_OFF + off);
}

static inline void ls_pcie_g4_lut_writel(struct ls_pcie_g4 *pcie,
					 u32 off, u32 val)
{
	iowrite32(val, pcie->pci.csr_axi_slave_base + PCIE_LUT_OFF + off);
}

static inline u32 ls_pcie_g4_pf_readl(struct ls_pcie_g4 *pcie, u32 off)
{
	return ioread32(pcie->pci.csr_axi_slave_base + PCIE_PF_OFF + off);
}

static inline void ls_pcie_g4_pf_writel(struct ls_pcie_g4 *pcie,
					u32 off, u32 val)
{
	iowrite32(val, pcie->pci.csr_axi_slave_base + PCIE_PF_OFF + off);
}

static int ls_pcie_g4_link_up(struct mobiveil_pcie *pci)
{
	struct ls_pcie_g4 *pcie = to_ls_pcie_g4(pci);
	u32 state;

	state = ls_pcie_g4_pf_readl(pcie, PCIE_PF_DBG);
	state = state & PF_DBG_LTSSM_MASK;

	if (state == PF_DBG_LTSSM_L0)
		return 1;

	return 0;
}

static void ls_pcie_g4_disable_interrupt(struct ls_pcie_g4 *pcie)
{
	struct mobiveil_pcie *mv_pci = &pcie->pci;

	mobiveil_csr_writel(mv_pci, 0, PAB_INTP_AMBA_MISC_ENB);
}

static void ls_pcie_g4_enable_interrupt(struct ls_pcie_g4 *pcie)
{
	struct mobiveil_pcie *mv_pci = &pcie->pci;
	u32 val;

	/* Clear the interrupt status */
	mobiveil_csr_writel(mv_pci, 0xffffffff, PAB_INTP_AMBA_MISC_STAT);

	val = PAB_INTP_INTX_MASK | PAB_INTP_MSI | PAB_INTP_RESET |
	      PAB_INTP_PCIE_UE | PAB_INTP_IE_PMREDI | PAB_INTP_IE_EC;
	mobiveil_csr_writel(mv_pci, val, PAB_INTP_AMBA_MISC_ENB);
}

static int ls_pcie_g4_reinit_hw(struct ls_pcie_g4 *pcie)
{
	struct mobiveil_pcie *mv_pci = &pcie->pci;
	struct device *dev = &mv_pci->pdev->dev;
	u32 val, act_stat;
	int to = 100;

	/* Poll for pab_csb_reset to set and PAB activity to clear */
	do {
		usleep_range(10, 15);
		val = ls_pcie_g4_pf_readl(pcie, PCIE_PF_INT_STAT);
		act_stat = mobiveil_csr_readl(mv_pci, PAB_ACTIVITY_STAT);
	} while (((val & PF_INT_STAT_PABRST) == 0 || act_stat) && to--);
	if (to < 0) {
		dev_err(dev, "Poll PABRST&PABACT timeout\n");
		return -EIO;
	}

	/* clear PEX_RESET bit in PEX_PF0_DBG register */
	val = ls_pcie_g4_pf_readl(pcie, PCIE_PF_DBG);
	val |= PF_DBG_WE;
	ls_pcie_g4_pf_writel(pcie, PCIE_PF_DBG, val);

	val = ls_pcie_g4_pf_readl(pcie, PCIE_PF_DBG);
	val |= PF_DBG_PABR;
	ls_pcie_g4_pf_writel(pcie, PCIE_PF_DBG, val);

	val = ls_pcie_g4_pf_readl(pcie, PCIE_PF_DBG);
	val &= ~PF_DBG_WE;
	ls_pcie_g4_pf_writel(pcie, PCIE_PF_DBG, val);

	mobiveil_host_init(mv_pci, true);

	to = 100;
	while (!ls_pcie_g4_link_up(mv_pci) && to--)
		usleep_range(200, 250);
	if (to < 0) {
		dev_err(dev, "PCIe link training timeout\n");
		return -EIO;
	}

	return 0;
}

static irqreturn_t ls_pcie_g4_isr(int irq, void *dev_id)
{
	struct ls_pcie_g4 *pcie = (struct ls_pcie_g4 *)dev_id;
	struct mobiveil_pcie *mv_pci = &pcie->pci;
	u32 val;

	val = mobiveil_csr_readl(mv_pci, PAB_INTP_AMBA_MISC_STAT);
	if (!val)
		return IRQ_NONE;

	if (val & PAB_INTP_RESET) {
		ls_pcie_g4_disable_interrupt(pcie);
		schedule_delayed_work(&pcie->dwork, msecs_to_jiffies(1));
	}

	mobiveil_csr_writel(mv_pci, val, PAB_INTP_AMBA_MISC_STAT);

	return IRQ_HANDLED;
}

static int ls_pcie_g4_interrupt_init(struct mobiveil_pcie *mv_pci)
{
	struct ls_pcie_g4 *pcie = to_ls_pcie_g4(mv_pci);
	struct platform_device *pdev = mv_pci->pdev;
	struct device *dev = &pdev->dev;
	int ret;

	pcie->irq = platform_get_irq_byname(pdev, "intr");
	if (pcie->irq < 0) {
		dev_err(dev, "Can't get 'intr' IRQ, errno = %d\n", pcie->irq);
		return pcie->irq;
	}
	ret = devm_request_irq(dev, pcie->irq, ls_pcie_g4_isr,
			       IRQF_SHARED, pdev->name, pcie);
	if (ret) {
		dev_err(dev, "Can't register PCIe IRQ, errno = %d\n", ret);
		return ret;
	}

	return 0;
}

static void ls_pcie_g4_reset(struct work_struct *work)
{
	struct delayed_work *dwork = container_of(work, struct delayed_work,
						  work);
	struct ls_pcie_g4 *pcie = container_of(dwork, struct ls_pcie_g4, dwork);
	struct mobiveil_pcie *mv_pci = &pcie->pci;
	u16 ctrl;

	ctrl = mobiveil_csr_readw(mv_pci, PCI_BRIDGE_CONTROL);
	ctrl &= ~PCI_BRIDGE_CTL_BUS_RESET;
	mobiveil_csr_writew(mv_pci, ctrl, PCI_BRIDGE_CONTROL);

	if (ls_pcie_g4_reinit_hw(pcie))
		return;

	ls_pcie_g4_enable_interrupt(pcie);
}

static struct mobiveil_rp_ops ls_pcie_g4_rp_ops = {
	.interrupt_init = ls_pcie_g4_interrupt_init,
};

static const struct mobiveil_pab_ops ls_pcie_g4_pab_ops = {
	.link_up = ls_pcie_g4_link_up,
};

static int __init ls_pcie_g4_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct pci_host_bridge *bridge;
	struct mobiveil_pcie *mv_pci;
	struct ls_pcie_g4 *pcie;
	struct device_node *np = dev->of_node;
	int ret;

	if (!of_parse_phandle(np, "msi-parent", 0)) {
		dev_err(dev, "Failed to find msi-parent\n");
		return -EINVAL;
	}

	bridge = devm_pci_alloc_host_bridge(dev, sizeof(*pcie));
	if (!bridge)
		return -ENOMEM;

	pcie = pci_host_bridge_priv(bridge);
	mv_pci = &pcie->pci;

	mv_pci->pdev = pdev;
	mv_pci->ops = &ls_pcie_g4_pab_ops;
	mv_pci->rp.ops = &ls_pcie_g4_rp_ops;
	mv_pci->rp.bridge = bridge;

	platform_set_drvdata(pdev, pcie);

	INIT_DELAYED_WORK(&pcie->dwork, ls_pcie_g4_reset);

	ret = mobiveil_pcie_host_probe(mv_pci);
	if (ret) {
		dev_err(dev, "Fail to probe\n");
		return ret;
	}

	ls_pcie_g4_enable_interrupt(pcie);

	return 0;
}

static const struct of_device_id ls_pcie_g4_of_match[] = {
	{ .compatible = "fsl,lx2160a-pcie", },
	{ },
};

static struct platform_driver ls_pcie_g4_driver = {
	.driver = {
		.name = "layerscape-pcie-gen4",
		.of_match_table = ls_pcie_g4_of_match,
		.suppress_bind_attrs = true,
	},
};

builtin_platform_driver_probe(ls_pcie_g4_driver, ls_pcie_g4_probe);
@@ -3,10 +3,12 @@
 * PCIe host controller driver for Mobiveil PCIe Host controller
 *
 * Copyright (c) 2018 Mobiveil Inc.
 * Copyright 2019-2020 NXP
 *
 * Author: Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>
 *	   Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
 */

#include <linux/delay.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
@@ -23,274 +25,22 @@
#include <linux/platform_device.h>
#include <linux/slab.h>

#include "../pci.h"

/* register offsets and bit positions */

/*
 * Translation tables are grouped into windows; each window's registers
 * are grouped into blocks of 4 or 16 registers each.
 */
#define PAB_REG_BLOCK_SIZE		16
#define PAB_EXT_REG_BLOCK_SIZE		4

#define PAB_REG_ADDR(offset, win) \
	(offset + (win * PAB_REG_BLOCK_SIZE))
#define PAB_EXT_REG_ADDR(offset, win) \
	(offset + (win * PAB_EXT_REG_BLOCK_SIZE))
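
To make the block addressing concrete, a short illustrative expansion of the two macros (a sketch, not part of the patch; window 2 and the AXI AMAP base offsets are examples taken from the definitions that follow):

	u32 ctrl_off = PAB_REG_ADDR(0x0ba0, 2);		/* 0x0ba0 + 2 * 16 = 0x0bc0 */
	u32 size_off = PAB_EXT_REG_ADDR(0xbaf0, 2);	/* 0xbaf0 + 2 * 4  = 0xbaf8 */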

#define LTSSM_STATUS			0x0404
#define LTSSM_STATUS_L0_MASK		0x3f
#define LTSSM_STATUS_L0			0x2d

#define PAB_CTRL			0x0808
#define AMBA_PIO_ENABLE_SHIFT		0
#define PEX_PIO_ENABLE_SHIFT		1
#define PAGE_SEL_SHIFT			13
#define PAGE_SEL_MASK			0x3f
#define PAGE_LO_MASK			0x3ff
#define PAGE_SEL_OFFSET_SHIFT		10

#define PAB_AXI_PIO_CTRL		0x0840
#define APIO_EN_MASK			0xf

#define PAB_PEX_PIO_CTRL		0x08c0
#define PIO_ENABLE_SHIFT		0

#define PAB_INTP_AMBA_MISC_ENB		0x0b0c
#define PAB_INTP_AMBA_MISC_STAT		0x0b1c
#define PAB_INTP_INTX_MASK		0x01e0
#define PAB_INTP_MSI_MASK		0x8

#define PAB_AXI_AMAP_CTRL(win)		PAB_REG_ADDR(0x0ba0, win)
#define WIN_ENABLE_SHIFT		0
#define WIN_TYPE_SHIFT			1
#define WIN_TYPE_MASK			0x3
#define WIN_SIZE_MASK			0xfffffc00

#define PAB_EXT_AXI_AMAP_SIZE(win)	PAB_EXT_REG_ADDR(0xbaf0, win)

#define PAB_EXT_AXI_AMAP_AXI_WIN(win)	PAB_EXT_REG_ADDR(0x80a0, win)
#define PAB_AXI_AMAP_AXI_WIN(win)	PAB_REG_ADDR(0x0ba4, win)
#define AXI_WINDOW_ALIGN_MASK		3

#define PAB_AXI_AMAP_PEX_WIN_L(win)	PAB_REG_ADDR(0x0ba8, win)
#define PAB_BUS_SHIFT			24
#define PAB_DEVICE_SHIFT		19
#define PAB_FUNCTION_SHIFT		16

#define PAB_AXI_AMAP_PEX_WIN_H(win)	PAB_REG_ADDR(0x0bac, win)
#define PAB_INTP_AXI_PIO_CLASS		0x474

#define PAB_PEX_AMAP_CTRL(win)		PAB_REG_ADDR(0x4ba0, win)
#define AMAP_CTRL_EN_SHIFT		0
#define AMAP_CTRL_TYPE_SHIFT		1
#define AMAP_CTRL_TYPE_MASK		3

#define PAB_EXT_PEX_AMAP_SIZEN(win)	PAB_EXT_REG_ADDR(0xbef0, win)
#define PAB_EXT_PEX_AMAP_AXI_WIN(win)	PAB_EXT_REG_ADDR(0xb4a0, win)
#define PAB_PEX_AMAP_AXI_WIN(win)	PAB_REG_ADDR(0x4ba4, win)
#define PAB_PEX_AMAP_PEX_WIN_L(win)	PAB_REG_ADDR(0x4ba8, win)
#define PAB_PEX_AMAP_PEX_WIN_H(win)	PAB_REG_ADDR(0x4bac, win)

/* starting offset of INTX bits in status register */
#define PAB_INTX_START			5

/* supported number of MSI interrupts */
#define PCI_NUM_MSI			16

/* MSI registers */
#define MSI_BASE_LO_OFFSET		0x04
#define MSI_BASE_HI_OFFSET		0x08
#define MSI_SIZE_OFFSET			0x0c
#define MSI_ENABLE_OFFSET		0x14
#define MSI_STATUS_OFFSET		0x18
#define MSI_DATA_OFFSET			0x20
#define MSI_ADDR_L_OFFSET		0x24
#define MSI_ADDR_H_OFFSET		0x28

/* outbound and inbound window definitions */
#define WIN_NUM_0			0
#define WIN_NUM_1			1
#define CFG_WINDOW_TYPE			0
#define IO_WINDOW_TYPE			1
#define MEM_WINDOW_TYPE			2
#define IB_WIN_SIZE			((u64)256 * 1024 * 1024 * 1024)
#define MAX_PIO_WINDOWS			8

/* Parameters for the waiting for link up routine */
#define LINK_WAIT_MAX_RETRIES		10
#define LINK_WAIT_MIN			90000
#define LINK_WAIT_MAX			100000

#define PAGED_ADDR_BNDRY		0xc00
#define OFFSET_TO_PAGE_ADDR(off) \
	((off & PAGE_LO_MASK) | PAGED_ADDR_BNDRY)
#define OFFSET_TO_PAGE_IDX(off) \
	((off >> PAGE_SEL_OFFSET_SHIFT) & PAGE_SEL_MASK)

struct mobiveil_msi {			/* MSI information */
	struct mutex lock;		/* protect bitmap variable */
	struct irq_domain *msi_domain;
	struct irq_domain *dev_domain;
	phys_addr_t msi_pages_phys;
	int num_of_vectors;
	DECLARE_BITMAP(msi_irq_in_use, PCI_NUM_MSI);
};

struct mobiveil_pcie {
	struct platform_device *pdev;
	void __iomem *config_axi_slave_base;	/* endpoint config base */
	void __iomem *csr_axi_slave_base;	/* root port config base */
	void __iomem *apb_csr_base;		/* MSI register base */
	phys_addr_t pcie_reg_base;	/* Physical PCIe Controller Base */
	struct irq_domain *intx_domain;
	raw_spinlock_t intx_mask_lock;
	int irq;
	int apio_wins;
	int ppio_wins;
	int ob_wins_configured;		/* configured outbound windows */
	int ib_wins_configured;		/* configured inbound windows */
	struct resource *ob_io_res;
	char root_bus_nr;
	struct mobiveil_msi msi;
};

/*
 * mobiveil_pcie_sel_page - routine to access paged register
 *
 * Registers whose address is greater than PAGED_ADDR_BNDRY (0xc00) are
 * paged; for this scheme to work, the upper 6 bits of the offset are
 * written to the pg_sel field of the PAB_CTRL register and the lower 10
 * bits, OR'ed with PAGED_ADDR_BNDRY, are used as the register offset.
 */
static void mobiveil_pcie_sel_page(struct mobiveil_pcie *pcie, u8 pg_idx)
{
	u32 val;

	val = readl(pcie->csr_axi_slave_base + PAB_CTRL);
	val &= ~(PAGE_SEL_MASK << PAGE_SEL_SHIFT);
	val |= (pg_idx & PAGE_SEL_MASK) << PAGE_SEL_SHIFT;

	writel(val, pcie->csr_axi_slave_base + PAB_CTRL);
}

static void *mobiveil_pcie_comp_addr(struct mobiveil_pcie *pcie, u32 off)
{
	if (off < PAGED_ADDR_BNDRY) {
		/* For directly accessed registers, clear the pg_sel field */
		mobiveil_pcie_sel_page(pcie, 0);
		return pcie->csr_axi_slave_base + off;
	}

	mobiveil_pcie_sel_page(pcie, OFFSET_TO_PAGE_IDX(off));
	return pcie->csr_axi_slave_base + OFFSET_TO_PAGE_ADDR(off);
}
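
To make the paging arithmetic concrete, an illustrative expansion of the two macros for one paged CSR offset (a sketch, not part of the patch):

	u32 off = 0x4ba0;			/* e.g. PAB_PEX_AMAP_CTRL(0) */
	u8 pg_idx = OFFSET_TO_PAGE_IDX(off);	/* (0x4ba0 >> 10) & 0x3f = 0x12 */
	u32 paged = OFFSET_TO_PAGE_ADDR(off);	/* (0x4ba0 & 0x3ff) | 0xc00 = 0xfa0 */

The pg_sel field is then loaded with 0x12 and the access goes to csr_axi_slave_base + 0xfa0.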

static int mobiveil_pcie_read(void __iomem *addr, int size, u32 *val)
{
	if ((uintptr_t)addr & (size - 1)) {
		*val = 0;
		return PCIBIOS_BAD_REGISTER_NUMBER;
	}

	switch (size) {
	case 4:
		*val = readl(addr);
		break;
	case 2:
		*val = readw(addr);
		break;
	case 1:
		*val = readb(addr);
		break;
	default:
		*val = 0;
		return PCIBIOS_BAD_REGISTER_NUMBER;
	}

	return PCIBIOS_SUCCESSFUL;
}

static int mobiveil_pcie_write(void __iomem *addr, int size, u32 val)
{
	if ((uintptr_t)addr & (size - 1))
		return PCIBIOS_BAD_REGISTER_NUMBER;

	switch (size) {
	case 4:
		writel(val, addr);
		break;
	case 2:
		writew(val, addr);
		break;
	case 1:
		writeb(val, addr);
		break;
	default:
		return PCIBIOS_BAD_REGISTER_NUMBER;
	}

	return PCIBIOS_SUCCESSFUL;
}

static u32 mobiveil_csr_read(struct mobiveil_pcie *pcie, u32 off, size_t size)
{
	void *addr;
	u32 val;
	int ret;

	addr = mobiveil_pcie_comp_addr(pcie, off);

	ret = mobiveil_pcie_read(addr, size, &val);
	if (ret)
		dev_err(&pcie->pdev->dev, "read CSR address failed\n");

	return val;
}

static void mobiveil_csr_write(struct mobiveil_pcie *pcie, u32 val, u32 off,
			       size_t size)
{
	void *addr;
	int ret;

	addr = mobiveil_pcie_comp_addr(pcie, off);

	ret = mobiveil_pcie_write(addr, size, val);
	if (ret)
		dev_err(&pcie->pdev->dev, "write CSR address failed\n");
}

static u32 mobiveil_csr_readl(struct mobiveil_pcie *pcie, u32 off)
{
	return mobiveil_csr_read(pcie, off, 0x4);
}

static void mobiveil_csr_writel(struct mobiveil_pcie *pcie, u32 val, u32 off)
{
	mobiveil_csr_write(pcie, val, off, 0x4);
}

static bool mobiveil_pcie_link_up(struct mobiveil_pcie *pcie)
{
	return (mobiveil_csr_readl(pcie, LTSSM_STATUS) &
		LTSSM_STATUS_L0_MASK) == LTSSM_STATUS_L0;
}
#include "pcie-mobiveil.h"

static bool mobiveil_pcie_valid_device(struct pci_bus *bus, unsigned int devfn)
{
	struct mobiveil_pcie *pcie = bus->sysdata;
	struct mobiveil_root_port *rp = &pcie->rp;

	/* Only one device down on each root port */
	if ((bus->number == pcie->root_bus_nr) && (devfn > 0))
	if ((bus->number == rp->root_bus_nr) && (devfn > 0))
		return false;

	/*
	 * Do not read more than one device on the bus directly
	 * attached to RC
	 */
	if ((bus->primary == pcie->root_bus_nr) && (PCI_SLOT(devfn) > 0))
	if ((bus->primary == rp->root_bus_nr) && (PCI_SLOT(devfn) > 0))
		return false;

	return true;
@@ -304,13 +54,14 @@ static void __iomem *mobiveil_pcie_map_bus(struct pci_bus *bus,
					   unsigned int devfn, int where)
{
	struct mobiveil_pcie *pcie = bus->sysdata;
	struct mobiveil_root_port *rp = &pcie->rp;
	u32 value;

	if (!mobiveil_pcie_valid_device(bus, devfn))
		return NULL;

	/* RC config access */
	if (bus->number == pcie->root_bus_nr)
	if (bus->number == rp->root_bus_nr)
		return pcie->csr_axi_slave_base + where;

	/*

@@ -325,7 +76,7 @@ static void __iomem *mobiveil_pcie_map_bus(struct pci_bus *bus,

	mobiveil_csr_writel(pcie, value, PAB_AXI_AMAP_PEX_WIN_L(WIN_NUM_0));

	return pcie->config_axi_slave_base + where;
	return rp->config_axi_slave_base + where;
}

static struct pci_ops mobiveil_pcie_ops = {

@@ -339,7 +90,8 @@ static void mobiveil_pcie_isr(struct irq_desc *desc)
	struct irq_chip *chip = irq_desc_get_chip(desc);
	struct mobiveil_pcie *pcie = irq_desc_get_handler_data(desc);
	struct device *dev = &pcie->pdev->dev;
	struct mobiveil_msi *msi = &pcie->msi;
	struct mobiveil_root_port *rp = &pcie->rp;
	struct mobiveil_msi *msi = &rp->msi;
	u32 msi_data, msi_addr_lo, msi_addr_hi;
	u32 intr_status, msi_status;
	unsigned long shifted_status;

@@ -365,7 +117,7 @@ static void mobiveil_pcie_isr(struct irq_desc *desc)
	shifted_status >>= PAB_INTX_START;
	do {
		for_each_set_bit(bit, &shifted_status, PCI_NUM_INTX) {
			virq = irq_find_mapping(pcie->intx_domain,
			virq = irq_find_mapping(rp->intx_domain,
						bit + 1);
			if (virq)
				generic_handle_irq(virq);

@@ -424,15 +176,16 @@ static int mobiveil_pcie_parse_dt(struct mobiveil_pcie *pcie)
	struct device *dev = &pcie->pdev->dev;
	struct platform_device *pdev = pcie->pdev;
	struct device_node *node = dev->of_node;
	struct mobiveil_root_port *rp = &pcie->rp;
	struct resource *res;

	/* map config resource */
	res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
					   "config_axi_slave");
	pcie->config_axi_slave_base = devm_pci_remap_cfg_resource(dev, res);
	if (IS_ERR(pcie->config_axi_slave_base))
		return PTR_ERR(pcie->config_axi_slave_base);
	pcie->ob_io_res = res;
	rp->config_axi_slave_base = devm_pci_remap_cfg_resource(dev, res);
	if (IS_ERR(rp->config_axi_slave_base))
		return PTR_ERR(rp->config_axi_slave_base);
	rp->ob_io_res = res;

	/* map csr resource */
	res = platform_get_resource_byname(pdev, IORESOURCE_MEM,

@@ -442,12 +195,6 @@ static int mobiveil_pcie_parse_dt(struct mobiveil_pcie *pcie)
		return PTR_ERR(pcie->csr_axi_slave_base);
	pcie->pcie_reg_base = res->start;

	/* map MSI config resource */
	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "apb_csr");
	pcie->apb_csr_base = devm_pci_remap_cfg_resource(dev, res);
	if (IS_ERR(pcie->apb_csr_base))
		return PTR_ERR(pcie->apb_csr_base);

	/* read the number of windows requested */
	if (of_property_read_u32(node, "apio-wins", &pcie->apio_wins))
		pcie->apio_wins = MAX_PIO_WINDOWS;

@@ -455,118 +202,15 @@ static int mobiveil_pcie_parse_dt(struct mobiveil_pcie *pcie)
	if (of_property_read_u32(node, "ppio-wins", &pcie->ppio_wins))
		pcie->ppio_wins = MAX_PIO_WINDOWS;

	pcie->irq = platform_get_irq(pdev, 0);
	if (pcie->irq <= 0) {
		dev_err(dev, "failed to map IRQ: %d\n", pcie->irq);
		return -ENODEV;
	}

	return 0;
}

static void program_ib_windows(struct mobiveil_pcie *pcie, int win_num,
			       u64 cpu_addr, u64 pci_addr, u32 type, u64 size)
{
	u32 value;
	u64 size64 = ~(size - 1);

	if (win_num >= pcie->ppio_wins) {
		dev_err(&pcie->pdev->dev,
			"ERROR: max inbound windows reached !\n");
		return;
	}

	value = mobiveil_csr_readl(pcie, PAB_PEX_AMAP_CTRL(win_num));
	value &= ~(AMAP_CTRL_TYPE_MASK << AMAP_CTRL_TYPE_SHIFT | WIN_SIZE_MASK);
	value |= type << AMAP_CTRL_TYPE_SHIFT | 1 << AMAP_CTRL_EN_SHIFT |
		 (lower_32_bits(size64) & WIN_SIZE_MASK);
	mobiveil_csr_writel(pcie, value, PAB_PEX_AMAP_CTRL(win_num));

	mobiveil_csr_writel(pcie, upper_32_bits(size64),
			    PAB_EXT_PEX_AMAP_SIZEN(win_num));

	mobiveil_csr_writel(pcie, lower_32_bits(cpu_addr),
			    PAB_PEX_AMAP_AXI_WIN(win_num));
	mobiveil_csr_writel(pcie, upper_32_bits(cpu_addr),
			    PAB_EXT_PEX_AMAP_AXI_WIN(win_num));

	mobiveil_csr_writel(pcie, lower_32_bits(pci_addr),
			    PAB_PEX_AMAP_PEX_WIN_L(win_num));
	mobiveil_csr_writel(pcie, upper_32_bits(pci_addr),
			    PAB_PEX_AMAP_PEX_WIN_H(win_num));

	pcie->ib_wins_configured++;
}

/*
 * routine to program the outbound windows
 */
static void program_ob_windows(struct mobiveil_pcie *pcie, int win_num,
			       u64 cpu_addr, u64 pci_addr, u32 type, u64 size)
{
	u32 value;
	u64 size64 = ~(size - 1);

	if (win_num >= pcie->apio_wins) {
		dev_err(&pcie->pdev->dev,
			"ERROR: max outbound windows reached !\n");
		return;
	}

	/*
	 * program Enable Bit to 1, Type Bit to (00) base 2, AXI Window Size Bit
	 * to 4 KB in PAB_AXI_AMAP_CTRL register
	 */
	value = mobiveil_csr_readl(pcie, PAB_AXI_AMAP_CTRL(win_num));
	value &= ~(WIN_TYPE_MASK << WIN_TYPE_SHIFT | WIN_SIZE_MASK);
	value |= 1 << WIN_ENABLE_SHIFT | type << WIN_TYPE_SHIFT |
		 (lower_32_bits(size64) & WIN_SIZE_MASK);
	mobiveil_csr_writel(pcie, value, PAB_AXI_AMAP_CTRL(win_num));

	mobiveil_csr_writel(pcie, upper_32_bits(size64),
			    PAB_EXT_AXI_AMAP_SIZE(win_num));

	/*
	 * program AXI window base with appropriate value in
	 * PAB_AXI_AMAP_AXI_WIN0 register
	 */
	mobiveil_csr_writel(pcie,
			    lower_32_bits(cpu_addr) & (~AXI_WINDOW_ALIGN_MASK),
			    PAB_AXI_AMAP_AXI_WIN(win_num));
	mobiveil_csr_writel(pcie, upper_32_bits(cpu_addr),
			    PAB_EXT_AXI_AMAP_AXI_WIN(win_num));

	mobiveil_csr_writel(pcie, lower_32_bits(pci_addr),
			    PAB_AXI_AMAP_PEX_WIN_L(win_num));
	mobiveil_csr_writel(pcie, upper_32_bits(pci_addr),
			    PAB_AXI_AMAP_PEX_WIN_H(win_num));

	pcie->ob_wins_configured++;
}

static int mobiveil_bringup_link(struct mobiveil_pcie *pcie)
{
	int retries;

	/* check if the link is up or not */
	for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
		if (mobiveil_pcie_link_up(pcie))
			return 0;

		usleep_range(LINK_WAIT_MIN, LINK_WAIT_MAX);
	}

	dev_err(&pcie->pdev->dev, "link never came up\n");

	return -ETIMEDOUT;
}

static void mobiveil_pcie_enable_msi(struct mobiveil_pcie *pcie)
{
	phys_addr_t msg_addr = pcie->pcie_reg_base;
	struct mobiveil_msi *msi = &pcie->msi;
	struct mobiveil_msi *msi = &pcie->rp.msi;

	pcie->msi.num_of_vectors = PCI_NUM_MSI;
	msi->num_of_vectors = PCI_NUM_MSI;
	msi->msi_pages_phys = (phys_addr_t)msg_addr;

	writel_relaxed(lower_32_bits(msg_addr),

@@ -577,17 +221,23 @@ static void mobiveil_pcie_enable_msi(struct mobiveil_pcie *pcie)
	writel_relaxed(1, pcie->apb_csr_base + MSI_ENABLE_OFFSET);
}

static int mobiveil_host_init(struct mobiveil_pcie *pcie)
int mobiveil_host_init(struct mobiveil_pcie *pcie, bool reinit)
{
	struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
	struct mobiveil_root_port *rp = &pcie->rp;
	struct pci_host_bridge *bridge = rp->bridge;
	u32 value, pab_ctrl, type;
	struct resource_entry *win;

	/* setup bus numbers */
	value = mobiveil_csr_readl(pcie, PCI_PRIMARY_BUS);
	value &= 0xff000000;
	value |= 0x00ff0100;
	mobiveil_csr_writel(pcie, value, PCI_PRIMARY_BUS);
	pcie->ib_wins_configured = 0;
	pcie->ob_wins_configured = 0;

	if (!reinit) {
		/* setup bus numbers */
		value = mobiveil_csr_readl(pcie, PCI_PRIMARY_BUS);
		value &= 0xff000000;
		value |= 0x00ff0100;
		mobiveil_csr_writel(pcie, value, PCI_PRIMARY_BUS);
	}

	/*
	 * program Bus Master Enable Bit in Command Register in PAB Config

@@ -605,9 +255,6 @@ static int mobiveil_host_init(struct mobiveil_pcie *pcie)
	pab_ctrl |= (1 << AMBA_PIO_ENABLE_SHIFT) | (1 << PEX_PIO_ENABLE_SHIFT);
	mobiveil_csr_writel(pcie, pab_ctrl, PAB_CTRL);

	mobiveil_csr_writel(pcie, (PAB_INTP_INTX_MASK | PAB_INTP_MSI_MASK),
			    PAB_INTP_AMBA_MISC_ENB);

	/*
	 * program PIO Enable Bit to 1 and Config Window Enable Bit to 1 in
	 * PAB_AXI_PIO_CTRL Register

@@ -629,8 +276,8 @@ static int mobiveil_host_init(struct mobiveil_pcie *pcie)
	 */

	/* config outbound translation window */
	program_ob_windows(pcie, WIN_NUM_0, pcie->ob_io_res->start, 0,
			   CFG_WINDOW_TYPE, resource_size(pcie->ob_io_res));
	program_ob_windows(pcie, WIN_NUM_0, rp->ob_io_res->start, 0,
			   CFG_WINDOW_TYPE, resource_size(rp->ob_io_res));

	/* memory inbound translation window */
	program_ib_windows(pcie, WIN_NUM_0, 0, 0, MEM_WINDOW_TYPE, IB_WIN_SIZE);

@@ -657,9 +304,6 @@ static int mobiveil_host_init(struct mobiveil_pcie *pcie)
	value |= (PCI_CLASS_BRIDGE_PCI << 16);
	mobiveil_csr_writel(pcie, value, PAB_INTP_AXI_PIO_CLASS);

	/* setup MSI hardware registers */
	mobiveil_pcie_enable_msi(pcie);

	return 0;
}
@@ -667,32 +311,36 @@ static void mobiveil_mask_intx_irq(struct irq_data *data)
{
	struct irq_desc *desc = irq_to_desc(data->irq);
	struct mobiveil_pcie *pcie;
	struct mobiveil_root_port *rp;
	unsigned long flags;
	u32 mask, shifted_val;

	pcie = irq_desc_get_chip_data(desc);
	rp = &pcie->rp;
	mask = 1 << ((data->hwirq + PAB_INTX_START) - 1);
	raw_spin_lock_irqsave(&pcie->intx_mask_lock, flags);
	raw_spin_lock_irqsave(&rp->intx_mask_lock, flags);
	shifted_val = mobiveil_csr_readl(pcie, PAB_INTP_AMBA_MISC_ENB);
	shifted_val &= ~mask;
	mobiveil_csr_writel(pcie, shifted_val, PAB_INTP_AMBA_MISC_ENB);
	raw_spin_unlock_irqrestore(&pcie->intx_mask_lock, flags);
	raw_spin_unlock_irqrestore(&rp->intx_mask_lock, flags);
}

static void mobiveil_unmask_intx_irq(struct irq_data *data)
{
	struct irq_desc *desc = irq_to_desc(data->irq);
	struct mobiveil_pcie *pcie;
	struct mobiveil_root_port *rp;
	unsigned long flags;
	u32 shifted_val, mask;

	pcie = irq_desc_get_chip_data(desc);
	rp = &pcie->rp;
	mask = 1 << ((data->hwirq + PAB_INTX_START) - 1);
	raw_spin_lock_irqsave(&pcie->intx_mask_lock, flags);
	raw_spin_lock_irqsave(&rp->intx_mask_lock, flags);
	shifted_val = mobiveil_csr_readl(pcie, PAB_INTP_AMBA_MISC_ENB);
	shifted_val |= mask;
	mobiveil_csr_writel(pcie, shifted_val, PAB_INTP_AMBA_MISC_ENB);
	raw_spin_unlock_irqrestore(&pcie->intx_mask_lock, flags);
	raw_spin_unlock_irqrestore(&rp->intx_mask_lock, flags);
}

static struct irq_chip intx_irq_chip = {

@@ -760,7 +408,7 @@ static int mobiveil_irq_msi_domain_alloc(struct irq_domain *domain,
				   unsigned int nr_irqs, void *args)
{
	struct mobiveil_pcie *pcie = domain->host_data;
	struct mobiveil_msi *msi = &pcie->msi;
	struct mobiveil_msi *msi = &pcie->rp.msi;
	unsigned long bit;

	WARN_ON(nr_irqs != 1);

@@ -787,7 +435,7 @@ static void mobiveil_irq_msi_domain_free(struct irq_domain *domain,
{
	struct irq_data *d = irq_domain_get_irq_data(domain, virq);
	struct mobiveil_pcie *pcie = irq_data_get_irq_chip_data(d);
	struct mobiveil_msi *msi = &pcie->msi;
	struct mobiveil_msi *msi = &pcie->rp.msi;

	mutex_lock(&msi->lock);

@@ -808,9 +456,9 @@ static int mobiveil_allocate_msi_domains(struct mobiveil_pcie *pcie)
{
	struct device *dev = &pcie->pdev->dev;
	struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
	struct mobiveil_msi *msi = &pcie->msi;
	struct mobiveil_msi *msi = &pcie->rp.msi;

	mutex_init(&pcie->msi.lock);
	mutex_init(&msi->lock);
	msi->dev_domain = irq_domain_add_linear(NULL, msi->num_of_vectors,
						&msi_domain_ops, pcie);
	if (!msi->dev_domain) {

@@ -834,18 +482,19 @@ static int mobiveil_pcie_init_irq_domain(struct mobiveil_pcie *pcie)
{
	struct device *dev = &pcie->pdev->dev;
	struct device_node *node = dev->of_node;
	struct mobiveil_root_port *rp = &pcie->rp;
	int ret;

	/* setup INTx */
	pcie->intx_domain = irq_domain_add_linear(node, PCI_NUM_INTX,
						  &intx_domain_ops, pcie);
	rp->intx_domain = irq_domain_add_linear(node, PCI_NUM_INTX,
						&intx_domain_ops, pcie);

	if (!pcie->intx_domain) {
	if (!rp->intx_domain) {
		dev_err(dev, "Failed to get a INTx IRQ domain\n");
		return -ENOMEM;
	}

	raw_spin_lock_init(&pcie->intx_mask_lock);
	raw_spin_lock_init(&rp->intx_mask_lock);

	/* setup MSI */
	ret = mobiveil_allocate_msi_domains(pcie);

@@ -855,23 +504,74 @@ static int mobiveil_pcie_init_irq_domain(struct mobiveil_pcie *pcie)
	return 0;
}

static int mobiveil_pcie_probe(struct platform_device *pdev)
static int mobiveil_pcie_integrated_interrupt_init(struct mobiveil_pcie *pcie)
{
	struct mobiveil_pcie *pcie;
	struct pci_bus *bus;
	struct pci_bus *child;
	struct pci_host_bridge *bridge;
	struct platform_device *pdev = pcie->pdev;
	struct device *dev = &pdev->dev;
	struct mobiveil_root_port *rp = &pcie->rp;
	struct resource *res;
	int ret;

	/* allocate the PCIe port */
	bridge = devm_pci_alloc_host_bridge(dev, sizeof(*pcie));
	if (!bridge)
		return -ENOMEM;
	/* map MSI config resource */
	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "apb_csr");
	pcie->apb_csr_base = devm_pci_remap_cfg_resource(dev, res);
	if (IS_ERR(pcie->apb_csr_base))
		return PTR_ERR(pcie->apb_csr_base);

	pcie = pci_host_bridge_priv(bridge);
	/* setup MSI hardware registers */
	mobiveil_pcie_enable_msi(pcie);

	pcie->pdev = pdev;
	rp->irq = platform_get_irq(pdev, 0);
	if (rp->irq <= 0) {
		dev_err(dev, "failed to map IRQ: %d\n", rp->irq);
		return -ENODEV;
	}

	/* initialize the IRQ domains */
	ret = mobiveil_pcie_init_irq_domain(pcie);
	if (ret) {
		dev_err(dev, "Failed creating IRQ Domain\n");
		return ret;
	}

	irq_set_chained_handler_and_data(rp->irq, mobiveil_pcie_isr, pcie);

	/* Enable interrupts */
	mobiveil_csr_writel(pcie, (PAB_INTP_INTX_MASK | PAB_INTP_MSI_MASK),
			    PAB_INTP_AMBA_MISC_ENB);

	return 0;
}

static int mobiveil_pcie_interrupt_init(struct mobiveil_pcie *pcie)
{
	struct mobiveil_root_port *rp = &pcie->rp;

	if (rp->ops->interrupt_init)
		return rp->ops->interrupt_init(pcie);

	return mobiveil_pcie_integrated_interrupt_init(pcie);
}

static bool mobiveil_pcie_is_bridge(struct mobiveil_pcie *pcie)
{
	u32 header_type;

	header_type = mobiveil_csr_readb(pcie, PCI_HEADER_TYPE);
	header_type &= 0x7f;

	return header_type == PCI_HEADER_TYPE_BRIDGE;
}

int mobiveil_pcie_host_probe(struct mobiveil_pcie *pcie)
{
	struct mobiveil_root_port *rp = &pcie->rp;
	struct pci_host_bridge *bridge = rp->bridge;
	struct device *dev = &pcie->pdev->dev;
	struct pci_bus *bus;
	struct pci_bus *child;
	int ret;

	ret = mobiveil_pcie_parse_dt(pcie);
	if (ret) {
@ -879,6 +579,9 @@ static int mobiveil_pcie_probe(struct platform_device *pdev)
|
|||
return ret;
|
||||
}
|
||||
|
||||
if (!mobiveil_pcie_is_bridge(pcie))
|
||||
return -ENODEV;
|
||||
|
||||
/* parse the host bridge base addresses from the device tree file */
|
||||
ret = pci_parse_request_of_pci_ranges(dev, &bridge->windows,
|
||||
&bridge->dma_ranges, NULL);
|
||||
|
@ -891,25 +594,22 @@ static int mobiveil_pcie_probe(struct platform_device *pdev)
|
|||
* configure all inbound and outbound windows and prepare the RC for
|
||||
* config access
|
||||
*/
|
||||
ret = mobiveil_host_init(pcie);
|
||||
ret = mobiveil_host_init(pcie, false);
|
||||
if (ret) {
|
||||
dev_err(dev, "Failed to initialize host\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
/* initialize the IRQ domains */
|
||||
ret = mobiveil_pcie_init_irq_domain(pcie);
|
||||
ret = mobiveil_pcie_interrupt_init(pcie);
|
||||
if (ret) {
|
||||
dev_err(dev, "Failed creating IRQ Domain\n");
|
||||
dev_err(dev, "Interrupt init failed\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
irq_set_chained_handler_and_data(pcie->irq, mobiveil_pcie_isr, pcie);
|
||||
|
||||
/* Initialize bridge */
|
||||
bridge->dev.parent = dev;
|
||||
bridge->sysdata = pcie;
|
||||
bridge->busnr = pcie->root_bus_nr;
|
||||
bridge->busnr = rp->root_bus_nr;
|
||||
bridge->ops = &mobiveil_pcie_ops;
|
||||
bridge->map_irq = of_irq_parse_and_map_pci;
|
||||
bridge->swizzle_irq = pci_common_swizzle;
|
||||
|
@ -934,25 +634,3 @@ static int mobiveil_pcie_probe(struct platform_device *pdev)
|
|||
|
||||
return 0;
|
||||
}
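/*
 * Descriptive note: mobiveil_pcie_host_probe() is deliberately not static;
 * platform front ends such as mobiveil_pcie_probe() in pcie-mobiveil-plat.c
 * below allocate the host bridge and then hand off to this common routine.
 */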

static const struct of_device_id mobiveil_pcie_of_match[] = {
	{.compatible = "mbvl,gpex40-pcie",},
	{},
};

MODULE_DEVICE_TABLE(of, mobiveil_pcie_of_match);

static struct platform_driver mobiveil_pcie_driver = {
	.probe = mobiveil_pcie_probe,
	.driver = {
		.name = "mobiveil-pcie",
		.of_match_table = mobiveil_pcie_of_match,
		.suppress_bind_attrs = true,
	},
};

builtin_platform_driver(mobiveil_pcie_driver);

MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Mobiveil PCIe host controller driver");
MODULE_AUTHOR("Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>");

@@ -0,0 +1,61 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * PCIe host controller driver for Mobiveil PCIe Host controller
 *
 * Copyright (c) 2018 Mobiveil Inc.
 * Copyright 2019 NXP
 *
 * Author: Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>
 *	   Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
 */

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

#include "pcie-mobiveil.h"

static int mobiveil_pcie_probe(struct platform_device *pdev)
{
	struct mobiveil_pcie *pcie;
	struct pci_host_bridge *bridge;
	struct device *dev = &pdev->dev;

	/* allocate the PCIe port */
	bridge = devm_pci_alloc_host_bridge(dev, sizeof(*pcie));
	if (!bridge)
		return -ENOMEM;

	pcie = pci_host_bridge_priv(bridge);
	pcie->rp.bridge = bridge;

	pcie->pdev = pdev;

	return mobiveil_pcie_host_probe(pcie);
}

static const struct of_device_id mobiveil_pcie_of_match[] = {
	{.compatible = "mbvl,gpex40-pcie",},
	{},
};

MODULE_DEVICE_TABLE(of, mobiveil_pcie_of_match);

static struct platform_driver mobiveil_pcie_driver = {
	.probe = mobiveil_pcie_probe,
	.driver = {
		.name = "mobiveil-pcie",
		.of_match_table = mobiveil_pcie_of_match,
		.suppress_bind_attrs = true,
	},
};

builtin_platform_driver(mobiveil_pcie_driver);

MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Mobiveil PCIe host controller driver");
MODULE_AUTHOR("Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>");

@@ -0,0 +1,231 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * PCIe host controller driver for Mobiveil PCIe Host controller
 *
 * Copyright (c) 2018 Mobiveil Inc.
 * Copyright 2019 NXP
 *
 * Author: Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>
 *	   Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
 */

#include <linux/delay.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/platform_device.h>

#include "pcie-mobiveil.h"

/*
 * mobiveil_pcie_sel_page - routine to access paged register
 *
 * Registers whose address is greater than PAGED_ADDR_BNDRY (0xc00) are
 * paged. For this scheme to work, the upper 6 bits of the offset are
 * written to the pg_sel field of the PAB_CTRL register, and the lower 10
 * bits, ORed with PAGED_ADDR_BNDRY, are used as the offset of the register.
 */
static void mobiveil_pcie_sel_page(struct mobiveil_pcie *pcie, u8 pg_idx)
{
	u32 val;

	val = readl(pcie->csr_axi_slave_base + PAB_CTRL);
	val &= ~(PAGE_SEL_MASK << PAGE_SEL_SHIFT);
	val |= (pg_idx & PAGE_SEL_MASK) << PAGE_SEL_SHIFT;

	writel(val, pcie->csr_axi_slave_base + PAB_CTRL);
}

static void __iomem *mobiveil_pcie_comp_addr(struct mobiveil_pcie *pcie,
					     u32 off)
{
	if (off < PAGED_ADDR_BNDRY) {
		/* For directly accessed registers, clear the pg_sel field */
		mobiveil_pcie_sel_page(pcie, 0);
		return pcie->csr_axi_slave_base + off;
	}

	mobiveil_pcie_sel_page(pcie, OFFSET_TO_PAGE_IDX(off));
	return pcie->csr_axi_slave_base + OFFSET_TO_PAGE_ADDR(off);
}
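/*
 * Worked example (illustrative, not from the source): accessing
 * PAB_PEX_AMAP_CTRL(0) at offset 0x4ba0, which is above PAGED_ADDR_BNDRY:
 *
 *   OFFSET_TO_PAGE_IDX(0x4ba0)  = (0x4ba0 >> 10) & 0x3f = 0x12
 *   OFFSET_TO_PAGE_ADDR(0x4ba0) = (0x4ba0 & 0x3ff) | 0xc00 = 0xfa0
 *
 * so page 0x12 is programmed into the pg_sel field of PAB_CTRL and the
 * access is issued at csr_axi_slave_base + 0xfa0.
 */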

static int mobiveil_pcie_read(void __iomem *addr, int size, u32 *val)
{
	if ((uintptr_t)addr & (size - 1)) {
		*val = 0;
		return PCIBIOS_BAD_REGISTER_NUMBER;
	}

	switch (size) {
	case 4:
		*val = readl(addr);
		break;
	case 2:
		*val = readw(addr);
		break;
	case 1:
		*val = readb(addr);
		break;
	default:
		*val = 0;
		return PCIBIOS_BAD_REGISTER_NUMBER;
	}

	return PCIBIOS_SUCCESSFUL;
}

static int mobiveil_pcie_write(void __iomem *addr, int size, u32 val)
{
	if ((uintptr_t)addr & (size - 1))
		return PCIBIOS_BAD_REGISTER_NUMBER;

	switch (size) {
	case 4:
		writel(val, addr);
		break;
	case 2:
		writew(val, addr);
		break;
	case 1:
		writeb(val, addr);
		break;
	default:
		return PCIBIOS_BAD_REGISTER_NUMBER;
	}

	return PCIBIOS_SUCCESSFUL;
}
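/*
 * Illustrative note: the (addr & (size - 1)) checks above reject unaligned
 * accesses -- e.g. a 2-byte read at an odd offset, or a 4-byte read at an
 * offset that is not a multiple of 4 -- returning
 * PCIBIOS_BAD_REGISTER_NUMBER instead of issuing a misaligned MMIO access.
 */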

u32 mobiveil_csr_read(struct mobiveil_pcie *pcie, u32 off, size_t size)
{
	void __iomem *addr;
	u32 val;
	int ret;

	addr = mobiveil_pcie_comp_addr(pcie, off);

	ret = mobiveil_pcie_read(addr, size, &val);
	if (ret)
		dev_err(&pcie->pdev->dev, "read CSR address failed\n");

	return val;
}

void mobiveil_csr_write(struct mobiveil_pcie *pcie, u32 val, u32 off,
			size_t size)
{
	void __iomem *addr;
	int ret;

	addr = mobiveil_pcie_comp_addr(pcie, off);

	ret = mobiveil_pcie_write(addr, size, val);
	if (ret)
		dev_err(&pcie->pdev->dev, "write CSR address failed\n");
}

bool mobiveil_pcie_link_up(struct mobiveil_pcie *pcie)
{
	if (pcie->ops->link_up)
		return pcie->ops->link_up(pcie);

	return (mobiveil_csr_readl(pcie, LTSSM_STATUS) &
		LTSSM_STATUS_L0_MASK) == LTSSM_STATUS_L0;
}

void program_ib_windows(struct mobiveil_pcie *pcie, int win_num,
			u64 cpu_addr, u64 pci_addr, u32 type, u64 size)
{
	u32 value;
	u64 size64 = ~(size - 1);

	if (win_num >= pcie->ppio_wins) {
		dev_err(&pcie->pdev->dev,
			"ERROR: max inbound windows reached !\n");
		return;
	}

	value = mobiveil_csr_readl(pcie, PAB_PEX_AMAP_CTRL(win_num));
	value &= ~(AMAP_CTRL_TYPE_MASK << AMAP_CTRL_TYPE_SHIFT | WIN_SIZE_MASK);
	value |= type << AMAP_CTRL_TYPE_SHIFT | 1 << AMAP_CTRL_EN_SHIFT |
		 (lower_32_bits(size64) & WIN_SIZE_MASK);
	mobiveil_csr_writel(pcie, value, PAB_PEX_AMAP_CTRL(win_num));

	mobiveil_csr_writel(pcie, upper_32_bits(size64),
			    PAB_EXT_PEX_AMAP_SIZEN(win_num));

	mobiveil_csr_writel(pcie, lower_32_bits(cpu_addr),
			    PAB_PEX_AMAP_AXI_WIN(win_num));
	mobiveil_csr_writel(pcie, upper_32_bits(cpu_addr),
			    PAB_EXT_PEX_AMAP_AXI_WIN(win_num));

	mobiveil_csr_writel(pcie, lower_32_bits(pci_addr),
			    PAB_PEX_AMAP_PEX_WIN_L(win_num));
	mobiveil_csr_writel(pcie, upper_32_bits(pci_addr),
			    PAB_PEX_AMAP_PEX_WIN_H(win_num));

	pcie->ib_wins_configured++;
}
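/*
 * Illustrative example (assumes size is a power of two): the window size is
 * encoded as a mask via size64 = ~(size - 1). For a 4 KB window,
 * ~(0x1000 - 1) = 0xfffffffffffff000, so the low 32 bits ANDed with
 * WIN_SIZE_MASK (0xfffffc00) land in the CTRL register and the upper 32
 * bits go to the extended SIZE register.
 */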

/*
 * routine to program the outbound windows
 */
void program_ob_windows(struct mobiveil_pcie *pcie, int win_num,
			u64 cpu_addr, u64 pci_addr, u32 type, u64 size)
{
	u32 value;
	u64 size64 = ~(size - 1);

	if (win_num >= pcie->apio_wins) {
		dev_err(&pcie->pdev->dev,
			"ERROR: max outbound windows reached !\n");
		return;
	}

	/*
	 * program Enable Bit to 1, Type Bit to (00) base 2, AXI Window Size Bit
	 * to 4 KB in PAB_AXI_AMAP_CTRL register
	 */
	value = mobiveil_csr_readl(pcie, PAB_AXI_AMAP_CTRL(win_num));
	value &= ~(WIN_TYPE_MASK << WIN_TYPE_SHIFT | WIN_SIZE_MASK);
	value |= 1 << WIN_ENABLE_SHIFT | type << WIN_TYPE_SHIFT |
		 (lower_32_bits(size64) & WIN_SIZE_MASK);
	mobiveil_csr_writel(pcie, value, PAB_AXI_AMAP_CTRL(win_num));

	mobiveil_csr_writel(pcie, upper_32_bits(size64),
			    PAB_EXT_AXI_AMAP_SIZE(win_num));

	/*
	 * program AXI window base with appropriate value in
	 * PAB_AXI_AMAP_AXI_WIN0 register
	 */
	mobiveil_csr_writel(pcie,
			    lower_32_bits(cpu_addr) & (~AXI_WINDOW_ALIGN_MASK),
			    PAB_AXI_AMAP_AXI_WIN(win_num));
	mobiveil_csr_writel(pcie, upper_32_bits(cpu_addr),
			    PAB_EXT_AXI_AMAP_AXI_WIN(win_num));

	mobiveil_csr_writel(pcie, lower_32_bits(pci_addr),
			    PAB_AXI_AMAP_PEX_WIN_L(win_num));
	mobiveil_csr_writel(pcie, upper_32_bits(pci_addr),
			    PAB_AXI_AMAP_PEX_WIN_H(win_num));

	pcie->ob_wins_configured++;
}

int mobiveil_bringup_link(struct mobiveil_pcie *pcie)
{
	int retries;

	/* check if the link is up or not */
	for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
		if (mobiveil_pcie_link_up(pcie))
			return 0;

		usleep_range(LINK_WAIT_MIN, LINK_WAIT_MAX);
	}

	dev_err(&pcie->pdev->dev, "link never came up\n");

	return -ETIMEDOUT;
}
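/*
 * With LINK_WAIT_MIN/LINK_WAIT_MAX of 90000/100000 us and
 * LINK_WAIT_MAX_RETRIES of 10, this poll gives the link roughly one second
 * to train before giving up with -ETIMEDOUT.
 */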

@@ -0,0 +1,226 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * PCIe host controller driver for Mobiveil PCIe Host controller
 *
 * Copyright (c) 2018 Mobiveil Inc.
 * Copyright 2019 NXP
 *
 * Author: Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>
 *	   Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
 */

#ifndef _PCIE_MOBIVEIL_H
#define _PCIE_MOBIVEIL_H

#include <linux/pci.h>
#include <linux/irq.h>
#include <linux/msi.h>
#include "../../pci.h"

/* register offsets and bit positions */

/*
 * Translation tables are grouped into windows; each window's registers
 * are grouped into blocks of 4 or 16 registers each.
 */
#define PAB_REG_BLOCK_SIZE 16
#define PAB_EXT_REG_BLOCK_SIZE 4

#define PAB_REG_ADDR(offset, win) \
	(offset + (win * PAB_REG_BLOCK_SIZE))
#define PAB_EXT_REG_ADDR(offset, win) \
	(offset + (win * PAB_EXT_REG_BLOCK_SIZE))

#define LTSSM_STATUS 0x0404
#define LTSSM_STATUS_L0_MASK 0x3f
#define LTSSM_STATUS_L0 0x2d

#define PAB_CTRL 0x0808
#define AMBA_PIO_ENABLE_SHIFT 0
#define PEX_PIO_ENABLE_SHIFT 1
#define PAGE_SEL_SHIFT 13
#define PAGE_SEL_MASK 0x3f
#define PAGE_LO_MASK 0x3ff
#define PAGE_SEL_OFFSET_SHIFT 10

#define PAB_ACTIVITY_STAT 0x81c

#define PAB_AXI_PIO_CTRL 0x0840
#define APIO_EN_MASK 0xf

#define PAB_PEX_PIO_CTRL 0x08c0
#define PIO_ENABLE_SHIFT 0

#define PAB_INTP_AMBA_MISC_ENB 0x0b0c
#define PAB_INTP_AMBA_MISC_STAT 0x0b1c
#define PAB_INTP_RESET BIT(1)
#define PAB_INTP_MSI BIT(3)
#define PAB_INTP_INTA BIT(5)
#define PAB_INTP_INTB BIT(6)
#define PAB_INTP_INTC BIT(7)
#define PAB_INTP_INTD BIT(8)
#define PAB_INTP_PCIE_UE BIT(9)
#define PAB_INTP_IE_PMREDI BIT(29)
#define PAB_INTP_IE_EC BIT(30)
#define PAB_INTP_MSI_MASK PAB_INTP_MSI
#define PAB_INTP_INTX_MASK (PAB_INTP_INTA | PAB_INTP_INTB |\
			PAB_INTP_INTC | PAB_INTP_INTD)

#define PAB_AXI_AMAP_CTRL(win) PAB_REG_ADDR(0x0ba0, win)
#define WIN_ENABLE_SHIFT 0
#define WIN_TYPE_SHIFT 1
#define WIN_TYPE_MASK 0x3
#define WIN_SIZE_MASK 0xfffffc00

#define PAB_EXT_AXI_AMAP_SIZE(win) PAB_EXT_REG_ADDR(0xbaf0, win)

#define PAB_EXT_AXI_AMAP_AXI_WIN(win) PAB_EXT_REG_ADDR(0x80a0, win)
#define PAB_AXI_AMAP_AXI_WIN(win) PAB_REG_ADDR(0x0ba4, win)
#define AXI_WINDOW_ALIGN_MASK 3

#define PAB_AXI_AMAP_PEX_WIN_L(win) PAB_REG_ADDR(0x0ba8, win)
#define PAB_BUS_SHIFT 24
#define PAB_DEVICE_SHIFT 19
#define PAB_FUNCTION_SHIFT 16

#define PAB_AXI_AMAP_PEX_WIN_H(win) PAB_REG_ADDR(0x0bac, win)
#define PAB_INTP_AXI_PIO_CLASS 0x474

#define PAB_PEX_AMAP_CTRL(win) PAB_REG_ADDR(0x4ba0, win)
#define AMAP_CTRL_EN_SHIFT 0
#define AMAP_CTRL_TYPE_SHIFT 1
#define AMAP_CTRL_TYPE_MASK 3

#define PAB_EXT_PEX_AMAP_SIZEN(win) PAB_EXT_REG_ADDR(0xbef0, win)
#define PAB_EXT_PEX_AMAP_AXI_WIN(win) PAB_EXT_REG_ADDR(0xb4a0, win)
#define PAB_PEX_AMAP_AXI_WIN(win) PAB_REG_ADDR(0x4ba4, win)
#define PAB_PEX_AMAP_PEX_WIN_L(win) PAB_REG_ADDR(0x4ba8, win)
#define PAB_PEX_AMAP_PEX_WIN_H(win) PAB_REG_ADDR(0x4bac, win)

/* starting offset of INTX bits in status register */
#define PAB_INTX_START 5

/* supported number of MSI interrupts */
#define PCI_NUM_MSI 16

/* MSI registers */
#define MSI_BASE_LO_OFFSET 0x04
#define MSI_BASE_HI_OFFSET 0x08
#define MSI_SIZE_OFFSET 0x0c
#define MSI_ENABLE_OFFSET 0x14
#define MSI_STATUS_OFFSET 0x18
#define MSI_DATA_OFFSET 0x20
#define MSI_ADDR_L_OFFSET 0x24
#define MSI_ADDR_H_OFFSET 0x28

/* outbound and inbound window definitions */
#define WIN_NUM_0 0
#define WIN_NUM_1 1
#define CFG_WINDOW_TYPE 0
#define IO_WINDOW_TYPE 1
#define MEM_WINDOW_TYPE 2
#define IB_WIN_SIZE ((u64)256 * 1024 * 1024 * 1024)
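/* (u64)256 * 1024 * 1024 * 1024 == 0x4000000000, i.e. a 256 GiB inbound window */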
#define MAX_PIO_WINDOWS 8

/* Parameters for the waiting for link up routine */
#define LINK_WAIT_MAX_RETRIES 10
#define LINK_WAIT_MIN 90000
#define LINK_WAIT_MAX 100000

#define PAGED_ADDR_BNDRY 0xc00
#define OFFSET_TO_PAGE_ADDR(off) \
	((off & PAGE_LO_MASK) | PAGED_ADDR_BNDRY)
#define OFFSET_TO_PAGE_IDX(off) \
	((off >> PAGE_SEL_OFFSET_SHIFT) & PAGE_SEL_MASK)

struct mobiveil_msi {			/* MSI information */
	struct mutex lock;		/* protect bitmap variable */
	struct irq_domain *msi_domain;
	struct irq_domain *dev_domain;
	phys_addr_t msi_pages_phys;
	int num_of_vectors;
	DECLARE_BITMAP(msi_irq_in_use, PCI_NUM_MSI);
};

struct mobiveil_pcie;

struct mobiveil_rp_ops {
	int (*interrupt_init)(struct mobiveil_pcie *pcie);
};

struct mobiveil_root_port {
	char root_bus_nr;
	void __iomem *config_axi_slave_base;	/* endpoint config base */
	struct resource *ob_io_res;
	struct mobiveil_rp_ops *ops;
	int irq;
	raw_spinlock_t intx_mask_lock;
	struct irq_domain *intx_domain;
	struct mobiveil_msi msi;
	struct pci_host_bridge *bridge;
};

struct mobiveil_pab_ops {
	int (*link_up)(struct mobiveil_pcie *pcie);
};

struct mobiveil_pcie {
	struct platform_device *pdev;
	void __iomem *csr_axi_slave_base;	/* root port config base */
	void __iomem *apb_csr_base;	/* MSI register base */
	phys_addr_t pcie_reg_base;	/* Physical PCIe Controller Base */
	int apio_wins;
	int ppio_wins;
	int ob_wins_configured;		/* configured outbound windows */
	int ib_wins_configured;		/* configured inbound windows */
	const struct mobiveil_pab_ops *ops;
	struct mobiveil_root_port rp;
};

int mobiveil_pcie_host_probe(struct mobiveil_pcie *pcie);
int mobiveil_host_init(struct mobiveil_pcie *pcie, bool reinit);
bool mobiveil_pcie_link_up(struct mobiveil_pcie *pcie);
int mobiveil_bringup_link(struct mobiveil_pcie *pcie);
void program_ob_windows(struct mobiveil_pcie *pcie, int win_num, u64 cpu_addr,
			u64 pci_addr, u32 type, u64 size);
void program_ib_windows(struct mobiveil_pcie *pcie, int win_num, u64 cpu_addr,
			u64 pci_addr, u32 type, u64 size);
u32 mobiveil_csr_read(struct mobiveil_pcie *pcie, u32 off, size_t size);
void mobiveil_csr_write(struct mobiveil_pcie *pcie, u32 val, u32 off,
			size_t size);

static inline u32 mobiveil_csr_readl(struct mobiveil_pcie *pcie, u32 off)
{
	return mobiveil_csr_read(pcie, off, 0x4);
}

static inline u16 mobiveil_csr_readw(struct mobiveil_pcie *pcie, u32 off)
{
	return mobiveil_csr_read(pcie, off, 0x2);
}

static inline u8 mobiveil_csr_readb(struct mobiveil_pcie *pcie, u32 off)
{
	return mobiveil_csr_read(pcie, off, 0x1);
}

static inline void mobiveil_csr_writel(struct mobiveil_pcie *pcie, u32 val,
				       u32 off)
{
	mobiveil_csr_write(pcie, val, off, 0x4);
}

static inline void mobiveil_csr_writew(struct mobiveil_pcie *pcie, u16 val,
				       u32 off)
{
	mobiveil_csr_write(pcie, val, off, 0x2);
}

static inline void mobiveil_csr_writeb(struct mobiveil_pcie *pcie, u8 val,
				       u32 off)
{
	mobiveil_csr_write(pcie, val, off, 0x1);
}

#endif /* _PCIE_MOBIVEIL_H */
@@ -63,6 +63,7 @@
enum pci_protocol_version_t {
	PCI_PROTOCOL_VERSION_1_1 = PCI_MAKE_VERSION(1, 1),	/* Win10 */
	PCI_PROTOCOL_VERSION_1_2 = PCI_MAKE_VERSION(1, 2),	/* RS1 */
	PCI_PROTOCOL_VERSION_1_3 = PCI_MAKE_VERSION(1, 3),	/* Vibranium */
};

#define CPU_AFFINITY_ALL -1ULL

@@ -72,6 +73,7 @@ enum pci_protocol_version_t {
 * first.
 */
static enum pci_protocol_version_t pci_protocol_versions[] = {
	PCI_PROTOCOL_VERSION_1_3,
	PCI_PROTOCOL_VERSION_1_2,
	PCI_PROTOCOL_VERSION_1_1,
};

@@ -119,6 +121,7 @@ enum pci_message_type {
	PCI_RESOURCES_ASSIGNED2 = PCI_MESSAGE_BASE + 0x16,
	PCI_CREATE_INTERRUPT_MESSAGE2 = PCI_MESSAGE_BASE + 0x17,
	PCI_DELETE_INTERRUPT_MESSAGE2 = PCI_MESSAGE_BASE + 0x18, /* unused */
	PCI_BUS_RELATIONS2 = PCI_MESSAGE_BASE + 0x19,
	PCI_MESSAGE_MAXIMUM
};

@@ -164,6 +167,26 @@ struct pci_function_description {
	u32 ser;	/* serial number */
} __packed;

enum pci_device_description_flags {
	HV_PCI_DEVICE_FLAG_NONE = 0x0,
	HV_PCI_DEVICE_FLAG_NUMA_AFFINITY = 0x1,
};

struct pci_function_description2 {
	u16 v_id;	/* vendor ID */
	u16 d_id;	/* device ID */
	u8 rev;
	u8 prog_intf;
	u8 subclass;
	u8 base_class;
	u32 subsystem_id;
	union win_slot_encoding win_slot;
	u32 ser;	/* serial number */
	u32 flags;
	u16 virtual_numa_node;
	u16 reserved;
} __packed;

/**
 * struct hv_msi_desc
 * @vector:	IDT entry

@@ -260,7 +283,7 @@ struct pci_packet {
				int resp_packet_size);
	void *compl_ctxt;

	struct pci_message message[0];
	struct pci_message message[];
};

/*

@@ -296,7 +319,13 @@ struct pci_bus_d0_entry {
struct pci_bus_relations {
	struct pci_incoming_message incoming;
	u32 device_count;
	struct pci_function_description func[0];
	struct pci_function_description func[];
} __packed;

struct pci_bus_relations2 {
	struct pci_incoming_message incoming;
	u32 device_count;
	struct pci_function_description2 func[];
} __packed;

struct pci_q_res_req_response {

@@ -406,42 +435,6 @@ struct pci_eject_response {

static int pci_ring_size = (4 * PAGE_SIZE);

/*
 * Definitions for the interrupt steering hypercall.
 */
#define HV_PARTITION_ID_SELF ((u64)-1)
#define HVCALL_RETARGET_INTERRUPT 0x7e

struct hv_interrupt_entry {
	u32 source;	/* 1 for MSI(-X) */
	u32 reserved1;
	u32 address;
	u32 data;
};

/*
 * flags for hv_device_interrupt_target.flags
 */
#define HV_DEVICE_INTERRUPT_TARGET_MULTICAST 1
#define HV_DEVICE_INTERRUPT_TARGET_PROCESSOR_SET 2

struct hv_device_interrupt_target {
	u32 vector;
	u32 flags;
	union {
		u64 vp_mask;
		struct hv_vpset vp_set;
	};
};

struct retarget_msi_interrupt {
	u64 partition_id;	/* use "self" */
	u64 device_id;
	struct hv_interrupt_entry int_entry;
	u64 reserved2;
	struct hv_device_interrupt_target int_target;
} __packed __aligned(8);

/*
 * Driver specific state.
 */

@@ -488,7 +481,7 @@ struct hv_pcibus_device {
	struct workqueue_struct *wq;

	/* hypercall arg, must not cross page boundary */
	struct retarget_msi_interrupt retarget_msi_interrupt_params;
	struct hv_retarget_device_interrupt retarget_msi_interrupt_params;

	/*
	 * Don't put anything here: retarget_msi_interrupt_params must be last

@@ -505,10 +498,24 @@ struct hv_dr_work {
	struct hv_pcibus_device *bus;
};

struct hv_pcidev_description {
	u16 v_id;	/* vendor ID */
	u16 d_id;	/* device ID */
	u8 rev;
	u8 prog_intf;
	u8 subclass;
	u8 base_class;
	u32 subsystem_id;
	union win_slot_encoding win_slot;
	u32 ser;	/* serial number */
	u32 flags;
	u16 virtual_numa_node;
};

struct hv_dr_state {
	struct list_head list_entry;
	u32 device_count;
	struct pci_function_description func[0];
	struct hv_pcidev_description func[];
};

enum hv_pcichild_state {

@@ -525,7 +532,7 @@ struct hv_pci_dev {
	refcount_t refs;
	enum hv_pcichild_state state;
	struct pci_slot *pci_slot;
	struct pci_function_description desc;
	struct hv_pcidev_description desc;
	bool reported_missing;
	struct hv_pcibus_device *hbus;
	struct work_struct wrk;

@@ -1184,7 +1191,7 @@ static void hv_irq_unmask(struct irq_data *data)
{
	struct msi_desc *msi_desc = irq_data_get_msi_desc(data);
	struct irq_cfg *cfg = irqd_cfg(data);
	struct retarget_msi_interrupt *params;
	struct hv_retarget_device_interrupt *params;
	struct hv_pcibus_device *hbus;
	struct cpumask *dest;
	cpumask_var_t tmp;

@@ -1206,8 +1213,7 @@ static void hv_irq_unmask(struct irq_data *data)
	memset(params, 0, sizeof(*params));
	params->partition_id = HV_PARTITION_ID_SELF;
	params->int_entry.source = 1; /* MSI(-X) */
	params->int_entry.address = msi_desc->msg.address_lo;
	params->int_entry.data = msi_desc->msg.data;
	hv_set_msi_entry_from_desc(&params->int_entry.msi_entry, msi_desc);
	params->device_id = (hbus->hdev->dev_instance.b[5] << 24) |
			    (hbus->hdev->dev_instance.b[4] << 16) |
			    (hbus->hdev->dev_instance.b[7] << 8) |

@@ -1401,6 +1407,7 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
		break;

	case PCI_PROTOCOL_VERSION_1_2:
	case PCI_PROTOCOL_VERSION_1_3:
		size = hv_compose_msi_req_v2(&ctxt.int_pkts.v2,
					     dest,
					     hpdev->desc.win_slot.slot,

@@ -1799,6 +1806,27 @@ static void hv_pci_remove_slots(struct hv_pcibus_device *hbus)
	}
}

/*
 * Set NUMA node for the devices on the bus
 */
static void hv_pci_assign_numa_node(struct hv_pcibus_device *hbus)
{
	struct pci_dev *dev;
	struct pci_bus *bus = hbus->pci_bus;
	struct hv_pci_dev *hv_dev;

	list_for_each_entry(dev, &bus->devices, bus_list) {
		hv_dev = get_pcichild_wslot(hbus, devfn_to_wslot(dev->devfn));
		if (!hv_dev)
			continue;

		if (hv_dev->desc.flags & HV_PCI_DEVICE_FLAG_NUMA_AFFINITY)
			set_dev_node(&dev->dev, hv_dev->desc.virtual_numa_node);

		put_pcichild(hv_dev);
	}
}
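/*
 * Descriptive note: this only takes effect for devices whose v2 function
 * description carried HV_PCI_DEVICE_FLAG_NUMA_AFFINITY; for v1 hosts the
 * flag is never set, so the device's NUMA node is left untouched.
 */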

/**
 * create_root_hv_pci_bus() - Expose a new root PCI bus
 * @hbus:	Root PCI bus, as understood by this driver

@@ -1821,6 +1849,7 @@ static int create_root_hv_pci_bus(struct hv_pcibus_device *hbus)

	pci_lock_rescan_remove();
	pci_scan_child_bus(hbus->pci_bus);
	hv_pci_assign_numa_node(hbus);
	pci_bus_assign_resources(hbus->pci_bus);
	hv_pci_assign_slots(hbus);
	pci_bus_add_devices(hbus->pci_bus);

@@ -1877,7 +1906,7 @@ static void q_resource_requirements(void *context, struct pci_response *resp,
 * Return: Pointer to the new tracking struct
 */
static struct hv_pci_dev *new_pcichild_device(struct hv_pcibus_device *hbus,
		struct pci_function_description *desc)
		struct hv_pcidev_description *desc)
{
	struct hv_pci_dev *hpdev;
	struct pci_child_message *res_req;

@@ -1988,7 +2017,7 @@ static void pci_devices_present_work(struct work_struct *work)
{
	u32 child_no;
	bool found;
	struct pci_function_description *new_desc;
	struct hv_pcidev_description *new_desc;
	struct hv_pci_dev *hpdev;
	struct hv_pcibus_device *hbus;
	struct list_head removed;

@@ -2089,6 +2118,7 @@ static void pci_devices_present_work(struct work_struct *work)
	 */
	pci_lock_rescan_remove();
	pci_scan_child_bus(hbus->pci_bus);
	hv_pci_assign_numa_node(hbus);
	hv_pci_assign_slots(hbus);
	pci_unlock_rescan_remove();
	break;

@@ -2107,17 +2137,15 @@ static void pci_devices_present_work(struct work_struct *work)
}

/**
 * hv_pci_devices_present() - Handles list of new children
 * hv_pci_start_relations_work() - Queue work to start device discovery
 * @hbus:	Root PCI bus, as understood by this driver
 * @relations:	Packet from host listing children
 * @dr:		The list of children returned from host
 *
 * This function is invoked whenever a new list of devices for
 * this bus appears.
 * Return: 0 on success, -errno on failure
 */
static void hv_pci_devices_present(struct hv_pcibus_device *hbus,
				   struct pci_bus_relations *relations)
static int hv_pci_start_relations_work(struct hv_pcibus_device *hbus,
				       struct hv_dr_state *dr)
{
	struct hv_dr_state *dr;
	struct hv_dr_work *dr_wrk;
	unsigned long flags;
	bool pending_dr;

@@ -2125,29 +2153,15 @@ static void hv_pci_devices_present(struct hv_pcibus_device *hbus,
	if (hbus->state == hv_pcibus_removing) {
		dev_info(&hbus->hdev->device,
			 "PCI VMBus BUS_RELATIONS: ignored\n");
		return;
		return -ENOENT;
	}

	dr_wrk = kzalloc(sizeof(*dr_wrk), GFP_NOWAIT);
	if (!dr_wrk)
		return;

	dr = kzalloc(offsetof(struct hv_dr_state, func) +
		     (sizeof(struct pci_function_description) *
		      (relations->device_count)), GFP_NOWAIT);
	if (!dr) {
		kfree(dr_wrk);
		return;
	}
		return -ENOMEM;

	INIT_WORK(&dr_wrk->wrk, pci_devices_present_work);
	dr_wrk->bus = hbus;
	dr->device_count = relations->device_count;
	if (dr->device_count != 0) {
		memcpy(dr->func, relations->func,
		       sizeof(struct pci_function_description) *
		       dr->device_count);
	}

	spin_lock_irqsave(&hbus->device_list_lock, flags);
	/*

@@ -2165,6 +2179,87 @@ static void hv_pci_devices_present(struct hv_pcibus_device *hbus,
		get_hvpcibus(hbus);
		queue_work(hbus->wq, &dr_wrk->wrk);
	}

	return 0;
}

/**
 * hv_pci_devices_present() - Handle list of new children
 * @hbus:	Root PCI bus, as understood by this driver
 * @relations:	Packet from host listing children
 *
 * Process a new list of devices on the bus. The list of devices is
 * discovered by VSP and sent to us via VSP message PCI_BUS_RELATIONS,
 * whenever a new list of devices for this bus appears.
 */
static void hv_pci_devices_present(struct hv_pcibus_device *hbus,
				   struct pci_bus_relations *relations)
{
	struct hv_dr_state *dr;
	int i;

	dr = kzalloc(offsetof(struct hv_dr_state, func) +
		     (sizeof(struct hv_pcidev_description) *
		      (relations->device_count)), GFP_NOWAIT);

	if (!dr)
		return;

	dr->device_count = relations->device_count;
	for (i = 0; i < dr->device_count; i++) {
		dr->func[i].v_id = relations->func[i].v_id;
		dr->func[i].d_id = relations->func[i].d_id;
		dr->func[i].rev = relations->func[i].rev;
		dr->func[i].prog_intf = relations->func[i].prog_intf;
		dr->func[i].subclass = relations->func[i].subclass;
		dr->func[i].base_class = relations->func[i].base_class;
		dr->func[i].subsystem_id = relations->func[i].subsystem_id;
		dr->func[i].win_slot = relations->func[i].win_slot;
		dr->func[i].ser = relations->func[i].ser;
	}

	if (hv_pci_start_relations_work(hbus, dr))
		kfree(dr);
}
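/*
 * Descriptive note: both BUS_RELATIONS message versions are normalized into
 * the internal struct hv_pcidev_description here, so
 * pci_devices_present_work() only ever sees one format regardless of the
 * negotiated protocol version.
 */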

/**
 * hv_pci_devices_present2() - Handle list of new children
 * @hbus:	Root PCI bus, as understood by this driver
 * @relations:	Packet from host listing children
 *
 * This function is the v2 version of hv_pci_devices_present()
 */
static void hv_pci_devices_present2(struct hv_pcibus_device *hbus,
				    struct pci_bus_relations2 *relations)
{
	struct hv_dr_state *dr;
	int i;

	dr = kzalloc(offsetof(struct hv_dr_state, func) +
		     (sizeof(struct hv_pcidev_description) *
		      (relations->device_count)), GFP_NOWAIT);

	if (!dr)
		return;

	dr->device_count = relations->device_count;
	for (i = 0; i < dr->device_count; i++) {
		dr->func[i].v_id = relations->func[i].v_id;
		dr->func[i].d_id = relations->func[i].d_id;
		dr->func[i].rev = relations->func[i].rev;
		dr->func[i].prog_intf = relations->func[i].prog_intf;
		dr->func[i].subclass = relations->func[i].subclass;
		dr->func[i].base_class = relations->func[i].base_class;
		dr->func[i].subsystem_id = relations->func[i].subsystem_id;
		dr->func[i].win_slot = relations->func[i].win_slot;
		dr->func[i].ser = relations->func[i].ser;
		dr->func[i].flags = relations->func[i].flags;
		dr->func[i].virtual_numa_node =
			relations->func[i].virtual_numa_node;
	}

	if (hv_pci_start_relations_work(hbus, dr))
		kfree(dr);
}
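/*
 * Descriptive note: the v2 copy loop differs from v1 only in the two extra
 * fields at the end, flags and virtual_numa_node, which
 * hv_pci_assign_numa_node() later consumes.
 */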

/**

@@ -2280,6 +2375,7 @@ static void hv_pci_onchannelcallback(void *context)
	struct pci_response *response;
	struct pci_incoming_message *new_message;
	struct pci_bus_relations *bus_rel;
	struct pci_bus_relations2 *bus_rel2;
	struct pci_dev_inval_block *inval;
	struct pci_dev_incoming *dev_message;
	struct hv_pci_dev *hpdev;

@@ -2347,6 +2443,21 @@ static void hv_pci_onchannelcallback(void *context)
			hv_pci_devices_present(hbus, bus_rel);
			break;

		case PCI_BUS_RELATIONS2:

			bus_rel2 = (struct pci_bus_relations2 *)buffer;
			if (bytes_recvd <
			    offsetof(struct pci_bus_relations2, func) +
			    (sizeof(struct pci_function_description2) *
			     (bus_rel2->device_count))) {
				dev_err(&hbus->hdev->device,
					"bus relations v2 too small\n");
				break;
			}

			hv_pci_devices_present2(hbus, bus_rel2);
			break;

		case PCI_EJECT:

			dev_message = (struct pci_dev_incoming *)buffer;

@@ -2922,7 +3033,7 @@ static int hv_pci_probe(struct hv_device *hdev,
	 * positive by using kmemleak_alloc() and kmemleak_free() to ask
	 * kmemleak to track and scan the hbus buffer.
	 */
	hbus = (struct hv_pcibus_device *)kzalloc(HV_HYP_PAGE_SIZE, GFP_KERNEL);
	hbus = kzalloc(HV_HYP_PAGE_SIZE, GFP_KERNEL);
	if (!hbus)
		return -ENOMEM;
	hbus->state = hv_pcibus_init;

@@ -3058,7 +3169,7 @@ destroy_wq:
free_dom:
	hv_put_dom_num(hbus->sysdata.domain);
free_bus:
	free_page((unsigned long)hbus);
	kfree(hbus);
	return ret;
}

@@ -3069,7 +3180,7 @@ static int hv_pci_bus_exit(struct hv_device *hdev, bool hibernating)
		struct pci_packet teardown_packet;
		u8 buffer[sizeof(struct pci_message)];
	} pkt;
	struct pci_bus_relations relations;
	struct hv_dr_state *dr;
	struct hv_pci_compl comp_pkt;
	int ret;

@@ -3082,8 +3193,9 @@ static int hv_pci_bus_exit(struct hv_device *hdev, bool hibernating)

	if (!hibernating) {
		/* Delete any children which might still exist. */
		memset(&relations, 0, sizeof(relations));
		hv_pci_devices_present(hbus, &relations);
		dr = kzalloc(sizeof(*dr), GFP_KERNEL);
		if (dr && hv_pci_start_relations_work(hbus, dr))
			kfree(dr);
	}

	ret = hv_send_resources_released(hdev);

@@ -355,16 +355,6 @@ struct tegra_pcie {
	int irq;

	struct resource cs;
	struct resource io;
	struct resource pio;
	struct resource mem;
	struct resource prefetch;
	struct resource busn;

	struct {
		resource_size_t mem;
		resource_size_t io;
	} offset;

	struct clk *pex_clk;
	struct clk *afi_clk;

@@ -797,38 +787,6 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, 0x0bf1, tegra_pcie_relax_enable);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, 0x0e1c, tegra_pcie_relax_enable);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, 0x0e1d, tegra_pcie_relax_enable);

static int tegra_pcie_request_resources(struct tegra_pcie *pcie)
{
	struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
	struct list_head *windows = &host->windows;
	struct device *dev = pcie->dev;
	int err;

	pci_add_resource_offset(windows, &pcie->pio, pcie->offset.io);
	pci_add_resource_offset(windows, &pcie->mem, pcie->offset.mem);
	pci_add_resource_offset(windows, &pcie->prefetch, pcie->offset.mem);
	pci_add_resource(windows, &pcie->busn);

	err = devm_request_pci_bus_resources(dev, windows);
	if (err < 0) {
		pci_free_resource_list(windows);
		return err;
	}

	pci_remap_iospace(&pcie->pio, pcie->io.start);

	return 0;
}

static void tegra_pcie_free_resources(struct tegra_pcie *pcie)
{
	struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
	struct list_head *windows = &host->windows;

	pci_unmap_iospace(&pcie->pio);
	pci_free_resource_list(windows);
}

static int tegra_pcie_map_irq(const struct pci_dev *pdev, u8 slot, u8 pin)
{
	struct tegra_pcie *pcie = pdev->bus->sysdata;

@@ -909,36 +867,49 @@ static irqreturn_t tegra_pcie_isr(int irq, void *arg)
 */
static void tegra_pcie_setup_translations(struct tegra_pcie *pcie)
{
	u32 fpci_bar, size, axi_address;
	u32 size;
	struct resource_entry *entry;
	struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);

	/* Bar 0: type 1 extended configuration space */
	size = resource_size(&pcie->cs);
	afi_writel(pcie, pcie->cs.start, AFI_AXI_BAR0_START);
	afi_writel(pcie, size >> 12, AFI_AXI_BAR0_SZ);

	/* Bar 1: downstream IO bar */
	fpci_bar = 0xfdfc0000;
	size = resource_size(&pcie->io);
	axi_address = pcie->io.start;
	afi_writel(pcie, axi_address, AFI_AXI_BAR1_START);
	afi_writel(pcie, size >> 12, AFI_AXI_BAR1_SZ);
	afi_writel(pcie, fpci_bar, AFI_FPCI_BAR1);
	resource_list_for_each_entry(entry, &bridge->windows) {
		u32 fpci_bar, axi_address;
		struct resource *res = entry->res;

	/* Bar 2: prefetchable memory BAR */
	fpci_bar = (((pcie->prefetch.start >> 12) & 0x0fffffff) << 4) | 0x1;
	size = resource_size(&pcie->prefetch);
	axi_address = pcie->prefetch.start;
	afi_writel(pcie, axi_address, AFI_AXI_BAR2_START);
	afi_writel(pcie, size >> 12, AFI_AXI_BAR2_SZ);
	afi_writel(pcie, fpci_bar, AFI_FPCI_BAR2);
		size = resource_size(res);

	/* Bar 3: non prefetchable memory BAR */
	fpci_bar = (((pcie->mem.start >> 12) & 0x0fffffff) << 4) | 0x1;
	size = resource_size(&pcie->mem);
	axi_address = pcie->mem.start;
	afi_writel(pcie, axi_address, AFI_AXI_BAR3_START);
	afi_writel(pcie, size >> 12, AFI_AXI_BAR3_SZ);
	afi_writel(pcie, fpci_bar, AFI_FPCI_BAR3);
		switch (resource_type(res)) {
		case IORESOURCE_IO:
			/* Bar 1: downstream IO bar */
			fpci_bar = 0xfdfc0000;
			axi_address = pci_pio_to_address(res->start);
			afi_writel(pcie, axi_address, AFI_AXI_BAR1_START);
			afi_writel(pcie, size >> 12, AFI_AXI_BAR1_SZ);
			afi_writel(pcie, fpci_bar, AFI_FPCI_BAR1);
			break;
		case IORESOURCE_MEM:
			fpci_bar = (((res->start >> 12) & 0x0fffffff) << 4) | 0x1;
			axi_address = res->start;

			if (res->flags & IORESOURCE_PREFETCH) {
				/* Bar 2: prefetchable memory BAR */
				afi_writel(pcie, axi_address, AFI_AXI_BAR2_START);
				afi_writel(pcie, size >> 12, AFI_AXI_BAR2_SZ);
				afi_writel(pcie, fpci_bar, AFI_FPCI_BAR2);

			} else {
				/* Bar 3: non prefetchable memory BAR */
				afi_writel(pcie, axi_address, AFI_AXI_BAR3_START);
				afi_writel(pcie, size >> 12, AFI_AXI_BAR3_SZ);
				afi_writel(pcie, fpci_bar, AFI_FPCI_BAR3);
			}
			break;
		}
	}
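	/*
	 * Descriptive note: I/O windows obtained from bridge->windows hold
	 * logical PIO addresses, so pci_pio_to_address() converts res->start
	 * back to the CPU physical address before it is programmed into
	 * AFI_AXI_BAR1_START; memory windows can use res->start directly.
	 */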

	/* NULL out the remaining BARs as they are not used */
	afi_writel(pcie, 0, AFI_AXI_BAR4_START);

@@ -2157,76 +2128,10 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
	struct device *dev = pcie->dev;
	struct device_node *np = dev->of_node, *port;
	const struct tegra_pcie_soc *soc = pcie->soc;
	struct of_pci_range_parser parser;
	struct of_pci_range range;
	u32 lanes = 0, mask = 0;
	unsigned int lane = 0;
	struct resource res;
	int err;

	if (of_pci_range_parser_init(&parser, np)) {
		dev_err(dev, "missing \"ranges\" property\n");
		return -EINVAL;
	}

	for_each_of_pci_range(&parser, &range) {
		err = of_pci_range_to_resource(&range, np, &res);
		if (err < 0)
			return err;

		switch (res.flags & IORESOURCE_TYPE_BITS) {
		case IORESOURCE_IO:
			/* Track the bus -> CPU I/O mapping offset. */
			pcie->offset.io = res.start - range.pci_addr;

			memcpy(&pcie->pio, &res, sizeof(res));
			pcie->pio.name = np->full_name;

			/*
			 * The Tegra PCIe host bridge uses this to program the
			 * mapping of the I/O space to the physical address,
			 * so we override the .start and .end fields here that
			 * of_pci_range_to_resource() converted to I/O space.
			 * We also set the IORESOURCE_MEM type to clarify that
			 * the resource is in the physical memory space.
			 */
			pcie->io.start = range.cpu_addr;
			pcie->io.end = range.cpu_addr + range.size - 1;
			pcie->io.flags = IORESOURCE_MEM;
			pcie->io.name = "I/O";

			memcpy(&res, &pcie->io, sizeof(res));
			break;

		case IORESOURCE_MEM:
			/*
			 * Track the bus -> CPU memory mapping offset. This
			 * assumes that the prefetchable and non-prefetchable
			 * regions will be the last of type IORESOURCE_MEM in
			 * the ranges property.
			 */
			pcie->offset.mem = res.start - range.pci_addr;

			if (res.flags & IORESOURCE_PREFETCH) {
				memcpy(&pcie->prefetch, &res, sizeof(res));
				pcie->prefetch.name = "prefetchable";
			} else {
				memcpy(&pcie->mem, &res, sizeof(res));
				pcie->mem.name = "non-prefetchable";
			}
			break;
		}
	}

	err = of_pci_parse_bus_range(np, &pcie->busn);
	if (err < 0) {
		dev_err(dev, "failed to parse ranges property: %d\n", err);
		pcie->busn.name = np->name;
		pcie->busn.start = 0;
		pcie->busn.end = 0xff;
		pcie->busn.flags = IORESOURCE_BUS;
	}

	/* parse root ports */
	for_each_child_of_node(np, port) {
		struct tegra_pcie_port *rp;

@@ -2766,6 +2671,7 @@ static int tegra_pcie_probe(struct platform_device *pdev)
	struct pci_host_bridge *host;
	struct tegra_pcie *pcie;
	struct pci_bus *child;
	struct resource *bus;
	int err;

	host = devm_pci_alloc_host_bridge(dev, sizeof(*pcie));

@@ -2780,6 +2686,12 @@ static int tegra_pcie_probe(struct platform_device *pdev)
	INIT_LIST_HEAD(&pcie->ports);
	pcie->dev = dev;

	err = pci_parse_request_of_pci_ranges(dev, &host->windows, NULL, &bus);
	if (err) {
		dev_err(dev, "Getting bridge resources failed\n");
		return err;
	}

	err = tegra_pcie_parse_dt(pcie);
	if (err < 0)
		return err;

@@ -2803,11 +2715,7 @@ static int tegra_pcie_probe(struct platform_device *pdev)
		goto teardown_msi;
	}

	err = tegra_pcie_request_resources(pcie);
	if (err)
		goto pm_runtime_put;

	host->busnr = pcie->busn.start;
	host->busnr = bus->start;
	host->dev.parent = &pdev->dev;
	host->ops = &tegra_pcie_ops;
	host->map_irq = tegra_pcie_map_irq;

@@ -2816,7 +2724,7 @@ static int tegra_pcie_probe(struct platform_device *pdev)
	err = pci_scan_root_bus_bridge(host);
	if (err < 0) {
		dev_err(dev, "failed to register host: %d\n", err);
		goto free_resources;
		goto pm_runtime_put;
	}

	pci_bus_size_bridges(host->bus);

@@ -2835,8 +2743,6 @@ static int tegra_pcie_probe(struct platform_device *pdev)

	return 0;

free_resources:
	tegra_pcie_free_resources(pcie);
pm_runtime_put:
	pm_runtime_put_sync(pcie->dev);
	pm_runtime_disable(pcie->dev);

@@ -2858,7 +2764,6 @@ static int tegra_pcie_remove(struct platform_device *pdev)

	pci_stop_root_bus(host->bus);
	pci_remove_root_bus(host->bus);
	tegra_pcie_free_resources(pcie);
	pm_runtime_put_sync(pcie->dev);
	pm_runtime_disable(pcie->dev);

@@ -824,8 +824,8 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie)
	cls = FIELD_GET(PCI_EXP_LNKSTA_CLS, lnksta);
	nlw = FIELD_GET(PCI_EXP_LNKSTA_NLW, lnksta);
	dev_info(dev, "link up, %s x%u %s\n",
		 PCIE_SPEED2STR(cls + PCI_SPEED_133MHz_PCIX_533),
		 nlw, ssc_good ? "(SSC)" : "(!SSC)");
		 pci_speed_string(pcie_link_speed[cls]), nlw,
		 ssc_good ? "(SSC)" : "(!SSC)");

	/* PCIe->SCB endian mode for BAR */
	tmp = readl(base + PCIE_RC_CFG_VENDOR_VENDOR_SPECIFIC_REG1);

@@ -8,6 +8,7 @@

#include <linux/crc32.h>
#include <linux/delay.h>
#include <linux/dmaengine.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/slab.h>

@@ -39,6 +40,8 @@
#define STATUS_SRC_ADDR_INVALID BIT(7)
#define STATUS_DST_ADDR_INVALID BIT(8)

#define FLAG_USE_DMA BIT(0)

#define TIMER_RESOLUTION 1

static struct workqueue_struct *kpcitest_workqueue;

@@ -47,7 +50,11 @@ struct pci_epf_test {
	void *reg[PCI_STD_NUM_BARS];
	struct pci_epf *epf;
	enum pci_barno test_reg_bar;
	size_t msix_table_offset;
	struct delayed_work cmd_handler;
	struct dma_chan *dma_chan;
	struct completion transfer_complete;
	bool dma_supported;
	const struct pci_epc_features *epc_features;
};

@@ -61,6 +68,7 @@ struct pci_epf_test_reg {
	u32 checksum;
	u32 irq_type;
	u32 irq_number;
	u32 flags;
} __packed;

static struct pci_epf_header test_header = {

@@ -72,13 +80,156 @@ static struct pci_epf_header test_header = {

static size_t bar_size[] = { 512, 512, 1024, 16384, 131072, 1048576 };

static void pci_epf_test_dma_callback(void *param)
{
	struct pci_epf_test *epf_test = param;

	complete(&epf_test->transfer_complete);
}

/**
 * pci_epf_test_data_transfer() - Function that uses dmaengine API to transfer
 *				  data between PCIe EP and remote PCIe RC
 * @epf_test: the EPF test device that performs the data transfer operation
 * @dma_dst: The destination address of the data transfer. It can be a physical
 *	     address given by pci_epc_mem_alloc_addr or DMA mapping APIs.
 * @dma_src: The source address of the data transfer. It can be a physical
 *	     address given by pci_epc_mem_alloc_addr or DMA mapping APIs.
 * @len: The size of the data transfer
 *
 * Function that uses dmaengine API to transfer data between PCIe EP and remote
 * PCIe RC. The source and destination address can be a physical address given
 * by pci_epc_mem_alloc_addr or the one obtained using DMA mapping APIs.
 *
 * The function returns '0' on success and negative value on failure.
 */
static int pci_epf_test_data_transfer(struct pci_epf_test *epf_test,
				      dma_addr_t dma_dst, dma_addr_t dma_src,
				      size_t len)
{
	enum dma_ctrl_flags flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
	struct dma_chan *chan = epf_test->dma_chan;
	struct pci_epf *epf = epf_test->epf;
	struct dma_async_tx_descriptor *tx;
	struct device *dev = &epf->dev;
	dma_cookie_t cookie;
	int ret;

	if (IS_ERR_OR_NULL(chan)) {
		dev_err(dev, "Invalid DMA memcpy channel\n");
		return -EINVAL;
	}

	tx = dmaengine_prep_dma_memcpy(chan, dma_dst, dma_src, len, flags);
	if (!tx) {
		dev_err(dev, "Failed to prepare DMA memcpy\n");
		return -EIO;
	}

	tx->callback = pci_epf_test_dma_callback;
	tx->callback_param = epf_test;
	cookie = tx->tx_submit(tx);
	reinit_completion(&epf_test->transfer_complete);

	ret = dma_submit_error(cookie);
	if (ret) {
		dev_err(dev, "Failed to do DMA tx_submit %d\n", cookie);
		return -EIO;
	}

	dma_async_issue_pending(chan);
	ret = wait_for_completion_interruptible(&epf_test->transfer_complete);
	if (ret < 0) {
		dmaengine_terminate_sync(chan);
		dev_err(dev, "DMA wait_for_completion_timeout\n");
		return -ETIMEDOUT;
	}

	return 0;
}
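/*
 * Descriptive note: the wait above is interruptible, not timed; a pending
 * signal tears the transfer down via dmaengine_terminate_sync(), and the
 * failure is reported as -ETIMEDOUT.
 */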

/**
 * pci_epf_test_init_dma_chan() - Function to initialize EPF test DMA channel
 * @epf_test: the EPF test device that performs data transfer operation
 *
 * Function to initialize EPF test DMA channel.
 */
static int pci_epf_test_init_dma_chan(struct pci_epf_test *epf_test)
{
	struct pci_epf *epf = epf_test->epf;
	struct device *dev = &epf->dev;
	struct dma_chan *dma_chan;
	dma_cap_mask_t mask;
	int ret;

	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);

	dma_chan = dma_request_chan_by_mask(&mask);
	if (IS_ERR(dma_chan)) {
		ret = PTR_ERR(dma_chan);
		if (ret != -EPROBE_DEFER)
			dev_err(dev, "Failed to get DMA channel\n");
		return ret;
	}
	init_completion(&epf_test->transfer_complete);

	epf_test->dma_chan = dma_chan;

	return 0;
}

/**
 * pci_epf_test_clean_dma_chan() - Function to cleanup EPF test DMA channel
 * @epf: the EPF test device that performs data transfer operation
 *
 * Helper to cleanup EPF test DMA channel.
 */
static void pci_epf_test_clean_dma_chan(struct pci_epf_test *epf_test)
{
	dma_release_channel(epf_test->dma_chan);
	epf_test->dma_chan = NULL;
}

static void pci_epf_test_print_rate(const char *ops, u64 size,
				    struct timespec64 *start,
				    struct timespec64 *end, bool dma)
{
	struct timespec64 ts;
	u64 rate, ns;

	ts = timespec64_sub(*end, *start);

	/* convert both size (stored in 'rate') and time in terms of 'ns' */
	ns = timespec64_to_ns(&ts);
	rate = size * NSEC_PER_SEC;

	/* Divide both size (stored in 'rate') and ns by a common factor */
	while (ns > UINT_MAX) {
		rate >>= 1;
		ns >>= 1;
	}

	if (!ns)
		return;

	/* calculate the rate */
	do_div(rate, (uint32_t)ns);

	pr_info("\n%s => Size: %llu bytes\t DMA: %s\t Time: %llu.%09u seconds\t"
		"Rate: %llu KB/s\n", ops, size, dma ? "YES" : "NO",
		(u64)ts.tv_sec, (u32)ts.tv_nsec, rate / 1024);
}

static int pci_epf_test_copy(struct pci_epf_test *epf_test)
{
	int ret;
	bool use_dma;
	void __iomem *src_addr;
	void __iomem *dst_addr;
	phys_addr_t src_phys_addr;
	phys_addr_t dst_phys_addr;
	struct timespec64 start, end;
	struct pci_epf *epf = epf_test->epf;
	struct device *dev = &epf->dev;
	struct pci_epc *epc = epf->epc;

@@ -117,8 +268,26 @@ static int pci_epf_test_copy(struct pci_epf_test *epf_test)
		goto err_dst_addr;
	}

	memcpy(dst_addr, src_addr, reg->size);
	ktime_get_ts64(&start);
	use_dma = !!(reg->flags & FLAG_USE_DMA);
	if (use_dma) {
		if (!epf_test->dma_supported) {
			dev_err(dev, "Cannot transfer data using DMA\n");
			ret = -EINVAL;
			goto err_map_addr;
		}

		ret = pci_epf_test_data_transfer(epf_test, dst_phys_addr,
						 src_phys_addr, reg->size);
		if (ret)
			dev_err(dev, "Data transfer failed\n");
	} else {
		memcpy(dst_addr, src_addr, reg->size);
	}
	ktime_get_ts64(&end);
	pci_epf_test_print_rate("COPY", reg->size, &start, &end, use_dma);

err_map_addr:
	pci_epc_unmap_addr(epc, epf->func_no, dst_phys_addr);

err_dst_addr:

@@ -140,10 +309,14 @@ static int pci_epf_test_read(struct pci_epf_test *epf_test)
	void __iomem *src_addr;
	void *buf;
	u32 crc32;
	bool use_dma;
	phys_addr_t phys_addr;
	phys_addr_t dst_phys_addr;
	struct timespec64 start, end;
	struct pci_epf *epf = epf_test->epf;
	struct device *dev = &epf->dev;
	struct pci_epc *epc = epf->epc;
	struct device *dma_dev = epf->epc->dev.parent;
	enum pci_barno test_reg_bar = epf_test->test_reg_bar;
	struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];

@@ -169,12 +342,44 @@ static int pci_epf_test_read(struct pci_epf_test *epf_test)
		goto err_map_addr;
	}

	memcpy_fromio(buf, src_addr, reg->size);
	use_dma = !!(reg->flags & FLAG_USE_DMA);
	if (use_dma) {
		if (!epf_test->dma_supported) {
			dev_err(dev, "Cannot transfer data using DMA\n");
			ret = -EINVAL;
			goto err_dma_map;
		}

		dst_phys_addr = dma_map_single(dma_dev, buf, reg->size,
					       DMA_FROM_DEVICE);
		if (dma_mapping_error(dma_dev, dst_phys_addr)) {
			dev_err(dev, "Failed to map destination buffer addr\n");
			ret = -ENOMEM;
			goto err_dma_map;
		}

		ktime_get_ts64(&start);
		ret = pci_epf_test_data_transfer(epf_test, dst_phys_addr,
						 phys_addr, reg->size);
		if (ret)
			dev_err(dev, "Data transfer failed\n");
		ktime_get_ts64(&end);

		dma_unmap_single(dma_dev, dst_phys_addr, reg->size,
				 DMA_FROM_DEVICE);
	} else {
		ktime_get_ts64(&start);
		memcpy_fromio(buf, src_addr, reg->size);
		ktime_get_ts64(&end);
	}

	pci_epf_test_print_rate("READ", reg->size, &start, &end, use_dma);

	crc32 = crc32_le(~0, buf, reg->size);
	if (crc32 != reg->checksum)
		ret = -EIO;

err_dma_map:
	kfree(buf);

err_map_addr:

@@ -192,10 +397,14 @@ static int pci_epf_test_write(struct pci_epf_test *epf_test)
	int ret;
	void __iomem *dst_addr;
	void *buf;
	bool use_dma;
	phys_addr_t phys_addr;
	phys_addr_t src_phys_addr;
	struct timespec64 start, end;
	struct pci_epf *epf = epf_test->epf;
	struct device *dev = &epf->dev;
	struct pci_epc *epc = epf->epc;
	struct device *dma_dev = epf->epc->dev.parent;
	enum pci_barno test_reg_bar = epf_test->test_reg_bar;
	struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];

@@ -224,7 +433,38 @@ static int pci_epf_test_write(struct pci_epf_test *epf_test)
	get_random_bytes(buf, reg->size);
	reg->checksum = crc32_le(~0, buf, reg->size);

	memcpy_toio(dst_addr, buf, reg->size);
	use_dma = !!(reg->flags & FLAG_USE_DMA);
	if (use_dma) {
		if (!epf_test->dma_supported) {
			dev_err(dev, "Cannot transfer data using DMA\n");
			ret = -EINVAL;
			goto err_map_addr;
		}

		src_phys_addr = dma_map_single(dma_dev, buf, reg->size,
					       DMA_TO_DEVICE);
		if (dma_mapping_error(dma_dev, src_phys_addr)) {
			dev_err(dev, "Failed to map source buffer addr\n");
			ret = -ENOMEM;
			goto err_dma_map;
		}

		ktime_get_ts64(&start);
		ret = pci_epf_test_data_transfer(epf_test, phys_addr,
						 src_phys_addr, reg->size);
		if (ret)
			dev_err(dev, "Data transfer failed\n");
		ktime_get_ts64(&end);

		dma_unmap_single(dma_dev, src_phys_addr, reg->size,
				 DMA_TO_DEVICE);
	} else {
		ktime_get_ts64(&start);
		memcpy_toio(dst_addr, buf, reg->size);
		ktime_get_ts64(&end);
	}

	pci_epf_test_print_rate("WRITE", reg->size, &start, &end, use_dma);

	/*
	 * wait 1 ms in order for the write to complete. Without this delay L3

@@ -232,6 +472,7 @@ static int pci_epf_test_write(struct pci_epf_test *epf_test)
	 */
	usleep_range(1000, 2000);

err_dma_map:
	kfree(buf);

err_map_addr:

@@ -360,14 +601,6 @@ reset_handler:
			      msecs_to_jiffies(1));
}

static void pci_epf_test_linkup(struct pci_epf *epf)
{
	struct pci_epf_test *epf_test = epf_get_drvdata(epf);

	queue_delayed_work(kpcitest_workqueue, &epf_test->cmd_handler,
			   msecs_to_jiffies(1));
}

static void pci_epf_test_unbind(struct pci_epf *epf)
{
	struct pci_epf_test *epf_test = epf_get_drvdata(epf);

@@ -376,6 +609,7 @@ static void pci_epf_test_unbind(struct pci_epf *epf)
	int bar;

	cancel_delayed_work(&epf_test->cmd_handler);
	pci_epf_test_clean_dma_chan(epf_test);
	pci_epc_stop(epc);
	for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
		epf_bar = &epf->bar[bar];

@@ -424,11 +658,90 @@ static int pci_epf_test_set_bar(struct pci_epf *epf)
	return 0;
}

static int pci_epf_test_core_init(struct pci_epf *epf)
{
	struct pci_epf_test *epf_test = epf_get_drvdata(epf);
	struct pci_epf_header *header = epf->header;
	const struct pci_epc_features *epc_features;
	struct pci_epc *epc = epf->epc;
	struct device *dev = &epf->dev;
	bool msix_capable = false;
	bool msi_capable = true;
	int ret;

	epc_features = pci_epc_get_features(epc, epf->func_no);
	if (epc_features) {
		msix_capable = epc_features->msix_capable;
		msi_capable = epc_features->msi_capable;
	}

	ret = pci_epc_write_header(epc, epf->func_no, header);
	if (ret) {
		dev_err(dev, "Configuration header write failed\n");
		return ret;
	}

	ret = pci_epf_test_set_bar(epf);
	if (ret)
		return ret;

	if (msi_capable) {
		ret = pci_epc_set_msi(epc, epf->func_no, epf->msi_interrupts);
		if (ret) {
			dev_err(dev, "MSI configuration failed\n");
			return ret;
		}
	}

	if (msix_capable) {
		ret = pci_epc_set_msix(epc, epf->func_no, epf->msix_interrupts,
				       epf_test->test_reg_bar,
				       epf_test->msix_table_offset);
		if (ret) {
			dev_err(dev, "MSI-X configuration failed\n");
			return ret;
		}
	}

	return 0;
}
||||
|
||||
static int pci_epf_test_notifier(struct notifier_block *nb, unsigned long val,
|
||||
void *data)
|
||||
{
|
||||
struct pci_epf *epf = container_of(nb, struct pci_epf, nb);
|
||||
struct pci_epf_test *epf_test = epf_get_drvdata(epf);
|
||||
int ret;
|
||||
|
||||
switch (val) {
|
||||
case CORE_INIT:
|
||||
ret = pci_epf_test_core_init(epf);
|
||||
if (ret)
|
||||
return NOTIFY_BAD;
|
||||
break;
|
||||
|
||||
case LINK_UP:
|
||||
queue_delayed_work(kpcitest_workqueue, &epf_test->cmd_handler,
|
||||
msecs_to_jiffies(1));
|
||||
break;
|
||||
|
||||
default:
|
||||
dev_err(&epf->dev, "Invalid EPF test notifier event\n");
|
||||
return NOTIFY_BAD;
|
||||
}
|
||||
|
||||
return NOTIFY_OK;
|
||||
}
|
||||
|
||||
static int pci_epf_test_alloc_space(struct pci_epf *epf)
|
||||
{
|
||||
struct pci_epf_test *epf_test = epf_get_drvdata(epf);
|
||||
struct device *dev = &epf->dev;
|
||||
struct pci_epf_bar *epf_bar;
|
||||
size_t msix_table_size = 0;
|
||||
size_t test_reg_bar_size;
|
||||
size_t pba_size = 0;
|
||||
bool msix_capable;
|
||||
void *base;
|
||||
int bar, add;
|
||||
enum pci_barno test_reg_bar = epf_test->test_reg_bar;
|
||||
|
@ -437,13 +750,25 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf)
|
|||
|
||||
epc_features = epf_test->epc_features;
|
||||
|
||||
if (epc_features->bar_fixed_size[test_reg_bar])
|
||||
test_reg_size = bar_size[test_reg_bar];
|
||||
else
|
||||
test_reg_size = sizeof(struct pci_epf_test_reg);
|
||||
test_reg_bar_size = ALIGN(sizeof(struct pci_epf_test_reg), 128);
|
||||
|
||||
base = pci_epf_alloc_space(epf, test_reg_size,
|
||||
test_reg_bar, epc_features->align);
|
||||
msix_capable = epc_features->msix_capable;
|
||||
if (msix_capable) {
|
||||
msix_table_size = PCI_MSIX_ENTRY_SIZE * epf->msix_interrupts;
|
||||
epf_test->msix_table_offset = test_reg_bar_size;
|
||||
/* Align to QWORD or 8 Bytes */
|
||||
pba_size = ALIGN(DIV_ROUND_UP(epf->msix_interrupts, 8), 8);
|
||||
}
|
||||
test_reg_size = test_reg_bar_size + msix_table_size + pba_size;
|
||||
|
||||
if (epc_features->bar_fixed_size[test_reg_bar]) {
|
||||
if (test_reg_size > bar_size[test_reg_bar])
|
||||
return -ENOMEM;
|
||||
test_reg_size = bar_size[test_reg_bar];
|
||||
}
|
||||
|
||||
base = pci_epf_alloc_space(epf, test_reg_size, test_reg_bar,
|
||||
epc_features->align);
|
||||
if (!base) {
|
||||
dev_err(dev, "Failed to allocated register space\n");
|
||||
return -ENOMEM;
|
||||
|
@ -492,14 +817,11 @@ static int pci_epf_test_bind(struct pci_epf *epf)
|
|||
{
|
||||
int ret;
|
||||
struct pci_epf_test *epf_test = epf_get_drvdata(epf);
|
||||
struct pci_epf_header *header = epf->header;
|
||||
const struct pci_epc_features *epc_features;
|
||||
enum pci_barno test_reg_bar = BAR_0;
|
||||
struct pci_epc *epc = epf->epc;
|
||||
struct device *dev = &epf->dev;
|
||||
bool linkup_notifier = false;
|
||||
bool msix_capable = false;
|
||||
bool msi_capable = true;
|
||||
bool core_init_notifier = false;
|
||||
|
||||
if (WARN_ON_ONCE(!epc))
|
||||
return -EINVAL;
|
||||
|
@ -507,8 +829,7 @@ static int pci_epf_test_bind(struct pci_epf *epf)
|
|||
epc_features = pci_epc_get_features(epc, epf->func_no);
|
||||
if (epc_features) {
|
||||
linkup_notifier = epc_features->linkup_notifier;
|
||||
msix_capable = epc_features->msix_capable;
|
||||
msi_capable = epc_features->msi_capable;
|
||||
core_init_notifier = epc_features->core_init_notifier;
|
||||
test_reg_bar = pci_epc_get_first_free_bar(epc_features);
|
||||
pci_epf_configure_bar(epf, epc_features);
|
||||
}
|
||||
|
@ -516,38 +837,28 @@ static int pci_epf_test_bind(struct pci_epf *epf)
|
|||
epf_test->test_reg_bar = test_reg_bar;
|
||||
epf_test->epc_features = epc_features;
|
||||
|
||||
ret = pci_epc_write_header(epc, epf->func_no, header);
|
||||
if (ret) {
|
||||
dev_err(dev, "Configuration header write failed\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = pci_epf_test_alloc_space(epf);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = pci_epf_test_set_bar(epf);
|
||||
if (!core_init_notifier) {
|
||||
ret = pci_epf_test_core_init(epf);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
epf_test->dma_supported = true;
|
||||
|
||||
ret = pci_epf_test_init_dma_chan(epf_test);
|
||||
if (ret)
|
||||
return ret;
|
||||
epf_test->dma_supported = false;
|
||||
|
||||
if (msi_capable) {
|
||||
ret = pci_epc_set_msi(epc, epf->func_no, epf->msi_interrupts);
|
||||
if (ret) {
|
||||
dev_err(dev, "MSI configuration failed\n");
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
if (msix_capable) {
|
||||
ret = pci_epc_set_msix(epc, epf->func_no, epf->msix_interrupts);
|
||||
if (ret) {
|
||||
dev_err(dev, "MSI-X configuration failed\n");
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
if (!linkup_notifier)
|
||||
if (linkup_notifier) {
|
||||
epf->nb.notifier_call = pci_epf_test_notifier;
|
||||
pci_epc_register_notifier(epc, &epf->nb);
|
||||
} else {
|
||||
queue_work(kpcitest_workqueue, &epf_test->cmd_handler.work);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -580,7 +891,6 @@ static int pci_epf_test_probe(struct pci_epf *epf)
|
|||
static struct pci_epf_ops ops = {
|
||||
.unbind = pci_epf_test_unbind,
|
||||
.bind = pci_epf_test_bind,
|
||||
.linkup = pci_epf_test_linkup,
|
||||
};
|
||||
|
||||
static struct pci_epf_driver test_driver = {
|
||||
|
|
|
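
A note on the DMA path added above: it follows the usual streaming-DMA shape of map, transfer, unmap, and only the transfer itself is timed, which is what pci_epf_test_print_rate() reports. A minimal sketch of that shape, where do_transfer() is a hypothetical stand-in for pci_epf_test_data_transfer() and dma_dev is the EPC's parent device:

#include <linux/dma-mapping.h>
#include <linux/timekeeping.h>

static int sketch_dma_write(struct device *dma_dev, void *buf, size_t size,
			    phys_addr_t dst,
			    int (*do_transfer)(phys_addr_t dst, dma_addr_t src,
					       size_t size))
{
	struct timespec64 start, end;
	dma_addr_t src;
	int ret;

	src = dma_map_single(dma_dev, buf, size, DMA_TO_DEVICE);
	if (dma_mapping_error(dma_dev, src))
		return -ENOMEM;

	ktime_get_ts64(&start);
	ret = do_transfer(dst, src, size);	/* completes before returning */
	ktime_get_ts64(&end);

	dma_unmap_single(dma_dev, src, size, DMA_TO_DEVICE);
	return ret;
}
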
diff --git a/drivers/pci/endpoint/pci-ep-cfs.c b/drivers/pci/endpoint/pci-ep-cfs.c
@@ -29,7 +29,6 @@ struct pci_epc_group {
 	struct config_group group;
 	struct pci_epc *epc;
 	bool start;
-	unsigned long function_num_map;
 };
 
 static inline struct pci_epf_group *to_pci_epf_group(struct config_item *item)
@@ -58,6 +57,7 @@ static ssize_t pci_epc_start_store(struct config_item *item, const char *page,
 
 	if (!start) {
 		pci_epc_stop(epc);
+		epc_group->start = 0;
 		return len;
 	}
 
@@ -89,37 +89,22 @@ static int pci_epc_epf_link(struct config_item *epc_item,
 			    struct config_item *epf_item)
 {
 	int ret;
-	u32 func_no = 0;
 	struct pci_epf_group *epf_group = to_pci_epf_group(epf_item);
 	struct pci_epc_group *epc_group = to_pci_epc_group(epc_item);
 	struct pci_epc *epc = epc_group->epc;
 	struct pci_epf *epf = epf_group->epf;
 
-	func_no = find_first_zero_bit(&epc_group->function_num_map,
-				      BITS_PER_LONG);
-	if (func_no >= BITS_PER_LONG)
-		return -EINVAL;
-
-	set_bit(func_no, &epc_group->function_num_map);
-	epf->func_no = func_no;
-
 	ret = pci_epc_add_epf(epc, epf);
 	if (ret)
-		goto err_add_epf;
+		return ret;
 
 	ret = pci_epf_bind(epf);
-	if (ret)
-		goto err_epf_bind;
+	if (ret) {
+		pci_epc_remove_epf(epc, epf);
+		return ret;
+	}
 
 	return 0;
-
-err_epf_bind:
-	pci_epc_remove_epf(epc, epf);
-
-err_add_epf:
-	clear_bit(func_no, &epc_group->function_num_map);
-
-	return ret;
 }
 
 static void pci_epc_epf_unlink(struct config_item *epc_item,
@@ -134,7 +119,6 @@ static void pci_epc_epf_unlink(struct config_item *epc_item,
 
 	epc = epc_group->epc;
 	epf = epf_group->epf;
-	clear_bit(epf->func_no, &epc_group->function_num_map);
 	pci_epf_unbind(epf);
 	pci_epc_remove_epf(epc, epf);
 }
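
Function-number assignment has moved out of this configfs link callback and into pci_epc_add_epf() (see pci-epc-core.c below), so one allocator now serves every way of creating functions. Both the old and new code use the same bitmap idiom; a self-contained sketch with invented names:

#include <linux/bitops.h>

/* One bit per function; BITS_PER_LONG caps how many an EPC can expose. */
static int sketch_alloc_func_no(unsigned long *map)
{
	u32 func_no = find_first_zero_bit(map, BITS_PER_LONG);

	if (func_no >= BITS_PER_LONG)
		return -EINVAL;		/* all function numbers in use */

	set_bit(func_no, map);
	return func_no;
}

static void sketch_free_func_no(unsigned long *map, u32 func_no)
{
	clear_bit(func_no, map);
}
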
diff --git a/drivers/pci/endpoint/pci-epc-core.c b/drivers/pci/endpoint/pci-epc-core.c
@@ -120,7 +120,6 @@ const struct pci_epc_features *pci_epc_get_features(struct pci_epc *epc,
 			u8 func_no)
 {
 	const struct pci_epc_features *epc_features;
-	unsigned long flags;
 
 	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
 		return NULL;
@@ -128,9 +127,9 @@ const struct pci_epc_features *pci_epc_get_features(struct pci_epc *epc,
 	if (!epc->ops->get_features)
 		return NULL;
 
-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
 	epc_features = epc->ops->get_features(epc, func_no);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);
 
 	return epc_features;
 }
@@ -144,14 +143,12 @@ EXPORT_SYMBOL_GPL(pci_epc_get_features);
  */
 void pci_epc_stop(struct pci_epc *epc)
 {
-	unsigned long flags;
-
 	if (IS_ERR(epc) || !epc->ops->stop)
 		return;
 
-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
 	epc->ops->stop(epc);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);
 }
 EXPORT_SYMBOL_GPL(pci_epc_stop);
 
@@ -164,7 +161,6 @@ EXPORT_SYMBOL_GPL(pci_epc_stop);
 int pci_epc_start(struct pci_epc *epc)
 {
 	int ret;
-	unsigned long flags;
 
 	if (IS_ERR(epc))
 		return -EINVAL;
@@ -172,9 +168,9 @@ int pci_epc_start(struct pci_epc *epc)
 	if (!epc->ops->start)
 		return 0;
 
-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
 	ret = epc->ops->start(epc);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);
 
 	return ret;
 }
@@ -193,7 +189,6 @@ int pci_epc_raise_irq(struct pci_epc *epc, u8 func_no,
 		      enum pci_epc_irq_type type, u16 interrupt_num)
 {
 	int ret;
-	unsigned long flags;
 
 	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
 		return -EINVAL;
@@ -201,9 +196,9 @@ int pci_epc_raise_irq(struct pci_epc *epc, u8 func_no,
 	if (!epc->ops->raise_irq)
 		return 0;
 
-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
 	ret = epc->ops->raise_irq(epc, func_no, type, interrupt_num);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);
 
 	return ret;
 }
@@ -219,7 +214,6 @@ EXPORT_SYMBOL_GPL(pci_epc_raise_irq);
 int pci_epc_get_msi(struct pci_epc *epc, u8 func_no)
 {
 	int interrupt;
-	unsigned long flags;
 
 	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
 		return 0;
@@ -227,9 +221,9 @@ int pci_epc_get_msi(struct pci_epc *epc, u8 func_no)
 	if (!epc->ops->get_msi)
 		return 0;
 
-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
 	interrupt = epc->ops->get_msi(epc, func_no);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);
 
 	if (interrupt < 0)
 		return 0;
@@ -252,7 +246,6 @@ int pci_epc_set_msi(struct pci_epc *epc, u8 func_no, u8 interrupts)
 {
 	int ret;
 	u8 encode_int;
-	unsigned long flags;
 
 	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions ||
 	    interrupts > 32)
@@ -263,9 +256,9 @@ int pci_epc_set_msi(struct pci_epc *epc, u8 func_no, u8 interrupts)
 
 	encode_int = order_base_2(interrupts);
 
-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
 	ret = epc->ops->set_msi(epc, func_no, encode_int);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);
 
 	return ret;
 }
@@ -281,7 +274,6 @@ EXPORT_SYMBOL_GPL(pci_epc_set_msi);
 int pci_epc_get_msix(struct pci_epc *epc, u8 func_no)
 {
 	int interrupt;
-	unsigned long flags;
 
 	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
 		return 0;
@@ -289,9 +281,9 @@ int pci_epc_get_msix(struct pci_epc *epc, u8 func_no)
 	if (!epc->ops->get_msix)
 		return 0;
 
-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
 	interrupt = epc->ops->get_msix(epc, func_no);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);
 
 	if (interrupt < 0)
 		return 0;
@@ -305,13 +297,15 @@ EXPORT_SYMBOL_GPL(pci_epc_get_msix);
 * @epc: the EPC device on which MSI-X has to be configured
 * @func_no: the endpoint function number in the EPC device
 * @interrupts: number of MSI-X interrupts required by the EPF
+* @bir: BAR where the MSI-X table resides
+* @offset: Offset pointing to the start of MSI-X table
 *
 * Invoke to set the required number of MSI-X interrupts.
 */
-int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts)
+int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts,
+		     enum pci_barno bir, u32 offset)
 {
 	int ret;
-	unsigned long flags;
 
 	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions ||
 	    interrupts < 1 || interrupts > 2048)
@@ -320,9 +314,9 @@ int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts)
 	if (!epc->ops->set_msix)
 		return 0;
 
-	spin_lock_irqsave(&epc->lock, flags);
-	ret = epc->ops->set_msix(epc, func_no, interrupts - 1);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_lock(&epc->lock);
+	ret = epc->ops->set_msix(epc, func_no, interrupts - 1, bir, offset);
+	mutex_unlock(&epc->lock);
 
 	return ret;
 }
@@ -339,17 +333,15 @@ EXPORT_SYMBOL_GPL(pci_epc_set_msix);
 void pci_epc_unmap_addr(struct pci_epc *epc, u8 func_no,
 			phys_addr_t phys_addr)
 {
-	unsigned long flags;
-
 	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
 		return;
 
 	if (!epc->ops->unmap_addr)
 		return;
 
-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
 	epc->ops->unmap_addr(epc, func_no, phys_addr);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);
 }
 EXPORT_SYMBOL_GPL(pci_epc_unmap_addr);
 
@@ -367,7 +359,6 @@ int pci_epc_map_addr(struct pci_epc *epc, u8 func_no,
 		     phys_addr_t phys_addr, u64 pci_addr, size_t size)
 {
 	int ret;
-	unsigned long flags;
 
 	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
 		return -EINVAL;
@@ -375,9 +366,9 @@ int pci_epc_map_addr(struct pci_epc *epc, u8 func_no,
 	if (!epc->ops->map_addr)
 		return 0;
 
-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
 	ret = epc->ops->map_addr(epc, func_no, phys_addr, pci_addr, size);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);
 
 	return ret;
 }
@@ -394,8 +385,6 @@ EXPORT_SYMBOL_GPL(pci_epc_map_addr);
 void pci_epc_clear_bar(struct pci_epc *epc, u8 func_no,
 		       struct pci_epf_bar *epf_bar)
 {
-	unsigned long flags;
-
 	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions ||
 	    (epf_bar->barno == BAR_5 &&
 	     epf_bar->flags & PCI_BASE_ADDRESS_MEM_TYPE_64))
@@ -404,9 +393,9 @@ void pci_epc_clear_bar(struct pci_epc *epc, u8 func_no,
 	if (!epc->ops->clear_bar)
 		return;
 
-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
 	epc->ops->clear_bar(epc, func_no, epf_bar);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);
 }
 EXPORT_SYMBOL_GPL(pci_epc_clear_bar);
 
@@ -422,7 +411,6 @@ int pci_epc_set_bar(struct pci_epc *epc, u8 func_no,
 		    struct pci_epf_bar *epf_bar)
 {
 	int ret;
-	unsigned long irq_flags;
 	int flags = epf_bar->flags;
 
 	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions ||
@@ -437,9 +425,9 @@ int pci_epc_set_bar(struct pci_epc *epc, u8 func_no,
 	if (!epc->ops->set_bar)
 		return 0;
 
-	spin_lock_irqsave(&epc->lock, irq_flags);
+	mutex_lock(&epc->lock);
 	ret = epc->ops->set_bar(epc, func_no, epf_bar);
-	spin_unlock_irqrestore(&epc->lock, irq_flags);
+	mutex_unlock(&epc->lock);
 
 	return ret;
 }
@@ -460,7 +448,6 @@ int pci_epc_write_header(struct pci_epc *epc, u8 func_no,
 			 struct pci_epf_header *header)
 {
 	int ret;
-	unsigned long flags;
 
 	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
 		return -EINVAL;
@@ -468,9 +455,9 @@ int pci_epc_write_header(struct pci_epc *epc, u8 func_no,
 	if (!epc->ops->write_header)
 		return 0;
 
-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
 	ret = epc->ops->write_header(epc, func_no, header);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);
 
 	return ret;
 }
@@ -487,7 +474,8 @@ EXPORT_SYMBOL_GPL(pci_epc_write_header);
  */
 int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf)
 {
-	unsigned long flags;
+	u32 func_no;
+	int ret = 0;
 
 	if (epf->epc)
 		return -EBUSY;
@@ -495,16 +483,30 @@ int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf)
 	if (IS_ERR(epc))
 		return -EINVAL;
 
-	if (epf->func_no > epc->max_functions - 1)
-		return -EINVAL;
+	mutex_lock(&epc->lock);
+	func_no = find_first_zero_bit(&epc->function_num_map,
+				      BITS_PER_LONG);
+	if (func_no >= BITS_PER_LONG) {
+		ret = -EINVAL;
+		goto ret;
+	}
+
+	if (func_no > epc->max_functions - 1) {
+		dev_err(&epc->dev, "Exceeding max supported Function Number\n");
+		ret = -EINVAL;
+		goto ret;
+	}
 
+	set_bit(func_no, &epc->function_num_map);
+	epf->func_no = func_no;
 	epf->epc = epc;
 
-	spin_lock_irqsave(&epc->lock, flags);
 	list_add_tail(&epf->list, &epc->pci_epf);
-	spin_unlock_irqrestore(&epc->lock, flags);
 
-	return 0;
+ret:
+	mutex_unlock(&epc->lock);
+
+	return ret;
 }
 EXPORT_SYMBOL_GPL(pci_epc_add_epf);
 
@@ -517,15 +519,14 @@ EXPORT_SYMBOL_GPL(pci_epc_add_epf);
  */
 void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf)
 {
-	unsigned long flags;
-
 	if (!epc || IS_ERR(epc) || !epf)
 		return;
 
-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
+	clear_bit(epf->func_no, &epc->function_num_map);
 	list_del(&epf->list);
 	epf->epc = NULL;
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);
 }
 EXPORT_SYMBOL_GPL(pci_epc_remove_epf);
 
@@ -539,19 +540,30 @@ EXPORT_SYMBOL_GPL(pci_epc_remove_epf);
  */
 void pci_epc_linkup(struct pci_epc *epc)
 {
-	unsigned long flags;
-	struct pci_epf *epf;
-
 	if (!epc || IS_ERR(epc))
 		return;
 
-	spin_lock_irqsave(&epc->lock, flags);
-	list_for_each_entry(epf, &epc->pci_epf, list)
-		pci_epf_linkup(epf);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	atomic_notifier_call_chain(&epc->notifier, LINK_UP, NULL);
 }
 EXPORT_SYMBOL_GPL(pci_epc_linkup);
 
+/**
+ * pci_epc_init_notify() - Notify the EPF device that EPC device's core
+ *			   initialization is completed.
+ * @epc: the EPC device whose core initialization is completeds
+ *
+ * Invoke to Notify the EPF device that the EPC device's initialization
+ * is completed.
+ */
+void pci_epc_init_notify(struct pci_epc *epc)
+{
+	if (!epc || IS_ERR(epc))
+		return;
+
+	atomic_notifier_call_chain(&epc->notifier, CORE_INIT, NULL);
+}
+EXPORT_SYMBOL_GPL(pci_epc_init_notify);
+
 /**
  * pci_epc_destroy() - destroy the EPC device
  * @epc: the EPC device that has to be destroyed
@@ -610,8 +622,9 @@ __pci_epc_create(struct device *dev, const struct pci_epc_ops *ops,
 		goto err_ret;
 	}
 
-	spin_lock_init(&epc->lock);
+	mutex_init(&epc->lock);
 	INIT_LIST_HEAD(&epc->pci_epf);
+	ATOMIC_INIT_NOTIFIER_HEAD(&epc->notifier);
 
 	device_initialize(&epc->dev);
 	epc->dev.class = pci_epc_class;
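
The recurring change above is one conversion applied to every EPC op: epc->lock is now a mutex, because controller callbacks may need to sleep (config accesses, memory allocation), which is not allowed inside the old spin_lock_irqsave() sections. Reduced to a sketch with invented names:

#include <linux/mutex.h>

struct sketch_epc {
	struct mutex lock;	/* was: spinlock_t lock + unsigned long flags */
	int (*op)(struct sketch_epc *epc);
};

static int sketch_epc_call(struct sketch_epc *epc)
{
	int ret;

	mutex_lock(&epc->lock);
	ret = epc->op(epc);	/* the callback may now sleep */
	mutex_unlock(&epc->lock);

	return ret;
}
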
diff --git a/drivers/pci/endpoint/pci-epc-mem.c b/drivers/pci/endpoint/pci-epc-mem.c
@@ -79,6 +79,7 @@ int __pci_epc_mem_init(struct pci_epc *epc, phys_addr_t phys_base, size_t size,
 	mem->page_size = page_size;
 	mem->pages = pages;
 	mem->size = size;
+	mutex_init(&mem->lock);
 
 	epc->mem = mem;
 
@@ -122,7 +123,7 @@ void __iomem *pci_epc_mem_alloc_addr(struct pci_epc *epc,
 				     phys_addr_t *phys_addr, size_t size)
 {
 	int pageno;
-	void __iomem *virt_addr;
+	void __iomem *virt_addr = NULL;
 	struct pci_epc_mem *mem = epc->mem;
 	unsigned int page_shift = ilog2(mem->page_size);
 	int order;
@@ -130,15 +131,18 @@ void __iomem *pci_epc_mem_alloc_addr(struct pci_epc *epc,
 	size = ALIGN(size, mem->page_size);
 	order = pci_epc_mem_get_order(mem, size);
 
+	mutex_lock(&mem->lock);
 	pageno = bitmap_find_free_region(mem->bitmap, mem->pages, order);
 	if (pageno < 0)
-		return NULL;
+		goto ret;
 
 	*phys_addr = mem->phys_base + ((phys_addr_t)pageno << page_shift);
 	virt_addr = ioremap(*phys_addr, size);
 	if (!virt_addr)
 		bitmap_release_region(mem->bitmap, pageno, order);
 
+ret:
+	mutex_unlock(&mem->lock);
 	return virt_addr;
 }
 EXPORT_SYMBOL_GPL(pci_epc_mem_alloc_addr);
@@ -164,7 +168,9 @@ void pci_epc_mem_free_addr(struct pci_epc *epc, phys_addr_t phys_addr,
 	pageno = (phys_addr - mem->phys_base) >> page_shift;
 	size = ALIGN(size, mem->page_size);
 	order = pci_epc_mem_get_order(mem, size);
+	mutex_lock(&mem->lock);
 	bitmap_release_region(mem->bitmap, pageno, order);
+	mutex_unlock(&mem->lock);
 }
 EXPORT_SYMBOL_GPL(pci_epc_mem_free_addr);
 
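
The new mem->lock closes a race in outbound address-region allocation: without it, two concurrent callers could both see the same free region and be handed overlapping ranges. A sketch of the serialized lookup, names invented:

#include <linux/bitmap.h>
#include <linux/mutex.h>

static int sketch_alloc_region(struct mutex *lock, unsigned long *bitmap,
			       int pages, int order)
{
	int pageno;

	mutex_lock(lock);
	pageno = bitmap_find_free_region(bitmap, pages, order);
	mutex_unlock(lock);

	return pageno;		/* negative when no free region remains */
}
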
diff --git a/drivers/pci/endpoint/pci-epf-core.c b/drivers/pci/endpoint/pci-epf-core.c
@@ -20,26 +20,6 @@ static DEFINE_MUTEX(pci_epf_mutex);
 static struct bus_type pci_epf_bus_type;
 static const struct device_type pci_epf_type;
 
-/**
- * pci_epf_linkup() - Notify the function driver that EPC device has
- *		      established a connection with the Root Complex.
- * @epf: the EPF device bound to the EPC device which has established
- *	 the connection with the host
- *
- * Invoke to notify the function driver that EPC device has established
- * a connection with the Root Complex.
- */
-void pci_epf_linkup(struct pci_epf *epf)
-{
-	if (!epf->driver) {
-		dev_WARN(&epf->dev, "epf device not bound to driver\n");
-		return;
-	}
-
-	epf->driver->ops->linkup(epf);
-}
-EXPORT_SYMBOL_GPL(pci_epf_linkup);
-
 /**
  * pci_epf_unbind() - Notify the function driver that the binding between the
  *		      EPF device and EPC device has been lost
@@ -55,7 +35,9 @@ void pci_epf_unbind(struct pci_epf *epf)
 		return;
 	}
 
+	mutex_lock(&epf->lock);
 	epf->driver->ops->unbind(epf);
+	mutex_unlock(&epf->lock);
 	module_put(epf->driver->owner);
 }
 EXPORT_SYMBOL_GPL(pci_epf_unbind);
@@ -69,6 +51,8 @@ EXPORT_SYMBOL_GPL(pci_epf_unbind);
 */
 int pci_epf_bind(struct pci_epf *epf)
 {
+	int ret;
+
 	if (!epf->driver) {
 		dev_WARN(&epf->dev, "epf device not bound to driver\n");
 		return -EINVAL;
@@ -77,7 +61,11 @@ int pci_epf_bind(struct pci_epf *epf)
 	if (!try_module_get(epf->driver->owner))
 		return -EAGAIN;
 
-	return epf->driver->ops->bind(epf);
+	mutex_lock(&epf->lock);
+	ret = epf->driver->ops->bind(epf);
+	mutex_unlock(&epf->lock);
+
+	return ret;
 }
 EXPORT_SYMBOL_GPL(pci_epf_bind);
 
@@ -99,6 +87,7 @@ void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar)
 			  epf->bar[bar].phys_addr);
 
 	epf->bar[bar].phys_addr = 0;
+	epf->bar[bar].addr = NULL;
 	epf->bar[bar].size = 0;
 	epf->bar[bar].barno = 0;
 	epf->bar[bar].flags = 0;
@@ -135,6 +124,7 @@ void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
 	}
 
 	epf->bar[bar].phys_addr = phys_addr;
+	epf->bar[bar].addr = space;
 	epf->bar[bar].size = size;
 	epf->bar[bar].barno = bar;
 	epf->bar[bar].flags |= upper_32_bits(size) ?
@@ -214,7 +204,7 @@ int __pci_epf_register_driver(struct pci_epf_driver *driver,
 	if (!driver->ops)
 		return -EINVAL;
 
-	if (!driver->ops->bind || !driver->ops->unbind || !driver->ops->linkup)
+	if (!driver->ops->bind || !driver->ops->unbind)
 		return -EINVAL;
 
 	driver->driver.bus = &pci_epf_bus_type;
@@ -272,6 +262,7 @@ struct pci_epf *pci_epf_create(const char *name)
 	device_initialize(dev);
 	dev->bus = &pci_epf_bus_type;
 	dev->type = &pci_epf_type;
+	mutex_init(&epf->lock);
 
 	ret = dev_set_name(dev, "%s", name);
 	if (ret) {
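
With ops->linkup gone, EPC drivers publish events such as CORE_INIT and LINK_UP on an atomic notifier chain and function drivers subscribe to it (pci-epf-test's pci_epf_test_notifier() above is one subscriber). A minimal sketch of that publish/subscribe flow, using invented names instead of the pci_epc_* wrappers:

#include <linux/notifier.h>

static ATOMIC_NOTIFIER_HEAD(sketch_chain);

static int sketch_epf_event(struct notifier_block *nb, unsigned long event,
			    void *data)
{
	/* dispatch on the event code, as pci_epf_test_notifier() does */
	return NOTIFY_OK;
}

static struct notifier_block sketch_nb = {
	.notifier_call = sketch_epf_event,
};

static void sketch_demo(void)
{
	atomic_notifier_chain_register(&sketch_chain, &sketch_nb);
	atomic_notifier_call_chain(&sketch_chain, 1 /* e.g. LINK_UP */, NULL);
}
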
diff --git a/drivers/pci/hotplug/pciehp.h b/drivers/pci/hotplug/pciehp.h
@@ -84,6 +84,7 @@ struct controller {
 	struct pcie_device *pcie;
 
 	u32 slot_cap;				/* capabilities and quirks */
+	unsigned int inband_presence_disabled:1;
 
 	u16 slot_ctrl;				/* control register access */
 	struct mutex ctrl_lock;
diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
@@ -14,6 +14,7 @@
 
 #define dev_fmt(fmt) "pciehp: " fmt
 
+#include <linux/dmi.h>
 #include <linux/kernel.h>
 #include <linux/types.h>
 #include <linux/jiffies.h>
@@ -26,6 +27,24 @@
 #include "../pci.h"
 #include "pciehp.h"
 
+static const struct dmi_system_id inband_presence_disabled_dmi_table[] = {
+	/*
+	 * Match all Dell systems, as some Dell systems have inband
+	 * presence disabled on NVMe slots (but don't support the bit to
+	 * report it). Setting inband presence disabled should have no
+	 * negative effect, except on broken hotplug slots that never
+	 * assert presence detect--and those will still work, they will
+	 * just have a bit of extra delay before being probed.
+	 */
+	{
+		.ident = "Dell System",
+		.matches = {
+			DMI_MATCH(DMI_OEM_STRING, "Dell System"),
+		},
+	},
+	{}
+};
+
 static inline struct pci_dev *ctrl_dev(struct controller *ctrl)
 {
 	return ctrl->pcie->port;
@@ -252,6 +271,22 @@ static bool pci_bus_check_dev(struct pci_bus *bus, int devfn)
 	return found;
 }
 
+static void pcie_wait_for_presence(struct pci_dev *pdev)
+{
+	int timeout = 1250;
+	u16 slot_status;
+
+	do {
+		pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
+		if (slot_status & PCI_EXP_SLTSTA_PDS)
+			return;
+		msleep(10);
+		timeout -= 10;
+	} while (timeout > 0);
+
+	pci_info(pdev, "Timeout waiting for Presence Detect\n");
+}
+
 int pciehp_check_link_status(struct controller *ctrl)
 {
 	struct pci_dev *pdev = ctrl_dev(ctrl);
@@ -261,6 +296,9 @@ int pciehp_check_link_status(struct controller *ctrl)
 	if (!pcie_wait_for_link(pdev, true))
 		return -1;
 
+	if (ctrl->inband_presence_disabled)
+		pcie_wait_for_presence(pdev);
+
 	found = pci_bus_check_dev(ctrl->pcie->port->subordinate,
 				  PCI_DEVFN(0, 0));
 
@@ -527,7 +565,7 @@ static irqreturn_t pciehp_isr(int irq, void *dev_id)
 	struct controller *ctrl = (struct controller *)dev_id;
 	struct pci_dev *pdev = ctrl_dev(ctrl);
 	struct device *parent = pdev->dev.parent;
-	u16 status, events;
+	u16 status, events = 0;
 
 	/*
 	 * Interrupts only occur in D3hot or shallower and only if enabled
@@ -552,6 +590,7 @@ static irqreturn_t pciehp_isr(int irq, void *dev_id)
 		}
 	}
 
+read_status:
 	pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &status);
 	if (status == (u16) ~0) {
 		ctrl_info(ctrl, "%s: no response from device\n", __func__);
@@ -564,24 +603,37 @@ static irqreturn_t pciehp_isr(int irq, void *dev_id)
 	 * Slot Status contains plain status bits as well as event
 	 * notification bits; right now we only want the event bits.
 	 */
-	events = status & (PCI_EXP_SLTSTA_ABP | PCI_EXP_SLTSTA_PFD |
-			   PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_CC |
-			   PCI_EXP_SLTSTA_DLLSC);
+	status &= PCI_EXP_SLTSTA_ABP | PCI_EXP_SLTSTA_PFD |
+		  PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_CC |
+		  PCI_EXP_SLTSTA_DLLSC;
 
 	/*
 	 * If we've already reported a power fault, don't report it again
 	 * until we've done something to handle it.
 	 */
 	if (ctrl->power_fault_detected)
-		events &= ~PCI_EXP_SLTSTA_PFD;
+		status &= ~PCI_EXP_SLTSTA_PFD;
 
+	events |= status;
 	if (!events) {
 		if (parent)
 			pm_runtime_put(parent);
 		return IRQ_NONE;
 	}
 
-	pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, events);
+	if (status) {
+		pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, events);
+
+		/*
+		 * In MSI mode, all event bits must be zero before the port
+		 * will send a new interrupt (PCIe Base Spec r5.0 sec 6.7.3.4).
+		 * So re-read the Slot Status register in case a bit was set
+		 * between read and write.
+		 */
+		if (pci_dev_msi_enabled(pdev) && !pciehp_poll_mode)
+			goto read_status;
+	}
+
 	ctrl_dbg(ctrl, "pending interrupts %#06x from Slot Status\n", events);
 	if (parent)
 		pm_runtime_put(parent);
@@ -625,17 +677,15 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
 	if (atomic_fetch_and(~RERUN_ISR, &ctrl->pending_events) & RERUN_ISR) {
 		ret = pciehp_isr(irq, dev_id);
 		enable_irq(irq);
-		if (ret != IRQ_WAKE_THREAD) {
-			pci_config_pm_runtime_put(pdev);
-			return ret;
-		}
+		if (ret != IRQ_WAKE_THREAD)
+			goto out;
 	}
 
 	synchronize_hardirq(irq);
 	events = atomic_xchg(&ctrl->pending_events, 0);
 	if (!events) {
-		pci_config_pm_runtime_put(pdev);
-		return IRQ_NONE;
+		ret = IRQ_NONE;
+		goto out;
 	}
 
 	/* Check Attention Button Pressed */
@@ -664,10 +714,12 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
 		pciehp_handle_presence_or_link_change(ctrl, events);
 	up_read(&ctrl->reset_lock);
 
+	ret = IRQ_HANDLED;
+out:
 	pci_config_pm_runtime_put(pdev);
 	ctrl->ist_running = false;
 	wake_up(&ctrl->requester);
-	return IRQ_HANDLED;
+	return ret;
 }
 
 static int pciehp_poll(void *data)
@@ -848,7 +900,7 @@ static inline void dbg_ctrl(struct controller *ctrl)
 struct controller *pcie_init(struct pcie_device *dev)
 {
 	struct controller *ctrl;
-	u32 slot_cap, link_cap;
+	u32 slot_cap, slot_cap2, link_cap;
 	u8 poweron;
 	struct pci_dev *pdev = dev->port;
 	struct pci_bus *subordinate = pdev->subordinate;
@@ -883,6 +935,16 @@ struct controller *pcie_init(struct pcie_device *dev)
 	ctrl->state = list_empty(&subordinate->devices) ? OFF_STATE : ON_STATE;
 	up_read(&pci_bus_sem);
 
+	pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP2, &slot_cap2);
+	if (slot_cap2 & PCI_EXP_SLTCAP2_IBPD) {
+		pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_IBPD_DISABLE,
+				      PCI_EXP_SLTCTL_IBPD_DISABLE);
+		ctrl->inband_presence_disabled = 1;
+	}
+
+	if (dmi_first_match(inband_presence_disabled_dmi_table))
+		ctrl->inband_presence_disabled = 1;
+
 	/* Check if Data Link Layer Link Active Reporting is implemented */
 	pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &link_cap);
 
@@ -892,7 +954,7 @@ struct controller *pcie_init(struct pcie_device *dev)
 		PCI_EXP_SLTSTA_MRLSC | PCI_EXP_SLTSTA_CC |
 		PCI_EXP_SLTSTA_DLLSC | PCI_EXP_SLTSTA_PDC);
 
-	ctrl_info(ctrl, "Slot #%d AttnBtn%c PwrCtrl%c MRL%c AttnInd%c PwrInd%c HotPlug%c Surprise%c Interlock%c NoCompl%c LLActRep%c%s\n",
+	ctrl_info(ctrl, "Slot #%d AttnBtn%c PwrCtrl%c MRL%c AttnInd%c PwrInd%c HotPlug%c Surprise%c Interlock%c NoCompl%c IbPresDis%c LLActRep%c%s\n",
 		(slot_cap & PCI_EXP_SLTCAP_PSN) >> 19,
 		FLAG(slot_cap, PCI_EXP_SLTCAP_ABP),
 		FLAG(slot_cap, PCI_EXP_SLTCAP_PCP),
@@ -903,6 +965,7 @@ struct controller *pcie_init(struct pcie_device *dev)
 		FLAG(slot_cap, PCI_EXP_SLTCAP_HPS),
 		FLAG(slot_cap, PCI_EXP_SLTCAP_EIP),
 		FLAG(slot_cap, PCI_EXP_SLTCAP_NCCS),
+		FLAG(slot_cap2, PCI_EXP_SLTCAP2_IBPD),
 		FLAG(link_cap, PCI_EXP_LNKCAP_DLLLARC),
 		pdev->broken_cmd_compl ? " (with Cmd Compl erratum)" : "");
 
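
The read_status loop added to pciehp_isr() deserves a gloss: Slot Status event bits are write-1-to-clear, and an MSI port raises a new interrupt only once every event bit is zero, so a bit that latches between the read and the clearing write would otherwise be lost forever. A self-contained sketch of the loop; the accessor callbacks are hypothetical stand-ins for the pcie_capability_{read,write}_word(pdev, PCI_EXP_SLTSTA, ...) calls:

static u16 sketch_collect_events(u16 (*read_slot_status)(void),
				 void (*clear_slot_status)(u16 bits),
				 u16 event_mask)
{
	u16 status, events = 0;

	do {
		/* Latch whatever is pending now and write 1s to clear it. */
		status = read_slot_status() & event_mask;
		events |= status;
		if (status)
			clear_slot_status(status);
		/*
		 * Re-read: a bit set between the read and the write would
		 * otherwise never trigger another MSI (PCIe r5.0, sec
		 * 6.7.3.4).
		 */
	} while (status);

	return events;
}
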
diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
@@ -291,6 +291,9 @@ static const struct pci_p2pdma_whitelist_entry {
 	{PCI_VENDOR_ID_INTEL,	0x2f01, REQ_SAME_HOST_BRIDGE},
 	/* Intel SkyLake-E */
 	{PCI_VENDOR_ID_INTEL,	0x2030, 0},
+	{PCI_VENDOR_ID_INTEL,	0x2031, 0},
+	{PCI_VENDOR_ID_INTEL,	0x2032, 0},
+	{PCI_VENDOR_ID_INTEL,	0x2033, 0},
 	{PCI_VENDOR_ID_INTEL,	0x2020, 0},
 	{}
 };
diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
@@ -439,7 +439,7 @@ enum hpx_type3_dev_type {
 static u16 hpx3_device_type(struct pci_dev *dev)
 {
 	u16 pcie_type = pci_pcie_type(dev);
-	const int pcie_to_hpx3_type[] = {
+	static const int pcie_to_hpx3_type[] = {
 		[PCI_EXP_TYPE_ENDPOINT]    = HPX_TYPE_ENDPOINT,
 		[PCI_EXP_TYPE_LEG_END]     = HPX_TYPE_LEG_END,
 		[PCI_EXP_TYPE_RC_END]      = HPX_TYPE_RC_END,
@@ -1241,6 +1241,7 @@ static void pci_acpi_setup(struct device *dev)
 
 	pci_acpi_optimize_delay(pci_dev, adev->handle);
 	pci_acpi_set_untrusted(pci_dev);
+	pci_acpi_add_edr_notifier(pci_dev);
 
 	pci_acpi_add_pm_notifier(adev, pci_dev);
 	if (!adev->wakeup.flags.valid)
@@ -1268,6 +1269,7 @@ static void pci_acpi_cleanup(struct device *dev)
 	if (!adev)
 		return;
 
+	pci_acpi_remove_edr_notifier(pci_dev);
 	pci_acpi_remove_pm_notifier(adev);
 	if (adev->wakeup.flags.valid) {
 		acpi_device_power_remove_dependent(adev, dev);
diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
@@ -156,7 +156,8 @@ static ssize_t max_link_speed_show(struct device *dev,
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
 
-	return sprintf(buf, "%s\n", PCIE_SPEED2STR(pcie_get_speed_cap(pdev)));
+	return sprintf(buf, "%s\n",
+		       pci_speed_string(pcie_get_speed_cap(pdev)));
 }
 static DEVICE_ATTR_RO(max_link_speed);
 
@@ -175,33 +176,15 @@ static ssize_t current_link_speed_show(struct device *dev,
 	struct pci_dev *pci_dev = to_pci_dev(dev);
 	u16 linkstat;
 	int err;
-	const char *speed;
+	enum pci_bus_speed speed;
 
 	err = pcie_capability_read_word(pci_dev, PCI_EXP_LNKSTA, &linkstat);
 	if (err)
 		return -EINVAL;
 
-	switch (linkstat & PCI_EXP_LNKSTA_CLS) {
-	case PCI_EXP_LNKSTA_CLS_32_0GB:
-		speed = "32 GT/s";
-		break;
-	case PCI_EXP_LNKSTA_CLS_16_0GB:
-		speed = "16 GT/s";
-		break;
-	case PCI_EXP_LNKSTA_CLS_8_0GB:
-		speed = "8 GT/s";
-		break;
-	case PCI_EXP_LNKSTA_CLS_5_0GB:
-		speed = "5 GT/s";
-		break;
-	case PCI_EXP_LNKSTA_CLS_2_5GB:
-		speed = "2.5 GT/s";
-		break;
-	default:
-		speed = "Unknown speed";
-	}
+	speed = pcie_link_speed[linkstat & PCI_EXP_LNKSTA_CLS];
 
-	return sprintf(buf, "%s\n", speed);
+	return sprintf(buf, "%s\n", pci_speed_string(speed));
 }
 static DEVICE_ATTR_RO(current_link_speed);
 
@@ -464,7 +447,8 @@ static ssize_t dev_rescan_store(struct device *dev,
 	}
 	return count;
 }
-static DEVICE_ATTR_WO(dev_rescan);
+static struct device_attribute dev_attr_dev_rescan = __ATTR(rescan, 0200, NULL,
+							    dev_rescan_store);
 
 static ssize_t remove_store(struct device *dev, struct device_attribute *attr,
 			    const char *buf, size_t count)
@@ -501,7 +485,8 @@ static ssize_t bus_rescan_store(struct device *dev,
 	}
 	return count;
 }
-static DEVICE_ATTR_WO(bus_rescan);
+static struct device_attribute dev_attr_bus_rescan = __ATTR(rescan, 0200, NULL,
+							    bus_rescan_store);
 
 #if defined(CONFIG_PM) && defined(CONFIG_ACPI)
 static ssize_t d3cold_allowed_store(struct device *dev,
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
@@ -1560,7 +1560,7 @@ void pci_restore_state(struct pci_dev *dev)
 	pci_restore_rebar_state(dev);
 	pci_restore_dpc_state(dev);
 
-	pci_cleanup_aer_error_status_regs(dev);
+	pci_aer_clear_status(dev);
 	pci_restore_aer_state(dev);
 
 	pci_restore_config_space(dev);
@@ -5841,19 +5841,10 @@ enum pci_bus_speed pcie_get_speed_cap(struct pci_dev *dev)
 	 * where only 2.5 GT/s and 5.0 GT/s speeds were defined.
 	 */
 	pcie_capability_read_dword(dev, PCI_EXP_LNKCAP2, &lnkcap2);
-	if (lnkcap2) { /* PCIe r3.0-compliant */
-		if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_32_0GB)
-			return PCIE_SPEED_32_0GT;
-		else if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_16_0GB)
-			return PCIE_SPEED_16_0GT;
-		else if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_8_0GB)
-			return PCIE_SPEED_8_0GT;
-		else if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_5_0GB)
-			return PCIE_SPEED_5_0GT;
-		else if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_2_5GB)
-			return PCIE_SPEED_2_5GT;
-		return PCI_SPEED_UNKNOWN;
-	}
+
+	/* PCIe r3.0-compliant */
+	if (lnkcap2)
+		return PCIE_LNKCAP2_SLS2SPEED(lnkcap2);
 
 	pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnkcap);
 	if ((lnkcap & PCI_EXP_LNKCAP_SLS) == PCI_EXP_LNKCAP_SLS_5_0GB)
@@ -5929,14 +5920,14 @@ void __pcie_print_link_status(struct pci_dev *dev, bool verbose)
 	if (bw_avail >= bw_cap && verbose)
 		pci_info(dev, "%u.%03u Gb/s available PCIe bandwidth (%s x%d link)\n",
 			 bw_cap / 1000, bw_cap % 1000,
-			 PCIE_SPEED2STR(speed_cap), width_cap);
+			 pci_speed_string(speed_cap), width_cap);
 	else if (bw_avail < bw_cap)
 		pci_info(dev, "%u.%03u Gb/s available PCIe bandwidth, limited by %s x%d link at %s (capable of %u.%03u Gb/s with %s x%d link)\n",
 			 bw_avail / 1000, bw_avail % 1000,
-			 PCIE_SPEED2STR(speed), width,
+			 pci_speed_string(speed), width,
 			 limiting_dev ? pci_name(limiting_dev) : "<unknown>",
 			 bw_cap / 1000, bw_cap % 1000,
-			 PCIE_SPEED2STR(speed_cap), width_cap);
+			 pci_speed_string(speed_cap), width_cap);
 }
 
 /**
diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
@@ -292,22 +292,25 @@ void pci_disable_bridge_window(struct pci_dev *dev);
 struct pci_bus *pci_bus_get(struct pci_bus *bus);
 void pci_bus_put(struct pci_bus *bus);
 
-/* PCIe link information */
-#define PCIE_SPEED2STR(speed) \
-	((speed) == PCIE_SPEED_16_0GT ? "16 GT/s" : \
-	 (speed) == PCIE_SPEED_8_0GT ? "8 GT/s" : \
-	 (speed) == PCIE_SPEED_5_0GT ? "5 GT/s" : \
-	 (speed) == PCIE_SPEED_2_5GT ? "2.5 GT/s" : \
-	 "Unknown speed")
+/* PCIe link information from Link Capabilities 2 */
+#define PCIE_LNKCAP2_SLS2SPEED(lnkcap2) \
+	((lnkcap2) & PCI_EXP_LNKCAP2_SLS_32_0GB ? PCIE_SPEED_32_0GT : \
+	 (lnkcap2) & PCI_EXP_LNKCAP2_SLS_16_0GB ? PCIE_SPEED_16_0GT : \
+	 (lnkcap2) & PCI_EXP_LNKCAP2_SLS_8_0GB ? PCIE_SPEED_8_0GT : \
+	 (lnkcap2) & PCI_EXP_LNKCAP2_SLS_5_0GB ? PCIE_SPEED_5_0GT : \
+	 (lnkcap2) & PCI_EXP_LNKCAP2_SLS_2_5GB ? PCIE_SPEED_2_5GT : \
+	 PCI_SPEED_UNKNOWN)
 
 /* PCIe speed to Mb/s reduced by encoding overhead */
 #define PCIE_SPEED2MBS_ENC(speed) \
-	((speed) == PCIE_SPEED_16_0GT ? 16000*128/130 : \
+	((speed) == PCIE_SPEED_32_0GT ? 32000*128/130 : \
+	 (speed) == PCIE_SPEED_16_0GT ? 16000*128/130 : \
 	 (speed) == PCIE_SPEED_8_0GT ? 8000*128/130 : \
 	 (speed) == PCIE_SPEED_5_0GT ? 5000*8/10 : \
 	 (speed) == PCIE_SPEED_2_5GT ? 2500*8/10 : \
 	 0)
 
+const char *pci_speed_string(enum pci_bus_speed speed);
 enum pci_bus_speed pcie_get_speed_cap(struct pci_dev *dev);
 enum pcie_link_width pcie_get_width_cap(struct pci_dev *dev);
 u32 pcie_bandwidth_capable(struct pci_dev *dev, enum pci_bus_speed *speed,
@@ -448,9 +451,13 @@ void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);
 #ifdef CONFIG_PCIE_DPC
 void pci_save_dpc_state(struct pci_dev *dev);
 void pci_restore_dpc_state(struct pci_dev *dev);
+void pci_dpc_init(struct pci_dev *pdev);
+void dpc_process_error(struct pci_dev *pdev);
+pci_ers_result_t dpc_reset_link(struct pci_dev *pdev);
 #else
 static inline void pci_save_dpc_state(struct pci_dev *dev) {}
 static inline void pci_restore_dpc_state(struct pci_dev *dev) {}
+static inline void pci_dpc_init(struct pci_dev *pdev) {}
 #endif
 
 #ifdef CONFIG_PCI_ATS
@@ -547,8 +554,9 @@ static inline int pci_dev_specific_disable_acs_redir(struct pci_dev *dev)
 #endif
 
 /* PCI error reporting and recovery */
-void pcie_do_recovery(struct pci_dev *dev, enum pci_channel_state state,
-		      u32 service);
+pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
		enum pci_channel_state state,
		pci_ers_result_t (*reset_link)(struct pci_dev *pdev));
 
 bool pcie_wait_for_link(struct pci_dev *pdev, bool active);
 #ifdef CONFIG_PCIEASPM
@@ -651,12 +659,16 @@ void pci_aer_exit(struct pci_dev *dev);
 extern const struct attribute_group aer_stats_attr_group;
 void pci_aer_clear_fatal_status(struct pci_dev *dev);
 void pci_aer_clear_device_status(struct pci_dev *dev);
+int pci_aer_clear_status(struct pci_dev *dev);
+int pci_aer_raw_clear_status(struct pci_dev *dev);
 #else
 static inline void pci_no_aer(void) { }
 static inline void pci_aer_init(struct pci_dev *d) { }
 static inline void pci_aer_exit(struct pci_dev *d) { }
 static inline void pci_aer_clear_fatal_status(struct pci_dev *dev) { }
 static inline void pci_aer_clear_device_status(struct pci_dev *dev) { }
+static inline int pci_aer_clear_status(struct pci_dev *dev) { return -EINVAL; }
+static inline int pci_aer_raw_clear_status(struct pci_dev *dev) { return -EINVAL; }
 #endif
 
 #ifdef CONFIG_ACPI
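
Worked numbers for the encoding-overhead macro above, for reference:

/*
 *   PCIE_SPEED2MBS_ENC(PCIE_SPEED_32_0GT) = 32000 * 128/130 = 31507 Mb/s
 *   PCIE_SPEED2MBS_ENC(PCIE_SPEED_16_0GT) = 16000 * 128/130 = 15753 Mb/s
 *   PCIE_SPEED2MBS_ENC(PCIE_SPEED_5_0GT)  =  5000 *   8/10  =  4000 Mb/s
 *
 * 2.5 and 5 GT/s links use 8b/10b encoding (20% overhead); 8 GT/s and
 * faster use 128b/130b (about 1.5%).
 */
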
diff --git a/drivers/pci/pcie/Kconfig b/drivers/pci/pcie/Kconfig
@@ -141,3 +141,13 @@ config PCIE_BW
 	  This enables PCI Express Bandwidth Change Notification.  If
 	  you know link width or rate changes occur only to correct
 	  unreliable links, you may answer Y.
+
+config PCIE_EDR
+	bool "PCI Express Error Disconnect Recover support"
+	depends on PCIE_DPC && ACPI
+	help
+	  This option adds Error Disconnect Recover support as specified
+	  in the Downstream Port Containment Related Enhancements ECN to
+	  the PCI Firmware Specification r3.2.  Enable this if you want to
+	  support hybrid DPC model which uses both firmware and OS to
+	  implement DPC.
diff --git a/drivers/pci/pcie/Makefile b/drivers/pci/pcie/Makefile
@@ -13,3 +13,4 @@ obj-$(CONFIG_PCIE_PME)	+= pme.o
 obj-$(CONFIG_PCIE_DPC)	+= dpc.o
 obj-$(CONFIG_PCIE_PTM)	+= ptm.o
 obj-$(CONFIG_PCIE_BW)	+= bw_notification.o
+obj-$(CONFIG_PCIE_EDR)	+= edr.o
diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
@@ -102,6 +102,7 @@ struct aer_stats {
 #define ERR_UNCOR_ID(d)			(d >> 16)
 
 static int pcie_aer_disable;
+static pci_ers_result_t aer_root_reset(struct pci_dev *dev);
 
 void pci_no_aer(void)
 {
@@ -376,7 +377,7 @@ void pci_aer_clear_device_status(struct pci_dev *dev)
 	pcie_capability_write_word(dev, PCI_EXP_DEVSTA, sta);
 }
 
-int pci_cleanup_aer_uncorrect_error_status(struct pci_dev *dev)
+int pci_aer_clear_nonfatal_status(struct pci_dev *dev)
 {
 	int pos;
 	u32 status, sev;
@@ -397,7 +398,7 @@ int pci_cleanup_aer_uncorrect_error_status(struct pci_dev *dev)
 
 	return 0;
 }
-EXPORT_SYMBOL_GPL(pci_cleanup_aer_uncorrect_error_status);
+EXPORT_SYMBOL_GPL(pci_aer_clear_nonfatal_status);
 
 void pci_aer_clear_fatal_status(struct pci_dev *dev)
 {
@@ -419,7 +420,16 @@ void pci_aer_clear_fatal_status(struct pci_dev *dev)
 	pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, status);
 }
 
-int pci_cleanup_aer_error_status_regs(struct pci_dev *dev)
+/**
+ * pci_aer_raw_clear_status - Clear AER error registers.
+ * @dev: the PCI device
+ *
+ * Clearing AER error status registers unconditionally, regardless of
+ * whether they're owned by firmware or the OS.
+ *
+ * Returns 0 on success, or negative on failure.
+ */
+int pci_aer_raw_clear_status(struct pci_dev *dev)
 {
 	int pos;
 	u32 status;
@@ -432,9 +442,6 @@ int pci_aer_raw_clear_status(struct pci_dev *dev)
 	if (!pos)
 		return -EIO;
 
-	if (pcie_aer_get_firmware_first(dev))
-		return -EIO;
-
 	port_type = pci_pcie_type(dev);
 	if (port_type == PCI_EXP_TYPE_ROOT_PORT) {
 		pci_read_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, &status);
@@ -450,6 +457,14 @@ int pci_aer_raw_clear_status(struct pci_dev *dev)
 	return 0;
 }
 
+int pci_aer_clear_status(struct pci_dev *dev)
+{
+	if (pcie_aer_get_firmware_first(dev))
+		return -EIO;
+
+	return pci_aer_raw_clear_status(dev);
+}
+
 void pci_save_aer_state(struct pci_dev *dev)
 {
 	struct pci_cap_saved_state *save_state;
@@ -515,7 +530,7 @@ void pci_aer_init(struct pci_dev *dev)
 	n = pcie_cap_has_rtctl(dev) ? 5 : 4;
 	pci_add_ext_cap_save_buffer(dev, PCI_EXT_CAP_ID_ERR, sizeof(u32) * n);
 
-	pci_cleanup_aer_error_status_regs(dev);
+	pci_aer_clear_status(dev);
 }
 
 void pci_aer_exit(struct pci_dev *dev)
@@ -1053,11 +1068,9 @@ static void handle_error_source(struct pci_dev *dev, struct aer_err_info *info)
 			      info->status);
 		pci_aer_clear_device_status(dev);
 	} else if (info->severity == AER_NONFATAL)
-		pcie_do_recovery(dev, pci_channel_io_normal,
-				 PCIE_PORT_SERVICE_AER);
+		pcie_do_recovery(dev, pci_channel_io_normal, aer_root_reset);
 	else if (info->severity == AER_FATAL)
-		pcie_do_recovery(dev, pci_channel_io_frozen,
-				 PCIE_PORT_SERVICE_AER);
+		pcie_do_recovery(dev, pci_channel_io_frozen, aer_root_reset);
 	pci_dev_put(dev);
 }
 
@@ -1094,10 +1107,10 @@ static void aer_recover_work_func(struct work_struct *work)
 		cper_print_aer(pdev, entry.severity, entry.regs);
 		if (entry.severity == AER_NONFATAL)
 			pcie_do_recovery(pdev, pci_channel_io_normal,
-					 PCIE_PORT_SERVICE_AER);
+					 aer_root_reset);
 		else if (entry.severity == AER_FATAL)
 			pcie_do_recovery(pdev, pci_channel_io_frozen,
-					 PCIE_PORT_SERVICE_AER);
+					 aer_root_reset);
 		pci_dev_put(pdev);
 	}
 }
@@ -1501,7 +1514,6 @@ static struct pcie_port_service_driver aerdriver = {
 
 	.probe		= aer_probe,
 	.remove		= aer_remove,
-	.reset_link	= aer_root_reset,
 };
 
 /**
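
pcie_do_recovery(), an internal helper declared in drivers/pci/pci.h, now receives the link-reset routine directly instead of a port-service ID; that is what lets DPC and the new EDR path reuse the same recovery flow with their own reset. A hedged sketch of a caller, where sketch_reset_link() is purely illustrative (AER passes aer_root_reset(), DPC passes dpc_reset_link()):

#include <linux/pci.h>

static pci_ers_result_t sketch_reset_link(struct pci_dev *pdev)
{
	/* reset the link below pdev, then report the outcome */
	return PCI_ERS_RESULT_RECOVERED;
}

static void sketch_handle_fatal(struct pci_dev *dev)
{
	pcie_do_recovery(dev, pci_channel_io_frozen, sketch_reset_link);
}
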
diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
@@ -273,7 +273,7 @@ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link)
 	}
 	if (consistent)
 		return;
-	pci_warn(parent, "ASPM: current common clock configuration is broken, reconfiguring\n");
+	pci_info(parent, "ASPM: current common clock configuration is inconsistent, reconfiguring\n");
 }
 
 /* Configure downstream component, all functions */
@@ -747,9 +747,9 @@ static void pcie_config_aspm_l1ss(struct pcie_link_state *link, u32 state)
 
 	/* Enable what we need to enable */
 	pci_clear_and_set_dword(parent, up_cap_ptr + PCI_L1SS_CTL1,
-				PCI_L1SS_CAP_L1_PM_SS, val);
+				PCI_L1SS_CTL1_L1SS_MASK, val);
 	pci_clear_and_set_dword(child, dw_cap_ptr + PCI_L1SS_CTL1,
-				PCI_L1SS_CAP_L1_PM_SS, val);
+				PCI_L1SS_CTL1_L1SS_MASK, val);
 }
 
 static void pcie_config_aspm_dev(struct pci_dev *pdev, u32 val)
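
Why the L1SS clear-mask change matters, spelled out (register values from include/uapi/linux/pci_regs.h):

/*
 * pci_clear_and_set_dword(dev, pos, clear, set) read-modify-writes one
 * dword, clearing 'clear' before setting 'set'.
 *
 *   PCI_L1SS_CTL1_L1SS_MASK = 0x0000000f  - the four enable bits in CTL1
 *                                           (ASPM L1.1/L1.2, PCI-PM L1.1/L1.2)
 *   PCI_L1SS_CAP_L1_PM_SS   = 0x00000010  - a *capability* bit, wrong register
 *
 * Clearing with the CAP constant left stale enable bits set in CTL1, so a
 * link could end up in an unintended L1 substate.
 */
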
diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
@@ -17,13 +17,6 @@
 #include "portdrv.h"
 #include "../pci.h"
 
-struct dpc_dev {
-	struct pcie_device	*dev;
-	u16			cap_pos;
-	bool			rp_extensions;
-	u8			rp_log_size;
-};
-
 static const char * const rp_pio_error_string[] = {
 	"Configuration Request received UR Completion",	 /* Bit Position 0 */
 	"Configuration Request received CA Completion",	 /* Bit Position 1 */
@@ -46,63 +39,42 @@ static const char * const rp_pio_error_string[] = {
 	"Memory Request Completion Timeout",		 /* Bit Position 18 */
 };
 
-static struct dpc_dev *to_dpc_dev(struct pci_dev *dev)
-{
-	struct device *device;
-
-	device = pcie_port_find_device(dev, PCIE_PORT_SERVICE_DPC);
-	if (!device)
-		return NULL;
-	return get_service_data(to_pcie_device(device));
-}
-
 void pci_save_dpc_state(struct pci_dev *dev)
 {
-	struct dpc_dev *dpc;
 	struct pci_cap_saved_state *save_state;
 	u16 *cap;
 
 	if (!pci_is_pcie(dev))
 		return;
 
-	dpc = to_dpc_dev(dev);
-	if (!dpc)
-		return;
-
 	save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_DPC);
 	if (!save_state)
 		return;
 
 	cap = (u16 *)&save_state->cap.data[0];
-	pci_read_config_word(dev, dpc->cap_pos + PCI_EXP_DPC_CTL, cap);
+	pci_read_config_word(dev, dev->dpc_cap + PCI_EXP_DPC_CTL, cap);
 }
 
 void pci_restore_dpc_state(struct pci_dev *dev)
 {
-	struct dpc_dev *dpc;
 	struct pci_cap_saved_state *save_state;
 	u16 *cap;
 
 	if (!pci_is_pcie(dev))
 		return;
 
-	dpc = to_dpc_dev(dev);
-	if (!dpc)
-		return;
-
 	save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_DPC);
 	if (!save_state)
 		return;
 
 	cap = (u16 *)&save_state->cap.data[0];
-	pci_write_config_word(dev, dpc->cap_pos + PCI_EXP_DPC_CTL, *cap);
+	pci_write_config_word(dev, dev->dpc_cap + PCI_EXP_DPC_CTL, *cap);
 }
 
-static int dpc_wait_rp_inactive(struct dpc_dev *dpc)
+static int dpc_wait_rp_inactive(struct pci_dev *pdev)
 {
 	unsigned long timeout = jiffies + HZ;
-	struct pci_dev *pdev = dpc->dev->port;
-	u16 cap = dpc->cap_pos, status;
+	u16 cap = pdev->dpc_cap, status;
 
 	pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
 	while (status & PCI_EXP_DPC_RP_BUSY &&
@@ -117,17 +89,15 @@ static int dpc_wait_rp_inactive(struct dpc_dev *dpc)
 	return 0;
 }
 
-static pci_ers_result_t dpc_reset_link(struct pci_dev *pdev)
+pci_ers_result_t dpc_reset_link(struct pci_dev *pdev)
 {
-	struct dpc_dev *dpc;
 	u16 cap;
 
 	/*
 	 * DPC disables the Link automatically in hardware, so it has
 	 * already been reset by the time we get here.
 	 */
-	dpc = to_dpc_dev(pdev);
-	cap = dpc->cap_pos;
+	cap = pdev->dpc_cap;
 
 	/*
 	 * Wait until the Link is inactive, then clear DPC Trigger Status
@@ -135,7 +105,7 @@ static pci_ers_result_t dpc_reset_link(struct pci_dev *pdev)
 	 */
 	pcie_wait_for_link(pdev, false);
 
-	if (dpc->rp_extensions && dpc_wait_rp_inactive(dpc))
+	if (pdev->dpc_rp_extensions && dpc_wait_rp_inactive(pdev))
 		return PCI_ERS_RESULT_DISCONNECT;
 
 	pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS,
@@ -147,10 +117,9 @@ static pci_ers_result_t dpc_reset_link(struct pci_dev *pdev)
 	return PCI_ERS_RESULT_RECOVERED;
 }
 
-static void dpc_process_rp_pio_error(struct dpc_dev *dpc)
+static void dpc_process_rp_pio_error(struct pci_dev *pdev)
 {
-	struct pci_dev *pdev = dpc->dev->port;
-	u16 cap = dpc->cap_pos, dpc_status, first_error;
+	u16 cap = pdev->dpc_cap, dpc_status, first_error;
 	u32 status, mask, sev, syserr, exc, dw0, dw1, dw2, dw3, log, prefix;
 	int i;
 
@@ -175,7 +144,7 @@ static void dpc_process_rp_pio_error(struct dpc_dev *dpc)
 			first_error == i ? " (First)" : "");
 	}
 
-	if (dpc->rp_log_size < 4)
+	if (pdev->dpc_rp_log_size < 4)
 		goto clear_status;
 	pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_HEADER_LOG,
 			      &dw0);
@@ -188,12 +157,12 @@ static void dpc_process_rp_pio_error(struct dpc_dev *dpc)
 	pci_err(pdev, "TLP Header: %#010x %#010x %#010x %#010x\n",
 		dw0, dw1, dw2, dw3);
 
-	if (dpc->rp_log_size < 5)
+	if (pdev->dpc_rp_log_size < 5)
 		goto clear_status;
 	pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_IMPSPEC_LOG, &log);
 	pci_err(pdev, "RP PIO ImpSpec Log %#010x\n", log);
 
-	for (i = 0; i < dpc->rp_log_size - 5; i++) {
+	for (i = 0; i < pdev->dpc_rp_log_size - 5; i++) {
 		pci_read_config_dword(pdev,
 			cap + PCI_EXP_DPC_RP_PIO_TLPPREFIX_LOG, &prefix);
 		pci_err(pdev, "TLP Prefix Header: dw%d, %#010x\n", i, prefix);
@@ -224,12 +193,10 @@ static int dpc_get_aer_uncorrect_severity(struct pci_dev *dev,
 	return 1;
 }
 
-static irqreturn_t dpc_handler(int irq, void *context)
+void dpc_process_error(struct pci_dev *pdev)
 {
+	u16 cap = pdev->dpc_cap, status, source, reason, ext_reason;
 	struct aer_err_info info;
-	struct dpc_dev *dpc = context;
-	struct pci_dev *pdev = dpc->dev->port;
-	u16 cap = dpc->cap_pos, status, source, reason, ext_reason;
 
 	pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
 	pci_read_config_word(pdev, cap + PCI_EXP_DPC_SOURCE_ID, &source);
@@ -248,27 +215,33 @@ static irqreturn_t dpc_handler(int irq, void *context)
 			 "reserved error");
 
 	/* show RP PIO error detail information */
-	if (dpc->rp_extensions && reason == 3 && ext_reason == 0)
-		dpc_process_rp_pio_error(dpc);
+	if (pdev->dpc_rp_extensions && reason == 3 && ext_reason == 0)
+		dpc_process_rp_pio_error(pdev);
 	else if (reason == 0 &&
 		 dpc_get_aer_uncorrect_severity(pdev, &info) &&
 		 aer_get_device_error_info(pdev, &info)) {
 		aer_print_error(pdev, &info);
-		pci_cleanup_aer_uncorrect_error_status(pdev);
+		pci_aer_clear_nonfatal_status(pdev);
 		pci_aer_clear_fatal_status(pdev);
 	}
+}
+
+static irqreturn_t dpc_handler(int irq, void *context)
+{
+	struct pci_dev *pdev = context;
+
+	dpc_process_error(pdev);
 
 	/* We configure DPC so it only triggers on ERR_FATAL */
-	pcie_do_recovery(pdev, pci_channel_io_frozen, PCIE_PORT_SERVICE_DPC);
+	pcie_do_recovery(pdev, pci_channel_io_frozen, dpc_reset_link);
 
 	return IRQ_HANDLED;
 }
 
 static irqreturn_t dpc_irq(int irq, void *context)
 {
-	struct dpc_dev *dpc = (struct dpc_dev *)context;
-	struct pci_dev *pdev = dpc->dev->port;
-	u16 cap = dpc->cap_pos, status;
+	struct pci_dev *pdev = context;
+	u16 cap = pdev->dpc_cap, status;
 
 	pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
 
@@ -282,10 +255,30 @@ static irqreturn_t dpc_irq(int irq, void *context)
 	return IRQ_HANDLED;
 }
 
+void pci_dpc_init(struct pci_dev *pdev)
+{
+	u16 cap;
+
+	pdev->dpc_cap = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DPC);
+	if (!pdev->dpc_cap)
+		return;
+
+	pci_read_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CAP, &cap);
+	if (!(cap & PCI_EXP_DPC_CAP_RP_EXT))
+		return;
+
+	pdev->dpc_rp_extensions = true;
+	pdev->dpc_rp_log_size = (cap & PCI_EXP_DPC_RP_PIO_LOG_SIZE) >> 8;
+	if (pdev->dpc_rp_log_size < 4 || pdev->dpc_rp_log_size > 9) {
+		pci_err(pdev, "RP PIO log size %u is invalid\n",
+			pdev->dpc_rp_log_size);
+		pdev->dpc_rp_log_size = 0;
+	}
+}
+
 #define FLAG(x, y) (((x) & (y)) ? '+' : '-')
 static int dpc_probe(struct pcie_device *dev)
 {
-	struct dpc_dev *dpc;
 	struct pci_dev *pdev = dev->port;
 	struct device *device = &dev->device;
 	int status;
@@ -294,43 +287,25 @@ static int dpc_probe(struct pcie_device *dev)
 	if (pcie_aer_get_firmware_first(pdev) && !pcie_ports_dpc_native)
 		return -ENOTSUPP;
 
-	dpc = devm_kzalloc(device, sizeof(*dpc), GFP_KERNEL);
-	if (!dpc)
-		return -ENOMEM;
-
-	dpc->cap_pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DPC);
-	dpc->dev = dev;
-	set_service_data(dev, dpc);
-
 	status = devm_request_threaded_irq(device, dev->irq, dpc_irq,
 					   dpc_handler, IRQF_SHARED,
-					   "pcie-dpc", dpc);
+					   "pcie-dpc", pdev);
 	if (status) {
 		pci_warn(pdev, "request IRQ%d failed: %d\n", dev->irq,
 			 status);
 		return status;
 	}
 
-	pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CAP, &cap);
-	pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, &ctl);
-
-	dpc->rp_extensions = (cap & PCI_EXP_DPC_CAP_RP_EXT);
-	if (dpc->rp_extensions) {
-		dpc->rp_log_size = (cap & PCI_EXP_DPC_RP_PIO_LOG_SIZE) >> 8;
-		if (dpc->rp_log_size < 4 || dpc->rp_log_size > 9) {
-			pci_err(pdev, "RP PIO log size %u is invalid\n",
-				dpc->rp_log_size);
-			dpc->rp_log_size = 0;
-		}
-	}
+	pci_read_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CAP, &cap);
+	pci_read_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CTL, &ctl);
 
 	ctl = (ctl & 0xfff4) | PCI_EXP_DPC_CTL_EN_FATAL | PCI_EXP_DPC_CTL_INT_EN;
-	pci_write_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, ctl);
+	pci_write_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CTL, ctl);
 
 	pci_info(pdev, "error containment capabilities: Int Msg #%d, RPExt%c PoisonedTLP%c SwTrigger%c RP PIO Log %d, DL_ActiveErr%c\n",
 		 cap & PCI_EXP_DPC_IRQ, FLAG(cap, PCI_EXP_DPC_CAP_RP_EXT),
 		 FLAG(cap, PCI_EXP_DPC_CAP_POISONED_TLP),
-		 FLAG(cap, PCI_EXP_DPC_CAP_SW_TRIGGER), dpc->rp_log_size,
+		 FLAG(cap, PCI_EXP_DPC_CAP_SW_TRIGGER), pdev->dpc_rp_log_size,
		 FLAG(cap, PCI_EXP_DPC_CAP_DL_ACTIVE));
 
 	pci_add_ext_cap_save_buffer(pdev, PCI_EXT_CAP_ID_DPC, sizeof(u16));
@@ -339,13 +314,12 @@ static int dpc_probe(struct pcie_device *dev)
 
 static void dpc_remove(struct pcie_device *dev)
 {
-	struct dpc_dev *dpc = get_service_data(dev);
 	struct pci_dev *pdev = dev->port;
 	u16 ctl;
 
-	pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, &ctl);
+	pci_read_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CTL, &ctl);
 	ctl &= ~(PCI_EXP_DPC_CTL_EN_FATAL | PCI_EXP_DPC_CTL_INT_EN);
-	pci_write_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, ctl);
+	pci_write_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CTL, ctl);
 }
 
 static struct pcie_port_service_driver dpcdriver = {
@@ -354,7 +328,6 @@ static struct pcie_port_service_driver dpcdriver = {
 	.service	= PCIE_PORT_SERVICE_DPC,
 	.probe		= dpc_probe,
 	.remove		= dpc_remove,
-	.reset_link	= dpc_reset_link,
|
||||
};
|
||||
|
||||
int __init pcie_dpc_init(void)
|
||||
|
|
|
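A note for readers: pci_dpc_init() above derives the RP PIO log size from bits 11:8 of the DPC Capability register. A minimal userspace sketch of that decoding; the two register constants are copied from the kernel's pci_regs.h, and the sample capability value is made up:

#include <stdint.h>
#include <stdio.h>

#define PCI_EXP_DPC_CAP_RP_EXT		0x0020	/* Root Port extensions */
#define PCI_EXP_DPC_RP_PIO_LOG_SIZE	0x0F00	/* RP PIO Log Size, bits 11:8 */

int main(void)
{
	uint16_t cap = 0x0721;	/* hypothetical DPC Capability register value */
	unsigned int log_size;

	if (!(cap & PCI_EXP_DPC_CAP_RP_EXT))
		return 0;	/* no RP extensions, so no RP PIO log */

	log_size = (cap & PCI_EXP_DPC_RP_PIO_LOG_SIZE) >> 8;
	/* valid sizes are 4..9 DWORDs, the same range pci_dpc_init() accepts */
	if (log_size < 4 || log_size > 9)
		log_size = 0;

	printf("RP PIO log size: %u dwords\n", log_size);	/* prints 7 */
	return 0;
}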
diff --git a/drivers/pci/pcie/edr.c b/drivers/pci/pcie/edr.c
new file mode 100644
--- /dev/null
+++ b/drivers/pci/pcie/edr.c
@@ -0,0 +1,239 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * PCI Error Disconnect Recover support
+ * Author: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
+ *
+ * Copyright (C) 2020 Intel Corp.
+ */
+
+#define dev_fmt(fmt) "EDR: " fmt
+
+#include <linux/pci.h>
+#include <linux/pci-acpi.h>
+
+#include "portdrv.h"
+#include "../pci.h"
+
+#define EDR_PORT_DPC_ENABLE_DSM		0x0C
+#define EDR_PORT_LOCATE_DSM		0x0D
+#define EDR_OST_SUCCESS			0x80
+#define EDR_OST_FAILED			0x81
+
+/*
+ * _DSM wrapper function to enable/disable DPC
+ * @pdev   : PCI device structure
+ *
+ * returns 0 on success or errno on failure.
+ */
+static int acpi_enable_dpc(struct pci_dev *pdev)
+{
+	struct acpi_device *adev = ACPI_COMPANION(&pdev->dev);
+	union acpi_object *obj, argv4, req;
+	int status = 0;
+
+	/*
+	 * Behavior when calling unsupported _DSM functions is undefined,
+	 * so check whether EDR_PORT_DPC_ENABLE_DSM is supported.
+	 */
+	if (!acpi_check_dsm(adev->handle, &pci_acpi_dsm_guid, 5,
+			    1ULL << EDR_PORT_DPC_ENABLE_DSM))
+		return 0;
+
+	req.type = ACPI_TYPE_INTEGER;
+	req.integer.value = 1;
+
+	argv4.type = ACPI_TYPE_PACKAGE;
+	argv4.package.count = 1;
+	argv4.package.elements = &req;
+
+	/*
+	 * Per Downstream Port Containment Related Enhancements ECN to PCI
+	 * Firmware Specification r3.2, sec 4.6.12, EDR_PORT_DPC_ENABLE_DSM is
+	 * optional. Return success if it's not implemented.
+	 */
+	obj = acpi_evaluate_dsm(adev->handle, &pci_acpi_dsm_guid, 5,
+				EDR_PORT_DPC_ENABLE_DSM, &argv4);
+	if (!obj)
+		return 0;
+
+	if (obj->type != ACPI_TYPE_INTEGER) {
+		pci_err(pdev, FW_BUG "Enable DPC _DSM returned non integer\n");
+		status = -EIO;
+	}
+
+	if (obj->integer.value != 1) {
+		pci_err(pdev, "Enable DPC _DSM failed to enable DPC\n");
+		status = -EIO;
+	}
+
+	ACPI_FREE(obj);
+
+	return status;
+}
+
+/*
+ * _DSM wrapper function to locate DPC port
+ * @pdev   : Device which received EDR event
+ *
+ * Returns pci_dev or NULL. Caller is responsible for dropping a reference
+ * on the returned pci_dev with pci_dev_put().
+ */
+static struct pci_dev *acpi_dpc_port_get(struct pci_dev *pdev)
+{
+	struct acpi_device *adev = ACPI_COMPANION(&pdev->dev);
+	union acpi_object *obj;
+	u16 port;
+
+	/*
+	 * Behavior when calling unsupported _DSM functions is undefined,
+	 * so check whether EDR_PORT_DPC_ENABLE_DSM is supported.
+	 */
+	if (!acpi_check_dsm(adev->handle, &pci_acpi_dsm_guid, 5,
+			    1ULL << EDR_PORT_LOCATE_DSM))
+		return pci_dev_get(pdev);
+
+	obj = acpi_evaluate_dsm(adev->handle, &pci_acpi_dsm_guid, 5,
+				EDR_PORT_LOCATE_DSM, NULL);
+	if (!obj)
+		return pci_dev_get(pdev);
+
+	if (obj->type != ACPI_TYPE_INTEGER) {
+		ACPI_FREE(obj);
+		pci_err(pdev, FW_BUG "Locate Port _DSM returned non integer\n");
+		return NULL;
+	}
+
+	/*
+	 * Firmware returns DPC port BDF details in following format:
+	 *	15:8 = bus
+	 *	 7:3 = device
+	 *	 2:0 = function
+	 */
+	port = obj->integer.value;
+
+	ACPI_FREE(obj);
+
+	return pci_get_domain_bus_and_slot(pci_domain_nr(pdev->bus),
+					   PCI_BUS_NUM(port), port & 0xff);
+}
+
+/*
+ * _OST wrapper function to let firmware know the status of EDR event
+ * @pdev   : Device used to send _OST
+ * @edev   : Device which experienced EDR event
+ * @status : Status of EDR event
+ */
+static int acpi_send_edr_status(struct pci_dev *pdev, struct pci_dev *edev,
+				u16 status)
+{
+	struct acpi_device *adev = ACPI_COMPANION(&pdev->dev);
+	u32 ost_status;
+
+	pci_dbg(pdev, "Status for %s: %#x\n", pci_name(edev), status);
+
+	ost_status = PCI_DEVID(edev->bus->number, edev->devfn) << 16;
+	ost_status |= status;
+
+	status = acpi_evaluate_ost(adev->handle, ACPI_NOTIFY_DISCONNECT_RECOVER,
+				   ost_status, NULL);
+	if (ACPI_FAILURE(status))
+		return -EINVAL;
+
+	return 0;
+}
+
+static void edr_handle_event(acpi_handle handle, u32 event, void *data)
+{
+	struct pci_dev *pdev = data, *edev;
+	pci_ers_result_t estate = PCI_ERS_RESULT_DISCONNECT;
+	u16 status;
+
+	pci_info(pdev, "ACPI event %#x received\n", event);
+
+	if (event != ACPI_NOTIFY_DISCONNECT_RECOVER)
+		return;
+
+	/* Locate the port which issued EDR event */
+	edev = acpi_dpc_port_get(pdev);
+	if (!edev) {
+		pci_err(pdev, "Firmware failed to locate DPC port\n");
+		return;
+	}
+
+	pci_dbg(pdev, "Reported EDR dev: %s\n", pci_name(edev));
+
+	/* If port does not support DPC, just send the OST */
+	if (!edev->dpc_cap) {
+		pci_err(edev, FW_BUG "This device doesn't support DPC\n");
+		goto send_ost;
+	}
+
+	/* Check if there is a valid DPC trigger */
+	pci_read_config_word(edev, edev->dpc_cap + PCI_EXP_DPC_STATUS, &status);
+	if (!(status & PCI_EXP_DPC_STATUS_TRIGGER)) {
+		pci_err(edev, "Invalid DPC trigger %#010x\n", status);
+		goto send_ost;
+	}
+
+	dpc_process_error(edev);
+	pci_aer_raw_clear_status(edev);
+
+	/*
+	 * Irrespective of whether the DPC event is triggered by ERR_FATAL
+	 * or ERR_NONFATAL, since the link is already down, use the FATAL
+	 * error recovery path for both cases.
+	 */
+	estate = pcie_do_recovery(edev, pci_channel_io_frozen, dpc_reset_link);
+
+send_ost:
+
+	/*
+	 * If recovery is successful, send _OST(0xF, BDF << 16 | 0x80)
+	 * to firmware. If not successful, send _OST(0xF, BDF << 16 | 0x81).
+	 */
+	if (estate == PCI_ERS_RESULT_RECOVERED) {
+		pci_dbg(edev, "DPC port successfully recovered\n");
+		acpi_send_edr_status(pdev, edev, EDR_OST_SUCCESS);
+	} else {
+		pci_dbg(edev, "DPC port recovery failed\n");
+		acpi_send_edr_status(pdev, edev, EDR_OST_FAILED);
+	}
+
+	pci_dev_put(edev);
+}
+
+void pci_acpi_add_edr_notifier(struct pci_dev *pdev)
+{
+	struct acpi_device *adev = ACPI_COMPANION(&pdev->dev);
+	acpi_status status;
+
+	if (!adev) {
+		pci_dbg(pdev, "No valid ACPI node, skipping EDR init\n");
+		return;
+	}
+
+	status = acpi_install_notify_handler(adev->handle, ACPI_SYSTEM_NOTIFY,
+					     edr_handle_event, pdev);
+	if (ACPI_FAILURE(status)) {
+		pci_err(pdev, "Failed to install notify handler\n");
+		return;
+	}
+
+	if (acpi_enable_dpc(pdev))
+		acpi_remove_notify_handler(adev->handle, ACPI_SYSTEM_NOTIFY,
+					   edr_handle_event);
+	else
+		pci_dbg(pdev, "Notify handler installed\n");
+}
+
+void pci_acpi_remove_edr_notifier(struct pci_dev *pdev)
+{
+	struct acpi_device *adev = ACPI_COMPANION(&pdev->dev);
+
+	if (!adev)
+		return;
+
+	acpi_remove_notify_handler(adev->handle, ACPI_SYSTEM_NOTIFY,
+				   edr_handle_event);
+	pci_dbg(pdev, "Notify handler removed\n");
+}
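The locate-port comment above documents the BDF packing (15:8 bus, 7:3 device, 2:0 function), which reduces to two shifts and a mask. A standalone sketch with a made-up _DSM return value:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t port = 0x1a28;		/* hypothetical EDR_PORT_LOCATE_DSM result */
	unsigned int bus = port >> 8;	/* what PCI_BUS_NUM(port) extracts */
	unsigned int devfn = port & 0xff; /* what pci_get_domain_bus_and_slot() takes */

	printf("bus %02x device %02x function %u\n",
	       bus, devfn >> 3, devfn & 0x7);	/* bus 1a device 05 function 0 */
	return 0;
}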
diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c
--- a/drivers/pci/pcie/err.c
+++ b/drivers/pci/pcie/err.c
@@ -146,49 +146,9 @@ out:
 	return 0;
 }
 
-/**
- * default_reset_link - default reset function
- * @dev: pointer to pci_dev data structure
- *
- * Invoked when performing link reset on a Downstream Port or a
- * Root Port with no aer driver.
- */
-static pci_ers_result_t default_reset_link(struct pci_dev *dev)
-{
-	int rc;
-
-	rc = pci_bus_error_reset(dev);
-	pci_printk(KERN_DEBUG, dev, "downstream link has been reset\n");
-	return rc ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED;
-}
-
-static pci_ers_result_t reset_link(struct pci_dev *dev, u32 service)
-{
-	pci_ers_result_t status;
-	struct pcie_port_service_driver *driver = NULL;
-
-	driver = pcie_port_find_service(dev, service);
-	if (driver && driver->reset_link) {
-		status = driver->reset_link(dev);
-	} else if (pcie_downstream_port(dev)) {
-		status = default_reset_link(dev);
-	} else {
-		pci_printk(KERN_DEBUG, dev, "no link-reset support at upstream device %s\n",
-			pci_name(dev));
-		return PCI_ERS_RESULT_DISCONNECT;
-	}
-
-	if (status != PCI_ERS_RESULT_RECOVERED) {
-		pci_printk(KERN_DEBUG, dev, "link reset at upstream device %s failed\n",
-			pci_name(dev));
-		return PCI_ERS_RESULT_DISCONNECT;
-	}
-
-	return status;
-}
-
-void pcie_do_recovery(struct pci_dev *dev, enum pci_channel_state state,
-		      u32 service)
+pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
+			enum pci_channel_state state,
+			pci_ers_result_t (*reset_link)(struct pci_dev *pdev))
 {
 	pci_ers_result_t status = PCI_ERS_RESULT_CAN_RECOVER;
 	struct pci_bus *bus;
@@ -203,14 +163,16 @@ void pcie_do_recovery(struct pci_dev *dev, enum pci_channel_state state,
 	bus = dev->subordinate;
 
 	pci_dbg(dev, "broadcast error_detected message\n");
-	if (state == pci_channel_io_frozen)
+	if (state == pci_channel_io_frozen) {
 		pci_walk_bus(bus, report_frozen_detected, &status);
-	else
+		status = reset_link(dev);
+		if (status != PCI_ERS_RESULT_RECOVERED) {
+			pci_warn(dev, "link reset failed\n");
+			goto failed;
+		}
+	} else {
 		pci_walk_bus(bus, report_normal_detected, &status);
+	}
 
-	if (state == pci_channel_io_frozen &&
-	    reset_link(dev, service) != PCI_ERS_RESULT_RECOVERED)
-		goto failed;
-
 	if (status == PCI_ERS_RESULT_CAN_RECOVER) {
 		status = PCI_ERS_RESULT_RECOVERED;
@@ -236,13 +198,15 @@ void pcie_do_recovery(struct pci_dev *dev, enum pci_channel_state state,
 	pci_walk_bus(bus, report_resume, &status);
 
 	pci_aer_clear_device_status(dev);
-	pci_cleanup_aer_uncorrect_error_status(dev);
+	pci_aer_clear_nonfatal_status(dev);
 	pci_info(dev, "device recovery successful\n");
-	return;
+	return status;
 
 failed:
 	pci_uevent_ers(dev, PCI_ERS_RESULT_DISCONNECT);
 
 	/* TODO: Should kernel panic here? */
 	pci_info(dev, "device recovery failed\n");
+
+	return status;
 }
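The key interface change above is that pcie_do_recovery() now takes the link-reset routine as a callback instead of a port-service ID. A compilable sketch of that contract, using stand-in types rather than the kernel's:

#include <stdio.h>

struct pci_dev;				/* opaque stand-in */

typedef enum {
	PCI_ERS_RESULT_RECOVERED,
	PCI_ERS_RESULT_DISCONNECT,
} pci_ers_result_t;

/* stands in for dpc_reset_link(); a real driver would reset the link here */
static pci_ers_result_t demo_reset_link(struct pci_dev *pdev)
{
	(void)pdev;
	return PCI_ERS_RESULT_RECOVERED;
}

/* frozen-channel path only: reset the link first, bail out on failure */
static pci_ers_result_t demo_do_recovery(struct pci_dev *dev,
		pci_ers_result_t (*reset_link)(struct pci_dev *pdev))
{
	if (reset_link(dev) != PCI_ERS_RESULT_RECOVERED)
		return PCI_ERS_RESULT_DISCONNECT;
	return PCI_ERS_RESULT_RECOVERED;
}

int main(void)
{
	pci_ers_result_t r = demo_do_recovery(NULL, demo_reset_link);

	printf("recovery: %s\n",
	       r == PCI_ERS_RESULT_RECOVERED ? "recovered" : "disconnect");
	return 0;
}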
diff --git a/drivers/pci/pcie/portdrv.h b/drivers/pci/pcie/portdrv.h
--- a/drivers/pci/pcie/portdrv.h
+++ b/drivers/pci/pcie/portdrv.h
@@ -92,9 +92,6 @@ struct pcie_port_service_driver {
 	/* Device driver may resume normal operations */
 	void (*error_resume)(struct pci_dev *dev);
 
-	/* Link Reset Capability - AER service driver specific */
-	pci_ers_result_t (*reset_link)(struct pci_dev *dev);
-
 	int port_type;	/* Type of the port this driver can handle */
 	u32 service;	/* Port service this device represents */
 
@@ -161,7 +158,5 @@ static inline int pcie_aer_get_firmware_first(struct pci_dev *pci_dev)
 }
 #endif
 
-struct pcie_port_service_driver *pcie_port_find_service(struct pci_dev *dev,
-							u32 service);
 struct device *pcie_port_find_device(struct pci_dev *dev, u32 service);
 #endif /* _PORTDRV_H_ */
diff --git a/drivers/pci/pcie/portdrv_core.c b/drivers/pci/pcie/portdrv_core.c
--- a/drivers/pci/pcie/portdrv_core.c
+++ b/drivers/pci/pcie/portdrv_core.c
@@ -458,27 +458,6 @@ static int find_service_iter(struct device *device, void *data)
 	return 0;
 }
 
-/**
- * pcie_port_find_service - find the service driver
- * @dev: PCI Express port the service is associated with
- * @service: Service to find
- *
- * Find PCI Express port service driver associated with given service
- */
-struct pcie_port_service_driver *pcie_port_find_service(struct pci_dev *dev,
-							u32 service)
-{
-	struct pcie_port_service_driver *drv;
-	struct portdrv_service_data pdrvs;
-
-	pdrvs.drv = NULL;
-	pdrvs.service = service;
-	device_for_each_child(&dev->dev, &pdrvs, find_service_iter);
-
-	drv = pdrvs.drv;
-	return drv;
-}
-
 /**
  * pcie_port_find_device - find the struct device
  * @dev: PCI Express port the service is associated with
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -598,6 +598,7 @@ static void pci_init_host_bridge(struct pci_host_bridge *bridge)
 	bridge->native_shpc_hotplug = 1;
 	bridge->native_pme = 1;
 	bridge->native_ltr = 1;
+	bridge->native_dpc = 1;
 }
 
 struct pci_host_bridge *pci_alloc_host_bridge(size_t priv)
@@ -640,6 +641,7 @@ void pci_free_host_bridge(struct pci_host_bridge *bridge)
 }
 EXPORT_SYMBOL(pci_free_host_bridge);
 
+/* Indexed by PCI_X_SSTATUS_FREQ (secondary bus mode and frequency) */
 static const unsigned char pcix_bus_speed[] = {
 	PCI_SPEED_UNKNOWN,		/* 0 */
 	PCI_SPEED_66MHz_PCIX,		/* 1 */
@@ -659,6 +661,7 @@ static const unsigned char pcix_bus_speed[] = {
 	PCI_SPEED_133MHz_PCIX_533	/* F */
 };
 
+/* Indexed by PCI_EXP_LNKCAP_SLS, PCI_EXP_LNKSTA_CLS */
 const unsigned char pcie_link_speed[] = {
 	PCI_SPEED_UNKNOWN,		/* 0 */
 	PCIE_SPEED_2_5GT,		/* 1 */
@@ -677,6 +680,44 @@ const unsigned char pcie_link_speed[] = {
 	PCI_SPEED_UNKNOWN,		/* E */
 	PCI_SPEED_UNKNOWN		/* F */
 };
+EXPORT_SYMBOL_GPL(pcie_link_speed);
+
+const char *pci_speed_string(enum pci_bus_speed speed)
+{
+	/* Indexed by the pci_bus_speed enum */
+	static const char *speed_strings[] = {
+		"33 MHz PCI",		/* 0x00 */
+		"66 MHz PCI",		/* 0x01 */
+		"66 MHz PCI-X",		/* 0x02 */
+		"100 MHz PCI-X",	/* 0x03 */
+		"133 MHz PCI-X",	/* 0x04 */
+		NULL,			/* 0x05 */
+		NULL,			/* 0x06 */
+		NULL,			/* 0x07 */
+		NULL,			/* 0x08 */
+		"66 MHz PCI-X 266",	/* 0x09 */
+		"100 MHz PCI-X 266",	/* 0x0a */
+		"133 MHz PCI-X 266",	/* 0x0b */
+		"Unknown AGP",		/* 0x0c */
+		"1x AGP",		/* 0x0d */
+		"2x AGP",		/* 0x0e */
+		"4x AGP",		/* 0x0f */
+		"8x AGP",		/* 0x10 */
+		"66 MHz PCI-X 533",	/* 0x11 */
+		"100 MHz PCI-X 533",	/* 0x12 */
+		"133 MHz PCI-X 533",	/* 0x13 */
+		"2.5 GT/s PCIe",	/* 0x14 */
+		"5.0 GT/s PCIe",	/* 0x15 */
+		"8.0 GT/s PCIe",	/* 0x16 */
+		"16.0 GT/s PCIe",	/* 0x17 */
+		"32.0 GT/s PCIe",	/* 0x18 */
+	};
+
+	if (speed < ARRAY_SIZE(speed_strings))
+		return speed_strings[speed];
+	return "Unknown";
+}
+EXPORT_SYMBOL_GPL(pci_speed_string);
 
 void pcie_update_link_speed(struct pci_bus *bus, u16 linksta)
 {
@@ -2329,6 +2370,7 @@ static void pci_init_capabilities(struct pci_dev *dev)
 	pci_enable_acs(dev);	/* Enable ACS P2P upstream forwarding */
 	pci_ptm_init(dev);	/* Precision Time Measurement */
 	pci_aer_init(dev);	/* Advanced Error Reporting */
+	pci_dpc_init(dev);	/* Downstream Port Containment */
 
 	pcie_report_downtraining(dev);
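pci_speed_string() is just a bounds-checked table lookup. A reduced standalone model, table truncated to three entries, with an extra NULL-hole check added for safety:

#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

static const char *speed_strings[] = {
	"33 MHz PCI",	/* 0x00 */
	"66 MHz PCI",	/* 0x01 */
	"66 MHz PCI-X",	/* 0x02 */
};

static const char *speed_string(unsigned int speed)
{
	/* out-of-range (and, in this model, NULL holes) fall back to "Unknown" */
	if (speed < ARRAY_SIZE(speed_strings) && speed_strings[speed])
		return speed_strings[speed];
	return "Unknown";
}

int main(void)
{
	printf("%s\n", speed_string(1));	/* "66 MHz PCI" */
	printf("%s\n", speed_string(0xff));	/* "Unknown" */
	return 0;
}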
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -1970,26 +1970,92 @@ DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_80332_1, quirk
 /*
  * IO-APIC1 on 6300ESB generates boot interrupts, see Intel order no
  * 300641-004US, section 5.7.3.
+ *
+ * Core IO on Xeon E5 1600/2600/4600, see Intel order no 326509-003.
+ * Core IO on Xeon E5 v2, see Intel order no 329188-003.
+ * Core IO on Xeon E7 v2, see Intel order no 329595-002.
+ * Core IO on Xeon E5 v3, see Intel order no 330784-003.
+ * Core IO on Xeon E7 v3, see Intel order no 332315-001US.
+ * Core IO on Xeon E5 v4, see Intel order no 333810-002US.
+ * Core IO on Xeon E7 v4, see Intel order no 332315-001US.
+ * Core IO on Xeon D-1500, see Intel order no 332051-001.
+ * Core IO on Xeon Scalable, see Intel order no 610950.
  */
-#define INTEL_6300_IOAPIC_ABAR		0x40
+#define INTEL_6300_IOAPIC_ABAR		0x40	/* Bus 0, Dev 29, Func 5 */
 #define INTEL_6300_DISABLE_BOOT_IRQ	(1<<14)
 
+#define INTEL_CIPINTRC_CFG_OFFSET	0x14C	/* Bus 0, Dev 5, Func 0 */
+#define INTEL_CIPINTRC_DIS_INTX_ICH	(1<<25)
+
 static void quirk_disable_intel_boot_interrupt(struct pci_dev *dev)
 {
 	u16 pci_config_word;
+	u32 pci_config_dword;
 
 	if (noioapicquirk)
 		return;
 
-	pci_read_config_word(dev, INTEL_6300_IOAPIC_ABAR, &pci_config_word);
-	pci_config_word |= INTEL_6300_DISABLE_BOOT_IRQ;
-	pci_write_config_word(dev, INTEL_6300_IOAPIC_ABAR, pci_config_word);
-
+	switch (dev->device) {
+	case PCI_DEVICE_ID_INTEL_ESB_10:
+		pci_read_config_word(dev, INTEL_6300_IOAPIC_ABAR,
+				     &pci_config_word);
+		pci_config_word |= INTEL_6300_DISABLE_BOOT_IRQ;
+		pci_write_config_word(dev, INTEL_6300_IOAPIC_ABAR,
+				      pci_config_word);
+		break;
+	case 0x3c28:	/* Xeon E5 1600/2600/4600	*/
+	case 0x0e28:	/* Xeon E5/E7 V2		*/
+	case 0x2f28:	/* Xeon E5/E7 V3,V4		*/
+	case 0x6f28:	/* Xeon D-1500			*/
+	case 0x2034:	/* Xeon Scalable Family		*/
+		pci_read_config_dword(dev, INTEL_CIPINTRC_CFG_OFFSET,
+				      &pci_config_dword);
+		pci_config_dword |= INTEL_CIPINTRC_DIS_INTX_ICH;
+		pci_write_config_dword(dev, INTEL_CIPINTRC_CFG_OFFSET,
+				       pci_config_dword);
+		break;
+	default:
+		return;
+	}
 	pci_info(dev, "disabled boot interrupts on device [%04x:%04x]\n",
 		 dev->vendor, dev->device);
 }
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_10, quirk_disable_intel_boot_interrupt);
-DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_10, quirk_disable_intel_boot_interrupt);
+/*
+ * Device 29 Func 5 Device IDs of IO-APIC
+ * containing ABAR—APIC1 Alternate Base Address Register
+ */
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_10,
+		quirk_disable_intel_boot_interrupt);
+DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_10,
+		quirk_disable_intel_boot_interrupt);
+
+/*
+ * Device 5 Func 0 Device IDs of Core IO modules/hubs
+ * containing Coherent Interface Protocol Interrupt Control
+ *
+ * Device IDs obtained from volume 2 datasheets of commented
+ * families above.
+ */
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x3c28,
+		quirk_disable_intel_boot_interrupt);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0e28,
+		quirk_disable_intel_boot_interrupt);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2f28,
+		quirk_disable_intel_boot_interrupt);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x6f28,
+		quirk_disable_intel_boot_interrupt);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2034,
+		quirk_disable_intel_boot_interrupt);
+DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x3c28,
+		quirk_disable_intel_boot_interrupt);
+DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x0e28,
+		quirk_disable_intel_boot_interrupt);
+DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x2f28,
+		quirk_disable_intel_boot_interrupt);
+DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x6f28,
+		quirk_disable_intel_boot_interrupt);
+DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x2034,
+		quirk_disable_intel_boot_interrupt);
 
 /* Disable boot interrupts on HT-1000 */
 #define BC_HT1000_FEATURE_REG		0x64
@@ -4399,6 +4465,29 @@ static int pci_quirk_xgene_acs(struct pci_dev *dev, u16 acs_flags)
 		PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
 }
 
+/*
+ * Many Zhaoxin Root Ports and Switch Downstream Ports have no ACS capability.
+ * But the implementation could block peer-to-peer transactions between them
+ * and provide ACS-like functionality.
+ */
+static int pci_quirk_zhaoxin_pcie_ports_acs(struct pci_dev *dev, u16 acs_flags)
+{
+	if (!pci_is_pcie(dev) ||
+	    ((pci_pcie_type(dev) != PCI_EXP_TYPE_ROOT_PORT) &&
+	     (pci_pcie_type(dev) != PCI_EXP_TYPE_DOWNSTREAM)))
+		return -ENOTTY;
+
+	switch (dev->device) {
+	case 0x0710 ... 0x071e:
+	case 0x0721:
+	case 0x0723 ... 0x0732:
+		return pci_acs_ctrl_enabled(acs_flags,
+			PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
+	}
+
+	return false;
+}
+
 /*
  * Many Intel PCH Root Ports do provide ACS-like features to disable peer
  * transactions and validate bus numbers in requests, but do not provide an
@@ -4701,6 +4790,12 @@ static const struct pci_dev_acs_enabled {
 	{ PCI_VENDOR_ID_BROADCOM, 0xD714, pci_quirk_brcm_acs },
 	/* Amazon Annapurna Labs */
 	{ PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS, 0x0031, pci_quirk_al_acs },
+	/* Zhaoxin multi-function devices */
+	{ PCI_VENDOR_ID_ZHAOXIN, 0x3038, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_ZHAOXIN, 0x3104, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_ZHAOXIN, 0x9083, pci_quirk_mf_endpoint_acs },
+	/* Zhaoxin Root/Downstream Ports */
+	{ PCI_VENDOR_ID_ZHAOXIN, PCI_ANY_ID, pci_quirk_zhaoxin_pcie_ports_acs },
 	{ 0 }
 };
 
@@ -5461,3 +5556,14 @@ out_disable:
 DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1,
 			      PCI_CLASS_DISPLAY_VGA, 8,
 			      quirk_reset_lenovo_thinkpad_p50_nvgpu);
+
+/*
+ * Device [1b21:2142]
+ * When in D0, PME# doesn't get asserted when plugging USB 3.0 device.
+ */
+static void pci_fixup_no_d0_pme(struct pci_dev *dev)
+{
+	pci_info(dev, "PME# does not work under D0, disabling it\n");
+	dev->pme_support &= ~(PCI_PM_CAP_PME_D0 >> PCI_PM_CAP_PME_SHIFT);
+}
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ASMEDIA, 0x2142, pci_fixup_no_d0_pme);
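The Zhaoxin port quirk above matches device-ID windows with the case-range extension supported by GCC and Clang. A standalone illustration of that matching style:

#include <stdio.h>

static int zhaoxin_port_has_acs_like(unsigned short device)
{
	switch (device) {	/* case ranges are a GNU C extension */
	case 0x0710 ... 0x071e:
	case 0x0721:
	case 0x0723 ... 0x0732:
		return 1;
	}
	return 0;
}

int main(void)
{
	printf("0x0715: %d\n", zhaoxin_port_has_acs_like(0x0715));	/* 1 */
	printf("0x0722: %d\n", zhaoxin_port_has_acs_like(0x0722));	/* 0 */
	return 0;
}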
diff --git a/drivers/pci/rom.c b/drivers/pci/rom.c
--- a/drivers/pci/rom.c
+++ b/drivers/pci/rom.c
@@ -195,20 +195,3 @@ void pci_unmap_rom(struct pci_dev *pdev, void __iomem *rom)
 		pci_disable_rom(pdev);
 }
 EXPORT_SYMBOL(pci_unmap_rom);
-
-/**
- * pci_platform_rom - provides a pointer to any ROM image provided by the
- *		      platform
- * @pdev: pointer to pci device struct
- * @size: pointer to receive size of pci window over ROM
- */
-void __iomem *pci_platform_rom(struct pci_dev *pdev, size_t *size)
-{
-	if (pdev->rom && pdev->romlen) {
-		*size = pdev->romlen;
-		return phys_to_virt((phys_addr_t)pdev->rom);
-	}
-
-	return NULL;
-}
-EXPORT_SYMBOL(pci_platform_rom);
diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c
--- a/drivers/pci/setup-bus.c
+++ b/drivers/pci/setup-bus.c
@@ -846,7 +846,7 @@ static resource_size_t window_alignment(struct pci_bus *bus, unsigned long type)
 	 * Per spec, I/O windows are 4K-aligned, but some bridges have
 	 * an extension to support 1K alignment.
 	 */
-	if (bus->self->io_window_1k)
+	if (bus->self && bus->self->io_window_1k)
 		align = PCI_P2P_DEFAULT_IO_ALIGN_1K;
 	else
 		align = PCI_P2P_DEFAULT_IO_ALIGN;
@@ -920,7 +920,7 @@ static void pbus_size_io(struct pci_bus *bus, resource_size_t min_size,
 	calculate_iosize(size, min_size, size1, add_size, children_add_size,
 			resource_size(b_res), min_align);
 	if (!size0 && !size1) {
-		if (b_res->start || b_res->end)
+		if (bus->self && (b_res->start || b_res->end))
 			pci_info(bus->self, "disabling bridge window %pR to %pR (unused)\n",
 				 b_res, &bus->busn_res);
 		b_res->flags = 0;
@@ -930,7 +930,7 @@ static void pbus_size_io(struct pci_bus *bus, resource_size_t min_size,
 	b_res->start = min_align;
 	b_res->end = b_res->start + size0 - 1;
 	b_res->flags |= IORESOURCE_STARTALIGN;
-	if (size1 > size0 && realloc_head) {
+	if (bus->self && size1 > size0 && realloc_head) {
 		add_to_list(realloc_head, bus->self, b_res, size1-size0,
 			    min_align);
 		pci_info(bus->self, "bridge window %pR to %pR add_size %llx\n",
@@ -1073,7 +1073,7 @@ static int pbus_size_mem(struct pci_bus *bus, unsigned long mask,
 	calculate_memsize(size, min_size, add_size, children_add_size,
 			  resource_size(b_res), add_align);
 	if (!size0 && !size1) {
-		if (b_res->start || b_res->end)
+		if (bus->self && (b_res->start || b_res->end))
 			pci_info(bus->self, "disabling bridge window %pR to %pR (unused)\n",
 				 b_res, &bus->busn_res);
 		b_res->flags = 0;
@@ -1082,7 +1082,7 @@ static int pbus_size_mem(struct pci_bus *bus, unsigned long mask,
 	b_res->start = min_align;
 	b_res->end = size0 + min_align - 1;
 	b_res->flags |= IORESOURCE_STARTALIGN;
-	if (size1 > size0 && realloc_head) {
+	if (bus->self && size1 > size0 && realloc_head) {
 		add_to_list(realloc_head, bus->self, b_res, size1-size0, add_align);
 		pci_info(bus->self, "bridge window %pR to %pR add_size %llx add_align %llx\n",
 			 b_res, &bus->busn_res,
@@ -1196,8 +1196,9 @@ void __pci_bus_size_bridges(struct pci_bus *bus, struct list_head *realloc_head)
 	unsigned long mask, prefmask, type2 = 0, type3 = 0;
 	resource_size_t additional_io_size = 0, additional_mmio_size = 0,
 		additional_mmio_pref_size = 0;
-	struct resource *b_res;
-	int ret;
+	struct resource *pref;
+	struct pci_host_bridge *host;
+	int hdr_type, i, ret;
 
 	list_for_each_entry(dev, &bus->devices, bus_list) {
 		struct pci_bus *b = dev->subordinate;
@@ -1217,10 +1218,20 @@ void __pci_bus_size_bridges(struct pci_bus *bus, struct list_head *realloc_head)
 	}
 
 	/* The root bus? */
-	if (pci_is_root_bus(bus))
-		return;
+	if (pci_is_root_bus(bus)) {
+		host = to_pci_host_bridge(bus->bridge);
+		if (!host->size_windows)
+			return;
+		pci_bus_for_each_resource(bus, pref, i)
+			if (pref && (pref->flags & IORESOURCE_PREFETCH))
+				break;
+		hdr_type = -1;	/* Intentionally invalid - not a PCI device. */
+	} else {
+		pref = &bus->self->resource[PCI_BRIDGE_RESOURCES + 2];
+		hdr_type = bus->self->hdr_type;
+	}
 
-	switch (bus->self->hdr_type) {
+	switch (hdr_type) {
 	case PCI_HEADER_TYPE_CARDBUS:
 		/* Don't size CardBuses yet */
 		break;
@@ -1242,10 +1253,9 @@ void __pci_bus_size_bridges(struct pci_bus *bus, struct list_head *realloc_head)
 		 * the size required to put all 64-bit prefetchable
 		 * resources in it.
 		 */
-		b_res = &bus->self->resource[PCI_BRIDGE_RESOURCES];
 		mask = IORESOURCE_MEM;
 		prefmask = IORESOURCE_MEM | IORESOURCE_PREFETCH;
-		if (b_res[2].flags & IORESOURCE_MEM_64) {
+		if (pref && (pref->flags & IORESOURCE_MEM_64)) {
 			prefmask |= IORESOURCE_MEM_64;
 			ret = pbus_size_mem(bus, prefmask, prefmask,
 				  prefmask, prefmask,
diff --git a/drivers/pci/slot.c b/drivers/pci/slot.c
--- a/drivers/pci/slot.c
+++ b/drivers/pci/slot.c
@@ -49,45 +49,9 @@ static ssize_t address_read_file(struct pci_slot *slot, char *buf)
 		       slot->number);
 }
 
-/* these strings match up with the values in pci_bus_speed */
-static const char *pci_bus_speed_strings[] = {
-	"33 MHz PCI",		/* 0x00 */
-	"66 MHz PCI",		/* 0x01 */
-	"66 MHz PCI-X",		/* 0x02 */
-	"100 MHz PCI-X",	/* 0x03 */
-	"133 MHz PCI-X",	/* 0x04 */
-	NULL,			/* 0x05 */
-	NULL,			/* 0x06 */
-	NULL,			/* 0x07 */
-	NULL,			/* 0x08 */
-	"66 MHz PCI-X 266",	/* 0x09 */
-	"100 MHz PCI-X 266",	/* 0x0a */
-	"133 MHz PCI-X 266",	/* 0x0b */
-	"Unknown AGP",		/* 0x0c */
-	"1x AGP",		/* 0x0d */
-	"2x AGP",		/* 0x0e */
-	"4x AGP",		/* 0x0f */
-	"8x AGP",		/* 0x10 */
-	"66 MHz PCI-X 533",	/* 0x11 */
-	"100 MHz PCI-X 533",	/* 0x12 */
-	"133 MHz PCI-X 533",	/* 0x13 */
-	"2.5 GT/s PCIe",	/* 0x14 */
-	"5.0 GT/s PCIe",	/* 0x15 */
-	"8.0 GT/s PCIe",	/* 0x16 */
-	"16.0 GT/s PCIe",	/* 0x17 */
-	"32.0 GT/s PCIe",	/* 0x18 */
-};
-
 static ssize_t bus_speed_read(enum pci_bus_speed speed, char *buf)
 {
-	const char *speed_string;
-
-	if (speed < ARRAY_SIZE(pci_bus_speed_strings))
-		speed_string = pci_bus_speed_strings[speed];
-	else
-		speed_string = "Unknown";
-
-	return sprintf(buf, "%s\n", speed_string);
+	return sprintf(buf, "%s\n", pci_speed_string(speed));
 }
 
 static ssize_t max_speed_read_file(struct pci_slot *slot, char *buf)
diff --git a/drivers/phy/amlogic/Kconfig b/drivers/phy/amlogic/Kconfig
--- a/drivers/phy/amlogic/Kconfig
+++ b/drivers/phy/amlogic/Kconfig
@@ -59,3 +59,25 @@ config PHY_MESON_G12A_USB3_PCIE
 	  Enable this to support the Meson USB3 + PCIE Combo PHY found
 	  in Meson G12A SoCs.
 	  If unsure, say N.
+
+config PHY_MESON_AXG_PCIE
+	tristate "Meson AXG PCIE PHY driver"
+	default ARCH_MESON
+	depends on OF && (ARCH_MESON || COMPILE_TEST)
+	select GENERIC_PHY
+	select REGMAP_MMIO
+	help
+	  Enable this to support the Meson MIPI + PCIE PHY found
+	  in Meson AXG SoCs.
+	  If unsure, say N.
+
+config PHY_MESON_AXG_MIPI_PCIE_ANALOG
+	tristate "Meson AXG MIPI + PCIE analog PHY driver"
+	default ARCH_MESON
+	depends on OF && (ARCH_MESON || COMPILE_TEST)
+	select GENERIC_PHY
+	select REGMAP_MMIO
+	help
+	  Enable this to support the Meson MIPI + PCIE analog PHY
+	  found in Meson AXG SoCs.
+	  If unsure, say N.
diff --git a/drivers/phy/amlogic/Makefile b/drivers/phy/amlogic/Makefile
--- a/drivers/phy/amlogic/Makefile
+++ b/drivers/phy/amlogic/Makefile
@@ -1,6 +1,8 @@
 # SPDX-License-Identifier: GPL-2.0-only
-obj-$(CONFIG_PHY_MESON8B_USB2)		+= phy-meson8b-usb2.o
-obj-$(CONFIG_PHY_MESON_GXL_USB2)	+= phy-meson-gxl-usb2.o
-obj-$(CONFIG_PHY_MESON_G12A_USB2)	+= phy-meson-g12a-usb2.o
-obj-$(CONFIG_PHY_MESON_GXL_USB3)	+= phy-meson-gxl-usb3.o
-obj-$(CONFIG_PHY_MESON_G12A_USB3_PCIE)	+= phy-meson-g12a-usb3-pcie.o
+obj-$(CONFIG_PHY_MESON8B_USB2)			+= phy-meson8b-usb2.o
+obj-$(CONFIG_PHY_MESON_GXL_USB2)		+= phy-meson-gxl-usb2.o
+obj-$(CONFIG_PHY_MESON_G12A_USB2)		+= phy-meson-g12a-usb2.o
+obj-$(CONFIG_PHY_MESON_GXL_USB3)		+= phy-meson-gxl-usb3.o
+obj-$(CONFIG_PHY_MESON_G12A_USB3_PCIE)		+= phy-meson-g12a-usb3-pcie.o
+obj-$(CONFIG_PHY_MESON_AXG_PCIE)		+= phy-meson-axg-pcie.o
+obj-$(CONFIG_PHY_MESON_AXG_MIPI_PCIE_ANALOG)	+= phy-meson-axg-mipi-pcie-analog.o
diff --git a/drivers/phy/amlogic/phy-meson-axg-mipi-pcie-analog.c b/drivers/phy/amlogic/phy-meson-axg-mipi-pcie-analog.c
new file mode 100644
--- /dev/null
+++ b/drivers/phy/amlogic/phy-meson-axg-mipi-pcie-analog.c
@@ -0,0 +1,188 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Amlogic AXG MIPI + PCIE analog PHY driver
+ *
+ * Copyright (C) 2019 Remi Pommarel <repk@triplefau.lt>
+ */
+#include <linux/module.h>
+#include <linux/phy/phy.h>
+#include <linux/regmap.h>
+#include <linux/platform_device.h>
+#include <dt-bindings/phy/phy.h>
+
+#define HHI_MIPI_CNTL0				0x00
+#define	HHI_MIPI_CNTL0_COMMON_BLOCK		GENMASK(31, 28)
+#define	HHI_MIPI_CNTL0_ENABLE			BIT(29)
+#define	HHI_MIPI_CNTL0_BANDGAP			BIT(26)
+#define	HHI_MIPI_CNTL0_DECODE_TO_RTERM		GENMASK(15, 12)
+#define	HHI_MIPI_CNTL0_OUTPUT_EN		BIT(3)
+
+#define HHI_MIPI_CNTL1				0x01
+#define	HHI_MIPI_CNTL1_CH0_CML_PDR_EN		BIT(12)
+#define	HHI_MIPI_CNTL1_LP_ABILITY		GENMASK(5, 4)
+#define	HHI_MIPI_CNTL1_LP_RESISTER		BIT(3)
+#define	HHI_MIPI_CNTL1_INPUT_SETTING		BIT(2)
+#define	HHI_MIPI_CNTL1_INPUT_SEL		BIT(1)
+#define	HHI_MIPI_CNTL1_PRBS7_EN			BIT(0)
+
+#define HHI_MIPI_CNTL2				0x02
+#define	HHI_MIPI_CNTL2_CH_PU			GENMASK(31, 25)
+#define	HHI_MIPI_CNTL2_CH_CTL			GENMASK(24, 19)
+#define	HHI_MIPI_CNTL2_CH0_DIGDR_EN		BIT(18)
+#define	HHI_MIPI_CNTL2_CH_DIGDR_EN		BIT(17)
+#define	HHI_MIPI_CNTL2_LPULPS_EN		BIT(16)
+#define	HHI_MIPI_CNTL2_CH_EN(n)			BIT(15 - (n))
+#define	HHI_MIPI_CNTL2_CH0_LP_CTL		GENMASK(10, 1)
+
+struct phy_axg_mipi_pcie_analog_priv {
+	struct phy *phy;
+	unsigned int mode;
+	struct regmap *regmap;
+};
+
+static const struct regmap_config phy_axg_mipi_pcie_analog_regmap_conf = {
+	.reg_bits = 8,
+	.val_bits = 32,
+	.reg_stride = 4,
+	.max_register = HHI_MIPI_CNTL2,
+};
+
+static int phy_axg_mipi_pcie_analog_power_on(struct phy *phy)
+{
+	struct phy_axg_mipi_pcie_analog_priv *priv = phy_get_drvdata(phy);
+
+	/* MIPI not supported yet */
+	if (priv->mode != PHY_TYPE_PCIE)
+		return -EINVAL;
+
+	regmap_update_bits(priv->regmap, HHI_MIPI_CNTL0,
+			   HHI_MIPI_CNTL0_BANDGAP, HHI_MIPI_CNTL0_BANDGAP);
+
+	regmap_update_bits(priv->regmap, HHI_MIPI_CNTL0,
+			   HHI_MIPI_CNTL0_ENABLE, HHI_MIPI_CNTL0_ENABLE);
+	return 0;
+}
+
+static int phy_axg_mipi_pcie_analog_power_off(struct phy *phy)
+{
+	struct phy_axg_mipi_pcie_analog_priv *priv = phy_get_drvdata(phy);
+
+	/* MIPI not supported yet */
+	if (priv->mode != PHY_TYPE_PCIE)
+		return -EINVAL;
+
+	regmap_update_bits(priv->regmap, HHI_MIPI_CNTL0,
+			   HHI_MIPI_CNTL0_BANDGAP, 0);
+	regmap_update_bits(priv->regmap, HHI_MIPI_CNTL0,
+			   HHI_MIPI_CNTL0_ENABLE, 0);
+	return 0;
+}
+
+static int phy_axg_mipi_pcie_analog_init(struct phy *phy)
+{
+	return 0;
+}
+
+static int phy_axg_mipi_pcie_analog_exit(struct phy *phy)
+{
+	return 0;
+}
+
+static const struct phy_ops phy_axg_mipi_pcie_analog_ops = {
+	.init = phy_axg_mipi_pcie_analog_init,
+	.exit = phy_axg_mipi_pcie_analog_exit,
+	.power_on = phy_axg_mipi_pcie_analog_power_on,
+	.power_off = phy_axg_mipi_pcie_analog_power_off,
+	.owner = THIS_MODULE,
+};
+
+static struct phy *phy_axg_mipi_pcie_analog_xlate(struct device *dev,
+						  struct of_phandle_args *args)
+{
+	struct phy_axg_mipi_pcie_analog_priv *priv = dev_get_drvdata(dev);
+	unsigned int mode;
+
+	if (args->args_count != 1) {
+		dev_err(dev, "invalid number of arguments\n");
+		return ERR_PTR(-EINVAL);
+	}
+
+	mode = args->args[0];
+
+	/* MIPI mode is not supported yet */
+	if (mode != PHY_TYPE_PCIE) {
+		dev_err(dev, "invalid phy mode select argument\n");
+		return ERR_PTR(-EINVAL);
+	}
+
+	priv->mode = mode;
+	return priv->phy;
+}
+
+static int phy_axg_mipi_pcie_analog_probe(struct platform_device *pdev)
+{
+	struct phy_provider *phy;
+	struct device *dev = &pdev->dev;
+	struct phy_axg_mipi_pcie_analog_priv *priv;
+	struct device_node *np = dev->of_node;
+	struct regmap *map;
+	struct resource *res;
+	void __iomem *base;
+	int ret;
+
+	priv = devm_kmalloc(dev, sizeof(*priv), GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	base = devm_ioremap_resource(dev, res);
+	if (IS_ERR(base)) {
+		dev_err(dev, "failed to get regmap base\n");
+		return PTR_ERR(base);
+	}
+
+	map = devm_regmap_init_mmio(dev, base,
+				    &phy_axg_mipi_pcie_analog_regmap_conf);
+	if (IS_ERR(map)) {
+		dev_err(dev, "failed to get HHI regmap\n");
+		return PTR_ERR(map);
+	}
+	priv->regmap = map;
+
+	priv->phy = devm_phy_create(dev, np, &phy_axg_mipi_pcie_analog_ops);
+	if (IS_ERR(priv->phy)) {
+		ret = PTR_ERR(priv->phy);
+		if (ret != -EPROBE_DEFER)
+			dev_err(dev, "failed to create PHY\n");
+		return ret;
+	}
+
+	phy_set_drvdata(priv->phy, priv);
+	dev_set_drvdata(dev, priv);
+
+	phy = devm_of_phy_provider_register(dev,
+					    phy_axg_mipi_pcie_analog_xlate);
+
+	return PTR_ERR_OR_ZERO(phy);
+}
+
+static const struct of_device_id phy_axg_mipi_pcie_analog_of_match[] = {
+	{
+		.compatible = "amlogic,axg-mipi-pcie-analog-phy",
+	},
+	{ },
+};
+MODULE_DEVICE_TABLE(of, phy_axg_mipi_pcie_analog_of_match);
+
+static struct platform_driver phy_axg_mipi_pcie_analog_driver = {
+	.probe = phy_axg_mipi_pcie_analog_probe,
+	.driver = {
+		.name = "phy-axg-mipi-pcie-analog",
+		.of_match_table = phy_axg_mipi_pcie_analog_of_match,
+	},
+};
+module_platform_driver(phy_axg_mipi_pcie_analog_driver);
+
+MODULE_AUTHOR("Remi Pommarel <repk@triplefau.lt>");
+MODULE_DESCRIPTION("Amlogic AXG MIPI + PCIE analog PHY driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/phy/amlogic/phy-meson-axg-pcie.c b/drivers/phy/amlogic/phy-meson-axg-pcie.c
new file mode 100644
--- /dev/null
+++ b/drivers/phy/amlogic/phy-meson-axg-pcie.c
@@ -0,0 +1,192 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Amlogic AXG PCIE PHY driver
+ *
+ * Copyright (C) 2020 Remi Pommarel <repk@triplefau.lt>
+ */
+#include <linux/module.h>
+#include <linux/phy/phy.h>
+#include <linux/regmap.h>
+#include <linux/reset.h>
+#include <linux/platform_device.h>
+#include <linux/bitfield.h>
+#include <dt-bindings/phy/phy.h>
+
+#define MESON_PCIE_REG0			0x00
+#define	MESON_PCIE_COMMON_CLK		BIT(4)
+#define	MESON_PCIE_PORT_SEL		GENMASK(3, 2)
+#define	MESON_PCIE_CLK			BIT(1)
+#define	MESON_PCIE_POWERDOWN		BIT(0)
+
+#define MESON_PCIE_TWO_X1		FIELD_PREP(MESON_PCIE_PORT_SEL, 0x3)
+#define MESON_PCIE_COMMON_REF_CLK	FIELD_PREP(MESON_PCIE_COMMON_CLK, 0x1)
+#define MESON_PCIE_PHY_INIT		(MESON_PCIE_TWO_X1 | \
+					 MESON_PCIE_COMMON_REF_CLK)
+#define MESON_PCIE_RESET_DELAY		500
+
+struct phy_axg_pcie_priv {
+	struct phy *phy;
+	struct phy *analog;
+	struct regmap *regmap;
+	struct reset_control *reset;
+};
+
+static const struct regmap_config phy_axg_pcie_regmap_conf = {
+	.reg_bits = 8,
+	.val_bits = 32,
+	.reg_stride = 4,
+	.max_register = MESON_PCIE_REG0,
+};
+
+static int phy_axg_pcie_power_on(struct phy *phy)
+{
+	struct phy_axg_pcie_priv *priv = phy_get_drvdata(phy);
+	int ret;
+
+	ret = phy_power_on(priv->analog);
+	if (ret != 0)
+		return ret;
+
+	regmap_update_bits(priv->regmap, MESON_PCIE_REG0,
+			   MESON_PCIE_POWERDOWN, 0);
+	return 0;
+}
+
+static int phy_axg_pcie_power_off(struct phy *phy)
+{
+	struct phy_axg_pcie_priv *priv = phy_get_drvdata(phy);
+	int ret;
+
+	ret = phy_power_off(priv->analog);
+	if (ret != 0)
+		return ret;
+
+	regmap_update_bits(priv->regmap, MESON_PCIE_REG0,
+			   MESON_PCIE_POWERDOWN, 1);
+	return 0;
+}
+
+static int phy_axg_pcie_init(struct phy *phy)
+{
+	struct phy_axg_pcie_priv *priv = phy_get_drvdata(phy);
+	int ret;
+
+	ret = phy_init(priv->analog);
+	if (ret != 0)
+		return ret;
+
+	regmap_write(priv->regmap, MESON_PCIE_REG0, MESON_PCIE_PHY_INIT);
+	return reset_control_reset(priv->reset);
+}
+
+static int phy_axg_pcie_exit(struct phy *phy)
+{
+	struct phy_axg_pcie_priv *priv = phy_get_drvdata(phy);
+	int ret;
+
+	ret = phy_exit(priv->analog);
+	if (ret != 0)
+		return ret;
+
+	return reset_control_reset(priv->reset);
+}
+
+static int phy_axg_pcie_reset(struct phy *phy)
+{
+	struct phy_axg_pcie_priv *priv = phy_get_drvdata(phy);
+	int ret = 0;
+
+	ret = phy_reset(priv->analog);
+	if (ret != 0)
+		goto out;
+
+	ret = reset_control_assert(priv->reset);
+	if (ret != 0)
+		goto out;
+	udelay(MESON_PCIE_RESET_DELAY);
+
+	ret = reset_control_deassert(priv->reset);
+	if (ret != 0)
+		goto out;
+	udelay(MESON_PCIE_RESET_DELAY);
+
+out:
+	return ret;
+}
+
+static const struct phy_ops phy_axg_pcie_ops = {
+	.init = phy_axg_pcie_init,
+	.exit = phy_axg_pcie_exit,
+	.power_on = phy_axg_pcie_power_on,
+	.power_off = phy_axg_pcie_power_off,
+	.reset = phy_axg_pcie_reset,
+	.owner = THIS_MODULE,
+};
+
+static int phy_axg_pcie_probe(struct platform_device *pdev)
+{
+	struct phy_provider *pphy;
+	struct device *dev = &pdev->dev;
+	struct phy_axg_pcie_priv *priv;
+	struct device_node *np = dev->of_node;
+	struct resource *res;
+	void __iomem *base;
+	int ret;
+
+	priv = devm_kmalloc(dev, sizeof(*priv), GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+
+	priv->phy = devm_phy_create(dev, np, &phy_axg_pcie_ops);
+	if (IS_ERR(priv->phy)) {
+		ret = PTR_ERR(priv->phy);
+		if (ret != -EPROBE_DEFER)
+			dev_err(dev, "failed to create PHY\n");
+		return ret;
+	}
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	base = devm_ioremap_resource(dev, res);
+	if (IS_ERR(base))
+		return PTR_ERR(base);
+
+	priv->regmap = devm_regmap_init_mmio(dev, base,
+					     &phy_axg_pcie_regmap_conf);
+	if (IS_ERR(priv->regmap))
+		return PTR_ERR(priv->regmap);
+
+	priv->reset = devm_reset_control_array_get(dev, false, false);
+	if (IS_ERR(priv->reset))
+		return PTR_ERR(priv->reset);
+
+	priv->analog = devm_phy_get(dev, "analog");
+	if (IS_ERR(priv->analog))
+		return PTR_ERR(priv->analog);
+
+	phy_set_drvdata(priv->phy, priv);
+	dev_set_drvdata(dev, priv);
+	pphy = devm_of_phy_provider_register(dev, of_phy_simple_xlate);
+
+	return PTR_ERR_OR_ZERO(pphy);
+}
+
+static const struct of_device_id phy_axg_pcie_of_match[] = {
+	{
+		.compatible = "amlogic,axg-pcie-phy",
+	},
+	{ },
+};
+MODULE_DEVICE_TABLE(of, phy_axg_pcie_of_match);
+
+static struct platform_driver phy_axg_pcie_driver = {
+	.probe = phy_axg_pcie_probe,
+	.driver = {
+		.name = "phy-axg-pcie",
+		.of_match_table = phy_axg_pcie_of_match,
+	},
+};
+module_platform_driver(phy_axg_pcie_driver);
+
+MODULE_AUTHOR("Remi Pommarel <repk@triplefau.lt>");
+MODULE_DESCRIPTION("Amlogic AXG PCIE PHY driver");
+MODULE_LICENSE("GPL v2");
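Both AXG PHY drivers above flip individual bits through regmap_update_bits(map, reg, mask, val). Its read-modify-write effect reduces to a pure function, sketched here with the POWERDOWN bit as the example:

#include <stdint.h>
#include <stdio.h>

/* what regmap_update_bits() computes before writing the register back */
static uint32_t update_bits(uint32_t old, uint32_t mask, uint32_t val)
{
	return (old & ~mask) | (val & mask);
}

int main(void)
{
	uint32_t reg = 0x0d;		/* pretend MESON_PCIE_REG0: powered down */
	uint32_t powerdown = 1u << 0;	/* MESON_PCIE_POWERDOWN */

	reg = update_bits(reg, powerdown, 0);		/* power on: clear bit */
	printf("after power on:  %#x\n", reg);		/* 0xc */
	reg = update_bits(reg, powerdown, powerdown);	/* power off: set bit */
	printf("after power off: %#x\n", reg);		/* 0xd */
	return 0;
}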
diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
--- a/drivers/scsi/lpfc/lpfc_attr.c
+++ b/drivers/scsi/lpfc/lpfc_attr.c
@@ -4780,7 +4780,7 @@ static DEVICE_ATTR_RW(lpfc_aer_support);
  * Description:
  * If the @buf contains 1 and the device currently has the AER support
  * enabled, then invokes the kernel AER helper routine
- * pci_cleanup_aer_uncorrect_error_status to clean up the uncorrectable
+ * pci_aer_clear_nonfatal_status() to clean up the uncorrectable
  * error status register.
  *
  * Notes:
@@ -4806,7 +4806,7 @@ lpfc_aer_cleanup_state(struct device *dev, struct device_attribute *attr,
 		return -EINVAL;
 
 	if (phba->hba_flag & HBA_AER_ENABLED)
-		rc = pci_cleanup_aer_uncorrect_error_status(phba->pcidev);
+		rc = pci_aer_clear_nonfatal_status(phba->pcidev);
 
 	if (rc == 0)
 		return strlen(buf);
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -530,8 +530,9 @@ extern bool osc_pc_lpi_support_confirmed;
 #define OSC_PCI_CLOCK_PM_SUPPORT		0x00000004
 #define OSC_PCI_SEGMENT_GROUPS_SUPPORT		0x00000008
 #define OSC_PCI_MSI_SUPPORT			0x00000010
+#define OSC_PCI_EDR_SUPPORT			0x00000080
 #define OSC_PCI_HPX_TYPE_3_SUPPORT		0x00000100
-#define OSC_PCI_SUPPORT_MASKS			0x0000011f
+#define OSC_PCI_SUPPORT_MASKS			0x0000019f
 
 /* PCI Host Bridge _OSC: Capabilities DWORD 3: Control Field */
 #define OSC_PCI_EXPRESS_NATIVE_HP_CONTROL	0x00000001
@@ -540,7 +541,8 @@ extern bool osc_pc_lpi_support_confirmed;
 #define OSC_PCI_EXPRESS_AER_CONTROL		0x00000008
 #define OSC_PCI_EXPRESS_CAPABILITY_CONTROL	0x00000010
 #define OSC_PCI_EXPRESS_LTR_CONTROL		0x00000020
-#define OSC_PCI_CONTROL_MASKS			0x0000003f
+#define OSC_PCI_EXPRESS_DPC_CONTROL		0x00000080
+#define OSC_PCI_CONTROL_MASKS			0x000000bf
 
 #define ACPI_GSB_ACCESS_ATTRIB_QUICK		0x00000002
 #define ACPI_GSB_ACCESS_ATTRIB_SEND_RCV		0x00000004
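A quick consistency check on the widened _OSC masks above: each new mask should be the old mask OR'd with the new feature bit. A small assertion sketch, with the constants copied from the hunk:

#include <assert.h>

#define OSC_PCI_EDR_SUPPORT		0x00000080
#define OSC_PCI_SUPPORT_MASKS		0x0000019f

#define OSC_PCI_EXPRESS_DPC_CONTROL	0x00000080
#define OSC_PCI_CONTROL_MASKS		0x000000bf

int main(void)
{
	/* old support mask 0x11f plus the new EDR bit 0x80 = 0x19f */
	assert((0x0000011f | OSC_PCI_EDR_SUPPORT) == OSC_PCI_SUPPORT_MASKS);
	/* old control mask 0x3f plus the new DPC bit 0x80 = 0xbf */
	assert((0x0000003f | OSC_PCI_EXPRESS_DPC_CONTROL) == OSC_PCI_CONTROL_MASKS);
	return 0;
}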
diff --git a/include/linux/aer.h b/include/linux/aer.h
--- a/include/linux/aer.h
+++ b/include/linux/aer.h
@@ -44,8 +44,7 @@ struct aer_capability_regs {
 /* PCIe port driver needs this function to enable AER */
 int pci_enable_pcie_error_reporting(struct pci_dev *dev);
 int pci_disable_pcie_error_reporting(struct pci_dev *dev);
-int pci_cleanup_aer_uncorrect_error_status(struct pci_dev *dev);
-int pci_cleanup_aer_error_status_regs(struct pci_dev *dev);
+int pci_aer_clear_nonfatal_status(struct pci_dev *dev);
 void pci_save_aer_state(struct pci_dev *dev);
 void pci_restore_aer_state(struct pci_dev *dev);
 #else
@@ -57,11 +56,7 @@ static inline int pci_disable_pcie_error_reporting(struct pci_dev *dev)
 {
 	return -EINVAL;
 }
-static inline int pci_cleanup_aer_uncorrect_error_status(struct pci_dev *dev)
-{
-	return -EINVAL;
-}
-static inline int pci_cleanup_aer_error_status_regs(struct pci_dev *dev)
+static inline int pci_aer_clear_nonfatal_status(struct pci_dev *dev)
 {
 	return -EINVAL;
 }
diff --git a/include/linux/pci-acpi.h b/include/linux/pci-acpi.h
--- a/include/linux/pci-acpi.h
+++ b/include/linux/pci-acpi.h
@@ -112,6 +112,14 @@ extern const guid_t pci_acpi_dsm_guid;
 #define RESET_DELAY_DSM		0x08
 #define FUNCTION_DELAY_DSM	0x09
 
+#ifdef CONFIG_PCIE_EDR
+void pci_acpi_add_edr_notifier(struct pci_dev *pdev);
+void pci_acpi_remove_edr_notifier(struct pci_dev *pdev);
+#else
+static inline void pci_acpi_add_edr_notifier(struct pci_dev *pdev) { }
+static inline void pci_acpi_remove_edr_notifier(struct pci_dev *pdev) { }
+#endif /* CONFIG_PCIE_EDR */
+
 #else	/* CONFIG_ACPI */
 static inline void acpi_pci_add_bus(struct pci_bus *bus) { }
 static inline void acpi_pci_remove_bus(struct pci_bus *bus) { }
diff --git a/include/linux/pci-epc.h b/include/linux/pci-epc.h
--- a/include/linux/pci-epc.h
+++ b/include/linux/pci-epc.h
@@ -53,7 +53,8 @@ struct pci_epc_ops {
 			   phys_addr_t addr);
 	int	(*set_msi)(struct pci_epc *epc, u8 func_no, u8 interrupts);
 	int	(*get_msi)(struct pci_epc *epc, u8 func_no);
-	int	(*set_msix)(struct pci_epc *epc, u8 func_no, u16 interrupts);
+	int	(*set_msix)(struct pci_epc *epc, u8 func_no, u16 interrupts,
+			    enum pci_barno, u32 offset);
 	int	(*get_msix)(struct pci_epc *epc, u8 func_no);
 	int	(*raise_irq)(struct pci_epc *epc, u8 func_no,
 			     enum pci_epc_irq_type type, u16 interrupt_num);
@@ -71,6 +72,7 @@ struct pci_epc_ops {
  * @bitmap: bitmap to manage the PCI address space
  * @pages: number of bits representing the address region
  * @page_size: size of each page
+ * @lock: mutex to protect bitmap
  */
 struct pci_epc_mem {
 	phys_addr_t	phys_base;
@@ -78,6 +80,8 @@ struct pci_epc_mem {
 	unsigned long	*bitmap;
 	size_t		page_size;
 	int		pages;
+	/* mutex to protect against concurrent access for memory allocation*/
+	struct mutex	lock;
 };
 
 /**
@@ -88,7 +92,9 @@ struct pci_epc_mem {
  * @mem: address space of the endpoint controller
 * @max_functions: max number of functions that can be configured in this EPC
 * @group: configfs group representing the PCI EPC device
- * @lock: spinlock to protect pci_epc ops
+ * @lock: mutex to protect pci_epc ops
+ * @function_num_map: bitmap to manage physical function number
+ * @notifier: used to notify EPF of any EPC events (like linkup)
  */
 struct pci_epc {
 	struct device			dev;
@@ -97,8 +103,10 @@ struct pci_epc {
 	struct pci_epc_mem		*mem;
 	u8				max_functions;
 	struct config_group		*group;
-	/* spinlock to protect against concurrent access of EP controller */
-	spinlock_t			lock;
+	/* mutex to protect against concurrent access of EP controller */
+	struct mutex			lock;
+	unsigned long			function_num_map;
+	struct atomic_notifier_head	notifier;
 };
 
 /**
@@ -113,6 +121,7 @@ struct pci_epc {
  */
 struct pci_epc_features {
 	unsigned int	linkup_notifier : 1;
+	unsigned int	core_init_notifier : 1;
 	unsigned int	msi_capable : 1;
 	unsigned int	msix_capable : 1;
 	u8	reserved_bar;
@@ -141,6 +150,12 @@ static inline void *epc_get_drvdata(struct pci_epc *epc)
 	return dev_get_drvdata(&epc->dev);
 }
 
+static inline int
+pci_epc_register_notifier(struct pci_epc *epc, struct notifier_block *nb)
+{
+	return atomic_notifier_chain_register(&epc->notifier, nb);
+}
+
 struct pci_epc *
 __devm_pci_epc_create(struct device *dev, const struct pci_epc_ops *ops,
 		      struct module *owner);
@@ -151,6 +166,7 @@ void devm_pci_epc_destroy(struct device *dev, struct pci_epc *epc);
 void pci_epc_destroy(struct pci_epc *epc);
 int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf);
 void pci_epc_linkup(struct pci_epc *epc);
+void pci_epc_init_notify(struct pci_epc *epc);
 void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf);
 int pci_epc_write_header(struct pci_epc *epc, u8 func_no,
 			 struct pci_epf_header *hdr);
@@ -165,7 +181,8 @@ void pci_epc_unmap_addr(struct pci_epc *epc, u8 func_no,
 			phys_addr_t phys_addr);
 int pci_epc_set_msi(struct pci_epc *epc, u8 func_no, u8 interrupts);
 int pci_epc_get_msi(struct pci_epc *epc, u8 func_no);
-int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts);
+int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts,
+		     enum pci_barno, u32 offset);
 int pci_epc_get_msix(struct pci_epc *epc, u8 func_no);
 int pci_epc_raise_irq(struct pci_epc *epc, u8 func_no,
 		      enum pci_epc_irq_type type, u16 interrupt_num);
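The set_msix() signature change above lets an endpoint place its MSI-X table in a chosen BAR at a chosen offset, matching the struct pci_epf_msix_tbl added in pci-epf.h below. That structure mirrors the 16-byte MSI-X table entry an endpoint has to populate; a standalone sketch filling in one entry, all values hypothetical:

#include <stdint.h>
#include <stdio.h>

struct pci_epf_msix_tbl {
	uint64_t msg_addr;	/* host address; writes here trigger the vector */
	uint32_t msg_data;	/* data the endpoint writes to msg_addr */
	uint32_t vector_ctrl;	/* bit 0 set = entry masked */
};

int main(void)
{
	struct pci_epf_msix_tbl entry = {
		.msg_addr    = 0xfee00000ULL,	/* hypothetical doorbell address */
		.msg_data    = 0x4021,		/* hypothetical vector data */
		.vector_ctrl = 0,		/* unmasked */
	};

	printf("entry size: %zu bytes\n", sizeof(entry));	/* 16 */
	printf("addr %#llx data %#x\n",
	       (unsigned long long)entry.msg_addr, entry.msg_data);
	return 0;
}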
diff --git a/include/linux/pci-epf.h b/include/linux/pci-epf.h
--- a/include/linux/pci-epf.h
+++ b/include/linux/pci-epf.h
@@ -15,6 +15,11 @@
 
 struct pci_epf;
 
+enum pci_notify_event {
+	CORE_INIT,
+	LINK_UP,
+};
+
 enum pci_barno {
 	BAR_0,
 	BAR_1,
@@ -55,13 +60,10 @@ struct pci_epf_header {
  * @bind: ops to perform when a EPC device has been bound to EPF device
  * @unbind: ops to perform when a binding has been lost between a EPC device
  *	    and EPF device
- * @linkup: ops to perform when the EPC device has established a connection with
- *	    a host system
  */
 struct pci_epf_ops {
 	int	(*bind)(struct pci_epf *epf);
 	void	(*unbind)(struct pci_epf *epf);
-	void	(*linkup)(struct pci_epf *epf);
 };
 
 /**
@@ -92,10 +94,12 @@ struct pci_epf_driver {
 /**
  * struct pci_epf_bar - represents the BAR of EPF device
  * @phys_addr: physical address that should be mapped to the BAR
+ * @addr: virtual address corresponding to the @phys_addr
  * @size: the size of the address space present in BAR
  */
 struct pci_epf_bar {
 	dma_addr_t	phys_addr;
+	void		*addr;
 	size_t		size;
 	enum pci_barno	barno;
 	int		flags;
@@ -112,6 +116,8 @@ struct pci_epf_bar {
  * @epc: the EPC device to which this EPF device is bound
  * @driver: the EPF driver to which this EPF device is bound
  * @list: to add pci_epf as a list of PCI endpoint functions to pci_epc
+ * @nb: notifier block to notify EPF of any EPC events (like linkup)
+ * @lock: mutex to protect pci_epf_ops
  */
 struct pci_epf {
 	struct device		dev;
@@ -125,6 +131,22 @@ struct pci_epf {
 	struct pci_epc		*epc;
 	struct pci_epf_driver	*driver;
 	struct list_head	list;
+	struct notifier_block	nb;
+	/* mutex to protect against concurrent access of pci_epf_ops */
+	struct mutex		lock;
 };
 
+/**
+ * struct pci_epf_msix_tbl - represents the MSIX table entry structure
+ * @msg_addr: Writes to this address will trigger MSIX interrupt in host
+ * @msg_data: Data that should be written to @msg_addr to trigger MSIX interrupt
+ * @vector_ctrl: Identifies if the function is prohibited from sending a message
+ *		 using this MSIX table entry
+ */
+struct pci_epf_msix_tbl {
+	u64 msg_addr;
+	u32 msg_data;
+	u32 vector_ctrl;
+};
+
 #define to_pci_epf(epf_dev) container_of((epf_dev), struct pci_epf, dev)
@@ -154,5 +176,4 @@ void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
 void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar);
 int pci_epf_bind(struct pci_epf *epf);
 void pci_epf_unbind(struct pci_epf *epf);
-void pci_epf_linkup(struct pci_epf *epf);
 #endif /* __LINUX_PCI_EPF_H */
diff --git a/include/linux/pci.h b/include/linux/pci.h
@@ -243,7 +243,7 @@ enum pcie_link_width {
 	PCIE_LNK_WIDTH_UNKNOWN	= 0xff,
 };
 
-/* Based on the PCI Hotplug Spec, but some values are made up by us */
+/* See matching string table in pci_speed_string() */
 enum pci_bus_speed {
 	PCI_SPEED_33MHz			= 0x00,
 	PCI_SPEED_66MHz			= 0x01,
@@ -451,6 +451,11 @@ struct pci_dev {
 	const struct attribute_group **msi_irq_groups;
 #endif
 	struct pci_vpd *vpd;
+#ifdef CONFIG_PCIE_DPC
+	u16		dpc_cap;
+	unsigned int	dpc_rp_extensions:1;
+	u8		dpc_rp_log_size;
+#endif
 #ifdef CONFIG_PCI_ATS
 	union {
 		struct pci_sriov	*sriov;	/* PF: SR-IOV info */
@@ -517,7 +522,9 @@ struct pci_host_bridge {
 	unsigned int	native_shpc_hotplug:1;	/* OS may use SHPC hotplug */
 	unsigned int	native_pme:1;		/* OS may use PCIe PME */
 	unsigned int	native_ltr:1;		/* OS may use PCIe LTR */
+	unsigned int	native_dpc:1;		/* OS may use PCIe DPC */
 	unsigned int	preserve_config:1;	/* Preserve FW resource setup */
+	unsigned int	size_windows:1;		/* Enable root bus sizing */
 
 	/* Resource alignment requirements */
 	resource_size_t (*align_resource)(struct pci_dev *dev,
@@ -1224,7 +1231,6 @@ int pci_enable_rom(struct pci_dev *pdev);
 void pci_disable_rom(struct pci_dev *pdev);
 void __iomem __must_check *pci_map_rom(struct pci_dev *pdev, size_t *size);
 void pci_unmap_rom(struct pci_dev *pdev, void __iomem *rom);
-void __iomem __must_check *pci_platform_rom(struct pci_dev *pdev, size_t *size);
 
 /* Power management related routines */
 int pci_save_state(struct pci_dev *dev);
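The size_windows bit is what the resource-management item in the summary hooks into: a host bridge driver whose apertures are programmable sets it so the core may size the root bus windows instead of treating them as fixed. A two-line sketch of the opt-in, assuming a driver that has just populated its bridge:

	bridge->size_windows = 1;	/* apertures are programmable */
	ret = pci_scan_root_bus_bridge(bridge);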
diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
@@ -2585,6 +2585,8 @@
 
 #define PCI_VENDOR_ID_AMAZON		0x1d0f
 
+#define PCI_VENDOR_ID_ZHAOXIN		0x1d17
+
 #define PCI_VENDOR_ID_HYGON		0x1d94
 
 #define PCI_VENDOR_ID_HXT		0x1dbf
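This ID backs the Zhaoxin ACS quirks from the virtualization notes above. For illustration only, an entry in the ACS table in drivers/pci/quirks.c takes roughly this shape (treat the helper name and details as a sketch, not a quote of the patch):

	/* Zhaoxin Root/Downstream Ports: isolation equivalent to ACS */
	{ PCI_VENDOR_ID_ZHAOXIN, PCI_ANY_ID, pci_quirk_zhaoxin_pcie_ports_acs },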
diff --git a/include/soc/tegra/bpmp-abi.h b/include/soc/tegra/bpmp-abi.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * Copyright (c) 2014-2018, NVIDIA CORPORATION.  All rights reserved.
+ * Copyright (c) 2014-2020, NVIDIA CORPORATION.  All rights reserved.
  */
 
 #ifndef _ABI_BPMP_ABI_H_
@@ -2119,6 +2119,7 @@ enum {
 	CMD_UPHY_PCIE_LANE_MARGIN_STATUS = 2,
 	CMD_UPHY_PCIE_EP_CONTROLLER_PLL_INIT = 3,
 	CMD_UPHY_PCIE_CONTROLLER_STATE = 4,
+	CMD_UPHY_PCIE_EP_CONTROLLER_PLL_OFF = 5,
 	CMD_UPHY_MAX,
 };
 
@@ -2151,6 +2152,11 @@ struct cmd_uphy_pcie_controller_state_request {
 	uint8_t enable;
 } __ABI_PACKED;
 
+struct cmd_uphy_ep_controller_pll_off_request {
+	/** @brief EP controller number, valid: 0, 4, 5 */
+	uint8_t ep_controller;
+} __ABI_PACKED;
+
 /**
  * @ingroup UPHY
  * @brief Request with #MRQ_UPHY
@@ -2165,6 +2171,7 @@ struct cmd_uphy_pcie_controller_state_request {
  * |CMD_UPHY_PCIE_LANE_MARGIN_STATUS      |                                         |
  * |CMD_UPHY_PCIE_EP_CONTROLLER_PLL_INIT  |cmd_uphy_ep_controller_pll_init_request  |
  * |CMD_UPHY_PCIE_CONTROLLER_STATE        |cmd_uphy_pcie_controller_state_request   |
+ * |CMD_UPHY_PCIE_EP_CONTROLLER_PLL_OFF   |cmd_uphy_ep_controller_pll_off_request   |
  *
  */
@@ -2178,6 +2185,7 @@ struct mrq_uphy_request {
 		struct cmd_uphy_margin_control_request uphy_set_margin_control;
 		struct cmd_uphy_ep_controller_pll_init_request ep_ctrlr_pll_init;
 		struct cmd_uphy_pcie_controller_state_request controller_state;
+		struct cmd_uphy_ep_controller_pll_off_request ep_ctrlr_pll_off;
 	} __UNION_ANON;
 } __ABI_PACKED;
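The new MRQ_UPHY command lets the Tegra194 PCIe endpoint driver ask the BPMP firmware to switch a controller's UPHY PLL off. A hedged sketch of issuing it via tegra_bpmp_transfer(), patterned on how the existing controller-state command is sent (the wrapper name is illustrative):

	static int my_pll_power_off(struct tegra_bpmp *bpmp, u8 controller)
	{
		struct mrq_uphy_request req = {
			.cmd = CMD_UPHY_PCIE_EP_CONTROLLER_PLL_OFF,
			.ep_ctrlr_pll_off = {
				.ep_controller = controller,	/* valid: 0, 4, 5 */
			},
		};
		struct tegra_bpmp_message msg = {
			.mrq = MRQ_UPHY,
			.tx = {
				.data = &req,
				.size = sizeof(req),
			},
		};

		return tegra_bpmp_transfer(bpmp, &msg);
	}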
diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h
@@ -605,6 +605,7 @@
 #define  PCI_EXP_SLTCTL_PWR_OFF		0x0400	/* Power Off */
 #define  PCI_EXP_SLTCTL_EIC		0x0800	/* Electromechanical Interlock Control */
 #define  PCI_EXP_SLTCTL_DLLSCE		0x1000	/* Data Link Layer State Changed Enable */
+#define  PCI_EXP_SLTCTL_IBPD_DISABLE	0x4000	/* In-band PD disable */
 #define PCI_EXP_SLTSTA		26	/* Slot Status */
 #define  PCI_EXP_SLTSTA_ABP	0x0001	/* Attention Button Pressed */
 #define  PCI_EXP_SLTSTA_PFD	0x0002	/* Power Fault Detected */
@@ -680,6 +681,7 @@
 #define PCI_EXP_LNKSTA2		50	/* Link Status 2 */
 #define PCI_CAP_EXP_ENDPOINT_SIZEOF_V2	52	/* v2 endpoints with link end here */
 #define PCI_EXP_SLTCAP2		52	/* Slot Capabilities 2 */
+#define  PCI_EXP_SLTCAP2_IBPD	0x00000001	/* In-band PD Disable Supported */
 #define PCI_EXP_SLTCTL2		56	/* Slot Control 2 */
 #define PCI_EXP_SLTSTA2		58	/* Slot Status 2 */
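These two definitions are the register-level half of the in-band presence-detect work in the hotplug notes: SLTCAP2 bit 0 advertises that in-band PD can be disabled, and SLTCTL bit 14 disables it so pciehp can rely on out-of-band PDS. A sketch of the check-then-set sequence using the pcie_capability accessors (the function is illustrative; the real sequencing lives in pciehp):

	static void my_disable_inband_pd(struct pci_dev *pdev)
	{
		u32 slot_cap2;

		pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP2, &slot_cap2);
		if (!(slot_cap2 & PCI_EXP_SLTCAP2_IBPD))
			return;		/* slot cannot disable in-band PD */

		pcie_capability_set_word(pdev, PCI_EXP_SLTCTL,
					 PCI_EXP_SLTCTL_IBPD_DISABLE);
	}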
diff --git a/include/uapi/linux/pcitest.h b/include/uapi/linux/pcitest.h
@@ -19,5 +19,13 @@
 #define PCITEST_MSIX		_IOW('P', 0x7, int)
 #define PCITEST_SET_IRQTYPE	_IOW('P', 0x8, int)
 #define PCITEST_GET_IRQTYPE	_IO('P', 0x9)
+#define PCITEST_CLEAR_IRQ	_IO('P', 0x10)
+
+#define PCITEST_FLAGS_USE_DMA	0x00000001
+
+struct pci_endpoint_test_xfer_param {
+	unsigned long size;
+	unsigned char flags;
+};
 
 #endif /* __UAPI_LINUX_PCITEST_H */
diff --git a/tools/pci/pcitest.c b/tools/pci/pcitest.c
@@ -30,14 +30,17 @@ struct pci_test {
 	int		irqtype;
 	bool		set_irqtype;
 	bool		get_irqtype;
+	bool		clear_irq;
 	bool		read;
 	bool		write;
 	bool		copy;
 	unsigned long	size;
+	bool		use_dma;
 };
 
 static int run_test(struct pci_test *test)
 {
+	struct pci_endpoint_test_xfer_param param;
 	int ret = -EINVAL;
 	int fd;
 
@@ -74,6 +77,15 @@ static int run_test(struct pci_test *test)
 			fprintf(stdout, "%s\n", irq[ret]);
 	}
 
+	if (test->clear_irq) {
+		ret = ioctl(fd, PCITEST_CLEAR_IRQ);
+		fprintf(stdout, "CLEAR IRQ:\t\t");
+		if (ret < 0)
+			fprintf(stdout, "FAILED\n");
+		else
+			fprintf(stdout, "%s\n", result[ret]);
+	}
+
 	if (test->legacyirq) {
 		ret = ioctl(fd, PCITEST_LEGACY_IRQ, 0);
 		fprintf(stdout, "LEGACY IRQ:\t");
@@ -102,7 +114,10 @@ static int run_test(struct pci_test *test)
 	}
 
 	if (test->write) {
-		ret = ioctl(fd, PCITEST_WRITE, test->size);
+		param.size = test->size;
+		if (test->use_dma)
+			param.flags = PCITEST_FLAGS_USE_DMA;
+		ret = ioctl(fd, PCITEST_WRITE, &param);
 		fprintf(stdout, "WRITE (%7ld bytes):\t\t", test->size);
 		if (ret < 0)
 			fprintf(stdout, "TEST FAILED\n");
@@ -111,7 +126,10 @@ static int run_test(struct pci_test *test)
 	}
 
 	if (test->read) {
-		ret = ioctl(fd, PCITEST_READ, test->size);
+		param.size = test->size;
+		if (test->use_dma)
+			param.flags = PCITEST_FLAGS_USE_DMA;
+		ret = ioctl(fd, PCITEST_READ, &param);
 		fprintf(stdout, "READ (%7ld bytes):\t\t", test->size);
 		if (ret < 0)
 			fprintf(stdout, "TEST FAILED\n");
@@ -120,7 +138,10 @@ static int run_test(struct pci_test *test)
 	}
 
 	if (test->copy) {
-		ret = ioctl(fd, PCITEST_COPY, test->size);
+		param.size = test->size;
+		if (test->use_dma)
+			param.flags = PCITEST_FLAGS_USE_DMA;
+		ret = ioctl(fd, PCITEST_COPY, &param);
 		fprintf(stdout, "COPY (%7ld bytes):\t\t", test->size);
 		if (ret < 0)
 			fprintf(stdout, "TEST FAILED\n");
@@ -153,7 +174,7 @@ int main(int argc, char **argv)
 	/* set default endpoint device */
 	test->device = "/dev/pci-endpoint-test.0";
 
-	while ((c = getopt(argc, argv, "D:b:m:x:i:Ilhrwcs:")) != EOF)
+	while ((c = getopt(argc, argv, "D:b:m:x:i:deIlhrwcs:")) != EOF)
 	switch (c) {
 	case 'D':
 		test->device = optarg;
@@ -194,9 +215,15 @@ int main(int argc, char **argv)
 	case 'c':
 		test->copy = true;
 		continue;
+	case 'e':
+		test->clear_irq = true;
+		continue;
 	case 's':
 		test->size = strtoul(optarg, NULL, 0);
 		continue;
+	case 'd':
+		test->use_dma = true;
+		continue;
 	case 'h':
 	default:
 usage:
@@ -208,7 +235,9 @@ usage:
 		"\t-m <msi num>		MSI test (msi number between 1..32)\n"
 		"\t-x <msix num>	\tMSI-X test (msix number between 1..2048)\n"
 		"\t-i <irq type>	\tSet IRQ type (0 - Legacy, 1 - MSI, 2 - MSI-X)\n"
+		"\t-e			Clear IRQ\n"
 		"\t-I			Get current IRQ type configured\n"
+		"\t-d			Use DMA\n"
 		"\t-l			Legacy IRQ test\n"
 		"\t-r			Read buffer test\n"
 		"\t-w			Write buffer test\n"
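With the new switches wired up, exercising the DMA and IRQ-clear paths from userspace looks like this (sizes are arbitrary examples):

	# pcitest -d -w -s 102400	# host write test, 100 KiB, via DMA
	# pcitest -d -r -s 102400	# host read test, 100 KiB, via DMA
	# pcitest -e			# clear pending IRQ state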