Merge tag 'pci-v3.13-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI changes from Bjorn Helgaas:
 "Resource management
    - Fix host bridge window coalescing (Alexey Neyman)
    - Pass type, width, and prefetchability for window alignment (Wei Yang)

  PCI device hotplug
    - Convert acpiphp, acpiphp_ibm to dynamic debug (Lan Tianyu)

  Power management
    - Remove pci_pm_complete() (Liu Chuansheng)

  MSI
    - Fail initialization if device is not in PCI_D0 (Yijing Wang)

  MPS (Max Payload Size)
    - Use pcie_get_mps() and pcie_set_mps() to simplify code (Yijing Wang)
    - Use pcie_set_readrq() to simplify code (Yijing Wang)
    - Use cached pci_dev->pcie_mpss to simplify code (Yijing Wang)

  SR-IOV
    - Enable upstream bridges even for VFs on virtual buses (Bjorn Helgaas)
    - Use pci_is_root_bus() to avoid catching virtual buses (Wei Yang)

  Virtualization
    - Add x86 MSI masking ops (Konrad Rzeszutek Wilk)

  Freescale i.MX6
    - Support i.MX6 PCIe controller (Sean Cross)
    - Increase link startup timeout (Marek Vasut)
    - Probe PCIe in fs_initcall() (Marek Vasut)
    - Fix imprecise abort handler (Tim Harvey)
    - Remove redundant of_match_ptr (Sachin Kamat)

  Renesas R-Car
    - Support Gen2 internal PCIe controller (Valentine Barshak)

  Samsung Exynos
    - Add MSI support (Jingoo Han)
    - Turn off power when link fails (Jingoo Han)
    - Add Jingoo Han as maintainer (Jingoo Han)
    - Add clk_disable_unprepare() on error path (Wei Yongjun)
    - Remove redundant of_match_ptr (Sachin Kamat)

  Synopsys DesignWare
    - Add irq_create_mapping() (Pratyush Anand)
    - Add header guards (Seungwon Jeon)

  Miscellaneous
    - Enable native PCIe services by default on non-ACPI (Andrew Murray)
    - Cleanup _OSC usage and messages (Bjorn Helgaas)
    - Remove pcibios_last_bus boot option on non-x86 (Bjorn Helgaas)
    - Convert bus code to use bus_, drv_, and dev_groups (Greg Kroah-Hartman)
    - Remove unused pci_mem_start (Myron Stowe)
    - Make sysfs functions static (Sachin Kamat)
    - Warn on invalid return from driver probe (Stephen M. Cameron)
    - Remove Intel Haswell D3 delays (Todd E Brandt)
    - Call pci_set_master() in core if driver doesn't do it (Yinghai Lu)
    - Use pci_is_pcie() to simplify code (Yijing Wang)
    - Use PCIe capability accessors to simplify code (Yijing Wang)
    - Use cached pci_dev->pcie_cap to simplify code (Yijing Wang)
    - Removed unused "is_pcie" from struct pci_dev (Yijing Wang)
    - Simplify sysfs CPU affinity implementation (Yijing Wang)"

* tag 'pci-v3.13-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (79 commits)
  PCI: Enable upstream bridges even for VFs on virtual buses
  PCI: Add pci_upstream_bridge()
  PCI: Add x86_msi.msi_mask_irq() and msix_mask_irq()
  PCI: Warn on driver probe return value greater than zero
  PCI: Drop warning about drivers that don't use pci_set_master()
  PCI: Workaround missing pci_set_master in pci drivers
  powerpc/pci: Use pci_is_pcie() to simplify code [fix]
  PCI: Update pcie_ports 'auto' behavior for non-ACPI platforms
  PCI: imx6: Probe the PCIe in fs_initcall()
  PCI: Add R-Car Gen2 internal PCI support
  PCI: imx6: Remove redundant of_match_ptr
  PCI: Report pci_pme_active() kmalloc failure
  mn10300/PCI: Remove useless pcibios_last_bus
  frv/PCI: Remove pcibios_last_bus
  PCI: imx6: Increase link startup timeout
  PCI: exynos: Remove redundant of_match_ptr
  PCI: imx6: Fix imprecise abort handler
  PCI: Fail MSI/MSI-X initialization if device is not in PCI_D0
  PCI: imx6: Remove redundant dev_err() in imx6_pcie_probe()
  x86/PCI: Coalesce multiple overlapping host bridge windows
  ...
Linus Torvalds 2013-11-14 14:02:00 +09:00
Parents: f9300eaaac eaaeb1cb33
Commit: 2f466d33f5
56 changed files with 1883 additions and 612 deletions

@ -525,8 +525,9 @@ corresponding register block for you.
6. Other interesting functions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pci_find_slot() Find pci_dev corresponding to given bus and
slot numbers.
pci_get_domain_bus_and_slot() Find pci_dev corresponding to given domain,
bus and slot and number. If the device is
found, its reference count is increased.
pci_set_power_state() Set PCI Power Management state (0=D0 ... 3=D3)
pci_find_capability() Find specified capability in device's capability
list.
@ -582,7 +583,8 @@ having sane locking.
pci_find_device() Superseded by pci_get_device()
pci_find_subsys() Superseded by pci_get_subsys()
pci_find_slot() Superseded by pci_get_slot()
pci_find_slot() Superseded by pci_get_domain_bus_and_slot()
pci_get_slot() Superseded by pci_get_domain_bus_and_slot()
The alternative is the traditional PCI device driver that walks PCI
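
A minimal usage sketch of the new lookup helper described above (illustrative
only; the domain/bus/devfn values and the error handling here are assumptions,
not part of this change):

	struct pci_dev *pdev;

	/* look up domain 0000, bus 00, device 1f, function 0 */
	pdev = pci_get_domain_bus_and_slot(0, 0, PCI_DEVFN(0x1f, 0));
	if (pdev) {
		/* ... use pdev ... */
		pci_dev_put(pdev);	/* drop the reference taken by the lookup */
	}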

@ -3,7 +3,7 @@
Required properties:
- compatible: should contain "snps,dw-pcie" to identify the
core, plus an identifier for the specific instance, such
as "samsung,exynos5440-pcie".
as "samsung,exynos5440-pcie" or "fsl,imx6q-pcie".
- reg: base addresses and lengths of the pcie controller,
the phy controller, additional register for the phy controller.
- interrupts: interrupt values for level interrupt,
@ -21,6 +21,11 @@ Required properties:
- num-lanes: number of lanes to use
- reset-gpio: gpio pin number of power good signal
Optional properties for fsl,imx6q-pcie
- power-on-gpio: gpio pin number of power-enable signal
- wake-up-gpio: gpio pin number of incoming wakeup signal
- disable-gpio: gpio pin number of outgoing rfkill/endpoint disable signal
Example:
SoC specific DT Entry:

@ -6427,6 +6427,7 @@ S: Supported
F: Documentation/PCI/
F: drivers/pci/
F: include/linux/pci*
F: arch/x86/pci/
PCI DRIVER FOR NVIDIA TEGRA
M: Thierry Reding <thierry.reding@gmail.com>
@ -6435,6 +6436,12 @@ S: Supported
F: Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt
F: drivers/pci/host/pci-tegra.c
PCI DRIVER FOR SAMSUNG EXYNOS
M: Jingoo Han <jg1.han@samsung.com>
L: linux-pci@vger.kernel.org
S: Maintained
F: drivers/pci/host/pci-exynos.c
PCMCIA SUBSYSTEM
P: Linux PCMCIA Team
L: linux-pcmcia@lists.infradead.org

@ -11,7 +11,6 @@
#define pcibios_assign_all_busses(void) 1
extern unsigned long pci_mem_start;
#define PCIBIOS_MIN_IO 0x1000
#define PCIBIOS_MIN_MEM 0x10000000

@ -30,7 +30,6 @@ void pcibios_resource_survey(void);
/* pci-vdk.c */
extern int __nongpreldata pcibios_last_bus;
extern struct pci_ops *__nongpreldata pci_root_ops;
/* pci-irq.c */

@ -25,7 +25,6 @@
unsigned int __nongpreldata pci_probe = 1;
int __nongpreldata pcibios_last_bus = -1;
struct pci_ops *__nongpreldata pci_root_ops;
/*
@ -219,37 +218,6 @@ static struct pci_ops * __init pci_check_direct(void)
return NULL;
}
/*
* Discover remaining PCI buses in case there are peer host bridges.
* We use the number of last PCI bus provided by the PCI BIOS.
*/
static void __init pcibios_fixup_peer_bridges(void)
{
struct pci_bus bus;
struct pci_dev dev;
int n;
u16 l;
if (pcibios_last_bus <= 0 || pcibios_last_bus >= 0xff)
return;
printk("PCI: Peer bridge fixup\n");
for (n=0; n <= pcibios_last_bus; n++) {
if (pci_find_bus(0, n))
continue;
bus.number = n;
bus.ops = pci_root_ops;
dev.bus = &bus;
for(dev.devfn=0; dev.devfn<256; dev.devfn += 8)
if (!pci_read_config_word(&dev, PCI_VENDOR_ID, &l) &&
l != 0x0000 && l != 0xffff) {
printk("Found device at %02x:%02x [%04x]\n", n, dev.devfn, l);
printk("PCI: Discovered peer bus %02x\n", n);
pci_scan_bus(n, pci_root_ops, NULL);
break;
}
}
}
/*
* Exceptions for specific devices. Usually work-arounds for fatal design flaws.
*/
@ -418,7 +386,6 @@ int __init pcibios_init(void)
pci_scan_root_bus(NULL, 0, pci_root_ops, NULL, &resources);
pcibios_irq_init();
pcibios_fixup_peer_bridges();
pcibios_fixup_irqs();
pcibios_resource_survey();
@ -432,9 +399,6 @@ char * __init pcibios_setup(char *str)
if (!strcmp(str, "off")) {
pci_probe = 0;
return NULL;
} else if (!strncmp(str, "lastbus=", 8)) {
pcibios_last_bus = simple_strtol(str+8, NULL, 0);
return NULL;
}
return str;
}

@ -44,7 +44,6 @@ extern void unit_pci_init(void);
#define pcibios_assign_all_busses() 0
#endif
extern unsigned long pci_mem_start;
#define PCIBIOS_MIN_IO 0xBE000004
#define PCIBIOS_MIN_MEM 0xB8000000

@ -35,9 +35,6 @@
struct mn10300_cpuinfo boot_cpu_data;
/* For PCI or other memory-mapped resources */
unsigned long pci_mem_start = 0x18000000;
static char __initdata cmd_line[COMMAND_LINE_SIZE];
char redboot_command_line[COMMAND_LINE_SIZE] =
"console=ttyS0,115200 root=/dev/mtdblock3 rw";

@ -35,7 +35,6 @@ extern void pcibios_resource_survey(void);
/* pci.c */
extern int pcibios_last_bus;
extern struct pci_ops *pci_root_ops;
extern struct irq_routing_table *pcibios_get_irq_routing_table(void);

@ -24,7 +24,6 @@
unsigned int pci_probe = 1;
int pcibios_last_bus = -1;
struct pci_ops *pci_root_ops;
/*
@ -392,10 +391,6 @@ char *__init pcibios_setup(char *str)
if (!strcmp(str, "off")) {
pci_probe = 0;
return NULL;
} else if (!strncmp(str, "lastbus=", 8)) {
pcibios_last_bus = simple_strtol(str+8, NULL, 0);
return NULL;
}
return str;

@ -189,14 +189,13 @@ static size_t eeh_gather_pci_data(struct eeh_dev *edev, char * buf, size_t len)
}
/* If PCI-E capable, dump PCI-E cap 10, and the AER */
cap = pci_find_capability(dev, PCI_CAP_ID_EXP);
if (cap) {
if (pci_is_pcie(dev)) {
n += scnprintf(buf+n, len-n, "pci-e cap10:\n");
printk(KERN_WARNING
"EEH: PCI-E capabilities and status follow:\n");
for (i=0; i<=8; i++) {
eeh_ops->read_config(dn, cap+4*i, 4, &cfg);
eeh_ops->read_config(dn, dev->pcie_cap+4*i, 4, &cfg);
n += scnprintf(buf+n, len-n, "%02x:%x\n", 4*i, cfg);
printk(KERN_WARNING "EEH: PCI-E %02x: %08x\n", i, cfg);
}

@ -45,7 +45,7 @@ static void quirk_fsl_pcie_early(struct pci_dev *dev)
u8 hdr_type;
/* if we aren't a PCIe don't bother */
if (!pci_find_capability(dev, PCI_CAP_ID_EXP))
if (!pci_is_pcie(dev))
return;
/* if we aren't in host mode don't bother */

@ -251,15 +251,12 @@ static void fixup_read_and_payload_sizes(void)
/* Scan for the smallest maximum payload size. */
for_each_pci_dev(dev) {
u32 devcap;
int max_payload;
if (!pci_is_pcie(dev))
continue;
pcie_capability_read_dword(dev, PCI_EXP_DEVCAP, &devcap);
max_payload = devcap & PCI_EXP_DEVCAP_PAYLOAD;
if (max_payload < smallest_max_payload)
smallest_max_payload = max_payload;
if (dev->pcie_mpss < smallest_max_payload)
smallest_max_payload = dev->pcie_mpss;
}
/* Now, set the max_payload_size for all devices to that value. */

@ -172,6 +172,7 @@ struct x86_platform_ops {
struct pci_dev;
struct msi_msg;
struct msi_desc;
struct x86_msi_ops {
int (*setup_msi_irqs)(struct pci_dev *dev, int nvec, int type);
@ -182,6 +183,8 @@ struct x86_msi_ops {
void (*teardown_msi_irqs)(struct pci_dev *dev);
void (*restore_msi_irqs)(struct pci_dev *dev, int irq);
int (*setup_hpet_msi)(unsigned int irq, unsigned int id);
u32 (*msi_mask_irq)(struct msi_desc *desc, u32 mask, u32 flag);
u32 (*msix_mask_irq)(struct msi_desc *desc, u32 flag);
};
struct IO_APIC_route_entry;

@ -116,6 +116,8 @@ struct x86_msi_ops x86_msi = {
.teardown_msi_irqs = default_teardown_msi_irqs,
.restore_msi_irqs = default_restore_msi_irqs,
.setup_hpet_msi = default_setup_hpet_msi,
.msi_mask_irq = default_msi_mask_irq,
.msix_mask_irq = default_msix_mask_irq,
};
/* MSI arch specific hooks */
@ -138,6 +140,14 @@ void arch_restore_msi_irqs(struct pci_dev *dev, int irq)
{
x86_msi.restore_msi_irqs(dev, irq);
}
u32 arch_msi_mask_irq(struct msi_desc *desc, u32 mask, u32 flag)
{
return x86_msi.msi_mask_irq(desc, mask, flag);
}
u32 arch_msix_mask_irq(struct msi_desc *desc, u32 flag)
{
return x86_msi.msix_mask_irq(desc, flag);
}
#endif
struct x86_io_apic_ops x86_io_apic_ops = {

@ -354,12 +354,12 @@ static void coalesce_windows(struct pci_root_info *info, unsigned long type)
* the kernel resource tree doesn't allow overlaps.
*/
if (resource_overlaps(res1, res2)) {
res1->start = min(res1->start, res2->start);
res1->end = max(res1->end, res2->end);
res2->start = min(res1->start, res2->start);
res2->end = max(res1->end, res2->end);
dev_info(&info->bridge->dev,
"host bridge window expanded to %pR; %pR ignored\n",
res1, res2);
res2->flags = 0;
res2, res1);
res1->flags = 0;
}
}
}

@ -231,7 +231,7 @@ static int quirk_pcie_aspm_write(struct pci_bus *bus, unsigned int devfn, int wh
offset = quirk_aspm_offset[GET_INDEX(bus->self->device, devfn)];
if ((offset) && (where == offset))
value = value & 0xfffffffc;
value = value & ~PCI_EXP_LNKCTL_ASPMC;
return raw_pci_write(pci_domain_nr(bus), bus->number,
devfn, where, size, value);
@ -252,7 +252,7 @@ static struct pci_ops quirk_pcie_aspm_ops = {
*/
static void pcie_rootport_aspm_quirk(struct pci_dev *pdev)
{
int cap_base, i;
int i;
struct pci_bus *pbus;
struct pci_dev *dev;
@ -278,7 +278,7 @@ static void pcie_rootport_aspm_quirk(struct pci_dev *pdev)
for (i = GET_INDEX(pdev->device, 0); i <= GET_INDEX(pdev->device, 7); ++i)
quirk_aspm_offset[i] = 0;
pbus->ops = pbus->parent->ops;
pci_bus_set_ops(pbus, pbus->parent->ops);
} else {
/*
* If devices are attached to the root port at power-up or
@ -286,13 +286,15 @@ static void pcie_rootport_aspm_quirk(struct pci_dev *pdev)
* each root port to save the register offsets and replace the
* bus ops.
*/
list_for_each_entry(dev, &pbus->devices, bus_list) {
list_for_each_entry(dev, &pbus->devices, bus_list)
/* There are 0 to 8 devices attached to this bus */
cap_base = pci_find_capability(dev, PCI_CAP_ID_EXP);
quirk_aspm_offset[GET_INDEX(pdev->device, dev->devfn)] = cap_base + 0x10;
}
pbus->ops = &quirk_pcie_aspm_ops;
quirk_aspm_offset[GET_INDEX(pdev->device, dev->devfn)] =
dev->pcie_cap + PCI_EXP_LNKCTL;
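/*
 * PCI_EXP_LNKCTL is the Link Control register offset (0x10) within the
 * PCIe capability, so dev->pcie_cap + PCI_EXP_LNKCTL is the same config
 * offset the old cap_base + 0x10 computed.
 */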
pci_bus_set_ops(pbus, &quirk_pcie_aspm_ops);
dev_info(&pbus->dev, "writes to ASPM control bits will be ignored\n");
}
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MCH_PA, pcie_rootport_aspm_quirk);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MCH_PA1, pcie_rootport_aspm_quirk);

@ -382,7 +382,14 @@ static void xen_teardown_msi_irq(unsigned int irq)
{
xen_destroy_irq(irq);
}
static u32 xen_nop_msi_mask_irq(struct msi_desc *desc, u32 mask, u32 flag)
{
return 0;
}
static u32 xen_nop_msix_mask_irq(struct msi_desc *desc, u32 flag)
{
return 0;
}
#endif
int __init pci_xen_init(void)
@ -406,6 +413,8 @@ int __init pci_xen_init(void)
x86_msi.setup_msi_irqs = xen_setup_msi_irqs;
x86_msi.teardown_msi_irq = xen_teardown_msi_irq;
x86_msi.teardown_msi_irqs = xen_teardown_msi_irqs;
x86_msi.msi_mask_irq = xen_nop_msi_mask_irq;
x86_msi.msix_mask_irq = xen_nop_msix_mask_irq;
#endif
return 0;
}
@ -485,6 +494,8 @@ int __init pci_xen_initial_domain(void)
x86_msi.setup_msi_irqs = xen_initdom_setup_msi_irqs;
x86_msi.teardown_msi_irq = xen_teardown_msi_irq;
x86_msi.restore_msi_irqs = xen_initdom_restore_msi_irqs;
x86_msi.msi_mask_irq = xen_nop_msi_mask_irq;
x86_msi.msix_mask_irq = xen_nop_msix_mask_irq;
#endif
xen_setup_acpi_sci();
__acpi_register_gsi = acpi_register_gsi_xen;

@ -758,9 +758,9 @@ int apei_osc_setup(void)
.cap.pointer = capbuf,
};
capbuf[OSC_QUERY_TYPE] = OSC_QUERY_ENABLE;
capbuf[OSC_SUPPORT_TYPE] = 1;
capbuf[OSC_CONTROL_TYPE] = 0;
capbuf[OSC_QUERY_DWORD] = OSC_QUERY_ENABLE;
capbuf[OSC_SUPPORT_DWORD] = 1;
capbuf[OSC_CONTROL_DWORD] = 0;
if (ACPI_FAILURE(acpi_get_handle(NULL, "\\_SB", &handle))
|| ACPI_FAILURE(acpi_run_osc(handle, &context)))

@ -256,7 +256,7 @@ acpi_status acpi_run_osc(acpi_handle handle, struct acpi_osc_context *context)
acpi_print_osc_error(handle, context,
"_OSC invalid revision");
if (errors & OSC_CAPABILITIES_MASK_ERROR) {
if (((u32 *)context->cap.pointer)[OSC_QUERY_TYPE]
if (((u32 *)context->cap.pointer)[OSC_QUERY_DWORD]
& OSC_QUERY_ENABLE)
goto out_success;
status = AE_SUPPORT;
@ -296,30 +296,30 @@ static void acpi_bus_osc_support(void)
};
acpi_handle handle;
capbuf[OSC_QUERY_TYPE] = OSC_QUERY_ENABLE;
capbuf[OSC_SUPPORT_TYPE] = OSC_SB_PR3_SUPPORT; /* _PR3 is in use */
capbuf[OSC_QUERY_DWORD] = OSC_QUERY_ENABLE;
capbuf[OSC_SUPPORT_DWORD] = OSC_SB_PR3_SUPPORT; /* _PR3 is in use */
#if defined(CONFIG_ACPI_PROCESSOR_AGGREGATOR) ||\
defined(CONFIG_ACPI_PROCESSOR_AGGREGATOR_MODULE)
capbuf[OSC_SUPPORT_TYPE] |= OSC_SB_PAD_SUPPORT;
capbuf[OSC_SUPPORT_DWORD] |= OSC_SB_PAD_SUPPORT;
#endif
#if defined(CONFIG_ACPI_PROCESSOR) || defined(CONFIG_ACPI_PROCESSOR_MODULE)
capbuf[OSC_SUPPORT_TYPE] |= OSC_SB_PPC_OST_SUPPORT;
capbuf[OSC_SUPPORT_DWORD] |= OSC_SB_PPC_OST_SUPPORT;
#endif
#ifdef ACPI_HOTPLUG_OST
capbuf[OSC_SUPPORT_TYPE] |= OSC_SB_HOTPLUG_OST_SUPPORT;
capbuf[OSC_SUPPORT_DWORD] |= OSC_SB_HOTPLUG_OST_SUPPORT;
#endif
if (!ghes_disable)
capbuf[OSC_SUPPORT_TYPE] |= OSC_SB_APEI_SUPPORT;
capbuf[OSC_SUPPORT_DWORD] |= OSC_SB_APEI_SUPPORT;
if (ACPI_FAILURE(acpi_get_handle(NULL, "\\_SB", &handle)))
return;
if (ACPI_SUCCESS(acpi_run_osc(handle, &context))) {
u32 *capbuf_ret = context.ret.pointer;
if (context.ret.length > OSC_SUPPORT_TYPE)
if (context.ret.length > OSC_SUPPORT_DWORD)
osc_sb_apei_support_acked =
capbuf_ret[OSC_SUPPORT_TYPE] & OSC_SB_APEI_SUPPORT;
capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_APEI_SUPPORT;
kfree(context.ret.pointer);
}
/* do we need to check other returned cap? Sounds no */

@ -51,10 +51,10 @@ static int acpi_pci_root_add(struct acpi_device *device,
const struct acpi_device_id *not_used);
static void acpi_pci_root_remove(struct acpi_device *device);
#define ACPI_PCIE_REQ_SUPPORT (OSC_EXT_PCI_CONFIG_SUPPORT \
| OSC_ACTIVE_STATE_PWR_SUPPORT \
| OSC_CLOCK_PWR_CAPABILITY_SUPPORT \
| OSC_MSI_SUPPORT)
#define ACPI_PCIE_REQ_SUPPORT (OSC_PCI_EXT_CONFIG_SUPPORT \
| OSC_PCI_ASPM_SUPPORT \
| OSC_PCI_CLOCK_PM_SUPPORT \
| OSC_PCI_MSI_SUPPORT)
static const struct acpi_device_id root_device_ids[] = {
{"PNP0A03", 0},
@ -129,6 +129,55 @@ static acpi_status try_get_root_bridge_busnr(acpi_handle handle,
return AE_OK;
}
struct pci_osc_bit_struct {
u32 bit;
char *desc;
};
static struct pci_osc_bit_struct pci_osc_support_bit[] = {
{ OSC_PCI_EXT_CONFIG_SUPPORT, "ExtendedConfig" },
{ OSC_PCI_ASPM_SUPPORT, "ASPM" },
{ OSC_PCI_CLOCK_PM_SUPPORT, "ClockPM" },
{ OSC_PCI_SEGMENT_GROUPS_SUPPORT, "Segments" },
{ OSC_PCI_MSI_SUPPORT, "MSI" },
};
static struct pci_osc_bit_struct pci_osc_control_bit[] = {
{ OSC_PCI_EXPRESS_NATIVE_HP_CONTROL, "PCIeHotplug" },
{ OSC_PCI_SHPC_NATIVE_HP_CONTROL, "SHPCHotplug" },
{ OSC_PCI_EXPRESS_PME_CONTROL, "PME" },
{ OSC_PCI_EXPRESS_AER_CONTROL, "AER" },
{ OSC_PCI_EXPRESS_CAPABILITY_CONTROL, "PCIeCapability" },
};
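/*
 * decode_osc_bits() below builds a one-line summary of an _OSC dword: for
 * each bit set in "word" it appends the matching name from "table", so a
 * fully-set support mask is logged as
 * "_OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]".
 */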
static void decode_osc_bits(struct acpi_pci_root *root, char *msg, u32 word,
struct pci_osc_bit_struct *table, int size)
{
char buf[80];
int i, len = 0;
struct pci_osc_bit_struct *entry;
buf[0] = '\0';
for (i = 0, entry = table; i < size; i++, entry++)
if (word & entry->bit)
len += snprintf(buf + len, sizeof(buf) - len, "%s%s",
len ? " " : "", entry->desc);
dev_info(&root->device->dev, "_OSC: %s [%s]\n", msg, buf);
}
static void decode_osc_support(struct acpi_pci_root *root, char *msg, u32 word)
{
decode_osc_bits(root, msg, word, pci_osc_support_bit,
ARRAY_SIZE(pci_osc_support_bit));
}
static void decode_osc_control(struct acpi_pci_root *root, char *msg, u32 word)
{
decode_osc_bits(root, msg, word, pci_osc_control_bit,
ARRAY_SIZE(pci_osc_control_bit));
}
static u8 pci_osc_uuid_str[] = "33DB4D5B-1FF7-401C-9657-7441C03DD766";
static acpi_status acpi_pci_run_osc(acpi_handle handle,
@ -160,14 +209,14 @@ static acpi_status acpi_pci_query_osc(struct acpi_pci_root *root,
support &= OSC_PCI_SUPPORT_MASKS;
support |= root->osc_support_set;
capbuf[OSC_QUERY_TYPE] = OSC_QUERY_ENABLE;
capbuf[OSC_SUPPORT_TYPE] = support;
capbuf[OSC_QUERY_DWORD] = OSC_QUERY_ENABLE;
capbuf[OSC_SUPPORT_DWORD] = support;
if (control) {
*control &= OSC_PCI_CONTROL_MASKS;
capbuf[OSC_CONTROL_TYPE] = *control | root->osc_control_set;
capbuf[OSC_CONTROL_DWORD] = *control | root->osc_control_set;
} else {
/* Run _OSC query only with existing controls. */
capbuf[OSC_CONTROL_TYPE] = root->osc_control_set;
capbuf[OSC_CONTROL_DWORD] = root->osc_control_set;
}
status = acpi_pci_run_osc(root->device->handle, capbuf, &result);
@ -182,11 +231,7 @@ static acpi_status acpi_pci_query_osc(struct acpi_pci_root *root,
static acpi_status acpi_pci_osc_support(struct acpi_pci_root *root, u32 flags)
{
acpi_status status;
acpi_handle tmp;
status = acpi_get_handle(root->device->handle, "_OSC", &tmp);
if (ACPI_FAILURE(status))
return status;
mutex_lock(&osc_lock);
status = acpi_pci_query_osc(root, flags, NULL);
mutex_unlock(&osc_lock);
@ -318,9 +363,8 @@ EXPORT_SYMBOL_GPL(acpi_get_pci_dev);
acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 req)
{
struct acpi_pci_root *root;
acpi_status status;
acpi_status status = AE_OK;
u32 ctrl, capbuf[3];
acpi_handle tmp;
if (!mask)
return AE_BAD_PARAMETER;
@ -333,10 +377,6 @@ acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 req)
if (!root)
return AE_NOT_EXIST;
status = acpi_get_handle(handle, "_OSC", &tmp);
if (ACPI_FAILURE(status))
return status;
mutex_lock(&osc_lock);
*mask = ctrl | root->osc_control_set;
@ -351,17 +391,21 @@ acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 req)
goto out;
if (ctrl == *mask)
break;
decode_osc_control(root, "platform does not support",
ctrl & ~(*mask));
ctrl = *mask;
}
if ((ctrl & req) != req) {
decode_osc_control(root, "not requesting control; platform does not support",
req & ~(ctrl));
status = AE_SUPPORT;
goto out;
}
capbuf[OSC_QUERY_TYPE] = 0;
capbuf[OSC_SUPPORT_TYPE] = root->osc_support_set;
capbuf[OSC_CONTROL_TYPE] = ctrl;
capbuf[OSC_QUERY_DWORD] = 0;
capbuf[OSC_SUPPORT_DWORD] = root->osc_support_set;
capbuf[OSC_CONTROL_DWORD] = ctrl;
status = acpi_pci_run_osc(handle, capbuf, mask);
if (ACPI_SUCCESS(status))
root->osc_control_set = *mask;
@ -371,6 +415,87 @@ out:
}
EXPORT_SYMBOL(acpi_pci_osc_control_set);
static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm,
int *clear_aspm)
{
u32 support, control, requested;
acpi_status status;
struct acpi_device *device = root->device;
acpi_handle handle = device->handle;
/*
* All supported architectures that use ACPI have support for
* PCI domains, so we indicate this in _OSC support capabilities.
*/
support = OSC_PCI_SEGMENT_GROUPS_SUPPORT;
if (pci_ext_cfg_avail())
support |= OSC_PCI_EXT_CONFIG_SUPPORT;
if (pcie_aspm_support_enabled())
support |= OSC_PCI_ASPM_SUPPORT | OSC_PCI_CLOCK_PM_SUPPORT;
if (pci_msi_enabled())
support |= OSC_PCI_MSI_SUPPORT;
decode_osc_support(root, "OS supports", support);
status = acpi_pci_osc_support(root, support);
if (ACPI_FAILURE(status)) {
dev_info(&device->dev, "_OSC failed (%s); disabling ASPM\n",
acpi_format_exception(status));
*no_aspm = 1;
return;
}
if (pcie_ports_disabled) {
dev_info(&device->dev, "PCIe port services disabled; not requesting _OSC control\n");
return;
}
if ((support & ACPI_PCIE_REQ_SUPPORT) != ACPI_PCIE_REQ_SUPPORT) {
decode_osc_support(root, "not requesting OS control; OS requires",
ACPI_PCIE_REQ_SUPPORT);
return;
}
control = OSC_PCI_EXPRESS_CAPABILITY_CONTROL
| OSC_PCI_EXPRESS_NATIVE_HP_CONTROL
| OSC_PCI_EXPRESS_PME_CONTROL;
if (pci_aer_available()) {
if (aer_acpi_firmware_first())
dev_info(&device->dev,
"PCIe AER handled by firmware\n");
else
control |= OSC_PCI_EXPRESS_AER_CONTROL;
}
requested = control;
status = acpi_pci_osc_control_set(handle, &control,
OSC_PCI_EXPRESS_CAPABILITY_CONTROL);
if (ACPI_SUCCESS(status)) {
decode_osc_control(root, "OS now controls", control);
if (acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_ASPM) {
/*
* We have ASPM control, but the FADT indicates
* that it's unsupported. Clear it.
*/
*clear_aspm = 1;
}
} else {
decode_osc_control(root, "OS requested", requested);
decode_osc_control(root, "platform willing to grant", control);
dev_info(&device->dev, "_OSC failed (%s); disabling ASPM\n",
acpi_format_exception(status));
/*
* We want to disable ASPM here, but aspm_disabled
* needs to remain in its state from boot so that we
* properly handle PCIe 1.1 devices. So we set this
* flag here, to defer the action until after the ACPI
* root scan.
*/
*no_aspm = 1;
}
}
static int acpi_pci_root_add(struct acpi_device *device,
const struct acpi_device_id *not_used)
{
@ -378,9 +503,8 @@ static int acpi_pci_root_add(struct acpi_device *device,
acpi_status status;
int result;
struct acpi_pci_root *root;
u32 flags, base_flags;
acpi_handle handle = device->handle;
bool no_aspm = false, clear_aspm = false;
int no_aspm = 0, clear_aspm = 0;
root = kzalloc(sizeof(struct acpi_pci_root), GFP_KERNEL);
if (!root)
@ -433,81 +557,7 @@ static int acpi_pci_root_add(struct acpi_device *device,
root->mcfg_addr = acpi_pci_root_get_mcfg_addr(handle);
/*
* All supported architectures that use ACPI have support for
* PCI domains, so we indicate this in _OSC support capabilities.
*/
flags = base_flags = OSC_PCI_SEGMENT_GROUPS_SUPPORT;
acpi_pci_osc_support(root, flags);
if (pci_ext_cfg_avail())
flags |= OSC_EXT_PCI_CONFIG_SUPPORT;
if (pcie_aspm_support_enabled()) {
flags |= OSC_ACTIVE_STATE_PWR_SUPPORT |
OSC_CLOCK_PWR_CAPABILITY_SUPPORT;
}
if (pci_msi_enabled())
flags |= OSC_MSI_SUPPORT;
if (flags != base_flags) {
status = acpi_pci_osc_support(root, flags);
if (ACPI_FAILURE(status)) {
dev_info(&device->dev, "ACPI _OSC support "
"notification failed, disabling PCIe ASPM\n");
no_aspm = true;
flags = base_flags;
}
}
if (!pcie_ports_disabled
&& (flags & ACPI_PCIE_REQ_SUPPORT) == ACPI_PCIE_REQ_SUPPORT) {
flags = OSC_PCI_EXPRESS_CAP_STRUCTURE_CONTROL
| OSC_PCI_EXPRESS_NATIVE_HP_CONTROL
| OSC_PCI_EXPRESS_PME_CONTROL;
if (pci_aer_available()) {
if (aer_acpi_firmware_first())
dev_dbg(&device->dev,
"PCIe errors handled by BIOS.\n");
else
flags |= OSC_PCI_EXPRESS_AER_CONTROL;
}
dev_info(&device->dev,
"Requesting ACPI _OSC control (0x%02x)\n", flags);
status = acpi_pci_osc_control_set(handle, &flags,
OSC_PCI_EXPRESS_CAP_STRUCTURE_CONTROL);
if (ACPI_SUCCESS(status)) {
dev_info(&device->dev,
"ACPI _OSC control (0x%02x) granted\n", flags);
if (acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_ASPM) {
/*
* We have ASPM control, but the FADT indicates
* that it's unsupported. Clear it.
*/
clear_aspm = true;
}
} else {
dev_info(&device->dev,
"ACPI _OSC request failed (%s), "
"returned control mask: 0x%02x\n",
acpi_format_exception(status), flags);
dev_info(&device->dev,
"ACPI _OSC control for PCIe not granted, disabling ASPM\n");
/*
* We want to disable ASPM here, but aspm_disabled
* needs to remain in its state from boot so that we
* properly handle PCIe 1.1 devices. So we set this
* flag here, to defer the action until after the ACPI
* root scan.
*/
no_aspm = true;
}
} else {
dev_info(&device->dev,
"Unable to request _OSC control "
"(_OSC support mask: 0x%02x)\n", flags);
}
negotiate_os_control(root, &no_aspm, &clear_aspm);
/*
* TBD: Need PCI interface for enumeration/configuration of roots.

@ -1174,23 +1174,16 @@ int evergreen_set_uvd_clocks(struct radeon_device *rdev, u32 vclk, u32 dclk)
void evergreen_fix_pci_max_read_req_size(struct radeon_device *rdev)
{
u16 ctl, v;
int err;
err = pcie_capability_read_word(rdev->pdev, PCI_EXP_DEVCTL, &ctl);
if (err)
return;
v = (ctl & PCI_EXP_DEVCTL_READRQ) >> 12;
int readrq;
u16 v;
readrq = pcie_get_readrq(rdev->pdev);
v = ffs(readrq) - 8;
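/*
 * pcie_get_readrq() returns the max read request size in bytes (128..4096);
 * since the hardware encoding is 128 << v, ffs(readrq) - 8 recovers the
 * 3-bit field value checked below.
 */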
/* if bios or OS sets MAX_READ_REQUEST_SIZE to an invalid value, fix it
* to avoid hangs or perfomance issues
*/
if ((v == 0) || (v == 6) || (v == 7)) {
ctl &= ~PCI_EXP_DEVCTL_READRQ;
ctl |= (2 << 12);
pcie_capability_write_word(rdev->pdev, PCI_EXP_DEVCTL, ctl);
}
if ((v == 0) || (v == 6) || (v == 7))
pcie_set_readrq(rdev->pdev, 512);
}
static bool dce4_is_in_vblank(struct radeon_device *rdev, int crtc)

@ -51,8 +51,8 @@
* file calls, even though this violates some
* expectations of harmlessness.
*/
static int qib_tune_pcie_caps(struct qib_devdata *);
static int qib_tune_pcie_coalesce(struct qib_devdata *);
static void qib_tune_pcie_caps(struct qib_devdata *);
static void qib_tune_pcie_coalesce(struct qib_devdata *);
/*
* Do all the common PCIe setup and initialization.
@ -476,30 +476,6 @@ void qib_pcie_reenable(struct qib_devdata *dd, u16 cmd, u8 iline, u8 cline)
"pci_enable_device failed after reset: %d\n", r);
}
/* code to adjust PCIe capabilities. */
static int fld2val(int wd, int mask)
{
int lsbmask;
if (!mask)
return 0;
wd &= mask;
lsbmask = mask ^ (mask & (mask - 1));
wd /= lsbmask;
return wd;
}
static int val2fld(int wd, int mask)
{
int lsbmask;
if (!mask)
return 0;
lsbmask = mask ^ (mask & (mask - 1));
wd *= lsbmask;
return wd;
}
static int qib_pcie_coalesce;
module_param_named(pcie_coalesce, qib_pcie_coalesce, int, S_IRUGO);
@ -511,7 +487,7 @@ MODULE_PARM_DESC(pcie_coalesce, "tune PCIe colescing on some Intel chipsets");
* of these chipsets, with some BIOS settings, and enabling it on those
* systems may result in the system crashing, and/or data corruption.
*/
static int qib_tune_pcie_coalesce(struct qib_devdata *dd)
static void qib_tune_pcie_coalesce(struct qib_devdata *dd)
{
int r;
struct pci_dev *parent;
@ -519,18 +495,18 @@ static int qib_tune_pcie_coalesce(struct qib_devdata *dd)
u32 mask, bits, val;
if (!qib_pcie_coalesce)
return 0;
return;
/* Find out supported and configured values for parent (root) */
parent = dd->pcidev->bus->self;
if (parent->bus->parent) {
qib_devinfo(dd->pcidev, "Parent not root\n");
return 1;
return;
}
if (!pci_is_pcie(parent))
return 1;
return;
if (parent->vendor != 0x8086)
return 1;
return;
/*
* - bit 12: Max_rdcmp_Imt_EN: need to set to 1
@ -563,13 +539,12 @@ static int qib_tune_pcie_coalesce(struct qib_devdata *dd)
mask = (3U << 24) | (7U << 10);
} else {
/* not one of the chipsets that we know about */
return 1;
return;
}
pci_read_config_dword(parent, 0x48, &val);
val &= ~mask;
val |= bits;
r = pci_write_config_dword(parent, 0x48, val);
return 0;
}
/*
@ -580,55 +555,44 @@ static int qib_pcie_caps;
module_param_named(pcie_caps, qib_pcie_caps, int, S_IRUGO);
MODULE_PARM_DESC(pcie_caps, "Max PCIe tuning: Payload (0..3), ReadReq (4..7)");
static int qib_tune_pcie_caps(struct qib_devdata *dd)
static void qib_tune_pcie_caps(struct qib_devdata *dd)
{
int ret = 1; /* Assume the worst */
struct pci_dev *parent;
u16 pcaps, pctl, ecaps, ectl;
int rc_sup, ep_sup;
int rc_cur, ep_cur;
u16 rc_mpss, rc_mps, ep_mpss, ep_mps;
u16 rc_mrrs, ep_mrrs, max_mrrs;
/* Find out supported and configured values for parent (root) */
parent = dd->pcidev->bus->self;
if (parent->bus->parent) {
if (!pci_is_root_bus(parent->bus)) {
qib_devinfo(dd->pcidev, "Parent not root\n");
goto bail;
return;
}
if (!pci_is_pcie(parent) || !pci_is_pcie(dd->pcidev))
goto bail;
pcie_capability_read_word(parent, PCI_EXP_DEVCAP, &pcaps);
pcie_capability_read_word(parent, PCI_EXP_DEVCTL, &pctl);
return;
rc_mpss = parent->pcie_mpss;
rc_mps = ffs(pcie_get_mps(parent)) - 8;
/* Find out supported and configured values for endpoint (us) */
pcie_capability_read_word(dd->pcidev, PCI_EXP_DEVCAP, &ecaps);
pcie_capability_read_word(dd->pcidev, PCI_EXP_DEVCTL, &ectl);
ep_mpss = dd->pcidev->pcie_mpss;
ep_mps = ffs(pcie_get_mps(dd->pcidev)) - 8;
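/*
 * pcie_get_mps() returns the payload size in bytes, so ffs(size) - 8
 * converts it back to the 3-bit MPS code, where code c means 128 << c bytes.
 */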
ret = 0;
/* Find max payload supported by root, endpoint */
rc_sup = fld2val(pcaps, PCI_EXP_DEVCAP_PAYLOAD);
ep_sup = fld2val(ecaps, PCI_EXP_DEVCAP_PAYLOAD);
if (rc_sup > ep_sup)
rc_sup = ep_sup;
rc_cur = fld2val(pctl, PCI_EXP_DEVCTL_PAYLOAD);
ep_cur = fld2val(ectl, PCI_EXP_DEVCTL_PAYLOAD);
if (rc_mpss > ep_mpss)
rc_mpss = ep_mpss;
/* If Supported greater than limit in module param, limit it */
if (rc_sup > (qib_pcie_caps & 7))
rc_sup = qib_pcie_caps & 7;
if (rc_mpss > (qib_pcie_caps & 7))
rc_mpss = qib_pcie_caps & 7;
/* If less than (allowed, supported), bump root payload */
if (rc_sup > rc_cur) {
rc_cur = rc_sup;
pctl = (pctl & ~PCI_EXP_DEVCTL_PAYLOAD) |
val2fld(rc_cur, PCI_EXP_DEVCTL_PAYLOAD);
pcie_capability_write_word(parent, PCI_EXP_DEVCTL, pctl);
if (rc_mpss > rc_mps) {
rc_mps = rc_mpss;
pcie_set_mps(parent, 128 << rc_mps);
}
/* If less than (allowed, supported), bump endpoint payload */
if (rc_sup > ep_cur) {
ep_cur = rc_sup;
ectl = (ectl & ~PCI_EXP_DEVCTL_PAYLOAD) |
val2fld(ep_cur, PCI_EXP_DEVCTL_PAYLOAD);
pcie_capability_write_word(dd->pcidev, PCI_EXP_DEVCTL, ectl);
if (rc_mpss > ep_mps) {
ep_mps = rc_mpss;
pcie_set_mps(dd->pcidev, 128 << ep_mps);
}
/*
@ -636,26 +600,22 @@ static int qib_tune_pcie_caps(struct qib_devdata *dd)
* No field for max supported, but PCIe spec limits it to 4096,
* which is code '5' (log2(4096) - 7)
*/
rc_sup = 5;
if (rc_sup > ((qib_pcie_caps >> 4) & 7))
rc_sup = (qib_pcie_caps >> 4) & 7;
rc_cur = fld2val(pctl, PCI_EXP_DEVCTL_READRQ);
ep_cur = fld2val(ectl, PCI_EXP_DEVCTL_READRQ);
max_mrrs = 5;
if (max_mrrs > ((qib_pcie_caps >> 4) & 7))
max_mrrs = (qib_pcie_caps >> 4) & 7;
if (rc_sup > rc_cur) {
rc_cur = rc_sup;
pctl = (pctl & ~PCI_EXP_DEVCTL_READRQ) |
val2fld(rc_cur, PCI_EXP_DEVCTL_READRQ);
pcie_capability_write_word(parent, PCI_EXP_DEVCTL, pctl);
max_mrrs = 128 << max_mrrs;
rc_mrrs = pcie_get_readrq(parent);
ep_mrrs = pcie_get_readrq(dd->pcidev);
if (max_mrrs > rc_mrrs) {
rc_mrrs = max_mrrs;
pcie_set_readrq(parent, rc_mrrs);
}
if (rc_sup > ep_cur) {
ep_cur = rc_sup;
ectl = (ectl & ~PCI_EXP_DEVCTL_READRQ) |
val2fld(ep_cur, PCI_EXP_DEVCTL_READRQ);
pcie_capability_write_word(dd->pcidev, PCI_EXP_DEVCTL, ectl);
if (max_mrrs > ep_mrrs) {
ep_mrrs = max_mrrs;
pcie_set_readrq(dd->pcidev, ep_mrrs);
}
bail:
return ret;
}
/* End of PCIe capability tuning */

@ -15,8 +15,22 @@ config PCI_EXYNOS
select PCIEPORTBUS
select PCIE_DW
config PCI_IMX6
bool "Freescale i.MX6 PCIe controller"
depends on SOC_IMX6Q
select PCIEPORTBUS
select PCIE_DW
config PCI_TEGRA
bool "NVIDIA Tegra PCIe controller"
depends on ARCH_TEGRA
config PCI_RCAR_GEN2
bool "Renesas R-Car Gen2 Internal PCI controller"
depends on ARM && (ARCH_R8A7790 || ARCH_R8A7791 || COMPILE_TEST)
help
Say Y here if you want internal PCI support on R-Car Gen2 SoC.
There are 3 internal PCI controllers available with a single
built-in EHCI/OHCI host controller present on each one.
endmenu

@ -1,4 +1,6 @@
obj-$(CONFIG_PCIE_DW) += pcie-designware.o
obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
obj-$(CONFIG_PCI_MVEBU) += pci-mvebu.o
obj-$(CONFIG_PCI_TEGRA) += pci-tegra.o
obj-$(CONFIG_PCI_RCAR_GEN2) += pci-rcar-gen2.o

@ -48,6 +48,7 @@ struct exynos_pcie {
#define PCIE_IRQ_SPECIAL 0x008
#define PCIE_IRQ_EN_PULSE 0x00c
#define PCIE_IRQ_EN_LEVEL 0x010
#define IRQ_MSI_ENABLE (0x1 << 2)
#define PCIE_IRQ_EN_SPECIAL 0x014
#define PCIE_PWR_RESET 0x018
#define PCIE_CORE_RESET 0x01c
@ -77,18 +78,28 @@ struct exynos_pcie {
#define PCIE_PHY_PLL_BIAS 0x00c
#define PCIE_PHY_DCC_FEEDBACK 0x014
#define PCIE_PHY_PLL_DIV_1 0x05c
#define PCIE_PHY_COMMON_POWER 0x064
#define PCIE_PHY_COMMON_PD_CMN (0x1 << 3)
#define PCIE_PHY_TRSV0_EMP_LVL 0x084
#define PCIE_PHY_TRSV0_DRV_LVL 0x088
#define PCIE_PHY_TRSV0_RXCDR 0x0ac
#define PCIE_PHY_TRSV0_POWER 0x0c4
#define PCIE_PHY_TRSV0_PD_TSV (0x1 << 7)
#define PCIE_PHY_TRSV0_LVCC 0x0dc
#define PCIE_PHY_TRSV1_EMP_LVL 0x144
#define PCIE_PHY_TRSV1_RXCDR 0x16c
#define PCIE_PHY_TRSV1_POWER 0x184
#define PCIE_PHY_TRSV1_PD_TSV (0x1 << 7)
#define PCIE_PHY_TRSV1_LVCC 0x19c
#define PCIE_PHY_TRSV2_EMP_LVL 0x204
#define PCIE_PHY_TRSV2_RXCDR 0x22c
#define PCIE_PHY_TRSV2_POWER 0x244
#define PCIE_PHY_TRSV2_PD_TSV (0x1 << 7)
#define PCIE_PHY_TRSV2_LVCC 0x25c
#define PCIE_PHY_TRSV3_EMP_LVL 0x2c4
#define PCIE_PHY_TRSV3_RXCDR 0x2ec
#define PCIE_PHY_TRSV3_POWER 0x304
#define PCIE_PHY_TRSV3_PD_TSV (0x1 << 7)
#define PCIE_PHY_TRSV3_LVCC 0x31c
static inline void exynos_elb_writel(struct exynos_pcie *pcie, u32 val, u32 reg)
@ -202,6 +213,58 @@ static void exynos_pcie_deassert_phy_reset(struct pcie_port *pp)
exynos_blk_writel(exynos_pcie, 0, PCIE_PHY_TRSV_RESET);
}
static void exynos_pcie_power_on_phy(struct pcie_port *pp)
{
u32 val;
struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
val = exynos_phy_readl(exynos_pcie, PCIE_PHY_COMMON_POWER);
val &= ~PCIE_PHY_COMMON_PD_CMN;
exynos_phy_writel(exynos_pcie, val, PCIE_PHY_COMMON_POWER);
val = exynos_phy_readl(exynos_pcie, PCIE_PHY_TRSV0_POWER);
val &= ~PCIE_PHY_TRSV0_PD_TSV;
exynos_phy_writel(exynos_pcie, val, PCIE_PHY_TRSV0_POWER);
val = exynos_phy_readl(exynos_pcie, PCIE_PHY_TRSV1_POWER);
val &= ~PCIE_PHY_TRSV1_PD_TSV;
exynos_phy_writel(exynos_pcie, val, PCIE_PHY_TRSV1_POWER);
val = exynos_phy_readl(exynos_pcie, PCIE_PHY_TRSV2_POWER);
val &= ~PCIE_PHY_TRSV2_PD_TSV;
exynos_phy_writel(exynos_pcie, val, PCIE_PHY_TRSV2_POWER);
val = exynos_phy_readl(exynos_pcie, PCIE_PHY_TRSV3_POWER);
val &= ~PCIE_PHY_TRSV3_PD_TSV;
exynos_phy_writel(exynos_pcie, val, PCIE_PHY_TRSV3_POWER);
}
static void exynos_pcie_power_off_phy(struct pcie_port *pp)
{
u32 val;
struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
val = exynos_phy_readl(exynos_pcie, PCIE_PHY_COMMON_POWER);
val |= PCIE_PHY_COMMON_PD_CMN;
exynos_phy_writel(exynos_pcie, val, PCIE_PHY_COMMON_POWER);
val = exynos_phy_readl(exynos_pcie, PCIE_PHY_TRSV0_POWER);
val |= PCIE_PHY_TRSV0_PD_TSV;
exynos_phy_writel(exynos_pcie, val, PCIE_PHY_TRSV0_POWER);
val = exynos_phy_readl(exynos_pcie, PCIE_PHY_TRSV1_POWER);
val |= PCIE_PHY_TRSV1_PD_TSV;
exynos_phy_writel(exynos_pcie, val, PCIE_PHY_TRSV1_POWER);
val = exynos_phy_readl(exynos_pcie, PCIE_PHY_TRSV2_POWER);
val |= PCIE_PHY_TRSV2_PD_TSV;
exynos_phy_writel(exynos_pcie, val, PCIE_PHY_TRSV2_POWER);
val = exynos_phy_readl(exynos_pcie, PCIE_PHY_TRSV3_POWER);
val |= PCIE_PHY_TRSV3_PD_TSV;
exynos_phy_writel(exynos_pcie, val, PCIE_PHY_TRSV3_POWER);
}
static void exynos_pcie_init_phy(struct pcie_port *pp)
{
struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
@ -270,6 +333,9 @@ static int exynos_pcie_establish_link(struct pcie_port *pp)
/* de-assert phy reset */
exynos_pcie_deassert_phy_reset(pp);
/* power on phy */
exynos_pcie_power_on_phy(pp);
/* initialize phy */
exynos_pcie_init_phy(pp);
@ -302,6 +368,9 @@ static int exynos_pcie_establish_link(struct pcie_port *pp)
PCIE_PHY_PLL_LOCKED);
dev_info(pp->dev, "PLL Locked: 0x%x\n", val);
}
/* power off phy */
exynos_pcie_power_off_phy(pp);
dev_err(pp->dev, "PCIe Link Fail\n");
return -EINVAL;
}
@ -342,9 +411,36 @@ static irqreturn_t exynos_pcie_irq_handler(int irq, void *arg)
return IRQ_HANDLED;
}
static irqreturn_t exynos_pcie_msi_irq_handler(int irq, void *arg)
{
struct pcie_port *pp = arg;
dw_handle_msi_irq(pp);
return IRQ_HANDLED;
}
static void exynos_pcie_msi_init(struct pcie_port *pp)
{
u32 val;
struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp);
dw_pcie_msi_init(pp);
/* enable MSI interrupt */
val = exynos_elb_readl(exynos_pcie, PCIE_IRQ_EN_LEVEL);
val |= IRQ_MSI_ENABLE;
exynos_elb_writel(exynos_pcie, val, PCIE_IRQ_EN_LEVEL);
return;
}
static void exynos_pcie_enable_interrupts(struct pcie_port *pp)
{
exynos_pcie_enable_irq_pulse(pp);
if (IS_ENABLED(CONFIG_PCI_MSI))
exynos_pcie_msi_init(pp);
return;
}
@ -430,6 +526,22 @@ static int add_pcie_port(struct pcie_port *pp, struct platform_device *pdev)
return ret;
}
if (IS_ENABLED(CONFIG_PCI_MSI)) {
pp->msi_irq = platform_get_irq(pdev, 0);
if (!pp->msi_irq) {
dev_err(&pdev->dev, "failed to get msi irq\n");
return -ENODEV;
}
ret = devm_request_irq(&pdev->dev, pp->msi_irq,
exynos_pcie_msi_irq_handler,
IRQF_SHARED, "exynos-pcie", pp);
if (ret) {
dev_err(&pdev->dev, "failed to request msi irq\n");
return ret;
}
}
pp->root_bus_nr = -1;
pp->ops = &exynos_pcie_host_ops;
@ -487,18 +599,24 @@ static int __init exynos_pcie_probe(struct platform_device *pdev)
elbi_base = platform_get_resource(pdev, IORESOURCE_MEM, 0);
exynos_pcie->elbi_base = devm_ioremap_resource(&pdev->dev, elbi_base);
if (IS_ERR(exynos_pcie->elbi_base))
return PTR_ERR(exynos_pcie->elbi_base);
if (IS_ERR(exynos_pcie->elbi_base)) {
ret = PTR_ERR(exynos_pcie->elbi_base);
goto fail_bus_clk;
}
phy_base = platform_get_resource(pdev, IORESOURCE_MEM, 1);
exynos_pcie->phy_base = devm_ioremap_resource(&pdev->dev, phy_base);
if (IS_ERR(exynos_pcie->phy_base))
return PTR_ERR(exynos_pcie->phy_base);
if (IS_ERR(exynos_pcie->phy_base)) {
ret = PTR_ERR(exynos_pcie->phy_base);
goto fail_bus_clk;
}
block_base = platform_get_resource(pdev, IORESOURCE_MEM, 2);
exynos_pcie->block_base = devm_ioremap_resource(&pdev->dev, block_base);
if (IS_ERR(exynos_pcie->block_base))
return PTR_ERR(exynos_pcie->block_base);
if (IS_ERR(exynos_pcie->block_base)) {
ret = PTR_ERR(exynos_pcie->block_base);
goto fail_bus_clk;
}
ret = add_pcie_port(pp, pdev);
if (ret < 0)
@ -535,7 +653,7 @@ static struct platform_driver exynos_pcie_driver = {
.driver = {
.name = "exynos-pcie",
.owner = THIS_MODULE,
.of_match_table = of_match_ptr(exynos_pcie_of_match),
.of_match_table = exynos_pcie_of_match,
},
};

drivers/pci/host/pci-imx6.c (new file, 568 lines)

@ -0,0 +1,568 @@
/*
* PCIe host controller driver for Freescale i.MX6 SoCs
*
* Copyright (C) 2013 Kosagi
* http://www.kosagi.com
*
* Author: Sean Cross <xobs@kosagi.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/gpio.h>
#include <linux/kernel.h>
#include <linux/mfd/syscon.h>
#include <linux/mfd/syscon/imx6q-iomuxc-gpr.h>
#include <linux/module.h>
#include <linux/of_gpio.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
#include <linux/resource.h>
#include <linux/signal.h>
#include <linux/types.h>
#include "pcie-designware.h"
#define to_imx6_pcie(x) container_of(x, struct imx6_pcie, pp)
struct imx6_pcie {
int reset_gpio;
int power_on_gpio;
int wake_up_gpio;
int disable_gpio;
struct clk *lvds_gate;
struct clk *sata_ref_100m;
struct clk *pcie_ref_125m;
struct clk *pcie_axi;
struct pcie_port pp;
struct regmap *iomuxc_gpr;
void __iomem *mem_base;
};
/* PCIe Port Logic registers (memory-mapped) */
#define PL_OFFSET 0x700
#define PCIE_PHY_DEBUG_R0 (PL_OFFSET + 0x28)
#define PCIE_PHY_DEBUG_R1 (PL_OFFSET + 0x2c)
#define PCIE_PHY_CTRL (PL_OFFSET + 0x114)
#define PCIE_PHY_CTRL_DATA_LOC 0
#define PCIE_PHY_CTRL_CAP_ADR_LOC 16
#define PCIE_PHY_CTRL_CAP_DAT_LOC 17
#define PCIE_PHY_CTRL_WR_LOC 18
#define PCIE_PHY_CTRL_RD_LOC 19
#define PCIE_PHY_STAT (PL_OFFSET + 0x110)
#define PCIE_PHY_STAT_ACK_LOC 16
/* PHY registers (not memory-mapped) */
#define PCIE_PHY_RX_ASIC_OUT 0x100D
#define PHY_RX_OVRD_IN_LO 0x1005
#define PHY_RX_OVRD_IN_LO_RX_DATA_EN (1 << 5)
#define PHY_RX_OVRD_IN_LO_RX_PLL_EN (1 << 3)
static int pcie_phy_poll_ack(void __iomem *dbi_base, int exp_val)
{
u32 val;
u32 max_iterations = 10;
u32 wait_counter = 0;
do {
val = readl(dbi_base + PCIE_PHY_STAT);
val = (val >> PCIE_PHY_STAT_ACK_LOC) & 0x1;
wait_counter++;
if (val == exp_val)
return 0;
udelay(1);
} while (wait_counter < max_iterations);
return -ETIMEDOUT;
}
static int pcie_phy_wait_ack(void __iomem *dbi_base, int addr)
{
u32 val;
int ret;
val = addr << PCIE_PHY_CTRL_DATA_LOC;
writel(val, dbi_base + PCIE_PHY_CTRL);
val |= (0x1 << PCIE_PHY_CTRL_CAP_ADR_LOC);
writel(val, dbi_base + PCIE_PHY_CTRL);
ret = pcie_phy_poll_ack(dbi_base, 1);
if (ret)
return ret;
val = addr << PCIE_PHY_CTRL_DATA_LOC;
writel(val, dbi_base + PCIE_PHY_CTRL);
ret = pcie_phy_poll_ack(dbi_base, 0);
if (ret)
return ret;
return 0;
}
/* Read from the 16-bit PCIe PHY control registers (not memory-mapped) */
static int pcie_phy_read(void __iomem *dbi_base, int addr , int *data)
{
u32 val, phy_ctl;
int ret;
ret = pcie_phy_wait_ack(dbi_base, addr);
if (ret)
return ret;
/* assert Read signal */
phy_ctl = 0x1 << PCIE_PHY_CTRL_RD_LOC;
writel(phy_ctl, dbi_base + PCIE_PHY_CTRL);
ret = pcie_phy_poll_ack(dbi_base, 1);
if (ret)
return ret;
val = readl(dbi_base + PCIE_PHY_STAT);
*data = val & 0xffff;
/* deassert Read signal */
writel(0x00, dbi_base + PCIE_PHY_CTRL);
ret = pcie_phy_poll_ack(dbi_base, 0);
if (ret)
return ret;
return 0;
}
static int pcie_phy_write(void __iomem *dbi_base, int addr, int data)
{
u32 var;
int ret;
/* write addr */
/* cap addr */
ret = pcie_phy_wait_ack(dbi_base, addr);
if (ret)
return ret;
var = data << PCIE_PHY_CTRL_DATA_LOC;
writel(var, dbi_base + PCIE_PHY_CTRL);
/* capture data */
var |= (0x1 << PCIE_PHY_CTRL_CAP_DAT_LOC);
writel(var, dbi_base + PCIE_PHY_CTRL);
ret = pcie_phy_poll_ack(dbi_base, 1);
if (ret)
return ret;
/* deassert cap data */
var = data << PCIE_PHY_CTRL_DATA_LOC;
writel(var, dbi_base + PCIE_PHY_CTRL);
/* wait for ack de-assertion */
ret = pcie_phy_poll_ack(dbi_base, 0);
if (ret)
return ret;
/* assert wr signal */
var = 0x1 << PCIE_PHY_CTRL_WR_LOC;
writel(var, dbi_base + PCIE_PHY_CTRL);
/* wait for ack */
ret = pcie_phy_poll_ack(dbi_base, 1);
if (ret)
return ret;
/* deassert wr signal */
var = data << PCIE_PHY_CTRL_DATA_LOC;
writel(var, dbi_base + PCIE_PHY_CTRL);
/* wait for ack de-assertion */
ret = pcie_phy_poll_ack(dbi_base, 0);
if (ret)
return ret;
writel(0x0, dbi_base + PCIE_PHY_CTRL);
return 0;
}
/* Added for PCI abort handling */
static int imx6q_pcie_abort_handler(unsigned long addr,
unsigned int fsr, struct pt_regs *regs)
{
return 0;
}
static int imx6_pcie_assert_core_reset(struct pcie_port *pp)
{
struct imx6_pcie *imx6_pcie = to_imx6_pcie(pp);
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
IMX6Q_GPR1_PCIE_TEST_PD, 1 << 18);
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
IMX6Q_GPR12_PCIE_CTL_2, 1 << 10);
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
IMX6Q_GPR1_PCIE_REF_CLK_EN, 0 << 16);
gpio_set_value(imx6_pcie->reset_gpio, 0);
msleep(100);
gpio_set_value(imx6_pcie->reset_gpio, 1);
return 0;
}
static int imx6_pcie_deassert_core_reset(struct pcie_port *pp)
{
struct imx6_pcie *imx6_pcie = to_imx6_pcie(pp);
int ret;
if (gpio_is_valid(imx6_pcie->power_on_gpio))
gpio_set_value(imx6_pcie->power_on_gpio, 1);
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
IMX6Q_GPR1_PCIE_TEST_PD, 0 << 18);
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
IMX6Q_GPR1_PCIE_REF_CLK_EN, 1 << 16);
ret = clk_prepare_enable(imx6_pcie->sata_ref_100m);
if (ret) {
dev_err(pp->dev, "unable to enable sata_ref_100m\n");
goto err_sata_ref;
}
ret = clk_prepare_enable(imx6_pcie->pcie_ref_125m);
if (ret) {
dev_err(pp->dev, "unable to enable pcie_ref_125m\n");
goto err_pcie_ref;
}
ret = clk_prepare_enable(imx6_pcie->lvds_gate);
if (ret) {
dev_err(pp->dev, "unable to enable lvds_gate\n");
goto err_lvds_gate;
}
ret = clk_prepare_enable(imx6_pcie->pcie_axi);
if (ret) {
dev_err(pp->dev, "unable to enable pcie_axi\n");
goto err_pcie_axi;
}
/* allow the clocks to stabilize */
usleep_range(200, 500);
return 0;
err_pcie_axi:
clk_disable_unprepare(imx6_pcie->lvds_gate);
err_lvds_gate:
clk_disable_unprepare(imx6_pcie->pcie_ref_125m);
err_pcie_ref:
clk_disable_unprepare(imx6_pcie->sata_ref_100m);
err_sata_ref:
return ret;
}
static void imx6_pcie_init_phy(struct pcie_port *pp)
{
struct imx6_pcie *imx6_pcie = to_imx6_pcie(pp);
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
IMX6Q_GPR12_PCIE_CTL_2, 0 << 10);
/* configure constant input signal to the pcie ctrl and phy */
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
IMX6Q_GPR12_DEVICE_TYPE, PCI_EXP_TYPE_ROOT_PORT << 12);
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
IMX6Q_GPR12_LOS_LEVEL, 9 << 4);
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR8,
IMX6Q_GPR8_TX_DEEMPH_GEN1, 0 << 0);
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR8,
IMX6Q_GPR8_TX_DEEMPH_GEN2_3P5DB, 0 << 6);
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR8,
IMX6Q_GPR8_TX_DEEMPH_GEN2_6DB, 20 << 12);
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR8,
IMX6Q_GPR8_TX_SWING_FULL, 127 << 18);
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR8,
IMX6Q_GPR8_TX_SWING_LOW, 127 << 25);
}
static void imx6_pcie_host_init(struct pcie_port *pp)
{
int count = 0;
struct imx6_pcie *imx6_pcie = to_imx6_pcie(pp);
imx6_pcie_assert_core_reset(pp);
imx6_pcie_init_phy(pp);
imx6_pcie_deassert_core_reset(pp);
dw_pcie_setup_rc(pp);
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
IMX6Q_GPR12_PCIE_CTL_2, 1 << 10);
while (!dw_pcie_link_up(pp)) {
usleep_range(100, 1000);
count++;
if (count >= 200) {
dev_err(pp->dev, "phy link never came up\n");
dev_dbg(pp->dev,
"DEBUG_R0: 0x%08x, DEBUG_R1: 0x%08x\n",
readl(pp->dbi_base + PCIE_PHY_DEBUG_R0),
readl(pp->dbi_base + PCIE_PHY_DEBUG_R1));
break;
}
}
return;
}
static int imx6_pcie_link_up(struct pcie_port *pp)
{
u32 rc, ltssm, rx_valid, temp;
/* link is debug bit 36, debug register 1 starts at bit 32 */
rc = readl(pp->dbi_base + PCIE_PHY_DEBUG_R1) & (0x1 << (36 - 32));
if (rc)
return -EAGAIN;
/*
* From L0, initiate MAC entry to gen2 if EP/RC supports gen2.
* Wait 2ms (LTSSM timeout is 24ms, PHY lock is ~5us in gen2).
* If (MAC/LTSSM.state == Recovery.RcvrLock)
* && (PHY/rx_valid==0) then pulse PHY/rx_reset. Transition
* to gen2 is stuck
*/
pcie_phy_read(pp->dbi_base, PCIE_PHY_RX_ASIC_OUT, &rx_valid);
ltssm = readl(pp->dbi_base + PCIE_PHY_DEBUG_R0) & 0x3F;
if (rx_valid & 0x01)
return 0;
if (ltssm != 0x0d)
return 0;
dev_err(pp->dev, "transition to gen2 is stuck, reset PHY!\n");
pcie_phy_read(pp->dbi_base,
PHY_RX_OVRD_IN_LO, &temp);
temp |= (PHY_RX_OVRD_IN_LO_RX_DATA_EN
| PHY_RX_OVRD_IN_LO_RX_PLL_EN);
pcie_phy_write(pp->dbi_base,
PHY_RX_OVRD_IN_LO, temp);
usleep_range(2000, 3000);
pcie_phy_read(pp->dbi_base,
PHY_RX_OVRD_IN_LO, &temp);
temp &= ~(PHY_RX_OVRD_IN_LO_RX_DATA_EN
| PHY_RX_OVRD_IN_LO_RX_PLL_EN);
pcie_phy_write(pp->dbi_base,
PHY_RX_OVRD_IN_LO, temp);
return 0;
}
static struct pcie_host_ops imx6_pcie_host_ops = {
.link_up = imx6_pcie_link_up,
.host_init = imx6_pcie_host_init,
};
static int imx6_add_pcie_port(struct pcie_port *pp,
struct platform_device *pdev)
{
int ret;
pp->irq = platform_get_irq(pdev, 0);
if (!pp->irq) {
dev_err(&pdev->dev, "failed to get irq\n");
return -ENODEV;
}
pp->root_bus_nr = -1;
pp->ops = &imx6_pcie_host_ops;
spin_lock_init(&pp->conf_lock);
ret = dw_pcie_host_init(pp);
if (ret) {
dev_err(&pdev->dev, "failed to initialize host\n");
return ret;
}
return 0;
}
static int __init imx6_pcie_probe(struct platform_device *pdev)
{
struct imx6_pcie *imx6_pcie;
struct pcie_port *pp;
struct device_node *np = pdev->dev.of_node;
struct resource *dbi_base;
int ret;
imx6_pcie = devm_kzalloc(&pdev->dev, sizeof(*imx6_pcie), GFP_KERNEL);
if (!imx6_pcie)
return -ENOMEM;
pp = &imx6_pcie->pp;
pp->dev = &pdev->dev;
/* Added for PCI abort handling */
hook_fault_code(16 + 6, imx6q_pcie_abort_handler, SIGBUS, 0,
"imprecise external abort");
dbi_base = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!dbi_base) {
dev_err(&pdev->dev, "dbi_base memory resource not found\n");
return -ENODEV;
}
pp->dbi_base = devm_ioremap_resource(&pdev->dev, dbi_base);
if (IS_ERR(pp->dbi_base)) {
ret = PTR_ERR(pp->dbi_base);
goto err;
}
/* Fetch GPIOs */
imx6_pcie->reset_gpio = of_get_named_gpio(np, "reset-gpio", 0);
if (!gpio_is_valid(imx6_pcie->reset_gpio)) {
dev_err(&pdev->dev, "no reset-gpio defined\n");
ret = -ENODEV;
}
ret = devm_gpio_request_one(&pdev->dev,
imx6_pcie->reset_gpio,
GPIOF_OUT_INIT_LOW,
"PCIe reset");
if (ret) {
dev_err(&pdev->dev, "unable to get reset gpio\n");
goto err;
}
imx6_pcie->power_on_gpio = of_get_named_gpio(np, "power-on-gpio", 0);
if (gpio_is_valid(imx6_pcie->power_on_gpio)) {
ret = devm_gpio_request_one(&pdev->dev,
imx6_pcie->power_on_gpio,
GPIOF_OUT_INIT_LOW,
"PCIe power enable");
if (ret) {
dev_err(&pdev->dev, "unable to get power-on gpio\n");
goto err;
}
}
imx6_pcie->wake_up_gpio = of_get_named_gpio(np, "wake-up-gpio", 0);
if (gpio_is_valid(imx6_pcie->wake_up_gpio)) {
ret = devm_gpio_request_one(&pdev->dev,
imx6_pcie->wake_up_gpio,
GPIOF_IN,
"PCIe wake up");
if (ret) {
dev_err(&pdev->dev, "unable to get wake-up gpio\n");
goto err;
}
}
imx6_pcie->disable_gpio = of_get_named_gpio(np, "disable-gpio", 0);
if (gpio_is_valid(imx6_pcie->disable_gpio)) {
ret = devm_gpio_request_one(&pdev->dev,
imx6_pcie->disable_gpio,
GPIOF_OUT_INIT_HIGH,
"PCIe disable endpoint");
if (ret) {
dev_err(&pdev->dev, "unable to get disable-ep gpio\n");
goto err;
}
}
/* Fetch clocks */
imx6_pcie->lvds_gate = devm_clk_get(&pdev->dev, "lvds_gate");
if (IS_ERR(imx6_pcie->lvds_gate)) {
dev_err(&pdev->dev,
"lvds_gate clock select missing or invalid\n");
ret = PTR_ERR(imx6_pcie->lvds_gate);
goto err;
}
imx6_pcie->sata_ref_100m = devm_clk_get(&pdev->dev, "sata_ref_100m");
if (IS_ERR(imx6_pcie->sata_ref_100m)) {
dev_err(&pdev->dev,
"sata_ref_100m clock source missing or invalid\n");
ret = PTR_ERR(imx6_pcie->sata_ref_100m);
goto err;
}
imx6_pcie->pcie_ref_125m = devm_clk_get(&pdev->dev, "pcie_ref_125m");
if (IS_ERR(imx6_pcie->pcie_ref_125m)) {
dev_err(&pdev->dev,
"pcie_ref_125m clock source missing or invalid\n");
ret = PTR_ERR(imx6_pcie->pcie_ref_125m);
goto err;
}
imx6_pcie->pcie_axi = devm_clk_get(&pdev->dev, "pcie_axi");
if (IS_ERR(imx6_pcie->pcie_axi)) {
dev_err(&pdev->dev,
"pcie_axi clock source missing or invalid\n");
ret = PTR_ERR(imx6_pcie->pcie_axi);
goto err;
}
/* Grab GPR config register range */
imx6_pcie->iomuxc_gpr =
syscon_regmap_lookup_by_compatible("fsl,imx6q-iomuxc-gpr");
if (IS_ERR(imx6_pcie->iomuxc_gpr)) {
dev_err(&pdev->dev, "unable to find iomuxc registers\n");
ret = PTR_ERR(imx6_pcie->iomuxc_gpr);
goto err;
}
ret = imx6_add_pcie_port(pp, pdev);
if (ret < 0)
goto err;
platform_set_drvdata(pdev, imx6_pcie);
return 0;
err:
return ret;
}
static const struct of_device_id imx6_pcie_of_match[] = {
{ .compatible = "fsl,imx6q-pcie", },
{},
};
MODULE_DEVICE_TABLE(of, imx6_pcie_of_match);
static struct platform_driver imx6_pcie_driver = {
.driver = {
.name = "imx6q-pcie",
.owner = THIS_MODULE,
.of_match_table = imx6_pcie_of_match,
},
};
/* Freescale PCIe driver does not allow module unload */
static int __init imx6_pcie_init(void)
{
return platform_driver_probe(&imx6_pcie_driver, imx6_pcie_probe);
}
fs_initcall(imx6_pcie_init);
MODULE_AUTHOR("Sean Cross <xobs@kosagi.com>");
MODULE_DESCRIPTION("Freescale i.MX6 PCIe host controller driver");
MODULE_LICENSE("GPL v2");


@ -0,0 +1,333 @@
/*
* pci-rcar-gen2: internal PCI bus support
*
* Copyright (C) 2013 Renesas Solutions Corp.
* Copyright (C) 2013 Cogent Embedded, Inc.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/delay.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
/* AHB-PCI Bridge PCI communication registers */
#define RCAR_AHBPCI_PCICOM_OFFSET 0x800
#define RCAR_PCIAHB_WIN1_CTR_REG (RCAR_AHBPCI_PCICOM_OFFSET + 0x00)
#define RCAR_PCIAHB_WIN2_CTR_REG (RCAR_AHBPCI_PCICOM_OFFSET + 0x04)
#define RCAR_PCIAHB_PREFETCH0 0x0
#define RCAR_PCIAHB_PREFETCH4 0x1
#define RCAR_PCIAHB_PREFETCH8 0x2
#define RCAR_PCIAHB_PREFETCH16 0x3
#define RCAR_AHBPCI_WIN1_CTR_REG (RCAR_AHBPCI_PCICOM_OFFSET + 0x10)
#define RCAR_AHBPCI_WIN2_CTR_REG (RCAR_AHBPCI_PCICOM_OFFSET + 0x14)
#define RCAR_AHBPCI_WIN_CTR_MEM (3 << 1)
#define RCAR_AHBPCI_WIN_CTR_CFG (5 << 1)
#define RCAR_AHBPCI_WIN1_HOST (1 << 30)
#define RCAR_AHBPCI_WIN1_DEVICE (1 << 31)
#define RCAR_PCI_INT_ENABLE_REG (RCAR_AHBPCI_PCICOM_OFFSET + 0x20)
#define RCAR_PCI_INT_STATUS_REG (RCAR_AHBPCI_PCICOM_OFFSET + 0x24)
#define RCAR_PCI_INT_A (1 << 16)
#define RCAR_PCI_INT_B (1 << 17)
#define RCAR_PCI_INT_PME (1 << 19)
#define RCAR_AHB_BUS_CTR_REG (RCAR_AHBPCI_PCICOM_OFFSET + 0x30)
#define RCAR_AHB_BUS_MMODE_HTRANS (1 << 0)
#define RCAR_AHB_BUS_MMODE_BYTE_BURST (1 << 1)
#define RCAR_AHB_BUS_MMODE_WR_INCR (1 << 2)
#define RCAR_AHB_BUS_MMODE_HBUS_REQ (1 << 7)
#define RCAR_AHB_BUS_SMODE_READYCTR (1 << 17)
#define RCAR_AHB_BUS_MODE (RCAR_AHB_BUS_MMODE_HTRANS | \
RCAR_AHB_BUS_MMODE_BYTE_BURST | \
RCAR_AHB_BUS_MMODE_WR_INCR | \
RCAR_AHB_BUS_MMODE_HBUS_REQ | \
RCAR_AHB_BUS_SMODE_READYCTR)
#define RCAR_USBCTR_REG (RCAR_AHBPCI_PCICOM_OFFSET + 0x34)
#define RCAR_USBCTR_USBH_RST (1 << 0)
#define RCAR_USBCTR_PCICLK_MASK (1 << 1)
#define RCAR_USBCTR_PLL_RST (1 << 2)
#define RCAR_USBCTR_DIRPD (1 << 8)
#define RCAR_USBCTR_PCIAHB_WIN2_EN (1 << 9)
#define RCAR_USBCTR_PCIAHB_WIN1_256M (0 << 10)
#define RCAR_USBCTR_PCIAHB_WIN1_512M (1 << 10)
#define RCAR_USBCTR_PCIAHB_WIN1_1G (2 << 10)
#define RCAR_USBCTR_PCIAHB_WIN1_2G (3 << 10)
#define RCAR_USBCTR_PCIAHB_WIN1_MASK (3 << 10)
#define RCAR_PCI_ARBITER_CTR_REG (RCAR_AHBPCI_PCICOM_OFFSET + 0x40)
#define RCAR_PCI_ARBITER_PCIREQ0 (1 << 0)
#define RCAR_PCI_ARBITER_PCIREQ1 (1 << 1)
#define RCAR_PCI_ARBITER_PCIBP_MODE (1 << 12)
#define RCAR_PCI_UNIT_REV_REG (RCAR_AHBPCI_PCICOM_OFFSET + 0x48)
/* Number of internal PCI controllers */
#define RCAR_PCI_NR_CONTROLLERS 3
struct rcar_pci_priv {
void __iomem *reg;
struct resource io_res;
struct resource mem_res;
struct resource *cfg_res;
int irq;
};
/* PCI configuration space operations */
static void __iomem *rcar_pci_cfg_base(struct pci_bus *bus, unsigned int devfn,
int where)
{
struct pci_sys_data *sys = bus->sysdata;
struct rcar_pci_priv *priv = sys->private_data;
int slot, val;
if (sys->busnr != bus->number || PCI_FUNC(devfn))
return NULL;
/* Only one EHCI/OHCI device built-in */
slot = PCI_SLOT(devfn);
if (slot > 2)
return NULL;
val = slot ? RCAR_AHBPCI_WIN1_DEVICE | RCAR_AHBPCI_WIN_CTR_CFG :
RCAR_AHBPCI_WIN1_HOST | RCAR_AHBPCI_WIN_CTR_CFG;
iowrite32(val, priv->reg + RCAR_AHBPCI_WIN1_CTR_REG);
return priv->reg + (slot >> 1) * 0x100 + where;
}
static int rcar_pci_read_config(struct pci_bus *bus, unsigned int devfn,
int where, int size, u32 *val)
{
void __iomem *reg = rcar_pci_cfg_base(bus, devfn, where);
if (!reg)
return PCIBIOS_DEVICE_NOT_FOUND;
switch (size) {
case 1:
*val = ioread8(reg);
break;
case 2:
*val = ioread16(reg);
break;
default:
*val = ioread32(reg);
break;
}
return PCIBIOS_SUCCESSFUL;
}
static int rcar_pci_write_config(struct pci_bus *bus, unsigned int devfn,
int where, int size, u32 val)
{
void __iomem *reg = rcar_pci_cfg_base(bus, devfn, where);
if (!reg)
return PCIBIOS_DEVICE_NOT_FOUND;
switch (size) {
case 1:
iowrite8(val, reg);
break;
case 2:
iowrite16(val, reg);
break;
default:
iowrite32(val, reg);
break;
}
return PCIBIOS_SUCCESSFUL;
}
/* PCI interrupt mapping */
static int __init rcar_pci_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
struct pci_sys_data *sys = dev->bus->sysdata;
struct rcar_pci_priv *priv = sys->private_data;
return priv->irq;
}
/* PCI host controller setup */
static int __init rcar_pci_setup(int nr, struct pci_sys_data *sys)
{
struct rcar_pci_priv *priv = sys->private_data;
void __iomem *reg = priv->reg;
u32 val;
val = ioread32(reg + RCAR_PCI_UNIT_REV_REG);
pr_info("PCI: bus%u revision %x\n", sys->busnr, val);
/* Disable Direct Power Down State and assert reset */
val = ioread32(reg + RCAR_USBCTR_REG) & ~RCAR_USBCTR_DIRPD;
val |= RCAR_USBCTR_USBH_RST | RCAR_USBCTR_PLL_RST;
iowrite32(val, reg + RCAR_USBCTR_REG);
udelay(4);
/* De-assert reset and set PCIAHB window1 size to 1GB */
val &= ~(RCAR_USBCTR_PCIAHB_WIN1_MASK | RCAR_USBCTR_PCICLK_MASK |
RCAR_USBCTR_USBH_RST | RCAR_USBCTR_PLL_RST);
iowrite32(val | RCAR_USBCTR_PCIAHB_WIN1_1G, reg + RCAR_USBCTR_REG);
/* Configure AHB master and slave modes */
iowrite32(RCAR_AHB_BUS_MODE, reg + RCAR_AHB_BUS_CTR_REG);
/* Configure PCI arbiter */
val = ioread32(reg + RCAR_PCI_ARBITER_CTR_REG);
val |= RCAR_PCI_ARBITER_PCIREQ0 | RCAR_PCI_ARBITER_PCIREQ1 |
RCAR_PCI_ARBITER_PCIBP_MODE;
iowrite32(val, reg + RCAR_PCI_ARBITER_CTR_REG);
/* PCI-AHB mapping: 0x40000000-0x80000000 */
iowrite32(0x40000000 | RCAR_PCIAHB_PREFETCH16,
reg + RCAR_PCIAHB_WIN1_CTR_REG);
/* AHB-PCI mapping: OHCI/EHCI registers */
val = priv->mem_res.start | RCAR_AHBPCI_WIN_CTR_MEM;
iowrite32(val, reg + RCAR_AHBPCI_WIN2_CTR_REG);
/* Enable AHB-PCI bridge PCI configuration access */
iowrite32(RCAR_AHBPCI_WIN1_HOST | RCAR_AHBPCI_WIN_CTR_CFG,
reg + RCAR_AHBPCI_WIN1_CTR_REG);
/* Set PCI-AHB Window1 address */
iowrite32(0x40000000 | PCI_BASE_ADDRESS_MEM_PREFETCH,
reg + PCI_BASE_ADDRESS_1);
/* Set AHB-PCI bridge PCI communication area address */
val = priv->cfg_res->start + RCAR_AHBPCI_PCICOM_OFFSET;
iowrite32(val, reg + PCI_BASE_ADDRESS_0);
val = ioread32(reg + PCI_COMMAND);
val |= PCI_COMMAND_SERR | PCI_COMMAND_PARITY |
PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER;
iowrite32(val, reg + PCI_COMMAND);
/* Enable PCI interrupts */
iowrite32(RCAR_PCI_INT_A | RCAR_PCI_INT_B | RCAR_PCI_INT_PME,
reg + RCAR_PCI_INT_ENABLE_REG);
/* Add PCI resources */
pci_add_resource(&sys->resources, &priv->io_res);
pci_add_resource(&sys->resources, &priv->mem_res);
return 1;
}
static struct pci_ops rcar_pci_ops = {
.read = rcar_pci_read_config,
.write = rcar_pci_write_config,
};
static struct hw_pci rcar_hw_pci __initdata = {
.map_irq = rcar_pci_map_irq,
.ops = &rcar_pci_ops,
.setup = rcar_pci_setup,
};
static int rcar_pci_count __initdata;
static int __init rcar_pci_add_controller(struct rcar_pci_priv *priv)
{
void **private_data;
int count;
if (rcar_hw_pci.nr_controllers < rcar_pci_count)
goto add_priv;
/* (Re)allocate private data pointer array if needed */
count = rcar_pci_count + RCAR_PCI_NR_CONTROLLERS;
private_data = kzalloc(count * sizeof(void *), GFP_KERNEL);
if (!private_data)
return -ENOMEM;
rcar_pci_count = count;
if (rcar_hw_pci.private_data) {
memcpy(private_data, rcar_hw_pci.private_data,
rcar_hw_pci.nr_controllers * sizeof(void *));
kfree(rcar_hw_pci.private_data);
}
rcar_hw_pci.private_data = private_data;
add_priv:
/* Add private data pointer to the array */
rcar_hw_pci.private_data[rcar_hw_pci.nr_controllers++] = priv;
return 0;
}
static int __init rcar_pci_probe(struct platform_device *pdev)
{
struct resource *cfg_res, *mem_res;
struct rcar_pci_priv *priv;
void __iomem *reg;
cfg_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
reg = devm_ioremap_resource(&pdev->dev, cfg_res);
if (!reg)
return -ENODEV;
mem_res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
if (!mem_res || !mem_res->start)
return -ENODEV;
priv = devm_kzalloc(&pdev->dev,
sizeof(struct rcar_pci_priv), GFP_KERNEL);
if (!priv)
return -ENOMEM;
priv->mem_res = *mem_res;
/*
* The controller does not support/use port I/O,
* so setup a dummy port I/O region here.
*/
priv->io_res.start = priv->mem_res.start;
priv->io_res.end = priv->mem_res.end;
priv->io_res.flags = IORESOURCE_IO;
priv->cfg_res = cfg_res;
priv->irq = platform_get_irq(pdev, 0);
priv->reg = reg;
return rcar_pci_add_controller(priv);
}
static struct platform_driver rcar_pci_driver = {
.driver = {
.name = "pci-rcar-gen2",
},
};
static int __init rcar_pci_init(void)
{
int retval;
retval = platform_driver_probe(&rcar_pci_driver, rcar_pci_probe);
if (!retval)
pci_common_init(&rcar_hw_pci);
/* Private data pointer array is not needed any more */
kfree(rcar_hw_pci.private_data);
rcar_hw_pci.private_data = NULL;
return retval;
}
subsys_initcall(rcar_pci_init);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Renesas R-Car Gen2 internal PCI");
MODULE_AUTHOR("Valentine Barshak <valentine.barshak@cogentembedded.com>");


@ -408,7 +408,7 @@ static void __iomem *tegra_pcie_bus_map(struct tegra_pcie *pcie,
list_for_each_entry(bus, &pcie->busses, list)
if (bus->nr == busnr)
return bus->area->addr;
return (void __iomem *)bus->area->addr;
bus = tegra_pcie_bus_alloc(pcie, busnr);
if (IS_ERR(bus))
@ -416,7 +416,7 @@ static void __iomem *tegra_pcie_bus_map(struct tegra_pcie *pcie,
list_add_tail(&bus->list, &pcie->busses);
return bus->area->addr;
return (void __iomem *)bus->area->addr;
}
static void __iomem *tegra_pcie_conf_address(struct pci_bus *bus,


@ -11,8 +11,11 @@
* published by the Free Software Foundation.
*/
#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/msi.h>
#include <linux/of_address.h>
#include <linux/pci.h>
#include <linux/pci_regs.h>
@ -64,7 +67,7 @@
static struct hw_pci dw_pci;
unsigned long global_io_offset;
static unsigned long global_io_offset;
static inline struct pcie_port *sys_to_pcie(struct pci_sys_data *sys)
{
@ -115,8 +118,8 @@ static inline void dw_pcie_writel_rc(struct pcie_port *pp, u32 val, u32 reg)
writel(val, pp->dbi_base + reg);
}
int dw_pcie_rd_own_conf(struct pcie_port *pp, int where, int size,
u32 *val)
static int dw_pcie_rd_own_conf(struct pcie_port *pp, int where, int size,
u32 *val)
{
int ret;
@ -128,8 +131,8 @@ int dw_pcie_rd_own_conf(struct pcie_port *pp, int where, int size,
return ret;
}
int dw_pcie_wr_own_conf(struct pcie_port *pp, int where, int size,
u32 val)
static int dw_pcie_wr_own_conf(struct pcie_port *pp, int where, int size,
u32 val)
{
int ret;
@ -142,6 +145,205 @@ int dw_pcie_wr_own_conf(struct pcie_port *pp, int where, int size,
return ret;
}
static struct irq_chip dw_msi_irq_chip = {
.name = "PCI-MSI",
.irq_enable = unmask_msi_irq,
.irq_disable = mask_msi_irq,
.irq_mask = mask_msi_irq,
.irq_unmask = unmask_msi_irq,
};
/* MSI int handler */
void dw_handle_msi_irq(struct pcie_port *pp)
{
unsigned long val;
int i, pos, irq;
for (i = 0; i < MAX_MSI_CTRLS; i++) {
dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_STATUS + i * 12, 4,
(u32 *)&val);
if (val) {
pos = 0;
while ((pos = find_next_bit(&val, 32, pos)) != 32) {
irq = irq_find_mapping(pp->irq_domain,
i * 32 + pos);
generic_handle_irq(irq);
pos++;
}
}
dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_STATUS + i * 12, 4, val);
}
}
void dw_pcie_msi_init(struct pcie_port *pp)
{
pp->msi_data = __get_free_pages(GFP_KERNEL, 0);
/* program the msi_data */
dw_pcie_wr_own_conf(pp, PCIE_MSI_ADDR_LO, 4,
virt_to_phys((void *)pp->msi_data));
dw_pcie_wr_own_conf(pp, PCIE_MSI_ADDR_HI, 4, 0);
}
static int find_valid_pos0(struct pcie_port *pp, int msgvec, int pos, int *pos0)
{
int flag = 1;
do {
pos = find_next_zero_bit(pp->msi_irq_in_use,
MAX_MSI_IRQS, pos);
/* if we have reached the end, get out of here */
if (pos == MAX_MSI_IRQS)
return -ENOSPC;
/*
* Check if this position is at the correct offset. nvec is always a
* power of two, and pos0 must be nvec-bit aligned.
*/
if (pos % msgvec)
pos += msgvec - (pos % msgvec);
else
flag = 0;
} while (flag);
*pos0 = pos;
return 0;
}
static int assign_irq(int no_irqs, struct msi_desc *desc, int *pos)
{
int res, bit, irq, pos0, pos1, i;
u32 val;
struct pcie_port *pp = sys_to_pcie(desc->dev->bus->sysdata);
if (!pp) {
BUG();
return -EINVAL;
}
pos0 = find_first_zero_bit(pp->msi_irq_in_use,
MAX_MSI_IRQS);
if (pos0 % no_irqs) {
if (find_valid_pos0(pp, no_irqs, pos0, &pos0))
goto no_valid_irq;
}
if (no_irqs > 1) {
pos1 = find_next_bit(pp->msi_irq_in_use,
MAX_MSI_IRQS, pos0);
/* there must be nvec number of consecutive free bits */
while ((pos1 - pos0) < no_irqs) {
if (find_valid_pos0(pp, no_irqs, pos1, &pos0))
goto no_valid_irq;
pos1 = find_next_bit(pp->msi_irq_in_use,
MAX_MSI_IRQS, pos0);
}
}
irq = irq_find_mapping(pp->irq_domain, pos0);
if (!irq)
goto no_valid_irq;
i = 0;
while (i < no_irqs) {
set_bit(pos0 + i, pp->msi_irq_in_use);
irq_alloc_descs((irq + i), (irq + i), 1, 0);
irq_set_msi_desc(irq + i, desc);
/* Enable corresponding interrupt in MSI interrupt controller */
res = ((pos0 + i) / 32) * 12;
bit = (pos0 + i) % 32;
dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, &val);
val |= 1 << bit;
dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, val);
i++;
}
*pos = pos0;
return irq;
no_valid_irq:
*pos = pos0;
return -ENOSPC;
}
static void clear_irq(unsigned int irq)
{
int res, bit, val, pos;
struct irq_desc *desc;
struct msi_desc *msi;
struct pcie_port *pp;
struct irq_data *data = irq_get_irq_data(irq);
/* get the port structure */
desc = irq_to_desc(irq);
msi = irq_desc_get_msi_desc(desc);
pp = sys_to_pcie(msi->dev->bus->sysdata);
if (!pp) {
BUG();
return;
}
pos = data->hwirq;
irq_free_desc(irq);
clear_bit(pos, pp->msi_irq_in_use);
/* Disable corresponding interrupt on MSI interrupt controller */
res = (pos / 32) * 12;
bit = pos % 32;
dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, &val);
val &= ~(1 << bit);
dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, val);
}
static int dw_msi_setup_irq(struct msi_chip *chip, struct pci_dev *pdev,
struct msi_desc *desc)
{
int irq, pos, msgvec;
u16 msg_ctr;
struct msi_msg msg;
struct pcie_port *pp = sys_to_pcie(pdev->bus->sysdata);
if (!pp) {
BUG();
return -EINVAL;
}
pci_read_config_word(pdev, desc->msi_attrib.pos+PCI_MSI_FLAGS,
&msg_ctr);
msgvec = (msg_ctr&PCI_MSI_FLAGS_QSIZE) >> 4;
if (msgvec == 0)
msgvec = (msg_ctr & PCI_MSI_FLAGS_QMASK) >> 1;
if (msgvec > 5)
msgvec = 0;
irq = assign_irq((1 << msgvec), desc, &pos);
if (irq < 0)
return irq;
msg_ctr &= ~PCI_MSI_FLAGS_QSIZE;
msg_ctr |= msgvec << 4;
pci_write_config_word(pdev, desc->msi_attrib.pos + PCI_MSI_FLAGS,
msg_ctr);
desc->msi_attrib.multiple = msgvec;
msg.address_lo = virt_to_phys((void *)pp->msi_data);
msg.address_hi = 0x0;
msg.data = pos;
write_msi_msg(irq, &msg);
return 0;
}
static void dw_msi_teardown_irq(struct msi_chip *chip, unsigned int irq)
{
clear_irq(irq);
}
static struct msi_chip dw_pcie_msi_chip = {
.setup_irq = dw_msi_setup_irq,
.teardown_irq = dw_msi_teardown_irq,
};
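
The MSI enable and status registers used above are organized as one 32-vector bank every 12 bytes (PCIE_MSI_INTR0_ENABLE/STATUS + i * 12), and assign_irq()/clear_irq() open-code the mapping from a hardware vector to its bank offset and bit. A small helper capturing that arithmetic, shown purely as an illustration of the layout, would be:

/* illustration only: map an MSI hardware vector to its bank offset and bit */
static inline void dw_msi_vector_to_reg(int hwirq, unsigned int *offset,
					unsigned int *bit)
{
	*offset = (hwirq / 32) * 12;	/* 12-byte stride per 32-vector bank */
	*bit = hwirq % 32;
}

/* e.g. vector 40 lives at PCIE_MSI_INTR0_ENABLE + 12, bit 8 */
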
int dw_pcie_link_up(struct pcie_port *pp)
{
if (pp->ops->link_up)
@ -150,12 +352,27 @@ int dw_pcie_link_up(struct pcie_port *pp)
return 0;
}
static int dw_pcie_msi_map(struct irq_domain *domain, unsigned int irq,
irq_hw_number_t hwirq)
{
irq_set_chip_and_handler(irq, &dw_msi_irq_chip, handle_simple_irq);
irq_set_chip_data(irq, domain->host_data);
set_irq_flags(irq, IRQF_VALID);
return 0;
}
static const struct irq_domain_ops msi_domain_ops = {
.map = dw_pcie_msi_map,
};
int __init dw_pcie_host_init(struct pcie_port *pp)
{
struct device_node *np = pp->dev->of_node;
struct of_pci_range range;
struct of_pci_range_parser parser;
u32 val;
int i;
if (of_pci_range_parser_init(&parser, np)) {
dev_err(pp->dev, "missing ranges property\n");
@ -223,6 +440,19 @@ int __init dw_pcie_host_init(struct pcie_port *pp)
return -EINVAL;
}
if (IS_ENABLED(CONFIG_PCI_MSI)) {
pp->irq_domain = irq_domain_add_linear(pp->dev->of_node,
MAX_MSI_IRQS, &msi_domain_ops,
&dw_pcie_msi_chip);
if (!pp->irq_domain) {
dev_err(pp->dev, "irq domain init failed\n");
return -ENXIO;
}
for (i = 0; i < MAX_MSI_IRQS; i++)
irq_create_mapping(pp->irq_domain, i);
}
if (pp->ops->host_init)
pp->ops->host_init(pp);
@ -438,7 +668,7 @@ static struct pci_ops dw_pcie_ops = {
.write = dw_pcie_wr_conf,
};
int dw_pcie_setup(int nr, struct pci_sys_data *sys)
static int dw_pcie_setup(int nr, struct pci_sys_data *sys)
{
struct pcie_port *pp;
@ -461,7 +691,7 @@ int dw_pcie_setup(int nr, struct pci_sys_data *sys)
return 1;
}
struct pci_bus *dw_pcie_scan_bus(int nr, struct pci_sys_data *sys)
static struct pci_bus *dw_pcie_scan_bus(int nr, struct pci_sys_data *sys)
{
struct pci_bus *bus;
struct pcie_port *pp = sys_to_pcie(sys);
@ -478,17 +708,28 @@ struct pci_bus *dw_pcie_scan_bus(int nr, struct pci_sys_data *sys)
return bus;
}
int dw_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
static int dw_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
struct pcie_port *pp = sys_to_pcie(dev->bus->sysdata);
return pp->irq;
}
static void dw_pcie_add_bus(struct pci_bus *bus)
{
if (IS_ENABLED(CONFIG_PCI_MSI)) {
struct pcie_port *pp = sys_to_pcie(bus->sysdata);
dw_pcie_msi_chip.dev = pp->dev;
bus->msi = &dw_pcie_msi_chip;
}
}
static struct hw_pci dw_pci = {
.setup = dw_pcie_setup,
.scan = dw_pcie_scan_bus,
.map_irq = dw_pcie_map_irq,
.add_bus = dw_pcie_add_bus,
};
void dw_pcie_setup_rc(struct pcie_port *pp)


@ -11,6 +11,9 @@
* published by the Free Software Foundation.
*/
#ifndef _PCIE_DESIGNWARE_H
#define _PCIE_DESIGNWARE_H
struct pcie_port_info {
u32 cfg0_size;
u32 cfg1_size;
@ -20,6 +23,14 @@ struct pcie_port_info {
phys_addr_t mem_bus_addr;
};
/*
* The maximum number of MSI IRQs per controller can be 256, but keep it
* at 32 for now; we will probably never need more. If needed, increase
* it in multiples of 32.
*/
#define MAX_MSI_IRQS 32
#define MAX_MSI_CTRLS (MAX_MSI_IRQS / 32)
struct pcie_port {
struct device *dev;
u8 root_bus_nr;
@ -38,6 +49,10 @@ struct pcie_port {
int irq;
u32 lanes;
struct pcie_host_ops *ops;
int msi_irq;
struct irq_domain *irq_domain;
unsigned long msi_data;
DECLARE_BITMAP(msi_irq_in_use, MAX_MSI_IRQS);
};
struct pcie_host_ops {
@ -51,15 +66,12 @@ struct pcie_host_ops {
void (*host_init)(struct pcie_port *pp);
};
extern unsigned long global_io_offset;
int cfg_read(void __iomem *addr, int where, int size, u32 *val);
int cfg_write(void __iomem *addr, int where, int size, u32 val);
int dw_pcie_wr_own_conf(struct pcie_port *pp, int where, int size, u32 val);
int dw_pcie_rd_own_conf(struct pcie_port *pp, int where, int size, u32 *val);
void dw_handle_msi_irq(struct pcie_port *pp);
void dw_pcie_msi_init(struct pcie_port *pp);
int dw_pcie_link_up(struct pcie_port *pp);
void dw_pcie_setup_rc(struct pcie_port *pp);
int dw_pcie_host_init(struct pcie_port *pp);
int dw_pcie_setup(int nr, struct pci_sys_data *sys);
struct pci_bus *dw_pcie_scan_bus(int nr, struct pci_sys_data *sys);
int dw_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin);
#endif /* _PCIE_DESIGNWARE_H */


@ -338,7 +338,7 @@ int acpi_get_hp_hw_control_from_firmware(struct pci_dev *pdev, u32 flags)
acpi_handle chandle, handle;
struct acpi_buffer string = { ACPI_ALLOCATE_BUFFER, NULL };
flags &= OSC_SHPC_NATIVE_HP_CONTROL;
flags &= OSC_PCI_SHPC_NATIVE_HP_CONTROL;
if (!flags) {
err("Invalid flags %u specified!\n", flags);
return -EINVAL;


@ -39,16 +39,6 @@
#include <linux/mutex.h>
#include <linux/pci_hotplug.h>
#define dbg(format, arg...) \
do { \
if (acpiphp_debug) \
printk(KERN_DEBUG "%s: " format, \
MY_NAME , ## arg); \
} while (0)
#define err(format, arg...) printk(KERN_ERR "%s: " format, MY_NAME , ## arg)
#define info(format, arg...) printk(KERN_INFO "%s: " format, MY_NAME , ## arg)
#define warn(format, arg...) printk(KERN_WARNING "%s: " format, MY_NAME , ## arg)
struct acpiphp_context;
struct acpiphp_bridge;
struct acpiphp_slot;


@ -31,6 +31,8 @@
*
*/
#define pr_fmt(fmt) "acpiphp: " fmt
#include <linux/init.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
@ -43,12 +45,9 @@
#include <linux/smp.h>
#include "acpiphp.h"
#define MY_NAME "acpiphp"
/* name size which is used for entries in pcihpfs */
#define SLOT_NAME_SIZE 21 /* {_SUN} */
bool acpiphp_debug;
bool acpiphp_disabled;
/* local variables */
@ -61,9 +60,7 @@ static struct acpiphp_attention_info *attention_info;
MODULE_AUTHOR(DRIVER_AUTHOR);
MODULE_DESCRIPTION(DRIVER_DESC);
MODULE_LICENSE("GPL");
MODULE_PARM_DESC(debug, "Debugging mode enabled or not");
MODULE_PARM_DESC(disable, "disable acpiphp driver");
module_param_named(debug, acpiphp_debug, bool, 0644);
module_param_named(disable, acpiphp_disabled, bool, 0444);
/* export the attention callback registration methods */
@ -139,7 +136,7 @@ static int enable_slot(struct hotplug_slot *hotplug_slot)
{
struct slot *slot = hotplug_slot->private;
dbg("%s - physical_slot = %s\n", __func__, slot_name(slot));
pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
/* enable the specified slot */
return acpiphp_enable_slot(slot->acpi_slot);
@ -156,7 +153,7 @@ static int disable_slot(struct hotplug_slot *hotplug_slot)
{
struct slot *slot = hotplug_slot->private;
dbg("%s - physical_slot = %s\n", __func__, slot_name(slot));
pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
/* disable the specified slot */
return acpiphp_disable_and_eject_slot(slot->acpi_slot);
@ -176,8 +173,9 @@ static int disable_slot(struct hotplug_slot *hotplug_slot)
{
int retval = -ENODEV;
dbg("%s - physical_slot = %s\n", __func__, hotplug_slot_name(hotplug_slot));
pr_debug("%s - physical_slot = %s\n", __func__,
hotplug_slot_name(hotplug_slot));
if (attention_info && try_module_get(attention_info->owner)) {
retval = attention_info->set_attn(hotplug_slot, status);
module_put(attention_info->owner);
@ -199,7 +197,7 @@ static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
struct slot *slot = hotplug_slot->private;
dbg("%s - physical_slot = %s\n", __func__, slot_name(slot));
pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
*value = acpiphp_get_power_status(slot->acpi_slot);
@ -221,7 +219,8 @@ static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
int retval = -EINVAL;
dbg("%s - physical_slot = %s\n", __func__, hotplug_slot_name(hotplug_slot));
pr_debug("%s - physical_slot = %s\n", __func__,
hotplug_slot_name(hotplug_slot));
if (attention_info && try_module_get(attention_info->owner)) {
retval = attention_info->get_attn(hotplug_slot, value);
@ -244,7 +243,7 @@ static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
struct slot *slot = hotplug_slot->private;
dbg("%s - physical_slot = %s\n", __func__, slot_name(slot));
pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
*value = acpiphp_get_latch_status(slot->acpi_slot);
@ -264,7 +263,7 @@ static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
struct slot *slot = hotplug_slot->private;
dbg("%s - physical_slot = %s\n", __func__, slot_name(slot));
pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
*value = acpiphp_get_adapter_status(slot->acpi_slot);
@ -279,7 +278,7 @@ static void release_slot(struct hotplug_slot *hotplug_slot)
{
struct slot *slot = hotplug_slot->private;
dbg("%s - physical_slot = %s\n", __func__, slot_name(slot));
pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
kfree(slot->hotplug_slot);
kfree(slot);
@ -322,11 +321,11 @@ int acpiphp_register_hotplug_slot(struct acpiphp_slot *acpiphp_slot,
if (retval == -EBUSY)
goto error_hpslot;
if (retval) {
err("pci_hp_register failed with error %d\n", retval);
pr_err("pci_hp_register failed with error %d\n", retval);
goto error_hpslot;
}
info("Slot [%s] registered\n", slot_name(slot));
pr_info("Slot [%s] registered\n", slot_name(slot));
return 0;
error_hpslot:
@ -343,17 +342,17 @@ void acpiphp_unregister_hotplug_slot(struct acpiphp_slot *acpiphp_slot)
struct slot *slot = acpiphp_slot->slot;
int retval = 0;
info("Slot [%s] unregistered\n", slot_name(slot));
pr_info("Slot [%s] unregistered\n", slot_name(slot));
retval = pci_hp_deregister(slot->hotplug_slot);
if (retval)
err("pci_hp_deregister failed with error %d\n", retval);
pr_err("pci_hp_deregister failed with error %d\n", retval);
}
void __init acpiphp_init(void)
{
info(DRIVER_DESC " version: " DRIVER_VERSION "%s\n",
pr_info(DRIVER_DESC " version: " DRIVER_VERSION "%s\n",
acpiphp_disabled ? ", disabled by user; please report a bug"
: "");
}


@ -39,6 +39,8 @@
* bus. It loses the refcount when the driver unloads.
*/
#define pr_fmt(fmt) "acpiphp_glue: " fmt
#include <linux/init.h>
#include <linux/module.h>
@ -58,8 +60,6 @@ static LIST_HEAD(bridge_list);
static DEFINE_MUTEX(bridge_mutex);
static DEFINE_MUTEX(acpiphp_context_lock);
#define MY_NAME "acpiphp_glue"
static void handle_hotplug_event(acpi_handle handle, u32 type, void *data);
static void acpiphp_sanitize_bus(struct pci_bus *bus);
static void acpiphp_set_hpp_values(struct pci_bus *bus);
@ -335,7 +335,7 @@ static acpi_status register_slot(acpi_handle handle, u32 lvl, void *data,
if (ACPI_FAILURE(status))
sun = bridge->nr_slots;
dbg("found ACPI PCI Hotplug slot %llu at PCI %04x:%02x:%02x\n",
pr_debug("found ACPI PCI Hotplug slot %llu at PCI %04x:%02x:%02x\n",
sun, pci_domain_nr(pbus), pbus->number, device);
retval = acpiphp_register_hotplug_slot(slot, sun);
@ -343,10 +343,10 @@ static acpi_status register_slot(acpi_handle handle, u32 lvl, void *data,
slot->slot = NULL;
bridge->nr_slots--;
if (retval == -EBUSY)
warn("Slot %llu already registered by another "
pr_warn("Slot %llu already registered by another "
"hotplug driver\n", sun);
else
warn("acpiphp_register_hotplug_slot failed "
pr_warn("acpiphp_register_hotplug_slot failed "
"(err code = 0x%x)\n", retval);
}
/* Even if the slot registration fails, we can still use it. */
@ -369,7 +369,7 @@ static acpi_status register_slot(acpi_handle handle, u32 lvl, void *data,
if (register_hotplug_dock_device(handle,
&acpiphp_dock_ops, context,
acpiphp_dock_init, acpiphp_dock_release))
dbg("failed to register dock device\n");
pr_debug("failed to register dock device\n");
}
/* install notify handler */
@ -427,7 +427,7 @@ static void cleanup_bridge(struct acpiphp_bridge *bridge)
ACPI_SYSTEM_NOTIFY,
handle_hotplug_event);
if (ACPI_FAILURE(status))
err("failed to remove notify handler\n");
pr_err("failed to remove notify handler\n");
}
}
if (slot->slot)
@ -826,8 +826,9 @@ static void hotplug_event(acpi_handle handle, u32 type, void *data)
switch (type) {
case ACPI_NOTIFY_BUS_CHECK:
/* bus re-enumerate */
dbg("%s: Bus check notify on %s\n", __func__, objname);
dbg("%s: re-enumerating slots under %s\n", __func__, objname);
pr_debug("%s: Bus check notify on %s\n", __func__, objname);
pr_debug("%s: re-enumerating slots under %s\n",
__func__, objname);
if (bridge) {
acpiphp_check_bridge(bridge);
} else {
@ -841,7 +842,7 @@ static void hotplug_event(acpi_handle handle, u32 type, void *data)
case ACPI_NOTIFY_DEVICE_CHECK:
/* device check */
dbg("%s: Device check notify on %s\n", __func__, objname);
pr_debug("%s: Device check notify on %s\n", __func__, objname);
if (bridge) {
acpiphp_check_bridge(bridge);
} else {
@ -862,7 +863,7 @@ static void hotplug_event(acpi_handle handle, u32 type, void *data)
case ACPI_NOTIFY_EJECT_REQUEST:
/* request device eject */
dbg("%s: Device eject notify on %s\n", __func__, objname);
pr_debug("%s: Device eject notify on %s\n", __func__, objname);
acpiphp_disable_and_eject_slot(func->slot);
break;
}


@ -25,6 +25,8 @@
*
*/
#define pr_fmt(fmt) "acpiphp_ibm: " fmt
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/module.h>
@ -43,23 +45,11 @@
#define DRIVER_AUTHOR "Irene Zubarev <zubarev@us.ibm.com>, Vernon Mauery <vernux@us.ibm.com>"
#define DRIVER_DESC "ACPI Hot Plug PCI Controller Driver IBM extension"
static bool debug;
MODULE_AUTHOR(DRIVER_AUTHOR);
MODULE_DESCRIPTION(DRIVER_DESC);
MODULE_LICENSE("GPL");
MODULE_VERSION(DRIVER_VERSION);
module_param(debug, bool, 0644);
MODULE_PARM_DESC(debug, " Debugging mode enabled or not");
#define MY_NAME "acpiphp_ibm"
#undef dbg
#define dbg(format, arg...) \
do { \
if (debug) \
printk(KERN_DEBUG "%s: " format, \
MY_NAME , ## arg); \
} while (0)
#define FOUND_APCI 0x61504349
/* these are the names for the IBM ACPI pseudo-device */
@ -189,7 +179,7 @@ static int ibm_set_attention_status(struct hotplug_slot *slot, u8 status)
ibm_slot = ibm_slot_from_id(hpslot_to_sun(slot));
dbg("%s: set slot %d (%d) attention status to %d\n", __func__,
pr_debug("%s: set slot %d (%d) attention status to %d\n", __func__,
ibm_slot->slot.slot_num, ibm_slot->slot.slot_id,
(status ? 1 : 0));
@ -202,10 +192,10 @@ static int ibm_set_attention_status(struct hotplug_slot *slot, u8 status)
stat = acpi_evaluate_integer(ibm_acpi_handle, "APLS", &params, &rc);
if (ACPI_FAILURE(stat)) {
err("APLS evaluation failed: 0x%08x\n", stat);
pr_err("APLS evaluation failed: 0x%08x\n", stat);
return -ENODEV;
} else if (!rc) {
err("APLS method failed: 0x%08llx\n", rc);
pr_err("APLS method failed: 0x%08llx\n", rc);
return -ERANGE;
}
return 0;
@ -234,7 +224,7 @@ static int ibm_get_attention_status(struct hotplug_slot *slot, u8 *status)
else
*status = 0;
dbg("%s: get slot %d (%d) attention status is %d\n", __func__,
pr_debug("%s: get slot %d (%d) attention status is %d\n", __func__,
ibm_slot->slot.slot_num, ibm_slot->slot.slot_id,
*status);
@ -266,10 +256,10 @@ static void ibm_handle_events(acpi_handle handle, u32 event, void *context)
u8 subevent = event & 0xf0;
struct notification *note = context;
dbg("%s: Received notification %02x\n", __func__, event);
pr_debug("%s: Received notification %02x\n", __func__, event);
if (subevent == 0x80) {
dbg("%s: generationg bus event\n", __func__);
pr_debug("%s: generationg bus event\n", __func__);
acpi_bus_generate_netlink_event(note->device->pnp.device_class,
dev_name(&note->device->dev),
note->event, detail);
@ -301,7 +291,7 @@ static int ibm_get_table_from_acpi(char **bufp)
status = acpi_evaluate_object(ibm_acpi_handle, "APCI", NULL, &buffer);
if (ACPI_FAILURE(status)) {
err("%s: APCI evaluation failed\n", __func__);
pr_err("%s: APCI evaluation failed\n", __func__);
return -ENODEV;
}
@ -309,13 +299,13 @@ static int ibm_get_table_from_acpi(char **bufp)
if (!(package) ||
(package->type != ACPI_TYPE_PACKAGE) ||
!(package->package.elements)) {
err("%s: Invalid APCI object\n", __func__);
pr_err("%s: Invalid APCI object\n", __func__);
goto read_table_done;
}
for(size = 0, i = 0; i < package->package.count; i++) {
if (package->package.elements[i].type != ACPI_TYPE_BUFFER) {
err("%s: Invalid APCI element %d\n", __func__, i);
pr_err("%s: Invalid APCI element %d\n", __func__, i);
goto read_table_done;
}
size += package->package.elements[i].buffer.length;
@ -325,7 +315,7 @@ static int ibm_get_table_from_acpi(char **bufp)
goto read_table_done;
lbuf = kzalloc(size, GFP_KERNEL);
dbg("%s: element count: %i, ASL table size: %i, &table = 0x%p\n",
pr_debug("%s: element count: %i, ASL table size: %i, &table = 0x%p\n",
__func__, package->package.count, size, lbuf);
if (lbuf) {
@ -370,8 +360,8 @@ static ssize_t ibm_read_apci_table(struct file *filp, struct kobject *kobj,
{
int bytes_read = -EINVAL;
char *table = NULL;
dbg("%s: pos = %d, size = %zd\n", __func__, (int)pos, size);
pr_debug("%s: pos = %d, size = %zd\n", __func__, (int)pos, size);
if (pos == 0) {
bytes_read = ibm_get_table_from_acpi(&table);
@ -403,7 +393,7 @@ static acpi_status __init ibm_find_acpi_device(acpi_handle handle,
status = acpi_get_object_info(handle, &info);
if (ACPI_FAILURE(status)) {
err("%s: Failed to get device information status=0x%x\n",
pr_err("%s: Failed to get device information status=0x%x\n",
__func__, status);
return retval;
}
@ -411,7 +401,7 @@ static acpi_status __init ibm_find_acpi_device(acpi_handle handle,
if (info->current_status && (info->valid & ACPI_VALID_HID) &&
(!strcmp(info->hardware_id.string, IBM_HARDWARE_ID1) ||
!strcmp(info->hardware_id.string, IBM_HARDWARE_ID2))) {
dbg("found hardware: %s, handle: %p\n",
pr_debug("found hardware: %s, handle: %p\n",
info->hardware_id.string, handle);
*phandle = handle;
/* returning non-zero causes the search to stop
@ -432,18 +422,18 @@ static int __init ibm_acpiphp_init(void)
struct acpi_device *device;
struct kobject *sysdir = &pci_slots_kset->kobj;
dbg("%s\n", __func__);
pr_debug("%s\n", __func__);
if (acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT,
ACPI_UINT32_MAX, ibm_find_acpi_device, NULL,
&ibm_acpi_handle, NULL) != FOUND_APCI) {
err("%s: acpi_walk_namespace failed\n", __func__);
pr_err("%s: acpi_walk_namespace failed\n", __func__);
retval = -ENODEV;
goto init_return;
}
dbg("%s: found IBM aPCI device\n", __func__);
pr_debug("%s: found IBM aPCI device\n", __func__);
if (acpi_bus_get_device(ibm_acpi_handle, &device)) {
err("%s: acpi_bus_get_device failed\n", __func__);
pr_err("%s: acpi_bus_get_device failed\n", __func__);
retval = -ENODEV;
goto init_return;
}
@ -457,7 +447,7 @@ static int __init ibm_acpiphp_init(void)
ACPI_DEVICE_NOTIFY, ibm_handle_events,
&ibm_note);
if (ACPI_FAILURE(status)) {
err("%s: Failed to register notification handler\n",
pr_err("%s: Failed to register notification handler\n",
__func__);
retval = -EBUSY;
goto init_cleanup;
@ -479,17 +469,17 @@ static void __exit ibm_acpiphp_exit(void)
acpi_status status;
struct kobject *sysdir = &pci_slots_kset->kobj;
dbg("%s\n", __func__);
pr_debug("%s\n", __func__);
if (acpiphp_unregister_attention(&ibm_attention_info))
err("%s: attention info deregistration failed", __func__);
pr_err("%s: attention info deregistration failed", __func__);
status = acpi_remove_notify_handler(
ibm_acpi_handle,
ACPI_DEVICE_NOTIFY,
ibm_handle_events);
if (ACPI_FAILURE(status))
err("%s: Notification handler removal failed\n", __func__);
pr_err("%s: Notification handler removal failed\n", __func__);
/* remove the /sys entries */
sysfs_remove_bin_file(sysdir, &ibm_apci_table_attr);
}


@ -191,7 +191,7 @@ static inline const char *slot_name(struct slot *slot)
#include <linux/pci-acpi.h>
static inline int get_hp_hw_control_from_firmware(struct pci_dev *dev)
{
u32 flags = OSC_SHPC_NATIVE_HP_CONTROL;
u32 flags = OSC_PCI_SHPC_NATIVE_HP_CONTROL;
return acpi_get_hp_hw_control_from_firmware(dev, flags);
}
#else


@ -185,7 +185,7 @@ static inline __attribute_const__ u32 msi_enabled_mask(u16 control)
* reliably as devices without an INTx disable bit will then generate a
* level IRQ which will never be cleared.
*/
static u32 __msi_mask_irq(struct msi_desc *desc, u32 mask, u32 flag)
u32 default_msi_mask_irq(struct msi_desc *desc, u32 mask, u32 flag)
{
u32 mask_bits = desc->masked;
@ -199,9 +199,14 @@ static u32 __msi_mask_irq(struct msi_desc *desc, u32 mask, u32 flag)
return mask_bits;
}
__weak u32 arch_msi_mask_irq(struct msi_desc *desc, u32 mask, u32 flag)
{
return default_msi_mask_irq(desc, mask, flag);
}
static void msi_mask_irq(struct msi_desc *desc, u32 mask, u32 flag)
{
desc->masked = __msi_mask_irq(desc, mask, flag);
desc->masked = arch_msi_mask_irq(desc, mask, flag);
}
/*
@ -211,7 +216,7 @@ static void msi_mask_irq(struct msi_desc *desc, u32 mask, u32 flag)
* file. This saves a few milliseconds when initialising devices with lots
* of MSI-X interrupts.
*/
static u32 __msix_mask_irq(struct msi_desc *desc, u32 flag)
u32 default_msix_mask_irq(struct msi_desc *desc, u32 flag)
{
u32 mask_bits = desc->masked;
unsigned offset = desc->msi_attrib.entry_nr * PCI_MSIX_ENTRY_SIZE +
@ -224,9 +229,14 @@ static u32 __msix_mask_irq(struct msi_desc *desc, u32 flag)
return mask_bits;
}
__weak u32 arch_msix_mask_irq(struct msi_desc *desc, u32 flag)
{
return default_msix_mask_irq(desc, flag);
}
static void msix_mask_irq(struct msi_desc *desc, u32 flag)
{
desc->masked = __msix_mask_irq(desc, flag);
desc->masked = arch_msix_mask_irq(desc, flag);
}
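
Splitting the masking code into default_msi_mask_irq()/default_msix_mask_irq() plus __weak arch_* wrappers lets an architecture interpose on MSI masking (the x86 MSI masking ops mentioned in the changelog). A hypothetical architecture override, sketched only to show the intended hook and using a placeholder predicate, could read:

/* hypothetical arch-level override of the weak default above */
u32 arch_msi_mask_irq(struct msi_desc *desc, u32 mask, u32 flag)
{
	if (hypervisor_owns_msi_masking())	/* placeholder predicate */
		return desc->masked;		/* leave masking to the hypervisor */

	return default_msi_mask_irq(desc, mask, flag);
}
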
static void msi_set_mask_bit(struct irq_data *data, u32 flag)
@ -831,7 +841,7 @@ int pci_enable_msi_block(struct pci_dev *dev, unsigned int nvec)
int status, maxvec;
u16 msgctl;
if (!dev->msi_cap)
if (!dev->msi_cap || dev->current_state != PCI_D0)
return -EINVAL;
pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &msgctl);
@ -862,7 +872,7 @@ int pci_enable_msi_block_auto(struct pci_dev *dev, unsigned int *maxvec)
int ret, nvec;
u16 msgctl;
if (!dev->msi_cap)
if (!dev->msi_cap || dev->current_state != PCI_D0)
return -EINVAL;
pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &msgctl);
@ -902,7 +912,7 @@ void pci_msi_shutdown(struct pci_dev *dev)
pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &ctrl);
mask = msi_capable_mask(ctrl);
/* Keep cached state to be restored */
__msi_mask_irq(desc, mask, ~mask);
arch_msi_mask_irq(desc, mask, ~mask);
/* Restore dev->irq to its default pin-assertion irq */
dev->irq = desc->msi_attrib.default_irq;
@ -955,7 +965,7 @@ int pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries, int nvec)
int status, nr_entries;
int i, j;
if (!entries || !dev->msix_cap)
if (!entries || !dev->msix_cap || dev->current_state != PCI_D0)
return -EINVAL;
status = pci_msi_check_device(dev, nvec, PCI_CAP_ID_MSIX);
@ -998,7 +1008,7 @@ void pci_msix_shutdown(struct pci_dev *dev)
/* Return the device with MSI-X masked as initial states */
list_for_each_entry(entry, &dev->msi_list, list) {
/* Keep cached states to be restored */
__msix_mask_irq(entry, 1);
arch_msix_mask_irq(entry, 1);
}
msix_set_enable(dev, 0);


@ -267,11 +267,19 @@ static long local_pci_probe(void *_ddi)
pm_runtime_get_sync(dev);
pci_dev->driver = pci_drv;
rc = pci_drv->probe(pci_dev, ddi->id);
if (rc) {
if (!rc)
return rc;
if (rc < 0) {
pci_dev->driver = NULL;
pm_runtime_put_sync(dev);
return rc;
}
return rc;
/*
* Probe function should return < 0 for failure, 0 for success
* Treat values > 0 as success, but warn.
*/
dev_warn(dev, "Driver probe function unexpectedly returned %d\n", rc);
return 0;
}
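
With the change above, local_pci_probe() still accepts a positive return from ->probe() but warns about it; only 0 means success and a negative errno means failure. A conforming probe skeleton (driver and function names are placeholders) looks like:

static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int ret;

	ret = pci_enable_device(pdev);
	if (ret)
		return ret;	/* negative errno on failure */

	pci_set_master(pdev);	/* enable bus mastering */

	return 0;		/* 0 on success, never a positive value */
}
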
static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
@ -602,18 +610,10 @@ static int pci_pm_prepare(struct device *dev)
return error;
}
static void pci_pm_complete(struct device *dev)
{
struct device_driver *drv = dev->driver;
if (drv && drv->pm && drv->pm->complete)
drv->pm->complete(dev);
}
#else /* !CONFIG_PM_SLEEP */
#define pci_pm_prepare NULL
#define pci_pm_complete NULL
#endif /* !CONFIG_PM_SLEEP */
@ -1124,9 +1124,8 @@ static int pci_pm_runtime_idle(struct device *dev)
#ifdef CONFIG_PM
const struct dev_pm_ops pci_dev_pm_ops = {
static const struct dev_pm_ops pci_dev_pm_ops = {
.prepare = pci_pm_prepare,
.complete = pci_pm_complete,
.suspend = pci_pm_suspend,
.resume = pci_pm_resume,
.freeze = pci_pm_freeze,
@ -1319,7 +1318,7 @@ struct bus_type pci_bus_type = {
.probe = pci_device_probe,
.remove = pci_device_remove,
.shutdown = pci_device_shutdown,
.dev_attrs = pci_dev_attrs,
.dev_groups = pci_dev_groups,
.bus_groups = pci_bus_groups,
.drv_groups = pci_drv_groups,
.pm = PCI_PM_OPS_PTR,


@ -42,7 +42,8 @@ field##_show(struct device *dev, struct device_attribute *attr, char *buf) \
\
pdev = to_pci_dev (dev); \
return sprintf (buf, format_string, pdev->field); \
}
} \
static DEVICE_ATTR_RO(field)
pci_config_attr(vendor, "0x%04x\n");
pci_config_attr(device, "0x%04x\n");
@ -73,28 +74,12 @@ static ssize_t broken_parity_status_store(struct device *dev,
return count;
}
static DEVICE_ATTR_RW(broken_parity_status);
static ssize_t local_cpus_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
const struct cpumask *mask;
int len;
#ifdef CONFIG_NUMA
mask = (dev_to_node(dev) == -1) ? cpu_online_mask :
cpumask_of_node(dev_to_node(dev));
#else
mask = cpumask_of_pcibus(to_pci_dev(dev)->bus);
#endif
len = cpumask_scnprintf(buf, PAGE_SIZE-2, mask);
buf[len++] = '\n';
buf[len] = '\0';
return len;
}
static ssize_t local_cpulist_show(struct device *dev,
struct device_attribute *attr, char *buf)
static ssize_t pci_dev_show_local_cpu(struct device *dev,
int type,
struct device_attribute *attr,
char *buf)
{
const struct cpumask *mask;
int len;
@ -105,12 +90,29 @@ static ssize_t local_cpulist_show(struct device *dev,
#else
mask = cpumask_of_pcibus(to_pci_dev(dev)->bus);
#endif
len = cpulist_scnprintf(buf, PAGE_SIZE-2, mask);
len = type ?
cpumask_scnprintf(buf, PAGE_SIZE-2, mask) :
cpulist_scnprintf(buf, PAGE_SIZE-2, mask);
buf[len++] = '\n';
buf[len] = '\0';
return len;
}
static ssize_t local_cpus_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
return pci_dev_show_local_cpu(dev, 1, attr, buf);
}
static DEVICE_ATTR_RO(local_cpus);
static ssize_t local_cpulist_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
return pci_dev_show_local_cpu(dev, 0, attr, buf);
}
static DEVICE_ATTR_RO(local_cpulist);
/*
* PCI Bus Class Devices
*/
@ -170,6 +172,7 @@ resource_show(struct device * dev, struct device_attribute *attr, char * buf)
}
return (str - buf);
}
static DEVICE_ATTR_RO(resource);
static ssize_t modalias_show(struct device *dev, struct device_attribute *attr, char *buf)
{
@ -181,10 +184,11 @@ static ssize_t modalias_show(struct device *dev, struct device_attribute *attr,
(u8)(pci_dev->class >> 16), (u8)(pci_dev->class >> 8),
(u8)(pci_dev->class));
}
static DEVICE_ATTR_RO(modalias);
static ssize_t is_enabled_store(struct device *dev,
struct device_attribute *attr, const char *buf,
size_t count)
static ssize_t enabled_store(struct device *dev,
struct device_attribute *attr, const char *buf,
size_t count)
{
struct pci_dev *pdev = to_pci_dev(dev);
unsigned long val;
@ -208,14 +212,15 @@ static ssize_t is_enabled_store(struct device *dev,
return result < 0 ? result : count;
}
static ssize_t is_enabled_show(struct device *dev,
struct device_attribute *attr, char *buf)
static ssize_t enabled_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct pci_dev *pdev;
pdev = to_pci_dev (dev);
return sprintf (buf, "%u\n", atomic_read(&pdev->enable_cnt));
}
static DEVICE_ATTR_RW(enabled);
#ifdef CONFIG_NUMA
static ssize_t
@ -223,6 +228,7 @@ numa_node_show(struct device *dev, struct device_attribute *attr, char *buf)
{
return sprintf (buf, "%d\n", dev->numa_node);
}
static DEVICE_ATTR_RO(numa_node);
#endif
static ssize_t
@ -232,6 +238,7 @@ dma_mask_bits_show(struct device *dev, struct device_attribute *attr, char *buf)
return sprintf (buf, "%d\n", fls64(pdev->dma_mask));
}
static DEVICE_ATTR_RO(dma_mask_bits);
static ssize_t
consistent_dma_mask_bits_show(struct device *dev, struct device_attribute *attr,
@ -239,6 +246,7 @@ consistent_dma_mask_bits_show(struct device *dev, struct device_attribute *attr,
{
return sprintf (buf, "%d\n", fls64(dev->coherent_dma_mask));
}
static DEVICE_ATTR_RO(consistent_dma_mask_bits);
static ssize_t
msi_bus_show(struct device *dev, struct device_attribute *attr, char *buf)
@ -283,6 +291,7 @@ msi_bus_store(struct device *dev, struct device_attribute *attr,
return count;
}
static DEVICE_ATTR_RW(msi_bus);
static DEFINE_MUTEX(pci_remove_rescan_mutex);
static ssize_t bus_rescan_store(struct bus_type *bus, const char *buf,
@ -304,7 +313,7 @@ static ssize_t bus_rescan_store(struct bus_type *bus, const char *buf,
}
static BUS_ATTR(rescan, (S_IWUSR|S_IWGRP), NULL, bus_rescan_store);
struct attribute *pci_bus_attrs[] = {
static struct attribute *pci_bus_attrs[] = {
&bus_attr_rescan.attr,
NULL,
};
@ -335,8 +344,9 @@ dev_rescan_store(struct device *dev, struct device_attribute *attr,
}
return count;
}
struct device_attribute dev_rescan_attr = __ATTR(rescan, (S_IWUSR|S_IWGRP),
NULL, dev_rescan_store);
static struct device_attribute dev_rescan_attr = __ATTR(rescan,
(S_IWUSR|S_IWGRP),
NULL, dev_rescan_store);
static void remove_callback(struct device *dev)
{
@ -366,8 +376,9 @@ remove_store(struct device *dev, struct device_attribute *dummy,
count = ret;
return count;
}
struct device_attribute dev_remove_attr = __ATTR(remove, (S_IWUSR|S_IWGRP),
NULL, remove_store);
static struct device_attribute dev_remove_attr = __ATTR(remove,
(S_IWUSR|S_IWGRP),
NULL, remove_store);
static ssize_t
dev_bus_rescan_store(struct device *dev, struct device_attribute *attr,
@ -414,6 +425,7 @@ static ssize_t d3cold_allowed_show(struct device *dev,
struct pci_dev *pdev = to_pci_dev(dev);
return sprintf (buf, "%u\n", pdev->d3cold_allowed);
}
static DEVICE_ATTR_RW(d3cold_allowed);
#endif
#ifdef CONFIG_PCI_IOV
@ -499,30 +511,38 @@ static struct device_attribute sriov_numvfs_attr =
sriov_numvfs_show, sriov_numvfs_store);
#endif /* CONFIG_PCI_IOV */
struct device_attribute pci_dev_attrs[] = {
__ATTR_RO(resource),
__ATTR_RO(vendor),
__ATTR_RO(device),
__ATTR_RO(subsystem_vendor),
__ATTR_RO(subsystem_device),
__ATTR_RO(class),
__ATTR_RO(irq),
__ATTR_RO(local_cpus),
__ATTR_RO(local_cpulist),
__ATTR_RO(modalias),
static struct attribute *pci_dev_attrs[] = {
&dev_attr_resource.attr,
&dev_attr_vendor.attr,
&dev_attr_device.attr,
&dev_attr_subsystem_vendor.attr,
&dev_attr_subsystem_device.attr,
&dev_attr_class.attr,
&dev_attr_irq.attr,
&dev_attr_local_cpus.attr,
&dev_attr_local_cpulist.attr,
&dev_attr_modalias.attr,
#ifdef CONFIG_NUMA
__ATTR_RO(numa_node),
&dev_attr_numa_node.attr,
#endif
__ATTR_RO(dma_mask_bits),
__ATTR_RO(consistent_dma_mask_bits),
__ATTR(enable, 0600, is_enabled_show, is_enabled_store),
__ATTR(broken_parity_status,(S_IRUGO|S_IWUSR),
broken_parity_status_show,broken_parity_status_store),
__ATTR(msi_bus, 0644, msi_bus_show, msi_bus_store),
&dev_attr_dma_mask_bits.attr,
&dev_attr_consistent_dma_mask_bits.attr,
&dev_attr_enabled.attr,
&dev_attr_broken_parity_status.attr,
&dev_attr_msi_bus.attr,
#if defined(CONFIG_PM_RUNTIME) && defined(CONFIG_ACPI)
__ATTR(d3cold_allowed, 0644, d3cold_allowed_show, d3cold_allowed_store),
&dev_attr_d3cold_allowed.attr,
#endif
__ATTR_NULL,
NULL,
};
static const struct attribute_group pci_dev_group = {
.attrs = pci_dev_attrs,
};
const struct attribute_group *pci_dev_groups[] = {
&pci_dev_group,
NULL,
};
static struct attribute *pcibus_attrs[] = {
@ -554,7 +574,7 @@ boot_vga_show(struct device *dev, struct device_attribute *attr, char *buf)
!!(pdev->resource[PCI_ROM_RESOURCE].flags &
IORESOURCE_ROM_SHADOW));
}
struct device_attribute vga_attr = __ATTR_RO(boot_vga);
static struct device_attribute vga_attr = __ATTR_RO(boot_vga);
static ssize_t
pci_read_config(struct file *filp, struct kobject *kobj,


@ -1148,18 +1148,16 @@ int pci_reenable_device(struct pci_dev *dev)
static void pci_enable_bridge(struct pci_dev *dev)
{
struct pci_dev *bridge;
int retval;
if (!dev)
return;
pci_enable_bridge(dev->bus->self);
bridge = pci_upstream_bridge(dev);
if (bridge)
pci_enable_bridge(bridge);
if (pci_is_enabled(dev)) {
if (!dev->is_busmaster) {
dev_warn(&dev->dev, "driver skip pci_set_master, fix it!\n");
if (!dev->is_busmaster)
pci_set_master(dev);
}
return;
}
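
pci_enable_bridge() now resolves its upstream device through pci_upstream_bridge() instead of dereferencing dev->bus->self directly; per the SR-IOV notes in the changelog, that helper presumably maps a VF to its physical function first, roughly along these lines (sketch, not copied from this series):

static inline struct pci_dev *pci_upstream_bridge(struct pci_dev *dev)
{
	dev = pci_physfn(dev);		/* a VF resolves to its PF */
	if (pci_is_root_bus(dev->bus))
		return NULL;		/* root bus has no upstream bridge */

	return dev->bus->self;
}
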
@ -1172,6 +1170,7 @@ static void pci_enable_bridge(struct pci_dev *dev)
static int pci_enable_device_flags(struct pci_dev *dev, unsigned long flags)
{
struct pci_dev *bridge;
int err;
int i, bars = 0;
@ -1190,7 +1189,9 @@ static int pci_enable_device_flags(struct pci_dev *dev, unsigned long flags)
if (atomic_inc_return(&dev->enable_cnt) > 1)
return 0; /* already enabled */
pci_enable_bridge(dev->bus->self);
bridge = pci_upstream_bridge(dev);
if (bridge)
pci_enable_bridge(bridge);
/* only skip sriov related */
for (i = 0; i <= PCI_ROM_RESOURCE; i++)
@ -1644,8 +1645,10 @@ void pci_pme_active(struct pci_dev *dev, bool enable)
if (enable) {
pme_dev = kmalloc(sizeof(struct pci_pme_device),
GFP_KERNEL);
if (!pme_dev)
goto out;
if (!pme_dev) {
dev_warn(&dev->dev, "can't enable PME#\n");
return;
}
pme_dev->dev = dev;
mutex_lock(&pci_pme_list_mutex);
list_add(&pme_dev->list, &pci_pme_list);
@ -1666,7 +1669,6 @@ void pci_pme_active(struct pci_dev *dev, bool enable)
}
}
out:
dev_dbg(&dev->dev, "PME# %s\n", enable ? "enabled" : "disabled");
}
@ -2860,7 +2862,7 @@ void __weak pcibios_set_master(struct pci_dev *dev)
lat = pcibios_max_latency;
else
return;
dev_printk(KERN_DEBUG, &dev->dev, "setting latency timer to %d\n", lat);
pci_write_config_byte(dev, PCI_LATENCY_TIMER, lat);
}
@ -3978,6 +3980,7 @@ int pcie_get_mps(struct pci_dev *dev)
return 128 << ((ctl & PCI_EXP_DEVCTL_PAYLOAD) >> 5);
}
EXPORT_SYMBOL(pcie_get_mps);
/**
* pcie_set_mps - set PCI Express maximum payload size
@ -4002,6 +4005,7 @@ int pcie_set_mps(struct pci_dev *dev, int mps)
return pcie_capability_clear_and_set_word(dev, PCI_EXP_DEVCTL,
PCI_EXP_DEVCTL_PAYLOAD, v);
}
EXPORT_SYMBOL(pcie_set_mps);
/**
* pcie_get_minimum_link - determine minimum link settings of a PCI device


@ -153,7 +153,7 @@ static inline int pci_no_d1d2(struct pci_dev *dev)
return (dev->no_d1d2 || parent_dstates);
}
extern struct device_attribute pci_dev_attrs[];
extern const struct attribute_group *pci_dev_groups[];
extern const struct attribute_group *pcibus_groups[];
extern struct device_type pci_dev_type;
extern const struct attribute_group *pci_bus_groups[];


@ -260,13 +260,14 @@ static int get_port_device_capability(struct pci_dev *dev)
if (pcie_ports_disabled)
return 0;
err = pcie_port_platform_notify(dev, &cap_mask);
if (!pcie_ports_auto) {
cap_mask = PCIE_PORT_SERVICE_PME | PCIE_PORT_SERVICE_HP
| PCIE_PORT_SERVICE_VC;
if (pci_aer_available())
cap_mask |= PCIE_PORT_SERVICE_AER;
} else if (err) {
cap_mask = PCIE_PORT_SERVICE_PME | PCIE_PORT_SERVICE_HP
| PCIE_PORT_SERVICE_VC;
if (pci_aer_available())
cap_mask |= PCIE_PORT_SERVICE_AER;
if (pcie_ports_auto) {
err = pcie_port_platform_notify(dev, &cap_mask);
if (err)
return 0;
}


@ -641,8 +641,7 @@ static void pci_set_bus_speed(struct pci_bus *bus)
return;
}
pos = pci_find_capability(bridge, PCI_CAP_ID_EXP);
if (pos) {
if (pci_is_pcie(bridge)) {
u32 linkcap;
u16 linksta;
@ -984,7 +983,6 @@ void set_pcie_port_type(struct pci_dev *pdev)
pos = pci_find_capability(pdev, PCI_CAP_ID_EXP);
if (!pos)
return;
pdev->is_pcie = 1;
pdev->pcie_cap = pos;
pci_read_config_word(pdev, pos + PCI_EXP_FLAGS, &reg16);
pdev->pcie_flags_reg = reg16;


@ -2954,6 +2954,29 @@ static void disable_igfx_irq(struct pci_dev *dev)
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0102, disable_igfx_irq);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x010a, disable_igfx_irq);
/*
* PCI devices which are on Intel chips can skip the 10ms delay
* before entering D3 mode.
*/
static void quirk_remove_d3_delay(struct pci_dev *dev)
{
dev->d3_delay = 0;
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0c00, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0412, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0c0c, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c31, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c3a, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c3d, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c2d, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c20, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c18, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c1c, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c26, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c4e, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c02, quirk_remove_d3_delay);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c22, quirk_remove_d3_delay);
/*
* Some devices may pass our check in pci_intx_mask_supported if
* PCI_COMMAND_INTX_DISABLE works though they actually do not properly


@ -982,7 +982,7 @@ static int pbus_size_mem(struct pci_bus *bus, unsigned long mask,
}
min_align = calculate_mem_align(aligns, max_order);
min_align = max(min_align, window_alignment(bus, b_res->flags & mask));
min_align = max(min_align, window_alignment(bus, b_res->flags));
size0 = calculate_memsize(size, min_size, 0, resource_size(b_res), min_align);
if (children_add_size > add_size)
add_size = children_add_size;
@ -1136,7 +1136,7 @@ void __ref __pci_bus_size_bridges(struct pci_bus *bus,
}
/* The root bus? */
if (!bus->self)
if (pci_is_root_bus(bus))
return;
switch (bus->self->class >> 8) {


@ -766,49 +766,20 @@ bfad_pci_init(struct pci_dev *pdev, struct bfad_s *bfad)
bfad->pcidev = pdev;
/* Adjust PCIe Maximum Read Request Size */
if (pcie_max_read_reqsz > 0) {
int pcie_cap_reg;
u16 pcie_dev_ctl;
u16 mask = 0xffff;
switch (pcie_max_read_reqsz) {
case 128:
mask = 0x0;
break;
case 256:
mask = 0x1000;
break;
case 512:
mask = 0x2000;
break;
case 1024:
mask = 0x3000;
break;
case 2048:
mask = 0x4000;
break;
case 4096:
mask = 0x5000;
break;
default:
break;
}
pcie_cap_reg = pci_find_capability(pdev, PCI_CAP_ID_EXP);
if (mask != 0xffff && pcie_cap_reg) {
pcie_cap_reg += 0x08;
pci_read_config_word(pdev, pcie_cap_reg, &pcie_dev_ctl);
if ((pcie_dev_ctl & 0x7000) != mask) {
printk(KERN_WARNING "BFA[%s]: "
if (pci_is_pcie(pdev) && pcie_max_read_reqsz) {
if (pcie_max_read_reqsz >= 128 &&
pcie_max_read_reqsz <= 4096 &&
is_power_of_2(pcie_max_read_reqsz)) {
int max_rq = pcie_get_readrq(pdev);
printk(KERN_WARNING "BFA[%s]: "
"pcie_max_read_request_size is %d, "
"reset to %d\n", bfad->pci_name,
(1 << ((pcie_dev_ctl & 0x7000) >> 12)) << 7,
"reset to %d\n", bfad->pci_name, max_rq,
pcie_max_read_reqsz);
pcie_dev_ctl &= ~0x7000;
pci_write_config_word(pdev, pcie_cap_reg,
pcie_dev_ctl | mask);
}
pcie_set_readrq(pdev, pcie_max_read_reqsz);
} else {
printk(KERN_WARNING "BFA[%s]: invalid "
"pcie_max_read_request_size %d ignored\n",
bfad->pci_name, pcie_max_read_reqsz);
}
}
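
The replacement code leans on pcie_get_readrq()/pcie_set_readrq() instead of open-coding the Device Control register. The encoding those helpers hide is the usual power-of-two one: the READRQ field holds log2(size) - 7, so 128 bytes encodes as 0 and 4096 as 5. A standalone illustration of that arithmetic (not driver code):

/* illustration of the Max_Read_Request_Size field encoding */
static unsigned int readrq_bytes_to_field(unsigned int bytes)
{
	/* valid inputs: 128, 256, 512, 1024, 2048, 4096 */
	return ffs(bytes) - 8;		/* 128 -> 0, 2048 -> 4, 4096 -> 5 */
}

static unsigned int readrq_field_to_bytes(unsigned int field)
{
	return 128U << field;		/* inverse mapping */
}
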


@ -852,22 +852,6 @@ csio_hw_get_flash_params(struct csio_hw *hw)
return 0;
}
static void
csio_set_pcie_completion_timeout(struct csio_hw *hw, u8 range)
{
uint16_t val;
int pcie_cap;
if (!csio_pci_capability(hw->pdev, PCI_CAP_ID_EXP, &pcie_cap)) {
pci_read_config_word(hw->pdev,
pcie_cap + PCI_EXP_DEVCTL2, &val);
val &= 0xfff0;
val |= range ;
pci_write_config_word(hw->pdev,
pcie_cap + PCI_EXP_DEVCTL2, val);
}
}
/*****************************************************************************/
/* HW State machine assists */
/*****************************************************************************/
@ -2069,8 +2053,10 @@ csio_hw_configure(struct csio_hw *hw)
goto out;
}
/* Set pci completion timeout value to 4 seconds. */
csio_set_pcie_completion_timeout(hw, 0xd);
/* Set PCIe completion timeout to 4 seconds */
if (pci_is_pcie(hw->pdev))
pcie_capability_clear_and_set_word(hw->pdev, PCI_EXP_DEVCTL2,
PCI_EXP_DEVCTL2_COMP_TIMEOUT, 0xd);
hw->chip_ops->chip_set_mem_win(hw, MEMWIN_CSIOSTOR);


@ -507,7 +507,7 @@ qlafx00_pci_config(scsi_qla_host_t *vha)
pci_write_config_word(ha->pdev, PCI_COMMAND, w);
/* PCIe -- adjust Maximum Read Request Size (2048). */
if (pci_find_capability(ha->pdev, PCI_CAP_ID_EXP))
if (pci_is_pcie(ha->pdev))
pcie_set_readrq(ha->pdev, 2048);
ha->chip_revision = ha->pdev->revision;
@ -660,10 +660,8 @@ char *
qlafx00_pci_info_str(struct scsi_qla_host *vha, char *str)
{
struct qla_hw_data *ha = vha->hw;
int pcie_reg;
pcie_reg = pci_find_capability(ha->pdev, PCI_CAP_ID_EXP);
if (pcie_reg) {
if (pci_is_pcie(ha->pdev)) {
strcpy(str, "PCIe iSA");
return str;
}


@ -494,18 +494,14 @@ qla24xx_pci_info_str(struct scsi_qla_host *vha, char *str)
static char *pci_bus_modes[] = { "33", "66", "100", "133", };
struct qla_hw_data *ha = vha->hw;
uint32_t pci_bus;
int pcie_reg;
pcie_reg = pci_pcie_cap(ha->pdev);
if (pcie_reg) {
if (pci_is_pcie(ha->pdev)) {
char lwstr[6];
uint16_t pcie_lstat, lspeed, lwidth;
uint32_t lstat, lspeed, lwidth;
pcie_reg += PCI_EXP_LNKCAP;
pci_read_config_word(ha->pdev, pcie_reg, &pcie_lstat);
lspeed = pcie_lstat & (BIT_0 | BIT_1 | BIT_2 | BIT_3);
lwidth = (pcie_lstat &
(BIT_4 | BIT_5 | BIT_6 | BIT_7 | BIT_8 | BIT_9)) >> 4;
pcie_capability_read_dword(ha->pdev, PCI_EXP_LNKCAP, &lstat);
lspeed = lstat & PCI_EXP_LNKCAP_SLS;
lwidth = (lstat & PCI_EXP_LNKCAP_MLW) >> 4;
strcpy(str, "PCIe (");
switch (lspeed) {

Просмотреть файл

@@ -3601,17 +3601,10 @@ static int et131x_pci_init(struct et131x_adapter *adapter,
 		goto err_out;
 	}
 
-	/* Let's set up the PORT LOGIC Register.  First we need to know what
-	 * the max_payload_size is
-	 */
-	if (pcie_capability_read_word(pdev, PCI_EXP_DEVCAP, &max_payload)) {
-		dev_err(&pdev->dev,
-		    "Could not read PCI config space for Max Payload Size\n");
-		goto err_out;
-	}
+	/* Let's set up the PORT LOGIC Register. */
 
 	/* Program the Ack/Nak latency and replay timers */
-	max_payload &= 0x07;
+	max_payload = pdev->pcie_mpss;
 
 	if (max_payload < 2) {
 		static const u16 acknak[2] = { 0x76, 0xD0 };
@@ -3641,8 +3634,7 @@ static int et131x_pci_init(struct et131x_adapter *adapter,
 	}
 
 	/* Change the max read size to 2k */
-	if (pcie_capability_clear_and_set_word(pdev, PCI_EXP_DEVCTL,
-				       PCI_EXP_DEVCTL_READRQ, 0x4 << 12)) {
+	if (pcie_set_readrq(pdev, 2048)) {
 		dev_err(&pdev->dev,
 			"Couldn't change PCI config space for Max read size\n");
 		goto err_out;
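
pdev->pcie_mpss caches the Max Payload Size Supported field from DEVCAP, encoded as a power of two (payload = 128 << pcie_mpss bytes); drivers that want byte values can use the MPS/MRRS helpers instead. A short sketch under those assumptions, with an illustrative helper name:

#include <linux/pci.h>

/* Sketch: inspect payload capabilities and cap MRRS at 2 KB. */
static void tune_payload(struct pci_dev *pdev)
{
	dev_info(&pdev->dev, "MPSS %u bytes, MPS %d, MRRS %d\n",
		 128U << pdev->pcie_mpss, pcie_get_mps(pdev),
		 pcie_get_readrq(pdev));

	pcie_set_readrq(pdev, 2048);
}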

@@ -294,59 +294,52 @@ void __init acpi_nvs_nosave_s3(void);
 #endif /* CONFIG_PM_SLEEP */
 
 struct acpi_osc_context {
-	char *uuid_str;			/* uuid string */
+	char *uuid_str;			/* UUID string */
 	int rev;
-	struct acpi_buffer cap;	/* arg2/arg3 */
-	struct acpi_buffer ret;	/* free by caller if success */
+	struct acpi_buffer cap;		/* list of DWORD capabilities */
+	struct acpi_buffer ret;		/* free by caller if success */
 };
 
-#define OSC_QUERY_TYPE			0
-#define OSC_SUPPORT_TYPE		1
-#define OSC_CONTROL_TYPE		2
-
-/* _OSC DW0 Definition */
-#define OSC_QUERY_ENABLE		1
-#define OSC_REQUEST_ERROR		2
-#define OSC_INVALID_UUID_ERROR		4
-#define OSC_INVALID_REVISION_ERROR	8
-#define OSC_CAPABILITIES_MASK_ERROR	16
-
 acpi_status acpi_str_to_uuid(char *str, u8 *uuid);
 acpi_status acpi_run_osc(acpi_handle handle, struct acpi_osc_context *context);
 
-/* platform-wide _OSC bits */
-#define OSC_SB_PAD_SUPPORT		1
-#define OSC_SB_PPC_OST_SUPPORT		2
-#define OSC_SB_PR3_SUPPORT		4
-#define OSC_SB_HOTPLUG_OST_SUPPORT	8
-#define OSC_SB_APEI_SUPPORT		16
+/* Indexes into _OSC Capabilities Buffer (DWORDs 2 & 3 are device-specific) */
+#define OSC_QUERY_DWORD				0	/* DWORD 1 */
+#define OSC_SUPPORT_DWORD			1	/* DWORD 2 */
+#define OSC_CONTROL_DWORD			2	/* DWORD 3 */
+
+/* _OSC Capabilities DWORD 1: Query/Control and Error Returns (generic) */
+#define OSC_QUERY_ENABLE			0x00000001  /* input */
+#define OSC_REQUEST_ERROR			0x00000002  /* return */
+#define OSC_INVALID_UUID_ERROR			0x00000004  /* return */
+#define OSC_INVALID_REVISION_ERROR		0x00000008  /* return */
+#define OSC_CAPABILITIES_MASK_ERROR		0x00000010  /* return */
+
+/* Platform-Wide Capabilities _OSC: Capabilities DWORD 2: Support Field */
+#define OSC_SB_PAD_SUPPORT			0x00000001
+#define OSC_SB_PPC_OST_SUPPORT			0x00000002
+#define OSC_SB_PR3_SUPPORT			0x00000004
+#define OSC_SB_HOTPLUG_OST_SUPPORT		0x00000008
+#define OSC_SB_APEI_SUPPORT			0x00000010
+#define OSC_SB_CPC_SUPPORT			0x00000020
 
 extern bool osc_sb_apei_support_acked;
 
-/* PCI defined _OSC bits */
-/* _OSC DW1 Definition (OS Support Fields) */
-#define OSC_EXT_PCI_CONFIG_SUPPORT		1
-#define OSC_ACTIVE_STATE_PWR_SUPPORT		2
-#define OSC_CLOCK_PWR_CAPABILITY_SUPPORT	4
-#define OSC_PCI_SEGMENT_GROUPS_SUPPORT		8
-#define OSC_MSI_SUPPORT				16
-#define OSC_PCI_SUPPORT_MASKS			0x1f
+/* PCI Host Bridge _OSC: Capabilities DWORD 2: Support Field */
+#define OSC_PCI_EXT_CONFIG_SUPPORT		0x00000001
+#define OSC_PCI_ASPM_SUPPORT			0x00000002
+#define OSC_PCI_CLOCK_PM_SUPPORT		0x00000004
+#define OSC_PCI_SEGMENT_GROUPS_SUPPORT		0x00000008
+#define OSC_PCI_MSI_SUPPORT			0x00000010
+#define OSC_PCI_SUPPORT_MASKS			0x0000001f
 
-/* _OSC DW1 Definition (OS Control Fields) */
-#define OSC_PCI_EXPRESS_NATIVE_HP_CONTROL	1
-#define OSC_SHPC_NATIVE_HP_CONTROL		2
-#define OSC_PCI_EXPRESS_PME_CONTROL		4
-#define OSC_PCI_EXPRESS_AER_CONTROL		8
-#define OSC_PCI_EXPRESS_CAP_STRUCTURE_CONTROL	16
-
-#define OSC_PCI_CONTROL_MASKS	(OSC_PCI_EXPRESS_NATIVE_HP_CONTROL |	\
-				 OSC_SHPC_NATIVE_HP_CONTROL |		\
-				 OSC_PCI_EXPRESS_PME_CONTROL |		\
-				 OSC_PCI_EXPRESS_AER_CONTROL |		\
-				 OSC_PCI_EXPRESS_CAP_STRUCTURE_CONTROL)
-
-#define OSC_PCI_NATIVE_HOTPLUG	(OSC_PCI_EXPRESS_NATIVE_HP_CONTROL |	\
-				 OSC_SHPC_NATIVE_HP_CONTROL)
+/* PCI Host Bridge _OSC: Capabilities DWORD 3: Control Field */
+#define OSC_PCI_EXPRESS_NATIVE_HP_CONTROL	0x00000001
+#define OSC_PCI_SHPC_NATIVE_HP_CONTROL		0x00000002
+#define OSC_PCI_EXPRESS_PME_CONTROL		0x00000004
+#define OSC_PCI_EXPRESS_AER_CONTROL		0x00000008
+#define OSC_PCI_EXPRESS_CAPABILITY_CONTROL	0x00000010
+#define OSC_PCI_CONTROL_MASKS			0x0000001f
 
 extern acpi_status acpi_pci_osc_control_set(acpi_handle handle,
 					     u32 *mask, u32 req);
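
With the renamed constants, the _OSC capabilities buffer is indexed by DWORD position rather than by "type". A hedged sketch of how a caller might fill it; the particular support and control bits chosen here are only an example:

#include <linux/acpi.h>

/* Sketch: build a three-DWORD _OSC capabilities buffer. */
static void fill_osc_capbuf(u32 capbuf[3])
{
	capbuf[OSC_QUERY_DWORD] = OSC_QUERY_ENABLE;
	capbuf[OSC_SUPPORT_DWORD] = OSC_PCI_EXT_CONFIG_SUPPORT |
				    OSC_PCI_MSI_SUPPORT;
	capbuf[OSC_CONTROL_DWORD] = OSC_PCI_EXPRESS_NATIVE_HP_CONTROL |
				    OSC_PCI_EXPRESS_PME_CONTROL;
}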

@@ -241,6 +241,12 @@
 #define IMX6Q_GPR5_L2_CLK_STOP			BIT(8)
 
+#define IMX6Q_GPR8_TX_SWING_LOW			(0x7f << 25)
+#define IMX6Q_GPR8_TX_SWING_FULL		(0x7f << 18)
+#define IMX6Q_GPR8_TX_DEEMPH_GEN2_6DB		(0x3f << 12)
+#define IMX6Q_GPR8_TX_DEEMPH_GEN2_3P5DB		(0x3f << 6)
+#define IMX6Q_GPR8_TX_DEEMPH_GEN1		(0x3f << 0)
+
 #define IMX6Q_GPR9_TZASC2_BYP			BIT(1)
 #define IMX6Q_GPR9_TZASC1_BYP			BIT(0)
@@ -273,7 +279,9 @@
 #define IMX6Q_GPR12_ARMP_AHB_CLK_EN		BIT(26)
 #define IMX6Q_GPR12_ARMP_ATB_CLK_EN		BIT(25)
 #define IMX6Q_GPR12_ARMP_APB_CLK_EN		BIT(24)
+#define IMX6Q_GPR12_DEVICE_TYPE			(0xf << 12)
+#define IMX6Q_GPR12_PCIE_CTL_2			BIT(10)
 #define IMX6Q_GPR12_LOS_LEVEL			(0x1f << 4)
 #define IMX6Q_GPR13_SDMA_STOP_REQ		BIT(30)
 #define IMX6Q_GPR13_CAN2_STOP_REQ		BIT(29)
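
These GPR bits are written through the i.MX6 iomuxc-gpr syscon regmap. A sketch of how a PCIe driver might use the new fields to select Root Complex mode; the GPR12 register offset macro and the exact write sequence are assumptions for illustration:

#include <linux/mfd/syscon.h>
#include <linux/mfd/syscon/imx6q-iomuxc-gpr.h>
#include <linux/pci_regs.h>
#include <linux/regmap.h>

/* Sketch: program the new GPR12 PCIe fields via the syscon regmap. */
static int imx6_pcie_gpr_sketch(struct regmap *gpr)
{
	/* Put the controller into Root Complex mode. */
	regmap_update_bits(gpr, IMX6Q_GPR12, IMX6Q_GPR12_DEVICE_TYPE,
			   PCI_EXP_TYPE_ROOT_PORT << 12);

	/* Release the controller via PCIE_CTL_2. */
	return regmap_update_bits(gpr, IMX6Q_GPR12, IMX6Q_GPR12_PCIE_CTL_2,
				  IMX6Q_GPR12_PCIE_CTL_2);
}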

@@ -64,6 +64,8 @@ void arch_restore_msi_irqs(struct pci_dev *dev, int irq);
 void default_teardown_msi_irqs(struct pci_dev *dev);
 void default_restore_msi_irqs(struct pci_dev *dev, int irq);
+u32 default_msi_mask_irq(struct msi_desc *desc, u32 mask, u32 flag);
+u32 default_msix_mask_irq(struct msi_desc *desc, u32 flag);
 
 struct msi_chip {
 	struct module *owner;
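
Exporting default_msi_mask_irq()/default_msix_mask_irq() lets an architecture substitute its own mask handling. A sketch of the kind of no-op override a hypervisor-managed MSI setup might install; the stub names are invented, only the prototypes come from the header above:

#include <linux/msi.h>

/* Sketch: mask hooks that deliberately do nothing. */
static u32 nop_msi_mask_irq(struct msi_desc *desc, u32 mask, u32 flag)
{
	return 0;	/* report "no bits changed" */
}

static u32 nop_msix_mask_irq(struct msi_desc *desc, u32 flag)
{
	return 0;
}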

@@ -330,8 +330,6 @@ struct pci_dev {
 	unsigned int	msix_enabled:1;
 	unsigned int	ari_enabled:1;	/* ARI forwarding */
 	unsigned int	is_managed:1;
-	unsigned int	is_pcie:1;	/* Obsolete. Will be removed.
-					   Use pci_is_pcie() instead */
 	unsigned int	needs_freset:1;	/* Dev requires fundamental reset */
 	unsigned int	state_saved:1;
 	unsigned int	is_physfn:1;
@@ -472,12 +470,25 @@ struct pci_bus {
 /*
  * Returns true if the pci bus is root (behind host-pci bridge),
  * false otherwise
+ *
+ * Some code assumes that "bus->self == NULL" means that bus is a root bus.
+ * This is incorrect because "virtual" buses added for SR-IOV (via
+ * virtfn_add_bus()) have "bus->self == NULL" but are not root buses.
  */
 static inline bool pci_is_root_bus(struct pci_bus *pbus)
 {
 	return !(pbus->parent);
 }
 
+static inline struct pci_dev *pci_upstream_bridge(struct pci_dev *dev)
+{
+	dev = pci_physfn(dev);
+	if (pci_is_root_bus(dev->bus))
+		return NULL;
+
+	return dev->bus->self;
+}
+
 #ifdef CONFIG_PCI_MSI
 static inline bool pci_dev_msi_enabled(struct pci_dev *pci_dev)
 {
@@ -1749,11 +1760,11 @@ static inline int pci_pcie_cap(struct pci_dev *dev)
  * pci_is_pcie - check if the PCI device is PCI Express capable
  * @dev: PCI device
  *
- * Retrun true if the PCI device is PCI Express capable, false otherwise.
+ * Returns: true if the PCI device is PCI Express capable, false otherwise.
  */
 static inline bool pci_is_pcie(struct pci_dev *dev)
 {
-	return !!pci_pcie_cap(dev);
+	return pci_pcie_cap(dev);
 }
 
 /**
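
pci_upstream_bridge() first maps a VF back to its PF, so the walk also works for functions on SR-IOV virtual buses. A small sketch that climbs to the topmost bridge; the helper name is illustrative:

#include <linux/pci.h>

/* Sketch: return the highest upstream bridge, or NULL on a root bus. */
static struct pci_dev *topmost_bridge(struct pci_dev *dev)
{
	struct pci_dev *bridge = pci_upstream_bridge(dev);

	while (bridge && pci_upstream_bridge(bridge))
		bridge = pci_upstream_bridge(bridge);

	return bridge;
}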

@@ -319,7 +319,6 @@
 #define PCI_MSIX_PBA		8	/* Pending Bit Array offset */
 #define  PCI_MSIX_PBA_BIR	0x00000007 /* BAR index */
 #define  PCI_MSIX_PBA_OFFSET	0xfffffff8 /* Offset into specified BAR */
-#define PCI_MSIX_FLAGS_BIRMASK	(7 << 0)	/* deprecated */
 #define PCI_CAP_MSIX_SIZEOF	12	/* size of MSIX registers */
 
 /* MSI-X entry's format */
@@ -558,7 +557,8 @@
 #define  PCI_EXP_DEVCAP2_OBFF_MSG	0x00040000 /* New message signaling */
 #define  PCI_EXP_DEVCAP2_OBFF_WAKE	0x00080000 /* Re-use WAKE# for OBFF */
 #define PCI_EXP_DEVCTL2		40	/* Device Control 2 */
-#define  PCI_EXP_DEVCTL2_ARI		0x20	/* Alternative Routing-ID */
+#define  PCI_EXP_DEVCTL2_COMP_TIMEOUT	0x000f	/* Completion Timeout Value */
+#define  PCI_EXP_DEVCTL2_ARI		0x0020	/* Alternative Routing-ID */
 #define  PCI_EXP_DEVCTL2_IDO_REQ_EN	0x0100	/* Allow IDO for requests */
 #define  PCI_EXP_DEVCTL2_IDO_CMP_EN	0x0200	/* Allow IDO for completions */
 #define  PCI_EXP_DEVCTL2_LTR_EN		0x0400	/* Enable LTR mechanism */
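
With PCI_EXP_DEVCTL2_COMP_TIMEOUT defined, the Completion Timeout Value can be read back symbolically instead of with magic masks. A minimal sketch; the helper name is illustrative:

#include <linux/pci.h>

/* Sketch: return the raw Completion Timeout Value field from DEVCTL2. */
static u16 read_comp_timeout(struct pci_dev *pdev)
{
	u16 devctl2 = 0;

	pcie_capability_read_word(pdev, PCI_EXP_DEVCTL2, &devctl2);
	return devctl2 & PCI_EXP_DEVCTL2_COMP_TIMEOUT;
}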