Merge branches 'dma-api', 'pci/virtualization', 'pci/msi', 'pci/misc' and 'pci/resource' into next
* dma-api:
  iommu/exynos: Remove unnecessary "&" from function pointers
  DMA-API: Update dma_pool_create() and dma_pool_alloc() descriptions
  DMA-API: Fix duplicated word in DMA-API-HOWTO.txt
  DMA-API: Capitalize "CPU" consistently
  sh/PCI: Pass GAPSPCI_DMA_BASE CPU & bus address to dma_declare_coherent_memory()
  DMA-API: Change dma_declare_coherent_memory() CPU address to phys_addr_t
  DMA-API: Clarify physical/bus address distinction

* pci/virtualization:
  PCI: Mark RTL8110SC INTx masking as broken

* pci/msi:
  PCI/MSI: Remove pci_enable_msi_block()

* pci/misc:
  PCI: Remove pcibios_add_platform_entries()
  s390/pci: use pdev->dev.groups for attribute creation
  PCI: Move Open Firmware devspec attribute to PCI common code

* pci/resource:
  PCI: Add resource allocation comments
  PCI: Simplify __pci_assign_resource() coding style
  PCI: Change pbus_size_mem() return values to be more conventional
  PCI: Restrict 64-bit prefetchable bridge windows to 64-bit resources
  PCI: Support BAR sizes up to 8GB
  resources: Clarify sanity check message
  PCI: Don't add disabled subtractive decode bus resources
  PCI: Don't print anything while decoding is disabled
  PCI: Don't set BAR to zero if dma_addr_t is too small
  PCI: Don't convert BAR address to resource if dma_addr_t is too small
  PCI: Reject BAR above 4GB if dma_addr_t is too small
  PCI: Fail safely if we can't handle BARs larger than 4GB
  x86/gart: Tidy messages and add bridge device info
  x86/gart: Replace printk() with pr_info()
  x86/PCI: Move pcibios_assign_resources() annotation to definition
  x86/PCI: Mark ATI SBx00 HPET BAR as IORESOURCE_PCI_FIXED
  x86/PCI: Don't try to move IORESOURCE_PCI_FIXED resources
  x86/PCI: Fix Broadcom CNB20LE unintended sign extension
Commit e5558d1a51
@@ -9,16 +9,76 @@ This is a guide to device driver writers on how to use the DMA API
with example pseudo-code. For a concise description of the API, see
DMA-API.txt.

Most of the 64bit platforms have special hardware that translates bus
addresses (DMA addresses) into physical addresses. This is similar to
how page tables and/or a TLB translates virtual addresses to physical
addresses on a CPU. This is needed so that e.g. PCI devices can
access with a Single Address Cycle (32bit DMA address) any page in the
64bit physical address space. Previously in Linux those 64bit
platforms had to set artificial limits on the maximum RAM size in the
system, so that the virt_to_bus() static scheme works (the DMA address
translation tables were simply filled on bootup to map each bus
address to the physical page __pa(bus_to_virt())).

CPU and DMA addresses

There are several kinds of addresses involved in the DMA API, and it's
important to understand the differences.

The kernel normally uses virtual addresses. Any address returned by
kmalloc(), vmalloc(), and similar interfaces is a virtual address and can
be stored in a "void *".

The virtual memory system (TLB, page tables, etc.) translates virtual
addresses to CPU physical addresses, which are stored as "phys_addr_t" or
"resource_size_t". The kernel manages device resources like registers as
physical addresses. These are the addresses in /proc/iomem. The physical
address is not directly useful to a driver; it must use ioremap() to map
the space and produce a virtual address.

I/O devices use a third kind of address: a "bus address" or "DMA address".
If a device has registers at an MMIO address, or if it performs DMA to read
or write system memory, the addresses used by the device are bus addresses.
In some systems, bus addresses are identical to CPU physical addresses, but
in general they are not. IOMMUs and host bridges can produce arbitrary
mappings between physical and bus addresses.

Here's a picture and some examples:

               CPU                  CPU                  Bus
             Virtual              Physical             Address
             Address              Address               Space
              Space                Space

            +-------+             +------+             +------+
            |       |             |MMIO  |   Offset    |      |
            |       |  Virtual    |Space |   applied   |      |
          C +-------+ --------> B +------+ ----------> +------+ A
            |       |  mapping    |      |   by host   |      |
  +-----+   |       |             |      |   bridge    |      |   +--------+
  |     |   |       |             +------+             |      |   |        |
  | CPU |   |       |             | RAM  |             |      |   | Device |
  |     |   |       |             |      |             |      |   |        |
  +-----+   +-------+             +------+             +------+   +--------+
            |       |  Virtual    |Buffer|   Mapping   |      |
          X +-------+ --------> Y +------+ <---------- +------+ Z
            |       |  mapping    | RAM  |   by IOMMU
            |       |             |      |
            |       |             |      |
            +-------+             +------+

During the enumeration process, the kernel learns about I/O devices and
their MMIO space and the host bridges that connect them to the system. For
example, if a PCI device has a BAR, the kernel reads the bus address (A)
from the BAR and converts it to a CPU physical address (B). The address B
is stored in a struct resource and usually exposed via /proc/iomem. When a
driver claims a device, it typically uses ioremap() to map physical address
B at a virtual address (C). It can then use, e.g., ioread32(C), to access
the device registers at bus address A.

If the device supports DMA, the driver sets up a buffer using kmalloc() or
a similar interface, which returns a virtual address (X). The virtual
memory system maps X to a physical address (Y) in system RAM. The driver
can use virtual address X to access the buffer, but the device itself
cannot because DMA doesn't go through the CPU virtual memory system.

In some simple systems, the device can do DMA directly to physical address
Y. But in many others, there is IOMMU hardware that translates bus
addresses to physical addresses, e.g., it translates Z to Y. This is part
of the reason for the DMA API: the driver can give a virtual address X to
an interface like dma_map_single(), which sets up any required IOMMU
mapping and returns the bus address Z. The driver then tells the device to
do DMA to Z, and the IOMMU maps it to the buffer at address Y in system
RAM.

So that Linux can use the dynamic DMA mapping, it needs some help from the
drivers, namely it has to take into account that DMA addresses should be
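
As a rough, hypothetical sketch tying the labels in the picture to real
interfaces (the BAR number, buffer size, and the 0x10 register offset are
invented purely for illustration):

        void __iomem *regs;     /* C: CPU virtual address of the MMIO space */
        void *buf;              /* X: CPU virtual address of a RAM buffer   */
        dma_addr_t dma_handle;  /* Z: bus address the device will DMA to    */

        regs = pci_iomap(pdev, 0, 0);           /* BAR 0: bus address A, CPU physical B */
        buf = kmalloc(4096, GFP_KERNEL);        /* backed by RAM at physical address Y  */

        dma_handle = dma_map_single(&pdev->dev, buf, 4096, DMA_FROM_DEVICE);
        if (dma_mapping_error(&pdev->dev, dma_handle))
                goto err;

        /* program the (made-up) buffer-address register with Z */
        iowrite32(lower_32_bits(dma_handle), regs + 0x10);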
@@ -29,17 +89,17 @@ The following API will work of course even on platforms where no such
hardware exists.

Note that the DMA API works with any bus independent of the underlying
microprocessor architecture. You should use the DMA API rather than
the bus specific DMA API (e.g. pci_dma_*).
microprocessor architecture. You should use the DMA API rather than the
bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
pci_map_*() interfaces.

First of all, you should make sure

        #include <linux/dma-mapping.h>

is in your driver. This file will obtain for you the definition of the
dma_addr_t (which can hold any valid DMA address for the platform)
type which should be used everywhere you hold a DMA (bus) address
returned from the DMA mapping functions.
is in your driver, which provides the definition of dma_addr_t. This type
can hold any valid DMA or bus address for the platform and should be used
everywhere you hold a DMA address returned from the DMA mapping functions.

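As a minimal illustration (the structure and field names below are made up,
not part of the API), a driver usually keeps the dma_addr_t returned by the
mapping functions alongside the CPU pointer it maps:

        #include <linux/dma-mapping.h>

        struct mydev_buf {                      /* hypothetical bookkeeping */
                void            *cpu_addr;      /* what the CPU dereferences */
                dma_addr_t      dma_addr;       /* what the device is programmed with */
                size_t          size;
        };
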
What memory is DMA'able?

@@ -123,9 +183,9 @@ Here, dev is a pointer to the device struct of your device, and mask
is a bit mask describing which bits of an address your device
supports. It returns zero if your card can perform DMA properly on
the machine given the address mask you provided. In general, the
device struct of your device is embedded in the bus specific device
struct of your device. For example, a pointer to the device struct of
your PCI device is pdev->dev (pdev is a pointer to the PCI device
device struct of your device is embedded in the bus-specific device
struct of your device. For example, &pdev->dev is a pointer to the
device struct of a PCI device (pdev is a pointer to the PCI device
struct of your device).

If it returns non-zero, your device cannot perform DMA properly on
@@ -147,8 +207,7 @@ exactly why.
The standard 32-bit addressing device would do something like this:

        if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
                printk(KERN_WARNING
                       "mydev: No suitable DMA available.\n");
                dev_warn(dev, "mydev: No suitable DMA available\n");
                goto ignore_this_device;
        }

@@ -170,8 +229,7 @@ all 64-bits when accessing streaming DMA:
        } else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
                using_dac = 0;
        } else {
                printk(KERN_WARNING
                       "mydev: No suitable DMA available.\n");
                dev_warn(dev, "mydev: No suitable DMA available\n");
                goto ignore_this_device;
        }

@@ -187,22 +245,20 @@ the case would look like this:
                using_dac = 0;
                consistent_using_dac = 0;
        } else {
                printk(KERN_WARNING
                       "mydev: No suitable DMA available.\n");
                dev_warn(dev, "mydev: No suitable DMA available\n");
                goto ignore_this_device;
        }

The coherent coherent mask will always be able to set the same or a
smaller mask as the streaming mask. However for the rare case that a
device driver only uses consistent allocations, one would have to
check the return value from dma_set_coherent_mask().
The coherent mask will always be able to set the same or a smaller mask as
the streaming mask. However for the rare case that a device driver only
uses consistent allocations, one would have to check the return value from
dma_set_coherent_mask().

Finally, if your device can only drive the low 24-bits of
address you might do something like:

        if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
                printk(KERN_WARNING
                       "mydev: 24-bit DMA addressing not available.\n");
                dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
                goto ignore_this_device;
        }

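Pulling the fragments above together, a probe routine commonly tries the
widest mask first and falls back. This is only a sketch under assumed
conventions (the device name, error code, and the using_dac flag are
illustrative, not mandated by the API):

        static int mydev_probe(struct pci_dev *pdev, const struct pci_device_id *id)
        {
                struct device *dev = &pdev->dev;
                int using_dac;

                if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
                        using_dac = 1;          /* 64-bit streaming and coherent DMA */
                } else if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
                        using_dac = 0;          /* fall back to 32-bit addressing */
                } else {
                        dev_warn(dev, "No suitable DMA available\n");
                        return -EIO;
                }

                /* ... rest of probe ... */
                return 0;
        }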
@@ -232,14 +288,14 @@ Here is pseudo-code showing how this might be done:
                card->playback_enabled = 1;
        } else {
                card->playback_enabled = 0;
                printk(KERN_WARNING "%s: Playback disabled due to DMA limitations.\n",
                dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
                         card->name);
        }
        if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
                card->record_enabled = 1;
        } else {
                card->record_enabled = 0;
                printk(KERN_WARNING "%s: Record disabled due to DMA limitations.\n",
                dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
                         card->name);
        }

@@ -331,7 +387,7 @@ context with the GFP_ATOMIC flag.
Size is the length of the region you want to allocate, in bytes.

This routine will allocate RAM for that region, so it acts similarly to
__get_free_pages (but takes size instead of a page order). If your
__get_free_pages() (but takes size instead of a page order). If your
driver needs regions sized smaller than a page, you may prefer using
the dma_pool interface, described below.

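Before turning to dma_pool, here is a sketch of a typical dma_alloc_coherent()
use (the descriptor-ring structure and PAGE_SIZE-sized allocation are
illustrative assumptions, not part of the text above):

        struct mydev_ring {                     /* hypothetical descriptor ring */
                void            *desc;          /* CPU virtual address          */
                dma_addr_t      desc_dma;       /* bus address given to the device */
        };

        ring->desc = dma_alloc_coherent(dev, PAGE_SIZE, &ring->desc_dma, GFP_KERNEL);
        if (!ring->desc)
                return -ENOMEM;

        /* ... program the hardware with ring->desc_dma, use ring->desc from the CPU ... */

        dma_free_coherent(dev, PAGE_SIZE, ring->desc, ring->desc_dma);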
@@ -343,11 +399,11 @@ the consistent DMA mask has been explicitly changed via
dma_set_coherent_mask(). This is true of the dma_pool interface as
well.

dma_alloc_coherent returns two values: the virtual address which you
dma_alloc_coherent() returns two values: the virtual address which you
can use to access it from the CPU and dma_handle which you pass to the
card.

The cpu return address and the DMA bus master address are both
The CPU virtual address and the DMA bus address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size. This invariant
exists (for example) to guarantee that if you allocate a chunk
@@ -359,13 +415,13 @@ To unmap and free such a DMA region, you call:
        dma_free_coherent(dev, size, cpu_addr, dma_handle);

where dev, size are the same as in the above call and cpu_addr and
dma_handle are the values dma_alloc_coherent returned to you.
dma_handle are the values dma_alloc_coherent() returned to you.
This function may not be called in interrupt context.

If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by dma_alloc_coherent,
custom code to subdivide pages returned by dma_alloc_coherent(),
or you can use the dma_pool API to do that. A dma_pool is like
a kmem_cache, but it uses dma_alloc_coherent not __get_free_pages.
a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages().
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N byte boundaries.

@@ -373,37 +429,37 @@ Create a dma_pool like this:

        struct dma_pool *pool;

        pool = dma_pool_create(name, dev, size, align, alloc);
        pool = dma_pool_create(name, dev, size, align, boundary);

The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above. The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two). If your device has no boundary crossing restrictions,
pass 0 for alloc; passing 4096 says memory allocated from this pool
pass 0 for boundary; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries (but at that time it may be better to
go for dma_alloc_coherent directly instead).
use dma_alloc_coherent() directly instead).

Allocate memory from a dma pool like this:
Allocate memory from a DMA pool like this:

        cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);

flags are SLAB_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), SLAB_ATOMIC otherwise. Like dma_alloc_coherent,
flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), GFP_ATOMIC otherwise. Like dma_alloc_coherent(),
this returns two values, cpu_addr and dma_handle.

Free memory that was allocated from a dma_pool like this:

        dma_pool_free(pool, cpu_addr, dma_handle);

where pool is what you passed to dma_pool_alloc, and cpu_addr and
dma_handle are the values dma_pool_alloc returned. This function
where pool is what you passed to dma_pool_alloc(), and cpu_addr and
dma_handle are the values dma_pool_alloc() returned. This function
may be called in interrupt context.

Destroy a dma_pool by calling:

        dma_pool_destroy(pool);

Make sure you've called dma_pool_free for all memory allocated
Make sure you've called dma_pool_free() for all memory allocated
from a pool before you destroy the pool. This function may not
be called in interrupt context.

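Taken together, a typical dma_pool lifetime looks like the following sketch
(the pool name, 64-byte element size, and 8-byte alignment are illustrative
choices, not requirements):

        struct dma_pool *pool;
        void *desc;
        dma_addr_t desc_dma;

        pool = dma_pool_create("mydev_desc", dev, 64, 8, 0);    /* no boundary restriction */
        if (!pool)
                return -ENOMEM;

        desc = dma_pool_alloc(pool, GFP_KERNEL, &desc_dma);     /* may sleep */
        if (!desc)
                goto destroy;

        /* ... hand desc_dma to the device, touch desc from the CPU ... */

        dma_pool_free(pool, desc, desc_dma);
destroy:
        dma_pool_destroy(pool);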
@@ -418,7 +474,7 @@ one of the following values:
        DMA_FROM_DEVICE
        DMA_NONE

One should provide the exact DMA direction if you know it.
You should provide the exact DMA direction if you know it.

DMA_TO_DEVICE means "from main memory to the device"
DMA_FROM_DEVICE means "from the device to main memory"
@@ -489,14 +545,14 @@ and to unmap it:
        dma_unmap_single(dev, dma_handle, size, direction);

You should call dma_mapping_error() as dma_map_single() could fail and return
error. Not all dma implementations support dma_mapping_error() interface.
error. Not all DMA implementations support the dma_mapping_error() interface.
However, it is a good practice to call dma_mapping_error() interface, which
will invoke the generic mapping error check interface. Doing so will ensure
that the mapping code will work correctly on all dma implementations without
that the mapping code will work correctly on all DMA implementations without
any dependency on the specifics of the underlying implementation. Using the
returned address without checking for errors could result in failures ranging
from panics to silent data corruption. A couple of examples of incorrect ways
to check for errors that make assumptions about the underlying dma
to check for errors that make assumptions about the underlying DMA
implementation are as follows and these are applicable to dma_map_page() as
well.
@@ -516,13 +572,13 @@ Incorrect example 2:
                goto map_error;
        }

You should call dma_unmap_single when the DMA activity is finished, e.g.
You should call dma_unmap_single() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

Using cpu pointers like this for single mappings has a disadvantage,
Using CPU pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way. Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single. These
interfaces deal with page/offset pairs instead of cpu pointers.
map/unmap interface pair akin to dma_{map,unmap}_single(). These
interfaces deal with page/offset pairs instead of CPU pointers.
Specifically:

        struct device *dev = &my_dev->dev;
@@ -550,7 +606,7 @@ Here, "offset" means byte offset within the given page.
You should call dma_mapping_error() as dma_map_page() could fail and return
error as outlined under the dma_map_single() discussion.

You should call dma_unmap_page when the DMA activity is finished, e.g.
You should call dma_unmap_page() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

With scatterlists, you map a region gathered from several regions by:
@@ -588,18 +644,16 @@ PLEASE NOTE: The 'nents' argument to the dma_unmap_sg call must be
it should _NOT_ be the 'count' value _returned_ from the
dma_map_sg call.

Every dma_map_{single,sg} call should have its dma_unmap_{single,sg}
counterpart, because the bus address space is a shared resource (although
in some ports the mapping is per each BUS so less devices contend for the
same bus address space) and you could render the machine unusable by eating
all bus addresses.
Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}()
counterpart, because the bus address space is a shared resource and
you could render the machine unusable by consuming all bus addresses.

If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the cpu and device to see the most uptodate and
properly in order for the CPU and device to see the most up-to-date and
correct copy of the DMA buffer.

So, firstly, just map it with dma_map_{single,sg}, and after each DMA
So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
transfer call either:

        dma_sync_single_for_cpu(dev, dma_handle, size, direction);
@@ -611,7 +665,7 @@ or:
as appropriate.

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the cpu, and then before actually
finish accessing the data with the CPU, and then before actually
giving the buffer to the hardware call either:

        dma_sync_single_for_device(dev, dma_handle, size, direction);
@@ -623,9 +677,9 @@ or:
as appropriate.

After the last DMA transfer call one of the DMA unmap routines
dma_unmap_{single,sg}. If you don't touch the data from the first dma_map_*
call till dma_unmap_*, then you don't have to call the dma_sync_*
routines at all.
dma_unmap_{single,sg}(). If you don't touch the data from the first
dma_map_*() call till dma_unmap_*(), then you don't have to call the
dma_sync_*() routines at all.

Here is pseudo code which shows a situation in which you would need
to use the dma_sync_*() interfaces.
@@ -690,12 +744,12 @@ to use the dma_sync_*() interfaces.
                }
        }

Drivers converted fully to this interface should not use virt_to_bus any
longer, nor should they use bus_to_virt. Some drivers have to be changed a
little bit, because there is no longer an equivalent to bus_to_virt in the
Drivers converted fully to this interface should not use virt_to_bus() any
longer, nor should they use bus_to_virt(). Some drivers have to be changed a
little bit, because there is no longer an equivalent to bus_to_virt() in the
dynamic DMA mapping scheme - you have to always store the DMA addresses
returned by the dma_alloc_coherent, dma_pool_alloc, and dma_map_single
calls (dma_map_sg stores them in the scatterlist itself if the platform
returned by the dma_alloc_coherent(), dma_pool_alloc(), and dma_map_single()
calls (dma_map_sg() stores them in the scatterlist itself if the platform
supports dynamic DMA mapping in hardware) in your driver structures and/or
in the card registers.

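As a rough sketch of the streaming pattern described above (the buffer, its
length, and the device-specific steps are illustrative), a receive path that
inspects the data between transfers might look like:

        dma_addr_t mapping;

        mapping = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
        if (dma_mapping_error(dev, mapping))
                goto drop;

        /* device DMAs into buf ... the interrupt says the transfer is done */

        dma_sync_single_for_cpu(dev, mapping, len, DMA_FROM_DEVICE);
        /* the CPU may now read buf, e.g. to peek at a header */

        dma_sync_single_for_device(dev, mapping, len, DMA_FROM_DEVICE);
        /* hand the buffer back to the hardware for another transfer */

        dma_unmap_single(dev, mapping, len, DMA_FROM_DEVICE);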
@@ -709,9 +763,9 @@ as it is impossible to correctly support them.
DMA address space is limited on some architectures and an allocation
failure can be determined by:

- checking if dma_alloc_coherent returns NULL or dma_map_sg returns 0
- checking if dma_alloc_coherent() returns NULL or dma_map_sg returns 0

- checking the returned dma_addr_t of dma_map_single and dma_map_page
- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
  by using dma_mapping_error():

        dma_addr_t dma_handle;
@@ -794,7 +848,7 @@ Example 2: (if buffers are allocated in a loop, unmap all mapped buffers when
                dma_unmap_single(array[i].dma_addr);
        }

Networking drivers must call dev_kfree_skb to free the socket buffer
Networking drivers must call dev_kfree_skb() to free the socket buffer
and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
(ndo_start_xmit). This means that the socket buffer is just dropped in
the failure case.
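
A hedged sketch of that transmit-hook convention (the private structure, its
dev field, and the descriptor queueing are assumptions for illustration):

        static netdev_tx_t mydev_start_xmit(struct sk_buff *skb, struct net_device *ndev)
        {
                struct mydev_priv *priv = netdev_priv(ndev);    /* hypothetical */
                dma_addr_t mapping;

                mapping = dma_map_single(priv->dev, skb->data, skb->len, DMA_TO_DEVICE);
                if (dma_mapping_error(priv->dev, mapping)) {
                        dev_kfree_skb(skb);     /* drop the packet ...            */
                        return NETDEV_TX_OK;    /* ... but don't ask for a requeue */
                }

                /* ... queue a descriptor using "mapping" ... */
                return NETDEV_TX_OK;
        }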
@@ -831,7 +885,7 @@ transform some example code.
                DEFINE_DMA_UNMAP_LEN(len);
        };

2) Use dma_unmap_{addr,len}_set to set these values.
2) Use dma_unmap_{addr,len}_set() to set these values.
   Example, before:

        ringp->mapping = FOO;
@@ -842,7 +896,7 @@ transform some example code.
        dma_unmap_addr_set(ringp, mapping, FOO);
        dma_unmap_len_set(ringp, len, BAR);

3) Use dma_unmap_{addr,len} to access these values.
3) Use dma_unmap_{addr,len}() to access these values.
   Example, before:

        dma_unmap_single(dev, ringp->mapping, ringp->len,
@@ -4,22 +4,26 @@
                        James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API. For a more gentle introduction
of the API (and actual examples) see
Documentation/DMA-API-HOWTO.txt.
of the API (and actual examples), see Documentation/DMA-API-HOWTO.txt.

This API is split into two pieces. Part I describes the API. Part II
describes the extensions to the API for supporting non-consistent
memory machines. Unless you know that your driver absolutely has to
support non-consistent platforms (this is usually only legacy
platforms) you should only use the API described in part I.
This API is split into two pieces. Part I describes the basic API.
Part II describes extensions for supporting non-consistent memory
machines. Unless you know that your driver absolutely has to support
non-consistent platforms (this is usually only legacy platforms) you
should only use the API described in part I.

Part I - dma_ API
-------------------------------------

To get the dma_ API, you must #include <linux/dma-mapping.h>
To get the dma_ API, you must #include <linux/dma-mapping.h>. This
provides dma_addr_t and the interfaces described below.

A dma_addr_t can hold any valid DMA or bus address for the platform. It
can be given to a device to use as a DMA source or target. A CPU cannot
reference a dma_addr_t directly because there may be translation between
its physical address space and the bus address space.

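One practical consequence, not stated above but consistent with it: because a
dma_addr_t is not a pointer, it should not be printed with %p; the dedicated
%pad printk specifier takes its address instead. Sketch:

        dma_addr_t dma_handle;

        dev_dbg(dev, "buffer mapped at %pad\n", &dma_handle);
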
Part Ia - Using large dma-coherent buffers
Part Ia - Using large DMA-coherent buffers
------------------------------------------

void *
@ -33,20 +37,21 @@ to make sure to flush the processor's write buffers before telling
|
|||
devices to read that memory.)
|
||||
|
||||
This routine allocates a region of <size> bytes of consistent memory.
|
||||
It also returns a <dma_handle> which may be cast to an unsigned
|
||||
integer the same width as the bus and used as the physical address
|
||||
base of the region.
|
||||
|
||||
Returns: a pointer to the allocated region (in the processor's virtual
|
||||
It returns a pointer to the allocated region (in the processor's virtual
|
||||
address space) or NULL if the allocation failed.
|
||||
|
||||
It also returns a <dma_handle> which may be cast to an unsigned integer the
|
||||
same width as the bus and given to the device as the bus address base of
|
||||
the region.
|
||||
|
||||
Note: consistent memory can be expensive on some platforms, and the
|
||||
minimum allocation length may be as big as a page, so you should
|
||||
consolidate your requests for consistent memory as much as possible.
|
||||
The simplest way to do that is to use the dma_pool calls (see below).
|
||||
|
||||
The flag parameter (dma_alloc_coherent only) allows the caller to
|
||||
specify the GFP_ flags (see kmalloc) for the allocation (the
|
||||
The flag parameter (dma_alloc_coherent() only) allows the caller to
|
||||
specify the GFP_ flags (see kmalloc()) for the allocation (the
|
||||
implementation may choose to ignore flags that affect the location of
|
||||
the returned memory, like GFP_DMA).
|
||||
|
||||
|
@ -61,24 +66,24 @@ void
|
|||
dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
|
||||
dma_addr_t dma_handle)
|
||||
|
||||
Free the region of consistent memory you previously allocated. dev,
|
||||
size and dma_handle must all be the same as those passed into the
|
||||
consistent allocate. cpu_addr must be the virtual address returned by
|
||||
the consistent allocate.
|
||||
Free a region of consistent memory you previously allocated. dev,
|
||||
size and dma_handle must all be the same as those passed into
|
||||
dma_alloc_coherent(). cpu_addr must be the virtual address returned by
|
||||
the dma_alloc_coherent().
|
||||
|
||||
Note that unlike their sibling allocation calls, these routines
|
||||
may only be called with IRQs enabled.
|
||||
|
||||
|
||||
Part Ib - Using small dma-coherent buffers
|
||||
Part Ib - Using small DMA-coherent buffers
|
||||
------------------------------------------
|
||||
|
||||
To get this part of the dma_ API, you must #include <linux/dmapool.h>
|
||||
|
||||
Many drivers need lots of small dma-coherent memory regions for DMA
|
||||
Many drivers need lots of small DMA-coherent memory regions for DMA
|
||||
descriptors or I/O buffers. Rather than allocating in units of a page
|
||||
or more using dma_alloc_coherent(), you can use DMA pools. These work
|
||||
much like a struct kmem_cache, except that they use the dma-coherent allocator,
|
||||
much like a struct kmem_cache, except that they use the DMA-coherent allocator,
|
||||
not __get_free_pages(). Also, they understand common hardware constraints
|
||||
for alignment, like queue heads needing to be aligned on N-byte boundaries.
|
||||
|
||||
|
@ -87,7 +92,7 @@ for alignment, like queue heads needing to be aligned on N-byte boundaries.
|
|||
dma_pool_create(const char *name, struct device *dev,
|
||||
size_t size, size_t align, size_t alloc);
|
||||
|
||||
The pool create() routines initialize a pool of dma-coherent buffers
|
||||
dma_pool_create() initializes a pool of DMA-coherent buffers
|
||||
for use with a given device. It must be called in a context which
|
||||
can sleep.
|
||||
|
||||
|
@ -102,25 +107,26 @@ from this pool must not cross 4KByte boundaries.
|
|||
void *dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
|
||||
dma_addr_t *dma_handle);
|
||||
|
||||
This allocates memory from the pool; the returned memory will meet the size
|
||||
and alignment requirements specified at creation time. Pass GFP_ATOMIC to
|
||||
prevent blocking, or if it's permitted (not in_interrupt, not holding SMP locks),
|
||||
pass GFP_KERNEL to allow blocking. Like dma_alloc_coherent(), this returns
|
||||
two values: an address usable by the cpu, and the dma address usable by the
|
||||
pool's device.
|
||||
This allocates memory from the pool; the returned memory will meet the
|
||||
size and alignment requirements specified at creation time. Pass
|
||||
GFP_ATOMIC to prevent blocking, or if it's permitted (not
|
||||
in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
|
||||
blocking. Like dma_alloc_coherent(), this returns two values: an
|
||||
address usable by the CPU, and the DMA address usable by the pool's
|
||||
device.
|
||||
|
||||
|
||||
void dma_pool_free(struct dma_pool *pool, void *vaddr,
|
||||
dma_addr_t addr);
|
||||
|
||||
This puts memory back into the pool. The pool is what was passed to
|
||||
the pool allocation routine; the cpu (vaddr) and dma addresses are what
|
||||
dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what
|
||||
were returned when that routine allocated the memory being freed.
|
||||
|
||||
|
||||
void dma_pool_destroy(struct dma_pool *pool);
|
||||
|
||||
The pool destroy() routines free the resources of the pool. They must be
|
||||
dma_pool_destroy() frees the resources of the pool. It must be
|
||||
called in a context which can sleep. Make sure you've freed all allocated
|
||||
memory back to the pool before you destroy it.
|
||||
|
||||
|
@ -187,9 +193,9 @@ dma_map_single(struct device *dev, void *cpu_addr, size_t size,
|
|||
enum dma_data_direction direction)
|
||||
|
||||
Maps a piece of processor virtual memory so it can be accessed by the
|
||||
device and returns the physical handle of the memory.
|
||||
device and returns the bus address of the memory.
|
||||
|
||||
The direction for both api's may be converted freely by casting.
|
||||
The direction for both APIs may be converted freely by casting.
|
||||
However the dma_ API uses a strongly typed enumerator for its
|
||||
direction:
|
||||
|
||||
|
@ -198,31 +204,30 @@ DMA_TO_DEVICE data is going from the memory to the device
|
|||
DMA_FROM_DEVICE data is coming from the device to the memory
|
||||
DMA_BIDIRECTIONAL direction isn't known
|
||||
|
||||
Notes: Not all memory regions in a machine can be mapped by this
|
||||
API. Further, regions that appear to be physically contiguous in
|
||||
kernel virtual space may not be contiguous as physical memory. Since
|
||||
this API does not provide any scatter/gather capability, it will fail
|
||||
if the user tries to map a non-physically contiguous piece of memory.
|
||||
For this reason, it is recommended that memory mapped by this API be
|
||||
obtained only from sources which guarantee it to be physically contiguous
|
||||
(like kmalloc).
|
||||
Notes: Not all memory regions in a machine can be mapped by this API.
|
||||
Further, contiguous kernel virtual space may not be contiguous as
|
||||
physical memory. Since this API does not provide any scatter/gather
|
||||
capability, it will fail if the user tries to map a non-physically
|
||||
contiguous piece of memory. For this reason, memory to be mapped by
|
||||
this API should be obtained from sources which guarantee it to be
|
||||
physically contiguous (like kmalloc).
|
||||
|
||||
Further, the physical address of the memory must be within the
|
||||
dma_mask of the device (the dma_mask represents a bit mask of the
|
||||
addressable region for the device. I.e., if the physical address of
|
||||
the memory anded with the dma_mask is still equal to the physical
|
||||
address, then the device can perform DMA to the memory). In order to
|
||||
Further, the bus address of the memory must be within the
|
||||
dma_mask of the device (the dma_mask is a bit mask of the
|
||||
addressable region for the device, i.e., if the bus address of
|
||||
the memory ANDed with the dma_mask is still equal to the bus
|
||||
address, then the device can perform DMA to the memory). To
|
||||
ensure that the memory allocated by kmalloc is within the dma_mask,
|
||||
the driver may specify various platform-dependent flags to restrict
|
||||
the physical memory range of the allocation (e.g. on x86, GFP_DMA
|
||||
guarantees to be within the first 16Mb of available physical memory,
|
||||
the bus address range of the allocation (e.g., on x86, GFP_DMA
|
||||
guarantees to be within the first 16MB of available bus addresses,
|
||||
as required by ISA devices).
|
||||
|
||||
Note also that the above constraints on physical contiguity and
|
||||
dma_mask may not apply if the platform has an IOMMU (a device which
|
||||
supplies a physical to virtual mapping between the I/O memory bus and
|
||||
the device). However, to be portable, device driver writers may *not*
|
||||
assume that such an IOMMU exists.
|
||||
maps an I/O bus address to a physical memory address). However, to be
|
||||
portable, device driver writers may *not* assume that such an IOMMU
|
||||
exists.
|
||||
|
||||
Warnings: Memory coherency operates at a granularity called the cache
|
||||
line width. In order for memory mapped by this API to operate
|
||||
|
@ -281,9 +286,9 @@ cache width is.
|
|||
int
|
||||
dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
|
||||
|
||||
In some circumstances dma_map_single and dma_map_page will fail to create
|
||||
In some circumstances dma_map_single() and dma_map_page() will fail to create
|
||||
a mapping. A driver can check for these errors by testing the returned
|
||||
dma address with dma_mapping_error(). A non-zero return value means the mapping
|
||||
DMA address with dma_mapping_error(). A non-zero return value means the mapping
|
||||
could not be created and the driver should take appropriate action (e.g.
|
||||
reduce current DMA mapping usage or delay and try again later).
|
||||
|
||||
|
@ -291,7 +296,7 @@ reduce current DMA mapping usage or delay and try again later).
|
|||
dma_map_sg(struct device *dev, struct scatterlist *sg,
|
||||
int nents, enum dma_data_direction direction)
|
||||
|
||||
Returns: the number of physical segments mapped (this may be shorter
|
||||
Returns: the number of bus address segments mapped (this may be shorter
|
||||
than <nents> passed in if some elements of the scatter/gather list are
|
||||
physically or virtually adjacent and an IOMMU maps them with a single
|
||||
entry).
|
||||
|
@ -299,7 +304,7 @@ entry).
|
|||
Please note that the sg cannot be mapped again if it has been mapped once.
|
||||
The mapping process is allowed to destroy information in the sg.
|
||||
|
||||
As with the other mapping interfaces, dma_map_sg can fail. When it
|
||||
As with the other mapping interfaces, dma_map_sg() can fail. When it
|
||||
does, 0 is returned and a driver must take appropriate action. It is
|
||||
critical that the driver do something, in the case of a block driver
|
||||
aborting the request or even oopsing is better than doing nothing and
|
||||
|
@ -335,7 +340,7 @@ must be the same as those and passed in to the scatter/gather mapping
|
|||
API.
|
||||
|
||||
Note: <nents> must be the number you passed in, *not* the number of
|
||||
physical entries returned.
|
||||
bus address entries returned.
|
||||
|
||||
void
|
||||
dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
|
||||
|
@ -350,7 +355,7 @@ void
|
|||
dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
|
||||
enum dma_data_direction direction)
|
||||
|
||||
Synchronise a single contiguous or scatter/gather mapping for the cpu
|
||||
Synchronise a single contiguous or scatter/gather mapping for the CPU
|
||||
and device. With the sync_sg API, all the parameters must be the same
|
||||
as those passed into the single mapping API. With the sync_single API,
|
||||
you can use dma_handle and size parameters that aren't identical to
|
||||
|
@ -391,10 +396,10 @@ The four functions above are just like the counterpart functions
|
|||
without the _attrs suffixes, except that they pass an optional
|
||||
struct dma_attrs*.
|
||||
|
||||
struct dma_attrs encapsulates a set of "dma attributes". For the
|
||||
struct dma_attrs encapsulates a set of "DMA attributes". For the
|
||||
definition of struct dma_attrs see linux/dma-attrs.h.
|
||||
|
||||
The interpretation of dma attributes is architecture-specific, and
|
||||
The interpretation of DMA attributes is architecture-specific, and
|
||||
each attribute should be documented in Documentation/DMA-attributes.txt.
|
||||
|
||||
If struct dma_attrs* is NULL, the semantics of each of these
|
||||
|
@ -458,7 +463,7 @@ Note: where the platform can return consistent memory, it will
|
|||
guarantee that the sync points become nops.
|
||||
|
||||
Warning: Handling non-consistent memory is a real pain. You should
|
||||
only ever use this API if you positively know your driver will be
|
||||
only use this API if you positively know your driver will be
|
||||
required to work on one of the rare (usually non-PCI) architectures
|
||||
that simply cannot make consistent memory.
|
||||
|
||||
|
@ -492,30 +497,29 @@ continuing on for size. Again, you *must* observe the cache line
|
|||
boundaries when doing this.
|
||||
|
||||
int
|
||||
dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
|
||||
dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
|
||||
dma_addr_t device_addr, size_t size, int
|
||||
flags)
|
||||
|
||||
Declare region of memory to be handed out by dma_alloc_coherent when
|
||||
Declare region of memory to be handed out by dma_alloc_coherent() when
|
||||
it's asked for coherent memory for this device.
|
||||
|
||||
bus_addr is the physical address to which the memory is currently
|
||||
assigned in the bus responding region (this will be used by the
|
||||
platform to perform the mapping).
|
||||
phys_addr is the CPU physical address to which the memory is currently
|
||||
assigned (this will be ioremapped so the CPU can access the region).
|
||||
|
||||
device_addr is the physical address the device needs to be programmed
|
||||
with actually to address this memory (this will be handed out as the
|
||||
device_addr is the bus address the device needs to be programmed
|
||||
with to actually address this memory (this will be handed out as the
|
||||
dma_addr_t in dma_alloc_coherent()).
|
||||
|
||||
size is the size of the area (must be multiples of PAGE_SIZE).
|
||||
|
||||
flags can be or'd together and are:
|
||||
flags can be ORed together and are:
|
||||
|
||||
DMA_MEMORY_MAP - request that the memory returned from
|
||||
dma_alloc_coherent() be directly writable.
|
||||
|
||||
DMA_MEMORY_IO - request that the memory returned from
|
||||
dma_alloc_coherent() be addressable using read/write/memcpy_toio etc.
|
||||
dma_alloc_coherent() be addressable using read()/write()/memcpy_toio() etc.
|
||||
|
||||
One or both of these flags must be present.
|
||||
|
||||
|
@ -572,7 +576,7 @@ region is occupied.
|
|||
Part III - Debug drivers use of the DMA-API
|
||||
-------------------------------------------
|
||||
|
||||
The DMA-API as described above as some constraints. DMA addresses must be
|
||||
The DMA-API as described above has some constraints. DMA addresses must be
|
||||
released with the corresponding function with the same size for example. With
|
||||
the advent of hardware IOMMUs it becomes more and more important that drivers
|
||||
do not violate those constraints. In the worst case such a violation can
|
||||
|
@ -690,11 +694,11 @@ architectural default.
|
|||
void debug_dmap_mapping_error(struct device *dev, dma_addr_t dma_addr);
|
||||
|
||||
dma-debug interface debug_dma_mapping_error() to debug drivers that fail
|
||||
to check dma mapping errors on addresses returned by dma_map_single() and
|
||||
to check DMA mapping errors on addresses returned by dma_map_single() and
|
||||
dma_map_page() interfaces. This interface clears a flag set by
|
||||
debug_dma_map_page() to indicate that dma_mapping_error() has been called by
|
||||
the driver. When driver does unmap, debug_dma_unmap() checks the flag and if
|
||||
this flag is still set, prints warning message that includes call trace that
|
||||
leads up to the unmap. This interface can be called from dma_mapping_error()
|
||||
routines to enable dma mapping error check debugging.
|
||||
routines to enable DMA mapping error check debugging.
|
||||
|
||||
|
|
|
@ -16,7 +16,7 @@ To do ISA style DMA you need to include two headers:
|
|||
#include <asm/dma.h>
|
||||
|
||||
The first is the generic DMA API used to convert virtual addresses to
|
||||
physical addresses (see Documentation/DMA-API.txt for details).
|
||||
bus addresses (see Documentation/DMA-API.txt for details).
|
||||
|
||||
The second contains the routines specific to ISA DMA transfers. Since
|
||||
this is not present on all platforms make sure you construct your
|
||||
|
@ -50,7 +50,7 @@ early as possible and not release it until the driver is unloaded.)
|
|||
Part III - Address translation
|
||||
------------------------------
|
||||
|
||||
To translate the virtual address to a physical use the normal DMA
|
||||
To translate the virtual address to a bus address, use the normal DMA
|
||||
API. Do _not_ use isa_virt_to_phys() even though it does the same
|
||||
thing. The reason for this is that the function isa_virt_to_phys()
|
||||
will require a Kconfig dependency to ISA, not just ISA_DMA_API which
|
||||
|
|
|
@ -168,26 +168,6 @@ struct pci_controller *pci_find_hose_for_OF_device(struct device_node *node)
|
|||
return NULL;
|
||||
}
|
||||
|
||||
static ssize_t pci_show_devspec(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
struct pci_dev *pdev;
|
||||
struct device_node *np;
|
||||
|
||||
pdev = to_pci_dev(dev);
|
||||
np = pci_device_to_OF_node(pdev);
|
||||
if (np == NULL || np->full_name == NULL)
|
||||
return 0;
|
||||
return sprintf(buf, "%s", np->full_name);
|
||||
}
|
||||
static DEVICE_ATTR(devspec, S_IRUGO, pci_show_devspec, NULL);
|
||||
|
||||
/* Add sysfs properties */
|
||||
int pcibios_add_platform_entries(struct pci_dev *pdev)
|
||||
{
|
||||
return device_create_file(&pdev->dev, &dev_attr_devspec);
|
||||
}
|
||||
|
||||
void pcibios_set_master(struct pci_dev *dev)
|
||||
{
|
||||
/* No special bus mastering setup handling */
|
||||
|
|
|
@ -201,26 +201,6 @@ struct pci_controller* pci_find_hose_for_OF_device(struct device_node* node)
|
|||
return NULL;
|
||||
}
|
||||
|
||||
static ssize_t pci_show_devspec(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
struct pci_dev *pdev;
|
||||
struct device_node *np;
|
||||
|
||||
pdev = to_pci_dev (dev);
|
||||
np = pci_device_to_OF_node(pdev);
|
||||
if (np == NULL || np->full_name == NULL)
|
||||
return 0;
|
||||
return sprintf(buf, "%s", np->full_name);
|
||||
}
|
||||
static DEVICE_ATTR(devspec, S_IRUGO, pci_show_devspec, NULL);
|
||||
|
||||
/* Add sysfs properties */
|
||||
int pcibios_add_platform_entries(struct pci_dev *pdev)
|
||||
{
|
||||
return device_create_file(&pdev->dev, &dev_attr_devspec);
|
||||
}
|
||||
|
||||
/*
|
||||
* Reads the interrupt pin to determine if interrupt is use by card.
|
||||
* If the interrupt is used, then gets the interrupt line from the
|
||||
|
|
|
@ -120,6 +120,8 @@ static inline bool zdev_enabled(struct zpci_dev *zdev)
|
|||
return (zdev->fh & (1UL << 31)) ? true : false;
|
||||
}
|
||||
|
||||
extern const struct attribute_group *zpci_attr_groups[];
|
||||
|
||||
/* -----------------------------------------------------------------------------
|
||||
Prototypes
|
||||
----------------------------------------------------------------------------- */
|
||||
|
@ -166,10 +168,6 @@ static inline void zpci_exit_slot(struct zpci_dev *zdev) {}
|
|||
struct zpci_dev *get_zdev(struct pci_dev *);
|
||||
struct zpci_dev *get_zdev_by_fid(u32);
|
||||
|
||||
/* sysfs */
|
||||
int zpci_sysfs_add_device(struct device *);
|
||||
void zpci_sysfs_remove_device(struct device *);
|
||||
|
||||
/* DMA */
|
||||
int zpci_dma_init(void);
|
||||
void zpci_dma_exit(void);
|
||||
|
|
|
@ -530,11 +530,6 @@ static void zpci_unmap_resources(struct zpci_dev *zdev)
|
|||
}
|
||||
}
|
||||
|
||||
int pcibios_add_platform_entries(struct pci_dev *pdev)
|
||||
{
|
||||
return zpci_sysfs_add_device(&pdev->dev);
|
||||
}
|
||||
|
||||
static int __init zpci_irq_init(void)
|
||||
{
|
||||
int rc;
|
||||
|
@ -671,6 +666,7 @@ int pcibios_add_device(struct pci_dev *pdev)
|
|||
int i;
|
||||
|
||||
zdev->pdev = pdev;
|
||||
pdev->dev.groups = zpci_attr_groups;
|
||||
zpci_map_resources(zdev);
|
||||
|
||||
for (i = 0; i < PCI_BAR_COUNT; i++) {
|
||||
|
|
|
@ -72,36 +72,18 @@ static ssize_t store_recover(struct device *dev, struct device_attribute *attr,
|
|||
}
|
||||
static DEVICE_ATTR(recover, S_IWUSR, NULL, store_recover);
|
||||
|
||||
static struct device_attribute *zpci_dev_attrs[] = {
|
||||
&dev_attr_function_id,
|
||||
&dev_attr_function_handle,
|
||||
&dev_attr_pchid,
|
||||
&dev_attr_pfgid,
|
||||
&dev_attr_recover,
|
||||
static struct attribute *zpci_dev_attrs[] = {
|
||||
&dev_attr_function_id.attr,
|
||||
&dev_attr_function_handle.attr,
|
||||
&dev_attr_pchid.attr,
|
||||
&dev_attr_pfgid.attr,
|
||||
&dev_attr_recover.attr,
|
||||
NULL,
|
||||
};
|
||||
static struct attribute_group zpci_attr_group = {
|
||||
.attrs = zpci_dev_attrs,
|
||||
};
|
||||
const struct attribute_group *zpci_attr_groups[] = {
|
||||
&zpci_attr_group,
|
||||
NULL,
|
||||
};
|
||||
|
||||
int zpci_sysfs_add_device(struct device *dev)
|
||||
{
|
||||
int i, rc = 0;
|
||||
|
||||
for (i = 0; zpci_dev_attrs[i]; i++) {
|
||||
rc = device_create_file(dev, zpci_dev_attrs[i]);
|
||||
if (rc)
|
||||
goto error;
|
||||
}
|
||||
return 0;
|
||||
|
||||
error:
|
||||
while (--i >= 0)
|
||||
device_remove_file(dev, zpci_dev_attrs[i]);
|
||||
return rc;
|
||||
}
|
||||
|
||||
void zpci_sysfs_remove_device(struct device *dev)
|
||||
{
|
||||
int i;
|
||||
|
||||
for (i = 0; zpci_dev_attrs[i]; i++)
|
||||
device_remove_file(dev, zpci_dev_attrs[i]);
|
||||
}
|
||||
|
|
|
@ -31,6 +31,8 @@
|
|||
static void gapspci_fixup_resources(struct pci_dev *dev)
|
||||
{
|
||||
struct pci_channel *p = dev->sysdata;
|
||||
struct resource res;
|
||||
struct pci_bus_region region;
|
||||
|
||||
printk(KERN_NOTICE "PCI: Fixing up device %s\n", pci_name(dev));
|
||||
|
||||
|
@ -50,11 +52,21 @@ static void gapspci_fixup_resources(struct pci_dev *dev)
|
|||
|
||||
/*
|
||||
* Redirect dma memory allocations to special memory window.
|
||||
*
|
||||
* If this GAPSPCI region were mapped by a BAR, the CPU
|
||||
* phys_addr_t would be pci_resource_start(), and the bus
|
||||
* address would be pci_bus_address(pci_resource_start()).
|
||||
* But apparently there's no BAR mapping it, so we just
|
||||
* "know" its CPU address is GAPSPCI_DMA_BASE.
|
||||
*/
|
||||
res.start = GAPSPCI_DMA_BASE;
|
||||
res.end = GAPSPCI_DMA_BASE + GAPSPCI_DMA_SIZE - 1;
|
||||
res.flags = IORESOURCE_MEM;
|
||||
pcibios_resource_to_bus(dev->bus, &region, &res);
|
||||
BUG_ON(!dma_declare_coherent_memory(&dev->dev,
|
||||
GAPSPCI_DMA_BASE,
|
||||
GAPSPCI_DMA_BASE,
|
||||
GAPSPCI_DMA_SIZE,
|
||||
res.start,
|
||||
region.start,
|
||||
resource_size(&res),
|
||||
DMA_MEMORY_MAP |
|
||||
DMA_MEMORY_EXCLUSIVE));
|
||||
break;
|
||||
|
|
|
@ -10,6 +10,8 @@
|
|||
*
|
||||
* Copyright 2002 Andi Kleen, SuSE Labs.
|
||||
*/
|
||||
#define pr_fmt(fmt) "AGP: " fmt
|
||||
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/types.h>
|
||||
#include <linux/init.h>
|
||||
|
@ -75,14 +77,13 @@ static u32 __init allocate_aperture(void)
|
|||
addr = memblock_find_in_range(GART_MIN_ADDR, GART_MAX_ADDR,
|
||||
aper_size, aper_size);
|
||||
if (!addr) {
|
||||
printk(KERN_ERR
|
||||
"Cannot allocate aperture memory hole (%lx,%uK)\n",
|
||||
addr, aper_size>>10);
|
||||
pr_err("Cannot allocate aperture memory hole [mem %#010lx-%#010lx] (%uKB)\n",
|
||||
addr, addr + aper_size - 1, aper_size >> 10);
|
||||
return 0;
|
||||
}
|
||||
memblock_reserve(addr, aper_size);
|
||||
printk(KERN_INFO "Mapping aperture over %d KB of RAM @ %lx\n",
|
||||
aper_size >> 10, addr);
|
||||
pr_info("Mapping aperture over RAM [mem %#010lx-%#010lx] (%uKB)\n",
|
||||
addr, addr + aper_size - 1, aper_size >> 10);
|
||||
register_nosave_region(addr >> PAGE_SHIFT,
|
||||
(addr+aper_size) >> PAGE_SHIFT);
|
||||
|
||||
|
@ -126,10 +127,11 @@ static u32 __init read_agp(int bus, int slot, int func, int cap, u32 *order)
|
|||
u64 aper;
|
||||
u32 old_order;
|
||||
|
||||
printk(KERN_INFO "AGP bridge at %02x:%02x:%02x\n", bus, slot, func);
|
||||
pr_info("pci 0000:%02x:%02x:%02x: AGP bridge\n", bus, slot, func);
|
||||
apsizereg = read_pci_config_16(bus, slot, func, cap + 0x14);
|
||||
if (apsizereg == 0xffffffff) {
|
||||
printk(KERN_ERR "APSIZE in AGP bridge unreadable\n");
|
||||
pr_err("pci 0000:%02x:%02x.%d: APSIZE unreadable\n",
|
||||
bus, slot, func);
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -153,16 +155,18 @@ static u32 __init read_agp(int bus, int slot, int func, int cap, u32 *order)
|
|||
* On some sick chips, APSIZE is 0. It means it wants 4G
|
||||
* so let double check that order, and lets trust AMD NB settings:
|
||||
*/
|
||||
printk(KERN_INFO "Aperture from AGP @ %Lx old size %u MB\n",
|
||||
aper, 32 << old_order);
|
||||
pr_info("pci 0000:%02x:%02x.%d: AGP aperture [bus addr %#010Lx-%#010Lx] (old size %uMB)\n",
|
||||
bus, slot, func, aper, aper + (32ULL << (old_order + 20)) - 1,
|
||||
32 << old_order);
|
||||
if (aper + (32ULL<<(20 + *order)) > 0x100000000ULL) {
|
||||
printk(KERN_INFO "Aperture size %u MB (APSIZE %x) is not right, using settings from NB\n",
|
||||
32 << *order, apsizereg);
|
||||
pr_info("pci 0000:%02x:%02x.%d: AGP aperture size %uMB (APSIZE %#x) is not right, using settings from NB\n",
|
||||
bus, slot, func, 32 << *order, apsizereg);
|
||||
*order = old_order;
|
||||
}
|
||||
|
||||
printk(KERN_INFO "Aperture from AGP @ %Lx size %u MB (APSIZE %x)\n",
|
||||
aper, 32 << *order, apsizereg);
|
||||
pr_info("pci 0000:%02x:%02x.%d: AGP aperture [bus addr %#010Lx-%#010Lx] (%uMB, APSIZE %#x)\n",
|
||||
bus, slot, func, aper, aper + (32ULL << (*order + 20)) - 1,
|
||||
32 << *order, apsizereg);
|
||||
|
||||
if (!aperture_valid(aper, (32*1024*1024) << *order, 32<<20))
|
||||
return 0;
|
||||
|
@ -218,7 +222,7 @@ static u32 __init search_agp_bridge(u32 *order, int *valid_agp)
|
|||
}
|
||||
}
|
||||
}
|
||||
printk(KERN_INFO "No AGP bridge found\n");
|
||||
pr_info("No AGP bridge found\n");
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -310,7 +314,8 @@ void __init early_gart_iommu_check(void)
|
|||
if (e820_any_mapped(aper_base, aper_base + aper_size,
|
||||
E820_RAM)) {
|
||||
/* reserve it, so we can reuse it in second kernel */
|
||||
printk(KERN_INFO "update e820 for GART\n");
|
||||
pr_info("e820: reserve [mem %#010Lx-%#010Lx] for GART\n",
|
||||
aper_base, aper_base + aper_size - 1);
|
||||
e820_add_region(aper_base, aper_size, E820_RESERVED);
|
||||
update_e820();
|
||||
}
|
||||
|
@ -354,7 +359,7 @@ int __init gart_iommu_hole_init(void)
|
|||
!early_pci_allowed())
|
||||
return -ENODEV;
|
||||
|
||||
printk(KERN_INFO "Checking aperture...\n");
|
||||
pr_info("Checking aperture...\n");
|
||||
|
||||
if (!fallback_aper_force)
|
||||
agp_aper_base = search_agp_bridge(&agp_aper_order, &valid_agp);
|
||||
|
@ -395,8 +400,9 @@ int __init gart_iommu_hole_init(void)
|
|||
aper_base = read_pci_config(bus, slot, 3, AMD64_GARTAPERTUREBASE) & 0x7fff;
|
||||
aper_base <<= 25;
|
||||
|
||||
printk(KERN_INFO "Node %d: aperture @ %Lx size %u MB\n",
|
||||
node, aper_base, aper_size >> 20);
|
||||
pr_info("Node %d: aperture [bus addr %#010Lx-%#010Lx] (%uMB)\n",
|
||||
node, aper_base, aper_base + aper_size - 1,
|
||||
aper_size >> 20);
|
||||
node++;
|
||||
|
||||
if (!aperture_valid(aper_base, aper_size, 64<<20)) {
|
||||
|
@ -407,9 +413,9 @@ int __init gart_iommu_hole_init(void)
|
|||
if (!no_iommu &&
|
||||
max_pfn > MAX_DMA32_PFN &&
|
||||
!printed_gart_size_msg) {
|
||||
printk(KERN_ERR "you are using iommu with agp, but GART size is less than 64M\n");
|
||||
printk(KERN_ERR "please increase GART size in your BIOS setup\n");
|
||||
printk(KERN_ERR "if BIOS doesn't have that option, contact your HW vendor!\n");
|
||||
pr_err("you are using iommu with agp, but GART size is less than 64MB\n");
|
||||
pr_err("please increase GART size in your BIOS setup\n");
|
||||
pr_err("if BIOS doesn't have that option, contact your HW vendor!\n");
|
||||
printed_gart_size_msg = 1;
|
||||
}
|
||||
} else {
|
||||
|
@ -446,12 +452,9 @@ out:
|
|||
force_iommu ||
|
||||
valid_agp ||
|
||||
fallback_aper_force) {
|
||||
printk(KERN_INFO
|
||||
"Your BIOS doesn't leave a aperture memory hole\n");
|
||||
printk(KERN_INFO
|
||||
"Please enable the IOMMU option in the BIOS setup\n");
|
||||
printk(KERN_INFO
|
||||
"This costs you %d MB of RAM\n",
|
||||
pr_info("Your BIOS doesn't leave a aperture memory hole\n");
|
||||
pr_info("Please enable the IOMMU option in the BIOS setup\n");
|
||||
pr_info("This costs you %dMB of RAM\n",
|
||||
32 << fallback_aper_order);
|
||||
|
||||
aper_order = fallback_aper_order;
|
||||
|
|
|
@ -60,8 +60,8 @@ static void __init cnb20le_res(u8 bus, u8 slot, u8 func)
|
|||
word1 = read_pci_config_16(bus, slot, func, 0xc4);
|
||||
word2 = read_pci_config_16(bus, slot, func, 0xc6);
|
||||
if (word1 != word2) {
|
||||
res.start = (word1 << 16) | 0x0000;
|
||||
res.end = (word2 << 16) | 0xffff;
|
||||
res.start = ((resource_size_t) word1 << 16) | 0x0000;
|
||||
res.end = ((resource_size_t) word2 << 16) | 0xffff;
|
||||
res.flags = IORESOURCE_MEM | IORESOURCE_PREFETCH;
|
||||
update_res(info, res.start, res.end, res.flags, 0);
|
||||
}
|
||||
|
|
|
@ -6,6 +6,7 @@
|
|||
#include <linux/dmi.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/vgaarb.h>
|
||||
#include <asm/hpet.h>
|
||||
#include <asm/pci_x86.h>
|
||||
|
||||
static void pci_fixup_i450nx(struct pci_dev *d)
|
||||
|
@ -526,6 +527,19 @@ static void sb600_disable_hpet_bar(struct pci_dev *dev)
|
|||
}
|
||||
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_ATI, 0x4385, sb600_disable_hpet_bar);
|
||||
|
||||
#ifdef CONFIG_HPET_TIMER
|
||||
static void sb600_hpet_quirk(struct pci_dev *dev)
|
||||
{
|
||||
struct resource *r = &dev->resource[1];
|
||||
|
||||
if (r->flags & IORESOURCE_MEM && r->start == hpet_address) {
|
||||
r->flags |= IORESOURCE_PCI_FIXED;
|
||||
dev_info(&dev->dev, "reg 0x14 contains HPET; making it immovable\n");
|
||||
}
|
||||
}
|
||||
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATI, 0x4385, sb600_hpet_quirk);
|
||||
#endif
|
||||
|
||||
/*
|
||||
* Twinhead H12Y needs us to block out a region otherwise we map devices
|
||||
* there and any access kills the box.
|
||||
|
|
|
@@ -271,6 +271,10 @@ static void pcibios_allocate_dev_resources(struct pci_dev *dev, int pass)
 				"BAR %d: reserving %pr (d=%d, p=%d)\n",
 				idx, r, disabled, pass);
 		if (pci_claim_resource(dev, idx) < 0) {
+			if (r->flags & IORESOURCE_PCI_FIXED) {
+				dev_info(&dev->dev, "BAR %d %pR is immovable\n",
+					 idx, r);
+			} else {
 			/* We'll assign a new address later */
 			pcibios_save_fw_addr(dev,
 					idx, r->start);
@@ -279,6 +283,7 @@ static void pcibios_allocate_dev_resources(struct pci_dev *dev, int pass)
+			}
 			}
 		}
 	}
 	if (!pass) {
 		r = &dev->resource[PCI_ROM_RESOURCE];
 		if (r->flags & IORESOURCE_ROM_ENABLE) {
@@ -356,6 +361,12 @@ static int __init pcibios_assign_resources(void)
 	return 0;
 }

+/**
+ * called in fs_initcall (one below subsys_initcall),
+ * give a chance for motherboard reserve resources
+ */
+fs_initcall(pcibios_assign_resources);
+
 void pcibios_resource_survey_bus(struct pci_bus *bus)
 {
 	dev_printk(KERN_DEBUG, &bus->dev, "Allocating resources\n");
@@ -392,12 +403,6 @@ void __init pcibios_resource_survey(void)
 	ioapic_insert_resources();
 }

-/**
- * called in fs_initcall (one below subsys_initcall),
- * give a chance for motherboard reserve resources
- */
-fs_initcall(pcibios_assign_resources);
-
 static const struct vm_operations_struct pci_mmap_ops = {
 	.access = generic_access_phys,
 };
@@ -10,13 +10,13 @@
 struct dma_coherent_mem {
 	void		*virt_base;
 	dma_addr_t	device_base;
-	phys_addr_t	pfn_base;
+	unsigned long	pfn_base;
 	int		size;
 	int		flags;
 	unsigned long	*bitmap;
 };

-int dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
+int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
 				dma_addr_t device_addr, size_t size, int flags)
 {
 	void __iomem *mem_base = NULL;
@@ -32,7 +32,7 @@ int dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,

 	/* FIXME: this routine just ignores DMA_MEMORY_INCLUDES_CHILDREN */

-	mem_base = ioremap(bus_addr, size);
+	mem_base = ioremap(phys_addr, size);
 	if (!mem_base)
 		goto out;

@@ -45,7 +45,7 @@ int dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,

 	dev->dma_mem->virt_base = mem_base;
 	dev->dma_mem->device_base = device_addr;
-	dev->dma_mem->pfn_base = PFN_DOWN(bus_addr);
+	dev->dma_mem->pfn_base = PFN_DOWN(phys_addr);
 	dev->dma_mem->size = pages;
 	dev->dma_mem->flags = flags;

@@ -208,7 +208,7 @@ int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma,

 	*ret = -ENXIO;
 	if (off < count && user_count <= count - off) {
-		unsigned pfn = mem->pfn_base + start + off;
+		unsigned long pfn = mem->pfn_base + start + off;
 		*ret = remap_pfn_range(vma, vma->vm_start, pfn,
 					user_count << PAGE_SHIFT,
 					vma->vm_page_prot);
@@ -175,7 +175,7 @@ static void dmam_coherent_decl_release(struct device *dev, void *res)
 /**
  * dmam_declare_coherent_memory - Managed dma_declare_coherent_memory()
  * @dev: Device to declare coherent memory for
- * @bus_addr: Bus address of coherent memory to be declared
+ * @phys_addr: Physical address of coherent memory to be declared
  * @device_addr: Device address of coherent memory to be declared
  * @size: Size of coherent memory to be declared
  * @flags: Flags
@@ -185,7 +185,7 @@ static void dmam_coherent_decl_release(struct device *dev, void *res)
 * RETURNS:
 * 0 on success, -errno on failure.
 */
-int dmam_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
+int dmam_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
				 dma_addr_t device_addr, size_t size, int flags)
 {
	void *res;
@@ -195,7 +195,7 @@ int dmam_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
	if (!res)
		return -ENOMEM;

-	rc = dma_declare_coherent_memory(dev, bus_addr, device_addr, size,
+	rc = dma_declare_coherent_memory(dev, phys_addr, device_addr, size,
					 flags);
	if (rc == 0)
		devres_add(dev, res);
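With the prototype change above, callers now pass the CPU physical address of the region (a phys_addr_t) and the device-visible address (a dma_addr_t) separately. A hedged sketch of a caller; the EXAMPLE_* constants are made up, and the interpretation of the return value is deliberately left open because the success convention of dma_declare_coherent_memory() has changed across kernel versions:

#include <linux/device.h>
#include <linux/dma-mapping.h>

#define EXAMPLE_DMA_PHYS  0x01840000UL	/* hypothetical CPU physical address */
#define EXAMPLE_DMA_BUS   0x01840000UL	/* hypothetical bus address seen by the device */
#define EXAMPLE_DMA_SIZE  0x8000

static void example_declare_coherent(struct device *dev)
{
	int rc;

	rc = dma_declare_coherent_memory(dev,
					 (phys_addr_t)EXAMPLE_DMA_PHYS,	/* CPU address; ioremap()ed internally */
					 (dma_addr_t)EXAMPLE_DMA_BUS,	/* address the device uses for DMA */
					 EXAMPLE_DMA_SIZE,
					 DMA_MEMORY_MAP);

	/* Interpret rc per the kernel version in use; once declared,
	 * dma_alloc_coherent() for this device is served from this region. */
	(void)rc;
}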
@@ -1011,13 +1011,13 @@ static phys_addr_t exynos_iommu_iova_to_phys(struct iommu_domain *domain,
 }

 static struct iommu_ops exynos_iommu_ops = {
-	.domain_init = &exynos_iommu_domain_init,
-	.domain_destroy = &exynos_iommu_domain_destroy,
-	.attach_dev = &exynos_iommu_attach_device,
-	.detach_dev = &exynos_iommu_detach_device,
-	.map = &exynos_iommu_map,
-	.unmap = &exynos_iommu_unmap,
-	.iova_to_phys = &exynos_iommu_iova_to_phys,
+	.domain_init = exynos_iommu_domain_init,
+	.domain_destroy = exynos_iommu_domain_destroy,
+	.attach_dev = exynos_iommu_attach_device,
+	.detach_dev = exynos_iommu_detach_device,
+	.map = exynos_iommu_map,
+	.unmap = exynos_iommu_unmap,
+	.iova_to_phys = exynos_iommu_iova_to_phys,
 	.pgsize_bitmap = SECT_SIZE | LPAGE_SIZE | SPAGE_SIZE,
 };

@@ -878,50 +878,6 @@ int pci_msi_vec_count(struct pci_dev *dev)
 }
 EXPORT_SYMBOL(pci_msi_vec_count);

-/**
- * pci_enable_msi_block - configure device's MSI capability structure
- * @dev: device to configure
- * @nvec: number of interrupts to configure
- *
- * Allocate IRQs for a device with the MSI capability.
- * This function returns a negative errno if an error occurs.  If it
- * is unable to allocate the number of interrupts requested, it returns
- * the number of interrupts it might be able to allocate.  If it successfully
- * allocates at least the number of interrupts requested, it returns 0 and
- * updates the @dev's irq member to the lowest new interrupt number; the
- * other interrupt numbers allocated to this device are consecutive.
- */
-int pci_enable_msi_block(struct pci_dev *dev, int nvec)
-{
-	int status, maxvec;
-
-	if (dev->current_state != PCI_D0)
-		return -EINVAL;
-
-	maxvec = pci_msi_vec_count(dev);
-	if (maxvec < 0)
-		return maxvec;
-	if (nvec > maxvec)
-		return maxvec;
-
-	status = pci_msi_check_device(dev, nvec, PCI_CAP_ID_MSI);
-	if (status)
-		return status;
-
-	WARN_ON(!!dev->msi_enabled);
-
-	/* Check whether driver already requested MSI-X irqs */
-	if (dev->msix_enabled) {
-		dev_info(&dev->dev, "can't enable MSI "
-			 "(MSI-X already enabled)\n");
-		return -EINVAL;
-	}
-
-	status = msi_capability_init(dev, nvec);
-	return status;
-}
-EXPORT_SYMBOL(pci_enable_msi_block);
-
 void pci_msi_shutdown(struct pci_dev *dev)
 {
 	struct msi_desc *desc;
@@ -1127,14 +1083,45 @@ void pci_msi_init_pci_dev(struct pci_dev *dev)
  **/
 int pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec)
 {
-	int nvec = maxvec;
+	int nvec;
 	int rc;

+	if (dev->current_state != PCI_D0)
+		return -EINVAL;
+
+	WARN_ON(!!dev->msi_enabled);
+
+	/* Check whether driver already requested MSI-X irqs */
+	if (dev->msix_enabled) {
+		dev_info(&dev->dev,
+			 "can't enable MSI (MSI-X already enabled)\n");
+		return -EINVAL;
+	}
+
 	if (maxvec < minvec)
 		return -ERANGE;

+	nvec = pci_msi_vec_count(dev);
+	if (nvec < 0)
+		return nvec;
+	else if (nvec < minvec)
+		return -EINVAL;
+	else if (nvec > maxvec)
+		nvec = maxvec;
+
 	do {
-		rc = pci_enable_msi_block(dev, nvec);
+		rc = pci_msi_check_device(dev, nvec, PCI_CAP_ID_MSI);
 		if (rc < 0) {
 			return rc;
 		} else if (rc > 0) {
+			if (rc < minvec)
+				return -ENOSPC;
+			nvec = rc;
+		}
+	} while (rc);
+
+	do {
+		rc = msi_capability_init(dev, nvec);
+		if (rc < 0) {
+			return rc;
+		} else if (rc > 0) {
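Since pci_enable_msi_block() is removed here, drivers request MSI vectors through pci_enable_msi_range() (or pci_enable_msi_exact()). A hedged sketch of the calling pattern in a hypothetical driver; the handler, device name and vector counts are illustrative:

#include <linux/pci.h>
#include <linux/interrupt.h>

static irqreturn_t example_irq_handler(int irq, void *data)
{
	return IRQ_HANDLED;
}

static int example_setup_msi(struct pci_dev *pdev)
{
	int nvec, i, ret;

	/* Accept anywhere between 1 and 4 vectors; the return value is the
	 * count actually granted, or a negative errno. */
	nvec = pci_enable_msi_range(pdev, 1, 4);
	if (nvec < 0)
		return nvec;

	/* The allocated interrupt numbers are consecutive from pdev->irq. */
	for (i = 0; i < nvec; i++) {
		ret = request_irq(pdev->irq + i, example_irq_handler, 0,
				  "example-msi", pdev);
		if (ret)
			goto err;
	}
	return 0;

err:
	while (--i >= 0)
		free_irq(pdev->irq + i, pdev);
	pci_disable_msi(pdev);
	return ret;
}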
@@ -29,6 +29,7 @@
 #include <linux/slab.h>
 #include <linux/vgaarb.h>
 #include <linux/pm_runtime.h>
+#include <linux/of.h>
 #include "pci.h"

 static int sysfs_initialized;	/* = 0 */
@@ -416,6 +417,20 @@ static ssize_t d3cold_allowed_show(struct device *dev,
 static DEVICE_ATTR_RW(d3cold_allowed);
 #endif

+#ifdef CONFIG_OF
+static ssize_t devspec_show(struct device *dev,
+			    struct device_attribute *attr, char *buf)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct device_node *np = pci_device_to_OF_node(pdev);
+
+	if (np == NULL || np->full_name == NULL)
+		return 0;
+	return sprintf(buf, "%s", np->full_name);
+}
+static DEVICE_ATTR_RO(devspec);
+#endif
+
 #ifdef CONFIG_PCI_IOV
 static ssize_t sriov_totalvfs_show(struct device *dev,
 				   struct device_attribute *attr,
@@ -520,6 +535,9 @@ static struct attribute *pci_dev_attrs[] = {
 	&dev_attr_msi_bus.attr,
 #if defined(CONFIG_PM_RUNTIME) && defined(CONFIG_ACPI)
 	&dev_attr_d3cold_allowed.attr,
 #endif
+#ifdef CONFIG_OF
+	&dev_attr_devspec.attr,
+#endif
 	NULL,
 };
@@ -1255,11 +1273,6 @@ static struct bin_attribute pcie_config_attr = {
 	.write = pci_write_config,
 };

-int __weak pcibios_add_platform_entries(struct pci_dev *dev)
-{
-	return 0;
-}
-
 static ssize_t reset_store(struct device *dev,
 			   struct device_attribute *attr, const char *buf,
 			   size_t count)
@@ -1375,11 +1388,6 @@ int __must_check pci_create_sysfs_dev_files (struct pci_dev *pdev)
 		pdev->rom_attr = attr;
 	}

-	/* add platform-specific attributes */
-	retval = pcibios_add_platform_entries(pdev);
-	if (retval)
-		goto err_rom_file;
-
 	/* add sysfs entries for various capabilities */
 	retval = pci_create_capabilities_sysfs(pdev);
 	if (retval)
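The devspec attribute above uses the standard DEVICE_ATTR_RO()/attribute-group machinery that also underlies the s390 switch to pdev->dev.groups. A minimal sketch of the pattern with a made-up "example" attribute:

#include <linux/device.h>
#include <linux/sysfs.h>

static ssize_t example_show(struct device *dev,
			    struct device_attribute *attr, char *buf)
{
	/* DEVICE_ATTR_RO(example) requires the callback to be named example_show(). */
	return sprintf(buf, "%s\n", dev_name(dev));
}
static DEVICE_ATTR_RO(example);

static struct attribute *example_attrs[] = {
	&dev_attr_example.attr,
	NULL,
};

static const struct attribute_group example_attr_group = {
	.attrs = example_attrs,
};

Bus or driver code can then point dev->groups (pdev->dev.groups for a PCI device) at a NULL-terminated array of such groups before device_add(), so the attributes are created and removed automatically rather than by hand-rolled sysfs calls.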
@@ -171,9 +171,10 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
 		    struct resource *res, unsigned int pos)
 {
 	u32 l, sz, mask;
+	u64 l64, sz64, mask64;
 	u16 orig_cmd;
 	struct pci_bus_region region, inverted_region;
-	bool bar_too_big = false, bar_disabled = false;
+	bool bar_too_big = false, bar_too_high = false, bar_invalid = false;

 	mask = type ? PCI_ROM_ADDRESS_MASK : ~0;

@@ -226,9 +227,9 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
 	}

 	if (res->flags & IORESOURCE_MEM_64) {
-		u64 l64 = l;
-		u64 sz64 = sz;
-		u64 mask64 = mask | (u64)~0 << 32;
+		l64 = l;
+		sz64 = sz;
+		mask64 = mask | (u64)~0 << 32;

 		pci_read_config_dword(dev, pos + 4, &l);
 		pci_write_config_dword(dev, pos + 4, ~0);
@@ -243,19 +244,22 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
 		if (!sz64)
 			goto fail;

-		if ((sizeof(resource_size_t) < 8) && (sz64 > 0x100000000ULL)) {
+		if ((sizeof(dma_addr_t) < 8 || sizeof(resource_size_t) < 8) &&
+		    sz64 > 0x100000000ULL) {
+			res->flags |= IORESOURCE_UNSET | IORESOURCE_DISABLED;
+			res->start = 0;
+			res->end = 0;
 			bar_too_big = true;
-			goto fail;
+			goto out;
 		}

-		if ((sizeof(resource_size_t) < 8) && l) {
-			/* Address above 32-bit boundary; disable the BAR */
-			pci_write_config_dword(dev, pos, 0);
-			pci_write_config_dword(dev, pos + 4, 0);
-			region.start = 0;
-			region.end = sz64;
-			bar_disabled = true;
+		if ((sizeof(dma_addr_t) < 8) && l) {
+			/* Above 32-bit boundary; try to reallocate */
+			res->flags |= IORESOURCE_UNSET;
+			res->start = 0;
+			res->end = sz64;
+			bar_too_high = true;
+			goto out;
 		} else {
 			region.start = l64;
 			region.end = l64 + sz64;
@@ -285,11 +289,10 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
 	 * be claimed by the device.
 	 */
 	if (inverted_region.start != region.start) {
-		dev_info(&dev->dev, "reg 0x%x: initial BAR value %pa invalid; forcing reassignment\n",
-			 pos, &region.start);
 		res->flags |= IORESOURCE_UNSET;
-		res->end -= res->start;
 		res->start = 0;
+		res->end = region.end - region.start;
+		bar_invalid = true;
 	}

 	goto out;
@@ -303,8 +306,15 @@ out:
 	pci_write_config_word(dev, PCI_COMMAND, orig_cmd);

 	if (bar_too_big)
-		dev_err(&dev->dev, "reg 0x%x: can't handle 64-bit BAR\n", pos);
-	if (res->flags && !bar_disabled)
+		dev_err(&dev->dev, "reg 0x%x: can't handle BAR larger than 4GB (size %#010llx)\n",
+			pos, (unsigned long long) sz64);
+	if (bar_too_high)
+		dev_info(&dev->dev, "reg 0x%x: can't handle BAR above 4G (bus address %#010llx)\n",
+			 pos, (unsigned long long) l64);
+	if (bar_invalid)
+		dev_info(&dev->dev, "reg 0x%x: initial BAR value %#010llx invalid\n",
+			 pos, (unsigned long long) region.start);
+	if (res->flags)
 		dev_printk(KERN_DEBUG, &dev->dev, "reg 0x%x: %pR\n", pos, res);

 	return (res->flags & IORESOURCE_MEM_64) ? 1 : 0;
@@ -465,7 +475,7 @@ void pci_read_bridge_bases(struct pci_bus *child)

 	if (dev->transparent) {
 		pci_bus_for_each_resource(child->parent, res, i) {
-			if (res) {
+			if (res && res->flags) {
 				pci_bus_add_resource(child, res,
 						     PCI_SUBTRACTIVE_DECODE);
 				dev_printk(KERN_DEBUG, &dev->dev,
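The __pci_read_base() changes make the 4GB cut-off depend on the width of dma_addr_t as well as resource_size_t. A small, hedged restatement of just that test in isolation (example_bar_usable() is not a kernel function):

#include <linux/types.h>

static bool example_bar_usable(u64 sz64)
{
	/* A BAR larger than 4GB can only be handled when both dma_addr_t and
	 * resource_size_t are 64 bits wide; otherwise mark it unusable. */
	if ((sizeof(dma_addr_t) < 8 || sizeof(resource_size_t) < 8) &&
	    sz64 > 0x100000000ULL)
		return false;
	return true;
}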
@@ -2992,6 +2992,14 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_CHELSIO, 0x0030,
 			 quirk_broken_intx_masking);
 DECLARE_PCI_FIXUP_HEADER(0x1814, 0x0601, /* Ralink RT2800 802.11n PCI */
 			 quirk_broken_intx_masking);
+/*
+ * Realtek RTL8169 PCI Gigabit Ethernet Controller (rev 10)
+ * Subsystem: Realtek RTL8169/8110 Family PCI Gigabit Ethernet NIC
+ *
+ * RTL8110SC - Fails under PCI device assignment using DisINTx masking.
+ */
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_REALTEK, 0x8169,
+			 quirk_broken_intx_masking);

 static void pci_do_fixups(struct pci_dev *dev, struct pci_fixup *f,
 			  struct pci_fixup *end)
@@ -713,12 +713,11 @@ static void pci_bridge_check_ranges(struct pci_bus *bus)
    bus resource of a given type. Note: we intentionally skip
    the bus resources which have already been assigned (that is,
    have non-NULL parent resource). */
-static struct resource *find_free_bus_resource(struct pci_bus *bus, unsigned long type)
+static struct resource *find_free_bus_resource(struct pci_bus *bus,
+			unsigned long type_mask, unsigned long type)
 {
 	int i;
 	struct resource *r;
-	unsigned long type_mask = IORESOURCE_IO | IORESOURCE_MEM |
-				  IORESOURCE_PREFETCH;

 	pci_bus_for_each_resource(bus, r, i) {
 		if (r == &ioport_resource || r == &iomem_resource)
@@ -815,7 +814,8 @@ static void pbus_size_io(struct pci_bus *bus, resource_size_t min_size,
 		resource_size_t add_size, struct list_head *realloc_head)
 {
 	struct pci_dev *dev;
-	struct resource *b_res = find_free_bus_resource(bus, IORESOURCE_IO);
+	struct resource *b_res = find_free_bus_resource(bus, IORESOURCE_IO,
+							IORESOURCE_IO);
 	resource_size_t size = 0, size0 = 0, size1 = 0;
 	resource_size_t children_add_size = 0;
 	resource_size_t min_align, align;
@@ -907,36 +907,40 @@ static inline resource_size_t calculate_mem_align(resource_size_t *aligns,
 * @bus : the bus
 * @mask: mask the resource flag, then compare it with type
 * @type: the type of free resource from bridge
+ * @type2: second match type
+ * @type3: third match type
 * @min_size : the minimum memory window that must to be allocated
 * @add_size : additional optional memory window
 * @realloc_head : track the additional memory window on this list
 *
 * Calculate the size of the bus and minimal alignment which
 * guarantees that all child resources fit in this size.
+ *
+ * Returns -ENOSPC if there's no available bus resource of the desired type.
+ * Otherwise, sets the bus resource start/end to indicate the required
+ * size, adds things to realloc_head (if supplied), and returns 0.
 */
 static int pbus_size_mem(struct pci_bus *bus, unsigned long mask,
-			 unsigned long type, resource_size_t min_size,
-			 resource_size_t add_size,
+			 unsigned long type, unsigned long type2,
+			 unsigned long type3,
+			 resource_size_t min_size, resource_size_t add_size,
			 struct list_head *realloc_head)
 {
	struct pci_dev *dev;
	resource_size_t min_align, align, size, size0, size1;
-	resource_size_t aligns[12];	/* Alignments from 1Mb to 2Gb */
+	resource_size_t aligns[14];	/* Alignments from 1Mb to 8Gb */
	int order, max_order;
-	struct resource *b_res = find_free_bus_resource(bus, type);
-	unsigned int mem64_mask = 0;
+	struct resource *b_res = find_free_bus_resource(bus,
					mask | IORESOURCE_PREFETCH, type);
	resource_size_t children_add_size = 0;

	if (!b_res)
-		return 0;
+		return -ENOSPC;

	memset(aligns, 0, sizeof(aligns));
	max_order = 0;
	size = 0;

-	mem64_mask = b_res->flags & IORESOURCE_MEM_64;
-	b_res->flags &= ~IORESOURCE_MEM_64;
-
	list_for_each_entry(dev, &bus->devices, bus_list) {
		int i;

@@ -944,7 +948,9 @@ static int pbus_size_mem(struct pci_bus *bus, unsigned long mask,
 			struct resource *r = &dev->resource[i];
 			resource_size_t r_size;

-			if (r->parent || (r->flags & mask) != type)
+			if (r->parent || ((r->flags & mask) != type &&
+					  (r->flags & mask) != type2 &&
+					  (r->flags & mask) != type3))
 				continue;
 			r_size = resource_size(r);
 #ifdef CONFIG_PCI_IOV
@@ -957,10 +963,17 @@ static int pbus_size_mem(struct pci_bus *bus, unsigned long mask,
 				continue;
 			}
 #endif
-			/* For bridges size != alignment */
+			/*
+			 * aligns[0] is for 1MB (since bridge memory
+			 * windows are always at least 1MB aligned), so
+			 * keep "order" from being negative for smaller
+			 * resources.
+			 */
 			align = pci_resource_alignment(dev, r);
 			order = __ffs(align) - 20;
-			if (order > 11) {
+			if (order < 0)
+				order = 0;
+			if (order >= ARRAY_SIZE(aligns)) {
 				dev_warn(&dev->dev, "disabling BAR %d: %pR "
 					 "(bad alignment %#llx)\n", i, r,
 					 (unsigned long long) align);
@@ -968,15 +981,12 @@ static int pbus_size_mem(struct pci_bus *bus, unsigned long mask,
 				continue;
 			}
 			size += r_size;
-			if (order < 0)
-				order = 0;
 			/* Exclude ranges with size > align from
 			   calculation of the alignment. */
 			if (r_size == align)
 				aligns[order] += align;
 			if (order > max_order)
 				max_order = order;
-			mem64_mask &= r->flags & IORESOURCE_MEM_64;

 			if (realloc_head)
 				children_add_size += get_res_add_size(realloc_head, r);
@@ -997,18 +1007,18 @@ static int pbus_size_mem(struct pci_bus *bus, unsigned long mask,
 				 "%pR to %pR (unused)\n", b_res,
 				 &bus->busn_res);
 			b_res->flags = 0;
-		return 1;
+		return 0;
 	}
 	b_res->start = min_align;
 	b_res->end = size0 + min_align - 1;
-	b_res->flags |= IORESOURCE_STARTALIGN | mem64_mask;
+	b_res->flags |= IORESOURCE_STARTALIGN;
 	if (size1 > size0 && realloc_head) {
 		add_to_list(realloc_head, bus->self, b_res, size1-size0, min_align);
 		dev_printk(KERN_DEBUG, &bus->self->dev, "bridge window "
 			   "%pR to %pR add_size %llx\n", b_res,
 			   &bus->busn_res, (unsigned long long)size1-size0);
 	}
-	return 1;
+	return 0;
 }

 unsigned long pci_cardbus_resource_alignment(struct resource *res)
@@ -1116,8 +1126,10 @@ handle_done:
 void __pci_bus_size_bridges(struct pci_bus *bus, struct list_head *realloc_head)
 {
 	struct pci_dev *dev;
-	unsigned long mask, prefmask;
+	unsigned long mask, prefmask, type2 = 0, type3 = 0;
 	resource_size_t additional_mem_size = 0, additional_io_size = 0;
+	struct resource *b_res;
+	int ret;

 	list_for_each_entry(dev, &bus->devices, bus_list) {
 		struct pci_bus *b = dev->subordinate;
@@ -1151,26 +1163,78 @@ void __pci_bus_size_bridges(struct pci_bus *bus, struct list_head *realloc_head)
 			additional_io_size  = pci_hotplug_io_size;
 			additional_mem_size = pci_hotplug_mem_size;
 		}
-		/*
-		 * Follow thru
-		 */
+		/* Fall through */
 	default:
 		pbus_size_io(bus, realloc_head ? 0 : additional_io_size,
 			     additional_io_size, realloc_head);
-		/* If the bridge supports prefetchable range, size it
-		   separately. If it doesn't, or its prefetchable window
-		   has already been allocated by arch code, try
-		   non-prefetchable range for both types of PCI memory
-		   resources. */
+
+		/*
+		 * If there's a 64-bit prefetchable MMIO window, compute
+		 * the size required to put all 64-bit prefetchable
+		 * resources in it.
+		 */
+		b_res = &bus->self->resource[PCI_BRIDGE_RESOURCES];
 		mask = IORESOURCE_MEM;
 		prefmask = IORESOURCE_MEM | IORESOURCE_PREFETCH;
-		if (pbus_size_mem(bus, prefmask, prefmask,
+		if (b_res[2].flags & IORESOURCE_MEM_64) {
+			prefmask |= IORESOURCE_MEM_64;
+			ret = pbus_size_mem(bus, prefmask, prefmask,
+				  prefmask, prefmask,
 				  realloc_head ? 0 : additional_mem_size,
-				  additional_mem_size, realloc_head))
-			mask = prefmask; /* Success, size non-prefetch only. */
+				  additional_mem_size, realloc_head);
+
+			/*
+			 * If successful, all non-prefetchable resources
+			 * and any 32-bit prefetchable resources will go in
+			 * the non-prefetchable window.
+			 */
+			if (ret == 0) {
+				mask = prefmask;
+				type2 = prefmask & ~IORESOURCE_MEM_64;
+				type3 = prefmask & ~IORESOURCE_PREFETCH;
+			}
+		}
+
+		/*
+		 * If there is no 64-bit prefetchable window, compute the
+		 * size required to put all prefetchable resources in the
+		 * 32-bit prefetchable window (if there is one).
+		 */
+		if (!type2) {
+			prefmask &= ~IORESOURCE_MEM_64;
+			ret = pbus_size_mem(bus, prefmask, prefmask,
+					 prefmask, prefmask,
+					 realloc_head ? 0 : additional_mem_size,
+					 additional_mem_size, realloc_head);
+
+			/*
+			 * If successful, only non-prefetchable resources
+			 * will go in the non-prefetchable window.
+			 */
+			if (ret == 0)
+				mask = prefmask;
 		else
 			additional_mem_size += additional_mem_size;
-		pbus_size_mem(bus, mask, IORESOURCE_MEM,
+
+			type2 = type3 = IORESOURCE_MEM;
+		}
+
+		/*
+		 * Compute the size required to put everything else in the
+		 * non-prefetchable window.  This includes:
+		 *
+		 *   - all non-prefetchable resources
+		 *   - 32-bit prefetchable resources if there's a 64-bit
+		 *     prefetchable window or no prefetchable window at all
+		 *   - 64-bit prefetchable resources if there's no
+		 *     prefetchable window at all
+		 *
+		 * Note that the strategy in __pci_assign_resource() must
+		 * match that used here.  Specifically, we cannot put a
+		 * 32-bit prefetchable resource in a 64-bit prefetchable
+		 * window.
		 */
+		pbus_size_mem(bus, mask, IORESOURCE_MEM, type2, type3,
 			      realloc_head ? 0 : additional_mem_size,
 			      additional_mem_size, realloc_head);
 		break;
@@ -1256,42 +1320,66 @@ static void __pci_bridge_assign_resources(const struct pci_dev *bridge,
 static void pci_bridge_release_resources(struct pci_bus *bus,
 					 unsigned long type)
 {
-	int idx;
-	bool changed = false;
-	struct pci_dev *dev;
+	struct pci_dev *dev = bus->self;
 	struct resource *r;
 	unsigned long type_mask = IORESOURCE_IO | IORESOURCE_MEM |
-				  IORESOURCE_PREFETCH;
+				  IORESOURCE_PREFETCH | IORESOURCE_MEM_64;
+	unsigned old_flags = 0;
+	struct resource *b_res;
+	int idx = 1;

+	b_res = &dev->resource[PCI_BRIDGE_RESOURCES];
+
+	/*
+	 *     1. if there is io port assign fail, will release bridge
+	 *	  io port.
+	 *     2. if there is non pref mmio assign fail, release bridge
+	 *	  nonpref mmio.
+	 *     3. if there is 64bit pref mmio assign fail, and bridge pref
+	 *	  is 64bit, release bridge pref mmio.
+	 *     4. if there is pref mmio assign fail, and bridge pref is
+	 *	  32bit mmio, release bridge pref mmio
+	 *     5. if there is pref mmio assign fail, and bridge pref is not
+	 *	  assigned, release bridge nonpref mmio.
+	 */
+	if (type & IORESOURCE_IO)
+		idx = 0;
+	else if (!(type & IORESOURCE_PREFETCH))
+		idx = 1;
+	else if ((type & IORESOURCE_MEM_64) &&
+		 (b_res[2].flags & IORESOURCE_MEM_64))
+		idx = 2;
+	else if (!(b_res[2].flags & IORESOURCE_MEM_64) &&
+		 (b_res[2].flags & IORESOURCE_PREFETCH))
+		idx = 2;
+	else
+		idx = 1;
+
+	r = &b_res[idx];
+
-	dev = bus->self;
-	for (idx = PCI_BRIDGE_RESOURCES; idx <= PCI_BRIDGE_RESOURCE_END;
-	     idx++) {
-		r = &dev->resource[idx];
-		if ((r->flags & type_mask) != type)
-			continue;
 	if (!r->parent)
-			continue;
+		return;
+
 	/*
 	 * if there are children under that, we should release them
 	 * all
 	 */
 	release_child_resources(r);
 	if (!release_resource(r)) {
-		dev_printk(KERN_DEBUG, &dev->dev,
-			 "resource %d %pR released\n", idx, r);
+		type = old_flags = r->flags & type_mask;
+		dev_printk(KERN_DEBUG, &dev->dev, "resource %d %pR released\n",
+					PCI_BRIDGE_RESOURCES + idx, r);
 		/* keep the old size */
 		r->end = resource_size(r) - 1;
 		r->start = 0;
 		r->flags = 0;
-		changed = true;
-	}
-	}
-
-	if (changed) {
+
 		/* avoiding touch the one without PREF */
 		if (type & IORESOURCE_PREFETCH)
 			type = IORESOURCE_PREFETCH;
 		__pci_setup_bridge(bus, type);
+		/* for next child res under same bridge */
+		r->flags = old_flags;
 	}
 }

@@ -1470,7 +1558,7 @@ void pci_assign_unassigned_root_bus_resources(struct pci_bus *bus)
 	LIST_HEAD(fail_head);
 	struct pci_dev_resource *fail_res;
 	unsigned long type_mask = IORESOURCE_IO | IORESOURCE_MEM |
-				  IORESOURCE_PREFETCH;
+				  IORESOURCE_PREFETCH | IORESOURCE_MEM_64;
 	int pci_try_num = 1;
 	enum enable_type enable_local;

@@ -208,21 +208,42 @@ static int __pci_assign_resource(struct pci_bus *bus, struct pci_dev *dev,

 	min = (res->flags & IORESOURCE_IO) ? PCIBIOS_MIN_IO : PCIBIOS_MIN_MEM;

-	/* First, try exact prefetching match.. */
+	/*
+	 * First, try exact prefetching match.  Even if a 64-bit
+	 * prefetchable bridge window is below 4GB, we can't put a 32-bit
+	 * prefetchable resource in it because pbus_size_mem() assumes a
+	 * 64-bit window will contain no 32-bit resources.  If we assign
+	 * things differently than they were sized, not everything will fit.
+	 */
 	ret = pci_bus_alloc_resource(bus, res, size, align, min,
-				     IORESOURCE_PREFETCH,
+				     IORESOURCE_PREFETCH | IORESOURCE_MEM_64,
 				     pcibios_align_resource, dev);
+	if (ret == 0)
+		return 0;

-	if (ret < 0 && (res->flags & IORESOURCE_PREFETCH)) {
-		/*
-		 * That failed.
-		 *
-		 * But a prefetching area can handle a non-prefetching
-		 * window (it will just not perform as well).
-		 */
+	/*
+	 * If the prefetchable window is only 32 bits wide, we can put
+	 * 64-bit prefetchable resources in it.
+	 */
+	if ((res->flags & (IORESOURCE_PREFETCH | IORESOURCE_MEM_64)) ==
+	     (IORESOURCE_PREFETCH | IORESOURCE_MEM_64)) {
+		ret = pci_bus_alloc_resource(bus, res, size, align, min,
+					     IORESOURCE_PREFETCH,
+					     pcibios_align_resource, dev);
+		if (ret == 0)
+			return 0;
+	}
+
+	/*
+	 * If we didn't find a better match, we can put any memory resource
+	 * in a non-prefetchable window.  If this resource is 32 bits and
+	 * non-prefetchable, the first call already tried the only possibility
+	 * so we don't need to try again.
+	 */
+	if (res->flags & (IORESOURCE_PREFETCH | IORESOURCE_MEM_64))
 		ret = pci_bus_alloc_resource(bus, res, size, align, min, 0,
 					     pcibios_align_resource, dev);
-	}

 	return ret;
 }

@@ -16,15 +16,12 @@ int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma,
 * Standard interface
 */
 #define ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY
-extern int
-dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
+int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
			    dma_addr_t device_addr, size_t size, int flags);

-extern void
-dma_release_declared_memory(struct device *dev);
+void dma_release_declared_memory(struct device *dev);

-extern void *
-dma_mark_declared_memory_occupied(struct device *dev,
+void *dma_mark_declared_memory_occupied(struct device *dev,
					dma_addr_t device_addr, size_t size);
 #else
 #define dma_alloc_from_coherent(dev, size, handle, ret) (0)
@@ -8,6 +8,12 @@
 #include <linux/dma-direction.h>
 #include <linux/scatterlist.h>

+/*
+ * A dma_addr_t can hold any valid DMA or bus address for the platform.
+ * It can be given to a device to use as a DMA source or target.  A CPU cannot
+ * reference a dma_addr_t directly because there may be translation between
+ * its physical address space and the bus address space.
+ */
 struct dma_map_ops {
 	void* (*alloc)(struct device *dev, size_t size,
 				dma_addr_t *dma_handle, gfp_t gfp,
@@ -186,7 +192,7 @@ static inline int dma_get_cache_alignment(void)

 #ifndef ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY
 static inline int
-dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
+dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
			    dma_addr_t device_addr, size_t size, int flags)
 {
	return 0;
@@ -217,13 +223,14 @@ extern void *dmam_alloc_noncoherent(struct device *dev, size_t size,
 extern void dmam_free_noncoherent(struct device *dev, size_t size, void *vaddr,
				  dma_addr_t dma_handle);
 #ifdef ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY
-extern int dmam_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
+extern int dmam_declare_coherent_memory(struct device *dev,
+					 phys_addr_t phys_addr,
					 dma_addr_t device_addr, size_t size,
					 int flags);
 extern void dmam_release_declared_memory(struct device *dev);
 #else /* ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY */
 static inline int dmam_declare_coherent_memory(struct device *dev,
-				dma_addr_t bus_addr, dma_addr_t device_addr,
+				phys_addr_t phys_addr, dma_addr_t device_addr,
				size_t size, gfp_t gfp)
 {
	return 0;
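The new comment in dma-mapping.h draws the CPU-address/bus-address line that the whole series is about: the kernel dereferences virtual addresses, while a dma_addr_t is only meaningful to the device. A hedged sketch of that split in an ordinary streaming mapping (the device pointer and buffer size are illustrative):

#include <linux/dma-mapping.h>
#include <linux/slab.h>

static int example_map(struct device *mydev)
{
	size_t len = 512;
	void *cpu_addr = kmalloc(len, GFP_KERNEL);	/* CPU virtual address */
	dma_addr_t bus_addr;

	if (!cpu_addr)
		return -ENOMEM;

	bus_addr = dma_map_single(mydev, cpu_addr, len, DMA_TO_DEVICE);
	if (dma_mapping_error(mydev, bus_addr)) {
		kfree(cpu_addr);
		return -EIO;
	}

	/* bus_addr is handed to the device; only cpu_addr may be dereferenced
	 * by the kernel. */
	dma_unmap_single(mydev, bus_addr, len, DMA_TO_DEVICE);
	kfree(cpu_addr);
	return 0;
}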
@@ -1158,7 +1158,6 @@ struct msix_entry {

 #ifdef CONFIG_PCI_MSI
 int pci_msi_vec_count(struct pci_dev *dev);
-int pci_enable_msi_block(struct pci_dev *dev, int nvec);
 void pci_msi_shutdown(struct pci_dev *dev);
 void pci_disable_msi(struct pci_dev *dev);
 int pci_msix_vec_count(struct pci_dev *dev);
@@ -1188,8 +1187,6 @@ static inline int pci_enable_msix_exact(struct pci_dev *dev,
 }
 #else
 static inline int pci_msi_vec_count(struct pci_dev *dev) { return -ENOSYS; }
-static inline int pci_enable_msi_block(struct pci_dev *dev, int nvec)
-{ return -ENOSYS; }
 static inline void pci_msi_shutdown(struct pci_dev *dev) { }
 static inline void pci_disable_msi(struct pci_dev *dev) { }
 static inline int pci_msix_vec_count(struct pci_dev *dev) { return -ENOSYS; }
@@ -1244,7 +1241,7 @@ static inline void pcie_set_ecrc_checking(struct pci_dev *dev) { }
 static inline void pcie_ecrc_get_policy(char *str) { }
 #endif

-#define pci_enable_msi(pdev)	pci_enable_msi_block(pdev, 1)
+#define pci_enable_msi(pdev)	pci_enable_msi_exact(pdev, 1)

 #ifdef CONFIG_HT_IRQ
 /* The functions a driver should call */
@@ -1572,7 +1569,6 @@ extern unsigned long pci_hotplug_io_size;
 extern unsigned long pci_hotplug_mem_size;

 /* Architecture-specific versions may override these (weak) */
-int pcibios_add_platform_entries(struct pci_dev *dev);
 void pcibios_disable_device(struct pci_dev *dev);
 void pcibios_set_master(struct pci_dev *dev);
 int pcibios_set_pcie_reset_state(struct pci_dev *dev,
@@ -142,6 +142,7 @@ typedef unsigned long blkcnt_t;
 #define pgoff_t unsigned long
 #endif

+/* A dma_addr_t can hold any valid DMA or bus address for the platform */
 #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
 typedef u64 dma_addr_t;
 #else
@@ -1288,13 +1288,10 @@ int iomem_map_sanity_check(resource_size_t addr, unsigned long size)
 		if (p->flags & IORESOURCE_BUSY)
 			continue;

-		printk(KERN_WARNING "resource map sanity check conflict: "
-		       "0x%llx 0x%llx 0x%llx 0x%llx %s\n",
+		printk(KERN_WARNING "resource sanity check: requesting [mem %#010llx-%#010llx], which spans more than %s %pR\n",
 		       (unsigned long long)addr,
 		       (unsigned long long)(addr + size - 1),
-		       (unsigned long long)p->start,
-		       (unsigned long long)p->end,
-		       p->name);
+		       p->name, p);
 		err = -1;
 		break;
 	}
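The reworded sanity-check message leans on the %pR format specifier, which prints a struct resource with its type and range. A small sketch of what it produces, using made-up HPET values:

#include <linux/ioport.h>
#include <linux/printk.h>

static void example_print_resource(void)
{
	struct resource r = {
		.start = 0xfed00000,
		.end   = 0xfed003ff,
		.flags = IORESOURCE_MEM,
		.name  = "HPET",
	};

	/* Prints something like: HPET [mem 0xfed00000-0xfed003ff] */
	pr_warn("resource sanity check example: %s %pR\n", r.name, &r);
}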