[ Upstream commit a251994a44 ]
The dw-edma driver stops after processing a DMA request even if a request
remains in the issued queue, which is not the expected behavior. The DMA
engine API requires continuous processing.
Add a trigger to start the next transfer after one finishes if requests
remain in the issued queue.
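A minimal sketch of the idea (not the actual diff), assuming a virt-dma
based completion handler and the driver's own dw_edma_start_transfer()
helper:

  static void dw_edma_done_interrupt(struct dw_edma_chan *chan)
  {
          struct virt_dma_desc *vd;
          unsigned long flags;

          spin_lock_irqsave(&chan->vc.lock, flags);
          vd = vchan_next_desc(&chan->vc);
          if (vd) {
                  /* Complete the descriptor that just finished... */
                  list_del(&vd->node);
                  vchan_cookie_complete(vd);
          }
          /* ...and kick the next issued request, if any, so the queue
           * keeps draining instead of stalling after one transfer. */
          dw_edma_start_transfer(chan);
          spin_unlock_irqrestore(&chan->vc.lock, flags);
  }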
Fixes: e63d79d1ff ("dmaengine: Add Synopsys eDMA IP core driver")
Signed-off-by: Shunsuke Mie <mie@igel.co.jp>
Link: https://lore.kernel.org/r/20230411101758.438472-1-mie@igel.co.jp
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
[ Upstream commit 5fdca4a995 ]
Previously, readq_ch() did a 64-bit readq(), but truncated the result by
storing it in the u32 "value". Change "value" to u64 to avoid the
truncation.
Note: the method is currently unused, so the bug hasn't caused any problem
so far.
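For reference, the fix amounts to widening the local variable to match
the accessor (illustrative fragment, not the full readq_ch() body):

  u64 value;              /* was u32, which truncated the readq() result */

  value = readq(addr);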
Fixes: 04e0a39fc1 ("dmaengine: dw-edma: Add writeq() and readq() for 64 bits architectures")
Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
[ Upstream commit 13b6299cf6 ]
Interleaved DMA transfer support was added by 85e7518f42 ("dmaengine:
dw-edma: Add device_prep_interleave_dma() support"), but depending on the
selected channel either the source or the destination address was left
uninitialized, which is obviously wrong.
Initialize the destination address of the eDMA burst descriptors for
DEV_TO_MEM interleaved operations and the source address for MEM_TO_DEV
operations.
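Illustratively (field names such as burst->sar/burst->dar follow the
dw-edma descriptor layout but are assumptions here), the point is that
both addresses now get populated for either direction:

  switch (dir) {
  case DMA_DEV_TO_MEM:
          burst->sar = src_addr;
          burst->dar = dst_addr;  /* was left uninitialized before the fix */
          break;
  case DMA_MEM_TO_DEV:
          burst->sar = src_addr;  /* was left uninitialized before the fix */
          burst->dar = dst_addr;
          break;
  default:
          break;
  }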
Link: https://lore.kernel.org/r/20230113171409.30470-5-Sergey.Semin@baikalelectronics.ru
Fixes: 85e7518f42 ("dmaengine: dw-edma: Add device_prep_interleave_dma() support")
Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
Signed-off-by: Lorenzo Pieralisi <lpieralisi@kernel.org>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Acked-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
[ Upstream commit c1e3397917 ]
In accordance with [1, 2] the DW eDMA controller has been created to be
part of the DW PCIe Root Port and DW PCIe End-point controllers and to
offload the transfer of large blocks of data between the application and
remote PCIe domains, leaving the system CPU free for other tasks. In the
first case (eDMA being part of DW PCIe Root Port) the eDMA controller is
always accessible via the CPU DBI interface and never over the PCIe wire.
The latter case is more complex. Depending on the DW PCIe End-Point IP-core
synthesis parameters it's possible to have the eDMA registers accessible
not only from the application CPU side, but also via mapping the eDMA CSRs
over a dedicated endpoint BAR. So based on the specifics denoted above the
eDMA driver is supposed to support two types of DMA controller setups:
1) eDMA embedded into the DW PCIe Root Port/End-point and accessible over
the local CPU from the application side.
2) eDMA embedded into the DW PCIe End-point and accessible via the PCIe
wire with MWr/MRd TLPs generated by the CPU PCIe host controller.
Since the CPU memory resides on different sides of the link in these cases,
the semantics of the MEM_TO_DEV and DEV_TO_MEM operations are flipped with
respect to the Tx and Rx DMA channels. So MEM_TO_DEV/DEV_TO_MEM corresponds
to the Tx/Rx channels in setup 1) and to the Rx/Tx channels in setup 2).
The DW eDMA driver has supported the case 2) since e63d79d1ff
("dmaengine: Add Synopsys eDMA IP core driver") in the framework of the
drivers/dma/dw-edma/dw-edma-pcie.c driver.
The case 1) support was added later by bd96f1b2f4 ("dmaengine: dw-edma:
support local dma device transfer semantics"). Afterwards the driver was
supposed to cover both possible eDMA setups, but the latter commit turned
out not to be fully correct.
The problem was that, together with the new functionality, the commit also
changed the channel direction semantics: the eDMA Read-channel
(corresponding to the DMA_DEV_TO_MEM direction for case 1) now uses the
sgl/cyclic base addresses as the Source addresses of the DMA transfers and
dma_slave_config.dst_addr as the Destination address. Similarly the eDMA
Write-channel (corresponding to the DMA_MEM_TO_DEV direction for case 1)
now uses dma_slave_config.src_addr as the Source address and the sgl/cyclic
base addresses as the Destination addresses. This contradicts the logic of
the DMA interface, which implies that the DEV side belongs to the PCIe
device memory and the MEM side to the CPU/Application memory. Indeed it
seems irrational to have the SG-list defined in the PCIe bus space while
expecting a contiguous buffer allocated in the CPU memory. Moreover the
passed SG-list and cyclic DMA buffers are supposed to be mapped so that
they are visible to the DW eDMA Application (CPU) interface.
So in order to have the correct DW eDMA interface we need to invert the
eDMA Rd/Wr-channels and DMA-slave directions semantics by selecting the
src/dst addresses based on the DMA transfer direction instead of using the
channel direction capability.
[1] DesignWare Cores PCI Express Controller Databook - DWC PCIe Root Port,
v.5.40a, March 2019, p.1092
[2] DesignWare Cores PCI Express Controller Databook - DWC PCIe Endpoint,
v.5.40a, March 2019, p.1189
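A sketch of the resulting address selection (variable names such as xfer,
sconfig and mem_addr are assumptions): the slave config always describes
the DEV (PCIe bus) side and the sgl/cyclic address always describes the
MEM (CPU) side, regardless of which eDMA channel ends up being used.

  switch (xfer->direction) {
  case DMA_DEV_TO_MEM:
          burst->sar = sconfig->src_addr; /* PCIe bus (device) address */
          burst->dar = mem_addr;          /* sgl/cyclic (CPU) address  */
          break;
  case DMA_MEM_TO_DEV:
          burst->sar = mem_addr;          /* sgl/cyclic (CPU) address  */
          burst->dar = sconfig->dst_addr; /* PCIe bus (device) address */
          break;
  default:
          break;
  }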
Co-developed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Fixes: bd96f1b2f4 ("dmaengine: dw-edma: support local dma device transfer semantics")
Link: https://lore.kernel.org/r/20220524152159.2370739-7-Frank.Li@nxp.com
Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
Signed-off-by: Frank Li <Frank.Li@nxp.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-By: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Currently, a NULL check on the pcim_iomap_table() return value is missing,
which can lead to a NULL pointer dereference if the desired BAR wasn't
mapped previously.
Fix this by adding a NULL check and returning -ENOMEM.
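Illustratively (the bar index variable is a placeholder):

  void __iomem * const *iomap = pcim_iomap_table(pdev);

  if (!iomap[bar])
          return -ENOMEM;   /* the desired BAR was never iomapped */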
Addresses-Coverity: ("Dereference null return")
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Link: https://lore.kernel.org/r/bc5e6b8632c84660bb6dae454980e9419992ed14.1613674948.git.gustavo.pimentel@synopsys.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When the driver is compiled as a module, loaded, and then unloaded, the
kernel shows a crash log. The crash is caused by calling
dma_async_device_unregister() after the channels have already been
deleted; this patch fixes that issue.
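The teardown-order idea, sketched (struct member names follow dw-edma but
are assumptions here):

  /* Unregister from the dmaengine core before tearing the channels
   * down, so the core never touches already-freed channel objects. */
  dma_async_device_unregister(&dw->wr_edma);
  dma_async_device_unregister(&dw->rd_edma);

  /* only now delete/free the channel lists */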
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Link: https://lore.kernel.org/r/4aa850c035cf7ee488f1d3fb6dee0e37be0dce0a.1613674948.git.gustavo.pimentel@synopsys.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Change the linked list and data block offsets and sizes to follow the
recommendation given by the hardware team for the IPK solution.
Although the previous data block offsets and sizes are still valid and
functional, using them might cause issues with the IPK solution, since
that solution is FPGA based and might be subject to timing constraints.
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Link: https://lore.kernel.org/r/f682e7f7f06dc6b2efdd431481d6fb4d762c2c05.1613674948.git.gustavo.pimentel@synopsys.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In the previous implementation, the driver assumed that there existed
only two memory spaces that would equally distribute the amount of
read/write channels.
This might not be the case on some other implementations, therefore this
patch changes the requirement so that each write/read channel has its own
well-defined linked list and data space, allowing different sizes and
locations.
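Conceptually, each channel now carries its own region descriptors instead
of deriving them from two shared, equally split spaces; a sketch (field
names are assumptions modeled on the driver's region structure):

  struct dw_edma_region {
          phys_addr_t   paddr;
          void __iomem *vaddr;
          size_t        sz;
  };

  /* per channel, instead of one shared split per direction */
  struct dw_edma_chan_regions {
          struct dw_edma_region ll;  /* linked-list space */
          struct dw_edma_region dt;  /* data space        */
  };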
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Link: https://lore.kernel.org/r/2e316cb983f8a1e09ce929029f87619dc92a52de.1613674948.git.gustavo.pimentel@synopsys.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In the driver code structure, I tried to keep the code style consistent
by writing the write channel instructions first, followed by the read
channel instructions, mimicking the hardware implementation. However, this
style was not followed in some places. This patch fixes that; no
functional changes are expected.
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Link: https://lore.kernel.org/r/9bd1f86f19df8bb5de502fb85a0c5dc07978a9ba.1613674948.git.gustavo.pimentel@synopsys.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add support for the HDMA feature.
This new feature enables the current eDMA IP to use a deeper prefetch
of the linked list, which reduces the algorithm execution latency
observed when loading the elements of the list, resulting in more stable
and higher data transfer rates.
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Link: https://lore.kernel.org/r/5f40f89ef7d6255a12d5b23f34e6e59dcd28861e.1613674948.git.gustavo.pimentel@synopsys.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add writeq() and readq() support for 64-bit architectures.
These two functions allow 64-bit eDMA registers to be written or read in
a single access instead of two consecutive operations. This also enables
a PCI transaction optimization: a single 64-bit message is generated
instead of two 32-bit messages.
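A sketch of such accessors (the set_64()/get_64() names are illustrative;
the lo-hi fallback header keeps builds working where a native 64-bit
access isn't available):

  #include <linux/io-64-nonatomic-lo-hi.h>

  static inline void set_64(u64 value, void __iomem *addr)
  {
          writeq(value, addr);    /* one 64-bit write instead of two 32-bit */
  }

  static inline u64 get_64(void __iomem *addr)
  {
          return readq(addr);     /* one 64-bit read instead of two 32-bit */
  }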
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Link: https://lore.kernel.org/r/3f1120f7c6003b38ec8b851fc68936007c4d9fd8.1613674948.git.gustavo.pimentel@synopsys.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
If the dw_edma_alloc_burst() function fails then we free "chunk" but
it's still on the "desc->chunk->list" list so it will lead to a use
after free. Also the "->chunks_alloc" count is incremented when it
shouldn't be.
In current kernels small allocations are guaranteed to succeed and
dw_edma_alloc_burst() can't fail so this will not actually affect
runtime.
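The shape of the fix, sketched (not the actual diff): only link the chunk
into the descriptor, and bump the counter, after the burst allocation has
succeeded.

  chunk = kzalloc(sizeof(*chunk), GFP_NOWAIT);
  if (!chunk)
          return NULL;

  if (dw_edma_alloc_burst(chunk)) {
          kfree(chunk);           /* not on desc->chunk->list yet: safe */
          return NULL;
  }

  if (desc->chunk) {
          /* not the first chunk: link and count it only on success */
          list_add_tail(&chunk->list, &desc->chunk->list);
          desc->chunks_alloc++;
  }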
Fixes: e63d79d1ff ("dmaengine: Add Synopsys eDMA IP core driver")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Link: https://lore.kernel.org/r/X9dTBFrUPEvvW7qc@mwanda
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Fix the source and destination physical address calculation of a
peripheral device in the scatter-gather implementation.
This issue manifested during tests on a 64-bit architecture system. The
abnormal behavior wasn't visible before because all previous tests were
done on 32-bit architecture systems, which masked its effect.
Fixes: e63d79d1ff ("dmaengine: Add Synopsys eDMA IP core driver")
Cc: stable@vger.kernel.org
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Link: https://lore.kernel.org/r/8d3ab7e2ba96563fe3495b32f60077fffb85307d.1597327623.git.gustavo.pimentel@synopsys.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Modify dw_edma_device_transfer() to also support the semantics of dma
device transfer for additional use cases involving the pcitest utility as
a local initiator.
For its original use case, dw-edma supported the semantics of dma device
transfer from the perspective of a remote initiator who is located across
the PCIe bus from dma channel hardware.
To a remote initiator, DMA_DEV_TO_MEM means using a remote dma WRITE
channel to transfer from remote memory to local memory. A WRITE channel
would be employed on the remote device in order to move the contents of
remote memory to the bus destined for local memory.
To a remote initiator, DMA_MEM_TO_DEV means using a remote dma READ
channel to transfer from local memory to remote memory. A READ channel
would be employed on the remote device in order to move the contents of
local memory to the bus destined for remote memory.
From the perspective of a local dma initiator who is co-located on the
same side of the PCIe bus as the dma channel hardware, the semantics of
dma device transfer are flipped.
To a local initiator, DMA_DEV_TO_MEM means using a local dma READ channel
to transfer from remote memory to local memory. A READ channel would be
employed on the local device in order to move the contents of remote
memory to the bus destined for local memory.
To a local initiator, DMA_MEM_TO_DEV means using a local dma WRITE channel
to transfer from local memory to remote memory. A WRITE channel would be
employed on the local device in order to move the contents of local memory
to the bus destined for remote memory.
To support local dma initiators, dw_edma_device_transfer() is modified to
now examine the direction field of struct dma_slave_config for the channel
which initiators can configure by calling dmaengine_slave_config().
If direction is configured as either DMA_DEV_TO_MEM or DMA_MEM_TO_DEV,
local initiator semantics are used. If direction is any other value,
remote initiator semantics are used. This should maintain backward
compatibility with the original use case of dw-edma.
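A sketch of the check added to dw_edma_device_transfer() (simplified;
chan->config is the struct dma_slave_config set via
dmaengine_slave_config()):

  if (chan->config.direction == DMA_DEV_TO_MEM ||
      chan->config.direction == DMA_MEM_TO_DEV) {
          /* local initiator semantics: DEV_TO_MEM -> READ channel,
           * MEM_TO_DEV -> WRITE channel */
  } else {
          /* direction not configured: keep the original remote
           * initiator semantics for backward compatibility */
  }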
The dw-edma-test utility is an example of a remote initiator. From reading
its patch, dw-edma-test does not specifically set the direction field of
struct dma_slave_config. Since dw_edma_device_transfer() did not
previously check the direction field of struct dma_slave_config either,
it seems safe to use this convention in dw-edma to support both local and
remote initiator semantics.
Signed-off-by: Alan Mikhak <alan.mikhak@sifive.com>
Link: https://lore.kernel.org/r/1588122633-1552-1-git-send-email-alan.mikhak@sifive.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Modify dw_edma_irq_request() to check if a struct msi_desc entry exists
before copying the contents of its struct msi_msg pointer.
Without this sanity check, __get_cached_msi_msg() crashes when invoked by
dw_edma_irq_request() running on a Linux-based PCIe endpoint device. MSI
interrupts are not received by PCIe endpoint devices. If irq_get_msi_desc()
returns NULL, then there is no cached struct msi_msg to be copied.
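Illustratively (the irq variable and the msi destination field are
placeholders):

  if (irq_get_msi_desc(irq))
          get_cached_msi_msg(irq, &dw->irq[i].msi);
  /* else: no cached struct msi_msg exists, e.g. on a PCIe endpoint
   * that never receives MSI interrupts, so skip the copy */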
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Alan Mikhak <alan.mikhak@sifive.com>
Acked-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Link: https://lore.kernel.org/r/1587607101-31914-1-git-send-email-alan.mikhak@sifive.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Decouple dw-edma-core.c from struct pci_dev as a step toward integration
of dw-edma with pci-epf-test so the latter can initiate dma operations
locally from the endpoint side. A barrier to such integration is the
dependency of dw_edma_probe() and other functions in dw-edma-core.c on
struct pci_dev.
The Synopsys DesignWare dw-edma driver was designed to run on the host
side of the PCIe link to initiate DMA operations remotely using the eDMA
channels of the PCIe controller on the endpoint side. This can be inferred
from seeing that dw-edma uses struct pci_dev and accesses the hardware
registers of the dma channels across the bus using BAR0 and BAR2.
The ops field of struct dw_edma in dw-edma-core.h is currently undefined:
const struct dw_edma_core_ops *ops;
However, the kernel builds without failure even when the dw-edma driver is
enabled. Instead of removing the currently undefined and unused ops field,
define struct dw_edma_core_ops and use the ops field to decouple
dw-edma-core.c from struct pci_dev.
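The new callback table could look roughly like this (the member shown is
an assumption based on the description: it lets the core ask the glue
driver for an IRQ vector instead of calling pci_irq_vector() directly):

  struct dw_edma_core_ops {
          int (*irq_vector)(struct device *dev, unsigned int nr);
  };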
Signed-off-by: Alan Mikhak <alan.mikhak@sifive.com>
Acked-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Link: https://lore.kernel.org/r/1586971629-30196-1-git-send-email-alan.mikhak@sifive.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When building with 'make C=1', sparse reports an endianness bug:
drivers/dma/dw-edma/dw-edma-v0-debugfs.c:60:30: warning: cast removes address space of expression
drivers/dma/dw-edma/dw-edma-v0-debugfs.c:86:24: warning: incorrect type in argument 1 (different address spaces)
drivers/dma/dw-edma/dw-edma-v0-debugfs.c:86:24: expected void const volatile [noderef] <asn:2>*addr
drivers/dma/dw-edma/dw-edma-v0-debugfs.c:86:24: got void *[assigned] ptr
drivers/dma/dw-edma/dw-edma-v0-debugfs.c:86:24: warning: incorrect type in argument 1 (different address spaces)
drivers/dma/dw-edma/dw-edma-v0-debugfs.c:86:24: expected void const volatile [noderef] <asn:2>*addr
drivers/dma/dw-edma/dw-edma-v0-debugfs.c:86:24: got void *[assigned] ptr
drivers/dma/dw-edma/dw-edma-v0-debugfs.c:86:24: warning: incorrect type in argument 1 (different address spaces)
drivers/dma/dw-edma/dw-edma-v0-debugfs.c:86:24: expected void const volatile [noderef] <asn:2>*addr
drivers/dma/dw-edma/dw-edma-v0-debugfs.c:86:24: got void *[assigned] ptr
The current code is clearly wrong, as it passes an endian-swapped word
into a register function where it gets swapped again. Just pass the variables
directly into lower_32_bits()/upper_32_bits().
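In essence (illustrative; the real code writes the .lsb/.msb halves of a
64-bit register):

  /* before: the value was byte-swapped once by cpu_to_le32() and then
   * again by the register accessor */
  /* after: pass the native value and let the I/O accessor do the swap */
  writel(lower_32_bits(value), addr);
  writel(upper_32_bits(value), addr + 4);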
Fixes: 7e4b8a4fbe ("dmaengine: Add Synopsys eDMA IP version 0 support")
Link: https://lore.kernel.org/lkml/20190617131820.2470686-1-arnd@arndb.de/
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Link: https://lore.kernel.org/r/20190722124457.1093886-3-arnd@arndb.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The new driver mixes up dma_addr_t and __iomem pointers, which results
in warnings on some 32-bit architectures, like:
drivers/dma/dw-edma/dw-edma-v0-core.c: In function '__dw_regs':
drivers/dma/dw-edma/dw-edma-v0-core.c:28:9: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
return (struct dw_edma_v0_regs __iomem *)dw->rg_region.vaddr;
Make it use __iomem pointers consistently here, and avoid using dma_addr_t
for __iomem tokens altogether.
A small complication here is the debugfs code, which passes an __iomem
token as the private data for debugfs files, requiring the use of
extra __force.
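The resulting accessor, roughly (grounded in the warning above; the
region's vaddr field becomes a void __iomem pointer):

  static inline struct dw_edma_v0_regs __iomem *__dw_regs(struct dw_edma *dw)
  {
          return dw->rg_region.vaddr;   /* void __iomem *, no cast needed */
  }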
Fixes: 7e4b8a4fbe ("dmaengine: Add Synopsys eDMA IP version 0 support")
Link: https://lore.kernel.org/lkml/20190617131918.2518727-1-arnd@arndb.de/
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20190722124457.1093886-2-arnd@arndb.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Putting large constant data on the stack causes unnecessary overhead
and stack usage:
drivers/dma/dw-edma/dw-edma-v0-debugfs.c:285:6: error: stack frame size of 1376 bytes in function 'dw_edma_v0_debugfs_on' [-Werror,-Wframe-larger-than=]
Mark the variable 'static const' in order for the compiler to move it
into the .rodata section where it does no such harm.
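Illustrative shape of the change (the table's struct and contents are
placeholders, not the driver's actual definitions):

  /* was: a ~1.3 KB automatic array inside dw_edma_v0_debugfs_on() */
  static const struct {
          const char *name;
          u32 offset;
  } debugfs_regs[] = {
          { "example_reg", 0x0 },   /* placeholder entries */
          /* ... */
  };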
Fixes: 305aebeff8 ("dmaengine: Add Synopsys eDMA IP version 0 debugfs support")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Link: https://lore.kernel.org/r/20190722124457.1093886-1-arnd@arndb.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
If CONFIG_PCI_MSI is not set, building with CONFIG_DW_EDMA
fails:
drivers/dma/dw-edma/dw-edma-core.c: In function dw_edma_irq_request:
drivers/dma/dw-edma/dw-edma-core.c:784:21: error: implicit declaration of function pci_irq_vector; did you mean rcu_irq_enter? [-Werror=implicit-function-declaration]
err = request_irq(pci_irq_vector(to_pci_dev(dev), 0),
^~~~~~~~~~~~~~
Reported-by: Hulk Robot <hulkci@huawei.com>
Fixes: e63d79d1ff ("dmaengine: Add Synopsys eDMA IP core driver")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Synopsys eDMA IP is normally distributed along with Synopsys PCIe
EndPoint IP (depending on the use and licensing agreement).
This IP requires some basic configurations, such as:
- eDMA registers BAR
- eDMA registers offset
- eDMA registers size
- eDMA linked list memory BAR
- eDMA linked list memory offset
- eDMA linked list memory size
- eDMA data memory BAR
- eDMA data memory offset
- eDMA data memory size
- eDMA version
- eDMA mode
- IRQs available for eDMA
As a working example, PCIe glue-logic will attach to a Synopsys PCIe
EndPoint IP prototype kit (Vendor ID = 0x16c3, Device ID = 0xedda),
which has a built-in eDMA IP with this default configuration:
- eDMA registers BAR = 0
- eDMA registers offset = 0x00001000 (4 Kbytes)
- eDMA registers size = 0x00002000 (8 Kbytes)
- eDMA linked list memory BAR = 2
- eDMA linked list memory offset = 0x00000000 (0 Kbytes)
- eDMA linked list memory size = 0x00800000 (8 Mbytes)
- eDMA data memory BAR = 2
- eDMA data memory offset = 0x00800000 (8 Mbytes)
- eDMA data memory size = 0x03800000 (56 Mbytes)
- eDMA version = 0
- eDMA mode = EDMA_MODE_UNROLL
- IRQs = 1
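The defaults above could be captured as PCI driver_data roughly as
follows; the struct and field names (dw_edma_pcie_data, rg_bar, ...) are
assumptions for this sketch, not the driver's actual definitions:

  static const struct dw_edma_pcie_data snps_edda_data = {
          .rg_bar = BAR_0, .rg_off = 0x00001000, .rg_sz = 0x00002000,
          .ll_bar = BAR_2, .ll_off = 0x00000000, .ll_sz = 0x00800000,
          .dt_bar = BAR_2, .dt_off = 0x00800000, .dt_sz = 0x03800000,
          .version = 0,
          .mode    = EDMA_MODE_UNROLL,
          .irqs    = 1,
  };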
This driver can be compiled as built-in or as an external kernel module.
To enable it, select the DW_EDMA_PCIE option in the kernel configuration;
it also requires and automatically selects the DW_EDMA option.
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add support for the eDMA IP version 0 driver for both register maps (legacy
and unroll).
The legacy register mapping was the initial implementation: all channel
registers are multiplexed and can be switched at any time (which could
lead to a race condition) through a view port register, so only one
channel is accessible at a time.
This register mapping is not very effective or efficient in a
multithreaded environment, which led to the development of the unroll
register mapping, where the registers of all channels are accessible at
any time because each channel's register block is placed at a fixed
offset from the previous one.
This version supports a maximum of 16 independent channels (8 write +
8 read), which can run simultaneously.
Implements a scatter-gather transfer through a linked list, where the
size of the linked list depends on the allocated memory divided equally
among all channels.
Each linked list descriptor can transfer from 1 byte to 4 Gbytes and is
DWORD aligned.
Both SAR (Source Address Register) and DAR (Destination Address Register)
are byte aligned.
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add the Synopsys PCIe Endpoint eDMA IP core driver to the kernel.
This IP is generally distributed with Synopsys PCIe Endpoint IP
(depending on the use and licensing agreement).
This core driver initializes and configures the eDMA IP using vma-helpers
functions and the dma-engine subsystem.
This driver can be compiled as built-in or as an external kernel module.
To enable it, select the DW_EDMA option in the kernel configuration; it
also requires and automatically selects the DMA_ENGINE and
DMA_VIRTUAL_CHANNELS options.
In order to transfer data from point A to B as fast as possible this IP
requires a dedicated memory space containing a linked list of elements.
All elements of this linked list are contiguous and each one describes a
data transfer (source and destination addresses, length and a control
variable).
For the sake of simplicity, let's assume a memory space for write channel
0 which allows about 42 elements.
+---------+
| Desc #0 |-+
+---------+ |
V
+----------+
| Chunk #0 |-+
| CB = 1 | | +----------+ +-----+ +-----------+ +-----+
+----------+ +->| Burst #0 |->| ... |->| Burst #41 |->| llp |
| +----------+ +-----+ +-----------+ +-----+
V
+----------+
| Chunk #1 |-+
| CB = 0 | | +-----------+ +-----+ +-----------+ +-----+
+----------+ +->| Burst #42 |->| ... |->| Burst #83 |->| llp |
| +-----------+ +-----+ +-----------+ +-----+
V
+----------+
| Chunk #2 |-+
| CB = 1 | | +-----------+ +-----+ +------------+ +-----+
+----------+ +->| Burst #84 |->| ... |->| Burst #125 |->| llp |
| +-----------+ +-----+ +------------+ +-----+
V
+----------+
| Chunk #3 |-+
| CB = 0 | | +------------+ +-----+ +------------+ +-----+
+----------+ +->| Burst #126 |->| ... |->| Burst #129 |->| llp |
+------------+ +-----+ +------------+ +-----+
Legend:
- Linked list, also known as Chunk
- Linked list element, also known as Burst
- CB, also known as Change Bit: a control bit (typically toggled) that
  allows the current linked list to be easily identified and
  differentiated from the previous or the next one.
- LLP: a special element that indicates the end of the linked list
  element stream and also informs that the next CB should be toggled.
On the last Burst of every Chunk (Burst #41, Burst #83, Burst #125 or
even Burst #129) some flags are set in its control variable (the RIE and
LIE bits) that trigger the "done" interrupt.
In the interrupt callback it is decided whether to recycle the linked
list memory space by writing a new set of Burst elements (if there are
still Chunks to transfer) or to consider the transfer completed (if there
are no Chunks left to transfer).
In scatter-gather transfer mode, the client submits a scatter-gather
list of n elements (130 in this example), which is divided into multiple
Chunks. Each Chunk holds a limited number of Bursts (42 in this example);
after all of its Bursts have been transferred an interrupt is triggered,
which allows the dedicated linked list memory to be recycled with the
information of the next Chunk and its associated Bursts, repeating the
whole cycle again.
In cyclic transfer mode, the client submits a buffer pointer, its length
and the number of repetitions; in this case each Burst corresponds
directly to one repetition.
Each Burst describes a data transfer from point A (source) to point B
(destination) with a length that can be from 1 byte up to 4 GB. Since the
dedicated memory space where the linked list resides is limited, the
whole set of n Burst elements is organized into several Chunks, which are
used later to recycle the dedicated memory space and initiate a new
sequence of data transfers.
The whole transfer is considered completed once all Bursts have been
transferred.
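An illustrative view of the bookkeeping described above (field names are
assumptions): a descriptor owns a list of Chunks, and each Chunk owns the
list of Bursts that fits into the dedicated linked-list memory.

  struct dw_edma_burst {
          struct list_head list;
          u64 sar;          /* source address               */
          u64 dar;          /* destination address          */
          u32 sz;           /* transfer size, 1 byte..4 GB  */
  };

  struct dw_edma_chunk {
          struct list_head list;
          u8  cb;           /* change bit, toggled per Chunk */
          u32 bursts_alloc; /* Bursts queued in this Chunk   */
          /* plus the region where this Chunk's list is written */
  };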
Currently this IP has a well-known register map, which includes support
for legacy and unroll modes. Legacy mode is the version of this register
map that has a multiplexer register to switch the register view between
all write and read channels, while unroll mode repeats the registers of
every write and read channel with an offset between them. This register
map is called v0.
The IP team is creating a new register map more suitable for the latest
PCIe features, which will very likely change the register map; that
version will be called v1. As soon as this new version is released by the
IP team, support for it will be included in this driver.
Logically, patches 1, 2 and 3 should be squashed into a single patch,
but for the sake of review simplicity they were split into these 3
patches.
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>