Commit Graph

3006 Commits

Author SHA1 Message Date
Ben Dooks 7978a583b1 dmaengine: sirf: fix un-exported struct warnings
The sirfsoc_dmadata structs are not used outside the driver, so
remove build warnings by making them static. Fixes:

drivers/dma/sirf-dma.c:1129:24: warning: symbol 'sirfsoc_dmadata_a6' was not declared. Should it be static?
drivers/dma/sirf-dma.c:1134:24: warning: symbol 'sirfsoc_dmadata_a7v1' was not declared. Should it be static?
drivers/dma/sirf-dma.c:1139:24: warning: symbol 'sirfsoc_dmadata_a7v2' was not declared. Should it be static?

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-06-08 09:02:16 +05:30
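
The change amounts to the pattern below; a minimal sketch, with the struct contents invented for illustration rather than taken from sirf-dma:

    /* Illustrative struct; the real sirfsoc_dmadata layout differs. */
    struct sirfsoc_dmadata {
            int type;
    };

    /* Before: a global definition without an extern declaration -> sparse warning. */
    /* struct sirfsoc_dmadata sirfsoc_dmadata_a6 = { .type = 0 }; */

    /* After: static limits the symbol to this file and silences the warning. */
    static struct sirfsoc_dmadata sirfsoc_dmadata_a6 = {
            .type = 0,
    };
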
Ben Dooks 0161df1325 dmaengine: ste_dma40_ll: make d40_width_to_bits static
Fix warning due to d40_width_to_bits() not being used outside
this file. Fixes:

drivers/dma/ste_dma40_ll.c:13:4: warning: symbol 'd40_width_to_bits' was not declared. Should it be static?

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-06-08 08:59:55 +05:30
Peter Ujfalusi 2e4ed0879e dmaengine: edma: Use early completion for intermediate paRAM set in slave_sg
The driver limits the number of physical paRAM slots to be used by channels.
If the transfer needs more slots (more SGs) then the transfer is broken up
into smaller chunks. When a chunk is finished the driver rewrites the
physical slots and continues the transfer. This setup can take some time
and we might miss DMA events. If the intermediate set completion uses
early completion (the interrupt fires when the last slot is issued to the
TPTC and not when the transfer is finished by the TPTC) we will have a bit
more time to update the paRAM slots and are less likely to miss events.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-06-08 08:57:29 +05:30
Stefan Roese 5156463588 dmaengine: mv_xor: Fix incorrect offset in dma_map_page()
Upon booting, I occasionally spotted some BUGs triggered by the internal
DMA test routine executed upon driver probing. This was detected by
SLUB_DEBUG ("Freechain corrupt" or "Redzone overwritten"). Tracking
this down located a problem with passing 0 as the offset in dma_map_page(),
as kmalloc, especially when used with SLUB_DEBUG, may return an address
that is not page aligned.

This patch fixes this issue by passing the correct offset in
dma_map_page().

Tested on a custom Armada XP board.

Signed-off-by: Stefan Roese <sr@denx.de>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Gregory CLEMENT <gregory.clement@free-electrons.com>
Cc: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-06-07 12:44:23 +05:30
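
The underlying pattern, as a hedged sketch (names are illustrative, not the mv_xor self-test code): a kmalloc()ed buffer is not guaranteed to start at a page boundary, so the page offset has to be derived from the address instead of being hard-coded to 0.

    #include <linux/dma-mapping.h>
    #include <linux/mm.h>

    static dma_addr_t map_src_buf(struct device *dev, void *src, size_t len)
    {
            /* Wrong: dma_map_page(dev, virt_to_page(src), 0, len, ...)
             * maps the start of the page, not the buffer itself. */
            return dma_map_page(dev, virt_to_page(src),
                                offset_in_page(src), len, DMA_TO_DEVICE);
    }
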
Stefan Roese a4a1e53df4 dmaengine: mv_xor: Minor coding style fix
Remove the space before the "err_free_dma:" label in mv_xor_channel_add().

Signed-off-by: Stefan Roese <sr@denx.de>
Cc: Gregory CLEMENT <gregory.clement@free-electrons.com>
Cc: Marcin Wojtas <mw@semihalf.com>
Reviewed-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-06-07 12:43:40 +05:30
Kedareswara rao Appana 6214786651 dmaengine: vdma: Use dma_pool_zalloc
dma_pool_zalloc() combines dma_pool_alloc() and a memset() to zero;
this patch updates the driver to use dma_pool_zalloc().

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-06-07 11:48:24 +05:30
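
The conversion follows the usual pattern; a sketch with illustrative names rather than the exact vdma code:

    /* before */
    seg = dma_pool_alloc(chan->desc_pool, GFP_ATOMIC, &phys);
    if (!seg)
            return NULL;
    memset(seg, 0, sizeof(*seg));

    /* after: allocation and zeroing in one call */
    seg = dma_pool_zalloc(chan->desc_pool, GFP_ATOMIC, &phys);
    if (!seg)
            return NULL;
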
Kedareswara rao Appana 92d794dfb6 dmaengine: vdma: Add support for cyclic dma mode
This patch adds support for AXI DMA cyclic dma mode.
In cyclic mode, DMA fetches and processes the same
BDs without interruption. The DMA continues to fetch and process
until it is stopped or reset.

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-06-07 11:48:24 +05:30
Ludovic Desroches 9295c41d77 dmaengine: at_xdmac: double FIFO flush needed to compute residue
Due to the way the CUBC register is updated, a double flush is needed to
compute an accurate residue. The first flush aims to get data out of the DMA
FIFO, and the second ensures that we won't report data which is not yet in
memory.

Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel
eXtended DMA Controller driver")
Cc: stable@vger.kernel.org #v4.1 and later
Reviewed-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-30 10:47:52 +05:30
Ludovic Desroches 53398f4888 dmaengine: at_xdmac: fix residue corruption
An unexpected value of CUBC can lead to a corrupted residue. A more
complex sequence is needed to detect an inaccurate value for NCA or CUBC.

Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel
eXtended DMA Controller driver")
Cc: stable@vger.kernel.org #v4.1 and later
Reviewed-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-30 10:47:52 +05:30
Ludovic Desroches 4a9723e8df dmaengine: at_xdmac: align descriptors on 64 bits
Having descriptors aligned on 64 bits allows updating CNDA and CUBC in an
atomic way.

Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel
eXtended DMA Controller driver")
Cc: stable@vger.kernel.org #v4.1 and later
Reviewed-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-30 10:47:52 +05:30
Kuninori Morimoto 3565fe5333 dmaengine: rcar-dmac: use list_add() on rcar_dmac_desc_put()
For each descriptor, in addition to the memory used by the descriptors
structure itself, the driver allocates a list of chunks as well as a
buffer for hardware descriptors. Descriptors themselves are preallocated,
and allocation of the chunks and buffer is performed the first time the
descriptor is used. The memory isn't freed when the transfer is completed,
as the chunks and buffer will be needed again when the descriptor is
reused internally, so the driver keeps the memory around.

If only a few descriptors are used concurrently, the current
list_add_tail() implementation will result in all preallocated descriptors
being used before going back to the first one, and will thus allocate
chunks and a buffer for all preallocated descriptors. Using list_add()
will put the complete descriptor at the head of the list of available
descriptors, so the next transfer will be more likely to reuse a
descriptor that already has associated memory instead of one that has
never been used before.

Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-30 09:12:00 +05:30
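
The difference is only where a completed descriptor re-enters the free list; a sketch with simplified field names:

    /* Returning a completed descriptor to the per-channel free list: */

    /* list_add_tail(): FIFO reuse - every preallocated descriptor gets
     * cycled through, so each one eventually allocates chunks + hwdescs. */
    list_add_tail(&desc->node, &chan->desc.free);

    /* list_add(): LIFO reuse - the next transfer most likely picks a
     * descriptor that already has its chunks and hwdesc buffer. */
    list_add(&desc->node, &chan->desc.free);
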
Arnd Bergmann 287980e49f remove lots of IS_ERR_VALUE abuses
Most users of IS_ERR_VALUE() in the kernel are wrong, as they
pass an 'int' into a function that takes an 'unsigned long'
argument. This happens to work because the type is sign-extended
on 64-bit architectures before it gets converted into an
unsigned type.

However, anything that passes an 'unsigned short' or 'unsigned int'
argument into IS_ERR_VALUE() is guaranteed to be broken, as are
8-bit integers and types that are wider than 'unsigned long'.

Andrzej Hajda has already fixed a lot of the worst abusers that
were causing actual bugs, but it would be nice to prevent any
users that are not passing 'unsigned long' arguments.

This patch changes all users of IS_ERR_VALUE() that I could find
on 32-bit ARM randconfig builds and x86 allmodconfig. For the
moment, this doesn't change the definition of IS_ERR_VALUE()
because there are probably still architecture specific users
elsewhere.

Almost all the warnings I got are for files that are better off
using 'if (err)' or 'if (err < 0)'.
The only legitimate user I could find that we get a warning for
is the (32-bit only) freescale fman driver, so I did not remove
the IS_ERR_VALUE() there but changed the type to 'unsigned long'.
For 9pfs, I just worked around one user whose calling conventions
are so obscure that I did not dare change the behavior.

I was using this definition for testing:

 #define IS_ERR_VALUE(x) ((unsigned long*)NULL == (typeof (x)*)NULL && \
       unlikely((unsigned long long)(x) >= (unsigned long long)(typeof(x))-MAX_ERRNO))

which ends up making all 16-bit or wider types work correctly with
the most plausible interpretation of what IS_ERR_VALUE() was supposed
to return according to its users, but also causes a compile-time
warning for any users that do not pass an 'unsigned long' argument.

I suggested this approach earlier this year, but back then we ended
up deciding to just fix the users that are obviously broken. After
the initial warning that caused me to get involved in the discussion
(fs/gfs2/dir.c) showed up again in the mainline kernel, Linus
asked me to send the whole thing again.

[ Updated the 9p parts as per Al Viro  - Linus ]

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Andrzej Hajda <a.hajda@samsung.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: https://lkml.org/lkml/2016/1/7/363
Link: https://lkml.org/lkml/2016/5/27/486
Acked-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org> # For nvmem part
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-05-27 15:26:11 -07:00
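
For the common case described above, the preferred fix is a plain signed comparison; a sketch:

    int err = do_something();        /* illustrative call */

    /* Happens to work on 64-bit only via sign extension of 'int', and is
     * guaranteed broken for u8/u16/u32 or types wider than unsigned long: */
    if (IS_ERR_VALUE(err))
            return err;

    /* Preferred for plain error codes: */
    if (err < 0)
            return err;
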
Linus Torvalds a0d3c7c5c0 dmaengine updates for 4.7
This time round the update brings in following changes:
 
  - New tegra driver for ADMA device
  - Support for Xilinx AXI Direct Memory Access Engine and Xilinx AXI Central
    Direct Memory Access Engine and a few updates to this driver.
  - New cyclic capability for sun6i and a few updates.
  - Slave-sg support in bcm2835.
  - Updates to many drivers like designware, hsu, mv_xor, pxa, edma,
    qcom_hidma & bam.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABAgAGBQJXPVb9AAoJEHwUBw8lI4NHnDQP/AtUYBTI8XD68iGh5eCTEtwO
 3dNgUmOvIAIl0ZtVKex3b7j2S52IN7EDv44QmsmvMHgjvaupUsZ/HeIHgoI37y39
 /qoRkyiG75ht68BrNjKcpJLsOyxaAUT1tMyf/bYXlDW8O7qEPtRDhuvUB+i+s3RX
 ljNOQXH2WaQTJrNeZxkvbp92iGiu3j7AKyCh9MJ4gnF4y2oA1bFp++QpH5qcBOTp
 0nccs7pgDQhw2nzHmhYbEmvgcKPrPQi+67U7eIed7n7wiThAIXIEbZl6AYk9kFaK
 gSa4/N3fwnZc9TFR5O6qdanvsYdW4JC1P5Ydm0opExo3lgtMckQ3sGKFIwTG8eU4
 YiyQE1uVHRqT82zxPCecTF+I0Y4g68oCJURrHED6kxKGA5a8ojU04aGebXDiNKlp
 FEDceEC5ch7ZPw8CCTola+TYpf9Vni3g7OkrdkPY9cX/aDXDROghTCg9jgPJ2aL/
 oai5axc5gQMEFzHPaEwFp45tgXw7IvIzaqYHmiWE11fsRbGUSB2HAwBXytI9ReC0
 XTMBvc08YvisbIpIR29T0R5cerzdDuK9bXxYHHHOeUFg0t8R8UGaP1UxEQCVmLsT
 AIrHupoccPJ7IAn0h6mShtZ2yzBfj3rU4tEMJR/Oj/VvjW3gKbbZ5XVi92fOurBs
 xjn9uBBZ/Pt9hgprwlmY
 =0Sy7
 -----END PGP SIGNATURE-----

Merge tag 'dmaengine-4.7-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:
 "This time round the update brings in following changes:

   - new tegra driver for ADMA device

   - support for Xilinx AXI Direct Memory Access Engine and Xilinx AXI
     Central Direct Memory Access Engine and a few updates to this driver

   - new cyclic capability for sun6i and a few updates

   - slave-sg support in bcm2835

   - updates to many drivers like designware, hsu, mv_xor, pxa, edma,
     qcom_hidma & bam"

* tag 'dmaengine-4.7-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (84 commits)
  dmaengine: ioatdma: disable relaxed ordering for ioatdma
  dmaengine: of_dma: approximate an average distribution
  dmaengine: core: Use IS_ENABLED() instead of checking for built-in or module
  dmaengine: edma: Re-evaluate errors when ccerr is triggered w/o error event
  dmaengine: qcom_hidma: add support for object hierarchy
  dmaengine: qcom_hidma: add debugfs hooks
  dmaengine: qcom_hidma: implement lower level hardware interface
  dmaengine: vdma: Add clock support
  Documentation: DT: vdma: Add clock support for dmas
  dmaengine: vdma: Add config structure to differentiate dmas
  MAINTAINERS: Update Tegra DMA maintainers
  dmaengine: tegra-adma: Add support for Tegra210 ADMA
  Documentation: DT: Add binding documentation for NVIDIA ADMA
  dmaengine: vdma: Add Support for Xilinx AXI Central Direct Memory Access Engine
  Documentation: DT: vdma: update binding doc for AXI CDMA
  dmaengine: vdma: Add Support for Xilinx AXI Direct Memory Access Engine
  Documentation: DT: vdma: update binding doc for AXI DMA
  dmaengine: vdma: Rename xilinx_vdma_ prefix to xilinx_dma
  dmaengine: slave means at least one of DMA_SLAVE, DMA_CYCLIC
  dmaengine: mv_xor: Allow selecting mv_xor for mvebu only compatible SoC
  ...
2016-05-19 11:47:18 -07:00
Vinod Koul f9114a54c1 Merge branch 'topic/xilinx' into for-linus 2016-05-17 10:15:34 +05:30
Vinod Koul 0f5c85f48a Merge branch 'topic/tegra' into for-linus 2016-05-17 10:15:27 +05:30
Vinod Koul 53b84bad9e Merge branch 'topic/sun6i' into for-linus 2016-05-17 10:15:20 +05:30
Vinod Koul 82770a2f65 Merge branch 'topic/qcom' into for-linus 2016-05-17 10:15:13 +05:30
Vinod Koul ba8b6cc072 Merge branch 'topic/pxa' into for-linus 2016-05-17 10:15:06 +05:30
Vinod Koul 4dc50060c9 Merge branch 'topic/pl08x' into for-linus 2016-05-17 10:14:59 +05:30
Vinod Koul 112db20e81 Merge branch 'topic/mv_xor' into for-linus 2016-05-17 10:14:50 +05:30
Vinod Koul ee5644ce4b Merge branch 'topic/mpc512x' into for-linus 2016-05-17 10:14:40 +05:30
Vinod Koul 8dfc27af62 Merge branch 'topic/hsu' into for-linus 2016-05-17 10:14:30 +05:30
Vinod Koul 56214883c5 Merge branch 'topic/dw' into for-linus 2016-05-17 10:14:16 +05:30
Vinod Koul 95c4dc7b2c Merge branch 'topic/bcm' into for-linus 2016-05-17 10:14:07 +05:30
Vinod Koul a365c96854 Merge branch 'topic/core' into for-linus 2016-05-17 10:13:40 +05:30
Dave Jiang 511deae026 dmaengine: ioatdma: disable relaxed ordering for ioatdma
ioatdma by default is in snoop mode. Relaxed ordering, according to the spec,
does not do anything in snoop mode. However, it causes hangs or significant
performance degradation when tested with NTB. Disable it in the driver
because some BIOSes do not configure it correctly.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-14 13:36:52 +05:30
Niklas Söderlund 20ea6be6bf dmaengine: of_dma: approximate an average distribution
Currently the following DT description would result in dmac0 always
being tried first and dmac1 second if dmac0 was unavailable. This
results in heavier use of dmac0 than of dmac1. This patch adds an
approximate average distribution over the two nodes, lessening the load
on any one of them.

   i2c6: i2c@e60b0000 {
           ...
           dmas = <&dmac0 0x77>, <&dmac0 0x78>,
                  <&dmac1 0x77>, <&dmac1 0x78>;
           dma-names = "tx", "rx", "tx", "rx";
           ...
   };

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-14 13:34:10 +05:30
Javier Martinez Canillas d57d3a48ca dmaengine: core: Use IS_ENABLED() instead of checking for built-in or module
The IS_ENABLED() macro checks if a Kconfig symbol has been enabled either
built-in or as a module, use that macro instead of open coding the same.

Signed-off-by: Javier Martinez Canillas <javier@osg.samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-14 13:32:03 +05:30
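
The conversion is mechanical; a sketch, with CONFIG_DMA_FOO standing in for whichever symbol the core actually checks:

    /* open-coded check for "built-in or module" */
    #if defined(CONFIG_DMA_FOO) || defined(CONFIG_DMA_FOO_MODULE)
    /* ... */
    #endif

    /* equivalent, using the helper from <linux/kconfig.h> */
    #if IS_ENABLED(CONFIG_DMA_FOO)
    /* ... */
    #endif
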
Peter Ujfalusi 3b2bc8a732 dmaengine: edma: Re-evaluate errors when ccerr is triggered w/o error event
When the ccerr handler is called but the error registers indicate no error
events, we need to command eDMA to re-evaluate the errors. Otherwise we can
receive a flood of error interrupts.

Reported-by: Roger Quadros <rogerq@ti.com>
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-14 13:27:10 +05:30
Sinan Kaya 42d236f8a4 dmaengine: qcom_hidma: add support for object hierarchy
In order to create a relationship model between the channels and the
management object, we are adding support for object hierarchy to the
drivers. This patch simplifies the userspace application development.
We will not have to traverse different firmware paths based on device
tree or ACPI based kernels.

No matter what flavor of kernel is used, objects will be represented as
platform devices.

The new layout is as follows:

hidmam_10: hidma-mgmt@0x5A000000 {
	compatible = "qcom,hidma-mgmt-1.0";
	...

	hidma_10: hidma@0x5a010000 {
			compatible = "qcom,hidma-1.0";
			...
	}
}

hidma_mgmt_init() detects each instance of the hidma-mgmt-1.0 objects
in the device tree and calls into the channel driver to create platform
devices for each child of the management object.

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-14 11:54:45 +05:30
Sinan Kaya 570d017629 dmaengine: qcom_hidma: add debugfs hooks
Add debugfs hooks for debugging the execution behavior of the DMA
channel. The debugfs hooks get initialized by the probe function and
uninitialized by the remove function.

A stats file is created in debugfs. The stats file will show the
information about each HIDMA channel as well as each asynchronous job
queued and completed at a given time.

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-14 11:54:45 +05:30
Sinan Kaya d1615ca2e0 dmaengine: qcom_hidma: implement lower level hardware interface
This patch implements the hardware hooks for the HIDMA channel driver.

The main functions of interest are:
- hidma_ll_init
- hidma_ll_request
- hidma_ll_queue_request
- hidma_ll_hw_start

The OS layer calls the hidma_ll_init function during probe to set up the
hardware. At this moment, the number of supported descriptors is also
given. On each request, a descriptor is allocated from the free pool and
filled in with the transfer parameters. Multiple requests can be queued
into the hardware via the OS interface. When the client is ready for
requests to be executed, the start method is called.

Completions are delivered via callbacks from a tasklet.

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-14 11:54:45 +05:30
Kedareswara rao Appana ba16db36b5 dmaengine: vdma: Add clock support
Added basic clock support for the AXI DMAs.
The clocks are requested at probe and released at remove.

Reviewed-by: Shubhrajyoti Datta <shubhraj@xilinx.com>
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-13 15:00:18 +05:30
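
A minimal sketch of the probe/remove clock handling described here, assuming a single clock named "axi" and a driver-private xdev structure (the real driver handles several clocks depending on the DMA configuration):

    /* probe: request and enable the clock */
    xdev->axi_clk = devm_clk_get(&pdev->dev, "axi");
    if (IS_ERR(xdev->axi_clk))
            return PTR_ERR(xdev->axi_clk);

    err = clk_prepare_enable(xdev->axi_clk);
    if (err)
            return err;

    /* remove: release it again */
    clk_disable_unprepare(xdev->axi_clk);
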
Kedareswara rao Appana fb2366675e dmaengine: vdma: Add config structure to differentiate dmas
This patch adds a config structure to the driver to differentiate the
AXI DMAs and to add more features (clock support, etc.) to these DMAs.

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-13 15:00:18 +05:30
Jon Hunter f46b195799 dmaengine: tegra-adma: Add support for Tegra210 ADMA
Add support for the Tegra210 Audio DMA controller that is used for
transferring data between system memory and the Audio sub-system.
The driver only supports cyclic transfers because this is being solely
used for audio.

This driver is based upon the work by Dara Ramesh <dramesh@nvidia.com>.

Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-13 14:56:24 +05:30
Kedareswara rao Appana 07b0e7d49c dmaengine: vdma: Add Support for Xilinx AXI Central Direct Memory Access Engine
This patch adds support for the AXI Central Direct Memory Access
(AXI CDMA) core to the existing vdma driver. AXI CDMA is a
soft Xilinx IP core that provides high-bandwidth
Direct Memory Access (DMA) between a memory-mapped
source address and a memory-mapped destination address.

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-12 11:58:58 +05:30
Kedareswara rao Appana c0bba3a99f dmaengine: vdma: Add Support for Xilinx AXI Direct Memory Access Engine
This patch adds support for the AXI Direct Memory Access (AXI DMA)
core in the existing vdma driver. The AXI DMA core is a
soft Xilinx IP core that provides high-bandwidth
direct memory access between memory and AXI4-Stream
type target peripherals.

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-12 11:58:30 +05:30
Kedareswara rao Appana 42c1a2ede4 dmaengine: vdma: Rename xilinx_vdma_ prefix to xilinx_dma
This patch renames the xilinx_vdma_ prefix to xilinx_dma
for the APIs and masks that will be shared between the three DMA
IP cores.

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-12 11:58:30 +05:30
Andy Shevchenko dd4e91d538 dmaengine: slave means at least one of DMA_SLAVE, DMA_CYCLIC
When checking for capabilities, recognize slave support when either the
DMA_SLAVE or the DMA_CYCLIC bit is set. If we don't do that, the user can't
get working DMA support for engines that have only one of the mentioned
bits set.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-12 11:14:56 +05:30
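
The check boils down to treating either capability bit as "slave-capable"; a sketch:

    /* Treat an engine as slave-capable if it advertises either bit. */
    if (dma_has_cap(DMA_SLAVE, device->cap_mask) ||
        dma_has_cap(DMA_CYCLIC, device->cap_mask)) {
            /* fill in slave capabilities: directions, widths, ... */
    }
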
Gregory CLEMENT c39290a1f3 dmaengine: mv_xor: Allow selecting mv_xor for mvebu only compatible SoC
The Armada 3700 SoC uses the mv_xor driver but no longer selects the
PLAT_ORION symbol. This commit extends the dependency of the mv_xor
driver to the more modern SoCs only compatible with ARCH_MVEBU, which
allows using it with the Armada 3700 SoC.

At the same time it also adds the COMPILE_TEST dependency, allowing wider
test coverage.

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-03 12:27:47 +05:30
Marcin Wojtas ac5f0f3f86 dmaengine: mv_xor: add support for Armada 3700 SoC
The Armada 3700 SoC comprises a single XOR engine compliant with the ones
used in older Marvell SoCs like Armada XP or 38x. The only thing that needs
modification is the Mbus configuration, which has to be done on two
levels: global and in the device. The first one is inherited from the
bootloader. The latter can be opened in a default way, leaving
arbitration to the bus controller. Hence a filled mbus_dram_target_info
structure is not needed.

The patch "dmaengine: mv_xor: optimize performance by using a subset
of the XOR channels" introduced a limitation on the use of XOR engines and
channels versus the number of available CPUs. Those constraints do not,
however, fit the Armada 3700 architecture with its two possible CPUs and
single, dual-channel engine. Hence in this commit an adjustment for setting
the maximum number of available channels is added.

This patch enables XOR access to DRAM by opening default window to 4GB
space with specific attribute.

Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-03 12:27:47 +05:30
Gregory CLEMENT dd130c652c dmaengine: mv_xor: use SoC type instead of directly the operation mode
Currently the main difference between the legacy XOR engine and the newer
one is the way the engine modes are set up (either in the descriptor or
through the controller registers). In order to be able to take into account
the new generation of the XOR engine for the ARM64 SoC, we need to identify
them by type, and then, depending on the type, the engine setup will be
selected.

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-03 12:27:47 +05:30
Gregory CLEMENT bc822e1251 dmaengine: mv_xor: make the code 64 bits compliant
Fix two warnings which appear when building for 64 bits target:

drivers/dma/mv_xor.c: In function ‘mv_xor_prep_dma_xor’:
drivers/dma/mv_xor.c:480:3: warning: format ‘%u’ expects argument of type ‘unsigned int’, but argument 6 has type ‘size_t {aka long unsigned int}’ [-Wformat=]
   "%s src_cnt: %d len: %u dest %pad flags: %ld\n",
   ^
drivers/dma/mv_xor.c: In function ‘mv_xor_probe’:
drivers/dma/mv_xor.c:1223:17: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
    op_in_desc = (int)of_id->data;
                 ^

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-03 12:27:47 +05:30
Julia Lawall 2ba4f8abfe dmaengine: vdma: Use dma_pool_zalloc
Dma_pool_zalloc combines dma_pool_alloc and memset 0.  The semantic patch
that makes this transformation is as follows: (http://coccinelle.lip6.fr/)

// <smpl>
@@
expression d,e;
statement S;
@@

        d =
-            dma_pool_alloc
+            dma_pool_zalloc
             (...);
        if (!d) S
-       memset(d, 0, sizeof(*d));
// </smpl>

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Acked-by: Sören Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-03 12:24:11 +05:30
Julia Lawall 4376455727 dmaengine: fsldma: Use dma_pool_zalloc
Dma_pool_zalloc combines dma_pool_alloc and memset 0.  The semantic patch
that makes this transformation is as follows: (http://coccinelle.lip6.fr/)

// <smpl>
@@
expression d,e;
statement S;
@@

        d =
-            dma_pool_alloc
+            dma_pool_zalloc
             (...);
        if (!d) S
-       memset(d, 0, sizeof(*d));
// </smpl>

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Acked-by: Li Yang <leoyang.li@nxp.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-03 12:23:25 +05:30
Julia Lawall 305697facd dmaengine: ioatdma: Use dma_pool_zalloc
Dma_pool_zalloc combines dma_pool_alloc and memset 0.  The semantic patch
that makes this transformation is as follows: (http://coccinelle.lip6.fr/)

// <smpl>
@@
expression d,e;
statement S;
@@

        d =
-            dma_pool_alloc
+            dma_pool_zalloc
             (...);
        if (!d) S
-       memset(d, 0, sizeof(*d));
// </smpl>

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-03 12:23:12 +05:30
Julia Lawall 1c85a8440f dmaengine: mmp_pdma: Use dma_pool_zalloc
Dma_pool_zalloc combines dma_pool_alloc and memset 0.  The semantic patch
that makes this transformation is as follows: (http://coccinelle.lip6.fr/)

// <smpl>
@@
expression d,e;
statement S;
@@

        d =
-            dma_pool_alloc
+            dma_pool_zalloc
             (...);
        if (!d) S
-       memset(d, 0, sizeof(*d));
// </smpl>

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-03 12:23:11 +05:30
Jean-Francois Moine a90e173f3f dmaengine: sun6i: Add cyclic capability
DMA cyclic transfers are required by audio streaming.

Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Jean-Francois Moine <moinejf@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-02 15:59:02 +05:30
Jean-Francois Moine 3435fb1853 dmaengine: sun6i: Remove useless check
The transfer direction is now checked in set_config.
There is no need to check it twice.

Signed-off-by: Jean-Francois Moine <moinejf@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-02 15:59:02 +05:30
Jean-Francois Moine a4eb36b02c dmaengine: sun6i: Set default maxburst size and bus width
Some DMA clients, such as audio, don't set the maxburst size and bus width
on the memory side when starting DMA transfers.
This patch prevents such transfers from being aborted by providing
default values for the missing parameters.

Signed-off-by: Jean-Francois Moine <moinejf@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-02 15:59:01 +05:30
Andy Shevchenko 3a14c66d43 dmaengine: dw: pass platform data via struct dw_dma_chip
We pass struct dw_dma_chip to dw_dma_probe() anyway, thus we may use it to
pass the platform data as well.

While here, constify the source of the platform data.

Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-02 15:31:05 +05:30
Andy Shevchenko 161c3d04ae dmaengine: dw: keep entire platform data in struct dw_dma
Keep the entire platform data in the struct dw_dma.
It makes the driver a bit cleaner.

Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-02 15:31:05 +05:30
Andy Shevchenko 2e65060e80 dmaengine: dw: revisit data_width property
Several changes are done here:

- Convert the property to be in bytes

  Besides being common practice for such a property, a value in bytes is
  much more convenient than handling the encoded one.

- Rename data_width to data-width in the device tree bindings

  The change keeps support for the old format as well, just in case someone
  uses a newer kernel with an old device tree blob.

- While here, replace dwc_fast_ffs() by __ffs()

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-02 15:30:47 +05:30
Andy Shevchenko 969f750fc6 dmaengine: dw: platform: check nr_masters to be non-zero
The value of nr_masters equal to 0 is invalid since this DMA controller has to
have at least one master.

Check this before we proceed with the rest of properties.

Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-02 15:30:25 +05:30
Shardar Shariff Md 00ef4490eb dmaengine: tegra-apb: proper default init of channel slave_id
Initialize the default channel slave_id (req_sel) to an invalid id
(i.e. max supported slave id + 1) to avoid overwriting slave_id
with client data during tegra_dma_slave_config() if slave_id
is not initialized through DT.

Signed-off-by: Shardar Shariff Md <smohammed@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Tested-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-02 15:23:56 +05:30
Martin Sperl 0eef727a47 dmaengine: bcm2835: fix typo/added newline in legacy-mode warning message
Fix a typo in the warning message stating that there is no "interrupt-names"
property defined in the device tree and legacy mode is used.

Also add a newline to the end of the message.

Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-02 15:08:19 +05:30
Eric Engestrom 4e0def887d dmaengine: pxa_dma: remove duplicate const qualifier
Signed-off-by: Eric Engestrom <eric.engestrom@imgtec.com>
Acked-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-26 21:59:46 +05:30
Jean-Francois Moine 52c871798f dmaengine: sun6i: Simplify lli setting
Checking the DMA config before setting up the lli list avoids doing tests
inside the setting loop.

Signed-off-by: Jean-Francois Moine <moinejf@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-26 09:19:18 +05:30
Jean-Francois Moine dc6a58c17c dmaengine: sun6i: Fix impossible settings of burst and bus width
In commit 1f9cd915b6 ("dmaengine: sun6i: Fix memcpy operation"),
the signed values returned by convert_burst() and convert_buswidth()
were stored in an unsigned value.
These values were then considered errors when non-zero.

As a result, DMA transfers were rejected when the burst or bus width
had values other than 1, such as 8 for the burst or 4 for the bus width.

Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Jean-Francois Moine <moinejf@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-26 09:09:28 +05:30
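
The failure mode, in schematic form (the function names match the commit, the surrounding code is illustrative):

    /* before: the encoding is stored unsigned and any non-zero value,
     * e.g. the encoding for burst 8, is rejected as an error */
    u8 burst = convert_burst(sconfig->src_maxburst);
    if (burst)
            return -EINVAL;

    /* after: keep the signed return value and only reject real errors */
    s8 ret = convert_burst(sconfig->src_maxburst);
    if (ret < 0)
            return ret;
    burst = ret;
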
Jean-Francois Moine 128fe7e9a0 dmaengine: sun6i: Fix the access of the IRQ register
The IRQ register number is computed, but this number was not used;
the register indexed by the channel index was used instead.
As a result, only the first DMA channel was working.

Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Jean-Francois Moine <moinejf@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-26 09:09:28 +05:30
Robert Jarzmik e093bf60ca dmaengine: pxa: handle bus errors
In the current state, upon a bus error the driver will spin endlessly,
relaunching the last tx, which will fail again and again:
 - a bus error happens
 - pxad_chan_handler() is called
 - as PXA_DCSR_STOPSTATE is true, the last non-terminated transaction is
   launched, which is the one triggering the bus error, as it didn't
   terminate
 - moreover, the STOP interrupt fires anew, as STOPIRQEN is still
   active

Break this logic by stopping the automatic relaunch of a dma channel
upon a bus error, even if there are still pending issued requests on it.

As dma_cookie_status() seems unable to return DMA_ERROR in its current
form, ie. there seems no way to mark a DMA_ERROR on a per-async-tx
basis, it is chosen in this patch to remember on the channel which
transaction failed, and report it in pxad_tx_status().

It's a bit misleading because if T1, T2, T3 and T4 were queued, and T1
was completed while T2 causes a bus error, the status of T3 and T4 will
be reported as DMA_IN_PROGRESS, while the channel is actually stopped.

Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-26 09:03:57 +05:30
Christian Lamparter ab703f818a dmaengine: dw: lazy allocation of dma descriptors
This patch changes the driver to allocate DMA descriptors when
needed. This stops memory resources from being wasted sitting
idle in the free_list structure when the device doesn't need
them. It also solves the problem that a driver has to guess
in advance how many descriptors it needs to allocate.
Currently, the dma engine will just fail when put under load
by sata_dwc_460ex.

Signed-off-by: Christian Lamparter <chunkeey@googlemail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-19 21:14:45 +05:30
Stanimir Varbanov 5ad3f29f6a dmaengine: qcom: bam_dma: rename BAM_MAX_DATA_SIZE define
It seems that the define does not have an accurate name, which
causes confusion while reading the code. The more accurate
name should be BAM_FIFO_SIZE.

Signed-off-by: Stanimir Varbanov <stanimir.varbanov@linaro.org>
Reviewed-by: Andy Gross <andy.gross@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-19 21:11:32 +05:30
Stanimir Varbanov 2a663ed9fe dmaengine: qcom: bam_dma: use correct pipe FIFO size
The pipe fifo size register must instruct the bam hw
how many hw descriptors can be pushed to the fifo. Currently
we instruct the hw with 32 KBytes but wrap the tail in
bam_start_dma in BAM_P_EVNT_REG at 4095, i.e. 32760. This
leads to stalled transactions when the tail wraps.

Fix this by using the correct fifo size in the BAM_P_FIFO_SIZES
register, i.e. 32K - 8.

Signed-off-by: Stanimir Varbanov <stanimir.varbanov@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-19 21:11:32 +05:30
Stanimir Varbanov 5172c9eb89 dmaengine: qcom: bam_dma: add controlled-remotely dt property
Some of the peripherals have a bam which is controlled by a remote
processor, thus the bam dma driver must avoid the register writes
which initialise the bam hw block. Those registers are protected
by the xPU block and any writes to them will lead to a secure
violation and a system reboot.

Add the controlled_remotely flag in the bam driver to avoid
non-permitted register writes in the bam_init function.

Signed-off-by: Stanimir Varbanov <stanimir.varbanov@linaro.org>
Reviewed-by: Andy Gross <andy.gross@linaro.org>
Tested-by: Pramod Gurav <gpramod@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-19 21:11:31 +05:30
Stanimir Varbanov f89117c0f5 dmaengine: qcom: bam_dma: clear BAM interrupt only if it is raised
Currently we write the BAM_IRQ_CLR register with zero even when no
BAM_IRQ occurred. This write has some bad side effects when the
BAM instance is for the crypto engine. In the case of the crypto engine
some of the BAM registers are xPU protected and cannot be
controlled by the driver.

Signed-off-by: Stanimir Varbanov <stanimir.varbanov@linaro.org>
Reviewed-by: Andy Gross <andy.gross@linaro.org>
Tested-by: Pramod Gurav <gpramod@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-19 21:11:31 +05:30
Stanimir Varbanov f139f97878 dmaengine: qcom: bam_dma: fix dma free memory on remove
Building the driver as a module and then removing the already
inserted module gives the Oops below:

[ 1389.392788] Unable to handle kernel paging request at virtual address ffffffbdc000001c
[ 1389.421321] pgd = ffffffc02fa87000
[ 1389.447899] [ffffffbdc000001c] *pgd=0000000000000000, *pud=0000000000000000
[ 1389.460142] Internal error: Oops: 96000006 [#1] PREEMPT SMP
[ 1389.466963] Modules linked in: qcom_bam_dma(-)
[ 1389.486608] CPU: 2 PID: 2442 Comm: rmmod Not tainted 4.2.0+ #407
[ 1389.493885] Hardware name: Qualcomm Technologies, Inc. APQ 8016 SBC (DT)
[ 1389.501196] task: ffffffc035bae2c0 ti: ffffffc0368a8000 task.ti: ffffffc0368a8000
[ 1389.508566] PC is at __free_pages+0xc/0x40
[ 1389.515893] LR is at free_pages.part.93+0x30/0x38
[ 1389.523141] pc : [<ffffffc00016180c>] lr : [<ffffffc00016197c>] pstate: 80000145
[ 1389.530602] sp : ffffffc0368abc20
[ 1389.537931] x29: ffffffc0368abc20 x28: ffffffc0368a8000
[ 1389.549153] x27: 0000000000000000 x26: 0000000000000000
[ 1389.560412] x25: ffffffc000cb2000 x24: 0000000000000170
[ 1389.571530] x23: 0000000000000004 x22: ffffffc036bc5010
[ 1389.582721] x21: ffffffc036bc5010 x20: 0000000000000000
[ 1389.593981] x19: 0000000000000002 x18: 0000007fcbc8e8b0
[ 1389.605301] x17: 0000007f9b8226ec x16: ffffffc0002089e8
[ 1389.616647] x15: 0000007f9b8a0588 x14: 0ffffffffffffffc
[ 1389.628039] x13: 0000000000000030 x12: 0000000000000000
[ 1389.639436] x11: 0000000000000008 x10: ffffffc000ecc000
[ 1389.650872] x9 : ffffffc035bae2c0 x8 : ffffffc035bae9a8
[ 1389.662367] x7 : ffffffc035bae9a0 x6 : 0000000000000000
[ 1389.673906] x5 : ffffffbdc000001c x4 : 0000000080000000
[ 1389.685475] x3 : ffffffbdc0000000 x2 : 0000004080000000
[ 1389.697049] x1 : 0000000000000003 x0 : ffffffbdc0000000

The memory has already been freed by bam_free_chan(), so fix this
by skipping the already freed memory.

Signed-off-by: Stanimir Varbanov <stanimir.varbanov@linaro.org>
Reviewed-by: Andy Gross <andy.gross@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-19 21:11:31 +05:30
Martin Sperl e2eca6389b dmaengine: bcm2835: use platform_get_irq_byname
Use platform_get_irq_byname to allow for correct mapping of
interrupts to dma channels.

The currently implemented device tree is unfortunately
based on the wrong assumption that each dma-channel
has its own dma-irq, while dma-irq 11 is actually handling
dma-channels 11-14 and dma-irq 12 is a "catch all"
interrupt.

So here we use the byname variant and require that interrupts
are explicitly named via the interrupt-names property in the
device tree.

The use of shared interrupts is also implemented.

As a side effect this means we can now use dma channels 12, 13 and 14
in a correct manner - also, testing shows that only using
channels 11 to 14 for spi and i2s works perfectly (when playing
some video).

Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
Acked-by: Eric Anholt <eric@anholt.net>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-19 21:02:48 +05:30
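
A sketch of the lookup this describes; the interrupt name, handler and cookie are illustrative placeholders, not the driver's exact identifiers:

    int irq, ret;

    /* look the IRQ up by name rather than by channel index */
    irq = platform_get_irq_byname(pdev, "dma6");
    if (irq < 0)
            return irq;

    /* channels sharing one line must request it with IRQF_SHARED */
    ret = request_irq(irq, chan_irq_handler, IRQF_SHARED,
                      "bcm2835-dma", chan);
    if (ret)
            return ret;
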
Vinod Koul 956e6c8e18 Merge branch 'fix/edma' into fixes 2016-04-16 22:52:03 +05:30
Vinod Koul 1cc3334e2e Merge branch 'fix/xilinx' into fixes 2016-04-16 22:45:26 +05:30
Vinod Koul 4bd613596b Merge branch 'fix/omap' into fixes 2016-04-16 22:45:17 +05:30
Vinod Koul 09c505ced3 Merge branch 'fix/hsu' into fixes 2016-04-16 22:44:32 +05:30
Martin Sperl d9f094a02f dmaengine: bcm2835: add dma_memcopy support to bcm2835-dma
Also added check for an error condition in bcm2835_dma_create_cb_chain
that showed up during development of this patch.

Tested using dmatest for all enabled channels.

Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-15 09:57:22 +05:30
Martin Sperl 388cc7a281 dmaengine: bcm2835: add slave_sg support to bcm2835-dma
Add slave_sg support to bcm2835-dma using shared allocation
code for bcm2835_desc and DMA-control blocks already used by
dma_cyclic.

Note that bcm2835_dma_callback had to get modified to support
both modes of operation (cyclic and non-cyclic).

Tested using:
* Hifiberry I2S card (using cyclic DMA)
* fb_st7735r SPI-framebuffer (using slave_sg DMA via spi-bcm2835)
playing BigBuckBunny for audio and video.

Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-15 09:57:22 +05:30
Martin Sperl 4087412258 dmaengine: bcm2835: limit max length based on channel type
The bcm2835 dma system has 2 basic types of dma-channels:
* "normal" channels
* "lite" channels

Lite channels are limited in several aspects:
* the internal data structure is 128 bits (not 256)
* BCM2835_DMA_TDMODE (2D) is not supported
* the DMA length register is limited to 16 bits,
  so 0-65535 (not 0-65536 as mentioned in the official datasheet)
* BCM2835_DMA_S/D_IGNORE are not supported

The channel type is detected by looking at
the LITE bit in the DEBUG register for each channel.
This allows automatic detection.

Based on this the maximum block size is set to (64K - 4) or to 1G,
and this limit is honored during generation of control block
chains. The effect is that when a LITE channel is used, more
control blocks are needed to do the same transfer (compared
to a normal channel).

As there are several source/target DREQs that are 32 bits wide,
the transfer needs to be a multiple of 4, as it would
break the transfer otherwise.

This is why the limit of (64K - 4) was chosen over the
alternative of (64K - 4K).

Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-15 09:57:22 +05:30
Martin Sperl 92153bb534 dmaengine: bcm2835: move controlblock chain generation into separate method
In preparation of adding slave_sg functionality this patch moves the
generation/allocation of bcm2835_desc and the building of
the corresponding DMA-control-block chain from bcm2835_dma_prep_dma_cyclic
into the newly created method bcm2835_dma_create_cb_chain.

Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-15 09:57:21 +05:30
Martin Sperl a4dcdd849e dmaengine: bcm2835: move cyclic member from bcm2835_chan into bcm2835_desc
In preparation for consolidating code we move the cyclic member
into the bcm2835_desc structure.

Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-15 09:57:21 +05:30
Martin Sperl e42685d7a7 dmaengine: bcm2835: add additional defines for DMA-registers
Add additional defines describing the DMA registers
as well as adding some more documentation to those registers.

Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-15 09:57:21 +05:30
Martin Sperl a1d71ba90c dmaengine: bcm2835: remove unnecessary masking of dma channels
The original patch contained 3 dma channels that were masked out.

These - as far as research and discussions show - are
artefacts remaining from the downstream legacy dma-api.

Right now downstream still includes a legacy api used only
in a single (downstream-only) driver (bcm2708_fb) that requires
2D DMA for speedup (DMA-channel 0).
Formerly the sd-card support driver was also using this legacy
api (DMA-channel 2), but it has since been moved over to use
dmaengine directly.

The DMA-channel 3 is already masked out in the devicetree in
the default property "brcm,dma-channel-mask = <0x7f35>;"

So we can remove the whole masking of DMA channels.

Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-15 09:57:21 +05:30
Martin Sperl 0fa5867e6a dmaengine: bcm2835: set residue_granularity field
bcm2835-dma supports residue reporting at burst level but didn't report
this via the residue_granularity field.

See also:
b015555327
for the downstream patch.

Signed-off-by: Matthias Reichl <hias@horus.com>
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-15 09:57:21 +05:30
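
The fix is essentially a one-line capability advertisement at registration time; a sketch (the od->ddev naming is assumed):

    /* tell clients that tx_status()/residue is accurate to burst level */
    od->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
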
Andy Shevchenko 925a7d045e dmaengine: dw: set cdesc to NULL when free cyclic transfers
To be sure the cyclic transfers are already gone, we set cdesc to NULL. This
will prevent a double free.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-13 21:36:16 +05:30
Andy Shevchenko b68fd09762 dmaengine: dw: move residue to a descriptor
Residue is a property of the active descriptor. A descriptor may be in a
different state, but residue is a feature of the active descriptor. Check if
the requested descriptor is active and return the proper residue value for it.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-13 21:36:16 +05:30
Andy Shevchenko 423f9cbf2d dmaengine: dw: move dwc->initialized to dwc->flags
We already have a dedicated variable for flags, therefore there is no need to
create additional storage for that. Convert dwc->initialized to use dwc->flags.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-13 21:36:16 +05:30
Andy Shevchenko 5e09f98e77 dmaengine: dw: move dwc->paused to dwc->flags
We already have a dedicated variable for flags, therefore there is no need to
create additional storage for that. Convert dwc->paused to use dwc->flags.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-13 21:36:15 +05:30
Andy Shevchenko 7794e5b920 dmaengine: dw: define counter variables as unsigned int
The code is fixed to satisfy the compiler; otherwise we have:

drivers/dma/dw/core.c: In function ‘dwc_handle_cyclic’:
drivers/dma/dw/core.c:568: warning: comparison between signed and unsigned
drivers/dma/dw/core.c: In function ‘dw_dma_tasklet’:
drivers/dma/dw/core.c:590: warning: comparison between signed and unsigned
drivers/dma/dw/core.c: In function ‘dw_dma_off’:
drivers/dma/dw/core.c:1103: warning: comparison between signed and unsigned
drivers/dma/dw/core.c: In function ‘dw_dma_cyclic_free’:
drivers/dma/dw/core.c:1469: warning: comparison between signed and unsigned
drivers/dma/dw/core.c: In function ‘dw_dma_probe’:
drivers/dma/dw/core.c:1574: warning: comparison between signed and unsigned

There is no functional change.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-13 21:36:10 +05:30
Andy Shevchenko 897e40d3b1 dmaengine: dw: substitute dma_read_byaddr by dma_readl_native
Since struct dw_dma is allocated and its regs member is assigned properly, we
can use standard I/O accessors for the DMA registers.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-13 21:36:10 +05:30
Mans Rullgard a3e557999b dmaengine: dw: clear LLP_[SD]_EN bits in last descriptor of a chain
The datasheet requires that the LLP_[SD]_EN bits be cleared whenever
LLP.LOC is zero, i.e. in the last descriptor of a multi-block chain.
Make the driver do this.

Signed-off-by: Mans Rullgard <mans@mansr.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-13 21:36:10 +05:30
Mans Rullgard 2a0fae025e dmaengine: dw: set LMS field in descriptors
The LMS field indicates from which master the descriptor is to be
read.  This patch assumes this is always the same as the memory
side in a peripheral transfer which is true for all known systems.

Signed-off-by: Mans Rullgard <mans@mansr.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-13 21:36:10 +05:30
Mans Rullgard df1f3a2305 dmaengine: dw: fix byte order of hw descriptor fields
If the DMA controller uses a different byte order than the host CPU,
the hardware linked list descriptor fields need to be byte-swapped.

This patch makes the driver write these fields using the same byte
order it uses for mmio accesses to the DMA engine. I do not know
if this is guaranteed to always be correct.

Signed-off-by: Mans Rullgard <mans@mansr.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-13 21:36:10 +05:30
Mans Rullgard bb3450ad0e dmaengine: dw: set src and dst master select according to xfer direction
On some architectures the DMA controller can have two masters connected to
different buses and thus access to memory is possible only through one and
to peripheral through the other.

This patch changes the src and dst master setting to match the direction
of the transfer.

Signed-off-by: Mans Rullgard <mans@mansr.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-13 21:36:10 +05:30
Andy Shevchenko c422025c18 dmaengine: dw: rename masters to reflect actual topology
The source and destination masters reflect buses, or their layers, to
which the different devices can be connected. The patch changes the master
names to reflect which one is related to which, independently of the transfer
direction.

The outcome of the change is that the memory data width is now always limited
by the data width of the master which is dedicated to communicating with memory.

The patch will not break anything since all current users have the same data
width for all masters. Though it would be nice to revisit avr32 platforms to
check what the actual hardware topology in use there is. It seems that it has
one bus and two masters on it, as stated by Table 8-2; that's why everything
works independently of the master in use. The purpose of the follow-up patch
is to fix the driver for configurations with more than one bus.

The change is done on the assumption that src_master and dst_master
reflect a connection to the memory and the peripheral respectively on avr32,
and the other way around on the rest.

Acked-by: Hans-Christian Egtvedt <egtvedt@samfundet.no>
Acked-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-13 21:36:09 +05:30
Andy Shevchenko 3fe6409c23 dmaengine: dw: fix master selection
Commit 8950052029 ("dmaengine: dw: apply both HS interfaces and remove
slave_id usage") cleaned up the code to avoid usage of the deprecated slave_id
member of the generic slave configuration.

Meanwhile it broke the master selection by removing the important call to
dwc_set_masters() in ->device_alloc_chan_resources(), which copied masters from
the custom slave configuration to the internal channel structure.

Everything has worked until now since there is no customized connection of the
DesignWare DMA IP to the bus, i.e. one bus and one or more masters are in use.
The configurations where 2 masters are connected to different buses are
not working anymore. We are expecting one user of such a configuration and need
to select masters properly. Besides that, it is obviously a performance
regression since only one master is in use in a multi-master configuration.

Select masters in accordance with what the user asked for. Keep this patch in a
form more suitable for back porting.

We are safe to take the necessary data in ->device_alloc_chan_resources() because
we don't support a generic slave configuration embedded into a custom one, and
thus the only way to provide one is to use the parameter to a filter function,
which is called exactly before channel resource allocation.

While here, replace the BUG_ON with a less noisy dev_warn() and prevent channel
allocation in case of error.

Fixes: 8950052029 ("dmaengine: dw: apply both HS interfaces and remove slave_id usage")
Cc: stable@vger.kernel.org
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-13 21:34:31 +05:30
Jarkko Nikula 4c4d7f8785 dmaengine: core: Revert back to pr_debug in __dma_request_channel()
Commit ef859312c3 ("dmaengine: core: Use dev_ functions for debug and
error prints") wasn't quite right in __dma_request_channel(): it assumed
that all pr_ prints have a valid DMA channel pointer. Obviously that is not
true, as __dma_request_channel() is looking for a channel and returns NULL
if it does not find one.

Prevent this potential NULL pointer dereference by reverting back to
pr_debug().

Signed-off-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-13 21:09:21 +05:30
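
The distinction, schematically: chan can legitimately be NULL at this point, so a device-relative print would dereference a NULL pointer.

    /* would oops when chan == NULL: */
    /* dev_dbg(chan->device->dev, "%s: no channel available\n", __func__); */

    /* safe, no device pointer needed: */
    pr_debug("%s: no channel available\n", __func__);
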
Kedareswara rao Appana 48a59edc63 dmaengine: vdma: Fix checkpatch.pl warnings
This patch fixes the below checkpatch.pl warnings.

WARNING: void function return statements are not generally useful
+	return;
+}

WARNING: void function return statements are not generally useful
+	return;
+}

WARNING: Missing a blank line after declarations
+		u32 errors = status & XILINX_VDMA_DMASR_ALL_ERR_MASK;
+		vdma_ctrl_write(chan, XILINX_VDMA_REG_DMASR,

Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Moritz Fischer <moritz.fischer@ettus.com>
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-06 08:41:17 -07:00
Kedareswara rao Appana a65cf5125b dmaengine: vdma: Fix race condition in Non-SG mode
When VDMA is configured in non-SG mode,
users can queue more descriptors than the number of h/w configured frames.

The current driver only allows the user to queue descriptors up to the h/w
configured number, which is wrong for the non-SG mode configuration.

This patch fixes this issue.

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-06 08:41:15 -07:00
Kedareswara rao Appana b72db4005f dmaengine: vdma: Add 64 bit addressing support to the driver
This VDMA is a soft IP, which can be programmed to support
32-bit addressing or greater than 32-bit addressing.

When the VDMA IP is configured for a 32-bit address space
the buffer address is specified by a single register
(0x5C for the MM2S and 0xAC for the S2MM channel).

When the VDMA core is configured for an address space greater
than 32 bits, each buffer address is specified by a combination of
two registers.

The first register specifies the LSB 32 bits of address,
while the next register specifies the MSB 32 bits of address.

For example, 5Ch will specify the LSB 32 bits while 60h will
specify the MSB 32 bits of the first start address.
So we need to program two registers at a time.

This patch adds the 64 bit addressing support to the vdma driver.

Signed-off-by: Anurag Kumar Vulisha <anuragku@xilinx.com>
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-06 08:41:14 -07:00
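
Programming a split 64-bit buffer address then looks roughly like this; reg_write(), the register names and the has_64bit_addr flag are hypothetical stand-ins for the driver's own accessors:

    /* LSB 32 bits at e.g. 0x5C (MM2S); 64-bit-capable cores add the MSBs
     * in the following register, e.g. 0x60. */
    reg_write(chan, MM2S_START_ADDR_LSB, lower_32_bits(buf_addr));
    if (chan->has_64bit_addr)
            reg_write(chan, MM2S_START_ADDR_MSB, upper_32_bits(buf_addr));
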
John Ogness a482f4e0d8 dmaengine: edma: special case slot limit workaround
Currently drivers are limited to 19 slots for cyclic transfers.
However, if the DMA burst size is the same as the period size,
the period size can be changed to the full buffer size and
intermediate interrupts activated. Since intermediate interrupts
will trigger for each burst and the burst size is the same as
the period size, the driver will get interrupts each period as
expected. This has the benefit of allowing the functionality of
many more slots, but only uses 2 slots.

This workaround is only active if more than 19 slots are needed
and the burst size matches the period size.

Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: John Ogness <john.ogness@linutronix.de>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-06 07:29:49 -07:00
Peter Ujfalusi 23f49fd2ea dmaengine: edma: Remove dynamic TPTC power management feature
The dynamic or on demand pm_runtime does not work correctly on am335x and
am437x due to interference with hwmod.
Fall back to using pm_runtime as it was used in the old driver stack,
meaning that at probe time we call pm_runtime_enable() and
pm_runtime_get_sync() for the TPTCs as well.
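
A minimal probe-time sketch (the tptc_dev pointer is a stand-in for the TPTC
device; error handling is trimmed):

pm_runtime_enable(tptc_dev);
ret = pm_runtime_get_sync(tptc_dev);
if (ret < 0) {
	dev_err(tptc_dev, "pm_runtime_get_sync() failed\n");
	pm_runtime_disable(tptc_dev);
}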

Fixes: 1be5336bc7 ("dmaengine: edma: New device tree binding")

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reported-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-06 07:29:49 -07:00
Linus Walleij f9cd476123 dmaengine: pl08x: allocate OF slave channel data at probe time
The current OF translation of channels can never work with
any DMA client using the DMA channels directly: the only way
to get the channels initialized properly is in the
dma_async_device_register() call, where chan->dev etc is
allocated and initialized.

Allocate and initialize all possible DMA channels and
only augment a target channel with the periph_buses at
of_xlate(). Remove some const settings to make things work.

Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Tested-by: Joachim Eastwood <manabian@gmail.com>
Tested-by: Johannes Stezenbach <js@sig21.net>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-05 16:53:41 -07:00
Vinod Koul b2d8984f3e dmaengine: add DMA_CYCLIC to dma_get_slave_caps
The dma_get_slave_caps() API only checked for the slave capability, but
we use slave capabilities for cyclic DMA operations as well, so we
should add the cyclic case here too.
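
Client-side illustration of the capability query (the print text is made up):

struct dma_slave_caps caps;

/* with this change the query also succeeds for cyclic-capable channels */
if (!dma_get_slave_caps(chan, &caps))
	dev_info(dev, "src widths 0x%x, max burst %u\n",
		 caps.src_addr_widths, caps.max_burst);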

Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-05 15:31:33 -07:00
Franck Jullien 330ed4da2c dmaengine: vdma: don't crash when bad channel is requested
When a client requests a non-existing channel from of_dma_xilinx_xlate()
we get a NULL pointer dereference. This patch fixes the problem.
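
A sketch of the guard; structure and field names are assumptions based on the
driver, not copied from it:

static struct dma_chan *of_dma_xilinx_xlate(struct of_phandle_args *dma_spec,
					    struct of_dma *ofdma)
{
	struct xilinx_vdma_device *xdev = ofdma->of_dma_data;
	unsigned int chan_id = dma_spec->args[0];

	/* reject out-of-range or unprobed channels instead of dereferencing NULL */
	if (chan_id >= XILINX_VDMA_MAX_CHANS_PER_DEVICE || !xdev->chan[chan_id])
		return NULL;

	return dma_get_slave_channel(&xdev->chan[chan_id]->common);
}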

Signed-off-by: Franck Jullien <franck.jullien@odyssee-systemes.fr>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-05 11:10:57 -07:00
Peter Ujfalusi b96c033cc8 dmaengine: omap-dma: Do not suppress interrupts for memcpy
If the client queues up more transfers, the driver will not be able to move
to the next transfer without knowing that the previous descriptor has
completed.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-05 09:09:42 -07:00
Peter Ujfalusi 689d3c5ecc dmaengine: omap-dma: Fix polled channel completion detection and handling
When the CCR_ENABLE bit indicates that the channel is stopped, we should not
call omap_dma_callback(), only change the return value to DMA_COMPLETE. Client
drivers will do the right thing to clean up the channel after the transfer
has completed.
Check CCR_ENABLE only if the channel is running and not paused, since
pause in sDMA means that the channel is stopped.
This will fix one hard to reproduce race condition when the channel is
terminated during transfer (affecting cyclic operation).

Fixes: 1a7cf7b26f ("dmaengine: omap-dma: Handle cases when the channel is polled for completion")

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-05 09:09:42 -07:00
Mario Six 77fc397661 dmaengine: mpc512x: Fix code style
Signed-off-by: Mario Six <mario.six@gdsys.cc>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-04 09:50:10 -07:00
Mario Six 899ed9dd4f dmaengine: mpc512x: Implement additional chunk sizes for DMA transfers
This patch extends the capabilities of the driver to handle DMA
transfers to and from devices of 1, 2, 4, 16 (for MPC512x), and 32 byte
widths.

Signed-off-by: Mario Six <mario.six@gdsys.cc>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-04 09:50:10 -07:00
Mario Six 237ec70903 dmaengine: mpc512x: Fix hanging DMA device transfer for MPC8308
Since the MPC8308 has no external request lines to initiate DMA transfers,
all transfers must be triggered by software.

Because of this, the current implementation of DMA transfers from and to
devices on MPC8308 SoCs using major and minor loops is faulty: After the
completion of the first major loop, the DMA engine resets the start flag in
the channel's TCD, thus halting the transfer. The driver would have to set
the start bit again to trigger the next iteration of the major loop; on
MPC512x SoCs, this is done via the external request lines, so in this case,
the driver doesn't have to interfere in any way.

This has the effect that on MPC8308s, every DMA transfer to or from a
device hangs after executing the first major loop.

The patch fixes this behavior by using just one major loop for the whole
DMA transfer on MPC8308s.

Signed-off-by: Mario Six <mario.six@gdsys.cc>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-04 09:50:10 -07:00
Andy Shevchenko 17b3cf4233 dmaengine: hsu: set maximum allowed segment size for DMA
This tells, for example, the IOMMU the maximum size of a segment
that the DMA controller can send.
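
A sketch of the call; the device pointer and the size value are illustrative,
not the driver's actual limit:

/* let the DMA mapping core / IOMMU know the largest segment we accept */
ret = dma_set_max_seg_size(chip->dev, 0xffff);
if (ret)
	return ret;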

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-04 09:42:00 -07:00
Andy Shevchenko c36a0176ba dmaengine: hsu: don't check direction of timeouted channel
The timeout capability is only available on the so-called DMA write channels,
i.e. those associated with the UART Rx FIFO. This means we don't need to check
the direction of the channel to handle timeouts.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-04 09:42:00 -07:00
Andy Shevchenko 2d4d689f3e dmaengine: hsu: allow more than 3 descriptors
The current code allows only up to 3 descriptors to be programmed into the
hardware because it uses a wrong calculation. Change % to min_t() to allow as
many descriptors as the user supplied. At most 4 descriptors can be programmed
at once due to hardware limitations.

The issue was found under stress test, so it might not bother ordinary users.
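
An illustration of the change; the names are stand-ins for the driver's
internals:

/* cap at the 4-descriptor hardware limit instead of using '%', which
 * wrapped the count around and could leave pending descriptors behind */
count = min_t(unsigned int, desc->nents - desc->active, HSU_DMA_CHAN_NR_DESC);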

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-04 09:42:00 -07:00
Andy Shevchenko 4f4bc0abff dmaengine: hsu: correct use of channel status register
There is a typo in the documentation regarding the descriptor empty bit
(DESCE), which is set to 1 when the descriptor is empty. Thus, the status
register at the end of a transfer usually returns all DESCE bits set and will
never be zero.

Moreover, there are 2 bits (CDESC) that encode the current descriptor, i.e.
the one on which the interrupt has been asserted. When several descriptors are
programmed, these bits may also be non-zero.

Remove the DESCE and CDESC bits from the DMA channel status register
(HSU_CH_SR) when reading it.

Fixes: 2b49e0c567 ("dmaengine: append hsu DMA driver")
Cc: stable@vger.kernel.org
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-04 09:41:43 -07:00
Andy Shevchenko a197f3c7d4 dmaengine: hsu: correct residue calculation of active descriptor
The commit f0579c8cea ("dmaengine: hsu: speed up residue calculation")
sped up the calculation for the queued descriptor but broke the initial
residue value for the active descriptor.

According to the documentation the hardware descriptor is updated each time
the DMA transfers some bytes. It means we have to sum the lengths of the
not-yet-submitted hardware descriptors with whatever the current values in
the hardware are. Do this in a straightforward way.
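
A sketch of the summation; the descriptor field names are assumptions about
the driver's layout:

static size_t hsu_dma_active_desc_size(struct hsu_dma_desc *desc,
				       size_t hw_bytes_left)
{
	size_t bytes = hw_bytes_left;	/* whatever the hardware still reports */
	unsigned int i;

	/* add the lengths of the sub-descriptors not yet handed to the hardware */
	for (i = desc->active; i < desc->nents; i++)
		bytes += desc->sg[i].len;

	return bytes;
}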

Fixes: f0579c8cea ("dmaengine: hsu: speed up residue calculation")
Cc: stable@vger.kernel.org
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-04 09:41:43 -07:00
Andy Shevchenko 080edf75d3 dmaengine: hsu: set HSU_CH_MTSR to memory width
The HSU_CH_MTSR register should be programmed to the minimum size to transfer,
i.e. the size on the memory side of the transfer. Program it accordingly.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-04 09:41:43 -07:00
Jarkko Nikula ef859312c3 dmaengine: core: Use dev_ functions for debug and error prints
According to the dmaengine kerneldoc, struct dma_chan always has a non-NULL
pointer to its DMA device, and a test in dma_async_device_register()
validates that the DMA device also points to a struct device.

All pr_ prints except the one in dma_channel_table_init() have a valid DMA
channel or DMA device pointer available, which allows converting them to the
dev_ functions so they can show the associated DMA device.
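
An illustration of the converted form (the message text is made up):

/* chan->device->dev identifies the DMA controller in the log */
dev_err(chan->device->dev, "%s: transfer setup failed\n", __func__);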

Signed-off-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-04 06:19:06 -07:00
Linus Torvalds 11caf57f6a asm-generic changes for 4.6
There are only three patches this time, most other changes to
 files in include/asm-generic tend to go through the tree of whoever
 depends on the change.
 
 Two patches are cleanups for stuff that is no longer needed,
 the main change is to adapt the generic version of BUG_ON()
 for CONFIG_BUG=n to make it behave consistently with BUG().
 
 This avoids undefined behavior along with a number of warnings
 about that undefined behavior in randconfig builds when
 we keep going on after hitting a BUG_ON().
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIVAwUAVvRXDGCrR//JCVInAQKUFRAAmp23pohv08LZzXL8Qu7XfFN+b1RkZ936
 WYBeiA9PEWufQs2hgXaEUXy0onO7ah4cs2NWfkaBPyxT+I9mN+ThdzqVrlTE+AEO
 2K0f2RaZANC238zB86Yv/YvTj7FegH0DDdMBq/P06vlYdgBegx49U3pMpguxl3d0
 /q9MyqTzo9j4uOEK4ix4/Dko+4eKIS5Y/xeb0TkeKA6HiBVzAhGLZFl+eMku07Bf
 ap8B705hBDXSBFeWcK9AvKjHZCM+FCkb+C3TXo9x5tUu8g5OIG1t962OQvT9ldsP
 rvo5ppRh/TAY2Z9chN3cKrsvshbHiZ9uRzeksCunL+SK+dOhEIPCVzLXndQpi3RD
 NgeNKgo6gKYdle44pEj0EH2ktuvr0u8sbjQg9SY2miC1H4DmEbCakSqtQegHXTKd
 chJ6xyNiQXktdfo0pFOtCA2gjqiAriugttBqUtGcK9zRqjGGpP5hOUQVm3jR7UMp
 Hjb+oj5o+Gjz5J1t5zsjbhFINDCHAgXRzqqaoT9RfE9+QlUftUhu+N9KVFgzhe9I
 93VHaqgGIRoi856BO7UZSaMGhy7ljm1nQ18jP9aZl/tBco0kpd3AO8og9dJ0u2j+
 3fEqAHH30ia8GJCfIDnolxTL6uaqcCIeAoLgGcmn+QZS7ka+tD+000rtgd2pdy9/
 gy/VPpFG064=
 =8tPL
 -----END PGP SIGNATURE-----

Merge tag 'asm-generic-4.6' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic

Pull asm-generic updates from Arnd Bergmann:
 "There are only three patches this time, most other changes to files in
  include/asm-generic tend to go through the tree of whoever depends on
  the change.

  Two patches are cleanups for stuff that is no longer needed, the main
  change is to adapt the generic version of BUG_ON() for CONFIG_BUG=n to
  make it behave consistently with BUG().

  This avoids undefined behavior along with a number of warnings about
  that undefined behavior in randconfig builds when we keep going on
  after hitting a BUG_ON()"
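
Roughly what "default BUG_ON(x) to if(x)BUG()" amounts to, sketched rather
than quoted from the header:

/* CONFIG_BUG=n fallback: the condition is still evaluated and control does
 * not silently continue past a failed BUG_ON(). */
#define BUG_ON(condition) do { if (condition) BUG(); } while (0)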

* tag 'asm-generic-4.6' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic:
  asm-generic: remove old nonatomic-io wrapper files
  asm-generic: default BUG_ON(x) to if(x)BUG()
  asm-generic: page.h: Remove useless get_user_page and free_user_page
2016-03-24 23:13:48 -07:00
Linus Torvalds 33b3d2e88c ARM: SoC platform updates for v4.6
Newly added support for additional SoCs:
 
 - Axis Artpec-6 SoC family
 - Allwinner A83T SoC
 - Mediatek MT7623
 - NXP i.MX6QP SoC
 - ST Microelectronics stm32f469 microcontroller
 
 New features:
 - SMP support for Mediatek mt2701
 - Big-endian support for NXP i.MX
 - DaVinci now uses the new DMA engine dma_slave_map
 - OMAP now uses the new DMA engine dma_slave_map
 - earlyprintk support for palmchip uart on mach-tango
 - delay timer support for orion
 
 Other:
 - Exynos PMU driver moved out to drivers/soc/
 - Various smaller updates for Renesas, Xilinx, PXA, AT91, OMAP, uniphier
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIVAwUAVu68DGCrR//JCVInAQIHVQ//Wblms+NKj3aKh6m2Sscs/YkSbFaQ4sY2
 rNyfxLIYsLXkth1kbdHRFSMyL68Ym+xutErgw/3HQPB2D1YtYJE3VJ/y8AU92SU3
 oHyQIty+atB8d8zBbtlkWmat94NIfYf0I8PQETreGb1LMaJqAf0mDEDAyorTLZcZ
 UtQ817Ihn7urqwdTJpTO58V41RmY/vflbHI5T6bIjUJn6fF1e/7+VqtMIfq5sjJ6
 0EPEQdu8s5AJ7gcGlGi9I5gAtSnWSA/9phAxul9P8/HrMpUWIxreSEAy8FY7W14F
 4TON3sQrnw7nyA72U80KGIXhgLy7SbEmHcSqyy4YJK3ycdk6VYk0CBO7nWVYAiD1
 knLisOH6jwe0LIj9WXiRR+Y2Q53pXN8SF77pLDahSnvuShnYEjEH5uELHtxe7Vxh
 gn+NH1rDkRTgdYgt4RWlVyUoLkddQWzLb1m4QyQlvxtTR25cJJayXdVX2MRrNPF5
 c1zRa9HH+b8LJQIMdWfo/NoHhHtftkkGGsqHAAaypZqdpyk0j2HpJYk5ecPR4f5C
 /8o/h/5xOI9gEzp/DVYSZ1VAvRqBQGIDfKBXWq6GuoZaF0aN8ISe5IxFn5Yx2F46
 fNaxqiNpWmyywl8D+tSWPFK6aE21AXKGi5zIzexZZqy283aDjlUPI+tgF2GKIuKP
 3ayYTDeBpLI=
 =ynNj
 -----END PGP SIGNATURE-----

Merge tag 'armsoc-soc' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC platform updates from Arnd Bergmann:
 "Newly added support for additional SoCs:
   - Axis Artpec-6 SoC family
   - Allwinner A83T SoC
   - Mediatek MT7623
   - NXP i.MX6QP SoC
   - ST Microelectronics stm32f469 microcontroller

  New features:
   - SMP support for Mediatek mt2701
   - Big-endian support for NXP i.MX
   - DaVinci now uses the new DMA engine dma_slave_map
   - OMAP now uses the new DMA engine dma_slave_map
   - earlyprintk support for palmchip uart on mach-tango
   - delay timer support for orion

  Other:
   - Exynos PMU driver moved out to drivers/soc/
   - Various smaller updates for Renesas, Xilinx, PXA, AT91, OMAP,
     uniphier"

* tag 'armsoc-soc' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (83 commits)
  ARM: uniphier: rework SMP code to support new System Bus binding
  ARM: uniphier: add missing of_node_put()
  ARM: at91: avoid defining CONFIG_* symbols in source code
  ARM: DRA7: hwmod: Add data for eDMA tpcc, tptc0, tptc1
  ARM: imx: Make reset_control_ops const
  ARM: imx: Do L2 errata only if the L2 cache isn't enabled
  ARM: imx: select ARM_CPU_SUSPEND only for imx6
  dmaengine: pxa_dma: fix the maximum requestor line
  ARM: alpine: select the Alpine MSI controller driver
  ARM: pxa: add the number of DMA requestor lines
  dmaengine: mmp-pdma: add number of requestors
  dma: mmp_pdma: Add the #dma-requests DT property documentation
  ARM: OMAP2+: Add rtc hwmod configuration for ti81xx
  ARM: s3c24xx: Avoid warning for inb/outb
  ARM: zynq: Move early printk virtual address to vmalloc area
  ARM: DRA7: hwmod: Add custom reset handler for PCIeSS
  ARM: SAMSUNG: Remove unused register offset definition
  ARM: EXYNOS: Cleanup header files inclusion
  drivers: soc: samsung: Enable COMPILE_TEST
  MAINTAINERS: Add maintainers entry for drivers/soc/samsung
  ...
2016-03-20 14:57:08 -07:00
Linus Torvalds b5b131c747 dmaengine updates for 4.6
This is smallish update with minor changes to core and new driver and usual
 updates. Nothing super exciting here..
 
 - We have made slave address as physical to enable driver to do the mapping.
 - We now expose the maxburst for slave dma as new capability so clients can
   know this and program accordingly
 - addition of device synchronize callbacks on omap and edma.
 - pl330 updates to support DMAFLUSHP for Rockchip platforms.
 - Updates and improved sg handling in Xilinx VDMA driver.
 - New hidma qualcomm dma driver, though some bits are still in progress
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABAgAGBQJW6W4OAAoJEHwUBw8lI4NHIj0P/0UEXOn9Laj1dQ++3RuEHtJH
 AvolC574yj/jdvhNNRAu3TBq214VDtVu+OEi6cAwybSMUOT0lbrSEI4a6K6iDIdH
 QGfyz2PFDBMnNLqqNfeQulgB6YgoZ/7PXUOz9D+FX4wyM3poTBb9J2JI5okFuuJI
 r4jmiZrXTZSmm2NTbG0QxHogoyvMDA59EB8cIgAUrl1rDssPkdvzU7ygW6qc5CMW
 33tQFyz6Q74EI9ImPeYUkSf1zzT1va4uRce+3lEmLSvtOWG2pjOOZ1Vw89vtkyal
 yX1eH06glVTQwpfV+fgmbjpn72Ftk+G6rqcB4aICSyN2dH7Gf4D+Dqjp1mdEHyFf
 Oum5pWNPzJ97HoGLwxd8FEuA3ma3C0nC+nDl+ffNWLmIDGgeqFHSQaNBlf2S6y+a
 VtGFJ0EaR//qIpwvPNfpJbkwjrEaEFdSYQcdpGcPPeTeOOpaLGkmJ/2kD2rpGSNC
 iPh+G/h7sJYLFyiG7C6GeuWxShzSL+LpZqv0ks5i/QKmSNXWsvVQexAlBW43R385
 uQkZSWOlzUwmGlTj9XUI2mUxhI73SgKt+WZ9wrJWvIThBHRwwSIln+72SzQ8d4ys
 Smv3DkGt4gCxHmsV+G3nEIBlviECJn2KaaN450D6FVxgQ40yGV5gWAVX4yAWo2De
 uMnQMDamjoajgbeanpbM
 =wCCJ
 -----END PGP SIGNATURE-----

Merge tag 'dmaengine-4.6-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:
 "This is smallish update with minor changes to core and new driver and
  usual updates.  Nothing super exciting here..

   - We have made slave address as physical to enable driver to do the
     mapping.

   - We now expose the maxburst for slave dma as new capability so
     clients can know this and program accordingly

   - addition of device synchronize callbacks on omap and edma.

   - pl330 updates to support DMAFLUSHP for Rockchip platforms.

   - Updates and improved sg handling in Xilinx VDMA driver.

   - New hidma qualcomm dma driver, though some bits are still in
     progress"

* tag 'dmaengine-4.6-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (40 commits)
  dmaengine: IOATDMA: revise channel reset workaround on CB3.3 platforms
  dmaengine: add Qualcomm Technologies HIDMA channel driver
  dmaengine: add Qualcomm Technologies HIDMA management driver
  dmaengine: hidma: Add Device Tree binding
  dmaengine: qcom_bam_dma: move to qcom directory
  dmaengine: tegra: Move of_device_id table near to its user
  dmaengine: xilinx_vdma: Remove unnecessary variable initializations
  dmaengine: sirf: use __maybe_unused to hide pm functions
  dmaengine: rcar-dmac: clear pertinence number of channels
  dmaengine: sh: shdmac: don't open code of_device_get_match_data()
  dmaengine: tegra: don't open code of_device_get_match_data()
  dmaengine: qcom_bam_dma: Make driver work for BE
  dmaengine: sun4i: support module autoloading
  dma/mic_x100_dma: IS_ERR() vs PTR_ERR() typo
  dmaengine: xilinx_vdma: Use readl_poll_timeout instead of do while loop's
  dmaengine: xilinx_vdma: Simplify spin lock handling
  dmaengine: xilinx_vdma: Fix issues with non-parking mode
  dmaengine: xilinx_vdma: Improve SG engine handling
  dmaengine: pl330: fix to support the burst mode
  dmaengine: make slave address physical
  ...
2016-03-17 12:34:54 -07:00
Linus Torvalds 5ec942463b Merge branch 'mm-pat-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull dma_*_writecombine rename from Ingo Molnar:
 "Rename dma_*_writecombine() to dma_*_wc()

  This is a tree-wide API rename, to move the dma_*() write-combining
  APIs closer in name to their usual API families.  (The old API names
  are kept as compatibility wrappers to not introduce extra breakage.)

  The patch was Coccinelle generated"

* 'mm-pat-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  dma, mm/pat: Rename dma_*_writecombine() to dma_*_wc()
2016-03-14 16:31:41 -07:00
Vinod Koul 896e041e8e Merge branch 'topic/xilinx' into for-linus 2016-03-14 11:18:32 +05:30
Vinod Koul 0dae18450a Merge branch 'topic/sh' into for-linus 2016-03-14 11:18:27 +05:30
Vinod Koul 254efeec31 Merge branch 'topic/qcom' into for-linus 2016-03-14 11:18:22 +05:30
Vinod Koul 8bce4c8765 Merge branch 'topic/pl330' into for-linus 2016-03-14 11:18:12 +05:30
Vinod Koul 805dd3508b Merge branch 'topic/omap' into for-linus 2016-03-14 11:17:59 +05:30
Vinod Koul 0e3d5b2129 Merge branch 'topic/ioatdma' into for-linus 2016-03-14 11:17:52 +05:30
Vinod Koul 8795d14328 Merge branch 'topic/idma' into for-linus 2016-03-14 11:17:44 +05:30
Vinod Koul 9f2f4956ed Merge branch 'topic/edma' into for-linus 2016-03-14 11:17:36 +05:30
Dave Jiang c997e30e7f dmaengine: IOATDMA: revise channel reset workaround on CB3.3 platforms
Previously we unloaded the interrupts and reloaded in order to work around
a channel reset bug that cleared the MSIX table. This approach just isn't
practical when a reset needs to happen in the error handler that just
happens to be running in interrupt context (bottom half). It looks like we
can work around the hardware issue by just storing a shadow copy of the
MSIX table and restore it after reset.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-03-11 07:55:08 +05:30
Sinan Kaya 67a2003e06 dmaengine: add Qualcomm Technologies HIDMA channel driver
This patch adds support for the hidma engine. The driver consists of two
logical blocks: the DMA engine interface and the low-level interface.
The hardware only supports memcpy/memset and this driver only supports the
memcpy interface. The HW and the driver don't support the slave interface.

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-03-11 07:42:30 +05:30
Sinan Kaya 7f8f209fd6 dmaengine: add Qualcomm Technologies HIDMA management driver
The Qualcomm Technologies HIDMA device has been designed to support
virtualization technology. The driver has been divided into two to follow
the hardware design.

1. HIDMA Management driver
2. HIDMA Channel driver

Each HIDMA HW consists of multiple channels. These channels share a set
of common parameters. These parameters are initialized by the management
driver during power up. The same management driver is used for monitoring the
execution of the channels. The management driver can dynamically change
performance behavior such as bandwidth allocation and prioritization.

The management driver is executed in host context and is the main
management entity for all channels provided by the device.

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-03-11 07:42:23 +05:30
Sinan Kaya d9b31efcbf dmaengine: qcom_bam_dma: move to qcom directory
Creating a QCOM directory for all QCOM DMA source files.

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Reviewed-by: Andy Gross <agross@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-03-11 07:42:06 +05:30
Ludovic Desroches 25c5e9626c dmaengine: at_xdmac: fix residue computation
When computing the residue we need two pieces of information: the current
descriptor and the remaining data of that descriptor. To get
this information we need to read two registers consecutively, but we
can't do it atomically. For that reason, we have to check manually
that the current descriptor has not changed.
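
A sketch of the retry loop; the register names follow the at_xdmac driver,
the read helper and variables are assumed:

/* re-read the current descriptor pointer around the residue read: if it
 * changed in between, the residue belongs to another descriptor, so retry */
do {
	cur_nda   = at_xdmac_chan_read(atchan, AT_XDMAC_CNDA);
	residue   = at_xdmac_chan_read(atchan, AT_XDMAC_CUBC);
	check_nda = at_xdmac_chan_read(atchan, AT_XDMAC_CNDA);
} while (cur_nda != check_nda);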

Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Suggested-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Reported-by: David Engraf <david.engraf@sysgo.com>
Tested-by: David Engraf <david.engraf@sysgo.com>
Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel
eXtended DMA Controller driver")
Cc: stable@vger.kernel.org #4.1 and later
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-03-10 16:32:36 +05:30
Luis R. Rodriguez f6e45661f9 dma, mm/pat: Rename dma_*_writecombine() to dma_*_wc()
Rename dma_*_writecombine() to dma_*_wc(), so that the naming
is coherent across the various write-combining APIs. Keep the
old names for compatibility for a while, these can be removed
at a later time. A guard is left to enable backporting of the
rename, and later remove of the old mapping defines seemlessly.

Build tested successfully with allmodconfig.

The following Coccinelle SmPL patch was used for this simple
transformation:

@ rename_dma_alloc_writecombine @
expression dev, size, dma_addr, gfp;
@@

-dma_alloc_writecombine(dev, size, dma_addr, gfp)
+dma_alloc_wc(dev, size, dma_addr, gfp)

@ rename_dma_free_writecombine @
expression dev, size, cpu_addr, dma_addr;
@@

-dma_free_writecombine(dev, size, cpu_addr, dma_addr)
+dma_free_wc(dev, size, cpu_addr, dma_addr)

@ rename_dma_mmap_writecombine @
expression dev, vma, cpu_addr, dma_addr, size;
@@

-dma_mmap_writecombine(dev, vma, cpu_addr, dma_addr, size)
+dma_mmap_wc(dev, vma, cpu_addr, dma_addr, size)

We also keep the old names as compatibility helpers, and
guard against their definition to make backporting easier.

Generated-by: Coccinelle SmPL
Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: airlied@linux.ie
Cc: akpm@linux-foundation.org
Cc: benh@kernel.crashing.org
Cc: bhelgaas@google.com
Cc: bp@suse.de
Cc: dan.j.williams@intel.com
Cc: daniel.vetter@ffwll.ch
Cc: dhowells@redhat.com
Cc: julia.lawall@lip6.fr
Cc: konrad.wilk@oracle.com
Cc: linux-fbdev@vger.kernel.org
Cc: linux-pci@vger.kernel.org
Cc: luto@amacapital.net
Cc: mst@redhat.com
Cc: tomi.valkeinen@ti.com
Cc: toshi.kani@hp.com
Cc: vinod.koul@intel.com
Cc: xen-devel@lists.xensource.com
Link: http://lkml.kernel.org/r/1453516462-4844-1-git-send-email-mcgrof@do-not-panic.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-03-09 14:57:51 +01:00
Xuelin Shi a9af316c83 dmaengine: fsldma: fix memory leak
Add unmapping of sources and destinations when dequeuing descriptors.

Signed-off-by: Xuelin Shi <xuelin.shi@nxp.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-03-09 12:15:22 +05:30
Laxman Dewangan 242637bac7 dmaengine: tegra: Move of_device_id table near to its user
After switching to of_device_get_match_data(), the
of_device_id table for the tegra20 DMA is no longer used by probe(),
so move it near the place where the platform driver is
defined, as the table is only referenced there.

Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-03-04 20:36:01 +05:30
Kedareswara rao Appana 694906348d dmaengine: xilinx_vdma: Remove unnecessary variable initializations
This patch removes the unnecessary variable initializations
in the driver.

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-03-04 20:33:13 +05:30
Arnd Bergmann 6ff1cb88a7 dmaengine: sirf: use __maybe_unused to hide pm functions
The sirf dma driver uses #ifdef to check for CONFIG_PM_SLEEP
for its suspend/resume code but then has no #ifdef for the
respective runtime PM code, so we get a warning if CONFIG_PM
is disabled altogether:

drivers/dma/sirf-dma.c:1000:12: error: 'sirfsoc_dma_runtime_resume' defined but not used [-Werror=unused-function]

This removes the existing #ifdef and instead uses __maybe_unused
annotations for all four functions to let the compiler know it
can silently drop the function definition.
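
An illustration of the annotation (function body elided):

static int __maybe_unused sirfsoc_dma_runtime_resume(struct device *dev)
{
	/* re-enable clocks, restore controller context, ... */
	return 0;
}

/* no #ifdef needed: the compiler silently drops the function when the
 * dev_pm_ops entry referencing it is compiled out */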

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-03-04 20:31:35 +05:30
Kuninori Morimoto 20c169aceb dmaengine: rcar-dmac: clear pertinence number of channels
DMACHCLR clears each channel, but the number of channels depends on
the SoC or IP. The current driver uses a fixed 0x7fff (= for 15 channels),
which is not a good match for the Gen3 or Gen2 Audio DMAC. This patch fixes it.

Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Acked-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-03-03 21:51:14 +05:30
Wolfram Sang 6fb5629987 dmaengine: sh: shdmac: don't open code of_device_get_match_data()
This change will also make Coverity happy by avoiding a theoretical NULL
pointer dereference; yet another reason is to use the above helper function
to tighten the code and make it more readable.
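
The helper in use, sketched (the pdata type is an assumption):

const struct sh_dmae_pdata *pdata = of_device_get_match_data(&pdev->dev);

if (!pdata)
	return -ENODEV;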

Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Acked-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-03-03 21:32:45 +05:30
Laxman Dewangan 333f16ec68 dmaengine: tegra: don't open code of_device_get_match_data()
Use of_device_get_match_data() for getting matched data
instead of implementing this locally.

Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-03-03 21:22:07 +05:30
Andy Gross 458f5884a1 dmaengine: qcom_bam_dma: Make driver work for BE
This patch fixes the Qualcomm BAM dmaengine driver to work with big
endian kernels.

Signed-off-by: Andy Gross <andy.gross@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-03-03 21:19:01 +05:30
Emilio López 94c622b2a7 dmaengine: sun4i: support module autoloading
MODULE_DEVICE_TABLE() is missing, so the module isn't auto-loading on
supported systems. This commit adds the missing line so it loads
automatically when building it as a module and running on a system
with the early sunxi DMA engine.
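
The missing line looks like this (the table name is an assumption about the
driver):

MODULE_DEVICE_TABLE(of, sun4i_dma_match);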

Signed-off-by: Emilio López <emilio.lopez@collabora.co.uk>
Reviewed-by: Javier Martinez Canillas <javier@osg.samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-03-03 21:16:35 +05:30
Dan Carpenter d387ef021a dma/mic_x100_dma: IS_ERR() vs PTR_ERR() typo
This is harmless because the caller only cares about zero vs non-zero
but we should be returning PTR_ERR() here.
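
The intended pattern, for reference:

if (IS_ERR(chan))
	return PTR_ERR(chan);	/* return the errno, not the IS_ERR() boolean */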

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-03-03 21:13:01 +05:30
Robert Jarzmik f16921275c dmaengine: pxa_dma: fix cyclic transfers
While testing audio with pxa2xx-ac97, underruns were happening while the
user application was correctly feeding the music. Debugging proved that the
cyclic transfer is not cyclic, i.e. the last descriptor did not loop back to
the first.

Another issue is that the descriptor length was always set to 8192,
because of a trivial operator issue.

This was tested on a pxa27x platform.

Fixes: a57e16cf03 ("dmaengine: pxa: add pxa dmaengine driver")
Reported-by: Vasily Khoruzhick <anarsoul@gmail.com>
Tested-by: Vasily Khoruzhick <anarsoul@gmail.com>
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-03-03 21:06:45 +05:30
Kedareswara rao Appana 9495f26482 dmaengine: xilinx_vdma: Use readl_poll_timeout instead of do while loop's
It is sometimes necessary to poll a memory-mapped register until its
value satisfies some condition. Use the convenience macros that do this
instead of open-coded do-while loops.

This patch updates the driver accordingly.
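
An illustration of the helper from <linux/iopoll.h>; the register and bit
names follow the driver's naming, the base pointer and timeouts are
assumptions:

u32 val;
int err;

/* poll DMASR until the halted bit is set, 1us between reads, 10ms budget */
err = readl_poll_timeout(chan->xdev->regs + XILINX_VDMA_REG_DMASR, val,
			 val & XILINX_VDMA_DMASR_HALTED, 1, 10000);
if (err)
	dev_err(chan->dev, "channel did not halt\n");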

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-03-03 21:02:38 +05:30
Kedareswara rao Appana 26c5e36931 dmaengine: xilinx_vdma: Simplify spin lock handling
This patch simplifies the spin lock handling in the driver
by moving locking out of xilinx_dma_start_transfer() API
and xilinx_dma_update_completed_cookie() API.

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-03-03 21:02:38 +05:30
Kedareswara rao Appana e2b538a77d dmaengine: xilinx_vdma: Fix issues with non-parking mode
This patch fixes issues with the non-parking mode (circular mode).
With the existing driver in circular mode, if we submit fewer frames than the
hardware is configured for, we simply end up with misconfigured VDMA hardware.
This patch fixes the issue by configuring the frame count register.

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-03-03 21:02:38 +05:30
Kedareswara rao Appana 7096f36e53 dmaengine: xilinx_vdma: Improve SG engine handling
The current driver allows the user to queue up multiple segments
onto a single transaction descriptor. The user submits this single descriptor
and in issue_pending() we decode the segments and submit them to the SG HW
engine. We free up allocated_desc when it is submitted to the HW.

The existing code prevents the user from preparing multiple transactions at
the same time, as we overwrite allocated_desc.

The best utilization of the HW SG engine would happen if we collate the
pending list when we start DMA; this patch updates the driver to do so.

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-03-03 21:02:38 +05:30
Arnd Bergmann a1cbaad75a asm-generic: remove old nonatomic-io wrapper files
The two header files got moved to include/linux, and most
users were already converted; this changes the remaining drivers
and removes the files.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Simon Horman <simon.horman@netronome.com>
Acked-by: Yisen Zhuang <yisen.zhuang@huawei.com>
2016-03-01 22:25:17 +01:00
Caesar Wang 0a18f9b268 dmaengine: pl330: fix to support the burst mode
This patch fixes the burst mode support that broke the DMA UART on SoCFPGA.

In some cases, some SoCs don't support multi-burst
even if the devices that use the pl330 claim to support maxburst.

Fixes: commit 848e977
"dmaengine: pl330: support burst mode for dev-to-mem and mem-to-dev transmit"

Reported-by: Dinh Nguyen <dinguyen@opensource.altera.com>
Signed-off-by: Caesar Wang <wxt@rock-chips.com>
Tested-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Tested-by: Dinh Nguyen <dinguyen@opensource.altera.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-03-01 09:11:25 +05:30
Robert Jarzmik 6bab1c6afd dmaengine: pxa_dma: fix the maximum requestor line
The current number of requestor lines is limited to 31. This was an
error in a previous commit, as this number is platform dependent and is
actually:
 - for pxa25x: 40 requestor lines
 - for pxa27x: 75 requestor lines
 - for pxa3xx: 100 requestor lines

The previous testing did not reveal the faulty constant as on pxa[23]xx
platforms, only camera, MSL and USB are above requestor 32, and in these
only the camera has a driver using dma.

Fixes: e87ffbdf06 ("dmaengine: pxa_dma: fix the no-requestor case")
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Acked-by: Vinod Koul <vinod.koul@intel.com>
2016-02-26 22:57:45 +01:00
Andy Shevchenko 36bf8fc42e dmaengine: acpi-dma: align debug message with flow
In acpi_dma_request_slave_chan_by_name() the debug message is printed before
the actual matching happens. Correct the message itself to be in line with the
flow.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-02-22 09:06:09 +05:30