Commit graph

48 commits

Author SHA1 Message Date
Thomas Gleixner d2912cb15b treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500
Based on 2 normalized pattern(s):

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation #

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 4122 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-19 17:09:55 +02:00
Vinod Koul 56b94b02cb dmaengine: mmp_pdma: remove dma_slave_config direction usage
dma_slave_config direction was marked as deprecated quite some time
back. Remove its usage from this driver so that the field can
eventually be removed.

Signed-off-by: Vinod Koul <vkoul@kernel.org>
2018-11-05 10:32:46 +05:30
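For illustration, a minimal sketch of what this deprecation means for client code, assuming the standard dmaengine slave API (the hypothetical_* names are not from any real driver): the transfer direction travels with the prep call instead of being set in dma_slave_config.

  #include <linux/dmaengine.h>
  #include <linux/errno.h>

  static int hypothetical_setup_tx(struct dma_chan *chan, dma_addr_t dev_addr,
                                   struct scatterlist *sgl, unsigned int sg_len)
  {
          struct dma_slave_config cfg = {
                  /* .direction is deprecated and deliberately left unset */
                  .dst_addr = dev_addr,
                  .dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
                  .dst_maxburst = 8,
          };
          struct dma_async_tx_descriptor *tx;

          if (dmaengine_slave_config(chan, &cfg))
                  return -EINVAL;

          /* the direction is passed per transfer instead */
          tx = dmaengine_prep_slave_sg(chan, sgl, sg_len, DMA_MEM_TO_DEV,
                                       DMA_PREP_INTERRUPT);
          return tx ? 0 : -ENOMEM;
  }
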
Dave Jiang 9c1e511cc6 dmaengine: mmp_pdma: convert callback to helper function
This is in preparation for moving to a callback that provides results to
the callback for the transaction. The conversion maintains current
behavior, and the driver must convert to the new callback mechanism at a
later time in order to receive results.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-08-08 08:11:39 +05:30
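A hedged sketch of the helper-based completion path this series introduces, assuming the helpers from the private drivers/dma/dmaengine.h header; my_desc and hypothetical_complete are illustrative names, not the driver's:

  #include <linux/dmaengine.h>
  #include "../dmaengine.h"   /* private dma_cookie_* / callback helpers */

  struct my_desc {            /* hypothetical driver descriptor */
          struct dma_async_tx_descriptor async_tx;
  };

  static void hypothetical_complete(struct my_desc *desc)
  {
          dma_cookie_complete(&desc->async_tx);
          /* a NULL result keeps today's behaviour; a real dmaengine_result
           * can be supplied once the driver reports per-transfer status */
          dmaengine_desc_get_callback_invoke(&desc->async_tx, NULL);
  }
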
Vinod Koul a46018929b dmaengine: mmp_pdma: explicitly freeup irq
A dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().

The irq is still enabled when the device's remove callback runs, and it
should be quiesced before remove completes.

Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
2016-07-16 20:19:03 +05:30
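A minimal sketch of the remove path this refers to, with hypothetical my_dma_dev and hypothetical_drv_remove names:

  #include <linux/interrupt.h>
  #include <linux/platform_device.h>
  #include <linux/dmaengine.h>

  struct my_dma_dev {          /* hypothetical driver state */
          struct dma_device device;
          int irq;
  };

  static int hypothetical_drv_remove(struct platform_device *op)
  {
          struct my_dma_dev *d = platform_get_drvdata(op);

          /* quiesce the irq before teardown; devm alone would only
           * release it after remove() has finished */
          devm_free_irq(&op->dev, d->irq, d);
          dma_async_device_unregister(&d->device);
          return 0;
  }
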
Julia Lawall 1c85a8440f dmaengine: mmp_pdma: Use dma_pool_zalloc
dma_pool_zalloc() combines dma_pool_alloc() with a memset() to 0.  The
semantic patch that makes this transformation is as follows
(http://coccinelle.lip6.fr/):

// <smpl>
@@
expression d,e;
statement S;
@@

        d =
-            dma_pool_alloc
+            dma_pool_zalloc
             (...);
        if (!d) S
-       memset(d, 0, sizeof(*d));
// </smpl>

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-05-03 12:23:11 +05:30
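In plain C, the before/after of the pattern that the semantic patch rewrites looks roughly like this (my_hw_desc and the pool argument are illustrative):

  #include <linux/dmapool.h>
  #include <linux/gfp.h>

  struct my_hw_desc;           /* hypothetical hardware descriptor */

  static struct my_hw_desc *hypothetical_alloc_desc(struct dma_pool *pool,
                                                    dma_addr_t *phys)
  {
          /* before: desc = dma_pool_alloc(pool, GFP_ATOMIC, phys);
           *         if (desc) memset(desc, 0, sizeof(*desc)); */
          return dma_pool_zalloc(pool, GFP_ATOMIC, phys);
  }
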
Maxime Ripard 77a68e56aa dmaengine: Add an enum for the dmaengine alignment constraints
Most drivers need to set constraints on the buffer alignment for async tx
operations. However, even though it is documented, some drivers either use
a defined constant that does not match what the alignment variable expects
(like the DMA_BUSWIDTH_* constants) or fill in the alignment in bytes
instead of as a power of two.

Add a new enum for these alignments that matches what the framework
expects, and convert the drivers to it.

Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-05 10:53:52 +05:30
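A minimal sketch of the resulting usage, assuming the DMAENGINE_ALIGN_* enum from linux/dmaengine.h (the function name is hypothetical):

  #include <linux/dmaengine.h>

  static void hypothetical_set_alignment(struct dma_device *dd)
  {
          /* an enum value expressing a power-of-two shift (2^3 = 8 bytes),
           * not a byte count and not a DMA_SLAVE_BUSWIDTH_* constant */
          dd->copy_align = DMAENGINE_ALIGN_8_BYTES;
  }
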
Linus Torvalds d6a4c0e5d3 Merge branch 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma
Pull slave-dmaengine updates from Vinod Koul:

 - new drivers for:
        - Ingenic JZ4780 controller
        - APM X-Gene controller
        - Freescale RaidEngine device
        - Renesas USB Controller

  - remove device_alloc_chan_resources dummy handlers

  - sh driver cleanups for peri peri and related emmc and asoc patches
    as well

  - fixes and enhancements spread over the drivers

* 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (59 commits)
  dmaengine: dw: don't prompt for DW_DMAC_CORE
  dmaengine: shdmac: avoid unused variable warnings
  dmaengine: fix platform_no_drv_owner.cocci warnings
  dmaengine: pch_dma: fix memory leak on failure path in pch_dma_probe()
  dmaengine: at_xdmac: unlock spin lock before return
  dmaengine: xgene: devm_ioremap() returns NULL on error
  dmaengine: xgene: buffer overflow in xgene_dma_init_channels()
  dmaengine: usb-dmac: Fix dereferencing freed memory 'desc'
  dmaengine: sa11x0: report slave capabilities to upper layers
  dmaengine: vdma: Fix compilation warnings
  dmaengine: fsl_raid: statify fsl_re_chan_probe
  dmaengine: Driver support for FSL RaidEngine device.
  dmaengine: xgene_dma_init_ring_mngr() can be static
  Documentation: dma: Add documentation for the APM X-Gene SoC DMA device DTS binding
  arm64: dts: Add APM X-Gene SoC DMA device and DMA clock DTS nodes
  dmaengine: Add support for APM X-Gene SoC DMA engine driver
  dmaengine: usb-dmac: Add Renesas USB DMA Controller (USB-DMAC) driver
  dmaengine: renesas,usb-dmac: Add device tree bindings documentation
  dmaengine: edma: fixed wrongly initialized data parameter to the edma callback
  dmaengine: ste_dma40: fix implicit conversion
  ...
2015-04-24 09:49:37 -07:00
Fabian Frederick 57c0342239 dmaengine: constify of_device_id array
of_device_id is always used as const.
(See driver.of_match_table and open firmware functions)

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-03-18 22:13:14 +05:30
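For illustration, a const match table in the form this change standardizes (the table name and compatible string are illustrative):

  #include <linux/module.h>
  #include <linux/mod_devicetable.h>
  #include <linux/of.h>

  static const struct of_device_id my_pdma_dt_ids[] = {
          { .compatible = "marvell,pdma-1.0" },
          { /* sentinel */ }
  };
  MODULE_DEVICE_TABLE(of, my_pdma_dt_ids);
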
Robert Jarzmik ecb9b4241f dmaengine: mmp_pdma: fix warning about slave caps
Fix the dmaengine complaint about missing slave caps:
 - declare the available bus widths
 - declare the available transfer types
 - declare the residue calculation type

Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-03-05 22:15:35 +05:30
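A sketch of declaring those three capabilities on a dma_device, with illustrative values rather than this driver's actual ones:

  #include <linux/bitops.h>
  #include <linux/dmaengine.h>

  static void hypothetical_declare_caps(struct dma_device *dd)
  {
          dd->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
                                BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
                                BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
          dd->dst_addr_widths = dd->src_addr_widths;
          dd->directions = BIT(DMA_MEM_TO_DEV) | BIT(DMA_DEV_TO_MEM);
          dd->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
  }
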
Qiao Zhou 3a314f143d dmaengine: mmp-pdma: fix irq handler overwrite physical chan issue
Some dma channels may be reserved for other purposes at another level,
such as a secure driver in EL2/EL3. The PDMA driver can see the interrupt
status, but it should not try to handle the related interrupt, since it
does not belong to the PDMA driver in the kernel. These interrupts should
be handled by the corresponding client/module. Otherwise the driver will
overwrite memory it does not own and cause unexpected issues, since the
pdma driver only requests resources for the pdma channels.

In the PDMA driver, the reserved channels are at the end of the total 32
channels. If we find that an interrupt bit index is not smaller than the
total number of dma channels, we should ignore it.

Signed-off-by: Qiao Zhou <zhouqiao@marvell.com>
Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-02-23 22:09:56 +05:30
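A hedged sketch of that check, decoupled from the driver's real register layout; the status word and channel count are passed in, and all names are hypothetical:

  #include <linux/bitops.h>
  #include <linux/interrupt.h>

  static irqreturn_t hypothetical_handle_status(unsigned long dint,
                                                unsigned int dma_channels)
  {
          irqreturn_t ret = IRQ_NONE;
          unsigned int i;

          for_each_set_bit(i, &dint, BITS_PER_LONG) {
                  /* bits at or above dma_channels belong to channels
                   * reserved for other agents: ignore them */
                  if (i >= dma_channels)
                          continue;
                  /* ...dispatch to channel i here... */
                  ret = IRQ_HANDLED;
          }
          return ret;
  }
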
Maxime Ripard a0abd6719b dmaengine: mmp-pdma: Split device_control
Split the device_control callback of the Marvell MMP PDMA driver to make use
of the newly introduced callbacks that will eventually be used to retrieve
slave capabilities.

Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-12-22 12:29:00 +05:30
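A sketch of what the split looks like from a driver's point of view: instead of one device_control(cmd, arg) hook, dedicated callbacks are set on struct dma_device. The handlers here are empty stubs with hypothetical names:

  #include <linux/dmaengine.h>

  static int my_pdma_config(struct dma_chan *c, struct dma_slave_config *cfg)
  {
          return 0;            /* stub: was the DMA_SLAVE_CONFIG case */
  }

  static int my_pdma_terminate(struct dma_chan *c)
  {
          return 0;            /* stub: was the DMA_TERMINATE_ALL case */
  }

  static void hypothetical_register_ops(struct dma_device *dd)
  {
          dd->device_config = my_pdma_config;
          dd->device_terminate_all = my_pdma_terminate;
  }
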
Kiran Padwal cd166280b7 dmaengine: Remove .owner field for driver
There is no need to initialize the .owner field.

Based on the patch from Peter Griffin <peter.griffin@linaro.org>,
"mmc: remove .owner field for drivers using module_platform_driver":

"This patch removes the superfluous .owner field for drivers which
use the module_platform_driver API, as this is overridden in
platform_driver_register anyway."

Signed-off-by: Kiran Padwal <kiran.padwal@smartplayin.com>
[for nvidia]
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-11-06 11:54:18 +05:30
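For illustration, a platform driver skeleton without the .owner initializer (driver name is made up; probe/remove omitted for brevity):

  #include <linux/module.h>
  #include <linux/platform_device.h>

  static struct platform_driver my_pdma_driver = {
          .driver = {
                  .name = "my-pdma",
                  /* .owner = THIS_MODULE is superfluous here:
                   * platform_driver_register() sets it */
          },
          /* .probe / .remove omitted for brevity */
  };
  module_platform_driver(my_pdma_driver);
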
Laurent Pinchart 31c1e5a135 dmaengine: Remove the context argument to the prep_dma_cyclic operation
The argument is always set to NULL and never used. Remove it.

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-08-04 13:41:50 +05:30
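After this change, the corresponding hook in struct dma_device reads roughly as follows (an excerpt from linux/dmaengine.h of that era, shown for orientation; no trailing void *context):

  struct dma_async_tx_descriptor *(*device_prep_dma_cyclic)(
          struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
          size_t period_len, enum dma_transfer_direction direction,
          unsigned long flags);
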
Daniel Mack 1b38da2646 dma: mmp_pdma: add support for residue reporting
A channel can accommodate more than one transaction, each consisting of
multiple descriptors, the last of which has the DCMD_ENDIRQEN bit set.

In order to report the channel's residue, we hence have to walk the
list of running descriptors, look for those which match the cookie,
and then try to find the descriptor which defines upper and lower
boundaries that embrace the current transport pointer. Once it is found,
walk forward until we find the descriptor that tells us about the end of
a transaction via a set DCMD_ENDIRQEN bit.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-05-07 12:33:40 +05:30
Laurent Pinchart 593d9c2e10 dma: mmp_pdma: Fix physical channel memory allocation size
Use sizeof(*var) instead of sizeof(type) when calling devm_k*alloc().
This avoids using the wrong type as was done to allocate the physical
channels array.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-05-02 21:19:07 +05:30
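A sketch of the sizeof(*var) idiom in an allocation like the one this fixes (device pointer, count, and struct name are illustrative):

  #include <linux/device.h>
  #include <linux/slab.h>

  struct my_phy_chan { int idx; };     /* hypothetical */

  static struct my_phy_chan *hypothetical_alloc_phys(struct device *dev, int nr)
  {
          struct my_phy_chan *phys;

          /* sizeof(*phys) always tracks the element type of 'phys';
           * spelling out sizeof(struct other_type) can silently be wrong */
          phys = devm_kcalloc(dev, nr, sizeof(*phys), GFP_KERNEL);
          return phys;
  }
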
Laurent Pinchart a2a7c176cc dma: mmp_pdma: Simplify access to channel drcmr value
As the physical channel and virtual channel point to each other,
pchan->phy->vchan is always equal to pchan. Simplify the code
accordingly.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-05-02 21:19:07 +05:30
Chao Xie f0b5077744 dma: mmp_pdma: add IRQF_SHARED when request irq
Some SoCs that use mmp_pdma have several dma controllers sharing the same
irq, so add IRQF_SHARED to the flags when requesting the irq. This lets
multiple controllers share the same irq line.

Signed-off-by: Chao Xie <chao.xie@marvell.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-03-06 14:47:47 +05:30
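A sketch of the shared-irq request, assuming devm_request_irq(); the device struct and handler stub are hypothetical, and a shared handler must return IRQ_NONE when the pending status is not its own:

  #include <linux/interrupt.h>
  #include <linux/platform_device.h>

  struct my_pdma_device { int dummy; };        /* hypothetical */

  static irqreturn_t my_pdma_int_handler(int irq, void *dev_id)
  {
          return IRQ_NONE;     /* nothing pending for this instance */
  }

  static int hypothetical_request_irq(struct platform_device *op,
                                      struct my_pdma_device *pdev, int irq)
  {
          return devm_request_irq(&op->dev, irq, my_pdma_int_handler,
                                  IRQF_SHARED, "pdma", pdev);
  }
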
Arnd Bergmann 15cec530e4 dmaengine: mmp_pdma: fix mismerge
The merge between 2b7f65b11d "mmp_pdma: Style neatening" and
8010dad55a "dma: add dma_get_any_slave_channel(), for use in of_xlate()"
caused a build error by leaving obsolete code in place:

  mmp_pdma.c: In function 'mmp_pdma_dma_xlate':
  mmp_pdma.c:909:31: error: 'candidate' undeclared
  mmp_pdma.c:912:3: error: label 'retry' used but not defined
  mmp_pdma.c:901:24: warning: unused variable 'c' [-Wunused-variable]

This removes the extraneous lines.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-01-29 17:38:57 +05:30
Vinod Koul 0adcdeed6f Merge branch 'topic/of' into for-linus
Conflicts:
	drivers/dma/mmp_pdma.c

Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-01-07 21:36:24 +05:30
Joe Perches 2b7f65b11d mmp_pdma: Style neatening
Neaten code used as a template for other drivers.
Make the code more consistent with kernel styles.

o Convert #defines with (1<<foo) to BIT(foo)
o Alignment wrapping
o Logic inversions to put return at end of functions
o Convert devm_kzalloc with multiply to devm_kcalloc
o Fix typo of Peripheral

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-12-16 09:18:48 +05:30
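For illustration, the first and fourth items in that list look roughly like this on hypothetical defines and an allocation:

  #include <linux/bitops.h>

  #define MY_DCSR_ENDINTR   BIT(2)    /* was: (1 << 2) */

  /* was: devm_kzalloc(dev, count * sizeof(*phy), GFP_KERNEL) */
  /* now: devm_kcalloc(dev, count, sizeof(*phy), GFP_KERNEL)  */
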
Stephen Warren 8010dad55a dma: add dma_get_any_slave_channel(), for use in of_xlate()
mmp_pdma.c implements a custom of_xlate() function that is 95% identical
to what Tegra will need. Create a function to implement the common part,
so everyone doesn't just cut/paste the implementation.

Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Lars-Peter Clausen <lars@metafoo.de>
Cc: dmaengine@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-12-10 17:48:33 +05:30
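A hedged sketch of an of_xlate() built on the new helper; the wrapper struct is hypothetical, and any request-line bookkeeping the driver needs would follow the call:

  #include <linux/dmaengine.h>
  #include <linux/of.h>
  #include <linux/of_dma.h>

  struct my_pdma_device {              /* hypothetical */
          struct dma_device device;
  };

  static struct dma_chan *my_pdma_of_xlate(struct of_phandle_args *dma_spec,
                                           struct of_dma *ofdma)
  {
          struct my_pdma_device *d = ofdma->of_dma_data;

          /* hand out any currently unused channel of this controller */
          return dma_get_any_slave_channel(&d->device);
  }
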
Wei Yongjun 086b0af19a dma: mmp_pdma: add missing platform_set_drvdata() in mmp_pdma_probe()
Add missing platform_set_drvdata() in mmp_pdma_probe(), otherwise
calling platform_get_drvdata() in mmp_pdma_remove() returns NULL.

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-11-28 13:39:11 +05:30
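A minimal sketch of the pairing, with hypothetical names; without the platform_set_drvdata() call in probe, the lookup in remove returns NULL:

  #include <linux/device.h>
  #include <linux/errno.h>
  #include <linux/platform_device.h>
  #include <linux/slab.h>

  struct my_pdma_device { int nr_chans; };     /* hypothetical */

  static int hypothetical_pdma_probe(struct platform_device *op)
  {
          struct my_pdma_device *d;

          d = devm_kzalloc(&op->dev, sizeof(*d), GFP_KERNEL);
          if (!d)
                  return -ENOMEM;
          platform_set_drvdata(op, d);
          return 0;
  }

  static int hypothetical_pdma_remove(struct platform_device *op)
  {
          struct my_pdma_device *d = platform_get_drvdata(op);

          return d ? 0 : -ENODEV;      /* NULL if probe forgot to set it */
  }
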
Michael Opdenacker 174b537ac2 dma: misc: remove deprecated IRQF_DISABLED
This patch proposes to remove the use of the IRQF_DISABLED flag

It's a NOOP since 2.6.35 and it will be removed one day.

Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-10-13 20:21:35 +05:30
Wei Yongjun f358c289ee dma: mmp_pdma: use list_move instead of list_del/list_add
Use list_move() instead of list_del() + list_add().

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-10-07 07:41:39 +05:30
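For illustration, on generic list heads with no driver types involved:

  #include <linux/list.h>

  static void hypothetical_requeue(struct list_head *entry,
                                   struct list_head *pending)
  {
          /* before: list_del(entry); list_add(entry, pending); */
          list_move(entry, pending);
  }
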
Daniel Mack 023bf55f1c dma: mmp_pdma: set DMA_PRIVATE
As the driver now has its own xlate function and makes use of
dma_get_slave_channel(), we need to manually set the DMA_PRIVATE flag.

Drivers which rely on of_dma_simple_xlate() implicitly do the same by
going through __dma_request_channel().

Signed-off-by: Daniel Mack <zonque@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-25 22:04:53 +05:30
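A minimal sketch of the capability setup, assuming the standard dma_cap_set() macro (the function name is hypothetical):

  #include <linux/dmaengine.h>

  static void hypothetical_set_caps(struct dma_device *dd)
  {
          dma_cap_set(DMA_SLAVE, dd->cap_mask);
          /* required because channels are handed out via a custom
           * of_xlate()/dma_get_slave_channel() path, bypassing
           * __dma_request_channel() */
          dma_cap_set(DMA_PRIVATE, dd->cap_mask);
  }
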
Daniel Mack 50440d74aa dma: mmp_pdma: add support for cyclic DMA descriptors
Provide a callback to prepare cyclic DMA transfers.
This is for instance needed for audio channel transport.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-25 22:04:52 +05:30
Daniel Mack 0cd6156177 dma: mmp_pdma: don't clear DCMD_ENDIRQEN at end of pending chain
In order to fully support multiple transactions per channel, we need to
ensure we get an interrupt for each completed transaction. That flag bit
is also our only way to tell at which descriptor a transaction ends.

So, remove the manual clearing of that bit, and then inline the only
remaining command left in append_pending_queue() for better readability.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-25 22:04:52 +05:30
Daniel Mack b721f9e800 dma: mmp_pdma: only complete one transaction from dma_do_tasklet()
Currently, when an interrupt has occurred for a channel, the tasklet
worker code will only look at the very last entry in the running list,
complete its cookie, and then dispose of the entire running chain.
Hence, the first transaction's cookie will never complete.

In fact, the interrupt we should handle will be the one related to the
first descriptor in the chain with the ENDIRQEN bit set, so we end up
completing the second transaction, which is in fact still running.

As a result, the driver currently can't handle multiple transactions on
one channel, and it's likely that no drivers exist that rely on this
feature.

Fix this by walking the running chain and looking for the first
descriptor that has the interrupt-enable bit set. Only queue
descriptors up to that point for completion handling, while leaving
the rest intact. Also, only mark the channel idle if the list is
completely empty after such a cycle.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-25 22:04:52 +05:30
Julia Lawall f2d04c3209 dma: mmp: simplify use of devm_ioremap_resource
Remove unneeded error handling on the result of a call to
platform_get_resource when the value is passed to devm_ioremap_resource.

A simplified version of the semantic patch that makes this change is as
follows: (http://coccinelle.lip6.fr/)

// <smpl>
@@
expression pdev,res,n,e,e1;
expression ret != 0;
identifier l;
@@

- res = platform_get_resource(pdev, IORESOURCE_MEM, n);
  ... when != res
- if (res == NULL) { ... \(goto l;\|return ret;\) }
  ... when != res
+ res = platform_get_resource(pdev, IORESOURCE_MEM, n);
  e = devm_ioremap_resource(e1, res);
// </smpl>

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-14 14:51:28 +05:30
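The same simplification in plain C (the function name is hypothetical); devm_ioremap_resource() rejects a NULL resource itself, so no separate check is needed:

  #include <linux/device.h>
  #include <linux/io.h>
  #include <linux/platform_device.h>

  static void __iomem *hypothetical_map_regs(struct platform_device *pdev)
  {
          struct resource *res;

          res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
          return devm_ioremap_resource(&pdev->dev, res);
          /* callers check the result with IS_ERR()/PTR_ERR() */
  }
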
Daniel Mack 6fc4573c4e dma: mmp_pdma: add support for byte-aligned transfers
The PXA DMA controller has a DALGN register which allows for
byte-aligned DMA transfers. Use it in case any of the transfer
descriptors is not aligned to a mask of ~0x7.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-14 13:55:16 +05:30
Daniel Mack 8fd6aac3a8 dma: mmp_pdma: remove duplicate assignment
The DMA_SLAVE is currently set twice.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-14 13:55:16 +05:30
Daniel Mack 419d1f126b dma: mmp_pdma: print the number of channels at probe time
That helps check the provided runtime information.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-14 13:55:16 +05:30
Daniel Mack a9a7cf08bd dma: mmp_pdma: make the controller a DMA provider
This patch makes the mmp_pdma controller able to provide DMA resources
in DT environments by providing a dma xlate function.

of_dma_simple_xlate() isn't used here because it fails to handle multiple
different DMA engines or several instances of the same controller.
Instead, a private implementation is provided that makes use of the newly
introduced dma_get_slave_channel() call.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-14 13:55:15 +05:30
Daniel Mack 13b3006b8e dma: mmp_pdma: add filter function
PXA peripherals need to obtain specific DMA request ids which will
eventually be stored in the DRCMR register.

Currently, clients are expected to store that number inside the slave
config block as slave_id, which is unfortunately incompatible with the
way DMA resources are handled in DT environments.

This patch adds a filter function which stores the filter parameter
passed in by of-dma.c into the channel's drcmr register.

For backward compatibility, cfg->slave_id is still used if set to
a non-zero value.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-14 13:55:15 +05:30
Daniel Mack 1ac0e845c1 dma: mmp_pdma: fix maximum transfer length
There's no reason for limiting the maximum transfer length to 0x1000.
Take the actual bit mask instead; the PDMA is able to transfer chunks of
up to SZ_8K - 1.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-14 13:55:15 +05:30
Daniel Mack 638a542cc4 dma: mmp_pdma: refactor unlocking path in lookup_phy()
As suggested by Ezequiel García, release the spinlock at the end of the
function only, and use a goto for the control flow.

Just a minor cleanup.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-14 13:55:15 +05:30
Daniel Mack 8b298ded90 dma: mmp_pdma: factor out DRCMR register calculation
The exact same calculation is done twice, so let's factor it out to a
macro.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-14 13:55:15 +05:30
Jingoo Han 69c9f0ae1d dma: mmp_pdma: Staticize mmp_pdma_alloc_descriptor()
mmp_pdma_alloc_descriptor() is used only in this file.
Fix the following sparse warning:

drivers/dma/mmp_pdma.c:359:25: warning: symbol 'mmp_pdma_alloc_descriptor' was not declared. Should it be static?

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-13 16:54:42 +05:30
Xiang Wang 26a2dfdeb9 dma: mmp_pdma: clear DRCMR when free a phy channel
In mmp pdma, phy channels are allocated/freed dynamically.
The mapping from DMA request to DMA channel number in DRCMR
should be cleared when a phy channel is freed. Otherwise
conflicts will happen when:
1. A is using channel 2 and frees it when finished, but A's
DRCMR still maps to channel 2.
2. Another client B then gets channel 2, so B's DRCMR maps
to channel 2 as well.
The datasheet states "Do not map two active requests to the
same channel since it produces unpredictable results", and we
can observe that during testing.

Signed-off-by: Xiang Wang <wangx@marvell.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-05 09:32:27 +05:30
Xiang Wang 027f28b7bb dma: mmp_pdma: add protect when alloc/free phy channels
In mmp pdma, phy channels are allocated and freed dynamically and
frequently, but no proper protection is in place. Conflicts happen
when multiple users request phy channels at the same time. Use a
spinlock to protect the allocation and free paths.

Signed-off-by: Xiang Wang <wangx@marvell.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-05 09:32:26 +05:30
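A hedged sketch of the locking pattern (all names hypothetical; the real driver's data structures differ):

  #include <linux/spinlock.h>

  struct my_pdma_device {
          spinlock_t phy_lock;         /* protects phy channel alloc/free */
          /* ... phy channel array ... */
  };

  static void hypothetical_claim_phy(struct my_pdma_device *d)
  {
          unsigned long flags;

          spin_lock_irqsave(&d->phy_lock, flags);
          /* ... scan for a free phy channel and mark it used ... */
          spin_unlock_irqrestore(&d->phy_lock, flags);
  }
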
Andy Shevchenko 4aa9fe0a1f mmp_pdma: remove useless use of lock
According to the dma_cookie_status() description, locking is not required.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-05 09:32:24 +05:30
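For illustration, a tx_status implementation that relies on that guarantee, assuming the dma_cookie_status() helper from the private drivers/dma/dmaengine.h header (the function name is hypothetical):

  #include <linux/dmaengine.h>
  #include "../dmaengine.h"    /* dma_cookie_status() */

  static enum dma_status hypothetical_tx_status(struct dma_chan *chan,
                                                dma_cookie_t cookie,
                                                struct dma_tx_state *txstate)
  {
          /* no channel lock taken around the cookie check */
          return dma_cookie_status(chan, cookie, txstate);
  }
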
Linus Torvalds 5115f3c19d Merge branch 'next' of git://git.infradead.org/users/vkoul/slave-dma
Pull slave-dmaengine updates from Vinod Koul:
 "This is fairly big pull by my standards as I had missed last merge
  window.  So we have the support for device tree for slave-dmaengine,
  large updates to dw_dmac driver from Andy for reusing on different
  architectures.  Along with this we have fixes on bunch of the drivers"

Fix up trivial conflicts, usually due to #include line movement next to
each other.

* 'next' of git://git.infradead.org/users/vkoul/slave-dma: (111 commits)
  Revert "ARM: SPEAr13xx: Pass DW DMAC platform data from DT"
  ARM: dts: pl330: Add #dma-cells for generic dma binding support
  DMA: PL330: Register the DMA controller with the generic DMA helpers
  DMA: PL330: Add xlate function
  DMA: PL330: Add new pl330 filter for DT case.
  dma: tegra20-apb-dma: remove unnecessary assignment
  edma: do not waste memory for dma_mask
  dma: coh901318: set residue only if dma is in progress
  dma: coh901318: avoid unbalanced locking
  dmaengine.h: remove redundant else keyword
  dma: of-dma: protect list write operation by spin_lock
  dmaengine: ste_dma40: do not remove descriptors for cyclic transfers
  dma: of-dma.c: fix memory leakage
  dw_dmac: apply default dma_mask if needed
  dmaengine: ioat - fix spare sparse complain
  dmaengine: move drivers/of/dma.c -> drivers/dma/of-dma.c
  ioatdma: fix race between updating ioat->head and IOAT_COMPLETION_PENDING
  dw_dmac: add support for Lynxpoint DMA controllers
  dw_dmac: return proper residue value
  dw_dmac: fill individual length of descriptor
  ...
2013-02-26 09:24:48 -08:00
Thierry Reding 7331205a96 dma: Convert to devm_ioremap_resource()
Convert all uses of devm_request_and_ioremap() to the newly introduced
devm_ioremap_resource() which provides more consistent error handling.

devm_ioremap_resource() provides its own error messages so all explicit
error messages can be removed from the failure code paths.

Signed-off-by: Thierry Reding <thierry.reding@avionic-design.de>
Cc: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-01-25 12:21:46 -08:00
Cong Ding ed30933e6f dma: remove unnecessary null pointer check in mmp_pdma.c
The pointer cfg is dereferenced at line 594, so there is no reason to
check it for NULL again at line 620.

Signed-off-by: Cong Ding <dinggnu@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-01-20 05:49:40 -08:00
Greg Kroah-Hartman 4bf27b8b33 Drivers: dma: remove __dev* attributes.
CONFIG_HOTPLUG is going away as an option.  As a result, the __dev*
markings need to be removed.

This change removes the use of __devinit, __devexit_p, __devinitconst,
and __devexit from these drivers.

Based on patches originally written by Bill Pemberton, but redone by me
in order to handle some of the coding style issues better, by hand.

Cc: Bill Pemberton <wfp5p@virginia.edu>
Cc: Viresh Kumar <viresh.linux@gmail.com>
Cc: Dan Williams <djbw@fb.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Barry Song <baohua.song@csr.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Alexander Duyck <alexander.h.duyck@intel.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Jassi Brar <jassisinghbrar@gmail.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Bill Pemberton <wfp5p@virginia.edu>
Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-01-03 15:57:15 -08:00
Bill Pemberton 463a1f8b3c dma: remove use of __devinit
CONFIG_HOTPLUG is going away as an option so __devinit is no longer
needed.

Signed-off-by: Bill Pemberton <wfp5p@virginia.edu>
Cc: Li Yang <leoli@freescale.com>
Cc: Zhang Wei <zw@zh-kernel.org>
Cc: Barry Song <baohua.song@csr.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-11-28 12:42:36 -08:00
Bill Pemberton a7d6e3ec28 dma: remove use of __devexit_p
CONFIG_HOTPLUG is going away as an option so __devexit_p is no longer
needed.

Signed-off-by: Bill Pemberton <wfp5p@virginia.edu>
Acked-by: Barry Song <baohua.song@csr.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-11-28 12:41:36 -08:00
Zhangfei Gao c8acd6aa6b dmaengine: mmp-pdma support
1. virtual channel vs. physical channel
A virtual channel is managed by the dmaengine core.
A physical channel handles resources such as the irq.
Physical channels are allocated dynamically in descending priority and
freed immediately when the irq is done; the highest-priority physical
channel available is always the one allocated.

Issue pending list -> alloc highest-priority physical channel available -> dma done -> free physical channel

2. lists: running list & pending list
submit: desc list -> pending list
issue_pending_list: if (IDLE) pending list -> running list; free pending list (RUN)
irq: free running list (IDLE)
     check pending list -> pending list -> running list; free pending list (RUN)

3. irq:
Each list generates one irq, which calls the callback.
One list may contain several descriptor chains; in that case, make sure only the last descriptor chain generates the irq.

4. async
submit adds a desc chain to the pending list and can be called multiple times.
If multiple desc chains are submitted, only the last one generates the irq -> callback.
If IDLE, issue_pending_list starts the pending list, turning it into the running list.
If RUN, the irq handler starts the pending list.

5. test
5.1 pxa3xx_nand on pxa910
5.2 insmod dmatest.ko (threads_per_chan=y)
By default drivers/dma/dmatest.c tests every channel and tests memcpy with 1 thread per channel

Signed-off-by: Zhangfei Gao <zhangfei.gao@marvell.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
2012-09-14 08:14:07 +05:30