Commit Graph

348245 Commits

Author SHA1 Message Date
Padmavathi Venna a80258f9b2 DMA: PL330: Add xlate function
Add xlate to translate the device-tree binding information into
the appropriate format. The filter function requires the dma
controller device and dma channel number as filter_params.

Signed-off-by: Padmavathi Venna <padma.v@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-02-14 20:04:27 +05:30
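
A minimal sketch of the xlate/filter pairing described above, assuming a filter parameter that carries the controller's dma_device and the channel number; the pl330_*_sketch names and the filter-args layout are illustrative, not the driver's exact symbols:

    #include <linux/dmaengine.h>
    #include <linux/of_dma.h>

    struct pl330_filter_args {                      /* illustrative filter_param layout */
            struct dma_device *dma_dev;             /* the requesting controller */
            unsigned int chan_id;                   /* requested channel number */
    };

    static bool pl330_dt_filter_sketch(struct dma_chan *chan, void *param)
    {
            struct pl330_filter_args *fargs = param;

            if (chan->device != fargs->dma_dev)     /* channel of another controller */
                    return false;

            return chan->chan_id == fargs->chan_id;
    }

    static struct dma_chan *pl330_of_xlate_sketch(struct of_phandle_args *dma_spec,
                                                  struct of_dma *ofdma)
    {
            struct pl330_filter_args fargs = {
                    .dma_dev = ofdma->of_dma_data,  /* stored at of_dma_controller_register() time */
                    .chan_id = dma_spec->args[0],   /* channel number from the DT cells */
            };
            dma_cap_mask_t cap;

            dma_cap_zero(cap);
            dma_cap_set(DMA_SLAVE, cap);
            dma_cap_set(DMA_CYCLIC, cap);

            return dma_request_channel(cap, pl330_dt_filter_sketch, &fargs);
    }
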
Padmavathi Venna 34d19355b8 DMA: PL330: Add new pl330 filter for DT case.
This patch adds a new pl330_dt_filter for the DT case to filter the
required channel based on the new filter params, and modifies the
old filter to handle only the non-DT case, as suggested by Arnd Bergmann.

Signed-off-by: Padmavathi Venna <padma.v@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-02-14 20:04:27 +05:30
Andy Shevchenko a72208733f dma: tegra20-apb-dma: remove unnecessary assignment
There is no need to assign 0 to residue, because dma_cookie_status() does this
for us.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Laxman Dewangan <ldewangan@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-02-14 20:00:54 +05:30
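
A hedged sketch of the pattern this cleanup relies on: dma_cookie_status() already zeroes the residue in txstate, so a driver only overrides it while a transfer is still in flight (struct foo_chan and bytes_left are illustrative, not tegra20-apb-dma symbols):

    #include <linux/dmaengine.h>
    #include <linux/kernel.h>

    struct foo_chan {                               /* illustrative per-channel state */
            struct dma_chan chan;
            size_t bytes_left;                      /* kept up to date by the ISR */
    };

    static enum dma_status foo_tx_status(struct dma_chan *c, dma_cookie_t cookie,
                                         struct dma_tx_state *txstate)
    {
            struct foo_chan *fc = container_of(c, struct foo_chan, chan);
            enum dma_status ret = dma_cookie_status(c, cookie, txstate);

            if (ret == DMA_SUCCESS)
                    return ret;                     /* residue was already set to 0 for us */

            dma_set_residue(txstate, fc->bytes_left);
            return ret;
    }
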
Andy Shevchenko 373459eee0 edma: do not waste memory for dma_mask
According to the comment in platform_device_register_full(), the memory
allocated for dma_mask is not going to be freed. That's why it is better to
assign dma_mask afterwards.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-02-14 20:00:53 +05:30
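
A minimal sketch of the approach, with a hypothetical "foo-dma" device: the dma_mask field of platform_device_info is left unset, so the core does not kmalloc a mask that is never freed, and dev.dma_mask is pointed at the embedded coherent_dma_mask afterwards:

    #include <linux/platform_device.h>
    #include <linux/dma-mapping.h>
    #include <linux/err.h>

    static struct platform_device *foo_register_dma_dev(void)
    {
            struct platform_device_info pdevinfo = {
                    .name = "foo-dma",              /* illustrative device name */
                    .id   = -1,
                    /* .dma_mask deliberately left 0 */
            };
            struct platform_device *pdev;

            pdev = platform_device_register_full(&pdevinfo);
            if (IS_ERR(pdev))
                    return pdev;

            /* reuse the embedded coherent_dma_mask as storage for dma_mask */
            pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
            pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);

            return pdev;
    }
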
Andy Shevchenko 9b562639a1 dma: coh901318: set residue only if dma is in progress
When status is DMA_SUCCESS the residue should be zero. Otherwise it's a bug.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-02-14 20:00:53 +05:30
Andy Shevchenko 4168d0d9d3 dma: coh901318: avoid unbalanced locking
If len is 0 we must return without trying to unlock a lock that was
never taken.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-02-14 20:00:53 +05:30
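
A tiny generic sketch of the fix pattern (not the coh901318 code): reject a zero-length request before taking the channel lock, so no path ever unlocks a lock it never held:

    #include <linux/spinlock.h>
    #include <linux/errno.h>
    #include <linux/types.h>

    struct foo_dma_chan {                           /* illustrative channel state */
            spinlock_t lock;
    };

    static int foo_prep_sketch(struct foo_dma_chan *fc, size_t len)
    {
            unsigned long flags;

            if (!len)
                    return -EINVAL;                 /* early exit: lock not taken yet */

            spin_lock_irqsave(&fc->lock, flags);
            /* ... build and queue the descriptor ... */
            spin_unlock_irqrestore(&fc->lock, flags);

            return 0;
    }
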
Andy Shevchenko 978c4172af dmaengine.h: remove redundant else keyword
dmaengine_device_control returns -ENOSYS in case the dma driver doesn't have
such functionality.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-02-14 20:00:53 +05:30
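
A paraphrased sketch of the resulting shape of the helper (not a verbatim copy of dmaengine.h): with the early return, a trailing else adds nothing:

    #include <linux/dmaengine.h>
    #include <linux/errno.h>

    static inline int device_control_sketch(struct dma_chan *chan,
                                            enum dma_ctrl_cmd cmd,
                                            unsigned long arg)
    {
            if (chan->device->device_control)
                    return chan->device->device_control(chan, cmd, arg);

            return -ENOSYS;                         /* driver has no control hook */
    }
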
Andy Shevchenko 88b386c0a7 dma: of-dma: protect list write operation by spin_lock
It's possible to end up with an inconsistent list due to unprotected operations
on it. The patch adds proper locking around the list operations.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-02-14 20:00:53 +05:30
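
A generic sketch of the locking pattern being applied (the foo_* names are illustrative): every write to the shared controller list goes through the same spinlock:

    #include <linux/spinlock.h>
    #include <linux/list.h>
    #include <linux/of.h>

    static DEFINE_SPINLOCK(foo_dma_list_lock);
    static LIST_HEAD(foo_dma_list);

    struct foo_dma_entry {
            struct list_head list;
            struct device_node *np;
    };

    static void foo_dma_controller_add(struct foo_dma_entry *e)
    {
            spin_lock(&foo_dma_list_lock);
            list_add_tail(&e->list, &foo_dma_list);
            spin_unlock(&foo_dma_list_lock);
    }

    static void foo_dma_controller_del(struct foo_dma_entry *e)
    {
            spin_lock(&foo_dma_list_lock);
            list_del(&e->list);
            spin_unlock(&foo_dma_list_lock);
    }
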
Fabio Baltieri 7dd1452525 dmaengine: ste_dma40: do not remove descriptors for cyclic transfers
Fix dma_tc_handle() to call d40_desc_remove() and d40_desc_done() only
for non-cyclic transfers, as this was breaking ux500_pcm since
introduced in:

d49278e dmaengine: dma40: Add support to split up large elements

Reported-by: Shreshtha Kumar Sahu <shreshthakumar.sahu@stericsson.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-02-14 19:54:31 +05:30
Cong Ding e68b1130df dma: of-dma.c: fix memory leakage
The memory allocated for ofdma may be leaked when an error occurs.

Signed-off-by: Cong Ding <dinggnu@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-02-14 19:11:46 +05:30
Andy Shevchenko 877e86f283 dw_dmac: apply default dma_mask if needed
In some cases we get a device without dma_mask configured. We have to apply
a default value to avoid crashes during memory mapping.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-02-14 15:14:36 +05:30
Fengguang Wu a20702b8d7 dmaengine: ioat - fix spare sparse complain
>> drivers/dma/ioat/dma_v3.c:371:6: sparse: symbol 'ioat3_timer_event' was not declared.

Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-02-13 08:10:09 -08:00
Vinod Koul 5fa422c922 dmaengine: move drivers/of/dma.c -> drivers/dma/of-dma.c
as requested by Rob

Suggested-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-02-13 08:09:37 -08:00
Dave Jiang 4dec23d771 ioatdma: fix race between updating ioat->head and IOAT_COMPLETION_PENDING
There is a race that can hit during __cleanup() when the ioat->head pointer is
incremented during descriptor submission. The __cleanup() can clear the
PENDING flag when it does not see any active descriptors. This causes newly
submitted descriptors to be ignored because the COMPLETION_PENDING flag is
cleared. This was introduced when code was adapted from ioatdma v1 to ioatdma
v2. For v2 and v3, the IOAT_COMPLETION_PENDING flag is abandoned and a new
flag, IOAT_CHAN_ACTIVE, is used instead. This flag is also protected by the
prep_lock when being modified, in order to avoid the race.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Dan Williams <djbw@fb.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-02-12 08:27:21 -08:00
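
A hedged sketch of the idea behind the fix, not the ioatdma code itself: the channel-active flag is only ever changed under the same lock that guards descriptor submission, so cleanup cannot clear it between the head update and the flag update:

    #include <linux/spinlock.h>
    #include <linux/bitops.h>

    #define FOO_CHAN_ACTIVE 0                       /* illustrative flag bit */

    struct foo_ioat_chan {                          /* illustrative channel state */
            spinlock_t prep_lock;
            unsigned long state;
            unsigned int head;                      /* descriptors submitted */
            unsigned int tail;                      /* descriptors completed */
    };

    static void foo_submit(struct foo_ioat_chan *c)
    {
            spin_lock_bh(&c->prep_lock);
            c->head++;                              /* queue a new descriptor */
            set_bit(FOO_CHAN_ACTIVE, &c->state);    /* same lock as the head update */
            spin_unlock_bh(&c->prep_lock);
    }

    static void foo_cleanup(struct foo_ioat_chan *c)
    {
            /* declare the channel idle only under prep_lock, so a concurrent
             * submission cannot slip in between the check and the clear */
            spin_lock_bh(&c->prep_lock);
            if (c->tail == c->head)
                    clear_bit(FOO_CHAN_ACTIVE, &c->state);
            spin_unlock_bh(&c->prep_lock);
    }
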
Mika Westerberg cfdf5b6cc5 dw_dmac: add support for Lynxpoint DMA controllers
The Intel Lynxpoint PCH Low Power Subsystem has a DMA controller to support general
purpose serial buses like SPI, I2C, and HSUART. This controller is enumerated
from the ACPI namespace with ACPI ID INTL9C60.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-02-12 08:14:36 -08:00
Andy Shevchenko 4702d5244c dw_dmac: return proper residue value
Currently the driver returns the full length of the active descriptor, which is
wrong. We have to go through the active descriptor and subtract the length of
each already-sent child in the chain from the total length, along with the
actual data in the DMA channel registers.

The cyclic case is not handled by this patch because the len field in the
descriptor structure is left untouched by the original code.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-01-28 04:04:50 -08:00
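
A generic sketch of the residue arithmetic described above (the foo_desc layout is illustrative, not dw_dmac's): start from the total length of the chain, subtract every block already handed to the hardware, then subtract the bytes already moved for the block in flight:

    #include <linux/list.h>
    #include <linux/types.h>

    struct foo_desc {                               /* illustrative descriptor */
            struct list_head tx_list;               /* children of the first descriptor */
            struct list_head node;
            size_t len;                             /* length of this descriptor */
            size_t total_len;                       /* length of the whole chain */
    };

    static size_t foo_residue(struct foo_desc *first, struct foo_desc *cur,
                              size_t bytes_done_in_cur)
    {
            struct foo_desc *child;
            size_t residue = first->total_len;

            if (cur != first)
                    residue -= first->len;          /* the first block is done */

            /* children the hardware has already finished with */
            list_for_each_entry(child, &first->tx_list, node) {
                    if (child == cur)
                            break;
                    residue -= child->len;
            }

            return residue - bytes_done_in_cur;
    }
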
Andy Shevchenko 176dcec50f dw_dmac: fill individual length of descriptor
It will be useful to have the length of the transfer in the descriptor. The
cyclic transfer functions remained untouched.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-01-28 04:04:42 -08:00
Andy Shevchenko 30d38a3286 dw_dmac: introduce total_len field in struct dw_desc
With this new field we distinguish the total length of the chain from the
individual length of each descriptor in the chain.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-01-28 04:04:36 -08:00
Andy Shevchenko fdf475fa40 dw_dmac: remove unnecessary tx_list field in dw_dma_chan
The soft LLP mode works for the active descriptor only, so we do not need to
keep a copy of its pointer.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-01-28 04:04:32 -08:00
Andy Shevchenko 985a6c7dcf dw_dmac: print out DW_PARAMS and DWC_PARAMS when debug
It's useful to have the values of DW_PARAMS and DWC_PARAMS printed when
debug mode is enabled.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-01-28 04:04:28 -08:00
Barry Song 2b99c25921 DMAEngine: sirf: lock the shared registers access in sirfsoc_dma_terminate_all
Just like Russell pointed out in "DMAEngine: sirf: add DMA
pause/resume support" at
http://www.spinics.net/lists/arm-kernel/msg212496.html
here I find sirfsoc_dma_terminate_all() has the same problem,
so move the locking in front of the register access.

Signed-off-by: Barry Song <Baohua.Song@csr.com>
Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-01-28 01:44:46 -08:00
Barry Song 2518d1d1fc DMAEngine: sirf: add DMA pause/resume support
Pause/resume are important for users like ALSA sound drivers.
This patch makes the SiRF prima2/marco support the DMA commands
DMA_PAUSE and DMA_RESUME.

Signed-off-by: Barry Song <Baohua.Song@csr.com>
Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-01-28 01:44:40 -08:00
Vinod Koul 6c5e6a3990 Merge tag 'ux500-dma40' of //git.linaro.org/people/fabiobaltieri/linux.git
Pull ste_dma40 fixes from Fabio

Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-01-21 07:09:34 -08:00
Andy Shevchenko 77bcc497c6 dw_dmac: move soft LLP code from tasklet to dwc_scan_descriptors
The proper place for the main logic of the soft LLP mode is
dwc_scan_descriptors. It prevents the transfer from being unexpectedly
aborted when the user calls dwc_tx_status.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-01-20 20:49:21 -08:00
Andy Shevchenko 5be10f349b dw_dmac: don't exceed AHB master number in dwc_get_data_width
The driver assumes that the hardware has two AHB masters, which might not always
be true. In such cases we must not exceed the number of AHB masters present in
the hardware. With the scheme proposed in this patch, we choose the master with
the highest available number whenever the requested one would exceed the number
of AHB masters.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-01-20 20:49:21 -08:00
Andy Shevchenko f8122a82d2 dw_dmac: allocate dma descriptors from DMA_COHERENT memory
Currently descriptors are allocated from normal cacheable memory, which slows
down filling the descriptors, as we need to call cache coherency routines
afterwards. It would be better to allocate memory for these descriptors from
DMA_COHERENT memory. This would make the code much cleaner too.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Tested-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-01-20 20:49:21 -08:00
Cong Ding 855372c013 dma: sh/shdma-base.c: remove unnecessary null pointer check
The variable chan is dereferenced in line 635, so there is no reason to check
it for NULL again in line 641.

Signed-off-by: Cong Ding <dinggnu@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-01-20 05:49:40 -08:00
Cong Ding ed30933e6f dma: remove unnecessary null pointer check in mmp_pdma.c
The pointer cfg is dereferenced in line 594, so there is no reason to check
it for NULL again in line 620.

Signed-off-by: Cong Ding <dinggnu@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-01-20 05:49:40 -08:00
Matt Porter 661f7cb55c dma: edma: fix slave config dependency on direction
The edma_slave_config() implementation depends on the
direction field such that it will not properly configure
a slave channel when called without direction set.

This fixes the implementation so that the slave config
is copied as is and prep_slave_sg() performs the
direction-dependent handling. spi-omap2-mcspi and
omap_hsmmc both expose this bug as they configure the
slave channel config from a common path with an unconfigured
direction field.

Signed-off-by: Matt Porter <mporter@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-01-20 04:04:29 -08:00
Fabio Baltieri da2ac56a1b dmaengine: ste_dma40: balance clock in probe fail code
The clock code was changed to use clk_prepare_enable in:

b707c65 dma/ste_dma40: Fixup clock usage during probe

but the clk_disable on the probe fail path was not updated.  This patch fixes
this by using clk_disable_unprepare in place of clk_disable.

Acked-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
2013-01-14 10:51:16 +01:00
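
A generic sketch of the balanced pairing (foo_* names are illustrative): a clock turned on with clk_prepare_enable() must be released on the error path with clk_disable_unprepare(), not a bare clk_disable():

    #include <linux/clk.h>
    #include <linux/device.h>
    #include <linux/err.h>

    static int foo_setup_hw(struct device *dev)
    {
            return 0;                               /* stand-in for the rest of probe */
    }

    static int foo_probe_clocks(struct device *dev)
    {
            struct clk *clk;
            int ret;

            clk = clk_get(dev, NULL);
            if (IS_ERR(clk))
                    return PTR_ERR(clk);

            ret = clk_prepare_enable(clk);
            if (ret)
                    goto put_clk;

            ret = foo_setup_hw(dev);
            if (ret)
                    goto disable_clk;

            return 0;

    disable_clk:
            clk_disable_unprepare(clk);             /* balances clk_prepare_enable() */
    put_clk:
            clk_put(clk);
            return ret;
    }
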
Fabio Baltieri 53d6d68f3c dmaengine: ste_dma40: ignore spurious interrupts
Some DMA channels may be used by other cores in the SoC.  This patch
modifies the dma interrupt handler to ignore interrupts from unknown
channels.

Cc: Rabin Vincent <rabin.vincent@stericsson.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
2013-01-14 10:51:12 +01:00
Fabio Baltieri 7407048bec dmaengine: ste_dma40: add software lli support
This patch adds support to manage LLIs in software for selected phy channels.

There is a HW issue in certain controllers due to which, on certain
occasions, HW LLI cannot be used on some physical channels.  To avoid
the HW issue on a specific phy channel, the phy channel number can be
added to the list of soft_lli_channels, and thereafter all the transfers
on that channel will use software LLI, for peripheral to memory
transfers.

Soft LLI introduces relink overhead that could impact performance for
certain use cases.

This is based on a previous patch of Narayanan Gopalakrishnan.

Cc: Shreshtha Kumar Sahu <shreshthakumar.sahu@stericsson.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
2013-01-14 10:51:08 +01:00
Fabio Baltieri 7ce529efbc dmaengine: ste_dma40: minor code readability fixes
Use variables local to the loops to improve code readability; no
functional changes.

Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
2013-01-14 10:51:04 +01:00
Fabio Baltieri f26e03ad2b dmaengine: ste_dma40: minor cosmetic fixes
This patch contains various non-functional cosmetic fixes.

Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
2013-01-14 10:51:01 +01:00
Fabio Baltieri 762eb33fde dmaengine: ste_dma40: add missing kernel-doc entry
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
2013-01-14 10:50:56 +01:00
Fabio Baltieri 4226dd86b1 dmaengine: ste_dma40: add a done queue for completed descriptors
This is to keep the active queue for only those transfers which are
actually active in the hardware.  Descriptors will be moved to the done
queue after they are completed in the hardware (interrupt handler) but
before all the cleanup work has been completed (tasklet).

Mostly based on a previous patch by Rabin Vincent.

Cc: Rabin Vincent <rabin.vincent@stericsson.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
2013-01-14 10:50:53 +01:00
Tong Liu 3cb645dc85 dmaengine: ste_dma40: support more than 128 event lines
The U8540 DMA controller is different from the U9540, so we need to define new
registers and use them to support handling more than 128 event lines.

Signed-off-by: Tong Liu <tong.liu@stericsson.com>
Reviewed-by: Per Forlin <per.forlin@stericsson.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
2013-01-14 10:50:48 +01:00
Gerald Baeza 47db92f4a6 dmaengine: ste_dma40: physical channels number correction
DMAC_ICFG[0:2]=SCHNB only allows counting physical channels in multiples
of 4, so it was fine for platforms having 8 channels but cannot be
used for the next versions (with 10 or 14 channels).  This patch allows the
number of physical channels of a DMA device to be provided via
platform_data, or still relies on SCHNB if platform_data announces 0
channels.

Signed-off-by: Gerald Baeza <gerald.baeza@stericsson.com>
Reviewed-by: Per Forlin <per.forlin@stericsson.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
2013-01-14 10:50:44 +01:00
Gerald Baeza f000df8c5a dmaengine: ste_dma40: support fixed physical channel allocation
This patch makes the existing use_fixed_channel field (of the stedma40_chan_cfg
structure) applicable to physical channels.

Signed-off-by: Gerald Baeza <gerald.baeza@stericsson.com>
Tested-by: Yannick Fertre <yannick.fertre@stericsson.com>
Reviewed-by: Per Forlin <per.forlin@stericsson.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
2013-01-14 10:50:40 +01:00
Rabin Vincent ccc3d69764 dmaengine: ste_dma40: don't allow high priority dest event lines
Hardware bug: when a logical channel is triggered by a high priority
destination event line, an extra packet transaction is generated in case
of significant data write response latency on a previous logical channel A,
if the source transfer of the current logical channel B is already
completed, and if no other channel with a higher priority than B is
waiting for execution.

Software workaround: do not set the high priority level for the
destination event lines that trigger logical channels.

Signed-off-by: Rabin Vincent <rabin.vincent@stericsson.com>
Reviewed-by: Shreshtha Kumar Sahu <shreshthakumar.sahu@stericsson.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
2013-01-14 10:50:37 +01:00
Narayanan G 42365cf0fa dmaengine: ste_dma40: don't check for pm_runtime_suspended()
The check for runtime suspend is not needed during a regular suspend, as
the framework takes care of this.  This fixes the issue of the DMA driver
not letting the system go to deep sleep on the first attempt.

Signed-off-by: Narayanan G <narayanan.gopalakrishnan@stericsson.com>
Reviewed-by: Rabin Vincent <rabin.vincent@stericsson.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
2013-01-14 10:50:32 +01:00
Per Forlin 92bb6cdb53 dmaengine: ste_dma40: limit burst size to 16
The client is not aware of the maximum burst size in the dma driver.  If
the requested size exceeds 16, cap it at 16.

Signed-off-by: Per Forlin <per.forlin@stericsson.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
2013-01-14 10:50:27 +01:00
Per Forlin b96710e5b2 dmaengine: ste_dma40: set dma max seg size
The maximum DMA seg size is (0xffff x data_width).  If the max seg
size is not set it defaults to 64k.  This results in failure
when transferring 64k in byte mode.
Larger seg sizes may be supported by splitting up large transfers.

Signed-off-by: Per Forlin <per.forlin@stericsson.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
2013-01-14 10:50:18 +01:00
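
A minimal sketch of how a driver can advertise its real segment limit (illustrative names and storage): dma_set_max_seg_size() needs dev->dma_parms to point somewhere first, and the limit follows the 0xffff-elements-times-data-width rule from the commit message:

    #include <linux/device.h>
    #include <linux/dma-mapping.h>

    static struct device_dma_parameters foo_dma_parms;  /* illustrative storage */

    static int foo_set_seg_limit(struct device *dev, unsigned int data_width_bytes)
    {
            dev->dma_parms = &foo_dma_parms;
            /* 0xffff elements of data_width bytes each, per the commit message */
            return dma_set_max_seg_size(dev, 0xffff * data_width_bytes);
    }
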
Per Forlin 8a5d2039ab dmaengine: ste_dma40: use writel_relaxed for lcxa
lcpa and lcla are written often and the cache_sync() overhead in writel
is costly, especially for wlan where every single network packet (in RX
mode) corresponds to a separate DMA transfer.

Signed-off-by: Per Forlin <per.forlin@stericsson.com>
Reviewed-by: Narayanan Gopalakrishnan <narayanan.gopalakrishnan@stericsson.com>
Reviewed-by: Rabin Vincent <rabin.vincent@stericsson.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
2013-01-14 10:50:15 +01:00
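
A hedged sketch of the trade-off (illustrative register offsets, not the dma40 LCPA layout): writel() pays for ordering on every access, so the hot-path parameter writes use writel_relaxed() and one final non-relaxed write provides the ordering before the channel is kicked:

    #include <linux/io.h>
    #include <linux/types.h>

    static void foo_program_lli(void __iomem *lli, u32 src, u32 dst, u32 ctrl,
                                void __iomem *activate_reg, u32 activate_val)
    {
            /* hot path: no barrier per register write */
            writel_relaxed(src,  lli + 0x0);        /* illustrative LLI layout */
            writel_relaxed(dst,  lli + 0x4);
            writel_relaxed(ctrl, lli + 0x8);

            /* the final non-relaxed write orders the relaxed stores above
             * before the controller is told to start */
            writel(activate_val, activate_reg);
    }
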
Narayanan 0fd602235d dmaengine: ste_dma40: reset priority bit for logical channels
This patch sets the SSCFG/SDCFG bit[7] PRI only for physical channel
requests with high priority.  For logical channels, this bit will be
zero.

Signed-off-by: Narayanan G <narayanan.gopalakrishnan@stericsson.com>
Reviewed-by: Rabin Vincent <rabin.vincent@stericsson.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
2013-01-14 10:50:09 +01:00
Alessandro Rubini 3a95b9fbba pl080.h: moved from arm/include/asm/hardware to include/linux/amba/
The header is used by drivers/dma/amba-pl08x.c, which can be compiled
under x86, where PL080 exists behind a PCI-to-AMBA bridge. This patch
moves it to where it can be accessed by other architectures, and fixes
all users.

Signed-off-by: Alessandro Rubini <rubini@gnudd.com>
Acked-by: Giancarlo Asnaghi <giancarlo.asnaghi@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-01-13 05:19:45 -08:00
Heikki Krogerus a5dbff111c dma: dw_dmac: clear suspend bit during termination
A DMA transfer could not be established if it had previously been paused and
terminated. In that case the channel's suspend bit remains set, which prevents
anything from being transferred until the channel is resumed.

The patch adds a dwc_chan_resume() call instead of a plain flag assignment,
which also clears the DWC_CFGL_CH_SUSP bit during termination.

Signed-off-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-01-12 05:07:23 -08:00
Andy Shevchenko 23d5f4ec9d dw_dmac: backlink to dw_dma in dw_dma_chan is superfluous
The same information can be extracted from struct dma_chan.
The patch also introduces the helper function dwc_get_data_width().

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-01-12 05:07:23 -08:00
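
A small sketch of why the backlink is unnecessary (struct foo_dma is illustrative): the controller structure embeds its dma_device, and every dma_chan already points at that dma_device, so container_of() recovers the controller:

    #include <linux/dmaengine.h>
    #include <linux/kernel.h>

    struct foo_dma {                                /* illustrative controller struct */
            struct dma_device dma;
            /* ... registers, channels ... */
    };

    static inline struct foo_dma *to_foo_dma(struct dma_chan *chan)
    {
            return container_of(chan->device, struct foo_dma, dma);
    }
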
Andy Shevchenko 495aea4b57 dw_dmac: make usage of dw_dma_slave optional
The driver requires a custom slave configuration to be present to be able to
perform slave transfers. Nevertheless, in some cases we need only the request
line as additional information on top of the generic slave configuration.  The
request line is provided by the slave_id parameter of the dma_slave_config
structure. That's why the custom slave configuration can be optional in such
cases.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-01-12 05:07:23 -08:00
Andy Shevchenko 0fdb567fc7 dw_dmac: store direction in the custom channel structure
Currently the direction value comes both from the generic slave configuration
structure and explicitly as a preparation function parameter. The first one is
somewhat obsolete. Thus, we have to store the value passed to the preparation
function somewhere in our structures to be able to use it later. The best
candidate to provide that storage is the custom channel structure. For now we
still keep and check the direction field of the slave config structure as well.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-01-12 05:07:23 -08:00