Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx

* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx: (22 commits)
  ioat: fix self test for multi-channel case
  dmaengine: bump initcall level to arch_initcall
  dmaengine: advertise all channels on a device to dma_filter_fn
  dmaengine: use idr for registering dma device numbers
  dmaengine: add a release for dma class devices and dependent infrastructure
  ioat: do not perform removal actions at shutdown
  iop-adma: enable module removal
  iop-adma: kill debug BUG_ON
  iop-adma: let devm do its job, don't duplicate free
  dmaengine: kill enum dma_state_client
  dmaengine: remove 'bigref' infrastructure
  dmaengine: kill struct dma_client and supporting infrastructure
  dmaengine: replace dma_async_client_register with dmaengine_get
  atmel-mci: convert to dma_request_channel and down-level dma_slave
  dmatest: convert to dma_request_channel
  dmaengine: introduce dma_request_channel and private channels
  net_dma: convert to dma_find_channel
  dmaengine: provide a common 'issue_pending_all' implementation
  dmaengine: centralize channel allocation, introduce dma_find_channel
  dmaengine: up-level reference counting to the module level
  ...
Linus Torvalds  2009-01-09 11:52:14 -08:00
Parents: 2150edc6c5, b9bdcbba01
Commit: d9e8a3a5b8
26 changed files with 918 additions and 1278 deletions
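Taken together, the series replaces the dma_client/event-callback registration model with two consumer-facing interfaces: dmaengine_get()/dma_find_channel() for opportunistic, shared offload (the net_dma style), and dma_request_channel()/dma_release_channel() for exclusive channels. A minimal sketch of the opportunistic pattern against the new API; the my_* names are hypothetical and completion handling is elided:

#include <linux/init.h>
#include <linux/module.h>
#include <linux/dmaengine.h>
#include <linux/string.h>

static int __init my_offload_init(void)
{
	/* Take a module-level reference so dmaengine keeps channels live. */
	dmaengine_get();
	return 0;
}

static void my_offload_copy(void *dst, void *src, size_t len)
{
	/* Per-cpu lookup; may return NULL, so always keep a CPU fallback. */
	struct dma_chan *chan = dma_find_channel(DMA_MEMCPY);

	if (chan) {
		dma_cookie_t cookie;

		cookie = dma_async_memcpy_buf_to_buf(chan, dst, src, len);
		if (cookie >= 0) {
			dma_async_issue_pending(chan);
			return;	/* poll dma_async_is_tx_complete() later */
		}
	}
	memcpy(dst, src, len);	/* fall back to the CPU */
}

static void __exit my_offload_exit(void)
{
	dmaengine_put();	/* drop the module-level reference */
}

module_init(my_offload_init);
module_exit(my_offload_exit);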


@@ -13,9 +13,9 @@
 3.6 Constraints
 3.7 Example
-4 DRIVER DEVELOPER NOTES
+4 DMAENGINE DRIVER DEVELOPER NOTES
 4.1 Conformance points
-4.2 "My application needs finer control of hardware channels"
+4.2 "My application needs exclusive control of hardware channels"
 5 SOURCE
@@ -150,6 +150,7 @@ ops_run_* and ops_complete_* routines in drivers/md/raid5.c for more
 implementation examples.
 
 4 DRIVER DEVELOPMENT NOTES
 4.1 Conformance points:
 There are a few conformance points required in dmaengine drivers to
 accommodate assumptions made by applications using the async_tx API:
@@ -158,58 +159,49 @@ accommodate assumptions made by applications using the async_tx API:
 3/ Use async_tx_run_dependencies() in the descriptor clean up path to
    handle submission of dependent operations
 
-4.2 "My application needs finer control of hardware channels"
+4.2 "My application needs exclusive control of hardware channels"
 
-This requirement seems to arise from cases where a DMA engine driver is
-trying to support device-to-memory DMA.  The dmaengine and async_tx
-implementations were designed for offloading memory-to-memory
-operations; however, there are some capabilities of the dmaengine layer
-that can be used for platform-specific channel management.
-Platform-specific constraints can be handled by registering the
-application as a 'dma_client' and implementing a 'dma_event_callback' to
-apply a filter to the available channels in the system.  Before showing
-how to implement a custom dma_event callback some background of
-dmaengine's client support is required.
-
-The following routines in dmaengine support multiple clients requesting
-use of a channel:
-- dma_async_client_register(struct dma_client *client)
-- dma_async_client_chan_request(struct dma_client *client)
-
-dma_async_client_register takes a pointer to an initialized dma_client
-structure.  It expects that the 'event_callback' and 'cap_mask' fields
-are already initialized.
-
-dma_async_client_chan_request triggers dmaengine to notify the client of
-all channels that satisfy the capability mask.  It is up to the client's
-event_callback routine to track how many channels the client needs and
-how many it is currently using.  The dma_event_callback routine returns a
-dma_state_client code to let dmaengine know the status of the
-allocation.
-
-Below is the example of how to extend this functionality for
-platform-specific filtering of the available channels beyond the
-standard capability mask:
-
-static enum dma_state_client
-my_dma_client_callback(struct dma_client *client,
-		struct dma_chan *chan, enum dma_state state)
-{
-	struct dma_device *dma_dev;
-	struct my_platform_specific_dma *plat_dma_dev;
-
-	dma_dev = chan->device;
-	plat_dma_dev = container_of(dma_dev,
-				    struct my_platform_specific_dma,
-				    dma_dev);
-
-	if (!plat_dma_dev->platform_specific_capability)
-		return DMA_DUP;
-
-	. . .
-}
+Primarily this requirement arises from cases where a DMA engine driver
+is being used to support device-to-memory operations.  A channel that is
+performing these operations cannot, for many platform specific reasons,
+be shared.  For these cases the dma_request_channel() interface is
+provided.
+
+The interface is:
+struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
+                                     dma_filter_fn filter_fn,
+                                     void *filter_param);
+
+Where dma_filter_fn is defined as:
+typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
+
+When the optional 'filter_fn' parameter is set to NULL
+dma_request_channel simply returns the first channel that satisfies the
+capability mask.  Otherwise, when the mask parameter is insufficient for
+specifying the necessary channel, the filter_fn routine can be used to
+disposition the available channels in the system.  The filter_fn routine
+is called once for each free channel in the system.  Upon seeing a
+suitable channel filter_fn returns DMA_ACK which flags that channel to
+be the return value from dma_request_channel.  A channel allocated via
+this interface is exclusive to the caller, until dma_release_channel()
+is called.
+
+The DMA_PRIVATE capability flag is used to tag dma devices that should
+not be used by the general-purpose allocator.  It can be set at
+initialization time if it is known that a channel will always be
+private.  Alternatively, it is set when dma_request_channel() finds an
+unused "public" channel.
+
+A couple caveats to note when implementing a driver and consumer:
+1/ Once a channel has been privately allocated it will no longer be
+   considered by the general-purpose allocator even after a call to
+   dma_release_channel().
+2/ Since capabilities are specified at the device level a dma_device
+   with multiple channels will either have all channels public, or all
+   channels private.
 
 5 SOURCE
-include/linux/dmaengine.h: core header file for DMA drivers and clients
+include/linux/dmaengine.h: core header file for DMA drivers and api users
 drivers/dma/dmaengine.c: offload engine channel management routines
 drivers/dma/: location for offload engine drivers
 include/linux/async_tx.h: core header file for the async_tx api
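As a companion to the interface documented above, here is a minimal consumer sketch of dma_request_channel() with a filter function, assuming the caller wants a slave channel from one particular DMA controller (the same criterion the atmel-mci conversion later in this series uses). The my_* names are illustrative; returning true from the filter corresponds to the DMA_ACK behaviour described in the text.

#include <linux/device.h>
#include <linux/dmaengine.h>

/* Accept only channels backed by the DMA device we were handed. */
static bool my_filter(struct dma_chan *chan, void *filter_param)
{
	struct device *wanted_dma_dev = filter_param;

	return chan->device->dev == wanted_dma_dev;
}

static struct dma_chan *my_grab_channel(struct device *wanted_dma_dev)
{
	dma_cap_mask_t mask;
	struct dma_chan *chan;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);

	/* Exclusive to this caller until dma_release_channel(chan). */
	chan = dma_request_channel(mask, my_filter, wanted_dma_dev);

	return chan;	/* NULL if no free channel matched */
}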


@@ -0,0 +1 @@
+See Documentation/crypto/async-tx-api.txt


@@ -1305,7 +1305,7 @@ struct platform_device *__init
 at32_add_device_mci(unsigned int id, struct mci_platform_data *data)
 {
 	struct platform_device *pdev;
-	struct dw_dma_slave *dws;
+	struct dw_dma_slave *dws = &data->dma_slave;
 	u32 pioa_mask;
 	u32 piob_mask;
@@ -1324,22 +1324,13 @@ at32_add_device_mci(unsigned int id, struct mci_platform_data *data)
 			  ARRAY_SIZE(atmel_mci0_resource)))
 		goto fail;
 
-	if (data->dma_slave)
-		dws = kmemdup(to_dw_dma_slave(data->dma_slave),
-			      sizeof(struct dw_dma_slave), GFP_KERNEL);
-	else
-		dws = kzalloc(sizeof(struct dw_dma_slave), GFP_KERNEL);
-
-	dws->slave.dev = &pdev->dev;
-	dws->slave.dma_dev = &dw_dmac0_device.dev;
-	dws->slave.reg_width = DMA_SLAVE_WIDTH_32BIT;
+	dws->dma_dev = &dw_dmac0_device.dev;
+	dws->reg_width = DW_DMA_SLAVE_WIDTH_32BIT;
 	dws->cfg_hi = (DWC_CFGH_SRC_PER(0)
 			| DWC_CFGH_DST_PER(1));
 	dws->cfg_lo &= ~(DWC_CFGL_HS_DST_POL
 			| DWC_CFGL_HS_SRC_POL);
 
-	data->dma_slave = &dws->slave;
-
 	if (platform_device_add_data(pdev, data,
 				sizeof(struct mci_platform_data)))
 		goto fail;


@ -28,351 +28,18 @@
#include <linux/async_tx.h> #include <linux/async_tx.h>
#ifdef CONFIG_DMA_ENGINE #ifdef CONFIG_DMA_ENGINE
static enum dma_state_client static int __init async_tx_init(void)
dma_channel_add_remove(struct dma_client *client,
struct dma_chan *chan, enum dma_state state);
static struct dma_client async_tx_dma = {
.event_callback = dma_channel_add_remove,
/* .cap_mask == 0 defaults to all channels */
};
/**
* dma_cap_mask_all - enable iteration over all operation types
*/
static dma_cap_mask_t dma_cap_mask_all;
/**
* chan_ref_percpu - tracks channel allocations per core/opertion
*/
struct chan_ref_percpu {
struct dma_chan_ref *ref;
};
static int channel_table_initialized;
static struct chan_ref_percpu *channel_table[DMA_TX_TYPE_END];
/**
* async_tx_lock - protect modification of async_tx_master_list and serialize
* rebalance operations
*/
static spinlock_t async_tx_lock;
static LIST_HEAD(async_tx_master_list);
/* async_tx_issue_pending_all - start all transactions on all channels */
void async_tx_issue_pending_all(void)
{ {
struct dma_chan_ref *ref; dmaengine_get();
rcu_read_lock();
list_for_each_entry_rcu(ref, &async_tx_master_list, node)
ref->chan->device->device_issue_pending(ref->chan);
rcu_read_unlock();
}
EXPORT_SYMBOL_GPL(async_tx_issue_pending_all);
/* dma_wait_for_async_tx - spin wait for a transcation to complete
* @tx: transaction to wait on
*/
enum dma_status
dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx)
{
enum dma_status status;
struct dma_async_tx_descriptor *iter;
struct dma_async_tx_descriptor *parent;
if (!tx)
return DMA_SUCCESS;
/* poll through the dependency chain, return when tx is complete */
do {
iter = tx;
/* find the root of the unsubmitted dependency chain */
do {
parent = iter->parent;
if (!parent)
break;
else
iter = parent;
} while (parent);
/* there is a small window for ->parent == NULL and
* ->cookie == -EBUSY
*/
while (iter->cookie == -EBUSY)
cpu_relax();
status = dma_sync_wait(iter->chan, iter->cookie);
} while (status == DMA_IN_PROGRESS || (iter != tx));
return status;
}
EXPORT_SYMBOL_GPL(dma_wait_for_async_tx);
/* async_tx_run_dependencies - helper routine for dma drivers to process
* (start) dependent operations on their target channel
* @tx: transaction with dependencies
*/
void async_tx_run_dependencies(struct dma_async_tx_descriptor *tx)
{
struct dma_async_tx_descriptor *dep = tx->next;
struct dma_async_tx_descriptor *dep_next;
struct dma_chan *chan;
if (!dep)
return;
chan = dep->chan;
/* keep submitting up until a channel switch is detected
* in that case we will be called again as a result of
* processing the interrupt from async_tx_channel_switch
*/
for (; dep; dep = dep_next) {
spin_lock_bh(&dep->lock);
dep->parent = NULL;
dep_next = dep->next;
if (dep_next && dep_next->chan == chan)
dep->next = NULL; /* ->next will be submitted */
else
dep_next = NULL; /* submit current dep and terminate */
spin_unlock_bh(&dep->lock);
dep->tx_submit(dep);
}
chan->device->device_issue_pending(chan);
}
EXPORT_SYMBOL_GPL(async_tx_run_dependencies);
static void
free_dma_chan_ref(struct rcu_head *rcu)
{
struct dma_chan_ref *ref;
ref = container_of(rcu, struct dma_chan_ref, rcu);
kfree(ref);
}
static void
init_dma_chan_ref(struct dma_chan_ref *ref, struct dma_chan *chan)
{
INIT_LIST_HEAD(&ref->node);
INIT_RCU_HEAD(&ref->rcu);
ref->chan = chan;
atomic_set(&ref->count, 0);
}
/**
* get_chan_ref_by_cap - returns the nth channel of the given capability
* defaults to returning the channel with the desired capability and the
* lowest reference count if the index can not be satisfied
* @cap: capability to match
* @index: nth channel desired, passing -1 has the effect of forcing the
* default return value
*/
static struct dma_chan_ref *
get_chan_ref_by_cap(enum dma_transaction_type cap, int index)
{
struct dma_chan_ref *ret_ref = NULL, *min_ref = NULL, *ref;
rcu_read_lock();
list_for_each_entry_rcu(ref, &async_tx_master_list, node)
if (dma_has_cap(cap, ref->chan->device->cap_mask)) {
if (!min_ref)
min_ref = ref;
else if (atomic_read(&ref->count) <
atomic_read(&min_ref->count))
min_ref = ref;
if (index-- == 0) {
ret_ref = ref;
break;
}
}
rcu_read_unlock();
if (!ret_ref)
ret_ref = min_ref;
if (ret_ref)
atomic_inc(&ret_ref->count);
return ret_ref;
}
/**
* async_tx_rebalance - redistribute the available channels, optimize
* for cpu isolation in the SMP case, and opertaion isolation in the
* uniprocessor case
*/
static void async_tx_rebalance(void)
{
int cpu, cap, cpu_idx = 0;
unsigned long flags;
if (!channel_table_initialized)
return;
spin_lock_irqsave(&async_tx_lock, flags);
/* undo the last distribution */
for_each_dma_cap_mask(cap, dma_cap_mask_all)
for_each_possible_cpu(cpu) {
struct dma_chan_ref *ref =
per_cpu_ptr(channel_table[cap], cpu)->ref;
if (ref) {
atomic_set(&ref->count, 0);
per_cpu_ptr(channel_table[cap], cpu)->ref =
NULL;
}
}
for_each_dma_cap_mask(cap, dma_cap_mask_all)
for_each_online_cpu(cpu) {
struct dma_chan_ref *new;
if (NR_CPUS > 1)
new = get_chan_ref_by_cap(cap, cpu_idx++);
else
new = get_chan_ref_by_cap(cap, -1);
per_cpu_ptr(channel_table[cap], cpu)->ref = new;
}
spin_unlock_irqrestore(&async_tx_lock, flags);
}
static enum dma_state_client
dma_channel_add_remove(struct dma_client *client,
struct dma_chan *chan, enum dma_state state)
{
unsigned long found, flags;
struct dma_chan_ref *master_ref, *ref;
enum dma_state_client ack = DMA_DUP; /* default: take no action */
switch (state) {
case DMA_RESOURCE_AVAILABLE:
found = 0;
rcu_read_lock();
list_for_each_entry_rcu(ref, &async_tx_master_list, node)
if (ref->chan == chan) {
found = 1;
break;
}
rcu_read_unlock();
pr_debug("async_tx: dma resource available [%s]\n",
found ? "old" : "new");
if (!found)
ack = DMA_ACK;
else
break;
/* add the channel to the generic management list */
master_ref = kmalloc(sizeof(*master_ref), GFP_KERNEL);
if (master_ref) {
/* keep a reference until async_tx is unloaded */
dma_chan_get(chan);
init_dma_chan_ref(master_ref, chan);
spin_lock_irqsave(&async_tx_lock, flags);
list_add_tail_rcu(&master_ref->node,
&async_tx_master_list);
spin_unlock_irqrestore(&async_tx_lock,
flags);
} else {
printk(KERN_WARNING "async_tx: unable to create"
" new master entry in response to"
" a DMA_RESOURCE_ADDED event"
" (-ENOMEM)\n");
return 0;
}
async_tx_rebalance();
break;
case DMA_RESOURCE_REMOVED:
found = 0;
spin_lock_irqsave(&async_tx_lock, flags);
list_for_each_entry(ref, &async_tx_master_list, node)
if (ref->chan == chan) {
/* permit backing devices to go away */
dma_chan_put(ref->chan);
list_del_rcu(&ref->node);
call_rcu(&ref->rcu, free_dma_chan_ref);
found = 1;
break;
}
spin_unlock_irqrestore(&async_tx_lock, flags);
pr_debug("async_tx: dma resource removed [%s]\n",
found ? "ours" : "not ours");
if (found)
ack = DMA_ACK;
else
break;
async_tx_rebalance();
break;
case DMA_RESOURCE_SUSPEND:
case DMA_RESOURCE_RESUME:
printk(KERN_WARNING "async_tx: does not support dma channel"
" suspend/resume\n");
break;
default:
BUG();
}
return ack;
}
static int __init
async_tx_init(void)
{
enum dma_transaction_type cap;
spin_lock_init(&async_tx_lock);
bitmap_fill(dma_cap_mask_all.bits, DMA_TX_TYPE_END);
/* an interrupt will never be an explicit operation type.
* clearing this bit prevents allocation to a slot in 'channel_table'
*/
clear_bit(DMA_INTERRUPT, dma_cap_mask_all.bits);
for_each_dma_cap_mask(cap, dma_cap_mask_all) {
channel_table[cap] = alloc_percpu(struct chan_ref_percpu);
if (!channel_table[cap])
goto err;
}
channel_table_initialized = 1;
dma_async_client_register(&async_tx_dma);
dma_async_client_chan_request(&async_tx_dma);
printk(KERN_INFO "async_tx: api initialized (async)\n"); printk(KERN_INFO "async_tx: api initialized (async)\n");
return 0; return 0;
err:
printk(KERN_ERR "async_tx: initialization failure\n");
while (--cap >= 0)
free_percpu(channel_table[cap]);
return 1;
} }
static void __exit async_tx_exit(void) static void __exit async_tx_exit(void)
{ {
enum dma_transaction_type cap; dmaengine_put();
channel_table_initialized = 0;
for_each_dma_cap_mask(cap, dma_cap_mask_all)
if (channel_table[cap])
free_percpu(channel_table[cap]);
dma_async_client_unregister(&async_tx_dma);
} }
/** /**
@ -387,16 +54,9 @@ __async_tx_find_channel(struct dma_async_tx_descriptor *depend_tx,
{ {
/* see if we can keep the chain on one channel */ /* see if we can keep the chain on one channel */
if (depend_tx && if (depend_tx &&
dma_has_cap(tx_type, depend_tx->chan->device->cap_mask)) dma_has_cap(tx_type, depend_tx->chan->device->cap_mask))
return depend_tx->chan; return depend_tx->chan;
else if (likely(channel_table_initialized)) { return dma_find_channel(tx_type);
struct dma_chan_ref *ref;
int cpu = get_cpu();
ref = per_cpu_ptr(channel_table[tx_type], cpu)->ref;
put_cpu();
return ref ? ref->chan : NULL;
} else
return NULL;
} }
EXPORT_SYMBOL_GPL(__async_tx_find_channel); EXPORT_SYMBOL_GPL(__async_tx_find_channel);
#else #else


@@ -270,6 +270,6 @@ static void __exit dca_exit(void)
 	dca_sysfs_exit();
 }
 
-subsys_initcall(dca_init);
+arch_initcall(dca_init);
 module_exit(dca_exit);


@@ -33,7 +33,6 @@ config INTEL_IOATDMA
 config INTEL_IOP_ADMA
 	tristate "Intel IOP ADMA support"
 	depends on ARCH_IOP32X || ARCH_IOP33X || ARCH_IOP13XX
-	select ASYNC_CORE
 	select DMA_ENGINE
 	help
 	  Enable support for the Intel(R) IOP Series RAID engines.
@@ -59,7 +58,6 @@ config FSL_DMA
 config MV_XOR
 	bool "Marvell XOR engine support"
 	depends on PLAT_ORION
-	select ASYNC_CORE
 	select DMA_ENGINE
 	---help---
 	  Enable support for the Marvell XOR engine.

The diff for this file is not shown because of its large size.


@ -35,7 +35,7 @@ MODULE_PARM_DESC(threads_per_chan,
static unsigned int max_channels; static unsigned int max_channels;
module_param(max_channels, uint, S_IRUGO); module_param(max_channels, uint, S_IRUGO);
MODULE_PARM_DESC(nr_channels, MODULE_PARM_DESC(max_channels,
"Maximum number of channels to use (default: all)"); "Maximum number of channels to use (default: all)");
/* /*
@ -71,7 +71,7 @@ struct dmatest_chan {
/* /*
* These are protected by dma_list_mutex since they're only used by * These are protected by dma_list_mutex since they're only used by
* the DMA client event callback * the DMA filter function callback
*/ */
static LIST_HEAD(dmatest_channels); static LIST_HEAD(dmatest_channels);
static unsigned int nr_channels; static unsigned int nr_channels;
@ -80,7 +80,7 @@ static bool dmatest_match_channel(struct dma_chan *chan)
{ {
if (test_channel[0] == '\0') if (test_channel[0] == '\0')
return true; return true;
return strcmp(dev_name(&chan->dev), test_channel) == 0; return strcmp(dma_chan_name(chan), test_channel) == 0;
} }
static bool dmatest_match_device(struct dma_device *device) static bool dmatest_match_device(struct dma_device *device)
@ -215,7 +215,6 @@ static int dmatest_func(void *data)
smp_rmb(); smp_rmb();
chan = thread->chan; chan = thread->chan;
dma_chan_get(chan);
while (!kthread_should_stop()) { while (!kthread_should_stop()) {
total_tests++; total_tests++;
@ -293,7 +292,6 @@ static int dmatest_func(void *data)
} }
ret = 0; ret = 0;
dma_chan_put(chan);
kfree(thread->dstbuf); kfree(thread->dstbuf);
err_dstbuf: err_dstbuf:
kfree(thread->srcbuf); kfree(thread->srcbuf);
@ -319,21 +317,16 @@ static void dmatest_cleanup_channel(struct dmatest_chan *dtc)
kfree(dtc); kfree(dtc);
} }
static enum dma_state_client dmatest_add_channel(struct dma_chan *chan) static int dmatest_add_channel(struct dma_chan *chan)
{ {
struct dmatest_chan *dtc; struct dmatest_chan *dtc;
struct dmatest_thread *thread; struct dmatest_thread *thread;
unsigned int i; unsigned int i;
/* Have we already been told about this channel? */
list_for_each_entry(dtc, &dmatest_channels, node)
if (dtc->chan == chan)
return DMA_DUP;
dtc = kmalloc(sizeof(struct dmatest_chan), GFP_KERNEL); dtc = kmalloc(sizeof(struct dmatest_chan), GFP_KERNEL);
if (!dtc) { if (!dtc) {
pr_warning("dmatest: No memory for %s\n", dev_name(&chan->dev)); pr_warning("dmatest: No memory for %s\n", dma_chan_name(chan));
return DMA_NAK; return -ENOMEM;
} }
dtc->chan = chan; dtc->chan = chan;
@ -343,16 +336,16 @@ static enum dma_state_client dmatest_add_channel(struct dma_chan *chan)
thread = kzalloc(sizeof(struct dmatest_thread), GFP_KERNEL); thread = kzalloc(sizeof(struct dmatest_thread), GFP_KERNEL);
if (!thread) { if (!thread) {
pr_warning("dmatest: No memory for %s-test%u\n", pr_warning("dmatest: No memory for %s-test%u\n",
dev_name(&chan->dev), i); dma_chan_name(chan), i);
break; break;
} }
thread->chan = dtc->chan; thread->chan = dtc->chan;
smp_wmb(); smp_wmb();
thread->task = kthread_run(dmatest_func, thread, "%s-test%u", thread->task = kthread_run(dmatest_func, thread, "%s-test%u",
dev_name(&chan->dev), i); dma_chan_name(chan), i);
if (IS_ERR(thread->task)) { if (IS_ERR(thread->task)) {
pr_warning("dmatest: Failed to run thread %s-test%u\n", pr_warning("dmatest: Failed to run thread %s-test%u\n",
dev_name(&chan->dev), i); dma_chan_name(chan), i);
kfree(thread); kfree(thread);
break; break;
} }
@ -362,86 +355,62 @@ static enum dma_state_client dmatest_add_channel(struct dma_chan *chan)
list_add_tail(&thread->node, &dtc->threads); list_add_tail(&thread->node, &dtc->threads);
} }
pr_info("dmatest: Started %u threads using %s\n", i, dev_name(&chan->dev)); pr_info("dmatest: Started %u threads using %s\n", i, dma_chan_name(chan));
list_add_tail(&dtc->node, &dmatest_channels); list_add_tail(&dtc->node, &dmatest_channels);
nr_channels++; nr_channels++;
return DMA_ACK; return 0;
} }
static enum dma_state_client dmatest_remove_channel(struct dma_chan *chan) static bool filter(struct dma_chan *chan, void *param)
{ {
struct dmatest_chan *dtc, *_dtc; if (!dmatest_match_channel(chan) || !dmatest_match_device(chan->device))
return false;
list_for_each_entry_safe(dtc, _dtc, &dmatest_channels, node) { else
if (dtc->chan == chan) { return true;
list_del(&dtc->node);
dmatest_cleanup_channel(dtc);
pr_debug("dmatest: lost channel %s\n",
dev_name(&chan->dev));
return DMA_ACK;
}
}
return DMA_DUP;
} }
/*
* Start testing threads as new channels are assigned to us, and kill
* them when the channels go away.
*
* When we unregister the client, all channels are removed so this
* will also take care of cleaning things up when the module is
* unloaded.
*/
static enum dma_state_client
dmatest_event(struct dma_client *client, struct dma_chan *chan,
enum dma_state state)
{
enum dma_state_client ack = DMA_NAK;
switch (state) {
case DMA_RESOURCE_AVAILABLE:
if (!dmatest_match_channel(chan)
|| !dmatest_match_device(chan->device))
ack = DMA_DUP;
else if (max_channels && nr_channels >= max_channels)
ack = DMA_NAK;
else
ack = dmatest_add_channel(chan);
break;
case DMA_RESOURCE_REMOVED:
ack = dmatest_remove_channel(chan);
break;
default:
pr_info("dmatest: Unhandled event %u (%s)\n",
state, dev_name(&chan->dev));
break;
}
return ack;
}
static struct dma_client dmatest_client = {
.event_callback = dmatest_event,
};
static int __init dmatest_init(void) static int __init dmatest_init(void)
{ {
dma_cap_set(DMA_MEMCPY, dmatest_client.cap_mask); dma_cap_mask_t mask;
dma_async_client_register(&dmatest_client); struct dma_chan *chan;
dma_async_client_chan_request(&dmatest_client); int err = 0;
return 0; dma_cap_zero(mask);
dma_cap_set(DMA_MEMCPY, mask);
for (;;) {
chan = dma_request_channel(mask, filter, NULL);
if (chan) {
err = dmatest_add_channel(chan);
if (err == 0)
continue;
else {
dma_release_channel(chan);
break; /* add_channel failed, punt */
}
} else
break; /* no more channels available */
if (max_channels && nr_channels >= max_channels)
break; /* we have all we need */
}
return err;
} }
module_init(dmatest_init); /* when compiled-in wait for drivers to load first */
late_initcall(dmatest_init);
static void __exit dmatest_exit(void) static void __exit dmatest_exit(void)
{ {
dma_async_client_unregister(&dmatest_client); struct dmatest_chan *dtc, *_dtc;
list_for_each_entry_safe(dtc, _dtc, &dmatest_channels, node) {
list_del(&dtc->node);
dmatest_cleanup_channel(dtc);
pr_debug("dmatest: dropped channel %s\n",
dma_chan_name(dtc->chan));
dma_release_channel(dtc->chan);
}
} }
module_exit(dmatest_exit); module_exit(dmatest_exit);


@ -70,6 +70,15 @@
* the controller, though. * the controller, though.
*/ */
static struct device *chan2dev(struct dma_chan *chan)
{
return &chan->dev->device;
}
static struct device *chan2parent(struct dma_chan *chan)
{
return chan->dev->device.parent;
}
static struct dw_desc *dwc_first_active(struct dw_dma_chan *dwc) static struct dw_desc *dwc_first_active(struct dw_dma_chan *dwc)
{ {
return list_entry(dwc->active_list.next, struct dw_desc, desc_node); return list_entry(dwc->active_list.next, struct dw_desc, desc_node);
@ -93,12 +102,12 @@ static struct dw_desc *dwc_desc_get(struct dw_dma_chan *dwc)
ret = desc; ret = desc;
break; break;
} }
dev_dbg(&dwc->chan.dev, "desc %p not ACKed\n", desc); dev_dbg(chan2dev(&dwc->chan), "desc %p not ACKed\n", desc);
i++; i++;
} }
spin_unlock_bh(&dwc->lock); spin_unlock_bh(&dwc->lock);
dev_vdbg(&dwc->chan.dev, "scanned %u descriptors on freelist\n", i); dev_vdbg(chan2dev(&dwc->chan), "scanned %u descriptors on freelist\n", i);
return ret; return ret;
} }
@ -108,10 +117,10 @@ static void dwc_sync_desc_for_cpu(struct dw_dma_chan *dwc, struct dw_desc *desc)
struct dw_desc *child; struct dw_desc *child;
list_for_each_entry(child, &desc->txd.tx_list, desc_node) list_for_each_entry(child, &desc->txd.tx_list, desc_node)
dma_sync_single_for_cpu(dwc->chan.dev.parent, dma_sync_single_for_cpu(chan2parent(&dwc->chan),
child->txd.phys, sizeof(child->lli), child->txd.phys, sizeof(child->lli),
DMA_TO_DEVICE); DMA_TO_DEVICE);
dma_sync_single_for_cpu(dwc->chan.dev.parent, dma_sync_single_for_cpu(chan2parent(&dwc->chan),
desc->txd.phys, sizeof(desc->lli), desc->txd.phys, sizeof(desc->lli),
DMA_TO_DEVICE); DMA_TO_DEVICE);
} }
@ -129,11 +138,11 @@ static void dwc_desc_put(struct dw_dma_chan *dwc, struct dw_desc *desc)
spin_lock_bh(&dwc->lock); spin_lock_bh(&dwc->lock);
list_for_each_entry(child, &desc->txd.tx_list, desc_node) list_for_each_entry(child, &desc->txd.tx_list, desc_node)
dev_vdbg(&dwc->chan.dev, dev_vdbg(chan2dev(&dwc->chan),
"moving child desc %p to freelist\n", "moving child desc %p to freelist\n",
child); child);
list_splice_init(&desc->txd.tx_list, &dwc->free_list); list_splice_init(&desc->txd.tx_list, &dwc->free_list);
dev_vdbg(&dwc->chan.dev, "moving desc %p to freelist\n", desc); dev_vdbg(chan2dev(&dwc->chan), "moving desc %p to freelist\n", desc);
list_add(&desc->desc_node, &dwc->free_list); list_add(&desc->desc_node, &dwc->free_list);
spin_unlock_bh(&dwc->lock); spin_unlock_bh(&dwc->lock);
} }
@ -163,9 +172,9 @@ static void dwc_dostart(struct dw_dma_chan *dwc, struct dw_desc *first)
/* ASSERT: channel is idle */ /* ASSERT: channel is idle */
if (dma_readl(dw, CH_EN) & dwc->mask) { if (dma_readl(dw, CH_EN) & dwc->mask) {
dev_err(&dwc->chan.dev, dev_err(chan2dev(&dwc->chan),
"BUG: Attempted to start non-idle channel\n"); "BUG: Attempted to start non-idle channel\n");
dev_err(&dwc->chan.dev, dev_err(chan2dev(&dwc->chan),
" SAR: 0x%x DAR: 0x%x LLP: 0x%x CTL: 0x%x:%08x\n", " SAR: 0x%x DAR: 0x%x LLP: 0x%x CTL: 0x%x:%08x\n",
channel_readl(dwc, SAR), channel_readl(dwc, SAR),
channel_readl(dwc, DAR), channel_readl(dwc, DAR),
@ -193,7 +202,7 @@ dwc_descriptor_complete(struct dw_dma_chan *dwc, struct dw_desc *desc)
void *param; void *param;
struct dma_async_tx_descriptor *txd = &desc->txd; struct dma_async_tx_descriptor *txd = &desc->txd;
dev_vdbg(&dwc->chan.dev, "descriptor %u complete\n", txd->cookie); dev_vdbg(chan2dev(&dwc->chan), "descriptor %u complete\n", txd->cookie);
dwc->completed = txd->cookie; dwc->completed = txd->cookie;
callback = txd->callback; callback = txd->callback;
@ -208,11 +217,11 @@ dwc_descriptor_complete(struct dw_dma_chan *dwc, struct dw_desc *desc)
* mapped before they were submitted... * mapped before they were submitted...
*/ */
if (!(txd->flags & DMA_COMPL_SKIP_DEST_UNMAP)) if (!(txd->flags & DMA_COMPL_SKIP_DEST_UNMAP))
dma_unmap_page(dwc->chan.dev.parent, desc->lli.dar, desc->len, dma_unmap_page(chan2parent(&dwc->chan), desc->lli.dar,
DMA_FROM_DEVICE); desc->len, DMA_FROM_DEVICE);
if (!(txd->flags & DMA_COMPL_SKIP_SRC_UNMAP)) if (!(txd->flags & DMA_COMPL_SKIP_SRC_UNMAP))
dma_unmap_page(dwc->chan.dev.parent, desc->lli.sar, desc->len, dma_unmap_page(chan2parent(&dwc->chan), desc->lli.sar,
DMA_TO_DEVICE); desc->len, DMA_TO_DEVICE);
/* /*
* The API requires that no submissions are done from a * The API requires that no submissions are done from a
@ -228,7 +237,7 @@ static void dwc_complete_all(struct dw_dma *dw, struct dw_dma_chan *dwc)
LIST_HEAD(list); LIST_HEAD(list);
if (dma_readl(dw, CH_EN) & dwc->mask) { if (dma_readl(dw, CH_EN) & dwc->mask) {
dev_err(&dwc->chan.dev, dev_err(chan2dev(&dwc->chan),
"BUG: XFER bit set, but channel not idle!\n"); "BUG: XFER bit set, but channel not idle!\n");
/* Try to continue after resetting the channel... */ /* Try to continue after resetting the channel... */
@ -273,7 +282,7 @@ static void dwc_scan_descriptors(struct dw_dma *dw, struct dw_dma_chan *dwc)
return; return;
} }
dev_vdbg(&dwc->chan.dev, "scan_descriptors: llp=0x%x\n", llp); dev_vdbg(chan2dev(&dwc->chan), "scan_descriptors: llp=0x%x\n", llp);
list_for_each_entry_safe(desc, _desc, &dwc->active_list, desc_node) { list_for_each_entry_safe(desc, _desc, &dwc->active_list, desc_node) {
if (desc->lli.llp == llp) if (desc->lli.llp == llp)
@ -292,7 +301,7 @@ static void dwc_scan_descriptors(struct dw_dma *dw, struct dw_dma_chan *dwc)
dwc_descriptor_complete(dwc, desc); dwc_descriptor_complete(dwc, desc);
} }
dev_err(&dwc->chan.dev, dev_err(chan2dev(&dwc->chan),
"BUG: All descriptors done, but channel not idle!\n"); "BUG: All descriptors done, but channel not idle!\n");
/* Try to continue after resetting the channel... */ /* Try to continue after resetting the channel... */
@ -308,7 +317,7 @@ static void dwc_scan_descriptors(struct dw_dma *dw, struct dw_dma_chan *dwc)
static void dwc_dump_lli(struct dw_dma_chan *dwc, struct dw_lli *lli) static void dwc_dump_lli(struct dw_dma_chan *dwc, struct dw_lli *lli)
{ {
dev_printk(KERN_CRIT, &dwc->chan.dev, dev_printk(KERN_CRIT, chan2dev(&dwc->chan),
" desc: s0x%x d0x%x l0x%x c0x%x:%x\n", " desc: s0x%x d0x%x l0x%x c0x%x:%x\n",
lli->sar, lli->dar, lli->llp, lli->sar, lli->dar, lli->llp,
lli->ctlhi, lli->ctllo); lli->ctlhi, lli->ctllo);
@ -342,9 +351,9 @@ static void dwc_handle_error(struct dw_dma *dw, struct dw_dma_chan *dwc)
* controller flagged an error instead of scribbling over * controller flagged an error instead of scribbling over
* random memory locations. * random memory locations.
*/ */
dev_printk(KERN_CRIT, &dwc->chan.dev, dev_printk(KERN_CRIT, chan2dev(&dwc->chan),
"Bad descriptor submitted for DMA!\n"); "Bad descriptor submitted for DMA!\n");
dev_printk(KERN_CRIT, &dwc->chan.dev, dev_printk(KERN_CRIT, chan2dev(&dwc->chan),
" cookie: %d\n", bad_desc->txd.cookie); " cookie: %d\n", bad_desc->txd.cookie);
dwc_dump_lli(dwc, &bad_desc->lli); dwc_dump_lli(dwc, &bad_desc->lli);
list_for_each_entry(child, &bad_desc->txd.tx_list, desc_node) list_for_each_entry(child, &bad_desc->txd.tx_list, desc_node)
@ -442,12 +451,12 @@ static dma_cookie_t dwc_tx_submit(struct dma_async_tx_descriptor *tx)
* for DMA. But this is hard to do in a race-free manner. * for DMA. But this is hard to do in a race-free manner.
*/ */
if (list_empty(&dwc->active_list)) { if (list_empty(&dwc->active_list)) {
dev_vdbg(&tx->chan->dev, "tx_submit: started %u\n", dev_vdbg(chan2dev(tx->chan), "tx_submit: started %u\n",
desc->txd.cookie); desc->txd.cookie);
dwc_dostart(dwc, desc); dwc_dostart(dwc, desc);
list_add_tail(&desc->desc_node, &dwc->active_list); list_add_tail(&desc->desc_node, &dwc->active_list);
} else { } else {
dev_vdbg(&tx->chan->dev, "tx_submit: queued %u\n", dev_vdbg(chan2dev(tx->chan), "tx_submit: queued %u\n",
desc->txd.cookie); desc->txd.cookie);
list_add_tail(&desc->desc_node, &dwc->queue); list_add_tail(&desc->desc_node, &dwc->queue);
@ -472,11 +481,11 @@ dwc_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
unsigned int dst_width; unsigned int dst_width;
u32 ctllo; u32 ctllo;
dev_vdbg(&chan->dev, "prep_dma_memcpy d0x%x s0x%x l0x%zx f0x%lx\n", dev_vdbg(chan2dev(chan), "prep_dma_memcpy d0x%x s0x%x l0x%zx f0x%lx\n",
dest, src, len, flags); dest, src, len, flags);
if (unlikely(!len)) { if (unlikely(!len)) {
dev_dbg(&chan->dev, "prep_dma_memcpy: length is zero!\n"); dev_dbg(chan2dev(chan), "prep_dma_memcpy: length is zero!\n");
return NULL; return NULL;
} }
@ -516,7 +525,7 @@ dwc_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
first = desc; first = desc;
} else { } else {
prev->lli.llp = desc->txd.phys; prev->lli.llp = desc->txd.phys;
dma_sync_single_for_device(chan->dev.parent, dma_sync_single_for_device(chan2parent(chan),
prev->txd.phys, sizeof(prev->lli), prev->txd.phys, sizeof(prev->lli),
DMA_TO_DEVICE); DMA_TO_DEVICE);
list_add_tail(&desc->desc_node, list_add_tail(&desc->desc_node,
@ -531,7 +540,7 @@ dwc_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
prev->lli.ctllo |= DWC_CTLL_INT_EN; prev->lli.ctllo |= DWC_CTLL_INT_EN;
prev->lli.llp = 0; prev->lli.llp = 0;
dma_sync_single_for_device(chan->dev.parent, dma_sync_single_for_device(chan2parent(chan),
prev->txd.phys, sizeof(prev->lli), prev->txd.phys, sizeof(prev->lli),
DMA_TO_DEVICE); DMA_TO_DEVICE);
@ -562,15 +571,15 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
struct scatterlist *sg; struct scatterlist *sg;
size_t total_len = 0; size_t total_len = 0;
dev_vdbg(&chan->dev, "prep_dma_slave\n"); dev_vdbg(chan2dev(chan), "prep_dma_slave\n");
if (unlikely(!dws || !sg_len)) if (unlikely(!dws || !sg_len))
return NULL; return NULL;
reg_width = dws->slave.reg_width; reg_width = dws->reg_width;
prev = first = NULL; prev = first = NULL;
sg_len = dma_map_sg(chan->dev.parent, sgl, sg_len, direction); sg_len = dma_map_sg(chan2parent(chan), sgl, sg_len, direction);
switch (direction) { switch (direction) {
case DMA_TO_DEVICE: case DMA_TO_DEVICE:
@ -579,7 +588,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
| DWC_CTLL_DST_FIX | DWC_CTLL_DST_FIX
| DWC_CTLL_SRC_INC | DWC_CTLL_SRC_INC
| DWC_CTLL_FC_M2P); | DWC_CTLL_FC_M2P);
reg = dws->slave.tx_reg; reg = dws->tx_reg;
for_each_sg(sgl, sg, sg_len, i) { for_each_sg(sgl, sg, sg_len, i) {
struct dw_desc *desc; struct dw_desc *desc;
u32 len; u32 len;
@ -587,7 +596,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
desc = dwc_desc_get(dwc); desc = dwc_desc_get(dwc);
if (!desc) { if (!desc) {
dev_err(&chan->dev, dev_err(chan2dev(chan),
"not enough descriptors available\n"); "not enough descriptors available\n");
goto err_desc_get; goto err_desc_get;
} }
@ -607,7 +616,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
first = desc; first = desc;
} else { } else {
prev->lli.llp = desc->txd.phys; prev->lli.llp = desc->txd.phys;
dma_sync_single_for_device(chan->dev.parent, dma_sync_single_for_device(chan2parent(chan),
prev->txd.phys, prev->txd.phys,
sizeof(prev->lli), sizeof(prev->lli),
DMA_TO_DEVICE); DMA_TO_DEVICE);
@ -625,7 +634,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
| DWC_CTLL_SRC_FIX | DWC_CTLL_SRC_FIX
| DWC_CTLL_FC_P2M); | DWC_CTLL_FC_P2M);
reg = dws->slave.rx_reg; reg = dws->rx_reg;
for_each_sg(sgl, sg, sg_len, i) { for_each_sg(sgl, sg, sg_len, i) {
struct dw_desc *desc; struct dw_desc *desc;
u32 len; u32 len;
@ -633,7 +642,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
desc = dwc_desc_get(dwc); desc = dwc_desc_get(dwc);
if (!desc) { if (!desc) {
dev_err(&chan->dev, dev_err(chan2dev(chan),
"not enough descriptors available\n"); "not enough descriptors available\n");
goto err_desc_get; goto err_desc_get;
} }
@ -653,7 +662,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
first = desc; first = desc;
} else { } else {
prev->lli.llp = desc->txd.phys; prev->lli.llp = desc->txd.phys;
dma_sync_single_for_device(chan->dev.parent, dma_sync_single_for_device(chan2parent(chan),
prev->txd.phys, prev->txd.phys,
sizeof(prev->lli), sizeof(prev->lli),
DMA_TO_DEVICE); DMA_TO_DEVICE);
@ -673,7 +682,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
prev->lli.ctllo |= DWC_CTLL_INT_EN; prev->lli.ctllo |= DWC_CTLL_INT_EN;
prev->lli.llp = 0; prev->lli.llp = 0;
dma_sync_single_for_device(chan->dev.parent, dma_sync_single_for_device(chan2parent(chan),
prev->txd.phys, sizeof(prev->lli), prev->txd.phys, sizeof(prev->lli),
DMA_TO_DEVICE); DMA_TO_DEVICE);
@ -758,29 +767,21 @@ static void dwc_issue_pending(struct dma_chan *chan)
spin_unlock_bh(&dwc->lock); spin_unlock_bh(&dwc->lock);
} }
static int dwc_alloc_chan_resources(struct dma_chan *chan, static int dwc_alloc_chan_resources(struct dma_chan *chan)
struct dma_client *client)
{ {
struct dw_dma_chan *dwc = to_dw_dma_chan(chan); struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
struct dw_dma *dw = to_dw_dma(chan->device); struct dw_dma *dw = to_dw_dma(chan->device);
struct dw_desc *desc; struct dw_desc *desc;
struct dma_slave *slave;
struct dw_dma_slave *dws; struct dw_dma_slave *dws;
int i; int i;
u32 cfghi; u32 cfghi;
u32 cfglo; u32 cfglo;
dev_vdbg(&chan->dev, "alloc_chan_resources\n"); dev_vdbg(chan2dev(chan), "alloc_chan_resources\n");
/* Channels doing slave DMA can only handle one client. */
if (dwc->dws || client->slave) {
if (chan->client_count)
return -EBUSY;
}
/* ASSERT: channel is idle */ /* ASSERT: channel is idle */
if (dma_readl(dw, CH_EN) & dwc->mask) { if (dma_readl(dw, CH_EN) & dwc->mask) {
dev_dbg(&chan->dev, "DMA channel not idle?\n"); dev_dbg(chan2dev(chan), "DMA channel not idle?\n");
return -EIO; return -EIO;
} }
@ -789,23 +790,17 @@ static int dwc_alloc_chan_resources(struct dma_chan *chan,
cfghi = DWC_CFGH_FIFO_MODE; cfghi = DWC_CFGH_FIFO_MODE;
cfglo = 0; cfglo = 0;
slave = client->slave; dws = dwc->dws;
if (slave) { if (dws) {
/* /*
* We need controller-specific data to set up slave * We need controller-specific data to set up slave
* transfers. * transfers.
*/ */
BUG_ON(!slave->dma_dev || slave->dma_dev != dw->dma.dev); BUG_ON(!dws->dma_dev || dws->dma_dev != dw->dma.dev);
dws = container_of(slave, struct dw_dma_slave, slave);
dwc->dws = dws;
cfghi = dws->cfg_hi; cfghi = dws->cfg_hi;
cfglo = dws->cfg_lo; cfglo = dws->cfg_lo;
} else {
dwc->dws = NULL;
} }
channel_writel(dwc, CFG_LO, cfglo); channel_writel(dwc, CFG_LO, cfglo);
channel_writel(dwc, CFG_HI, cfghi); channel_writel(dwc, CFG_HI, cfghi);
@ -822,7 +817,7 @@ static int dwc_alloc_chan_resources(struct dma_chan *chan,
desc = kzalloc(sizeof(struct dw_desc), GFP_KERNEL); desc = kzalloc(sizeof(struct dw_desc), GFP_KERNEL);
if (!desc) { if (!desc) {
dev_info(&chan->dev, dev_info(chan2dev(chan),
"only allocated %d descriptors\n", i); "only allocated %d descriptors\n", i);
spin_lock_bh(&dwc->lock); spin_lock_bh(&dwc->lock);
break; break;
@ -832,7 +827,7 @@ static int dwc_alloc_chan_resources(struct dma_chan *chan,
desc->txd.tx_submit = dwc_tx_submit; desc->txd.tx_submit = dwc_tx_submit;
desc->txd.flags = DMA_CTRL_ACK; desc->txd.flags = DMA_CTRL_ACK;
INIT_LIST_HEAD(&desc->txd.tx_list); INIT_LIST_HEAD(&desc->txd.tx_list);
desc->txd.phys = dma_map_single(chan->dev.parent, &desc->lli, desc->txd.phys = dma_map_single(chan2parent(chan), &desc->lli,
sizeof(desc->lli), DMA_TO_DEVICE); sizeof(desc->lli), DMA_TO_DEVICE);
dwc_desc_put(dwc, desc); dwc_desc_put(dwc, desc);
@ -847,7 +842,7 @@ static int dwc_alloc_chan_resources(struct dma_chan *chan,
spin_unlock_bh(&dwc->lock); spin_unlock_bh(&dwc->lock);
dev_dbg(&chan->dev, dev_dbg(chan2dev(chan),
"alloc_chan_resources allocated %d descriptors\n", i); "alloc_chan_resources allocated %d descriptors\n", i);
return i; return i;
@ -860,7 +855,7 @@ static void dwc_free_chan_resources(struct dma_chan *chan)
struct dw_desc *desc, *_desc; struct dw_desc *desc, *_desc;
LIST_HEAD(list); LIST_HEAD(list);
dev_dbg(&chan->dev, "free_chan_resources (descs allocated=%u)\n", dev_dbg(chan2dev(chan), "free_chan_resources (descs allocated=%u)\n",
dwc->descs_allocated); dwc->descs_allocated);
/* ASSERT: channel is idle */ /* ASSERT: channel is idle */
@ -881,13 +876,13 @@ static void dwc_free_chan_resources(struct dma_chan *chan)
spin_unlock_bh(&dwc->lock); spin_unlock_bh(&dwc->lock);
list_for_each_entry_safe(desc, _desc, &list, desc_node) { list_for_each_entry_safe(desc, _desc, &list, desc_node) {
dev_vdbg(&chan->dev, " freeing descriptor %p\n", desc); dev_vdbg(chan2dev(chan), " freeing descriptor %p\n", desc);
dma_unmap_single(chan->dev.parent, desc->txd.phys, dma_unmap_single(chan2parent(chan), desc->txd.phys,
sizeof(desc->lli), DMA_TO_DEVICE); sizeof(desc->lli), DMA_TO_DEVICE);
kfree(desc); kfree(desc);
} }
dev_vdbg(&chan->dev, "free_chan_resources done\n"); dev_vdbg(chan2dev(chan), "free_chan_resources done\n");
} }
/*----------------------------------------------------------------------*/ /*----------------------------------------------------------------------*/


@@ -366,8 +366,7 @@ static struct fsl_desc_sw *fsl_dma_alloc_descriptor(
  *
  * Return - The number of descriptors allocated.
  */
-static int fsl_dma_alloc_chan_resources(struct dma_chan *chan,
-					struct dma_client *client)
+static int fsl_dma_alloc_chan_resources(struct dma_chan *chan)
 {
 	struct fsl_dma_chan *fsl_chan = to_fsl_chan(chan);
 
@@ -823,7 +822,7 @@ static int __devinit fsl_dma_chan_probe(struct fsl_dma_device *fdev,
 	 */
 	WARN_ON(fdev->feature != new_fsl_chan->feature);
 
-	new_fsl_chan->dev = &new_fsl_chan->common.dev;
+	new_fsl_chan->dev = &new_fsl_chan->common.dev->device;
 	new_fsl_chan->reg_base = ioremap(new_fsl_chan->reg.start,
 			new_fsl_chan->reg.end - new_fsl_chan->reg.start + 1);


@ -75,60 +75,10 @@ static int ioat_dca_enabled = 1;
module_param(ioat_dca_enabled, int, 0644); module_param(ioat_dca_enabled, int, 0644);
MODULE_PARM_DESC(ioat_dca_enabled, "control support of dca service (default: 1)"); MODULE_PARM_DESC(ioat_dca_enabled, "control support of dca service (default: 1)");
static int ioat_setup_functionality(struct pci_dev *pdev, void __iomem *iobase)
{
struct ioat_device *device = pci_get_drvdata(pdev);
u8 version;
int err = 0;
version = readb(iobase + IOAT_VER_OFFSET);
switch (version) {
case IOAT_VER_1_2:
device->dma = ioat_dma_probe(pdev, iobase);
if (device->dma && ioat_dca_enabled)
device->dca = ioat_dca_init(pdev, iobase);
break;
case IOAT_VER_2_0:
device->dma = ioat_dma_probe(pdev, iobase);
if (device->dma && ioat_dca_enabled)
device->dca = ioat2_dca_init(pdev, iobase);
break;
case IOAT_VER_3_0:
device->dma = ioat_dma_probe(pdev, iobase);
if (device->dma && ioat_dca_enabled)
device->dca = ioat3_dca_init(pdev, iobase);
break;
default:
err = -ENODEV;
break;
}
if (!device->dma)
err = -ENODEV;
return err;
}
static void ioat_shutdown_functionality(struct pci_dev *pdev)
{
struct ioat_device *device = pci_get_drvdata(pdev);
dev_err(&pdev->dev, "Removing dma and dca services\n");
if (device->dca) {
unregister_dca_provider(device->dca);
free_dca_provider(device->dca);
device->dca = NULL;
}
if (device->dma) {
ioat_dma_remove(device->dma);
device->dma = NULL;
}
}
static struct pci_driver ioat_pci_driver = { static struct pci_driver ioat_pci_driver = {
.name = "ioatdma", .name = "ioatdma",
.id_table = ioat_pci_tbl, .id_table = ioat_pci_tbl,
.probe = ioat_probe, .probe = ioat_probe,
.shutdown = ioat_shutdown_functionality,
.remove = __devexit_p(ioat_remove), .remove = __devexit_p(ioat_remove),
}; };
@ -179,7 +129,29 @@ static int __devinit ioat_probe(struct pci_dev *pdev,
pci_set_master(pdev); pci_set_master(pdev);
err = ioat_setup_functionality(pdev, iobase); switch (readb(iobase + IOAT_VER_OFFSET)) {
case IOAT_VER_1_2:
device->dma = ioat_dma_probe(pdev, iobase);
if (device->dma && ioat_dca_enabled)
device->dca = ioat_dca_init(pdev, iobase);
break;
case IOAT_VER_2_0:
device->dma = ioat_dma_probe(pdev, iobase);
if (device->dma && ioat_dca_enabled)
device->dca = ioat2_dca_init(pdev, iobase);
break;
case IOAT_VER_3_0:
device->dma = ioat_dma_probe(pdev, iobase);
if (device->dma && ioat_dca_enabled)
device->dca = ioat3_dca_init(pdev, iobase);
break;
default:
err = -ENODEV;
break;
}
if (!device->dma)
err = -ENODEV;
if (err) if (err)
goto err_version; goto err_version;
@ -198,17 +170,21 @@ err_enable_device:
return err; return err;
} }
/*
* It is unsafe to remove this module: if removed while a requested
* dma is outstanding, esp. from tcp, it is possible to hang while
* waiting for something that will never finish. However, if you're
* feeling lucky, this usually works just fine.
*/
static void __devexit ioat_remove(struct pci_dev *pdev) static void __devexit ioat_remove(struct pci_dev *pdev)
{ {
struct ioat_device *device = pci_get_drvdata(pdev); struct ioat_device *device = pci_get_drvdata(pdev);
ioat_shutdown_functionality(pdev); dev_err(&pdev->dev, "Removing dma and dca services\n");
if (device->dca) {
unregister_dca_provider(device->dca);
free_dca_provider(device->dca);
device->dca = NULL;
}
if (device->dma) {
ioat_dma_remove(device->dma);
device->dma = NULL;
}
kfree(device); kfree(device);
} }


@ -734,8 +734,7 @@ static void ioat2_dma_massage_chan_desc(struct ioat_dma_chan *ioat_chan)
* ioat_dma_alloc_chan_resources - returns the number of allocated descriptors * ioat_dma_alloc_chan_resources - returns the number of allocated descriptors
* @chan: the channel to be filled out * @chan: the channel to be filled out
*/ */
static int ioat_dma_alloc_chan_resources(struct dma_chan *chan, static int ioat_dma_alloc_chan_resources(struct dma_chan *chan)
struct dma_client *client)
{ {
struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan); struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);
struct ioat_desc_sw *desc; struct ioat_desc_sw *desc;
@ -1341,12 +1340,11 @@ static void ioat_dma_start_null_desc(struct ioat_dma_chan *ioat_chan)
*/ */
#define IOAT_TEST_SIZE 2000 #define IOAT_TEST_SIZE 2000
DECLARE_COMPLETION(test_completion);
static void ioat_dma_test_callback(void *dma_async_param) static void ioat_dma_test_callback(void *dma_async_param)
{ {
printk(KERN_ERR "ioatdma: ioat_dma_test_callback(%p)\n", struct completion *cmp = dma_async_param;
dma_async_param);
complete(&test_completion); complete(cmp);
} }
/** /**
@ -1363,6 +1361,7 @@ static int ioat_dma_self_test(struct ioatdma_device *device)
dma_addr_t dma_dest, dma_src; dma_addr_t dma_dest, dma_src;
dma_cookie_t cookie; dma_cookie_t cookie;
int err = 0; int err = 0;
struct completion cmp;
src = kzalloc(sizeof(u8) * IOAT_TEST_SIZE, GFP_KERNEL); src = kzalloc(sizeof(u8) * IOAT_TEST_SIZE, GFP_KERNEL);
if (!src) if (!src)
@ -1381,7 +1380,7 @@ static int ioat_dma_self_test(struct ioatdma_device *device)
dma_chan = container_of(device->common.channels.next, dma_chan = container_of(device->common.channels.next,
struct dma_chan, struct dma_chan,
device_node); device_node);
if (device->common.device_alloc_chan_resources(dma_chan, NULL) < 1) { if (device->common.device_alloc_chan_resources(dma_chan) < 1) {
dev_err(&device->pdev->dev, dev_err(&device->pdev->dev,
"selftest cannot allocate chan resource\n"); "selftest cannot allocate chan resource\n");
err = -ENODEV; err = -ENODEV;
@ -1402,8 +1401,9 @@ static int ioat_dma_self_test(struct ioatdma_device *device)
} }
async_tx_ack(tx); async_tx_ack(tx);
init_completion(&cmp);
tx->callback = ioat_dma_test_callback; tx->callback = ioat_dma_test_callback;
tx->callback_param = (void *)0x8086; tx->callback_param = &cmp;
cookie = tx->tx_submit(tx); cookie = tx->tx_submit(tx);
if (cookie < 0) { if (cookie < 0) {
dev_err(&device->pdev->dev, dev_err(&device->pdev->dev,
@ -1413,7 +1413,7 @@ static int ioat_dma_self_test(struct ioatdma_device *device)
} }
device->common.device_issue_pending(dma_chan); device->common.device_issue_pending(dma_chan);
wait_for_completion_timeout(&test_completion, msecs_to_jiffies(3000)); wait_for_completion_timeout(&cmp, msecs_to_jiffies(3000));
if (device->common.device_is_tx_complete(dma_chan, cookie, NULL, NULL) if (device->common.device_is_tx_complete(dma_chan, cookie, NULL, NULL)
!= DMA_SUCCESS) { != DMA_SUCCESS) {


@ -24,7 +24,6 @@
#include <linux/init.h> #include <linux/init.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/async_tx.h>
#include <linux/delay.h> #include <linux/delay.h>
#include <linux/dma-mapping.h> #include <linux/dma-mapping.h>
#include <linux/spinlock.h> #include <linux/spinlock.h>
@ -116,7 +115,7 @@ iop_adma_run_tx_complete_actions(struct iop_adma_desc_slot *desc,
} }
/* run dependent operations */ /* run dependent operations */
async_tx_run_dependencies(&desc->async_tx); dma_run_dependencies(&desc->async_tx);
return cookie; return cookie;
} }
@ -270,8 +269,6 @@ static void __iop_adma_slot_cleanup(struct iop_adma_chan *iop_chan)
break; break;
} }
BUG_ON(!seen_current);
if (cookie > 0) { if (cookie > 0) {
iop_chan->completed_cookie = cookie; iop_chan->completed_cookie = cookie;
pr_debug("\tcompleted cookie %d\n", cookie); pr_debug("\tcompleted cookie %d\n", cookie);
@ -471,8 +468,7 @@ static void iop_chan_start_null_xor(struct iop_adma_chan *iop_chan);
* greater than 2x the number slots needed to satisfy a device->max_xor * greater than 2x the number slots needed to satisfy a device->max_xor
* request. * request.
* */ * */
static int iop_adma_alloc_chan_resources(struct dma_chan *chan, static int iop_adma_alloc_chan_resources(struct dma_chan *chan)
struct dma_client *client)
{ {
char *hw_desc; char *hw_desc;
int idx; int idx;
@ -866,7 +862,7 @@ static int __devinit iop_adma_memcpy_self_test(struct iop_adma_device *device)
dma_chan = container_of(device->common.channels.next, dma_chan = container_of(device->common.channels.next,
struct dma_chan, struct dma_chan,
device_node); device_node);
if (iop_adma_alloc_chan_resources(dma_chan, NULL) < 1) { if (iop_adma_alloc_chan_resources(dma_chan) < 1) {
err = -ENODEV; err = -ENODEV;
goto out; goto out;
} }
@ -964,7 +960,7 @@ iop_adma_xor_zero_sum_self_test(struct iop_adma_device *device)
dma_chan = container_of(device->common.channels.next, dma_chan = container_of(device->common.channels.next,
struct dma_chan, struct dma_chan,
device_node); device_node);
if (iop_adma_alloc_chan_resources(dma_chan, NULL) < 1) { if (iop_adma_alloc_chan_resources(dma_chan) < 1) {
err = -ENODEV; err = -ENODEV;
goto out; goto out;
} }
@ -1115,26 +1111,13 @@ static int __devexit iop_adma_remove(struct platform_device *dev)
struct iop_adma_device *device = platform_get_drvdata(dev); struct iop_adma_device *device = platform_get_drvdata(dev);
struct dma_chan *chan, *_chan; struct dma_chan *chan, *_chan;
struct iop_adma_chan *iop_chan; struct iop_adma_chan *iop_chan;
int i;
struct iop_adma_platform_data *plat_data = dev->dev.platform_data; struct iop_adma_platform_data *plat_data = dev->dev.platform_data;
dma_async_device_unregister(&device->common); dma_async_device_unregister(&device->common);
for (i = 0; i < 3; i++) {
unsigned int irq;
irq = platform_get_irq(dev, i);
free_irq(irq, device);
}
dma_free_coherent(&dev->dev, plat_data->pool_size, dma_free_coherent(&dev->dev, plat_data->pool_size,
device->dma_desc_pool_virt, device->dma_desc_pool); device->dma_desc_pool_virt, device->dma_desc_pool);
do {
struct resource *res;
res = platform_get_resource(dev, IORESOURCE_MEM, 0);
release_mem_region(res->start, res->end - res->start);
} while (0);
list_for_each_entry_safe(chan, _chan, &device->common.channels, list_for_each_entry_safe(chan, _chan, &device->common.channels,
device_node) { device_node) {
iop_chan = to_iop_adma_chan(chan); iop_chan = to_iop_adma_chan(chan);
@ -1255,7 +1238,6 @@ static int __devinit iop_adma_probe(struct platform_device *pdev)
spin_lock_init(&iop_chan->lock); spin_lock_init(&iop_chan->lock);
INIT_LIST_HEAD(&iop_chan->chain); INIT_LIST_HEAD(&iop_chan->chain);
INIT_LIST_HEAD(&iop_chan->all_slots); INIT_LIST_HEAD(&iop_chan->all_slots);
INIT_RCU_HEAD(&iop_chan->common.rcu);
iop_chan->common.device = dma_dev; iop_chan->common.device = dma_dev;
list_add_tail(&iop_chan->common.device_node, &dma_dev->channels); list_add_tail(&iop_chan->common.device_node, &dma_dev->channels);
@ -1431,16 +1413,12 @@ static int __init iop_adma_init (void)
return platform_driver_register(&iop_adma_driver); return platform_driver_register(&iop_adma_driver);
} }
/* it's currently unsafe to unload this module */
#if 0
static void __exit iop_adma_exit (void) static void __exit iop_adma_exit (void)
{ {
platform_driver_unregister(&iop_adma_driver); platform_driver_unregister(&iop_adma_driver);
return; return;
} }
module_exit(iop_adma_exit); module_exit(iop_adma_exit);
#endif
module_init(iop_adma_init); module_init(iop_adma_init);
MODULE_AUTHOR("Intel Corporation"); MODULE_AUTHOR("Intel Corporation");


@ -18,7 +18,6 @@
#include <linux/init.h> #include <linux/init.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/async_tx.h>
#include <linux/delay.h> #include <linux/delay.h>
#include <linux/dma-mapping.h> #include <linux/dma-mapping.h>
#include <linux/spinlock.h> #include <linux/spinlock.h>
@ -340,7 +339,7 @@ mv_xor_run_tx_complete_actions(struct mv_xor_desc_slot *desc,
} }
/* run dependent operations */ /* run dependent operations */
async_tx_run_dependencies(&desc->async_tx); dma_run_dependencies(&desc->async_tx);
return cookie; return cookie;
} }
@ -607,8 +606,7 @@ submit_done:
} }
/* returns the number of allocated descriptors */ /* returns the number of allocated descriptors */
static int mv_xor_alloc_chan_resources(struct dma_chan *chan, static int mv_xor_alloc_chan_resources(struct dma_chan *chan)
struct dma_client *client)
{ {
char *hw_desc; char *hw_desc;
int idx; int idx;
@ -958,7 +956,7 @@ static int __devinit mv_xor_memcpy_self_test(struct mv_xor_device *device)
dma_chan = container_of(device->common.channels.next, dma_chan = container_of(device->common.channels.next,
struct dma_chan, struct dma_chan,
device_node); device_node);
if (mv_xor_alloc_chan_resources(dma_chan, NULL) < 1) { if (mv_xor_alloc_chan_resources(dma_chan) < 1) {
err = -ENODEV; err = -ENODEV;
goto out; goto out;
} }
@ -1053,7 +1051,7 @@ mv_xor_xor_self_test(struct mv_xor_device *device)
dma_chan = container_of(device->common.channels.next, dma_chan = container_of(device->common.channels.next,
struct dma_chan, struct dma_chan,
device_node); device_node);
if (mv_xor_alloc_chan_resources(dma_chan, NULL) < 1) { if (mv_xor_alloc_chan_resources(dma_chan) < 1) {
err = -ENODEV; err = -ENODEV;
goto out; goto out;
} }
@ -1221,7 +1219,6 @@ static int __devinit mv_xor_probe(struct platform_device *pdev)
INIT_LIST_HEAD(&mv_chan->chain); INIT_LIST_HEAD(&mv_chan->chain);
INIT_LIST_HEAD(&mv_chan->completed_slots); INIT_LIST_HEAD(&mv_chan->completed_slots);
INIT_LIST_HEAD(&mv_chan->all_slots); INIT_LIST_HEAD(&mv_chan->all_slots);
INIT_RCU_HEAD(&mv_chan->common.rcu);
mv_chan->common.device = dma_dev; mv_chan->common.device = dma_dev;
list_add_tail(&mv_chan->common.device_node, &dma_dev->channels); list_add_tail(&mv_chan->common.device_node, &dma_dev->channels);


@@ -55,7 +55,6 @@ enum atmel_mci_state {
 struct atmel_mci_dma {
 #ifdef CONFIG_MMC_ATMELMCI_DMA
-	struct dma_client		client;
 	struct dma_chan			*chan;
 	struct dma_async_tx_descriptor	*data_desc;
 #endif
@@ -593,10 +592,8 @@ atmci_submit_data_dma(struct atmel_mci *host, struct mmc_data *data)
 	/* If we don't have a channel, we can't do DMA */
 	chan = host->dma.chan;
-	if (chan) {
-		dma_chan_get(chan);
+	if (chan)
 		host->data_chan = chan;
-	}
 	if (!chan)
 		return -ENODEV;
@@ -1443,60 +1440,6 @@ static irqreturn_t atmci_detect_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
#ifdef CONFIG_MMC_ATMELMCI_DMA
static inline struct atmel_mci *
dma_client_to_atmel_mci(struct dma_client *client)
{
return container_of(client, struct atmel_mci, dma.client);
}
static enum dma_state_client atmci_dma_event(struct dma_client *client,
struct dma_chan *chan, enum dma_state state)
{
struct atmel_mci *host;
enum dma_state_client ret = DMA_NAK;
host = dma_client_to_atmel_mci(client);
switch (state) {
case DMA_RESOURCE_AVAILABLE:
spin_lock_bh(&host->lock);
if (!host->dma.chan) {
host->dma.chan = chan;
ret = DMA_ACK;
}
spin_unlock_bh(&host->lock);
if (ret == DMA_ACK)
dev_info(&host->pdev->dev,
"Using %s for DMA transfers\n",
chan->dev.bus_id);
break;
case DMA_RESOURCE_REMOVED:
spin_lock_bh(&host->lock);
if (host->dma.chan == chan) {
host->dma.chan = NULL;
ret = DMA_ACK;
}
spin_unlock_bh(&host->lock);
if (ret == DMA_ACK)
dev_info(&host->pdev->dev,
"Lost %s, falling back to PIO\n",
chan->dev.bus_id);
break;
default:
break;
}
return ret;
}
#endif /* CONFIG_MMC_ATMELMCI_DMA */
 static int __init atmci_init_slot(struct atmel_mci *host,
 		struct mci_slot_pdata *slot_data, unsigned int id,
 		u32 sdc_reg)
@@ -1600,6 +1543,18 @@ static void __exit atmci_cleanup_slot(struct atmel_mci_slot *slot,
 	mmc_free_host(slot->mmc);
 }
+#ifdef CONFIG_MMC_ATMELMCI_DMA
+static bool filter(struct dma_chan *chan, void *slave)
+{
+	struct dw_dma_slave *dws = slave;
+	if (dws->dma_dev == chan->device->dev)
+		return true;
+	else
+		return false;
+}
+#endif
 static int __init atmci_probe(struct platform_device *pdev)
 {
 	struct mci_platform_data *pdata;
@@ -1652,22 +1607,20 @@ static int __init atmci_probe(struct platform_device *pdev)
 		goto err_request_irq;
 #ifdef CONFIG_MMC_ATMELMCI_DMA
-	if (pdata->dma_slave) {
-		struct dma_slave *slave = pdata->dma_slave;
+	if (pdata->dma_slave.dma_dev) {
+		struct dw_dma_slave *dws = &pdata->dma_slave;
+		dma_cap_mask_t mask;
-		slave->tx_reg = regs->start + MCI_TDR;
-		slave->rx_reg = regs->start + MCI_RDR;
+		dws->tx_reg = regs->start + MCI_TDR;
+		dws->rx_reg = regs->start + MCI_RDR;
 		/* Try to grab a DMA channel */
-		host->dma.client.event_callback = atmci_dma_event;
-		dma_cap_set(DMA_SLAVE, host->dma.client.cap_mask);
-		host->dma.client.slave = slave;
-		dma_async_client_register(&host->dma.client);
-		dma_async_client_chan_request(&host->dma.client);
-	} else {
-		dev_notice(&pdev->dev, "DMA not available, using PIO\n");
+		dma_cap_zero(mask);
+		dma_cap_set(DMA_SLAVE, mask);
+		host->dma.chan = dma_request_channel(mask, filter, dws);
 	}
+	if (!host->dma.chan)
+		dev_notice(&pdev->dev, "DMA not available, using PIO\n");
 #endif /* CONFIG_MMC_ATMELMCI_DMA */
 	platform_set_drvdata(pdev, host);
@@ -1699,8 +1652,8 @@ static int __init atmci_probe(struct platform_device *pdev)
 err_init_slot:
 #ifdef CONFIG_MMC_ATMELMCI_DMA
-	if (pdata->dma_slave)
-		dma_async_client_unregister(&host->dma.client);
+	if (host->dma.chan)
+		dma_release_channel(host->dma.chan);
 #endif
 	free_irq(irq, host);
 err_request_irq:
@@ -1731,8 +1684,8 @@ static int __exit atmci_remove(struct platform_device *pdev)
 	clk_disable(host->mck);
 #ifdef CONFIG_MMC_ATMELMCI_DMA
-	if (host->dma.client.slave)
-		dma_async_client_unregister(&host->dma.client);
+	if (host->dma.chan)
+		dma_release_channel(host->dma.chan);
 #endif
 	free_irq(platform_get_irq(pdev, 0), host);
@@ -1761,7 +1714,7 @@ static void __exit atmci_exit(void)
 	platform_driver_unregister(&atmci_driver);
 }
-module_init(atmci_init);
+late_initcall(atmci_init); /* try to load after dma driver when built-in */
 module_exit(atmci_exit);
 MODULE_DESCRIPTION("Atmel Multimedia Card Interface driver");
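
Stripped of the MMC specifics, the conversion above is the reference pattern for the new channel allocator: build a capability mask, optionally pass a filter callback plus an opaque parameter, keep the returned channel for exclusive use, and hand it back with dma_release_channel(). A hedged sketch (the foo_* names are hypothetical; only the dmaengine calls are real API):

	#include <linux/dmaengine.h>
	#include <linux/dw_dmac.h>

	struct foo_host {
		struct dma_chan *chan;
	};

	static bool foo_filter(struct dma_chan *chan, void *param)
	{
		struct dw_dma_slave *dws = param;

		/* accept only a channel from the DMA master named in the slave data */
		return dws->dma_dev == chan->device->dev;
	}

	static int foo_grab_channel(struct foo_host *host, struct dw_dma_slave *dws)
	{
		dma_cap_mask_t mask;

		dma_cap_zero(mask);
		dma_cap_set(DMA_SLAVE, mask);

		host->chan = dma_request_channel(mask, foo_filter, dws);
		if (!host->chan)
			return -ENODEV;	/* caller falls back to PIO */
		return 0;
	}

	static void foo_put_channel(struct foo_host *host)
	{
		if (host->chan)
			dma_release_channel(host->chan);
	}
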


@@ -59,9 +59,7 @@ enum async_tx_flags {
 };
 #ifdef CONFIG_DMA_ENGINE
-void async_tx_issue_pending_all(void);
-enum dma_status dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx);
-void async_tx_run_dependencies(struct dma_async_tx_descriptor *tx);
+#define async_tx_issue_pending_all dma_issue_pending_all
 #ifdef CONFIG_ARCH_HAS_ASYNC_TX_FIND_CHANNEL
 #include <asm/async_tx.h>
 #else
@@ -77,19 +75,6 @@ static inline void async_tx_issue_pending_all(void)
 	do { } while (0);
 }
-static inline enum dma_status
-dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx)
-{
-	return DMA_SUCCESS;
-}
-static inline void
-async_tx_run_dependencies(struct dma_async_tx_descriptor *tx,
-			  struct dma_chan *host_chan)
-{
-	do { } while (0);
-}
 static inline struct dma_chan *
 async_tx_find_channel(struct dma_async_tx_descriptor *depend_tx,
 	enum dma_transaction_type tx_type, struct page **dst, int dst_count,


@@ -3,7 +3,7 @@
 #define ATMEL_MCI_MAX_NR_SLOTS	2
-struct dma_slave;
+#include <linux/dw_dmac.h>
 /**
  * struct mci_slot_pdata - board-specific per-slot configuration
@@ -28,11 +28,11 @@ struct mci_slot_pdata {
 /**
  * struct mci_platform_data - board-specific MMC/SDcard configuration
- * @dma_slave: DMA slave interface to use in data transfers, or NULL.
+ * @dma_slave: DMA slave interface to use in data transfers.
  * @slot: Per-slot configuration data.
  */
 struct mci_platform_data {
-	struct dma_slave	*dma_slave;
+	struct dw_dma_slave	dma_slave;
 	struct mci_slot_pdata	slot[ATMEL_MCI_MAX_NR_SLOTS];
 };
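
Because struct dw_dma_slave is now embedded in the platform data, a board file only has to point dma_slave.dma_dev at its DMA controller; the MMC driver fills in tx_reg/rx_reg itself, as the probe hunk above shows. A sketch of what board code might look like (dw_dmac0_device is a hypothetical name for the board's DesignWare DMA controller platform device):

	static struct mci_platform_data mci0_data = {
		.dma_slave = {
			/* hypothetical device name; only .dma_dev has to be set here */
			.dma_dev	= &dw_dmac0_device.dev,
		},
		/* .slot[] per-slot configuration omitted */
	};
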


@@ -28,32 +28,6 @@
 #include <linux/rcupdate.h>
 #include <linux/dma-mapping.h>
/**
* enum dma_state - resource PNP/power management state
* @DMA_RESOURCE_SUSPEND: DMA device going into low power state
* @DMA_RESOURCE_RESUME: DMA device returning to full power
* @DMA_RESOURCE_AVAILABLE: DMA device available to the system
* @DMA_RESOURCE_REMOVED: DMA device removed from the system
*/
enum dma_state {
DMA_RESOURCE_SUSPEND,
DMA_RESOURCE_RESUME,
DMA_RESOURCE_AVAILABLE,
DMA_RESOURCE_REMOVED,
};
/**
* enum dma_state_client - state of the channel in the client
* @DMA_ACK: client would like to use, or was using this channel
* @DMA_DUP: client has already seen this channel, or is not using this channel
* @DMA_NAK: client does not want to see any more channels
*/
enum dma_state_client {
DMA_ACK,
DMA_DUP,
DMA_NAK,
};
 /**
  * typedef dma_cookie_t - an opaque DMA cookie
  *
@@ -89,23 +63,13 @@ enum dma_transaction_type {
 	DMA_MEMSET,
 	DMA_MEMCPY_CRC32C,
 	DMA_INTERRUPT,
+	DMA_PRIVATE,
 	DMA_SLAVE,
 };
 /* last transaction type for creation of the capabilities mask */
 #define DMA_TX_TYPE_END (DMA_SLAVE + 1)
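
DMA_PRIVATE is new here; as used by the dma_request_channel() work it appears to mark a device whose channels are withheld from the general-purpose mem-to-mem pool (dma_find_channel()) and are only handed out by dma_request_channel(). The core sets it while a private channel is outstanding, and, as far as I can tell, a provider that never wants to appear in the public pool can set it up front at registration time, roughly:

	dma_cap_set(DMA_SLAVE, dma_dev->cap_mask);
	dma_cap_set(DMA_PRIVATE, dma_dev->cap_mask);	/* opt out of the public pool */
	err = dma_async_device_register(dma_dev);
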
/**
* enum dma_slave_width - DMA slave register access width.
* @DMA_SLAVE_WIDTH_8BIT: Do 8-bit slave register accesses
* @DMA_SLAVE_WIDTH_16BIT: Do 16-bit slave register accesses
* @DMA_SLAVE_WIDTH_32BIT: Do 32-bit slave register accesses
*/
enum dma_slave_width {
DMA_SLAVE_WIDTH_8BIT,
DMA_SLAVE_WIDTH_16BIT,
DMA_SLAVE_WIDTH_32BIT,
};
 /**
  * enum dma_ctrl_flags - DMA flags to augment operation preparation,
@@ -131,32 +95,6 @@
  */
 typedef struct { DECLARE_BITMAP(bits, DMA_TX_TYPE_END); } dma_cap_mask_t;
/**
* struct dma_slave - Information about a DMA slave
* @dev: device acting as DMA slave
* @dma_dev: required DMA master device. If non-NULL, the client can not be
* bound to other masters than this.
* @tx_reg: physical address of data register used for
* memory-to-peripheral transfers
* @rx_reg: physical address of data register used for
* peripheral-to-memory transfers
* @reg_width: peripheral register width
*
* If dma_dev is non-NULL, the client can not be bound to other DMA
* masters than the one corresponding to this device. The DMA master
* driver may use this to determine if there is controller-specific
* data wrapped around this struct. Drivers of platform code that sets
* the dma_dev field must therefore make sure to use an appropriate
* controller-specific dma slave structure wrapping this struct.
*/
struct dma_slave {
struct device *dev;
struct device *dma_dev;
dma_addr_t tx_reg;
dma_addr_t rx_reg;
enum dma_slave_width reg_width;
};
 /**
  * struct dma_chan_percpu - the per-CPU part of struct dma_chan
- * @refcount: local_t used for open-coded "bigref" counting
@@ -165,7 +103,6 @@ struct dma_slave {
  */
 struct dma_chan_percpu {
-	local_t refcount;
 	/* stats */
 	unsigned long memcpy_count;
 	unsigned long bytes_transferred;
@@ -176,13 +113,14 @@ struct dma_chan_percpu {
  * @device: ptr to the dma device who supplies this channel, always !%NULL
  * @cookie: last cookie value returned to client
  * @chan_id: channel ID for sysfs
- * @class_dev: class device for sysfs
+ * @dev: class device for sysfs
  * @refcount: kref, used in "bigref" slow-mode
  * @slow_ref: indicates that the DMA channel is free
  * @rcu: the DMA channel's RCU head
  * @device_node: used to add this to the device chan list
  * @local: per-cpu pointer to a struct dma_chan_percpu
  * @client-count: how many clients are using this channel
+ * @table_count: number of appearances in the mem-to-mem allocation table
  */
 struct dma_chan {
 	struct dma_device *device;
@@ -190,73 +128,47 @@ struct dma_chan {
 	/* sysfs */
 	int chan_id;
-	struct device dev;
+	struct dma_chan_dev *dev;
-	struct kref refcount;
-	int slow_ref;
-	struct rcu_head rcu;
 	struct list_head device_node;
 	struct dma_chan_percpu *local;
 	int client_count;
+	int table_count;
 };
-#define to_dma_chan(p) container_of(p, struct dma_chan, dev)
+/**
+ * struct dma_chan_dev - relate sysfs device node to backing channel device
+ * @chan - driver channel device
+ * @device - sysfs device
+ * @dev_id - parent dma_device dev_id
+ * @idr_ref - reference count to gate release of dma_device dev_id
+ */
+struct dma_chan_dev {
+	struct dma_chan *chan;
+	struct device device;
+	int dev_id;
+	atomic_t *idr_ref;
+};
+static inline const char *dma_chan_name(struct dma_chan *chan)
+{
+	return dev_name(&chan->dev->device);
+}
 void dma_chan_cleanup(struct kref *kref);
static inline void dma_chan_get(struct dma_chan *chan)
{
if (unlikely(chan->slow_ref))
kref_get(&chan->refcount);
else {
local_inc(&(per_cpu_ptr(chan->local, get_cpu())->refcount));
put_cpu();
}
}
static inline void dma_chan_put(struct dma_chan *chan)
{
if (unlikely(chan->slow_ref))
kref_put(&chan->refcount, dma_chan_cleanup);
else {
local_dec(&(per_cpu_ptr(chan->local, get_cpu())->refcount));
put_cpu();
}
}
/*
* typedef dma_event_callback - function pointer to a DMA event callback
* For each channel added to the system this routine is called for each client.
* If the client would like to use the channel it returns '1' to signal (ack)
* the dmaengine core to take out a reference on the channel and its
* corresponding device. A client must not 'ack' an available channel more
* than once. When a channel is removed all clients are notified. If a client
* is using the channel it must 'ack' the removal. A client must not 'ack' a
* removed channel more than once.
* @client - 'this' pointer for the client context
* @chan - channel to be acted upon
* @state - available or removed
*/
struct dma_client;
typedef enum dma_state_client (*dma_event_callback) (struct dma_client *client,
struct dma_chan *chan, enum dma_state state);
 /**
- * struct dma_client - info on the entity making use of DMA services
- * @event_callback: func ptr to call when something happens
- * @cap_mask: only return channels that satisfy the requested capabilities
- *	a value of zero corresponds to any capability
- * @slave: data for preparing slave transfer. Must be non-NULL iff the
- *	DMA_SLAVE capability is requested.
- * @global_node: list_head for global dma_client_list
+ * typedef dma_filter_fn - callback filter for dma_request_channel
+ * @chan: channel to be reviewed
+ * @filter_param: opaque parameter passed through dma_request_channel
+ *
+ * When this optional parameter is specified in a call to dma_request_channel a
+ * suitable channel is passed to this routine for further dispositioning before
+ * being returned.  Where 'suitable' indicates a non-busy channel that
+ * satisfies the given capability mask.  It returns 'true' to indicate that the
+ * channel is suitable.
 */
-struct dma_client {
-	dma_event_callback	event_callback;
-	dma_cap_mask_t		cap_mask;
-	struct dma_slave	*slave;
-	struct list_head	global_node;
-};
+typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
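
The filter is optional: passing NULL accepts the first idle channel that satisfies the capability mask. In contrast to the slave case above, a client that just wants a dedicated memcpy engine could do something like this (sketch, hypothetical foo_ name):

	static struct dma_chan *foo_get_memcpy_chan(void)
	{
		dma_cap_mask_t mask;

		dma_cap_zero(mask);
		dma_cap_set(DMA_MEMCPY, mask);

		/* no filter: take the first idle channel with the capability */
		return dma_request_channel(mask, NULL, NULL);
	}

The returned channel is exclusively owned until dma_release_channel() is called on it.
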
 typedef void (*dma_async_tx_callback)(void *dma_async_param);
 /**
@@ -323,14 +235,10 @@ struct dma_device {
 	dma_cap_mask_t  cap_mask;
 	int max_xor;
-	struct kref refcount;
-	struct completion done;
 	int dev_id;
 	struct device *dev;
-	int (*device_alloc_chan_resources)(struct dma_chan *chan,
-			struct dma_client *client);
+	int (*device_alloc_chan_resources)(struct dma_chan *chan);
 	void (*device_free_chan_resources)(struct dma_chan *chan);
 	struct dma_async_tx_descriptor *(*device_prep_dma_memcpy)(
@@ -362,9 +270,8 @@ struct dma_device {
 /* --- public DMA engine API --- */
-void dma_async_client_register(struct dma_client *client);
-void dma_async_client_unregister(struct dma_client *client);
-void dma_async_client_chan_request(struct dma_client *client);
+void dmaengine_get(void);
+void dmaengine_put(void);
 dma_cookie_t dma_async_memcpy_buf_to_buf(struct dma_chan *chan,
 	void *dest, void *src, size_t len);
 dma_cookie_t dma_async_memcpy_buf_to_pg(struct dma_chan *chan,
@@ -406,6 +313,12 @@ __dma_cap_set(enum dma_transaction_type tx_type, dma_cap_mask_t *dstp)
 	set_bit(tx_type, dstp->bits);
 }
+#define dma_cap_zero(mask) __dma_cap_zero(&(mask))
+static inline void __dma_cap_zero(dma_cap_mask_t *dstp)
+{
+	bitmap_zero(dstp->bits, DMA_TX_TYPE_END);
+}
 #define dma_has_cap(tx, mask) __dma_has_cap((tx), &(mask))
 static inline int
 __dma_has_cap(enum dma_transaction_type tx_type, dma_cap_mask_t *srcp)
@@ -475,11 +388,25 @@ static inline enum dma_status dma_async_is_complete(dma_cookie_t cookie,
 }
 enum dma_status dma_sync_wait(struct dma_chan *chan, dma_cookie_t cookie);
+#ifdef CONFIG_DMA_ENGINE
+enum dma_status dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx);
+#else
+static inline enum dma_status dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx)
+{
+	return DMA_SUCCESS;
+}
+#endif
 /* --- DMA device --- */
 int dma_async_device_register(struct dma_device *device);
 void dma_async_device_unregister(struct dma_device *device);
+void dma_run_dependencies(struct dma_async_tx_descriptor *tx);
+struct dma_chan *dma_find_channel(enum dma_transaction_type tx_type);
+void dma_issue_pending_all(void);
+#define dma_request_channel(mask, x, y) __dma_request_channel(&(mask), x, y)
+struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask, dma_filter_fn fn, void *fn_param);
+void dma_release_channel(struct dma_chan *chan);
 /* --- Helper iov-locking functions --- */
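
For clients that share the public channel pool instead of owning a channel (the net_dma model further below), the new interface reduces to four calls: dmaengine_get() when the subsystem starts using offload, dma_find_channel() at each use site, dma_issue_pending_all() once per batch, and dmaengine_put() at teardown. A hedged sketch of that life cycle (foo_* names are hypothetical):

	#include <linux/dmaengine.h>
	#include <linux/string.h>

	/* subsystem init/exit: take and drop the module-level reference */
	static int __init foo_init(void)
	{
		dmaengine_get();
		return 0;
	}

	static void __exit foo_exit(void)
	{
		dmaengine_put();
	}

	/* fast path: look the channel up per use, no per-channel get/put */
	static void foo_copy(void *dst, void *src, size_t len)
	{
		struct dma_chan *chan = dma_find_channel(DMA_MEMCPY);
		dma_cookie_t cookie;

		if (!chan) {
			memcpy(dst, src, len);		/* CPU fallback */
			return;
		}

		cookie = dma_async_memcpy_buf_to_buf(chan, dst, src, len);
		dma_issue_pending_all();		/* push pending work on all channels */
		dma_sync_wait(chan, cookie);		/* real users poll asynchronously */
	}
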


@@ -21,15 +21,35 @@ struct dw_dma_platform_data {
 	unsigned int	nr_channels;
 };
+/**
+ * enum dw_dma_slave_width - DMA slave register access width.
+ * @DMA_SLAVE_WIDTH_8BIT: Do 8-bit slave register accesses
+ * @DMA_SLAVE_WIDTH_16BIT: Do 16-bit slave register accesses
+ * @DMA_SLAVE_WIDTH_32BIT: Do 32-bit slave register accesses
+ */
+enum dw_dma_slave_width {
+	DW_DMA_SLAVE_WIDTH_8BIT,
+	DW_DMA_SLAVE_WIDTH_16BIT,
+	DW_DMA_SLAVE_WIDTH_32BIT,
+};
 /**
  * struct dw_dma_slave - Controller-specific information about a slave
- * @slave: Generic information about the slave
- * @ctl_lo: Platform-specific initializer for the CTL_LO register
+ *
+ * @dma_dev: required DMA master device
+ * @tx_reg: physical address of data register used for
+ *	memory-to-peripheral transfers
+ * @rx_reg: physical address of data register used for
+ *	peripheral-to-memory transfers
+ * @reg_width: peripheral register width
  * @cfg_hi: Platform-specific initializer for the CFG_HI register
  * @cfg_lo: Platform-specific initializer for the CFG_LO register
  */
 struct dw_dma_slave {
-	struct dma_slave	slave;
+	struct device		*dma_dev;
+	dma_addr_t		tx_reg;
+	dma_addr_t		rx_reg;
+	enum dw_dma_slave_width	reg_width;
 	u32			cfg_hi;
 	u32			cfg_lo;
 };
@@ -54,9 +74,4 @@ struct dw_dma_slave {
 #define DWC_CFGL_HS_DST_POL	(1 << 18)	/* dst handshake active low */
 #define DWC_CFGL_HS_SRC_POL	(1 << 19)	/* src handshake active low */
-static inline struct dw_dma_slave *to_dw_dma_slave(struct dma_slave *slave)
-{
-	return container_of(slave, struct dw_dma_slave, slave);
-}
 #endif /* DW_DMAC_H */


@@ -1125,9 +1125,6 @@ struct softnet_data
 	struct sk_buff		*completion_queue;
 	struct napi_struct	backlog;
-#ifdef CONFIG_NET_DMA
-	struct dma_chan		*net_dma;
-#endif
 };
 DECLARE_PER_CPU(struct softnet_data,softnet_data);


@@ -24,17 +24,6 @@
 #include <linux/dmaengine.h>
 #include <linux/skbuff.h>
-static inline struct dma_chan *get_softnet_dma(void)
-{
-	struct dma_chan *chan;
-	rcu_read_lock();
-	chan = rcu_dereference(__get_cpu_var(softnet_data).net_dma);
-	if (chan)
-		dma_chan_get(chan);
-	rcu_read_unlock();
-	return chan;
-}
 int dma_skb_copy_datagram_iovec(struct dma_chan* chan,
 		struct sk_buff *skb, int offset, struct iovec *to,
 		size_t len, struct dma_pinned_list *pinned_list);
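
With get_softnet_dma() gone, the consumers below fetch a memcpy channel at the point of use and never take or drop a per-channel reference. Pulled together in one place (the real TCP code spreads this across several functions and completes the copies asynchronously), the offload-copy sequence looks roughly like this; dma_unpin_iovec_pages() is assumed to be the counterpart to dma_pin_iovec_pages() from net/core/user_dma.c, as it is not shown in this diff:

	struct dma_pinned_list *pinned_list;
	struct dma_chan *chan;
	dma_cookie_t cookie;

	pinned_list = dma_pin_iovec_pages(msg->msg_iov, len);
	chan = dma_find_channel(DMA_MEMCPY);

	if (chan && pinned_list) {
		cookie = dma_skb_copy_datagram_iovec(chan, skb, 0,
						     msg->msg_iov, len,
						     pinned_list);
		dma_issue_pending_all();
		dma_sync_wait(chan, cookie);	/* simplified; see tcp_recvmsg below */
	} else {
		skb_copy_datagram_iovec(skb, 0, msg->msg_iov, len);	/* CPU path */
	}

	if (pinned_list)
		dma_unpin_iovec_pages(pinned_list);	/* assumed helper, see lead-in */
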


@@ -170,25 +170,6 @@ static DEFINE_SPINLOCK(ptype_lock);
 static struct list_head ptype_base[PTYPE_HASH_SIZE] __read_mostly;
 static struct list_head ptype_all __read_mostly;	/* Taps */
#ifdef CONFIG_NET_DMA
struct net_dma {
struct dma_client client;
spinlock_t lock;
cpumask_t channel_mask;
struct dma_chan **channels;
};
static enum dma_state_client
netdev_dma_event(struct dma_client *client, struct dma_chan *chan,
enum dma_state state);
static struct net_dma net_dma = {
.client = {
.event_callback = netdev_dma_event,
},
};
#endif
 /*
  * The @dev_base_head list is protected by @dev_base_lock and the rtnl
  * semaphore.
@@ -2754,14 +2735,7 @@ out:
 	 * There may not be any more sk_buffs coming right now, so push
 	 * any pending DMA copies to hardware
 	 */
-	if (!cpus_empty(net_dma.channel_mask)) {
-		int chan_idx;
-		for_each_cpu_mask_nr(chan_idx, net_dma.channel_mask) {
-			struct dma_chan *chan = net_dma.channels[chan_idx];
-			if (chan)
-				dma_async_memcpy_issue_pending(chan);
-		}
-	}
+	dma_issue_pending_all();
 #endif
 	return;
@@ -4952,122 +4926,6 @@ static int dev_cpu_callback(struct notifier_block *nfb,
 	return NOTIFY_OK;
 }
#ifdef CONFIG_NET_DMA
/**
* net_dma_rebalance - try to maintain one DMA channel per CPU
* @net_dma: DMA client and associated data (lock, channels, channel_mask)
*
* This is called when the number of channels allocated to the net_dma client
* changes. The net_dma client tries to have one DMA channel per CPU.
*/
static void net_dma_rebalance(struct net_dma *net_dma)
{
unsigned int cpu, i, n, chan_idx;
struct dma_chan *chan;
if (cpus_empty(net_dma->channel_mask)) {
for_each_online_cpu(cpu)
rcu_assign_pointer(per_cpu(softnet_data, cpu).net_dma, NULL);
return;
}
i = 0;
cpu = first_cpu(cpu_online_map);
for_each_cpu_mask_nr(chan_idx, net_dma->channel_mask) {
chan = net_dma->channels[chan_idx];
n = ((num_online_cpus() / cpus_weight(net_dma->channel_mask))
+ (i < (num_online_cpus() %
cpus_weight(net_dma->channel_mask)) ? 1 : 0));
while(n) {
per_cpu(softnet_data, cpu).net_dma = chan;
cpu = next_cpu(cpu, cpu_online_map);
n--;
}
i++;
}
}
/**
* netdev_dma_event - event callback for the net_dma_client
* @client: should always be net_dma_client
* @chan: DMA channel for the event
* @state: DMA state to be handled
*/
static enum dma_state_client
netdev_dma_event(struct dma_client *client, struct dma_chan *chan,
enum dma_state state)
{
int i, found = 0, pos = -1;
struct net_dma *net_dma =
container_of(client, struct net_dma, client);
enum dma_state_client ack = DMA_DUP; /* default: take no action */
spin_lock(&net_dma->lock);
switch (state) {
case DMA_RESOURCE_AVAILABLE:
for (i = 0; i < nr_cpu_ids; i++)
if (net_dma->channels[i] == chan) {
found = 1;
break;
} else if (net_dma->channels[i] == NULL && pos < 0)
pos = i;
if (!found && pos >= 0) {
ack = DMA_ACK;
net_dma->channels[pos] = chan;
cpu_set(pos, net_dma->channel_mask);
net_dma_rebalance(net_dma);
}
break;
case DMA_RESOURCE_REMOVED:
for (i = 0; i < nr_cpu_ids; i++)
if (net_dma->channels[i] == chan) {
found = 1;
pos = i;
break;
}
if (found) {
ack = DMA_ACK;
cpu_clear(pos, net_dma->channel_mask);
net_dma->channels[i] = NULL;
net_dma_rebalance(net_dma);
}
break;
default:
break;
}
spin_unlock(&net_dma->lock);
return ack;
}
/**
* netdev_dma_register - register the networking subsystem as a DMA client
*/
static int __init netdev_dma_register(void)
{
net_dma.channels = kzalloc(nr_cpu_ids * sizeof(struct net_dma),
GFP_KERNEL);
if (unlikely(!net_dma.channels)) {
printk(KERN_NOTICE
"netdev_dma: no memory for net_dma.channels\n");
return -ENOMEM;
}
spin_lock_init(&net_dma.lock);
dma_cap_set(DMA_MEMCPY, net_dma.client.cap_mask);
dma_async_client_register(&net_dma.client);
dma_async_client_chan_request(&net_dma.client);
return 0;
}
#else
static int __init netdev_dma_register(void) { return -ENODEV; }
#endif /* CONFIG_NET_DMA */
 /**
  * netdev_increment_features - increment feature set by one
@@ -5287,14 +5145,15 @@ static int __init net_dev_init(void)
 	if (register_pernet_device(&default_device_ops))
 		goto out;
-	netdev_dma_register();
 	open_softirq(NET_TX_SOFTIRQ, net_tx_action);
 	open_softirq(NET_RX_SOFTIRQ, net_rx_action);
 	hotcpu_notifier(dev_cpu_callback, 0);
 	dst_init();
 	dev_mcast_init();
+#ifdef CONFIG_NET_DMA
+	dmaengine_get();
+#endif
 	rc = 0;
 out:
 	return rc;


@@ -1313,7 +1313,7 @@ int tcp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
 	if ((available < target) &&
 	    (len > sysctl_tcp_dma_copybreak) && !(flags & MSG_PEEK) &&
 	    !sysctl_tcp_low_latency &&
-	    __get_cpu_var(softnet_data).net_dma) {
+	    dma_find_channel(DMA_MEMCPY)) {
 		preempt_enable_no_resched();
 		tp->ucopy.pinned_list =
 			dma_pin_iovec_pages(msg->msg_iov, len);
@@ -1523,7 +1523,7 @@ do_prequeue:
 		if (!(flags & MSG_TRUNC)) {
 #ifdef CONFIG_NET_DMA
 			if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list)
-				tp->ucopy.dma_chan = get_softnet_dma();
+				tp->ucopy.dma_chan = dma_find_channel(DMA_MEMCPY);
 			if (tp->ucopy.dma_chan) {
 				tp->ucopy.dma_cookie = dma_skb_copy_datagram_iovec(
@@ -1628,7 +1628,6 @@ skip_copy:
 		/* Safe to free early-copied skbs now */
 		__skb_queue_purge(&sk->sk_async_wait_queue);
-		dma_chan_put(tp->ucopy.dma_chan);
 		tp->ucopy.dma_chan = NULL;
 	}
 	if (tp->ucopy.pinned_list) {


@@ -5005,7 +5005,7 @@ static int tcp_dma_try_early_copy(struct sock *sk, struct sk_buff *skb,
 		return 0;
 	if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list)
-		tp->ucopy.dma_chan = get_softnet_dma();
+		tp->ucopy.dma_chan = dma_find_channel(DMA_MEMCPY);
 	if (tp->ucopy.dma_chan && skb_csum_unnecessary(skb)) {


@@ -1594,7 +1594,7 @@ process:
 #ifdef CONFIG_NET_DMA
 		struct tcp_sock *tp = tcp_sk(sk);
 		if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list)
-			tp->ucopy.dma_chan = get_softnet_dma();
+			tp->ucopy.dma_chan = dma_find_channel(DMA_MEMCPY);
 		if (tp->ucopy.dma_chan)
 			ret = tcp_v4_do_rcv(sk, skb);
 		else


@@ -1675,7 +1675,7 @@ process:
 #ifdef CONFIG_NET_DMA
 		struct tcp_sock *tp = tcp_sk(sk);
 		if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list)
-			tp->ucopy.dma_chan = get_softnet_dma();
+			tp->ucopy.dma_chan = dma_find_channel(DMA_MEMCPY);
 		if (tp->ucopy.dma_chan)
 			ret = tcp_v6_do_rcv(sk, skb);
 		else