/*
 * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
 * Copyright (c) 2005 Cisco Systems. All rights reserved.
 * Copyright (c) 2005 Mellanox Technologies. All rights reserved.
 *
 * This software is available to you under a choice of one of two
 * licenses.  You may choose to be licensed under the terms of the GNU
 * General Public License (GPL) Version 2, available from the file
 * COPYING in the main directory of this source tree, or the
 * OpenIB.org BSD license below:
 *
 *     Redistribution and use in source and binary forms, with or
 *     without modification, are permitted provided that the following
 *     conditions are met:
 *
 *      - Redistributions of source code must retain the above
 *        copyright notice, this list of conditions and the following
 *        disclaimer.
 *
 *      - Redistributions in binary form must reproduce the above
 *        copyright notice, this list of conditions and the following
 *        disclaimer in the documentation and/or other materials
 *        provided with the distribution.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/sched.h>
#include <linux/slab.h>

#include <asm/page.h>

#include "mthca_memfree.h"
#include "mthca_dev.h"
#include "mthca_cmd.h"

/*
 * We allocate in as big chunks as we can, up to a maximum of 256 KB
 * per chunk.
 */
enum {
	MTHCA_ICM_ALLOC_SIZE   = 1 << 18,
	MTHCA_TABLE_CHUNK_SIZE = 1 << 18
};

struct mthca_user_db_table {
	struct mutex mutex;
	struct {
		u64                uvirt;
		struct scatterlist mem;
		int                refcount;
	} page[0];
};

static void mthca_free_icm_pages(struct mthca_dev *dev, struct mthca_icm_chunk *chunk)
{
	int i;

	if (chunk->nsg > 0)
		pci_unmap_sg(dev->pdev, chunk->mem, chunk->npages,
			     PCI_DMA_BIDIRECTIONAL);

	for (i = 0; i < chunk->npages; ++i)
		__free_pages(sg_page(&chunk->mem[i]),
			     get_order(chunk->mem[i].length));
}

static void mthca_free_icm_coherent(struct mthca_dev *dev, struct mthca_icm_chunk *chunk)
{
	int i;

	for (i = 0; i < chunk->npages; ++i) {
		dma_free_coherent(&dev->pdev->dev, chunk->mem[i].length,
				  lowmem_page_address(sg_page(&chunk->mem[i])),
				  sg_dma_address(&chunk->mem[i]));
	}
}

void mthca_free_icm(struct mthca_dev *dev, struct mthca_icm *icm, int coherent)
{
	struct mthca_icm_chunk *chunk, *tmp;

	if (!icm)
		return;

	list_for_each_entry_safe(chunk, tmp, &icm->chunk_list, list) {
		if (coherent)
			mthca_free_icm_coherent(dev, chunk);
		else
			mthca_free_icm_pages(dev, chunk);

		kfree(chunk);
	}

	kfree(icm);
}

static int mthca_alloc_icm_pages(struct scatterlist *mem, int order, gfp_t gfp_mask)
{
	struct page *page;

	/*
	 * Use __GFP_ZERO because buggy firmware assumes ICM pages are
	 * cleared, and subtle failures are seen if they aren't.
	 */
	page = alloc_pages(gfp_mask | __GFP_ZERO, order);
	if (!page)
		return -ENOMEM;

	sg_set_page(mem, page, PAGE_SIZE << order, 0);
	return 0;
}

static int mthca_alloc_icm_coherent(struct device *dev, struct scatterlist *mem,
				    int order, gfp_t gfp_mask)
{
	void *buf = dma_alloc_coherent(dev, PAGE_SIZE << order, &sg_dma_address(mem),
				       gfp_mask);
	if (!buf)
		return -ENOMEM;

	sg_set_buf(mem, buf, PAGE_SIZE << order);
	BUG_ON(mem->offset);
	sg_dma_len(mem) = PAGE_SIZE << order;
	return 0;
}

struct mthca_icm *mthca_alloc_icm(struct mthca_dev *dev, int npages,
				  gfp_t gfp_mask, int coherent)
{
	struct mthca_icm *icm;
	struct mthca_icm_chunk *chunk = NULL;
	int cur_order;
	int ret;

	/* We use sg_set_buf for coherent allocs, which assumes low memory */
	BUG_ON(coherent && (gfp_mask & __GFP_HIGHMEM));

	icm = kmalloc(sizeof *icm, gfp_mask & ~(__GFP_HIGHMEM | __GFP_NOWARN));
	if (!icm)
		return icm;

	icm->refcount = 0;
	INIT_LIST_HEAD(&icm->chunk_list);

	cur_order = get_order(MTHCA_ICM_ALLOC_SIZE);

	while (npages > 0) {
		if (!chunk) {
			chunk = kmalloc(sizeof *chunk,
					gfp_mask & ~(__GFP_HIGHMEM | __GFP_NOWARN));
			if (!chunk)
				goto fail;

			sg_init_table(chunk->mem, MTHCA_ICM_CHUNK_LEN);
			chunk->npages = 0;
			chunk->nsg    = 0;
			list_add_tail(&chunk->list, &icm->chunk_list);
		}

		while (1 << cur_order > npages)
			--cur_order;

		if (coherent)
			ret = mthca_alloc_icm_coherent(&dev->pdev->dev,
						       &chunk->mem[chunk->npages],
						       cur_order, gfp_mask);
		else
			ret = mthca_alloc_icm_pages(&chunk->mem[chunk->npages],
						    cur_order, gfp_mask);

		if (!ret) {
			++chunk->npages;

			if (coherent)
				++chunk->nsg;
			else if (chunk->npages == MTHCA_ICM_CHUNK_LEN) {
				chunk->nsg = pci_map_sg(dev->pdev, chunk->mem,
							chunk->npages,
							PCI_DMA_BIDIRECTIONAL);

				if (chunk->nsg <= 0)
					goto fail;
			}

			if (chunk->npages == MTHCA_ICM_CHUNK_LEN)
				chunk = NULL;

			npages -= 1 << cur_order;
		} else {
			--cur_order;
			if (cur_order < 0)
				goto fail;
		}
	}

	if (!coherent && chunk) {
		chunk->nsg = pci_map_sg(dev->pdev, chunk->mem,
					chunk->npages,
					PCI_DMA_BIDIRECTIONAL);

		if (chunk->nsg <= 0)
			goto fail;
	}

	return icm;

fail:
	mthca_free_icm(dev, icm, coherent);
	return NULL;
}

int mthca_table_get(struct mthca_dev *dev, struct mthca_icm_table *table, int obj)
{
	int i = (obj & (table->num_obj - 1)) * table->obj_size / MTHCA_TABLE_CHUNK_SIZE;
	int ret = 0;

	mutex_lock(&table->mutex);

	if (table->icm[i]) {
		++table->icm[i]->refcount;
		goto out;
	}

	table->icm[i] = mthca_alloc_icm(dev, MTHCA_TABLE_CHUNK_SIZE >> PAGE_SHIFT,
					(table->lowmem ? GFP_KERNEL : GFP_HIGHUSER) |
					__GFP_NOWARN, table->coherent);
	if (!table->icm[i]) {
		ret = -ENOMEM;
		goto out;
	}

	if (mthca_MAP_ICM(dev, table->icm[i],
			  table->virt + i * MTHCA_TABLE_CHUNK_SIZE)) {
		mthca_free_icm(dev, table->icm[i], table->coherent);
		table->icm[i] = NULL;
		ret = -ENOMEM;
		goto out;
	}

	++table->icm[i]->refcount;

out:
	mutex_unlock(&table->mutex);
	return ret;
}

void mthca_table_put(struct mthca_dev *dev, struct mthca_icm_table *table, int obj)
{
	int i;

	if (!mthca_is_memfree(dev))
		return;

	i = (obj & (table->num_obj - 1)) * table->obj_size / MTHCA_TABLE_CHUNK_SIZE;

	mutex_lock(&table->mutex);

	if (--table->icm[i]->refcount == 0) {
		mthca_UNMAP_ICM(dev, table->virt + i * MTHCA_TABLE_CHUNK_SIZE,
				MTHCA_TABLE_CHUNK_SIZE / MTHCA_ICM_PAGE_SIZE);
		mthca_free_icm(dev, table->icm[i], table->coherent);
		table->icm[i] = NULL;
	}

	mutex_unlock(&table->mutex);
}

void *mthca_table_find(struct mthca_icm_table *table, int obj, dma_addr_t *dma_handle)
{
	int idx, offset, dma_offset, i;
	struct mthca_icm_chunk *chunk;
	struct mthca_icm *icm;
	struct page *page = NULL;

	if (!table->lowmem)
		return NULL;

	mutex_lock(&table->mutex);

	idx = (obj & (table->num_obj - 1)) * table->obj_size;
	icm = table->icm[idx / MTHCA_TABLE_CHUNK_SIZE];
	dma_offset = offset = idx % MTHCA_TABLE_CHUNK_SIZE;

	if (!icm)
		goto out;

	list_for_each_entry(chunk, &icm->chunk_list, list) {
		for (i = 0; i < chunk->npages; ++i) {
			if (dma_handle && dma_offset >= 0) {
				if (sg_dma_len(&chunk->mem[i]) > dma_offset)
					*dma_handle = sg_dma_address(&chunk->mem[i]) +
						dma_offset;
				dma_offset -= sg_dma_len(&chunk->mem[i]);
			}
			/* DMA mapping can merge pages but not split them,
			 * so if we found the page, dma_handle has already
			 * been assigned to. */
			if (chunk->mem[i].length > offset) {
				page = sg_page(&chunk->mem[i]);
				goto out;
			}
			offset -= chunk->mem[i].length;
		}
	}

out:
	mutex_unlock(&table->mutex);
	return page ? lowmem_page_address(page) + offset : NULL;
}

int mthca_table_get_range(struct mthca_dev *dev, struct mthca_icm_table *table,
			  int start, int end)
{
	int inc = MTHCA_TABLE_CHUNK_SIZE / table->obj_size;
	int i, err;

	for (i = start; i <= end; i += inc) {
		err = mthca_table_get(dev, table, i);
		if (err)
			goto fail;
	}

	return 0;

fail:
	while (i > start) {
		i -= inc;
		mthca_table_put(dev, table, i);
	}

	return err;
}

void mthca_table_put_range(struct mthca_dev *dev, struct mthca_icm_table *table,
			   int start, int end)
{
	int i;

	if (!mthca_is_memfree(dev))
		return;

	for (i = start; i <= end; i += MTHCA_TABLE_CHUNK_SIZE / table->obj_size)
		mthca_table_put(dev, table, i);
}

struct mthca_icm_table *mthca_alloc_icm_table(struct mthca_dev *dev,
					      u64 virt, int obj_size,
					      int nobj, int reserved,
					      int use_lowmem, int use_coherent)
{
	struct mthca_icm_table *table;
	int obj_per_chunk;
	int num_icm;
	unsigned chunk_size;
	int i;

	obj_per_chunk = MTHCA_TABLE_CHUNK_SIZE / obj_size;
	num_icm = DIV_ROUND_UP(nobj, obj_per_chunk);

	table = kmalloc(struct_size(table, icm, num_icm), GFP_KERNEL);
	if (!table)
		return NULL;

	table->virt     = virt;
	table->num_icm  = num_icm;
	table->num_obj  = nobj;
	table->obj_size = obj_size;
	table->lowmem   = use_lowmem;
	table->coherent = use_coherent;
	mutex_init(&table->mutex);

	for (i = 0; i < num_icm; ++i)
		table->icm[i] = NULL;

	for (i = 0; i * MTHCA_TABLE_CHUNK_SIZE < reserved * obj_size; ++i) {
		chunk_size = MTHCA_TABLE_CHUNK_SIZE;
		if ((i + 1) * MTHCA_TABLE_CHUNK_SIZE > nobj * obj_size)
			chunk_size = nobj * obj_size - i * MTHCA_TABLE_CHUNK_SIZE;

		table->icm[i] = mthca_alloc_icm(dev, chunk_size >> PAGE_SHIFT,
						(use_lowmem ? GFP_KERNEL : GFP_HIGHUSER) |
						__GFP_NOWARN, use_coherent);
		if (!table->icm[i])
			goto err;
		if (mthca_MAP_ICM(dev, table->icm[i],
				  virt + i * MTHCA_TABLE_CHUNK_SIZE)) {
			mthca_free_icm(dev, table->icm[i], table->coherent);
			table->icm[i] = NULL;
			goto err;
		}

		/*
		 * Add a reference to this ICM chunk so that it never
		 * gets freed (since it contains reserved firmware objects).
		 */
		++table->icm[i]->refcount;
	}

	return table;

err:
	for (i = 0; i < num_icm; ++i)
		if (table->icm[i]) {
			mthca_UNMAP_ICM(dev, virt + i * MTHCA_TABLE_CHUNK_SIZE,
					MTHCA_TABLE_CHUNK_SIZE / MTHCA_ICM_PAGE_SIZE);
			mthca_free_icm(dev, table->icm[i], table->coherent);
		}

	kfree(table);

	return NULL;
}

void mthca_free_icm_table(struct mthca_dev *dev, struct mthca_icm_table *table)
{
	int i;

	for (i = 0; i < table->num_icm; ++i)
		if (table->icm[i]) {
			mthca_UNMAP_ICM(dev,
					table->virt + i * MTHCA_TABLE_CHUNK_SIZE,
					MTHCA_TABLE_CHUNK_SIZE / MTHCA_ICM_PAGE_SIZE);
			mthca_free_icm(dev, table->icm[i], table->coherent);
		}

	kfree(table);
}

static u64 mthca_uarc_virt(struct mthca_dev *dev, struct mthca_uar *uar, int page)
{
	return dev->uar_table.uarc_base +
		uar->index * dev->uar_table.uarc_size +
		page * MTHCA_ICM_PAGE_SIZE;
}

int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar,
		      struct mthca_user_db_table *db_tab, int index, u64 uaddr)
{
	struct page *pages[1];
	int ret = 0;
	int i;

	if (!mthca_is_memfree(dev))
		return 0;

	if (index < 0 || index > dev->uar_table.uarc_size / 8)
		return -EINVAL;

	mutex_lock(&db_tab->mutex);

	i = index / MTHCA_DB_REC_PER_PAGE;

	if ((db_tab->page[i].refcount >= MTHCA_DB_REC_PER_PAGE) ||
	    (db_tab->page[i].uvirt && db_tab->page[i].uvirt != uaddr) ||
	    (uaddr & 4095)) {
		ret = -EINVAL;
		goto out;
	}

	if (db_tab->page[i].refcount) {
		++db_tab->page[i].refcount;
		goto out;
	}

	ret = get_user_pages_fast(uaddr & PAGE_MASK, 1,
				  FOLL_WRITE | FOLL_LONGTERM, pages);
	if (ret < 0)
		goto out;

	sg_set_page(&db_tab->page[i].mem, pages[0], MTHCA_ICM_PAGE_SIZE,
		    uaddr & ~PAGE_MASK);

	ret = pci_map_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
	if (ret < 0) {
		put_user_page(pages[0]);
		goto out;
	}

	ret = mthca_MAP_ICM_page(dev, sg_dma_address(&db_tab->page[i].mem),
				 mthca_uarc_virt(dev, uar, i));
	if (ret) {
		pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
		put_user_page(sg_page(&db_tab->page[i].mem));
		goto out;
	}

	db_tab->page[i].uvirt    = uaddr;
	db_tab->page[i].refcount = 1;

out:
	mutex_unlock(&db_tab->mutex);
	return ret;
}

void mthca_unmap_user_db(struct mthca_dev *dev, struct mthca_uar *uar,
			 struct mthca_user_db_table *db_tab, int index)
{
	if (!mthca_is_memfree(dev))
		return;

	/*
	 * To make our bookkeeping simpler, we don't unmap DB
	 * pages until we clean up the whole db table.
	 */

	mutex_lock(&db_tab->mutex);

	--db_tab->page[index / MTHCA_DB_REC_PER_PAGE].refcount;

	mutex_unlock(&db_tab->mutex);
}

struct mthca_user_db_table *mthca_init_user_db_tab(struct mthca_dev *dev)
|
|
|
|
{
|
|
|
|
struct mthca_user_db_table *db_tab;
|
|
|
|
int npages;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
if (!mthca_is_memfree(dev))
|
|
|
|
return NULL;
|
|
|
|
|
2006-03-02 09:33:11 +03:00
|
|
|
npages = dev->uar_table.uarc_size / MTHCA_ICM_PAGE_SIZE;
|
treewide: Use struct_size() for kmalloc()-family
One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:
struct foo {
int stuff;
void *entry[];
};
instance = kmalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);
Instead of leaving these open-coded and prone to type mistakes, we can
now use the new struct_size() helper:
instance = kmalloc(struct_size(instance, entry, count), GFP_KERNEL);
This patch makes the changes for kmalloc()-family (and kvmalloc()-family)
uses. It was done via automatic conversion with manual review for the
"CHECKME" non-standard cases noted below, using the following Coccinelle
script:
// pkey_cache = kmalloc(sizeof *pkey_cache + tprops->pkey_tbl_len *
// sizeof *pkey_cache->table, GFP_KERNEL);
@@
identifier alloc =~ "kmalloc|kzalloc|kvmalloc|kvzalloc";
expression GFP;
identifier VAR, ELEMENT;
expression COUNT;
@@
- alloc(sizeof(*VAR) + COUNT * sizeof(*VAR->ELEMENT), GFP)
+ alloc(struct_size(VAR, ELEMENT, COUNT), GFP)
// mr = kzalloc(sizeof(*mr) + m * sizeof(mr->map[0]), GFP_KERNEL);
@@
identifier alloc =~ "kmalloc|kzalloc|kvmalloc|kvzalloc";
expression GFP;
identifier VAR, ELEMENT;
expression COUNT;
@@
- alloc(sizeof(*VAR) + COUNT * sizeof(VAR->ELEMENT[0]), GFP)
+ alloc(struct_size(VAR, ELEMENT, COUNT), GFP)
// Same pattern, but can't trivially locate the trailing element name,
// or variable name.
@@
identifier alloc =~ "kmalloc|kzalloc|kvmalloc|kvzalloc";
expression GFP;
expression SOMETHING, COUNT, ELEMENT;
@@
- alloc(sizeof(SOMETHING) + COUNT * sizeof(ELEMENT), GFP)
+ alloc(CHECKME_struct_size(&SOMETHING, ELEMENT, COUNT), GFP)
Signed-off-by: Kees Cook <keescook@chromium.org>
2018-05-08 23:45:50 +03:00
|
|
|
	db_tab = kmalloc(struct_size(db_tab, page, npages), GFP_KERNEL);
	if (!db_tab)
		return ERR_PTR(-ENOMEM);

	mutex_init(&db_tab->mutex);
	for (i = 0; i < npages; ++i) {
		db_tab->page[i].refcount = 0;
		db_tab->page[i].uvirt    = 0;
		sg_init_table(&db_tab->page[i].mem, 1);
	}

	return db_tab;
}

void mthca_cleanup_user_db_tab(struct mthca_dev *dev, struct mthca_uar *uar,
			       struct mthca_user_db_table *db_tab)
{
	int i;

	if (!mthca_is_memfree(dev))
		return;

	for (i = 0; i < dev->uar_table.uarc_size / MTHCA_ICM_PAGE_SIZE; ++i) {
		if (db_tab->page[i].uvirt) {
			mthca_UNMAP_ICM(dev, mthca_uarc_virt(dev, uar, i), 1);
			pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
			put_user_page(sg_page(&db_tab->page[i].mem));
		}
	}

	kfree(db_tab);
}

int mthca_alloc_db(struct mthca_dev *dev, enum mthca_db_type type,
		   u32 qn, __be32 **db)
{
	int group;
	int start, end, dir;
	int i, j;
	struct mthca_db_page *page;
	int ret = 0;

	mutex_lock(&dev->db_tab->mutex);

	switch (type) {
	case MTHCA_DB_TYPE_CQ_ARM:
	case MTHCA_DB_TYPE_SQ:
		group = 0;
		start = 0;
		end   = dev->db_tab->max_group1;
		dir   = 1;
		break;

	case MTHCA_DB_TYPE_CQ_SET_CI:
	case MTHCA_DB_TYPE_RQ:
	case MTHCA_DB_TYPE_SRQ:
		group = 1;
		start = dev->db_tab->npages - 1;
		end   = dev->db_tab->min_group2;
		dir   = -1;
		break;

	default:
		ret = -EINVAL;
		goto out;
	}

	for (i = start; i != end; i += dir)
		if (dev->db_tab->page[i].db_rec &&
		    !bitmap_full(dev->db_tab->page[i].used,
				 MTHCA_DB_REC_PER_PAGE)) {
			page = dev->db_tab->page + i;
			goto found;
		}

	for (i = start; i != end; i += dir)
		if (!dev->db_tab->page[i].db_rec) {
			page = dev->db_tab->page + i;
			goto alloc;
		}

	if (dev->db_tab->max_group1 >= dev->db_tab->min_group2 - 1) {
		ret = -ENOMEM;
		goto out;
	}

	if (group == 0)
		++dev->db_tab->max_group1;
	else
		--dev->db_tab->min_group2;

	page = dev->db_tab->page + end;

alloc:
	page->db_rec = dma_alloc_coherent(&dev->pdev->dev,
					  MTHCA_ICM_PAGE_SIZE, &page->mapping,
					  GFP_KERNEL);
	if (!page->db_rec) {
		ret = -ENOMEM;
		goto out;
	}

	ret = mthca_MAP_ICM_page(dev, page->mapping,
				 mthca_uarc_virt(dev, &dev->driver_uar, i));
	if (ret) {
		dma_free_coherent(&dev->pdev->dev, MTHCA_ICM_PAGE_SIZE,
				  page->db_rec, page->mapping);
		goto out;
	}

	bitmap_zero(page->used, MTHCA_DB_REC_PER_PAGE);

found:
	j = find_first_zero_bit(page->used, MTHCA_DB_REC_PER_PAGE);
	set_bit(j, page->used);

	if (group == 1)
		j = MTHCA_DB_REC_PER_PAGE - 1 - j;

	ret = i * MTHCA_DB_REC_PER_PAGE + j;

	page->db_rec[j] = cpu_to_be64((qn << 8) | (type << 5));

	*db = (__be32 *) &page->db_rec[j];

out:
	mutex_unlock(&dev->db_tab->mutex);

	return ret;
}

void mthca_free_db(struct mthca_dev *dev, int type, int db_index)
{
	int i, j;
	struct mthca_db_page *page;

	i = db_index / MTHCA_DB_REC_PER_PAGE;
	j = db_index % MTHCA_DB_REC_PER_PAGE;

	page = dev->db_tab->page + i;

	mutex_lock(&dev->db_tab->mutex);

	page->db_rec[j] = 0;
	if (i >= dev->db_tab->min_group2)
		j = MTHCA_DB_REC_PER_PAGE - 1 - j;
	clear_bit(j, page->used);

	if (bitmap_empty(page->used, MTHCA_DB_REC_PER_PAGE) &&
	    i >= dev->db_tab->max_group1 - 1) {
		mthca_UNMAP_ICM(dev, mthca_uarc_virt(dev, &dev->driver_uar, i), 1);

		dma_free_coherent(&dev->pdev->dev, MTHCA_ICM_PAGE_SIZE,
				  page->db_rec, page->mapping);
		page->db_rec = NULL;

		if (i == dev->db_tab->max_group1) {
			--dev->db_tab->max_group1;
			/* XXX may be able to unmap more pages now */
		}
		if (i == dev->db_tab->min_group2)
			++dev->db_tab->min_group2;
	}

	mutex_unlock(&dev->db_tab->mutex);
}

int mthca_init_db_tab(struct mthca_dev *dev)
{
	int i;

	if (!mthca_is_memfree(dev))
		return 0;

	dev->db_tab = kmalloc(sizeof *dev->db_tab, GFP_KERNEL);
	if (!dev->db_tab)
		return -ENOMEM;

	mutex_init(&dev->db_tab->mutex);

	dev->db_tab->npages     = dev->uar_table.uarc_size / MTHCA_ICM_PAGE_SIZE;
	dev->db_tab->max_group1 = 0;
	dev->db_tab->min_group2 = dev->db_tab->npages - 1;

	dev->db_tab->page = kmalloc_array(dev->db_tab->npages,
					  sizeof(*dev->db_tab->page),
					  GFP_KERNEL);
	if (!dev->db_tab->page) {
		kfree(dev->db_tab);
		return -ENOMEM;
	}

	for (i = 0; i < dev->db_tab->npages; ++i)
		dev->db_tab->page[i].db_rec = NULL;

	return 0;
}

void mthca_cleanup_db_tab(struct mthca_dev *dev)
{
	int i;

	if (!mthca_is_memfree(dev))
		return;

	/*
	 * Because we don't always free our UARC pages when they
	 * become empty to make mthca_free_db() simpler we need to
	 * make a sweep through the doorbell pages and free any
	 * leftover pages now.
	 */
	for (i = 0; i < dev->db_tab->npages; ++i) {
		if (!dev->db_tab->page[i].db_rec)
			continue;

		if (!bitmap_empty(dev->db_tab->page[i].used, MTHCA_DB_REC_PER_PAGE))
			mthca_warn(dev, "Kernel UARC page %d not empty\n", i);

		mthca_UNMAP_ICM(dev, mthca_uarc_virt(dev, &dev->driver_uar, i), 1);

		dma_free_coherent(&dev->pdev->dev, MTHCA_ICM_PAGE_SIZE,
				  dev->db_tab->page[i].db_rec,
				  dev->db_tab->page[i].mapping);
	}

	kfree(dev->db_tab->page);
	kfree(dev->db_tab);
}