RDMA/efa: Use ib_umem_num_dma_pages()

If ib_umem_find_best_pgsz() returns > PAGE_SIZE then the equation here is
not correct. 'start' should be 'virt'. Change it to use the core code for
page_num and the canonical calculation of page_shift.

Fixes: 40ddb3f020 ("RDMA/efa: Use API to get contiguous memory blocks aligned to device supported page size")
Link: https://lore.kernel.org/r/7-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Tested-by: Gal Pressman <galpress@amazon.com>
Acked-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
This commit is contained in:
Jason Gunthorpe 2020-09-04 19:41:48 -03:00
Parent a665aca89a
Commit 1f9b6827c8
1 changed file with 3 additions and 3 deletions


@@ -4,6 +4,7 @@
  */
 #include <linux/vmalloc.h>
+#include <linux/log2.h>
 #include <rdma/ib_addr.h>
 #include <rdma/ib_umem.h>
@@ -1540,9 +1541,8 @@ struct ib_mr *efa_reg_mr(struct ib_pd *ibpd, u64 start, u64 length,
 		goto err_unmap;
 	}
-	params.page_shift = __ffs(pg_sz);
-	params.page_num = DIV_ROUND_UP(length + (start & (pg_sz - 1)),
-				       pg_sz);
+	params.page_shift = order_base_2(pg_sz);
+	params.page_num = ib_umem_num_dma_blocks(mr->umem, pg_sz);
 	ibdev_dbg(&dev->ibdev,
 		  "start %#llx length %#llx params.page_shift %u params.page_num %u\n",