nvme-rdma: fix in-capsule data send for chained sgls

We have only 2 inline sg entries, but allow 4 sg entries for the send
wr sge; sgls with more entries will be chained. However, when we build
the in-capsule send wr sge, we iterate with plain pointer arithmetic,
without taking into account that the sgl may be chained and still fit
in-capsule (which can happen when the sgl has more than 2 entries but
no more than 4).

Fix in-capsule data mapping to correctly iterate chained sgls.

Fixes: 38e1800275 ("nvme-rdma: Avoid preallocating big SGL for data")
Reported-by: Walker, Benjamin <benjamin.walker@intel.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Author: Sagi Grimberg <sagi@grimberg.me> 2021-05-27 18:16:38 -07:00
Committed by: Christoph Hellwig
Parent: a4b58f1721
Commit: 12b2aaadb6
1 changed file: 3 additions, 2 deletions


@@ -1320,16 +1320,17 @@ static int nvme_rdma_map_sg_inline(struct nvme_rdma_queue *queue,
 		int count)
 {
 	struct nvme_sgl_desc *sg = &c->common.dptr.sgl;
-	struct scatterlist *sgl = req->data_sgl.sg_table.sgl;
 	struct ib_sge *sge = &req->sge[1];
+	struct scatterlist *sgl;
 	u32 len = 0;
 	int i;
 
-	for (i = 0; i < count; i++, sgl++, sge++) {
+	for_each_sg(req->data_sgl.sg_table.sgl, sgl, count, i) {
 		sge->addr = sg_dma_address(sgl);
 		sge->length = sg_dma_len(sgl);
 		sge->lkey = queue->device->pd->local_dma_lkey;
 		len += sge->length;
+		sge++;
 	}
 
 	sg->addr = cpu_to_le64(queue->ctrl->ctrl.icdoff);