Merge branch 'ibmvnic-Bug-fixes-for-queue-descriptor-processing'

Thomas Falcon says:

====================
ibmvnic: Bug fixes for queue descriptor processing

This series resolves a few issues in the ibmvnic driver's
RX buffer and TX completion processing. The first patch
includes memory barriers to synchronize queue descriptor
reads. The second patch fixes a memory leak that could
occur if the device returns a TX completion with an error
code in the descriptor, in which case the respective socket
buffer and other relevant data structures may not be freed
or updated properly.

v3: Correct length of Fixes tags, requested by Jakub Kicinski

v2: Provide more detailed comments explaining specifically what
    reads are being ordered, suggested by Michael Ellerman
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller 2020-12-01 10:09:04 -08:00
Parents 237f977ab9 ba246c1751
Commit de7b3f8164
1 changed file with 19 additions and 3 deletions


@@ -2404,6 +2404,12 @@ restart_poll:
 
 		if (!pending_scrq(adapter, adapter->rx_scrq[scrq_num]))
 			break;
+		/* The queue entry at the current index is peeked at above
+		 * to determine that there is a valid descriptor awaiting
+		 * processing. We want to be sure that the current slot
+		 * holds a valid descriptor before reading its contents.
+		 */
+		dma_rmb();
 		next = ibmvnic_next_scrq(adapter, adapter->rx_scrq[scrq_num]);
 		rx_buff =
 			(struct ibmvnic_rx_buff *)be64_to_cpu(next->
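The dma_rmb() added here addresses a read-ordering hazard: pending_scrq() only peeks at the descriptor slot to see that something is waiting, and without a barrier the CPU may satisfy later loads from the descriptor body before (or independently of) that check, so the driver could act on a half-written entry. A minimal consumer-side sketch of the pattern, with illustrative names (struct ring_desc, DESC_VALID and handle() are placeholders, not ibmvnic identifiers):

	struct ring_desc {
		u8 flags;		/* device writes DESC_VALID last */
		u8 payload[64];
	};

	static bool consume_one(struct ring_desc *desc)
	{
		if (!(READ_ONCE(desc->flags) & DESC_VALID))
			return false;	/* nothing pending yet */
		/* Order the validity check above before the payload loads
		 * below, so we never read a body the device is still writing.
		 */
		dma_rmb();
		handle(desc->payload);
		return true;
	}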
@@ -3113,13 +3119,18 @@ restart_loop:
 		unsigned int pool = scrq->pool_index;
 		int num_entries = 0;
 
+		/* The queue entry at the current index is peeked at above
+		 * to determine that there is a valid descriptor awaiting
+		 * processing. We want to be sure that the current slot
+		 * holds a valid descriptor before reading its contents.
+		 */
+		dma_rmb();
+
 		next = ibmvnic_next_scrq(adapter, scrq);
 		for (i = 0; i < next->tx_comp.num_comps; i++) {
-			if (next->tx_comp.rcs[i]) {
+			if (next->tx_comp.rcs[i])
 				dev_err(dev, "tx error %x\n",
 					next->tx_comp.rcs[i]);
-				continue;
-			}
 			index = be32_to_cpu(next->tx_comp.correlators[i]);
 			if (index & IBMVNIC_TSO_POOL_MASK) {
 				tx_pool = &adapter->tso_pool[pool];
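The change to the rcs[] check is the memory-leak fix from the cover letter: previously an error return code caused a continue, skipping the bookkeeping that unmaps and frees the socket buffer named by the completion's correlator. Now the error is still logged, but the loop falls through to the normal cleanup. A condensed before/after sketch, with illustrative stand-ins (comp and release_tx_buff() are not the driver's actual identifiers):

	/* before: the error path skipped cleanup, leaking the skb */
	if (comp->rc) {
		dev_err(dev, "tx error %x\n", comp->rc);
		continue;
	}
	release_tx_buff(comp->correlator);

	/* after: log the error, but still release the completed buffer */
	if (comp->rc)
		dev_err(dev, "tx error %x\n", comp->rc);
	release_tx_buff(comp->correlator);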
@@ -3513,6 +3524,11 @@ static union sub_crq *ibmvnic_next_scrq(struct ibmvnic_adapter *adapter,
 	}
 	spin_unlock_irqrestore(&scrq->lock, flags);
 
+	/* Ensure that the entire buffer descriptor has been
+	 * loaded before reading its contents
+	 */
+	dma_rmb();
+
 	return entry;
 }
 
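Placing one dma_rmb() inside ibmvnic_next_scrq() covers every caller that dereferences the returned entry; the two hunks above are still needed because ibmvnic_poll() and the TX completion loop first peek at the ring via pending_scrq() and read descriptor fields on the strength of that peek. Conceptually, the driver-side read barrier pairs with the publisher's write ordering (descriptor body first, validity marker last); dma_rmb() is the barrier documented for ordering CPU reads of descriptor memory written by a device and may be lighter than a full rmb(). An illustrative pairing, not ibmvnic-specific:

	/*
	 *   producer (device/hypervisor)      consumer (driver)
	 *   ----------------------------      -----------------
	 *   write descriptor body             read validity marker
	 *   write barrier                     dma_rmb()
	 *   write validity marker             read descriptor body
	 */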