Reinstate some of "swiotlb: rework "fix info leak with DMA_FROM_DEVICE""

commit 901c7280ca upstream.

Halil Pasic points out [1] that, rather than the full revert of that
commit (revert in bddac7c1e0), a partial revert that only reverts the
problematic case but still keeps some of the cleanups is probably
better.  And that partial revert [2] had already been verified by
Oleksandr Natalenko [3] to also fix the issue; I had just missed that
in the long discussion.

So let's reinstate the cleanups from commit aa6f8dcbab ("swiotlb:
rework "fix info leak with DMA_FROM_DEVICE"") and effectively only
revert the part that caused problems.

Link: https://lore.kernel.org/all/20220328013731.017ae3e3.pasic@linux.ibm.com/ [1]
Link: https://lore.kernel.org/all/20220324055732.GB12078@lst.de/ [2]
Link: https://lore.kernel.org/all/4386660.LvFx2qVVIh@natalenko.name/ [3]
Suggested-by: Halil Pasic <pasic@linux.ibm.com>
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Parent: 63351e2e13
Commit: 7007c89463
--- a/Documentation/core-api/dma-attributes.rst
+++ b/Documentation/core-api/dma-attributes.rst
@@ -130,11 +130,3 @@ accesses to DMA buffers in both privileged "supervisor" and unprivileged
 subsystem that the buffer is fully accessible at the elevated privilege
 level (and ideally inaccessible or at least read-only at the
 lesser-privileged levels).
-
-DMA_ATTR_OVERWRITE
-------------------
-
-This is a hint to the DMA-mapping subsystem that the device is expected to
-overwrite the entire mapped size, thus the caller does not require any of the
-previous buffer contents to be preserved. This allows bounce-buffering
-implementations to optimise DMA_FROM_DEVICE transfers.
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -61,14 +61,6 @@
  */
 #define DMA_ATTR_PRIVILEGED		(1UL << 9)
 
-/*
- * This is a hint to the DMA-mapping subsystem that the device is expected
- * to overwrite the entire mapped size, thus the caller does not require any
- * of the previous buffer contents to be preserved. This allows
- * bounce-buffering implementations to optimise DMA_FROM_DEVICE transfers.
- */
-#define DMA_ATTR_OVERWRITE		(1UL << 10)
-
 /*
  * A dma_addr_t can hold any valid DMA or bus address for the platform. It can
  * be given to a device to use as a DMA source or target. It is specific to a
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -578,10 +578,14 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	for (i = 0; i < nr_slots(alloc_size + offset); i++)
 		mem->slots[index + i].orig_addr = slot_addr(orig_addr, i);
 	tlb_addr = slot_addr(mem->start, index) + offset;
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE ||
-	    dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE);
+	/*
+	 * When dir == DMA_FROM_DEVICE we could omit the copy from the orig
+	 * to the tlb buffer, if we knew for sure the device will
+	 * overwrite the entire current content. But we don't. Thus
+	 * unconditional bounce may prevent leaking swiotlb content (i.e.
+	 * kernel memory) to user-space.
+	 */
+	swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE);
 	return tlb_addr;
 }