WSL2-Linux-Kernel/kernel/dma
Robin Murphy 4a37f3dd9a dma-direct: don't over-decrypt memory
The original x86 sev_alloc() only called set_memory_decrypted() on
memory returned by alloc_pages_node(), so the page order calculation
fell out of that logic. However, the common dma-direct code has several
potential allocators, not all of which are guaranteed to round up the
underlying allocation to a power-of-two size, so carrying over that
calculation for the encryption/decryption size was a mistake. Fix it by
rounding to a *number* of pages, rather than an order.

Until recently there was an even worse interaction with DMA_DIRECT_REMAP
where we could have ended up decrypting part of the next adjacent
vmalloc area, only averted by no architecture actually supporting both
configs at once. Don't ask how I found that one out...

Fixes: c10f07aa27 ("dma/direct: Handle force decryption for DMA coherent buffers in common code")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: David Rientjes <rientjes@google.com>
2022-05-23 15:25:40 +02:00
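
A minimal user-space sketch (not the kernel patch itself) of the arithmetic the commit describes: rounding a buffer size up to a power-of-two order can cover more pages than the allocation actually spans, while rounding to a number of pages matches it exactly. PAGE_SHIFT, PAGE_ALIGN() and the get_order() re-implementation below are illustrative stand-ins for the kernel definitions, and the page-count expression is an assumed form of the fix.

/* Compare "decrypt 2^order pages" with "decrypt PAGE_ALIGN(size) pages". */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

/* Same semantics as the kernel's get_order(): pages rounded up to a power of two. */
static unsigned int get_order(unsigned long size)
{
	unsigned int order = 0;

	size = (size - 1) >> PAGE_SHIFT;
	while (size) {
		order++;
		size >>= 1;
	}
	return order;
}

int main(void)
{
	/* e.g. a 5-page buffer from an allocator that does not round to powers of two */
	unsigned long size = 5 * PAGE_SIZE;

	unsigned long by_order = 1UL << get_order(size);         /* order-based: 8 pages */
	unsigned long by_pages = PAGE_ALIGN(size) >> PAGE_SHIFT; /* page-count:  5 pages */

	printf("decrypted by order:      %lu pages\n", by_order);
	printf("decrypted by page count: %lu pages\n", by_pages);
	return 0;
}

For the 5-page example the order-based calculation would have decrypted three pages past the end of the buffer, which is the over-decryption (and, with DMA_DIRECT_REMAP, the adjacent-vmalloc exposure) the commit message refers to.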
Kconfig
Makefile
coherent.c
contiguous.c
debug.c dma-debug: change allocation mode from GFP_NOWAIT to GFP_ATOMIC 2022-05-11 19:48:34 +02:00
debug.h
direct.c dma-direct: don't over-decrypt memory 2022-05-23 15:25:40 +02:00
direct.h dma-direct: use is_swiotlb_active in dma_direct_map_page 2022-04-18 07:21:08 +02:00
dummy.c
map_benchmark.c
mapping.c dma-mapping: move pgprot_decrypted out of dma_pgprot 2022-04-01 06:46:51 +02:00
ops_helpers.c
pool.c
remap.c
swiotlb.c swiotlb: max mapping size takes min align mask into account 2022-05-17 11:21:52 +02:00