iommu/iova: Improve 32-bit free space estimate
For various reasons based on the allocator behaviour and typical use-cases at the time, when the max32_alloc_size optimisation was introduced it seemed reasonable to couple the reset of the tracked size to the update of cached32_node upon freeing a relevant IOVA. However, since subsequent optimisations focused on helping genuine 32-bit devices make best use of even more limited address spaces, it is now a lot more likely for cached32_node to be anywhere in a "full" 32-bit address space, and as such more likely for space to become available from IOVAs below that node being freed.

At this point, the short-cut in __cached_rbnode_delete_update() really doesn't hold up any more, and we need to fix the logic to reliably provide the expected behaviour. We still want cached32_node to only move upwards, but we should reset the allocation size if *any* 32-bit space has become available.

Reported-by: Yunfei Wang <yf.wang@mediatek.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Miles Chen <miles.chen@mediatek.com>
Link: https://lore.kernel.org/r/033815732d83ca73b13c11485ac39336f15c3b40.1646318408.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Parent: 9a630a4b41
Commit: 5b61343b50
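For context (this is the consumer of the estimate, not part of the hunk below): max32_alloc_size caches the size at which a 32-bit allocation last failed, so the allocator can fail fast instead of re-walking the rbtree. In __alloc_and_insert_iova_range() the check reads roughly as follows around this kernel version:

	/*
	 * Fail fast: a previous allocation below dma_32bit_pfn of this
	 * size (or larger) already failed, and nothing relevant has been
	 * freed since, so walking the rbtree cannot succeed either.
	 */
	if (limit_pfn <= iovad->dma_32bit_pfn &&
			size >= iovad->max32_alloc_size)
		goto iova32_full;

Hence the bug being fixed: if freeing a 32-bit IOVA fails to reset max32_alloc_size, later requests keep taking this early-out even though usable 32-bit space has reappeared.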
@@ -94,10 +94,11 @@ __cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
 	cached_iova = to_iova(iovad->cached32_node);
 	if (free == cached_iova ||
 	    (free->pfn_hi < iovad->dma_32bit_pfn &&
-	     free->pfn_lo >= cached_iova->pfn_lo)) {
+	     free->pfn_lo >= cached_iova->pfn_lo))
 		iovad->cached32_node = rb_next(&free->node);
+
+	if (free->pfn_lo < iovad->dma_32bit_pfn)
 		iovad->max32_alloc_size = iovad->dma_32bit_pfn;
-	}
 
 	cached_iova = to_iova(iovad->cached_node);
 	if (free->pfn_lo >= cached_iova->pfn_lo)
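With the hunk applied, the whole helper reads as below; a sketch reconstructed from the diff and its context lines, assuming the surrounding definitions in drivers/iommu/iova.c (struct iova_domain, to_iova()):

static void
__cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
{
	struct iova *cached_iova;

	/* cached32_node still only ever moves upwards... */
	cached_iova = to_iova(iovad->cached32_node);
	if (free == cached_iova ||
	    (free->pfn_hi < iovad->dma_32bit_pfn &&
	     free->pfn_lo >= cached_iova->pfn_lo))
		iovad->cached32_node = rb_next(&free->node);

	/*
	 * ...but the size reset is now decoupled from that update: any
	 * free below dma_32bit_pfn means 32-bit space may be available,
	 * including from IOVAs below the cached node.
	 */
	if (free->pfn_lo < iovad->dma_32bit_pfn)
		iovad->max32_alloc_size = iovad->dma_32bit_pfn;

	cached_iova = to_iova(iovad->cached_node);
	if (free->pfn_lo >= cached_iova->pfn_lo)
		iovad->cached_node = rb_next(&free->node);
}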