intel-iommu: Defer the iotlb flush and iova free for intel_unmap_sg() too.

I see no reason why we did this _only_ in intel_unmap_page().

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
David Woodhouse 2009-07-14 01:55:11 +01:00
Parent 3d39cecc48
Commit acea0018a2
1 changed file with 12 additions and 5 deletions


@@ -2815,11 +2815,18 @@ static void intel_unmap_sg(struct device *hwdev, struct scatterlist *sglist,
 	/* free page tables */
 	dma_pte_free_pagetable(domain, start_pfn, last_pfn);
 
-	iommu_flush_iotlb_psi(iommu, domain->id, start_pfn,
-			      (last_pfn - start_pfn + 1));
-
-	/* free iova */
-	__free_iova(&domain->iovad, iova);
+	if (intel_iommu_strict) {
+		iommu_flush_iotlb_psi(iommu, domain->id, start_pfn,
+				      last_pfn - start_pfn + 1);
+		/* free iova */
+		__free_iova(&domain->iovad, iova);
+	} else {
+		add_unmap(domain, iova);
+		/*
+		 * queue up the release of the unmap to save the 1/6th of the
+		 * cpu used up by the iotlb flush operation...
+		 */
+	}
 }
 
 static int intel_nontranslate_map_sg(struct device *hddev,
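
For context, here is a minimal user-space sketch of the deferral pattern this patch extends to intel_unmap_sg(): rather than flushing the IOTLB and freeing the IOVA on every unmap, non-strict mode queues the unmap and releases a whole batch with a single flush. The names below (unmap_range, flush_unmaps_sketch, DEFERRED_MAX) are illustrative stand-ins, not the driver's real helpers; only intel_iommu_strict, add_unmap(), iommu_flush_iotlb_psi() and __free_iova() from the diff above are actual intel-iommu identifiers.

/*
 * Illustrative sketch only -- NOT kernel code. It mimics the idea behind
 * the non-strict path: queue unmaps and amortise one IOTLB flush over
 * many of them, instead of paying for a flush per unmap.
 */
#include <stdio.h>
#include <stdbool.h>

#define DEFERRED_MAX 16                    /* arbitrary batch size for the sketch */

struct iova_range { unsigned long pfn_lo, pfn_hi; };

static struct iova_range deferred[DEFERRED_MAX];
static int deferred_count;
static bool strict_mode;                   /* analogue of intel_iommu_strict */

static void flush_iotlb(void)              /* stand-in for iommu_flush_iotlb_psi() */
{
	printf("IOTLB flush covering %d deferred unmaps\n", deferred_count ? deferred_count : 1);
}

static void free_iova_range(struct iova_range *iova)   /* stand-in for __free_iova() */
{
	printf("freeing IOVA pfns %lu-%lu\n", iova->pfn_lo, iova->pfn_hi);
}

/* Flush once, then release every queued IOVA range. */
static void flush_unmaps_sketch(void)
{
	int i;

	if (!deferred_count)
		return;
	flush_iotlb();
	for (i = 0; i < deferred_count; i++)
		free_iova_range(&deferred[i]);
	deferred_count = 0;
}

/* Rough analogue of the unmap tail after this patch. */
static void unmap_range(struct iova_range *iova)
{
	if (strict_mode) {
		/* strict: flush and free immediately, as the old code always did */
		flush_iotlb();
		free_iova_range(iova);
	} else {
		/* deferred: queue it; flush when the batch fills up */
		deferred[deferred_count++] = *iova;
		if (deferred_count == DEFERRED_MAX)
			flush_unmaps_sketch();
	}
}

int main(void)
{
	struct iova_range a = { 100, 103 }, b = { 200, 207 };

	unmap_range(&a);
	unmap_range(&b);
	flush_unmaps_sketch();   /* the real driver also drains the queue asynchronously */
	return 0;
}

With this change, intel_unmap_sg() follows the same strict/deferred split as intel_unmap_page(): strict mode keeps the old flush-and-free-immediately behaviour, while the default path hands the IOVA to add_unmap() and lets the batched flush reclaim it later.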