iommu/arm-smmu: Mask TLBI address correctly
The less said about "~12UL" the better. Oh dear. We get away with it
due to calling constraints that mean IOVAs are implicitly at least
page-aligned to begin with, but still; oh dear.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Parent: 5f9e832c13
Commit: 353b325047
@@ -504,7 +504,7 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
 		reg += leaf ? ARM_SMMU_CB_S1_TLBIVAL : ARM_SMMU_CB_S1_TLBIVA;
 
 		if (cfg->fmt != ARM_SMMU_CTX_FMT_AARCH64) {
-			iova &= ~12UL;
+			iova = (iova >> 12) << 12;
 			iova |= cfg->asid;
 			do {
 				writel_relaxed(iova, reg);
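For illustration, a minimal standalone sketch of why the old expression was wrong: ~12UL is the complement of 0b1100, so it clears only bits 2 and 3, whereas the intent was to strip the low 12 offset bits of a 4K page. The IOVA value below is a hypothetical example, not taken from the driver.

#include <stdio.h>

int main(void)
{
	unsigned long iova = 0x12345678abcUL;	/* hypothetical example IOVA */

	/* Old, buggy mask: ~12UL == ~0b1100, clearing only bits 2 and 3 */
	unsigned long buggy = iova & ~12UL;

	/* Fix as committed: shift the low 12 bits out and back in */
	unsigned long fixed = (iova >> 12) << 12;

	/* An equivalent mask form, for comparison */
	unsigned long masked = iova & ~0xfffUL;

	printf("buggy:  %#lx\n", buggy);	/* 0x12345678ab0 - offset bits survive */
	printf("fixed:  %#lx\n", fixed);	/* 0x12345678000 */
	printf("masked: %#lx\n", masked);	/* 0x12345678000 */
	return 0;
}

As the commit message notes, callers happen to pass page-aligned IOVAs, so the low bits were already zero and the wrong mask was harmless in practice; the fix corrects it regardless.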