iommu/arm-smmu: Mask TLBI address correctly

The less said about "~12UL" the better. Oh dear.

Since 12UL is 0b1100, "iova &= ~12UL" clears only bits 2 and 3 rather
than the bottom 12 bits, so the IOVA was never actually being rounded
down to a page boundary at all.

We get away with it due to calling constraints that mean IOVAs are
implicitly at least page-aligned to begin with, but still; oh dear.
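
For illustration only (not part of the commit): a minimal userspace
sketch of why the old mask was wrong. 12UL is 0b1100, so ~12UL clears
just bits 2 and 3, whereas the intended operation rounds the IOVA down
to a 4K page boundary. The sample IOVA value is made up.

	#include <assert.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned long iova = 0x12345678abcUL;	/* arbitrary example IOVA */

		/* Buggy: ~12UL == ...fff3, so only bits 2 and 3 are cleared. */
		unsigned long buggy = iova & ~12UL;

		/* Intended: clear bits [11:0] to round down to a 4K page. */
		unsigned long fixed = (iova >> 12) << 12;

		printf("buggy = %#lx\n", buggy);	/* 0x12345678ab0 */
		printf("fixed = %#lx\n", fixed);	/* 0x12345678000 */

		assert((fixed & 0xfffUL) == 0);
		return 0;
	}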

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Robin Murphy authored 2019-08-15 19:37:21 +01:00, committed by Will Deacon
Parent: 5f9e832c13
Commit: 353b325047
1 changed file with 1 addition and 1 deletion

@@ -504,7 +504,7 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
 		reg += leaf ? ARM_SMMU_CB_S1_TLBIVAL : ARM_SMMU_CB_S1_TLBIVA;
 		if (cfg->fmt != ARM_SMMU_CTX_FMT_AARCH64) {
-			iova &= ~12UL;
+			iova = (iova >> 12) << 12;
 			iova |= cfg->asid;
 			do {
 				writel_relaxed(iova, reg);
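
For illustration only (not from the driver): a sketch of how the
AArch32-format TLBI-by-VA operand is composed after the fix, with the
page-aligned VA in the upper bits and the ASID OR'd into the low bits.
The helper name, the 12-bit page shift, and the uint32_t/uint8_t widths
are assumptions made for this example.

	#include <stdint.h>

	#define EX_PAGE_SHIFT 12

	/* Pack a TLBI-by-VA operand: page-aligned IOVA | ASID. With the
	 * old "~12UL" mask, leftover page-offset bits could survive and
	 * corrupt the ASID field in the low bits. */
	static inline uint32_t tlbiva_operand(uint32_t iova, uint8_t asid)
	{
		return ((iova >> EX_PAGE_SHIFT) << EX_PAGE_SHIFT) | asid;
	}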