mm: allow a NULL fn callback in apply_to_page_range
Besides calling the callback on each page, apply_to_page_range also has
the effect of pre-faulting all PTEs for the range.  To support callers
that only need the pre-faulting, make the callback optional.

Based on a patch from Minchan Kim <minchan@kernel.org>.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Uladzislau Rezki (Sony) <urezki@gmail.com>
Link: https://lkml.kernel.org/r/20201002122204.1534411-5-hch@lst.de
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Parent: 3e9a9e256b
Commit: eeb4a05fce
mm/memory.c | 16
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2391,13 +2391,15 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
 
 	arch_enter_lazy_mmu_mode();
 
-	do {
-		if (create || !pte_none(*pte)) {
-			err = fn(pte++, addr, data);
-			if (err)
-				break;
-		}
-	} while (addr += PAGE_SIZE, addr != end);
+	if (fn) {
+		do {
+			if (create || !pte_none(*pte)) {
+				err = fn(pte++, addr, data);
+				if (err)
+					break;
+			}
+		} while (addr += PAGE_SIZE, addr != end);
+	}
 	*mask |= PGTBL_PTE_MODIFIED;
 
 	arch_leave_lazy_mmu_mode();
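As an illustration (not part of this commit), a caller that only wants the
pre-faulting side effect might now look roughly like the sketch below. The
helper name prefault_kernel_range and the choice of init_mm are hypothetical;
only the apply_to_page_range() signature is taken from the kernel.

#include <linux/mm.h>

/*
 * Hypothetical sketch: walk (and allocate, where missing) the page tables
 * covering a kernel virtual address range without doing any per-PTE work.
 * With this commit, passing fn == NULL is allowed, so apply_to_page_range()
 * only performs its "pre-faulting" side effect and skips the callback loop.
 */
static int prefault_kernel_range(unsigned long addr, unsigned long size)
{
	return apply_to_page_range(&init_mm, addr, size, NULL, NULL);
}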