mm: highmem: don't treat PKMAP_ADDR(LAST_PKMAP) as a highmem address
kmap_to_page() returns the corresponding struct page for a virtual address of an arbitrary mapping. This works by checking whether the address falls in the pkmap region and using the pkmap page tables instead of the linear mapping if appropriate.

Unfortunately, the bounds check means that PKMAP_ADDR(LAST_PKMAP) is incorrectly treated as a highmem address, so we can end up walking off the end of pkmap_page_table and subsequently passing junk to pte_page(). This patch fixes the bounds check to stay within the pkmap tables.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: Mel Gorman <mgorman@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This commit is contained in:
Parent: 96710098ee
Commit: 498c228021
@@ -98,7 +98,7 @@ struct page *kmap_to_page(void *vaddr)
 {
 	unsigned long addr = (unsigned long)vaddr;
 
-	if (addr >= PKMAP_ADDR(0) && addr <= PKMAP_ADDR(LAST_PKMAP)) {
+	if (addr >= PKMAP_ADDR(0) && addr < PKMAP_ADDR(LAST_PKMAP)) {
 		int i = (addr - PKMAP_ADDR(0)) >> PAGE_SHIFT;
 		return pte_page(pkmap_page_table[i]);
 	}