mm: hugetlb: fix pgoff computation when unmapping page from vma
The computation of pgoff is incorrect: vma->vm_pgoff is already in PAGE_SIZE units, so shifting it right by PAGE_SHIFT is wrong, and the in-VMA offset must likewise be converted to huge page units, not PAGE_SIZE units. Fix it by using the existing helper vma_hugecache_offset(), which yields the offset in HPAGE_SIZE units as expected by the page cache lookup.

[akpm@linux-foundation.org: use vma_hugecache_offset() directly, per Michal]
Signed-off-by: Hillf Danton <dhillf@gmail.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Michal Hocko <mhocko@suse.cz>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This commit is contained in:
Parent: 86cfd3a450
Commit: 0c176d52b0
@@ -2315,8 +2315,7 @@ static int unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * from page cache lookup which is in HPAGE_SIZE units.
 	 */
 	address = address & huge_page_mask(h);
-	pgoff = ((address - vma->vm_start) >> PAGE_SHIFT)
-		+ (vma->vm_pgoff >> PAGE_SHIFT);
+	pgoff = vma_hugecache_offset(h, vma, address);
 	mapping = (struct address_space *)page_private(page);
 
 	/*