mm: migrate: fix missing update page_private to hugetlb_page_subpool
Since commit d6995da311 ("hugetlb: use page.private for hugetlb specific
page flags") converted page.private to hugetlb specific page flags, we
should use hugetlb_page_subpool() to get the subpool pointer instead of
page_private().  This 'could' prevent the migration of hugetlb pages.
page_private(hpage) is now used for hugetlb page specific flags.  At
migration time, the only flag which could be set is
HPageVmemmapOptimized.  This flag will only be set if the new vmemmap
reduction feature is enabled.  In addition, !page_mapping() implies an
anonymous mapping.  So, this will prevent migration of hugetlb pages in
anonymous mappings if the vmemmap reduction feature is enabled.

In addition, that if statement checked for the rare race condition of a
page being migrated while in the process of being freed.  Since that
check is now wrong, we could leak hugetlb subpool usage counts.

The commit forgot to update it in the page migration routine.  So fix
it.

[songmuchun@bytedance.com: fix compiler error when !CONFIG_HUGETLB_PAGE reported by Randy]
  Link: https://lkml.kernel.org/r/20210521022747.35736-1-songmuchun@bytedance.com

Link: https://lkml.kernel.org/r/20210520025949.1866-1-songmuchun@bytedance.com
Fixes: d6995da311 ("hugetlb: use page.private for hugetlb specific page flags")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reported-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Tested-by: Anshuman Khandual <anshuman.khandual@arm.com> [arm64]
Cc: Oscar Salvador <osalvador@suse.de>
Cc: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Xiongchun Duan <duanxiongchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Parent: 16c9afc776
Commit: 6acfb5ba15
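Before the diff itself, a minimal userspace sketch of why the old check misfires. The struct page, struct hugepage_subpool, and flag constant below are mocked stand-ins rather than the kernel definitions; the sketch only mirrors the layout introduced by d6995da311, where the head page's ->private carries hugetlb flags and the subpool pointer lives in the first tail page's ->private.

/*
 * Userspace mock, not kernel code: struct page, struct hugepage_subpool and
 * the flag bit are simplified stand-ins that mirror the d6995da311 layout.
 */
#include <stdio.h>

struct hugepage_subpool { long id; };		/* stand-in for the real struct */
struct page { unsigned long private; };		/* only the field that matters here */

#define HPG_VMEMMAP_OPTIMIZED	(1UL << 0)	/* mock flag bit in head->private */

static unsigned long page_private(const struct page *page)
{
	return page->private;
}

/* Mirrors the kernel helper: the subpool pointer is read from the first tail page. */
static struct hugepage_subpool *hugetlb_page_subpool(struct page *hpage)
{
	return (struct hugepage_subpool *)(hpage + 1)->private;
}

int main(void)
{
	struct page hpage[2] = { { 0 }, { 0 } };	/* head page + first tail page */

	/* Anonymous hugetlb page: no subpool, but a hugetlb flag is set. */
	hpage[0].private = HPG_VMEMMAP_OPTIMIZED;
	hpage[1].private = 0;

	/* Old check: the flag makes page_private() non-zero, so migration
	 * would be refused and subpool counts could be miscounted. */
	printf("page_private(hpage)         = %lu\n", page_private(&hpage[0]));

	/* New check: reads the actual subpool pointer, which is NULL here. */
	printf("hugetlb_page_subpool(hpage) = %p\n",
	       (void *)hugetlb_page_subpool(&hpage[0]));
	return 0;
}

Compiled with a plain C compiler, the first line prints a non-zero value (the flag bit) even though there is no subpool, which is exactly the condition that blocked migration of anonymous hugetlb pages and could skew subpool usage counts.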
include/linux/hugetlb.h
@@ -898,6 +898,11 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
 #else	/* CONFIG_HUGETLB_PAGE */
 struct hstate {};
 
+static inline struct hugepage_subpool *hugetlb_page_subpool(struct page *hpage)
+{
+	return NULL;
+}
+
 static inline int isolate_or_dissolve_huge_page(struct page *page,
 					struct list_head *list)
 {
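The stub above is what addressed the compiler error reported for !CONFIG_HUGETLB_PAGE builds: the migrate path can keep an unconditional hugetlb_page_subpool() call, and the NULL-returning static inline turns that branch into dead code when hugetlb support is compiled out. Below is a standalone sketch of the same config-stub pattern, with a hypothetical CONFIG_DEMO_FEATURE switch and demo_* names that are not from the kernel.

#include <stdio.h>

struct hugepage_subpool;			/* incomplete type is enough for a pointer */

#ifdef CONFIG_DEMO_FEATURE			/* hypothetical stand-in for CONFIG_HUGETLB_PAGE */
struct hugepage_subpool { int users; };
static struct hugepage_subpool demo_spool;
static inline struct hugepage_subpool *demo_page_subpool(void)
{
	return &demo_spool;			/* "feature enabled" implementation */
}
#else
static inline struct hugepage_subpool *demo_page_subpool(void)
{
	return NULL;				/* same shape as the stub added above */
}
#endif

int main(void)
{
	/* Callers stay unconditional; with the feature compiled out the
	 * compiler sees a constant NULL and drops the branch entirely. */
	if (demo_page_subpool())
		printf("feature enabled: subpool present\n");
	else
		printf("feature disabled: stub returned NULL\n");
	return 0;
}

Building with -DCONFIG_DEMO_FEATURE selects the real helper; without it the stub is used and the caller still compiles unchanged.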
mm/migrate.c
@@ -1293,7 +1293,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 	 * page_mapping() set, hugetlbfs specific move page routine will not
 	 * be called and we could leak usage counts for subpools.
 	 */
-	if (page_private(hpage) && !page_mapping(hpage)) {
+	if (hugetlb_page_subpool(hpage) && !page_mapping(hpage)) {
 		rc = -EBUSY;
 		goto out_unlock;
 	}