mm/rmap: use page_not_mapped in try_to_unmap()

page_mapcount_is_zero() accurately calculates how many mappings a hugepage
has, only to check the result against 0.  This is a waste of CPU time.  We
can do the same check via page_not_mapped() and save some possible
atomic_read cycles.  Remove page_mapcount_is_zero() as it is no longer
used, and move page_not_mapped() above try_to_unmap() to avoid an
"identifier undeclared" compilation error.

Link: https://lkml.kernel.org/r/20210130084904.35307-1-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Author: Miaohe Lin, 2021-02-25 17:18:03 -08:00
Committed by: Linus Torvalds
Parent: 90aaca852c
Commit: b7e188ec98
1 changed file, 3 insertions(+), 8 deletions(-)

mm/rmap.c

@@ -1736,9 +1736,9 @@ static bool invalid_migration_vma(struct vm_area_struct *vma, void *arg)
 	return vma_is_temporary_stack(vma);
 }
 
-static int page_mapcount_is_zero(struct page *page)
+static int page_not_mapped(struct page *page)
 {
-	return !total_mapcount(page);
+	return !page_mapped(page);
 }
 
 /**
@@ -1756,7 +1756,7 @@ bool try_to_unmap(struct page *page, enum ttu_flags flags)
 	struct rmap_walk_control rwc = {
 		.rmap_one = try_to_unmap_one,
 		.arg = (void *)flags,
-		.done = page_mapcount_is_zero,
+		.done = page_not_mapped,
 		.anon_lock = page_lock_anon_vma_read,
 	};
 
@@ -1780,11 +1780,6 @@ bool try_to_unmap(struct page *page, enum ttu_flags flags)
 	return !page_mapcount(page) ? true : false;
 }
 
-static int page_not_mapped(struct page *page)
-{
-	return !page_mapped(page);
-}
-
 /**
  * try_to_munlock - try to munlock a page
  * @page: the page to be munlocked
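
Where the cycles are saved: for a compound page, page_mapped() can return as
soon as it finds any mapping, whereas total_mapcount() has to read the
_mapcount of every subpage before the sum can be compared against zero.  The
sketch below is a simplified illustration of that difference, not the actual
kernel implementations; "struct page_stub" and the *_sketch() helpers are
invented for the example.

	/*
	 * Simplified sketch of the old vs. new .done callback; NOT the
	 * kernel code.  struct page_stub and the *_sketch() helpers are
	 * hypothetical, for illustration only.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	struct page_stub {
		int compound_mapcount;		/* mappings of the whole compound page */
		int nr_subpages;		/* base pages making up the compound page */
		int subpage_mapcount[512];	/* per-subpage mapping counts */
	};

	/* Analogue of total_mapcount(): reads every counter unconditionally. */
	static int total_mapcount_sketch(const struct page_stub *page)
	{
		int i, total = page->compound_mapcount;

		for (i = 0; i < page->nr_subpages; i++)
			total += page->subpage_mapcount[i];
		return total;
	}

	/* Old callback: computes the full count just to compare it with zero. */
	static int page_mapcount_is_zero_sketch(const struct page_stub *page)
	{
		return !total_mapcount_sketch(page);
	}

	/* Analogue of page_mapped(): stops at the first mapping it finds. */
	static bool page_mapped_sketch(const struct page_stub *page)
	{
		int i;

		if (page->compound_mapcount > 0)
			return true;
		for (i = 0; i < page->nr_subpages; i++)
			if (page->subpage_mapcount[i] > 0)
				return true;
		return false;
	}

	/* New callback: a plain yes/no check, which is all the walk needs. */
	static int page_not_mapped_sketch(const struct page_stub *page)
	{
		return !page_mapped_sketch(page);
	}

	int main(void)
	{
		struct page_stub thp = { .compound_mapcount = 1, .nr_subpages = 512 };

		/* Both callbacks agree on the answer ... */
		printf("old: %d, new: %d\n",
		       page_mapcount_is_zero_sketch(&thp),
		       page_not_mapped_sketch(&thp));
		/*
		 * ... but the new one returned after a single read of the
		 * compound mapcount instead of summing all 512 counters.
		 */
		return 0;
	}

rmap_walk() only consults the .done callback to decide whether it can stop
walking early, so a boolean "is the page still mapped?" check is sufficient
there; the exact mapcount is never needed.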