From 59409373f60a0a493fe2a1b85dc8c6299c4fef37 Mon Sep 17 00:00:00 2001
From: "Matthew Wilcox (Oracle)"
Date: Fri, 7 Jan 2022 14:04:55 -0500
Subject: [PATCH] mm/gup: Handle page split race more efficiently

If we hit the page split race, the current code returns NULL which
will presumably trigger a retry under the mmap_lock.  This isn't
necessary; we can just retry the compound_head() lookup.  This is a
very minor optimisation of an unlikely path, but conceptually it
matches (eg) the page cache RCU-protected lookup.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: John Hubbard
Reviewed-by: Jason Gunthorpe
Reviewed-by: William Kucharski
---
 mm/gup.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index ad120f470735..d3e8266d8e70 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -68,7 +68,10 @@ static void put_page_refs(struct page *page, int refs)
  */
 static inline struct page *try_get_compound_head(struct page *page, int refs)
 {
-	struct page *head = compound_head(page);
+	struct page *head;
+
+retry:
+	head = compound_head(page);
 
 	if (WARN_ON_ONCE(page_ref_count(head) < 0))
 		return NULL;
@@ -86,7 +89,7 @@ static inline struct page *try_get_compound_head(struct page *page, int refs)
 	 */
 	if (unlikely(compound_head(page) != head)) {
 		put_page_refs(head, refs);
-		return NULL;
+		goto retry;
 	}
 
 	return head;
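
[Editor's illustration, not part of the patch.]  For readers less familiar
with the GUP fast path, the following is a minimal userspace sketch of the
control flow the patch introduces: on a detected head change (the page split
race), drop the speculative references and retry the head lookup rather than
returning NULL to the caller.  struct fake_page, head_of(), try_ref() and
put_refs() are invented stand-ins for compound_head(), the refcount check and
put_page_refs(); they are not kernel APIs, and only the "goto retry" structure
mirrors the real code.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_page {
	_Atomic(struct fake_page *) head;	/* points to itself when not compound */
	atomic_int refcount;
};

static struct fake_page *head_of(struct fake_page *page)
{
	return atomic_load(&page->head);
}

/* Simplified: the kernel uses an atomic add-unless-zero style helper here. */
static bool try_ref(struct fake_page *head, int refs)
{
	if (atomic_load(&head->refcount) < 0)
		return false;
	atomic_fetch_add(&head->refcount, refs);
	return true;
}

static void put_refs(struct fake_page *head, int refs)
{
	atomic_fetch_sub(&head->refcount, refs);
}

/* Mirrors the patched try_get_compound_head(): retry on a head change. */
static struct fake_page *try_get_head(struct fake_page *page, int refs)
{
	struct fake_page *head;

retry:
	head = head_of(page);
	if (!try_ref(head, refs))
		return NULL;

	/*
	 * If the compound page was split while we were taking the
	 * references, drop them and look the head up again instead of
	 * failing the whole operation back to the slow path.
	 */
	if (head_of(page) != head) {
		put_refs(head, refs);
		goto retry;
	}
	return head;
}

int main(void)
{
	struct fake_page p = { .refcount = 1 };
	atomic_store(&p.head, &p);

	struct fake_page *h = try_get_head(&p, 1);
	printf("got head %p, refcount now %d\n", (void *)h,
	       atomic_load(&h->refcount));
	return 0;
}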