shmem: Convert part of shmem_undo_range() to use a folio

find_lock_entries() never returns tail pages.  We cannot use page_folio()
here as the pagevec may also contain swap entries, so simply cast for
now.  This is an intermediate step which will be fully removed by the
end of this series.
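
For illustration only (not part of this patch), a minimal sketch of why the cast is safe here while page_folio() is not: a value entry encodes a swap entry rather than a page pointer, so it must be checked before anything dereferences it, whereas a head page can be reinterpreted as a folio directly. The helper name pagevec_entry_to_folio() is hypothetical.

/*
 * Hypothetical helper, for illustration only: entries returned by
 * find_lock_entries() are either xarray value entries (swap entries)
 * or head pages, never tail pages.
 */
static inline struct folio *pagevec_entry_to_folio(struct page *entry)
{
	/* A value entry is not a struct page; calling page_folio() on it
	 * would read through a bogus pointer. */
	if (xa_is_value(entry))
		return NULL;

	/* A head page is the first page of a folio, so a cast suffices. */
	return (struct folio *)entry;
}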

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Author: Matthew Wilcox (Oracle)
Date: 2021-12-03 08:50:01 -05:00
Parent: 3506659e18
Commit: 7b774aab79
1 changed file with 7 additions and 7 deletions


@@ -936,22 +936,22 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	while (index < end && find_lock_entries(mapping, index, end - 1,
 			&pvec, indices)) {
 		for (i = 0; i < pagevec_count(&pvec); i++) {
-			struct page *page = pvec.pages[i];
+			struct folio *folio = (struct folio *)pvec.pages[i];
 
 			index = indices[i];
 
-			if (xa_is_value(page)) {
+			if (xa_is_value(folio)) {
 				if (unfalloc)
 					continue;
 				nr_swaps_freed += !shmem_free_swap(mapping,
-						index, page);
+						index, folio);
 				continue;
 			}
-			index += thp_nr_pages(page) - 1;
+			index += folio_nr_pages(folio) - 1;
 
-			if (!unfalloc || !PageUptodate(page))
-				truncate_inode_page(mapping, page);
-			unlock_page(page);
+			if (!unfalloc || !folio_test_uptodate(folio))
+				truncate_inode_page(mapping, &folio->page);
+			folio_unlock(folio);
 		}
 		pagevec_remove_exceptionals(&pvec);
 		pagevec_release(&pvec);