writeback: limit write_cache_pages integrity scanning to current EOF

sync can currently take a really long time if a concurrent writer is
extending a file. The problem is that the dirty pages on the address
space grow in the same direction as write_cache_pages scans, so if
the writer keeps ahead of writeback, the writeback will not
terminate until the writer stops adding dirty pages.

For a data integrity sync, we only need to write the pages dirty at
the time we start the writeback, so we can stop scanning once we get
to the page that was at the end of the file at the time the scan
started.
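
To see why the cap terminates the scan, here is a small standalone
model (plain userspace C, not kernel code; the page cache is reduced
to bare counters and all names are illustrative): the writer dirties
one new page for every page the scan writes back, yet the scan still
finishes once it reaches the EOF sampled at the start.

	#include <stdio.h>

	int main(void)
	{
		unsigned long eof = 1000;	/* dirty pages in the file when the sync starts */
		unsigned long written = 0;

		/* Sample EOF once and cap the scan there, as the patch does. */
		unsigned long end = eof;

		for (unsigned long index = 0; index <= end; index++) {
			written++;	/* "write back" the page at 'index' */
			eof++;		/* concurrent writer appends another dirty page */
		}
		printf("wrote %lu pages; the file grew to %lu pages meanwhile\n",
		       written, eof);

		/* Had 'end' tracked the live 'eof' instead of its value at scan
		 * start, the loop would keep finding new dirty pages for as long
		 * as the writer keeps appending - the holdoff described above. */
		return 0;
	}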

This prevents operations like copying a large file from holding sync
off indefinitely, as sync will no longer write back pages that were
dirtied after it was started. It does not weaken the existing
integrity guarantees: any dirty page, old or new, that lies within
the EOF range at the start of the scan will still be captured.

This patch will not prevent sync from blocking on large writes into
holes. That requires more complex intervention; this patch only
addresses the common append case of this sync holdoff.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Dave Chinner authored on 2010-06-09 10:37:20 +10:00; committed by Linus Torvalds
Parent: 254c8c2dbf
Commit: d87815cb20
1 changed file with 15 additions and 0 deletions


@@ -851,7 +851,22 @@ int write_cache_pages(struct address_space *mapping,
 		if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX)
 			range_whole = 1;
 		cycled = 1; /* ignore range_cyclic tests */
+
+		/*
+		 * If this is a data integrity sync, cap the writeback to the
+		 * current end of file. Any extension to the file that occurs
+		 * after this is a new write and we don't need to write those
+		 * pages out to fulfil our data integrity requirements. If we
+		 * try to write them out, we can get stuck in this scan until
+		 * the concurrent writer stops adding dirty pages and extending
+		 * EOF.
+		 */
+		if (wbc->sync_mode == WB_SYNC_ALL &&
+		    wbc->range_end == LLONG_MAX) {
+			end = i_size_read(mapping->host) >> PAGE_CACHE_SHIFT;
+		}
 	}
 retry:
 	done_index = index;
 	while (!done && (index <= end)) {
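
For reference, the new branch only fires for a whole-file data
integrity writeback, i.e. a writeback_control with sync_mode ==
WB_SYNC_ALL and range_end == LLONG_MAX, which is how sync-style
callers (for example the __filemap_fdatawrite_range() path) set it
up. The cap itself is just the byte-granular i_size shifted down to a
page index, giving the "while (!done && (index <= end))" loop above a
fixed upper bound. A minimal standalone illustration of that
arithmetic, assuming 4 KiB pages (PAGE_CACHE_SHIFT == 12) and
made-up sizes:

	#include <stdio.h>

	#define PAGE_CACHE_SHIFT 12	/* 4 KiB pages assumed for illustration */

	int main(void)
	{
		long long i_size = (1LL << 30) + 100;	/* 1 GiB plus a partial page */
		unsigned long end = i_size >> PAGE_CACHE_SHIFT;

		/* Pages 0..end cover every byte present at scan start; pages a
		 * concurrent appender dirties beyond 'end' fall outside the scan. */
		printf("i_size=%lld -> last page index scanned: %lu\n", i_size, end);
		return 0;
	}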