writeback: fix occasional slow sync(1)
In the case when the system contains no dirty pages, wakeup_flusher_threads() submits WB_SYNC_NONE writeback for 0 pages, so wb_writeback() exits immediately without doing anything, even though there are dirty inodes in the system. Thus sync(1) ends up writing all the dirty inodes from the WB_SYNC_ALL writeback pass, which is slow.

Fix the problem by using get_nr_dirty_pages() in wakeup_flusher_threads() instead of calculating the number of dirty pages manually. That function also takes the number of dirty inodes into account.

Signed-off-by: Jan Kara <jack@suse.cz>
Reported-by: Paul Taysom <taysom@chromium.org>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This commit is contained in:
Parent: 7cb2ef56e6
Commit: 47df3ddedd
@@ -1049,10 +1049,8 @@ void wakeup_flusher_threads(long nr_pages, enum wb_reason reason)
 {
 	struct backing_dev_info *bdi;
 
-	if (!nr_pages) {
-		nr_pages = global_page_state(NR_FILE_DIRTY) +
-				global_page_state(NR_UNSTABLE_NFS);
-	}
+	if (!nr_pages)
+		nr_pages = get_nr_dirty_pages();
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) {
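For reference, a minimal sketch of the helper the patch switches to. The body below approximates get_nr_dirty_pages() as found in fs/fs-writeback.c of kernels of this era; the get_nr_dirty_inodes() helper and the exact arithmetic are assumptions about that tree, not part of this commit's diff.

/*
 * Approximate sketch of get_nr_dirty_pages(): besides the dirty and
 * unstable-NFS page counters that the removed open-coded sum used, it
 * also adds an estimate of the number of dirty inodes, so the
 * WB_SYNC_NONE pass kicked off by sync(1) does real work even when no
 * pages are dirty yet.
 */
static long get_nr_dirty_pages(void)
{
	return global_page_state(NR_FILE_DIRTY) +
		global_page_state(NR_UNSTABLE_NFS) +
		get_nr_dirty_inodes();
}

Because the inode estimate is nonzero whenever inodes have been dirtied, wakeup_flusher_threads() now requests a nonzero number of pages, and the WB_SYNC_NONE pass no longer bails out, leaving less work for the slow WB_SYNC_ALL pass.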