ring-buffer: Avoid softlockup in ring_buffer_resize()
[ Upstream commit f6bd2c9248 ]
When a user resizes all trace ring buffers through the file 'buffer_size_kb',
ring_buffer_resize() allocates buffer pages for each CPU in a loop.

If the kernel preemption model is PREEMPT_NONE and there are many CPUs and
many buffer pages to allocate, the loop may not give up the CPU for a long
time and eventually cause a softlockup.

To avoid this, call cond_resched() after each per-CPU buffer allocation.
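
For reference, the allocation loop ends up with roughly the following shape
after this change. This is a simplified sketch, not the verbatim upstream
code: the identifiers (for_each_buffer_cpu, __rb_allocate_pages,
nr_pages_to_update, new_pages) follow kernel/trace/ring_buffer.c, but the
surrounding bookkeeping and locking are omitted here.

	/* Sketch: resizing the buffers of all CPUs (cpu_id == RING_BUFFER_ALL_CPUS) */
	for_each_buffer_cpu(buffer, cpu) {
		struct ring_buffer_per_cpu *cpu_buffer = buffer->buffers[cpu];

		/* Allocating many pages here can take a long time per CPU. */
		if (__rb_allocate_pages(cpu_buffer, nr_pages_to_update,
					&cpu_buffer->new_pages)) {
			/* not enough memory for new pages */
			err = -ENOMEM;
			goto out_err;
		}

		/*
		 * On PREEMPT_NONE kernels nothing preempts this loop, so
		 * explicitly offer to reschedule between CPUs to avoid a
		 * softlockup.
		 */
		cond_resched();
	}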
Link: https://lore.kernel.org/linux-trace-kernel/20230906081930.3939106-1-zhengyejian1@huawei.com
Cc: <mhiramat@kernel.org>
Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Parent: b4874f72cf
Commit: 53e7c559b7
kernel/trace/ring_buffer.c
@@ -2176,6 +2176,8 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
 				err = -ENOMEM;
 				goto out_err;
 			}
+
+			cond_resched();
 		}
 
 		cpus_read_lock();