WSL2-Linux-Kernel/kernel/sched
Yang Yingliang 2a3548c7ef sched/smt: Fix unbalance sched_smt_present dec/inc
commit e22f910a26cc2a3ac9c66b8e935ef2a7dd881117 upstream.

I got the following warning report while doing a stress test:

jump label: negative count!
WARNING: CPU: 3 PID: 38 at kernel/jump_label.c:263 static_key_slow_try_dec+0x9d/0xb0
Call Trace:
 <TASK>
 __static_key_slow_dec_cpuslocked+0x16/0x70
 sched_cpu_deactivate+0x26e/0x2a0
 cpuhp_invoke_callback+0x3ad/0x10d0
 cpuhp_thread_fun+0x3f5/0x680
 smpboot_thread_fn+0x56d/0x8d0
 kthread+0x309/0x400
 ret_from_fork+0x41/0x70
 ret_from_fork_asm+0x1b/0x30
 </TASK>

When cpuset_cpu_inactive() fails in sched_cpu_deactivate(), the CPU
offline is aborted, but sched_smt_present has already been decremented
before calling cpuset_cpu_inactive(). This leads to an unbalanced
dec/inc, so fix it by incrementing sched_smt_present in the error path.

Fixes: c5511d03ec ("sched/smt: Make sched_smt_present track topology")
Cc: stable@kernel.org
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Chen Yu <yu.c.chen@intel.com>
Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
Link: https://lore.kernel.org/r/20240703031610.587047-3-yangyingliang@huaweicloud.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-08-19 05:45:47 +02:00
Makefile
autogroup.c
autogroup.h
clock.c
completion.c
core.c sched/smt: Fix unbalance sched_smt_present dec/inc 2024-08-19 05:45:47 +02:00
core_sched.c
cpuacct.c sched/cpuacct: Optimize away RCU read lock 2023-10-06 13:18:19 +02:00
cpudeadline.c sched/core: Introduce sched_asym_cpucap_active() 2022-12-31 13:14:01 +01:00
cpudeadline.h
cpufreq.c
cpufreq_schedutil.c
cpupri.c sched/rt: Fix live lock between select_fallback_rq() and RT push 2023-10-06 13:18:22 +02:00
cpupri.h
cputime.c sched/cputime: Fix mul_u64_u64_div_u64() precision for cputime 2024-08-19 05:45:39 +02:00
deadline.c sched: Fix stop_one_cpu_nowait() vs hotplug 2023-11-20 11:08:13 +01:00
debug.c sched: Fix DEBUG && !SCHEDSTATS warn 2023-05-11 23:00:40 +09:00
fair.c profiling: remove profile=sleep support 2024-08-19 05:45:39 +02:00
features.h
idle.c Revert "kernel/sched: Modify initial boot task idle setup" 2023-10-19 23:05:38 +02:00
isolation.c
loadavg.c
membarrier.c sched/membarrier: reduce the ability to hammer on sys_membarrier 2024-02-23 08:55:14 +01:00
pelt.c
pelt.h
psi.c sched/psi: Fix use-after-free in ep_remove_wait_queue() 2023-02-22 12:57:06 +01:00
rt.c sched/rt: Disallow writing invalid values to sched_rt_period_us 2024-03-01 13:21:43 +01:00
sched-pelt.h
sched.h sched/fair: set_load_weight() must also call reweight_task() for SCHED_IDLE tasks 2024-08-19 05:45:11 +02:00
smp.h
stats.c
stats.h sched: Make struct sched_statistics independent of fair sched class 2023-05-11 23:00:34 +09:00
stop_task.c sched: Make struct sched_statistics independent of fair sched class 2023-05-11 23:00:34 +09:00
swait.c
topology.c sched/fair: Allow disabling sched_balance_newidle with sched_relax_domain_level 2024-06-16 13:39:34 +02:00
wait.c
wait_bit.c