# SPDX-License-Identifier: GPL-2.0-only

config PREEMPT_NONE_BUILD
	bool

config PREEMPT_VOLUNTARY_BUILD
	bool

config PREEMPT_BUILD
	bool
	select PREEMPTION
	select UNINLINE_SPIN_UNLOCK if !ARCH_INLINE_SPIN_UNLOCK

choice
	prompt "Preemption Model"
	default PREEMPT_NONE

config PREEMPT_NONE
	bool "No Forced Preemption (Server)"
	select PREEMPT_NONE_BUILD if !PREEMPT_DYNAMIC
	help
	  This is the traditional Linux preemption model, geared towards
	  throughput. It will still provide good latencies most of the
	  time, but there are no guarantees and occasional longer delays
	  are possible.

	  Select this option if you are building a kernel for a server or
	  scientific/computation system, or if you want to maximize the
	  raw processing power of the kernel, irrespective of scheduling
	  latencies.

config PREEMPT_VOLUNTARY
	bool "Voluntary Kernel Preemption (Desktop)"
	depends on !ARCH_NO_PREEMPT
	select PREEMPT_VOLUNTARY_BUILD if !PREEMPT_DYNAMIC
	help
	  This option reduces the latency of the kernel by adding more
	  "explicit preemption points" to the kernel code. These new
	  preemption points have been selected to reduce the maximum
	  latency of rescheduling, providing faster application reactions,
	  at the cost of slightly lower throughput.

	  This allows reaction to interactive events by allowing a
	  low priority process to voluntarily preempt itself even if it
	  is in kernel mode executing a system call. This allows
	  applications to run more 'smoothly' even when the system is
	  under load.

	  Select this if you are building a kernel for a desktop system.
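
# An explicit preemption point is, roughly, a cond_resched() call in
# long-running kernel code (illustrative sketch; the loop and helper
# are hypothetical, cond_resched() itself is the real primitive):
#
#	for (i = 0; i < nr_items; i++) {
#		process_item(&items[i]);	/* hypothetical work */
#		cond_resched();			/* may reschedule here */
#	}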

config PREEMPT
	bool "Preemptible Kernel (Low-Latency Desktop)"
	depends on !ARCH_NO_PREEMPT
	select PREEMPT_BUILD
	help
	  This option reduces the latency of the kernel by making
	  all kernel code (that is not executing in a critical section)
	  preemptible. This allows reaction to interactive events by
	  permitting a low priority process to be preempted involuntarily
	  even if it is in kernel mode executing a system call and would
	  otherwise not be about to reach a natural preemption point.
	  This allows applications to run more 'smoothly' even when the
	  system is under load, at the cost of slightly lower throughput
	  and a slight runtime overhead to kernel code.

	  Select this if you are building a kernel for a desktop or
	  embedded system with latency requirements in the milliseconds
	  range.
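
# Involuntary preemption happens, in rough outline, on the return path
# from an interrupt (simplified sketch of the check guarding the real
# preempt_schedule_irq() hook, not the literal code):
#
#	if (!preempt_count() && need_resched())
#		preempt_schedule_irq();	/* switch to the waiting task */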

config PREEMPT_RT
	bool "Fully Preemptible Kernel (Real-Time)"
	depends on EXPERT && ARCH_SUPPORTS_RT
	select PREEMPTION
	help
	  This option turns the kernel into a real-time kernel by replacing
	  various locking primitives (spinlocks, rwlocks, etc.) with
	  preemptible, priority-inheritance-aware variants, enforcing
	  interrupt threading and introducing mechanisms to break up long
	  non-preemptible sections. This makes the kernel, except for very
	  low level and critical code paths (entry code, scheduler, low
	  level interrupt handling), fully preemptible and brings most
	  execution contexts under scheduler control.

	  Select this if you are building a kernel for systems which
	  require real-time guarantees.
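
# Note (sketch; identifiers are illustrative): source-level locking
# code is unchanged under PREEMPT_RT. spinlock_t and rwlock_t are
# substituted with rt_mutex-based sleeping locks, while raw_spinlock_t
# keeps the truly spinning, non-preemptible behaviour:
#
#	spin_lock(&dev->lock);		/* RT: may sleep, inherits prio */
#	raw_spin_lock(&core->lock);	/* always spins, preemption off */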

endchoice

config PREEMPT_COUNT
	bool

config PREEMPTION
	bool
	select PREEMPT_COUNT

config PREEMPT_DYNAMIC
	bool "Preemption behaviour defined on boot"
	depends on HAVE_PREEMPT_DYNAMIC && !PREEMPT_RT
	select JUMP_LABEL if HAVE_PREEMPT_DYNAMIC_KEY
	select PREEMPT_BUILD
	default y if HAVE_PREEMPT_DYNAMIC_CALL
	help
	  This option allows the preemption model to be selected on the
	  kernel command line, overriding the default preemption model
	  chosen at compile time.

	  The feature is primarily of interest to Linux distributions that
	  provide a pre-built kernel binary: it reduces the number of
	  kernel flavors they have to offer while still covering the
	  different use cases.

	  The runtime overhead is negligible when HAVE_STATIC_CALL_INLINE
	  is enabled, but if runtime patching is not available for the
	  specific architecture, the potential overhead should be
	  considered.

	  Select this if the same pre-built kernel should be usable for
	  both server and desktop workloads.
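
# Usage sketch (assumes the standard preempt= boot parameter; the boot
# entry below is illustrative):
#
#	linux /boot/vmlinuz root=/dev/sda1 preempt=full
#
# Valid values are "none", "voluntary" and "full"; without the
# parameter the compiled-in default model is used.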

config SCHED_CORE
	bool "Core Scheduling for SMT"
	depends on SCHED_SMT
	help
	  This option permits Core Scheduling, a means of coordinated task
	  selection across SMT siblings. When enabled -- see
	  prctl(PR_SCHED_CORE) -- task selection ensures that all SMT siblings
	  will execute a task from the same 'core group', forcing idle when no
	  matching task is found.

	  Use of this feature includes:
	   - mitigation of some (not all) SMT side channels;
	   - limiting SMT interference to improve determinism and/or performance.

	  SCHED_CORE is disabled by default. When it is enabled but unused,
	  which is the likely deployment by Linux distributions, there should
	  be no measurable impact on performance.
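
# Usage sketch (constants are from <linux/prctl.h>; the pid handling
# is illustrative):
#
#	#include <sys/prctl.h>
#	#include <linux/prctl.h>
#
#	/* create a new core-scheduling cookie for the calling thread */
#	prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0,
#	      PR_SCHED_CORE_SCOPE_THREAD, 0);
#
#	/* share the caller's cookie with another task */
#	prctl(PR_SCHED_CORE, PR_SCHED_CORE_SHARE_TO, pid,
#	      PR_SCHED_CORE_SCOPE_THREAD, 0);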