Merge branches 'pm-cpufreq' and 'pm-cpuidle'
* pm-cpufreq:
  cpufreq: Make cpufreq_online() call driver->offline() on errors
  cpufreq: loongson2: Remove unused linux/sched.h headers
  cpufreq: sh: Remove unused linux/sched.h headers
  cpufreq: stats: Clean up local variable in cpufreq_stats_create_table()
  cpufreq: intel_pstate: hybrid: Fix build with CONFIG_ACPI unset
  cpufreq: sc520_freq: add 'fallthrough' to one case
  cpufreq: intel_pstate: Add Cometlake support in no-HWP mode
  cpufreq: intel_pstate: Add Icelake servers support in no-HWP mode
  cpufreq: intel_pstate: hybrid: CPU-specific scaling factor
  cpufreq: intel_pstate: hybrid: Avoid exposing two global attributes

* pm-cpuidle:
  cpuidle: teo: remove unneeded semicolon in teo_select()
  cpuidle: teo: Use kerneldoc documentation in admin-guide
  cpuidle: teo: Rework most recent idle duration values treatment
  cpuidle: teo: Change the main idle state selection logic
  cpuidle: teo: Cosmetic modification of teo_select()
  cpuidle: teo: Cosmetic modifications of teo_update()
  intel_idle: Adjust the SKX C6 parameters if PC6 is disabled
This commit is contained in:
Commit ed562d280c
@@ -347,81 +347,8 @@ for tickless systems. It follows the same basic strategy as the ``menu`` `one
<menu-gov_>`_: it always tries to find the deepest idle state suitable for the
given conditions. However, it applies a different approach to that problem.

First, it does not use sleep length correction factors, but instead it attempts
to correlate the observed idle duration values with the available idle states
and use that information to pick up the idle state that is most likely to
"match" the upcoming CPU idle interval. Second, it does not take the tasks
that were running on the given CPU in the past and are waiting on some I/O
operations to complete now at all (there is no guarantee that they will run on
the same CPU when they become runnable again) and the pattern detection code in
it avoids taking timer wakeups into account. It also only uses idle duration
values less than the current time till the closest timer (with the scheduler
tick excluded) for that purpose.

Like in the ``menu`` governor `case <menu-gov_>`_, the first step is to obtain
the *sleep length*, which is the time until the closest timer event with the
assumption that the scheduler tick will be stopped (that also is the upper bound
on the time until the next CPU wakeup). That value is then used to preselect an
idle state on the basis of three metrics maintained for each idle state provided
by the ``CPUIdle`` driver: ``hits``, ``misses`` and ``early_hits``.

The ``hits`` and ``misses`` metrics measure the likelihood that a given idle
state will "match" the observed (post-wakeup) idle duration if it "matches" the
sleep length. They both are subject to decay (after a CPU wakeup) every time
the target residency of the idle state corresponding to them is less than or
equal to the sleep length and the target residency of the next idle state is
greater than the sleep length (that is, when the idle state corresponding to
them "matches" the sleep length). The ``hits`` metric is increased if the
former condition is satisfied and the target residency of the given idle state
is less than or equal to the observed idle duration and the target residency of
the next idle state is greater than the observed idle duration at the same time
(that is, it is increased when the given idle state "matches" both the sleep
length and the observed idle duration). In turn, the ``misses`` metric is
increased when the given idle state "matches" the sleep length only and the
observed idle duration is too short for its target residency.

The ``early_hits`` metric measures the likelihood that a given idle state will
"match" the observed (post-wakeup) idle duration if it does not "match" the
sleep length. It is subject to decay on every CPU wakeup and it is increased
when the idle state corresponding to it "matches" the observed (post-wakeup)
idle duration and the target residency of the next idle state is less than or
equal to the sleep length (i.e. the idle state "matching" the sleep length is
deeper than the given one).

The governor walks the list of idle states provided by the ``CPUIdle`` driver
and finds the last (deepest) one with the target residency less than or equal
to the sleep length. Then, the ``hits`` and ``misses`` metrics of that idle
state are compared with each other and it is preselected if the ``hits`` one is
greater (which means that that idle state is likely to "match" the observed idle
duration after CPU wakeup). If the ``misses`` one is greater, the governor
preselects the shallower idle state with the maximum ``early_hits`` metric
(or if there are multiple shallower idle states with equal ``early_hits``
metric which also is the maximum, the shallowest of them will be preselected).
[If there is a wakeup latency constraint coming from the `PM QoS framework
<cpu-pm-qos_>`_ which is hit before reaching the deepest idle state with the
target residency within the sleep length, the deepest idle state with the exit
latency within the constraint is preselected without consulting the ``hits``,
``misses`` and ``early_hits`` metrics.]

Next, the governor takes several idle duration values observed most recently
into consideration and if at least a half of them are greater than or equal to
the target residency of the preselected idle state, that idle state becomes the
final candidate to ask for. Otherwise, the average of the most recent idle
duration values below the target residency of the preselected idle state is
computed and the governor walks the idle states shallower than the preselected
one and finds the deepest of them with the target residency within that average.
That idle state is then taken as the final candidate to ask for.

Still, at this point the governor may need to refine the idle state selection if
it has not decided to `stop the scheduler tick <idle-cpus-and-tick_>`_. That
generally happens if the target residency of the idle state selected so far is
less than the tick period and the tick has not been stopped already (in a
previous iteration of the idle loop). Then, like in the ``menu`` governor
`case <menu-gov_>`_, the sleep length used in the previous computations may not
reflect the real time until the closest timer event and if it really is greater
than that time, a shallower state with a suitable target residency may need to
be selected.

.. kernel-doc:: drivers/cpuidle/governors/teo.c
   :doc: teo-description

.. _idle-states-representation:
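For readers comparing the removed description with the new kerneldoc text, the preselection step described above can be sketched roughly as follows. This is illustrative C only, with made-up types; the governor's real code additionally handles disabled states, the PM QoS latency limit and the scheduler tick.

/* Rough sketch of the preselection described above; hypothetical types. */
struct teo_sketch_metrics {
	unsigned int hits;
	unsigned int misses;
	unsigned int early_hits;
};

static int teo_sketch_preselect(const struct teo_sketch_metrics *m,
				const long long *target_residency_ns,
				int state_count, long long sleep_length_ns)
{
	unsigned int best_early = 0;
	int idx = 0, best, i;

	/* Deepest state whose target residency is within the sleep length. */
	for (i = 1; i < state_count; i++) {
		if (target_residency_ns[i] <= sleep_length_ns)
			idx = i;
	}

	/*
	 * More "hits" than "misses": the state is likely to match the
	 * observed idle duration too, so keep it.
	 */
	if (m[idx].hits > m[idx].misses)
		return idx;

	/*
	 * Otherwise, take the shallowest state with the maximum "early_hits"
	 * among the states shallower than idx; fall back to idx if none of
	 * them has a nonzero metric.
	 */
	best = idx;
	for (i = 0; i < idx; i++) {
		if (m[i].early_hits > best_early) {
			best_early = m[i].early_hits;
			best = i;
		}
	}
	return best;
}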
@@ -365,6 +365,9 @@ argument is passed to the kernel in the command line.
inclusive) including both turbo and non-turbo P-states (see
`Turbo P-states Support`_).

This attribute is present only if the value exposed by it is the same
for all of the CPUs in the system.

The value of this attribute is not affected by the ``no_turbo``
setting described `below <no_turbo_attr_>`_.

@@ -374,6 +377,9 @@ argument is passed to the kernel in the command line.
Ratio of the `turbo range <turbo_>`_ size to the size of the entire
range of supported P-states, in percent.

This attribute is present only if the value exposed by it is the same
for all of the CPUs in the system.

This attribute is read-only.

.. _no_turbo_attr:
@@ -1367,9 +1367,14 @@ static int cpufreq_online(unsigned int cpu)
goto out_free_policy;
}

/*
* The initialization has succeeded and the policy is online.
* If there is a problem with its frequency table, take it
* offline and drop it.
*/
ret = cpufreq_table_validate_and_sort(policy);
if (ret)
goto out_exit_policy;
goto out_offline_policy;

/* related_cpus should at least include policy->cpus. */
cpumask_copy(policy->related_cpus, policy->cpus);

@@ -1515,6 +1520,10 @@ out_destroy_policy:

up_write(&policy->rwsem);

out_offline_policy:
if (cpufreq_driver->offline)
cpufreq_driver->offline(policy);

out_exit_policy:
if (cpufreq_driver->exit)
cpufreq_driver->exit(policy);
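The new out_offline_policy label extends the existing unwind ladder: once ->init() has succeeded, a later failure (here, frequency table validation) must also undo the driver state via ->offline(), when implemented, before ->exit(). In miniature, and with entirely hypothetical helpers rather than the cpufreq core's API, the pattern is:

/* Hypothetical helpers, only to illustrate the unwind ordering. */
static int step_init(void) { return 0; }
static int step_validate(void) { return -1; }
static void step_offline(void) { }
static void step_exit(void) { }

static int online_sketch(void)
{
	int ret;

	ret = step_init();		/* like cpufreq_driver->init() */
	if (ret)
		return ret;

	ret = step_validate();		/* like cpufreq_table_validate_and_sort() */
	if (ret)
		goto out_offline;	/* undo everything set up so far */

	return 0;

out_offline:
	step_offline();			/* like cpufreq_driver->offline() */
	step_exit();			/* like cpufreq_driver->exit() */
	return ret;
}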
@@ -211,7 +211,7 @@ void cpufreq_stats_free_table(struct cpufreq_policy *policy)

void cpufreq_stats_create_table(struct cpufreq_policy *policy)
{
unsigned int i = 0, count = 0, ret = -ENOMEM;
unsigned int i = 0, count;
struct cpufreq_stats *stats;
unsigned int alloc_size;
struct cpufreq_frequency_table *pos;

@@ -253,8 +253,7 @@ void cpufreq_stats_create_table(struct cpufreq_policy *policy)
stats->last_index = freq_table_get_index(stats, policy->cur);

policy->stats = stats;
ret = sysfs_create_group(&policy->kobj, &stats_attr_group);
if (!ret)
if (!sysfs_create_group(&policy->kobj, &stats_attr_group))
return;

/* We failed, release resources */
@@ -121,9 +121,10 @@ struct sample {
* @max_pstate_physical:This is physical Max P state for a processor
* This can be higher than the max_pstate which can
* be limited by platform thermal design power limits
* @scaling: Scaling factor to convert frequency to cpufreq
* frequency units
* @perf_ctl_scaling: PERF_CTL P-state to frequency scaling factor
* @scaling: Scaling factor between performance and frequency
* @turbo_pstate: Max Turbo P state possible for this platform
* @min_freq: @min_pstate frequency in cpufreq units
* @max_freq: @max_pstate frequency in cpufreq units
* @turbo_freq: @turbo_pstate frequency in cpufreq units
*
@@ -134,8 +135,10 @@ struct pstate_data {
int min_pstate;
int max_pstate;
int max_pstate_physical;
int perf_ctl_scaling;
int scaling;
int turbo_pstate;
unsigned int min_freq;
unsigned int max_freq;
unsigned int turbo_freq;
};

@@ -366,7 +369,7 @@ static void intel_pstate_set_itmt_prio(int cpu)
}
}

static int intel_pstate_get_cppc_guranteed(int cpu)
static int intel_pstate_get_cppc_guaranteed(int cpu)
{
struct cppc_perf_caps cppc_perf;
int ret;

@@ -382,7 +385,7 @@ static int intel_pstate_get_cppc_guranteed(int cpu)
}

#else /* CONFIG_ACPI_CPPC_LIB */
static void intel_pstate_set_itmt_prio(int cpu)
static inline void intel_pstate_set_itmt_prio(int cpu)
{
}
#endif /* CONFIG_ACPI_CPPC_LIB */

@@ -467,6 +470,20 @@ static void intel_pstate_exit_perf_limits(struct cpufreq_policy *policy)

acpi_processor_unregister_performance(policy->cpu);
}

static bool intel_pstate_cppc_perf_valid(u32 perf, struct cppc_perf_caps *caps)
{
return perf && perf <= caps->highest_perf && perf >= caps->lowest_perf;
}

static bool intel_pstate_cppc_perf_caps(struct cpudata *cpu,
struct cppc_perf_caps *caps)
{
if (cppc_get_perf_caps(cpu->cpu, caps))
return false;

return caps->highest_perf && caps->lowest_perf <= caps->highest_perf;
}
#else /* CONFIG_ACPI */
static inline void intel_pstate_init_acpi_perf_limits(struct cpufreq_policy *policy)
{

@@ -483,12 +500,146 @@ static inline bool intel_pstate_acpi_pm_profile_server(void)
#endif /* CONFIG_ACPI */

#ifndef CONFIG_ACPI_CPPC_LIB
static int intel_pstate_get_cppc_guranteed(int cpu)
static inline int intel_pstate_get_cppc_guaranteed(int cpu)
{
return -ENOTSUPP;
}
#endif /* CONFIG_ACPI_CPPC_LIB */

static void intel_pstate_hybrid_hwp_perf_ctl_parity(struct cpudata *cpu)
{
pr_debug("CPU%d: Using PERF_CTL scaling for HWP\n", cpu->cpu);

cpu->pstate.scaling = cpu->pstate.perf_ctl_scaling;
}

/**
* intel_pstate_hybrid_hwp_calibrate - Calibrate HWP performance levels.
* @cpu: Target CPU.
*
* On hybrid processors, HWP may expose more performance levels than there are
* P-states accessible through the PERF_CTL interface. If that happens, the
* scaling factor between HWP performance levels and CPU frequency will be less
* than the scaling factor between P-state values and CPU frequency.
*
* In that case, the scaling factor between HWP performance levels and CPU
* frequency needs to be determined which can be done with the help of the
* observation that certain HWP performance levels should correspond to certain
* P-states, like for example the HWP highest performance should correspond
* to the maximum turbo P-state of the CPU.
*/
static void intel_pstate_hybrid_hwp_calibrate(struct cpudata *cpu)
{
int perf_ctl_max_phys = cpu->pstate.max_pstate_physical;
int perf_ctl_scaling = cpu->pstate.perf_ctl_scaling;
int perf_ctl_turbo = pstate_funcs.get_turbo();
int turbo_freq = perf_ctl_turbo * perf_ctl_scaling;
int perf_ctl_max = pstate_funcs.get_max();
int max_freq = perf_ctl_max * perf_ctl_scaling;
int scaling = INT_MAX;
int freq;

pr_debug("CPU%d: perf_ctl_max_phys = %d\n", cpu->cpu, perf_ctl_max_phys);
pr_debug("CPU%d: perf_ctl_max = %d\n", cpu->cpu, perf_ctl_max);
pr_debug("CPU%d: perf_ctl_turbo = %d\n", cpu->cpu, perf_ctl_turbo);
pr_debug("CPU%d: perf_ctl_scaling = %d\n", cpu->cpu, perf_ctl_scaling);

pr_debug("CPU%d: HWP_CAP guaranteed = %d\n", cpu->cpu, cpu->pstate.max_pstate);
pr_debug("CPU%d: HWP_CAP highest = %d\n", cpu->cpu, cpu->pstate.turbo_pstate);

#ifdef CONFIG_ACPI
if (IS_ENABLED(CONFIG_ACPI_CPPC_LIB)) {
struct cppc_perf_caps caps;

if (intel_pstate_cppc_perf_caps(cpu, &caps)) {
if (intel_pstate_cppc_perf_valid(caps.nominal_perf, &caps)) {
pr_debug("CPU%d: Using CPPC nominal\n", cpu->cpu);

/*
* If the CPPC nominal performance is valid, it
* can be assumed to correspond to cpu_khz.
*/
if (caps.nominal_perf == perf_ctl_max_phys) {
intel_pstate_hybrid_hwp_perf_ctl_parity(cpu);
return;
}
scaling = DIV_ROUND_UP(cpu_khz, caps.nominal_perf);
} else if (intel_pstate_cppc_perf_valid(caps.guaranteed_perf, &caps)) {
pr_debug("CPU%d: Using CPPC guaranteed\n", cpu->cpu);

/*
* If the CPPC guaranteed performance is valid,
* it can be assumed to correspond to max_freq.
*/
if (caps.guaranteed_perf == perf_ctl_max) {
intel_pstate_hybrid_hwp_perf_ctl_parity(cpu);
return;
}
scaling = DIV_ROUND_UP(max_freq, caps.guaranteed_perf);
}
}
}
#endif
/*
* If using the CPPC data to compute the HWP-to-frequency scaling factor
* doesn't work, use the HWP_CAP gauranteed perf for this purpose with
* the assumption that it corresponds to max_freq.
*/
if (scaling > perf_ctl_scaling) {
pr_debug("CPU%d: Using HWP_CAP guaranteed\n", cpu->cpu);

if (cpu->pstate.max_pstate == perf_ctl_max) {
intel_pstate_hybrid_hwp_perf_ctl_parity(cpu);
return;
}
scaling = DIV_ROUND_UP(max_freq, cpu->pstate.max_pstate);
if (scaling > perf_ctl_scaling) {
/*
* This should not happen, because it would mean that
* the number of HWP perf levels was less than the
* number of P-states, so use the PERF_CTL scaling in
* that case.
*/
pr_debug("CPU%d: scaling (%d) out of range\n", cpu->cpu,
scaling);

intel_pstate_hybrid_hwp_perf_ctl_parity(cpu);
return;
}
}

/*
* If the product of the HWP performance scaling factor obtained above
* and the HWP_CAP highest performance is greater than the maximum turbo
* frequency corresponding to the pstate_funcs.get_turbo() return value,
* the scaling factor is too high, so recompute it so that the HWP_CAP
* highest performance corresponds to the maximum turbo frequency.
*/
if (turbo_freq < cpu->pstate.turbo_pstate * scaling) {
pr_debug("CPU%d: scaling too high (%d)\n", cpu->cpu, scaling);

cpu->pstate.turbo_freq = turbo_freq;
scaling = DIV_ROUND_UP(turbo_freq, cpu->pstate.turbo_pstate);
}

cpu->pstate.scaling = scaling;

pr_debug("CPU%d: HWP-to-frequency scaling factor: %d\n", cpu->cpu, scaling);

cpu->pstate.max_freq = rounddown(cpu->pstate.max_pstate * scaling,
perf_ctl_scaling);

freq = perf_ctl_max_phys * perf_ctl_scaling;
cpu->pstate.max_pstate_physical = DIV_ROUND_UP(freq, scaling);

cpu->pstate.min_freq = cpu->pstate.min_pstate * perf_ctl_scaling;
/*
* Cast the min P-state value retrieved via pstate_funcs.get_min() to
* the effective range of HWP performance levels.
*/
cpu->pstate.min_pstate = DIV_ROUND_UP(cpu->pstate.min_freq, scaling);
}

static inline void update_turbo_state(void)
{
u64 misc_en;
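As a purely illustrative example of the calibration above (made-up numbers, not taken from any specific processor): with cpu_khz = 3,200,000 and perf_ctl_max_phys = 32, the PERF_CTL scaling factor is 100,000 kHz per P-state. If the CPPC nominal performance is 40, the HWP-to-frequency factor becomes DIV_ROUND_UP(3,200,000, 40) = 80,000 kHz per HWP level, so an HWP_CAP highest performance of 45 maps to 45 * 80,000 = 3,600,000 kHz. That equals the turbo frequency implied by a PERF_CTL turbo P-state of 36 at the 100,000 kHz factor, so the final turbo-frequency sanity check passes and the factor is not recomputed.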
@@ -795,19 +946,22 @@ cpufreq_freq_attr_rw(energy_performance_preference);

static ssize_t show_base_frequency(struct cpufreq_policy *policy, char *buf)
{
struct cpudata *cpu;
u64 cap;
int ratio;
struct cpudata *cpu = all_cpu_data[policy->cpu];
int ratio, freq;

ratio = intel_pstate_get_cppc_guranteed(policy->cpu);
ratio = intel_pstate_get_cppc_guaranteed(policy->cpu);
if (ratio <= 0) {
u64 cap;

rdmsrl_on_cpu(policy->cpu, MSR_HWP_CAPABILITIES, &cap);
ratio = HWP_GUARANTEED_PERF(cap);
}

cpu = all_cpu_data[policy->cpu];
freq = ratio * cpu->pstate.scaling;
if (cpu->pstate.scaling != cpu->pstate.perf_ctl_scaling)
freq = rounddown(freq, cpu->pstate.perf_ctl_scaling);

return sprintf(buf, "%d\n", ratio * cpu->pstate.scaling);
return sprintf(buf, "%d\n", freq);
}

cpufreq_freq_attr_ro(base_frequency);

@@ -831,9 +985,20 @@ static void __intel_pstate_get_hwp_cap(struct cpudata *cpu)

static void intel_pstate_get_hwp_cap(struct cpudata *cpu)
{
int scaling = cpu->pstate.scaling;

__intel_pstate_get_hwp_cap(cpu);
cpu->pstate.max_freq = cpu->pstate.max_pstate * cpu->pstate.scaling;
cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * cpu->pstate.scaling;

cpu->pstate.max_freq = cpu->pstate.max_pstate * scaling;
cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * scaling;
if (scaling != cpu->pstate.perf_ctl_scaling) {
int perf_ctl_scaling = cpu->pstate.perf_ctl_scaling;

cpu->pstate.max_freq = rounddown(cpu->pstate.max_freq,
perf_ctl_scaling);
cpu->pstate.turbo_freq = rounddown(cpu->pstate.turbo_freq,
perf_ctl_scaling);
}
}

static void intel_pstate_hwp_set(unsigned int cpu)

@@ -1365,8 +1530,6 @@ define_one_global_rw(energy_efficiency);
static struct attribute *intel_pstate_attributes[] = {
&status.attr,
&no_turbo.attr,
&turbo_pct.attr,
&num_pstates.attr,
NULL
};

@@ -1391,6 +1554,14 @@ static void __init intel_pstate_sysfs_expose_params(void)
if (WARN_ON(rc))
return;

if (!boot_cpu_has(X86_FEATURE_HYBRID_CPU)) {
rc = sysfs_create_file(intel_pstate_kobject, &turbo_pct.attr);
WARN_ON(rc);

rc = sysfs_create_file(intel_pstate_kobject, &num_pstates.attr);
WARN_ON(rc);
}

/*
* If per cpu limits are enforced there are no global limits, so
* return without creating max/min_perf_pct attributes

@@ -1417,6 +1588,11 @@ static void __init intel_pstate_sysfs_remove(void)

sysfs_remove_group(intel_pstate_kobject, &intel_pstate_attr_group);

if (!boot_cpu_has(X86_FEATURE_HYBRID_CPU)) {
sysfs_remove_file(intel_pstate_kobject, &num_pstates.attr);
sysfs_remove_file(intel_pstate_kobject, &turbo_pct.attr);
}

if (!per_cpu_limits) {
sysfs_remove_file(intel_pstate_kobject, &max_perf_pct.attr);
sysfs_remove_file(intel_pstate_kobject, &min_perf_pct.attr);

@@ -1713,19 +1889,33 @@ static void intel_pstate_max_within_limits(struct cpudata *cpu)

static void intel_pstate_get_cpu_pstates(struct cpudata *cpu)
{
bool hybrid_cpu = boot_cpu_has(X86_FEATURE_HYBRID_CPU);
int perf_ctl_max_phys = pstate_funcs.get_max_physical();
int perf_ctl_scaling = hybrid_cpu ? cpu_khz / perf_ctl_max_phys :
pstate_funcs.get_scaling();

cpu->pstate.min_pstate = pstate_funcs.get_min();
cpu->pstate.max_pstate_physical = pstate_funcs.get_max_physical();
cpu->pstate.scaling = pstate_funcs.get_scaling();
cpu->pstate.max_pstate_physical = perf_ctl_max_phys;
cpu->pstate.perf_ctl_scaling = perf_ctl_scaling;

if (hwp_active && !hwp_mode_bdw) {
__intel_pstate_get_hwp_cap(cpu);

if (hybrid_cpu)
intel_pstate_hybrid_hwp_calibrate(cpu);
else
cpu->pstate.scaling = perf_ctl_scaling;
} else {
cpu->pstate.scaling = perf_ctl_scaling;
cpu->pstate.max_pstate = pstate_funcs.get_max();
cpu->pstate.turbo_pstate = pstate_funcs.get_turbo();
}

cpu->pstate.max_freq = cpu->pstate.max_pstate * cpu->pstate.scaling;
cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * cpu->pstate.scaling;
if (cpu->pstate.scaling == perf_ctl_scaling) {
cpu->pstate.min_freq = cpu->pstate.min_pstate * perf_ctl_scaling;
cpu->pstate.max_freq = cpu->pstate.max_pstate * perf_ctl_scaling;
cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * perf_ctl_scaling;
}

if (pstate_funcs.get_aperf_mperf_shift)
cpu->aperf_mperf_shift = pstate_funcs.get_aperf_mperf_shift();
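For a concrete, hypothetical illustration of the hybrid branch above: with cpu_khz = 3,000,000 and pstate_funcs.get_max_physical() returning 30, perf_ctl_scaling becomes 3,000,000 / 30 = 100,000 kHz per PERF_CTL P-state, whereas a non-hybrid CPU keeps using pstate_funcs.get_scaling() as before. The min/max/turbo frequencies are then derived directly from the P-state numbers only when the HWP scaling factor ends up equal to that PERF_CTL factor; otherwise the calibrated values set by intel_pstate_hybrid_hwp_calibrate() are left in place.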
@@ -2087,6 +2277,8 @@ static const struct x86_cpu_id intel_pstate_cpu_ids[] = {
X86_MATCH(ATOM_GOLDMONT, core_funcs),
X86_MATCH(ATOM_GOLDMONT_PLUS, core_funcs),
X86_MATCH(SKYLAKE_X, core_funcs),
X86_MATCH(COMETLAKE, core_funcs),
X86_MATCH(ICELAKE_X, core_funcs),
{}
};
MODULE_DEVICE_TABLE(x86cpu, intel_pstate_cpu_ids);

@@ -2195,23 +2387,34 @@ static void intel_pstate_update_perf_limits(struct cpudata *cpu,
unsigned int policy_min,
unsigned int policy_max)
{
int scaling = cpu->pstate.scaling;
int perf_ctl_scaling = cpu->pstate.perf_ctl_scaling;
int32_t max_policy_perf, min_policy_perf;

max_policy_perf = policy_max / perf_ctl_scaling;
if (policy_max == policy_min) {
min_policy_perf = max_policy_perf;
} else {
min_policy_perf = policy_min / perf_ctl_scaling;
min_policy_perf = clamp_t(int32_t, min_policy_perf,
0, max_policy_perf);
}

/*
* HWP needs some special consideration, because HWP_REQUEST uses
* abstract values to represent performance rather than pure ratios.
*/
if (hwp_active)
if (hwp_active) {
intel_pstate_get_hwp_cap(cpu);

max_policy_perf = policy_max / scaling;
if (policy_max == policy_min) {
min_policy_perf = max_policy_perf;
} else {
min_policy_perf = policy_min / scaling;
min_policy_perf = clamp_t(int32_t, min_policy_perf,
0, max_policy_perf);
if (cpu->pstate.scaling != perf_ctl_scaling) {
int scaling = cpu->pstate.scaling;
int freq;

freq = max_policy_perf * perf_ctl_scaling;
max_policy_perf = DIV_ROUND_UP(freq, scaling);
freq = min_policy_perf * perf_ctl_scaling;
min_policy_perf = DIV_ROUND_UP(freq, scaling);
}
}

pr_debug("cpu:%d min_policy_perf:%d max_policy_perf:%d\n",

@@ -2405,7 +2608,7 @@ static int __intel_pstate_cpu_init(struct cpufreq_policy *policy)
cpu->min_perf_ratio = 0;

/* cpuinfo and default policy values */
policy->cpuinfo.min_freq = cpu->pstate.min_pstate * cpu->pstate.scaling;
policy->cpuinfo.min_freq = cpu->pstate.min_freq;
update_turbo_state();
global.turbo_disabled_mf = global.turbo_disabled;
policy->cpuinfo.max_freq = global.turbo_disabled ?

@@ -3135,6 +3338,8 @@ hwp_cpu_matched:
}

pr_info("HWP enabled\n");
} else if (boot_cpu_has(X86_FEATURE_HYBRID_CPU)) {
pr_warn("Problematic setup: Hybrid processor with disabled HWP\n");
}

return 0;
@@ -16,7 +16,6 @@
#include <linux/cpufreq.h>
#include <linux/module.h>
#include <linux/err.h>
#include <linux/sched.h> /* set_cpus_allowed() */
#include <linux/delay.h>
#include <linux/platform_device.h>

@@ -42,6 +42,7 @@ static unsigned int sc520_freq_get_cpu_frequency(unsigned int cpu)
default:
pr_err("error: cpuctl register has unexpected value %02x\n",
clockspeed_reg);
fallthrough;
case 0x01:
return 100000;
case 0x02:

@@ -23,7 +23,6 @@
#include <linux/cpumask.h>
#include <linux/cpu.h>
#include <linux/smp.h>
#include <linux/sched.h> /* set_cpus_allowed() */
#include <linux/clk.h>
#include <linux/percpu.h>
#include <linux/sh_clk.h>
@@ -2,47 +2,103 @@
/*
* Timer events oriented CPU idle governor
*
* Copyright (C) 2018 Intel Corporation
* Copyright (C) 2018 - 2021 Intel Corporation
* Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
*/

/**
* DOC: teo-description
*
* The idea of this governor is based on the observation that on many systems
* timer events are two or more orders of magnitude more frequent than any
* other interrupts, so they are likely to be the most significant source of CPU
* other interrupts, so they are likely to be the most significant cause of CPU
* wakeups from idle states. Moreover, information about what happened in the
* (relatively recent) past can be used to estimate whether or not the deepest
* idle state with target residency within the time to the closest timer is
* likely to be suitable for the upcoming idle time of the CPU and, if not, then
* which of the shallower idle states to choose.
* idle state with target residency within the (known) time till the closest
* timer event, referred to as the sleep length, is likely to be suitable for
* the upcoming CPU idle period and, if not, then which of the shallower idle
* states to choose instead of it.
*
* Of course, non-timer wakeup sources are more important in some use cases and
* they can be covered by taking a few most recent idle time intervals of the
* CPU into account. However, even in that case it is not necessary to consider
* idle duration values greater than the time till the closest timer, as the
* patterns that they may belong to produce average values close enough to
* the time till the closest timer (sleep length) anyway.
* Of course, non-timer wakeup sources are more important in some use cases
* which can be covered by taking a few most recent idle time intervals of the
* CPU into account. However, even in that context it is not necessary to
* consider idle duration values greater than the sleep length, because the
* closest timer will ultimately wake up the CPU anyway unless it is woken up
* earlier.
*
* Thus this governor estimates whether or not the upcoming idle time of the CPU
* is likely to be significantly shorter than the sleep length and selects an
* idle state for it in accordance with that, as follows:
* Thus this governor estimates whether or not the prospective idle duration of
* a CPU is likely to be significantly shorter than the sleep length and selects
* an idle state for it accordingly.
*
* - Find an idle state on the basis of the sleep length and state statistics
* collected over time:
* The computations carried out by this governor are based on using bins whose
* boundaries are aligned with the target residency parameter values of the CPU
* idle states provided by the %CPUIdle driver in the ascending order. That is,
* the first bin spans from 0 up to, but not including, the target residency of
* the second idle state (idle state 1), the second bin spans from the target
* residency of idle state 1 up to, but not including, the target residency of
* idle state 2, the third bin spans from the target residency of idle state 2
* up to, but not including, the target residency of idle state 3 and so on.
* The last bin spans from the target residency of the deepest idle state
* supplied by the driver to infinity.
*
* o Find the deepest idle state whose target residency is less than or equal
* to the sleep length.
* Two metrics called "hits" and "intercepts" are associated with each bin.
* They are updated every time before selecting an idle state for the given CPU
* in accordance with what happened last time.
*
* o Select it if it matched both the sleep length and the observed idle
* duration in the past more often than it matched the sleep length alone
* (i.e. the observed idle duration was significantly shorter than the sleep
* length matched by it).
* The "hits" metric reflects the relative frequency of situations in which the
* sleep length and the idle duration measured after CPU wakeup fall into the
* same bin (that is, the CPU appears to wake up "on time" relative to the sleep
* length). In turn, the "intercepts" metric reflects the relative frequency of
* situations in which the measured idle duration is so much shorter than the
* sleep length that the bin it falls into corresponds to an idle state
* shallower than the one whose bin is fallen into by the sleep length (these
* situations are referred to as "intercepts" below).
*
* o Otherwise, select the shallower state with the greatest matched "early"
* wakeups metric.
* In addition to the metrics described above, the governor counts recent
* intercepts (that is, intercepts that have occurred during the last
* %NR_RECENT invocations of it for the given CPU) for each bin.
*
* - If the majority of the most recent idle duration values are below the
* target residency of the idle state selected so far, use those values to
* compute the new expected idle duration and find an idle state matching it
* (which has to be shallower than the one selected so far).
* In order to select an idle state for a CPU, the governor takes the following
* steps (modulo the possible latency constraint that must be taken into account
* too):
*
* 1. Find the deepest CPU idle state whose target residency does not exceed
* the current sleep length (the candidate idle state) and compute 3 sums as
* follows:
*
* - The sum of the "hits" and "intercepts" metrics for the candidate state
* and all of the deeper idle states (it represents the cases in which the
* CPU was idle long enough to avoid being intercepted if the sleep length
* had been equal to the current one).
*
* - The sum of the "intercepts" metrics for all of the idle states shallower
* than the candidate one (it represents the cases in which the CPU was not
* idle long enough to avoid being intercepted if the sleep length had been
* equal to the current one).
*
* - The sum of the numbers of recent intercepts for all of the idle states
* shallower than the candidate one.
*
* 2. If the second sum is greater than the first one or the third sum is
* greater than %NR_RECENT / 2, the CPU is likely to wake up early, so look
* for an alternative idle state to select.
*
* - Traverse the idle states shallower than the candidate one in the
* descending order.
*
* - For each of them compute the sum of the "intercepts" metrics and the sum
* of the numbers of recent intercepts over all of the idle states between
* it and the candidate one (including the former and excluding the
* latter).
*
* - If each of these sums that needs to be taken into account (because the
* check related to it has indicated that the CPU is likely to wake up
* early) is greater than a half of the corresponding sum computed in step
* 1 (which means that the target residency of the state in question had
* not exceeded the idle duration in over a half of the relevant cases),
* select the given idle state instead of the candidate one.
*
* 3. By default, select the candidate state.
*/

#include <linux/cpuidle.h>
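A compact sketch of steps 1-3 above may help when reading teo_select() further down. This is illustrative only, with assumed types and a stand-in for %NR_RECENT; the real code additionally deals with disabled states, the latency constraint and the scheduler tick.

#include <stdbool.h>

/* One bin per idle state, as described above (sketch types only). */
struct teo_sketch_bin {
	unsigned int intercepts;
	unsigned int hits;
	unsigned int recent;
};

#define TEO_SKETCH_NR_RECENT 9	/* stand-in for NR_RECENT */

/* @candidate: deepest state whose target residency fits the sleep length. */
static int teo_sketch_select(const struct teo_sketch_bin *bins,
			     int state_count, int candidate)
{
	unsigned int deep_sum = 0, idx_intercepts = 0, idx_recent = 0;
	unsigned int intercepts = 0, recent = 0;
	bool alt_intercepts, alt_recent;
	int i;

	/* Step 1: the three sums. */
	for (i = 0; i < state_count; i++) {
		if (i < candidate) {
			idx_intercepts += bins[i].intercepts;
			idx_recent += bins[i].recent;
		} else {
			deep_sum += bins[i].hits + bins[i].intercepts;
		}
	}

	/* Step 2: does either check point at an early wakeup? */
	alt_intercepts = idx_intercepts > deep_sum;
	alt_recent = idx_recent > TEO_SKETCH_NR_RECENT / 2;
	if (!alt_intercepts && !alt_recent)
		return candidate;	/* step 3: default choice */

	/* Walk the shallower states from the deepest one down. */
	for (i = candidate - 1; i >= 0; i--) {
		intercepts += bins[i].intercepts;
		recent += bins[i].recent;

		if ((!alt_intercepts || 2 * intercepts > idx_intercepts) &&
		    (!alt_recent || 2 * recent > idx_recent))
			return i;
	}

	return candidate;
}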
@@ -60,65 +116,51 @@

/*
* Number of the most recent idle duration values to take into consideration for
* the detection of wakeup patterns.
* the detection of recent early wakeup patterns.
*/
#define INTERVALS 8
#define NR_RECENT 9

/**
* struct teo_idle_state - Idle state data used by the TEO cpuidle governor.
* @early_hits: "Early" CPU wakeups "matching" this state.
* @hits: "On time" CPU wakeups "matching" this state.
* @misses: CPU wakeups "missing" this state.
*
* A CPU wakeup is "matched" by a given idle state if the idle duration measured
* after the wakeup is between the target residency of that state and the target
* residency of the next one (or if this is the deepest available idle state, it
* "matches" a CPU wakeup when the measured idle duration is at least equal to
* its target residency).
*
* Also, from the TEO governor perspective, a CPU wakeup from idle is "early" if
* it occurs significantly earlier than the closest expected timer event (that
* is, early enough to match an idle state shallower than the one matching the
* time till the closest timer event). Otherwise, the wakeup is "on time", or
* it is a "hit".
*
* A "miss" occurs when the given state doesn't match the wakeup, but it matches
* the time till the closest timer event used for idle state selection.
* struct teo_bin - Metrics used by the TEO cpuidle governor.
* @intercepts: The "intercepts" metric.
* @hits: The "hits" metric.
* @recent: The number of recent "intercepts".
*/
struct teo_idle_state {
unsigned int early_hits;
struct teo_bin {
unsigned int intercepts;
unsigned int hits;
unsigned int misses;
unsigned int recent;
};

/**
* struct teo_cpu - CPU data used by the TEO cpuidle governor.
* @time_span_ns: Time between idle state selection and post-wakeup update.
* @sleep_length_ns: Time till the closest timer event (at the selection time).
* @states: Idle states data corresponding to this CPU.
* @interval_idx: Index of the most recent saved idle interval.
* @intervals: Saved idle duration values.
* @state_bins: Idle state data bins for this CPU.
* @total: Grand total of the "intercepts" and "hits" mertics for all bins.
* @next_recent_idx: Index of the next @recent_idx entry to update.
* @recent_idx: Indices of bins corresponding to recent "intercepts".
*/
struct teo_cpu {
s64 time_span_ns;
s64 sleep_length_ns;
struct teo_idle_state states[CPUIDLE_STATE_MAX];
int interval_idx;
u64 intervals[INTERVALS];
struct teo_bin state_bins[CPUIDLE_STATE_MAX];
unsigned int total;
int next_recent_idx;
int recent_idx[NR_RECENT];
};

static DEFINE_PER_CPU(struct teo_cpu, teo_cpus);

/**
* teo_update - Update CPU data after wakeup.
* teo_update - Update CPU metrics after wakeup.
* @drv: cpuidle driver containing state data.
* @dev: Target CPU.
*/
static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
{
struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
int i, idx_hit = 0, idx_timer = 0;
unsigned int hits, misses;
int i, idx_timer = 0, idx_duration = 0;
u64 measured_ns;

if (cpu_data->time_span_ns >= cpu_data->sleep_length_ns) {

@@ -151,53 +193,52 @@ static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
measured_ns /= 2;
}

cpu_data->total = 0;

/*
* Decay the "early hits" metric for all of the states and find the
* states matching the sleep length and the measured idle duration.
* Decay the "hits" and "intercepts" metrics for all of the bins and
* find the bins that the sleep length and the measured idle duration
* fall into.
*/
for (i = 0; i < drv->state_count; i++) {
unsigned int early_hits = cpu_data->states[i].early_hits;
s64 target_residency_ns = drv->states[i].target_residency_ns;
struct teo_bin *bin = &cpu_data->state_bins[i];

cpu_data->states[i].early_hits -= early_hits >> DECAY_SHIFT;
bin->hits -= bin->hits >> DECAY_SHIFT;
bin->intercepts -= bin->intercepts >> DECAY_SHIFT;

if (drv->states[i].target_residency_ns <= cpu_data->sleep_length_ns) {
cpu_data->total += bin->hits + bin->intercepts;

if (target_residency_ns <= cpu_data->sleep_length_ns) {
idx_timer = i;
if (drv->states[i].target_residency_ns <= measured_ns)
idx_hit = i;
if (target_residency_ns <= measured_ns)
idx_duration = i;
}
}

i = cpu_data->next_recent_idx++;
if (cpu_data->next_recent_idx >= NR_RECENT)
cpu_data->next_recent_idx = 0;

if (cpu_data->recent_idx[i] >= 0)
cpu_data->state_bins[cpu_data->recent_idx[i]].recent--;

/*
* Update the "hits" and "misses" data for the state matching the sleep
* length. If it matches the measured idle duration too, this is a hit,
* so increase the "hits" metric for it then. Otherwise, this is a
* miss, so increase the "misses" metric for it. In the latter case
* also increase the "early hits" metric for the state that actually
* matches the measured idle duration.
* If the measured idle duration falls into the same bin as the sleep
* length, this is a "hit", so update the "hits" metric for that bin.
* Otherwise, update the "intercepts" metric for the bin fallen into by
* the measured idle duration.
*/
hits = cpu_data->states[idx_timer].hits;
hits -= hits >> DECAY_SHIFT;

misses = cpu_data->states[idx_timer].misses;
misses -= misses >> DECAY_SHIFT;

if (idx_timer == idx_hit) {
hits += PULSE;
if (idx_timer == idx_duration) {
cpu_data->state_bins[idx_timer].hits += PULSE;
cpu_data->recent_idx[i] = -1;
} else {
misses += PULSE;
cpu_data->states[idx_hit].early_hits += PULSE;
cpu_data->state_bins[idx_duration].intercepts += PULSE;
cpu_data->state_bins[idx_duration].recent++;
cpu_data->recent_idx[i] = idx_duration;
}

cpu_data->states[idx_timer].misses = misses;
cpu_data->states[idx_timer].hits = hits;

/*
* Save idle duration values corresponding to non-timer wakeups for
* pattern detection.
*/
cpu_data->intervals[cpu_data->interval_idx++] = measured_ns;
if (cpu_data->interval_idx >= INTERVALS)
cpu_data->interval_idx = 0;
cpu_data->total += PULSE;
}

static bool teo_time_ok(u64 interval_ns)
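To get a feel for the decay arithmetic in teo_update(), assuming the governor's usual values of DECAY_SHIFT = 3 and PULSE = 1024 (both defined outside this hunk): a bin whose metric is currently 4096 loses 4096 >> 3 = 512 on every update and gains 1024 whenever the wakeup falls into it. A bin that is hit on every update therefore converges to the fixed point of x - x/8 + 1024 = x, which is 8192, while a bin that is never hit decays geometrically towards zero.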
@@ -205,6 +246,12 @@ static bool teo_time_ok(u64 interval_ns)
return !tick_nohz_tick_stopped() || interval_ns >= TICK_NSEC;
}

static s64 teo_middle_of_bin(int idx, struct cpuidle_driver *drv)
{
return (drv->states[idx].target_residency_ns +
drv->states[idx+1].target_residency_ns) / 2;
}

/**
* teo_find_shallower_state - Find shallower idle state matching given duration.
* @drv: cpuidle driver containing state data.

@@ -240,10 +287,18 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
{
struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
s64 latency_req = cpuidle_governor_latency_req(dev->cpu);
int max_early_idx, prev_max_early_idx, constraint_idx, idx0, idx, i;
unsigned int hits, misses, early_hits;
unsigned int idx_intercept_sum = 0;
unsigned int intercept_sum = 0;
unsigned int idx_recent_sum = 0;
unsigned int recent_sum = 0;
unsigned int idx_hit_sum = 0;
unsigned int hit_sum = 0;
int constraint_idx = 0;
int idx0 = 0, idx = -1;
bool alt_intercepts, alt_recent;
ktime_t delta_tick;
s64 duration_ns;
int i;

if (dev->last_state_idx >= 0) {
teo_update(drv, dev);

@@ -255,170 +310,135 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
duration_ns = tick_nohz_get_sleep_length(&delta_tick);
cpu_data->sleep_length_ns = duration_ns;

hits = 0;
misses = 0;
early_hits = 0;
max_early_idx = -1;
prev_max_early_idx = -1;
constraint_idx = drv->state_count;
idx = -1;
idx0 = idx;
/* Check if there is any choice in the first place. */
if (drv->state_count < 2) {
idx = 0;
goto end;
}
if (!dev->states_usage[0].disable) {
idx = 0;
if (drv->states[1].target_residency_ns > duration_ns)
goto end;
}

for (i = 0; i < drv->state_count; i++) {
/*
* Find the deepest idle state whose target residency does not exceed
* the current sleep length and the deepest idle state not deeper than
* the former whose exit latency does not exceed the current latency
* constraint. Compute the sums of metrics for early wakeup pattern
* detection.
*/
for (i = 1; i < drv->state_count; i++) {
struct teo_bin *prev_bin = &cpu_data->state_bins[i-1];
struct cpuidle_state *s = &drv->states[i];

if (dev->states_usage[i].disable) {
/*
* Ignore disabled states with target residencies beyond
* the anticipated idle duration.
*/
if (s->target_residency_ns > duration_ns)
continue;

/*
* This state is disabled, so the range of idle duration
* values corresponding to it is covered by the current
* candidate state, but still the "hits" and "misses"
* metrics of the disabled state need to be used to
* decide whether or not the state covering the range in
* question is good enough.
*/
hits = cpu_data->states[i].hits;
misses = cpu_data->states[i].misses;

if (early_hits >= cpu_data->states[i].early_hits ||
idx < 0)
continue;

/*
* If the current candidate state has been the one with
* the maximum "early hits" metric so far, the "early
* hits" metric of the disabled state replaces the
* current "early hits" count to avoid selecting a
* deeper state with lower "early hits" metric.
*/
if (max_early_idx == idx) {
early_hits = cpu_data->states[i].early_hits;
continue;
}

/*
* The current candidate state is closer to the disabled
* one than the current maximum "early hits" state, so
* replace the latter with it, but in case the maximum
* "early hits" state index has not been set so far,
* check if the current candidate state is not too
* shallow for that role.
*/
if (teo_time_ok(drv->states[idx].target_residency_ns)) {
prev_max_early_idx = max_early_idx;
early_hits = cpu_data->states[i].early_hits;
max_early_idx = idx;
}
/*
* Update the sums of idle state mertics for all of the states
* shallower than the current one.
*/
intercept_sum += prev_bin->intercepts;
hit_sum += prev_bin->hits;
recent_sum += prev_bin->recent;

if (dev->states_usage[i].disable)
continue;
}

if (idx < 0) {
idx = i; /* first enabled state */
hits = cpu_data->states[i].hits;
misses = cpu_data->states[i].misses;
idx0 = i;
}

if (s->target_residency_ns > duration_ns)
break;

if (s->exit_latency_ns > latency_req && constraint_idx > i)
idx = i;

if (s->exit_latency_ns <= latency_req)
constraint_idx = i;

idx = i;
hits = cpu_data->states[i].hits;
misses = cpu_data->states[i].misses;

if (early_hits < cpu_data->states[i].early_hits &&
teo_time_ok(drv->states[i].target_residency_ns)) {
prev_max_early_idx = max_early_idx;
early_hits = cpu_data->states[i].early_hits;
max_early_idx = i;
}
idx_intercept_sum = intercept_sum;
idx_hit_sum = hit_sum;
idx_recent_sum = recent_sum;
}

/*
* If the "hits" metric of the idle state matching the sleep length is
* greater than its "misses" metric, that is the one to use. Otherwise,
* it is more likely that one of the shallower states will match the
* idle duration observed after wakeup, so take the one with the maximum
* "early hits" metric, but if that cannot be determined, just use the
* state selected so far.
*/
if (hits <= misses) {
/*
* The current candidate state is not suitable, so take the one
* whose "early hits" metric is the maximum for the range of
* shallower states.
*/
if (idx == max_early_idx)
max_early_idx = prev_max_early_idx;

if (max_early_idx >= 0) {
idx = max_early_idx;
duration_ns = drv->states[idx].target_residency_ns;
}
}

/*
* If there is a latency constraint, it may be necessary to use a
* shallower idle state than the one selected so far.
*/
if (constraint_idx < idx)
idx = constraint_idx;

/* Avoid unnecessary overhead. */
if (idx < 0) {
idx = 0; /* No states enabled. Must use 0. */
} else if (idx > idx0) {
unsigned int count = 0;
u64 sum = 0;
idx = 0; /* No states enabled, must use 0. */
goto end;
} else if (idx == idx0) {
goto end;
}

/*
* If the sum of the intercepts metric for all of the idle states
* shallower than the current candidate one (idx) is greater than the
* sum of the intercepts and hits metrics for the candidate state and
* all of the deeper states, or the sum of the numbers of recent
* intercepts over all of the states shallower than the candidate one
* is greater than a half of the number of recent events taken into
* account, the CPU is likely to wake up early, so find an alternative
* idle state to select.
*/
alt_intercepts = 2 * idx_intercept_sum > cpu_data->total - idx_hit_sum;
alt_recent = idx_recent_sum > NR_RECENT / 2;
if (alt_recent || alt_intercepts) {
s64 last_enabled_span_ns = duration_ns;
int last_enabled_idx = idx;

/*
* The target residencies of at least two different enabled idle
* states are less than or equal to the current expected idle
* duration. Try to refine the selection using the most recent
* measured idle duration values.
* Look for the deepest idle state whose target residency had
* not exceeded the idle duration in over a half of the relevant
* cases (both with respect to intercepts overall and with
* respect to the recent intercepts only) in the past.
*
* Count and sum the most recent idle duration values less than
* the current expected idle duration value.
* Take the possible latency constraint and duration limitation
* present if the tick has been stopped already into account.
*/
for (i = 0; i < INTERVALS; i++) {
u64 val = cpu_data->intervals[i];
intercept_sum = 0;
recent_sum = 0;

if (val >= duration_ns)
for (i = idx - 1; i >= idx0; i--) {
struct teo_bin *bin = &cpu_data->state_bins[i];
s64 span_ns;

intercept_sum += bin->intercepts;
recent_sum += bin->recent;

if (dev->states_usage[i].disable)
continue;

count++;
sum += val;
}

/*
* Give up unless the majority of the most recent idle duration
* values are in the interesting range.
*/
if (count > INTERVALS / 2) {
u64 avg_ns = div64_u64(sum, count);

/*
* Avoid spending too much time in an idle state that
* would be too shallow.
*/
if (teo_time_ok(avg_ns)) {
duration_ns = avg_ns;
if (drv->states[idx].target_residency_ns > avg_ns)
idx = teo_find_shallower_state(drv, dev,
idx, avg_ns);
span_ns = teo_middle_of_bin(i, drv);
if (!teo_time_ok(span_ns)) {
/*
* The current state is too shallow, so select
* the first enabled deeper state.
*/
duration_ns = last_enabled_span_ns;
idx = last_enabled_idx;
break;
}

if ((!alt_recent || 2 * recent_sum > idx_recent_sum) &&
(!alt_intercepts ||
2 * intercept_sum > idx_intercept_sum)) {
idx = i;
duration_ns = span_ns;
break;
}

last_enabled_span_ns = span_ns;
last_enabled_idx = i;
}
}

/*
* If there is a latency constraint, it may be necessary to select an
* idle state shallower than the current candidate one.
*/
if (idx > constraint_idx)
idx = constraint_idx;

end:
/*
* Don't stop the tick if the selected state is a polling one or if the
* expected idle duration is shorter than the tick period length.

@@ -478,8 +498,8 @@ static int teo_enable_device(struct cpuidle_driver *drv,

memset(cpu_data, 0, sizeof(*cpu_data));

for (i = 0; i < INTERVALS; i++)
cpu_data->intervals[i] = U64_MAX;
for (i = 0; i < NR_RECENT; i++)
cpu_data->recent_idx[i] = -1;

return 0;
}
@@ -1484,6 +1484,36 @@ static void __init sklh_idle_state_table_update(void)
skl_cstates[6].flags |= CPUIDLE_FLAG_UNUSABLE; /* C9-SKL */
}

/**
* skx_idle_state_table_update - Adjust the Sky Lake/Cascade Lake
* idle states table.
*/
static void __init skx_idle_state_table_update(void)
{
unsigned long long msr;

rdmsrl(MSR_PKG_CST_CONFIG_CONTROL, msr);

/*
* 000b: C0/C1 (no package C-state support)
* 001b: C2
* 010b: C6 (non-retention)
* 011b: C6 (retention)
* 111b: No Package C state limits.
*/
if ((msr & 0x7) < 2) {
/*
* Uses the CC6 + PC0 latency and 3 times of
* latency for target_residency if the PC6
* is disabled in BIOS. This is consistent
* with how intel_idle driver uses _CST
* to set the target_residency.
*/
skx_cstates[2].exit_latency = 92;
skx_cstates[2].target_residency = 276;
}
}

static bool __init intel_idle_verify_cstate(unsigned int mwait_hint)
{
unsigned int mwait_cstate = MWAIT_HINT2CSTATE(mwait_hint) + 1;

@@ -1515,6 +1545,9 @@ static void __init intel_idle_init_cstates_icpu(struct cpuidle_driver *drv)
case INTEL_FAM6_SKYLAKE:
sklh_idle_state_table_update();
break;
case INTEL_FAM6_SKYLAKE_X:
skx_idle_state_table_update();
break;
}

for (cstate = 0; cstate < CPUIDLE_STATE_MAX; ++cstate) {
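The limit values listed in the comment above come from the low three bits of MSR_PKG_CST_CONFIG_CONTROL. A minimal illustration of the decode, following only the encoding quoted in the hunk, might look like:

/* Sketch: name the package C-state limit encoded in the MSR's low 3 bits. */
static const char *pkg_cstate_limit_name(unsigned long long msr)
{
	switch (msr & 0x7) {
	case 0: return "C0/C1 (no package C-state support)";
	case 1: return "C2";
	case 2: return "C6 (non-retention)";
	case 3: return "C6 (retention)";
	case 7: return "no package C-state limit";
	default: return "reserved";
	}
}

Values 0 and 1 are the configurations for which the hunk raises the SKX C6 exit latency and target residency, since package C6 cannot be reached there.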