Power management and ACPI fixes for 3.10-rc2

 - intel_pstate driver fixes and cleanups from Dirk Brandewie and
   Wei Yongjun.
 
 - cpufreq fixes related to ARM big.LITTLE support and the
   cpufreq-cpu0 driver from Viresh Kumar.
 
 - Assorted cpufreq fixes from Srivatsa S. Bhat, Borislav Petkov,
   Wolfram Sang, Alexander Shiyan, and Nishanth Menon.
 
 - Assorted ACPI fixes from Catalin Marinas, Lan Tianyu, Alex Hung,
   Jan-Simon Möller, and Rafael J. Wysocki.
 
 - Fix for a kfree() under spinlock in the PM core from Shuah Khan.
 
 - PM documentation updates from Borislav Petkov and Zhang Rui.
 
 /
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.19 (GNU/Linux)
 
 iQIcBAABAgAGBQJRlTuzAAoJEKhOf7ml8uNsJJkQAJyb/pNEAi3/MuooSkWGt0Uq
 P5RCQ0RtZtXkpzmddBySy+9biJ81gDzr1tAitjGLPCkVNJQsn9EZ07XkxUIjQ44I
 /h5cGRltGVStMoh+xnEAQVGJt+KTF4G8AQP3ookJ7VcfDCzKkQn33Nhg4JY6vYpv
 cTT0UU0CtNeH8V0sEP/ydqy7bcN0Zt1F2xkftp4jzj1RZ0jiBSWgqITZOSidoKIR
 SMIRPRb3R2u0JpYWchQhJEKOBcd30aK+pyz8k4d1hnjdcQ7MuvlKV/XsdkMMYN0X
 CZ5nwF9UW+vhXQVqAgCfquHXRSEYD2DmkQx30+wDwuXXGhr9s4rD2rXFhu9bEI/r
 Gn9ksE/kbZI1cGlewTKKl0Xq3HOjxrjFWYOL2lgCr39EnyyTvzlNA35pYCALN8n+
 1SSB7bGsSf5kftxLPMf7RQzWFLI9O1rdYUtkoomuMIxwwKlCTGUKi5zFHahfHyRx
 MfRiH9J5mzZaGY2chnBCsHP0tvfhMyRAnziS62FYH48fmOhskr88O3/T4m/mYDr+
 +taU3Bc6H6JmffP6s6/vWNJwxOE8FUh8awwTVGrSr8WmkQ1zvdbiQO5vaWmoC7H7
 5F0YbI24CVaTpDGYuKLORZjFiKexKaGHJzx0BXQZZijqma0FPre8Xw1ePtoqwb6r
 ux/GeFm0vBIyPRWwHjZz
 =AY9g
 -----END PGP SIGNATURE-----

Merge tag 'pm+acpi-3.10-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management and ACPI fixes from Rafael Wysocki:

 - intel_pstate driver fixes and cleanups from Dirk Brandewie and Wei
   Yongjun.

 - cpufreq fixes related to ARM big.LITTLE support and the cpufreq-cpu0
   driver from Viresh Kumar.

 - Assorted cpufreq fixes from Srivatsa S Bhat, Borislav Petkov, Wolfram
   Sang, Alexander Shiyan, and Nishanth Menon.

 - Assorted ACPI fixes from Catalin Marinas, Lan Tianyu, Alex Hung,
   Jan-Simon Möller, and Rafael J Wysocki.

 - Fix for a kfree() under spinlock in the PM core from Shuah Khan.

 - PM documentation updates from Borislav Petkov and Zhang Rui.

* tag 'pm+acpi-3.10-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (30 commits)
  cpufreq: Preserve sysfs files across suspend/resume
  ACPI / scan: Fix memory leak on acpi_scan_init_hotplug() error path
  PM / hibernate: Correct documentation
  PM / Documentation: remove inaccurate suspend/hibernate transition latency statement
  PM: Documentation update for freeze state
  cpufreq / intel_pstate: use vzalloc() instead of vmalloc()/memset(0)
  cpufreq, ondemand: Remove leftover debug line
  PM: Avoid calling kfree() under spinlock in dev_pm_put_subsys_data()
  cpufreq / kirkwood: don't check resource with devm_ioremap_resource
  cpufreq / intel_pstate: remove #ifdef MODULE compile fence
  cpufreq / intel_pstate: Remove idle mode PID
  cpufreq / intel_pstate: fix ffmpeg regression
  cpufreq / intel_pstate: use lowest requested max performance
  cpufreq / intel_pstate: remove idle time and duration from sample and calculations
  cpufreq: Fix incorrect dependencies for ARM SA11xx drivers
  cpufreq: ARM big LITTLE: Fix Kconfig entries
  cpufreq: cpufreq-cpu0: Free parent node for error cases
  cpufreq: cpufreq-cpu0: defer probe when regulator is not ready
  cpufreq: Issue CPUFREQ_GOV_POLICY_EXIT notifier before dropping policy refcount
  cpufreq: governors: Fix CPUFREQ_GOV_POLICY_{INIT|EXIT} notifiers
  ...
Linus Torvalds 2013-05-16 15:12:34 -07:00
Parents 8968216574 49a9e4315d
Commit d5fe85af85
26 changed files: 189 additions and 194 deletions


@ -268,7 +268,7 @@ situations.
System Power Management Phases
------------------------------
Suspending or resuming the system is done in several phases. Different phases
are used for standby or memory sleep states ("suspend-to-RAM") and the
are used for freeze, standby, and memory sleep states ("suspend-to-RAM") and the
hibernation state ("suspend-to-disk"). Each phase involves executing callbacks
for every device before the next phase begins. Not all busses or classes
support all these callbacks and not all drivers use all the callbacks. The
@ -309,7 +309,8 @@ execute the corresponding method from dev->driver->pm instead if there is one.
Entering System Suspend
-----------------------
When the system goes into the standby or memory sleep state, the phases are:
When the system goes into the freeze, standby or memory sleep state,
the phases are:
prepare, suspend, suspend_late, suspend_noirq.
@ -368,7 +369,7 @@ the devices that were suspended.
Leaving System Suspend
----------------------
When resuming from standby or memory sleep, the phases are:
When resuming from freeze, standby or memory sleep, the phases are:
resume_noirq, resume_early, resume, complete.
@ -433,8 +434,8 @@ the system log.
Entering Hibernation
--------------------
Hibernating the system is more complicated than putting it into the standby or
memory sleep state, because it involves creating and saving a system image.
Hibernating the system is more complicated than putting it into the other
sleep states, because it involves creating and saving a system image.
Therefore there are more phases for hibernation, with a different set of
callbacks. These phases always run after tasks have been frozen and memory has
been freed.
@ -485,8 +486,8 @@ image forms an atomic snapshot of the system state.
At this point the system image is saved, and the devices then need to be
prepared for the upcoming system shutdown. This is much like suspending them
before putting the system into the standby or memory sleep state, and the phases
are similar.
before putting the system into the freeze, standby or memory sleep state,
and the phases are similar.
9. The prepare phase is discussed above.
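To make the phase ordering above concrete, here is a minimal, hypothetical sketch (not code from this pull) of how a platform driver could hook those phases through dev_pm_ops; the "foo" names and the empty callback bodies are placeholders.

#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/pm.h>

/* "prepare" phase: runs before any suspend callback of this device */
static int foo_prepare(struct device *dev)
{
	return 0;
}

/* "suspend" phase: quiesce the device; may still sleep and take locks */
static int foo_suspend(struct device *dev)
{
	return 0;
}

/* "suspend_noirq" phase: runs after the device's interrupt handlers are disabled */
static int foo_suspend_noirq(struct device *dev)
{
	return 0;
}

/* "resume" phase: bring the device back to a working state */
static int foo_resume(struct device *dev)
{
	return 0;
}

/* "complete" phase: undo whatever "prepare" did */
static void foo_complete(struct device *dev)
{
}

static const struct dev_pm_ops foo_pm_ops = {
	.prepare	= foo_prepare,
	.suspend	= foo_suspend,
	.suspend_noirq	= foo_suspend_noirq,
	.resume		= foo_resume,
	.complete	= foo_complete,
};

static struct platform_driver foo_driver = {
	.driver = {
		.name	= "foo",
		.owner	= THIS_MODULE,
		.pm	= &foo_pm_ops,
	},
};
module_platform_driver(foo_driver);

MODULE_LICENSE("GPL");

The PM core then invokes these callbacks for every registered device in the prepare, suspend, suspend_late, suspend_noirq order on the way down, and resume_noirq, resume_early, resume, complete on the way back up, as described in the text.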


@ -7,8 +7,8 @@ running. The interface exists in /sys/power/ directory (assuming sysfs
is mounted at /sys).
/sys/power/state controls system power state. Reading from this file
returns what states are supported, which is hard-coded to 'standby'
(Power-On Suspend), 'mem' (Suspend-to-RAM), and 'disk'
returns what states are supported, which is hard-coded to 'freeze',
'standby' (Power-On Suspend), 'mem' (Suspend-to-RAM), and 'disk'
(Suspend-to-Disk).
Writing to this file one of those strings causes the system to


@ -15,8 +15,10 @@ A suspend/hibernation notifier may be used for this purpose.
The subsystems or drivers having such needs can register suspend notifiers that
will be called upon the following events by the PM core:
PM_HIBERNATION_PREPARE The system is going to hibernate or suspend, tasks will
be frozen immediately.
PM_HIBERNATION_PREPARE The system is going to hibernate, tasks will be frozen
immediately. This is different from PM_SUSPEND_PREPARE
below because here we do additional work between notifiers
and drivers freezing.
PM_POST_HIBERNATION The system memory state has been restored from a
hibernation image or an error occurred during
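As an illustrative aside (not part of this series), such events are consumed by registering a notifier with register_pm_notifier(); the handler and module names below are hypothetical.

#include <linux/module.h>
#include <linux/notifier.h>
#include <linux/suspend.h>

static int foo_pm_notifier(struct notifier_block *nb, unsigned long event,
			   void *data)
{
	switch (event) {
	case PM_HIBERNATION_PREPARE:
		/* about to hibernate: tasks will be frozen shortly */
		break;
	case PM_SUSPEND_PREPARE:
		/* about to suspend */
		break;
	case PM_POST_HIBERNATION:
	case PM_POST_SUSPEND:
		/* back from (or failed to enter) the sleep state */
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block foo_pm_nb = {
	.notifier_call = foo_pm_notifier,
};

static int __init foo_init(void)
{
	return register_pm_notifier(&foo_pm_nb);
}

static void __exit foo_exit(void)
{
	unregister_pm_notifier(&foo_pm_nb);
}

module_init(foo_init);
module_exit(foo_exit);
MODULE_LICENSE("GPL");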


@ -2,12 +2,26 @@
System Power Management States
The kernel supports three power management states generically, though
each is dependent on platform support code to implement the low-level
details for each state. This file describes each state, what they are
The kernel supports four power management states generically, though
one is generic and the other three are dependent on platform support
code to implement the low-level details for each state.
This file describes each state, what they are
commonly called, what ACPI state they map to, and what string to write
to /sys/power/state to enter that state
state: Freeze / Low-Power Idle
ACPI state: S0
String: "freeze"
This state is a generic, pure software, light-weight, low-power state.
It allows more energy to be saved relative to idle by freezing user
space and putting all I/O devices into low-power states (possibly
lower-power than available at run time), such that the processors can
spend more time in their idle states.
This state can be used for platforms without Standby/Suspend-to-RAM
support, or it can be used in addition to Suspend-to-RAM (memory sleep)
to provide reduced resume latency.
State: Standby / Power-On Suspend
ACPI State: S1
@ -22,9 +36,6 @@ We try to put devices in a low-power state equivalent to D1, which
also offers low power savings, but low resume latency. Not all devices
support D1, and those that don't are left on.
A transition from Standby to the On state should take about 1-2
seconds.
State: Suspend-to-RAM
ACPI State: S3
@ -42,9 +53,6 @@ transition back to the On state.
For at least ACPI, STR requires some minimal boot-strapping code to
resume the system from STR. This may be true on other platforms.
A transition from Suspend-to-RAM to the On state should take about
3-5 seconds.
State: Suspend-to-disk
ACPI State: S4
@ -74,7 +82,3 @@ low-power state (like ACPI S4), or it may simply power down. Powering
down offers greater savings, and allows this mechanism to work on any
system. However, entering a real low-power state allows the user to
trigger wake up events (e.g. pressing a key or opening a laptop lid).
A transition from Suspend-to-Disk to the On state should take about 30
seconds, though it's typically a bit more with the current
implementation.
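A minimal user-space sketch of the interface described above (not part of this pull, and requiring root): any of the listed strings can be written to /sys/power/state where the platform supports it.

/* build: cc -o sleepstate sleepstate.c ; run as root */
#include <stdio.h>

int main(int argc, char **argv)
{
	/* default to the new "freeze" state; "standby", "mem" or "disk" also
	 * work where supported */
	const char *state = (argc > 1) ? argv[1] : "freeze";
	FILE *f = fopen("/sys/power/state", "w");

	if (!f) {
		perror("/sys/power/state");
		return 1;
	}
	if (fputs(state, f) == EOF || fclose(f) == EOF) {
		perror("write");
		return 1;
	}
	return 0;
}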


@ -28,6 +28,8 @@
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/types.h>
#include <linux/dmi.h>
#include <linux/delay.h>
#ifdef CONFIG_ACPI_PROCFS_POWER
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
@ -74,6 +76,8 @@ static int acpi_ac_resume(struct device *dev);
#endif
static SIMPLE_DEV_PM_OPS(acpi_ac_pm, NULL, acpi_ac_resume);
static int ac_sleep_before_get_state_ms;
static struct acpi_driver acpi_ac_driver = {
.name = "ac",
.class = ACPI_AC_CLASS,
@ -252,6 +256,16 @@ static void acpi_ac_notify(struct acpi_device *device, u32 event)
case ACPI_AC_NOTIFY_STATUS:
case ACPI_NOTIFY_BUS_CHECK:
case ACPI_NOTIFY_DEVICE_CHECK:
/*
* A buggy BIOS may notify AC first and then sleep for
* a specific time before doing actual operations in the
* EC event handler (_Qxx). This will cause the AC state
* reported by the ACPI event to be incorrect, so wait for a
* specific time for the EC event handler to make progress.
*/
if (ac_sleep_before_get_state_ms > 0)
msleep(ac_sleep_before_get_state_ms);
acpi_ac_get_state(ac);
acpi_bus_generate_proc_event(device, event, (u32) ac->state);
acpi_bus_generate_netlink_event(device->pnp.device_class,
@ -264,6 +278,24 @@ static void acpi_ac_notify(struct acpi_device *device, u32 event)
return;
}
static int thinkpad_e530_quirk(const struct dmi_system_id *d)
{
ac_sleep_before_get_state_ms = 1000;
return 0;
}
static struct dmi_system_id ac_dmi_table[] = {
{
.callback = thinkpad_e530_quirk,
.ident = "thinkpad e530",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
DMI_MATCH(DMI_PRODUCT_NAME, "32597CG"),
},
},
{},
};
static int acpi_ac_add(struct acpi_device *device)
{
int result = 0;
@ -312,6 +344,7 @@ static int acpi_ac_add(struct acpi_device *device)
kfree(ac);
}
dmi_check_system(ac_dmi_table);
return result;
}


@ -223,7 +223,7 @@ static int ec_check_sci_sync(struct acpi_ec *ec, u8 state)
static int ec_poll(struct acpi_ec *ec)
{
unsigned long flags;
int repeat = 2; /* number of command restarts */
int repeat = 5; /* number of command restarts */
while (repeat--) {
unsigned long delay = jiffies +
msecs_to_jiffies(ec_delay);
@ -241,8 +241,6 @@ static int ec_poll(struct acpi_ec *ec)
}
advance_transaction(ec, acpi_ec_read_status(ec));
} while (time_before(jiffies, delay));
if (acpi_ec_read_status(ec) & ACPI_EC_FLAG_IBF)
break;
pr_debug(PREFIX "controller reset, restart transaction\n");
spin_lock_irqsave(&ec->lock, flags);
start_transaction(ec);


@ -95,9 +95,6 @@ static const struct acpi_device_id processor_device_ids[] = {
};
MODULE_DEVICE_TABLE(acpi, processor_device_ids);
static SIMPLE_DEV_PM_OPS(acpi_processor_pm,
acpi_processor_suspend, acpi_processor_resume);
static struct acpi_driver acpi_processor_driver = {
.name = "processor",
.class = ACPI_PROCESSOR_CLASS,
@ -107,7 +104,6 @@ static struct acpi_driver acpi_processor_driver = {
.remove = acpi_processor_remove,
.notify = acpi_processor_notify,
},
.drv.pm = &acpi_processor_pm,
};
#define INSTALL_NOTIFY_HANDLER 1
@ -934,6 +930,8 @@ static int __init acpi_processor_init(void)
if (result < 0)
return result;
acpi_processor_syscore_init();
acpi_processor_install_hotplug_notify();
acpi_thermal_cpufreq_init();
@ -956,6 +954,8 @@ static void __exit acpi_processor_exit(void)
acpi_processor_uninstall_hotplug_notify();
acpi_processor_syscore_exit();
acpi_bus_unregister_driver(&acpi_processor_driver);
return;


@ -34,6 +34,7 @@
#include <linux/sched.h> /* need_resched() */
#include <linux/clockchips.h>
#include <linux/cpuidle.h>
#include <linux/syscore_ops.h>
/*
* Include the apic definitions for x86 to have the APIC timer related defines
@ -210,33 +211,41 @@ static void lapic_timer_state_broadcast(struct acpi_processor *pr,
#endif
#ifdef CONFIG_PM_SLEEP
static u32 saved_bm_rld;
static void acpi_idle_bm_rld_save(void)
int acpi_processor_suspend(void)
{
acpi_read_bit_register(ACPI_BITREG_BUS_MASTER_RLD, &saved_bm_rld);
return 0;
}
static void acpi_idle_bm_rld_restore(void)
void acpi_processor_resume(void)
{
u32 resumed_bm_rld;
acpi_read_bit_register(ACPI_BITREG_BUS_MASTER_RLD, &resumed_bm_rld);
if (resumed_bm_rld == saved_bm_rld)
return;
if (resumed_bm_rld != saved_bm_rld)
acpi_write_bit_register(ACPI_BITREG_BUS_MASTER_RLD, saved_bm_rld);
acpi_write_bit_register(ACPI_BITREG_BUS_MASTER_RLD, saved_bm_rld);
}
int acpi_processor_suspend(struct device *dev)
static struct syscore_ops acpi_processor_syscore_ops = {
.suspend = acpi_processor_suspend,
.resume = acpi_processor_resume,
};
void acpi_processor_syscore_init(void)
{
acpi_idle_bm_rld_save();
return 0;
register_syscore_ops(&acpi_processor_syscore_ops);
}
int acpi_processor_resume(struct device *dev)
void acpi_processor_syscore_exit(void)
{
acpi_idle_bm_rld_restore();
return 0;
unregister_syscore_ops(&acpi_processor_syscore_ops);
}
#endif /* CONFIG_PM_SLEEP */
#if defined(CONFIG_X86)
static void tsc_check_state(int state)


@ -1785,7 +1785,7 @@ static void acpi_scan_init_hotplug(acpi_handle handle, int type)
acpi_set_pnp_ids(handle, &pnp, type);
if (!pnp.type.hardware_id)
return;
goto out;
/*
* This relies on the fact that acpi_install_notify_handler() will not
@ -1800,6 +1800,7 @@ static void acpi_scan_init_hotplug(acpi_handle handle, int type)
}
}
out:
acpi_free_pnp_ids(&pnp);
}


@ -456,6 +456,14 @@ static struct dmi_system_id video_dmi_table[] __initdata = {
DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dm4 Notebook PC"),
},
},
{
.callback = video_ignore_initial_backlight,
.ident = "HP 1000 Notebook PC",
.matches = {
DMI_MATCH(DMI_BOARD_VENDOR, "Hewlett-Packard"),
DMI_MATCH(DMI_PRODUCT_NAME, "HP 1000 Notebook PC"),
},
},
{}
};


@ -61,24 +61,24 @@ EXPORT_SYMBOL_GPL(dev_pm_get_subsys_data);
int dev_pm_put_subsys_data(struct device *dev)
{
struct pm_subsys_data *psd;
int ret = 0;
int ret = 1;
spin_lock_irq(&dev->power.lock);
psd = dev_to_psd(dev);
if (!psd) {
ret = -EINVAL;
if (!psd)
goto out;
}
if (--psd->refcount == 0) {
dev->power.subsys_data = NULL;
kfree(psd);
ret = 1;
} else {
psd = NULL;
ret = 0;
}
out:
spin_unlock_irq(&dev->power.lock);
kfree(psd);
return ret;
}
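The hunk above is the kfree()-under-spinlock fix mentioned in the summary. As a generic, hypothetical illustration of the same pattern (names are placeholders, not the PM core's code), the object is detached inside the critical section and freed only after the lock is dropped:

#include <linux/slab.h>
#include <linux/spinlock.h>

struct foo {			/* hypothetical refcounted payload */
	spinlock_t lock;
	int refcount;
	void *data;
};

/* drop a reference; return 1 if the data was freed, 0 otherwise */
static int foo_put_data(struct foo *f)
{
	void *victim = NULL;
	int ret = 0;

	spin_lock_irq(&f->lock);
	if (--f->refcount == 0) {
		victim = f->data;	/* detach under the lock ... */
		f->data = NULL;
		ret = 1;
	}
	spin_unlock_irq(&f->lock);

	kfree(victim);	/* ... free after it is dropped; kfree(NULL) is a no-op */
	return ret;
}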


@ -47,7 +47,7 @@ config CPU_FREQ_STAT_DETAILS
choice
prompt "Default CPUFreq governor"
default CPU_FREQ_DEFAULT_GOV_USERSPACE if CPU_FREQ_SA1100 || CPU_FREQ_SA1110
default CPU_FREQ_DEFAULT_GOV_USERSPACE if ARM_SA1100_CPUFREQ || ARM_SA1110_CPUFREQ
default CPU_FREQ_DEFAULT_GOV_PERFORMANCE
help
This option sets which CPUFreq governor shall be loaded at


@ -3,16 +3,17 @@
#
config ARM_BIG_LITTLE_CPUFREQ
tristate
depends on ARM_CPU_TOPOLOGY
tristate "Generic ARM big LITTLE CPUfreq driver"
depends on ARM_CPU_TOPOLOGY && PM_OPP && HAVE_CLK
help
This enables the Generic CPUfreq driver for ARM big.LITTLE platforms.
config ARM_DT_BL_CPUFREQ
tristate "Generic ARM big LITTLE CPUfreq driver probed via DT"
select ARM_BIG_LITTLE_CPUFREQ
depends on OF && HAVE_CLK
tristate "Generic probing via DT for ARM big LITTLE CPUfreq driver"
depends on ARM_BIG_LITTLE_CPUFREQ && OF
help
This enables the Generic CPUfreq driver for ARM big.LITTLE platform.
This gets frequency tables from DT.
This enables probing via DT for Generic CPUfreq driver for ARM
big.LITTLE platform. This gets frequency tables from DT.
config ARM_EXYNOS_CPUFREQ
bool "SAMSUNG EXYNOS SoCs"


@ -40,11 +40,6 @@ static struct clk *clk[MAX_CLUSTERS];
static struct cpufreq_frequency_table *freq_table[MAX_CLUSTERS];
static atomic_t cluster_usage[MAX_CLUSTERS] = {ATOMIC_INIT(0), ATOMIC_INIT(0)};
static int cpu_to_cluster(int cpu)
{
return topology_physical_package_id(cpu);
}
static unsigned int bL_cpufreq_get(unsigned int cpu)
{
u32 cur_cluster = cpu_to_cluster(cpu);
@ -192,7 +187,7 @@ static int bL_cpufreq_init(struct cpufreq_policy *policy)
cpumask_copy(policy->cpus, topology_core_cpumask(policy->cpu));
dev_info(cpu_dev, "CPU %d initialized\n", policy->cpu);
dev_info(cpu_dev, "%s: CPU %d initialized\n", __func__, policy->cpu);
return 0;
}


@ -34,6 +34,11 @@ struct cpufreq_arm_bL_ops {
int (*init_opp_table)(struct device *cpu_dev);
};
static inline int cpu_to_cluster(int cpu)
{
return topology_physical_package_id(cpu);
}
int bL_cpufreq_register(struct cpufreq_arm_bL_ops *ops);
void bL_cpufreq_unregister(struct cpufreq_arm_bL_ops *ops);


@ -66,8 +66,8 @@ static int dt_get_transition_latency(struct device *cpu_dev)
parent = of_find_node_by_path("/cpus");
if (!parent) {
pr_err("failed to find OF /cpus\n");
return -ENOENT;
pr_info("Failed to find OF /cpus. Use CPUFREQ_ETERNAL transition latency\n");
return CPUFREQ_ETERNAL;
}
for_each_child_of_node(parent, np) {
@ -78,10 +78,11 @@ static int dt_get_transition_latency(struct device *cpu_dev)
of_node_put(np);
of_node_put(parent);
return 0;
return transition_latency;
}
return -ENODEV;
pr_info("clock-latency isn't found, use CPUFREQ_ETERNAL transition latency\n");
return CPUFREQ_ETERNAL;
}
static struct cpufreq_arm_bL_ops dt_bL_ops = {


@ -189,12 +189,29 @@ static int cpu0_cpufreq_probe(struct platform_device *pdev)
if (!np) {
pr_err("failed to find cpu0 node\n");
return -ENOENT;
ret = -ENOENT;
goto out_put_parent;
}
cpu_dev = &pdev->dev;
cpu_dev->of_node = np;
cpu_reg = devm_regulator_get(cpu_dev, "cpu0");
if (IS_ERR(cpu_reg)) {
/*
* If cpu0 regulator supply node is present, but regulator is
* not yet registered, we should try defering probe.
*/
if (PTR_ERR(cpu_reg) == -EPROBE_DEFER) {
dev_err(cpu_dev, "cpu0 regulator not ready, retry\n");
ret = -EPROBE_DEFER;
goto out_put_node;
}
pr_warn("failed to get cpu0 regulator: %ld\n",
PTR_ERR(cpu_reg));
cpu_reg = NULL;
}
cpu_clk = devm_clk_get(cpu_dev, NULL);
if (IS_ERR(cpu_clk)) {
ret = PTR_ERR(cpu_clk);
@ -202,12 +219,6 @@ static int cpu0_cpufreq_probe(struct platform_device *pdev)
goto out_put_node;
}
cpu_reg = devm_regulator_get(cpu_dev, "cpu0");
if (IS_ERR(cpu_reg)) {
pr_warn("failed to get cpu0 regulator\n");
cpu_reg = NULL;
}
ret = of_init_opp_table(cpu_dev);
if (ret) {
pr_err("failed to init OPP table: %d\n", ret);
@ -264,6 +275,8 @@ out_free_table:
opp_free_cpufreq_table(cpu_dev, &freq_table);
out_put_node:
of_node_put(np);
out_put_parent:
of_node_put(parent);
return ret;
}
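The hunk above adds the deferred-probe handling called out in the summary ("defer probe when regulator is not ready"). Below is a hedged, generic sketch of that pattern for a hypothetical driver with an optional "vdd" supply; the names are illustrative, not from this patch.

#include <linux/err.h>
#include <linux/errno.h>
#include <linux/platform_device.h>
#include <linux/regulator/consumer.h>

static int foo_probe(struct platform_device *pdev)
{
	struct regulator *vdd = devm_regulator_get(&pdev->dev, "vdd");

	if (IS_ERR(vdd)) {
		/* supply described but its driver is not bound yet: retry later */
		if (PTR_ERR(vdd) == -EPROBE_DEFER)
			return -EPROBE_DEFER;
		/* otherwise treat the supply as optional and carry on */
		dev_warn(&pdev->dev, "no vdd regulator: %ld\n", PTR_ERR(vdd));
		vdd = NULL;
	}

	/* ... the rest of probe would use vdd only if it is non-NULL ... */
	return 0;
}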


@ -1075,14 +1075,14 @@ static int __cpufreq_remove_dev(struct device *dev, struct subsys_interface *sif
__func__, cpu_dev->id, cpu);
}
if ((cpus == 1) && (cpufreq_driver->target))
__cpufreq_governor(data, CPUFREQ_GOV_POLICY_EXIT);
pr_debug("%s: removing link, cpu: %d\n", __func__, cpu);
cpufreq_cpu_put(data);
/* If cpu is last user of policy, free policy */
if (cpus == 1) {
if (cpufreq_driver->target)
__cpufreq_governor(data, CPUFREQ_GOV_POLICY_EXIT);
lock_policy_rwsem_read(cpu);
kobj = &data->kobj;
cmp = &data->kobj_unregister;
@ -1832,15 +1832,13 @@ static int __cpuinit cpufreq_cpu_callback(struct notifier_block *nfb,
if (dev) {
switch (action) {
case CPU_ONLINE:
case CPU_ONLINE_FROZEN:
cpufreq_add_dev(dev, NULL);
break;
case CPU_DOWN_PREPARE:
case CPU_DOWN_PREPARE_FROZEN:
case CPU_UP_CANCELED_FROZEN:
__cpufreq_remove_dev(dev, NULL);
break;
case CPU_DOWN_FAILED:
case CPU_DOWN_FAILED_FROZEN:
cpufreq_add_dev(dev, NULL);
break;
}


@ -255,6 +255,7 @@ int cpufreq_governor_dbs(struct cpufreq_policy *policy,
if (have_governor_per_policy()) {
WARN_ON(dbs_data);
} else if (dbs_data) {
dbs_data->usage_count++;
policy->governor_data = dbs_data;
return 0;
}
@ -266,6 +267,7 @@ int cpufreq_governor_dbs(struct cpufreq_policy *policy,
}
dbs_data->cdata = cdata;
dbs_data->usage_count = 1;
rc = cdata->init(dbs_data);
if (rc) {
pr_err("%s: POLICY_INIT: init() failed\n", __func__);
@ -294,7 +296,8 @@ int cpufreq_governor_dbs(struct cpufreq_policy *policy,
set_sampling_rate(dbs_data, max(dbs_data->min_sampling_rate,
latency * LATENCY_MULTIPLIER));
if (dbs_data->cdata->governor == GOV_CONSERVATIVE) {
if ((cdata->governor == GOV_CONSERVATIVE) &&
(!policy->governor->initialized)) {
struct cs_ops *cs_ops = dbs_data->cdata->gov_ops;
cpufreq_register_notifier(cs_ops->notifier_block,
@ -306,12 +309,12 @@ int cpufreq_governor_dbs(struct cpufreq_policy *policy,
return 0;
case CPUFREQ_GOV_POLICY_EXIT:
if ((policy->governor->initialized == 1) ||
have_governor_per_policy()) {
if (!--dbs_data->usage_count) {
sysfs_remove_group(get_governor_parent_kobj(policy),
get_sysfs_attr(dbs_data));
if (dbs_data->cdata->governor == GOV_CONSERVATIVE) {
if ((dbs_data->cdata->governor == GOV_CONSERVATIVE) &&
(policy->governor->initialized == 1)) {
struct cs_ops *cs_ops = dbs_data->cdata->gov_ops;
cpufreq_unregister_notifier(cs_ops->notifier_block,


@ -211,6 +211,7 @@ struct common_dbs_data {
struct dbs_data {
struct common_dbs_data *cdata;
unsigned int min_sampling_rate;
int usage_count;
void *tuners;
/* dbs_mutex protects dbs_enable in governor start/stop */


@ -547,7 +547,6 @@ static int od_init(struct dbs_data *dbs_data)
tuners->io_is_busy = should_io_be_busy();
dbs_data->tuners = tuners;
pr_info("%s: tuners %p\n", __func__, tuners);
mutex_init(&dbs_data->mutex);
return 0;
}


@ -349,15 +349,16 @@ static int __cpuinit cpufreq_stat_cpu_callback(struct notifier_block *nfb,
switch (action) {
case CPU_ONLINE:
case CPU_ONLINE_FROZEN:
cpufreq_update_policy(cpu);
break;
case CPU_DOWN_PREPARE:
case CPU_DOWN_PREPARE_FROZEN:
cpufreq_stats_free_sysfs(cpu);
break;
case CPU_DEAD:
case CPU_DEAD_FROZEN:
cpufreq_stats_free_table(cpu);
break;
case CPU_UP_CANCELED_FROZEN:
cpufreq_stats_free_sysfs(cpu);
cpufreq_stats_free_table(cpu);
break;
}


@ -48,12 +48,7 @@ static inline int32_t div_fp(int32_t x, int32_t y)
}
struct sample {
ktime_t start_time;
ktime_t end_time;
int core_pct_busy;
int pstate_pct_busy;
u64 duration_us;
u64 idletime_us;
u64 aperf;
u64 mperf;
int freq;
@ -86,13 +81,9 @@ struct cpudata {
struct pstate_adjust_policy *pstate_policy;
struct pstate_data pstate;
struct _pid pid;
struct _pid idle_pid;
int min_pstate_count;
int idle_mode;
ktime_t prev_sample;
u64 prev_idle_time_us;
u64 prev_aperf;
u64 prev_mperf;
int sample_ptr;
@ -124,6 +115,8 @@ struct perf_limits {
int min_perf_pct;
int32_t max_perf;
int32_t min_perf;
int max_policy_pct;
int max_sysfs_pct;
};
static struct perf_limits limits = {
@ -132,6 +125,8 @@ static struct perf_limits limits = {
.max_perf = int_tofp(1),
.min_perf_pct = 0,
.min_perf = 0,
.max_policy_pct = 100,
.max_sysfs_pct = 100,
};
static inline void pid_reset(struct _pid *pid, int setpoint, int busy,
@ -202,19 +197,6 @@ static inline void intel_pstate_busy_pid_reset(struct cpudata *cpu)
0);
}
static inline void intel_pstate_idle_pid_reset(struct cpudata *cpu)
{
pid_p_gain_set(&cpu->idle_pid, cpu->pstate_policy->p_gain_pct);
pid_d_gain_set(&cpu->idle_pid, cpu->pstate_policy->d_gain_pct);
pid_i_gain_set(&cpu->idle_pid, cpu->pstate_policy->i_gain_pct);
pid_reset(&cpu->idle_pid,
75,
50,
cpu->pstate_policy->deadband,
0);
}
static inline void intel_pstate_reset_all_pid(void)
{
unsigned int cpu;
@ -302,7 +284,8 @@ static ssize_t store_max_perf_pct(struct kobject *a, struct attribute *b,
if (ret != 1)
return -EINVAL;
limits.max_perf_pct = clamp_t(int, input, 0 , 100);
limits.max_sysfs_pct = clamp_t(int, input, 0 , 100);
limits.max_perf_pct = min(limits.max_policy_pct, limits.max_sysfs_pct);
limits.max_perf = div_fp(int_tofp(limits.max_perf_pct), int_tofp(100));
return count;
}
@ -408,9 +391,8 @@ static void intel_pstate_set_pstate(struct cpudata *cpu, int pstate)
if (pstate == cpu->pstate.current_pstate)
return;
#ifndef MODULE
trace_cpu_frequency(pstate * 100000, cpu->cpu);
#endif
cpu->pstate.current_pstate = pstate;
wrmsrl(MSR_IA32_PERF_CTL, pstate << 8);
@ -450,48 +432,26 @@ static inline void intel_pstate_calc_busy(struct cpudata *cpu,
struct sample *sample)
{
u64 core_pct;
sample->pstate_pct_busy = 100 - div64_u64(
sample->idletime_us * 100,
sample->duration_us);
core_pct = div64_u64(sample->aperf * 100, sample->mperf);
sample->freq = cpu->pstate.max_pstate * core_pct * 1000;
sample->core_pct_busy = div_s64((sample->pstate_pct_busy * core_pct),
100);
sample->core_pct_busy = core_pct;
}
static inline void intel_pstate_sample(struct cpudata *cpu)
{
ktime_t now;
u64 idle_time_us;
u64 aperf, mperf;
now = ktime_get();
idle_time_us = get_cpu_idle_time_us(cpu->cpu, NULL);
rdmsrl(MSR_IA32_APERF, aperf);
rdmsrl(MSR_IA32_MPERF, mperf);
/* for the first sample, don't actually record a sample, just
* set the baseline */
if (cpu->prev_idle_time_us > 0) {
cpu->sample_ptr = (cpu->sample_ptr + 1) % SAMPLE_COUNT;
cpu->samples[cpu->sample_ptr].start_time = cpu->prev_sample;
cpu->samples[cpu->sample_ptr].end_time = now;
cpu->samples[cpu->sample_ptr].duration_us =
ktime_us_delta(now, cpu->prev_sample);
cpu->samples[cpu->sample_ptr].idletime_us =
idle_time_us - cpu->prev_idle_time_us;
cpu->sample_ptr = (cpu->sample_ptr + 1) % SAMPLE_COUNT;
cpu->samples[cpu->sample_ptr].aperf = aperf;
cpu->samples[cpu->sample_ptr].mperf = mperf;
cpu->samples[cpu->sample_ptr].aperf -= cpu->prev_aperf;
cpu->samples[cpu->sample_ptr].mperf -= cpu->prev_mperf;
cpu->samples[cpu->sample_ptr].aperf = aperf;
cpu->samples[cpu->sample_ptr].mperf = mperf;
cpu->samples[cpu->sample_ptr].aperf -= cpu->prev_aperf;
cpu->samples[cpu->sample_ptr].mperf -= cpu->prev_mperf;
intel_pstate_calc_busy(cpu, &cpu->samples[cpu->sample_ptr]);
intel_pstate_calc_busy(cpu, &cpu->samples[cpu->sample_ptr]);
}
cpu->prev_sample = now;
cpu->prev_idle_time_us = idle_time_us;
cpu->prev_aperf = aperf;
cpu->prev_mperf = mperf;
}
@ -505,16 +465,6 @@ static inline void intel_pstate_set_sample_time(struct cpudata *cpu)
mod_timer_pinned(&cpu->timer, jiffies + delay);
}
static inline void intel_pstate_idle_mode(struct cpudata *cpu)
{
cpu->idle_mode = 1;
}
static inline void intel_pstate_normal_mode(struct cpudata *cpu)
{
cpu->idle_mode = 0;
}
static inline int intel_pstate_get_scaled_busy(struct cpudata *cpu)
{
int32_t busy_scaled;
@ -547,50 +497,21 @@ static inline void intel_pstate_adjust_busy_pstate(struct cpudata *cpu)
intel_pstate_pstate_decrease(cpu, steps);
}
static inline void intel_pstate_adjust_idle_pstate(struct cpudata *cpu)
{
int busy_scaled;
struct _pid *pid;
int ctl = 0;
int steps;
pid = &cpu->idle_pid;
busy_scaled = intel_pstate_get_scaled_busy(cpu);
ctl = pid_calc(pid, 100 - busy_scaled);
steps = abs(ctl);
if (ctl < 0)
intel_pstate_pstate_decrease(cpu, steps);
else
intel_pstate_pstate_increase(cpu, steps);
if (cpu->pstate.current_pstate == cpu->pstate.min_pstate)
intel_pstate_normal_mode(cpu);
}
static void intel_pstate_timer_func(unsigned long __data)
{
struct cpudata *cpu = (struct cpudata *) __data;
intel_pstate_sample(cpu);
intel_pstate_adjust_busy_pstate(cpu);
if (!cpu->idle_mode)
intel_pstate_adjust_busy_pstate(cpu);
else
intel_pstate_adjust_idle_pstate(cpu);
#if defined(XPERF_FIX)
if (cpu->pstate.current_pstate == cpu->pstate.min_pstate) {
cpu->min_pstate_count++;
if (!(cpu->min_pstate_count % 5)) {
intel_pstate_set_pstate(cpu, cpu->pstate.max_pstate);
intel_pstate_idle_mode(cpu);
}
} else
cpu->min_pstate_count = 0;
#endif
intel_pstate_set_sample_time(cpu);
}
@ -631,7 +552,6 @@ static int intel_pstate_init_cpu(unsigned int cpunum)
(unsigned long)cpu;
cpu->timer.expires = jiffies + HZ/100;
intel_pstate_busy_pid_reset(cpu);
intel_pstate_idle_pid_reset(cpu);
intel_pstate_sample(cpu);
intel_pstate_set_pstate(cpu, cpu->pstate.max_pstate);
@ -675,8 +595,9 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy)
limits.min_perf_pct = clamp_t(int, limits.min_perf_pct, 0 , 100);
limits.min_perf = div_fp(int_tofp(limits.min_perf_pct), int_tofp(100));
limits.max_perf_pct = policy->max * 100 / policy->cpuinfo.max_freq;
limits.max_perf_pct = clamp_t(int, limits.max_perf_pct, 0 , 100);
limits.max_policy_pct = policy->max * 100 / policy->cpuinfo.max_freq;
limits.max_policy_pct = clamp_t(int, limits.max_policy_pct, 0 , 100);
limits.max_perf_pct = min(limits.max_policy_pct, limits.max_sysfs_pct);
limits.max_perf = div_fp(int_tofp(limits.max_perf_pct), int_tofp(100));
return 0;
@ -788,10 +709,9 @@ static int __init intel_pstate_init(void)
pr_info("Intel P-state driver initializing.\n");
all_cpu_data = vmalloc(sizeof(void *) * num_possible_cpus());
all_cpu_data = vzalloc(sizeof(void *) * num_possible_cpus());
if (!all_cpu_data)
return -ENOMEM;
memset(all_cpu_data, 0, sizeof(void *) * num_possible_cpus());
rc = cpufreq_register_driver(&intel_pstate_driver);
if (rc)


@ -171,10 +171,6 @@ static int kirkwood_cpufreq_probe(struct platform_device *pdev)
priv.dev = &pdev->dev;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res) {
dev_err(&pdev->dev, "Cannot get memory resource\n");
return -ENODEV;
}
priv.base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(priv.base))
return PTR_ERR(priv.base);
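The hunk above drops the explicit resource check because devm_ioremap_resource() validates its argument itself (including a NULL resource) and prints its own error message. A hedged sketch of the resulting idiom, using a hypothetical helper:

#include <linux/device.h>
#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/platform_device.h>

/* map the first MEM resource; callers check the result with IS_ERR() */
static void __iomem *foo_map_regs(struct platform_device *pdev)
{
	struct resource *res;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	return devm_ioremap_resource(&pdev->dev, res);
}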


@ -77,7 +77,7 @@ struct acpi_signal_fatal_info {
/*
* OSL Initialization and shutdown primitives
*/
acpi_status __initdata acpi_os_initialize(void);
acpi_status __init acpi_os_initialize(void);
acpi_status acpi_os_terminate(void);


@ -329,10 +329,16 @@ int acpi_processor_power_init(struct acpi_processor *pr);
int acpi_processor_power_exit(struct acpi_processor *pr);
int acpi_processor_cst_has_changed(struct acpi_processor *pr);
int acpi_processor_hotplug(struct acpi_processor *pr);
int acpi_processor_suspend(struct device *dev);
int acpi_processor_resume(struct device *dev);
extern struct cpuidle_driver acpi_idle_driver;
#ifdef CONFIG_PM_SLEEP
void acpi_processor_syscore_init(void);
void acpi_processor_syscore_exit(void);
#else
static inline void acpi_processor_syscore_init(void) {}
static inline void acpi_processor_syscore_exit(void) {}
#endif
/* in processor_thermal.c */
int acpi_processor_get_limit_info(struct acpi_processor *pr);
extern const struct thermal_cooling_device_ops processor_cooling_ops;