Merge branch 'pm-core'
* pm-core:
  ACPI / PM: Take SMART_SUSPEND driver flag into account
  PCI / PM: Take SMART_SUSPEND driver flag into account
  PCI / PM: Drop unnecessary invocations of pcibios_pm_ops callbacks
  PM / core: Add SMART_SUSPEND driver flag
  PCI / PM: Use the NEVER_SKIP driver flag
  PM / core: Add NEVER_SKIP and SMART_PREPARE driver flags
  PM / core: Convert timers to use timer_setup()
  PM / core: Fix kerneldoc comments of four functions
  PM / core: Drop legacy class suspend/resume operations
This commit is contained in:
Commit 1efef68262
@@ -354,6 +354,20 @@ the phases are: ``prepare``, ``suspend``, ``suspend_late``, ``suspend_noirq``.
     is because all such devices are initially set to runtime-suspended with
     runtime PM disabled.
 
+    This feature also can be controlled by device drivers by using the
+    ``DPM_FLAG_NEVER_SKIP`` and ``DPM_FLAG_SMART_PREPARE`` driver power
+    management flags. [Typically, they are set at the time the driver is
+    probed against the device in question by passing them to the
+    :c:func:`dev_pm_set_driver_flags` helper function.] If the first of
+    these flags is set, the PM core will not apply the direct-complete
+    procedure described above to the given device and, consequently, to any
+    of its ancestors. The second flag, when set, informs the middle layer
+    code (bus types, device types, PM domains, classes) that it should take
+    the return value of the ``->prepare`` callback provided by the driver
+    into account and it may only return a positive value from its own
+    ``->prepare`` callback if the driver's one also has returned a positive
+    value.
+
 2. The ``->suspend`` methods should quiesce the device to stop it from
    performing I/O. They also may save the device registers and put it into
    the appropriate low-power state, depending on the bus type the device is
@@ -752,6 +766,26 @@ the state of devices (possibly except for resuming them from runtime suspend)
 from their ``->prepare`` and ``->suspend`` callbacks (or equivalent) *before*
 invoking device drivers' ``->suspend`` callbacks (or equivalent).
 
+Some bus types and PM domains have a policy to resume all devices from runtime
+suspend upfront in their ``->suspend`` callbacks, but that may not be really
+necessary if the driver of the device can cope with runtime-suspended devices.
+The driver can indicate that by setting ``DPM_FLAG_SMART_SUSPEND`` in
+:c:member:`power.driver_flags` at the probe time, by passing it to the
+:c:func:`dev_pm_set_driver_flags` helper. That also may cause middle-layer code
+(bus types, PM domains etc.) to skip the ``->suspend_late`` and
+``->suspend_noirq`` callbacks provided by the driver if the device remains in
+runtime suspend at the beginning of the ``suspend_late`` phase of system-wide
+suspend (or in the ``poweroff_late`` phase of hibernation), when runtime PM
+has been disabled for it, under the assumption that its state should not change
+after that point until the system-wide transition is over. If that happens, the
+driver's system-wide resume callbacks, if present, may still be invoked during
+the subsequent system-wide resume transition and the device's runtime power
+management status may be set to "active" before enabling runtime PM for it,
+so the driver must be prepared to cope with the invocation of its system-wide
+resume callbacks back-to-back with its ``->runtime_suspend`` one (without the
+intervening ``->runtime_resume`` and so on) and the final state of the device
+must reflect the "active" status for runtime PM in that case.
+
 During system-wide resume from a sleep state it's easiest to put devices into
 the full-power state, as explained in :file:`Documentation/power/runtime_pm.txt`.
 Refer to that document for more information regarding this particular issue as
@@ -961,6 +961,39 @@ dev_pm_ops to indicate that one suspend routine is to be pointed to by the
 .suspend(), .freeze(), and .poweroff() members and one resume routine is to
 be pointed to by the .resume(), .thaw(), and .restore() members.
 
+3.1.19. Driver Flags for Power Management
+
+The PM core allows device drivers to set flags that influence the handling of
+power management for the devices by the core itself and by middle layer code
+including the PCI bus type.  The flags should be set once at the driver probe
+time with the help of the dev_pm_set_driver_flags() function and they should not
+be updated directly afterwards.
+
+The DPM_FLAG_NEVER_SKIP flag prevents the PM core from using the direct-complete
+mechanism allowing device suspend/resume callbacks to be skipped if the device
+is in runtime suspend when the system suspend starts.  That also affects all of
+the ancestors of the device, so this flag should only be used if absolutely
+necessary.
+
+The DPM_FLAG_SMART_PREPARE flag instructs the PCI bus type to only return a
+positive value from pci_pm_prepare() if the ->prepare callback provided by the
+driver of the device returns a positive value.  That allows the driver to opt
+out from using the direct-complete mechanism dynamically.
+
+The DPM_FLAG_SMART_SUSPEND flag tells the PCI bus type that from the driver's
+perspective the device can be safely left in runtime suspend during system
+suspend.  That causes pci_pm_suspend(), pci_pm_freeze() and pci_pm_poweroff()
+to skip resuming the device from runtime suspend unless there are PCI-specific
+reasons for doing that.  Also, it causes pci_pm_suspend_late/noirq(),
+pci_pm_freeze_late/noirq() and pci_pm_poweroff_late/noirq() to return early
+if the device remains in runtime suspend in the beginning of the "late" phase
+of the system-wide transition under way.  Moreover, if the device is in
+runtime suspend in pci_pm_resume_noirq() or pci_pm_restore_noirq(), its runtime
+power management status will be changed to "active" (as it is going to be put
+into D0 going forward), but if it is in runtime suspend in pci_pm_thaw_noirq(),
+the function will set the power.direct_complete flag for it (to make the PM core
+skip the subsequent "thaw" callbacks for it) and return.
+
 3.2. Device Runtime Power Management
 ------------------------------------
 In addition to providing device power management callbacks PCI device drivers
@@ -849,8 +849,12 @@ static int acpi_lpss_resume(struct device *dev)
 #ifdef CONFIG_PM_SLEEP
 static int acpi_lpss_suspend_late(struct device *dev)
 {
-	int ret = pm_generic_suspend_late(dev);
+	int ret;
+
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		return 0;
 
+	ret = pm_generic_suspend_late(dev);
 	return ret ? ret : acpi_lpss_suspend(dev, device_may_wakeup(dev));
 }
@@ -889,10 +893,17 @@ static struct dev_pm_domain acpi_lpss_pm_domain = {
 		.complete = acpi_subsys_complete,
 		.suspend = acpi_subsys_suspend,
 		.suspend_late = acpi_lpss_suspend_late,
+		.suspend_noirq = acpi_subsys_suspend_noirq,
+		.resume_noirq = acpi_subsys_resume_noirq,
 		.resume_early = acpi_lpss_resume_early,
 		.freeze = acpi_subsys_freeze,
+		.freeze_late = acpi_subsys_freeze_late,
+		.freeze_noirq = acpi_subsys_freeze_noirq,
+		.thaw_noirq = acpi_subsys_thaw_noirq,
 		.poweroff = acpi_subsys_suspend,
 		.poweroff_late = acpi_lpss_suspend_late,
+		.poweroff_noirq = acpi_subsys_suspend_noirq,
+		.restore_noirq = acpi_subsys_resume_noirq,
 		.restore_early = acpi_lpss_resume_early,
 #endif
 		.runtime_suspend = acpi_lpss_runtime_suspend,
@@ -939,7 +939,8 @@ static bool acpi_dev_needs_resume(struct device *dev, struct acpi_device *adev)
 	u32 sys_target = acpi_target_system_state();
 	int ret, state;
 
-	if (device_may_wakeup(dev) != !!adev->wakeup.prepare_count)
+	if (!pm_runtime_suspended(dev) || !adev ||
+	    device_may_wakeup(dev) != !!adev->wakeup.prepare_count)
 		return true;
 
 	if (sys_target == ACPI_STATE_S0)
@@ -962,14 +963,16 @@ static bool acpi_dev_needs_resume(struct device *dev, struct acpi_device *adev)
 int acpi_subsys_prepare(struct device *dev)
 {
 	struct acpi_device *adev = ACPI_COMPANION(dev);
-	int ret;
 
-	ret = pm_generic_prepare(dev);
-	if (ret < 0)
-		return ret;
+	if (dev->driver && dev->driver->pm && dev->driver->pm->prepare) {
+		int ret = dev->driver->pm->prepare(dev);
 
-	if (!adev || !pm_runtime_suspended(dev))
-		return 0;
+		if (ret < 0)
+			return ret;
+
+		if (!ret && dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_PREPARE))
+			return 0;
+	}
 
 	return !acpi_dev_needs_resume(dev, adev);
 }
@@ -996,12 +999,17 @@ EXPORT_SYMBOL_GPL(acpi_subsys_complete);
 /**
  * acpi_subsys_suspend - Run the device driver's suspend callback.
  * @dev: Device to handle.
  *
- * Follow PCI and resume devices suspended at run time before running their
- * system suspend callbacks.
+ * Follow PCI and resume devices from runtime suspend before running their
+ * system suspend callbacks, unless the driver can cope with runtime-suspended
+ * devices during system suspend and there are no ACPI-specific reasons for
+ * resuming them.
  */
 int acpi_subsys_suspend(struct device *dev)
 {
-	pm_runtime_resume(dev);
+	if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) ||
+	    acpi_dev_needs_resume(dev, ACPI_COMPANION(dev)))
+		pm_runtime_resume(dev);
 
 	return pm_generic_suspend(dev);
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_suspend);
@@ -1015,11 +1023,47 @@ EXPORT_SYMBOL_GPL(acpi_subsys_suspend);
  */
 int acpi_subsys_suspend_late(struct device *dev)
 {
-	int ret = pm_generic_suspend_late(dev);
+	int ret;
+
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		return 0;
 
+	ret = pm_generic_suspend_late(dev);
 	return ret ? ret : acpi_dev_suspend(dev, device_may_wakeup(dev));
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_suspend_late);
 
+/**
+ * acpi_subsys_suspend_noirq - Run the device driver's "noirq" suspend callback.
+ * @dev: Device to suspend.
+ */
+int acpi_subsys_suspend_noirq(struct device *dev)
+{
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		return 0;
+
+	return pm_generic_suspend_noirq(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_suspend_noirq);
+
+/**
+ * acpi_subsys_resume_noirq - Run the device driver's "noirq" resume callback.
+ * @dev: Device to handle.
+ */
+int acpi_subsys_resume_noirq(struct device *dev)
+{
+	/*
+	 * Devices with DPM_FLAG_SMART_SUSPEND may be left in runtime suspend
+	 * during system suspend, so update their runtime PM status to "active"
+	 * as they will be put into D0 going forward.
+	 */
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		pm_runtime_set_active(dev);
+
+	return pm_generic_resume_noirq(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_resume_noirq);
+
 /**
  * acpi_subsys_resume_early - Resume device using ACPI.
  * @dev: Device to Resume.
@@ -1047,11 +1091,60 @@ int acpi_subsys_freeze(struct device *dev)
 	 * runtime-suspended devices should not be touched during freeze/thaw
 	 * transitions.
 	 */
-	pm_runtime_resume(dev);
+	if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND))
+		pm_runtime_resume(dev);
+
 	return pm_generic_freeze(dev);
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_freeze);
 
+/**
+ * acpi_subsys_freeze_late - Run the device driver's "late" freeze callback.
+ * @dev: Device to handle.
+ */
+int acpi_subsys_freeze_late(struct device *dev)
+{
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		return 0;
+
+	return pm_generic_freeze_late(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_freeze_late);
+
+/**
+ * acpi_subsys_freeze_noirq - Run the device driver's "noirq" freeze callback.
+ * @dev: Device to handle.
+ */
+int acpi_subsys_freeze_noirq(struct device *dev)
+{
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		return 0;
+
+	return pm_generic_freeze_noirq(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_freeze_noirq);
+
+/**
+ * acpi_subsys_thaw_noirq - Run the device driver's "noirq" thaw callback.
+ * @dev: Device to handle.
+ */
+int acpi_subsys_thaw_noirq(struct device *dev)
+{
+	/*
+	 * If the device is in runtime suspend, the "thaw" code may not work
+	 * correctly with it, so skip the driver callback and make the PM core
+	 * skip all of the subsequent "thaw" callbacks for the device.
+	 */
+	if (dev_pm_smart_suspend_and_suspended(dev)) {
+		dev->power.direct_complete = true;
+		return 0;
+	}
+
+	return pm_generic_thaw_noirq(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_thaw_noirq);
+
 #endif /* CONFIG_PM_SLEEP */
 
 static struct dev_pm_domain acpi_general_pm_domain = {
@@ -1063,10 +1156,17 @@ static struct dev_pm_domain acpi_general_pm_domain = {
 		.complete = acpi_subsys_complete,
 		.suspend = acpi_subsys_suspend,
 		.suspend_late = acpi_subsys_suspend_late,
+		.suspend_noirq = acpi_subsys_suspend_noirq,
+		.resume_noirq = acpi_subsys_resume_noirq,
 		.resume_early = acpi_subsys_resume_early,
 		.freeze = acpi_subsys_freeze,
+		.freeze_late = acpi_subsys_freeze_late,
+		.freeze_noirq = acpi_subsys_freeze_noirq,
+		.thaw_noirq = acpi_subsys_thaw_noirq,
 		.poweroff = acpi_subsys_suspend,
 		.poweroff_late = acpi_subsys_suspend_late,
+		.poweroff_noirq = acpi_subsys_suspend_noirq,
+		.restore_noirq = acpi_subsys_resume_noirq,
 		.restore_early = acpi_subsys_resume_early,
 #endif
 	},
@@ -464,6 +464,7 @@ pinctrl_bind_failed:
 		if (dev->pm_domain && dev->pm_domain->dismiss)
 			dev->pm_domain->dismiss(dev);
 		pm_runtime_reinit(dev);
+		dev_pm_set_driver_flags(dev, 0);
 
 	switch (ret) {
 	case -EPROBE_DEFER:
@@ -869,6 +870,7 @@ static void __device_release_driver(struct device *dev, struct device *parent)
 		if (dev->pm_domain && dev->pm_domain->dismiss)
 			dev->pm_domain->dismiss(dev);
 		pm_runtime_reinit(dev);
+		dev_pm_set_driver_flags(dev, 0);
 
 		klist_remove(&dev->p->knode_driver);
 		device_pm_check_callbacks(dev);
@@ -528,7 +528,7 @@ static void dpm_watchdog_clear(struct dpm_watchdog *wd)
 /*------------------------- Resume routines -------------------------*/
 
 /**
- * device_resume_noirq - Execute an "early resume" callback for given device.
+ * device_resume_noirq - Execute a "noirq resume" callback for given device.
  * @dev: Device to handle.
  * @state: PM transition of the system being carried out.
  * @async: If true, the device is being resumed asynchronously.
@@ -848,16 +848,10 @@ static int device_resume(struct device *dev, pm_message_t state, bool async)
 		goto Driver;
 	}
 
-	if (dev->class) {
-		if (dev->class->pm) {
-			info = "class ";
-			callback = pm_op(dev->class->pm, state);
-			goto Driver;
-		} else if (dev->class->resume) {
-			info = "legacy class ";
-			callback = dev->class->resume;
-			goto End;
-		}
+	if (dev->class && dev->class->pm) {
+		info = "class ";
+		callback = pm_op(dev->class->pm, state);
+		goto Driver;
 	}
 
 	if (dev->bus) {
@@ -1083,7 +1077,7 @@ static pm_message_t resume_event(pm_message_t sleep_state)
 }
 
 /**
- * device_suspend_noirq - Execute a "late suspend" callback for given device.
+ * __device_suspend_noirq - Execute a "noirq suspend" callback for given device.
  * @dev: Device to handle.
  * @state: PM transition of the system being carried out.
  * @async: If true, the device is being suspended asynchronously.
@@ -1243,7 +1237,7 @@ int dpm_suspend_noirq(pm_message_t state)
 }
 
 /**
- * device_suspend_late - Execute a "late suspend" callback for given device.
+ * __device_suspend_late - Execute a "late suspend" callback for given device.
  * @dev: Device to handle.
  * @state: PM transition of the system being carried out.
  * @async: If true, the device is being suspended asynchronously.
@@ -1445,7 +1439,7 @@ static void dpm_clear_suppliers_direct_complete(struct device *dev)
 }
 
 /**
- * device_suspend - Execute "suspend" callbacks for given device.
+ * __device_suspend - Execute "suspend" callbacks for given device.
  * @dev: Device to handle.
  * @state: PM transition of the system being carried out.
  * @async: If true, the device is being suspended asynchronously.
@@ -1508,17 +1502,10 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
 		goto Run;
 	}
 
-	if (dev->class) {
-		if (dev->class->pm) {
-			info = "class ";
-			callback = pm_op(dev->class->pm, state);
-			goto Run;
-		} else if (dev->class->suspend) {
-			pm_dev_dbg(dev, state, "legacy class ");
-			error = legacy_suspend(dev, state, dev->class->suspend,
-						"legacy class ");
-			goto End;
-		}
+	if (dev->class && dev->class->pm) {
+		info = "class ";
+		callback = pm_op(dev->class->pm, state);
+		goto Run;
 	}
 
 	if (dev->bus) {
@@ -1665,6 +1652,9 @@ static int device_prepare(struct device *dev, pm_message_t state)
 	if (dev->power.syscore)
 		return 0;
 
+	WARN_ON(dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) &&
+		!pm_runtime_enabled(dev));
+
 	/*
 	 * If a device's parent goes into runtime suspend at the wrong time,
 	 * it won't be possible to resume the device.  To prevent this we
@@ -1713,7 +1703,9 @@ unlock:
 	 * applies to suspend transitions, however.
 	 */
 	spin_lock_irq(&dev->power.lock);
-	dev->power.direct_complete = ret > 0 && state.event == PM_EVENT_SUSPEND;
+	dev->power.direct_complete = state.event == PM_EVENT_SUSPEND &&
+		pm_runtime_suspended(dev) && ret > 0 &&
+		!dev_pm_test_driver_flags(dev, DPM_FLAG_NEVER_SKIP);
 	spin_unlock_irq(&dev->power.lock);
 	return 0;
 }
@@ -1862,11 +1854,16 @@ void device_pm_check_callbacks(struct device *dev)
 	dev->power.no_pm_callbacks =
 		(!dev->bus || (pm_ops_is_empty(dev->bus->pm) &&
 		 !dev->bus->suspend && !dev->bus->resume)) &&
-		(!dev->class || (pm_ops_is_empty(dev->class->pm) &&
-		 !dev->class->suspend && !dev->class->resume)) &&
+		(!dev->class || pm_ops_is_empty(dev->class->pm)) &&
 		(!dev->type || pm_ops_is_empty(dev->type->pm)) &&
 		(!dev->pm_domain || pm_ops_is_empty(&dev->pm_domain->ops)) &&
 		(!dev->driver || (pm_ops_is_empty(dev->driver->pm) &&
 		 !dev->driver->suspend && !dev->driver->resume));
 	spin_unlock_irq(&dev->power.lock);
 }
+
+bool dev_pm_smart_suspend_and_suspended(struct device *dev)
+{
+	return dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) &&
+	       pm_runtime_status_suspended(dev);
+}
@@ -894,9 +894,9 @@ static void pm_runtime_work(struct work_struct *work)
  *
  * Check if the time is right and queue a suspend request.
  */
-static void pm_suspend_timer_fn(unsigned long data)
+static void pm_suspend_timer_fn(struct timer_list *t)
 {
-	struct device *dev = (struct device *)data;
+	struct device *dev = from_timer(dev, t, power.suspend_timer);
 	unsigned long flags;
 	unsigned long expires;
 
@@ -1499,8 +1499,7 @@ void pm_runtime_init(struct device *dev)
 	INIT_WORK(&dev->power.work, pm_runtime_work);
 
 	dev->power.timer_expires = 0;
-	setup_timer(&dev->power.suspend_timer, pm_suspend_timer_fn,
-			(unsigned long)dev);
+	timer_setup(&dev->power.suspend_timer, pm_suspend_timer_fn, 0);
 
 	init_waitqueue_head(&dev->power.wait_queue);
 }
@@ -54,7 +54,7 @@ static unsigned int saved_count;
 
 static DEFINE_SPINLOCK(events_lock);
 
-static void pm_wakeup_timer_fn(unsigned long data);
+static void pm_wakeup_timer_fn(struct timer_list *t);
 
 static LIST_HEAD(wakeup_sources);
 
@@ -176,7 +176,7 @@ void wakeup_source_add(struct wakeup_source *ws)
 		return;
 
 	spin_lock_init(&ws->lock);
-	setup_timer(&ws->timer, pm_wakeup_timer_fn, (unsigned long)ws);
+	timer_setup(&ws->timer, pm_wakeup_timer_fn, 0);
 	ws->active = false;
 	ws->last_time = ktime_get();
 
@@ -481,8 +481,7 @@ static bool wakeup_source_not_registered(struct wakeup_source *ws)
 	 * Use timer struct to check if the given source is initialized
 	 * by wakeup_source_add.
 	 */
-	return ws->timer.function != pm_wakeup_timer_fn ||
-	       ws->timer.data != (unsigned long)ws;
+	return ws->timer.function != (TIMER_FUNC_TYPE)pm_wakeup_timer_fn;
 }
 
 /*
@@ -724,9 +723,9 @@ EXPORT_SYMBOL_GPL(pm_relax);
  * in @data if it is currently active and its timer has not been canceled and
  * the expiration time of the timer is not in future.
  */
-static void pm_wakeup_timer_fn(unsigned long data)
+static void pm_wakeup_timer_fn(struct timer_list *t)
 {
-	struct wakeup_source *ws = (struct wakeup_source *)data;
+	struct wakeup_source *ws = from_timer(ws, t, timer);
 	unsigned long flags;
 
 	spin_lock_irqsave(&ws->lock, flags);
@@ -1304,7 +1304,7 @@ int i915_driver_load(struct pci_dev *pdev, const struct pci_device_id *ent)
 	 * becaue the HDA driver may require us to enable the audio power
 	 * domain during system suspend.
 	 */
-	pdev->dev_flags |= PCI_DEV_FLAGS_NEEDS_RESUME;
+	dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP);
 
 	ret = i915_driver_init_early(dev_priv, ent);
 	if (ret < 0)
@@ -225,7 +225,7 @@ static int mei_me_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	 * MEI requires to resume from runtime suspend mode
 	 * in order to perform link reset flow upon system suspend.
 	 */
-	pdev->dev_flags |= PCI_DEV_FLAGS_NEEDS_RESUME;
+	dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP);
 
 	/*
 	 * ME maps runtime suspend/resume to D0i states,
@@ -141,7 +141,7 @@ static int mei_txe_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	 * MEI requires to resume from runtime suspend mode
 	 * in order to perform link reset flow upon system suspend.
 	 */
-	pdev->dev_flags |= PCI_DEV_FLAGS_NEEDS_RESUME;
+	dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP);
 
 	/*
 	 * TXE maps runtime suspend/resume to own power gating states,
@@ -682,8 +682,11 @@ static int pci_pm_prepare(struct device *dev)
 
 	if (drv && drv->pm && drv->pm->prepare) {
 		int error = drv->pm->prepare(dev);
-		if (error)
+		if (error < 0)
 			return error;
+
+		if (!error && dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_PREPARE))
+			return 0;
 	}
 	return pci_dev_keep_suspended(to_pci_dev(dev));
 }
@@ -724,18 +727,25 @@ static int pci_pm_suspend(struct device *dev)
 
 	if (!pm) {
 		pci_pm_default_suspend(pci_dev);
-		goto Fixup;
+		return 0;
 	}
 
 	/*
-	 * PCI devices suspended at run time need to be resumed at this point,
-	 * because in general it is necessary to reconfigure them for system
-	 * suspend.  Namely, if the device is supposed to wake up the system
-	 * from the sleep state, we may need to reconfigure it for this purpose.
-	 * In turn, if the device is not supposed to wake up the system from the
-	 * sleep state, we'll have to prevent it from signaling wake-up.
+	 * PCI devices suspended at run time may need to be resumed at this
+	 * point, because in general it may be necessary to reconfigure them for
+	 * system suspend.  Namely, if the device is expected to wake up the
+	 * system from the sleep state, it may have to be reconfigured for this
+	 * purpose, or if the device is not expected to wake up the system from
+	 * the sleep state, it should be prevented from signaling wakeup events
+	 * going forward.
+	 *
+	 * Also if the driver of the device does not indicate that its system
+	 * suspend callbacks can cope with runtime-suspended devices, it is
+	 * better to resume the device from runtime suspend here.
 	 */
-	pm_runtime_resume(dev);
+	if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) ||
+	    !pci_dev_keep_suspended(pci_dev))
+		pm_runtime_resume(dev);
 
 	pci_dev->state_saved = false;
 	if (pm->suspend) {
@ -755,17 +765,27 @@ static int pci_pm_suspend(struct device *dev)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
Fixup:
|
|
||||||
pci_fixup_device(pci_fixup_suspend, pci_dev);
|
|
||||||
|
|
||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
static int pci_pm_suspend_late(struct device *dev)
|
||||||
|
{
|
||||||
|
if (dev_pm_smart_suspend_and_suspended(dev))
|
||||||
|
return 0;
|
||||||
|
|
||||||
|
pci_fixup_device(pci_fixup_suspend, to_pci_dev(dev));
|
||||||
|
|
||||||
|
return pm_generic_suspend_late(dev);
|
||||||
|
}
|
||||||
|
|
||||||
static int pci_pm_suspend_noirq(struct device *dev)
|
static int pci_pm_suspend_noirq(struct device *dev)
|
||||||
{
|
{
|
||||||
struct pci_dev *pci_dev = to_pci_dev(dev);
|
struct pci_dev *pci_dev = to_pci_dev(dev);
|
||||||
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
|
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
|
||||||
|
|
||||||
|
if (dev_pm_smart_suspend_and_suspended(dev))
|
||||||
|
return 0;
|
||||||
|
|
||||||
if (pci_has_legacy_pm_support(pci_dev))
|
if (pci_has_legacy_pm_support(pci_dev))
|
||||||
return pci_legacy_suspend_late(dev, PMSG_SUSPEND);
|
return pci_legacy_suspend_late(dev, PMSG_SUSPEND);
|
||||||
|
|
||||||
|
@@ -827,6 +847,14 @@ static int pci_pm_resume_noirq(struct device *dev)
 	struct device_driver *drv = dev->driver;
 	int error = 0;
 
+	/*
+	 * Devices with DPM_FLAG_SMART_SUSPEND may be left in runtime suspend
+	 * during system suspend, so update their runtime PM status to "active"
+	 * as they are going to be put into D0 shortly.
+	 */
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		pm_runtime_set_active(dev);
+
 	pci_pm_default_resume_early(pci_dev);
 
 	if (pci_has_legacy_pm_support(pci_dev))

@@ -869,6 +897,7 @@ static int pci_pm_resume(struct device *dev)
 #else /* !CONFIG_SUSPEND */
 
 #define pci_pm_suspend		NULL
+#define pci_pm_suspend_late	NULL
 #define pci_pm_suspend_noirq	NULL
 #define pci_pm_resume		NULL
 #define pci_pm_resume_noirq	NULL
@@ -903,7 +932,8 @@ static int pci_pm_freeze(struct device *dev)
 	 * devices should not be touched during freeze/thaw transitions,
 	 * however.
 	 */
-	pm_runtime_resume(dev);
+	if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND))
+		pm_runtime_resume(dev);
 
 	pci_dev->state_saved = false;
 	if (pm->freeze) {

@@ -915,17 +945,25 @@ static int pci_pm_freeze(struct device *dev)
 			return error;
 	}
 
-	if (pcibios_pm_ops.freeze)
-		return pcibios_pm_ops.freeze(dev);
-
 	return 0;
 }
 
+static int pci_pm_freeze_late(struct device *dev)
+{
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		return 0;
+
+	return pm_generic_freeze_late(dev);;
+}
+
 static int pci_pm_freeze_noirq(struct device *dev)
 {
 	struct pci_dev *pci_dev = to_pci_dev(dev);
 	struct device_driver *drv = dev->driver;
 
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		return 0;
+
 	if (pci_has_legacy_pm_support(pci_dev))
 		return pci_legacy_suspend_late(dev, PMSG_FREEZE);
 
@@ -955,6 +993,16 @@ static int pci_pm_thaw_noirq(struct device *dev)
 	struct device_driver *drv = dev->driver;
 	int error = 0;
 
+	/*
+	 * If the device is in runtime suspend, the code below may not work
+	 * correctly with it, so skip that code and make the PM core skip all of
+	 * the subsequent "thaw" callbacks for the device.
+	 */
+	if (dev_pm_smart_suspend_and_suspended(dev)) {
+		dev->power.direct_complete = true;
+		return 0;
+	}
+
 	if (pcibios_pm_ops.thaw_noirq) {
 		error = pcibios_pm_ops.thaw_noirq(dev);
 		if (error)

@@ -979,12 +1027,6 @@ static int pci_pm_thaw(struct device *dev)
 	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
 	int error = 0;
 
-	if (pcibios_pm_ops.thaw) {
-		error = pcibios_pm_ops.thaw(dev);
-		if (error)
-			return error;
-	}
-
 	if (pci_has_legacy_pm_support(pci_dev))
 		return pci_legacy_resume(dev);
 
@@ -1010,11 +1052,13 @@ static int pci_pm_poweroff(struct device *dev)
 
 	if (!pm) {
 		pci_pm_default_suspend(pci_dev);
-		goto Fixup;
+		return 0;
 	}
 
 	/* The reason to do that is the same as in pci_pm_suspend(). */
-	pm_runtime_resume(dev);
+	if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) ||
+	    !pci_dev_keep_suspended(pci_dev))
+		pm_runtime_resume(dev);
 
 	pci_dev->state_saved = false;
 	if (pm->poweroff) {

@@ -1026,20 +1070,27 @@ static int pci_pm_poweroff(struct device *dev)
 			return error;
 	}
 
-Fixup:
-	pci_fixup_device(pci_fixup_suspend, pci_dev);
-
-	if (pcibios_pm_ops.poweroff)
-		return pcibios_pm_ops.poweroff(dev);
-
 	return 0;
 }
 
+static int pci_pm_poweroff_late(struct device *dev)
+{
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		return 0;
+
+	pci_fixup_device(pci_fixup_suspend, to_pci_dev(dev));
+
+	return pm_generic_poweroff_late(dev);
+}
+
 static int pci_pm_poweroff_noirq(struct device *dev)
 {
 	struct pci_dev *pci_dev = to_pci_dev(dev);
 	struct device_driver *drv = dev->driver;
 
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		return 0;
+
 	if (pci_has_legacy_pm_support(to_pci_dev(dev)))
 		return pci_legacy_suspend_late(dev, PMSG_HIBERNATE);
 
@@ -1081,6 +1132,10 @@ static int pci_pm_restore_noirq(struct device *dev)
 	struct device_driver *drv = dev->driver;
 	int error = 0;
 
+	/* This is analogous to the pci_pm_resume_noirq() case. */
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		pm_runtime_set_active(dev);
+
 	if (pcibios_pm_ops.restore_noirq) {
 		error = pcibios_pm_ops.restore_noirq(dev);
 		if (error)

@@ -1104,12 +1159,6 @@ static int pci_pm_restore(struct device *dev)
 	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
 	int error = 0;
 
-	if (pcibios_pm_ops.restore) {
-		error = pcibios_pm_ops.restore(dev);
-		if (error)
-			return error;
-	}
-
 	/*
 	 * This is necessary for the hibernation error path in which restore is
 	 * called without restoring the standard config registers of the device.

@@ -1135,10 +1184,12 @@ static int pci_pm_restore(struct device *dev)
 #else /* !CONFIG_HIBERNATE_CALLBACKS */
 
 #define pci_pm_freeze		NULL
+#define pci_pm_freeze_late	NULL
 #define pci_pm_freeze_noirq	NULL
 #define pci_pm_thaw		NULL
 #define pci_pm_thaw_noirq	NULL
 #define pci_pm_poweroff		NULL
+#define pci_pm_poweroff_late	NULL
 #define pci_pm_poweroff_noirq	NULL
 #define pci_pm_restore		NULL
 #define pci_pm_restore_noirq	NULL
@@ -1254,10 +1305,13 @@ static const struct dev_pm_ops pci_dev_pm_ops = {
 	.prepare = pci_pm_prepare,
 	.complete = pci_pm_complete,
 	.suspend = pci_pm_suspend,
+	.suspend_late = pci_pm_suspend_late,
 	.resume = pci_pm_resume,
 	.freeze = pci_pm_freeze,
+	.freeze_late = pci_pm_freeze_late,
 	.thaw = pci_pm_thaw,
 	.poweroff = pci_pm_poweroff,
+	.poweroff_late = pci_pm_poweroff_late,
 	.restore = pci_pm_restore,
 	.suspend_noirq = pci_pm_suspend_noirq,
 	.resume_noirq = pci_pm_resume_noirq,
@@ -2166,8 +2166,7 @@ bool pci_dev_keep_suspended(struct pci_dev *pci_dev)
 
 	if (!pm_runtime_suspended(dev)
 	    || pci_target_state(pci_dev, wakeup) != pci_dev->current_state
-	    || platform_pci_need_resume(pci_dev)
-	    || (pci_dev->dev_flags & PCI_DEV_FLAGS_NEEDS_RESUME))
+	    || platform_pci_need_resume(pci_dev))
 		return false;
 
 	/*
@@ -885,17 +885,27 @@ int acpi_dev_suspend_late(struct device *dev);
 int acpi_subsys_prepare(struct device *dev);
 void acpi_subsys_complete(struct device *dev);
 int acpi_subsys_suspend_late(struct device *dev);
+int acpi_subsys_suspend_noirq(struct device *dev);
+int acpi_subsys_resume_noirq(struct device *dev);
 int acpi_subsys_resume_early(struct device *dev);
 int acpi_subsys_suspend(struct device *dev);
 int acpi_subsys_freeze(struct device *dev);
+int acpi_subsys_freeze_late(struct device *dev);
+int acpi_subsys_freeze_noirq(struct device *dev);
+int acpi_subsys_thaw_noirq(struct device *dev);
 #else
 static inline int acpi_dev_resume_early(struct device *dev) { return 0; }
 static inline int acpi_subsys_prepare(struct device *dev) { return 0; }
 static inline void acpi_subsys_complete(struct device *dev) {}
 static inline int acpi_subsys_suspend_late(struct device *dev) { return 0; }
+static inline int acpi_subsys_suspend_noirq(struct device *dev) { return 0; }
+static inline int acpi_subsys_resume_noirq(struct device *dev) { return 0; }
 static inline int acpi_subsys_resume_early(struct device *dev) { return 0; }
 static inline int acpi_subsys_suspend(struct device *dev) { return 0; }
 static inline int acpi_subsys_freeze(struct device *dev) { return 0; }
+static inline int acpi_subsys_freeze_late(struct device *dev) { return 0; }
+static inline int acpi_subsys_freeze_noirq(struct device *dev) { return 0; }
+static inline int acpi_subsys_thaw_noirq(struct device *dev) { return 0; }
 #endif
 
 #ifdef CONFIG_ACPI
@@ -370,9 +370,6 @@ int subsys_virtual_register(struct bus_type *subsys,
  * @devnode:	Callback to provide the devtmpfs.
  * @class_release: Called to release this class.
  * @dev_release: Called to release the device.
- * @suspend:	Used to put the device to sleep mode, usually to a low power
- *		state.
- * @resume:	Used to bring the device from the sleep mode.
  * @shutdown_pre: Called at shut-down time before driver shutdown.
  * @ns_type:	Callbacks so sysfs can detemine namespaces.
  * @namespace:	Namespace of the device belongs to this class.

@@ -400,8 +397,6 @@ struct class {
 	void (*class_release)(struct class *class);
 	void (*dev_release)(struct device *dev);
 
-	int (*suspend)(struct device *dev, pm_message_t state);
-	int (*resume)(struct device *dev);
 	int (*shutdown_pre)(struct device *dev);
 
 	const struct kobj_ns_type_operations *ns_type;
@@ -1075,6 +1070,16 @@ static inline void dev_pm_syscore_device(struct device *dev, bool val)
 #endif
 }
 
+static inline void dev_pm_set_driver_flags(struct device *dev, u32 flags)
+{
+	dev->power.driver_flags = flags;
+}
+
+static inline bool dev_pm_test_driver_flags(struct device *dev, u32 flags)
+{
+	return !!(dev->power.driver_flags & flags);
+}
+
 static inline void device_lock(struct device *dev)
 {
 	mutex_lock(&dev->mutex);
@@ -206,13 +206,8 @@ enum pci_dev_flags {
 	PCI_DEV_FLAGS_BRIDGE_XLATE_ROOT = (__force pci_dev_flags_t) (1 << 9),
 	/* Do not use FLR even if device advertises PCI_AF_CAP */
 	PCI_DEV_FLAGS_NO_FLR_RESET = (__force pci_dev_flags_t) (1 << 10),
-	/*
-	 * Resume before calling the driver's system suspend hooks, disabling
-	 * the direct_complete optimization.
-	 */
-	PCI_DEV_FLAGS_NEEDS_RESUME = (__force pci_dev_flags_t) (1 << 11),
 	/* Don't use Relaxed Ordering for TLPs directed at this device */
-	PCI_DEV_FLAGS_NO_RELAXED_ORDERING = (__force pci_dev_flags_t) (1 << 12),
+	PCI_DEV_FLAGS_NO_RELAXED_ORDERING = (__force pci_dev_flags_t) (1 << 11),
 };
 
 enum pci_irq_reroute_variant {
@@ -550,6 +550,33 @@ struct pm_subsys_data {
 #endif
 };
 
+/*
+ * Driver flags to control system suspend/resume behavior.
+ *
+ * These flags can be set by device drivers at the probe time.  They need not be
+ * cleared by the drivers as the driver core will take care of that.
+ *
+ * NEVER_SKIP: Do not skip system suspend/resume callbacks for the device.
+ * SMART_PREPARE: Check the return value of the driver's ->prepare callback.
+ * SMART_SUSPEND: No need to resume the device from runtime suspend.
+ *
+ * Setting SMART_PREPARE instructs bus types and PM domains which may want
+ * system suspend/resume callbacks to be skipped for the device to return 0 from
+ * their ->prepare callbacks if the driver's ->prepare callback returns 0 (in
+ * other words, the system suspend/resume callbacks can only be skipped for the
+ * device if its driver doesn't object against that).  This flag has no effect
+ * if NEVER_SKIP is set.
+ *
+ * Setting SMART_SUSPEND instructs bus types and PM domains which may want to
+ * runtime resume the device upfront during system suspend that doing so is not
+ * necessary from the driver's perspective.  It also may cause them to skip
+ * invocations of the ->suspend_late and ->suspend_noirq callbacks provided by
+ * the driver if they decide to leave the device in runtime suspend.
+ */
+#define DPM_FLAG_NEVER_SKIP	BIT(0)
+#define DPM_FLAG_SMART_PREPARE	BIT(1)
+#define DPM_FLAG_SMART_SUSPEND	BIT(2)
+
 struct dev_pm_info {
 	pm_message_t power_state;
 	unsigned int can_wakeup:1;
@@ -561,6 +588,7 @@ struct dev_pm_info {
 	bool is_late_suspended:1;
 	bool early_init:1;	/* Owned by the PM core */
 	bool direct_complete:1;	/* Owned by the PM core */
+	u32 driver_flags;
 	spinlock_t lock;
 #ifdef CONFIG_PM_SLEEP
 	struct list_head entry;

@@ -737,6 +765,8 @@ extern int pm_generic_poweroff_late(struct device *dev);
 extern int pm_generic_poweroff(struct device *dev);
 extern void pm_generic_complete(struct device *dev);
 
+extern bool dev_pm_smart_suspend_and_suspended(struct device *dev);
+
 #else /* !CONFIG_PM_SLEEP */
 
 #define device_pm_lock() do {} while (0)