License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- the file had no licensing information in it,
- the file was a */uapi/* one with no licensing information in it,
- the file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- The file already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license was applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". Results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file or it had no licensing in
it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe and Thomas, with confirmation in some
cases by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based in part on an older version of FOSSology, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to
have copy/paste license identifier errors, and they have been fixed to
reflect the correct identifier.
Additionally Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version early this week with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
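As a sketch of why the comment type matters, these are the SPDX tag styles the kernel convention uses per file type (`//` in .c files, `/* */` in headers so they remain safe for inclusion from assembly, `#` in Makefiles and scripts):

```
// SPDX-License-Identifier: GPL-2.0         first line of a .c source file
/* SPDX-License-Identifier: GPL-2.0 */      first line of a .h header
# SPDX-License-Identifier: GPL-2.0          first line of a Makefile or script
```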
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_SUSPEND_H
#define _LINUX_SUSPEND_H

#include <linux/swap.h>
#include <linux/notifier.h>
#include <linux/init.h>
#include <linux/pm.h>
#include <linux/mm.h>
#include <linux/freezer.h>
#include <asm/errno.h>

#ifdef CONFIG_VT
extern void pm_set_vt_switch(int);
#else
static inline void pm_set_vt_switch(int do_switch)
{
}
#endif

#ifdef CONFIG_VT_CONSOLE_SLEEP
extern void pm_prepare_console(void);
extern void pm_restore_console(void);
#else
static inline void pm_prepare_console(void)
{
}

static inline void pm_restore_console(void)
{
}
#endif

typedef int __bitwise suspend_state_t;

#define PM_SUSPEND_ON		((__force suspend_state_t) 0)
#define PM_SUSPEND_TO_IDLE	((__force suspend_state_t) 1)
PM: Introduce suspend state PM_SUSPEND_FREEZE
PM_SUSPEND_FREEZE is a general state that does not need any
platform-specific support; it equals
frozen processes + suspended devices + idle processors.
Compared with PM_SUSPEND_MEMORY,
PM_SUSPEND_FREEZE saves less power
because the system is still in a running state.
PM_SUSPEND_FREEZE has less resume latency because it does not
touch the BIOS, and the processors stay in the idle state.
Compared with RTPM/idle,
PM_SUSPEND_FREEZE saves more power because
1. the processor has a longer sleep time because processes are frozen.
The deeper the C-state the processor supports, the more power saving we can get.
2. PM_SUSPEND_FREEZE uses the system suspend code path, thus we can get
more power saving from the devices that do not have good RTPM support.
This state is useful for
1) platforms that do not have STR, or have a broken STR.
2) platforms that have an extremely low power idle state,
which can be used to replace STR.
The following describes how the PM_SUSPEND_FREEZE state works.
1. echo freeze > /sys/power/state
2. the processes are frozen.
3. all the devices are suspended.
4. all the processors are blocked by a wait queue.
5. all the processors idle and enter a (deep) C-state.
6. an interrupt fires.
7. a processor is woken up and handles the irq.
8. if it is a general event,
a) the irq handler runs and quits.
b) goto step 4.
9. if it is a real wake event, say, a power button press, keyboard touch, or mouse movement,
a) the irq handler runs and activates the wakeup source.
b) wakeup_source_activate() notifies the wait queue.
c) the system starts resuming from PM_SUSPEND_FREEZE.
10. all the devices are resumed.
11. all the processes are unfrozen.
12. the system is back to the working state.
Known issue:
The wakeup of this new PM_SUSPEND_FREEZE state may behave differently
from the previous suspend states.
Take an ACPI platform for example: there are some GPEs that are only
enabled when the system is in a sleep state, to wake the system back
from S3/S4. But we do not touch these GPEs during the transition to
PM_SUSPEND_FREEZE, which means we may lose some wake events.
On the other hand, as we do not disable all the interrupts during
PM_SUSPEND_FREEZE, we may get some extra "wakeup" interrupts that are
not available for S3/S4.
The patches have been tested on an old Sony laptop, and here are the results:
Average power:
1. RTPM/idle for half an hour:
14.8W, 12.6W, 14.1W, 12.5W, 14.4W, 13.2W, 12.9W
2. Freeze for half an hour:
11W, 10.4W, 9.4W, 11.3W, 10.5W
3. RTPM/idle for three hours:
11.6W
4. Freeze for three hours:
10W
5. Suspend to memory:
0.5~0.9W
Average resume latency:
1. RTPM/idle with a black screen: (from pressing the keyboard to screen back)
Less than 0.2s
2. Freeze: (from pressing the power button to screen back)
2.50s
3. Suspend to memory: (from pressing the power button to screen back)
4.33s
From the results, we can see that all platforms should benefit from
this patch, even those that do not have Low Power S0.
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
#define PM_SUSPEND_STANDBY	((__force suspend_state_t) 2)
#define PM_SUSPEND_MEM		((__force suspend_state_t) 3)
#define PM_SUSPEND_MIN		PM_SUSPEND_TO_IDLE
#define PM_SUSPEND_MAX		((__force suspend_state_t) 4)
PM / Suspend: Add statistics debugfs file for suspend to RAM
Record the S3 failure count for each failure reason and the names of
the last two devices that failed during S3.
We can check them through the 'suspend_stats' entry in debugfs.
The motivation of the patch:
We are enabling power features on Medfield. Compared with a PC/notebook,
a mobile device enters/exits suspend-to-RAM (we call it S3 on Medfield)
far more frequently. If it can't enter suspend-to-RAM in time, the power
might be used up soon.
We often find that a device suspend fails, and then the system retries
S3 over and over again. As the display is off, testers and developers
don't know what happened.
Some testers and developers complain they don't know whether the system
tried suspend-to-RAM, and which device failed to suspend. They need
such info for a quick check. The patch adds suspend_stats under
debugfs for users to check suspend-to-RAM statistics quickly.
Without this patch, there are other methods to find out which device
failed. One is to turn on CONFIG_PM_DEBUG, but users would get too
much info and testers would need to recompile the kernel.
In addition, dynamic debug is another good tool to dump debug info,
but it still doesn't match our usage scenario closely:
1) users need to write a user-space parser to process the syslog output;
2) our testing scenario is to leave the mobile device alone for at
least several hours and then check its status. No serial console is
available during the testing: the console would be suspended, and a
serial console connected through SPI or HSU devices would consume
power. These devices are powered off at suspend-to-RAM.
Signed-off-by: ShuoX Liu <shuox.liu@intel.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
enum suspend_stat_step {
	SUSPEND_FREEZE = 1,
	SUSPEND_PREPARE,
	SUSPEND_SUSPEND,
	SUSPEND_SUSPEND_LATE,
	SUSPEND_SUSPEND_NOIRQ,
	SUSPEND_RESUME_NOIRQ,
	SUSPEND_RESUME_EARLY,
	SUSPEND_RESUME
};

struct suspend_stats {
	int success;
	int fail;
	int failed_freeze;
	int failed_prepare;
	int failed_suspend;
	int failed_suspend_late;
	int failed_suspend_noirq;
	int failed_resume;
	int failed_resume_early;
	int failed_resume_noirq;
#define	REC_FAILED_NUM	2
	int last_failed_dev;
	char failed_devs[REC_FAILED_NUM][40];
	int last_failed_errno;
	int errno[REC_FAILED_NUM];
	int last_failed_step;
	enum suspend_stat_step failed_steps[REC_FAILED_NUM];
};

extern struct suspend_stats suspend_stats;

static inline void dpm_save_failed_dev(const char *name)
{
	strlcpy(suspend_stats.failed_devs[suspend_stats.last_failed_dev],
		name,
		sizeof(suspend_stats.failed_devs[0]));
	suspend_stats.last_failed_dev++;
	suspend_stats.last_failed_dev %= REC_FAILED_NUM;
}

static inline void dpm_save_failed_errno(int err)
{
	suspend_stats.errno[suspend_stats.last_failed_errno] = err;
	suspend_stats.last_failed_errno++;
	suspend_stats.last_failed_errno %= REC_FAILED_NUM;
}

static inline void dpm_save_failed_step(enum suspend_stat_step step)
{
	suspend_stats.failed_steps[suspend_stats.last_failed_step] = step;
	suspend_stats.last_failed_step++;
	suspend_stats.last_failed_step %= REC_FAILED_NUM;
}
/**
 * struct platform_suspend_ops - Callbacks for managing platform dependent
 * system sleep states.
 *
 * @valid: Callback to determine if given system sleep state is supported by
 * the platform.
 * Valid (ie. supported) states are advertised in /sys/power/state. Note
 * that it still may be impossible to enter given system sleep state if the
 * conditions aren't right.
 * There is the %suspend_valid_only_mem function available that can be
 * assigned to this if the platform only supports mem sleep.
 *
 * @begin: Initialise a transition to given system sleep state.
 * @begin() is executed right prior to suspending devices. The information
 * conveyed to the platform code by @begin() should be disregarded by it as
 * soon as @end() is executed. If @begin() fails (ie. returns nonzero),
 * @prepare(), @enter() and @finish() will not be called by the PM core.
 * This callback is optional. However, if it is implemented, the argument
 * passed to @enter() is redundant and should be ignored.
 *
 * @prepare: Prepare the platform for entering the system sleep state indicated
 * by @begin().
 * @prepare() is called right after devices have been suspended (ie. the
 * appropriate .suspend() method has been executed for each device) and
 * before device drivers' late suspend callbacks are executed. It returns
 * 0 on success or a negative error code otherwise, in which case the
 * system cannot enter the desired sleep state (@prepare_late(), @enter(),
 * and @wake() will not be called in that case).
 *
 * @prepare_late: Finish preparing the platform for entering the system sleep
 * state indicated by @begin().
 * @prepare_late is called before disabling nonboot CPUs and after
 * device drivers' late suspend callbacks have been executed. It returns
 * 0 on success or a negative error code otherwise, in which case the
 * system cannot enter the desired sleep state (@enter() will not be
 * executed).
 *
 * @enter: Enter the system sleep state indicated by @begin() or represented by
 * the argument if @begin() is not implemented.
 * This callback is mandatory. It returns 0 on success or a negative
 * error code otherwise, in which case the system cannot enter the desired
 * sleep state.
 *
 * @wake: Called when the system has just left a sleep state, right after
 * the nonboot CPUs have been enabled and before device drivers' early
 * resume callbacks are executed.
 * This callback is optional, but should be implemented by the platforms
 * that implement @prepare_late(). If implemented, it is always called
 * after @prepare_late and @enter(), even if one of them fails.
 *
 * @finish: Finish wake-up of the platform.
 * @finish is called right prior to calling device drivers' regular suspend
 * callbacks.
 * This callback is optional, but should be implemented by the platforms
 * that implement @prepare(). If implemented, it is always called after
 * @enter() and @wake(), even if any of them fails. It is executed after
 * a failing @prepare.
 *
PM / Suspend: Add .suspend_again() callback to suspend_ops
A system or a device may need to control suspend/wakeup events. It may
want to wake up the system after a predefined amount of time or at a
predefined event decided while entering suspend, for polling or delayed
work. Then, it may want to enter suspend again if its predefined wakeup
condition is the only wakeup reason and there are no outstanding events;
thus, it does not wake up userspace or unnecessary devices and stays
suspended as long as possible (saving power).
Enabling a system to wake up after a specified time can be easily
achieved by using the RTC. However, to enter suspend again immediately
without invoking userland and unrelated devices, we need additional
features in the suspend framework.
Such a need comes from:
1. Monitoring a critical device status without interrupts that can
wake up the system. (in-suspend polling)
An example is ambient temperature monitoring that needs to shut down
the system or a specific device function if it is too hot or cold. The
temperature of a specific device may need to be monitored as well;
e.g., a charger monitors the battery temperature in order to stop
charging if it overheats.
2. Executing critical "delayed work" at suspend.
A driver or a system/board may have delayed work (or anything similar)
that it wants to execute at the requested time.
For example, some chargers want to check the battery voltage some
time (e.g., 30 seconds) after the battery is fully charged and the
charger has stopped. Then, the charger restarts charging if the voltage
has dropped more than a threshold, which is smaller than the
"restart-charger" voltage, a threshold to restart charging regardless
of the time passed.
This patch adds a "suspend_again" callback to struct
platform_suspend_ops and lets the "suspend_again" callback return true
if the system is required to enter suspend again after the current
instance of wakeup. Device-wise suspend_again implemented at dev_pm_ops
or syscore is not done because: a) the suspend_again feature is usually
a platform-wise decision and controls the behavior of the whole
platform, and b) there are very few devices related to the use cases of
suspend_again; chargers and temperature sensors are mentioned so far.
With a suspend_again callback registered in the struct
platform_suspend_ops suspend_ops in kernel/power/suspend.c via
suspend_set_ops() by the platform, the suspend framework tries to enter
suspend again by looping suspend_enter() if suspend_again has returned
true and there have been no errors in the suspending sequence or
pending wakeups (per pm_wakeup_pending).
Tested at Exynos4-NURI.
[rjw: Fixed up kerneldoc comment for suspend_enter().]
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
 * @suspend_again: Returns whether the system should suspend again (true) or
 * not (false). If the platform wants to poll sensors or execute some
 * code during suspended without invoking userspace and most of devices,
 * suspend_again callback is the place assuming that periodic-wakeup or
 * alarm-wakeup is already setup. This allows to execute some codes while
 * being kept suspended in the view of userland and devices.
 *
 * @end: Called by the PM core right after resuming devices, to indicate to
 * the platform that the system has returned to the working state or
 * the transition to the sleep state has been aborted.
 * This callback is optional, but should be implemented by the platforms
 * that implement @begin(). Accordingly, platforms implementing @begin()
 * should also provide a @end() which cleans up transitions aborted before
 * @enter().
 *
 * @recover: Recover the platform from a suspend failure.
 * Called by the PM core if the suspending of devices fails.
 * This callback is optional and should only be implemented by platforms
 * which require special recovery actions in that situation.
 */
struct platform_suspend_ops {
	int (*valid)(suspend_state_t state);
	int (*begin)(suspend_state_t state);
	int (*prepare)(void);
	int (*prepare_late)(void);
	int (*enter)(suspend_state_t state);
	void (*wake)(void);
	void (*finish)(void);
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
2011-06-12 17:57:05 +04:00
|
|
|
bool (*suspend_again)(void);
|
2008-01-08 02:04:17 +03:00
|
|
|
void (*end)(void);
|
2008-06-13 01:24:06 +04:00
|
|
|
void (*recover)(void);
|
2007-10-18 14:04:39 +04:00
|
|
|
};

struct platform_s2idle_ops {
	int (*begin)(void);
	int (*prepare)(void);
	int (*prepare_late)(void);
	bool (*wake)(void);
	void (*restore_early)(void);
	void (*restore)(void);
	void (*end)(void);
};

#ifdef CONFIG_SUSPEND
extern suspend_state_t mem_sleep_current;
extern suspend_state_t mem_sleep_default;
/**
 * suspend_set_ops - set platform dependent suspend operations
 * @ops: The new suspend operations to set.
 */
extern void suspend_set_ops(const struct platform_suspend_ops *ops);
extern int suspend_valid_only_mem(suspend_state_t state);

extern unsigned int pm_suspend_global_flags;

#define PM_SUSPEND_FLAG_FW_SUSPEND	BIT(0)
#define PM_SUSPEND_FLAG_FW_RESUME	BIT(1)
#define PM_SUSPEND_FLAG_NO_PLATFORM	BIT(2)

static inline void pm_suspend_clear_flags(void)
{
	pm_suspend_global_flags = 0;
}

static inline void pm_set_suspend_via_firmware(void)
{
	pm_suspend_global_flags |= PM_SUSPEND_FLAG_FW_SUSPEND;
}

static inline void pm_set_resume_via_firmware(void)
{
	pm_suspend_global_flags |= PM_SUSPEND_FLAG_FW_RESUME;
}

static inline void pm_set_suspend_no_platform(void)
{
	pm_suspend_global_flags |= PM_SUSPEND_FLAG_NO_PLATFORM;
}

/**
 * pm_suspend_via_firmware - Check if platform firmware will suspend the system.
 *
 * To be called during system-wide power management transitions to sleep states
 * or during the subsequent system-wide transitions back to the working state.
 *
 * Return 'true' if the platform firmware is going to be invoked at the end of
 * the system-wide power management transition (to a sleep state) in progress in
 * order to complete it, or if the platform firmware has been invoked in order
 * to complete the last (or preceding) transition of the system to a sleep
 * state.
 *
 * This matters if the caller needs or wants to carry out some special actions
 * depending on whether or not control will be passed to the platform firmware
 * subsequently (for example, the device may need to be reset before letting the
 * platform firmware manipulate it, which is not necessary when the platform
 * firmware is not going to be invoked) or when such special actions may have
 * been carried out during the preceding transition of the system to a sleep
 * state (as they may need to be taken into account).
 */
static inline bool pm_suspend_via_firmware(void)
{
	return !!(pm_suspend_global_flags & PM_SUSPEND_FLAG_FW_SUSPEND);
}

/**
 * pm_resume_via_firmware - Check if platform firmware has woken up the system.
 *
 * To be called during system-wide power management transitions from sleep
 * states.
 *
 * Return 'true' if the platform firmware has passed control to the kernel at
 * the beginning of the system-wide power management transition in progress, so
 * the event that woke up the system from sleep has been handled by the platform
 * firmware.
 */
static inline bool pm_resume_via_firmware(void)
{
	return !!(pm_suspend_global_flags & PM_SUSPEND_FLAG_FW_RESUME);
}

/**
 * pm_suspend_no_platform - Check if platform may change device power states.
 *
 * To be called during system-wide power management transitions to sleep states
 * or during the subsequent system-wide transitions back to the working state.
 *
 * Return 'true' if the power states of devices remain under full control of the
 * kernel throughout the system-wide suspend and resume cycle in progress (that
 * is, if a device is put into a certain power state during suspend, it can be
 * expected to remain in that state during resume).
 */
static inline bool pm_suspend_no_platform(void)
{
	return !!(pm_suspend_global_flags & PM_SUSPEND_FLAG_NO_PLATFORM);
}

/* Suspend-to-idle state machine. */
enum s2idle_states {
	S2IDLE_STATE_NONE,	/* Not suspended/suspending. */
	S2IDLE_STATE_ENTER,	/* Enter suspend-to-idle. */
	S2IDLE_STATE_WAKE,	/* Wake up from suspend-to-idle. */
};

extern enum s2idle_states __read_mostly s2idle_state;

static inline bool idle_should_enter_s2idle(void)
{
	return unlikely(s2idle_state == S2IDLE_STATE_ENTER);
}

extern bool pm_suspend_default_s2idle(void);
extern void __init pm_states_init(void);
extern void s2idle_set_ops(const struct platform_s2idle_ops *ops);
extern void s2idle_wake(void);

/**
 * arch_suspend_disable_irqs - disable IRQs for suspend
 *
 * Disables IRQs (in the default case). This is a weak symbol in the common
 * code and thus allows architectures to override it if more needs to be
 * done. Not called for suspend to disk.
 */
extern void arch_suspend_disable_irqs(void);

/**
 * arch_suspend_enable_irqs - enable IRQs after suspend
 *
 * Enables IRQs (in the default case). This is a weak symbol in the common
 * code and thus allows architectures to override it if more needs to be
 * done. Not called for suspend to disk.
 */
extern void arch_suspend_enable_irqs(void);

extern int pm_suspend(suspend_state_t state);
extern bool sync_on_suspend_enabled;
#else /* !CONFIG_SUSPEND */
#define suspend_valid_only_mem	NULL

static inline void pm_suspend_clear_flags(void) {}
static inline void pm_set_suspend_via_firmware(void) {}
static inline void pm_set_resume_via_firmware(void) {}
static inline bool pm_suspend_via_firmware(void) { return false; }
static inline bool pm_resume_via_firmware(void) { return false; }
static inline bool pm_suspend_no_platform(void) { return false; }
static inline bool pm_suspend_default_s2idle(void) { return false; }

static inline void suspend_set_ops(const struct platform_suspend_ops *ops) {}
static inline int pm_suspend(suspend_state_t state) { return -ENOSYS; }
static inline bool sync_on_suspend_enabled(void) { return true; }
static inline bool idle_should_enter_s2idle(void) { return false; }
static inline void __init pm_states_init(void) {}
static inline void s2idle_set_ops(const struct platform_s2idle_ops *ops) {}
static inline void s2idle_wake(void) {}
#endif /* !CONFIG_SUSPEND */

/* struct pbe is used for creating lists of pages that should be restored
 * atomically during the resume from disk, because the page frames they have
 * occupied before the suspend are in use.
 */

struct pbe {
	void *address;		/* address of the copy */
	void *orig_address;	/* original address of a page */
	struct pbe *next;
};

/* mm/page_alloc.c */
extern void mark_free_pages(struct zone *zone);
/**
 * struct platform_hibernation_ops - hibernation platform support
 *
 * The methods in this structure allow a platform to carry out special
 * operations required by it during a hibernation transition.
 *
 * All the methods below, except for @recover(), must be implemented.
 *
 * @begin: Tell the platform driver that we're starting hibernation.
 *	Called right after shrinking memory and before freezing devices.
 *
 * @end: Called by the PM core right after resuming devices, to indicate to
 *	the platform that the system has returned to the working state.
 *
 * @pre_snapshot: Prepare the platform for creating the hibernation image.
 *	Called right after devices have been frozen and before the nonboot
 *	CPUs are disabled (runs with IRQs on).
 *
 * @finish: Restore the previous state of the platform after the hibernation
 *	image has been created *or* put the platform into the normal operation
 *	mode after the hibernation (the same method is executed in both cases).
 *	Called right after the nonboot CPUs have been enabled and before
 *	thawing devices (runs with IRQs on).
 *
 * @prepare: Prepare the platform for entering the low power state.
 *	Called right after the hibernation image has been saved and before
 *	devices are prepared for entering the low power state.
 *
 * @enter: Put the system into the low power state after the hibernation image
 *	has been saved to disk.
 *	Called after the nonboot CPUs have been disabled and all of the low
 *	level devices have been shut down (runs with IRQs off).
 *
 * @leave: Perform the first stage of the cleanup after the system sleep state
 *	indicated by @set_target() has been left.
 *	Called right after the control has been passed from the boot kernel to
 *	the image kernel, before the nonboot CPUs are enabled and before devices
 *	are resumed. Executed with interrupts disabled.
 *
 * @pre_restore: Prepare system for the restoration from a hibernation image.
 *	Called right after devices have been frozen and before the nonboot
 *	CPUs are disabled (runs with IRQs on).
 *
 * @restore_cleanup: Clean up after a failing image restoration.
 *	Called right after the nonboot CPUs have been enabled and before
 *	thawing devices (runs with IRQs on).
 *
 * @recover: Recover the platform from a failure to suspend devices.
 *	Called by the PM core if the suspending of devices during hibernation
 *	fails. This callback is optional and should only be implemented by
 *	platforms which require special recovery actions in that situation.
 */

struct platform_hibernation_ops {
	int (*begin)(pm_message_t stage);
	void (*end)(void);
	int (*pre_snapshot)(void);
	void (*finish)(void);
	int (*prepare)(void);
	int (*enter)(void);
	void (*leave)(void);
	int (*pre_restore)(void);
	void (*restore_cleanup)(void);
	void (*recover)(void);
};

#ifdef CONFIG_HIBERNATION
/* kernel/power/snapshot.c */
extern void register_nosave_region(unsigned long b, unsigned long e);
extern int swsusp_page_is_forbidden(struct page *);
extern void swsusp_set_page_free(struct page *);
extern void swsusp_unset_page_free(struct page *);
extern unsigned long get_safe_page(gfp_t gfp_mask);
extern asmlinkage int swsusp_arch_suspend(void);
extern asmlinkage int swsusp_arch_resume(void);

extern void hibernation_set_ops(const struct platform_hibernation_ops *ops);
extern int hibernate(void);
extern bool system_entering_hibernation(void);
extern bool hibernation_available(void);
asmlinkage int swsusp_save(void);
extern struct pbe *restore_pblist;
int pfn_is_nosave(unsigned long pfn);

int hibernate_quiet_exec(int (*func)(void *data), void *data);

#else /* CONFIG_HIBERNATION */
static inline void register_nosave_region(unsigned long b, unsigned long e) {}
static inline int swsusp_page_is_forbidden(struct page *p) { return 0; }
static inline void swsusp_set_page_free(struct page *p) {}
static inline void swsusp_unset_page_free(struct page *p) {}

static inline void hibernation_set_ops(const struct platform_hibernation_ops *ops) {}
static inline int hibernate(void) { return -ENOSYS; }
static inline bool system_entering_hibernation(void) { return false; }
static inline bool hibernation_available(void) { return false; }

static inline int hibernate_quiet_exec(int (*func)(void *data), void *data) {
	return -ENOTSUPP;
}
#endif /* CONFIG_HIBERNATION */

#ifdef CONFIG_HIBERNATION_SNAPSHOT_DEV
int is_hibernate_resume_dev(dev_t dev);
#else
static inline int is_hibernate_resume_dev(dev_t dev) { return 0; }
#endif

/* Hibernation and suspend events */
#define PM_HIBERNATION_PREPARE	0x0001 /* Going to hibernate */
#define PM_POST_HIBERNATION	0x0002 /* Hibernation finished */
#define PM_SUSPEND_PREPARE	0x0003 /* Going to suspend the system */
#define PM_POST_SUSPEND		0x0004 /* Suspend finished */
#define PM_RESTORE_PREPARE	0x0005 /* Going to restore a saved image */
#define PM_POST_RESTORE		0x0006 /* Restore failed */

extern struct mutex system_transition_mutex;

#ifdef CONFIG_PM_SLEEP
void save_processor_state(void);
void restore_processor_state(void);

/* kernel/power/main.c */
extern int register_pm_notifier(struct notifier_block *nb);
extern int unregister_pm_notifier(struct notifier_block *nb);
extern void ksys_sync_helper(void);

#define pm_notifier(fn, pri) {				\
	static struct notifier_block fn##_nb =			\
		{ .notifier_call = fn, .priority = pri };	\
	register_pm_notifier(&fn##_nb);			\
}

/* drivers/base/power/wakeup.c */
extern bool events_check_enabled;
extern suspend_state_t pm_suspend_target_state;

extern bool pm_wakeup_pending(void);
extern void pm_system_wakeup(void);
extern void pm_system_cancel_wakeup(void);
extern void pm_wakeup_clear(unsigned int irq_number);
extern void pm_system_irq_wakeup(unsigned int irq_number);
extern unsigned int pm_wakeup_irq(void);
extern bool pm_get_wakeup_count(unsigned int *count, bool block);
extern bool pm_save_wakeup_count(unsigned int count);
extern void pm_wakep_autosleep_enabled(bool set);
extern void pm_print_active_wakeup_sources(void);

extern void lock_system_sleep(void);
extern void unlock_system_sleep(void);

#else /* !CONFIG_PM_SLEEP */

static inline int register_pm_notifier(struct notifier_block *nb)
{
	return 0;
}

static inline int unregister_pm_notifier(struct notifier_block *nb)
{
	return 0;
}

static inline void ksys_sync_helper(void) {}

#define pm_notifier(fn, pri) do { (void)(fn); } while (0)

static inline bool pm_wakeup_pending(void) { return false; }
static inline void pm_system_wakeup(void) {}
static inline void pm_wakeup_clear(bool reset) {}
static inline void pm_system_irq_wakeup(unsigned int irq_number) {}

static inline void lock_system_sleep(void) {}
static inline void unlock_system_sleep(void) {}

#endif /* !CONFIG_PM_SLEEP */

#ifdef CONFIG_PM_SLEEP_DEBUG
extern bool pm_print_times_enabled;
extern bool pm_debug_messages_on;
extern __printf(2, 3) void __pm_pr_dbg(bool defer, const char *fmt, ...);
#else
#define pm_print_times_enabled (false)
#define pm_debug_messages_on (false)

#include <linux/printk.h>

#define __pm_pr_dbg(defer, fmt, ...) \
	no_printk(KERN_DEBUG fmt, ##__VA_ARGS__)
#endif

#define pm_pr_dbg(fmt, ...) \
	__pm_pr_dbg(false, fmt, ##__VA_ARGS__)

#define pm_deferred_pr_dbg(fmt, ...) \
	__pm_pr_dbg(true, fmt, ##__VA_ARGS__)

#ifdef CONFIG_PM_AUTOSLEEP
/* kernel/power/autosleep.c */
void queue_up_suspend_work(void);

#else /* !CONFIG_PM_AUTOSLEEP */

static inline void queue_up_suspend_work(void) {}

#endif /* !CONFIG_PM_AUTOSLEEP */

#endif /* _LINUX_SUSPEND_H */