Merge branch 'pm-genirq'
* pm-genirq:
  PM / genirq: Document rules related to system suspend and interrupts
  PCI / PM: Make PCIe PME interrupts wake up from suspend-to-idle
  x86 / PM: Set IRQCHIP_SKIP_SET_WAKE for IOAPIC IRQ chip objects
  genirq: Simplify wakeup mechanism
  genirq: Mark wakeup sources as armed on suspend
  genirq: Create helper for flow handler entry check
  genirq: Distangle edge handler entry
  genirq: Avoid double loop on suspend
  genirq: Move MASK_ON_SUSPEND handling into suspend_device_irqs()
  genirq: Make use of pm misfeature accounting
  genirq: Add sanity checks for PM options on shared interrupt lines
  genirq: Move suspend/resume logic into irq/pm code
  PM / sleep: Mechanism for aborting system suspends unconditionally
Commit 88b42a4883
@@ -0,0 +1,123 @@
System Suspend and Device Interrupts

Copyright (C) 2014 Intel Corp.
Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


Suspending and Resuming Device IRQs
-----------------------------------

Device interrupt request lines (IRQs) are generally disabled during system
suspend after the "late" phase of suspending devices (that is, after all of the
->prepare, ->suspend and ->suspend_late callbacks have been executed for all
devices).  That is done by suspend_device_irqs().

The rationale for doing so is that after the "late" phase of device suspend
there is no legitimate reason why any interrupts from suspended devices should
trigger, and if any devices have not been suspended properly yet, it is better
to block interrupts from them anyway.  Also, in the past there were problems
with interrupt handlers for shared IRQs whose device drivers were not prepared
for interrupts triggering after their devices had been suspended.  In some
cases they would attempt to access, for example, memory address spaces of
suspended devices, causing unpredictable behavior to ensue as a result.
Unfortunately, such problems are very difficult to debug and the introduction
of suspend_device_irqs(), along with the "noirq" phase of device suspend and
resume, was the only practical way to mitigate them.

Device IRQs are re-enabled during system resume, right before the "early" phase
of resuming devices (that is, before starting to execute ->resume_early
callbacks for devices).  The function doing that is resume_device_irqs().
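
Putting the two halves together, the ordering around the sleep state looks
like this (a sketch distilled from the description above, not a verbatim
excerpt of the PM core code):

	->prepare, ->suspend and ->suspend_late callbacks for all devices
	suspend_device_irqs()      <-- device IRQs become disabled
	->suspend_noirq callbacks for all devices

	[the system sleeps]

	->resume_noirq callbacks for all devices
	resume_device_irqs()       <-- device IRQs become enabled again
	->resume_early, ->resume and ->complete callbacks for all devices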


The IRQF_NO_SUSPEND Flag
------------------------

There are interrupts that can legitimately trigger during the entire system
suspend-resume cycle, including the "noirq" phases of suspending and resuming
devices as well as during the time when nonboot CPUs are taken offline and
brought back online.  That applies to timer interrupts in the first place,
but also to IPIs and to some other special-purpose interrupts.

The IRQF_NO_SUSPEND flag is used to indicate that to the IRQ subsystem when
requesting a special-purpose interrupt.  It causes suspend_device_irqs() to
leave the corresponding IRQ enabled so as to allow the interrupt to work all
the time as expected.

Note that the IRQF_NO_SUSPEND flag affects the entire IRQ and not just one
user of it.  Thus, if the IRQ is shared, all of the interrupt handlers
installed for it will be executed as usual after suspend_device_irqs(), even
if the IRQF_NO_SUSPEND flag was not passed to request_irq() (or equivalent)
by some of the IRQ's users.  For this reason, using IRQF_NO_SUSPEND and
IRQF_SHARED at the same time should be avoided.
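
As an illustration, a special-purpose (non-shared) interrupt might be
requested as follows.  This is a minimal sketch: the handler, the name string
and the my_irq variable are made up for the example; only request_irq() and
the IRQF_NO_SUSPEND flag are taken from the actual API:

	static irqreturn_t my_special_handler(int irq, void *dev_id)
	{
		/* May run at any point of the suspend-resume cycle. */
		return IRQ_HANDLED;
	}

	/*
	 * IRQF_NO_SUSPEND makes suspend_device_irqs() leave this IRQ
	 * enabled.  IRQF_SHARED is deliberately not used here, in line
	 * with the advice above.
	 */
	ret = request_irq(my_irq, my_special_handler, IRQF_NO_SUSPEND,
			  "my-special-irq", NULL);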


System Wakeup Interrupts, enable_irq_wake() and disable_irq_wake()
------------------------------------------------------------------

System wakeup interrupts generally need to be configured to wake up the system
from sleep states, especially if they are used for different purposes (e.g. as
I/O interrupts) in the working state.

That may involve turning on special signal handling logic within the platform
(such as an SoC) so that signals from a given line are routed in a different
way during system sleep so as to trigger a system wakeup when needed.  For
example, the platform may include a dedicated interrupt controller used
specifically for handling system wakeup events.  Then, if a given interrupt
line is supposed to wake up the system from sleep states, the corresponding
input of that interrupt controller needs to be enabled to receive signals from
the line in question.  After wakeup, it generally is better to disable that
input to prevent the dedicated controller from triggering interrupts
unnecessarily.

The IRQ subsystem provides two helper functions to be used by device drivers
for those purposes.  Namely, enable_irq_wake() turns on the platform's logic
for handling the given IRQ as a system wakeup interrupt line and
disable_irq_wake() turns that logic off.
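
In a driver, these helpers are typically invoked from the system suspend and
resume callbacks, roughly as follows.  This is a sketch only: the my_chip
structure and callbacks are hypothetical, while enable_irq_wake(),
disable_irq_wake() and device_may_wakeup() are the real helpers:

	static int my_chip_suspend(struct device *dev)
	{
		struct my_chip *chip = dev_get_drvdata(dev);

		if (device_may_wakeup(dev))
			enable_irq_wake(chip->irq);	/* arm the wakeup logic */

		return 0;
	}

	static int my_chip_resume(struct device *dev)
	{
		struct my_chip *chip = dev_get_drvdata(dev);

		if (device_may_wakeup(dev))
			disable_irq_wake(chip->irq);	/* disarm it again */

		return 0;
	}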

Calling enable_irq_wake() causes suspend_device_irqs() to treat the given IRQ
in a special way.  Namely, the IRQ remains enabled, but on the first interrupt
it will be disabled, marked as pending and "suspended" so that it will be
re-enabled by resume_device_irqs() during the subsequent system resume.  Also
the PM core is notified about the event, which causes the system suspend in
progress to be aborted (that doesn't have to happen immediately, but at one
of the points where the suspend thread looks for pending wakeup events).

This way every interrupt from a wakeup interrupt source will either cause the
system suspend currently in progress to be aborted or wake up the system if
already suspended.  However, after suspend_device_irqs() interrupt handlers
are not executed for system wakeup IRQs.  They are only executed for
IRQF_NO_SUSPEND IRQs at that time, but those IRQs should not be configured
for system wakeup using enable_irq_wake().


Interrupts and Suspend-to-Idle
------------------------------

Suspend-to-idle (also known as the "freeze" sleep state) is a relatively new
system sleep state that works by idling all of the processors and waiting for
interrupts right after the "noirq" phase of suspending devices.

Of course, this means that all of the interrupts with the IRQF_NO_SUSPEND flag
set will bring CPUs out of idle while in that state, but they will not cause
the IRQ subsystem to trigger a system wakeup.

System wakeup interrupts, in turn, will trigger wakeup from suspend-to-idle in
analogy with what they do in the full system suspend case.  The only
difference is that the wakeup from suspend-to-idle is signaled using the usual
working state interrupt delivery mechanisms and doesn't require the platform
to use any special interrupt handling logic for it to work.


IRQF_NO_SUSPEND and enable_irq_wake()
-------------------------------------

There are no valid reasons to use both enable_irq_wake() and the
IRQF_NO_SUSPEND flag on the same IRQ.

First of all, if the IRQ is not shared, the rules for handling IRQF_NO_SUSPEND
interrupts (interrupt handlers are invoked after suspend_device_irqs()) are
directly at odds with the rules for handling system wakeup interrupts
(interrupt handlers are not invoked after suspend_device_irqs()).

Second, both enable_irq_wake() and IRQF_NO_SUSPEND apply to entire IRQs and
not to individual interrupt handlers, so sharing an IRQ between a system
wakeup interrupt source and an IRQF_NO_SUSPEND interrupt source does not make
sense.

@@ -2623,6 +2623,7 @@ static struct irq_chip ioapic_chip __read_mostly = {
 	.irq_eoi = ack_apic_level,
 	.irq_set_affinity = native_ioapic_set_affinity,
 	.irq_retrigger = ioapic_retrigger_irq,
+	.flags = IRQCHIP_SKIP_SET_WAKE,
 };
 
 static inline void init_IO_APIC_traps(void)

@@ -3173,6 +3174,7 @@ static struct irq_chip msi_chip = {
 	.irq_ack = ack_apic_edge,
 	.irq_set_affinity = msi_set_affinity,
 	.irq_retrigger = ioapic_retrigger_irq,
+	.flags = IRQCHIP_SKIP_SET_WAKE,
 };
 
 int setup_msi_irq(struct pci_dev *dev, struct msi_desc *msidesc,

@@ -3271,6 +3273,7 @@ static struct irq_chip dmar_msi_type = {
 	.irq_ack = ack_apic_edge,
 	.irq_set_affinity = dmar_msi_set_affinity,
 	.irq_retrigger = ioapic_retrigger_irq,
+	.flags = IRQCHIP_SKIP_SET_WAKE,
 };
 
 int arch_setup_dmar_msi(unsigned int irq)

@@ -3321,6 +3324,7 @@ static struct irq_chip hpet_msi_type = {
 	.irq_ack = ack_apic_edge,
 	.irq_set_affinity = hpet_msi_set_affinity,
 	.irq_retrigger = ioapic_retrigger_irq,
+	.flags = IRQCHIP_SKIP_SET_WAKE,
 };
 
 int default_setup_hpet_msi(unsigned int irq, unsigned int id)

@@ -3384,6 +3388,7 @@ static struct irq_chip ht_irq_chip = {
 	.irq_ack = ack_apic_edge,
 	.irq_set_affinity = ht_set_affinity,
 	.irq_retrigger = ioapic_retrigger_irq,
+	.flags = IRQCHIP_SKIP_SET_WAKE,
 };
 
 int arch_setup_ht_irq(unsigned int irq, struct pci_dev *dev)

@@ -24,6 +24,9 @@
  */
 bool events_check_enabled __read_mostly;
 
+/* If set and the system is suspending, terminate the suspend. */
+static bool pm_abort_suspend __read_mostly;
+
 /*
  * Combined counters of registered wakeup events and wakeup events in progress.
  * They need to be modified together atomically, so it's better to use one

@@ -719,7 +722,18 @@ bool pm_wakeup_pending(void)
 		pm_print_active_wakeup_sources();
 	}
 
-	return ret;
+	return ret || pm_abort_suspend;
+}
+
+void pm_system_wakeup(void)
+{
+	pm_abort_suspend = true;
+	freeze_wake();
+}
+
+void pm_wakeup_clear(void)
+{
+	pm_abort_suspend = false;
 }
 
 /**

@@ -9,7 +9,7 @@
 #include <linux/syscore_ops.h>
 #include <linux/mutex.h>
 #include <linux/module.h>
-#include <linux/interrupt.h>
+#include <linux/suspend.h>
 #include <trace/events/power.h>
 
 static LIST_HEAD(syscore_ops_list);

@@ -54,9 +54,8 @@ int syscore_suspend(void)
 	pr_debug("Checking wakeup interrupts\n");
 
 	/* Return error code if there are any wakeup interrupts pending. */
-	ret = check_wakeup_irqs();
-	if (ret)
-		return ret;
+	if (pm_wakeup_pending())
+		return -EBUSY;
 
 	WARN_ONCE(!irqs_disabled(),
 		"Interrupts enabled before system core suspend.\n");

@@ -41,11 +41,17 @@ static int __init pcie_pme_setup(char *str)
 }
 __setup("pcie_pme=", pcie_pme_setup);
 
+enum pme_suspend_level {
+	PME_SUSPEND_NONE = 0,
+	PME_SUSPEND_WAKEUP,
+	PME_SUSPEND_NOIRQ,
+};
+
 struct pcie_pme_service_data {
 	spinlock_t lock;
 	struct pcie_device *srv;
 	struct work_struct work;
-	bool noirq; /* Don't enable the PME interrupt used by this service. */
+	enum pme_suspend_level suspend_level;
 };
 
 /**

@@ -223,7 +229,7 @@ static void pcie_pme_work_fn(struct work_struct *work)
 	spin_lock_irq(&data->lock);
 
 	for (;;) {
-		if (data->noirq)
+		if (data->suspend_level != PME_SUSPEND_NONE)
 			break;
 
 		pcie_capability_read_dword(port, PCI_EXP_RTSTA, &rtsta);

@@ -250,7 +256,7 @@ static void pcie_pme_work_fn(struct work_struct *work)
 		spin_lock_irq(&data->lock);
 	}
 
-	if (!data->noirq)
+	if (data->suspend_level == PME_SUSPEND_NONE)
 		pcie_pme_interrupt_enable(port, true);
 
 	spin_unlock_irq(&data->lock);

@@ -367,6 +373,21 @@ static int pcie_pme_probe(struct pcie_device *srv)
 	return ret;
 }
 
+static bool pcie_pme_check_wakeup(struct pci_bus *bus)
+{
+	struct pci_dev *dev;
+
+	if (!bus)
+		return false;
+
+	list_for_each_entry(dev, &bus->devices, bus_list)
+		if (device_may_wakeup(&dev->dev)
+		    || pcie_pme_check_wakeup(dev->subordinate))
+			return true;
+
+	return false;
+}
+
 /**
  * pcie_pme_suspend - Suspend PCIe PME service device.
  * @srv: PCIe service device to suspend.

@@ -375,11 +396,26 @@ static int pcie_pme_suspend(struct pcie_device *srv)
 {
 	struct pcie_pme_service_data *data = get_service_data(srv);
 	struct pci_dev *port = srv->port;
+	bool wakeup;
 
+	if (device_may_wakeup(&port->dev)) {
+		wakeup = true;
+	} else {
+		down_read(&pci_bus_sem);
+		wakeup = pcie_pme_check_wakeup(port->subordinate);
+		up_read(&pci_bus_sem);
+	}
 	spin_lock_irq(&data->lock);
-	pcie_pme_interrupt_enable(port, false);
-	pcie_clear_root_pme_status(port);
-	data->noirq = true;
+	if (wakeup) {
+		enable_irq_wake(srv->irq);
+		data->suspend_level = PME_SUSPEND_WAKEUP;
+	} else {
+		struct pci_dev *port = srv->port;
+
+		pcie_pme_interrupt_enable(port, false);
+		pcie_clear_root_pme_status(port);
+		data->suspend_level = PME_SUSPEND_NOIRQ;
+	}
 	spin_unlock_irq(&data->lock);
 
 	synchronize_irq(srv->irq);

@@ -394,12 +430,17 @@ static int pcie_pme_suspend(struct pcie_device *srv)
 static int pcie_pme_resume(struct pcie_device *srv)
 {
 	struct pcie_pme_service_data *data = get_service_data(srv);
-	struct pci_dev *port = srv->port;
 
 	spin_lock_irq(&data->lock);
-	data->noirq = false;
-	pcie_clear_root_pme_status(port);
-	pcie_pme_interrupt_enable(port, true);
+	if (data->suspend_level == PME_SUSPEND_NOIRQ) {
+		struct pci_dev *port = srv->port;
+
+		pcie_clear_root_pme_status(port);
+		pcie_pme_interrupt_enable(port, true);
+	} else {
+		disable_irq_wake(srv->irq);
+	}
+	data->suspend_level = PME_SUSPEND_NONE;
 	spin_unlock_irq(&data->lock);
 
 	return 0;

@@ -193,11 +193,6 @@ extern void irq_wake_thread(unsigned int irq, void *dev_id);
 /* The following three functions are for the core kernel use only. */
 extern void suspend_device_irqs(void);
 extern void resume_device_irqs(void);
-#ifdef CONFIG_PM_SLEEP
-extern int check_wakeup_irqs(void);
-#else
-static inline int check_wakeup_irqs(void) { return 0; }
-#endif
 
 /**
  * struct irq_affinity_notify - context for notification of IRQ affinity changes

@@ -173,6 +173,7 @@ struct irq_data {
  * IRQD_IRQ_DISABLED - Disabled state of the interrupt
  * IRQD_IRQ_MASKED - Masked state of the interrupt
  * IRQD_IRQ_INPROGRESS - In progress state of the interrupt
+ * IRQD_WAKEUP_ARMED - Wakeup mode armed
  */
 enum {
 	IRQD_TRIGGER_MASK = 0xf,
|
||||||
IRQD_IRQ_DISABLED = (1 << 16),
|
IRQD_IRQ_DISABLED = (1 << 16),
|
||||||
IRQD_IRQ_MASKED = (1 << 17),
|
IRQD_IRQ_MASKED = (1 << 17),
|
||||||
IRQD_IRQ_INPROGRESS = (1 << 18),
|
IRQD_IRQ_INPROGRESS = (1 << 18),
|
||||||
|
IRQD_WAKEUP_ARMED = (1 << 19),
|
||||||
};
|
};
|
||||||
|
|
||||||
static inline bool irqd_is_setaffinity_pending(struct irq_data *d)
|
static inline bool irqd_is_setaffinity_pending(struct irq_data *d)
|
||||||
|

@@ -257,6 +259,12 @@ static inline bool irqd_irq_inprogress(struct irq_data *d)
 	return d->state_use_accessors & IRQD_IRQ_INPROGRESS;
 }
 
+static inline bool irqd_is_wakeup_armed(struct irq_data *d)
+{
+	return d->state_use_accessors & IRQD_WAKEUP_ARMED;
+}
+
+
 /*
  * Functions for chained handlers which can be enabled/disabled by the
  * standard disable_irq/enable_irq calls. Must be called with
|
@ -36,6 +36,11 @@ struct irq_desc;
|
||||||
* @threads_oneshot: bitfield to handle shared oneshot threads
|
* @threads_oneshot: bitfield to handle shared oneshot threads
|
||||||
* @threads_active: number of irqaction threads currently running
|
* @threads_active: number of irqaction threads currently running
|
||||||
* @wait_for_threads: wait queue for sync_irq to wait for threaded handlers
|
* @wait_for_threads: wait queue for sync_irq to wait for threaded handlers
|
||||||
|
* @nr_actions: number of installed actions on this descriptor
|
||||||
|
* @no_suspend_depth: number of irqactions on a irq descriptor with
|
||||||
|
* IRQF_NO_SUSPEND set
|
||||||
|
* @force_resume_depth: number of irqactions on a irq descriptor with
|
||||||
|
* IRQF_FORCE_RESUME set
|
||||||
* @dir: /proc/irq/ procfs entry
|
* @dir: /proc/irq/ procfs entry
|
||||||
* @name: flow handler name for /proc/interrupts output
|
* @name: flow handler name for /proc/interrupts output
|
||||||
*/
|
*/
|
||||||
|

@@ -68,6 +73,11 @@ struct irq_desc {
 	unsigned long threads_oneshot;
 	atomic_t threads_active;
 	wait_queue_head_t wait_for_threads;
+#ifdef CONFIG_PM_SLEEP
+	unsigned int nr_actions;
+	unsigned int no_suspend_depth;
+	unsigned int force_resume_depth;
+#endif
 #ifdef CONFIG_PROC_FS
 	struct proc_dir_entry *dir;
 #endif

@@ -371,6 +371,8 @@ extern int unregister_pm_notifier(struct notifier_block *nb);
 extern bool events_check_enabled;
 
 extern bool pm_wakeup_pending(void);
+extern void pm_system_wakeup(void);
+extern void pm_wakeup_clear(void);
 extern bool pm_get_wakeup_count(unsigned int *count, bool block);
 extern bool pm_save_wakeup_count(unsigned int count);
 extern void pm_wakep_autosleep_enabled(bool set);

@@ -418,6 +420,8 @@ static inline int unregister_pm_notifier(struct notifier_block *nb)
 #define pm_notifier(fn, pri) do { (void)(fn); } while (0)
 
 static inline bool pm_wakeup_pending(void) { return false; }
+static inline void pm_system_wakeup(void) {}
+static inline void pm_wakeup_clear(void) {}
 
 static inline void lock_system_sleep(void) {}
 static inline void unlock_system_sleep(void) {}

@@ -342,6 +342,31 @@ static bool irq_check_poll(struct irq_desc *desc)
 	return irq_wait_for_poll(desc);
 }
 
+static bool irq_may_run(struct irq_desc *desc)
+{
+	unsigned int mask = IRQD_IRQ_INPROGRESS | IRQD_WAKEUP_ARMED;
+
+	/*
+	 * If the interrupt is not in progress and is not an armed
+	 * wakeup interrupt, proceed.
+	 */
+	if (!irqd_has_set(&desc->irq_data, mask))
+		return true;
+
+	/*
+	 * If the interrupt is an armed wakeup source, mark it pending
+	 * and suspended, disable it and notify the pm core about the
+	 * event.
+	 */
+	if (irq_pm_check_wakeup(desc))
+		return false;
+
+	/*
+	 * Handle a potential concurrent poll on a different core.
+	 */
+	return irq_check_poll(desc);
+}
+
 /**
  * handle_simple_irq - Simple and software-decoded IRQs.
  * @irq: the interrupt number

@@ -359,9 +384,8 @@ handle_simple_irq(unsigned int irq, struct irq_desc *desc)
 {
 	raw_spin_lock(&desc->lock);
 
-	if (unlikely(irqd_irq_inprogress(&desc->irq_data)))
-		if (!irq_check_poll(desc))
-			goto out_unlock;
+	if (!irq_may_run(desc))
+		goto out_unlock;
 
 	desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
 	kstat_incr_irqs_this_cpu(irq, desc);

@@ -412,9 +436,8 @@ handle_level_irq(unsigned int irq, struct irq_desc *desc)
 	raw_spin_lock(&desc->lock);
 	mask_ack_irq(desc);
 
-	if (unlikely(irqd_irq_inprogress(&desc->irq_data)))
-		if (!irq_check_poll(desc))
-			goto out_unlock;
+	if (!irq_may_run(desc))
+		goto out_unlock;
 
 	desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
 	kstat_incr_irqs_this_cpu(irq, desc);

@@ -485,9 +508,8 @@ handle_fasteoi_irq(unsigned int irq, struct irq_desc *desc)
 
 	raw_spin_lock(&desc->lock);
 
-	if (unlikely(irqd_irq_inprogress(&desc->irq_data)))
-		if (!irq_check_poll(desc))
-			goto out;
+	if (!irq_may_run(desc))
+		goto out;
 
 	desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
 	kstat_incr_irqs_this_cpu(irq, desc);
|
||||||
raw_spin_lock(&desc->lock);
|
raw_spin_lock(&desc->lock);
|
||||||
|
|
||||||
desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
|
desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
|
||||||
/*
|
|
||||||
* If we're currently running this IRQ, or its disabled,
|
if (!irq_may_run(desc)) {
|
||||||
* we shouldn't process the IRQ. Mark it pending, handle
|
desc->istate |= IRQS_PENDING;
|
||||||
* the necessary masking and go out
|
mask_ack_irq(desc);
|
||||||
*/
|
goto out_unlock;
|
||||||
if (unlikely(irqd_irq_disabled(&desc->irq_data) ||
|
|
||||||
irqd_irq_inprogress(&desc->irq_data) || !desc->action)) {
|
|
||||||
if (!irq_check_poll(desc)) {
|
|
||||||
desc->istate |= IRQS_PENDING;
|
|
||||||
mask_ack_irq(desc);
|
|
||||||
goto out_unlock;
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/*
|
||||||
|
* If its disabled or no action available then mask it and get
|
||||||
|
* out of here.
|
||||||
|
*/
|
||||||
|
if (irqd_irq_disabled(&desc->irq_data) || !desc->action) {
|
||||||
|
desc->istate |= IRQS_PENDING;
|
||||||
|
mask_ack_irq(desc);
|
||||||
|
goto out_unlock;
|
||||||
|
}
|
||||||
|
|
||||||
kstat_incr_irqs_this_cpu(irq, desc);
|
kstat_incr_irqs_this_cpu(irq, desc);
|
||||||
|
|
||||||
/* Start handling the irq */
|
/* Start handling the irq */
|
||||||
|

@@ -602,18 +628,21 @@ void handle_edge_eoi_irq(unsigned int irq, struct irq_desc *desc)
 	raw_spin_lock(&desc->lock);
 
 	desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
-	/*
-	 * If we're currently running this IRQ, or its disabled,
-	 * we shouldn't process the IRQ. Mark it pending, handle
-	 * the necessary masking and go out
-	 */
-	if (unlikely(irqd_irq_disabled(&desc->irq_data) ||
-		     irqd_irq_inprogress(&desc->irq_data) || !desc->action)) {
-		if (!irq_check_poll(desc)) {
-			desc->istate |= IRQS_PENDING;
-			goto out_eoi;
-		}
+
+	if (!irq_may_run(desc)) {
+		desc->istate |= IRQS_PENDING;
+		goto out_eoi;
+	}
+
+	/*
+	 * If its disabled or no action available then mask it and get
+	 * out of here.
+	 */
+	if (irqd_irq_disabled(&desc->irq_data) || !desc->action) {
+		desc->istate |= IRQS_PENDING;
+		goto out_eoi;
 	}
+
 	kstat_incr_irqs_this_cpu(irq, desc);
 
 	do {

@@ -63,8 +63,8 @@ enum {
 
 extern int __irq_set_trigger(struct irq_desc *desc, unsigned int irq,
 		unsigned long flags);
-extern void __disable_irq(struct irq_desc *desc, unsigned int irq, bool susp);
-extern void __enable_irq(struct irq_desc *desc, unsigned int irq, bool resume);
+extern void __disable_irq(struct irq_desc *desc, unsigned int irq);
+extern void __enable_irq(struct irq_desc *desc, unsigned int irq);
 
 extern int irq_startup(struct irq_desc *desc, bool resend);
 extern void irq_shutdown(struct irq_desc *desc);

@@ -194,3 +194,15 @@ static inline void kstat_incr_irqs_this_cpu(unsigned int irq, struct irq_desc *d
 	__this_cpu_inc(*desc->kstat_irqs);
 	__this_cpu_inc(kstat.irqs_sum);
 }
+
+#ifdef CONFIG_PM_SLEEP
+bool irq_pm_check_wakeup(struct irq_desc *desc);
+void irq_pm_install_action(struct irq_desc *desc, struct irqaction *action);
+void irq_pm_remove_action(struct irq_desc *desc, struct irqaction *action);
+#else
+static inline bool irq_pm_check_wakeup(struct irq_desc *desc) { return false; }
+static inline void
+irq_pm_install_action(struct irq_desc *desc, struct irqaction *action) { }
+static inline void
+irq_pm_remove_action(struct irq_desc *desc, struct irqaction *action) { }
+#endif

@@ -382,14 +382,8 @@ setup_affinity(unsigned int irq, struct irq_desc *desc, struct cpumask *mask)
 }
 #endif
 
-void __disable_irq(struct irq_desc *desc, unsigned int irq, bool suspend)
+void __disable_irq(struct irq_desc *desc, unsigned int irq)
 {
-	if (suspend) {
-		if (!desc->action || (desc->action->flags & IRQF_NO_SUSPEND))
-			return;
-		desc->istate |= IRQS_SUSPENDED;
-	}
-
 	if (!desc->depth++)
 		irq_disable(desc);
 }

@@ -401,7 +395,7 @@ static int __disable_irq_nosync(unsigned int irq)
 
 	if (!desc)
 		return -EINVAL;
-	__disable_irq(desc, irq, false);
+	__disable_irq(desc, irq);
 	irq_put_desc_busunlock(desc, flags);
 	return 0;
 }

@@ -442,20 +436,8 @@ void disable_irq(unsigned int irq)
 }
 EXPORT_SYMBOL(disable_irq);
 
-void __enable_irq(struct irq_desc *desc, unsigned int irq, bool resume)
+void __enable_irq(struct irq_desc *desc, unsigned int irq)
 {
-	if (resume) {
-		if (!(desc->istate & IRQS_SUSPENDED)) {
-			if (!desc->action)
-				return;
-			if (!(desc->action->flags & IRQF_FORCE_RESUME))
-				return;
-			/* Pretend that it got disabled ! */
-			desc->depth++;
-		}
-		desc->istate &= ~IRQS_SUSPENDED;
-	}
-
 	switch (desc->depth) {
 	case 0:
  err_out:

@@ -497,7 +479,7 @@ void enable_irq(unsigned int irq)
 		 KERN_ERR "enable_irq before setup/request_irq: irq %u\n", irq))
 		goto out;
 
-	__enable_irq(desc, irq, false);
+	__enable_irq(desc, irq);
 out:
 	irq_put_desc_busunlock(desc, flags);
 }

@@ -1218,6 +1200,8 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
 	new->irq = irq;
 	*old_ptr = new;
 
+	irq_pm_install_action(desc, new);
+
 	/* Reset broken irq detection when installing new handler */
 	desc->irq_count = 0;
 	desc->irqs_unhandled = 0;

@@ -1228,7 +1212,7 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
 	 */
 	if (shared && (desc->istate & IRQS_SPURIOUS_DISABLED)) {
 		desc->istate &= ~IRQS_SPURIOUS_DISABLED;
-		__enable_irq(desc, irq, false);
+		__enable_irq(desc, irq);
 	}
 
 	raw_spin_unlock_irqrestore(&desc->lock, flags);

@@ -1336,6 +1320,8 @@ static struct irqaction *__free_irq(unsigned int irq, void *dev_id)
 	/* Found it - now remove it from the list of entries: */
 	*action_ptr = action->next;
 
+	irq_pm_remove_action(desc, action);
+
 	/* If this was the last handler, shut down the IRQ line: */
 	if (!desc->action) {
 		irq_shutdown(desc);

kernel/irq/pm.c
@@ -9,17 +9,105 @@
 #include <linux/irq.h>
 #include <linux/module.h>
 #include <linux/interrupt.h>
+#include <linux/suspend.h>
 #include <linux/syscore_ops.h>
 
 #include "internals.h"
 
+bool irq_pm_check_wakeup(struct irq_desc *desc)
+{
+	if (irqd_is_wakeup_armed(&desc->irq_data)) {
+		irqd_clear(&desc->irq_data, IRQD_WAKEUP_ARMED);
+		desc->istate |= IRQS_SUSPENDED | IRQS_PENDING;
+		desc->depth++;
+		irq_disable(desc);
+		pm_system_wakeup();
+		return true;
+	}
+	return false;
+}
+
+/*
+ * Called from __setup_irq() with desc->lock held after @action has
+ * been installed in the action chain.
+ */
+void irq_pm_install_action(struct irq_desc *desc, struct irqaction *action)
+{
+	desc->nr_actions++;
+
+	if (action->flags & IRQF_FORCE_RESUME)
+		desc->force_resume_depth++;
+
+	WARN_ON_ONCE(desc->force_resume_depth &&
+		     desc->force_resume_depth != desc->nr_actions);
+
+	if (action->flags & IRQF_NO_SUSPEND)
+		desc->no_suspend_depth++;
+
+	WARN_ON_ONCE(desc->no_suspend_depth &&
+		     desc->no_suspend_depth != desc->nr_actions);
+}
+
+/*
+ * Called from __free_irq() with desc->lock held after @action has
+ * been removed from the action chain.
+ */
+void irq_pm_remove_action(struct irq_desc *desc, struct irqaction *action)
+{
+	desc->nr_actions--;
+
+	if (action->flags & IRQF_FORCE_RESUME)
+		desc->force_resume_depth--;
+
+	if (action->flags & IRQF_NO_SUSPEND)
+		desc->no_suspend_depth--;
+}
+
+static bool suspend_device_irq(struct irq_desc *desc, int irq)
+{
+	if (!desc->action || desc->no_suspend_depth)
+		return false;
+
+	if (irqd_is_wakeup_set(&desc->irq_data)) {
+		irqd_set(&desc->irq_data, IRQD_WAKEUP_ARMED);
+		/*
+		 * We return true here to force the caller to issue
+		 * synchronize_irq(). We need to make sure that the
+		 * IRQD_WAKEUP_ARMED is visible before we return from
+		 * suspend_device_irqs().
+		 */
+		return true;
+	}
+
+	desc->istate |= IRQS_SUSPENDED;
+	__disable_irq(desc, irq);
+
+	/*
+	 * Hardware which has no wakeup source configuration facility
+	 * requires that the non wakeup interrupts are masked at the
+	 * chip level. The chip implementation indicates that with
+	 * IRQCHIP_MASK_ON_SUSPEND.
+	 */
+	if (irq_desc_get_chip(desc)->flags & IRQCHIP_MASK_ON_SUSPEND)
+		mask_irq(desc);
+	return true;
+}
+
 /**
  * suspend_device_irqs - disable all currently enabled interrupt lines
  *
- * During system-wide suspend or hibernation device drivers need to be prevented
- * from receiving interrupts and this function is provided for this purpose.
- * It marks all interrupt lines in use, except for the timer ones, as disabled
- * and sets the IRQS_SUSPENDED flag for each of them.
+ * During system-wide suspend or hibernation device drivers need to be
+ * prevented from receiving interrupts and this function is provided
+ * for this purpose.
+ *
+ * So we disable all interrupts and mark them IRQS_SUSPENDED except
+ * for those which are unused, those which are marked as not
+ * suspendable via an interrupt request with the flag IRQF_NO_SUSPEND
+ * set and those which are marked as active wakeup sources.
+ *
+ * The active wakeup sources are handled by the flow handler entry
+ * code which checks for the IRQD_WAKEUP_ARMED flag, suspends the
+ * interrupt and notifies the pm core about the wakeup.
+ */
 void suspend_device_irqs(void)
 {

@@ -28,18 +116,36 @@ void suspend_device_irqs(void)
 
 	for_each_irq_desc(irq, desc) {
 		unsigned long flags;
+		bool sync;
 
 		raw_spin_lock_irqsave(&desc->lock, flags);
-		__disable_irq(desc, irq, true);
+		sync = suspend_device_irq(desc, irq);
 		raw_spin_unlock_irqrestore(&desc->lock, flags);
-	}
 
-	for_each_irq_desc(irq, desc)
-		if (desc->istate & IRQS_SUSPENDED)
+		if (sync)
 			synchronize_irq(irq);
+	}
 }
 EXPORT_SYMBOL_GPL(suspend_device_irqs);
 
+static void resume_irq(struct irq_desc *desc, int irq)
+{
+	irqd_clear(&desc->irq_data, IRQD_WAKEUP_ARMED);
+
+	if (desc->istate & IRQS_SUSPENDED)
+		goto resume;
+
+	/* Force resume the interrupt? */
+	if (!desc->force_resume_depth)
+		return;
+
+	/* Pretend that it got disabled ! */
+	desc->depth++;
+
+resume:
+	desc->istate &= ~IRQS_SUSPENDED;
+	__enable_irq(desc, irq);
+}
+
 static void resume_irqs(bool want_early)
 {
 	struct irq_desc *desc;

@@ -54,7 +160,7 @@ static void resume_irqs(bool want_early)
 			continue;
 
 		raw_spin_lock_irqsave(&desc->lock, flags);
-		__enable_irq(desc, irq, true);
+		resume_irq(desc, irq);
 		raw_spin_unlock_irqrestore(&desc->lock, flags);
 	}
 }

@@ -93,38 +199,3 @@ void resume_device_irqs(void)
 	resume_irqs(false);
 }
 EXPORT_SYMBOL_GPL(resume_device_irqs);
-
-/**
- * check_wakeup_irqs - check if any wake-up interrupts are pending
- */
-int check_wakeup_irqs(void)
-{
-	struct irq_desc *desc;
-	int irq;
-
-	for_each_irq_desc(irq, desc) {
-		/*
-		 * Only interrupts which are marked as wakeup source
-		 * and have not been disabled before the suspend check
-		 * can abort suspend.
-		 */
-		if (irqd_is_wakeup_set(&desc->irq_data)) {
-			if (desc->depth == 1 && desc->istate & IRQS_PENDING)
-				return -EBUSY;
-			continue;
-		}
-		/*
-		 * Check the non wakeup interrupts whether they need
-		 * to be masked before finally going into suspend
-		 * state. That's for hardware which has no wakeup
-		 * source configuration facility. The chip
-		 * implementation indicates that with
-		 * IRQCHIP_MASK_ON_SUSPEND.
-		 */
-		if (desc->istate & IRQS_SUSPENDED &&
-		    irq_desc_get_chip(desc)->flags & IRQCHIP_MASK_ON_SUSPEND)
-			mask_irq(desc);
-	}
-
-	return 0;
-}
|
@ -129,6 +129,7 @@ int freeze_processes(void)
|
||||||
if (!pm_freezing)
|
if (!pm_freezing)
|
||||||
atomic_inc(&system_freezing_cnt);
|
atomic_inc(&system_freezing_cnt);
|
||||||
|
|
||||||
|
pm_wakeup_clear();
|
||||||
printk("Freezing user space processes ... ");
|
printk("Freezing user space processes ... ");
|
||||||
pm_freezing = true;
|
pm_freezing = true;
|
||||||
error = try_to_freeze_tasks(true);
|
error = try_to_freeze_tasks(true);
|
||||||