/*
 * Devices PM QoS constraints management
 *
 * Copyright (C) 2011 Texas Instruments, Inc.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 *
 * This module exposes the interface to kernel space for specifying
 * per-device PM QoS dependencies. It provides infrastructure for registration
 * of:
 *
 * Dependents on a QoS value : register requests
 * Watchers of QoS value : get notified when target QoS value changes
 *
 * This QoS design is best effort based. Dependents register their QoS needs.
 * Watchers register to keep track of the current QoS needs of the system.
 * Watchers can register different types of notification callbacks:
 *  . a per-device notification callback using the dev_pm_qos_*_notifier API.
 *    The notification chain data is stored in the per-device constraint
 *    data struct.
 *  . a system-wide notification callback using the dev_pm_qos_*_global_notifier
 *    API. The notification chain data is stored in a static variable.
 *
 * Note about the per-device constraint data struct allocation:
 *  . The per-device constraints data struct ptr is stored into the device
 *    dev_pm_info.
 *  . To minimize the data usage by the per-device constraints, the data struct
 *    is only allocated at the first call to dev_pm_qos_add_request.
 *  . The data is later free'd when the device is removed from the system.
 *  . A global mutex protects the constraints users from the data being
 *    allocated and free'd.
 */
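
/*
 * Illustrative usage sketch of the request API below, as a hypothetical
 * kernel caller might use it (error handling omitted, the value is an
 * example only):
 *
 *      static struct dev_pm_qos_request foo_req;
 *
 *      dev_pm_qos_add_request(dev, &foo_req, DEV_PM_QOS_LATENCY, 100);
 *      ...
 *      dev_pm_qos_update_request(&foo_req, 50);
 *      ...
 *      dev_pm_qos_remove_request(&foo_req);
 */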
#include <linux/pm_qos.h>
#include <linux/spinlock.h>
#include <linux/slab.h>
#include <linux/device.h>
#include <linux/mutex.h>
#include <linux/export.h>
#include <linux/pm_runtime.h>
#include <linux/err.h>

#include "power.h"
static DEFINE_MUTEX(dev_pm_qos_mtx);
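/*
 * Serializes addition and removal of the device PM QoS sysfs attributes;
 * kept separate from dev_pm_qos_mtx so that sysfs operations need not run
 * under that lock (see dev_pm_qos_constraints_destroy()).
 */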
static DEFINE_MUTEX(dev_pm_qos_sysfs_mtx);
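/*
 * Notification chain for the dev_pm_qos_*_global_notifier API; called from
 * apply_constraint() when a device's latency constraint changes.
 */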
static BLOCKING_NOTIFIER_HEAD(dev_pm_notifiers);

/**
 * __dev_pm_qos_flags - Check PM QoS flags for a given device.
 * @dev: Device to check the PM QoS flags for.
 * @mask: Flags to check against.
 *
 * This routine must be called with dev->power.lock held.
 */
enum pm_qos_flags_status __dev_pm_qos_flags(struct device *dev, s32 mask)
{
        struct dev_pm_qos *qos = dev->power.qos;
        struct pm_qos_flags *pqf;
        s32 val;

        if (IS_ERR_OR_NULL(qos))
                return PM_QOS_FLAGS_UNDEFINED;

        pqf = &qos->flags;
        if (list_empty(&pqf->list))
                return PM_QOS_FLAGS_UNDEFINED;

        val = pqf->effective_flags & mask;
        if (val)
                return (val == mask) ? PM_QOS_FLAGS_ALL : PM_QOS_FLAGS_SOME;

        return PM_QOS_FLAGS_NONE;
}

/**
 * dev_pm_qos_flags - Check PM QoS flags for a given device (locked).
 * @dev: Device to check the PM QoS flags for.
 * @mask: Flags to check against.
 */
enum pm_qos_flags_status dev_pm_qos_flags(struct device *dev, s32 mask)
{
        unsigned long irqflags;
        enum pm_qos_flags_status ret;

        spin_lock_irqsave(&dev->power.lock, irqflags);
        ret = __dev_pm_qos_flags(dev, mask);
        spin_unlock_irqrestore(&dev->power.lock, irqflags);

        return ret;
}
EXPORT_SYMBOL_GPL(dev_pm_qos_flags);
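
/*
 * Illustrative check by a hypothetical caller, e.g. before cutting power to
 * a device (flag name assumed from <linux/pm_qos.h>):
 *
 *      if (dev_pm_qos_flags(dev, PM_QOS_FLAG_NO_POWER_OFF) > PM_QOS_FLAGS_NONE)
 *              return -EBUSY;
 */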

/**
 * __dev_pm_qos_read_value - Get PM QoS constraint for a given device.
 * @dev: Device to get the PM QoS constraint value for.
 *
 * This routine must be called with dev->power.lock held.
 */
s32 __dev_pm_qos_read_value(struct device *dev)
{
        return IS_ERR_OR_NULL(dev->power.qos) ?
                0 : pm_qos_read_value(&dev->power.qos->latency);
}

/**
 * dev_pm_qos_read_value - Get PM QoS constraint for a given device (locked).
 * @dev: Device to get the PM QoS constraint value for.
 */
s32 dev_pm_qos_read_value(struct device *dev)
{
        unsigned long flags;
        s32 ret;

        spin_lock_irqsave(&dev->power.lock, flags);
        ret = __dev_pm_qos_read_value(dev);
        spin_unlock_irqrestore(&dev->power.lock, flags);

        return ret;
}

/**
 * apply_constraint - Add/modify/remove device PM QoS request.
 * @req: Constraint request to apply
 * @action: Action to perform (add/update/remove).
 * @value: Value to assign to the QoS request.
 *
 * Internal function to update the constraints list using the PM QoS core
 * code and if needed call the per-device and the global notification
 * callbacks
 */
static int apply_constraint(struct dev_pm_qos_request *req,
                            enum pm_qos_req_action action, s32 value)
{
        struct dev_pm_qos *qos = req->dev->power.qos;
        int ret;

        switch(req->type) {
        case DEV_PM_QOS_LATENCY:
                ret = pm_qos_update_target(&qos->latency, &req->data.pnode,
                                           action, value);
                if (ret) {
                        value = pm_qos_read_value(&qos->latency);
                        blocking_notifier_call_chain(&dev_pm_notifiers,
                                                     (unsigned long)value,
                                                     req);
                }
                break;
        case DEV_PM_QOS_FLAGS:
                ret = pm_qos_update_flags(&qos->flags, &req->data.flr,
                                          action, value);
                break;
        default:
                ret = -EINVAL;
        }

        return ret;
}

/*
 * dev_pm_qos_constraints_allocate
 * @dev: device to allocate data for
 *
 * Called at the first call to add_request, for constraint data allocation
 * Must be called with the dev_pm_qos_mtx mutex held
 */
static int dev_pm_qos_constraints_allocate(struct device *dev)
{
        struct dev_pm_qos *qos;
        struct pm_qos_constraints *c;
        struct blocking_notifier_head *n;

        qos = kzalloc(sizeof(*qos), GFP_KERNEL);
        if (!qos)
                return -ENOMEM;

        n = kzalloc(sizeof(*n), GFP_KERNEL);
        if (!n) {
                kfree(qos);
                return -ENOMEM;
        }
        BLOCKING_INIT_NOTIFIER_HEAD(n);

        c = &qos->latency;
        plist_head_init(&c->list);
        c->target_value = PM_QOS_DEV_LAT_DEFAULT_VALUE;
        c->default_value = PM_QOS_DEV_LAT_DEFAULT_VALUE;
        c->type = PM_QOS_MIN;
        c->notifiers = n;

        INIT_LIST_HEAD(&qos->flags.list);

        spin_lock_irq(&dev->power.lock);
        dev->power.qos = qos;
        spin_unlock_irq(&dev->power.lock);

        return 0;
}

static void __dev_pm_qos_hide_latency_limit(struct device *dev);
static void __dev_pm_qos_hide_flags(struct device *dev);

/**
 * dev_pm_qos_constraints_destroy
 * @dev: target device
 *
 * Called from the device PM subsystem on device removal under device_pm_lock().
 */
void dev_pm_qos_constraints_destroy(struct device *dev)
{
        struct dev_pm_qos *qos;
        struct dev_pm_qos_request *req, *tmp;
        struct pm_qos_constraints *c;
        struct pm_qos_flags *f;

        mutex_lock(&dev_pm_qos_sysfs_mtx);

        /*
         * If the device's PM QoS resume latency limit or PM QoS flags have been
         * exposed to user space, they have to be hidden at this point.
         */
        pm_qos_sysfs_remove_latency(dev);
        pm_qos_sysfs_remove_flags(dev);

        mutex_lock(&dev_pm_qos_mtx);

        __dev_pm_qos_hide_latency_limit(dev);
        __dev_pm_qos_hide_flags(dev);

        qos = dev->power.qos;
        if (!qos)
                goto out;

        /* Flush the constraints lists for the device. */
        c = &qos->latency;
        plist_for_each_entry_safe(req, tmp, &c->list, data.pnode) {
                /*
                 * Update constraints list and call the notification
                 * callbacks if needed
                 */
                apply_constraint(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
                memset(req, 0, sizeof(*req));
        }
        f = &qos->flags;
        list_for_each_entry_safe(req, tmp, &f->list, data.flr.node) {
                apply_constraint(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
                memset(req, 0, sizeof(*req));
        }

        spin_lock_irq(&dev->power.lock);
        dev->power.qos = ERR_PTR(-ENODEV);
        spin_unlock_irq(&dev->power.lock);

        kfree(c->notifiers);
        kfree(qos);

 out:
        mutex_unlock(&dev_pm_qos_mtx);

        mutex_unlock(&dev_pm_qos_sysfs_mtx);
}

/**
 * dev_pm_qos_add_request - inserts new qos request into the list
 * @dev: target device for the constraint
 * @req: pointer to a preallocated handle
 * @type: type of the request
 * @value: defines the qos request
 *
 * This function inserts a new entry in the device constraints list of
 * requested qos performance characteristics. It recomputes the aggregate
 * QoS expectations of parameters and initializes the dev_pm_qos_request
 * handle. Caller needs to save this handle for later use in updates and
 * removal.
 *
 * Returns 1 if the aggregated constraint value has changed,
 * 0 if the aggregated constraint value has not changed,
 * -EINVAL in case of wrong parameters, -ENOMEM if there's not enough memory
 * to allocate for data structures, -ENODEV if the device has just been removed
 * from the system.
 *
 * Callers should ensure that the target device is not RPM_SUSPENDED before
 * using this function for requests of type DEV_PM_QOS_FLAGS.
 */
int dev_pm_qos_add_request(struct device *dev, struct dev_pm_qos_request *req,
                           enum dev_pm_qos_req_type type, s32 value)
{
        int ret = 0;

        if (!dev || !req) /* guard against callers passing in null */
                return -EINVAL;

        if (WARN(dev_pm_qos_request_active(req),
                 "%s() called for already added request\n", __func__))
                return -EINVAL;

        mutex_lock(&dev_pm_qos_mtx);

        if (IS_ERR(dev->power.qos))
                ret = -ENODEV;
        else if (!dev->power.qos)
                ret = dev_pm_qos_constraints_allocate(dev);

        if (!ret) {
                req->dev = dev;
                req->type = type;
                ret = apply_constraint(req, PM_QOS_ADD_REQ, value);
        }

        mutex_unlock(&dev_pm_qos_mtx);

        return ret;
}
EXPORT_SYMBOL_GPL(dev_pm_qos_add_request);
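
/*
 * Illustrative sketch of the DEV_PM_QOS_FLAGS case by a hypothetical caller.
 * Per the comment above, the device should not be RPM_SUSPENDED, so the
 * caller resumes it around the call (flag name assumed from <linux/pm_qos.h>):
 *
 *      pm_runtime_get_sync(dev);
 *      ret = dev_pm_qos_add_request(dev, &foo_flags_req, DEV_PM_QOS_FLAGS,
 *                                   PM_QOS_FLAG_NO_POWER_OFF);
 *      pm_runtime_put(dev);
 */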

/**
 * __dev_pm_qos_update_request - Modify an existing device PM QoS request.
 * @req: PM QoS request to modify.
 * @new_value: New value to request.
 */
static int __dev_pm_qos_update_request(struct dev_pm_qos_request *req,
                                       s32 new_value)
{
        s32 curr_value;
        int ret = 0;

        if (!req) /* guard against callers passing in null */
                return -EINVAL;

        if (WARN(!dev_pm_qos_request_active(req),
                 "%s() called for unknown object\n", __func__))
                return -EINVAL;

        if (IS_ERR_OR_NULL(req->dev->power.qos))
                return -ENODEV;

        switch(req->type) {
        case DEV_PM_QOS_LATENCY:
                curr_value = req->data.pnode.prio;
                break;
        case DEV_PM_QOS_FLAGS:
                curr_value = req->data.flr.flags;
                break;
        default:
                return -EINVAL;
        }

        if (curr_value != new_value)
                ret = apply_constraint(req, PM_QOS_UPDATE_REQ, new_value);

        return ret;
}

/**
 * dev_pm_qos_update_request - modifies an existing qos request
 * @req: handle to list element holding a dev_pm_qos request to use
 * @new_value: defines the qos request
 *
 * Updates an existing dev PM qos request along with updating the
 * target value.
 *
 * Attempts are made to make this code callable on hot code paths.
 *
 * Returns 1 if the aggregated constraint value has changed,
 * 0 if the aggregated constraint value has not changed,
 * -EINVAL in case of wrong parameters, -ENODEV if the device has been
 * removed from the system
 *
 * Callers should ensure that the target device is not RPM_SUSPENDED before
 * using this function for requests of type DEV_PM_QOS_FLAGS.
 */
int dev_pm_qos_update_request(struct dev_pm_qos_request *req, s32 new_value)
{
        int ret;

        mutex_lock(&dev_pm_qos_mtx);
        ret = __dev_pm_qos_update_request(req, new_value);
        mutex_unlock(&dev_pm_qos_mtx);
        return ret;
}
EXPORT_SYMBOL_GPL(dev_pm_qos_update_request);

static int __dev_pm_qos_remove_request(struct dev_pm_qos_request *req)
{
        int ret;

        if (!req) /* guard against callers passing in null */
                return -EINVAL;

        if (WARN(!dev_pm_qos_request_active(req),
                 "%s() called for unknown object\n", __func__))
                return -EINVAL;

        if (IS_ERR_OR_NULL(req->dev->power.qos))
                return -ENODEV;

        ret = apply_constraint(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
        memset(req, 0, sizeof(*req));
        return ret;
}

/**
 * dev_pm_qos_remove_request - removes an existing qos request
 * @req: handle to request list element
 *
 * Will remove pm qos request from the list of constraints and
 * recompute the current target value. Call this on slow code paths.
 *
 * Returns 1 if the aggregated constraint value has changed,
 * 0 if the aggregated constraint value has not changed,
 * -EINVAL in case of wrong parameters, -ENODEV if the device has been
 * removed from the system
 *
 * Callers should ensure that the target device is not RPM_SUSPENDED before
 * using this function for requests of type DEV_PM_QOS_FLAGS.
 */
int dev_pm_qos_remove_request(struct dev_pm_qos_request *req)
{
        int ret;

        mutex_lock(&dev_pm_qos_mtx);
        ret = __dev_pm_qos_remove_request(req);
        mutex_unlock(&dev_pm_qos_mtx);
        return ret;
}
EXPORT_SYMBOL_GPL(dev_pm_qos_remove_request);

/**
 * dev_pm_qos_add_notifier - sets notification entry for changes to target value
 * of per-device PM QoS constraints
 *
 * @dev: target device for the constraint
 * @notifier: notifier block managed by caller.
 *
 * Will register the notifier into a notification chain that gets called
 * upon changes to the target value for the device.
 *
 * If the device's constraints object doesn't exist when this routine is called,
 * it will be created (or error code will be returned if that fails).
 */
int dev_pm_qos_add_notifier(struct device *dev, struct notifier_block *notifier)
{
        int ret = 0;

        mutex_lock(&dev_pm_qos_mtx);

        if (IS_ERR(dev->power.qos))
                ret = -ENODEV;
        else if (!dev->power.qos)
                ret = dev_pm_qos_constraints_allocate(dev);

        if (!ret)
                ret = blocking_notifier_chain_register(
                                dev->power.qos->latency.notifiers, notifier);

        mutex_unlock(&dev_pm_qos_mtx);
        return ret;
}
EXPORT_SYMBOL_GPL(dev_pm_qos_add_notifier);
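
/*
 * Illustrative sketch of a caller-managed notifier (hypothetical names); the
 * callback is invoked with the new aggregate constraint value as its second
 * argument:
 *
 *      static int foo_latency_notify(struct notifier_block *nb,
 *                                    unsigned long value, void *data)
 *      {
 *              return NOTIFY_OK;
 *      }
 *
 *      static struct notifier_block foo_nb = {
 *              .notifier_call = foo_latency_notify,
 *      };
 *
 *      dev_pm_qos_add_notifier(dev, &foo_nb);
 */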

/**
 * dev_pm_qos_remove_notifier - deletes notification for changes to target value
 * of per-device PM QoS constraints
 *
 * @dev: target device for the constraint
 * @notifier: notifier block to be removed.
 *
 * Will remove the notifier from the notification chain that gets called
 * upon changes to the target value.
 */
int dev_pm_qos_remove_notifier(struct device *dev,
                               struct notifier_block *notifier)
{
        int retval = 0;

        mutex_lock(&dev_pm_qos_mtx);

        /* Silently return if the constraints object is not present. */
        if (!IS_ERR_OR_NULL(dev->power.qos))
                retval = blocking_notifier_chain_unregister(
                                dev->power.qos->latency.notifiers,
                                notifier);

        mutex_unlock(&dev_pm_qos_mtx);
        return retval;
}
EXPORT_SYMBOL_GPL(dev_pm_qos_remove_notifier);

/**
 * dev_pm_qos_add_global_notifier - sets notification entry for changes to
 * target value of the PM QoS constraints for any device
 *
 * @notifier: notifier block managed by caller.
 *
 * Will register the notifier into a notification chain that gets called
 * upon changes to the target value for any device.
 */
int dev_pm_qos_add_global_notifier(struct notifier_block *notifier)
{
        return blocking_notifier_chain_register(&dev_pm_notifiers, notifier);
}
EXPORT_SYMBOL_GPL(dev_pm_qos_add_global_notifier);

/**
 * dev_pm_qos_remove_global_notifier - deletes notification for changes to
 * target value of PM QoS constraints for any device
 *
 * @notifier: notifier block to be removed.
 *
 * Will remove the notifier from the notification chain that gets called
 * upon changes to the target value for any device.
 */
int dev_pm_qos_remove_global_notifier(struct notifier_block *notifier)
{
        return blocking_notifier_chain_unregister(&dev_pm_notifiers, notifier);
}
EXPORT_SYMBOL_GPL(dev_pm_qos_remove_global_notifier);

/**
 * dev_pm_qos_add_ancestor_request - Add PM QoS request for device's ancestor.
 * @dev: Device whose ancestor to add the request for.
 * @req: Pointer to the preallocated handle.
 * @value: Constraint latency value.
 *
 * The request is added for the closest ancestor of @dev whose
 * power.ignore_children flag is set; if no such ancestor exists,
 * -ENODEV is returned.
 */
int dev_pm_qos_add_ancestor_request(struct device *dev,
                                    struct dev_pm_qos_request *req, s32 value)
{
        struct device *ancestor = dev->parent;
        int ret = -ENODEV;

        while (ancestor && !ancestor->power.ignore_children)
                ancestor = ancestor->parent;

        if (ancestor)
                ret = dev_pm_qos_add_request(ancestor, req,
                                             DEV_PM_QOS_LATENCY, value);

        if (ret < 0)
                req->dev = NULL;

        return ret;
}
EXPORT_SYMBOL_GPL(dev_pm_qos_add_ancestor_request);
#ifdef CONFIG_PM_RUNTIME
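/*
 * Drop the request that was created on behalf of user space when the
 * resume latency limit or the PM QoS flags were exposed for this device:
 * detach it from dev->power.qos, remove it from the constraints list and
 * free it.
 */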
|
2012-10-24 04:08:18 +04:00
|
|
|
static void __dev_pm_qos_drop_user_request(struct device *dev,
|
|
|
|
enum dev_pm_qos_req_type type)
|
2012-03-13 04:01:39 +04:00
|
|
|
{
|
2013-03-04 01:48:14 +04:00
|
|
|
struct dev_pm_qos_request *req = NULL;
|
|
|
|
|
2012-10-24 04:08:18 +04:00
|
|
|
switch (type) {
|
|
|
|
case DEV_PM_QOS_LATENCY:
|
2013-03-04 01:48:14 +04:00
|
|
|
req = dev->power.qos->latency_req;
|
2012-10-24 04:08:18 +04:00
|
|
|
dev->power.qos->latency_req = NULL;
|
|
|
|
break;
|
|
|
|
case DEV_PM_QOS_FLAGS:
|
2013-03-04 01:48:14 +04:00
|
|
|
req = dev->power.qos->flags_req;
|
2012-10-24 04:08:18 +04:00
|
|
|
dev->power.qos->flags_req = NULL;
|
|
|
|
break;
|
|
|
|
}
|
2013-03-04 01:48:14 +04:00
|
|
|
__dev_pm_qos_remove_request(req);
|
|
|
|
kfree(req);
|
2012-03-13 04:01:39 +04:00
|
|
|
}
|
|
|
|
|
PM / QoS: Avoid possible deadlock related to sysfs access
Commit b81ea1b (PM / QoS: Fix concurrency issues and memory leaks in
device PM QoS) put calls to pm_qos_sysfs_add_latency(),
pm_qos_sysfs_add_flags(), pm_qos_sysfs_remove_latency(), and
pm_qos_sysfs_remove_flags() under dev_pm_qos_mtx, which was a
mistake, because it may lead to deadlocks in some situations.
For example, if pm_qos_remote_wakeup_store() is run in parallel
with dev_pm_qos_constraints_destroy(), they may deadlock in the
following way:
======================================================
[ INFO: possible circular locking dependency detected ]
3.9.0-rc4-next-20130328-sasha-00014-g91a3267 #319 Tainted: G W
-------------------------------------------------------
trinity-child6/12371 is trying to acquire lock:
(s_active#54){++++.+}, at: [<ffffffff81301631>] sysfs_addrm_finish+0x31/0x60
but task is already holding lock:
(dev_pm_qos_mtx){+.+.+.}, at: [<ffffffff81f07cc3>] dev_pm_qos_constraints_destroy+0x23/0x250
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (dev_pm_qos_mtx){+.+.+.}:
[<ffffffff811811da>] lock_acquire+0x1aa/0x240
[<ffffffff83dab809>] __mutex_lock_common+0x59/0x5e0
[<ffffffff83dabebf>] mutex_lock_nested+0x3f/0x50
[<ffffffff81f07f2f>] dev_pm_qos_update_flags+0x3f/0xc0
[<ffffffff81f05f4f>] pm_qos_remote_wakeup_store+0x3f/0x70
[<ffffffff81efbb43>] dev_attr_store+0x13/0x20
[<ffffffff812ffdaa>] sysfs_write_file+0xfa/0x150
[<ffffffff8127f2c1>] __kernel_write+0x81/0x150
[<ffffffff812afc2d>] write_pipe_buf+0x4d/0x80
[<ffffffff812af57c>] splice_from_pipe_feed+0x7c/0x120
[<ffffffff812afa25>] __splice_from_pipe+0x45/0x80
[<ffffffff812b14fc>] splice_from_pipe+0x4c/0x70
[<ffffffff812b1538>] default_file_splice_write+0x18/0x30
[<ffffffff812afae3>] do_splice_from+0x83/0xb0
[<ffffffff812afb2e>] direct_splice_actor+0x1e/0x20
[<ffffffff812b0277>] splice_direct_to_actor+0xe7/0x200
[<ffffffff812b15bc>] do_splice_direct+0x4c/0x70
[<ffffffff8127eda9>] do_sendfile+0x169/0x300
[<ffffffff8127ff94>] SyS_sendfile64+0x64/0xb0
[<ffffffff83db7d18>] tracesys+0xe1/0xe6
-> #0 (s_active#54){++++.+}:
[<ffffffff811800cf>] __lock_acquire+0x15bf/0x1e50
[<ffffffff811811da>] lock_acquire+0x1aa/0x240
[<ffffffff81300aa2>] sysfs_deactivate+0x122/0x1a0
[<ffffffff81301631>] sysfs_addrm_finish+0x31/0x60
[<ffffffff812ff77f>] sysfs_hash_and_remove+0x7f/0xb0
[<ffffffff813035a1>] sysfs_unmerge_group+0x51/0x70
[<ffffffff81f068f4>] pm_qos_sysfs_remove_flags+0x14/0x20
[<ffffffff81f07490>] __dev_pm_qos_hide_flags+0x30/0x70
[<ffffffff81f07cd5>] dev_pm_qos_constraints_destroy+0x35/0x250
[<ffffffff81f06931>] dpm_sysfs_remove+0x11/0x50
[<ffffffff81efcf6f>] device_del+0x3f/0x1b0
[<ffffffff81efd128>] device_unregister+0x48/0x60
[<ffffffff82d4083c>] usb_hub_remove_port_device+0x1c/0x20
[<ffffffff82d2a9cd>] hub_disconnect+0xdd/0x160
[<ffffffff82d36ab7>] usb_unbind_interface+0x67/0x170
[<ffffffff81f001a7>] __device_release_driver+0x87/0xe0
[<ffffffff81f00559>] device_release_driver+0x29/0x40
[<ffffffff81effc58>] bus_remove_device+0x148/0x160
[<ffffffff81efd07f>] device_del+0x14f/0x1b0
[<ffffffff82d344f9>] usb_disable_device+0xf9/0x280
[<ffffffff82d34ff8>] usb_set_configuration+0x268/0x840
[<ffffffff82d3a7fc>] usb_remove_store+0x4c/0x80
[<ffffffff81efbb43>] dev_attr_store+0x13/0x20
[<ffffffff812ffdaa>] sysfs_write_file+0xfa/0x150
[<ffffffff8127f71d>] do_loop_readv_writev+0x4d/0x90
[<ffffffff8127f999>] do_readv_writev+0xf9/0x1e0
[<ffffffff8127faba>] vfs_writev+0x3a/0x60
[<ffffffff8127fc60>] SyS_writev+0x50/0xd0
[<ffffffff83db7d18>] tracesys+0xe1/0xe6
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(dev_pm_qos_mtx);
                               lock(s_active#54);
                               lock(dev_pm_qos_mtx);
  lock(s_active#54);
 *** DEADLOCK ***
To avoid that, remove the calls to the functions mentioned above from
under dev_pm_qos_mtx and introduce a separate lock to prevent races
between the functions that add or remove device PM QoS sysfs
attributes.
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2013-04-02 03:25:24 +04:00
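The nesting that results from this change can be sketched as follows. This is an illustration only: example_hide_flags_path() is not part of this file, and it merely mirrors the ordering used by the real expose/hide helpers below, with the flags-specific bookkeeping elided.

/*
 * Sketch of the lock nesting introduced by the fix: sysfs entries are
 * added or removed with only dev_pm_qos_sysfs_mtx held, and
 * dev_pm_qos_mtx is nested inside it strictly for constraint updates.
 */
static void example_hide_flags_path(struct device *dev)
{
	mutex_lock(&dev_pm_qos_sysfs_mtx);

	/* May block on the sysfs s_active count, so dev_pm_qos_mtx must
	 * not be held here. */
	pm_qos_sysfs_remove_flags(dev);

	mutex_lock(&dev_pm_qos_mtx);
	/* ... drop the DEV_PM_QOS_FLAGS user request here ... */
	mutex_unlock(&dev_pm_qos_mtx);

	mutex_unlock(&dev_pm_qos_sysfs_mtx);
}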
|
|
|
static void dev_pm_qos_drop_user_request(struct device *dev,
|
|
|
|
enum dev_pm_qos_req_type type)
|
|
|
|
{
|
|
|
|
mutex_lock(&dev_pm_qos_mtx);
|
|
|
|
__dev_pm_qos_drop_user_request(dev, type);
|
|
|
|
mutex_unlock(&dev_pm_qos_mtx);
|
|
|
|
}
|
|
|
|
|
2012-03-13 04:01:39 +04:00
|
|
|
/**
|
|
|
|
* dev_pm_qos_expose_latency_limit - Expose PM QoS latency limit to user space.
|
|
|
|
* @dev: Device whose PM QoS latency limit is to be exposed to user space.
|
|
|
|
* @value: Initial value of the latency limit.
|
|
|
|
*/
|
|
|
|
int dev_pm_qos_expose_latency_limit(struct device *dev, s32 value)
|
|
|
|
{
|
|
|
|
struct dev_pm_qos_request *req;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
if (!device_is_registered(dev) || value < 0)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
req = kzalloc(sizeof(*req), GFP_KERNEL);
|
|
|
|
if (!req)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
2012-10-23 03:09:12 +04:00
|
|
|
ret = dev_pm_qos_add_request(dev, req, DEV_PM_QOS_LATENCY, value);
|
2013-03-04 01:48:14 +04:00
|
|
|
if (ret < 0) {
|
|
|
|
kfree(req);
|
2012-03-13 04:01:39 +04:00
|
|
|
return ret;
|
2013-03-04 01:48:14 +04:00
|
|
|
}
|
|
|
|
|
2013-04-02 03:25:24 +04:00
|
|
|
mutex_lock(&dev_pm_qos_sysfs_mtx);
|
|
|
|
|
2013-03-04 01:48:14 +04:00
|
|
|
mutex_lock(&dev_pm_qos_mtx);
|
|
|
|
|
2013-03-04 17:22:57 +04:00
|
|
|
if (IS_ERR_OR_NULL(dev->power.qos))
|
2013-03-04 01:48:14 +04:00
|
|
|
ret = -ENODEV;
|
|
|
|
else if (dev->power.qos->latency_req)
|
|
|
|
ret = -EEXIST;
|
|
|
|
|
|
|
|
if (ret < 0) {
|
|
|
|
__dev_pm_qos_remove_request(req);
|
|
|
|
kfree(req);
|
2013-04-02 03:25:24 +04:00
|
|
|
mutex_unlock(&dev_pm_qos_mtx);
|
2013-03-04 01:48:14 +04:00
|
|
|
goto out;
|
|
|
|
}
|
2012-10-24 04:08:18 +04:00
|
|
|
dev->power.qos->latency_req = req;
|
2013-04-02 03:25:24 +04:00
|
|
|
|
|
|
|
mutex_unlock(&dev_pm_qos_mtx);
|
|
|
|
|
2012-10-24 04:08:18 +04:00
|
|
|
ret = pm_qos_sysfs_add_latency(dev);
|
2012-03-13 04:01:39 +04:00
|
|
|
if (ret)
|
2013-04-02 03:25:24 +04:00
|
|
|
dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_LATENCY);
|
2012-03-13 04:01:39 +04:00
|
|
|
|
2013-03-04 01:48:14 +04:00
|
|
|
out:
|
2013-04-02 03:25:24 +04:00
|
|
|
mutex_unlock(&dev_pm_qos_sysfs_mtx);
|
2012-03-13 04:01:39 +04:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL_GPL(dev_pm_qos_expose_latency_limit);
|
|
|
|
|
2013-03-04 17:22:57 +04:00
|
|
|
static void __dev_pm_qos_hide_latency_limit(struct device *dev)
|
|
|
|
{
|
2013-04-02 03:25:24 +04:00
|
|
|
if (!IS_ERR_OR_NULL(dev->power.qos) && dev->power.qos->latency_req)
|
2013-03-04 17:22:57 +04:00
|
|
|
__dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_LATENCY);
|
|
|
|
}
|
|
|
|
|
2012-03-13 04:01:39 +04:00
|
|
|
/**
|
|
|
|
* dev_pm_qos_hide_latency_limit - Hide PM QoS latency limit from user space.
|
|
|
|
* @dev: Device whose PM QoS latency limit is to be hidden from user space.
|
|
|
|
*/
|
|
|
|
void dev_pm_qos_hide_latency_limit(struct device *dev)
|
|
|
|
{
|
2013-04-02 03:25:24 +04:00
|
|
|
mutex_lock(&dev_pm_qos_sysfs_mtx);
|
|
|
|
|
|
|
|
pm_qos_sysfs_remove_latency(dev);
|
|
|
|
|
2013-03-04 01:48:14 +04:00
|
|
|
mutex_lock(&dev_pm_qos_mtx);
|
2013-03-04 17:22:57 +04:00
|
|
|
__dev_pm_qos_hide_latency_limit(dev);
|
2013-03-04 01:48:14 +04:00
|
|
|
mutex_unlock(&dev_pm_qos_mtx);
|
2013-04-02 03:25:24 +04:00
|
|
|
|
|
|
|
mutex_unlock(&dev_pm_qos_sysfs_mtx);
|
2012-03-13 04:01:39 +04:00
|
|
|
}
|
|
|
|
EXPORT_SYMBOL_GPL(dev_pm_qos_hide_latency_limit);
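A minimal sketch of how a driver might use this pair of helpers; the my_driver callbacks and the 250 us initial value are illustrative only:

/* Hypothetical driver exposing the latency limit at probe time and
 * hiding it again on removal. */
static int my_driver_probe(struct device *dev)
{
	int ret = dev_pm_qos_expose_latency_limit(dev, 250);

	if (ret < 0)
		dev_warn(dev, "cannot expose PM QoS latency limit: %d\n", ret);

	return 0;	/* the attribute is optional, so probe still succeeds */
}

static void my_driver_remove(struct device *dev)
{
	dev_pm_qos_hide_latency_limit(dev);
}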
|
2012-10-24 04:08:18 +04:00
|
|
|
|
|
|
|
/**
|
|
|
|
* dev_pm_qos_expose_flags - Expose PM QoS flags of a device to user space.
|
|
|
|
* @dev: Device whose PM QoS flags are to be exposed to user space.
|
|
|
|
* @val: Initial values of the flags.
|
|
|
|
*/
|
|
|
|
int dev_pm_qos_expose_flags(struct device *dev, s32 val)
|
|
|
|
{
|
|
|
|
struct dev_pm_qos_request *req;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
if (!device_is_registered(dev))
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
req = kzalloc(sizeof(*req), GFP_KERNEL);
|
|
|
|
if (!req)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
ret = dev_pm_qos_add_request(dev, req, DEV_PM_QOS_FLAGS, val);
|
2013-03-04 01:48:14 +04:00
|
|
|
if (ret < 0) {
|
|
|
|
kfree(req);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
pm_runtime_get_sync(dev);
|
2013-04-02 03:25:24 +04:00
|
|
|
	mutex_lock(&dev_pm_qos_sysfs_mtx);

	mutex_lock(&dev_pm_qos_mtx);

	if (IS_ERR_OR_NULL(dev->power.qos))
		ret = -ENODEV;
	else if (dev->power.qos->flags_req)
		ret = -EEXIST;

	if (ret < 0) {
		__dev_pm_qos_remove_request(req);
		kfree(req);
		mutex_unlock(&dev_pm_qos_mtx);
		goto out;
	}
	dev->power.qos->flags_req = req;

	mutex_unlock(&dev_pm_qos_mtx);

	ret = pm_qos_sysfs_add_flags(dev);
	if (ret)
		dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_FLAGS);

 out:
	mutex_unlock(&dev_pm_qos_sysfs_mtx);

	pm_runtime_put(dev);
	return ret;
}
EXPORT_SYMBOL_GPL(dev_pm_qos_expose_flags);
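
/*
 * Illustrative sketch only, not part of the original file: a driver that
 * wants user space to control the PM QoS flags of its device could expose
 * them from its probe path and hide them again on removal.  The "foo_"
 * names below are hypothetical; only dev_pm_qos_expose_flags() and
 * dev_pm_qos_hide_flags() come from this file.
 *
 *	static int foo_probe(struct platform_device *pdev)
 *	{
 *		int error = dev_pm_qos_expose_flags(&pdev->dev, 0);
 *
 *		if (error)
 *			dev_warn(&pdev->dev, "PM QoS flags not exposed\n");
 *		return 0;
 *	}
 *
 *	static int foo_remove(struct platform_device *pdev)
 *	{
 *		dev_pm_qos_hide_flags(&pdev->dev);
 *		return 0;
 *	}
 */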
static void __dev_pm_qos_hide_flags(struct device *dev)
{
	if (!IS_ERR_OR_NULL(dev->power.qos) && dev->power.qos->flags_req)
		__dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_FLAGS);
}

/**
 * dev_pm_qos_hide_flags - Hide PM QoS flags of a device from user space.
 * @dev: Device whose PM QoS flags are to be hidden from user space.
 */
void dev_pm_qos_hide_flags(struct device *dev)
{
	pm_runtime_get_sync(dev);

	mutex_lock(&dev_pm_qos_sysfs_mtx);
	pm_qos_sysfs_remove_flags(dev);

	mutex_lock(&dev_pm_qos_mtx);
	__dev_pm_qos_hide_flags(dev);
	mutex_unlock(&dev_pm_qos_mtx);

	mutex_unlock(&dev_pm_qos_sysfs_mtx);
	pm_runtime_put(dev);
}
EXPORT_SYMBOL_GPL(dev_pm_qos_hide_flags);
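
/*
 * Locking note for the helpers above, summarized from the changelog that
 * introduced the separate sysfs lock: the flags attributes are added and
 * removed under dev_pm_qos_sysfs_mtx only, outside of dev_pm_qos_mtx.
 * Removing a sysfs attribute waits for its active users, and such a user
 * (for example a store callback) may itself be blocked on dev_pm_qos_mtx,
 * so doing the sysfs removal while holding dev_pm_qos_mtx could deadlock.
 * Hence dev_pm_qos_sysfs_mtx is always taken before dev_pm_qos_mtx and the
 * sysfs add/remove calls never run under dev_pm_qos_mtx.
 */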

/**
 * dev_pm_qos_update_flags - Update PM QoS flags request owned by user space.
 * @dev: Device to update the PM QoS flags request for.
 * @mask: Flags to set/clear.
 * @set: Whether to set or clear the flags (true means set).
 */
int dev_pm_qos_update_flags(struct device *dev, s32 mask, bool set)
{
	s32 value;
	int ret;

	pm_runtime_get_sync(dev);
	mutex_lock(&dev_pm_qos_mtx);

	if (IS_ERR_OR_NULL(dev->power.qos) || !dev->power.qos->flags_req) {
		ret = -EINVAL;
		goto out;
	}

	value = dev_pm_qos_requested_flags(dev);
	if (set)
		value |= mask;
	else
		value &= ~mask;

	ret = __dev_pm_qos_update_request(dev->power.qos->flags_req, value);

 out:
	mutex_unlock(&dev_pm_qos_mtx);
	pm_runtime_put(dev);
	return ret;
}
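
/*
 * Illustrative sketch only (an assumption, not code from this file): a
 * sysfs store callback could forward a boolean written by user space to
 * dev_pm_qos_update_flags(), for instance to set or clear
 * PM_QOS_FLAG_NO_POWER_OFF.  "no_power_off_store" is a hypothetical name.
 *
 *	static ssize_t no_power_off_store(struct device *dev,
 *					  struct device_attribute *attr,
 *					  const char *buf, size_t n)
 *	{
 *		int ret;
 *
 *		if (kstrtoint(buf, 0, &ret) || (ret != 0 && ret != 1))
 *			return -EINVAL;
 *
 *		ret = dev_pm_qos_update_flags(dev, PM_QOS_FLAG_NO_POWER_OFF,
 *					      ret);
 *		return ret < 0 ? ret : n;
 *	}
 */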

#else /* !CONFIG_PM_RUNTIME */
static void __dev_pm_qos_hide_latency_limit(struct device *dev) {}
static void __dev_pm_qos_hide_flags(struct device *dev) {}
#endif /* CONFIG_PM_RUNTIME */