More power management and ACPI updates for v4.4-rc1

 - Support for the ACPI _CCA configuration object intended to tell the OS
   whether or not a bus master device supports hardware managed cache
   coherency, and a new set of functions to allow drivers to check the
   cache coherency support for devices in a platform firmware interface
   agnostic way (Suravee Suthikulpanit, Jeremy Linton).

 - ACPI backlight quirks for ESPRIMO Mobile M9410 and Dell XPS L421X
   (Aaron Lu, Hans de Goede).

 - Fixes for the arm_big_little and s5pv210-cpufreq cpufreq drivers
   (Jon Medhurst, Nicolas Pitre).

 - kfree()-related fixup for the recently introduced CPPC cpufreq frontend
   (Markus Elfring).

 - intel_pstate fix reducing kernel log noise on systems where P-states
   are managed by hardware (Prarit Bhargava).

 - intel_pstate maintainers information update (Srinivas Pandruvada).

 - cpufreq core optimization related to the handling of delayed work items
   used by governors (Viresh Kumar).

 - Locking fixes and cleanups of the Operating Performance Points (OPP)
   framework (Viresh Kumar).

 - Generic power domains framework cleanups (Lina Iyer).

 - cpupower tool updates (Jacob Tanenbaum, Sriram Raghunathan, Thomas
   Renninger).

 - turbostat tool updates (Len Brown).

Merge tag 'pm+acpi-4.4-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull more power management and ACPI updates from Rafael Wysocki:
 "The only new feature in this batch is support for the ACPI _CCA device
  configuration object, which is a prerequisite for future ACPI PCI
  support on ARM64, but should not affect the other architectures.  The
  rest is fixes and cleanups, mostly in cpufreq (including intel_pstate),
  the Operating Performance Points (OPP) framework and tools (cpupower
  and turbostat)"

* tag 'pm+acpi-4.4-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (32 commits)
  PCI: ACPI: Add support for PCI device DMA coherency
  PCI: OF: Move of_pci_dma_configure() to pci_dma_configure()
  of/pci: Fix pci_get_host_bridge_device leak
  device property: ACPI: Remove unused DMA APIs
  device property: ACPI: Make use of the new DMA Attribute APIs
  device property: Adding DMA Attribute APIs for Generic Devices
  ACPI: Adding DMA Attribute APIs for ACPI Device
  device property: Introducing enum dev_dma_attr
  ACPI: Honor ACPI _CCA attribute setting
  cpufreq: CPPC: Delete an unnecessary check before the function call kfree()
  PM / OPP: Add opp_rcu_lockdep_assert() to _find_device_opp()
  PM / OPP: Hold dev_opp_list_lock for writers
  PM / OPP: Protect updates to list_dev with mutex
  PM / OPP: Propagate error properly from dev_pm_opp_set_sharing_cpus()
  cpufreq: s5pv210-cpufreq: fix wrong do_div() usage
  MAINTAINERS: update for intel P-state driver
  Creating a common structure initialization pattern for struct option
  cpupower: Enable disabled Cstates if they are below max latency
  cpupower: Remove debug message when using cpupower idle-set -D switch
  cpupower: cpupower monitor reports uninitialized values for offline cpus
  ...
This commit is contained in: be23c9d20b
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -5505,7 +5505,8 @@ S:	Supported
 F:	drivers/idle/intel_idle.c
 
 INTEL PSTATE DRIVER
-M:	Kristen Carlson Accardi <kristen@linux.intel.com>
+M:	Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
+M:	Len Brown <lenb@kernel.org>
 L:	linux-pm@vger.kernel.org
 S:	Supported
 F:	drivers/cpufreq/intel_pstate.c
--- a/drivers/acpi/acpi_platform.c
+++ b/drivers/acpi/acpi_platform.c
@@ -103,7 +103,12 @@ struct platform_device *acpi_create_platform_device(struct acpi_device *adev)
 	pdevinfo.res = resources;
 	pdevinfo.num_res = count;
 	pdevinfo.fwnode = acpi_fwnode_handle(adev);
-	pdevinfo.dma_mask = acpi_check_dma(adev, NULL) ? DMA_BIT_MASK(32) : 0;
+
+	if (acpi_dma_supported(adev))
+		pdevinfo.dma_mask = DMA_BIT_MASK(32);
+	else
+		pdevinfo.dma_mask = 0;
+
 	pdev = platform_device_register_full(&pdevinfo);
 	if (IS_ERR(pdev))
 		dev_err(&adev->dev, "platform device creation failed: %ld\n",
--- a/drivers/acpi/acpi_video.c
+++ b/drivers/acpi/acpi_video.c
@@ -77,6 +77,12 @@ module_param(allow_duplicates, bool, 0644);
 static int disable_backlight_sysfs_if = -1;
 module_param(disable_backlight_sysfs_if, int, 0444);
 
+static bool device_id_scheme = false;
+module_param(device_id_scheme, bool, 0444);
+
+static bool only_lcd = false;
+module_param(only_lcd, bool, 0444);
+
 static int register_count;
 static DEFINE_MUTEX(register_count_mutex);
 static struct mutex video_list_lock;
@@ -394,6 +400,18 @@ static int video_disable_backlight_sysfs_if(
 	return 0;
 }
 
+static int video_set_device_id_scheme(const struct dmi_system_id *d)
+{
+	device_id_scheme = true;
+	return 0;
+}
+
+static int video_enable_only_lcd(const struct dmi_system_id *d)
+{
+	only_lcd = true;
+	return 0;
+}
+
 static struct dmi_system_id video_dmi_table[] = {
 	/*
 	 * Broken _BQC workaround http://bugzilla.kernel.org/show_bug.cgi?id=13121
@@ -455,6 +473,33 @@ static struct dmi_system_id video_dmi_table[] = {
 		DMI_MATCH(DMI_PRODUCT_NAME, "PORTEGE R830"),
 		},
 	},
+	/*
+	 * Some machine's _DOD IDs don't have bit 31(Device ID Scheme) set
+	 * but the IDs actually follow the Device ID Scheme.
+	 */
+	{
+	 /* https://bugzilla.kernel.org/show_bug.cgi?id=104121 */
+	 .callback = video_set_device_id_scheme,
+	 .ident = "ESPRIMO Mobile M9410",
+	 .matches = {
+		DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
+		DMI_MATCH(DMI_PRODUCT_NAME, "ESPRIMO Mobile M9410"),
+		},
+	},
+	/*
+	 * Some machines have multiple video output devices, but only the one
+	 * that is the type of LCD can do the backlight control so we should not
+	 * register backlight interface for other video output devices.
+	 */
+	{
+	 /* https://bugzilla.kernel.org/show_bug.cgi?id=104121 */
+	 .callback = video_enable_only_lcd,
+	 .ident = "ESPRIMO Mobile M9410",
+	 .matches = {
+		DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
+		DMI_MATCH(DMI_PRODUCT_NAME, "ESPRIMO Mobile M9410"),
+		},
+	},
 	{}
 };
 
@@ -1003,7 +1048,7 @@ acpi_video_bus_get_one_device(struct acpi_device *device,
 
 	attribute = acpi_video_get_device_attr(video, device_id);
 
-	if (attribute && attribute->device_id_scheme) {
+	if (attribute && (attribute->device_id_scheme || device_id_scheme)) {
 		switch (attribute->display_type) {
 		case ACPI_VIDEO_DISPLAY_CRT:
 			data->flags.crt = 1;
@@ -1568,15 +1613,6 @@ static void acpi_video_dev_register_backlight(struct acpi_video_device *device)
 	static int count;
 	char *name;
 
-	/*
-	 * Do not create backlight device for video output
-	 * device that is not in the enumerated list.
-	 */
-	if (!acpi_video_device_in_dod(device)) {
-		dev_dbg(&device->dev->dev, "not in _DOD list, ignore\n");
-		return;
-	}
-
 	result = acpi_video_init_brightness(device);
 	if (result)
 		return;
@@ -1657,6 +1693,22 @@ static void acpi_video_run_bcl_for_osi(struct acpi_video_bus *video)
 	mutex_unlock(&video->device_list_lock);
 }
 
+static bool acpi_video_should_register_backlight(struct acpi_video_device *dev)
+{
+	/*
+	 * Do not create backlight device for video output
+	 * device that is not in the enumerated list.
+	 */
+	if (!acpi_video_device_in_dod(dev)) {
+		dev_dbg(&dev->dev->dev, "not in _DOD list, ignore\n");
+		return false;
+	}
+
+	if (only_lcd)
+		return dev->flags.lcd;
+	return true;
+}
+
 static int acpi_video_bus_register_backlight(struct acpi_video_bus *video)
 {
 	struct acpi_video_device *dev;
@@ -1670,8 +1722,10 @@ static int acpi_video_bus_register_backlight(struct acpi_video_bus *video)
 		return 0;
 
 	mutex_lock(&video->device_list_lock);
-	list_for_each_entry(dev, &video->video_device_list, entry)
-		acpi_video_dev_register_backlight(dev);
+	list_for_each_entry(dev, &video->video_device_list, entry) {
+		if (acpi_video_should_register_backlight(dev))
+			acpi_video_dev_register_backlight(dev);
+	}
 	mutex_unlock(&video->device_list_lock);
 
 	video->backlight_registered = true;
--- a/drivers/acpi/glue.c
+++ b/drivers/acpi/glue.c
@@ -168,7 +168,7 @@ int acpi_bind_one(struct device *dev, struct acpi_device *acpi_dev)
 	struct list_head *physnode_list;
 	unsigned int node_id;
 	int retval = -EINVAL;
-	bool coherent;
+	enum dev_dma_attr attr;
 
 	if (has_acpi_companion(dev)) {
 		if (acpi_dev) {
@@ -225,8 +225,10 @@ int acpi_bind_one(struct device *dev, struct acpi_device *acpi_dev)
 	if (!has_acpi_companion(dev))
 		ACPI_COMPANION_SET(dev, acpi_dev);
 
-	if (acpi_check_dma(acpi_dev, &coherent))
-		arch_setup_dma_ops(dev, 0, 0, NULL, coherent);
+	attr = acpi_get_dma_attr(acpi_dev);
+	if (attr != DEV_DMA_NOT_SUPPORTED)
+		arch_setup_dma_ops(dev, 0, 0, NULL,
+				   attr == DEV_DMA_COHERENT);
 
 	acpi_physnode_link_name(physical_node_name, node_id);
 	retval = sysfs_create_link(&acpi_dev->dev.kobj, &dev->kobj,
--- a/drivers/acpi/scan.c
+++ b/drivers/acpi/scan.c
@@ -1308,6 +1308,48 @@ void acpi_free_pnp_ids(struct acpi_device_pnp *pnp)
 	kfree(pnp->unique_id);
 }
 
+/**
+ * acpi_dma_supported - Check DMA support for the specified device.
+ * @adev: The pointer to acpi device
+ *
+ * Return false if DMA is not supported. Otherwise, return true
+ */
+bool acpi_dma_supported(struct acpi_device *adev)
+{
+	if (!adev)
+		return false;
+
+	if (adev->flags.cca_seen)
+		return true;
+
+	/*
+	 * Per ACPI 6.0 sec 6.2.17, assume devices can do cache-coherent
+	 * DMA on "Intel platforms".  Presumably that includes all x86 and
+	 * ia64, and other arches will set CONFIG_ACPI_CCA_REQUIRED=y.
+	 */
+	if (!IS_ENABLED(CONFIG_ACPI_CCA_REQUIRED))
+		return true;
+
+	return false;
+}
+
+/**
+ * acpi_get_dma_attr - Check the supported DMA attr for the specified device.
+ * @adev: The pointer to acpi device
+ *
+ * Return enum dev_dma_attr.
+ */
+enum dev_dma_attr acpi_get_dma_attr(struct acpi_device *adev)
+{
+	if (!acpi_dma_supported(adev))
+		return DEV_DMA_NOT_SUPPORTED;
+
+	if (adev->flags.coherent_dma)
+		return DEV_DMA_COHERENT;
+	else
+		return DEV_DMA_NON_COHERENT;
+}
+
 static void acpi_init_coherency(struct acpi_device *adev)
 {
 	unsigned long long cca = 0;
--- a/drivers/acpi/video_detect.c
+++ b/drivers/acpi/video_detect.c
@@ -232,6 +232,15 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
 			"900X3C/900X3D/900X3E/900X4C/900X4D"),
 		},
 	},
+	{
+	 /* https://bugzilla.redhat.com/show_bug.cgi?id=1272633 */
+	 .callback = video_detect_force_video,
+	 .ident = "Dell XPS14 L421X",
+	 .matches = {
+		DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+		DMI_MATCH(DMI_PRODUCT_NAME, "XPS L421X"),
+		},
+	},
 	{
 	 /* https://bugzilla.redhat.com/show_bug.cgi?id=1163574 */
 	 .callback = video_detect_force_video,
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -321,8 +321,7 @@ static int genpd_poweroff(struct generic_pm_domain *genpd, bool is_async)
 		if (stat > PM_QOS_FLAGS_NONE)
 			return -EBUSY;
 
-		if (pdd->dev->driver && (!pm_runtime_suspended(pdd->dev)
-		    || pdd->dev->power.irq_safe))
+		if (!pm_runtime_suspended(pdd->dev) || pdd->dev->power.irq_safe)
 			not_suspended++;
 	}
 
@@ -1312,13 +1311,17 @@ int pm_genpd_remove_device(struct generic_pm_domain *genpd,
 int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
 			   struct generic_pm_domain *subdomain)
 {
-	struct gpd_link *link;
+	struct gpd_link *link, *itr;
 	int ret = 0;
 
 	if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(subdomain)
 	    || genpd == subdomain)
 		return -EINVAL;
 
+	link = kzalloc(sizeof(*link), GFP_KERNEL);
+	if (!link)
+		return -ENOMEM;
+
 	mutex_lock(&genpd->lock);
 	mutex_lock_nested(&subdomain->lock, SINGLE_DEPTH_NESTING);
 
@@ -1328,18 +1331,13 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
 		goto out;
 	}
 
-	list_for_each_entry(link, &genpd->master_links, master_node) {
-		if (link->slave == subdomain && link->master == genpd) {
+	list_for_each_entry(itr, &genpd->master_links, master_node) {
+		if (itr->slave == subdomain && itr->master == genpd) {
 			ret = -EINVAL;
 			goto out;
 		}
 	}
 
-	link = kzalloc(sizeof(*link), GFP_KERNEL);
-	if (!link) {
-		ret = -ENOMEM;
-		goto out;
-	}
 	link->master = genpd;
 	list_add_tail(&link->master_node, &genpd->master_links);
 	link->slave = subdomain;
@@ -1350,7 +1348,8 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
 out:
 	mutex_unlock(&subdomain->lock);
 	mutex_unlock(&genpd->lock);
-
+	if (ret)
+		kfree(link);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(pm_genpd_add_subdomain);
--- a/drivers/base/power/opp/core.c
+++ b/drivers/base/power/opp/core.c
@@ -11,6 +11,8 @@
  * published by the Free Software Foundation.
  */
 
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
 #include <linux/errno.h>
 #include <linux/err.h>
 #include <linux/slab.h>
@@ -27,7 +29,7 @@
  */
 static LIST_HEAD(dev_opp_list);
 /* Lock to allow exclusive modification to the device and opp lists */
-static DEFINE_MUTEX(dev_opp_list_lock);
+DEFINE_MUTEX(dev_opp_list_lock);
 
 #define opp_rcu_lockdep_assert()					\
 do {									\
@@ -79,14 +81,18 @@ static struct device_opp *_managed_opp(const struct device_node *np)
 * Return: pointer to 'struct device_opp' if found, otherwise -ENODEV or
 * -EINVAL based on type of error.
 *
- * Locking: This function must be called under rcu_read_lock(). device_opp
- * is a RCU protected pointer. This means that device_opp is valid as long
- * as we are under RCU lock.
+ * Locking: For readers, this function must be called under rcu_read_lock().
+ * device_opp is a RCU protected pointer, which means that device_opp is valid
+ * as long as we are under RCU lock.
+ *
+ * For Writers, this function must be called with dev_opp_list_lock held.
 */
 struct device_opp *_find_device_opp(struct device *dev)
 {
 	struct device_opp *dev_opp;
 
+	opp_rcu_lockdep_assert();
+
 	if (IS_ERR_OR_NULL(dev)) {
 		pr_err("%s: Invalid parameters\n", __func__);
 		return ERR_PTR(-EINVAL);
@@ -701,7 +707,7 @@ static int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
 }
 
 /**
- * _opp_add_dynamic() - Allocate a dynamic OPP.
+ * _opp_add_v1() - Allocate a OPP based on v1 bindings.
  * @dev:	device for which we do this operation
  * @freq:	Frequency in Hz for this OPP
  * @u_volt:	Voltage in uVolts for this OPP
@@ -727,8 +733,8 @@ static int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
  *		Duplicate OPPs (both freq and volt are same) and !opp->available
  * -ENOMEM	Memory allocation failure
  */
-static int _opp_add_dynamic(struct device *dev, unsigned long freq,
-			    long u_volt, bool dynamic)
+static int _opp_add_v1(struct device *dev, unsigned long freq, long u_volt,
+		       bool dynamic)
 {
 	struct device_opp *dev_opp;
 	struct dev_pm_opp *new_opp;
@@ -770,9 +776,10 @@ unlock:
 }
 
 /* TODO: Support multiple regulators */
-static int opp_get_microvolt(struct dev_pm_opp *opp, struct device *dev)
+static int opp_parse_supplies(struct dev_pm_opp *opp, struct device *dev)
 {
 	u32 microvolt[3] = {0};
+	u32 val;
 	int count, ret;
 
 	/* Missing property isn't a problem, but an invalid entry is */
@@ -805,6 +812,9 @@ static int opp_get_microvolt(struct dev_pm_opp *opp, struct device *dev)
 	opp->u_volt_min = microvolt[1];
 	opp->u_volt_max = microvolt[2];
 
+	if (!of_property_read_u32(opp->np, "opp-microamp", &val))
+		opp->u_amp = val;
+
 	return 0;
 }
 
@@ -869,13 +879,10 @@ static int _opp_add_static_v2(struct device *dev, struct device_node *np)
 	if (!of_property_read_u32(np, "clock-latency-ns", &val))
 		new_opp->clock_latency_ns = val;
 
-	ret = opp_get_microvolt(new_opp, dev);
+	ret = opp_parse_supplies(new_opp, dev);
 	if (ret)
 		goto free_opp;
 
-	if (!of_property_read_u32(new_opp->np, "opp-microamp", &val))
-		new_opp->u_amp = val;
-
 	ret = _opp_add(dev, new_opp, dev_opp);
 	if (ret)
 		goto free_opp;
@@ -939,7 +946,7 @@ unlock:
 */
 int dev_pm_opp_add(struct device *dev, unsigned long freq, unsigned long u_volt)
 {
-	return _opp_add_dynamic(dev, freq, u_volt, true);
+	return _opp_add_v1(dev, freq, u_volt, true);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_add);
 
@@ -1172,13 +1179,17 @@ static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
 	struct device_opp *dev_opp;
 	int ret = 0, count = 0;
 
+	mutex_lock(&dev_opp_list_lock);
+
 	dev_opp = _managed_opp(opp_np);
 	if (dev_opp) {
 		/* OPPs are already managed */
 		if (!_add_list_dev(dev, dev_opp))
 			ret = -ENOMEM;
+		mutex_unlock(&dev_opp_list_lock);
 		return ret;
 	}
+	mutex_unlock(&dev_opp_list_lock);
 
 	/* We have opp-list node now, iterate over it and add OPPs */
 	for_each_available_child_of_node(opp_np, np) {
@@ -1196,15 +1207,20 @@ static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
 	if (WARN_ON(!count))
 		return -ENOENT;
 
+	mutex_lock(&dev_opp_list_lock);
+
 	dev_opp = _find_device_opp(dev);
 	if (WARN_ON(IS_ERR(dev_opp))) {
 		ret = PTR_ERR(dev_opp);
+		mutex_unlock(&dev_opp_list_lock);
 		goto free_table;
 	}
 
 	dev_opp->np = opp_np;
 	dev_opp->shared_opp = of_property_read_bool(opp_np, "opp-shared");
 
+	mutex_unlock(&dev_opp_list_lock);
+
 	return 0;
 
 free_table:
@@ -1241,7 +1257,7 @@ static int _of_add_opp_table_v1(struct device *dev)
 		unsigned long freq = be32_to_cpup(val++) * 1000;
 		unsigned long volt = be32_to_cpup(val++);
 
-		if (_opp_add_dynamic(dev, freq, volt, false))
+		if (_opp_add_v1(dev, freq, volt, false))
 			dev_warn(dev, "%s: Failed to add OPP %ld\n",
 				 __func__, freq);
 		nr -= 2;
--- a/drivers/base/power/opp/cpu.c
+++ b/drivers/base/power/opp/cpu.c
@@ -10,6 +10,9 @@
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
 #include <linux/cpu.h>
 #include <linux/cpufreq.h>
 #include <linux/err.h>
@@ -124,12 +127,12 @@ int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask)
 	struct device *dev;
 	int cpu, ret = 0;
 
-	rcu_read_lock();
+	mutex_lock(&dev_opp_list_lock);
 
 	dev_opp = _find_device_opp(cpu_dev);
 	if (IS_ERR(dev_opp)) {
 		ret = -EINVAL;
-		goto out_rcu_read_unlock;
+		goto unlock;
 	}
 
 	for_each_cpu(cpu, cpumask) {
@@ -150,10 +153,10 @@ int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask)
 			continue;
 		}
 	}
-out_rcu_read_unlock:
-	rcu_read_unlock();
+unlock:
+	mutex_unlock(&dev_opp_list_lock);
 
-	return 0;
+	return ret;
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_set_sharing_cpus);
 
--- a/drivers/base/power/opp/opp.h
+++ b/drivers/base/power/opp/opp.h
@@ -21,6 +21,9 @@
 #include <linux/rculist.h>
 #include <linux/rcupdate.h>
 
+/* Lock to allow exclusive modification to the device and opp lists */
+extern struct mutex dev_opp_list_lock;
+
 /*
  * Internal data structure organization with the OPP layer library is as
  * follows:
--- a/drivers/base/property.c
+++ b/drivers/base/property.c
@@ -598,18 +598,34 @@ unsigned int device_get_child_node_count(struct device *dev)
 }
 EXPORT_SYMBOL_GPL(device_get_child_node_count);
 
-bool device_dma_is_coherent(struct device *dev)
+bool device_dma_supported(struct device *dev)
 {
-	bool coherent = false;
-
-	if (IS_ENABLED(CONFIG_OF) && dev->of_node)
-		coherent = of_dma_is_coherent(dev->of_node);
-	else
-		acpi_check_dma(ACPI_COMPANION(dev), &coherent);
-
-	return coherent;
+	/* For DT, this is always supported.
+	 * For ACPI, this depends on CCA, which
+	 * is determined by the acpi_dma_supported().
+	 */
+	if (IS_ENABLED(CONFIG_OF) && dev->of_node)
+		return true;
+
+	return acpi_dma_supported(ACPI_COMPANION(dev));
 }
-EXPORT_SYMBOL_GPL(device_dma_is_coherent);
+EXPORT_SYMBOL_GPL(device_dma_supported);
+
+enum dev_dma_attr device_get_dma_attr(struct device *dev)
+{
+	enum dev_dma_attr attr = DEV_DMA_NOT_SUPPORTED;
+
+	if (IS_ENABLED(CONFIG_OF) && dev->of_node) {
+		if (of_dma_is_coherent(dev->of_node))
+			attr = DEV_DMA_COHERENT;
+		else
+			attr = DEV_DMA_NON_COHERENT;
+	} else
+		attr = acpi_get_dma_attr(ACPI_COMPANION(dev));
+
+	return attr;
+}
+EXPORT_SYMBOL_GPL(device_get_dma_attr);
 
 /**
  * device_get_phy_mode - Get phy mode for given device
--- a/drivers/cpufreq/arm_big_little.c
+++ b/drivers/cpufreq/arm_big_little.c
@@ -149,6 +149,19 @@ bL_cpufreq_set_rate(u32 cpu, u32 old_cluster, u32 new_cluster, u32 rate)
 	       __func__, cpu, old_cluster, new_cluster, new_rate);
 
 	ret = clk_set_rate(clk[new_cluster], new_rate * 1000);
+	if (!ret) {
+		/*
+		 * FIXME: clk_set_rate hasn't returned an error here however it
+		 * may be that clk_change_rate failed due to hardware or
+		 * firmware issues and wasn't able to report that due to the
+		 * current design of the clk core layer. To work around this
+		 * problem we will read back the clock rate and check it is
+		 * correct. This needs to be removed once clk core is fixed.
+		 */
+		if (clk_get_rate(clk[new_cluster]) != new_rate * 1000)
+			ret = -EIO;
+	}
+
 	if (WARN_ON(ret)) {
 		pr_err("clk_set_rate failed: %d, new cluster: %d\n", ret,
 		       new_cluster);
@@ -189,15 +202,6 @@ bL_cpufreq_set_rate(u32 cpu, u32 old_cluster, u32 new_cluster, u32 rate)
 		mutex_unlock(&cluster_lock[old_cluster]);
 	}
 
-	/*
-	 * FIXME: clk_set_rate has to handle the case where clk_change_rate
-	 * can fail due to hardware or firmware issues. Until the clk core
-	 * layer is fixed, we can check here. In most of the cases we will
-	 * be reading only the cached value anyway. This needs to be removed
-	 * once clk core is fixed.
-	 */
-	if (bL_cpufreq_get_rate(cpu) != new_rate)
-		return -EIO;
 	return 0;
 }
 
--- a/drivers/cpufreq/cppc_cpufreq.c
+++ b/drivers/cpufreq/cppc_cpufreq.c
@@ -166,8 +166,7 @@ static int __init cppc_cpufreq_init(void)
 
 out:
 	for_each_possible_cpu(i)
-		if (all_cpu_data[i])
-			kfree(all_cpu_data[i]);
+		kfree(all_cpu_data[i]);
 
 	kfree(all_cpu_data);
 	return -ENODEV;
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -171,10 +171,6 @@ void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
 {
 	int i;
 
-	mutex_lock(&cpufreq_governor_lock);
-	if (!policy->governor_enabled)
-		goto out_unlock;
-
 	if (!all_cpus) {
 		/*
 		 * Use raw_smp_processor_id() to avoid preemptible warnings.
@@ -188,9 +184,6 @@ void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
 		for_each_cpu(i, policy->cpus)
 			__gov_queue_work(i, dbs_data, delay);
 	}
-
-out_unlock:
-	mutex_unlock(&cpufreq_governor_lock);
 }
 EXPORT_SYMBOL_GPL(gov_queue_work);
 
@@ -229,13 +222,24 @@ static void dbs_timer(struct work_struct *work)
 	struct cpu_dbs_info *cdbs = container_of(work, struct cpu_dbs_info,
 						 dwork.work);
 	struct cpu_common_dbs_info *shared = cdbs->shared;
-	struct cpufreq_policy *policy = shared->policy;
-	struct dbs_data *dbs_data = policy->governor_data;
+	struct cpufreq_policy *policy;
+	struct dbs_data *dbs_data;
 	unsigned int sampling_rate, delay;
 	bool modify_all = true;
 
 	mutex_lock(&shared->timer_mutex);
 
+	policy = shared->policy;
+
+	/*
+	 * Governor might already be disabled and there is no point continuing
+	 * with the work-handler.
+	 */
+	if (!policy)
+		goto unlock;
+
+	dbs_data = policy->governor_data;
+
 	if (dbs_data->cdata->governor == GOV_CONSERVATIVE) {
 		struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
 
@@ -252,6 +256,7 @@ static void dbs_timer(struct work_struct *work)
 	delay = dbs_data->cdata->gov_dbs_timer(cdbs, dbs_data, modify_all);
 	gov_queue_work(dbs_data, policy, delay, modify_all);
 
+unlock:
 	mutex_unlock(&shared->timer_mutex);
 }
 
@@ -478,9 +483,17 @@ static int cpufreq_governor_stop(struct cpufreq_policy *policy,
 	if (!shared || !shared->policy)
 		return -EBUSY;
 
+	/*
+	 * Work-handler must see this updated, as it should not proceed any
+	 * further after governor is disabled. And so timer_mutex is taken while
+	 * updating this value.
+	 */
+	mutex_lock(&shared->timer_mutex);
+	shared->policy = NULL;
+	mutex_unlock(&shared->timer_mutex);
+
 	gov_cancel_work(dbs_data, policy);
 
-	shared->policy = NULL;
 	mutex_destroy(&shared->timer_mutex);
 	return 0;
 }
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -684,8 +684,6 @@ static void __init intel_pstate_sysfs_expose_params(void)
 
 static void intel_pstate_hwp_enable(struct cpudata *cpudata)
 {
-	pr_info("intel_pstate: HWP enabled\n");
-
 	wrmsrl_on_cpu(cpudata->cpu, MSR_PM_ENABLE, 0x1);
 }
 
@@ -1557,8 +1555,10 @@ static int __init intel_pstate_init(void)
 	if (!all_cpu_data)
 		return -ENOMEM;
 
-	if (static_cpu_has_safe(X86_FEATURE_HWP) && !no_hwp)
+	if (static_cpu_has_safe(X86_FEATURE_HWP) && !no_hwp) {
+		pr_info("intel_pstate: HWP enabled\n");
 		hwp_active++;
+	}
 
 	if (!hwp_active && hwp_only)
 		goto out;
@@ -1593,8 +1593,10 @@ static int __init intel_pstate_setup(char *str)
 
 	if (!strcmp(str, "disable"))
 		no_load = 1;
-	if (!strcmp(str, "no_hwp"))
+	if (!strcmp(str, "no_hwp")) {
+		pr_info("intel_pstate: HWP disabled\n");
 		no_hwp = 1;
+	}
 	if (!strcmp(str, "force"))
 		force_load = 1;
 	if (!strcmp(str, "hwp_only"))
--- a/drivers/cpufreq/s5pv210-cpufreq.c
+++ b/drivers/cpufreq/s5pv210-cpufreq.c
@@ -212,11 +212,11 @@ static void s5pv210_set_refresh(enum s5pv210_dmc_port ch, unsigned long freq)
 	/* Find current DRAM frequency */
 	tmp = s5pv210_dram_conf[ch].freq;
 
-	do_div(tmp, freq);
+	tmp /= freq;
 
 	tmp1 = s5pv210_dram_conf[ch].refresh;
 
-	do_div(tmp1, tmp);
+	tmp1 /= tmp;
 
 	__raw_writel(tmp1, reg);
 }
--- a/drivers/crypto/ccp/ccp-platform.c
+++ b/drivers/crypto/ccp/ccp-platform.c
@@ -94,6 +94,7 @@ static int ccp_platform_probe(struct platform_device *pdev)
 	struct ccp_device *ccp;
 	struct ccp_platform *ccp_platform;
 	struct device *dev = &pdev->dev;
+	enum dev_dma_attr attr;
 	struct resource *ior;
 	int ret;
 
@@ -118,18 +119,24 @@ static int ccp_platform_probe(struct platform_device *pdev)
 	}
 	ccp->io_regs = ccp->io_map;
 
+	attr = device_get_dma_attr(dev);
+	if (attr == DEV_DMA_NOT_SUPPORTED) {
+		dev_err(dev, "DMA is not supported");
+		goto e_err;
+	}
+
+	ccp_platform->coherent = (attr == DEV_DMA_COHERENT);
+	if (ccp_platform->coherent)
+		ccp->axcache = CACHE_WB_NO_ALLOC;
+	else
+		ccp->axcache = CACHE_NONE;
+
 	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(48));
 	if (ret) {
 		dev_err(dev, "dma_set_mask_and_coherent failed (%d)\n", ret);
 		goto e_err;
 	}
 
-	ccp_platform->coherent = device_dma_is_coherent(ccp->dev);
-	if (ccp_platform->coherent)
-		ccp->axcache = CACHE_WB_NO_ALLOC;
-	else
-		ccp->axcache = CACHE_NONE;
-
 	dev_set_drvdata(dev, ccp);
 
 	ret = ccp_init(ccp);
--- a/drivers/net/ethernet/amd/xgbe/xgbe-main.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-main.c
@@ -342,6 +342,7 @@ static int xgbe_probe(struct platform_device *pdev)
 	struct resource *res;
 	const char *phy_mode;
 	unsigned int i, phy_memnum, phy_irqnum;
+	enum dev_dma_attr attr;
 	int ret;
 
 	DBGPR("--> xgbe_probe\n");
@@ -609,7 +610,12 @@ static int xgbe_probe(struct platform_device *pdev)
 		goto err_io;
 
 	/* Set the DMA coherency values */
-	pdata->coherent = device_dma_is_coherent(pdata->dev);
+	attr = device_get_dma_attr(dev);
+	if (attr == DEV_DMA_NOT_SUPPORTED) {
+		dev_err(dev, "DMA is not supported");
+		goto err_io;
+	}
+	pdata->coherent = (attr == DEV_DMA_COHERENT);
 	if (pdata->coherent) {
 		pdata->axdomain = XGBE_DMA_OS_AXDOMAIN;
 		pdata->arcache = XGBE_DMA_OS_ARCACHE;
--- a/drivers/of/of_pci.c
+++ b/drivers/of/of_pci.c
@@ -143,26 +143,6 @@ void of_pci_check_probe_only(void)
 }
 EXPORT_SYMBOL_GPL(of_pci_check_probe_only);
 
-/**
- * of_pci_dma_configure - Setup DMA configuration
- * @dev: ptr to pci_dev struct of the PCI device
- *
- * Function to update PCI devices's DMA configuration using the same
- * info from the OF node of host bridge's parent (if any).
- */
-void of_pci_dma_configure(struct pci_dev *pci_dev)
-{
-	struct device *dev = &pci_dev->dev;
-	struct device *bridge = pci_get_host_bridge_device(pci_dev);
-
-	if (!bridge->parent)
-		return;
-
-	of_dma_configure(dev, bridge->parent->of_node);
-	pci_put_host_bridge_device(bridge);
-}
-EXPORT_SYMBOL_GPL(of_pci_dma_configure);
-
 #if defined(CONFIG_OF_ADDRESS)
 /**
  * of_pci_get_host_bridge_resources - Parse PCI host bridge resources from DT
@@ -6,6 +6,7 @@
 #include <linux/delay.h>
 #include <linux/init.h>
 #include <linux/pci.h>
+#include <linux/of_device.h>
 #include <linux/of_pci.h>
 #include <linux/pci_hotplug.h>
 #include <linux/slab.h>

@@ -13,6 +14,7 @@
 #include <linux/cpumask.h>
 #include <linux/pci-aspm.h>
 #include <linux/aer.h>
+#include <linux/acpi.h>
 #include <asm-generic/pci-bridge.h>
 #include "pci.h"
 

@@ -1672,6 +1674,34 @@ static void pci_set_msi_domain(struct pci_dev *dev)
 	dev_set_msi_domain(&dev->dev, d);
 }
 
+/**
+ * pci_dma_configure - Setup DMA configuration
+ * @dev: ptr to pci_dev struct of the PCI device
+ *
+ * Function to update PCI devices's DMA configuration using the same
+ * info from the OF node or ACPI node of host bridge's parent (if any).
+ */
+static void pci_dma_configure(struct pci_dev *dev)
+{
+	struct device *bridge = pci_get_host_bridge_device(dev);
+
+	if (IS_ENABLED(CONFIG_OF) && dev->dev.of_node) {
+		if (bridge->parent)
+			of_dma_configure(&dev->dev, bridge->parent->of_node);
+	} else if (has_acpi_companion(bridge)) {
+		struct acpi_device *adev = to_acpi_device_node(bridge->fwnode);
+		enum dev_dma_attr attr = acpi_get_dma_attr(adev);
+
+		if (attr == DEV_DMA_NOT_SUPPORTED)
+			dev_warn(&dev->dev, "DMA not supported.\n");
+		else
+			arch_setup_dma_ops(&dev->dev, 0, 0, NULL,
+					   attr == DEV_DMA_COHERENT);
+	}
+
+	pci_put_host_bridge_device(bridge);
+}
+
 void pci_device_add(struct pci_dev *dev, struct pci_bus *bus)
 {
 	int ret;

@@ -1685,7 +1715,7 @@ void pci_device_add(struct pci_dev *dev, struct pci_bus *bus)
 	dev->dev.dma_mask = &dev->dma_mask;
 	dev->dev.dma_parms = &dev->dma_parms;
 	dev->dev.coherent_dma_mask = 0xffffffffull;
-	of_pci_dma_configure(dev);
+	pci_dma_configure(dev);
 
 	pci_set_dma_max_seg_size(dev, 65536);
 	pci_set_dma_seg_boundary(dev, 0xffffffff);
@@ -390,39 +390,6 @@ struct acpi_data_node {
 	struct completion kobj_done;
 };
 
-static inline bool acpi_check_dma(struct acpi_device *adev, bool *coherent)
-{
-	bool ret = false;
-
-	if (!adev)
-		return ret;
-
-	/**
-	 * Currently, we only support _CCA=1 (i.e. coherent_dma=1)
-	 * This should be equivalent to specifyig dma-coherent for
-	 * a device in OF.
-	 *
-	 * For the case when _CCA=0 (i.e. coherent_dma=0 && cca_seen=1),
-	 * There are two cases:
-	 * case 1. Do not support and disable DMA.
-	 * case 2. Support but rely on arch-specific cache maintenance for
-	 *         non-coherence DMA operations.
-	 * Currently, we implement case 1 above.
-	 *
-	 * For the case when _CCA is missing (i.e. cca_seen=0) and
-	 * platform specifies ACPI_CCA_REQUIRED, we do not support DMA,
-	 * and fallback to arch-specific default handling.
-	 *
-	 * See acpi_init_coherency() for more info.
-	 */
-	if (adev->flags.coherent_dma) {
-		ret = true;
-		if (coherent)
-			*coherent = adev->flags.coherent_dma;
-	}
-	return ret;
-}
-
 static inline bool is_acpi_node(struct fwnode_handle *fwnode)
 {
 	return fwnode && (fwnode->type == FWNODE_ACPI

@@ -595,6 +562,9 @@ struct acpi_pci_root {
 
 /* helper */
 
+bool acpi_dma_supported(struct acpi_device *adev);
+enum dev_dma_attr acpi_get_dma_attr(struct acpi_device *adev);
+
 struct acpi_device *acpi_find_child_device(struct acpi_device *parent,
 					   u64 address, bool check_children);
 int acpi_is_root_bridge(acpi_handle);
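The single bool-with-out-parameter `acpi_check_dma()` removed above is split into the two helpers declared here: `acpi_dma_supported()` and the three-state `acpi_get_dma_attr()`. A plausible userspace sketch of the mapping, based only on the declarations and the policy described in the removed comment (the flags struct is a hypothetical stand-in for fields of `struct acpi_device`; the real kernel implementation may differ in detail):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

enum dev_dma_attr {
	DEV_DMA_NOT_SUPPORTED,
	DEV_DMA_NON_COHERENT,
	DEV_DMA_COHERENT,
};

/* Hypothetical stand-in for the DMA-related flags of struct acpi_device. */
struct acpi_dev_flags {
	bool dma_supported;	/* cleared when _CCA is absent or 0 on a
				   platform that requires _CCA (e.g. arm64) */
	bool coherent_dma;	/* set when _CCA = 1 */
};

/* Sketch of the three-state answer the new helper exposes. */
static enum dev_dma_attr get_dma_attr(const struct acpi_dev_flags *f)
{
	if (!f || !f->dma_supported)
		return DEV_DMA_NOT_SUPPORTED;
	return f->coherent_dma ? DEV_DMA_COHERENT : DEV_DMA_NON_COHERENT;
}
```

Compared with the old interface, a caller such as `pci_dma_configure()` no longer has to interpret a bool plus an out-parameter; it can switch directly on the returned attribute.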
@@ -601,11 +601,16 @@ static inline int acpi_device_modalias(struct device *dev,
 	return -ENODEV;
 }
 
-static inline bool acpi_check_dma(struct acpi_device *adev, bool *coherent)
+static inline bool acpi_dma_supported(struct acpi_device *adev)
 {
 	return false;
 }
 
+static inline enum dev_dma_attr acpi_get_dma_attr(struct acpi_device *adev)
+{
+	return DEV_DMA_NOT_SUPPORTED;
+}
+
 #define ACPI_PTR(_ptr)	(NULL)
 
 #endif	/* !CONFIG_ACPI */
@@ -16,7 +16,6 @@ int of_pci_get_devfn(struct device_node *np);
 int of_irq_parse_and_map_pci(const struct pci_dev *dev, u8 slot, u8 pin);
 int of_pci_parse_bus_range(struct device_node *node, struct resource *res);
 int of_get_pci_domain_nr(struct device_node *node);
-void of_pci_dma_configure(struct pci_dev *pci_dev);
 void of_pci_check_probe_only(void);
 #else
 static inline int of_irq_parse_pci(const struct pci_dev *pdev, struct of_phandle_args *out_irq)

@@ -53,8 +52,6 @@ of_get_pci_domain_nr(struct device_node *node)
 	return -1;
 }
 
-static inline void of_pci_dma_configure(struct pci_dev *pci_dev) { }
-
 static inline void of_pci_check_probe_only(void) { }
 #endif
@@ -27,6 +27,12 @@ enum dev_prop_type {
 	DEV_PROP_MAX,
 };
 
+enum dev_dma_attr {
+	DEV_DMA_NOT_SUPPORTED,
+	DEV_DMA_NON_COHERENT,
+	DEV_DMA_COHERENT,
+};
+
 bool device_property_present(struct device *dev, const char *propname);
 int device_property_read_u8_array(struct device *dev, const char *propname,
 				  u8 *val, size_t nval);

@@ -168,7 +174,9 @@ struct property_set {
 
 void device_add_property_set(struct device *dev, struct property_set *pset);
 
-bool device_dma_is_coherent(struct device *dev);
+bool device_dma_supported(struct device *dev);
+
+enum dev_dma_attr device_get_dma_attr(struct device *dev);
 
 int device_get_phy_mode(struct device *dev);
@@ -134,7 +134,7 @@ next_one:
 }
 
 static struct option info_opts[] = {
-	{.name = "numpst", .has_arg=no_argument, .flag=NULL, .val='n'},
+	{"numpst", no_argument, NULL, 'n'},
 };
 
 void print_help(void)
@@ -20,7 +20,9 @@ Disable a specific processor sleep state.
 Enable a specific processor sleep state.
 .TP
 \fB\-D\fR \fB\-\-disable-by-latency\fR <LATENCY>
-Disable all idle states with a equal or higher latency than <LATENCY>
+Disable all idle states with a equal or higher latency than <LATENCY>.
+
+Enable all idle states with a latency lower than <LATENCY>.
 .TP
 \fB\-E\fR \fB\-\-enable-all\fR
 Enable all idle states if not enabled already.
@@ -536,21 +536,21 @@ static int get_latency(unsigned int cpu, unsigned int human)
 }
 
 static struct option info_opts[] = {
-	{ .name = "debug", .has_arg = no_argument, .flag = NULL, .val = 'e'},
-	{ .name = "boost", .has_arg = no_argument, .flag = NULL, .val = 'b'},
-	{ .name = "freq", .has_arg = no_argument, .flag = NULL, .val = 'f'},
-	{ .name = "hwfreq", .has_arg = no_argument, .flag = NULL, .val = 'w'},
-	{ .name = "hwlimits", .has_arg = no_argument, .flag = NULL, .val = 'l'},
-	{ .name = "driver", .has_arg = no_argument, .flag = NULL, .val = 'd'},
-	{ .name = "policy", .has_arg = no_argument, .flag = NULL, .val = 'p'},
-	{ .name = "governors", .has_arg = no_argument, .flag = NULL, .val = 'g'},
-	{ .name = "related-cpus", .has_arg = no_argument, .flag = NULL, .val = 'r'},
-	{ .name = "affected-cpus",.has_arg = no_argument, .flag = NULL, .val = 'a'},
-	{ .name = "stats", .has_arg = no_argument, .flag = NULL, .val = 's'},
-	{ .name = "latency", .has_arg = no_argument, .flag = NULL, .val = 'y'},
-	{ .name = "proc", .has_arg = no_argument, .flag = NULL, .val = 'o'},
-	{ .name = "human", .has_arg = no_argument, .flag = NULL, .val = 'm'},
-	{ .name = "no-rounding", .has_arg = no_argument, .flag = NULL, .val = 'n'},
+	{"debug", no_argument, NULL, 'e'},
+	{"boost", no_argument, NULL, 'b'},
+	{"freq", no_argument, NULL, 'f'},
+	{"hwfreq", no_argument, NULL, 'w'},
+	{"hwlimits", no_argument, NULL, 'l'},
+	{"driver", no_argument, NULL, 'd'},
+	{"policy", no_argument, NULL, 'p'},
+	{"governors", no_argument, NULL, 'g'},
+	{"related-cpus", no_argument, NULL, 'r'},
+	{"affected-cpus", no_argument, NULL, 'a'},
+	{"stats", no_argument, NULL, 's'},
+	{"latency", no_argument, NULL, 'y'},
+	{"proc", no_argument, NULL, 'o'},
+	{"human", no_argument, NULL, 'm'},
+	{"no-rounding", no_argument, NULL, 'n'},
 	{ },
 };
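The cpupower option-table cleanups in this batch swap designated initializers for positional ones. That only works because `struct option` in `<getopt.h>` declares its members in exactly that order (`name`, `has_arg`, `flag`, `val`); a quick runnable check that the two spellings produce identical entries:

```c
#include <assert.h>
#include <getopt.h>
#include <string.h>

/* The verbose form used before the cleanup ... */
static struct option verbose = {
	.name = "freq", .has_arg = no_argument, .flag = NULL, .val = 'f'
};

/* ... and the positional form it was converted to. */
static struct option positional = { "freq", no_argument, NULL, 'f' };
```

The member-by-member equality is guaranteed by the C positional-initialization rules, so the conversion is purely cosmetic.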
@@ -22,11 +22,11 @@
 #define NORM_FREQ_LEN 32
 
 static struct option set_opts[] = {
-	{ .name = "min", .has_arg = required_argument, .flag = NULL, .val = 'd'},
-	{ .name = "max", .has_arg = required_argument, .flag = NULL, .val = 'u'},
-	{ .name = "governor", .has_arg = required_argument, .flag = NULL, .val = 'g'},
-	{ .name = "freq", .has_arg = required_argument, .flag = NULL, .val = 'f'},
-	{ .name = "related", .has_arg = no_argument, .flag = NULL, .val='r'},
+	{"min", required_argument, NULL, 'd'},
+	{"max", required_argument, NULL, 'u'},
+	{"governor", required_argument, NULL, 'g'},
+	{"freq", required_argument, NULL, 'f'},
+	{"related", no_argument, NULL, 'r'},
 	{ },
 };
@@ -126,8 +126,8 @@ static void proc_cpuidle_cpu_output(unsigned int cpu)
 }
 
 static struct option info_opts[] = {
-	{ .name = "silent", .has_arg = no_argument, .flag = NULL, .val = 's'},
-	{ .name = "proc", .has_arg = no_argument, .flag = NULL, .val = 'o'},
+	{"silent", no_argument, NULL, 's'},
+	{"proc", no_argument, NULL, 'o'},
 	{ },
 };
@@ -13,15 +13,11 @@
 #include "helpers/sysfs.h"
 
 static struct option info_opts[] = {
-	{ .name = "disable",
-		.has_arg = required_argument, .flag = NULL, .val = 'd'},
-	{ .name = "enable",
-		.has_arg = required_argument, .flag = NULL, .val = 'e'},
-	{ .name = "disable-by-latency",
-		.has_arg = required_argument, .flag = NULL, .val = 'D'},
-	{ .name = "enable-all",
-		.has_arg = no_argument, .flag = NULL, .val = 'E'},
-	{ },
+	{"disable", required_argument, NULL, 'd'},
+	{"enable", required_argument, NULL, 'e'},
+	{"disable-by-latency", required_argument, NULL, 'D'},
+	{"enable-all", no_argument, NULL, 'E'},
+	{ },
 };
@@ -148,14 +144,21 @@ int cmd_idle_set(int argc, char **argv)
 					(cpu, idlestate);
 				state_latency = sysfs_get_idlestate_latency
 					(cpu, idlestate);
-				printf("CPU: %u - idlestate %u - state_latency: %llu - latency: %llu\n",
-					cpu, idlestate, state_latency, latency);
-				if (disabled == 1 || latency > state_latency)
+				if (disabled == 1) {
+					if (latency > state_latency){
+						ret = sysfs_idlestate_disable
+							(cpu, idlestate, 0);
+						if (ret == 0)
+		printf(_("Idlestate %u enabled on CPU %u\n"), idlestate, cpu);
+					}
 					continue;
-				ret = sysfs_idlestate_disable
-					(cpu, idlestate, 1);
-				if (ret == 0)
+				}
+				if (latency <= state_latency){
+					ret = sysfs_idlestate_disable
+						(cpu, idlestate, 1);
+					if (ret == 0)
 		printf(_("Idlestate %u disabled on CPU %u\n"), idlestate, cpu);
+				}
 			}
 			break;
 		case 'E':
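The reworked loop above makes `-D <LATENCY>` two-way: a state that is already disabled gets re-enabled when its exit latency is below the threshold, and an enabled state at or above the threshold gets disabled. A pure-function sketch of that decision (names are illustrative, not from cpupower):

```c
#include <assert.h>

enum idle_action { LEAVE, DISABLE, ENABLE };

/* threshold is the user's <LATENCY> argument; state_latency is the
 * idle state's exit latency read from sysfs. Mirrors the logic added
 * to cmd_idle_set() above. */
static enum idle_action disable_by_latency(int currently_disabled,
					   unsigned long long threshold,
					   unsigned long long state_latency)
{
	if (currently_disabled)
		return (threshold > state_latency) ? ENABLE : LEAVE;
	return (threshold <= state_latency) ? DISABLE : LEAVE;
}
```

Before the change, an already-disabled shallow state stayed disabled forever; the new branch re-enables it when it falls below the requested latency bound.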
@@ -17,8 +17,8 @@
 #include "helpers/sysfs.h"
 
 static struct option set_opts[] = {
-	{ .name = "perf-bias", .has_arg = optional_argument, .flag = NULL, .val = 'b'},
-	{ },
+	{"perf-bias", optional_argument, NULL, 'b'},
+	{ },
 };
 
 static void print_wrong_arg_exit(void)
@@ -18,7 +18,7 @@
 #include "helpers/bitmask.h"
 
 static struct option set_opts[] = {
-	{ .name = "perf-bias", .has_arg = required_argument, .flag = NULL, .val = 'b'},
+	{"perf-bias", required_argument, NULL, 'b'},
 	{ },
 };
@@ -73,18 +73,22 @@ int get_cpu_topology(struct cpupower_topology *cpu_top)
 	for (cpu = 0; cpu < cpus; cpu++) {
 		cpu_top->core_info[cpu].cpu = cpu;
 		cpu_top->core_info[cpu].is_online = sysfs_is_cpu_online(cpu);
-		if (!cpu_top->core_info[cpu].is_online)
-			continue;
 		if(sysfs_topology_read_file(
 			cpu,
 			"physical_package_id",
-			&(cpu_top->core_info[cpu].pkg)) < 0)
-			return -1;
+			&(cpu_top->core_info[cpu].pkg)) < 0) {
+			cpu_top->core_info[cpu].pkg = -1;
+			cpu_top->core_info[cpu].core = -1;
+			continue;
+		}
 		if(sysfs_topology_read_file(
 			cpu,
 			"core_id",
-			&(cpu_top->core_info[cpu].core)) < 0)
-			return -1;
+			&(cpu_top->core_info[cpu].core)) < 0) {
+			cpu_top->core_info[cpu].pkg = -1;
+			cpu_top->core_info[cpu].core = -1;
+			continue;
+		}
 	}
 
 	qsort(cpu_top->core_info, cpus, sizeof(struct cpuid_core_info),

@@ -95,12 +99,15 @@ int get_cpu_topology(struct cpupower_topology *cpu_top)
 	   done by pkg value. */
 	last_pkg = cpu_top->core_info[0].pkg;
 	for(cpu = 1; cpu < cpus; cpu++) {
-		if(cpu_top->core_info[cpu].pkg != last_pkg) {
+		if (cpu_top->core_info[cpu].pkg != last_pkg &&
+				cpu_top->core_info[cpu].pkg != -1) {
+
 			last_pkg = cpu_top->core_info[cpu].pkg;
 			cpu_top->pkgs++;
 		}
 	}
-	cpu_top->pkgs++;
+	if (!cpu_top->core_info[0].pkg == -1)
+		cpu_top->pkgs++;
 
 	/* Intel's cores count is not consecutively numbered, there may
 	 * be a core_id of 3, but none of 2. Assume there always is 0
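After this change, offline CPUs carry the sentinel `pkg == core == -1` instead of aborting topology parsing, and the package-counting pass has to skip those entries. The intended counting logic, sketched over a plain pkg-sorted array (a minimal sketch, not the cpupower code itself):

```c
#include <assert.h>

/* Count distinct packages in a pkg-sorted array where offline CPUs are
 * marked with -1, mirroring the intent of the reworked loop in
 * get_cpu_topology(). */
static int count_pkgs(const int *pkg, int n)
{
	int pkgs = 0;
	int last = pkg[0];

	for (int i = 1; i < n; i++) {
		if (pkg[i] != last && pkg[i] != -1) {
			last = pkg[i];
			pkgs++;
		}
	}
	/* The first entry only counts when it is a real package. */
	if (pkg[0] != -1)
		pkgs++;
	return pkgs;
}
```

Note the patch's final check is written as `!cpu_top->core_info[0].pkg == -1`, which because of `!` precedence never equals -1; the sketch above spells out the comparison that the surrounding logic implies.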
@@ -143,6 +143,9 @@ void print_results(int topology_depth, int cpu)
 	/* Be careful CPUs may got resorted for pkg value do not just use cpu */
 	if (!bitmask_isbitset(cpus_chosen, cpu_top.core_info[cpu].cpu))
 		return;
+	if (!cpu_top.core_info[cpu].is_online &&
+	    cpu_top.core_info[cpu].pkg == -1)
+		return;
 
 	if (topology_depth > 2)
 		printf("%4d|", cpu_top.core_info[cpu].pkg);

@@ -191,7 +194,8 @@ void print_results(int topology_depth, int cpu)
 	 * It's up to the monitor plug-in to check .is_online, this one
 	 * is just for additional info.
 	 */
-	if (!cpu_top.core_info[cpu].is_online) {
+	if (!cpu_top.core_info[cpu].is_online &&
+	    cpu_top.core_info[cpu].pkg != -1) {
 		printf(_(" *is offline\n"));
 		return;
 	} else

@@ -388,6 +392,9 @@ int cmd_monitor(int argc, char **argv)
 		return EXIT_FAILURE;
 	}
 
+	if (!cpu_top.core_info[0].is_online)
+		printf("WARNING: at least one cpu is offline\n");
+
 	/* Default is: monitor all CPUs */
 	if (bitmask_isallclear(cpus_chosen))
 		bitmask_setall(cpus_chosen);
@@ -75,6 +75,7 @@ unsigned int aperf_mperf_multiplier = 1;
 int do_smi;
 double bclk;
 double base_hz;
+unsigned int has_base_hz;
 double tsc_tweak = 1.0;
 unsigned int show_pkg;
 unsigned int show_core;

@@ -96,6 +97,7 @@ unsigned int do_ring_perf_limit_reasons;
 unsigned int crystal_hz;
 unsigned long long tsc_hz;
 int base_cpu;
+double discover_bclk(unsigned int family, unsigned int model);
 
 #define RAPL_PKG	(1 << 0)
 					/* 0x610 MSR_PKG_POWER_LIMIT */

@@ -511,9 +513,13 @@ int format_counters(struct thread_data *t, struct core_data *c,
 	}
 
 	/* Bzy_MHz */
-	if (has_aperf)
-		outp += sprintf(outp, "%8.0f",
-			1.0 * t->tsc * tsc_tweak / units * t->aperf / t->mperf / interval_float);
+	if (has_aperf) {
+		if (has_base_hz)
+			outp += sprintf(outp, "%8.0f", base_hz / units * t->aperf / t->mperf);
+		else
+			outp += sprintf(outp, "%8.0f",
+				1.0 * t->tsc / units * t->aperf / t->mperf / interval_float);
+	}
 
 	/* TSC_MHz */
 	outp += sprintf(outp, "%8.0f", 1.0 * t->tsc/units/interval_float);

@@ -1158,12 +1164,6 @@ int phi_pkg_cstate_limits[16] = {PCL__0, PCL__2, PCL_6N, PCL_6R, PCLRSV, PCLRSV,
 static void
 calculate_tsc_tweak()
 {
-	unsigned long long msr;
-	unsigned int base_ratio;
-
-	get_msr(base_cpu, MSR_NHM_PLATFORM_INFO, &msr);
-	base_ratio = (msr >> 8) & 0xFF;
-	base_hz = base_ratio * bclk * 1000000;
 	tsc_tweak = base_hz / tsc_hz;
 }

@@ -1440,7 +1440,7 @@ dump_config_tdp(void)
 
 	get_msr(base_cpu, MSR_TURBO_ACTIVATION_RATIO, &msr);
 	fprintf(stderr, "cpu%d: MSR_TURBO_ACTIVATION_RATIO: 0x%08llx (", base_cpu, msr);
-	fprintf(stderr, "MAX_NON_TURBO_RATIO=%d", (unsigned int)(msr) & 0xEF);
+	fprintf(stderr, "MAX_NON_TURBO_RATIO=%d", (unsigned int)(msr) & 0x7F);
 	fprintf(stderr, " lock=%d", (unsigned int)(msr >> 31) & 1);
 	fprintf(stderr, ")\n");
 }

@@ -1821,6 +1821,7 @@ void check_permissions()
 int probe_nhm_msrs(unsigned int family, unsigned int model)
 {
 	unsigned long long msr;
+	unsigned int base_ratio;
 	int *pkg_cstate_limits;
 
 	if (!genuine_intel)

@@ -1829,6 +1830,8 @@ int probe_nhm_msrs(unsigned int family, unsigned int model)
 	if (family != 6)
 		return 0;
 
+	bclk = discover_bclk(family, model);
+
 	switch (model) {
 	case 0x1A:	/* Core i7, Xeon 5500 series - Bloomfield, Gainstown NHM-EP */
 	case 0x1E:	/* Core i7 and i5 Processor - Clarksfield, Lynnfield, Jasper Forest */

@@ -1871,9 +1874,13 @@ int probe_nhm_msrs(unsigned int family, unsigned int model)
 		return 0;
 	}
 	get_msr(base_cpu, MSR_NHM_SNB_PKG_CST_CFG_CTL, &msr);
-
 	pkg_cstate_limit = pkg_cstate_limits[msr & 0xF];
 
+	get_msr(base_cpu, MSR_NHM_PLATFORM_INFO, &msr);
+	base_ratio = (msr >> 8) & 0xFF;
+
+	base_hz = base_ratio * bclk * 1000000;
+	has_base_hz = 1;
 	return 1;
 }
 int has_nhm_turbo_ratio_limit(unsigned int family, unsigned int model)

@@ -2780,7 +2787,6 @@ void process_cpuid()
 	do_skl_residency = has_skl_msrs(family, model);
 	do_slm_cstates = is_slm(family, model);
 	do_knl_cstates = is_knl(family, model);
-	bclk = discover_bclk(family, model);
 
 	rapl_probe(family, model);
 	perf_limit_reasons_probe(family, model);
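The turbostat hunks above move the `base_hz` derivation into `probe_nhm_msrs()`: bits 15:8 of `MSR_NHM_PLATFORM_INFO` hold the base (non-turbo) ratio, which is multiplied by the bus clock in MHz. A worked example of that arithmetic with a made-up MSR value (the helper name is illustrative, not turbostat's):

```c
#include <assert.h>

/* Same derivation probe_nhm_msrs() now performs: bits 15:8 of
 * MSR_NHM_PLATFORM_INFO are the base (non-turbo) ratio; bclk is the
 * bus clock in MHz, as returned by discover_bclk(). */
static double compute_base_hz(unsigned long long msr, double bclk_mhz)
{
	unsigned int base_ratio = (msr >> 8) & 0xFF;

	return base_ratio * bclk_mhz * 1000000;
}
```

With `base_hz` computed once at probe time, `format_counters()` can report Bzy_MHz as `base_hz / units * aperf / mperf` instead of rescaling the TSC with `tsc_tweak` on every sample.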