Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6

* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6: (179 commits)
  ACPI: Fix acpi_processor_idle and idle= boot parameters interaction
  acpi: fix section mismatch warning in pnpacpi
  intel_menlo: fix build warning
  ACPI: Cleanup: Remove unneeded, multiple local dummy variables
  ACPI: video - fix permissions on some proc entries
  ACPI: video - properly handle errors when registering proc elements
  ACPI: video - do not store invalid entries in attached_array list
  ACPI: re-name acpi_pm_ops to acpi_suspend_ops
  ACER_WMI/ASUS_LAPTOP: fix build bug
  thinkpad_acpi: fix possible NULL pointer dereference if kstrdup failed
  ACPI: check a return value correctly in acpi_power_get_context()
  #if 0 acpi/bay.c:eject_removable_drive()
  eeepc-laptop: add hwmon fan control
  eeepc-laptop: add backlight
  eeepc-laptop: add base driver
  ACPI: thinkpad-acpi: bump up version to 0.20
  ACPI: thinkpad-acpi: fix selects in Kconfig
  ACPI: thinkpad-acpi: use a private workqueue
  ACPI: thinkpad-acpi: fluff really minor fix
  ACPI: thinkpad-acpi: use uppercase for "LED" on user documentation
  ...

Fixed conflicts in drivers/acpi/video.c and drivers/misc/intel_menlow.c
manually.
Committed by Linus Torvalds on 2008-04-30 11:52:52 -07:00
Parents: ccf2779544 008238b54a
Commit: 08acd4f8af
196 changed files: 6677 additions and 3664 deletions

View file

@ -1,7 +1,7 @@
ThinkPad ACPI Extras Driver
Version 0.19
January 06th, 2008
Version 0.20
April 09th, 2008
Borislav Deianov <borislav@users.sf.net>
Henrique de Moraes Holschuh <hmh@hmh.eng.br>
@ -18,6 +18,11 @@ This driver used to be named ibm-acpi until kernel 2.6.21 and release
moved to the drivers/misc tree and renamed to thinkpad-acpi for kernel
2.6.22, and release 0.14.
The driver is named "thinkpad-acpi". In some places, like module
names, "thinkpad_acpi" is used because of userspace issues.
"tpacpi" is used as a shorthand where "thinkpad-acpi" would be too
long due to length limitations on some Linux kernel versions.
Status
------
@ -571,6 +576,47 @@ netlink interface and the input layer interface, and don't bother at all
with hotkey_report_mode.
Brightness hotkey notes:
These are the current sane choices for brightness key mapping in
thinkpad-acpi:
For IBM and Lenovo models *without* ACPI backlight control (the ones on
which thinkpad-acpi will autoload its backlight interface by default,
and on which ACPI video does not export a backlight interface):
1. Don't enable or map the brightness hotkeys in thinkpad-acpi, as
these older firmware versions unfortunately won't respect the hotkey
mask for brightness keys anyway, and will always react to them. This
usually works fine, unless X.org drivers are doing something to block
the BIOS. In that case, use (3) below. This is the default mode of
operation.
2. Enable the hotkeys, but map them to something else that is NOT
KEY_BRIGHTNESS_UP/DOWN or any other keycode that would cause
userspace to try to change the backlight level, and use that as an
on-screen-display hint.
3. IF AND ONLY IF X.org drivers find a way to block the firmware from
automatically changing the brightness, enable the hotkeys and map
them to KEY_BRIGHTNESS_UP and KEY_BRIGHTNESS_DOWN, and feed that to
something that calls xbacklight. thinkpad-acpi will not be able to
change brightness in that case either, so you should disable its
backlight interface.
For Lenovo models *with* ACPI backlight control:
1. Load up ACPI video and use that. ACPI video will report ACPI
events for brightness change keys. Do not mess with thinkpad-acpi
defaults in this case. thinkpad-acpi should not have anything to do
with backlight events in a scenario where ACPI video is loaded:
brightness hotkeys must be disabled, and the backlight interface is
to be kept disabled as well. This is the default mode of operation.
2. Do *NOT* load up ACPI video, enable the hotkeys in thinkpad-acpi,
and map them to KEY_BRIGHTNESS_UP and KEY_BRIGHTNESS_DOWN. Process
these keys in userspace somehow (e.g. by calling xbacklight, as
sketched below).
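A minimal userspace sketch of option (2), assuming the mapped keycodes are
delivered to a hotkey handler script; the script name and its "up"/"down"
argument convention are hypothetical, only xbacklight itself is taken from
the text above:

    # brightness.sh -- hypothetical handler invoked with "up" or "down"
    case "$1" in
        up)   xbacklight -inc 10 ;;   # raise the backlight by 10%
        down) xbacklight -dec 10 ;;   # lower the backlight by 10%
    esac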
Bluetooth
---------
@ -647,16 +693,31 @@ while others are still having problems. For more information:
https://bugs.freedesktop.org/show_bug.cgi?id=2000
ThinkLight control -- /proc/acpi/ibm/light
------------------------------------------
ThinkLight control
------------------
The current status of the ThinkLight can be found in this file. A few
models which do not make the status available will show it as
"unknown". The available commands are:
procfs: /proc/acpi/ibm/light
sysfs attributes: as per LED class, for the "tpacpi::thinklight" LED
procfs notes:
The ThinkLight status can be read and set through the procfs interface. A
few models which do not make the status available will show the ThinkLight
status as "unknown". The available commands are:
echo on > /proc/acpi/ibm/light
echo off > /proc/acpi/ibm/light
sysfs notes:
The ThinkLight sysfs interface is documented by the LED class
documentation, in Documentation/leds-class.txt. The ThinkLight LED name
is "tpacpi::thinklight".
Due to limitations in the sysfs LED class, if the status of the ThinkLight
cannot be read or if it is unknown, thinkpad-acpi will report it as "off".
It is impossible to know if the status returned through sysfs is valid.
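A short sketch of driving the ThinkLight through the standard LED class
attributes; the path follows the usual LED class layout under
/sys/class/leds, and the LED name is the one documented above:

    # turn the ThinkLight on, then off, via the sysfs LED class
    echo 1 > /sys/class/leds/tpacpi::thinklight/brightness
    echo 0 > /sys/class/leds/tpacpi::thinklight/brightness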
Docking / undocking -- /proc/acpi/ibm/dock
------------------------------------------
@ -815,28 +876,63 @@ The cmos command interface is prone to firmware split-brain problems, as
in newer ThinkPads it is just a compatibility layer. Do not use it, it is
exported just as a debug tool.
LED control -- /proc/acpi/ibm/led
---------------------------------
LED control
-----------
Some of the LED indicators can be controlled through this feature. The
available commands are:
procfs: /proc/acpi/ibm/led
sysfs attributes: as per LED class, see below for names
echo '<led number> on' >/proc/acpi/ibm/led
echo '<led number> off' >/proc/acpi/ibm/led
echo '<led number> blink' >/proc/acpi/ibm/led
Some of the LED indicators can be controlled through this feature. On
some older ThinkPad models, it is possible to query the status of the
LED indicators as well. On newer ThinkPads, the real status of the LED
indicators cannot be queried.
The <led number> range is 0 to 7. The set of LEDs that can be
controlled varies from model to model. Here is the mapping on the X40:
procfs notes:
The available commands are:
echo '<LED number> on' >/proc/acpi/ibm/led
echo '<LED number> off' >/proc/acpi/ibm/led
echo '<LED number> blink' >/proc/acpi/ibm/led
The <LED number> range is 0 to 7. The set of LEDs that can be
controlled varies from model to model. Here is the common ThinkPad
mapping:
0 - power
1 - battery (orange)
2 - battery (green)
3 - UltraBase
3 - UltraBase/dock
4 - UltraBay
5 - UltraBase battery slot
6 - (unknown)
7 - standby
All of the above can be turned on and off and can be made to blink.
sysfs notes:
The ThinkPad LED sysfs interface is described in detail by the LED class
documentation, in Documentation/leds-class.txt.
The leds are named (in LED ID order, from 0 to 7):
"tpacpi::power", "tpacpi:orange:batt", "tpacpi:green:batt",
"tpacpi::dock_active", "tpacpi::bay_active", "tpacpi::dock_batt",
"tpacpi::unknown_led", "tpacpi::standby".
Due to limitations in the sysfs LED class, if the status of the LED
indicators cannot be read due to an error, thinkpad-acpi will report it as
a brightness of zero (same as LED off).
If the ThinkPad firmware doesn't support reading the current status,
trying to read the current LED brightness will just return whatever
brightness was last written to that attribute.
These LEDs can blink using hardware acceleration. To request that a
ThinkPad indicator LED should blink in hardware accelerated mode, use the
"timer" trigger, and leave the delay_on and delay_off parameters set to
zero (to request hardware acceleration autodetection).
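A minimal sketch of requesting hardware-accelerated blinking for one of
these LEDs through the LED class "timer" trigger, as described above; the
power LED name is taken from the list above, and leaving delay_on/delay_off
at zero asks for the hardware blink rate:

    led=/sys/class/leds/tpacpi::power
    echo timer > $led/trigger    # attach the timer trigger
    echo 0 > $led/delay_on       # zero = request hardware autodetection
    echo 0 > $led/delay_off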
ACPI sounds -- /proc/acpi/ibm/beep
----------------------------------
@ -1090,6 +1186,15 @@ it there will be the following attributes:
dim the display.
WARNING:
Whatever you do, do NOT ever use the thinkpad-acpi backlight-level change
interface and the ACPI-based backlight level change interface
(available on newer BIOSes, and driven by the Linux ACPI video driver)
at the same time. The two will interact in bad ways, do funny things,
and may reduce the life of the backlight lamps by needlessly kicking
the brightness level up and down at every change.
Volume control -- /proc/acpi/ibm/volume
---------------------------------------

View file

@ -108,10 +108,12 @@ and throttle appropriate devices.
RO read only value
RW read/write value
All thermal sysfs attributes will be represented under /sys/class/thermal
Thermal sysfs attributes will be represented under /sys/class/thermal.
A hwmon sysfs I/F extension is also available under /sys/class/hwmon
if hwmon is compiled in or built as a module.
Thermal zone device sys I/F, created once it's registered:
|thermal_zone[0-*]:
/sys/class/thermal/thermal_zone[0-*]:
|-----type: Type of the thermal zone
|-----temp: Current temperature
|-----mode: Working mode of the thermal zone
@ -119,7 +121,7 @@ Thermal zone device sys I/F, created once it's registered:
|-----trip_point_[0-*]_type: Trip point type
Thermal cooling device sys I/F, created once it's registered:
|cooling_device[0-*]:
/sys/class/thermal/cooling_device[0-*]:
|-----type: Type of the cooling device (processor/fan/...)
|-----max_state: Maximum cooling state of the cooling device
|-----cur_state: Current cooling state of the cooling device
@ -130,10 +132,19 @@ They represent the relationship between a thermal zone and its associated coolin
They are created/removed on each successful execution of
thermal_zone_bind_cooling_device/thermal_zone_unbind_cooling_device.
|thermal_zone[0-*]
/sys/class/thermal/thermal_zone[0-*]
|-----cdev[0-*]: The [0-*]th cooling device in the current thermal zone
|-----cdev[0-*]_trip_point: Trip point that cdev[0-*] is associated with
Besides the thermal zone device sysfs I/F and cooling device sysfs I/F,
the generic thermal driver also creates a hwmon sysfs I/F for each _type_ of
thermal zone device. E.g. the generic thermal driver registers one hwmon class device
and builds the associated hwmon sysfs I/F for all the registered ACPI thermal zones.
/sys/class/hwmon/hwmon[0-*]:
|-----name: The type of the thermal zone devices.
|-----temp[1-*]_input: The current temperature of thermal zone [1-*].
|-----temp[1-*]_crit: The critical trip point of thermal zone [1-*].
Please read Documentation/hwmon/sysfs-interface for additional information.
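A quick shell sketch of reading the hwmon mirror of an ACPI thermal zone;
hwmon0 is an assumed index, and temperatures are reported in millidegrees
Celsius (the example values match the sample tree later in this document):

    cat /sys/class/hwmon/hwmon0/name           # e.g. acpitz
    cat /sys/class/hwmon/hwmon0/temp1_input    # e.g. 37000  (37.0 C)
    cat /sys/class/hwmon/hwmon0/temp1_crit     # e.g. 100000 (critical trip point)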
***************************
* Thermal zone attributes *
@ -141,7 +152,10 @@ thermal_zone_bind_cooling_device/thermal_zone_unbind_cooling_device successful e
type Strings which represent the thermal zone type.
This is given by the thermal zone driver as part of registration.
Eg: "ACPI thermal zone" indicates it's an ACPI thermal device
Eg: "acpitz" indicates it's an ACPI thermal device.
In order to keep it consistent with the hwmon sysfs attribute,
this should be a short, lowercase string,
containing neither spaces nor dashes.
RO
Required
@ -218,7 +232,7 @@ the sys I/F structure will be built like this:
/sys/class/thermal:
|thermal_zone1:
|-----type: ACPI thermal zone
|-----type: acpitz
|-----temp: 37000
|-----mode: kernel
|-----trip_point_0_temp: 100000
@ -243,3 +257,10 @@ the sys I/F structure will be built like this:
|-----type: Fan
|-----max_state: 2
|-----cur_state: 0
/sys/class/hwmon:
|hwmon0:
|-----name: acpitz
|-----temp1_input: 37000
|-----temp1_crit: 100000

View file

@ -1533,6 +1533,13 @@ L: bluesmoke-devel@lists.sourceforge.net
W: bluesmoke.sourceforge.net
S: Maintained
EEEPC LAPTOP EXTRAS DRIVER
P: Corentin Chary
M: corentincj@iksaif.net
L: acpi4asus-user@lists.sourceforge.net
W: http://sourceforge.net/projects/acpi4asus
S: Maintained
EEPRO100 NETWORK DRIVER
P: Andrey V. Savochkin
M: saw@saw.sw.com.sg

View file

@ -163,14 +163,11 @@ static int sysctl_pm_do_suspend(ctl_table *ctl, int write, struct file *filp,
if ((mode != 1) && (mode != 5))
return -EINVAL;
retval = pm_send_all(PM_SUSPEND, (void *)3);
if (retval == 0) {
if (mode == 5)
retval = pm_do_bus_sleep();
else
retval = pm_do_suspend();
pm_send_all(PM_RESUME, (void *)0);
}
return retval;
@ -183,9 +180,6 @@ static int try_set_cmode(int new_cmode)
if (!(clock_cmodes_permitted & (1<<new_cmode)))
return -EINVAL;
/* tell all the drivers we're suspending */
pm_send_all(PM_SUSPEND, (void *)3);
/* now change cmode */
local_irq_disable();
frv_dma_pause_all();
@ -201,8 +195,6 @@ static int try_set_cmode(int new_cmode)
frv_dma_resume_all();
local_irq_enable();
/* tell all the drivers we're resuming */
pm_send_all(PM_RESUME, (void *)0);
return 0;
}

View file

@ -251,7 +251,6 @@ int au_sleep(void)
static int pm_do_sleep(ctl_table * ctl, int write, struct file *file,
void __user *buffer, size_t * len, loff_t *ppos)
{
int retval = 0;
#ifdef SLEEP_TEST_TIMEOUT
#define TMPBUFLEN2 16
char buf[TMPBUFLEN2], *p;
@ -271,36 +270,12 @@ static int pm_do_sleep(ctl_table * ctl, int write, struct file *file,
p = buf;
sleep_ticks = simple_strtoul(p, &p, 0);
#endif
retval = pm_send_all(PM_SUSPEND, (void *) 2);
if (retval)
return retval;
au_sleep();
retval = pm_send_all(PM_RESUME, (void *) 0);
}
return retval;
return 0;
}
static int pm_do_suspend(ctl_table * ctl, int write, struct file *file,
void __user *buffer, size_t * len, loff_t *ppos)
{
int retval = 0;
if (!write) {
*len = 0;
} else {
retval = pm_send_all(PM_SUSPEND, (void *) 2);
if (retval)
return retval;
suspend_mode = 1;
retval = pm_send_all(PM_RESUME, (void *) 0);
}
return retval;
}
static int pm_do_freq(ctl_table * ctl, int write, struct file *file,
void __user *buffer, size_t * len, loff_t *ppos)
{
@ -413,14 +388,6 @@ static int pm_do_freq(ctl_table * ctl, int write, struct file *file,
static struct ctl_table pm_table[] = {
{
.ctl_name = CTL_UNNUMBERED,
.procname = "suspend",
.data = NULL,
.maxlen = 0,
.mode = 0600,
.proc_handler = &pm_do_suspend
},
{
.ctl_name = CTL_UNNUMBERED,
.procname = "sleep",

View file

@ -1192,19 +1192,6 @@ static int suspend(int vetoable)
int err;
struct apm_user *as;
if (pm_send_all(PM_SUSPEND, (void *)3)) {
/* Vetoed */
if (vetoable) {
if (apm_info.connection_version > 0x100)
set_system_power_state(APM_STATE_REJECT);
err = -EBUSY;
ignore_sys_suspend = 0;
printk(KERN_WARNING "apm: suspend was vetoed.\n");
goto out;
}
printk(KERN_CRIT "apm: suspend was vetoed, but suspending anyway.\n");
}
device_suspend(PMSG_SUSPEND);
local_irq_disable();
device_power_down(PMSG_SUSPEND);
@ -1227,9 +1214,7 @@ static int suspend(int vetoable)
device_power_up();
local_irq_enable();
device_resume();
pm_send_all(PM_RESUME, (void *)0);
queue_event(APM_NORMAL_RESUME, NULL);
out:
spin_lock(&user_list_lock);
for (as = user_list; as != NULL; as = as->next) {
as->suspend_wait = 0;
@ -1340,7 +1325,6 @@ static void check_events(void)
if ((event != APM_NORMAL_RESUME)
|| (ignore_normal_resume == 0)) {
device_resume();
pm_send_all(PM_RESUME, (void *)0);
queue_event(event, NULL);
}
ignore_normal_resume = 0;

View file

@ -140,6 +140,7 @@ config ACPI_VIDEO
tristate "Video"
depends on X86 && BACKLIGHT_CLASS_DEVICE && VIDEO_OUTPUT_CONTROL
depends on INPUT
select THERMAL
help
This driver implements the ACPI Extensions For Display Adapters
for integrated graphics devices on the motherboard, as specified in
@ -151,6 +152,7 @@ config ACPI_VIDEO
config ACPI_FAN
tristate "Fan"
select THERMAL
default y
help
This driver adds support for ACPI fan devices, allowing user-mode
@ -172,6 +174,7 @@ config ACPI_BAY
config ACPI_PROCESSOR
tristate "Processor"
select THERMAL
default y
help
This driver installs ACPI as the idle handler for Linux, and uses

View file

@ -201,6 +201,7 @@ static int is_ejectable_bay(acpi_handle handle)
return 0;
}
#if 0
/**
* eject_removable_drive - try to eject this drive
* @dev : the device structure of the drive
@ -225,6 +226,7 @@ int eject_removable_drive(struct device *dev)
return 0;
}
EXPORT_SYMBOL_GPL(eject_removable_drive);
#endif /* 0 */
static int acpi_bay_add_fs(struct bay *bay)
{

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -89,12 +89,16 @@ acpi_ds_create_buffer_field(union acpi_parse_object *op,
ACPI_FUNCTION_TRACE(ds_create_buffer_field);
/* Get the name_string argument */
/*
* Get the name_string argument (name of the new buffer_field)
*/
if (op->common.aml_opcode == AML_CREATE_FIELD_OP) {
/* For create_field, name is the 4th argument */
arg = acpi_ps_get_arg(op, 3);
} else {
/* Create Bit/Byte/Word/Dword field */
/* For all other create_xXXField operators, name is the 3rd argument */
arg = acpi_ps_get_arg(op, 2);
}
@ -107,26 +111,30 @@ acpi_ds_create_buffer_field(union acpi_parse_object *op,
node = walk_state->deferred_node;
status = AE_OK;
} else {
/*
* During the load phase, we want to enter the name of the field into
* the namespace. During the execute phase (when we evaluate the size
* operand), we want to lookup the name
*/
if (walk_state->parse_flags & ACPI_PARSE_EXECUTE) {
flags = ACPI_NS_NO_UPSEARCH | ACPI_NS_DONT_OPEN_SCOPE;
} else {
flags = ACPI_NS_NO_UPSEARCH | ACPI_NS_DONT_OPEN_SCOPE |
ACPI_NS_ERROR_IF_FOUND;
/* Execute flag should always be set when this function is entered */
if (!(walk_state->parse_flags & ACPI_PARSE_EXECUTE)) {
return_ACPI_STATUS(AE_AML_INTERNAL);
}
/*
* Enter the name_string into the namespace
*/
/* Creating new namespace node, should not already exist */
flags = ACPI_NS_NO_UPSEARCH | ACPI_NS_DONT_OPEN_SCOPE |
ACPI_NS_ERROR_IF_FOUND;
/* Mark node temporary if we are executing a method */
if (walk_state->method_node) {
flags |= ACPI_NS_TEMPORARY;
}
/* Enter the name_string into the namespace */
status =
acpi_ns_lookup(walk_state->scope_info,
arg->common.value.string, ACPI_TYPE_ANY,
ACPI_IMODE_LOAD_PASS1, flags, walk_state,
&(node));
&node);
if (ACPI_FAILURE(status)) {
ACPI_ERROR_NAMESPACE(arg->common.value.string, status);
return_ACPI_STATUS(status);
@ -136,13 +144,13 @@ acpi_ds_create_buffer_field(union acpi_parse_object *op,
/*
* We could put the returned object (Node) on the object stack for later,
* but for now, we will put it in the "op" object that the parser uses,
* so we can get it again at the end of this scope
* so we can get it again at the end of this scope.
*/
op->common.node = node;
/*
* If there is no object attached to the node, this node was just created
* and we need to create the field object. Otherwise, this was a lookup
* and we need to create the field object. Otherwise, this was a lookup
* of an existing node and we don't want to create the field object again.
*/
obj_desc = acpi_ns_get_attached_object(node);
@ -164,9 +172,8 @@ acpi_ds_create_buffer_field(union acpi_parse_object *op,
}
/*
* Remember location in AML stream of the field unit
* opcode and operands -- since the buffer and index
* operands must be evaluated.
* Remember location in AML stream of the field unit opcode and operands --
* since the buffer and index operands must be evaluated.
*/
second_desc = obj_desc->common.next_object;
second_desc->extra.aml_start = op->named.data;
@ -261,7 +268,7 @@ acpi_ds_get_field_names(struct acpi_create_field_info *info,
case AML_INT_NAMEDFIELD_OP:
/* Lookup the name */
/* Lookup the name, it should already exist */
status = acpi_ns_lookup(walk_state->scope_info,
(char *)&arg->named.name,
@ -272,20 +279,23 @@ acpi_ds_get_field_names(struct acpi_create_field_info *info,
if (ACPI_FAILURE(status)) {
ACPI_ERROR_NAMESPACE((char *)&arg->named.name,
status);
if (status != AE_ALREADY_EXISTS) {
return_ACPI_STATUS(status);
}
/* Already exists, ignore error */
return_ACPI_STATUS(status);
} else {
arg->common.node = info->field_node;
info->field_bit_length = arg->common.value.size;
/* Create and initialize an object for the new Field Node */
status = acpi_ex_prep_field_value(info);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
/*
* If there is no object attached to the node, this node was
* just created and we need to create the field object.
* Otherwise, this was a lookup of an existing node and we
* don't want to create the field object again.
*/
if (!acpi_ns_get_attached_object
(info->field_node)) {
status = acpi_ex_prep_field_value(info);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
}
}
@ -399,9 +409,27 @@ acpi_ds_init_field_objects(union acpi_parse_object *op,
union acpi_parse_object *arg = NULL;
struct acpi_namespace_node *node;
u8 type = 0;
u32 flags;
ACPI_FUNCTION_TRACE_PTR(ds_init_field_objects, op);
/* Execute flag should always be set when this function is entered */
if (!(walk_state->parse_flags & ACPI_PARSE_EXECUTE)) {
if (walk_state->parse_flags & ACPI_PARSE_DEFERRED_OP) {
/* bank_field Op is deferred, just return OK */
return_ACPI_STATUS(AE_OK);
}
return_ACPI_STATUS(AE_AML_INTERNAL);
}
/*
* Get the field_list argument for this opcode. This is the start of the
* list of field elements.
*/
switch (walk_state->opcode) {
case AML_FIELD_OP:
arg = acpi_ps_get_arg(op, 2);
@ -422,20 +450,33 @@ acpi_ds_init_field_objects(union acpi_parse_object *op,
return_ACPI_STATUS(AE_BAD_PARAMETER);
}
if (!arg) {
return_ACPI_STATUS(AE_AML_NO_OPERAND);
}
/* Creating new namespace node(s), should not already exist */
flags = ACPI_NS_NO_UPSEARCH | ACPI_NS_DONT_OPEN_SCOPE |
ACPI_NS_ERROR_IF_FOUND;
/* Mark node(s) temporary if we are executing a method */
if (walk_state->method_node) {
flags |= ACPI_NS_TEMPORARY;
}
/*
* Walk the list of entries in the field_list
*/
while (arg) {
/* Ignore OFFSET and ACCESSAS terms here */
/*
* Ignore OFFSET and ACCESSAS terms here; we are only interested in the
* field names in order to enter them into the namespace.
*/
if (arg->common.aml_opcode == AML_INT_NAMEDFIELD_OP) {
status = acpi_ns_lookup(walk_state->scope_info,
(char *)&arg->named.name,
type, ACPI_IMODE_LOAD_PASS1,
ACPI_NS_NO_UPSEARCH |
ACPI_NS_DONT_OPEN_SCOPE |
ACPI_NS_ERROR_IF_FOUND,
(char *)&arg->named.name, type,
ACPI_IMODE_LOAD_PASS1, flags,
walk_state, &node);
if (ACPI_FAILURE(status)) {
ACPI_ERROR_NAMESPACE((char *)&arg->named.name,
@ -452,7 +493,7 @@ acpi_ds_init_field_objects(union acpi_parse_object *op,
arg->common.node = node;
}
/* Move to next field in the list */
/* Get the next field element in the list */
arg = arg->common.next;
}
@ -466,7 +507,7 @@ acpi_ds_init_field_objects(union acpi_parse_object *op,
*
* PARAMETERS: Op - Op containing the Field definition and args
* region_node - Object for the containing Operation Region
* ` walk_state - Current method state
* walk_state - Current method state
*
* RETURN: Status
*
@ -513,36 +554,13 @@ acpi_ds_create_bank_field(union acpi_parse_object *op,
return_ACPI_STATUS(status);
}
/* Third arg is the bank_value */
/* TBD: This arg is a term_arg, not a constant, and must be evaluated */
/*
* Third arg is the bank_value
* This arg is a term_arg, not a constant
* It will be evaluated later, by acpi_ds_eval_bank_field_operands
*/
arg = arg->common.next;
/* Currently, only the following constants are supported */
switch (arg->common.aml_opcode) {
case AML_ZERO_OP:
info.bank_value = 0;
break;
case AML_ONE_OP:
info.bank_value = 1;
break;
case AML_BYTE_OP:
case AML_WORD_OP:
case AML_DWORD_OP:
case AML_QWORD_OP:
info.bank_value = (u32) arg->common.value.integer;
break;
default:
info.bank_value = 0;
ACPI_ERROR((AE_INFO,
"Non-constant BankValue for BankField is not implemented"));
}
/* Fourth arg is the field flags */
arg = arg->common.next;
@ -553,8 +571,17 @@ acpi_ds_create_bank_field(union acpi_parse_object *op,
info.field_type = ACPI_TYPE_LOCAL_BANK_FIELD;
info.region_node = region_node;
status = acpi_ds_get_field_names(&info, walk_state, arg->common.next);
/*
* Use Info.data_register_node to store bank_field Op
* It's safe because data_register_node will never be used when creating a bank field
* We store aml_start and aml_length in the bank_field Op for late evaluation
* Used in acpi_ex_prep_field_value(Info)
*
* TBD: Or, should we add a field in struct acpi_create_field_info, like "void *ParentOp"?
*/
info.data_register_node = (struct acpi_namespace_node *)op;
status = acpi_ds_get_field_names(&info, walk_state, arg->common.next);
return_ACPI_STATUS(status);
}

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -42,7 +42,6 @@
*/
#include <acpi/acpi.h>
#include <acpi/acparser.h>
#include <acpi/amlcode.h>
#include <acpi/acdispat.h>
#include <acpi/acinterp.h>
@ -102,7 +101,7 @@ acpi_ds_method_error(acpi_status status, struct acpi_walk_state *walk_state)
walk_state->opcode,
walk_state->aml_offset,
NULL);
(void)acpi_ex_enter_interpreter();
acpi_ex_enter_interpreter();
}
#ifdef ACPI_DISASSEMBLER
if (ACPI_FAILURE(status)) {
@ -232,9 +231,9 @@ acpi_ds_begin_method_execution(struct acpi_namespace_node *method_node,
* recursive call.
*/
if (!walk_state ||
!obj_desc->method.mutex->mutex.owner_thread ||
(walk_state->thread !=
obj_desc->method.mutex->mutex.owner_thread)) {
!obj_desc->method.mutex->mutex.thread_id ||
(walk_state->thread->thread_id !=
obj_desc->method.mutex->mutex.thread_id)) {
/*
* Acquire the method mutex. This releases the interpreter if we
* block (and reacquires it before it returns)
@ -254,8 +253,8 @@ acpi_ds_begin_method_execution(struct acpi_namespace_node *method_node,
original_sync_level =
walk_state->thread->current_sync_level;
obj_desc->method.mutex->mutex.owner_thread =
walk_state->thread;
obj_desc->method.mutex->mutex.thread_id =
walk_state->thread->thread_id;
walk_state->thread->current_sync_level =
obj_desc->method.sync_level;
} else {
@ -535,8 +534,6 @@ void
acpi_ds_terminate_control_method(union acpi_operand_object *method_desc,
struct acpi_walk_state *walk_state)
{
struct acpi_namespace_node *method_node;
acpi_status status;
ACPI_FUNCTION_TRACE_PTR(ds_terminate_control_method, walk_state);
@ -551,34 +548,26 @@ acpi_ds_terminate_control_method(union acpi_operand_object *method_desc,
/* Delete all arguments and locals */
acpi_ds_method_data_delete_all(walk_state);
}
/*
* If method is serialized, release the mutex and restore the
* current sync level for this thread
*/
if (method_desc->method.mutex) {
/* Acquisition Depth handles recursive calls */
method_desc->method.mutex->mutex.acquisition_depth--;
if (!method_desc->method.mutex->mutex.acquisition_depth) {
walk_state->thread->current_sync_level =
method_desc->method.mutex->mutex.
original_sync_level;
acpi_os_release_mutex(method_desc->method.mutex->mutex.
os_mutex);
method_desc->method.mutex->mutex.owner_thread = NULL;
}
}
if (walk_state) {
/*
* Delete any objects created by this method during execution.
* The method Node is stored in the walk state
* If method is serialized, release the mutex and restore the
* current sync level for this thread
*/
method_node = walk_state->method_node;
if (method_desc->method.mutex) {
/* Acquisition Depth handles recursive calls */
method_desc->method.mutex->mutex.acquisition_depth--;
if (!method_desc->method.mutex->mutex.acquisition_depth) {
walk_state->thread->current_sync_level =
method_desc->method.mutex->mutex.
original_sync_level;
acpi_os_release_mutex(method_desc->method.
mutex->mutex.os_mutex);
method_desc->method.mutex->mutex.thread_id = 0;
}
}
/*
* Delete any namespace objects created anywhere within
@ -620,7 +609,7 @@ acpi_ds_terminate_control_method(union acpi_operand_object *method_desc,
*/
if ((method_desc->method.method_flags & AML_METHOD_SERIALIZED)
&& (!method_desc->method.mutex)) {
status = acpi_ds_create_method_mutex(method_desc);
(void)acpi_ds_create_method_mutex(method_desc);
}
/* No more threads, we can free the owner_id */

View file

@ -5,7 +5,7 @@
******************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -157,7 +157,9 @@ acpi_ds_build_internal_object(struct acpi_walk_state *walk_state,
* will remain as named references. This behavior is not described
* in the ACPI spec, but it appears to be an oversight.
*/
obj_desc = (union acpi_operand_object *)op->common.node;
obj_desc =
ACPI_CAST_PTR(union acpi_operand_object,
op->common.node);
status =
acpi_ex_resolve_node_to_value(ACPI_CAST_INDIRECT_PTR
@ -172,7 +174,19 @@ acpi_ds_build_internal_object(struct acpi_walk_state *walk_state,
switch (op->common.node->type) {
/*
* For these types, we need the actual node, not the subobject.
* However, the subobject got an extra reference count above.
* However, the subobject did not get an extra reference count above.
*
* TBD: should ex_resolve_node_to_value be changed to fix this?
*/
case ACPI_TYPE_DEVICE:
case ACPI_TYPE_THERMAL:
acpi_ut_add_reference(op->common.node->object);
/*lint -fallthrough */
/*
* For these types, we need the actual node, not the subobject.
* The subobject got an extra reference count in ex_resolve_node_to_value.
*/
case ACPI_TYPE_MUTEX:
case ACPI_TYPE_METHOD:
@ -180,25 +194,15 @@ acpi_ds_build_internal_object(struct acpi_walk_state *walk_state,
case ACPI_TYPE_PROCESSOR:
case ACPI_TYPE_EVENT:
case ACPI_TYPE_REGION:
case ACPI_TYPE_DEVICE:
case ACPI_TYPE_THERMAL:
obj_desc =
(union acpi_operand_object *)op->common.
node;
/* We will create a reference object for these types below */
break;
default:
break;
}
/*
* If above resolved to an operand object, we are done. Otherwise,
* we have a NS node, we must create the package entry as a named
* reference.
*/
if (ACPI_GET_DESCRIPTOR_TYPE(obj_desc) !=
ACPI_DESC_TYPE_NAMED) {
/*
* All other types - the node was resolved to an actual
* object, we are done.
*/
goto exit;
}
}
@ -223,7 +227,7 @@ acpi_ds_build_internal_object(struct acpi_walk_state *walk_state,
exit:
*obj_desc_ptr = obj_desc;
return_ACPI_STATUS(AE_OK);
return_ACPI_STATUS(status);
}
/*******************************************************************************
@ -369,7 +373,9 @@ acpi_ds_build_internal_package_obj(struct acpi_walk_state *walk_state,
union acpi_parse_object *parent;
union acpi_operand_object *obj_desc = NULL;
acpi_status status = AE_OK;
acpi_native_uint i;
unsigned i;
u16 index;
u16 reference_count;
ACPI_FUNCTION_TRACE(ds_build_internal_package_obj);
@ -447,13 +453,60 @@ acpi_ds_build_internal_package_obj(struct acpi_walk_state *walk_state,
package.
elements[i]);
}
if (*obj_desc_ptr) {
/* Existing package, get existing reference count */
reference_count =
(*obj_desc_ptr)->common.reference_count;
if (reference_count > 1) {
/* Make new element ref count match original ref count */
for (index = 0; index < (reference_count - 1);
index++) {
acpi_ut_add_reference((obj_desc->
package.
elements[i]));
}
}
}
arg = arg->common.next;
}
if (!arg) {
/* Check for match between num_elements and actual length of package_list */
if (arg) {
/*
* num_elements was exhausted, but there are remaining elements in the
* package_list.
*
* Note: technically, this is an error, from ACPI spec: "It is an error
* for NumElements to be less than the number of elements in the
* PackageList". However, for now, we just print an error message and
* no exception is returned.
*/
while (arg) {
/* Find out how many elements there really are */
i++;
arg = arg->common.next;
}
ACPI_ERROR((AE_INFO,
"Package List length (%X) larger than NumElements count (%X), truncated\n",
i, element_count));
} else if (i < element_count) {
/*
* Arg list (elements) was exhausted, but we did not reach num_elements count.
* Note: this is not an error, the package is padded out with NULLs.
*/
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
"Package List length larger than NumElements count (%X), truncated\n",
element_count));
"Package List length (%X) smaller than NumElements count (%X), padded with null elements\n",
i, element_count));
}
obj_desc->package.flags |= AOPOBJ_DATA_VALID;
@ -721,6 +774,8 @@ acpi_ds_init_object_from_op(struct acpi_walk_state *walk_state,
/* Node was saved in Op */
obj_desc->reference.node = op->common.node;
obj_desc->reference.object =
op->common.node->object;
}
obj_desc->reference.opcode = opcode;

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -49,6 +49,7 @@
#include <acpi/acinterp.h>
#include <acpi/acnamesp.h>
#include <acpi/acevents.h>
#include <acpi/actables.h>
#define _COMPONENT ACPI_DISPATCHER
ACPI_MODULE_NAME("dsopcode")
@ -217,6 +218,50 @@ acpi_ds_get_buffer_field_arguments(union acpi_operand_object *obj_desc)
return_ACPI_STATUS(status);
}
/*******************************************************************************
*
* FUNCTION: acpi_ds_get_bank_field_arguments
*
* PARAMETERS: obj_desc - A valid bank_field object
*
* RETURN: Status.
*
* DESCRIPTION: Get bank_field bank_value. This implements the late
* evaluation of these field attributes.
*
******************************************************************************/
acpi_status
acpi_ds_get_bank_field_arguments(union acpi_operand_object *obj_desc)
{
union acpi_operand_object *extra_desc;
struct acpi_namespace_node *node;
acpi_status status;
ACPI_FUNCTION_TRACE_PTR(ds_get_bank_field_arguments, obj_desc);
if (obj_desc->common.flags & AOPOBJ_DATA_VALID) {
return_ACPI_STATUS(AE_OK);
}
/* Get the AML pointer (method object) and bank_field node */
extra_desc = acpi_ns_get_secondary_object(obj_desc);
node = obj_desc->bank_field.node;
ACPI_DEBUG_EXEC(acpi_ut_display_init_pathname
(ACPI_TYPE_LOCAL_BANK_FIELD, node, NULL));
ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "[%4.4s] BankField Arg Init\n",
acpi_ut_get_node_name(node)));
/* Execute the AML code for the term_arg arguments */
status = acpi_ds_execute_arguments(node, acpi_ns_get_parent_node(node),
extra_desc->extra.aml_length,
extra_desc->extra.aml_start);
return_ACPI_STATUS(status);
}
/*******************************************************************************
*
* FUNCTION: acpi_ds_get_buffer_arguments
@ -770,7 +815,109 @@ acpi_ds_eval_region_operands(struct acpi_walk_state *walk_state,
ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "RgnObj %p Addr %8.8X%8.8X Len %X\n",
obj_desc,
ACPI_FORMAT_UINT64(obj_desc->region.address),
ACPI_FORMAT_NATIVE_UINT(obj_desc->region.address),
obj_desc->region.length));
/* Now the address and length are valid for this opregion */
obj_desc->region.flags |= AOPOBJ_DATA_VALID;
return_ACPI_STATUS(status);
}
/*******************************************************************************
*
* FUNCTION: acpi_ds_eval_table_region_operands
*
* PARAMETERS: walk_state - Current walk
* Op - A valid region Op object
*
* RETURN: Status
*
* DESCRIPTION: Get region address and length
* Called from acpi_ds_exec_end_op during data_table_region parse tree walk
*
******************************************************************************/
acpi_status
acpi_ds_eval_table_region_operands(struct acpi_walk_state *walk_state,
union acpi_parse_object *op)
{
acpi_status status;
union acpi_operand_object *obj_desc;
union acpi_operand_object **operand;
struct acpi_namespace_node *node;
union acpi_parse_object *next_op;
acpi_native_uint table_index;
struct acpi_table_header *table;
ACPI_FUNCTION_TRACE_PTR(ds_eval_table_region_operands, op);
/*
* This is where we evaluate the signature_string and oem_iDString
* and oem_table_iDString of the data_table_region declaration
*/
node = op->common.node;
/* next_op points to signature_string op */
next_op = op->common.value.arg;
/*
* Evaluate/create the signature_string and oem_iDString
* and oem_table_iDString operands
*/
status = acpi_ds_create_operands(walk_state, next_op);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
/*
* Resolve the signature_string and oem_iDString
* and oem_table_iDString operands
*/
status = acpi_ex_resolve_operands(op->common.aml_opcode,
ACPI_WALK_OPERANDS, walk_state);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
ACPI_DUMP_OPERANDS(ACPI_WALK_OPERANDS, ACPI_IMODE_EXECUTE,
acpi_ps_get_opcode_name(op->common.aml_opcode),
1, "after AcpiExResolveOperands");
operand = &walk_state->operands[0];
/* Find the ACPI table */
status = acpi_tb_find_table(operand[0]->string.pointer,
operand[1]->string.pointer,
operand[2]->string.pointer, &table_index);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
acpi_ut_remove_reference(operand[0]);
acpi_ut_remove_reference(operand[1]);
acpi_ut_remove_reference(operand[2]);
status = acpi_get_table_by_index(table_index, &table);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
obj_desc = acpi_ns_get_attached_object(node);
if (!obj_desc) {
return_ACPI_STATUS(AE_NOT_EXIST);
}
obj_desc->region.address =
(acpi_physical_address) ACPI_TO_INTEGER(table);
obj_desc->region.length = table->length;
ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "RgnObj %p Addr %8.8X%8.8X Len %X\n",
obj_desc,
ACPI_FORMAT_NATIVE_UINT(obj_desc->region.address),
obj_desc->region.length));
/* Now the address and length are valid for this opregion */
@ -808,6 +955,12 @@ acpi_ds_eval_data_object_operands(struct acpi_walk_state *walk_state,
/* The first operand (for all of these data objects) is the length */
/*
* Set proper index into operand stack for acpi_ds_obj_stack_push
* invoked inside acpi_ds_create_operand.
*/
walk_state->operand_index = walk_state->num_operands;
status = acpi_ds_create_operand(walk_state, op->common.value.arg, 1);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
@ -876,6 +1029,106 @@ acpi_ds_eval_data_object_operands(struct acpi_walk_state *walk_state,
return_ACPI_STATUS(status);
}
/*******************************************************************************
*
* FUNCTION: acpi_ds_eval_bank_field_operands
*
* PARAMETERS: walk_state - Current walk
* Op - A valid bank_field Op object
*
* RETURN: Status
*
* DESCRIPTION: Get bank_field bank_value
* Called from acpi_ds_exec_end_op during bank_field parse tree walk
*
******************************************************************************/
acpi_status
acpi_ds_eval_bank_field_operands(struct acpi_walk_state *walk_state,
union acpi_parse_object *op)
{
acpi_status status;
union acpi_operand_object *obj_desc;
union acpi_operand_object *operand_desc;
struct acpi_namespace_node *node;
union acpi_parse_object *next_op;
union acpi_parse_object *arg;
ACPI_FUNCTION_TRACE_PTR(ds_eval_bank_field_operands, op);
/*
* This is where we evaluate the bank_value field of the
* bank_field declaration
*/
/* next_op points to the op that holds the Region */
next_op = op->common.value.arg;
/* next_op points to the op that holds the Bank Register */
next_op = next_op->common.next;
/* next_op points to the op that holds the Bank Value */
next_op = next_op->common.next;
/*
* Set proper index into operand stack for acpi_ds_obj_stack_push
* invoked inside acpi_ds_create_operand.
*
* We use walk_state->Operands[0] to store the evaluated bank_value
*/
walk_state->operand_index = 0;
status = acpi_ds_create_operand(walk_state, next_op, 0);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
status = acpi_ex_resolve_to_value(&walk_state->operands[0], walk_state);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
ACPI_DUMP_OPERANDS(ACPI_WALK_OPERANDS, ACPI_IMODE_EXECUTE,
acpi_ps_get_opcode_name(op->common.aml_opcode),
1, "after AcpiExResolveOperands");
/*
* Get the bank_value operand and save it
* (at Top of stack)
*/
operand_desc = walk_state->operands[0];
/* Arg points to the start Bank Field */
arg = acpi_ps_get_arg(op, 4);
while (arg) {
/* Ignore OFFSET and ACCESSAS terms here */
if (arg->common.aml_opcode == AML_INT_NAMEDFIELD_OP) {
node = arg->common.node;
obj_desc = acpi_ns_get_attached_object(node);
if (!obj_desc) {
return_ACPI_STATUS(AE_NOT_EXIST);
}
obj_desc->bank_field.value =
(u32) operand_desc->integer.value;
}
/* Move to next field in the list */
arg = arg->common.next;
}
acpi_ut_remove_reference(operand_desc);
return_ACPI_STATUS(status);
}
/*******************************************************************************
*
* FUNCTION: acpi_ds_exec_begin_control_op
@ -1070,8 +1323,7 @@ acpi_ds_exec_end_control_op(struct acpi_walk_state * walk_state,
* is set to anything other than zero!
*/
walk_state->return_desc = walk_state->operands[0];
} else if ((walk_state->results) &&
(walk_state->results->results.num_results > 0)) {
} else if (walk_state->result_count) {
/* Since we have a real Return(), delete any implicit return */

View file

@ -5,7 +5,7 @@
******************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -278,7 +278,9 @@ acpi_ds_is_result_used(union acpi_parse_object * op,
AML_VAR_PACKAGE_OP)
|| (op->common.parent->common.aml_opcode == AML_BUFFER_OP)
|| (op->common.parent->common.aml_opcode ==
AML_INT_EVAL_SUBTREE_OP)) {
AML_INT_EVAL_SUBTREE_OP)
|| (op->common.parent->common.aml_opcode ==
AML_BANK_FIELD_OP)) {
/*
* These opcodes allow term_arg(s) as operands and therefore
* the operands can be method calls. The result is used.
@ -472,7 +474,8 @@ acpi_ds_create_operand(struct acpi_walk_state *walk_state,
/* A valid name must be looked up in the namespace */
if ((arg->common.aml_opcode == AML_INT_NAMEPATH_OP) &&
(arg->common.value.string)) {
(arg->common.value.string) &&
!(arg->common.flags & ACPI_PARSEOP_IN_STACK)) {
ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH, "Getting a name: Arg=%p\n",
arg));
@ -595,7 +598,8 @@ acpi_ds_create_operand(struct acpi_walk_state *walk_state,
} else {
/* Check for null name case */
if (arg->common.aml_opcode == AML_INT_NAMEPATH_OP) {
if ((arg->common.aml_opcode == AML_INT_NAMEPATH_OP) &&
!(arg->common.flags & ACPI_PARSEOP_IN_STACK)) {
/*
* If the name is null, this means that this is an
* optional result parameter that was not specified
@ -617,7 +621,8 @@ acpi_ds_create_operand(struct acpi_walk_state *walk_state,
return_ACPI_STATUS(AE_NOT_IMPLEMENTED);
}
if (op_info->flags & AML_HAS_RETVAL) {
if ((op_info->flags & AML_HAS_RETVAL)
|| (arg->common.flags & ACPI_PARSEOP_IN_STACK)) {
ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH,
"Argument previously created, already stacked\n"));
@ -630,9 +635,7 @@ acpi_ds_create_operand(struct acpi_walk_state *walk_state,
* Use value that was already previously returned
* by the evaluation of this argument
*/
status =
acpi_ds_result_pop_from_bottom(&obj_desc,
walk_state);
status = acpi_ds_result_pop(&obj_desc, walk_state);
if (ACPI_FAILURE(status)) {
/*
* Only error is underflow, and this indicates
@ -698,27 +701,52 @@ acpi_ds_create_operands(struct acpi_walk_state *walk_state,
{
acpi_status status = AE_OK;
union acpi_parse_object *arg;
union acpi_parse_object *arguments[ACPI_OBJ_NUM_OPERANDS];
u32 arg_count = 0;
u32 index = walk_state->num_operands;
u32 i;
ACPI_FUNCTION_TRACE_PTR(ds_create_operands, first_arg);
/* For all arguments in the list... */
/* Get all arguments in the list */
arg = first_arg;
while (arg) {
status = acpi_ds_create_operand(walk_state, arg, arg_count);
if (ACPI_FAILURE(status)) {
goto cleanup;
if (index >= ACPI_OBJ_NUM_OPERANDS) {
return_ACPI_STATUS(AE_BAD_DATA);
}
ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH,
"Arg #%d (%p) done, Arg1=%p\n", arg_count,
arg, first_arg));
arguments[index] = arg;
walk_state->operands[index] = NULL;
/* Move on to next argument, if any */
arg = arg->common.next;
arg_count++;
index++;
}
index--;
/* It is the appropriate order to get objects from the Result stack */
for (i = 0; i < arg_count; i++) {
arg = arguments[index];
/* Force the filling of the operand stack in inverse order */
walk_state->operand_index = (u8) index;
status = acpi_ds_create_operand(walk_state, arg, index);
if (ACPI_FAILURE(status)) {
goto cleanup;
}
index--;
ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH,
"Arg #%d (%p) done, Arg1=%p\n", index, arg,
first_arg));
}
return_ACPI_STATUS(status);
@ -729,9 +757,112 @@ acpi_ds_create_operands(struct acpi_walk_state *walk_state,
* pop everything off of the operand stack and delete those
* objects
*/
(void)acpi_ds_obj_stack_pop_and_delete(arg_count, walk_state);
acpi_ds_obj_stack_pop_and_delete(arg_count, walk_state);
ACPI_EXCEPTION((AE_INFO, status, "While creating Arg %d", index));
return_ACPI_STATUS(status);
}
/*****************************************************************************
*
* FUNCTION: acpi_ds_evaluate_name_path
*
* PARAMETERS: walk_state - Current state of the parse tree walk,
* the opcode of current operation should be
* AML_INT_NAMEPATH_OP
*
* RETURN: Status
*
* DESCRIPTION: Translate the -name_path- parse tree object to the equivalent
* interpreter object, convert it to value, if needed, duplicate
* it, if needed, and push it onto the current result stack.
*
****************************************************************************/
acpi_status acpi_ds_evaluate_name_path(struct acpi_walk_state *walk_state)
{
acpi_status status = AE_OK;
union acpi_parse_object *op = walk_state->op;
union acpi_operand_object **operand = &walk_state->operands[0];
union acpi_operand_object *new_obj_desc;
u8 type;
ACPI_FUNCTION_TRACE_PTR(ds_evaluate_name_path, walk_state);
if (!op->common.parent) {
/* This happens after certain exception processing */
goto exit;
}
if ((op->common.parent->common.aml_opcode == AML_PACKAGE_OP) ||
(op->common.parent->common.aml_opcode == AML_VAR_PACKAGE_OP) ||
(op->common.parent->common.aml_opcode == AML_REF_OF_OP)) {
/* TBD: Should we specify this feature as a bit of op_info->Flags of these opcodes? */
goto exit;
}
status = acpi_ds_create_operand(walk_state, op, 0);
if (ACPI_FAILURE(status)) {
goto exit;
}
if (op->common.flags & ACPI_PARSEOP_TARGET) {
new_obj_desc = *operand;
goto push_result;
}
type = ACPI_GET_OBJECT_TYPE(*operand);
status = acpi_ex_resolve_to_value(operand, walk_state);
if (ACPI_FAILURE(status)) {
goto exit;
}
if (type == ACPI_TYPE_INTEGER) {
/* It was incremented by acpi_ex_resolve_to_value */
acpi_ut_remove_reference(*operand);
status =
acpi_ut_copy_iobject_to_iobject(*operand, &new_obj_desc,
walk_state);
if (ACPI_FAILURE(status)) {
goto exit;
}
} else {
/*
* The object either was newly created or is
* a Namespace node - don't decrement it.
*/
new_obj_desc = *operand;
}
/* Cleanup for name-path operand */
status = acpi_ds_obj_stack_pop(1, walk_state);
if (ACPI_FAILURE(status)) {
walk_state->result_obj = new_obj_desc;
goto exit;
}
push_result:
walk_state->result_obj = new_obj_desc;
status = acpi_ds_result_push(walk_state->result_obj, walk_state);
if (ACPI_SUCCESS(status)) {
/* Force to take it from stack */
op->common.flags |= ACPI_PARSEOP_IN_STACK;
}
exit:
ACPI_EXCEPTION((AE_INFO, status, "While creating Arg %d",
(arg_count + 1)));
return_ACPI_STATUS(status);
}

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -285,11 +285,6 @@ acpi_ds_exec_begin_op(struct acpi_walk_state *walk_state,
switch (opcode_class) {
case AML_CLASS_CONTROL:
status = acpi_ds_result_stack_push(walk_state);
if (ACPI_FAILURE(status)) {
goto error_exit;
}
status = acpi_ds_exec_begin_control_op(walk_state, op);
break;
@ -305,20 +300,11 @@ acpi_ds_exec_begin_op(struct acpi_walk_state *walk_state,
status = acpi_ds_load2_begin_op(walk_state, NULL);
}
if (op->common.aml_opcode == AML_REGION_OP) {
status = acpi_ds_result_stack_push(walk_state);
}
break;
case AML_CLASS_EXECUTE:
case AML_CLASS_CREATE:
/*
* Most operators with arguments (except create_xxx_field operators)
* Start a new result/operand state
*/
if (walk_state->op_info->object_type != ACPI_TYPE_BUFFER_FIELD) {
status = acpi_ds_result_stack_push(walk_state);
}
break;
default:
@ -374,6 +360,7 @@ acpi_status acpi_ds_exec_end_op(struct acpi_walk_state *walk_state)
/* Init the walk state */
walk_state->num_operands = 0;
walk_state->operand_index = 0;
walk_state->return_desc = NULL;
walk_state->result_obj = NULL;
@ -388,10 +375,17 @@ acpi_status acpi_ds_exec_end_op(struct acpi_walk_state *walk_state)
/* Decode the Opcode Class */
switch (op_class) {
case AML_CLASS_ARGUMENT: /* constants, literals, etc. - do nothing */
case AML_CLASS_ARGUMENT: /* Constants, literals, etc. */
if (walk_state->opcode == AML_INT_NAMEPATH_OP) {
status = acpi_ds_evaluate_name_path(walk_state);
if (ACPI_FAILURE(status)) {
goto cleanup;
}
}
break;
case AML_CLASS_EXECUTE: /* most operators with arguments */
case AML_CLASS_EXECUTE: /* Most operators with arguments */
/* Build resolved operand stack */
@ -400,13 +394,6 @@ acpi_status acpi_ds_exec_end_op(struct acpi_walk_state *walk_state)
goto cleanup;
}
/* Done with this result state (Now that operand stack is built) */
status = acpi_ds_result_stack_pop(walk_state);
if (ACPI_FAILURE(status)) {
goto cleanup;
}
/*
* All opcodes require operand resolution, with the only exceptions
* being the object_type and size_of operators.
@ -487,16 +474,6 @@ acpi_status acpi_ds_exec_end_op(struct acpi_walk_state *walk_state)
status = acpi_ds_exec_end_control_op(walk_state, op);
/* Make sure to properly pop the result stack */
if (ACPI_SUCCESS(status)) {
status = acpi_ds_result_stack_pop(walk_state);
} else if (status == AE_CTRL_PENDING) {
status = acpi_ds_result_stack_pop(walk_state);
if (ACPI_SUCCESS(status)) {
status = AE_CTRL_PENDING;
}
}
break;
case AML_TYPE_METHOD_CALL:
@ -516,7 +493,7 @@ acpi_status acpi_ds_exec_end_op(struct acpi_walk_state *walk_state)
op->common.node =
(struct acpi_namespace_node *)op->asl.value.
arg->asl.node->object;
arg->asl.node;
acpi_ut_add_reference(op->asl.value.arg->asl.
node->object);
return_ACPI_STATUS(AE_OK);
@ -632,13 +609,6 @@ acpi_status acpi_ds_exec_end_op(struct acpi_walk_state *walk_state)
break;
}
/* Done with result state (Now that operand stack is built) */
status = acpi_ds_result_stack_pop(walk_state);
if (ACPI_FAILURE(status)) {
goto cleanup;
}
/*
* If a result object was returned from above, push it on the
* current result stack
@ -671,8 +641,28 @@ acpi_status acpi_ds_exec_end_op(struct acpi_walk_state *walk_state)
if (ACPI_FAILURE(status)) {
break;
}
} else if (op->common.aml_opcode == AML_DATA_REGION_OP) {
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Executing DataTableRegion Strings Op=%p\n",
op));
status = acpi_ds_result_stack_pop(walk_state);
status =
acpi_ds_eval_table_region_operands
(walk_state, op);
if (ACPI_FAILURE(status)) {
break;
}
} else if (op->common.aml_opcode == AML_BANK_FIELD_OP) {
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Executing BankField Op=%p\n",
op));
status =
acpi_ds_eval_bank_field_operands(walk_state,
op);
if (ACPI_FAILURE(status)) {
break;
}
}
break;

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -443,6 +443,15 @@ acpi_status acpi_ds_load1_end_op(struct acpi_walk_state *walk_state)
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
} else if (op->common.aml_opcode == AML_DATA_REGION_OP) {
status =
acpi_ex_create_region(op->named.data,
op->named.length,
REGION_DATA_TABLE,
walk_state);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
}
}
#endif
@ -767,6 +776,12 @@ acpi_ds_load2_begin_op(struct acpi_walk_state *walk_state,
acpi_ns_lookup(walk_state->scope_info, buffer_ptr,
object_type, ACPI_IMODE_LOAD_PASS2, flags,
walk_state, &node);
if (ACPI_SUCCESS(status) && (flags & ACPI_NS_TEMPORARY)) {
ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH,
"***New Node [%4.4s] %p is temporary\n",
acpi_ut_get_node_name(node), node));
}
break;
}
@ -823,6 +838,7 @@ acpi_status acpi_ds_load2_end_op(struct acpi_walk_state *walk_state)
struct acpi_namespace_node *new_node;
#ifndef ACPI_NO_METHOD_EXECUTION
u32 i;
u8 region_space;
#endif
ACPI_FUNCTION_TRACE(ds_load2_end_op);
@ -1003,11 +1019,6 @@ acpi_status acpi_ds_load2_end_op(struct acpi_walk_state *walk_state)
status = acpi_ex_create_event(walk_state);
break;
case AML_DATA_REGION_OP:
status = acpi_ex_create_table_region(walk_state);
break;
case AML_ALIAS_OP:
status = acpi_ex_create_alias(walk_state);
@ -1035,6 +1046,15 @@ acpi_status acpi_ds_load2_end_op(struct acpi_walk_state *walk_state)
switch (op->common.aml_opcode) {
#ifndef ACPI_NO_METHOD_EXECUTION
case AML_REGION_OP:
case AML_DATA_REGION_OP:
if (op->common.aml_opcode == AML_REGION_OP) {
region_space = (acpi_adr_space_type)
((op->common.value.arg)->common.value.
integer);
} else {
region_space = REGION_DATA_TABLE;
}
/*
* If we are executing a method, initialize the region
@ -1043,10 +1063,7 @@ acpi_status acpi_ds_load2_end_op(struct acpi_walk_state *walk_state)
status =
acpi_ex_create_region(op->named.data,
op->named.length,
(acpi_adr_space_type)
((op->common.value.
arg)->common.value.
integer),
region_space,
walk_state);
if (ACPI_FAILURE(status)) {
return (status);

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -49,85 +49,9 @@
#define _COMPONENT ACPI_DISPATCHER
ACPI_MODULE_NAME("dswstate")
/* Local prototypes */
#ifdef ACPI_OBSOLETE_FUNCTIONS
acpi_status
acpi_ds_result_insert(void *object,
u32 index, struct acpi_walk_state *walk_state);
acpi_status acpi_ds_obj_stack_delete_all(struct acpi_walk_state *walk_state);
acpi_status
acpi_ds_obj_stack_pop_object(union acpi_operand_object **object,
struct acpi_walk_state *walk_state);
void *acpi_ds_obj_stack_get_value(u32 index,
struct acpi_walk_state *walk_state);
#endif
#ifdef ACPI_FUTURE_USAGE
/*******************************************************************************
*
* FUNCTION: acpi_ds_result_remove
*
* PARAMETERS: Object - Where to return the popped object
* Index - Where to extract the object
* walk_state - Current Walk state
*
* RETURN: Status
*
* DESCRIPTION: Pop an object off the bottom of this walk's result stack. In
* other words, this is a FIFO.
*
******************************************************************************/
acpi_status
acpi_ds_result_remove(union acpi_operand_object **object,
u32 index, struct acpi_walk_state *walk_state)
{
union acpi_generic_state *state;
ACPI_FUNCTION_NAME(ds_result_remove);
state = walk_state->results;
if (!state) {
ACPI_ERROR((AE_INFO, "No result object pushed! State=%p",
walk_state));
return (AE_NOT_EXIST);
}
if (index >= ACPI_OBJ_MAX_OPERAND) {
ACPI_ERROR((AE_INFO,
"Index out of range: %X State=%p Num=%X",
index, walk_state, state->results.num_results));
}
/* Check for a valid result object */
if (!state->results.obj_desc[index]) {
ACPI_ERROR((AE_INFO,
"Null operand! State=%p #Ops=%X, Index=%X",
walk_state, state->results.num_results, index));
return (AE_AML_NO_RETURN_VALUE);
}
/* Remove the object */
state->results.num_results--;
*object = state->results.obj_desc[index];
state->results.obj_desc[index] = NULL;
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Obj=%p [%s] Index=%X State=%p Num=%X\n",
*object,
(*object) ? acpi_ut_get_object_type_name(*object) :
"NULL", index, walk_state,
state->results.num_results));
return (AE_OK);
}
#endif /* ACPI_FUTURE_USAGE */
/* Local prototypes */
static acpi_status acpi_ds_result_stack_push(struct acpi_walk_state *ws);
static acpi_status acpi_ds_result_stack_pop(struct acpi_walk_state *ws);
/*******************************************************************************
*
@ -138,122 +62,67 @@ acpi_ds_result_remove(union acpi_operand_object **object,
*
* RETURN: Status
*
* DESCRIPTION: Pop an object off the bottom of this walk's result stack. In
* other words, this is a FIFO.
* DESCRIPTION: Pop an object off the top of this walk's result stack
*
******************************************************************************/
acpi_status
acpi_ds_result_pop(union acpi_operand_object ** object,
struct acpi_walk_state * walk_state)
acpi_ds_result_pop(union acpi_operand_object **object,
struct acpi_walk_state *walk_state)
{
acpi_native_uint index;
union acpi_generic_state *state;
acpi_status status;
ACPI_FUNCTION_NAME(ds_result_pop);
state = walk_state->results;
if (!state) {
return (AE_OK);
/* Incorrect state of result stack */
if (state && !walk_state->result_count) {
ACPI_ERROR((AE_INFO, "No results on result stack"));
return (AE_AML_INTERNAL);
}
if (!state->results.num_results) {
if (!state && walk_state->result_count) {
ACPI_ERROR((AE_INFO, "No result state for result stack"));
return (AE_AML_INTERNAL);
}
/* Empty result stack */
if (!state) {
ACPI_ERROR((AE_INFO, "Result stack is empty! State=%p",
walk_state));
return (AE_AML_NO_RETURN_VALUE);
}
/* Remove top element */
/* Return object of the top element and clean that top element result stack */
state->results.num_results--;
walk_state->result_count--;
index = walk_state->result_count % ACPI_RESULTS_FRAME_OBJ_NUM;
for (index = ACPI_OBJ_NUM_OPERANDS; index; index--) {
/* Check for a valid result object */
if (state->results.obj_desc[index - 1]) {
*object = state->results.obj_desc[index - 1];
state->results.obj_desc[index - 1] = NULL;
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Obj=%p [%s] Index=%X State=%p Num=%X\n",
*object,
(*object) ?
acpi_ut_get_object_type_name(*object)
: "NULL", (u32) index - 1, walk_state,
state->results.num_results));
return (AE_OK);
}
}
ACPI_ERROR((AE_INFO, "No result objects! State=%p", walk_state));
return (AE_AML_NO_RETURN_VALUE);
}
/*******************************************************************************
*
* FUNCTION: acpi_ds_result_pop_from_bottom
*
* PARAMETERS: Object - Where to return the popped object
* walk_state - Current Walk state
*
* RETURN: Status
*
* DESCRIPTION: Pop an object off the bottom of this walk's result stack. In
* other words, this is a FIFO.
*
******************************************************************************/
acpi_status
acpi_ds_result_pop_from_bottom(union acpi_operand_object ** object,
struct acpi_walk_state * walk_state)
{
acpi_native_uint index;
union acpi_generic_state *state;
ACPI_FUNCTION_NAME(ds_result_pop_from_bottom);
state = walk_state->results;
if (!state) {
*object = state->results.obj_desc[index];
if (!*object) {
ACPI_ERROR((AE_INFO,
"No result object pushed! State=%p", walk_state));
return (AE_NOT_EXIST);
}
if (!state->results.num_results) {
ACPI_ERROR((AE_INFO, "No result objects! State=%p",
"No result objects on result stack, State=%p",
walk_state));
return (AE_AML_NO_RETURN_VALUE);
}
/* Remove Bottom element */
*object = state->results.obj_desc[0];
/* Push entire stack down one element */
for (index = 0; index < state->results.num_results; index++) {
state->results.obj_desc[index] =
state->results.obj_desc[index + 1];
state->results.obj_desc[index] = NULL;
if (index == 0) {
status = acpi_ds_result_stack_pop(walk_state);
if (ACPI_FAILURE(status)) {
return (status);
}
}
state->results.num_results--;
/* Check for a valid result object */
if (!*object) {
ACPI_ERROR((AE_INFO,
"Null operand! State=%p #Ops=%X Index=%X",
walk_state, state->results.num_results,
(u32) index));
return (AE_AML_NO_RETURN_VALUE);
}
ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Obj=%p [%s] Results=%p State=%p\n",
*object,
(*object) ? acpi_ut_get_object_type_name(*object) :
"NULL", state, walk_state));
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Obj=%p [%s] Index=%X State=%p Num=%X\n", *object,
acpi_ut_get_object_type_name(*object),
(u32) index, walk_state, walk_state->result_count));
return (AE_OK);
}
@ -276,39 +145,56 @@ acpi_ds_result_push(union acpi_operand_object * object,
struct acpi_walk_state * walk_state)
{
union acpi_generic_state *state;
acpi_status status;
acpi_native_uint index;
ACPI_FUNCTION_NAME(ds_result_push);
if (walk_state->result_count > walk_state->result_size) {
ACPI_ERROR((AE_INFO, "Result stack is full"));
return (AE_AML_INTERNAL);
} else if (walk_state->result_count == walk_state->result_size) {
/* Extend the result stack */
status = acpi_ds_result_stack_push(walk_state);
if (ACPI_FAILURE(status)) {
ACPI_ERROR((AE_INFO,
"Failed to extend the result stack"));
return (status);
}
}
if (!(walk_state->result_count < walk_state->result_size)) {
ACPI_ERROR((AE_INFO, "No free elements in result stack"));
return (AE_AML_INTERNAL);
}
state = walk_state->results;
if (!state) {
ACPI_ERROR((AE_INFO, "No result stack frame during push"));
return (AE_AML_INTERNAL);
}
if (state->results.num_results == ACPI_OBJ_NUM_OPERANDS) {
ACPI_ERROR((AE_INFO,
"Result stack overflow: Obj=%p State=%p Num=%X",
object, walk_state, state->results.num_results));
return (AE_STACK_OVERFLOW);
}
if (!object) {
ACPI_ERROR((AE_INFO,
"Null Object! Obj=%p State=%p Num=%X",
object, walk_state, state->results.num_results));
object, walk_state, walk_state->result_count));
return (AE_BAD_PARAMETER);
}
state->results.obj_desc[state->results.num_results] = object;
state->results.num_results++;
/* Assign the address of object to the top free element of result stack */
index = walk_state->result_count % ACPI_RESULTS_FRAME_OBJ_NUM;
state->results.obj_desc[index] = object;
walk_state->result_count++;
ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Obj=%p [%s] State=%p Num=%X Cur=%X\n",
object,
object ?
acpi_ut_get_object_type_name((union
acpi_operand_object *)
object) : "NULL",
walk_state, state->results.num_results,
object), walk_state,
walk_state->result_count,
walk_state->current_result));
return (AE_OK);
@ -322,16 +208,25 @@ acpi_ds_result_push(union acpi_operand_object * object,
*
* RETURN: Status
*
* DESCRIPTION: Push an object onto the walk_state result stack.
* DESCRIPTION: Push an object onto the walk_state result stack
*
******************************************************************************/
acpi_status acpi_ds_result_stack_push(struct acpi_walk_state * walk_state)
static acpi_status acpi_ds_result_stack_push(struct acpi_walk_state *walk_state)
{
union acpi_generic_state *state;
ACPI_FUNCTION_NAME(ds_result_stack_push);
/* Check for stack overflow */
if (((u32) walk_state->result_size + ACPI_RESULTS_FRAME_OBJ_NUM) >
ACPI_RESULTS_OBJ_NUM_MAX) {
ACPI_ERROR((AE_INFO, "Result stack overflow: State=%p Num=%X",
walk_state, walk_state->result_size));
return (AE_STACK_OVERFLOW);
}
state = acpi_ut_create_generic_state();
if (!state) {
return (AE_NO_MEMORY);
@ -340,6 +235,10 @@ acpi_status acpi_ds_result_stack_push(struct acpi_walk_state * walk_state)
state->common.descriptor_type = ACPI_DESC_TYPE_STATE_RESULT;
acpi_ut_push_generic_state(&walk_state->results, state);
/* Increase the length of the result stack by the length of frame */
walk_state->result_size += ACPI_RESULTS_FRAME_OBJ_NUM;
ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Results=%p State=%p\n",
state, walk_state));
@ -354,11 +253,11 @@ acpi_status acpi_ds_result_stack_push(struct acpi_walk_state * walk_state)
*
* RETURN: Status
*
* DESCRIPTION: Pop an object off of the walk_state result stack.
* DESCRIPTION: Pop an object off of the walk_state result stack
*
******************************************************************************/
acpi_status acpi_ds_result_stack_pop(struct acpi_walk_state * walk_state)
static acpi_status acpi_ds_result_stack_pop(struct acpi_walk_state *walk_state)
{
union acpi_generic_state *state;
@ -367,18 +266,27 @@ acpi_status acpi_ds_result_stack_pop(struct acpi_walk_state * walk_state)
/* Check for stack underflow */
if (walk_state->results == NULL) {
ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Underflow - State=%p\n",
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Result stack underflow - State=%p\n",
walk_state));
return (AE_AML_NO_OPERAND);
}
if (walk_state->result_size < ACPI_RESULTS_FRAME_OBJ_NUM) {
ACPI_ERROR((AE_INFO, "Insufficient result stack size"));
return (AE_AML_INTERNAL);
}
state = acpi_ut_pop_generic_state(&walk_state->results);
acpi_ut_delete_generic_state(state);
/* Decrease the length of result stack by the length of frame */
walk_state->result_size -= ACPI_RESULTS_FRAME_OBJ_NUM;
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Result=%p RemainingResults=%X State=%p\n",
state, state->results.num_results, walk_state));
acpi_ut_delete_generic_state(state);
state, walk_state->result_count, walk_state));
return (AE_OK);
}
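
To make the frame arithmetic in the reworked result stack easier to follow, here is a small self-contained userspace sketch of the same idea: results live in fixed-size frames chained as a list, the slot inside the top frame is count % FRAME_OBJ_NUM, and a frame is added or removed as the count crosses a frame boundary. All names and sizes here (FRAME_OBJ_NUM, struct frame, struct walk) are illustrative stand-ins, not the ACPICA definitions.

/* Standalone sketch of a frame-based result stack, mirroring the
 * result_count / result_size / modulo-index scheme used above. */
#include <stdio.h>
#include <stdlib.h>

#define FRAME_OBJ_NUM 8            /* stand-in for ACPI_RESULTS_FRAME_OBJ_NUM */

struct frame {                     /* stand-in for one generic-state frame */
	void *obj[FRAME_OBJ_NUM];
	struct frame *next;
};

struct walk {                      /* stand-in for the walk state */
	struct frame *results;     /* head of the frame list (top frame) */
	unsigned int count;        /* number of results currently pushed */
	unsigned int size;         /* total capacity, in objects */
};

static int push_frame(struct walk *w)
{
	struct frame *f = calloc(1, sizeof(*f));
	if (!f)
		return -1;
	f->next = w->results;
	w->results = f;
	w->size += FRAME_OBJ_NUM;          /* grow capacity by one frame */
	return 0;
}

static void pop_frame(struct walk *w)
{
	struct frame *f = w->results;
	w->results = f->next;
	w->size -= FRAME_OBJ_NUM;          /* shrink capacity by one frame */
	free(f);
}

static int result_push(struct walk *w, void *obj)
{
	if (w->count == w->size && push_frame(w))  /* extend on demand */
		return -1;
	w->results->obj[w->count % FRAME_OBJ_NUM] = obj;
	w->count++;
	return 0;
}

static void *result_pop(struct walk *w)
{
	void *obj;
	unsigned int index;

	if (!w->count)
		return NULL;                       /* stack is empty */
	w->count--;
	index = w->count % FRAME_OBJ_NUM;
	obj = w->results->obj[index];
	w->results->obj[index] = NULL;
	if (index == 0)                            /* top frame is now empty */
		pop_frame(w);
	return obj;
}

int main(void)
{
	struct walk w = { 0 };
	int values[20], i;

	for (i = 0; i < 20; i++) {
		values[i] = i;
		result_push(&w, &values[i]);
	}
	for (i = 0; i < 20; i++)
		printf("%d ", *(int *)result_pop(&w));
	printf("\n");
	return 0;
}

This also shows why the walk state no longer needs a pre-created result frame: the first push allocates one on demand.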
@ -412,9 +320,13 @@ acpi_ds_obj_stack_push(void *object, struct acpi_walk_state * walk_state)
/* Put the object onto the stack */
walk_state->operands[walk_state->num_operands] = object;
walk_state->operands[walk_state->operand_index] = object;
walk_state->num_operands++;
/* For the usual order of filling the operand stack */
walk_state->operand_index++;
ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Obj=%p [%s] State=%p #Ops=%X\n",
object,
acpi_ut_get_object_type_name((union
@ -484,43 +396,36 @@ acpi_ds_obj_stack_pop(u32 pop_count, struct acpi_walk_state * walk_state)
*
******************************************************************************/
acpi_status
void
acpi_ds_obj_stack_pop_and_delete(u32 pop_count,
struct acpi_walk_state * walk_state)
struct acpi_walk_state *walk_state)
{
u32 i;
acpi_native_int i;
union acpi_operand_object *obj_desc;
ACPI_FUNCTION_NAME(ds_obj_stack_pop_and_delete);
for (i = 0; i < pop_count; i++) {
/* Check for stack underflow */
if (pop_count == 0) {
return;
}
for (i = (acpi_native_int) (pop_count - 1); i >= 0; i--) {
if (walk_state->num_operands == 0) {
ACPI_ERROR((AE_INFO,
"Object stack underflow! Count=%X State=%p #Ops=%X",
pop_count, walk_state,
walk_state->num_operands));
return (AE_STACK_UNDERFLOW);
return;
}
/* Pop the stack and delete an object if present in this stack entry */
walk_state->num_operands--;
obj_desc = walk_state->operands[walk_state->num_operands];
obj_desc = walk_state->operands[i];
if (obj_desc) {
acpi_ut_remove_reference(walk_state->
operands[walk_state->
num_operands]);
walk_state->operands[walk_state->num_operands] = NULL;
acpi_ut_remove_reference(walk_state->operands[i]);
walk_state->operands[i] = NULL;
}
}
ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Count=%X State=%p #Ops=%X\n",
pop_count, walk_state, walk_state->num_operands));
return (AE_OK);
}
/*******************************************************************************
@ -560,7 +465,7 @@ struct acpi_walk_state *acpi_ds_get_current_walk_state(struct acpi_thread_state
*
* RETURN: None
*
* DESCRIPTION: Place the Thread state at the head of the state list.
* DESCRIPTION: Place the Thread state at the head of the state list
*
******************************************************************************/
@ -636,7 +541,6 @@ struct acpi_walk_state *acpi_ds_create_walk_state(acpi_owner_id owner_id, union
*thread)
{
struct acpi_walk_state *walk_state;
acpi_status status;
ACPI_FUNCTION_TRACE(ds_create_walk_state);
@ -659,14 +563,6 @@ struct acpi_walk_state *acpi_ds_create_walk_state(acpi_owner_id owner_id, union
acpi_ds_method_data_init(walk_state);
#endif
/* Create an initial result stack entry */
status = acpi_ds_result_stack_push(walk_state);
if (ACPI_FAILURE(status)) {
ACPI_FREE(walk_state);
return_PTR(NULL);
}
/* Put the new state at the head of the walk list */
if (thread) {
@ -860,190 +756,3 @@ void acpi_ds_delete_walk_state(struct acpi_walk_state *walk_state)
ACPI_FREE(walk_state);
return_VOID;
}
#ifdef ACPI_OBSOLETE_FUNCTIONS
/*******************************************************************************
*
* FUNCTION: acpi_ds_result_insert
*
* PARAMETERS: Object - Object to push
* Index - Where to insert the object
* walk_state - Current Walk state
*
* RETURN: Status
*
* DESCRIPTION: Insert an object onto this walk's result stack
*
******************************************************************************/
acpi_status
acpi_ds_result_insert(void *object,
u32 index, struct acpi_walk_state *walk_state)
{
union acpi_generic_state *state;
ACPI_FUNCTION_NAME(ds_result_insert);
state = walk_state->results;
if (!state) {
ACPI_ERROR((AE_INFO, "No result object pushed! State=%p",
walk_state));
return (AE_NOT_EXIST);
}
if (index >= ACPI_OBJ_NUM_OPERANDS) {
ACPI_ERROR((AE_INFO,
"Index out of range: %X Obj=%p State=%p Num=%X",
index, object, walk_state,
state->results.num_results));
return (AE_BAD_PARAMETER);
}
if (!object) {
ACPI_ERROR((AE_INFO,
"Null Object! Index=%X Obj=%p State=%p Num=%X",
index, object, walk_state,
state->results.num_results));
return (AE_BAD_PARAMETER);
}
state->results.obj_desc[index] = object;
state->results.num_results++;
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Obj=%p [%s] State=%p Num=%X Cur=%X\n",
object,
object ?
acpi_ut_get_object_type_name((union
acpi_operand_object *)
object) : "NULL",
walk_state, state->results.num_results,
walk_state->current_result));
return (AE_OK);
}
/*******************************************************************************
*
* FUNCTION: acpi_ds_obj_stack_delete_all
*
* PARAMETERS: walk_state - Current Walk state
*
* RETURN: Status
*
* DESCRIPTION: Clear the object stack by deleting all objects that are on it.
* Should be used with great care, if at all!
*
******************************************************************************/
acpi_status acpi_ds_obj_stack_delete_all(struct acpi_walk_state * walk_state)
{
u32 i;
ACPI_FUNCTION_TRACE_PTR(ds_obj_stack_delete_all, walk_state);
/* The stack size is configurable, but fixed */
for (i = 0; i < ACPI_OBJ_NUM_OPERANDS; i++) {
if (walk_state->operands[i]) {
acpi_ut_remove_reference(walk_state->operands[i]);
walk_state->operands[i] = NULL;
}
}
return_ACPI_STATUS(AE_OK);
}
/*******************************************************************************
*
* FUNCTION: acpi_ds_obj_stack_pop_object
*
* PARAMETERS: Object - Where to return the popped object
* walk_state - Current Walk state
*
* RETURN: Status
*
* DESCRIPTION: Pop this walk's object stack. Objects on the stack are NOT
* deleted by this routine.
*
******************************************************************************/
acpi_status
acpi_ds_obj_stack_pop_object(union acpi_operand_object **object,
struct acpi_walk_state *walk_state)
{
ACPI_FUNCTION_NAME(ds_obj_stack_pop_object);
/* Check for stack underflow */
if (walk_state->num_operands == 0) {
ACPI_ERROR((AE_INFO,
"Missing operand/stack empty! State=%p #Ops=%X",
walk_state, walk_state->num_operands));
*object = NULL;
return (AE_AML_NO_OPERAND);
}
/* Pop the stack */
walk_state->num_operands--;
/* Check for a valid operand */
if (!walk_state->operands[walk_state->num_operands]) {
ACPI_ERROR((AE_INFO,
"Null operand! State=%p #Ops=%X",
walk_state, walk_state->num_operands));
*object = NULL;
return (AE_AML_NO_OPERAND);
}
/* Get operand and set stack entry to null */
*object = walk_state->operands[walk_state->num_operands];
walk_state->operands[walk_state->num_operands] = NULL;
ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Obj=%p [%s] State=%p #Ops=%X\n",
*object, acpi_ut_get_object_type_name(*object),
walk_state, walk_state->num_operands));
return (AE_OK);
}
/*******************************************************************************
*
* FUNCTION: acpi_ds_obj_stack_get_value
*
* PARAMETERS: Index - Stack index whose value is desired. Based
* on the top of the stack (index=0 == top)
* walk_state - Current Walk state
*
* RETURN: Pointer to the requested operand
*
* DESCRIPTION: Retrieve an object from this walk's operand stack. Index must
* be within the range of the current stack pointer.
*
******************************************************************************/
void *acpi_ds_obj_stack_get_value(u32 index, struct acpi_walk_state *walk_state)
{
ACPI_FUNCTION_TRACE_PTR(ds_obj_stack_get_value, walk_state);
/* Can't do it if the stack is empty */
if (walk_state->num_operands == 0) {
return_PTR(NULL);
}
/* or if the index is past the top of the stack */
if (index > (walk_state->num_operands - (u32) 1)) {
return_PTR(NULL);
}
return_PTR(walk_state->
operands[(acpi_native_uint) (walk_state->num_operands - 1) -
index]);
}
#endif

View file

@ -73,38 +73,14 @@ enum ec_event {
#define ACPI_EC_DELAY 500 /* Wait 500ms max. during EC ops */
#define ACPI_EC_UDELAY_GLK 1000 /* Wait 1ms max. to get global lock */
#define ACPI_EC_UDELAY 100 /* Wait 100us before polling EC again */
enum {
EC_FLAGS_WAIT_GPE = 0, /* Don't check status until GPE arrives */
EC_FLAGS_QUERY_PENDING, /* Query is pending */
EC_FLAGS_GPE_MODE, /* Expect GPE to be sent for status change */
EC_FLAGS_NO_ADDRESS_GPE, /* Expect GPE only for non-address event */
EC_FLAGS_ADDRESS, /* Address is being written */
EC_FLAGS_NO_WDATA_GPE, /* Don't expect WDATA GPE event */
EC_FLAGS_WDATA, /* Data is being written */
EC_FLAGS_NO_OBF1_GPE, /* Don't expect GPE before read */
};
static int acpi_ec_remove(struct acpi_device *device, int type);
static int acpi_ec_start(struct acpi_device *device);
static int acpi_ec_stop(struct acpi_device *device, int type);
static int acpi_ec_add(struct acpi_device *device);
static const struct acpi_device_id ec_device_ids[] = {
{"PNP0C09", 0},
{"", 0},
};
static struct acpi_driver acpi_ec_driver = {
.name = "ec",
.class = ACPI_EC_CLASS,
.ids = ec_device_ids,
.ops = {
.add = acpi_ec_add,
.remove = acpi_ec_remove,
.start = acpi_ec_start,
.stop = acpi_ec_stop,
},
EC_FLAGS_NO_GPE, /* Don't use GPE mode */
EC_FLAGS_RESCHEDULE_POLL /* Re-schedule poll */
};
/* If we find an EC via the ECDT, we need to keep a ptr to its context */
@ -129,6 +105,8 @@ static struct acpi_ec {
struct mutex lock;
wait_queue_head_t wait;
struct list_head list;
struct delayed_work work;
atomic_t irq_count;
u8 handlers_installed;
} *boot_ec, *first_ec;
@ -177,65 +155,52 @@ static inline int acpi_ec_check_status(struct acpi_ec *ec, enum ec_event event)
return 0;
}
static void ec_schedule_ec_poll(struct acpi_ec *ec)
{
if (test_bit(EC_FLAGS_RESCHEDULE_POLL, &ec->flags))
schedule_delayed_work(&ec->work,
msecs_to_jiffies(ACPI_EC_DELAY));
}
static void ec_switch_to_poll_mode(struct acpi_ec *ec)
{
set_bit(EC_FLAGS_NO_GPE, &ec->flags);
clear_bit(EC_FLAGS_GPE_MODE, &ec->flags);
acpi_disable_gpe(NULL, ec->gpe, ACPI_NOT_ISR);
set_bit(EC_FLAGS_RESCHEDULE_POLL, &ec->flags);
}
static int acpi_ec_wait(struct acpi_ec *ec, enum ec_event event, int force_poll)
{
int ret = 0;
if (unlikely(event == ACPI_EC_EVENT_OBF_1 &&
test_bit(EC_FLAGS_NO_OBF1_GPE, &ec->flags)))
force_poll = 1;
if (unlikely(test_bit(EC_FLAGS_ADDRESS, &ec->flags) &&
test_bit(EC_FLAGS_NO_ADDRESS_GPE, &ec->flags)))
force_poll = 1;
if (unlikely(test_bit(EC_FLAGS_WDATA, &ec->flags) &&
test_bit(EC_FLAGS_NO_WDATA_GPE, &ec->flags)))
force_poll = 1;
atomic_set(&ec->irq_count, 0);
if (likely(test_bit(EC_FLAGS_GPE_MODE, &ec->flags)) &&
likely(!force_poll)) {
if (wait_event_timeout(ec->wait, acpi_ec_check_status(ec, event),
msecs_to_jiffies(ACPI_EC_DELAY)))
goto end;
return 0;
clear_bit(EC_FLAGS_WAIT_GPE, &ec->flags);
if (acpi_ec_check_status(ec, event)) {
if (event == ACPI_EC_EVENT_OBF_1) {
/* miss OBF_1 GPE, don't expect it */
pr_info(PREFIX "missing OBF confirmation, "
"don't expect it any longer.\n");
set_bit(EC_FLAGS_NO_OBF1_GPE, &ec->flags);
} else if (test_bit(EC_FLAGS_ADDRESS, &ec->flags)) {
/* miss address GPE, don't expect it anymore */
pr_info(PREFIX "missing address confirmation, "
"don't expect it any longer.\n");
set_bit(EC_FLAGS_NO_ADDRESS_GPE, &ec->flags);
} else if (test_bit(EC_FLAGS_WDATA, &ec->flags)) {
/* miss write data GPE, don't expect it */
pr_info(PREFIX "missing write data confirmation, "
"don't expect it any longer.\n");
set_bit(EC_FLAGS_NO_WDATA_GPE, &ec->flags);
} else {
/* missing GPEs, switch back to poll mode */
if (printk_ratelimit())
pr_info(PREFIX "missing confirmations, "
/* missing GPEs, switch back to poll mode */
if (printk_ratelimit())
pr_info(PREFIX "missing confirmations, "
"switch off interrupt mode.\n");
clear_bit(EC_FLAGS_GPE_MODE, &ec->flags);
}
goto end;
ec_switch_to_poll_mode(ec);
ec_schedule_ec_poll(ec);
return 0;
}
} else {
unsigned long delay = jiffies + msecs_to_jiffies(ACPI_EC_DELAY);
clear_bit(EC_FLAGS_WAIT_GPE, &ec->flags);
while (time_before(jiffies, delay)) {
if (acpi_ec_check_status(ec, event))
goto end;
return 0;
udelay(ACPI_EC_UDELAY);
}
}
pr_err(PREFIX "acpi_ec_wait timeout,"
" status = %d, expect_event = %d\n",
acpi_ec_read_status(ec), event);
ret = -ETIME;
end:
clear_bit(EC_FLAGS_ADDRESS, &ec->flags);
return ret;
pr_err(PREFIX "acpi_ec_wait timeout, status = 0x%2.2x, event = %s\n",
acpi_ec_read_status(ec),
(event == ACPI_EC_EVENT_OBF_1) ? "\"b0=1\"" : "\"b1=0\"");
return -ETIME;
}
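
The control flow of acpi_ec_wait above reduces to "sleep on an event with a timeout, or busy-poll the status register against a deadline, and fall back to polling when an expected confirmation never arrives". A rough userspace sketch of the polling branch only, using clock_gettime() and nanosleep() in place of jiffies and udelay(); ec_check_status() is a fake status read, and the 500 ms / 100 us figures mirror ACPI_EC_DELAY and ACPI_EC_UDELAY while everything else is illustrative.

/* Userspace sketch of the poll-mode branch of the wait above:
 * spin on a status check until a ~500 ms deadline expires. */
#include <stdio.h>
#include <time.h>
#include <errno.h>

#define EC_DELAY_MS   500          /* mirrors ACPI_EC_DELAY */
#define EC_UDELAY_US  100          /* mirrors ACPI_EC_UDELAY */

static int ec_check_status(void)
{
	static int calls;
	return ++calls > 1000;     /* pretend the EC becomes ready eventually */
}

static long elapsed_ms(const struct timespec *start)
{
	struct timespec now;
	clock_gettime(CLOCK_MONOTONIC, &now);
	return (now.tv_sec - start->tv_sec) * 1000 +
	       (now.tv_nsec - start->tv_nsec) / 1000000;
}

static int ec_wait_polled(void)
{
	struct timespec start;
	struct timespec pause = { 0, EC_UDELAY_US * 1000 };

	clock_gettime(CLOCK_MONOTONIC, &start);
	while (elapsed_ms(&start) < EC_DELAY_MS) {
		if (ec_check_status())
			return 0;          /* condition met */
		nanosleep(&pause, NULL);   /* stand-in for udelay() */
	}
	return -ETIME;                     /* mirrors the timeout return above */
}

int main(void)
{
	printf("ec_wait_polled() = %d\n", ec_wait_polled());
	return 0;
}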
static int acpi_ec_transaction_unlocked(struct acpi_ec *ec, u8 command,
@ -245,8 +210,8 @@ static int acpi_ec_transaction_unlocked(struct acpi_ec *ec, u8 command,
{
int result = 0;
set_bit(EC_FLAGS_WAIT_GPE, &ec->flags);
acpi_ec_write_cmd(ec, command);
pr_debug(PREFIX "transaction start\n");
acpi_ec_write_cmd(ec, command);
for (; wdata_len > 0; --wdata_len) {
result = acpi_ec_wait(ec, ACPI_EC_EVENT_IBF_0, force_poll);
if (result) {
@ -254,15 +219,11 @@ static int acpi_ec_transaction_unlocked(struct acpi_ec *ec, u8 command,
"write_cmd timeout, command = %d\n", command);
goto end;
}
/* mark the address byte written to EC */
if (rdata_len + wdata_len > 1)
set_bit(EC_FLAGS_ADDRESS, &ec->flags);
set_bit(EC_FLAGS_WAIT_GPE, &ec->flags);
acpi_ec_write_data(ec, *(wdata++));
}
if (!rdata_len) {
set_bit(EC_FLAGS_WDATA, &ec->flags);
result = acpi_ec_wait(ec, ACPI_EC_EVENT_IBF_0, force_poll);
if (result) {
pr_err(PREFIX
@ -527,46 +488,50 @@ static u32 acpi_ec_gpe_handler(void *data)
{
acpi_status status = AE_OK;
struct acpi_ec *ec = data;
u8 state = acpi_ec_read_status(ec);
pr_debug(PREFIX "~~~> interrupt\n");
atomic_inc(&ec->irq_count);
if (atomic_read(&ec->irq_count) > 5) {
pr_err(PREFIX "GPE storm detected, disabling EC GPE\n");
ec_switch_to_poll_mode(ec);
goto end;
}
clear_bit(EC_FLAGS_WAIT_GPE, &ec->flags);
if (test_bit(EC_FLAGS_GPE_MODE, &ec->flags))
wake_up(&ec->wait);
if (acpi_ec_read_status(ec) & ACPI_EC_FLAG_SCI) {
if (state & ACPI_EC_FLAG_SCI) {
if (!test_and_set_bit(EC_FLAGS_QUERY_PENDING, &ec->flags))
status = acpi_os_execute(OSL_EC_BURST_HANDLER,
acpi_ec_gpe_query, ec);
} else if (unlikely(!test_bit(EC_FLAGS_GPE_MODE, &ec->flags))) {
} else if (!test_bit(EC_FLAGS_GPE_MODE, &ec->flags) &&
!test_bit(EC_FLAGS_NO_GPE, &ec->flags) &&
in_interrupt()) {
/* this is non-query, must be confirmation */
if (printk_ratelimit())
pr_info(PREFIX "non-query interrupt received,"
" switching to interrupt mode\n");
set_bit(EC_FLAGS_GPE_MODE, &ec->flags);
clear_bit(EC_FLAGS_RESCHEDULE_POLL, &ec->flags);
}
end:
ec_schedule_ec_poll(ec);
return ACPI_SUCCESS(status) ?
ACPI_INTERRUPT_HANDLED : ACPI_INTERRUPT_NOT_HANDLED;
}
static void do_ec_poll(struct work_struct *work)
{
struct acpi_ec *ec = container_of(work, struct acpi_ec, work.work);
atomic_set(&ec->irq_count, 0);
(void)acpi_ec_gpe_handler(ec);
}
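
The storm-detection logic above amounts to counting interrupts between polls and dropping to poll mode once a threshold is crossed, with the periodic poll resetting the counter. A compact sketch of that state machine, with plain ints standing in for the kernel's atomic counter, bit flags, GPE control and delayed work; nothing here is kernel code.

/* Sketch of the GPE-storm fallback: count interrupts since the last
 * poll and switch the (pretend) EC to poll mode past a threshold. */
#include <stdio.h>

#define STORM_THRESHOLD 5

struct fake_ec {
	int irq_count;         /* interrupts seen since the last poll */
	int gpe_mode;          /* 1 = interrupt driven, 0 = polled */
	int poll_scheduled;    /* stand-in for the delayed work item */
};

static void switch_to_poll_mode(struct fake_ec *ec)
{
	ec->gpe_mode = 0;              /* acpi_disable_gpe() in the driver */
	ec->poll_scheduled = 1;        /* re-arm the periodic poll */
	printf("GPE storm detected, falling back to polling\n");
}

static void gpe_handler(struct fake_ec *ec)
{
	if (++ec->irq_count > STORM_THRESHOLD && ec->gpe_mode)
		switch_to_poll_mode(ec);
	/* ... normal status handling / wakeups would happen here ... */
}

static void ec_poll(struct fake_ec *ec)
{
	ec->irq_count = 0;             /* a poll proves forward progress */
	gpe_handler(ec);               /* run the same handler from the poll */
}

int main(void)
{
	struct fake_ec ec = { .gpe_mode = 1 };
	int i;

	for (i = 0; i < 8; i++)        /* simulate a burst of interrupts */
		gpe_handler(&ec);
	ec_poll(&ec);                  /* the periodic poll resets the count */
	printf("gpe_mode=%d poll_scheduled=%d irq_count=%d\n",
	       ec.gpe_mode, ec.poll_scheduled, ec.irq_count);
	return 0;
}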
/* --------------------------------------------------------------------------
Address Space Management
-------------------------------------------------------------------------- */
static acpi_status
acpi_ec_space_setup(acpi_handle region_handle,
u32 function, void *handler_context, void **return_context)
{
/*
* The EC object is in the handler context and is needed
* when calling the acpi_ec_space_handler.
*/
*return_context = (function != ACPI_REGION_DEACTIVATE) ?
handler_context : NULL;
return AE_OK;
}
static acpi_status
acpi_ec_space_handler(u32 function, acpi_physical_address address,
u32 bits, acpi_integer *value,
@ -704,6 +669,8 @@ static struct acpi_ec *make_acpi_ec(void)
mutex_init(&ec->lock);
init_waitqueue_head(&ec->wait);
INIT_LIST_HEAD(&ec->list);
INIT_DELAYED_WORK_DEFERRABLE(&ec->work, do_ec_poll);
atomic_set(&ec->irq_count, 0);
return ec;
}
@ -736,17 +703,21 @@ ec_parse_device(acpi_handle handle, u32 Level, void *context, void **retval)
status = acpi_evaluate_integer(handle, "_GPE", NULL, &ec->gpe);
if (ACPI_FAILURE(status))
return status;
/* Find and register all query methods */
acpi_walk_namespace(ACPI_TYPE_METHOD, handle, 1,
acpi_ec_register_query_methods, ec, NULL);
/* Use the global lock for all EC transactions? */
acpi_evaluate_integer(handle, "_GLK", NULL, &ec->global_lock);
ec->handle = handle;
return AE_CTRL_TERMINATE;
}
static void ec_poll_stop(struct acpi_ec *ec)
{
clear_bit(EC_FLAGS_RESCHEDULE_POLL, &ec->flags);
cancel_delayed_work(&ec->work);
}
static void ec_remove_handlers(struct acpi_ec *ec)
{
ec_poll_stop(ec);
if (ACPI_FAILURE(acpi_remove_address_space_handler(ec->handle,
ACPI_ADR_SPACE_EC, &acpi_ec_space_handler)))
pr_err(PREFIX "failed to remove space handler\n");
@ -766,31 +737,28 @@ static int acpi_ec_add(struct acpi_device *device)
strcpy(acpi_device_class(device), ACPI_EC_CLASS);
/* Check for boot EC */
if (boot_ec) {
if (boot_ec->handle == device->handle) {
/* Pre-loaded EC from DSDT, just move pointer */
ec = boot_ec;
boot_ec = NULL;
goto end;
} else if (boot_ec->handle == ACPI_ROOT_OBJECT) {
/* ECDT-based EC, time to shut it down */
ec_remove_handlers(boot_ec);
kfree(boot_ec);
first_ec = boot_ec = NULL;
if (boot_ec &&
(boot_ec->handle == device->handle ||
boot_ec->handle == ACPI_ROOT_OBJECT)) {
ec = boot_ec;
boot_ec = NULL;
} else {
ec = make_acpi_ec();
if (!ec)
return -ENOMEM;
if (ec_parse_device(device->handle, 0, ec, NULL) !=
AE_CTRL_TERMINATE) {
kfree(ec);
return -EINVAL;
}
}
ec = make_acpi_ec();
if (!ec)
return -ENOMEM;
if (ec_parse_device(device->handle, 0, ec, NULL) !=
AE_CTRL_TERMINATE) {
kfree(ec);
return -EINVAL;
}
ec->handle = device->handle;
end:
/* Find and register all query methods */
acpi_walk_namespace(ACPI_TYPE_METHOD, ec->handle, 1,
acpi_ec_register_query_methods, ec, NULL);
if (!first_ec)
first_ec = ec;
acpi_driver_data(device) = ec;
@ -865,7 +833,7 @@ static int ec_install_handlers(struct acpi_ec *ec)
status = acpi_install_address_space_handler(ec->handle,
ACPI_ADR_SPACE_EC,
&acpi_ec_space_handler,
&acpi_ec_space_setup, ec);
NULL, ec);
if (ACPI_FAILURE(status)) {
acpi_remove_gpe_handler(NULL, ec->gpe, &acpi_ec_gpe_handler);
return -ENODEV;
@ -892,6 +860,7 @@ static int acpi_ec_start(struct acpi_device *device)
/* EC is fully operational, allow queries */
clear_bit(EC_FLAGS_QUERY_PENDING, &ec->flags);
ec_schedule_ec_poll(ec);
return ret;
}
@ -919,6 +888,11 @@ int __init acpi_boot_ec_enable(void)
return -EFAULT;
}
static const struct acpi_device_id ec_device_ids[] = {
{"PNP0C09", 0},
{"", 0},
};
int __init acpi_ec_ecdt_probe(void)
{
int ret;
@ -939,6 +913,7 @@ int __init acpi_ec_ecdt_probe(void)
boot_ec->data_addr = ecdt_ptr->data.address;
boot_ec->gpe = ecdt_ptr->gpe;
boot_ec->handle = ACPI_ROOT_OBJECT;
acpi_get_handle(ACPI_ROOT_OBJECT, ecdt_ptr->id, &boot_ec->handle);
} else {
/* This workaround is needed only on some broken machines,
* which require early EC, but fail to provide ECDT */
@ -968,6 +943,39 @@ int __init acpi_ec_ecdt_probe(void)
return -ENODEV;
}
static int acpi_ec_suspend(struct acpi_device *device, pm_message_t state)
{
struct acpi_ec *ec = acpi_driver_data(device);
/* Stop using GPE */
set_bit(EC_FLAGS_NO_GPE, &ec->flags);
clear_bit(EC_FLAGS_GPE_MODE, &ec->flags);
acpi_disable_gpe(NULL, ec->gpe, ACPI_NOT_ISR);
return 0;
}
static int acpi_ec_resume(struct acpi_device *device)
{
struct acpi_ec *ec = acpi_driver_data(device);
/* Enable use of GPE back */
clear_bit(EC_FLAGS_NO_GPE, &ec->flags);
acpi_enable_gpe(NULL, ec->gpe, ACPI_NOT_ISR);
return 0;
}
static struct acpi_driver acpi_ec_driver = {
.name = "ec",
.class = ACPI_EC_CLASS,
.ids = ec_device_ids,
.ops = {
.add = acpi_ec_add,
.remove = acpi_ec_remove,
.start = acpi_ec_start,
.stop = acpi_ec_stop,
.suspend = acpi_ec_suspend,
.resume = acpi_ec_resume,
},
};
static int __init acpi_ec_init(void)
{
int result = 0;

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -248,10 +248,6 @@ acpi_status acpi_ev_disable_gpe(struct acpi_gpe_event_info *gpe_event_info)
ACPI_FUNCTION_TRACE(ev_disable_gpe);
if (!(gpe_event_info->flags & ACPI_GPE_ENABLE_MASK)) {
return_ACPI_STATUS(AE_OK);
}
/* Make sure HW enable masks are updated */
status =

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -49,22 +49,7 @@
#define _COMPONENT ACPI_EVENTS
ACPI_MODULE_NAME("evmisc")
/* Names for Notify() values, used for debug output */
#ifdef ACPI_DEBUG_OUTPUT
static const char *acpi_notify_value_names[] = {
"Bus Check",
"Device Check",
"Device Wake",
"Eject Request",
"Device Check Light",
"Frequency Mismatch",
"Bus Mode Mismatch",
"Power Fault"
};
#endif
/* Pointer to FACS needed for the Global Lock */
static struct acpi_table_facs *facs = NULL;
/* Local prototypes */
@ -94,7 +79,6 @@ u8 acpi_ev_is_notify_object(struct acpi_namespace_node *node)
switch (node->type) {
case ACPI_TYPE_DEVICE:
case ACPI_TYPE_PROCESSOR:
case ACPI_TYPE_POWER:
case ACPI_TYPE_THERMAL:
/*
* These are the ONLY objects that can receive ACPI notifications
@ -139,17 +123,9 @@ acpi_ev_queue_notify_request(struct acpi_namespace_node * node,
* initiate soft-off or sleep operation?
*/
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
"Dispatching Notify(%X) on node %p\n", notify_value,
node));
if (notify_value <= 7) {
ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Notify value: %s\n",
acpi_notify_value_names[notify_value]));
} else {
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
"Notify value: 0x%2.2X **Device Specific**\n",
notify_value));
}
"Dispatching Notify on [%4.4s] Node %p Value 0x%2.2X (%s)\n",
acpi_ut_get_node_name(node), node, notify_value,
acpi_ut_get_notify_name(notify_value)));
/* Get the notify object attached to the NS Node */
@ -159,10 +135,12 @@ acpi_ev_queue_notify_request(struct acpi_namespace_node * node,
/* We have the notify object, Get the right handler */
switch (node->type) {
/* Notify allowed only on these types */
case ACPI_TYPE_DEVICE:
case ACPI_TYPE_THERMAL:
case ACPI_TYPE_PROCESSOR:
case ACPI_TYPE_POWER:
if (notify_value <= ACPI_MAX_SYS_NOTIFY) {
handler_obj =
@ -179,8 +157,13 @@ acpi_ev_queue_notify_request(struct acpi_namespace_node * node,
}
}
/* If there is any handler to run, schedule the dispatcher */
/*
* If there is any handler to run, schedule the dispatcher.
* Check for:
* 1) Global system notify handler
* 2) Global device notify handler
* 3) Per-device notify handler
*/
if ((acpi_gbl_system_notify.handler
&& (notify_value <= ACPI_MAX_SYS_NOTIFY))
|| (acpi_gbl_device_notify.handler
@ -190,6 +173,13 @@ acpi_ev_queue_notify_request(struct acpi_namespace_node * node,
return (AE_NO_MEMORY);
}
if (!handler_obj) {
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
"Executing system notify handler for Notify (%4.4s, %X) node %p\n",
acpi_ut_get_node_name(node),
notify_value, node));
}
notify_info->common.descriptor_type =
ACPI_DESC_TYPE_STATE_NOTIFY;
notify_info->notify.node = node;
@ -202,15 +192,12 @@ acpi_ev_queue_notify_request(struct acpi_namespace_node * node,
if (ACPI_FAILURE(status)) {
acpi_ut_delete_generic_state(notify_info);
}
}
if (!handler_obj) {
} else {
/*
* There is no per-device notify handler for this device.
* This may or may not be a problem.
* There is no notify handler (per-device or system) for this device.
*/
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
"No notify handler for Notify(%4.4s, %X) node %p\n",
"No notify handler for Notify (%4.4s, %X) node %p\n",
acpi_ut_get_node_name(node), notify_value,
node));
}
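
The three-way check spelled out in the comment above (global system handler, global device handler, per-device handler) can be condensed into a few lines. In this sketch MAX_SYS_NOTIFY stands in for ACPI_MAX_SYS_NOTIFY (0x7F in ACPICA of this era), plain function pointers stand in for the handler objects, and the actual queuing of the dispatcher is omitted.

/* Sketch of the notify-dispatch decision described above: a notify is
 * queued only if a matching global handler or a per-device handler exists. */
#include <stdio.h>

#define MAX_SYS_NOTIFY 0x7F        /* stand-in for ACPI_MAX_SYS_NOTIFY */

typedef void (*notify_handler)(unsigned int value);

struct notify_state {
	notify_handler global_system;  /* for values 0x00..MAX_SYS_NOTIFY */
	notify_handler global_device;  /* for values above MAX_SYS_NOTIFY */
	notify_handler per_device;     /* handler attached to this node */
};

static int should_dispatch(const struct notify_state *s, unsigned int value)
{
	if (s->global_system && value <= MAX_SYS_NOTIFY)
		return 1;
	if (s->global_device && value > MAX_SYS_NOTIFY)
		return 1;
	return s->per_device != NULL;
}

static void log_only(unsigned int value)
{
	printf("notify 0x%2.2X handled\n", value);
}

int main(void)
{
	struct notify_state s = { .global_system = log_only };

	printf("0x02 -> %s\n", should_dispatch(&s, 0x02) ? "queue" : "drop");
	printf("0x80 -> %s\n", should_dispatch(&s, 0x80) ? "queue" : "drop");
	return 0;
}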
@ -349,9 +336,10 @@ acpi_status acpi_ev_init_global_lock_handler(void)
ACPI_FUNCTION_TRACE(ev_init_global_lock_handler);
status =
acpi_get_table_by_index(ACPI_TABLE_INDEX_FACS,
(struct acpi_table_header **)&facs);
status = acpi_get_table_by_index(ACPI_TABLE_INDEX_FACS,
ACPI_CAST_INDIRECT_PTR(struct
acpi_table_header,
&facs));
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
@ -439,7 +427,8 @@ acpi_status acpi_ev_acquire_global_lock(u16 timeout)
* Only one thread can acquire the GL at a time, the global_lock_mutex
* enforces this. This interface releases the interpreter if we must wait.
*/
status = acpi_ex_system_wait_mutex(acpi_gbl_global_lock_mutex, 0);
status = acpi_ex_system_wait_mutex(
acpi_gbl_global_lock_mutex->mutex.os_mutex, 0);
if (status == AE_TIME) {
if (acpi_ev_global_lock_thread_id == acpi_os_get_thread_id()) {
acpi_ev_global_lock_acquired++;
@ -448,9 +437,9 @@ acpi_status acpi_ev_acquire_global_lock(u16 timeout)
}
if (ACPI_FAILURE(status)) {
status =
acpi_ex_system_wait_mutex(acpi_gbl_global_lock_mutex,
timeout);
status = acpi_ex_system_wait_mutex(
acpi_gbl_global_lock_mutex->mutex.os_mutex,
timeout);
}
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
@ -459,6 +448,19 @@ acpi_status acpi_ev_acquire_global_lock(u16 timeout)
acpi_ev_global_lock_thread_id = acpi_os_get_thread_id();
acpi_ev_global_lock_acquired++;
/*
* Update the global lock handle and check for wraparound. The handle is
* only used for the external global lock interfaces, but it is updated
* here to properly handle the case where a single thread may acquire the
* lock via both the AML and the acpi_acquire_global_lock interfaces. The
* handle is therefore updated on the first acquire from a given thread
* regardless of where the acquisition request originated.
*/
acpi_gbl_global_lock_handle++;
if (acpi_gbl_global_lock_handle == 0) {
acpi_gbl_global_lock_handle = 1;
}
/*
* Make sure that a global lock actually exists. If not, just treat
* the lock as a standard mutex.
@ -555,7 +557,7 @@ acpi_status acpi_ev_release_global_lock(void)
/* Release the local GL mutex */
acpi_ev_global_lock_thread_id = NULL;
acpi_ev_global_lock_acquired = 0;
acpi_os_release_mutex(acpi_gbl_global_lock_mutex);
acpi_os_release_mutex(acpi_gbl_global_lock_mutex->mutex.os_mutex);
return_ACPI_STATUS(status);
}

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -394,7 +394,7 @@ acpi_ev_address_space_dispatch(union acpi_operand_object *region_obj,
ACPI_DEBUG_PRINT((ACPI_DB_OPREGION,
"Handler %p (@%p) Address %8.8X%8.8X [%s]\n",
&region_obj->region.handler->address_space, handler,
ACPI_FORMAT_UINT64(address),
ACPI_FORMAT_NATIVE_UINT(address),
acpi_ut_get_region_name(region_obj->region.
space_id)));

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -6,7 +6,7 @@
******************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -758,6 +758,12 @@ ACPI_EXPORT_SYMBOL(acpi_remove_gpe_handler)
*
* DESCRIPTION: Acquire the ACPI Global Lock
*
* Note: Allows callers with the same thread ID to acquire the global lock
* multiple times. In other words, externally, the behavior of the global lock
* is identical to an AML mutex. On the first acquire, a new handle is
* returned. On any subsequent calls to acquire by the same thread, the same
* handle is returned.
*
******************************************************************************/
acpi_status acpi_acquire_global_lock(u16 timeout, u32 * handle)
{
@ -770,14 +776,19 @@ acpi_status acpi_acquire_global_lock(u16 timeout, u32 * handle)
/* Must lock interpreter to prevent race conditions */
acpi_ex_enter_interpreter();
status = acpi_ev_acquire_global_lock(timeout);
acpi_ex_exit_interpreter();
status = acpi_ex_acquire_mutex_object(timeout,
acpi_gbl_global_lock_mutex,
acpi_os_get_thread_id());
if (ACPI_SUCCESS(status)) {
acpi_gbl_global_lock_handle++;
/* Return the global lock handle (updated in acpi_ev_acquire_global_lock) */
*handle = acpi_gbl_global_lock_handle;
}
acpi_ex_exit_interpreter();
return (status);
}
@ -798,11 +809,11 @@ acpi_status acpi_release_global_lock(u32 handle)
{
acpi_status status;
if (handle != acpi_gbl_global_lock_handle) {
if (!handle || (handle != acpi_gbl_global_lock_handle)) {
return (AE_NOT_ACQUIRED);
}
status = acpi_ev_release_global_lock();
status = acpi_ex_release_mutex_object(acpi_gbl_global_lock_mutex);
return (status);
}
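
The handle bookkeeping above follows a simple counter discipline: bump the handle on the first acquire by a thread, skip zero on wraparound so a valid handle is never 0, and honour a release only for the currently issued handle. This is just an illustration of that discipline in isolation, not the ACPICA locking itself.

/* Sketch of the global-lock handle rules: a monotonically increasing
 * token that skips zero on wraparound; release is rejected for a null
 * or stale handle. */
#include <stdio.h>
#include <stdint.h>

static uint32_t current_handle;    /* 0 means "no handle outstanding" */

static uint32_t issue_handle(void)
{
	current_handle++;
	if (current_handle == 0)   /* wrapped: zero is reserved */
		current_handle = 1;
	return current_handle;
}

static int release_handle(uint32_t handle)
{
	if (!handle || handle != current_handle)
		return -1;         /* AE_NOT_ACQUIRED in the real code */
	return 0;
}

int main(void)
{
	uint32_t h = issue_handle();

	printf("release(0)     -> %d\n", release_handle(0));
	printf("release(h + 1) -> %d\n", release_handle(h + 1));
	printf("release(h)     -> %d\n", release_handle(h));
	return 0;
}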

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -45,7 +45,6 @@
#include <acpi/acinterp.h>
#include <acpi/amlcode.h>
#include <acpi/acnamesp.h>
#include <acpi/acevents.h>
#include <acpi/actables.h>
#include <acpi/acdispat.h>
@ -138,6 +137,14 @@ acpi_ex_load_table_op(struct acpi_walk_state *walk_state,
ACPI_FUNCTION_TRACE(ex_load_table_op);
/* Validate lengths for the signature_string, OEMIDString, OEMtable_iD */
if ((operand[0]->string.length > ACPI_NAME_SIZE) ||
(operand[1]->string.length > ACPI_OEM_ID_SIZE) ||
(operand[2]->string.length > ACPI_OEM_TABLE_ID_SIZE)) {
return_ACPI_STATUS(AE_BAD_PARAMETER);
}
/* Find the ACPI table in the RSDT/XSDT */
status = acpi_tb_find_table(operand[0]->string.pointer,
@ -229,11 +236,18 @@ acpi_ex_load_table_op(struct acpi_walk_state *walk_state,
status = acpi_get_table_by_index(table_index, &table);
if (ACPI_SUCCESS(status)) {
ACPI_INFO((AE_INFO,
"Dynamic OEM Table Load - [%4.4s] OemId [%6.6s] OemTableId [%8.8s]",
"Dynamic OEM Table Load - [%.4s] OemId [%.6s] OemTableId [%.8s]",
table->signature, table->oem_id,
table->oem_table_id));
}
/* Invoke table handler if present */
if (acpi_gbl_table_handler) {
(void)acpi_gbl_table_handler(ACPI_TABLE_EVENT_LOAD, table,
acpi_gbl_table_handler_context);
}
*return_desc = ddb_handle;
return_ACPI_STATUS(status);
}
@ -268,6 +282,7 @@ acpi_ex_load_op(union acpi_operand_object *obj_desc,
struct acpi_table_desc table_desc;
acpi_native_uint table_index;
acpi_status status;
u32 length;
ACPI_FUNCTION_TRACE(ex_load_op);
@ -278,16 +293,16 @@ acpi_ex_load_op(union acpi_operand_object *obj_desc,
switch (ACPI_GET_OBJECT_TYPE(obj_desc)) {
case ACPI_TYPE_REGION:
ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Load from Region %p %s\n",
obj_desc,
acpi_ut_get_object_type_name(obj_desc)));
/* Region must be system_memory (from ACPI spec) */
if (obj_desc->region.space_id != ACPI_ADR_SPACE_SYSTEM_MEMORY) {
return_ACPI_STATUS(AE_AML_OPERAND_TYPE);
}
ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Load from Region %p %s\n",
obj_desc,
acpi_ut_get_object_type_name(obj_desc)));
/*
* If the Region Address and Length have not been previously evaluated,
* evaluate them now and save the results.
@ -299,6 +314,11 @@ acpi_ex_load_op(union acpi_operand_object *obj_desc,
}
}
/*
* We will simply map the memory region for the table. However, the
* memory region is technically not guaranteed to remain stable and
* we may eventually have to copy the table to a local buffer.
*/
table_desc.address = obj_desc->region.address;
table_desc.length = obj_desc->region.length;
table_desc.flags = ACPI_TABLE_ORIGIN_MAPPED;
@ -306,18 +326,41 @@ acpi_ex_load_op(union acpi_operand_object *obj_desc,
case ACPI_TYPE_BUFFER: /* Buffer or resolved region_field */
/* Simply extract the buffer from the buffer object */
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Load from Buffer or Field %p %s\n", obj_desc,
acpi_ut_get_object_type_name(obj_desc)));
table_desc.pointer = ACPI_CAST_PTR(struct acpi_table_header,
obj_desc->buffer.pointer);
table_desc.length = table_desc.pointer->length;
table_desc.flags = ACPI_TABLE_ORIGIN_ALLOCATED;
length = obj_desc->buffer.length;
obj_desc->buffer.pointer = NULL;
/* Must have at least an ACPI table header */
if (length < sizeof(struct acpi_table_header)) {
return_ACPI_STATUS(AE_INVALID_TABLE_LENGTH);
}
/* Validate checksum here. It won't get validated in tb_add_table */
status =
acpi_tb_verify_checksum(ACPI_CAST_PTR
(struct acpi_table_header,
obj_desc->buffer.pointer), length);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
/*
* We need to copy the buffer since the original buffer could be
* changed or deleted in the future
*/
table_desc.pointer = ACPI_ALLOCATE(length);
if (!table_desc.pointer) {
return_ACPI_STATUS(AE_NO_MEMORY);
}
ACPI_MEMCPY(table_desc.pointer, obj_desc->buffer.pointer,
length);
table_desc.length = length;
table_desc.flags = ACPI_TABLE_ORIGIN_ALLOCATED;
break;
default:
@ -333,7 +376,8 @@ acpi_ex_load_op(union acpi_operand_object *obj_desc,
}
status =
acpi_ex_add_table(table_index, acpi_gbl_root_node, &ddb_handle);
acpi_ex_add_table(table_index, walk_state->scope_info->scope.node,
&ddb_handle);
if (ACPI_FAILURE(status)) {
/* On error, table_ptr was deallocated above */
@ -349,11 +393,23 @@ acpi_ex_load_op(union acpi_operand_object *obj_desc,
/* table_ptr was deallocated above */
acpi_ut_remove_reference(ddb_handle);
return_ACPI_STATUS(status);
}
/* Invoke table handler if present */
if (acpi_gbl_table_handler) {
(void)acpi_gbl_table_handler(ACPI_TABLE_EVENT_LOAD,
table_desc.pointer,
acpi_gbl_table_handler_context);
}
cleanup:
if (ACPI_FAILURE(status)) {
/* Delete allocated buffer or mapping */
acpi_tb_delete_table(&table_desc);
}
return_ACPI_STATUS(status);
@ -376,6 +432,7 @@ acpi_status acpi_ex_unload_table(union acpi_operand_object *ddb_handle)
acpi_status status = AE_OK;
union acpi_operand_object *table_desc = ddb_handle;
acpi_native_uint table_index;
struct acpi_table_header *table;
ACPI_FUNCTION_TRACE(ex_unload_table);
@ -395,17 +452,25 @@ acpi_status acpi_ex_unload_table(union acpi_operand_object *ddb_handle)
table_index = (acpi_native_uint) table_desc->reference.object;
/* Invoke table handler if present */
if (acpi_gbl_table_handler) {
status = acpi_get_table_by_index(table_index, &table);
if (ACPI_SUCCESS(status)) {
(void)acpi_gbl_table_handler(ACPI_TABLE_EVENT_UNLOAD,
table,
acpi_gbl_table_handler_context);
}
}
/*
* Delete the entire namespace under this table Node
* (Offset contains the table_id)
*/
acpi_tb_delete_namespace_by_owner(table_index);
acpi_tb_release_owner_id(table_index);
(void)acpi_tb_release_owner_id(table_index);
acpi_tb_set_table_loaded_flag(table_index, FALSE);
/* Delete the table descriptor (ddb_handle) */
acpi_ut_remove_reference(table_desc);
return_ACPI_STATUS(status);
return_ACPI_STATUS(AE_OK);
}
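
The buffer branch of the Load operator above follows a validate-then-copy pattern: reject anything shorter than a table header, check that the whole table's bytes sum to zero modulo 256 (the ACPI checksum rule), and only then duplicate the buffer so later changes to the AML buffer cannot corrupt the loaded table. Below is a small userspace sketch of that pattern; the header struct is trimmed to the fields the sketch uses and is not the full 36-byte ACPI header.

/* Sketch of the validate-then-copy step used when loading a table from
 * a buffer: length check, whole-table checksum, then a private copy. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

struct table_header {              /* simplified ACPI table header */
	char     signature[4];
	uint32_t length;           /* length of the whole table, in bytes */
	uint8_t  revision;
	uint8_t  checksum;         /* makes the byte sum 0 mod 256 */
};

static int verify_checksum(const uint8_t *buf, uint32_t len)
{
	uint8_t sum = 0;
	uint32_t i;

	for (i = 0; i < len; i++)
		sum += buf[i];
	return sum == 0 ? 0 : -1;
}

static void *load_table_from_buffer(const void *buf, size_t buf_len)
{
	const struct table_header *hdr = buf;
	void *copy;

	if (buf_len < sizeof(*hdr) || hdr->length > buf_len)
		return NULL;                       /* too short / inconsistent */
	if (verify_checksum(buf, hdr->length))
		return NULL;                       /* bad checksum */

	copy = malloc(hdr->length);                /* own the table from now on */
	if (copy)
		memcpy(copy, buf, hdr->length);
	return copy;
}

int main(void)
{
	struct table_header hdr;
	uint8_t *raw = (uint8_t *)&hdr;
	uint8_t sum = 0;
	size_t i;
	void *t;

	memset(&hdr, 0, sizeof(hdr));
	memcpy(hdr.signature, "SSDT", 4);
	hdr.length = sizeof(hdr);
	hdr.revision = 1;
	for (i = 0; i < sizeof(hdr); i++)          /* make the bytes sum to zero */
		sum += raw[i];
	hdr.checksum = (uint8_t)(0x100 - sum);

	t = load_table_from_buffer(&hdr, sizeof(hdr));
	printf("table %s\n", t ? "accepted" : "rejected");
	free(t);
	return 0;
}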

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -96,16 +96,28 @@ acpi_status acpi_ex_create_alias(struct acpi_walk_state *walk_state)
* to the original Node.
*/
switch (target_node->type) {
/* For these types, the sub-object can change dynamically via a Store */
case ACPI_TYPE_INTEGER:
case ACPI_TYPE_STRING:
case ACPI_TYPE_BUFFER:
case ACPI_TYPE_PACKAGE:
case ACPI_TYPE_BUFFER_FIELD:
/*
* These types open a new scope, so we need the NS node in order to access
* any children.
*/
case ACPI_TYPE_DEVICE:
case ACPI_TYPE_POWER:
case ACPI_TYPE_PROCESSOR:
case ACPI_TYPE_THERMAL:
case ACPI_TYPE_LOCAL_SCOPE:
/*
* The new alias has the type ALIAS and points to the original
* NS node, not the object itself. This is because for these
* types, the object can change dynamically via a Store.
* NS node, not the object itself.
*/
alias_node->type = ACPI_TYPE_LOCAL_ALIAS;
alias_node->object =
@ -115,9 +127,7 @@ acpi_status acpi_ex_create_alias(struct acpi_walk_state *walk_state)
case ACPI_TYPE_METHOD:
/*
* The new alias has the type ALIAS and points to the original
* NS node, not the object itself. This is because for these
* types, the object can change dynamically via a Store.
* Control method aliases need to be differentiated
*/
alias_node->type = ACPI_TYPE_LOCAL_METHOD_ALIAS;
alias_node->object =
@ -340,101 +350,6 @@ acpi_ex_create_region(u8 * aml_start,
return_ACPI_STATUS(status);
}
/*******************************************************************************
*
* FUNCTION: acpi_ex_create_table_region
*
* PARAMETERS: walk_state - Current state
*
* RETURN: Status
*
* DESCRIPTION: Create a new data_table_region object
*
******************************************************************************/
acpi_status acpi_ex_create_table_region(struct acpi_walk_state *walk_state)
{
acpi_status status;
union acpi_operand_object **operand = &walk_state->operands[0];
union acpi_operand_object *obj_desc;
struct acpi_namespace_node *node;
union acpi_operand_object *region_obj2;
acpi_native_uint table_index;
struct acpi_table_header *table;
ACPI_FUNCTION_TRACE(ex_create_table_region);
/* Get the Node from the object stack */
node = walk_state->op->common.node;
/*
* If the region object is already attached to this node,
* just return
*/
if (acpi_ns_get_attached_object(node)) {
return_ACPI_STATUS(AE_OK);
}
/* Find the ACPI table */
status = acpi_tb_find_table(operand[1]->string.pointer,
operand[2]->string.pointer,
operand[3]->string.pointer, &table_index);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
/* Create the region descriptor */
obj_desc = acpi_ut_create_internal_object(ACPI_TYPE_REGION);
if (!obj_desc) {
return_ACPI_STATUS(AE_NO_MEMORY);
}
region_obj2 = obj_desc->common.next_object;
region_obj2->extra.region_context = NULL;
status = acpi_get_table_by_index(table_index, &table);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
/* Init the region from the operands */
obj_desc->region.space_id = REGION_DATA_TABLE;
obj_desc->region.address =
(acpi_physical_address) ACPI_TO_INTEGER(table);
obj_desc->region.length = table->length;
obj_desc->region.node = node;
obj_desc->region.flags = AOPOBJ_DATA_VALID;
/* Install the new region object in the parent Node */
status = acpi_ns_attach_object(node, obj_desc, ACPI_TYPE_REGION);
if (ACPI_FAILURE(status)) {
goto cleanup;
}
status = acpi_ev_initialize_region(obj_desc, FALSE);
if (ACPI_FAILURE(status)) {
if (status == AE_NOT_EXIST) {
status = AE_OK;
} else {
goto cleanup;
}
}
obj_desc->region.flags |= AOPOBJ_SETUP_COMPLETE;
cleanup:
/* Remove local reference to the object */
acpi_ut_remove_reference(obj_desc);
return_ACPI_STATUS(status);
}
/*******************************************************************************
*
* FUNCTION: acpi_ex_create_processor

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -500,25 +500,28 @@ void acpi_ex_dump_operand(union acpi_operand_object *obj_desc, u32 depth)
acpi_os_printf("Reference: Debug\n");
break;
case AML_NAME_OP:
ACPI_DUMP_PATHNAME(obj_desc->reference.object,
"Reference: Name: ", ACPI_LV_INFO,
_COMPONENT);
ACPI_DUMP_ENTRY(obj_desc->reference.object,
ACPI_LV_INFO);
break;
case AML_INDEX_OP:
acpi_os_printf("Reference: Index %p\n",
obj_desc->reference.object);
break;
case AML_LOAD_OP:
acpi_os_printf("Reference: [DdbHandle] TableIndex %p\n",
obj_desc->reference.object);
break;
case AML_REF_OF_OP:
acpi_os_printf("Reference: (RefOf) %p\n",
obj_desc->reference.object);
acpi_os_printf("Reference: (RefOf) %p [%s]\n",
obj_desc->reference.object,
acpi_ut_get_type_name(((union
acpi_operand_object
*)obj_desc->
reference.
object)->common.
type));
break;
case AML_ARG_OP:
@ -559,8 +562,9 @@ void acpi_ex_dump_operand(union acpi_operand_object *obj_desc, u32 depth)
case AML_INT_NAMEPATH_OP:
acpi_os_printf("Reference.Node->Name %X\n",
obj_desc->reference.node->name.integer);
acpi_os_printf("Reference: Namepath %X [%4.4s]\n",
obj_desc->reference.node->name.integer,
obj_desc->reference.node->name.ascii);
break;
default:
@ -640,8 +644,8 @@ void acpi_ex_dump_operand(union acpi_operand_object *obj_desc, u32 depth)
acpi_os_printf("\n");
} else {
acpi_os_printf(" base %8.8X%8.8X Length %X\n",
ACPI_FORMAT_UINT64(obj_desc->region.
address),
ACPI_FORMAT_NATIVE_UINT(obj_desc->region.
address),
obj_desc->region.length);
}
break;
@ -877,20 +881,43 @@ static void acpi_ex_dump_reference_obj(union acpi_operand_object *obj_desc)
ret_buf.length = ACPI_ALLOCATE_LOCAL_BUFFER;
if (obj_desc->reference.opcode == AML_INT_NAMEPATH_OP) {
acpi_os_printf("Named Object %p ", obj_desc->reference.node);
acpi_os_printf(" Named Object %p ", obj_desc->reference.node);
status =
acpi_ns_handle_to_pathname(obj_desc->reference.node,
&ret_buf);
if (ACPI_FAILURE(status)) {
acpi_os_printf("Could not convert name to pathname\n");
acpi_os_printf(" Could not convert name to pathname\n");
} else {
acpi_os_printf("%s\n", (char *)ret_buf.pointer);
ACPI_FREE(ret_buf.pointer);
}
} else if (obj_desc->reference.object) {
acpi_os_printf("\nReferenced Object: %p\n",
obj_desc->reference.object);
if (ACPI_GET_DESCRIPTOR_TYPE(obj_desc) ==
ACPI_DESC_TYPE_OPERAND) {
acpi_os_printf(" Target: %p",
obj_desc->reference.object);
if (obj_desc->reference.opcode == AML_LOAD_OP) {
/*
* For DDBHandle reference,
* obj_desc->Reference.Object is the table index
*/
acpi_os_printf(" [DDBHandle]\n");
} else {
acpi_os_printf(" [%s]\n",
acpi_ut_get_type_name(((union
acpi_operand_object
*)
obj_desc->
reference.
object)->
common.
type));
}
} else {
acpi_os_printf(" Target: %p\n",
obj_desc->reference.object);
}
}
}
@ -976,7 +1003,9 @@ acpi_ex_dump_package_obj(union acpi_operand_object *obj_desc,
case ACPI_TYPE_LOCAL_REFERENCE:
acpi_os_printf("[Object Reference] ");
acpi_os_printf("[Object Reference] %s",
(acpi_ps_get_opcode_info
(obj_desc->reference.opcode))->name);
acpi_ex_dump_reference_obj(obj_desc);
break;

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -71,7 +71,6 @@ acpi_ex_read_data_from_field(struct acpi_walk_state *walk_state,
union acpi_operand_object *buffer_desc;
acpi_size length;
void *buffer;
u8 locked;
ACPI_FUNCTION_TRACE_PTR(ex_read_data_from_field, obj_desc);
@ -111,9 +110,7 @@ acpi_ex_read_data_from_field(struct acpi_walk_state *walk_state,
/* Lock entire transaction if requested */
locked =
acpi_ex_acquire_global_lock(obj_desc->common_field.
field_flags);
acpi_ex_acquire_global_lock(obj_desc->common_field.field_flags);
/*
* Perform the read.
@ -125,7 +122,7 @@ acpi_ex_read_data_from_field(struct acpi_walk_state *walk_state,
buffer.pointer),
ACPI_READ | (obj_desc->field.
attribute << 16));
acpi_ex_release_global_lock(locked);
acpi_ex_release_global_lock(obj_desc->common_field.field_flags);
goto exit;
}
@ -175,13 +172,12 @@ acpi_ex_read_data_from_field(struct acpi_walk_state *walk_state,
/* Lock entire transaction if requested */
locked =
acpi_ex_acquire_global_lock(obj_desc->common_field.field_flags);
acpi_ex_acquire_global_lock(obj_desc->common_field.field_flags);
/* Read from the field */
status = acpi_ex_extract_from_field(obj_desc, buffer, (u32) length);
acpi_ex_release_global_lock(locked);
acpi_ex_release_global_lock(obj_desc->common_field.field_flags);
exit:
if (ACPI_FAILURE(status)) {
@ -214,10 +210,7 @@ acpi_ex_write_data_to_field(union acpi_operand_object *source_desc,
{
acpi_status status;
u32 length;
u32 required_length;
void *buffer;
void *new_buffer;
u8 locked;
union acpi_operand_object *buffer_desc;
ACPI_FUNCTION_TRACE_PTR(ex_write_data_to_field, obj_desc);
@ -278,9 +271,7 @@ acpi_ex_write_data_to_field(union acpi_operand_object *source_desc,
/* Lock entire transaction if requested */
locked =
acpi_ex_acquire_global_lock(obj_desc->common_field.
field_flags);
acpi_ex_acquire_global_lock(obj_desc->common_field.field_flags);
/*
* Perform the write (returns status and perhaps data in the
@ -291,7 +282,7 @@ acpi_ex_write_data_to_field(union acpi_operand_object *source_desc,
(acpi_integer *) buffer,
ACPI_WRITE | (obj_desc->field.
attribute << 16));
acpi_ex_release_global_lock(locked);
acpi_ex_release_global_lock(obj_desc->common_field.field_flags);
*result_desc = buffer_desc;
return_ACPI_STATUS(status);
@ -319,35 +310,6 @@ acpi_ex_write_data_to_field(union acpi_operand_object *source_desc,
return_ACPI_STATUS(AE_AML_OPERAND_TYPE);
}
/*
* We must have a buffer that is at least as long as the field
* we are writing to. This is because individual fields are
* indivisible and partial writes are not supported -- as per
* the ACPI specification.
*/
new_buffer = NULL;
required_length =
ACPI_ROUND_BITS_UP_TO_BYTES(obj_desc->common_field.bit_length);
if (length < required_length) {
/* We need to create a new buffer */
new_buffer = ACPI_ALLOCATE_ZEROED(required_length);
if (!new_buffer) {
return_ACPI_STATUS(AE_NO_MEMORY);
}
/*
* Copy the original data to the new buffer, starting
* at Byte zero. All unused (upper) bytes of the
* buffer will be 0.
*/
ACPI_MEMCPY((char *)new_buffer, (char *)buffer, length);
buffer = new_buffer;
length = required_length;
}
ACPI_DEBUG_PRINT((ACPI_DB_BFIELD,
"FieldWrite [FROM]: Obj %p (%s:%X), Buf %p, ByteLen %X\n",
source_desc,
@ -366,19 +328,12 @@ acpi_ex_write_data_to_field(union acpi_operand_object *source_desc,
/* Lock entire transaction if requested */
locked =
acpi_ex_acquire_global_lock(obj_desc->common_field.field_flags);
acpi_ex_acquire_global_lock(obj_desc->common_field.field_flags);
/* Write to the field */
status = acpi_ex_insert_into_field(obj_desc, buffer, length);
acpi_ex_release_global_lock(locked);
/* Free temporary buffer if we used one */
if (new_buffer) {
ACPI_FREE(new_buffer);
}
acpi_ex_release_global_lock(obj_desc->common_field.field_flags);
return_ACPI_STATUS(status);
}

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -263,7 +263,8 @@ acpi_ex_access_region(union acpi_operand_object *obj_desc,
rgn_desc->region.space_id,
obj_desc->common_field.access_byte_width,
obj_desc->common_field.base_byte_offset,
field_datum_byte_offset, (void *)address));
field_datum_byte_offset, ACPI_CAST_PTR(void,
address)));
/* Invoke the appropriate address_space/op_region handler */
@ -805,18 +806,39 @@ acpi_ex_insert_into_field(union acpi_operand_object *obj_desc,
u32 datum_count;
u32 field_datum_count;
u32 i;
u32 required_length;
void *new_buffer;
ACPI_FUNCTION_TRACE(ex_insert_into_field);
/* Validate input buffer */
if (buffer_length <
ACPI_ROUND_BITS_UP_TO_BYTES(obj_desc->common_field.bit_length)) {
ACPI_ERROR((AE_INFO,
"Field size %X (bits) is too large for buffer (%X)",
obj_desc->common_field.bit_length, buffer_length));
new_buffer = NULL;
required_length =
ACPI_ROUND_BITS_UP_TO_BYTES(obj_desc->common_field.bit_length);
/*
* We must have a buffer that is at least as long as the field
* we are writing to. This is because individual fields are
* indivisible and partial writes are not supported -- as per
* the ACPI specification.
*/
if (buffer_length < required_length) {
return_ACPI_STATUS(AE_BUFFER_OVERFLOW);
/* We need to create a new buffer */
new_buffer = ACPI_ALLOCATE_ZEROED(required_length);
if (!new_buffer) {
return_ACPI_STATUS(AE_NO_MEMORY);
}
/*
* Copy the original data to the new buffer, starting
* at Byte zero. All unused (upper) bytes of the
* buffer will be 0.
*/
ACPI_MEMCPY((char *)new_buffer, (char *)buffer, buffer_length);
buffer = new_buffer;
buffer_length = required_length;
}
/*
@ -866,7 +888,7 @@ acpi_ex_insert_into_field(union acpi_operand_object *obj_desc,
merged_datum,
field_offset);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
goto exit;
}
field_offset += obj_desc->common_field.access_byte_width;
@ -924,5 +946,11 @@ acpi_ex_insert_into_field(union acpi_operand_object *obj_desc,
mask, merged_datum,
field_offset);
exit:
/* Free temporary buffer if we used one */
if (new_buffer) {
ACPI_FREE(new_buffer);
}
return_ACPI_STATUS(status);
}
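
Side note on the hunk above: the short-buffer handling that used to live in acpi_ex_write_data_to_field now sits inside acpi_ex_insert_into_field. A caller buffer smaller than the field is copied into a freshly zeroed buffer of the field's rounded-up byte length, and the temporary buffer is freed at the exit label, so a partial write never reaches the (indivisible) field. A minimal standalone sketch of the same pattern, plain C with invented names, not the ACPICA code itself:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Round a bit length up to whole bytes, like ACPI_ROUND_BITS_UP_TO_BYTES. */
static size_t round_bits_up_to_bytes(size_t bit_length)
{
        return (bit_length + 7) / 8;
}

/*
 * Write 'length' bytes into a field of 'field_bit_length' bits.
 * If the source is shorter than the field, pad with zero bytes so the
 * write always covers the whole field.
 */
static int insert_into_field(const unsigned char *buffer, size_t length,
                             size_t field_bit_length)
{
        unsigned char *new_buffer = NULL;
        size_t required_length = round_bits_up_to_bytes(field_bit_length);

        if (length < required_length) {
                new_buffer = calloc(1, required_length); /* zeroed upper bytes */
                if (!new_buffer)
                        return -1;
                memcpy(new_buffer, buffer, length);
                buffer = new_buffer;
                length = required_length;
        }

        /* ... perform the datum-by-datum field write here ... */
        printf("writing %zu byte(s)\n", length);

        /* Free the temporary buffer if one was used */
        free(new_buffer);
        return 0;
}

int main(void)
{
        unsigned char data[2] = { 0xAB, 0xCD };

        return insert_into_field(data, sizeof(data), 29); /* 29-bit field -> 4 bytes */
}
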

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -124,6 +124,79 @@ acpi_ex_link_mutex(union acpi_operand_object *obj_desc,
thread->acquired_mutex_list = obj_desc;
}
/*******************************************************************************
*
* FUNCTION: acpi_ex_acquire_mutex_object
*
* PARAMETERS: time_desc - Timeout in milliseconds
* obj_desc - Mutex object
* Thread - Current thread state
*
* RETURN: Status
*
* DESCRIPTION: Acquire an AML mutex, low-level interface. Provides a common
* path that supports multiple acquires by the same thread.
*
* MUTEX: Interpreter must be locked
*
* NOTE: This interface is called from three places:
* 1) From acpi_ex_acquire_mutex, via an AML Acquire() operator
* 2) From acpi_ex_acquire_global_lock when an AML Field access requires the
* global lock
* 3) From the external interface, acpi_acquire_global_lock
*
******************************************************************************/
acpi_status
acpi_ex_acquire_mutex_object(u16 timeout,
union acpi_operand_object *obj_desc,
acpi_thread_id thread_id)
{
acpi_status status;
ACPI_FUNCTION_TRACE_PTR(ex_acquire_mutex_object, obj_desc);
if (!obj_desc) {
return_ACPI_STATUS(AE_BAD_PARAMETER);
}
/* Support for multiple acquires by the owning thread */
if (obj_desc->mutex.thread_id == thread_id) {
/*
* The mutex is already owned by this thread, just increment the
* acquisition depth
*/
obj_desc->mutex.acquisition_depth++;
return_ACPI_STATUS(AE_OK);
}
/* Acquire the mutex, wait if necessary. Special case for Global Lock */
if (obj_desc == acpi_gbl_global_lock_mutex) {
status = acpi_ev_acquire_global_lock(timeout);
} else {
status = acpi_ex_system_wait_mutex(obj_desc->mutex.os_mutex,
timeout);
}
if (ACPI_FAILURE(status)) {
/* Includes failure from a timeout on time_desc */
return_ACPI_STATUS(status);
}
/* Acquired the mutex: update mutex object */
obj_desc->mutex.thread_id = thread_id;
obj_desc->mutex.acquisition_depth = 1;
obj_desc->mutex.original_sync_level = 0;
obj_desc->mutex.owner_thread = NULL; /* Used only for AML Acquire() */
return_ACPI_STATUS(AE_OK);
}
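
As an aside, the acquisition-depth bookkeeping that acpi_ex_acquire_mutex_object introduces above (re-acquires by the owning thread just bump a counter; only the first acquire waits on the OS mutex) is the classic way to layer a recursive mutex on top of a non-recursive one. A toy sketch of that pattern using pthreads, with hypothetical names and none of the ACPICA object layout (compile with -pthread):

#include <pthread.h>
#include <stdio.h>

struct aml_mutex {
        pthread_mutex_t os_mutex;
        pthread_t       owner;            /* valid only while depth > 0 */
        unsigned int    acquisition_depth;
};

/*
 * Acquire, allowing the same thread to re-acquire without blocking.
 * In ACPICA this runs with the interpreter lock held, which serializes the
 * owner/depth check; a general-purpose recursive mutex would need more care.
 */
static int aml_mutex_acquire(struct aml_mutex *m)
{
        if (m->acquisition_depth && pthread_equal(m->owner, pthread_self())) {
                m->acquisition_depth++;   /* nested acquire by the owner */
                return 0;
        }
        if (pthread_mutex_lock(&m->os_mutex))
                return -1;
        m->owner = pthread_self();
        m->acquisition_depth = 1;
        return 0;
}

/* Release; the OS mutex is dropped only when the depth reaches zero. */
static int aml_mutex_release(struct aml_mutex *m)
{
        if (m->acquisition_depth == 0)
                return -1;                /* not acquired */
        if (--m->acquisition_depth)
                return 0;                 /* matched a nested acquire */
        return pthread_mutex_unlock(&m->os_mutex);
}

int main(void)
{
        struct aml_mutex m = { .os_mutex = PTHREAD_MUTEX_INITIALIZER };

        aml_mutex_acquire(&m);
        aml_mutex_acquire(&m);            /* nested, just bumps the depth */
        aml_mutex_release(&m);
        aml_mutex_release(&m);            /* final release unlocks */
        printf("depth now %u\n", m.acquisition_depth);
        return 0;
}
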
/*******************************************************************************
*
* FUNCTION: acpi_ex_acquire_mutex
@ -151,7 +224,7 @@ acpi_ex_acquire_mutex(union acpi_operand_object *time_desc,
return_ACPI_STATUS(AE_BAD_PARAMETER);
}
/* Sanity check: we must have a valid thread ID */
/* Must have a valid thread ID */
if (!walk_state->thread) {
ACPI_ERROR((AE_INFO,
@ -161,7 +234,7 @@ acpi_ex_acquire_mutex(union acpi_operand_object *time_desc,
}
/*
* Current Sync must be less than or equal to the sync level of the
* Current sync level must be less than or equal to the sync level of the
* mutex. This mechanism provides some deadlock prevention
*/
if (walk_state->thread->current_sync_level > obj_desc->mutex.sync_level) {
@ -172,51 +245,89 @@ acpi_ex_acquire_mutex(union acpi_operand_object *time_desc,
return_ACPI_STATUS(AE_AML_MUTEX_ORDER);
}
/* Support for multiple acquires by the owning thread */
status = acpi_ex_acquire_mutex_object((u16) time_desc->integer.value,
obj_desc,
walk_state->thread->thread_id);
if (ACPI_SUCCESS(status) && obj_desc->mutex.acquisition_depth == 1) {
/* Save Thread object, original/current sync levels */
obj_desc->mutex.owner_thread = walk_state->thread;
obj_desc->mutex.original_sync_level =
walk_state->thread->current_sync_level;
walk_state->thread->current_sync_level =
obj_desc->mutex.sync_level;
/* Link the mutex to the current thread for force-unlock at method exit */
acpi_ex_link_mutex(obj_desc, walk_state->thread);
}
return_ACPI_STATUS(status);
}
/*******************************************************************************
*
* FUNCTION: acpi_ex_release_mutex_object
*
* PARAMETERS: obj_desc - The object descriptor for this op
*
* RETURN: Status
*
* DESCRIPTION: Release a previously acquired Mutex, low level interface.
* Provides a common path that supports multiple releases (after
* previous multiple acquires) by the same thread.
*
* MUTEX: Interpreter must be locked
*
* NOTE: This interface is called from three places:
* 1) From acpi_ex_release_mutex, via an AML Release() operator
* 2) From acpi_ex_release_global_lock when an AML Field access requires the
* global lock
* 3) From the external interface, acpi_release_global_lock
*
******************************************************************************/
acpi_status acpi_ex_release_mutex_object(union acpi_operand_object *obj_desc)
{
acpi_status status = AE_OK;
ACPI_FUNCTION_TRACE(ex_release_mutex_object);
if (obj_desc->mutex.acquisition_depth == 0) {
return (AE_NOT_ACQUIRED);
}
/* Match multiple Acquires with multiple Releases */
obj_desc->mutex.acquisition_depth--;
if (obj_desc->mutex.acquisition_depth != 0) {
/* Just decrement the depth and return */
return_ACPI_STATUS(AE_OK);
}
if (obj_desc->mutex.owner_thread) {
if (obj_desc->mutex.owner_thread->thread_id ==
walk_state->thread->thread_id) {
/*
* The mutex is already owned by this thread, just increment the
* acquisition depth
*/
obj_desc->mutex.acquisition_depth++;
return_ACPI_STATUS(AE_OK);
}
/* Unlink the mutex from the owner's list */
acpi_ex_unlink_mutex(obj_desc);
obj_desc->mutex.owner_thread = NULL;
}
/* Acquire the mutex, wait if necessary. Special case for Global Lock */
/* Release the mutex, special case for Global Lock */
if (obj_desc->mutex.os_mutex == acpi_gbl_global_lock_mutex) {
status =
acpi_ev_acquire_global_lock((u16) time_desc->integer.value);
if (obj_desc == acpi_gbl_global_lock_mutex) {
status = acpi_ev_release_global_lock();
} else {
status = acpi_ex_system_wait_mutex(obj_desc->mutex.os_mutex,
(u16) time_desc->integer.
value);
acpi_os_release_mutex(obj_desc->mutex.os_mutex);
}
if (ACPI_FAILURE(status)) {
/* Clear mutex info */
/* Includes failure from a timeout on time_desc */
return_ACPI_STATUS(status);
}
/* Have the mutex: update mutex and walk info and save the sync_level */
obj_desc->mutex.owner_thread = walk_state->thread;
obj_desc->mutex.acquisition_depth = 1;
obj_desc->mutex.original_sync_level =
walk_state->thread->current_sync_level;
walk_state->thread->current_sync_level = obj_desc->mutex.sync_level;
/* Link the mutex to the current thread for force-unlock at method exit */
acpi_ex_link_mutex(obj_desc, walk_state->thread);
return_ACPI_STATUS(AE_OK);
obj_desc->mutex.thread_id = 0;
return_ACPI_STATUS(status);
}
/*******************************************************************************
@ -253,22 +364,13 @@ acpi_ex_release_mutex(union acpi_operand_object *obj_desc,
return_ACPI_STATUS(AE_AML_MUTEX_NOT_ACQUIRED);
}
/* Sanity check: we must have a valid thread ID */
if (!walk_state->thread) {
ACPI_ERROR((AE_INFO,
"Cannot release Mutex [%4.4s], null thread info",
acpi_ut_get_node_name(obj_desc->mutex.node)));
return_ACPI_STATUS(AE_AML_INTERNAL);
}
/*
* The Mutex is owned, but this thread must be the owner.
* Special case for Global Lock, any thread can release
*/
if ((obj_desc->mutex.owner_thread->thread_id !=
walk_state->thread->thread_id)
&& (obj_desc->mutex.os_mutex != acpi_gbl_global_lock_mutex)) {
&& (obj_desc != acpi_gbl_global_lock_mutex)) {
ACPI_ERROR((AE_INFO,
"Thread %lX cannot release Mutex [%4.4s] acquired by thread %lX",
(unsigned long)walk_state->thread->thread_id,
@ -278,45 +380,37 @@ acpi_ex_release_mutex(union acpi_operand_object *obj_desc,
return_ACPI_STATUS(AE_AML_NOT_OWNER);
}
/* Must have a valid thread ID */
if (!walk_state->thread) {
ACPI_ERROR((AE_INFO,
"Cannot release Mutex [%4.4s], null thread info",
acpi_ut_get_node_name(obj_desc->mutex.node)));
return_ACPI_STATUS(AE_AML_INTERNAL);
}
/*
* The sync level of the mutex must be less than or equal to the current
* sync level
*/
if (obj_desc->mutex.sync_level > walk_state->thread->current_sync_level) {
ACPI_ERROR((AE_INFO,
"Cannot release Mutex [%4.4s], incorrect SyncLevel",
acpi_ut_get_node_name(obj_desc->mutex.node)));
"Cannot release Mutex [%4.4s], SyncLevel mismatch: mutex %d current %d",
acpi_ut_get_node_name(obj_desc->mutex.node),
obj_desc->mutex.sync_level,
walk_state->thread->current_sync_level));
return_ACPI_STATUS(AE_AML_MUTEX_ORDER);
}
/* Match multiple Acquires with multiple Releases */
status = acpi_ex_release_mutex_object(obj_desc);
obj_desc->mutex.acquisition_depth--;
if (obj_desc->mutex.acquisition_depth != 0) {
if (obj_desc->mutex.acquisition_depth == 0) {
/* Just decrement the depth and return */
/* Restore the original sync_level */
return_ACPI_STATUS(AE_OK);
walk_state->thread->current_sync_level =
obj_desc->mutex.original_sync_level;
}
/* Unlink the mutex from the owner's list */
acpi_ex_unlink_mutex(obj_desc);
/* Release the mutex, special case for Global Lock */
if (obj_desc->mutex.os_mutex == acpi_gbl_global_lock_mutex) {
status = acpi_ev_release_global_lock();
} else {
acpi_os_release_mutex(obj_desc->mutex.os_mutex);
}
/* Update the mutex and restore sync_level */
obj_desc->mutex.owner_thread = NULL;
walk_state->thread->current_sync_level =
obj_desc->mutex.original_sync_level;
return_ACPI_STATUS(status);
}
@ -357,7 +451,7 @@ void acpi_ex_release_all_mutexes(struct acpi_thread_state *thread)
/* Release the mutex, special case for Global Lock */
if (obj_desc->mutex.os_mutex == acpi_gbl_global_lock_mutex) {
if (obj_desc == acpi_gbl_global_lock_mutex) {
/* Ignore errors */
@ -369,6 +463,7 @@ void acpi_ex_release_all_mutexes(struct acpi_thread_state *thread)
/* Mark mutex unowned */
obj_desc->mutex.owner_thread = NULL;
obj_desc->mutex.thread_id = 0;
/* Update Thread sync_level (Last mutex is the important one) */

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -121,6 +121,7 @@ acpi_status acpi_ex_opcode_0A_0T_1R(struct acpi_walk_state *walk_state)
if ((ACPI_FAILURE(status)) || walk_state->result_obj) {
acpi_ut_remove_reference(return_desc);
walk_state->result_obj = NULL;
} else {
/* Save the return value */
@ -739,26 +740,38 @@ acpi_status acpi_ex_opcode_1A_0T_1R(struct acpi_walk_state *walk_state)
value = acpi_gbl_integer_byte_width;
break;
case ACPI_TYPE_BUFFER:
value = temp_desc->buffer.length;
break;
case ACPI_TYPE_STRING:
value = temp_desc->string.length;
break;
case ACPI_TYPE_BUFFER:
/* Buffer arguments may not be evaluated at this point */
status = acpi_ds_get_buffer_arguments(temp_desc);
value = temp_desc->buffer.length;
break;
case ACPI_TYPE_PACKAGE:
/* Package arguments may not be evaluated at this point */
status = acpi_ds_get_package_arguments(temp_desc);
value = temp_desc->package.count;
break;
default:
ACPI_ERROR((AE_INFO,
"Operand is not Buf/Int/Str/Pkg - found type %s",
"Operand must be Buffer/Integer/String/Package - found type %s",
acpi_ut_get_type_name(type)));
status = AE_AML_OPERAND_TYPE;
goto cleanup;
}
if (ACPI_FAILURE(status)) {
goto cleanup;
}
/*
* Now that we have the size of the object, create a result
* object to hold the value

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -241,10 +241,6 @@ acpi_status acpi_ex_opcode_2A_2T_1R(struct acpi_walk_state *walk_state)
goto cleanup;
}
/* Return the remainder */
walk_state->result_obj = return_desc1;
cleanup:
/*
* Since the remainder is not returned indirectly, remove a reference to
@ -259,6 +255,12 @@ acpi_status acpi_ex_opcode_2A_2T_1R(struct acpi_walk_state *walk_state)
acpi_ut_remove_reference(return_desc1);
}
/* Save return object (the remainder) on success */
else {
walk_state->result_obj = return_desc1;
}
return_ACPI_STATUS(status);
}
@ -490,6 +492,7 @@ acpi_status acpi_ex_opcode_2A_1T_1R(struct acpi_walk_state *walk_state)
if (ACPI_FAILURE(status)) {
acpi_ut_remove_reference(return_desc);
walk_state->result_obj = NULL;
}
return_ACPI_STATUS(status);
@ -583,8 +586,6 @@ acpi_status acpi_ex_opcode_2A_0T_1R(struct acpi_walk_state *walk_state)
return_desc->integer.value = ACPI_INTEGER_MAX;
}
walk_state->result_obj = return_desc;
cleanup:
/* Delete return object on error */
@ -593,5 +594,11 @@ acpi_status acpi_ex_opcode_2A_0T_1R(struct acpi_walk_state *walk_state)
acpi_ut_remove_reference(return_desc);
}
/* Save return object on success */
else {
walk_state->result_obj = return_desc;
}
return_ACPI_STATUS(status);
}

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -260,6 +260,7 @@ acpi_status acpi_ex_opcode_3A_1T_1R(struct acpi_walk_state *walk_state)
if (ACPI_FAILURE(status) || walk_state->result_obj) {
acpi_ut_remove_reference(return_desc);
walk_state->result_obj = NULL;
}
/* Set the return object and exit */

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -322,8 +322,6 @@ acpi_status acpi_ex_opcode_6A_0T_1R(struct acpi_walk_state * walk_state)
goto cleanup;
}
walk_state->result_obj = return_desc;
cleanup:
/* Delete return object on error */
@ -332,5 +330,11 @@ acpi_status acpi_ex_opcode_6A_0T_1R(struct acpi_walk_state * walk_state)
acpi_ut_remove_reference(return_desc);
}
/* Save return object on success */
else {
walk_state->result_obj = return_desc;
}
return_ACPI_STATUS(status);
}

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -412,6 +412,7 @@ acpi_ex_prep_common_field_object(union acpi_operand_object *obj_desc,
acpi_status acpi_ex_prep_field_value(struct acpi_create_field_info *info)
{
union acpi_operand_object *obj_desc;
union acpi_operand_object *second_desc = NULL;
u32 type;
acpi_status status;
@ -494,6 +495,20 @@ acpi_status acpi_ex_prep_field_value(struct acpi_create_field_info *info)
obj_desc->field.access_byte_width,
obj_desc->bank_field.region_obj,
obj_desc->bank_field.bank_obj));
/*
* Remember location in AML stream of the field unit
* opcode and operands -- since the bank_value
* operands must be evaluated.
*/
second_desc = obj_desc->common.next_object;
second_desc->extra.aml_start =
((union acpi_parse_object *)(info->data_register_node))->
named.data;
second_desc->extra.aml_length =
((union acpi_parse_object *)(info->data_register_node))->
named.length;
break;
case ACPI_TYPE_LOCAL_INDEX_FIELD:

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -160,7 +160,7 @@ acpi_ex_system_memory_space_handler(u32 function,
if (!mem_info->mapped_logical_address) {
ACPI_ERROR((AE_INFO,
"Could not map memory at %8.8X%8.8X, size %X",
ACPI_FORMAT_UINT64(address),
ACPI_FORMAT_NATIVE_UINT(address),
(u32) window_size));
mem_info->mapped_length = 0;
return_ACPI_STATUS(AE_NO_MEMORY);
@ -182,7 +182,8 @@ acpi_ex_system_memory_space_handler(u32 function,
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
"System-Memory (width %d) R/W %d Address=%8.8X%8.8X\n",
bit_width, function, ACPI_FORMAT_UINT64(address)));
bit_width, function,
ACPI_FORMAT_NATIVE_UINT(address)));
/*
* Perform the memory read or write
@ -284,7 +285,8 @@ acpi_ex_system_io_space_handler(u32 function,
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
"System-IO (width %d) R/W %d Address=%8.8X%8.8X\n",
bit_width, function, ACPI_FORMAT_UINT64(address)));
bit_width, function,
ACPI_FORMAT_NATIVE_UINT(address)));
/* Decode the function parameter */

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -116,9 +116,11 @@ acpi_ex_resolve_node_to_value(struct acpi_namespace_node **object_ptr,
* Several object types require no further processing:
* 1) Device/Thermal objects don't have a "real" subobject, return the Node
* 2) Method locals and arguments have a pseudo-Node
* 3) 10/2007: Added method type to assist with Package construction.
*/
if ((entry_type == ACPI_TYPE_DEVICE) ||
(entry_type == ACPI_TYPE_THERMAL) ||
(entry_type == ACPI_TYPE_METHOD) ||
(node->flags & (ANOBJ_METHOD_ARG | ANOBJ_METHOD_LOCAL))) {
return_ACPI_STATUS(AE_OK);
}
@ -214,7 +216,6 @@ acpi_ex_resolve_node_to_value(struct acpi_namespace_node **object_ptr,
/* For these objects, just return the object attached to the Node */
case ACPI_TYPE_MUTEX:
case ACPI_TYPE_METHOD:
case ACPI_TYPE_POWER:
case ACPI_TYPE_PROCESSOR:
case ACPI_TYPE_EVENT:
@ -238,12 +239,11 @@ acpi_ex_resolve_node_to_value(struct acpi_namespace_node **object_ptr,
case ACPI_TYPE_LOCAL_REFERENCE:
switch (source_desc->reference.opcode) {
case AML_LOAD_OP:
/* This is a ddb_handle */
/* Return an additional reference to the object */
case AML_LOAD_OP: /* This is a ddb_handle */
case AML_REF_OF_OP:
case AML_INDEX_OP:
/* Return an additional reference to the object */
obj_desc = source_desc;
acpi_ut_add_reference(obj_desc);

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -140,7 +140,6 @@ acpi_ex_resolve_object_to_value(union acpi_operand_object **stack_ptr,
{
acpi_status status = AE_OK;
union acpi_operand_object *stack_desc;
void *temp_node;
union acpi_operand_object *obj_desc = NULL;
u16 opcode;
@ -156,23 +155,6 @@ acpi_ex_resolve_object_to_value(union acpi_operand_object **stack_ptr,
opcode = stack_desc->reference.opcode;
switch (opcode) {
case AML_NAME_OP:
/*
* Convert name reference to a namespace node
* Then, acpi_ex_resolve_node_to_value can be used to get the value
*/
temp_node = stack_desc->reference.object;
/* Delete the Reference Object */
acpi_ut_remove_reference(stack_desc);
/* Return the namespace node */
(*stack_ptr) = temp_node;
break;
case AML_LOCAL_OP:
case AML_ARG_OP:
@ -207,15 +189,25 @@ acpi_ex_resolve_object_to_value(union acpi_operand_object **stack_ptr,
switch (stack_desc->reference.target_type) {
case ACPI_TYPE_BUFFER_FIELD:
/* Just return - leave the Reference on the stack */
/* Just return - do not dereference */
break;
case ACPI_TYPE_PACKAGE:
/* If method call or copy_object - do not dereference */
if ((walk_state->opcode ==
AML_INT_METHODCALL_OP)
|| (walk_state->opcode == AML_COPY_OP)) {
break;
}
/* Otherwise, dereference the package_index to a package element */
obj_desc = *stack_desc->reference.where;
if (obj_desc) {
/*
* Valid obj descriptor, copy pointer to return value
* Valid object descriptor, copy pointer to return value
* (i.e., dereference the package index)
* Delete the ref object, increment the returned object
*/
@ -224,11 +216,11 @@ acpi_ex_resolve_object_to_value(union acpi_operand_object **stack_ptr,
*stack_ptr = obj_desc;
} else {
/*
* A NULL object descriptor means an unitialized element of
* A NULL object descriptor means an uninitialized element of
* the package, can't dereference it
*/
ACPI_ERROR((AE_INFO,
"Attempt to deref an Index to NULL pkg element Idx=%p",
"Attempt to dereference an Index to NULL package element Idx=%p",
stack_desc));
status = AE_AML_UNINITIALIZED_ELEMENT;
}
@ -239,7 +231,7 @@ acpi_ex_resolve_object_to_value(union acpi_operand_object **stack_ptr,
/* Invalid reference object */
ACPI_ERROR((AE_INFO,
"Unknown TargetType %X in Index/Reference obj %p",
"Unknown TargetType %X in Index/Reference object %p",
stack_desc->reference.target_type,
stack_desc));
status = AE_AML_INTERNAL;
@ -251,7 +243,7 @@ acpi_ex_resolve_object_to_value(union acpi_operand_object **stack_ptr,
case AML_DEBUG_OP:
case AML_LOAD_OP:
/* Just leave the object as-is */
/* Just leave the object as-is, do not dereference */
break;
@ -390,10 +382,10 @@ acpi_ex_resolve_multiple(struct acpi_walk_state *walk_state,
}
/*
* For reference objects created via the ref_of or Index operators,
* we need to get to the base object (as per the ACPI specification
* of the object_type and size_of operators). This means traversing
* the list of possibly many nested references.
* For reference objects created via the ref_of, Index, or Load/load_table
* operators, we need to get to the base object (as per the ACPI
* specification of the object_type and size_of operators). This means
* traversing the list of possibly many nested references.
*/
while (ACPI_GET_OBJECT_TYPE(obj_desc) == ACPI_TYPE_LOCAL_REFERENCE) {
switch (obj_desc->reference.opcode) {
@ -463,6 +455,11 @@ acpi_ex_resolve_multiple(struct acpi_walk_state *walk_state,
}
break;
case AML_LOAD_OP:
type = ACPI_TYPE_DDB_HANDLE;
goto exit;
case AML_LOCAL_OP:
case AML_ARG_OP:

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -137,7 +137,6 @@ acpi_ex_resolve_operands(u16 opcode,
union acpi_operand_object *obj_desc;
acpi_status status = AE_OK;
u8 object_type;
void *temp_node;
u32 arg_types;
const struct acpi_opcode_info *op_info;
u32 this_arg_type;
@ -239,7 +238,6 @@ acpi_ex_resolve_operands(u16 opcode,
/*lint -fallthrough */
case AML_NAME_OP:
case AML_INDEX_OP:
case AML_REF_OF_OP:
case AML_ARG_OP:
@ -332,15 +330,6 @@ acpi_ex_resolve_operands(u16 opcode,
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
if (obj_desc->reference.opcode == AML_NAME_OP) {
/* Convert a named reference to the actual named object */
temp_node = obj_desc->reference.object;
acpi_ut_remove_reference(obj_desc);
(*stack_ptr) = temp_node;
}
goto next_operand;
case ARGI_DATAREFOBJ: /* Store operator only */

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -84,8 +84,12 @@ acpi_ex_do_debug_object(union acpi_operand_object *source_desc,
ACPI_FUNCTION_TRACE_PTR(ex_do_debug_object, source_desc);
ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, "[ACPI Debug] %*s",
level, " "));
/* Print line header as long as we are not in the middle of an object display */
if (!((level > 0) && index == 0)) {
ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, "[ACPI Debug] %*s",
level, " "));
}
/* Display index for package output only */
@ -95,12 +99,12 @@ acpi_ex_do_debug_object(union acpi_operand_object *source_desc,
}
if (!source_desc) {
ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, "<Null Object>\n"));
ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, "[Null Object]\n"));
return_VOID;
}
if (ACPI_GET_DESCRIPTOR_TYPE(source_desc) == ACPI_DESC_TYPE_OPERAND) {
ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, "%s: ",
ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, "%s ",
acpi_ut_get_object_type_name
(source_desc)));
@ -123,6 +127,8 @@ acpi_ex_do_debug_object(union acpi_operand_object *source_desc,
return_VOID;
}
/* source_desc is of type ACPI_DESC_TYPE_OPERAND */
switch (ACPI_GET_OBJECT_TYPE(source_desc)) {
case ACPI_TYPE_INTEGER:
@ -147,7 +153,7 @@ acpi_ex_do_debug_object(union acpi_operand_object *source_desc,
(u32) source_desc->buffer.length));
ACPI_DUMP_BUFFER(source_desc->buffer.pointer,
(source_desc->buffer.length <
32) ? source_desc->buffer.length : 32);
256) ? source_desc->buffer.length : 256);
break;
case ACPI_TYPE_STRING:
@ -160,7 +166,7 @@ acpi_ex_do_debug_object(union acpi_operand_object *source_desc,
case ACPI_TYPE_PACKAGE:
ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT,
"[0x%.2X Elements]\n",
"[Contains 0x%.2X Elements]\n",
source_desc->package.count));
/* Output the entire contents of the package */
@ -180,12 +186,59 @@ acpi_ex_do_debug_object(union acpi_operand_object *source_desc,
(source_desc->reference.opcode),
source_desc->reference.offset));
} else {
ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, "[%s]\n",
ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, "[%s]",
acpi_ps_get_opcode_name
(source_desc->reference.opcode)));
}
if (source_desc->reference.object) {
if (source_desc->reference.opcode == AML_LOAD_OP) { /* Load and load_table */
ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT,
" Table OwnerId %p\n",
source_desc->reference.object));
break;
}
ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, " "));
/* Check for valid node first, then valid object */
if (source_desc->reference.node) {
if (ACPI_GET_DESCRIPTOR_TYPE
(source_desc->reference.node) !=
ACPI_DESC_TYPE_NAMED) {
ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT,
" %p - Not a valid namespace node\n",
source_desc->reference.
node));
} else {
ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT,
"Node %p [%4.4s] ",
source_desc->reference.
node,
(source_desc->reference.
node)->name.ascii));
switch ((source_desc->reference.node)->type) {
/* These types have no attached object */
case ACPI_TYPE_DEVICE:
acpi_os_printf("Device\n");
break;
case ACPI_TYPE_THERMAL:
acpi_os_printf("Thermal Zone\n");
break;
default:
acpi_ex_do_debug_object((source_desc->
reference.
node)->object,
level + 4, 0);
break;
}
}
} else if (source_desc->reference.object) {
if (ACPI_GET_DESCRIPTOR_TYPE
(source_desc->reference.object) ==
ACPI_DESC_TYPE_NAMED) {
@ -198,18 +251,13 @@ acpi_ex_do_debug_object(union acpi_operand_object *source_desc,
acpi_ex_do_debug_object(source_desc->reference.
object, level + 4, 0);
}
} else if (source_desc->reference.node) {
acpi_ex_do_debug_object((source_desc->reference.node)->
object, level + 4, 0);
}
break;
default:
ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, "%p %s\n",
source_desc,
acpi_ut_get_object_type_name
(source_desc)));
ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, "%p\n",
source_desc));
break;
}
@ -313,7 +361,6 @@ acpi_ex_store(union acpi_operand_object *source_desc,
* 4) Store to the debug object
*/
switch (ref_desc->reference.opcode) {
case AML_NAME_OP:
case AML_REF_OF_OP:
/* Storing an object into a Name "container" */
@ -415,11 +462,24 @@ acpi_ex_store_object_to_index(union acpi_operand_object *source_desc,
*/
obj_desc = *(index_desc->reference.where);
status =
acpi_ut_copy_iobject_to_iobject(source_desc, &new_desc,
walk_state);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
if (ACPI_GET_OBJECT_TYPE(source_desc) ==
ACPI_TYPE_LOCAL_REFERENCE
&& source_desc->reference.opcode == AML_LOAD_OP) {
/* This is a DDBHandle, just add a reference to it */
acpi_ut_add_reference(source_desc);
new_desc = source_desc;
} else {
/* Normal object, copy it */
status =
acpi_ut_copy_iobject_to_iobject(source_desc,
&new_desc,
walk_state);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
}
if (obj_desc) {
@ -571,10 +631,17 @@ acpi_ex_store_object_to_node(union acpi_operand_object *source_desc,
/* If no implicit conversion, drop into the default case below */
if ((!implicit_conversion) || (walk_state->opcode == AML_COPY_OP)) {
/* Force execution of default (no implicit conversion) */
if ((!implicit_conversion) ||
((walk_state->opcode == AML_COPY_OP) &&
(target_type != ACPI_TYPE_LOCAL_REGION_FIELD) &&
(target_type != ACPI_TYPE_LOCAL_BANK_FIELD) &&
(target_type != ACPI_TYPE_LOCAL_INDEX_FIELD))) {
/*
* Force execution of default (no implicit conversion). Note:
* copy_object does not perform an implicit conversion, as per the ACPI
* spec -- except in case of region/bank/index fields -- because these
* objects must retain their original type permanently.
*/
target_type = ACPI_TYPE_ANY;
}

View file

@ -7,7 +7,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -44,7 +44,6 @@
#include <acpi/acpi.h>
#include <acpi/acinterp.h>
#include <acpi/acevents.h>
#define _COMPONENT ACPI_EXECUTER
ACPI_MODULE_NAME("exsystem")

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -61,7 +61,6 @@
#include <acpi/acpi.h>
#include <acpi/acinterp.h>
#include <acpi/amlcode.h>
#include <acpi/acevents.h>
#define _COMPONENT ACPI_EXECUTER
ACPI_MODULE_NAME("exutils")
@ -217,9 +216,10 @@ void acpi_ex_truncate_for32bit_table(union acpi_operand_object *obj_desc)
/*
* Object must be a valid number and we must be executing
* a control method
* a control method. NS node could be there for AML_INT_NAMEPATH_OP.
*/
if ((!obj_desc) ||
(ACPI_GET_DESCRIPTOR_TYPE(obj_desc) != ACPI_DESC_TYPE_OPERAND) ||
(ACPI_GET_OBJECT_TYPE(obj_desc) != ACPI_TYPE_INTEGER)) {
return;
}
@ -240,72 +240,73 @@ void acpi_ex_truncate_for32bit_table(union acpi_operand_object *obj_desc)
* PARAMETERS: field_flags - Flags with Lock rule:
* always_lock or never_lock
*
* RETURN: TRUE/FALSE indicating whether the lock was actually acquired
* RETURN: None
*
* DESCRIPTION: Obtain the global lock and keep track of this fact via two
* methods. A global variable keeps the state of the lock, and
* the state is returned to the caller.
* DESCRIPTION: Obtain the ACPI hardware Global Lock, only if the field
* flags specify that it is to be obtained before field access.
*
******************************************************************************/
u8 acpi_ex_acquire_global_lock(u32 field_flags)
void acpi_ex_acquire_global_lock(u32 field_flags)
{
u8 locked = FALSE;
acpi_status status;
ACPI_FUNCTION_TRACE(ex_acquire_global_lock);
/* Only attempt lock if the always_lock bit is set */
/* Only use the lock if the always_lock bit is set */
if (field_flags & AML_FIELD_LOCK_RULE_MASK) {
/* We should attempt to get the lock, wait forever */
status = acpi_ev_acquire_global_lock(ACPI_WAIT_FOREVER);
if (ACPI_SUCCESS(status)) {
locked = TRUE;
} else {
ACPI_EXCEPTION((AE_INFO, status,
"Could not acquire Global Lock"));
}
if (!(field_flags & AML_FIELD_LOCK_RULE_MASK)) {
return_VOID;
}
return_UINT8(locked);
/* Attempt to get the global lock, wait forever */
status = acpi_ex_acquire_mutex_object(ACPI_WAIT_FOREVER,
acpi_gbl_global_lock_mutex,
acpi_os_get_thread_id());
if (ACPI_FAILURE(status)) {
ACPI_EXCEPTION((AE_INFO, status,
"Could not acquire Global Lock"));
}
return_VOID;
}
/*******************************************************************************
*
* FUNCTION: acpi_ex_release_global_lock
*
* PARAMETERS: locked_by_me - Return value from corresponding call to
* acquire_global_lock.
* PARAMETERS: field_flags - Flags with Lock rule:
* always_lock or never_lock
*
* RETURN: None
*
* DESCRIPTION: Release the global lock if it is locked.
* DESCRIPTION: Release the ACPI hardware Global Lock
*
******************************************************************************/
void acpi_ex_release_global_lock(u8 locked_by_me)
void acpi_ex_release_global_lock(u32 field_flags)
{
acpi_status status;
ACPI_FUNCTION_TRACE(ex_release_global_lock);
/* Only attempt unlock if the caller locked it */
/* Only use the lock if the always_lock bit is set */
if (locked_by_me) {
if (!(field_flags & AML_FIELD_LOCK_RULE_MASK)) {
return_VOID;
}
/* OK, now release the lock */
/* Release the global lock */
status = acpi_ev_release_global_lock();
if (ACPI_FAILURE(status)) {
status = acpi_ex_release_mutex_object(acpi_gbl_global_lock_mutex);
if (ACPI_FAILURE(status)) {
/* Report the error, but there isn't much else we can do */
/* Report the error, but there isn't much else we can do */
ACPI_EXCEPTION((AE_INFO, status,
"Could not release ACPI Global Lock"));
}
ACPI_EXCEPTION((AE_INFO, status,
"Could not release Global Lock"));
}
return_VOID;
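
The rewritten helpers above change the global-lock interface from "acquire returns a locked flag that the caller must hand back" to a symmetric pair keyed off the field flags, both routed through the Global Lock mutex object. A tiny sketch of that symmetric, flag-gated locking shape, plain C, with a stand-in lock and a locally defined mask value (both are assumptions for illustration only):

#include <stdio.h>

#define FIELD_LOCK_RULE_MASK 0x10   /* stand-in for the AML field LockRule bit */

static unsigned int lock_depth;     /* stands in for the Global Lock mutex object */

/* Acquire the (pretend) global lock only when the field's LockRule asks for it. */
static void field_lock(unsigned int field_flags)
{
        if (!(field_flags & FIELD_LOCK_RULE_MASK))
                return;
        lock_depth++;               /* real code: acquire the global lock mutex object */
}

/* Release using the same flag test, so no "was it locked?" token is needed. */
static void field_unlock(unsigned int field_flags)
{
        if (!(field_flags & FIELD_LOCK_RULE_MASK))
                return;
        lock_depth--;               /* real code: release the global lock mutex object */
}

int main(void)
{
        unsigned int flags = FIELD_LOCK_RULE_MASK;  /* field declared with Lock */

        field_lock(flags);
        /* ... field read or write happens here ... */
        field_unlock(flags);
        printf("lock depth after access: %u\n", lock_depth);
        return 0;
}
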

View file

@ -256,24 +256,23 @@ static int acpi_fan_add(struct acpi_device *device)
result = PTR_ERR(cdev);
goto end;
}
if (cdev) {
printk(KERN_INFO PREFIX
"%s is registered as cooling_device%d\n",
device->dev.bus_id, cdev->id);
acpi_driver_data(device) = cdev;
result = sysfs_create_link(&device->dev.kobj,
&cdev->device.kobj,
"thermal_cooling");
if (result)
return result;
printk(KERN_INFO PREFIX
"%s is registered as cooling_device%d\n",
device->dev.bus_id, cdev->id);
result = sysfs_create_link(&cdev->device.kobj,
&device->dev.kobj,
"device");
if (result)
return result;
}
acpi_driver_data(device) = cdev;
result = sysfs_create_link(&device->dev.kobj,
&cdev->device.kobj,
"thermal_cooling");
if (result)
printk(KERN_ERR PREFIX "Create sysfs link\n");
result = sysfs_create_link(&cdev->device.kobj,
&device->dev.kobj,
"device");
if (result)
printk(KERN_ERR PREFIX "Create sysfs link\n");
result = acpi_fan_add_fs(device);
if (result)

View file

@ -142,6 +142,7 @@ EXPORT_SYMBOL(acpi_get_physical_device);
static int acpi_bind_one(struct device *dev, acpi_handle handle)
{
struct acpi_device *acpi_dev;
acpi_status status;
if (dev->archdata.acpi_handle) {
@ -157,6 +158,16 @@ static int acpi_bind_one(struct device *dev, acpi_handle handle)
}
dev->archdata.acpi_handle = handle;
status = acpi_bus_get_device(handle, &acpi_dev);
if (!ACPI_FAILURE(status)) {
int ret;
ret = sysfs_create_link(&dev->kobj, &acpi_dev->dev.kobj,
"firmware_node");
ret = sysfs_create_link(&acpi_dev->dev.kobj, &dev->kobj,
"physical_node");
}
return 0;
}
@ -165,8 +176,17 @@ static int acpi_unbind_one(struct device *dev)
if (!dev->archdata.acpi_handle)
return 0;
if (dev == acpi_get_physical_device(dev->archdata.acpi_handle)) {
struct acpi_device *acpi_dev;
/* acpi_get_physical_device increase refcnt by one */
put_device(dev);
if (!acpi_bus_get_device(dev->archdata.acpi_handle,
&acpi_dev)) {
sysfs_remove_link(&dev->kobj, "firmware_node");
sysfs_remove_link(&acpi_dev->dev.kobj, "physical_node");
}
acpi_detach_data(dev->archdata.acpi_handle,
acpi_glue_data_handler);
dev->archdata.acpi_handle = NULL;

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -7,7 +7,7 @@
******************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -70,9 +70,10 @@ acpi_set_firmware_waking_vector(acpi_physical_address physical_address)
/* Get the FACS */
status =
acpi_get_table_by_index(ACPI_TABLE_INDEX_FACS,
(struct acpi_table_header **)&facs);
status = acpi_get_table_by_index(ACPI_TABLE_INDEX_FACS,
ACPI_CAST_INDIRECT_PTR(struct
acpi_table_header,
&facs));
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
@ -124,9 +125,10 @@ acpi_get_firmware_waking_vector(acpi_physical_address * physical_address)
/* Get the FACS */
status =
acpi_get_table_by_index(ACPI_TABLE_INDEX_FACS,
(struct acpi_table_header **)&facs);
status = acpi_get_table_by_index(ACPI_TABLE_INDEX_FACS,
ACPI_CAST_INDIRECT_PTR(struct
acpi_table_header,
&facs));
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -5,7 +5,7 @@
******************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -208,8 +208,7 @@ acpi_status acpi_ns_root_initialize(void)
/* Special case for ACPI Global Lock */
if (ACPI_STRCMP(init_val->name, "_GL_") == 0) {
acpi_gbl_global_lock_mutex =
obj_desc->mutex.os_mutex;
acpi_gbl_global_lock_mutex = obj_desc;
/* Create additional counting semaphore for global lock */
@ -582,44 +581,68 @@ acpi_ns_lookup(union acpi_generic_state *scope_info,
return_ACPI_STATUS(status);
}
/*
* Sanity typecheck of the target object:
*
* If 1) This is the last segment (num_segments == 0)
* 2) And we are looking for a specific type
* (Not checking for TYPE_ANY)
* 3) Which is not an alias
* 4) Which is not a local type (TYPE_SCOPE)
* 5) And the type of target object is known (not TYPE_ANY)
* 6) And target object does not match what we are looking for
*
* Then we have a type mismatch. Just warn and ignore it.
*/
if ((num_segments == 0) &&
(type_to_check_for != ACPI_TYPE_ANY) &&
(type_to_check_for != ACPI_TYPE_LOCAL_ALIAS) &&
(type_to_check_for != ACPI_TYPE_LOCAL_METHOD_ALIAS) &&
(type_to_check_for != ACPI_TYPE_LOCAL_SCOPE) &&
(this_node->type != ACPI_TYPE_ANY) &&
(this_node->type != type_to_check_for)) {
/* More segments to follow? */
/* Complain about a type mismatch */
ACPI_WARNING((AE_INFO,
"NsLookup: Type mismatch on %4.4s (%s), searching for (%s)",
ACPI_CAST_PTR(char, &simple_name),
acpi_ut_get_type_name(this_node->type),
acpi_ut_get_type_name
(type_to_check_for)));
if (num_segments > 0) {
/*
* If we have an alias to an object that opens a scope (such as a
* device or processor), we need to dereference the alias here so that
* we can access any children of the original node (via the remaining
* segments).
*/
if (this_node->type == ACPI_TYPE_LOCAL_ALIAS) {
if (acpi_ns_opens_scope
(((struct acpi_namespace_node *)this_node->
object)->type)) {
this_node =
(struct acpi_namespace_node *)
this_node->object;
}
}
}
/*
* If this is the last name segment and we are not looking for a
* specific type, but the type of found object is known, use that type
* to see if it opens a scope.
*/
if ((num_segments == 0) && (type == ACPI_TYPE_ANY)) {
type = this_node->type;
/* Special handling for the last segment (num_segments == 0) */
else {
/*
* Sanity typecheck of the target object:
*
* If 1) This is the last segment (num_segments == 0)
* 2) And we are looking for a specific type
* (Not checking for TYPE_ANY)
* 3) Which is not an alias
* 4) Which is not a local type (TYPE_SCOPE)
* 5) And the type of target object is known (not TYPE_ANY)
* 6) And target object does not match what we are looking for
*
* Then we have a type mismatch. Just warn and ignore it.
*/
if ((type_to_check_for != ACPI_TYPE_ANY) &&
(type_to_check_for != ACPI_TYPE_LOCAL_ALIAS) &&
(type_to_check_for != ACPI_TYPE_LOCAL_METHOD_ALIAS)
&& (type_to_check_for != ACPI_TYPE_LOCAL_SCOPE)
&& (this_node->type != ACPI_TYPE_ANY)
&& (this_node->type != type_to_check_for)) {
/* Complain about a type mismatch */
ACPI_WARNING((AE_INFO,
"NsLookup: Type mismatch on %4.4s (%s), searching for (%s)",
ACPI_CAST_PTR(char, &simple_name),
acpi_ut_get_type_name(this_node->
type),
acpi_ut_get_type_name
(type_to_check_for)));
}
/*
* If this is the last name segment and we are not looking for a
* specific type, but the type of found object is known, use that type
* to (later) see if it opens a scope.
*/
if (type == ACPI_TYPE_ANY) {
type = this_node->type;
}
}
/* Point to next name segment and make this node current */

View file

@ -5,7 +5,7 @@
******************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -249,7 +249,9 @@ acpi_ns_dump_one_object(acpi_handle obj_handle,
acpi_os_printf("ID %X Len %.4X Addr %p\n",
obj_desc->processor.proc_id,
obj_desc->processor.length,
(char *)obj_desc->processor.address);
ACPI_CAST_PTR(void,
obj_desc->processor.
address));
break;
case ACPI_TYPE_DEVICE:
@ -320,9 +322,8 @@ acpi_ns_dump_one_object(acpi_handle obj_handle,
space_id));
if (obj_desc->region.flags & AOPOBJ_DATA_VALID) {
acpi_os_printf(" Addr %8.8X%8.8X Len %.4X\n",
ACPI_FORMAT_UINT64(obj_desc->
region.
address),
ACPI_FORMAT_NATIVE_UINT
(obj_desc->region.address),
obj_desc->region.length);
} else {
acpi_os_printf

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -5,7 +5,7 @@
******************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -244,6 +244,10 @@ acpi_ns_init_one_object(acpi_handle obj_handle,
info->field_count++;
break;
case ACPI_TYPE_LOCAL_BANK_FIELD:
info->field_count++;
break;
case ACPI_TYPE_BUFFER:
info->buffer_count++;
break;
@ -287,6 +291,12 @@ acpi_ns_init_one_object(acpi_handle obj_handle,
status = acpi_ds_get_buffer_field_arguments(obj_desc);
break;
case ACPI_TYPE_LOCAL_BANK_FIELD:
info->field_init++;
status = acpi_ds_get_bank_field_arguments(obj_desc);
break;
case ACPI_TYPE_BUFFER:
info->buffer_init++;

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -107,11 +107,11 @@ acpi_ns_load_table(acpi_native_uint table_index,
goto unlock;
}
status = acpi_ns_parse_table(table_index, node->child);
status = acpi_ns_parse_table(table_index, node);
if (ACPI_SUCCESS(status)) {
acpi_tb_set_table_loaded_flag(table_index, TRUE);
} else {
acpi_tb_release_owner_id(table_index);
(void)acpi_tb_release_owner_id(table_index);
}
unlock:

View file

@ -5,7 +5,7 @@
******************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -180,6 +180,12 @@ acpi_size acpi_ns_get_pathname_length(struct acpi_namespace_node *node)
next_node = node;
while (next_node && (next_node != acpi_gbl_root_node)) {
if (ACPI_GET_DESCRIPTOR_TYPE(next_node) != ACPI_DESC_TYPE_NAMED) {
ACPI_ERROR((AE_INFO,
"Invalid NS Node (%p) while traversing path",
next_node));
return 0;
}
size += ACPI_PATH_SEGMENT_LENGTH;
next_node = acpi_ns_get_parent_node(next_node);
}
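
For context, the check added above guards the usual ACPI pathname-length walk: count one fixed-size segment per ancestor until the root, and bail out if a node in the chain is not a proper namespace node. A small self-contained sketch of that walk, with invented types and marker values (not the real ACPICA descriptors):

#include <stdio.h>
#include <stddef.h>

#define PATH_SEGMENT_LENGTH 5      /* "XXXX." - a 4-char ACPI name plus separator */
#define DESC_TYPE_NAMED     0x0F   /* invented marker value for this sketch */

struct ns_node {
        unsigned char   descriptor_type;
        struct ns_node *parent;
};

/*
 * Count path segments from 'node' up to (but not including) 'root',
 * returning 0 if a corrupted (non-namespace) node is found, as the
 * hunk above now does.
 */
static size_t pathname_length(const struct ns_node *node,
                              const struct ns_node *root)
{
        size_t size = 0;

        while (node && node != root) {
                if (node->descriptor_type != DESC_TYPE_NAMED) {
                        fprintf(stderr, "invalid node %p in path\n", (const void *)node);
                        return 0;
                }
                size += PATH_SEGMENT_LENGTH;
                node = node->parent;
        }
        return size;
}

int main(void)
{
        struct ns_node root = { DESC_TYPE_NAMED, NULL };
        struct ns_node dev  = { DESC_TYPE_NAMED, &root };
        struct ns_node leaf = { DESC_TYPE_NAMED, &dev };

        printf("length = %zu\n", pathname_length(&leaf, &root)); /* 2 segments */
        return 0;
}
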

View file

@ -6,7 +6,7 @@
******************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -64,7 +64,8 @@ ACPI_MODULE_NAME("nsparse")
******************************************************************************/
acpi_status
acpi_ns_one_complete_parse(acpi_native_uint pass_number,
acpi_native_uint table_index)
acpi_native_uint table_index,
struct acpi_namespace_node * start_node)
{
union acpi_parse_object *parse_root;
acpi_status status;
@ -111,14 +112,25 @@ acpi_ns_one_complete_parse(acpi_native_uint pass_number,
aml_start = (u8 *) table + sizeof(struct acpi_table_header);
aml_length = table->length - sizeof(struct acpi_table_header);
status = acpi_ds_init_aml_walk(walk_state, parse_root, NULL,
aml_start, aml_length, NULL,
(u8) pass_number);
aml_start, (u32) aml_length,
NULL, (u8) pass_number);
}
if (ACPI_FAILURE(status)) {
acpi_ds_delete_walk_state(walk_state);
acpi_ps_delete_parse_tree(parse_root);
return_ACPI_STATUS(status);
goto cleanup;
}
/* start_node is the default location to load the table */
if (start_node && start_node != acpi_gbl_root_node) {
status =
acpi_ds_scope_stack_push(start_node, ACPI_TYPE_METHOD,
walk_state);
if (ACPI_FAILURE(status)) {
acpi_ds_delete_walk_state(walk_state);
goto cleanup;
}
}
/* Parse the AML */
@ -127,6 +139,7 @@ acpi_ns_one_complete_parse(acpi_native_uint pass_number,
(unsigned)pass_number));
status = acpi_ps_parse_aml(walk_state);
cleanup:
acpi_ps_delete_parse_tree(parse_root);
return_ACPI_STATUS(status);
}
@ -163,7 +176,9 @@ acpi_ns_parse_table(acpi_native_uint table_index,
* performs another complete parse of the AML.
*/
ACPI_DEBUG_PRINT((ACPI_DB_PARSE, "**** Start pass 1\n"));
status = acpi_ns_one_complete_parse(ACPI_IMODE_LOAD_PASS1, table_index);
status =
acpi_ns_one_complete_parse(ACPI_IMODE_LOAD_PASS1, table_index,
start_node);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
@ -178,7 +193,9 @@ acpi_ns_parse_table(acpi_native_uint table_index,
* parse objects are all cached.
*/
ACPI_DEBUG_PRINT((ACPI_DB_PARSE, "**** Start pass 2\n"));
status = acpi_ns_one_complete_parse(ACPI_IMODE_LOAD_PASS2, table_index);
status =
acpi_ns_one_complete_parse(ACPI_IMODE_LOAD_PASS2, table_index,
start_node);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}

View file

@ -5,7 +5,7 @@
******************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -77,9 +77,7 @@ struct acpi_namespace_node *acpi_ns_get_next_node(acpi_object_type type, struct
/* It's really the parent's _scope_ that we want */
if (parent_node->child) {
next_node = parent_node->child;
}
next_node = parent_node->child;
}
else {

View file

@ -6,7 +6,7 @@
******************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -467,10 +467,13 @@ acpi_ns_get_device_callback(acpi_handle obj_handle,
return (AE_CTRL_DEPTH);
}
if (!(flags & ACPI_STA_DEVICE_PRESENT)) {
/* Don't examine children of the device if not present */
if (!(flags & ACPI_STA_DEVICE_PRESENT) &&
!(flags & ACPI_STA_DEVICE_FUNCTIONING)) {
/*
* Don't examine the children of the device only when the
* device is neither present nor functional. See ACPI spec,
* description of _STA for more information.
*/
return (AE_CTRL_DEPTH);
}
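
The relaxed test above follows the _STA definition: a device's children are skipped only when the device reports itself as neither present (bit 0) nor functioning (bit 3), which keeps walks descending through bridges that are functional without being "present". A small sketch of just that predicate; the bit values follow the ACPI _STA description, but the macro names are invented here:

#include <stdio.h>

/* _STA result bits, per the ACPI specification's _STA description. */
#define STA_DEVICE_PRESENT      0x01
#define STA_DEVICE_FUNCTIONING  0x08

/* Return nonzero when a namespace walk should skip the device's children. */
static int skip_children(unsigned int sta_flags)
{
        return !(sta_flags & STA_DEVICE_PRESENT) &&
               !(sta_flags & STA_DEVICE_FUNCTIONING);
}

int main(void)
{
        printf("%d\n", skip_children(0x00)); /* absent, not functioning -> skip (1) */
        printf("%d\n", skip_children(0x08)); /* functioning but not present -> descend (0) */
        printf("%d\n", skip_children(0x0F)); /* fully present -> descend (0) */
        return 0;
}
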
@ -539,7 +542,7 @@ acpi_ns_get_device_callback(acpi_handle obj_handle,
* value is returned to the caller.
*
* This is a wrapper for walk_namespace, but the callback performs
* additional filtering. Please see acpi_get_device_callback.
* additional filtering. Please see acpi_ns_get_device_callback.
*
******************************************************************************/

View file

@ -6,7 +6,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -6,7 +6,7 @@
******************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -742,6 +742,7 @@ EXPORT_SYMBOL(acpi_os_execute);
void acpi_os_wait_events_complete(void *context)
{
flush_workqueue(kacpid_wq);
flush_workqueue(kacpi_notify_wq);
}
EXPORT_SYMBOL(acpi_os_wait_events_complete);

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -230,12 +230,12 @@ acpi_ps_get_next_namepath(struct acpi_walk_state *walk_state,
struct acpi_parse_state *parser_state,
union acpi_parse_object *arg, u8 possible_method_call)
{
acpi_status status;
char *path;
union acpi_parse_object *name_op;
acpi_status status;
union acpi_operand_object *method_desc;
struct acpi_namespace_node *node;
union acpi_generic_state scope_info;
u8 *start = parser_state->aml;
ACPI_FUNCTION_TRACE(ps_get_next_namepath);
@ -249,25 +249,18 @@ acpi_ps_get_next_namepath(struct acpi_walk_state *walk_state,
return_ACPI_STATUS(AE_OK);
}
/* Setup search scope info */
scope_info.scope.node = NULL;
node = parser_state->start_node;
if (node) {
scope_info.scope.node = node;
}
/*
* Lookup the name in the internal namespace. We don't want to add
* anything new to the namespace here, however, so we use MODE_EXECUTE.
* Lookup the name in the internal namespace, starting with the current
* scope. We don't want to add anything new to the namespace here,
* however, so we use MODE_EXECUTE.
* Allow searching of the parent tree, but don't open a new scope -
* we just want to lookup the object (must be mode EXECUTE to perform
* the upsearch)
*/
status =
acpi_ns_lookup(&scope_info, path, ACPI_TYPE_ANY, ACPI_IMODE_EXECUTE,
ACPI_NS_SEARCH_PARENT | ACPI_NS_DONT_OPEN_SCOPE,
NULL, &node);
status = acpi_ns_lookup(walk_state->scope_info, path,
ACPI_TYPE_ANY, ACPI_IMODE_EXECUTE,
ACPI_NS_SEARCH_PARENT | ACPI_NS_DONT_OPEN_SCOPE,
NULL, &node);
/*
* If this name is a control method invocation, we must
@ -275,6 +268,16 @@ acpi_ps_get_next_namepath(struct acpi_walk_state *walk_state,
*/
if (ACPI_SUCCESS(status) &&
possible_method_call && (node->type == ACPI_TYPE_METHOD)) {
if (walk_state->op->common.aml_opcode == AML_UNLOAD_OP) {
/*
* acpi_ps_get_next_namestring has increased the AML pointer,
* so we need to restore the saved AML pointer for method call.
*/
walk_state->parser_state.aml = start;
walk_state->arg_count = 1;
acpi_ps_init_op(arg, AML_INT_METHODCALL_OP);
return_ACPI_STATUS(AE_OK);
}
/* This name is actually a control method invocation */
@ -686,9 +689,29 @@ acpi_ps_get_next_arg(struct acpi_walk_state *walk_state,
return_ACPI_STATUS(AE_NO_MEMORY);
}
status =
acpi_ps_get_next_namepath(walk_state, parser_state,
arg, 0);
/* To support super_name arg of Unload */
if (walk_state->op->common.aml_opcode == AML_UNLOAD_OP) {
status =
acpi_ps_get_next_namepath(walk_state,
parser_state, arg,
1);
/*
* If the super_name arg of Unload is a method call,
* we have restored the AML pointer, just free this Arg
*/
if (arg->common.aml_opcode ==
AML_INT_METHODCALL_OP) {
acpi_ps_free_op(arg);
arg = NULL;
}
} else {
status =
acpi_ps_get_next_namepath(walk_state,
parser_state, arg,
0);
}
} else {
/* Single complex argument, nothing returned */

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -182,6 +182,7 @@ acpi_ps_build_named_op(struct acpi_walk_state *walk_state,
ACPI_FUNCTION_TRACE_PTR(ps_build_named_op, walk_state);
unnamed_op->common.value.arg = NULL;
unnamed_op->common.arg_list_length = 0;
unnamed_op->common.aml_opcode = walk_state->opcode;
/*
@ -241,7 +242,8 @@ acpi_ps_build_named_op(struct acpi_walk_state *walk_state,
acpi_ps_append_arg(*op, unnamed_op->common.value.arg);
acpi_gbl_depth++;
if ((*op)->common.aml_opcode == AML_REGION_OP) {
if ((*op)->common.aml_opcode == AML_REGION_OP ||
(*op)->common.aml_opcode == AML_DATA_REGION_OP) {
/*
* Defer final parsing of an operation_region body, because we don't
* have enough info in the first pass to parse it correctly (i.e.,
@ -280,6 +282,9 @@ acpi_ps_create_op(struct acpi_walk_state *walk_state,
acpi_status status = AE_OK;
union acpi_parse_object *op;
union acpi_parse_object *named_op = NULL;
union acpi_parse_object *parent_scope;
u8 argument_count;
const struct acpi_opcode_info *op_info;
ACPI_FUNCTION_TRACE_PTR(ps_create_op, walk_state);
@ -320,8 +325,32 @@ acpi_ps_create_op(struct acpi_walk_state *walk_state,
op->named.length = 0;
}
acpi_ps_append_arg(acpi_ps_get_parent_scope
(&(walk_state->parser_state)), op);
if (walk_state->opcode == AML_BANK_FIELD_OP) {
/*
* Backup to beginning of bank_field declaration
* body_length is unknown until we parse the body
*/
op->named.data = aml_op_start;
op->named.length = 0;
}
parent_scope = acpi_ps_get_parent_scope(&(walk_state->parser_state));
acpi_ps_append_arg(parent_scope, op);
if (parent_scope) {
op_info =
acpi_ps_get_opcode_info(parent_scope->common.aml_opcode);
if (op_info->flags & AML_HAS_TARGET) {
argument_count =
acpi_ps_get_argument_count(op_info->type);
if (parent_scope->common.arg_list_length >
argument_count) {
op->common.flags |= ACPI_PARSEOP_TARGET;
}
} else if (parent_scope->common.aml_opcode == AML_INCREMENT_OP) {
op->common.flags |= ACPI_PARSEOP_TARGET;
}
}
if (walk_state->descending_callback != NULL) {
/*
@ -603,13 +632,6 @@ acpi_ps_complete_op(struct acpi_walk_state *walk_state,
acpi_ps_pop_scope(&(walk_state->parser_state), op,
&walk_state->arg_types,
&walk_state->arg_count);
if ((*op)->common.aml_opcode != AML_WHILE_OP) {
status2 = acpi_ds_result_stack_pop(walk_state);
if (ACPI_FAILURE(status2)) {
return_ACPI_STATUS(status2);
}
}
}
/* Close this iteration of the While loop */
@ -640,10 +662,6 @@ acpi_ps_complete_op(struct acpi_walk_state *walk_state,
if (ACPI_FAILURE(status2)) {
return_ACPI_STATUS(status2);
}
status2 = acpi_ds_result_stack_pop(walk_state);
if (ACPI_FAILURE(status2)) {
return_ACPI_STATUS(status2);
}
acpi_ut_delete_generic_state
(acpi_ut_pop_generic_state
@ -1005,7 +1023,8 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
acpi_gbl_depth--;
}
if (op->common.aml_opcode == AML_REGION_OP) {
if (op->common.aml_opcode == AML_REGION_OP ||
op->common.aml_opcode == AML_DATA_REGION_OP) {
/*
* Skip parsing of control method or opregion body,
* because we don't have enough info in the first pass
@ -1030,6 +1049,16 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
(u32) (parser_state->aml - op->named.data);
}
if (op->common.aml_opcode == AML_BANK_FIELD_OP) {
/*
* Backup to beginning of bank_field declaration
*
* body_length is unknown until we parse the body
*/
op->named.length =
(u32) (parser_state->aml - op->named.data);
}
/* This op complete, notify the dispatcher */
if (walk_state->ascending_callback != NULL) {
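Both bank_field additions above reuse the parser's deferral idiom: on the first pass, record where the declaration body starts (op->named.data) and leave the length zero; once the body has been walked, the length is simply how far the AML cursor has moved. A small pointer-arithmetic sketch of that idiom with made-up buffer and field names:

#include <stdio.h>
#include <stdint.h>

struct deferred_op {
        const uint8_t *data;    /* start of the deferred body */
        uint32_t length;        /* unknown until the body has been skipped */
};

int main(void)
{
        uint8_t aml[64] = { 0 };        /* stand-in for the AML byte stream */
        const uint8_t *cursor = aml;
        struct deferred_op op;

        op.data = cursor;               /* backup to beginning of the declaration */
        op.length = 0;

        cursor += 17;                   /* pretend the parser consumed a 17-byte body */
        op.length = (uint32_t)(cursor - op.data);

        printf("deferred body length = %u bytes\n", (unsigned)op.length);
        return 0;
}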

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -49,6 +49,9 @@
#define _COMPONENT ACPI_PARSER
ACPI_MODULE_NAME("psopcode")
static const u8 acpi_gbl_argument_count[] =
{ 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 6 };
/*******************************************************************************
*
* NAME: acpi_gbl_aml_op_info
@ -59,6 +62,7 @@ ACPI_MODULE_NAME("psopcode")
* the operand type.
*
******************************************************************************/
/*
* Summary of opcode types/flags
*
@ -176,6 +180,7 @@ ACPI_MODULE_NAME("psopcode")
AML_CREATE_QWORD_FIELD_OP
******************************************************************************/
/*
* Master Opcode information table. A summary of everything we know about each
* opcode, all in one place.
@ -515,9 +520,10 @@ const struct acpi_opcode_info acpi_gbl_aml_op_info[AML_NUM_OPCODES] = {
AML_TYPE_NAMED_FIELD,
AML_HAS_ARGS | AML_NSOBJECT | AML_NSOPCODE | AML_FIELD),
/* 5F */ ACPI_OP("BankField", ARGP_BANK_FIELD_OP, ARGI_BANK_FIELD_OP,
ACPI_TYPE_ANY, AML_CLASS_NAMED_OBJECT,
ACPI_TYPE_LOCAL_BANK_FIELD, AML_CLASS_NAMED_OBJECT,
AML_TYPE_NAMED_FIELD,
AML_HAS_ARGS | AML_NSOBJECT | AML_NSOPCODE | AML_FIELD),
AML_HAS_ARGS | AML_NSOBJECT | AML_NSOPCODE | AML_FIELD |
AML_DEFER),
/* Internal opcodes that map to invalid AML opcodes */
@ -619,9 +625,9 @@ const struct acpi_opcode_info acpi_gbl_aml_op_info[AML_NUM_OPCODES] = {
AML_TYPE_EXEC_6A_0T_1R, AML_FLAGS_EXEC_6A_0T_1R),
/* 7C */ ACPI_OP("DataTableRegion", ARGP_DATA_REGION_OP,
ARGI_DATA_REGION_OP, ACPI_TYPE_REGION,
AML_CLASS_NAMED_OBJECT, AML_TYPE_NAMED_SIMPLE,
AML_CLASS_NAMED_OBJECT, AML_TYPE_NAMED_COMPLEX,
AML_HAS_ARGS | AML_NSOBJECT | AML_NSOPCODE |
AML_NSNODE | AML_NAMED),
AML_NSNODE | AML_NAMED | AML_DEFER),
/* 7D */ ACPI_OP("[EvalSubTree]", ARGP_SCOPE_OP, ARGI_SCOPE_OP,
ACPI_TYPE_ANY, AML_CLASS_NAMED_OBJECT,
AML_TYPE_NAMED_NO_OBJ,
@ -779,3 +785,25 @@ char *acpi_ps_get_opcode_name(u16 opcode)
#endif
}
/*******************************************************************************
*
* FUNCTION: acpi_ps_get_argument_count
*
* PARAMETERS: op_type - Type associated with the AML opcode
*
* RETURN: Argument count
*
* DESCRIPTION: Obtain the number of expected arguments for an AML opcode
*
******************************************************************************/
u8 acpi_ps_get_argument_count(u32 op_type)
{
if (op_type <= AML_TYPE_EXEC_6A_0T_1R) {
return (acpi_gbl_argument_count[op_type]);
}
return (0);
}
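acpi_ps_get_argument_count above is a plain bounds-checked table lookup: the opcode's type class indexes a small static array of expected argument counts, and anything out of range yields zero. A compilable sketch of the same pattern using the values shown in the hunk; the index values are illustrative, not the real AML_TYPE_* constants:

#include <stdio.h>

static const unsigned char argument_count[] = { 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 6 };
#define MAX_KNOWN_TYPE (sizeof(argument_count) / sizeof(argument_count[0]) - 1)

/* Return the expected argument count for a type index, or 0 if unknown. */
static unsigned int get_argument_count(unsigned int op_type)
{
        if (op_type <= MAX_KNOWN_TYPE)
                return argument_count[op_type];
        return 0;
}

int main(void)
{
        printf("%u %u %u\n", get_argument_count(5), get_argument_count(11),
               get_argument_count(42));        /* prints: 2 6 0 */
        return 0;
}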

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -204,6 +204,8 @@ acpi_ps_complete_this_op(struct acpi_walk_state * walk_state,
AML_BUFFER_OP)
|| (op->common.parent->common.aml_opcode ==
AML_PACKAGE_OP)
|| (op->common.parent->common.aml_opcode ==
AML_BANK_FIELD_OP)
|| (op->common.parent->common.aml_opcode ==
AML_VAR_PACKAGE_OP)) {
replacement_op =
@ -349,19 +351,13 @@ acpi_ps_next_parse_state(struct acpi_walk_state *walk_state,
parser_state->aml = walk_state->aml_last_while;
walk_state->control_state->common.value = FALSE;
status = acpi_ds_result_stack_pop(walk_state);
if (ACPI_SUCCESS(status)) {
status = AE_CTRL_BREAK;
}
status = AE_CTRL_BREAK;
break;
case AE_CTRL_CONTINUE:
parser_state->aml = walk_state->aml_last_while;
status = acpi_ds_result_stack_pop(walk_state);
if (ACPI_SUCCESS(status)) {
status = AE_CTRL_CONTINUE;
}
status = AE_CTRL_CONTINUE;
break;
case AE_CTRL_PENDING:
@ -383,10 +379,7 @@ acpi_ps_next_parse_state(struct acpi_walk_state *walk_state,
* Just close out this package
*/
parser_state->aml = acpi_ps_get_next_package_end(parser_state);
status = acpi_ds_result_stack_pop(walk_state);
if (ACPI_SUCCESS(status)) {
status = AE_CTRL_PENDING;
}
status = AE_CTRL_PENDING;
break;
case AE_CTRL_FALSE:
@ -541,7 +534,7 @@ acpi_status acpi_ps_parse_aml(struct acpi_walk_state *walk_state)
if ((status == AE_ALREADY_EXISTS) &&
(!walk_state->method_desc->method.mutex)) {
ACPI_INFO((AE_INFO,
"Marking method %4.4s as Serialized",
"Marking method %4.4s as Serialized because of AE_ALREADY_EXISTS error",
walk_state->method_node->name.
ascii));
@ -601,6 +594,30 @@ acpi_status acpi_ps_parse_aml(struct acpi_walk_state *walk_state)
* The object is deleted
*/
if (!previous_walk_state->return_desc) {
/*
* In slack mode execution, if there is no return value
* we should implicitly return zero (0) as a default value.
*/
if (acpi_gbl_enable_interpreter_slack &&
!previous_walk_state->
implicit_return_obj) {
previous_walk_state->
implicit_return_obj =
acpi_ut_create_internal_object
(ACPI_TYPE_INTEGER);
if (!previous_walk_state->
implicit_return_obj) {
return_ACPI_STATUS
(AE_NO_MEMORY);
}
previous_walk_state->
implicit_return_obj->
integer.value = 0;
}
/* Restart the calling control method */
status =
acpi_ds_restart_control_method
(walk_state,
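The slack-mode hunk above changes what happens when a called method produces no return object: with acpi_gbl_enable_interpreter_slack set, the dispatcher now synthesizes an Integer 0 instead of handing the caller nothing. A standalone sketch of that policy; the struct and helper below are invented for illustration (ACPICA builds the object with acpi_ut_create_internal_object):

#include <stdio.h>
#include <stdlib.h>

struct return_obj { long long integer; };

static int slack_mode = 1;      /* mirrors acpi_gbl_enable_interpreter_slack */

/* If the method produced nothing and slack mode is on, synthesize Integer 0. */
static struct return_obj *finish_method(struct return_obj *result)
{
        if (result || !slack_mode)
                return result;

        result = calloc(1, sizeof(*result));
        if (!result)
                return NULL;            /* caller maps this to AE_NO_MEMORY */

        result->integer = 0;            /* implicit default return value */
        return result;
}

int main(void)
{
        struct return_obj *r = finish_method(NULL);

        if (r) {
                printf("implicit return = %lld\n", r->integer);
                free(r);
        }
        return 0;
}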

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -171,6 +171,8 @@ acpi_ps_append_arg(union acpi_parse_object *op, union acpi_parse_object *arg)
while (arg) {
arg->common.parent = op;
arg = arg->common.next;
op->common.arg_list_length++;
}
}

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -5,7 +5,7 @@
*****************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -122,7 +122,7 @@ acpi_power_get_context(acpi_handle handle,
}
*resource = acpi_driver_data(device);
if (!resource)
if (!*resource)
return -ENODEV;
return 0;
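The one-line power.c change above is easy to misread: resource is an output parameter (a pointer to a pointer), so !resource only tests the caller's slot, which is valid here by construction; the value that can legitimately be NULL is *resource, the driver data just stored into it. A tiny compilable illustration of the difference, with made-up names:

#include <stdio.h>
#include <stddef.h>

static void *fake_driver_data;          /* pretend acpi_driver_data() returned NULL */

static int get_context(void **resource)
{
        if (!resource)                  /* guards only the caller-supplied slot */
                return -1;

        *resource = fake_driver_data;
        if (!*resource)                 /* this is the check the patch corrects */
                return -19;             /* -ENODEV */
        return 0;
}

int main(void)
{
        void *res = NULL;

        printf("status = %d\n", get_context(&res));     /* prints -19, not 0 */
        return 0;
}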

View file

@ -674,22 +674,21 @@ static int __cpuinit acpi_processor_start(struct acpi_device *device)
result = PTR_ERR(pr->cdev);
goto end;
}
if (pr->cdev) {
printk(KERN_INFO PREFIX
"%s is registered as cooling_device%d\n",
device->dev.bus_id, pr->cdev->id);
result = sysfs_create_link(&device->dev.kobj,
&pr->cdev->device.kobj,
"thermal_cooling");
if (result)
return result;
result = sysfs_create_link(&pr->cdev->device.kobj,
&device->dev.kobj,
"device");
if (result)
return result;
}
printk(KERN_INFO PREFIX
"%s is registered as cooling_device%d\n",
device->dev.bus_id, pr->cdev->id);
result = sysfs_create_link(&device->dev.kobj,
&pr->cdev->device.kobj,
"thermal_cooling");
if (result)
printk(KERN_ERR PREFIX "Create sysfs link\n");
result = sysfs_create_link(&pr->cdev->device.kobj,
&device->dev.kobj,
"device");
if (result)
printk(KERN_ERR PREFIX "Create sysfs link\n");
if (pr->flags.throttling) {
printk(KERN_INFO PREFIX "%s [%s] (supports",

View file

@ -847,6 +847,7 @@ static int acpi_processor_get_power_info_default(struct acpi_processor *pr)
/* all processors need to support C1 */
pr->power.states[ACPI_STATE_C1].type = ACPI_STATE_C1;
pr->power.states[ACPI_STATE_C1].valid = 1;
pr->power.states[ACPI_STATE_C1].entry_method = ACPI_CSTATE_HALT;
}
/* the C0 state only exists as a filler in our array */
pr->power.states[ACPI_STATE_C0].valid = 1;
@ -959,6 +960,9 @@ static int acpi_processor_get_power_info_cst(struct acpi_processor *pr)
cx.address);
}
if (cx.type == ACPI_STATE_C1) {
cx.valid = 1;
}
obj = &(element->package.elements[2]);
if (obj->type != ACPI_TYPE_INTEGER)
@ -1295,6 +1299,8 @@ int acpi_processor_cst_has_changed(struct acpi_processor *pr)
{
int result = 0;
if (boot_option_idle_override)
return 0;
if (!pr)
return -EINVAL;
@ -1734,6 +1740,9 @@ int acpi_processor_cst_has_changed(struct acpi_processor *pr)
{
int ret;
if (boot_option_idle_override)
return 0;
if (!pr)
return -EINVAL;
@ -1764,6 +1773,8 @@ int __cpuinit acpi_processor_power_init(struct acpi_processor *pr,
struct proc_dir_entry *entry = NULL;
unsigned int i;
if (boot_option_idle_override)
return 0;
if (!first_run) {
dmi_check_system(processor_power_dmi_table);
@ -1799,7 +1810,7 @@ int __cpuinit acpi_processor_power_init(struct acpi_processor *pr,
* Note that we use previously set idle handler will be used on
* platforms that only support C1.
*/
if ((pr->flags.power) && (!boot_option_idle_override)) {
if (pr->flags.power) {
#ifdef CONFIG_CPU_IDLE
acpi_processor_setup_cpuidle(pr);
pr->power.dev.cpu = pr->id;
@ -1835,8 +1846,11 @@ int __cpuinit acpi_processor_power_init(struct acpi_processor *pr,
int acpi_processor_power_exit(struct acpi_processor *pr,
struct acpi_device *device)
{
if (boot_option_idle_override)
return 0;
#ifdef CONFIG_CPU_IDLE
if ((pr->flags.power) && (!boot_option_idle_override))
if (pr->flags.power)
cpuidle_unregister_device(&pr->power.dev);
#endif
pr->flags.power_setup_done = 0;
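The idle= interaction fix visible above follows one pattern: instead of a single registration-time test, every entry point of the C-state code (init, exit, and the _CST change notifier) now returns early when boot_option_idle_override is set, keeping the setup and teardown paths symmetric. A minimal sketch of that guard pattern with placeholder bodies:

#include <stdio.h>

static int boot_option_idle_override;   /* set when "idle=" is on the command line */

static int power_init(void)
{
        if (boot_option_idle_override)
                return 0;               /* nothing to set up, nothing to tear down */
        printf("registering cpuidle device\n");
        return 0;
}

static int power_exit(void)
{
        if (boot_option_idle_override)
                return 0;
        printf("unregistering cpuidle device\n");
        return 0;
}

int main(void)
{
        boot_option_idle_override = 1;  /* e.g. idle=poll */
        power_init();
        power_exit();                   /* both become symmetric no-ops */
        return 0;
}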

View file

@ -5,7 +5,7 @@
******************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -5,7 +5,7 @@
******************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -73,7 +73,7 @@ acpi_rs_stream_option_length(u32 resource_length, u32 minimum_total_length);
static u8 acpi_rs_count_set_bits(u16 bit_field)
{
u8 bits_set;
acpi_native_uint bits_set;
ACPI_FUNCTION_ENTRY();
@ -81,10 +81,10 @@ static u8 acpi_rs_count_set_bits(u16 bit_field)
/* Zero the least significant bit that is set */
bit_field &= (bit_field - 1);
bit_field &= (u16) (bit_field - 1);
}
return (bits_set);
return ((u8) bits_set);
}
/*******************************************************************************
@ -211,6 +211,24 @@ acpi_rs_get_aml_length(struct acpi_resource * resource, acpi_size * size_needed)
* variable-length fields
*/
switch (resource->type) {
case ACPI_RESOURCE_TYPE_IRQ:
/* Length can be 3 or 2 */
if (resource->data.irq.descriptor_length == 2) {
total_size--;
}
break;
case ACPI_RESOURCE_TYPE_START_DEPENDENT:
/* Length can be 1 or 0 */
if (resource->data.irq.descriptor_length == 0) {
total_size--;
}
break;
case ACPI_RESOURCE_TYPE_VENDOR:
/*
* Vendor Defined Resource:
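acpi_rs_count_set_bits above is the classic clear-the-lowest-set-bit loop: each pass of bit_field &= bit_field - 1 removes exactly one set bit, so the number of passes equals the population count of the IRQ/DMA mask. A standalone version of the same loop:

#include <stdio.h>
#include <stdint.h>

/* Count set bits: each iteration clears the least significant 1 bit. */
static uint8_t count_set_bits(uint16_t bit_field)
{
        uint8_t bits_set;

        for (bits_set = 0; bit_field; bits_set++)
                bit_field &= (uint16_t)(bit_field - 1);

        return bits_set;
}

int main(void)
{
        /* bits 3, 5, and 10 are set -> prints 3 */
        printf("%u\n", (unsigned)count_set_bits(0x0428));
        return 0;
}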

View file

@ -5,7 +5,7 @@
******************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -5,7 +5,7 @@
******************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -87,8 +87,10 @@ acpi_rs_dump_descriptor(void *resource, struct acpi_rsdump_info *table);
*
******************************************************************************/
struct acpi_rsdump_info acpi_rs_dump_irq[6] = {
struct acpi_rsdump_info acpi_rs_dump_irq[7] = {
{ACPI_RSD_TITLE, ACPI_RSD_TABLE_SIZE(acpi_rs_dump_irq), "IRQ", NULL},
{ACPI_RSD_UINT8, ACPI_RSD_OFFSET(irq.descriptor_length),
"Descriptor Length", NULL},
{ACPI_RSD_1BITFLAG, ACPI_RSD_OFFSET(irq.triggering), "Triggering",
acpi_gbl_he_decode},
{ACPI_RSD_1BITFLAG, ACPI_RSD_OFFSET(irq.polarity), "Polarity",
@ -115,9 +117,11 @@ struct acpi_rsdump_info acpi_rs_dump_dma[6] = {
NULL}
};
struct acpi_rsdump_info acpi_rs_dump_start_dpf[3] = {
struct acpi_rsdump_info acpi_rs_dump_start_dpf[4] = {
{ACPI_RSD_TITLE, ACPI_RSD_TABLE_SIZE(acpi_rs_dump_start_dpf),
"Start-Dependent-Functions", NULL},
{ACPI_RSD_UINT8, ACPI_RSD_OFFSET(start_dpf.descriptor_length),
"Descriptor Length", NULL},
{ACPI_RSD_2BITFLAG, ACPI_RSD_OFFSET(start_dpf.compatibility_priority),
"Compatibility Priority", acpi_gbl_config_decode},
{ACPI_RSD_2BITFLAG, ACPI_RSD_OFFSET(start_dpf.performance_robustness),

View file

@ -5,7 +5,7 @@
******************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -5,7 +5,7 @@
******************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -185,7 +185,7 @@ struct acpi_rsconvert_info acpi_rs_convert_end_tag[2] = {
*
******************************************************************************/
struct acpi_rsconvert_info acpi_rs_get_start_dpf[5] = {
struct acpi_rsconvert_info acpi_rs_get_start_dpf[6] = {
{ACPI_RSC_INITGET, ACPI_RESOURCE_TYPE_START_DEPENDENT,
ACPI_RS_SIZE(struct acpi_resource_start_dependent),
ACPI_RSC_TABLE_SIZE(acpi_rs_get_start_dpf)},
@ -196,6 +196,12 @@ struct acpi_rsconvert_info acpi_rs_get_start_dpf[5] = {
ACPI_ACCEPTABLE_CONFIGURATION,
2},
/* Get the descriptor length (0 or 1 for Start Dpf descriptor) */
{ACPI_RSC_1BITFLAG, ACPI_RS_OFFSET(data.start_dpf.descriptor_length),
AML_OFFSET(start_dpf.descriptor_type),
0},
/* All done if there is no flag byte present in the descriptor */
{ACPI_RSC_EXIT_NE, ACPI_RSC_COMPARE_AML_LENGTH, 0, 1},
@ -219,7 +225,9 @@ struct acpi_rsconvert_info acpi_rs_get_start_dpf[5] = {
*
******************************************************************************/
struct acpi_rsconvert_info acpi_rs_set_start_dpf[6] = {
struct acpi_rsconvert_info acpi_rs_set_start_dpf[10] = {
/* Start with a default descriptor of length 1 */
{ACPI_RSC_INITSET, ACPI_RESOURCE_NAME_START_DEPENDENT,
sizeof(struct aml_resource_start_dependent),
ACPI_RSC_TABLE_SIZE(acpi_rs_set_start_dpf)},
@ -235,6 +243,33 @@ struct acpi_rsconvert_info acpi_rs_set_start_dpf[6] = {
ACPI_RS_OFFSET(data.start_dpf.performance_robustness),
AML_OFFSET(start_dpf.flags),
2},
/*
* All done if the output descriptor length is required to be 1
* (i.e., optimization to 0 bytes cannot be attempted)
*/
{ACPI_RSC_EXIT_EQ, ACPI_RSC_COMPARE_VALUE,
ACPI_RS_OFFSET(data.start_dpf.descriptor_length),
1},
/* Set length to 0 bytes (no flags byte) */
{ACPI_RSC_LENGTH, 0, 0,
sizeof(struct aml_resource_start_dependent_noprio)},
/*
* All done if the output descriptor length is required to be 0.
*
* TBD: Perhaps we should check for error if input flags are not
* compatible with a 0-byte descriptor.
*/
{ACPI_RSC_EXIT_EQ, ACPI_RSC_COMPARE_VALUE,
ACPI_RS_OFFSET(data.start_dpf.descriptor_length),
0},
/* Reset length to 1 byte (descriptor with flags byte) */
{ACPI_RSC_LENGTH, 0, 0, sizeof(struct aml_resource_start_dependent)},
/*
* All done if flags byte is necessary -- if either priority value
* is not ACPI_ACCEPTABLE_CONFIGURATION
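The enlarged acpi_rs_set_start_dpf table expresses a small decision procedure: honor an explicitly requested descriptor_length of 1 or 0, and otherwise drop the priority byte only when both priority fields are "acceptable configuration". The same decision written as straight-line C; the constant value and the fall-through for an unspecified length are my reading of the table, not ACPICA source:

#include <stdio.h>

#define ACCEPTABLE_CONFIGURATION 1      /* placeholder for ACPI_ACCEPTABLE_CONFIGURATION */

struct start_dpf {
        unsigned char descriptor_length;        /* 0 = no priority byte, 1 = byte present */
        unsigned char compatibility_priority;
        unsigned char performance_robustness;
};

/* Return the AML descriptor body length (0 or 1 byte) to emit. */
static unsigned int start_dpf_aml_length(const struct start_dpf *r)
{
        if (r->descriptor_length == 1)
                return 1;               /* caller requires the flags byte */
        if (r->descriptor_length == 0)
                return 0;               /* caller requires the short form */

        /* No requirement: optimize away the byte only for all-default priorities. */
        if (r->compatibility_priority == ACCEPTABLE_CONFIGURATION &&
            r->performance_robustness == ACCEPTABLE_CONFIGURATION)
                return 0;
        return 1;
}

int main(void)
{
        struct start_dpf a = { 2, ACCEPTABLE_CONFIGURATION, ACCEPTABLE_CONFIGURATION };
        struct start_dpf b = { 2, 0, ACCEPTABLE_CONFIGURATION };

        printf("%u %u\n", start_dpf_aml_length(&a), start_dpf_aml_length(&b)); /* 0 1 */
        return 0;
}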

View file

@ -5,7 +5,7 @@
******************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -52,7 +52,7 @@ ACPI_MODULE_NAME("rsirq")
* acpi_rs_get_irq
*
******************************************************************************/
struct acpi_rsconvert_info acpi_rs_get_irq[7] = {
struct acpi_rsconvert_info acpi_rs_get_irq[8] = {
{ACPI_RSC_INITGET, ACPI_RESOURCE_TYPE_IRQ,
ACPI_RS_SIZE(struct acpi_resource_irq),
ACPI_RSC_TABLE_SIZE(acpi_rs_get_irq)},
@ -69,6 +69,12 @@ struct acpi_rsconvert_info acpi_rs_get_irq[7] = {
ACPI_EDGE_SENSITIVE,
1},
/* Get the descriptor length (2 or 3 for IRQ descriptor) */
{ACPI_RSC_2BITFLAG, ACPI_RS_OFFSET(data.irq.descriptor_length),
AML_OFFSET(irq.descriptor_type),
0},
/* All done if no flag byte present in descriptor */
{ACPI_RSC_EXIT_NE, ACPI_RSC_COMPARE_AML_LENGTH, 0, 3},
@ -94,7 +100,9 @@ struct acpi_rsconvert_info acpi_rs_get_irq[7] = {
*
******************************************************************************/
struct acpi_rsconvert_info acpi_rs_set_irq[9] = {
struct acpi_rsconvert_info acpi_rs_set_irq[13] = {
/* Start with a default descriptor of length 3 */
{ACPI_RSC_INITSET, ACPI_RESOURCE_NAME_IRQ,
sizeof(struct aml_resource_irq),
ACPI_RSC_TABLE_SIZE(acpi_rs_set_irq)},
@ -105,7 +113,7 @@ struct acpi_rsconvert_info acpi_rs_set_irq[9] = {
AML_OFFSET(irq.irq_mask),
ACPI_RS_OFFSET(data.irq.interrupt_count)},
/* Set the flags byte by default */
/* Set the flags byte */
{ACPI_RSC_1BITFLAG, ACPI_RS_OFFSET(data.irq.triggering),
AML_OFFSET(irq.flags),
@ -118,6 +126,33 @@ struct acpi_rsconvert_info acpi_rs_set_irq[9] = {
{ACPI_RSC_1BITFLAG, ACPI_RS_OFFSET(data.irq.sharable),
AML_OFFSET(irq.flags),
4},
/*
* All done if the output descriptor length is required to be 3
* (i.e., optimization to 2 bytes cannot be attempted)
*/
{ACPI_RSC_EXIT_EQ, ACPI_RSC_COMPARE_VALUE,
ACPI_RS_OFFSET(data.irq.descriptor_length),
3},
/* Set length to 2 bytes (no flags byte) */
{ACPI_RSC_LENGTH, 0, 0, sizeof(struct aml_resource_irq_noflags)},
/*
* All done if the output descriptor length is required to be 2.
*
* TBD: Perhaps we should check for error if input flags are not
* compatible with a 2-byte descriptor.
*/
{ACPI_RSC_EXIT_EQ, ACPI_RSC_COMPARE_VALUE,
ACPI_RS_OFFSET(data.irq.descriptor_length),
2},
/* Reset length to 3 bytes (descriptor with flags byte) */
{ACPI_RSC_LENGTH, 0, 0, sizeof(struct aml_resource_irq)},
/*
* Check if the flags byte is necessary. Not needed if the flags are:
* ACPI_EDGE_SENSITIVE, ACPI_ACTIVE_HIGH, ACPI_EXCLUSIVE
@ -134,7 +169,7 @@ struct acpi_rsconvert_info acpi_rs_set_irq[9] = {
ACPI_RS_OFFSET(data.irq.sharable),
ACPI_EXCLUSIVE},
/* irq_no_flags() descriptor can be used */
/* We can optimize to a 2-byte irq_no_flags() descriptor */
{ACPI_RSC_LENGTH, 0, 0, sizeof(struct aml_resource_irq_noflags)}
};
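For IRQ descriptors the comment above gives the optimization criterion directly: the third (flags) byte may be dropped only when the resource uses the descriptor defaults, edge-triggered, active-high, exclusive. A compilable predicate capturing that rule, with illustrative constant values standing in for the ACPI_* defines:

#include <stdio.h>
#include <stdbool.h>

enum { EDGE_SENSITIVE = 1, LEVEL_SENSITIVE = 0 };       /* placeholder values */
enum { ACTIVE_HIGH = 0, ACTIVE_LOW = 1 };
enum { EXCLUSIVE = 0, SHARED = 1 };

struct irq_resource {
        unsigned char triggering;
        unsigned char polarity;
        unsigned char sharable;
};

/* A 2-byte irq_no_flags() descriptor is legal only for the default settings. */
static bool can_omit_flags_byte(const struct irq_resource *r)
{
        return r->triggering == EDGE_SENSITIVE &&
               r->polarity == ACTIVE_HIGH &&
               r->sharable == EXCLUSIVE;
}

int main(void)
{
        struct irq_resource isa_default = { EDGE_SENSITIVE, ACTIVE_HIGH, EXCLUSIVE };
        struct irq_resource pci_style = { LEVEL_SENSITIVE, ACTIVE_LOW, SHARED };

        printf("%d %d\n", can_omit_flags_byte(&isa_default),
               can_omit_flags_byte(&pci_style));        /* prints: 1 0 */
        return 0;
}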

View file

@ -5,7 +5,7 @@
******************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -5,7 +5,7 @@
******************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without

View file

@ -5,7 +5,7 @@
******************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -497,6 +497,17 @@ acpi_rs_convert_resource_to_aml(struct acpi_resource *resource,
}
break;
case ACPI_RSC_EXIT_EQ:
/*
* Control - Exit conversion if equal
*/
if (*ACPI_ADD_PTR(u8, resource,
COMPARE_TARGET(info)) ==
COMPARE_VALUE(info)) {
goto exit;
}
break;
default:
ACPI_ERROR((AE_INFO, "Invalid conversion opcode"));
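ACPI_RSC_EXIT_EQ adds one more control opcode to the small table-driven interpreter that walks acpi_rsconvert_info entries: compare a byte of the resource against an immediate value and stop the conversion early on a match, which is how the descriptor-length checks in the IRQ and start-dependent tables terminate. A toy sketch of such an interpreter; the opcodes, step layout, and tables are invented for illustration only:

#include <stdio.h>
#include <stdint.h>

enum opcode { OP_MOVE8, OP_EXIT_EQ, OP_END };

struct step {
        enum opcode op;
        uint8_t resource_offset;        /* byte offset into the resource struct */
        uint8_t aml_offset;             /* byte offset into the AML buffer */
        uint8_t value;                  /* immediate operand for OP_EXIT_EQ */
};

static void convert(const struct step *table, const uint8_t *resource, uint8_t *aml)
{
        for (; table->op != OP_END; table++) {
                switch (table->op) {
                case OP_MOVE8:
                        aml[table->aml_offset] = resource[table->resource_offset];
                        break;
                case OP_EXIT_EQ:
                        /* Stop converting once the descriptor is known to be complete. */
                        if (resource[table->resource_offset] == table->value)
                                return;
                        break;
                default:
                        return;
                }
        }
}

int main(void)
{
        const struct step table[] = {
                { OP_MOVE8, 0, 0, 0 },
                { OP_EXIT_EQ, 1, 0, 3 },        /* bail out if resource[1] == 3 */
                { OP_MOVE8, 1, 1, 0 },
                { OP_END, 0, 0, 0 },
        };
        uint8_t resource[2] = { 0xAA, 3 };
        uint8_t aml[2] = { 0, 0 };

        convert(table, resource, aml);
        printf("aml = %02X %02X\n", (unsigned)aml[0], (unsigned)aml[1]);
        /* prints: AA 00 -- the second move was skipped by the early exit */
        return 0;
}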

View file

@ -5,7 +5,7 @@
******************************************************************************/
/*
* Copyright (C) 2000 - 2007, R. Byron Moore
* Copyright (C) 2000 - 2008, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@ -97,17 +97,17 @@ u8 acpi_rs_decode_bitmask(u16 mask, u8 * list)
u16 acpi_rs_encode_bitmask(u8 * list, u8 count)
{
acpi_native_uint i;
u16 mask;
acpi_native_uint mask;
ACPI_FUNCTION_ENTRY();
/* Encode the list into a single bitmask */
for (i = 0, mask = 0; i < count; i++) {
mask |= (0x0001 << list[i]);
mask |= (0x1 << list[i]);
}
return (mask);
return ((u16) mask);
}
/*******************************************************************************
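acpi_rs_encode_bitmask above just ORs 1 << channel for every list entry, now accumulating in a native-width temporary and truncating to u16 on return. A standalone round trip of the encode side with a concrete value:

#include <stdio.h>
#include <stdint.h>

/* Build a 16-bit IRQ/DMA mask from a list of channel numbers. */
static uint16_t encode_bitmask(const uint8_t *list, uint8_t count)
{
        uint32_t mask = 0;      /* widened, then truncated, as in the hunk above */
        uint8_t i;

        for (i = 0; i < count; i++)
                mask |= (uint32_t)1 << list[i];

        return (uint16_t)mask;
}

int main(void)
{
        const uint8_t irqs[] = { 3, 5, 10 };

        printf("mask = 0x%04X\n", (unsigned)encode_bitmask(irqs, 3));   /* 0x0428 */
        return 0;
}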

Some files were not shown because too many files changed in this diff.