Merge branch 'x86/core' into core/percpu

This commit is contained in:
Ingo Molnar 2009-03-04 02:29:19 +01:00
Parents: 9976b39b50 8b0e5860cb
Commit: 91d75e209b
373 changed files with 14129 additions and 3200 deletions

View file

@ -1,3 +1,46 @@
What: /sys/bus/pci/drivers/.../bind
Date: December 2003
Contact: linux-pci@vger.kernel.org
Description:
Writing a device location to this file will cause
the driver to attempt to bind to the device found at
this location. This is useful for overriding default
bindings. The format for the location is: DDDD:BB:DD.F.
That is Domain:Bus:Device.Function and is the same as
found in /sys/bus/pci/devices/. For example:
# echo 0000:00:19.0 > /sys/bus/pci/drivers/foo/bind
(Note: kernels before 2.6.28 may require echo -n).
What: /sys/bus/pci/drivers/.../unbind
Date: December 2003
Contact: linux-pci@vger.kernel.org
Description:
Writing a device location to this file will cause the
driver to attempt to unbind from the device found at
this location. This may be useful when overriding default
bindings. The format for the location is: DDDD:BB:DD.F.
That is Domain:Bus:Device.Function and is the same as
found in /sys/bus/pci/devices/. For example:
# echo 0000:00:19.0 > /sys/bus/pci/drivers/foo/unbind
(Note: kernels before 2.6.28 may require echo -n).
What: /sys/bus/pci/drivers/.../new_id
Date: December 2003
Contact: linux-pci@vger.kernel.org
Description:
Writing a device ID to this file will attempt to
dynamically add a new device ID to a PCI device driver.
This may allow the driver to support more hardware than
was included in the driver's static device ID support
table at compile time. The format for the device ID is:
VVVV DDDD SVVV SDDD CCCC MMMM PPPP. That is Vendor ID,
Device ID, Subsystem Vendor ID, Subsystem Device ID,
Class, Class Mask, and Private Driver Data. The Vendor ID
and Device ID fields are required, the rest are optional.
Upon successfully adding an ID, the driver will probe
for the device and attempt to bind to it. For example:
# echo "8086 10f5" > /sys/bus/pci/drivers/foo/new_id
What: /sys/bus/pci/devices/.../vpd
Date: February 2008
Contact: Ben Hutchings <bhutchings@solarflare.com>

View file

@ -1,205 +0,0 @@
This README accompanied the skystar2-driver rewriting procedure. It describes the
state of the new flexcop-driver set, and some internals are written down here
too.
This document hopefully describes things about the flexcop and its
device offspring. The goal was to write an easy-to-write and easy-to-read set of
drivers based on skystar2.c and other information.
Remark: flexcop-pci.c was a copy of skystar2.c, but every line has been
touched and rewritten.
History & News
==============
2005-04-01 - correct USB ISOC transfers (thanks to Vadim Catana)
General coding processing
=========================
We should proceed as follows (as long as no one complains):
0) Think before starting to write code!
1) rewriting the skystar2.c with the help of the flexcop register descriptions
and splitting up the files to a pci-bus-part and a flexcop-part.
The new driver will be called b2c2-flexcop-pci.ko/b2c2-flexcop-usb.ko for the
device-specific part and b2c2-flexcop.ko for the common flexcop-functions.
2) Search for errors in the leftover of flexcop-pci.c (compare with pluto2.c
and other pci drivers)
3) make some beautification (see 'Improvements when rewriting (refactoring) is
done')
4) Testing the new driver and maybe substituting skystar2.c with it, to reach
a wider tester audience.
5) creating a usb-bus-part using the already written flexcop code for the pci
card.
Idea: create a kernel object for the flexcop and export all important
functions. This option saves kernel memory, but a lot of functions may have
to be exported to the kernel namespace.
Current situation
=================
0) Done :)
1) Done (some minor issues left)
2) Done
3) Not ready yet, more information is necessary
4) next to be done (see the table below)
5) USB driver is working (yes, there are some minor issues)
What seems to be ready?
-----------------------
1) Rewriting
1a) i2c is cut off from the flexcop-pci.c and seems to work
1b) moved tuner and demod stuff from flexcop-pci.c to flexcop-tuner-fe.c
1c) moved lnb and diseqc stuff from flexcop-pci.c to flexcop-tuner-fe.c
1e) eeprom (reading MAC address)
1d) sram (no dynamic sll size detection (commented out) (using default as JJ told me))
1f) misc. register accesses for reading parameters (e.g. resetting, revision)
1g) pid/mac filter (flexcop-hw-filter.c)
1i) dvb-stuff initialization in flexcop.c (done)
1h) dma stuff (now just using the size-irq, instead of all-together, to be done)
1j) remove flexcop initialization from flexcop-pci.c completely (done)
1l) use a well working dma IRQ method (done, see 'Known bugs and problems and TODO')
1k) cleanup flexcop-files (remove unused EXPORT_SYMBOLs, make static from
non-static where possible, moved code to proper places)
2) Search for errors in the leftover of flexcop-pci.c (partially done)
5a) add MAC address reading
5c) feeding of ISOC data to the software demux (format of the isochronous data
and speed optimization, no real error) (thanks to Vadim Catana)
What to do in the near future?
--------------------------------------
(no special order here)
5) USB driver
5b) optimize isoc-transfer (submitting/killing isoc URBs when transfer is starting)
Testing changes
---------------
O = item is working
P = item is partially working
X = item is not working
N = item does not apply here
<empty field> = item needs to be examined
| PCI | USB
item | mt352 | nxt2002 | stv0299 | mt312 | mt352 | nxt2002 | stv0299 | mt312
-------+-------+---------+---------+-------+-------+---------+---------+-------
1a) | O | | | | N | N | N | N
1b) | O | | | | | | O |
1c) | N | N | | | N | N | O |
1d) | O | O
1e) | O | O
1f) | P
1g) | O
1h) | P |
1i) | O | N
1j) | O | N
1l) | O | N
2) | O | N
5a) | N | O
5b)* | N |
5c) | N | O
* - not done yet
Known bugs and problems and TODO
--------------------------------
1g/h/l) when pid filtering is enabled on the pci card
DMA usage currently:
The DMA buffer is split into 2 equal-sized subbuffers. The Flexcop writes to the
first address and triggers an IRQ when it is full, then starts writing to the
second address. When the second address is full, the IRQ is triggered again,
and the Flexcop writes to the first address again, and so on.
The buffer size at each address is currently 640*188 bytes.
The problem is that when using hw pid filtering and doing some low-bandwidth
operation (like scanning), the buffers won't be filled enough to trigger
the IRQ. That's why:
When PID filtering is activated, the timer IRQ is used. The IRQ triggers every
1.97 ms. If the current write address of DMA1 differs from the one at the
last IRQ, the data is passed to the demuxer.
There is an additional DMA IRQ method: the packet count IRQ. This isn't
implemented correctly yet.
The solution is to disable HW PID filtering, but I don't know how the DVB
API software demux behaves on slow systems with a 45 Mbit/s TS.
Solved bugs :)
--------------
1g) pid-filtering (somehow pid indexes 4 and 5 (EMM_PID and ECM_PID) weren't
working)
SOLUTION: index 0 was also affected, because net_translation is done for
these indexes by default
5b) isochronous transfer only works on the first attempt (for the Sky2PC
USB; the Air2PC is working). SOLUTION: the flexcop was going to sleep and never
really woke up again (don't know if this needs fixes, see
flexcop-fe-tuner.c:flexcop_sleep)
NEWS: when the driver is loaded, unloaded, and loaded again (without doing
anything while the driver is loaded the first time), no transfers take
place anymore.
Improvements when rewriting (refactoring) is done
=================================================
- split sleeping of the flexcop (misc_204.ACPI3_sig = 1;) from lnb_control
(enable sleeping for other demods than dvb-s)
- add support for CableStar (stv0297 Microtune 203x/ALPS) (almost done, incompatibilities with the Nexus-CA)
Debugging
---------
- add verbose debugging to skystar2.c (dump the reg_dw_data) and compare it
with this flexcop; this is important, because i2c is now using the
flexcop_ibi_value union from flexcop-reg.h (if you have a better idea for
that, please tell us).
Everything which is identical in the following table, can be put into a common
flexcop-module.
PCI USB
-------------------------------------------------------------------------------
Different:
Register access: accessing IO memory USB control message
I2C bus: I2C bus of the FC USB control message
Data transfer: DMA isochronous transfer
EEPROM transfer: through i2c bus not clear yet
Identical:
Streaming: accessing registers
PID Filtering: accessing registers
Sram destinations: accessing registers
Tuner/Demod: I2C bus
DVB-stuff: can be written for common use
Acknowledgements (just for the rewriting part)
==============================================
Bjarne Steinsbo thought a lot in the first place of the pci part for this code
sharing idea.
Andreas Oberritter for providing a recent PCI initialization template
(pluto2.c).
Boleslaw Ciesielski for pointing out a problem with firmware loader.
Vadim Catana for correcting the USB transfer.
Comments, critiques and ideas to linux-dvb@linuxtv.org.

View file

@ -1,5 +1,5 @@
How to set up the Technisat/B2C2 Flexcop devices
================================================
1) Find out what device you have
================================
@ -16,54 +16,60 @@ DVB: registering frontend 0 (Conexant CX24123/CX24109)...
If the Technisat is the only TV device in your box get rid of unnecessary modules and check this one:
"Multimedia devices" => "Customise analog and hybrid tuner modules to build"
In this directory uncheck every driver which is activated there (except "Simple tuner support" for case 9 only).
Then please activate:
2a) Main module part:
a.)"Multimedia devices" => "DVB/ATSC adapters" => "Technisat/B2C2 FlexcopII(b) and FlexCopIII adapters"
b.)"Multimedia devices" => "DVB/ATSC adapters" => "Technisat/B2C2 FlexcopII(b) and FlexCopIII adapters" => "Technisat/B2C2 Air/Sky/Cable2PC PCI" in case of a PCI card
OR
c.)"Multimedia devices" => "DVB/ATSC adapters" => "Technisat/B2C2 FlexcopII(b) and FlexCopIII adapters" => "Technisat/B2C2 Air/Sky/Cable2PC USB" in case of a USB 1.1 adapter
d.)"Multimedia devices" => "DVB/ATSC adapters" => "Technisat/B2C2 FlexcopII(b) and FlexCopIII adapters" => "Enable debug for the B2C2 FlexCop drivers"
Notice: d.) is helpful for troubleshooting
2b) Frontend module part:
1.) SkyStar DVB-S Revision 2.3:
a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build"
b.)"Multimedia devices" => "Customise DVB frontends" => "Zarlink VP310/MT312/ZL10313 based"
2.) SkyStar DVB-S Revision 2.6:
a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build"
b.)"Multimedia devices" => "Customise DVB frontends" => "ST STV0299 based"
3.) SkyStar DVB-S Revision 2.7:
a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build"
b.)"Multimedia devices" => "Customise DVB frontends" => "Samsung S5H1420 based"
c.)"Multimedia devices" => "Customise DVB frontends" => "Integrant ITD1000 Zero IF tuner for DVB-S/DSS"
d.)"Multimedia devices" => "Customise DVB frontends" => "ISL6421 SEC controller"
4.) SkyStar DVB-S Revision 2.8:
a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build"
b.)"Multimedia devices" => "Customise DVB frontends" => "Conexant CX24113/CX24128 tuner for DVB-S/DSS"
c.)"Multimedia devices" => "Customise DVB frontends" => "Conexant CX24123 based"
d.)"Multimedia devices" => "Customise DVB frontends" => "ISL6421 SEC controller"
5.) AirStar DVB-T card:
a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build"
b.)"Multimedia devices" => "Customise DVB frontends" => "Zarlink MT352 based"
6.) CableStar DVB-C card:
a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build"
b.)"Multimedia devices" => "Customise DVB frontends" => "ST STV0297 based"
7.) AirStar ATSC card 1st generation:
a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build"
b.)"Multimedia devices" => "Customise DVB frontends" => "Broadcom BCM3510"
8.) AirStar ATSC card 2nd generation:
a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build"
b.)"Multimedia devices" => "Customise DVB frontends" => "NxtWave Communications NXT2002/NXT2004 based"
c.)"Multimedia devices" => "Customise DVB frontends" => "Generic I2C PLL based tuners"
9.) AirStar ATSC card 3rd generation:
a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build"
b.)"Multimedia devices" => "Customise DVB frontends" => "LG Electronics LGDT3302/LGDT3303 based"
c.)"Multimedia devices" => "Customise analog and hybrid tuner modules to build" => "Simple tuner support"
Author: Uwe Bugla <uwe.bugla@gmx.de> February 2009

View file

@ -868,8 +868,10 @@ and is between 256 and 4096 characters. It is defined in the file
icn= [HW,ISDN]
Format: <io>[,<membase>[,<icn_id>[,<icn_id2>]]]
ide-core.nodma= [HW] (E)IDE subsystem
Format: =0.0 to prevent dma on hda, =0.1 hdb =1.0 hdc
.vlb_clock .pci_clock .noflush .noprobe .nowerr .cdrom
.chs .ignore_cable are additional options
See Documentation/ide/ide.txt.
idebus= [HW] (E)IDE subsystem - VLB/PCI bus speed
@ -1308,8 +1310,13 @@ and is between 256 and 4096 characters. It is defined in the file
memtest= [KNL,X86] Enable memtest
Format: <integer>
default : 0 <disable>
Specifies the number of memtest passes to be
performed. Each pass selects another test
pattern from a given set of patterns. Memtest
fills the memory with this pattern, validates
memory contents and reserves bad memory
regions that are detected.
meye.*= [HW] Set MotionEye Camera parameters
See Documentation/video4linux/meye.txt.

View file

@ -4,7 +4,7 @@ Introduction
============ ============
The Chelsio T3 ASIC based Adapters (S310, S320, S302, S304, Mezz cards, etc.
series of products) support iSCSI acceleration and iSCSI Direct Data Placement
(DDP) where the hardware handles the expensive byte touching operations, such
as CRC computation and verification, and direct DMA to the final host memory
destination:
@ -31,9 +31,9 @@ destination:
the TCP segments onto the wire. It handles TCP retransmission if
needed.
On receiving, S3 h/w recovers the iSCSI PDU by reassembling TCP
segments, separating the header and data, calculating and verifying
the digests, then forwarding the header to the host. The payload data,
if possible, will be directly placed into the pre-posted host DDP
buffer. Otherwise, the payload data will be sent to the host too.
@ -68,9 +68,8 @@ The following steps need to be taken to accelerate the open-iscsi initiator:
sure the ip address is unique in the network.
3. edit /etc/iscsi/iscsid.conf
The default setting for MaxRecvDataSegmentLength (131072) is too big;
replace "node.conn[0].iscsi.MaxRecvDataSegmentLength" with a value no
bigger than 15360 (for example 8192):
node.conn[0].iscsi.MaxRecvDataSegmentLength = 8192

View file

@ -543,7 +543,10 @@ Protocol: 2.08+
The payload may be compressed. The format of both the compressed and
uncompressed data should be determined using the standard magic
numbers. The currently supported compression formats are gzip
(magic numbers 1F 8B or 1F 9E), bzip2 (magic number 42 5A) and LZMA
(magic number 5D 00). The uncompressed payload is currently always ELF
(magic number 7F 45 4C 46).
Field name: payload_length
Type: read

View file

@ -2464,7 +2464,7 @@ S: Maintained
ISDN SUBSYSTEM
P: Karsten Keil
M: isdn@linux-pingi.de
L: isdn4linux@listserv.isdn4linux.de (subscribers-only)
W: http://www.isdn4linux.de
T: git kernel.org:/pub/scm/linux/kernel/kkeil/isdn-2.6.git

View file

@ -311,6 +311,9 @@ evm_u35_setup(struct i2c_client *client, int gpio, unsigned ngpio, void *c)
gpio_request(gpio + 7, "nCF_SEL");
gpio_direction_output(gpio + 7, 1);
/* irlml6401 sustains over 3A, switches 5V in under 8 msec */
setup_usb(500, 8);
return 0;
}
@ -417,9 +420,6 @@ static __init void davinci_evm_init(void)
platform_add_devices(davinci_evm_devices,
ARRAY_SIZE(davinci_evm_devices));
evm_init_i2c();
}
static __init void davinci_evm_irq_init(void)

View file

@ -230,6 +230,11 @@ static struct clk davinci_clks[] = {
.rate = &commonrate,
.lpsc = DAVINCI_LPSC_GPIO,
},
{
.name = "usb",
.rate = &commonrate,
.lpsc = DAVINCI_LPSC_USB,
},
{
.name = "AEMIFCLK",
.rate = &commonrate,

View file

@ -47,6 +47,7 @@ static struct musb_hdrc_platform_data usb_data = {
#elif defined(CONFIG_USB_MUSB_HOST)
.mode = MUSB_HOST,
#endif
.clock = "usb",
.config = &musb_config,
};

View file

@ -19,6 +19,7 @@
#include <linux/serial_8250.h>
#include <linux/ata_platform.h>
#include <linux/io.h>
#include <linux/i2c.h>
#include <asm/elf.h>
#include <asm/mach-types.h>
@ -201,8 +202,13 @@ static struct platform_device *devs[] __initdata = {
&pata_device,
};
static struct i2c_board_info i2c_rtc = {
I2C_BOARD_INFO("pcf8583", 0x50)
};
static int __init rpc_init(void)
{
i2c_register_board_info(0, &i2c_rtc, 1);
return platform_add_devices(devs, ARRAY_SIZE(devs));
}

View file

@ -638,6 +638,17 @@ config DMAR
and include PCI device scope covered by these DMA
remapping devices.
config DMAR_DEFAULT_ON
def_bool y
prompt "Enable DMA Remapping Devices by default"
depends on DMAR
help
Selecting this option will enable a DMAR device at boot time if
one is found. If this option is not selected, DMAR support can
be enabled by passing intel_iommu=on to the kernel. It is
recommended you say N here while the DMAR code remains
experimental.
endmenu
endif

View file

@ -507,7 +507,7 @@ static int iosapic_find_sharable_irq(unsigned long trigger, unsigned long pol)
if (trigger == IOSAPIC_EDGE)
return -EINVAL;
for (i = 0; i < NR_IRQS; i++) {
info = &iosapic_intr_info[i];
if (info->trigger == trigger && info->polarity == pol &&
(info->dmode == IOSAPIC_FIXED ||

View file

@ -2149,7 +2149,7 @@ unw_remove_unwind_table (void *handle)
/* next, remove hash table entries for this table */
for (index = 0; index < UNW_HASH_SIZE; ++index) {
tmp = unw.cache + unw.hash[index];
if (unw.hash[index] >= UNW_CACHE_SIZE
|| tmp->ip < table->start || tmp->ip >= table->end)

View file

@ -603,7 +603,7 @@ config CAVIUM_OCTEON_SIMULATOR
select SYS_SUPPORTS_64BIT_KERNEL
select SYS_SUPPORTS_BIG_ENDIAN
select SYS_SUPPORTS_HIGHMEM
select SYS_HAS_CPU_CAVIUM_OCTEON
help
The Octeon simulator is a software performance model of the Cavium
Octeon Processor. It supports simulating Octeon processors on x86
@ -618,7 +618,7 @@ config CAVIUM_OCTEON_REFERENCE_BOARD
select SYS_SUPPORTS_BIG_ENDIAN
select SYS_SUPPORTS_HIGHMEM
select SYS_HAS_EARLY_PRINTK
select SYS_HAS_CPU_CAVIUM_OCTEON
select SWAP_IO_SPACE
help
This option supports all of the Octeon reference boards from Cavium
@ -1234,6 +1234,7 @@ config CPU_SB1
config CPU_CAVIUM_OCTEON
bool "Cavium Octeon processor"
depends on SYS_HAS_CPU_CAVIUM_OCTEON
select IRQ_CPU
select IRQ_CPU_OCTEON
select CPU_HAS_PREFETCH
@ -1314,6 +1315,9 @@ config SYS_HAS_CPU_RM9000
config SYS_HAS_CPU_SB1
bool
config SYS_HAS_CPU_CAVIUM_OCTEON
bool
#
# CPU may reorder R->R, R->W, W->R, W->W
# Reordering beyond LL and SC is handled in WEAK_REORDERING_BEYOND_LLSC
@ -1387,6 +1391,7 @@ config 32BIT
config 64BIT
bool "64-bit kernel"
depends on CPU_SUPPORTS_64BIT_KERNEL && SYS_SUPPORTS_64BIT_KERNEL
select HAVE_SYSCALL_WRAPPERS
help
Select this option if you want to build a 64-bit kernel.

View file

@ -118,7 +118,7 @@ void __init plat_time_init(void)
* setup counter 1 (RTC) to tick at full speed
*/
t = 0xffffff;
while ((au_readl(SYS_COUNTER_CNTRL) & SYS_CNTRL_T1S) && --t)
asm volatile ("nop");
if (!t)
goto cntr_err;
@ -127,7 +127,7 @@ void __init plat_time_init(void)
au_sync();
t = 0xffffff;
while ((au_readl(SYS_COUNTER_CNTRL) & SYS_CNTRL_C1S) && --t)
asm volatile ("nop");
if (!t)
goto cntr_err;
@ -135,7 +135,7 @@ void __init plat_time_init(void)
au_sync();
t = 0xffffff;
while ((au_readl(SYS_COUNTER_CNTRL) & SYS_CNTRL_C1S) && --t)
asm volatile ("nop");
if (!t)
goto cntr_err;

View file

@ -1,6 +1,5 @@
#ifndef __ASM_SECCOMP_H
#include <linux/unistd.h>
#define __NR_seccomp_read __NR_read

View file

@ -111,7 +111,6 @@ int show_interrupts(struct seq_file *p, void *v)
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
#endif
seq_printf(p, " %14s", irq_desc[i].chip->name);
seq_printf(p, " %s", action->name);
for (action=action->next; action; action = action->next)

View file

@ -32,6 +32,7 @@
#include <linux/module.h>
#include <linux/binfmts.h>
#include <linux/security.h>
#include <linux/syscalls.h>
#include <linux/compat.h>
#include <linux/vfs.h>
#include <linux/ipc.h>
@ -63,9 +64,9 @@
#define merge_64(r1, r2) ((((r2) & 0xffffffffUL) << 32) + ((r1) & 0xffffffffUL))
#endif
asmlinkage unsigned long SYSCALL_DEFINE6(32_mmap2, unsigned long, addr, unsigned long, len,
sys32_mmap2(unsigned long addr, unsigned long len, unsigned long prot, unsigned long, prot, unsigned long, flags, unsigned long, fd,
unsigned long flags, unsigned long fd, unsigned long pgoff) unsigned long, pgoff)
{ {
struct file * file = NULL; struct file * file = NULL;
unsigned long error; unsigned long error;
@ -121,21 +122,21 @@ struct rlimit32 {
int rlim_max; int rlim_max;
}; };
asmlinkage long sys32_truncate64(const char __user * path, SYSCALL_DEFINE4(32_truncate64, const char __user *, path,
unsigned long __dummy, int a2, int a3) unsigned long, __dummy, unsigned long, a2, unsigned long, a3)
{ {
return sys_truncate(path, merge_64(a2, a3)); return sys_truncate(path, merge_64(a2, a3));
} }
asmlinkage long sys32_ftruncate64(unsigned int fd, unsigned long __dummy, SYSCALL_DEFINE4(32_ftruncate64, unsigned long, fd, unsigned long, __dummy,
int a2, int a3) unsigned long, a2, unsigned long, a3)
{ {
return sys_ftruncate(fd, merge_64(a2, a3)); return sys_ftruncate(fd, merge_64(a2, a3));
} }
asmlinkage int sys32_llseek(unsigned int fd, unsigned int offset_high, SYSCALL_DEFINE5(32_llseek, unsigned long, fd, unsigned long, offset_high,
unsigned int offset_low, loff_t __user * result, unsigned long, offset_low, loff_t __user *, result,
unsigned int origin) unsigned long, origin)
{ {
return sys_llseek(fd, offset_high, offset_low, result, origin); return sys_llseek(fd, offset_high, offset_low, result, origin);
} }
@ -144,20 +145,20 @@ asmlinkage int sys32_llseek(unsigned int fd, unsigned int offset_high,
lseek back to original location. They fail just like lseek does on lseek back to original location. They fail just like lseek does on
non-seekable files. */ non-seekable files. */
asmlinkage ssize_t sys32_pread(unsigned int fd, char __user * buf, SYSCALL_DEFINE6(32_pread, unsigned long, fd, char __user *, buf, size_t, count,
size_t count, u32 unused, u64 a4, u64 a5) unsigned long, unused, unsigned long, a4, unsigned long, a5)
{ {
return sys_pread64(fd, buf, count, merge_64(a4, a5)); return sys_pread64(fd, buf, count, merge_64(a4, a5));
} }
asmlinkage ssize_t sys32_pwrite(unsigned int fd, const char __user * buf, SYSCALL_DEFINE6(32_pwrite, unsigned int, fd, const char __user *, buf,
size_t count, u32 unused, u64 a4, u64 a5) size_t, count, u32, unused, u64, a4, u64, a5)
{ {
return sys_pwrite64(fd, buf, count, merge_64(a4, a5)); return sys_pwrite64(fd, buf, count, merge_64(a4, a5));
} }
asmlinkage int sys32_sched_rr_get_interval(compat_pid_t pid, SYSCALL_DEFINE2(32_sched_rr_get_interval, compat_pid_t, pid,
struct compat_timespec __user *interval) struct compat_timespec __user *, interval)
{ {
struct timespec t; struct timespec t;
int ret; int ret;
@@ -174,8 +175,8 @@ asmlinkage int sys32_sched_rr_get_interval(compat_pid_t pid,
 
 #ifdef CONFIG_SYSVIPC
 
-asmlinkage long
-sys32_ipc(u32 call, int first, int second, int third, u32 ptr, u32 fifth)
+SYSCALL_DEFINE6(32_ipc, u32, call, long, first, long, second, long, third,
+	unsigned long, ptr, unsigned long, fifth)
 {
 	int version, err;
 
@@ -233,8 +234,8 @@ sys32_ipc(u32 call, int first, int second, int third, u32 ptr, u32 fifth)
 
 #else
 
-asmlinkage long
-sys32_ipc(u32 call, int first, int second, int third, u32 ptr, u32 fifth)
+SYSCALL_DEFINE6(32_ipc, u32, call, int, first, int, second, int, third,
+	u32, ptr, u32, fifth)
 {
 	return -ENOSYS;
 }
 
@@ -242,7 +243,7 @@ sys32_ipc(u32 call, int first, int second, int third, u32 ptr, u32 fifth)
 #endif /* CONFIG_SYSVIPC */
 
 #ifdef CONFIG_MIPS32_N32
-asmlinkage long sysn32_semctl(int semid, int semnum, int cmd, u32 arg)
+SYSCALL_DEFINE4(n32_semctl, int, semid, int, semnum, int, cmd, u32, arg)
 {
 	/* compat_sys_semctl expects a pointer to union semun */
 	u32 __user *uptr = compat_alloc_user_space(sizeof(u32));
@@ -251,13 +252,14 @@ asmlinkage long sysn32_semctl(int semid, int semnum, int cmd, u32 arg)
 	return compat_sys_semctl(semid, semnum, cmd, uptr);
 }
 
-asmlinkage long sysn32_msgsnd(int msqid, u32 msgp, unsigned msgsz, int msgflg)
+SYSCALL_DEFINE4(n32_msgsnd, int, msqid, u32, msgp, unsigned int, msgsz,
+	int, msgflg)
 {
 	return compat_sys_msgsnd(msqid, msgsz, msgflg, compat_ptr(msgp));
 }
 
-asmlinkage long sysn32_msgrcv(int msqid, u32 msgp, size_t msgsz, int msgtyp,
-	int msgflg)
+SYSCALL_DEFINE5(n32_msgrcv, int, msqid, u32, msgp, size_t, msgsz,
+	int, msgtyp, int, msgflg)
 {
 	return compat_sys_msgrcv(msqid, msgsz, msgtyp, msgflg, IPC_64,
 				 compat_ptr(msgp));
@@ -277,7 +279,7 @@ struct sysctl_args32
 
 #ifdef CONFIG_SYSCTL_SYSCALL
 
-asmlinkage long sys32_sysctl(struct sysctl_args32 __user *args)
+SYSCALL_DEFINE1(32_sysctl, struct sysctl_args32 __user *, args)
 {
 	struct sysctl_args32 tmp;
 	int error;
@@ -316,9 +318,16 @@ asmlinkage long sys32_sysctl(struct sysctl_args32 __user *args)
 	return error;
 }
 
+#else
+
+SYSCALL_DEFINE1(32_sysctl, struct sysctl_args32 __user *, args)
+{
+	return -ENOSYS;
+}
+
 #endif /* CONFIG_SYSCTL_SYSCALL */
 
-asmlinkage long sys32_newuname(struct new_utsname __user * name)
+SYSCALL_DEFINE1(32_newuname, struct new_utsname __user *, name)
 {
 	int ret = 0;
 
@@ -334,7 +343,7 @@ asmlinkage long sys32_newuname(struct new_utsname __user * name)
 	return ret;
 }
 
-asmlinkage int sys32_personality(unsigned long personality)
+SYSCALL_DEFINE1(32_personality, unsigned long, personality)
 {
 	int ret;
 	personality &= 0xffffffff;
@@ -357,7 +366,7 @@ struct ustat32 {
 
 extern asmlinkage long sys_ustat(dev_t dev, struct ustat __user * ubuf);
 
-asmlinkage int sys32_ustat(dev_t dev, struct ustat32 __user * ubuf32)
+SYSCALL_DEFINE2(32_ustat, dev_t, dev, struct ustat32 __user *, ubuf32)
 {
 	int err;
 	struct ustat tmp;
@@ -381,8 +390,8 @@ out:
 	return err;
 }
 
-asmlinkage int sys32_sendfile(int out_fd, int in_fd, compat_off_t __user *offset,
-	s32 count)
+SYSCALL_DEFINE4(32_sendfile, long, out_fd, long, in_fd,
+	compat_off_t __user *, offset, s32, count)
 {
 	mm_segment_t old_fs = get_fs();
 	int ret;


@@ -399,7 +399,7 @@ einval:	li	v0, -ENOSYS
 	sys	sys_swapon		2
 	sys	sys_reboot		3
 	sys	sys_old_readdir		3
-	sys	old_mmap		6	/* 4090 */
+	sys	sys_mips_mmap		6	/* 4090 */
 	sys	sys_munmap		2
 	sys	sys_truncate		2
 	sys	sys_ftruncate		2
@@ -519,7 +519,7 @@ einval:	li	v0, -ENOSYS
 	sys	sys_sendfile		4
 	sys	sys_ni_syscall		0
 	sys	sys_ni_syscall		0
-	sys	sys_mmap2		6	/* 4210 */
+	sys	sys_mips_mmap2		6	/* 4210 */
 	sys	sys_truncate64		4
 	sys	sys_ftruncate64		4
 	sys	sys_stat64		2


@@ -207,7 +207,7 @@ sys_call_table:
 	PTR	sys_newlstat
 	PTR	sys_poll
 	PTR	sys_lseek
-	PTR	old_mmap
+	PTR	sys_mips_mmap
 	PTR	sys_mprotect			/* 5010 */
 	PTR	sys_munmap
 	PTR	sys_brk


@@ -129,12 +129,12 @@ EXPORT(sysn32_call_table)
 	PTR	sys_newlstat
 	PTR	sys_poll
 	PTR	sys_lseek
-	PTR	old_mmap
+	PTR	sys_mips_mmap
 	PTR	sys_mprotect			/* 6010 */
 	PTR	sys_munmap
 	PTR	sys_brk
-	PTR	sys32_rt_sigaction
-	PTR	sys32_rt_sigprocmask
+	PTR	sys_32_rt_sigaction
+	PTR	sys_32_rt_sigprocmask
 	PTR	compat_sys_ioctl		/* 6015 */
 	PTR	sys_pread64
 	PTR	sys_pwrite64
@@ -159,7 +159,7 @@ EXPORT(sysn32_call_table)
 	PTR	compat_sys_setitimer
 	PTR	sys_alarm
 	PTR	sys_getpid
-	PTR	sys32_sendfile
+	PTR	sys_32_sendfile
 	PTR	sys_socket			/* 6040 */
 	PTR	sys_connect
 	PTR	sys_accept
@@ -181,14 +181,14 @@ EXPORT(sysn32_call_table)
 	PTR	sys_exit
 	PTR	compat_sys_wait4
 	PTR	sys_kill			/* 6060 */
-	PTR	sys32_newuname
+	PTR	sys_32_newuname
 	PTR	sys_semget
 	PTR	sys_semop
-	PTR	sysn32_semctl
+	PTR	sys_n32_semctl
 	PTR	sys_shmdt			/* 6065 */
 	PTR	sys_msgget
-	PTR	sysn32_msgsnd
-	PTR	sysn32_msgrcv
+	PTR	sys_n32_msgsnd
+	PTR	sys_n32_msgrcv
 	PTR	compat_sys_msgctl
 	PTR	compat_sys_fcntl		/* 6070 */
 	PTR	sys_flock
@@ -245,15 +245,15 @@ EXPORT(sysn32_call_table)
 	PTR	sys_getsid
 	PTR	sys_capget
 	PTR	sys_capset
-	PTR	sys32_rt_sigpending		/* 6125 */
+	PTR	sys_32_rt_sigpending		/* 6125 */
 	PTR	compat_sys_rt_sigtimedwait
-	PTR	sys32_rt_sigqueueinfo
+	PTR	sys_32_rt_sigqueueinfo
 	PTR	sysn32_rt_sigsuspend
 	PTR	sys32_sigaltstack
 	PTR	compat_sys_utime		/* 6130 */
 	PTR	sys_mknod
-	PTR	sys32_personality
-	PTR	sys32_ustat
+	PTR	sys_32_personality
+	PTR	sys_32_ustat
 	PTR	compat_sys_statfs
 	PTR	compat_sys_fstatfs		/* 6135 */
 	PTR	sys_sysfs
@@ -265,14 +265,14 @@ EXPORT(sysn32_call_table)
 	PTR	sys_sched_getscheduler
 	PTR	sys_sched_get_priority_max
 	PTR	sys_sched_get_priority_min
-	PTR	sys32_sched_rr_get_interval	/* 6145 */
+	PTR	sys_32_sched_rr_get_interval	/* 6145 */
 	PTR	sys_mlock
 	PTR	sys_munlock
 	PTR	sys_mlockall
 	PTR	sys_munlockall
 	PTR	sys_vhangup			/* 6150 */
 	PTR	sys_pivot_root
-	PTR	sys32_sysctl
+	PTR	sys_32_sysctl
 	PTR	sys_prctl
 	PTR	compat_sys_adjtimex
 	PTR	compat_sys_setrlimit		/* 6155 */


@@ -265,12 +265,12 @@ sys_call_table:
 	PTR	sys_olduname
 	PTR	sys_umask			/* 4060 */
 	PTR	sys_chroot
-	PTR	sys32_ustat
+	PTR	sys_32_ustat
 	PTR	sys_dup2
 	PTR	sys_getppid
 	PTR	sys_getpgrp			/* 4065 */
 	PTR	sys_setsid
-	PTR	sys32_sigaction
+	PTR	sys_32_sigaction
 	PTR	sys_sgetmask
 	PTR	sys_ssetmask
 	PTR	sys_setreuid			/* 4070 */
@@ -293,7 +293,7 @@ sys_call_table:
 	PTR	sys_swapon
 	PTR	sys_reboot
 	PTR	compat_sys_old_readdir
-	PTR	old_mmap			/* 4090 */
+	PTR	sys_mips_mmap			/* 4090 */
 	PTR	sys_munmap
 	PTR	sys_truncate
 	PTR	sys_ftruncate
@@ -320,12 +320,12 @@ sys_call_table:
 	PTR	compat_sys_wait4
 	PTR	sys_swapoff			/* 4115 */
 	PTR	compat_sys_sysinfo
-	PTR	sys32_ipc
+	PTR	sys_32_ipc
 	PTR	sys_fsync
 	PTR	sys32_sigreturn
 	PTR	sys32_clone			/* 4120 */
 	PTR	sys_setdomainname
-	PTR	sys32_newuname
+	PTR	sys_32_newuname
 	PTR	sys_ni_syscall			/* sys_modify_ldt */
 	PTR	compat_sys_adjtimex
 	PTR	sys_mprotect			/* 4125 */
@@ -339,11 +339,11 @@ sys_call_table:
 	PTR	sys_fchdir
 	PTR	sys_bdflush
 	PTR	sys_sysfs			/* 4135 */
-	PTR	sys32_personality
+	PTR	sys_32_personality
 	PTR	sys_ni_syscall			/* for afs_syscall */
 	PTR	sys_setfsuid
 	PTR	sys_setfsgid
-	PTR	sys32_llseek			/* 4140 */
+	PTR	sys_32_llseek			/* 4140 */
 	PTR	compat_sys_getdents
 	PTR	compat_sys_select
 	PTR	sys_flock
@@ -356,7 +356,7 @@ sys_call_table:
 	PTR	sys_ni_syscall			/* 4150 */
 	PTR	sys_getsid
 	PTR	sys_fdatasync
-	PTR	sys32_sysctl
+	PTR	sys_32_sysctl
 	PTR	sys_mlock
 	PTR	sys_munlock			/* 4155 */
 	PTR	sys_mlockall
@@ -368,7 +368,7 @@ sys_call_table:
 	PTR	sys_sched_yield
 	PTR	sys_sched_get_priority_max
 	PTR	sys_sched_get_priority_min
-	PTR	sys32_sched_rr_get_interval	/* 4165 */
+	PTR	sys_32_sched_rr_get_interval	/* 4165 */
 	PTR	compat_sys_nanosleep
 	PTR	sys_mremap
 	PTR	sys_accept
@@ -397,25 +397,25 @@ sys_call_table:
 	PTR	sys_getresgid
 	PTR	sys_prctl
 	PTR	sys32_rt_sigreturn
-	PTR	sys32_rt_sigaction
-	PTR	sys32_rt_sigprocmask		/* 4195 */
-	PTR	sys32_rt_sigpending
+	PTR	sys_32_rt_sigaction
+	PTR	sys_32_rt_sigprocmask		/* 4195 */
+	PTR	sys_32_rt_sigpending
 	PTR	compat_sys_rt_sigtimedwait
-	PTR	sys32_rt_sigqueueinfo
+	PTR	sys_32_rt_sigqueueinfo
 	PTR	sys32_rt_sigsuspend
-	PTR	sys32_pread			/* 4200 */
-	PTR	sys32_pwrite
+	PTR	sys_32_pread			/* 4200 */
+	PTR	sys_32_pwrite
 	PTR	sys_chown
 	PTR	sys_getcwd
 	PTR	sys_capget
 	PTR	sys_capset			/* 4205 */
 	PTR	sys32_sigaltstack
-	PTR	sys32_sendfile
+	PTR	sys_32_sendfile
 	PTR	sys_ni_syscall
 	PTR	sys_ni_syscall
-	PTR	sys32_mmap2			/* 4210 */
-	PTR	sys32_truncate64
-	PTR	sys32_ftruncate64
+	PTR	sys_mips_mmap2			/* 4210 */
+	PTR	sys_32_truncate64
+	PTR	sys_32_ftruncate64
 	PTR	sys_newstat
 	PTR	sys_newlstat
 	PTR	sys_newfstat			/* 4215 */
@@ -481,7 +481,7 @@ sys_call_table:
 	PTR	compat_sys_mq_notify		/* 4275 */
 	PTR	compat_sys_mq_getsetattr
 	PTR	sys_ni_syscall			/* sys_vserver */
-	PTR	sys32_waitid
+	PTR	sys_32_waitid
 	PTR	sys_ni_syscall			/* available, was setaltroot */
 	PTR	sys_add_key			/* 4280 */
 	PTR	sys_request_key


@@ -19,6 +19,7 @@
 #include <linux/ptrace.h>
 #include <linux/unistd.h>
 #include <linux/compiler.h>
+#include <linux/syscalls.h>
 #include <linux/uaccess.h>
 
 #include <asm/abi.h>
@@ -338,8 +339,8 @@ asmlinkage int sys_rt_sigsuspend(nabi_no_regargs struct pt_regs regs)
 }
 
 #ifdef CONFIG_TRAD_SIGNALS
-asmlinkage int sys_sigaction(int sig, const struct sigaction __user *act,
-	struct sigaction __user *oact)
+SYSCALL_DEFINE3(sigaction, int, sig, const struct sigaction __user *, act,
+	struct sigaction __user *, oact)
 {
 	struct k_sigaction new_ka, old_ka;
 	int ret;


@@ -349,8 +349,8 @@ asmlinkage int sys32_rt_sigsuspend(nabi_no_regargs struct pt_regs regs)
 	return -ERESTARTNOHAND;
 }
 
-asmlinkage int sys32_sigaction(int sig, const struct sigaction32 __user *act,
-	struct sigaction32 __user *oact)
+SYSCALL_DEFINE3(32_sigaction, long, sig, const struct sigaction32 __user *, act,
+	struct sigaction32 __user *, oact)
 {
 	struct k_sigaction new_ka, old_ka;
 	int ret;
@@ -704,9 +704,9 @@ struct mips_abi mips_abi_32 = {
 	.restart	= __NR_O32_restart_syscall
 };
 
-asmlinkage int sys32_rt_sigaction(int sig, const struct sigaction32 __user *act,
-	struct sigaction32 __user *oact,
-	unsigned int sigsetsize)
+SYSCALL_DEFINE4(32_rt_sigaction, int, sig,
+	const struct sigaction32 __user *, act,
+	struct sigaction32 __user *, oact, unsigned int, sigsetsize)
 {
 	struct k_sigaction new_sa, old_sa;
 	int ret = -EINVAL;
@@ -748,8 +748,8 @@ out:
 	return ret;
 }
 
-asmlinkage int sys32_rt_sigprocmask(int how, compat_sigset_t __user *set,
-	compat_sigset_t __user *oset, unsigned int sigsetsize)
+SYSCALL_DEFINE4(32_rt_sigprocmask, int, how, compat_sigset_t __user *, set,
+	compat_sigset_t __user *, oset, unsigned int, sigsetsize)
 {
 	sigset_t old_set, new_set;
 	int ret;
@@ -770,8 +770,8 @@ asmlinkage int sys32_rt_sigprocmask(int how, compat_sigset_t __user *set,
 	return ret;
 }
 
-asmlinkage int sys32_rt_sigpending(compat_sigset_t __user *uset,
-	unsigned int sigsetsize)
+SYSCALL_DEFINE2(32_rt_sigpending, compat_sigset_t __user *, uset,
+	unsigned int, sigsetsize)
 {
 	int ret;
 	sigset_t set;
@@ -787,7 +787,8 @@ asmlinkage int sys32_rt_sigpending(compat_sigset_t __user *uset,
 	return ret;
 }
 
-asmlinkage int sys32_rt_sigqueueinfo(int pid, int sig, compat_siginfo_t __user *uinfo)
+SYSCALL_DEFINE3(32_rt_sigqueueinfo, int, pid, int, sig,
+	compat_siginfo_t __user *, uinfo)
 {
 	siginfo_t info;
 	int ret;
@@ -802,10 +803,9 @@ asmlinkage int sys32_rt_sigqueueinfo(int pid, int sig, compat_siginfo_t __user *
 	return ret;
 }
 
-asmlinkage long
-sys32_waitid(int which, compat_pid_t pid,
-	     compat_siginfo_t __user *uinfo, int options,
-	     struct compat_rusage __user *uru)
+SYSCALL_DEFINE5(32_waitid, int, which, compat_pid_t, pid,
+	     compat_siginfo_t __user *, uinfo, int, options,
+	     struct compat_rusage __user *, uru)
 {
 	siginfo_t info;
 	struct rusage ru;


@@ -152,9 +152,9 @@ out:
 	return error;
 }
 
-asmlinkage unsigned long
-old_mmap(unsigned long addr, unsigned long len, int prot,
-	 int flags, int fd, off_t offset)
+SYSCALL_DEFINE6(mips_mmap, unsigned long, addr, unsigned long, len,
+	unsigned long, prot, unsigned long, flags, unsigned long,
+	fd, off_t, offset)
 {
 	unsigned long result;
 
@@ -168,9 +168,9 @@ out:
 	return result;
 }
 
-asmlinkage unsigned long
-sys_mmap2(unsigned long addr, unsigned long len, unsigned long prot,
-	  unsigned long flags, unsigned long fd, unsigned long pgoff)
+SYSCALL_DEFINE6(mips_mmap2, unsigned long, addr, unsigned long, len,
+	unsigned long, prot, unsigned long, flags, unsigned long, fd,
+	unsigned long, pgoff)
 {
 	if (pgoff & (~PAGE_MASK >> 12))
 		return -EINVAL;
@@ -240,7 +240,7 @@ out:
 /*
  * Compacrapability ...
  */
-asmlinkage int sys_uname(struct old_utsname __user * name)
+SYSCALL_DEFINE1(uname, struct old_utsname __user *, name)
 {
 	if (name && !copy_to_user(name, utsname(), sizeof (*name)))
 		return 0;
@@ -250,7 +250,7 @@ asmlinkage int sys_uname(struct old_utsname __user * name)
 /*
  * Compacrapability ...
  */
-asmlinkage int sys_olduname(struct oldold_utsname __user * name)
+SYSCALL_DEFINE1(olduname, struct oldold_utsname __user *, name)
 {
 	int error;
 
@@ -279,7 +279,7 @@ asmlinkage int sys_olduname(struct oldold_utsname __user * name)
 	return error;
 }
 
-asmlinkage int sys_set_thread_area(unsigned long addr)
+SYSCALL_DEFINE1(set_thread_area, unsigned long, addr)
 {
 	struct thread_info *ti = task_thread_info(current);
 
@@ -290,7 +290,7 @@ asmlinkage int sys_set_thread_area(unsigned long addr)
 	return 0;
 }
 
-asmlinkage int _sys_sysmips(int cmd, long arg1, int arg2, int arg3)
+asmlinkage int _sys_sysmips(long cmd, long arg1, long arg2, long arg3)
 {
 	switch (cmd) {
 	case MIPS_ATOMIC_SET:
@@ -325,8 +325,8 @@ asmlinkage int _sys_sysmips(int cmd, long arg1, int arg2, int arg3)
  *
  * This is really horribly ugly.
  */
-asmlinkage int sys_ipc(unsigned int call, int first, int second,
-	unsigned long third, void __user *ptr, long fifth)
+SYSCALL_DEFINE6(ipc, unsigned int, call, int, first, int, second,
+	unsigned long, third, void __user *, ptr, long, fifth)
 {
 	int version, ret;
 
@@ -411,7 +411,7 @@ asmlinkage int sys_ipc(unsigned int call, int first, int second,
 /*
  * No implemented yet ...
  */
-asmlinkage int sys_cachectl(char *addr, int nbytes, int op)
+SYSCALL_DEFINE3(cachectl, char *, addr, int, nbytes, int, op)
 {
 	return -ENOSYS;
 }


@@ -13,6 +13,7 @@
 #include <linux/linkage.h>
 #include <linux/module.h>
 #include <linux/sched.h>
+#include <linux/syscalls.h>
 #include <linux/mm.h>
 
 #include <asm/cacheflush.h>
@@ -58,8 +59,8 @@ EXPORT_SYMBOL(_dma_cache_wback_inv);
  * We could optimize the case where the cache argument is not BCACHE but
  * that seems very atypical use ...
  */
-asmlinkage int sys_cacheflush(unsigned long addr,
-	unsigned long bytes, unsigned int cache)
+SYSCALL_DEFINE3(cacheflush, unsigned long, addr, unsigned long, bytes,
+	unsigned int, cache)
 {
 	if (bytes == 0)
 		return 0;


@@ -210,5 +210,10 @@ struct compat_shmid64_ds {
 	compat_ulong_t	__unused6;
 };
 
+static inline int is_compat_task(void)
+{
+	return test_thread_flag(TIF_32BIT);
+}
+
 #endif /* __KERNEL__ */
 
 #endif /* _ASM_POWERPC_COMPAT_H */


@@ -1,10 +1,6 @@
 #ifndef _ASM_POWERPC_SECCOMP_H
 #define _ASM_POWERPC_SECCOMP_H
 
-#ifdef __KERNEL__
-#include <linux/thread_info.h>
-#endif
-
 #include <linux/unistd.h>
 
 #define __NR_seccomp_read __NR_read


@@ -367,27 +367,24 @@ static int emulate_multiple(struct pt_regs *regs, unsigned char __user *addr,
 static int emulate_fp_pair(unsigned char __user *addr, unsigned int reg,
 			   unsigned int flags)
 {
-	char *ptr = (char *) &current->thread.TS_FPR(reg);
-	int i, ret;
+	char *ptr0 = (char *) &current->thread.TS_FPR(reg);
+	char *ptr1 = (char *) &current->thread.TS_FPR(reg+1);
+	int i, ret, sw = 0;
 
 	if (!(flags & F))
 		return 0;
 	if (reg & 1)
 		return 0;	/* invalid form: FRS/FRT must be even */
-	if (!(flags & SW)) {
-		/* not byte-swapped - easy */
-		if (!(flags & ST))
-			ret = __copy_from_user(ptr, addr, 16);
-		else
-			ret = __copy_to_user(addr, ptr, 16);
-	} else {
-		/* each FPR value is byte-swapped separately */
-		ret = 0;
-		for (i = 0; i < 16; ++i) {
-			if (!(flags & ST))
-				ret |= __get_user(ptr[i^7], addr + i);
-			else
-				ret |= __put_user(ptr[i^7], addr + i);
+	if (flags & SW)
+		sw = 7;
+	ret = 0;
+	for (i = 0; i < 8; ++i) {
+		if (!(flags & ST)) {
+			ret |= __get_user(ptr0[i^sw], addr + i);
+			ret |= __get_user(ptr1[i^sw], addr + i + 8);
+		} else {
+			ret |= __put_user(ptr0[i^sw], addr + i);
+			ret |= __put_user(ptr1[i^sw], addr + i + 8);
 		}
 	}
 	if (ret)


@@ -62,18 +62,19 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
 72:	std	r8,8(r3)
 	beq+	3f
 	addi	r3,r3,16
-23:	ld	r9,8(r4)
 .Ldo_tail:
 	bf	cr7*4+1,1f
-	rotldi	r9,r9,32
+23:	lwz	r9,8(r4)
+	addi	r4,r4,4
 73:	stw	r9,0(r3)
 	addi	r3,r3,4
 1:	bf	cr7*4+2,2f
-	rotldi	r9,r9,16
+44:	lhz	r9,8(r4)
+	addi	r4,r4,2
 74:	sth	r9,0(r3)
 	addi	r3,r3,2
 2:	bf	cr7*4+3,3f
-	rotldi	r9,r9,8
+45:	lbz	r9,8(r4)
 75:	stb	r9,0(r3)
 3:	li	r3,0
 	blr
@@ -141,11 +142,24 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
 6:	cmpwi	cr1,r5,8
 	addi	r3,r3,32
 	sld	r9,r9,r10
-	ble	cr1,.Ldo_tail
+	ble	cr1,7f
 34:	ld	r0,8(r4)
 	srd	r7,r0,r11
 	or	r9,r7,r9
-	b	.Ldo_tail
+7:
+	bf	cr7*4+1,1f
+	rotldi	r9,r9,32
+94:	stw	r9,0(r3)
+	addi	r3,r3,4
+1:	bf	cr7*4+2,2f
+	rotldi	r9,r9,16
+95:	sth	r9,0(r3)
+	addi	r3,r3,2
+2:	bf	cr7*4+3,3f
+	rotldi	r9,r9,8
+96:	stb	r9,0(r3)
+3:	li	r3,0
+	blr
 
 .Ldst_unaligned:
 	PPC_MTOCRF	0x01,r6		/* put #bytes to 8B bdry into cr7 */
@@ -218,7 +232,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
 121:
 132:
 	addi	r3,r3,8
-123:
 134:
 135:
 138:
@@ -226,6 +239,9 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
 140:
 141:
 142:
+123:
+144:
+145:
 
 /*
  * here we have had a fault on a load and r3 points to the first
@@ -309,6 +325,9 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
 187:
 188:
 189:
+194:
+195:
+196:
 1:
 	ld	r6,-24(r1)
 	ld	r5,-8(r1)
@@ -329,7 +348,9 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
 	.llong	72b,172b
 	.llong	23b,123b
 	.llong	73b,173b
+	.llong	44b,144b
 	.llong	74b,174b
+	.llong	45b,145b
 	.llong	75b,175b
 	.llong	24b,124b
 	.llong	25b,125b
@@ -347,6 +368,9 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
 	.llong	79b,179b
 	.llong	80b,180b
 	.llong	34b,134b
+	.llong	94b,194b
+	.llong	95b,195b
+	.llong	96b,196b
 	.llong	35b,135b
 	.llong	81b,181b
 	.llong	36b,136b


@@ -53,18 +53,19 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
 3:	std	r8,8(r3)
 	beq	3f
 	addi	r3,r3,16
-	ld	r9,8(r4)
 .Ldo_tail:
 	bf	cr7*4+1,1f
-	rotldi	r9,r9,32
+	lwz	r9,8(r4)
+	addi	r4,r4,4
 	stw	r9,0(r3)
 	addi	r3,r3,4
 1:	bf	cr7*4+2,2f
-	rotldi	r9,r9,16
+	lhz	r9,8(r4)
+	addi	r4,r4,2
 	sth	r9,0(r3)
 	addi	r3,r3,2
 2:	bf	cr7*4+3,3f
-	rotldi	r9,r9,8
+	lbz	r9,8(r4)
 	stb	r9,0(r3)
 3:	ld	r3,48(r1)	/* return dest pointer */
 	blr
@@ -133,11 +134,24 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
 	cmpwi	cr1,r5,8
 	addi	r3,r3,32
 	sld	r9,r9,r10
-	ble	cr1,.Ldo_tail
+	ble	cr1,6f
 	ld	r0,8(r4)
 	srd	r7,r0,r11
 	or	r9,r7,r9
-	b	.Ldo_tail
+6:
+	bf	cr7*4+1,1f
+	rotldi	r9,r9,32
+	stw	r9,0(r3)
+	addi	r3,r3,4
+1:	bf	cr7*4+2,2f
+	rotldi	r9,r9,16
+	sth	r9,0(r3)
+	addi	r3,r3,2
+2:	bf	cr7*4+3,3f
+	rotldi	r9,r9,8
+	stb	r9,0(r3)
+3:	ld	r3,48(r1)	/* return dest pointer */
+	blr
 
 .Ldst_unaligned:
 	PPC_MTOCRF	0x01,r6		# put #bytes to 8B bdry into cr7


@@ -204,6 +204,23 @@ static int __init ppc4xx_setup_one_pci_PMM(struct pci_controller *hose,
 {
 	u32 ma, pcila, pciha;
 
+	/* Hack warning ! The "old" PCI 2.x cell only let us configure the low
+	 * 32-bit of incoming PLB addresses. The top 4 bits of the 36-bit
+	 * address are actually hard wired to a value that appears to depend
+	 * on the specific SoC. For example, it's 0 on 440EP and 1 on 440EPx.
+	 *
+	 * The trick here is we just crop those top bits and ignore them when
+	 * programming the chip. That means the device-tree has to be right
+	 * for the specific part used (we don't print a warning if it's wrong
+	 * but on the other hand, you'll crash quickly enough), but at least
+	 * this code should work whatever the hard coded value is
+	 */
+	plb_addr &= 0xffffffffull;
+
+	/* Note: Due to the above hack, the test below doesn't actually test
+	 * if you address is above 4G, but it tests that address and
+	 * (address + size) are both contained in the same 4G
+	 */
 	if ((plb_addr + size) > 0xffffffffull || !is_power_of_2(size) ||
 	    size < 0x1000 || (plb_addr & (size - 1)) != 0) {
 		printk(KERN_WARNING "%s: Resource out of range\n",


@@ -22,7 +22,6 @@
 #include <linux/gpio.h>
 #include <linux/spi/spi.h>
 #include <linux/spi/spi_gpio.h>
-#include <media/ov772x.h>
 #include <media/soc_camera_platform.h>
 #include <media/sh_mobile_ceu.h>
 #include <video/sh_mobile_lcdc.h>
@@ -224,7 +223,6 @@ static void camera_power(int val)
 }
 
 #ifdef CONFIG_I2C
-/* support for the old ncm03j camera */
 static unsigned char camera_ncm03j_magic[] =
 {
 	0x87, 0x00, 0x88, 0x08, 0x89, 0x01, 0x8A, 0xE8,
@@ -245,23 +243,6 @@ static unsigned char camera_ncm03j_magic[] =
 	0x63, 0xD4, 0x64, 0xEA, 0xD6, 0x0F,
 };
 
-static int camera_probe(void)
-{
-	struct i2c_adapter *a = i2c_get_adapter(0);
-	struct i2c_msg msg;
-	int ret;
-
-	camera_power(1);
-	msg.addr = 0x6e;
-	msg.buf = camera_ncm03j_magic;
-	msg.len = 2;
-	msg.flags = 0;
-	ret = i2c_transfer(a, &msg, 1);
-	camera_power(0);
-
-	return ret;
-}
-
 static int camera_set_capture(struct soc_camera_platform_info *info,
 			      int enable)
 {
@@ -313,35 +294,8 @@ static struct platform_device camera_device = {
 		.platform_data	= &camera_info,
 	},
 };
-
-static int __init camera_setup(void)
-{
-	if (camera_probe() > 0)
-		platform_device_register(&camera_device);
-
-	return 0;
-}
-late_initcall(camera_setup);
-
 #endif /* CONFIG_I2C */
 
-static int ov7725_power(struct device *dev, int mode)
-{
-	camera_power(0);
-	if (mode)
-		camera_power(1);
-
-	return 0;
-}
-
-static struct ov772x_camera_info ov7725_info = {
-	.buswidth	= SOCAM_DATAWIDTH_8,
-	.flags		= OV772X_FLAG_VFLIP | OV772X_FLAG_HFLIP,
-	.link = {
-		.power	= ov7725_power,
-	},
-};
-
 static struct sh_mobile_ceu_info sh_mobile_ceu_info = {
 	.flags = SOCAM_PCLK_SAMPLE_RISING | SOCAM_HSYNC_ACTIVE_HIGH |
 	SOCAM_VSYNC_ACTIVE_HIGH | SOCAM_MASTER | SOCAM_DATAWIDTH_8,
@@ -392,6 +346,9 @@ static struct platform_device *ap325rxa_devices[] __initdata = {
 	&ap325rxa_nor_flash_device,
 	&lcdc_device,
 	&ceu_device,
+#ifdef CONFIG_I2C
+	&camera_device,
+#endif
 	&nand_flash_device,
 	&sdcard_cn3_device,
 };
@@ -400,10 +357,6 @@ static struct i2c_board_info __initdata ap325rxa_i2c_devices[] = {
 	{
 		I2C_BOARD_INFO("pcf8563", 0x51),
 	},
-	{
-		I2C_BOARD_INFO("ov772x", 0x21),
-		.platform_data = &ov7725_info,
-	},
 };
 
 static struct spi_board_info ap325rxa_spi_devices[] = {


@@ -18,8 +18,8 @@
 #include <asm/freq.h>
 #include <asm/io.h>
 
-const static int pll1rate[]={1,2,3,4,6,8};
-const static int pfc_divisors[]={1,2,3,4,6,8,12};
+static const int pll1rate[]={1,2,3,4,6,8};
+static const int pfc_divisors[]={1,2,3,4,6,8,12};
 
 #define ifc_divisors pfc_divisors
 
 #if (CONFIG_SH_CLK_MD == 0)


@@ -240,4 +240,9 @@ struct compat_shmid64_ds {
 	unsigned int	__unused2;
 };
 
+static inline int is_compat_task(void)
+{
+	return test_thread_flag(TIF_32BIT);
+}
+
 #endif /* _ASM_SPARC64_COMPAT_H */


@@ -1,11 +1,5 @@
 #ifndef _ASM_SECCOMP_H
 
-#include <linux/thread_info.h> /* already defines TIF_32BIT */
-
-#ifndef TIF_32BIT
-#error "unexpected TIF_32BIT on sparc64"
-#endif
-
 #include <linux/unistd.h>
 
 #define __NR_seccomp_read __NR_read


@@ -306,6 +306,7 @@ static int jbusmc_print_dimm(int syndrome_code,
 		buf[1] = '?';
 		buf[2] = '?';
 		buf[3] = '\0';
+		return 0;
 	}
 	p = dp->controller;
 	prop = &p->layout;


@@ -40,6 +40,9 @@ config X86
 	select HAVE_GENERIC_DMA_COHERENT if X86_32
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
 	select USER_STACKTRACE_SUPPORT
+	select HAVE_KERNEL_GZIP
+	select HAVE_KERNEL_BZIP2
+	select HAVE_KERNEL_LZMA
 
 config ARCH_DEFCONFIG
 	string
@@ -1825,7 +1828,7 @@ config DMAR
 	  remapping devices.
 
 config DMAR_DEFAULT_ON
-	def_bool n
+	def_bool y
 	prompt "Enable DMA Remapping Devices by default"
 	depends on DMAR
 	help


@@ -4,7 +4,7 @@
 # create a compressed vmlinux image from the original vmlinux
 #
-targets := vmlinux vmlinux.bin vmlinux.bin.gz head_$(BITS).o misc.o piggy.o
+targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma head_$(BITS).o misc.o piggy.o

 KBUILD_CFLAGS := -m$(BITS) -D__KERNEL__ $(LINUX_INCLUDE) -O2
 KBUILD_CFLAGS += -fno-strict-aliasing -fPIC
@@ -47,18 +47,35 @@ ifeq ($(CONFIG_X86_32),y)
 ifdef CONFIG_RELOCATABLE
 $(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin.all FORCE
 	$(call if_changed,gzip)
+$(obj)/vmlinux.bin.bz2: $(obj)/vmlinux.bin.all FORCE
+	$(call if_changed,bzip2)
+$(obj)/vmlinux.bin.lzma: $(obj)/vmlinux.bin.all FORCE
+	$(call if_changed,lzma)
 else
 $(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE
 	$(call if_changed,gzip)
+$(obj)/vmlinux.bin.bz2: $(obj)/vmlinux.bin FORCE
+	$(call if_changed,bzip2)
+$(obj)/vmlinux.bin.lzma: $(obj)/vmlinux.bin FORCE
+	$(call if_changed,lzma)
 endif
 LDFLAGS_piggy.o := -r --format binary --oformat elf32-i386 -T
 else
 $(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE
 	$(call if_changed,gzip)
+$(obj)/vmlinux.bin.bz2: $(obj)/vmlinux.bin FORCE
+	$(call if_changed,bzip2)
+$(obj)/vmlinux.bin.lzma: $(obj)/vmlinux.bin FORCE
+	$(call if_changed,lzma)
 LDFLAGS_piggy.o := -r --format binary --oformat elf64-x86-64 -T
 endif

-$(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/vmlinux.bin.gz FORCE
+suffix_$(CONFIG_KERNEL_GZIP)  = gz
+suffix_$(CONFIG_KERNEL_BZIP2) = bz2
+suffix_$(CONFIG_KERNEL_LZMA)  = lzma
+
+$(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/vmlinux.bin.$(suffix_y) FORCE
 	$(call if_changed,ld)


@@ -116,71 +116,13 @@
 /*
  * gzip declarations
  */
-#define OF(args)	args
 #define STATIC		static

 #undef memset
 #undef memcpy
 #define memzero(s, n)	memset((s), 0, (n))

-typedef unsigned char	uch;
-typedef unsigned short	ush;
-typedef unsigned long	ulg;
-
-/*
- * Window size must be at least 32k, and a power of two.
- * We don't actually have a window just a huge output buffer,
- * so we report a 2G window size, as that should always be
- * larger than our output buffer:
- */
-#define WSIZE		0x80000000
-
-/* Input buffer: */
-static unsigned char	*inbuf;
-
-/* Sliding window buffer (and final output buffer): */
-static unsigned char	*window;
-
-/* Valid bytes in inbuf: */
-static unsigned		insize;
-
-/* Index of next byte to be processed in inbuf: */
-static unsigned		inptr;
-
-/* Bytes in output buffer: */
-static unsigned		outcnt;
-
-/* gzip flag byte */
-#define ASCII_FLAG	0x01 /* bit 0 set: file probably ASCII text */
-#define CONTINUATION	0x02 /* bit 1 set: continuation of multi-part gz file */
-#define EXTRA_FIELD	0x04 /* bit 2 set: extra field present */
-#define ORIG_NAME	0x08 /* bit 3 set: original file name present */
-#define COMMENT	0x10 /* bit 4 set: file comment present */
-#define ENCRYPTED	0x20 /* bit 5 set: file is encrypted */
-#define RESERVED	0xC0 /* bit 6, 7: reserved */
-
-#define get_byte()	(inptr < insize ? inbuf[inptr++] : fill_inbuf())
-
-/* Diagnostic functions */
-#ifdef DEBUG
-# define Assert(cond, msg) do { if (!(cond)) error(msg); } while (0)
-# define Trace(x)	do { fprintf x; } while (0)
-# define Tracev(x)	do { if (verbose) fprintf x ; } while (0)
-# define Tracevv(x)	do { if (verbose > 1) fprintf x ; } while (0)
-# define Tracec(c, x)	do { if (verbose && (c)) fprintf x ; } while (0)
-# define Tracecv(c, x)	do { if (verbose > 1 && (c)) fprintf x ; } while (0)
-#else
-# define Assert(cond, msg)
-# define Trace(x)
-# define Tracev(x)
-# define Tracevv(x)
-# define Tracec(c, x)
-# define Tracecv(c, x)
-#endif
-
-static int fill_inbuf(void);
-static void flush_window(void);
 static void error(char *m);

 /*
@@ -189,13 +131,8 @@ static void error(char *m);
 static struct boot_params *real_mode;		/* Pointer to real-mode data */
 static int quiet;

-extern unsigned char input_data[];
-extern int input_len;
-
-static long bytes_out;
-
 static void *memset(void *s, int c, unsigned n);
-static void *memcpy(void *dest, const void *src, unsigned n);
+void *memcpy(void *dest, const void *src, unsigned n);

 static void __putstr(int, const char *);
 #define putstr(__x)  __putstr(0, __x)
@@ -213,7 +150,17 @@ static char *vidmem;
 static int vidport;
 static int lines, cols;

-#include "../../../../lib/inflate.c"
+#ifdef CONFIG_KERNEL_GZIP
+#include "../../../../lib/decompress_inflate.c"
+#endif
+
+#ifdef CONFIG_KERNEL_BZIP2
+#include "../../../../lib/decompress_bunzip2.c"
+#endif
+
+#ifdef CONFIG_KERNEL_LZMA
+#include "../../../../lib/decompress_unlzma.c"
+#endif

 static void scroll(void)
 {
@@ -282,7 +229,7 @@ static void *memset(void *s, int c, unsigned n)
 	return s;
 }

-static void *memcpy(void *dest, const void *src, unsigned n)
+void *memcpy(void *dest, const void *src, unsigned n)
 {
 	int i;
 	const char *s = src;
@@ -293,38 +240,6 @@ static void *memcpy(void *dest, const void *src, unsigned n)
 	return dest;
 }

-/* ===========================================================================
- * Fill the input buffer. This is called only when the buffer is empty
- * and at least one byte is really needed.
- */
-static int fill_inbuf(void)
-{
-	error("ran out of input data");
-	return 0;
-}
-
-/* ===========================================================================
- * Write the output window window[0..outcnt-1] and update crc and bytes_out.
- * (Used for the decompressed data only.)
- */
-static void flush_window(void)
-{
-	/* With my window equal to my output buffer
-	 * I only need to compute the crc here.
-	 */
-	unsigned long c = crc;	/* temporary variable */
-	unsigned n;
-	unsigned char *in, ch;
-
-	in = window;
-	for (n = 0; n < outcnt; n++) {
-		ch = *in++;
-		c = crc_32_tab[((int)c ^ ch) & 0xff] ^ (c >> 8);
-	}
-	crc = c;
-	bytes_out += (unsigned long)outcnt;
-	outcnt = 0;
-}
-
 static void error(char *x)
 {
@@ -407,12 +322,8 @@ asmlinkage void decompress_kernel(void *rmode, memptr heap,
 	lines = real_mode->screen_info.orig_video_lines;
 	cols = real_mode->screen_info.orig_video_cols;

-	window = output;		/* Output buffer (Normally at 1M) */
 	free_mem_ptr     = heap;	/* Heap */
 	free_mem_end_ptr = heap + BOOT_HEAP_SIZE;
-	inbuf  = input_data;		/* Input buffer */
-	insize = input_len;
-	inptr  = 0;

 #ifdef CONFIG_X86_64
 	if ((unsigned long)output & (__KERNEL_ALIGN - 1))
@@ -430,10 +341,9 @@ asmlinkage void decompress_kernel(void *rmode, memptr heap,
 #endif
 #endif

-	makecrc();
 	if (!quiet)
 		putstr("\nDecompressing Linux... ");
-	gunzip();
+	decompress(input_data, input_len, NULL, NULL, output, NULL, error);
 	parse_elf(output);
 	if (!quiet)
 		putstr("done.\nBooting the kernel.\n");


@@ -1,7 +1,7 @@
 #
 # Automatically generated make config: don't edit
 # Linux kernel version: 2.6.29-rc4
-# Thu Feb 12 12:57:57 2009
+# Tue Feb 24 15:50:58 2009
 #
 # CONFIG_64BIT is not set
 CONFIG_X86_32=y
@@ -266,7 +266,9 @@ CONFIG_PREEMPT_VOLUNTARY=y
 CONFIG_X86_LOCAL_APIC=y
 CONFIG_X86_IO_APIC=y
 CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
-# CONFIG_X86_MCE is not set
+CONFIG_X86_MCE=y
+CONFIG_X86_MCE_NONFATAL=y
+CONFIG_X86_MCE_P4THERMAL=y
 CONFIG_VM86=y
 # CONFIG_TOSHIBA is not set
 # CONFIG_I8K is not set


@@ -1,7 +1,7 @@
 #
 # Automatically generated make config: don't edit
 # Linux kernel version: 2.6.29-rc4
-# Thu Feb 12 12:57:29 2009
+# Tue Feb 24 15:44:16 2009
 #
 CONFIG_64BIT=y
 # CONFIG_X86_32 is not set
@@ -266,7 +266,9 @@ CONFIG_PREEMPT_VOLUNTARY=y
 CONFIG_X86_LOCAL_APIC=y
 CONFIG_X86_IO_APIC=y
 CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
-# CONFIG_X86_MCE is not set
+CONFIG_X86_MCE=y
+CONFIG_X86_MCE_INTEL=y
+CONFIG_X86_MCE_AMD=y
 # CONFIG_I8K is not set
 CONFIG_MICROCODE=y
 CONFIG_MICROCODE_INTEL=y


@@ -75,7 +75,14 @@ static inline void default_inquire_remote_apic(int apicid)
 #define setup_secondary_clock setup_secondary_APIC_clock
 #endif

+#ifdef CONFIG_X86_VSMP
 extern int is_vsmp_box(void);
+#else
+static inline int is_vsmp_box(void)
+{
+	return 0;
+}
+#endif
 extern void xapic_wait_icr_idle(void);
 extern u32 safe_xapic_wait_icr_idle(void);
 extern void xapic_icr_write(u32, u32);
@@ -306,7 +313,7 @@ struct apic {
 	void (*send_IPI_self)(int vector);

 	/* wakeup_secondary_cpu */
-	int (*wakeup_cpu)(int apicid, unsigned long start_eip);
+	int (*wakeup_secondary_cpu)(int apicid, unsigned long start_eip);

 	int trampoline_phys_low;
 	int trampoline_phys_high;
@@ -324,8 +331,21 @@ struct apic {
 	u32 (*safe_wait_icr_idle)(void);
 };

+/*
+ * Pointer to the local APIC driver in use on this system (there's
+ * always just one such driver in use - the kernel decides via an
+ * early probing process which one it picks - and then sticks to it):
+ */
 extern struct apic *apic;

+/*
+ * APIC functionality to boot other CPUs - only used on SMP:
+ */
+#ifdef CONFIG_SMP
+extern atomic_t init_deasserted;
+extern int wakeup_secondary_cpu_via_nmi(int apicid, unsigned long start_eip);
+#endif
+
 static inline u32 apic_read(u32 reg)
 {
 	return apic->read(reg);
@@ -384,9 +404,7 @@ static inline unsigned default_get_apic_id(unsigned long x)
 #define DEFAULT_TRAMPOLINE_PHYS_LOW		0x467
 #define DEFAULT_TRAMPOLINE_PHYS_HIGH		0x469

-#ifdef CONFIG_X86_32
-extern void es7000_update_apic_to_cluster(void);
-#else
+#ifdef CONFIG_X86_64
 extern struct apic apic_flat;
 extern struct apic apic_physflat;
 extern struct apic apic_x2apic_cluster;


@@ -10,17 +10,31 @@
 #define EXTENDED_VGA	0xfffe		/* 80x50 mode */
 #define ASK_VGA		0xfffd		/* ask for it at bootup */

+#ifdef __KERNEL__
+
 /* Physical address where kernel should be loaded. */
 #define LOAD_PHYSICAL_ADDR ((CONFIG_PHYSICAL_START \
 				+ (CONFIG_PHYSICAL_ALIGN - 1)) \
 				& ~(CONFIG_PHYSICAL_ALIGN - 1))

+#ifdef CONFIG_KERNEL_BZIP2
+#define BOOT_HEAP_SIZE	0x400000
+#else /* !CONFIG_KERNEL_BZIP2 */
+
 #ifdef CONFIG_X86_64
 #define BOOT_HEAP_SIZE	0x7000
-#define BOOT_STACK_SIZE	0x4000
 #else
 #define BOOT_HEAP_SIZE	0x4000
+#endif
+
+#endif /* !CONFIG_KERNEL_BZIP2 */
+
+#ifdef CONFIG_X86_64
+#define BOOT_STACK_SIZE	0x4000
+#else
 #define BOOT_STACK_SIZE	0x1000
 #endif

+#endif /* __KERNEL__ */
+
 #endif /* _ASM_X86_BOOT_H */


@@ -1,11 +1,155 @@
+/*
+ * fixmap.h: compile-time virtual memory allocation
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1998 Ingo Molnar
+ *
+ * Support of BIGMEM added by Gerhard Wichert, Siemens AG, July 1999
+ * x86_32 and x86_64 integration by Gustavo F. Padovan, February 2009
+ */
+
 #ifndef _ASM_X86_FIXMAP_H
 #define _ASM_X86_FIXMAP_H

+#ifndef __ASSEMBLY__
+#include <linux/kernel.h>
+#include <asm/acpi.h>
+#include <asm/apicdef.h>
+#include <asm/page.h>
 #ifdef CONFIG_X86_32
-# include "fixmap_32.h"
+#include <linux/threads.h>
+#include <asm/kmap_types.h>
 #else
-# include "fixmap_64.h"
+#include <asm/vsyscall.h>
+#ifdef CONFIG_EFI
+#include <asm/efi.h>
 #endif
+#endif
+
+/*
+ * We can't declare FIXADDR_TOP as variable for x86_64 because vsyscall
+ * uses fixmaps that relies on FIXADDR_TOP for proper address calculation.
+ * Because of this, FIXADDR_TOP x86 integration was left as later work.
+ */
+#ifdef CONFIG_X86_32
+/* used by vmalloc.c, vsyscall.lds.S.
+ *
+ * Leave one empty page between vmalloc'ed areas and
+ * the start of the fixmap.
+ */
+extern unsigned long __FIXADDR_TOP;
+#define FIXADDR_TOP	((unsigned long)__FIXADDR_TOP)
+
+#define FIXADDR_USER_START	__fix_to_virt(FIX_VDSO)
+#define FIXADDR_USER_END	__fix_to_virt(FIX_VDSO - 1)
+#else
+#define FIXADDR_TOP	(VSYSCALL_END-PAGE_SIZE)
+
+/* Only covers 32bit vsyscalls currently. Need another set for 64bit. */
+#define FIXADDR_USER_START	((unsigned long)VSYSCALL32_VSYSCALL)
+#define FIXADDR_USER_END	(FIXADDR_USER_START + PAGE_SIZE)
+#endif
+
+/*
+ * Here we define all the compile-time 'special' virtual
+ * addresses. The point is to have a constant address at
+ * compile time, but to set the physical address only
+ * in the boot process.
+ * for x86_32: We allocate these special addresses
+ * from the end of virtual memory (0xfffff000) backwards.
+ * Also this lets us do fail-safe vmalloc(), we
+ * can guarantee that these special addresses and
+ * vmalloc()-ed addresses never overlap.
+ *
+ * These 'compile-time allocated' memory buffers are
+ * fixed-size 4k pages (or larger if used with an increment
+ * higher than 1). Use set_fixmap(idx,phys) to associate
+ * physical memory with fixmap indices.
+ *
+ * TLB entries of such buffers will not be flushed across
+ * task switches.
+ */
+enum fixed_addresses {
+#ifdef CONFIG_X86_32
+	FIX_HOLE,
+	FIX_VDSO,
+#else
+	VSYSCALL_LAST_PAGE,
+	VSYSCALL_FIRST_PAGE = VSYSCALL_LAST_PAGE
+			    + ((VSYSCALL_END-VSYSCALL_START) >> PAGE_SHIFT) - 1,
+	VSYSCALL_HPET,
+#endif
+	FIX_DBGP_BASE,
+	FIX_EARLYCON_MEM_BASE,
+#ifdef CONFIG_X86_LOCAL_APIC
+	FIX_APIC_BASE,	/* local (CPU) APIC) -- required for SMP or not */
+#endif
+#ifdef CONFIG_X86_IO_APIC
+	FIX_IO_APIC_BASE_0,
+	FIX_IO_APIC_BASE_END = FIX_IO_APIC_BASE_0 + MAX_IO_APICS - 1,
+#endif
+#ifdef CONFIG_X86_64
+#ifdef CONFIG_EFI
+	FIX_EFI_IO_MAP_LAST_PAGE,
+	FIX_EFI_IO_MAP_FIRST_PAGE = FIX_EFI_IO_MAP_LAST_PAGE
+				  + MAX_EFI_IO_PAGES - 1,
+#endif
+#endif
+#ifdef CONFIG_X86_VISWS_APIC
+	FIX_CO_CPU,	/* Cobalt timer */
+	FIX_CO_APIC,	/* Cobalt APIC Redirection Table */
+	FIX_LI_PCIA,	/* Lithium PCI Bridge A */
+	FIX_LI_PCIB,	/* Lithium PCI Bridge B */
+#endif
+#ifdef CONFIG_X86_F00F_BUG
+	FIX_F00F_IDT,	/* Virtual mapping for IDT */
+#endif
+#ifdef CONFIG_X86_CYCLONE_TIMER
+	FIX_CYCLONE_TIMER, /*cyclone timer register*/
+#endif
+#ifdef CONFIG_X86_32
+	FIX_KMAP_BEGIN,	/* reserved pte's for temporary kernel mappings */
+	FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1,
+#ifdef CONFIG_PCI_MMCONFIG
+	FIX_PCIE_MCFG,
+#endif
+#endif
+#ifdef CONFIG_PARAVIRT
+	FIX_PARAVIRT_BOOTMAP,
+#endif
+	__end_of_permanent_fixed_addresses,
+#ifdef CONFIG_PROVIDE_OHCI1394_DMA_INIT
+	FIX_OHCI1394_BASE,
+#endif
+	/*
+	 * 256 temporary boot-time mappings, used by early_ioremap(),
+	 * before ioremap() is functional.
+	 *
+	 * We round it up to the next 256 pages boundary so that we
+	 * can have a single pgd entry and a single pte table:
+	 */
+#define NR_FIX_BTMAPS		64
+#define FIX_BTMAPS_SLOTS	4
+	FIX_BTMAP_END = __end_of_permanent_fixed_addresses + 256 -
+			(__end_of_permanent_fixed_addresses & 255),
+	FIX_BTMAP_BEGIN = FIX_BTMAP_END + NR_FIX_BTMAPS*FIX_BTMAPS_SLOTS - 1,
+#ifdef CONFIG_X86_32
+	FIX_WP_TEST,
+#endif
+	__end_of_fixed_addresses
+};
+
+extern void reserve_top_address(unsigned long reserve);
+
+#define FIXADDR_SIZE		(__end_of_permanent_fixed_addresses << PAGE_SHIFT)
+#define FIXADDR_BOOT_SIZE	(__end_of_fixed_addresses << PAGE_SHIFT)
+#define FIXADDR_START		(FIXADDR_TOP - FIXADDR_SIZE)
+#define FIXADDR_BOOT_START	(FIXADDR_TOP - FIXADDR_BOOT_SIZE)
+
 extern int fixmaps_set;
@@ -69,4 +213,5 @@ static inline unsigned long virt_to_fix(const unsigned long vaddr)
 	BUG_ON(vaddr >= FIXADDR_TOP || vaddr < FIXADDR_START);
 	return __virt_to_fix(vaddr);
 }
+#endif /* !__ASSEMBLY__ */
 #endif /* _ASM_X86_FIXMAP_H */


@@ -1,115 +0,0 @@
-/*
- * fixmap.h: compile-time virtual memory allocation
- *
- * This file is subject to the terms and conditions of the GNU General Public
- * License.  See the file "COPYING" in the main directory of this archive
- * for more details.
- *
- * Copyright (C) 1998 Ingo Molnar
- *
- * Support of BIGMEM added by Gerhard Wichert, Siemens AG, July 1999
- */
-
-#ifndef _ASM_X86_FIXMAP_32_H
-#define _ASM_X86_FIXMAP_32_H
-
-/* used by vmalloc.c, vsyscall.lds.S.
- *
- * Leave one empty page between vmalloc'ed areas and
- * the start of the fixmap.
- */
-extern unsigned long __FIXADDR_TOP;
-#define FIXADDR_USER_START	__fix_to_virt(FIX_VDSO)
-#define FIXADDR_USER_END	__fix_to_virt(FIX_VDSO - 1)
-
-#ifndef __ASSEMBLY__
-#include <linux/kernel.h>
-#include <asm/acpi.h>
-#include <asm/apicdef.h>
-#include <asm/page.h>
-#include <linux/threads.h>
-#include <asm/kmap_types.h>
-
-/*
- * Here we define all the compile-time 'special' virtual
- * addresses. The point is to have a constant address at
- * compile time, but to set the physical address only
- * in the boot process. We allocate these special addresses
- * from the end of virtual memory (0xfffff000) backwards.
- * Also this lets us do fail-safe vmalloc(), we
- * can guarantee that these special addresses and
- * vmalloc()-ed addresses never overlap.
- *
- * these 'compile-time allocated' memory buffers are
- * fixed-size 4k pages. (or larger if used with an increment
- * highger than 1) use fixmap_set(idx,phys) to associate
- * physical memory with fixmap indices.
- *
- * TLB entries of such buffers will not be flushed across
- * task switches.
- */
-enum fixed_addresses {
-	FIX_HOLE,
-	FIX_VDSO,
-	FIX_DBGP_BASE,
-	FIX_EARLYCON_MEM_BASE,
-#ifdef CONFIG_X86_LOCAL_APIC
-	FIX_APIC_BASE,	/* local (CPU) APIC) -- required for SMP or not */
-#endif
-#ifdef CONFIG_X86_IO_APIC
-	FIX_IO_APIC_BASE_0,
-	FIX_IO_APIC_BASE_END = FIX_IO_APIC_BASE_0 + MAX_IO_APICS-1,
-#endif
-#ifdef CONFIG_X86_VISWS_APIC
-	FIX_CO_CPU,	/* Cobalt timer */
-	FIX_CO_APIC,	/* Cobalt APIC Redirection Table */
-	FIX_LI_PCIA,	/* Lithium PCI Bridge A */
-	FIX_LI_PCIB,	/* Lithium PCI Bridge B */
-#endif
-#ifdef CONFIG_X86_F00F_BUG
-	FIX_F00F_IDT,	/* Virtual mapping for IDT */
-#endif
-#ifdef CONFIG_X86_CYCLONE_TIMER
-	FIX_CYCLONE_TIMER, /*cyclone timer register*/
-#endif
-	FIX_KMAP_BEGIN,	/* reserved pte's for temporary kernel mappings */
-	FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1,
-#ifdef CONFIG_PCI_MMCONFIG
-	FIX_PCIE_MCFG,
-#endif
-#ifdef CONFIG_PARAVIRT
-	FIX_PARAVIRT_BOOTMAP,
-#endif
-	__end_of_permanent_fixed_addresses,
-	/*
-	 * 256 temporary boot-time mappings, used by early_ioremap(),
-	 * before ioremap() is functional.
-	 *
-	 * We round it up to the next 256 pages boundary so that we
-	 * can have a single pgd entry and a single pte table:
-	 */
-#define NR_FIX_BTMAPS		64
-#define FIX_BTMAPS_SLOTS	4
-	FIX_BTMAP_END = __end_of_permanent_fixed_addresses + 256 -
-			(__end_of_permanent_fixed_addresses & 255),
-	FIX_BTMAP_BEGIN = FIX_BTMAP_END + NR_FIX_BTMAPS*FIX_BTMAPS_SLOTS - 1,
-	FIX_WP_TEST,
-#ifdef CONFIG_PROVIDE_OHCI1394_DMA_INIT
-	FIX_OHCI1394_BASE,
-#endif
-	__end_of_fixed_addresses
-};
-
-extern void reserve_top_address(unsigned long reserve);
-
-#define FIXADDR_TOP		((unsigned long)__FIXADDR_TOP)
-#define __FIXADDR_SIZE		(__end_of_permanent_fixed_addresses << PAGE_SHIFT)
-#define __FIXADDR_BOOT_SIZE	(__end_of_fixed_addresses << PAGE_SHIFT)
-#define FIXADDR_START		(FIXADDR_TOP - __FIXADDR_SIZE)
-#define FIXADDR_BOOT_START	(FIXADDR_TOP - __FIXADDR_BOOT_SIZE)
-
-#endif /* !__ASSEMBLY__ */
-#endif /* _ASM_X86_FIXMAP_32_H */


@@ -1,79 +0,0 @@
-/*
- * fixmap.h: compile-time virtual memory allocation
- *
- * This file is subject to the terms and conditions of the GNU General Public
- * License.  See the file "COPYING" in the main directory of this archive
- * for more details.
- *
- * Copyright (C) 1998 Ingo Molnar
- */
-
-#ifndef _ASM_X86_FIXMAP_64_H
-#define _ASM_X86_FIXMAP_64_H
-
-#include <linux/kernel.h>
-#include <asm/acpi.h>
-#include <asm/apicdef.h>
-#include <asm/page.h>
-#include <asm/vsyscall.h>
-#include <asm/efi.h>
-
-/*
- * Here we define all the compile-time 'special' virtual
- * addresses. The point is to have a constant address at
- * compile time, but to set the physical address only
- * in the boot process.
- *
- * These 'compile-time allocated' memory buffers are
- * fixed-size 4k pages (or larger if used with an increment
- * higher than 1). Use set_fixmap(idx,phys) to associate
- * physical memory with fixmap indices.
- *
- * TLB entries of such buffers will not be flushed across
- * task switches.
- */
-
-enum fixed_addresses {
-	VSYSCALL_LAST_PAGE,
-	VSYSCALL_FIRST_PAGE = VSYSCALL_LAST_PAGE
-			    + ((VSYSCALL_END-VSYSCALL_START) >> PAGE_SHIFT) - 1,
-	VSYSCALL_HPET,
-	FIX_DBGP_BASE,
-	FIX_EARLYCON_MEM_BASE,
-	FIX_APIC_BASE,	/* local (CPU) APIC) -- required for SMP or not */
-	FIX_IO_APIC_BASE_0,
-	FIX_IO_APIC_BASE_END = FIX_IO_APIC_BASE_0 + MAX_IO_APICS - 1,
-	FIX_EFI_IO_MAP_LAST_PAGE,
-	FIX_EFI_IO_MAP_FIRST_PAGE = FIX_EFI_IO_MAP_LAST_PAGE
-				  + MAX_EFI_IO_PAGES - 1,
-#ifdef CONFIG_PARAVIRT
-	FIX_PARAVIRT_BOOTMAP,
-#endif
-	__end_of_permanent_fixed_addresses,
-#ifdef CONFIG_PROVIDE_OHCI1394_DMA_INIT
-	FIX_OHCI1394_BASE,
-#endif
-	/*
-	 * 256 temporary boot-time mappings, used by early_ioremap(),
-	 * before ioremap() is functional.
-	 *
-	 * We round it up to the next 256 pages boundary so that we
-	 * can have a single pgd entry and a single pte table:
-	 */
-#define NR_FIX_BTMAPS		64
-#define FIX_BTMAPS_SLOTS	4
-	FIX_BTMAP_END = __end_of_permanent_fixed_addresses + 256 -
-			(__end_of_permanent_fixed_addresses & 255),
-	FIX_BTMAP_BEGIN = FIX_BTMAP_END + NR_FIX_BTMAPS*FIX_BTMAPS_SLOTS - 1,
-	__end_of_fixed_addresses
-};
-
-#define FIXADDR_TOP	(VSYSCALL_END-PAGE_SIZE)
-#define FIXADDR_SIZE	(__end_of_fixed_addresses << PAGE_SHIFT)
-#define FIXADDR_START	(FIXADDR_TOP - FIXADDR_SIZE)
-
-/* Only covers 32bit vsyscalls currently. Need another set for 64bit. */
-#define FIXADDR_USER_START	((unsigned long)VSYSCALL32_VSYSCALL)
-#define FIXADDR_USER_END	(FIXADDR_USER_START + PAGE_SIZE)
-
-#endif /* _ASM_X86_FIXMAP_64_H */


@@ -23,6 +23,9 @@
 #include <asm/pgtable.h>
 #include <asm/tlbflush.h>

+int
+is_io_mapping_possible(resource_size_t base, unsigned long size);
+
 void *
 iomap_atomic_prot_pfn(unsigned long pfn, enum km_type type, pgprot_t prot);


@@ -4,8 +4,12 @@
 extern int pxm_to_nid(int pxm);
 extern void numa_remove_cpu(int cpu);

-#ifdef CONFIG_NUMA
+#ifdef CONFIG_HIGHMEM
 extern void set_highmem_pages_init(void);
+#else
+static inline void set_highmem_pages_init(void)
+{
+}
+#endif

 #endif /* _ASM_X86_NUMA_32_H */


@@ -15,4 +15,7 @@ extern int reserve_memtype(u64 start, u64 end,
 		unsigned long req_type, unsigned long *ret_type);
 extern int free_memtype(u64 start, u64 end);

+extern int kernel_map_sync_memtype(u64 base, unsigned long size,
+		unsigned long flag);
+
 #endif /* _ASM_X86_PAT_H */


@@ -248,7 +248,6 @@ struct x86_hw_tss {
 #define IO_BITMAP_LONGS			(IO_BITMAP_BYTES/sizeof(long))
 #define IO_BITMAP_OFFSET		offsetof(struct tss_struct, io_bitmap)
 #define INVALID_IO_BITMAP_OFFSET	0x8000
-#define INVALID_IO_BITMAP_OFFSET_LAZY	0x9000

 struct tss_struct {
 	/*
@@ -263,11 +262,6 @@ struct tss_struct {
 	 * be within the limit.
 	 */
 	unsigned long		io_bitmap[IO_BITMAP_LONGS + 1];
-	/*
-	 * Cache the current maximum and the last task that used the bitmap:
-	 */
-	unsigned long		io_bitmap_max;
-	struct thread_struct	*io_bitmap_owner;

 	/*
 	 * .. and then another 0x100 bytes for the emergency kernel stack:


@@ -1,12 +1,6 @@
 #ifndef _ASM_X86_SECCOMP_32_H
 #define _ASM_X86_SECCOMP_32_H

-#include <linux/thread_info.h>
-
-#ifdef TIF_32BIT
-#error "unexpected TIF_32BIT on i386"
-#endif
-
 #include <linux/unistd.h>

 #define __NR_seccomp_read __NR_read


@@ -1,14 +1,6 @@
 #ifndef _ASM_X86_SECCOMP_64_H
 #define _ASM_X86_SECCOMP_64_H

-#include <linux/thread_info.h>
-
-#ifdef TIF_32BIT
-#error "unexpected TIF_32BIT on x86_64"
-#else
-#define TIF_32BIT TIF_IA32
-#endif
-
 #include <linux/unistd.h>
 #include <asm/ia32_unistd.h>


@@ -31,7 +31,6 @@ struct x86_quirks {
 	void (*smp_read_mpc_oem)(struct mpc_oemtable *oemtable,
 				unsigned short oemsize);
 	int (*setup_ioapic_ids)(void);
-	int (*update_apic)(void);
 };

 extern void x86_quirk_pre_intr_init(void);
@@ -65,7 +64,11 @@ extern void x86_quirk_time_init(void);
 #include <asm/bootparam.h>

 /* Interrupt control for vSMPowered x86_64 systems */
+#ifdef CONFIG_X86_VSMP
 void vsmp_init(void);
+#else
+static inline void vsmp_init(void) { }
+#endif

 void setup_bios_corruption_check(void);
@@ -77,8 +80,6 @@ static inline void visws_early_detect(void) { }
 static inline int is_visws_box(void) { return 0; }
 #endif

-extern int wakeup_secondary_cpu_via_nmi(int apicid, unsigned long start_eip);
-extern int wakeup_secondary_cpu_via_init(int apicid, unsigned long start_eip);
-
 extern struct x86_quirks *x86_quirks;
 extern unsigned long saved_video_mode;


@@ -20,6 +20,9 @@
 struct task_struct; /* one of the stranger aspects of C forward declarations */
 struct task_struct *__switch_to(struct task_struct *prev,
 				struct task_struct *next);
+struct tss_struct;
+void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
+		      struct tss_struct *tss);

 #ifdef CONFIG_X86_32


@@ -188,30 +188,18 @@ __copy_to_user_inatomic(void __user *dst, const void *src, unsigned size)
 extern long __copy_user_nocache(void *dst, const void __user *src,
 				unsigned size, int zerorest);

-static inline int __copy_from_user_nocache(void *dst, const void __user *src,
-					   unsigned size)
+static inline int
+__copy_from_user_nocache(void *dst, const void __user *src, unsigned size)
 {
 	might_sleep();
-	/*
-	 * In practice this limit means that large file write()s
-	 * which get chunked to 4K copies get handled via
-	 * non-temporal stores here. Smaller writes get handled
-	 * via regular __copy_from_user():
-	 */
-	if (likely(size >= PAGE_SIZE))
-		return __copy_user_nocache(dst, src, size, 1);
-	else
-		return __copy_from_user(dst, src, size);
+	return __copy_user_nocache(dst, src, size, 1);
 }

-static inline int __copy_from_user_inatomic_nocache(void *dst,
-						    const void __user *src,
-						    unsigned size)
+static inline int
+__copy_from_user_inatomic_nocache(void *dst, const void __user *src,
+				  unsigned size)
 {
-	if (likely(size >= PAGE_SIZE))
-		return __copy_user_nocache(dst, src, size, 0);
-	else
-		return __copy_from_user_inatomic(dst, src, size);
+	return __copy_user_nocache(dst, src, size, 0);
 }

 unsigned long


@@ -12,7 +12,6 @@ extern enum uv_system_type get_uv_system_type(void);
 extern int is_uv_system(void);
 extern void uv_cpu_init(void);
 extern void uv_system_init(void);
-extern int uv_wakeup_secondary(int phys_apicid, unsigned int start_rip);
 extern const struct cpumask *uv_flush_tlb_others(const struct cpumask *cpumask,
 						 struct mm_struct *mm,
 						 unsigned long va,
@@ -24,8 +23,6 @@ static inline enum uv_system_type get_uv_system_type(void) { return UV_NONE; }
 static inline int is_uv_system(void)	{ return 0; }
 static inline void uv_cpu_init(void)	{ }
 static inline void uv_system_init(void)	{ }
-static inline int uv_wakeup_secondary(int phys_apicid, unsigned int start_rip)
-{ return 1; }
 static inline const struct cpumask *
 uv_flush_tlb_others(const struct cpumask *cpumask, struct mm_struct *mm,
 		    unsigned long va, unsigned int cpu)


@@ -70,7 +70,7 @@ obj-$(CONFIG_FUNCTION_GRAPH_TRACER)	+= ftrace.o
 obj-$(CONFIG_KEXEC)		+= machine_kexec_$(BITS).o
 obj-$(CONFIG_KEXEC)		+= relocate_kernel_$(BITS).o crash.o
 obj-$(CONFIG_CRASH_DUMP)	+= crash_dump_$(BITS).o
-obj-y				+= vsmp_64.o
+obj-$(CONFIG_X86_VSMP)		+= vsmp_64.o
 obj-$(CONFIG_KPROBES)		+= kprobes.o
 obj-$(CONFIG_MODULES)		+= module_$(BITS).o
 obj-$(CONFIG_EFI)		+= efi.o efi_$(BITS).o efi_stub_$(BITS).o


@@ -498,12 +498,12 @@ void *text_poke_early(void *addr, const void *opcode, size_t len)
  */
 void *__kprobes text_poke(void *addr, const void *opcode, size_t len)
 {
-	unsigned long flags;
 	char *vaddr;
 	int nr_pages = 2;
 	struct page *pages[2];
 	int i;

+	might_sleep();
 	if (!core_kernel_text((unsigned long)addr)) {
 		pages[0] = vmalloc_to_page(addr);
 		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
@@ -517,9 +517,9 @@ void *__kprobes text_poke(void *addr, const void *opcode, size_t len)
 	nr_pages = 1;
 	vaddr = vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);
 	BUG_ON(!vaddr);
-	local_irq_save(flags);
+	local_irq_disable();
 	memcpy(&vaddr[(unsigned long)addr & ~PAGE_MASK], opcode, len);
-	local_irq_restore(flags);
+	local_irq_enable();
 	vunmap(vaddr);
 	sync_core();
 	/* Could also do a CLFLUSH here to speed up CPU recovery; but


@@ -222,7 +222,6 @@ struct apic apic_flat = {
         .send_IPI_all = flat_send_IPI_all,
         .send_IPI_self = apic_send_IPI_self,
 
-        .wakeup_cpu = NULL,
         .trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW,
         .trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH,
         .wait_for_init_deassert = NULL,
@@ -373,7 +372,6 @@ struct apic apic_physflat = {
         .send_IPI_all = physflat_send_IPI_all,
         .send_IPI_self = apic_send_IPI_self,
 
-        .wakeup_cpu = NULL,
         .trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW,
         .trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH,
         .wait_for_init_deassert = NULL,


@@ -16,17 +16,17 @@
 #include <asm/apic.h>
 #include <asm/ipi.h>
 
-static inline unsigned bigsmp_get_apic_id(unsigned long x)
+static unsigned bigsmp_get_apic_id(unsigned long x)
 {
         return (x >> 24) & 0xFF;
 }
 
-static inline int bigsmp_apic_id_registered(void)
+static int bigsmp_apic_id_registered(void)
 {
         return 1;
 }
 
-static inline const cpumask_t *bigsmp_target_cpus(void)
+static const cpumask_t *bigsmp_target_cpus(void)
 {
 #ifdef CONFIG_SMP
         return &cpu_online_map;
@@ -35,13 +35,12 @@ static inline const cpumask_t *bigsmp_target_cpus(void)
 #endif
 }
 
-static inline unsigned long
-bigsmp_check_apicid_used(physid_mask_t bitmap, int apicid)
+static unsigned long bigsmp_check_apicid_used(physid_mask_t bitmap, int apicid)
 {
         return 0;
 }
 
-static inline unsigned long bigsmp_check_apicid_present(int bit)
+static unsigned long bigsmp_check_apicid_present(int bit)
 {
         return 1;
 }
@@ -64,7 +63,7 @@ static inline unsigned long calculate_ldr(int cpu)
  * an APIC. See e.g. "AP-388 82489DX User's Manual" (Intel
  * document number 292116). So here it goes...
  */
-static inline void bigsmp_init_apic_ldr(void)
+static void bigsmp_init_apic_ldr(void)
 {
         unsigned long val;
         int cpu = smp_processor_id();
@@ -74,19 +73,19 @@ static inline void bigsmp_init_apic_ldr(void)
         apic_write(APIC_LDR, val);
 }
 
-static inline void bigsmp_setup_apic_routing(void)
+static void bigsmp_setup_apic_routing(void)
 {
         printk(KERN_INFO
                 "Enabling APIC mode: Physflat. Using %d I/O APICs\n",
                 nr_ioapics);
 }
 
-static inline int bigsmp_apicid_to_node(int logical_apicid)
+static int bigsmp_apicid_to_node(int logical_apicid)
 {
         return apicid_2_node[hard_smp_processor_id()];
 }
 
-static inline int bigsmp_cpu_present_to_apicid(int mps_cpu)
+static int bigsmp_cpu_present_to_apicid(int mps_cpu)
 {
         if (mps_cpu < nr_cpu_ids)
                 return (int) per_cpu(x86_bios_cpu_apicid, mps_cpu);
@@ -94,7 +93,7 @@ static inline int bigsmp_cpu_present_to_apicid(int mps_cpu)
         return BAD_APICID;
 }
 
-static inline physid_mask_t bigsmp_apicid_to_cpu_present(int phys_apicid)
+static physid_mask_t bigsmp_apicid_to_cpu_present(int phys_apicid)
 {
         return physid_mask_of_physid(phys_apicid);
 }
@@ -107,29 +106,24 @@ static inline int bigsmp_cpu_to_logical_apicid(int cpu)
         return cpu_physical_id(cpu);
 }
 
-static inline physid_mask_t bigsmp_ioapic_phys_id_map(physid_mask_t phys_map)
+static physid_mask_t bigsmp_ioapic_phys_id_map(physid_mask_t phys_map)
 {
         /* For clustered we don't have a good way to do this yet - hack */
         return physids_promote(0xFFL);
 }
 
-static inline void bigsmp_setup_portio_remap(void)
-{
-}
-
-static inline int bigsmp_check_phys_apicid_present(int boot_cpu_physical_apicid)
+static int bigsmp_check_phys_apicid_present(int boot_cpu_physical_apicid)
 {
         return 1;
 }
 
 /* As we are using single CPU as destination, pick only one CPU here */
-static inline unsigned int bigsmp_cpu_mask_to_apicid(const cpumask_t *cpumask)
+static unsigned int bigsmp_cpu_mask_to_apicid(const cpumask_t *cpumask)
 {
         return bigsmp_cpu_to_logical_apicid(first_cpu(*cpumask));
 }
 
-static inline unsigned int
-bigsmp_cpu_mask_to_apicid_and(const struct cpumask *cpumask,
+static unsigned int bigsmp_cpu_mask_to_apicid_and(const struct cpumask *cpumask,
                               const struct cpumask *andmask)
 {
         int cpu;
@@ -148,7 +142,7 @@ bigsmp_cpu_mask_to_apicid_and(const struct cpumask *cpumask,
         return BAD_APICID;
 }
 
-static inline int bigsmp_phys_pkg_id(int cpuid_apic, int index_msb)
+static int bigsmp_phys_pkg_id(int cpuid_apic, int index_msb)
 {
         return cpuid_apic >> index_msb;
 }
@@ -158,12 +152,12 @@ static inline void bigsmp_send_IPI_mask(const struct cpumask *mask, int vector)
         default_send_IPI_mask_sequence_phys(mask, vector);
 }
 
-static inline void bigsmp_send_IPI_allbutself(int vector)
+static void bigsmp_send_IPI_allbutself(int vector)
 {
         default_send_IPI_mask_allbutself_phys(cpu_online_mask, vector);
 }
 
-static inline void bigsmp_send_IPI_all(int vector)
+static void bigsmp_send_IPI_all(int vector)
 {
         bigsmp_send_IPI_mask(cpu_online_mask, vector);
 }
@@ -256,7 +250,6 @@ struct apic apic_bigsmp = {
         .send_IPI_all = bigsmp_send_IPI_all,
         .send_IPI_self = default_send_IPI_self,
 
-        .wakeup_cpu = NULL,
         .trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW,
         .trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH,


@@ -163,22 +163,17 @@ static int wakeup_secondary_cpu_via_mip(int cpu, unsigned long eip)
         return 0;
 }
 
-static int __init es7000_update_apic(void)
+static int es7000_apic_is_cluster(void)
 {
-        apic->wakeup_cpu = wakeup_secondary_cpu_via_mip;
-
         /* MPENTIUMIII */
         if (boot_cpu_data.x86 == 6 &&
-            (boot_cpu_data.x86_model >= 7 || boot_cpu_data.x86_model <= 11)) {
-                es7000_update_apic_to_cluster();
-                apic->wait_for_init_deassert = NULL;
-                apic->wakeup_cpu = wakeup_secondary_cpu_via_mip;
-        }
+            (boot_cpu_data.x86_model >= 7 || boot_cpu_data.x86_model <= 11))
+                return 1;
 
         return 0;
 }
 
-static void __init setup_unisys(void)
+static void setup_unisys(void)
 {
         /*
          * Determine the generation of the ES7000 currently running.
@@ -192,14 +187,12 @@ static void __init setup_unisys(void)
         else
                 es7000_plat = ES7000_CLASSIC;
         ioapic_renumber_irq = es7000_rename_gsi;
-        x86_quirks->update_apic = es7000_update_apic;
 }
 
 /*
  * Parse the OEM Table:
  */
-static int __init parse_unisys_oem(char *oemptr)
+static int parse_unisys_oem(char *oemptr)
 {
         int i;
         int success = 0;
@@ -261,7 +254,7 @@ static int __init parse_unisys_oem(char *oemptr)
 }
 
 #ifdef CONFIG_ACPI
-static int __init find_unisys_acpi_oem_table(unsigned long *oem_addr)
+static int find_unisys_acpi_oem_table(unsigned long *oem_addr)
 {
         struct acpi_table_header *header = NULL;
         struct es7000_oem_table *table;
@@ -292,7 +285,7 @@ static int __init find_unisys_acpi_oem_table(unsigned long *oem_addr)
         return 0;
 }
 
-static void __init unmap_unisys_acpi_oem_table(unsigned long oem_addr)
+static void unmap_unisys_acpi_oem_table(unsigned long oem_addr)
 {
         if (!oem_addr)
                 return;
@@ -310,8 +303,10 @@ static int es7000_check_dsdt(void)
         return 0;
 }
 
+static int es7000_acpi_ret;
+
 /* Hook from generic ACPI tables.c */
-static int __init es7000_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
+static int es7000_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
 {
         unsigned long oem_addr = 0;
         int check_dsdt;
@@ -332,10 +327,26 @@ static int __init es7000_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
                  */
                 unmap_unisys_acpi_oem_table(oem_addr);
         }
-        return ret;
+
+        es7000_acpi_ret = ret;
+
+        return ret && !es7000_apic_is_cluster();
 }
 
+static int es7000_acpi_madt_oem_check_cluster(char *oem_id, char *oem_table_id)
+{
+        int ret = es7000_acpi_ret;
+
+        return ret && es7000_apic_is_cluster();
+}
+
 #else /* !CONFIG_ACPI: */
-static int __init es7000_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
+static int es7000_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
+{
+        return 0;
+}
+
+static int es7000_acpi_madt_oem_check_cluster(char *oem_id, char *oem_table_id)
 {
         return 0;
 }
@@ -349,8 +360,7 @@ static void es7000_spin(int n)
                 rep_nop();
 }
 
-static int __init
-es7000_mip_write(struct mip_reg *mip_reg)
+static int es7000_mip_write(struct mip_reg *mip_reg)
 {
         int status = 0;
         int spin;
@@ -383,7 +393,7 @@ es7000_mip_write(struct mip_reg *mip_reg)
         return status;
 }
 
-static void __init es7000_enable_apic_mode(void)
+static void es7000_enable_apic_mode(void)
 {
         struct mip_reg es7000_mip_reg;
         int mip_status;
@@ -416,11 +426,8 @@ static void es7000_vector_allocation_domain(int cpu, cpumask_t *retmask)
 
 static void es7000_wait_for_init_deassert(atomic_t *deassert)
 {
-#ifndef CONFIG_ES7000_CLUSTERED_APIC
         while (!atomic_read(deassert))
                 cpu_relax();
-#endif
-        return;
 }
 
 static unsigned int es7000_get_apic_id(unsigned long x)
@@ -565,72 +572,24 @@ static int es7000_check_phys_apicid_present(int cpu_physical_apicid)
         return 1;
 }
 
-static unsigned int
-es7000_cpu_mask_to_apicid_cluster(const struct cpumask *cpumask)
-{
-        int cpus_found = 0;
-        int num_bits_set;
-        int apicid;
-        int cpu;
-
-        num_bits_set = cpumask_weight(cpumask);
-        /* Return id to all */
-        if (num_bits_set == nr_cpu_ids)
-                return 0xFF;
-        /*
-         * The cpus in the mask must all be on the apic cluster. If are not
-         * on the same apicid cluster return default value of target_cpus():
-         */
-        cpu = cpumask_first(cpumask);
-        apicid = es7000_cpu_to_logical_apicid(cpu);
-
-        while (cpus_found < num_bits_set) {
-                if (cpumask_test_cpu(cpu, cpumask)) {
-                        int new_apicid = es7000_cpu_to_logical_apicid(cpu);
-
-                        if (APIC_CLUSTER(apicid) != APIC_CLUSTER(new_apicid)) {
-                                WARN(1, "Not a valid mask!");
-
-                                return 0xFF;
-                        }
-                        apicid = new_apicid;
-                        cpus_found++;
-                }
-                cpu++;
-        }
-        return apicid;
-}
-
 static unsigned int es7000_cpu_mask_to_apicid(const cpumask_t *cpumask)
 {
-        int cpus_found = 0;
-        int num_bits_set;
-        int apicid;
-        int cpu;
+        unsigned int round = 0;
+        int cpu, uninitialized_var(apicid);
 
-        num_bits_set = cpus_weight(*cpumask);
-        /* Return id to all */
-        if (num_bits_set == nr_cpu_ids)
-                return es7000_cpu_to_logical_apicid(0);
         /*
-         * The cpus in the mask must all be on the apic cluster. If are not
-         * on the same apicid cluster return default value of target_cpus():
+         * The cpus in the mask must all be on the apic cluster.
          */
-        cpu = first_cpu(*cpumask);
-        apicid = es7000_cpu_to_logical_apicid(cpu);
-
-        while (cpus_found < num_bits_set) {
-                if (cpu_isset(cpu, *cpumask)) {
-                        int new_apicid = es7000_cpu_to_logical_apicid(cpu);
+        for_each_cpu(cpu, cpumask) {
+                int new_apicid = es7000_cpu_to_logical_apicid(cpu);
 
-                        if (APIC_CLUSTER(apicid) != APIC_CLUSTER(new_apicid)) {
-                                printk("%s: Not a valid mask!\n", __func__);
+                if (round && APIC_CLUSTER(apicid) != APIC_CLUSTER(new_apicid)) {
+                        WARN(1, "Not a valid mask!");
 
-                                return es7000_cpu_to_logical_apicid(0);
-                        }
-                        apicid = new_apicid;
-                        cpus_found++;
+                        return BAD_APICID;
                 }
-                cpu++;
+
+                apicid = new_apicid;
+                round++;
         }
         return apicid;
 }
@@ -659,37 +618,103 @@ static int es7000_phys_pkg_id(int cpuid_apic, int index_msb)
         return cpuid_apic >> index_msb;
 }
 
-void __init es7000_update_apic_to_cluster(void)
-{
-        apic->target_cpus = target_cpus_cluster;
-        apic->irq_delivery_mode = dest_LowestPrio;
-        /* logical delivery broadcast to all procs: */
-        apic->irq_dest_mode = 1;
-
-        apic->init_apic_ldr = es7000_init_apic_ldr_cluster;
-
-        apic->cpu_mask_to_apicid = es7000_cpu_mask_to_apicid_cluster;
-}
-
 static int probe_es7000(void)
 {
         /* probed later in mptable/ACPI hooks */
         return 0;
 }
 
-static __init int
-es7000_mps_oem_check(struct mpc_table *mpc, char *oem, char *productid)
+static int es7000_mps_ret;
+static int es7000_mps_oem_check(struct mpc_table *mpc, char *oem,
+                char *productid)
 {
+        int ret = 0;
+
         if (mpc->oemptr) {
                 struct mpc_oemtable *oem_table =
                         (struct mpc_oemtable *)mpc->oemptr;
+
                 if (!strncmp(oem, "UNISYS", 6))
-                        return parse_unisys_oem((char *)oem_table);
+                        ret = parse_unisys_oem((char *)oem_table);
         }
-        return 0;
+
+        es7000_mps_ret = ret;
+
+        return ret && !es7000_apic_is_cluster();
 }
 
+static int es7000_mps_oem_check_cluster(struct mpc_table *mpc, char *oem,
+                char *productid)
+{
+        int ret = es7000_mps_ret;
+
+        return ret && es7000_apic_is_cluster();
+}
+
+struct apic apic_es7000_cluster = {
+
+        .name = "es7000",
+        .probe = probe_es7000,
+        .acpi_madt_oem_check = es7000_acpi_madt_oem_check_cluster,
+        .apic_id_registered = es7000_apic_id_registered,
+
+        .irq_delivery_mode = dest_LowestPrio,
+        /* logical delivery broadcast to all procs: */
+        .irq_dest_mode = 1,
+
+        .target_cpus = target_cpus_cluster,
+        .disable_esr = 1,
+        .dest_logical = 0,
+        .check_apicid_used = es7000_check_apicid_used,
+        .check_apicid_present = es7000_check_apicid_present,
+
+        .vector_allocation_domain = es7000_vector_allocation_domain,
+        .init_apic_ldr = es7000_init_apic_ldr_cluster,
+
+        .ioapic_phys_id_map = es7000_ioapic_phys_id_map,
+        .setup_apic_routing = es7000_setup_apic_routing,
+        .multi_timer_check = NULL,
+        .apicid_to_node = es7000_apicid_to_node,
+        .cpu_to_logical_apicid = es7000_cpu_to_logical_apicid,
+        .cpu_present_to_apicid = es7000_cpu_present_to_apicid,
+        .apicid_to_cpu_present = es7000_apicid_to_cpu_present,
+        .setup_portio_remap = NULL,
+        .check_phys_apicid_present = es7000_check_phys_apicid_present,
+        .enable_apic_mode = es7000_enable_apic_mode,
+        .phys_pkg_id = es7000_phys_pkg_id,
+        .mps_oem_check = es7000_mps_oem_check_cluster,
+
+        .get_apic_id = es7000_get_apic_id,
+        .set_apic_id = NULL,
+        .apic_id_mask = 0xFF << 24,
+
+        .cpu_mask_to_apicid = es7000_cpu_mask_to_apicid,
+        .cpu_mask_to_apicid_and = es7000_cpu_mask_to_apicid_and,
+
+        .send_IPI_mask = es7000_send_IPI_mask,
+        .send_IPI_mask_allbutself = NULL,
+        .send_IPI_allbutself = es7000_send_IPI_allbutself,
+        .send_IPI_all = es7000_send_IPI_all,
+        .send_IPI_self = default_send_IPI_self,
+
+        .wakeup_secondary_cpu = wakeup_secondary_cpu_via_mip,
+
+        .trampoline_phys_low = 0x467,
+        .trampoline_phys_high = 0x469,
+
+        .wait_for_init_deassert = NULL,
+
+        /* Nothing to do for most platforms, since cleared by the INIT cycle: */
+        .smp_callin_clear_local_apic = NULL,
+        .inquire_remote_apic = default_inquire_remote_apic,
+
+        .read = native_apic_mem_read,
+        .write = native_apic_mem_write,
+        .icr_read = native_apic_icr_read,
+        .icr_write = native_apic_icr_write,
+        .wait_icr_idle = native_apic_wait_icr_idle,
+        .safe_wait_icr_idle = native_safe_apic_wait_icr_idle,
+};
+
 struct apic apic_es7000 = {
@@ -737,8 +762,6 @@ struct apic apic_es7000 = {
         .send_IPI_all = es7000_send_IPI_all,
         .send_IPI_self = default_send_IPI_self,
 
-        .wakeup_cpu = NULL,
-
         .trampoline_phys_low = 0x467,
         .trampoline_phys_high = 0x469,


@@ -69,7 +69,7 @@ struct mpc_trans {
 /* x86_quirks member */
 static int mpc_record;
 
-static __cpuinitdata struct mpc_trans *translation_table[MAX_MPC_ENTRY];
+static struct mpc_trans *translation_table[MAX_MPC_ENTRY];
 
 int mp_bus_id_to_node[MAX_MP_BUSSES];
 int mp_bus_id_to_local[MAX_MP_BUSSES];
@@ -256,13 +256,6 @@ static int __init numaq_setup_ioapic_ids(void)
         return 1;
 }
 
-static int __init numaq_update_apic(void)
-{
-        apic->wakeup_cpu = wakeup_secondary_cpu_via_nmi;
-
-        return 0;
-}
-
 static struct x86_quirks numaq_x86_quirks __initdata = {
         .arch_pre_time_init = numaq_pre_time_init,
         .arch_time_init = NULL,
@@ -278,7 +271,6 @@ static struct x86_quirks numaq_x86_quirks __initdata = {
         .mpc_oem_pci_bus = mpc_oem_pci_bus,
         .smp_read_mpc_oem = smp_read_mpc_oem,
         .setup_ioapic_ids = numaq_setup_ioapic_ids,
-        .update_apic = numaq_update_apic,
 };
 
 static __init void early_check_numaq(void)
@@ -546,7 +538,7 @@ struct apic apic_numaq = {
         .send_IPI_all = numaq_send_IPI_all,
         .send_IPI_self = default_send_IPI_self,
 
-        .wakeup_cpu = NULL,
+        .wakeup_secondary_cpu = wakeup_secondary_cpu_via_nmi,
         .trampoline_phys_low = NUMAQ_TRAMPOLINE_PHYS_LOW,
         .trampoline_phys_high = NUMAQ_TRAMPOLINE_PHYS_HIGH,


@@ -138,7 +138,6 @@ struct apic apic_default = {
         .send_IPI_all = default_send_IPI_all,
         .send_IPI_self = default_send_IPI_self,
 
-        .wakeup_cpu = NULL,
         .trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW,
         .trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH,
@@ -159,6 +158,7 @@ extern struct apic apic_numaq;
 extern struct apic apic_summit;
 extern struct apic apic_bigsmp;
 extern struct apic apic_es7000;
+extern struct apic apic_es7000_cluster;
 extern struct apic apic_default;
 
 struct apic *apic = &apic_default;
@@ -176,6 +176,7 @@ static struct apic *apic_probe[] __initdata = {
 #endif
 #ifdef CONFIG_X86_ES7000
         &apic_es7000,
+        &apic_es7000_cluster,
 #endif
         &apic_default,  /* must be last */
         NULL,
@@ -197,9 +198,6 @@ static int __init parse_apic(char *arg)
                 }
         }
 
-        if (x86_quirks->update_apic)
-                x86_quirks->update_apic();
-
         /* Parsed again by __setup for debug/verbose */
         return 0;
 }
@@ -218,8 +216,6 @@ void __init generic_bigsmp_probe(void)
         if (!cmdline_apic && apic == &apic_default) {
                 if (apic_bigsmp.probe()) {
                         apic = &apic_bigsmp;
-                        if (x86_quirks->update_apic)
-                                x86_quirks->update_apic();
                         printk(KERN_INFO "Overriding APIC driver with %s\n",
                                apic->name);
                 }
@@ -240,9 +236,6 @@ void __init generic_apic_probe(void)
                 /* Not visible without early console */
                 if (!apic_probe[i])
                         panic("Didn't find an APIC driver");
-
-                if (x86_quirks->update_apic)
-                        x86_quirks->update_apic();
         }
         printk(KERN_INFO "Using APIC driver %s\n", apic->name);
 }
@@ -262,8 +255,6 @@ generic_mps_oem_check(struct mpc_table *mpc, char *oem, char *productid)
 
                 if (!cmdline_apic) {
                         apic = apic_probe[i];
-                        if (x86_quirks->update_apic)
-                                x86_quirks->update_apic();
                         printk(KERN_INFO "Switched to APIC driver `%s'.\n",
                                apic->name);
                 }
@@ -284,8 +275,6 @@ int __init default_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
 
                 if (!cmdline_apic) {
                         apic = apic_probe[i];
-                        if (x86_quirks->update_apic)
-                                x86_quirks->update_apic();
                         printk(KERN_INFO "Switched to APIC driver `%s'.\n",
                                apic->name);
                 }


@@ -68,9 +68,6 @@ void __init default_setup_apic_routing(void)
                         apic = &apic_physflat;
                 printk(KERN_INFO "Setting APIC routing to %s\n", apic->name);
         }
-
-        if (x86_quirks->update_apic)
-                x86_quirks->update_apic();
 }
 
 /* Same for both flat and physical. */


@@ -48,7 +48,7 @@
 #include <linux/gfp.h>
 #include <linux/smp.h>
 
-static inline unsigned summit_get_apic_id(unsigned long x)
+static unsigned summit_get_apic_id(unsigned long x)
 {
         return (x >> 24) & 0xFF;
 }
@@ -58,7 +58,7 @@ static inline void summit_send_IPI_mask(const cpumask_t *mask, int vector)
         default_send_IPI_mask_sequence_logical(mask, vector);
 }
 
-static inline void summit_send_IPI_allbutself(int vector)
+static void summit_send_IPI_allbutself(int vector)
 {
         cpumask_t mask = cpu_online_map;
         cpu_clear(smp_processor_id(), mask);
@@ -67,7 +67,7 @@ static inline void summit_send_IPI_allbutself(int vector)
                 summit_send_IPI_mask(&mask, vector);
 }
 
-static inline void summit_send_IPI_all(int vector)
+static void summit_send_IPI_all(int vector)
 {
         summit_send_IPI_mask(&cpu_online_map, vector);
 }
@@ -77,13 +77,13 @@ static inline void summit_send_IPI_all(int vector)
 extern int use_cyclone;
 
 #ifdef CONFIG_X86_SUMMIT_NUMA
-extern void setup_summit(void);
+static void setup_summit(void);
 #else
-#define setup_summit() {}
+static inline void setup_summit(void) {}
 #endif
 
-static inline int
-summit_mps_oem_check(struct mpc_table *mpc, char *oem, char *productid)
+static int summit_mps_oem_check(struct mpc_table *mpc, char *oem,
+                char *productid)
 {
         if (!strncmp(oem, "IBM ENSW", 8) &&
             (!strncmp(productid, "VIGIL SMP", 9)
} }
/* Hook from generic ACPI tables.c */ /* Hook from generic ACPI tables.c */
static inline int summit_acpi_madt_oem_check(char *oem_id, char *oem_table_id) static int summit_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
{ {
if (!strncmp(oem_id, "IBM", 3) && if (!strncmp(oem_id, "IBM", 3) &&
(!strncmp(oem_table_id, "SERVIGIL", 8) (!strncmp(oem_table_id, "SERVIGIL", 8)
@@ -186,7 +186,7 @@ static inline int is_WPEG(struct rio_detail *rio){
 
 #define SUMMIT_APIC_DFR_VALUE (APIC_DFR_CLUSTER)
 
-static inline const cpumask_t *summit_target_cpus(void)
+static const cpumask_t *summit_target_cpus(void)
 {
         /* CPU_MASK_ALL (0xff) has undefined behaviour with
          * dest_LowestPrio mode logical clustered apic interrupt routing
@@ -195,19 +195,18 @@ static inline const cpumask_t *summit_target_cpus(void)
         return &cpumask_of_cpu(0);
 }
 
-static inline unsigned long
-summit_check_apicid_used(physid_mask_t bitmap, int apicid)
+static unsigned long summit_check_apicid_used(physid_mask_t bitmap, int apicid)
 {
         return 0;
 }
 
 /* we don't use the phys_cpu_present_map to indicate apicid presence */
-static inline unsigned long summit_check_apicid_present(int bit)
+static unsigned long summit_check_apicid_present(int bit)
 {
         return 1;
 }
 
-static inline void summit_init_apic_ldr(void)
+static void summit_init_apic_ldr(void)
 {
         unsigned long val, id;
         int count = 0;
@@ -234,18 +233,18 @@ static inline void summit_init_apic_ldr(void)
         apic_write(APIC_LDR, val);
 }
 
-static inline int summit_apic_id_registered(void)
+static int summit_apic_id_registered(void)
 {
         return 1;
 }
 
-static inline void summit_setup_apic_routing(void)
+static void summit_setup_apic_routing(void)
 {
         printk("Enabling APIC mode: Summit. Using %d I/O APICs\n",
                                                 nr_ioapics);
 }
 
-static inline int summit_apicid_to_node(int logical_apicid)
+static int summit_apicid_to_node(int logical_apicid)
 {
 #ifdef CONFIG_SMP
         return apicid_2_node[hard_smp_processor_id()];
@@ -266,7 +265,7 @@ static inline int summit_cpu_to_logical_apicid(int cpu)
 #endif
 }
 
-static inline int summit_cpu_present_to_apicid(int mps_cpu)
+static int summit_cpu_present_to_apicid(int mps_cpu)
 {
         if (mps_cpu < nr_cpu_ids)
                 return (int)per_cpu(x86_bios_cpu_apicid, mps_cpu);
@@ -274,64 +273,44 @@ static inline int summit_cpu_present_to_apicid(int mps_cpu)
         return BAD_APICID;
 }
 
-static inline physid_mask_t
-summit_ioapic_phys_id_map(physid_mask_t phys_id_map)
+static physid_mask_t summit_ioapic_phys_id_map(physid_mask_t phys_id_map)
 {
         /* For clustered we don't have a good way to do this yet - hack */
         return physids_promote(0x0F);
 }
 
-static inline physid_mask_t summit_apicid_to_cpu_present(int apicid)
+static physid_mask_t summit_apicid_to_cpu_present(int apicid)
 {
         return physid_mask_of_physid(0);
 }
 
-static inline void summit_setup_portio_remap(void)
-{
-}
-
-static inline int summit_check_phys_apicid_present(int boot_cpu_physical_apicid)
+static int summit_check_phys_apicid_present(int boot_cpu_physical_apicid)
 {
         return 1;
 }
 
-static inline unsigned int summit_cpu_mask_to_apicid(const cpumask_t *cpumask)
+static unsigned int summit_cpu_mask_to_apicid(const cpumask_t *cpumask)
 {
-        int cpus_found = 0;
-        int num_bits_set;
-        int apicid;
-        int cpu;
+        unsigned int round = 0;
+        int cpu, apicid = 0;
 
-        num_bits_set = cpus_weight(*cpumask);
-        /* Return id to all */
-        if (num_bits_set >= nr_cpu_ids)
-                return 0xFF;
         /*
-         * The cpus in the mask must all be on the apic cluster. If are not
-         * on the same apicid cluster return default value of target_cpus():
+         * The cpus in the mask must all be on the apic cluster.
         */
-        cpu = first_cpu(*cpumask);
-        apicid = summit_cpu_to_logical_apicid(cpu);
-
-        while (cpus_found < num_bits_set) {
-                if (cpu_isset(cpu, *cpumask)) {
-                        int new_apicid = summit_cpu_to_logical_apicid(cpu);
+        for_each_cpu(cpu, cpumask) {
+                int new_apicid = summit_cpu_to_logical_apicid(cpu);
 
-                        if (APIC_CLUSTER(apicid) != APIC_CLUSTER(new_apicid)) {
-                                printk ("%s: Not a valid mask!\n", __func__);
+                if (round && APIC_CLUSTER(apicid) != APIC_CLUSTER(new_apicid)) {
+                        printk("%s: Not a valid mask!\n", __func__);
 
-                                return 0xFF;
-                        }
-                        apicid = apicid | new_apicid;
-                        cpus_found++;
+                        return BAD_APICID;
                 }
-                cpu++;
+
+                apicid |= new_apicid;
+                round++;
         }
         return apicid;
 }
 
-static inline unsigned int
-summit_cpu_mask_to_apicid_and(const struct cpumask *inmask,
+static unsigned int summit_cpu_mask_to_apicid_and(const struct cpumask *inmask,
                               const struct cpumask *andmask)
 {
         int apicid = summit_cpu_to_logical_apicid(0);
@@ -356,7 +335,7 @@ summit_cpu_mask_to_apicid_and(const struct cpumask *inmask,
  *
  * See Intel's IA-32 SW Dev's Manual Vol2 under CPUID.
  */
-static inline int summit_phys_pkg_id(int cpuid_apic, int index_msb)
+static int summit_phys_pkg_id(int cpuid_apic, int index_msb)
 {
         return hard_smp_processor_id() >> index_msb;
 }
@@ -381,15 +360,15 @@ static void summit_vector_allocation_domain(int cpu, cpumask_t *retmask)
 }
 
 #ifdef CONFIG_X86_SUMMIT_NUMA
-static struct rio_table_hdr *rio_table_hdr __initdata;
-static struct scal_detail *scal_devs[MAX_NUMNODES] __initdata;
-static struct rio_detail *rio_devs[MAX_NUMNODES*4] __initdata;
+static struct rio_table_hdr *rio_table_hdr;
+static struct scal_detail *scal_devs[MAX_NUMNODES];
+static struct rio_detail *rio_devs[MAX_NUMNODES*4];
 
 #ifndef CONFIG_X86_NUMAQ
-static int mp_bus_id_to_node[MAX_MP_BUSSES] __initdata;
+static int mp_bus_id_to_node[MAX_MP_BUSSES];
 #endif
 
-static int __init setup_pci_node_map_for_wpeg(int wpeg_num, int last_bus)
+static int setup_pci_node_map_for_wpeg(int wpeg_num, int last_bus)
 {
         int twister = 0, node = 0;
         int i, bus, num_buses;
@@ -451,7 +430,7 @@ static int __init setup_pci_node_map_for_wpeg(int wpeg_num, int last_bus)
         return bus;
 }
 
-static int __init build_detail_arrays(void)
+static int build_detail_arrays(void)
 {
         unsigned long ptr;
         int i, scal_detail_size, rio_detail_size;
@@ -485,7 +464,7 @@ static int __init build_detail_arrays(void)
         return 1;
 }
 
-void __init setup_summit(void)
+void setup_summit(void)
 {
         unsigned long ptr;
         unsigned short offset;
@@ -583,7 +562,6 @@ struct apic apic_summit = {
         .send_IPI_all = summit_send_IPI_all,
         .send_IPI_self = default_send_IPI_self,
 
-        .wakeup_cpu = NULL,
         .trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW,
         .trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH,


@@ -224,7 +224,6 @@ struct apic apic_x2apic_cluster = {
         .send_IPI_all = x2apic_send_IPI_all,
         .send_IPI_self = x2apic_send_IPI_self,
 
-        .wakeup_cpu = NULL,
         .trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW,
         .trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH,
         .wait_for_init_deassert = NULL,


@@ -213,7 +213,6 @@ struct apic apic_x2apic_phys = {
         .send_IPI_all = x2apic_send_IPI_all,
         .send_IPI_self = x2apic_send_IPI_self,
 
-        .wakeup_cpu = NULL,
         .trampoline_phys_low = DEFAULT_TRAMPOLINE_PHYS_LOW,
         .trampoline_phys_high = DEFAULT_TRAMPOLINE_PHYS_HIGH,
         .wait_for_init_deassert = NULL,


@@ -7,28 +7,28 @@
  *
  * Copyright (C) 2007-2008 Silicon Graphics, Inc. All rights reserved.
  */
-#include <linux/kernel.h>
-#include <linux/threads.h>
-#include <linux/cpu.h>
 #include <linux/cpumask.h>
+#include <linux/hardirq.h>
+#include <linux/proc_fs.h>
+#include <linux/threads.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
 #include <linux/string.h>
 #include <linux/ctype.h>
-#include <linux/init.h>
 #include <linux/sched.h>
-#include <linux/module.h>
-#include <linux/hardirq.h>
 #include <linux/timer.h>
-#include <linux/proc_fs.h>
-#include <asm/current.h>
-#include <asm/smp.h>
-#include <asm/apic.h>
-#include <asm/ipi.h>
-#include <asm/pgtable.h>
-#include <asm/uv/uv.h>
+#include <linux/cpu.h>
+#include <linux/init.h>
 #include <asm/uv/uv_mmrs.h>
 #include <asm/uv/uv_hub.h>
+#include <asm/current.h>
+#include <asm/pgtable.h>
 #include <asm/uv/bios.h>
+#include <asm/uv/uv.h>
+#include <asm/apic.h>
+#include <asm/ipi.h>
+#include <asm/smp.h>

 DEFINE_PER_CPU(int, x2apic_extra_bits);
@@ -91,24 +91,28 @@ static void uv_vector_allocation_domain(int cpu, struct cpumask *retmask)
 	cpumask_set_cpu(cpu, retmask);
 }

-int uv_wakeup_secondary(int phys_apicid, unsigned int start_rip)
+static int uv_wakeup_secondary(int phys_apicid, unsigned long start_rip)
 {
+#ifdef CONFIG_SMP
 	unsigned long val;
 	int pnode;

 	pnode = uv_apicid_to_pnode(phys_apicid);
 	val = (1UL << UVH_IPI_INT_SEND_SHFT) |
 	    (phys_apicid << UVH_IPI_INT_APIC_ID_SHFT) |
-	    (((long)start_rip << UVH_IPI_INT_VECTOR_SHFT) >> 12) |
+	    ((start_rip << UVH_IPI_INT_VECTOR_SHFT) >> 12) |
 	    APIC_DM_INIT;
 	uv_write_global_mmr64(pnode, UVH_IPI_INT, val);
 	mdelay(10);

 	val = (1UL << UVH_IPI_INT_SEND_SHFT) |
 	    (phys_apicid << UVH_IPI_INT_APIC_ID_SHFT) |
-	    (((long)start_rip << UVH_IPI_INT_VECTOR_SHFT) >> 12) |
+	    ((start_rip << UVH_IPI_INT_VECTOR_SHFT) >> 12) |
 	    APIC_DM_STARTUP;
 	uv_write_global_mmr64(pnode, UVH_IPI_INT, val);
+
+	atomic_set(&init_deasserted, 1);
+#endif
 	return 0;
 }
@@ -285,7 +289,7 @@ struct apic apic_x2apic_uv_x = {
 	.send_IPI_all			= uv_send_IPI_all,
 	.send_IPI_self			= uv_send_IPI_self,
-	.wakeup_cpu			= NULL,
+	.wakeup_secondary_cpu		= uv_wakeup_secondary,
 	.trampoline_phys_low		= DEFAULT_TRAMPOLINE_PHYS_LOW,
 	.trampoline_phys_high		= DEFAULT_TRAMPOLINE_PHYS_HIGH,
 	.wait_for_init_deassert		= NULL,
@@ -365,7 +369,7 @@ static __init void map_high(char *id, unsigned long base, int shift,
 	paddr = base << shift;
 	bytes = (1UL << shift) * (max_pnode + 1);
 	printk(KERN_INFO "UV: Map %s_HI 0x%lx - 0x%lx\n", id, paddr,
 	       paddr + bytes);
 	if (map_type == map_uc)
 		init_extra_mapping_uc(paddr, bytes);
 	else
@@ -528,7 +532,7 @@ late_initcall(uv_init_heartbeat);

 /*
  * Called on each cpu to initialize the per_cpu UV data area.
- * ZZZ hotplug not supported yet
+ * FIXME: hotplug not supported yet
  */
 void __cpuinit uv_cpu_init(void)
 {


@@ -7,11 +7,10 @@
 /*
  * Get CPU information for use by the procfs.
  */
-#ifdef CONFIG_X86_32
 static void show_cpuinfo_core(struct seq_file *m, struct cpuinfo_x86 *c,
 			      unsigned int cpu)
 {
-#ifdef CONFIG_X86_HT
+#ifdef CONFIG_SMP
 	if (c->x86_max_cores * smp_num_siblings > 1) {
 		seq_printf(m, "physical id\t: %d\n", c->phys_proc_id);
 		seq_printf(m, "siblings\t: %d\n",
@@ -24,6 +23,7 @@ static void show_cpuinfo_core(struct seq_file *m, struct cpuinfo_x86 *c,
 #endif
 }

+#ifdef CONFIG_X86_32
 static void show_cpuinfo_misc(struct seq_file *m, struct cpuinfo_x86 *c)
 {
 	/*
@@ -50,22 +50,6 @@ static void show_cpuinfo_misc(struct seq_file *m, struct cpuinfo_x86 *c)
 		   c->wp_works_ok ? "yes" : "no");
 }
 #else
-static void show_cpuinfo_core(struct seq_file *m, struct cpuinfo_x86 *c,
-			      unsigned int cpu)
-{
-#ifdef CONFIG_SMP
-	if (c->x86_max_cores * smp_num_siblings > 1) {
-		seq_printf(m, "physical id\t: %d\n", c->phys_proc_id);
-		seq_printf(m, "siblings\t: %d\n",
-			   cpus_weight(per_cpu(cpu_core_map, cpu)));
-		seq_printf(m, "core id\t\t: %d\n", c->cpu_core_id);
-		seq_printf(m, "cpu cores\t: %d\n", c->booted_cores);
-		seq_printf(m, "apicid\t\t: %d\n", c->apicid);
-		seq_printf(m, "initial apicid\t: %d\n", c->initial_apicid);
-	}
-#endif
-}
 static void show_cpuinfo_misc(struct seq_file *m, struct cpuinfo_x86 *c)
 {
 	seq_printf(m,


@@ -858,6 +858,9 @@ void __init reserve_early_overlap_ok(u64 start, u64 end, char *name)
  */
 void __init reserve_early(u64 start, u64 end, char *name)
 {
+	if (start >= end)
+		return;
+
 	drop_overlaps_that_are_ok(start, end);
 	__reserve_early(start, end, name, 0);
 }


@@ -85,19 +85,8 @@ asmlinkage long sys_ioperm(unsigned long from, unsigned long num, int turn_on)

 	t->io_bitmap_max = bytes;

-#ifdef CONFIG_X86_32
-	/*
-	 * Sets the lazy trigger so that the next I/O operation will
-	 * reload the correct bitmap.
-	 * Reset the owner so that a process switch will not set
-	 * tss->io_bitmap_base to IO_BITMAP_OFFSET.
-	 */
-	tss->x86_tss.io_bitmap_base = INVALID_IO_BITMAP_OFFSET_LAZY;
-	tss->io_bitmap_owner = NULL;
-#else
 	/* Update the TSS: */
 	memcpy(tss->io_bitmap, t->io_bitmap_ptr, bytes_updated);
-#endif

 	put_cpu();


@@ -1,8 +1,8 @@
 #include <linux/errno.h>
 #include <linux/kernel.h>
 #include <linux/mm.h>
-#include <asm/idle.h>
 #include <linux/smp.h>
+#include <linux/prctl.h>
 #include <linux/slab.h>
 #include <linux/sched.h>
 #include <linux/module.h>
@@ -11,6 +11,9 @@
 #include <linux/ftrace.h>
 #include <asm/system.h>
 #include <asm/apic.h>
+#include <asm/idle.h>
+#include <asm/uaccess.h>
+#include <asm/i387.h>

 unsigned long idle_halt;
 EXPORT_SYMBOL(idle_halt);

@@ -55,6 +58,192 @@ void arch_task_cache_init(void)
 				  SLAB_PANIC, NULL);
 }
/*
* Free current thread data structures etc..
*/
void exit_thread(void)
{
struct task_struct *me = current;
struct thread_struct *t = &me->thread;
if (me->thread.io_bitmap_ptr) {
struct tss_struct *tss = &per_cpu(init_tss, get_cpu());
kfree(t->io_bitmap_ptr);
t->io_bitmap_ptr = NULL;
clear_thread_flag(TIF_IO_BITMAP);
/*
* Careful, clear this in the TSS too:
*/
memset(tss->io_bitmap, 0xff, t->io_bitmap_max);
t->io_bitmap_max = 0;
put_cpu();
}
ds_exit_thread(current);
}
void flush_thread(void)
{
struct task_struct *tsk = current;
#ifdef CONFIG_X86_64
if (test_tsk_thread_flag(tsk, TIF_ABI_PENDING)) {
clear_tsk_thread_flag(tsk, TIF_ABI_PENDING);
if (test_tsk_thread_flag(tsk, TIF_IA32)) {
clear_tsk_thread_flag(tsk, TIF_IA32);
} else {
set_tsk_thread_flag(tsk, TIF_IA32);
current_thread_info()->status |= TS_COMPAT;
}
}
#endif
clear_tsk_thread_flag(tsk, TIF_DEBUG);
tsk->thread.debugreg0 = 0;
tsk->thread.debugreg1 = 0;
tsk->thread.debugreg2 = 0;
tsk->thread.debugreg3 = 0;
tsk->thread.debugreg6 = 0;
tsk->thread.debugreg7 = 0;
memset(tsk->thread.tls_array, 0, sizeof(tsk->thread.tls_array));
/*
* Forget coprocessor state..
*/
tsk->fpu_counter = 0;
clear_fpu(tsk);
clear_used_math();
}
static void hard_disable_TSC(void)
{
write_cr4(read_cr4() | X86_CR4_TSD);
}
void disable_TSC(void)
{
preempt_disable();
if (!test_and_set_thread_flag(TIF_NOTSC))
/*
* Must flip the CPU state synchronously with
* TIF_NOTSC in the current running context.
*/
hard_disable_TSC();
preempt_enable();
}
static void hard_enable_TSC(void)
{
write_cr4(read_cr4() & ~X86_CR4_TSD);
}
static void enable_TSC(void)
{
preempt_disable();
if (test_and_clear_thread_flag(TIF_NOTSC))
/*
* Must flip the CPU state synchronously with
* TIF_NOTSC in the current running context.
*/
hard_enable_TSC();
preempt_enable();
}
int get_tsc_mode(unsigned long adr)
{
unsigned int val;
if (test_thread_flag(TIF_NOTSC))
val = PR_TSC_SIGSEGV;
else
val = PR_TSC_ENABLE;
return put_user(val, (unsigned int __user *)adr);
}
int set_tsc_mode(unsigned int val)
{
if (val == PR_TSC_SIGSEGV)
disable_TSC();
else if (val == PR_TSC_ENABLE)
enable_TSC();
else
return -EINVAL;
return 0;
}
void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
struct tss_struct *tss)
{
struct thread_struct *prev, *next;
prev = &prev_p->thread;
next = &next_p->thread;
if (test_tsk_thread_flag(next_p, TIF_DS_AREA_MSR) ||
test_tsk_thread_flag(prev_p, TIF_DS_AREA_MSR))
ds_switch_to(prev_p, next_p);
else if (next->debugctlmsr != prev->debugctlmsr)
update_debugctlmsr(next->debugctlmsr);
if (test_tsk_thread_flag(next_p, TIF_DEBUG)) {
set_debugreg(next->debugreg0, 0);
set_debugreg(next->debugreg1, 1);
set_debugreg(next->debugreg2, 2);
set_debugreg(next->debugreg3, 3);
/* no 4 and 5 */
set_debugreg(next->debugreg6, 6);
set_debugreg(next->debugreg7, 7);
}
if (test_tsk_thread_flag(prev_p, TIF_NOTSC) ^
test_tsk_thread_flag(next_p, TIF_NOTSC)) {
/* prev and next are different */
if (test_tsk_thread_flag(next_p, TIF_NOTSC))
hard_disable_TSC();
else
hard_enable_TSC();
}
if (test_tsk_thread_flag(next_p, TIF_IO_BITMAP)) {
/*
* Copy the relevant range of the IO bitmap.
* Normally this is 128 bytes or less:
*/
memcpy(tss->io_bitmap, next->io_bitmap_ptr,
max(prev->io_bitmap_max, next->io_bitmap_max));
} else if (test_tsk_thread_flag(prev_p, TIF_IO_BITMAP)) {
/*
* Clear any possible leftover bits:
*/
memset(tss->io_bitmap, 0xff, prev->io_bitmap_max);
}
}
int sys_fork(struct pt_regs *regs)
{
return do_fork(SIGCHLD, regs->sp, regs, 0, NULL, NULL);
}
/*
* This is trivial, and on the face of it looks like it
* could equally well be done in user mode.
*
* Not so, for quite unobvious reasons - register pressure.
* In user mode vfork() cannot have a stack frame, and if
* done by calling the "clone()" system call directly, you
* do not have enough call-clobbered registers to hold all
* the information you need.
*/
int sys_vfork(struct pt_regs *regs)
{
return do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, regs->sp, regs, 0,
NULL, NULL);
}
 /*
  * Idle related variables and functions
  */


@@ -230,55 +230,6 @@ int kernel_thread(int (*fn)(void *), void *arg, unsigned long flags)
 }
 EXPORT_SYMBOL(kernel_thread);
/*
* Free current thread data structures etc..
*/
void exit_thread(void)
{
/* The process may have allocated an io port bitmap... nuke it. */
if (unlikely(test_thread_flag(TIF_IO_BITMAP))) {
struct task_struct *tsk = current;
struct thread_struct *t = &tsk->thread;
int cpu = get_cpu();
struct tss_struct *tss = &per_cpu(init_tss, cpu);
kfree(t->io_bitmap_ptr);
t->io_bitmap_ptr = NULL;
clear_thread_flag(TIF_IO_BITMAP);
/*
* Careful, clear this in the TSS too:
*/
memset(tss->io_bitmap, 0xff, tss->io_bitmap_max);
t->io_bitmap_max = 0;
tss->io_bitmap_owner = NULL;
tss->io_bitmap_max = 0;
tss->x86_tss.io_bitmap_base = INVALID_IO_BITMAP_OFFSET;
put_cpu();
}
ds_exit_thread(current);
}
void flush_thread(void)
{
struct task_struct *tsk = current;
tsk->thread.debugreg0 = 0;
tsk->thread.debugreg1 = 0;
tsk->thread.debugreg2 = 0;
tsk->thread.debugreg3 = 0;
tsk->thread.debugreg6 = 0;
tsk->thread.debugreg7 = 0;
memset(tsk->thread.tls_array, 0, sizeof(tsk->thread.tls_array));
clear_tsk_thread_flag(tsk, TIF_DEBUG);
/*
* Forget coprocessor state..
*/
tsk->fpu_counter = 0;
clear_fpu(tsk);
clear_used_math();
}
 void release_thread(struct task_struct *dead_task)
 {
 	BUG_ON(dead_task->mm);
@@ -366,127 +317,6 @@ start_thread(struct pt_regs *regs, unsigned long new_ip, unsigned long new_sp)
 }
 EXPORT_SYMBOL_GPL(start_thread);
static void hard_disable_TSC(void)
{
write_cr4(read_cr4() | X86_CR4_TSD);
}
void disable_TSC(void)
{
preempt_disable();
if (!test_and_set_thread_flag(TIF_NOTSC))
/*
* Must flip the CPU state synchronously with
* TIF_NOTSC in the current running context.
*/
hard_disable_TSC();
preempt_enable();
}
static void hard_enable_TSC(void)
{
write_cr4(read_cr4() & ~X86_CR4_TSD);
}
static void enable_TSC(void)
{
preempt_disable();
if (test_and_clear_thread_flag(TIF_NOTSC))
/*
* Must flip the CPU state synchronously with
* TIF_NOTSC in the current running context.
*/
hard_enable_TSC();
preempt_enable();
}
int get_tsc_mode(unsigned long adr)
{
unsigned int val;
if (test_thread_flag(TIF_NOTSC))
val = PR_TSC_SIGSEGV;
else
val = PR_TSC_ENABLE;
return put_user(val, (unsigned int __user *)adr);
}
int set_tsc_mode(unsigned int val)
{
if (val == PR_TSC_SIGSEGV)
disable_TSC();
else if (val == PR_TSC_ENABLE)
enable_TSC();
else
return -EINVAL;
return 0;
}
static noinline void
__switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
struct tss_struct *tss)
{
struct thread_struct *prev, *next;
prev = &prev_p->thread;
next = &next_p->thread;
if (test_tsk_thread_flag(next_p, TIF_DS_AREA_MSR) ||
test_tsk_thread_flag(prev_p, TIF_DS_AREA_MSR))
ds_switch_to(prev_p, next_p);
else if (next->debugctlmsr != prev->debugctlmsr)
update_debugctlmsr(next->debugctlmsr);
if (test_tsk_thread_flag(next_p, TIF_DEBUG)) {
set_debugreg(next->debugreg0, 0);
set_debugreg(next->debugreg1, 1);
set_debugreg(next->debugreg2, 2);
set_debugreg(next->debugreg3, 3);
/* no 4 and 5 */
set_debugreg(next->debugreg6, 6);
set_debugreg(next->debugreg7, 7);
}
if (test_tsk_thread_flag(prev_p, TIF_NOTSC) ^
test_tsk_thread_flag(next_p, TIF_NOTSC)) {
/* prev and next are different */
if (test_tsk_thread_flag(next_p, TIF_NOTSC))
hard_disable_TSC();
else
hard_enable_TSC();
}
if (!test_tsk_thread_flag(next_p, TIF_IO_BITMAP)) {
/*
* Disable the bitmap via an invalid offset. We still cache
* the previous bitmap owner and the IO bitmap contents:
*/
tss->x86_tss.io_bitmap_base = INVALID_IO_BITMAP_OFFSET;
return;
}
if (likely(next == tss->io_bitmap_owner)) {
/*
* Previous owner of the bitmap (hence the bitmap content)
* matches the next task, we dont have to do anything but
* to set a valid offset in the TSS:
*/
tss->x86_tss.io_bitmap_base = IO_BITMAP_OFFSET;
return;
}
/*
* Lazy TSS's I/O bitmap copy. We set an invalid offset here
* and we let the task to get a GPF in case an I/O instruction
* is performed. The handler of the GPF will verify that the
* faulting task has a valid I/O bitmap and, it true, does the
* real copy and restart the instruction. This will save us
* redundant copies when the currently switched task does not
* perform any I/O during its timeslice.
*/
tss->x86_tss.io_bitmap_base = INVALID_IO_BITMAP_OFFSET_LAZY;
}
 /*
  * switch_to(x,y) should switch tasks from x to y.
@@ -600,11 +430,6 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	return prev_p;
 }
int sys_fork(struct pt_regs *regs)
{
return do_fork(SIGCHLD, regs->sp, regs, 0, NULL, NULL);
}
 int sys_clone(struct pt_regs *regs)
 {
 	unsigned long clone_flags;
@@ -620,21 +445,6 @@ int sys_clone(struct pt_regs *regs)
 	return do_fork(clone_flags, newsp, regs, 0, parent_tidptr, child_tidptr);
 }
/*
* This is trivial, and on the face of it looks like it
* could equally well be done in user mode.
*
* Not so, for quite unobvious reasons - register pressure.
* In user mode vfork() cannot have a stack frame, and if
* done by calling the "clone()" system call directly, you
* do not have enough call-clobbered registers to hold all
* the information you need.
*/
int sys_vfork(struct pt_regs *regs)
{
return do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, regs->sp, regs, 0, NULL, NULL);
}
 /*
  * sys_execve() executes a new program.
  */


@@ -237,61 +237,6 @@ void show_regs(struct pt_regs *regs)
 	show_trace(NULL, regs, (void *)(regs + 1), regs->bp);
 }
/*
* Free current thread data structures etc..
*/
void exit_thread(void)
{
struct task_struct *me = current;
struct thread_struct *t = &me->thread;
if (me->thread.io_bitmap_ptr) {
struct tss_struct *tss = &per_cpu(init_tss, get_cpu());
kfree(t->io_bitmap_ptr);
t->io_bitmap_ptr = NULL;
clear_thread_flag(TIF_IO_BITMAP);
/*
* Careful, clear this in the TSS too:
*/
memset(tss->io_bitmap, 0xff, t->io_bitmap_max);
t->io_bitmap_max = 0;
put_cpu();
}
ds_exit_thread(current);
}
void flush_thread(void)
{
struct task_struct *tsk = current;
if (test_tsk_thread_flag(tsk, TIF_ABI_PENDING)) {
clear_tsk_thread_flag(tsk, TIF_ABI_PENDING);
if (test_tsk_thread_flag(tsk, TIF_IA32)) {
clear_tsk_thread_flag(tsk, TIF_IA32);
} else {
set_tsk_thread_flag(tsk, TIF_IA32);
current_thread_info()->status |= TS_COMPAT;
}
}
clear_tsk_thread_flag(tsk, TIF_DEBUG);
tsk->thread.debugreg0 = 0;
tsk->thread.debugreg1 = 0;
tsk->thread.debugreg2 = 0;
tsk->thread.debugreg3 = 0;
tsk->thread.debugreg6 = 0;
tsk->thread.debugreg7 = 0;
memset(tsk->thread.tls_array, 0, sizeof(tsk->thread.tls_array));
/*
* Forget coprocessor state..
*/
tsk->fpu_counter = 0;
clear_fpu(tsk);
clear_used_math();
}
 void release_thread(struct task_struct *dead_task)
 {
 	if (dead_task->mm) {
@@ -425,118 +370,6 @@ start_thread(struct pt_regs *regs, unsigned long new_ip, unsigned long new_sp)
 }
 EXPORT_SYMBOL_GPL(start_thread);
static void hard_disable_TSC(void)
{
write_cr4(read_cr4() | X86_CR4_TSD);
}
void disable_TSC(void)
{
preempt_disable();
if (!test_and_set_thread_flag(TIF_NOTSC))
/*
* Must flip the CPU state synchronously with
* TIF_NOTSC in the current running context.
*/
hard_disable_TSC();
preempt_enable();
}
static void hard_enable_TSC(void)
{
write_cr4(read_cr4() & ~X86_CR4_TSD);
}
static void enable_TSC(void)
{
preempt_disable();
if (test_and_clear_thread_flag(TIF_NOTSC))
/*
* Must flip the CPU state synchronously with
* TIF_NOTSC in the current running context.
*/
hard_enable_TSC();
preempt_enable();
}
int get_tsc_mode(unsigned long adr)
{
unsigned int val;
if (test_thread_flag(TIF_NOTSC))
val = PR_TSC_SIGSEGV;
else
val = PR_TSC_ENABLE;
return put_user(val, (unsigned int __user *)adr);
}
int set_tsc_mode(unsigned int val)
{
if (val == PR_TSC_SIGSEGV)
disable_TSC();
else if (val == PR_TSC_ENABLE)
enable_TSC();
else
return -EINVAL;
return 0;
}
/*
* This special macro can be used to load a debugging register
*/
#define loaddebug(thread, r) set_debugreg(thread->debugreg ## r, r)
static inline void __switch_to_xtra(struct task_struct *prev_p,
struct task_struct *next_p,
struct tss_struct *tss)
{
struct thread_struct *prev, *next;
prev = &prev_p->thread,
next = &next_p->thread;
if (test_tsk_thread_flag(next_p, TIF_DS_AREA_MSR) ||
test_tsk_thread_flag(prev_p, TIF_DS_AREA_MSR))
ds_switch_to(prev_p, next_p);
else if (next->debugctlmsr != prev->debugctlmsr)
update_debugctlmsr(next->debugctlmsr);
if (test_tsk_thread_flag(next_p, TIF_DEBUG)) {
loaddebug(next, 0);
loaddebug(next, 1);
loaddebug(next, 2);
loaddebug(next, 3);
/* no 4 and 5 */
loaddebug(next, 6);
loaddebug(next, 7);
}
if (test_tsk_thread_flag(prev_p, TIF_NOTSC) ^
test_tsk_thread_flag(next_p, TIF_NOTSC)) {
/* prev and next are different */
if (test_tsk_thread_flag(next_p, TIF_NOTSC))
hard_disable_TSC();
else
hard_enable_TSC();
}
if (test_tsk_thread_flag(next_p, TIF_IO_BITMAP)) {
/*
* Copy the relevant range of the IO bitmap.
* Normally this is 128 bytes or less:
*/
memcpy(tss->io_bitmap, next->io_bitmap_ptr,
max(prev->io_bitmap_max, next->io_bitmap_max));
} else if (test_tsk_thread_flag(prev_p, TIF_IO_BITMAP)) {
/*
* Clear any possible leftover bits:
*/
memset(tss->io_bitmap, 0xff, prev->io_bitmap_max);
}
}
 /*
  * switch_to(x,y) should switch tasks from x to y.
  *
@@ -694,11 +527,6 @@ void set_personality_64bit(void)
 	current->personality &= ~READ_IMPLIES_EXEC;
 }
asmlinkage long sys_fork(struct pt_regs *regs)
{
return do_fork(SIGCHLD, regs->sp, regs, 0, NULL, NULL);
}
 asmlinkage long
 sys_clone(unsigned long clone_flags, unsigned long newsp,
 	  void __user *parent_tid, void __user *child_tid, struct pt_regs *regs)
@@ -708,22 +536,6 @@ sys_clone(unsigned long clone_flags, unsigned long newsp,
 	return do_fork(clone_flags, newsp, regs, 0, parent_tid, child_tid);
 }
/*
* This is trivial, and on the face of it looks like it
* could equally well be done in user mode.
*
* Not so, for quite unobvious reasons - register pressure.
* In user mode vfork() cannot have a stack frame, and if
* done by calling the "clone()" system call directly, you
* do not have enough call-clobbered registers to hold all
* the information you need.
*/
asmlinkage long sys_vfork(struct pt_regs *regs)
{
return do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, regs->sp, regs, 0,
NULL, NULL);
}
 unsigned long get_wchan(struct task_struct *p)
 {
 	unsigned long stack;


@@ -1383,7 +1383,7 @@ void send_sigtrap(struct task_struct *tsk, struct pt_regs *regs,
 #ifdef CONFIG_X86_32
 # define IS_IA32	1
 #elif defined CONFIG_IA32_EMULATION
-# define IS_IA32	test_thread_flag(TIF_IA32)
+# define IS_IA32	is_compat_task()
 #else
 # define IS_IA32	0
 #endif


@@ -600,19 +600,7 @@ static int __init setup_elfcorehdr(char *arg)
 early_param("elfcorehdr", setup_elfcorehdr);
 #endif

-static int __init default_update_apic(void)
-{
-#ifdef CONFIG_SMP
-	if (!apic->wakeup_cpu)
-		apic->wakeup_cpu = wakeup_secondary_cpu_via_init;
-#endif
-	return 0;
-}
-
-static struct x86_quirks default_x86_quirks __initdata = {
-	.update_apic = default_update_apic,
-};
+static struct x86_quirks default_x86_quirks __initdata;

 struct x86_quirks *x86_quirks __initdata = &default_x86_quirks;

@@ -875,9 +863,7 @@ void __init setup_arch(char **cmdline_p)

 	reserve_initrd();

-#ifdef CONFIG_X86_64
 	vsmp_init();
-#endif

 	io_delay_init();


@@ -187,6 +187,71 @@ setup_sigcontext(struct sigcontext __user *sc, void __user *fpstate,

 /*
  * Set up a signal frame.
  */
/*
* Determine which stack to use..
*/
static unsigned long align_sigframe(unsigned long sp)
{
#ifdef CONFIG_X86_32
/*
* Align the stack pointer according to the i386 ABI,
* i.e. so that on function entry ((sp + 4) & 15) == 0.
*/
sp = ((sp + 4) & -16ul) - 4;
#else /* !CONFIG_X86_32 */
sp = round_down(sp, 16) - 8;
#endif
return sp;
}
static inline void __user *
get_sigframe(struct k_sigaction *ka, struct pt_regs *regs, size_t frame_size,
void __user **fpstate)
{
/* Default to using normal stack */
unsigned long sp = regs->sp;
#ifdef CONFIG_X86_64
/* redzone */
sp -= 128;
#endif /* CONFIG_X86_64 */
/*
* If we are on the alternate signal stack and would overflow it, don't.
* Return an always-bogus address instead so we will die with SIGSEGV.
*/
if (on_sig_stack(sp) && !likely(on_sig_stack(sp - frame_size)))
return (void __user *) -1L;
/* This is the X/Open sanctioned signal stack switching. */
if (ka->sa.sa_flags & SA_ONSTACK) {
if (sas_ss_flags(sp) == 0)
sp = current->sas_ss_sp + current->sas_ss_size;
} else {
#ifdef CONFIG_X86_32
/* This is the legacy signal stack switching. */
if ((regs->ss & 0xffff) != __USER_DS &&
!(ka->sa.sa_flags & SA_RESTORER) &&
ka->sa.sa_restorer)
sp = (unsigned long) ka->sa.sa_restorer;
#endif /* CONFIG_X86_32 */
}
if (used_math()) {
sp -= sig_xstate_size;
#ifdef CONFIG_X86_64
sp = round_down(sp, 64);
#endif /* CONFIG_X86_64 */
*fpstate = (void __user *)sp;
if (save_i387_xstate(*fpstate) < 0)
return (void __user *)-1L;
}
return (void __user *)align_sigframe(sp - frame_size);
}
 #ifdef CONFIG_X86_32
 static const struct {
 	u16 poplmovl;
@@ -210,54 +275,6 @@ static const struct {
 		0
 };
/*
* Determine which stack to use..
*/
static inline void __user *
get_sigframe(struct k_sigaction *ka, struct pt_regs *regs, size_t frame_size,
void **fpstate)
{
unsigned long sp;
/* Default to using normal stack */
sp = regs->sp;
/*
* If we are on the alternate signal stack and would overflow it, don't.
* Return an always-bogus address instead so we will die with SIGSEGV.
*/
if (on_sig_stack(sp) && !likely(on_sig_stack(sp - frame_size)))
return (void __user *) -1L;
/* This is the X/Open sanctioned signal stack switching. */
if (ka->sa.sa_flags & SA_ONSTACK) {
if (sas_ss_flags(sp) == 0)
sp = current->sas_ss_sp + current->sas_ss_size;
} else {
/* This is the legacy signal stack switching. */
if ((regs->ss & 0xffff) != __USER_DS &&
!(ka->sa.sa_flags & SA_RESTORER) &&
ka->sa.sa_restorer)
sp = (unsigned long) ka->sa.sa_restorer;
}
if (used_math()) {
sp = sp - sig_xstate_size;
*fpstate = (struct _fpstate *) sp;
if (save_i387_xstate(*fpstate) < 0)
return (void __user *)-1L;
}
sp -= frame_size;
/*
* Align the stack pointer according to the i386 ABI,
* i.e. so that on function entry ((sp + 4) & 15) == 0.
*/
sp = ((sp + 4) & -16ul) - 4;
return (void __user *) sp;
}
 static int
 __setup_frame(int sig, struct k_sigaction *ka, sigset_t *set,
 	      struct pt_regs *regs)
@@ -388,24 +405,6 @@ static int __setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
 	return 0;
 }
 #else /* !CONFIG_X86_32 */
/*
* Determine which stack to use..
*/
static void __user *
get_stack(struct k_sigaction *ka, unsigned long sp, unsigned long size)
{
/* Default to using normal stack - redzone*/
sp -= 128;
/* This is the X/Open sanctioned signal stack switching. */
if (ka->sa.sa_flags & SA_ONSTACK) {
if (sas_ss_flags(sp) == 0)
sp = current->sas_ss_sp + current->sas_ss_size;
}
return (void __user *)round_down(sp - size, 64);
}
 static int __setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
 			    sigset_t *set, struct pt_regs *regs)
 {
@@ -414,15 +413,7 @@ static int __setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
 	int err = 0;
 	struct task_struct *me = current;

-	if (used_math()) {
-		fp = get_stack(ka, regs->sp, sig_xstate_size);
-		frame = (void __user *)round_down(
-			(unsigned long)fp - sizeof(struct rt_sigframe), 16) - 8;
-
-		if (save_i387_xstate(fp) < 0)
-			return -EFAULT;
-	} else
-		frame = get_stack(ka, regs->sp, sizeof(struct rt_sigframe)) - 8;
+	frame = get_sigframe(ka, regs, sizeof(struct rt_sigframe), &fp);

 	if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
 		return -EFAULT;


@@ -112,7 +112,7 @@ EXPORT_PER_CPU_SYMBOL(cpu_core_map);
 DEFINE_PER_CPU_SHARED_ALIGNED(struct cpuinfo_x86, cpu_info);
 EXPORT_PER_CPU_SYMBOL(cpu_info);

-static atomic_t init_deasserted;
+atomic_t init_deasserted;

 /* Set if we find a B stepping CPU */

@@ -614,12 +614,6 @@ wakeup_secondary_cpu_via_init(int phys_apicid, unsigned long start_eip)
 	unsigned long send_status, accept_status = 0;
 	int maxlvt, num_starts, j;

-	if (get_uv_system_type() == UV_NON_UNIQUE_APIC) {
-		send_status = uv_wakeup_secondary(phys_apicid, start_eip);
-		atomic_set(&init_deasserted, 1);
-		return send_status;
-	}
-
 	maxlvt = lapic_get_maxlvt();

 	/*
@@ -748,7 +742,8 @@ static void __cpuinit do_fork_idle(struct work_struct *work)
 /*
  * NOTE - on most systems this is a PHYSICAL apic ID, but on multiquad
  * (ie clustered apic addressing mode), this is a LOGICAL apic ID.
- * Returns zero if CPU booted OK, else error code from ->wakeup_cpu.
+ * Returns zero if CPU booted OK, else error code from
+ * ->wakeup_secondary_cpu.
  */
 static int __cpuinit do_boot_cpu(int apicid, int cpu)
 {
@@ -835,9 +830,13 @@ do_rest:
 	}

 	/*
-	 * Starting actual IPI sequence...
+	 * Kick the secondary CPU. Use the method in the APIC driver
+	 * if it's defined - or use an INIT boot APIC message otherwise:
 	 */
-	boot_error = apic->wakeup_cpu(apicid, start_ip);
+	if (apic->wakeup_secondary_cpu)
+		boot_error = apic->wakeup_secondary_cpu(apicid, start_ip);
+	else
+		boot_error = wakeup_secondary_cpu_via_init(apicid, start_ip);

 	if (!boot_error) {
 		/*


@@ -118,47 +118,6 @@ die_if_kernel(const char *str, struct pt_regs *regs, long err)
 	if (!user_mode_vm(regs))
 		die(str, regs, err);
 }
/*
* Perform the lazy TSS's I/O bitmap copy. If the TSS has an
* invalid offset set (the LAZY one) and the faulting thread has
* a valid I/O bitmap pointer, we copy the I/O bitmap in the TSS,
* we set the offset field correctly and return 1.
*/
static int lazy_iobitmap_copy(void)
{
struct thread_struct *thread;
struct tss_struct *tss;
int cpu;
cpu = get_cpu();
tss = &per_cpu(init_tss, cpu);
thread = &current->thread;
if (tss->x86_tss.io_bitmap_base == INVALID_IO_BITMAP_OFFSET_LAZY &&
thread->io_bitmap_ptr) {
memcpy(tss->io_bitmap, thread->io_bitmap_ptr,
thread->io_bitmap_max);
/*
* If the previously set map was extending to higher ports
* than the current one, pad extra space with 0xff (no access).
*/
if (thread->io_bitmap_max < tss->io_bitmap_max) {
memset((char *) tss->io_bitmap +
thread->io_bitmap_max, 0xff,
tss->io_bitmap_max - thread->io_bitmap_max);
}
tss->io_bitmap_max = thread->io_bitmap_max;
tss->x86_tss.io_bitmap_base = IO_BITMAP_OFFSET;
tss->io_bitmap_owner = thread;
put_cpu();
return 1;
}
put_cpu();
return 0;
}
#endif #endif
static void __kprobes static void __kprobes
@ -309,11 +268,6 @@ do_general_protection(struct pt_regs *regs, long error_code)
conditional_sti(regs); conditional_sti(regs);
#ifdef CONFIG_X86_32 #ifdef CONFIG_X86_32
if (lazy_iobitmap_copy()) {
/* restart the faulting instruction */
return;
}
if (regs->flags & X86_VM_MASK) if (regs->flags & X86_VM_MASK)
goto gp_in_vm86; goto gp_in_vm86;
#endif #endif


@@ -22,7 +22,7 @@
 #include <asm/paravirt.h>
 #include <asm/setup.h>
 
-#if defined CONFIG_PCI && defined CONFIG_PARAVIRT
+#ifdef CONFIG_PARAVIRT
 /*
  * Interrupt control on vSMPowered systems:
  * ~AC is a shadow of IF. If IF is 'on' AC should be 'off'
@@ -114,7 +114,6 @@ static void __init set_vsmp_pv_ops(void)
 }
 #endif
 
-#ifdef CONFIG_PCI
 static int is_vsmp = -1;
 
 static void __init detect_vsmp_box(void)
@@ -139,15 +138,6 @@ int is_vsmp_box(void)
 		return 0;
 	}
 }
-#else
-static void __init detect_vsmp_box(void)
-{
-}
-int is_vsmp_box(void)
-{
-	return 0;
-}
-#endif
 
 void __init vsmp_init(void)
 {


@@ -1,4 +1,4 @@
-obj-y	:= init_$(BITS).o fault.o ioremap.o extable.o pageattr.o mmap.o \
+obj-y	:= init.o init_$(BITS).o fault.o ioremap.o extable.o pageattr.o mmap.o \
 	   pat.o pgtable.o gup.o
 
 obj-$(CONFIG_SMP)		+= tlb.o


@@ -1,5 +1,6 @@
 #include <linux/highmem.h>
 #include <linux/module.h>
+#include <linux/swap.h> /* for totalram_pages */
 
 void *kmap(struct page *page)
 {
@@ -156,3 +157,36 @@ EXPORT_SYMBOL(kmap);
 EXPORT_SYMBOL(kunmap);
 EXPORT_SYMBOL(kmap_atomic);
 EXPORT_SYMBOL(kunmap_atomic);
+
+#ifdef CONFIG_NUMA
+void __init set_highmem_pages_init(void)
+{
+	struct zone *zone;
+	int nid;
+
+	for_each_zone(zone) {
+		unsigned long zone_start_pfn, zone_end_pfn;
+
+		if (!is_highmem(zone))
+			continue;
+
+		zone_start_pfn = zone->zone_start_pfn;
+		zone_end_pfn = zone_start_pfn + zone->spanned_pages;
+
+		nid = zone_to_nid(zone);
+		printk(KERN_INFO "Initializing %s for node %d (%08lx:%08lx)\n",
+				zone->name, nid, zone_start_pfn, zone_end_pfn);
+
+		add_highpages_with_active_regions(nid, zone_start_pfn,
+				 zone_end_pfn);
+	}
+	totalram_pages += totalhigh_pages;
+}
+#else
+void __init set_highmem_pages_init(void)
+{
+	add_highpages_with_active_regions(0, highstart_pfn, highend_pfn);
+	totalram_pages += totalhigh_pages;
+}
+#endif /* CONFIG_NUMA */

arch/x86/mm/init.c (new file, 49 lines)

@@ -0,0 +1,49 @@
+#include <linux/swap.h>
+#include <asm/cacheflush.h>
+#include <asm/page.h>
+#include <asm/sections.h>
+#include <asm/system.h>
+
+void free_init_pages(char *what, unsigned long begin, unsigned long end)
+{
+	unsigned long addr = begin;
+
+	if (addr >= end)
+		return;
+
+	/*
+	 * If debugging page accesses then do not free this memory but
+	 * mark them not present - any buggy init-section access will
+	 * create a kernel page fault:
+	 */
+#ifdef CONFIG_DEBUG_PAGEALLOC
+	printk(KERN_INFO "debug: unmapping init memory %08lx..%08lx\n",
+		begin, PAGE_ALIGN(end));
+	set_memory_np(begin, (end - begin) >> PAGE_SHIFT);
+#else
+	/*
+	 * We just marked the kernel text read only above, now that
+	 * we are going to free part of that, we need to make that
+	 * writeable first.
+	 */
+	set_memory_rw(begin, (end - begin) >> PAGE_SHIFT);
+
+	printk(KERN_INFO "Freeing %s: %luk freed\n", what, (end - begin) >> 10);
+
+	for (; addr < end; addr += PAGE_SIZE) {
+		ClearPageReserved(virt_to_page(addr));
+		init_page_count(virt_to_page(addr));
+		memset((void *)(addr & ~(PAGE_SIZE-1)),
+		       POISON_FREE_INITMEM, PAGE_SIZE);
+		free_page(addr);
+		totalram_pages++;
+	}
+#endif
+}
+
+void free_initmem(void)
+{
+	free_init_pages("unused kernel memory",
+			(unsigned long)(&__init_begin),
+			(unsigned long)(&__init_end));
+}


@@ -50,8 +50,6 @@
 #include <asm/setup.h>
 #include <asm/cacheflush.h>
 
-unsigned int __VMALLOC_RESERVE = 128 << 20;
-
 unsigned long max_low_pfn_mapped;
 unsigned long max_pfn_mapped;
@@ -486,22 +484,10 @@ void __init add_highpages_with_active_regions(int nid, unsigned long start_pfn,
 	work_with_active_regions(nid, add_highpages_work_fn, &data);
 }
 
-#ifndef CONFIG_NUMA
-static void __init set_highmem_pages_init(void)
-{
-	add_highpages_with_active_regions(0, highstart_pfn, highend_pfn);
-
-	totalram_pages += totalhigh_pages;
-}
-#endif /* !CONFIG_NUMA */
-
 #else
 static inline void permanent_kmaps_init(pgd_t *pgd_base)
 {
 }
-static inline void set_highmem_pages_init(void)
-{
-}
 #endif /* CONFIG_HIGHMEM */
 
 void __init native_pagetable_setup_start(pgd_t *base)
@@ -864,10 +850,10 @@ static void __init find_early_table_space(unsigned long end, int use_pse)
 	unsigned long puds, pmds, ptes, tables, start;
 
 	puds = (end + PUD_SIZE - 1) >> PUD_SHIFT;
-	tables = PAGE_ALIGN(puds * sizeof(pud_t));
+	tables = roundup(puds * sizeof(pud_t), PAGE_SIZE);
 
 	pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT;
-	tables += PAGE_ALIGN(pmds * sizeof(pmd_t));
+	tables += roundup(pmds * sizeof(pmd_t), PAGE_SIZE);
 
 	if (use_pse) {
 		unsigned long extra;
@@ -878,10 +864,10 @@ static void __init find_early_table_space(unsigned long end, int use_pse)
 	} else
 		ptes = (end + PAGE_SIZE - 1) >> PAGE_SHIFT;
 
-	tables += PAGE_ALIGN(ptes * sizeof(pte_t));
+	tables += roundup(ptes * sizeof(pte_t), PAGE_SIZE);
 
 	/* for fixmap */
-	tables += PAGE_ALIGN(__end_of_fixed_addresses * sizeof(pte_t));
+	tables += roundup(__end_of_fixed_addresses * sizeof(pte_t), PAGE_SIZE);
 
 	/*
 	 * RED-PEN putting page tables only on node 0 could
@@ -1231,45 +1217,6 @@ void mark_rodata_ro(void)
 }
 #endif
 
-void free_init_pages(char *what, unsigned long begin, unsigned long end)
-{
-#ifdef CONFIG_DEBUG_PAGEALLOC
-	/*
-	 * If debugging page accesses then do not free this memory but
-	 * mark them not present - any buggy init-section access will
-	 * create a kernel page fault:
-	 */
-	printk(KERN_INFO "debug: unmapping init memory %08lx..%08lx\n",
-		begin, PAGE_ALIGN(end));
-	set_memory_np(begin, (end - begin) >> PAGE_SHIFT);
-#else
-	unsigned long addr;
-
-	/*
-	 * We just marked the kernel text read only above, now that
-	 * we are going to free part of that, we need to make that
-	 * writeable first.
-	 */
-	set_memory_rw(begin, (end - begin) >> PAGE_SHIFT);
-
-	for (addr = begin; addr < end; addr += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(addr));
-		init_page_count(virt_to_page(addr));
-		memset((void *)addr, POISON_FREE_INITMEM, PAGE_SIZE);
-		free_page(addr);
-		totalram_pages++;
-	}
-	printk(KERN_INFO "Freeing %s: %luk freed\n", what, (end - begin) >> 10);
-#endif
-}
-
-void free_initmem(void)
-{
-	free_init_pages("unused kernel memory",
-			(unsigned long)(&__init_begin),
-			(unsigned long)(&__init_end));
-}
-
 #ifdef CONFIG_BLK_DEV_INITRD
 void free_initrd_mem(unsigned long start, unsigned long end)
 {


@@ -748,6 +748,8 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
 	pos = start_pfn << PAGE_SHIFT;
 	end_pfn = ((pos + (PMD_SIZE - 1)) >> PMD_SHIFT)
 			<< (PMD_SHIFT - PAGE_SHIFT);
+	if (end_pfn > (end >> PAGE_SHIFT))
+		end_pfn = end >> PAGE_SHIFT;
 	if (start_pfn < end_pfn) {
 		nr_range = save_mr(mr, nr_range, start_pfn, end_pfn, 0);
 		pos = end_pfn << PAGE_SHIFT;
@@ -979,43 +981,6 @@ void __init mem_init(void)
 		initsize >> 10);
 }
 
-void free_init_pages(char *what, unsigned long begin, unsigned long end)
-{
-	unsigned long addr = begin;
-
-	if (addr >= end)
-		return;
-
-	/*
-	 * If debugging page accesses then do not free this memory but
-	 * mark them not present - any buggy init-section access will
-	 * create a kernel page fault:
-	 */
-#ifdef CONFIG_DEBUG_PAGEALLOC
-	printk(KERN_INFO "debug: unmapping init memory %08lx..%08lx\n",
-		begin, PAGE_ALIGN(end));
-	set_memory_np(begin, (end - begin) >> PAGE_SHIFT);
-#else
-	printk(KERN_INFO "Freeing %s: %luk freed\n", what, (end - begin) >> 10);
-
-	for (; addr < end; addr += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(addr));
-		init_page_count(virt_to_page(addr));
-		memset((void *)(addr & ~(PAGE_SIZE-1)),
-			POISON_FREE_INITMEM, PAGE_SIZE);
-		free_page(addr);
-		totalram_pages++;
-	}
-#endif
-}
-
-void free_initmem(void)
-{
-	free_init_pages("unused kernel memory",
-			(unsigned long)(&__init_begin),
-			(unsigned long)(&__init_end));
-}
-
 #ifdef CONFIG_DEBUG_RODATA
 const int rodata_test_data = 0xC3;
 EXPORT_SYMBOL_GPL(rodata_test_data);


@@ -20,6 +20,17 @@
 #include <asm/pat.h>
 #include <linux/module.h>
 
+int is_io_mapping_possible(resource_size_t base, unsigned long size)
+{
+#ifndef CONFIG_X86_PAE
+	/* There is no way to map greater than 1 << 32 address without PAE */
+	if (base + size > 0x100000000ULL)
+		return 0;
+#endif
+	return 1;
+}
+EXPORT_SYMBOL_GPL(is_io_mapping_possible);
+
 /* Map 'pfn' using fixed map 'type' and protections 'prot'
  */
 void *


@@ -9,44 +9,44 @@
 #include <asm/e820.h>
 
-static void __init memtest(unsigned long start_phys, unsigned long size,
-			   unsigned pattern)
+static u64 patterns[] __initdata = {
+	0,
+	0xffffffffffffffffULL,
+	0x5555555555555555ULL,
+	0xaaaaaaaaaaaaaaaaULL,
+	0x1111111111111111ULL,
+	0x2222222222222222ULL,
+	0x4444444444444444ULL,
+	0x8888888888888888ULL,
+	0x3333333333333333ULL,
+	0x6666666666666666ULL,
+	0x9999999999999999ULL,
+	0xccccccccccccccccULL,
+	0x7777777777777777ULL,
+	0xbbbbbbbbbbbbbbbbULL,
+	0xddddddddddddddddULL,
+	0xeeeeeeeeeeeeeeeeULL,
+	0x7a6c7258554e494cULL, /* yeah ;-) */
+};
+
+static void __init reserve_bad_mem(u64 pattern, u64 start_bad, u64 end_bad)
 {
-	unsigned long i;
-	unsigned long *start;
-	unsigned long start_bad;
-	unsigned long last_bad;
-	unsigned long val;
-	unsigned long start_phys_aligned;
-	unsigned long count;
-	unsigned long incr;
+	printk(KERN_INFO "  %016llx bad mem addr %010llx - %010llx reserved\n",
+	       (unsigned long long) pattern,
+	       (unsigned long long) start_bad,
+	       (unsigned long long) end_bad);
+	reserve_early(start_bad, end_bad, "BAD RAM");
+}
 
-	switch (pattern) {
-	case 0:
-		val = 0UL;
-		break;
-	case 1:
-		val = -1UL;
-		break;
-	case 2:
-#ifdef CONFIG_X86_64
-		val = 0x5555555555555555UL;
-#else
-		val = 0x55555555UL;
-#endif
-		break;
-	case 3:
-#ifdef CONFIG_X86_64
-		val = 0xaaaaaaaaaaaaaaaaUL;
-#else
-		val = 0xaaaaaaaaUL;
-#endif
-		break;
-	default:
-		return;
-	}
+static void __init memtest(u64 pattern, u64 start_phys, u64 size)
+{
+	u64 i, count;
+	u64 *start;
+	u64 start_bad, last_bad;
+	u64 start_phys_aligned;
+	size_t incr;
 
-	incr = sizeof(unsigned long);
+	incr = sizeof(pattern);
 	start_phys_aligned = ALIGN(start_phys, incr);
 	count = (size - (start_phys_aligned - start_phys))/incr;
 	start = __va(start_phys_aligned);
@@ -54,25 +54,42 @@
 	last_bad = 0;
 
 	for (i = 0; i < count; i++)
-		start[i] = val;
+		start[i] = pattern;
 	for (i = 0; i < count; i++, start++, start_phys_aligned += incr) {
-		if (*start != val) {
-			if (start_phys_aligned == last_bad + incr) {
-				last_bad += incr;
-			} else {
-				if (start_bad) {
-					printk(KERN_CONT "\n  %016lx bad mem addr %010lx - %010lx reserved",
-						val, start_bad, last_bad + incr);
-					reserve_early(start_bad, last_bad + incr, "BAD RAM");
-				}
-				start_bad = last_bad = start_phys_aligned;
-			}
+		if (*start == pattern)
+			continue;
+		if (start_phys_aligned == last_bad + incr) {
+			last_bad += incr;
+			continue;
 		}
+		if (start_bad)
+			reserve_bad_mem(pattern, start_bad, last_bad + incr);
+		start_bad = last_bad = start_phys_aligned;
 	}
-	if (start_bad) {
-		printk(KERN_CONT "\n  %016lx bad mem addr %010lx - %010lx reserved",
-			val, start_bad, last_bad + incr);
-		reserve_early(start_bad, last_bad + incr, "BAD RAM");
+	if (start_bad)
+		reserve_bad_mem(pattern, start_bad, last_bad + incr);
+}
+
+static void __init do_one_pass(u64 pattern, u64 start, u64 end)
+{
+	u64 size = 0;
+
+	while (start < end) {
+		start = find_e820_area_size(start, &size, 1);
+
+		/* done ? */
+		if (start >= end)
+			break;
+		if (start + size > end)
+			size = end - start;
+
+		printk(KERN_INFO "  %010llx - %010llx pattern %016llx\n",
+		       (unsigned long long) start,
+		       (unsigned long long) start + size,
+		       (unsigned long long) cpu_to_be64(pattern));
+		memtest(pattern, start, size);
+
+		start += size;
 	}
 }
@@ -90,33 +107,22 @@ early_param("memtest", parse_memtest);
 
 void __init early_memtest(unsigned long start, unsigned long end)
 {
-	u64 t_start, t_size;
-	unsigned pattern;
+	unsigned int i;
+	unsigned int idx = 0;
 
 	if (!memtest_pattern)
 		return;
 
-	printk(KERN_INFO "early_memtest: pattern num %d", memtest_pattern);
-	for (pattern = 0; pattern < memtest_pattern; pattern++) {
-		t_start = start;
-		t_size = 0;
-		while (t_start < end) {
-			t_start = find_e820_area_size(t_start, &t_size, 1);
-
-			/* done ? */
-			if (t_start >= end)
-				break;
-			if (t_start + t_size > end)
-				t_size = end - t_start;
-
-			printk(KERN_CONT "\n  %010llx - %010llx pattern %d",
-				(unsigned long long)t_start,
-				(unsigned long long)t_start + t_size, pattern);
-
-			memtest(t_start, t_size, pattern);
-
-			t_start += t_size;
-		}
+	printk(KERN_INFO "early_memtest: # of tests: %d\n", memtest_pattern);
+	for (i = 0; i < memtest_pattern; i++) {
+		idx = i % ARRAY_SIZE(patterns);
+		do_one_pass(patterns[idx], start, end);
+	}
+
+	if (idx > 0) {
+		printk(KERN_INFO "early_memtest: wipe out "
+		       "test pattern from memory\n");
+		/* additional test with pattern 0 will do this */
+		do_one_pass(0, start, end);
 	}
-	printk(KERN_CONT "\n");
 }


@@ -423,32 +423,6 @@ void __init initmem_init(unsigned long start_pfn,
 	setup_bootmem_allocator();
 }
 
-void __init set_highmem_pages_init(void)
-{
-#ifdef CONFIG_HIGHMEM
-	struct zone *zone;
-	int nid;
-
-	for_each_zone(zone) {
-		unsigned long zone_start_pfn, zone_end_pfn;
-
-		if (!is_highmem(zone))
-			continue;
-
-		zone_start_pfn = zone->zone_start_pfn;
-		zone_end_pfn = zone_start_pfn + zone->spanned_pages;
-
-		nid = zone_to_nid(zone);
-		printk(KERN_INFO "Initializing %s for node %d (%08lx:%08lx)\n",
-				zone->name, nid, zone_start_pfn, zone_end_pfn);
-
-		add_highpages_with_active_regions(nid, zone_start_pfn,
-				 zone_end_pfn);
-	}
-	totalram_pages += totalhigh_pages;
-#endif
-}
-
 #ifdef CONFIG_MEMORY_HOTPLUG
 static int paddr_to_nid(u64 addr)
 {


@@ -11,6 +11,7 @@
 #include <linux/bootmem.h>
 #include <linux/debugfs.h>
 #include <linux/kernel.h>
+#include <linux/module.h>
 #include <linux/gfp.h>
 #include <linux/mm.h>
 #include <linux/fs.h>
@@ -633,6 +634,33 @@ void unmap_devmem(unsigned long pfn, unsigned long size, pgprot_t vma_prot)
 	free_memtype(addr, addr + size);
 }
 
+/*
+ * Change the memory type for the physical address range in kernel identity
+ * mapping space if that range is a part of identity map.
+ */
+int kernel_map_sync_memtype(u64 base, unsigned long size, unsigned long flags)
+{
+	unsigned long id_sz;
+
+	if (!pat_enabled || base >= __pa(high_memory))
+		return 0;
+
+	id_sz = (__pa(high_memory) < base + size) ?
+				__pa(high_memory) - base :
+				size;
+
+	if (ioremap_change_attr((unsigned long)__va(base), id_sz, flags) < 0) {
+		printk(KERN_INFO
+			"%s:%d ioremap_change_attr failed %s "
+			"for %Lx-%Lx\n",
+			current->comm, current->pid,
+			cattr_name(flags),
+			base, (unsigned long long)(base + size));
+		return -EINVAL;
+	}
+	return 0;
+}
+
 /*
  * Internal interface to reserve a range of physical memory with prot.
  * Reserved non RAM regions only and after successful reserve_memtype,
@@ -642,7 +670,7 @@ static int reserve_pfn_range(u64 paddr, unsigned long size, pgprot_t *vma_prot,
 				int strict_prot)
 {
 	int is_ram = 0;
-	int id_sz, ret;
+	int ret;
 	unsigned long flags;
 	unsigned long want_flags = (pgprot_val(*vma_prot) & _PAGE_CACHE_MASK);
@@ -679,23 +707,8 @@ static int reserve_pfn_range(u64 paddr, unsigned long size, pgprot_t *vma_prot,
 			flags);
 	}
 
-	/* Need to keep identity mapping in sync */
-	if (paddr >= __pa(high_memory))
-		return 0;
-
-	id_sz = (__pa(high_memory) < paddr + size) ?
-				__pa(high_memory) - paddr :
-				size;
-
-	if (ioremap_change_attr((unsigned long)__va(paddr), id_sz, flags) < 0) {
+	if (kernel_map_sync_memtype(paddr, size, flags) < 0) {
 		free_memtype(paddr, paddr + size);
-		printk(KERN_ERR
-			"%s:%d reserve_pfn_range ioremap_change_attr failed %s "
-			"for %Lx-%Lx\n",
-			current->comm, current->pid,
-			cattr_name(flags),
-			(unsigned long long)paddr,
-			(unsigned long long)(paddr + size));
 		return -EINVAL;
 	}
 
 	return 0;
@@ -877,6 +890,7 @@ pgprot_t pgprot_writecombine(pgprot_t prot)
 	else
 		return pgprot_noncached(prot);
 }
+EXPORT_SYMBOL_GPL(pgprot_writecombine);
 
 #if defined(CONFIG_DEBUG_FS) && defined(CONFIG_X86_PAT)


@@ -313,6 +313,24 @@ int ptep_clear_flush_young(struct vm_area_struct *vma,
 	return young;
 }
 
+/**
+ * reserve_top_address - reserves a hole in the top of kernel address space
+ * @reserve - size of hole to reserve
+ *
+ * Can be used to relocate the fixmap area and poke a hole in the top
+ * of kernel address space to make room for a hypervisor.
+ */
+void __init reserve_top_address(unsigned long reserve)
+{
+#ifdef CONFIG_X86_32
+	BUG_ON(fixmaps_set > 0);
+	printk(KERN_INFO "Reserving virtual address space above 0x%08x\n",
+	       (int)-reserve);
+	__FIXADDR_TOP = -reserve - PAGE_SIZE;
+	__VMALLOC_RESERVE += reserve;
+#endif
+}
+
 int fixmaps_set;
 
 void __native_set_fixmap(enum fixed_addresses idx, pte_t pte)


@@ -20,6 +20,8 @@
 #include <asm/tlb.h>
 #include <asm/tlbflush.h>
 
+unsigned int __VMALLOC_RESERVE = 128 << 20;
+
 /*
  * Associate a virtual page frame with a given physical page frame
  * and protection flags for that frame.
@@ -97,22 +99,6 @@ void set_pmd_pfn(unsigned long vaddr, unsigned long pfn, pgprot_t flags)
 unsigned long __FIXADDR_TOP = 0xfffff000;
 EXPORT_SYMBOL(__FIXADDR_TOP);
 
-/**
- * reserve_top_address - reserves a hole in the top of kernel address space
- * @reserve - size of hole to reserve
- *
- * Can be used to relocate the fixmap area and poke a hole in the top
- * of kernel address space to make room for a hypervisor.
- */
-void __init reserve_top_address(unsigned long reserve)
-{
-	BUG_ON(fixmaps_set > 0);
-	printk(KERN_INFO "Reserving virtual address space above 0x%08x\n",
-	       (int)-reserve);
-	__FIXADDR_TOP = -reserve - PAGE_SIZE;
-	__VMALLOC_RESERVE += reserve;
-}
-
 /*
  * vmalloc=size forces the vmalloc area to be exactly 'size'
  * bytes. This can be used to increase (or decrease) the


@@ -78,8 +78,18 @@ static void ppro_setup_ctrs(struct op_msrs const * const msrs)
 	if (cpu_has_arch_perfmon) {
 		union cpuid10_eax eax;
 		eax.full = cpuid_eax(0xa);
-		if (counter_width < eax.split.bit_width)
-			counter_width = eax.split.bit_width;
+
+		/*
+		 * For Core2 (family 6, model 15), don't reset the
+		 * counter width:
+		 */
+		if (!(eax.split.version_id == 0 &&
+			current_cpu_data.x86 == 6 &&
+				current_cpu_data.x86_model == 15)) {
+			if (counter_width < eax.split.bit_width)
+				counter_width = eax.split.bit_width;
+		}
 	}
 
 	/* clear all counters */


@@ -942,6 +942,9 @@ asmlinkage void __init xen_start_kernel(void)
 	   possible map and a non-dummy shared_info. */
 	per_cpu(xen_vcpu, 0) = &HYPERVISOR_shared_info->vcpu_info[0];
 
+	local_irq_disable();
+	early_boot_irqs_off();
+
 	xen_raw_console_write("mapping kernel into physical memory\n");
 	pgd = xen_setup_kernel_pagetable(pgd, xen_start_info->nr_pages);


@@ -38,72 +38,84 @@ void blk_recalc_rq_sectors(struct request *rq, int nsect)
 	}
 }
 
-void blk_recalc_rq_segments(struct request *rq)
+static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
+					     struct bio *bio,
+					     unsigned int *seg_size_ptr)
 {
-	int nr_phys_segs;
 	unsigned int phys_size;
 	struct bio_vec *bv, *bvprv = NULL;
-	int seg_size;
-	int cluster;
-	struct req_iterator iter;
-	int high, highprv = 1;
-	struct request_queue *q = rq->q;
+	int cluster, i, high, highprv = 1;
+	unsigned int seg_size, nr_phys_segs;
+	struct bio *fbio;
 
-	if (!rq->bio)
-		return;
+	if (!bio)
+		return 0;
 
+	fbio = bio;
 	cluster = test_bit(QUEUE_FLAG_CLUSTER, &q->queue_flags);
 	seg_size = 0;
 	phys_size = nr_phys_segs = 0;
-	rq_for_each_segment(bv, rq, iter) {
-		/*
-		 * the trick here is making sure that a high page is never
-		 * considered part of another segment, since that might
-		 * change with the bounce page.
-		 */
-		high = page_to_pfn(bv->bv_page) > q->bounce_pfn;
-		if (high || highprv)
-			goto new_segment;
-		if (cluster) {
-			if (seg_size + bv->bv_len > q->max_segment_size)
-				goto new_segment;
-			if (!BIOVEC_PHYS_MERGEABLE(bvprv, bv))
-				goto new_segment;
-			if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bv))
+	for_each_bio(bio) {
+		bio_for_each_segment(bv, bio, i) {
+			/*
+			 * the trick here is making sure that a high page is
+			 * never considered part of another segment, since that
+			 * might change with the bounce page.
+			 */
+			high = page_to_pfn(bv->bv_page) > q->bounce_pfn;
+			if (high || highprv)
 				goto new_segment;
+			if (cluster) {
+				if (seg_size + bv->bv_len > q->max_segment_size)
+					goto new_segment;
+				if (!BIOVEC_PHYS_MERGEABLE(bvprv, bv))
+					goto new_segment;
+				if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bv))
+					goto new_segment;
 
-			seg_size += bv->bv_len;
-			bvprv = bv;
-			continue;
-		}
+				seg_size += bv->bv_len;
+				bvprv = bv;
+				continue;
+			}
 new_segment:
-		if (nr_phys_segs == 1 && seg_size > rq->bio->bi_seg_front_size)
-			rq->bio->bi_seg_front_size = seg_size;
+			if (nr_phys_segs == 1 && seg_size >
+			    fbio->bi_seg_front_size)
+				fbio->bi_seg_front_size = seg_size;
 
-		nr_phys_segs++;
-		bvprv = bv;
-		seg_size = bv->bv_len;
-		highprv = high;
+			nr_phys_segs++;
+			bvprv = bv;
+			seg_size = bv->bv_len;
+			highprv = high;
+		}
 	}
 
-	if (nr_phys_segs == 1 && seg_size > rq->bio->bi_seg_front_size)
+	if (seg_size_ptr)
+		*seg_size_ptr = seg_size;
+
+	return nr_phys_segs;
+}
+
+void blk_recalc_rq_segments(struct request *rq)
+{
+	unsigned int seg_size = 0, phys_segs;
+
+	phys_segs = __blk_recalc_rq_segments(rq->q, rq->bio, &seg_size);
+
+	if (phys_segs == 1 && seg_size > rq->bio->bi_seg_front_size)
 		rq->bio->bi_seg_front_size = seg_size;
 	if (seg_size > rq->biotail->bi_seg_back_size)
 		rq->biotail->bi_seg_back_size = seg_size;
 
-	rq->nr_phys_segments = nr_phys_segs;
+	rq->nr_phys_segments = phys_segs;
 }
 
 void blk_recount_segments(struct request_queue *q, struct bio *bio)
 {
-	struct request rq;
 	struct bio *nxt = bio->bi_next;
 
-	rq.q = q;
-	rq.bio = rq.biotail = bio;
 	bio->bi_next = NULL;
-	blk_recalc_rq_segments(&rq);
+	bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio, NULL);
 	bio->bi_next = nxt;
-	bio->bi_phys_segments = rq.nr_phys_segments;
 	bio->bi_flags |= (1 << BIO_SEG_VALID);
 }
 EXPORT_SYMBOL(blk_recount_segments);


@@ -256,6 +256,22 @@ void blkdev_show(struct seq_file *seqf, off_t offset)
 }
 #endif /* CONFIG_PROC_FS */
 
+/**
+ * register_blkdev - register a new block device
+ *
+ * @major: the requested major device number [1..255]. If @major=0, try to
+ *         allocate any unused major number.
+ * @name: the name of the new block device as a zero terminated string
+ *
+ * The @name must be unique within the system.
+ *
+ * The return value depends on the @major input parameter.
+ *  - if a major device number was requested in range [1..255] then the
+ *    function returns zero on success, or a negative error code
+ *  - if any unused major number was requested with @major=0 parameter
+ *    then the return value is the allocated major number in range
+ *    [1..255] or a negative error code otherwise
+ */
 int register_blkdev(unsigned int major, const char *name)
 {
 	struct blk_major_name **n, *p;


@@ -214,7 +214,7 @@ static void crypto_ahash_show(struct seq_file *m, struct crypto_alg *alg)
 	seq_printf(m, "async        : %s\n", alg->cra_flags & CRYPTO_ALG_ASYNC ?
 					     "yes" : "no");
 	seq_printf(m, "blocksize    : %u\n", alg->cra_blocksize);
-	seq_printf(m, "digestsize   : %u\n", alg->cra_hash.digestsize);
+	seq_printf(m, "digestsize   : %u\n", alg->cra_ahash.digestsize);
 }
 
 const struct crypto_type crypto_ahash_type = {


@@ -24,7 +24,7 @@
 #include <linux/libata.h>
 
 #define DRV_NAME "pata_amd"
-#define DRV_VERSION "0.3.11"
+#define DRV_VERSION "0.4.1"
 
 /**
  *	timing_setup		-	shared timing computation and load
@@ -145,6 +145,13 @@ static int amd_pre_reset(struct ata_link *link, unsigned long deadline)
 	return ata_sff_prereset(link, deadline);
 }
 
+/**
+ *	amd_cable_detect	-	report cable type
+ *	@ap: port
+ *
+ *	AMD controller/BIOS setups record the cable type in word 0x42
+ */
+
 static int amd_cable_detect(struct ata_port *ap)
 {
 	static const u32 bitmask[2] = {0x03, 0x0C};
@@ -157,6 +164,40 @@ static int amd_cable_detect(struct ata_port *ap)
 	return ATA_CBL_PATA40;
 }
 
+/**
+ *	amd_fifo_setup		-	set the PIO FIFO for ATA/ATAPI
+ *	@ap: ATA interface
+ *	@adev: ATA device
+ *
+ *	Set the PCI fifo for this device according to the devices present
+ *	on the bus at this point in time. We need to turn the post write buffer
+ *	off for ATAPI devices as we may need to issue a word sized write to the
+ *	device as the final I/O
+ */
+
+static void amd_fifo_setup(struct ata_port *ap)
+{
+	struct ata_device *adev;
+	struct pci_dev *pdev = to_pci_dev(ap->host->dev);
+	static const u8 fifobit[2] = { 0xC0, 0x30};
+	u8 fifo = fifobit[ap->port_no];
+	u8 r;
+
+	ata_for_each_dev(adev, &ap->link, ENABLED) {
+		if (adev->class == ATA_DEV_ATAPI)
+			fifo = 0;
+	}
+	if (pdev->device == PCI_DEVICE_ID_AMD_VIPER_7411) /* FIFO is broken */
+		fifo = 0;
+
+	/* On the later chips the read prefetch bits become no-op bits */
+	pci_read_config_byte(pdev, 0x41, &r);
+	r &= ~fifobit[ap->port_no];
+	r |= fifo;
+	pci_write_config_byte(pdev, 0x41, r);
+}
+
 /**
  *	amd33_set_piomode	-	set initial PIO mode data
  *	@ap: ATA interface
@@ -167,21 +208,25 @@ static int amd_cable_detect(struct ata_port *ap)
 
 static void amd33_set_piomode(struct ata_port *ap, struct ata_device *adev)
 {
+	amd_fifo_setup(ap);
 	timing_setup(ap, adev, 0x40, adev->pio_mode, 1);
 }
 
 static void amd66_set_piomode(struct ata_port *ap, struct ata_device *adev)
 {
+	amd_fifo_setup(ap);
 	timing_setup(ap, adev, 0x40, adev->pio_mode, 2);
 }
 
 static void amd100_set_piomode(struct ata_port *ap, struct ata_device *adev)
 {
+	amd_fifo_setup(ap);
 	timing_setup(ap, adev, 0x40, adev->pio_mode, 3);
 }
 
 static void amd133_set_piomode(struct ata_port *ap, struct ata_device *adev)
 {
+	amd_fifo_setup(ap);
 	timing_setup(ap, adev, 0x40, adev->pio_mode, 4);
 }
 
@@ -397,6 +442,16 @@ static struct ata_port_operations nv133_port_ops = {
 	.set_dmamode	= nv133_set_dmamode,
 };
 
+static void amd_clear_fifo(struct pci_dev *pdev)
+{
+	u8 fifo;
+	/* Disable the FIFO, the FIFO logic will re-enable it as
+	   appropriate */
+	pci_read_config_byte(pdev, 0x41, &fifo);
+	fifo &= 0x0F;
+	pci_write_config_byte(pdev, 0x41, fifo);
+}
+
 static int amd_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
 {
 	static const struct ata_port_info info[10] = {
@@ -503,14 +558,8 @@ static int amd_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (type < 3)
 		ata_pci_bmdma_clear_simplex(pdev);
-
-	/* Check for AMD7411 */
-	if (type == 3)
-		/* FIFO is broken */
-		pci_write_config_byte(pdev, 0x41, fifo & 0x0F);
-	else
-		pci_write_config_byte(pdev, 0x41, fifo | 0xF0);
+	if (pdev->vendor == PCI_VENDOR_ID_AMD)
+		amd_clear_fifo(pdev);
 
 	/* Cable detection on Nvidia chips doesn't work too well,
 	 * cache BIOS programmed UDMA mode.
 	 */
@@ -536,18 +585,11 @@ static int amd_reinit_one(struct pci_dev *pdev)
 		return rc;
 
 	if (pdev->vendor == PCI_VENDOR_ID_AMD) {
-		u8 fifo;
-		pci_read_config_byte(pdev, 0x41, &fifo);
-		if (pdev->device == PCI_DEVICE_ID_AMD_VIPER_7411)
-			/* FIFO is broken */
-			pci_write_config_byte(pdev, 0x41, fifo & 0x0F);
-		else
-			pci_write_config_byte(pdev, 0x41, fifo | 0xF0);
+		amd_clear_fifo(pdev);
 		if (pdev->device == PCI_DEVICE_ID_AMD_VIPER_7409 ||
 		    pdev->device == PCI_DEVICE_ID_AMD_COBRA_7401)
 			ata_pci_bmdma_clear_simplex(pdev);
 	}
 	ata_host_resume(host);
 	return 0;
 }


@@ -557,6 +557,9 @@ static unsigned int it821x_read_id(struct ata_device *adev,
 		id[83] |= 0x4400;	/* Word 83 is valid and LBA48 */
 		id[86] |= 0x0400;	/* LBA48 on */
 		id[ATA_ID_MAJOR_VER] |= 0x1F;
+		/* Clear the serial number because it's different each boot
+		   which breaks validation on resume */
+		memset(&id[ATA_ID_SERNO], 0x20, ATA_ID_SERNO_LEN);
 	}
 	return err_mask;
 }

Some files were not shown because too many files changed in this diff.