This commit is contained in:
Jeff Garzik 2006-02-20 05:13:31 -05:00
Parent db024d5398 6aff8f1f07
Commit ff60774859
336 changed files with 5509 additions and 3231 deletions


@ -11,6 +11,8 @@
Joel Schopp <jschopp@austin.ibm.com> Joel Schopp <jschopp@austin.ibm.com>
ia64/x86_64: ia64/x86_64:
Ashok Raj <ashok.raj@intel.com> Ashok Raj <ashok.raj@intel.com>
s390:
Heiko Carstens <heiko.carstens@de.ibm.com>
Authors: Ashok Raj <ashok.raj@intel.com> Authors: Ashok Raj <ashok.raj@intel.com>
Lots of feedback: Nathan Lynch <nathanl@austin.ibm.com>, Lots of feedback: Nathan Lynch <nathanl@austin.ibm.com>,
@ -44,9 +46,23 @@ maxcpus=n Restrict boot time cpus to n. Say if you have 4 cpus, using
maxcpus=2 will only boot 2. You can choose to bring the maxcpus=2 will only boot 2. You can choose to bring the
other cpus later online, read FAQ's for more info. other cpus later online, read FAQ's for more info.
additional_cpus=n [x86_64 only] use this to limit hotpluggable cpus. additional_cpus=n [x86_64, s390 only] use this to limit hotpluggable cpus.
This option sets This option sets
cpu_possible_map = cpu_present_map + additional_cpus cpu_possible_map = cpu_present_map + additional_cpus
ia64 and x86_64 use the number of disabled local APICs in the ACPI MADT
tables to determine the number of potentially hot-pluggable cpus. The
implementation should only rely on this to count the number of cpus, but
*MUST* not rely on the apicid values in those tables for disabled apics.
In the event the BIOS doesn't mark such hot-pluggable cpus as disabled
entries, one could use the parameter "additional_cpus=x" to represent
those cpus in the cpu_possible_map.
possible_cpus=n [s390 only] use this to set hotpluggable cpus.
This option sets the possible_cpus bits in
cpu_possible_map, thus keeping the number of bits set
constant even if the machine gets rebooted.
This option overrides additional_cpus.
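As a rough illustration of what these options feed into, here is a minimal sketch (function and variable names are hypothetical, not the actual ia64/x86_64/s390 code) of how an architecture might prefill cpu_possible_map at boot:

	#include <linux/init.h>
	#include <linux/cpumask.h>

	static int additional_cpus __initdata;	/* set from the "additional_cpus=" option (parsing not shown) */

	/* hypothetical example, not the real per-arch implementation */
	static void __init example_prefill_possible_map(void)
	{
		int i, possible = num_present_cpus() + additional_cpus;

		if (possible > NR_CPUS)
			possible = NR_CPUS;

		for (i = 0; i < possible; i++)
			cpu_set(i, cpu_possible_map);
	}

The real per-architecture code differs; for instance, the ia64 prefill_possible_map() added elsewhere in this commit also accounts for disabled CPUs reported by the BIOS.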
CPU maps and such CPU maps and such
----------------- -----------------


@ -0,0 +1,234 @@
=================================
INTERNAL KERNEL ABI FOR FR-V ARCH
=================================
The internal FRV kernel ABI is not quite the same as the userspace ABI. A number of the registers
are used for special purposes, and the ABI is not consistent between modules and core, or between
MMU and no-MMU builds.
This partly stems from the fact that FRV CPUs do not have a separate supervisor stack pointer, and
most of them do not have any scratch registers, thus requiring at least one general purpose
register to be clobbered in such an event. Also, within the kernel core, it is possible to simply
jump or call directly between functions using a relative offset. This cannot be extended to modules
for the displacement is likely to be too far. Thus in modules the address of a function to call
must be calculated in a register and then used, requiring two extra instructions.
This document has the following sections:
(*) System call register ABI
(*) CPU operating modes
(*) Internal kernel-mode register ABI
(*) Internal debug-mode register ABI
(*) Virtual interrupt handling
========================
SYSTEM CALL REGISTER ABI
========================
When a system call is made, the following registers are effective:
REGISTERS CALL RETURN
=============== ======================= =======================
GR7 System call number Preserved
GR8 Syscall arg #1 Return value
GR9-GR13 Syscall arg #2-6 Preserved
===================
CPU OPERATING MODES
===================
The FR-V CPU has three basic operating modes. In order of increasing capability:
(1) User mode.
Basic userspace running mode.
(2) Kernel mode.
Normal kernel mode. There are many additional control registers available that may be
accessed in this mode, in addition to all the stuff available to user mode. This has two
submodes:
(a) Exceptions enabled (PSR.T == 1).
Exceptions will invoke the appropriate normal kernel mode handler. On entry to the
handler, the PSR.T bit will be cleared.
(b) Exceptions disabled (PSR.T == 0).
No exceptions or interrupts may happen. Any mandatory exceptions will cause the CPU to
halt unless the CPU is told to jump into debug mode instead.
(3) Debug mode.
No exceptions may happen in this mode. Memory protection and management exceptions will be
flagged for later consideration, but the exception handler won't be invoked. Debugging traps
such as hardware breakpoints and watchpoints will be ignored. This mode is entered only by
debugging events obtained from the other two modes.
All kernel mode registers may be accessed, plus a few extra debugging specific registers.
=================================
INTERNAL KERNEL-MODE REGISTER ABI
=================================
There are a number of permanent register assignments that are set up by entry.S in the exception
prologue. Note that there is a complete set of exception prologues for each of user->kernel
transition and kernel->kernel transition. There are also user->debug and kernel->debug mode
transition prologues.
REGISTER FLAVOUR USE
=============== ======= ====================================================
GR1 Supervisor stack pointer
GR15 Current thread info pointer
GR16 GP-Rel base register for small data
GR28 Current exception frame pointer (__frame)
GR29 Current task pointer (current)
GR30 Destroyed by kernel mode entry
GR31 NOMMU Destroyed by debug mode entry
GR31 MMU Destroyed by TLB miss kernel mode entry
CCR.ICC2 Virtual interrupt disablement tracking
CCCR.CC3 Cleared by exception prologue (atomic op emulation)
SCR0 MMU See mmu-layout.txt.
SCR1 MMU See mmu-layout.txt.
SCR2 MMU Save for EAR0 (destroyed by icache insns in debug mode)
SCR3 MMU Save for GR31 during debug exceptions
DAMR/IAMR NOMMU Fixed memory protection layout.
DAMR/IAMR MMU See mmu-layout.txt.
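For illustration, fixed assignments such as GR28 and GR29 in the table above are typically exposed to C code as GCC global register variables; a minimal sketch (not necessarily the exact kernel declarations):

	struct pt_regs;
	struct task_struct;

	/* sketch: bind the fixed registers from the table above to C symbols */
	register struct pt_regs *__frame asm("gr28");		/* current exception frame */
	register struct task_struct *current asm("gr29");	/* current task */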
Certain registers are also used or modified across function calls:
REGISTER CALL RETURN
=============== =============================== ===============================
GR0 Fixed Zero -
GR2 Function call frame pointer
GR3 Special Preserved
GR3-GR7 - Clobbered
GR8 Function call arg #1 Return value (or clobbered)
GR9 Function call arg #2 Return value MSW (or clobbered)
GR10-GR13 Function call arg #3-#6 Clobbered
GR14 - Clobbered
GR15-GR16 Special Preserved
GR17-GR27 - Preserved
GR28-GR31 Special Only accessed explicitly
LR Return address after CALL Clobbered
CCR/CCCR - Mostly Clobbered
================================
INTERNAL DEBUG-MODE REGISTER ABI
================================
This is the same as the kernel-mode register ABI for function calls. The difference is that in
debug-mode there's a different stack and a different exception frame. Almost all the global
registers from kernel-mode (including the stack pointer) may be changed.
REGISTER FLAVOUR USE
=============== ======= ====================================================
GR1 Debug stack pointer
GR16 GP-Rel base register for small data
GR31 Current debug exception frame pointer (__debug_frame)
SCR3 MMU Saved value of GR31
Note that debug mode is able to interfere with the kernel's emulated atomic ops, so it must be
exceedingly careful not to do any that would interact with the main kernel in this regard. Hence
the debug mode code (gdbstub) is almost completely self-contained. The only external code used is
the sprintf family of functions.
Furthermore, break.S is so complicated because single-step mode does not switch off on entry to an
exception. That means unless manually disabled, single-stepping will blithely go on stepping into
things like interrupts. See gdbstub.txt for more information.
==========================
VIRTUAL INTERRUPT HANDLING
==========================
Because accesses to the PSR are so slow, and to disable interrupts we have to access it twice (once
to read and once to write), we don't actually disable interrupts at all if we don't have to. What
we do instead is use the ICC2 condition code flags to note virtual disablement, such that if we
then do take an interrupt, we note the flag, really disable interrupts, set another flag and resume
execution at the point the interrupt happened. Setting condition flags as a side effect of an
arithmetic or logical instruction is really fast. This use of the ICC2 only occurs within the
kernel - it does not affect userspace.
The flags we use are:
(*) CCR.ICC2.Z [Zero flag]
Set to virtually disable interrupts, clear when interrupts are virtually enabled. Can be
modified by logical instructions without affecting the Carry flag.
(*) CCR.ICC2.C [Carry flag]
Clear to indicate hardware interrupts are really disabled, set otherwise.
What happens is this:
(1) Normal kernel-mode operation.
ICC2.Z is 0, ICC2.C is 1.
(2) An interrupt occurs. The exception prologue examines ICC2.Z and determines that nothing needs
doing. This is done simply with an unlikely BEQ instruction.
(3) The interrupts are disabled (local_irq_disable)
ICC2.Z is set to 1.
(4) If interrupts were then re-enabled (local_irq_enable):
ICC2.Z would be set to 0.
A TIHI #2 instruction (trap #2 if condition HI - Z==0 && C==0) would be used to trap if
interrupts were now virtually enabled, but physically disabled - which they're not, so the
trap isn't taken. The kernel would then be back to state (1).
(5) An interrupt occurs. The exception prologue examines ICC2.Z and determines that the interrupt
shouldn't actually have happened. It jumps aside, and there disables interrupts by setting
PSR.PIL to 14 and then clears ICC2.C.
(6) If interrupts were then saved and disabled again (local_irq_save):
ICC2.Z would be shifted into the save variable and masked off (giving a 1).
ICC2.Z would then be set to 1 (thus unchanged), and ICC2.C would be unaffected (ie: 0).
(7) If interrupts were then restored from state (6) (local_irq_restore):
ICC2.Z would be set to indicate the result of XOR'ing the saved value (ie: 1) with 1, which
gives a result of 0 - thus leaving ICC2.Z set.
ICC2.C would remain unaffected (ie: 0).
A TIHI #2 instruction would be used to again assay the current state, but this would do
nothing as Z==1.
(8) If interrupts were then enabled (local_irq_enable):
ICC2.Z would be cleared. ICC2.C would be left unaffected. Both flags would now be 0.
A TIHI #2 instruction again issued to assay the current state would then trap as both Z==0
[interrupts virtually enabled] and C==0 [interrupts really disabled] would then be true.
(9) The trap #2 handler would simply enable hardware interrupts (set PSR.PIL to 0), set ICC2.C to
1 and return.
(10) Immediately upon returning, the pending interrupt would be taken.
(11) The interrupt handler would take the path of actually processing the interrupt (ICC2.Z is
clear, BEQ fails as per step (2)).
(12) The interrupt handler would then set ICC2.C to 1 since hardware interrupts are definitely
enabled - or else the kernel wouldn't be here.
(13) On return from the interrupt handler, things would be back to state (1).
This trap (#2) is only available in kernel mode. In user mode it will result in SIGILL.
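To make the flag manipulation concrete, here is a sketch of the virtual disable/enable primitives implied by the scheme above (consistent with the description, but not necessarily identical to the real FRV headers):

	/* sketch: set ICC2.Z (virtually disable interrupts), leave ICC2.C alone */
	#define example_local_irq_disable()				\
	do {								\
		asm volatile("	andcc	gr0,gr0,gr0,icc2	\n"	\
			     : : : "memory", "icc2");			\
	} while (0)

	/* sketch: clear ICC2.Z, then trap #2 if Z==0 && C==0, i.e. an
	 * interrupt arrived while we were only virtually disabled */
	#define example_local_irq_enable()				\
	do {								\
		asm volatile("	oricc	gr0,#1,gr0,icc2		\n"	\
			     "	tihi	icc2,gr0,#2		\n"	\
			     : : : "memory", "icc2");			\
	} while (0)

The entry.S and break.S changes later in this commit are the other half of the scheme: the real interrupt prologue and the trap #2 handler that honour these flags.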


@ -36,6 +36,10 @@ Module Parameters
(default is 1) (default is 1)
Use 'init=0' to bypass initializing the chip. Use 'init=0' to bypass initializing the chip.
Try this if your computer crashes when you load the module. Try this if your computer crashes when you load the module.
* reset: int
(default is 0)
The driver used to reset the chip on load, but no longer does so. Use
'reset=1' to restore the old behavior. Please report it if you need to do this.
Description Description
----------- -----------


@ -1133,6 +1133,8 @@ running once the system is up.
Mechanism 1. Mechanism 1.
conf2 [IA-32] Force use of PCI Configuration conf2 [IA-32] Force use of PCI Configuration
Mechanism 2. Mechanism 2.
nommconf [IA-32,X86_64] Disable use of MMCONFIG for PCI
Configuration
nosort [IA-32] Don't sort PCI devices according to nosort [IA-32] Don't sort PCI devices according to
order given by the PCI BIOS. This sorting is order given by the PCI BIOS. This sorting is
done to get a device order compatible with done to get a device order compatible with
@ -1636,6 +1638,9 @@ running once the system is up.
Format: Format:
<irq>,<irq_mask>,<io>,<full_duplex>,<do_sound>,<lockup_hack>[,<irq2>[,<irq3>[,<irq4>]]] <irq>,<irq_mask>,<io>,<full_duplex>,<do_sound>,<lockup_hack>[,<irq2>[,<irq3>[,<irq4>]]]
norandmaps Don't use address space randomization
Equivalent to echo 0 > /proc/sys/kernel/randomize_va_space
______________________________________________________________________ ______________________________________________________________________
Changelog: Changelog:


@ -136,17 +136,20 @@ Kprobes, jprobes, and return probes are implemented on the following
architectures: architectures:
- i386 - i386
- x86_64 (AMD-64, E64MT) - x86_64 (AMD-64, EM64T)
- ppc64 - ppc64
- ia64 (Support for probes on certain instruction types is still in progress.) - ia64 (Does not support probes on instruction slot1.)
- sparc64 (Return probes not yet implemented.) - sparc64 (Return probes not yet implemented.)
3. Configuring Kprobes 3. Configuring Kprobes
When configuring the kernel using make menuconfig/xconfig/oldconfig, When configuring the kernel using make menuconfig/xconfig/oldconfig,
ensure that CONFIG_KPROBES is set to "y". Under "Kernel hacking", ensure that CONFIG_KPROBES is set to "y". Under "Instrumentation
look for "Kprobes". You may have to enable "Kernel debugging" Support", look for "Kprobes".
(CONFIG_DEBUG_KERNEL) before you can enable Kprobes.
So that you can load and unload Kprobes-based instrumentation modules,
make sure "Loadable module support" (CONFIG_MODULES) and "Module
unloading" (CONFIG_MODULE_UNLOAD) are set to "y".
You may also want to ensure that CONFIG_KALLSYMS and perhaps even You may also want to ensure that CONFIG_KALLSYMS and perhaps even
CONFIG_KALLSYMS_ALL are set to "y", since kallsyms_lookup_name() CONFIG_KALLSYMS_ALL are set to "y", since kallsyms_lookup_name()
@ -262,18 +265,18 @@ at any time after the probe has been registered.
5. Kprobes Features and Limitations 5. Kprobes Features and Limitations
As of Linux v2.6.12, Kprobes allows multiple probes at the same Kprobes allows multiple probes at the same address. Currently,
address. Currently, however, there cannot be multiple jprobes on however, there cannot be multiple jprobes on the same function at
the same function at the same time. the same time.
In general, you can install a probe anywhere in the kernel. In general, you can install a probe anywhere in the kernel.
In particular, you can probe interrupt handlers. Known exceptions In particular, you can probe interrupt handlers. Known exceptions
are discussed in this section. are discussed in this section.
For obvious reasons, it's a bad idea to install a probe in The register_*probe functions will return -EINVAL if you attempt
the code that implements Kprobes (mostly kernel/kprobes.c and to install a probe in the code that implements Kprobes (mostly
arch/*/kernel/kprobes.c). A patch in the v2.6.13 timeframe instructs kernel/kprobes.c and arch/*/kernel/kprobes.c, but also functions such
Kprobes to reject such requests. as do_page_fault and notifier_call_chain).
If you install a probe in an inline-able function, Kprobes makes If you install a probe in an inline-able function, Kprobes makes
no attempt to chase down all inline instances of the function and no attempt to chase down all inline instances of the function and
@ -290,18 +293,14 @@ from the accidental ones. Don't drink and probe.
Kprobes makes no attempt to prevent probe handlers from stepping on Kprobes makes no attempt to prevent probe handlers from stepping on
each other -- e.g., probing printk() and then calling printk() from a each other -- e.g., probing printk() and then calling printk() from a
probe handler. As of Linux v2.6.12, if a probe handler hits a probe, probe handler. If a probe handler hits a probe, that second probe's
that second probe's handlers won't be run in that instance. handlers won't be run in that instance, and the kprobe.nmissed member
of the second probe will be incremented.
In Linux v2.6.12 and previous versions, Kprobes' data structures are As of Linux v2.6.15-rc1, multiple handlers (or multiple instances of
protected by a single lock that is held during probe registration and the same handler) may run concurrently on different CPUs.
unregistration and while handlers are run. Thus, no two handlers
can run simultaneously. To improve scalability on SMP systems,
this restriction will probably be removed soon, in which case
multiple handlers (or multiple instances of the same handler) may
run concurrently on different CPUs. Code your handlers accordingly.
Kprobes does not use semaphores or allocate memory except during Kprobes does not use mutexes or allocate memory except during
registration and unregistration. registration and unregistration.
Probe handlers are run with preemption disabled. Depending on the Probe handlers are run with preemption disabled. Depending on the
@ -316,11 +315,18 @@ address instead of the real return address for kretprobed functions.
(As far as we can tell, __builtin_return_address() is used only (As far as we can tell, __builtin_return_address() is used only
for instrumentation and error reporting.) for instrumentation and error reporting.)
If the number of times a function is called does not match the If the number of times a function is called does not match the number
number of times it returns, registering a return probe on that of times it returns, registering a return probe on that function may
function may produce undesirable results. We have the do_exit() produce undesirable results. We have the do_exit() case covered.
and do_execve() cases covered. do_fork() is not an issue. We're do_execve() and do_fork() are not an issue. We're unaware of other
unaware of other specific cases where this could be a problem. specific cases where this could be a problem.
If, upon entry to or exit from a function, the CPU is running on
a stack other than that of the current task, registering a return
probe on that function may produce undesirable results. For this
reason, Kprobes doesn't support return probes (or kprobes or jprobes)
on the x86_64 version of __switch_to(); the registration functions
return -EINVAL.
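As an illustration of catching these registration-time rejections, here is a minimal module sketch (the handler body and the probed symbol are placeholders, not part of the document):

	#include <linux/kernel.h>
	#include <linux/module.h>
	#include <linux/kprobes.h>
	#include <linux/kallsyms.h>

	static int example_pre_handler(struct kprobe *p, struct pt_regs *regs)
	{
		return 0;	/* let the probed instruction run normally */
	}

	static struct kprobe example_kp = {
		.pre_handler = example_pre_handler,
	};

	static int __init example_init(void)
	{
		int ret;

		/* placeholder target symbol */
		example_kp.addr = (kprobe_opcode_t *)kallsyms_lookup_name("do_fork");
		if (!example_kp.addr)
			return -ENOENT;

		ret = register_kprobe(&example_kp);
		if (ret < 0)	/* e.g. -EINVAL for the cases described above */
			printk("register_kprobe failed, returned %d\n", ret);
		return ret;
	}

	static void __exit example_exit(void)
	{
		unregister_kprobe(&example_kp);
	}

	module_init(example_init);
	module_exit(example_exit);
	MODULE_LICENSE("GPL");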
6. Probe Overhead 6. Probe Overhead
@ -347,14 +353,12 @@ k = 0.77 usec; j = 1.31; r = 1.26; kr = 1.45; jr = 1.99
7. TODO 7. TODO
a. SystemTap (http://sourceware.org/systemtap): Work in progress a. SystemTap (http://sourceware.org/systemtap): Provides a simplified
to provide a simplified programming interface for probe-based programming interface for probe-based instrumentation. Try it out.
instrumentation. b. Kernel return probes for sparc64.
b. Improved SMP scalability: Currently, work is in progress to handle c. Support for other architectures.
multiple kprobes in parallel. d. User-space probes.
c. Kernel return probes for sparc64. e. Watchpoint probes (which fire on data references).
d. Support for other architectures.
e. User-space probes.
8. Kprobes Example 8. Kprobes Example
@ -411,8 +415,7 @@ int init_module(void)
printk("Couldn't find %s to plant kprobe\n", "do_fork"); printk("Couldn't find %s to plant kprobe\n", "do_fork");
return -1; return -1;
} }
ret = register_kprobe(&kp); if ((ret = register_kprobe(&kp) < 0)) {
if (ret < 0) {
printk("register_kprobe failed, returned %d\n", ret); printk("register_kprobe failed, returned %d\n", ret);
return -1; return -1;
} }


@ -95,11 +95,13 @@ CONFIG_BLK_DEV_IDEDMA_PCI=y
CONFIG_IDEDMA_PCI_AUTO=y CONFIG_IDEDMA_PCI_AUTO=y
CONFIG_BLK_DEV_IDE_AU1XXX=y CONFIG_BLK_DEV_IDE_AU1XXX=y
CONFIG_BLK_DEV_IDE_AU1XXX_MDMA2_DBDMA=y CONFIG_BLK_DEV_IDE_AU1XXX_MDMA2_DBDMA=y
CONFIG_BLK_DEV_IDE_AU1XXX_BURSTABLE_ON=y
CONFIG_BLK_DEV_IDE_AU1XXX_SEQTS_PER_RQ=128 CONFIG_BLK_DEV_IDE_AU1XXX_SEQTS_PER_RQ=128
CONFIG_BLK_DEV_IDEDMA=y CONFIG_BLK_DEV_IDEDMA=y
CONFIG_IDEDMA_AUTO=y CONFIG_IDEDMA_AUTO=y
Also define 'IDE_AU1XXX_BURSTMODE' in 'drivers/ide/mips/au1xxx-ide.c' to enable
burst support on the DBDMA controller.
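A sketch of what that amounts to in the driver source (exact placement within the file may differ):

	/* near the top of drivers/ide/mips/au1xxx-ide.c */
	#define IDE_AU1XXX_BURSTMODE	/* burst support on the DBDMA controller */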
If the used system need the USB support enable the following kernel configs for If the used system need the USB support enable the following kernel configs for
high IDE to USB throughput. high IDE to USB throughput.
@ -115,6 +117,8 @@ CONFIG_BLK_DEV_IDE_AU1XXX_SEQTS_PER_RQ=128
CONFIG_BLK_DEV_IDEDMA=y CONFIG_BLK_DEV_IDEDMA=y
CONFIG_IDEDMA_AUTO=y CONFIG_IDEDMA_AUTO=y
Also undefine 'IDE_AU1XXX_BURSTMODE' in 'drivers/ide/mips/au1xxx-ide.c' to
disable burst support on the DBDMA controller.
ADD NEW HARD DISC TO WHITE OR BLACK LIST ADD NEW HARD DISC TO WHITE OR BLACK LIST
---------------------------------------- ----------------------------------------


@ -1,3 +1,26 @@
1 Release Date : Wed Feb 03 14:31:44 PST 2006 - Sumant Patro <Sumant.Patro@lsil.com>
2 Current Version : 00.00.02.04
3 Older Version : 00.00.02.04
i. Support for 1078 type (ppc IOP) controller, device id : 0x60 added.
During initialization, depending on the device id, the template members
are initialized with function pointers specific to the ppc or
xscale controllers.
-Sumant Patro <Sumant.Patro@lsil.com>
1 Release Date : Fri Feb 03 14:16:25 PST 2006 - Sumant Patro
<Sumant.Patro@lsil.com>
2 Current Version : 00.00.02.04
3 Older Version : 00.00.02.02
i. Register 16 byte CDB capability with scsi midlayer
"Ths patch properly registers the 16 byte command length capability of the
megaraid_sas controlled hardware with the scsi midlayer. All megaraid_sas
hardware supports 16 byte CDB's."
-Joshua Giles <joshua_giles@dell.com>
1 Release Date : Mon Jan 23 14:09:01 PST 2006 - Sumant Patro <Sumant.Patro@lsil.com> 1 Release Date : Mon Jan 23 14:09:01 PST 2006 - Sumant Patro <Sumant.Patro@lsil.com>
2 Current Version : 00.00.02.02 2 Current Version : 00.00.02.02
3 Older Version : 00.00.02.01 3 Older Version : 00.00.02.01


@ -1,7 +1,7 @@
VERSION = 2 VERSION = 2
PATCHLEVEL = 6 PATCHLEVEL = 6
SUBLEVEL = 16 SUBLEVEL = 16
EXTRAVERSION =-rc3 EXTRAVERSION =-rc4
NAME=Sliding Snow Leopard NAME=Sliding Snow Leopard
# *DOCUMENTATION* # *DOCUMENTATION*
@ -106,13 +106,12 @@ KBUILD_OUTPUT := $(shell cd $(KBUILD_OUTPUT) && /bin/pwd)
$(if $(KBUILD_OUTPUT),, \ $(if $(KBUILD_OUTPUT),, \
$(error output directory "$(saved-output)" does not exist)) $(error output directory "$(saved-output)" does not exist))
.PHONY: $(MAKECMDGOALS) cdbuilddir .PHONY: $(MAKECMDGOALS)
$(MAKECMDGOALS) _all: cdbuilddir
cdbuilddir: $(filter-out _all,$(MAKECMDGOALS)) _all:
$(if $(KBUILD_VERBOSE:1=),@)$(MAKE) -C $(KBUILD_OUTPUT) \ $(if $(KBUILD_VERBOSE:1=),@)$(MAKE) -C $(KBUILD_OUTPUT) \
KBUILD_SRC=$(CURDIR) \ KBUILD_SRC=$(CURDIR) \
KBUILD_EXTMOD="$(KBUILD_EXTMOD)" -f $(CURDIR)/Makefile $(MAKECMDGOALS) KBUILD_EXTMOD="$(KBUILD_EXTMOD)" -f $(CURDIR)/Makefile $@
# Leave processing to above invocation of make # Leave processing to above invocation of make
skip-makefile := 1 skip-makefile := 1


@ -111,7 +111,7 @@
CALL(sys_statfs) CALL(sys_statfs)
/* 100 */ CALL(sys_fstatfs) /* 100 */ CALL(sys_fstatfs)
CALL(sys_ni_syscall) CALL(sys_ni_syscall)
CALL(OBSOLETE(sys_socketcall)) CALL(OBSOLETE(ABI(sys_socketcall, sys_oabi_socketcall)))
CALL(sys_syslog) CALL(sys_syslog)
CALL(sys_setitimer) CALL(sys_setitimer)
/* 105 */ CALL(sys_getitimer) /* 105 */ CALL(sys_getitimer)


@ -23,6 +23,7 @@
#include <linux/root_dev.h> #include <linux/root_dev.h>
#include <linux/cpu.h> #include <linux/cpu.h>
#include <linux/interrupt.h> #include <linux/interrupt.h>
#include <linux/smp.h>
#include <asm/cpu.h> #include <asm/cpu.h>
#include <asm/elf.h> #include <asm/elf.h>
@ -771,6 +772,10 @@ void __init setup_arch(char **cmdline_p)
paging_init(&meminfo, mdesc); paging_init(&meminfo, mdesc);
request_standard_resources(&meminfo, mdesc); request_standard_resources(&meminfo, mdesc);
#ifdef CONFIG_SMP
smp_init_cpus();
#endif
cpu_init(); cpu_init();
/* /*


@ -338,7 +338,6 @@ void __init smp_prepare_boot_cpu(void)
per_cpu(cpu_data, cpu).idle = current; per_cpu(cpu_data, cpu).idle = current;
cpu_set(cpu, cpu_possible_map);
cpu_set(cpu, cpu_present_map); cpu_set(cpu, cpu_present_map);
cpu_set(cpu, cpu_online_map); cpu_set(cpu, cpu_online_map);
} }


@ -64,6 +64,7 @@
* sys_connect: * sys_connect:
* sys_sendmsg: * sys_sendmsg:
* sys_sendto: * sys_sendto:
* sys_socketcall:
* *
* struct sockaddr_un loses its padding with EABI. Since the size of the * struct sockaddr_un loses its padding with EABI. Since the size of the
* structure is used as a validation test in unix_mkname(), we need to * structure is used as a validation test in unix_mkname(), we need to
@ -78,6 +79,7 @@
#include <linux/eventpoll.h> #include <linux/eventpoll.h>
#include <linux/sem.h> #include <linux/sem.h>
#include <linux/socket.h> #include <linux/socket.h>
#include <linux/net.h>
#include <asm/ipc.h> #include <asm/ipc.h>
#include <asm/uaccess.h> #include <asm/uaccess.h>
@ -408,3 +410,31 @@ asmlinkage long sys_oabi_sendmsg(int fd, struct msghdr __user *msg, unsigned fla
return sys_sendmsg(fd, msg, flags); return sys_sendmsg(fd, msg, flags);
} }
asmlinkage long sys_oabi_socketcall(int call, unsigned long __user *args)
{
unsigned long r = -EFAULT, a[6];
switch (call) {
case SYS_BIND:
if (copy_from_user(a, args, 3 * sizeof(long)) == 0)
r = sys_oabi_bind(a[0], (struct sockaddr __user *)a[1], a[2]);
break;
case SYS_CONNECT:
if (copy_from_user(a, args, 3 * sizeof(long)) == 0)
r = sys_oabi_connect(a[0], (struct sockaddr __user *)a[1], a[2]);
break;
case SYS_SENDTO:
if (copy_from_user(a, args, 6 * sizeof(long)) == 0)
r = sys_oabi_sendto(a[0], (void __user *)a[1], a[2], a[3],
(struct sockaddr __user *)a[4], a[5]);
break;
case SYS_SENDMSG:
if (copy_from_user(a, args, 3 * sizeof(long)) == 0)
r = sys_oabi_sendmsg(a[0], (struct msghdr __user *)a[1], a[2]);
break;
default:
r = sys_socketcall(call, args);
}
return r;
}


@ -140,6 +140,18 @@ static void __init poke_milo(void)
mb(); mb();
} }
/*
* Initialise the CPU possible map early - this describes the CPUs
* which may be present or become present in the system.
*/
void __init smp_init_cpus(void)
{
unsigned int i, ncores = get_core_count();
for (i = 0; i < ncores; i++)
cpu_set(i, cpu_possible_map);
}
void __init smp_prepare_cpus(unsigned int max_cpus) void __init smp_prepare_cpus(unsigned int max_cpus)
{ {
unsigned int ncores = get_core_count(); unsigned int ncores = get_core_count();
@ -176,14 +188,11 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
max_cpus = ncores; max_cpus = ncores;
/* /*
* Initialise the possible/present maps. * Initialise the present map, which describes the set of CPUs
* cpu_possible_map describes the set of CPUs which may be present * actually populated at the present time.
* cpu_present_map describes the set of CPUs populated
*/ */
for (i = 0; i < max_cpus; i++) { for (i = 0; i < max_cpus; i++)
cpu_set(i, cpu_possible_map);
cpu_set(i, cpu_present_map); cpu_set(i, cpu_present_map);
}
/* /*
* Do we need any more CPUs? If so, then let them know where * Do we need any more CPUs? If so, then let them know where


@ -13,7 +13,6 @@
#include <linux/mm.h> #include <linux/mm.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/config.h> #include <linux/config.h>
#include <linux/init.h>
#include <linux/major.h> #include <linux/major.h>
#include <linux/fs.h> #include <linux/fs.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>


@ -12,7 +12,6 @@
#include <linux/mm.h> #include <linux/mm.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/config.h> #include <linux/config.h>
#include <linux/init.h>
#include <linux/major.h> #include <linux/major.h>
#include <linux/fs.h> #include <linux/fs.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>


@ -27,8 +27,6 @@ static struct flash_platform_data nslu2_flash_data = {
}; };
static struct resource nslu2_flash_resource = { static struct resource nslu2_flash_resource = {
.start = NSLU2_FLASH_BASE,
.end = NSLU2_FLASH_BASE + NSLU2_FLASH_SIZE,
.flags = IORESOURCE_MEM, .flags = IORESOURCE_MEM,
}; };
@ -116,6 +114,10 @@ static void __init nslu2_init(void)
{ {
ixp4xx_sys_init(); ixp4xx_sys_init();
nslu2_flash_resource.start = IXP4XX_EXP_BUS_BASE(0);
nslu2_flash_resource.end =
IXP4XX_EXP_BUS_BASE(0) + ixp4xx_exp_bus_size - 1;
pm_power_off = nslu2_power_off; pm_power_off = nslu2_power_off;
platform_add_devices(nslu2_devices, ARRAY_SIZE(nslu2_devices)); platform_add_devices(nslu2_devices, ARRAY_SIZE(nslu2_devices));


@ -143,6 +143,18 @@ static void __init poke_milo(void)
mb(); mb();
} }
/*
* Initialise the CPU possible map early - this describes the CPUs
* which may be present or become present in the system.
*/
void __init smp_init_cpus(void)
{
unsigned int i, ncores = get_core_count();
for (i = 0; i < ncores; i++)
cpu_set(i, cpu_possible_map);
}
void __init smp_prepare_cpus(unsigned int max_cpus) void __init smp_prepare_cpus(unsigned int max_cpus)
{ {
unsigned int ncores = get_core_count(); unsigned int ncores = get_core_count();
@ -179,14 +191,11 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
local_timer_setup(cpu); local_timer_setup(cpu);
/* /*
* Initialise the possible/present maps. * Initialise the present map, which describes the set of CPUs
* cpu_possible_map describes the set of CPUs which may be present * actually populated at the present time.
* cpu_present_map describes the set of CPUs populated
*/ */
for (i = 0; i < max_cpus; i++) { for (i = 0; i < max_cpus; i++)
cpu_set(i, cpu_possible_map);
cpu_set(i, cpu_present_map); cpu_set(i, cpu_present_map);
}
/* /*
* Do we need any more CPUs? If so, then let them know where * Do we need any more CPUs? If so, then let them know where


@ -38,7 +38,6 @@
#include <linux/pm.h> #include <linux/pm.h>
#include <linux/sched.h> #include <linux/sched.h>
#include <linux/proc_fs.h> #include <linux/proc_fs.h>
#include <linux/pm.h>
#include <linux/interrupt.h> #include <linux/interrupt.h>
#include <asm/io.h> #include <asm/io.h>


@ -25,6 +25,10 @@ config GENERIC_HARDIRQS
bool bool
default n default n
config TIME_LOW_RES
bool
default y
mainmenu "Fujitsu FR-V Kernel Configuration" mainmenu "Fujitsu FR-V Kernel Configuration"
source "init/Kconfig" source "init/Kconfig"


@ -81,7 +81,7 @@ endif
# - reserve CC3 for use with atomic ops # - reserve CC3 for use with atomic ops
# - all the extra registers are dealt with only at context switch time # - all the extra registers are dealt with only at context switch time
CFLAGS += -mno-fdpic -mgpr-32 -msoft-float -mno-media CFLAGS += -mno-fdpic -mgpr-32 -msoft-float -mno-media
CFLAGS += -ffixed-fcc3 -ffixed-cc3 -ffixed-gr15 CFLAGS += -ffixed-fcc3 -ffixed-cc3 -ffixed-gr15 -ffixed-icc2
AFLAGS += -mno-fdpic AFLAGS += -mno-fdpic
ASFLAGS += -mno-fdpic ASFLAGS += -mno-fdpic


@ -200,12 +200,20 @@ __break_step:
movsg bpcsr,gr2 movsg bpcsr,gr2
sethi.p %hi(__entry_kernel_external_interrupt),gr3 sethi.p %hi(__entry_kernel_external_interrupt),gr3
setlo %lo(__entry_kernel_external_interrupt),gr3 setlo %lo(__entry_kernel_external_interrupt),gr3
subcc gr2,gr3,gr0,icc0 subcc.p gr2,gr3,gr0,icc0
sethi %hi(__entry_uspace_external_interrupt),gr3
setlo.p %lo(__entry_uspace_external_interrupt),gr3
beq icc0,#2,__break_step_kernel_external_interrupt beq icc0,#2,__break_step_kernel_external_interrupt
sethi.p %hi(__entry_uspace_external_interrupt),gr3 subcc.p gr2,gr3,gr0,icc0
setlo %lo(__entry_uspace_external_interrupt),gr3 sethi %hi(__entry_kernel_external_interrupt_virtually_disabled),gr3
subcc gr2,gr3,gr0,icc0 setlo.p %lo(__entry_kernel_external_interrupt_virtually_disabled),gr3
beq icc0,#2,__break_step_uspace_external_interrupt beq icc0,#2,__break_step_uspace_external_interrupt
subcc.p gr2,gr3,gr0,icc0
sethi %hi(__entry_kernel_external_interrupt_virtual_reenable),gr3
setlo.p %lo(__entry_kernel_external_interrupt_virtual_reenable),gr3
beq icc0,#2,__break_step_kernel_external_interrupt_virtually_disabled
subcc gr2,gr3,gr0,icc0
beq icc0,#2,__break_step_kernel_external_interrupt_virtual_reenable
LEDS 0x2007,gr2 LEDS 0x2007,gr2
@ -254,6 +262,9 @@ __break_step_kernel_softprog_interrupt:
# step through an external interrupt from kernel mode # step through an external interrupt from kernel mode
.globl __break_step_kernel_external_interrupt .globl __break_step_kernel_external_interrupt
__break_step_kernel_external_interrupt: __break_step_kernel_external_interrupt:
# deal with virtual interrupt disablement
beq icc2,#0,__break_step_kernel_external_interrupt_virtually_disabled
sethi.p %hi(__entry_kernel_external_interrupt_reentry),gr3 sethi.p %hi(__entry_kernel_external_interrupt_reentry),gr3
setlo %lo(__entry_kernel_external_interrupt_reentry),gr3 setlo %lo(__entry_kernel_external_interrupt_reentry),gr3
@ -294,6 +305,64 @@ __break_return_as_kernel_prologue:
#endif #endif
rett #1 rett #1
# we single-stepped into an interrupt handler whilst interrupts were merely virtually disabled
# need to really disable interrupts, set flag, fix up and return
__break_step_kernel_external_interrupt_virtually_disabled:
movsg psr,gr2
andi gr2,#~PSR_PIL,gr2
ori gr2,#PSR_PIL_14,gr2 /* debugging interrupts only */
movgs gr2,psr
ldi @(gr31,#REG_CCR),gr3
movgs gr3,ccr
subcc.p gr0,gr0,gr0,icc2 /* leave Z set, clear C */
# exceptions must've been enabled and we must've been in supervisor mode
setlos BPSR_BET|BPSR_BS,gr3
movgs gr3,bpsr
# return to where the interrupt happened
movsg pcsr,gr2
movgs gr2,bpcsr
lddi.p @(gr31,#REG_GR(2)),gr2
xor gr31,gr31,gr31
movgs gr0,brr
#ifdef CONFIG_MMU
movsg scr3,gr31
#endif
rett #1
# we stepped through into the virtual interrupt reenablement trap
#
# we also want to single step anyway, but after fixing up so that we get an event on the
# instruction after the broken-into exception returns
.globl __break_step_kernel_external_interrupt_virtual_reenable
__break_step_kernel_external_interrupt_virtual_reenable:
movsg psr,gr2
andi gr2,#~PSR_PIL,gr2
movgs gr2,psr
ldi @(gr31,#REG_CCR),gr3
movgs gr3,ccr
subicc gr0,#1,gr0,icc2 /* clear Z, set C */
# save the adjusted ICC2
movsg ccr,gr3
sti gr3,@(gr31,#REG_CCR)
# exceptions must've been enabled and we must've been in supervisor mode
setlos BPSR_BET|BPSR_BS,gr3
movgs gr3,bpsr
# return to where the trap happened
movsg pcsr,gr2
movgs gr2,bpcsr
# and then process the single step
bra __break_continue
# step through an internal exception from uspace mode # step through an internal exception from uspace mode
.globl __break_step_uspace_softprog_interrupt .globl __break_step_uspace_softprog_interrupt
__break_step_uspace_softprog_interrupt: __break_step_uspace_softprog_interrupt:


@ -116,6 +116,8 @@ __break_kerneltrap_fixup_table:
.long __break_step_uspace_external_interrupt .long __break_step_uspace_external_interrupt
.section .trap.kernel .section .trap.kernel
.org \tbr_tt .org \tbr_tt
# deal with virtual interrupt disablement
beq icc2,#0,__entry_kernel_external_interrupt_virtually_disabled
bra __entry_kernel_external_interrupt bra __entry_kernel_external_interrupt
.section .trap.fixup.kernel .section .trap.fixup.kernel
.org \tbr_tt >> 2 .org \tbr_tt >> 2
@ -259,25 +261,52 @@ __trap_fixup_kernel_data_tlb_miss:
.org TBR_TT_TRAP0 .org TBR_TT_TRAP0
.rept 127 .rept 127
bra __entry_uspace_softprog_interrupt bra __entry_uspace_softprog_interrupt
bra __break_step_uspace_softprog_interrupt .long 0,0,0
.long 0,0
.endr .endr
.org TBR_TT_BREAK .org TBR_TT_BREAK
bra __entry_break bra __entry_break
.long 0,0,0 .long 0,0,0
.section .trap.fixup.user
.org TBR_TT_TRAP0 >> 2
.rept 127
.long __break_step_uspace_softprog_interrupt
.endr
.org TBR_TT_BREAK >> 2
.long 0
# miscellaneous kernel mode entry points # miscellaneous kernel mode entry points
.section .trap.kernel .section .trap.kernel
.org TBR_TT_TRAP0 .org TBR_TT_TRAP0
.rept 127
bra __entry_kernel_softprog_interrupt bra __entry_kernel_softprog_interrupt
bra __break_step_kernel_softprog_interrupt .org TBR_TT_TRAP1
.long 0,0 bra __entry_kernel_softprog_interrupt
# trap #2 in kernel - reenable interrupts
.org TBR_TT_TRAP2
bra __entry_kernel_external_interrupt_virtual_reenable
# miscellaneous kernel traps
.org TBR_TT_TRAP3
.rept 124
bra __entry_kernel_softprog_interrupt
.long 0,0,0
.endr .endr
.org TBR_TT_BREAK .org TBR_TT_BREAK
bra __entry_break bra __entry_break
.long 0,0,0 .long 0,0,0
.section .trap.fixup.kernel
.org TBR_TT_TRAP0 >> 2
.long __break_step_kernel_softprog_interrupt
.long __break_step_kernel_softprog_interrupt
.long __break_step_kernel_external_interrupt_virtual_reenable
.rept 124
.long __break_step_kernel_softprog_interrupt
.endr
.org TBR_TT_BREAK >> 2
.long 0
# miscellaneous debug mode entry points # miscellaneous debug mode entry points
.section .trap.break .section .trap.break
.org TBR_TT_BREAK .org TBR_TT_BREAK


@ -141,7 +141,10 @@ __entry_uspace_external_interrupt_reentry:
movsg gner0,gr4 movsg gner0,gr4
movsg gner1,gr5 movsg gner1,gr5
stdi gr4,@(gr28,#REG_GNER0) stdi.p gr4,@(gr28,#REG_GNER0)
# interrupts start off fully disabled in the interrupt handler
subcc gr0,gr0,gr0,icc2 /* set Z and clear C */
# set up kernel global registers # set up kernel global registers
sethi.p %hi(__kernel_current_task),gr5 sethi.p %hi(__kernel_current_task),gr5
@ -193,9 +196,8 @@ __entry_uspace_external_interrupt_reentry:
.type __entry_kernel_external_interrupt,@function .type __entry_kernel_external_interrupt,@function
__entry_kernel_external_interrupt: __entry_kernel_external_interrupt:
LEDS 0x6210 LEDS 0x6210
// sub sp,gr15,gr31
sub sp,gr15,gr31 // LEDS32
LEDS32
# set up the stack pointer # set up the stack pointer
or.p sp,gr0,gr30 or.p sp,gr0,gr30
@ -231,7 +233,10 @@ __entry_kernel_external_interrupt_reentry:
stdi gr24,@(gr28,#REG_GR(24)) stdi gr24,@(gr28,#REG_GR(24))
stdi gr26,@(gr28,#REG_GR(26)) stdi gr26,@(gr28,#REG_GR(26))
sti gr29,@(gr28,#REG_GR(29)) sti gr29,@(gr28,#REG_GR(29))
stdi gr30,@(gr28,#REG_GR(30)) stdi.p gr30,@(gr28,#REG_GR(30))
# note virtual interrupts will be fully enabled upon return
subicc gr0,#1,gr0,icc2 /* clear Z, set C */
movsg tbr ,gr20 movsg tbr ,gr20
movsg psr ,gr22 movsg psr ,gr22
@ -267,7 +272,10 @@ __entry_kernel_external_interrupt_reentry:
movsg gner0,gr4 movsg gner0,gr4
movsg gner1,gr5 movsg gner1,gr5
stdi gr4,@(gr28,#REG_GNER0) stdi.p gr4,@(gr28,#REG_GNER0)
# interrupts start off fully disabled in the interrupt handler
subcc gr0,gr0,gr0,icc2 /* set Z and clear C */
# set the return address # set the return address
sethi.p %hi(__entry_return_from_kernel_interrupt),gr4 sethi.p %hi(__entry_return_from_kernel_interrupt),gr4
@ -291,6 +299,45 @@ __entry_kernel_external_interrupt_reentry:
.size __entry_kernel_external_interrupt,.-__entry_kernel_external_interrupt .size __entry_kernel_external_interrupt,.-__entry_kernel_external_interrupt
###############################################################################
#
# deal with interrupts that were actually virtually disabled
# - we need to really disable them, flag the fact and return immediately
# - if you change this, you must alter break.S also
#
###############################################################################
.balign L1_CACHE_BYTES
.globl __entry_kernel_external_interrupt_virtually_disabled
.type __entry_kernel_external_interrupt_virtually_disabled,@function
__entry_kernel_external_interrupt_virtually_disabled:
movsg psr,gr30
andi gr30,#~PSR_PIL,gr30
ori gr30,#PSR_PIL_14,gr30 ; debugging interrupts only
movgs gr30,psr
subcc gr0,gr0,gr0,icc2 ; leave Z set, clear C
rett #0
.size __entry_kernel_external_interrupt_virtually_disabled,.-__entry_kernel_external_interrupt_virtually_disabled
###############################################################################
#
# deal with re-enablement of interrupts that were pending when virtually re-enabled
# - set ICC2.C, re-enable the real interrupts and return
# - we can clear ICC2.Z because we shouldn't be here if it's not 0 [due to TIHI]
# - if you change this, you must alter break.S also
#
###############################################################################
.balign L1_CACHE_BYTES
.globl __entry_kernel_external_interrupt_virtual_reenable
.type __entry_kernel_external_interrupt_virtual_reenable,@function
__entry_kernel_external_interrupt_virtual_reenable:
movsg psr,gr30
andi gr30,#~PSR_PIL,gr30 ; re-enable interrupts
movgs gr30,psr
subicc gr0,#1,gr0,icc2 ; clear Z, set C
rett #0
.size __entry_kernel_external_interrupt_virtual_reenable,.-__entry_kernel_external_interrupt_virtual_reenable
############################################################################### ###############################################################################
# #
@ -335,6 +382,7 @@ __entry_uspace_softprog_interrupt_reentry:
sethi.p %hi(__entry_return_from_user_exception),gr23 sethi.p %hi(__entry_return_from_user_exception),gr23
setlo %lo(__entry_return_from_user_exception),gr23 setlo %lo(__entry_return_from_user_exception),gr23
bra __entry_common bra __entry_common
.size __entry_uspace_softprog_interrupt,.-__entry_uspace_softprog_interrupt .size __entry_uspace_softprog_interrupt,.-__entry_uspace_softprog_interrupt
@ -495,7 +543,10 @@ __entry_common:
movsg gner0,gr4 movsg gner0,gr4
movsg gner1,gr5 movsg gner1,gr5
stdi gr4,@(gr28,#REG_GNER0) stdi.p gr4,@(gr28,#REG_GNER0)
# set up virtual interrupt disablement
subicc gr0,#1,gr0,icc2 /* clear Z flag, set C flag */
# set up kernel global registers # set up kernel global registers
sethi.p %hi(__kernel_current_task),gr5 sethi.p %hi(__kernel_current_task),gr5
@ -1418,11 +1469,27 @@ sys_call_table:
.long sys_add_key .long sys_add_key
.long sys_request_key .long sys_request_key
.long sys_keyctl .long sys_keyctl
.long sys_ni_syscall // sys_vperfctr_open .long sys_ioprio_set
.long sys_ni_syscall // sys_vperfctr_control /* 290 */ .long sys_ioprio_get /* 290 */
.long sys_ni_syscall // sys_vperfctr_unlink .long sys_inotify_init
.long sys_ni_syscall // sys_vperfctr_iresume .long sys_inotify_add_watch
.long sys_ni_syscall // sys_vperfctr_read .long sys_inotify_rm_watch
.long sys_migrate_pages
.long sys_openat /* 295 */
.long sys_mkdirat
.long sys_mknodat
.long sys_fchownat
.long sys_futimesat
.long sys_newfstatat /* 300 */
.long sys_unlinkat
.long sys_renameat
.long sys_linkat
.long sys_symlinkat
.long sys_readlinkat /* 305 */
.long sys_fchmodat
.long sys_faccessat
.long sys_pselect6
.long sys_ppoll
syscall_table_size = (. - sys_call_table) syscall_table_size = (. - sys_call_table)


@ -513,6 +513,9 @@ __head_mmu_enabled:
movgs gr0,ccr movgs gr0,ccr
movgs gr0,cccr movgs gr0,cccr
# initialise the virtual interrupt handling
subcc gr0,gr0,gr0,icc2 /* set Z, clear C */
#ifdef CONFIG_MMU #ifdef CONFIG_MMU
movgs gr3,scr2 movgs gr3,scr2
movgs gr3,scr3 movgs gr3,scr3


@ -287,18 +287,11 @@ asmlinkage void do_IRQ(void)
struct irq_source *source; struct irq_source *source;
int level, cpu; int level, cpu;
irq_enter();
level = (__frame->tbr >> 4) & 0xf; level = (__frame->tbr >> 4) & 0xf;
cpu = smp_processor_id(); cpu = smp_processor_id();
#if 0
{
static u32 irqcount;
*(volatile u32 *) 0xe1200004 = ~((irqcount++ << 8) | level);
*(volatile u16 *) 0xffc00100 = (u16) ~0x9999;
mb();
}
#endif
if ((unsigned long) __frame - (unsigned long) (current + 1) < 512) if ((unsigned long) __frame - (unsigned long) (current + 1) < 512)
BUG(); BUG();
@ -308,40 +301,12 @@ asmlinkage void do_IRQ(void)
kstat_this_cpu.irqs[level]++; kstat_this_cpu.irqs[level]++;
irq_enter();
for (source = frv_irq_levels[level].sources; source; source = source->next) for (source = frv_irq_levels[level].sources; source; source = source->next)
source->doirq(source); source->doirq(source);
irq_exit();
__clr_MASK(level); __clr_MASK(level);
/* only process softirqs if we didn't interrupt another interrupt handler */ irq_exit();
if ((__frame->psr & PSR_PIL) == PSR_PIL_0)
if (local_softirq_pending())
do_softirq();
#ifdef CONFIG_PREEMPT
local_irq_disable();
while (--current->preempt_count == 0) {
if (!(__frame->psr & PSR_S) ||
current->need_resched == 0 ||
in_interrupt())
break;
current->preempt_count++;
local_irq_enable();
preempt_schedule();
local_irq_disable();
}
#endif
#if 0
{
*(volatile u16 *) 0xffc00100 = (u16) ~0x6666;
mb();
}
#endif
} /* end do_IRQ() */ } /* end do_IRQ() */


@ -43,15 +43,6 @@ void iounmap(void *addr)
{ {
} }
/*
* __iounmap unmaps nearly everything, so be careful
* it doesn't free currently pointer/page tables anymore but it
* wans't used anyway and might be added later.
*/
void __iounmap(void *addr, unsigned long size)
{
}
/* /*
* Set new cache mode for some kernel address space. * Set new cache mode for some kernel address space.
* The caller must push data for that range itself, if such data may already * The caller must push data for that range itself, if such data may already


@ -33,6 +33,10 @@ config GENERIC_CALIBRATE_DELAY
bool bool
default y default y
config TIME_LOW_RES
bool
default y
config ISA config ISA
bool bool
default y default y


@ -169,7 +169,7 @@ endif
config CPU_H8300H config CPU_H8300H
bool bool
depends on (H8002 || H83007 || H83048 || H83068) depends on (H83002 || H83007 || H83048 || H83068)
default y default y
config CPU_H8S config CPU_H8S

3
arch/i386/boot/.gitignore vendored Normal file

@ -0,0 +1,3 @@
bootsect
bzImage
setup

1
arch/i386/boot/tools/.gitignore vendored Normal file

@ -0,0 +1 @@
build

1
arch/i386/kernel/.gitignore vendored Normal file

@ -0,0 +1 @@
vsyscall.lds


@ -1,4 +1,5 @@
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/init.h> #include <linux/init.h>
#include <asm/processor.h> #include <asm/processor.h>
#include <asm/msr.h> #include <asm/msr.h>


@ -282,6 +282,10 @@ time_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
if (val != CPUFREQ_RESUMECHANGE) if (val != CPUFREQ_RESUMECHANGE)
write_seqlock_irq(&xtime_lock); write_seqlock_irq(&xtime_lock);
if (!ref_freq) { if (!ref_freq) {
if (!freq->old){
ref_freq = freq->new;
goto end;
}
ref_freq = freq->old; ref_freq = freq->old;
loops_per_jiffy_ref = cpu_data[freq->cpu].loops_per_jiffy; loops_per_jiffy_ref = cpu_data[freq->cpu].loops_per_jiffy;
#ifndef CONFIG_SMP #ifndef CONFIG_SMP
@ -307,6 +311,7 @@ time_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
#endif #endif
} }
end:
if (val != CPUFREQ_RESUMECHANGE) if (val != CPUFREQ_RESUMECHANGE)
write_sequnlock_irq(&xtime_lock); write_sequnlock_irq(&xtime_lock);


@ -7,6 +7,21 @@
* for details. * for details.
*/ */
/*
* The caller puts arg2 in %ecx, which gets pushed. The kernel will use
* %ecx itself for arg2. The pushing is because the sysexit instruction
* (found in entry.S) requires that we clobber %ecx with the desired %esp.
* User code might expect that %ecx is unclobbered though, as it would be
* for returning via the iret instruction, so we must push and pop.
*
* The caller puts arg3 in %edx, which the sysexit instruction requires
* for %eip. Thus, exactly as for arg2, we must push and pop.
*
* Arg6 is different. The caller puts arg6 in %ebp. Since the sysenter
* instruction clobbers %esp, the user's %esp won't even survive entry
* into the kernel. We store %esp in %ebp. Code in entry.S must fetch
* arg6 from the stack.
*/
.text .text
.globl __kernel_vsyscall .globl __kernel_vsyscall
.type __kernel_vsyscall,@function .type __kernel_vsyscall,@function


@ -20,7 +20,20 @@ struct frame_head {
} __attribute__((packed)); } __attribute__((packed));
static struct frame_head * static struct frame_head *
dump_backtrace(struct frame_head * head) dump_kernel_backtrace(struct frame_head * head)
{
oprofile_add_trace(head->ret);
/* frame pointers should strictly progress back up the stack
* (towards higher addresses) */
if (head >= head->ebp)
return NULL;
return head->ebp;
}
static struct frame_head *
dump_user_backtrace(struct frame_head * head)
{ {
struct frame_head bufhead[2]; struct frame_head bufhead[2];
@ -105,10 +118,10 @@ x86_backtrace(struct pt_regs * const regs, unsigned int depth)
if (!user_mode_vm(regs)) { if (!user_mode_vm(regs)) {
while (depth-- && valid_kernel_stack(head, regs)) while (depth-- && valid_kernel_stack(head, regs))
head = dump_backtrace(head); head = dump_kernel_backtrace(head);
return; return;
} }
while (depth-- && head) while (depth-- && head)
head = dump_backtrace(head); head = dump_user_backtrace(head);
} }


@ -761,6 +761,59 @@ int acpi_map_cpu2node(acpi_handle handle, int cpu, long physid)
return (0); return (0);
} }
int additional_cpus __initdata = -1;
static __init int setup_additional_cpus(char *s)
{
if (s)
additional_cpus = simple_strtol(s, NULL, 0);
return 0;
}
early_param("additional_cpus", setup_additional_cpus);
/*
* cpu_possible_map should be static; it cannot change as CPUs
* are onlined or offlined. The reason is that per-cpu data structures
* are allocated by some modules at init time, and they don't expect to
* do this dynamically on cpu arrival/departure.
* cpu_present_map, on the other hand, can change dynamically.
* When cpu_hotplug is not compiled in, we resort to the current
* behaviour, which is cpu_possible == cpu_present.
* - Ashok Raj
*
* Three ways to find out the number of additional hotplug CPUs:
* - If the BIOS specified disabled CPUs in ACPI/mptables use that.
* - The user can overwrite it with additional_cpus=NUM
* - Otherwise don't reserve additional CPUs.
*/
__init void prefill_possible_map(void)
{
int i;
int possible, disabled_cpus;
disabled_cpus = total_cpus - available_cpus;
if (additional_cpus == -1) {
if (disabled_cpus > 0)
additional_cpus = disabled_cpus;
else
additional_cpus = 0;
}
possible = available_cpus + additional_cpus;
if (possible > NR_CPUS)
possible = NR_CPUS;
printk(KERN_INFO "SMP: Allowing %d CPUs, %d hotplug CPUs\n",
possible, max((possible - available_cpus), 0));
for (i = 0; i < possible; i++)
cpu_set(i, cpu_possible_map);
}
int acpi_map_lsapic(acpi_handle handle, int *pcpu) int acpi_map_lsapic(acpi_handle handle, int *pcpu)
{ {
struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };


@ -569,7 +569,9 @@ GLOBAL_ENTRY(ia64_trace_syscall)
.mem.offset 0,0; st8.spill [r2]=r8 // store return value in slot for r8 .mem.offset 0,0; st8.spill [r2]=r8 // store return value in slot for r8
.mem.offset 8,0; st8.spill [r3]=r10 // clear error indication in slot for r10 .mem.offset 8,0; st8.spill [r3]=r10 // clear error indication in slot for r10
br.call.sptk.many rp=syscall_trace_leave // give parent a chance to catch return value br.call.sptk.many rp=syscall_trace_leave // give parent a chance to catch return value
.ret3: br.cond.sptk .work_pending_syscall_end .ret3:
(pUStk) cmp.eq.unc p6,p0=r0,r0 // p6 <- pUStk
br.cond.sptk .work_pending_syscall_end
strace_error: strace_error:
ld8 r3=[r2] // load pt_regs.r8 ld8 r3=[r2] // load pt_regs.r8


@ -10,23 +10,8 @@
#include <linux/string.h> #include <linux/string.h>
EXPORT_SYMBOL(memset); EXPORT_SYMBOL(memset);
EXPORT_SYMBOL(memchr);
EXPORT_SYMBOL(memcmp);
EXPORT_SYMBOL(memcpy); EXPORT_SYMBOL(memcpy);
EXPORT_SYMBOL(memmove);
EXPORT_SYMBOL(memscan);
EXPORT_SYMBOL(strcat);
EXPORT_SYMBOL(strchr);
EXPORT_SYMBOL(strcmp);
EXPORT_SYMBOL(strcpy);
EXPORT_SYMBOL(strlen); EXPORT_SYMBOL(strlen);
EXPORT_SYMBOL(strncat);
EXPORT_SYMBOL(strncmp);
EXPORT_SYMBOL(strncpy);
EXPORT_SYMBOL(strnlen);
EXPORT_SYMBOL(strrchr);
EXPORT_SYMBOL(strstr);
EXPORT_SYMBOL(strpbrk);
#include <asm/checksum.h> #include <asm/checksum.h>
EXPORT_SYMBOL(ip_fast_csum); /* hand-coded assembly */ EXPORT_SYMBOL(ip_fast_csum); /* hand-coded assembly */


@ -430,6 +430,7 @@ setup_arch (char **cmdline_p)
if (early_console_setup(*cmdline_p) == 0) if (early_console_setup(*cmdline_p) == 0)
mark_bsp_online(); mark_bsp_online();
parse_early_param();
#ifdef CONFIG_ACPI #ifdef CONFIG_ACPI
/* Initialize the ACPI boot-time table parser */ /* Initialize the ACPI boot-time table parser */
acpi_table_init(); acpi_table_init();
@ -688,6 +689,9 @@ void
setup_per_cpu_areas (void) setup_per_cpu_areas (void)
{ {
/* start_kernel() requires this... */ /* start_kernel() requires this... */
#ifdef CONFIG_ACPI_HOTPLUG_CPU
prefill_possible_map();
#endif
} }
/* /*


@ -129,7 +129,7 @@ DEFINE_PER_CPU(int, cpu_state);
/* Bitmasks of currently online, and possible CPUs */ /* Bitmasks of currently online, and possible CPUs */
cpumask_t cpu_online_map; cpumask_t cpu_online_map;
EXPORT_SYMBOL(cpu_online_map); EXPORT_SYMBOL(cpu_online_map);
cpumask_t cpu_possible_map; cpumask_t cpu_possible_map = CPU_MASK_NONE;
EXPORT_SYMBOL(cpu_possible_map); EXPORT_SYMBOL(cpu_possible_map);
cpumask_t cpu_core_map[NR_CPUS] __cacheline_aligned; cpumask_t cpu_core_map[NR_CPUS] __cacheline_aligned;
@ -506,9 +506,6 @@ smp_build_cpu_map (void)
for (cpu = 0; cpu < NR_CPUS; cpu++) { for (cpu = 0; cpu < NR_CPUS; cpu++) {
ia64_cpu_to_sapicid[cpu] = -1; ia64_cpu_to_sapicid[cpu] = -1;
#ifdef CONFIG_HOTPLUG_CPU
cpu_set(cpu, cpu_possible_map);
#endif
} }
ia64_cpu_to_sapicid[0] = boot_cpu_id; ia64_cpu_to_sapicid[0] = boot_cpu_id;


@ -250,32 +250,27 @@ time_init (void)
set_normalized_timespec(&wall_to_monotonic, -xtime.tv_sec, -xtime.tv_nsec); set_normalized_timespec(&wall_to_monotonic, -xtime.tv_sec, -xtime.tv_nsec);
} }
#define SMALLUSECS 100 /*
* Generic udelay assumes that if preemption is allowed and the thread
* migrates to another CPU, that the ITC values are synchronized across
* all CPUs.
*/
static void
ia64_itc_udelay (unsigned long usecs)
{
unsigned long start = ia64_get_itc();
unsigned long end = start + usecs*local_cpu_data->cyc_per_usec;
while (time_before(ia64_get_itc(), end))
cpu_relax();
}
void (*ia64_udelay)(unsigned long usecs) = &ia64_itc_udelay;
void void
udelay (unsigned long usecs) udelay (unsigned long usecs)
{ {
unsigned long start; (*ia64_udelay)(usecs);
unsigned long cycles;
unsigned long smallusecs;
/*
* Execute the non-preemptible delay loop (because the ITC might
* not be synchronized between CPUS) in relatively short time
* chunks, allowing preemption between the chunks.
*/
while (usecs > 0) {
smallusecs = (usecs > SMALLUSECS) ? SMALLUSECS : usecs;
preempt_disable();
cycles = smallusecs*local_cpu_data->cyc_per_usec;
start = ia64_get_itc();
while (ia64_get_itc() - start < cycles)
cpu_relax();
preempt_enable();
usecs -= smallusecs;
}
} }
EXPORT_SYMBOL(udelay); EXPORT_SYMBOL(udelay);


@ -16,6 +16,7 @@
#include <linux/module.h> /* for EXPORT_SYMBOL */ #include <linux/module.h> /* for EXPORT_SYMBOL */
#include <linux/hardirq.h> #include <linux/hardirq.h>
#include <linux/kprobes.h> #include <linux/kprobes.h>
#include <linux/delay.h> /* for ssleep() */
#include <asm/fpswa.h> #include <asm/fpswa.h>
#include <asm/ia32.h> #include <asm/ia32.h>
@ -116,6 +117,13 @@ die (const char *str, struct pt_regs *regs, long err)
bust_spinlocks(0); bust_spinlocks(0);
die.lock_owner = -1; die.lock_owner = -1;
spin_unlock_irq(&die.lock); spin_unlock_irq(&die.lock);
if (panic_on_oops) {
printk(KERN_EMERG "Fatal exception: panic in 5 seconds\n");
ssleep(5);
panic("Fatal exception");
}
do_exit(SIGSEGV); do_exit(SIGSEGV);
} }


@ -23,6 +23,10 @@
#include "xtalk/hubdev.h" #include "xtalk/hubdev.h"
#include "xtalk/xwidgetdev.h" #include "xtalk/xwidgetdev.h"
extern void sn_init_cpei_timer(void);
extern void register_sn_procfs(void);
static struct list_head sn_sysdata_list; static struct list_head sn_sysdata_list;
/* sysdata list struct */ /* sysdata list struct */
@ -40,12 +44,12 @@ struct brick {
struct slab_info slab_info[MAX_SLABS + 1]; struct slab_info slab_info[MAX_SLABS + 1];
}; };
int sn_ioif_inited = 0; /* SN I/O infrastructure initialized? */ int sn_ioif_inited; /* SN I/O infrastructure initialized? */
struct sn_pcibus_provider *sn_pci_provider[PCIIO_ASIC_MAX_TYPES]; /* indexed by asic type */ struct sn_pcibus_provider *sn_pci_provider[PCIIO_ASIC_MAX_TYPES]; /* indexed by asic type */
static int max_segment_number = 0; /* Default highest segment number */ static int max_segment_number; /* Default highest segment number */
static int max_pcibus_number = 255; /* Default highest pci bus number */ static int max_pcibus_number = 255; /* Default highest pci bus number */
/* /*
* Hooks and struct for unsupported pci providers * Hooks and struct for unsupported pci providers
@ -84,7 +88,6 @@ static inline u64
sal_get_device_dmaflush_list(u64 nasid, u64 widget_num, u64 device_num, sal_get_device_dmaflush_list(u64 nasid, u64 widget_num, u64 device_num,
u64 address) u64 address)
{ {
struct ia64_sal_retval ret_stuff; struct ia64_sal_retval ret_stuff;
ret_stuff.status = 0; ret_stuff.status = 0;
ret_stuff.v0 = 0; ret_stuff.v0 = 0;
@ -94,7 +97,6 @@ sal_get_device_dmaflush_list(u64 nasid, u64 widget_num, u64 device_num,
(u64) nasid, (u64) widget_num, (u64) nasid, (u64) widget_num,
(u64) device_num, (u64) address, 0, 0, 0); (u64) device_num, (u64) address, 0, 0, 0);
return ret_stuff.status; return ret_stuff.status;
} }
/* /*
@ -102,7 +104,6 @@ sal_get_device_dmaflush_list(u64 nasid, u64 widget_num, u64 device_num,
*/ */
static inline u64 sal_get_hubdev_info(u64 handle, u64 address) static inline u64 sal_get_hubdev_info(u64 handle, u64 address)
{ {
struct ia64_sal_retval ret_stuff; struct ia64_sal_retval ret_stuff;
ret_stuff.status = 0; ret_stuff.status = 0;
ret_stuff.v0 = 0; ret_stuff.v0 = 0;
@ -118,7 +119,6 @@ static inline u64 sal_get_hubdev_info(u64 handle, u64 address)
*/ */
static inline u64 sal_get_pcibus_info(u64 segment, u64 busnum, u64 address) static inline u64 sal_get_pcibus_info(u64 segment, u64 busnum, u64 address)
{ {
struct ia64_sal_retval ret_stuff; struct ia64_sal_retval ret_stuff;
ret_stuff.status = 0; ret_stuff.status = 0;
ret_stuff.v0 = 0; ret_stuff.v0 = 0;
@ -215,7 +215,7 @@ static void __init sn_fixup_ionodes(void)
struct hubdev_info *hubdev; struct hubdev_info *hubdev;
u64 status; u64 status;
u64 nasid; u64 nasid;
int i, widget, device; int i, widget, device, size;
/* /*
* Get SGI Specific HUB chipset information. * Get SGI Specific HUB chipset information.
@ -251,48 +251,37 @@ static void __init sn_fixup_ionodes(void)
if (!hubdev->hdi_flush_nasid_list.widget_p) if (!hubdev->hdi_flush_nasid_list.widget_p)
continue; continue;
size = (HUB_WIDGET_ID_MAX + 1) *
sizeof(struct sn_flush_device_kernel *);
hubdev->hdi_flush_nasid_list.widget_p = hubdev->hdi_flush_nasid_list.widget_p =
kmalloc((HUB_WIDGET_ID_MAX + 1) * kzalloc(size, GFP_KERNEL);
sizeof(struct sn_flush_device_kernel *), if (!hubdev->hdi_flush_nasid_list.widget_p)
GFP_KERNEL); BUG();
memset(hubdev->hdi_flush_nasid_list.widget_p, 0x0,
(HUB_WIDGET_ID_MAX + 1) *
sizeof(struct sn_flush_device_kernel *));
for (widget = 0; widget <= HUB_WIDGET_ID_MAX; widget++) { for (widget = 0; widget <= HUB_WIDGET_ID_MAX; widget++) {
sn_flush_device_kernel = kmalloc(DEV_PER_WIDGET * size = DEV_PER_WIDGET *
sizeof(struct sizeof(struct sn_flush_device_kernel);
sn_flush_device_kernel), sn_flush_device_kernel = kzalloc(size, GFP_KERNEL);
GFP_KERNEL);
if (!sn_flush_device_kernel) if (!sn_flush_device_kernel)
BUG(); BUG();
memset(sn_flush_device_kernel, 0x0,
DEV_PER_WIDGET *
sizeof(struct sn_flush_device_kernel));
dev_entry = sn_flush_device_kernel; dev_entry = sn_flush_device_kernel;
for (device = 0; device < DEV_PER_WIDGET; for (device = 0; device < DEV_PER_WIDGET;
device++,dev_entry++) { device++,dev_entry++) {
dev_entry->common = kmalloc(sizeof(struct size = sizeof(struct sn_flush_device_common);
sn_flush_device_common), dev_entry->common = kzalloc(size, GFP_KERNEL);
GFP_KERNEL);
if (!dev_entry->common) if (!dev_entry->common)
BUG(); BUG();
memset(dev_entry->common, 0x0, sizeof(struct
sn_flush_device_common));
if (sn_prom_feature_available( if (sn_prom_feature_available(
PRF_DEVICE_FLUSH_LIST)) PRF_DEVICE_FLUSH_LIST))
status = sal_get_device_dmaflush_list( status = sal_get_device_dmaflush_list(
nasid, nasid, widget, device,
widget, (u64)(dev_entry->common));
device,
(u64)(dev_entry->common));
else else
status = sn_device_fixup_war(nasid, status = sn_device_fixup_war(nasid,
widget, widget, device,
device, dev_entry->common);
dev_entry->common);
if (status != SALRET_OK) if (status != SALRET_OK)
panic("SAL call failed: %s\n", panic("SAL call failed: %s\n",
ia64_sal_strerror(status)); ia64_sal_strerror(status));
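The allocation changes in this hunk follow one pattern: a kmalloc() immediately followed by a memset() to zero is collapsed into a single kzalloc() call. A small illustration of the equivalence (not taken from the patch):

#include <linux/slab.h>
#include <linux/string.h>

/* old style: allocate, then clear by hand */
static void *alloc_cleared_old(size_t len)
{
	void *buf = kmalloc(len, GFP_KERNEL);

	if (buf)
		memset(buf, 0, len);
	return buf;
}

/* new style: kzalloc() returns already-zeroed memory */
static void *alloc_cleared_new(size_t len)
{
	return kzalloc(len, GFP_KERNEL);
}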
@ -383,13 +372,12 @@ void sn_pci_fixup_slot(struct pci_dev *dev)
pci_dev_get(dev); /* for the sysdata pointer */ pci_dev_get(dev); /* for the sysdata pointer */
pcidev_info = kzalloc(sizeof(struct pcidev_info), GFP_KERNEL); pcidev_info = kzalloc(sizeof(struct pcidev_info), GFP_KERNEL);
if (pcidev_info <= 0) if (!pcidev_info)
BUG(); /* Cannot afford to run out of memory */ BUG(); /* Cannot afford to run out of memory */
sn_irq_info = kmalloc(sizeof(struct sn_irq_info), GFP_KERNEL); sn_irq_info = kzalloc(sizeof(struct sn_irq_info), GFP_KERNEL);
if (sn_irq_info <= 0) if (!sn_irq_info)
BUG(); /* Cannot afford to run out of memory */ BUG(); /* Cannot afford to run out of memory */
memset(sn_irq_info, 0, sizeof(struct sn_irq_info));
/* Call to retrieve pci device information needed by kernel. */ /* Call to retrieve pci device information needed by kernel. */
status = sal_get_pcidev_info((u64) segment, (u64) dev->bus->number, status = sal_get_pcidev_info((u64) segment, (u64) dev->bus->number,
@ -482,13 +470,13 @@ void sn_pci_fixup_slot(struct pci_dev *dev)
*/ */
void sn_pci_controller_fixup(int segment, int busnum, struct pci_bus *bus) void sn_pci_controller_fixup(int segment, int busnum, struct pci_bus *bus)
{ {
int status = 0; int status;
int nasid, cnode; int nasid, cnode;
struct pci_controller *controller; struct pci_controller *controller;
struct sn_pci_controller *sn_controller; struct sn_pci_controller *sn_controller;
struct pcibus_bussoft *prom_bussoft_ptr; struct pcibus_bussoft *prom_bussoft_ptr;
struct hubdev_info *hubdev_info; struct hubdev_info *hubdev_info;
void *provider_soft = NULL; void *provider_soft;
struct sn_pcibus_provider *provider; struct sn_pcibus_provider *provider;
status = sal_get_pcibus_info((u64) segment, (u64) busnum, status = sal_get_pcibus_info((u64) segment, (u64) busnum,
@ -535,6 +523,8 @@ void sn_pci_controller_fixup(int segment, int busnum, struct pci_bus *bus)
bus->sysdata = controller; bus->sysdata = controller;
if (provider->bus_fixup) if (provider->bus_fixup)
provider_soft = (*provider->bus_fixup) (prom_bussoft_ptr, controller); provider_soft = (*provider->bus_fixup) (prom_bussoft_ptr, controller);
else
provider_soft = NULL;
if (provider_soft == NULL) { if (provider_soft == NULL) {
/* fixup failed or not applicable */ /* fixup failed or not applicable */
@ -638,13 +628,8 @@ void sn_bus_free_sysdata(void)
static int __init sn_pci_init(void) static int __init sn_pci_init(void)
{ {
int i = 0; int i, j;
int j = 0;
struct pci_dev *pci_dev = NULL; struct pci_dev *pci_dev = NULL;
extern void sn_init_cpei_timer(void);
#ifdef CONFIG_PROC_FS
extern void register_sn_procfs(void);
#endif
if (!ia64_platform_is("sn2") || IS_RUNNING_ON_FAKE_PROM()) if (!ia64_platform_is("sn2") || IS_RUNNING_ON_FAKE_PROM())
return 0; return 0;
@ -700,32 +685,29 @@ static int __init sn_pci_init(void)
*/ */
void hubdev_init_node(nodepda_t * npda, cnodeid_t node) void hubdev_init_node(nodepda_t * npda, cnodeid_t node)
{ {
struct hubdev_info *hubdev_info; struct hubdev_info *hubdev_info;
int size;
pg_data_t *pg;
size = sizeof(struct hubdev_info);
if (node >= num_online_nodes()) /* Headless/memless IO nodes */ if (node >= num_online_nodes()) /* Headless/memless IO nodes */
hubdev_info = pg = NODE_DATA(0);
(struct hubdev_info *)alloc_bootmem_node(NODE_DATA(0),
sizeof(struct
hubdev_info));
else else
hubdev_info = pg = NODE_DATA(node);
(struct hubdev_info *)alloc_bootmem_node(NODE_DATA(node),
sizeof(struct
hubdev_info));
npda->pdinfo = (void *)hubdev_info;
hubdev_info = (struct hubdev_info *)alloc_bootmem_node(pg, size);
npda->pdinfo = (void *)hubdev_info;
} }
geoid_t geoid_t
cnodeid_get_geoid(cnodeid_t cnode) cnodeid_get_geoid(cnodeid_t cnode)
{ {
struct hubdev_info *hubdev; struct hubdev_info *hubdev;
hubdev = (struct hubdev_info *)(NODEPDA(cnode)->pdinfo); hubdev = (struct hubdev_info *)(NODEPDA(cnode)->pdinfo);
return hubdev->hdi_geoid; return hubdev->hdi_geoid;
} }
subsys_initcall(sn_pci_init); subsys_initcall(sn_pci_init);
@ -734,3 +716,4 @@ EXPORT_SYMBOL(sn_pci_unfixup_slot);
EXPORT_SYMBOL(sn_pci_controller_fixup); EXPORT_SYMBOL(sn_pci_controller_fixup);
EXPORT_SYMBOL(sn_bus_store_sysdata); EXPORT_SYMBOL(sn_bus_store_sysdata);
EXPORT_SYMBOL(sn_bus_free_sysdata); EXPORT_SYMBOL(sn_bus_free_sysdata);
EXPORT_SYMBOL(sn_pcidev_info_get);


@ -75,7 +75,7 @@ EXPORT_SYMBOL(sn_rtc_cycles_per_second);
DEFINE_PER_CPU(struct sn_hub_info_s, __sn_hub_info); DEFINE_PER_CPU(struct sn_hub_info_s, __sn_hub_info);
EXPORT_PER_CPU_SYMBOL(__sn_hub_info); EXPORT_PER_CPU_SYMBOL(__sn_hub_info);
DEFINE_PER_CPU(short, __sn_cnodeid_to_nasid[MAX_NUMNODES]); DEFINE_PER_CPU(short, __sn_cnodeid_to_nasid[MAX_COMPACT_NODES]);
EXPORT_PER_CPU_SYMBOL(__sn_cnodeid_to_nasid); EXPORT_PER_CPU_SYMBOL(__sn_cnodeid_to_nasid);
DEFINE_PER_CPU(struct nodepda_s *, __sn_nodepda); DEFINE_PER_CPU(struct nodepda_s *, __sn_nodepda);
@ -317,6 +317,7 @@ struct pcdp_vga_device {
#define PCDP_PCI_TRANS_IOPORT 0x02 #define PCDP_PCI_TRANS_IOPORT 0x02
#define PCDP_PCI_TRANS_MMIO 0x01 #define PCDP_PCI_TRANS_MMIO 0x01
#if defined(CONFIG_VT) && defined(CONFIG_VGA_CONSOLE)
static void static void
sn_scan_pcdp(void) sn_scan_pcdp(void)
{ {
@ -358,6 +359,7 @@ sn_scan_pcdp(void)
break; /* once we find the primary, we're done */ break; /* once we find the primary, we're done */
} }
} }
#endif
static unsigned long sn2_rtc_initial; static unsigned long sn2_rtc_initial;


@ -3,7 +3,7 @@
* License. See the file "COPYING" in the main directory of this archive * License. See the file "COPYING" in the main directory of this archive
* for more details. * for more details.
* *
* Copyright (C) 1999,2001-2004 Silicon Graphics, Inc. All Rights Reserved. * Copyright (C) 1999,2001-2004, 2006 Silicon Graphics, Inc. All Rights Reserved.
* *
* Module to export the system's Firmware Interface Tables, including * Module to export the system's Firmware Interface Tables, including
* PROM revision numbers and banners, in /proc * PROM revision numbers and banners, in /proc
@ -190,7 +190,7 @@ static int
read_version_entry(char *page, char **start, off_t off, int count, int *eof, read_version_entry(char *page, char **start, off_t off, int count, int *eof,
void *data) void *data)
{ {
int len = 0; int len;
/* data holds the NASID of the node */ /* data holds the NASID of the node */
len = dump_version(page, (unsigned long)data); len = dump_version(page, (unsigned long)data);
@ -202,7 +202,7 @@ static int
read_fit_entry(char *page, char **start, off_t off, int count, int *eof, read_fit_entry(char *page, char **start, off_t off, int count, int *eof,
void *data) void *data)
{ {
int len = 0; int len;
/* data holds the NASID of the node */ /* data holds the NASID of the node */
len = dump_fit(page, (unsigned long)data); len = dump_fit(page, (unsigned long)data);
@ -229,13 +229,16 @@ int __init prominfo_init(void)
struct proc_dir_entry *p; struct proc_dir_entry *p;
cnodeid_t cnodeid; cnodeid_t cnodeid;
unsigned long nasid; unsigned long nasid;
int size;
char name[NODE_NAME_LEN]; char name[NODE_NAME_LEN];
if (!ia64_platform_is("sn2")) if (!ia64_platform_is("sn2"))
return 0; return 0;
proc_entries = kmalloc(num_online_nodes() * sizeof(struct proc_dir_entry *), size = num_online_nodes() * sizeof(struct proc_dir_entry *);
GFP_KERNEL); proc_entries = kzalloc(size, GFP_KERNEL);
if (!proc_entries)
return -ENOMEM;
sgi_prominfo_entry = proc_mkdir("sgi_prominfo", NULL); sgi_prominfo_entry = proc_mkdir("sgi_prominfo", NULL);
@ -244,14 +247,12 @@ int __init prominfo_init(void)
sprintf(name, "node%d", cnodeid); sprintf(name, "node%d", cnodeid);
*entp = proc_mkdir(name, sgi_prominfo_entry); *entp = proc_mkdir(name, sgi_prominfo_entry);
nasid = cnodeid_to_nasid(cnodeid); nasid = cnodeid_to_nasid(cnodeid);
p = create_proc_read_entry( p = create_proc_read_entry("fit", 0, *entp, read_fit_entry,
"fit", 0, *entp, read_fit_entry, (void *)nasid);
(void *)nasid);
if (p) if (p)
p->owner = THIS_MODULE; p->owner = THIS_MODULE;
p = create_proc_read_entry( p = create_proc_read_entry("version", 0, *entp,
"version", 0, *entp, read_version_entry, read_version_entry, (void *)nasid);
(void *)nasid);
if (p) if (p)
p->owner = THIS_MODULE; p->owner = THIS_MODULE;
entp++; entp++;
@ -263,7 +264,7 @@ int __init prominfo_init(void)
void __exit prominfo_exit(void) void __exit prominfo_exit(void)
{ {
struct proc_dir_entry **entp; struct proc_dir_entry **entp;
unsigned cnodeid; unsigned int cnodeid;
char name[NODE_NAME_LEN]; char name[NODE_NAME_LEN];
entp = proc_entries; entp = proc_entries;


@ -46,8 +46,14 @@ DECLARE_PER_CPU(struct ptc_stats, ptcstats);
static __cacheline_aligned DEFINE_SPINLOCK(sn2_global_ptc_lock); static __cacheline_aligned DEFINE_SPINLOCK(sn2_global_ptc_lock);
void sn2_ptc_deadlock_recovery(short *, short, short, int, volatile unsigned long *, unsigned long, extern unsigned long
volatile unsigned long *, unsigned long); sn2_ptc_deadlock_recovery_core(volatile unsigned long *, unsigned long,
volatile unsigned long *, unsigned long,
volatile unsigned long *, unsigned long);
void
sn2_ptc_deadlock_recovery(short *, short, short, int,
volatile unsigned long *, unsigned long,
volatile unsigned long *, unsigned long);
/* /*
* Note: some of the following is captured here to make debugging easier * Note: some of the following is captured here to make debugging easier
@ -59,16 +65,6 @@ void sn2_ptc_deadlock_recovery(short *, short, short, int, volatile unsigned lon
#define reset_max_active_on_deadlock() 1 #define reset_max_active_on_deadlock() 1
#define PTC_LOCK(sh1) ((sh1) ? &sn2_global_ptc_lock : &sn_nodepda->ptc_lock) #define PTC_LOCK(sh1) ((sh1) ? &sn2_global_ptc_lock : &sn_nodepda->ptc_lock)
static inline void ptc_lock(int sh1, unsigned long *flagp)
{
spin_lock_irqsave(PTC_LOCK(sh1), *flagp);
}
static inline void ptc_unlock(int sh1, unsigned long flags)
{
spin_unlock_irqrestore(PTC_LOCK(sh1), flags);
}
struct ptc_stats { struct ptc_stats {
unsigned long ptc_l; unsigned long ptc_l;
unsigned long change_rid; unsigned long change_rid;
@ -82,6 +78,8 @@ struct ptc_stats {
unsigned long shub_ptc_flushes_not_my_mm; unsigned long shub_ptc_flushes_not_my_mm;
}; };
#define sn2_ptctest 0
static inline unsigned long wait_piowc(void) static inline unsigned long wait_piowc(void)
{ {
volatile unsigned long *piows; volatile unsigned long *piows;
@ -200,7 +198,7 @@ sn2_global_tlb_purge(struct mm_struct *mm, unsigned long start,
max_active = max_active_pio(shub1); max_active = max_active_pio(shub1);
itc = ia64_get_itc(); itc = ia64_get_itc();
ptc_lock(shub1, &flags); spin_lock_irqsave(PTC_LOCK(shub1), flags);
itc2 = ia64_get_itc(); itc2 = ia64_get_itc();
__get_cpu_var(ptcstats).lock_itc_clocks += itc2 - itc; __get_cpu_var(ptcstats).lock_itc_clocks += itc2 - itc;
@ -258,7 +256,7 @@ sn2_global_tlb_purge(struct mm_struct *mm, unsigned long start,
ia64_srlz_d(); ia64_srlz_d();
} }
ptc_unlock(shub1, flags); spin_unlock_irqrestore(PTC_LOCK(shub1), flags);
preempt_enable(); preempt_enable();
} }
@ -270,11 +268,12 @@ sn2_global_tlb_purge(struct mm_struct *mm, unsigned long start,
* TLB flush transaction. The recovery sequence is somewhat tricky & is * TLB flush transaction. The recovery sequence is somewhat tricky & is
* coded in assembly language. * coded in assembly language.
*/ */
void sn2_ptc_deadlock_recovery(short *nasids, short ib, short ie, int mynasid, volatile unsigned long *ptc0, unsigned long data0,
volatile unsigned long *ptc1, unsigned long data1) void
sn2_ptc_deadlock_recovery(short *nasids, short ib, short ie, int mynasid,
volatile unsigned long *ptc0, unsigned long data0,
volatile unsigned long *ptc1, unsigned long data1)
{ {
extern unsigned long sn2_ptc_deadlock_recovery_core(volatile unsigned long *, unsigned long,
volatile unsigned long *, unsigned long, volatile unsigned long *, unsigned long);
short nasid, i; short nasid, i;
unsigned long *piows, zeroval, n; unsigned long *piows, zeroval, n;


@ -6,11 +6,11 @@
* Copyright (C) 2000-2005 Silicon Graphics, Inc. All rights reserved. * Copyright (C) 2000-2005 Silicon Graphics, Inc. All rights reserved.
*/ */
#include <linux/config.h> #include <linux/config.h>
#include <asm/uaccess.h>
#ifdef CONFIG_PROC_FS #ifdef CONFIG_PROC_FS
#include <linux/proc_fs.h> #include <linux/proc_fs.h>
#include <linux/seq_file.h> #include <linux/seq_file.h>
#include <asm/uaccess.h>
#include <asm/sn/sn_sal.h> #include <asm/sn/sn_sal.h>
static int partition_id_show(struct seq_file *s, void *p) static int partition_id_show(struct seq_file *s, void *p)
@ -90,10 +90,10 @@ static int coherence_id_open(struct inode *inode, struct file *file)
return single_open(file, coherence_id_show, NULL); return single_open(file, coherence_id_show, NULL);
} }
static struct proc_dir_entry *sn_procfs_create_entry( static struct proc_dir_entry
const char *name, struct proc_dir_entry *parent, *sn_procfs_create_entry(const char *name, struct proc_dir_entry *parent,
int (*openfunc)(struct inode *, struct file *), int (*openfunc)(struct inode *, struct file *),
int (*releasefunc)(struct inode *, struct file *)) int (*releasefunc)(struct inode *, struct file *))
{ {
struct proc_dir_entry *e = create_proc_entry(name, 0444, parent); struct proc_dir_entry *e = create_proc_entry(name, 0444, parent);
@ -126,24 +126,24 @@ void register_sn_procfs(void)
return; return;
sn_procfs_create_entry("partition_id", sgi_proc_dir, sn_procfs_create_entry("partition_id", sgi_proc_dir,
partition_id_open, single_release); partition_id_open, single_release);
sn_procfs_create_entry("system_serial_number", sgi_proc_dir, sn_procfs_create_entry("system_serial_number", sgi_proc_dir,
system_serial_number_open, single_release); system_serial_number_open, single_release);
sn_procfs_create_entry("licenseID", sgi_proc_dir, sn_procfs_create_entry("licenseID", sgi_proc_dir,
licenseID_open, single_release); licenseID_open, single_release);
e = sn_procfs_create_entry("sn_force_interrupt", sgi_proc_dir, e = sn_procfs_create_entry("sn_force_interrupt", sgi_proc_dir,
sn_force_interrupt_open, single_release); sn_force_interrupt_open, single_release);
if (e) if (e)
e->proc_fops->write = sn_force_interrupt_write_proc; e->proc_fops->write = sn_force_interrupt_write_proc;
sn_procfs_create_entry("coherence_id", sgi_proc_dir, sn_procfs_create_entry("coherence_id", sgi_proc_dir,
coherence_id_open, single_release); coherence_id_open, single_release);
sn_procfs_create_entry("sn_topology", sgi_proc_dir, sn_procfs_create_entry("sn_topology", sgi_proc_dir,
sn_topology_open, sn_topology_release); sn_topology_open, sn_topology_release);
} }
#endif /* CONFIG_PROC_FS */ #endif /* CONFIG_PROC_FS */


@ -14,6 +14,7 @@
#include <asm/hw_irq.h> #include <asm/hw_irq.h>
#include <asm/system.h> #include <asm/system.h>
#include <asm/timex.h>
#include <asm/sn/leds.h> #include <asm/sn/leds.h>
#include <asm/sn/shub_mmr.h> #include <asm/sn/shub_mmr.h>
@ -28,9 +29,27 @@ static struct time_interpolator sn2_interpolator = {
.source = TIME_SOURCE_MMIO64 .source = TIME_SOURCE_MMIO64
}; };
/*
* sn udelay uses the RTC instead of the ITC because the ITC is not
* synchronized across all CPUs, and the thread may migrate to another CPU
* if preemption is enabled.
*/
static void
ia64_sn_udelay (unsigned long usecs)
{
unsigned long start = rtc_time();
unsigned long end = start +
usecs * sn_rtc_cycles_per_second / 1000000;
while (time_before((unsigned long)rtc_time(), end))
cpu_relax();
}
void __init sn_timer_init(void) void __init sn_timer_init(void)
{ {
sn2_interpolator.frequency = sn_rtc_cycles_per_second; sn2_interpolator.frequency = sn_rtc_cycles_per_second;
sn2_interpolator.addr = RTC_COUNTER_ADDR; sn2_interpolator.addr = RTC_COUNTER_ADDR;
register_time_interpolator(&sn2_interpolator); register_time_interpolator(&sn2_interpolator);
ia64_udelay = &ia64_sn_udelay;
} }


@ -1,7 +1,7 @@
/* /*
* *
* *
* Copyright (c) 2005 Silicon Graphics, Inc. All Rights Reserved. * Copyright (c) 2005, 2006 Silicon Graphics, Inc. All Rights Reserved.
* *
* This program is free software; you can redistribute it and/or modify it * This program is free software; you can redistribute it and/or modify it
* under the terms of version 2 of the GNU General Public License * under the terms of version 2 of the GNU General Public License
@ -22,11 +22,6 @@
* License along with this program; if not, write the Free Software * License along with this program; if not, write the Free Software
* Foundation, Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA. * Foundation, Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
* *
* Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
* Mountain View, CA 94043, or:
*
* http://www.sgi.com
*
* For further information regarding this notice, see: * For further information regarding this notice, see:
* *
* http://oss.sgi.com/projects/GenInfo/NoticeExplan * http://oss.sgi.com/projects/GenInfo/NoticeExplan


@ -284,12 +284,10 @@ struct sn_irq_info *tiocx_irq_alloc(nasid_t nasid, int widget, int irq,
if ((nasid & 1) == 0) if ((nasid & 1) == 0)
return NULL; return NULL;
sn_irq_info = kmalloc(sn_irq_size, GFP_KERNEL); sn_irq_info = kzalloc(sn_irq_size, GFP_KERNEL);
if (sn_irq_info == NULL) if (sn_irq_info == NULL)
return NULL; return NULL;
memset(sn_irq_info, 0x0, sn_irq_size);
status = tiocx_intr_alloc(nasid, widget, __pa(sn_irq_info), irq, status = tiocx_intr_alloc(nasid, widget, __pa(sn_irq_info), irq,
req_nasid, slice); req_nasid, slice);
if (status) { if (status) {


@ -738,7 +738,9 @@ xpc_process_disconnect(struct xpc_channel *ch, unsigned long *irq_flags)
/* make sure all activity has settled down first */ /* make sure all activity has settled down first */
if (atomic_read(&ch->references) > 0) { if (atomic_read(&ch->references) > 0 ||
((ch->flags & XPC_C_CONNECTEDCALLOUT_MADE) &&
!(ch->flags & XPC_C_DISCONNECTINGCALLOUT_MADE))) {
return; return;
} }
DBUG_ON(atomic_read(&ch->kthreads_assigned) != 0); DBUG_ON(atomic_read(&ch->kthreads_assigned) != 0);
@ -775,7 +777,7 @@ xpc_process_disconnect(struct xpc_channel *ch, unsigned long *irq_flags)
/* both sides are disconnected now */ /* both sides are disconnected now */
if (ch->flags & XPC_C_CONNECTCALLOUT) { if (ch->flags & XPC_C_DISCONNECTINGCALLOUT_MADE) {
spin_unlock_irqrestore(&ch->lock, *irq_flags); spin_unlock_irqrestore(&ch->lock, *irq_flags);
xpc_disconnect_callout(ch, xpcDisconnected); xpc_disconnect_callout(ch, xpcDisconnected);
spin_lock_irqsave(&ch->lock, *irq_flags); spin_lock_irqsave(&ch->lock, *irq_flags);
@ -1300,7 +1302,7 @@ xpc_process_msg_IPI(struct xpc_partition *part, int ch_number)
"delivered=%d, partid=%d, channel=%d\n", "delivered=%d, partid=%d, channel=%d\n",
nmsgs_sent, ch->partid, ch->number); nmsgs_sent, ch->partid, ch->number);
if (ch->flags & XPC_C_CONNECTCALLOUT) { if (ch->flags & XPC_C_CONNECTEDCALLOUT_MADE) {
xpc_activate_kthreads(ch, nmsgs_sent); xpc_activate_kthreads(ch, nmsgs_sent);
} }
} }


@ -750,12 +750,16 @@ xpc_daemonize_kthread(void *args)
/* let registerer know that connection has been established */ /* let registerer know that connection has been established */
spin_lock_irqsave(&ch->lock, irq_flags); spin_lock_irqsave(&ch->lock, irq_flags);
if (!(ch->flags & XPC_C_CONNECTCALLOUT)) { if (!(ch->flags & XPC_C_CONNECTEDCALLOUT)) {
ch->flags |= XPC_C_CONNECTCALLOUT; ch->flags |= XPC_C_CONNECTEDCALLOUT;
spin_unlock_irqrestore(&ch->lock, irq_flags); spin_unlock_irqrestore(&ch->lock, irq_flags);
xpc_connected_callout(ch); xpc_connected_callout(ch);
spin_lock_irqsave(&ch->lock, irq_flags);
ch->flags |= XPC_C_CONNECTEDCALLOUT_MADE;
spin_unlock_irqrestore(&ch->lock, irq_flags);
/* /*
* It is possible that while the callout was being * It is possible that while the callout was being
* made that the remote partition sent some messages. * made that the remote partition sent some messages.
@ -777,15 +781,17 @@ xpc_daemonize_kthread(void *args)
if (atomic_dec_return(&ch->kthreads_assigned) == 0) { if (atomic_dec_return(&ch->kthreads_assigned) == 0) {
spin_lock_irqsave(&ch->lock, irq_flags); spin_lock_irqsave(&ch->lock, irq_flags);
if ((ch->flags & XPC_C_CONNECTCALLOUT) && if ((ch->flags & XPC_C_CONNECTEDCALLOUT_MADE) &&
!(ch->flags & XPC_C_DISCONNECTCALLOUT)) { !(ch->flags & XPC_C_DISCONNECTINGCALLOUT)) {
ch->flags |= XPC_C_DISCONNECTCALLOUT; ch->flags |= XPC_C_DISCONNECTINGCALLOUT;
spin_unlock_irqrestore(&ch->lock, irq_flags); spin_unlock_irqrestore(&ch->lock, irq_flags);
xpc_disconnect_callout(ch, xpcDisconnecting); xpc_disconnect_callout(ch, xpcDisconnecting);
} else {
spin_unlock_irqrestore(&ch->lock, irq_flags); spin_lock_irqsave(&ch->lock, irq_flags);
ch->flags |= XPC_C_DISCONNECTINGCALLOUT_MADE;
} }
spin_unlock_irqrestore(&ch->lock, irq_flags);
if (atomic_dec_return(&part->nchannels_engaged) == 0) { if (atomic_dec_return(&part->nchannels_engaged) == 0) {
xpc_mark_partition_disengaged(part); xpc_mark_partition_disengaged(part);
xpc_IPI_send_disengage(part); xpc_IPI_send_disengage(part);


@ -335,10 +335,10 @@ int sn_pci_legacy_read(struct pci_bus *bus, u16 port, u32 *val, u8 size)
*/ */
SAL_CALL(isrv, SN_SAL_IOIF_PCI_SAFE, SAL_CALL(isrv, SN_SAL_IOIF_PCI_SAFE,
pci_domain_nr(bus), bus->number, pci_domain_nr(bus), bus->number,
0, /* io */ 0, /* io */
0, /* read */ 0, /* read */
port, size, __pa(val)); port, size, __pa(val));
if (isrv.status == 0) if (isrv.status == 0)
return size; return size;
@ -381,10 +381,10 @@ int sn_pci_legacy_write(struct pci_bus *bus, u16 port, u32 val, u8 size)
*/ */
SAL_CALL(isrv, SN_SAL_IOIF_PCI_SAFE, SAL_CALL(isrv, SN_SAL_IOIF_PCI_SAFE,
pci_domain_nr(bus), bus->number, pci_domain_nr(bus), bus->number,
0, /* io */ 0, /* io */
1, /* write */ 1, /* write */
port, size, __pa(&val)); port, size, __pa(&val));
if (isrv.status == 0) if (isrv.status == 0)
return size; return size;


@ -3,7 +3,7 @@
* License. See the file "COPYING" in the main directory of this archive * License. See the file "COPYING" in the main directory of this archive
* for more details. * for more details.
* *
* Copyright (C) 2001-2004 Silicon Graphics, Inc. All rights reserved. * Copyright (C) 2001-2006 Silicon Graphics, Inc. All rights reserved.
*/ */
#include <linux/types.h> #include <linux/types.h>
@ -12,7 +12,7 @@
#include <asm/sn/pcibus_provider_defs.h> #include <asm/sn/pcibus_provider_defs.h>
#include <asm/sn/pcidev.h> #include <asm/sn/pcidev.h>
int pcibr_invalidate_ate = 0; /* by default don't invalidate ATE on free */ int pcibr_invalidate_ate; /* by default don't invalidate ATE on free */
/* /*
* mark_ate: Mark the ate as either free or inuse. * mark_ate: Mark the ate as either free or inuse.
@ -20,14 +20,12 @@ int pcibr_invalidate_ate = 0; /* by default don't invalidate ATE on free */
static void mark_ate(struct ate_resource *ate_resource, int start, int number, static void mark_ate(struct ate_resource *ate_resource, int start, int number,
u64 value) u64 value)
{ {
u64 *ate = ate_resource->ate; u64 *ate = ate_resource->ate;
int index; int index;
int length = 0; int length = 0;
for (index = start; length < number; index++, length++) for (index = start; length < number; index++, length++)
ate[index] = value; ate[index] = value;
} }
/* /*
@ -37,7 +35,6 @@ static void mark_ate(struct ate_resource *ate_resource, int start, int number,
static int find_free_ate(struct ate_resource *ate_resource, int start, static int find_free_ate(struct ate_resource *ate_resource, int start,
int count) int count)
{ {
u64 *ate = ate_resource->ate; u64 *ate = ate_resource->ate;
int index; int index;
int start_free; int start_free;
@ -70,12 +67,10 @@ static int find_free_ate(struct ate_resource *ate_resource, int start,
static inline void free_ate_resource(struct ate_resource *ate_resource, static inline void free_ate_resource(struct ate_resource *ate_resource,
int start) int start)
{ {
mark_ate(ate_resource, start, ate_resource->ate[start], 0); mark_ate(ate_resource, start, ate_resource->ate[start], 0);
if ((ate_resource->lowest_free_index > start) || if ((ate_resource->lowest_free_index > start) ||
(ate_resource->lowest_free_index < 0)) (ate_resource->lowest_free_index < 0))
ate_resource->lowest_free_index = start; ate_resource->lowest_free_index = start;
} }
/* /*
@ -84,7 +79,6 @@ static inline void free_ate_resource(struct ate_resource *ate_resource,
static inline int alloc_ate_resource(struct ate_resource *ate_resource, static inline int alloc_ate_resource(struct ate_resource *ate_resource,
int ate_needed) int ate_needed)
{ {
int start_index; int start_index;
/* /*
@ -118,19 +112,12 @@ static inline int alloc_ate_resource(struct ate_resource *ate_resource,
*/ */
int pcibr_ate_alloc(struct pcibus_info *pcibus_info, int count) int pcibr_ate_alloc(struct pcibus_info *pcibus_info, int count)
{ {
int status = 0; int status;
u64 flag; unsigned long flags;
flag = pcibr_lock(pcibus_info); spin_lock_irqsave(&pcibus_info->pbi_lock, flags);
status = alloc_ate_resource(&pcibus_info->pbi_int_ate_resource, count); status = alloc_ate_resource(&pcibus_info->pbi_int_ate_resource, count);
spin_unlock_irqrestore(&pcibus_info->pbi_lock, flags);
if (status < 0) {
/* Failed to allocate */
pcibr_unlock(pcibus_info, flag);
return -1;
}
pcibr_unlock(pcibus_info, flag);
return status; return status;
} }
@ -182,7 +169,7 @@ void pcibr_ate_free(struct pcibus_info *pcibus_info, int index)
ate_write(pcibus_info, index, count, (ate & ~PCI32_ATE_V)); ate_write(pcibus_info, index, count, (ate & ~PCI32_ATE_V));
} }
flags = pcibr_lock(pcibus_info); spin_lock_irqsave(&pcibus_info->pbi_lock, flags);
free_ate_resource(&pcibus_info->pbi_int_ate_resource, index); free_ate_resource(&pcibus_info->pbi_int_ate_resource, index);
pcibr_unlock(pcibus_info, flags); spin_unlock_irqrestore(&pcibus_info->pbi_lock, flags);
} }


@ -137,14 +137,12 @@ pcibr_dmatrans_direct64(struct pcidev_info * info, u64 paddr,
pci_addr |= PCI64_ATTR_VIRTUAL; pci_addr |= PCI64_ATTR_VIRTUAL;
return pci_addr; return pci_addr;
} }
static dma_addr_t static dma_addr_t
pcibr_dmatrans_direct32(struct pcidev_info * info, pcibr_dmatrans_direct32(struct pcidev_info * info,
u64 paddr, size_t req_size, u64 flags) u64 paddr, size_t req_size, u64 flags)
{ {
struct pcidev_info *pcidev_info = info->pdi_host_pcidev_info; struct pcidev_info *pcidev_info = info->pdi_host_pcidev_info;
struct pcibus_info *pcibus_info = (struct pcibus_info *)pcidev_info-> struct pcibus_info *pcibus_info = (struct pcibus_info *)pcidev_info->
pdi_pcibus_info; pdi_pcibus_info;
@ -171,7 +169,6 @@ pcibr_dmatrans_direct32(struct pcidev_info * info,
} }
return PCI32_DIRECT_BASE | offset; return PCI32_DIRECT_BASE | offset;
} }
/* /*
@ -218,9 +215,8 @@ void sn_dma_flush(u64 addr)
u64 flags; u64 flags;
u64 itte; u64 itte;
struct hubdev_info *hubinfo; struct hubdev_info *hubinfo;
volatile struct sn_flush_device_kernel *p; struct sn_flush_device_kernel *p;
volatile struct sn_flush_device_common *common; struct sn_flush_device_common *common;
struct sn_flush_nasid_entry *flush_nasid_list; struct sn_flush_nasid_entry *flush_nasid_list;
if (!sn_ioif_inited) if (!sn_ioif_inited)
@ -310,8 +306,7 @@ void sn_dma_flush(u64 addr)
(common->sfdl_slot - 1)); (common->sfdl_slot - 1));
} }
} else { } else {
spin_lock_irqsave((spinlock_t *)&p->sfdl_flush_lock, spin_lock_irqsave(&p->sfdl_flush_lock, flags);
flags);
*common->sfdl_flush_addr = 0; *common->sfdl_flush_addr = 0;
/* force an interrupt. */ /* force an interrupt. */
@ -322,8 +317,7 @@ void sn_dma_flush(u64 addr)
cpu_relax(); cpu_relax();
/* okay, everything is synched up. */ /* okay, everything is synched up. */
spin_unlock_irqrestore((spinlock_t *)&p->sfdl_flush_lock, spin_unlock_irqrestore(&p->sfdl_flush_lock, flags);
flags);
} }
return; return;
} }


@ -163,9 +163,12 @@ pcibr_bus_fixup(struct pcibus_bussoft *prom_bussoft, struct pci_controller *cont
/* Setup the PMU ATE map */ /* Setup the PMU ATE map */
soft->pbi_int_ate_resource.lowest_free_index = 0; soft->pbi_int_ate_resource.lowest_free_index = 0;
soft->pbi_int_ate_resource.ate = soft->pbi_int_ate_resource.ate =
kmalloc(soft->pbi_int_ate_size * sizeof(u64), GFP_KERNEL); kzalloc(soft->pbi_int_ate_size * sizeof(u64), GFP_KERNEL);
memset(soft->pbi_int_ate_resource.ate, 0,
(soft->pbi_int_ate_size * sizeof(u64))); if (!soft->pbi_int_ate_resource.ate) {
kfree(soft);
return NULL;
}
if (prom_bussoft->bs_asic_type == PCIIO_ASIC_TYPE_TIOCP) { if (prom_bussoft->bs_asic_type == PCIIO_ASIC_TYPE_TIOCP) {
/* TIO PCI Bridge: find nearest node with CPUs */ /* TIO PCI Bridge: find nearest node with CPUs */


@ -21,6 +21,10 @@ config GENERIC_CALIBRATE_DELAY
bool bool
default y default y
config TIME_LOW_RES
bool
default y
config ARCH_MAY_HAVE_PC_FDC config ARCH_MAY_HAVE_PC_FDC
bool bool
depends on Q40 || (BROKEN && SUN3X) depends on Q40 || (BROKEN && SUN3X)


@ -29,6 +29,10 @@ config GENERIC_CALIBRATE_DELAY
bool bool
default y default y
config TIME_LOW_RES
bool
default y
source "init/Kconfig" source "init/Kconfig"
menu "Processor type and features" menu "Processor type and features"


@ -94,7 +94,6 @@ endif
# machines may also. Since BFD is incredibly buggy with respect to # machines may also. Since BFD is incredibly buggy with respect to
# crossformat linking we rely on the elf2ecoff tool for format conversion. # crossformat linking we rely on the elf2ecoff tool for format conversion.
# #
cflags-y += -I $(TOPDIR)/include/asm/gcc
cflags-y += -G 0 -mno-abicalls -fno-pic -pipe cflags-y += -G 0 -mno-abicalls -fno-pic -pipe
LDFLAGS_vmlinux += -G 0 -static -n -nostdlib LDFLAGS_vmlinux += -G 0 -static -n -nostdlib
MODFLAGS += -mlong-calls MODFLAGS += -mlong-calls


@ -25,6 +25,7 @@
#include <linux/a.out.h> #include <linux/a.out.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/completion.h> #include <linux/completion.h>
#include <linux/kallsyms.h>
#include <asm/abi.h> #include <asm/abi.h>
#include <asm/bootinfo.h> #include <asm/bootinfo.h>
@ -272,46 +273,19 @@ long kernel_thread(int (*fn)(void *), void *arg, unsigned long flags)
static struct mips_frame_info { static struct mips_frame_info {
void *func; void *func;
int omit_fp; /* compiled without fno-omit-frame-pointer */ unsigned long func_size;
int frame_offset; int frame_size;
int pc_offset; int pc_offset;
} schedule_frame, mfinfo[] = { } *schedule_frame, mfinfo[64];
{ schedule, 0 }, /* must be first */ static int mfinfo_num;
/* arch/mips/kernel/semaphore.c */
{ __down, 1 },
{ __down_interruptible, 1 },
/* kernel/sched.c */
#ifdef CONFIG_PREEMPT
{ preempt_schedule, 0 },
#endif
{ wait_for_completion, 0 },
{ interruptible_sleep_on, 0 },
{ interruptible_sleep_on_timeout, 0 },
{ sleep_on, 0 },
{ sleep_on_timeout, 0 },
{ yield, 0 },
{ io_schedule, 0 },
{ io_schedule_timeout, 0 },
#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT)
{ __preempt_spin_lock, 0 },
{ __preempt_write_lock, 0 },
#endif
/* kernel/timer.c */
{ schedule_timeout, 1 },
/* { nanosleep_restart, 1 }, */
/* lib/rwsem-spinlock.c */
{ __down_read, 1 },
{ __down_write, 1 },
};
static int mips_frame_info_initialized;
static int __init get_frame_info(struct mips_frame_info *info) static int __init get_frame_info(struct mips_frame_info *info)
{ {
int i; int i;
void *func = info->func; void *func = info->func;
union mips_instruction *ip = (union mips_instruction *)func; union mips_instruction *ip = (union mips_instruction *)func;
info->pc_offset = -1; info->pc_offset = -1;
info->frame_offset = info->omit_fp ? 0 : -1; info->frame_size = 0;
for (i = 0; i < 128; i++, ip++) { for (i = 0; i < 128; i++, ip++) {
/* if jal, jalr, jr, stop. */ /* if jal, jalr, jr, stop. */
if (ip->j_format.opcode == jal_op || if (ip->j_format.opcode == jal_op ||
@ -320,6 +294,23 @@ static int __init get_frame_info(struct mips_frame_info *info)
ip->r_format.func == jr_op))) ip->r_format.func == jr_op)))
break; break;
if (info->func_size && i >= info->func_size / 4)
break;
if (
#ifdef CONFIG_32BIT
ip->i_format.opcode == addiu_op &&
#endif
#ifdef CONFIG_64BIT
ip->i_format.opcode == daddiu_op &&
#endif
ip->i_format.rs == 29 &&
ip->i_format.rt == 29) {
/* addiu/daddiu sp,sp,-imm */
if (info->frame_size)
continue;
info->frame_size = - ip->i_format.simmediate;
}
if ( if (
#ifdef CONFIG_32BIT #ifdef CONFIG_32BIT
ip->i_format.opcode == sw_op && ip->i_format.opcode == sw_op &&
@ -327,31 +318,20 @@ static int __init get_frame_info(struct mips_frame_info *info)
#ifdef CONFIG_64BIT #ifdef CONFIG_64BIT
ip->i_format.opcode == sd_op && ip->i_format.opcode == sd_op &&
#endif #endif
ip->i_format.rs == 29) ip->i_format.rs == 29 &&
{ ip->i_format.rt == 31) {
/* sw / sd $ra, offset($sp) */ /* sw / sd $ra, offset($sp) */
if (ip->i_format.rt == 31) { if (info->pc_offset != -1)
if (info->pc_offset != -1) continue;
continue; info->pc_offset =
info->pc_offset = ip->i_format.simmediate / sizeof(long);
ip->i_format.simmediate / sizeof(long);
}
/* sw / sd $s8, offset($sp) */
if (ip->i_format.rt == 30) {
//#if 0 /* gcc 3.4 does aggressive optimization... */
if (info->frame_offset != -1)
continue;
//#endif
info->frame_offset =
ip->i_format.simmediate / sizeof(long);
}
} }
} }
if (info->pc_offset == -1 || info->frame_offset == -1) { if (info->pc_offset == -1 || info->frame_size == 0) {
printk("Can't analyze prologue code at %p\n", func); if (func == schedule)
printk("Can't analyze prologue code at %p\n", func);
info->pc_offset = -1; info->pc_offset = -1;
info->frame_offset = -1; info->frame_size = 0;
return -1;
} }
return 0; return 0;
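get_frame_info() above recovers two values per function by scanning its first instructions: the stack-frame size from an addiu/daddiu sp,sp,-imm and the saved-$ra slot from a sw/sd $ra,offset($sp). A stand-alone illustration (ordinary user-space C, not kernel code) of that decoding for two classic MIPS32 prologue instructions; the opcode and register numbers are the standard MIPS I-format fields:

#include <stdio.h>
#include <stdint.h>

#define OP(insn)	((insn) >> 26)			/* major opcode */
#define RS(insn)	(((insn) >> 21) & 0x1f)		/* source register */
#define RT(insn)	(((insn) >> 16) & 0x1f)		/* target register */
#define SIMM(insn)	((int16_t)((insn) & 0xffff))	/* signed immediate */

int main(void)
{
	/* 27bdffc8 = addiu sp,sp,-56   afbf0034 = sw ra,52(sp) */
	uint32_t prologue[] = { 0x27bdffc8, 0xafbf0034 };
	int frame_size = 0, pc_offset = -1;
	unsigned int i;

	for (i = 0; i < sizeof(prologue) / sizeof(prologue[0]); i++) {
		uint32_t insn = prologue[i];

		if (OP(insn) == 0x09 && RS(insn) == 29 && RT(insn) == 29)
			frame_size = -SIMM(insn);	/* stack adjustment */
		else if (OP(insn) == 0x2b && RS(insn) == 29 && RT(insn) == 31)
			pc_offset = SIMM(insn) / 4;	/* $ra slot; sizeof(long) == 4 here */
	}

	printf("frame_size=%d pc_offset=%d\n", frame_size, pc_offset);	/* 56 and 13 */
	return 0;
}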
@ -359,25 +339,36 @@ static int __init get_frame_info(struct mips_frame_info *info)
static int __init frame_info_init(void) static int __init frame_info_init(void)
{ {
int i, found; int i;
for (i = 0; i < ARRAY_SIZE(mfinfo); i++) #ifdef CONFIG_KALLSYMS
if (get_frame_info(&mfinfo[i])) char *modname;
return -1; char namebuf[KSYM_NAME_LEN + 1];
schedule_frame = mfinfo[0]; unsigned long start, size, ofs;
/* bubble sort */ extern char __sched_text_start[], __sched_text_end[];
do { extern char __lock_text_start[], __lock_text_end[];
struct mips_frame_info tmp;
found = 0; start = (unsigned long)__sched_text_start;
for (i = 1; i < ARRAY_SIZE(mfinfo); i++) { for (i = 0; i < ARRAY_SIZE(mfinfo); i++) {
if (mfinfo[i-1].func > mfinfo[i].func) { if (start == (unsigned long)schedule)
tmp = mfinfo[i]; schedule_frame = &mfinfo[i];
mfinfo[i] = mfinfo[i-1]; if (!kallsyms_lookup(start, &size, &ofs, &modname, namebuf))
mfinfo[i-1] = tmp; break;
found = 1; mfinfo[i].func = (void *)(start + ofs);
} mfinfo[i].func_size = size;
} start += size - ofs;
} while (found); if (start >= (unsigned long)__lock_text_end)
mips_frame_info_initialized = 1; break;
if (start == (unsigned long)__sched_text_end)
start = (unsigned long)__lock_text_start;
}
#else
mfinfo[0].func = schedule;
schedule_frame = &mfinfo[0];
#endif
for (i = 0; i < ARRAY_SIZE(mfinfo) && mfinfo[i].func; i++)
get_frame_info(&mfinfo[i]);
mfinfo_num = i;
return 0; return 0;
} }
@ -394,47 +385,52 @@ unsigned long thread_saved_pc(struct task_struct *tsk)
if (t->reg31 == (unsigned long) ret_from_fork) if (t->reg31 == (unsigned long) ret_from_fork)
return t->reg31; return t->reg31;
if (schedule_frame.pc_offset < 0) if (!schedule_frame || schedule_frame->pc_offset < 0)
return 0; return 0;
return ((unsigned long *)t->reg29)[schedule_frame.pc_offset]; return ((unsigned long *)t->reg29)[schedule_frame->pc_offset];
} }
/* get_wchan - a maintenance nightmare^W^Wpain in the ass ... */ /* get_wchan - a maintenance nightmare^W^Wpain in the ass ... */
unsigned long get_wchan(struct task_struct *p) unsigned long get_wchan(struct task_struct *p)
{ {
unsigned long stack_page; unsigned long stack_page;
unsigned long frame, pc; unsigned long pc;
#ifdef CONFIG_KALLSYMS
unsigned long frame;
#endif
if (!p || p == current || p->state == TASK_RUNNING) if (!p || p == current || p->state == TASK_RUNNING)
return 0; return 0;
stack_page = (unsigned long)task_stack_page(p); stack_page = (unsigned long)task_stack_page(p);
if (!stack_page || !mips_frame_info_initialized) if (!stack_page || !mfinfo_num)
return 0; return 0;
pc = thread_saved_pc(p); pc = thread_saved_pc(p);
#ifdef CONFIG_KALLSYMS
if (!in_sched_functions(pc)) if (!in_sched_functions(pc))
return pc; return pc;
frame = ((unsigned long *)p->thread.reg30)[schedule_frame.frame_offset]; frame = p->thread.reg29 + schedule_frame->frame_size;
do { do {
int i; int i;
if (frame < stack_page || frame > stack_page + THREAD_SIZE - 32) if (frame < stack_page || frame > stack_page + THREAD_SIZE - 32)
return 0; return 0;
for (i = ARRAY_SIZE(mfinfo) - 1; i >= 0; i--) { for (i = mfinfo_num - 1; i >= 0; i--) {
if (pc >= (unsigned long) mfinfo[i].func) if (pc >= (unsigned long) mfinfo[i].func)
break; break;
} }
if (i < 0) if (i < 0)
break; break;
if (mfinfo[i].omit_fp)
break;
pc = ((unsigned long *)frame)[mfinfo[i].pc_offset]; pc = ((unsigned long *)frame)[mfinfo[i].pc_offset];
frame = ((unsigned long *)frame)[mfinfo[i].frame_offset]; if (!mfinfo[i].frame_size)
break;
frame += mfinfo[i].frame_size;
} while (in_sched_functions(pc)); } while (in_sched_functions(pc));
#endif
return pc; return pc;
} }


@ -623,7 +623,7 @@ einval: li v0, -EINVAL
sys sys_mknodat 4 /* 4290 */ sys sys_mknodat 4 /* 4290 */
sys sys_fchownat 5 sys sys_fchownat 5
sys sys_futimesat 3 sys sys_futimesat 3
sys sys_newfstatat 4 sys sys_fstatat64 4
sys sys_unlinkat 3 sys sys_unlinkat 3
sys sys_renameat 4 /* 4295 */ sys sys_renameat 4 /* 4295 */
sys sys_linkat 4 sys sys_linkat 4


@ -176,7 +176,7 @@ get_sigframe(struct k_sigaction *ka, struct pt_regs *regs, size_t frame_size)
if ((ka->sa.sa_flags & SA_ONSTACK) && (sas_ss_flags (sp) == 0)) if ((ka->sa.sa_flags & SA_ONSTACK) && (sas_ss_flags (sp) == 0))
sp = current->sas_ss_sp + current->sas_ss_size; sp = current->sas_ss_sp + current->sas_ss_size;
return (void __user *)((sp - frame_size) & (ICACHE_REFILLS_WORKAROUND_WAR ? 32 : ALMASK)); return (void __user *)((sp - frame_size) & (ICACHE_REFILLS_WORKAROUND_WAR ? ~(cpu_icache_line_size()-1) : ALMASK));
} }
static inline int install_sigtramp(unsigned int __user *tramp, static inline int install_sigtramp(unsigned int __user *tramp,
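The change in get_sigframe() above aligns the signal frame with ~(cpu_icache_line_size()-1) when the icache-refill workaround is active, i.e. it rounds the address down to the start of whatever icache line size the CPU actually has instead of relying on a fixed constant. A tiny worked example of that mask arithmetic (the values below are arbitrary):

#include <stdio.h>

int main(void)
{
	unsigned long sp = 0x7fff2e34;	/* hypothetical stack address */
	unsigned long linesize = 32;	/* hypothetical icache line size */

	/* ~(linesize - 1) == ...ffffffe0, so this rounds down to a line boundary */
	printf("%#lx -> %#lx\n", sp, sp & ~(linesize - 1));	/* 0x7fff2e34 -> 0x7fff2e20 */
	return 0;
}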


@ -537,7 +537,7 @@ _sys32_rt_sigreturn(nabi_no_regargs struct pt_regs regs)
/* The ucontext contains a stack32_t, so we must convert! */ /* The ucontext contains a stack32_t, so we must convert! */
if (__get_user(sp, &frame->rs_uc.uc_stack.ss_sp)) if (__get_user(sp, &frame->rs_uc.uc_stack.ss_sp))
goto badframe; goto badframe;
st.ss_size = (long) sp; st.ss_sp = (void *)(long) sp;
if (__get_user(st.ss_size, &frame->rs_uc.uc_stack.ss_size)) if (__get_user(st.ss_size, &frame->rs_uc.uc_stack.ss_size))
goto badframe; goto badframe;
if (__get_user(st.ss_flags, &frame->rs_uc.uc_stack.ss_flags)) if (__get_user(st.ss_flags, &frame->rs_uc.uc_stack.ss_flags))
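The one-line fix above matters because the old code copied the value read from the user's ss_sp into st.ss_size (which the very next line then overwrites), leaving st.ss_sp unset; the new line stores it into st.ss_sp as intended. A simplified sketch of the intended stack32_t-to-stack_t conversion, with the 32-bit layout reduced to an illustrative struct:

#include <linux/signal.h>
#include <linux/types.h>

/* illustrative stand-in for the 32-bit user layout, not the real stack32_t */
struct stack32_example {
	u32 ss_sp;
	s32 ss_flags;
	u32 ss_size;
};

static void convert_stack_example(const struct stack32_example *s32, stack_t *st)
{
	st->ss_sp    = (void __user *)(unsigned long)s32->ss_sp;	/* pointer field */
	st->ss_flags = s32->ss_flags;
	st->ss_size  = s32->ss_size;
}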


@ -108,7 +108,7 @@ _sysn32_rt_sigreturn(nabi_no_regargs struct pt_regs regs)
/* The ucontext contains a stack32_t, so we must convert! */ /* The ucontext contains a stack32_t, so we must convert! */
if (__get_user(sp, &frame->rs_uc.uc_stack.ss_sp)) if (__get_user(sp, &frame->rs_uc.uc_stack.ss_sp))
goto badframe; goto badframe;
st.ss_size = (long) sp; st.ss_sp = (void *)(long) sp;
if (__get_user(st.ss_size, &frame->rs_uc.uc_stack.ss_size)) if (__get_user(st.ss_size, &frame->rs_uc.uc_stack.ss_size))
goto badframe; goto badframe;
if (__get_user(st.ss_flags, &frame->rs_uc.uc_stack.ss_flags)) if (__get_user(st.ss_flags, &frame->rs_uc.uc_stack.ss_flags))


@ -68,6 +68,8 @@ void __init sanitize_tlb_entries(void)
set_c0_mvpcontrol(MVPCONTROL_VPC); set_c0_mvpcontrol(MVPCONTROL_VPC);
back_to_back_c0_hazard();
/* Disable TLB sharing */ /* Disable TLB sharing */
clear_c0_mvpcontrol(MVPCONTROL_STLB); clear_c0_mvpcontrol(MVPCONTROL_STLB);
@ -102,35 +104,6 @@ void __init sanitize_tlb_entries(void)
clear_c0_mvpcontrol(MVPCONTROL_VPC); clear_c0_mvpcontrol(MVPCONTROL_VPC);
} }
#if 0
/*
* Use c0_MVPConf0 to find out how many CPUs are available, setting up
* phys_cpu_present_map and the logical/physical mappings.
*/
void __init prom_build_cpu_map(void)
{
int i, num, ncpus;
cpus_clear(phys_cpu_present_map);
/* assume we boot on cpu 0.... */
cpu_set(0, phys_cpu_present_map);
__cpu_number_map[0] = 0;
__cpu_logical_map[0] = 0;
if (cpu_has_mipsmt) {
ncpus = ((read_c0_mvpconf0() & (MVPCONF0_PVPE)) >> MVPCONF0_PVPE_SHIFT) + 1;
for (i=1, num=0; i< NR_CPUS && i<ncpus; i++) {
cpu_set(i, phys_cpu_present_map);
__cpu_number_map[i] = ++num;
__cpu_logical_map[num] = i;
}
printk(KERN_INFO "%i available secondary CPU(s)\n", num);
}
}
#endif
static void ipi_resched_dispatch (struct pt_regs *regs) static void ipi_resched_dispatch (struct pt_regs *regs)
{ {
do_IRQ(MIPS_CPU_IPI_RESCHED_IRQ, regs); do_IRQ(MIPS_CPU_IPI_RESCHED_IRQ, regs);
@ -222,6 +195,9 @@ void prom_prepare_cpus(unsigned int max_cpus)
/* set config to be the same as vpe0, particularly kseg0 coherency alg */ /* set config to be the same as vpe0, particularly kseg0 coherency alg */
write_vpe_c0_config( read_c0_config()); write_vpe_c0_config( read_c0_config());
/* Propagate Config7 */
write_vpe_c0_config7(read_c0_config7());
} }
} }


@ -471,61 +471,29 @@ struct flush_icache_range_args {
static inline void local_r4k_flush_icache_range(void *args) static inline void local_r4k_flush_icache_range(void *args)
{ {
struct flush_icache_range_args *fir_args = args; struct flush_icache_range_args *fir_args = args;
unsigned long dc_lsize = cpu_dcache_line_size();
unsigned long ic_lsize = cpu_icache_line_size();
unsigned long sc_lsize = cpu_scache_line_size();
unsigned long start = fir_args->start; unsigned long start = fir_args->start;
unsigned long end = fir_args->end; unsigned long end = fir_args->end;
unsigned long addr, aend;
if (!cpu_has_ic_fills_f_dc) { if (!cpu_has_ic_fills_f_dc) {
if (end - start > dcache_size) { if (end - start > dcache_size) {
r4k_blast_dcache(); r4k_blast_dcache();
} else { } else {
R4600_HIT_CACHEOP_WAR_IMPL; R4600_HIT_CACHEOP_WAR_IMPL;
addr = start & ~(dc_lsize - 1); protected_blast_dcache_range(start, end);
aend = (end - 1) & ~(dc_lsize - 1);
while (1) {
/* Hit_Writeback_Inv_D */
protected_writeback_dcache_line(addr);
if (addr == aend)
break;
addr += dc_lsize;
}
} }
if (!cpu_icache_snoops_remote_store) { if (!cpu_icache_snoops_remote_store) {
if (end - start > scache_size) { if (end - start > scache_size)
r4k_blast_scache(); r4k_blast_scache();
} else { else
addr = start & ~(sc_lsize - 1); protected_blast_scache_range(start, end);
aend = (end - 1) & ~(sc_lsize - 1);
while (1) {
/* Hit_Writeback_Inv_SD */
protected_writeback_scache_line(addr);
if (addr == aend)
break;
addr += sc_lsize;
}
}
} }
} }
if (end - start > icache_size) if (end - start > icache_size)
r4k_blast_icache(); r4k_blast_icache();
else { else
addr = start & ~(ic_lsize - 1); protected_blast_icache_range(start, end);
aend = (end - 1) & ~(ic_lsize - 1);
while (1) {
/* Hit_Invalidate_I */
protected_flush_icache_line(addr);
if (addr == aend)
break;
addr += ic_lsize;
}
}
} }
static void r4k_flush_icache_range(unsigned long start, unsigned long end) static void r4k_flush_icache_range(unsigned long start, unsigned long end)
@ -619,27 +587,14 @@ static void r4k_flush_icache_page(struct vm_area_struct *vma,
static void r4k_dma_cache_wback_inv(unsigned long addr, unsigned long size) static void r4k_dma_cache_wback_inv(unsigned long addr, unsigned long size)
{ {
unsigned long end, a;
/* Catch bad driver code */ /* Catch bad driver code */
BUG_ON(size == 0); BUG_ON(size == 0);
if (cpu_has_subset_pcaches) { if (cpu_has_subset_pcaches) {
unsigned long sc_lsize = cpu_scache_line_size(); if (size >= scache_size)
if (size >= scache_size) {
r4k_blast_scache(); r4k_blast_scache();
return; else
} blast_scache_range(addr, addr + size);
a = addr & ~(sc_lsize - 1);
end = (addr + size - 1) & ~(sc_lsize - 1);
while (1) {
flush_scache_line(a); /* Hit_Writeback_Inv_SD */
if (a == end)
break;
a += sc_lsize;
}
return; return;
} }
@ -651,17 +606,8 @@ static void r4k_dma_cache_wback_inv(unsigned long addr, unsigned long size)
if (size >= dcache_size) { if (size >= dcache_size) {
r4k_blast_dcache(); r4k_blast_dcache();
} else { } else {
unsigned long dc_lsize = cpu_dcache_line_size();
R4600_HIT_CACHEOP_WAR_IMPL; R4600_HIT_CACHEOP_WAR_IMPL;
a = addr & ~(dc_lsize - 1); blast_dcache_range(addr, addr + size);
end = (addr + size - 1) & ~(dc_lsize - 1);
while (1) {
flush_dcache_line(a); /* Hit_Writeback_Inv_D */
if (a == end)
break;
a += dc_lsize;
}
} }
bc_wback_inv(addr, size); bc_wback_inv(addr, size);
@ -669,44 +615,22 @@ static void r4k_dma_cache_wback_inv(unsigned long addr, unsigned long size)
static void r4k_dma_cache_inv(unsigned long addr, unsigned long size) static void r4k_dma_cache_inv(unsigned long addr, unsigned long size)
{ {
unsigned long end, a;
/* Catch bad driver code */ /* Catch bad driver code */
BUG_ON(size == 0); BUG_ON(size == 0);
if (cpu_has_subset_pcaches) { if (cpu_has_subset_pcaches) {
unsigned long sc_lsize = cpu_scache_line_size(); if (size >= scache_size)
if (size >= scache_size) {
r4k_blast_scache(); r4k_blast_scache();
return; else
} blast_scache_range(addr, addr + size);
a = addr & ~(sc_lsize - 1);
end = (addr + size - 1) & ~(sc_lsize - 1);
while (1) {
flush_scache_line(a); /* Hit_Writeback_Inv_SD */
if (a == end)
break;
a += sc_lsize;
}
return; return;
} }
if (size >= dcache_size) { if (size >= dcache_size) {
r4k_blast_dcache(); r4k_blast_dcache();
} else { } else {
unsigned long dc_lsize = cpu_dcache_line_size();
R4600_HIT_CACHEOP_WAR_IMPL; R4600_HIT_CACHEOP_WAR_IMPL;
a = addr & ~(dc_lsize - 1); blast_dcache_range(addr, addr + size);
end = (addr + size - 1) & ~(dc_lsize - 1);
while (1) {
flush_dcache_line(a); /* Hit_Writeback_Inv_D */
if (a == end)
break;
a += dc_lsize;
}
} }
bc_inv(addr, size); bc_inv(addr, size);


@ -44,8 +44,6 @@ __asm__ __volatile__( \
/* TX39H-style cache flush routines. */ /* TX39H-style cache flush routines. */
static void tx39h_flush_icache_all(void) static void tx39h_flush_icache_all(void)
{ {
unsigned long start = KSEG0;
unsigned long end = (start + icache_size);
unsigned long flags, config; unsigned long flags, config;
/* disable icache (set ICE#) */ /* disable icache (set ICE#) */
@ -53,33 +51,18 @@ static void tx39h_flush_icache_all(void)
config = read_c0_conf(); config = read_c0_conf();
write_c0_conf(config & ~TX39_CONF_ICE); write_c0_conf(config & ~TX39_CONF_ICE);
TX39_STOP_STREAMING(); TX39_STOP_STREAMING();
blast_icache16();
/* invalidate icache */
while (start < end) {
cache16_unroll32(start, Index_Invalidate_I);
start += 0x200;
}
write_c0_conf(config); write_c0_conf(config);
local_irq_restore(flags); local_irq_restore(flags);
} }
static void tx39h_dma_cache_wback_inv(unsigned long addr, unsigned long size) static void tx39h_dma_cache_wback_inv(unsigned long addr, unsigned long size)
{ {
unsigned long end, a;
unsigned long dc_lsize = current_cpu_data.dcache.linesz;
/* Catch bad driver code */ /* Catch bad driver code */
BUG_ON(size == 0); BUG_ON(size == 0);
iob(); iob();
a = addr & ~(dc_lsize - 1); blast_inv_dcache_range(addr, addr + size);
end = (addr + size - 1) & ~(dc_lsize - 1);
while (1) {
invalidate_dcache_line(a); /* Hit_Invalidate_D */
if (a == end) break;
a += dc_lsize;
}
} }
@ -241,42 +224,21 @@ static void tx39_flush_data_cache_page(unsigned long addr)
static void tx39_flush_icache_range(unsigned long start, unsigned long end) static void tx39_flush_icache_range(unsigned long start, unsigned long end)
{ {
unsigned long dc_lsize = current_cpu_data.dcache.linesz;
unsigned long addr, aend;
if (end - start > dcache_size) if (end - start > dcache_size)
tx39_blast_dcache(); tx39_blast_dcache();
else { else
addr = start & ~(dc_lsize - 1); protected_blast_dcache_range(start, end);
aend = (end - 1) & ~(dc_lsize - 1);
while (1) {
/* Hit_Writeback_Inv_D */
protected_writeback_dcache_line(addr);
if (addr == aend)
break;
addr += dc_lsize;
}
}
if (end - start > icache_size) if (end - start > icache_size)
tx39_blast_icache(); tx39_blast_icache();
else { else {
unsigned long flags, config; unsigned long flags, config;
addr = start & ~(dc_lsize - 1);
aend = (end - 1) & ~(dc_lsize - 1);
/* disable icache (set ICE#) */ /* disable icache (set ICE#) */
local_irq_save(flags); local_irq_save(flags);
config = read_c0_conf(); config = read_c0_conf();
write_c0_conf(config & ~TX39_CONF_ICE); write_c0_conf(config & ~TX39_CONF_ICE);
TX39_STOP_STREAMING(); TX39_STOP_STREAMING();
while (1) { protected_blast_icache_range(start, end);
/* Hit_Invalidate_I */
protected_flush_icache_line(addr);
if (addr == aend)
break;
addr += dc_lsize;
}
write_c0_conf(config); write_c0_conf(config);
local_irq_restore(flags); local_irq_restore(flags);
} }
@ -311,7 +273,7 @@ static void tx39_flush_icache_page(struct vm_area_struct *vma, struct page *page
static void tx39_dma_cache_wback_inv(unsigned long addr, unsigned long size) static void tx39_dma_cache_wback_inv(unsigned long addr, unsigned long size)
{ {
unsigned long end, a; unsigned long end;
if (((size | addr) & (PAGE_SIZE - 1)) == 0) { if (((size | addr) & (PAGE_SIZE - 1)) == 0) {
end = addr + size; end = addr + size;
@ -322,20 +284,13 @@ static void tx39_dma_cache_wback_inv(unsigned long addr, unsigned long size)
} else if (size > dcache_size) { } else if (size > dcache_size) {
tx39_blast_dcache(); tx39_blast_dcache();
} else { } else {
unsigned long dc_lsize = current_cpu_data.dcache.linesz; blast_dcache_range(addr, addr + size);
a = addr & ~(dc_lsize - 1);
end = (addr + size - 1) & ~(dc_lsize - 1);
while (1) {
flush_dcache_line(a); /* Hit_Writeback_Inv_D */
if (a == end) break;
a += dc_lsize;
}
} }
} }
static void tx39_dma_cache_inv(unsigned long addr, unsigned long size) static void tx39_dma_cache_inv(unsigned long addr, unsigned long size)
{ {
unsigned long end, a; unsigned long end;
if (((size | addr) & (PAGE_SIZE - 1)) == 0) { if (((size | addr) & (PAGE_SIZE - 1)) == 0) {
end = addr + size; end = addr + size;
@ -346,14 +301,7 @@ static void tx39_dma_cache_inv(unsigned long addr, unsigned long size)
} else if (size > dcache_size) { } else if (size > dcache_size) {
tx39_blast_dcache(); tx39_blast_dcache();
} else { } else {
unsigned long dc_lsize = current_cpu_data.dcache.linesz; blast_inv_dcache_range(addr, addr + size);
a = addr & ~(dc_lsize - 1);
end = (addr + size - 1) & ~(dc_lsize - 1);
while (1) {
invalidate_dcache_line(a); /* Hit_Invalidate_D */
if (a == end) break;
a += dc_lsize;
}
} }
} }


@ -29,6 +29,11 @@ config GENERIC_CALIBRATE_DELAY
bool bool
default y default y
config TIME_LOW_RES
bool
depends on SMP
default y
config GENERIC_ISA_DMA config GENERIC_ISA_DMA
bool bool


@ -377,15 +377,15 @@
ENTRY_SAME(inotify_init) ENTRY_SAME(inotify_init)
ENTRY_SAME(inotify_add_watch) /* 270 */ ENTRY_SAME(inotify_add_watch) /* 270 */
ENTRY_SAME(inotify_rm_watch) ENTRY_SAME(inotify_rm_watch)
ENTRY_COMP(pselect6) ENTRY_SAME(ni_syscall) /* 271 ENTRY_COMP(pselect6) */
ENTRY_COMP(ppoll) ENTRY_SAME(ni_syscall) /* 272 ENTRY_COMP(ppoll) */
ENTRY_SAME(migrate_pages) ENTRY_SAME(migrate_pages)
ENTRY_COMP(openat) /* 275 */ ENTRY_COMP(openat) /* 275 */
ENTRY_SAME(mkdirat) ENTRY_SAME(mkdirat)
ENTRY_SAME(mknodat) ENTRY_SAME(mknodat)
ENTRY_SAME(fchownat) ENTRY_SAME(fchownat)
ENTRY_COMP(futimesat) ENTRY_COMP(futimesat)
ENTRY_COMP(newfstatat) /* 280 */ ENTRY_SAME(fstatat64) /* 280 */
ENTRY_SAME(unlinkat) ENTRY_SAME(unlinkat)
ENTRY_SAME(renameat) ENTRY_SAME(renameat)
ENTRY_SAME(linkat) ENTRY_SAME(linkat)


@ -80,6 +80,10 @@ config HOTPLUG_CPU
can be controlled through /sys/devices/system/cpu/cpu#. can be controlled through /sys/devices/system/cpu/cpu#.
Say N if you want to disable CPU hotplug. Say N if you want to disable CPU hotplug.
config DEFAULT_MIGRATION_COST
int
default "1000000"
config MATHEMU config MATHEMU
bool "IEEE FPU emulation" bool "IEEE FPU emulation"
depends on MARCH_G5 depends on MARCH_G5


@ -905,8 +905,8 @@ asmlinkage long sys32_fstat64(unsigned long fd, struct stat64_emu31 __user * sta
return ret; return ret;
} }
asmlinkage long sys32_fstatat(unsigned int dfd, char __user *filename, asmlinkage long sys32_fstatat64(unsigned int dfd, char __user *filename,
struct stat64_emu31 __user* statbuf, int flag) struct stat64_emu31 __user* statbuf, int flag)
{ {
struct kstat stat; struct kstat stat;
int error = -EINVAL; int error = -EINVAL;


@ -1523,13 +1523,13 @@ compat_sys_futimesat_wrapper:
llgtr %r4,%r4 # struct timeval * llgtr %r4,%r4 # struct timeval *
jg compat_sys_futimesat jg compat_sys_futimesat
.globl sys32_fstatat_wrapper .globl sys32_fstatat64_wrapper
sys32_fstatat_wrapper: sys32_fstatat64_wrapper:
llgfr %r2,%r2 # unsigned int llgfr %r2,%r2 # unsigned int
llgtr %r3,%r3 # char * llgtr %r3,%r3 # char *
llgtr %r4,%r4 # struct stat64 * llgtr %r4,%r4 # struct stat64 *
lgfr %r5,%r5 # int lgfr %r5,%r5 # int
jg sys32_fstatat jg sys32_fstatat64
.globl sys_unlinkat_wrapper .globl sys_unlinkat_wrapper
sys_unlinkat_wrapper: sys_unlinkat_wrapper:


@ -128,8 +128,10 @@ void default_idle(void)
__ctl_set_bit(8, 15); __ctl_set_bit(8, 15);
#ifdef CONFIG_HOTPLUG_CPU #ifdef CONFIG_HOTPLUG_CPU
if (cpu_is_offline(cpu)) if (cpu_is_offline(cpu)) {
preempt_enable_no_resched();
cpu_die(); cpu_die();
}
#endif #endif
local_mcck_disable(); local_mcck_disable();


@@ -600,6 +600,7 @@ setup_arch(char **cmdline_p)
init_mm.brk = (unsigned long) &_end; init_mm.brk = (unsigned long) &_end;
parse_cmdline_early(cmdline_p); parse_cmdline_early(cmdline_p);
parse_early_param();
setup_memory(); setup_memory();
setup_resources(); setup_resources();
@@ -607,6 +608,7 @@ setup_arch(char **cmdline_p)
cpu_init(); cpu_init();
__cpu_logical_map[0] = S390_lowcore.cpu_data.cpu_addr; __cpu_logical_map[0] = S390_lowcore.cpu_data.cpu_addr;
smp_setup_cpu_possible_map();
/* /*
* Create kernel page tables and switch to virtual addressing. * Create kernel page tables and switch to virtual addressing.


@@ -1,8 +1,7 @@
/* /*
* arch/s390/kernel/smp.c * arch/s390/kernel/smp.c
* *
* S390 version * Copyright (C) IBM Corp. 1999,2006
* Copyright (C) 1999,2000 IBM Deutschland Entwicklung GmbH, IBM Corporation
* Author(s): Denis Joseph Barrow (djbarrow@de.ibm.com,barrow_dj@yahoo.com), * Author(s): Denis Joseph Barrow (djbarrow@de.ibm.com,barrow_dj@yahoo.com),
* Martin Schwidefsky (schwidefsky@de.ibm.com) * Martin Schwidefsky (schwidefsky@de.ibm.com)
* Heiko Carstens (heiko.carstens@de.ibm.com) * Heiko Carstens (heiko.carstens@de.ibm.com)
@@ -41,8 +40,6 @@
#include <asm/cpcmd.h> #include <asm/cpcmd.h>
#include <asm/tlbflush.h> #include <asm/tlbflush.h>
/* prototypes */
extern volatile int __cpu_logical_map[]; extern volatile int __cpu_logical_map[];
/* /*
@@ -51,13 +48,11 @@ extern volatile int __cpu_logical_map[];
struct _lowcore *lowcore_ptr[NR_CPUS]; struct _lowcore *lowcore_ptr[NR_CPUS];
cpumask_t cpu_online_map; cpumask_t cpu_online_map = CPU_MASK_NONE;
cpumask_t cpu_possible_map = CPU_MASK_ALL; cpumask_t cpu_possible_map = CPU_MASK_NONE;
static struct task_struct *current_set[NR_CPUS]; static struct task_struct *current_set[NR_CPUS];
EXPORT_SYMBOL(cpu_online_map);
/* /*
* Reboot, halt and power_off routines for SMP. * Reboot, halt and power_off routines for SMP.
*/ */
@@ -490,10 +485,10 @@ void smp_ctl_clear_bit(int cr, int bit) {
* Lets check how many CPUs we have. * Lets check how many CPUs we have.
*/ */
void static unsigned int
__init smp_check_cpus(unsigned int max_cpus) __init smp_count_cpus(void)
{ {
int cpu, num_cpus; unsigned int cpu, num_cpus;
__u16 boot_cpu_addr; __u16 boot_cpu_addr;
/* /*
@@ -503,19 +498,20 @@ __init smp_check_cpus(unsigned int max_cpus)
boot_cpu_addr = S390_lowcore.cpu_data.cpu_addr; boot_cpu_addr = S390_lowcore.cpu_data.cpu_addr;
current_thread_info()->cpu = 0; current_thread_info()->cpu = 0;
num_cpus = 1; num_cpus = 1;
for (cpu = 0; cpu <= 65535 && num_cpus < max_cpus; cpu++) { for (cpu = 0; cpu <= 65535; cpu++) {
if ((__u16) cpu == boot_cpu_addr) if ((__u16) cpu == boot_cpu_addr)
continue; continue;
__cpu_logical_map[num_cpus] = (__u16) cpu; __cpu_logical_map[1] = (__u16) cpu;
if (signal_processor(num_cpus, sigp_sense) == if (signal_processor(1, sigp_sense) ==
sigp_not_operational) sigp_not_operational)
continue; continue;
cpu_set(num_cpus, cpu_present_map);
num_cpus++; num_cpus++;
} }
printk("Detected %d CPU's\n",(int) num_cpus); printk("Detected %d CPU's\n",(int) num_cpus);
printk("Boot cpu address %2X\n", boot_cpu_addr); printk("Boot cpu address %2X\n", boot_cpu_addr);
return num_cpus;
} }
/* /*
@@ -676,6 +672,44 @@ __cpu_up(unsigned int cpu)
return 0; return 0;
} }
static unsigned int __initdata additional_cpus;
static unsigned int __initdata possible_cpus;
void __init smp_setup_cpu_possible_map(void)
{
unsigned int phy_cpus, pos_cpus, cpu;
phy_cpus = smp_count_cpus();
pos_cpus = min(phy_cpus + additional_cpus, (unsigned int) NR_CPUS);
if (possible_cpus)
pos_cpus = min(possible_cpus, (unsigned int) NR_CPUS);
for (cpu = 0; cpu < pos_cpus; cpu++)
cpu_set(cpu, cpu_possible_map);
phy_cpus = min(phy_cpus, pos_cpus);
for (cpu = 0; cpu < phy_cpus; cpu++)
cpu_set(cpu, cpu_present_map);
}
#ifdef CONFIG_HOTPLUG_CPU
static int __init setup_additional_cpus(char *s)
{
additional_cpus = simple_strtoul(s, NULL, 0);
return 0;
}
early_param("additional_cpus", setup_additional_cpus);
static int __init setup_possible_cpus(char *s)
{
possible_cpus = simple_strtoul(s, NULL, 0);
return 0;
}
early_param("possible_cpus", setup_possible_cpus);
int int
__cpu_disable(void) __cpu_disable(void)
{ {
@@ -744,6 +778,8 @@ cpu_die(void)
for(;;); for(;;);
} }
#endif /* CONFIG_HOTPLUG_CPU */
/* /*
* Cycle through the processors and setup structures. * Cycle through the processors and setup structures.
*/ */
@@ -757,7 +793,6 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
/* request the 0x1201 emergency signal external interrupt */ /* request the 0x1201 emergency signal external interrupt */
if (register_external_interrupt(0x1201, do_ext_call_interrupt) != 0) if (register_external_interrupt(0x1201, do_ext_call_interrupt) != 0)
panic("Couldn't request external interrupt 0x1201"); panic("Couldn't request external interrupt 0x1201");
smp_check_cpus(max_cpus);
memset(lowcore_ptr,0,sizeof(lowcore_ptr)); memset(lowcore_ptr,0,sizeof(lowcore_ptr));
/* /*
* Initialize prefix pages and stacks for all possible cpus * Initialize prefix pages and stacks for all possible cpus
@@ -806,7 +841,6 @@ void __devinit smp_prepare_boot_cpu(void)
BUG_ON(smp_processor_id() != 0); BUG_ON(smp_processor_id() != 0);
cpu_set(0, cpu_online_map); cpu_set(0, cpu_online_map);
cpu_set(0, cpu_present_map);
S390_lowcore.percpu_offset = __per_cpu_offset[0]; S390_lowcore.percpu_offset = __per_cpu_offset[0];
current_set[0] = current; current_set[0] = current;
} }
@@ -845,6 +879,7 @@ static int __init topology_init(void)
subsys_initcall(topology_init); subsys_initcall(topology_init);
EXPORT_SYMBOL(cpu_online_map);
EXPORT_SYMBOL(cpu_possible_map); EXPORT_SYMBOL(cpu_possible_map);
EXPORT_SYMBOL(lowcore_ptr); EXPORT_SYMBOL(lowcore_ptr);
EXPORT_SYMBOL(smp_ctl_set_bit); EXPORT_SYMBOL(smp_ctl_set_bit);


@@ -301,7 +301,7 @@ SYSCALL(sys_mkdirat,sys_mkdirat,sys_mkdirat_wrapper)
SYSCALL(sys_mknodat,sys_mknodat,sys_mknodat_wrapper) /* 290 */ SYSCALL(sys_mknodat,sys_mknodat,sys_mknodat_wrapper) /* 290 */
SYSCALL(sys_fchownat,sys_fchownat,sys_fchownat_wrapper) SYSCALL(sys_fchownat,sys_fchownat,sys_fchownat_wrapper)
SYSCALL(sys_futimesat,sys_futimesat,compat_sys_futimesat_wrapper) SYSCALL(sys_futimesat,sys_futimesat,compat_sys_futimesat_wrapper)
SYSCALL(sys_fstatat64,sys_newfstatat,sys32_fstatat_wrapper) SYSCALL(sys_fstatat64,sys_newfstatat,sys32_fstatat64_wrapper)
SYSCALL(sys_unlinkat,sys_unlinkat,sys_unlinkat_wrapper) SYSCALL(sys_unlinkat,sys_unlinkat,sys_unlinkat_wrapper)
SYSCALL(sys_renameat,sys_renameat,sys_renameat_wrapper) /* 295 */ SYSCALL(sys_renameat,sys_renameat,sys_renameat_wrapper) /* 295 */
SYSCALL(sys_linkat,sys_linkat,sys_linkat_wrapper) SYSCALL(sys_linkat,sys_linkat,sys_linkat_wrapper)


@@ -30,7 +30,7 @@ void __delay(unsigned long loops)
*/ */
__asm__ __volatile__( __asm__ __volatile__(
"0: brct %0,0b" "0: brct %0,0b"
: /* no outputs */ : "r" (loops/2) ); : /* no outputs */ : "r" ((loops/2) + 1));
} }
/* /*


@@ -446,7 +446,7 @@ endmenu
config ISA_DMA_API config ISA_DMA_API
bool bool
depends on MPC1211 depends on SH_MPC1211
default y default y
menu "Kernel features" menu "Kernel features"


@@ -76,7 +76,7 @@ sys_call_table:
/*270*/ .long sys_io_submit, sys_io_cancel, sys_io_getevents, sys_mq_open, sys_mq_unlink /*270*/ .long sys_io_submit, sys_io_cancel, sys_io_getevents, sys_mq_open, sys_mq_unlink
/*275*/ .long sys_mq_timedsend, sys_mq_timedreceive, sys_mq_notify, sys_mq_getsetattr, sys_waitid /*275*/ .long sys_mq_timedsend, sys_mq_timedreceive, sys_mq_notify, sys_mq_getsetattr, sys_waitid
/*280*/ .long sys_ni_syscall, sys_add_key, sys_request_key, sys_keyctl, sys_openat /*280*/ .long sys_ni_syscall, sys_add_key, sys_request_key, sys_keyctl, sys_openat
/*285*/ .long sys_mkdirat, sys_mknodat, sys_fchownat, sys_futimesat, sys_newfstatat /*285*/ .long sys_mkdirat, sys_mknodat, sys_fchownat, sys_futimesat, sys_fstatat64
/*290*/ .long sys_unlinkat, sys_renameat, sys_linkat, sys_symlinkat, sys_readlinkat /*290*/ .long sys_unlinkat, sys_renameat, sys_linkat, sys_symlinkat, sys_readlinkat
/*295*/ .long sys_fchmodat, sys_faccessat, sys_pselect6, sys_ppoll, sys_unshare /*295*/ .long sys_fchmodat, sys_faccessat, sys_pselect6, sys_ppoll, sys_unshare


@@ -428,6 +428,27 @@ asmlinkage long compat_sys_fstat64(unsigned int fd,
return error; return error;
} }
asmlinkage long compat_sys_fstatat64(unsigned int dfd, char __user *filename,
struct compat_stat64 __user * statbuf, int flag)
{
struct kstat stat;
int error = -EINVAL;
if ((flag & ~AT_SYMLINK_NOFOLLOW) != 0)
goto out;
if (flag & AT_SYMLINK_NOFOLLOW)
error = vfs_lstat_fd(dfd, filename, &stat);
else
error = vfs_stat_fd(dfd, filename, &stat);
if (!error)
error = cp_compat_stat64(&stat, statbuf);
out:
return error;
}
asmlinkage long compat_sys_sysfs(int option, u32 arg1, u32 arg2) asmlinkage long compat_sys_sysfs(int option, u32 arg1, u32 arg2)
{ {
return sys_sysfs(option, arg1, arg2); return sys_sysfs(option, arg1, arg2);


@@ -77,7 +77,7 @@ sys_call_table32:
/*270*/ .word sys32_io_submit, sys_io_cancel, compat_sys_io_getevents, sys32_mq_open, sys_mq_unlink /*270*/ .word sys32_io_submit, sys_io_cancel, compat_sys_io_getevents, sys32_mq_open, sys_mq_unlink
.word compat_sys_mq_timedsend, compat_sys_mq_timedreceive, compat_sys_mq_notify, compat_sys_mq_getsetattr, compat_sys_waitid .word compat_sys_mq_timedsend, compat_sys_mq_timedreceive, compat_sys_mq_notify, compat_sys_mq_getsetattr, compat_sys_waitid
/*280*/ .word sys_ni_syscall, sys_add_key, sys_request_key, sys_keyctl, compat_sys_openat /*280*/ .word sys_ni_syscall, sys_add_key, sys_request_key, sys_keyctl, compat_sys_openat
.word sys_mkdirat, sys_mknodat, sys_fchownat, compat_sys_futimesat, compat_sys_newfstatat .word sys_mkdirat, sys_mknodat, sys_fchownat, compat_sys_futimesat, compat_sys_fstatat64
/*285*/ .word sys_unlinkat, sys_renameat, sys_linkat, sys_symlinkat, sys_readlinkat /*285*/ .word sys_unlinkat, sys_renameat, sys_linkat, sys_symlinkat, sys_readlinkat
.word sys_fchmodat, sys_faccessat, compat_sys_pselect6, compat_sys_ppoll, sys_unshare .word sys_fchmodat, sys_faccessat, compat_sys_pselect6, compat_sys_ppoll, sys_unshare
@@ -146,7 +146,7 @@ sys_call_table:
/*270*/ .word sys_io_submit, sys_io_cancel, sys_io_getevents, sys_mq_open, sys_mq_unlink /*270*/ .word sys_io_submit, sys_io_cancel, sys_io_getevents, sys_mq_open, sys_mq_unlink
.word sys_mq_timedsend, sys_mq_timedreceive, sys_mq_notify, sys_mq_getsetattr, sys_waitid .word sys_mq_timedsend, sys_mq_timedreceive, sys_mq_notify, sys_mq_getsetattr, sys_waitid
/*280*/ .word sys_nis_syscall, sys_add_key, sys_request_key, sys_keyctl, sys_openat /*280*/ .word sys_nis_syscall, sys_add_key, sys_request_key, sys_keyctl, sys_openat
.word sys_mkdirat, sys_mknodat, sys_fchownat, sys_futimesat, sys_newfstatat .word sys_mkdirat, sys_mknodat, sys_fchownat, sys_futimesat, sys_fstatat64
/*285*/ .word sys_unlinkat, sys_renameat, sys_linkat, sys_symlinkat, sys_readlinkat /*285*/ .word sys_unlinkat, sys_renameat, sys_linkat, sys_symlinkat, sys_readlinkat
.word sys_fchmodat, sys_faccessat, sys_pselect6, sys_ppoll, sys_unshare .word sys_fchmodat, sys_faccessat, sys_pselect6, sys_ppoll, sys_unshare


@@ -28,6 +28,10 @@ config GENERIC_IRQ_PROBE
bool bool
default y default y
config TIME_LOW_RES
bool
default y
# Turn off some random 386 crap that can affect device config # Turn off some random 386 crap that can affect device config
config ISA config ISA
bool bool


@@ -1,7 +1,7 @@
# #
# Automatically generated make config: don't edit # Automatically generated make config: don't edit
# Linux kernel version: 2.6.16-rc1-git2 # Linux kernel version: 2.6.16-rc3
# Thu Jan 19 10:05:21 2006 # Mon Feb 13 22:31:24 2006
# #
CONFIG_X86_64=y CONFIG_X86_64=y
CONFIG_64BIT=y CONFIG_64BIT=y
@@ -21,7 +21,6 @@ CONFIG_DMI=y
# Code maturity level options # Code maturity level options
# #
CONFIG_EXPERIMENTAL=y CONFIG_EXPERIMENTAL=y
CONFIG_CLEAN_COMPILE=y
CONFIG_LOCK_KERNEL=y CONFIG_LOCK_KERNEL=y
CONFIG_INIT_ENV_ARG_LIMIT=32 CONFIG_INIT_ENV_ARG_LIMIT=32
@@ -267,6 +266,7 @@ CONFIG_NET=y
# #
# Networking options # Networking options
# #
# CONFIG_NETDEBUG is not set
CONFIG_PACKET=y CONFIG_PACKET=y
# CONFIG_PACKET_MMAP is not set # CONFIG_PACKET_MMAP is not set
CONFIG_UNIX=y CONFIG_UNIX=y
@@ -446,7 +446,6 @@ CONFIG_BLK_DEV_PIIX=y
# CONFIG_BLK_DEV_NS87415 is not set # CONFIG_BLK_DEV_NS87415 is not set
# CONFIG_BLK_DEV_PDC202XX_OLD is not set # CONFIG_BLK_DEV_PDC202XX_OLD is not set
CONFIG_BLK_DEV_PDC202XX_NEW=y CONFIG_BLK_DEV_PDC202XX_NEW=y
# CONFIG_PDC202XX_FORCE is not set
# CONFIG_BLK_DEV_SVWKS is not set # CONFIG_BLK_DEV_SVWKS is not set
# CONFIG_BLK_DEV_SIIMAGE is not set # CONFIG_BLK_DEV_SIIMAGE is not set
# CONFIG_BLK_DEV_SIS5513 is not set # CONFIG_BLK_DEV_SIS5513 is not set
@@ -573,7 +572,33 @@ CONFIG_FUSION_MAX_SGE=128
# #
# IEEE 1394 (FireWire) support # IEEE 1394 (FireWire) support
# #
# CONFIG_IEEE1394 is not set CONFIG_IEEE1394=y
#
# Subsystem Options
#
# CONFIG_IEEE1394_VERBOSEDEBUG is not set
# CONFIG_IEEE1394_OUI_DB is not set
# CONFIG_IEEE1394_EXTRA_CONFIG_ROMS is not set
# CONFIG_IEEE1394_EXPORT_FULL_API is not set
#
# Device Drivers
#
#
# Texas Instruments PCILynx requires I2C
#
CONFIG_IEEE1394_OHCI1394=y
#
# Protocol Drivers
#
# CONFIG_IEEE1394_VIDEO1394 is not set
# CONFIG_IEEE1394_SBP2 is not set
# CONFIG_IEEE1394_ETH1394 is not set
# CONFIG_IEEE1394_DV1394 is not set
CONFIG_IEEE1394_RAWIO=y
# #
# I2O device support # I2O device support
@@ -772,6 +797,7 @@ CONFIG_SERIAL_8250_RUNTIME_UARTS=4
# #
CONFIG_SERIAL_CORE=y CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y CONFIG_SERIAL_CORE_CONSOLE=y
# CONFIG_SERIAL_JSM is not set
CONFIG_UNIX98_PTYS=y CONFIG_UNIX98_PTYS=y
CONFIG_LEGACY_PTYS=y CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256 CONFIG_LEGACY_PTY_COUNT=256
@@ -871,6 +897,7 @@ CONFIG_HPET_MMAP=y
# #
CONFIG_HWMON=y CONFIG_HWMON=y
# CONFIG_HWMON_VID is not set # CONFIG_HWMON_VID is not set
# CONFIG_SENSORS_F71805F is not set
# CONFIG_SENSORS_HDAPS is not set # CONFIG_SENSORS_HDAPS is not set
# CONFIG_HWMON_DEBUG_CHIP is not set # CONFIG_HWMON_DEBUG_CHIP is not set
@@ -1101,7 +1128,6 @@ CONFIG_USB_MON=y
# EDAC - error detection and reporting (RAS) # EDAC - error detection and reporting (RAS)
# #
# CONFIG_EDAC is not set # CONFIG_EDAC is not set
# CONFIG_EDAC_POLL is not set
# #
# Firmware Drivers # Firmware Drivers
@@ -1291,14 +1317,12 @@ CONFIG_DETECT_SOFTLOCKUP=y
# CONFIG_DEBUG_SPINLOCK is not set # CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_SPINLOCK_SLEEP is not set # CONFIG_DEBUG_SPINLOCK_SLEEP is not set
# CONFIG_DEBUG_KOBJECT is not set # CONFIG_DEBUG_KOBJECT is not set
# CONFIG_DEBUG_INFO is not set CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_FS=y CONFIG_DEBUG_FS=y
# CONFIG_DEBUG_VM is not set # CONFIG_DEBUG_VM is not set
# CONFIG_FRAME_POINTER is not set # CONFIG_FRAME_POINTER is not set
# CONFIG_FORCED_INLINING is not set # CONFIG_FORCED_INLINING is not set
# CONFIG_UNWIND_INFO is not set
# CONFIG_RCU_TORTURE_TEST is not set # CONFIG_RCU_TORTURE_TEST is not set
CONFIG_INIT_DEBUG=y
# CONFIG_DEBUG_RODATA is not set # CONFIG_DEBUG_RODATA is not set
# CONFIG_IOMMU_DEBUG is not set # CONFIG_IOMMU_DEBUG is not set


@@ -1152,6 +1152,7 @@ __setup("noapicmaintimer", setup_noapicmaintimer);
static __init int setup_apicpmtimer(char *s) static __init int setup_apicpmtimer(char *s)
{ {
apic_calibrate_pmtmr = 1; apic_calibrate_pmtmr = 1;
notsc_setup(NULL);
return setup_apicmaintimer(NULL); return setup_apicmaintimer(NULL);
} }
__setup("apicpmtimer", setup_apicpmtimer); __setup("apicpmtimer", setup_apicpmtimer);


@@ -554,6 +554,7 @@ iret_label:
/* running with kernel gs */ /* running with kernel gs */
bad_iret: bad_iret:
movq $-9999,%rdi /* better code? */ movq $-9999,%rdi /* better code? */
sti
jmp do_exit jmp do_exit
.previous .previous


@@ -213,6 +213,11 @@ ENTRY(early_idt_handler)
cmpl $2,early_recursion_flag(%rip) cmpl $2,early_recursion_flag(%rip)
jz 1f jz 1f
call dump_stack call dump_stack
#ifdef CONFIG_KALLSYMS
leaq early_idt_ripmsg(%rip),%rdi
movq 8(%rsp),%rsi # get rip again
call __print_symbol
#endif
1: hlt 1: hlt
jmp 1b jmp 1b
early_recursion_flag: early_recursion_flag:
@@ -220,6 +225,8 @@ early_recursion_flag:
early_idt_msg: early_idt_msg:
.asciz "PANIC: early exception rip %lx error %lx cr2 %lx\n" .asciz "PANIC: early exception rip %lx error %lx cr2 %lx\n"
early_idt_ripmsg:
.asciz "RIP %s\n"
.code32 .code32
ENTRY(no_long_mode) ENTRY(no_long_mode)


@@ -30,6 +30,9 @@
#include <linux/mc146818rtc.h> #include <linux/mc146818rtc.h>
#include <linux/acpi.h> #include <linux/acpi.h>
#include <linux/sysdev.h> #include <linux/sysdev.h>
#ifdef CONFIG_ACPI
#include <acpi/acpi_bus.h>
#endif
#include <asm/io.h> #include <asm/io.h>
#include <asm/smp.h> #include <asm/smp.h>
@@ -260,6 +263,8 @@ __setup("apic", enable_ioapic_setup);
And another hack to disable the IOMMU on VIA chipsets. And another hack to disable the IOMMU on VIA chipsets.
... and others. Really should move this somewhere else.
Kludge-O-Rama. */ Kludge-O-Rama. */
void __init check_ioapic(void) void __init check_ioapic(void)
{ {
@@ -307,6 +312,17 @@ void __init check_ioapic(void)
case PCI_VENDOR_ID_ATI: case PCI_VENDOR_ID_ATI:
if (apic_runs_main_timer != 0) if (apic_runs_main_timer != 0)
break; break;
#ifdef CONFIG_ACPI
/* Don't do this for laptops right
right now because their timer
doesn't necessarily tick in C2/3 */
if (acpi_fadt.revision >= 3 &&
(acpi_fadt.plvl2_lat + acpi_fadt.plvl3_lat) < 1100) {
printk(KERN_INFO
"ATI board detected, but seems to be a laptop. Timer might be shakey, sorry\n");
break;
}
#endif
printk(KERN_INFO printk(KERN_INFO
"ATI board detected. Using APIC/PM timer.\n"); "ATI board detected. Using APIC/PM timer.\n");
apic_runs_main_timer = 1; apic_runs_main_timer = 1;


@@ -288,9 +288,9 @@ static int __init smp_read_mpc(struct mp_config_table *mpc)
memcpy(str,mpc->mpc_productid,12); memcpy(str,mpc->mpc_productid,12);
str[12]=0; str[12]=0;
printk(KERN_INFO "Product ID: %s ",str); printk("Product ID: %s ",str);
printk(KERN_INFO "APIC at: 0x%X\n",mpc->mpc_lapic); printk("APIC at: 0x%X\n",mpc->mpc_lapic);
/* save the local APIC address, it might be non-default */ /* save the local APIC address, it might be non-default */
if (!acpi_lapic) if (!acpi_lapic)


@@ -236,6 +236,7 @@ static void enable_lapic_nmi_watchdog(void)
{ {
if (nmi_active < 0) { if (nmi_active < 0) {
nmi_watchdog = NMI_LOCAL_APIC; nmi_watchdog = NMI_LOCAL_APIC;
touch_nmi_watchdog();
setup_apic_nmi_watchdog(); setup_apic_nmi_watchdog();
} }
} }
@@ -456,15 +457,17 @@ static DEFINE_PER_CPU(int, nmi_touch);
void touch_nmi_watchdog (void) void touch_nmi_watchdog (void)
{ {
int i; if (nmi_watchdog > 0) {
unsigned cpu;
/* /*
* Tell other CPUs to reset their alert counters. We cannot * Tell other CPUs to reset their alert counters. We cannot
* do it ourselves because the alert count increase is not * do it ourselves because the alert count increase is not
* atomic. * atomic.
*/ */
for (i = 0; i < NR_CPUS; i++) for_each_present_cpu (cpu)
per_cpu(nmi_touch, i) = 1; per_cpu(nmi_touch, cpu) = 1;
}
touch_softlockup_watchdog(); touch_softlockup_watchdog();
} }


@@ -1327,8 +1327,7 @@ static int __init nohpet_setup(char *s)
__setup("nohpet", nohpet_setup); __setup("nohpet", nohpet_setup);
int __init notsc_setup(char *s)
static int __init notsc_setup(char *s)
{ {
notsc = 1; notsc = 1;
return 0; return 0;


@@ -155,7 +155,7 @@ int __init k8_scan_nodes(unsigned long start, unsigned long end)
if (!found) if (!found)
return -1; return -1;
memnode_shift = compute_hash_shift(nodes, numnodes); memnode_shift = compute_hash_shift(nodes, 8);
if (memnode_shift < 0) { if (memnode_shift < 0) {
printk(KERN_ERR "No NUMA node hash function found. Contact maintainer\n"); printk(KERN_ERR "No NUMA node hash function found. Contact maintainer\n");
return -1; return -1;


@@ -351,7 +351,7 @@ void __init init_cpu_to_node(void)
continue; continue;
if (apicid_to_node[apicid] == NUMA_NO_NODE) if (apicid_to_node[apicid] == NUMA_NO_NODE)
continue; continue;
cpu_to_node[i] = apicid_to_node[apicid]; numa_set_node(i,apicid_to_node[apicid]);
} }
} }


@@ -228,7 +228,8 @@ static int nodes_cover_memory(void)
} }
e820ram = end_pfn - e820_hole_size(0, end_pfn); e820ram = end_pfn - e820_hole_size(0, end_pfn);
if (pxmram < e820ram) { /* We seem to lose 3 pages somewhere. Allow a bit of slack. */
if ((long)(e820ram - pxmram) >= 1*1024*1024) {
printk(KERN_ERR printk(KERN_ERR
"SRAT: PXMs only cover %luMB of your %luMB e820 RAM. Not used.\n", "SRAT: PXMs only cover %luMB of your %luMB e820 RAM. Not used.\n",
(pxmram << PAGE_SHIFT) >> 20, (pxmram << PAGE_SHIFT) >> 20,
@@ -270,7 +271,7 @@ int __init acpi_scan_nodes(unsigned long start, unsigned long end)
return -1; return -1;
} }
memnode_shift = compute_hash_shift(nodes, nodes_weight(nodes_parsed)); memnode_shift = compute_hash_shift(nodes, MAX_NUMNODES);
if (memnode_shift < 0) { if (memnode_shift < 0) {
printk(KERN_ERR printk(KERN_ERR
"SRAT: No NUMA node hash function found. Contact maintainer\n"); "SRAT: No NUMA node hash function found. Contact maintainer\n");


@@ -391,8 +391,7 @@ acpi_rs_get_list_length(u8 * aml_buffer,
* Ensure a 32-bit boundary for the structure * Ensure a 32-bit boundary for the structure
*/ */
extra_struct_bytes = extra_struct_bytes =
ACPI_ROUND_UP_to_32_bITS(resource_length) - ACPI_ROUND_UP_to_32_bITS(resource_length);
resource_length;
break; break;
case ACPI_RESOURCE_NAME_END_TAG: case ACPI_RESOURCE_NAME_END_TAG:
@@ -408,8 +407,7 @@ acpi_rs_get_list_length(u8 * aml_buffer,
* Add vendor data and ensure a 32-bit boundary for the structure * Add vendor data and ensure a 32-bit boundary for the structure
*/ */
extra_struct_bytes = extra_struct_bytes =
ACPI_ROUND_UP_to_32_bITS(resource_length) - ACPI_ROUND_UP_to_32_bITS(resource_length);
resource_length;
break; break;
case ACPI_RESOURCE_NAME_ADDRESS32: case ACPI_RESOURCE_NAME_ADDRESS32:


@@ -645,7 +645,7 @@ static void pkt_copy_bio_data(struct bio *src_bio, int seg, int offs, struct pag
* b) The data can be used as cache to avoid read requests if we receive a * b) The data can be used as cache to avoid read requests if we receive a
* new write request for the same zone. * new write request for the same zone.
*/ */
static void pkt_make_local_copy(struct packet_data *pkt, struct page **pages, int *offsets) static void pkt_make_local_copy(struct packet_data *pkt, struct bio_vec *bvec)
{ {
int f, p, offs; int f, p, offs;
@@ -653,15 +653,15 @@ static void pkt_make_local_copy(struct packet_data *pkt, struct page **pages, in
p = 0; p = 0;
offs = 0; offs = 0;
for (f = 0; f < pkt->frames; f++) { for (f = 0; f < pkt->frames; f++) {
if (pages[f] != pkt->pages[p]) { if (bvec[f].bv_page != pkt->pages[p]) {
void *vfrom = kmap_atomic(pages[f], KM_USER0) + offsets[f]; void *vfrom = kmap_atomic(bvec[f].bv_page, KM_USER0) + bvec[f].bv_offset;
void *vto = page_address(pkt->pages[p]) + offs; void *vto = page_address(pkt->pages[p]) + offs;
memcpy(vto, vfrom, CD_FRAMESIZE); memcpy(vto, vfrom, CD_FRAMESIZE);
kunmap_atomic(vfrom, KM_USER0); kunmap_atomic(vfrom, KM_USER0);
pages[f] = pkt->pages[p]; bvec[f].bv_page = pkt->pages[p];
offsets[f] = offs; bvec[f].bv_offset = offs;
} else { } else {
BUG_ON(offsets[f] != offs); BUG_ON(bvec[f].bv_offset != offs);
} }
offs += CD_FRAMESIZE; offs += CD_FRAMESIZE;
if (offs >= PAGE_SIZE) { if (offs >= PAGE_SIZE) {
@@ -991,18 +991,17 @@ try_next_bio:
static void pkt_start_write(struct pktcdvd_device *pd, struct packet_data *pkt) static void pkt_start_write(struct pktcdvd_device *pd, struct packet_data *pkt)
{ {
struct bio *bio; struct bio *bio;
struct page *pages[PACKET_MAX_SIZE];
int offsets[PACKET_MAX_SIZE];
int f; int f;
int frames_write; int frames_write;
struct bio_vec *bvec = pkt->w_bio->bi_io_vec;
for (f = 0; f < pkt->frames; f++) { for (f = 0; f < pkt->frames; f++) {
pages[f] = pkt->pages[(f * CD_FRAMESIZE) / PAGE_SIZE]; bvec[f].bv_page = pkt->pages[(f * CD_FRAMESIZE) / PAGE_SIZE];
offsets[f] = (f * CD_FRAMESIZE) % PAGE_SIZE; bvec[f].bv_offset = (f * CD_FRAMESIZE) % PAGE_SIZE;
} }
/* /*
* Fill-in pages[] and offsets[] with data from orig_bios. * Fill-in bvec with data from orig_bios.
*/ */
frames_write = 0; frames_write = 0;
spin_lock(&pkt->lock); spin_lock(&pkt->lock);
@@ -1024,11 +1023,11 @@ static void pkt_start_write(struct pktcdvd_device *pd, struct packet_data *pkt)
} }
if (src_bvl->bv_len - src_offs >= CD_FRAMESIZE) { if (src_bvl->bv_len - src_offs >= CD_FRAMESIZE) {
pages[f] = src_bvl->bv_page; bvec[f].bv_page = src_bvl->bv_page;
offsets[f] = src_bvl->bv_offset + src_offs; bvec[f].bv_offset = src_bvl->bv_offset + src_offs;
} else { } else {
pkt_copy_bio_data(bio, segment, src_offs, pkt_copy_bio_data(bio, segment, src_offs,
pages[f], offsets[f]); bvec[f].bv_page, bvec[f].bv_offset);
} }
src_offs += CD_FRAMESIZE; src_offs += CD_FRAMESIZE;
frames_write++; frames_write++;
@@ -1042,7 +1041,7 @@ static void pkt_start_write(struct pktcdvd_device *pd, struct packet_data *pkt)
BUG_ON(frames_write != pkt->write_size); BUG_ON(frames_write != pkt->write_size);
if (test_bit(PACKET_MERGE_SEGS, &pd->flags) || (pkt->write_size < pkt->frames)) { if (test_bit(PACKET_MERGE_SEGS, &pd->flags) || (pkt->write_size < pkt->frames)) {
pkt_make_local_copy(pkt, pages, offsets); pkt_make_local_copy(pkt, bvec);
pkt->cache_valid = 1; pkt->cache_valid = 1;
} else { } else {
pkt->cache_valid = 0; pkt->cache_valid = 0;
@@ -1055,17 +1054,9 @@ static void pkt_start_write(struct pktcdvd_device *pd, struct packet_data *pkt)
pkt->w_bio->bi_bdev = pd->bdev; pkt->w_bio->bi_bdev = pd->bdev;
pkt->w_bio->bi_end_io = pkt_end_io_packet_write; pkt->w_bio->bi_end_io = pkt_end_io_packet_write;
pkt->w_bio->bi_private = pkt; pkt->w_bio->bi_private = pkt;
for (f = 0; f < pkt->frames; f++) { for (f = 0; f < pkt->frames; f++)
if ((f + 1 < pkt->frames) && (pages[f + 1] == pages[f]) && if (!bio_add_page(pkt->w_bio, bvec[f].bv_page, CD_FRAMESIZE, bvec[f].bv_offset))
(offsets[f + 1] = offsets[f] + CD_FRAMESIZE)) { BUG();
if (!bio_add_page(pkt->w_bio, pages[f], CD_FRAMESIZE * 2, offsets[f]))
BUG();
f++;
} else {
if (!bio_add_page(pkt->w_bio, pages[f], CD_FRAMESIZE, offsets[f]))
BUG();
}
}
VPRINTK("pktcdvd: vcnt=%d\n", pkt->w_bio->bi_vcnt); VPRINTK("pktcdvd: vcnt=%d\n", pkt->w_bio->bi_vcnt);
atomic_set(&pkt->io_wait, 1); atomic_set(&pkt->io_wait, 1);
@@ -1548,7 +1539,7 @@ static int pkt_good_disc(struct pktcdvd_device *pd, disc_information *di)
case 0x12: /* DVD-RAM */ case 0x12: /* DVD-RAM */
return 0; return 0;
default: default:
printk("pktcdvd: Wrong disc profile (%x)\n", pd->mmc3_profile); VPRINTK("pktcdvd: Wrong disc profile (%x)\n", pd->mmc3_profile);
return 1; return 1;
} }
@@ -1894,8 +1885,8 @@ static int pkt_open_write(struct pktcdvd_device *pd)
unsigned int write_speed, media_write_speed, read_speed; unsigned int write_speed, media_write_speed, read_speed;
if ((ret = pkt_probe_settings(pd))) { if ((ret = pkt_probe_settings(pd))) {
DPRINTK("pktcdvd: %s failed probe\n", pd->name); VPRINTK("pktcdvd: %s failed probe\n", pd->name);
return -EIO; return -EROFS;
} }
if ((ret = pkt_set_write_settings(pd))) { if ((ret = pkt_set_write_settings(pd))) {
@@ -2053,10 +2044,9 @@ static int pkt_open(struct inode *inode, struct file *file)
goto out_dec; goto out_dec;
} }
} else { } else {
if (pkt_open_dev(pd, file->f_mode & FMODE_WRITE)) { ret = pkt_open_dev(pd, file->f_mode & FMODE_WRITE);
ret = -EIO; if (ret)
goto out_dec; goto out_dec;
}
/* /*
* needed here as well, since ext2 (among others) may change * needed here as well, since ext2 (among others) may change
* the blocksize at mount time * the blocksize at mount time
@@ -2436,11 +2426,12 @@ static int pkt_ioctl(struct inode *inode, struct file *file, unsigned int cmd, u
* The door gets locked when the device is opened, so we * The door gets locked when the device is opened, so we
* have to unlock it or else the eject command fails. * have to unlock it or else the eject command fails.
*/ */
pkt_lock_door(pd, 0); if (pd->refcnt == 1)
pkt_lock_door(pd, 0);
return blkdev_ioctl(pd->bdev->bd_inode, file, cmd, arg); return blkdev_ioctl(pd->bdev->bd_inode, file, cmd, arg);
default: default:
printk("pktcdvd: Unknown ioctl for %s (%x)\n", pd->name, cmd); VPRINTK("pktcdvd: Unknown ioctl for %s (%x)\n", pd->name, cmd);
return -ENOTTY; return -ENOTTY;
} }


@@ -474,18 +474,6 @@ static int bt3c_hci_ioctl(struct hci_dev *hdev, unsigned int cmd, unsigned long
/* ======================== Card services HCI interaction ======================== */ /* ======================== Card services HCI interaction ======================== */
static struct device *bt3c_device(void)
{
static struct device dev = {
.bus_id = "pcmcia",
};
kobject_set_name(&dev.kobj, "bt3c");
kobject_init(&dev.kobj);
return &dev;
}
static int bt3c_load_firmware(bt3c_info_t *info, unsigned char *firmware, int count) static int bt3c_load_firmware(bt3c_info_t *info, unsigned char *firmware, int count)
{ {
char *ptr = (char *) firmware; char *ptr = (char *) firmware;
@@ -574,6 +562,7 @@ static int bt3c_open(bt3c_info_t *info)
{ {
const struct firmware *firmware; const struct firmware *firmware;
struct hci_dev *hdev; struct hci_dev *hdev;
client_handle_t handle;
int err; int err;
spin_lock_init(&(info->lock)); spin_lock_init(&(info->lock));
@@ -605,8 +594,10 @@ static int bt3c_open(bt3c_info_t *info)
hdev->owner = THIS_MODULE; hdev->owner = THIS_MODULE;
handle = info->link.handle;
/* Load firmware */ /* Load firmware */
err = request_firmware(&firmware, "BT3CPCC.bin", bt3c_device()); err = request_firmware(&firmware, "BT3CPCC.bin", &handle_to_dev(handle));
if (err < 0) { if (err < 0) {
BT_ERR("Firmware request failed"); BT_ERR("Firmware request failed");
goto error; goto error;


@@ -85,7 +85,6 @@
{0x1002, 0x5969, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV100}, \ {0x1002, 0x5969, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV100}, \
{0x1002, 0x596A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280}, \ {0x1002, 0x596A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280}, \
{0x1002, 0x596B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280}, \ {0x1002, 0x596B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280}, \
{0x1002, 0x5b60, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV350}, \
{0x1002, 0x5c61, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280|CHIP_IS_MOBILITY}, \ {0x1002, 0x5c61, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280|CHIP_IS_MOBILITY}, \
{0x1002, 0x5c62, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280}, \ {0x1002, 0x5c62, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280}, \
{0x1002, 0x5c63, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280|CHIP_IS_MOBILITY}, \ {0x1002, 0x5c63, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280|CHIP_IS_MOBILITY}, \


@@ -150,17 +150,6 @@ static void rs_wait_until_sent(struct tty_struct *, int);
/* Standard COM flags (except for COM4, because of the 8514 problem) */ /* Standard COM flags (except for COM4, because of the 8514 problem) */
#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST) #define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST)
/*
* tmp_buf is used as a temporary buffer by serial_write. We need to
* lock it in case the memcpy_fromfs blocks while swapping in a page,
* and some other program tries to do a serial write at the same time.
* Since the lock will only come under contention when the system is
* swapping and available memory is low, it makes sense to share one
* buffer across all the serial ports, since it significantly saves
* memory if large numbers of serial ports are open.
*/
static unsigned char *tmp_buf;
static inline int serial_paranoia_check(struct esp_struct *info, static inline int serial_paranoia_check(struct esp_struct *info,
char *name, const char *routine) char *name, const char *routine)
{ {
@@ -1267,7 +1256,7 @@ static int rs_write(struct tty_struct * tty,
if (serial_paranoia_check(info, tty->name, "rs_write")) if (serial_paranoia_check(info, tty->name, "rs_write"))
return 0; return 0;
if (!tty || !info->xmit_buf || !tmp_buf) if (!tty || !info->xmit_buf)
return 0; return 0;
while (1) { while (1) {
@@ -2291,11 +2280,7 @@ static int esp_open(struct tty_struct *tty, struct file * filp)
tty->driver_data = info; tty->driver_data = info;
info->tty = tty; info->tty = tty;
if (!tmp_buf) { spin_unlock_irqrestore(&info->lock, flags);
tmp_buf = (unsigned char *) get_zeroed_page(GFP_KERNEL);
if (!tmp_buf)
return -ENOMEM;
}
/* /*
* Start up serial port * Start up serial port
@@ -2602,9 +2587,6 @@ static void __exit espserial_exit(void)
free_pages((unsigned long)dma_buffer, free_pages((unsigned long)dma_buffer,
get_order(DMA_BUFFER_SZ)); get_order(DMA_BUFFER_SZ));
if (tmp_buf)
free_page((unsigned long)tmp_buf);
while (free_pio_buf) { while (free_pio_buf) {
pio_buf = free_pio_buf->next; pio_buf = free_pio_buf->next;
kfree(free_pio_buf); kfree(free_pio_buf);


@@ -956,22 +956,18 @@ static acpi_status hpet_resources(struct acpi_resource *res, void *data)
} }
} else if (res->type == ACPI_RESOURCE_TYPE_EXTENDED_IRQ) { } else if (res->type == ACPI_RESOURCE_TYPE_EXTENDED_IRQ) {
struct acpi_resource_extended_irq *irqp; struct acpi_resource_extended_irq *irqp;
int i; int i, irq;
irqp = &res->data.extended_irq; irqp = &res->data.extended_irq;
if (irqp->interrupt_count > 0) { for (i = 0; i < irqp->interrupt_count; i++) {
hdp->hd_nirqs = irqp->interrupt_count; irq = acpi_register_gsi(irqp->interrupts[i],
irqp->triggering, irqp->polarity);
if (irq < 0)
return AE_ERROR;
for (i = 0; i < hdp->hd_nirqs; i++) { hdp->hd_irq[hdp->hd_nirqs] = irq;
int rc = hdp->hd_nirqs++;
acpi_register_gsi(irqp->interrupts[i],
irqp->triggering,
irqp->polarity);
if (rc < 0)
return AE_ERROR;
hdp->hd_irq[i] = rc;
}
} }
} }


@@ -33,6 +33,7 @@
static int TPM_INF_DATA; static int TPM_INF_DATA;
static int TPM_INF_ADDR; static int TPM_INF_ADDR;
static int TPM_INF_BASE; static int TPM_INF_BASE;
static int TPM_INF_ADDR_LEN;
static int TPM_INF_PORT_LEN; static int TPM_INF_PORT_LEN;
/* TPM header definitions */ /* TPM header definitions */
@@ -195,6 +196,7 @@ static int tpm_inf_recv(struct tpm_chip *chip, u8 * buf, size_t count)
int i; int i;
int ret; int ret;
u32 size = 0; u32 size = 0;
number_of_wtx = 0;
recv_begin: recv_begin:
/* start receiving header */ /* start receiving header */
@@ -378,24 +380,35 @@ static int __devinit tpm_inf_pnp_probe(struct pnp_dev *dev,
if (pnp_port_valid(dev, 0) && pnp_port_valid(dev, 1) && if (pnp_port_valid(dev, 0) && pnp_port_valid(dev, 1) &&
!(pnp_port_flags(dev, 0) & IORESOURCE_DISABLED)) { !(pnp_port_flags(dev, 0) & IORESOURCE_DISABLED)) {
TPM_INF_ADDR = pnp_port_start(dev, 0); TPM_INF_ADDR = pnp_port_start(dev, 0);
TPM_INF_ADDR_LEN = pnp_port_len(dev, 0);
TPM_INF_DATA = (TPM_INF_ADDR + 1); TPM_INF_DATA = (TPM_INF_ADDR + 1);
TPM_INF_BASE = pnp_port_start(dev, 1); TPM_INF_BASE = pnp_port_start(dev, 1);
TPM_INF_PORT_LEN = pnp_port_len(dev, 1); TPM_INF_PORT_LEN = pnp_port_len(dev, 1);
if (!TPM_INF_PORT_LEN) if ((TPM_INF_PORT_LEN < 4) || (TPM_INF_ADDR_LEN < 2)) {
return -EINVAL; rc = -EINVAL;
goto err_last;
}
dev_info(&dev->dev, "Found %s with ID %s\n", dev_info(&dev->dev, "Found %s with ID %s\n",
dev->name, dev_id->id); dev->name, dev_id->id);
if (!((TPM_INF_BASE >> 8) & 0xff)) if (!((TPM_INF_BASE >> 8) & 0xff)) {
return -EINVAL; rc = -EINVAL;
goto err_last;
}
/* publish my base address and request region */ /* publish my base address and request region */
tpm_inf.base = TPM_INF_BASE; tpm_inf.base = TPM_INF_BASE;
if (request_region if (request_region
(tpm_inf.base, TPM_INF_PORT_LEN, "tpm_infineon0") == NULL) { (tpm_inf.base, TPM_INF_PORT_LEN, "tpm_infineon0") == NULL) {
release_region(tpm_inf.base, TPM_INF_PORT_LEN); rc = -EINVAL;
return -EINVAL; goto err_last;
}
if (request_region(TPM_INF_ADDR, TPM_INF_ADDR_LEN,
"tpm_infineon0") == NULL) {
rc = -EINVAL;
goto err_last;
} }
} else { } else {
return -EINVAL; rc = -EINVAL;
goto err_last;
} }
/* query chip for its vendor, its version number a.s.o. */ /* query chip for its vendor, its version number a.s.o. */
@@ -443,8 +456,8 @@ static int __devinit tpm_inf_pnp_probe(struct pnp_dev *dev,
dev_err(&dev->dev, dev_err(&dev->dev,
"Could not set IO-ports to 0x%lx\n", "Could not set IO-ports to 0x%lx\n",
tpm_inf.base); tpm_inf.base);
release_region(tpm_inf.base, TPM_INF_PORT_LEN); rc = -EIO;
return -EIO; goto err_release_region;
} }
/* activate register */ /* activate register */
@@ -471,14 +484,21 @@ static int __devinit tpm_inf_pnp_probe(struct pnp_dev *dev,
rc = tpm_register_hardware(&dev->dev, &tpm_inf); rc = tpm_register_hardware(&dev->dev, &tpm_inf);
if (rc < 0) { if (rc < 0) {
release_region(tpm_inf.base, TPM_INF_PORT_LEN); rc = -ENODEV;
return -ENODEV; goto err_release_region;
} }
return 0; return 0;
} else { } else {
dev_info(&dev->dev, "No Infineon TPM found!\n"); rc = -ENODEV;
return -ENODEV; goto err_release_region;
} }
err_release_region:
release_region(tpm_inf.base, TPM_INF_PORT_LEN);
release_region(TPM_INF_ADDR, TPM_INF_ADDR_LEN);
err_last:
return rc;
} }
static __devexit void tpm_inf_pnp_remove(struct pnp_dev *dev) static __devexit void tpm_inf_pnp_remove(struct pnp_dev *dev)
@@ -518,5 +538,5 @@ module_exit(cleanup_inf);
MODULE_AUTHOR("Marcel Selhorst <selhorst@crypto.rub.de>"); MODULE_AUTHOR("Marcel Selhorst <selhorst@crypto.rub.de>");
MODULE_DESCRIPTION("Driver for Infineon TPM SLD 9630 TT 1.1 / SLB 9635 TT 1.2"); MODULE_DESCRIPTION("Driver for Infineon TPM SLD 9630 TT 1.1 / SLB 9635 TT 1.2");
MODULE_VERSION("1.6"); MODULE_VERSION("1.7");
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");

Некоторые файлы не были показаны из-за слишком большого количества измененных файлов Показать больше