Commit graph

74 commits

Author SHA1 Message Date
Benjamin Herrenschmidt 1b70c5a649 [POWERPC] Fix bogus paca->_current initialization
When doing lockdep, I had two patches to initialize paca->_current
early, one bogus, and one correct.  Unfortunately both got merged
as the bad one ended up being part of the main lockdep patch by
mistake.  This causes memory corruption at boot.  This removes
the offending code.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-05-09 20:22:58 +10:00
Paul Mackerras 745a14cc26 [POWERPC] Add fast little-endian switch system call
This adds a system call on 64-bit platforms for switching between
little-endian and big-endian modes that is much faster than doing a
prctl call.  This system call is handled as a special case right at
the start of the system call entry code, and because it is a special
case, it uses a system call number which is out of the range of
normal system calls, namely 0x1ebe.

Measurements with lmbench on a 4.2GHz POWER6 showed no measurable
change in the speed of normal system calls with this patch.

Switching endianness with this new system call takes around 60ns on a
4.2GHz POWER6, compared with around 300ns to switch endian mode with a
prctl.  This can provide a significant performance advantage for
emulators for little-endian architectures that want to switch between
big-endian and little-endian mode frequently, e.g. because they are
generating instructions sequences on the fly and they want to run
those sequences in little-endian mode.

The other thing about this system call is that it doesn't clobber as
many registers as a normal system call.  It only clobbers r12.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-29 15:57:34 +10:00
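A minimal, hypothetical userspace sketch of invoking this call, assuming a big-endian powerpc64 build: 0x1ebe and the r12-only clobber come from the text above, while the ".long 0x02000044" (the sc opcode byte-reversed) is one way to switch straight back, since everything after the first sc executes in the new endianness.

    /* Hypothetical demo: flip to little-endian and immediately back. */
    static inline void flip_endian_and_back(void)
    {
            register unsigned long r0 asm("r0") = 0x1ebe;

            asm volatile(
                    "sc\n\t"               /* switch endianness; only r12
                                              is clobbered, r0 survives  */
                    ".long 0x02000044\n\t" /* 'sc' byte-reversed, decoded
                                              correctly in LE mode, which
                                              switches us back to BE     */
                    : "+r" (r0)
                    :
                    : "r12", "memory");
    }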
Benjamin Herrenschmidt 945feb174b [POWERPC] irqtrace support for 64-bit powerpc
This adds the low level irq tracing hooks to the powerpc architecture
needed to enable full lockdep functionality.

This is partly based on Johannes Berg's initial version.  I removed
the asm trampoline that isn't needed (thus improving performance) and
modified all sorts of bits and pieces, reworking most of the assembly,
etc...

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-18 15:38:47 +10:00
Benjamin Herrenschmidt 7c6352a469 [POWERPC] Initialize paca->current earlier
Currently, we initialize the "current" pointer in the PACA (which
is used by the "current" macro in the kernel) before calling
setup_system(). That means that early_setup() is called with
current still "NULL" which is -not- a good idea. It happens to
work so far but breaks with lockdep when early code calls printk.

This changes it so that all PACAs are statically initialized with
__current pointing to the init task. For non-0 CPUs, this is fixed
up before use.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-17 07:46:10 +10:00
Paul Mackerras 320787c75c [POWERPC] Fix handling of unrecoverable SLB miss interrupts
If an SLB miss interrupt happens while the RI bit of MSR is zero, we
can't just return, because RI being zero indicates that SRR0/SRR1
potentially had live values in them, and the process of taking an
interrupt overwrites them.

This should never happen, but if it does, we try to print a nice oops
message.  That doesn't work, however, because the code at unrecov_slb
assumes that the MMU has been turned on, but we call it with the MMU
off (and have done so since the SLB miss handler was rewritten to run
without turning the MMU on) -- except on iSeries, where everything runs
with the MMU on.

This fixes it by adding the necessary code to turn the MMU on if
necessary.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-14 21:11:22 +10:00
Benjamin Herrenschmidt ff3da2e093 [POWERPC] Fix iSeries hard irq enabling regression
A subtle bug sneaked into iSeries recently.  On this platform, we must
not normally clear MSR:EE (the hardware external interrupt enable)
except for short periods of time.  Taking an interrupt while
soft-disabled doesn't cause us to clear it for example.

The iSeries kernel expects to mostly run with MSR:EE enabled at all
times except in a few exception entry/exit code paths.  Thus
local_irq_enable() doesn't check if it needs to hard-enable as it
expects this to be unnecessary on iSeries.

However, hard_irq_disable() _does_ cause MSR:EE to be cleared,
including on iSeries.  A call to it was recently added to the
context switch code, thus causing interrupts to become disabled
for long periods of time, causing the iSeries watchdog to kick
in under some circumstances and other nasty things.

This patch fixes it by making local_irq_enable() properly re-enable
MSR:EE on iSeries.  It basically removes a return statement here
to make iSeries use the same code path as everybody else.  That does
mean that we might occasionally get spurious decrementer interrupts
but I don't think that matters.

Another option would have been to make hard_irq_disable() a nop
on iSeries but I didn't like it much, in case we have good reasons
to hard-disable.

Part of the patch consists of fixes to make sure the hard_enabled PACA
field is properly set on iSeries, which it previously was not, since it
was mostly unused.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-03 22:10:34 +11:00
Paul Mackerras fa28237cfc [POWERPC] Provide a way to protect 4k subpages when using 64k pages
Using 64k pages on 64-bit PowerPC systems makes life difficult for
emulators that are trying to emulate an ISA, such as x86, which uses a
smaller page size, since the emulator can no longer use the MMU and
the normal system calls for controlling page protections.  Of course,
the emulator can emulate the MMU by checking and possibly remapping
the address for each memory access in software, but that is pretty
slow.

This provides a facility for such programs to control the access
permissions on individual 4k sub-pages of 64k pages.  The idea is
that the emulator supplies an array of protection masks to apply to a
specified range of virtual addresses.  These masks are applied at the
level where hardware PTEs are inserted into the hardware page table
based on the Linux PTEs, so the Linux PTEs are not affected.  Note
that this new mechanism does not allow any access that would otherwise
be prohibited; it can only prohibit accesses that would otherwise be
allowed.  This new facility is only available on 64-bit PowerPC and
only when the kernel is configured for 64k pages.

The masks are supplied using a new subpage_prot system call, which
takes a starting virtual address and length, and a pointer to an array
of protection masks in memory.  The array has a 32-bit word per 64k
page to be protected; each 32-bit word consists of 16 2-bit fields,
for which 0 allows any access (that is otherwise allowed), 1 prevents
write accesses, and 2 or 3 prevent any access.

Implicit in this is that the regions of the address space that are
protected are switched to use 4k hardware pages rather than 64k
hardware pages (on machines with hardware 64k page support).  In fact
the whole process is switched to use 4k hardware pages when the
subpage_prot system call is used, but this could be improved in future
to switch only the affected segments.

The subpage protection bits are stored in a 3 level tree akin to the
page table tree.  The top level of this tree is stored in a structure
that is appended to the top level of the page table tree, i.e., the
pgd array.  Since it will often only be 32-bit addresses (below 4GB)
that are protected, the pointers to the first four bottom level pages
are also stored in this structure (each bottom level page contains the
protection bits for 1GB of address space), so the protection bits for
addresses below 4GB can be accessed with one fewer load than those
for higher addresses.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-01-24 10:06:01 +11:00
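A hedged usage sketch: the syscall number (310) and the low-bits-first field-to-subpage ordering inside each 32-bit word are assumptions to check against the target kernel; the mask semantics (0 allow, 1 deny writes, 2/3 deny all) are as described above.

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef __NR_subpage_prot
    #define __NR_subpage_prot 310           /* assumed value; verify locally */
    #endif

    static long subpage_prot(unsigned long addr, unsigned long len,
                             uint32_t *map)
    {
            return syscall(__NR_subpage_prot, addr, len, map);
    }

    int main(void)
    {
            size_t len = 0x10000;           /* one 64k page */
            void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            uint32_t map[1] = { 0 };        /* one 32-bit word per 64k page */

            if (buf == MAP_FAILED)
                    return 1;
            map[0] |= 1u << (0 * 2);        /* 4k subpage 0: read-only */
            map[0] |= 2u << (1 * 2);        /* 4k subpage 1: no access */
            if (subpage_prot((unsigned long)buf, len, map))
                    perror("subpage_prot");
            return 0;
    }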
Benjamin Herrenschmidt a792e75d9b [POWERPC] Fix si_addr value on low level hash failures
If the low level MMU hash table insertion returns an error (which
can happen in some rare circumstances when the hypervisor refuses
the insertion of a PTE, typically if you try to access junk via
/dev/mem), the generated signal had an incorrect si_addr value due
to a bug in the assembly, which was loading it as a 32 bits quantity
instead of a 64 bits quantity.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-11-08 14:15:34 +11:00
Paul Mackerras 1189be6508 [POWERPC] Use 1TB segments
This makes the kernel use 1TB segments for all kernel mappings and for
user addresses of 1TB and above, on machines which support them
(currently POWER5+, POWER6 and PA6T).

We detect that the machine supports 1TB segments by looking at the
ibm,processor-segment-sizes property in the device tree.

We don't currently use 1TB segments for user addresses < 1T, since
that would effectively prevent 32-bit processes from using huge pages
unless we also had a way to revert to using 256MB segments.  That
would be possible but would involve extra complications (such as
keeping track of which segment size was used when HPTEs were inserted)
and is not addressed here.

Parts of this patch were originally written by Ben Herrenschmidt.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-10-12 14:05:17 +10:00
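A sketch of the detection step, with assumptions stated: the property is a list of 32-bit cells holding supported segment-size shifts (28 for 256MB, 40 for 1TB), of_scan_flat_dt/of_get_flat_dt_prop are the usual flat-device-tree helpers, and found_1t_segments stands in for the kernel's real feature bit.

    static int found_1t_segments;           /* illustrative flag */

    static int __init dt_scan_seg_sizes(unsigned long node,
                                        const char *uname, int depth,
                                        void *data)
    {
            const char *type = of_get_flat_dt_prop(node, "device_type", NULL);
            const u32 *prop;
            int size;

            if (!type || strcmp(type, "cpu") != 0)
                    return 0;               /* only cpu nodes carry it */

            prop = of_get_flat_dt_prop(node, "ibm,processor-segment-sizes",
                                       &size);
            for (; prop && size >= 4; size -= 4, prop++)
                    if (*prop == 40) {      /* 2^40 bytes = 1TB segments */
                            found_1t_segments = 1;
                            return 1;       /* stop scanning */
                    }
            return 0;
    }
    /* usage, during early setup: of_scan_flat_dt(dt_scan_seg_sizes, NULL); */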
Stephen Rothwell 9e4859ef54 [POWERPC] FWNMI is only used on pSeries
This saves 4k on non-pSeries builds (except for iSeries, where it saves
almost 4k).

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:25:34 +10:00
Stephen Rothwell ee7a76da1e [POWERPC] Size swapper_pg_dir correctly
David Gibson pointed out that swapper_pg_dir actually needs to be
PGD_TABLE_SIZE bytes long, not PAGE_SIZE.  This actually saves 64k in
the bss for a kernel ppc64_defconfig built with CONFIG_PPC_64K_PAGES.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:25:34 +10:00
Stephen Rothwell 19a8d97d89 [POWERPC] Remove cmd_line from head*.S
It is just a C char array, so declare it thusly.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:25:34 +10:00
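In effect, the space formerly reserved with a .space directive in head*.S becomes an ordinary C definition, roughly (placement in a setup file assumed):

    char cmd_line[COMMAND_LINE_SIZE];       /* kernel command line buffer */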
Olof Johansson a416561bf7 [POWERPC] Move lowlevel runlatch calls under cpu feature control
There's no need to call the runlatch-on functions on processors that
don't implement them (CPU_FTR_CTRL).

Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-14 01:33:22 +10:00
Stephen Rothwell dc8f571a26 [POWERPC] Move the iSeries exception vectors
out of head_64.S and into platforms/iseries/exception.S

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-08-22 16:48:35 +10:00
Stephen Rothwell f9ff0f3048 [POWERPC] Move the exception macros into a header file
It makes head_64.S a bit more readable and will allow us to move the
iSeries exceptions elsewhere.

This also removes the last line of the comment:

 * The following macros define the code that appears as
 * the prologue to each of the exception handlers.  They
 * are split into two parts to allow a single kernel binary
 * to be used for pSeries and iSeries.
 * LOL.  One day... - paulus

Anything is possible. :-)

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-08-22 16:48:35 +10:00
Stephen Rothwell fc68e8699f [POWERPC] Move iSeries startup code out of head_64.S
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-08-22 16:48:34 +10:00
Stephen Rothwell 16a15a30f8 [POWERPC] iSeries: Clean up lparmap mess
We need to have xLparMap in head_64.S so that it is at a fixed address
(because the linker will not resolve (address & 0xffffffff) for us).
But the assembler miscalculates the KERNEL_VSID() expressions.  So put
the confusing expressions into asm-offsets.c.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-08-22 15:21:46 +10:00
Stephen Rothwell 939e60f680 [POWERPC] Fix more section mismatches in head_64.S
WARNING: vmlinux.o(.text+0x8174): Section mismatch: reference to .init.text:.prom_init (between '.__boot_from_prom' and '.__after_prom_start')
WARNING: vmlinux.o(.text+0x8498): Section mismatch: reference to .init.text:.early_setup (between '.start_here_multiplatform' and '.start_here_common')
WARNING: vmlinux.o(.text+0x8514): Section mismatch: reference to .init.text:.setup_system (between '.start_here_common' and 'system_call_common')
WARNING: vmlinux.o(.text+0x8530): Section mismatch: reference to .init.text:.start_kernel (between '.start_here_common' and 'system_call_common')

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-08-10 21:04:41 +10:00
Stephen Rothwell c40b91b59d [POWERPC] iSeries: Fix section mismatch warnings
WARNING: vmlinux.o(.text+0x8124): Section mismatch: reference to .init.text:.iSeries_early_setup (between '.__start_initialization_iSeries' and '.__mmu_off')
WARNING: vmlinux.o(.text+0x8128): Section mismatch: reference to .init.text:.early_setup (between '.__start_initialization_iSeries' and '.__mmu_off')

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-26 16:12:17 +10:00
Geoff Levand 75423b7ba5 [POWERPC] Correct __secondary_hold comment
Remove references to pSeries and OpenFirmware in the __secondary_hold
usage comment.  __secondary_hold is a generic routine and can be used
by other platforms.

Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-28 19:16:47 +10:00
Olof Johansson 687304014f [POWERPC] Save trap number in bad_stack
Save the trap number in the case of getting a bad stack in an exception
handler. It is sometimes useful to know what exception it was that caused
this to happen. Without this, no trap number is reported.

Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-24 22:06:59 +10:00
Sonny Rao a5bcbcaff2 [POWERPC] Remove stale comment from head_64.S
This is now inaccurate because we may not have entered prom_init() and
r3 is overwritten immediately anyway.

Signed-off-by: Sonny Rao <sonny@burdell.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-13 03:55:14 +10:00
MOKUNO Masakazu fdc0a9be3a [POWERPC] Remove some redundant isync instructions
Remove some redundant isync instructions.

enable_64b_mode() already does an isync, so there is no need to do it again.

Signed-off-by: MOKUNO, Masakazu <mokuno@sm.sony.co.jp>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-03-09 15:03:24 +11:00
Livio Soares 449d846dbc [POWERPC] Fix performance monitor exception
To the issue: at some point during 2.6.20 development, Paul Mackerras
introduced the "lazy IRQ disabling" patch (very cool work, BTW).
In that patch, the performance monitor unit exception was marked as
"maskable", in the sense that if interrupts were soft-disabled, that
exception could be ignored.  This broke my PowerPC profiling code.
The symptom that I see is that a varying number of interrupts
(from 0 to $n$, typically closer to 0) get delivered, when, in
reality, it should always be very close to $n$.

The issue stems from the way masking is being done.  Masking in
this fashion seems to work well with the decrementer and external
interrupts, because they are raised again until "really" handled.
For the PMU, however, this does not apply (at least on my Xserver
machine with a 970FX processor).  If the PMU exception is not handled,
it will _not_ be re-raised (at least on my machine).  The documentation
states that the PMXE bit in MMCR0 is set to 0 when the PMU exception
is raised.  However, software must re-set the bit to re-enable PMU
exceptions.  If the exception is ignored (as currently) not only is
that interrupt lost, but because software does not re-set PMXE, the
PMU registers are "frozen" forever.

[This patch means that performance monitor exceptions are taken and
handled even if irqs are off, as long as some other interrupt hasn't
come along and caused interrupts to be hard-disabled.  In this sense
the PMU exception becomes like an NMI.  The oprofile code for most
powerpc processors does nothing that is unsafe in an NMI context, but
the Cell oprofile code does a spin_lock_irqsave.  However, that turns
out to be OK because Cell doesn't actually use the performance
monitor exception; performance monitor interrupts come in as a
regular interrupt on Cell, so will be disabled when irqs are off.
 -- paulus.]

Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-02-07 14:03:23 +11:00
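A hedged sketch of the consequence described above: hardware clears MMCR0[PMXE] at delivery, so a handler that actually runs must set it back (SPRN_MMCR0, MMCR0_PMXE and MMCR0_FC are the kernel's names for the register and bits; the epilogue function itself is illustrative).

    static void pmu_handler_epilogue(void)
    {
            unsigned long mmcr0 = mfspr(SPRN_MMCR0);

            mmcr0 &= ~MMCR0_FC;     /* unfreeze the counters */
            mmcr0 |= MMCR0_PMXE;    /* re-arm the exception enable that
                                       hardware cleared at delivery    */
            mtspr(SPRN_MMCR0, mmcr0);
    }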
Stephen Rothwell c705677e1c [POWERPC] iSeries: Eliminate "exceeds stub group size" warnings
Commit 3ccfc65c50 missed the same fixes for
legacy iSeries-specific code, so make some more symbols no longer global.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-12-04 20:41:31 +11:00
Paul Mackerras 79acbb3ff2 Merge branch 'linux-2.6' into for-linus 2006-12-04 15:59:07 +11:00
s.hauer@pengutronix.de a7a1ed3050 [PATCH] Remove occurrences of PPC_MULTIPLATFORM in head_64.S
Since iSeries is merged into MULTIPLATFORM, there is no way to build a 64bit
kernel without MULTIPLATFORM, so PPC_MULTIPLATFORM can be removed in
64bit-only files.

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-11-13 14:44:58 +11:00
Paul Mackerras 3ccfc65c50 [PATCH] powerpc: Eliminate "exceeds stub group size" linker warning
It turns out that the linker warnings on 64-bit powerpc about "section
blah exceeds stub group size" were being triggered by conditional
branches in head_64.S branching to global symbols, whether in
head_64.S or in other files.  This eliminates the warnings by making
some global symbols in head_64.S no longer global, and by rearranging
some branches.

Signed-off-by: Paul Mackerras <paulus@samba.org>
[ Yee-haa. Maybe I'll notice newly introduced real warnings now - Linus ]
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-11-01 14:56:59 -08:00
Olof Johansson 190a24f560 [POWERPC] Make sure __cpu_preinit_ppc970 gets called on 970GX processors
Add check for 970GX for __cpu_preinit_ppc970.

Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-10-26 09:20:07 +10:00
Benjamin Herrenschmidt 42c4aaadb7 [POWERPC] Consolidate feature fixup code
There are currently two versions of the functions for applying the
feature fixups, one for CPU features and one for firmware features. In
addition, they are both in assembly and with separate implementations
for 32 and 64 bits. identify_cpu() is also implemented in assembly and
separately for 32 and 64 bits.

This patch replaces them with a pair of C functions. The call sites are
slightly moved on ppc64 as well to be called from C instead of from
assembly, though it's a very small change, and thus shouldn't cause any
problem.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-10-25 11:42:10 +10:00
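A rough C sketch of such a fixup pass, assuming an entry format of a feature mask/value pair plus relative offsets to the instruction range; the real layout differs, but the principle of nopping out sections the running CPU doesn't need is as described.

    struct fixup_entry {
            unsigned long mask, value;      /* feature bits required   */
            long start_off, end_off;        /* patch range, as offsets
                                               from this entry         */
    };

    static void patch_feature_section(unsigned long features,
                                      struct fixup_entry *f)
    {
            unsigned int *p   = (unsigned int *)((char *)f + f->start_off);
            unsigned int *end = (unsigned int *)((char *)f + f->end_off);

            if ((features & f->mask) == f->value)
                    return;                 /* features match: keep code */
            for (; p < end; p++)
                    *p = 0x60000000;        /* ppc nop (ori r0,r0,0) */
    }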
Paul Mackerras b0a779debd [POWERPC] Make sure interrupt enable gets restored properly
The lazy IRQ disable patch missed a couple of places where the
interrupt enable flags need to be restored correctly.  First, we
weren't restoring the paca->hard_enabled flag on interrupt exit.
Instead of saving it on entry, we compute it from the MSR_EE bit
in the MSR we are restoring at exit.  Secondly, the MMU hash miss
code was clearing both paca->soft_enabled and paca->hard_enabled
but not restoring them in the case where hash_page was able to
resolve the miss from the Linux page tables.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-10-18 10:12:53 +10:00
Paul Mackerras d04c56f73c [POWERPC] Lazy interrupt disabling for 64-bit machines
This implements a lazy strategy for disabling interrupts.  This means
that local_irq_disable() et al. just clear the 'interrupts are
enabled' flag in the paca.  If an interrupt comes along, the interrupt
entry code notices that interrupts are supposed to be disabled, and
clears the EE bit in SRR1, clears the 'interrupts are hard-enabled'
flag in the paca, and returns.  This means that interrupts only
actually get disabled in the processor when an interrupt comes along.

When interrupts are enabled by local_irq_enable() et al., the code
sets the interrupts-enabled flag in the paca, and then checks whether
interrupts got hard-disabled.  If so, it also sets the EE bit in the
MSR to hard-enable the interrupts.

This has the potential to improve performance, and also makes it
easier to make a kernel that can boot on iSeries and on other 64-bit
machines, since this lazy-disable strategy is very similar to the
soft-disable strategy that iSeries already uses.

This version renames paca->proc_enabled to paca->soft_enabled, and
changes a couple of soft-disables in the kexec code to hard-disables,
which should fix the crash that Michael Ellerman saw.  This doesn't
yet use a reserved CR field for the soft_enabled and hard_enabled
flags.  This applies on top of Stephen Rothwell's patches to make it
possible to build a combined iSeries/other kernel.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-10-16 16:31:36 +10:00
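An illustrative C rendering of the scheme (not the kernel's exact code; the entry-path half really lives in assembly, and __hard_irq_enable() stands for the mtmsrd that sets MSR:EE):

    static inline void local_irq_disable(void)
    {
            get_paca()->soft_enabled = 0;   /* record intent only;
                                               MSR:EE stays set      */
    }

    static inline void local_irq_enable(void)
    {
            get_paca()->soft_enabled = 1;
            if (!get_paca()->hard_enabled)  /* an interrupt arrived while
                                               soft-disabled and cleared
                                               EE on its way out        */
                    __hard_irq_enable();    /* set MSR:EE; pending
                                               interrupts now fire      */
    }

    /* Interrupt entry (really assembly): if !paca->soft_enabled, clear
     * EE in the return MSR, set paca->hard_enabled = 0 and return, so
     * the interrupt is only really taken once irqs are enabled again.
     */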
Stephen Rothwell 3f639ee8c5 [POWERPC] implement BEGIN/END_FW_FTR_SECTION
and use it in all the obvious places in assembler code.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
2006-10-03 16:50:21 +10:00
Olof Johansson 5a2fe38d28 [POWERPC] powerpc: Reduce default cacheline size to 64 bytes
Reduce default cacheline size on 64-bit powerpc from 128 bytes to 64.
This is the architected minimum. In most cases we'll still end up using
cache line information from the device tree, but defaults are used during
early boot and doing a few dcbst/icbi's too many there won't do any harm.

Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-09-13 18:39:52 +10:00
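Why the smaller default is safe, as a hedged sketch: flush loops step through a range one line at a time, so stepping by 64 bytes on a 128-byte-line machine merely touches each line twice (the routine below is illustrative; the kernel's real one is flush_icache_range).

    static void flush_icache_sketch(unsigned long start, unsigned long stop,
                                    unsigned long line)
    {
            unsigned long a;

            for (a = start & ~(line - 1); a < stop; a += line)
                    asm volatile("dcbst 0,%0" : : "r" (a) : "memory");
            asm volatile("sync");           /* order dcbst before icbi   */
            for (a = start & ~(line - 1); a < stop; a += line)
                    asm volatile("icbi 0,%0" : : "r" (a) : "memory");
            asm volatile("sync; isync");    /* complete before execution */
    }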
Olof Johansson f39b7a55a8 [POWERPC] Cleanup CPU inits
Clean up CPU inits a bit more; Geoff Levand already did some earlier.

* Move CPU state save to cpu_setup, since cpu_setup is only ever done
  on cpu 0 on 64-bit and save is never done more than once.
* Rename __restore_cpu_setup to __restore_cpu_ppc970 and add
  function pointers to the cputable to use instead. Powermac always
  has 970 so no need to check there.
* Rename __970_cpu_preinit to __cpu_preinit_ppc970 and check PVR before
  calling it instead of in it, it's too early to use cputable.
* Rename pSeries_secondary_smp_init to generic_secondary_smp_init since
  everyone but powermac and iSeries use it.

Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-08-25 13:27:35 +10:00
Olaf Hering 9fc0a92c7e [POWERPC] force 64bit mode in fwnmi handlers to workaround firmware bugs
The firmware of POWER4 and JS20 systems does not switch the cpu to 64bit
mode when the registered system_reset and machine_check handlers get called.
If a 32bit process runs on that cpu at the time of the event, the cpu
remains in 32bit mode. xmon and kdump cannot deal with it; the result is
an error like 'Bad kernel stack pointer fff2aad0 at 3200'.
xmon just loses some register info, but booting the kdump kernel usually fails.

Neither handler is a hot path.  Duplicate the EXCEPTION_PROLOG_PSERIES macro
and add two instructions to switch to 64bit:

 li     r11,5;
 rldimi r10,r11,61,0;
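 # 5 = 0b101 inserted into the high MSR bits (SF and ISF): 64-bit mode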

Signed-off-by: Olaf Hering <olh@suse.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-07-29 04:07:08 +10:00
Jörn Engel 6ab3d5624e Remove obsolete #include <linux/config.h>
Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
2006-06-30 19:25:36 +02:00
Michael Ellerman 33dbcf72f6 [POWERPC] Make sure smp_processor_id works very early in boot
There's a small period early in boot where we don't know which cpu we're
running on. That's ok, except that it means we have no paca, or more
correctly that our paca pointer points somewhere random.

So that we can safely call things like smp_processor_id(), we need a paca,
so just assume we're on cpu 0. No code should _write_ to the paca before
we've set the correct one up.

We setup the proper paca after we've scanned the flat device tree in
early_setup(), so there's no need to do it again in start_here_common.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-06-29 16:22:47 +10:00
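A sketch of what that early assumption amounts to, given the usual ppc64 convention that the paca pointer lives in r13 (local_paca); the helper name is illustrative.

    extern struct paca_struct paca[];       /* one entry per possible cpu */
    register struct paca_struct *local_paca asm("r13");

    static void __init setup_paca_early(void)
    {
            local_paca = &paca[0];  /* pretend we are cpu 0: reads through
                                       the paca (smp_processor_id() etc.)
                                       are now safe; nothing may write to
                                       it until the real paca is chosen  */
    }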
Jimi Xenidis d0b79c54fc [POWERPC] Skip the "copy down" of the kernel if it is already at zero.
This patch allows the kernel to recognize that it was loaded at zero,
making the copy-down of the image unnecessary.  This is useful for
simulators and kexec models.
On a typical 3.8 MiB vmlinux.strip this saves about 2.3 million instructions.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-06-28 15:18:53 +10:00
Michael Ellerman 4ba99b97da [POWERPC] Setup the boot cpu's paca pointer in C rather than asm
There's no need to set the boot cpu paca in asm, so do it in C so that we
mere mortals can understand it.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-06-28 11:59:47 +10:00
Michael Ellerman 1dce0e3047 [POWERPC] Remove remaining iSeries debugger cruft
None of this seems to be necessary, so let's see if we can remove it and not
break anything. Booted on iSeries & pSeries here.

NB: we don't remove the hvReleaseData; we just move it down so that the file
reads more clearly.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-06-28 11:59:46 +10:00
Benjamin Herrenschmidt acf7d76827 [POWERPC] cell: add RAS support
This is a first version of support for the Cell BE "Reliability,
Availability and Serviceability" features.

It doesn't yet handle some of the RAS interrupts (the ones described in
iic_is/iic_irr), I'm still working on a proper way to expose these. They
are essentially a cascaded controller by themselves (sic !) though I may
just handle them locally to the iic driver. I need also to sync with
David Erb on the way he hooked in the performance monitor interrupt.

So that's all for 2.6.17 and I'll do more work on that with my rework of
the powerpc interrupt layer that I'm hacking on at the moment.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-06-21 15:01:29 +10:00
Paul Mackerras f39224a8c1 powerpc: Use correct sequence for putting CPU into nap mode
We weren't using the recommended sequence for putting the CPU into
nap mode.  When I changed the idle loop, for some reason 7447A cpus
started hanging when we put them into nap mode.  Changing to the
recommended sequence fixes that.

The complexity here is that the recommended sequence is a loop that
keeps putting the cpu back into nap mode.  Clearly we need some way
to break out of the loop when an interrupt (external interrupt,
decrementer, performance monitor) occurs.  Here we use a bit in
the thread_info struct to indicate that we need this, and the exception
entry code notices this and arranges for the exception to return
to the value in the link register, thus breaking out of the loop.
We use a new `local_flags' field in the thread_info which we can
alter without needing to use an atomic update sequence.

The PPC970 has the same recommended sequence, so we do the same thing
there too.

This also fixes a bug in the kernel stack overflow handling code on
32-bit, since it was causing a value that we needed in a register to
get trashed.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-04-18 21:49:11 +10:00
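A conceptual C rendering of that loop, with assumptions stated (the real sequence is assembly; MSR_POW and the _TLF_NAPPING bit in thread_info->local_flags are the 32-bit names used here):

    static void nap_loop(void)
    {
            unsigned long msr = mfmsr() | MSR_POW;

            current_thread_info()->local_flags |= _TLF_NAPPING;
            /* Exception entry clears _TLF_NAPPING (and, in the real
             * assembly, returns to the link register); the flag check
             * is what breaks the loop here.
             */
            while (current_thread_info()->local_flags & _TLF_NAPPING)
                    asm volatile("sync; mtmsr %0; isync"
                                 : : "r" (msr) : "memory");
    }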
Anton Blanchard 4df20460a3 [PATCH] powerpc: Allow non zero boot cpuids
We currently have a hack to flip the boot cpu and its secondary thread
to logical cpuids 0 and 1. This means the logical-to-physical mapping will
differ depending on which cpu is boot cpu. This is most apparent on
kexec, where we might kexec on any cpu and therefore change the mapping
from boot to boot.

The patch below does a first pass early on to work out the logical cpuid
of the boot thread. We then fix up some paca structures to match.

I've also removed the boot_cpuid_phys variable for ppc64; to be
consistent we use get_hard_smp_processor_id(boot_cpuid) everywhere.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-03-27 14:48:48 +11:00
Olaf Hering 6088857b16 [PATCH] correct the comment about stackpointer alignment in __boot_from_prom
The address of variable val in prom_init_stdout is passed to prom_getprop.
prom_getprop casts the pointer to u32 and passes it to call_prom in the hope
that OpenFirmware stores something there.
But the pointer is truncated in the lower bits and the expected value is
stored somewhere else.

In my testing I had a stack pointer of 0x0023e6b4. val was at offset 120,
which has address 0x0023e72c. But the value passed to OF was 0x0023e728.

c00000000040b710:       3b 01 00 78     addi    r24,r1,120
...
c00000000040b754:       57 08 00 38     rlwinm  r8,r24,0,0,28
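                                                # mask bits 0-28 (BE numbering): clears the low 3 address bits, 0x...e72c -> 0x...e728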
...
c00000000040b784:       80 01 00 78     lwz     r0,120(r1)
...
c00000000040b798:       90 1b 00 0c     stw     r0,12(r27)
...

The stackpointer came from 32bit code.
The chain was yaboot -> zImage -> vmlinux

PowerMac OpenFirmware apparently does not handle the ELF sections
correctly.  If yaboot was compiled in
/usr/src/packages/BUILD/lilo-10.1.1/yaboot, then the stackpointer is
unaligned. But the stackpointer is correct if yaboot is compiled in
/tmp/yaboot.

This bug has triggered since 2.6.15, when prom_getprop became an inline
function: gcc clears the lower bits, instead of just clearing the
upper 32 bits.

Signed-off-by: Olaf Hering <olh@suse.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-03-27 14:48:32 +11:00
Paul Mackerras 5164501794 Merge ../linux-2.6 2006-03-09 14:32:05 +11:00
Linus Torvalds c05b477045 ppc64: make sure to align stack pointer to 16 bytes at boot
yaboot is scrogged and calls us with an invalid stack alignment,
it seems.

Thanks to David Woodhouse to pointing me to the problem.

Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-04 15:00:45 -08:00
Paul Mackerras c6622f63db powerpc: Implement accurate task and CPU time accounting
This implements accurate task and cpu time accounting for 64-bit
powerpc kernels.  Instead of accounting a whole jiffy of time to a
task on a timer interrupt because that task happened to be running at
the time, we now account time in units of timebase ticks according to
the actual time spent by the task in user mode and kernel mode.  We
also count the time spent processing hardware and software interrupts
accurately.  This is conditional on CONFIG_VIRT_CPU_ACCOUNTING.  If
that is not set, we do tick-based approximate accounting as before.

To get this accurate information, we read either the PURR (processor
utilization of resources register) on POWER5 machines, or the timebase
on other machines on

* each entry to the kernel from usermode
* each exit to usermode
* transitions between process context, hard irq context and soft irq
  context in kernel mode
* context switches.

On POWER5 systems with shared-processor logical partitioning we also
read both the PURR and the timebase at each timer interrupt and
context switch in order to determine how much time has been taken by
the hypervisor to run other partitions ("steal" time).  Unfortunately,
since we need values of the PURR on both threads at the same time to
accurately calculate the steal time, and since we can only calculate
steal time on a per-core basis, the apportioning of the steal time
between idle time (time which we ceded to the hypervisor in the idle
loop) and actual stolen time is somewhat approximate at the moment.

This is all based quite heavily on what s390 does, and it uses the
generic interfaces that were added by the s390 developers,
i.e. account_system_time(), account_user_time(), etc.

This patch doesn't add any new interfaces between the kernel and
userspace, and doesn't change the units in which time is reported to
userspace by things such as /proc/stat, /proc/<pid>/stat, getrusage(),
times(), etc.  Internally the various task and cpu times are stored in
timebase units, but they are converted to USER_HZ units (1/100th of a
second) when reported to userspace.  Some precision is therefore lost
but there should not be any accumulating error, since the internal
accumulation is at full precision.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-02-24 14:05:56 +11:00
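A hedged sketch of the boundary accounting: at each user/kernel transition the PURR (or timebase) delta since the last boundary is charged to the mode being left; the paca field and helper names here are illustrative.

    static void account_boundary(int entering_kernel)
    {
            struct paca_struct *p = get_paca();
            u64 now = cpu_has_feature(CPU_FTR_PURR) ? mfspr(SPRN_PURR)
                                                    : mftb();
            u64 delta = now - p->last_boundary;     /* field illustrative */

            p->last_boundary = now;
            if (entering_kernel)
                    p->user_time += delta;  /* ticks just spent in user mode */
            else
                    p->system_time += delta; /* ticks just spent in kernel */
    }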
Paul Mackerras a00428f5b1 Merge ../powerpc-merge 2006-02-24 14:05:47 +11:00
Anton Blanchard f1870f772c [PATCH] powerpc64: remove broken/bitrotted HMT support
HMT support is currently broken and needs to be reworked to play nicely
with the SMT scheduler. Remove the bit-rotten bits for the time being.

I also updated an incorrect comment, we enter __secondary_hold with the
physical cpu id in r3.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-02-24 11:36:33 +11:00