Commit Graph

1972 Commits

Author SHA1 Message Date
Milton Miller a1e0eb1042 powerpc: Fix build for 32-bit SMP configs
attr_smt_snooze_delay is only defined for CONFIG_PPC64, so protect the
attribute removal with the same condition.  This fixes this build error
on 32-bit SMP configurations:

/data/home/miltonm/next.git/arch/powerpc/kernel/sysfs.c: In function ‘unregister_cpu_online’:
/data/home/miltonm/next.git/arch/powerpc/kernel/sysfs.c:722: error: ‘attr_smt_snooze_delay’ undeclared (first use in this function)
/data/home/miltonm/next.git/arch/powerpc/kernel/sysfs.c:722: error: (Each undeclared identifier is reported only once
/data/home/miltonm/next.git/arch/powerpc/kernel/sysfs.c:722: error: for each function it appears in.)
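Roughly, the fix amounts to a guard like the following in unregister_cpu_online() (a sketch assuming the 2.6.28-era sysdev attribute API, not the literal hunk):

  #ifdef CONFIG_PPC64
          /* attr_smt_snooze_delay only exists on 64-bit builds,
           * so only remove the attribute there */
          sysdev_remove_file(s, &attr_smt_snooze_delay);
  #endif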

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-01 13:28:19 +11:00
Paul Mackerras ab598b6680 powerpc: Fix system calls on Cell entered with XER.SO=1
It turns out that on Cell, on a kernel with CONFIG_VIRT_CPU_ACCOUNTING
= y, if a program sets the SO (summary overflow) bit in the XER and
then does a system call, the SO bit in CR0 will be set on return
regardless of whether the system call detected an error.  Since CR0.SO
is used as the error indication from the system call, this means that
all system calls appear to fail.

The reason is that the workaround for the timebase bug on Cell uses a
compare instruction.  With CONFIG_VIRT_CPU_ACCOUNTING = y, the
ACCOUNT_CPU_USER_ENTRY macro reads the timebase, so we end up doing a
compare instruction, which copies XER.SO to CR0.SO.  Since we were
doing this in the system call entry path after clearing CR0.SO but
before saving the CR, this meant that the saved CR image had CR0.SO
set if XER.SO was set on entry.

This fixes it by moving the clearing of CR0.SO to after the
ACCOUNT_CPU_USER_ENTRY call in the system call entry path.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-12-01 09:40:19 +11:00
Adhemerval Zanella 4b824de9b1 powerpc: Fix IRQ assignment for some PCIe devices
Currently, some PCIe devices on POWER6 machines do not get interrupts
assigned correctly.  The problem is that OF doesn't create an
"interrupt" property for them.  The fix is for of_irq_map_pci to fall
back to using the value in the PCI interrupt-pin register in config
space, as we do when there is no OF device-tree node for the device.
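A minimal sketch of that fallback (illustrative only; pdev and the surrounding error handling are assumed):

  u8 pin;

  /* No "interrupt" property in the device tree: fall back to the
   * PCI interrupt-pin register in config space. */
  pci_read_config_byte(pdev, PCI_INTERRUPT_PIN, &pin);
  if (pin == 0)
          return -ENODEV;        /* device uses no interrupt pin */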

I have verified that this works fine with a pair of Squib-E SAS
adapters on a P6-570.

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-12-01 09:40:18 +11:00
Steven Rostedt f1eecf0e4f powerpc/ppc32: static ftrace fixes for PPC32
Impact: fix for PowerPC 32 code

There was some early init code that was not safe for static
ftrace to boot on my PowerBook. This code must only use relative
addressing, but the static mcount performs a compare of the
ftrace_trace_function pointer, which it fetches with an absolute address.
In the early init boot up code, this will cause a fault.

This patch removes tracing from the files containing the offending
functions.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-11-28 14:08:07 +01:00
Steven Rostedt 0029ff8752 powerpc: ftrace, use create_branch
Impact: clean up

Paul Mackerras pointed out that the code to determine if the branch
can reach the destination is incorrect. Michael Ellerman suggested
to pull out the code from create_branch and use that.

Simply using create_branch is probably the best.

Reported-by: Michael Ellerman <michael@ellerman.id.au>
Reported-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-11-28 14:08:01 +01:00
Steven Rostedt ec682cef2d powerpc: ftrace, added missing icache flush
Impact: fix to PowerPC code modification

After modifying code it is essential to flush the icache. This patch
adds the missing flush.

Reported-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-11-28 14:07:56 +01:00
Steven Rostedt d9af12b72b powerpc: ftrace, fix cast aliasing and add code verification
Impact: clean up and robustness addition

This patch addresses the comments made by Paul Mackerras.
It removes the type casting between unsigned int and unsigned char
pointers, and instead uses unsigned int throughout.

Verification that the jump is indeed made to a trampoline has also
been added.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-11-28 14:07:50 +01:00
Steven Rostedt c7b0d17366 powerpc: ftrace, do nothing in mcount call for dyn ftrace
Impact: quicken mcount calls that are not replaced by dyn ftrace

Dynamic ftrace no longer does on the fly recording of mcount locations.
The mcount locations are now found at compile time. The mcount
function no longer needs to store registers and call a stub function.
It can now just simply return.

Since there are some functions that do not get converted to a nop
(.init sections and other code that may disappear), this patch should
help speed up that code.

Also, the stub for mcount on PowerPC 32 cannot be a simple branch to the
link register like it is on PowerPC 64. According to the ABI specification:

"The _mcount routine is required to restore the link register from
 the stack so that the profiling code can be inserted transparently,
 whether or not the profiled function saves the link register itself."

This means that we must restore the link register that was used
to make the call to mcount.  The minimal mcount function for PPC32
ends up being:

 mcount:
        mflr    r0
        mtctr   r0
        lwz     r0, 4(r1)
        mtlr    r0
        bctr

Where we move the link register used to call mcount into the
ctr register, and then restore the link register from the stack.
Then we use the ctr register to jump back to the mcount caller.
The r0 register is free for us to use.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-11-28 14:07:45 +01:00
Steven Rostedt 7cc45e6432 powerpc/ppc32: ftrace, dynamic ftrace to handle modules
Impact: add ability to trace modules on 32 bit PowerPC

This patch performs the necessary trampoline calls to handle
modules with dynamic ftrace on 32 bit PowerPC.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
2008-11-20 10:52:53 -08:00
Steven Rostedt f48cb8b48b powerpc/ppc64: ftrace, handle module trampolines for dyn ftrace
Impact: Allow 64 bit PowerPC to trace modules with dynamic ftrace

This adds code to handle the PPC64 module trampolines, and allows for
PPC64 to use dynamic ftrace.

Thanks to Paul Mackerras for these updates:

  - fix the mod and rec->arch.mod NULL checks.
  - fix to is_bl_op compare.

Thanks to Milton Miller for:

  - finding the nasty race with using two nops, and recommending
    instead that I use a branch 8 forward.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
2008-11-20 10:52:28 -08:00
Steven Rostedt e4486fe316 powerpc: ftrace, use probe_kernel API to modify code
Impact: use cleaner probe_kernel API over assembly

Using probe_kernel_read/write interface is a much cleaner approach
than the current assembly version.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
2008-11-20 10:52:04 -08:00
Steven Rostedt 8fd6e5a8c8 powerpc: ftrace, convert to new dynamic ftrace arch API
Impact: update to PowerPC ftrace arch API

This patch converts PowerPC to use the new dynamic ftrace arch API.

Thanks to Paul Mackerras for pointing out the mistakes in my original
test_24bit_addr function.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
2008-11-20 10:51:40 -08:00
Steven Rostedt 6d07bb4735 powerpc: ftrace, do not latency trace idle
Impact: fix for irq off latency tracer

When idle is called, interrupts are disabled, but the idle function
will still wake up on an interrupt. The problem is that the interrupt
disabled latency tracer will take this call to idle as a latency.

This patch disables the latency tracing when going into idle.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
2008-11-20 10:51:15 -08:00
Milton Miller 25ddd738c2 powerpc: Provide a separate handler for each IPI action
With the new generic smp call function helpers, I noticed the code in
smp_message_recv was a single function call in many cases.  While
getting the message number from the ipi data is easy, we can reduce
the path length by a function call and a data-dependent switch by registering
separate IPI actions for these simple calls.

Originally I left the ipi action array exposed, but then I realized the
registration code should be common too.

The three users each had their own name array, so I made a fourth
to convert all users to use a common one.

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-19 16:05:06 +11:00
Michael Ellerman 54018178ef powerpc: Use for_each_node_with_property() in of_irq_map_init()
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-19 16:05:01 +11:00
Benjamin Herrenschmidt 6612d9b0b8 powerpc/44x: Fix 460EX/460GT machine check handling
Those cores use the 440A type machine check (ie, they have
MCSRR0/MCSRR1). They thus need to call the appropriate fixup
function to hook the right variant of the exception.

Without this, all machine checks become fatal due to loss
of context when entering the exception handler.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
2008-11-13 10:11:26 -05:00
Paul Mackerras 486936cd93 Merge branch 'linux-2.6' into next 2008-11-12 08:43:22 +11:00
Andreas Schwab 77eb50aefa powerpc: Fix msr check in compat_sys_swapcontext
The new context may not be 16-byte aligned, so the real address of the
mcontext structure should be read from the uc_regs pointer instead of
directly using the (unaligned) uc_mcontext field.

Signed-off-by: Andreas Schwab <schwab@suse.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-11 19:42:22 +11:00
Kumar Gala b41d6fee37 powerpc/fsl-booke: Fix synchronization bug w/local tlb invalidates
The implementation of _tlbil_pid() on Freescale Book-E cores needs
an msync & isync after we flash invalidate the TLBs.  This was causing
the following oops reported by Sebastian Andrzej Siewior:

  VFS: Mounted root (nfs filesystem) readonly.
  Freeing unused kernel memory: 148k init
  BUG: sleeping function called from invalid context at /home/bigeasy/git/linux-2.6-powerpc/mm/mmap.c:234
  in_atomic():1, irqs_disabled():0
  Call Trace:
  [df189df0] [c0007160] show_stack+0x48/0x148 (unreliable)
  [df189e30] [c0029480] __might_sleep+0xf0/0x100
  [df189e40] [c0070ac0] remove_vma+0x28/0x98
  [df189e50] [c0070c1c] exit_mmap+0xec/0x128
  [df189e80] [c002d2f4] mmput+0x54/0xec
  [df189ea0] [c0030b6c] exit_mm+0x10c/0x120
  [df189ed0] [c003288c] do_exit+0x1ac/0x6e8
  [df189f20] [c0032e48] do_group_exit+0x80/0xac
  [df189f40] [c000e9dc] ret_from_syscall+0x0/0x3c
  BUG: scheduling while atomic: udevd/956/0x10000002
  Modules linked in:
  Call Trace:
  [df189df0] [c0007160] show_stack+0x48/0x148 (unreliable)
  [df189e30] [c002ac88] __schedule_bug+0x58/0x6c
  [df189e40] [c023e6cc] schedule+0xa8/0x4a8
  [df189e90] [c002ad6c] __cond_resched+0x38/0x64
  [df189ea0] [c023ebc8] _cond_resched+0x3c/0x58
  [df189eb0] [c0030e70] put_files_struct+0x90/0xec
  [df189ed0] [c00328a8] do_exit+0x1c8/0x6e8
  [df189f20] [c0032e48] do_group_exit+0x80/0xac
  [df189f40] [c000e9dc] ret_from_syscall+0x0/0x3c

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-11-08 12:38:55 -06:00
Paul Mackerras 3cc698789a powerpc: Eliminate unused do_gtod variable
Since we started using the generic timekeeping code, we haven't had a
powerpc-specific version of do_gettimeofday, and hence there is now
nothing that reads the do_gtod variable in arch/powerpc/kernel/time.c.
This therefore removes it and the code that sets it.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-06 09:49:28 +11:00
Paul Mackerras 597bc5c00b powerpc: Improve resolution of VDSO clock_gettime
Currently the clock_gettime implementation in the VDSO produces a
result with microsecond resolution for the cases that are handled
without a system call, i.e. CLOCK_REALTIME and CLOCK_MONOTONIC.  The
nanoseconds field of the result is obtained by computing a
microseconds value and multiplying by 1000.

This changes the code in the VDSO to do the computation for
clock_gettime with nanosecond resolution.  That means that the
resolution of the result will ultimately depend on the timebase
frequency.

Because the timestamp in the VDSO datapage (stamp_xsec, the real time
corresponding to the timebase count in tb_orig_stamp) is in units of
2^-20 seconds, it doesn't have sufficient resolution for computing a
result with nanosecond resolution.  Therefore this adds a copy of
xtime to the VDSO datapage and updates it in update_gtod() along with
the other time-related fields.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-06 09:49:22 +11:00
Benjamin Herrenschmidt 7eef440a54 powerpc/pci: Cosmetic cleanups of pci-common.c
This does a few cosmetic cleanups, moving a couple of things around
but without actually changing what the code does.

(There is a minor change in ordering of operations in
pcibios_setup_bus_devices but it should have no impact).

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-06 09:41:52 +11:00
Benjamin Herrenschmidt fd6852c8fa powerpc/pci: Fix various pseries PCI hotplug issues
The pseries PCI hotplug code has a number of issues, ranging from
incorrect resource setup to crashes, depending on what is added,
when, whether it contains a bridge, etc etc....

This fixes a whole bunch of these, while actually simplifying the code
a bit, using more generic code in the process and factoring out common
code between adding of a PHB, a slot or a device.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-06 09:31:52 +11:00
Benjamin Herrenschmidt b5ae5f911d powerpc/pci: Make pcibios_allocate_bus_resources more robust
To properly fix PCI hotplug, it's useful to be able to make the fixup
passes on all devices whether they were just hot plugged or already
there.

However, pcibios_allocate_bus_resources() wouldn't cope well with
being called twice for a given bus.  This makes it ignore resources
that have already been allocated, along with adding a bit of debug
output.
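The idea, as a sketch (assumed variable names, not the literal diff): a resource that already has a parent is already in the resource tree, so a second pass can simply skip it.

  if (res->parent) {
          /* already in the resource tree from an earlier pass */
          pr_debug("PCI: skipping already-allocated resource\n");
          continue;
  }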

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-06 09:26:05 +11:00
Benjamin Herrenschmidt 8b8da35804 powerpc/pci: Split pcibios_fixup_bus() into bus setup and device setup
Currently, our PCI code uses the pcibios_fixup_bus() callback, which
is called by the generic code when probing PCI buses, for two
different things.

One is to set up things related to the bus itself, such as reading
bridge resources for P2P bridges, fixing them up, or setting up the
iommu's associated with bridges on some platforms.

The other is some setup for each individual device under that bridge,
mostly setting up DMA mappings and interrupts.

The problem is that this approach doesn't work well with PCI hotplug
when an existing bus is re-probed for new children.  We fix this
problem by splitting pcibios_fixup_bus into two routines:

	pcibios_setup_bus_self() is now called to setup the bus itself

	pcibios_setup_bus_devices() is now called to setup devices

pcibios_fixup_bus() is then modified to call these two after reading the
bridge bases, and the OF based PCI probe is modified to avoid calling
into the first one when rescanning an existing bridge.
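Schematically, the resulting structure looks something like this (a sketch only; annotations and platform hooks omitted):

  void pcibios_fixup_bus(struct pci_bus *bus)
  {
          /* bus/bridge-level setup: bridge windows, iommu, ... */
          pci_read_bridge_bases(bus);
          pcibios_setup_bus_self(bus);

          /* per-device setup: DMA mappings, interrupts, ... */
          pcibios_setup_bus_devices(bus);
  }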

[paulus@samba.org - fixed eeh.h for 32-bit compile now that pci-common.c
is including it unconditionally.]

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-06 09:22:37 +11:00
Benjamin Herrenschmidt ab56ced9c5 powerpc/pci: Remove pcibios_do_bus_setup()
The function pcibios_do_bus_setup() was used by pcibios_fixup_bus()
to perform setup that is different between the 32-bit and 64-bit
code.  This difference no longer exists, thus the function is removed
and the setup now done directly from pci-common.c.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-05 22:11:53 +11:00
Benjamin Herrenschmidt 5328032335 powerpc/pci: Use common PHB resource hookup
The 32-bit and 64-bit powerpc PCI code used to set up the resource
pointers of the root bus of a given PHB in completely different
places.

This unifies this in large part, by making 32-bit use a routine very
similar to what 64-bit does when initially scanning the PCI busses.

The actual setup of the PHB resources itself is then moved to a
common function in pci-common.c.

This should cause no functional change on 64-bit.  On 32-bit, the
effect is that the PHB resources are going to be setup a bit earlier,
instead of being setup from pcibios_fixup_bus().

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-05 22:11:53 +11:00
Benjamin Herrenschmidt b0494bc8ee powerpc/pci: Cleanup debug printk's
This removes the various DBG() macro from the powerpc PCI code and
makes it use the standard pr_debug instead.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-05 22:11:53 +11:00
Brian King 409001948d powerpc: Update page-in counter for CMM
A new field has been added to the VPA as a method for the client OS to
communicate to firmware the number of page-ins it is performing when
running collaborative memory overcommit.  The hypervisor will use this
information to better determine if a partition is experiencing memory
pressure and needs more memory allocated to it.

Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-05 22:08:28 +11:00
Benjamin Herrenschmidt a6a8e009b1 powerpc: Silence software timebase sync
When no hardware method is provided to sync the timebase registers
across the machine, and the platform doesn't sync them for us, then we
use a generic software implementation.  Currently, the code for that
has many printks, and they don't have log levels.  Most of the printks
are only useful for debugging the code, and since we haven't had any
problems with it for years, this turns them into pr_debug.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-05 22:08:28 +11:00
Benjamin Herrenschmidt 1fd0f52583 powerpc: Fix domain numbers in /proc on 64-bit
The code to properly expose domain numbers in /proc is somewhat
bogus on ppc64 as it depends on the "buid" field being non-0,
but that field is really pseries specific.

This removes that code and makes ppc64 use the same code as 32-bit
which effectively decides whether to expose domains based on
ppc_pci_flags set by the platform, and sets the default for 64-bit
to enable domains and enable compatibility for domain 0 (which
strips the domain number for domain 0 to help with X servers).

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-11-05 22:08:27 +11:00
Linus Torvalds f891caf28f Merge branch 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc: (23 commits)
  Revert "powerpc: Sync RPA note in zImage with kernel's RPA note"
  powerpc: Fix compile errors with CONFIG_BUG=n
  powerpc: Fix format string warning in arch/powerpc/boot/main.c
  powerpc: Fix bug in kernel copy of libfdt's fdt_subnode_offset_namelen()
  powerpc: Remove duplicate DMA entry from mpc8313erdb device tree
  powerpc/cell/OProfile: Fix on-stack array size in activate spu profiling function
  powerpc/mpic: Fix regression caused by change of default IRQ affinity
  powerpc: Update remaining dma_mapping_ops to use map/unmap_page
  powerpc/pci: Fix unmapping of IO space on 64-bit
  powerpc/pci: Properly allocate bus resources for hotplug PHBs
  OF-device: Don't overwrite numa_node in device registration
  powerpc: Fix swapcontext system for VSX + old ucontext size
  powerpc: Fix compiler warning for the relocatable kernel
  powerpc: Work around ld bug in older binutils
  powerpc/ppc64/kdump: Better flag for running relocatable
  powerpc: Use is_kdump_kernel()
  powerpc: Kexec exit should not use magic numbers
  powerpc/44x: Update 44x defconfigs
  powerpc/40x: Update 40x defconfigs
  powerpc: enable heap randomization for linkstations
  ...
2008-10-31 08:14:15 -07:00
Paul Mackerras 5663a1232b Revert "powerpc: Sync RPA note in zImage with kernel's RPA note"
This reverts commit 91a0030295, plus
commit 0dcd440120 ("powerpc: Revert CHRP
boot wrapper to real-base = 12MB on 32-bit") which depended on it.

Commit 91a00302 was causing NVRAM corruption on some pSeries machines,
for as-yet unknown reasons, so this reverts it until the cause is
identified.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-31 22:36:21 +11:00
Mark Nelson f9226d572d powerpc: Update remaining dma_mapping_ops to use map/unmap_page
After the merge of the 32 and 64bit DMA code, dma_direct_ops lost
their map/unmap_single() functions but gained map/unmap_page().  This
caused a problem for Cell because Cell's dma_iommu_fixed_ops called
the dma_direct_ops if the fixed linear mapping was to be used or the
iommu ops if the dynamic window was to be used.  So in order to fix
this problem we need to update the 64bit DMA code to use
map/unmap_page.

First, we update the generic IOMMU code so that iommu_map_single()
becomes iommu_map_page() and iommu_unmap_single() becomes
iommu_unmap_page().  Then we propagate these changes up through all
the callers of these two functions and in the process update all the
dma_mapping_ops so that they have map/unmap_page rather than
map/unmap_single.  We can do this because on 64bit there is no HIGHMEM
memory so map/unmap_page ends up performing exactly the same function
as map/unmap_single, just taking different arguments.

This has no effect on drivers because dma_map_single_attrs() just
ends up calling the map_page() function of the appropriate
dma_mapping_ops and similarly the dma_unmap_single_attrs() calls
unmap_page().
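That equivalence, as a sketch (assuming a get_dma_ops() accessor; not the verbatim header code):

  static inline dma_addr_t dma_map_single_attrs(struct device *dev,
                  void *ptr, size_t size,
                  enum dma_data_direction dir,
                  struct dma_attrs *attrs)
  {
          struct dma_mapping_ops *ops = get_dma_ops(dev);

          /* no highmem on 64-bit, so mapping the page containing ptr
           * at the right offset is the same as mapping ptr directly */
          return ops->map_page(dev, virt_to_page(ptr),
                               (unsigned long)ptr & ~PAGE_MASK,
                               size, dir, attrs);
  }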

This fixes an oops on Cell blades, which oops on boot without this
because they call dma_direct_ops.map_single, which is NULL.

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-31 16:13:48 +11:00
Benjamin Herrenschmidt b30115ea8f powerpc/pci: Fix unmapping of IO space on 64-bit
A typo/thinko made us pass the wrong argument to __flush_hash_table_range
when unplugging bridges, thus not flushing all the translations for
the IO space on unplug.  The third parameter to __flush_hash_table_range
is `end', not `size'.
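In other words (a sketch with assumed variable names):

  /* wrong: passes a size where an end address is expected */
  __flush_hash_table_range(&init_mm, start, size);

  /* right: the third parameter is the end of the range */
  __flush_hash_table_range(&init_mm, start, start + size);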

This causes the hypervisor to refuse unplugging slots.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-31 16:13:46 +11:00
Nathan Fontenot e90a131846 powerpc/pci: Properly allocate bus resources for hotplug PHBs
Resources for PHB's that are dynamically added to a system are not
properly allocated in the resource tree.

Not having these resources allocated causes an oops when removing
the PHB when we try to release them.

The diff appears a bit messy; this is mainly due to moving everything
one tab to the left in the pcibios_allocate_bus_resources routine.
The functionality change in this routine is only that the
list_for_each_entry() loop is pulled out and moved to the necessary
calling routine.

Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-31 16:12:03 +11:00
Jeremy Kerr 6098e2ee14 OF-device: Don't overwrite numa_node in device registration
Currently, the numa_node of OF-devices will be overwritten during
device_register, which simply sets the node to -1.  On cell machines,
this means that devices can't find their IOMMU, which is referenced
through the device's numa node.

Set the numa node for OF devices with no parent, and use the
lower-level device_initialize and device_add functions, so that the
node is preserved.

We can remove the call to set_dev_node in of_device_alloc, as it
will be overwritten during register.
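A sketch of the registration path described above (assuming 2.6.28-era driver-core calls; error handling omitted):

  device_initialize(&ofdev->dev);
  if (!ofdev->dev.parent)
          set_dev_node(&ofdev->dev, of_node_to_nid(ofdev->node));
  return device_add(&ofdev->dev);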

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-31 16:12:01 +11:00
Michael Neuling 16c29d180b powerpc: Fix swapcontext system for VSX + old ucontext size
Since VSX support was added, we now have two sizes of ucontext_t;
the older, smaller size without the extra VSX state, and the new
larger size with the extra VSX state.  A program using the
sys_swapcontext system call and supplying smaller ucontext_t
structures will currently get an EINVAL error if the task has
used VSX (e.g. because of calling library code that uses VSX) and
the old_ctx argument is non-NULL (i.e. the program is asking for
its current context to be saved).  Thus the program will start
getting EINVAL errors on calls that previously worked.

This commit changes this behaviour so that we don't send an EINVAL in
this case.  It will now return the smaller context but the VSX MSR bit
will always be cleared to indicate that the ucontext_t doesn't include
the extra VSX state, even if the task has executed VSX instructions.
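Conceptually, the smaller-context case now does something like this (a sketch with assumed local names):

  if (ctx_size < sizeof(struct ucontext)) {
          /* old, smaller ucontext_t: report the MSR with the VSX bit
           * cleared instead of failing with -EINVAL */
          msr &= ~MSR_VSX;
  }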

Both 32 and 64 bit cases are updated.

[paulus@samba.org - also fix some access_ok() and get_user() calls]

Thanks to Ben Herrenschmidt for noticing this problem.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-31 16:12:00 +11:00
Michael Neuling b160544ccc powerpc: Fix compiler warning for the relocatable kernel
Fixes this warning:
 arch/powerpc/kernel/setup_64.c:447:5: warning: "kernstart_addr" is not defined

which arises because PHYSICAL_START is no longer a constant when
CONFIG_RELOCATABLE=y.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-31 16:11:54 +11:00
Paul Mackerras 2a4b9c5af8 powerpc: Work around ld bug in older binutils
Commit 549e8152de ("powerpc: Make the
64-bit kernel as a position-independent executable") added lines to
vmlinux.lds.S to add the extra sections needed to implement a
relocatable kernel.  However, those lines seem to trigger a bug in
older versions of GNU ld (such as 2.16.1) when building a
non-relocatable kernel.  Since ld 2.16.1 is still a popular choice for
cross-toolchains, this adds an #ifdef to vmlinux.lds.S so the added
lines are only included when building a relocatable kernel.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-31 16:11:52 +11:00
Milton Miller 8b8b0cc1c7 powerpc/ppc64/kdump: Better flag for running relocatable
The __kdump_flag ABI is overly constraining for future development.

As of 2.6.27, the kernel entry point has 4 constraints:  Offset 0 is
the starting point for the master (boot) cpu (entered with r3 pointing
to the device tree structure), offset 0x60 is code for the slave cpus
(entered with r3 set to their device tree physical id), offset 0x20 is
used by the iseries hypervisor, and secondary cpus must be well behaved
when the first 256 bytes are copied to address 0.

Placing the __kdump_flag at 0x18 is bad because:

- It was taking the last 8 bytes before the iseries hypervisor data.
- It was 8 bytes for a boolean flag
- It had no way of identifying that the flag was present
- It does not leave any room for the master to add any additional code
  before branching, which hurts debug.
- It will be unnecessarily hard for 32 bit code to be common (8 bytes)

Now that we have eliminated the use of __kdump_flag in favor of
the standard is_kdump_kernel(), this flag only controls whether to run without
relocating the kernel to PHYSICAL_START (0), so rename it __run_at_load.

Move the flag to 0x5c, 1 word before the secondary cpu entry point at
0x60.  Initialize it with "run0" to say it will run at 0 unless it is
set to 1.  It only exists if we are relocatable.

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-31 16:11:49 +11:00
Milton Miller 62a8bd6c92 powerpc: Use is_kdump_kernel()
linux/crash_dump.h defines is_kdump_kernel() to be used by code that
needs to know if the previous kernel crashed instead of a (clean) boot
or reboot.
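The helper is roughly equivalent to the following (a sketch of its semantics, not necessarily the exact definition in linux/crash_dump.h):

  /* non-zero elfcorehdr_addr means we booted after a crash (kdump) */
  static inline int is_kdump_kernel(void)
  {
          return elfcorehdr_addr != 0;
  }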

This updates the just added powerpc code to use it.  This is needed
for the next commit, which will remove __kdump_flag.

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-31 16:11:47 +11:00
Milton Miller 1767c8f392 powerpc: Kexec exit should not use magic numbers
Commit 54622f10a6 ("powerpc: Support for
relocatable kdump kernel") added a magic flag value in a register to
tell purgatory that it should be a panic kernel.  This part is wrong
and is reverted by this commit.

The kernel gets a list of memory blocks and an entry point from user space.
Its job is to copy the blocks into place and then branch to the designated
entry point (after turning "off" the mmu).

The user space tool inserts a trampoline, called purgatory, that runs
before the user supplied code.   Its job is to establish the entry
environment for the new kernel or other application based on the contents
of memory.  The purgatory code is compiled and embedded in the tool,
where it is later patched using the elf symbol table.

Since the tool knows it is creating a purgatory that will run after a
kernel crash, it should just patch purgatory (or the kernel directly)
if something needs to happen.

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-10-31 16:11:44 +11:00
Ingo Molnar 4944dd62de Merge commit 'v2.6.28-rc2' into tracing/urgent 2008-10-27 10:50:54 +01:00
Linus Torvalds 5b34653963 Merge branch 'x86/um-header' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'x86/um-header' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (26 commits)
  x86: canonicalize remaining header guards
  x86: drop double underscores from header guards
  x86: Fix ASM_X86__ header guards
  x86, um: get rid of uml-config.h
  x86, um: get rid of arch/um/Kconfig.arch
  x86, um: get rid of arch/um/os symlink
  x86, um: get rid of excessive includes of uml-config.h
  x86, um: get rid of header symlinks
  x86, um: merge Kconfig.i386 and Kconfig.x86_64
  x86, um: get rid of sysdep symlink
  x86, um: trim the junk from uml ptrace-*.h
  x86, um: take vm-flags.h to sysdep
  x86, um: get rid of uml asm/arch
  x86, um: get rid of uml highmem.h
  x86, um: get rid of uml unistd.h
  x86, um: get rid of system.h -> system.h include
  x86, um: uml atomic.h is not needed anymore
  x86, um: untangle uml ldt.h
  x86, um: get rid of more uml asm/arch uses
  x86, um: remove dead header (uml module-generic.h; never used these days)
  ...
2008-10-23 10:22:01 -07:00
Steven Rostedt 15adc04898 ftrace, powerpc, sparc64, x86: remove notrace from arch ftrace file
The entire file of ftrace.c in the arch code needs to be marked
as notrace. It is much cleaner to do this from the Makefile with
CFLAGS_REMOVE_ftrace.o.

[ powerpc already had this in its Makefile. ]

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-10-23 16:00:25 +02:00
Steven Rostedt 4d296c2432 ftrace: remove mcount set
The arch dependent function ftrace_mcount_set was only used by the daemon
start up code. This patch removes it.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-10-23 16:00:23 +02:00
Al Viro 2e074004c6 x86, um: get rid of uml signal.h
the only theoretical reason for it these days is ppc; aside from uml/ppc
being dead, do_signal() would be happier in arch/powerpc/kernel/signal.h
anyway.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-10-22 22:55:20 -07:00
Ingo Molnar debfcaf93e Merge branch 'tracing/ftrace' into tracing/urgent 2008-10-22 09:08:14 +02:00
Mohan Kumar M 54622f10a6 powerpc: Support for relocatable kdump kernel
This adds relocatable kernel support for kdump. With this one can
use the same regular kernel to capture the kdump. A signature (0xfeed1234)
is passed in r6 from panic code to the next kernel through kexec_sequence
and purgatory code. The signature is used to differentiate between
kdump kernel and non-kdump kernels.

The purgatory code compares the signature and sets the __kdump_flag in
head_64.S.  During the boot up, kernel code checks __kdump_flag and if it
is set, the kernel will behave as a relocatable kdump kernel. This kernel
will boot at the address where it was loaded by kexec-tools, i.e. at the
address reserved through the crashkernel boot parameter.

CONFIG_CRASH_DUMP depends on CONFIG_RELOCATABLE option to build kdump
kernel as relocatable. So the same kernel can be used as production and
kdump kernel.

This patch incorporates the changes suggested by Paul Mackerras to avoid
GOT use and to avoid two copies of the code.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Mohan Kumar M <mohan@in.ibm.com>
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-22 15:01:22 +11:00
David Gibson 201bdc868d powerpc: Further compile fixup for STRICT_MM_TYPECHECKS
A patch of mine was recently committed to fix up STRICT_MM_TYPECHECKS
behaviour on powerpc (f5ea64dcba).
However, something which breaks it again seems to have slipped in
afterwards.  So, here's another small fix.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-22 11:00:26 +11:00
Michael Neuling 8873d93b4b powerpc: Remove empty #else from signal_64.c
Remove empty/bogus #else from signal_64.c

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-22 11:00:26 +11:00
Becky Bruce f465df81a8 powerpc: Move memory size print into common show_cpuinfo for 32-bit
Most of the platforms were printing the size of the memory
in their show_cpuinfo implementations. This moves that to
the common show_cpuinfo, so that all 32-bit platforms will
now print the size of memory.  I also update the code
to deal with the fact that total_memory is now a phys_addr_t.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-22 11:00:25 +11:00
Benjamin Herrenschmidt a02efb906d Merge commit 'origin' into master
Manual merge of:

	arch/powerpc/Kconfig
	arch/powerpc/include/asm/page.h
2008-10-21 15:52:04 +11:00
Milton Miller 34d81f858a powerpc: Delete unused prom_strtoul and prom_memparse
These functions should have been static, and inspection shows they
are no longer used.   (We used to parse mem= but we now defer that
to early_param).

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-21 15:20:08 +11:00
Milton Miller ed7b2144bc powerpc: Find and destroy possible stale kernel added properties
64 bit powerpc requires that the kexec user space tools avoid overwriting
the static kernel image and translation hash table when choosing
where to put memory image data, because it copies the data into place
using the kernel's virtual memory system.  Kexec userspace determines
these and other areas blocked by reading properties the kernel adds,
but does not filter these properties when creating the device tree
for the next kernel.

When the second kernel tries to add its values for these properties,
the export via /proc/device-tree is hidden by the pre-existing but
stale values from the flat tree.  Kexec userspace reads the old
property, allocates the new kernel at the old kernel's end, and
gets rejected by the overlap check.

Search and remove these stale properties before adding the new values.

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-21 15:19:12 +11:00
Kumar Gala a3ba68f969 powerpc: Fix build issue with CONFIG_RELOCATABLE=y
There are two issues when we enable CONFIG_RELOCATABLE.  The first is due
to the fact that phys_addr_t is now defined in linux/types.h.  The second
is due to the fact that the DMA code changes expose memstart_addr to
prom_init.c

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-21 15:17:49 +11:00
roel kluin bb5e6491ca powerpc: Unsigned speed cannot be negative in udbg_16559.c
"unsigned int" speed cannot be negative, it's thus pointless
to test if it is.

Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-21 15:17:47 +11:00
Benjamin Herrenschmidt e9f82cb750 powerpc/PCI: Add legacy PCI access via sysfs
This patch adds support for legacy_io and legacy_mem files in
bus class directories in sysfs for powerpc

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
2008-10-20 11:01:47 -07:00
Steven Rostedt 606576ce81 ftrace: rename FTRACE to FUNCTION_TRACER
Due to confusion between the ftrace infrastructure and the gcc profiling
tracer "ftrace", this patch renames the config options from FTRACE to
FUNCTION_TRACER.  The other two names that are offspring from FTRACE,
DYNAMIC_FTRACE and FTRACE_MCOUNT_RECORD, will stay the same.

This patch was generated mostly by script, and partially by hand.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-10-20 18:27:03 +02:00
Vivek Goyal 57cac4d188 kdump: make elfcorehdr_addr independent of CONFIG_PROC_VMCORE
o elfcorehdr_addr is used not only by the code under CONFIG_PROC_VMCORE
  but also by code which is not inside CONFIG_PROC_VMCORE.  For
  example, is_kdump_kernel() is used by powerpc code to determine if
  kernel is booting after a panic then use previous kernel's TCE table.
  So even if CONFIG_PROC_VMCORE is not set in second kernel, one should be
  able to correctly determine that we are booting after a panic and setup
  calgary iommu accordingly.

o So remove the assumption that elfcorehdr_addr is under
  CONFIG_PROC_VMCORE.

o Move definition of elfcorehdr_addr to arch dependent crash files.
  (Unfortunately crash dump does not have an arch independent file
  otherwise that would have been the best place).

o kexec.c is not the right place as one can have CRASH_DUMP enabled in
  second kernel without KEXEC being enabled.

o I don't see sh setup code parsing the command line for
  elfcorehdr_addr.  I am wondering how the vmcore interface works on sh.
  Anyway, I am at least defining elfcorehdr_addr so that compilation is not
  broken on sh.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>
Acked-by: Simon Horman <horms@verge.net.au>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-10-20 08:52:39 -07:00
Josh Boyer df8f71faa8 powerpc/40x: Add AMCC PowerPC 405EZ to cputable
This adds the AMCC PowerPC 405EZ chip to the cputable

Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
2008-10-17 10:31:18 -04:00
Linus Torvalds 08d19f51f0 Merge branch 'kvm-updates/2.6.28' of git://git.kernel.org/pub/scm/linux/kernel/git/avi/kvm
* 'kvm-updates/2.6.28' of git://git.kernel.org/pub/scm/linux/kernel/git/avi/kvm: (134 commits)
  KVM: ia64: Add intel iommu support for guests.
  KVM: ia64: add directed mmio range support for kvm guests
  KVM: ia64: Make pmt table be able to hold physical mmio entries.
  KVM: Move irqchip_in_kernel() from ioapic.h to irq.h
  KVM: Separate irq ack notification out of arch/x86/kvm/irq.c
  KVM: Change is_mmio_pfn to kvm_is_mmio_pfn, and make it common for all archs
  KVM: Move device assignment logic to common code
  KVM: Device Assignment: Move vtd.c from arch/x86/kvm/ to virt/kvm/
  KVM: VMX: enable invlpg exiting if EPT is disabled
  KVM: x86: Silence various LAPIC-related host kernel messages
  KVM: Device Assignment: Map mmio pages into VT-d page table
  KVM: PIC: enhance IPI avoidance
  KVM: MMU: add "oos_shadow" parameter to disable oos
  KVM: MMU: speed up mmu_unsync_walk
  KVM: MMU: out of sync shadow core
  KVM: MMU: mmu_convert_notrap helper
  KVM: MMU: awareness of new kvm_mmu_zap_page behaviour
  KVM: MMU: mmu_parent_walk
  KVM: x86: trap invlpg
  KVM: MMU: sync roots on mmu reload
  ...
2008-10-16 15:36:00 -07:00
Joerg Roedel 2994a3b265 powerpc: use iommu_num_pages function in IOMMU code
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-10-16 11:21:33 -07:00
Joerg Roedel 3400001c53 powerpc: rename iommu_num_pages function to iommu_nr_pages
This is a preparation patch for introducing a generic iommu_num_pages function.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-10-16 11:21:33 -07:00
Christoph Hellwig b418da16dd compat: generic compat get/settimeofday
Nothing arch specific in get/settimeofday.  The details of the timeval
conversion varied a little from arch to arch, but all with the same
results.

Also add an extern declaration for sys_tz to linux/time.h because externs
in .c files are frowned upon.  I'll kill the externs in various other files
in a separate patch.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: David S. Miller <davem@davemloft.net> [ sparc bits ]
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Acked-by: Kyle McMartin <kyle@mcmartin.ca>
Cc: Matthew Wilcox <matthew@wil.cx>
Cc: Grant Grundler <grundler@parisc-linux.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-10-16 11:21:33 -07:00
Christoph Hellwig f7a5000f7a compat: move cp_compat_stat to common code
struct stat / compat_stat is the same on all architectures, so
cp_compat_stat should be, too.

Turns out it is, except that various architectures differ slightly, and some
use high2lowuid/high2lowgid or direct assignment instead of the
SET_UID/SET_GID macros, which expand to the correct thing anyway.

This patch replaces the arch-specific cp_compat_stat implementations with
a common one based on the x86-64 one.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: David S. Miller <davem@davemloft.net> [ sparc bits ]
Acked-by: Kyle McMartin <kyle@mcmartin.ca> [ parisc bits ]
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-10-16 11:21:33 -07:00
Hollis Blanchard 49dd2c4928 KVM: powerpc: Map guest userspace with TID=0 mappings
When we use TID=N userspace mappings, we must ensure that kernel mappings have
been destroyed when entering userspace. Using TID=1/TID=0 for kernel/user
mappings and running userspace with PID=0 means that userspace can't access the
kernel mappings, but the kernel can directly access userspace.

The net is that we don't need to flush the TLB on privilege switches, but we do
on guest context switches (which are far more infrequent). Guest boot time
performance improvement: about 30%.

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
2008-10-15 10:15:16 +02:00
Hollis Blanchard 83aae4a809 KVM: ppc: Write only modified shadow entries into the TLB on exit
Track which TLB entries need to be written, instead of overwriting everything
below the high water mark. Typically only a single guest TLB entry will be
modified in a single exit.

Guest boot time performance improvement: about 15%.

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
2008-10-15 10:15:16 +02:00
Hollis Blanchard 20754c2495 KVM: ppc: Stop saving host TLB state
We're saving the host TLB state to memory on every exit, but never using it.
Originally I had thought that we'd want to restore host TLB for heavyweight
exits, but that could actually hurt when context switching to an unrelated host
process (i.e. not qemu).

Since this decreases the performance penalty of all exits, this patch improves
guest boot time by about 15%.

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
2008-10-15 10:15:16 +02:00
Benjamin Herrenschmidt 6dc6472581 Merge commit 'origin'
Manual fixup of conflicts on:

	arch/powerpc/include/asm/dcr-regs.h
	drivers/net/ibm_newemac/core.h
2008-10-15 11:31:54 +11:00
Benjamin Herrenschmidt 2bda347bc5 powerpc: Fix 32-bit SMP boot on CHRP
prom_init was changed to take a new argument, the address
where the kernel is loaded, which is now used to copy the
SMP spin loop down before use.

However, only head_64.S was adapted to pass this new value,
not head_32.S, thus breaking SMP boot on 32-bit SMP CHRP
machines.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-15 10:13:29 +11:00
Benjamin Herrenschmidt 7b6b574ca7 powerpc: Fix link errors on 32-bit machines using legacy DMA
The new merged DMA code will try to access isa_bridge_pcidev when
trying to DMA to/from legacy devices. This is however only defined
on 64-bit. Fixes this for now by adding the variable, even if it
stays NULL. In the long run, we'll make isa-bridge.c common to
32 and 64-bit.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-15 10:13:29 +11:00
Benjamin Herrenschmidt b556151110 powerpc/pci: Improve detection of unassigned bridge resources
When the powerpc PCI layer is not configured to re-assign everything,
it currently fails to detect that a PCI to PCI bridge has been left
unassigned by the firmware and tries to allocate resources for the
default window values in the bridge (0...X) (with the notable exception
of a hack we have in there that detects some Apple firmware unassigned
bridge resources).

This results in resource allocation failures, which are generally
fixed up later on but it causes scary warnings in the logs and we
have seen the fixup code fall over in some circumstances (a different
issue to fix as well).

This code improves that by providing a more complete & useful function
to intuit that a bridge was left unassigned by the firmware, and thus
force a full re-allocation by the PCI code without trying to allocate
the existing useless resources first.

The algorithm we use basically considers unassigned a window that
starts at 0 (PCI address) if the corresponding address space enable
bit is not set. In addition, for memory space, it considers such a
resource unassigned also if the host bridge isn't configured to
forward cycles to address 0 (ie, the resource basically overlaps
main memory).
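As a sketch of that heuristic (names are illustrative; host_forwards_to_pci_addr_zero() is a hypothetical helper standing in for the host-bridge check):

  static int bridge_window_unassigned(struct pci_dev *bridge,
                                      struct resource *res, u16 command)
  {
          if (res->start != 0)
                  return 0;       /* window doesn't start at PCI address 0 */

          if (res->flags & IORESOURCE_IO)
                  return !(command & PCI_COMMAND_IO);

          /* memory window: unassigned if decoding is disabled, or if the
           * host bridge doesn't forward cycles to PCI address 0 (i.e. the
           * window would overlap main memory) */
          return !(command & PCI_COMMAND_MEMORY) ||
                 !host_forwards_to_pci_addr_zero(bridge);
  }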

This fixes a range of problems with things like Bare-Metal support
on pSeries machines, or attempt to use partial firmware PCI setup.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-15 10:13:29 +11:00
Sebastian Andrzej Siewior cd301c7ba4 powerpc: Reflect the used arguments in machine_init() prototype
The "phys" argument to machine_init() isn't used and isn't likely to
ever be, so let's remove it.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-14 10:35:26 +11:00
Benjamin Herrenschmidt 8aa2659009 powerpc: Fix DMA offset for non-coherent DMA
After Becky's work we can almost have different DMA offsets
between on-chip devices and PCI. Almost because there's a
problem with the non-coherent DMA code that basically ignores
the programmed offset to use the global one for everything.
This fixes it.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-14 10:35:26 +11:00
David Woodhouse e758936e02 Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6
Conflicts:

	include/asm-x86/statfs.h
2008-10-13 17:13:56 +01:00
Milton Miller 9bd54d185a powerpc: remove non-dependent load fsl_booke PTE_64BIT
b38fd42ff4 added false dependencies
to order the loads of the upper and lower halves of the pte, but only
adjusted whitespace instead of deleting the old load in the iside
handler, letting the hardware see the non-dependent load.

This patch removes the extra load.

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-10-13 11:09:59 -05:00
John Rigby 4a015c3740 powerpc/fsl: Hide MPC5121 pci bridge.
The class of the MPC5121 pci host bridge is PCI_CLASS_BRIDGE_OTHER
while other freescale host bridges have class set to
PCI_CLASS_PROCESSOR_POWERPC.

This patch makes fixup_hide_host_resource_fsl match
PCI_CLASS_BRIDGE_OTHER in addition to PCI_CLASS_PROCESSOR_POWERPC.

Signed-off-by: John Rigby <jrigby@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-10-13 11:09:58 -05:00
Milton Miller 22d660ffd0 powerpc/smp: No need to set_need_resched when getting a resched IPI
The comment in the code was asking "Do we have to do this?", and according
to x86 and s390 the answer is no: the scheduler will do it before calling
the arch hook.

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-13 16:24:20 +11:00
Paul Mackerras 91a0030295 powerpc: Sync RPA note in zImage with kernel's RPA note
Commit 9b09c6d909 ("powerpc: Change the
default link address for pSeries zImage kernels") changed the
real-base value in the CHRP note added by the addnote program from
12MB to 32MB to give more space for Open Firmware to load the zImage.
(The real-base value says where we want OF to position itself in
memory.)  However, this change was ineffective on most pSeries
machines, because the RPA note added by addnote has the "ignore me"
flag set to 1.  This was intended to tell OF to ignore just the RPA
note, but has the side effect of also making OF ignore the CHRP note
(at least on most pSeries machines).

To solve this we have to set the "ignore me" flag to 0 in the RPA
note.  (We can't just omit the RPA note because that is equivalent to
having an RPA note with default values, and the default values are not
what we want.)  However, then we have to make sure the values in the
zImage's RPA note match up with the values that the kernel supplies
later in prom_init.c with either the ibm,client-architecture-support
call or the process-elf-header call in prom_send_capabilities().

So this sets the "ignore me" flag in the RPA note in addnote to 0, and
adjusts the RPA note values in addnote.c and in prom_init.c to be
consistent with each other and with the values in ibm_architecture_vec.

However, since the wrapper is independent of the kernel, this doesn't
ensure that the notes will stay consistent.  To ensure that, this adds
code to addnote.c so that it can extract the kernel's RPA note from
the kernel binary and put that in the zImage.  To that end, we put the
kernel's fake ELF header (which contains the kernel's RPA note) into
its own section, and arrange for wrapper to pull out that section with
objcopy and pass it to addnote, which then extracts the RPA note from
it and transfers it to the zImage.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-10 15:55:19 +11:00
Josh Poimboeuf 41c2e949cb powerpc: Fix error path in kernel_thread function
The powerpc 32-bit and 64-bit kernel_thread functions don't properly
propagate errors being returned by the clone syscall.  (In the case of
error, the syscall exit code returns a positive errno in r3 and sets
the CR0[SO] bit.)

This patch fixes that by negating r3 if CR0[SO] is set after the syscall.

Signed-off-by: Josh Poimboeuf <jpoimboe@us.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-10 15:55:18 +11:00
Ingo Molnar 990d0f2ced Merge branches 'sched/devel', 'sched/cpu-hotplug', 'sched/cpusets' and 'sched/urgent' into sched/core 2008-10-08 11:31:02 +02:00
Benjamin Herrenschmidt 7c12d906f4 powerpc: Fix sysfs pci mmap on 32-bit machines with 64-bit PCI
When manipulating 64-bit PCI addresses, the code would lose the
top 32 bits in a couple of places when shifting a pfn, due to a missing
cast from the 32-bit pfn to a 64-bit resource type before the
shift.
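The fix amounts to casting before shifting, along these lines (a sketch, not the exact hunks):

  /* wrong: 32-bit pfn shifted in 32-bit arithmetic loses the high bits */
  offset = vma->vm_pgoff << PAGE_SHIFT;

  /* right: widen to a 64-bit resource type first */
  offset = ((resource_size_t)vma->vm_pgoff) << PAGE_SHIFT;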

This breaks using newer X servers for example on 440 machines
with the PCI bus above 32-bit.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-07 14:26:21 +11:00
Johannes Berg 2e2b4043cc powerpc: Fix 64-bit hibernation with 64k pages
A bug in my initial 64-bit hibernation code breaks it when using
page sizes that aren't 4K.

Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-07 14:26:20 +11:00
Sebastien Dugue 6ddc9d3200 powerpc: Ignore generated vmlinux.lds in git
Add a .gitignore in arch/powerpc/kernel to ignore the generated
vmlinux.lds.

Signed-off-by: Sebastien Dugue <sebastien.dugue@bull.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-10-07 14:26:18 +11:00
Benjamin Herrenschmidt c9b59da130 Merge commit 'kumar/kumar-mmu' 2008-10-02 16:11:49 +10:00
Linus Torvalds 95237b80a3 Merge branch 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc:
  powerpc: Fix failure to shutdown with CPU hotplug
  powerpc: Fix PCI in Holly device tree
2008-09-30 08:40:46 -07:00
Johannes Berg 61e9916eba powerpc: Fix failure to shutdown with CPU hotplug
I tracked down the shutdown regression to CPUs not dying
when being shut down during power-off. This turns out to
be due to the system_state being SYSTEM_POWER_OFF, which
this code doesn't take as a valid state for shutting off
CPUs in.

This has never made sense to me, but when I added hotplug
code to implement hibernate I only "made it work" and did
not question the need to check the system_state. Thomas
Gleixner helped me dig, but the only thing we found is
that it was added with the original commit that added CPU
hotplug support.

Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Acked-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-09-30 13:25:06 +10:00
Jason Wessel d7161a6534 kgdb, x86, arm, mips, powerpc: ignore user space single stepping
On the x86 arch, user space single step exceptions should be ignored
if they occur in the kernel space, such as ptrace stepping through a
system call.

First check whether it is kgdb that is executing the single step, then
ensure it is not an accidental traversal into user space while in kgdb;
at any other time that TIF_SINGLESTEP is set, kgdb should ignore the
exception.

On x86, arm, mips and powerpc, the kgdb_contthread usage was
inconsistent with the way single stepping is implemented in the kgdb
core.  The arch specific stub should always set the
kgdb_cpu_doing_single_step correctly if it is single stepping.  This
allows kgdb to correctly process an instruction steps if ptrace
happens to be requesting an instruction step over a system call.

Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
2008-09-26 10:36:41 -05:00
Becky Bruce 4ee7084eb1 POWERPC: Allow 32-bit hashed pgtable code to support 36-bit physical
This rearranges a bit of code, and adds support for
36-bit physical addressing for configs that use a
hashed page table.  The 36b physical support is not
enabled by default on any config - it must be
explicitly enabled via the config system.

This patch *only* expands the page table code to accommodate
large physical addresses on 32-bit systems and enables the
PHYS_64BIT config option for 86xx.  It does *not*
allow you to boot a board with more than about 3.5GB of
RAM - for that, SWIOTLB support is also required (and
coming soon).

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-09-24 16:29:44 -05:00
Kumar Gala 0ba3418b8b powerpc: Introduce local (non-broadcast) forms of tlb invalidates
Introduced a new set of low level tlb invalidate functions that do not
broadcast invalidates on the bus:

_tlbil_all - invalidate all
_tlbil_pid - invalidate based on process id (or mm context)
_tlbil_va  - invalidate based on virtual address (ea + pid)

On non-SMP configs _tlbil_all should be functionally equivalent to _tlbia and
_tlbil_va should be functionally equivalent to _tlbie.

The intent of this change is to handle SMP based invalidates via IPIs instead
of broadcasts, as that mechanism scales better for larger numbers of cores.

On e500 (fsl-booke mmu) based cores, move to using MMUCSR for invalidate-alls
and tlbsx/tlbwe for invalidating by virtual address.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-09-24 16:29:40 -05:00
Becky Bruce 4fc665b88a powerpc: Merge 32 and 64-bit dma code
We essentially adopt the 64-bit dma code, with some changes to support
32-bit systems, including HIGHMEM.  dma functions on 32-bit are now
invoked via accessor functions which call the correct op for a device based
on archdata dma_ops.  If there is no archdata dma_ops, this defaults
to dma_direct_ops.

In addition, the dma_map/unmap_page functions are added to dma_ops
because we can't just fall back on map/unmap_single when HIGHMEM is
enabled. In the case of dma_direct_*, we stop using map/unmap_single
and just use the page version - this saves a lot of ugly
ifdeffing.  We leave map/unmap_single in the dma_ops definition,
though, because they are needed by the iommu code, which does not
implement map/unmap_page.  Ideally, going forward, we will completely
eliminate map/unmap_single and just have map/unmap_page, if it's
workable for 64-bit.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-09-24 16:26:45 -05:00
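A hedged userspace-style sketch of the dispatch shape described above (per-device ops consulted first, a direct implementation as the fallback); the structures and names here are toy stand-ins, not the kernel's actual dma_ops types.

    #include <stdio.h>
    #include <stddef.h>

    /* Toy model of "ops table per device, with a direct fallback". */
    struct mock_dma_ops {
            void *(*map_page)(void *dev, void *page, size_t offset, size_t size);
    };

    struct mock_device {
            struct mock_dma_ops *dma_ops;   /* analogous to per-device archdata ops */
    };

    static void *direct_map_page(void *dev, void *page, size_t offset, size_t size)
    {
            (void)dev; (void)size;
            printf("direct mapping\n");
            return (char *)page + offset;
    }

    static struct mock_dma_ops mock_direct_ops = { .map_page = direct_map_page };

    /* Accessor: use the device's ops if present, otherwise fall back. */
    static struct mock_dma_ops *get_ops(struct mock_device *dev)
    {
            return dev->dma_ops ? dev->dma_ops : &mock_direct_ops;
    }

    int main(void)
    {
            char page[4096];
            struct mock_device dev = { .dma_ops = NULL };   /* no per-device ops set */

            get_ops(&dev)->map_page(&dev, page, 128, 64);
            return 0;
    }
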
Becky Bruce 8fae035324 powerpc: Drop archdata numa_node
Use the struct device's numa_node instead; use accessor functions
to get/set numa_node.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-09-24 16:26:43 -05:00
Becky Bruce 8dd0e95206 powerpc: Move iommu dma ops from dma.c to dma-iommu.c
32-bit platforms are about to start using dma.c; move the iommu
dma ops into their own file to make this a bit cleaner.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-09-24 16:26:42 -05:00
Becky Bruce 7c05d7e08d powerpc: Rename dma_64.c to dma.c
This is in preparation for the merge of the 32 and 64-bit
dma code in arch/powerpc.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-09-24 16:26:41 -05:00
Kumar Gala b38fd42ff4 powerpc/fsl-booke: Fixup 64-bit PTE reading for SMP support
We need to create a false data dependency to ensure the loads of
the pte are done in the right order.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-09-19 13:31:04 -05:00
Kumar Gala 33a7f12274 powerpc: Fix build warnings introduced by PMC support on 32-bit
arch/powerpc/kernel/sysfs.c:197:7: warning: "CONFIG_6xx" is not defined
arch/powerpc/kernel/sysfs.c:141: warning: 'run_on_cpu' defined but not used

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-09-18 17:57:50 -05:00
James Bottomley 2d291e9027 Fix compile failure with non modular builds
Commit deac93df26 ("lib: Correct printk
%pF to work on all architectures") broke the non modular builds by
moving an essential function into modules.c.  Fix this by moving it
out again and into asm/sections.h as an inline.  To do this, the
definition of struct ppc64_opd_entry has been lifted out of modules.c
and put in asm/elf.h where it belongs.

Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-09-17 09:14:42 -07:00
Martin Langer a501d8f30e powerpc: Fix major revision number for Freescale cores
Some 74xx cores by Freescale use the configuration field instead
of the major revision field for their revision number.  This corrects
the wrong behaviour for those ppc cores, including mine.

There is a reference document at Freescale that describes the PVR
register; this is based on that PDF.  You can find the document at:

http://www.freescale.com/files/archives/doc/support_info/PPCPVR.pdf

Signed-off-by: Martin Langer <martin-langer@gmx.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-09-15 11:08:48 -07:00
Sebastien Dugue 150c6c8fec powerpc: Make the irq reverse mapping radix tree lockless
The radix trees used by interrupt controllers for their irq reverse
mapping (currently only the XICS found on pSeries) have a complex
locking scheme dating back to before the advent of the lockless radix
tree.

This takes advantage of the lockless radix tree and of the fact that
the items of the tree are pointers to elements of a static array
(irq_map) which can never go away under us, to simplify the locking.

Concurrency between readers and writers is handled by the intrinsic
properties of the lockless radix tree.  Concurrency between writers is
handled with a global mutex.

Signed-off-by: Sebastien Dugue <sebastien.dugue@bull.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-09-15 11:08:45 -07:00
Sebastien Dugue 967e012ef3 powerpc: Separate the irq radix tree insertion and lookup
irq_radix_revmap() currently serves two purposes: irq mapping lookup
and insertion, which happen in interrupt and process context respectively.

Separate the function into its two components, one for lookup only and one
for insertion only.

Fix the only user of the revmap tree (XICS) to use the new functions.

Also, move the insertion into the radix tree of those irqs that were
requested before it was initialized at said tree initialization.

Mutual exclusion between the tree initialization and readers/writers is
handled via a state variable (revmap_trees_allocated) set to 1 when the tree
has been initialized and set to 2 after the already requested irqs have been
inserted in the tree by the init path. This state is checked before any reader
or writer access just like we used to check for tree.gfp_mask != 0 before.

Finally, now that we're no longer inserting nodes into the radix tree
in interrupt context, turn the GFP_ATOMIC allocations into GFP_KERNEL ones.

Signed-off-by: Sebastien Dugue <sebastien.dugue@bull.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-09-15 11:08:44 -07:00
Christoph Hellwig d6c93adbeb powerpc: Use sys_pause for 32-bit pause entry point
sys32_pause is a useless copy of the generic sys_pause.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-09-15 11:08:39 -07:00
Paul Mackerras 549e8152de powerpc: Make the 64-bit kernel as a position-independent executable
This implements CONFIG_RELOCATABLE for 64-bit by making the kernel as
a position-independent executable (PIE) when it is set.  This involves
processing the dynamic relocations in the image in the early stages of
booting, even if the kernel is being run at the address it is linked at,
since the linker does not necessarily fill in words in the image for
which there are dynamic relocations.  (In fact the linker does fill in
such words for 64-bit executables, though not for 32-bit executables,
so in principle we could avoid calling relocate() entirely when we're
running a 64-bit kernel at the linked address.)

The dynamic relocations are processed by a new function relocate(addr),
where the addr parameter is the virtual address where the image will be
run.  In fact we call it twice; once before calling prom_init, and again
when starting the main kernel.  This means that reloc_offset() returns
0 in prom_init (since it has been relocated to the address it is running
at), which necessitated a few adjustments.

This also changes __va and __pa to use an equivalent definition that is
simpler.  With the relocatable kernel, PAGE_OFFSET and MEMORY_START are
constants (for 64-bit) whereas PHYSICAL_START is a variable (and
KERNELBASE ideally should be too, but isn't yet).

With this, relocatable kernels still copy themselves down to physical
address 0 and run there.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-09-15 11:08:38 -07:00
Paul Mackerras e31aa453bb powerpc: Use LOAD_REG_IMMEDIATE only for constants on 64-bit
Using LOAD_REG_IMMEDIATE to get the address of kernel symbols
generates 5 instructions where LOAD_REG_ADDR can do it in one,
and will generate R_PPC64_ADDR16_* relocations in the output when
we get to making the kernel as a position-independent executable,
which we'd rather not have to handle.  This changes various bits
of assembly code to use LOAD_REG_ADDR when we need to get the
address of a symbol, or to use suitable position-independent code
for cases where we can't access the TOC for various reasons, or
if we're not running at the address we were linked at.

It also cleans up a few minor things; there's no reason to save and
restore SRR0/1 around RTAS calls, __mmu_off can get the return
address from LR more conveniently than the caller can supply it in
R4 (and we already assume elsewhere that EA == RA if the MMU is on
in early boot), and enable_64b_mode was using 5 instructions where
2 would do.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-09-15 11:08:35 -07:00
Paul Mackerras 1f6a93e4c3 powerpc: Make it possible to move the interrupt handlers away from the kernel
This changes the way that the exception prologs transfer control to
the handlers in 64-bit kernels with the aim of making it possible to
have the prologs separate from the main body of the kernel.  Now,
instead of computing the address of the handler by taking the top
32 bits of the paca address (to get the 0xc0000000........ part) and
ORing in something in the bottom 16 bits, we get the base address of
the kernel by doing a load from the paca and add an offset.

This also replaces an mfmsr and an ori to compute the MSR value for
the handler with a load from the paca.  That makes it unnecessary to
have a separate version of EXCEPTION_PROLOG_PSERIES that forces 64-bit
mode.

We can no longer use direct branches in the exception prolog code,
which means that the SLB miss handlers can't branch directly to
.slb_miss_realmode any more.  Instead we have to compute the address
and do an indirect branch.  This is conditional on CONFIG_RELOCATABLE;
for non-relocatable kernels we use a direct branch as before.  (A later
change will allow CONFIG_RELOCATABLE to be set on 64-bit powerpc.)

Since the secondary CPUs on pSeries start execution in the first 0x100
bytes of real memory and then have to get to wherever the kernel is,
we can't use a direct branch to get there.  Instead this changes
__secondary_hold_spinloop from a flag to a function pointer.  When it
is set to a non-NULL value, the secondary CPUs jump to the function
pointed to by that value.

Finally this eliminates one code difference between 32-bit and 64-bit
by making __secondary_hold be the text address of the secondary CPU
spinloop rather than a function descriptor for it.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-09-15 11:08:08 -07:00
Paul Mackerras 9a95516740 powerpc: Rearrange head_64.S to move interrupt handler code to the beginning
This rearranges head_64.S so that we have all the first-level exception
prologs together starting at 0x100, followed by all the second-level
handlers that are invoked from the first-level prologs, followed by
other code.  This doesn't make any functional change but will make
following changes for relocatable kernel support easier.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-09-15 11:08:06 -07:00
Chandru cf00085d80 powerpc: Add support for dynamic reconfiguration memory in kexec/kdump kernels
Kdump kernel needs to use only those memory regions that it is allowed
to use (crashkernel, rtas, tce, etc.).  Each of these regions have
their own sizes and are currently added under 'linux,usable-memory'
property under each memory@xxx node of the device tree.

The ibm,dynamic-memory property of ibm,dynamic-reconfiguration-memory
node (on POWER6) now stores in it the representation for most of the
logical memory blocks with the size of each memory block being a
constant (lmb_size).  If one or more or part of the above mentioned
regions lie under one of the lmb from ibm,dynamic-memory property,
there is a need to identify those regions within the given lmb.

This makes the kernel recognize a new 'linux,drconf-usable-memory'
property added by kexec-tools.  Each entry in this property is of the
form of a count followed by that many (base, size) pairs for the above
mentioned regions.  The number of cells in the count value is given by
the #size-cells property of the root node.

Signed-off-by: Chandru Siddalingappa <chandru@in.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-09-15 11:07:58 -07:00
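A hedged sketch of walking one such entry, assuming for simplicity that the count and the (base, size) values are all 64-bit cells; the real cell widths come from the device tree as described above, and the numbers below are made up.

    #include <stdint.h>
    #include <stdio.h>

    /* Walk one usable-memory style entry:
     * a count followed by count (base, size) pairs.
     * Returns a pointer just past the entry. */
    static const uint64_t *walk_usable_entry(const uint64_t *p)
    {
            uint64_t i, count = *p++;

            for (i = 0; i < count; i++) {
                    uint64_t base = *p++;
                    uint64_t size = *p++;

                    printf("  usable region: base=0x%llx size=0x%llx\n",
                           (unsigned long long)base, (unsigned long long)size);
            }
            return p;
    }

    int main(void)
    {
            /* One LMB with two usable sub-regions (made-up numbers). */
            const uint64_t entry[] = { 2, 0x10000000, 0x100000, 0x20000000, 0x200000 };

            walk_usable_entry(entry);
            return 0;
    }
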
Paul Mackerras 7e392f8c29 Merge branch 'linux-2.6' 2008-09-10 11:36:13 +10:00
James Bottomley deac93df26 lib: Correct printk %pF to work on all architectures
It was introduced by "vsprintf: add support for '%pS' and '%pF' pointer
formats" in commit 0fe1ef24f7.  However,
the way it's currently coded doesn't work on parisc64, for two reasons: 1)
parisc isn't in the #ifdef, and 2) parisc has a different format for
function descriptors.

Make dereference_function_descriptor() more accommodating by allowing
architecture overrides.  I put the three overrides (for parisc64, ppc64
and ia64) in arch/kernel/module.c because that's where the kernel's
internal linker, which knows how to deal with function descriptors, sits.

Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Tony Luck <tony.luck@intel.com>
Acked-by: Kyle McMartin <kyle@mcmartin.ca>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-09-09 11:51:15 -07:00
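A simplified userspace sketch of what such an override amounts to on an ABI that uses function descriptors: if the pointer falls inside the descriptor table, follow it to the real entry address. The structure and bounds here are illustrative, not the exact kernel definitions.

    #include <stdio.h>

    /* Toy function descriptor: entry address plus a TOC/GP value. */
    struct opd_entry {
            unsigned long funcaddr;
            unsigned long toc;
    };

    /* Pretend these delimit the descriptor (.opd-like) section. */
    static struct opd_entry opd_table[2] = {
            { 0x1000, 0x9000 },
            { 0x2000, 0x9000 },
    };

    static unsigned long dereference_descriptor(unsigned long ptr)
    {
            unsigned long start = (unsigned long)&opd_table[0];
            unsigned long end = (unsigned long)&opd_table[2];

            /* Only dereference pointers that fall inside the descriptor section. */
            if (ptr >= start && ptr < end)
                    return ((struct opd_entry *)ptr)->funcaddr;
            return ptr;     /* already a text address */
    }

    int main(void)
    {
            printf("0x%lx\n", dereference_descriptor((unsigned long)&opd_table[1]));
            printf("0x%lx\n", dereference_descriptor(0x3000));
            return 0;
    }
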
Manfred Spraul e545a6140b kernel/cpu.c: create a CPU_STARTING cpu_chain notifier
Right now, there is no notifier that is called on a new cpu before the new
cpu begins processing interrupts/softirqs.
Various kernel functions would need that notification; e.g. kvm works around
it by calling smp_call_function_single(), and rcu polls cpu_online_map.

The patch adds a CPU_STARTING notification. It also adds a helper function
that sends the message to all cpu_chain handlers.

Tested on x86-64.
All other archs are untested. Especially on sparc, I'm not sure if I got
it right.

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 19:25:24 +02:00
Adrian Bunk 9d5a9e7465 Remove asm/a.out.h files for all architectures without a.out support.
This patch also includes the required removal of the (unused) inclusions of
<asm/a.out.h> and <linux/a.out.h> in the arch/ code for these
architectures.

[dwmw2: updated for 2.6.27-rc]
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2008-09-06 19:30:24 +01:00
Kumar Gala 7888bc2b47 powerpc: Fix for getting CPU number in power_save_ppc32_restore()
The calculation to get TI_CPU based on SPRG3 was just plain wrong,
meaning that we were getting garbage for the CPU number on 6xx/G3/G4
based SMP boxes in this code.

Just offset off the stack pointer (to get to thread_info) like all the
other references to TI_CPU do.

This was pointed out by Chen Gong <G.Chen@freescale.com>

[paulus@samba.org - use rlwinm r12,r11,... instead of
 rlwinm r12,r1,...; tophys()]

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-09-03 20:53:47 +10:00
Tony Breeds 7563dc6458 powerpc: Work around gcc's -fno-omit-frame-pointer bug
This bug is causing random crashes
(http://bugzilla.kernel.org/show_bug.cgi?id=11414).

-fno-omit-frame-pointer is only needed on powerpc when -pg is also
supplied, and there is a gcc bug that causes incorrect code generation
on 32-bit powerpc when -fno-omit-frame-pointer is used---it uses stack
locations below the stack pointer, which is not allowed by the ABI
because those locations can and sometimes do get corrupted by an
interrupt.

This ensures that CONFIG_FRAME_POINTER is only selected by ftrace.
When CONFIG_FTRACE is enabled we also pass -mno-sched-epilog to work
around the gcc codegen bug.

Patch based on work by:
	Andreas Schwab <schwab@suse.de>
	Segher Boessenkool <segher@kernel.crashing.org>

Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-09-03 20:53:34 +10:00
Stephen Rothwell 303996dace powerpc: Make sure _etext is after all kernel text
This makes core_kernel_text() (and therefore kernel_text_address())
return the correct result.  Currently all the __devinit routines (at
least) will not be considered to be kernel text.

This is just a quick fix for 2.6.27 - hopefully we will be able to fix
this better in 2.6.28.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-09-03 20:53:26 +10:00
Michael Neuling 78fbc824ed powerpc: Fix uninitialised variable in VSX alignment code
This fixes an uninitialised variable in the VSX alignment code.  It can
cause warnings from GCC (noticed with gcc-4.1.1).  Gcc is actually
correct in this instance, and this bug could cause the alignment
interrupt handler to send a SIGSEGV to the process on a legitimate
access.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-09-03 20:53:14 +10:00
Benjamin Herrenschmidt b950bdd0fc powerpc: Expose PMCs & cache topology in sysfs on 32-bit
The file arch/powerpc/kernel/sysfs.c is currently only compiled for
64-bit kernels.  It contain code to register CPU sysdevs in sysfs and
add various properties such as cache topology and raw access by root
to performance monitor counters (PMCs).  A lot of that can be re-used
as is on 32-bits.

This makes the file be built for both, with appropriate ifdef'ing
for the few bits that are really 64-bit specific, and adds some
support for the raw PMCs for 75x and 74xx processors.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-08-20 16:34:58 +10:00
Nathan Lynch f3d3d307e6 powerpc: Remove redundant sysfs_remove_file calls for cache info
When removing a directory, the sysfs core takes care of removing files
in the directory (see sysfs_remove_dir()).  So when we are about to
delete a kobject (and thus cause its sysfs directory to be removed),
we don't have to explicitly remove the files attached to it, although
it's harmless to do so.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-08-20 16:34:58 +10:00
Harvey Harrison 5df72bf3f7 powerpc: Replace __FUNCTION__ with __func__
__FUNCTION__ is gcc-specific, use __func__

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-08-20 16:34:57 +10:00
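A trivial example of the substitution: both spellings expand to the enclosing function's name, but only __func__ is standard C99.

    #include <stdio.h>

    static void probe_device(void)
    {
            printf("%s: entered\n", __func__);      /* standard C99 */
            printf("%s: entered\n", __FUNCTION__);  /* gcc-specific spelling */
    }

    int main(void)
    {
            probe_device();
            return 0;
    }
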
Harvey Harrison 542ad5d4cc powerpc: Use the common ascii hex helpers
[akpm@linux-foundation.org: exclude prom_init.c]
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-08-20 16:34:57 +10:00
Michael Ellerman 01f3880dd8 powerpc: Streamline ret_from_except_lite for non-iSeries platforms
There is a small passage of code in ret_from_except_lite which is
only required on iSeries.  For a multi-platform kernel on non-iSeries
machines this means we end up executing ~15 nops in ret_from_except_lite.

It would be nicer if non-iSeries could skip the code entirely, and on
iSeries we can jump out of line to execute the code.

I have no performance numbers to justify this, other than the assertion
that executing 15 nops takes longer than executing 0.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-08-20 16:34:57 +10:00
Brian King cd5aeb9f6c powerpc: Fix vio_bus_probe oops on probe error
When a CMO-enabled kernel is booted on a non-CMO system and a VIO
device's probe function fails, an oops can result since
vio_cmo_bus_remove is called when it should not be.  This fixes it by
avoiding the vio_cmo_bus_remove call on platforms that don't implement
CMO.

cpu 0x0: Vector: 300 (Data Access) at [c00000000e13b3d0]
    pc: c000000000020d34: .vio_cmo_bus_remove+0xc0/0x1f4
    lr: c000000000020ca4: .vio_cmo_bus_remove+0x30/0x1f4
    sp: c00000000e13b650
   msr: 8000000000009032
   dar: 0
 dsisr: 40000000
  current = 0xc00000000e0566c0
  paca    = 0xc0000000006f9b80
    pid   = 2428, comm = modprobe
enter ? for help
[c00000000e13b6e0] c000000000021d94 .vio_bus_probe+0x2f8/0x33c
[c00000000e13b7a0] c00000000029fc88 .driver_probe_device+0x13c/0x200
[c00000000e13b830] c00000000029fdac .__driver_attach+0x60/0xa4
[c00000000e13b8c0] c00000000029f050 .bus_for_each_dev+0x80/0xd8
[c00000000e13b980] c00000000029f9ec .driver_attach+0x28/0x40
[c00000000e13ba00] c00000000029f630 .bus_add_driver+0xd4/0x284
[c00000000e13baa0] c0000000002a01bc .driver_register+0xc4/0x198
[c00000000e13bb50] c00000000002168c .vio_register_driver+0x40/0x5c
[c00000000e13bbe0] d0000000003b3f1c .ibmvfc_module_init+0x70/0x109c [ibmvfc]
[c00000000e13bc70] c0000000000acf08 .sys_init_module+0x184c/0x1a10
[c00000000e13be30] c000000000008748 syscall_exit+0x0/0x40

Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-08-20 09:50:22 +10:00
Joachim Fenkes 4589f1fe57 powerpc/ibmebus: Restore "name" sysfs attribute on ibmebus devices
Recent of_platform changes made of_bus_type_init() overwrite the bus
type's .dev_attrs list, meaning that the "name" attribute that ibmebus
devices previously had is no longer present.  This is a user-visible
regression which breaks the userspace eHCA support, since the eHCA
userspace driver relies on the name attribute to check for valid
adapters.

This fixes it by providing the "name" attribute in the generic OF
device code instead.  Tested on POWER.

Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-08-20 09:50:21 +10:00
Michael Ellerman 7230ced492 powerpc: Fix /dev/oldmem interface for kdump
A change to __ioremap() broke reading /dev/oldmem because we're no
longer able to ioremap pfn 0 (d177c207, "[PATCH] powerpc: IOMMU: don't
ioremap null addresses").

We actually don't need to ioremap for anything that's part of the linear
mapping, so just read it directly.

Also make sure we're only reading one page or less at a time.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Sachin Sant <sachinp@in.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-08-20 09:50:21 +10:00
Christoph Hellwig 50d0b17645 powerpc: Use generic compat_sys_old_readdir
Use the generic compat_sys_old_readdir instead of the powerpc one which
is almost the same except for the almost complete lack of error
handling.

Note that we can't just use SYSCALL() in systbl.h because the native
syscall is named old_readdir, not sys_old_readdir.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-08-18 14:22:35 +10:00
Paul Collins d9178f4c14 powerpc/kexec: Fix up KEXEC_CONTROL_CODE_SIZE missed during conversion
Commit 163f6876f5 missed one, resulting in
the following compile error:

  AS      arch/powerpc/kernel/misc_32.o
arch/powerpc/kernel/misc_32.S: Assembler messages:
arch/powerpc/kernel/misc_32.S:902: Error: unsupported relocation against KEXEC_CONTROL_CODE_SIZE
make[2]: *** [arch/powerpc/kernel/misc_32.o] Error 1
make[1]: *** [arch/powerpc/kernel] Error 2
make: *** [vmlinux] Error 2

I grepped arch/ and found no further instances.

Signed-off-by: Paul Collins <paul@ondioline.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-08-18 14:22:35 +10:00
Steven Rostedt b9754568ef powerpc: Remove dead module_find_bug code
Doing some various "make randconfig", I came across an error when
CONFIG_BUG was not set:

arch/powerpc/kernel/module.c: In function 'module_find_bug':
arch/powerpc/kernel/module.c:111: error: increment of pointer to unknown structure
arch/powerpc/kernel/module.c:111: error: arithmetic on pointer to an incomplete type
arch/powerpc/kernel/module.c:112: error: dereferencing pointer to incomplete type

Looking further into this, I found that module_find_bug, defined in
powerpc arch code, is not called anywhere, so this just removes it.

There is a static module_find_bug in lib/bug.c but that is a separate issue.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-08-18 14:22:35 +10:00
Robert Jennings ac22429df2 powerpc: Add CMO enabled flag and paging space data to lparcfg
Add a field in lparcfg output to indicate whether the kernel is
running on a dedicated or shared memory lpar.  Also add fields to show
the paging space pool IDs and the CMO page size.

Submitted-by: Robert Jennings <rcj@linux.vnet.ibm.com>

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-08-18 14:22:35 +10:00
Rocky Craig 9acd57ca74 powerpc: Fix TLB invalidation on boot on 32-bit
The intent of "flush_tlbs" is to invalidate all TLB entries by doing a
TLB invalidate instruction for all pages in the address range 0 to
0x00400000.  A loop counter is set up at the high value and
decremented by page size.  However, the loop is only done once as the
sense of the conditional branch at the loop end does not match the
setup/decrement.  This fixes it to do the whole range by correcting
the branch condition.

Signed-off-by: Rocky Craig <rocky.craig@hp.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-08-18 14:22:34 +10:00
Huang Ying 163f6876f5 kexec jump: rename KEXEC_CONTROL_CODE_SIZE to KEXEC_CONTROL_PAGE_SIZE
Rename KEXEC_CONTROL_CODE_SIZE to KEXEC_CONTROL_PAGE_SIZE, because the control
page is used for more than just code on some platforms.  For example, in kexec
jump it is used for data and stack too.

[akpm@linux-foundation.org: unbreak powerpc and arm, finish conversion]
Signed-off-by: Huang Ying <ying.huang@intel.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-08-15 08:35:42 -07:00
Benjamin Herrenschmidt 8db13a0e1e powerpc/pci: Don't keep ISA memory hole resources in the tree
When we have an ISA memory hole (ie, a PCI window that allows us to
generate PCI memory cycles at low PCI address) mixed with other
resources using a different CPU <=> PCI mapping, we must not keep
the ISA hole in the bridge resource list.

If we do, things might start trying to allocate device resources
in there and will get the PCI addresses wrong.

This fixes it by arranging to remove the ISA memory hole resource in
this case.  This fixes various cases of PCMCIA breakage on PowerBooks
using the MPC106 "grackle" bridge.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-08-11 10:09:56 +10:00
Nathan Fontenot b79998fc2e powerpc: Zero fill the return values of rtas argument buffer
The kernel copy of the rtas args struct contains the return
value(s) for the specified rtas call.  These are copied back
to user space with the assumption that every value has been
set by the rtas call, which turns out to be not always true.
Thus userspace can see random values and think the call failed
when in fact it succeeded, but for some reason didn't set one
of the return values.

This fixes the problem by zeroing out the return value fields
of the rtas args struct before processing the rtas call.

Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-08-11 10:09:56 +10:00
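A hedged userspace sketch of the shape of the fix: clear the return-value slots before making the call so that slots the call never writes cannot hand stale data back to the caller. The struct and helper below are illustrative only, not the real RTAS interface.

    #include <stdio.h>
    #include <string.h>

    #define MAX_ARGS 8

    /* Toy model of an args buffer with input and return slots. */
    struct call_args {
            unsigned int nargs;
            unsigned int nret;
            unsigned int args[MAX_ARGS];    /* inputs followed by return slots */
    };

    /* A call that, like some RTAS calls, only fills in its first return slot. */
    static void do_call(struct call_args *a)
    {
            a->args[a->nargs] = 0;  /* status: success */
            /* second return slot intentionally left untouched */
    }

    int main(void)
    {
            struct call_args a = { .nargs = 2, .nret = 2,
                                   .args = { 1, 2, 0xdead, 0xbeef } };

            /* The fix: zero the return slots before making the call.
             * Without this, args[3] would still hold the stale 0xbeef. */
            memset(&a.args[a.nargs], 0, a.nret * sizeof(a.args[0]));

            do_call(&a);
            printf("status=%u extra=%u\n", a.args[2], a.args[3]);
            return 0;
    }
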
Kumar Gala 9c4cb82515 powerpc: Remove use of CONFIG_PPC_MERGE
Now that arch/ppc is gone and CONFIG_PPC_MERGE is always set, remove
the dead code associated with !CONFIG_PPC_MERGE from arch/powerpc
and include/asm-powerpc.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-08-04 13:18:17 +10:00
Michael Neuling 7d2a175b9b powerpc: Don't use the wrong thread_struct for ptrace get/set VSX regs
In PTRACE_GET/SETVSRREGS, we should be using the thread we are
ptracing rather than current.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-30 15:26:54 +10:00
Michael Neuling 1ac42ef844 powerpc: Fix ptrace buffer size for VSX
Fix cut-and-paste error in the size setting for ptrace buffers for VSX.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-30 15:26:54 +10:00
Michael Neuling 33b3f03dcc powerpc: Correctly hookup PTRACE_GET/SETVSRREGS for 32 bit processes
Fix bug where PTRACE_GET/SETVSRREGS are not connected for 32 bit processes.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-30 15:26:54 +10:00
Nathan Fontenot 9ee07f91a1 powerpc: Allow non-hcall return values for lparcfg writes
The code to handle writes to /proc/ppc64/lparcfg incorrectly
assumes that the return codes from the helper routines that update
processor or memory entitlement are hcall return values. It
then assumes any non-hcall return value is bad and sets the return
code for the write to -EIO.

The update_[mp]pp routines can return values other than an hcall
return value. This patch stops automatically converting any
return value from these routines that is not an hcall return value
into -EIO.

Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-30 15:26:53 +10:00
Benjamin Herrenschmidt 025d7917a5 powerpc/powermac: Fixup default serial port device for pmac_zilog
This removes the non-working code in legacy_serial that tried to handle
the powermac SCC ports, and instead adds a (now working) function to the
powermac platform code to find the default serial console, if any.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-28 16:30:53 +10:00
Nathan Lynch 124c27d375 powerpc: Show processor cache information in sysfs
Collect cache information from the OF device tree and display it in
the cpu hierarchy in sysfs.  This is intended to be compatible at the
userspace level with x86's implementation[1], hence some of the funny
attribute names.  The arrangement of cache info is not immediately
intuitive, but (again) it's for compatibility's sake.

The cache attributes exposed are:

type (Data, Instruction, or Unified)
level (1, 2, 3...)
size
coherency_line_size
number_of_sets
ways_of_associativity

All of these can be derived on platforms that follow the OF PowerPC
Processor binding.  The code "publishes" only those attributes for
which it is able to determine values; attributes for values which
cannot be determined are not created at all.

[1] arch/x86/kernel/cpu/intel_cacheinfo.c

BenH: Turned some printk's into pr_debug, added better NULL checking
in a couple of places.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-28 16:30:52 +10:00
Nathan Lynch e9efed3b80 powerpc: Make core id information available to userspace
Existing Open Firmware practice is to report each processor core as a
separate node in the device tree.  Report the value of the "reg" OF
property corresponding to a logical CPU's device node as the core_id
attribute in /sys/devices/system/cpu/cpu*/topology/core_id.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-28 16:30:52 +10:00
Nathan Lynch 440a0857e3 powerpc: Make core sibling information available to userspace
Implement the notion of "core siblings" for powerpc.  This makes
/sys/devices/system/cpu/cpu*/topology/core_siblings present sensible
values, indicating online CPUs which share an L2 cache.

BenH: Made cpu_to_l2cache() use of_find_node_by_phandle() instead
of IBM-specific open coded search

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-28 16:30:51 +10:00
Stephen Rothwell 0764bf63da powerpc/vio: More fallout from dma_mapping_error API change
arch/powerpc/kernel/vio.c:533: error: too few arguments to function 'dma_mapping_error'

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-28 16:30:51 +10:00
Roland McGrath 7d6d637dac powerpc: Add TIF_NOTIFY_RESUME support for tracehook
This adds TIF_NOTIFY_RESUME support for powerpc.  When set,
we call tracehook_notify_resume() on the way to user mode.
This overloads do_signal() to do the work, but changes its
arguments so it has the TIF_* bits handy in a register and
drops the useless first argument that was always zero.

Signed-off-by: Roland McGrath <roland@redhat.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-28 16:30:50 +10:00
Roland McGrath 4f72c4279e powerpc: Make syscall tracing use tracehook.h helpers
This changes powerpc syscall tracing to use the new tracehook.h entry
points.  There is no change, only cleanup.

In addition, the assembly changes allow do_syscall_trace_enter() to
abort the syscall without losing the information about the original
r0 value.

Signed-off-by: Roland McGrath <roland@redhat.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-28 16:30:49 +10:00
Roland McGrath 6558ba2b5c powerpc: Call tracehook_signal_handler() when setting up signal frames
This makes the powerpc signal handling code call tracehook_signal_handler()
after a handler is set up.  This means that using PTRACE_SINGLESTEP to
enter a signal handler will report to ptrace on the first instruction of
the handler, instead of the second.  This is consistent with what x86 and
other machines do, and what users and debuggers want.

BenH: Fixed up the test for the trap value.

Signed-off-by: Roland McGrath <roland@redhat.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-28 16:30:49 +10:00
Nathan Lynch e2075f79a9 powerpc: Update cpu_sibling_maps dynamically
Rather than doing one initialization pass over all the per-cpu
cpu_sibling_maps at boot, update the maps at cpu online/offline time.

This is a behavior change -- the thread_siblings attribute now
reflects only online siblings, whereas it would display offline
siblings before.  The new behavior matches that of x86, and is
arguably more useful.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-28 16:30:49 +10:00
Nathan Lynch 9ba1984ead powerpc: register_cpu_online should be __cpuinit
It is called only in cpu online paths.

(caught by CONFIG_DEBUG_SECTION_MISMATCH=y)

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-28 16:30:48 +10:00
Nathan Lynch 7d2f6075f9 powerpc: kill useless SMT code in prom_hold_cpus
This piece of code is broken for >2 threads, and possibly in some
other subtle ways (such as comparing a value obtained from an
"ibm,ppc-interrupt-server#s" property to a value obtained from a
"reg" property) and doesn't seem to have any useful purpose in the
first place other than a dubious warning in case NR_CPUS is too
small, which probably isn't the right place to do so.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-28 16:30:48 +10:00
Nathan Lynch b9fa49a9a9 powerpc: Fix vio build warnings
arch/powerpc/kernel/vio.c:1034: warning: function declaration isn’t a prototype
arch/powerpc/kernel/vio.c:1035: warning: function declaration isn’t a prototype

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-28 16:30:47 +10:00
Kumar Gala 2325f0a0c3 powerpc/booke: Clean up the hardware watchpoint support
* CONFIG_BOOKE is selected by CONFIG_44x so we don't need both
* Fixed a few comments
* Go back to only using DBCR0_IDM to determine if we are using
  debug resources.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-28 16:30:47 +10:00
Huang Weiyi d3b060231b powerpc: Removed duplicated include in stacktrace.c
Removed duplicated include file <linux/module.h> in
arch/powerpc/kernel/stacktrace.c.

Signed-off-by: Huang Weiyi <weiyi.huang@gmail.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-28 16:30:47 +10:00
Alexey Dobriyan 51cc50685a SL*B: drop kmem cache argument from constructor
The kmem cache passed to the constructor is only needed for constructors that are
themselves multiplexers.  Nobody uses this "feature", nor does anybody use the
passed kmem cache in a non-trivial way, so pass only a pointer to the object.

Non-trivial places are:
	arch/powerpc/mm/init_64.c
	arch/powerpc/mm/hugetlbpage.c

This is flag day, yes.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
Acked-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Jon Tollefson <kniht@linux.vnet.ibm.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Matt Mackall <mpm@selenic.com>
[akpm@linux-foundation.org: fix arch/powerpc/mm/hugetlbpage.c]
[akpm@linux-foundation.org: fix mm/slab.c]
[akpm@linux-foundation.org: fix ubifs]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-26 12:00:07 -07:00
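A hedged sketch of the interface change being described: the constructor callback loses its kmem-cache argument and receives only the object being initialised. The typedefs are illustrative, not the kernel's exact declarations.

    #include <stdio.h>

    /* Old shape: the constructor also received the cache it belonged to. */
    typedef void (*old_ctor_t)(void *cache, void *obj);

    /* New shape: only the object being constructed is passed. */
    typedef void (*new_ctor_t)(void *obj);

    struct thing { int refcount; };

    static void thing_ctor(void *obj)
    {
            struct thing *t = obj;

            t->refcount = 0;        /* one-time initialisation of a fresh object */
    }

    int main(void)
    {
            struct thing t;
            new_ctor_t ctor = thing_ctor;

            ctor(&t);
            printf("refcount=%d\n", t.refcount);
            return 0;
    }
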
Huang Ying 3ab8352137 kexec jump
This patch provides an enhancement to kexec/kdump.  It implements the
following features:

- Backup/restore memory used by the original kernel before/after
  kexec.

- Save/restore CPU state before/after kexec.

The features of this patch can be used as a general method to call a program in
physical mode (with paging turned off).  This can be used to call BIOS code under
Linux.

kexec-tools needs to be patched to support kexec jump. The patches and
the precompiled kexec can be download from the following URL:

       source: http://khibernation.sourceforge.net/download/release_v10/kexec-tools/kexec-tools-src_git_kh10.tar.bz2
       patches: http://khibernation.sourceforge.net/download/release_v10/kexec-tools/kexec-tools-patches_git_kh10.tar.bz2
       binary: http://khibernation.sourceforge.net/download/release_v10/kexec-tools/kexec_git_kh10

Usage example of calling some physical mode code and return:

1. Compile and install patched kernel with following options selected:

CONFIG_X86_32=y
CONFIG_KEXEC=y
CONFIG_PM=y
CONFIG_KEXEC_JUMP=y

2. Build patched kexec-tool or download the pre-built one.

3. Build some physical mode executable named such as "phy_mode"

4. Boot kernel compiled in step 1.

5. Load physical mode executable with /sbin/kexec. The shell command
   line can be as follow:

   /sbin/kexec --load-preserve-context --args-none phy_mode

6. Call physical mode executable with following shell command line:

   /sbin/kexec -e

Implementation point:

To support jumping without reserving memory, one shadow backup page (source
page) is allocated for each page used by the kexeced code image (destination
page).  At kexec_load time, the image of the kexeced code is loaded into the
source pages, and before executing, the destination pages and the source pages
are swapped, so the contents of the destination pages are backed up.  Before
jumping to the kexeced code image and after jumping back to the original
kernel, the destination pages and the source pages are swapped too.

C ABI (calling convention) is used as communication protocol between
kernel and called code.

A flag named KEXEC_PRESERVE_CONTEXT for sys_kexec_load is added to
indicate that the loaded kernel image is used for jumping back.

Now, only the i386 architecture is supported.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Nigel Cunningham <nigel@nigel.suspend2.net>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-26 12:00:04 -07:00
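A minimal userspace illustration of the backup-by-swap idea from the implementation notes above: swapping a destination page with its shadow source page installs the new contents and preserves the old ones in one step, so swapping again restores the original kernel's memory.

    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE 16    /* tiny pages to keep the demo readable */

    static void swap_pages(char *a, char *b)
    {
            char tmp[PAGE_SIZE];

            memcpy(tmp, a, PAGE_SIZE);
            memcpy(a, b, PAGE_SIZE);
            memcpy(b, tmp, PAGE_SIZE);
    }

    int main(void)
    {
            char destination[PAGE_SIZE] = "original kernel";
            char source[PAGE_SIZE] = "kexeced image";

            swap_pages(destination, source);        /* before jumping to the image */
            printf("running: %s (backup: %s)\n", destination, source);

            swap_pages(destination, source);        /* after jumping back */
            printf("restored: %s\n", destination);
            return 0;
    }
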
Nathan Lynch fc532f8108 powerpc: Fix boot problem due to AT_BASE_PLATFORM change
Commit 9115d13453 ("powerpc: Enable
AT_BASE_PLATFORM aux vector") broke boot on 32-bit powerpc systems; we
have to use PTRRELOC to initialize powerpc_base_platform this early in
boot.

Bug reported by Jon Smirl.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-26 09:02:43 +10:00
Linus Torvalds 5047887caf Merge branch 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc: (34 commits)
  powerpc: Wireup new syscalls
  Move update_mmu_cache() declaration from tlbflush.h to pgtable.h
  powerpc/pseries: Remove kmalloc call in handling writes to lparcfg
  powerpc/pseries: Update arch vector to indicate support for CMO
  ibmvfc: Add support for collaborative memory overcommit
  ibmvscsi: driver enablement for CMO
  ibmveth: enable driver for CMO
  ibmveth: Automatically enable larger rx buffer pools for larger mtu
  powerpc/pseries: Verify CMO memory entitlement updates with virtual I/O
  powerpc/pseries: vio bus support for CMO
  powerpc/pseries: iommu enablement for CMO
  powerpc/pseries: Add CMO paging statistics
  powerpc/pseries: Add collaborative memory manager
  powerpc/pseries: Utilities to set firmware page state
  powerpc/pseries: Enable CMO feature during platform setup
  powerpc/pseries: Split retrieval of processor entitlement data into a helper routine
  powerpc/pseries: Add memory entitlement capabilities to /proc/ppc64/lparcfg
  powerpc/pseries: Split processor entitlement retrieval and gathering to helper routines
  powerpc/pseries: Remove extraneous error reporting for hcall failures in lparcfg
  powerpc: Fix compile error with binutils 2.15
  ...

Fixed up conflict in arch/powerpc/platforms/52xx/Kconfig manually.
2008-07-25 11:08:17 -07:00
Srinivasa D S ef53d9c5e4 kprobes: improve kretprobe scalability with hashed locking
Currently the list of kretprobe instances is stored in the kretprobe object (as
used_instances, free_instances) and in the kretprobe hash table.  We have one
global kretprobe lock to serialise the access to these lists.  This causes
only one kretprobe handler to execute at a time, which hurts system
performance, particularly on SMP systems and when return probes are set on a
lot of functions (like on all system calls).

The solution proposed here uses fine-grained locks that perform better on SMP
systems compared to the present kretprobe implementation.

Solution:

 1) Instead of having one global lock to protect the kretprobe instances
    present in the kretprobe object and the kretprobe hash table, we will
    have two locks: one lock protecting the kretprobe hash table and
    another lock for the kretprobe object.

 2) We hold the lock in the kretprobe object while we modify a kretprobe
    instance in the kretprobe object, and we hold the per-hash-list lock
    while modifying kretprobe instances present in that hash list.  To
    prevent deadlock, we never grab a per-hash-list lock while holding a
    kretprobe lock.

 3) We can remove used_instances from struct kretprobe, as we can track
    used kretprobe instances via the kretprobe hash table.

Time duration for kernel compilation ("make -j 8") on a 8-way ppc64 system
with return probes set on all systemcalls looks like this.

cacheline              non-cacheline             Un-patched kernel
aligned patch 	       aligned patch
===============================================================================
real    9m46.784s       9m54.412s                  10m2.450s
user    40m5.715s       40m7.142s                  40m4.273s
sys     2m57.754s       2m58.583s                  3m17.430s
===========================================================

Time duration for kernel compilation ("make -j 8") on the same system, when
kernel is not probed.
=========================
real    9m26.389s
user    40m8.775s
sys     2m7.283s
=========================

Signed-off-by: Srinivasa DS <srinivasa@in.ibm.com>
Signed-off-by: Jim Keniston <jkenisto@us.ibm.com>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-25 10:53:30 -07:00
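A hedged pthread sketch of the locking scheme from points 1) and 2): one lock per hash bucket instead of a single global lock, keyed on the task identity. This is a toy model of the idea, not the kernel's kretprobe code.

    #include <pthread.h>
    #include <stdio.h>

    #define HASH_BITS  6
    #define HASH_SIZE  (1 << HASH_BITS)

    static pthread_mutex_t bucket_lock[HASH_SIZE];

    /* Hash on the task identity so unrelated tasks rarely contend. */
    static unsigned int hash_task(unsigned long task)
    {
            return (unsigned int)((task >> 4) & (HASH_SIZE - 1));
    }

    static void record_return_instance(unsigned long task, unsigned long ret_addr)
    {
            unsigned int h = hash_task(task);

            pthread_mutex_lock(&bucket_lock[h]);
            /* ...insert the instance into the per-bucket list here... */
            printf("task %#lx -> bucket %u, ret %#lx\n", task, h, ret_addr);
            pthread_mutex_unlock(&bucket_lock[h]);
    }

    int main(void)
    {
            int i;

            for (i = 0; i < HASH_SIZE; i++)
                    pthread_mutex_init(&bucket_lock[i], NULL);

            record_return_instance(0x1000, 0xc0de);
            record_return_instance(0x2000, 0xf00d);
            return 0;
    }
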
Nathan Fontenot 16c14b4621 powerpc/pseries: Remove kmalloc call in handling writes to lparcfg
There are only 4 valid name=value pairs for writes to
/proc/ppc64/lparcfg.  Current code allocates a buffer to copy
this information in from the user.  Since the longest name=value
pair will easily fit into a buffer of 64 characters, simply
put the buffer on the stack instead of allocating the buffer.

Signed-off-by: Nathan Fotenot <nfont@austin.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-25 15:44:45 +10:00
Nathan Fontenot 8391e42a5c powerpc/pseries: Update arch vector to indicate support for CMO
Update the architecture vector to indicate that Cooperative Memory
Overcommitment is supported if CONFIG_PPC_SMLPAR is set.

Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-25 15:44:45 +10:00
Nathan Fontenot 22e1a4dd3f powerpc/pseries: Verify CMO memory entitlement updates with virtual I/O
Verify memory entitlement updates can be handled by vio.

Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-25 15:44:43 +10:00
Robert Jennings a90ab95a95 powerpc/pseries: vio bus support for CMO
This is a large patch but the normal code path is not affected.  For
non-pSeries platforms the code is ifdef'ed out and for non-CMO enabled
pSeries systems this does not affect the normal code path.  Devices that
do not perform DMA operations do not need modification with this patch.
The function get_desired_dma was renamed from get_io_entitlement for
clarity.

Overview

Cooperative Memory Overcommitment (CMO) allows for a set of OS partitions
to be run with less RAM than the aggregate needs of the group of
partitions.  The firmware will balance memory between the partitions
and page in/out memory as needed.  Based on the number and type of IO
adapters present, each partition is allocated an amount of memory for
DMA operations and this allocation will be guaranteed to the partition;
this is referred to as the partition's 'entitlement'.

Partitions running in a CMO environment can only have virtual IO devices
present.  The VIO bus layer will manage the IO entitlement for the system.
Accounting, at a system and per-device level, is tracked in the VIO bus
code and exposed via sysfs.  A set of dma_ops functions are added to
the bus to allow for this accounting.

Bus initialization

At initialization, the bus will calculate the minimum needs of the system
based on providing each device present with a standard minimum entitlement
along with a spare allocation for the bus to handle hotplug events.
If the minimum needs cannot be met, the system boot will be halted.

Device changes

The significant changes for devices while running under CMO are that the
devices must specify how much dedicated IO entitlement they desire and
must also handle DMA mapping errors that can occur due to constrained
IO memory.  The virtual IO drivers are modified to silence errors when
DMA mappings fail for CMO and handle these failures gracefully.

Each devices will be guaranteed a minimum entitlement that can always
be mapped.  Devices will specify how much entitlement they desire and
the VIO bus will attempt to provide for this.  Devices can change their
desired entitlement level at any point in time to address particular needs
(via vio_cmo_set_dev_desired()), not just at device probe time.

VIO bus changes

The system will have a particular entitlement level available from which
it can provide memory to the devices.  The bus defines two pools of memory
within this entitlement, the reserved and excess pools.  Each device is
provided with its own entitlement, no less than a system defined minimum
entitlement and no greater than what the device has specified as its
desired entitlement.  The entitlement provided to devices comes from the
reserve pool.  The reserve pool can also contain a spare allocation as
large as the system defined minimum entitlement which is used for device
hotplug events.  Any entitlement not needed to fulfill the needs of a
reserve pool is placed in the excess pool.  Each device is guaranteed
that it can map up to its entitled level; additional mappings are possible
as long as there is unmapped memory in the excess pool.

Bus probe

As the system starts, each device is given an entitlement equal only
to the system defined minimum entitlement.  The reserve pool is equal
to the sum of these entitlements, plus a spare allocation.  The VIO bus
also tracks the aggregate desired entitlement of all the devices.  If the
system desired entitlement is greater than the size of the reserve pool,
when devices unmap IO memory it will be reserved and a balance operation
will be scheduled for some time in the future.

Entitlement balancing

The balance function tries to fairly distribute entitlement between the
devices in the system with the goal of providing each device with its
desired amount of entitlement.  Devices using more than what would be
ideal will have their entitled set-point adjusted; this will effectively
set a goal for lower IO memory usage as future mappings can fail and
deallocations will trigger a balance operation to distribute the newly
unmapped memory.  A fair distribution of entitlement can take several
balance operations to achieve.  Entitlement changes and device DLPAR
events will alter the state of CMO and will trigger balance operations.

Hotplug events

The VIO bus allows for changes in system entitlement at run-time via
'vio_cmo_entitlement_update()'.  When devices are added the hotplug
device event will be preceded by a system entitlement increase and this
is reversed when devices are removed.

The following changes are made that the VIO bus layer for CMO:
 * add IO memory accounting per device structure.
 * add IO memory entitlement query function to driver structure.
 * during vio bus probe, if CMO is enabled, check that driver has
   memory entitlement query function defined.  Fail if function not defined.
 * fail to register driver if io entitlement function not defined.
 * create set of dma_ops at vio level for CMO that will track allocations
   and return DMA failures once entitlement is reached.  Entitlement will be
   limited by overall system entitlement.  Devices will have a reserved
   quantity of memory that is guaranteed, the rest can be used as available.
 * expose entitlement, current allocation, desired allocation, and the
   allocation error counter for devices to the user through sysfs
 * provide mechanism for changing a device's desired entitlement at run time
   for devices as an exported function and sysfs tunable
 * track any DMA failures for entitled IO memory for each vio device.
 * check entitlement against available system entitlement on device add
 * track entitlement metrics (high water mark, current usage)
 * provide function to reset high water mark
 * provide minimum and desired entitlement numbers at a bus level
 * provide drivers with a minimum guaranteed entitlement
 * balance available entitlement between devices to satisfy their needs
 * handle system entitlement changes and device hotplug

Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-25 15:44:43 +10:00
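A toy C model of the reserve/excess accounting described under 'VIO bus changes' above: a mapping is charged against the device's own entitlement first and only then against the shared excess pool. All names and numbers are illustrative.

    #include <stdbool.h>
    #include <stdio.h>

    static long excess_free = 4096; /* shared pool beyond the reserve */

    struct cmo_dev {
            long entitled;  /* guaranteed to this device */
            long allocated; /* currently mapped IO memory */
    };

    /* Charge a mapping: device entitlement first, then the excess pool. */
    static bool cmo_alloc(struct cmo_dev *dev, long bytes)
    {
            long over_before = dev->allocated - dev->entitled;
            long over_after  = dev->allocated + bytes - dev->entitled;
            long from_excess;

            if (over_before < 0)
                    over_before = 0;
            if (over_after < 0)
                    over_after = 0;
            from_excess = over_after - over_before;

            if (from_excess > excess_free)
                    return false;   /* would exceed both pools: fail the mapping */

            excess_free -= from_excess;
            dev->allocated += bytes;
            return true;
    }

    int main(void)
    {
            struct cmo_dev dev = { .entitled = 8192, .allocated = 0 };

            printf("map 8K: %s\n", cmo_alloc(&dev, 8192) ? "ok" : "failed");
            printf("map 4K: %s\n", cmo_alloc(&dev, 4096) ? "ok" : "failed");
            printf("map 4K: %s\n", cmo_alloc(&dev, 4096) ? "ok" : "failed");
            return 0;
    }
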
Robert Jennings 6490c4903d powerpc/pseries: iommu enablement for CMO
To support Cooperative Memory Overcommitment (CMO), we need to check
for failure from some of the tce hcalls.

These changes for the pseries platform affect the powerpc architecture;
patches for the other affected platforms are included in this patch.

pSeries platform IOMMU code changes:
 * platform TCE functions must handle H_NOT_ENOUGH_RESOURCES errors and
   return an error.

Architecture IOMMU code changes:
 * Calls to ppc_md.tce_build need to check return values and return
   DMA_MAPPING_ERROR for transient errors.

Architecture changes:
 * struct machdep_calls for tce_build*_pSeriesLP functions need to change
   to indicate failure.
 * all other platforms will need updates to iommu functions to match the new
   calling semantics; they will return 0 on success.  The other platforms
   default configs have been built, but no further testing was performed.

Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-25 15:44:43 +10:00
Brian King ffa5abbd0c powerpc/pseries: Add CMO paging statistics
With the addition of Cooperative Memory Overcommitment (CMO) support
for IBM Power Systems, two fields have been added to the VPA to report
paging statistics.  Add support in lparcfg to report them to userspace.

Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-25 15:44:42 +10:00
Robert Jennings 398778f78b powerpc/pseries: Split retrieval of processor entitlement data into a helper routine
Split the retrieval of processor entitlement data returned in the H_GET_PPP
hcall into its own helper routine.

Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-25 15:44:41 +10:00
Nathan Fontenot dfc3403f0e powerpc/pseries: Add memory entitlement capabilities to /proc/ppc64/lparcfg
Update /proc/ppc64/lparcfg to display Cooperative Memory
Overcommitment statistics as reported by the H_GET_MPP hcall.  This
also updates the lparcfg interface to allow setting memory entitlement
and weight.

Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-25 15:44:41 +10:00
Nathan Fontenot 11529396ea powerpc/pseries: Split processor entitlement retrieval and gathering to helper routines
Split the retrieval and setting of processor entitlement and weight into
helper routines.  This also removes the printing of the raw values
returned from h_get_ppp; those values are already parsed and printed.

Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-25 15:44:41 +10:00
Nathan Fontenot 545500b307 powerpc/pseries: Remove extraneous error reporting for hcall failures in lparcfg
Remove the extraneous error reporting used when a hcall made from lparcfg fails.

Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-25 15:44:40 +10:00
Segher Boessenkool 80c60bf9b9 powerpc: Fix compile error with binutils 2.15
My previous patch to fix compilation with binutils-2.17 causes
a "file truncated" build error from ld with binutils 2.15 (and
possibly older), and a warning with 2.16 and 2.17.

This fixes it.

Signed-off-by: Segher Boessenkool <segher@kernel.crashing.org>
Acked-by: Chuck Meade <chuckmeade@mindspring.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-25 15:44:40 +10:00
Luis Machado d6a61bfc06 powerpc: BookE hardware watchpoint support
This patch implements support for HW based watchpoint via the
DBSR_DAC (Data Address Compare) facility of the BookE processors.

It does so by interfacing with the existing DABR breakpoint code
and adding the necessary bits and pieces for the new bits to
be properly set or cleared.

Signed-off-by: Luis Machado <luisgpm@br.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-25 15:44:39 +10:00
Stephen Rothwell 00bf6e9061 powerpc: Fallout from sysdev API changes
A struct sysdev_attribute * parameter was added to the show routine by
commit 4a0b2b4dbe "sysdev: Pass the
attribute to the low level sysdev show/store function".

This eliminates a warning:

arch/powerpc/kernel/sysfs.c:538: warning: initialization from incompatible pointer type

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-25 15:44:39 +10:00
Nathan Lynch 9115d13453 powerpc: Enable AT_BASE_PLATFORM aux vector
Stash the first platform string matched by identify_cpu() in
powerpc_base_platform, and supply that to the ELF loader for the value
of AT_BASE_PLATFORM.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-25 15:44:39 +10:00
Linus Torvalds ecc8b655b3 Merge branch 'timers-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'timers-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  nohz: adjust tick_nohz_stop_sched_tick() call of s390 as well
  nohz: prevent tick stop outside of the idle loop
2008-07-24 12:55:01 -07:00
Andrea Righi 27ac792ca0 PAGE_ALIGN(): correctly handle 64-bit values on 32-bit architectures
On 32-bit architectures PAGE_ALIGN() truncates 64-bit values to the 32-bit
boundary. For example:

	u64 val = PAGE_ALIGN(size);

always returns a value < 4GB even if size is greater than 4GB.

The problem resides in PAGE_MASK definition (from include/asm-x86/page.h for
example):

#define PAGE_SHIFT      12
#define PAGE_SIZE       (_AC(1,UL) << PAGE_SHIFT)
#define PAGE_MASK       (~(PAGE_SIZE-1))
...
#define PAGE_ALIGN(addr)       (((addr)+PAGE_SIZE-1)&PAGE_MASK)

The "~" is performed on a 32-bit value, so everything in "and" with
PAGE_MASK greater than 4GB will be truncated to the 32-bit boundary.
Using the ALIGN() macro seems to be the right way, because it uses
typeof(addr) for the mask.

Also move the PAGE_ALIGN() definitions out of include/asm-*/page.h in
include/linux/mm.h.

See also lkml discussion: http://lkml.org/lkml/2008/6/11/237
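
In other words, the resulting definitions look roughly like this (a sketch
built on the existing ALIGN()/__ALIGN_MASK() helpers from
include/linux/kernel.h):

	#define __ALIGN_MASK(x, mask)	(((x) + (mask)) & ~(mask))
	#define ALIGN(x, a)		__ALIGN_MASK(x, (typeof(x))(a) - 1)

	/* include/linux/mm.h: the mask is cast to typeof(addr), so a
	 * 64-bit addr keeps its upper 32 bits on 32-bit architectures. */
	#define PAGE_ALIGN(addr)	ALIGN(addr, PAGE_SIZE)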

[akpm@linux-foundation.org: fix drivers/media/video/uvc/uvc_queue.c]
[akpm@linux-foundation.org: fix v850]
[akpm@linux-foundation.org: fix powerpc]
[akpm@linux-foundation.org: fix arm]
[akpm@linux-foundation.org: fix mips]
[akpm@linux-foundation.org: fix drivers/media/video/pvrusb2/pvrusb2-dvb.c]
[akpm@linux-foundation.org: fix drivers/mtd/maps/uclinux.c]
[akpm@linux-foundation.org: fix powerpc]
Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-24 10:47:21 -07:00
Jason Wessel 17ce452f7e kgdb, powerpc: arch specific powerpc kgdb support
This patch removes the old kgdb remnants from ARCH=powerpc and
implements the new style arch specific stub for the common kgdb core
interface.

It is possible to have xmon and kgdb in the same kernel, but you
cannot use both at the same time because there is only one set of
debug hooks.

The arch specific kgdb implementation saves the previous state of the
debug hooks and restores them if you unconfigure the kgdb I/O driver.
Kgdb should have no impact on a kernel that has no kgdb I/O driver
configured.

Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
2008-07-23 11:30:15 -05:00
Linus Torvalds 06b8147c5d Merge branch 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc: (49 commits)
  powerpc: Fix build bug with binutils < 2.18 and GCC < 4.2
  powerpc/eeh: Don't panic when EEH_MAX_FAILS is exceeded
  fbdev: Teaches offb about palette on radeon r5xx/r6xx
  powerpc/cell/edac: Log a syndrome code in case of correctable error
  powerpc/cell: Add DMA_ATTR_WEAK_ORDERING dma attribute and use in Cell IOMMU code
  powerpc: Indicate which oprofile counters to use while in compat mode
  powerpc/boot: Change spaces to tabs
  powerpc: Remove duplicate 6xx option in Kconfig
  powerpc: Use PPC_LONG and PPC_LONG_ALIGN in lib/string.S
  powerpc: Use PPC_LONG_ALIGN in uaccess.h
  powerpc: Add a #define for aligning to a long-sized boundary
  powerpc: Fix OF parsing of 64 bits PCI addresses
  powerpc: Use WARN_ON(1) instead of __WARN()
  powerpc: Fix support for latencytop
  powerpc/ps3: Update ps3_defconfig
  powerpc/ps3: Add a sub-match id to ps3_system_bus
  powerpc: Add a 6xx defconfig
  powerpc/dma: Use the struct dma_attrs in iommu code
  powerpc/cell: Add support for power button of future IBM cell blades
  powerpc/cell: Cleanup sysreset_hack for IBM cell blades
  ...
2008-07-22 13:16:01 -07:00
Andi Kleen 4a0b2b4dbe sysdev: Pass the attribute to the low level sysdev show/store function
This allows attributes to be generated dynamically and show/store functions
to be shared between attributes. Right now most attributes are generated
by special macros and lots of duplicated code. With the attribute
passed it's instead possible to attach some data to the attribute
and then use that in shared low level functions to do different things.
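
The new show/store shape being described looks roughly like this (a sketch
of the declarations, not a quote of the patch):

	struct sysdev_attribute {
		struct attribute attr;
		ssize_t (*show)(struct sys_device *, struct sysdev_attribute *,
				char *);
		ssize_t (*store)(struct sys_device *, struct sysdev_attribute *,
				 const char *, size_t);
	};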

I need this for the dynamically generated bank attributes in the x86
machine check code, but it'll allow some further cleanups.

I converted all users in tree to the new show/store prototype. It's a single
huge patch to avoid unbisectable sections.

Runtime tested: x86-32, x86-64
Compiled only: ia64, powerpc
Not compile tested/only grep converted: sh, arm, avr32

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2008-07-21 21:55:02 -07:00
Segher Boessenkool c69cccc95f powerpc: Fix build bug with binutils < 2.18 and GCC < 4.2
binutils < 2.18 has a bug that makes it misbehave when taking an
ELF file with all segments at load address 0 as input.  This
happens when running "strip" on vmlinux, because of the AT() magic
in this linker script.  People using GCC >= 4.2 won't run into
this problem, because the "build-id" support will put some data
into the "notes" segment (at a non-zero load address).

To work around this, we force some data into both the "dummy"
segment and the kernel segment, so the dummy segment will get a
non-zero load address.  It's not enough to always create the
"notes" segment, since if nothing gets assigned to it, its load
address will be zero.

Signed-off-by: Segher Boessenkool <segher@kernel.crashing.org>
Tested-By: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-22 10:39:37 +10:00
Torez Smith 79e25bac12 powerpc: Indicate which oprofile counters to use while in compat mode
While running on a system with new hardware and a kernel where the
cpu_specs[] table does not recognize the new hardware, the identify_cpu()
routine will select the default case as it searches through cpu_specs[]
in an attempt to match the real PVR. Once the default case is selected,
none of the oprofile counters and/or fields have been set up or defined.

When identify_cpu() is called once more with the logical PVR, some of
the cpu specific fields are replaced with the exception of the oprofile
related ones. However, in the case where we have actually taken the
default case while searching for the real PVR, we need to tell
oprofile that we are now running in compatibility mode so it can pick up
the correct counters. We do this by setting the oprofile_cpu_type field
to be that taken from the cpu_specs[] for the cpu we are now emulating.

This change will detect that we are now altering the real PVR and determine
if we also need to update the oprofile_cpu_type field.

Signed-off-by: Torez Smith <lnxtorez@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-22 10:39:36 +10:00
Benjamin Herrenschmidt 67260ac9a3 powerpc: Fix OF parsing of 64 bits PCI addresses
The OF parsing code for PCI addresses doesn't always properly treat
the address space indication 0b11 (i.e. 0x3) as meaning 64-bit
memory space.

This means that it fails to parse addresses for PCI BARs that have
this encoding set by the firmware, which happens on some SLOF
versions and breaks offb palette handling on Powerstation.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Segher Boessenkool <segher@kernel.crashing.org>
2008-07-22 10:39:34 +10:00
Arnd Bergmann 6fdc9f5076 powerpc: Fix support for latencytop
We need to pass the kernel stack pointer instead of the user space
stack pointer in save_stack_trace_tsk().

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-22 10:39:33 +10:00
Mark Nelson 4f3dd8a062 powerpc/dma: Use the struct dma_attrs in iommu code
Update iommu_alloc() to take the struct dma_attrs and pass them on to
tce_build(). This change propagates down to the tce_build functions of
all the platforms.

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-22 10:39:32 +10:00
Ingo Molnar 9b610fda0d Merge branch 'linus' into timers/nohz 2008-07-18 19:53:16 +02:00
Thomas Gleixner b8f8c3cf0a nohz: prevent tick stop outside of the idle loop
Jack Ren and Eric Miao tracked down the following long standing
problem in the NOHZ code:

	scheduler switch to idle task
	enable interrupts

Window starts here

	----> interrupt happens (does not set NEED_RESCHED)
	      	irq_exit() stops the tick

	----> interrupt happens (does set NEED_RESCHED)

	return from schedule()
	
	cpu_idle(): preempt_disable();

Window ends here

The interrupts can happen at any point inside the race window. The
first interrupt stops the tick, the second one causes the scheduler to
rerun and switch away from idle again and we end up with the tick
disabled.

The fact that it needs two interrupts where the first one does not set
NEED_RESCHED and the second one does made the bug obscure and extremely
hard to reproduce and analyse. Kudos to Jack and Eric.

Solution: Limit the NOHZ functionality to the idle loop to make sure
that we can not run into such a situation ever again.

cpu_idle()
{
	preempt_disable();

	while(1) {
		 tick_nohz_stop_sched_tick(1); <- tell NOHZ code that we
		 			          are in the idle loop

		 while (!need_resched())
		       halt();

		 tick_nohz_restart_sched_tick(); <- disables NOHZ mode
		 preempt_enable_no_resched();
		 schedule();
		 preempt_disable();
	}
}

In hindsight we should have done this forever, but ... 

/me grabs a large brown paperbag.

Debugged-by: Jack Ren <jack.ren@marvell.com>, 
Debugged-by: eric miao <eric.y.miao@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-07-18 18:10:28 +02:00
Kumar Gala 0332f000cd powerpc/fsl: Minor TLBSYNC cleanup for FSL Book-E
Use the TLBSYNC macro defined in ppc_asm.h rather than our own ifdefs.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-07-16 17:57:52 -05:00
Kumar Gala 6cfd8990e2 powerpc: rework FSL Book-E PTE access and TLB miss
This converts the FSL Book-E PTE access and TLB miss handling to match
with the recent changes to 44x that introduce support for non-atomic PTE
operations in pgtable-ppc32.h and removes write back to the PTE from
the TLB miss handlers. In addition, the DSI interrupt code no longer
tries to fixup write permission, this is left to generic code, and
_PAGE_HWWRITE is gone.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-07-16 17:57:51 -05:00
Benjamin Herrenschmidt 84c3d4aaec Merge commit 'origin/master'
Manual merge of:

	arch/powerpc/Kconfig
	arch/powerpc/kernel/stacktrace.c
	arch/powerpc/mm/slice.c
	arch/ppc/kernel/smp.c
2008-07-16 11:07:59 +10:00
Ingo Molnar 1a781a777b Merge branch 'generic-ipi' into generic-ipi-for-linus
Conflicts:

	arch/powerpc/Kconfig
	arch/s390/kernel/time.c
	arch/x86/kernel/apic_32.c
	arch/x86/kernel/cpu/perfctr-watchdog.c
	arch/x86/kernel/i8259_64.c
	arch/x86/kernel/ldt.c
	arch/x86/kernel/nmi_64.c
	arch/x86/kernel/smpboot.c
	arch/x86/xen/smp.c
	include/asm-x86/hw_irq_32.h
	include/asm-x86/hw_irq_64.h
	include/asm-x86/mach-default/irq_vectors.h
	include/asm-x86/mach-voyager/irq_vectors.h
	include/asm-x86/smp.h
	kernel/Makefile

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-07-15 21:55:59 +02:00
Linus Torvalds af5329cdf5 Merge branch 'core/stacktrace' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'core/stacktrace' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  generic-ipi: powerpc/generic-ipi tree build failure
  stacktrace: fix build failure on sparc64
  stacktrace: export save_stack_trace[_tsk]
  stacktrace: fix modular build, export print_stack_trace and save_stack_trace
  backtrace: replace timer with tasklet + completions
  stacktrace: add saved stack traces to backtrace self-test
  stacktrace: print_stack_trace() cleanup
  debugging: make stacktrace independent from DEBUG_KERNEL
  stacktrace: don't crash on invalid stack trace structs
2008-07-15 10:31:35 -07:00
Benjamin Herrenschmidt 43d2548bb2 Merge commit '85082fd7cbe3173198aac0eb5e85ab1edcc6352c' into test-build
Manual fixup of:

	arch/powerpc/Kconfig
2008-07-15 15:44:51 +10:00
Sonny Rao b6f6b98a4e powerpc: Don't spin on sync instruction at boot time
Push the sync below the secondary smp init hold loop and comment its purpose.
This should speed up boot by reducing global traffic during the single-threaded
portion of boot.

Signed-off-by: Sonny Rao <sonnyrao@us.ibm.com>
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-15 12:29:28 +10:00
Michael Neuling cd6f37be7f powerpc: Add VSX load/store alignment exception handler
VSX loads and stores will take an alignment exception when the address
is not on a 4 byte boundary.

This adds support for these alignment exceptions and will emulate the
requested load or store.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-15 12:29:25 +10:00
Michael Neuling 7c29217096 powerpc: fix giveup_vsx to save registers correctly
giveup_vsx didn't save the FPU and VMX registers.  Change it to be
like giveup_fpr/altivec which save these registers.

Also update call sites where FPU and VMX are already saved to use the
original giveup_vsx (renamed to __giveup_vsx).
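
A sketch of the resulting structure, assuming giveup_fpu()/giveup_altivec()
are the existing save routines referred to above:

	/* Save FP and VMX state first, then hand back the VSX state;
	 * __giveup_vsx() is the old giveup_vsx() renamed as described. */
	void giveup_vsx(struct task_struct *tsk)
	{
		giveup_fpu(tsk);
		giveup_altivec(tsk);
		__giveup_vsx(tsk);
	}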

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-15 12:29:23 +10:00
Arnd Bergmann 01f4b8b8b8 powerpc: support for latencytop
Implement save_stack_trace_tsk on powerpc, so that we can run with
latencytop.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-15 12:29:15 +10:00
Nathan Lynch 0f47331475 powerpc: Add PPC_FEATURE_PSERIES_PERFMON_COMPAT
Background from Maynard Johnson:
As of POWER6, a set of 32 common events is defined that must be
supported on all future POWER processors.  The main impetus for this
compat set is the need to support partition migration, especially from
processor P(n) to processor P(n+1), where performance software that's
running in the new partition may not be knowledgeable about processor
P(n+1).  If a performance tool determines it does not support the
physical processor, but is told (via the
PPC_FEATURE_PSERIES_PERFMON_COMPAT bit) that the processor supports
the notion of the PMU compat set, then the performance tool can
surface just those events to the user of the tool.

PPC_FEATURE_PSERIES_PERFMON_COMPAT indicates that the PMU supports at
least this basic subset of events which is compatible across POWER
processor lines.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-15 12:24:57 +10:00
Benjamin Herrenschmidt 930074b6b9 Merge commit 'jwb/jwb-next' 2008-07-15 11:54:57 +10:00
Linus Torvalds d18bb9a548 Merge branch 'core/rodata' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'core/rodata' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  move BUG_TABLE into RODATA
2008-07-14 15:28:10 -07:00
Stephen Rothwell 7798ed0f57 generic-ipi: powerpc/generic-ipi tree build failure
Today's linux-next build (powerpc allmodconfig) failed like this:

ERROR: ".save_stack_trace" [tests/backtracetest.ko] undefined!

But save_stack_trace is exported in arch/powerpc/kernel/stacktrace.c

I couldn't figure it out until I noticed these earlier warnings:

arch/powerpc/kernel/stacktrace.c:47: warning: data definition has no type or storage class
arch/powerpc/kernel/stacktrace.c:47: warning: type defaults to 'int' in declaration of 'EXPORT_SYMBOL_GPL'
arch/powerpc/kernel/stacktrace.c:47: warning: parameter names (without types) in function declaration

I applied the patch below.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: <linuxppc-dev@ozlabs.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-07-14 17:54:22 +02:00
Kumar Gala ddb107e98b powerpc/booke: don't reinitialize time base
For some reason long ago I decided that we should zero out the time base
when we calibrate the decrementer.  The problem is that this can be
harmful in SMP systems where the firmware has already synchronized the
time bases on the various cores.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-07-14 07:55:42 -05:00
Benjamin Herrenschmidt 11c2d8174e Merge commit 'origin/HEAD' into test-merge
Manual fixup of include/asm-powerpc/pgtable-ppc64.h
2008-07-14 14:29:49 +10:00
Ingo Molnar bac0c9103b Merge branch 'tracing/ftrace' into auto-ftrace-next 2008-07-10 11:43:00 +02:00
Benjamin Herrenschmidt 1bc54c0311 powerpc: rework 4xx PTE access and TLB miss
This is some preliminary work to improve TLB management on SW loaded
TLB powerpc platforms. This introduce support for non-atomic PTE
operations in pgtable-ppc32.h and removes write back to the PTE from
the TLB miss handlers. In addition, the DSI interrupt code no longer
tries to fixup write permission, this is left to generic code, and
_PAGE_HWWRITE is gone.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
2008-07-09 13:36:17 -04:00
Josh Boyer beae4c03c0 Merge branch 'virtex-for-2.6.27' of git://git.secretlab.ca/git/linux-2.6-virtex into 4xx-next 2008-07-09 13:35:16 -04:00
Michael Neuling b887ec620a powerpc: remove unused variable in emulate_fp_pair
regs is not used in emulate_fp_pair so remove it.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-09 16:30:47 +10:00
Michael Neuling c1cb299ead powerpc: fix swapcontext backwards compat. with VSX ucontext changes
When the ucontext changed to add the VSX context, this broke backwards
compatibility on swapcontext.  swapcontext only compares the ucontext size
passed in from the user to the new kernel ucontext size.

This adds a check against the old ucontext size (with VMX but without
VSX).  It also adds some sanity check for ucontexts without VSX, but
where VSX is used according the MSR.  Fixes for both 32 and 64bit
processes on 64bit kernels

Kudos to Paulus for noticing.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-09 16:30:47 +10:00
Paul Gortmaker 1b17adf19b powerpc/ibmebus: more meaningful variable name
Choose a more meaningful name for better System.map readability and
autopsy value etc.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-09 16:30:46 +10:00
Dave Kleikamp ef3d3246a0 powerpc/mm: Add Strong Access Ordering support
Allow an application to enable Strong Access Ordering on specific pages of
memory on Power 7 hardware. Currently, power has a weaker memory model than
x86. Implementing a stronger memory model allows an emulator to more
efficiently translate x86 code into power code, resulting in faster code
execution.

On Power 7 hardware, storing 0b1110 in the WIMG bits of the hpte enables
strong access ordering mode for the memory page.  This patchset allows a
user to specify which pages are thus enabled by passing a new protection
bit through mmap() and mprotect().  I have defined PROT_SAO to be 0x10.
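
A hypothetical userspace usage of that interface; whether the mapping really
gets strong access ordering depends on the hardware and on kernel support:

	#include <stddef.h>
	#include <sys/mman.h>

	#ifndef PROT_SAO
	#define PROT_SAO 0x10	/* value proposed in this patchset */
	#endif

	/* Request a strongly-ordered anonymous mapping (sketch). */
	void *map_sao_region(size_t len)
	{
		return mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_SAO,
			    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	}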

Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-09 16:30:45 +10:00
Benjamin Herrenschmidt 058c78f4ba powerpc: Use new printk extension %pS to print symbols on oops
This changes the oops and backtrace code to use the new %pS
printk extension to print out symbols rather than manually
calling print_symbol.
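
For instance, a register dump line can be printed along these lines (a
sketch; "regs" stands for the struct pt_regs being dumped):

	/* Let printk resolve the symbol instead of calling print_symbol(). */
	printk("NIP [%016lx] %pS\n", regs->nip, (void *)regs->nip);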

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-09 16:30:44 +10:00
Mark Nelson 3a4c6f0b15 powerpc: move device_to_mask() to dma-mapping.h
Move device_to_mask() to dma-mapping.h because we need to use it from
outside dma_64.c in a later patch.

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-09 16:30:44 +10:00
Mark Nelson 3affedc4e1 powerpc/dma: implement new dma_*map*_attrs() interfaces
Update powerpc to use the new dma_*map*_attrs() interfaces. In doing so
update struct dma_mapping_ops to accept a struct dma_attrs and propagate
these changes through to all users of the code (generic IOMMU and the
64bit DMA code, and the iseries and ps3 platform code).

The old dma_*map_*() interfaces are reimplemented as calls to the
corresponding new interfaces.

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-09 16:30:43 +10:00
Mark Nelson c8692362db powerpc/dma: Add struct iommu_table argument to iommu_map_sg()
Make iommu_map_sg take a struct iommu_table. It did so before commit
740c3ce667 (iommu sg merging: ppc: make
iommu respect the segment size limits).

This stops the function looking in the archdata.dma_data for the iommu
table because in the future it will be called with a device that has
no table there.

This also has the nice side effect of making iommu_map_sg() match the
other map functions.

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-09 16:30:43 +10:00
Vitaly Bordug ba0fc709e1 powerpc: Add missing reference to coherent_dma_mask
of_platform_device_create() sets up dma_mask in of_device, but we don't
actually set coherent_dma_mask. This may cause weird behavior in the
USB subsystem when using of_device USB host drivers.
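
A sketch of the kind of one-line fix being described, assuming the mask is
set when the device is created in of_platform_device_create():

	/* Give the new of_device a coherent DMA mask to match its
	 * streaming dma_mask, so drivers that allocate coherent memory
	 * (e.g. USB host controllers) behave sensibly. */
	dev->dev.coherent_dma_mask = DMA_32BIT_MASK;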

Signed-off-by: Vitaly Bordug <vitb@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-08 21:06:35 -07:00
Benjamin Herrenschmidt 3bc5ab9b7f powerpc: Fix unterminated of_device_id array in legacy_serial.c
A recent patch to legacy_serial.c factored out some code by
using the of_match_node() facility to match a node against
an array of possible matches. However, the patch didn't properly
terminate the array causing potential crashes in cases where no
match is found. In addition, the name of the array was poorly
chosen for a static symbol making debugging harder.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-07 08:53:49 -07:00
John Linn 23e7237e09 powerpc/virtex: add Xilinx 440 cpu to the cputable
Updates the cputable to include the 440 processor found in the
Xilinx Virtex5 FXT FPGA.

Signed-off-by: John Linn <john.linn@xilinx.com>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
2008-07-04 00:59:02 -06:00
Stephen Rothwell 392096e98f generic-ipi: fix linux-next tree build failure
Today's linux-next build (powerpc ppc64_defconfig) failed like this:

arch/powerpc/mm/tlb_64.c: In function 'pgtable_free_now':
arch/powerpc/mm/tlb_64.c:66: error: too many arguments to function 'smp_call_function'
arch/powerpc/kernel/machine_kexec_64.c: In function 'kexec_prepare_cpus':
arch/powerpc/kernel/machine_kexec_64.c:175: error: too many arguments to function 'smp_call_function'

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Jens Axboe <jens.axboe@oracle.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: <linuxppc-dev@ozlabs.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-07-03 09:25:42 +02:00
Ingo Molnar 7b4c9505f2 stacktrace: export save_stack_trace[_tsk]
Andrew Morton reported this against linux-next:

ERROR: ".save_stack_trace" [tests/backtracetest.ko] undefined!

Reported-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-07-03 09:17:55 +02:00
Michael Neuling 138fc1ee06 powerpc: Remove old dump_task_* functions
Since Roland's ptrace cleanup starting with commit
f65255e8d5 ("[POWERPC] Use user_regset
accessors for FP regs"), the dump_task_* functions are no longer being
used.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-03 16:58:13 +10:00
Michael Neuling 6a274c08f2 powerpc: Clean up copy_to/from_user for vsx and fpr
This merges and cleans up some of the ugly copy to/from user code
which is required for the new fpr and vsx layout in the thread_struct.

Also fixes some hard coded buffer sizes and removes a redundant
fpr_flush_to_thread.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-03 16:58:11 +10:00
Kumar Gala 2d1b202762 powerpc: Fixup lwsync at runtime
To allow for a single kernel image on e500 v1/v2/mc we need to fixup lwsync
at runtime.  On e500v1/v2 lwsync causes an illop so we need to patch up
the code.  We default to 'sync' since that is always safe and if the cpu
is capable we will replace 'sync' with 'lwsync'.

We introduce CPU_FTR_LWSYNC as a way to determine at runtime if this is
needed.  This flag could be moved elsewhere since we don't really use it
for the normal CPU_FTR purpose.

Finally we only store the relative offset in the fixup section to keep it
as small as possible rather than using a full fixup_entry.
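
The runtime fixup loop implied above can be sketched as follows; it assumes
a patch_instruction() helper and a PPC_INST_LWSYNC define, and that each
fixup entry stores a relative offset to the instruction to patch:

	void do_lwsync_fixups(unsigned long value, void *fixup_start,
			      void *fixup_end)
	{
		long *start = fixup_start, *end = fixup_end;
		unsigned int *dest;

		/* CPUs without CPU_FTR_LWSYNC keep the safe "sync". */
		if (!(value & CPU_FTR_LWSYNC))
			return;

		for (; start < end; start++) {
			/* Each entry is the offset from itself to the
			 * "sync" that should become "lwsync". */
			dest = (void *)start + *start;
			patch_instruction(dest, PPC_INST_LWSYNC);
		}
	}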

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-03 16:58:10 +10:00
John Linn 1e6d1f2606 powerpc/legacy_serial: Bail if reg-offset/shift properties are present
The legacy serial driver does not work with an 8250 type UART that is
described in the device tree with the reg-offset and reg-shift
properties.  This change makes legacy_serial ignore these devices.
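
The check being described can be sketched roughly as follows (property
lookup via of_get_property(); the exact placement in legacy_serial.c may
differ):

	/* An 8250 that needs reg-offset/reg-shift handling cannot be
	 * driven by legacy_serial, so skip it here. */
	if (of_get_property(np, "reg-offset", NULL) ||
	    of_get_property(np, "reg-shift", NULL))
		return -1;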

Signed-off-by: John Linn <john.linn@xilinx.com>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
2008-07-01 15:12:37 -06:00
Michael Neuling f3e909c275 powerpc: Update for VSX core file and ptrace
This correctly hooks the VSX dump into Roland McGrath's core file
infrastructure.  It adds the VSX dump information as an additional elf
note in the core file (after talking more to the tool chain/gdb guys).
This also ensures the formats are consistent between signals, ptrace
and core files.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 14:47:09 +10:00
Michael Neuling 436db693c4 powerpc: Fix compile error for CONFIG_VSX
Fix compile error when CONFIG_VSX is enabled.

arch/powerpc/kernel/signal_64.c: In function 'restore_sigcontext':
arch/powerpc/kernel/signal_64.c:241: error: 'i' undeclared (first use in this function)

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 14:47:07 +10:00
Stephen Rothwell fcbc5a976b powerpc: Explicitly copy elements of pt_regs
Gcc 4.3 produced this warning:

arch/powerpc/kernel/signal_64.c: In function 'restore_sigcontext':
arch/powerpc/kernel/signal_64.c:161: warning: array subscript is above array bounds

This is caused by us copying to aliases of elements of the pt_regs
structure.  Make those explicit.

This adds one extra __get_user and unrolls a loop.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:59 +10:00
Michael Neuling ce48b21007 powerpc: Add VSX context save/restore, ptrace and signal support
This patch extends the floating point save and restore code to use the
VSX load/stores when VSX is available.  This will make FP context
save/restore marginally slower on FP only code, when VSX is available,
as it has to load/store 128bits rather than just 64bits.

Mixing FP, VMX and VSX code will get constant architected state.

The signals interface is extended to enable access to VSR 0-31
doubleword 1 after discussions with tool chain maintainers.  Backward
compatibility is maintained.

The ptrace interface is also extended to allow access to VSR 0-31 full
registers.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:50 +10:00
Michael Neuling 72ffff5b17 powerpc: Add VSX assembler code macros
This adds the macros for the VSX load/store instruction as most
binutils are not going to support this for a while.

Also add VSX register save/restore macros and vsr[0-63] register definitions.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:48 +10:00
Michael Neuling b962ce9d26 powerpc: Add VSX CPU feature
Add a VSX CPU feature.  Also add code to detect if VSX is available
from the device tree.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:47 +10:00
Michael Neuling c6e6771b87 powerpc: Introduce VSX thread_struct and CONFIG_VSX
The layout of the new VSR registers and how they overlap on top of the
legacy FPR and VR registers is:

                   VSR doubleword 0               VSR doubleword 1
          ----------------------------------------------------------------
  VSR[0]  |             FPR[0]            |                              |
          ----------------------------------------------------------------
  VSR[1]  |             FPR[1]            |                              |
          ----------------------------------------------------------------
          |              ...              |                              |
          |              ...              |                              |
          ----------------------------------------------------------------
  VSR[30] |             FPR[30]           |                              |
          ----------------------------------------------------------------
  VSR[31] |             FPR[31]           |                              |
          ----------------------------------------------------------------
  VSR[32] |                             VR[0]                            |
          ----------------------------------------------------------------
  VSR[33] |                             VR[1]                            |
          ----------------------------------------------------------------
          |                              ...                             |
          |                              ...                             |
          ----------------------------------------------------------------
  VSR[62] |                             VR[30]                           |
          ----------------------------------------------------------------
  VSR[63] |                             VR[31]                           |
          ----------------------------------------------------------------

VSX has 64 128bit registers.  The first 32 regs overlap with the FP
registers and hence extend them with an additional 64 bits.  The
second 32 regs overlap with the VMX registers.

This commit introduces the thread_struct changes required to reflect
this register layout.  Ptrace and signals code is updated so that the
floating point registers are correctly accessed from the thread_struct
when CONFIG_VSX is enabled.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:46 +10:00
Michael Neuling 6f3d8e6947 powerpc: Make load_up_fpu and load_up_altivec callable
Make load_up_fpu and load_up_altivec callable so they can be reused by
the VSX code.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:45 +10:00
Michael Neuling 10e343925a powerpc: Move altivec_unavailable
Move the altivec_unavailable code, to make room at 0xf40 where the
vsx_unavailable exception will be.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:44 +10:00
Michael Neuling 9c75a31c35 powerpc: Add macros to access floating point registers in thread_struct.
We are going to change where the floating point registers are stored
in the thread_struct, so in preparation add some macros to access the
floating point registers.  Update all code to use these new macros.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:43 +10:00
Michael Neuling 9e7511861c powerpc: Fix MSR setting in 32 bit signal code
If we set the SPE MSR bit in save_user_regs we can blow away the VEC
bit.  This doesn't matter in reality as they are in fact the same bit
but looks bad.

Also, when we add VSX in a later patch, we need to be able to set two
separate MSR bits here.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:42 +10:00
Michael Ellerman c230328def powerpc: Use an alternative feature section in entry_64.S
Use an alternative feature section in _switch. There are three cases
handled here, either we don't have an SLB, in which case we jump over the
entire code section, or if we do we either do or don't have 1TB segments.

Boot tested on Power3, Power5 and Power5+.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:31 +10:00
Michael Ellerman fac23fe4be powerpc: Introduce infrastructure for feature sections with alternatives
The current feature section logic only supports nop'ing out code. This means
that if you want to choose at runtime between instruction sequences, one or both
cases will have to execute the nop'ed out contents of the other section, eg:

BEGIN_FTR_SECTION
	or	1,1,1
END_FTR_SECTION_IFSET(FOO)
BEGIN_FTR_SECTION
	or	2,2,2
END_FTR_SECTION_IFCLR(FOO)

and the resulting code will be either,

	or	1,1,1
	nop

or,
	nop
	or	2,2,2

For small code segments this is fine, but for larger code blocks and in
performance critical code segments, it would be nice to avoid the nops.
This commit starts to implement logic to allow the following:

BEGIN_FTR_SECTION
	or	1,1,1
FTR_SECTION_ELSE
	or	2,2,2
ALT_FTR_SECTION_END_IFSET(FOO)

and the resulting code will be:

	or	1,1,1
or,
	or	2,2,2

We achieve this by extending the existing FTR macros. The current feature
section semantic just becomes a special case, ie. if the else case is empty
we nop out the default case.

The key limitation is that the size of the else case must be less than or
equal to the size of the default case. If the else case is smaller the
remainder of the section is nop'ed.

We let the linker put the else case code in with the rest of the text,
so that relative branches from the else case are more likely to link,
this has the disadvantage that we can't free the unused else cases.

This commit introduces the required macro and linker script changes, but
does not enable the patching of the alternative sections.

We also need to update two hand-made section entries in reg.h and timex.h

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:28 +10:00
Michael Ellerman 51c52e8669 powerpc: Split out do_feature_fixups() from cputable.c
The logic to patch CPU feature sections lives in cputable.c, but these days
it's used for CPU features as well as firmware features.  Move it into
its own file for neatness and as preparation for some additions.

While we're moving the code, we pull the loop body logic into a separate
routine, and remove a comment which doesn't apply anymore.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:24 +10:00
Michael Ellerman b7bcda631e powerpc: Add PPC_NOP_INSTR, a hash define for the preferred nop instruction
A bunch of code has hard-coded the value for a "nop" instruction; it
would be nice to have a #define for it.
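
For reference, the preferred powerpc no-op is "ori 0,0,0", so the define
boils down to a single line (sketch):

	#define PPC_NOP_INSTR	0x60000000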

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:23 +10:00
Michael Ellerman e7a57273c6 powerpc: Allow create_branch() to return errors
Currently create_branch() creates a branch instruction for you, and
patches it into the call site.  In some circumstances it would be nice
to be able to create the instruction and patch it later, and also some
code might want to check for errors in the branch creation before
doing the patching.  A future commit will change create_branch() to
check for errors.

For callers that don't care, replace create_branch() with
patch_branch(), which just creates the branch and patches it directly.
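
The resulting split can be sketched as the following pair of interfaces
(signatures approximate):

	/* Build the branch instruction for addr -> target, suitable for
	 * checking before anything is written to kernel text. */
	unsigned int create_branch(const unsigned int *addr,
				   unsigned long target, int flags);

	/* Build the branch and write it straight into the text at addr,
	 * for callers that don't need to check for errors. */
	void patch_branch(unsigned int *addr, unsigned long target, int flags);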

While we're touching all the callers, change to using unsigned int *,
as this seems to match usage better.  That allows (and requires) us to
remove the volatile in the definition of vector in powermac/smp.c and
mpc86xx_smp.c, that's correct because now that we're passing vector as
an unsigned int * the compiler knows that its value might change
across the patch_branch() call.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Jon Loeliger <jdl@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:19 +10:00
Michael Ellerman aaddd3eaca powerpc: Move code patching code into arch/powerpc/lib/code-patching.c
We currently have a few routines for patching code in asm/system.h, because
they didn't fit anywhere else. I'd like to clean them up a little and add
some more, so first move them into a dedicated C file - they don't need to
be inlined.

While we're moving the code, drop create_function_call(), it's intended
caller never got merged and will be replaced in future with something
different.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:18 +10:00
Kumar Gala f0c426bc35 powerpc: Move common module code into its own file
Refactor common code between ppc32 and ppc64 module handling into a
shared file.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:05 +10:00
Joel Schopp 0cb9901377 powerpc: Tell firmware we support architecture V2.06
Add the bits to the architecture-vec so that ibm,client-architecture
lets the firmware know we support the 2.06 architecture.

Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:00 +10:00
Joel Schopp 635f5a6354 powerpc: Add cputable entry for Power7 architected mode
Add an entry for Power7 architected mode and add "(raw)" to Power7 raw
mode to distinguish it more clearly.

Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:27:59 +10:00
Michael Neuling e952e6c4d6 powerpc: Add cputable entry for POWER7
Add a cputable entry for the POWER7 processor.

Also tell firmware that we know about POWER7.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-06-30 22:31:11 +10:00
Arnd Bergmann 36c35be332 powerpc: Increase CRASH_HANDLER_MAX
There are now two potential callers of machine_crash_shutdown,
so increase the limit accordingly.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-06-30 22:31:00 +10:00
Paul Mackerras e9a4b6a3f6 Merge branch 'linux-2.6' 2008-06-30 10:16:50 +10:00
Paul Mackerras 441dbb500b Merge branch 'next' of master.kernel.org:/pub/scm/linux/kernel/git/jwboyer/powerpc-4xx 2008-06-30 09:57:05 +10:00
Jens Axboe 15c8b6c1aa on_each_cpu(): kill unused 'retry' parameter
It's not even passed on to smp_call_function() anymore, since that
was removed. So kill it.

Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2008-06-26 11:24:38 +02:00
Jens Axboe 8691e5a8f6 smp_call_function: get rid of the unused nonatomic/retry argument
It's never used and the comments refer to nonatomic and retry
interchangeably. So get rid of it.

Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2008-06-26 11:24:35 +02:00
Jens Axboe b7d7a2404f powerpc: convert to generic helpers for IPI function calls
This converts ppc to use the new helpers for smp_call_function() and
friends, and adds support for smp_call_function_single().

ppc loses the timeout functionality of smp_call_function_mask() with
this change, as the generic code does not provide that.

Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2008-06-26 11:22:13 +02:00
Kumar Gala f82796214a powerpc/booke: Add kprobes support for booke style processors
This patch is based on work done by Madhvesh. R. Sulibhavi back in
March 2007.

We refactor some of the single step handling since it differs between
"classic" and "booke" powerpc cores.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-06-26 03:35:46 -05:00
Kumar Gala b76e59d1fb powerpc/kprobes: Some minor fixes
 * Mark __flush_icache_range as a function that can't be probed since it's
  used by the kprobe code.

* Fix an issue with single stepping and async exceptions.  We need to
  ensure that we don't get an async exception (external, decrementer, etc)
  while we are attempting to single step the probe point.

  Added a check to ensure we only handle a single step if it's really
  intended for the instruction in question.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-06-26 03:35:33 -05:00
Kumar Gala aba11fc50c powerpc/e500mc: flush L2 on NAP for e500mc
If we have an L2CSR register (e500mc) we need to flush the L2 before going
to nap.  We use the HW flush mechanism provided in that register.

The code reuses the CPU_FTR_604_PERF_MON bit as it is no longer used by
any code in the kernel.  Additionally we didn't reuse the existing L2CR
feature bit as this is intended for the 7xxx L2CR register and L2CSR
is part of the new Freescale "Book-E" registers.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-06-26 01:49:03 -05:00
Kumar Gala fc4033b2f8 powerpc/85xx: add DOZE/NAP support for e500 core
The e500 core enters DOZE/NAP power-saving modes when the core goes into
the cpu_idle routine.

The default power management running mode is DOZE.  If the user runs

echo 1 > /proc/sys/kernel/powersave-nap

the system will change to NAP running mode.

Signed-off-by: Dave Liu <daveliu@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-06-26 01:48:56 -05:00
Abhishek Sagar 395a59d0f8 ftrace: store mcount address in rec->ip
Record the address of the mcount call-site. Currently all archs except sparc64
record the address of the instruction following the mcount call-site. Some
general cleanups are entailed. Storing mcount addresses in rec->ip enables
looking them up in the kprobe hash table later on to check if they're kprobe'd.

Signed-off-by: Abhishek Sagar <sagar.abhishek@gmail.com>
Cc: davem@davemloft.net
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-06-23 22:10:56 +02:00