Commit Graph

23 Commits

Author SHA1 Message Date
Nicolas Pitre 3e99675af1 ARM: 7582/2: rename kvm_seq to vmalloc_seq so to avoid confusion with KVM
The kvm_seq value has nothing whatsoever to do with the other KVM (the
hypervisor). Given that KVM support on ARM is imminent, it's best to
rename kvm_seq into something else that clearly identifies what it is
about, i.e. a sequence number for vmalloc section mappings.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-11-26 12:23:53 +00:00
Will Deacon bf51bb82cc ARM: mm: use bitmap operations when allocating new ASIDs
When allocating a new ASID, we must take care not to re-assign a
reserved ASID-value to a new mm. This requires us to check each
candidate ASID against those currently reserved by other cores before
assigning a new ASID to the current mm.

This patch improves the ASID allocation algorithm by using a
bitmap-based approach. Rather than iterating over the reserved ASID
array for each candidate ASID, we simply find the first zero bit,
ensuring that those indices corresponding to reserved ASIDs are set
when flushing during a rollover event.
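
A minimal sketch of the bitmap search, using NUM_USER_ASIDS, asid_map,
allocate_asid and flush_context as illustrative names rather than the
exact symbols in arch/arm/mm/context.c:

  #include <linux/bitmap.h>
  #include <linux/bitops.h>

  #define NUM_USER_ASIDS	256

  static DECLARE_BITMAP(asid_map, NUM_USER_ASIDS);
  void flush_context(void);	/* re-sets the bits for reserved ASIDs */

  static unsigned int allocate_asid(void)
  {
      /* First zero bit == first ASID not reserved by any core. */
      unsigned int asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);

      if (asid >= NUM_USER_ASIDS) {
          /* Rollover: mark the reserved ASIDs again, then retry. */
          flush_context();
          asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
      }

      __set_bit(asid, asid_map);
      return asid;
  }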

Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2012-11-05 16:25:25 +00:00
Will Deacon 4b88316083 ARM: mm: avoid taking ASID spinlock on fastpath
When scheduling a new mm, we take a spinlock so that we can:

  1. Safely allocate a new ASID, if required
  2. Update our active_asids field without worrying about parallel
     updates to reserved_asids
  3. Ensure that we flush our local TLB, if required

However, this has the nasty effect of serialising context-switch across
all CPUs in the system. The usual (fast) case is where the next mm has
a valid ASID for the current generation. In such a scenario, we can
avoid taking the lock and instead use atomic64_xchg to update the
active_asids variable for the current CPU. If a rollover occurs on
another CPU (which would take the lock), another atomic64_xchg is used
to replace each active_asids entry with 0 when copying the active_asids
into the reserved_asids. The fast path can then detect this case and
fall back to spinning on the lock.
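
A hedged sketch of the fast path, assuming mm->context.id is held in an
atomic64_t and the generation lives above ASID_BITS; active_asids,
asid_generation and slow_path are illustrative names:

  #define ASID_BITS	8

  static DEFINE_PER_CPU(atomic64_t, active_asids);
  static atomic64_t asid_generation;

  void slow_path(struct mm_struct *mm, unsigned int cpu);	/* takes the lock */

  void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
  {
      u64 asid = atomic64_read(&mm->context.id);

      /*
       * Fast path: the ASID belongs to the current generation and no
       * rollover has zeroed this CPU's active_asids slot meanwhile.
       * atomic64_xchg both publishes our ASID and detects that race.
       */
      if (!((asid ^ atomic64_read(&asid_generation)) >> ASID_BITS) &&
          atomic64_xchg(&per_cpu(active_asids, cpu), asid))
          return;				/* no lock taken */

      /* Slow path: spin on the lock, allocate/flush as needed. */
      slow_path(mm, cpu);
  }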

Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2012-11-05 16:25:25 +00:00
Will Deacon b5466f8728 ARM: mm: remove IPI broadcasting on ASID rollover
ASIDs are allocated to MMU contexts based on a rolling counter. This
means that after 255 allocations we must invalidate all existing ASIDs
via an expensive IPI mechanism to synchronise all of the online CPUs and
ensure that all tasks execute with an ASID from the new generation.

This patch changes the rollover behaviour so that we rely instead on the
hardware broadcasting of the TLB invalidation to avoid the IPI calls.
This works by keeping track of the active ASID on each core, which is
then reserved in the case of a rollover so that currently scheduled
tasks can continue to run. For cores without hardware TLB broadcasting,
we keep track of pending flushes in a cpumask, so cores can flush their
local TLB before scheduling a new mm.
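
A rough sketch of the bookkeeping; active_asids, reserved_asids,
tlb_flush_pending and maybe_flush_local_tlb are illustrative names:

  static DEFINE_PER_CPU(u64, active_asids);
  static DEFINE_PER_CPU(u64, reserved_asids);
  static cpumask_t tlb_flush_pending;

  /* Rollover: remember what each core is running right now, so currently
   * scheduled tasks keep a usable ASID, and mark every core as needing a
   * local TLB flush before it next installs an mm. */
  static void flush_context(void)
  {
      int cpu;

      for_each_possible_cpu(cpu)
          per_cpu(reserved_asids, cpu) = per_cpu(active_asids, cpu);
      cpumask_setall(&tlb_flush_pending);
  }

  /* Context switch: each core flushes its own TLB lazily rather than
   * being interrupted by a broadcast IPI at rollover time. */
  static void maybe_flush_local_tlb(unsigned int cpu)
  {
      if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
          local_flush_tlb_all();
  }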

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2012-11-05 16:25:25 +00:00
Will Deacon ae3790b8a9 ARM: 7502/1: contextidr: avoid using bfi instruction during notifier
The bfi instruction is not available on ARMv6, so instead use an and/orr
sequence in the contextidr_notifier. This gets rid of the assembler
error:

  Assembler messages:
  Error: selected processor does not support ARM mode `bfi r3,r2,#0,#8'
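
At the C level the change is just a masked insert; roughly the following,
assuming the ASID lives in CONTEXTIDR[7:0] and the PID goes into the
PROCID field above it:

  /* bfi would insert the 8-bit ASID field in one instruction; the
   * ARMv6-safe equivalent is an explicit mask followed by an or: */
  contextidr = (contextidr & 0xff) | (pid << 8);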

Reported-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-08-25 09:15:57 +01:00
Will Deacon 575320d625 ARM: 7445/1: mm: update CONTEXTIDR register to contain PID of current process
This patch introduces a new Kconfig option which, when enabled, causes
the kernel to write the PID of the current task into the PROCID field
of the CONTEXTIDR on context switch. This is useful when analysing
hardware trace, since writes to this register can be configured to emit
an event into the trace stream.

The thread notifier for writing the PID is deliberately kept separate
from the ASID-writing code so that we can support newer processors using
LPAE, where the ASID is stored in TTBR0. As such, the switch_mm code is
updated to perform a read-modify-write sequence to ensure that we don't
clobber the PID on CPUs using the classic 2-level page tables.
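
A hedged sketch of that read-modify-write (CONTEXTIDR is cp15 c13, c0, 1;
only the low byte holding the ASID is rewritten, so the PROCID field set
by the thread notifier survives):

  static inline void set_asid(unsigned int asid)
  {
      unsigned int contextidr;

      asm volatile("mrc p15, 0, %0, c13, c0, 1" : "=r" (contextidr));
      contextidr = (contextidr & ~0xffu) | (asid & 0xff);
      asm volatile("mcr p15, 0, %0, c13, c0, 1\n\tisb" : : "r" (contextidr));
  }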

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-07-09 17:41:10 +01:00
Catalin Marinas e323969ccd ARM: Remove current_mm per-cpu variable
The current_mm variable was used to store the new mm between the
switch_mm() and switch_to() calls where an IPI to reset the context
could have set the wrong mm. Since interrupts are disabled during
context switch, there is no need for this variable; current->active_mm
already points to the current mm when interrupts are re-enabled.

Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2012-04-17 15:29:37 +01:00
Catalin Marinas 7fec1b57b8 ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on ASID-capable CPUs
Since the ASIDs must be unique to an mm across all the CPUs in a system,
the __new_context() function needs to broadcast a context reset event to
all the CPUs during ASID allocation if a roll-over occurred. Such IPIs
cannot be issued with interrupts disabled and ARM had to define
__ARCH_WANT_INTERRUPTS_ON_CTXSW.

This patch changes the check_context() function to
check_and_switch_context() called from switch_mm(). In case of
ASID-capable CPUs (ARMv6 onwards), if a new ASID is needed and the
interrupts are disabled, it defers the __new_context() and
cpu_switch_mm() calls to the post-lock switch hook where the interrupts
are enabled. Setting the reserved TTBR0 was also moved to
check_and_switch_context() from cpu_v7_switch_mm().
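
A hedged sketch of the deferral; asid_is_stale() stands in for the real
generation check and the thread flag name is an assumption:

  void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk)
  {
      if (irqs_disabled() && asid_is_stale(mm)) {
          /*
           * Allocating an ASID may broadcast a reset IPI, which cannot be
           * issued with interrupts off.  Run on the reserved, global-only
           * tables for now and finish once interrupts are back on.
           */
          cpu_set_reserved_ttbr0();
          set_ti_thread_flag(task_thread_info(tsk), TIF_SWITCH_MM);
          return;
      }

      if (asid_is_stale(mm))
          __new_context(mm);
      cpu_switch_mm(mm->pgd, mm);
  }

  /* Called by the scheduler after the runqueue lock is dropped and
   * interrupts are enabled again. */
  void finish_arch_post_lock_switch(void)
  {
      if (test_and_clear_thread_flag(TIF_SWITCH_MM)) {
          struct mm_struct *mm = current->mm;

          __new_context(mm);
          cpu_switch_mm(mm->pgd, mm);
      }
  }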

Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2012-04-17 15:29:32 +01:00
Will Deacon 3c5f7e7b4a ARM: Use TTBR1 instead of reserved context ID
On ARMv7 CPUs that cache first level page table entries (like the
Cortex-A15), using a reserved ASID while changing the TTBR or flushing
the TLB is unsafe.

This is because the CPU may cache the first level entry as the result of
a speculative memory access while the reserved ASID is assigned. After
the process owning the page tables dies, the memory will be reallocated
and may be written with junk values which can be interpreted as global,
valid PTEs by the processor. This will result in the TLB being populated
with bogus global entries.

This patch avoids the use of a reserved context ID in the v7 switch_mm
and ASID rollover code by temporarily using the swapper_pg_dir pointed
at by TTBR1, which contains only global entries that are not tagged
with ASIDs.
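
A hedged sketch of the idea (the real sequence is assembly in proc-v7.S);
cp15 encodings assumed here: TTBR0 is c2/c0/0, TTBR1 is c2/c0/1,
CONTEXTIDR is c13/c0/1:

  static inline void v7_switch_mm_sketch(unsigned long pgd_phys, unsigned int asid)
  {
      unsigned long ttbr1;

      /* Run briefly on the global-only swapper tables held in TTBR1, so
       * nothing can be speculatively fetched under a stale or reserved
       * ASID while we change context. */
      asm volatile("mrc p15, 0, %0, c2, c0, 1" : "=r" (ttbr1));
      asm volatile("mcr p15, 0, %0, c2, c0, 0\n\tisb" : : "r" (ttbr1));

      /* Install the new ASID, then the new page tables. */
      asm volatile("mcr p15, 0, %0, c13, c0, 1\n\tisb" : : "r" (asid));
      asm volatile("mcr p15, 0, %0, c2, c0, 0\n\tisb" : : "r" (pgd_phys));
  }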

Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
[catalin.marinas@arm.com: add LPAE support]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2012-04-17 15:29:21 +01:00
Catalin Marinas 14d8c9512a ARM: LPAE: Add context switching support
With LPAE, TTBRx registers are 64-bit. The ASID is stored in TTBR0
rather than a separate Context ID register. This patch makes the
necessary changes to handle context switching on LPAE.
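
A hedged sketch of the difference: the 64-bit TTBR0 is assumed to carry
the ASID in bits [55:48], so the mm switch becomes a single TTBR0 write
instead of a separate Context ID register update:

  static inline u64 make_lpae_ttbr0(u64 pgd_phys, unsigned int asid)
  {
      /* ASID in TTBR0[55:48], translation table base in the low bits. */
      return pgd_phys | ((u64)(asid & 0xff) << 48);
  }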

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2011-12-08 10:30:40 +00:00
Thomas Gleixner bd31b85960 locking, ARM: Annotate low level hw locks as raw
Annotate the low level hardware locks which must not be preempted.

In mainline this change documents the low level nature of
the lock - otherwise there's no functional difference. Lockdep
and Sparse checking will work as usual.
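
For arch/arm/mm/context.c this amounts to the kind of change sketched
below, using cpu_asid_lock as the example; the raw_* variants behave the
same in mainline but stay non-preemptible under RT:

  /* previously: static DEFINE_SPINLOCK(cpu_asid_lock); */
  static DEFINE_RAW_SPINLOCK(cpu_asid_lock);

  static void example_critical_section(void)
  {
      unsigned long flags;

      raw_spin_lock_irqsave(&cpu_asid_lock, flags);
      /* low level hw state manipulation that must not be preempted */
      raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
  }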

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2011-09-13 11:12:14 +02:00
Russell King a0a54d37b4 Revert "ARM: 6944/1: mm: allow ASID 0 to be allocated to tasks"
This reverts commit 45b95235b0.

Will Deacon reports that:

 In 52af9c6c ("ARM: 6943/1: mm: use TTBR1 instead of reserved context ID")
 I updated the ASID rollover code to use only the kernel page tables
 whilst updating the ASID.

 Unfortunately, the code to restore the user page tables was part of a
 later patch which isn't yet in mainline, so this leaves the code
 quite broken.

We're also in the process of eliminating __ARCH_WANT_INTERRUPTS_ON_CTXSW
from ARM, so let's revert these until we can properly sort out what we're
doing with the context switching.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-09 10:13:16 +01:00
Russell King 07989b7ad6 Revert "ARM: 6943/1: mm: use TTBR1 instead of reserved context ID"
This reverts commit 52af9c6cd8.

Will Deacon reports that:

 In 52af9c6c ("ARM: 6943/1: mm: use TTBR1 instead of reserved context ID")
 I updated the ASID rollover code to use only the kernel page tables
 whilst updating the ASID.

 Unfortunately, the code to restore the user page tables was part of a
 later patch which isn't yet in mainline, so this leaves the code
 quite broken.

We're also in the process of eliminating __ARCH_WANT_INTERRUPTS_ON_CTXSW
from ARM, so let's revert these until we can properly sort out what we're
doing with the ARM context switching.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-09 10:13:04 +01:00
Will Deacon 45b95235b0 ARM: 6944/1: mm: allow ASID 0 to be allocated to tasks
Now that ASID 0 is no longer used as a reserved value, allow it to be
allocated to tasks.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-05-26 12:14:33 +01:00
Will Deacon 52af9c6cd8 ARM: 6943/1: mm: use TTBR1 instead of reserved context ID
On ARMv7 CPUs that cache first level page table entries (like the
Cortex-A15), using a reserved ASID while changing the TTBR or flushing
the TLB is unsafe.

This is because the CPU may cache the first level entry as the result of
a speculative memory access while the reserved ASID is assigned. After
the process owning the page tables dies, the memory will be reallocated
and may be written with junk values which can be interpreted as global,
valid PTEs by the processor. This will result in the TLB being populated
with bogus global entries.

This patch avoids the use of a reserved context ID in the v7 switch_mm
and ASID rollover code by temporarily using the swapper_pg_dir pointed
at by TTBR1, which contains only global entries that are not tagged
with ASIDs.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-05-26 12:14:33 +01:00
Catalin Marinas 11805bcfa4 ARM: 5905/1: ARM: Global ASID allocation on SMP
The current ASID allocation algorithm doesn't ensure the notification
of the other CPUs when the ASID rolls over. This may lead to two
processes using the same ASID (but different generation) or multiple
threads of the same process using different ASIDs.

This patch adds the broadcasting of the ASID rollover event to the
other CPUs. To avoid a race on multiple CPUs modifying "cpu_last_asid"
during the handling of the broadcast, the ASID numbering now starts at
"smp_processor_id() + 1". At rollover, the cpu_last_asid will be set
to NR_CPUS.
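
A hedged sketch of the rollover arithmetic, with the version in the upper
bits of cpu_last_asid and names kept close to the commit text (this is a
fragment, not the exact code):

  if (unlikely((asid & ~ASID_MASK) == 0)) {	/* ASID space exhausted */
      /* Each CPU claims a distinct value, so racing updates of
       * cpu_last_asid during the broadcast cannot hand out duplicates. */
      asid = cpu_last_asid + smp_processor_id() + 1;
      flush_context();
      smp_wmb();
      smp_call_function(reset_context, NULL, 1);	/* notify the other CPUs */
      cpu_last_asid += NR_CPUS;
  }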

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 21:39:51 +00:00
Russell King df71dfd4ca ARM: Fix errata 411920 workarounds
Errata 411920 indicates that any "invalidate entire instruction cache"
operation can fail if the right conditions are present.  This is not
limited just to those operations in flush.c, but elsewhere.  Place the
workaround in the already existing __flush_icache_all() function
instead.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-10-29 19:13:09 +00:00
Rusty Russell 56f8ba83a5 cpumask: use mm_cpumask() wrapper: arm
Makes the code future-proof against the impending change to mm->cpu_vm_mask.

It's also a chance to use the new cpumask_ ops which take a pointer
(the older ones are deprecated, but there's no hurry for arch code).
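
For example, a call site that used to poke mm->cpu_vm_mask directly now
goes through the accessor (prev and next are illustrative mm pointers):

  /* old style, touching the struct member with the by-value cpu_* ops: */
  cpu_clear(cpu, prev->cpu_vm_mask);
  cpu_set(cpu, next->cpu_vm_mask);

  /* new style, via the wrapper and the pointer-taking cpumask_* ops: */
  cpumask_clear_cpu(cpu, mm_cpumask(prev));
  cpumask_set_cpu(cpu, mm_cpumask(next));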

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2009-09-24 09:34:49 +09:30
Russell King 805f53f085 Merge branches 'armv7', 'at91', 'misc' and 'omap' into devel 2007-05-09 10:41:28 +01:00
Catalin Marinas 065cf519c3 [ARM] armv7: add support for asid-tagged VIVT I-cache
ARMv7 can have VIPT, PIPT or ASID-tagged VIVT I-cache. This patch
adds the necessary invalidation of the I-cache when the ASID numbers
are re-used.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2007-05-09 09:50:23 +01:00
Russell King 8678c1f042 [ARM] Fix ASID version switch
Close a hole in the ASID version switch, particularly the following
scenario:

CPU0 MM PID			CPU1 MM PID
	idle
				  A	pid(A)
				  A	idle(lazy tlb)
		* new asid version triggered by B *
  B	pid(B)
  A	pid(A)
		* MM A gets new asid version *
  A	idle(lazy tlb)
				  A	pid(A)
		* CPU1 doesn't see the new ASID *

The result is that CPU1 continues running with the hardware set
for the original (stale) ASID value, but mm->context.id contains
the new ASID value.  Consequently, the next MM fault on CPU1
updates the page table entries, but flush_tlb_page() fails due to
the wrong ASID.

There is a related case where a threaded application is allocated
a new ASID on one CPU while another of its threads is running on
some different CPU.  This scenario is not fixed by this commit.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2007-05-08 20:03:09 +01:00
Catalin Marinas 9d99df4b10 [ARM] 4128/1: Architecture compliant TTBR changing sequence
On newer architectures (ARMv6, ARMv7), the depth of the prefetch and
branch prediction is implementation defined and there is a small risk
of wrong ASID tagging when changing TTBR0 before setting the new
context id. The recommended solution is to set a reserved ASID during
TTBR changing. This patch reserves ASID 0.
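
A hedged sketch of the recommended sequence; cp15 encodings assumed
(CONTEXTIDR is c13/c0/1, TTBR0 is c2/c0/0), new_ttb and new_cid are
illustrative variables, and "isb" is written with the ARMv7 mnemonic:

  asm volatile(
      "mcr	p15, 0, %0, c13, c0, 1\n\t"	/* switch to reserved ASID 0 */
      "isb\n\t"
      "mcr	p15, 0, %1, c2, c0, 0\n\t"	/* change the translation table base */
      "isb\n\t"
      "mcr	p15, 0, %2, c13, c0, 1\n\t"	/* install the new context ID */
      "isb"
      : : "r" (0), "r" (new_ttb), "r" (new_cid) : "memory");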

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2007-02-08 14:49:24 +00:00
Russell King d84b47115a [ARM] Move mmu.c out of the way
Rename mmu.c to context.c - it's the ARMv6 ASID context handling
code rather than generic "mmu" handling code.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2006-09-20 14:58:35 +01:00