ARM: 7924/1: mm: don't bother with reserved ttbr0 when running with LPAE

With the new ASID allocation algorithm, active ASIDs at the time of a
rollover event will be marked as reserved, so active mm_structs can
continue to operate with the same ASID as before. This in turn means
that we don't need to worry about allocating a new ASID to an mm that
is currently active (installed in TTBR0).

Since updating the pgd and ASID is atomic on LPAE systems (by virtue of
the two being fields in the same hardware register), we can dispose of
the reserved TTBR0 and rely on whatever tables we currently have live.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Will Deacon 2013-12-17 19:17:11 +01:00 committed by Russell King
Parent a472b09dd5
Commit e1a5848e33
1 changed file with 11 additions and 10 deletions


@@ -78,20 +78,21 @@ void a15_erratum_get_cpumask(int this_cpu, struct mm_struct *mm,
 #endif
 
 #ifdef CONFIG_ARM_LPAE
-static void cpu_set_reserved_ttbr0(void)
-{
-	/*
-	 * Set TTBR0 to swapper_pg_dir which contains only global entries. The
-	 * ASID is set to 0.
-	 */
-	cpu_set_ttbr(0, __pa(swapper_pg_dir));
-	isb();
-}
+/*
+ * With LPAE, the ASID and page tables are updated atomicly, so there is
+ * no need for a reserved set of tables (the active ASID tracking prevents
+ * any issues across a rollover).
+ */
+#define cpu_set_reserved_ttbr0()
 #else
 static void cpu_set_reserved_ttbr0(void)
 {
 	u32 ttb;
-	/* Copy TTBR1 into TTBR0 */
+	/*
+	 * Copy TTBR1 into TTBR0.
+	 * This points at swapper_pg_dir, which contains only global
+	 * entries so any speculative walks are perfectly safe.
+	 */
 	asm volatile(
 	"	mrc	p15, 0, %0, c2, c0, 1		@ read TTBR1\n"
 	"	mcr	p15, 0, %0, c2, c0, 0		@ set TTBR0\n"