Stephen Rothwell reports that commit 3f4c9f8f0a20 ("ARM: 8197/1:
vfp: Fix VFPv3 hwcap detection on CPUID based cpus") introduced an
unused variable warning:
arch/arm/vfp/vfpmodule.c: In function 'vfp_init':
arch/arm/vfp/vfpmodule.c:725:6: warning: unused variable 'mvfr0'
[-Wunused-variable]
u32 mvfr0;
Silence this warning by using IS_ENABLED instead of ifdefs.
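As a hedged illustration (not the literal diff), the IS_ENABLED() pattern keeps
the variable's use visible to the compiler in every configuration; the MVFR0
field macros are assumed from the VFPv3 detection patch below:

        u32 mvfr0 = fmrx(MVFR0);

        /*
         * The compiler always sees this use of mvfr0, even when
         * CONFIG_VFPv3 is disabled, so no -Wunused-variable is emitted;
         * dead-code elimination still drops the branch.
         */
        if (IS_ENABLED(CONFIG_VFPv3) &&
            ((mvfr0 & MVFR0_DP_MASK) >> MVFR0_DP_BIT) == 2)
                elf_hwcap |= HWCAP_VFPv3;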
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The subarchitecture field in the fpsid register is 7 bits wide on
ARM CPUs using the CPUID identification scheme, spanning bits 22
to 16. The topmost bit is used to designate that the
subarchitecture designer is not ARM when it is set to 1. On
non-CPUID scheme CPUs the subarchitecture field is only 4 bits
wide and the higher bits are used to indicate no double precision
support (bit 20) and the FSTMX/FLDMX format (bits 21-22).
The VFP support code only looks at bits 19-16 to determine the
VFP version. On Qualcomm's processors (Krait and Scorpion) we
should see HWCAP_VFPv3, but we don't: bit 22 is set to 1 to indicate
that the subarchitecture is not implemented by ARM, and the rest of
the bits are left as 0 because this is the first subarchitecture that
Qualcomm has designed.
Unfortunately we can't just widen the FPSID subarchitecture
bitmask to consider all the bits on a CPUID scheme because there
may be CPUs without the CPUID scheme that have VFP without double
precision support, in which case the reported version would be a
large, bogus number. Instead, update the version detection logic to
consider if the CPU is using the CPUID scheme.
If the CPU is using the CPUID scheme, use the MVFR registers to
determine what version of VFP is supported. We already do this
for VFPv4, so do something similar for VFPv3 and look for single
or double precision support in MVFR0. Otherwise fall back to
using FPSID to detect VFP support on non-CPUID scheme CPUs. We
know that VFPv3 is only present in CPUs that have support for the
CPUID scheme so this should be equivalent.
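A minimal sketch of the resulting detection flow (simplified; cpuid_scheme and
the MVFR0 field macros are illustrative names rather than the exact ones in
the patch):

        u32 fpsid = fmrx(FPSID);

        if (cpuid_scheme) {
                /* CPUID scheme: trust MVFR0, not the FPSID subarch field. */
                u32 mvfr0 = fmrx(MVFR0);

                if (((mvfr0 & MVFR0_DP_MASK) >> MVFR0_DP_BIT) == 2 ||
                    ((mvfr0 & MVFR0_SP_MASK) >> MVFR0_SP_BIT) == 2)
                        elf_hwcap |= HWCAP_VFPv3;
        } else if (((fpsid & FPSID_ARCH_MASK) >> FPSID_ARCH_BIT) >= 2) {
                /* Non-CPUID CPUs: the narrow FPSID field is still valid. */
                elf_hwcap |= HWCAP_VFPv3;
        }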
Tested-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The CPU_DYING notifier is called by the cpu stopper task, which
does not own the context held in the VFP hardware. Calling
vfp_force_reload() has no effect.
Replace it with clearing vfp_current_hw_state.
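A sketch of the resulting hotplug callback, assuming the pre-4.x CPU notifier
API and the existing vfp_hotplug()/vfp_enable() helpers:

        static int vfp_hotplug(struct notifier_block *b, unsigned long action,
                               void *hcpu)
        {
                if (action == CPU_DYING || action == CPU_DYING_FROZEN) {
                        /*
                         * The stopper task never owns the VFP context, so
                         * just invalidate the per-cpu pointer instead of
                         * calling vfp_force_reload().
                         */
                        vfp_current_hw_state[(long)hcpu] = NULL;
                } else if (action == CPU_STARTING ||
                           action == CPU_STARTING_FROZEN) {
                        vfp_enable(NULL);
                }
                return NOTIFY_OK;
        }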
Signed-off-by: Yuanyuan Zhong <zyy@motorola.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
In order to safely support the use of NEON instructions in
kernel mode, some precautions need to be taken:
- the userland context that may be present in the registers (even
if the NEON/VFP is currently disabled) must be stored under the
correct task (which may not be 'current' in the UP case),
- to avoid having to keep track of additional vfpstates for the
kernel side, disallow the use of NEON in interrupt context
and run with preemption disabled,
- after use, re-enable preemption and re-enable the lazy restore
machinery by disabling the NEON/VFP unit.
This patch adds the functions kernel_neon_begin() and
kernel_neon_end() which take care of the above. It also adds
the Kconfig symbol KERNEL_MODE_NEON to enable it.
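A hedged usage sketch (do_neon_copy() is a hypothetical caller;
kernel_neon_begin()/kernel_neon_end() are the API added here):

        #include <asm/neon.h>

        static void do_neon_copy(void *dst, const void *src, size_t len)
        {
                /*
                 * Process context only: preemption stays disabled between
                 * the two calls, and the NEON/VFP unit is disabled again
                 * afterwards so the userland state is lazily restored.
                 */
                kernel_neon_begin();
                /* ... NEON-optimised copy loop (assembly or intrinsics) ... */
                kernel_neon_end();
        }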
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
The support code in vfp_support_entry does not care whether the
exception that caused it to be invoked occurred in kernel mode or
in user mode. However, neither condition that could trigger this
exception (lazy restore and VFP bounce to support code) is
currently allowable in kernel mode.
In either case, print a message describing the condition before
letting the undefined instruction handler run its course and trigger
an oops.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
In order to use the NEON unit in the kernel, we should
initialize it a bit earlier in the boot process so NEON users
that like to do a quick benchmark at load time (like the
xor_blocks or RAID-6 code) find the NEON/VFP unit already
enabled.
Replaced late_initcall() with core_initcall().
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Martin Storsjö reports that the sequence:
ee312ac1 vsub.f32 s4, s3, s2
ee702ac0 vsub.f32 s5, s1, s0
e59f0028 ldr r0, [pc, #40]
ee111a90 vmov r1, s3
on Raspberry Pi (implementor 41 architecture 1 part 20 variant b rev 5)
where s3 is a denormal and s2 is zero results in incorrect behaviour -
the instruction "vsub.f32 s5, s1, s0" is not executed:
VFP: bounce: trigger ee111a90 fpexc d0000780
VFP: emulate: INST=0xee312ac1 SCR=0x00000000
...
As we can see, the instruction triggering the exception is the "vmov"
instruction, and we emulate the "vsub.f32 s4, s3, s2" but fail to
properly take account of the FPEXC_FP2V flag in FPEXC. This is because
the test for the second instruction register being valid is bogus, and
will always skip emulation of the second instruction.
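Rendered as a hedged C sketch of the intended behaviour (variable names follow
VFP_bounce(); the faulty test itself lives in the support assembly):

        /*
         * Only emulate a second instruction when FPEXC.FP2V says that
         * FPINST2 is valid; otherwise the bounced instruction has no
         * follow-up to emulate.
         */
        if (fpexc & FPEXC_FP2V) {
                u32 fpinst2 = fmrx(FPINST2);

                exceptions = vfp_emulate_instruction(fpinst2, fpscr, regs);
                if (exceptions)
                        vfp_raise_exceptions(exceptions, fpinst2, fpscr, regs);
        }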
Cc: <stable@vger.kernel.org>
Reported-by: Martin Storsjö <martin@martin.st>
Tested-by: Martin Storsjö <martin@martin.st>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
After commit 846a136881 ("ARM: vfp: fix
saving d16-d31 vfp registers on v6+ kernels"), the OMAP 2430SDP board
started crashing during boot with omap2plus_defconfig:
[ 3.875122] mmcblk0: mmc0:e624 SD04G 3.69 GiB
[ 3.915954] mmcblk0: p1
[ 4.086639] Internal error: Oops - undefined instruction: 0 [#1] SMP ARM
[ 4.093719] Modules linked in:
[ 4.096954] CPU: 0 Not tainted (3.6.0-02232-g759e00b #570)
[ 4.103149] PC is at vfp_reload_hw+0x1c/0x44
[ 4.107666] LR is at __und_usr_fault_32+0x0/0x8
It turns out that the context save/restore fix unmasked a latent bug
in commit 5aaf254409 ("ARM: 6203/1: Make
VFPv3 usable on ARMv6"). When CONFIG_VFPv3 is set, but the kernel is
booted on a pre-VFPv3 core, the code attempts to save and restore the
d16-d31 VFP registers. These are only present on non-D16 VFPv3+, so
this results in an undefined instruction exception. The code didn't
crash before commit 846a136 because the save and restore code was
only touching d0-d15, which are present on all VFP implementations.
Fix by implementing a request from Russell King to add a new HWCAP
flag that affirmatively indicates the presence of the d16-d31
registers:
http://marc.info/?l=linux-arm-kernel&m=135013547905283&w=2
and some feedback from Måns to clarify the name of the HWCAP flag.
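Userspace can then gate any use of d16-d31 on the new flag; a minimal sketch,
assuming the flag ended up named HWCAP_VFPD32:

        #include <stdio.h>
        #include <sys/auxv.h>           /* getauxval(), AT_HWCAP */
        #include <asm/hwcap.h>          /* HWCAP_* bits */

        int main(void)
        {
                unsigned long hwcap = getauxval(AT_HWCAP);

                /* Only touch d16-d31 when the kernel affirmatively says so. */
                printf("d16-d31: %s\n",
                       (hwcap & HWCAP_VFPD32) ? "present" : "absent");
                return 0;
        }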
Signed-off-by: Paul Walmsley <paul@pwsan.com>
Cc: Tony Lindgren <tony@atomide.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Martin <dave.martin@linaro.org>
Cc: Måns Rullgård <mans.rullgard@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
VFPv4 support depends on the VFPv3 context save/restore code, so only
advertise support in the hwcaps if the kernel can actually handle it.
Cc: <stable@vger.kernel.org> # 3.1+
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
vfp_pm_suspend should save the VFP state in suspend after
any lazy context switch. If it only saves when the VFP is enabled,
the state can get lost when, on a UP system:
- Thread 1 uses the VFP
- Context switch occurs to thread 2, VFP is disabled but the
  VFP context is not saved
- Thread 2 initiates suspend
- vfp_pm_suspend is called with the VFP disabled, and the unsaved
  VFP context of Thread 1 in the registers
Modify vfp_pm_suspend to save the VFP context whenever
vfp_current_hw_state is not NULL.
Includes a fix from Ido Yariv <ido@wizery.com>, who pointed out that on
SMP systems, the state pointer can be pointing to a freed task struct if
a task exited on another cpu; this is handled by guarding the new if
clause with #ifndef CONFIG_SMP.
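A sketch of the resulting handler under these assumptions (close to, but not
necessarily identical to, the final code):

        static int vfp_pm_suspend(void)
        {
                struct thread_info *ti = current_thread_info();
                u32 fpexc = fmrx(FPEXC);

                if (fpexc & FPEXC_EN) {
                        /* VFP is live: save this thread's state, disable it. */
                        vfp_save_state(&ti->vfpstate, fpexc);
                        fmxr(FPEXC, fpexc & ~FPEXC_EN);
                } else if (vfp_current_hw_state[ti->cpu]) {
#ifndef CONFIG_SMP
                        /*
                         * UP only: on SMP the pointer may refer to a task
                         * that exited on another cpu, so leave it alone.
                         */
                        fmxr(FPEXC, fpexc | FPEXC_EN);
                        vfp_save_state(vfp_current_hw_state[ti->cpu], fpexc);
                        fmxr(FPEXC, fpexc);
#endif
                }

                /* Force a lazy reload after resume. */
                vfp_current_hw_state[ti->cpu] = NULL;

                return 0;
        }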
Cc: Barry Song <bs14@csr.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ido Yariv <ido@wizery.com>
Cc: Daniel Drake <dsd@laptop.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Colin Cross <ccross@android.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
vfp_pm_suspend runs on each cpu, so only clear the hardware state
pointer for the current cpu. This prevents a possible crash if one
cpu clears the hw state pointer when another cpu has already
checked that it is valid.
Cc: stable@vger.kernel.org
Signed-off-by: Colin Cross <ccross@android.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit ff9a184c ("ARM: 7400/1: vfp: clear fpscr length and stride bits
on entry to sig handler") flushes the VFP state prior to entering a
signal handler so that a VFP operation inside the handler will trap and
force a restore of ABI-compliant registers. Reflushing and disabling VFP
on the sigreturn path is predicated on the saved thread state indicating
that VFP was used by the handler -- however for SMP platforms this is
only set on context-switch, making the check unreliable and causing VFP
register corruption in userspace since the register values are not
necessarily those restored from the sigframe.
This patch unconditionally flushes the VFP state after a signal handler.
Since we already perform the flush before the handler and the flushing
itself happens lazily, the redundant flush when VFP is not used by the
handler is essentially a nop.
Reported-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The vfp_enable function enables access to the VFP co-processor register
space (cp10 and cp11) on the current CPU and must be called with
preemption disabled. Unfortunately, the vfp_init late initcall does not
disable preemption and can lead to an oops during boot if thread
migration occurs at the wrong time and we end up attempting to access
the FPSID on a CPU with VFP access disabled.
This patch fixes the initcall to call vfp_enable from a non-preemptible
context on each CPU and adds a BUG_ON(preemptible()) to ensure that any
similar problems are easily spotted in the future.
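A hedged sketch of the shape of the fix (CPACC_FULL() and the copro-access
helpers come from the CP15/system headers; the real vfp_init() does much more):

        static void vfp_enable(void *unused)
        {
                u32 access;

                BUG_ON(preemptible());  /* must not migrate mid-update */

                access = get_copro_access();
                /* Enable full access to VFP (cp10 and cp11) on this CPU. */
                set_copro_access(access | CPACC_FULL(10) | CPACC_FULL(11));
        }

        static int __init vfp_init(void)
        {
                /* Enable cp10/cp11 access on every CPU, non-preemptibly. */
                on_each_cpu(vfp_enable, NULL, 1);

                /* ... probe FPSID, set hwcaps, register notifiers ... */
                return 0;
        }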
Cc: stable@vger.kernel.org
Reported-by: Hyungwoo Yang <hwoo.yang@gmail.com>
Signed-off-by: Hyungwoo Yang <hyungwooy@nvidia.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This is mainly to get rid of the "vfp_pm_suspend: saving vfp state"
message flooding the kernel log by default.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The ARM PCS mandates that the length and stride bits of the fpscr are
cleared on entry to and return from a public interface. Although signal
handlers run asynchronously with respect to the interrupted function,
the handler itself expects to run as though it has been called like a
normal function.
This patch updates the state mirroring the VFP hardware before entry to
a signal handler so that it adheres to the PCS. Furthermore, we disable
VFP to ensure that we trap on any floating point operation performed by
the signal handler and synchronise the hardware appropriately. A check
is inserted after the signal handler to avoid redundant flushing if VFP
was not used.
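A hedged sketch of the entry-side preparation (the FPSCR mask names are assumed
to sit alongside the other FPSCR definitions):

        struct thread_info *thread = current_thread_info();
        struct vfp_hard_struct *hwstate = &thread->vfpstate.hard;

        /* Bring the software copy up to date with the hardware. */
        vfp_sync_hwstate(thread);

        /* The PCS requires LEN and STRIDE to be zero on entry to the handler. */
        hwstate->fpscr &= ~(FPSCR_LENGTH_MASK | FPSCR_STRIDE_MASK);

        /* Disable VFP in the saved state so any use by the handler traps. */
        hwstate->fpexc &= ~FPEXC_EN;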
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The user VFP state must be preserved (subject to ucontext modifications)
across invocation of a signal handler and this is currently handled by
vfp_{preserve,restore}_context in signal.c
Since this code requires intimate low-level knowledge of the VFP state,
this patch moves it into vfpmodule.c.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Disintegrate asm/system.h for ARM.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Russell King <linux@arm.linux.org.uk>
cc: linux-arm-kernel@lists.infradead.org
Avoid namespace conflicts with drivers over the CP15 definitions by
moving CP15 related prototypes and definitions to a private header
file.
Acked-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Stephen Warren <swarren@nvidia.com> [Tegra]
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com> [EP93xx]
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David Howells <dhowells@redhat.com>
Building these files does not reveal a hidden need for any of these
module.h includes. Since module.h brings in the whole kitchen
sink, it just needlessly adds 30k+ lines to the cpp burden.
There are probably lots more, but ARM files of mach-* and plat-*
don't get coverage via a simple yesconfig build. They will have
to be cleaned up and tested via using their respective configs.
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Function vfp_force_reload() clears vfp_current_hw_state, so
update the comment accordingly.
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
When the cpu is powered down in a low power mode, the vfp
registers may be reset.
This patch uses CPU_PM_ENTER and CPU_PM_EXIT notifiers to save
and restore the cpu's vfp registers.
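A hedged sketch of such a notifier; the handler body reuses the existing
suspend/resume helpers, and the exact wiring may differ:

        static int vfp_cpu_pm_notifier(struct notifier_block *self,
                                       unsigned long cmd, void *v)
        {
                switch (cmd) {
                case CPU_PM_ENTER:
                        vfp_pm_suspend();       /* save live context, disable VFP */
                        break;
                case CPU_PM_EXIT:
                        vfp_pm_resume();        /* re-enable cp10/cp11 access */
                        break;
                }
                return NOTIFY_OK;
        }

        static struct notifier_block vfp_cpu_pm_notifier_block = {
                .notifier_call = vfp_cpu_pm_notifier,
        };

        /* In vfp_init(): cpu_pm_register_notifier(&vfp_cpu_pm_notifier_block); */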
Signed-off-by: Colin Cross <ccross@android.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Tested-and-Acked-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Vishwanath BS <vishwanath.bs@ti.com>
Prevent a preemption event from causing the initialized VFP state to be
overwritten by ensuring that VFP hardware access is disabled before
initialization starts. We can then do this safely while still allowing
preemption to occur.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Fix a hole in the VFP thread migration. Let's define two threads.
Thread 1, we'll call 'interesting_thread' which is a thread which is
running on CPU0, using VFP (so vfp_current_hw_state[0] =
&interesting_thread->vfpstate) and gets migrated off to CPU1, where
it continues execution of VFP instructions.
Thread 2, we'll call 'new_cpu0_thread' which is the thread which takes
over on CPU0. This has also been using VFP, and last used VFP on CPU0,
but doesn't use it again.
The following code will be executed twice:
        cpu = thread->cpu;

        /*
         * On SMP, if VFP is enabled, save the old state in
         * case the thread migrates to a different CPU. The
         * restoring is done lazily.
         */
        if ((fpexc & FPEXC_EN) && vfp_current_hw_state[cpu]) {
                vfp_save_state(vfp_current_hw_state[cpu], fpexc);
                vfp_current_hw_state[cpu]->hard.cpu = cpu;
        }

        /*
         * Thread migration, just force the reloading of the
         * state on the new CPU in case the VFP registers
         * contain stale data.
         */
        if (thread->vfpstate.hard.cpu != cpu)
                vfp_current_hw_state[cpu] = NULL;
The first execution will be on CPU0 to switch away from 'interesting_thread'.
interesting_thread->cpu will be 0.
So, vfp_current_hw_state[0] points at interesting_thread->vfpstate.
The hardware state will be saved, along with the CPU number (0) that
it was executing on.
'thread' will be 'new_cpu0_thread' with new_cpu0_thread->cpu = 0.
Also, because it was executing on CPU0, new_cpu0_thread->vfpstate.hard.cpu = 0,
and so the thread migration check is not triggered.
This means that vfp_current_hw_state[0] remains pointing at interesting_thread.
The second execution will be on CPU1 to switch _to_ 'interesting_thread'.
So, 'thread' will be 'interesting_thread' and interesting_thread->cpu now
will be 1. The previous thread executing on CPU1 is not relevant to this
so we shall ignore that.
We get to the thread migration check. Here, we discover that
interesting_thread->vfpstate.hard.cpu = 0, yet interesting_thread->cpu is
now 1, indicating thread migration. We set vfp_current_hw_state[1] to
NULL.
So, at this point vfp_current_hw_state[] contains the following:
[0] = &interesting_thread->vfpstate
[1] = NULL
Our interesting thread now executes a VFP instruction, takes a fault
which loads the state into the VFP hardware. Through the assembly, we
now have:
[0] = &interesting_thread->vfpstate
[1] = &interesting_thread->vfpstate
CPU1 stops due to ptrace (and so saves its VFP state using the thread
switch code above), and CPU0 calls vfp_sync_hwstate().
        if (vfp_current_hw_state[cpu] == &thread->vfpstate) {
                vfp_save_state(&thread->vfpstate, fpexc | FPEXC_EN);
BANG, we corrupt interesting_thread's VFP state by overwriting the
more up-to-date state saved by CPU1 with the old VFP state from CPU0.
Fix this by ensuring that we have sane semantics for the various
state-describing variables:
1. vfp_current_hw_state[] points to the current owner of the context
information stored in each CPU's hardware, or NULL if that state
information is invalid.
2. thread->vfpstate.hard.cpu always contains the most recent CPU number
into which the state was loaded, or NR_CPUS if no CPU owns the state.
So, for a particular CPU to be a valid owner of the VFP state for a
particular thread t, two things must be true:
vfp_current_hw_state[cpu] == &t->vfpstate && t->vfpstate.hard.cpu == cpu.
and that is valid from the moment a CPU loads the saved VFP context
into the hardware. This gives clear and consistent semantics to
interpreting these variables.
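These rules condense into a single ownership test; a sketch (the helper name is
illustrative):

        static bool vfp_state_in_hw(unsigned int cpu, struct thread_info *thread)
        {
#ifdef CONFIG_SMP
                if (thread->vfpstate.hard.cpu != cpu)
                        return false;   /* state was last loaded elsewhere */
#endif
                return vfp_current_hw_state[cpu] == &thread->vfpstate;
        }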
This patch also fixes thread copying, ensuring that t->vfpstate.hard.cpu
is invalidated; otherwise a CPU may wrongly believe it was the last owner. The
hole can happen thus:
- thread1 runs on CPU2 using VFP, migrates to CPU3, exits and thread_info
freed.
- New thread allocated from a previously running thread on CPU2, reusing
memory for thread1 and copying vfp.hard.cpu.
At this point, the following are true:
new_thread1->vfpstate.hard.cpu == 2
&new_thread1->vfpstate == vfp_current_hw_state[2]
Lastly, this also addresses thread flushing in a similar way to thread
copying. The hole is:
- thread runs on CPU0, using VFP, migrates to CPU1 but does not use VFP.
- thread calls execve(), so thread flush happens, leaving
vfp_current_hw_state[0] intact. This vfpstate is memset to 0 causing
thread->vfpstate.hard.cpu = 0.
- thread migrates back to CPU0 before using VFP.
At this point, the following are true:
thread->vfpstate.hard.cpu == 0
&thread->vfpstate == vfp_current_hw_state[0]
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Rename the slightly confusing 'last_VFP_context' variable to be more
descriptive of what it actually is. This variable stores a pointer
to the current owner's vfpstate structure for the context held in the
VFP hardware.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The presence of VFPv4 cannot be detected simply by looking at the FPSID
subarchitecture field, as a value >= 2 signifies the architecture as
VFPv3 or later.
This patch reads from MVFR1 to check whether or not the fused multiply
accumulate instructions are supported. Since these are introduced with
VFPv4, this tells us what we need to know.
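Sketched, assuming MVFR1[31:28] advertises the fused multiply-accumulate
support described in the ARM ARM:

        /* VFPv4: fused multiply-accumulate support advertised in MVFR1. */
        if ((fmrx(MVFR1) & 0xf0000000) == 0x10000000)
                elf_hwcap |= HWCAP_VFPv4;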
Signed-off-by: Will Deacon <will.deacon@arm.com>
Convert some of the ARM architecture's common code to using
struct syscore_ops objects for power management instead of sysdev
classes and sysdevs.
This simplifies the code and reduces the kernel's memory footprint.
It also is necessary for removing sysdevs from the kernel entirely in
the future.
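For the VFP code the conversion amounts to something like the following sketch
(the handlers keep their existing names):

        static struct syscore_ops vfp_pm_syscore_ops = {
                .suspend        = vfp_pm_suspend,       /* int  (*)(void) */
                .resume         = vfp_pm_resume,        /* void (*)(void) */
        };

        static void vfp_pm_init(void)
        {
                register_syscore_ops(&vfp_pm_syscore_ops);
        }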
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
VFP registers d16-d31 are callee saved registers and must be preserved
during function calls, including fork(). The VFP configuration should
also be preserved. The patch copies the full VFP state to the child
process.
Reported-by: Paul Wright <paul.wright@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch adds THREAD_NOTIFY_COPY for calling registered handlers
during the copy_thread() function call. It also changes the VFP handler
to use a switch statement rather than if..else, and to ignore this event.
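A skeleton of the reworked notifier (illustrative; the flush/exit helpers shown
already exist, and the SWITCH case is elided):

        static int vfp_notifier(struct notifier_block *self, unsigned long cmd,
                                void *v)
        {
                struct thread_info *thread = v;

                switch (cmd) {
                case THREAD_NOTIFY_SWITCH:
                        /* lazy save/invalidate of the hardware context */
                        break;
                case THREAD_NOTIFY_FLUSH:
                        vfp_thread_flush(thread);
                        break;
                case THREAD_NOTIFY_EXIT:
                        vfp_thread_exit(thread);
                        break;
                /* THREAD_NOTIFY_COPY is deliberately not handled. */
                }

                return NOTIFY_DONE;
        }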
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Improve the documentation for the VFP hotplug notifier handler, so
that people better understand what's going on there and what has
been done for them.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
arch/arm/kernel/return_address.c:37:6: warning: symbol 'return_address' was not declared. Should it be static?
arch/arm/kernel/setup.c:76:14: warning: symbol 'processor_id' was not declared. Should it be static?
arch/arm/kernel/traps.c:259:1: warning: symbol 'die_lock' was not declared. Should it be static?
arch/arm/vfp/vfpmodule.c:156:6: warning: symbol 'vfp_raise_sigfpe' was not declared. Should it be static?
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We cannot guarantee that VFP will be enabled when CPU hotplug brings
a CPU back online from a reset state. Add a hotplug CPU notifier to
ensure that the VFP coprocessor access is enabled whenever a CPU comes
back online.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
MVFR0 and MVFR1 are only available starting with the ARM1136 r1p0 release
according to "B.5 VFP changes" in DDI0211F_arm1136_r1p0_trm.pdf. This is
also when the TLS register was added, so we can use HAS_TLS to test for
MVFR0 and MVFR1 as well.
Otherwise the VFPFMRX and VFPFMXR accesses fail and we get:
Internal error: Oops - undefined instruction: 0 [#1]
PC is at no_old_VFP_process+0x8/0x3c
LR is at __und_svc+0x48/0x80
...
Signed-off-by: Tony Lindgren <tony@atomide.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
From: Imre Deak <imre.deak@nokia.com>
Recently the UP versions of these functions were refactored and as
a side effect it became possible to call them for the current thread.
This isn't true for the SMP versions however, so fix this up.
Signed-off-by: Imre Deak <imre.deak@nokia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
A CPU has VFPv3 hardware if the FPSID[19:16] bits are 2 or more.
Currently Linux only checks for 3 or more.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
If we're only reading the VFP context via the ptrace call, there's
no need to invalidate the hardware context - we only need to do that
on PTRACE_SETVFPREGS. This allows more efficient monitoring of a
traced task.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The more I look at vfp_sync_state(), the more I believe it's trying
to do its job in a really obscure way.
Essentially, last_VFP_context[] tracks who owns the state in the VFP
hardware. If last_VFP_context[] is the context for the thread which
we're interested in, then the VFP hardware has context which is not
saved in the software state - so we need to bring the software state
up to date.
If last_VFP_context[] is for some other thread, we really don't care
what state the VFP hardware is in; it doesn't contain any information
pertinent to the thread we're trying to deal with - so don't touch
the hardware.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit c98929c07a removed the clearing of the FPSCR[31:28] bits from the
vfp_raise_exceptions() function, so the new bits are OR'ed with the old
FPSCR bits, leading to unexpected results (the original commit was
referring to the cumulative bits, FPSCR[4:0]).
Reported-by: Tom Hameenanttila <tmhameen@marvell.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This avoids races in the VFP code where the dead thread may have
state on another CPU. By moving this code to exit_thread(), we
will be running as the thread, and therefore be running on the
current CPU.
This means that we can ensure that the only local state is accessed
in the thread notifiers.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When the VFP notifier is called for flush_thread(), we may be
preemptible, meaning we might migrate to another CPU, which means
referencing the current CPU number without some form of locking is
invalid, and can cause data corruption.
In most cases this isn't a problem, since atomic notifiers run under
the RCU lock, which in most configurations results in preemption being
disabled - except when the preemptible tree-based RCU implementation is
selected.
Let's make it safe anyway.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This CPU generates synchronous VFP exceptions in a non-standard way -
the FPEXC.EX bit is set, but neither the FPSCR.IXE bit (as in VFP
subarchitecture 1) nor the FPEXC.DEX bit (as in VFP
subarchitecture 2) is set. The main problem is that the faulty instruction
(which needs to be emulated in software) will be restarted several times
(normally until a context switch disables the VFP). This patch ensures
that the VFP exception is treated as synchronous.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nicolas Pitre <nico@cam.org>
We've observed that ARM VFP state can be corrupted during VFP exception
handling when PREEMPT is enabled. The exact conditions are difficult
to reproduce but appear to occur during VFP exception handling when a
task causes a VFP exception which is handled via VFP_bounce and is then
preempted by yet another task which in turn causes yet another VFP
exception. Since the VFP_bounce code is not preempt safe, VFP state then
becomes corrupt. To prevent this, the patch disables preemption while
a VFP exception is being handled.
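Conceptually, the change has the effect of the following (the actual
modification is in the low-level support entry path):

        /* No preemption while the bounce/emulation code runs. */
        preempt_disable();
        VFP_bounce(trigger, fpexc, regs);
        preempt_enable();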
Signed-off-by: George G. Davis <gdavis@mvista.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The VFPv3D16 is a VFPv3 CPU configuration where only 16 double registers
are present, as in the VFPv2 configuration. This patch adds the
corresponding hwcap bits so that applications or debuggers have more
information about the supported features.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch adds ptrace support for setting and getting the VFP registers
using PTRACE_SETVFPREGS and PTRACE_GETVFPREGS. The user_vfp structure
defined in asm/user.h contains 32 double registers (to cover VFPv3 and
Neon hardware) and the FPSCR register.
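From the debugger side, usage looks roughly like this (illustrative: the
request number and struct layout mirror what this patch describes, redeclared
locally rather than pulled from kernel headers):

        #include <stdio.h>
        #include <sys/types.h>
        #include <sys/ptrace.h>

        #define PTRACE_GETVFPREGS 27            /* assumed request number */

        /* Mirrors the kernel's struct user_vfp: 32 d-regs plus FPSCR. */
        struct user_vfp {
                unsigned long long fpregs[32];
                unsigned long fpscr;
        };

        void dump_vfp(pid_t pid)
        {
                struct user_vfp vfp;

                /* The tracee must already be stopped under ptrace. */
                if (ptrace(PTRACE_GETVFPREGS, pid, NULL, &vfp) == 0)
                        printf("fpscr=%#lx d0=%#llx\n",
                               vfp.fpscr, vfp.fpregs[0]);
        }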
Cc: Paul Brook <paul@codesourcery.com>
Cc: Daniel Jacobowitz <dan@codesourcery.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When CONFIG_PM is selected, the VFP code does not install any handler
to save the VFP state of the current task on suspend, nor does it do
anything to restore the VFP after a resume.
On resume, the VFP will have been reset and the co-processor access
control registers are in an indeterminate state (very probably CP10 and
CP11, which the VFP uses, will have been disabled by the ARM core
reset). When this happens, resume will break as soon as it tries to
unfreeze the tasks and restart scheduling.
Add a sys device allowing us to hook the suspend call to save the
current thread's VFP state if the thread is using VFP, and a resume hook
which restores CP10/CP11 access and ensures the VFP is disabled so that
lazy swapping takes place on the next access.
Signed-off-by: Ben Dooks <ben-linux@fluff.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
It's never used and the comments refer to nonatomic and retry
interchangeably. So get rid of it.
Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
This patch allows the VFP support code to run correctly on CPUs
compatible with the common VFP subarchitecture specification (Appendix
B in the ARM ARM v7-A and v7-R edition). It implements support for VFP
subarchitecture 2 while being backwards compatible with
subarchitecture 1.
On VFP subarchitecture 1, the arithmetic exceptions are asynchronous
(or imprecise as described in the old ARM ARM) unless the FPSCR.IXE
bit is 1. The exceptional instructions can be read from FPINST and
FPINST2 registers. With VFP subarchitecture 2, the arithmetic
exceptions can also be synchronous and marked by the FPEXC.DEX bit
(the FPEXC.EX bit is cleared). CPUs implementing the synchronous
arithmetic exceptions don't have the FPINST and FPINST2 registers and
accessing them would trigger an undefined exception.
Note that the FPEXC.EX bit has an additional meaning on subarchitecture 1
- if it isn't set, there is no additional information in FPINST and
FPINST2 that needs to be saved at context switch or when lazy-loading
the VFP state of a different thread.
The patch also removes the clearing of the cumulative exception flags in
FPSCR when additional exceptions were raised. It is up to the user
application to clear these bits.
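A hedged sketch of how the bounce code can tell the two styles apart
(illustrative helper; the real logic also handles the non-standard
subarchitecture 1 case):

        static u32 vfp_bounce_trigger(u32 instr, u32 fpexc)
        {
                if (fpexc & FPEXC_EX) {
                        /* Subarch 1: the trigger was latched into FPINST. */
                        return fmrx(FPINST);
                }
                if (fpexc & FPEXC_DEX) {
                        /*
                         * Subarch 2: emulate the bouncing instruction itself;
                         * FPINST/FPINST2 do not exist and must not be read.
                         */
                        return instr;
                }
                /* Neither bit set: report an error, do not touch FPINST. */
                return 0;
        }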
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>