The PCIe host controller uses MSIs provided by GICv3 ITS. Enable it on
Thunder SoCs by adding an entry to DT.
Signed-off-by: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
Merge tag 'sirf-iobrg2regmap-for-4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/baohua/linux into fixes
Merge "CSR SiRFSoC rtc iobrg move to regmap for 4.2" from Barry Song:
move CSR rtc iobrg read/write API to be regmap
this moves to general APIs, and all drivers will be changed based
on it.
* tag 'sirf-iobrg2regmap-for-4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/baohua/linux:
ARM: prima2: move to use REGMAP APIs for rtciobrg
This patch adds a poweroff button device node to support the poweroff
feature on the APM X-Gene Mustang platform.
Signed-off-by: Y Vo <yvo@apm.com>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
Currently we enable debug exceptions before reading ESR_EL1 in both
el0_inv and el1_inv. If a debug exception is taken before we read
ESR_EL1, the value will have been corrupted.
As el*_inv is typically fatal, an intervening debug exception results in
misleading debug information being logged to the console, but is not
otherwise harmful.
As with the other entry paths, we can use the ESR_EL1 value stashed
earlier in the exception entry (in x25 for el0_sync{,_compat}, and x1
for el1_sync), giving us better error reporting in this case.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The "if (pass_size > buf->total)" can underflow so I have changed the
type of size and pass_size to unsigned to avoid this problem.
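As an illustration of this class of bug (a standalone sketch with
made-up names, not the radeon code): a signed size that goes negative
sails past an upper-bound check, while the same arithmetic on unsigned
types wraps to a huge value that the check rejects.

  #include <stdio.h>
  #include <stdint.h>

  struct buf { uint32_t total; };

  static int check_signed(int64_t start, int64_t end, struct buf *buf)
  {
      int64_t pass_size = end - start;    /* may go negative */

      if (pass_size > buf->total)
          return -1;                      /* rejected */
      return 0;                           /* accepted - bug when negative */
  }

  static int check_unsigned(uint64_t start, uint64_t end, struct buf *buf)
  {
      uint64_t pass_size = end - start;   /* wraps to a huge value */

      if (pass_size > buf->total)
          return -1;                      /* wrapped value is rejected */
      return 0;
  }

  int main(void)
  {
      struct buf b = { .total = 4096 };

      /* end < start: the signed check accepts, the unsigned one rejects */
      printf("signed:   %d\n", check_signed(100, 50, &b));
      printf("unsigned: %d\n", check_unsigned(100, 50, &b));
      return 0;
  }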
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Newer ASICs have more VRAM on average and allocating more GART as
well can have advantages. Also see commit edcd26e8.
Ideally, we should scale GART size based on actual VRAM size, but
that requires significant restructuring of initialization.
v2: extract small helper, apply to error paths
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Grigori Goronzy <greg@chown.ath.cx>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This was regressed by commit 39e7f6f8, although I don't know of any
actual issues caused by it.
The storage domain is read without TTM locking now, but the lock
never helped to prevent any races.
Reviewed-by: Christian König <christian.koenig@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Grigori Goronzy <greg@chown.ath.cx>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
We don't need to call the (expensive) radeon_bo_wait, checking the
fences via RCU is much faster. The reservation done by radeon_bo_wait
does not save us from any race conditions.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Grigori Goronzy <greg@chown.ath.cx>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This is a translation of the patch ...
"drm/radeon: Handle irqs only based on irq ring, not irq status regs."
... for the vblank irq handling, to fix the same problem described
in that patch on the new driver.
Only compile tested due to lack of suitable hw.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
CC: Michel Dänzer <michel.daenzer@amd.com>
CC: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Trying to resolve issues with missed vblanks and impossible
values inside delivered kms pageflip completion events showed
that radeon's irq handling sometimes doesn't handle valid irqs,
but silently skips them. This was observed for vblank interrupts.
Although those irqs have corresponding events queued in the gpu's
irq ring at time of interrupt, and therefore the corresponding
handling code gets triggered by these events, the handling code
sometimes silently skipped processing the irq. The reason for those
skips is that the handling code double-checks for each irq event if
the corresponding irq status bits in the irq status registers
are set. Sometimes those bits are not set at time of check
for valid irqs, maybe due to some hardware race on some setups?
The problem only seems to happen on some machine + card combos
sometimes, e.g., never happened during my testing of different PC
cards of the DCE-2/3/4 generation a year ago, but happens consistently
now on two different Apple Mac cards (RV730, DCE-3, Apple iMac and
Evergreen JUNIPER, DCE-4 in an Apple MacPro). It also doesn't happen
at each interrupt but only occasionally, every couple of
hundred or thousand vblank interrupts.
This results in XOrg warning messages like
"[ 7084.472] (WW) RADEON(0): radeon_dri2_flip_event_handler:
Pageflip completion event has impossible msc 420120 < target_msc 420121"
as well as skipped frames and problems for applications that
use kms pageflip events or vblank events, e.g., users of DRI2 and
DRI3/Present, Wayland's Weston compositor, etc. See also
https://bugs.freedesktop.org/show_bug.cgi?id=85203
After some talking to Alex and Michel, we decided to fix this
by turning the double-check for asserted irq status bits into a
warning. Whenever an irq event is queued in the IH ring, always
execute the corresponding interrupt handler. Still check the irq
status bits, but only to log a DRM_DEBUG message on a mismatch.
This fixed the problems reliably on both previously failing
cards, RV-730 dual-head tested on both crtcs (pipes D1 and D2)
and a triple-output Juniper HD-5770 card tested on all three
available crtcs (D1/D2/D3). The r600 and evergreen irq handling
is therefore tested, but the cik and si handling is only compile
tested due to lack of hw.
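The shape of the change, sketched here with hypothetical names
(my_irq_state, MY_VBLANK_STATUS and my_crtc_handle_vblank are
illustrative, not the actual radeon registers or helpers): the IH ring
event alone triggers the handler, and the status-bit test only feeds a
debug message.

  struct my_irq_state {
      u32 disp_int[6];                /* latched display interrupt status */
  };

  static void handle_vblank_event(struct my_irq_state *irq, int crtc)
  {
      if (!(irq->disp_int[crtc] & MY_VBLANK_STATUS)) {
          /* old code: bail out here and silently skip the irq */
          pr_debug("vblank event on crtc %d without status bit\n", crtc);
      }

      /* always process the event queued in the IH ring */
      my_crtc_handle_vblank(crtc);
      irq->disp_int[crtc] &= ~MY_VBLANK_STATUS;
  }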
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
CC: Michel Dänzer <michel.daenzer@amd.com>
CC: Alex Deucher <alexander.deucher@amd.com>
CC: <stable@vger.kernel.org> # v3.16+
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The "fix" in commit 0b08c5e594 ("audit: Fix check of return value of
strnlen_user()") didn't fix anything, it broke things. As reported by
Steven Rostedt:
"Yes, strnlen_user() returns 0 on fault, but if you look at what len is
set to, than you would notice that on fault len would be -1"
because we just subtracted one from the return value. So testing
against 0 doesn't test for a fault condition, it tests against a
perfectly valid empty string.
Also fix up the usual braindamage wrt using WARN_ON() inside a
conditional - make it part of the conditional and remove the explicit
unlikely() (which is already part of the WARN_ON*() logic, exactly so
that you don't have to write unreadable code).
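A simplified sketch of the resulting check (not the exact audit code;
the error path shown is illustrative): len holds the strnlen_user()
result minus one, so a fault is len == -1 while len == 0 is a valid
empty string, and the whole condition lives inside WARN_ON_ONCE().

  len = strnlen_user(p, MAX_ARG_STRLEN) - 1;
  /* a fault shows up as len == -1; len == 0 is just an empty string */
  if (WARN_ON_ONCE(len < 0 || len > MAX_ARG_STRLEN - 1))
      return -1;      /* bail out (error handling simplified) */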
Reported-and-tested-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Paul Moore <pmoore@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Since
commit 8c7b5ccb72
Author: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Date: Tue Apr 21 17:13:19 2015 +0300
drm/i915: Use atomic helpers for computing changed flags
we compute the plane state for a modeset before actually committing
any changes, which means crtc->active won't be correct yet. Looking at
future work in the modeset conversion targeting 4.3, the only place
where crtc_state->active isn't accurate is when disabling CRTCs other
than the one the modeset is for (when stealing connectors), which
isn't the case here. And that's also confirmed by an audit: we
unconditionally update crtc_state->active for the current pipe.
We also don't need to update any other plane check functions since we
only ever add the primary state to the modeset update right now.
Cc: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This was lost in
commit ce22dba92d
Author: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Date: Tue Apr 21 17:12:56 2015 +0300
drm/i915: Move toggling planes out of crtc enable/disable.
and we still need that crtc->active check since the overall modeset
flow doesn't yet take dpms state into account properly. Fixes WARNING
backtraces on at least bdw/hsw due to the ips disabling code being
upset about being run on a switched-off pipe.
We don't need a corresponding change on the enable side since with the
old setCrtc semantics we always force-enable the pipe after a modeset.
And the dpms function intel_crtc_control already checks for ->active.
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
When a cpu goes up some architectures (e.g. x86) have to walk the irq
space to set up the vector space for the cpu. While this needs extra
protection at the architecture level we can avoid a few race
conditions by preventing the concurrent allocation/free of irq
descriptors and the associated data.
When a cpu goes down it moves the interrupts which are targeted to
this cpu away by reassigning the affinities. While this happens
interrupts can be allocated and freed, which opens a can of race
conditions in the code which reassigns the affinities, because
interrupt descriptors might be freed underneath.
Example:

CPU1                              CPU2
cpu_up/down
 irq_desc = irq_to_desc(irq);
                                  remove_from_radix_tree(desc);
 raw_spin_lock(&desc->lock);
                                  free(desc);
We could protect the irq descriptors with RCU, but that would require
a full tree change of all accesses to interrupt descriptors. But
fortunately these kinds of race conditions are rather limited to a few
things like cpu hotplug. The normal setup/teardown is very well
serialized, so the simpler and obvious solution is:
Prevent allocation and freeing of interrupt descriptors across cpu
hotplug.
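A rough sketch of the idea (the hotplug callsite and
migrate_interrupts_away() are hypothetical stand-ins, not the exact
kernel hunk): a mutex serializing descriptor allocation/free is held
across the whole hotplug operation, so irq_to_desc() results stay
valid while affinities are reassigned.

  static DEFINE_MUTEX(sparse_irq_lock);   /* serializes alloc/free of descs */

  void irq_lock_sparse(void)
  {
      mutex_lock(&sparse_irq_lock);
  }

  void irq_unlock_sparse(void)
  {
      mutex_unlock(&sparse_irq_lock);
  }

  static int cpu_down_path(unsigned int cpu)
  {
      irq_lock_sparse();              /* no alloc/free of descriptors now */
      migrate_interrupts_away(cpu);   /* hypothetical teardown helper */
      irq_unlock_sparse();
      return 0;
  }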
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: xiao jin <jin.xiao@intel.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: Yanmin Zhang <yanmin_zhang@linux.intel.com>
Link: http://lkml.kernel.org/r/20150705171102.063519515@linutronix.de
When rotated and partial views were added, no one spotted the resume
path, which assumes only one GGTT VMA per object and hence now skips
rebinding the alternative views.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that we have a shared powerpc tree on kernel.org, point folks at that
as the primary place to look for powerpc stuff.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
It was discovered that if a process mmaped its problem state area it
was able to access one page more than expected, potentially allowing
it to access the problem state area of an unrelated process.
This was due to a simple off by one error in the mmap fault handler
introduced in 0712dc7e73 ("cxl: Fix issues
when unmapping contexts"), which is fixed in this patch.
Cc: stable@vger.kernel.org
Fixes: 0712dc7e73 ("cxl: Fix issues when unmapping contexts")
Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This patch makes the mmap call fail outright if the requested region is
larger than the problem state area assigned to the context so the error
is reported immediately rather than waiting for an attempt to access an
address out of bounds.
Although we never expect users to map more than the assigned problem
state area and are not aware of anyone doing this (other than for
testing), this does have the potential to break users if someone has
used a larger range regardless. I'm submitting it for consideration, but
if this change is not considered acceptable the previous patch is
sufficient to prevent access out of bounds without breaking anyone.
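A sketch of the up-front check described above (names such as
my_afu_mmap, my_context and psn_size are illustrative, not the exact
cxl source):

  static int my_afu_mmap(struct my_context *ctx, struct vm_area_struct *vma)
  {
      unsigned long len = vma->vm_end - vma->vm_start;

      if (len > ctx->psn_size)
          return -EINVAL;     /* fail at mmap time, not at fault time */

      /* ... proceed with the normal mapping setup ... */
      return 0;
  }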
Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Commit 46e12c07b3 (MIPS: O32 / 32-bit:
Always copy 4 stack arguments.) changed the O32 syscall handler to
always load four arguments from the userspace stack, even for syscalls
that require fewer or no arguments to be copied. This removed a large
table from kernel space and the need to maintain it. It appeared that
this was ok; however, the implementation chosen requires 16 bytes of
readable stack space above the user stack pointer.
It turned out that a few threading implementations munmap the user
stack before the thread exits, resulting in errors due to the
unreadable stack.
We now treat any failed load as if the loaded value was zero and let
the actual syscall deal with the situation.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
An incorrect register offset was used for the STiH407 clockgenC.
Signed-off-by: Pankaj Dev <pankaj.dev@st.com>
Signed-off-by: Gabriel Fernandez <gabriel.fernandez@linaro.org>
Fixes: 51306d56ba ("clk: st: STiH407: Support for clockgenC0")
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Pull ARM updates from Russell King:
"These are late by a week; they should have been merged during the
merge window, but unfortunately, the ARM kernel build/boot farms were
indicating random failures, and it wasn't clear whether the cause was
something in these changes or something during the merge window.
This is a set of merge window fixes with some documentation additions"
* 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm:
ARM: avoid unwanted GCC memset()/memcpy() optimisations for IO variants
ARM: pgtable: document mapping types
ARM: io: convert ioremap*() to functions
ARM: io: fix ioremap_wt() implementation
ARM: io: document ARM specific behaviour of ioremap*() implementations
ARM: fix lockdep unannotated irqs-off warning
ARM: 8397/1: fix vdsomunge not to depend on glibc specific error.h
ARM: add helpful message when truncating physical memory
ARM: add help text for HIGHPTE configuration entry
ARM: fix DEBUG_SET_MODULE_RONX build dependencies
ARM: 8396/1: use phys_addr_t in pfn_to_kaddr()
ARM: 8394/1: update memblock limit after mapping lowmem
ARM: 8393/1: smp: Fix suspicious RCU usage with ipi tracepoints
In mei_nfc_host_exit, mei_cl_remove_device cannot be called under the
device mutex, as the device removal flow invokes the device driver's
remove handler, which in turn calls mei_cl_disable_device, which
naturally acquires the device mutex.
Also remove mei_cl_bus_remove_devices, which has the same issue but is
never executed, as currently the only device on the mei client bus is
NFC and a new device cannot easily be added until the bus revamp is
completed.
This fixes a regression caused by commit be9b720a0c ("mei_phy: move all
nfc logic from mei driver to nfc").
Prior to this change the nfc driver's remove handler called a no-op
disable function, while the actual nfc device was disabled directly
from the mei driver.
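The shape of the fix, sketched with simplified names (my_nfc_host_exit,
my_cl_remove_device and the surrounding structures are illustrative,
not the mei source): the client device is removed only after the mutex
has been dropped.

  static void my_nfc_host_exit(struct my_dev *dev)
  {
      struct my_cl_device *cldev;

      mutex_lock(&dev->device_lock);
      cldev = dev->nfc_cldev;         /* detach state that needs the lock */
      dev->nfc_cldev = NULL;
      mutex_unlock(&dev->device_lock);

      /* the driver's remove handler may take device_lock again */
      if (cldev)
          my_cl_remove_device(cldev);
  }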
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Samuel Ortiz <sameo@linux.intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Making tick_broadcast_oneshot_control() independent from
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST broke the build for
CONFIG_GENERIC_CLOCKEVENTS=n because the function is not defined
there.
Provide a proper stub inline.
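The shape of such a stub (a sketch, not necessarily the exact hunk):
with CONFIG_GENERIC_CLOCKEVENTS=n the call compiles away to "nothing
to do, not busy".

  #ifdef CONFIG_GENERIC_CLOCKEVENTS
  extern int tick_broadcast_oneshot_control(enum tick_broadcast_state state);
  #else
  static inline int tick_broadcast_oneshot_control(enum tick_broadcast_state state)
  {
      return 0;       /* no broadcast handling, never busy */
  }
  #endif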
Fixes: f32dd11705 'tick/broadcast: Make idle check independent from mode and config'
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
CC arch/mips/loongson64/lemote-2f/clock.o
/home/ralf/src/linux/linux-mips/arch/mips/loongson64/lemote-2f/clock.c:18:40: fatal error: asm/mach-loongson/loongson.h: No such file or directory
#include <asm/mach-loongson/loongson.h>
^
compilation terminated.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The firmware only reports hover condition while the very first contact is
approaching the surface; the hover is not reported for the subsequent
contacts. Therefore we should not be using ABS_MT_DISTANCE to report hover
but rather its single-touch counterpart ABS_DISTANCE.
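A minimal sketch of the reporting (simplified; dev and hover come from
the driver's own state, which is not shown here):

  /* setup: advertise single-touch hover distance */
  input_set_abs_params(dev, ABS_DISTANCE, 0, 1, 0, 0);

  /* per report: hover is a property of the first approaching contact */
  input_report_abs(dev, ABS_DISTANCE, hover ? 1 : 0);
  input_sync(dev);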
Signed-off-by: Duson Lin <dusonlin@emc.com.tw>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Andriy reported that on a virtual machine the warning about negative
expiry time in the clock events programming code triggered:
hpet: hpet0 irq 40 for MSI
hpet: hpet1 irq 41 for MSI
Switching to clocksource hpet
WARNING: at kernel/time/clockevents.c:239
[<ffffffff810ce6eb>] clockevents_program_event+0xdb/0xf0
[<ffffffff810cf211>] tick_handle_periodic_broadcast+0x41/0x50
[<ffffffff81016525>] timer_interrupt+0x15/0x20
When the second hpet is installed as a per cpu timer the broadcast
event is no longer required and the broadcast device is stopped, which
sets its next_evt to KTIME_MAX.
If after that a spurious interrupt happens on the broadcast device,
then the current code blindly handles it and tries to reprogram the
broadcast device afterwards, which adds the period to
next_evt. KTIME_MAX + period results in a negative expiry value
causing the WARN_ON in the clockevents code to trigger.
Add a proper check for the state of the broadcast device into the
interrupt handler and return if the interrupt is spurious.
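A sketch of the guard (simplified; my_handle_periodic_broadcast is an
illustrative name, not the exact function changed): a stopped
broadcast device must not be handled or reprogrammed from
next_evt == KTIME_MAX.

  static void my_handle_periodic_broadcast(struct clock_event_device *dev)
  {
      /* spurious interrupt from a shut down broadcast device: ignore it */
      if (clockevent_state_shutdown(dev))
          return;

      /* ... normal periodic broadcast handling and reprogramming ... */
  }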
[ Folded in pointer fix from Sudeep ]
Reported-by: Andriy Gapon <avg@FreeBSD.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/20150705205221.802094647@linutronix.de
If the current cpu is the one which has the hrtimer based broadcast
queued then we better return busy immediately instead of going through
loops and hoops to figure that out.
[ Split out from a larger combo patch ]
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Suzuki Poulose <Suzuki.Poulose@arm.com>
Cc: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Cc: Catalin Marinas <Catalin.Marinas@arm.com>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1507070929360.3916@nanos
Tell the idle code not to go deep if the broadcast IPI is about to
arrive.
[ Split out from a larger combo patch ]
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Suzuki Poulose <Suzuki.Poulose@arm.com>
Cc: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Cc: Catalin Marinas <Catalin.Marinas@arm.com>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1507070929360.3916@nanos
If the system is in periodic mode and the broadcast device is hrtimer
based, return busy as we have no proper handling for this.
[ Split out from a larger combo patch ]
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Suzuki Poulose <Suzuki.Poulose@arm.com>
Cc: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Cc: Catalin Marinas <Catalin.Marinas@arm.com>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1507070929360.3916@nanos
We need to check more than the periodic mode for proper operation in
all runtime combinations. To avoid code duplication move the check
into the enter state handling.
No functional change.
[ Split out from a larger combo patch ]
Reported-and-tested-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Suzuki Poulose <Suzuki.Poulose@arm.com>
Cc: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Cc: Catalin Marinas <Catalin.Marinas@arm.com>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1507070929360.3916@nanos
Add a check for an installed broadcast device to the oneshot control
function and return busy if there is none.
[ Split out from a larger combo patch ]
Reported-and-tested-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Suzuki Poulose <Suzuki.Poulose@arm.com>
Cc: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Cc: Catalin Marinas <Catalin.Marinas@arm.com>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1507070929360.3916@nanos
Currently the broadcast busy check, which prevents the idle code from
going into deep idle, works only in one shot mode.
If NOHZ and HIGHRES are off (config or command line) there is no
sanity check at all, so under certain conditions cpus are allowed to
go into deep idle, where the local timer stops, and are not woken up
again because there is no broadcast timer installed or a hrtimer based
broadcast device is not evaluated.
Move tick_broadcast_oneshot_control() into the common code and provide
proper subfunctions for the various config combinations.
The common check in tick_broadcast_oneshot_control() is for the C3STOP
misfeature flag of the local clock event device. If it's not set, idle
can proceed. If set, further checks are necessary.
Provide checks for the trivial cases:
- If broadcast is disabled in the config, then return busy
- If oneshot mode (NOHZ/HIGHRES) is disabled in the config, return
  busy if the broadcast device is hrtimer based.
- If oneshot mode is enabled in the config, call the original
  tick_broadcast_oneshot_control() function (see the sketch below).
  That function needs extra checks which will be implemented in
  separate patches.
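A rough sketch of the resulting dispatch (collapsed into one function
for illustration; tick_broadcast_device_is_hrtimer() is a hypothetical
helper and the real code spreads these cases across headers):

  int tick_broadcast_oneshot_control(enum tick_broadcast_state state)
  {
      struct tick_device *td = this_cpu_ptr(&tick_cpu_device);

      if (!(td->evtdev->features & CLOCK_EVT_FEAT_C3STOP))
          return 0;       /* local timer keeps ticking, idle may proceed */

  #ifndef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
      return -EBUSY;      /* no broadcast support compiled in */
  #elif !defined(CONFIG_TICK_ONESHOT)
      /* periodic only: busy if the broadcast device is hrtimer based */
      return tick_broadcast_device_is_hrtimer() ? -EBUSY : 0;
  #else
      return __tick_broadcast_oneshot_control(state);
  #endif
  }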
[ Split out from a larger combo patch ]
Reported-and-tested-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Suzuki Poulose <Suzuki.Poulose@arm.com>
Cc: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Cc: Catalin Marinas <Catalin.Marinas@arm.com>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1507070929360.3916@nanos
The broadcast code shuts down the local clock event unconditionally
even if no broadcast device is installed or if the broadcast device is
hrtimer based.
Add proper sanity checks.
[ Split out from a larger combo patch ]
Reported-and-tested-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Suzuki Poulose <Suzuki.Poulose@arm.com>
Cc: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Cc: Catalin Marinas <Catalin.Marinas@arm.com>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1507070929360.3916@nanos
The hrtimer based broadcast vehicle can cause a hrtimer recursion
which went unnoticed until we changed the hrtimer expiry code to keep
track of the currently running timer.
local_timer_interrupt()
  local_handler()
    hrtimer_interrupt()
      expire_hrtimers()
        broadcast_hrtimer()
          send_ipis()
            local_handler()
              hrtimer_interrupt()
              ....
Solution is simple: Prevent the local handler call from the broadcast
code when the broadcast 'device' is hrtimer based.
[ Split out from a larger combo patch ]
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Suzuki Poulose <Suzuki.Poulose@arm.com>
Cc: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Cc: Catalin Marinas <Catalin.Marinas@arm.com>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1507070929360.3916@nanos
For those parts of the arm64 ACPI code that need to check GICC subtables
in the MADT, use the new BAD_MADT_GICC_ENTRY macro instead of the previous
BAD_MADT_ENTRY. The new macro takes into account differences in the size
of the GICC subtable that the old macro did not; this caused failures even
though the subtable entries are valid.
Fixes: aeb823bbac ("ACPICA: ACPI 6.0: Add changes for FADT table.")
Signed-off-by: Al Stone <al.stone@linaro.org>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The BAD_MADT_ENTRY() macro is designed to work for all of the subtables
of the MADT. In the ACPI 5.1 version of the spec, the struct for the
GICC subtable (struct acpi_madt_generic_interrupt) is 76 bytes long; in
ACPI 6.0, the struct is 80 bytes long. But, there is only one definition
in ACPICA for this struct -- and that is the 6.0 version. Hence, when
BAD_MADT_ENTRY() compares the struct size to the length in the GICC
subtable, it fails if 5.1 structs are in use, and there are systems in
the wild that have them.
This patch adds the BAD_MADT_GICC_ENTRY() macro, which checks the GICC
subtable only, accounting for the difference in specification versions
that are possible. The BAD_MADT_ENTRY() will continue to work as is for
all other MADT subtables.
This code is being added to an arm64 header file since that is currently
the only architecture using the GICC subtable of the MADT. As a GIC is
specific to ARM, it is also unlikely the subtable will be used elsewhere.
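A sketch of what such a macro can look like (illustrative, not
necessarily the exact version merged; the 76/80 byte lengths are the
ones quoted above):

  #define ACPI_MADT_GICC_LENGTH   \
      (acpi_gbl_FADT.header.revision < 6 ? 76 : 80)

  #define BAD_MADT_GICC_ENTRY(entry, end)                                 \
      (!(entry) || (unsigned long)(entry) + sizeof(*(entry)) > (end) ||   \
       (entry)->header.length != ACPI_MADT_GICC_LENGTH)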
Fixes: aeb823bbac ("ACPICA: ACPI 6.0: Add changes for FADT table.")
Signed-off-by: Al Stone <al.stone@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: "Rafael J. Wysocki" <rjw@rjwysocki.net>
[catalin.marinas@arm.com: extra brackets around macro arguments]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
If dev_pm_attach_wake_irq() fails, the device's power.wakeirq field
should not be set to point to the struct wake_irq passed to that
function, as that object will be freed going forward.
For this reason, make dev_pm_attach_wake_irq() first call
device_wakeup_attach_irq() and only set the device's power.wakeirq
field if that's successful.
That requires device_wakeup_attach_irq() to be called under the
device's power.lock lock, but since dev_pm_attach_wake_irq() is
the only caller of it, the requisite changes are easy to make.
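A simplified sketch of the resulting ordering (error handling trimmed;
not the exact hunk):

  static int dev_pm_attach_wake_irq_sketch(struct device *dev,
                                           struct wake_irq *wirq)
  {
      unsigned long flags;
      int err;

      spin_lock_irqsave(&dev->power.lock, flags);
      err = device_wakeup_attach_irq(dev, wirq);
      if (!err)
          dev->power.wakeirq = wirq;  /* publish only on success */
      spin_unlock_irqrestore(&dev->power.lock, flags);

      return err;
  }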
Fixes: 4990d4fe32 (PM / Wakeirq: Add automated device wake IRQ handling)
Reported-by: Felipe Balbi <balbi@ti.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
irq_data is protected by irq_desc->lock, so retrieving the irq chip
from irq_data outside the lock is racy vs. a concurrent update. Move
it into the lock held region.
While at it add a comment why the vector walk does not require
vector_lock.
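A fragment-style sketch of the pattern (irq and mask are assumed from
the surrounding function; simplified, not the exact x86 hunk):

  struct irq_desc *desc = irq_to_desc(irq);
  struct irq_data *data;
  struct irq_chip *chip;

  raw_spin_lock(&desc->lock);
  data = irq_desc_get_irq_data(desc);
  chip = irq_data_get_irq_chip(data);     /* read the chip under the lock */
  if (chip->irq_set_affinity)
      chip->irq_set_affinity(data, mask, true);
  raw_spin_unlock(&desc->lock);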
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: xiao jin <jin.xiao@intel.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: Yanmin Zhang <yanmin_zhang@linux.intel.com>
Link: http://lkml.kernel.org/r/20150705171102.331320612@linutronix.de
It's unsafe to examine fields in the irq descriptor w/o holding the
descriptor lock. Add proper locking.
While at it add a comment why the vector check can run lockless.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: xiao jin <jin.xiao@intel.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: Yanmin Zhang <yanmin_zhang@linux.intel.com>
Link: http://lkml.kernel.org/r/20150705171102.236544164@linutronix.de
Jin debugged a nasty cpu hotplug race which results in leaking an irq
vector on the newly hotplugged cpu.
cpu N                               cpu M
native_cpu_up                       device_shutdown
  do_boot_cpu                         free_msi_irqs
  start_secondary                     arch_teardown_msi_irqs
    smp_callin                          default_teardown_msi_irqs
      setup_vector_irq                    arch_teardown_msi_irq
        __setup_vector_irq                  native_teardown_msi_irq
          lock(vector_lock)                   destroy_irq
          install vectors
          unlock(vector_lock)
                                              lock(vector_lock)
--->                                            __clear_irq_vector
                                              unlock(vector_lock)
    lock(vector_lock)
    set_cpu_online
    unlock(vector_lock)
This leaves the irq vector(s) which are torn down on CPU M stale in
the vector array of CPU N, because CPU M does not see CPU N online
yet. There is a similar issue with concurrent newly setup interrupts.
The alloc/free protection of irq descriptors does not prevent the
above race, because it merely prevents interrupt descriptors from
going away or changing concurrently.
Prevent this by moving the call to setup_vector_irq() into the
vector_lock held region which protects set_cpu_online():
cpu N                               cpu M
native_cpu_up                       device_shutdown
  do_boot_cpu                         free_msi_irqs
  start_secondary                     arch_teardown_msi_irqs
    smp_callin                          default_teardown_msi_irqs
      lock(vector_lock)                   arch_teardown_msi_irq
      setup_vector_irq()
        __setup_vector_irq                  native_teardown_msi_irq
        install vectors                       destroy_irq
      set_cpu_online
      unlock(vector_lock)
                                              lock(vector_lock)
                                                __clear_irq_vector
                                              unlock(vector_lock)
So cpu M either sees cpu N online before clearing the vector, or
cpu N installs the vectors after cpu M has cleared it.
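A sketch of the ordering on the booting cpu (simplified from the
secondary startup path; surrounding init is omitted):

  static void start_secondary_sketch(void)
  {
      /* ... early per-cpu init ... */

      lock_vector_lock();
      setup_vector_irq(smp_processor_id());
      set_cpu_online(smp_processor_id(), true);
      unlock_vector_lock();

      /* ... continue bringup, enable interrupts, enter idle ... */
  }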
Reported-by: xiao jin <jin.xiao@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: Yanmin Zhang <yanmin_zhang@linux.intel.com>
Link: http://lkml.kernel.org/r/20150705171102.141898931@linutronix.de
Currently the kernel API AFU dev refcounting is done on context start and stop.
This patch moves this refcounting to context init and release, bringing it
inline with how the userspace API does it.
Without this we've seen the refcounting on the AFU get out of whack between the
user and kernel API usage. This causes the AFU structures to be freed when
they are actually still in use.
This fixes some kref warnings we've been seeing and spurious ErrIVTE IRQs.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Acked-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>