As of today we use a hardcoded MCIP debug mask, so if we launch the
kernel via a debugger and kick fewer cores than the HW has, all CPUs
hang at the moment the MCIP debug mask is set up.
So update the MCIP debug mask when each new CPU comes online, instead
of using a hardcoded one.
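A minimal sketch of the idea (illustrative only; the command and helper names follow the mcip.c conventions, but the exact sequence here is an assumption, not the verbatim driver change):

/* Keep a software copy of the "halt these cores together" mask and push it
 * to MCIP each time another CPU comes online (sketch, not the driver code). */
static cpumask_t mcip_debug_mask;

static void mcip_update_debug_halt_mask(int cpu)
{
	unsigned long flags;
	u32 mask;

	raw_spin_lock_irqsave(&mcip_lock, flags);	/* driver-private lock */

	cpumask_set_cpu(cpu, &mcip_debug_mask);
	mask = cpumask_bits(&mcip_debug_mask)[0];	/* <= 32 cores assumed */

	__mcip_cmd_data(CMD_DEBUG_SET_SELECT, 0, mask);	/* cores to observe */
	__mcip_cmd_data(CMD_DEBUG_SET_MASK, mask, mask);	/* halt them together */

	raw_spin_unlock_irqrestore(&mcip_lock, flags);
}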
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
In SMP systems, GFRC is used for the clocksource. However, by default the
counter keeps running even when a core is halted (say when debugging via a
JTAG debugger). This confuses Linux timekeeping and triggers a false RCU
stall splat such as the one below:
| [ARCLinux]# while true; do ./shm_open_23-1.run-test ; done
| Running with 1000 processes for 1000 objects
| hrtimer: interrupt took 485060 ns
|
| create_cnt: 1000
| Running with 1000 processes for 1000 objects
| [ARCLinux]# INFO: rcu_preempt self-detected stall on CPU
| 2-...: (1 GPs behind) idle=a01/1/0 softirq=135770/135773 fqs=0
| INFO: rcu_preempt detected stalls on CPUs/tasks:
| 0-...: (1 GPs behind) idle=71e/0/0 softirq=135264/135264 fqs=0
| 2-...: (1 GPs behind) idle=a01/1/0 softirq=135770/135773 fqs=0
| 3-...: (1 GPs behind) idle=4e0/0/0 softirq=134304/134304 fqs=0
| (detected by 1, t=13648 jiffies, g=31493, c=31492, q=1)
Starting from ARC HS v3.0 it is possible to tie GFRC to the state of up to 4
ARC cores with the help of GFRC's CORE register, where we set a mask for the
cores whose state we need to rely on.
We update the CPU mask every time a new CPU comes online, instead of using a
hardcoded one or a mask generated from "possible_cpus", as we want it set
correctly even if we run the kernel on HW which has fewer cores than expected
(or we launch the kernel via a debugger and kick fewer cores than the HW
has).
Note that GFRC halts when all cores have halted and thus relies on
programming of Inter-Core-dEbug register to halt all cores when one
halts.
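A minimal sketch of such a per-cpu hook, loosely following the mcip.c conventions (the command names and the read-modify-write sequence are assumptions, not the verbatim driver code):

/* Add the newly onlined core to GFRC's CORE mask so the counter halts
 * whenever the cores we actually run on are halted (sketch only). */
static void mcip_update_gfrc_halt_mask(int cpu)
{
	unsigned long flags;
	u32 gfrc_halt_mask;

	raw_spin_lock_irqsave(&mcip_lock, flags);

	__mcip_cmd(CMD_GFRC_READ_CORE, 0);		/* fetch current mask */
	gfrc_halt_mask = read_aux_reg(ARC_REG_MCIP_READBACK);
	gfrc_halt_mask |= BIT(cpu);			/* include this core */
	__mcip_cmd_data(CMD_GFRC_SET_CORE, 0, gfrc_halt_mask);

	raw_spin_unlock_irqrestore(&mcip_lock, flags);
}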
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
[vgupta: rewrote changelog]
There is no need to specify a list of interrupts in the declaration
of the IDU interrupt controller anymore, since the kernel can obtain
the number of supported interrupts from the build register.
Also delete support for the second parameter of devices which
are connected to the IDU, because it is not used anywhere.
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This enhancement is needed to allow masking all available common interrupts
in the IDU interrupt controller at boot time, since the kernel can
discover the number of them from the build register. Also, there is now
no need to specify in the device tree the list of core interrupts
used by the IDU. E.g. before:
idu_intc: idu-interrupt-controller {
	compatible = "snps,archs-idu-intc";
	interrupt-controller;
	interrupt-parent = <&core_intc>;
	#interrupt-cells = <2>;
	interrupts = <24 25 26 27 28 29 30 31>;
};
and after:
idu_intc: idu-interrupt-controller {
	compatible = "snps,archs-idu-intc";
	interrupt-controller;
	interrupt-parent = <&core_intc>;
	#interrupt-cells = <2>;
};
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Ignore the value of the interrupt distribution mode for common interrupts in
the IDU, since setting the affinity using a value from the Device Tree is
deprecated in ARC. Originally this was done in the idu_irq_xlate() function,
which is semantically wrong and does not guarantee that an affinity value
will be set properly. The idu_irq_enable() function is a better place for the
initialization of common interrupts.
By default, send all common interrupts to all available online CPUs (a
sketch follows the list below). The affinity of common interrupts in the IDU
must be set manually, since in some cases the kernel will not call
irq_set_affinity() by itself:
1. When the kernel is not configured with SMP support.
2. When the kernel is configured with SMP support, but the upper interrupt
   controllers do not support setting the affinity and cannot propagate it
   to the IDU.
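A minimal sketch of the intended idu_irq_enable() shape (assumed, not the verbatim patch; idu_irq_unmask() stands for the existing unmask hook in mcip.c):

/* Seed the affinity of a newly enabled common interrupt with every online
 * CPU, since irq_set_affinity() may never be called for it otherwise. */
static void idu_irq_enable(struct irq_data *data)
{
	struct cpumask online;

	cpumask_copy(&online, cpu_online_mask);
	irq_set_affinity(data->irq, &online);

	idu_irq_unmask(data);		/* existing unmask hook in mcip.c */
}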
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
It is necessary to call the entry/exit functions of the parent interrupt
controller for proper masking/unmasking of the interrupt lines.
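A minimal sketch of such a cascade handler, assuming the generic chained_irq_enter()/chained_irq_exit() helpers are the entry/exit functions meant here (the hwirq translation via a driver-private idu_first_hwirq base is illustrative):

static void idu_cascade_isr(struct irq_desc *desc)
{
	struct irq_domain *idu_domain = irq_desc_get_handler_data(desc);
	struct irq_chip *core_chip = irq_desc_get_chip(desc);
	irq_hw_number_t core_hwirq = irqd_to_hwirq(irq_desc_get_irq_data(desc));
	irq_hw_number_t idu_hwirq = core_hwirq - idu_first_hwirq;	/* driver-private base */

	chained_irq_enter(core_chip, desc);	/* mask/ack the parent line */
	generic_handle_irq(irq_find_mapping(idu_domain, idu_hwirq));
	chained_irq_exit(core_chip, desc);	/* unmask/eoi the parent line */
}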
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Also remove the dependency on ARCv2, to increase compile coverage for
!ARCV2 builds.
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
ARC Linux uses 2 distribution modes for common interrupts: round-robin
mode (IDU_M_DISTRI_RR) and a simple destination mode (IDU_M_DISTRI_DEST).
The first one is used when more than 1 core may handle a common interrupt,
and the second one when only 1 core may handle a common interrupt.
However idu_irq_set_affinity() always sets IDU_M_DISTRI_RR for all affinity
values, even though there is no sense in setting such a mode if only 1 core
must handle a common interrupt.
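A sketch of the intended shape of idu_irq_set_affinity() (assumed; idu_set_dest()/idu_set_mode() stand in for the driver-internal programming helpers):

static int idu_irq_set_affinity(struct irq_data *data,
				const struct cpumask *cpumask, bool force)
{
	struct cpumask online;
	unsigned int destination_bits;
	unsigned int distribution_mode;

	if (!cpumask_and(&online, cpumask, cpu_online_mask))
		return -EINVAL;

	destination_bits = cpumask_bits(&online)[0];

	/* Exactly one target CPU: plain destination mode, else round-robin */
	if (ffs(destination_bits) == fls(destination_bits))
		distribution_mode = IDU_M_DISTRI_DEST;
	else
		distribution_mode = IDU_M_DISTRI_RR;

	idu_set_dest(irqd_to_hwirq(data), destination_bits);
	idu_set_mode(irqd_to_hwirq(data), IDU_M_TRIG_LEVEL, distribution_mode);

	return IRQ_SET_MASK_OK;
}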
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This came up when reviewing code to address the missing IRQ affinity
setting on the AXS103 platform and/or implementing hierarchical IRQ domains.
- smp_ipi_irq_setup() callers pass a hwirq, but it in turn calls
  request_percpu_irq() which expects a Linux virq. So invoke
  irq_find_mapping() to do the conversion (and make this explicit in the
  code by renaming the args appropriately; see the sketch below)
- idu_of_init()/idu_cascade_isr() were similarly using a Linux virq where
  a hwirq is expected, so do the conversion using the irqd_to_hwirq() helper
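A sketch of the first conversion (assumed shape; do_IPI and the percpu cookie follow the existing smp.c names, the default-domain lookup is the illustrative part):

static DEFINE_PER_CPU(int, ipi_dev);

/* Callers hand us a hwirq, but request_percpu_irq() wants the Linux virq,
 * so translate through the (default) irq domain first. */
static int smp_ipi_irq_setup(int cpu, irq_hw_number_t hwirq)
{
	int virq = irq_find_mapping(NULL, hwirq);	/* NULL = default domain */

	if (!virq)
		return -EINVAL;

	return request_percpu_irq(virq, do_IPI, "IPI Interrupt", &ipi_dev);
}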
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
[vgupta: made the changelog a bit more concise]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The IDU intc is technically part of MCIP (Multi-core IP), hence it was
historically only available in an SMP hardware build (and thus only in an
SMP kernel build). Now that the hardware restriction has been lifted, a UP
kernel needs to support it as well.
This requires breaking mcip.c into parts which are strictly SMP (inter-core
interrupts) and the IDU, which in reality is just another intc and thus has
no bearing on SMP.
This change allows the IDU in UP builds, and with a suitable device tree we
can have the cascaded intc system:
ARCv2 core intc <---> ARCv2 IDU intc <---> peripherals
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This will be needed for switching to a linear irq domain, as
irq_create_mapping() called by the interrupt code needs the IRQ numbers,
in addition to the existing usage in mcip.c for requesting the irq.
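A small sketch of why the numbers are needed: with a linear domain, a hwirq must be mapped explicitly before it can be requested (the IPI_IRQ value and the function name here are illustrative):

#define IPI_IRQ		19	/* example hwirq, shared with mcip.c */

static void __init arc_map_ipi(struct irq_domain *root_domain)
{
	unsigned int virq = irq_create_mapping(root_domain, IPI_IRQ);

	if (virq)
		pr_info("IPI hwirq %d mapped to Linux virq %u\n", IPI_IRQ, virq);
}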
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
- Remove the explicit clocksource setup and let it be done by the OF
  framework by defining CLOCKSOURCE_OF_DECLARE() for the various timers
  (see the sketch after this list)
- This allows multiple clocksources to potentially be registered
  simultaneously: previously we could only do one, as all of them shared the
  same arc_counter_setup() routine for registration
- Setup routines also ensure that the underlying timer actually exists.
- Remove some of the panic() calls if the underlying timer is NOT detected,
  as a fallback clocksource might still be available:
  1. If GFRC doesn't exist, the jiffies clocksource gets registered anyway
  2. If RTC doesn't exist, TIMER1 can take over (as it is always present)
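A sketch of the registration idiom for one of the timers (assumed shape; arc_counter_gfrc and arc_timer_freq stand for the driver's existing clocksource object and rate, and newer kernels spell the macro TIMER_OF_DECLARE()):

static int __init arc_cs_setup_gfrc(struct device_node *node)
{
	/* Bail gracefully (no panic) if the block isn't present: another
	 * clocksource (TIMER1 or even jiffies) can still take over. */
	if (!cpuinfo_arc700[smp_processor_id()].extn.gfrc)
		return -ENXIO;

	return clocksource_register_hz(&arc_counter_gfrc, arc_timer_freq);
}
CLOCKSOURCE_OF_DECLARE(arc_gfrc, "snps,archs-timer-gfrc", arc_cs_setup_gfrc);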
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
A previous commit ("ARC: SMP: No need for CONFIG_ARC_IPI_DBG") removed
the Kconfig option ARC_IPI_DBG. Remove the last reference to this
option.
Signed-off-by: Valentin Rothberg <valentinrothberg@gmail.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
ARConnect/MCIP IPI sending has a retry-wait loop in case the caller had not
seen a previous such interrupt. It turns out that it is not needed at all:
Linux cross-core calling allows coalescing multiple IPIs to the same
receiver - it is fine as long as there is at least one.
This logic is already built into the upper layer, at a higher level of
abstraction. ipi_send_msg_one() sets the actual msg payload, but it only
triggers the MCIP IPI if the msg holder was empty (using an atomic
set-new-and-get-old construct). Thus it is unlikely that the retry-wait
looping was ever exercised at all.
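A sketch of the upper-layer coalescing referred to above (assumed shape, mirroring the smp.c idiom; ipi_data and plat_smp_ops follow the existing names):

static DEFINE_PER_CPU(unsigned long, ipi_data);

static void ipi_send_msg_one(int cpu, enum ipi_msg_type msg)
{
	unsigned long *ipi_data_ptr = per_cpu_ptr(&ipi_data, cpu);
	unsigned long old, new;

	do {
		new = old = *ipi_data_ptr;
		new |= 1U << msg;			/* record the message */
	} while (cmpxchg(ipi_data_ptr, old, new) != old);

	/* Only poke the hardware if the holder was empty: a non-empty holder
	 * means an IPI is already in flight and the receiver drains both. */
	if (old == 0)
		plat_smp_ops.ipi_send(cpu);
}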
Cc: Chuck Jordan <cjordan@synopsys.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The ARConnect/MCIP Inter-Core-Interrupt module can't send an interrupt to
the local core. So use the core intc capability to trigger a software
interrupt to self, using the unused IRQ #21.
This showed up as a csd deadlock with LTP trace_sched on a dual-core
system. This test acts as a scheduler fuzzer, triggering all sorts of
scheduling activity. Trouble starts with an IPI to self, which doesn't get
delivered (effectively lost due to the H/w limitation), but the msg intended
to be sent remains enqueued in the per-cpu @ipi_data.
All subsequent IPIs to this core from other cores get elided due to the
IPI coalescing optimization in ipi_send_msg_one(), where a pending msg
implies an IPI has already been sent and it is assumed the other core has
yet to ack it. After the elided IPI, the other core simply goes into
csd_lock_wait() but never comes out, as this core never sees the interrupt.
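A sketch of the resulting send path (assumed shape; SOFTIRQ_IRQ follows the description above, and the exact aux register used for the self-trigger is an assumption):

#define SOFTIRQ_IRQ	21	/* otherwise unused core intc line */

static void mcip_ipi_send(int cpu)
{
	unsigned long flags;

	/* ARConnect can only interrupt *other* cores: for a self-IPI, raise a
	 * software interrupt on this very core via the core intc instead. */
	if (unlikely(cpu == raw_smp_processor_id())) {
		write_aux_reg(AUX_IRQ_HINT, SOFTIRQ_IRQ);
		return;
	}

	raw_spin_lock_irqsave(&mcip_lock, flags);
	__mcip_cmd(CMD_INTRPT_GENERATE_IRQ, cpu);	/* normal cross-core IPI */
	raw_spin_unlock_irqrestore(&mcip_lock, flags);
}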
Fixes STAR 9001008624
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: <stable@vger.kernel.org> [4.2]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This better reflects its description, i.e. "any needed setup...", and
not just an "IPI request".
Signed-off-by: Noam Camus <noamc@ezchip.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
MCIP now registers its own per-cpu setup routine (for the IPI IRQ request)
using smp_ops.init_irq_cpu(), so there is no need for platforms to do that.
This now completely decouples the platforms from MCIP.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
MCIP now registers its own probe callback with smp_ops.init_early_smp(),
which is called by ARC common code, so there is no need for platforms to do
that. This decouples the platforms from MCIP and helps confine MCIP details
to its own file.
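A sketch of the resulting hookup in mcip.c (field and callback names are assumptions, patterned on struct plat_smp_ops):

struct plat_smp_ops plat_smp_ops = {
	.info		= smp_cpuinfo_buf,
	.init_early_smp	= mcip_probe_n_setup,	/* one-time MCIP probe */
	.init_irq_cpu	= mcip_setup_per_cpu,	/* per-cpu IPI IRQ request */
	.ipi_send	= mcip_ipi_send,
	.ipi_clear	= mcip_ipi_clear,
};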
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
For the non halt-on-reset case, all cores start off simultaneously in
@stext. Master core0 proceeds with the kernel boot, while the others
spin-wait on @wake_flag being set by the master once it is ready. So NO
hardware assist is needed for the master to "kick" the others.
This patch moves this soft implementation out of mcip.c (as there is no
hardware assist) into the common smp.c.
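A sketch of the soft kick/wait pair now living in smp.c (assumed shape; function names are illustrative):

static volatile int wake_flag;

static void arc_default_smp_cpu_kick(int cpu, unsigned long pc)
{
	BUG_ON(cpu == 0);	/* the master never kicks itself */
	wake_flag = cpu;	/* plain store: the secondary is busy-polling */
}

void arc_platform_smp_wait_to_boot(int cpu)
{
	while (wake_flag != cpu)	/* spin in @stext until named by master */
		;

	wake_flag = 0;			/* consume the kick and continue boot */
}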
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Most interrupt flow handlers do not use the irq argument. Those few
which use it can retrieve the irq number from the irq descriptor.
Remove the argument.
Search and replace was done with coccinelle and some extra helper
scripts around it. Thanks to Julia for her help!
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Julia Lawall <Julia.Lawall@lip6.fr>
Cc: Jiang Liu <jiang.liu@linux.intel.com>
The irq argument of most interrupt flow handlers is unused or merely
used instead of a local variable. The handlers which need the irq
argument can retrieve the irq number from the irq descriptor.
Search and update was done with coccinelle and the invaluable help of
Julia Lawall.
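For illustration, a handler after this change; the irq number, if still needed, is recovered from the descriptor (the handler name is a placeholder):

static void example_flow_handler(struct irq_desc *desc)
{
	/* The irq number is no longer passed in; recover it only if needed. */
	unsigned int irq = irq_desc_get_irq(desc);

	pr_debug("dispatching irq %u\n", irq);
	handle_irq_event(desc);		/* run the registered actions */
}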
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Julia Lawall <Julia.Lawall@lip6.fr>
Cc: Vineet Gupta <vgupta@synopsys.com>
The IRQCHIP_DECLARE macro migrated to 'include/linux/irqchip.h'.
See commit 91e20b5040
("irqchip: Move IRQCHIP_DECLARE macro to include/linux/irqchip.h").
This patch removes the inclusions of the private header
'drivers/irqchip/irqchip.h' and, where necessary, replaces them with
inclusions of 'include/linux/irqchip.h'.
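Illustration of the mechanical change in an affected driver (the compatible string and init function here are placeholders, not from any specific file):

/* Before: reaching into the private driver header */
/* #include "irqchip.h" */

/* After: IRQCHIP_DECLARE() now comes from the public header */
#include <linux/irqchip.h>

IRQCHIP_DECLARE(example_intc, "vendor,example-intc", example_intc_of_init);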
Signed-off-by: Joel Porquet <joel@porquet.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
With this, nsim standalone / OSCI have working irq affinity - AXS103 still
needs some work, as the IDU is not visible in the intc hierarchy yet!
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>