Commit graph

20 Commits

Author SHA1 Message Date
Frederic Weisbecker 1268fbc746 nohz: Remove tick_nohz_idle_enter_norcu() / tick_nohz_idle_exit_norcu()
Those two APIs were provided to combine the calls to
tick_nohz_idle_enter() and rcu_idle_enter() into a single
irq-disabled section, so that no interrupt happening in between
would needlessly process any RCU work.

But that is an optimization whose benefits have yet to be
measured. Let's start simple and completely decouple the idle RCU
and dyntick idle logic to simplify things.
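
A minimal sketch of the resulting idle-loop pattern once the two
logics are decoupled; arch_cpu_sleep() is a hypothetical stand-in
for the arch's low-power wait, not an actual kernel function:

  #include <linux/tick.h>
  #include <linux/rcupdate.h>
  #include <linux/sched.h>

  static void my_arch_idle_loop(void)	/* hypothetical arch idle loop */
  {
	tick_nohz_idle_enter();		/* dyntick idle: may stop the tick */
	rcu_idle_enter();		/* RCU idle entry, now fully separate */

	while (!need_resched())
		arch_cpu_sleep();	/* hypothetical low-power wait */

	rcu_idle_exit();		/* leave the RCU extended quiescent state */
	tick_nohz_idle_exit();		/* restart the tick */
  }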

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
2011-12-11 10:31:57 -08:00
Frederic Weisbecker 2bbb6817c0 nohz: Allow rcu extended quiescent state handling separately from tick stop
It is assumed that RCU won't be used once we switch to tickless
mode and until we restart the tick. However, this is not always
true, as on x86-64, where we dereference the idle notifiers after
the tick is stopped.

To prepare for fixing this, add two new APIs:
tick_nohz_idle_enter_norcu() and tick_nohz_idle_exit_norcu().

If no use of RCU is made in the idle loop between the
tick_nohz_idle_enter() and tick_nohz_idle_exit() calls, the arch
must instead call the new *_norcu() versions, so that it doesn't
need to call rcu_idle_enter() and rcu_idle_exit() itself.

Otherwise the arch must call tick_nohz_idle_enter() and
tick_nohz_idle_exit() and also explicitly call (see the sketch
after this list):

- rcu_idle_enter() after its last use of RCU before the CPU is put
to sleep.
- rcu_idle_exit() before the first use of RCU after the CPU is woken
up.
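
A minimal sketch of this second pattern, loosely modeled on the
x86-64 case above; enter_idle()/exit_idle() stand in for the idle
notifier calls and arch_cpu_sleep() for the low-power instruction,
all of them placeholders rather than the real arch code:

  tick_nohz_idle_enter();	/* tick stopped, but RCU still usable */
  enter_idle();			/* placeholder: notifiers may use RCU */

  rcu_idle_enter();		/* last RCU use is behind us */
  arch_cpu_sleep();		/* placeholder low-power wait */
  rcu_idle_exit();		/* woken up, about to use RCU again */

  exit_idle();			/* placeholder: may use RCU again */
  tick_nohz_idle_exit();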

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: David Miller <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
2011-12-11 10:31:36 -08:00
Frederic Weisbecker 280f06774a nohz: Separate out irq exit and idle loop dyntick logic
The tick_nohz_stop_sched_tick() function, which tries to delay
the next timer tick as long as possible, can be called from two
places:

- From the idle loop to start the dyntick idle mode
- From interrupt exit if we have interrupted the dyntick
idle mode, so that we reprogram the next tick event in
case the irq changed some internal state that requires this
action.

There are only a few minor differences between the two cases,
handled inside that function and driven by the per-cpu ts->inidle
variable and the inidle parameter. Together these guarantee
that we only update the dyntick mode on irq exit if we actually
interrupted dyntick idle mode, and that we enter the RCU extended
quiescent state from idle loop entry only.

Split this function into:

- tick_nohz_idle_enter(), which sets ts->inidle to 1, enters
dynticks idle mode unconditionally if it can, and enters the RCU
extended quiescent state.

- tick_nohz_irq_exit(), which only updates the dynticks idle mode
when ts->inidle is set (i.e. if tick_nohz_idle_enter() has been called).

To maintain symmetry, tick_nohz_restart_sched_tick() has been renamed
to tick_nohz_idle_exit().

This simplifies the code and micro-optimizes the irq exit path (no need
for local_irq_save() there). It also prepares for the split between
the dynticks and RCU extended quiescent state logic, which we'll need
to further fix illegal uses of RCU in extended quiescent states in the
idle loop.
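
A simplified sketch of the two call sites after the split, assuming
a generic arch idle loop (an illustration of the pattern, not the
kernel's actual cpu_idle()/irq_exit() code):

  /* Idle loop side: */
  tick_nohz_idle_enter();	/* sets ts->inidle, may stop the tick */
  while (!need_resched())
	cpu_relax();		/* stand-in for the arch sleep */
  tick_nohz_idle_exit();	/* symmetric exit: restart the tick */

  /* Interrupt exit side: */
  if (idle_cpu(smp_processor_id()) && !need_resched())
	tick_nohz_irq_exit();	/* no-op unless ts->inidle is set */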

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: David Miller <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
2011-12-11 10:31:35 -08:00
Paul Mundt c66d3fcbf3 sh: Fix up fallout from cpuidle changes.
Fixes up the pm_idle redefinition that was introduced with the earlier
cpuidle changes.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2011-08-08 16:30:11 +09:00
David Brown cbc158d6bf cpuidle: Consistent spelling of cpuidle_idle_call()
Commit a0bfa13738 misspells
cpuidle_idle_call() in the ARM and SH code.  Fix this to be consistent.

Cc: Kevin Hilman <khilman@deeprootsystems.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: x86@kernel.org
Cc: Len Brown <len.brown@intel.com>
Signed-off-by: David Brown <davidb@codeaurora.org>
[ Also done by Mark Brown - the bug has been around forever, and was
  noticed in -next, but the idle tree never picked it up. Bad bad bad ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-08-04 16:35:34 -10:00
Linus Torvalds 35e51fe82d Merge branch 'idle-release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-idle-2.6
* 'idle-release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-idle-2.6:
  cpuidle: stop depending on pm_idle
  x86 idle: move mwait_idle_with_hints() to where it is used
  cpuidle: replace xen access to x86 pm_idle and default_idle
  cpuidle: create bootparam "cpuidle.off=1"
  mrst_pmu: driver for Intel Moorestown Power Management Unit
2011-08-03 21:54:15 -10:00
Len Brown a0bfa13738 cpuidle: stop depending on pm_idle
cpuidle users should call cpuidle_idle_call() directly
rather than via the (pm_idle)() function pointer.

Architectures may choose to continue using (pm_idle)(),
but cpuidle need not depend on it:

  my_arch_cpu_idle()
	...
	if (cpuidle_idle_call())
		pm_idle();

cc: Kevin Hilman <khilman@deeprootsystems.com>
cc: Paul Mundt <lethal@linux-sh.org>
cc: x86@kernel.org
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
2011-08-03 19:06:37 -04:00
Arun Sharma 60063497a9 atomic: use <linux/atomic.h>
This allows us to move duplicated code from <asm/atomic.h>
(atomic_inc_not_zero() for now) to <linux/atomic.h>.
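
For illustration, a minimal sketch of the consolidated API: generic
helpers such as atomic_inc_not_zero() are now picked up through
<linux/atomic.h> instead of each arch's <asm/atomic.h>:

  #include <linux/atomic.h>
  #include <linux/types.h>

  static atomic_t refcount = ATOMIC_INIT(1);

  static bool try_get(void)
  {
	/* Increment only if the count has not already dropped to zero. */
	return atomic_inc_not_zero(&refcount) != 0;
  }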

Signed-off-by: Arun Sharma <asharma@fb.com>
Reviewed-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: David Miller <davem@davemloft.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-07-26 16:49:47 -07:00
Paul Mundt 763142d1ef sh: CPU hotplug support.
This adds preliminary support for CPU hotplug for SH SMP systems.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-04-26 19:08:55 +09:00
Paul Mundt f0ccf2770f sh: convert online CPU map twiddling to cpumask.
This converts from cpu_set() for the online map to set_cpu_online().
The two online map modifiers were the last remaining manual map
manipulation bits; with this in place, everything now goes through
cpumask, as in the sketch below.
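
Schematically, the shape of the conversion (a sketch of the pattern,
not the exact sh diff):

  /* Before: manual online-map twiddling */
  cpu_set(cpu, cpu_online_map);

  /* After: everything goes through the cpumask accessors */
  set_cpu_online(cpu, true);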

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-04-26 18:39:50 +09:00
Paul Mundt 90851c4076 sh: Tidy up a couple of section mismatches.
select_idle_routine() and register_sh_pmu() both needed their annotations
fixed up to silence section mismatch warnings.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-23 17:06:47 +09:00
Paul Mundt fbb82b0365 sh: machine_ops based reboot support.
This provides a machine_ops-based reboot interface loosely cloned from
x86, and converts the native sh32 and sh64 cases over to it.

This is necessary both for tying in SMP support and for enabling
platforms like SDK7786 to add support for their microcontroller-based
power managers.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-20 16:42:52 +09:00
Paul Mundt 73a38b839b sh: Only use bl bit toggling for sleeping idle.
We don't actually require this in the cpu_relax() polling case, so just
cuddle these around the sleeping version.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-12-21 11:57:33 +09:00
Paul Mundt 3147093e1d sh: Restore bl bit toggling in idle loop.
This fixes up some crashes with IRQs racing the need_resched() test under
QEMU.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-12-21 11:57:29 +09:00
Paul Mundt 9dbe00a56a sh: Fix up IRQ re-enabling for the need_resched() case.
In the case where need_resched() is set between the cpu_idle() and
pm_idle() calls, we were missing an else case for simply re-enabling
local IRQs and bailing out. This was noticed via the irqs_disabled()
warning, even though IRQs were being re-enabled elsewhere.
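
A hedged sketch of the fixed control flow, simplified from the sh
idle loop rather than copied verbatim:

  local_irq_disable();
  if (!need_resched()) {
	pm_idle();		/* expected to return with IRQs enabled */
  } else {
	local_irq_enable();	/* the previously missing else case */
  }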

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-16 17:55:59 +09:00
Paul Mundt 0e6d4986e7 sh: Make check_pgt_cache() more aggressive while idling.
This follows the x86 change and moves check_pgt_cache() up under the
!need_resched() tight loop, rather than simply calling into it when
exiting idle.
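
Schematically (a sketch of the pattern, not the exact sh diff):

  /* Before: only when exiting idle */
  while (!need_resched())
	pm_idle();
  check_pgt_cache();

  /* After: inside the tight loop, as on x86 */
  while (!need_resched()) {
	check_pgt_cache();
	pm_idle();
  }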

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-16 17:27:58 +09:00
Paul Mundt f533c3d340 sh: Idle loop chainsawing for SMP-based light sleep.
This does a bit of chainsawing of the idle loop code to get light sleep
working on SMP. Previously this was forcing secondary CPUs into sleep
mode from which they would not come back if they didn't have their own
local timers. Given that we use clockevents broadcasting by default,
the CPU managing the clockevents can't have IRQs disabled before
entering its sleep state.

This unfortunately leaves us with the age-old need_resched() race
between local_irq_enable() and cpu_sleep(), but at present this is
unavoidable. After some more experimentation it may be possible to
layer SR.BL bit manipulation on top of this scheme to inhibit the race
condition, but given the current potential for missing wakeups, this is
left as a future exercise.
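
The race window in question, schematically:

  local_irq_enable();
  /* an IRQ here can set need_resched(), but its wakeup is missed... */
  cpu_sleep();		/* ...because we enter the sleep state anyway */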

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-16 17:20:58 +09:00
Paul Mundt 2e046b9487 sh: Provide cpu_idle_wait() to fix up cpuidle/SMP build.
Crib the x86 cpu_idle_wait() implementation and shove it in with the
idle code, subsequently enabling ARCH_HAS_CPU_IDLE_WAIT.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-06-23 17:30:17 +09:00
Paul Mundt e869a90ee1 sh: Wire up ARCH_HAS_DEFAULT_IDLE for cpuidle.
cpuidle wants ARCH_HAS_DEFAULT_IDLE defined in order to use the
default idle loop. So, make it accessible and enable it for all
sh machines.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-04-02 13:08:31 +09:00
Paul Mundt 1da1180c6e sh: Split out the idle loop for reuse between _32/_64 variants.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-12-22 18:43:50 +09:00