s390/idle: fix sequence handling vs cpu hotplug

The s390 idle accounting code uses a sequence counter to synchronize
updates and reads of the per cpu idle statistics.

On the read side the assumption is that a result is only valid if the
sequence counter is even and did not change while all values were read.
On cpu hotplug, however, the per cpu data structure gets initialized
via a cpu hotplug notifier on CPU_ONLINE. That is too late, since the
onlined cpu is already running and might access the per cpu data. In
the worst case the data structure gets initialized while an idle thread
is updating its idle statistics, which leaves the sequence counter with
an odd value after the update.

As a result user space tools like top, which read /proc/stat to get
the idle statistics, will busy loop waiting for the sequence counter to
become even again. That will only happen once the queried cpu updates
its idle statistics again, and even then the sequence counter holds an
even value for just a couple of cpu cycles.
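
For illustration, here is a minimal userspace sketch of this even/odd
sequence counter protocol. All names in it are invented for the example
and do not match the kernel's struct s390_idle_data or its accessors:

/*
 * Sketch of the even/odd sequence counter protocol described above.
 * Invented names; not the kernel implementation.
 */
#include <stdint.h>
#include <stdio.h>

struct idle_stats {
	uint64_t sequence;	/* odd while an update is in progress */
	uint64_t idle_count;
	uint64_t idle_time_us;
};

/* Writer (idle thread): bump to odd, update, bump back to even. */
static void update_stats(struct idle_stats *s, uint64_t slept_us)
{
	s->sequence++;		/* odd: readers must not trust the values */
	__sync_synchronize();
	s->idle_count++;
	s->idle_time_us += slept_us;
	__sync_synchronize();
	s->sequence++;		/* even again: snapshot is consistent */
}

/* Reader: retry until the counter is even and unchanged. */
static uint64_t read_idle_time(const struct idle_stats *s)
{
	uint64_t seq, idle_time;

	do {
		seq = s->sequence;
		__sync_synchronize();
		idle_time = s->idle_time_us;
		__sync_synchronize();
	} while ((seq & 1) || seq != s->sequence);

	/*
	 * If the structure were zeroed while an update was in flight, the
	 * counter would be left odd and this loop would spin until the
	 * cpu updates its statistics again -- the bug described above.
	 */
	return idle_time;
}

int main(void)
{
	struct idle_stats s = { 0 };

	update_stats(&s, 1000);
	printf("idle time: %llu us\n",
	       (unsigned long long)read_idle_time(&s));
	return 0;
}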

Fix this by moving the initialization of the per cpu idle statistics
to cpu_init(). I prefer that solution over changing the notifier to
CPU_UP_PREPARE, which would also fix the problem.
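
For comparison, the rejected CPU_UP_PREPARE variant would amount to
roughly the following fragment in the hotplug notifier, mirroring the
lines removed below (a sketch only, not compilable on its own and not
the applied fix):

	case CPU_UP_PREPARE:
		idle = &per_cpu(s390_idle, cpu);
		memset(idle, 0, sizeof(struct s390_idle_data));
		break;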

Cc: stable@vger.kernel.org
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Heiko Carstens authored 2012-07-13 15:45:33 +02:00; committed by Martin Schwidefsky
Parent: 8738e07d5c
Commit: 0008204ffe
2 changed files with 2 additions and 3 deletions

@@ -26,12 +26,14 @@ static DEFINE_PER_CPU(struct cpuid, cpu_id);
 void __cpuinit cpu_init(void)
 {
 	struct cpuid *id = &per_cpu(cpu_id, smp_processor_id());
+	struct s390_idle_data *idle = &__get_cpu_var(s390_idle);
 
 	get_cpu_id(id);
 	atomic_inc(&init_mm.mm_count);
 	current->active_mm = &init_mm;
 	BUG_ON(current->mm);
 	enter_lazy_tlb(&init_mm, current);
+	memset(idle, 0, sizeof(*idle));
 }
 
 /*

@@ -959,14 +959,11 @@ static int __cpuinit smp_cpu_notify(struct notifier_block *self,
 	unsigned int cpu = (unsigned int)(long)hcpu;
 	struct cpu *c = &pcpu_devices[cpu].cpu;
 	struct device *s = &c->dev;
-	struct s390_idle_data *idle;
 	int err = 0;
 
 	switch (action) {
 	case CPU_ONLINE:
 	case CPU_ONLINE_FROZEN:
-		idle = &per_cpu(s390_idle, cpu);
-		memset(idle, 0, sizeof(struct s390_idle_data));
 		err = sysfs_create_group(&s->kobj, &cpu_online_attr_group);
 		break;
 	case CPU_DEAD: