ARM: 6902/1: perf: Remove erroneous check on active_events
When initialising a PMU, there is a check to protect against races with other CPUs filling all of the available event slots. Since armpmu_add checks that an event can be scheduled, we do not need to do this at initialisation time. Furthermore, the current code is broken because it assumes that atomic_inc_not_zero will unconditionally increment active_events and then tries to decrement it again on failure.

This patch removes the broken, redundant code.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
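The broken assumption concerns atomic_inc_not_zero's contract: it increments the counter only when the counter is already non-zero, and its return value reports whether an increment actually happened. Below is a minimal userspace sketch of that contract (a plain int standing in for the kernel atomic, with a hypothetical helper name; this is not the kernel implementation) showing why pairing a decrement with a failed atomic_inc_not_zero underflows the counter:

	/*
	 * Userspace model of atomic_inc_not_zero() semantics: increment
	 * only if non-zero, return non-zero iff an increment happened.
	 */
	#include <stdio.h>

	static int atomic_inc_not_zero_model(int *v)
	{
		if (*v == 0)
			return 0;	/* no increment performed */
		(*v)++;
		return 1;
	}

	int main(void)
	{
		int active_events = 0;	/* no events yet, as on first event init */

		if (!atomic_inc_not_zero_model(&active_events)) {
			/*
			 * The removed error path ran atomic_dec() after a failed
			 * atomic_inc_not_zero(); since no increment occurred, the
			 * "undo" pushes the counter below zero (simplified here by
			 * decrementing unconditionally).
			 */
			active_events--;
		}

		printf("active_events = %d\n", active_events);	/* prints -1 */
		return 0;
	}

In the real function, the failure path should simply fall through to take pmu_reserve_mutex and reserve the hardware, which is exactly what remains once the patch below deletes the extra check.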
Parent: 31bee4cf0e
Commit: 57ce9bb39b
@@ -560,11 +560,6 @@ static int armpmu_event_init(struct perf_event *event)
 	event->destroy = hw_perf_event_destroy;
 
 	if (!atomic_inc_not_zero(&active_events)) {
-		if (atomic_read(&active_events) > armpmu->num_events) {
-			atomic_dec(&active_events);
-			return -ENOSPC;
-		}
-
 		mutex_lock(&pmu_reserve_mutex);
 		if (atomic_read(&active_events) == 0) {
 			err = armpmu_reserve_hardware();
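For reference, this is roughly what the surviving path looks like after the patch, reconstructed only from the context lines of the hunk above (the tail of the function, which this patch does not touch, is elided). As the commit message notes, event-slot exhaustion is instead caught later, when armpmu_add checks that the event can actually be scheduled on a counter.

	event->destroy = hw_perf_event_destroy;

	if (!atomic_inc_not_zero(&active_events)) {
		mutex_lock(&pmu_reserve_mutex);
		if (atomic_read(&active_events) == 0) {
			err = armpmu_reserve_hardware();
		}
		/* ... rest of the function is unchanged by this patch ... */
	}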