perf stat: Clear evsel->reset_group for each stat run
commit bf515f024e upstream.

If a weak group is broken then the reset_group flag remains set for the
next run. Having reset_group set means the counter isn't created and
ultimately a segfault.

A simple reproduction of this is:

  # perf stat -r2 -e '{cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles}:W

which will be added as a test in the next patch.

Fixes: 4804e01116 ("perf stat: Use affinity for opening events")
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Xing Zhengjun <zhengjun.xing@linux.intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: https://lore.kernel.org/r/20220822213352.75721-1-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Parent: b5f5fee03d
Commit: da86f80da3
tools/perf/builtin-stat.c
@@ -807,6 +807,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
 			return -1;
 
 	evlist__for_each_entry(evsel_list, counter) {
+		counter->reset_group = false;
 		if (bpf_counter__load(counter, &target))
 			return -1;
 		if (!evsel__is_bpf(counter))