In a1a587073c ("perf script: Use fprintf like printing uniformly")
a few cases were missed, fix them.
Reported-by: yuzhoujian <yuzhoujian@didichuxing.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-sq9hvfk5mkjdqzlpyiq7jkos@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The current event timekeeping, which computes enabled and running
times, uses 3 distinct timestamps to reflect the various event states:
OFF (stopped), INACTIVE (enabled) and ACTIVE (running).
Furthermore, the update rules are such that even INACTIVE events need
their timestamps updated. This is undesirable because we'd like to not
touch INACTIVE events if at all possible; this makes event scheduling
(much) more expensive than needed.
Rewrite the timekeeping to directly use event->state, this greatly
simplifies the code and results in only having to update things when
we change state, or an up-to-date value is requested (read).
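To illustrate the shape of state-driven timekeeping, here is a minimal
standalone sketch (not the actual kernel code; the struct, field and
function names are made up):

~~~~~
#include <stdint.h>

enum evt_state { EVT_OFF, EVT_INACTIVE, EVT_ACTIVE };

struct evt {
        enum evt_state state;
        uint64_t total_time_enabled;    /* accumulated while INACTIVE or ACTIVE */
        uint64_t total_time_running;    /* accumulated while ACTIVE only */
        uint64_t stamp;                 /* ctx time at the last update */
};

/* Fold the elapsed ctx time into the totals, based purely on ->state. */
static void evt_update_time(struct evt *e, uint64_t now)
{
        uint64_t delta = now - e->stamp;

        if (e->state >= EVT_INACTIVE)
                e->total_time_enabled += delta;
        if (e->state == EVT_ACTIVE)
                e->total_time_running += delta;
        e->stamp = now;
}

/* Only two places touch the timestamps: state changes and reads. */
static void evt_set_state(struct evt *e, enum evt_state state, uint64_t now)
{
        evt_update_time(e, now);
        e->state = state;
}

static void evt_read(struct evt *e, uint64_t now,
                     uint64_t *enabled, uint64_t *running)
{
        evt_update_time(e, now);        /* bring totals up to date on demand */
        *enabled = e->total_time_enabled;
        *running = e->total_time_running;
}
~~~~~

The kernel code of course also deals with locking and more states; the
point of the sketch is that INACTIVE events no longer need periodic
touching.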
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
perf_event_read() has a number of issues regarding the timekeeping bits.
- The IPI didn't update group times when it found INACTIVE
- The direct call would not re-check ->state after taking ctx->lock,
which can result in ->count and timestamps getting out of sync
(sketched below).
And we can make use of the ordering introduced for perf_event_stop()
to make it more accurate for ACTIVE.
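A rough sketch of the re-check-under-lock point from the second bullet
(illustrative user-space code with made-up names such as evt and
read_from_hw(); not the actual perf code):

~~~~~
#include <pthread.h>
#include <stdint.h>

struct evt_ctx { pthread_mutex_t lock; };

struct evt {
        struct evt_ctx *ctx;
        int active;             /* stand-in for ->state */
        uint64_t count;
};

uint64_t read_from_hw(struct evt *e);   /* hypothetical: refresh from the PMU */

static uint64_t evt_read_direct(struct evt *e)
{
        uint64_t value;

        pthread_mutex_lock(&e->ctx->lock);
        /*
         * Re-check the state now that the lock is held; it may have
         * changed since the unlocked check that brought us here, and
         * trusting the stale answer lets ->count and the timestamps
         * drift out of sync.
         */
        if (e->active)
                value = read_from_hw(e);
        else
                value = e->count;
        pthread_mutex_unlock(&e->ctx->lock);

        return value;
}
~~~~~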
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The barrier and comment make no sense:
- if what the barrier says is true, it should be wmb() but that
should then be part of the arch driver, not the generic code.
- if it is an SMP barrier, there must be a matching barrier, and
there isn't one.
So kill it.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
It's a weird name: 'active' is one of the states, so it should not be
part of the name; also, it's too long.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
We should make sure to update ctx time before we use it to update
event times.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Event timestamps are serialized using ctx->lock, make sure to hold it
over reading all values.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
We should make sure the ctx time is updated before we detach events,
which will want to update event times.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
perf_event_read_value() is an external accessor, just like
perf_event_{en,dis}able() and should thus use perf_event_ctx_lock().
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: f63a8daa58 ("perf: Fix event->ctx locking")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
eBPF programs would like access to the (perf) event enabled and
running times along with the event value, such that they can deal with
event multiplexing (among other things).
This patch extends the interface; a future eBPF patch will utilize
the new functionality.
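For context, the usual multiplexing correction this enables looks
roughly like the following (a standalone sketch of the arithmetic, not
the eBPF helper itself):

~~~~~
#include <stdint.h>

/* Estimate the full-period count from a multiplexed event's raw reading. */
static uint64_t scale_for_multiplexing(uint64_t count,
                                       uint64_t time_enabled,
                                       uint64_t time_running)
{
        if (time_running == 0)
                return 0;               /* the event never got onto the PMU */
        if (time_running == time_enabled)
                return count;           /* no multiplexing happened */

        /* count * enabled / running; a real user would guard against
         * overflow of the multiplication, which this sketch ignores. */
        return count * time_enabled / time_running;
}
~~~~~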
[ Note, there's a same-content commit with a poor changelog and a meaningless
title in the networking tree as well - but we need this change for subsequent
perf work, so apply it here as well, with a proper changelog. Hopefully Git
will be able to sort out this somewhat messy workflow, if there are no other,
conflicting changes to these files. ]
Signed-off-by: Yonghong Song <yhs@fb.com>
[ Rewrote the changelog. ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: <ast@fb.com>
Cc: <daniel@iogearbox.net>
Cc: <rostedt@goodmis.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: David S. Miller <davem@davemloft.net>
Link: http://lkml.kernel.org/r/20171005161923.332790-2-yhs@fb.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge tag 'perf-core-for-mingo-4.15-20171025' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core
Pull perf/core inline improvements from Arnaldo Carvalho de Melo:
From Milian's cover letter: (Milian Wolff)
"This series of patches completely reworks the way inline frames are
handled. Instead of querying for the inline nodes on-demand in the
individual tools, we now create proper callchain nodes for inlined
frames. The advantages this approach brings are numerous:
- Less duplicated code in the individual browser
- Aggregated cost for inlined frames for the --children top-down list
- Various bug fixes that arose from querying for a srcline/symbol based on
the IP of a sample, which will always point to the last inlined frame
instead of the corresponding non-inlined frame
- Overall much better support for visualizing cost for heavily-inlined C++
code, which simply was confusing and unreliable before
- srcline honors the global setting as to whether full paths or basenames
should be shown
- Caches for inlined frames and srcline information, which allow us to
enable inline frame handling by default"
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Now that we have caches in place to speed up the process of finding
inlined frames and srcline information repeatedly, we can enable this
useful option by default.
Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Milian Wolff <milian.wolff@kdab.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171019113836.5548-6-milian.wolff@kdab.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
On one hand this ensures that the memory is properly freed when the DSO
gets freed. On the other hand this significantly speeds up the
processing of the callchain nodes when lots of srclines are requested.
For one of my data files e.g.:
Before:
Performance counter stats for 'perf report -s srcline -g srcline --stdio':
52496.495043 task-clock (msec) # 0.999 CPUs utilized
634 context-switches # 0.012 K/sec
2 cpu-migrations # 0.000 K/sec
191,561 page-faults # 0.004 M/sec
165,074,498,235 cycles # 3.144 GHz
334,170,832,408 instructions # 2.02 insn per cycle
90,220,029,745 branches # 1718.591 M/sec
654,525,177 branch-misses # 0.73% of all branches
52.533273822 seconds time elapsed
Processed 236605 events and lost 40 chunks!
After:
Performance counter stats for 'perf report -s srcline -g srcline --stdio':
22606.323706 task-clock (msec) # 1.000 CPUs utilized
31 context-switches # 0.001 K/sec
0 cpu-migrations # 0.000 K/sec
185,471 page-faults # 0.008 M/sec
71,188,113,681 cycles # 3.149 GHz
133,204,943,083 instructions # 1.87 insn per cycle
34,886,384,979 branches # 1543.214 M/sec
278,214,495 branch-misses # 0.80% of all branches
22.609857253 seconds time elapsed
Note that the difference is only this large when `--inline` is not
passed. In such situations, we would use the inliner cache and thus do
not run this code path that often.
I think that this cache should actually be used in other places, too.
When looking at the valgrind leak report for perf report, we see tons of
srclines being leaked, most notably from calls to
hist_entry__get_srcline. The problem is that get_srcline has many
different formatting options (show_sym, show_addr, potentially even
unwind_inlines when calling __get_srcline directly). As such, the
srcline cannot easily be cached for all calls, or we'd have to add
caches for all formatting combinations (6 so far). An alternative would
be to remove the formatting options and handle that on a different level
- i.e. print the sym/addr on demand wherever we actually output
something. And the unwind_inlines could be moved into a separate
function that does not return the srcline.
Signed-off-by: Milian Wolff <milian.wolff@kdab.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171019113836.5548-4-milian.wolff@kdab.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
When no inlined frames could be found for a given address, we did not
store this information anywhere. That means we potentially do the costly
inliner lookup repeatedly for cases where we know it can never succeed.
This patch makes dso__parse_addr_inlines always return a valid
inline_node. It will be empty when no inliners are found. This enables
us to cache the empty list in the DSO, thereby improving the performance
when many addresses fail to find the inliners.
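The caching idea can be sketched like this (illustrative pseudo-C;
cache_lookup(), run_inliner_lookup(), new_empty_inline_node() and
cache_insert() are hypothetical helpers, not the actual perf functions):

~~~~~
/* Always hand back a node, even when it has no entries, so the negative
 * result is cached too and the costly lookup is not repeated for the
 * same address. */
struct inline_node *inlines_for_addr(struct dso_cache *dso, uint64_t addr)
{
        struct inline_node *node = cache_lookup(dso, addr);

        if (node)
                return node;            /* hit: possibly an empty list */

        node = run_inliner_lookup(dso, addr);   /* may find nothing */
        if (!node)
                node = new_empty_inline_node(addr);

        cache_insert(dso, addr, node);  /* empty nodes get cached as well */
        return node;
}
~~~~~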
For my trivial example, the performance impact is already quite
significant:
Before:
~~~~~
Performance counter stats for 'perf report --stdio --inline -g srcline -s srcline' (5 runs):
594.804032 task-clock (msec) # 0.998 CPUs utilized ( +- 0.07% )
53 context-switches # 0.089 K/sec ( +- 4.09% )
0 cpu-migrations # 0.000 K/sec ( +-100.00% )
5,687 page-faults # 0.010 M/sec ( +- 0.02% )
2,300,918,213 cycles # 3.868 GHz ( +- 0.09% )
4,395,839,080 instructions # 1.91 insn per cycle ( +- 0.00% )
939,177,205 branches # 1578.969 M/sec ( +- 0.00% )
11,824,633 branch-misses # 1.26% of all branches ( +- 0.10% )
0.596246531 seconds time elapsed ( +- 0.07% )
~~~~~
After:
~~~~~
Performance counter stats for 'perf report --stdio --inline -g srcline -s srcline' (5 runs):
113.111405 task-clock (msec) # 0.990 CPUs utilized ( +- 0.89% )
29 context-switches # 0.255 K/sec ( +- 54.25% )
0 cpu-migrations # 0.000 K/sec
5,380 page-faults # 0.048 M/sec ( +- 0.01% )
432,378,779 cycles # 3.823 GHz ( +- 0.75% )
670,057,633 instructions # 1.55 insn per cycle ( +- 0.01% )
141,001,247 branches # 1246.570 M/sec ( +- 0.01% )
2,346,845 branch-misses # 1.66% of all branches ( +- 0.19% )
0.114222393 seconds time elapsed ( +- 1.19% )
~~~~~
Signed-off-by: Milian Wolff <milian.wolff@kdab.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171019113836.5548-3-milian.wolff@kdab.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Some of the code paths I introduced before returned too early without
running the code to handle a node's branch count. By refactoring
match_chain to only have one exit point, this can be remedied.
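The single-exit-point pattern being applied can be sketched generically
(illustrative code, not the real match_chain()):

~~~~~
#include <string.h>

struct entry { const char *srcline; const char *sym; unsigned long ip; };

void update_branch_counts(struct entry *a, struct entry *b);    /* hypothetical */

static int compare_entries(struct entry *a, struct entry *b)
{
        int cmp;

        /* No early returns: every path sets cmp and falls through... */
        if (a->srcline && b->srcline)
                cmp = strcmp(a->srcline, b->srcline);
        else if (a->sym && b->sym)
                cmp = strcmp(a->sym, b->sym);
        else
                cmp = (a->ip > b->ip) - (a->ip < b->ip);

        /* ...so the bookkeeping at the single exit point always runs. */
        if (cmp == 0)
                update_branch_counts(a, b);
        return cmp;
}
~~~~~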
Signed-off-by: Milian Wolff <milian.wolff@kdab.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/1707691.qaJ269GSZW@agathebauer
Link: http://lkml.kernel.org/r/20171018185350.14893-2-milian.wolff@kdab.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The fake symbols we create for inlined frames will represent different
functions but can share the same symbol start address. This leads to issues
when different inline branches all lead to the same function.
Before:
~~~~~
$ perf report -s sym -i perf.inlining.data --inline --stdio -g function
...
--38.86%--_start
__libc_start_main
main
|
--37.57%--std::norm<double> (inlined)
std::_Norm_helper<true>::_S_do_it<double> (inlined)
|
--36.36%--std::abs<double> (inlined)
std::__complex_abs (inlined)
|
--12.24%--std::linear_congruential_engine<unsigned long, 16807ul, 0ul, 2147483647ul>::operator() (inlined)
std::__detail::__mod<unsigned long, 2147483647ul, 16807ul, 0ul> (inlined)
std::__detail::_Mod<unsigned long, 2147483647ul, 16807ul, 0ul, true, true>::__calc (inlined)
~~~~~
Note that this backtrace representation is completely bogus.
Complex abs does not call the linear congruential engine! It
is just a side-effect of a longer inlined stack being appended
to a shorter, different inlined stack, both of which originate
in the same function (main).
This patch fixes the issue:
~~~~~
$ perf report -s sym -i perf.inlining.data --inline --stdio -g function
...
--38.86%--_start
__libc_start_main
main
|
|--35.59%--std::uniform_real_distribution<double>::operator()<std::linear_congruential_engine<unsigned long, 16807ul, 0ul, 2147483647ul> > (inlined)
| std::uniform_real_distribution<double>::operator()<std::linear_congruential_engine<unsigned long, 16807ul, 0ul, 2147483647ul> > (inlined)
| |
| --34.37%--std::__detail::_Adaptor<std::linear_congruential_engine<unsigned long, 16807ul, 0ul, 2147483647ul>, double>::operator() (inlined)
| std::generate_canonical<double, 53ul, std::linear_congruential_engine<unsigned long, 16807ul, 0ul, 2147483647ul> > (inlined)
| |
| --12.24%--std::linear_congruential_engine<unsigned long, 16807ul, 0ul, 2147483647ul>::operator() (inlined)
| std::__detail::__mod<unsigned long, 2147483647ul, 16807ul, 0ul> (inlined)
| std::__detail::_Mod<unsigned long, 2147483647ul, 16807ul, 0ul, true, true>::__calc (inlined)
|
--1.99%--std::norm<double> (inlined)
std::_Norm_helper<true>::_S_do_it<double> (inlined)
std::abs<double> (inlined)
std::__complex_abs (inlined)
~~~~~
Signed-off-by: Milian Wolff <milian.wolff@kdab.com>
Reviewed-by: Jiri Olsa <jolsa@redhat.com>
Reviewed-by: Namhyung Kim <namhyung@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Cc: Yao Jin <yao.jin@linux.intel.com>
Link: http://lkml.kernel.org/r/20171009203310.17362-10-milian.wolff@kdab.com
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
[ Fix up conflict with c1fbc0cf81 ("perf callchain: Compare dsos (as well) for CCKEY_FUNCTION"), remove unneeded hunk ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The original patch that introduced inline frame output in the various
browsers used this suffix already. The new centralized approach that
uses fake symbols for inlined frames was missing this suffix so far.
Instead of changing the symbol name itself, we only print the suffix
where needed. This allows us to efficiently lookup the symbol for a
given name without first having to append the suffix before the lookup.
Signed-off-by: Milian Wolff <milian.wolff@kdab.com>
Reviewed-by: Jiri Olsa <jolsa@redhat.com>
Reviewed-by: Namhyung Kim <namhyung@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yao Jin <yao.jin@linux.intel.com>
Link: http://lkml.kernel.org/r/20171009203310.17362-8-milian.wolff@kdab.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
When a callchain entry has no srcline available, we ended up comparing
the instruction pointer. I consider this to be not too useful. Rather, I
think we should group the entries by function name, which this patch
adds. For people who want to split the data on the IP boundary, using
`-g address` is the correct choice.
Before:
~~~~~
100.00% 38.86% [.] main
|
|--61.14%--main inlining.cpp:14
| std::norm<double> complex:664
| std::_Norm_helper<true>::_S_do_it<double> complex:654
| std::abs<double> complex:597
| std::__complex_abs complex:589
| |
| |--56.03%--hypot
| | |
| | |--8.45%--__hypot_finite
| | |
| | |--7.62%--__hypot_finite
| | |
| | |--2.29%--__hypot_finite
| | |
| | |--2.24%--__hypot_finite
| | |
| | |--2.06%--__hypot_finite
| | |
| | |--1.81%--__hypot_finite
...
~~~~~
After:
~~~~~
100.00% 38.86% [.] main
|
|--61.14%--main inlining.cpp:14
| std::norm<double> complex:664
| std::_Norm_helper<true>::_S_do_it<double> complex:654
| std::abs<double> complex:597
| std::__complex_abs complex:589
| |
| |--60.29%--hypot
| | |
| | --56.03%--__hypot_finite
| |
| --0.85%--cabs
~~~~~
Signed-off-by: Milian Wolff <milian.wolff@kdab.com>
Reviewed-by: Jiri Olsa <jolsa@redhat.com>
Reviewed-by: Namhyung Kim <namhyung@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yao Jin <yao.jin@linux.intel.com>
Link: http://lkml.kernel.org/r/20171009203310.17362-7-milian.wolff@kdab.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The inline_node structs are maintained by the new dso->inlines tree.
This in turn keeps ownership of the fake symbols and srcline string
representing an inline frame.
This tree is sorted by address to allow quick lookups. All other fields
of the symbol besides the function name are unused for inline frames. The
advantage of this approach is that all existing users of the callchain
API can now transparently display inlined frames without having to patch
their code.
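The address-sorted lookup can be pictured with a plain bsearch() over a
sorted table (a simplified standalone sketch; the real code uses perf's
own tree types):

~~~~~
#include <stdint.h>
#include <stdlib.h>

struct inline_entry {
        uint64_t addr;          /* address the inline_node was built for */
        void *node;             /* cached inline frames (fake syms, srclines) */
};

static int cmp_addr(const void *key, const void *elem)
{
        uint64_t addr = *(const uint64_t *)key;
        const struct inline_entry *e = elem;

        return (addr > e->addr) - (addr < e->addr);
}

/* Quick lookup in an address-sorted table of cached inline nodes. */
static void *find_inlines(struct inline_entry *tab, size_t n, uint64_t addr)
{
        struct inline_entry *e = bsearch(&addr, tab, n, sizeof(*tab), cmp_addr);

        return e ? e->node : NULL;
}
~~~~~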
Signed-off-by: Milian Wolff <milian.wolff@kdab.com>
Reviewed-by: Jiri Olsa <jolsa@redhat.com>
Reviewed-by: Namhyung Kim <namhyung@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yao Jin <yao.jin@linux.intel.com>
Link: http://lkml.kernel.org/r/20171009203310.17362-6-milian.wolff@kdab.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
This is a preparation for the creation of real callchain entries for
inlined frames. The rest of the perf code uses the srcline string. As
such, using that also for the srcline API allows us to simplify some of
the upcoming code. Most notably, it will allow us to cache the srcline
for a given inline node and reuse it for different callchain entries.
Signed-off-by: Milian Wolff <milian.wolff@kdab.com>
Reviewed-by: Jiri Olsa <jolsa@redhat.com>
Reviewed-by: Namhyung Kim <namhyung@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yao Jin <yao.jin@linux.intel.com>
Link: http://lkml.kernel.org/r/20171009203310.17362-5-milian.wolff@kdab.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
This is a requirement to create real callchain entries for inlined
frames.
Since the list of inlines usually contains the target symbol too, i.e.
the location where the frames get inlined to, we alias that symbol and
reuse it as-is. This ensures that other dependent functionality keeps
working, most notably annotation of the target frames.
For all other entries in the inline_list, a fake symbol is created.
These are marked by a new 'inlined' member, which is set to true. Only
those symbols are managed by the inline_list and get freed when the
inline_list is deleted from within inline_node__delete.
Signed-off-by: Milian Wolff <milian.wolff@kdab.com>
Reviewed-by: Jiri Olsa <jolsa@redhat.com>
Reviewed-by: Namhyung Kim <namhyung@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yao Jin <yao.jin@linux.intel.com>
Link: http://lkml.kernel.org/r/20171009203310.17362-4-milian.wolff@kdab.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
This is mostly a preparation to enable the creation of full callchain
nodes for inline frames. Such frames will reference the IP of the
non-inlined frame, but hold the symbol and srcline for an inlined
location. As such, we won't be able to query the srcline on-demand based
on the IP alone. Instead, we will leverage the functionality provided by
this patch here, and store the srcline for the inlined nodes in the new
srcline member of callchain_cursor_node.
Note that this patch on its own leaks the srcline, as there is no
free_callchain_cursor_node or similar. A future patch will add caching
of the srcline and handle deletion properly.
Signed-off-by: Milian Wolff <milian.wolff@kdab.com>
Reviewed-by: Jiri Olsa <jolsa@redhat.com>
Reviewed-by: Namhyung Kim <namhyung@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yao Jin <yao.jin@linux.intel.com>
Link: http://lkml.kernel.org/r/20171009203310.17362-3-milian.wolff@kdab.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The follow-up commits will make inline frames first-class citizens in
the callchain, thereby obsoleting all of this special code.
Signed-off-by: Milian Wolff <milian.wolff@kdab.com>
Reviewed-by: Jiri Olsa <jolsa@redhat.com>
Reviewed-by: Namhyung Kim <namhyung@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yao Jin <yao.jin@linux.intel.com>
Link: http://lkml.kernel.org/r/20171009203310.17362-2-milian.wolff@kdab.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Commit:
d2878d642a ("perf/x86/intel/bts: Disallow use by unprivileged users on paranoid systems")
... adds a privilege check in exactly the wrong place in the event init path:
after the 'LBR exclusive' reference has been taken, and doesn't release it
in the case of insufficient privileges. After this, nobody in the system
gets to use PT or LBR.
This patch moves the privilege check to where it should have been in the
first place.
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: d2878d642a ("perf/x86/intel/bts: Disallow use by unprivileged users on paranoid systems")
Link: http://lkml.kernel.org/r/20171023123533.16973-1-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge tag 'perf-core-for-mingo-4.15-20171023' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core
Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
- Update vendor events JSON metrics for Intel's Broadwell, Broadwell
Server, Haswell, Haswell Server, IvyBridge, IvyTown, JakeTown, Sandy
Bridge, Skylake and SkyLake Server (Andi Kleen)
- Add vendor event file for Intel's Goldmont Plus V1 (Kan Liang)
- Move perf_mmap methods from 'perf record' and evlist.c to a separate
mmap.[ch] pair, to better separate things and pave the way for further
work on multithreading tools (Arnaldo Carvalho de Melo)
- Do not check ABI headers in a detached tarball build, as the kernel
headers from which we copied tools/include/ are by definition not
available (Arnaldo Carvalho de Melo)
- Make 'perf script' use fprintf()-like printing, i.e. receiving a FILE
pointer, so that it gets consistent with other tools/ code and allows
for printing to per-event files (Arnaldo Carvalho de Melo)
- Error handling fixes (resource release on exit) for 'perf script'
and 'perf kmem' (Christophe JAILLET)
- Make some 'perf event attr' tests optional on virtual machines, where
tested counters are not available (Jiri Olsa)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
We don't need perf.h, which is a kitchen sink; all we need is
perf_events.h for perf_ns_link_info, sys/types.h for pid_t and
linux/types.h for u64 and list_head.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Li Zhijian <lizhijian@cn.fujitsu.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-f2uxyaj4s2hmntkrezpa6dqz@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
If the string passed in '--time' is invalid, we must do some cleanup
before leaving, as in the other error handling paths of this function.
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: kernel-janitors@vger.kernel.org
Fixes: 2a865bd8dd ("perf kmem: Add option to specify time window of interest")
Link: http://lkml.kernel.org/r/20170916060936.28199-1-christophe.jaillet@wanadoo.fr
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
If the string passed in '--time' is invalid, or if we failed to set the
libtraceevent function resolver, we must do some cleanup before leaving,
as in the other error handling paths of this function.
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: kernel-janitors@vger.kernel.org
Link: http://lkml.kernel.org/r/20170916062537.28921-1-christophe.jaillet@wanadoo.fr
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
We've been mixing print() with fprintf() style printing for a while, but
now we need to use fprintf()-like syntax uniformly, as a preparatory
patch for supporting printing to different files, one per event.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Cc: yuzhoujian <yuzhoujian@didichuxing.com>
Link: http://lkml.kernel.org/n/tip-kv5z3v8ptfghbarv3a9usvin@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Factored out of print_binary(), but receiving a fp pointer and expecting
that the printer be an fprintf-like function, i.e. one that receives a
FILE pointer and returns the number of characters printed.
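The contract can be sketched as follows (illustrative names, not the
exact tools/perf API): the printer has an fprintf()-compatible shape, so
plain fprintf can be passed in and the destination FILE chosen per call
site.

~~~~~
#include <stdio.h>

/* fprintf()-compatible: takes the FILE to write to, returns chars printed. */
typedef int (*line_printer_t)(FILE *fp, const char *fmt, ...);

static int dump_bytes(const unsigned char *buf, size_t len,
                      line_printer_t print, FILE *fp)
{
        int printed = 0;
        size_t i;

        for (i = 0; i < len; i++)
                printed += print(fp, "%02x%s", buf[i],
                                 (i + 1) % 16 ? " " : "\n");
        return printed;
}

/* e.g. dump_bytes(data, size, fprintf, per_event_file); */
~~~~~

Passing plain fprintf as the printer is what makes choosing a per-event
FILE possible.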
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Cc: yuzhoujian <yuzhoujian@didichuxing.com>
Link: http://lkml.kernel.org/n/tip-6oqnxr6lmgqe6q6p3iugnscx@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Some of the metrics use an incorrect syntax for specifying the cmask for
an event. Convert to perf syntax so that they can be resolved.
Fixes metrics on Broadwell, SandyBridge.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/n/tip-3k3fkfj8obek9dkmryyrqzhu@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
When we use one of:
[acme@jouet linux]$ make help | grep perf
perf-tar-src-pkg - Build perf-4.14.0-rc3.tar source tarball
perf-targz-src-pkg - Build perf-4.14.0-rc3.tar.gz source tarball
perf-tarbz2-src-pkg - Build perf-4.14.0-rc3.tar.bz2 source tarball
perf-tarxz-src-pkg - Build perf-4.14.0-rc3.tar.xz source tarball
[acme@jouet linux]$
I.e. when we create a detached tarball to build perf outside the
enveloping kernel sources (from a kernel tarball or a checked out
linux.git directory), we by definition can't check for differences among
the tools/{include,arch}, etc. files we originally copied from the
kernel, so bail out in that case to avoid warnings when doing the
detached builds.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-vbrga0mhplv7niwxr3ghjyxv@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Merge tag 'platform-drivers-x86-v4.14-3' of git://git.infradead.org/linux-platform-drivers-x86
Pull x86 platform driver fixes from Darren Hart:
"Use a spin_lock instead of mutex in atomic context. The devm_ fix is a
dependency. Summary:
intel_pmc_ipc:
- Use spin_lock to protect GCR updates
- Use devm_* calls in driver probe function"
* tag 'platform-drivers-x86-v4.14-3' of git://git.infradead.org/linux-platform-drivers-x86:
platform/x86: intel_pmc_ipc: Use spin_lock to protect GCR updates
platform/x86: intel_pmc_ipc: Use devm_* calls in driver probe function
Currently, the update_no_reboot_bit() function implemented in this driver
uses mutex_lock() to protect its register updates. But this function is
called in atomic context by the iTCO_wdt_start() and iTCO_wdt_stop()
functions in the iTCO_wdt.c driver, which in turn causes a "sleeping in
atomic context" issue. This patch fixes this issue by replacing the
mutex_lock() with spin_lock() to protect the GCR read/write/update APIs.
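The change can be pictured with a small pseudo-kernel sketch
(illustrative; gcr_base, NO_REBOOT_REG and NO_REBOOT_BIT are made-up
names and the exact signature may differ from the driver's): a spinlock
may be taken in atomic context, a mutex may not.

~~~~~
static DEFINE_SPINLOCK(gcr_lock);

/* Called from the iTCO_wdt start/stop paths, i.e. potentially atomic
 * context, so the read-modify-write of the GCR register must not sleep. */
static int update_no_reboot_bit(void *priv, bool set)
{
        u32 val;

        spin_lock(&gcr_lock);           /* was mutex_lock(), which can sleep */
        val = readl(gcr_base + NO_REBOOT_REG);
        if (set)
                val |= NO_REBOOT_BIT;
        else
                val &= ~NO_REBOOT_BIT;
        writel(val, gcr_base + NO_REBOOT_REG);
        spin_unlock(&gcr_lock);

        return 0;
}
~~~~~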
Fixes: 9d855d4 ("platform/x86: intel_pmc_ipc: Fix iTCO_wdt GCS memory mapping failure")
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kupuswamy@linux.intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
This patch cleans up unnecessary free/alloc calls in ipc_plat_probe(),
ipc_pci_probe() and ipc_plat_get_res() functions by using devm_*
calls.
This patch also adds proper error handling for failure cases in the
ipc_pci_probe() function.
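The devm_* pattern being adopted looks roughly like this (a generic
pseudo-kernel sketch, not the driver's actual probe code; example_ipc
and example_irq_handler are made up): device-managed allocations are
released automatically on probe failure and on removal, so the manual
free paths go away.

~~~~~
struct example_ipc {
        void __iomem *regs;
        int irq;
};

static irqreturn_t example_irq_handler(int irq, void *dev_id);

static int example_plat_probe(struct platform_device *pdev)
{
        struct example_ipc *ipc;
        struct resource *res;

        /* device-managed: freed automatically if probe fails or on removal */
        ipc = devm_kzalloc(&pdev->dev, sizeof(*ipc), GFP_KERNEL);
        if (!ipc)
                return -ENOMEM;

        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        ipc->regs = devm_ioremap_resource(&pdev->dev, res);
        if (IS_ERR(ipc->regs))
                return PTR_ERR(ipc->regs);

        ipc->irq = platform_get_irq(pdev, 0);
        if (ipc->irq < 0)
                return ipc->irq;

        /* no explicit iounmap()/free_irq() needed in any error path */
        return devm_request_irq(&pdev->dev, ipc->irq, example_irq_handler,
                                0, dev_name(&pdev->dev), ipc);
}
~~~~~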
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
[andy: fixed style issues, missed devm_free_irq(), removed unnecessary log message]
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Pull workqueue fix from Tejun Heo:
"This is a fix for an old bug in workqueue. Workqueue used a mutex to
arbitrate who gets to be the manager of a pool. When the manager role
gets released, the mutex gets unlocked while holding the pool's
irqsafe spinlock. This can lead to deadlocks as mutex's internal
spinlock isn't irqsafe. This got discovered by recent fixes to mutex
lockdep annotations.
The fix is a bit invasive for rc6 but if anything were wrong with the
fix it would likely have already blown up in -next, and we want the
fix in -stable anyway"
* 'for-4.14-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq:
workqueue: replace pool->manager_arb mutex with a flag
Merge tag 'pinctrl-v4.14-4' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl
Pull pin control fixes from Linus Walleij:
"Two last minute fixes for pin controllers, both regressions in
specific drivers:
- Fix a touchpad pin control issue on the AMD affecting Asus laptops
- Fix an interrupt handling regression on the MCP23s08"
* tag 'pinctrl-v4.14-4' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl:
pinctrl: mcp23s08: fix interrupt handling regression
pinctrl/amd: fix masking of GPIO interrupts
Merge tag 'regulator-fix-v4.14-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator
Pull regulator fixes from Mark Brown:
"A couple of small driver specific bug fixes that have been collected
since the merge window"
* tag 'regulator-fix-v4.14-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator:
regulator: rn5t618: Do not index regulator_desc arrays by id
regulator: axp20x: Fix poly-phase bit offset for AXP803 DCDC5/6
There's no need for an extra cpuid_parse arch callback; it can be handled
directly in the init callback.
Add the init function to x86 to cover the cpuid initialization.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171011150158.11895-2-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Fix an incorrect description in the 'perf list' manpage. When a group
does not fit into the hardware it is partially scheduled, but does not
error out.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/20171010224322.15861-1-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Otherwise we fail on virtual machines with no support for specific HW
events.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20171009130712.14747-1-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The previous prep patch was just to show exactly what changed in that
function; now it's time to move that method, and the things only it uses,
to the right place: mmap.[ch].
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-aaxywfgw3d44x6xlu8zm1avu@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
It becomes a perf_mmap method, "push", that reads from a mmap and
"pushes" the data to a consumer which, in the initial case, for 'perf
record', just writes it to the perf.data file descriptor, but may be
used by 'top', etc.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-u4l1qjbi6l76r2k0nv99220n@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
To better organize the sources, and we may end up even using it
directly, without evlists and evsels.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-oiqrm7grflurnnzo2ovfnslg@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>