Commit Graph

299662 Commits

Author SHA1 Message Date
Steven Rostedt 9644302e33 ftrace: Speed up search by skipping pages by address
As all records in a page of the ftrace table are sorted, we can
speed up the search algorithm by checking if the address to look for
falls between the first and last record ip on the page.

This speeds up both the ftrace_location() and ftrace_text_reserved()
algorithms, as it can skip full pages when the search address is
not in them.
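
For illustration, a minimal user-space sketch of the idea (simplified types and names, not the actual ftrace code): because each page holds a sorted run of addresses, a whole page whose [first, last] range does not contain the target can be skipped without doing a binary search in it.

    #include <stdlib.h>

    /* Simplified stand-in for an ftrace page: a sorted array of record ips. */
    struct page_sketch {
            unsigned long *ips;     /* sorted addresses in this page */
            int index;              /* number of records used */
    };

    static int cmp_ip(const void *a, const void *b)
    {
            unsigned long x = *(const unsigned long *)a;
            unsigned long y = *(const unsigned long *)b;

            return (x > y) - (x < y);
    }

    /* Returns the ip if found, 0 otherwise. */
    unsigned long lookup_ip(struct page_sketch *pages, int nr_pages,
                            unsigned long ip)
    {
            int i;

            for (i = 0; i < nr_pages; i++) {
                    struct page_sketch *pg = &pages[i];

                    if (!pg->index)
                            continue;
                    /* Skip the whole page if ip cannot possibly be in it. */
                    if (ip < pg->ips[0] || ip > pg->ips[pg->index - 1])
                            continue;
                    if (bsearch(&ip, pg->ips, pg->index, sizeof(ip), cmp_ip))
                            return ip;
            }
            return 0;
    }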

Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-05-16 19:58:46 -04:00
Steven Rostedt 706c81f87f ftrace: Remove extra helper functions
The ftrace_record_ip() and ftrace_alloc_dyn_node() functions were from
the time of the ftrace daemon. Although they are still used, they
make things a bit more complex than necessary.

Move the code into the one function that uses it, and remove the
helper functions.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-05-16 19:58:45 -04:00
Steven Rostedt 9fd49328fc ftrace: Sort all function addresses, not just per page
Instead of just sorting the ip's of the functions per ftrace page,
sort the entire list before adding them to the ftrace pages.

This will allow the bsearch algorithm to be sped up as it can
also sort by pages, not just records within a page.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-05-16 19:58:44 -04:00
Vaibhav Nagarnaik 71babb2705 tracing: change CPU ring buffer state from tracing_cpumask
According to Documentation/trace/ftrace.txt:

tracing_cpumask:

        This is a mask that lets the user only trace
        on specified CPUS. The format is a hex string
        representing the CPUS.

The tracing_cpumask currently doesn't affect the tracing state of
per-CPU ring buffers.

This patch enables/disables CPU recording as its corresponding bit in
tracing_cpumask is set/unset.
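
For illustration only, a small sketch that restricts tracing to CPUs 0 and 1 by writing a hex mask; the mount point below is an assumption (tracefs/debugfs may be mounted elsewhere, e.g. /sys/kernel/tracing).

    #include <stdio.h>

    int main(void)
    {
            /* Path is an assumption; adjust to where tracing is mounted. */
            const char *path = "/sys/kernel/debug/tracing/tracing_cpumask";
            FILE *f = fopen(path, "w");

            if (!f) {
                    perror(path);
                    return 1;
            }
            fprintf(f, "3\n");   /* hex mask 0x3: bits 0 and 1 -> CPU0 and CPU1 */
            fclose(f);
            return 0;
    }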

Link: http://lkml.kernel.org/r/1336096792-25373-3-git-send-email-vnagarnaik@google.com

Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Laurent Chavey <chavey@google.com>
Cc: Justin Teravest <teravest@google.com>
Cc: David Sharp <dhsharp@google.com>
Signed-off-by: Vaibhav Nagarnaik <vnagarnaik@google.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-05-16 19:50:38 -04:00
Namhyung Kim 0a3d7ce7e6 tracing: Check return value of tracing_dentry_percpu()
If tracing_dentry_percpu() fails, tracing_init_debugfs_percpu()
will try to create the per-cpu directories on debugfs' root directory,
as d_percpu is NULL.

Link: http://lkml.kernel.org/r/1335143517-2285-1-git-send-email-namhyung.kim@lge.com

Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Namhyung Kim <namhyung.kim@lge.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-05-16 19:50:37 -04:00
Steven Rostedt 308f7eeb78 ring-buffer: Reset head page before running self test
When the ring buffer does its consistency test on itself, it
removes the head page, runs the tests, and then adds it back
to what the "head_page" pointer was. But because the head_page
pointer may lag behind the real head page (held by the linked
list pointer), the reset may be incorrect.

Instead, if the head_page exists (it does not on first allocation),
reset it back to the real head page before running the consistency
tests. Then it will be put back in its original location after
the tests are complete.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-05-16 19:50:36 -04:00
Steven Rostedt 659f451ff2 ring-buffer: Add integrity check at end of iter read
There used to be ring buffer integrity checks after updating the
size of the ring buffer. But now that the ring buffer can modify
the size while the system is running, the integrity checks were
removed, as they require the ring buffer to be disabled to perform
the check.

Move the integrity check to the reading of the ring buffer via the
iterator reads (the "trace" file). As reading via an iterator requires
disabling the ring buffer, it is a perfect place to have it.

If the ring buffer happens to be disabled when updating the size,
we still perform the integrity check.

Cc: Vaibhav Nagarnaik <vnagarnaik@google.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-05-16 19:50:23 -04:00
Vaibhav Nagarnaik 5040b4b7bc ring-buffer: Make addition of pages in ring buffer atomic
This patch adds the capability to add new pages to a ring buffer
atomically while write operations are going on. This makes it possible
to expand the ring buffer size without reinitializing the ring buffer.

The new pages are attached between the head page and its previous page.
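
A sketch of that list operation with simplified types (not the ring buffer's real structures; the real code also has to do this safely against concurrent writers, which the sketch ignores): a pre-built chain of new pages is spliced into the circular list between the head page and its previous page.

    struct rb_page_sketch {
            struct rb_page_sketch *next, *prev;
    };

    /* Splice the chain [first .. last] in just before 'head'. */
    void insert_before_head(struct rb_page_sketch *head,
                            struct rb_page_sketch *first,
                            struct rb_page_sketch *last)
    {
            struct rb_page_sketch *prev = head->prev;

            prev->next = first;
            first->prev = prev;
            last->next = head;
            head->prev = last;
    }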

Link: http://lkml.kernel.org/r/1336096792-25373-2-git-send-email-vnagarnaik@google.com

Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Laurent Chavey <chavey@google.com>
Cc: Justin Teravest <teravest@google.com>
Cc: David Sharp <dhsharp@google.com>
Signed-off-by: Vaibhav Nagarnaik <vnagarnaik@google.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-05-16 16:25:51 -04:00
Vaibhav Nagarnaik 83f40318da ring-buffer: Make removal of ring buffer pages atomic
This patch adds the capability to remove pages from a ring buffer
without destroying any existing data in it.

This is done by removing the pages after the tail page. This makes sure
that first all the empty pages in the ring buffer are removed. If the
head page is one in the list of pages to be removed, then the page after
the removed ones is made the head page. This removes the oldest data
from the ring buffer and keeps the latest data around to be read.

To do this in a non-racy manner, tracing is stopped for a very short
time while the pages to be removed are identified and unlinked from the
ring buffer. The pages are freed after the tracing is restarted to
minimize the time needed to stop tracing.

The context in which the pages from the per-cpu ring buffer are removed
runs on the respective CPU. This minimizes the events not traced to only
NMI trace contexts.
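
A simplified sketch of the unlink step (singly linked, no concurrency, hypothetical names, not the actual ring buffer code): pages following the tail are detached, and if the head page falls inside the detached range, the page after that range becomes the new head so the oldest data is dropped.

    struct rb_page_sketch {
            struct rb_page_sketch *next;
    };

    /* Returns the (possibly updated) head page. */
    struct rb_page_sketch *remove_after_tail(struct rb_page_sketch *tail,
                                             struct rb_page_sketch *head,
                                             int nr_remove)
    {
            struct rb_page_sketch *last = tail;
            int head_removed = 0;
            int i;

            for (i = 0; i < nr_remove; i++) {
                    last = last->next;
                    if (last == head)
                            head_removed = 1;
            }
            tail->next = last->next;        /* unlink [old tail->next .. last] */
            if (head_removed)
                    head = last->next;      /* page after the removed ones */
            /* The detached pages are freed only after tracing is restarted. */
            return head;
    }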

Link: http://lkml.kernel.org/r/1336096792-25373-1-git-send-email-vnagarnaik@google.com

Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Laurent Chavey <chavey@google.com>
Cc: Justin Teravest <teravest@google.com>
Cc: David Sharp <dhsharp@google.com>
Signed-off-by: Vaibhav Nagarnaik <vnagarnaik@google.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-05-16 16:18:57 -04:00
Steven Rostedt 6edb2a8a38 tracing: Clean up tracing_mark_write()
On gcc 4.5 the function tracing_mark_write() would give a warning
about page2 being uninitialized. This is due to a bug in gcc, because
the logic prevents page2 from being used uninitialized, and
gcc 4.6+ does not complain (correctly).

Instead of adding an "uninitialized" annotation around page2, which could
hide a bug later on, I combined page1 and page2 into an array, map_pages[].
This binds the two together, and both are modified according to nr_pages
(which gcc 4.5 seems to ignore). This no longer gives a warning with
gcc 4.5 nor with gcc 4.6.
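
A sketch of the pattern (hypothetical names, not the real tracing_mark_write()): the two locals become one array whose elements are written and read under the same nr_pages bound, so the compiler can no longer claim one of them is used uninitialized.

    struct page;                            /* opaque for the sketch */
    void use_page(struct page *pg);         /* assumed consumer */

    void copy_user_pages(struct page **src, int nr_pages)
    {
            struct page *map_pages[2];
            int i;

            /* nr_pages is 1 or 2; writes and reads use the same bound. */
            for (i = 0; i < nr_pages; i++)
                    map_pages[i] = src[i];

            for (i = 0; i < nr_pages; i++)
                    use_page(map_pages[i]);
    }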

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-05-16 16:18:57 -04:00
Robert Richter 978da300c7 perf/x86/ibs: Fix undefined reference to `get_ibs_caps'
Fixing i386 allnoconfig build errors:

 arch/x86/built-in.o: In function `amd_pmu_hw_config':
 perf_event_amd.c:(.text+0xc3e1): undefined reference to `get_ibs_caps'

Reported-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-14 14:31:35 +02:00
Ingo Molnar 0c5a0f96e8 Arjan & Linus Annotation Edition
More like Linus', but hey at least an attempt at implementing a suggestion by
 Arjan made this time around!
 
 . Fix indirect calls beautifier, reported by Linus.
 
 . Use the objdump comments to nuke specificities about how access to a well
   known variable is encoded, suggested by Linus.
 
 . Show the number of places that jump to a target, requested by Arjan.
 
 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

Merge tag 'perf-core-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core

Arjan & Linus Annotation Edition

 - Fix indirect calls beautifier, reported by Linus.

 - Use the objdump comments to nuke specificities about how access to a well
   known variable is encoded, suggested by Linus.

 - Show the number of places that jump to a target, requested by Arjan.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-14 10:45:01 +02:00
Arnaldo Carvalho de Melo 54e7a4e88e perf annotate browser: Add key bindings help window
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-1txmtzf71eqie5xcukbfxors@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-12 16:36:55 -03:00
Arnaldo Carvalho de Melo 2402e4a936 perf annotate browser: Show 'jumpy' functions
Just press 'J' and see how many places jump to jump targets.

The hottest jump target appears in red, targets with more than one
source have a different color than single source jump targets.

Suggested-by: Arjan van de Ven <arjan@infradead.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-7452y0dmc02a20ooins7rn79@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-12 16:21:53 -03:00
Arnaldo Carvalho de Melo 7d5b12f5a0 perf annotate browser: Count the numbers of jump sources to a target
Instead of simply marking an offset as a jump target, count the number
of jump sources, so that we can implement a new feature: showing "jumpy"
targets, i.e. addresses that lots of places jump to.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-vc7b0u5yxgrubig0q61ayhxf@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-12 13:40:52 -03:00
Arnaldo Carvalho de Melo c46219ac34 perf annotate: Introduce ->free() method in ins_ops
So that we don't special case disasm_line__free, allowing each
instruction class to provide a specialized destructor, as is needed
for 'lock'.
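
A rough sketch of the shape of this (field and function names simplified, not the exact perf structs): the per-instruction-class ops table gains a free callback, and the generic free path calls it when present.

    #include <stdlib.h>

    struct ins_operands_sketch {
            char *raw;
            void *locked_prefix;    /* e.g. what a 'lock' line owns */
    };

    struct ins_ops_sketch {
            void (*free)(struct ins_operands_sketch *ops);
            /* parse()/scnprintf() callbacks elided */
    };

    static void ins_generic_free(struct ins_operands_sketch *ops)
    {
            free(ops->raw);
    }

    static void free_line(struct ins_operands_sketch *ops,
                          const struct ins_ops_sketch *ins)
    {
            if (ins && ins->free)
                    ins->free(ops);         /* class-specific destructor */
            else
                    ins_generic_free(ops);  /* default path */
    }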

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-xxw4vs5n077tf35jsvjzylhb@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-12 13:26:20 -03:00
Arnaldo Carvalho de Melo 7a997fe401 perf annotate: Augment lock instruction output
It just chops off the 'lock' and uses the ins__find, etc. machinery to
call instruction-specific parsers/beautifiers.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-4913ba2dzakz5rivgumosqbh@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-12 13:15:34 -03:00
Arnaldo Carvalho de Melo a43712c472 perf annotate: Resolve symbols using objdump comment for single op ins
Starting with inc, incl, dec, decl.

Requested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-jvh0jspefr5jyn0l7qko12st@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-11 17:21:09 -03:00
Arnaldo Carvalho de Melo 6de783b6f5 perf annotate: Resolve symbols using objdump comment
This:

     mov    0x95bbb6(%rip),%ecx        # ffffffff81ae8d04 <d_hash_shift>

Becomes:

     mov    d_hash_shift,%ecx

Ditto for many more instructions that take two operands.
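
A self-contained sketch of the kind of comment parsing involved (not perf's actual parser): extract the symbol name between '<' and '>' in the objdump comment and use it to rewrite the operand.

    #include <stdio.h>
    #include <string.h>

    /* Pull the symbol name out of "... # ffffffff81ae8d04 <d_hash_shift>". */
    static int comment_symbol(const char *line, char *name, size_t len)
    {
            const char *start = strchr(line, '<');
            const char *end = start ? strchr(start, '>') : NULL;

            if (!start || !end || (size_t)(end - start - 1) >= len)
                    return -1;
            memcpy(name, start + 1, end - start - 1);
            name[end - start - 1] = '\0';
            return 0;
    }

    int main(void)
    {
            char sym[64];
            const char *line =
                "mov    0x95bbb6(%rip),%ecx        # ffffffff81ae8d04 <d_hash_shift>";

            if (!comment_symbol(line, sym, sizeof(sym)))
                    printf("mov    %s,%%ecx\n", sym);
            return 0;
    }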

Requested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-i5opbyai2x6mn9e5yjmhx9k6@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-11 17:19:20 -03:00
Arnaldo Carvalho de Melo e8ea156195 perf annotate: Use raw form for register indirect call instructions
callq  *0x10(%rax)

was being rendered in simplified mode as:

   callq  *10

I.e. hex, but without the 0x and omitting the register. In such cases
just use the raw form.

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-m91tv004h2m1fkfgu6ovx3hb@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-11 12:28:55 -03:00
Ingo Molnar 5dcefda0fd Fixes and improvements for perf/core:
. perf_target: abstraction for --uid, --pid, --tid, --cpu, --all-cpus handling,
   eliminating code duplicated in the tools, having constraints that apply to
   all of them, from Namhyung Kim
 
 . Fixes for handling fallback to cpu-clock on PPC, from David Ahern
 
 . Fix for processing events with unknown size, from Jiri Olsa
 
 . Compilation fix on 32-bit, from Jiri Olsa
 
 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

Merge tag 'perf-core-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core

Fixes and improvements for perf/core:

- perf_target: abstraction for --uid, --pid, --tid, --cpu, --all-cpus handling,
  eliminating code duplicated in the tools, having constraints that apply to
  all of them, from Namhyung Kim

- Fixes for handling fallback to cpu-clock on PPC, from David Ahern

- Fix for processing events with unknown size, from Jiri Olsa

- Compilation fix on 32-bit, from Jiri Olsa

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-11 08:13:55 +02:00
Arnaldo Carvalho de Melo 5a5626b1b4 perf hists browser: Use '/' for search/filter instead of 's'
That is what is used in vi and mutt, as well as in the 'annotate'
browser.

Eventually we can have keymappings to make people used to other key
associations more comfortable.

Suggested-by: Ingo Molnar <mingo@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-fyln9286b8gx5q4n277l0djs@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-10 13:07:59 -03:00
Ingo Molnar c4f400e837 Merge branch 'tip/perf/core' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace into perf/core 2012-05-09 19:34:30 +02:00
David Ahern f6c1be2711 perf annotate: shorten helpline so it fits in visible space
Additional toggles have pushed the help line out of view on a modestly
sized terminal (120 columns wide). Shorten it to just reminders.

Signed-off-by: David Ahern <dsahern@gmail.com>
Link: http://lkml.kernel.org/r/1336510879-64610-1-git-send-email-dsahern@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-09 12:02:36 -03:00
David Ahern d1cae34d6f perf record: Reset event name when falling back to cpu-clock
perf-record defaults to the H/W cycles event and if it is not supported
falls back to cpu-clock. Reset the event name as well.

Signed-off-by: David Ahern <dsahern@gmail.com>
Link: http://lkml.kernel.org/r/1336495811-58461-1-git-send-email-dsahern@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-09 12:02:10 -03:00
David Ahern 40491eaa46 perf top: Update event name when falling back to cpu-clock
The 'perf top' command falls back to cpu-clock if the H/W cycles event
is not supported, but the event name is not updated leading to a
misleading header:

PerfTop: 8 irqs/sec  kernel:75.0%  exact:  0.0% [1000Hz cycles],  ...

Update the event name when the event type is changed so that the
header displays correctly:

PerfTop: 794 irqs/sec  kernel:100.0%  exact:  0.0% [1000Hz cpu-clock], ...

Signed-off-by: David Ahern <dsahern@gmail.com>
Link: http://lkml.kernel.org/r/1336495789-58420-1-git-send-email-dsahern@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-09 12:01:57 -03:00
David Ahern 979987a567 perf stat: handle ENXIO error for perf_event_open
perf stat on PPC currently fails to run:

$ perf stat -- sleep 1
  Error: open_counter returned with 6 (No such device or address). /bin/dmesg may provide additional information.

  Fatal: Not all events could be opened.

The problem is that until 2.6.37 (behavior changed with commit b0a873e)
perf on PPC returns ENXIO when hw_perf_event_init() fails. With this
patch we get the expected behavior:

$ perf stat -v -- sleep 1
cycles event is not supported by the kernel.
stalled-cycles-frontend event is not supported by the kernel.
stalled-cycles-backend event is not supported by the kernel.
instructions event is not supported by the kernel.
branches event is not supported by the kernel.
branch-misses event is not supported by the kernel.

...
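
In essence the change treats ENXIO like ENOENT when deciding that an event is unsupported; a minimal sketch of that check (the helper name is hypothetical, not perf's code):

    #include <errno.h>
    #include <stdbool.h>

    /* Older PPC kernels (pre-2.6.37) return ENXIO rather than ENOENT when a
     * hardware event is unsupported, so both mean "try a fallback event". */
    static bool event_unsupported(int open_errno)
    {
            return open_errno == ENOENT || open_errno == ENXIO;
    }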

Signed-off-by: David Ahern <dsahern@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1336490956-57145-1-git-send-email-dsahern@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-09 11:58:48 -03:00
David Ahern 028d455b12 perf record: Fix fallback to cpu-clock on ppc
perf-record on PPC is not falling back to cpu-clock:

$ perf record -ag -fo /tmp/perf.data -- sleep 1

  Error: sys_perf_event_open() syscall returned with 6 (No such device or address).  /bin/dmesg may provide additional information.

  Fatal: No CONFIG_PERF_EVENTS=y kernel support configured?

The problem is that until 2.6.37 (behavior changed with commit b0a873e)
perf on PPC returns ENXIO when hw_perf_event_init() fails. With this
patch we get the expected behavior:

$ perf record -ag -fo /tmp/perf.data -v -- sleep 1
Old kernel, cannot exclude guest or host samples.
The cycles event is not supported, trying to fall back to cpu-clock-ticks
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.151 MB /tmp/perf.data (~6592 samples) ]

Signed-off-by: David Ahern <dsahern@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1336490937-57106-1-git-send-email-dsahern@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-09 11:57:29 -03:00
Jiri Olsa 04480d0110 perf report: Fix format string for x86-32 compilation
Use PRIu64 for printing out the u64 nr_events to fix compilation
for 32-bit x86.
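
For reference, the portable way to print a 64-bit value that works on both 32-bit and 64-bit x86:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t nr_events = 123456789ULL;

            /* "%" PRIu64 expands to the right conversion specifier for the
             * target ABI, avoiding the format warning on x86-32. */
            printf("%" PRIu64 " events\n", nr_events);
            return 0;
    }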

Cc: Arun Sharma <asharma@fb.com>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Frank C. Eigler <fche@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: Tom Zanussi <tzanussi@gmail.com>
Cc: Ulrich Drepper <drepper@gmail.com>
Link: http://lkml.kernel.org/r/1335958638-5160-7-git-send-email-jolsa@redhat.com
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-09 11:27:08 -03:00
Peter Zijlstra cb04ff9ac4 sched, perf: Use a single callback into the scheduler
We can easily use a single callback for both sched-in and sched-out. This
reduces the code footprint in the scheduler path as well as removes
the PMU black spot otherwise present between the out and in callback.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-o56ajxp1edwqg6x9d31wb805@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:17 +02:00
Robert Richter 8b1e13638d perf/x86-ibs: Fix usage of IBS op current count
The value of IbsOpCurCnt rolls over when it reaches IbsOpMaxCnt. Thus,
it is reset to zero by hardware. To get the correct count we need to
add the max count to it in case we received an ibs sample (valid bit
set).
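
Put as a sketch (hypothetical names, not the actual driver code):

    /* IbsOpCurCnt wraps to zero at IbsOpMaxCnt, so when the sample is valid
     * the max count has to be added back to get the real number of ops. */
    static unsigned long long ibs_op_count(unsigned long long cur_cnt,
                                           unsigned long long max_cnt,
                                           int sample_valid)
    {
            return cur_cnt + (sample_valid ? max_cnt : 0);
    }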

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-13-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:17 +02:00
Robert Richter fc5fb2b5e1 perf/x86-ibs: Catch spurious interrupts after stopping IBS
After disabling IBS there could still be incoming NMIs with samples
that even have the valid bit cleared. Mark all these NMIs as handled to
avoid spurious interrupt messages.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-12-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:16 +02:00
Robert Richter c9574fe0bd perf/x86-ibs: Implement workaround for IBS erratum #420
When disabling ibs there might be the case where hardware continuously
generates interrupts. This is described in erratum #420 (Instruction-
Based Sampling Engine May Generate Interrupt that Cannot Be Cleared).
To avoid this we must clear the counter mask first and then clear the
enable bit. This patch implements this.

See Revision Guide for AMD Family 10h Processors, Publication #41322.

Note: We now keep track of the last-read IBS config value, which is
then used to disable IBS. To update the config value we now pass a
pointer to the functions reading it.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-11-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:16 +02:00
Robert Richter 7caaf4d824 perf/x86-ibs: Extend hw period that triggers overflow
If the last hw period is too short we might hit the irq handler which
biases the results. Thus try to have a max last period that triggers
the sw overflow.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-10-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:15 +02:00
Robert Richter fc006cf7cc perf/x86-ibs: Trigger overflow if remaining period is too small
There are cases where the remaining period is smaller than the minimal
possible value. In this case the counter is restarted with the minimal
period. This is of no use as the interrupt handler will trigger
immediately again and most likely hits itself. This biases the
results.

So, if the remaining period is within the min range, we better do not
restart the counter and instead trigger the overflow.
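
A minimal sketch of that decision (names hypothetical, not the driver's actual code):

    /* Returns 1 if the caller should account an overflow now instead of
     * reprogramming the counter with a uselessly short period. */
    static int period_too_small(long long *period_left, long long min_period)
    {
            if (*period_left < min_period) {
                    *period_left = 0;
                    return 1;
            }
            return 0;
    }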

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-9-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:15 +02:00
Robert Richter 98112d2e95 perf/x86-ibs: Rename some variables
Simple patch that just renames some variables for better
understanding.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-8-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:14 +02:00
Robert Richter 450bbd493d perf/x86-ibs: Precise event sampling with IBS for AMD CPUs
This patch adds support for precise event sampling with IBS. There are
two counting modes to count either cycles or micro-ops. If the
corresponding performance counter events (hw events) are set up with
the precise flag set, the request is redirected to the ibs pmu:

 perf record -a -e cpu-cycles:p ...    # use ibs op counting cycle count
 perf record -a -e r076:p ...          # same as -e cpu-cycles:p
 perf record -a -e r0C1:p ...          # use ibs op counting micro-ops

Each ibs sample contains a linear address that points to the
instruction that was causing the sample to trigger. With ibs we have
skid 0. Thus, ibs supports precise levels 1 and 2. Samples are marked
with the PERF_EFLAGS_EXACT flag set. In rare cases the rip is invalid
when IBS was not able to record the rip correctly. Then the
PERF_EFLAGS_EXACT flag is cleared and the rip is taken from pt_regs.

V2:
* don't drop samples in precise level 2 if rip is invalid, instead
  support the PERF_EFLAGS_EXACT flag

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120502103309.GP18810@erda.amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:14 +02:00
Robert Richter d47e8238cd perf/x86-ibs: Take instruction pointer from ibs sample
Each IBS sample contains a linear address of the instruction that
caused the sample to trigger. This address is more precise than the
rip that was taken from the interrupt handler's stack. Update the rip
with that address. We use this in the next patch to implement
precise-event sampling on AMD systems using IBS.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-6-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:13 +02:00
Robert Richter 6accb9cf76 perf/x86-ibs: Fix frequency profiling
This fixes profiling at a fixed frequency; in this case the freq value
and sample period were set up incorrectly. Since sampling periods are
adjusted we also allow periods that have the lower 4 bits set.

Another fix is the setup of the hw counter: If we modify
hwc->sample_period, we also need to update hwc->last_period and
hwc->period_left.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-5-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:13 +02:00
Robert Richter 7bf352384f perf/x86-ibs: Enable ibs op micro-ops counting mode
Allow enabling ibs op micro-ops counting mode.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-4-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:12 +02:00
Robert Richter fd0d000b2c perf: Pass last sampling period to perf_sample_data_init()
We always need to pass the last sample period to
perf_sample_data_init(), otherwise the event distribution will be
wrong. Thus, modifying the function interface with the required period
as argument. So basically a pattern like this:

        perf_sample_data_init(&data, ~0ULL);
        data.period = event->hw.last_period;

will now be like that:

        perf_sample_data_init(&data, ~0ULL, event->hw.last_period);

Avoids uninitialized data.period and simplifies code.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-3-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:12 +02:00
Robert Richter c75841a398 perf/x86-ibs: Fix update of period
The last sw period was not correctly updated on overflow and thus led
to wrong distribution of events. We always need to properly initialize
data.period in struct perf_sample_data.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-2-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:11 +02:00
Ingo Molnar ad8537cda6 Merge branch 'perf/x86-ibs' into perf/core 2012-05-09 15:22:23 +02:00
Steven Rostedt 68179686ac tracing: Remove ftrace_disable/enable_cpu()
The ftrace_disable_cpu() and ftrace_enable_cpu() functions were
needed back before the ring buffer was lockless. Now that the
ring buffer is lockless (and has been for some time), these functions
serve no purpose, and unnecessarily slow down operations of the tracer.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-05-08 21:06:26 -04:00
Jiri Olsa 50e18b94c6 tracing: Use seq_*_private interface for some seq files
It's appropriate to use the __seq_open_private interface to open
some of the trace seq files, because it covers all the steps we are
duplicating in the tracing code - zallocating the iterator and
setting it as the seq_file's private data.

Using this for following files:
  trace
  available_filter_functions
  enabled_functions
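
A sketch of how such an open routine ends up looking (the iterator layout and my_seq_ops are assumptions for illustration, not the tracing code):

    #include <linux/fs.h>
    #include <linux/seq_file.h>
    #include <linux/errno.h>

    struct my_iter {
            loff_t pos;
            /* per-open cursor state */
    };

    /* .start/.next/.stop/.show assumed to be defined elsewhere. */
    extern const struct seq_operations my_seq_ops;

    static int my_open(struct inode *inode, struct file *file)
    {
            struct my_iter *iter;

            /* One call: zero-allocates the iterator, does seq_open(), and
             * stores the iterator as seq_file->private. */
            iter = __seq_open_private(file, &my_seq_ops, sizeof(*iter));
            if (!iter)
                    return -ENOMEM;
            return 0;
    }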

Link: http://lkml.kernel.org/r/1335342219-2782-5-git-send-email-jolsa@redhat.com

Signed-off-by: Jiri Olsa <jolsa@redhat.com>

[
 Fixed warnings for:
   kernel/trace/trace.c: In function '__tracing_open':
   kernel/trace/trace.c:2418:11: warning: unused variable 'ret' [-Wunused-variable]
   kernel/trace/trace.c:2417:19: warning: unused variable 'm' [-Wunused-variable]
]

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-05-08 21:04:12 -04:00
Ingo Molnar 149936a068 Merge branch 'perf/annotate' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core
Perf annotate browser improvements:

 - Get back the line separating the overheads from the disassembly, requested by
   Peter Zijlstra. Linus agreed, now that it is a solid line and more column real
   estate was harvested. Also it has the jump->arrow lines separated from it by
   the address/jump target column.

 - Don't change asm line color when toggling source code view. Requested by
   Peter Zijlstra.

Current snapshot:

 avtab_search_node
        │      push   %rbp
        │      mov    %rsp,%rbp
        │    → callq  mcount
        │      movzwl 0x6(%rsi),%edx
        │      and    $0x7fff,%dx
        │      test   %rdi,%rdi
        │    ↓ jne    20
   0.42 │17:┌─→xor    %eax,%eax
        │19:│  leaveq
   0.42 │   │← retq
        │   │  nopl   0x0(%rax,%rax,1)
        │20:│  mov    (%rdi),%rax
   0.08 │   │  test   %rax,%rax
        │   └──je     17
        │      movzwl (%rsi),%ecx
        │      movzwl 0x2(%rsi),%r9d
        │      movzwl 0x4(%rsi),%r8d
        │      movzwl %cx,%esi
        │      movzwl %r9w,%r10d
        │      shl    $0x9,%esi
        │      lea    (%rsi,%r10,4),%esi
        │      lea    (%r8,%rsi,1),%esi
        │      and    0x10(%rdi),%si
        │      movzwl %si,%esi
        │      mov    (%rax,%rsi,8),%rax
   1.01 │      test   %rax,%rax
        │    ↑ je     19
        │      nopw   0x0(%rax,%rax,1)
   3.19 │60:   cmp    %cx,(%rax)
        │    ↓ jne    7e
   0.08 │      cmp    %r9w,0x2(%rax)
        │    ↓ jne    7e
        │      cmp    %r8w,0x4(%rax)
        │    ↓ jne    79
        │      test   %dx,0x6(%rax)
        │    ↑ jne    19
        │79:   cmp    %r8w,0x4(%rax)
  83.45 │7e: ↑ ja     17
   3.36 │      mov    0x10(%rax),%rax
   7.98 │      test   %rax,%rax
        │    ↑ jne    60
        │      leaveq
        │    ← retq

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-08 16:55:15 +02:00
Arnaldo Carvalho de Melo 80eebd94d2 perf top: Default to system wide using perf_target methods
Additionally we were not checking if a cpu list had been provided by the
user. Fix that.

Reported-by: David Ahern <dsahern@gmail.com>
Reported-by: Namhyung Kim <namhyung@gmail.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-ao3zrouylwmt7h9ikj0krubi@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-08 10:47:09 -03:00
Minho Ban b02ee9a33b tracing: Prevent wasting time evaluating parameters in trace_preempt_on/off
This avoids wasting time evaluating parameters in trace_preempt_on/off
when the tracer config is off.

The patch was mainly inspired by Steven Rostedt; thanks, Steven.
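
A minimal sketch of the general pattern (all names are hypothetical, not the kernel's): guard the call so the macro arguments are never evaluated while the tracer is off.

    #include <stdbool.h>

    extern bool preempt_tracer_active;      /* hypothetical enable flag */
    void do_trace_preempt_on(unsigned long a0, unsigned long a1);

    /* a0/a1 are only evaluated when tracing is actually enabled. */
    #define trace_preempt_on_sketch(a0, a1)                 \
            do {                                            \
                    if (preempt_tracer_active)              \
                            do_trace_preempt_on(a0, a1);    \
            } while (0)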

Link: http://lkml.kernel.org/r/4FA73510.7070705@samsung.com

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Turner <pjt@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Signed-off-by: Minho Ban <mhban@samsung.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-05-08 09:33:52 -04:00
Arnaldo Carvalho de Melo b9818e9375 perf annotate browser: Compact 'nop' output
Just suppress the nop operands; future infrastructure that will record
the instruction length (and its contents) in struct ins will allow
rendering them as nopN, i.e. nop5 for a 5-byte nop.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-qddbeglfzqdlal8vj2yaj67y@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-07 19:00:42 -03:00
Arnaldo Carvalho de Melo 5417072bf6 perf annotate browser: Do raw printing in 'o'ffset in a single place
Instead of doing the same in all ins scnprintf methods.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-8mfairi2n1nentoa852alazv@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-07 18:54:16 -03:00