Commit Graph

769279 Commits

Author SHA1 Message Date
Jiri Olsa cb45302d7c perf/hw_breakpoint: Remove superfluous bp->attr.disabled = 0
Once the breakpoint was successfully modified, the attr->disabled value
is already in bp->attr.disabled, so there's no reason to set it again;
remove that assignment.
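As a standalone illustration of that invariant (hypothetical names, not the kernel code): after a successful modify, the cached attr already carries the requested disabled value, so clearing it again is a no-op.

  #include <stdbool.h>
  #include <stdio.h>

  struct mock_attr { bool disabled; };
  struct mock_bp   { struct mock_attr attr; };

  /* stand-in for modify_user_hw_breakpoint_check(): on success the
   * requested attr has been copied into the breakpoint */
  static int mock_modify_check(struct mock_bp *bp, const struct mock_attr *attr)
  {
          bp->attr = *attr;
          return 0;
  }

  int main(void)
  {
          struct mock_bp   bp  = { .attr = { .disabled = true } };
          struct mock_attr req = { .disabled = false };

          if (!mock_modify_check(&bp, &req))
                  /* bp.attr.disabled is already 0 here, so an extra
                   * "bp.attr.disabled = 0;" would change nothing */
                  printf("disabled=%d\n", bp.attr.disabled);
          return 0;
  }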

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Milind Chabbi <chabbi.milind@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180827091228.2878-4-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-30 14:49:23 -03:00
Jiri Olsa bd14406b78 perf/hw_breakpoint: Modify breakpoint even if the new attr has disabled set
We need to change the breakpoint even if the attr with new fields has
disabled set to true.

The current code prevents the following user code from changing the
breakpoint address:

  ptrace(PTRACE_POKEUSER, child, offsetof(struct user, u_debugreg[0]), addr_1)
  ptrace(PTRACE_POKEUSER, child, offsetof(struct user, u_debugreg[0]), addr_2)
  ptrace(PTRACE_POKEUSER, child, offsetof(struct user, u_debugreg[7]), dr7)

The first PTRACE_POKEUSER creates the breakpoint with attr.disabled set
to true:

  ptrace_set_breakpoint_addr(nr = 0)
    struct perf_event *bp = t->ptrace_bps[nr];

    ptrace_register_breakpoint(..., disabled = true)
      ptrace_fill_bp_fields(..., disabled)
      register_user_hw_breakpoint

So the second PTRACE_POKEUSER will be omitted:

  ptrace_set_breakpoint_addr(nr = 0)
    struct perf_event *bp = t->ptrace_bps[nr];
    struct perf_event_attr attr = bp->attr;

    modify_user_hw_breakpoint(bp, &attr)
      if (!attr->disabled)
        modify_user_hw_breakpoint_check
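To make the effect concrete, here is a tiny standalone mock (hypothetical types, not the kernel code) of why gating the modification on !attr->disabled loses a requested address change, while applying it unconditionally does not:

  #include <stdbool.h>
  #include <stdio.h>

  struct mock_attr { unsigned long addr; bool disabled; };

  /* old behaviour: skip the modification while the breakpoint is disabled */
  static void modify_gated(struct mock_attr *bp, const struct mock_attr *req)
  {
          if (!req->disabled)
                  *bp = *req;
  }

  /* new behaviour: always apply the modification, disabled or not */
  static void modify_always(struct mock_attr *bp, const struct mock_attr *req)
  {
          *bp = *req;
  }

  int main(void)
  {
          struct mock_attr bp  = { .addr = 0x1000, .disabled = true };
          struct mock_attr req = { .addr = 0x2000, .disabled = true };

          modify_gated(&bp, &req);
          printf("gated:  addr=%#lx (update lost)\n", bp.addr);

          modify_always(&bp, &req);
          printf("always: addr=%#lx (update applied)\n", bp.addr);
          return 0;
  }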

Reported-by: Milind Chabbi <chabbi.milind@gmail.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180827091228.2878-3-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-30 14:49:23 -03:00
Jiri Olsa 9b3579fc6c perf tests: Add breakpoint modify tests
Add two tests that target kernel breakpoint modification bugs.

The first test creates a HW breakpoint, tries to change it and checks
that it was properly changed. It targets a kernel issue that prevents a
HW breakpoint from being changed via the ptrace interface.

The first test forks; the child sets itself up as a ptrace tracee and
waits in a signal for the parent to trace it, then it calls bp_1 and
quits.

The parent does the following steps:

 - creates a new breakpoint (id 0) for bp_2 function
 - changes that breakpoint to bp_1 function
 - waits for the breakpoint to hit and checks
   it has proper rip of bp_1 function

This test targets a kernel issue that prevents changing disabled
breakpoints.

The second test mimics the first one, except for a few steps in the
parent:
 - creates a new breakpoint (id 0) for bp_1 function
 - changes that breakpoint to bogus (-1) address
 - waits for the breakpoint to hit and checks
   it has proper rip of bp_1 function

This test targets a kernel issue that disables an enabled breakpoint
after an unsuccessful change.
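A rough standalone sketch of the first test's ptrace flow on x86_64 (heavily simplified, no error handling; the dr7 value 0x1 just enables breakpoint slot 0, and the real test in arch/x86/tests/bp-modify.c is more careful):

  #include <signal.h>
  #include <stddef.h>
  #include <stdio.h>
  #include <sys/ptrace.h>
  #include <sys/types.h>
  #include <sys/user.h>
  #include <sys/wait.h>
  #include <unistd.h>

  /* two distinct functions to hang the breakpoint on */
  static void bp_1(void) { asm volatile(""); }
  static void bp_2(void) { asm volatile("nop"); }

  int main(void)
  {
          pid_t pid = fork();

          if (!pid) {                            /* tracee */
                  ptrace(PTRACE_TRACEME, 0, NULL, NULL);
                  raise(SIGSTOP);                /* wait for the parent */
                  bp_1();
                  _exit(0);
          }

          waitpid(pid, NULL, 0);
          /* create breakpoint 0 on bp_2, enable it, then retarget it to bp_1 */
          ptrace(PTRACE_POKEUSER, pid, offsetof(struct user, u_debugreg[0]), bp_2);
          ptrace(PTRACE_POKEUSER, pid, offsetof(struct user, u_debugreg[7]), 0x1);
          ptrace(PTRACE_POKEUSER, pid, offsetof(struct user, u_debugreg[0]), bp_1);

          ptrace(PTRACE_CONT, pid, NULL, NULL);
          waitpid(pid, NULL, 0);                 /* should stop with rip == bp_1 */

          struct user_regs_struct regs;
          ptrace(PTRACE_GETREGS, pid, NULL, &regs);
          printf("rip=%#llx bp_1=%p\n", (unsigned long long)regs.rip, (void *)bp_1);

          kill(pid, SIGKILL);
          return 0;
  }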

Committer testing:

  # uname -a
  Linux jouet 4.18.0-rc8-00002-g1236568ee3cb #12 SMP Tue Aug 7 14:08:26 -03 2018 x86_64 x86_64 x86_64 GNU/Linux
  # perf test -v "bp modify"
  62: x86 bp modify                                         :
  --- start ---
  test child forked, pid 25671
  in bp_1
  tracee exited prematurely 2
  FAILED arch/x86/tests/bp-modify.c:209 modify test 1 failed

  test child finished with -1
  ---- end ----
  x86 bp modify: FAILED!
  #

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Milind Chabbi <chabbi.milind@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180827091228.2878-2-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-30 14:49:22 -03:00
Martin Liška 1dc27f6330 perf annotate: Properly interpret indirect call
The patch changes the parsing of:

	callq  *0x8(%rbx)

from:

  0.26 │     → callq  *8

to:

  0.26 │     → callq  *0x8(%rbx)

In this case the address is followed by a register, so one can't parse
just the address.
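A standalone sketch of the parsing idea (parse_indirect_call() is a hypothetical helper, not the actual util/annotate.c code): when a register part follows the offset, keep the whole operand instead of printing only the parsed number.

  #include <stdio.h>
  #include <stdlib.h>

  static void parse_indirect_call(const char *ops, char *out, size_t n)
  {
          const char *p = ops + 1;                 /* skip the leading '*' */
          char *end;
          unsigned long addr = strtoul(p, &end, 16);

          if (*end == '(')                         /* offset(base) form    */
                  snprintf(out, n, "*%s", p);      /* keep it verbatim     */
          else
                  snprintf(out, n, "*%#lx", addr); /* plain address target */
  }

  int main(void)
  {
          char buf[64];

          parse_indirect_call("*0x8(%rbx)", buf, sizeof(buf));
          printf("%s\n", buf);        /* -> *0x8(%rbx), not just *8 */

          parse_indirect_call("*0xffffffff81000000", buf, sizeof(buf));
          printf("%s\n", buf);        /* -> *0xffffffff81000000     */
          return 0;
  }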

Committer testing:

1) run 'perf record sleep 10'
2) before applying the patch, run:

     perf annotate --stdio2 > /tmp/before

3) after applying the patch, run:

     perf annotate --stdio2 > /tmp/after

4) diff /tmp/before /tmp/after:
  --- /tmp/before 2018-08-28 11:16:03.238384143 -0300
  +++ /tmp/after  2018-08-28 11:15:39.335341042 -0300
  @@ -13274,7 +13274,7 @@
                ↓ jle    128
                  hash_value = hash_table->hash_func (key);
                  mov    0x8(%rsp),%rdi
  -  0.91       → callq  *30
  +  0.91       → callq  *0x30(%r12)
                  mov    $0x2,%r8d
                  cmp    $0x2,%eax
                  node_hash = hash_table->hashes[node_index];
  @@ -13848,7 +13848,7 @@
                   mov    %r14,%rdi
                   sub    %rbx,%r13
                   mov    %r13,%rdx
  -              → callq  *38
  +              → callq  *0x38(%r15)
                   cmp    %rax,%r13
     1.91        ↓ je     240
            1b4:   mov    $0xffffffff,%r13d
  @@ -14026,7 +14026,7 @@
                   mov    %rcx,-0x500(%rbp)
                   mov    %r15,%rsi
                   mov    %r14,%rdi
  -              → callq  *38
  +              → callq  *0x38(%rax)
                   mov    -0x500(%rbp),%rcx
                   cmp    %rax,%rcx
                 ↓ jne    9b0
<SNIP tons of other such cases>

Signed-off-by: Martin Liška <mliska@suse.cz>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Kim Phillips <kim.phillips@arm.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/bd1f3932-be2b-85f9-7582-111ee0a43b07@suse.cz
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-30 14:49:22 -03:00
Ingo Molnar 66e5db4a1c perf/core improvements and fixes:
LLVM/clang/eBPF: (Arnaldo Carvalho de Melo)
 
 - Allow passing options to llc in addition to clang.
 
 Hardware tracing: (Jack Henschel)
 
 - Improve the error message for PMU address filters, clarifying that the
   feature may be unavailable even on hardware with hardware tracing such
   as Intel PT.
 
 Python interface: (Jiri Olsa)
 
 - Fix read_on_cpu() interface.
 
 ELF/DWARF libraries: (Jiri Olsa)
 
 - Fix handling of the combo compressed module file + decompressed associated
   debuginfo file.
 
 Build (Rasmus Villemoes)
 
 - Disable parallelism for 'make clean', avoiding multiple submakes deleting
   the same files and causing the build to fail on systems such as Yocto.
 
 Kernel ABI copies: (Arnaldo Carvalho de Melo)
 
 - Update tools' copy of x86's cpufeatures.h.
 
 - Update arch/x86/lib/memcpy_64.S copy used in 'perf bench mem memcpy'.
 
 Miscellaneous: (Steven Rostedt)
 
 - Change libtraceevent to SPDX License format.
 
 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

Merge tag 'perf-core-for-mingo-4.19-20180820' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent

Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:

LLVM/clang/eBPF: (Arnaldo Carvalho de Melo)

 - Allow passing options to llc in addition to clang.

Hardware tracing: (Jack Henschel)

 - Improve the error message for PMU address filters, clarifying that the
   feature may be unavailable even on hardware with hardware tracing such
   as Intel PT.

Python interface: (Jiri Olsa)

 - Fix read_on_cpu() interface.

ELF/DWARF libraries: (Jiri Olsa)

 - Fix handling of the combo compressed module file + decompressed associated
   debuginfo file.

Build (Rasmus Villemoes)

 - Disable parallelism for 'make clean', avoiding multiple submakes deleting
   the same files and causing the build to fail on systems such as Yocto.

Kernel ABI copies: (Arnaldo Carvalho de Melo)

 - Update tools' copy of x86's cpufeatures.h.

 - Update arch/x86/lib/memcpy_64.S copy used in 'perf bench mem memcpy'.

Miscellaneous: (Steven Rostedt)

 - Change libtraceevent to SPDX License format.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-08-23 10:29:19 +02:00
Arnaldo Carvalho de Melo 78303650e4 tools arch: Update arch/x86/lib/memcpy_64.S copy used in 'perf bench mem memcpy'
To bring in the change made in this cset:

Fixes: a7bea83089 ("x86/asm/64: Use 32-bit XOR to zero registers")

  CC       /tmp/build/perf/bench/mem-memcpy-x86-64-asm.o
  LD       /tmp/build/perf/bench/perf-in.o
  LD       /tmp/build/perf/perf-in.o
  LINK     /tmp/build/perf/perf

Silencing this perf build warning:

  Warning: Kernel ABI header at 'tools/arch/x86/lib/memcpy_64.S' differs from latest version at 'arch/x86/lib/memcpy_64.S'
  diff -u tools/arch/x86/lib/memcpy_64.S arch/x86/lib/memcpy_64.S

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jan Beulich <JBeulich@suse.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-sad22dudoz71qr3tsnlqtkia@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-20 10:17:14 -03:00
Arnaldo Carvalho de Melo 252df17711 tools arch x86: Update tools's copy of cpufeatures.h
To get the changes in the following csets:

  301d328a6f ("x86/cpufeatures: Add EPT_AD feature bit")
  706d51681d ("x86/speculation: Support Enhanced IBRS on future CPUs")

No tools were affected, copy it to silence this perf tool build warning:

  Warning: Kernel ABI header at 'tools/arch/x86/include/asm/cpufeatures.h' differs from latest version at 'arch/x86/include/asm/cpufeatures.h'
  diff -u tools/arch/x86/include/asm/cpufeatures.h arch/x86/include/asm/cpufeatures.h

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Feiner <pfeiner@google.com>
Cc: Sai Praneeth <sai.praneeth.prakhya@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-bvs8wgd5wp4lz9f0xf1iug5r@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-20 10:13:13 -03:00
Jiri Olsa 721f0dfc3c perf python: Fix pyrf_evlist__read_on_cpu() interface
Jaroslav reported errors from valgrind over perf python script:

  # echo 0 > /sys/devices/system/cpu/cpu4/online
  # valgrind ./test.py
  ==7524== Memcheck, a memory error detector
  ...
  ==7524== Command: ./test.py
  ==7524==
  pid 7526 exited
  ==7524== Invalid read of size 8
  ==7524==    at 0xCC2C2B3: perf_mmap__read_forward (evlist.c:780)
  ==7524==    by 0xCC2A681: pyrf_evlist__read_on_cpu (python.c:959)
  ...
  ==7524==  Address 0x65c4868 is 16 bytes after a block of size 459,36..
  ==7524==    at 0x4C2B955: calloc (vg_replace_malloc.c:711)
  ==7524==    by 0xCC2F484: zalloc (util.h:35)
  ==7524==    by 0xCC2F484: perf_evlist__alloc_mmap (evlist.c:978)
  ...

The reason for this is in the python interface, which allows a script to
pass an arbitrary cpu number that is then used to access the struct
perf_evlist::mmap array. That's obviously wrong and works only when all
cpus are available, failing if some cpu is missing, like in the example
above.

This patch makes pyrf_evlist__read_on_cpu() search the evlist's maps
array for the proper map to access.

It's a linear search at the moment. Based on the way read_on_cpu() is
used, I don't think we need to be fast here, but we could add some hash
in the middle later to make it faster.

We don't allow python interface to set write_backward event attribute,
so it's safe to check only evlist's mmaps.
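A minimal standalone sketch of that lookup (mock types, not the real perf structs): instead of indexing mmaps[cpu] directly, scan the maps for the one whose recorded cpu matches what the script asked for.

  #include <stdio.h>

  struct mock_mmap { int cpu; };   /* stand-in for 'struct perf_mmap' */

  static struct mock_mmap *find_mmap(struct mock_mmap *maps, int nr, int cpu)
  {
          int i;

          for (i = 0; i < nr; i++)          /* linear search is enough here */
                  if (maps[i].cpu == cpu)
                          return &maps[i];
          return NULL;                      /* offline / unmapped cpu       */
  }

  int main(void)
  {
          /* pretend cpu 2 is offline, so only 0, 1 and 3 got maps */
          struct mock_mmap maps[] = { { .cpu = 0 }, { .cpu = 1 }, { .cpu = 3 } };

          printf("cpu 3 -> %s\n", find_mmap(maps, 3, 3) ? "found" : "missing");
          printf("cpu 4 -> %s\n", find_mmap(maps, 3, 4) ? "found" : "missing");
          return 0;
  }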

Reported-by: Jaroslav Škarvada <jskarvad@redhat.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Joe Mario <jmario@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180817114556.28000-3-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-20 08:54:59 -03:00
Jiri Olsa 31fb4c0d7b perf mmap: Store real cpu number in 'struct perf_mmap'
Store the real cpu number in 'struct perf_mmap', which will be used by
python interface that allows user to read a particular memory map for
given cpu.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jaroslav Škarvada <jskarvad@redhat.com>
Cc: Joe Mario <jmario@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180817114556.28000-2-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-20 08:54:59 -03:00
Jiri Olsa b946cd3734 perf tools: Remove ext from struct kmod_path
With comp carrying the compression ID, we no longer need to return the
extension. Remove it and update the automated test.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180817094813.15086-14-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-20 08:54:59 -03:00
Jiri Olsa 88c74dc76a perf tools: Add gzip_is_compressed function
Add implementation of the is_compressed callback for gzip.
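A minimal standalone sketch of such a check (not the perf code itself): a file is treated as gzip-compressed when it starts with the 0x1f 0x8b magic bytes, and the callback returns 0 in that case, matching the is_compressed convention used elsewhere in this series.

  #include <stdio.h>

  /* gzip_is_compressed_sketch() is illustrative, not the perf function */
  static int gzip_is_compressed_sketch(const char *input)
  {
          unsigned char buf[2];
          FILE *f = fopen(input, "rb");
          int ret = -1;

          if (!f)
                  return -1;
          if (fread(buf, 1, sizeof(buf), f) == sizeof(buf))
                  /* 0 means "compressed", != 0 means "not compressed" */
                  ret = !(buf[0] == 0x1f && buf[1] == 0x8b);
          fclose(f);
          return ret;
  }

  int main(int argc, char **argv)
  {
          if (argc > 1)
                  printf("%s: %s\n", argv[1],
                         gzip_is_compressed_sketch(argv[1]) == 0 ? "gzip" : "not gzip");
          return 0;
  }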

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180817094813.15086-13-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-20 08:54:59 -03:00
Jiri Olsa 4b57fd44b6 perf tools: Add lzma_is_compressed function
Add implementation of the is_compressed callback for lzma.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180817094813.15086-12-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-20 08:54:59 -03:00
Jiri Olsa 8b42b7e5e8 perf tools: Add is_compressed callback to compressions array
Add an is_compressed callback to the compressions array; it returns 0 if
the file is compressed and != 0 if not.

The new callback is used to recognize the situation when we have a
'compressed' object, like:

  /lib/modules/.../drivers/net/ethernet/intel/igb/igb.ko.xz

but we need to read its debug data from debuginfo files, which might not
be compressed, like:

  /root/.debug/.build-id/d6/...c4b301f/debug

So even for a 'compressed' object we read debug data from a plain
uncompressed object. To keep this transparent, we detect this in
decompress_kmodule() and return the file descriptor to the uncompressed
file.
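A standalone sketch of that decision (the check is mocked with a filename test here; the real callbacks read the file's magic bytes and return 0 when the file really is compressed):

  #include <stdio.h>
  #include <string.h>

  /* stand-in for the per-format is_compressed callbacks */
  static int mock_is_compressed(const char *path)
  {
          const char *dot = strrchr(path, '.');

          return (dot && (!strcmp(dot, ".gz") || !strcmp(dot, ".xz"))) ? 0 : 1;
  }

  int main(void)
  {
          const char *files[] = {
                  "igb.ko.xz",                             /* compressed module    */
                  "/root/.debug/.build-id/d6/xxxx/debug",  /* plain debuginfo copy */
          };
          unsigned int i;

          for (i = 0; i < sizeof(files) / sizeof(files[0]); i++)
                  printf("%-40s -> %s\n", files[i],
                         mock_is_compressed(files[i]) == 0 ?
                         "decompress to a temp file" : "open directly");
          return 0;
  }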

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180817094813.15086-11-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-20 08:54:59 -03:00
Jiri Olsa c9a8a6131f perf tools: Move the temp file processing into decompress_kmodule
We will add a compression check in the following patch and it makes it
easier if the file processing is done in a single place. It also makes
the current code simpler.

The decompress_kmodule function now returns the fd of the uncompressed
file and the file name in the pathname arg, if it's provided.
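A standalone sketch of that calling convention (decompress_to_tmp() is a hypothetical stand-in with the decompression stubbed out): return the open fd, and hand the temp file name back through pathname only when the caller asked for it.

  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  static int decompress_to_tmp(const char *src, char *pathname, size_t len)
  {
          char tmp[] = "/tmp/kmod-sketch-XXXXXX";
          int fd = mkstemp(tmp);

          if (fd < 0)
                  return -1;

          (void)src;                      /* real code decompresses src into fd */

          if (pathname)
                  snprintf(pathname, len, "%s", tmp);
          else
                  unlink(tmp);            /* caller only wants the fd */
          return fd;
  }

  int main(void)
  {
          char path[64];
          int fd = decompress_to_tmp("igb.ko.xz", path, sizeof(path));

          if (fd >= 0) {
                  printf("fd=%d path=%s\n", fd, path);
                  close(fd);
                  unlink(path);
          }
          return 0;
  }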

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180817094813.15086-10-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-20 08:54:59 -03:00
Jiri Olsa dde755a90e perf tools: Use compression id in decompress_kmodule()
Once we have parsed out the compression ID, we don't need to iterate all
available compressions and can call the right one directly.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180817094813.15086-9-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-20 08:54:59 -03:00
Jiri Olsa 2af5247530 perf tools: Store compression id into struct dso
Add comp to 'struct dso' to hold the compression index.  It will be used
in the following patches.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180817094813.15086-8-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-20 08:54:59 -03:00
Jiri Olsa 4b838b0db4 perf tools: Add compression id into 'struct kmod_path'
Store the compression ID in 'struct kmod_path', so it can later be
stored in 'struct dso'.

Switch 'struct kmod_path's 'comp' from 'bool' to 'int' so it holds the
compressions array index. Add a 0-index item into the compressions
array, so that the comp usage stays as it was: 0 - no compression,
!= 0 - compression index.

Update the kmod_path tests.
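A minimal standalone sketch of the index convention (mock types, not the perf code): comp == 0 means "not compressed", anything else indexes the compressions array, and the zero slot uses the designated-initializer form mentioned in the committer notes below.

  #include <stdio.h>

  struct compression {
          const char *fmt;
          int (*decompress)(const char *input, int output_fd);
  };

  static int dummy_gz(const char *input, int output_fd) { (void)input; (void)output_fd; return 0; }
  static int dummy_xz(const char *input, int output_fd) { (void)input; (void)output_fd; return 0; }

  static const struct compression compressions[] = {
          /* index 0: no compression; the designated initializer with the
           * terminating comma is the form from the committer notes below */
          { .fmt = NULL, },
          { .fmt = "gz", .decompress = dummy_gz, },
          { .fmt = "xz", .decompress = dummy_xz, },
  };

  struct kmod_path_sketch {
          int comp;       /* was bool; now 0 = none, != 0 = compressions[] index */
  };

  int main(void)
  {
          struct kmod_path_sketch m = { .comp = 2 };

          printf("compressed: %s, fmt: %s\n",
                 m.comp ? "yes" : "no",
                 m.comp ? compressions[m.comp].fmt : "-");
          return 0;
  }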

Committer notes:

Use a designated initializer + terminating comma, e.g. { .fmt = NULL, }, to fix
the build in several distros:

  centos:6:       util/dso.c:201: error: missing initializer
  centos:6:       util/dso.c:201: error: (near initialization for 'compressions[0].decompress')
  debian:9:       util/dso.c:201:24: error: missing field 'decompress' initializer [-Werror,-Wmissing-field-initializers]
  fedora:25:      util/dso.c:201:24: error: missing field 'decompress' initializer [-Werror,-Wmissing-field-initializers]
  fedora:26:      util/dso.c:201:24: error: missing field 'decompress' initializer [-Werror,-Wmissing-field-initializers]
  fedora:27:      util/dso.c:201:24: error: missing field 'decompress' initializer [-Werror,-Wmissing-field-initializers]
  oraclelinux:6:  util/dso.c:201: error: missing initializer
  oraclelinux:6:  util/dso.c:201: error: (near initialization for 'compressions[0].decompress')
  ubuntu:12.04.5: util/dso.c:201:2: error: missing initializer [-Werror=missing-field-initializers]
  ubuntu:12.04.5: util/dso.c:201:2: error: (near initialization for 'compressions[0].decompress') [-Werror=missing-field-initializers]
  ubuntu:16.04:   util/dso.c:201:24: error: missing field 'decompress' initializer [-Werror,-Wmissing-field-initializers]
  ubuntu:16.10:   util/dso.c:201:24: error: missing field 'decompress' initializer [-Werror,-Wmissing-field-initializers]
  ubuntu:16.10:   util/dso.c:201:24: error: missing field 'decompress' initializer [-Werror,-Wmissing-field-initializers]
  ubuntu:17.10:   util/dso.c:201:24: error: missing field 'decompress' initializer [-Werror,-Wmissing-field-initializers]

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180817094813.15086-7-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-20 08:54:59 -03:00
Jiri Olsa e1e139463d perf tools: Make is_supported_compression() static
There's no outside user of it.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180817094813.15086-6-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-20 08:54:59 -03:00
Jiri Olsa 85e1d419e7 perf tools: Make decompress_to_file() function static
There's no outside user of it.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180817094813.15086-5-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-20 08:54:59 -03:00
Jiri Olsa d68a29c282 perf tools: Get rid of dso__needs_decompress() call in __open_dso()
There's no need to call dso__needs_decompress() twice in the function.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180817094813.15086-4-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-20 08:54:59 -03:00
Jiri Olsa 2354ae9bdc perf tools: Get rid of dso__needs_decompress() call in symbol__disassemble()
There's no need to call dso__needs_decompress() twice in the function.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180817094813.15086-3-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-20 08:54:59 -03:00
Jiri Olsa bcd4287ead perf tools: Get rid of dso__needs_decompress() call in read_object_code()
There's no need to call dso__needs_decompress() twice in the function.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180817094813.15086-2-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-20 08:54:59 -03:00
Steven Rostedt (VMware) 6ab025ed44 tools lib traceevent: Change to SPDX License format
Replace the GPL text with SPDX tags in the tools/lib/traceevent files.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com>
Cc: Yordan Karadzhov (VMware) <y.karadz@gmail.com>
Cc: linux-trace-devel@vger.kernel.org
Link: http://lkml.kernel.org/r/20180816111015.125e0f25@gandalf.local.home
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-20 08:54:58 -03:00
Arnaldo Carvalho de Melo cb76371441 perf llvm: Allow passing options to llc in addition to clang
The newly added 'llvm.opts' variable allows passing options directly to
llc, as needed to get sane DWARF in BPF ELF debug sections:

With:

  [root@seventh perf]# cat ~/.perfconfig
  [llvm]
	  dump-obj = true
	clang-opt = -g
  [root@seventh perf]#

We get:

  [root@seventh perf]# perf trace -e tools/perf/examples/bpf/hello.c cat /etc/passwd > /dev/null
  LLVM: dumping tools/perf/examples/bpf/hello.o
       0.000 __bpf_stdout__:Hello, world
       0.015 __bpf_stdout__:Hello, world
       0.187 __bpf_stdout__:Hello, world
  [root@seventh perf]# pahole tools/perf/examples/bpf/hello.o
  struct clang version 8.0.0 (http://llvm.org/git/clang.git 8587270a739ee30c926a76d5657e65e85b560f6e) (http://llvm.org/git/llvm.git 0566eefef9c3777bd780ec4cbb9efa764633b76c) {
	  clang version 8.0.0 (http://llvm.org/git/clang.git 8587270a739ee30c926a76d5657e65e85b560f6e) (http://llvm.org/git/llvm.git 0566e.org clang version 8.0.0 (http://llvm.org/git/clang.git 8587270a739ee30c926a76d5657e65e85b560f6e) (http://llvm.org/git/llvm.git 0566eefef9c3777bd780ec4cbb9efa764633b76c); /*     0     4 */
	  clang version 8.0.0 (http://llvm.org/git/clang.git 8587270a739ee30c926a76d5657e65e85b560f6e) (http://llvm.org/git/llvm.git 0566e.org clang version 8.0.0 (http://llvm.org/git/clang.git 8587270a739ee30c926a76d5657e65e85b560f6e) (http://llvm.org/git/llvm.git 0566eefef9c3777bd780ec4cbb9efa764633b76c); /*     4     4 */
	  clang version 8.0.0 (http://llvm.org/git/clang.git 8587270a739ee30c926a76d5657e65e85b560f6e) (http://llvm.org/git/llvm.git 0566e.org clang version 8.0.0 (http://llvm.org/git/clang.git 8587270a739ee30c926a76d5657e65e85b560f6e) (http://llvm.org/git/llvm.git 0566eefef9c3777bd780ec4cbb9efa764633b76c); /*     8     4 */
	  clang version 8.0.0 (http://llvm.org/git/clang.git 8587270a739ee30c926a76d5657e65e85b560f6e) (http://llvm.org/git/llvm.git 0566e.org clang version 8.0.0 (http://llvm.org/git/clang.git 8587270a739ee30c926a76d5657e65e85b560f6e) (http://llvm.org/git/llvm.git 0566eefef9c3777bd780ec4cbb9efa764633b76c); /*    12     4 */
	  clang version 8.0.0 (http://llvm.org/git/clang.git 8587270a739ee30c926a76d5657e65e85b560f6e) (http://llvm.org/git/llvm.git 0566e.org clang version 8.0.0 (http://llvm.org/git/clang.git 8587270a739ee30c926a76d5657e65e85b560f6e) (http://llvm.org/git/llvm.git 0566eefef9c3777bd780ec4cbb9efa764633b76c); /*    16     4 */
	  clang version 8.0.0 (http://llvm.org/git/clang.git 8587270a739ee30c926a76d5657e65e85b560f6e) (http://llvm.org/git/llvm.git 0566e.org clang version 8.0.0 (http://llvm.org/git/clang.git 8587270a739ee30c926a76d5657e65e85b560f6e) (http://llvm.org/git/llvm.git 0566eefef9c3777bd780ec4cbb9efa764633b76c); /*    20     4 */
	  clang version 8.0.0 (http://llvm.org/git/clang.git 8587270a739ee30c926a76d5657e65e85b560f6e) (http://llvm.org/git/llvm.git 0566e.org clang version 8.0.0 (http://llvm.org/git/clang.git 8587270a739ee30c926a76d5657e65e85b560f6e) (http://llvm.org/git/llvm.git 0566eefef9c3777bd780ec4cbb9efa764633b76c); /*    24     4 */

	  /* size: 28, cachelines: 1, members: 7 */
	  /* last cacheline: 28 bytes */
  };
  [root@seventh perf]#

Adding these options to be passed to llvm's llc:

  [root@seventh perf]# cat ~/.perfconfig
  [llvm]
	  dump-obj = true
	  clang-opt = -g
	  opts = -mattr=dwarfris
  [root@seventh perf]#

We get sane output:

  [root@seventh perf]# perf trace -e tools/perf/examples/bpf/hello.c cat /etc/passwd > /dev/null
  LLVM: dumping tools/perf/examples/bpf/hello.o
       0.000 __bpf_stdout__:Hello, world
       0.015 __bpf_stdout__:Hello, world
       0.185 __bpf_stdout__:Hello, world
  [root@seventh perf]# pahole tools/perf/examples/bpf/hello.o
  struct bpf_map {
	  unsigned int               type;                 /*     0     4 */
	  unsigned int               key_size;             /*     4     4 */
	  unsigned int               value_size;           /*     8     4 */
	  unsigned int               max_entries;          /*    12     4 */
	  unsigned int               map_flags;            /*    16     4 */
	  unsigned int               inner_map_idx;        /*    20     4 */
	  unsigned int               numa_node;            /*    24     4 */

	  /* size: 28, cachelines: 1, members: 7 */
	  /* last cacheline: 28 bytes */
  };
  [root@seventh perf]#

Cc: Alexei Starovoitov <ast@fb.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Martin KaFai Lau <kafai@fb.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>,
Cc: Yonghong Song <yhs@fb.com>
Link: https://lkml.kernel.org/n/tip-0lrwmrip4dru1651rm8xa7tq@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-20 08:54:58 -03:00
Jack Henschel 49836f7811 perf parser: Improve error message for PMU address filters
This is the second version of a patch that improves the error message of
the perf events parser when the PMU hardware does not support address
filters.

Previously, perf returned the following error:

  $ perf record -e intel_pt// --filter 'filter sys_write'
  --filter option should follow a -e tracepoint or HW tracer option

This implies there is some syntax error present in the command line,
which is not true. Rather, notify the user that the CPU does not have
support for this feature.

For example, Intel chips based on the Broadwell microarchitecture have
the Intel PT PMU, but do not support address filtering.

Now, perf prints the following error message:

  $ perf record -e intel_pt// --filter 'filter sys_write'
  This CPU does not support address filtering

Signed-off-by: Jack Henschel <jackdev@mailbox.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180704121345.19025-1-jackdev@mailbox.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-20 08:54:58 -03:00
Rasmus Villemoes da15fc2fa9 perf tools: Disable parallelism for 'make clean'
The Yocto build system does a 'make clean' when rebuilding due to
changed dependencies, and that consistently fails for me (causing the
whole BSP build to fail) with errors such as

| find: '[...]/perf/1.0-r9/perf-1.0/plugin_mac80211.so': No such file or directory
| find: '[...]/perf/1.0-r9/perf-1.0/plugin_mac80211.so': No such file or directory
| find: find: '[...]/perf/1.0-r9/perf-1.0/libtraceevent.a''[...]/perf/1.0-r9/perf-1.0/libtraceevent.a': No such file or directory: No such file or directory
|
[...]
| find: cannot delete '/mnt/xfs/devel/pil/yocto/tmp-glibc/work/wandboard-oe-linux-gnueabi/perf/1.0-r9/perf-1.0/util/.pstack.o.cmd': No such file or directory

Apparently (despite the comment), 'make clean' ends up launching
multiple sub-makes that all want to remove the same things - perhaps
this only happens in combination with an O=... parameter. In any case, we
don't lose much by explicitly disabling the parallelism for the clean
target, and it makes automated builds much more reliable.

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180705131527.19749-1-linux@rasmusvillemoes.dk
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-20 08:54:58 -03:00
Ingo Molnar 5804b11034 perf/core improvements and fixes:
kernel:
 
 . kallsyms, x86: Export addresses of PTI entry trampolines (Alexander Shishkin)
 
 . kallsyms: Simplify update_iter_mod() (Adrian Hunter)
 
 . x86: Add entry trampolines to kcore (Adrian Hunter)
 
 Hardware tracing:
 
 . Fix auxtrace queue resize (Adrian Hunter)
 
 Arch specific:
 
 . Fix uninitialized ARM SPE record error variable (Kim Phillips)
 
 . Fix trace event post-processing in powerpc (Sandipan Das)
 
 Build:
 
 . Fix check-headers.sh AND list path of execution (Alexander Kapshuk)
 
 . Remove -mcet and -fcf-protection when building the python binding
   with older clang versions (Arnaldo Carvalho de Melo)
 
 . Make check-headers.sh check based on kernel dir (Jiri Olsa)
 
 . Move syscall_64.tbl check into check-headers.sh (Jiri Olsa)
 
 Infrastructure:
 
 . Check for null when copying nsinfo.  (Benno Evers)
 
 Libraries:
 
 . Rename libtraceevent prefixes, prep work for making it a shared
   library generally available (Tzvetomir Stoyanov (VMware))
 
 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

Merge tag 'perf-core-for-mingo-4.19-20180815' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent

Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:

kernel:

- kallsyms, x86: Export addresses of PTI entry trampolines (Alexander Shishkin)

- kallsyms: Simplify update_iter_mod() (Adrian Hunter)

- x86: Add entry trampolines to kcore (Adrian Hunter)

Hardware tracing:

- Fix auxtrace queue resize (Adrian Hunter)

Arch specific:

- Fix uninitialized ARM SPE record error variable (Kim Phillips)

- Fix trace event post-processing in powerpc (Sandipan Das)

Build:

- Fix check-headers.sh AND list path of execution (Alexander Kapshuk)

- Remove -mcet and -fcf-protection when building the python binding
  with older clang versions (Arnaldo Carvalho de Melo)

- Make check-headers.sh check based on kernel dir (Jiri Olsa)

- Move syscall_64.tbl check into check-headers.sh (Jiri Olsa)

Infrastructure:

- Check for null when copying nsinfo.  (Benno Evers)

Libraries:

- Rename libtraceevent prefixes, prep work for making it a shared
  library generally available (Tzvetomir Stoyanov (VMware))

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-18 13:11:51 +02:00
Adrian Hunter 6855dc41b2 x86: Add entry trampolines to kcore
Without program headers for PTI entry trampoline pages, the trampoline
virtual addresses do not map to anything.

Example before:

 sudo gdb --quiet vmlinux /proc/kcore
 Reading symbols from vmlinux...done.
 [New process 1]
 Core was generated by `BOOT_IMAGE=/boot/vmlinuz-4.16.0 root=UUID=a6096b83-b763-4101-807e-f33daff63233'.
 #0  0x0000000000000000 in irq_stack_union ()
 (gdb) x /21ib 0xfffffe0000006000
    0xfffffe0000006000:  Cannot access memory at address 0xfffffe0000006000
 (gdb) quit

After:

 sudo gdb --quiet vmlinux /proc/kcore
 [sudo] password for ahunter:
 Reading symbols from vmlinux...done.
 [New process 1]
 Core was generated by `BOOT_IMAGE=/boot/vmlinuz-4.16.0-fix-4-00005-gd6e65a8b4072 root=UUID=a6096b83-b7'.
 #0  0x0000000000000000 in irq_stack_union ()
 (gdb) x /21ib 0xfffffe0000006000
    0xfffffe0000006000:  swapgs
    0xfffffe0000006003:  mov    %rsp,-0x3e12(%rip)        # 0xfffffe00000021f8
    0xfffffe000000600a:  xchg   %ax,%ax
    0xfffffe000000600c:  mov    %cr3,%rsp
    0xfffffe000000600f:  bts    $0x3f,%rsp
    0xfffffe0000006014:  and    $0xffffffffffffe7ff,%rsp
    0xfffffe000000601b:  mov    %rsp,%cr3
    0xfffffe000000601e:  mov    -0x3019(%rip),%rsp        # 0xfffffe000000300c
    0xfffffe0000006025:  pushq  $0x2b
    0xfffffe0000006027:  pushq  -0x3e35(%rip)        # 0xfffffe00000021f8
    0xfffffe000000602d:  push   %r11
    0xfffffe000000602f:  pushq  $0x33
    0xfffffe0000006031:  push   %rcx
    0xfffffe0000006032:  push   %rdi
    0xfffffe0000006033:  mov    $0xffffffff91a00010,%rdi
    0xfffffe000000603a:  callq  0xfffffe0000006046
    0xfffffe000000603f:  pause
    0xfffffe0000006041:  lfence
    0xfffffe0000006044:  jmp    0xfffffe000000603f
    0xfffffe0000006046:  mov    %rdi,(%rsp)
    0xfffffe000000604a:  retq
 (gdb) quit

In addition, entry trampolines all map to the same page.  Represent that
by giving the corresponding program headers in kcore the same offset.

This has the benefit that, when perf tools uses /proc/kcore as a source
for kernel object code, samples from different CPU trampolines are
aggregated together.  Note, such aggregation is normal for profiling
i.e. people want to profile the object code, not every different virtual
address the object code might be mapped to (across different processes
for example).

Notes by PeterZ:

This also adds the KCORE_REMAP functionality.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Acked-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/1528289651-4113-4-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-14 19:13:26 -03:00
Alexander Shishkin d83212d5dd kallsyms, x86: Export addresses of PTI entry trampolines
Currently, the addresses of PTI entry trampolines are not exported to
user space. Kernel profiling tools need these addresses to identify the
kernel code, so add a symbol and address for each CPU's PTI entry
trampoline.

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Acked-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/1528289651-4113-3-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-14 19:12:29 -03:00
Adrian Hunter b966794220 kallsyms: Simplify update_iter_mod()
The logic in update_iter_mod() is overcomplicated and gets worse every
time another get_ksymbol_* function is added.

In preparation for adding another get_ksymbol_* function, simplify logic
in update_iter_mod().

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: (ftrace changes only) Steven Rostedt (VMware) <rostedt@goodmis.org>
Acked-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/1528289651-4113-2-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-14 19:10:23 -03:00
Adrian Hunter 99cbbe56eb perf auxtrace: Fix queue resize
When the number of queues grows beyond 32, the array of queues is
resized but not all members were being copied. Fix by also copying
'tid', 'cpu' and 'set'.
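A standalone sketch of the pattern (mock queue type, not the real auxtrace code): when doubling the array, carry over every member, not just the buffer list head.

  #include <stdbool.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/types.h>

  struct mock_queue {             /* stand-in for 'struct auxtrace_queue' */
          void *head;             /* the queued-buffer list               */
          pid_t tid;
          int   cpu;
          bool  set;
  };

  static int grow(struct mock_queue **arr, unsigned int *nr)
  {
          unsigned int new_nr = *nr ? *nr * 2 : 32, i;
          struct mock_queue *new_arr = calloc(new_nr, sizeof(*new_arr));

          if (!new_arr)
                  return -1;
          for (i = 0; i < *nr; i++) {
                  new_arr[i].head = (*arr)[i].head;
                  new_arr[i].tid  = (*arr)[i].tid;   /* the fix: also carry */
                  new_arr[i].cpu  = (*arr)[i].cpu;   /* tid, cpu and set    */
                  new_arr[i].set  = (*arr)[i].set;
          }
          free(*arr);
          *arr = new_arr;
          *nr  = new_nr;
          return 0;
  }

  int main(void)
  {
          struct mock_queue *q = NULL;
          unsigned int nr = 0;

          grow(&q, &nr);
          q[0] = (struct mock_queue){ .tid = 1234, .cpu = 3, .set = true };
          grow(&q, &nr);
          printf("tid=%d cpu=%d set=%d nr=%u\n", q[0].tid, q[0].cpu, q[0].set, nr);
          free(q);
          return 0;
  }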

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: stable@vger.kernel.org
Fixes: e502789302 ("perf auxtrace: Add helpers for queuing AUX area tracing data")
Link: http://lkml.kernel.org/r/20180814084608.6563-1-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-14 19:00:53 -03:00
Arnaldo Carvalho de Melo 5508672d7f perf python: Remove -mcet and -fcf-protection when building with clang
These options are not present in older clang versions, so when we build
for a distro whose gcc is new enough to have these options and whose
python build config settings use them, but clang doesn't support them,
b00m.

This is the case with fedora 28 and rawhide, so check if clang has the
options and remove the missing ones from CFLAGS.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-7asds7yn6gzg6ns1lw17ukul@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-14 18:50:20 -03:00
Kim Phillips 3443533665 perf arm spe: Fix uninitialized record error variable
The auxtrace init variable 'err' was not being initialized, leading perf
to abort early in an SPE record command when there was no explicit
error, but rather based only on whatever memory contents happened to be
on the stack. Initialize it explicitly on successfully getting an SPE,
the same way cs-etm does.

Signed-off-by: Kim Phillips <kim.phillips@arm.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Dongjiu Geng <gengdongjiu@huawei.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Fixes: ffd3d18c20 ("perf tools: Add ARM Statistical Profiling Extensions (SPE) support")
Link: http://lkml.kernel.org/r/20180810174512.52900813e57cbccf18ce99a2@arm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-14 15:10:44 -03:00
Jiri Olsa c9b51a0170 perf tools: Move syscall_64.tbl check into check-headers.sh
Probably a leftover from the time we introduced the check-headers.sh script.

Committer testing:

Remove the 'rseq' syscall from tools/perf/arch/x86/entry/syscalls/syscall_64.tbl
to fake a diff:

make: Entering directory '/home/acme/git/perf/tools/perf'
  BUILD:   Doing 'make -j4' parallel build
Warning: Kernel ABI header at 'tools/perf/arch/x86/entry/syscalls/syscall_64.tbl' differs from latest version at 'arch/x86/entry/syscalls/syscall_64.tbl'
diff -u tools/perf/arch/x86/entry/syscalls/syscall_64.tbl arch/x86/entry/syscalls/syscall_64.tbl
  CC       /tmp/build/perf/util/syscalltbl.o
  INSTALL  trace_plugins
<SNIP>
  $ diff -u tools/perf/arch/x86/entry/syscalls/syscall_64.tbl arch/x86/entry/syscalls/syscall_64.tbl
  --- tools/perf/arch/x86/entry/syscalls/syscall_64.tbl	2018-08-13 15:49:50.896585176 -0300
  +++ arch/x86/entry/syscalls/syscall_64.tbl	2018-07-20 12:04:04.536858304 -0300
  @@ -342,6 +342,7 @@
   331	common	pkey_free		__x64_sys_pkey_free
   332	common	statx			__x64_sys_statx
   333	common	io_pgetevents		__x64_sys_io_pgetevents
  +334	common	rseq			__x64_sys_rseq

  #
  # x32-specific system call numbers start at 512 to avoid cache impact
  $

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Kapshuk <alexander.kapshuk@gmail.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180813111504.3568-2-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-14 15:10:40 -03:00
Jiri Olsa 7ea6e983b2 perf tools: Make check-headers.sh check based on kernel dir
Change the logic to compare files with paths relative to the kernel
source base dir. This way we can keep the output message for 2 unrelated
files, which is coming in a following patch.

Committer testing:

Remove a line from tools/arch/x86/lib/memcpy_64.S to have it detected:

make: Entering directory '/home/acme/git/perf/tools/perf'
  BUILD:   Doing 'make -j4' parallel build
Warning: Kernel ABI header at 'tools/arch/x86/lib/memcpy_64.S' differs from latest version at 'arch/x86/lib/memcpy_64.S'
diff -u tools/arch/x86/lib/memcpy_64.S arch/x86/lib/memcpy_64.S
  INSTALL  GTK UI
  INSTALL  binaries

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Kapshuk <alexander.kapshuk@gmail.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180813111504.3568-1-jolsa@kernel.org
Link: http://lkml.kernel.org/r/20180814072726.GA13931@krava
[ Do not use pushd/popd, it's a bashism, reported by Michael Ellerman, fixed by Jiri Olsa ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-08-14 15:08:33 -03:00
Linus Torvalds 13e091b6dd Merge branch 'x86-timers-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 timer updates from Thomas Gleixner:
 "Early TSC based time stamping to allow better boot time analysis.

  This comes with a general cleanup of the TSC calibration code which
  grew warts and duct taping over the years and removes 250 lines of
  code. Initiated and mostly implemented by Pavel with help from various
  folks"

* 'x86-timers-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (37 commits)
  x86/kvmclock: Mark kvm_get_preset_lpj() as __init
  x86/tsc: Consolidate init code
  sched/clock: Disable interrupts when calling generic_sched_clock_init()
  timekeeping: Prevent false warning when persistent clock is not available
  sched/clock: Close a hole in sched_clock_init()
  x86/tsc: Make use of tsc_calibrate_cpu_early()
  x86/tsc: Split native_calibrate_cpu() into early and late parts
  sched/clock: Use static key for sched_clock_running
  sched/clock: Enable sched clock early
  sched/clock: Move sched clock initialization and merge with generic clock
  x86/tsc: Use TSC as sched clock early
  x86/tsc: Initialize cyc2ns when tsc frequency is determined
  x86/tsc: Calibrate tsc only once
  ARM/time: Remove read_boot_clock64()
  s390/time: Remove read_boot_clock64()
  timekeeping: Default boot time offset to local_clock()
  timekeeping: Replace read_boot_clock64() with read_persistent_wall_and_boot_offset()
  s390/time: Add read_persistent_wall_and_boot_offset()
  x86/xen/time: Output xen sched_clock time from 0
  x86/xen/time: Initialize pv xen time in init_hypervisor_platform()
  ...
2018-08-13 18:28:19 -07:00
Linus Torvalds eac3411944 Merge branch 'x86/pti' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 PTI updates from Thomas Gleixner:
 "The Speck brigade sadly provides yet another large set of patches
  destroying the performance which we carefully built and preserved

   - PTI support for 32bit PAE. The missing counter part to the 64bit
     PTI code implemented by Joerg.

   - A set of fixes for the Global Bit mechanics for non PCID CPUs which
     were setting the Global Bit too widely and therefore possibly
     exposing interesting memory needlessly.

   - Protection against userspace-userspace SpectreRSB

   - Support for the upcoming Enhanced IBRS mode, which is preferred
     over IBRS. Unfortunately we don't know the performance impact of
     this, but it's expected to be less horrible than the IBRS
     hammering.

   - Cleanups and simplifications"

* 'x86/pti' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (60 commits)
  x86/mm/pti: Move user W+X check into pti_finalize()
  x86/relocs: Add __end_rodata_aligned to S_REL
  x86/mm/pti: Clone kernel-image on PTE level for 32 bit
  x86/mm/pti: Don't clear permissions in pti_clone_pmd()
  x86/mm/pti: Fix 32 bit PCID check
  x86/mm/init: Remove freed kernel image areas from alias mapping
  x86/mm/init: Add helper for freeing kernel image pages
  x86/mm/init: Pass unconverted symbol addresses to free_init_pages()
  mm: Allow non-direct-map arguments to free_reserved_area()
  x86/mm/pti: Clear Global bit more aggressively
  x86/speculation: Support Enhanced IBRS on future CPUs
  x86/speculation: Protect against userspace-userspace spectreRSB
  x86/kexec: Allocate 8k PGDs for PTI
  Revert "perf/core: Make sure the ring-buffer is mapped in all page-tables"
  x86/mm: Remove in_nmi() warning from vmalloc_fault()
  x86/entry/32: Check for VM86 mode in slow-path check
  perf/core: Make sure the ring-buffer is mapped in all page-tables
  x86/pti: Check the return value of pti_user_pagetable_walk_pmd()
  x86/pti: Check the return value of pti_user_pagetable_walk_p4d()
  x86/entry/32: Add debug code to check entry/exit CR3
  ...
2018-08-13 17:54:17 -07:00
Linus Torvalds d191c82d4d Merge branch 'x86-vdso-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 vdso update from Thomas Gleixner:
 "Use LD to link the VDSO libs instead of indirecting through CC which
  causes build failures with Clang"

* 'x86-vdso-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86: vdso: Use $LD instead of $CC to link
2018-08-13 17:50:17 -07:00
Linus Torvalds 4d5ac4b8ca Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull misc x86 fixes from Thomas Gleixner:
 "Two fixes for x86:

   - Provide a declaration for native_save_fl() which unbreaks the
     wreckage caused by making it 'extern inline'.

   - Fix the failing paravirt patching which is supposed to replace
     indirect with direct calls. The wreckage is caused by an incorrect
     clobber test"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/paravirt: Fix spectre-v2 mitigations for paravirt guests
  x86/irqflags: Provide a declaration for native_save_fl
2018-08-13 17:01:03 -07:00
Linus Torvalds 203b4fc903 Merge branch 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 mm updates from Thomas Gleixner:

 - Make lazy TLB mode even lazier to avoid pointless switch_mm()
   operations, which reduces CPU load by 1-2% for memcache workloads

 - Small cleanups and improvements all over the place

* 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/mm: Remove redundant check for kmem_cache_create()
  arm/asm/tlb.h: Fix build error implicit func declaration
  x86/mm/tlb: Make clear_asid_other() static
  x86/mm/tlb: Skip atomic operations for 'init_mm' in switch_mm_irqs_off()
  x86/mm/tlb: Always use lazy TLB mode
  x86/mm/tlb: Only send page table free TLB flush to lazy TLB CPUs
  x86/mm/tlb: Make lazy TLB mode lazier
  x86/mm/tlb: Restructure switch_mm_irqs_off()
  x86/mm/tlb: Leave lazy TLB mode at page table free time
  mm: Allocate the mm_cpumask (mm->cpu_bitmap[]) dynamically based on nr_cpu_ids
  x86/mm: Add TLB purge to free pmd/pte page interfaces
  ioremap: Update pgtable free interfaces with addr
  x86/mm: Disable ioremap free page handling on x86-PAE
2018-08-13 16:29:35 -07:00
Linus Torvalds 7edcf0d314 Merge branch 'x86-platform-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 platform updates from Thomas Gleixner:
 "Trivial cleanups and improvements"

* 'x86-platform-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/platform/UV: Remove redundant check of p == q
  x86/platform/olpc: Use PTR_ERR_OR_ZERO()
  x86/platform/UV: Mark memblock related init code and data correctly
2018-08-13 16:08:26 -07:00
Linus Torvalds 30de24c7dd Merge branch 'x86-cache-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 cache QoS (RDT/CAR) updates from Thomas Gleixner:
 "Add support for pseudo-locked cache regions.

  Cache Allocation Technology (CAT) allows one, on certain CPUs, to isolate
  a region of cache and 'lock' it. Cache pseudo-locking builds on the fact
  that a CPU can still read and write data pre-allocated outside its
  current allocated area on cache hit. With cache pseudo-locking data
  can be preloaded into a reserved portion of cache that no application
  can fill, and from that point on will only serve cache hits. The cache
  pseudo-locked memory is made accessible to user space where an
  application can map it into its virtual address space and thus have a
  region of memory with reduced average read latency.

  The locking is not perfect and gets totally screwed by WBINVD and
  similar mechanisms, but it provides a reasonable enhancement for
  certain types of latency sensitive applications.

  The implementation extends the current CAT mechanism and provides a
  generally useful exclusive CAT mode on which it builds the extra
  pseudo-locked regions"

* 'x86-cache-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (45 commits)
  x86/intel_rdt: Disable PMU access
  x86/intel_rdt: Fix possible circular lock dependency
  x86/intel_rdt: Make CPU information accessible for pseudo-locked regions
  x86/intel_rdt: Support restoration of subset of permissions
  x86/intel_rdt: Fix cleanup of plr structure on error
  x86/intel_rdt: Move pseudo_lock_region_clear()
  x86/intel_rdt: Limit C-states dynamically when pseudo-locking active
  x86/intel_rdt: Support L3 cache performance event of Broadwell
  x86/intel_rdt: More precise L2 hit/miss measurements
  x86/intel_rdt: Create character device exposing pseudo-locked region
  x86/intel_rdt: Create debugfs files for pseudo-locking testing
  x86/intel_rdt: Create resctrl debug area
  x86/intel_rdt: Ensure RDT cleanup on exit
  x86/intel_rdt: Resctrl files reflect pseudo-locked information
  x86/intel_rdt: Support creation/removal of pseudo-locked region
  x86/intel_rdt: Pseudo-lock region creation/removal core
  x86/intel_rdt: Discover supported platforms via prefetch disable bits
  x86/intel_rdt: Add utilities to test pseudo-locked region possibility
  x86/intel_rdt: Split resource group removal in two
  x86/intel_rdt: Enable entering of pseudo-locksetup mode
  ...
2018-08-13 16:01:46 -07:00
Linus Torvalds f499026456 Merge branch 'x86-hyperv-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86/hyper-v update from Thomas Gleixner:
 "Add fast hypercall support for guests running on the Microsoft HyperV(isor)"

* 'x86-hyperv-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/hyper-v: Fix wrong merge conflict resolution
  x86/hyper-v: Check for VP_INVAL in hyperv_flush_tlb_others()
  x86/hyper-v: Check cpumask_to_vpset() return value in hyperv_flush_tlb_others_ex()
  x86/hyper-v: Trace PV IPI send
  x86/hyper-v: Use cheaper HVCALL_SEND_IPI hypercall when possible
  x86/hyper-v: Use 'fast' hypercall for HVCALL_SEND_IPI
  x86/hyper-v: Implement hv_do_fast_hypercall16
  x86/hyper-v: Use cheaper HVCALL_FLUSH_VIRTUAL_ADDRESS_{LIST,SPACE} hypercalls when possible
2018-08-13 15:49:04 -07:00
Linus Torvalds 27a5250197 Merge branch 'x86-debug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 dump printing cleanup from Thomas Gleixner:
 "Clean up the show_opcodes() printout so nested dumps can be properly
  differentiated"

* 'x86-debug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86: Avoid pr_cont() in show_opcodes()
2018-08-13 15:21:12 -07:00
Linus Torvalds 7796916146 Merge branch 'x86-cpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 cpu updates from Thomas Gleixner:
 "Two small updates for the CPU code:

   - Improve NUMA emulation

   - Add the EPT_AD CPU feature bit"

* 'x86-cpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/cpufeatures: Add EPT_AD feature bit
  x86/numa_emulation: Introduce uniform split capability
  x86/numa_emulation: Fix emulated-to-physical node mapping
2018-08-13 14:41:53 -07:00
Linus Torvalds 36f49ca8ca Merge branch 'x86-cleanups-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 cleanups from Thomas Gleixner:
 "Trivial cleanups"

* 'x86-cleanups-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/iommu: Use NULL instead of 0
  x86/platform/pcspeaker: Use PTR_ERR_OR_ZERO() to fix ptr_ret.cocci warning
2018-08-13 14:13:53 -07:00
Linus Torvalds 00b24d5455 Merge branch 'x86-build-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 build cleanup from Thomas Gleixner:
 "Remove a stale quirk for a no longer supported GCC version"

* 'x86-build-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/build: Remove old -funit-at-a-time GCC quirk
2018-08-13 14:12:24 -07:00
Linus Torvalds f24d6f2654 Merge branch 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 asm updates from Thomas Gleixner:
 "The lowlevel and ASM code updates for x86:

   - Make stack trace unwinding more reliable

   - ASM instruction updates for better code generation

   - Various cleanups"

* 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/entry/64: Add two more instruction suffixes
  x86/asm/64: Use 32-bit XOR to zero registers
  x86/build/vdso: Simplify 'cmd_vdso2c'
  x86/build/vdso: Remove unused vdso-syms.lds
  x86/stacktrace: Enable HAVE_RELIABLE_STACKTRACE for the ORC unwinder
  x86/unwind/orc: Detect the end of the stack
  x86/stacktrace: Do not fail for ORC with regs on stack
  x86/stacktrace: Clarify the reliable success paths
  x86/stacktrace: Remove STACKTRACE_DUMP_ONCE
  x86/stacktrace: Do not unwind after user regs
  x86/asm: Use CC_SET/CC_OUT in percpu_cmpxchg8b_double() to micro-optimize code generation
2018-08-13 13:35:26 -07:00
Linus Torvalds b9b8e5b763 Merge branch 'x86-boot-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 boot updates from Thomas Gleixner:
 "Boot code updates for x86:

   - Allow to skip a given amount of huge pages for address layout
     randomization on the kernel command line to prevent regressions in
     the huge page allocation with small memory sizes

   - Various cleanups"

* 'x86-boot-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/boot: Use CC_SET()/CC_OUT() instead of open coding it
  x86/boot/KASLR: Make local variable mem_limit static
  x86/boot/KASLR: Skip specified number of 1GB huge pages when doing physical randomization (KASLR)
  x86/boot/KASLR: Add two new functions for 1GB huge pages handling
2018-08-13 13:32:42 -07:00
Linus Torvalds 66e22087bd Merge branch 'x86-apic-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 apic update from Thomas Gleixner:
 "Trivial cleanups of the APIC related code"

* 'x86-apic-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/apic: Trivial coding style fixes
  x86/vector: Merge allocate_vector() into assign_vector_locked()
2018-08-13 13:31:08 -07:00