Commit graph

48 commits

Author SHA1 Message Date
Qing Zhang a0a458fbd6 LoongArch/ftrace: Add recordmcount support
The recordmcount utility under scripts/ is run, after compiling each object,
to find all the locations that call _mcount() and put them into a
specific section named __mcount_loc.

Then the linker collects all such information into a table in the kernel
image (between __start_mcount_loc and __stop_mcount_loc) for later use
by ftrace.

This patch adds the LoongArch-specific definitions to identify such locations.
On LoongArch, only the C version is used to build the kernel, now that
CONFIG_HAVE_C_RECORDMCOUNT is on.

Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Qing Zhang <zhangqing@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2022-12-14 08:41:53 +08:00
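
As an aside on how that table is consumed (the kernel side is handled by
ftrace_process_locs(), visible in the backtraces further down this log): the
addresses between __start_mcount_loc and __stop_mcount_loc form a plain array
of call-site addresses. A minimal C sketch of a consumer walking it, where
handle_call_site() is a hypothetical helper, not a kernel function:

    extern unsigned long __start_mcount_loc[];
    extern unsigned long __stop_mcount_loc[];

    extern void handle_call_site(unsigned long addr);   /* hypothetical */

    static void walk_mcount_loc(void)
    {
        unsigned long *p;

        /* One entry per _mcount call site recorded by recordmcount. */
        for (p = __start_mcount_loc; p < __stop_mcount_loc; p++)
            handle_call_site(*p);
    }
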
Chen Jun 999340d511 ftrace: Have recordmcount use w8 to read relp->r_info in arm64_is_fake_mcount
On a little endian system, use the aarch64_be toolchain (gcc v7.3) downloaded from
linaro.org to build an image with CONFIG_CPU_BIG_ENDIAN=y,
CONFIG_FTRACE=y and CONFIG_DYNAMIC_FTRACE=y.

gcc will create _mcount symbols, but recordmcount cannot create
__mcount_loc entries for the *.o files.
aarch64_be-linux-gnu-objdump -r fs/namei.o | grep mcount
00000000000000d0 R_AARCH64_CALL26  _mcount
...
0000000000007190 R_AARCH64_CALL26  _mcount

The reason is that the function arm64_is_fake_mcount cannot work correctly.
The r_info of an _mcount relocation in a *.o compiled with the big endian
compiler looks like:
00 00 00 2d 00 00 01 1b
w(rp->r_info) will return 0x2d instead of 0x011b, because w() takes a
uint32_t as its parameter, which truncates rp->r_info.

Use w8() instead of w() to read relp->r_info.

Link: https://lkml.kernel.org/r/20210222135840.56250-1-chenjun102@huawei.com

Fixes: ea0eada456 ("recordmcount: only record relocation of type R_AARCH64_CALL26 on arm64.")
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2021-03-02 17:27:18 -05:00
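
A self-contained sketch of the truncation (the real w()/w8() are recordmcount's
endian-aware readers selected per input file; the stand-ins below assume a
big-endian object processed on a little-endian host):

    #include <endian.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t w(uint32_t x)  { return be32toh(x); } /* 32-bit reader (stand-in) */
    static uint64_t w8(uint64_t x) { return be64toh(x); } /* 64-bit reader (stand-in) */

    int main(void)
    {
        /* r_info bytes emitted by the big endian compiler: sym 0x2d, type 0x11b */
        unsigned char bytes[8] = { 0, 0, 0, 0x2d, 0, 0, 0x01, 0x1b };
        uint64_t r_info;

        memcpy(&r_info, bytes, sizeof(r_info));
        /* Passing the 64-bit value to w() silently truncates it to 32 bits. */
        printf("w():  0x%llx\n", (unsigned long long)w(r_info));  /* 0x2d */
        printf("w8(): 0x%llx\n", (unsigned long long)w8(r_info)); /* 0x2d0000011b, type 0x11b */
        return 0;
    }
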
Christophe Leroy 3df14264ad recordmcount: Fix build failure on non arm64
Commit ea0eada456 leads to the following build failure on powerpc:

  HOSTCC  scripts/recordmcount
scripts/recordmcount.c: In function 'arm64_is_fake_mcount':
scripts/recordmcount.c:440: error: 'R_AARCH64_CALL26' undeclared (first use in this function)
scripts/recordmcount.c:440: error: (Each undeclared identifier is reported only once
scripts/recordmcount.c:440: error: for each function it appears in.)
make[2]: *** [scripts/recordmcount] Error 1

Make sure R_AARCH64_CALL26 is always defined.

Fixes: ea0eada456 ("recordmcount: only record relocation of type R_AARCH64_CALL26 on arm64.")
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Acked-by: Gregory Herrero <gregory.herrero@oracle.com>
Cc: Gregory Herrero <gregory.herrero@oracle.com>
Link: https://lore.kernel.org/r/5ca1be21fa6ebf73203b45fd9aadd2bafb5e6b15.1597049145.git.christophe.leroy@csgroup.eu
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-08-10 15:22:06 +01:00
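
The shape of the fix is a guarded fallback definition, so the constant exists
even when the host elf.h predates it (283 is the AArch64 ABI value for this
relocation; shown as an illustrative sketch rather than the exact hunk):

    #ifndef R_AARCH64_CALL26
    #define R_AARCH64_CALL26    283
    #endif
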
Gregory Herrero ea0eada456 recordmcount: only record relocation of type R_AARCH64_CALL26 on arm64.
Currently, if a section has a relocation to the '_mcount' symbol, a new
__mcount_loc entry will be added regardless of the relocation type.
This is problematic when a relocation to '_mcount' is in the middle of a
section and is not a call intended for ftrace use.

Such a relocation could be generated, for example, by the code below:
    bool is_mcount(unsigned long addr)
    {
        return (addr == (unsigned long) &_mcount);
    }

With this snippet, ftrace will try to patch the mcount location it
generates on module load and fail with:

    Call trace:
     ftrace_bug+0xa0/0x28c
     ftrace_process_locs+0x2f4/0x430
     ftrace_module_init+0x30/0x38
     load_module+0x14f0/0x1e78
     __do_sys_finit_module+0x100/0x11c
     __arm64_sys_finit_module+0x28/0x34
     el0_svc_common+0x88/0x194
     el0_svc_handler+0x38/0x8c
     el0_svc+0x8/0xc
    ---[ end trace d828d06b36ad9d59 ]---
    ftrace failed to modify
    [<ffffa2dbf3a3a41c>] 0xffffa2dbf3a3a41c
     actual:   66:a9:3c:90
    Initializing ftrace call sites
    ftrace record flags: 2000000
     (0)
    expected tramp: ffffa2dc6cf66724

So limit the relocation type to R_AARCH64_CALL26, as in the perl version of
recordmcount.

Fixes: af64d2aa87 ("ftrace: Add arm64 support to recordmcount")
Signed-off-by: Gregory Herrero <gregory.herrero@oracle.com>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Link: https://lore.kernel.org/r/20200717143338.19302-1-gregory.herrero@oracle.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-24 12:43:19 +01:00
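
The resulting check amounts to treating any _mcount relocation that is not a
branch-and-link as fake; a sketch close to the recordmcount.c handler, written
here with the 64-bit endian-aware reader w8() from the later endian fix above
(declared as an assumed external for illustration):

    #include <elf.h>
    #include <stdint.h>

    extern uint64_t (*w8)(uint64_t);    /* recordmcount's 64-bit reader (assumed) */

    static int arm64_is_fake_mcount(Elf64_Rel const *rp)
    {
        /* Only R_AARCH64_CALL26 relocations are real mcount call sites. */
        return ELF64_R_TYPE(w8(rp->r_info)) != R_AARCH64_CALL26;
    }
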
Alex Sverdlin 927d780ee3 ARM: 8950/1: ftrace/recordmcount: filter relocation types
Scenario 1, ARMv7
=================

If code in arch/arm/kernel/ftrace.c operates on an mcount() pointer,
the following may be generated:

00000230 <prealloc_fixed_plts>:
 230:   b5f8            push    {r3, r4, r5, r6, r7, lr}
 232:   b500            push    {lr}
 234:   f7ff fffe       bl      0 <__gnu_mcount_nc>
                        234: R_ARM_THM_CALL     __gnu_mcount_nc
 238:   f240 0600       movw    r6, #0
                        238: R_ARM_THM_MOVW_ABS_NC      __gnu_mcount_nc
 23c:   f8d0 1180       ldr.w   r1, [r0, #384]  ; 0x180

FTRACE currently is not able to deal with it:

WARNING: CPU: 0 PID: 0 at .../kernel/trace/ftrace.c:1979 ftrace_bug+0x1ad/0x230()
...
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.4.116-... #1
...
[<c0314e3d>] (unwind_backtrace) from [<c03115e9>] (show_stack+0x11/0x14)
[<c03115e9>] (show_stack) from [<c051a7f1>] (dump_stack+0x81/0xa8)
[<c051a7f1>] (dump_stack) from [<c0321c5d>] (warn_slowpath_common+0x69/0x90)
[<c0321c5d>] (warn_slowpath_common) from [<c0321cf3>] (warn_slowpath_null+0x17/0x1c)
[<c0321cf3>] (warn_slowpath_null) from [<c038ee9d>] (ftrace_bug+0x1ad/0x230)
[<c038ee9d>] (ftrace_bug) from [<c038f1f9>] (ftrace_process_locs+0x27d/0x444)
[<c038f1f9>] (ftrace_process_locs) from [<c08915bd>] (ftrace_init+0x91/0xe8)
[<c08915bd>] (ftrace_init) from [<c0885a67>] (start_kernel+0x34b/0x358)
[<c0885a67>] (start_kernel) from [<00308095>] (0x308095)
---[ end trace cb88537fdc8fa200 ]---
ftrace failed to modify [<c031266c>] prealloc_fixed_plts+0x8/0x60
 actual: 44:f2:e1:36
ftrace record flags: 0
 (0)   expected tramp: c03143e9

Scenario 2, ARMv4T
==================

ftrace: allocating 14435 entries in 43 pages
------------[ cut here ]------------
WARNING: CPU: 0 PID: 0 at kernel/trace/ftrace.c:2029 ftrace_bug+0x204/0x310
CPU: 0 PID: 0 Comm: swapper Not tainted 4.19.5 #1
Hardware name: Cirrus Logic EDB9302 Evaluation Board
[<c0010a24>] (unwind_backtrace) from [<c000ecb0>] (show_stack+0x20/0x2c)
[<c000ecb0>] (show_stack) from [<c03c72e8>] (dump_stack+0x20/0x30)
[<c03c72e8>] (dump_stack) from [<c0021c18>] (__warn+0xdc/0x104)
[<c0021c18>] (__warn) from [<c0021d7c>] (warn_slowpath_null+0x4c/0x5c)
[<c0021d7c>] (warn_slowpath_null) from [<c0095360>] (ftrace_bug+0x204/0x310)
[<c0095360>] (ftrace_bug) from [<c04dabac>] (ftrace_init+0x3b4/0x4d4)
[<c04dabac>] (ftrace_init) from [<c04cef4c>] (start_kernel+0x20c/0x410)
[<c04cef4c>] (start_kernel) from [<00000000>] (  (null))
---[ end trace 0506a2f5dae6b341 ]---
ftrace failed to modify
[<c000c350>] perf_trace_sys_exit+0x5c/0xe8
 actual:   1e:ff:2f:e1
Initializing ftrace call sites
ftrace record flags: 0
 (0)
 expected tramp: c000fb24

The analysis of this problem has already been performed previously;
refer to the link below.

Fix the above problems by allowing only selected reloc types in
__mcount_loc. The list itself comes from the legacy recordmcount.pl
script.

Link: https://lore.kernel.org/lkml/56961010.6000806@pengutronix.de/
Cc: stable@vger.kernel.org
Fixes: ed60453fa8 ("ARM: 6511/1: ftrace: add ARM support for C version of recordmcount")
Signed-off-by: Alexander Sverdlin <alexander.sverdlin@nokia.com>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-01-19 16:08:25 +00:00
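
The ARM filter that results is essentially a whitelist of call relocation
types; a sketch close to the recordmcount.c change (w() stands for
recordmcount's 32-bit endian-aware reader, declared as an assumed external):

    #include <elf.h>
    #include <stdint.h>

    extern uint32_t (*w)(uint32_t);     /* recordmcount's 32-bit reader (assumed) */

    static int arm_is_fake_mcount(Elf32_Rel const *rp)
    {
        switch (ELF32_R_TYPE(w(rp->r_info))) {
        case R_ARM_CALL:
        case R_ARM_PC24:
        case R_ARM_THM_CALL:
            /* Real call sites: keep them in __mcount_loc. */
            return 0;
        }
        /* Anything else (movw/movt, abs32, ...) must not be patched. */
        return 1;
    }
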
Matt Helsley 4fbcf07416 recordmcount: Clarify what cleanup() does
cleanup() mostly frees/unmaps the malloc'd/privately-mapped
copy of the ELF file recordmcount is working on, which is
set up in mmap_file(). It also deals with positioning within
the pseudo private-mapping of the file and appending to the ELF
file.

Split into two steps:
	mmap_cleanup() for the mapping itself
	file_append_cleanup() for allocations storing the
		appended ELF data.

Also, move the global variable initializations out of the main,
per-object-file loop and nearer to the alloc/init (mmap_file())
and two cleanup functions so we can more clearly see how they're
related.

Link: http://lkml.kernel.org/r/2a387ac86d133d22c68f57b9933c32bab1d09a2d.1564596289.git.mhelsley@vmware.com

Signed-off-by: Matt Helsley <mhelsley@vmware.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2019-08-31 12:19:40 -04:00
Matt Helsley c97fea2625 recordmcount: Remove redundant cleanup() calls
Redundant cleanup calls were introduced when transitioning from
the old error/success handling via setjmp/longjmp -- the longjmp
ensured the cleanup() call only happened once but replacing
the success_file()/fail_file() calls with cleanup() meant that
multiple cleanup() calls can happen as we return from function
calls.

In do_file(), looking just before and after the "goto out" jumps, we
can see that multiple cleanup() calls are being performed. We remove
cleanup() calls from the nested functions because it makes the code
easier to review -- the resources being cleaned up are generally
allocated and initialized in the callers, so freeing them there
makes more sense.

Other redundant cleanup() calls:

mmap_file() is only called from do_file() and, if mmap_file() fails,
then we goto out and do cleanup() there too.

write_file() is only called from do_file() and do_file()
calls cleanup() unconditionally after returning from write_file()
therefore the cleanup() calls in write_file() are not necessary.

find_secsym_ndx() is called from do_func()'s for-loop; when we are
cleaning up there, it's obvious that we then break out of the loop and
do another cleanup().

__has_rel_mcount() is called from two parts of do_func()
and calls cleanup(). In theory we could move those calls into do_func();
however, they in turn prove redundant, so another simplification step
removes them as well.

Link: http://lkml.kernel.org/r/de197e17fc5426623a847ea7cf3a1560a7402a4b.1564596289.git.mhelsley@vmware.com

Signed-off-by: Matt Helsley <mhelsley@vmware.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2019-08-31 12:19:40 -04:00
Matt Helsley 2e63152bc1 recordmcount: Kernel style formatting
Fix up the whitespace irregularity in the ELF switch
blocks.

Swapping the initial value of gpfx allows us to
simplify all but one of the one-line switch cases even
further.

Link: http://lkml.kernel.org/r/647f21f43723d3e831cedd3238c893db03eea6f0.1564596289.git.mhelsley@vmware.com

Signed-off-by: Matt Helsley <mhelsley@vmware.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2019-08-31 12:19:39 -04:00
Matt Helsley 3aec863824 recordmcount: Kernel style function signature formatting
The uwrite() and ulseek() functions are formatted inconsistently
with the rest of the file and the kernel overall. While we're
making other changes here let's fix this.

Link: http://lkml.kernel.org/r/4c67698f734be9867a2aba7035fe0ce59e1e4423.1564596289.git.mhelsley@vmware.com

Signed-off-by: Matt Helsley <mhelsley@vmware.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2019-08-31 12:19:39 -04:00
Matt Helsley 3f1df12019 recordmcount: Rewrite error/success handling
Recordmcount uses setjmp/longjmp to manage control flow as
it reads and then writes the ELF file. This unusual control
flow is hard to follow and check in addition to being unlike
kernel coding style.

So we rewrite these paths to use regular return values to
indicate error/success. When an error or a previously-completed object
file is found, we return an error code following kernel
coding conventions -- negative error values, and 0 for success when
we're not returning a pointer. Functions that return pointers return
NULL on failure and non-NULL pointers otherwise.

One oddity is already_has_rel_mcount -- there we use pointer comparison
rather than string comparison to differentiate between
previously-processed object files and returning the name of a text
section.

Link: http://lkml.kernel.org/r/8ba8633d4afe444931f363c8d924bf9565b89a86.1564596289.git.mhelsley@vmware.com

Signed-off-by: Matt Helsley <mhelsley@vmware.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2019-08-31 12:19:39 -04:00
Matt Helsley 17e262e995 recordmcount: Remove unused fd from uwrite() and ulseek()
uwrite() works within the pseudo-mapping and extends it as necessary
without needing the file descriptor (fd) parameter passed to it.
Similarly, ulseek() doesn't need its fd parameter. These parameters
were only added because the functions bear a conceptual resemblance
to write() and lseek(). Worse, they obscure the fact that at the time
uwrite() and ulseek() are called fd_map is not a valid file descriptor.

Remove the unused file descriptor parameters that make it look like
fd_map is still valid.

Link: http://lkml.kernel.org/r/2a136e820ee208469d375265c7b8eb28570749a0.1563992889.git.mhelsley@vmware.com

Signed-off-by: Matt Helsley <mhelsley@vmware.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2019-08-31 12:19:38 -04:00
Matt Helsley a146207916 recordmcount: Remove uread()
uread() is only used to initialize the ELF file's pseudo
private-memory mapping while uwrite() and ulseek() work within
the pseudo-mapping and extend it as necessary.  Thus it is not
a complementary function to uwrite() and ulseek(). It also makes
no sense to do cleanups inside uread() when its only caller,
mmap_file(), is doing the relevant allocations and associated
initializations.

Therefore it's clearer to use a plain read() call to initialize the
data in mmap_file() and remove uread().

Link: http://lkml.kernel.org/r/31a87c22b19150cec1c8dc800c8b0873a2741703.1563992889.git.mhelsley@vmware.com

Signed-off-by: Matt Helsley <mhelsley@vmware.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2019-08-31 12:19:38 -04:00
Matt Helsley 1bd95be204 recordmcount: Remove redundant strcmp
The strcmp is unnecessary since .text is already accepted as a
prefix in the strncmp().

Link: http://lkml.kernel.org/r/358e590b49adbe4185e161a8b364e323f3d52857.1563992889.git.mhelsley@vmware.com

Signed-off-by: Matt Helsley <mhelsley@vmware.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2019-08-31 12:19:38 -04:00
Thomas Gleixner 4317cf95ca treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 378
Based on 1 normalized pattern(s):

  licensed under the gnu general public license version 2 gplv2

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 5 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Armijn Hemel <armijn@tjaldur.nl>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190531081036.993848054@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-05 17:37:10 +02:00
Joe Lawrence 9c8e2f6d3d scripts/recordmcount.{c,pl}: support -ffunction-sections .text.* section names
When building with -ffunction-sections, the compiler will place each
function into its own ELF section, prefixed with ".text".  For example,
a simple test module with functions test_module_do_work() and
test_module_wq_func():

  % objdump --section-headers test_module.o | awk '/\.text/{print $2}'
  .text
  .text.test_module_do_work
  .text.test_module_wq_func
  .init.text
  .exit.text

Adjust the recordmcount scripts to look for ".text" as a section name
prefix.  This will ensure that those functions will be included in the
__mcount_loc relocations:

  % objdump --reloc --section __mcount_loc test_module.o
  OFFSET           TYPE              VALUE
  0000000000000000 R_X86_64_64       .text.test_module_do_work
  0000000000000008 R_X86_64_64       .text.test_module_wq_func
  0000000000000010 R_X86_64_64       .init.text

Link: http://lkml.kernel.org/r/1542745158-25392-2-git-send-email-joe.lawrence@redhat.com

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2018-12-08 20:54:08 -05:00
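
In recordmcount.c this boils down to matching ".text" as a prefix instead of
only as an exact name; a sketch of the shape (whitelist abbreviated, section
list illustrative):

    #include <string.h>

    static int is_mcounted_section_name(char const *const txtname)
    {
        /* ".text" itself and any ".text.<function>" section ... */
        return strncmp(".text", txtname, 5) == 0 ||
            /* ... plus the usual exact-match whitelist: */
            strcmp(".init.text", txtname) == 0 ||
            strcmp(".ref.text",  txtname) == 0 ||
            strcmp(".sched.text", txtname) == 0;
            /* ... */
    }
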
nixiaoming ac5db1fc89 scripts: Fixed printf format mismatch
scripts/kallsyms.c: function write_src:
"printf", the #1 format specifier "d" needs arg type "int",
but the corresponding arg "table_cnt" has type "unsigned int"

scripts/recordmcount.c: function do_file:
"fprintf", the #1 format specifier "d" needs arg type "int",
but the corresponding arg "(*w2)(ehdr->e_machine)" has type "unsigned int"

scripts/recordmcount.h: function find_secsym_ndx:
"fprintf", the #1 format specifier "d" needs arg type "int",
but the corresponding arg "txtndx" has type "unsigned int"

Signed-off-by: nixiaoming <nixiaoming@huawei.com>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2018-05-29 22:04:12 +09:00
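
A generic illustration of the mismatch being fixed (not the actual kallsyms.c
or recordmcount.c lines): an unsigned int argument wants the %u conversion.

    #include <stdio.h>

    int main(void)
    {
        unsigned int table_cnt = 42;    /* placeholder value */

        printf("%d\n", table_cnt);      /* mismatch: %d expects int */
        printf("%u\n", table_cnt);      /* fixed: %u matches unsigned int */
        return 0;
    }
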
James Hogan 5f171577b4 Drop a bunch of metag references
Now that arch/metag/ has been removed, drop a bunch of metag references
in various codes across the whole tree:
 - VM_GROWSUP and __VM_ARCH_SPECIFIC_1.
 - MT_METAG_* ELF note types.
 - METAG Kconfig dependencies (FRAME_POINTER) and ranges
   (MAX_STACK_SIZE_MB).
 - metag cases in tools (checkstack.pl, recordmcount.c, perf).

Signed-off-by: James Hogan <jhogan@kernel.org>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: linux-mm@kvack.org
Cc: linux-metag@vger.kernel.org
2018-02-23 14:29:59 +00:00
Steven Rostedt (VMware) 42c269c88d ftrace: Allow for function tracing to record init functions on boot up
Adding a hook into free_reserve_area() that informs ftrace that boot up init
text is being free, lets ftrace safely remove those init functions from its
records, which keeps ftrace from trying to modify text that no longer
exists.

Note, this still does not allow for tracing .init text of modules, as
modules require different work for freeing its init code.

Link: http://lkml.kernel.org/r/1488502497.7212.24.camel@linux.intel.com

Cc: linux-mm@kvack.org
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Requested-by: Todd Brandt <todd.e.brandt@linux.intel.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2017-03-24 20:51:49 -04:00
Stephen Boyd 9648dc1577 recordmcount: arm: Implement make_nop
In a similar spirit to the x86 and arm64 support, add a make_nop_arm()
to replace calls to mcount with a nop in sections that aren't
traced.

Link: http://lkml.kernel.org/r/20161018234200.5804-1-sboyd@codeaurora.org

Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2016-11-14 16:43:00 -05:00
Chris Metcalf 6727ad9e20 nmi_backtrace: generate one-line reports for idle cpus
When doing an nmi backtrace of many cores, most of which are idle, the
output is a little overwhelming and very uninformative.  Suppress
messages for cpus that are idling when they are interrupted and just
emit one line, "NMI backtrace for N skipped: idling at pc 0xNNN".

We do this by grouping all the cpuidle code together into a new
.cpuidle.text section, and then checking the address of the interrupted
PC to see if it lies within that section.

This commit suitably tags x86 and tile idle routines, and only adds in
the minimal framework for other architectures.

Link: http://lkml.kernel.org/r/1472487169-14923-5-git-send-email-cmetcalf@mellanox.com
Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Daniel Thompson <daniel.thompson@linaro.org> [arm]
Tested-by: Petr Mladek <pmladek@suse.com>
Cc: Aaron Tomlin <atomlin@redhat.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-10-07 18:46:30 -07:00
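
The idea reduces to a bounds check of the interrupted PC against
linker-provided section markers; a minimal sketch (symbol and function names
assumed, not necessarily those used in the commit):

    #include <stdbool.h>

    extern char __cpuidle_text_start[], __cpuidle_text_end[];

    static bool pc_in_cpuidle_text(unsigned long pc)
    {
        /* True when the CPU was idling in the .cpuidle.text section. */
        return pc >= (unsigned long)__cpuidle_text_start &&
               pc <  (unsigned long)__cpuidle_text_end;
    }
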
Dmitry Vyukov e436fd61a8 scripts/recordmcount.c: account for .softirqentry.text
be7635e728 ("arch, ftrace: for KASAN put hard/soft IRQ entries into
separate sections") added the .softirqentry.text section, but it was not added
to recordmcount, so functions in that section are untraceable.  Add the
section to scripts/recordmcount.c and scripts/recordmcount.pl.

Fixes: be7635e728 ("arch, ftrace: for KASAN put hard/soft IRQ entries into separate sections")
Link: http://lkml.kernel.org/r/1474902626-73468-1-git-send-email-dvyukov@google.com
Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Acked-by: Steve Rostedt <rostedt@goodmis.org>
Cc: <stable@vger.kernel.org>	[4.6+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-09-28 16:19:02 -07:00
Laura Abbott b2e1c26f0b ftrace/recordmcount: Work around for addition of metag magic but not relocations
glibc recently did a sync up (94e73c95d9b5 "elf.h: Sync with the gabi
webpage") that added a #define for EM_METAG but did not add the relocations.

This triggers build errors:

scripts/recordmcount.c: In function 'do_file':
scripts/recordmcount.c:466:28: error: 'R_METAG_ADDR32' undeclared (first use in this function)
  case EM_METAG:  reltype = R_METAG_ADDR32;
                            ^~~~~~~~~~~~~~
scripts/recordmcount.c:466:28: note: each undeclared identifier is reported only once for each function it appears in
scripts/recordmcount.c:468:20: error: 'R_METAG_NONE' undeclared (first use in this function)
     rel_type_nop = R_METAG_NONE;
                    ^~~~~~~~~~~~

Work around this change with some more #ifdefery for the relocations.

Fedora Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1354034

Link: http://lkml.kernel.org/r/1468005530-14757-1-git-send-email-labbott@redhat.com

Cc: stable@vger.kernel.org # v3.9+
Cc: James Hogan <james.hogan@imgtec.com>
Fixes: 00512bdd45 ("metag: ftrace support")
Reported-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2016-08-02 12:57:24 -04:00
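
The workaround amounts to guarding the relocation constants separately from
EM_METAG, since glibc may now define the latter without the former; an
illustrative sketch (relocation values per the metag ABI, treat as assumptions):

    #ifndef EM_METAG
    #define EM_METAG        174
    #endif

    /* glibc's elf.h can define EM_METAG without the relocation types. */
    #ifndef R_METAG_ADDR32
    #define R_METAG_ADDR32  2
    #define R_METAG_NONE    3
    #endif
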
Colin Ian King 713a3e4de7 ftrace/scripts: Fix incorrect use of sprintf in recordmcount
Fix build warning:

scripts/recordmcount.c:589:4: warning: format not a string
literal and no format arguments [-Wformat-security]
    sprintf("%s: failed\n", file);

Fixes: a50bd43935 ("ftrace/scripts: Have recordmcount copy the object file")
Link: http://lkml.kernel.org/r/1451516801-16951-1-git-send-email-colin.king@canonical.com

Cc: Li Bin <huawei.libin@huawei.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org # 2.6.37+
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2016-01-04 11:13:16 -05:00
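
For reference, the warning arises because the string literal is used as
sprintf()'s destination buffer while the file name becomes the format string;
a plausible shape of the fix (illustrative, not the exact hunk):

    /* Before: literal treated as the destination, file name as the format. */
    sprintf("%s: failed\n", file);

    /* After: report the failure to stderr with a literal format string. */
    fprintf(stderr, "%s: failed\n", file);
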
Steven Rostedt (Red Hat) a50bd43935 ftrace/scripts: Have recordmcount copy the object file
Russell King found that he had weird side effects when compiling the kernel
with hard linked ccache. The reason was that recordmcount modified the
kernel in place via mmap, and when a file gets modified twice by
recordmcount, it will complain about it. To fix this issue, Russell wrote a
patch that checked if the file was hard linked more than once and would
unlink it if it was.

Linus Torvalds was not happy with the fact that recordmcount does this
in-place modification. Instead of doing the unlink only if the file has two or
more hard links, it now does the unlink all the time. In other words, it always
makes a copy if it changed something; that is, it does the write-out only if a
change was made.

Cc: stable@vger.kernel.org # 2.6.37+
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2015-12-16 15:46:07 -05:00
Russell King dd39a26538 scripts: recordmcount: break hardlinks
recordmcount edits the file in-place, which can cause problems when
using ccache in hardlink mode.  Arrange for recordmcount to break a
hardlinked object.

Link: http://lkml.kernel.org/r/E1a7MVT-0000et-62@rmk-PC.arm.linux.org.uk

Cc: stable@vger.kernel.org # 2.6.37+
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2015-12-16 09:28:23 -05:00
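
A hedged sketch of the approach (variable names and details assumed): detect
the extra links before editing the mmap'd object, then recreate the file so
the other hard links keep the original contents.

    /* Fragment; assumes <sys/stat.h>, <fcntl.h>, <unistd.h>, <stdio.h>. */
    struct stat sb;

    if (fstat(fd_map, &sb) == 0 && sb.st_nlink > 1) {
        /* Break the hard link: drop this name, recreate it, and
         * copy the original contents into the new inode. */
        close(fd_map);
        if (unlink(fname) < 0)
            perror(fname);
        fd_map = open(fname, O_RDWR | O_CREAT, sb.st_mode);
        /* ...copy the saved contents back before editing... */
    }
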
Li Bin 2ee8a74f2a recordmcount: arm64: Replace the ignored mcount call into nop
Currently, recordmcount only records the functions that are in the
following sections:
.text/.ref.text/.sched.text/.spinlock.text/.irqentry.text/
.kprobes.text/.text.unlikely

For functions that are not in these sections, the call to mcount
stays in place and is not replaced when the kernel boots up, and
it brings performance overhead, e.g. for do_mem_abort (in the
.exception.text section). This patch makes recordmcount turn the
mcount call into a nop in this case.

Link: http://lkml.kernel.org/r/1446019445-14421-1-git-send-email-huawei.libin@huawei.com
Link: http://lkml.kernel.org/r/1446193864-24593-4-git-send-email-huawei.libin@huawei.com

Cc: <lkp@intel.com>
Cc: <catalin.marinas@arm.com>
Cc: <takahiro.akashi@linaro.org>
Cc: <stable@vger.kernel.org> # 3.18+
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Li Bin <huawei.libin@huawei.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2015-11-03 10:50:29 -05:00
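
A sketch close to the arm64 handler added here (the write-back through
recordmcount's ulseek()/uwrite() helpers is reduced to a comment):

    #include <stddef.h>
    #include <stdint.h>

    /* AArch64 "nop", i.e. the little-endian bytes of 0xd503201f. */
    static unsigned char ideal_nop4_arm64[4] = { 0x1f, 0x20, 0x03, 0xd5 };

    static int make_nop_arm64(void *map, size_t const offset)
    {
        uint32_t *ptr = (uint32_t *)((char *)map + offset);

        /* "bl <_mcount>" is 0x94000000 before relocation. */
        if (*ptr != 0x94000000)
            return -1;

        /* Convert to nop: write ideal_nop4_arm64 at 'offset'
         * (done via ulseek()/uwrite() in recordmcount itself). */
        return 0;
    }
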
Li Bin 46a2b61ecb recordmcount: x86: Assign a meaningful value to rel_type_nop
Although the default value of rel_type_nop is zero, and the value
of R_386_NONE/R_X86_64_NONE is zero too, it should be assigned
a meaningful value explicitly; otherwise it looks confusing.

Assign R_386_NONE to rel_type_nop for 386, assign R_X86_64_NONE
to rel_type_nop for x86_64.

Link: http://lkml.kernel.org/r/1446020606-16352-1-git-send-email-huawei.libin@huawei.com

Signed-off-by: Li Bin <huawei.libin@huawei.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2015-11-02 13:34:32 -05:00
Heiko Carstens c933146a5e s390/ftrace,kprobes: allow to patch first instruction
If the function tracer is enabled, allow setting kprobes on the first
instruction of a function (which is the function trace caller):

If no kprobe is set, handling of enabling and disabling function tracing
of a function simply patches the first instruction. Either it is a nop
(right now it's an unconditional branch, which skips the mcount block),
or it's a branch to the ftrace_caller() function.

If a kprobe is being placed on a function tracer calling instruction,
we encode whether we actually have a nop or a branch in the remaining bytes
after the breakpoint instruction (illegal opcode).
This is possible, since the size of the instruction used for the nop
and branch is six bytes, while the size of the breakpoint is only
two bytes.
Therefore the first two bytes contain the illegal opcode and the last
four bytes contain either "0" for nop or "1" for branch. The kprobes
code will then execute/simulate the correct instruction.

Instruction patching for kprobes and function tracer is always done
with stop_machine(). Therefore we don't have any races where an
instruction is patched concurrently on a different cpu.
Besides that also the program check handler which executes the function
trace caller instruction won't be executed concurrently to any
stop_machine() execution.

This allows keeping the full fault-based kprobes handling, which generates
correct pt_regs contents automatically.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-10-27 13:27:27 +01:00
Heiko Carstens 53255c9a4d s390/ftrace: remove 31 bit ftrace support
31 bit and 64 bit diverge more and more and it is rather painful
to keep both parts running.
To make things simpler just remove the 31 bit support which nobody
uses anyway.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-10-09 09:14:18 +02:00
Linus Torvalds c1fdb2d338 Merge branch 'misc' of git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild
Pull kbuild misc updates from Michal Marek:
 "This is the non-critical part of kbuild for v3.16-rc1:
   - make deb-pkg can do s390x and arm64
   - new patterns in scripts/tags.sh
   - scripts/tags.sh skips userspace tools' sources (which sometimes
     have copies of kernel structures) and symlinks
   - improvements to the objdiff tool
   - two new coccinelle patches
   - other minor fixes"

* 'misc' of git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild:
  scripts: objdiff: support directories for the augument of record command
  scripts: objdiff: fix a comment
  scripts: objdiff: change the extension of disassembly from .o to .dis
  scripts: objdiff: improve path flexibility for record command
  scripts: objdiff: remove unnecessary code
  scripts: objdiff: direct error messages to stderr
  scripts: objdiff: get the path to .tmp_objdiff more simply
  deb-pkg: Add automatic support for s390x architecture
  coccicheck: Add unneeded return variable test
  kbuild: Fix a typo in documentation
  kbuild: trivial - use tabs for code indent where possible
  kbuild: trivial - remove trailing empty lines
  coccinelle: Check for missing NULL terminators in of_device_id tables
  scripts/tags.sh: ignore symlink'ed source files
  scripts/tags.sh: add regular expression replacement pattern for memcg
  builddeb: add arm64 in the supported architectures
  builddeb: use $OBJCOPY variable instead of objcopy
  scripts/tags.sh: ignore code of user space tools
  scripts/tags.sh: add pattern for DEFINE_HASHTABLE
  .gitignore: ignore Module.symvers in all directories
2014-06-12 21:29:20 -07:00
Masahiro Yamada 7eb6e34052 kbuild: trivial - remove trailing empty lines
Signed-off-by: Masahiro Yamada <yamada.m@jp.panasonic.com>
2014-06-10 00:04:06 +02:00
AKASHI Takahiro af64d2aa87 ftrace: Add arm64 support to recordmcount
The recordmcount utility under scripts/ is run, after compiling each object,
to find all the locations that call _mcount() and put them into a
specific section named __mcount_loc.
Then the linker collects all such information into a table in the kernel image
(between __start_mcount_loc and __stop_mcount_loc) for later use by ftrace.

This patch adds the arm64-specific definitions to identify such locations.
There are two types of implementation, C and Perl. On arm64, only the C version
is used to build the kernel, now that CONFIG_HAVE_C_RECORDMCOUNT is on.
But the Perl version is also maintained.

This patch also contains a workaround for the case where the host
machine's elf.h doesn't have definitions of EM_AARCH64 or
R_AARCH64_ABS64. Without them, compiling the C version of recordmcount will
fail.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-05-29 09:04:31 +01:00
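
The elf.h workaround is just a guarded set of definitions (183 and 257 are the
standard AArch64 values; shown as an illustrative sketch):

    #ifndef EM_AARCH64
    #define EM_AARCH64      183
    #define R_AARCH64_ABS64 257
    #endif
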
James Hogan 00512bdd45 metag: ftrace support
Add ftrace support for metag.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
2013-03-02 20:09:55 +00:00
Martin Schwidefsky f296388682 ftrace/s390: mcount offset calculation
Do the mcount offset adjustment in the recordmcount.pl/recordmcount.[ch]
at compile time and not in ftrace_call_adjust at run time.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-16 15:05:06 -04:00
Martin Schwidefsky 521ccb5c4a ftrace/x86: mcount offset calculation
Do the mcount offset adjustment in the recordmcount.pl/recordmcount.[ch]
at compile time and not in ftrace_call_adjust at run time.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-16 14:55:57 -04:00
Steven Rostedt dfad3d598c ftrace/recordmcount: Add warning logic to warn on mcount not recorded
There are some sections that should not have mcount recorded and should not have
modifications to their code, but currently they waste some time by calling
mcount anyway (which simply returns). The real answer should be to
either whitelist the section or have gcc ignore it fully.

This change adds an option to recordmcount to warn when it finds a section
that is ignored by ftrace but still contains mcount callers. This is not on
by default, as developers may not know whether the section should be completely
ignored or added to the whitelist.

Cc: John Reiser <jreiser@bitwagon.com>
Link: http://lkml.kernel.org/r/20110421023738.476989377@goodmis.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-16 14:44:20 -04:00
Steven Rostedt ffd618fa39 ftrace/recordmcount: Make ignored mcount calls into nops at compile time
There are sections that are ignored by ftrace for the function tracing because
the text is in a section that can be removed without notice. The mcount calls
in these sections are ignored and ftrace never sees them. The downside of this
is that the functions in these sections still call mcount. Although the mcount
function is defined in assembly simply as a return, this added overhead is
unnecessary.

The solution is to convert these callers into nops at compile time.
A better solution is to add 'notrace' to the section markers, but as new sections
come up all the time, it would be nice if they were dealt with when they
are created.

Later patches will deal with finding these sections and doing the proper solution.

Thanks to H. Peter Anvin for giving me the right nops to use for x86.

Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: John Reiser <jreiser@bitwagon.com>
Link: http://lkml.kernel.org/r/20110421023738.237101176@goodmis.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-16 14:43:32 -04:00
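
The x86 nops referred to above are the 5-byte sequences written over the
"call mcount" sites (byte values as used by recordmcount.c; treat as
illustrative):

    /* 5-byte nops used to overwrite "call mcount" (0xe8 + rel32). */
    static const unsigned char ideal_nop5_x86_64[5] =
        { 0x0f, 0x1f, 0x44, 0x00, 0x00 };   /* nopl 0x0(%rax,%rax,1) */
    static const unsigned char ideal_nop5_x86_32[5] =
        { 0x3e, 0x8d, 0x74, 0x26, 0x00 };   /* lea 0x0(%esi,%eiz,1),%esi */
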
Steven Rostedt 9f087e7612 ftrace: Add .kprobe.text section to whitelist for recordmcount.c
The .kprobes.text section is safe for modifying mcount into a nop and for tracing.
Add it to the whitelist in recordmcount.c and recordmcount.pl.

Cc: John Reiser <jreiser@bitwagon.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Link: http://lkml.kernel.org/r/20110421023737.743350547@goodmis.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-16 14:42:15 -04:00
Steven Rostedt e90b0c8bf2 ftrace/trivial: Clean up record mcount to use Linux switch style
The Linux style for switch statements is:

	switch (var) {
	case x:
		[...]
		break;
	}

Not:
	switch (var) {
	case x: {
		[...]
	} break;

Cc: John Reiser <jreiser@bitwagon.com>
Link: http://lkml.kernel.org/r/20110421023737.523968644@goodmis.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-16 14:41:07 -04:00
Steven Rostedt dd5477ff3b ftrace/trivial: Clean up recordmcount.c to use Linux style comparisons
The Linux ftrace subsystem style for comparing is:

  var == 1
  var > 0

and not:

  1 == var
  0 < var

It is considered that Linux developers are smart enough not to do the

  if (var = 1)

mistake.

Cc: John Reiser <jreiser@bitwagon.com>
Link: http://lkml.kernel.org/r/20110421023737.290712238@goodmis.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-05-16 14:38:51 -04:00
Steven Rostedt 1274a9c2e9 ftrace: Add .ref.text as one of the safe areas to trace
The section .ref.text will not go away unexpectedly and is
safe to trace. Add it to the safe list of sections to allow
tracing.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2011-03-10 10:34:39 -05:00
Rabin Vincent ed60453fa8 ARM: 6511/1: ftrace: add ARM support for C version of recordmcount
Depending on the compiler version, ARM GCC calls the mcount function
either __gnu_mcount_nc or mcount.

Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-04 11:30:27 +00:00
Rabin Vincent cd3478f2bd ARM: 6509/1: ftrace: ignore any ftrace.o in C version of recordmcount
arch/arm/kernel/ftrace.c references mcount just like kernel/trace/ftrace.c does,
so change the exclusion filter to match any ftrace.o.

Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-04 11:30:26 +00:00
Wu Zhangjin 412910cd04 ftrace/MIPS: Add module support for C version of recordmcount
Since MIPS modules' address space differs from the core kernel space, to access
_mcount in the core kernel, the functions in modules must use a long
call (-mlong-calls): load the _mcount address into a register and jump to the
address stored in that register:

 c:  3c030000        lui     v1,0x0  <-------->  b label
           c: R_MIPS_HI16  _mcount
           c: R_MIPS_NONE  *ABS*
           c: R_MIPS_NONE  *ABS*
10:  64630000        daddiu  v1,v1,0
          10: R_MIPS_LO16 _mcount
          10: R_MIPS_NONE *ABS*
          10: R_MIPS_NONE *ABS*
14:	03e0082d 	move	at,ra
18:	0060f809 	jalr	v1
label:

In the old Perl version of recordmcount, we only need to record the position of
the 1st R_MIPS_HI16 type of _mcount, and later, in ftrace_make_nop(), replace
the instruction in this position by a "b label" and in ftrace_make_call(),
replace it back.

But the default C version of recordmcount records all of the _mcount symbols,
so we must filter out the 2nd _mcount, as the Perl version of recordmcount does.

The C version of recordmcount copes with the symbols before they are linked,
so it doesn't know the type of the symbols and therefore cannot filter
them as the Perl version of recordmcount does. But as we can see above, the
2nd _mcount symbol of the long call always follows the 1st _mcount symbol of
the same long call, which means the offset from the 1st to the 2nd is fixed; it
is 0x10-0xc = 4 here, which is the length of the 1st load instruction, and since
MIPS has a fixed instruction length, this offset is always 4.

And as we know, _mcount is inserted at the entry of every kernel
function, so the offset between any other pair of _mcount symbols is expected
to be bigger than 4. So, to filter out the 2nd _mcount symbol of the long call,
we can simply check the offset between two _mcount symbols: if it is 4, filter
out the 2nd _mcount symbol.

To avoid touching too much code, an 'empty' function fn_is_fake_mcount() is
added for all of the archs, and specific archs can override it by changing
the function pointer is_fake_mcount in do_file() based on the e_machine. E.g. this
patch adds MIPS_is_fake_mcount() to override the default fn_is_fake_mcount()
pointed to by is_fake_mcount.

This fn_is_fake_mcount() checks whether the _mcount symbol is fake, e.g. the 2nd
_mcount symbol of the long call is fake, for there are 2 _mcount symbols mapped
to one real mcount call, so one of them is fake and must be filtered out.

This fn_is_fake_mcount() is called in sift_rel_mcount() after finding the
_mcount symbols and before adding the _mcount symbol into mrelp, so, it can
prevent the fake mcount symbol going into the last __mcount_loc table.

Signed-off-by: Wu Zhangjin <wuzhangjin@gmail.com>
LKML-Reference: <b866f0138224340a132d31861fa3f9300dee30ac.1288176026.git.wuzhangjin@gmail.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2010-10-29 19:08:55 +01:00
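
A sketch of the resulting filter, close to the MIPS 32-bit variant (_w() stands
for recordmcount's endian-aware reader, assumed here): two _mcount relocations
exactly 4 bytes apart belong to one long call, so the second one is fake.

    #define MIPS_FAKEMCOUNT_OFFSET  4

    static int MIPS32_is_fake_mcount(Elf32_Rel const *rp)
    {
        static Elf32_Addr old_r_offset = ~(Elf32_Addr)0;
        Elf32_Addr current_r_offset = _w(rp->r_offset);
        int is_fake;

        /* Fake if this relocation sits 4 bytes after the previous one. */
        is_fake = (old_r_offset != ~(Elf32_Addr)0) &&
            (current_r_offset - old_r_offset == MIPS_FAKEMCOUNT_OFFSET);
        old_r_offset = current_r_offset;

        return is_fake;
    }
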
John Reiser a2d49358ba ftrace/MIPS: Add MIPS64 support for C version of recordmcount
MIPS64 has a 'weird' Elf64_Rel.r_info [1,2], which must be used instead of
the generic Elf64_Rel.r_info; otherwise, the C version of recordmcount
will not work, failing with a segmentation fault.

Usage of "union mips_r_info" and the functions MIPS64_r_sym() and
MIPS64_r_info() written by Maciej W. Rozycki <macro@linux-mips.org>

----
[1] http://techpubs.sgi.com/library/manuals/4000/007-4658-001/pdf/007-4658-001.pdf
[2] arch/mips/include/asm/module.h

Tested-by: Wu Zhangjin <wuzhangjin@gmail.com>
Signed-off-by: John Reiser <jreiser@BitWagon.com>
Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
LKML-Reference: <AANLkTinwXjLAYACUfhLYaocHD_vBbiErLN3NjwN8JqSy@mail.gmail.com>
LKML-Reference: <910dc2d5ae1ed042df4f96815fe4a433078d1c2a.1288176026.git.wuzhangjin@gmail.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2010-10-29 19:08:54 +01:00
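
The layout being worked around: MIPS64 splits r_info into a 32-bit symbol index
plus four 8-bit fields, so the accessors go through a union rather than the
generic ELF64_R_SYM()/ELF64_R_INFO() macros. A sketch of the shape (field order
per the MIPS64 ELF spec, treat as illustrative):

    #include <elf.h>
    #include <stdint.h>

    union mips_r_info {
        Elf64_Xword r_info;
        struct {
            Elf64_Word r_sym;       /* Symbol index           */
            uint8_t    r_ssym;      /* Special symbol         */
            uint8_t    r_type3;     /* Third relocation type  */
            uint8_t    r_type2;     /* Second relocation type */
            uint8_t    r_type;      /* First relocation type  */
        } r_mips;
    };

    /* MIPS64_r_sym()/MIPS64_r_info() read and write rp->r_info through
     * this union instead of the generic 64-bit r_info helpers. */
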
Steven Rostedt 4447586364 ftrace: Do not process kernel/trace/ftrace.o with C recordmcount program
The file kernel/trace/ftrace.c references the mcount() call to
convert the mcount() callers to nops. But because it references
mcount(), the mcount() address is placed in the relocation table.

The C version of recordmcount reads the relocation table of all
object files, and it will add all references to mcount to the
__mcount_loc table that is used to find the places that call mcount()
and change the call to a nop. When recordmcount finds the mcount reference
in kernel/trace/ftrace.o, it saves that location even though the code
is not a call, but references mcount as data.

On boot up, when all calls are converted to nops, the code has a safety
check to determine what op code it is actually replacing before it
replaces it. If that op code at the address does not match, then
a warning is printed and the function tracer is disabled.

The reference to mcount in ftrace.c, causes this warning to trigger,
since the reference is not a call to mcount(). The ftrace.c file is
not compiled with the -pg flag, so no calls to mcount() should be
expected.

This patch simply makes recordmcount.c skip the kernel/trace/ftrace.c
file. This was the same solution used by the perl version of
recordmcount.

Reported-by: Ingo Molnar <mingo@elte.hu>
Cc: John Reiser <jreiser@bitwagon.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2010-10-15 11:49:47 -04:00
Steven Rostedt c28d5077f8 ftrace: Remove duplicate code for 64 and 32 bit in recordmcount.c
The elf reader for recordmcount.c had duplicate functions for both
32 bit and 64 bit elf handling. This was due to the need to use
the 32 and 64 bit elf structures.

This patch consolidates the two by using macros to define the 32
and 64 bit names in a recordmcount.h file; then, by just defining
a RECORD_MCOUNT_64 macro and including recordmcount.h twice, we
create the functions for both the 32 bit version and the
64 bit version from one code source.

Cc: John Reiser <jreiser@bitwagon.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2010-10-14 16:54:00 -04:00
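
The include-twice pattern looks roughly like this (simplified; recordmcount.h
first #undefs every name before re-defining it per word size):

    /* recordmcount.c */
    #include "recordmcount.h"   /* builds the 32-bit handlers (do32, ...) */
    #define RECORD_MCOUNT_64
    #include "recordmcount.h"   /* builds the 64-bit handlers (do64, ...) */

    /* recordmcount.h (excerpt) */
    #ifdef RECORD_MCOUNT_64
    # define do_func    do64
    # define Elf_Ehdr   Elf64_Ehdr
    # define Elf_Rel    Elf64_Rel
    #else
    # define do_func    do32
    # define Elf_Ehdr   Elf32_Ehdr
    # define Elf_Rel    Elf32_Rel
    #endif
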
John Reiser 81d3858d31 ftrace: Add C version of recordmcount compile time code
Currently, the mcount callers are found with a perl script that does
an objdump on every file in the kernel. This is a C version of that
same code, which should reduce the time it takes to compile
the kernel with dynamic ftrace enabled.

Signed-off-by: John Reiser <jreiser@bitwagon.com>

[ Updated the code to include .text.unlikely section as well as
  changing the format to follow Linux coding style. ]

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2010-10-14 16:44:34 -04:00