#
# Makefile for the linux kernel.
#

obj-y     = fork.o exec_domain.o panic.o \
	    cpu.o exit.o softirq.o resource.o \
	    sysctl.o sysctl_binary.o capability.o ptrace.o user.o \
	    signal.o sys.o umh.o workqueue.o pid.o task_work.o \
	    extable.o params.o \
	    kthread.o sys_ni.o nsproxy.o \
	    notifier.o ksysfs.o cred.o reboot.o \
	    async.o range.o smpboot.o ucount.o
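
# Note: obj-y objects are always linked into this directory's built-in
# object, while obj-$(CONFIG_FOO) expands to obj-y or obj-m depending on
# whether FOO is set to y or m, so one line covers both built-in and
# modular builds.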

obj-$(CONFIG_MODULES) += kmod.o
obj-$(CONFIG_MULTIUSER) += groups.o

ifdef CONFIG_FUNCTION_TRACER
# Do not trace internal ftrace files
CFLAGS_REMOVE_irq_work.o = $(CC_FLAGS_FTRACE)
endif
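
# CFLAGS_REMOVE_<file>.o is the stock kbuild idiom for dropping a flag from
# a single object; here it strips the ftrace instrumentation flags
# (typically -pg) so irq_work.o is not itself traced.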

# Prevents flicker of uninteresting __do_softirq()/__local_bh_disable_ip()
# in coverage traces.
KCOV_INSTRUMENT_softirq.o := n
# These are called from save_stack_trace() on slub debug path,
# and produce insane amounts of uninteresting coverage.
KCOV_INSTRUMENT_module.o := n
KCOV_INSTRUMENT_extable.o := n
# Don't self-instrument.
KCOV_INSTRUMENT_kcov.o := n
KASAN_SANITIZE_kcov.o := n
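
# The per-object KCOV_INSTRUMENT_<file>.o / KASAN_SANITIZE_<file>.o knobs
# above are interpreted by scripts/Makefile.lib, so any object in this
# directory can opt out of coverage or KASAN instrumentation the same way.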

# cond_syscall is currently not LTO compatible
CFLAGS_sys_ni.o = $(DISABLE_LTO)

obj-y += sched/
obj-y += locking/
obj-y += power/
obj-y += printk/
obj-y += irq/
obj-y += rcu/
obj-y += livepatch/

obj-$(CONFIG_CHECKPOINT_RESTORE) += kcmp.o
obj-$(CONFIG_FREEZER) += freezer.o
obj-$(CONFIG_PROFILING) += profile.o
obj-$(CONFIG_STACKTRACE) += stacktrace.o
obj-y += time/
obj-$(CONFIG_FUTEX) += futex.o
ifeq ($(CONFIG_COMPAT),y)
obj-$(CONFIG_FUTEX) += futex_compat.o
endif
obj-$(CONFIG_GENERIC_ISA_DMA) += dma.o
obj-$(CONFIG_SMP) += smp.o
ifneq ($(CONFIG_SMP),y)
obj-y += up.o
endif
obj-$(CONFIG_UID16) += uid16.o
obj-$(CONFIG_MODULES) += module.o
obj-$(CONFIG_MODULE_SIG) += module_signing.o
obj-$(CONFIG_KALLSYMS) += kallsyms.o
obj-$(CONFIG_BSD_PROCESS_ACCT) += acct.o
obj-$(CONFIG_CRASH_CORE) += crash_core.o
obj-$(CONFIG_KEXEC_CORE) += kexec_core.o
obj-$(CONFIG_KEXEC) += kexec.o
obj-$(CONFIG_KEXEC_FILE) += kexec_file.o
obj-$(CONFIG_BACKTRACE_SELF_TEST) += backtracetest.o
obj-$(CONFIG_COMPAT) += compat.o
obj-$(CONFIG_CGROUPS) += cgroup/
obj-$(CONFIG_UTS_NS) += utsname.o
obj-$(CONFIG_USER_NS) += user_namespace.o
obj-$(CONFIG_PID_NS) += pid_namespace.o
obj-$(CONFIG_IKCONFIG) += configs.o
obj-$(CONFIG_SMP) += stop_machine.o
obj-$(CONFIG_KPROBES_SANITY_TEST) += test_kprobes.o
obj-$(CONFIG_AUDIT) += audit.o auditfilter.o
obj-$(CONFIG_AUDITSYSCALL) += auditsc.o
obj-$(CONFIG_AUDIT_WATCH) += audit_watch.o audit_fsnotify.o
obj-$(CONFIG_AUDIT_TREE) += audit_tree.o
obj-$(CONFIG_GCOV_KERNEL) += gcov/
obj-$(CONFIG_KCOV) += kcov.o
obj-$(CONFIG_KPROBES) += kprobes.o
obj-$(CONFIG_KGDB) += debug/
obj-$(CONFIG_DETECT_HUNG_TASK) += hung_task.o
obj-$(CONFIG_LOCKUP_DETECTOR) += watchdog.o
obj-$(CONFIG_HARDLOCKUP_DETECTOR_PERF) += watchdog_hld.o
obj-$(CONFIG_SECCOMP) += seccomp.o
obj-$(CONFIG_RELAY) += relay.o
obj-$(CONFIG_SYSCTL) += utsname_sysctl.o
obj-$(CONFIG_TASK_DELAY_ACCT) += delayacct.o
obj-$(CONFIG_TASKSTATS) += taskstats.o tsacct.o
obj-$(CONFIG_TRACEPOINTS) += tracepoint.o
obj-$(CONFIG_LATENCYTOP) += latencytop.o
obj-$(CONFIG_ELFCORE) += elfcore.o
obj-$(CONFIG_FUNCTION_TRACER) += trace/
obj-$(CONFIG_TRACING) += trace/
obj-$(CONFIG_TRACE_CLOCK) += trace/
obj-$(CONFIG_RING_BUFFER) += trace/
obj-$(CONFIG_TRACEPOINTS) += trace/
obj-$(CONFIG_IRQ_WORK) += irq_work.o
obj-$(CONFIG_CPU_PM) += cpu_pm.o
obj-$(CONFIG_BPF) += bpf/
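
# The repeated trace/ entries above are harmless: kbuild deduplicates
# directory prerequisites when linking, so trace/ is descended into and
# linked once no matter how many of these symbols are enabled.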

obj-$(CONFIG_PERF_EVENTS) += events/

obj-$(CONFIG_USER_RETURN_NOTIFIER) += user-return-notifier.o
obj-$(CONFIG_PADATA) += padata.o
obj-$(CONFIG_CRASH_DUMP) += crash_dump.o
obj-$(CONFIG_JUMP_LABEL) += jump_label.o
obj-$(CONFIG_CONTEXT_TRACKING) += context_tracking.o
obj-$(CONFIG_TORTURE_TEST) += torture.o

obj-$(CONFIG_HAS_IOMEM) += memremap.o

$(obj)/configs.o: $(obj)/config_data.h

targets += config_data.gz
$(obj)/config_data.gz: $(KCONFIG_CONFIG) FORCE
	$(call if_changed,gzip)

filechk_ikconfiggz = (echo "static const char kernel_config_data[] __used = MAGIC_START"; cat $< | scripts/basic/bin2c; echo "MAGIC_END;")
targets += config_data.h
$(obj)/config_data.h: $(obj)/config_data.gz FORCE
	$(call filechk,ikconfiggz)
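
# Sketch of how the ikconfig rules above fit together: the current
# $(KCONFIG_CONFIG) (normally .config) is gzipped into config_data.gz,
# bin2c wraps the compressed bytes in a C array in config_data.h, and
# configs.o embeds that array so the running kernel can expose it as
# /proc/config.gz when CONFIG_IKCONFIG_PROC is enabled. For example:
#   zcat /proc/config.gz > running.config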