Merge tag 'trace-fixes-v3.19-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull ftrace fixes from Steven Rostedt:

 "This holds a few fixes to the ftrace infrastructure as well as the mixture of function graph tracing and kprobes.

  When jprobes and function graph tracing are enabled at the same time it will crash the system:

      # modprobe jprobe_example
      # echo function_graph > /sys/kernel/debug/tracing/current_tracer

  After the first fork (jprobe_example probes it), the system will crash.

  This is due to the way jprobes copies the stack frame and does not do a normal function return. That interferes with the function graph tracing accounting, which hijacks the return address from the stack and replaces it with a hook function. It saves the return addresses in a separate stack to put back the correct return address when done. But because the jprobe functions do not do a normal return, their stack addresses are not put back until the function they probe is called, which means that the probed function will get the return address of the jprobe handler instead of its own.

  The simple fix here was to disable function graph tracing while the jprobe handler is being called.

  While debugging this I found two minor bugs with the function graph tracing.

  The first was about the function graph tracer sharing its function hash with the function tracer (they both get filtered by the same input). Changing set_ftrace_filter would not sync the function records after a change if the function tracer was disabled but the function graph tracer was enabled. This was due to the update only checking one of the ops instead of the shared ops to see if they were enabled and should perform the sync. This caused the ftrace accounting to break, and a ftrace_bug() would be triggered, disabling ftrace until a reboot.

  The second was that the check to update records only checked one of the filter hashes. It needs to test both the "filter" and "notrace" hashes. The "filter" hash determines what functions to trace, whereas the "notrace" hash determines what functions not to trace (trace all but these). Both hashes need to be passed to the update code to find out what change is being made during the update. This also broke the ftrace record accounting and triggered a ftrace_bug().

  This patch set also includes two more fixes that were reported separately from the kprobe issue.

  One was that init_ftrace_syscalls() was called twice at boot up. This is not a major bug, but that call performs a rather large kmalloc (NR_syscalls * sizeof(*syscalls_metadata)). The second call made the first one a memory leak, and wastes memory.

  The other fix is a regression caused by an update in the v3.19 merge window. Moving the enabling of events earlier moved it before PID 1 was created. The syscall events require setting TIF_SYSCALL_TRACEPOINT for all tasks. But for_each_process_thread() does not include the swapper task (PID 0), so the update ended up being a nop.

  A suggested fix was to set the flag on the init task as well, but I didn't really want to mess with PID 0 for this minor bug. Instead I disable and re-enable events again at early_initcall(), where they used to be enabled. This also handles any other event that might have its own reg function that could break at early boot up"

* tag 'trace-fixes-v3.19-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
  tracing: Fix enabling of syscall events on the command line
  tracing: Remove extra call to init_ftrace_syscalls()
  ftrace/jprobes/x86: Fix conflict between jprobes and function graph tracing
  ftrace: Check both notrace and filter for old hash
  ftrace: Fix updating of filters for shared global_ops filters
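For context on the reproducer above: a jprobe handler runs with the probed function's arguments but never returns normally; it ends with jprobe_return(), which jumps back into the saved context. The module below is a minimal sketch in the spirit of the in-tree samples/kprobes/jprobe_example.c that the commands load (the handler name is illustrative, and on v3.19 the probed fork path is do_fork()):

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/kprobes.h>

/* Mirror the signature of the probed function (do_fork() in this era). */
static long jdo_fork(unsigned long clone_flags, unsigned long stack_start,
		     unsigned long stack_size, int __user *parent_tidptr,
		     int __user *child_tidptr)
{
	pr_info("jprobe: clone_flags = 0x%lx\n", clone_flags);

	/*
	 * A jprobe handler must end with jprobe_return(), which restores the
	 * saved context instead of doing a normal function return. This is
	 * exactly what conflicts with the function graph tracer's
	 * return-address bookkeeping described above.
	 */
	jprobe_return();
	return 0;
}

static struct jprobe my_jprobe = {
	.entry	= jdo_fork,
	.kp	= { .symbol_name = "do_fork" },
};

static int __init jprobe_init(void)
{
	return register_jprobe(&my_jprobe);
}

static void __exit jprobe_exit(void)
{
	unregister_jprobe(&my_jprobe);
}

module_init(jprobe_init);
module_exit(jprobe_exit);
MODULE_LICENSE("GPL");

Because the handler exits through jprobe_return() rather than a normal return, the return address the graph tracer stashed for it is never consumed at the right point, which is the conflict the x86 jprobes patch in this series resolves by pausing graph tracing across the handler.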
Commit 23aa4b416a
arch/x86/kernel/kprobes/core.c:

@@ -1020,6 +1020,15 @@ int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
 	regs->flags &= ~X86_EFLAGS_IF;
 	trace_hardirqs_off();
 	regs->ip = (unsigned long)(jp->entry);
+
+	/*
+	 * jprobes use jprobe_return() which skips the normal return
+	 * path of the function, and this messes up the accounting of the
+	 * function graph tracer to get messed up.
+	 *
+	 * Pause function graph tracing while performing the jprobe function.
+	 */
+	pause_graph_tracing();
 	return 1;
 }
 NOKPROBE_SYMBOL(setjmp_pre_handler);
@@ -1048,24 +1057,25 @@ int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
 	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
 	u8 *addr = (u8 *) (regs->ip - 1);
 	struct jprobe *jp = container_of(p, struct jprobe, kp);
+	void *saved_sp = kcb->jprobe_saved_sp;
 
 	if ((addr > (u8 *) jprobe_return) &&
 	    (addr < (u8 *) jprobe_return_end)) {
-		if (stack_addr(regs) != kcb->jprobe_saved_sp) {
+		if (stack_addr(regs) != saved_sp) {
 			struct pt_regs *saved_regs = &kcb->jprobe_saved_regs;
 			printk(KERN_ERR
 			       "current sp %p does not match saved sp %p\n",
-			       stack_addr(regs), kcb->jprobe_saved_sp);
+			       stack_addr(regs), saved_sp);
 			printk(KERN_ERR "Saved registers for jprobe %p\n", jp);
 			show_regs(saved_regs);
 			printk(KERN_ERR "Current registers\n");
 			show_regs(regs);
 			BUG();
 		}
+		/* It's OK to start function graph tracing again */
+		unpause_graph_tracing();
 		*regs = kcb->jprobe_saved_regs;
-		memcpy((kprobe_opcode_t *)(kcb->jprobe_saved_sp),
-		       kcb->jprobes_stack,
-		       MIN_STACK_SIZE(kcb->jprobe_saved_sp));
+		memcpy(saved_sp, kcb->jprobes_stack, MIN_STACK_SIZE(saved_sp));
 		preempt_enable_no_resched();
 		return 1;
 	}
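For reference, the pause_graph_tracing()/unpause_graph_tracing() pair used in the hunks above is a lightweight per-task mechanism rather than a global switch: the function graph entry hook skips any task whose pause counter is non-zero. Their definitions are essentially the following (as found in include/linux/ftrace.h of this era; quoted from memory, so treat this as a sketch):

/* include/linux/ftrace.h (CONFIG_FUNCTION_GRAPH_TRACER): per-task pause counter */
static inline void pause_graph_tracing(void)
{
	atomic_inc(&current->tracing_graph_pause);
}

static inline void unpause_graph_tracing(void)
{
	atomic_dec(&current->tracing_graph_pause);
}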
kernel/trace/ftrace.c:

@@ -2497,12 +2497,14 @@ static void ftrace_run_update_code(int command)
 }
 
 static void ftrace_run_modify_code(struct ftrace_ops *ops, int command,
-				   struct ftrace_hash *old_hash)
+				   struct ftrace_ops_hash *old_hash)
 {
 	ops->flags |= FTRACE_OPS_FL_MODIFYING;
-	ops->old_hash.filter_hash = old_hash;
+	ops->old_hash.filter_hash = old_hash->filter_hash;
+	ops->old_hash.notrace_hash = old_hash->notrace_hash;
 	ftrace_run_update_code(command);
 	ops->old_hash.filter_hash = NULL;
+	ops->old_hash.notrace_hash = NULL;
 	ops->flags &= ~FTRACE_OPS_FL_MODIFYING;
 }
 
@@ -3579,7 +3581,7 @@ static struct ftrace_ops trace_probe_ops __read_mostly =
 
 static int ftrace_probe_registered;
 
-static void __enable_ftrace_function_probe(struct ftrace_hash *old_hash)
+static void __enable_ftrace_function_probe(struct ftrace_ops_hash *old_hash)
 {
 	int ret;
 	int i;
@@ -3637,6 +3639,7 @@ int
 register_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
 			      void *data)
 {
+	struct ftrace_ops_hash old_hash_ops;
 	struct ftrace_func_probe *entry;
 	struct ftrace_hash **orig_hash = &trace_probe_ops.func_hash->filter_hash;
 	struct ftrace_hash *old_hash = *orig_hash;
@@ -3658,6 +3661,10 @@ register_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
 
 	mutex_lock(&trace_probe_ops.func_hash->regex_lock);
 
+	old_hash_ops.filter_hash = old_hash;
+	/* Probes only have filters */
+	old_hash_ops.notrace_hash = NULL;
+
 	hash = alloc_and_copy_ftrace_hash(FTRACE_HASH_DEFAULT_BITS, old_hash);
 	if (!hash) {
 		count = -ENOMEM;
@@ -3718,7 +3725,7 @@ register_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
 
 	ret = ftrace_hash_move(&trace_probe_ops, 1, orig_hash, hash);
 
-	__enable_ftrace_function_probe(old_hash);
+	__enable_ftrace_function_probe(&old_hash_ops);
 
 	if (!ret)
 		free_ftrace_hash_rcu(old_hash);
@@ -4006,10 +4013,34 @@ ftrace_match_addr(struct ftrace_hash *hash, unsigned long ip, int remove)
 }
 
 static void ftrace_ops_update_code(struct ftrace_ops *ops,
-				   struct ftrace_hash *old_hash)
+				   struct ftrace_ops_hash *old_hash)
 {
-	if (ops->flags & FTRACE_OPS_FL_ENABLED && ftrace_enabled)
+	struct ftrace_ops *op;
+
+	if (!ftrace_enabled)
+		return;
+
+	if (ops->flags & FTRACE_OPS_FL_ENABLED) {
 		ftrace_run_modify_code(ops, FTRACE_UPDATE_CALLS, old_hash);
+		return;
+	}
+
+	/*
+	 * If this is the shared global_ops filter, then we need to
+	 * check if there is another ops that shares it, is enabled.
+	 * If so, we still need to run the modify code.
+	 */
+	if (ops->func_hash != &global_ops.local_hash)
+		return;
+
+	do_for_each_ftrace_op(op, ftrace_ops_list) {
+		if (op->func_hash == &global_ops.local_hash &&
+		    op->flags & FTRACE_OPS_FL_ENABLED) {
+			ftrace_run_modify_code(op, FTRACE_UPDATE_CALLS, old_hash);
+			/* Only need to do this once */
+			return;
+		}
+	} while_for_each_ftrace_op(op);
 }
 
 static int
@@ -4017,6 +4048,7 @@ ftrace_set_hash(struct ftrace_ops *ops, unsigned char *buf, int len,
 		unsigned long ip, int remove, int reset, int enable)
 {
 	struct ftrace_hash **orig_hash;
+	struct ftrace_ops_hash old_hash_ops;
 	struct ftrace_hash *old_hash;
 	struct ftrace_hash *hash;
 	int ret;
@@ -4053,9 +4085,11 @@ ftrace_set_hash(struct ftrace_ops *ops, unsigned char *buf, int len,
 
 	mutex_lock(&ftrace_lock);
 	old_hash = *orig_hash;
+	old_hash_ops.filter_hash = ops->func_hash->filter_hash;
+	old_hash_ops.notrace_hash = ops->func_hash->notrace_hash;
 	ret = ftrace_hash_move(ops, enable, orig_hash, hash);
 	if (!ret) {
-		ftrace_ops_update_code(ops, old_hash);
+		ftrace_ops_update_code(ops, &old_hash_ops);
 		free_ftrace_hash_rcu(old_hash);
 	}
 	mutex_unlock(&ftrace_lock);
@@ -4267,6 +4301,7 @@ static void __init set_ftrace_early_filters(void)
 int ftrace_regex_release(struct inode *inode, struct file *file)
 {
 	struct seq_file *m = (struct seq_file *)file->private_data;
+	struct ftrace_ops_hash old_hash_ops;
 	struct ftrace_iterator *iter;
 	struct ftrace_hash **orig_hash;
 	struct ftrace_hash *old_hash;
@@ -4300,10 +4335,12 @@ int ftrace_regex_release(struct inode *inode, struct file *file)
 
 		mutex_lock(&ftrace_lock);
 		old_hash = *orig_hash;
+		old_hash_ops.filter_hash = iter->ops->func_hash->filter_hash;
+		old_hash_ops.notrace_hash = iter->ops->func_hash->notrace_hash;
 		ret = ftrace_hash_move(iter->ops, filter_hash,
 				       orig_hash, iter->hash);
 		if (!ret) {
-			ftrace_ops_update_code(iter->ops, old_hash);
+			ftrace_ops_update_code(iter->ops, &old_hash_ops);
 			free_ftrace_hash_rcu(old_hash);
 		}
 		mutex_unlock(&ftrace_lock);
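A note for readers of the ftrace.c hunks above: struct ftrace_ops_hash is the small container that every ftrace_ops already points to through func_hash, bundling the two hashes together with the lock that protects them. Passing it around instead of a lone filter hash is what lets ftrace_ops_update_code() and ftrace_run_modify_code() see both the "filter" and the "notrace" side of an update. Sketch of the structure (field names taken from the accesses in the diff; exact layout quoted from memory):

struct ftrace_ops_hash {
	struct ftrace_hash	*notrace_hash;	/* functions that must NOT be traced */
	struct ftrace_hash	*filter_hash;	/* if non-empty, trace only these */
	struct mutex		regex_lock;	/* serializes updates to the hashes */
};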
kernel/trace/trace.c:

@@ -6918,7 +6918,6 @@ void __init trace_init(void)
 		tracepoint_printk = 0;
 	}
 	tracer_alloc_buffers();
-	init_ftrace_syscalls();
 	trace_event_init();
 }
 
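Context for the one-line removal above: init_ftrace_syscalls() (in kernel/trace/trace_syscalls.c) builds the per-syscall metadata table, and its allocation is the "rather large kmalloc" the pull request mentions, so a second call simply allocates the table again and leaks the first one. An abbreviated, hypothetical sketch of that function, with everything beyond the allocation elided and the details quoted from memory:

void __init init_ftrace_syscalls(void)
{
	/* NR_syscalls * sizeof(*syscalls_metadata) bytes, needed once per boot. */
	syscalls_metadata = kcalloc(NR_syscalls, sizeof(*syscalls_metadata),
				    GFP_KERNEL);
	if (WARN_ON(!syscalls_metadata))
		return;

	/* ... look up and record the metadata for each syscall number ... */
}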
kernel/trace/trace_events.c:

@@ -2429,12 +2429,39 @@ static __init int event_trace_memsetup(void)
 	return 0;
 }
 
+static __init void
+early_enable_events(struct trace_array *tr, bool disable_first)
+{
+	char *buf = bootup_event_buf;
+	char *token;
+	int ret;
+
+	while (true) {
+		token = strsep(&buf, ",");
+
+		if (!token)
+			break;
+		if (!*token)
+			continue;
+
+		/* Restarting syscalls requires that we stop them first */
+		if (disable_first)
+			ftrace_set_clr_event(tr, token, 0);
+
+		ret = ftrace_set_clr_event(tr, token, 1);
+		if (ret)
+			pr_warn("Failed to enable trace event: %s\n", token);
+
+		/* Put back the comma to allow this to be called again */
+		if (buf)
+			*(buf - 1) = ',';
+	}
+}
+
 static __init int event_trace_enable(void)
 {
 	struct trace_array *tr = top_trace_array();
 	struct ftrace_event_call **iter, *call;
-	char *buf = bootup_event_buf;
-	char *token;
 	int ret;
 
 	if (!tr)
@@ -2456,18 +2483,7 @@ static __init int event_trace_enable(void)
 	 */
 	__trace_early_add_events(tr);
 
-	while (true) {
-		token = strsep(&buf, ",");
-
-		if (!token)
-			break;
-		if (!*token)
-			continue;
-
-		ret = ftrace_set_clr_event(tr, token, 1);
-		if (ret)
-			pr_warn("Failed to enable trace event: %s\n", token);
-	}
+	early_enable_events(tr, false);
 
 	trace_printk_start_comm();
 
@@ -2478,6 +2494,31 @@ static __init int event_trace_enable(void)
 	return 0;
 }
 
+/*
+ * event_trace_enable() is called from trace_event_init() first to
+ * initialize events and perhaps start any events that are on the
+ * command line. Unfortunately, there are some events that will not
+ * start this early, like the system call tracepoints that need
+ * to set the TIF_SYSCALL_TRACEPOINT flag of pid 1. But event_trace_enable()
+ * is called before pid 1 starts, and this flag is never set, making
+ * the syscall tracepoint never get reached, but the event is enabled
+ * regardless (and not doing anything).
+ */
+static __init int event_trace_enable_again(void)
+{
+	struct trace_array *tr;
+
+	tr = top_trace_array();
+	if (!tr)
+		return -ENODEV;
+
+	early_enable_events(tr, true);
+
+	return 0;
+}
+
+early_initcall(event_trace_enable_again);
+
 static __init int event_trace_init(void)
 {
 	struct trace_array *tr;
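A usage note on the trace_events.c changes above: bootup_event_buf holds the saved value of the trace_event= kernel command-line parameter, so early_enable_events() re-parses the same boot-time event list when the new early_initcall runs. The parameter is captured very early with a __setup() hook, roughly as sketched below (reconstructed from memory of this era's kernel/trace/trace_events.c; the real handler also tweaks a couple of tracing flags):

/* Boot with, e.g.: trace_event=sched:sched_switch,syscalls:sys_enter_openat */
static char bootup_event_buf[COMMAND_LINE_SIZE] __initdata;

static __init int setup_trace_event(char *str)
{
	/* Stash the comma-separated event list until the tracing core is up. */
	strlcpy(bootup_event_buf, str, COMMAND_LINE_SIZE);
	return 1;
}
__setup("trace_event=", setup_trace_event);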