commit 90eac7eee2

Merge tag 'ftracetest-3.18' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull ftrace test code from Steven Rostedt:
 "This patch series starts a new selftests section in the
  tools/testing/selftests directory called "ftrace" that holds tests
  aimed at testing ftrace and subsystems that use ftrace (like
  kprobes). So far only a few tests were written (by Masami Hiramatsu),
  but more will be added in the near future (3.19)"

* tag 'ftracetest-3.18' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
  tracing/kprobes: Add selftest scripts testing kprobe-tracer as startup test
  ftracetest: Add POSIX.3 standard and XFAIL result codes
  ftracetest: Add kprobe basic testcases
  ftracetest: Add ftrace basic testcases
  ftracetest: Initial commit for ftracetest
@@ -9432,6 +9432,7 @@ F: include/*/ftrace.h
F: include/linux/trace*.h
F: include/trace/
F: kernel/trace/
F: tools/testing/selftests/ftrace/

TRIVIAL PATCHES
M: Jiri Kosina <trivial@kernel.org>
@@ -14,6 +14,7 @@ TARGETS += powerpc
TARGETS += user
TARGETS += sysctl
TARGETS += firmware
TARGETS += ftrace

TARGETS_HOTPLUG = cpu-hotplug
TARGETS_HOTPLUG += memory-hotplug
@@ -0,0 +1,7 @@
all:

run_tests:
	@/bin/sh ./ftracetest || echo "ftrace selftests: [FAIL]"

clean:
	rm -rf logs/*
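
With this Makefile in place the suite hooks into the common kselftest
machinery, so it can be driven through make or run directly. Assuming the
usual kselftest "run_tests" target and a kernel source tree, an invocation
should look like this (the suite itself requires root):

  # via the kselftest framework
  make -C tools/testing/selftests TARGETS=ftrace run_tests
  # or directly
  cd tools/testing/selftests/ftrace && sudo ./ftracetest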
@@ -0,0 +1,82 @@
Linux Ftrace Testcases

This is a collection of testcases for the ftrace tracing feature in the
Linux kernel. Since ftrace exports its interfaces via debugfs, shell
scripts are all we need for testing. Feel free to add new test cases.

Running the ftrace testcases
============================

First of all, you need root privileges to run this script.
To run all testcases:

  $ sudo ./ftracetest

To run specific testcases:

  # ./ftracetest test.d/basic3.tc

You can also run all testcases under a given directory:

  # ./ftracetest test.d/kprobe/

Contributing new testcases
==========================

Copy test.d/template to your testcase (whose filename must have a .tc
extension) and rewrite the test description line; a minimal example
follows this list.

* The working directory of the script is <debugfs>/tracing/.

* Take care with side effects, as the tests are run with root privilege.

* A test should not run for a long period of time (more than 1 min.);
  these are meant to be unit tests.

* You can add a directory for your testcases under test.d/ if needed.

* The test cases should run on dash (busybox shell) for testing on
  minimal cross-build environments.

* Note that the tests are run with the "set -e" (errexit) option. If any
  command fails, the test is terminated immediately.

* Instead of plain pass or fail, a test can report special result codes
  by calling exit_unresolved, exit_untested, exit_unsupported or
  exit_xfail.
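
For illustration, a testcase derived from the template might look like
this sketch (the event name "myevent" and the probe point do_fork are
borrowed from the kprobe testcases in this series; substitute whatever
your test actually exercises):

  #!/bin/sh
  # description: Kprobe dynamic event - minimal example
  [ -f kprobe_events ] || exit_unsupported # kprobes may be configured out
  echo 'p:myevent do_fork' > kprobe_events # add a probe event
  test -d events/kprobes/myevent           # it must appear under events/
  echo > kprobe_events                     # clean up all probe events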

Result code
===========

Ftracetest supports the following result codes.

* PASS: The test succeeded as expected. A test which exits with 0 is
  counted as a passed test.

* FAIL: The test failed, but was expected to succeed. A test which exits
  with a nonzero status is counted as a failed test.

* UNRESOLVED: The test produced unclear or intermediate results; for
  example, the test was interrupted, depends on a previous test which
  failed, or was set up incorrectly. A test in such a situation must
  call exit_unresolved.

* UNTESTED: The test was not run; it is currently just a placeholder.
  In this case, the test must call exit_untested.

* UNSUPPORTED: The test failed because a required feature is missing.
  In this case, the test must call exit_unsupported.

* XFAIL: The test failed, and was expected to fail.
  To return XFAIL, call exit_xfail from the test.

There are sample test scripts for these result codes under samples/.
You can run the samples as below:

  # ./ftracetest samples/

TODO
====

* Fancy colored output :)
@@ -0,0 +1,253 @@
#!/bin/sh

# ftracetest - Ftrace test shell scripts
#
# Copyright (C) Hitachi Ltd., 2014
# Written by Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
#
# Released under the terms of the GPL v2.

usage() { # errno [message]
  [ "$2" ] && echo $2
  echo "Usage: ftracetest [options] [testcase(s)] [testcase-directory(s)]"
  echo " Options:"
  echo "  -h|--help  Show help message"
  echo "  -k|--keep  Keep passed test logs"
  echo "  -d|--debug Debug mode (trace all shell commands)"
  exit $1
}

errexit() { # message
  echo "Error: $1" 1>&2
  exit 1
}

# Ensuring user privilege
if [ `id -u` -ne 0 ]; then
  errexit "this must be run by root user"
fi

# Utilities
absdir() { # file_path
  (cd `dirname $1`; pwd)
}

abspath() {
  echo `absdir $1`/`basename $1`
}

find_testcases() { # directory
  echo `find $1 -name \*.tc`
}

parse_opts() { # opts
  local OPT_TEST_CASES=
  local OPT_TEST_DIR=

  while [ "$1" ]; do
    case "$1" in
    --help|-h)
      usage 0
    ;;
    --keep|-k)
      KEEP_LOG=1
      shift 1
    ;;
    --debug|-d)
      DEBUG=1
      shift 1
    ;;
    *.tc)
      if [ -f "$1" ]; then
        OPT_TEST_CASES="$OPT_TEST_CASES `abspath $1`"
        shift 1
      else
        usage 1 "$1 is not a testcase"
      fi
    ;;
    *)
      if [ -d "$1" ]; then
        OPT_TEST_DIR=`abspath $1`
        OPT_TEST_CASES="$OPT_TEST_CASES `find_testcases $OPT_TEST_DIR`"
        shift 1
      else
        usage 1 "Invalid option ($1)"
      fi
    ;;
    esac
  done
  if [ "$OPT_TEST_CASES" ]; then
    TEST_CASES=$OPT_TEST_CASES
  fi
}

# Parameters
DEBUGFS_DIR=`grep debugfs /proc/mounts | cut -f2 -d' '`
TRACING_DIR=$DEBUGFS_DIR/tracing
TOP_DIR=`absdir $0`
TEST_DIR=$TOP_DIR/test.d
TEST_CASES=`find_testcases $TEST_DIR`
LOG_DIR=$TOP_DIR/logs/`date +%Y%m%d-%H%M%S`/
KEEP_LOG=0
DEBUG=0
# Parse command-line options
parse_opts $*

[ $DEBUG -ne 0 ] && set -x

# Verify parameters
if [ -z "$DEBUGFS_DIR" -o ! -d "$TRACING_DIR" ]; then
  errexit "No ftrace directory found"
fi

# Preparing logs
LOG_FILE=$LOG_DIR/ftracetest.log
mkdir -p $LOG_DIR || errexit "Failed to make a log directory: $LOG_DIR"
date > $LOG_FILE
prlog() { # messages
  echo "$@" | tee -a $LOG_FILE
}
catlog() { # file
  cat $1 | tee -a $LOG_FILE
}
prlog "=== Ftrace unit tests ==="


# Testcase management
# Test result codes - Dejagnu extended code
PASS=0        # The test succeeded.
FAIL=1        # The test failed, but was expected to succeed.
UNRESOLVED=2  # The test produced indeterminate results. (e.g. interrupted)
UNTESTED=3    # The test was not run, currently just a placeholder.
UNSUPPORTED=4 # The test failed because of lack of feature.
XFAIL=5       # The test failed, and was expected to fail.

# Accumulations
PASSED_CASES=
FAILED_CASES=
UNRESOLVED_CASES=
UNTESTED_CASES=
UNSUPPORTED_CASES=
XFAILED_CASES=
UNDEFINED_CASES=
TOTAL_RESULT=0

CASENO=0
testcase() { # testfile
  CASENO=$((CASENO+1))
  prlog -n "[$CASENO]"`grep "^#[ \t]*description:" $1 | cut -f2 -d:`
}

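# Decide the final result from the subshell's exit status ($1) and any
# result code reported via signal ($2): if no code was signalled, a
# nonzero exit status means FAIL; otherwise the signalled code wins.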
eval_result() { # retval sigval
  local retval=$2
  if [ $2 -eq 0 ]; then
    test $1 -ne 0 && retval=$FAIL
  fi
  case $retval in
  $PASS)
    prlog " [PASS]"
    PASSED_CASES="$PASSED_CASES $CASENO"
    return 0
  ;;
  $FAIL)
    prlog " [FAIL]"
    FAILED_CASES="$FAILED_CASES $CASENO"
    return 1 # this is a bug.
  ;;
  $UNRESOLVED)
    prlog " [UNRESOLVED]"
    UNRESOLVED_CASES="$UNRESOLVED_CASES $CASENO"
    return 1 # this is a kind of bug.. something happened.
  ;;
  $UNTESTED)
    prlog " [UNTESTED]"
    UNTESTED_CASES="$UNTESTED_CASES $CASENO"
    return 0
  ;;
  $UNSUPPORTED)
    prlog " [UNSUPPORTED]"
    UNSUPPORTED_CASES="$UNSUPPORTED_CASES $CASENO"
    return 1 # this is not a bug, but the result should be reported.
  ;;
  $XFAIL)
    prlog " [XFAIL]"
    XFAILED_CASES="$XFAILED_CASES $CASENO"
    return 0
  ;;
  *)
    prlog " [UNDEFINED]"
    UNDEFINED_CASES="$UNDEFINED_CASES $CASENO"
    return 1 # this must be a test bug
  ;;
  esac
}

# Signal handling for result codes
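# Each test runs in a subshell (see run_test below), so it cannot set
# this shell's variables directly. The exit_* helpers instead send a
# realtime signal to the parent shell ($SIG_PID); the matching trap
# records the special result code in SIG_RESULT, and the subshell then
# exits 0 so "set -e" does not misreport the test as a plain failure.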
SIG_RESULT=
SIG_BASE=36 # Use realtime signals
SIG_PID=$$

SIG_UNRESOLVED=$((SIG_BASE + UNRESOLVED))
exit_unresolved () {
  kill -s $SIG_UNRESOLVED $SIG_PID
  exit 0
}
trap 'SIG_RESULT=$UNRESOLVED' $SIG_UNRESOLVED

SIG_UNTESTED=$((SIG_BASE + UNTESTED))
exit_untested () {
  kill -s $SIG_UNTESTED $SIG_PID
  exit 0
}
trap 'SIG_RESULT=$UNTESTED' $SIG_UNTESTED

SIG_UNSUPPORTED=$((SIG_BASE + UNSUPPORTED))
exit_unsupported () {
  kill -s $SIG_UNSUPPORTED $SIG_PID
  exit 0
}
trap 'SIG_RESULT=$UNSUPPORTED' $SIG_UNSUPPORTED

SIG_XFAIL=$((SIG_BASE + XFAIL))
exit_xfail () {
  kill -s $SIG_XFAIL $SIG_PID
  exit 0
}
trap 'SIG_RESULT=$XFAIL' $SIG_XFAIL

# Run one test case
run_test() { # testfile
  local testname=`basename $1`
  local testlog=`mktemp --tmpdir=$LOG_DIR ${testname}-XXXXXX.log`
  testcase $1
  echo "execute: "$1 > $testlog
  SIG_RESULT=0
  # setup PID and PPID, $$ is not updated.
  (cd $TRACING_DIR; read PID _ < /proc/self/stat ;
   set -e; set -x; . $1) >> $testlog 2>&1
  eval_result $? $SIG_RESULT
  if [ $? -eq 0 ]; then
    # Remove test log if the test was done as it was expected.
    [ $KEEP_LOG -eq 0 ] && rm $testlog
  else
    catlog $testlog
    TOTAL_RESULT=1
  fi
}

# Main loop
for t in $TEST_CASES; do
  run_test $t
done

prlog ""
prlog "# of passed: " `echo $PASSED_CASES | wc -w`
prlog "# of failed: " `echo $FAILED_CASES | wc -w`
prlog "# of unresolved: " `echo $UNRESOLVED_CASES | wc -w`
prlog "# of untested: " `echo $UNTESTED_CASES | wc -w`
prlog "# of unsupported: " `echo $UNSUPPORTED_CASES | wc -w`
prlog "# of xfailed: " `echo $XFAILED_CASES | wc -w`
prlog "# of undefined(test bug): " `echo $UNDEFINED_CASES | wc -w`

# if no error, return 0
exit $TOTAL_RESULT
@@ -0,0 +1,4 @@
#!/bin/sh
# description: failure-case example
cat non-exist-file
echo "this is not executed"
@@ -0,0 +1,3 @@
#!/bin/sh
# description: pass-case example
return 0
@@ -0,0 +1,4 @@
#!/bin/sh
# description: unresolved-case example
trap exit_unresolved INT
kill -INT $PID
@@ -0,0 +1,3 @@
#!/bin/sh
# description: unsupported-case example
exit_unsupported
@@ -0,0 +1,3 @@
#!/bin/sh
# description: untested-case example
exit_untested
@@ -0,0 +1,3 @@
#!/bin/sh
# description: xfail-case example
cat non-exist-file || exit_xfail
@@ -0,0 +1,3 @@
#!/bin/sh
# description: Basic trace file check
test -f README -a -f trace -a -f tracing_on -a -f trace_pipe
@@ -0,0 +1,7 @@
#!/bin/sh
# description: Basic test for tracers
test -f available_tracers
for t in `cat available_tracers`; do
  echo $t > current_tracer
done
echo nop > current_tracer
@@ -0,0 +1,8 @@
#!/bin/sh
# description: Basic trace clock test
test -f trace_clock
for c in `cat trace_clock | tr -d \[\]`; do
  echo $c > trace_clock
  grep '\['$c'\]' trace_clock
done
echo local > trace_clock
@@ -0,0 +1,11 @@
#!/bin/sh
# description: Kprobe dynamic event - adding and removing

[ -f kprobe_events ] || exit_unsupported # this is configurable

echo 0 > events/enable
echo > kprobe_events
echo p:myevent do_fork > kprobe_events
grep myevent kprobe_events
test -d events/kprobes/myevent
echo > kprobe_events
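
The kprobe testcases above and below all drive the kprobe_events interface
from <debugfs>/tracing. As a rough sketch of the flow being exercised
(event name and probe point taken from the test above): writing
"p:NAME SYMBOL [FETCHARGS]" adds a probe event, "r:NAME SYMBOL [FETCHARGS]"
adds a return probe, and "-:NAME" removes a single event; fetch arguments
such as $stack, $stack0 and $retval record values when the probe fires.

  echo 'p:myevent do_fork' > kprobe_events # define the event
  echo 1 > events/kprobes/myevent/enable   # start recording hits
  echo 0 > events/kprobes/myevent/enable   # stop recording
  echo '-:myevent' >> kprobe_events        # remove only this event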
@@ -0,0 +1,13 @@
#!/bin/sh
# description: Kprobe dynamic event - busy event check

[ -f kprobe_events ] || exit_unsupported

echo 0 > events/enable
echo > kprobe_events
echo p:myevent do_fork > kprobe_events
test -d events/kprobes/myevent
echo 1 > events/kprobes/myevent/enable
echo > kprobe_events && exit 1 # this must fail
echo 0 > events/kprobes/myevent/enable
echo > kprobe_events # this must succeed
@@ -0,0 +1,16 @@
#!/bin/sh
# description: Kprobe dynamic event with arguments

[ -f kprobe_events ] || exit_unsupported # this is configurable

echo 0 > events/enable
echo > kprobe_events
echo 'p:testprobe do_fork $stack $stack0 +0($stack)' > kprobe_events
grep testprobe kprobe_events
test -d events/kprobes/testprobe
echo 1 > events/kprobes/testprobe/enable
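# spawning a subshell forks the shell, which invokes do_fork in the
# kernel and so fires the probe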
( echo "forked")
echo 0 > events/kprobes/testprobe/enable
echo "-:testprobe" >> kprobe_events
test -d events/kprobes/testprobe && exit 1 || exit 0

@@ -0,0 +1,15 @@
#!/bin/sh
# description: Kretprobe dynamic event with arguments

[ -f kprobe_events ] || exit_unsupported # this is configurable

echo 0 > events/enable
echo > kprobe_events
echo 'r:testprobe2 do_fork $retval' > kprobe_events
grep testprobe2 kprobe_events
test -d events/kprobes/testprobe2
echo 1 > events/kprobes/testprobe2/enable
( echo "forked")
echo 0 > events/kprobes/testprobe2/enable
echo '-:testprobe2' >> kprobe_events
test -d events/kprobes/testprobe2 && exit 1 || exit 0
@@ -0,0 +1,9 @@
#!/bin/sh
# description: %HERE DESCRIBE WHAT THIS DOES%
# you have to add a ".tc" extension to your testcase file
# Note that all tests are run with the "errexit" option.

exit 0 # Return 0 if the test passed; otherwise return !0
# If the test could not run because of lack of feature, call exit_unsupported
# If the test returned unclear results, call exit_unresolved
# If the test is a dummy, or a placeholder, call exit_untested