CONFIG_RCU_TRACE debugfs Files and Formats
The rcutree and rcutiny implementations of RCU provide debugfs trace
output that summarizes counters and state. This information is useful for
debugging RCU itself, and can sometimes also help to debug abuses of RCU.
The following sections describe the debugfs files and formats, first
for rcutree and next for rcutiny.
CONFIG_TREE_RCU and CONFIG_PREEMPT_RCU debugfs Files and Formats
These implementations of RCU provide several debugfs directories under the
top-level directory "rcu":
rcu/rcu_bh
rcu/rcu_preempt
rcu/rcu_sched
Each directory contains files for the corresponding flavor of RCU.
Note that rcu/rcu_preempt is only present for CONFIG_PREEMPT_RCU.
For CONFIG_TREE_RCU, the RCU flavor maps onto the RCU-sched flavor,
so that activity for both appears in rcu/rcu_sched.
In addition, the following file appears in the top-level directory:
rcu/rcutorture. This file displays rcutorture test progress. The output
of "cat rcu/rcutorture" looks as follows:
rcutorture test sequence: 0 (test in progress)
rcutorture update version number: 615
The first line shows the number of rcutorture tests that have completed
since boot. If a test is currently running, the "(test in progress)"
string will appear as shown above. The second line shows the number of
update cycles that the current test has started, or zero if there is
no test in progress.
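As a quick illustration, this file can be read directly from user space once debugfs is mounted. The sketch below is a minimal example, assuming the conventional /sys/kernel/debug mount point (the path is an assumption about the mount location, not part of the trace format):

# Minimal sketch: report rcutorture progress from debugfs.
# Assumes debugfs is mounted at /sys/kernel/debug (an assumption; adjust as needed).
RCUTORTURE = "/sys/kernel/debug/rcu/rcutorture"

def rcutorture_status(path=RCUTORTURE):
    with open(path) as f:
        first, second = f.read().splitlines()[:2]
    # First line: "rcutorture test sequence: N (test in progress)" while running.
    completed = int(first.split(":")[1].split()[0])
    running = "(test in progress)" in first
    # Second line: "rcutorture update version number: M"
    updates = int(second.split(":")[1])
    return completed, updates, running

if __name__ == "__main__":
    completed, updates, running = rcutorture_status()
    print(f"{completed} tests completed, {updates} update cycles, running={running}")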
Within each flavor directory (rcu/rcu_bh, rcu/rcu_sched, and possibly
also rcu/rcu_preempt) the following files will be present:
rcudata:
Displays fields in struct rcu_data.
rcuexp:
Displays statistics for expedited grace periods.
rcugp:
Displays grace-period counters.
rcuhier:
Displays the struct rcu_node hierarchy.
rcu_pending:
Displays counts of the reasons rcu_pending() decided that RCU had
work to do.
rcuboost:
Displays RCU boosting statistics. Only present if
CONFIG_RCU_BOOST=y.
The output of "cat rcu/rcu_preempt/rcudata" looks as follows:
0!c=30455 g=30456 cnq=1/0:1 dt=126535/140000000000000/0 df=2002 of=4 ql=0/0 qs=N... b=10 ci=74572 nci=0 co=1131 ca=716
1!c=30719 g=30720 cnq=1/0:0 dt=132007/140000000000000/0 df=1874 of=10 ql=0/0 qs=N... b=10 ci=123209 nci=0 co=685 ca=982
2!c=30150 g=30151 cnq=1/1:1 dt=138537/140000000000000/0 df=1707 of=8 ql=0/0 qs=N... b=10 ci=80132 nci=0 co=1328 ca=1458
3 c=31249 g=31250 cnq=1/1:0 dt=107255/140000000000000/0 df=1749 of=6 ql=0/450 qs=NRW. b=10 ci=151700 nci=0 co=509 ca=622
4!c=29502 g=29503 cnq=1/0:1 dt=83647/140000000000000/0 df=965 of=5 ql=0/0 qs=N... b=10 ci=65643 nci=0 co=1373 ca=1521
5 c=31201 g=31202 cnq=1/0:1 dt=70422/0/0 df=535 of=7 ql=0/0 qs=.... b=10 ci=58500 nci=0 co=764 ca=698
6!c=30253 g=30254 cnq=1/0:1 dt=95363/140000000000000/0 df=780 of=5 ql=0/0 qs=N... b=10 ci=100607 nci=0 co=1414 ca=1353
7 c=31178 g=31178 cnq=1/0:0 dt=91536/0/0 df=547 of=4 ql=0/0 qs=.... b=10 ci=109819 nci=0 co=1115 ca=969
This file has one line per CPU, or eight for this 8-CPU system.
The fields are as follows:
o The number at the beginning of each line is the CPU number.
CPU numbers followed by an exclamation mark are offline,
but have been online at least once since boot. There will be
no output for CPUs that have never been online, which can be
a good thing in the surprisingly common case where NR_CPUS is
substantially larger than the number of actual CPUs.
o "c" is the count of grace periods that this CPU believes have
completed. Offlined CPUs and CPUs in dynticks idle mode may lag
quite a ways behind, for example, CPU 4 under "rcu_sched" above,
which has been offline through 16 RCU grace periods. It is not
unusual to see offline CPUs lagging by thousands of grace periods.
Note that although the grace-period number is an unsigned long,
it is printed out as a signed long to allow more human-friendly
representation near boot time.
o "g" is the count of grace periods that this CPU believes have
started. Again, offlined CPUs and CPUs in dynticks idle mode
may lag behind. If the "c" and "g" values are equal, this CPU
has already reported a quiescent state for the last RCU grace
period that it is aware of, otherwise, the CPU believes that it
owes RCU a quiescent state.
o "pq" indicates that this CPU has passed through a quiescent state
for the current grace period. It is possible for "pq" to be
"1" and "c" different than "g", which indicates that although
the CPU has passed through a quiescent state, either (1) this
CPU has not yet reported that fact, (2) some other CPU has not
yet reported for this grace period, or (3) both.
o "qp" indicates that RCU still expects a quiescent state from
this CPU. Offlined CPUs and CPUs in dyntick idle mode might
well have qp=1, which is OK: RCU is still ignoring them.
o "dt" is the current value of the dyntick counter that is incremented
when entering or leaving idle, either due to a context switch or
due to an interrupt. This number is even if the CPU is in idle
from RCU's viewpoint and odd otherwise. The number after the
first "/" is the interrupt nesting depth when in idle state,
or a large number added to the interrupt-nesting depth when
running a non-idle task. Some architectures do not accurately
count interrupt nesting when running in non-idle kernel context,
which can result in interesting anomalies such as negative
interrupt-nesting levels. The number after the second "/"
is the NMI nesting depth.
o "df" is the number of times that some other CPU has forced a
quiescent state on behalf of this CPU due to this CPU being in
idle state.
o "of" is the number of times that some other CPU has forced a
quiescent state on behalf of this CPU due to this CPU being
offline. In a perfect world, this might never happen, but it
turns out that offlining and onlining a CPU can take several grace
periods, and so there is likely to be an extended period of time
when RCU believes that the CPU is online when it really is not.
Please note that erring in the other direction (RCU believing a
CPU is offline when it is really alive and kicking) is a fatal
error, so it makes sense to err conservatively.
o "ql" is the number of RCU callbacks currently residing on
this CPU. The first number is the number of "lazy" callbacks
that are known to RCU to only be freeing memory, and the number
after the "/" is the total number of callbacks, lazy or not.
These counters count callbacks regardless of what phase of
grace-period processing that they are in (new, waiting for
grace period to start, waiting for grace period to end, ready
to invoke).
o "qs" gives an indication of the state of the callback queue
with four characters:
"N" Indicates that there are callbacks queued that are not
ready to be handled by the next grace period, and thus
will be handled by the grace period following the next
one.
"R" Indicates that there are callbacks queued that are
ready to be handled by the next grace period.
"W" Indicates that there are callbacks queued that are
waiting on the current grace period.
"D" Indicates that there are callbacks queued that have
already been handled by a prior grace period, and are
thus waiting to be invoked. Note that callbacks in
the process of being invoked are not counted here.
Callbacks in the process of being invoked are those
that have been removed from the rcu_data structures
queues by rcu_do_batch(), but which have not yet been
invoked.
If there are no callbacks in a given one of the above states,
the corresponding character is replaced by ".".
o "b" is the batch limit for this CPU. If more than this number
of RCU callbacks is ready to invoke, then the remainder will
be deferred.
o "ci" is the number of RCU callbacks that have been invoked for
this CPU. Note that ci+nci+ql is the number of callbacks that have
been registered in absence of CPU-hotplug activity.
o "nci" is the number of RCU callbacks that have been offloaded from
this CPU. This will always be zero unless the kernel was built
with CONFIG_RCU_NOCB_CPU=y and the "rcu_nocbs=" kernel boot
parameter was specified.
o "co" is the number of RCU callbacks that have been orphaned due to
this CPU going offline. These orphaned callbacks have been moved
to an arbitrarily chosen online CPU.
o "ca" is the number of RCU callbacks that have been adopted by this
CPU due to other CPUs going offline. Note that ci+co-ca+ql is
the number of RCU callbacks registered on this CPU.
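The per-CPU lines are plain name=value pairs after the CPU number, so they are straightforward to post-process. The sketch below is an illustrative (not official) parser for one of the sample lines shown above; it applies two facts from the field descriptions: the first "dt" number is even when the CPU is idle from RCU's viewpoint, and ci+co-ca+ql gives the number of callbacks registered on the CPU. Field names and layout vary across kernel versions, so treat this purely as an example.

import re

# One of the sample rcu/rcu_preempt/rcudata lines shown above.
SAMPLE = "7 c=31178 g=31178 cnq=1/0:0 dt=91536/0/0 df=547 of=4 ql=0/0 qs=.... b=10 ci=109819 nci=0 co=1115 ca=969"

def parse_rcudata_line(line):
    """Split a per-CPU rcudata line into (cpu, offline, fields) -- illustrative only."""
    m = re.match(r"\s*(\d+)(!?)\s*(.*)", line)
    cpu, offline = int(m.group(1)), m.group(2) == "!"
    fields = dict(re.findall(r"(\w+)=(\S+)", m.group(3)))
    return cpu, offline, fields

cpu, offline, f = parse_rcudata_line(SAMPLE)
idle = int(f["dt"].split("/")[0]) % 2 == 0          # even dyntick count => idle to RCU
lazy, total = (int(x) for x in f["ql"].split("/"))  # lazy/total queued callbacks
registered = int(f["ci"]) + int(f["co"]) - int(f["ca"]) + total
print(f"CPU {cpu}: offline={offline} idle={idle} "
      f"callbacks registered on this CPU={registered}")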
Kernels compiled with CONFIG_RCU_BOOST=y display the following from
/debug/rcu/rcu_preempt/rcudata:
0!c=12865 g=12866 cnq=1/0:1 dt=83113/140000000000000/0 df=288 of=11 ql=0/0 qs=N... kt=0/O ktl=944 b=10 ci=60709 nci=0 co=748 ca=871
1 c=14407 g=14408 cnq=1/0:0 dt=100679/140000000000000/0 df=378 of=7 ql=0/119 qs=NRW. kt=0/W ktl=9b6 b=10 ci=109740 nci=0 co=589 ca=485
2 c=14407 g=14408 cnq=1/0:0 dt=105486/0/0 df=90 of=9 ql=0/89 qs=NRW. kt=0/W ktl=c0c b=10 ci=83113 nci=0 co=533 ca=490
3 c=14407 g=14408 cnq=1/0:0 dt=107138/0/0 df=142 of=8 ql=0/188 qs=NRW. kt=0/W ktl=b96 b=10 ci=121114 nci=0 co=426 ca=290
4 c=14405 g=14406 cnq=1/0:1 dt=50238/0/0 df=706 of=7 ql=0/0 qs=.... kt=0/W ktl=812 b=10 ci=34929 nci=0 co=643 ca=114
5!c=14168 g=14169 cnq=1/0:0 dt=45465/140000000000000/0 df=161 of=11 ql=0/0 qs=N... kt=0/O ktl=b4d b=10 ci=47712 nci=0 co=677 ca=722
6 c=14404 g=14405 cnq=1/0:0 dt=59454/0/0 df=94 of=6 ql=0/0 qs=.... kt=0/W ktl=e57 b=10 ci=55597 nci=0 co=701 ca=811
7 c=14407 g=14408 cnq=1/0:1 dt=68850/0/0 df=31 of=8 ql=0/0 qs=.... kt=0/W ktl=14bd b=10 ci=77475 nci=0 co=508 ca=1042
This is similar to the output discussed above, but contains the following
additional fields:
o "kt" is the per-CPU kernel-thread state. The digit preceding
the first slash is zero if there is no work pending and 1
otherwise. The character between the first pair of slashes is
as follows:
"S" The kernel thread is stopped, in other words, all
CPUs corresponding to this rcu_node structure are
offline.
"R" The kernel thread is running.
"W" The kernel thread is waiting because there is no work
for it to do.
"O" The kernel thread is waiting because it has been
forced off of its designated CPU or because its
->cpus_allowed mask permits it to run on other than
its designated CPU.
"Y" The kernel thread is yielding to avoid hogging CPU.
"?" Unknown value, indicates a bug.
The number after the final slash is the CPU that the kthread
is actually running on.
This field is displayed only for CONFIG_RCU_BOOST kernels (see the
decoding sketch following this list).
o "ktl" is the low-order 16 bits (in hexadecimal) of the count of
the number of times that this CPU's per-CPU kthread has gone
through its loop servicing invoke_rcu_cpu_kthread() requests.
This field is displayed only for CONFIG_RCU_BOOST kernels.
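As a small aid to reading the "kt" field, the sketch below splits it into its parts and expands the state character using the legend above. The sample values ("0/W", "0/O") show only two components, so the trailing CPU number described above is treated as optional here; that handling is an assumption made for illustration.

# Expand the "kt" per-CPU kernel-thread state field (illustrative only).
KT_STATES = {
    "S": "stopped (all CPUs for this rcu_node offline)",
    "R": "running",
    "W": "waiting (no work to do)",
    "O": "forced off its designated CPU (or allowed elsewhere)",
    "Y": "yielding to avoid hogging CPU",
    "?": "unknown value, indicates a bug",
}

def decode_kt(kt):
    parts = kt.split("/")              # e.g. "0/W" or "1/R/3"
    work_pending = parts[0] == "1"
    state = KT_STATES.get(parts[1], "unknown")
    cpu = int(parts[2]) if len(parts) > 2 else None   # CPU shown only in some formats
    return work_pending, state, cpu

print(decode_kt("0/W"))   # -> (False, 'waiting (no work to do)', None)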
The output of "cat rcu/rcu_preempt/rcuexp" looks as follows:
s=21872 wd1=0 wd2=0 wd3=5 enq=0 sc=21872
These fields are as follows:
o "s" is the sequence number, with an odd number indicating that
an expedited grace period is in progress.
o "wd1", "wd2", and "wd3" are the number of times that an attempt
to start an expedited grace period found that someone else had
completed an expedited grace period that satisfies the attempted
request. "Our work is done."
o "enq" is the number of quiescent states still outstanding.
o "sc" is the number of times that the attempt to start a
new expedited grace period succeeded.
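Since the sequence number follows the even/odd convention noted above, a single snapshot of this line is enough to tell whether an expedited grace period is underway. A minimal sketch using the sample line shown above:

# Parse the rcuexp line shown above and apply the even/odd rule for "s".
SAMPLE = "s=21872 wd1=0 wd2=0 wd3=5 enq=0 sc=21872"
fields = dict(item.split("=") for item in SAMPLE.split())
if int(fields["s"]) % 2:
    print("an expedited grace period is in progress")
else:
    print("no expedited grace period is in progress")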
The output of "cat rcu/rcu_preempt/rcugp" looks as follows:
completed=31249 gpnum=31250 age=1 max=18
These fields are taken from the rcu_state structure, and are as follows:
o "completed" is the number of grace periods that have completed.
It is comparable to the "c" field from rcu/rcudata in that a
CPU whose "c" field matches the value of "completed" is aware
that the corresponding RCU grace period has completed.
o "gpnum" is the number of grace periods that have started. It is
similarly comparable to the "g" field from rcu/rcudata in that
a CPU whose "g" field matches the value of "gpnum" is aware that
the corresponding RCU grace period has started.
If these two fields are equal, then there is no grace period
in progress, in other words, RCU is idle. On the other hand,
if the two fields differ (as they do above), then an RCU grace
period is in progress (see the sketch after this list).
o "age" is the number of jiffies that the current grace period
has extended for, or zero if there is no grace period currently
in effect.
o "max" is the age in jiffies of the longest-duration grace period
thus far.
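Putting the four fields together, a grace period is in progress exactly when "completed" and "gpnum" differ. A small sketch applying that rule to the sample line above:

# Parse the rcugp line shown above; completed != gpnum means a GP is in progress.
SAMPLE = "completed=31249 gpnum=31250 age=1 max=18"
f = dict(item.split("=") for item in SAMPLE.split())
if int(f["completed"]) != int(f["gpnum"]):
    print(f"grace period {f['gpnum']} has been in progress for {f['age']} jiffies "
          f"(longest so far: {f['max']} jiffies)")
else:
    print("RCU is idle: no grace period in progress")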
The output of "cat rcu/rcu_preempt/rcuhier" looks as follows:
c=14407 g=14408 s=0 jfq=2 j=c863 nfqs=12040/nfqsng=0(12040) fqlh=1051 oqlen=0/0
3/3 ..>. 0:7 ^0
e/e ..>. 0:3 ^0 d/d ..>. 4:7 ^1
The fields are as follows:
o "c" is exactly the same as "completed" under rcu/rcu_preempt/rcugp.
o "g" is exactly the same as "gpnum" under rcu/rcu_preempt/rcugp.
o "s" is the current state of the force_quiescent_state()
state machine.
o "jfq" is the number of jiffies remaining for this grace period
before force_quiescent_state() is invoked to help push things
along. Note that CPUs in idle mode throughout the grace period
will not report on their own, but rather must be checked by some
other CPU via force_quiescent_state().
o "j" is the low-order four hex digits of the jiffies counter.
Yes, Paul did run into a number of problems that turned out to
be due to the jiffies counter no longer counting. Why do you ask?
o "nfqs" is the number of calls to force_quiescent_state() since
boot.
o "nfqsng" is the number of useless calls to force_quiescent_state(),
where there wasn't actually a grace period active. This can
no longer happen due to grace-period processing being pushed
into a kthread. The number in parentheses is the difference
between "nfqs" and "nfqsng", or the number of times that
force_quiescent_state() actually did some real work.
o "fqlh" is the number of calls to force_quiescent_state() that
exited immediately (without even being counted in nfqs above)
due to contention on ->fqslock.
o Each element of the form "3/3 ..>. 0:7 ^0" represents one rcu_node
structure. Each line represents one level of the hierarchy,
from root to leaves. It is best to think of the rcu_data
structures as forming yet another level after the leaves.
Note that there might be either one, two, three, or even four
levels of rcu_node structures, depending on the relationship
between CONFIG_RCU_FANOUT, CONFIG_RCU_FANOUT_LEAF (possibly
adjusted using the rcu_fanout_leaf kernel boot parameter), and
CONFIG_NR_CPUS (possibly adjusted using the nr_cpu_ids count of
possible CPUs for the booting hardware).
o The numbers separated by the "/" are the qsmask followed
by the qsmaskinit. The qsmask will have one bit
set for each entity in the next lower level that has
not yet checked in for the current grace period ("e"
indicating CPUs 5, 6, and 7 in the example above).
The qsmaskinit will have one bit for each entity that is
currently expected to check in during each grace period.
The value of qsmaskinit is assigned to that of qsmask
at the beginning of each grace period.
o The characters separated by the ">" indicate the state
of the blocked-tasks lists. A "G" preceding the ">"
indicates that at least one task blocked in an RCU
read-side critical section blocks the current grace
period, while a "E" preceding the ">" indicates that
at least one task blocked in an RCU read-side critical
section blocks the current expedited grace period.
A "T" character following the ">" indicates that at
least one task is blocked within an RCU read-side
critical section, regardless of whether any current
grace period (expedited or normal) is inconvenienced.
A "." character appears if the corresponding condition
does not hold, so that "..>." indicates that no tasks
are blocked. In contrast, "GE>T" indicates maximal
inconvenience from blocked tasks. CONFIG_TREE_RCU
builds of the kernel will always show "..>.".
o The numbers separated by the ":" are the range of CPUs
served by this struct rcu_node. This can be helpful
in working out how the hierarchy is wired together.
For example, the root rcu_node structure shown above
has "0:7", indicating that it covers CPUs 0 through 7.
o The number after the "^" indicates the bit in the
next higher level rcu_node structure that this rcu_node
structure corresponds to. For example, the "d/d ..>. 4:7
^1" has a "1" in this position, indicating that it
corresponds to the "1" bit in the "3" shown in the
"3/3 ..>. 0:7 ^0" entry on the next level up.
The output of "cat rcu/rcu_sched/rcu_pending" looks as follows:
0!np=26111 qsp=29 rpq=5386 cbr=1 cng=570 gpc=3674 gps=577 nn=15903 ndw=0
1!np=28913 qsp=35 rpq=6097 cbr=1 cng=448 gpc=3700 gps=554 nn=18113 ndw=0
2!np=32740 qsp=37 rpq=6202 cbr=0 cng=476 gpc=4627 gps=546 nn=20889 ndw=0
3 np=23679 qsp=22 rpq=5044 cbr=1 cng=415 gpc=3403 gps=347 nn=14469 ndw=0
4!np=30714 qsp=4 rpq=5574 cbr=0 cng=528 gpc=3931 gps=639 nn=20042 ndw=0
5 np=28910 qsp=2 rpq=5246 cbr=0 cng=428 gpc=4105 gps=709 nn=18422 ndw=0
6!np=38648 qsp=5 rpq=7076 cbr=0 cng=840 gpc=4072 gps=961 nn=25699 ndw=0
7 np=37275 qsp=2 rpq=6873 cbr=0 cng=868 gpc=3416 gps=971 nn=25147 ndw=0
The fields are as follows:
o The leading number is the CPU number, with "!" indicating
an offline CPU.
o "np" is the number of times that __rcu_pending() has been invoked
for the corresponding flavor of RCU.
o "qsp" is the number of times that the RCU was waiting for a
quiescent state from this CPU.
o "rpq" is the number of times that the CPU had passed through
a quiescent state, but not yet reported it to RCU.
o "cbr" is the number of times that this CPU had RCU callbacks
that had passed through a grace period, and were thus ready
to be invoked.
o "cng" is the number of times that this CPU needed another
grace period while RCU was idle.
o "gpc" is the number of times that an old grace period had
completed, but this CPU was not yet aware of it.
o "gps" is the number of times that a new grace period had started,
but this CPU was not yet aware of it.
o "ndw" is the number of times that a wakeup of an rcuo
callback-offload kthread had to be deferred in order to avoid
deadlock.
o "nn" is the number of times that this CPU needed nothing.
The output of "cat rcu/rcuboost" looks as follows:
0:3 tasks=.... kt=W ntb=0 neb=0 nnb=0 j=c864 bt=c894
balk: nt=0 egt=4695 bt=0 nb=0 ny=56 nos=0
4:7 tasks=.... kt=W ntb=0 neb=0 nnb=0 j=c864 bt=c894
balk: nt=0 egt=6541 bt=0 nb=0 ny=126 nos=0
This information is output only for rcu_preempt. Each two-line entry
corresponds to a leaf rcu_node structure. The fields are as follows:
o "n:m" is the CPU-number range for the corresponding two-line
entry. In the sample output above, the first entry covers
CPUs zero through three and the second entry covers CPUs four
through seven.
o "tasks=TNEB" gives the state of the various segments of the
rnp->blocked_tasks list:
"T" This indicates that there are some tasks that blocked
while running on one of the corresponding CPUs while
in an RCU read-side critical section.
"N" This indicates that some of the blocked tasks are preventing
the current normal (non-expedited) grace period from
completing.
"E" This indicates that some of the blocked tasks are preventing
the current expedited grace period from completing.
"B" This indicates that some of the blocked tasks are in
need of RCU priority boosting.
Each character is replaced with "." if the corresponding
condition does not hold.
o "kt" is the state of the RCU priority-boosting kernel
thread associated with the corresponding rcu_node structure.
The state can be one of the following:
"S" The kernel thread is stopped, in other words, all
CPUs corresponding to this rcu_node structure are
offline.
"R" The kernel thread is running.
"W" The kernel thread is waiting because there is no work
for it to do.
"Y" The kernel thread is yielding to avoid hogging CPU.
"?" Unknown value, indicates a bug.
o "ntb" is the number of tasks boosted.
o "neb" is the number of tasks boosted in order to complete an
expedited grace period.
o "nnb" is the number of tasks boosted in order to complete a
normal (non-expedited) grace period. When boosting a task
that was blocking both an expedited and a normal grace period,
it is counted against the expedited total above.
o "j" is the low-order 16 bits of the jiffies counter in
hexadecimal.
o "bt" is the low-order 16 bits of the value that the jiffies
counter will have when we next start boosting, assuming that
the current grace period does not end beforehand. This is
also in hexadecimal.
o "balk: nt" counts the number of times we didn't boost (in
other words, we balked) even though it was time to boost because
there were no blocked tasks to boost. This situation occurs
when there is one blocked task on one rcu_node structure and
none on some other rcu_node structure.
o "egt" counts the number of times we balked because although
there were blocked tasks, none of them were blocking the
current grace period, whether expedited or otherwise.
o "bt" counts the number of times we balked because boosting
had already been initiated for the current grace period.
o "nb" counts the number of times we balked because there
was at least one task blocking the current non-expedited grace
period that never had blocked. If it is already running, it
just won't help to boost its priority!
o "ny" counts the number of times we balked because it was
not yet time to start boosting.
o "nos" counts the number of times we balked for other
reasons, e.g., the grace period ended first.
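Because "j" and "bt" are both the low-order 16 bits of the jiffies counter, the time remaining before boosting begins can be recovered with 16-bit wraparound arithmetic. The sketch below does this for the first sample entry above; treating the truncated counters modulo 2^16 is an assumption made for illustration.

# First rcuboost sample line from above (the "balk:" line is not needed here).
SAMPLE = "0:3 tasks=.... kt=W ntb=0 neb=0 nnb=0 j=c864 bt=c894"
fields = dict(item.split("=") for item in SAMPLE.split()[1:])   # skip the "0:3" range
j = int(fields["j"], 16)
bt = int(fields["bt"], 16)
until_boost = (bt - j) & 0xffff        # 16-bit wraparound, in jiffies
print(f"boosting starts in {until_boost} jiffies "
      "unless the current grace period ends first")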
CONFIG_TINY_RCU debugfs Files and Formats
This implementation of RCU provides a single debugfs file under the
top-level directory "rcu", namely rcu/rcudata, which displays fields in
rcu_bh_ctrlblk and rcu_sched_ctrlblk.
The output of "cat rcu/rcudata" is as follows:
rcu_sched: qlen: 0
rcu_bh: qlen: 0
This is split into rcu_sched and rcu_bh sections. The field is as
follows:
o "qlen" is the number of RCU callbacks currently waiting either
for an RCU grace period or waiting to be invoked. This is the
only field present for rcu_sched and rcu_bh, due to the
short-circuiting of grace periods in those two cases.