Commit graph

11 commits

Author SHA1 Message Date
Michal Hocko 93607e5a55 trace-vmscan-postprocess: sync with tracepoints updates
Both mm_vmscan_lru_shrink_active and mm_vmscan_lru_isolate have changed,
so the script needs to be updated to reflect those changes.

Link: http://lkml.kernel.org/r/20170105151737.GU21618@dhcp22.suse.cz
Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-02-22 16:41:29 -08:00
Andi Kleen 7f47d8cc03 x86, tracing, perf: Add trace point for MSR accesses
For debugging low-level code interacting with the CPU, it is often
useful to trace MSR reads/writes. This gives a concise summary of
PMU and other operations.

perf has an ad-hoc way to do this using trace_printk, but it's
somewhat limited (and also now spews ugly boot messages when enabled).

Instead define real trace points for all MSR accesses.

This adds three new trace points: read_msr, write_msr and rdpmc.

They also report whether the access faulted (if *_safe is used).

This allows filtering and triggering on specific MSR values, which
allows various more advanced debugging techniques.

All the values are well defined in the CPU documentation.

The trace can be post processed with
Documentation/trace/postprocess/decode_msr.py to add symbolic MSR
names to the trace.

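For illustration, a minimal stand-alone sketch of the same decoding idea
in Python, hedged as an assumption: this is not the actual decode_msr.py,
the two-entry MSR table is invented, and the printed event format
("read_msr: 1d9, value 0") is assumed from the tracepoint's output.

  import re

  # Invented mini-table; a real decoder would build this from
  # arch/x86/include/asm/msr-index.h.
  MSR_NAMES = {0x1d9: "MSR_IA32_DEBUGCTLMSR", 0xc1: "MSR_IA32_PERFCTR0"}

  def annotate(line):
      # read_msr/write_msr events print the MSR number in hex.
      m = re.search(r'\b(?:read|write)_msr:\s*([0-9a-f]+)', line)
      if not m:
          return line
      name = MSR_NAMES.get(int(m.group(1), 16))
      return line.rstrip() + ("  # " + name if name else "")

  print(annotate("perf-2687 [001] 49.001: read_msr: 1d9, value 0"))
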
I only added it to native MSR accesses in C, not paravirtualized ones or
those in entry*.S (which are not too interesting).

Originally the patch kit moved the MSRs out of line.  This uses an
alternative approach recommended by Steven Rostedt of only moving the
trace calls out of line, but open coding the access to the jump label.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Link: http://lkml.kernel.org/r/1449018060-1742-3-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-12-06 12:56:10 +01:00
Chen Yucong 2c51856c9b mm: trace-vmscan-postprocess.pl: report the number of file/anon pages respectively
Until now, the reporting from trace-vmscan-postprocess.pl has not been
very useful because we cannot directly use this script to check the
file/anon ratio of scanning.  This patch aims to report separately the
number of file/anon pages that were scanned/reclaimed by kswapd or
direct reclaim.  Sample output usually looks something like the following.

Summary
Direct reclaims:                          8823
Direct reclaim pages scanned:             2438797
Direct reclaim file pages scanned:        1315200
Direct reclaim anon pages scanned:        1123597
Direct reclaim pages reclaimed:           446139
Direct reclaim file pages reclaimed:      378668
Direct reclaim anon pages reclaimed:      67471
Direct reclaim write file sync I/O:       0
Direct reclaim write anon sync I/O:       0
Direct reclaim write file async I/O:      0
Direct reclaim write anon async I/O:      4240
Wake kswapd requests:                     122310
Time stalled direct reclaim:              13.78 seconds

Kswapd wakeups:                           25817
Kswapd pages scanned:                     170779115
Kswapd file pages scanned:                162725123
Kswapd anon pages scanned:                8053992
Kswapd pages reclaimed:                   129065738
Kswapd file pages reclaimed:              128500930
Kswapd anon pages reclaimed:              564808
Kswapd reclaim write file sync I/O:       0
Kswapd reclaim write anon sync I/O:       0
Kswapd reclaim write file async I/O:      36
Kswapd reclaim write anon async I/O:      730730
Time kswapd awake:                        1015.50 seconds

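The tallying needed for this is straightforward; below is a hedged Python
sketch of the idea (the shipped script is Perl, the nr_scanned=/file=
field names follow mm_vmscan_lru_isolate's output at the time, and the
sample line is invented).

  import re

  totals = {"file": 0, "anon": 0}

  def account(line):
      # mm_vmscan_lru_isolate prints nr_scanned= and file= fields;
      # file=1 means the file LRU was scanned, file=0 the anon LRU.
      m = re.search(r'nr_scanned=(\d+).*\bfile=(\d+)', line)
      if m:
          kind = "file" if m.group(2) == "1" else "anon"
          totals[kind] += int(m.group(1))

  account("... mm_vmscan_lru_isolate: isolate_mode=0 order=0 "
          "nr_requested=32 nr_scanned=32 nr_taken=32 file=1")
  print(totals)   # {'file': 32, 'anon': 0}
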
Signed-off-by: Chen Yucong <slaoub@gmail.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:20 -07:00
Chen Yucong b27ebf7791 mm:vmscan: update the trace-vmscan-postprocess.pl for event vmscan/mm_vmscan_lru_isolate
When using trace-vmscan-postprocess.pl to check the file/anon ratio of
scanning, we find that the check cannot be performed.  At the same
time, the following message is reported:

  WARNING: Format not as expected for event vmscan/mm_vmscan_lru_isolate
  'file' != 'contig_taken' Fewer fields than expected in format at
  ./trace-vmscan-postprocess.pl line 171, <FORMAT> line 76.

In trace-vmscan-postprocess.pl, contig_taken, contig_dirty, and
contig_failed are associated respectively with nr_lumpy_taken,
nr_lumpy_dirty, and nr_lumpy_failed for lumpy reclaim.  In commit
c53919adc0 ("mm: vmscan: remove lumpy reclaim"), lumpy reclaim was
removed by Mel, but the corresponding update to
trace-vmscan-postprocess.pl was missed.

Signed-off-by: Chen Yucong <slaoub@gmail.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-07-03 09:21:54 -07:00
Vinayak Menon bd7278166a Documentation/trace/postprocess/trace-vmscan-postprocess.pl: fix the traceevent regex
When irq, preempt and lockdep fields are printed (field 3 in the example
below) in the trace output, the script fails.

An example entry:
  kswapd0-610   [000] ...1   158.112152: mm_vmscan_kswapd_wake: nid=0 order=0

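The fix amounts to letting the trace regex tolerate that optional column;
a hedged Python illustration of such a pattern follows (the actual script
is Perl and its exact regex differs).

  import re

  # Make the flags column ("...1": irqs-off, need-resched, hardirq,
  # preempt-depth) optional between the CPU and the timestamp.
  pat = re.compile(
      r'\s*(?P<proc>\S+)\s+\[(?P<cpu>\d+)\]\s+'
      r'(?:[.\dA-Za-z]+\s+)?'                     # optional flags field
      r'(?P<time>\d+\.\d+):\s+(?P<event>\w+):\s+(?P<details>.*)')

  line = ("kswapd0-610   [000] ...1   158.112152: "
          "mm_vmscan_kswapd_wake: nid=0 order=0")
  m = pat.match(line)
  print(m.group('event'), m.group('details'))
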
Signed-off-by: Vinayak Menon <vinayakm.list@gmail.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-01-23 16:36:52 -08:00
Konstantin Khlebnikov b413d48aa7 mm-tracepoint: rename page-free events
Rename mm_page_free_direct to mm_page_free and mm_pagevec_free to
mm_page_free_batched.

Since v2.6.33-5426-gc475dab the kernel triggers mm_page_free_direct for
all freed pages, not only for directly freed ones.  So, let's name it
properly.  For pages freed via a page list we also trigger the
mm_page_free_batched event.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-01-10 16:30:41 -08:00
Minchan Kim 4356f21d09 mm: change isolate mode from #define to bitwise type
Replace the ISOLATE_XXX macros with a bitwise isolate_mode_t type.
Normally, macros aren't recommended here as they are type-unsafe and make
debugging harder, since the symbol cannot be passed through to the
debugger.

Quote from Johannes:
"Hmm, it would probably be cleaner to fully convert the isolation mode
into independent flags.  INACTIVE, ACTIVE, BOTH is currently a
tri-state among flags, which is a bit ugly."

This patch moves the isolate mode definitions from swap.h to mmzone.h,
as needed by memcontrol.h.

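The shape of the change, sketched in Python for brevity: the kernel
change is of course C, only the flag names are taken from the patch, and
the numeric values here are illustrative.

  from enum import IntFlag

  # Replacing a tri-state (INACTIVE/ACTIVE/BOTH) with independent,
  # combinable flags of a dedicated type.
  class IsolateMode(IntFlag):
      ISOLATE_INACTIVE = 1
      ISOLATE_ACTIVE = 2

  mode = IsolateMode.ISOLATE_INACTIVE | IsolateMode.ISOLATE_ACTIVE
  # "Both" is now just the OR of the two flags.
  print(IsolateMode.ISOLATE_ACTIVE in mode)   # True
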
Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-10-31 17:30:44 -07:00
Mel Gorman 7a2d19bced mm: vmscan: tracepoint: account for scanned pages similarly for both ftrace and vmstat
When correlating ftrace results with /proc/vmstat, I noticed that the
reporting script's value for "pages scanned" differed significantly.  Both
values were "right" depending on how you look at it.

The difference is due to vmstat only counting scanning of the inactive
list towards pages scanned.  The analysis script for the tracepoint counts
both the active and inactive lists, yielding a far higher value than
vmstat.  The resulting scanning/reclaim ratio looks much worse.  The
tracepoint is fine, but this patch updates the reporting script so that
the reported values for scanned are similar to vmstat's.

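A hedged Python sketch of the adjusted calculation (the dict fields are
invented stand-ins for the parsed event fields; the real script is Perl):

  # Count scans toward the ratio only when the inactive list was being
  # isolated, mirroring what /proc/vmstat's pgscan counters do.
  def scan_reclaim_ratio(events):
      scanned = sum(e["nr_scanned"] for e in events if not e["active"])
      reclaimed = sum(e.get("nr_reclaimed", 0) for e in events)
      return 100.0 * reclaimed / scanned if scanned else 0.0

  events = [
      {"active": False, "nr_scanned": 64, "nr_reclaimed": 32},
      {"active": True,  "nr_scanned": 64},  # ignored, as in vmstat
  ]
  print("%.2f%%" % scan_reclaim_ratio(events))   # 50.00%
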
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2010-12-22 19:43:33 -08:00
Mel Gorman e11da5b4fd tracing, vmscan: add trace events for LRU list shrinking
There have been numerous reports of stalls that pointed at the problem
being somewhere in the VM.  The problems have multiple root causes, which
means that dealing with any one of them in isolation is hard to justify
on its own, and the fixes would still need integration testing.  This
patch series puts together two different patch sets which in combination
should tackle some of the root causes of the latency problems being
reported.

Patch 1 adds a tracepoint for shrink_inactive_list.  For this series, the
most important result is being able to calculate the scanning/reclaim
ratio as a measure of the amount of work being done by page reclaim.

Patch 2 accounts for time spent in congestion_wait.

Patches 3-6 were originally developed by Kosaki Motohiro but reworked for
this series.  It has been noted that lumpy reclaim is far too aggressive
and thrashes the system somewhat.  As SLUB uses high-order allocations, a
large cost incurred by lumpy reclaim will be noticeable.  It was also
reported during transparent hugepage support testing that lumpy reclaim
was thrashing the system and these patches should mitigate that problem
without disabling lumpy reclaim.

Patch 7 adds wait_iff_congested() and replaces some callers of
congestion_wait().  wait_iff_congested() only sleeps if there is a BDI
that is currently congested.  Patch 8 notes that any BDI being congested
is not necessarily a problem because there could be multiple BDIs of
varying speeds and numerous zones.  It attempts to track when a zone
being reclaimed contains many pages backed by a congested BDI and if so,
reclaimers wait on the congestion queue.

I ran a number of tests with monitoring on X86, X86-64 and PPC64. Each
machine had 3G of RAM and the CPUs were

X86:    Intel P4 2-core
X86-64: AMD Phenom 4-core
PPC64:  PPC970MP

Each used a single disk and the onboard IO controller.  Dirty ratio was
left at 20.  I'm just going to report for X86-64 and PPC64 in a vague
attempt to keep this report short.  Four kernels were tested each based on
v2.6.36-rc4

traceonly-v2r2:     Patches 1 and 2 to instrument vmscan reclaims and congestion_wait
lowlumpy-v2r3:      Patches 1-6 to test if lumpy reclaim is better
waitcongest-v2r3:   Patches 1-7 to only wait on congestion
waitwriteback-v2r4: Patches 1-8 to detect when a zone is congested

nocongest-v1r5: Patches 1-3 for testing wait_iff_congestion
nodirect-v1r5:  Patches 1-10 to disable filesystem writeback for better IO

The tests run were as follows

kernbench
	compile-based benchmark. Smoke test performance

sysbench
	OLTP read-only benchmark. Will be re-run in the future as read-write

micro-mapped-file-stream
	This is a micro-benchmark from Johannes Weiner that accesses a
	large sparse-file through mmap(). It was configured to run in only
	single-CPU mode but can be indicative of how well page reclaim
	identifies suitable pages.

stress-highalloc
	Tries to allocate huge pages under heavy load.

kernbench, iozone and sysbench did not report any performance regression
on any machine.  sysbench did pressure the system lightly and there was
reclaim activity, but there were no differences of major interest between
the kernels.

X86-64 micro-mapped-file-stream

                                      traceonly-v2r2           lowlumpy-v2r3        waitcongest-v2r3     waitwriteback-v2r4
pgalloc_dma                       1639.00 (   0.00%)       667.00 (-145.73%)      1167.00 ( -40.45%)       578.00 (-183.56%)
pgalloc_dma32                  2842410.00 (   0.00%)   2842626.00 (   0.01%)   2843043.00 (   0.02%)   2843014.00 (   0.02%)
pgalloc_normal                       0.00 (   0.00%)         0.00 (   0.00%)         0.00 (   0.00%)         0.00 (   0.00%)
pgsteal_dma                        729.00 (   0.00%)        85.00 (-757.65%)       609.00 ( -19.70%)       125.00 (-483.20%)
pgsteal_dma32                  2338721.00 (   0.00%)   2447354.00 (   4.44%)   2429536.00 (   3.74%)   2436772.00 (   4.02%)
pgsteal_normal                       0.00 (   0.00%)         0.00 (   0.00%)         0.00 (   0.00%)         0.00 (   0.00%)
pgscan_kswapd_dma                 1469.00 (   0.00%)       532.00 (-176.13%)      1078.00 ( -36.27%)       220.00 (-567.73%)
pgscan_kswapd_dma32            4597713.00 (   0.00%)   4503597.00 (  -2.09%)   4295673.00 (  -7.03%)   3891686.00 ( -18.14%)
pgscan_kswapd_normal                 0.00 (   0.00%)         0.00 (   0.00%)         0.00 (   0.00%)         0.00 (   0.00%)
pgscan_direct_dma                   71.00 (   0.00%)       134.00 (  47.01%)       243.00 (  70.78%)       352.00 (  79.83%)
pgscan_direct_dma32             305820.00 (   0.00%)    280204.00 (  -9.14%)    600518.00 (  49.07%)    957485.00 (  68.06%)
pgscan_direct_normal                 0.00 (   0.00%)         0.00 (   0.00%)         0.00 (   0.00%)         0.00 (   0.00%)
pageoutrun                       16296.00 (   0.00%)     21254.00 (  23.33%)     18447.00 (  11.66%)     20067.00 (  18.79%)
allocstall                         443.00 (   0.00%)       273.00 ( -62.27%)       513.00 (  13.65%)      1568.00 (  71.75%)

These are based on the raw figures taken from /proc/vmstat.  It's a rough
measure of reclaim activity.  Note that allocstall counts are higher
because we are entering direct reclaim more often as a result of not
sleeping in congestion.  In itself, it's not necessarily a bad thing.
It's easier to get a view of what happened from the vmscan tracepoint
report.

FTrace Reclaim Statistics: vmscan

                                traceonly-v2r2   lowlumpy-v2r3 waitcongest-v2r3 waitwriteback-v2r4
Direct reclaims                                443        273        513       1568
Direct reclaim pages scanned                305968     280402     600825     957933
Direct reclaim pages reclaimed               43503      19005      30327     117191
Direct reclaim write file async I/O              0          0          0          0
Direct reclaim write anon async I/O              0          3          4         12
Direct reclaim write file sync I/O               0          0          0          0
Direct reclaim write anon sync I/O               0          0          0          0
Wake kswapd requests                        187649     132338     191695     267701
Kswapd wakeups                                   3          1          4          1
Kswapd pages scanned                       4599269    4454162    4296815    3891906
Kswapd pages reclaimed                     2295947    2428434    2399818    2319706
Kswapd reclaim write file async I/O              1          0          1          1
Kswapd reclaim write anon async I/O             59        187         41        222
Kswapd reclaim write file sync I/O               0          0          0          0
Kswapd reclaim write anon sync I/O               0          0          0          0
Time stalled direct reclaim (seconds)         4.34       2.52       6.63       2.96
Time kswapd awake (seconds)                  11.15      10.25      11.01      10.19

Total pages scanned                        4905237   4734564   4897640   4849839
Total pages reclaimed                      2339450   2447439   2430145   2436897
%age total pages scanned/reclaimed          47.69%    51.69%    49.62%    50.25%
%age total pages scanned/written             0.00%     0.00%     0.00%     0.00%
%age  file pages scanned/written             0.00%     0.00%     0.00%     0.00%
Percentage Time Spent Direct Reclaim        29.23%    19.02%    38.48%    20.25%
Percentage Time kswapd Awake                78.58%    78.85%    76.83%    79.86%

What is interesting here for nocongest in particular is that while direct
reclaim scans more pages, the overall number of pages scanned remains the
same and the ratio of pages scanned to pages reclaimed is more or less the
same.  In other words, while we are sleeping less, reclaim is not doing
more work, and as direct reclaim and kswapd are awake for less time, they
would appear to be doing less work.

FTrace Reclaim Statistics: congestion_wait
Direct number congest     waited                87        196         64          0
Direct time   congest     waited            4604ms     4732ms     5420ms        0ms
Direct full   congest     waited                72        145         53          0
Direct number conditional waited                 0          0        324       1315
Direct time   conditional waited               0ms        0ms        0ms        0ms
Direct full   conditional waited                 0          0          0          0
KSwapd number congest     waited                20         10         15          7
KSwapd time   congest     waited            1264ms      536ms      884ms      284ms
KSwapd full   congest     waited                10          4          6          2
KSwapd number conditional waited                 0          0          0          0
KSwapd time   conditional waited               0ms        0ms        0ms        0ms
KSwapd full   conditional waited                 0          0          0          0

The vanilla kernel spent 8 seconds asleep in direct reclaim and no time at
all asleep with the patches.

MMTests Statistics: duration
User/Sys Time Running Test (seconds)         10.51     10.73      10.6     11.66
Total Elapsed Time (seconds)                 14.19     13.00     14.33     12.76

Overall, the tests completed faster.  It is interesting to note that
backing off further when a zone is congested and not just a BDI was more
efficient overall.

PPC64 micro-mapped-file-stream
pgalloc_dma                    3024660.00 (   0.00%)   3027185.00 (   0.08%)   3025845.00 (   0.04%)   3026281.00 (   0.05%)
pgalloc_normal                       0.00 (   0.00%)         0.00 (   0.00%)         0.00 (   0.00%)         0.00 (   0.00%)
pgsteal_dma                    2508073.00 (   0.00%)   2565351.00 (   2.23%)   2463577.00 (  -1.81%)   2532263.00 (   0.96%)
pgsteal_normal                       0.00 (   0.00%)         0.00 (   0.00%)         0.00 (   0.00%)         0.00 (   0.00%)
pgscan_kswapd_dma              4601307.00 (   0.00%)   4128076.00 ( -11.46%)   3912317.00 ( -17.61%)   3377165.00 ( -36.25%)
pgscan_kswapd_normal                 0.00 (   0.00%)         0.00 (   0.00%)         0.00 (   0.00%)         0.00 (   0.00%)
pgscan_direct_dma               629825.00 (   0.00%)    971622.00 (  35.18%)   1063938.00 (  40.80%)   1711935.00 (  63.21%)
pgscan_direct_normal                 0.00 (   0.00%)         0.00 (   0.00%)         0.00 (   0.00%)         0.00 (   0.00%)
pageoutrun                       27776.00 (   0.00%)     20458.00 ( -35.77%)     18763.00 ( -48.04%)     18157.00 ( -52.98%)
allocstall                         977.00 (   0.00%)      2751.00 (  64.49%)      2098.00 (  53.43%)      5136.00 (  80.98%)

Similar trends to x86-64. allocstalls are up but it's not necessarily bad.

FTrace Reclaim Statistics: vmscan
Direct reclaims                                977       2709       2098       5136
Direct reclaim pages scanned                629825     963814    1063938    1711935
Direct reclaim pages reclaimed               75550     242538     150904     387647
Direct reclaim write file async I/O              0          0          0          2
Direct reclaim write anon async I/O              0         10          0          4
Direct reclaim write file sync I/O               0          0          0          0
Direct reclaim write anon sync I/O               0          0          0          0
Wake kswapd requests                        392119    1201712     571935     571921
Kswapd wakeups                                   3          2          3          3
Kswapd pages scanned                       4601307    4128076    3912317    3377165
Kswapd pages reclaimed                     2432523    2318797    2312673    2144616
Kswapd reclaim write file async I/O             20          1          1          1
Kswapd reclaim write anon async I/O             57        132         11        121
Kswapd reclaim write file sync I/O               0          0          0          0
Kswapd reclaim write anon sync I/O               0          0          0          0
Time stalled direct reclaim (seconds)         6.19       7.30      13.04      10.88
Time kswapd awake (seconds)                  21.73      26.51      25.55      23.90

Total pages scanned                        5231132   5091890   4976255   5089100
Total pages reclaimed                      2508073   2561335   2463577   2532263
%age total pages scanned/reclaimed          47.95%    50.30%    49.51%    49.76%
%age total pages scanned/written             0.00%     0.00%     0.00%     0.00%
%age  file pages scanned/written             0.00%     0.00%     0.00%     0.00%
Percentage Time Spent Direct Reclaim        18.89%    20.65%    32.65%    27.65%
Percentage Time kswapd Awake                72.39%    80.68%    78.21%    77.40%

Again, a similar trend: the congestion_wait changes mean that direct
reclaim scans more pages, but the overall number of pages scanned, while
slightly reduced, is very similar.  The ratio of scanning/reclaimed
remains roughly similar.  The downside is that kswapd and direct reclaim
were awake longer and for a larger percentage of the overall workload.
It's possible there were big differences in the amount of time spent
reclaiming slab pages between the different kernels, which is plausible
considering that the micro test runs after fsmark and sysbench.

Trace Reclaim Statistics: congestion_wait
Direct number congest     waited               845       1312        104          0
Direct time   congest     waited           19416ms    26560ms     7544ms        0ms
Direct full   congest     waited               745       1105         72          0
Direct number conditional waited                 0          0       1322       2935
Direct time   conditional waited               0ms        0ms       12ms      312ms
Direct full   conditional waited                 0          0          0          3
KSwapd number congest     waited                39        102         75         63
KSwapd time   congest     waited            2484ms     6760ms     5756ms     3716ms
KSwapd full   congest     waited                20         48         46         25
KSwapd number conditional waited                 0          0          0          0
KSwapd time   conditional waited               0ms        0ms        0ms        0ms
KSwapd full   conditional waited                 0          0          0          0

The vanilla kernel spent 20 seconds asleep in direct reclaim and only
312ms asleep with the patches.  The time kswapd spent congest waited was
also reduced by a large factor.

MMTests Statistics: duration
User/Sys Time Running Test (seconds)         26.58     28.05      26.9     28.47
Total Elapsed Time (seconds)                 30.02     32.86     32.67     30.88

With all patches applied, the completion times are very similar.

X86-64 STRESS-HIGHALLOC
                traceonly-v2r2     lowlumpy-v2r3  waitcongest-v2r3 waitwriteback-v2r4
Pass 1          82.00 ( 0.00%)    84.00 ( 2.00%)    85.00 ( 3.00%)    85.00 ( 3.00%)
Pass 2          90.00 ( 0.00%)    87.00 (-3.00%)    88.00 (-2.00%)    89.00 (-1.00%)
At Rest         92.00 ( 0.00%)    90.00 (-2.00%)    90.00 (-2.00%)    91.00 (-1.00%)

Success figures across the board are broadly similar.

                traceonly-v2r2     lowlumpy-v2r3  waitcongest-v2r3 waitwriteback-v2r4
Direct reclaims                               1045        944        886        887
Direct reclaim pages scanned                135091     119604     109382     101019
Direct reclaim pages reclaimed               88599      47535      47863      46671
Direct reclaim write file async I/O            494        283        465        280
Direct reclaim write anon async I/O          29357      13710      16656      13462
Direct reclaim write file sync I/O             154          2          2          3
Direct reclaim write anon sync I/O           14594        571        509        561
Wake kswapd requests                          7491        933        872        892
Kswapd wakeups                                 814        778        731        780
Kswapd pages scanned                       7290822   15341158   11916436   13703442
Kswapd pages reclaimed                     3587336    3142496    3094392    3187151
Kswapd reclaim write file async I/O          91975      32317      28022      29628
Kswapd reclaim write anon async I/O        1992022     789307     829745     849769
Kswapd reclaim write file sync I/O               0          0          0          0
Kswapd reclaim write anon sync I/O               0          0          0          0
Time stalled direct reclaim (seconds)      4588.93    2467.16    2495.41    2547.07
Time kswapd awake (seconds)                2497.66    1020.16    1098.06    1176.82

Total pages scanned                        7425913  15460762  12025818  13804461
Total pages reclaimed                      3675935   3190031   3142255   3233822
%age total pages scanned/reclaimed          49.50%    20.63%    26.13%    23.43%
%age total pages scanned/written            28.66%     5.41%     7.28%     6.47%
%age  file pages scanned/written             1.25%     0.21%     0.24%     0.22%
Percentage Time Spent Direct Reclaim        57.33%    42.15%    42.41%    42.99%
Percentage Time kswapd Awake                43.56%    27.87%    29.76%    31.25%

Scanned/reclaimed ratios again look good with big improvements in
efficiency.  The scanned/written ratios also look much improved.  With a
better scanned/written ratio, there is an expectation that IO would be
more efficient and indeed, the time spent in direct reclaim is much
reduced by the full series and kswapd spends a little less time awake.

Overall, indications here are that allocations were happening much faster,
and this can be seen in a graph of the latency figures taken as the
allocations were taking place:
http://www.csn.ul.ie/~mel/postings/vmscanreduce-20101509/highalloc-interlatency-hydra-mean.ps

FTrace Reclaim Statistics: congestion_wait
Direct number congest     waited              1333        204        169          4
Direct time   congest     waited           78896ms     8288ms     7260ms      200ms
Direct full   congest     waited               756         92         69          2
Direct number conditional waited                 0          0         26        186
Direct time   conditional waited               0ms        0ms        0ms     2504ms
Direct full   conditional waited                 0          0          0         25
KSwapd number congest     waited                 4        395        227        282
KSwapd time   congest     waited             384ms    25136ms    10508ms    18380ms
KSwapd full   congest     waited                 3        232         98        176
KSwapd number conditional waited                 0          0          0          0
KSwapd time   conditional waited               0ms        0ms        0ms        0ms
KSwapd full   conditional waited                 0          0          0          0

Overall, the time spent sleeping is reduced.  kswapd is still hitting
congestion_wait() but that is because there are callers remaining where it
wasn't clear in advance if they should be changed to wait_iff_congested()
or not.  Overall the sleep times are reduced though, from about 79 seconds
to about 19.

MMTests Statistics: duration
User/Sys Time Running Test (seconds)       3415.43   3386.65   3388.39    3377.5
Total Elapsed Time (seconds)               5733.48   3660.33   3689.41   3765.39

With the full series, the time to complete the tests is reduced by 30%.

PPC64 STRESS-HIGHALLOC
                traceonly-v2r2     lowlumpy-v2r3  waitcongest-v2r3 waitwriteback-v2r4
Pass 1          17.00 ( 0.00%)    34.00 (17.00%)    38.00 (21.00%)    43.00 (26.00%)
Pass 2          25.00 ( 0.00%)    37.00 (12.00%)    42.00 (17.00%)    46.00 (21.00%)
At Rest         49.00 ( 0.00%)    43.00 (-6.00%)    45.00 (-4.00%)    51.00 ( 2.00%)

Success rates there are *way* up, particularly considering that the 16MB
huge pages on PPC64 mean that it's always much harder to allocate them.

FTrace Reclaim Statistics: vmscan
              stress-highalloc  stress-highalloc  stress-highalloc  stress-highalloc
                traceonly-v2r2     lowlumpy-v2r3  waitcongest-v2r3 waitwriteback-v2r4
Direct reclaims                                499        505        564        509
Direct reclaim pages scanned                223478      41898      51818      45605
Direct reclaim pages reclaimed              137730      21148      27161      23455
Direct reclaim write file async I/O            399        136        162        136
Direct reclaim write anon async I/O          46977       2865       4686       3998
Direct reclaim write file sync I/O              29          0          1          3
Direct reclaim write anon sync I/O           31023        159        237        239
Wake kswapd requests                           420        351        360        326
Kswapd wakeups                                 185        294        249        277
Kswapd pages scanned                      15703488   16392500   17821724   17598737
Kswapd pages reclaimed                     5808466    2908858    3139386    3145435
Kswapd reclaim write file async I/O         159938      18400      18717      13473
Kswapd reclaim write anon async I/O        3467554     228957     322799     234278
Kswapd reclaim write file sync I/O               0          0          0          0
Kswapd reclaim write anon sync I/O               0          0          0          0
Time stalled direct reclaim (seconds)      9665.35    1707.81    2374.32    1871.23
Time kswapd awake (seconds)                9401.21    1367.86    1951.75    1328.88

Total pages scanned                       15926966  16434398  17873542  17644342
Total pages reclaimed                      5946196   2930006   3166547   3168890
%age total pages scanned/reclaimed          37.33%    17.83%    17.72%    17.96%
%age total pages scanned/written            23.27%     1.52%     1.94%     1.43%
%age  file pages scanned/written             1.01%     0.11%     0.11%     0.08%
Percentage Time Spent Direct Reclaim        44.55%    35.10%    41.42%    36.91%
Percentage Time kswapd Awake                86.71%    43.58%    52.67%    41.14%

While the scanning rates are slightly up, the scanned/reclaimed and
scanned/written figures are much improved.  The time spent in direct
reclaim and with kswapd is massively reduced, mostly by the lowlumpy
patches.

FTrace Reclaim Statistics: congestion_wait
Direct number congest     waited               725        303        126          3
Direct time   congest     waited           45524ms     9180ms     5936ms      300ms
Direct full   congest     waited               487        190         52          3
Direct number conditional waited                 0          0        200        301
Direct time   conditional waited               0ms        0ms        0ms     1904ms
Direct full   conditional waited                 0          0          0         19
KSwapd number congest     waited                 0          2         23          4
KSwapd time   congest     waited               0ms      200ms      420ms      404ms
KSwapd full   congest     waited                 0          2          2          4
KSwapd number conditional waited                 0          0          0          0
KSwapd time   conditional waited               0ms        0ms        0ms        0ms
KSwapd full   conditional waited                 0          0          0          0

Not as dramatic a story here, but the time spent asleep is reduced and we
can still see that wait_iff_congested is going to sleep when necessary.

MMTests Statistics: duration
User/Sys Time Running Test (seconds)      12028.09   3157.17   3357.79   3199.16
Total Elapsed Time (seconds)              10842.07   3138.72   3705.54   3229.85

The time to complete this test goes way down.  With the full series, we
are allocating over twice the number of huge pages in 30% of the time, and
there is a corresponding impact on the allocation latency graph available
at:

http://www.csn.ul.ie/~mel/postings/vmscanreduce-20101509/highalloc-interlatency-powyah-mean.ps

This patch:

Add a trace event for shrink_inactive_list() and update the sample
postprocessing script appropriately.  It can be used to determine how many
pages were reclaimed and, for non-lumpy reclaim, where exactly the pages
were reclaimed from.

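A hedged Python sketch of consuming the new event (the shipped
postprocessing script is Perl; the field names are assumed from the
tracepoint's output and the sample line is invented):

  import re

  # Pull nr_scanned/nr_reclaimed out of one
  # mm_vmscan_lru_shrink_inactive line.
  def shrink_stats(line):
      m = re.search(r'mm_vmscan_lru_shrink_inactive:.*'
                    r'nr_scanned=(\d+).*nr_reclaimed=(\d+)', line)
      return (int(m.group(1)), int(m.group(2))) if m else None

  print(shrink_stats("kswapd0-311 [000] 211.000000: "
                     "mm_vmscan_lru_shrink_inactive: "
                     "nid=0 zid=2 nr_scanned=32 nr_reclaimed=28 priority=5"))
  # -> (32, 28)
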
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2010-10-26 16:52:07 -07:00
Mel Gorman b898cc7001 vmscan: tracing: add a postprocessing script for reclaim-related ftrace events
Add a simple post-processing script for the reclaim-related trace events.
It can be used to give an indication of how much traffic there is on the
LRU lists and how severe the latencies due to reclaim are.  Example output
looks like the following:

Reclaim latencies expressed as order-latency_in_ms
uname-3942             9-200.179000000004 9-98.7900000000373 9-99.8330000001006
kswapd0-311            0-662.097999999998 0-2.79700000002049 \
	0-149.100000000035 0-3295.73600000003 0-9806.31799999997 0-35528.833 \
	0-10043.197 0-129740.979 0-3.50500000000466 0-3.54899999999907 \
	0-9297.78999999992 0-3.48499999998603 0-3596.97999999998 0-3.92799999995623 \
	0-3.35000000009313 0-16729.017 0-3.57799999997951 0-47435.0630000001 \
	0-3.7819999998901 0-5864.06999999995 0-18635.334 0-10541.289 9-186011.565 \
	9-3680.86300000001 9-1379.06499999994 9-958571.115 9-66215.474 \
	9-6721.14699999988 9-1962.15299999993 9-1094806.125 9-2267.83199999994 \
	9-47120.9029999999 9-427653.886 9-2.6359999999404 9-632.148999999976 \
	9-476.753000000026 9-495.577000000048 9-8.45900000003166 9-6.6820000000298 \
	9-1.30500000016764 9-251.746000000043 9-383.905000000028 9-80.1419999999925 \
	9-281.160000000149 9-14.8780000000261 9-381.45299999998 9-512.07799999998 \
	9-49.5519999999087 9-167.439000000013 9-183.820999999996 9-239.527999999933 \
	9-19.9479999998584 9-148.747999999905 9-164.583000000101 9-16.9480000000913 \
	9-192.376000000164 9-64.1010000000242 9-1.40800000005402 9-3.60800000000745 \
	9-17.1359999999404 9-4.69500000006519 9-2.06400000001304 9-1582488.554 \
	9-6244.19499999983 9-348153.812 9-2.0999999998603 9-0.987999999895692 \
	0-32218.473 0-1.6140000000596 0-1.28100000019185 0-1.41300000017509 \
	0-1.32299999985844 0-602.584000000032 0-1.34400000004098 0-1.6929999999702 \
	1-22101.8190000001 9-174876.724 9-16.2420000000857 9-175.165999999736 \
	9-15.8589999997057 9-0.604999999981374 9-3061.09000000032 9-479.277000000235 \
	9-1.54499999992549 9-771.985000000335 9-4.88700000010431 9-15.0649999999441 \
	9-0.879999999888241 9-252.01500000013 9-1381.03600000031 9-545.689999999944 \
	9-3438.0129999998 9-3343.70099999988
bench-stresshig-3942   9-7063.33900000004 9-129960.482 9-2062.27500000002 \
	9-3845.59399999992 9-171.82799999998 9-16493.821 9-7615.23900000006 \
	9-10217.848 9-983.138000000035 9-2698.39999999991 9-4016.1540000001 \
	9-5522.37700000009 9-21630.429 \
	9-15061.048 9-10327.953 9-542.69700000016 9-317.652000000002 \
	9-8554.71699999995 9-1786.61599999992 9-1899.31499999994 9-2093.41899999999 \
	9-4992.62400000007 9-942.648999999976 9-1923.98300000001 9-3.7980000001844 \
	9-5.99899999983609 9-0.912000000011176 9-1603.67700000014 9-1.98300000000745 \
	9-3.96500000008382 9-0.902999999932945 9-2802.72199999983 9-1078.24799999991 \
	9-2155.82900000014 9-10.058999999892 9-1984.723 9-1687.97999999998 \
	9-1136.05300000007 9-3183.61699999985 9-458.731000000145 9-6.48600000003353 \
	9-1013.25200000009 9-8415.22799999989 9-10065.584 9-2076.79600000009 \
	9-3792.65699999989 9-71.2010000001173 9-2560.96999999997 9-2260.68400000012 \
	9-2862.65799999982 9-1255.81500000018 9-15.7440000001807 9-4.33499999996275 \
	9-1446.63800000004 9-238.635000000009 9-60.1790000000037 9-4.38800000003539 \
	9-639.567000000039 9-306.698000000091 9-31.4070000001229 9-74.997999999905 \
	9-632.725999999791 9-1625.93200000003 9-931.266000000061 9-98.7749999999069 \
	9-984.606999999844 9-225.638999999966 9-421.316000000108 9-653.744999999879 \
	9-572.804000000004 9-769.158999999985 9-603.918000000063 9-4.28499999991618 \
	9-626.21399999992 9-1721.25 9-0.854999999981374 9-572.39599999995 \
	9-681.881999999983 9-1345.12599999993 9-363.666999999899 9-3823.31099999999 \
	9-2991.28200000012 9-4.27099999994971 9-309.76500000013 9-3068.35700000008 \
	9-788.25 9-3515.73999999999 9-2065.96100000013 9-286.719999999972 \
	9-316.076000000117 9-344.151000000071 9-2.51000000000931 9-306.688000000082 \
	9-1515.00099999993 9-336.528999999864 9-793.491999999853 9-457.348999999929 \
	9-13620.155 9-119.933999999892 9-35.0670000000391 9-918.266999999993 \
	9-828.569000000134 9-4863.81099999999 9-105.222000000067 9-894.23900000006 \
	9-110.964999999851 9-0.662999999942258 9-12753.3150000002 9-12.6129999998957 \
	9-13368.0899999999 9-12.4199999999255 9-1.00300000002608 9-1.41100000008009 \
	9-10300.5290000001 9-16.502000000095 9-30.7949999999255 9-6283.0140000002 \
	9-4320.53799999994 9-6826.27300000004 9-3.07299999985844 9-1497.26799999992 \
	9-13.4040000000969 9-3.12999999988824 9-3.86100000003353 9-11.3539999998175 \
	9-0.10799999977462 9-21.780999999959 9-209.695999999996 9-299.647000000114 \
	9-6.01699999999255 9-20.8349999999627 9-22.5470000000205 9-5470.16800000006 \
	9-7.60499999998137 9-0.821000000229105 9-1.56600000010803 9-14.1669999998994 \
	9-0.209000000031665 9-1.82300000009127 9-1.70000000018626 9-19.9429999999702 \
	9-124.266999999993 9-0.0389999998733401 9-6.71400000015274 9-16.7710000001825 \
	9-31.0409999999683 9-0.516999999992549 9-115.888000000035 9-5.19900000002235 \
	9-222.389999999898 9-11.2739999999758 9-80.9050000000279 9-8.14500000001863 \
	9-4.44599999999627 9-0.218999999808148 9-0.715000000083819 9-0.233000000007451 \
	9-48.2630000000354 9-248.560999999987 9-374.96800000011 9-644.179000000004 \
	9-0.835999999893829 9-79.0060000000522 9-128.447999999858 9-0.692000000039116 \
	9-5.26500000013039 9-128.449000000022 9-2.04799999995157 9-12.0990000001621 \
	9-8.39899999997579 9-10.3860000001732 9-11.9310000000987 9-53.4450000000652 \
	9-0.46999999997206 9-2.96299999998882 9-17.9699999999721 9-0.776000000070781 \
	9-25.2919999998994 9-33.1110000000335 9-0.434000000124797 9-0.641000000061467 \
	9-0.505000000121072 9-1.12800000002608 9-149.222000000067 9-1.17599999997765 \
	9-3247.33100000001 9-10.7439999999478 9-153.523000000045 9-1.38300000014715 \
	9-794.762000000104 9-3.36199999996461 9-128.765999999829 9-181.543999999994 \
	9-78149.8229999999 9-176.496999999974 9-89.9940000001807 9-9.12700000009499 \
	9-250.827000000048 9-0.224999999860302 9-0.388999999966472 9-1.16700000036508 \
	9-32.1740000001155 9-12.6800000001676 9-0.0720000001601875 9-0.274999999906868 \
	9-0.724000000394881 9-266.866000000387 9-45.5709999999963 9-4.54399999976158 \
	9-8.27199999988079 9-4.38099999958649 9-0.512000000104308 9-0.0640000002458692 \
	9-5.20000000018626 9-0.0839999997988343 9-12.816000000108 9-0.503000000026077 \
	9-0.507999999914318 9-6.23999999975786 9-3.35100000025705 9-18.8530000001192 \
	9-25.2220000000671 9-68.2309999996796 9-98.9939999999478 9-0.441000000108033 \
	9-4.24599999981001 9-261.702000000048 9-3.01599999982864 9-0.0749999997206032 \
	9-0.0370000000111759 9-4.375 9-3.21800000034273 9-11.3960000001825 \
	9-0.0540000000037253 9-0.286000000312924 9-0.865999999921769 \
	9-0.294999999925494 9-6.45999999996275 9-4.31099999975413 9-128.248999999836 \
	9-0.282999999821186 9-102.155000000261 9-0.0860000001266599 \
	9-0.0540000000037253 9-0.935000000055879 9-0.0670000002719462 \
	9-5.8640000000596 9-19.9860000000335 9-4.18699999991804 9-0.566000000108033 \
	9-2.55099999997765 9-0.702000000048429 9-131.653999999631 9-0.638999999966472 \
	9-14.3229999998584 9-183.398000000045 9-178.095999999903 9-3.22899999981746 \
	9-7.31399999978021 9-22.2400000002235 9-11.7979999999516 9-108.10599999968 \
	9-99.0159999998286 9-102.640999999829 9-38.414000000339
Process                  Direct     Wokeup      Pages      Pages    Pages
details                   Rclms     Kswapd    Scanned    Sync-IO ASync-IO
cc1-30800                     0          1          0          0        0      wakeup-0=1
cc1-24260                     0          1          0          0        0      wakeup-0=1
cc1-24152                     0         12          0          0        0      wakeup-0=12
cc1-8139                      0          1          0          0        0      wakeup-0=1
cc1-4390                      0          1          0          0        0      wakeup-0=1
cc1-4648                      0          7          0          0        0      wakeup-0=7
cc1-4552                      0          3          0          0        0      wakeup-0=3
dd-4550                       0         31          0          0        0      wakeup-0=31
date-4898                     0          1          0          0        0      wakeup-0=1
cc1-6549                      0          7          0          0        0      wakeup-0=7
as-22202                      0         17          0          0        0      wakeup-0=17
cc1-6495                      0          9          0          0        0      wakeup-0=9
cc1-8299                      0          1          0          0        0      wakeup-0=1
cc1-6009                      0          1          0          0        0      wakeup-0=1
cc1-2574                      0          2          0          0        0      wakeup-0=2
cc1-30568                     0          1          0          0        0      wakeup-0=1
cc1-2679                      0          6          0          0        0      wakeup-0=6
sh-13747                      0         12          0          0        0      wakeup-0=12
cc1-22193                     0         18          0          0        0      wakeup-0=18
cc1-30725                     0          2          0          0        0      wakeup-0=2
as-4392                       0          2          0          0        0      wakeup-0=2
cc1-28180                     0         14          0          0        0      wakeup-0=14
cc1-13697                     0          2          0          0        0      wakeup-0=2
cc1-22207                     0          8          0          0        0      wakeup-0=8
cc1-15270                     0        179          0          0        0      wakeup-0=179
cc1-22011                     0         82          0          0        0      wakeup-0=82
cp-14682                      0          1          0          0        0      wakeup-0=1
as-11926                      0          2          0          0        0      wakeup-0=2
cc1-6016                      0          5          0          0        0      wakeup-0=5
make-18554                    0         13          0          0        0      wakeup-0=13
cc1-8292                      0         12          0          0        0      wakeup-0=12
make-24381                    0          1          0          0        0      wakeup-1=1
date-18681                    0         33          0          0        0      wakeup-0=33
cc1-32276                     0          1          0          0        0      wakeup-0=1
timestamp-outpu-2809          0        253          0          0        0      wakeup-0=240 wakeup-1=13
date-18624                    0          7          0          0        0      wakeup-0=7
cc1-30960                     0          9          0          0        0      wakeup-0=9
cc1-4014                      0          1          0          0        0      wakeup-0=1
cc1-30706                     0         22          0          0        0      wakeup-0=22
uname-3942                    4          1        306          0       17      direct-9=4       wakeup-9=1
cc1-28207                     0          1          0          0        0      wakeup-0=1
cc1-30563                     0          9          0          0        0      wakeup-0=9
cc1-22214                     0         10          0          0        0      wakeup-0=10
cc1-28221                     0         11          0          0        0      wakeup-0=11
cc1-28123                     0          6          0          0        0      wakeup-0=6
kswapd0-311                   0          7     357302          0    34233      wakeup-0=7
cc1-5988                      0          7          0          0        0      wakeup-0=7
as-30734                      0        161          0          0        0      wakeup-0=161
cc1-22004                     0         45          0          0        0      wakeup-0=45
date-4590                     0          4          0          0        0      wakeup-0=4
cc1-15279                     0        213          0          0        0      wakeup-0=213
date-30735                    0          1          0          0        0      wakeup-0=1
cc1-30583                     0          4          0          0        0      wakeup-0=4
cc1-32324                     0          2          0          0        0      wakeup-0=2
cc1-23933                     0          3          0          0        0      wakeup-0=3
cc1-22001                     0         36          0          0        0      wakeup-0=36
bench-stresshig-3942        287        287      80186       6295    12196      direct-9=287       wakeup-9=287
cc1-28170                     0          7          0          0        0      wakeup-0=7
date-7932                     0         92          0          0        0      wakeup-0=92
cc1-22222                     0          6          0          0        0      wakeup-0=6
cc1-32334                     0         16          0          0        0      wakeup-0=16
cc1-2690                      0          6          0          0        0      wakeup-0=6
cc1-30733                     0          9          0          0        0      wakeup-0=9
cc1-32298                     0          2          0          0        0      wakeup-0=2
cc1-13743                     0         18          0          0        0      wakeup-0=18
cc1-22186                     0          4          0          0        0      wakeup-0=4
cc1-28214                     0         11          0          0        0      wakeup-0=11
cc1-13735                     0          1          0          0        0      wakeup-0=1
updatedb-8173                 0         18          0          0        0      wakeup-0=18
cc1-13750                     0          3          0          0        0      wakeup-0=3
cat-2808                      0          2          0          0        0      wakeup-0=2
cc1-15277                     0        169          0          0        0      wakeup-0=169
date-18317                    0          1          0          0        0      wakeup-0=1
cc1-15274                     0        197          0          0        0      wakeup-0=197
cc1-30732                     0          1          0          0        0      wakeup-0=1

Kswapd                   Kswapd      Order      Pages      Pages    Pages
Instance                Wakeups  Re-wakeup    Scanned    Sync-IO ASync-IO
kswapd0-311                  91         24     357302          0    34233      wake-0=31 wake-1=1 wake-9=59       rewake-0=10 rewake-1=1 rewake-9=13

Summary
Direct reclaims:     		291
Direct reclaim pages scanned:	437794
Direct reclaim write sync I/O:	6295
Direct reclaim write async I/O:	46446
Wake kswapd requests:		2152
Time stalled direct reclaim: 	519.163009000002 ms

Kswapd wakeups:			91
Kswapd pages scanned:		357302
Kswapd reclaim write sync I/O:	0
Kswapd reclaim write async I/O:	34233
Time kswapd awake:		5282.749757 ms

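A hedged sketch of how the stall figure can be derived: pair each
mm_vmscan_direct_reclaim_begin with the matching _end for the same pid.
The event names are the real tracepoints; the pid and timestamps below
are invented, and the shipped script is Perl rather than this Python.

  # Stall time is the gap between begin and end for the same pid, in ms.
  begins = {}
  stalls = []

  def on_event(pid, name, ts):
      if name == "mm_vmscan_direct_reclaim_begin":
          begins[pid] = ts
      elif name == "mm_vmscan_direct_reclaim_end" and pid in begins:
          stalls.append((pid, (ts - begins.pop(pid)) * 1000.0))

  on_event(3942, "mm_vmscan_direct_reclaim_begin", 158.112152)
  on_event(3942, "mm_vmscan_direct_reclaim_end",   158.312352)
  print(stalls)   # [(3942, ~200.2 ms)]
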
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Larry Woodman <lwoodman@redhat.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Michael Rubin <mrubin@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2010-08-09 20:45:00 -07:00
Mel Gorman c9d05cfc00 tracing, page-allocator: add a postprocessing script for page-allocator-related ftrace events
This patch adds a simple post-processing script for the
page-allocator-related trace events.  It can be used to give an indication
of who the most allocator-intensive processes are and how often the zone
lock was taken during the tracing period.  Example output looks like:

Process                   Pages      Pages      Pages    Pages       PCPU     PCPU     PCPU   Fragment Fragment  MigType Fragment Fragment  Unknown
details                  allocd     allocd      freed    freed      pages   drains  refills   Fallback  Causing  Changed   Severe Moderate
                                under lock     direct  pagevec      drain
swapper-0                     0          0          2        0          0        0        0          0        0        0        0        0        0
Xorg-3770                 10603       5952       3685     6978       5996      194      192          0        0        0        0        0        0
modprobe-21397               51          0          0       86         31        1        0          0        0        0        0        0        0
xchat-5370                  228         93          0        0          0        0        3          0        0        0        0        0        0
awesome-4317                 32         32          0        0          0        0       32          0        0        0        0        0        0
thinkfan-3863                 2          0          1        1          0        0        0          0        0        0        0        0        0
hald-addon-stor-3935          2          0          0        0          0        0        0          0        0        0        0        0        0
akregator-4506                1          1          0        0          0        0        1          0        0        0        0        0        0
xmms-14888                    0          0          1        0          0        0        0          0        0        0        0        0        0
khelper-12                    1          0          0        0          0        0        0          0        0        0        0        0        0

Optionally, the output can include information on the parent, or aggregate
based on process name instead of aggregating based on each pid.  Example
output including parent information, with the PID stripped out, looks
something like:

Process                        Pages      Pages      Pages    Pages       PCPU     PCPU     PCPU   Fragment Fragment  MigType Fragment Fragment  Unknown
details                       allocd     allocd      freed    freed      pages   drains  refills   Fallback  Causing  Changed   Severe Moderate
                                     under lock     direct  pagevec      drain
gdm-3756 :: Xorg-3770           3796       2976         99     3813       3224      104       98          0        0        0        0        0        0
init-1 :: hald-3892                1          0          0        0          0        0        0          0        0        0        0        0        0
git-21447 :: editor-21448          4          0          4        0          0        0        0          0        0        0        0        0        0

This says that Xorg allocated 3796 pages and its parent process is gdm
with a PID of 3756.

The postprocessor parses the text output of tracing.  While there is a
binary format, the expectation is that the binary output can be readily
translated into text and post-processed offline.  Obviously, if the text
format changes, the parser will break, but the regular-expression parser
is fairly rudimentary, so it should be readily adjustable.

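A hedged Python sketch of the regex-and-aggregate approach described
above (the shipped parser is Perl; the pattern and sample line here are
illustrative):

  import re
  from collections import Counter

  allocs = Counter()
  pat = re.compile(r'^\s*(?P<proc>\S+)\s+\[\d+\].*'
                   r'mm_page_alloc:.*\border=(?P<order>\d+)')

  for line in [
      "Xorg-3770 [001] 101.000000: mm_page_alloc: "
      "page=... pfn=12345 order=0 migratetype=0 gfp_flags=GFP_KERNEL",
  ]:
      m = pat.match(line)
      if m:
          # Weight by pages, not events: order n allocates 2^n pages.
          allocs[m.group('proc')] += 1 << int(m.group('order'))

  print(allocs.most_common())   # [('Xorg-3770', 1)]
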
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Cc: Rik van Riel <riel@redhat.com>
Reviewed-by: Ingo Molnar <mingo@elte.hu>
Cc: Larry Woodman <lwoodman@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Li Ming Chun <macli@brc.ubc.ca>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-09-22 07:17:34 -07:00