git/t/perf/aggregate.perl

#!/usr/bin/perl
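#
# aggregate.perl -- summarize the results written by the t/perf tests.
#
# Reads per-test result files from test-results/ (as written by
# perf-lib.sh) and prints either a comparison table across the given
# build directories/revisions, a list sorted by regression, or
# Codespeed-style JSON.  Typically invoked for you by the 'run' script:
#
#   ./aggregate.perl [options] [--] [<dir_or_rev>...] [--] [<test_script>...]
#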
use lib '../../perl/build/lib';
use strict;
use warnings;
use Getopt::Long;
use Cwd qw(realpath);
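# Read a single result file.  Two formats are recognized: a timing line
# of the form "[hours:]minutes:seconds user sys" (e.g. "0:01.23 0.45 0.67",
# values here are illustrative), which yields (real-seconds, user, sys),
# or a bare integer, which is a size measurement returned as one value.
# Returns undef if the file is missing or empty.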
sub get_times {
	my $name = shift;
	open my $fh, "<", $name or return undef;
	my $line = <$fh>;
	return undef if not defined $line;
	close $fh or die "cannot close $name: $!";
	# times
	if ($line =~ /^(?:(\d+):)?(\d+):(\d+(?:\.\d+)?) (\d+(?:\.\d+)?) (\d+(?:\.\d+)?)$/) {
		my $rt = ((defined $1 ? $1 : 0.0)*60+$2)*60+$3;
		return ($rt, $4, $5);
	# size
	} elsif ($line =~ /^\d+$/) {
		return $&;
	} else {
		die "bad input line: $line";
	}
}
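# Percentage change of $r against the baseline $firstr (e.g. "+5.0%");
# "=" when both are zero, "+inf" when the baseline is zero but $r is not.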
sub relative_change {
	my ($r, $firstr) = @_;
	if ($firstr > 0) {
		return sprintf "%+.1f%%", 100.0*($r-$firstr)/$firstr;
	} elsif ($r == 0) {
		return "=";
	} else {
		return "+inf";
	}
}
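# Render one cell of the report: "<missing>" when the test did not run,
# a human-readable size when only one value is present, otherwise
# "real(user+sys)" plus the change relative to the first column.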
sub format_times {
	my ($r, $u, $s, $firstr) = @_;
	# no value means we did not finish the test
	if (!defined $r) {
		return "<missing>";
	}
	# a single value means we have a size, not times
	if (!defined $u) {
		return format_size($r, $firstr);
	}
	# otherwise, we have real/user/system times
	my $out = sprintf "%.2f(%.2f+%.2f)", $r, $u, $s;
	$out .= ' ' . relative_change($r, $firstr) if defined $firstr;
	return $out;
}
sub usage {
	print <<EOT;
./aggregate.perl [options] [--] [<dir_or_rev>...] [--] [<test_script>...] >

Options:
  --codespeed          * Format output for Codespeed
  --reponame <str>     * Send given reponame to codespeed
  --sort-by <str>      * Sort output (only "regression" criteria is supported)
  --subsection <str>   * Use results from given subsection

EOT
	exit(1);
}
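# Shorten a size to a human-readable value with decimal K/M/G units,
# stepping up a unit once the number exceeds 900.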
sub human_size {
	my $n = shift;
	my @units = ('', qw(K M G));
	while ($n > 900 && @units > 1) {
		$n /= 1000;
		shift @units;
	}
	return $n unless length $units[0];
	return sprintf '%.1f%s', $n, $units[0];
}
sub format_size {
	my ($size, $first) = @_;
	# match the width of a time: 0.00(0.00+0.00)
	my $out = sprintf '%15s', human_size($size);
	$out .= ' ' . relative_change($size, $first) if defined $first;
	return $out;
}
sub sane_backticks {
	open(my $fh, '-|', @_);
	return <$fh>;
}
my (@dirs, %dirnames, %dirabbrevs, %prefixes, @tests,
	$codespeed, $sortby, $subsection, $reponame);
Getopt::Long::Configure qw/ require_order /;
my $rc = GetOptions("codespeed" => \$codespeed,
"reponame=s" => \$reponame,
"sort-by=s" => \$sortby,
"subsection=s" => \$subsection);
usage() unless $rc;
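# Consume the leading <dir_or_rev> arguments.  Anything that is not an
# existing directory is resolved with 'git rev-parse' and mapped to the
# build/<sha1> directory that the 'run' script populates; "." is the
# current build tree; any other directory is an installed git (prefix
# "bindir<path>").  Each directory gets a filename-safe prefix that is
# later used to find its result files.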
while (scalar @ARGV) {
	my $arg = $ARGV[0];
	my $dir;
	my $prefix = '';
	last if -f $arg or $arg eq "--";
	if (! -d $arg) {
		my $rev = sane_backticks(qw(git rev-parse --verify), $arg);
		chomp $rev;
$dir = "build/".$rev;
	} elsif ($arg eq '.') {
		$dir = '.';
	} else {
		$dir = realpath($arg);
		$dirnames{$dir} = $dir;
		$prefix .= 'bindir';
	}
	push @dirs, $dir;
	$dirnames{$dir} ||= $arg;
	$prefix .= $dir;
	$prefix =~ tr/^a-zA-Z0-9/_/c;
	$prefixes{$dir} = $prefix . '.';
	shift @ARGV;
}
if (not @dirs) {
	@dirs = ('.');
}
$dirnames{'.'} = $dirabbrevs{'.'} = "this tree";
$prefixes{'.'} = '';
shift @ARGV if scalar @ARGV and $ARGV[0] eq "--";
@tests = @ARGV;
if (not @tests) {
	@tests = glob "p????-*.sh";
}
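# Results live under test-results/, or under a per-subsection
# subdirectory when --subsection or GIT_PERF_SUBSECTION is set.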
my $resultsdir = "test-results";
if (! $subsection and
    exists $ENV{GIT_PERF_SUBSECTION} and
    $ENV{GIT_PERF_SUBSECTION} ne "") {
	$subsection = $ENV{GIT_PERF_SUBSECTION};
}
if ($subsection) {
	$resultsdir .= "/" . $subsection;
}
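# Each test script leaves a "<test>.subtests" file listing the numbers
# of its individual timed tests; expand the test list into those.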
my @subtests;
my %shorttests;
for my $t (@tests) {
	$t =~ s{(?:.*/)?(p(\d+)-[^/]+)\.sh$}{$1} or die "bad test name: $t";
	my $n = $2;
	my $fname = "$resultsdir/$t.subtests";
	open my $fp, "<", $fname or die "cannot open $fname: $!";
	for (<$fp>) {
		chomp;
		/^(\d+)$/ or die "malformed subtest line: $_";
		push @subtests, "$t.$1";
		$shorttests{"$t.$1"} = "$n.$1";
	}
	close $fp or die "cannot close $fname: $!";
}
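# The first line of a "<subtest>.descr" file is the human-readable
# description shown in the report.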
sub read_descr {
	my $name = shift;
	open my $fh, "<", $name or return "<error reading description>";
	binmode $fh, ":utf8" or die "PANIC on binmode: $!";
	my $line = <$fh>;
	close $fh or die "cannot close $name";
	chomp $line;
	return $line;
}
sub have_duplicate {
	my %seen;
	for (@_) {
		return 1 if exists $seen{$_};
		$seen{$_} = 1;
	}
	return 0;
}
sub have_slash {
	for (@_) {
		return 1 if m{/};
	}
	return 0;
}
sub display_dir {
	my ($d) = @_;
	return exists $dirabbrevs{$d} ? $dirabbrevs{$d} : $dirnames{$d};
}
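# Default output: one row per subtest, one column per directory/revision;
# every column after the first also shows the change relative to the
# first.  The shape of the output looks roughly like this (values are
# made up for illustration):
#
#   Test               this tree         origin/master
#   -----------------------------------------------------------
#   0001.1: rev-list   0.25(0.20+0.04)   0.30(0.24+0.05) +20.0%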
sub print_default_results {
	my %descrs;
	my $descrlen = 4; # "Test"
	for my $t (@subtests) {
		$descrs{$t} = $shorttests{$t}.": ".read_descr("$resultsdir/$t.descr");
		$descrlen = length $descrs{$t} if length $descrs{$t} > $descrlen;
	}
	my %newdirabbrevs = %dirabbrevs;
	while (!have_duplicate(values %newdirabbrevs)) {
		%dirabbrevs = %newdirabbrevs;
		last if !have_slash(values %dirabbrevs);
		%newdirabbrevs = %dirabbrevs;
		for (values %newdirabbrevs) {
			s{^[^/]*/}{};
		}
	}
	my %times;
	my @colwidth = ((0)x@dirs);
	for my $i (0..$#dirs) {
		my $w = length display_dir($dirs[$i]);
		$colwidth[$i] = $w if $w > $colwidth[$i];
	}
	for my $t (@subtests) {
		my $firstr;
		for my $i (0..$#dirs) {
			my $d = $dirs[$i];
			my $base = "$resultsdir/$prefixes{$d}$t";
			$times{$prefixes{$d}.$t} = [get_times("$base.result")];
			my ($r,$u,$s) = @{$times{$prefixes{$d}.$t}};
			my $w = length format_times($r,$u,$s,$firstr);
			$colwidth[$i] = $w if $w > $colwidth[$i];
			$firstr = $r unless defined $firstr;
		}
	}
	my $totalwidth = 3*@dirs+$descrlen;
	$totalwidth += $_ for (@colwidth);
	printf "%-${descrlen}s", "Test";
	for my $i (0..$#dirs) {
		printf "   %-$colwidth[$i]s", display_dir($dirs[$i]);
	}
	print "\n";
	print "-"x$totalwidth, "\n";
	for my $t (@subtests) {
		printf "%-${descrlen}s", $descrs{$t};
		my $firstr;
		for my $i (0..$#dirs) {
			my $d = $dirs[$i];
			my ($r,$u,$s) = @{$times{$prefixes{$d}.$t}};
			printf "   %-$colwidth[$i]s", format_times($r,$u,$s,$firstr);
			$firstr = $r unless defined $firstr;
		}
		print "\n";
	}
}
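# --sort-by=regression: compare each directory with the one given just
# before it, compute the slowdown in percent, and print the pairs sorted
# from the worst regression down.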
sub print_sorted_results {
	my ($sortby) = @_;
	if ($sortby ne "regression") {
		print "Only 'regression' is supported as '--sort-by' argument\n";
		usage();
	}
	my @evolutions;
	for my $t (@subtests) {
		my ($prevr, $prevu, $prevs, $prevrev);
		for my $i (0..$#dirs) {
			my $d = $dirs[$i];
			my ($r, $u, $s) = get_times("$resultsdir/$prefixes{$d}$t.result");
			if ($i > 0 and defined $r and defined $prevr and $prevr > 0) {
				my $percent = 100.0 * ($r - $prevr) / $prevr;
				push @evolutions, { "percent" => $percent,
						    "test" => $t,
						    "prevrev" => $prevrev,
						    "rev" => $d,
						    "prevr" => $prevr,
						    "r" => $r,
						    "prevu" => $prevu,
						    "u" => $u,
						    "prevs" => $prevs,
						    "s" => $s};
			}
			($prevr, $prevu, $prevs, $prevrev) = ($r, $u, $s, $d);
		}
	}
	my @sorted_evolutions = sort { $b->{percent} <=> $a->{percent} } @evolutions;
	for my $e (@sorted_evolutions) {
		printf "%+.1f%%", $e->{percent};
		print " " . $e->{test};
		print " " . format_times($e->{prevr}, $e->{prevu}, $e->{prevs});
		print " " . format_times($e->{r}, $e->{u}, $e->{s});
		print " " . display_dir($e->{prevrev});
		print " " . display_dir($e->{rev});
		print "\n";
	}
}
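# --codespeed: emit one JSON record per (subtest, directory) pair in the
# layout expected by a Codespeed instance.  The commit id is recovered
# from the "build_<sha1>." result prefix; the environment comes from
# --reponame, then GIT_PERF_REPO_NAME, then 'uname -r'.  Needs the JSON
# module at runtime.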
sub print_codespeed_results {
	my ($subsection) = @_;
	my $project = "Git";
	my $executable = `uname -s -m`;
	chomp $executable;
	if ($subsection) {
		$executable .= ", " . $subsection;
	}
	my $environment;
	if ($reponame) {
		$environment = $reponame;
	} elsif (exists $ENV{GIT_PERF_REPO_NAME} and $ENV{GIT_PERF_REPO_NAME} ne "") {
		$environment = $ENV{GIT_PERF_REPO_NAME};
	} else {
		$environment = `uname -r`;
		chomp $environment;
	}
	my @data;
	for my $t (@subtests) {
		for my $d (@dirs) {
			my $commitid = $prefixes{$d};
			$commitid =~ s/^build_//;
			$commitid =~ s/\.$//;
			my ($result_value, $u, $s) = get_times("$resultsdir/$prefixes{$d}$t.result");
			my %vals = (
				"commitid" => $commitid,
				"project" => $project,
				"branch" => $dirnames{$d},
				"executable" => $executable,
				"benchmark" => $shorttests{$t} . " " . read_descr("$resultsdir/$t.descr"),
				"environment" => $environment,
				"result_value" => $result_value,
			);
			push @data, \%vals;
		}
	}
	require JSON;
	print JSON::to_json(\@data, {utf8 => 1, pretty => 1, canonical => 1}), "\n";
}
binmode STDOUT, ":utf8" or die "PANIC on binmode: $!";
if ($codespeed) {
	print_codespeed_results($subsection);
} elsif (defined $sortby) {
	print_sorted_results($sortby);
} else {
	print_default_results();
}