Bug 1804453 - Move the benchmark setup logic into raptor Benchmark class. r=perftest-reviewers,AlexandruIonescu

This patch moves the benchmark setup logic out of the mach_commands.py file and into the Benchmark class in Raptor, which is built to handle our benchmark tests. At the same time, the code is completely reworked to be simpler to understand, and to use pathlib instead of os for path manipulations. The existing benchmark tests that use code from our perf-automation repository are then modified to make use of this new method (the associated fetch tasks are also removed). The tests which exist in-tree don't need modifications.
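The reworked flow described above can be sketched roughly as follows. This is an illustrative simplification with invented names (`resolve_benchmark_source` is not in the patch); the real dispatch lives in `Benchmark.setup_benchmarks` in the diff:

```python
import pathlib

# Rough sketch of the new dispatch (simplified, illustrative names): a test
# that declares a `repository` is cloned from GitHub into a cached clone,
# anything else is assumed to be an in-tree benchmark.
def resolve_benchmark_source(test, obj_path=None, fallback="."):
    # bench_dir starts at obj_path locally, or at the raptor install dir in CI
    bench_dir = pathlib.Path(obj_path) if obj_path else pathlib.Path(fallback)
    if test.get("repository") is not None:
        return "git", bench_dir      # handled by _setup_git_benchmarks
    return "in-tree", bench_dir      # handled by _setup_in_tree_benchmarks

kind, _ = resolve_benchmark_source(
    {"repository": "https://github.com/mozilla/perf-automation"},
    obj_path="/tmp/obj",
)
print(kind)  # git
```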

Differential Revision: https://phabricator.services.mozilla.com/D164365
Greg Mierzwinski 2023-01-05 14:57:33 +00:00
Parent 225b8b6ea8
Commit 0213874f77
18 changed files: 335 additions and 199 deletions


@ -10,31 +10,6 @@ octane:
sha256: 38425ee1abfc5feca178b2f60fbd82b5873897c345112a85359be00024402f9f
size: 1816138
unity-webgl:
description: unity-webgl benchmark
fetch:
type: static-url
url: https://github.com/mozilla/perf-automation/releases/download/unity-webgl-v4/unity-webgl-54c3c3d9d3f6.zip
sha256: a5777b878c6d91b461ba517b9f3b459c8c4485b53e988aa07f535f5bbf72c3d0
size: 27035352
assorted-dom:
description: assorted-dom benchmark
fetch:
type: static-url
url: https://github.com/mozilla/perf-automation/releases/download/assorted-dom-v2/assorted-dom-54c3c3d9d3f6.zip
sha256: d3580b06123212f163eeb355e3e37b8abd998606397b66cbe960646884537c68
size: 402074
wasm-misc:
description: wasm-misc benchmark
fetch:
type: static-url
artifact-name: wasm-misc.zip
url: https://github.com/mozilla/perf-automation/releases/download/wasm-misc-v2/wasm-misc-54c3c3d9d3f6.zip
sha256: 9149b31c781fd5ac9a62a7a92d8cd39536f1a6d82e1335966773d412344704b8
size: 4425799
web-tooling-benchmark:
description: Web Tooling Benchmark
fetch:
@ -42,27 +17,3 @@ web-tooling-benchmark:
url: https://github.com/mozilla/perf-automation/releases/download/V1/web-tooling-benchmark-b2ac25c897c9.zip
sha256: 93b0b51df0cec3ca9bfa0bdf81d782306dcf18532e39b3ff3180409125daaff1
size: 5444135
jetstream2:
description: JetStream2 Benchmark
fetch:
type: static-url
url: https://github.com/mozilla/perf-automation/releases/download/jetstream2-v1.2/jetstream2-54c3c3d9d3f6.zip
sha256: 978f5920f27099d4a6b854c5e4a50d2a706fad8b5082a6fdcab95bb71fb6dc12
size: 25481279
matrix-react-bench:
description: Matrix-React Benchmark
fetch:
type: static-url
url: https://github.com/mozilla/perf-automation/releases/download/matrix-react-bench-v1.1/matrix-react-bench-c51ca3fc73cd.zip
sha256: eef4503db50ee7e156225124b50b4e2af471ac76467078d17a7cdd24359e3d0c
size: 57514835
twitch-animation:
description: Twitch animation benchmark
fetch:
type: static-url
url: https://github.com/mozilla/perf-automation/releases/download/twitch-animation-v1/twitch-animation-61332db58402.zip
sha256: d071642d84a00d49ea0a8f2a5a631ce4267f514182d376a48810b9f32e6f076f
size: 466119


@ -337,13 +337,6 @@ browsertime-benchmark:
motionmark-htmlsuite: 1500
unity-webgl: 1500
default: 900
fetches:
fetch:
- assorted-dom
- jetstream2
- matrix-react-bench
- twitch-animation
- unity-webgl
mozharness:
extra-options:
- --extra-profiler-run
@ -379,9 +372,6 @@ browsertime-benchmark-wasm:
wasm-godot: 1500
wasm-godot-baseline: 1500
default: 900
fetches:
fetch:
- wasm-misc
mozharness:
extra-options:
- --extra-profiler-run


@ -331,9 +331,6 @@ browsertime-unity-webgl-mobile:
mozharness:
extra-options:
- --test=unity-webgl
fetches:
fetch:
- unity-webgl
browsertime-power:
description: Browsertime Power Usage Tests on Android


@ -698,6 +698,7 @@ class Raptor(
self.raptor_json_config = self.config.get("raptor_json_config")
self.repo_path = self.config.get("repo_path")
self.obj_path = self.config.get("obj_path")
self.mozbuild_path = self.config.get("mozbuild_path")
self.test = None
self.gecko_profile = self.config.get(
"gecko_profile"
@ -959,6 +960,8 @@ class Raptor(
kw_options["symbolsPath"] = self.symbols_path
if self.config.get("obj_path", None) is not None:
kw_options["obj-path"] = self.config["obj_path"]
if self.config.get("mozbuild_path", None) is not None:
kw_options["mozbuild-path"] = self.config["mozbuild_path"]
if self.test_url_params:
kw_options["test-url-params"] = self.test_url_params
if self.config.get("device_name") is not None:
@ -1271,6 +1274,8 @@ class Raptor(
env["MOZ_DEVELOPER_REPO_DIR"] = self.repo_path
if self.obj_path is not None:
env["MOZ_DEVELOPER_OBJ_DIR"] = self.obj_path
if self.mozbuild_path is not None:
env["MOZ_MOZBUILD_DIR"] = self.mozbuild_path
# Sets a timeout for how long Raptor should run without output
output_timeout = self.config.get("raptor_output_timeout", 3600)
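The two mozharness hunks above thread `mozbuild_path` from the harness config into both the raptor command line (`--mozbuild-path`) and the child environment (`MOZ_MOZBUILD_DIR`). A minimal sketch of the environment half, with an illustrative helper name not taken from the patch:

```python
# Sketch of how mozharness exports the developer paths to the raptor
# process (mirrors the hunk above; the helper name is ours).
def build_raptor_env(config, base_env=None):
    env = dict(base_env or {})
    if config.get("repo_path") is not None:
        env["MOZ_DEVELOPER_REPO_DIR"] = config["repo_path"]
    if config.get("obj_path") is not None:
        env["MOZ_DEVELOPER_OBJ_DIR"] = config["obj_path"]
    if config.get("mozbuild_path") is not None:
        env["MOZ_MOZBUILD_DIR"] = config["mozbuild_path"]
    return env

env = build_raptor_env({"mozbuild_path": "/home/user/.mozbuild"})
print(env["MOZ_MOZBUILD_DIR"])  # /home/user/.mozbuild
```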


@ -243,14 +243,16 @@ Standard benchmarks are third-party tests (i.e. Speedometer) that we have integr
* **alert threshold**: 2.0
* **apps**: firefox, chrome, chromium, safari
* **expected**: pass
* **fetch task**: assorted-dom
* **gecko profile entries**: 2000000
* **gecko profile interval**: 1
* **lower is better**: true
* **page cycles**: 1
* **page timeout**: 60000
* **repository**: https://github.com/mozilla/perf-automation
* **repository path**: benchmarks/assorted-dom
* **repository revision**: 61332db584026b73e37066d717a162825408c36b
* **screen capture**: true
* **test url**: `<http://\<host\>:\<port\>/assorted-dom/assorted/driver.html?raptor>`__
* **test url**: `<http://\<host\>:\<port\>/assorted/driver.html?raptor>`__
* **type**: benchmark
* **unit**: ms
* **Test Task**:
@ -437,15 +439,17 @@ Standard benchmarks are third-party tests (i.e. Speedometer) that we have integr
* **alert threshold**: 2.0
* **apps**: firefox, chrome, chromium, safari
* **expected**: pass
* **fetch task**: jetstream2
* **gecko profile entries**: 14000000
* **gecko profile interval**: 1
* **lower is better**: false
* **page cycles**: 4
* **page timeout**: 2000000
* **repository**: https://github.com/mozilla/perf-automation
* **repository path**: benchmarks/JetStream2
* **repository revision**: 61332db584026b73e37066d717a162825408c36b
* **subtest lower is better**: false
* **subtest unit**: score
* **test url**: `<http://\<host\>:\<port\>/JetStream2/index.html?raptor>`__
* **test url**: `<http://\<host\>:\<port\>/index.html?raptor>`__
* **type**: benchmark
* **unit**: score
* **Test Task**:
@ -632,15 +636,17 @@ Standard benchmarks are third-party tests (i.e. Speedometer) that we have integr
* **alert threshold**: 2.0
* **apps**: firefox, chrome, chromium
* **expected**: pass
* **fetch task**: matrix-react-bench
* **gecko profile entries**: 14000000
* **gecko profile interval**: 1
* **lower is better**: true
* **page cycles**: 30
* **page timeout**: 2000000
* **repository**: https://github.com/mozilla/perf-automation
* **repository path**: benchmarks/matrix-react-bench
* **repository revision**: 61332db584026b73e37066d717a162825408c36b
* **subtest lower is better**: true
* **subtest unit**: ms
* **test url**: `<http://\<host\>:\<port\>/matrix-react-bench/matrix_demo.html>`__
* **test url**: `<http://\<host\>:\<port\>/matrix_demo.html>`__
* **type**: benchmark
* **unit**: ms
* **Test Task**:
@ -2335,16 +2341,18 @@ Standard benchmarks are third-party tests (i.e. Speedometer) that we have integr
* **alert threshold**: 2.0
* **apps**: firefox
* **expected**: pass
* **fetch task**: twitch-animation
* **gecko profile entries**: 14000000
* **gecko profile interval**: 1
* **lower is better**: true
* **page cycles**: 1
* **page timeout**: 2000000
* **perfstats**: true
* **repository**: https://github.com/mozilla/perf-automation
* **repository path**: benchmarks/twitch-animation
* **repository revision**: 61332db584026b73e37066d717a162825408c36b
* **subtest lower is better**: true
* **subtest unit**: ms
* **test url**: `<http://\<host\>:\<port\>/twitch-animation/index.html>`__
* **test url**: `<http://\<host\>:\<port\>/index.html>`__
* **type**: benchmark
* **unit**: ms
* **Test Task**:
@ -2486,13 +2494,15 @@ Standard benchmarks are third-party tests (i.e. Speedometer) that we have integr
* **alert threshold**: 2.0
* **apps**: geckoview, refbrow, fenix, chrome-m
* **expected**: pass
* **fetch task**: unity-webgl
* **gecko profile entries**: 8000000
* **gecko profile interval**: 1
* **lower is better**: false
* **page cycles**: 1
* **page timeout**: 420000
* **test url**: `<http://\<host\>:\<port\>/unity-webgl/index.html?raptor>`__
* **repository**: https://github.com/mozilla/perf-automation
* **repository path**: benchmarks/unity-webgl
* **repository revision**: 61332db584026b73e37066d717a162825408c36b
* **test url**: `<http://\<host\>:\<port\>/index.html?raptor>`__
* **type**: benchmark
* **unit**: score
* **Test Task**:
@ -2769,13 +2779,15 @@ Standard benchmarks are third-party tests (i.e. Speedometer) that we have integr
* **alert threshold**: 2.0
* **apps**: firefox, chrome, chromium, safari
* **expected**: pass
* **fetch task**: unity-webgl
* **gecko profile entries**: 8000000
* **gecko profile interval**: 1
* **lower is better**: false
* **page cycles**: 5
* **page timeout**: 420000
* **test url**: `<http://\<host\>:\<port\>/unity-webgl/index.html?raptor>`__
* **repository**: https://github.com/mozilla/perf-automation
* **repository path**: benchmarks/unity-webgl
* **repository revision**: 61332db584026b73e37066d717a162825408c36b
* **test url**: `<http://\<host\>:\<port\>/index.html?raptor>`__
* **type**: benchmark
* **unit**: score
* **Test Task**:
@ -3542,13 +3554,15 @@ Standard benchmarks are third-party tests (i.e. Speedometer) that we have integr
* **alert threshold**: 2.0
* **apps**: firefox, chrome, chromium
* **expected**: pass
* **fetch task**: wasm-misc
* **gecko profile entries**: 4000000
* **gecko profile interval**: 1
* **lower is better**: true
* **page cycles**: 5
* **page timeout**: 1200000
* **test url**: `<http://\<host\>:\<port\>/wasm-misc/index.html?raptor>`__
* **repository**: https://github.com/mozilla/perf-automation
* **repository path**: benchmarks/wasm-misc
* **repository revision**: 61332db584026b73e37066d717a162825408c36b
* **test url**: `<http://\<host\>:\<port\>/index.html?raptor>`__
* **type**: benchmark
* **unit**: ms
* **Test Task**:
@ -3730,14 +3744,16 @@ Standard benchmarks are third-party tests (i.e. Speedometer) that we have integr
* **alert threshold**: 2.0
* **apps**: firefox
* **expected**: pass
* **fetch task**: wasm-misc
* **gecko profile entries**: 4000000
* **gecko profile interval**: 1
* **lower is better**: true
* **page cycles**: 5
* **page timeout**: 1200000
* **preferences**: {"javascript.options.wasm_baselinejit": true, "javascript.options.wasm_optimizingjit": false}
* **test url**: `<http://\<host\>:\<port\>/wasm-misc/index.html?raptor>`__
* **repository**: https://github.com/mozilla/perf-automation
* **repository path**: benchmarks/wasm-misc
* **repository revision**: 61332db584026b73e37066d717a162825408c36b
* **test url**: `<http://\<host\>:\<port\>/index.html?raptor>`__
* **type**: benchmark
* **unit**: ms
* **Test Task**:
@ -3879,14 +3895,16 @@ Standard benchmarks are third-party tests (i.e. Speedometer) that we have integr
* **alert threshold**: 2.0
* **apps**: firefox
* **expected**: pass
* **fetch task**: wasm-misc
* **gecko profile entries**: 4000000
* **gecko profile interval**: 1
* **lower is better**: true
* **page cycles**: 5
* **page timeout**: 1200000
* **preferences**: {"javascript.options.wasm_baselinejit": false, "javascript.options.wasm_optimizingjit": true}
* **test url**: `<http://\<host\>:\<port\>/wasm-misc/index.html?raptor>`__
* **repository**: https://github.com/mozilla/perf-automation
* **repository path**: benchmarks/wasm-misc
* **repository revision**: 61332db584026b73e37066d717a162825408c36b
* **test url**: `<http://\<host\>:\<port\>/index.html?raptor>`__
* **type**: benchmark
* **unit**: ms
* **Test Task**:


@ -9,12 +9,10 @@
import json
import logging
import os
import shutil
import socket
import subprocess
import sys
import mozfile
from mach.decorators import Command
from mach.util import get_state_dir
from mozbuild.base import BinaryNotFoundException
@ -23,9 +21,6 @@ from mozbuild.base import MozbuildObject
HERE = os.path.dirname(os.path.realpath(__file__))
BENCHMARK_REPOSITORY = "https://github.com/mozilla/perf-automation"
BENCHMARK_REVISION = "61332db584026b73e37066d717a162825408c36b"
ANDROID_BROWSERS = ["geckoview", "refbrow", "fenix", "chrome-m"]
@ -45,7 +40,6 @@ class RaptorRunner(MozbuildObject):
"Please downgrade your Python version as Raptor does not yet support Python 3.10"
)
self.init_variables(raptor_args, kwargs)
self.setup_benchmarks()
self.make_config()
self.write_config()
self.make_args()
@ -88,63 +82,6 @@ class RaptorRunner(MozbuildObject):
)
self.virtualenv_path = os.path.join(self._topobjdir, "testing", "raptor-venv")
def setup_benchmarks(self):
"""Make sure benchmarks are linked to the proper location in the objdir.
Benchmarks can either live in-tree or in an external repository. In the latter
case also clone/update the repository if necessary.
"""
external_repo_path = os.path.join(get_state_dir(), "performance-tests")
print("Updating external benchmarks from {}".format(BENCHMARK_REPOSITORY))
try:
subprocess.check_output(["git", "--version"])
except Exception as ex:
print(
"Git is not available! Please install git and "
"ensure it is included in the terminal path"
)
raise ex
if not os.path.isdir(external_repo_path):
print("Cloning the benchmarks to {}".format(external_repo_path))
subprocess.check_call(
["git", "clone", BENCHMARK_REPOSITORY, external_repo_path]
)
else:
subprocess.check_call(["git", "checkout", "master"], cwd=external_repo_path)
subprocess.check_call(["git", "pull"], cwd=external_repo_path)
subprocess.check_call(
["git", "checkout", BENCHMARK_REVISION], cwd=external_repo_path
)
# Link or copy benchmarks to the objdir
benchmark_paths = (
os.path.join(external_repo_path, "benchmarks"),
os.path.join(self.topsrcdir, "third_party", "webkit", "PerformanceTests"),
)
benchmark_dest = os.path.join(self.topobjdir, "testing", "raptor", "benchmarks")
if not os.path.isdir(benchmark_dest):
os.makedirs(benchmark_dest)
for benchmark_path in benchmark_paths:
for name in os.listdir(benchmark_path):
path = os.path.join(benchmark_path, name)
dest = os.path.join(benchmark_dest, name)
if not os.path.isdir(path) or name.startswith("."):
continue
if hasattr(os, "symlink") and os.name != "nt":
if not os.path.exists(dest):
os.symlink(path, dest)
else:
# Clobber the benchmark in case a recent update removed any files.
mozfile.remove(dest)
shutil.copytree(path, dest)
def make_config(self):
default_actions = [
"populate-webroot",


@ -3,14 +3,18 @@
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import os
import pathlib
import shutil
import socket
import subprocess
import tempfile
import mozfile
from logger.logger import RaptorLogger
from wptserve import handlers, server
LOG = RaptorLogger(component="raptor-benchmark")
here = os.path.abspath(os.path.dirname(__file__))
here = pathlib.Path(__file__).parent.resolve()
class Benchmark(object):
@ -20,42 +24,15 @@ class Benchmark(object):
self.config = config
self.test = test
# bench_dir is where we will download all mitmproxy required files
# when running locally it comes from obj_path via mozharness/mach
if self.config.get("obj_path", None) is not None:
self.bench_dir = self.config.get("obj_path")
else:
# in production it is ../tasks/task_N/build/tests/raptor/raptor/...
# 'here' is that path, we can start with that
self.bench_dir = here
# now add path for benchmark source; locally we put it in a raptor benchmarks
# folder; in production the files are automatically copied to a different dir
if self.config.get("run_local", False):
self.bench_dir = os.path.join(
self.bench_dir, "testing", "raptor", "benchmarks"
)
else:
self.bench_dir = os.path.join(
self.bench_dir, "tests", "webkit", "PerformanceTests"
)
# Some benchmarks may have been downloaded from a fetch task, make
# sure they get copied over.
fetches_dir = os.environ.get("MOZ_FETCHES_DIR")
if (
test.get("fetch_task", False)
and fetches_dir
and os.path.isdir(fetches_dir)
):
for name in os.listdir(fetches_dir):
if test.get("fetch_task").lower() in name.lower():
path = os.path.join(fetches_dir, name)
if os.path.isdir(path):
shutil.copytree(path, os.path.join(self.bench_dir, name))
self.setup_benchmarks(
os.getenv("MOZ_DEVELOPER_REPO_DIR"),
os.getenv("MOZ_MOZBUILD_DIR"),
run_local=self.config.get("run_local", False),
)
LOG.info(f"bench_dir: {self.bench_dir}")
LOG.info("bench_dir contains:")
LOG.info(os.listdir(self.bench_dir))
LOG.info(list(self.bench_dir.iterdir()))
# now have the benchmark source ready, go ahead and serve it up!
self.start_http_server()
@ -78,8 +55,8 @@ class Benchmark(object):
# to add specific headers for serving files via wptserve, write out a headers dir file
# see http://wptserve.readthedocs.io/en/latest/handlers.html#file-handlers
LOG.info("writing wptserve headers file")
headers_file = os.path.join(self.bench_dir, "__dir__.headers")
file = open(headers_file, "w")
headers_file = pathlib.Path(self.bench_dir, "__dir__.headers")
file = headers_file.open("w")
file.write("Access-Control-Allow-Origin: *")
file.close()
LOG.info("wrote wpt headers file: %s" % headers_file)
@ -92,10 +69,235 @@ class Benchmark(object):
return server.WebTestHttpd(
host=self.host,
port=int(self.port),
doc_root=self.bench_dir,
doc_root=str(self.bench_dir),
routes=[("GET", "*", handlers.file_handler)],
)
def stop_serve(self):
LOG.info("TODO: stop serving benchmark source")
pass
def _full_clone(self, benchmark_repository, dest):
subprocess.check_call(
["git", "clone", benchmark_repository, str(dest.resolve())]
)
def _get_benchmark_folder(self, benchmark_dest, run_local):
if not run_local:
# If the test didn't specify a repo and we're in CI
# then we'll find them here and we don't need to do anything else
return pathlib.Path(benchmark_dest, "tests", "webkit", "PerformanceTests")
return pathlib.Path(benchmark_dest, "testing", "raptor", "benchmarks")
def _sparse_clone(self, benchmark_repository, dest):
"""Get a partial clone of the repo.
This needs git version 2.30+, so it's currently unused, but it works.
See bug 1804694. This method should only be used in CI; locally we
can simply pull the whole repo.
"""
subprocess.check_call(
[
"git",
"clone",
"--depth",
"1",
"--filter",
"blob:none",
"--sparse",
benchmark_repository,
str(dest.resolve()),
]
)
subprocess.check_call(
[
"git",
"sparse-checkout",
"set",
self.test.get("repository_path", "benchmarks"),
],
cwd=dest,
)
def _copy_or_link_files(
self, benchmark_path, benchmark_dest, skip_files_and_hidden=True
):
if not benchmark_dest.exists():
benchmark_dest.mkdir(parents=True, exist_ok=True)
dest = pathlib.Path(benchmark_dest, benchmark_path.name)
if hasattr(os, "symlink") and os.name != "nt":
if not dest.exists():
os.symlink(benchmark_path, dest)
else:
# Clobber the benchmark in case a recent update removed any files.
mozfile.remove(dest)
shutil.copytree(benchmark_path, dest)
if any(path.is_file() for path in benchmark_path.iterdir()):
# Host the parent of this directory to prevent hosting issues
# (e.g. linked files ending up with different routes)
host_folder = dest.parent
self.test["test_url"] = self.test["test_url"].replace(
"<port>/", f"<port>/{benchmark_path.name}/"
)
dest = host_folder
return dest
def _verify_benchmark_revision(self, benchmark_revision, external_repo_path):
try:
# Check if the given revision is valid
subprocess.check_call(
["git", "rev-parse", "--verify", f"{benchmark_revision}^{{commit}}"],
cwd=external_repo_path,
)
LOG.info("Given benchmark repository revision verified")
except Exception:
LOG.error(
f"Given revision doesn't exist in this repository: {benchmark_revision}"
)
raise
def _update_benchmark_repo(self, external_repo_path):
default_branch = self.test.get("repository_branch", None)
if default_branch is None:
try:
# Get the default branch name, and check if it's been updated
default_branch = (
subprocess.check_output(
["git", "rev-parse", "--abbrev-ref", "origin/HEAD"],
cwd=external_repo_path,
)
.decode("utf-8")
.strip()
.split("/")[-1]
)
remote_default_branch = (
subprocess.check_output(
["git", "remote", "set-head", "origin", "-a"],
cwd=external_repo_path,
)
.decode("utf-8")
.strip()
)
if default_branch not in remote_default_branch:
default_branch = remote_default_branch.split()[-1]
except Exception:
LOG.critical("Failed to find the default branch of the repository!")
raise
else:
LOG.info(f"Using non-default branch {default_branch}")
try:
subprocess.check_call(["git", "pull", "--all"], cwd=external_repo_path)
except subprocess.CalledProcessError:
LOG.info("Failed to pull new branches from remote")
LOG.info(external_repo_path)
subprocess.check_call(
["git", "checkout", default_branch], cwd=external_repo_path
)
subprocess.check_call(["git", "pull"], cwd=external_repo_path)
def _setup_git_benchmarks(self, mozbuild_path, benchmark_dest, run_local=True):
"""Set up a benchmark from a GitHub repository."""
benchmark_repository = self.test["repository"]
benchmark_revision = self.test["repository_revision"]
# Specifies where we can find the benchmark within the cloned repo, this is the
# folder that will be hosted to run the test. If it isn't given, we'll host the
# root of the repository.
benchmark_repo_path = self.test.get("repository_path", "")
# Get the performance-tests cache (if it exists), otherwise create a temp folder
if mozbuild_path is None:
mozbuild_path = tempfile.mkdtemp()
external_repo_path = pathlib.Path(
mozbuild_path, "performance-tests", benchmark_repository.split("/")[-1]
)
try:
subprocess.check_output(["git", "--version"])
except Exception as ex:
LOG.info(
"Git is not available! Please install git and "
"ensure it is included in the terminal path"
)
raise ex
if not external_repo_path.is_dir():
LOG.info("Cloning the benchmarks to {}".format(external_repo_path))
# Bug 1804694 - Use sparse checkouts instead of full clones
# Locally, we should always do a full clone
self._full_clone(benchmark_repository, external_repo_path)
else:
self._update_benchmark_repo(external_repo_path)
self._verify_benchmark_revision(benchmark_revision, external_repo_path)
subprocess.check_call(
["git", "checkout", benchmark_revision], cwd=external_repo_path
)
benchmark_dest = pathlib.Path(
self._get_benchmark_folder(benchmark_dest, run_local), self.test["name"]
)
benchmark_dest = self._copy_or_link_files(
pathlib.Path(external_repo_path, benchmark_repo_path),
benchmark_dest,
skip_files_and_hidden=False,
)
return benchmark_dest
def _setup_in_tree_benchmarks(self, topsrc_path, benchmark_dest, run_local=True):
"""Set up a benchmark that is found in-tree.
This method will be deprecated once bug 1804578 (copying our in-tree
benchmarks into a repo) is resolved, so that we have a standard way of
running benchmarks.
"""
benchmark_dest = self._get_benchmark_folder(benchmark_dest, run_local)
if not run_local:
# If the test didn't specify a repo and we're in CI
# then we'll find them here and we don't need to do anything else
return benchmark_dest
benchmark_dest = self._copy_or_link_files(
pathlib.Path(topsrc_path, "third_party", "webkit", "PerformanceTests"),
benchmark_dest,
)
return benchmark_dest
def setup_benchmarks(
self,
topsrc_path,
mozbuild_path,
run_local=True,
):
"""Make sure benchmarks are linked to the proper location in the objdir.
Benchmarks can either live in-tree or in an external repository. In the latter
case also clone/update the repository if necessary.
"""
# bench_dir is where we will download all mitmproxy required files
# when running locally it comes from obj_path via mozharness/mach
if self.config.get("obj_path", None) is not None:
bench_dir = pathlib.Path(self.config.get("obj_path"))
else:
# in production it is ../tasks/task_N/build/tests/raptor/raptor/...
# 'here' is that path, we can start with that
bench_dir = pathlib.Path(here)
if self.test.get("repository", None) is not None:
# Setup benchmarks that are found on Github
bench_dir = self._setup_git_benchmarks(
mozbuild_path, bench_dir, run_local=run_local
)
else:
# Setup the benchmarks that are available in-tree
bench_dir = self._setup_in_tree_benchmarks(
topsrc_path, bench_dir, run_local=run_local
)
self.bench_dir = bench_dir


@ -339,6 +339,12 @@ def create_parser(mach_interface=False):
default=None,
help="Browser-build obj_path (received when running in production)",
)
add_arg(
"--mozbuild-path",
dest="mozbuild_path",
default=None,
help="This contains the path to mozbuild.",
)
add_arg(
"--noinstall",
dest="noinstall",


@ -130,6 +130,18 @@ def validate_test_ini(test_details):
# and also remove duplicates if any, by converting valid_alerts to a 'set' first
test_details["alert_on"] = sorted(set(valid_alerts))
# if repository is defined, then a revision also needs to be defined
# the path is optional and we'll default to the root of the repo
if test_details.get("repository", None) is not None:
if test_details.get("repository_revision", None) is None:
LOG.error(
"`repository_revision` is required when a `repository` is defined."
)
valid_settings = False
elif test_details.get("type") not in ("benchmark",):
LOG.error("`repository` is only available for benchmark test types.")
valid_settings = False
return valid_settings
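The new manifest rule added to `validate_test_ini` can be restated in isolation like this. The function name is ours, not from the patch; it only mirrors the two checks in the hunk above:

```python
# Illustrative restatement of the new manifest rule: a `repository` requires
# a `repository_revision`, and is only allowed on benchmark-type tests.
def repository_settings_valid(test_details):
    if test_details.get("repository") is None:
        return True  # nothing to validate
    if test_details.get("repository_revision") is None:
        return False  # a revision is mandatory alongside a repository
    return test_details.get("type") == "benchmark"

print(repository_settings_valid({
    "repository": "https://github.com/mozilla/perf-automation",
    "repository_revision": "61332db584026b73e37066d717a162825408c36b",
    "type": "benchmark",
}))  # True
```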


@ -7,7 +7,6 @@
[DEFAULT]
alert_threshold = 2.0
apps = firefox, chrome, chromium, safari
fetch_task = assorted-dom
gecko_profile_entries = 2000000
gecko_profile_interval = 1
lower_is_better = true
@ -15,8 +14,11 @@ owner = PerfTest Team
page_cycles = 1
page_timeout = 60000
screen_capture = true
test_url = http://<host>:<port>/assorted-dom/assorted/driver.html?raptor
test_url = http://<host>:<port>/assorted/driver.html?raptor
type = benchmark
unit = ms
[assorted-dom]
repository = https://github.com/mozilla/perf-automation
repository_revision = 61332db584026b73e37066d717a162825408c36b
repository_path = benchmarks/assorted-dom


@ -7,7 +7,6 @@
[DEFAULT]
alert_threshold = 2.0
apps = firefox, chrome, chromium, safari
fetch_task = jetstream2
gecko_profile_entries = 14000000
gecko_profile_interval = 1
lower_is_better = false
@ -16,8 +15,11 @@ page_cycles = 4
page_timeout = 2000000
subtest_lower_is_better = false
subtest_unit = score
test_url = http://<host>:<port>/JetStream2/index.html?raptor
test_url = http://<host>:<port>/index.html?raptor
type = benchmark
unit = score
[jetstream2]
repository = https://github.com/mozilla/perf-automation
repository_revision = 61332db584026b73e37066d717a162825408c36b
repository_path = benchmarks/JetStream2


@ -7,7 +7,6 @@
[DEFAULT]
alert_threshold = 2.0
apps = firefox, chrome, chromium
fetch_task = matrix-react-bench
gecko_profile_entries = 14000000
gecko_profile_interval = 1
lower_is_better = true
@ -16,8 +15,11 @@ page_cycles = 30
page_timeout = 2000000
subtest_lower_is_better = true
subtest_unit = ms
test_url = http://<host>:<port>/matrix-react-bench/matrix_demo.html
test_url = http://<host>:<port>/matrix_demo.html
type = benchmark
unit = ms
[matrix-react-bench]
repository = https://github.com/mozilla/perf-automation
repository_revision = 61332db584026b73e37066d717a162825408c36b
repository_path = benchmarks/matrix-react-bench


@ -7,7 +7,6 @@
[DEFAULT]
alert_threshold = 2.0
apps = firefox
fetch_task = twitch-animation
gecko_profile_entries = 14000000
gecko_profile_interval = 1
lower_is_better = true
@ -16,9 +15,12 @@ page_cycles = 1
page_timeout = 2000000
subtest_lower_is_better = true
subtest_unit = ms
test_url = http://<host>:<port>/twitch-animation/index.html
test_url = http://<host>:<port>/index.html
type = benchmark
unit = ms
perfstats = true
[twitch-animation]
repository = https://github.com/mozilla/perf-automation
repository_revision = 61332db584026b73e37066d717a162825408c36b
repository_path = benchmarks/twitch-animation


@ -7,15 +7,17 @@
[DEFAULT]
alert_threshold = 2.0
apps = firefox, chrome, chromium, safari
fetch_task = unity-webgl
gecko_profile_entries = 8000000
gecko_profile_interval = 1
lower_is_better = false
owner = :jgilbert and Graphics(gfx) Team
page_cycles = 5
page_timeout = 420000 # 7 mins
test_url = http://<host>:<port>/unity-webgl/index.html?raptor
test_url = http://<host>:<port>/index.html?raptor
type = benchmark
unit = score
[unity-webgl]
repository = https://github.com/mozilla/perf-automation
repository_revision = 61332db584026b73e37066d717a162825408c36b
repository_path = benchmarks/unity-webgl


@ -7,15 +7,17 @@
[DEFAULT]
alert_threshold = 2.0
apps = geckoview, refbrow, fenix, chrome-m
fetch_task = unity-webgl
gecko_profile_entries = 8000000
gecko_profile_interval = 1
lower_is_better = false
owner = :jgilbert and Graphics(gfx) Team
page_cycles = 1
page_timeout = 420000 # 7 mins
test_url = http://<host>:<port>/unity-webgl/index.html?raptor
test_url = http://<host>:<port>/index.html?raptor
type = benchmark
unit = score
[unity-webgl]
repository = https://github.com/mozilla/perf-automation
repository_revision = 61332db584026b73e37066d717a162825408c36b
repository_path = benchmarks/unity-webgl


@ -6,14 +6,13 @@
[DEFAULT]
alert_threshold = 2.0
fetch_task = wasm-misc
gecko_profile_entries = 4000000
gecko_profile_interval = 1
lower_is_better = true
owner = :lth and SpiderMonkey Team
page_cycles = 5
page_timeout = 1200000
test_url = http://<host>:<port>/wasm-misc/index.html?raptor
test_url = http://<host>:<port>/index.html?raptor
type = benchmark
unit = ms
@ -21,3 +20,6 @@ unit = ms
apps = firefox
preferences = {"javascript.options.wasm_baselinejit": true,
"javascript.options.wasm_optimizingjit": false}
repository = https://github.com/mozilla/perf-automation
repository_revision = 61332db584026b73e37066d717a162825408c36b
repository_path = benchmarks/wasm-misc


@ -6,14 +6,13 @@
[DEFAULT]
alert_threshold = 2.0
fetch_task = wasm-misc
gecko_profile_entries = 4000000
gecko_profile_interval = 1
lower_is_better = true
owner = :lth and SpiderMonkey Team
page_cycles = 5
page_timeout = 1200000
test_url = http://<host>:<port>/wasm-misc/index.html?raptor
test_url = http://<host>:<port>/index.html?raptor
type = benchmark
unit = ms
@ -21,3 +20,6 @@ unit = ms
apps = firefox
preferences = {"javascript.options.wasm_baselinejit": false,
"javascript.options.wasm_optimizingjit": true}
repository = https://github.com/mozilla/perf-automation
repository_revision = 61332db584026b73e37066d717a162825408c36b
repository_path = benchmarks/wasm-misc


@ -9,13 +9,15 @@ alert_threshold = 2.0
apps = firefox, chrome, chromium
gecko_profile_entries = 4000000
gecko_profile_interval = 1
fetch_task = wasm-misc
lower_is_better = true
owner = :lth and SpiderMonkey Team
page_cycles = 5
page_timeout = 1200000
test_url = http://<host>:<port>/wasm-misc/index.html?raptor
test_url = http://<host>:<port>/index.html?raptor
type = benchmark
unit = ms
[wasm-misc]
repository = https://github.com/mozilla/perf-automation
repository_revision = 61332db584026b73e37066d717a162825408c36b
repository_path = benchmarks/wasm-misc