Backed out changeset 448679382b06 (bug 1649497) for linting failures on framework_gatherers.py. CLOSED TREE

This commit is contained in:
Csoregi Natalia 2020-07-14 20:20:48 +03:00
Parent 0e25e01126
Commit 2d34e224ae
17 changed files with 45 additions and 391 deletions

View file

@ -38,6 +38,7 @@ categories:
- tools/fuzzing
- tools/sanitizer
- testing/perfdocs
- testing/perftest
- tools/code-coverage
- testing-rust-code
l10n_doc:

View file

@ -10,3 +10,4 @@ mozperftest
running
writing
developing

View file

@ -1,8 +1,7 @@
# -*- Mode: python; indent-tabs-mode: nil; tab-width: 40 -*-
# vim: set filetype=python:
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
---
name: mozperftest
manifest: None
static-only: True
suites: {}
SPHINX_TREES['/testing/perftest'] = 'docs'

View file

@ -1,154 +0,0 @@
Developing in mozperftest
=========================
Architecture overview
---------------------
`mozperftest` implements a mach command that is a thin wrapper on top of
`runner.py`, which allows us to run the tool without having to go through a
mach call. Command arguments are prepared in `argparser.py` and then made
available to the runner.
The runner creates a `MachEnvironment` instance (see `environment.py`) and a
`Metadata` instance (see `metadata.py`). These two objects are shared during the
whole test and used to share data across all parts.
The runner then calls `MachEnvironment.run`, which is in charge of running the test.
The `MachEnvironment` instance runs a sequence of **layers**.
Layers are classes, each responsible for a single aspect of a performance test. They
are organized into three categories:
- **system**: anything that sets up and tears down some resources or services
  on the system. Existing system layers: **android**, **proxy**
- **test**: layers that are in charge of running a test to collect metrics.
  Existing test layers: **browsertime** and **androidlog**
- **metrics**: layers that process the collected data to turn it into usable
  metrics. Existing metrics layers: **perfherder** and **console**
The MachEnvironment instance collects a series of layers for each category and
runs them sequentially.
The goal of this organization is to make it possible to add new performance test runners
based on a specific combination of layers. To avoid messy code,
we need to make sure that each layer represents a single aspect of the process
and is completely independent from other layers (besides sharing data
through the common environment).
For instance, we could use `perftest` to run a C++ benchmark by implementing a
new **test** layer.
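Conceptually, a run boils down to the simplified sketch below. This is illustrative
only, not the actual `runner.py` code; the `gather_layers` helper and the constructor
signatures are hypothetical::

    # Illustrative sketch only -- not the real mozperftest runner.
    # gather_layers() and the constructor signatures are hypothetical.
    def run_tests(mach_cmd, args):
        env = MachEnvironment(mach_cmd, **args)   # shared by every layer
        metadata = Metadata(mach_cmd, env)        # shared data and results

        # system -> test -> metrics: each category contributes layers
        # that are executed sequentially.
        for layer in gather_layers(env, args):
            layer.setup()
            try:
                layer.run(metadata)
            finally:
                layer.teardown()
        return metadata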
Layer
-----
A layer is a class that inherits from `mozperftest.layers.Layer` and implements
a few methods and class variables.
List of methods and variables:
- `name`: name of the layer (class variable, mandatory)
- `activated`: boolean that activates the layer by default (class variable, defaults to `False`)
- `user_exception`: will trigger the `on_exception` hook when an exception occurs
- `arguments`: dict containing arguments. Each argument follows
  the `argparser` standard
- `run(self, metadata)`: called to execute the layer
- `setup(self)`: called when the layer is about to be executed
- `teardown(self)`: called when the layer is exiting
Example::
    class EmailSender(Layer):
        """Sends an email with the results."""

        name = "email"
        activated = False

        arguments = {
            "recipient": {
                "type": str,
                "default": "tarek@mozilla.com",
                "help": "Recipient",
            },
        }

        def setup(self):
            self.server = smtplib.SMTP(smtp_server, port)

        def teardown(self):
            self.server.quit()

        def __call__(self, metadata):
            self.server.send_email(self.get_arg("recipient"), metadata.results())
The layer can then be added to one of the top-level functions that are used to create
the list of layers for each category:
- **mozperftest.metrics.pick_metrics** for the metrics category
- **mozperftest.system.pick_system** for the system category
- **mozperftest.test.pick_browser** for the test category
It must also be added to the `get_layers` function of the corresponding category.
The `get_layers` functions are invoked when building the argument parser.
In our example, adding the `EmailSender` layer will add two new options:
- **--email**, a flag to activate the layer
- **--email-recipient**, to override the default recipient
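Assuming the layer is registered as described above, it can then be activated from
the command line (the script name and recipient are placeholders)::

    $ ./mach perftest --email --email-recipient qa@example.com perftest_script.js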
Important layers
----------------
**mozperftest** can be used to run performance tests against browsers using the
**browsertime** test layer. It leverages the `browsertime.js
<https://www.sitespeed.io/documentation/browsertime/>`_ framework and provides
a full integration into Mozilla's build and CI systems.
Browsertime uses the Selenium WebDriver client to drive the browser and
provides metrics to measure performance during a user journey.
Coding style
------------
For the coding style, we want to:
- Follow `PEP 257 <https://www.python.org/dev/peps/pep-0257/>`_ for docstrings
- Avoid complexity as much as possible
- Use modern Python 3 code (for instance `pathlib` instead of `os.path`)
- Avoid dependencies on Mozilla build projects and frameworks as much as possible
  (mozharness, mozbuild, etc.), or make sure they are isolated and documented
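For instance, prefer `pathlib` over `os.path` for filesystem work; a small
illustration (the path is made up)::

    from pathlib import Path

    # Path objects replace os.path.join/os.path.exists string juggling
    config = Path("python") / "mozperftest" / "perfdocs" / "config.yml"
    if config.exists():
        content = config.read_text()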
Landing patches
---------------
.. warning::
It is mandatory for each patch to have a test. Any change without a test
will be rejected.
Before landing a patch for mozperftest, make sure you run `perftest-test`::
% ./mach perftest-test
=> black [OK]
=> flake8 [OK]
=> remove old coverage data [OK]
=> running tests [OK]
=> coverage
Name Stmts Miss Cover Missing
------------------------------------------------------------------------------------------
mozperftest/metrics/notebook/analyzer.py 29 20 31% 26-36, 39-42, 45-51
...
mozperftest/system/proxy.py 37 0 100%
------------------------------------------------------------------------------------------
TOTAL 1614 240 85%
[OK]
The command runs `black` and `flake8`, and also makes sure that the test coverage has not regressed.
You can use the `-s` option to bypass flake8/black and speed up your workflow, but make
sure you do a full test run before landing. You can also pass the name of a single test module.
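For example, to skip the linters and run a single test module (the module name here
is only an illustration)::

    % ./mach perftest-test -s test_proxy.py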

View file

@ -4,7 +4,6 @@ Performance Testing
Below you can find links to the various documentation that exists for performance testing and the associated tests.
* :doc:`mozperftest`
* :doc:`raptor`
For more information please see this `wiki page <https://wiki.mozilla.org/TestEngineering/Performance>`_.

View file

@ -1,12 +0,0 @@
===========
mozperftest
===========
**mozperftest** can be used to run performance tests.
.. toctree::
running
writing
developing

View file

@ -1,30 +0,0 @@
Running a performance test
==========================
You can run `perftest` locally or in Mozilla's CI.
Running locally
---------------
Running a test is as simple as calling it using `mach perftest` in a mozilla-central source
checkout::
$ ./mach perftest perftest_script.js
The mach command will bootstrap the installation of all required tools for the framework to run.
You can use `--help` to find out about all options.
Running in the CI
-----------------
You can run in the CI directly from the `mach perftest` command by adding the `--push-to-try` option
to a locally working perftest call. We have phones on Bitbar that can run your Android tests.
Tests are fairly fast to run in the CI because they use sparse profiles. Depending on the
availability of workers, once the task starts, it takes around 15 minutes for the test itself to begin.
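For example (the script name is a placeholder)::

    $ ./mach perftest perftest_script.js --push-to-try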
.. warning::
If you plan to run tests often in the CI for Android, you should contact the Android
infra team to make sure there's availability in our pool of devices.

View file

@ -1,132 +0,0 @@
Writing a browsertime test
==========================
With the browsertime layer, performance scenarios are Node modules that
implement at least one async function that will be called by the framework once
the browser has started. The function gets a WebDriver session and can interact
with the browser.
You can write complex, interactive scenarios to simulate a user journey,
and collect various metrics.
Full documentation is available `here <https://www.sitespeed.io/documentation/sitespeed.io/scripting/>`_.
The mozilla-central repository has a few performance test scripts in
`testing/performance`, and more should be added in components in the future.
By convention, a performance test is prefixed with **perftest_** to be
recognized by the `perftest` command.
A performance test implements at least one async function published in node's
`module.exports` as `test`. The function receives two objects:
- **context**, which contains:
- **options** - All the options sent from the CLI to Browsertime
- **log** - an instance of the log system so you can log from your navigation script
- **index** - the index of the runs, so you can keep track of which run you are currently on
- **storageManager** - The Browsertime storage manager that can help you read/store files to disk
- **selenium.webdriver** - The Selenium WebDriver public API object
- **selenium.driver** - The instantiated version of the WebDriver driving the current version of the browser
- **command** provides an API to interact with the browser. It's a wrapper
  around the Selenium client. `Full documentation here <https://www.sitespeed.io/documentation/sitespeed.io/scripting/#commands>`_.
Below is an example of a test that visits the BBC homepage and clicks on a link.
.. sourcecode:: javascript
"use strict";
async function setUp(context) {
context.log.info("setUp example!");
}
async function test(context, commands) {
await commands.navigate("https://www.bbc.com/");
// Wait for browser to settle
await commands.wait.byTime(10000);
// Start the measurement
await commands.measure.start("pageload");
// Click on the link and wait for page complete check to finish.
await commands.click.byClassNameAndWait("block-link__overlay-link");
// Stop and collect the measurement
await commands.measure.stop();
}
async function tearDown(context) {
context.log.info("tearDown example!");
}
module.exports = {
setUp,
test,
tearDown,
owner: "Performance Team",
test_name: "BBC",
description: "Measures pageload performance when clicking on a link from the bbc.com",
supportedBrowsers: "Any",
supportePlatforms: "Any",
};
Besides the `test` function, scripts can implement a `setUp` and a `tearDown` function to run
some code before and after the test. Those functions will be called just once, whereas
the `test` function might be called several times (through the `iterations` option).
You must also provide metadata information about the test:
- **owner**: name of the owner (person or team)
- **name**: name of the test
- **description**: short description
- **longDescription**: longer description
- **usage**: options used to run the test
- **supportedBrowsers**: list of supported browsers (or "Any")
- **supportedPlatforms**: list of supported platforms (or "Any")
Hooks
-----
A Python module can be used to run functions during a run lifecycle. Available hooks are:
- **before_iterations(args)** runs before everything is started. Gets the args, which
  can be changed.
- **before_runs(env)** runs before the test is launched. Can be used to
  change the running environment.
- **after_runs(env)** runs after the test is done.
- **on_exception(env, layer, exception)** called on any exception. Provides the
  layer in which the exception occurred, and the exception. If the hook returns `True`,
  the exception is ignored and the test resumes. If the hook returns `False`, the
  exception is ignored and the test ends immediately. The hook can also re-raise the
  exception or raise its own exception (see the sketch at the end of this section).
In the example below, the `before_runs` hook sets the options on the fly,
so users don't have to provide them on the command line::

    from mozperftest.browser.browsertime import add_options

    url = "'https://www.example.com'"

    common_options = [
        ("processStartTime", "true"),
        ("firefox.disableBrowsertimeExtension", "true"),
        ("firefox.android.intentArgument", "'-a'"),
        ("firefox.android.intentArgument", "'android.intent.action.VIEW'"),
        ("firefox.android.intentArgument", "'-d'"),
        ("firefox.android.intentArgument", url),
    ]

    def before_runs(env, **kw):
        add_options(env, common_options)
To use this hook module, it can be passed to the `--hooks` option::
$ ./mach perftest --hooks hooks.py perftest_example.js
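Hooks can also react to failures. Below is a minimal sketch of an `on_exception` hook
following the contract described above; the decision to resume only for the **proxy**
layer is purely illustrative::

    import logging

    def on_exception(env, layer, exception):
        logging.warning("Layer %r raised %r", layer.name, exception)
        if layer.name == "proxy":
            # Returning True ignores the exception and resumes the test
            return True
        # Returning False ignores the exception and ends the test
        return False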

View file

@ -4,7 +4,6 @@
---
name: raptor
manifest: testing/raptor/raptor/raptor.ini
static-only: False
suites:
desktop:
description: "Tests for page-load performance. (WX: WebExtension, BT: Browsertime, FF: Firefox, CH: Chrome, CU: Chromium)"

View file

@ -2,7 +2,7 @@
perfdocs:
description: Performance Documentation linter
# This task handles its own search, so just include cwd
include: ['testing/raptor', 'python/mozperftest']
include: ['testing/raptor']
exclude: []
extensions: ['rst', 'ini', 'yml']
support-files: []

View file

@ -159,10 +159,3 @@ class RaptorGatherer(FrameworkGatherer):
    def build_test_description(self, title, test_description=""):
        return ["* " + title + " (" + test_description + ")"]


class MozperftestGatherer(FrameworkGatherer):
    '''
    Gatherer for the Mozperftest framework.
    '''
    pass

View file

@ -4,19 +4,16 @@
from __future__ import absolute_import
import os
import pathlib
import re
from perfdocs.logger import PerfDocLogger
from perfdocs.utils import read_yaml
from perfdocs.framework_gatherers import RaptorGatherer, MozperftestGatherer
from perfdocs.framework_gatherers import RaptorGatherer
logger = PerfDocLogger()
# TODO: Implement decorator/searcher to find the classes.
frameworks = {
"raptor": RaptorGatherer,
"mozperftest": MozperftestGatherer,
}
@ -26,12 +23,14 @@ class Gatherer(object):
and can obtain manifest-based test lists. Used by the Verifier.
"""
def __init__(self, workspace_dir):
def __init__(self, root_dir, workspace_dir):
"""
Initialize the Gatherer.
:param str root_dir: Path to the testing directory.
:param str workspace_dir: Path to the gecko checkout.
"""
self.root_dir = root_dir
self.workspace_dir = workspace_dir
self._perfdocs_tree = []
self._test_list = []
@ -68,30 +67,29 @@ class Gatherer(object):
This method doesn't return anything. The result can be found in
the perfdocs_tree attribute.
"""
exclude_dir = ["tools/lint", ".hg", "testing/perfdocs"]
for path in pathlib.Path(self.workspace_dir).rglob("perfdocs"):
if any(re.search(d, str(path)) for d in exclude_dir):
continue
for dirpath, dirname, files in os.walk(self.root_dir):
# Walk through the testing directory tree
if dirpath.endswith("/perfdocs"):
matched = {"path": dirpath, "yml": "", "rst": "", "static": []}
for file in files:
# Add the yml/rst/static file to its key if re finds the searched file
if file == "config.yml" or file == "config.yaml":
matched["yml"] = file
elif file == "index.rst":
matched["rst"] = file
elif file.endswith(".rst"):
matched["static"].append(file)
files = [f for f in os.listdir(path)]
matched = {"path": str(path), "yml": "", "rst": "", "static": []}
# Append to structdocs if all the searched files were found
if all(val for val in matched.values() if not type(val) == list):
self._perfdocs_tree.append(matched)
for file in files:
# Add the yml/rst/static file to its key if re finds the searched file
if file == "config.yml" or file == "config.yaml":
matched["yml"] = file
elif file == "index.rst":
matched["rst"] = file
elif file.endswith(".rst"):
matched["static"].append(file)
# Append to structdocs if all the searched files were found
if all(val for val in matched.values() if not type(val) == list):
self._perfdocs_tree.append(matched)
logger.log("Found {} perfdocs directories in {}"
.format(len(self._perfdocs_tree), [d['path'] for d in self._perfdocs_tree]))
logger.log(
"Found {} perfdocs directories in {}".format(
len(self._perfdocs_tree), self.root_dir
)
)
def get_test_list(self, sdt_entry):
"""
@ -118,16 +116,15 @@ class Gatherer(object):
"yml_content": yaml_content,
"yml_path": yaml_path,
"name": yaml_content["name"],
"test_list": {}
}
# Get and then store the frameworks tests
self.framework_gatherers[framework["name"]] = frameworks[framework["name"]](
framework["yml_path"], self.workspace_dir
)
if not yaml_content["static-only"]:
framework["test_list"] = self.framework_gatherers[framework["name"]].get_test_list()
framework["test_list"] = self.framework_gatherers[
framework["name"]
].get_test_list()
self._test_list.append(framework)
return framework

View file

@ -52,20 +52,20 @@ def run_perfdocs(config, logger=None, paths=None, generate=True):
PerfDocLogger.LOGGER = logger
# Convert all the paths to relative ones
rel_paths = [re.sub(top_dir, "", path) for path in paths]
rel_paths = [re.sub(".*testing", "testing", path) for path in paths]
PerfDocLogger.PATHS = rel_paths
target_dir = [os.path.join(top_dir, i) for i in rel_paths]
for path in target_dir:
if not os.path.exists(path):
raise Exception("Cannot locate directory at %s" % path)
# TODO: Expand search to entire tree rather than just the testing directory
testing_dir = os.path.join(top_dir, "testing")
if not os.path.exists(testing_dir):
raise Exception("Cannot locate testing directory at %s" % testing_dir)
# Late import because logger isn't defined until later
from perfdocs.generator import Generator
from perfdocs.verifier import Verifier
# Run the verifier first
verifier = Verifier(top_dir)
verifier = Verifier(testing_dir, top_dir)
verifier.validate_tree()
if not PerfDocLogger.FAILED:

View file

@ -37,7 +37,6 @@ CONFIG_SCHEMA = {
"properties": {
"name": {"type": "string"},
"manifest": {"type": "string"},
"static-only": {"type": "boolean"},
"suites": {
"type": "object",
"properties": {
@ -55,12 +54,7 @@ CONFIG_SCHEMA = {
},
},
},
"required": [
"name",
"manifest",
"static-only",
"suites"
]
"required": ["name", "manifest", "suites"],
}
@ -71,14 +65,15 @@ class Verifier(object):
descriptions that can be used to build up a document.
"""
def __init__(self, workspace_dir):
def __init__(self, root_dir, workspace_dir):
"""
Initialize the Verifier.
:param str root_dir: Path to the 'testing' directory.
:param str workspace_dir: Path to the top-level checkout directory.
"""
self.workspace_dir = workspace_dir
self._gatherer = Gatherer(workspace_dir)
self._gatherer = Gatherer(root_dir, workspace_dir)
def validate_descriptions(self, framework_info):
"""
@ -292,10 +287,8 @@ class Verifier(object):
_valid_files = {
"yml": self.validate_yaml(matched_yml),
"rst": True
"rst": self.validate_rst_content(matched_rst),
}
if not read_yaml(matched_yml)['static-only']:
_valid_files["rst"] = self.validate_rst_content(matched_rst)
# Log independently the errors found for the matched files
for file_format, valid in _valid_files.items():