Bug 1466084 - Docs: Convert markup from reST to markdown

Markdown guide for reference:
https://daringfireball.net/projects/markdown/syntax

For the existing Sphinx `.. note::`s, there isn't an exact equivalent
until we switch to mkdocs (which has the `Admonition` extension), so
I've left them as reStructuredText and wrapped them with an `eval_rst`
code block for now, rather than switching to HTML. See:
https://recommonmark.readthedocs.io/en/latest/auto_structify.html#embed-restructuredtext
This commit is contained in:
Ed Morley 2018-07-27 17:41:11 +01:00
Parent 71c1124784
Commit eeff0a6fb4
13 changed files: 731 additions and 720 deletions


@ -4,72 +4,72 @@ Administrating Treeherder
Direct database access
----------------------
For cases where the REST API just isn't enough, a 3rd-party
application might want to connect directly to the Treeherder
database (or a copy of it). To support these cases, you
will probably want to create a specific user for each application
who can access publicly available information in a read-only
manner (omitting sensitive data like session tokens).
The following SQL should be sufficient to generate such a user
(obviously you should replace `myuser` and `mysecurepassword`):
```sql
CREATE USER 'myuser' IDENTIFIED BY 'mysecurepassword' REQUIRE SSL;

-- Tables where we want to allow only partial access.
-- Whilst `password` is not used (and randomly generated), it's still safer to exclude it.
GRANT SELECT (id, username, email) ON treeherder.auth_user to 'myuser';

-- Tables containing no sensitive data.
GRANT SELECT ON treeherder.bug_job_map to 'myuser';
GRANT SELECT ON treeherder.bugscache to 'myuser';
GRANT SELECT ON treeherder.build_platform to 'myuser';
GRANT SELECT ON treeherder.classified_failure to 'myuser';
GRANT SELECT ON treeherder.commit to 'myuser';
GRANT SELECT ON treeherder.failure_classification to 'myuser';
GRANT SELECT ON treeherder.failure_line to 'myuser';
GRANT SELECT ON treeherder.failure_match to 'myuser';
GRANT SELECT ON treeherder.group to 'myuser';
GRANT SELECT ON treeherder.group_failure_lines to 'myuser';
GRANT SELECT ON treeherder.issue_tracker to 'myuser';
GRANT SELECT ON treeherder.job to 'myuser';
GRANT SELECT ON treeherder.job_detail to 'myuser';
GRANT SELECT ON treeherder.job_group to 'myuser';
GRANT SELECT ON treeherder.job_log to 'myuser';
GRANT SELECT ON treeherder.job_note to 'myuser';
GRANT SELECT ON treeherder.job_type to 'myuser';
GRANT SELECT ON treeherder.machine to 'myuser';
GRANT SELECT ON treeherder.machine_platform to 'myuser';
GRANT SELECT ON treeherder.matcher to 'myuser';
GRANT SELECT ON treeherder.option to 'myuser';
GRANT SELECT ON treeherder.option_collection to 'myuser';
GRANT SELECT ON treeherder.performance_alert to 'myuser';
GRANT SELECT ON treeherder.performance_alert_summary to 'myuser';
GRANT SELECT ON treeherder.performance_bug_template to 'myuser';
GRANT SELECT ON treeherder.performance_datum to 'myuser';
GRANT SELECT ON treeherder.performance_framework to 'myuser';
GRANT SELECT ON treeherder.performance_signature to 'myuser';
GRANT SELECT ON treeherder.product to 'myuser';
GRANT SELECT ON treeherder.push to 'myuser';
GRANT SELECT ON treeherder.reference_data_signatures to 'myuser';
GRANT SELECT ON treeherder.repository to 'myuser';
GRANT SELECT ON treeherder.repository_group to 'myuser';
GRANT SELECT ON treeherder.runnable_job to 'myuser';
GRANT SELECT ON treeherder.seta_jobpriority to 'myuser';
GRANT SELECT ON treeherder.taskcluster_metadata to 'myuser';
GRANT SELECT ON treeherder.text_log_error to 'myuser';
GRANT SELECT ON treeherder.text_log_error_match to 'myuser';
GRANT SELECT ON treeherder.text_log_error_metadata to 'myuser';
GRANT SELECT ON treeherder.text_log_step to 'myuser';
```
If new tables are added, you can generate a new set of grant
statements using the following SQL:
```sql
SELECT CONCAT('GRANT SELECT ON ', table_schema, '.', table_name, ' to ''myuser'';') AS grant_stmt
FROM information_schema.TABLES
WHERE table_schema = 'treeherder'
AND table_name NOT REGEXP 'django_|auth_|credentials';
```
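The same exclusion rule can also be applied outside MySQL; here is a small Python sketch (the `grant_statements` helper is hypothetical, mirroring the `NOT REGEXP` filter in the SQL above):

```python
import re

def grant_statements(tables, user="myuser", schema="treeherder"):
    """Return GRANT SELECT statements, skipping sensitive tables
    (those matching django_, auth_ or credentials, as in the SQL above)."""
    sensitive = re.compile(r"django_|auth_|credentials")
    return [
        f"GRANT SELECT ON {schema}.{table} to '{user}';"
        for table in tables
        if not sensitive.search(table)
    ]
```

This is only a convenience for reviewing the list locally; the `information_schema` query above remains the authoritative source of table names.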


@ -1,12 +1,10 @@
Code Style
==========
Python imports
--------------
[isort](https://github.com/timothycrosley/isort) enforces the following Python global import order:
* ``from __future__ import ...``
* Python standard library
@ -21,11 +19,14 @@ In addition:
* After that, sort alphabetically by module name.
* When importing multiple items from one module, use this style:
```python
from django.db import (models,
transaction)
```
The quickest way to correct import style locally is to let isort make the changes for you - see
[running the tests](common_tasks.html#running-the-tests).
Note: It's not possible to disable isort wrapping style checking, so for now we've chosen the
most deterministic [wrapping mode](https://github.com/timothycrosley/isort#multi-line-output-modes)
to reduce the line length guess-work when adding imports, even though it's not the most concise.
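For illustration, a global import block satisfying the order above might look like this (a sketch; the commented third-party and local lines are placeholders):

```python
from __future__ import annotations  # 1. `from __future__ import ...` first

# 2. Python standard library, sorted alphabetically by module name
import json
import os

# 3. Later groups (e.g. third-party packages, then local imports) follow,
#    each separated by a blank line, for example:
# from django.db import (models,
#                        transaction)
```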


@ -1,61 +1,55 @@
Common tasks
============
Running the tests
-----------------
You can run flake8, isort and the pytest suite inside the Vagrant VM, using:
```bash
vagrant ~/treeherder$ ./runtests.sh
```
Or for more control, run each tool individually:
* [pytest](https://docs.pytest.org/en/stable/):
NB: You can run the Selenium tests headlessly by setting the ``MOZ_HEADLESS``
environment variable.
```bash
vagrant ~/treeherder$ pytest tests/
vagrant ~/treeherder$ pytest tests/log_parser/test_tasks.py
vagrant ~/treeherder$ pytest tests/etl/test_buildapi.py -k test_ingest_builds4h_jobs
vagrant ~/treeherder$ pytest tests/selenium/test_basics.py::test_treeherder_main
```
To run all tests, including slow tests that are normally skipped, use:
```bash
vagrant ~/treeherder$ pytest --runslow tests/
```
For more options, see `pytest --help` or <https://docs.pytest.org/en/stable/usage.html>
* [flake8](https://flake8.readthedocs.io/):
```bash
vagrant ~/treeherder$ flake8
```
NB: If running flake8 from outside of the VM, ensure you are using the same version as used on Travis (see ``requirements/dev.txt``).
* [isort](https://github.com/timothycrosley/isort) (checks the [Python import style](code_style.html#python-imports)):
To run interactively:
```bash
vagrant ~/treeherder$ isort
```
Or to apply all changes without confirmation:
```bash
vagrant ~/treeherder$ isort --apply
```
NB: isort must be run from inside the VM, since a populated (and up to date) virtualenv is required so that isort can correctly categorise the imports.
@ -63,32 +57,31 @@ Or for more control, run each tool individually:
Profiling API endpoint performance
----------------------------------
On our development (vagrant) instance we have [django-debug-toolbar](
http://django-debug-toolbar.readthedocs.io/) installed, which can give
information on exactly what SQL is run to generate individual API
endpoints. Just navigate to an endpoint
(example: <http://localhost:8000/api/repository/>) and
you should see the toolbar to your right.
Add a new Mercurial repository
------------------------------
To add a new repository, the following steps are needed:
* Append new repository information to the fixtures file located at:
`treeherder/model/fixtures/repository.json`
* Load the file you edited with the loaddata command:
```bash
vagrant ~/treeherder$ ./manage.py loaddata repository
```
* Restart any running gunicorn/celery processes.
For more information on adding a new GitHub repository, see
[Adding a GitHub repository](submitting_data.html#adding-a-github-repository).
Building the docs locally
@ -97,12 +90,12 @@ Building the docs locally
* Either ``vagrant ssh`` into the VM, or else activate a virtualenv on the host machine.
* From the root of the Treeherder repo, run:
```bash
> pip install -r requirements/docs.txt
> make livehtml
```
* Visit <http://127.0.0.1:8001> to view the docs.
* Source changes will result in automatic rebuilds and browser page reload.
@ -112,32 +105,32 @@ Updating package.json
* Always use ``yarn`` to make changes, not ``npm``, so that ``yarn.lock`` remains in sync.
* Add new packages using ``yarn add <PACKAGE>`` (``yarn.lock`` will be automatically updated).
* After changes to ``package.json`` use ``yarn install`` to install them and automatically update ``yarn.lock``.
* For more details see the [Yarn documentation].
[Yarn documentation]: https://yarnpkg.com/en/docs/usage
Releasing a new version of the Python client
--------------------------------------------
* Determine whether the patch, minor or major version should be bumped, by
inspecting the [client Git log].
* File a separate bug for the version bump.
* Open a PR to update the version listed in [client.py].
* Use Twine to publish **both** the sdist and the wheel to PyPI, by running
the following from the root of the Treeherder repository:
```bash
> pip install -U twine wheel
> cd treeherder/client/
> rm -rf dist/*
> python setup.py sdist bdist_wheel
> twine upload dist/*
```
* File a ``Release Engineering::Buildduty`` bug requesting that the sdist
and wheel releases (plus any new dependent packages) be added to the
internal PyPI mirror. For an example, see [bug 1236965].
Hide Jobs with Tiers
--------------------
@ -153,6 +146,6 @@ hidden by default. There are two ways to set a job to be hidden in Treeherder:
Details Panel. That will place the signature hash in the filter field.
[client Git log]: https://github.com/mozilla/treeherder/commits/master/treeherder/client
[client.py]: https://github.com/mozilla/treeherder/blob/master/treeherder/client/thclient/client.py
[bug 1236965]: https://bugzilla.mozilla.org/show_bug.cgi?id=1236965


@ -1,5 +1,3 @@
Schema Validation
=================
@ -7,21 +5,20 @@ Some data types in Treeherder will have JSON Schema files in the form of YAML.
You can use these files to validate your data prior to submission to be sure
it is in the right format.
You can find all our data schemas in the [schemas] folder.
To validate your file against a ``yml`` file, you can use something like the
following example code:
```python
import yaml
import jsonschema
schema = yaml.load(open("schemas/text-log-summary-artifact.yml"))
jsonschema.validate(data, schema)
```
This will give output telling you if your ``data`` element passes validation,
and, if not, exactly where it is out of compliance.
[schemas]: https://github.com/mozilla/treeherder/tree/master/schemas


@ -1,19 +1,20 @@
Installation
============
```eval_rst
.. note:: This section describes how to set up a fully functioning
instance of Treeherder. If you only want to hack on the UI,
you can just setup a standalone webserver which accesses
the server backend using node.js, which is much simpler.
See the :doc:`UI development section <ui/installation>`.
```
Prerequisites
-------------
* If you are new to Mozilla or the A-Team, read the [A-Team Bootcamp].
* Install [Git], [Virtualbox] and [Vagrant] (latest versions recommended).
* Clone the [treeherder repo] from GitHub.
* Linux only: An nfsd server is required. You can install this on Ubuntu by running `apt-get install nfs-common nfs-kernel-server`
Setting up Vagrant
@ -21,62 +22,62 @@ Setting up Vagrant
* Open a shell, cd into the root of the Treeherder repository, and type:
```bash
> vagrant up --provision
```
It will typically take 5 to 30 minutes for the vagrant provision to
complete, depending on your network performance. If you experience
any errors, see the [troubleshooting page](troubleshooting.md).
It is *very important* that the provisioning process complete successfully before
trying to interact with your test instance of treeherder: some things might
superficially seem to work on a partially configured machine, but
it is almost guaranteed that some things *will break* in
hard-to-diagnose ways if vagrant provision is not run to completion.
* Once the virtual machine is set up, connect to it using:
```bash
> vagrant ssh
```
A python virtual environment will be activated on login, and the working directory will be the treeherder source directory shared from the host machine.
* For the full list of available Vagrant commands (for example, suspending the VM when you are finished for the day),
see their [command line documentation](https://www.vagrantup.com/docs/cli/).
* If you just wish to [run the tests](common_tasks.html#running-the-tests),
you can stop now without performing the remaining steps.
Starting a local Treeherder instance
------------------------------------
* Start a gunicorn instance inside the Vagrant VM, to serve the static UI and API requests:
```bash
vagrant ~/treeherder$ ./bin/run_gunicorn
```
Or for development you can use the django runserver instead of gunicorn:
```bash
vagrant ~/treeherder$ ./manage.py runserver
```
This is more convenient because it automatically refreshes every time there's a change in the code.
* You must also start the UI dev server. Open a new terminal window and ``vagrant ssh`` to
the VM again, then run the following:
```bash
vagrant ~/treeherder$ yarn start:local
```
This will build the UI code in the ``dist/`` folder and keep watching for
new changes (See the [UI development section](ui/installation.md) for more ways to work with the UI code).
* Visit <http://localhost:5000> in your browser (NB: port has changed). Note: There will be no data to display until the ingestion tasks are run.
Running the ingestion tasks
---------------------------
@ -85,9 +86,9 @@ Ingestion tasks populate the database with version control push logs, queued/run
* Start up a celery worker to process async tasks:
```bash
vagrant ~/treeherder$ celery -A treeherder worker -B --concurrency 5
```
The "-B" option tells the celery worker to start up a beat service, so that periodic tasks can be executed.
You only need one worker with the beat service enabled. Multiple beat services will result in periodic tasks being executed multiple times.
@ -97,13 +98,13 @@ Ingesting a single push (at a time)
Alternatively, instead of running a full ingestion task, you can process just
the jobs associated with any single push generated in the last 4 hours
([builds-4h]), in a synchronous manner. This is ideal for testing. For example:
[builds-4h]: http://builddata.pub.build.mozilla.org/buildjson/
```bash
vagrant ~/treeherder$ ./manage.py ingest_push mozilla-inbound 63f8a47cfdf5
```
If running this locally, replace `63f8a47cfdf5` with a recent revision (= pushed within
the last four hours) on mozilla-inbound.
@ -112,26 +113,26 @@ You can further restrict the amount of data to a specific type of job
with the "--filter-job-group" parameter. For example, to process only
talos jobs for a particular push, try:
```bash
vagrant ~/treeherder$ ./manage.py ingest_push --filter-job-group T mozilla-inbound 63f8a47cfdf
```
Ingesting a range of pushes
---------------------------
It is also possible to ingest the last N pushes for a repository:
```bash
vagrant ~/treeherder$ ./manage.py ingest_push mozilla-central --last-n-pushes 100
```
In this mode, only the pushlog data will be ingested: additional results
associated with the pushes will not. This mode is useful to seed pushes so
they are visible on the web interface and so you can easily copy and paste
changesets from the web interface into subsequent ``ingest_push`` commands.
[A-Team Bootcamp]: https://ateam-bootcamp.readthedocs.io
[Git]: https://git-scm.com
[Vagrant]: https://www.vagrantup.com
[Virtualbox]: https://www.virtualbox.org
[treeherder repo]: https://github.com/mozilla/treeherder


@ -7,103 +7,122 @@ to ingest from any exchange you like. Some exchanges will be registered in
same data as Treeherder. Or you can specify your own and experiment with
posting your own data.
The Simple Case
---------------
If you just want to get the same data that Treeherder gets, then there are 5 steps:
1. Create a user on [Pulse Guardian] if you don't already have one
2. Create your ``PULSE_DATA_INGESTION_CONFIG`` string
3. Open a Vagrant terminal to read Pushes
4. Open a Vagrant terminal to read Jobs
5. Open a Vagrant terminal to run **Celery**
### 1. Pulse Guardian
Visit [Pulse Guardian], sign in, and create a **Pulse User**. It will ask you to set a
username and password. Remember these as you'll use them in the next step.
Unfortunately, **Pulse** doesn't support creating queues with a guest account, so
this step is necessary.
### 2. Environment Variable
If your **Pulse User** was username: ``foo`` and password: ``bar``, your config
string would be:
```bash
PULSE_DATA_INGESTION_CONFIG="amqp://foo:bar@pulse.mozilla.org:5671/?ssl=1"
```
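The pieces of that string can be sanity-checked with a quick parse (a sketch; `foo`/`bar` are the example credentials from above):

```python
from urllib.parse import parse_qs, urlsplit

config = "amqp://foo:bar@pulse.mozilla.org:5671/?ssl=1"
parts = urlsplit(config)  # credentials, host, port and query are all recoverable
assert parts.username == "foo" and parts.password == "bar"
assert parts.hostname == "pulse.mozilla.org" and parts.port == 5671
assert parse_qs(parts.query) == {"ssl": ["1"]}  # TLS enabled
```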
### 3. Read Pushes
```eval_rst
.. note:: Be sure your Vagrant environment is up-to-date. Reload it and run ``vagrant provision`` if you're not sure.
```
``ssh`` into Vagrant, then set your config environment variable:
```bash
export PULSE_DATA_INGESTION_CONFIG="amqp://foo:bar@pulse.mozilla.org:5671/?ssl=1"
```
Next, run the Treeherder management command to read Pushes from the default **Pulse**
exchange:
```bash
./manage.py read_pulse_pushes
```
You will see a list of the exchanges it has mounted to and a message for each
push as it is read. This process does not ingest the push into Treeherder. It
adds that Push message to a local **Celery** queue for ingestion. They will be
ingested in step 5.
### 4. Read Jobs
As in step 3, open a Vagrant terminal and export your ``PULSE_DATA_INGESTION_CONFIG``
variable. Then run the following management command:
```bash
./manage.py read_pulse_jobs
```
You will again see the list of exchanges that your queue is now mounted to and
a message for each Job as it is read into your local **Celery** queue.
### 5. Celery
Open your next Vagrant terminal. You don't need to set your environment variable
in this one. Just run **Celery**:
```bash
celery -A treeherder worker -B --concurrency 5
```
That's it! With those processes running, you will begin ingesting Treeherder
data. To see the data, you will need to run the Treeherder UI and API.
See [Running the unminified UI with Vagrant] for more info.
[Running the unminified UI with Vagrant]: ui/installation.html#running-the-unminified-ui-with-vagrant
Advanced Configuration
----------------------
### Changing which data to ingest
If you don't want all the sources provided by default in ``settings.py``, you
can specify the exchange(s) to listen to for jobs by modifying
``PULSE_DATA_INGESTION_SOURCES``. For instance, you could specify the projects
as only ``try`` and ``mozilla-central`` by setting:
```bash
export PULSE_DATA_INGESTION_SOURCES='[{"exchange": "exchange/taskcluster-treeherder/v1/jobs", "destinations": ["#"], "projects": ["try", "mozilla-central"]}]'
```
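Since the value is a JSON document, you can check edits to it before exporting (a sketch using the example value above):

```python
import json

# The same JSON string assigned to PULSE_DATA_INGESTION_SOURCES above
sources = json.loads(
    '[{"exchange": "exchange/taskcluster-treeherder/v1/jobs",'
    ' "destinations": ["#"], "projects": ["try", "mozilla-central"]}]'
)
assert sources[0]["projects"] == ["try", "mozilla-central"]
assert sources[0]["exchange"].startswith("exchange/")
```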
To change which exchanges you listen to for pushes, you would modify
``PULSE_PUSH_SOURCES``. For instance, to get only **GitHub** pushes for Bugzilla,
you would set:
```bash
export PULSE_PUSH_SOURCES='[{"exchange": "exchange/taskcluster-github/v1/push","routing_keys": ["bugzilla#"]}]'
```
### Advanced Celery options
If you only want to ingest the Pushes and Jobs, but don't care about log parsing
and all the other processing Treeherder does, then you can minimize the **Celery**
task. You will need:
```bash
celery -A treeherder worker -B -Q pushlog,store_pulse_jobs,store_pulse_resultsets --concurrency 5
```
* The ``pushlog`` queue loads up to the last 10 Mercurial pushes that exist.
* The ``store_pulse_resultsets`` queue will ingest all the pushes from the exchanges
@ -111,8 +130,10 @@ task. You will need::
* The ``store_pulse_jobs`` queue will ingest all the jobs from the exchanges
specified in ``PULSE_DATA_INGESTION_SOURCES``.
```eval_rst
.. note:: Any job that comes from **Pulse** that does not have an associated push will be skipped.
.. note:: It is slightly confusing to see ``store_pulse_resultsets`` there. It is there for legacy reasons and will change to ``store_pulse_pushes`` at some point.
```
Posting Data
@ -121,15 +142,17 @@ Posting Data
To post data to your own **Pulse** exchange, you can use the ``publish_to_pulse``
management command. This command takes the ``routing_key``, ``connection_url``
and ``payload_file``. The payload file must be a ``JSON`` representation of
a job as specified in the [YML Schema].
Here is a set of example parameters that could be used to run it:
```bash
./manage.py publish_to_pulse mozilla-inbound.staging amqp://treeherder-test:mypassword@pulse.mozilla.org:5672/ ./scratch/test_job.json
```
You can use the handy [Pulse Inspector] to view messages in your exchange to
test that they are arriving at Pulse the way you expect.
[Pulse Guardian]: https://pulseguardian.mozilla.org/whats_pulse
[Pulse Inspector]: https://tools.taskcluster.net/pulse-inspector/
[YML Schema]: https://github.com/mozilla/treeherder/blob/master/schemas/pulse-job.yml


@ -4,11 +4,9 @@ REST API
Treeherder provides a REST API which can be used to query for all the
push, job, and performance data it stores internally. For a browsable
interface, see:
<https://treeherder.mozilla.org/docs/>
Python Client
-------------
@ -17,39 +15,39 @@ interacting with the REST API. It is maintained inside the
Treeherder repository, but you can install your own copy from PyPI
using pip:
```bash
pip install treeherder-client
```
It will install a module called `thclient` that you can access, for example:
```python
from thclient import TreeherderClient
```
By default the production Treeherder API will be used, however this can be
overridden by passing a `server_url` argument to the `TreeherderClient`
constructor:
```python
# Treeherder production
client = TreeherderClient()
# Treeherder stage
client = TreeherderClient(server_url='https://treeherder.allizom.org')
# Local vagrant instance
client = TreeherderClient(server_url='http://localhost:8000')
# Local vagrant instance
client = TreeherderClient(server_url='http://localhost:8000')
```
When using the Python client, don't forget to set up logging in the
caller so that any API error messages are output, like so:
```python
import logging

logging.basicConfig()
```
For verbose output, pass `level=logging.DEBUG` to `basicConfig()`.
User Agents
-----------

When interacting with Treeherder's API, you must set an appropriate
`User-Agent` header (rather than relying on the defaults of your
language/library) so that we can more easily track API feature usage,
as well as accidental abuse. Default scripting User Agents will receive
an HTTP 403 response (see [bug 1230222] for more details).
If you are using the [Python Client](#python-client), an appropriate User Agent
is set for you. When using the Python requests library, the User Agent
can be set like so:
```python
r = requests.get(url, headers={'User-Agent': ...})
```
[bug 1230222]: https://bugzilla.mozilla.org/show_bug.cgi?id=1230222
Authentication
--------------
A Treeherder client instance should identify itself to the server
via the [Hawk authentication mechanism]. To apply for credentials or
create some for local testing, see [Managing API Credentials](#managing-api-credentials)
below.
Once your credentials are set up, if you are using the Python client
pass them via the `client_id` and `secret` parameters to
TreeherderClient's constructor:
```python
client = TreeherderClient(client_id='hawk_id', secret='hawk_secret')
client.post_collection('mozilla-central', tac)
```
Remember to point the Python client at the Treeherder instance to which
the credentials belong - see [here](#python-client) for more details.
To diagnose problems when authenticating, ensure Python logging has been
set up (see [Python Client](#python-client)).
Note: The system clock on the machines making requests must be correct
(or more specifically, within 60 seconds of the Treeherder server time),
otherwise authentication will fail. In this case, the response body will be:
```json
{"detail": "Hawk authentication failed: The token has expired. Is your system clock correct?"}
```
[Hawk authentication mechanism]: https://github.com/hueniverse/hawk
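The 60-second window above can be sketched as a pure check. This is an illustrative sketch only; `MAX_SKEW` and `clock_ok` are made-up names, not part of the Treeherder client:

```python
from datetime import datetime, timedelta, timezone

# Hawk rejects requests whose timestamp drifts too far from the server clock;
# per the note above, the tolerance is 60 seconds.
MAX_SKEW = timedelta(seconds=60)

def clock_ok(local_time, server_time, max_skew=MAX_SKEW):
    """Return True if the local clock is close enough to the server clock
    for Hawk authentication to succeed."""
    return abs(local_time - server_time) <= max_skew

now = datetime.now(timezone.utc)
print(clock_ok(now, now - timedelta(seconds=90)))  # False: drift exceeds 60s
```

In practice, enabling NTP on the submitting machine is the real fix; this check only illustrates why a skewed clock produces the error above.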
Managing API credentials
------------------------
Submitting data via the REST API has been deprecated in favour of Pulse
([bug 1349182](https://bugzilla.mozilla.org/show_bug.cgi?id=1349182)).
As such we are no longer issuing Hawk credentials for new projects,
and the UI for requesting/managing credentials has been removed.
Retrieving Data
===============
The [Python client](rest_api.html#python-client) also has some convenience
methods to query the Treeherder API.
Here's a simple example which prints the start timestamp of all the
jobs associated with the last 10 pushes on mozilla-central:
```python
from thclient import TreeherderClient

client = TreeherderClient()

pushes = client.get_pushes('mozilla-central')  # gets last 10 by default
for push in pushes:
    jobs = client.get_jobs('mozilla-central', push_id=push['id'])
    for job in jobs:
        print(job['start_timestamp'])
```
Jobs that appear on Treeherder for the first time will be treated as a job with high priority for a couple of
weeks since we don't have historical data to determine how likely they're to catch a code regression.
To find open bugs for SETA, see the list of [SETA bugs].
[SETA bugs]: https://bugzilla.mozilla.org/buglist.cgi?product=Tree%20Management&component=Treeherder%3A%20SETA&resolution=---
APIs
----
* `/api/project/{project}/seta/{version}/job-priorities/`
* This is the API that consumers like the Gecko decision task will use
* `/api/project/{project}/seta/{version}/job-types/`
* This API shows which jobs are defined for each project
* `/api/seta/{version}/failures-fixed-by-commit/`
* This API shows job failures that have been annotated with "fixed by commit"
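As a sketch, the job-priorities endpoint for a project can be assembled like this. The server and project names are examples, and `seta_endpoint` is an illustrative helper, not part of any client library; fetching the URL additionally requires a custom User-Agent, as described in the REST API docs:

```python
def seta_endpoint(server, project, version='v1'):
    # Mirrors the /api/project/{project}/seta/{version}/job-priorities/ route above.
    return '{}/api/project/{}/seta/{}/job-priorities/'.format(server, project, version)

url = seta_endpoint('https://treeherder.mozilla.org', 'mozilla-inbound')
print(url)
# e.g. requests.get(url, params={'build_system_type': 'taskcluster'},
#                   headers={'User-Agent': 'my-seta-consumer'})
```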
Local set up
------------
After you set up Treeherder, ssh (3 different tabs) into the provisioned VM and run the following commands in each:
* 1st tab: `./manage.py runserver`
Then try out the various APIs:
* <http://localhost:5000/api/project/mozilla-inbound/seta/v1/job-priorities/?build_system_type=buildbot>
* <http://localhost:5000/api/project/mozilla-inbound/seta/v1/job-priorities/?build_system_type=taskcluster>
* <http://localhost:5000/api/project/mozilla-inbound/seta/v1/job-types/>
* <http://localhost:5000/api/seta/v1/failures-fixed-by-commit/>
* This one won't work until [bug 1389123] is fixed.
[bug 1389123]: https://bugzilla.mozilla.org/show_bug.cgi?id=1389123
Maintenance
-----------
Sometimes the default behaviour of SETA is not adequate (e.g. when new jobs are noticed, or
when adding new platforms such as stylo).
Instead of investing more in accommodating the various scenarios, we've decided to document how to make changes in the DB when we have to.
If you want to inspect the priorities for various jobs and platforms you can query the JobPriority table from reDash.
Use this query as a starting point:
<https://sql.telemetry.mozilla.org/queries/14771/source#table>
### Steps for adjusting jobs
To connect to Treeherder you need Heroku permissions & the Heroku CLI installed. Then run:
```bash
heroku run --app treeherder-prod -- bash
```
Sometimes, before you can adjust priorities of the jobs, you need to make sure they make it into the JobPriority table.
In order to do so we need to:
* Make sure the scheduling changes have made it into mozilla-inbound
* SETA uses mozilla-inbound as a reference for jobs for all trunk trees
* Make sure the job shows up on the runnable jobs table
* You can check the [runnable jobs API], however, it can time out
* You can update the table with:
`export TREEHERDER_DEBUG=True && ./manage.py update_runnable_jobs`
(it will take several minutes)
* Update the job priority table from the shell:
Open the Python shell using `./manage.py shell`, then enter:
```python
from treeherder.seta.update_job_priority import update_job_priority_table
update_job_priority_table()
```
If you want to remove the 2 week grace period and make the job low priority (priority=5) do something similar to this:
```python
from treeherder.seta.models import JobPriority;
# Inspect the jobs you want to change
# Change the values appropriately
# Once satisfied
JobPriority.objects.filter(platform="windows7-32-stylo", priority=1).update(priority=5);
JobPriority.objects.filter(platform="windows7-32-stylo", expiration_date__isnull=False).update(expiration_date=None)
```
[runnable jobs API]: https://treeherder.mozilla.org/api/project/mozilla-inbound/runnable_jobs/
Submitting Data
===============
To submit your test data to Treeherder, you have two options:
1. [Using Pulse](#using-pulse)
This is the new process Task Cluster is using to submit data to Treeherder.
There is a [Pulse Job Schema] to validate your payload against to ensure it will
be accepted. In this case, you create your own [Pulse] exchange and publish
to it. To get Treeherder to receive your data, you would create a bug to
have your Exchange added to Treeherder's config. All Treeherder instances
can subscribe to get your data, as can local dev instances for testing.
While it is beyond the scope of this document to explain how [Pulse] and
RabbitMQ work, we encourage you to read more about this technology on
its Wiki page.
2. [Using the Python Client](#using-the-python-client)
This is historically how projects and users have submitted data to Treeherder.
This requires getting Hawk credentials approved by a Treeherder Admin.
If you are establishing a new repository with Treeherder, then you will need to
do one of the following:
1. For GitHub repos: [Adding a GitHub Repository](#adding-a-github-repository)
2. For Mercurial repos: [Add a new Mercurial repository](common_tasks.html#add-a-new-mercurial-repository)
Using Pulse
-----------
To submit via a Pulse exchange, these are the steps you will need to follow:
### 1. Format your data
You should format your job data according to the [Pulse Job Schema],
which describes the various properties of a job: whether it passed or failed,
job group/type symbol, description, log information, etc.
You are responsible for validating your data prior to publishing it onto your
exchange, or Treeherder may reject it.
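Validation can be done with any JSON Schema library. Below is a minimal sketch using the third-party `jsonschema` package; the tiny inline schema is a stand-in for illustration only, and real payloads should be validated against the actual [Pulse Job Schema]:

```python
import jsonschema  # third-party: pip install jsonschema

# Stand-in schema for illustration; the real one lives in the Treeherder
# repo at schemas/pulse-job.yml.
schema = {
    'type': 'object',
    'required': ['jobName', 'state'],
    'properties': {
        'jobName': {'type': 'string'},
        'state': {'enum': ['pending', 'running', 'completed']},
    },
}

job = {'jobName': 'SpiderMonkey Build', 'state': 'completed'}
jsonschema.validate(job, schema)  # raises ValidationError if the payload is invalid
```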
### 2. Create your Exchange
With [Pulse Guardian], you need to create your Pulse User in order to
create your own Queues and Exchanges. There is no mechanism to create an
Exchange in the Pulse Guardian UI itself, however. You will need to create
your exchange in your submitting code. There are a few options available
for that:
1. [MozillaPulse]
2. [Kombu]
3. Or any RabbitMQ package of your choice
To test publishing your data to your Exchange, you can use the Treeherder
management command [publish_to_pulse]. This is also a very simple example
of a Pulse publisher using Kombu that you can use to learn to write your own
publisher.
### 3. Register with Treeherder
Once you have successfully tested a round-trip through your Pulse exchange to
your development instance, you are ready to have Treeherder receive your data.
Treeherder has to know about your exchange and which routing keys to use in
order to load your jobs.
Submit a [Treeherder bug] with the following information:
```python
{
"exchange": "exchange/my-pulse-user/v1/jobs",
"destinations": [
'treeherder'
],
"projects": [
'mozilla-inbound._'
],
},
```
Treeherder will bind to the exchange looking for all combinations of routing
keys from `destinations` and `projects` listed above. For example with
the above config, we will only load jobs with routing keys of
`treeherder.mozilla-inbound._`
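The binding behaviour can be sketched with `itertools.product`, using values taken from the example config above:

```python
from itertools import product

destinations = ['treeherder']
projects = ['mozilla-inbound._']

# Treeherder binds one routing key per (destination, project) combination.
routing_keys = ['{}.{}'.format(d, p) for d, p in product(destinations, projects)]
print(routing_keys)  # ['treeherder.mozilla-inbound._']
```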
If you want all jobs from your exchange to be loaded, you could simplify the
config by having values:
```python
"destinations": [
'#'
],
"projects": [
'#'
],
```
If you want one config to go to Treeherder Staging and a different one to go
to Production, please specify that in the bug. You could use the same exchange
with different routing key settings, or two separate exchanges. The choice is
yours.
### 4. Publish jobs to your Exchange
Once the above config is set on Treeherder, you can begin publishing jobs
to your Exchange and they will start showing in Treeherder.
You will no longer need any special credentials. You publish messages to the
Exchange YOU own. Treeherder is now just listening to it.
Using the Python Client
-----------------------
There are two types of data structures you can submit with the [Python client]:
job and push collections. The client provides methods
for building a data structure that Treeherder will accept. Data
structures can be extended with new properties as needed; a
minimal validation protocol is applied to confirm that the bare minimum
parts of the structures are defined.
See the [Python client] section for how to control
which Treeherder instance will be accessed by the client.
Authentication is covered [here](rest_api.html#authentication).
[Python client]: rest_api.html#python-client
### Job Collections
Job collections can contain test results from any kind of test. The
`revision` provided should match the associated `revision` in the
push structure. The `revision` is the top-most revision in the push.
The `job_guid` provided can be any unique string of 50
characters at most. A job collection has the following data structure.
```python
[
    {
        'project': 'mozilla-inbound',

        'revision': '4317d9e5759d58852485a7a808095a44bc806e19',

        'job': {

            'job_guid': 'd22c74d4aa6d2a1dcba96d95dccbd5fdca70cf33',

            'product_name': 'spidermonkey',

            'reason': 'scheduler',
            'who': 'spidermonkey_info__mozilla-inbound-warnaserr',

            'desc': 'Linux x86-64 mozilla-inbound spidermonkey_info-warnaserr build',

            'name': 'SpiderMonkey --enable-sm-fail-on-warnings Build',

            # The symbol representing the job displayed in
            # treeherder.allizom.org
            'job_symbol': 'e',

            # The symbol representing the job group in
            # treeherder.allizom.org
            'group_symbol': 'SM',
            'group_name': 'SpiderMonkey',

            'submit_timestamp': 1387221298,
            'start_timestamp': 1387221345,
            'end_timestamp': 1387222817,

            'state': 'completed',
            'result': 'success',

            'machine': 'bld-linux64-ec2-104',
            'build_platform': {
                'platform': 'linux64', 'os_name': 'linux', 'architecture': 'x86_64'
            },
            'machine_platform': {
                'platform': 'linux64', 'os_name': 'linux', 'architecture': 'x86_64'
            },

            'option_collection': {'opt': True},

            # jobs can belong to different tiers
            # setting the tier here will determine which tier the job
            # belongs to. However, if a job is set as Tier of 1, but
            # belongs to the Tier 2 profile on the server, it will still
            # be saved as Tier 2.
            'tier': 2,

            # the `name` of the log can be the default of "buildbot_text"
            # however, you can use a custom name. See below.
            'log_references': [
                {
                    'url': 'http://ftp.mozilla.org/pub/mozilla.org/spidermonkey/...',
                    'name': 'buildbot_text'
                }
            ],

            # The artifact can contain any kind of structured data associated with a test.
            'artifacts': [{
                'type': 'json',
                'name': '',
                'blob': { my json content here}
            }],

            # List of job guids that were superseded by this job
            'superseded': []
        },
        ...
    }
]
```
See [Specifying Custom Log Names](#specifying-custom-log-names) for more info.
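Since `job_guid` only needs to be unique and at most 50 characters, one possible way to derive one is to hash identifying fields. This is an illustrative scheme, not the one Treeherder's own submitters use:

```python
import hashlib

def make_job_guid(*parts):
    """Derive a stable job guid (40 hex chars) from identifying fields."""
    return hashlib.sha1('-'.join(parts).encode('utf-8')).hexdigest()

guid = make_job_guid('my-build-system', 'linux64', '1387221298')
print(len(guid))  # 40, comfortably under the 50-character limit
```

Hashing keeps the guid deterministic for a given build, so retried submissions of the same job reuse the same identifier.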
### Usage
If you want to use `TreeherderJobCollection` to build up the job data
structures to send, do something like this:
```python
from thclient import (TreeherderClient, TreeherderClientError,
                      TreeherderJobCollection)

tjc = TreeherderJobCollection()

for data in dataset:

    tj = tjc.get_job()

    tj.add_revision(data['revision'])
    tj.add_project(data['project'])
    tj.add_coalesced_guid(data['superseded'])
    tj.add_job_guid(data['job_guid'])
    tj.add_job_name(data['name'])
    tj.add_job_symbol(data['job_symbol'])
    tj.add_group_name(data['group_name'])
    tj.add_group_symbol(data['group_symbol'])
    tj.add_description(data['desc'])
    tj.add_product_name(data['product_name'])
    tj.add_state(data['state'])
    tj.add_result(data['result'])
    tj.add_reason(data['reason'])
    tj.add_who(data['who'])
    tj.add_tier(1)
    tj.add_submit_timestamp(data['submit_timestamp'])
    tj.add_start_timestamp(data['start_timestamp'])
    tj.add_end_timestamp(data['end_timestamp'])
    tj.add_machine(data['machine'])

    tj.add_build_info(
        data['build']['os_name'], data['build']['platform'], data['build']['architecture']
    )
    tj.add_machine_info(
        data['machine']['os_name'], data['machine']['platform'], data['machine']['architecture']
    )
    tj.add_option_collection(data['option_collection'])
    tj.add_log_reference('buildbot_text', data['log_reference'])

    # data['artifact'] is a list of artifacts
    for artifact_data in data['artifact']:
        tj.add_artifact(
            artifact_data['name'], artifact_data['type'], artifact_data['blob']
        )
    tjc.add(tj)

client = TreeherderClient(client_id='hawk_id', secret='hawk_secret')
client.post_collection('mozilla-central', tjc)
```
If you don't want to use `TreeherderJobCollection` to build up the data structure
to send, build the data structures directly and add them to the collection.
```python
from thclient import TreeherderClient, TreeherderJobCollection

tjc = TreeherderJobCollection()

for job in job_data:
    tj = tjc.get_job(job)

    # Add any additional data to tj.data here

    # add job to collection
    tjc.add(tj)

client = TreeherderClient(client_id='hawk_id', secret='hawk_secret')
client.post_collection('mozilla-central', tjc)
```
### Job artifacts format
Artifacts can have a name, type and blob. The blob property can contain any
valid data structure, according to the type attribute. For example if you use
the json type, your blob must be json-serializable to be valid. The name
attribute can be any arbitrary string identifying the artifact. Here is an
example of what a job artifact looks like in the context of a job object:
```python
[
    {
        'project': 'mozilla-inbound',
        'revision': '4317d9e5759d58852485a7a808095a44bc806e19',
        'job': {
            'job_guid': 'd22c74d4aa6d2a1dcba96d95dccbd5fdca70cf33',
            # ...
            # other job properties here
            # ...

            'artifacts': [
                {
                    'type': 'json',
                    'name': 'my first artifact',
                    'blob': {
                        k1: v1,
                        k2: v2,
                        ...
                    }
                },
                {
                    'type': 'json',
                    'name': 'my second artifact',
                    'blob': {
                        k1: v1,
                        k2: v2,
                        ...
                    }
                }
            ]
        }
    },
    ...
]
```
A special case of job artifact is a "Job Info" artifact. This kind of artifact
will be retrieved by the UI and rendered in the job detail panel. This
is what a Job Info artifact looks like:
```python
{
    "blob": {
        "job_details": [
            {
                ...
            }
        ]
    },
    "type": "json",
    "name": "Job Info"
}
```
All the elements in the job_details attribute of this artifact have a
mandatory title attribute and a set of optional attributes depending on
the content type, which determines how the value will be rendered. Here are
the possible values:
* **Text** - This is the simplest content type you can render and is the one
used by default if the content type specified is not recognised or is missing.
This content type renders as:
```html
<label>{{title}}</label><span>{{value}}</span>
```
* **Link** - This content type renders as an anchor html tag with the
following format:
```html
{{title}}: <a title="{{value}}" href="{{url}}" target="_blank" rel="noopener">{{value}}</a>
```
* **Raw Html** - The last resort, for when you need to show some formatted
content.
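For example, a "Job Info" artifact combining a text and a link entry might be built like the hypothetical payload below. The `title`, `value` and `url` keys follow the templates above; the `content_type` key name is an assumption for illustration:

```python
# Hypothetical payload; key names other than title/value/url are assumptions.
job_info = {
    'type': 'json',
    'name': 'Job Info',
    'blob': {
        'job_details': [
            {'content_type': 'text', 'title': 'Duration', 'value': '24 minutes'},
            {'content_type': 'link', 'title': 'artifact uploaded',
             'value': 'mylog.log', 'url': 'https://example.com/mylog.log'},
        ],
    },
}
```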
### Some Specific Collection POSTing Rules
Treeherder will detect what data is submitted in the `TreeherderCollection`
and generate the necessary artifacts accordingly. The outline below describes
what artifacts *Treeherder* will generate depending on what has been submitted.
See [Schema Validation](data_validation.md) for more info on validating some specialized JSON
data.
#### JobCollections
Via the `/jobs` endpoint:

1. Submit a Log URL with no `parse_status`, or with `parse_status` set to "pending"
* This is *Treeherder's* current internal log parser workflow
### Specifying Custom Log Names
By default, the Log Viewer expects logs to have the name `buildbot_text`
at this time. However, if you are supplying the `text_log_summary` artifact
with a custom log name, you must specify the name in two places for this to work.
1. When you add the log reference to the job:
```python
tj.add_log_reference( 'my_custom_log', data['log_reference'] )
```
2. In the `text_log_summary` artifact blob, specify the `logname` param.
This artifact is what the Log Viewer uses to find the associated log lines
for viewing.
```python
{
    "blob": {
        "step_data": {
            "steps": [
                {
                    "errors": [],
                    "name": "step",
                    "started_linenumber": 1,
                    "finished_linenumber": 1,
                    "finished": "2015-07-08 06:13:46",
                    "result": "success",
                }
            ],
            "errors_truncated": false
        },
        "logurl": "https://example.com/mylog.log",
        "logname": "my_custom_log"
    },
    "type": "json",
    "id": 10577808,
    "name": "text_log_summary",
    "job_id": 1774360
}
```
Adding a GitHub Repository
--------------------------
The pushes from GitHub repos come to Treeherder via Pulse. The webhook to enable
this exists in the GitHub group `mozilla`. (For example, `github.com/mozilla/treeherder`)
The following steps are required:
1. Create a PR with the new repository information added to the fixtures file:
`treeherder/model/fixtures/repository.json` (see other entries in that file
for examples of the data to fill in).
2. Open a bug request to enable the webhook that will trigger pulse messages for
every push from your repo. Use the following information:
* Component: GitHub: Administration
* Ask to install the <https://github.com/apps/taskcluster> integration on your repositories
* List the repositories you want to have access to the integration
* State whether any of those repositories are private
* State that this is only to get Pulse messages for integration into Treeherder
[Pulse Guardian]: https://pulseguardian.mozilla.org/whats_pulse
[Pulse]: https://wiki.mozilla.org/Auto-tools/Projects/Pulse
[Pulse Inspector]: https://tools.taskcluster.net/pulse-inspector/
[Pulse Job Schema]: https://github.com/mozilla/treeherder/blob/master/schemas/pulse-job.yml
[Treeherder bug]: https://bugzilla.mozilla.org/enter_bug.cgi?component=Treeherder:%20Data%20Ingestion&form_name=enter_bug&product=Tree%20Management
[MozillaPulse]: https://pypi.python.org/pypi/MozillaPulse
[Kombu]: https://pypi.python.org/pypi/kombu
[publish_to_pulse]: https://github.com/mozilla/treeherder/blob/master/treeherder/etl/management/commands/publish_to_pulse.py#L12-L12


Initial page load
-----------------
Load Treeherder, eg.
* [stage](https://treeherder.allizom.org)
* [production](https://treeherder.mozilla.org)
Depending on your test requirement.
**Expected**: Page loads displaying pushes pushed to mozilla-inbound.
Treeherder logo > Perfherder
**Expected**: Perfherder loads displaying its initial Graph page.
Perfherder logo > Treeherder
**Expected**: Treeherder loads again, displaying pushes per step 1.
Check Job details Tab selection
-------------------------------
Load Treeherder and select a completed/success job.
**Expected**: The Job details tab should load by default.
Select a completed/failed job.
**Expected**: The Failure summary tab should load.
Select a completed-success Talos job.
**Expected**: The Performance tab should load.
Select a completed-failed Talos job.
**Expected**: The Failure summary tab should load.
Select a Running job.
**Expected**: The Failure summary tab should load.
Pin a job
---------
Select a job, and click the 'pin' button in the lower navbar.
**Expected**: Selected job pinned
Select another job, and hit [spacebar]
**Expected**: Selected job pinned
Pinboard > Right hand menu dropdown > Clear all
**Expected**: Both jobs are removed from the pinboard.
Failure summary tab
-------------------
Select a classified or unclassified failed job.
**Expected**: Ensure the Failure summary tab loads by default.
If a Bug suggestion is present in the failure summary:
* Click on the bug description link
* Click on the bug pin icon
**Expected**:
* Bug description link should load the correct BMO bug in a new tab
* Pin should pin the job and add the bug to the bug classification field
Pinboard > Right hand dropdown menu > Clear all
Similar jobs tab
----------------
Select a job, select the Similar jobs tab, wait several seconds.
**Expected**: Recent jobs with matching symbols should load.
Select a Similar job row.
**Expected**: The adjacent panel should update with its job information.
Scroll to the bottom of the Similar jobs tab, click 'Show previous jobs'.
**Expected**: Additional, older jobs with matching symbols should load.
Job details pane
----------------
Select any job and confirm the following loads in the bottom left pane:
(Note: Backfill job will eventually be moved to the Action bar in bug 1187394).
**Expected**: Values load, are visible and correct, and links are valid.
Classify a job with associated bugs
-----------------------------------
Select and pin 3 jobs to the pinboard, select a classification type, add a classification comment and add bug 1164485. Select 'Save' in the pinboard.
**Expected**: The jobs show with an asterisk in the job table, green notification banners appear confirming successful classification for each job.
Click Annotations tab.
**Expected**: Ensure the same data appears in the panel.
Annotations tab > delete the bug and classification for that job. Select the other two jobs and repeat.
**Expected**: The jobs should be unclassified, annotations removed.
Reload the page.
**Expected**: The job should still be unclassified.
Switch repos
------------
Click on the Repos menu, select a different repo.
**Expected**: The new repo and its pushes should load.
Reverse the process, and switch back.
**Expected**: The original repo and pushes should load.
Toggle unclassified failures
----------------------------
Load Treeherder and click on the "(n) unclassified" button in the top navbar.
**Expected**: Only unclassified failures should be visible in the job table.
Filters panel
-------------
Click and open the 'Filters' menu panel in the top navbar, and turn off several job types in the panel.
**Expected**: Job types turned off are suppressed in the job table.
Click on 'Reset' in the Filters panel.
**Expected**: Filters UI should revert and suppressed jobs should reappear in the job table.
Filters panel > Field Filters > click new. Add a new filter eg. Platform, Linux.
**Expected**: Only Linux platforms should be visible in the job table.
Filter by Job details name and signature
----------------------------------------
Select any job and in the lower left panel, click on the Job: keywords eg. "Linux x64 asan Mochitest Chrome"
**Expected**: Ensure only jobs containing those keywords are visible.
Select any job and click on the adjacent "(sig)" signature link.
**Expected**: Ensure only jobs using that unique signature SHA are visible.
Pin all visible jobs in push
----------------------------
Click on the Pin 'all' pin-icon in the right hand side of any push bar.
**Expected**: Up to a maximum of 500 jobs should be pinned, and a matching notification warning should appear if that limit is reached.
Click in the pinboard on the extreme right hand drop down menu, and select 'Clear all'.
**Expected**: All jobs should be removed from the pinboard.
Login / Logout
--------------
Login via Taskcluster Auth.
**Expected**: The login button should switch to a generic "Person" avatar, and the user email should appear on hover.
Logout
**Expected**: The login button should switch back to "Login / Register".
View the Logviewer
------------------
Select any failed job and click the 'Log' icon in the lower navbar.
**Expected**: The Logviewer loads in a new tab, and it contains correct job and revision information in the top left corner, and it preloads to the first failure line if one exists.
Click on another failure line in the failed step.
**Expected**: The log should scroll to that failure line.
Click on 'show successful steps'.
**Expected**: Green successful step bars should appear in the top right panel.
Click on a successful step.
**Expected**: The log contents should scroll to the -- Start -- line for that step.
Thumbwheel/scroll/swipe downwards or upwards.
**Expected**: The log should quickly load new chunks when encountering a log boundary.
Click on the Raw Log link.
**Expected**: The raw log for the same job should load in a new tab.
Click all the available links in the result header, eg. "Inspect Task".
**Expected**: Each should load correctly for that job.
Select Treeherder from the nav menu.
**Expected**: Treeherder should load in the same window.
View the raw log
----------------
Select any completed job and click the raw log button in the lower navbar.
**Expected**: The raw log for that job should load in a new tab.
View pushes by Author
---------------------
Click on the Author email (eg. ryanvm@gmail.com) in a push bar.
**Expected**: Only pushes pushed by that Author should appear.
Get next 10| pushes via the main page footer.
**Expected**: Only pushes from that Author should be added.
View a single push
------------------
Load Treeherder and click on the 'Date' on the left side of any push.
**Expected**: Only that push should load, with an accompanying URL param "&revision=(SHA)"
(optional) Wait a minute or two for ingestion updates.
**Expected**: Only newly started jobs for that same push (if any have occurred) should appear. No new pushes should load.
Quick Filter input field
------------------------
Click the 'Filter platforms & jobs' input field in the top navbar, aka. Quick Filter.
**Expected**: Input field should expand in width for long input.
Enter any text (eg. 'Android') and hit Enter
**Expected**: Filter should be applied against the visible jobs and platform rows.
Click the grey (x) 'Clear this filter' icon on the right hand side of the input field, and hit Enter.
**Expected**: Filter should be cleared and input should shrink to original width.
Check push actions menu
-----------------------
Bugherder,
BuildAPI,
Revision URL List
**Expected**: Each should open without error or hanging.
Get next 10|20|50 pushes
------------------------
Click on Get next 10| pushes.
**Expected**: Ensure exactly 10 additional pushes were loaded.
Click on Get next 50| pushes.
**Expected**: Ensure the page has a reasonable load time of ~10 seconds.
View a single push via its Date link. Click Get next 10| pushes.
**Expected**: Ensure the page loads the 10 prior pushes and the "tochange" and "fromchange" in the url appear correct.
Filter pushes by URL fromchange, tochange
-----------------------------------------
See also the Treeherder [userguide] for URL Query String Parameters. Please test variants and perform exploratory testing, as top/bottom of range is new functionality (Jun 3 '15).
Navigate to the 2nd push loaded, from the push action menu select 'Set as top of range'.
**Expected**: Ensure: (1) 1st push is omitted (2) url contains `&tochange=SHA` and (3) ten pushes are loaded from that new top
Navigate to the 3rd push loaded and select 'Set as bottom of range'
**Expected**: Ensure (1) only the 3 ranged pushes are loaded (2) url contains `&tochange=[top-SHA]&fromchange=[bottom-SHA]`
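For illustration, a ranged URL of this shape (repo name and SHAs are hypothetical) would look like:

```
https://treeherder.mozilla.org/#/jobs?repo=mozilla-inbound&tochange=a1b2c3d4e5f6&fromchange=f6e5d4c3b2a1
```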
Click Get Next | 10 in the page footer.
**Expected**: Ensure 10 additional pages load for a total of 13 pushes.
(optional) Wait a minute or two for job and push updates.
**Expected**: Updates should only occur for the visible pushes. No new pushes should appear.
Filter pushes by URL date range
-------------------------------
See also the Treeherder [userguide] for URL Query String Parameters
Add a revision range to the URL in the format, eg:
`&startdate=2015-09-28&enddate=2015-09-28`
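For example (repo name chosen for illustration), a complete date-range URL would be:

```
https://treeherder.mozilla.org/#/jobs?repo=mozilla-inbound&startdate=2015-09-28&enddate=2015-09-28
```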
Warning: With the latest volume of jobs and pushes, anything greater than a single day window risks loading too much data for the browser with Treeherder default filter settings.
**Expected**: Pushes loaded should honor that range.
(Optional) Wait for new pushes to that repo.
**Expected**: Pushes loaded should continue to honor that range.
Perfherder Graphs
-----------------
Load Perfherder at eg.
<https://treeherder.allizom.org/perf.html>
**Expected**: Landing page should appear.
Click the blue 'Add test data' button, select a platform, enter a test series, and click Add+.
**Expected**: Performance series should load with scatter graph and line graph.
Click Add more test data, and add a 2nd series.
**Expected**: The second series is drawn in an alternate color, and both series can have their displays disabled/enabled via Show/Hide series tick UI.
Change display range dropdown to 90 days (or other value)
**Expected**: Ensure both series expand to that date range. Confirm that data which has expired beyond the 6 week data cycle still appears, but the SHA will instead display "loading revision".
No console errors throughout test run
-------------------------------------
Ensure the browser console is error free during and after the test run.
Open the console during the test run.
**Expected**: No errors should appear in the console.
Perfherder Compare
------------------
Load Perfherder Compare at eg.
<https://treeherder.allizom.org/perf.html#/comparechooser>
**Expected**: Landing page should appear.
Select two push revisions from the 'Recent' dropdowns, and click 'Compare revisions'.
**Expected**: Some kind of result should appear (likely a warning "tests with no results: " table).
Click on the 'Subtests' link for a row.
**Expected**: Sub-compare results should appear.
Click on the 'Graph' link for a sub-compare row if it exists.
**Expected**: The plotted graph for that series should appear.
All keyboard shortcuts
----------------------
Note: Listed "Toggle in-progress" shortcut 'i' is known not to be working at this time.
Check all keyboard shortcut functionality as listed in the [userguide].
**Expected**: Each shortcut should work as expected.
Job counts
----------
In any push with job counts, click on the group button eg. B( ) to expand the count.
**Expected**: Jobs should appear.
Select an expanded job, and click again on the group button B() to collapse the count back down.
**Expected**: The count should appear as a highlighted large button. eg. pending gray "+14"
Click in empty space to deselect the collapsed job.
**Expected**: The count "+14" should be deselected.
Click on the ( + ) global Expand/Collapse icon in the navbar to toggle all +n counts.
**Expected**: Counts should expand and collapse on all visible pushes.
Navigate via the n,p and left/right keys.
**Expected**: +n counts should be skipped during navigation.
Expand all the groups (the URL querystring will reflect this), then reload the page.
**Expected**: Groups should still be expanded for all pushes.
Optional: There are other variants that can be tested: classification of expanded job count members, Filters, and any other workflow integration testing.
[userguide]: https://treeherder.mozilla.org/userguide.html

Troubleshooting
===============
.. _troubleshooting-vagrant:
Errors during Vagrant setup
---------------------------
* The Vagrant provisioning process during `vagrant up --provision` assumes the presence of a stable internet connection. In the event of a connection interruption during provision, you may see errors similar to *"Temporary failure resolving.."* or *"E: Unable to fetch some archives.."* after the process has completed. In that situation, you can attempt to re-provision using the command:
```bash
> vagrant provision
```
If that is still unsuccessful, you should attempt a `vagrant destroy` followed by another `vagrant up --provision`.
* If you encounter an error saying *"mount.nfs: requested NFS version or transport protocol is not supported"*, you should restart the kernel server service using this sequence of commands:
```bash
systemctl stop nfs-kernel-server.service
systemctl disable nfs-kernel-server.service
systemctl enable nfs-kernel-server.service
systemctl start nfs-kernel-server.service
```
* If you encounter an error saying:
> *"The guest machine entered an invalid state while waiting for it to boot.
> Valid states are 'starting, running'. The machine is in the 'poweroff' state.
> Please verify everything is configured properly and try again."*
...you should check your host machine's virtualization technology (vt-x) is enabled
in the BIOS (see this [guide]), then continue with `vagrant up --provision`.
[guide]: http://www.sysprobs.com/disable-enable-virtualization-technology-bios


It's possible to work on the UI without setting up the Vagrant VM.
To get started:
* Clone the [treeherder repo] from GitHub.
* Install [Node.js] and [Yarn] (see [package.json] for known compatible versions, listed under `engines`).
* Run `yarn install` to install all dependencies.
Running the standalone development server
-----------------------------------------

This serves the unminified UI using data from the
production site. You do not need to set up the Vagrant VM unless making backend
changes.
* Start the development server by running:
```bash
$ yarn start
```
* The server will perform an initial build and then watch for new changes. Once the server is running, you can navigate to: <http://localhost:5000> to see the UI.
To run the unminified UI with data from the staging site instead of the production site, type:
```bash
$ yarn start:stage
```
If you need to serve data from another domain, type:
```bash
$ BACKEND_DOMAIN=<url> yarn start
```
This will run the unminified UI using `<url>` as the service domain.
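For example, to serve data from a hypothetical custom deployment (the domain below is illustrative only):

```bash
$ BACKEND_DOMAIN=https://treeherder.example.com yarn start
```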
.. _unminified_ui:
Running the unminified UI with Vagrant
--------------------------------------
You may also run the unminified UI using the full treeherder Vagrant project.
Complete the Vagrant installation instructions, then follow these steps:
* SSH to the Vagrant machine and start the treeherder service, like this:
```bash
vagrant ~/treeherder$ ./manage.py runserver
```
* Then, open a new terminal window and SSH to the Vagrant machine again. Run the
following:
```bash
vagrant ~/treeherder$ yarn start:local
```
* The server will perform an initial build and then watch for new changes. Once the server is running, you can navigate to: <http://localhost:5000> to see the UI.
Building the minified UI with Vagrant
-------------------------------------
If you would like to view the minified production version of the UI with Vagrant:
* SSH to the Vagrant machine and start the treeherder service:
```bash
vagrant ~/treeherder$ ./manage.py runserver
```
* Then run the build task (either outside or inside of the Vagrant machine):
```bash
$ yarn build
```
Once the build is complete, the minified version of the UI will now be accessible at
<http://localhost:8000> (NB: port 8000, unlike above).
Validating JavaScript
---------------------
We run our JavaScript code in the frontend through [eslint] to ensure
that new code has a consistent style and doesn't suffer from common
errors. Eslint will run automatically when you build the JavaScript code
or run the development server. A production build will fail if your code
does not match the style requirements.
To run eslint by itself, you may run the lint task:
```bash
$ yarn lint
```
Running the unit tests
----------------------
The unit tests for the UI are run with [Karma] and [Jasmine]. React components are tested with [enzyme].
To run the tests:
While working on the frontend, you may wish to watch JavaScript files and re-run tests
automatically when files change. To do this, you may run the following command:
```bash
$ yarn test:watch
```
The tests will perform an initial run and then re-execute each time a project file is changed.
[Karma]: http://karma-runner.github.io/0.8/config/configuration-file.html
[treeherder repo]: https://github.com/mozilla/treeherder
[Node.js]: https://nodejs.org/en/download/current/
[Yarn]: https://yarnpkg.com/en/docs/install
[package.json]: https://github.com/mozilla/treeherder/blob/master/package.json
[eslint]: https://eslint.org
[Jasmine]: https://jasmine.github.io/
[enzyme]: http://airbnb.io/enzyme/