* Add a step to autoclassify some intermittent jobs during mozci ingestion
* Add error handling
* Link the Bugzilla single tracking bug during autoclassification
* Small fixes related to Bastien's review
* Proper logic for autoclassification Bugzilla bug linking
* Nit
Co-authored-by: Eva Bardou <ebardou@teklia.com>
* Listen to mozci classification pulse messages and save results in Treeherder database
* Nits and suggestions
* Fix tests
* Revert changes
* Add unit tests + Fix review related code snippets
* Nit
* Add a dedicated command to listen to mozci classification pulse messages
Co-authored-by: Eva Bardou <ebardou@teklia.com>
There are short or common test file names like 001.html. When Treeherder tries
to generate bug suggestions for one of those, it searches the intermittent bugs
for 001.html in the summary, which also matches other-test-001.html. If too
many bugs are returned (more than 20), Treeherder won't suggest any bugs.
By matching on path boundaries (/, \), whitespace and list separators (,),
the other test files won't be matched (see the sketch below). Because adding
these rules to the SQL query is slower than filtering the false positives out
afterwards, the latter method is applied. This keeps the risk that the SQL
query (limited to 50 lines) will not return all matches, and it has to be
reevaluated if that turns into an issue.
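The following is a minimal sketch of that post-filtering idea, not Treeherder's actual code; the function name and the exact boundary set are illustrative:

```python
import re

# Accept a bug summary only when the searched file name is delimited by a
# path boundary (/ or \), whitespace, a comma, or the start/end of the text.
def is_genuine_match(file_name: str, summary: str) -> bool:
    boundary = r"[/\\\s,]"
    pattern = re.compile(
        "(^|{b}){name}($|{b})".format(b=boundary, name=re.escape(file_name))
    )
    return bool(pattern.search(summary))

# "001.html" matches when it stands alone, but not inside another file name.
assert is_genuine_match("001.html", "Intermittent dom/tests/001.html | failed")
assert not is_genuine_match("001.html", "Intermittent other-test-001.html | failed")
```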
* Update docker-shared-user for pulse_url and add PROJECTS_TO_INGEST to backend container
* Update docs to make them clearer
* Fix exception caught in pytest.raises
* parameterize ingestion test with job tier
* provide slight refactors on test code
* refactor complex alerting conditions
* split alert generation (test) scenarios in happy/unhappy paths
* address some PyCharm inspection warnings
* remove StepParser and switch to ErrorParser
* remove writes to TextLogStep from artifact.py
* remove buildbot ref in builders
* replace TextLogStep model in DetailsPanel, SimilarJobsTab and logviewer App
* cleanup DetailsPanel
* remove old log parsing tests and update others
* add logging to error_summary.py
* add parse max error lines limit to ErrorParser
* fix in similar jobs tab for Bug 1652869
* remove StepParser and stop storing steps in TextLogStep
* remove more buildbot references
* replace TextLogStep model in DetailsPanel and logviewer App
* remove old log parsing tests and update other tests
* add logging to error_summary.py
* remove StepParser and stop storing steps
* replace TextLogStepModel in the UI with text-log-errors API
* remove old references to buildbot
* update and cleanup tests
* rename builds-4h to live_backing_log
* add foreign key to Job on TextLogError table
* remove TextLogErrorViewset and test
* add management command to backfill job ids and update artifact.py
This is part 1 of this bug and only changes how the JobDetails API
is used in the UI to retrieve uploaded artifacts. It updates ReplicatesGraph,
DetailsPanel, UnsupportedJob and the LogViewer App, adds helpers and updates tests.
GitHub-based projects can list pushes in the UI incorrectly. This is caused by commits that have been amended.
This change switches to grabbing the push time from the `timestamp` field of the head commit in the Pulse event rather than from GitHub's APIs. This fixes the push sorting problem.
Note that the `timestamp` field only exists in the push event that the Pulse message contains. The field can be seen in the `events` API; however, that API contains all sorts of events and holds a maximum of 300 events.
This change also includes partial support for manual ingestion of Git pushes.
Git-based projects can list pushes in the UI incorrectly. This is caused by commits that have been amended.
This change switches to using the `timestamp` field of the head commit to determine the push time, rather than a commit's authorship date (see the sketch below). This fixes the push sorting problem.
This change also includes support for manual ingestion of Git pushes.
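A minimal sketch of the idea behind both fixes above, assuming a GitHub push event payload with a `head_commit.timestamp` field; this is illustrative, not the actual ingestion code:

```python
from datetime import datetime

def push_time(push_event: dict) -> datetime:
    # GitHub push events carry an ISO 8601 timestamp on the head commit,
    # e.g. "2019-07-09T12:34:56-07:00"; unlike the authorship date, it
    # reflects when the push happened, so amended commits sort correctly.
    return datetime.fromisoformat(push_event["head_commit"]["timestamp"])
```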
* Bug 1574651 - remove unused JobConsumer and related code
The `update_pulse_test_fixtures` management command listens for job
messages, so it is of no use anymore.
* Bug 1574651 - refactor pulse listening to support multiple AMQP servers
This looks forward to supporting the ingestion of jobs and tasks from multiple
Taskcluster deployments, each of which is on its own AMQP server (or, at
least, its own vhost).
* Bug 1574651 - pass rootUrl from pulse to celery, verify against repository
When jobs and pushes are loaded, the repo's root URL is known. This check
just serves to ensure that the rootUrl for the repo and the rootUrl for
the event match up (see the sketch below).
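A minimal sketch of that consistency check, with illustrative names rather than Treeherder's actual code:

```python
class RootUrlMismatch(Exception):
    pass

def check_root_url(repo_root_url: str, event_root_url: str) -> None:
    # Normalize trailing slashes so equivalent root URLs compare equal.
    if repo_root_url.rstrip("/") != event_root_url.rstrip("/"):
        raise RootUrlMismatch(
            f"repository expects {repo_root_url!r}, "
            f"but the event came from {event_root_url!r}"
        )
```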
* Bug 1574651 - use root_url from message to make in-job URLs
* Bug 1574651 - update ingest-and-push-tasks to take --root-url
Currently, Treeherder consumes Pulse messages from an intermediary service called `taskcluster-treeherder`.
That service needs to be shut down and its functionality absorbed into Treeherder.
In order to do this we need to switch to the standard Taskcluster exchanges as defined here:
https://docs.taskcluster.net/docs/reference/platform/queue/exchanges
On a first pass we are only importing the code from `taskcluster-treeherder` without changing
much of Treeherder's code. The code is translated from JavaScript to Python, and only minor
code changes were made, to reduce the risk of introducing bugs while porting.
Internally, on this first pass, we will still have an intermediary data structure representing
what `taskcluster-treeherder` emits; however, we will stop consuming messages
from it and be able to shut it down.
Instead of consuming from one single exchange, we will be consuming from multiple ones, each
representing a different kind of task event (e.g. pending vs. running); see the sketch below.
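A hedged kombu sketch of consuming from one exchange per task state; the queue names and callback are illustrative, while the exchange names follow the Taskcluster queue reference linked above:

```python
import socket

from kombu import Connection, Exchange, Queue

task_states = ["pending", "running", "completed", "failed", "exception"]

with Connection("amqp://foo:bar@pulse.mozilla.org:5671/?ssl=1") as connection:
    # One queue per standard Taskcluster exchange, instead of a single
    # queue bound to the old taskcluster-treeherder exchange.
    queues = [
        Queue(
            name=f"queue/foo/tasks-{state}",
            exchange=Exchange(f"exchange/taskcluster-queue/v1/task-{state}", type="topic"),
            routing_key="#",
        )
        for state in task_states
    ]

    def on_message(body, message):
        # Each exchange delivers a different kind of task event.
        message.ack()

    with connection.Consumer(queues, callbacks=[on_message]):
        try:
            connection.drain_events(timeout=5)
        except socket.timeout:
            pass  # no messages within the window; a real consumer would loop
```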
In order to test this change you need to open 5 terminal windows and follow these steps:
* In the first window, run `docker-compose up`
* In the next three windows, `export PULSE_URL="amqp://foo:bar@pulse.mozilla.org:5671/?ssl=1"` and run one of the following commands:
  * `docker-compose run -e PULSE_URL backend ./manage.py pulse_listener_jobs`
  * `docker-compose run -e PULSE_URL backend ./manage.py pulse_listener_tasks`
  * `docker-compose run -e PULSE_URL backend ./manage.py pulse_listener_pushes`
* In the last window, run `docker-compose run backend celery -A treeherder worker -B --concurrency 5`
* Open `http://localhost:5000` in your browser
This is just a summary from [the docs](https://treeherder.readthedocs.io/pulseload.html).
= ETL management commands =
This change also introduces two ETL management commands that can be executed as follows:
== Ingest push and tasks ==
This command ingests into Treeherder all tasks associated with a push.
It uses Python's asyncio to speed up the ingestion of tasks (see the sketch below).
```bash
./manage.py ingest_push_and_tasks
```
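As an aside, the asyncio speed-up mentioned above boils down to ingesting tasks concurrently rather than one at a time; this is an illustrative sketch, not the command's actual code:

```python
import asyncio

async def ingest_task(task_id: str) -> None:
    # Stand-in for fetching one task and storing it in the database.
    await asyncio.sleep(0)

async def ingest_push(task_ids: list) -> None:
    # Ingest every task of the push concurrently.
    await asyncio.gather(*(ingest_task(task_id) for task_id in task_ids))

asyncio.run(ingest_push(["task-1", "task-2", "task-3"]))
```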
== Update Pulse test fixtures ==
```bash
./manage.py update_pulse_test_fixtures
```
This command will read 100 Taskcluster Pulse messages, process them and store them as test fixtures
under these two files: `tests/sample_data/pulse_consumer/taskcluster_{jobs,metadata}.json`
Follow-up work will be to get rid of the intermediary job representation ([bug 1560596](https://bugzilla.mozilla.org/show_bug.cgi?id=1560596)), which will
clean up some of the code and some of the old tests.
= Extra script =
A script that compares a push between two different Treeherder instances.
```
usage: Compare a push from a Treeherder instance to the production instance.
[-h] [--host HOST] --revision REVISION [--project PROJECT]
optional arguments:
-h, --help show this help message and exit
--host HOST Host to compare. It defaults to localhost
--revision REVISION Revision to compare
--project PROJECT Project to compare. It defaults to mozilla-central
```
= Other changes =
Other changes included:
* Import `taskcluster-treeherder`'s validation to ensure we're not fed garbage.
* Replace `yaml.load(f)` with `yaml.load(f, Loader=yaml.FullLoader)`. Read [this](https://github.com/yaml/pyyaml/wiki/PyYAML-yaml.load(input)-Deprecation) for details
* Introduce `taskcluster` and `taskcluster-urls` as dependencies
* The test `test_retry_missing_revision_never_succeeds` makes no sense anymore, because
we perform JSON validation on the Pulse message
* Bug 1395254 - Consume Taskcluster Pulse messages from standard queue exchanges
Currently, Treeherder consumes Pulse messages from an intermediary service called `taskcluster-treeherder`.
That service needs to be shut down and its functionality absorbed into Treeherder.
In order to do this we need to switch to the standard Taskcluster exchanges as defined here:
https://docs.taskcluster.net/docs/reference/platform/queue/exchanges
On a first pass we are only importing the code from `taskcluster-treeherder` without changing
much of Treeherder's code. The code is translated from JavaScript to Python, and only minor
code changes were made, to reduce the risk of introducing bugs while porting.
Internally, on this first pass, we will still have an intermediary data structure representing
what `taskcluster-treeherder` emits; however, we will stop consuming messages
from it and be able to shut it down.
Instead of consuming from one single exchange, we will be consuming from multiple ones, each
representing a different kind of task event (e.g. pending vs. running).
In order to test this change you need to open 4 terminal windows and follow these steps:
* In the first two windows, `export PULSE_URL="amqp://foo:bar@pulse.mozilla.org:5671/?ssl=1"` and run one of the following commands:
  * `docker-compose run -e PULSE_URL backend ./manage.py pulse_listener_jobs`
  * `docker-compose run -e PULSE_URL backend ./manage.py pulse_listener_pushes`
* In the third window, run `docker-compose run backend celery -A treeherder worker -B --concurrency 5`
* In the last window, run `docker-compose up`
* Open `http://localhost:5000` in your browser
This is just a summary from [the docs](https://treeherder.readthedocs.io/pulseload.html).
= ETL management commands =
This change also introduces two ETL management commands that can be executed as follows:
== Ingest push and tasks ==
This command ingests into Treeherder all tasks associated with a push.
It uses Python's asyncio to speed up the ingestion of tasks.
```bash
./manage.py ingest_push_and_tasks
```
== Update Pulse test fixtures ==
```bash
./manage.py update_pulse_test_fixtures
```
This command will read 100 Taskcluster Pulse messages, process them and store them as test fixtures
under these two files: `tests/sample_data/pulse_consumer/taskcluster_{jobs,metadata}.json`
Follow-up work will be to get rid of the intermediary job representation ([bug 1560596](https://bugzilla.mozilla.org/show_bug.cgi?id=1560596)), which will
clean up some of the code and some of the old tests.
= Other changes =
Other changes included:
* Import `taskcluster-treeherder`'s validation to ensure we're not fed garbage.
* Replace `yaml.load(f)` with `yaml.load(f, Loader=yaml.FullLoader)`. Read [this](https://github.com/yaml/pyyaml/wiki/PyYAML-yaml.load(input)-Deprecation) for details
* Introduce `taskcluster` and `taskcluster-urls` as dependencies
* The test `test_retry_missing_revision_never_succeeds` makes no sense anymore, because
we perform JSON validation on the Pulse message
By design `.encode()` returns bytes, whereas the test should be using
a string. The goal of the encode seems to have been to handle Unicode
correctly; however, that's now achieved via the use of:
`from __future__ import unicode_literals`
This fixes the following exception under Python 3 (a minimal repro is sketched below):
`TypeError: Object of type bytes is not JSON serializable`
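For illustration, the failure is easy to reproduce, since `json.dumps` refuses bytes values under Python 3:

```python
import json

body = json.dumps({"revision": "abcdef"})
json.dumps({"payload": body})           # fine: str is JSON serializable
json.dumps({"payload": body.encode()})  # TypeError: Object of type bytes is not JSON serializable
```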
When `git push -f` is used, the "base" commit of the GitHub push event
is whatever was previously in that branch.
That commit is likely not part of the history of the new branch,
so using the "List commits" API will not allow us to find it
as a starting point.
Worse, if that API did not force pagination, we'd get the entire
repository history!
Using the "Compare two commits" API instead lets GitHub's server
figure out which commits are reachable from the new branch
but were not before, which is exactly the set of commits we need
to treat as "part of a push" (see the sketch below).
Runnable jobs for buildbot were calculated via a celerybeat task
(that was disabled in #4007) and the results stored in the
`runnable_jobs` table. This can all be removed now that buildbot is
EOL, since the remaining support for Taskcluster runnable jobs does
not use that celery task/Django model.
Now that consumers of OrangeFactor have been switched to the new
intermittent failures view UI/API, we can stop submitting failure
classifications to OrangeFactor's Elasticsearch instance.