This fixes the issues with retriggering and add-new-jobs that the
original PR introduced. It also adds a few unit tests to catch
this potential mistake in the future.
This reverts commit ffd871ae34.
`jest-dom` has moved to `@testing-library/jest-dom`. Please uninstall jest-dom and install `@testing-library/jest-dom` instead,
or use an older version of `jest-dom`. If you do upgrade to `@testing-library/jest-dom`, make sure to update your usage of `jest-dom`
to use `@testing-library/jest-dom/extend-expect` rather than simply `jest-dom/extend-expect`. Learn more about this change here:
https://github.com/testing-library/dom-testing-library/issues/260 Thanks!
* Rename details panel `selectedJob` to `selectedJobFull`
The job that's passed to the DetailsPanel has a bunch of extra fields
that are not present in the normally downloaded list of jobs, so I wanted to
make clear that `selectedJob` is not the same thing as what you see
in the DetailsPanel.
* Stop using Redux where not necessary
I was using Redux to assign the selectedJob in a few details
classes when I should have just passed it where it was needed.
* New addAggregateFields function
Instead of using a heavyweight JobModel for each job,
we now persist a few fields that were previously being recalculated
over and over, especially during filtering and re-rendering.
* Remove some cruft leftover from Buildbot.
Currently, Treeherder consumes Pulse messages from an intermediary service called `taskcluster-treeherder`.
That service needs to be shut down and its functionality absorbed into Treeherder.
In order to do this we need to switch to the standard Taskcluster exchanges, as defined here:
https://docs.taskcluster.net/docs/reference/platform/queue/exchanges
On this first pass we are only importing the code from `taskcluster-treeherder`, without changing
much of Treeherder's code. The code is translated from JavaScript to Python, and only some minor
code changes were made, to reduce the risk of introducing bugs while porting.
Internally, on this first pass, we will still have an intermediary data structure representing
what `taskcluster-treeherder` emits; however, we will stop consuming messages
from it and will be able to shut it down.
Instead of consuming from one single exchange, we will consume from multiple ones, each representing
a different task state (e.g. pending vs. running).
In order to test this change, open five terminal windows and follow these steps:
* In the first window, run `docker-compose up`
* In the next three windows, `export PULSE_URL="amqp://foo:bar@pulse.mozilla.org:5671/?ssl=1"` and then run one of the following commands (one per window):
  * `docker-compose run -e PULSE_URL backend ./manage.py pulse_listener_jobs`
  * `docker-compose run -e PULSE_URL backend ./manage.py pulse_listener_tasks`
  * `docker-compose run -e PULSE_URL backend ./manage.py pulse_listener_pushes`
* In the last window, run `docker-compose run backend celery -A treeherder worker -B --concurrency 5`
* Open `http://localhost:5000` in your browser
This is just a summary from [the docs](https://treeherder.readthedocs.io/pulseload.html).
= ETL management commands =
This change also introduces two ETL management commands that can be executed like this:
== Ingest push and tasks ==
This command ingests into Treeherder all tasks associated with a push.
It uses Python's asyncio to speed up the ingestion of tasks (see the sketch below).
```bash
./manage.py ingest_push_and_tasks
```
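For illustration, the concurrency pattern looks roughly like this; the task-fetching coroutine is a hypothetical stand-in for the real Taskcluster Queue calls:
```python
import asyncio

async def fetch_task(task_id):
    # Hypothetical stand-in for a Taskcluster Queue API call.
    await asyncio.sleep(0)  # placeholder for the real network I/O
    return {"taskId": task_id, "state": "completed"}

async def ingest_push_tasks(task_ids):
    # Fetch every task of the push concurrently instead of one at a time.
    return await asyncio.gather(*(fetch_task(tid) for tid in task_ids))

print(asyncio.run(ingest_push_tasks(["abc123", "def456"])))
```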
== Update Pulse test fixtures ==
```bash
./manage.py update_pulse_test_fixtures
```
This command reads 100 Taskcluster Pulse messages, processes them and stores them as test fixtures
under these two files: `tests/sample_data/pulse_consumer/taskcluster_{jobs,metadata}.json`
Follow-up work will be to get rid of the intermediary job representation ([bug 1560596](https://bugzilla.mozilla.org/show_bug.cgi?id=1560596)), which will
clean up some of the code and some of the old tests.
= Extra script =
A script that allows comparing a push across two different Treeherder instances.
```
usage: Compare a push from a Treeherder instance to the production instance.
       [-h] [--host HOST] --revision REVISION [--project PROJECT]

optional arguments:
  -h, --help           show this help message and exit
  --host HOST          Host to compare. It defaults to localhost
  --revision REVISION  Revision to compare
  --project PROJECT    Project to compare. It defaults to mozilla-central
```
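As a rough illustration of what such a comparison can look like (the endpoint path and field names are assumptions based on Treeherder's public push API, and the revisions are placeholders):
```python
import requests

def get_pushes(host, project, revision):
    # Assumed endpoint shape; Treeherder exposes pushes per project.
    url = f"{host}/api/project/{project}/push/"
    return requests.get(url, params={"revision": revision}).json()["results"]

local = get_pushes("http://localhost:8000", "mozilla-central", "abcdef123456")
prod = get_pushes("https://treeherder.mozilla.org", "mozilla-central", "abcdef123456")
print("local:", len(local), "production:", len(prod))
```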
= Other changes =
Other changes included:
* Import `taskcluster-treeherder`'s validation to ensure we're not fed garbage.
* Change `yaml.load(f)` to `yaml.load(f, Loader=yaml.FullLoader)` (see the sketch below). Read [this](https://github.com/yaml/pyyaml/wiki/PyYAML-yaml.load(input)-Deprecation) for details.
* Introduce `taskcluster` and `taskcluster-urls` as dependencies.
* Remove the test `test_retry_missing_revision_never_succeeds`, since it makes no sense now that we perform JSON validation on the Pulse message.
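The PyYAML change is mechanical; a minimal sketch of the new, explicit-Loader call:
```python
import yaml

document = "repository: mozilla-central\nretries: 3\n"

# Passing an explicit Loader avoids the unsafe-by-default behaviour
# that PyYAML 5.1 deprecated for bare yaml.load(input).
data = yaml.load(document, Loader=yaml.FullLoader)
print(data)  # {'repository': 'mozilla-central', 'retries': 3}
```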
Modify the performance/summary endpoint to accommodate Perfherder graphing needs:
* modify the logic so the `signature` query param does not filter on `parent_signature__isnull`, and add an `all_data` param that returns performance data as a list of objects with additional data, like `PerformanceDatum`
* add a condition on `all_data` to also return the revision, repo name and `repository_id`
* accept multiple signatures
* make `startday` and `endday` optional if `interval` is provided (see the request sketch below)
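A rough illustration of how a client might combine these parameters; the host, path and exact parameter spellings are taken from the bullet points above (plus an assumed `repository` param) rather than any documented contract:
```python
import requests

params = {
    "repository": "mozilla-central",
    "signature": ["abc123", "def456"],  # multiple signatures now accepted
    "interval": 86400,                  # supplying interval makes startday/endday optional
    "all_data": "true",                 # return full PerformanceDatum-style objects
}
response = requests.get(
    "https://treeherder.mozilla.org/api/performance/summary/", params=params
)
print(response.json())
```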
The bug filer used an undocumented Bugzilla API to search for
product/component pairs, and Bugzilla recently moved that API endpoint,
breaking the bug filer. This patch points it at the correct endpoint.
This updates the bug filer to use markdown in the Bugzilla bug
description. It adds bolded labels to the `Filed by` entry and any URLs
specified. Additionally, the comment is surrounded by a code fence so
that long lines aren't wrapped and lines that happen to contain
markdown formatting are not rendered as markdown.
Example output:
**Filed by:** foo [@] bar.com
**Parsed log:** http://.../parsed.log.html
**Full log:** http://.../full.log.html
---
```
[task 2019-03-06T03:54:26.459Z] 03:54:26 INFO - TEST-FAIL...
[task 2019-03-06T03:54:26.460Z] 03:54:26 INFO - INFO | LeakSanitize
...
```
This reverts commit fa533ff25a.
It turns out there's a bug with this change. The plan is for @ionutgoldan to create a new PR with just the migration and the model change for PerformanceDatum, which we can deploy. He can then re-implement his code changes from this PR, and we'll already have the lengthy index-adding migration out of the way.
Add index to performance_datum.push_timestamp field
Add more expire parameters for Perfherder data cycler
Don't keep perf data indefinitely
Add a debug logger that logs even in production
* Bug 1521025 - Configure timestamps on endpoints
Add new fields to PerformanceAlert and PerformanceAlertSummary models. Override update methods in corresponding serializers to update specific fields. Add new component tests and update summaries only at endpoint level.
Since enough time has passed since #4697 (bug 1529223) landed for people
to refresh any existing Treeherder UI tabs (that will have still been
using the old header names/values).
This is the second part to the Font Awesome conversion started in #4556.
After this PR, all that is left is the Perfherder parts, which can be
converted from the inline HTML style SVG+JS to using
`@fortawesome/react-fontawesome` as part of the React conversion.
Unfortunately the "Custom Actions" usage of `ajv.compile()` requires that
the `script-src` CSP directive contain `'unsafe-eval'`, otherwise the
whole feature breaks.
Using `'unsafe-eval'` defeats much of the point of CSP, so it should be
removed as soon as possible. Bug 1530607 is filed to track.
Previously the frontend would calculate the access token expiry timestamp
in milliseconds and pass it to the `/login/` API via an `ExpiresAt` header.
The backend would then convert both the Id Token's `exp` and current time
to milliseconds, when calculating the earliest expiry. The result then
had to be converted back to seconds for use with Django's session
`.set_expiry()`.
It is instead much simpler to leave everything in seconds, since none of
the Auth0-provided inputs were in milliseconds to start with, so there is
no loss of precision, just fewer conversions required. Timestamps are also
more commonly in seconds, so use of seconds is less surprising.
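A minimal sketch of the simplified calculation, using hypothetical names (the real logic lives in `AuthBackend`):
```python
import time

def seconds_until_expiry(access_token_expires_at, id_token_exp):
    # Both inputs are already seconds-since-epoch, so the earliest expiry
    # can be computed and handed straight to Django's session.set_expiry()
    # with no unit conversions in either direction.
    return min(access_token_expires_at, id_token_exp) - int(time.time())

# e.g. request.session.set_expiry(seconds_until_expiry(header_value, payload["exp"]))
```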
After this is deployed there will initially be users who have old frontend
pages open that are still sending the expiry as milliseconds. In order to
be able to differentiate between new and old clients, the header has been
renamed to `Access-Token-Expires-At` (which also makes it clearer as to
what the expiry is for, given there is also an Id Token expiry), and a
temporary fall-back added to the backend that can be removed after a few
days has passed.
Since `test_get_username_from_userinfo` is a little too narrowly-scoped
and would be better as an API test. It has been combined with two other
tests in `test_auth.py` to give a more representative workflow test.
Since:
* They don't need to use the slower `transactional_db` fixture that has
advanced transaction-inspecting support.
* They don't need to add a request finalizer, since the `db` fixture
cleans up the User during test teardown automatically.
* `User` does not need to be imported locally.
This speeds up `test_auth.py` by 4x.
Since the name/description references the pre-auth0 implementation that
was removed in #3144. The `test_user` fixture has also been removed,
since it is not required for the test to run (the error referenced in
the comment no longer occurs).
The latest policy used in the report-only header has been working well
on production (the violation reports logged to New Relic are only from
scripts injected by browser addons), so we're ready to start enforcing
the policy by using the real `Content-Security-Policy` header name.
NB: When features are added in the future, PR authors and reviewers will
need to remember to update the policy if needed (for example to add domains
to the `connect-src` directive). The CSP header is not enabled when using
`webpack-dev-server` (it would break dev source maps and react-hot-loader)
so if in doubt test locally (using `yarn build` and serving via Django
runserver) or on prototype first.
See:
https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy
Occasionally failing build/test runs can fail in such a way that results
in a significant amount of log spam and therefore log files that are
hundreds of MB in size each. This can cause log parsing backlogs,
particularly when many jobs on the same push fail in such a way.
The log parser now checks the `Content-Length` of log files prior to
streaming them, and skips the download/parse if it exceeds the set
threshold. The frontend has been adjusted to display an appropriate
message explaining why the parsed log is not available.
The threshold has been set to 5MB, since:
* the 99th percentile of download size on New Relic was ~2.8MB:
https://insights.newrelic.com/accounts/677903/dashboards/339080
* `Content-Length` is the size of the log prior to decompression, and
the chronic logspam cases have been known to have compression ratios
of 20-50x, which would translate to an uncompressed size limit of
up to 250MB (which is already much larger than buildbot's former 50MB
uncompressed size limit).
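A sketch of the gating logic, assuming a simple helper around the size check (the real parser may instead inspect the header on its streaming GET):
```python
import requests

MAX_LOG_SIZE_IN_BYTES = 5 * 1024 * 1024  # the 5MB threshold described above

def log_is_parseable(log_url):
    # Check the compressed size advertised by the server before committing
    # to streaming (and parsing) a potentially enormous log file.
    response = requests.head(log_url, allow_redirects=True)
    content_length = int(response.headers.get("Content-Length", 0))
    return content_length <= MAX_LOG_SIZE_IN_BYTES
```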
The id token payload contains an `exp` property, which is an integer
representing the number of seconds past the epoch at which the id token
expires.
However the mocked value in our authentication tests was the string `'500'`,
which is neither the correct data type, nor a timestamp. This meant that
during tests only, the `min(accesstoken_exp_in_ms, idtoken_exp_in_ms)`
in `AuthBackend.authenticate()` was comparing an int and a string, which
under Python 3 results in:
`TypeError: '<' not supported between instances of 'str' and 'int'`
A later bug/PR will refactor the auth backend to fix issues unrelated to
Python 3 compatibility and add more test coverage.
Makes the following changes to the initial header added in #4678:
1) Adds a `frame-src` directive
Whilst the Auth0 domain is already whitelisted in `connect-src` allowing
initial logins to work, Auth0.js renewals are performed in an iframe, so
need both the auth0 domain and `'self'` (for the `/login.html` callback)
to be permitted via `frame-src`.
2) Adds https://taskcluster-artifacts.net to `connect-src`
Since some requests to `queue.taskcluster.net` redirect to it (eg for the
"Add new jobs" feature), and for redirects both the original and new domain
need whitelisting.
3) Adds `'report-sample'` to `script-src` and `style-src`, which makes
the browser send JS/CSS samples for any violations of the "inline" rules,
making it easier to debug collected CSP violation reports.
This adds a `Content-Security-Policy-Report-Only` header for static assets
served by WhiteNoise (such as our frontend), which includes a first pass
at a possible policy that should work for Treeherder.
The header also includes a `report-uri` directive, which points at a newly
added API for collecting CSP violation reports. Reports are logged as
warnings (so will appear in Papertrail) and sent to New Relic as a custom
event. This will allow us to see whether the policy would block valid
requests, so we can refine it prior to converting to the real (ie blocks
things) `Content-Security-Policy` header.
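A minimal sketch of what such a report-collection endpoint can look like in Django (the view name is hypothetical, and the New Relic forwarding is omitted):
```python
import json
import logging

from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt

logger = logging.getLogger(__name__)

@csrf_exempt
def csp_report(request):
    # Browsers POST a JSON body with a top-level "csp-report" key.
    report = json.loads(request.body.decode("utf-8")).get("csp-report", {})
    logger.warning("CSP violation: %s", report)
    return HttpResponse(status=204)
```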
The addition of `ng-csp` to `perf.html` is to enable AngularJS's ngCSP
feature, which turns off use of `eval()` and automatic stylesheet
injection, so that the policy directives `unsafe-eval` and
`unsafe-inline` don't have to be used. This requires us to then manually
import the AngularJS stylesheet to include the styles that would have
previously been injected:
https://docs.angularjs.org/api/ng/directive/ngCsp
See:
https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy-Report-Only
* `get_resultsets()` has been deprecated for some time
* the other methods being removed are for API endpoints that no longer
exist after #3222 (bug 1437968)
These were formerly used when submitting job data using the Python client
(support for which was removed in bug 1349182), and in `buildapi.py` as
part of buildbot ingestion (until #4087 / bug 1443251).
This removes the final usages in our tests so we can drop them entirely.
Previously any exceptions raised whilst loading the expected output JSON
fixtures were suppressed, which made debugging the Python 3 test failures
harder than needs be.
The reason failures were suppressed was to allow the test to continue far
enough that the actual output could be saved to the fixture when creating
new tests. However reordering `do_test()` has the same effect without the
need for the `load_exp()` try-except handling.
Since the `requests` package's `iter_lines()` returns bytes by default.
We cannot pass `decode_unicode=True` to `iter_lines()`, since it would also split on
Unicode newlines (which can occur in test message output), so instead we
manually `.decode()` each line before parsing it.
Fixes the following exception under Python 3:
`TypeError: a bytes-like object is required, not 'str'`
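The resulting pattern, sketched with an illustrative URL:
```python
import json
import requests

response = requests.get("https://example.com/live_backing.jsonl", stream=True)

for line in response.iter_lines():
    # iter_lines() yields bytes; decode each line ourselves rather than
    # passing decode_unicode=True, which would also split on Unicode
    # newlines that can legitimately appear in test message output.
    if line:
        record = json.loads(line.decode("utf-8"))
```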
The test utility `load_exp()` had to be modified to no longer use append
mode when opening the expected output JSON files, in order to fix:
`json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)`
Also remove all Karma support and update the docs to only mention `Jest`.
One of the test files was testing some AngularJS filters. I converted these
tests to test the equivalent helper functions.
Since by design `.encode()` returns bytes, whereas the test should be using
a string. The goal of the encode seems to have been to handle unicode
correctly, however that's now achieved via the use of:
`from __future__ import unicode_literals`
Fixes the following exception under Python 3:
`TypeError: Object of type bytes is not JSON serializable`
* Switches from `.iteritems()` to `.items()` in the Jinja template.
* Removes an assert from `test_intermittents_commenter` that is redundant
(due to `match_querystring=True`) and non-deterministic across Python
versions (due to the query string params being affected by dict order).
Python 3's `filter()` now returns an iterator rather than a list, so the
result must be cast back to a `list()` when used in contexts where an
iterator is not supported. However, in this case `first` fulfils the goal
more cleanly.
Fixes:
`TypeError: 'filter' object is not subscriptable`
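For illustration (`first` above may be a queryset/utility helper; `next()` expresses the same idea in plain Python):
```python
jobs = [{"id": 1, "state": "pending"}, {"id": 2, "state": "running"}]

# Python 3: filter() returns a lazy iterator, so indexing raises TypeError.
running = filter(lambda job: job["state"] == "running", jobs)
# running[0]  -> TypeError: 'filter' object is not subscriptable

# Taking just the first match with next() avoids materialising a list.
first_running = next(running, None)
print(first_running)
```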
Since it's never called with the `task_id` parameter, and when the task ID
is not set, the return value is identical to calling `list_runnable_jobs`
directly.
Since it's only used by the frontend as a fallback for when it cannot
find a non-gzipped version of `runnable-jobs.json`, and enough time has
now passed for all jobs to have that file.
Bugzilla recently started parsing markdown in bug comments. This was
causing some issues for bugs filed with Treeherder's bug filer because
text was unintentionally being parsed as markdown.
This patch just prepends `#[markdown(off)]` to the beginning of the
comment field, which prevents the comment from being parsed as markdown.
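A minimal sketch of the change (the function name is hypothetical):
```python
def build_bug_description(comment_body):
    # The leading directive tells Bugzilla not to render the comment
    # as markdown.
    return "#[markdown(off)]\n" + comment_body

print(build_bug_description("TEST-UNEXPECTED-FAIL | some_test | *not emphasis*"))
```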
Wrench jobs are a job type for the standalone WebRender test suite, and
run reftests as part of the job. Having reftest-ish things like the
reftest analyzer links for these jobs is desirable.
And explicitly disable redis-py TLS validation to restore the validation
behaviour back to how it was with redis-py v2, since Heroku Redis uses
self-signed certificates so connections to it will fail if validation
is enabled. This resolves the issue seen in bug 1510000.
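For reference, a sketch of the connection setup with validation explicitly disabled (the URL is illustrative; `ssl_cert_reqs=None` is the redis-py option in question):
```python
import redis

# Heroku Redis uses self-signed certificates, so certificate validation is
# explicitly disabled to match redis-py v2's former default behaviour.
client = redis.from_url("rediss://:password@example-host:6379/0", ssl_cert_reqs=None)
```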
Since it's unnecessary given that we now have the ability to import JSON
directly, and the current usage is causing ESLint warnings like:
```
warning: Illegal usage of jasmine global (jest/no-jasmine-globals) at tests/ui/unit/context/pushes.tests.jsx:15:5:
13 |
14 | beforeEach(() => {
> 15 | jasmine.getJSONFixtures().fixturesPath = 'base/tests/ui/mock';
| ^
16 |
17 | fetchMock.get(
18 | getProjectUrl('/resultset/?full=true&count=10', repoName),
```
This means the only `jasmine-jquery` feature we're now using is the
`toHaveLength` matcher, so use of `jasmine-jquery` can be dropped entirely
once the tests are migrated to Jest, which supports `toHaveLength`
natively.
When `git push -f` is used, the "base" commit of the GitHub push event
is whatever was previously at the tip of that branch.
That commit is likely not part of the history of the new branch,
so the "List commits" API will not let us find it
as a starting point.
Worse, if the API did not force pagination, we'd end up fetching the entire
repository history!
Using the "Compare two commits" API instead lets GitHub's server
figure out which commits are reachable from the new branch
but were not reachable before, which is exactly the set of commits we need
to treat as "part of a push".
Since it's more reliable (and strict) at code formatting than ESLint.
We use it via an ESLint plugin, and so disable the style-related AirBnB
preset rules, leaving the AirBnB guide to handle only correctness and
best practices rules.
It's highly encouraged to use an IDE integration or Git commit hook
to run Prettier (or `yarn lint --fix`) automatically. See:
* https://prettier.io/docs/en/editors.html
* https://prettier.io/docs/en/precommit.html
We may consider enabling a Git commit hook out of the box (using e.g.
Husky) in the future; however, such hooks have previously been known to
interfere with partial-staging workflows, so we would need to thoroughly
test the fixes that have since been made for them first.
In future PRs we may also want to start formatting JSON/CSS/Markdown
using Prettier too.
Neutrino controls our frontend linting, transpilation, source-maps,
testing, dev-server and optimisation of production builds.
Highlights of the upgrade are:
* Major version updates to the individual tools within (such as webpack,
Babel and ESLint), significantly improving performance, fixing
transpilation/minification correctness bugs, adding support for newer
ECMAScript features, and increasing linter coverage.
* Hot reloading in the dev server now works for all entry-points and not
just the jobs view, shortening the feedback cycle.
* Reduced bundle size due to webpack 4's tree shaking, scope hoisting,
automatic shared/vendor code chunk splitting (no need for the manually
maintained 'vendor' list).
* CSS is now extracted out of JS, which improves performance, reduces
bundle size and prevents the initial white flash of un-styled content.
* Support for dynamic imports/code splitting (needed for bug 1502192).
* Support for Jest via a new Jest preset (unblocks bug 1364045).
* Support for public class field declarations (unblocks bug 1480166).
* Improved source-maps (increases the quality of production exception
trace-backs and fixes several debugger breakpoint bugs).
* Reduced amount of custom configuration required for our fairly complex
frontend needs, reducing maintenance burden and allowing for easier
future Neutrino upgrades.
In addition this PR:
* Fixes the WhiteNoise `immutable_file_test()` regex, so that it now
correctly enables browser caching of images, fonts and source maps.
* Enables webpack-dev-server's overlay feature, which displays any
compilation errors in the browser, saving having to switch back
to the console (this can be enabled for warnings too if desired).
* Enables webpack-dev-server's automatic browser-opening feature,
which saves having to manually navigate to `localhost:5000` after
running `yarn start`.
* Switches Karma tests to run Firefox in headless mode, reducing the
workflow disruption when running `yarn test`.
* Uses the new webpack `performance` option to enable maximum asset
file size thresholds, to help prevent bundle-size regressions.
* Rewrites the `package.json` script commands so that they now work
correctly on Windows, even when setting environment variables.
Performance comparison:
* Local `yarn build`:
- Cached: 2m34s -> 23s
- Uncached: 2m34s -> 58s
* Local `yarn start`:
- Cached: 34.5s -> 13.6s
- Uncached: 34.5s -> 31.3s
* Local `yarn test`
- Cached: 61.5s -> 19.8s
- Uncached: 61.5s -> 22.0s
* Local `yarn lint`
- Cached: 3.8s -> 1.8s
- Uncached: 13.7s -> 13.4s
* Travis end-to-end time:
9 minutes -> 6 minutes
* Heroku deploy end-to-end time:
14 minutes -> 9 minutes
* Enables the display of skipped test/expected fail reasons, in
the pytest summary.
* Skips the Selenium tests with a clear reason message, unless the
built UI is found (preventing the annoying/confusing test timeouts).
* Removes the disabling of the `pytest-html` and `pytest-metadata`
plugins, since they are required when passing the `--html` option
to generate an HTML report.
* Updates the docs to mention `yarn build` and `--html`.
* Switches from the `ignore` setting to the new `extend_ignore`, which
doesn't overwrite the default ignore list, meaning we no longer have
to duplicate it ourselves.
* Removes the rarely used `[pycodestyle]` config section, since it's
only used by tools like autopep8, which should really learn
to use the `[flake8]` section themselves.
* Enables the previously ignored F403 and F405 rules, adding `# noqa`
entries to instances that we do not wish to fix.
* Adjusts the max line length down to 100. Since we already disable the
`E501: line too long` rule, the length is mostly redundant
other than in IDEs, where it's probably good to show a warning when
exceeding 100 characters.
* Fixes:
```
treeherder/intermittents_commenter/commenter.py:202:10:
W605 invalid escape sequence '\['
treeherder/intermittents_commenter/commenter.py:202:24:
W605 invalid escape sequence '\]'
treeherder/webapp/graphql/schema.py:7:1:
F403 'from treeherder.model.models import *' used; unable to detect undefined names
```
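For reference, the W605 fixes amount to using raw strings for regex patterns; an illustrative example:
```python
import re

# W605: '\[' is an invalid escape sequence in a normal string literal.
# A raw string makes the regex explicit and future-proof.
pattern = re.compile(r"\[taskcluster\]")
print(bool(pattern.search("[taskcluster] job started")))
```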
Closes #4177.
Refs #3425.
Refs #3565.
* Switch from using job.result_set_id to job.push_id
* Switch to using template strings for aggregateIds
* Fix notification where selected job not in push range
* Fix push status notifications(watching) to use safe React lifecycle method
* Fix some lodash _ imports to specific file imports
* Remove last usage of globalFilterChanged event
* Rename usage internal to PushJobs from "platforms" to "filteredPlatforms"
This takes what ThResultSetStore used to do and moves it into a React
context called `Pushes.jsx`, with the `Push.jsx` component now managing
its own jobs.
Now that autophone and AWFY have migrated to Taskcluster, there are
no more submitters of jobs to the REST API (confirmed via New Relic
Insights). As such, this deprecated data ingestion method can now be
removed, along with support for API Hawk auth, API POST throttling
and `treeherder-client` job submission capability.
After this lands we'll need to manually drop the `credentials` table.
Runnable jobs for buildbot were calculated via a celerybeat task
(that was disabled in #4007) and the results stored in the
`runnable_jobs` table. This can all be removed now that buildbot is
EOL, since the remaining support for Taskcluster runnable jobs does
not use that celery task/Django model.
This pre-emptively fixes the issues found by the newer ESLint and
ESLint plugins that come with Neutrino 9 - in order to reduce the
size of the Neutrino 9 PR.
Since as of #3980 (bug 1470622) the frontend no longer calls the
`/retrigger/`, `/cancel/` or `/cancel_all/` Treeherder APIs.
Whilst looking at the pulse related fixtures, I spotted that the
`mock_message_broker` fixture was already unused.
Whenever we create a BugJobMap with a User we want to update its Job's
best FailureClassification's Bug number with the one the BugJobMap was
created with. This keeps the autoclassify data (FailureClassification)
in sync with BugJobMap.
However, doing this via an overridden `.save()` effectively hides the
functionality from anyone reading through the code. Since the
`.update_autoclassification_bug()` method is only called from that one
place, this change moves all of the functionality into the classmethod
`.create()`.
This makes it explicit that creating a BugJobMap involves more than a
simple DB row creation.
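A sketch of the resulting shape (field names and the exact placement of the autoclassification update are illustrative, not the real schema):
```python
from django.db import models

class BugJobMap(models.Model):
    # Illustrative fields only; the real model has more.
    job = models.ForeignKey("Job", on_delete=models.CASCADE)
    bug_id = models.PositiveIntegerField()
    user = models.ForeignKey("auth.User", null=True, on_delete=models.SET_NULL)

    @classmethod
    def create(cls, job, bug_id, user=None):
        # Creating a BugJobMap is explicitly more than a row insert: when a
        # real user classifies the job, the autoclassification data is
        # updated with the same bug number in the same step.
        bug_job_map = cls.objects.create(job=job, bug_id=bug_id, user=user)
        if user is not None:
            # Body of the former .update_autoclassification_bug() lives here.
            ...
        return bug_job_map
```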
* Update pytest from 3.7.4 to 3.8.0
* Fix django.core.urlresolvers deprecation warnings
The new version of pytest now correctly catches warnings that occur
within tests/fixtures, which has unearthed new deprecation warnings
that need fixing to prevent test failures.
Prevents:
```
RemovedInDjango20Warning: Importing from django.core.urlresolvers is
deprecated in favor of django.urls.
```
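The fix itself is a one-line import change:
```python
# Before (deprecated, removed in Django 2.0):
#     from django.core.urlresolvers import reverse
# After:
from django.urls import reverse

# Usage is unchanged; "jobs-list" is a hypothetical URL pattern name.
url = reverse("jobs-list")
```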
Since testing Django management commands involves running them and
checking the log output, this provides us with a way to test the
message-reading functionality while also reducing those scripts to a
simple configuration.
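The testing pattern, sketched with a hypothetical command name and pytest's `caplog` fixture:
```python
import logging

from django.core.management import call_command

def test_read_pulse_messages(caplog):
    # Run the management command as the test subject, then assert on what
    # it logged. Command name and log text are hypothetical.
    caplog.set_level(logging.INFO)
    call_command("read_pulse_messages")
    assert "message processed" in caplog.text
```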
The PulsePublisher class was built for extensibility, providing lots of hooks
for customisation. However, only one subclass has ever been in use since its
introduction: TreeherderPublisher. This reduces the concrete class to a
single function which publishes the given message. In doing so, all
configurability has been removed, since it was unused.
configurability has been removed, since it was unused.