* Make changes to docs
* Add cert for prototype connections
* Add TLS_CERT_PATH variable to docker yaml file
* Change troubleshooting and database sections of docs
* Update docker-shared-user for pulse_url and add PROJECTS_TO_INGEST to backend container
* Update docs to make them clearer
* Fix exception caught in pytest.raises
Bug 1679162 split the parsing of logs for failed tasks along two dimensions:
- monitored by code sheriffs or not
- raw or JSON data
This added 4 new queues:
- log_parser_fail_raw_sheriffed
- log_parser_fail_json_sheriffed
- log_parser_fail_raw_unsheriffed
- log_parser_fail_json_unsheriffed
The sheriffed queues handle the autoland and mozilla-* trees watched by code
sheriffs; the unsheriffed queues handle all other trees (e.g. 'try').
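A minimal sketch of how a failed task's log could be routed to one of these queues, assuming boolean flags for the two dimensions (the helper name and flags are illustrative, not the actual Treeherder code):

```python
def failure_log_queue(sheriffed: bool, is_json: bool) -> str:
    # Pick the log-parser queue for a failed task based on the two dimensions.
    audience = "sheriffed" if sheriffed else "unsheriffed"
    fmt = "json" if is_json else "raw"
    return f"log_parser_fail_{fmt}_{audience}"

# failure_log_queue(sheriffed=True, is_json=False) -> "log_parser_fail_raw_sheriffed"
```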
* Bump python to 3.7.10 to try to fix issues with typing module
* Remove requirement on typing again
* Remove bigquery and activedata references (#7051)
* Remove activedata and bigquery again
* Add importlib-metadata to dev requirements
* Remove remnants of selenium tests from setup.cfg
* Don't call taskcluster.aio.createSession outside of async functions (it's not allowed)
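For context, a minimal sketch of creating the session inside a coroutine instead; the root URL and task ID are placeholders, and this is not the actual Treeherder call site:

```python
import taskcluster.aio

async def fetch_task(task_id):
    # The underlying aiohttp session must be created while an event loop is
    # running, so createSession is only called from inside an async function.
    session = taskcluster.aio.createSession()
    try:
        queue = taskcluster.aio.Queue(
            {"rootUrl": "https://firefox-ci-tc.services.mozilla.com"}, session=session
        )
        return await queue.task(task_id)
    finally:
        await session.close()
```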
* Disable newrelic linters to fix issue with imp module
See https://discuss.newrelic.com/t/python-warnings-during-pytest/114897
* Fix linting errors
* Use pytest-xdist to speed up tests
* Properly ignore newrelic warnings
* Disable django.contrib.staticfiles in tests, because it breaks everything
* Set TREEHERDER_DEBUG=False for unit tests, because running with DEBUG enabled breaks some tests
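A rough sketch of what these two test-only tweaks can look like in a Django settings module; apart from `TREEHERDER_DEBUG` and `django.contrib.staticfiles`, the names and the pytest detection are illustrative assumptions:

```python
import os
import sys

# Keep DEBUG off for unit tests (TREEHERDER_DEBUG=False).
DEBUG = os.environ.get("TREEHERDER_DEBUG", "False").lower() == "true"

INSTALLED_APPS = [
    "django.contrib.contenttypes",
    "django.contrib.staticfiles",
    # ... the rest of the apps ...
]

# Drop staticfiles when running under pytest, since it interferes with the tests.
if "pytest" in sys.modules:
    INSTALLED_APPS = [app for app in INSTALLED_APPS if app != "django.contrib.staticfiles"]
```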
* Fix linting issues
* Add .circleci/config.yml
* Initial set up with node and yarn config
* set up heroku builds
* add python test config
* remove most travis references and travis.yml
* Docker: Bug 1630293 - Increase max MySql connections
* docs: Set a concurrency of 1
* docs: Remove the usage of -B for Docker set up
Using -B in more than one worker instance starts an embedded celery beat in
each of those containers, so scheduled tasks end up running more than once.
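For reference, a sketch of pinning the same behaviour in Celery configuration rather than on the command line, assuming the project's Celery app object is named `app` (illustrative only):

```python
from celery import Celery

app = Celery("treeherder")

# One worker process per container keeps the local Docker setup lightweight.
app.conf.worker_concurrency = 1

# Instead of embedding beat in every worker via -B, run a single dedicated
# `celery beat` process so each scheduled task fires exactly once.
```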
* Reduce unsupported known failure line while still reporting to New Relic
This makes sure that serving the documentation via poetry does not regress
and that we have a single way of generating it.
Setting the build backend in `pyproject.toml` makes `pip` use `poetry` to install the dependencies. By doing so, readthedocs will be able to install the required dependencies to generate the docs (since they don't support poetry directly).
* Configuration for black
* change configuration
* change pyproject's directory
* add files to be excluded and skip string normalization
* removed isort from pre-commit
* remove version locks for black
* fix
* remove all isort
* update requirements
Co-authored-by: SuyashSalampuria <suyash546@gmail.com>
Co-authored-by: Kyle Lahnakoski <kyle@lahnakoski.com>
* Add Travis job to run Python tests outside of Docker
* `runtests.sh` is renamed to `runchecks.sh` and it no longer runs Python tests
* `manage.py check --deploy` was duplicated in Travis
* Update testing documentation
* Remove `-bb` since it has not been needed since Python 3.5
Git based projects can list pushes in the UI incorrectly. This can be caused by commits having been amended on a PR or by merges of old commits.
Using the committer's date (the date when the PR gets merged) instead of the author's date to determine push time fixes the sorting problem.
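A minimal sketch of the idea, using a hypothetical commit payload shape rather than the actual ingestion code:

```python
def push_timestamp(head_commit):
    # Use the committer date (when the change actually landed, e.g. when the
    # PR was merged) instead of the author date, so pushes sort by landing time.
    return head_commit["committer"]["date"]
```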
This change also includes:
* Support for manual ingestion of Git pushes
* Support for ingesting the latest commits for a repo
* Script to compare pushes between Treeherder instances
## Script to compare pushes between Treeherder instances
`compare_pushes.py` compares the last 50 pushes of various projects across different Treeherder instances. The output includes links to the same revision on each instance so the pushes can be compared visually.
```console
% ./misc/compare_pushes.py --projects android-components,fenix,reference-browser,servo-master,servo-auto,servo-try
Comparing android-components against production.
Comparing fenix against production.
Comparing reference-browser against production.
{"values_changed": {"root['push_timestamp']": {"new_value": 1582580346, "old_value": 1582581477}}}
https://treeherder.allizom.org/#/jobs?repo=reference-browser&revision=547a18b97534b237fa87bd22650f342836014c4e
https://treeherder.mozilla.org/#/jobs?repo=reference-browser&revision=547a18b97534b237fa87bd22650f342836014c4e
Comparing servo-master against production.
Comparing servo-auto against production.
Comparing servo-try against production.
```
GitHub based projects can list pushes in the UI incorrectly. This is caused by commits that have been amended.
This change switches to grabbing the push time from the `timestamp` field of the head commit in the Pulse event rather than from GitHub's APIs, which fixes the push sorting problem.
Note that the `timestamp` field only exists in the push event carried by the Pulse message. The field can also be seen in the `events` API; however, that API contains all sorts of events and holds a maximum of 300 events.
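A minimal sketch of pulling the push time out of the Pulse message; the exact nesting of the payload is an assumption here:

```python
def push_time_from_pulse(message_body):
    # `timestamp` lives on the head commit of the push event the Pulse message
    # carries; the `events` API also shows it, but only keeps ~300 mixed events.
    return message_body["payload"]["head_commit"]["timestamp"]
```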
This change also includes partial support for manual ingestion of Git pushes.
* CSS Cleanup
* Use darker-info and darker-secondary for a11y
* Put Parent Push metric at end of list and clean it up
* Add a `scrollToLine` when clicking/expanding metric names
Git based projects can list pushes in the UI incorrectly. This is caused by commits that have been amended.
This change switches to using the `timestamp` field of the head commit to determine push time rather than the commit's authorship date, which fixes the push sorting problem.
This change also includes support for manual ingestion of Git pushes.
Add style specs in case darker-secondary and darker-info color props are used in other reactstrap components in the future. Update docs and reorganize these classes in the stylesheet.
* Properly configure LOGGING_LEVEL
`LOGGING_LEVEL` was being cast down to False rather than "INFO" or "DEBUG".
There was no fallout from it, but it prevented setting the right logging
level for the Docker container or the Heroku Review Apps.
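A rough sketch of the intended behaviour (not the exact Treeherder settings code): read the level as a string with a sane default instead of letting it collapse to a boolean:

```python
import os

LOGGING_LEVEL = os.environ.get("LOGGING_LEVEL", "INFO").upper()

LOGGING = {
    "version": 1,
    "handlers": {
        "console": {"class": "logging.StreamHandler", "level": LOGGING_LEVEL},
    },
    "root": {"handlers": ["console"], "level": LOGGING_LEVEL},
}
```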
* Validate preseed entries and fail if issues are found
Without validation, invalid job priority data could be inserted into the table. No issues
would be noticed for SETA since the web API sanitizes the data before presenting it.
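A minimal sketch of the kind of validation meant here; the required key names are hypothetical, not the actual preseed schema:

```python
REQUIRED_KEYS = {"buildtype", "testtype", "platform", "priority"}  # hypothetical keys

def validate_preseed(entries):
    errors = []
    for index, entry in enumerate(entries):
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            errors.append(f"entry {index} is missing keys: {sorted(missing)}")
    if errors:
        # Fail loudly instead of silently inserting bad job priorities.
        raise ValueError("; ".join(errors))
```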