* Remove subtest_signatures property (super slow)
* Add a parent_signature property to signatures with a parent signature
* Add an option to only fetch signatures without a parent signature
* Update clients to use the new API, for increased speed/awesomeness
This will be useful for some views where we don't know off the bat
whether a signature has subtests or not (e.g. the e10s dashboard,
the graphs view), and thus whether to show a "show subtests" button.
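A client could then do something like the following (a sketch only: the
filter parameter name is an assumption based on the description above,
not a confirmed part of the API):

```python
import requests

base = ('https://treeherder.mozilla.org/api/project/'
        'mozilla-inbound/performance/signatures/')

# Fetch only signatures without a parent signature (i.e. top-level suites).
signatures = requests.get(
    base, params={'parent_signature__isnull': 'true'}).json()

# Each signature now carries a `parent_signature` property, so a client
# can tell up front whether a "show subtests" button makes sense.
for sig_hash, props in signatures.items():
    print(sig_hash, props.get('parent_signature'))
```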
Since otherwise it results in a header of the form:
`strict-transport-security: max-age=365 days, 0:00:00`
...rather than:
`strict-transport-security: max-age=31536000`
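The underlying bug: the max-age value was presumably being set from a
`timedelta`, whose string form ends up in the header verbatim. For
illustration:

```python
from datetime import timedelta

# A timedelta stringifies to the broken value seen above:
str(timedelta(days=365))  # '365 days, 0:00:00'

# The setting needs a plain integer number of seconds instead:
SECURE_HSTS_SECONDS = int(timedelta(days=365).total_seconds())  # 31536000
```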
Previously HTTPS redirection was only enabled on Heroku, since
stage/prod handled it on the load balancer. However, the load balancer
isn't setting the HSTS header, and deployment-specific environment
variables (such as `IS_HEROKU`) should really be avoided.
As such, the conditional instead now checks whether `SITE_URL` begins
with `https://`. This has the effect of enabling these Django security
features on stage/prod, but keeping them disabled locally/on Travis,
where the site isn't accessible over HTTPS.
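A sketch of the resulting settings logic (the setting names are Django's
SecurityMiddleware settings; the exact surrounding code and defaults are
assumptions):

```python
import os
from datetime import timedelta

SITE_URL = os.environ.get('SITE_URL', 'http://localhost:8000')

if SITE_URL.startswith('https://'):
    # Stage/prod/Heroku: the site is served over HTTPS, so enable
    # redirection and HSTS.
    SECURE_SSL_REDIRECT = True
    SECURE_HSTS_SECONDS = int(timedelta(days=365).total_seconds())
else:
    # Local development and Travis: the site is plain HTTP.
    SECURE_SSL_REDIRECT = False
    SECURE_HSTS_SECONDS = 0
```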
WhiteNoise 3.0 now supports serving Brotli-compressed files to browsers
whose `Accept-Encoding` includes `br`. Note: Both Firefox and Chrome
only support Brotli over HTTPS.
To take advantage of this, the Brotli package just needs to be available
when the compression tool (`python -m whitenoise.compress`) is run. See:
http://whitenoise.evans.io/en/latest/changelog.html#brotli-compression-support
http://whitenoise.evans.io/en/latest/django.html#brotli-compression
The WhiteNoise docs say to use an unofficial PyPI package (brotlipy),
however this has a dependency on libffi (via cffi), and the official repo
now has its own Python wrapper that does not. As such, this commit
instead uses the official Brotli package from GitHub, whilst we wait for
the official PyPI release (https://github.com/google/brotli/issues/72).
The Brotli install works fine on stage/prod/Heroku/Travis. The Vagrant
environment was missing g++, which is now installed during provision.
There are some backwards incompatible changes:
http://whitenoise.evans.io/en/latest/changelog.html
https://github.com/evansd/whitenoise/compare/v2.0.6...v3.0
Specifically:
* The CLI compression utility must now be called via
`python -m whitenoise.compress` rather than `python -m whitenoise.gzip`.
* The `whitenoise.django.GzipManifestStaticFilesStorage` storage backend
has moved to `whitenoise.storage.CompressedManifestStaticFilesStorage`.
* The internal `add_files()` method has been split into two and the part
which we need to subclass is now named `update_files_dictionary()`. See:
07f9c0bece
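For the last two points, the change looks roughly like this (a sketch;
our actual subclass differs):

```python
# settings.py: the storage backend's new dotted path.
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'

# Our WhiteNoise subclass: the override moves from add_files() to
# update_files_dictionary().
from whitenoise.django import DjangoWhiteNoise

class CustomWhiteNoise(DjangoWhiteNoise):
    def update_files_dictionary(self, *args):
        super(CustomWhiteNoise, self).update_files_dictionary(*args)
        # ...custom additions to the files dictionary go here...
```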
For some reason, changing the span to a button broke keyboard input
after job classification in Firefox on Linux/Windows (it works fine on
Mac, and in every flavor of Chrome). Blurring the input element
seems to fix this.
The /artifacts endpoint defaults to only returning the first 10 items
unless a count is passed in. We weren't passing in that param, so we'd
only get back the first 10 (we need them all for the buildbot request_ids).
This fixes it by passing in a `count` param equal to the number of jobs
we want to retrigger.
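A sketch of the fix (the parameter names other than `count` and the
endpoint shape are assumptions):

```python
import requests

def get_job_artifacts(server, project, job_ids):
    params = {
        'job_id__in': ','.join(str(job_id) for job_id in job_ids),
        'name': 'buildapi',
        'type': 'json',
        # The fix: without this, the endpoint returns only the first 10.
        'count': len(job_ids),
    }
    url = '{}/api/project/{}/artifact/'.format(server, project)
    return requests.get(url, params=params).json()
```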
New resultsets will still store a value in their ``revision_hash`` field, but it will
just be the same value as their ``long_revision`` field.
This will log an exception in New Relic when a new resultset or job is posted
to the API with only a ``revision_hash`` and not a ``revision`` value.
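Roughly like so (a sketch using the Python agent's `record_exception()`
API; the surrounding validation code is assumed):

```python
import newrelic.agent

def check_revision(data):
    if data.get('revision_hash') and not data.get('revision'):
        try:
            raise ValueError('Submission used revision_hash without revision')
        except ValueError:
            # Record the currently-handled exception against the
            # current transaction in New Relic.
            newrelic.agent.record_exception(
                params={'revision_hash': data['revision_hash']})
```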
This also switches to using the longer 40-char revisions alongside the
12-char revisions. We leverage the longer ones for most actions; the
short revisions are stored and used so that people and the UI can still
locate a resultset (or set ranges) with short revisions.
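Locating a resultset then works with either form (a hypothetical sketch;
the `short_revision` field name is an assumption):

```python
def filter_by_revision(queryset, revision):
    # Full 40-char hashes match long_revision; anything shorter is
    # matched against the stored 12-char short_revision.
    if len(revision) == 40:
        return queryset.filter(long_revision=revision)
    return queryset.filter(short_revision=revision[:12])
```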
This will add that value to any exceptions that are caused during calls by a hawk
user. This will, in turn, help us work with that user to resolve the issue.
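A sketch of attaching the id (the `add_custom_parameter()` call is the
real agent API; where exactly we hook it in is an assumption):

```python
import newrelic.agent

def hawk_lookup(client_id):
    # ...existing Hawk credentials lookup...
    # Attach the client id to the current New Relic transaction so it
    # appears on any errors recorded while handling this request.
    newrelic.agent.add_custom_parameter('hawk_client_id', client_id)
```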
* We should not incorporate downstream alert information into the title.
* If there are any regressions that belong to the summary, only incorporate
the regression information into the title (a sketch of both rules follows).
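A sketch of these rules (all names are illustrative, not the actual
implementation):

```python
def summary_title(summary):
    # Downstream alerts never contribute to the title.
    alerts = [a for a in summary.alerts if not a.is_downstream]
    regressions = [a for a in alerts if a.is_regression]
    # If the summary contains any regressions, they alone define the title.
    relevant = regressions or alerts
    return ', '.join(sorted({a.suite_name for a in relevant}))
```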
When first setting up a new app on Heroku, things like reporting the
deploy to New Relic will fail, since it requires that the app exist on
New Relic. However the app will only be created there once the Python
agent first reports app metadata, which won't happen until after the
deploy (there is no way to create the app via the web interface).
In addition, there may be cases in the future where stage/prod is broken
and the pre-deploy tasks therefore fail, yet we still want the
deploy to proceed.
To avoid needing to constantly edit this file, the environment variable
`IGNORE_PREDEPLOY_ERRORS` can now be set, in cases where the deploy
should continue even if there were errors. (Note this uses the bash 4.2+
`-v` option, see http://stackoverflow.com/a/18448624).
Requires that `NEW_RELIC_APP_NAME` and `NEW_RELIC_API_KEY` be set in the
environment. NB: `NEW_RELIC_API_KEY` is different from the existing
`NEW_RELIC_LICENSE_KEY`.
We're also making use of the runtime-dyno-metadata labs feature, which
sets the slug/release related environment variables used in this PR:
https://devcenter.heroku.com/articles/dyno-metadata
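The notification itself can then be a simple POST (a sketch using New
Relic's legacy `deployments.xml` endpoint, which accepts an app name
rather than an application id; the `HEROKU_*` variables come from the
dyno-metadata feature above):

```python
import os
import requests

response = requests.post(
    'https://api.newrelic.com/deployments.xml',
    headers={'x-api-key': os.environ['NEW_RELIC_API_KEY']},
    data={
        'deployment[app_name]': os.environ['NEW_RELIC_APP_NAME'],
        'deployment[revision]': os.environ.get('HEROKU_SLUG_COMMIT', ''),
        'deployment[description]': os.environ.get('HEROKU_RELEASE_VERSION', ''),
    },
)
response.raise_for_status()
```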
Since we'll soon be adding deploy reporting to New Relic, which will be
too verbose to include in the Procfile. Also adds additional log output
(which follows the buildpack compile log formatting convention) to make
it easier to find & follow the release tasks on Papertrail.
Uses the `set -euo pipefail` recommendation from:
http://redsymbol.net/articles/unofficial-bash-strict-mode/