pytest treats classes whose names start with "Test" as test classes, so an
underscore prefix has been added to prevent warnings of the form:
```
WC1 .../test_detect_intermittents.py cannot collect test class 'TestFailureDetector' because it has a __init__ constructor
```
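A minimal illustration of the rename (the class body here is a placeholder, not the real detector):

```python
# Illustrative only: the leading underscore stops pytest from trying to
# collect this helper class as a test class, which silences the warning.
class _TestFailureDetector:
    def __init__(self, db):
        self.db = db

    def detect(self, job):
        ...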
Switch to the new environment variable `PULSE_PUSH_SOURCES`.
Keep old `publish-resultset-runnable-job-action` task name by creating a
method that points to `publish_push_runnable_job_action`.
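A sketch of the aliasing approach; the publisher class and arguments are hypothetical, only the method names come from the message above:

```python
class TreeherderPublisher:
    def publish_push_runnable_job_action(self, project, push_id, requester):
        # New name; the real publishing logic lives here.
        ...

    def publish_resultset_runnable_job_action(self, *args, **kwargs):
        # Old task name kept working by delegating to the renamed method.
        return self.publish_push_runnable_job_action(*args, **kwargs)
```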
Process output lines start with a prefix like GECKO(1234) or PID 1234
followed by a pipe. That is one pipe symbol more than other lines have,
so our code, which splits on | and assumes specific data in specific
positions, breaks. Try to detect the process output case and discard
the first token so that we end up with the same fields as other
pipe-delimited data.
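A minimal sketch of the detection, assuming lines like `GECKO(1234) | field1 | field2 | ...`; the regex and helper name are illustrative:

```python
import re

# Matches a leading process token such as "GECKO(1234)" or "PID 1234".
PROCESS_PREFIX_RE = re.compile(r'^(?:[A-Za-z]+\(\d+\)|PID \d+)$')

def split_line(line):
    fields = [part.strip() for part in line.split('|')]
    if fields and PROCESS_PREFIX_RE.match(fields[0]):
        # Discard the process token so the remaining fields line up with
        # other pipe-delimited lines.
        fields = fields[1:]
    return fields
```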
Fixes:
tests/autoclassify/test_classify_failures.py:7:1: F401 'treeherder.model.models.TextLogErrorMetadata' imported but unused
tests/etl/test_job_loader.py:7:1: F401 'treeherder.model.models.Repository' imported but unused
tests/model/test_classified_failure.py:6:1: F401 'treeherder.model.models.FailureLine' imported but unused
tests/seta/conftest.py:2:1: F401 'django.utils.timezone' imported but unused
tests/seta/test_job_priorities.py:8:1: F401 'treeherder.seta.settings.SETA_LOW_VALUE_PRIORITY' imported but unused
tests/webapp/api/test_text_log_summary_lines.py:4:1: F401 'treeherder.model.models.TextLogError' imported but unused
treeherder/auth/backends.py:13:5: F401 'django.utils.encoding.smart_str as smart_bytes' imported but unused
treeherder/autoclassify/tasks.py:4:1: F401 'django.conf.settings' imported but unused
treeherder/autoclassify/tasks.py:6:1: F401 'treeherder.celery_app' imported but unused
treeherder/perfalert/__init__.py:1:1: F401 '.perfalert.*' imported but unused
treeherder/seta/analyze_failures.py:7:1: F401 'treeherder.etl.seta.valid_platform' imported but unused
treeherder/seta/job_priorities.py:10:1: F401 'treeherder.model.models.Repository' imported but unused
treeherder/seta/models.py:7:1: F401 'treeherder.model.models.Repository' imported but unused
The seta migrations file change is due to the seta models no longer
depending on `model` (since the unnecessary `Repository` import has
been removed).
In most cases, but not all, we want to expire performance data on the
same cadence as job data. Add some code to do this, with an optional
override to keep the data around indefinitely for some cases.
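A rough sketch of the idea, assuming the `PerformanceDatum` model and a hypothetical `keep_forever` override flag on its signature (the real expiry window and flag may differ):

```python
from datetime import timedelta

from django.utils import timezone

from treeherder.perf.models import PerformanceDatum  # assumed model

def cycle_performance_data(cycle_interval=timedelta(days=120)):
    cutoff = timezone.now() - cycle_interval
    # Expire old performance data on the same cadence as job data, but
    # skip series that have opted to keep their data indefinitely
    # (keep_forever is an assumed override field).
    (PerformanceDatum.objects
        .filter(push_timestamp__lt=cutoff)
        .exclude(signature__keep_forever=True)
        .delete())
```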
tests/client/test_treeherder_client.py:514:1: E305 expected 2 blank lines after class or function definition, found 1
tests/model/test_error_summary.py:66:1: E305 expected 2 blank lines after class or function definition, found 1
tests/model/test_error_summary.py:94:1: E305 expected 2 blank lines after class or function definition, found 1
tests/model/test_error_summary.py:112:1: E305 expected 2 blank lines after class or function definition, found 1
tests/model/test_error_summary.py:148:1: E305 expected 2 blank lines after class or function definition, found 1
tests/model/test_error_summary.py:166:1: E305 expected 2 blank lines after class or function definition, found 1
tests/perfalert/test_analyze.py:94:1: E305 expected 2 blank lines after class or function definition, found 1
tests/webapp/api/test_auth.py:23:1: E305 expected 2 blank lines after class or function definition, found 1
tests/webapp/api/test_version.py:12:1: E305 expected 2 blank lines after class or function definition, found 1
treeherder/config/settings.py:21:1: E305 expected 2 blank lines after class or function definition, found 1
treeherder/credentials/admin.py:10:1: E305 expected 2 blank lines after class or function definition, found 0
treeherder/etl/schema.py:16:1: E305 expected 2 blank lines after class or function definition, found 1
treeherder/model/search.py:56:1: E305 expected 2 blank lines after class or function definition, found 1
treeherder/model/tasks.py:42:1: E305 expected 2 blank lines after class or function definition, found 1
We were always resetting the failure classification to an invalid value
when the last job note was deleted for a job, but we didn't notice it
before because we didn't have foreign key validation.
* Can no longer store raw artifacts (anything treeherder doesn't understand
is ignored)
* Attempting to retrieve an artifact now returns a 405 (method not allowed)
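A hedged sketch of the retrieval behaviour using Django REST Framework; the view name is illustrative:

```python
from rest_framework import status, viewsets
from rest_framework.response import Response

class ArtifactViewSet(viewsets.ViewSet):
    def retrieve(self, request, pk=None, **kwargs):
        # Artifacts can no longer be fetched; respond with 405.
        return Response(
            {'detail': 'Artifacts can no longer be retrieved'},
            status=status.HTTP_405_METHOD_NOT_ALLOWED,
        )
```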
It appears that the intent of this code is to do a phrase match of the
search string against the bug summary for relevance matching. However
the code incorrectly tried to quote the string and as a result failed
to handle special characters in the AGAINST clause (e.g. + - ~ >
etc.). This change simply removes any existing quote characters from
the string and places the entire thing in quotes. Per the MySQL
documentation:
> A phrase that is enclosed within double quote (") characters
> matches only rows that contain the phrase literally, as it was
> typed
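A minimal sketch of the quoting fix described above; the function name is illustrative:

```python
def to_phrase_query(search_term):
    # Strip any embedded double quotes, then wrap the whole term in quotes
    # so MySQL full-text search treats it as a literal phrase and
    # boolean-mode operators (+ - ~ > ...) lose their special meaning.
    return '"%s"' % search_term.replace('"', '')

# Used as a bound parameter in something like:
#   ... WHERE MATCH (summary) AGAINST (%s IN BOOLEAN MODE)
```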
Instead, generate the data when required. We will store the return value
of this in memcache for a day to ensure things are responsive for the sheriffs
when classifying recent failures.
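A sketch of the on-demand generation with a one-day cache, using Django's cache API; the key and compute function are hypothetical:

```python
from django.core.cache import cache

ONE_DAY = 60 * 60 * 24

def compute_failure_data(repository_name):
    # Placeholder for the expensive query/aggregation.
    ...

def get_failure_data(repository_name):
    key = 'failure-data-%s' % repository_name
    data = cache.get(key)
    if data is None:
        data = compute_failure_data(repository_name)
        cache.set(key, data, ONE_DAY)
    return data
```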
* Bug 1286578 - Retry job task if resultset doesn't exist
This removes the logic that creates `skeleton resultsets`
when a job is ingested for which we don't yet have a resultset.
The new approach is to fail and wait for the task to retry.
The buildbot job ingestion already skips and retries later if
it encounters a job for which it has no resultset.
This adds a similar check to the Pulse job ingestion. If
a job comes in with a revision that doesn't yet have a resultset,
this will raise a ValueError. That will invoke the
retryable_task handling, which waits a bit and then retries,
waiting a little longer each time. After 9 retries the delay is
roughly 3900 seconds, which should be plenty of time
for the resultset ingestion to complete.
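A rough Celery sketch of the retry flow (Treeherder's own `retryable_task` decorator works differently; the `Push` model, task name and backoff values here are assumptions):

```python
from celery import shared_task

from treeherder.model.models import Push  # assumed push/resultset model

@shared_task(bind=True, max_retries=9)
def store_pulse_job(self, job_payload):
    revision = job_payload['revision']
    try:
        push = Push.objects.get(revision=revision)
    except Push.DoesNotExist:
        # No resultset ingested yet: back off and retry, waiting a little
        # longer each time.
        delay = 10 * 2 ** self.request.retries
        raise self.retry(exc=ValueError('No push for %s' % revision),
                         countdown=delay)
    _store_job(job_payload, push)

def _store_job(job_payload, push):
    # Placeholder for the normal ingestion path.
    ...
```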
This changes ingestion, the API endpoints, and the frontend to match
the new structure. For now we continue to store text_log_summary artifacts,
though they don't do anything anymore.
Previously test_new_job_in_exclusion_profile was attempting to download
logs from ftp.mozilla.org, due to the log parser not being mocked, which
caused intermittent test timeouts on Travis.
This is required in order to create a unique index on title,
value and job_id to prevent duplicates. The index will be
created in a later PR.
This also uses update_or_create instead of get_or_create as
this will be the mechanism going forward to prevent duplicates.
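A sketch of the update_or_create usage, assuming a `JobDetail`-style model keyed on job, title and value:

```python
from treeherder.model.models import JobDetail  # assumed model

def store_job_detail(job, title, value, url):
    # Keyed on the fields that will back the future unique index, so
    # re-ingesting the same detail updates the row instead of duplicating it.
    detail, _created = JobDetail.objects.update_or_create(
        job=job,
        title=title,
        value=value,
        defaults={'url': url},
    )
    return detail
```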
A prior commit removed the ability to use "-Infinity" for the last_modified
query param. However, the fix accidentally stripped the param entirely.
This change ensures that the value is a valid date string; the accepted
range is not limited.
This also adds some new tests to ensure the `last_modified` param is
working correctly when included.
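A hedged sketch of the validation using Django's datetime parsing; the helper name is illustrative:

```python
from django.utils.dateparse import parse_datetime
from rest_framework.exceptions import ValidationError

def clean_last_modified(value):
    # Accept any parseable datetime string; reject values such as
    # "-Infinity" without restricting the allowed date range.
    parsed = parse_datetime(value)
    if parsed is None:
        raise ValidationError('last_modified must be a valid datetime string')
    return parsed
```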