This test checks that `./manage.py makemigrations` was run, and the
resultant migrations file committed to the repo, since the last time the
Django models were updated.
Django 1.8 only supports an `exit_code` option, which unhelpfully makes
the command `sys.exit(1)` if there are *no* missing migrations, when
we're more interested in the opposite. As such, we have to confirm that
it does exit 1 (which causes a `SystemExit` exception).
On Django master they've replaced `exit_code` with `check_changes`, which
inverts the check; we should switch to it once we're using a version
of Django that includes that fix.
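
A minimal sketch of such a test, assuming pytest and the Django 1.8
behaviour described above:

```python
import pytest
from django.core.management import call_command


def test_no_missing_migrations():
    # With Django 1.8's `exit_code` option, makemigrations calls
    # sys.exit(1) when there are *no* pending model changes, so a
    # SystemExit here means the committed migrations are up to date.
    with pytest.raises(SystemExit):
        call_command('makemigrations', interactive=False, dry_run=True,
                     exit_code=True)
```
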
Whilst Django itself handles the password property being set to `None`,
we use the Django DB configs in several places outside of Django (either
directly with MySQLdb, or via datasource, which uses MySQLdb too).
MySQLdb will handle the password being the empty string, but raises an
exception if `None` is passed, which can occur when using django-environ, due to:
https://github.com/joke2k/django-environ/issues/56
This change both works around that issue, and is also likely the right
thing to do regardless, since we shouldn't assume that the password is
even set in settings.py at all. (Django defaults the password to the
empty string, so it's perfectly acceptable to omit the password
property in the DATABASES dict entirely.)
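
A minimal sketch of the workaround, assuming the standard `DATABASES`
dict in settings.py:

```python
# MySQLdb accepts an empty string but raises if the password is None,
# so normalize it before anything outside Django reads this config.
for config in DATABASES.values():
    if config.get('PASSWORD') is None:
        config['PASSWORD'] = ''
```
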
* Make sorting more deterministic in cases where we have multiple
entries with the same push timestamp (see the sketch after this list)
* Internal PerfDatum class renamed to just Datum
* Remove buildid parameter from Datum (not used for anything)
* Remove testrun_timestamp parameter from Datum (it's not really useful;
better to use push_timestamp)
* Consolidate debug printing into one function that shows
everything
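
A sketch of the kind of tie-breaking the first bullet refers to; the
attribute names here are assumptions, not the actual code:

```python
# Break ties on push_timestamp with a stable secondary key so the
# resulting order is deterministic across runs (`id` is hypothetical).
data.sort(key=lambda d: (d.push_timestamp, d.id))
```
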
We're about to change the implementation of update_parse_status; this
test will ensure that trying to update the status for a non-existent
job_log_url record still results in a 404 response.
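
A minimal sketch of such a test; the endpoint path and payload are
assumptions, not the project's actual API:

```python
def test_update_parse_status_missing_record(client):
    # A job_log_url id that doesn't exist should yield a 404, not a 500.
    resp = client.post(
        '/api/project/test_treeherder_jobs/job-log-url/999999/update_parse_status/',
        {'parse_status': 'parsed'},
    )
    assert resp.status_code == 404
```
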
I changed the magic string from PERFORMANCE_DATA to PERFHERDER_DATA before
landing bug 1149164, but forgot to fix this particular part.
Updated the unit test so this hopefully won't happen again.
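
For illustration, a sketch of scanning a log line for the magic string;
the exact log format is an assumption:

```python
import json

MAGIC = 'PERFHERDER_DATA: '


def extract_perf_data(line):
    # Return the parsed JSON payload following the magic string, if any.
    if MAGIC in line:
        return json.loads(line.split(MAGIC, 1)[1])
    return None
```
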
The job list endpoint was joining the job table and the resultset table
for the sole purpose of sorting the results by push_timestamp. Also, we
don't really need any sorting on that endpoint, so let's remove it.
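
Roughly the before/after, as a hypothetical ORM-style sketch (the real
code may well go through raw SQL instead):

```python
# before: joining resultset purely to order the results
jobs = (Job.objects.select_related('result_set')
        .order_by('result_set__push_timestamp'))

# after: no join and no ordering needed on this endpoint
jobs = Job.objects.all()
```
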
This endpoint will be used by the similar jobs panel in the UI.
It returns a list of jobs shaped in the same way as the job list
endpoint, but ordered by resultset_id DESC. In order to achieve decent
performance, the returned list is filtered to the same job type as the
one selected.
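
A hypothetical ORM-style sketch of that query; the model and field
names are assumptions:

```python
def similar_jobs(selected_job, limit=50):
    # Same shape as the job list endpoint, newest resultsets first,
    # restricted to the selected job's type to keep the query cheap.
    return (Job.objects
            .filter(job_type=selected_job.job_type)
            .order_by('-result_set_id')[:limit])
```
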
I haven't found the exact reason why the tests were failing, but it must
be a test isolation problem because they were passing individually.
I debugged this issue by disabling the tests one at a time, working
backwards from the first failure, and I found out that
test_bz_api_process was the offender.
The test itself is not doing anything wrong, but the refdata fixture
used to set up the test seems to be the root cause.
I replaced the two method calls with their ORM counterparts and the
problem disappeared.
pytest-django doesn't set up a test database for every single test, but
only for those tests that actually require a db. Tests that require a db
need to either be marked with `@pytest.mark.django_db` or use a fixture
that has a dependency on `db` or `transactional_db`.
Using a non-transactional db would make test execution much faster, but
unfortunately it doesn't play well with the treeherder datasource
creation, so I used `transactional_db`.
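
For example, the two opt-in styles look like this:

```python
import pytest


@pytest.mark.django_db(transaction=True)  # transactional, as used here
def test_with_marker():
    ...


def test_with_fixture(transactional_db):  # fixture dependency instead
    ...
```
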
pytest-django also allows you to specify a settings file to use for
tests in a pytest.ini file, which is nicer than monkeypatching the
original settings file in the pytest session start function 😃.
We were previously using the same database (test_treeherder) for both the
jobs and reference data model. I centralized the new db name in the test
settings file. All the tests requiring the jobs db or its repository
counterpart can now access it using the `test_project` fixture, while
utility functions use the mentioned setting directly. Where the project
name is hardcoded in a static file, I just replaced it with the new
name `test_treeherder_jobs`.
This guarantees that the jobs database is dropped at the end of each
test. It also makes the jobs database life cycle easier to understand.
In general, to keep the tests as fast as possible, we shouldn't have
much code in the setup and teardown of each test.
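
A sketch of the lifecycle this gives us; the two helper functions are
hypothetical, not actual Treeherder code:

```python
import pytest


@pytest.fixture
def test_project(request, transactional_db):
    # create_jobs_database/drop_jobs_database are hypothetical helpers.
    create_jobs_database('test_treeherder_jobs')
    # Guarantee the jobs database is dropped again at teardown.
    request.addfinalizer(lambda: drop_jobs_database('test_treeherder_jobs'))
    return 'test_treeherder_jobs'
```
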
This adds a new FailureClassification for autoclassified
intermittent. When a job is completely classified by the
autoclassifier, and it has the same number of structured and
unstructured error lines, it is marked as an autoclassified
intermittent.
Conversely, when there is exactly one structured and one unstructured
error line, the autoclassifier did not match the job but has a detector
that could match it, and a human marks the job as intermittent, we add
a new autoclassification target corresponding to the error line.
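
In pseudocode, the two rules above look roughly like this (all names
are hypothetical):

```python
def update_classification(job, marked_by_human):
    structured = job.structured_error_lines
    unstructured = job.unstructured_error_lines

    if job.fully_autoclassified and len(structured) == len(unstructured):
        # Rule 1: fully matched by the autoclassifier and the line
        # counts agree, so mark as autoclassified intermittent.
        job.failure_classification = 'autoclassified intermittent'
    elif (marked_by_human
          and len(structured) == 1 and len(unstructured) == 1
          and not structured[0].matched):
        # Rule 2: the autoclassifier had a detector but no match, and
        # a human marked the job intermittent, so record a new target.
        create_autoclassification_target(structured[0])
```
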
We store both the long and short revisions, but only use the short one
(as before). We
need to populate all the short and long revision records before we can
start using them. So after this commit, we will begin backfilling the
old records that don't yet have those values populated. Once they all
are, we can move to using the long_revision primarily in Bug 1199364.
Some repos have been storing 40 char revisions, though most only store
12.
But a recent change to search by only the first 12 chars, even when 40
were passed in, broke the ability to search when the stored length was
40. This change will search for both revision lengths when a 40 char
revision is passed in.
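
A hypothetical ORM-style sketch of the lookup; model and field names
are assumptions:

```python
from django.db.models import Q


def resultsets_for_revision(revision):
    if len(revision) == 40:
        # Stored values may be either 12 or 40 chars, so match both.
        return ResultSet.objects.filter(
            Q(revision=revision) | Q(revision=revision[:12]))
    return ResultSet.objects.filter(revision=revision)
```
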
We used to determine which performance signatures were in a repository by
looking at that repository's performance datums. Unfortunately, that method
was rather slow as the query had to work over millions of data points. So
instead, store repository and last modified information in the signature
table itself so it can be looked up quickly.
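
A sketch of the denormalized columns, assuming a Django model for the
signature table (field names are illustrative):

```python
from django.db import models


class PerformanceSignature(models.Model):
    # ...existing signature columns...
    # Denormalized so "which signatures exist in repo X?" becomes a
    # cheap indexed lookup instead of a scan over millions of datums.
    repository = models.ForeignKey('model.Repository')
    last_updated = models.DateTimeField(db_index=True)
```
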
With the new per-user Hawk credentials, the same auth object can be
used for the whole session, so it should just be passed when
instantiating TreeherderClient.
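
A hypothetical sketch; the import path and constructor arguments are
assumptions about the client's API:

```python
from thclient import TreeherderClient  # import path is an assumption

# Create the client once with the per-user Hawk credentials; every
# subsequent call in the session reuses the same auth object.
client = TreeherderClient(client_id='my-client-id', secret='my-secret')
```
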
The "performance adapter class" never stored any persistent state, so
let's just refactor it into a bunch of standalone methods. Easier to
understand and reason about.
* Put talos-specific stuff in the talos data adapter
* Put generic stuff in the generic adapter, in preparation for creating
a generic perfherder data ingestion path
* Add some explanatory comments
* Use better casing for static defines
* Remove some now-unused code related to json float encoding
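
As a sketch of the stateless-class refactor mentioned before the list
(the names and the transform are illustrative):

```python
# before: a class that carried no state between calls
class PerformanceDataAdapter:
    def adapt(self, datum):
        return {'value': datum['value'] * 1000}  # illustrative transform


# after: a plain module-level function with the same behaviour
def adapt_performance_datum(datum):
    return {'value': datum['value'] * 1000}
```
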
Since they're not specific to the Django app 'webapp'.
Whilst we're there, the local & example settings files have been
renamed. In the future I'd like to combine settings_local.example.py
with puppet/files/treeherder/local.vagrant.py, but I'll do that in
another bug.