I enabled the API HTML renderer, so that if you fetch one of the
endpoints using your browser (i.e. with an 'Accept: text/html' header)
it will return the right content type.
The default renderer is still the JSON one, so this should be
backwards-compatible.
Removes the default values for DATABASE_URL and DATABASE_URL_RO, so
that if either is not set, there is a clearer upfront Django error
about the databases not being configured, rather than timeouts whilst
trying to connect to localhost.
This makes it more obvious when setting up a new instance that the
variables are not present in the environment (and would have avoided
some confusion on Heroku today, when DATABASE_URL was set, but
DATABASE_URL_RO was not) - in exchange for now needing two extra lines
in the puppet config for local testing.
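The fail-fast behaviour this gives can be sketched with a small
stdlib-only helper (require_env is a hypothetical name for
illustration, not code from this change):

```python
import os

def require_env(name):
    """Return the value of a required environment variable, raising a
    clear upfront error if it is missing, instead of silently falling
    back to a default that times out connecting to localhost.
    (Hypothetical helper illustrating the fail-fast behaviour.)"""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError("Environment variable %s must be set" % name)
    return value
```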
dj-database-url extracts DB host, port, username, password and database
name from the env variable 'DATABASE_URL' (unless another env variable
name is specified). If the env variable is not defined, it falls back to
the default passed to dj_database_url.config().
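A rough stdlib-only sketch of the parsing dj-database-url performs (a
simplified re-implementation for illustration, not the library's
actual code):

```python
import os
from urllib.parse import urlparse

def parse_database_url(env='DATABASE_URL', default=None):
    """Split a URL like mysql://user:pass@host:3306/dbname into the
    settings dict Django expects; falls back to `default` when the env
    variable is unset (simplified sketch of dj-database-url)."""
    url = os.environ.get(env, default)
    if url is None:
        return {}
    parts = urlparse(url)
    return {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': parts.path.lstrip('/'),
        'USER': parts.username or '',
        'PASSWORD': parts.password or '',
        'HOST': parts.hostname or '',
        'PORT': str(parts.port or ''),
    }
```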
This means for Heroku and similar we can replace the multiple DB env
variables with just one URL for default & one for read_only.
This also effectively makes setting the read-only DB variable
mandatory for stage/production/Heroku, since DEFAULT_DATABASE_URL
won't be valid for them - which prevents us from inadvertently not
using the read-only DB.
The deployment script also had to be updated, so that we set the
prod/stage-specific environment variables before using manage.py, since
dj-database-url cannot rely on what's in the stage/prod local.py config
(which isn't a bad thing, since we're deprecating that file).
These SQL procs are not used anywhere in the repo:
generic.selects.get_db_size
jobs.selects.get_result_set_job_list
jobs.selects.get_result_set_job_list_full
jobs.selects.get_revision_map_id
reference.inserts.create_repository_group
reference.selects.get_option_collection_hash
reference.selects.get_max_collection_hash
reference.selects.get_option_names
reference.selects.get_repository_group_id
There are also unused objectstore procs, but the entire file will be
removed shortly in bug 1140349.
This patch upgrades the version stored in the requirements file and fixes some issues introduced by breaking changes in the new version of the library:
- Writable nested fields are no longer available; you now need an explicit create method on the serializer to write a nested field.
- ModelViewSet now requires serializer_class and queryset attributes.
- The @action and @link decorators have been replaced by @detail_route and @list_route.
- Any attempt to create a ModelSerializer instance with an attribute whose type is either dict or list will raise an exception.
Since we use Celery for queueing job ingestion, the objectstore is
now irrelevant. This code is the first step. This will bypass
the Objectstore and ingest jobs directly to our ``jobs`` database.
Phase 2 is to remove all the Objectstore code (in a later commit).
Phase 3 is to delete the Objectstore databases and related fields in
other tables.
It appears that on occasion we parse a log more than once, which
resulted in duplicate performance series going into the database.
Let's be resilient about this by not inserting duplicate jobs into the
database (we always attach a unique job id to every datapoint, so
there's no chance of accidentally discarding entries that happen to
have the same performance numbers).
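The de-duplication approach can be sketched like this
(dedupe_datapoints is a hypothetical helper for illustration, not the
actual patch):

```python
def dedupe_datapoints(existing_job_ids, datapoints):
    """Drop datapoints whose job id has already been stored, so that
    parsing the same log twice cannot insert duplicate series.
    (Hypothetical helper; the unique job id is what distinguishes
    genuine entries that happen to share performance numbers.)"""
    seen = set(existing_job_ids)
    unique = []
    for point in datapoints:
        if point['job_id'] not in seen:
            seen.add(point['job_id'])
            unique.append(point)
    return unique
```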
This adds the ability to specify a custom log name and have the log
viewer use the ``logname`` param of the ``text_log_summary`` to get the
right log.
This also improves the error message returned by the /logslice/ API if a
log name is used that is not found.
After bug 1182455, the engine is now guaranteed to always be the same
for all projects, so we can just specify it in the schema directly.
As an added bonus, the templates are now valid SQL.
As part of Datasource v0.9's SQL injection fix, it no longer supports
passing comma-delimited strings to the `limit` parameter, to denote the
SQL LIMIT/OFFSET. Instead we need to pass integers to Datasource's
`limit` and `offset` params separately. The `offset` param now actually
works in Datasource v0.9, unlike previous releases.
* Fixes the `offset` parameter, since it previously used the value for
`limit` instead.
* The `limit` and `offset` parameters are now cast to int, to prevent
SQL injection if those parameters were not sanitised in the app.
Note: This intentionally removes the ability to pass a comma-delimited
`limit` string such as "100,200", since the now-working `offset`
parameter makes it redundant.
https://github.com/jeads/datasource/compare/v0.8...v0.9
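The int-casting fix can be sketched as follows (build_limit_clause is
a hypothetical name; Datasource's real implementation may differ):

```python
def build_limit_clause(limit=None, offset=None):
    """Build a SQL LIMIT/OFFSET clause, casting both values to int so
    that a malicious string raises ValueError rather than being
    interpolated into the query (illustrative sketch)."""
    clause = ''
    if limit is not None:
        clause += ' LIMIT %d' % int(limit)
        if offset is not None:
            clause += ' OFFSET %d' % int(offset)
    return clause
```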
Having the ability to use different DB hosts for each project sounded
like a good idea, but in reality, we have no need for it.
This switches us to using the global read-write and read-only database
host names rather than the fields on the datasource table. As such, the
'host', 'read_only_host' and 'type' (eg 'mysql') fields can be removed.
The Django model had a unique_together on host+name, so we now need to
make 'name' (i.e. the database name) a unique key on its own.
In addition, this removes the 'creation_date' field, since we don't
use it anywhere, and we can just look at the commit history to see
when a repo was created. (I imagine it may have had more use if we had
actually started partitioning the databases using the old 'dataset'
count field.)
In a future bug, I'll remove the redundant substitution of 'engine' for
'InnoDB' in the template schema, given that engine is now always InnoDB
in create_db().
Otherwise we get access denied errors when using run_sql on Heroku.
All other calls go via Datasource, so they have already been set up to
pass the SSL options.