Since it can be confusing for people installing node/yarn on their own
(outside the Vagrant environment) to work out which version is appropriate,
given node always has both an LTS and a current release.
Note: This leaves the models and tables intact so that we can
revert without data loss in case we discover an issue. A follow-up
commit will remove those tables and models.
The maintenance section did not mention that it can take a while for the
different scheduled processes to bring the data into the right tables.
This documents how to:
* Update the runnable jobs table
* Update the job priority table
This populates the job priority table with data locally. It puts all jobs
into the table without analyzing failures; that can be done in a follow-up.
Update the docs accordingly (see the sketch below).
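A hedged sketch of running these updates from Python; the management command
names are assumptions for illustration, not confirmed Treeherder commands:

```python
# Illustrative only: the command names below are assumptions; check
# `./manage.py help` for the real Treeherder management commands.
from django.core.management import call_command

# Update the runnable jobs table (hypothetical command name).
call_command("update_runnable_jobs")

# Update the job priority table for all jobs, without analyzing failures
# (hypothetical command name).
call_command("update_job_priority")
```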
Switches to the new environment variable `PULSE_PUSH_SOURCES`.
The old `publish-resultset-runnable-job-action` task name is kept by creating
a method that points to `publish_push_runnable_job_action`, as sketched below.
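A minimal sketch of that alias; the class name `TreeherderPublisher` and the
pass-through signatures are assumptions, only the aliasing pattern is the
point:

```python
class TreeherderPublisher:
    # Class name and signatures are assumptions; the real code may differ.
    def publish_push_runnable_job_action(self, *args, **kwargs):
        ...  # renamed implementation lives here

    def publish_resultset_runnable_job_action(self, *args, **kwargs):
        # Old name kept for backwards compatibility; it simply points at
        # the renamed method.
        return self.publish_push_runnable_job_action(*args, **kwargs)
```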
By default webservers like Django's runserver, gunicorn or the
Webpack devserver only bind to the loopback adapter (127.0.0.1) and
so are not accessible from outside the Vagrant/VirtualBox VM,
since port forwarding only forwards traffic to the non-loopback
adapters.
Previously varnish (which listened on `0.0.0.0`) was reverse
proxying traffic to runserver/gunicorn, however we need to now do so
for webpack-dev-server on another port too. Doing both with varnish
adds complexity, and we don't actually need any of varnish's other
features, so ideally we want to stop using it.
Rather than having to override each webserver to bind to all
adapters (using the IP `0.0.0.0`), it's possible to forward traffic
to the loopback adapter using iptables NAT PREROUTING rules. This
is still secure so long as the Vagrantfile port forwarding uses a
`host_ip` of `127.0.0.1`. To prevent this "Martian packet" traffic
from being blocked, `route_localnet` must also be set to `1`. See:
https://unix.stackexchange.com/questions/111433/iptables-redirect-outside-requests-to-127-0-0-1
By default neither sysctl nor iptables settings are persisted across
reboots, and fixing that requires more complexity (e.g. installing the
iptables-persistent package and handling config changes during
provision). As such, it's just easier to re-run the commands on each
login, since they take <30ms.
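A rough sketch of those commands run from Python (e.g. from a login hook);
the adapter name `eth0` and port `8000` are assumptions for illustration:

```python
import subprocess

def forward_external_port_to_loopback(port):
    # Allow packets arriving on a non-loopback adapter to be routed to
    # 127.0.0.0/8 (otherwise they are dropped as "Martian packets").
    subprocess.run(
        ["sudo", "sysctl", "-w", "net.ipv4.conf.eth0.route_localnet=1"],
        check=True)
    # NAT incoming traffic on `port` to the loopback adapter, where
    # runserver/gunicorn/webpack-dev-server are listening.
    subprocess.run(
        ["sudo", "iptables", "-t", "nat", "-A", "PREROUTING",
         "-p", "tcp", "--dport", str(port),
         "-j", "DNAT", "--to-destination", "127.0.0.1"],
        check=True)

forward_external_port_to_loopback(8000)
```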
Previously a forgotten-about `local.conf.js` (which is git-ignored)
would override the URL passed by the `SERVICE_URL` environment variable.
With webpack and environment variables, there is no need to use a local
config file to control the API URL, so we can now remove this footgun.
Artifacts no longer exist (they've been replaced by more specific types
like "jobdetails"), and so fetching from this endpoint has been disabled
for some time already.
For data submission, we still call them artifacts (and sort them by type
after submission); however, all artifacts are currently submitted at the
same time as the job, so this endpoint is unused.
This import only affects internal treeherder usage; people using the
PyPI package import from the `thclient` subdirectory instead.
Fixes:
treeherder/client/__init__.py:1:1: F401 '.thclient.*' imported but unused
Since it's faster, deterministic, and doesn't give obscure errors when
using `--no-bin-links` (which is required for both npm and yarn on
Windows hosts), and as such unblocks the work in bug 1343624.
Many of the commands are the same as with npm. See:
https://yarnpkg.com/en/docs/usage
The `test` script entry in `package.json` (used by `npm test`) already
calls karma with the appropriate parameters, so the helper scripts are
unnecessary.
For the same reason as the previous commit.
Ideally we'd remove the grunt abstraction entirely and call eslint from
the `lint` command, but we might as well save that for the Neutrino PR.
Routing commands via npm/yarn is preferred, since it avoids
having to do global installs of grunt-cli, which simplifies contributor
setup, and means less effort when we switch to Yarn (since it requires
manual PATH setup for globally installed packages).
These were added by bug 1312575 and bug 1323110.
The table exclusion list has also been updated to remove the corsheader
entry, since as of v2.0.0 it no longer creates any tables.
Since it is footgun-prone, discourages upstreaming of useful development
tricks, and is unnecessary in an environment-variable-centric world.
The one remaining `BZ_API_URL` setting isn't actively used, and if this
changes in the future, it should be set via an environment variable
instead.
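If that ever happens, a minimal sketch of the environment-variable approach
in settings.py might look like the following (the default URL shown is just
an assumption):

```python
import os

# Sketch only: read the value from the environment, with an assumed default.
BZ_API_URL = os.environ.get("BZ_API_URL", "https://bugzilla.mozilla.org")
```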
Outputting to the console rather than a log file:
* is more user-friendly during development
* is more consistent with Heroku
* means the Vagrant-specific Django LOGGING config is now closer to the
one in settings.py, and so more easily combined with it
Both gunicorn and celery default to outputting to stdout/stderr, so the
`logfile` options can be omitted entirely.
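For illustration, a minimal console-only Django LOGGING sketch; the logger
names and levels are assumptions rather than Treeherder's exact
configuration:

```python
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",  # writes to stderr by default
        },
    },
    "loggers": {
        "django": {"handlers": ["console"], "level": "INFO"},
        "treeherder": {"handlers": ["console"], "level": "DEBUG"},
    },
}
```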
In this commit, Sheriff access is still maintained in the
Treeherder DB, rather than using the scopes derived from
LDAP.
For local usage with Vagrant, this requires accessing
Treeherder via localhost instead of
local.treeherder.mozilla.org.
Logging in to the Django admin directly is not enabled in this
branch. To use the admin, you must first log in through
the normal Treeherder front-end. Then the admin will
be accessible if the user has the privileges to do so.
Persona login will still be technically possible through the
login.taskcluster.net site. But that choice will go away
shortly.
As a new contributor to Treeherder, I was confused about how to get
Treeherder to ingest several pushes. The celery worker appeared to
only ingest the last 10 pushes.
This commit enhances the "ingest_push" command to allow ingesting
the last N pushes. I've used this to ingest the last 100 pushes
to seed the database with sufficient pushlog data.
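A hedged usage sketch from a Django shell; the repository name and the
`last_n_pushes` option name are assumptions rather than confirmed arguments:

```python
from django.core.management import call_command

# Seed the local database with pushlog data for the last 100 pushes
# (repository and option names here are illustrative assumptions).
call_command("ingest_push", "mozilla-central", last_n_pushes=100)
```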