treeherder/Procfile
# This file defines the processes that will be run on Heroku.
# Each line must be in the format `<process type>: <command>`.
# https://devcenter.heroku.com/articles/how-heroku-works#knowing-what-to-execute
# https://devcenter.heroku.com/articles/procfile
# The `release` process type specifies the command to run during deployment, and is where
# we run DB migrations and other tasks that are 'release' rather than 'build' specific:
# https://devcenter.heroku.com/articles/release-phase
# https://devcenter.heroku.com/articles/runtime-principles#build-release-run
release: ./bin/pre_deploy
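# For reference, a minimal sketch of what a release-phase script might contain (illustrative
# only; the actual contents of ./bin/pre_deploy in this repository may differ):
#   #!/usr/bin/env bash
#   set -euo pipefail
#   ./manage.py migrate --noinput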
# The `web` process type is the only one that receives external traffic from Heroku's routers.
# We set the maximum request duration to 20 seconds, to ensure that poorly performing API
# queries do not consume a gunicorn worker for unbounded lengths of time. See:
# https://devcenter.heroku.com/articles/python-gunicorn
# The Heroku Python buildpack sets some sensible gunicorn defaults via environment variables:
# https://github.com/heroku/heroku-buildpack-python/blob/master/vendor/python.gunicorn.sh
# https://github.com/heroku/heroku-buildpack-python/blob/master/vendor/WEB_CONCURRENCY.sh
# TODO: Experiment with different dyno sizes and gunicorn concurrency/worker types (bug 1175472).
web: newrelic-admin run-program gunicorn treeherder.config.wsgi:application --timeout 20
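# For local debugging, roughly the same server can be started without New Relic (a sketch;
# assumes gunicorn is available in the environment, and the worker/bind values are illustrative):
#   gunicorn treeherder.config.wsgi:application --timeout 20 --workers 3 --bind 0.0.0.0:5000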
# All other process types can have arbitrary names.
# The Celery options such as `--without-heartbeat` are from the recommendations here:
# https://www.cloudamqp.com/docs/celery.html
# The REMAP_SIGTERM is as recommended by:
# https://devcenter.heroku.com/articles/celery-heroku#using-remap_sigterm
# This schedules (but does not itself run) the cron-like tasks listed in `CELERY_BEAT_SCHEDULE`.
# However, we're moving away from this in favour of the Heroku Scheduler addon.
# NB: This should not be scaled up to more than 1 dyno otherwise duplicate tasks will be scheduled.
# TODO: Move the remaining tasks to the addon and remove this process type (deps of bug 1176492).
celery_scheduler: REMAP_SIGTERM=SIGQUIT newrelic-admin run-program celery beat -A treeherder
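# To keep it pinned at a single dyno (see the NB above), something like the following Heroku
# CLI invocation can be used (the app name is a placeholder):
#   heroku ps:scale celery_scheduler=1 --app <treeherder-app>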
# Push/task data is consumed from exchanges on pulse.mozilla.org by these kombu-powered
# Django management commands. They do not ingest the data themselves; instead they add tasks
# to the `store_pulse_{pushes,tasks}` queues for `worker_store_pulse_data` to process.
# NB: These should not be scaled up to more than 1 of each.
# TODO: Merge these two listeners into one since they use so little CPU each (bug 1530965).
pulse_listener_pushes: newrelic-admin run-program ./manage.py pulse_listener_pushes
pulse_listener_tasks: newrelic-admin run-program ./manage.py pulse_listener_tasks
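# For local testing, a listener can be run against Pulse as described in the ingestion docs
# (https://treeherder.readthedocs.io/pulseload.html); the credentials below are placeholders:
#   export PULSE_URL="amqp://<user>:<password>@pulse.mozilla.org:5671/?ssl=1"
#   docker-compose run -e PULSE_URL backend ./manage.py pulse_listener_tasks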
# Processes pushes/tasks from Pulse that were collected by `pulse_listener_{pushes,tasks}`.
worker_store_pulse_data: REMAP_SIGTERM=SIGQUIT newrelic-admin run-program celery worker -A treeherder --without-gossip --without-mingle --without-heartbeat -Q store_pulse_pushes,store_pulse_tasks --concurrency=3
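# A local equivalent from the ingestion docs, running a worker together with the beat scheduler:
#   docker-compose run backend celery -A treeherder worker -B --concurrency 5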
# Handles the log parsing tasks scheduled by `worker_store_pulse_data` as part of job ingestion.
worker_log_parser: REMAP_SIGTERM=SIGQUIT newrelic-admin run-program celery worker -A treeherder --without-gossip --without-mingle --without-heartbeat -Q log_parser,log_parser_fail,log_autoclassify,log_autoclassify_fail --concurrency=7
# Tasks that don't need a dedicated worker.
worker_misc: REMAP_SIGTERM=SIGQUIT newrelic-admin run-program celery worker -A treeherder --without-gossip --without-mingle --without-heartbeat -Q default,generate_perf_alerts,pushlog,seta_analyze_failures --concurrency=3