* a viewLog() method in the result set controller that was *supposed* to be called
from the pinboard controller (when the user right-clicks on a job), but isn't
anymore. The fact that no one's complained or bothered to fix it since it was
broken (this code is pretty old) is an indication to me that we don't care about
it that much. Let's just leave it out.
* An event handler for a "job context menu" that doesn't do anything (anymore)
To reduce duplication and ensure the configurations remain in sync.
Note: The Vagrant config does set `bind-address` to allow non-localhost
connections, which isn't necessary on Travis. However this is fine, since
(a) it's only Travis, and (b) the user grants on Travis won't actually allow
non-localhost connections anyway, even if Travis' network settings
allowed connections between test nodes.
Rather than overwriting the default MySQL 5.6 config file, the specific
changes we wish to make are now made via a file in the `mysql/conf.d/`
include directory. This makes it easier to see where we differ from the
defaults, as well as preventing us from inadvertently overriding any
new defaults in `/etc/mysql/my.cnf` when we update to new MySQL major
versions in the future.
The contents of this new file were determined by diffing against the
untouched `/etc/mysql/my.cnf` file, and removing anything that was still
set to defaults.
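For illustration, the include file might look something like this (the filename and everything other than `bind-address`, which the earlier commit notes the Vagrant config sets, are assumptions rather than the actual contents):

```ini
; Illustrative contents for a file like mysql/conf.d/treeherder.cnf
; (filename hypothetical). Only settings that differ from the stock
; /etc/mysql/my.cnf belong here; everything else falls through to
; the defaults, even across MySQL upgrades.
[mysqld]
; Allow connections from outside the VM, not just the loopback interface.
bind-address = 0.0.0.0
```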
The path used for `exec{}` commands (defined in `vagrant.pp`) includes
`${VENV_DIR}/bin` so we don't need to specify the full path for pip
and python invocations that are meant to use the virtualenv binaries.
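A minimal sketch of the idea (the resource names, `${PROJ_DIR}`, and the exact path entries are illustrative, not copied from `vagrant.pp`):

```puppet
# Illustrative only: an exec resource default means later commands
# resolve pip/python from the virtualenv bin directory first.
Exec {
  path => ["${VENV_DIR}/bin", "/usr/local/bin", "/usr/bin", "/bin"],
}

exec { "install-dev-requirements":
  # No full paths needed: pip here is ${VENV_DIR}/bin/pip.
  command => "pip install -r requirements/dev.txt",
  cwd     => "${PROJ_DIR}",
}
```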
The dev.txt packages no longer depend on system packages installed
in mysql.pp, so don't need to be installed separately. The working
directory has also been adjusted to avoid the need to specify the full
path to the requirements files.
Previously there were two sample Django config files, and confusingly
the one that would be used in the Vagrant environment wasn't the one
that was the most visible.
In addition, we're not performing any kind of variable substitution, so
don't need to use `content => template()`.
Previously, provisioning would append entries to .bashrc in multiple steps,
whereas now these are just included in the `.profile` that is symlinked
from the environment. As such, future changes will no longer need a
re-provision after pulling latest master to take effect.
This renames the existing `.bash_aliases` file to `.profile`, since
we're soon going to use it for more than just aliases. It overwrites the
default `.profile` file in the VM, so we need to source `.bashrc` as the
original did.
In addition, rather than copying the file we now symlink it, so that
future changes don't require a re-provision after pulling latest master
to take effect.
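A sketch of the mechanism, using temporary directories in place of the real vagrant home directory and repo checkout (those paths, and the file contents beyond sourcing `.bashrc`, are assumptions):

```shell
# Stand-ins for the vagrant user's home and the repo checkout.
HOME_DIR=$(mktemp -d)
REPO_DIR=$(mktemp -d)

# The repo's profile file sources .bashrc, as the stock ~/.profile did,
# so overwriting the default .profile doesn't lose those settings.
cat > "$REPO_DIR/.profile" <<'EOF'
# Load the default interactive-shell settings first.
[ -f "$HOME/.bashrc" ] && . "$HOME/.bashrc"
EOF

# Symlink rather than copy, so edits to the file in the repo take
# effect on the next login without a re-provision.
ln -sf "$REPO_DIR/.profile" "$HOME_DIR/.profile"
```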
Having them in a separate file is cleaner, makes discovering where
the environment variables are set easier, plus means we can symlink the
file, so future variable changes will take effect immediately, rather
than needing a re-provision after pulling latest master.
A default user exists with username 'root' and blank password, which we
might as well use to save having to create another. We still have to add
a grant to allow root to connect from outside the VM, since the default
grant of `root@localhost` only allows connections via the loopback
interface.
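The extra grant amounts to something along these lines (the exact privilege list in the provisioning code may differ; `'%'` is what permits connections from outside the VM):

```sql
-- Illustrative only: 'root'@'%' allows connections from the host
-- machine, whereas the default 'root'@'localhost' grant only covers
-- the loopback interface.
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%';
```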
The dependency on the `create-db` task has also been removed, since the
grant uses a wildcard, so doesn't refer to the `treeherder` DB directly.
A default user exists with username 'guest' and password 'guest', which
we might as well use to save having to create another and set up grants.
See:
https://www.rabbitmq.com/access-control.html
* Bug 1264074 - Move to_timestamp function to a reusable location
* Bug 1264074 - Refactor JobConsumer to have a PulseConsumer super class
Much of what was in the JobConsumer is reusable by the upcoming
ResultsetConsumer. So refactor those parts out so that each specific
consumer can reuse code as much as possible.
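The shape of the refactor is roughly the following sketch; the class names come from the commits, but the method names and attributes are hypothetical:

```python
class PulseConsumer:
    """Shared Pulse connection/binding handling, factored out of the
    old JobConsumer so other consumers can reuse it."""

    def __init__(self, connection):
        self.connection = connection
        self.bindings = []

    def bind_to(self, exchange, routing_key):
        # Common binding bookkeeping, previously inline in JobConsumer.
        self.bindings.append((exchange, routing_key))

    def on_message(self, body, message):
        # Each concrete consumer supplies its own message handler.
        raise NotImplementedError


class JobConsumer(PulseConsumer):
    def on_message(self, body, message):
        pass  # existing job-ingestion logic lives here


class ResultsetConsumer(PulseConsumer):
    def on_message(self, body, message):
        # Fetch any extra data from the GitHub API, then store
        # the resultset.
        pass
```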
* Bug 1264074 - Add ability to ingest Github Resultsets via Pulse
This introduces a ResultsetConsumer and a read_pulse_resultsets
management command to ingest resultsets from the TaskCluster
github exchanges.
When a supported Github repo has a Pull Request created or
updated, or a push is made to master, then it will kick off a
Pulse message. We will receive it and then fetch any additional
information we need from github's API and store the Resultset.
This follows a very similar pattern to the Job Pulse ingestion.
* Bug 1264074 - Old code/comments cleanup
* Bug 1264074 - Tests for the Github resultset pulse loader
Using apt-get isn't worth it since:
* we have to manually add their repository due to it being incompatible
with add-apt-repository, resulting in a lot of boilerplate
* we want to pin to a specific version, so don't need apt-get to pull
new versions from the repository for us
* the elasticsearch package doesn't declare any dependencies, so don't
need apt-get for managing them either
* it's only a development environment, so relying on TLS alone for the
  download (rather than apt's package verification) is fine for security
In addition, the installation process now matches that used in Travis,
which improves consistency between environments and means in the future
we could factor out the Elasticsearch install into a shared script
to avoid duplication.
In addition:
* Quietens curl's output to avoid progress bar logspam.
* Removes the unnecessary dpkg option `--force-confnew` (since it's only
needed if overwriting an existing Elasticsearch installation).
* Reduces the number of places where we duplicate the version number.
ActiveData's scraping of Treeherder's API has caused responsiveness and
performance issues for other users of Treeherder on several occasions,
so is being blocked until we can decide upon a less detrimental way for
ActiveData to obtain this data.