This additionally reorders all of the actions, spacing them 50
"units" apart and putting the more common actions first.
MozReview-Commit-ID: 98IOYKVMcGU
--HG--
extra : rebase_source : 1273a8b86625bd8e4dc3bddab80c6912241f88c8
extra : histedit_source : 16314284a2b4e0368da843b036e22aaedf485307
Currently, `mach try fuzzy` generates a taskgraph that is configured exactly
like the most recent push to mozilla-central. This isn't always desirable, so
pass some configuration down to allow the taskgraph to behave differently.
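For illustration, a hedged sketch of the kind of thing try_task_config.json could carry down alongside the selected tasks (field names here are illustrative, not the actual schema):

    {
      "tasks": [
        "build-linux64/opt",
        "test-linux64/opt-mochitest-e10s-1"
      ],
      "env": {
        "EXAMPLE_SETTING": "1"
      }
    }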
Differential Revision: https://phabricator.services.mozilla.com/D2729
--HG--
extra : rebase_source : 99d6958b33211697227e65df17edc1eb337f63a4
extra : histedit_source : 69b5ff6805bc8409340eb71323a1f6fc637259d7
This is needed to avoid a circular kind dependency when we actually spell out all dependencies (in a following patch).
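A hedged sketch of the resulting kind (the loader and dependency names are a guess at the shape, not the actual file contents):

    # taskcluster/ci/repackage-signing-l10n/kind.yml (illustrative)
    loader: taskgraph.loader.single_dep:loader
    kind-dependencies:
        - repackage-l10n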
Differential Revision: https://phabricator.services.mozilla.com/D1695
--HG--
rename : taskcluster/ci/repackage-signing/kind.yml => taskcluster/ci/repackage-signing-l10n/kind.yml
extra : rebase_source : c2998ba23f213090d27495eb44c3bde3a1628dff
Assorted fixes from trawling the sphinx logs - malformed formatting, broken references, leftovers from renaming action-task to action-callback and removing
yaml-templates, docstring fixes to make sphinx happier, and typos.
MozReview-Commit-ID: 6jUOljdLoE2
--HG--
extra : rebase_source : f2a9dbcde5180f760a80f47691a70eba76e58bad
Various fixes to make the docs better and Sphinx happier:
* remove unused targets that were duplicated in balrog and partials
* fix up malformed targets and links
* convert docstrings to comments so Sphinx ignores the example response in utils/partials.py
* make section underlines match the length of their titles (see the example below)
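On the last point, reStructuredText requires the underline to be at least as long as the title, or Sphinx emits "Title underline too short.":

    .. too short, Sphinx warns:
    Partner repacks
    ------------

    .. matches, Sphinx is happy:
    Partner repacks
    ---------------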
MozReview-Commit-ID: GSYqsocBC4I
--HG--
extra : source : e464e9bbb42fb85170f1ce35a01f25811d753871
We shouldn't run this on central, as it falls back to the dev configs and fails.
It should be fine on beta/release/esr60. I had to move this version of the check to its
own kind to avoid the dependency tree pulling in the entire build process. Perhaps we can
refactor later to avoid the duplication.
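A hedged sketch of the restriction (run-on-projects is existing taskgraph vocabulary; the exact spelling in the new kind may differ):

    # Illustrative: keep the check off central, where only the dev
    # configs would be available.
    run-on-projects:
        - mozilla-beta
        - mozilla-release
        - mozilla-esr60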
Differential Revision: https://phabricator.services.mozilla.com/D1765
--HG--
rename : taskcluster/ci/release-bouncer-check/kind.yml => taskcluster/ci/bouncer-check/kind.yml
This is just enough to make these repos discoverable and suggest their usage.
Anything more would duplicate documentation in those repositories.
MozReview-Commit-ID: 1WNEsoQBB9U
--HG--
extra : rebase_source : 19c25ee4b05fe72d2fed37ca68f75db3918d1c2f
Chains a release-eme-free-repack-beetmover-checksums kind after release-eme-free-repack-beetmover, to move
the target.checksums generated by the latter into the beetmover-checksums/ directory under candidates. Those
are then consumed by the release-generate-checksum kind.
A lot of details, like scopes, worker & provisioner, and attributes, as well as data like repack_id and
partner_path, are inherited directly from the parent beetmover task, mainly to avoid recalculating them.
In contrast to nightly builds, GPG signing of target.checksums has not been implemented. I don't believe
that adds any value in our current system because the sigs are not verified.
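A hedged sketch of that inheritance, as a transform in the spirit of the real one (key names and the dependency accessor are illustrative, not the actual code):

    # Illustrative transform: copy details from the parent beetmover
    # task rather than recalculating them.
    def inherit_from_beetmover(config, jobs):
        for job in jobs:
            parent = job['dependent-task']          # the beetmover task
            job['attributes'] = parent.attributes.copy()
            job['worker-type'] = parent.task['workerType']
            job['scopes'] = list(parent.task['scopes'])
            job['extra'] = dict(parent.task.get('extra', {}))  # repack_id, partner_path
            yield job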
MozReview-Commit-ID: 38iz3J2PAXh
--HG--
extra : rebase_source : 8f2bebe747d97437780f1bc0d9e2f42159aee8a6
Currently, many tasks fetch content from the Internets. A problem with
that is that fetching from the Internets is unreliable: servers may have
outages or be slow; content may disappear or change out from under us.
The unreliability of 3rd party services poses a risk to Firefox CI.
If services aren't available, we could potentially not run some CI tasks.
In the worst case, we might not be able to release Firefox. That would
be bad. In fact, as I write this, gmplib.org has been unavailable for
~24 hours and Firefox CI is unable to retrieve the GMP source code.
As a result, building GCC toolchains is failing.
A solution to this is to make tasks more hermetic by depending on
fewer network services (which by definition aren't reliable over time
and therefore introduce instability).
This commit attempts to mitigate some external service dependencies
by introducing the *fetch* task kind.
The primary goal of the *fetch* kind is to obtain remote content and
re-expose it as a task artifact. By making external content available
as a cached task artifact, we allow dependent tasks to consume this
content without touching the service originally providing that
content, thus eliminating a run-time dependency and making tasks more
hermetic and reproducible over time.
We introduce a single "fetch-url" "using" flavor to define tasks that
fetch single URLs and then re-expose that URL as an artifact. Powering
this is a new, minimal "fetch" Docker image that contains a
"fetch-content" Python script that does the work for us.
We have added tasks to fetch source archives used to build the GCC
toolchains.
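A hedged sketch of one such task definition (the shape is approximate and the digest value is elided):

    gmp-6.1.2:
        description: GMP 6.1.2 source code
        run:
            using: fetch-url
            url: https://gmplib.org/download/gmp/gmp-6.1.2.tar.bz2
            # expected digest of the download, verified after fetching
            sha256: <digest elided>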
Fetching remote content and re-exposing it as an artifact is not
very useful by itself: the value is in having tasks use those
artifacts.
We introduce a taskgraph transform that allows tasks to define an
array of "fetches." Each entry corresponds to the name of a "fetch"
task kind. When present, the corresponding "fetch" task is added as a
dependency. And the task ID and artifact path from that "fetch" task
is added to the MOZ_FETCHES environment variable of the task depending
on it. Our "fetch-content" script has a "task-artifacts"
sub-command that tasks can execute to perform retrieval of all
artifacts listed in MOZ_FETCHES.
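A hedged sketch of both ends of that mechanism (kind, task, and artifact names are illustrative):

    # In the consuming task's definition:
    fetches:
        fetch:
            - gmp-6.1.2

    # The transform then injects something like:
    #   MOZ_FETCHES=[{"task": "<taskId>", "artifact": "public/gmp-6.1.2.tar.bz2"}]
    # and the task runs `fetch-content task-artifacts` to download them all.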
To prove all of this works, the code for fetching dependencies when
building GCC toolchains has been updated to use `fetch-content`. The
now-unused legacy code has been deleted.
This commit improves the reliability and efficiency of GCC toolchain
tasks. Dependencies now all come from task artifacts and should always
be available in the common case. In addition, `fetch-content` downloads
and extracts files concurrently. This makes it faster than the serial
application which we were previously using.
There are some things I don't like about this commit.
First, a new Docker image and Python script for downloading URLs feels
a bit heavyweight. The Docker image is definitely overkill as things
stand. I can eventually justify it because I want to implement support
for fetching and repackaging VCS repositories and for caching Debian
packages. These will require more packages than what I'm comfortable
installing on the base Debian image, therefore justifying a dedicated
image.
The `fetch-content static-url` sub-command could definitely be
implemented as a shell script. But Python is readily available and
is more pleasant to maintain than shell, so I wrote it in Python.
`fetch-content task-artifacts` is more advanced and writing it in
Python is more justified, IMO. FWIW, the script is Python 3 only,
which conveniently gives us access to `concurrent.futures`, which
facilitates concurrent download.
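A minimal sketch of that concurrency, in the spirit of fetch-content (not the actual script; extraction and error handling omitted):

    import concurrent.futures
    import urllib.request

    def download(url, dest):
        # Stream one URL to a local file.
        with urllib.request.urlopen(url) as r, open(dest, 'wb') as f:
            while True:
                chunk = r.read(65536)
                if not chunk:
                    break
                f.write(chunk)
        return dest

    def fetch_all(fetches):
        # `fetches` is an iterable of (url, dest) pairs; downloads run
        # concurrently in a thread pool.
        with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
            futures = [pool.submit(download, url, dest) for url, dest in fetches]
            for fut in concurrent.futures.as_completed(futures):
                print('downloaded %s' % fut.result())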
`fetch-content` also duplicates functionality found elsewhere.
generic-worker's task payload supports a "mounts" feature which
facilitates downloading remote content, including from a task
artifact. However, this feature doesn't exist on docker-worker.
So we have to implement downloading inside the task rather than
at the worker level. I concede that if all workers had generic-worker's
"mounts" feature and supported concurrent download, `fetch-content`
wouldn't need to exist.
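For reference, a hedged sketch of the generic-worker "mounts" payload being described (shape per generic-worker's mounts feature; IDs and paths illustrative):

    {
      "mounts": [
        {
          "file": "gmp-6.1.2.tar.bz2",
          "content": {
            "taskId": "<fetch task id>",
            "artifact": "public/gmp-6.1.2.tar.bz2"
          }
        }
      ]
    }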
`fetch-content` also duplicates functionality of
`mach artifact toolchain`. I probably could have used
`mach artifact toolchain` instead of writing
`fetch-content task-artifacts`. However, I didn't want to introduce
the requirement of a VCS checkout. `mach artifact toolchain` has its
origins in providing a feature to the build system. And "fetching
artifacts from tasks" is a more generic feature than that. I think
it should be implemented as a generic feature and not something that is
"toolchain" specific.
I think the best place for a generic "fetch content" feature is in
the worker, where content can be defined in the task payload. But as
explained above, that feature isn't universally available. The next
best place is probably run-task. run-task already performs generic,
very-early task preparation steps, such as performing a VCS checkout.
I would like to fold `fetch-content` into run-task and make it all
driven by environment variables. But run-task is currently Python 2
and achieving concurrency would involve a bit of programming (or
adding package dependencies). I may very well port run-task to Python
3 and then fold fetch-content into it. Or maybe we leave
`fetch-content` as a standalone script.
MozReview-Commit-ID: AGuTcwNcNJR
--HG--
extra : source : 0b941cbdca76fb2fbb98dc5bbc1a0237c69954d0
extra : histedit_source : a3e43bdd8a9a58550bef02fec3be832ca304ea93
L10n jobs should run per-push, based on the corresponding builds.
Differential Revision: https://phabricator.services.mozilla.com/D1406
--HG--
extra : rebase_source : 207d1c25e37ab2619a09fb209282ffe55025de26