# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
job-defaults:
    worker:
        docker-image: {in-tree: toolchain-build}

linux64-clang-3.9:
    description: "Clang 3.9 toolchain build"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(clang3.9)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux-large
    worker:
        max-run-time: 7200
    run:
        using: toolchain-script
        script: build-clang-3.9-linux.sh
        # `resources` feeds the toolchain's cache digest: a change to any
        # listed file triggers a rebuild, so wildcards are deliberately
        # avoided here (bug 1386588).
        resources:
            - 'build/build-clang/build-clang.py'
            - 'build/build-clang/clang-3.9-linux64.json'
            - 'taskcluster/scripts/misc/tooltool-download.sh'
        toolchain-artifact: public/build/clang.tar.xz
    toolchains:
        - linux64-gcc-4.9

linux64-clang-5:
    description: "Clang 5 toolchain build"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(clang5)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux-xlarge
    worker:
        max-run-time: 7200
    run:
        using: toolchain-script
        script: build-clang-5-linux.sh
        resources:
            - 'build/build-clang/build-clang.py'
            - 'build/build-clang/clang-5-linux64.json'
            - 'taskcluster/scripts/misc/tooltool-download.sh'
        toolchain-artifact: public/build/clang.tar.xz
    toolchains:
        - linux64-gcc-4.9

linux64-clang-6:
    description: "Clang 6 toolchain build"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(clang6)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux-xlarge
    worker:
        max-run-time: 7200
    run:
        using: toolchain-script
        script: build-clang-6-linux.sh
        resources:
            - 'build/build-clang/build-clang.py'
            - 'build/build-clang/clang-6-linux64.json'
            - 'taskcluster/scripts/misc/tooltool-download.sh'
        toolchain-alias: linux64-clang
        toolchain-artifact: public/build/clang.tar.xz
    toolchains:
        - linux64-gcc-4.9

linux64-clang-6-macosx-cross:
    description: "Clang 6 toolchain build with MacOS Compiler RT libs"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(clang6-macosx-cross)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        max-run-time: 3600
        env:
            TOOLTOOL_MANIFEST: "browser/config/tooltool-manifests/macosx64/cross-clang.manifest"
    run:
        using: toolchain-script
        script: build-clang-6-linux-macosx-cross.sh
        resources:
            - 'build/build-clang/build-clang.py'
            - 'build/build-clang/clang-6-macosx64.json'
            - 'taskcluster/scripts/misc/tooltool-download.sh'
        toolchain-alias: linux64-clang-macosx-cross
        toolchain-artifact: public/build/clang.tar.xz
        tooltool-downloads: internal
    toolchains:
        - linux64-cctools-port
        - linux64-clang-6
        - linux64-gcc-4.9

linux64-clang-tidy:
    description: "Clang-tidy build"
    index:
        product: static-analysis
        job-name: linux64-clang-tidy
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(clang-tidy)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux-large
    worker:
        max-run-time: 3600
    run:
        using: toolchain-script
        script: build-clang-tidy-linux.sh
        resources:
            # All clang-plugin sources feed the rebuild digest, hence the
            # wildcard (wildcards are otherwise avoided; bug 1386588).
            - 'build/clang-plugin/**'
            - 'build/build-clang/build-clang.py'
            - 'build/build-clang/clang-tidy-linux64.json'
            - 'taskcluster/scripts/misc/tooltool-download.sh'
        toolchain-artifact: public/build/clang-tidy.tar.xz
    run-on-projects:
        - trunk
        - try
    toolchains:
        - linux64-gcc-4.9

linux64-gcc-4.9:
    description: "GCC 4.9 toolchain build"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(gcc4.9)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        max-run-time: 3600
    run:
        using: toolchain-script
        script: build-gcc-4.9-linux.sh
        resources:
            - 'build/unix/build-gcc/build-gcc.sh'
        toolchain-artifact: public/build/gcc.tar.xz
    # Each entry names a fetch task; that task is added as a dependency and
    # its artifact is exposed to this build via MOZ_FETCHES (bug 1460777),
    # so sources need not be downloaded from third-party servers at
    # build time.
    fetches:
        fetch:
            - binutils-2.25.1
            - cloog-0.18.1
            - gcc-4.9.4
            - gmp-5.1.3
            - isl-0.12.2
            - mpc-0.8.2
            - mpfr-3.1.5

linux64-gcc-6:
    description: "GCC 6 toolchain build"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(gcc6)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        max-run-time: 3600
    run:
        using: toolchain-script
        script: build-gcc-6-linux.sh
        resources:
            - 'build/unix/build-gcc/build-gcc.sh'
        toolchain-alias: linux64-gcc
        toolchain-artifact: public/build/gcc.tar.xz
    fetches:
        fetch:
            - binutils-2.28.1
            - gcc-6.4.0
            - gmp-5.1.3
            - isl-0.15
            - mpc-0.8.2
            - mpfr-3.1.5

linux64-gcc-7:
|
|
|
|
description: "GCC 7 toolchain build"
|
|
|
|
treeherder:
|
|
|
|
kind: build
|
|
|
|
platform: toolchains/opt
|
|
|
|
symbol: TL(gcc7)
|
|
|
|
tier: 1
|
|
|
|
worker-type: aws-provisioner-v1/gecko-{level}-b-linux
|
|
|
|
worker:
|
2018-06-20 23:59:01 +03:00
|
|
|
max-run-time: 3600
|
2018-03-14 03:37:27 +03:00
|
|
|
run:
|
|
|
|
using: toolchain-script
|
|
|
|
script: build-gcc-7-linux.sh
|
|
|
|
resources:
|
|
|
|
- 'build/unix/build-gcc/build-gcc.sh'
|
|
|
|
toolchain-artifact: public/build/gcc.tar.xz
|
Bug 1460777 - Taskgraph tasks for retrieving remote content; r=dustin, glandium
Currently, many tasks fetch content from the Internets. A problem with
that is fetching from the Internets is unreliable: servers may have
outages or be slow; content may disappear or change out from under us.
The unreliability of 3rd party services poses a risk to Firefox CI.
If services aren't available, we could potentially not run some CI tasks.
In the worst case, we might not be able to release Firefox. That would
be bad. In fact, as I write this, gmplib.org has been unavailable for
~24 hours and Firefox CI is unable to retrieve the GMP source code.
As a result, building GCC toolchains is failing.
A solution to this is to make tasks more hermetic by depending on
fewer network services (which by definition aren't reliable over time
and therefore introduce instability).
This commit attempts to mitigate some external service dependencies
by introducing the *fetch* task kind.
The primary goal of the *fetch* kind is to obtain remote content and
re-expose it as a task artifact. By making external content available
as a cached task artifact, we allow dependent tasks to consume this
content without touching the service originally providing that
content, thus eliminating a run-time dependency and making tasks more
hermetic and reproducible over time.
We introduce a single "fetch-url" "using" flavor to define tasks that
fetch single URLs and then re-expose that URL as an artifact. Powering
this is a new, minimal "fetch" Docker image that contains a
"fetch-content" Python script that does the work for us.
We have added tasks to fetch source archives used to build the GCC
toolchains.
Fetching remote content and re-exposing it as an artifact is not
very useful by itself: the value is in having tasks use those
artifacts.
We introduce a taskgraph transform that allows tasks to define an
array of "fetches." Each entry corresponds to the name of a "fetch"
task kind. When present, the corresponding "fetch" task is added as a
dependency. And the task ID and artifact path from that "fetch" task
is added to the MOZ_FETCHES environment variable of the task depending
on it. Our "fetch-content" script has a "task-artifacts"
sub-command that tasks can execute to perform retrieval of all
artifacts listed in MOZ_FETCHES.
To prove all of this works, the code for fetching dependencies when
building GCC toolchains has been updated to use `fetch-content`. The
now-unused legacy code has been deleted.
This commit improves the reliability and efficiency of GCC toolchain
tasks. Dependencies now all come from task artifacts and should always
be available in the common case. In addition, `fetch-content` downloads
and extracts files concurrently. This makes it faster than the serial
application which we were previously using.
There are some things I don't like about this commit.
    fetches:
        fetch:
            - binutils-2.28.1
            - gcc-7.3.0
            - gmp-6.1.0
            - isl-0.16.1
            - mpc-1.0.3
            - mpfr-3.1.4

linux64-gcc-sixgill:
    description: "sixgill GCC plugin build"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(sixgill)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        max-run-time: 3600
    run:
        using: toolchain-script
        script: build-gcc-sixgill-plugin-linux.sh
        resources:
            - 'build/unix/build-gcc/build-gcc.sh'
            - 'taskcluster/scripts/misc/build-gcc-sixgill-plugin-linux.sh'
        toolchain-artifact: public/build/sixgill.tar.xz
    toolchains:
        - linux64-gcc-6

Bug 1460777 - Taskgraph tasks for retrieving remote content; r=dustin, glandium

Currently, many tasks fetch content from the Internets. A problem with that is that fetching from the Internets is unreliable: servers may have outages or be slow; content may disappear or change out from under us. The unreliability of third-party services poses a risk to Firefox CI. If services aren't available, we could potentially not run some CI tasks. In the worst case, we might not be able to release Firefox. That would be bad. In fact, as I write this, gmplib.org has been unavailable for ~24 hours and Firefox CI is unable to retrieve the GMP source code. As a result, building GCC toolchains is failing.

A solution to this is to make tasks more hermetic by depending on fewer network services (which by definition aren't reliable over time and therefore introduce instability).

This commit attempts to mitigate some external service dependencies by introducing the *fetch* task kind.

The primary goal of the *fetch* kind is to obtain remote content and re-expose it as a task artifact. By making external content available as a cached task artifact, we allow dependent tasks to consume this content without touching the service originally providing that content, thus eliminating a run-time dependency and making tasks more hermetic and reproducible over time.

We introduce a single "fetch-url" "using" flavor to define tasks that fetch single URLs and then re-expose those URLs as artifacts. Powering this is a new, minimal "fetch" Docker image that contains a "fetch-content" Python script that does the work for us.

We have added tasks to fetch the source archives used to build the GCC toolchains.

Fetching remote content and re-exposing it as an artifact is not very useful by itself: the value is in having tasks use those artifacts.

We introduce a taskgraph transform that allows tasks to define an array of "fetches". Each entry corresponds to the name of a "fetch" task kind. When present, the corresponding "fetch" task is added as a dependency, and the task ID and artifact path from that "fetch" task are added to the MOZ_FETCHES environment variable of the task depending on it. Our "fetch-content" script has a "task-artifacts" sub-command that tasks can execute to retrieve all artifacts listed in MOZ_FETCHES.

To prove all of this works, the code for fetching dependencies when building GCC toolchains has been updated to use `fetch-content`, and the now-unused legacy code has been deleted.

This commit improves the reliability and efficiency of GCC toolchain tasks. Dependencies now all come from task artifacts and should always be available in the common case. In addition, `fetch-content` downloads and extracts files concurrently, which makes it faster than the serial approach we were previously using.

There are some things I don't like about this commit.

First, a new Docker image and Python script for downloading URLs feel a bit heavyweight. The Docker image is definitely overkill as things stand. I can eventually justify it because I want to implement support for fetching and repackaging VCS repositories and for caching Debian packages. These will require more packages than I'm comfortable installing on the base Debian image, justifying a dedicated image.

The `fetch-content static-url` sub-command could definitely be implemented as a shell script. But Python is readily available and is more pleasant to maintain than shell, so I wrote it in Python. `fetch-content task-artifacts` is more advanced, and writing it in Python is more justified, IMO. FWIW, the script is Python 3 only, which conveniently gives us access to `concurrent.futures`, which facilitates concurrent downloads.

`fetch-content` also duplicates functionality found elsewhere. generic-worker's task payload supports a "mounts" feature which facilitates downloading remote content, including from a task artifact. However, this feature doesn't exist on docker-worker, so we have to implement downloading inside the task rather than at the worker level. I concede that if all workers had generic-worker's "mounts" feature and supported concurrent download, `fetch-content` wouldn't need to exist.

`fetch-content` also duplicates functionality of `mach artifact toolchain`. I probably could have used `mach artifact toolchain` instead of writing `fetch-content task-artifacts`, but I didn't want to introduce the requirement of a VCS checkout. `mach artifact toolchain` has its origins in providing a feature to the build system, and "fetching artifacts from tasks" is a more generic feature than that. I think it should be implemented as a generic feature, not as something "toolchain" specific.

I think the best place for a generic "fetch content" feature is in the worker, where content can be defined in the task payload. But as explained above, that feature isn't universally available. The next best place is probably run-task, which already performs generic, very-early task preparation steps such as performing a VCS checkout. I would like to fold `fetch-content` into run-task and make it all driven by environment variables. But run-task is currently Python 2, and achieving concurrency would involve a bit of programming (or adding package dependencies). I may very well port run-task to Python 3 and then fold fetch-content into it. Or maybe we leave `fetch-content` as a standalone script.

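The MOZ_FETCHES flow described above can be sketched in a few lines. This is an illustrative sketch, not the in-tree `fetch-content` implementation: the exact JSON shape of MOZ_FETCHES and the queue URL used here are assumptions.

```python
import json
import os
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Assumed artifact endpoint; the real queue root may differ.
QUEUE = "https://queue.taskcluster.net/v1"

def artifact_urls(moz_fetches):
    """Turn a MOZ_FETCHES payload (assumed: a JSON list of
    {"task": ..., "artifact": ...} entries) into download URLs."""
    entries = json.loads(moz_fetches)
    return ["%s/task/%s/artifacts/%s" % (QUEUE, e["task"], e["artifact"])
            for e in entries]

def fetch_all(moz_fetches, dest="."):
    """Download every listed artifact concurrently, in the spirit of
    `fetch-content task-artifacts`."""
    def fetch(url):
        name = url.rsplit("/", 1)[-1]
        with urlopen(url) as r, open(os.path.join(dest, name), "wb") as f:
            f.write(r.read())
        return name
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(fetch, artifact_urls(moz_fetches)))
```

`ThreadPoolExecutor` is the `concurrent.futures` facility the message credits for concurrent downloads; a task would call `fetch_all(os.environ["MOZ_FETCHES"])` early in its run.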
    fetches:
        fetch:
            - binutils-2.28.1
            - isl-0.15
            - gcc-6.4.0
            - gmp-5.1.3
            - mpc-0.8.2
            - mpfr-3.1.5

Bug 1430315 - Add a toolchain job to build llvm-dsymutil independently. r=rillian

We've had problems with crashes in llvm-dsymutil for a while, and while they are, in essence, due to the fact that rustc produces bad debug info, they are a hurdle to our builds. The tool comes along with clang, and updating clang is not necessarily easy (witness bug 1409265), so, so far, we've relied on backporting fixes, which can be time-consuming (witness bug 1410148).

OTOH, llvm-dsymutil is a rather specific tool that doesn't strictly need to be tied to clang. It's only tied to it because it uses the llvm code to do some of the things it does, and it's part of the llvm source tree. But it could just as well be a separate tool, like it was (is?) on OSX.

So, we add a toolchain job to build it from the llvm source, independently from clang, so that we can update it separately if we hit new crashes that happen to already be fixed on llvm trunk. It will also allow us to update more easily after upstream fixes crashes that we report.

linux64-llvm-dsymutil:
    description: "llvm-dsymutil toolchain build"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(dsymutil)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        max-run-time: 1800
    run:
        using: toolchain-script
        script: build-llvm-dsymutil.sh
        toolchain-artifact: public/build/llvm-dsymutil.tar.xz
    toolchains:
        - linux64-gcc-4.9

linux64-binutils:
    description: "Binutils toolchain build"
    treeherder:
        kind: build

Bug 1338061 - Move toolchain tasks to a separate "platform". r=dustin

The toolchain tasks are hard to spot on treeherder, in the ocean of build and test jobs associated with the platforms they are currently under. Now that we have a significant number of toolchain tasks across different platforms, it's even worse, especially combined with the fact that they don't happen on every push.

To make them more easily visible, we move them to a new, separate "platform", with the name "toolchains", instead of having them in different platforms. But since the distinction between Linux, OSX and Windows 32/64 is still interesting to have, we create groups for each of those platforms.

But because of bug 1215587, the jobs still end up associated with their previous group, defeating the new grouping, so to work around that bug, we also rename the jobs in subtle ways.

        platform: toolchains/opt
        symbol: TL(binutil)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        max-run-time: 3600
    run:
        using: toolchain-script
        script: build-binutils-linux.sh
        resources:

# Bug 1386588 - Avoid wildcards in toolchain resources lists. r=gps
#
# Those resources are used to compute a unique identifier for the
# toolchain, and changes to those files will change the unique identifier
# and lead to the toolchain being rebuilt.
#
# Using wildcards, especially in the build-clang directory, makes all the
# files from there count toward the unique identifier, even irrelevant
# files. The side effect is that any change to any json file for clang
# toolchains currently triggers *all* clang toolchains to be rebuilt,
# which is a waste of resources and time.
#
# But while it is tempting to list all the files involved, it is also
# tedious and error-prone. Specifically, listing the relevant patch files
# for clang toolchain builds is bound to end up outdated. OTOH, we're not
# trying to mitigate bad actors here, just to avoid shooting ourselves
# in the foot. And patch files are, in practice, not changed. The jsons
# are changed to reference them or not, but the patches themselves don't
# change in relevant ways. They may be updated for new versions of clang,
# which require a json change anyway. So we ignore the patch files.
            - 'build/unix/build-binutils/build-binutils.sh'
        toolchain-artifact: public/build/binutils.tar.xz

linux64-cctools-port:
    description: "cctools-port toolchain build"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(cctools)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        max-run-time: 1800
    run:
        using: toolchain-script
        script: build-cctools-port.sh
        resources:
            - 'taskcluster/scripts/misc/tooltool-download.sh'
        toolchain-artifact: public/build/cctools.tar.xz
    toolchains:
        - linux64-clang-6

linux64-hfsplus:
    description: "hfsplus toolchain build"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(hfs+)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        max-run-time: 1800
    run:
        using: toolchain-script
        script: build-hfsplus-linux.sh
        resources:
            - 'build/unix/build-hfsplus/build-hfsplus.sh'
            - 'taskcluster/scripts/misc/tooltool-download.sh'
        toolchain-artifact: public/build/hfsplus-tools.tar.xz
    toolchains:
        - linux64-clang-6

linux64-libdmg:
    description: "libdmg-hfsplus toolchain build"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(libdmg-hfs+)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        max-run-time: 1800
    run:
        using: toolchain-script
        script: build-libdmg-hfsplus.sh
        toolchain-artifact: public/build/dmg.tar.xz

linux64-android-sdk-linux-repack:
    description: "Android SDK (Linux) repack toolchain build"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(android-sdk-linux)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        docker-image: {in-tree: android-build}
        max-run-time: 1800
        artifacts:
            - name: project/gecko/android-sdk
              path: /builds/worker/project/gecko/android-sdk/
              type: directory
    run:
        using: toolchain-script
        script: repack-android-sdk-linux.sh
        resources:
            - 'python/mozboot/**/*android*'
        toolchain-artifact: project/gecko/android-sdk/android-sdk-linux.tar.xz
        toolchain-alias: android-sdk-linux

linux64-android-ndk-linux-repack:
    description: "Android NDK (Linux) repack toolchain build"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(android-ndk-linux)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        docker-image: {in-tree: android-build}
        max-run-time: 1800
        artifacts:
            - name: project/gecko/android-ndk
              path: /builds/worker/project/gecko/android-ndk/
              type: directory
    run:
        using: toolchain-script
        script: repack-android-ndk-linux.sh
        resources:
            - 'python/mozboot/**/*android*'
        toolchain-artifact: project/gecko/android-ndk/android-ndk.tar.xz
        toolchain-alias: android-ndk-linux

linux64-android-gradle-dependencies:
    description: "Android Gradle dependencies toolchain task"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(gradle-dependencies)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        docker-image: {in-tree: android-build}
        env:
            GRADLE_USER_HOME: "/builds/worker/workspace/build/src/mobile/android/gradle/dotgradle-online"
        max-run-time: 1800
    run:
        using: toolchain-script
        script: android-gradle-dependencies.sh
        sparse-profile: null
        resources:
            - 'taskcluster/scripts/misc/tooltool-download.sh'
            - 'taskcluster/scripts/misc/android-gradle-dependencies/**'
            - '*.gradle'
            - 'mobile/android/**/*.gradle'
            - 'mobile/android/config/mozconfigs/android-api-16-gradle-dependencies/**'
            - 'mobile/android/config/mozconfigs/common*'
            - 'mobile/android/gradle.configure'
        toolchain-artifact: public/build/android-gradle-dependencies.tar.xz
        toolchain-alias: android-gradle-dependencies
    toolchains:
        # Aliases aren't allowed for toolchains depending on toolchains.
        - linux64-android-sdk-linux-repack

linux64-rust-1.27:
    description: "rust repack"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(rust-1.27)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        max-run-time: 7200
        env:
            UPLOAD_DIR: artifacts
    run:
        using: toolchain-script
        script: repack_rust.py
        arguments: [
            '--channel', '1.27.0',
            '--host', 'x86_64-unknown-linux-gnu',
            '--target', 'x86_64-unknown-linux-gnu',
            '--target', 'i686-unknown-linux-gnu',
        ]
        toolchain-artifact: public/build/rustc.tar.xz

linux64-rust-1.28:
    description: "rust repack"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(rust)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        max-run-time: 7200
        env:
            UPLOAD_DIR: artifacts
    run:
        using: toolchain-script
        script: repack_rust.py
        arguments: [
            # 1.28.0-beta.6
            '--channel', 'beta-2018-06-30',
            '--host', 'x86_64-unknown-linux-gnu',
            '--target', 'x86_64-unknown-linux-gnu',
            '--target', 'i686-unknown-linux-gnu',
        ]
        toolchain-alias: linux64-rust
        toolchain-artifact: public/build/rustc.tar.xz

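The rust repack stanzas all invoke `repack_rust.py` with one `--channel`, one `--host`, and a repeatable `--target` flag. A minimal sketch of that command-line interface follows; the flag names come from the stanzas themselves, but the parser is an illustrative assumption, not the real script.

```python
import argparse

def parse_repack_args(argv):
    # Flag names mirror the repack_rust.py invocations in this file;
    # this parser is a sketch of the interface, not the real script.
    parser = argparse.ArgumentParser(prog="repack_rust.py")
    parser.add_argument("--channel", required=True,
                        help="Rust channel, e.g. 1.27.0 or beta-2018-06-30")
    parser.add_argument("--host", required=True,
                        help="triple of the machine the toolchain runs on")
    parser.add_argument("--target", action="append", required=True,
                        help="triple to compile for; may be repeated")
    return parser.parse_args(argv)

# Example mirroring the linux64-rust-1.28 stanza's arguments.
args = parse_repack_args([
    "--channel", "beta-2018-06-30",
    "--host", "x86_64-unknown-linux-gnu",
    "--target", "x86_64-unknown-linux-gnu",
    "--target", "i686-unknown-linux-gnu",
])
```

`action="append"` is what lets a single flag be repeated, collecting all `--target` values into one list, matching how the YAML `arguments` arrays pass multiple targets.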
linux64-rust-nightly:
    description: "rust nightly repack"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(rust-nightly)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        max-run-time: 7200
        env:
            UPLOAD_DIR: artifacts
    run:
        using: toolchain-script
        script: repack_rust.py
        arguments: [
            '--channel', 'nightly-2018-07-18',
            '--host', 'x86_64-unknown-linux-gnu',
            '--target', 'x86_64-unknown-linux-gnu',
            '--target', 'i686-unknown-linux-gnu',
        ]
        toolchain-artifact: public/build/rustc.tar.xz

linux64-rust-macos-1.28:
    description: "rust repack with macos-cross support"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(rust-macos)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        max-run-time: 7200
        env:
            UPLOAD_DIR: artifacts
    run:
        using: toolchain-script
        script: repack_rust.py
        arguments: [
            # 1.28.0-beta.6
            '--channel', 'beta-2018-06-30',
            '--host', 'x86_64-unknown-linux-gnu',
            '--target', 'x86_64-unknown-linux-gnu',
            '--target', 'x86_64-apple-darwin',
        ]
        toolchain-alias: linux64-rust-macos
        toolchain-artifact: public/build/rustc.tar.xz

linux64-rust-android-1.28:
    description: "rust repack with android-cross support"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(rust-android)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        max-run-time: 7200
        env:
            UPLOAD_DIR: artifacts
    run:
        using: toolchain-script
        script: repack_rust.py
        arguments: [
            # 1.28.0-beta.6
            '--channel', 'beta-2018-06-30',
            '--host', 'x86_64-unknown-linux-gnu',
            '--target', 'x86_64-unknown-linux-gnu',
            '--target', 'armv7-linux-androideabi',
            '--target', 'aarch64-linux-android',
            '--target', 'i686-linux-android',
        ]
        toolchain-alias: linux64-rust-android
        toolchain-artifact: public/build/rustc.tar.xz

linux64-sccache:
    description: "sccache toolchain build"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(sccache)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        max-run-time: 1800
    run:
        using: toolchain-script
        script: build-sccache.sh
        resources:
            - 'taskcluster/scripts/misc/tooltool-download.sh'
        toolchain-artifact: public/build/sccache2.tar.xz
    toolchains:
        - linux64-rust-1.28

linux64-rust-size:
    description: "rust-size toolchain build"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(rust-size)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        max-run-time: 1800
    run:
        using: toolchain-script
        script: build-rust-size.sh
        resources:
            - 'taskcluster/scripts/misc/tooltool-download.sh'
        toolchain-artifact: public/build/rust-size.tar.xz
    toolchains:
        - linux64-rust-1.28

linux64-gn:
    description: "gn toolchain build"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(gn)
        tier: 1
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        max-run-time: 1800
    run:
        using: toolchain-script
        script: build-gn-linux.sh
        tooltool-downloads: public
        resources:
            - 'taskcluster/scripts/misc/tooltool-download.sh'
            - 'taskcluster/scripts/misc/build-gn-common.sh'
        toolchain-artifact: public/build/gn.tar.xz
    run-on-projects:
        - trunk
        - try
    toolchains:
        - linux64-gcc-4.9

linux64-tup:
    description: "tup toolchain build"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TL(tup)
        tier: 2
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        max-run-time: 3600
    run:
        using: toolchain-script
        script: build-tup-linux.sh
        resources:
            - 'taskcluster/scripts/misc/tooltool-download.sh'
        toolchain-artifact: public/build/tup.tar.xz
    run-on-projects:
        - trunk
        - try
    toolchains:
        - linux64-gcc-4.9

linux64-upx:
    description: "UPX build for MinGW32 Cross Compile"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TMW(upx)
        tier: 2
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        docker-image: {in-tree: mingw32-build}
        max-run-time: 3600
    run:
        using: toolchain-script
        script: build-upx.sh
        toolchain-artifact: public/build/upx.tar.xz

linux64-wine:
    description: "Wine build for MinGW32 Cross Compile"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TMW(wine)
        tier: 2
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        docker-image: {in-tree: mingw32-build}
        max-run-time: 10800
    run:
        using: toolchain-script
        script: build-wine.sh
        toolchain-artifact: public/build/wine.tar.xz

linux64-mingw32-gcc:
    description: "GCC toolchain build for MinGW32 Cross Compile"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TMW(mingw32-gcc)
        tier: 2
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        docker-image: {in-tree: mingw32-build}
        max-run-time: 10800
    run:
        using: toolchain-script
        script: build-gcc-mingw32.sh
        resources:
            - 'build/unix/build-gcc/build-gcc.sh'
        toolchain-artifact: public/build/mingw32.tar.xz
    fetches:
        fetch:
            - binutils-2.27
            - gcc-6.4.0
            - gmp-5.1.3
            - isl-0.15
            - mpc-0.8.2
            - mpfr-3.1.5
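    # The names listed under `fetches: fetch:` refer to tasks in the separate
    # "fetch" kind, which mirrors each upstream source archive as a cached task
    # artifact so this build does not hit third-party servers at run time.
    # As a rough, hedged illustration only -- the exact schema, URL, and
    # checksum below are placeholders, not taken from this file -- a matching
    # fetch definition looks something like:
    #
    #   gmp-5.1.3:
    #       description: gmp source code
    #       fetch:
    #           type: static-url
    #           url: https://example.invalid/gmp-5.1.3.tar.bz2
    #           sha256: ...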

linux64-mingw32-nsis:
    description: "NSIS build for MinGW32 Cross Compile"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TMW(mingw32-nsis)
        tier: 2
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        docker-image: {in-tree: mingw32-build}
        max-run-time: 3600
    run:
        using: toolchain-script
        script: build-mingw32-nsis.sh
        resources:
            - 'build/unix/build-gcc/build-gcc.sh'
            - 'taskcluster/scripts/misc/build-gcc-mingw32.sh'
        toolchain-artifact: public/build/nsis.tar.xz
        toolchains:
            - linux64-mingw32-gcc

linux64-mingw32-fxc2:
    description: "fxc2.exe build for MinGW32 Cross Compile"
    treeherder:
        kind: build
        platform: toolchains/opt
        symbol: TMW(mingw32-fxc2)
        tier: 2
    worker-type: aws-provisioner-v1/gecko-{level}-b-linux
    worker:
        docker-image: {in-tree: mingw32-build}
        max-run-time: 1800
    run:
        using: toolchain-script
        script: build-mingw32-fxc2.sh
        resources:
            - 'build/unix/build-gcc/build-gcc.sh'
            - 'taskcluster/scripts/misc/build-gcc-mingw32.sh'
        toolchain-artifact: public/build/fxc2.tar.xz
        toolchains:
            - linux64-mingw32-gcc