The extension needs to be globally enabled to support creating sparse
checkouts and accessing repos that use sparse checkouts.
Having the extension enabled should be a no-op unless sparse checkouts
are being used, i.e. it should be harmless to enable globally.
MozReview-Commit-ID: AKNUOXfYQPx
--HG--
extra : rebase_source : d32b8a89c56c39923d7b0cd61583c2828a29a872
Bug 1382280 tripled the size of the desktop-build image because of the
installation of debug symbols. The debug symbols are only used for
valgrind, so let's move the valgrind task to its own image.
MozReview-Commit-ID: 16St7dDj8tr
--HG--
rename : taskcluster/docker/desktop-build/Dockerfile => taskcluster/docker/valgrind-build/Dockerfile
extra : rebase_source : cc66813cab430d906643fbadf63c661e14784f6f
Today, cache names are mostly static and are brittle as a result.
In theory, when a backwards incompatible change is performed on
something that touches a cache, the cache name needs to be changed
to ensure tasks running the old code don't see cached data from the
new task. (Alternatively, all code is forward compatible, but that is
hard to implement in practice.)
For many things, the process works as planned. However, not everyone
knows that cache names need to be changed. And it isn't always obvious
that some things require fresh caches. When mistakes are made, tasks
break intermittently due to cache wonkiness.
One area where we get into trouble is with UID and GID mismatch.
Task A will use a Docker image where our standard "worker" user/group
is UID/GID 1000:1000. Then Task B will use UID/GID 500:500. (This is
common when mixing Debian- and Red Hat-based distros.) If they use the
same cache, then Task B needs to chown/chmod all files in the cache
or there could be a permissions problem. This is exactly why
run-task recursively chowns certain paths before dropping root
privileges.
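For illustration, a minimal sketch of that chown-then-drop-privileges
pattern (the names and structure are illustrative, not run-task's
actual code):

    import os

    def chown_recursive(path, uid, gid):
        # Give every file and directory under path to the task user.
        os.chown(path, uid, gid)
        for root, dirs, files in os.walk(path):
            for name in dirs + files:
                os.chown(os.path.join(root, name), uid, gid)

    def drop_privileges(uid, gid):
        # Order matters: set the group(s) first, then the uid, otherwise
        # we lose the permission needed to change the group.
        os.setgroups([gid])
        os.setgid(gid)
        os.setuid(uid)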
Permissions setting in run-task solves permissions problems. But
it doesn't solve content incompatibility problems. For that, you
need to change cache names, not use caches, or blow away content
when incompatibilities are detected.
This commit starts the process of adding a little bit more coherence
to our caching story.
There are two main features in this commit:
1) Cache names tied to run-task content
2) Cache validation in run-task
Taskgraph now detects when a task is using caches with run-task. When
caches and run-task are both being used, the cache name is adjusted to
contain a hash of run-task's content. When run-task changes, the cache
name changes. So, changing run-task ensures that all caches from that point
forward are "clean." This frees run-task and any functionality related
to run-task (such as maintaining version control checkouts) from
having to maintain backwards or forwards compatibility with any other
version of run-task. This does mean that any changes to run-task
effectively wipe out caches. But changes to run-task tend to be
infrequent, so this should be acceptable.
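As a rough sketch of that naming scheme (the helper names and the cache
name are hypothetical; the real logic lives in the taskgraph
transforms):

    import hashlib

    def run_task_digest(path='taskcluster/scripts/run-task'):
        # Hash the current content of run-task.
        with open(path, 'rb') as fh:
            return hashlib.sha256(fh.read()).hexdigest()

    def cache_name_with_hash(base_name):
        # e.g. 'level-3-checkouts' -> 'level-3-checkouts-<hash prefix>'
        return '%s-%s' % (base_name, run_task_digest()[:12])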
The second part of this change is code in run-task to record per-cache
properties and validate whether a populated cache is appropriate for
use. To enable this, taskgraph passes a list of cache paths via an
environment variable. For each cache path, run-task looks for a
well-defined file containing a list of "requirements." Right now,
that list is simply a version string. But other features will be
worked into it. If the cache is empty, we simply write out a new
requirements file and are done. If the file exists, we compare
requirements and fail fast if there is a mismatch. If the cache
has content but not this special file, then we abort (because this
should never happen).
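A simplified sketch of that validation flow (the file name, argument
handling, and error messages are assumptions for illustration):

    import os
    import sys

    REQUIREMENTS_FILE = '.cacherequires'

    def validate_cache(cache_path, requirements):
        req_path = os.path.join(cache_path, REQUIREMENTS_FILE)
        if not os.listdir(cache_path):
            # Empty cache: record our requirements and carry on.
            with open(req_path, 'w') as fh:
                fh.write('\n'.join(sorted(requirements)))
            return
        if not os.path.exists(req_path):
            # A populated cache without the file should never happen.
            sys.exit('cache %s has content but no requirements file' % cache_path)
        with open(req_path) as fh:
            existing = set(fh.read().splitlines())
        if existing != set(requirements):
            # Fail fast so the mismatch is obvious instead of causing
            # intermittent breakage later.
            sys.exit('cache requirements mismatch in %s' % cache_path)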
The "requirements" validation isn't very useful now because the only
entry comes from run-task's source code and modifying run-task will
change the hash and cause a new cache to be used. The implementation
at this point is more demonstrating the concept than doing anything
terribly useful with it.
MozReview-Commit-ID: HtpXIc7OD1k
--HG--
extra : rebase_source : 2424696b1fde59f20152617a6ebb2afe14b94678
I must have been in a closure mood when I wrote this code. The
main function is getting a bit heavyweight. So let's extract
these closures to make things less dense.
MozReview-Commit-ID: 4p5yKB1tTxn
--HG--
extra : rebase_source : 3c3e0b352da6290043013aa36c783e21e01460ba
extra : source : 053f0b0b48635c6a87aefe15ad73f361f0f64f79
This is pretty straightforward: we just update some version numbers
and hashes.
The tooltool artifacts were produced and uploaded by me, just like
the last ones. I have some patches to establish a proper toolchain
task to build Mercurial. But it is a bit of a rabbit hole due to the
chicken-and-egg problem of Mercurial needing to be in Docker images.
Preserving the existing install mechanism is the simplest path
forward. Plus, we need this patch so it can be uplifted and earlier
releases get a secure Mercurial in their Docker images.
color and pager are enabled by default in Mercurial 4.2. So remove configuration
options for them that add no value.
MozReview-Commit-ID: 9pkHX044kV8
--HG--
extra : rebase_source : 4b66f05787bc1b46e1e4db2a47439f3d046becf5
The comment removed by this commit invited the potential for badness.
Mercurial 4.3 drops support for Python 2.6 anyway. So let's remove
any indication we support running Mercurial with Python 2.6.
MozReview-Commit-ID: 40K10s95FLg
--HG--
extra : rebase_source : 52251ff6d1e4877b1cd5dcbf4eb75c875cffa452
AFAICT there are no more in-tree references to this image. That
should mean we can nuke it. So do that.
MozReview-Commit-ID: 9LUGjt46ZCi
--HG--
extra : rebase_source : caa9e8f3e355710542794efb7f6f92c2ef43ef0a
The old process ran "before" and "after" steps as root. The
mozharness script doesn't run as root, which required some small
changes to not run Sonatype Nexus as root. Everything else is a
straightforward move of the scripts out of the `android-gradle-build`
image and into `taskcluster/scripts`.
MozReview-Commit-ID: CqnNI33OKmb
--HG--
rename : taskcluster/docker/android-gradle-build/bin/after.sh => taskcluster/scripts/builder/build-android-dependencies/after.sh
rename : taskcluster/docker/android-gradle-build/bin/before.sh => taskcluster/scripts/builder/build-android-dependencies/before.sh
rename : taskcluster/docker/android-gradle-build/bin/repackage-jdk-centos.sh => taskcluster/scripts/builder/build-android-dependencies/repackage-jdk-centos.sh
extra : rebase_source : f94e6b9b780f96038c60d3825039a0f94add0404
We really want the Android build image to inherit from desktop-build,
but that isn't possible with the current `docker-image: in-tree:`
support. Therefore, way back in the mists of time, I cargo-culted
android-gradle-build from desktop-build. This moves it back (mostly)
in line with desktop-build, which has advanced.
MozReview-Commit-ID: 6GmuxHjhAbv
--HG--
extra : rebase_source : 265937bc9ba3bc4c18756b6c675100a62929bafe
Since the buildbot-based Windows builds using releng.manifest are busted
anyway, there is no reason to keep clang entries in there. That makes
those manifests identical to clang.manifest, so remove the latter.
--HG--
extra : rebase_source : eef7eca4bafc4e348eadc04d6da2bd17ea20deea
The valgrind test will try to load debug information for the modules
present in a stack trace. If it fails to do so, we end up with a stack
trace containing only memory addresses.
We install debuginfo for all installed packages, look for all libs in
the common system locations, and try to install the corresponding
debug information packages.
This is accomplished with the debuginfo-install yum utility script.
MozReview-Commit-ID: 76mHOUKKJud
Bug 1338651 was backed out because when building a newer image, there
was a valgrind leak report that couldn't resolve symbols. Further
investigation showed the valgrind package installed had symbols stripped.
We upgrade the valgrind version and build it from source with symbols.
We had to build inside the docker image because we need to run
"make install". Using "make dist" to generate a tarball will also run
"make docs", and it is hard to make that work because of the outdated
texlive package present in CentOS 6.
We also apply a patch [1] so that valgrind correctly generates symbols
for unloaded objects.
[1] https://bugs.kde.org/show_bug.cgi?id=79362#c62
MozReview-Commit-ID: 2IhuJY28Ke3
I took the time to change jcentral (which is just wrong) to jcenter,
which is the tag used in the nexus.xml.
Order matters! Gradle resolves dependencies in the order given. That
is, jcenter is preferred to google.
MozReview-Commit-ID: CcWBukhiHa4
--HG--
extra : rebase_source : 73a3b3f013d9154ff3f5732593ba9fbe2b75d1f0
Before this patch, we used the Gradle sdk-manager-plugin to download
and install Android SDKs and other dependencies. This plugin is now
deprecated; the main dependency downloading functionality has been
incorporated into the Android-Gradle build plugin. Unfortunately,
it's been incorporated into newer versions that in turn require newer
toolchains than we currently support, so we can't use the new
functionality immediately.
Rather than replace sdk-manager-plugin with equivalent Gradle-based
functionality, this ticket uses recently added bootstrap functionality
to bootstrap the Android SDK during the dependencies task. It then
_uses_ that SDK to run the dependency fetching task, _produces_ an
android-sdk-linux.tar.xz, and then _uploads_ the new artifact as a
private artifact, ready to be pushed into tooltool. This avoids
engineers building this critical part of the toolchain locally
themselves, and will also feed into ongoing work to push toolchain
artifacts into build jobs in TaskCluster.
MozReview-Commit-ID: B6FC0ugaCef
--HG--
extra : rebase_source : 782719438a464b8021db58be398be9d5afb3b543
The manifest is only used for windows clang-cl toolchain jobs, and
building clang-cl doesn't use make or rustc.
--HG--
extra : rebase_source : 2209098306461cac9c2145d8d9a0f2ea096b1f08
The nexus.xml included in this patch is the result of starting Nexus
and manually adding the jcenter proxy repository using the Nexus web
administration interface (all in a Docker container). I know of no
way to do this configuration incrementally without the web interface.
The diff between the new and the default generated configuration is a
single new <repository>..</repository> element.
MozReview-Commit-ID: 2Bg5qX41pHB
--HG--
extra : rebase_source : c945acabcedd98439a0ca0e26251bab1a41de197
extra : source : 9b794a7fc266da1ae81afd795f91e72d04bbc992
Add a new tooltool package for x86_64-unknown-linux-gnu hosts
with the i686-pc-windows-msvc and i686-pc-windows-gnu standard
libraries for the benefit of the cross-mingw builds.
Add the mingw32 releng.manifest to the update list for
new tooltool packages.
MozReview-Commit-ID: KkYPfAojFU
--HG--
extra : rebase_source : 917f463517c5c222e883363438e1fa2ec0ffa6cf
Using /home/worker as the build directory causes a 30% Talos performance
loss, because test machines have a /home mount directory.
MozReview-Commit-ID: zehcGJrUQX
--HG--
extra : source : feedcde68c2a54da210f03eb287ab5c862fc982b
extra : intermediate-source : 485d1af7805ad9fa0e701c3571fc1291fbfc6850
Create a test for version control related functionality.
MozReview-Commit-ID: GXd27O69GNg
--HG--
extra : rebase_source : 56ce4a38b591fd62f05fbaed0ff05d56ec127422
This is needed before we can upgrade to flake8 3.3.0, as that version starts flagging these errors.
These files were modified by running:
autopep8 --select E305 --in-place -r <dir>
on the affected directories. I did it one dir at a time and verified the result after each.
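For reference, E305 ("expected 2 blank lines after class or function
definition") flags code like this contrived example, and autopep8
simply inserts the missing blank lines:

    def helper():
        return 42
    value = helper()   # E305: expected 2 blank lines after function definition

    # After autopep8 --select E305:

    def helper():
        return 42


    value = helper()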
MozReview-Commit-ID: FmlsfiKIbtr
--HG--
extra : rebase_source : 9df32258cadff5d27a0e72113c57f782756c0b18
This is no longer necessary with the 1.18.0 release.
MozReview-Commit-ID: 1IGQFuvRIzu
--HG--
extra : rebase_source : 4eb4daea4edfed48db388814240f9241021a2029
It was previously renamed in the Dockerfile, and that's unnecessarily confusing
when looking for the file in hg.
MozReview-Commit-ID: 7bwD4cjk4Pj
--HG--
rename : taskcluster/scripts/tester/test-ubuntu.sh => taskcluster/scripts/tester/test-linux.sh
extra : rebase_source : f22cd0f69c21e92126cc90ea3a4355e5c3db4205
For some reason, the locales package is not installed anymore during the
docker image build, which leads to the locale-gen command failing, since
it's not there.
--HG--
extra : rebase_source : 0a152499c623a00d27d8b916c472e5d5980d8193
Mercurial uses the latest version of TLS that is supported by both
Python and the server.
In automation, the servers we care about should all support TLS 1.2.
The Python side is trickier. Modern versions of Python (typically 2.7.9+)
support TLS 1.1 and 1.2. Mercurial will default to allowing TLS 1.1+ -
explicitly disallowing TLS 1.0. However, legacy versions of Python
don't support TLS 1.1+, so Mercurial will allow TLS 1.0+ rather than
refuse to connect at all.
TLS 1.0 is borderline secure these days. I think it is a bug for TLS
1.0 to be used anywhere in the Firefox release process. This simple
patch changes our default Mercurial config in TaskCluster to require
TLS 1.2+ for all https:// communications. For modern Python versions,
this effectively prevents potential downgrade attacks to TLS 1.1
(these connections should already have been negotiating TLS 1.2).
I expect this change to break things. Finding and fixing automation
that isn't capable of speaking TLS 1.1+ should be encouraged.
MozReview-Commit-ID: 876YpL5vB3T
--HG--
extra : rebase_source : 69c33c195f736a98b67d771e7364b6db28900ff4
This is a pretty straightforward change. Just bumping package versions
and hashes. Behavior should be almost identical to the previous 4.1.1+
packages.
MozReview-Commit-ID: CaVjM0JHYKi
--HG--
extra : rebase_source : dcd0ee2661fd088daf3b5c6709c4c6f2f95bd410
In short, we shouldn't call err.stack(); it's a property, not a method.
MozReview-Commit-ID: 2HpPgsdctTv
--HG--
extra : rebase_source : 1769c125b4d720991c810f5c9460b2161ecbc8a8