For the same reason, test-array.c and test-ctors.c need to be built
explicitly without LTO.
--HG--
extra : rebase_source : d037ef7cf1dd2d278c2918dbfee5b4f4c213e408
When linking with ld.bfd or gold, this changes the PT_LOAD in which the
elfhack code section ends up, making it go in the same one as .init, .text,
etc. rather than .rel.*. When linking with lld, this completely
avoids doing a PT_LOAD split, because lld already splits .rel.* and
.text.
--HG--
extra : rebase_source : 1f69b8f4b48b055892ea24eaa6226859cc4ffd50
We treat overlapping segments as a fatal error rather than a condition
to do nothing, because when it happens, it is usually the result of bad
assumptions about the input ELF, and we don't want to silently ignore
those.
However, there are cases where a setup /could/ lead to overlapping
segments but would be skipped because elfhack wouldn't be a win anyways.
By checking for overlapping segments later, we allow those cases to not
hard fail.
--HG--
extra : rebase_source : deca2051722aeaa959c5e4dae06642908f6d843a
The current check makes assumptions about which PT_LOAD segment the
injected sections end up in, and won't work with upcoming changes.
--HG--
extra : rebase_source : b7cfb65ea13c16f977fe523aaf9f39eafeb2cdce
This was cargo-culted from the asan/tsan mozconfigs, but is not necessary
for builds without sanitizers.
--HG--
extra : rebase_source : 41bad4761f424410cb7a099ecaecce8a86becf59
When a binary has a PT_GNU_RELRO segment, the elfhack injected code
uses mprotect to add the writable flag to relocated pages before
applying relocations, removing it afterwards. To do so, the elfhack
program uses the location and size of the PT_GNU_RELRO segment, and
adjusts it to be aligned according to the PT_LOAD alignment.
The problem here is that the PT_LOAD alignment doesn't necessarily match
the actual page alignment, and the resulting mprotect may end up not
covering the full extent of what the dynamic linker has protected
read-only according to the PT_GNU_RELRO segment. In turn, this can lead
to a crash on startup when trying to apply relocations to the still
read-only locations.
Practically speaking, this doesn't end up being a problem on x86, where
the PT_LOAD alignment is usually 4096, which happens to be the page
size, but on Debian armhf, it is 64k, while the runtime page size can
be 4k.
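One way to avoid the mismatch is to align the range to the runtime page
size rather than the PT_LOAD alignment. A minimal sketch of that
computation (hypothetical helper, not the actual injected code):

#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

/* Expand [addr, addr + len) to runtime page boundaries before toggling
 * the writable flag on the PT_GNU_RELRO range. */
static int relro_mprotect(uintptr_t addr, size_t len, int prot)
{
    uintptr_t page = (uintptr_t)sysconf(_SC_PAGESIZE);
    uintptr_t start = addr & ~(page - 1);
    uintptr_t end = (addr + len + page - 1) & ~(page - 1);
    return mprotect((void *)start, (size_t)(end - start), prot);
}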
--HG--
extra : rebase_source : 5ac7356f685d87c1628727e6c84f7615409c57a5
Currently, many tasks fetch content from the Internets. A problem with
that is that fetching from the Internets is unreliable: servers may have
outages or be slow; content may disappear or change out from under us.
The unreliability of 3rd party services poses a risk to Firefox CI.
If services aren't available, we could potentially not run some CI tasks.
In the worst case, we might not be able to release Firefox. That would
be bad. In fact, as I write this, gmplib.org has been unavailable for
~24 hours and Firefox CI is unable to retrieve the GMP source code.
As a result, building GCC toolchains is failing.
A solution to this is to make tasks more hermetic by depending on
fewer network services (which by definition aren't reliable over time
and therefore introduce instability).
This commit attempts to mitigate some external service dependencies
by introducing the *fetch* task kind.
The primary goal of the *fetch* kind is to obtain remote content and
re-expose it as a task artifact. By making external content available
as a cached task artifact, we allow dependent tasks to consume this
content without touching the service originally providing that
content, thus eliminating a run-time dependency and making tasks more
hermetic and reproducible over time.
We introduce a single "fetch-url" "using" flavor to define tasks that
fetch single URLs and then re-expose that URL as an artifact. Powering
this is a new, minimal "fetch" Docker image that contains a
"fetch-content" Python script that does the work for us.
We have added tasks to fetch source archives used to build the GCC
toolchains.
Fetching remote content and re-exposing it as an artifact is not
very useful by itself: the value is in having tasks use those
artifacts.
We introduce a taskgraph transform that allows tasks to define an
array of "fetches." Each entry corresponds to the name of a "fetch"
task kind. When present, the corresponding "fetch" task is added as a
dependency, and the task ID and artifact path from that "fetch" task
are added to the MOZ_FETCHES environment variable of the task depending
on it. Our "fetch-content" script has a "task-artifacts"
sub-command that tasks can execute to perform retrieval of all
artifacts listed in MOZ_FETCHES.
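For example, a consuming task could run something along these lines
(sketch; the MOZ_FETCHES value itself is populated by the transform):

$ export MOZ_FETCHES=...  # set by the taskgraph transform
$ fetch-content task-artifacts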
To prove all of this works, the code for fetching dependencies when
building GCC toolchains has been updated to use `fetch-content`. The
now-unused legacy code has been deleted.
This commit improves the reliability and efficiency of GCC toolchain
tasks. Dependencies now all come from task artifacts and should always
be available in the common case. In addition, `fetch-content` downloads
and extracts files concurrently. This makes it faster than the serial
approach we were previously using.
There are some things I don't like about this commit.
First, a new Docker image and Python script for downloading URLs feels
a bit heavyweight. The Docker image is definitely overkill as things
stand. I can eventually justify it because I want to implement support
for fetching and repackaging VCS repositories and for caching Debian
packages. These will require more packages than what I'm comfortable
installing on the base Debian image, therefore justifying a dedicated
image.
The `fetch-content static-url` sub-command could definitely be
implemented as a shell script. But Python is readily available and
is more pleasant to maintain than shell, so I wrote it in Python.
`fetch-content task-artifacts` is more advanced and writing it in
Python is more justified, IMO. FWIW, the script is Python 3 only,
which conveniently gives us access to `concurrent.futures`, which
facilitates concurrent download.
`fetch-content` also duplicates functionality found elsewhere.
generic-worker's task payload supports a "mounts" feature which
facilitates downloading remote content, including from a task
artifact. However, this feature doesn't exist on docker-worker.
So we have to implement downloading inside the task rather than
at the worker level. I concede that if all workers had generic-worker's
"mounts" feature and supported concurrent download, `fetch-content`
wouldn't need to exist.
`fetch-content` also duplicates functionality of
`mach artifact toolchain`. I probably could have used
`mach artifact toolchain` instead of writing
`fetch-content task-artifacts`. However, I didn't want to introduce
the requirement of a VCS checkout. `mach artifact toolchain` has its
origins in providing a feature to the build system. And "fetching
artifacts from tasks" is a more generic feature than that. I think
it should be implemented as a generic feature and not something that is
"toolchain" specific.
I think the best place for a generic "fetch content" feature is in
the worker, where content can be defined in the task payload. But as
explained above, that feature isn't universally available. The next
best place is probably run-task. run-task already performs generic,
very-early task preparation steps, such as performing a VCS checkout.
I would like to fold `fetch-content` into run-task and make it all
driven by environment variables. But run-task is currently Python 2
and achieving concurrency would involve a bit of programming (or
adding package dependencies). I may very well port run-task to Python
3 and then fold fetch-content into it. Or maybe we leave
`fetch-content` as a standalone script.
MozReview-Commit-ID: AGuTcwNcNJR
--HG--
extra : source : 0b941cbdca76fb2fbb98dc5bbc1a0237c69954d0
extra : histedit_source : a3e43bdd8a9a58550bef02fec3be832ca304ea93
After this change, we consistently import GPG keys from files in
the GCC build scripts.
MozReview-Commit-ID: BcyvCQoGbMS
--HG--
extra : source : 5fce34a460b51e45ac280a9f0cb8bad896fbcff1
extra : histedit_source : 01621ea8111315c251a9493a11efca72c2ba3c7d
This build target doesn't have LTO enabled on it (yet).
MozReview-Commit-ID: 56tAHMyvH7o
--HG--
extra : rebase_source : 90039cd8e97332e2ef8aad7908b8a04b2869f4a5
This relies on the fact that providing multiple --version-script
combines them all, so we effectively create a new symbol version
that has no global symbol, but hides the std::thread::_M_start_thread
symbols.
This version script trick happens to work with BFD ld, gold, and lld.
The downside is that when providing multiple --version-script's, ld
doesn't want any of them to leave symbols unversioned. So for the
libraries that already have a version script (through SYMBOLS_FILE), we
add a version where there used to be none, using the library name as the
version. Practically speaking, this binds the libraries a little closer
than they used to be, kind of like the non-flat namespace on OSX (which
is the default there), meaning the dynamic linker will actively want to
use symbols from those libraries instead of a system library that might
happen to have the same symbol name.
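As an illustration, the added version script boils down to something like
this (the version node name and file names here are made up):

hidden_glibcxx {
local:
    _ZNSt6thread15_M_start_thread*;
};

passed to the linker in addition to the existing script, e.g.:

$ ld ... --version-script libfoo-symbols.map --version-script hidden_glibcxx.map ...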
--HG--
extra : rebase_source : a7f672c35609d993849385ddb874ba791b34f929
Version 2.25.1's libiberty can choke on some symbols. That was fixed in
2.27. As of this writing, the latest version is 2.30. Conservatively go
with 2.28.1, which is the same major version (2.28) as what is currently
in Debian stable.
--HG--
extra : rebase_source : 9e5ba94421a1568f662cfd98af7168ea1c890934
This relies on the fact that providing multiple --version-script
combines them all, so we effectively create a new symbol version
that has no global symbol, but hides as much std::* stuff as possible.
The added symbol script could use `extern "C++"` syntax and demangled
symbols but there is no guarantee the demangled symbols won't change.
Plus, it's not possible to match demangled symbols that have a return
type: they contain a space, and the only way to match that is to use
double quotes, which doesn't allow globs at the same time.
This version script trick happens to work with BFD ld, gold, and lld.
The downside is that when providing multiple --version-script's, ld
doesn't want any of them to leave symbols unversioned. So for the
libraries that already have a version script (through SYMBOLS_FILE), we
add a version where there used to be none, using the library name as the
version. Practically speaking, this binds the libraries a little closer
than they used to be, kind of like the non-flat namespace on OSX (which
is the default there), meaning the dynamic linker will actively want to
use symbols from those libraries instead of a system library that might
happen to have the same symbol name.
--HG--
extra : rebase_source : 78adb64b90e75ebad203b8a647b305c9d7198d16
In the span of one week, both gmplib.org and multiprecision.org, the
respective homes of gmp and mpc, have gone down. The latter is still down.
It turns out that all versions of gmp and mpfr we need are mirrored on
ftp.gnu.org, so we can just use that instead. For mpc, versions > 1.0
are on ftp.gnu.org, but not earlier versions.
The one mpc version <= 1.0 we do need is 0.8.2, and a copy of the exact
same archive (as per its sha256, which the gcc build scripts already
check) can be found on snapshot.debian.org. We lose gpg validation on
the way, but since we're already checking the sha256, that's a fine
tradeoff.
At least this unblocks changes to toolchains until multiprecision.org
comes back online.
--HG--
extra : rebase_source : f2bc67f8757655d99bfd9735b80d7205f7bfe47b
And adapt the build-gcc.sh script for the changes to
contrib/download_prerequisites.
--HG--
rename : taskcluster/scripts/misc/build-gcc-6-linux.sh => taskcluster/scripts/misc/build-gcc-7-linux.sh
extra : rebase_source : b1d785777b8c141c0eb0f52a73734abd2db21b94
The Fedora project is migrating from pkgs.fedoraproject.org to
src.fedoraproject.org. We should use the newer URL, particularly because
the latter has a globally trusted TLS certificate.
MozReview-Commit-ID: 7TducICRR0k
--HG--
extra : rebase_source : 88e58d064d1699e853527019200d6fbd4989d6b3
They were added in bug 1427344 to reduce the differences between builds
on CentOS docker images and builds on Debian docker images. We're now
entirely on Debian docker images, so we can back this out.
--HG--
extra : rebase_source : 672e389cb8d6dbf7860a989024690de18e5c320e
As of bug 1430036 it was only set when building on CentOS, and as of bug
1432398, we don't have CentOS-based docker images anymore.
--HG--
extra : rebase_source : 5ade9bee773bca3283cfdb9d69209033fe82253f
By default, wget prints dots every 1k bytes. This can render a
lot of output for large files. We switch to the "mega" style, which
makes each dot represent 64k, thus reducing output by up to 64x.
We also force the use of dot display. By default, it uses "bar"
which attempts to use terminal formatting if possible. Since most
of this code executes in CI and terminal control characters can
interfere with logged output, we force the use of "dot." (Although
wget appears to automatically switch to dot in TC today. But
consistency is good.)
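For example, the resulting invocation looks something like:

$ wget --progress=dot:mega https://ftp.gnu.org/gnu/gmp/gmp-5.1.3.tar.bz2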
MozReview-Commit-ID: IpTWJdcauTV
--HG--
extra : rebase_source : 5c9aa1bbdcd78eaa0b31347ad026a2c1beaedc03
Switching to Debian build images will force a version bump from
pulseaudio 0.9.something to pulseaudio 2.0. Practically speaking, as
long as bug 1427150 is fixed (which it is), this is strictly better,
because this enables all the PA_CHECK_VERSION(2,0,0) code in libcubeb,
which handles port availability (whether output is plugged or not), and
with bug 1427150, it does so in a backward compatible manner.
Now, since this is a behavior change from what we're currently shipping,
this has the potential of triggering unexpected test failures, or break
sound for users. The likelihood of the latter happening is rather low,
though, because Linux distros have been building with pulseaudio >= 2.0
for a long time and we haven't heard about port availability breaking
sound for them. But it's still better to decouple this change from the
switch to Debian.
We abuse the build-gtk3.sh script which installs gtk3 in the CentOS
build image to install pulseaudio as well.
--HG--
extra : rebase_source : eb4e4033c50d59117b5199d1653d85f871503b2f
One of the last remaining differences when building Firefox on Debian
with all the changes we've done so far is that linking against the
system libc statically links some CRT objects. This causes massive
differences in the resulting binaries because of slight differences
in those objects (because they weren't compiled with the same compiler
and because they're not for the exact same glibc version).
In practice, their content differences don't cause any problem. If they
did, we wouldn't be able to run our builds on newer systems than those
we build them on. The only hypothetical risk would be to run on systems
with a glibc older than Debian 7's, but those already can't run Firefox
anyways (those systems don't have Gtk+3, which is a system requirement).
AFAICT, this is only a hypothetical problem anyways, as even such systems
with Gtk+3 should be able to run those builds. Plus, this is a change
that will happen anyways when switching to Debian-based build images,
since they would be using the CRT objects from there. We're merely
making it happen earlier so that the differences from switching to
Debian-based build images are more tractable.
Note we only do this when building GCC on Debian, allowing us to roll back
to CentOS-based toolchains by just switching back the toolchain jobs to
use the desktop-build docker image again.
When building on Debian (which we now are), this means we enable
.init_array/.fini_array.
When building on CentOS, this means no change. Which implies we could
roll back to CentOS-based toolchains by just switching back the
toolchain jobs to use the desktop-build docker image again.
This change causes massive differences in the resulting binaries because
of the offset differences, but practically speaking, there is no
difference. .init_array/.fini_array have been supported in glibc for 18
years.
Currently, the build can finish successfully even when one of the
programs fails to build and is not included in the final artifact. The
macosx build then fails because of that, which is the wrong place to
be failing.
--HG--
extra : rebase_source : 4a41b2f96eea45d3eefa2734900603b6e6ee0ea5
The system binutils and gcc are built with that option on Debian, but
not on CentOS. That makes no practical difference, except for the fact
that when building GCC, we use our own-built binutils (as per bug
1427316), but use the system GCC. And a GCC built with --with-sysroot=/
doesn't work with a binutils built without. However, a GCC built without
--with-sysroot=/ works fine with a binutils built with it. So this
change is compatible with building our GCC on both CentOS and Debian.
We're currently building GCC with the system binutils, which, at the
moment, is whatever version is available on the CentOS 6 build
environments. With the imminent switch to Debian 7, that will be a
different version.
It turns out the GCC configure script does enable some features
depending on the binutils it's built with. For the most notable
differences it makes when going from Centos 6 to Debian, it enables
.init_array/.fini_array depending on the binutils version, and enables
the use of CFI advances depending on gas and objdump respectively
supporting and displaying DW_CFA_advance_loc.
But we're already building a fixed version of binutils (which happens to
be more recent than the one in both CentOS 6 and Debian 7), and we're
using that version when using GCC to build, so we can just as much use
the version we built to build GCC.
In order to avoid any changes to the resulting builds, we explicitly
turn off .init_array/.fini_array (which currently happens implicitly
when building on CentOS 6). This will ensure that there is no other
change to the builds due to this binutils version bump
(.init_array/.fini_array being enabled shifts everything in the
binaries, so it makes the whole diff full of noise).
mk_add_options has this kind of awkward feature where
    mk_add_options VAR=value
would set VAR for the build through client.mk, but not when running
    make -C objdir target
whereas
    mk_add_options "export VAR=value"
would.
We might want to change that in the long run, but the side effects would
have to be calculated first.
OTOH, we have automation jobs that run compilations during `make check`
(e.g. rusttests), which is not invoked through client.mk. So they
currently don't get the same PATH as the build part, meaning that
they're using system binutils instead of the one from the GCC toolchain
package.
--HG--
extra : rebase_source : aab7f221243c486cf70c7b0c91b9313231050ed8
In bug 1426785, the Gtk+3 build was moved to docker image creation time,
and at the same time, the removal of .la files was dropped, because it
seemed too broad to do that in /usr/local/lib*, and because I didn't
really remember why it was there in the first place.
I now do remember, and that's because libtool likes to add useless
dependencies on libraries just because direct dependencies transitively
depend on them. For example, this adds a dependency on libffi to
libpangocairo, which doesn't use it directly.
It also happens that /usr/local/lib* is empty at the moment we build
gtk3, so we can just do the cleanup there.
On its own, this change is useless, but:
- it restores the libraries to their state pre-bug 1426785,
- it helps reduce some differences when building on Debian (for bug
1399679), easing the comparison of those builds.
--HG--
extra : rebase_source : 3fa644d8ae5eb4ccac5940c7d3605201f589936d
It now only does something trivial, which also happens to be a no-op
because it's the default. It does have a commented entry for possible
gtk+2 builds, but we're soon going to remove that possibility anyways in
bug 1278282.
--HG--
extra : rebase_source : 9ac927bb7bd8c057264c8f6f9ca5cbf79a839c4e
Now that build environment docker images have gtk+3 installed in
/usr/local, adjust mozconfigs to point pkg-config there, and remove
all the glue that was required to build using the tooltool package.
Also remove the --x-libraries=/usr/lib on 32-bit builds, which only
confuses the linker.
--HG--
extra : rebase_source : c7de7b3959a3c6b77ea202d9609c891b5b7ec442
Back when we started needing gtk+3 to build Firefox, we were using mock
to set up the build environment, and a tooltool package was the most
sensible way to handle this.
Fast forward to today, and we're close to moving the build environment
to Debian, which comes with gtk+3 packages. But in order to simplify
the various checks for the transition, it is desirable to stop using the
tooltool package. Which we can actually do in a reasonable way now that
we use docker images instead of mock, by building and installing gtk+3
in the build environment images.
So we modify the script that was producing the gtk+3 tooltool packages
such that it installs gtk+3 in the docker images, both 32-bit and 64-bit.
And invoke it when creating the desktop build environment docker images.
--HG--
extra : rebase_source : 75e987d6de7f3ae8a3d9b478fc173e191d28aace
At the same time, we make it actually do something on spidermonkey
builds. We also add an environment variable alternative, that we use
in mozconfig.stdcxx, allowing for mozconfig.no-compile to override it
and avoid configure failures on e.g. artifact builds.
--HG--
extra : rebase_source : b68d362025e0c99f9184a03391c652ec2c9357ad
Both the cc crate and the rust compiler may want to use "cc", which,
on automation, points to the system GCC compiler instead of ours.
As a workaround, we add a cc symbolic link in the GCC toolchain artifact
so that, as long as the GCC toolchain artifact's bin directory is in
$PATH early enough, it's picked over /usr/bin/cc.
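The workaround amounts to something like this when assembling the
artifact (sketch; $GCC_DIR stands for the toolchain's installation
directory):

$ ln -s gcc $GCC_DIR/bin/cc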
--HG--
extra : rebase_source : 53cacf8a750539706a484218e168c8c9e0ba49a6
The "contract" for toolchains is that extracting foo.tar.xz creates a
directory named foo/. That is however not true for mingw32.tar.xz, which
extracts into gcc/, possibly overwriting files from the gcc.tar.xz
archive (which is also used for mingw builds, for the host part).
This is also not true for nsis.tar.xz, but it reportedly has problems
when it's not in the same directory as mingw32.
But mingw32 doesn't actually need to be mixed with gcc, so it's better
to separate them as they are supposed to be.
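For illustration, listing a well-behaved toolchain archive foo.tar.xz
shows everything under foo/:

$ tar tf foo.tar.xz | head -n 1
foo/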
--HG--
extra : rebase_source : 30d90af64459bbb31bc076e48f3c661fa9cd4a79
lld is being too smart for its own good, and places non-relocatable data
right after the program headers, which prevents the program headers from
growing. But elfhack wasn't checking for that, so happily placed the
non-relocatable data at its non-relocated location, overwriting the last
item of the program headers.
--HG--
extra : rebase_source : 6f26d475f0a19d88ddf21399dbce8ceac62b492d
Bug 1385783 changed things such that the two elfhack sections are not
adjacent anymore. They can even be in different segments in some cases,
but the undo code doesn't know how to actually handle that case.
So for now, allow non-adjacent sections, but still verify that they are
in the same segment.
--HG--
extra : rebase_source : da95ef7df19eeea8dfd07b24f22e7bee18939b69
With all of our builds in Taskcluster now, we should never be uploading
symbols from build tasks. Unfortunately Windows builds were still doing so.
This patch removes MOZ_AUTOMATION_UPLOAD_SYMBOLS from all the in-tree
mozconfigs and a few other places so that it should always default off
(per moz-automation.mk). The rest of the uploadsymbols bits will be
removed once Thunderbird fixes their automation.
This patch was mostly autogenerated by running:
rg --files-with-matches UPLOAD_SYMBOLS browser/config/mozconfigs/ mobile/android/config/mozconfigs/ | xargs sed -ri '/.*UPLOAD_SYMBOLS.*/d'
sed -ri '/.*UPLOAD_SYMBOLS.*/d' build/unix/mozconfig.linux build/mozconfig.win-common build/macosx/local-mozconfig.common build/mozconfig.automation
Then mobile/android/config/mozconfigs/common and
taskcluster/scripts/builder/build-linux.sh were hand-edited.
MozReview-Commit-ID: Cy8kSEodSg4
--HG--
extra : rebase_source : 01caf1651b4eb428313e1f371aa585f8f34c4151
--enable-elf-hack is the default on all platforms where it's supported,
and is completely ignored on platforms where it's not supported.
While moving the flag to moz.configure, we're going to make it only
work on platforms where elfhack is supported, so we at least need to
remove it from mozconfigs for those platforms where it's not supported.
But generally speaking, we want less things in mozconfigs, so just
remove it from there, since it's the default anyways.
This is similar to what we had until bug 1278456 removed them when we dropped
support for older libstdc++, but for a different symbol.
--HG--
extra : rebase_source : 641fc6c86c8f47e3dbd752bc20056f61646541a7
That the wrapper implementation works has been verified by creating a
dummy program such as:
$ cat test.cc
#include <cstdio>
#include <thread>
int main() {
    std::thread([]() {
        printf("foo\n");
    }).join();
    return 0;
}
And compiling it with and without the hack:
$ g++ -fno-rtti -o test test.cc -lpthread
$ objdump -TC test | grep UND.*GLIBCXX_3.4.22
0000000000000000 DF *UND* 0000000000000000 GLIBCXX_3.4.22 std::thread::_State::~_State()
0000000000000000 DF *UND* 0000000000000000 GLIBCXX_3.4.22 std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)())
$ ./test
foo
$ g++ -fno-rtti -o test test.cc $objdir/build/unix/stdc++compat/stdc++compat.o -lpthread
$ objdump -TC test | grep UND.*GLIBCXX_3.4.22
$ ./test
foo
--HG--
extra : rebase_source : 53ca8e2d0424eaeb539d50510c441c8a3252c819
In bug 635961, elfhack was made to (ab)use the bss section as a
temporary space for a pointer. To find it, it scanned writable PT_LOAD
segments to find one that has a different file and memory size,
indicating the presence of .bss. This usually works fine, but when
the binary is linked with lld and relro is enabled, the end of the
file-backed part of the PT_LOAD segment containing the .bss section
ends up in the RELRO segment, making that location read-only and
subsequently making the elfhacked binary crash when it tries to restore
the .bss to a clean state, because it's not actually writing in the .bss
section: lld page-aligns it after the RELRO segment.
So instead of scanning PT_LOAD segments, we scan for SHT_NOBITS
sections that are not SHF_TLS (i.e. not .tbss).
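A rough sketch of the new scan (simplified; the actual elfhack code uses
its own ELF abstractions):

#include <cstddef>
#include <elf.h>

// Find a zero-initialized section that is not thread-local storage,
// i.e. .bss rather than .tbss.
static const Elf64_Shdr *find_bss(const Elf64_Shdr *shdrs, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        if (shdrs[i].sh_type == SHT_NOBITS && !(shdrs[i].sh_flags & SHF_TLS))
            return &shdrs[i];
    }
    return nullptr;
}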
--HG--
extra : rebase_source : f18c43897fd0139aa8535f983e13eb785088cb18
In some cases, we can end up linking some things with
--static-libstdc++. The notable (only?) example of that is for the
clang-plugin, and that happens because it gets some of its flags from
llvm-config, which contains --static-libstdc++ because clang itself is
built that way.
When that happens, the combination of --static-libstdc++ and
stdc++compat breaks the build because they have conflicting symbols,
which is very much by design.
There are two ways out of this:
- avoiding either -static-libstdc++ or stdc++compat
- work around the symbol conflicts
The former is not totally reliable; we'd have to accurately determine
if we're in a potentially conflicting case, and remove one of the two in
that case, and while we can do that for the cases we explicitly know
about, that's not future-proof, and might fail just as much in the
future.
So we go with the latter. The way we do this is by defining all the
stdc++compat symbols weak, such that at link time, they're overridden by
any symbol with the same name. When building with -static-libstdc++,
libstdc++.a provides those symbols so the linker eliminates the weak
ones. When not building with -static-libstdc++, the linker keeps the
symbols from stdc++compat. That last assertion is validated by the
long-standing CHECK_STDCXX test that we run when linking shared
libraries and programs.
That still leaves the symbols weak in the final shared
libraries/programs, which is a change from the current setup, but
shouldn't cause problems because when using versions of libstdc++.so
that do provide those symbols, it's fine to use the libstdc++.so version
anyways.
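A minimal illustration of the weak-definition trick (the symbol here is
made up; stdc++compat defines the actual libstdc++ compatibility
symbols):

// A weak definition is kept when nothing else defines the symbol, but
// is overridden by a strong definition, e.g. the one in libstdc++.a
// when linking with -static-libstdc++.
__attribute__((weak)) void example_compat_function()
{
}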
It becomes a library of some sort, so that multiple scripts can benefit
from it to build different versions of GCC.
The GPG key associated with GCC is also refreshed from keys.gnupg.net,
adding a new subkey, used to sign newer versions of GCC (and
postprocessed with pgpstrip to make it smaller).
The lld linker creates separate segments for purely executable sections
(such as .text) and sections preceding those (such as .rel.dyn). Neither
gold nor bfd ld do that, and just put all those sections in the same
executable segment.
Since elfhack is putting its executable code between the two relocation
sections, it ends up in a non-executable segment, leading to a crash
when it's time to run that code.
We thus insert the elfhack code before the first executable section
instead of between the two relocation sections (which is where the
elfhack data lies, and stays).
--HG--
extra : rebase_source : ab18eb9ac518d69a8639ad0e785741395b662112
Since bug 635961, building with relro makes elfhack try to use the bss
data for a temporary function pointer. If there is not enough space for
a pointer in the bss, elfhack will complain it couldn't find the bss.
In normal circumstances, this is most likely fine. Libraries with a bss
so small that it can't fit a pointer are already too small to be
elfhacked anyways. In Firefox, the two libraries with the smallest bss
have enough space for two pointers, and aren't elfhacked (libmozgtk.so
and libplds4.so).
However, the testcase that is used during the build to validate that
elfhack works doesn't have a large enough bss on x86-64, making elfhack
bail out, and the build fail as a consequence.
This, in turn, is due to the only non-thread-local zeroed data being an
int, which is not enough to fit a pointer on x86-64. We thus make it a
size_t.
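The change amounts to something like this in the testcase (the variable
name is made up):

#include <stddef.h>

/* Zero-initialized (.bss) data; size_t is pointer-sized on both 32-bit
 * and 64-bit targets, unlike int. */
size_t bss_data;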
--HG--
extra : rebase_source : bca2ddbf9d4a5e8786881fc524d642c38d610227
The PT_PHDR segment is optional, but the Android toolchain decides to
create one in some cases, and places it first. When that happens, the
workaround for bug 1233963 fails, because the fake phdr section has not
been adjusted yet (it only happens when we see a PT_LOAD).
So we adjust the fake phdr section when we see a PT_PHDR segment (and
avoid re-updating it when we see a subsequent PT_LOAD).
--HG--
extra : rebase_source : 2190ec2f20ba9d144b8828874f9f8d70dd5ad2f6
Somehow, with the Android toolchain, we end up with
non-empty-but-really-empty DT_INIT_ARRAYs.
In practical terms, they are arrays with no relocations, and content
that is meaningless:
$ objdump -s -j .init_array libnss3.so
libnss3.so: file format elf32-little
Contents of section .init_array:
1086e0 00000000 ....
$ readelf -r libnss3.so | grep 1086e0
$ objdump -s -j .init_array libplugin-container-pie.so
libplugin-container-pie.so: file format elf32-little
Contents of section .init_array:
4479c ffffffff 00000000 ffffffff 00000000 ................
$ readelf -r libplugin-container-pie.so | grep 4479c
Because so far, elfhack expected meaningful DT_INIT_ARRAYs, it bailed out
early in that case.
--HG--
extra : rebase_source : 217aacb42fdfabb466ed1f8149dfaeb4a595eda8
When the clang plugin is used, building something during export needs to
happen after the plugin is built. But there is no dependency ensuring
this happens.
OTOH, these sources in elfhack/inject don't need to be built that early,
so we'll just leave it to the build system to build them at a proper time.
--HG--
extra : rebase_source : a6bef8ec6eece3a1b0e45f84c907c2fbc0800863
We can just check the GPG signature for the upstream tarballs that are
GPG signed. We keep a copy of the relevant GPG keys in tree so that
we only use a controlled set of keys.
I validated the GPG keys by:
- Creating a fresh keyring.
- Importing the keys with gpg --receive-key.
- Importing my own GPG public key in that keyring.
- Importing the gpg keys that the PGP pathfinder told me were on the path
to those keys (which weren't directly in their keyring, so I had to
manually find some steps first).
- Using `gpg --check-sigs` to validate that all those keys I got are
the right ones.
Then the relevant GPG keys were exported with `gpg --export --armor` and
stripped with https://github.com/glandium/pgpstrip/.
For MPC, the first GPG-signed version upstream was 0.8.2, while the GCC
script to download prerequisites downloads 0.8.1. So instead of using
0.8.1, we use 0.8.2, which we can verify.
For GMP, the GCC script downloads 4.3.2. The only web-of-trust path is
through a revoked key, which signs a revoked uid of the GMP key.
Releases newer than 5.1.0 are signed with a new key that can be
validated with the steps above. So instead of using 4.3.2, we use 5.1.3
(last of the 5.1.x line).
But MPFR 2.4.2, which the GCC script downloads, doesn't build against
GMP 5.1.3, so instead of that, we use MPFR 3.1.5.
Sadly, the remaining GCC prerequisites are not signed, so I had to:
- Download the files from ftp.gnu.org.
- Download the corresponding files from snapshot.debian.org.
- Compare the raw files when possible, or the uncompressed (not extracted)
files (when, thankfully, they matched).
- Validate those snapshot.debian.org files checksums against the
checksums in the corresponding Sources.bz2/xz files.
- Validate the Sources.bz2/xz checksums against the corresponding InRelease
files.
- Validate the InRelease files GPG signatures against the Debian
archives keyring.
With all those things we actually don't get through the GCC script, we
also change how we get those prerequisites: we divert the commands the
script runs, making it output the URLs instead of downloading and
extracting the files.
All downloaded files, GPG-validated or otherwise, have their SHA-256
digest checked against a list in build/unix/build-gcc/checksums.
--HG--
extra : rebase_source : e6809a6ac392e6c5f99801826e1d30bdeee7ddf5
CLOSED TREE
Backed out changeset 158233bce738 (bug 1197325)
Backed out changeset b5ac3fa0bbe7 (bug 1197325)
Backed out changeset 55a8ad127517 (bug 1197325)
This removes the unnecessary setting of c-basic-offset from all
python-mode files.
This was automatically generated using
perl -pi -e 's/; *c-basic-offset: *[0-9]+//'
... on the affected files.
The bulk of these files are moz.build files but there are a few others as
well.
MozReview-Commit-ID: 2pPf3DEiZqx
--HG--
extra : rebase_source : 0a7dcac80b924174a2c429b093791148ea6ac204
Something similar was done in bug 1278718 for ASan builds, because of
indirect dependencies on libstdc++ being picked by the linker and
leading to linkage failure with the older binutils from the CentOS 6
image we use to do desktop builds.
Build slaves on automation are based on CentOS 6, which doesn't have a
recent enough version of libstdc++ for our new requirements. But since
we're building with a recent GCC or clang with its own libstdc++, we do
have such a libstdc++ available somewhere, and the compiler picks it
when invoking the linker.
Problems start happening when we execute some of the built programs
during the build, like host tools (e.g. nsinstall), or target programs
(xpcshell, during packaging). In that case, we need the compiler's
libstdc++ to be used. Which required adding the GCC or clang library
directory to LD_LIBRARY_PATH.
Unconveniently enough, the clang 3.5 tooltool package we're using for
ASAN builds until we can update to at least 3.8 (bug 1278718) doesn't
contain libstdc++.so. So for those builds, pull the GCC package from
tooltool as well, and pick libstdc++ from there.
We have very few directories where we have SOURCES declared that are not
part of a library or program in some way. In fact, there is only one
where it is legitimate because we only use the object file
(build/unix/elfhack/inject). Others are the result of moz.build control
flow (see e.g. netwerk/standalone), and we end up building more objects
than we need to.
There are other cases where we need objects without actually linking
them anywhere, but there are other sources in the same directory, and a
corresponding Linkable is emitted. And in fact, the only case I knew
about (media/libvpx), doesn't use such objects since bug 1151175.
The unix mozconfig.rust is actually completely generic now that we're
using toolchains built with --enable-rpath in tooltool.
Move the mozconfig.rust fragment up a level to reduce confusion.
Write a mozconfig.rust fragment which makes the rust toolchain
provided by tooltool available for linux builds, similar to
what we do for MacOS X.
Include this in linux64 mozconfigs to enable rust for official
nightly builds of that target. These aren't used outside of automation
builds, so including rust there will verify code on check-in
without requiring developers to install rust.
We must whitelist the mozconfig fragment to pass the consistency
check since we're not ready to let this feature ride the trains
to beta and release.
The tooltool reference is to a custom build of rustc 1.4
with --enable-rpath to avoid having to add the rustc lib
directory to LD_LIBRARY_PATH which somehow conflicts with
the gtk3 build we also install through tooltool.
It is also built with --enable-llvm-static-stdcpp on a
rust-buildbot dist docker image (centos:5 + script updates)
to avoid issues with GLIBCXX and GLIBC symbol versions.
This adds a stages config option, which can be used to select 1, 2, and
3 stage builds. It also marks the default trunk configuration to do 3
stage builds, and defaults to that.
Since CMake-generated build systems can run cmake if necessary, this
will make it possible to pick up changes from the source directory if
any and resume as much of the build as possible.
This builds the foundation for removing the need to blow away any of the
work done by the previous runs of the script.
We build gcc after clang, and extract libgcc libraries and libstdc++
headers from gcc and place them in the clang installation directory in a
way that clang favors before it searches the system for libraries and
includes.
Its only purpose is to disable PGO. Where that was not already explicitly done,
or irrelevant (because the directory only contains python), I disabled it in
moz.build.
As part of this move, HOST_NSPR_MDCPUCFG needed to be changed to get the quoting right.
--HG--
extra : commitid : J26MhSiPq9g
extra : rebase_source : 81c5b98371042803741ddace8d01b0097757dff3
When switching between Gtk+3 and Gtk+2, config.cache will contain a PKG_CONFIG
that may not be suitable for the build:
- after a Gtk+2 build, config.cache will point to the system pkg-config, which
doesn't like the pkg-config files in the Gtk+3 tooltool package.
- after a Gtk+3 build, config.cache will point to the Gtk+3 tooltool package's
pkg-config, which is likely not there in a Gtk+2 build.
Setting PKG_CONFIG avoids all config.cache considerations altogether, so set it
appropriately for both cases.
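Concretely, each case ends up setting PKG_CONFIG explicitly, along these
lines (paths are illustrative):

# Gtk+3 build: use the tooltool package's pkg-config.
export PKG_CONFIG=$topsrcdir/gtk3/usr/local/bin/pkg-config
# Gtk+2 build: use the system pkg-config.
export PKG_CONFIG=/usr/bin/pkg-config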
Some mozconfigs don't include mozconfig.linux*, and don't get gtk-related
definitions, so move those definitions into a separate mozconfig. To avoid
having two
files, one for 32-bit builds and one for 64-bit builds, rely on the
includer to set PKG_CONFIG_LIBDIR appropriately.
At the same time, move all the --enable-default-toolkit=cairo-gtk2 settings
into that new file, for the case where the gtk3 package wasn't pulled from
tooltool.