This will make sure that when running |mach python-test --python 3|
locally, we only run the tests that also run in CI with Python 3 (and
therefore presumably pass).
MozReview-Commit-ID: 3OBr9yLSlSq
--HG--
extra : rebase_source : 456340d0ecdddf1078f2b5b4ebb1eddf3813b26a
The media/libpng/moz.build file overrides the C standard used via
-std=c89, per bug 1371266, which conflicts with the use of the
arm_neon.h header: compilation fails on the inline keyword, which didn't
exist in C89. We thus "bump" to the GNU89 standard, which is C89+GNU
extensions, including inline.
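A hedged sketch of the kind of override involved (not the actual
media/libpng/moz.build contents; the CONFIG key is an assumption):

    # Bump from C89 to GNU89 so arm_neon.h's use of `inline` compiles.
    if CONFIG['CC_TYPE'] in ('clang', 'gcc'):
        CFLAGS += ['-std=gnu89']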
--HG--
extra : rebase_source : fe93a13e3bef8888e1874d2e94a6d8ef396aaf83
When I originally implemented bug 1458161, this is how it was done, but
it was suggested to use a configure-time check. This turned out to not
be great, because the rust compiler changes regularly, and we don't
re-run the configure tests when the compiler version changes. When people upgraded their
rust compiler to 1.27, the code subsequently failed to build because the
features were still set for the previous version they had installed.
--HG--
extra : rebase_source : 1b5f7a02ad8495d68cd29289f7beea59b8912183
When a binary has a PT_GNU_RELRO segment, the elfhack injected code
uses mprotect to add the writable flag to relocated pages before
applying relocations, removing it afterwards. To do so, the elfhack
program uses the location and size of the PT_GNU_RELRO segment, and
adjusts it to be aligned according to the PT_LOAD alignment.
The problem here is that the PT_LOAD alignment doesn't necessarily match
the actual page alignment, and the resulting mprotect may end up not
covering the full extent of what the dynamic linker has protected
read-only according to the PT_GNU_RELRO segment. In turn, this can lead
to a crash on startup when trying to apply relocations to the still
read-only locations.
Practically speaking, this doesn't end up being a problem on x86, where
the PT_LOAD alignment is usually 4096, which happens to be the page
size, but on Debian armhf, the PT_LOAD alignment is 64k, while the
runtime page size can be 4k.
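A minimal illustration of the mismatch, with assumed example addresses:
rounding with the PT_LOAD alignment and rounding with the runtime page
size give different ranges, so an mprotect based on the former need not
cover what the dynamic linker protected based on the latter.

    def align_down(addr, alignment):
        # Round addr down to a multiple of alignment (a power of two).
        return addr & ~(alignment - 1)

    relro_start = 0x2f000                        # hypothetical PT_GNU_RELRO start
    print(hex(align_down(relro_start, 0x10000))) # 0x20000 with 64k PT_LOAD alignment
    print(hex(align_down(relro_start, 0x1000)))  # 0x2f000 with a 4k runtime page size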
--HG--
extra : rebase_source : 5ac7356f685d87c1628727e6c84f7615409c57a5
We're well overdue for an upgrade of the rust compiler requirements.
Now that we're building with 1.28 (albeit a beta, due to be bumped when
it's released), we can bump the requirement away from 1.24 which is now
old. 1.27 is too new, though, so settle for the older 1.26.
--HG--
extra : rebase_source : c788ef4f7da9949b81df2f0577af6f6039ea63d8
When we're running using pipenv, we have more than one virtual
environment. This means the current check to see if python matches the
initial virtual environment gives a false positive when we're in a
secondary virtual environment. This patch changes the condition to
check if the current python path exists within the virtual
environments' root directory.
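A rough sketch of the new condition (names assumed; not the actual
mach code):

    import os
    import sys

    def python_is_in_our_virtualenvs(virtualenvs_root):
        # True if the running interpreter lives anywhere under the
        # directory holding all of our virtual environments.
        python = os.path.realpath(sys.executable)
        root = os.path.realpath(virtualenvs_root)
        return python.startswith(root + os.sep)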
MozReview-Commit-ID: AAONwLWsigL
--HG--
extra : rebase_source : c0ac94448ee4545417b5116e58b51c6187cdb175
We used to do that before bug 1455767. Taking a step back, what we want
(and what this change implements) is to do the following (sketched in
code after the list):
- allow specifying a linker to use explicitly
- check that using the corresponding linker flags makes us use that
  linker
- if no linker is specified, and developer options are enabled, try to
  use gold if it's available; otherwise, detect what kind of linker the
  default linker is
- if --disable-gold is explicitly given, don't automatically use gold.
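A simplified sketch of that selection logic (try_linker,
detect_default_linker and die are hypothetical helpers, not the real
moz.configure code):

    def select_linker(requested, developer_options, enable_gold):
        if requested:
            linker = try_linker(requested)
            if linker is None:
                die('could not use the requested linker: %s' % requested)
            return linker
        if developer_options and enable_gold:
            gold = try_linker('gold')
            if gold is not None:
                return gold
        return detect_default_linker()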
--HG--
extra : rebase_source : 8d89c54bd5e555984d815beb8fdd3f5f75dae31e
Instead of clang 4, which they were the last to use, so remove the
clang 4 toolchain.
--HG--
extra : rebase_source : d03a083e9217aeb6c1d2c91decb978426f0e8d1a
The linux64-clang toolchain alias is currently clang 5. Switch it to
clang 6, but keep the spidermonkey tsan builds on clang 5 because of
bug 1467673.
The LLVM headers that come with clang 6 contain a DEBUG define that
conflicts with our DEBUG define and breaks the clang-plugin build,
so we force-unset ours.
--HG--
extra : rebase_source : aae88f1166108f003b06c022f14d5f4c61fc1ed9
Also work around https://bugs.llvm.org/show_bug.cgi?id=37746 by
explicitly handling ObjC interface variables separately. This actually
allows the searchfox macosx build to go much further than it used to (it
now fails during make package with apparently no output for rust code)
--HG--
extra : rebase_source : 354981ca9deebed5c60d3684f5c3abc554422393
We used to do that before bug 1455767. Taking a step back, what we want
(and what this change implements) is to do the following:
- allow specifying a linker to use explicitly
- check that using the corresponding linker flags makes us use that
  linker
- if no linker is specified, and developer options are enabled, try to
  use gold if it's available; otherwise, detect what kind of linker the
  default linker is
--HG--
extra : rebase_source : 655089f29c6403f6c31a977dcf5c6fd1e9941121
--enable-release not being passed means developer options are enabled,
which is generally speaking not desirable for builds meant to be
shipped. This is somewhat alleviated for Firefox by MOZILLA_OFFICIAL
implying --enable-release (as well as MOZ_AUTOMATION), but that doesn't
apply to e.g. standalone js builds (even some of the standalone js jobs
on our automation don't set MOZ_AUTOMATION for some reason).
A reasonable thing to do is just to default builds for release/beta
milestones to --enable-release, but still allow --disable-release
to enable the developer options.
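Something along these lines, as a hedged moz.configure-style sketch
(the exact option definition is an assumption):

    option('--enable-release',
           default=milestone.is_release_or_beta,
           help='Build with more conservative, release '
                'engineering-oriented options.')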
--HG--
extra : rebase_source : 770d51b10a9cd17c63972435bb61eed10345ea71
Currently, many tasks fetch content from the Internets. A problem with
that is that fetching from the Internets is unreliable: servers may have
outages or be slow; content may disappear or change out from under us.
The unreliability of 3rd party services poses a risk to Firefox CI.
If services aren't available, we could potentially not run some CI tasks.
In the worst case, we might not be able to release Firefox. That would
be bad. In fact, as I write this, gmplib.org has been unavailable for
~24 hours and Firefox CI is unable to retrieve the GMP source code.
As a result, building GCC toolchains is failing.
A solution to this is to make tasks more hermetic by depending on
fewer network services (which by definition aren't reliable over time
and therefore introduce instability).
This commit attempts to mitigate some external service dependencies
by introducing the *fetch* task kind.
The primary goal of the *fetch* kind is to obtain remote content and
re-expose it as a task artifact. By making external content available
as a cached task artifact, we allow dependent tasks to consume this
content without touching the service originally providing that
content, thus eliminating a run-time dependency and making tasks more
hermetic and reproducible over time.
We introduce a single "fetch-url" "using" flavor to define tasks that
fetch single URLs and then re-expose that URL as an artifact. Powering
this is a new, minimal "fetch" Docker image that contains a
"fetch-content" Python script that does the work for us.
We have added tasks to fetch source archives used to build the GCC
toolchains.
Fetching remote content and re-exposing it as an artifact is not
very useful by itself: the value is in having tasks use those
artifacts.
We introduce a taskgraph transform that allows tasks to define an
array of "fetches." Each entry corresponds to the name of a "fetch"
task kind. When present, the corresponding "fetch" task is added as a
dependency. And the task ID and artifact path from that "fetch" task
is added to the MOZ_FETCHES environment variable of the task depending
on it. Our "fetch-content" script has a "task-artifacts"
sub-command that tasks can execute to perform retrieval of all
artifacts listed in MOZ_FETCHES.
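A condensed sketch of the `task-artifacts` flow (the MOZ_FETCHES JSON
shape and the queue URL are assumptions for illustration, not the
actual fetch-content implementation):

    import json
    import os
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    ARTIFACT_URL = ('https://queue.taskcluster.net/v1/task/'
                    '{task}/artifacts/{artifact}')

    def fetch_one(entry):
        # Download one artifact from the fetch task that produced it.
        url = ARTIFACT_URL.format(task=entry['task'],
                                  artifact=entry['artifact'])
        urllib.request.urlretrieve(url, os.path.basename(entry['artifact']))

    def fetch_all():
        entries = json.loads(os.environ['MOZ_FETCHES'])
        # Python 3's concurrent.futures gives us concurrent downloads.
        with ThreadPoolExecutor(4) as pool:
            list(pool.map(fetch_one, entries))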
To prove all of this works, the code for fetching dependencies when
building GCC toolchains has been updated to use `fetch-content`. The
now-unused legacy code has been deleted.
This commit improves the reliability and efficiency of GCC toolchain
tasks. Dependencies now all come from task artifacts and should always
be available in the common case. In addition, `fetch-content` downloads
and extracts files concurrently. This makes it faster than the serial
application which we were previously using.
There are some things I don't like about this commit.
First, a new Docker image and Python script for downloading URLs feels
a bit heavyweight. The Docker image is definitely overkill as things
stand. I can eventually justify it because I want to implement support
for fetching and repackaging VCS repositories and for caching Debian
packages. These will require more packages than what I'm comfortable
installing on the base Debian image, therefore justifying a dedicated
image.
The `fetch-content static-url` sub-command could definitely be
implemented as a shell script. But Python is readily available and
is more pleasant to maintain than shell, so I wrote it in Python.
`fetch-content task-artifacts` is more advanced and writing it in
Python is more justified, IMO. FWIW, the script is Python 3 only,
which conveniently gives us access to `concurrent.futures`, which
facilitates concurrent download.
`fetch-content` also duplicates functionality found elsewhere.
generic-worker's task payload supports a "mounts" feature which
facilitates downloading remote content, including from a task
artifact. However, this feature doesn't exist on docker-worker.
So we have to implement downloading inside the task rather than
at the worker level. I concede that if all workers had generic-worker's
"mounts" feature and supported concurrent download, `fetch-content`
wouldn't need to exist.
`fetch-content` also duplicates functionality of
`mach artifact toolchain`. I probably could have used
`mach artifact toolchain` instead of writing
`fetch-content task-artifacts`. However, I didn't want to introduce
the requirement of a VCS checkout. `mach artifact toolchain` has its
origins in providing a feature to the build system. And "fetching
artifacts from tasks" is a more generic feature than that. I think
it should be implemented as a generic feature and not something that is
"toolchain" specific.
I think the best place for a generic "fetch content" feature is in
the worker, where content can be defined in the task payload. But as
explained above, that feature isn't universally available. The next
best place is probably run-task. run-task already performs generic,
very-early task preparation steps, such as performing a VCS checkout.
I would like to fold `fetch-content` into run-task and make it all
driven by environment variables. But run-task is currently Python 2
and achieving concurrency would involve a bit of programming (or
adding package dependencies). I may very well port run-task to Python
3 and then fold fetch-content into it. Or maybe we leave
`fetch-content` as a standalone script.
MozReview-Commit-ID: AGuTcwNcNJR
--HG--
extra : source : 0b941cbdca76fb2fbb98dc5bbc1a0237c69954d0
extra : histedit_source : a3e43bdd8a9a58550bef02fec3be832ca304ea93
After this change, we consistently import GPG keys from files in
the GCC build scripts.
MozReview-Commit-ID: BcyvCQoGbMS
--HG--
extra : source : 5fce34a460b51e45ac280a9f0cb8bad896fbcff1
extra : histedit_source : 01621ea8111315c251a9493a11efca72c2ba3c7d
python-zstandard's 0.9.1 source distribution contains a debian/
directory.
On Squeeze, producing a Debian package is straightforward.
On Wheezy, we need to hack up Build-Depends because Wheezy doesn't
have a package for the Hypothesis fuzzing library. This package is
only used for testing and our package building disables testing,
so we don't even need to further hack up the packaging to disable
tests.
MozReview-Commit-ID: 6raXjdzggCH
--HG--
extra : rebase_source : 672492a40d65df8430eb17ba033bcb1c0890b7df
This build target doesn't have LTO enabled on it (yet)
MozReview-Commit-ID: 56tAHMyvH7o
--HG--
extra : rebase_source : 90039cd8e97332e2ef8aad7908b8a04b2869f4a5
The crash reporter symbol files are the easiest cross-platform way to
find static initializers. While some types of static initializers (e.g.
__attribute__((constructor)) functions) don't appear there in a notable
way, the static initializers we do care the most about for tracking do
(static initializers from C++ globals). As a matter of fact, there is
only a difference of 2 compared to the currently reported count of 125
on a linux64 build, so this is a good enough approximation. And it
allows us to easily track the count on Android, OSX and Windows builds,
which we currently don't do.
The tricky part is that the symbol files are in
dist/crashreporter-symbols/$lib/$fileid/$lib.sym, and $fileid is hard to
figure out. There is a `fileid` tool in testing/tools, but it is a
target binary, meaning it's not available on cross builds (OSX,
Android).
So the simplest is just to gather the data while creating the symbol
files, which unfortunately requires going through some hoops to make it
happen for just the files we care about.
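For instance, a hypothetical way to count them from a symbol file
(matching on _GLOBAL__sub_I_ names is an assumption about how the real
counting works):

    def count_static_initializers(sym_path):
        # FUNC records in breakpad .sym files carry function names;
        # C++ static initializers show up as _GLOBAL__sub_I_* functions.
        count = 0
        with open(sym_path) as f:
            for line in f:
                if line.startswith('FUNC ') and '_GLOBAL__sub_I_' in line:
                    count += 1
        return count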
--HG--
extra : rebase_source : 458fed1ffd6f9294eefef61f10ff7a284af0d986
L10n jobs should run per-push, based on the corresponding builds
Differential Revision: https://phabricator.services.mozilla.com/D1403
--HG--
extra : rebase_source : b7b7f34204bd06ba2baf92ae6d103f8c207214fb
This relies on the fact that providing multiple --version-script
combines them all, so we effectively create a new symbol version
that has no global symbol, but hides the std::thread::_M_start_thread
symbols.
This version script trick happens to work with BFD ld, gold, and lld.
The downside is that when providing multiple --version-script's, ld
doesn't want any of them to have no version at all. So for the libraries
that do already have a version script (through SYMBOLS_FILE), we use a
version where there used to be none, using the library name as the
version. Practically speaking, this binds the libraries a little closer
than they used to be, kind of like the non-flat namespace on OSX (which is the
default there), meaning the dynamic linker will actively want to use
symbols from those libraries instead of a system library that might
happen to have the same symbol name.
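As an illustration, the added script could look something like this
hedged sketch (version name and exact glob are assumptions):

    libfoo.so {    /* a version named after the library */
      local:
        _ZNSt6thread15_M_start_thread*;  /* std::thread::_M_start_thread */
    };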
--HG--
extra : rebase_source : a7f672c35609d993849385ddb874ba791b34f929
Version 2.25.1's libiberty can choke on some symbols. That was fixed in
2.27. As of this writing, the latest version is 2.30. Conservatively go with
2.28.1, which is the same major version (2.28) as what is currently in
Debian stable.
--HG--
extra : rebase_source : 9e5ba94421a1568f662cfd98af7168ea1c890934
This relies on the fact that providing multiple --version-script
combines them all, so we effectively create a new symbol version
that has no global symbol, but hides as much std::* stuff as possible.
The added symbol script could use `extern "C++"` syntax and demangled
symbols but there is no guarantee the demangled symbols won't change.
Plus, it's not possible to match demangled symbols that have a return
type: they contain a space, and the only way to match that is to use
double quotes, which doesn't allow globs at the same time.
This version script trick happens to work with BFD ld, gold, and lld.
The downside is that when providing multiple --version-script's, ld
doesn't want any of them to have no version at all. So for the libraries
that do already have a version script (through SYMBOLS_FILE), we use a
version where there used to be none, using the library name as the
version. Practically speaking, this binds the libraries a little closer
than they used to be, kind of like the non-flat namespace on OSX (which is the
default there), meaning the dynamic linker will actively want to use
symbols from those libraries instead of a system library that might
happen to have the same symbol name.
--HG--
extra : rebase_source : 78adb64b90e75ebad203b8a647b305c9d7198d16
In the span of one week, both gmplib.org and multiprecision.org, the
respective homes of gmp and mpc, have gone down. The latter is still down.
It turns out that all versions of gmp and mpfr we need are mirrored on
ftp.gnu.org, so we can just use that instead. For mpc, versions > 1.0
are on ftp.gnu.org, but not earlier versions.
The one mpc version <= 1.0 we do need is 0.8.2, and a copy of the exact
same archive (as per its sha256, which the gcc build scripts already
check) can be found on snapshot.debian.org. We lose gpg validation on
the way, but since we're already checking the sha256, that's a fine
tradeoff.
At least this unblocks changes to toolchains until multiprecision.org
comes back online.
--HG--
extra : rebase_source : f2bc67f8757655d99bfd9735b80d7205f7bfe47b
Can be used in a mozconfig as:
  ac_add_options --enable-linker=lld-7
or:
  ac_add_options --enable-linker=lld
MozReview-Commit-ID: GfDevGeooU4
--HG--
extra : rebase_source : ee2aad5f8f721a6fe5840affde6b29a3b940fb91
The python3-minimal package provides /usr/bin/python3 on Debian.
This commit installs this package so a `python3` executable is
provided.
This required backporting the package to wheezy. The final patch
is trivial. But I wasted a bit of time figuring out why `mk-build-deps`
wasn't working. It would no-op and exit 0 and then the build would
complain about missing dependencies!
glandium's theory is that the ":any" multiarch support on wheezy
isn't complete. Removing ":any" seems to make things "just work."
MozReview-Commit-ID: FBicpK4SmkQ
--HG--
extra : rebase_source : a28ce731024e8ed6a43fb30e2ed57da2abb50d0f
This option does nothing if it's provided, and things will probably
break in a most peculiar manner if somebody uses --without-pthreads.
Given that neither outcome is useful, we should remove the option.
It's the *compile*SdkVersion that needs to match the installed Android
SDK platform in order to be able to build an app, whereas the
*target*SdkVersion is merely a compatibility flag.
Since the received wisdom is that targetSdkVersion should be <=
compileSdkVersion and Android Studio also shows a warning to that
effect if you modify the build.gradle of a small sample app
accordingly, I've also added a corresponding configure check of our own
to enforce this (see the sketch below).
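A hypothetical sketch of such a check (option names and helpers are
assumptions, not the real moz.configure code):

    @depends('--with-android-compile-sdk-version',
             '--with-android-target-sdk-version')
    def check_sdk_versions(compile_sdk, target_sdk):
        if int(target_sdk) > int(compile_sdk):
            die('targetSdkVersion (%s) must not be higher than '
                'compileSdkVersion (%s)' % (target_sdk, compile_sdk))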
MozReview-Commit-ID: F2RZemChFrm
--HG--
extra : rebase_source : cf4f6256baa4446d673b94d97f9497f93d7917ff
This serves two purposes:
1) It makes web-platform-tests pref downloading/handling a little more robust. When
run externally, it now downloads the entire testing/profiles directory. When loading
prefs it will look for both prefs_general.js (to support older versions of Firefox)
and profiles.json (to support moving forward).
This way we can add/remove/rename pref files under these directories without needing
to worry about breaking upstream wpt.
2) It provides developers an overview of which harnesses are using which base profiles.
Instead of hunting through test harness code to find this information, they can glance
at profiles.json.
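For example, profiles.json might map harnesses to their base profiles
along these lines (hypothetical contents, not the actual file):

    {
      "web-platform-tests": ["common"],
      "mochitest": ["common", "mochitest"],
      "reftest": ["common", "reftest"]
    }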
MozReview-Commit-ID: AMzdnD8aGA2
--HG--
extra : rebase_source : 6fa0a802680417e49fcef99f3d03de7458a8fcba
svn revert requires a path, and does not take a revision. This isn't an
issue on build machines because we do a fresh checkout every time.
But if you're trying to run build-clang locally, with existing checkouts,
it will:
1) successfully svn update
2) run svn revert, saying "Skipped <rev>"
(except you don't see it because of -q)
3) svn revert returns a successful exit code
4) patch fails because the file was never reverted and it attempts
to re-apply the patch
Also I think the revert command needs to come first.
MozReview-Commit-ID: 4OfrJNZwJNU
--HG--
extra : rebase_source : b3474e8048b3110f3f5948c3351923c02735ca4d
This moves testing/profiles/prefs_general.js to
testing/profiles/common/user.js. It also adds an 'extensions' folder to the
common profile. Dropping extension files here will get them installed in all
test harnesses (useful for testing on try).
The idea is that all test harnesses will eventually use this 'common' profile.
We'll also create some new per harness profiles, e.g testing/profiles/mochitest
and testing/profiles/reftest. This way there will be a single location
developers can go to set preferences, both for a specific harness, and across
all harnesses.
MozReview-Commit-ID: 8sqBqLiypgU
--HG--
rename : testing/profiles/prefs_general.js => testing/profiles/common/user.js
extra : rebase_source : 72a4a4b691e93c77479c7e70647b0ca373862c51
OOM rust crashes are currently not identified as such in crash reports
because rust libstd handles the OOMs and panics itself.
There are unstable ways to hook into this, which unfortunately are under
active changes in rust 1.27, but we're currently on 1.24 and 1.27 is not
released yet. The APIs didn't change between 1.24 and 1.26, so it's
fine-ish to use them as long as we limit their use to those versions.
As long as the Firefox versions we ship (as opposed to downstream) use
the "right" version of rust, we're good to go.
The APIs are in their phase of stabilization, so there shouldn't be too
many variants of the code to support.
--HG--
extra : rebase_source : 08a85aa102b24380b1f6764effffcc909ef3191b
This commit also removes "DEFAULT_APP", which is unused.
MozReview-Commit-ID: 5YYaC5LJqUn
--HG--
extra : rebase_source : 45f0f8f11698890fae0dcca71174f88dbdb412c8
Since it is run with `mach python`, it uses the environment defined by
`build/virtualenv_packages.txt`. Thus, we don't need to create a separate
virtualenv.
Differential Revision: https://phabricator.services.mozilla.com/D1015
--HG--
extra : rebase_source : 023095f82d7219a10dffb3579276c5285db37cfc
extra : source : a2999a5b9b7aa08a7c5c2ba6384724853d14b9c5
We previously did not require Python 3.5+ everywhere because we failed
to detect Python 3.5 on Windows in CI. That's because CI isn't using
start-shell.bat and it hasn't yet updated PATH to include
%MOZILLABUILD%\python3.
We shouldn't need to teach CI to have PATH contain everything in
MozillaBuild. This commit teaches moz.configure to automatically use
MozillaBuild's Python 3 if we're running in MozillaBuild.
Since we can now detect Python 3 everywhere in CI, we make Python 3.5+
required on all build configurations.
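A rough sketch of that detection (paths and fallback assumed; not the
real moz.configure code):

    import os

    def find_python3():
        # Prefer MozillaBuild's bundled Python 3 when running under
        # MozillaBuild, which exports the MOZILLABUILD variable.
        mozillabuild = os.environ.get('MOZILLABUILD')
        if mozillabuild:
            candidate = os.path.join(mozillabuild, 'python3', 'python3.exe')
            if os.path.exists(candidate):
                return candidate
        return 'python3'  # otherwise rely on PATH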
MozReview-Commit-ID: BwgWGeYMyPM
--HG--
extra : rebase_source : b52f604b01c73b0493b142f3f71aa26c99a42b91
taskgraph: Make update tasks support esr60
MozReview-Commit-ID: GUmAq3sBXGB
--HG--
extra : rebase_source : 0eaeb17809fede07f6b9fc4ee5d856d0078f83be
But only if we are:
a) not running in CI, or
b) running in CI on Linux
We will ideally make the requirement global. But Python 3.5 is not
yet available in CI on macOS. And we're not finding the MozillaBuild
copy in configure.
This was previously announced in November at
https://groups.google.com/d/msg/mozilla.dev.platform/rJrPh1QYXrQ/hqRrQsJ_BgAJ.
MozReview-Commit-ID: IyPCAcL3gop
--HG--
extra : rebase_source : f9e3db043a1ce9c1a903c943663f22245febf101
checksums.py is now run after upload.py, as part of the "upload" make
target.
As part of the refactor, checksums.py now takes as arguments a list of
directories containing files to checksum. It will recursively checksum
all files in listed directories.
This means we no longer have to pass an explicit list of files into
checksums.py. Instead, we can pass the artifact directory and
automatically checksum everything within. This will allow us to
generate files directly into the artifact directory instead of
having to run upload.py to copy files there.
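A simplified sketch of the recursive checksumming (digest choice and
output format are assumptions):

    import hashlib
    import os
    import sys

    def checksum_directory(root):
        # Walk the directory and checksum every file beneath it.
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                digest = hashlib.sha512()
                with open(path, 'rb') as f:
                    for chunk in iter(lambda: f.read(1 << 20), b''):
                        digest.update(chunk)
                print('%s  %s' % (digest.hexdigest(),
                                  os.path.relpath(path, root)))

    if __name__ == '__main__':
        for directory in sys.argv[1:]:
            checksum_directory(directory)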
MozReview-Commit-ID: 6ntnXU2Pp0E
--HG--
extra : rebase_source : 7e019b366f14b3692ec702ff331a1e0306dc3805
.write() is the preferred mechanism to write to a file object.
MozReview-Commit-ID: 1uhNeFayoxV
--HG--
extra : rebase_source : 5244e5548cefb8ebf98cb2415ab139e96e967433
This makes the logic in process_files() simpler.
MozReview-Commit-ID: KdphRJZLinx
--HG--
extra : rebase_source : 14a89fc9ba85d1dd4c2d2d18d0383552bb2d8e69