It turns out to be much easier to hook |mach artifact install| into
config.status and |mach build| than to hook into client.mk.
The additional virtualenv package avoids an import error when running
|mach artifact install|.
--HG--
extra : commitid : EnfWU0uyRfQ
extra : rebase_source : f7d11fc4c542f9798712c013c4319d92d40c28e5
StrictVersion is strict about version strings, insisting on whatever
convention Python uses. LooseVersion is not as strict but is strict
enough for our use cases.
DONTBUILD (NPOTB)
--HG--
extra : commitid : 17lNEAJhaV0
extra : rebase_source : 0a0cefa47b4558401cb85c6e9b237c0d6cf0e7fb
extra : amend_source : c7360d1a2f934338ec04d5f384d4530e3e9ebbc5
3.5.2 is what is listed in `mach mercurial-setup`. These should match.
Add a comment to each file saying to change both.
--HG--
extra : commitid : FebjTovmqGk
extra : rebase_source : 50490c1896a4c402f27cf4154b155932614da558
extra : amend_source : 73ae0ddc9f2770351d2ee2aaf5121656fb7e5750
Limit ourselves to include paths for now, because there are tricky things
involved in making this work globally.
While here, use shell_quote instead of manual quoting for those paths.
With all include flags now using absolute paths, there is no need to try
to post-process them when getting them for CompileDB and codecomplete.
In fact, dropping that post-processing fixes the flags in media/gmp-clearkey/0.1,
which uses a literal "-include stdio.h" that was wrongly being transformed
into "-include $objdir/media/gmp-clearkey/0.1/stdio.h".
MOZ_DEBUG_DEFINES are essentially defines used everywhere. So treat them as
feeding the initial value for DEFINES in each moz.build sandbox. This allows
the kind of overrides that were done in the past by resetting MOZ_DEBUG_DEFINES
in Makefiles.
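For illustration, a moz.build sandbox can now do something like this (the
define names here are made up):
```
# DEFINES starts out pre-seeded with MOZ_DEBUG_DEFINES, so a directory can
# override one of those defines inline instead of resetting
# MOZ_DEBUG_DEFINES in its Makefile.
DEFINES['MY_LOCAL_DEFINE'] = True
if CONFIG['MOZ_DEBUG']:
    DEFINES['_DEBUG'] = False  # override a define seeded from MOZ_DEBUG_DEFINES
```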
Currently, one needs to define DEFFILE or LD_VERSION_SCRIPT appropriately,
and somehow deal with the fact that their input formats are different, which
relies on manual invocations of the convert_def_file script, with awkward
aggregations.
This simplifies the problem by using a simple list of symbols, with
preprocessing, allowing #includes.
We want to move it to CommonBackend, so it's better to make it more
independent, which is now possible thanks to the Defines instances attached
to ContextDerived instances.
Like with ChromeManifestEntries, reloop in consume_object, with the double
goal of allowing the jar manifest handler code to be reused in other backends
and avoiding code duplication in the FasterMake backend itself when support
for e.g. GeneratedFiles is added.
Instead of filling the install manifests accordingly, reloop in
consume_object, so that the jar manifest handler code can eventually
be reused in other backends.
Currently, only css files added through jar manifests are treated this way.
There is really no reason for the discrepancy, but there are actually no css
files added directly through moz.build, so this was never a problem.
On the other hand, it makes things simpler in a world where jar manifests are
treated as if they were entirely described in moz.build (which is where the
FasterMake backend is heading).
Again, this is not strictly necessary but allows confirming the idempotence of
further changes. And it has the nice side effect of making chrome manifest
files more consistent.
Using TEST_DIRS is nothing more than a shortcut for
    if CONFIG['ENABLE_TESTS']:
        DIRS += [...]
As such, we might as well remove it as a separate variable, and use some
Context magic to just fill DIRS when ENABLE_TESTS is set.
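A minimal sketch of the Context magic in question (simplified; not the
actual mozbuild code):
```
class Context(dict):
    def __init__(self, config):
        dict.__init__(self, DIRS=[])
        self.config = config

    def __setitem__(self, key, value):
        # Redirect TEST_DIRS into DIRS when tests are enabled, so no
        # separate variable ever exists.
        if key == 'TEST_DIRS':
            if self.config['ENABLE_TESTS']:
                self['DIRS'] = self['DIRS'] + value
            return
        dict.__setitem__(self, key, value)

ctx = Context({'ENABLE_TESTS': True})
ctx['TEST_DIRS'] = ['tests']
assert ctx['DIRS'] == ['tests']
```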
The security/manager/ssl/tests/unit/moz.build change ensures that the order
of DIRS before the change is kept, not because it matters, but because it
allows confirming that nothing else is modified by this change.
Bug 1191230 added override lines with # characters to chrome manifests
for Windows.
So far, chrome manifests were handled with buildlist.py like in the
RecursiveMake backend, fed with Make variables. Without proper quoting,
those Make variables are just truncated by Make on the first # character,
and this results in |mach build faster| failing because of that.
However, the reason why chrome manifests were handled with buildlist.py
originally is that not all chrome manifest entries were known to the
FasterMake backend, but they now all are.
So instead of relying on Make variables and buildlist.py, we can now
rely on the newly added install manifests feature that allows creating files
with a given content.
From the backend perspective, CONFIGURE_DEFINE_FILES is the same as
GENERATED_FILES because in both cases a GeneratedFile object is emitted, but
from the perspective of some checks in the emitter, they aren't the same,
and that causes errors when adding a CONFIGURE_DEFINE_FILES to e.g. EXPORTS.
Also removes related unused variables in mach_commands.py.
--HG--
extra : commitid : IiDVMuEZtA5
extra : rebase_source : 575a51dd0ad5450323b4da5f441f8e5d721e41d6
Running old extensions with newer versions of Mercurial may crash `hg`
due to the old extension accessing something or doing something that has
been changed in the new release.
To minimize the risk of this happening, we disable common 3rd party
extensions when cloning or pulling as part of `mach mercurial-setup`. We
don't want to disable everything because some extensions (like
remotenames) provide features the user may want enabled as part of the
clone/update. This leaves the door open for more failures. Hopefully
this approach is sufficient. We can always revisit later.
--HG--
extra : rebase_source : 92e7d8fe227f29fc64c0f69021bd731ba762faf3
In order to use StrictOrderingOnAppendListWithFlags instances in
mozbuild template functions, we need += to work correctly. This patch
implements extend and the associated functions (including +=),
disallowing some behaviour where convenient.
There's a subtle point hidden in the isinstance() tests: before this
patch, it was not easy to compare two
StrictOrderingOnAppendListWithFlags instances to see if they had the
*same* set of flags. That was because two instances may not have the
same class, and would only share the common
StrictOrderingOnAppendList, which isn't enough to infer the presence
of flags. To be slightly more clear, concrete instances will have
class StrictOrderingOnAppendListWithFlagsSpecialization (although
there are still multiple instances of that class) and all extend from
the unique class StrictOrderingOnAppendListWithFlags.
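Roughly, the shape of the change (heavily simplified; the real classes also
enforce ordering and carry per-element flags):
```
class StrictOrderingOnAppendListWithFlags(list):
    # Each concrete specialization sets this, so two instances can be
    # checked for the *same* set of flags.
    FLAGS_CLASS = None

    def extend(self, other):
        if not (isinstance(other, StrictOrderingOnAppendListWithFlags) and
                other.FLAGS_CLASS is self.FLAGS_CLASS):
            raise ValueError('Expected a list with the same flags')
        list.extend(self, other)

    def __iadd__(self, other):
        self.extend(other)
        return self
```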
--HG--
extra : commitid : AMVDYt8khR
extra : rebase_source : 1ce0698691fc03fbdf6a976e92017c1d60bad15d
extra : histedit_source : 4812a565179fb4fac2e4b5cd89c4efe74e794dfa
DONTBUILD NPOTB
The top source directory configuration requires
mobile/android/gradle/m2repo/**, so it stays. There's no value in
changing the location; it contains an Android-specific Gradle plugin.
We note the removal of |mach gradle-install| and point to the new
documentation.
--HG--
extra : commitid : 9Nhz2dnBIgY
extra : rebase_source : 32a2b8a92d57f963feac2bae28fed5a9f1b26f93
extra : amend_source : bf53a0b2d3d4ac0618bc82fe79914bdeaf1c1e0a
Both these files are, after all, define files, like other CONFIGURE_DEFINE_FILES.
They only happen to have a special requirement for an expansion for all defines,
which doesn't need to happen through traditional preprocessing.
This change adds consistency in how configure-related headers are being handled.
This is needed to support hgwatchman.
--HG--
extra : commitid : 8D2A8YPNimB
extra : rebase_source : 7d5932aa049dfb352b93a87c2c8087dd7b324aab
extra : histedit_source : 9863189f265eca9e0b9363e13c59a7d55f5c633d
This change allows specifying objdir-relative paths in EXPORTS to enable
exporting entries from GENERATED_FILES. Objdir paths in EXPORTS that are
not in GENERATED_FILES will raise an exception.
Example:
```
EXPORTS += ['!g.h', 'f.h']
GENERATED_FILES += ['g.h']
```
Given the implementation, this should also work for FINAL_TARGET_FILES,
FINAL_TARGET_PP_FILES, and TESTING_FILES, but those are not well-tested.
This patch also renames the install manifest for '_tests' to match the
directory name for convenience in some code I refactored.
--HG--
extra : commitid : CwayzXtxv1O
extra : rebase_source : 5fb6f461fc740da9bce14bbdbfabdfe618af8803
Future improvements to process_install_manifest's --track option will require
adding data in the tracking dump that uses an install manifest form, and I don't
want e.g. switching branches or bisection to require a clobber in order to do
the right thing, so this change future-proofs the install manifest reader.
There are currently two operating modes for process_install_manifest:
- default, which removes any file in the destination directory that is not
in the install manifest
- --no-remove, which doesn't do the above.
While install manifests also have the ability to deal with files that may
be left in the destination directory some other way, that requires knowing the
list of those files in advance, which is not always possible.
For instance:
- with the FasterMake build backend, install manifests are split such that
there is one manifest per application or addon directory (to allow more
parallelism), which means there is one for dist/bin and one for several
of its sub-directories.
- With --disable-compile-environment combined with artifacts, the backends
are not aware of e.g. all the libraries and executables that end up in
dist/bin.
If we want to properly remove files when they are removed from moz.build
or jar.mn, we can't use --no-remove, but the alternative would remove those
files.
So add an option that keeps a list of all the files that were installed as
part of processing the given install manifest(s). That information is simply
a dump of the install manifest, which, while it contains more information
than currently required, will allow doing smarter things in the future.
The default behavior for a FileCopier's copy is to remove all the files and
directories in the destination that aren't in its registry.
The remove_unaccounted argument can be passed as False to disable this
behavior.
This change adds another possibility, where remove_unaccounted may be a
FileRegistry, in which case only the files in that registry are removed.
This makes it possible to e.g. only remove files that were copied from a previous
FileCopier.copy, leaving aside files that were in the destination for some
other reason.
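A sketch of how the new mode might be used (paths illustrative; in practice
the previous registry would be rebuilt from the --track dump):
```
from mozpack.copier import FileCopier, FileRegistry
from mozpack.files import File

# What a previous run installed.
previous = FileRegistry()
previous.add('components/foo.js', File('srcdir/foo.js'))

# This copy only removes files accounted for in `previous` (foo.js, which
# is no longer installed), leaving files that ended up in the destination
# some other way untouched.
copier = FileCopier()
copier.add('components/bar.js', File('srcdir/bar.js'))
copier.copy('objdir/dist/bin', remove_unaccounted=previous)
```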
The only use of BRANDING_FILES[...].source is in xulrunner/app/moz.build, for
the app.ico file.
This file has not been useful since the removal of the xpinstall-based
installer in bug 344236... 9 years ago.
Currently mach treats the first argument to eslint as the path and moves it to
the end of the arguments, but this breaks usage like "mach eslint -f json browser".
It used to be necessary to change to the directory you wanted to lint, but now
that the .eslintignore is at the top level, we just run from the top level. This
means the path argument doesn't need to be special anymore.
--HG--
extra : commitid : 5ozct0pVSC4
extra : rebase_source : 22132a240d8e6f4d099dbcdeb793958d7173e154
extra : amend_source : 2b9931b4283e1c84f699027e13eccc33fcdec978
The current implementation of HierarchicalStringList allows the following:
    FOO.bar = [
        'foo',
        'bar',
    ]
while
    FOO.bar += [
        'foo',
        'bar',
    ]
would be invalid because of the StrictOrderingOnAppendList enforcement.
It also allows overwriting the entire list with a subsequent
    FOO.bar = [
        'baz',
    ]
while we've explicitly forbidden such things for every other list.
While in the vicinity, fix HierarchicalStringList._get_export_variable to not
call the HierarchicalStringList constructor uselessly.
This will allow a new kind of special variable where it is possible to do
    FOO += ['bar']
All the current special variables are either strings (for which __setitem__ would
be called with a different string object), or a read-only dict (which doesn't
allow modifications).
We have many unit tests in the tree for some small parts of the build system
pipeline, but we don't have anything that resembles an end-to-end test, and we
kind of rely on the resulting Firefox not being broken by our changes.
With the FasterMake backend growing, I want to ensure it produces the same
thing as the RecursiveMake backend, at least for the parts it supports.
This adds a test that allows checking exactly that.
The test I'm about to add doesn't have XPIDL files, and that currently
prevents the FasterMake backend from running properly. Also, in the future,
when the FasterMake backend grows the ability to build C++ files, it should be
possible to build SpiderMonkey with it, but SpiderMonkey doesn't have XPIDL
files either.
This makes it clearer that really it's the same thing as FINAL_TARGET,
with preprocessing.
We still keep DIST_FILES in backend.mk because it's shorter and doesn't
really matter.
This new ChromeManifestEntry object type is generic and can hold any kind of
chrome manifest entry, but we currently only emit them for binary components.
References to sub-directory manifests are left to the backend, for now, until
all manifest entries are emitted by the frontend.
Ideally, we should properly make and shell quote everything we print out
in makefiles, but that's a can of worms I don't want to open just yet. So
I'll limit myself to just passthru variables.
This further improves the changes from bug 1224460 to e.g. handle variable
references mixed with text, and to avoid adding empty strings to the
resulting flags variables when the expansion leads to an empty string.
Pymake's clinetoargv is very specific to pymake's use case, yet has been abused
as a replacement for shlex because shlex doesn't handle things properly for our
use cases.
Using pymake's clinetoargv, however, has shortcomings, and we're better off
importing its code in mozbuild, simplifying it a little, and using that
instead.
Plus, fewer dependencies on pymake will help kill it for good some day.
FlatFormatter, JarFormatter and OmniJarFormatter all, in some way, deal
with different pieces of the package being handled differently.
Instead of each of them dealing with its pieces in some subtly different
way, introduce a new base package formatter class that will handle it for
all of them.
Use this new PiecemealFormatter for the FlatFormatter.
Only directories containing chrome manifests are given as base to formatters,
but there can still be files given outside the bases, like, on Mac builds,
all files in Content/MacOS, or Content/Info.plist, whereas chrome manifests
are under Content/Resources.
There is a lot of repetition across its various tests, and we're going to add
some more in a subsequent change, so it is desirable to make it a less
repetitive task.
This function was found to be a little slow while profiling, due to repeated
calls to mozpath.dirname. This patch speeds up the function by replacing
dirname with string manipulation (these paths are already normalized), by
caching results per directory, and by converting from iteration to recursion
to increase use of the cache.
This commit speeds up the "install tests" step, run as a part of the build
and of running tests, by ~10% on a fast Linux laptop.
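The pattern, as an illustrative sketch (not the exact function):
```
_ancestors = {}

def ancestors(path):
    # Paths are already normalized, so dirname reduces to a string slice;
    # recursing (rather than iterating) means every parent directory also
    # lands in the cache for later lookups.
    if path not in _ancestors:
        pos = path.rfind('/')
        parent = path[:pos] if pos > 0 else ''
        _ancestors[path] = [path] + (ancestors(parent) if parent else [])
    return _ancestors[path]
```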
--HG--
extra : commitid : HdYkcXQ2ezQ
Support for displaying docstrings in `mach help` was added relatively
recently. `mach build` was never documented. Let's document it.
There are a gazillion things we could put in the documentation. For now,
mainly focus on targets.
--HG--
extra : commitid : FjtVDISK9Q5
extra : rebase_source : a69ba419e49ca0e4435e87597fdfe34623917a6c
extra : amend_source : 1161bf83569c82340ad1e4e4d21ba7f600753af1
When a make target is generated with FileAvoidWrite, this can cause targets to
get rebuilt perpetually when a prerequisite is updated, because FileAvoidWrite
will leave the target's mtime older than the prerequisite's when the target's
contents are unchanged. To avoid this issue, GENERATED_FILES is modified to
unconditionally update its target's mtime.
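The idea, sketched (file name illustrative):
```
import os

from mozbuild.util import FileAvoidWrite

# FileAvoidWrite leaves the file untouched (and its mtime stale) when the
# new contents match, so bump the mtime unconditionally afterwards to keep
# make's prerequisite checks from re-firing forever.
with FileAvoidWrite('generated.h') as output:
    output.write('/* generated */\n')
os.utime('generated.h', None)
```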
--HG--
extra : commitid : 4k5e5rKtPZ2
Bug 118468 landed an option for FileAvoidWrite to always write to an output
file, whether or not the contents would be changed. This was to address a
problem caused by not updating mtimes when building GENERATED_FILES, but
undoes the purpose of FileAvoidWrite and isn't really necessary.
This is addressed in a subsequent commit by unconditionally updating
mtimes when processing GENERATED_FILES.
--HG--
extra : commitid : AfOhgUstokq
Since MOZ_NATIVE_DEVICES builds against play-services-{basement,base,cast},
some ad-hoc de-duplication is necessary.
--HG--
extra : commitid : 2jNIgZpLUq2
extra : source : 0957d3435ac22765d7868cb3c7db1e0787836bc3
Calling CommonBackend.consume_object ensures that we process WebIDL and
IPDL files (and many other things) correctly. Calling
CommonBackend.consume_finished ensures that the CompileDB backend gets
to see the unified bindings and protocol files that we generate, and add
those files to the compilation database.
The only thing we need the obj for here is getting the objdir. Future
patches will just have an objdir when calling this function, and not a
proper mozbuild object. In light of these facts, let's change the
function to accept an objdir only, which will make those future patches
easier.
For GENERATED_FILES scripts that want to report dependencies, this
change makes it easy to use |preprocess|, rather than having to
construct and use |Preprocessor| manually.
In addition to their inputs declared in moz.build files, generated files
may also depend on other files, such as #includes in preprocessed files.
Let's provide a place for file_generate.py to write out those extra
dependencies.
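A hypothetical GENERATED_FILES script along those lines (the convention for
returning extra dependencies and the `includes` attribute are assumptions of
this sketch):
```
from mozbuild.preprocessor import Preprocessor

def main(output, input_path):
    pp = Preprocessor()
    pp.context.update(DEBUG='1')
    pp.out = output
    pp.do_include(input_path)
    # Report everything the preprocessor pulled in via #include, minus the
    # primary input, so it can be recorded as dependencies.
    return set(pp.includes) - {input_path}
```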
Indicating a jar currently looks like the following in a jar manifest:
    path/to/name.jar:
The `path/to` doesn't contain the implicit "chrome/" directory. This, in
turn, doesn't allow much flexibility to use the jar maker for what is not
necessarily under chrome/.
To use the jar maker to fill some chrome manifest for the default theme
extension, we currently use a hackish path to get to the right location,
and rely on the chrome.manifest file in the parent directory never to be
picked by the package manifest, which is a quite horrible way to do this,
but worked well enough for that specific use case.
With the need to handle system addons at the build system level, it
becomes necessary to come up with something less hackish.
What this change introduces is an additional syntax for the jar manifest,
in the following form:
    [base/path] sub/path/to/name.jar:
Using this syntax, there is no implicit 'chrome' path. The `base/path` is
relative to the current DIST_SUBDIR, and the `sub/path` is relative to that
`base/path`. The distinction can be useful for build system backends.
The assumption that the "root" chrome.manifest is in the parent directory
of the implicit "chrome" directory dies, and the `base/path` is where the
root chrome.manifest is placed.
The bulk of this commit was generated with a script, executed at the top
level of a typical source code checkout. The only non-machine-generated
part was modifying MFBT's moz.build to reflect the new naming.
CLOSED TREE makes big refactorings like this a piece of cake.
# The main substitution.
find . -name '*.cpp' -o -name '*.cc' -o -name '*.h' -o -name '*.mm' -o -name '*.idl'| \
xargs perl -p -i -e '
s/nsRefPtr\.h/RefPtr\.h/g; # handle includes
s/nsRefPtr ?</RefPtr</g; # handle declarations and variables
'
# Handle a special friend declaration in gfx/layers/AtomicRefCountedWithFinalize.h.
perl -p -i -e 's/::nsRefPtr;/::RefPtr;/' gfx/layers/AtomicRefCountedWithFinalize.h
# Handle nsRefPtr.h itself, a couple places that define constructors
# from nsRefPtr, and code generators specially. We do this here, rather
# than indiscriminately s/nsRefPtr/RefPtr/, because that would rename
# things like nsRefPtrHashtable.
perl -p -i -e 's/nsRefPtr/RefPtr/g' \
mfbt/nsRefPtr.h \
xpcom/glue/nsCOMPtr.h \
xpcom/base/OwningNonNull.h \
ipc/ipdl/ipdl/lower.py \
ipc/ipdl/ipdl/builtin.py \
dom/bindings/Codegen.py \
python/lldbutils/lldbutils/utils.py
# In our indiscriminate substitution above, we renamed
# nsRefPtrGetterAddRefs, the class behind getter_AddRefs. Fix that up.
find . -name '*.cpp' -o -name '*.h' -o -name '*.idl' | \
xargs perl -p -i -e 's/nsRefPtrGetterAddRefs/RefPtrGetterAddRefs/g'
if [ -d .git ]; then
git mv mfbt/nsRefPtr.h mfbt/RefPtr.h
else
hg mv mfbt/nsRefPtr.h mfbt/RefPtr.h
fi
--HG--
rename : mfbt/nsRefPtr.h => mfbt/RefPtr.h
The configure option has explicitly thrown an error for more than a year now,
and it happens that the remaining way to still forcefully use it has been
broken for more than 8 months.
DONTBUILD NPOTB
This downloads to a temporary file named uniquely but consistently
based on the URL, and then extracts a build ID using mozversion to use
as a human-readable and sortable prefix. This approach can be re-used
by |mach artifact| based Desktop builds.
--HG--
extra : commitid : LxorDuq5D0t
extra : rebase_source : 2f280746f486b79dfe45ad928e4b618e0e12f1a0
It was added back in 5147d5c69f for unclear reasons (and the lack of a bug
number doesn't help), and hasn't been used, as far as I can see in the
gecko-dev history, other than in bug 206029, which is the only use currently
in the tree.
Bug 206029 was working around the Flash player installer modifying Firefox's
prefs file and not dealing with it properly or something depending on the line
endings. 11 years later, all prefs files except channel-prefs.js are in
omni.ja, so obviously, bug 206029 doesn't actually apply anymore.
So, let's simplify it all and get rid of this.
Compressing C++ unit tests is a long pole when writing test archives.
Experimenting with various levels of compression revealed that
compression level 9 was providing minimal space savings for
significantly longer archiving times and greater CPU usage.
Results of our experimentation with `make -sj8 package-tests` on OS X
at various compression levels are below. Note: these numbers were
accidentally obtained without JS tests being archived. This skews the
results a little but doesn't impact the analysis below.
ARCHIVE          SIZE         WALL     CPU
(L=9)
cppunittest       76,806,629  30.6s
mochitest         61,276,928   9.4s
reftest           31,204,396  11.0s
ALL              228,146,761  31.2s   75.9s
(L=8)
cppunittest       76,851,593  24.1s
mochitest         61,279,322   8.9s
reftest           31,207,867  10.4s
ALL              228,228,096  24.9s   64.7s
(L=7)
cppunittest       77,102,292  14.3s
mochitest         61,305,147   8.2s
reftest           31,260,359   9.4s
ALL              228,717,803  15.0s   49.1s
(L=6)
cppunittest       77,321,408  11.5s
mochitest         61,336,539   8.2s
reftest           31,303,604   9.2s
ALL              229,123,307  12.2s   44.7s
(L=5)
cppunittest       78,226,404   8.2s
mochitest         61,483,804   7.6s
reftest           31,509,349   8.8s
ALL              230,725,600   9.6s   39.7s
(L=4)
cppunittest       79,733,669   6.3s
mochitest         61,825,519   7.6s
reftest           31,924,171   8.4s
ALL              233,669,991   9.0s   36.4s
(L=3)
cppunittest       82,380,731   5.8s
mochitest         62,554,431   7.1s
reftest           32,696,415   8.1s
ALL              239,180,168   8.9s   34.6s
Levels lower than 3 resulted in larger archives with no decrease in
wall time and a marginal decrease in CPU time.
As we can see, lowering the compression level reduces archiving time by
>3x while only increasing total archive size by ~2.5 MB or ~1% for
compression level 5.
Total time hits a plateau around levels 4 and 5. After that, file size
increases faster for little decrease in wall time. I suspect that we're
hitting Python limits from having to process thousands of files: there's
a limit to how fast Python can do I/O and make function calls.
I think choosing 4 or 5 for the new compression level is acceptable.
I went with 5 because the wall time savings from 5 to 4 are marginal and
the archive size does start to increase a bit faster at 4. That being
said, 4 does consume 10% less CPU. I could easily justify 4 as well. 5 is
more conservative. We can always change to 4 after seeing results in the
wild.
The end result of this change is `make package-tests` is much faster:
Before: 228,146,761 bytes; 31.2s wall; 75.9s CPU
After: 230,725,600 bytes; 11.4s wall; 45.0s CPU
Delta: +2,578,839 bytes; -19.8s wall; -30.9s CPU
When you take the whole series into consideration:
Before: 44.2s wall; 84.6s CPU
After: 11.4s wall; 45.0s CPU
Lowering CPU is impressive considering we switched from the C `zip`
implementation to Python!
Keep in mind we were at ~78s wall before e87b74b3db43 introduced
concurrent archive generation!
And we still haven't eliminated the staging of JS tests, which are
several thousand files and a few dozen MB!
--HG--
extra : commitid : D1fD4NUTw2F
extra : rebase_source : c6de72656cfedc98c0cf1c09eefe1dfb84f3639b
An upcoming commit will introduce a caller that doesn't want the maximum
compression level. This commit introduces arguments to control the
compression level inside written archives.
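In use, this looks something like (the compress_level argument name is an
assumption here):
```
from mozpack.files import FileFinder
from mozpack.mozjar import JarWriter

# Level 5 trades ~1% archive size for >3x faster writes, per the numbers
# above.
with JarWriter('cppunittest.zip', compress_level=5) as writer:
    for path, f in FileFinder('objdir/dist/cppunittests').find('**'):
        writer.add(path, f.open())
```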
--HG--
extra : commitid : KkDso3hB2QG
extra : rebase_source : 8fd05aeae5c3555e1169eac6656d584007cd0739
Metrics are nice. Adding this output clearly demonstrates that C++ unit
tests are the long pole by far: they take ~95% of wall execution time
to archive (~30s total). The next longest archive only takes ~11s to
produce. This will be important if we ever want to reduce archive time
further on optimal hardware.
FWIW, disabling compression will produce a C++ unit test archive in
1.0s. Archives with more files take longer, despite the significantly
smaller sizes.
--HG--
extra : commitid : 6E56aUoZUL2
extra : rebase_source : 48cad51d7fbae883861f35e1b5cb96799b452bfb
Won't impact performance much. But fewer one-off make rules make porting
the C++ unit tests (which are the largest remaining tests) to the Python
archiver easier to grok.
This conversion did change behavior slightly. Previously, startup
cache files weren't being packaged if startup cache was disabled. Now,
we always package them since their presence in the test archive should
be harmless. The original change to guard their inclusion in
ee82e0ae5488 was probably unnecessary.
--HG--
extra : commitid : AzU65j0E1q0
extra : rebase_source : 9b8a15dc1a5f3c3d3e453cefb3a99b05f5a77711
This prevents copying of 447 files adding to ~4 MB.
--HG--
extra : commitid : 7zTbiQeMQSQ
extra : rebase_source : b3ac223835ba7289ace45aa7d02c5a050d54cc0d
This saves copying of ~100 files comprising ~1 MB. Not significant. But
it gets us a little closer to no staging.
--HG--
extra : commitid : 6Hjnhv4Yi5R
extra : rebase_source : 291c89682a23cde957b3c68f2efe3b6dc3d3d543
This is slightly more involved than earlier changes because reftests
have a one-off mechanism for finding files. Essentially, the master
reftest manifest is loaded, directories are discovered, and every file
in those directories is packaged.
We add support to our test archive generation tool to read sources from
reftest manifests and tell it where the reftest manifests are.
print-manifest-dirs.py was only being used for staging reftest files.
Since we don't do that any more, the functionality doesn't need to exist
in a standalone file, so it has been moved inline into test_archive.py.
This change avoids copying ~26,000 tests consuming 131 MB during test
packaging. This is a majority of the file count that was remaining in
the stage directory at this point. On my machine (which hasn't typically
seen major wall time wins from not staging files due to its fast SSD),
this change made test packaging ~20% faster, reducing wall time from
~50s to ~40s!
A Try push seemed to indicate drastic results with the series up to this
point. Including the already landed changes to generate test archives
concurrently, test packaging times on OS X builders dropped from ~18:40
to 6:29! Times on Linux x64 remained about the same (~2:46). This is
possibly due to these machines already having SSDs and due to normal
variance in performance of builders and EC2 instances.
--HG--
extra : commitid : 34E8V8lSGg7
extra : rebase_source : 720afcd35f6a2b6cb1217df23ae981408a88cb94
After this, only reftest files themselves are staged. Those will be
addressed in a subsequent commit.
--HG--
extra : commitid : 9jWl9Twcizr
extra : rebase_source : 3e4a319d60b7ee7eddecc597eb250184140b1e71
This avoids copying 5000+ files consuming ~37 MB on my build
configuration.
--HG--
extra : commitid : 6DmsjUYgjXq
extra : rebase_source : 123dd42a7d0b9cc244a3ab7773010dfc5769a4ac
With this change, all test ZIP archives are now generated via Python and
mozpack.
This change does not change I/O or file copy behavior at all. There is
still a lot of room for eliminating extra file copies.
--HG--
extra : commitid : 9mWdtDK6wAb
extra : rebase_source : 0f19c627d64d22bf9d65161d4f7df7c9778dea3c
This doesn't change I/O or copy behavior at all. But it does remove a
one-off make rule.
--HG--
extra : commitid : X0efdFHA0k
extra : rebase_source : c7cb8616461eccd1ff7f8eb3b409bd4944c9e1ec
This is pretty straightforward. This saves ~26 MB of file copies.
--HG--
extra : commitid : ItghoP73zS8
extra : rebase_source : 9656719a6459c1e6fa28165591722fe00d6d9b1d
The web-platform test archive now builds without any staging at all.
This saves ~103 MB of file copies on my machine.
The testing/web-platform/Makefile.in serves no purpose after this
change, so it and all references to it have been removed.
--HG--
extra : commitid : HDHGG3QGVBH
extra : rebase_source : dd7302aad96b46932aa00e4e66918c8077475b10
This is very similar to what we did for xpcshell. Like xpcshell, there
are still some staged files. However, about 73MB of copies are
eliminated with this change. On my machine, overall execution time of
test packaging appears to decrease, although CPU usage is up slightly.
--HG--
extra : commitid : 5dy340X80J9
extra : rebase_source : d37be29367b17e6c1d9c885ab4705932b7a42b39
This commit produces the xpcshell test archive without staging 5000+
xpcshell test files first.
We teach the archiver to ignore .mkdir.done files.
The xpcshell Makefile.in still stages some files. This is less than
ideal. However, it is a small handful of files and shouldn't add too
much overhead.
This appears to not impact overall CPU usage significantly on my
machine, despite using Python instead of `zip`. It does reduce I/O
by ~25MB by avoiding the staging copy.
--HG--
extra : commitid : IwvLaYvAbFt
extra : rebase_source : a690ae4b1adbabd491851a2479fa66d81241601b
Test archive generation currently copies a bunch of files into a staging
area then runs `zip` to produce ZIP files. There are 2 concerns with
this approach:
1) We incur a lot of extra I/O to copy files so that everything is
rooted in a single tree and the `zip` invocation and paths are
simple.
2) ZIP files inherit properties from the local filesystem (including
mtime), making ZIP files non-deterministic.
This commit introduces a new mozbuild action for producing test
archives. It does so using the mozpack file finder and JAR writer,
which are used throughout the build to deterministically
produce ZIP/JAR files from files in multiple source directories.
We implement support for producing the mozharness archive. This archive
does not involve files that are staged, so no I/O is saved. In fact,
the switch from `zip` to Python likely makes this slightly slower.
However, we do have deterministic archives now.
Additional archives will be ported over in subsequent commits.
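Stripped down to a minimal sketch (not the actual action, which handles many
archives from a table of definitions):
```
from mozpack.files import FileFinder
from mozpack.mozjar import JarWriter

# Read files straight from their source directory and write a
# deterministic ZIP, with no staging copies in between.
def write_archive(out_path, source_dir, prefix):
    with JarWriter(out_path) as writer:
        for path, f in FileFinder(source_dir).find('**'):
            writer.add('%s/%s' % (prefix, path), f.open())

write_archive('mozharness.zip', 'testing/mozharness', 'mozharness')
```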
--HG--
extra : commitid : H1BOidPDZST
extra : rebase_source : 120e2bfea921e5fb3a8d97b2dd0227edce452cfd
Previously, we always skipped over files beginning with a ".". This
commit adds an option to include them.
This is needed to support test package generation via Python / mozpack.
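In use (a sketch; find_dotfiles is the parameter name as I understand it):
```
from mozpack.files import FileFinder

# Without the option, paths whose basename starts with '.' are silently
# skipped; with it, files like .mkdir.done are yielded too.
finder = FileFinder('testing/mozharness', find_dotfiles=True)
dotfiles = [p for p, f in finder.find('**')
            if p.rsplit('/', 1)[-1].startswith('.')]
```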
--HG--
extra : commitid : 4pmEpukVX0s
extra : rebase_source : 31599230ce344b9be815b3a457cc8a7c6d8e5301
The flags added in toolkit/locales/Makefile.in turn out not to be actually
used, so just remove them.
The remaining uses of XULPPFLAGS are to set debug flags depending on whether
MOZ_DEBUG is set or not. Just set a dedicated variable with the right value
from configure.
When running `mach build-backend` or `config.status`, it is now possible to
pass multiple backends to the --backend/-b option, so that they can share
moz.build reading and object emitting.
The command line syntax is however maybe a little awkward:
mach build-backend -b Backend1 Backend2
but supporting `-b Backend1 -b Backend2` requires more argument parser
twiddling (action='append' doesn't work out of the box with choices; we'd
need a custom action class).
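For the record, nargs='+' gives that syntax with choices validation for free
(a sketch):
```
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-b', '--backend', nargs='+',
                    choices=['RecursiveMake', 'FasterMake', 'CompileDB'],
                    default=['RecursiveMake'])
args = parser.parse_args(['-b', 'RecursiveMake', 'FasterMake'])
assert args.backend == ['RecursiveMake', 'FasterMake']
```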
Currently, we set a flag on each object to know whether it has been consumed
by the backend. This doesn't work nicely when multiple backends try to consume
the same objects.
- Make all backends report the time spent in their own execution
- Change how the data is collected for the reader and emitter such that
each of them is aware of its own data, instead of everything being
tracked by the backend.
This is meant to open the door to multiple backends running from the
same execution of config.status.
This commit exposes test-deps file info as a mach command, and
modifies the test scheme reader to make it filter out unsuitable
contexts when generating TestManifest objects for metadata context.
--HG--
extra : commitid : 7QOoHkfvWOF
This bumps the NDK version to r10e.
Previously, we used brew to install android-sdk and a custom version
of android-ndk. That makes it hard to control the installed versions.
This installs from downloaded archives, which unifies the Mac OS X
approach with the straight-forward Linux approach.
--HG--
extra : commitid : E7hEqsyy8Gw
extra : rebase_source : 9ea27e7d2ae3fbaaa3efbabdd701521981bec877
extra : histedit_source : c07c80c50ac066dc6808e7ccf96f0bc14dc09df2
The 'tools' package depends on 'platform-tools-preview' now. Roll
with it until Google breaks us back again.
The behaviour of the |android| tool has changed; recent versions don't
reveal what packages are installed. That means we can't skip already
installed packages; and we can't really tell if our installation
attempts succeeded. But we have faith!
--HG--
extra : commitid : 341NxbHTJXC
extra : rebase_source : 945e8018effc0b417fc3fedb7220455fabeaedb3
extra : histedit_source : e42fb06e176d5b9e9ebb6553af7045f3a061105f
This gets us a limited version of AAR support: we can consume static
AAR libraries, where here static does not refer to linking, but to
static assets that are fixed at build-backend time and not modified
(or produced) during the build. This lets us pin our dependencies
(and move to Google's versioned Maven repository packages, away from
Google's unversioned ad-hoc packages).
By restricting to static AAR libraries, we avoid having to handle
truly complicated dependency trees, as changing parts of generated AAR
files require delicate rebuilding of the APKs (and internal libraries)
that depend on the AAR files.
It is possible that we will generate AARs in the tree at some time.
Right now, we don't do that, even for GeckoView: the AARs produced are
assembled as artifacts at package time and are intended for external
consumption. We might want this for GeckoView and Fennec at some
time; we should consider using Gradle everywhere at that point.
The patch itself does the simplest possible thing (which has precedent
from Gradle and other build systems): it simply "explodes" the AAR
into the object directory and uses existing mechanisms to refer to the
exploded pieces.
AARs have both required and optional components. Each component is
defined with an expected and required flag. If a component is expected
and not present, or not expected and is present, an error is raised.
If the component is expected and present, autoconf's ifelse() macro is
used to define the relevant AAR_* component variables. If the
component is not expected and not present, no action is taken. A
consuming build backend therefore can guard all AAR_* component
variables with just the top-level AAR variable.
Many AAR files have empty assets/ directories. This patch doesn't
explode empty assets/ directories, protecting against trivial changes
to AAR files that don't impact the build.
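A toy version of the "explode" step (paths and layout assumed):
```
import os
import zipfile

def explode_aar(aar_path, destdir):
    with zipfile.ZipFile(aar_path) as aar:
        for info in aar.infolist():
            if info.filename.endswith('/'):
                # Directory entries are skipped, so an empty assets/
                # directory never materializes in the objdir.
                continue
            target = os.path.join(destdir, info.filename)
            parent = os.path.dirname(target)
            if not os.path.isdir(parent):
                os.makedirs(parent)
            with open(target, 'wb') as fh:
                fh.write(aar.read(info))
```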
There's a lot not to like in this approach, including:
* We need to manually reference internal AAR libs;
* I haven't separated the pinned version numbers out of configure.in.
However, it's closer to what we want than what we have!
--HG--
extra : commitid : 11kUhDAkCn5
extra : rebase_source : 2454c9842ab3296d53ca5fa394a5a962aa382c8d
extra : histedit_source : e2f97502d215016925e93500b8fd93f8b32fba3a