Since MOZ_NATIVE_DEVICES builds against play-services-{basement,base,cast},
some ad-hoc de-duplication is necessary.
--HG--
extra : commitid : 2jNIgZpLUq2
extra : source : 0957d3435ac22765d7868cb3c7db1e0787836bc3
Calling CommonBackend.consume_object ensures that we process WebIDL and
IPDL files (and many other things) correctly. Calling
CommonBackend.consume_finished ensures that the CompileDB backend gets
to see the unified bindings and protocol files that we generate, and add
those files to the compilation database.
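Schematically, the pattern is something like the following (a minimal sketch; the bool return convention and the method bodies are assumptions for illustration, not the real CompileDB backend code):

from mozbuild.backend.common import CommonBackend

class CompileDBBackend(CommonBackend):
    def consume_object(self, obj):
        # Let CommonBackend process WebIDL, IPDL, and other shared
        # objects first.
        if CommonBackend.consume_object(self, obj):
            return True
        # ... CompileDB-specific handling would go here ...
        return False

    def consume_finished(self):
        # Give CommonBackend the chance to emit the unified bindings and
        # protocol files, so they can be recorded in the compilation
        # database.
        CommonBackend.consume_finished(self)
        # ... write out the compilation database here ...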
The only thing we need the obj for here is getting the objdir. Future
patches will just have an objdir when calling this function, and not a
proper mozbuild object. In light of these facts, let's change the
function to accept an objdir only, which will make those future patches
easier.
For GENERATED_FILES scripts that want to report dependencies, this
change makes it easy to use |preprocess|, rather than having to
construct and use |Preprocessor| manually.
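A generation script might then look roughly like this (a sketch: the preprocess() keyword arguments and its returned dependency set are assumptions for illustration):

from mozbuild.preprocessor import preprocess

def main(output, input_path):
    # preprocess() can report every file it read, including nested
    # #includes, which main() hands back as extra dependencies.
    return preprocess(includes=[input_path], defines={'DEBUG': True},
                      output=output)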
In addition to their inputs declared in moz.build files, generated files
may also depend on other files, such as #includes in preprocessed files.
Let's provide a place for file_generate.py to write out those extra
dependencies.
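One way to picture it: the extra dependencies land in a makefile-style dep file next to the output (the exact format file_generate.py writes is an assumption here):

def write_depfile(depfile, output, extra_deps):
    # Record "output: dep1 dep2 ..." so the build backend re-runs the
    # generation script whenever any extra dependency changes.
    with open(depfile, 'w') as fh:
        fh.write('%s: %s\n' % (output, ' '.join(sorted(extra_deps))))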
Indicating a jar currently looks like the following in a jar manifest:
path/to/name.jar:
The `path/to` doesn't contain the implicit "chrome/" directory. This, in
turn, leaves little flexibility to use the jar maker for things that don't
necessarily live under chrome/.
To use the jar maker to fill a chrome manifest for the default theme
extension, we currently use a hackish path to get to the right location,
and rely on the chrome.manifest file in the parent directory never being
picked up by the package manifest. That is quite a horrible way to do
this, but it worked well enough for that specific use case.
With the need to handle system addons at the build system level, it
becomes necessary to come up with something less hackish.
What this change introduces is an additional syntax for the jar manifest,
in the following form:
[base/path] sub/path/to/name.jar:
Using this syntax, there is no implicit 'chrome' path. The `base/path` is
relative to the current DIST_SUBDIR, and the `sub/path` is relative to that
`base/path`. The distinction can be useful for build system backends.
The assumption that the "root" chrome.manifest lives in the parent
directory of the implicit "chrome" directory no longer holds: the
`base/path` is now where the root chrome.manifest is placed.
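For example, a hypothetical entry for a system add-on (the add-on name and files are made up for illustration) could look like:

[features/myaddon@mozilla.org] chrome/myaddon.jar:
% content myaddon %content/
  content/main.js (main.js)

With this, the root chrome.manifest lands in features/myaddon@mozilla.org/ (relative to DIST_SUBDIR), and the jar itself under chrome/ below that.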
The bulk of this commit was generated with a script, executed at the top
level of a typical source code checkout. The only non-machine-generated
part was modifying MFBT's moz.build to reflect the new naming.
CLOSED TREE makes big refactorings like this a piece of cake.
# The main substitution.
find . -name '*.cpp' -o -name '*.cc' -o -name '*.h' -o -name '*.mm' -o -name '*.idl' | \
xargs perl -p -i -e '
s/nsRefPtr\.h/RefPtr\.h/g; # handle includes
s/nsRefPtr ?</RefPtr</g; # handle declarations and variables
'
# Handle a special friend declaration in gfx/layers/AtomicRefCountedWithFinalize.h.
perl -p -i -e 's/::nsRefPtr;/::RefPtr;/' gfx/layers/AtomicRefCountedWithFinalize.h
# Handle nsRefPtr.h itself, a couple places that define constructors
# from nsRefPtr, and code generators specially. We do this here, rather
# than indiscriminately s/nsRefPtr/RefPtr/, because that would rename
# things like nsRefPtrHashtable.
perl -p -i -e 's/nsRefPtr/RefPtr/g' \
mfbt/nsRefPtr.h \
xpcom/glue/nsCOMPtr.h \
xpcom/base/OwningNonNull.h \
ipc/ipdl/ipdl/lower.py \
ipc/ipdl/ipdl/builtin.py \
dom/bindings/Codegen.py \
python/lldbutils/lldbutils/utils.py
# In our indiscriminate substitution above, we renamed
# nsRefPtrGetterAddRefs, the class behind getter_AddRefs. Fix that up.
find . -name '*.cpp' -o -name '*.h' -o -name '*.idl' | \
xargs perl -p -i -e 's/nsRefPtrGetterAddRefs/RefPtrGetterAddRefs/g'
if [ -d .git ]; then
git mv mfbt/nsRefPtr.h mfbt/RefPtr.h
else
hg mv mfbt/nsRefPtr.h mfbt/RefPtr.h
fi
--HG--
rename : mfbt/nsRefPtr.h => mfbt/RefPtr.h
The configure option has explicitly thrown an error for more than a year
now, and as it happens, the remaining way to forcefully use it anyway has
been broken for more than 8 months.
DONTBUILD NPOTB
This downloads to a temporary file named uniquely but consistently
based on the URL, and then extracts a build ID using mozversion to use
as a human readable and sortable prefix. This approach can be re-used
by |mach artifact| based Desktop builds.
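A sketch of the naming scheme (the hash choice and the 'application_buildid' key are assumptions for illustration; mozversion's get_version() is the actual extraction mechanism):

import hashlib
import mozversion

def prefixed_name(url, downloaded_binary):
    # A digest of the URL gives a unique but consistent temporary name.
    digest = hashlib.sha256(url.encode('utf-8')).hexdigest()
    # The build ID provides a human readable, sortable prefix.
    info = mozversion.get_version(binary=downloaded_binary)
    return '%s-%s' % (info['application_buildid'], digest)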
--HG--
extra : commitid : LxorDuq5D0t
extra : rebase_source : 2f280746f486b79dfe45ad928e4b618e0e12f1a0
It was added back in 5147d5c69f for unclear reasons (and the lack of a bug
number doesn't help), and as far as I can see in the gecko-dev history, it
hasn't been used other than in bug 206029, which is the only use currently
in the tree.
Bug 206029 was working around the Flash player installer modifying
Firefox's prefs file and, depending on the line endings, not dealing with
it properly. 11 years later, all prefs files except channel-prefs.js are
in omni.ja, so obviously, bug 206029 doesn't actually apply anymore.
So, let's simplify it all and get rid of this.
Compressing C++ unit tests is a long pole when writing test archives.
Experimenting with various levels of compression revealed that
compression level 9 was providing minimal space savings for
significantly longer archiving times and greater CPU usage.
Results of experimenting with `make -sj8 package-tests` on OS X at
various compression levels are below. Note: these numbers were
accidentally obtained without the JS tests being archived. This skews the
results a little but doesn't impact the analysis below.
ARCHIVE              SIZE   WALL    CPU
(L=9)
cppunittest    76,806,629  30.6s
mochitest      61,276,928   9.4s
reftest        31,204,396  11.0s
ALL           228,146,761  31.2s  75.9s
(L=8)
cppunittest    76,851,593  24.1s
mochitest      61,279,322   8.9s
reftest        31,207,867  10.4s
ALL           228,228,096  24.9s  64.7s
(L=7)
cppunittest    77,102,292  14.3s
mochitest      61,305,147   8.2s
reftest        31,260,359   9.4s
ALL           228,717,803  15.0s  49.1s
(L=6)
cppunittest    77,321,408  11.5s
mochitest      61,336,539   8.2s
reftest        31,303,604   9.2s
ALL           229,123,307  12.2s  44.7s
(L=5)
cppunittest    78,226,404   8.2s
mochitest      61,483,804   7.6s
reftest        31,509,349   8.8s
ALL           230,725,600   9.6s  39.7s
(L=4)
cppunittest    79,733,669   6.3s
mochitest      61,825,519   7.6s
reftest        31,924,171   8.4s
ALL           233,669,991   9.0s  36.4s
(L=3)
cppunittest    82,380,731   5.8s
mochitest      62,554,431   7.1s
reftest        32,696,415   8.1s
ALL           239,180,168   8.9s  34.6s
Levels lower than 3 resulted in larger archives with no decrease in
wall time and only a marginal decrease in CPU time.
As we can see, lowering the compression level reduces archiving time by
>3x while only increasing total archive size by ~2.5 MB or ~1% for
compression level 5.
Total time hits a plateau around levels 4 and 5. Below that, file size
increases faster for little decrease in wall time. I suspect we're
hitting Python limits from having to process thousands of files: Python
can only do I/O and make function calls so fast.
I think either 4 or 5 is an acceptable choice for the new compression
level. I went with 5 because the wall time savings from 5 to 4 are
marginal and the archive size does start to increase a bit faster at 4.
That being said, 4 does consume 10% less CPU, so I could easily justify 4
as well; 5 is simply more conservative. We can always change to 4 after
seeing results in the wild.
The end result of this change is `make package-tests` is much faster:
Before: 228,146,761 bytes; 31.2s wall; 75.9s CPU
After: 230,725,600 bytes; 11.4s wall; 45.0s CPU
Delta: +2,578,839 bytes; -19.8s wall; -30.9s CPU
When you take the whole series into consideration:
Before: 44.2s wall; 84.6s CPU
After: 11.4s wall; 45.0s CPU
Lowering CPU is impressive considering we switched from the C `zip`
implementation to Python!
Keep in mind we were at ~78s wall before e87b74b3db43 introduced
concurrent archive generation!
And we still haven't eliminated the staging of JS tests, which are
several thousand files and a few dozen MB!
--HG--
extra : commitid : D1fD4NUTw2F
extra : rebase_source : c6de72656cfedc98c0cf1c09eefe1dfb84f3639b
An upcoming commit will introduce a caller that doesn't want the maximum
compression level. This commit introduces arguments to control the
compression level inside written archives.
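The shape of the plumbing, as a standalone zlib sketch (the real change is in mozpack's archive writer, whose argument names may differ):

import zlib

def deflate(data, level=9):
    # ZIP members store raw DEFLATE streams, hence the -15 window bits.
    # Lower levels trade a slightly larger archive for much less CPU.
    compressor = zlib.compressobj(level, zlib.DEFLATED, -15)
    return compressor.compress(data) + compressor.flush()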
--HG--
extra : commitid : KkDso3hB2QG
extra : rebase_source : 8fd05aeae5c3555e1169eac6656d584007cd0739
Metrics are nice. Adding this output clearly demonstrates that C++ unit
tests are the long pole by far: they take ~95% of wall execution time
to archive (~30s total). The next longest archive only takes ~11s to
produce. This will be important if we ever want to reduce archive time
further on optimal hardware.
FWIW, disabling compression will produce a C++ unit test archive in
1.0s. Archives with more files take longer, despite the significantly
smaller sizes.
--HG--
extra : commitid : 6E56aUoZUL2
extra : rebase_source : 48cad51d7fbae883861f35e1b5cb96799b452bfb
Won't impact performance much. But having fewer one-off make rules makes
porting the C++ unit tests (which are the largest remaining tests) to the
Python archiver easier to grok.
This conversion did change behavior slightly. Previously, startup
cache files weren't being packaged if startup cache was disabled. Now,
we always package them since their presence in the test archive should
be harmless. The original change to guard their inclusion in
ee82e0ae5488 was probably unnecessary.
--HG--
extra : commitid : AzU65j0E1q0
extra : rebase_source : 9b8a15dc1a5f3c3d3e453cefb3a99b05f5a77711
This prevents copying of 447 files adding to ~4 MB.
--HG--
extra : commitid : 7zTbiQeMQSQ
extra : rebase_source : b3ac223835ba7289ace45aa7d02c5a050d54cc0d
This saves copying of ~100 files comprising ~1 MB. Not significant. But
it gets us a little closer to no staging.
--HG--
extra : commitid : 6Hjnhv4Yi5R
extra : rebase_source : 291c89682a23cde957b3c68f2efe3b6dc3d3d543
This is slightly more involved than earlier changes because reftests
have a one-off mechanism for finding files. Essentially, the master
reftest manifest is loaded, directories are discovered, and every file
in those directories is packaged.
We add support to our test archive generation tool for reading sources
from reftest manifests, and we tell it where those manifests live.
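The discovery is roughly this shape (a sketch assuming manifests reference each other via 'include' lines and use '#' comments; the real reftest manifest parser handles much more syntax):

import os

def collect_test_dirs(manifest_path, seen=None):
    # Walk manifests recursively, following 'include' lines, and collect
    # the directory of every test entry for packaging.
    seen = set() if seen is None else seen
    base = os.path.dirname(manifest_path)
    with open(manifest_path) as fh:
        for line in fh:
            line = line.split('#', 1)[0].strip()  # strip comments
            if not line:
                continue
            if line.startswith('include '):
                included = os.path.join(base, line.split(None, 1)[1])
                collect_test_dirs(included, seen)
            else:
                # Any test entry implicates its manifest's directory.
                seen.add(base)
    return seen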
print-manifest-dirs.py was only being used for staging reftest files.
Since we don't do that any more, the functionality doesn't need to exist
in a standalone file, so it has been moved inline into test_archive.py.
This change avoids copying ~26,000 tests consuming 131 MB during test
packaging. This is a majority of the file count that was remaining in
the stage directory at this point. On my machine (which hasn't typically
seen major wall time wins from not staging files due to its fast SSD),
this change made test packaging ~20% faster, reducing wall time from
~50s to ~40s!
A Try push seemed to indicate drastic results with the series up to this
point. Including the already landed changes to generate test archives
concurrently, test packaging times on OS X builders dropped from ~18:40
to 6:29! Times on Linux x64 remained about the same (~2:46). This is
possibly due to these machines already having SSDs and due to normal
variance in performance of builders and EC2 instances.
--HG--
extra : commitid : 34E8V8lSGg7
extra : rebase_source : 720afcd35f6a2b6cb1217df23ae981408a88cb94
After this, only reftest files themselves are staged. Those will be
addressed in a subsequent commit.
--HG--
extra : commitid : 9jWl9Twcizr
extra : rebase_source : 3e4a319d60b7ee7eddecc597eb250184140b1e71
This avoids copying 5000+ files consuming ~37 MB on my build
configuration.
--HG--
extra : commitid : 6DmsjUYgjXq
extra : rebase_source : 123dd42a7d0b9cc244a3ab7773010dfc5769a4ac
With this change, all test ZIP archives are now generated via Python and
mozpack.
This change does not change I/O or file copy behavior at all. There is
still a lot of room for eliminating extra file copies.
--HG--
extra : commitid : 9mWdtDK6wAb
extra : rebase_source : 0f19c627d64d22bf9d65161d4f7df7c9778dea3c
This doesn't change I/O or copy behavior at all. But it does remove a
one-off make rule.
--HG--
extra : commitid : X0efdFHA0k
extra : rebase_source : c7cb8616461eccd1ff7f8eb3b409bd4944c9e1ec
This is pretty straightforward. This saves ~26 MB of file copies.
--HG--
extra : commitid : ItghoP73zS8
extra : rebase_source : 9656719a6459c1e6fa28165591722fe00d6d9b1d