This involves a few changes:
- Remove the .exe from the makensis binaries. which.which will
auto-add it, so Windows will keep working - and with it
present we were finding makensis.exe on Linux and trying to
run it, which is never going to work (see the sketch after
this list)
- Don't bother checking whether nsis is 32-bit if we're running
on Linux
- Add the -nocd option to nsis (on Linux) because it takes the
current working directory from the target of a symlink rather
than the symlink itself. See
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=704828
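Roughly, the intended lookup and invocation logic looks like the
sketch below. This is illustrative only: it assumes the third-party
"which" package, and find_makensis() / run_makensis() are hypothetical
names, not the actual configure code.

    import subprocess
    import sys

    from which import which, WhichError


    def find_makensis():
        # Look up "makensis" without the ".exe" suffix. which.which
        # appends the executable extension on Windows, so Windows keeps
        # working, and Linux no longer matches a makensis.exe it cannot
        # run.
        try:
            return which('makensis')
        except WhichError:
            return None


    def run_makensis(makensis, script):
        cmd = [makensis]
        if sys.platform != 'win32':
            # On Linux, nsis resolves the current working directory
            # through the target of a symlink; -nocd avoids that (see
            # the Debian bug above).
            cmd.append('-nocd')
        cmd.append(script)
        return subprocess.check_call(cmd)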
MozReview-Commit-ID: CVT8LwS1t8w
--HG--
extra : rebase_source : 2a62327326ba80dfd728048d19f0ff1c90100838
Several years ago there was a single zip file for all test files. Clients
would only extract the files they needed. Thus, zip was a reasonable
archive format because it allowed direct access to members without
having to decompress the entirety of the stream.
We have since split up that monolithic archive into separate,
domain-specific archives, e.g. one archive for mochitests and one
for xpcshell tests. This drastically cut down on network I/O
required on testers because they only fetched archives/data that
were relevant. It also enabled parallel generation of test archives,
which shaved dozens of seconds off builds because compression was
a long pole.
Despite the architectural changes to test archive management, we
still used zip files. This is not ideal because we no longer access
specific files in test archives and thus don't care about single/partial
member access performance.
This commit implements support for generating tar.gz test archives
and switches the web-platform archive to a tar.gz file.
The performance implications for archive generation are significant:
before: 48,321,250 bytes; 6.05s
after: 31,844,267 bytes; 4.57s
The size is reduced because we have a single compression context,
so data from one file can benefit the compression of a subsequent file.
CPU usage is reduced because the compressor has to do less work with
one context than it does with N. While I didn't measure it, decompression
performance should also be improved for the same reasons. And of course
network I/O will be reduced.
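For illustration, a minimal sketch of writing such an archive with
Python's tarfile module is below; the create_tar_gz() helper and its
file list are hypothetical, not the actual build code.

    import tarfile


    def create_tar_gz(output_path, files):
        # `files` is an iterable of (archive_path, fs_path) pairs. Every
        # member is written through the same gzip stream, so one
        # compression context is shared across all files, which is where
        # the size and CPU wins over per-member zip compression come from.
        with tarfile.open(output_path, 'w:gz') as tar:
            for archive_path, fs_path in sorted(files):
                tar.add(fs_path, arcname=archive_path, recursive=False)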
mozharness consumers use a generic method for handling unarchiving.
This method automagically handles multiple file extensions. So as long
as downstream consumers aren't hard-coding ".zip", this change should
"just work."
MozReview-Commit-ID: LQa5MIHLsms
--HG--
extra : rebase_source : 100092c2f2ff609362a724fff60f46dd6e49c94e
extra : intermediate-source : d10f5ccd882b965fcad39914f7c3c930d1301a41
extra : source : a0e257e346ccf3c1db332ec5903241f4eeb9a7ee
Without this change, browser_update.js "resets" a preference that it never
changed to a different value, which leaks through to future tests. This was
introduced in a8fcca075fde, and appears to be a simple mistake: that change
removes a setup/teardown pref change pair, but the setup and the teardown
actually change two different prefs!
This leaked pref change leads to test failures when special powers and mochitest
are installed as non-temporary addons.
MozReview-Commit-ID: 2jx3fB1iZMx
--HG--
extra : rebase_source : 35394dda16814d80116854bd40c00c95f30d34e2
Also removes some dead code.
A lot of the code in ExtensionUtils.jsm is not needed in all processes, and a
lot of the rest isn't needed until extension code runs. Most of it winds up
being loaded into all processes way earlier than necessary.
MozReview-Commit-ID: CMRjCPOjRF2
--HG--
extra : rebase_source : 37718eaf05a22b8ccb95f633cf7454bd7975cdce
This is the second step in migrating the policy service to pure native code,
with impact and reasoning similar to the previous patch.
MozReview-Commit-ID: L5XdPzWNZXM
--HG--
extra : rebase_source : dda006a0afb9d56e2738dbc0b0d94ba0496db5c9
Currently we can't differentiate between when a badge is shown
and when a doorhanger is shown. This creates an additional problem:
if the badge progresses into a doorhanger after a window of
time has passed, it registers as two notifications shown, when
logically it is one. This splits out badges and doorhangers to
remedy that.
MozReview-Commit-ID: CTTaWDG1tah
--HG--
extra : rebase_source : 2b13b703ac4e12caa040138dadd2875df76ff61a
The whitelisting function thisTestLeaksUncaughtRejectionsAndShouldBeFixed was replaced by expectUncaughtRejection, so the existing calls no longer had any effect.
MozReview-Commit-ID: 3uOxkgWYWEz
--HG--
extra : rebase_source : 5a10a3ebbfe0ce2a801330041f95447c313a9a70
extra : source : 6f0394b523a66dab444b8551deb8f3c6c81d8f31
The whitelisting function thisTestLeaksUncaughtRejectionsAndShouldBeFixed was replaced by expectUncaughtRejection, so the existing calls no longer had any effect.
MozReview-Commit-ID: 3uOxkgWYWEz
--HG--
extra : rebase_source : 3a7720091180a770b32b595f8094c0d20170166d
With these changes, the latest update in updates.xml is always the update currently in progress, even before the update is applied. This ensures that after a successful update the code in nsBrowserContentHandler.js always gets the correct custom update property.
Several years ago there was a single zip file for all test files. Clients
would only extract the files they needed. Thus, zip was a reasonable
archive format because it allowed direct access to members without
having to decompress the entirety of the stream.
We have since split up that monolithic archive into separate,
domain-specific archives, e.g. one archive for mochitests and one
for xpcshell tests. This drastically cut down on network I/O
required on testers because they only fetched archives/data that
were relevant. It also enabled parallel generation of test archives,
which shaved dozens of seconds off builds because compression was
a long pole.
Despite the architectural changes to test archive management, we
still used zip files. This is not ideal because we no longer access
specific files in test archives and thus don't care about single/partial
member access performance.
This commit implements support for generating tar.gz test archives
and switches the web-platform archive to a tar.gz file.
The performance implications for archive generation are significant:
before: 48,321,250 bytes; 6.05s
after: 31,844,267 bytes; 4.57s
The size is reduced because we have a single compression context,
so data from one file can benefit the compression of a subsequent file.
CPU usage is reduced because the compressor has to do less work with
one context than it does with N. While I didn't measure it, decompression
performance should also be improved for the same reasons. And of course
network I/O will be reduced.
mozharness consumers use a generic method for handling unarchiving.
This method automagically handles multiple file extensions. So as long
as downstream consumers aren't hard-coding ".zip", this change should
"just work."
MozReview-Commit-ID: LQa5MIHLsms
--HG--
extra : rebase_source : cd029cdbbcccc1d16f03d63a5f1fdf60be5db4fd
extra : source : a0e257e346ccf3c1db332ec5903241f4eeb9a7ee
I see the following JavaScript warning in stdout when I run Firefox tests from the console.
JavaScript warning: resource://gre/modules/addons/XPIProvider.jsm, line 2970: String.localeCompare is deprecated; use String.prototype.localeCompare instead
MozReview-Commit-ID: ERiTd3rQ4Wc
--HG--
extra : rebase_source : a8bf8daa18842b13ca263ec6292a1c215bc19a6d