Use the UI thread's tid to check whether we're on the UI thread in Gecko.
This lets us get rid of `GeckoThread.registerUiThread`, avoiding a race
where we could check for the UI thread before `registerUiThread` had been
called.
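A minimal sketch of the idea (class name reused for illustration; the
assumption here is that Gecko's UI thread is the process's main thread,
whose tid on Android/Linux equals the pid, so the tid is known without any
registration handshake):

    import android.os.Process;

    final class ThreadUtils {
        // The main thread's tid equals the pid, so the UI thread's tid is
        // available without a registerUiThread call having run first.
        static boolean isOnUiThread() {
            return Process.myTid() == Process.myPid();
        }
    }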
MozReview-Commit-ID: 11gAWgx4UZo
These rules are purely a convenience, both for developers and for
automation. Instead of having to hard-code running make in a particular
directory to do l10n repacks, you can now just do
./mach build installers-de
and that's that.
MozReview-Commit-ID: C4WKXljjN7n
--HG--
extra : rebase_source : f305bb2bc0ddb4712c8b28f5225fd8ad22a16055
To avoid merging the en-US language pack, the merge-% steps are wrapped in
a conditional function that disables them for en-US. A function is used
here because that's easier than a shell `if` in the merge rule, and
Makefile conditionals aren't evaluated late enough.
To free the l10n builds from settings in the automation, we move the patch
logic from LOCALE_MERGEDIR to REAL_LOCALE_MERGEDIR.
To determine reliably whether we're doing a repack or building a langpack,
the trick here is to
export IS_LANGUAGE_REPACK
in l10n.mk, and to set it to true only in the entry-point rules. Now we can
use that value in config.mk to define the l10n-specific rules.
I did the same thing for langpack-%, which allows us to disable
the crashreporter files for language packs, for example.
With that,
make installers-de
just works, if you have localizations checked out.
For a while, we might run l10n-merge twice in automation, but it's really not
optional, so let's just make sure we run it.
MozReview-Commit-ID: 3nr33CKxkBQ
--HG--
extra : rebase_source : 0605a4adba018fa4b85d563cdafba80b0533bc91
Set AB_CD on the per-locale entry-point pattern rules.
We don't set it on the top-level repackaging pattern rules, as those need
AB_CD to be en-US to find the original package to unpack.
MozReview-Commit-ID: JqrLYyEyvvb
--HG--
extra : rebase_source : 82c840f16e131fe8f340e21ff86a34c70e3f7f97
- use EOJ's handy .equals() to compare JSON structures
- the generated DSA signature prefix seems to have changed after a Java 1.8
  update
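EOJ's exact API isn't spelled out here; as an illustration of why
structural comparison needs library support, a sketch using the upstream
org.json artifact (whose JSONObject offers similar() but does not override
equals()):

    import org.json.JSONException;
    import org.json.JSONObject;

    final class JsonCompare {
        // Deep, key-order-insensitive comparison; a default equals() would
        // report identical structures as different.
        static boolean sameStructure(final String a, final String b)
                throws JSONException {
            return new JSONObject(a).similar(new JSONObject(b));
        }
    }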
MozReview-Commit-ID: JwQLb998Kro
--HG--
extra : rebase_source : 802045c6ad6f2c46e34c9765022c5707c65ee3e6
There's a race between compositor methods that use the XPCOM queue and the
disposeNative method, which uses the priority queue. Move everything to the
XPCOM queue to fix the race.
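A minimal sketch of the ordering problem, with plain single-threaded
executors standing in for Gecko's queues (names hypothetical):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    final class QueueOrdering {
        // Each queue is serial, but order is only guaranteed within a queue.
        private static final ExecutorService xpcomQueue =
                Executors.newSingleThreadExecutor();
        private static final ExecutorService priorityQueue =
                Executors.newSingleThreadExecutor();

        static void racy() {
            xpcomQueue.execute(() -> { /* compositor call using native state */ });
            // May run before the compositor call above and free its state:
            priorityQueue.execute(() -> { /* disposeNative() */ });
        }

        static void fixed() {
            xpcomQueue.execute(() -> { /* compositor call */ });
            // Same serial queue, so disposal runs strictly after the call:
            xpcomQueue.execute(() -> { /* disposeNative() */ });
        }
    }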
MozReview-Commit-ID: BUxotrpBVsW
I replaced JSON parsing for all highlight candidates (at most 500) with a
faster regex-based estimation: we only run full JSON parsing to get exact
values for the items that will actually be shown (~5).
One caveat of this change: JSON parsing will move to the main thread when
getMetadataSlow is lazily loaded.
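A minimal sketch of the fast/slow split (the field name is hypothetical):

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    import org.json.JSONException;
    import org.json.JSONObject;

    final class MetadataParsing {
        private static final Pattern IMAGE_URL =
                Pattern.compile("\"image_url\"\\s*:\\s*\"([^\"]*)\"");

        // Fast path: regex estimation, no JSON allocation; good enough for
        // ranking hundreds of candidates.
        static String estimateImageUrlFast(final String rawJson) {
            final Matcher m = IMAGE_URL.matcher(rawJson);
            return m.find() ? m.group(1) : null;
        }

        // Slow path: full parse for exact values, used only for the ~5
        // items that are actually shown.
        static String getImageUrlSlow(final String rawJson) throws JSONException {
            return new JSONObject(rawJson).optString("image_url", null);
        }
    }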
Disclaimer: my device seems to be running faster than yesterday, so the
profiling may not be consistent, but here are the profiling results:
- HighlightsRanking.extractFeatures: 78.1% -> 54.5%
- Highlight.<init>: 26.5% -> 14.5%
- JSONObject.<init>: 11.4% -> rm'd
- initFast*: (replaced JSONObject.<init> & friends) -> 4.2%
With the disclaimer above in mind, runtime decreased from 12.6s to 5.3s
(both runs are slower than normal because of profiling overhead).
MozReview-Commit-ID: CTqAyDDmaJQ
--HG--
extra : rebase_source : 1318c460b55159e38a5dd41d53ebcee00e67029c
After the previous changeset, some numbers stood out:
- HighlightsRanking.extractFeatures: 44.9%
- HighlightCandidate.getFeatureValue: 19.4%
- Collections.secondaryHash: 17.3%
- HashMap.get: 11.7%
My hypothesis was that our HighlightCandidate.features implementation was
slow: it mapped FeatureNames -> values in a HashMap, but HashMap look-ups
are slower than a direct memory access.
I replaced the implementation with direct access into an array - about as
fast as we can get (see the sketch after this list). This encouraged me to
make some changes with the following benefits:
- Rewrote HighlightsRanking.normalize to save iterations and allocations.
- Removed code from HighlightsRanking.scoreEntries: we no longer need to
iterate to construct the filtered items; we just index directly into the list.
- Rewrote HighlightsRanking.decay(), which I think is a little clearer now.
- Saved a few iterator/object allocations inside inner loops in places.
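A minimal sketch of the array-backed features (names hypothetical; the
real feature set is larger):

    final class HighlightCandidate {
        enum FeatureName { AGE_IN_DAYS, VISIT_COUNT, IMAGE_COUNT }

        // One slot per feature: a direct read, with no hashing
        // (Collections.secondaryHash) and no HashMap.get on the hot path.
        private final double[] features = new double[FeatureName.values().length];

        void setFeatureValue(final FeatureName name, final double value) {
            features[name.ordinal()] = value;
        }

        double getFeatureValue(final FeatureName name) {
            return features[name.ordinal()];
        }
    }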
The tests pass and we have coverage for the normalize changes but not for
scoreEntries.
---
For perf, my changes affected multiple methods, so the percentages are no
longer reliable, but I can verify absolute runtime changes. I ran three
tests: the best showed an overall runtime of 33% of the previous
changeset's, and the other two profiles showed 66%. In particular, for the
middle run, the affected methods went from X microseconds to Y
microseconds:
- Features.get: 3,554,796 -> 322,145
- secondaryHash: 3,165,785 -> 35,253
- HighlightsRanking.normalize: 6,578,481 -> 1,734,078
- HighlightsRanking.scoreEntries: 3,017,272 -> 448,300
As far as I know, my changes should not have introduced any new inefficiencies
to the code.
MozReview-Commit-ID: 9THXe8KqBbB
--HG--
extra : rebase_source : a190bc2e7c0f3ed2b5cb65202b902dcd455b3aa8
This reduces the calls to `getColumnIndexOrThrow` to 9 (from 1.6k), and
HighlightsRanking.extractFeatures goes from 77.1% inclusive CPU time to
40.8%, 14.6k ms to 7.1k ms.
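A minimal sketch of the caching (column names hypothetical): resolve each
column index once, before the row loop, instead of once per row.

    import android.database.Cursor;

    final class CursorReading {
        static void readRows(final Cursor cursor) {
            // One lookup per column, not one per row.
            final int urlCol = cursor.getColumnIndexOrThrow("url");
            final int titleCol = cursor.getColumnIndexOrThrow("title");
            while (cursor.moveToNext()) {
                final String url = cursor.getString(urlCol);
                final String title = cursor.getString(titleCol);
                // ... build the highlight candidate from url/title ...
            }
        }
    }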
MozReview-Commit-ID: L6HqvBK5I4i
--HG--
extra : rebase_source : f67c5ed207a4684edc4a3e7779dabd59c7f98608
This undoes a caveat introduced in the last changeset; I did not profile
this change.
MozReview-Commit-ID: 6jpXyt0GRUj
--HG--
extra : rebase_source : 7d09b16829376caf1116364e71dddbab7a5314a3