image.animated.decode-on-demand.threshold-kb is the maximum size in kB
that the aggregate frames of an animation can use before it starts to
discard already displayed frames and redecode them as necessary. The
lower it is set, the less overall memory we consume, at the expense of
execution time, for as long as the tab with the animation(s) above the
threshold is kept open.
image.animated.decode-on-demand.batch-size is the minimum number of
frames we want to have buffered ahead of an animation's currently
displayed frame. The decoding will request this number of frames at a
time to maximize use of memory caching. Note that this is related to the
above preference as well; increasing the batch size will in effect raise
the minimum threshold. This simplifies the logic in patches later in the
series.
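A rough sketch of how a consumer might cache these two prefs with the
var-cache API used elsewhere in this series (variable names and default
values here are illustrative, not the shipped ones):

  #include "mozilla/Preferences.h"

  // Illustrative cached copies of the two prefs; the var caches keep them
  // in sync with the pref values at runtime.
  static uint32_t sDecodeOnDemandThresholdKB = 20480;  // assumed default
  static uint32_t sDecodeOnDemandBatchSize = 6;        // assumed default

  static void InitAnimationDecodePrefs() {
    mozilla::Preferences::AddUintVarCache(
        &sDecodeOnDemandThresholdKB,
        "image.animated.decode-on-demand.threshold-kb", 20480);
    mozilla::Preferences::AddUintVarCache(
        &sDecodeOnDemandBatchSize,
        "image.animated.decode-on-demand.batch-size", 6);
  }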
For FF59, we disabled WebVR for macOS before allowing it to ride the trains to release. Softvision was unable to verify it for QA due to challenges getting a working macOS VR hardware configuration.
We have since gained approval from the Firefox Release Team to re-enable WebVR for macOS in release for FF60.
Essentially, we need to reverse the changes in bug 1426500 and uplift to FF59/Beta before the next cycle.
--HG--
extra : rebase_source : 1b0c68b130f869f0e71cf2d93db92bb78dddc79b
This patch disables all device sensors except orientation by default.
It implements per-sensor prefs to selectively disable orientation, motion,
proximity and ambient light. The patch also makes the pref checks happen at
runtime (rather than at process start) using Preferences::AddBoolVarCache.
It also removes the related Event constructors.
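A minimal sketch of the runtime pref-check pattern this describes (the pref
name and helper are illustrative):

  #include "mozilla/Preferences.h"

  // Illustrative: a cached flag that follows the pref at runtime instead of
  // being read once at process start.
  static bool sDeviceMotionEnabled = false;

  static void InitDeviceSensorPrefs() {
    mozilla::Preferences::AddBoolVarCache(&sDeviceMotionEnabled,
                                          "device.sensors.motion.enabled",
                                          false);
  }

  static bool DeviceMotionEventsEnabled() {
    // Checked when deciding whether to dispatch the event, not at startup.
    return sDeviceMotionEnabled;
  }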
MozReview-Commit-ID: EA8ARjjtlkF
--HG--
rename : dom/events/test/test_bug742376.html => dom/events/test/test_deviceSensor.html
rename : dom/events/test/test_eventctors.html => dom/events/test/test_disabled_events.html
rename : dom/events/test/test_eventctors.html => dom/events/test/test_eventctors_sensors.html
extra : rebase_source : 39da98ac9226ac727f5197d28561b0b762da06f4
Before Firefox 58 we collected extended telemetry from users on nightly,
aurora, and beta. Then we had to change things (see bug 1406391).
In doing so, we accidentally stopped receiving data from "release candidate"
beta builds. This patch resumes that collection by detecting an RC build as
having a MOZ_UPDATE_CHANNEL of "release", but an app.update.channel of "beta"
MozReview-Commit-ID: 3EzzDtQj8Kw
--HG--
extra : rebase_source : 371d2b804cad4fff3fc6a954621e651940867435
The canvas prompt is extremely annoying. It happens to everyone, automatically. And in
99.9% (not scientific) of cases it is not triggered by user input, but by automated
tracking scripts.
This commit will automatically decline the canvas read if it was not triggered by
user input.
Just in case this breaks something irreparably, we have a cutoff pref.
We don't intend to keep this pref forever, and have asked anyone who sets it to
tell us why.
MozReview-Commit-ID: CxNkuraRWpV
--HG--
extra : rebase_source : 12cfc94cecbd378c0859ae50066c6338bcaa6692
image.mem.volatile.min_threshold_kb is the minimum buffer allocation in KB
for an image frame before it will use volatile memory. If the allocation is
smaller than this, it will use the heap. This is only set to > 0 on Android.
image.mem.animated.use_heap forces image frames for animated images to use
the heap. This is only enabled on Android, where it was previously an
Android-only compile-time option.
"Include what you use."
--HG--
extra : rebase_source : 2239a380029e0efbc9dd3042459222a67c38d70f
extra : amend_source : 4453c32cc469caa592049167205666997f1a1e7b
extra : histedit_source : a533edd4a4d3d0642b08989e93674661d27baa6a%2C37d27eeef9580381ccc0de8507f60166dabf1730
We instead add a templated method NS_MutatorMethod that returns a std::function<nsresult(nsIURIMutator*)> which Apply then calls with mMutator as an argument.
The function returned by NS_MutatorMethod performs a QueryInterface, then calls the passed method with arguments on the result.
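A rough usage sketch of that pattern; the specific nsIURLMutator setter and
its trailing out-parameter are assumptions for illustration:

  // uri is an existing nsIURI*.
  nsCOMPtr<nsIURI> newURI;
  nsresult rv =
      NS_MutateURI(uri)
          .Apply(NS_MutatorMethod(&nsIURLMutator::SetFileName,
                                  NS_LITERAL_CSTRING("index.html"), nullptr))
          .Finalize(newURI);

Apply() then QIs mMutator to nsIURLMutator and forwards the arguments, as
described above.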
MozReview-Commit-ID: Jjqp7gGLG1D
--HG--
extra : rebase_source : f2a17aee7bb66a7ba8652817d43b9aa7ec7ef710
We instead add a templated method NS_MutatorMethod that returns a std::function<nsresult(nsIURIMutator*)> which Apply then calls with mMutator as an argument.
The function returned by NS_MutatorMethod performs a QueryInterface, then calls the passed method with arguments on the result.
MozReview-Commit-ID: Jjqp7gGLG1D
--HG--
extra : rebase_source : 592d13349a8c4627c7ce3146ec592f577b39f3cc
This was first suggested 17 years ago!
The error recovery works by just scanning forward for the next ';' token.
This change allows a lot of the gtest tests to be combined.
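A sketch of the recovery idea (types and names here are illustrative, not the
actual parser code):

  // On a syntax error, skip tokens until the next ';' (or end of input), then
  // resume parsing at the following pref so one bad line doesn't abort the
  // whole file.
  void Parser::RecoverFromError() {
    while (true) {
      Token t = GetToken();
      if (t.mKind == TokenKind::Semicolon || t.mKind == TokenKind::Eof) {
        return;
      }
    }
  }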
MozReview-Commit-ID: CbZ2OFtdIxf
--HG--
extra : rebase_source : 5a43fff06e88b45a095725856bbe1e6b5470c9a0
The image decoding thread pool can grow to be quite large, up to 32
threads, depending on the number of processors on the system. If the
user is not actively browsing, these threads are occupying resources
which could be reused elsewhere. After the timeout period, it will
release up to half of the threads in the pool.
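A hedged sketch of this kind of idle trimming using the generic nsIThreadPool
knobs (the decode pool's actual implementation may differ):

  #include "nsIThreadPool.h"

  // Illustrative numbers: let the pool grow to aMaxThreads while busy, but
  // once threads have been idle past the timeout, keep only half of them.
  static void ConfigureDecodePool(nsIThreadPool* aPool, uint32_t aMaxThreads) {
    aPool->SetThreadLimit(aMaxThreads);
    aPool->SetIdleThreadLimit(aMaxThreads / 2);
    aPool->SetIdleThreadTimeout(300 * 1000);  // milliseconds
  }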
The meaning of "possibly-changed" is provided by the big comment above
MustSendToContentProcesses.
On a new profile this reduces the number of prefs sent like so:
- Command-line: 222 --> 3
- IPC: 3129 --> 130
On an older profile:
- Command-line: 222 --> 3
- IPC: 3165 --> 180
MozReview-Commit-ID: DcgedhXhZd8
--HG--
extra : rebase_source : acef424fab5031347cbcbd5c3e6a24ee66895ef9
No Nightly testers don't report new compatibility issue. Additionally, if we
make Firefox use <div> as defaultParagraphSeparator in release build, web
services may stop supporting our current behavior quickly because they can
get rid of hack for us. Therefore, we should do this in the cycle of Gecko 60
which is next ESR. If we did this later, ESR users may have become not to be
able to use existing web services suddenly immediately after we did this in 61
or 62. We should avoid this bad scenario.
MozReview-Commit-ID: 7Um79Ky7n8i
--HG--
extra : rebase_source : 45c6d521ddc1166decb60cc8437ffb703b1e9aff
This has two advantages. First, it reduces memory usage, as per the following
calculation.
64-bit:
- Old sizes:
- sizeof(Pref) = 32
- New sizes:
- sizeof(PrefEntry) = 16
- sizeof(Pref) = 32
- Change:
- -16 per empty slot in the hash table
- +16 per used slot
- A win if less than half the table slots are used
32-bit
- Old sizes:
- sizeof(Pref) = 20
- New sizes:
- sizeof(PrefEntry) = 8
- sizeof(Pref) = 16
- Change:
- -12 per empty slot in the hash table
- +4 per used slot in the hash table
- A win if table is < 75% full
Table size:
- The table is currently less than half full: ~3100 used out of 8192 slots.
- The table is always <= 75% full, because that's the max load factor (for
non-gigantic tables).
- Therefore it's a win for both cases.
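A rough sketch of the layout change behind these numbers (field details are
assumptions for illustration):

  // Before: the hash table stored Pref objects directly, so every slot, used
  // or empty, paid sizeof(Pref).
  // After: the table stores small PrefEntry objects that point at separately
  // allocated Pref objects, so empty slots only pay sizeof(PrefEntry).
  class Pref {
    const char* mName;
    // ... type, flags, default value, user value ...
  };

  struct PrefEntry : public PLDHashEntryHdr {
    Pref* mPref;  // allocated only for used slots
  };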
Old sizes, chrome process, 64-bit:
> 718,712 B (00.36%) -- preferences
> +--262,176 B (00.13%) -- hash-table
> +--197,384 B (00.10%) -- callbacks
> +--114,688 B (00.06%) -- pref-name-arena
> +---92,240 B (00.05%) -- root-branches
> +---30,456 B (00.02%) -- string-values
> +---21,688 B (00.01%) -- cache-data
> +-------80 B (00.00%) -- misc
New sizes, chrome process, 64-bit:
> 672,568 B (00.41%) -- preferences
> +--181,160 B (00.11%) -- callbacks
> +--131,104 B (00.08%) -- hash-table # smaller
> +--114,688 B (00.07%) -- pref-name-arena
> +--101,152 B (00.06%) -- pref-values # new
> +---92,240 B (00.06%) -- root-branches
> +---30,456 B (00.02%) -- string-values
> +---21,688 B (00.01%) -- cache-data
> +-------80 B (00.00%) -- misc
Old sizes, smallest content process, 64-bit:
> 500,712 B (02.89%) -- preferences
> +--262,176 B (01.51%) -- hash-table
> +--114,688 B (00.66%) -- pref-name-arena
> +---62,520 B (00.36%) -- callbacks
> +---30,456 B (00.18%) -- string-values
> +---17,832 B (00.10%) -- cache-data
> +---12,960 B (00.07%) -- root-branches
> +-------80 B (00.00%) -- misc
New sizes, smallest content process, 64-bit:
> 470,792 B (02.70%) -- preferences
> +--131,104 B (00.75%) -- hash-table # smaller
> +--114,688 B (00.66%) -- pref-name-arena
> +--101,152 B (00.58%) -- pref-values # new
> +---62,520 B (00.36%) -- callbacks
> +---30,456 B (00.17%) -- string-values
> +---17,832 B (00.10%) -- cache-data
> +---12,960 B (00.07%) -- root-branches
> +-------80 B (00.00%) -- misc
The "hash-table" values drop by more than the size of the new "pref-values"
value.
On 64-bit, this reduces memory usage per process by 30--40 KB. On 32-bit, the
number is slightly more.
The second major advantage of this change is flexibility -- it opens up the
possibility of different Pref objects being stored in different ways. For
example, static Prefs could be stored statically, letting them be shared
between processes so long as they don't change (see bug 1437168).
MozReview-Commit-ID: KmgbJaoOQ1J
--HG--
extra : rebase_source : 9f8201583432c1414ab3e17e80fe23a369ac264b
This feature is confusing for Nightly users in its current state, and the
suggested websites, in foreign languages, may look worrisome to some.
Bug 1340663 must figure out these issues before re-enabling the feature.
MozReview-Commit-ID: 6RJ0Ff1B3AJ
--HG--
extra : rebase_source : 569b5ae833c4f5c05656522d0d9d0ad00679c370
This construct is nicer than NS_INTERFACE_MAP_BEGIN and assures the
reader there's no weirdness in the QI implementation. This change does
mean that PGO doesn't get an opportunity to measure which interfaces are
QI'd most often. I think this is probably an OK
tradeoff to make, given the prevalence of NS_IMPL_QUERY_INTERFACE
elsewhere in the codebase.
The one thing we have to ensure with this change is that the ambiguous
QI to nsISupports uses the proper class after the change. The
NS_IMPL_QUERY_INTERFACE macro chooses the first interface listed to
disambiguate the cast to nsISupports.
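For example (class and interfaces are illustrative):

  // The first interface listed (nsIObserver here) is the one the macro uses
  // for the ambiguous cast to nsISupports.
  NS_IMPL_QUERY_INTERFACE(MyService, nsIObserver, nsISupportsWeakReference)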
The image decoding thread pool can grow to be quite large, up to 32
threads, depending on the number of processors on the system. If the
user is not actively browsing, these threads are occupying resources
which could be reused elsewhere. After the timeout period, it will
release up to half of the threads in the pool.
This lets us have a proper constructor for Pref, which is nice.
The patch also adds a missing case to PrefTypeToString(), and reorders the
fields in Pref to be more sensible.
MozReview-Commit-ID: A01ULF4q08O
--HG--
extra : rebase_source : 835e494ad18e3ea4de9f02beca8266551bfffe5e
The mZips key is used only for internal hashtable lookups, so GetPersistentDescriptor is suitable.
MozReview-Commit-ID: 48wDOSjyo3r
--HG--
extra : rebase_source : 03c4b47812dade1d3e321727aafacfbc12bcbf32
extra : intermediate-source : 85a0b6bc25a1f960767ac28ff23a8c26829946a2
extra : source : 544bf26e258d42c835c80672416b0e29a48ba33b
They currently fail to pass on `aKind`, always getting the user value (when
possible). There are three callsites that are affected:
- nsSHistory::Startup, docshell/shistory/nsSHistory.cpp.
- FeatureState::SetDefaultFromPref(), in gfx/config/gfxFeature.cpp.
- gfxPlatform::InitOMTPConfig(), in gfx/thebes/gfxPlatform.cpp.
The patch also adds a gtest that would have failed prior to the fix.
MozReview-Commit-ID: L0U1XQmPUFc
--HG--
extra : rebase_source : d51d09836609c5a45d0b9f20570427681d8b3309
Similar to ATOK, Japanist 10 requests all or part of the composition string.
If we return TS_E_NOLAYOUT in this case, the candidate window appears at the
top-left of the screen.
For avoiding this issue, we should not return TS_E_NOLAYOUT to Japanist 10
when the query range is in composition string.
MozReview-Commit-ID: 2OPafUO5PQC
--HG--
extra : rebase_source : bd7a594d8d3540374d61860651b69528aa6e3793
This patch rearranges these functions so that nsPrefBranch::GetPrefType() calls
into Preferences::GetType(), because all other nsPrefBranch methods depend on
Preferences methods.
The patch also removes the `aKind` argument from GetType(), because it has no
effect -- a pref only has one type, regardless of whether it has a default
value, a user value, or both.
MozReview-Commit-ID: J3vxFPaP8S3
This patch was autogenerated by my decomponents.py
It covers almost every file with the extension js, jsm, html, py,
xhtml, or xul.
It removes blank lines after removed lines, when the removed lines are
preceded by either blank lines or the start of a new block. The "start
of a new block" is defined fairly hackily: either the line starts with
//, ends with */, ends with {, <![CDATA[, """ or '''. The first two
cover comments, the third one covers JS, the fourth covers JS embedded
in XUL, and the final two cover JS embedded in Python. This also
applies if the removed line was the first line of the file.
It covers the pattern matching cases like "var {classes: Cc,
interfaces: Ci, utils: Cu, results: Cr} = Components;". It'll remove
the entire thing if they are all either Ci, Cr, Cc or Cu, or it will
remove the appropriate ones and leave the residue behind. If there's
only one behind, then it will turn it into a normal, non-pattern
matching variable definition. (For instance, "const { classes: Cc,
Constructor: CC, interfaces: Ci, utils: Cu } = Components" becomes
"const CC = Components.Constructor".)
MozReview-Commit-ID: DeSHcClQ7cG
--HG--
extra : rebase_source : d9c41878036c1ef7766ef5e91a7005025bc1d72b
Specifically:
- Make the warning about editing in all-caps;
- Make it clear that about:config is a browser thing;
- Add a mention of the user.js file;
- Use C++ comments, because I prefer them to C comments and I am the module
owner :)
MozReview-Commit-ID: 9GXS5nNHywO
The prefs parser has two significant problems.
- It doesn't separate tokenizing from parsing.
- It is implemented as a loop around a big switch on a "current state"
variable.
As a result, it is hard to understand and modify, slower than it could be, and
in obscure cases (involving comments and whitespace) it fails to parse what
should be valid input.
This patch replaces it with a recursive descent parser (albeit one without any
recursion!) that has separate tokenization. The new parser is easier to
understand and modify, more correct, and has better error messages. It doesn't
do error recovery, but that would be much easier to add than in the old parser.
The new parser also runs about 1.9x faster than the existing parser. (As
measured by parsing greprefs.js's contents from memory 1000 times in
succession, omitting the prefs hash table construction. If the table
construction is included, it's about 1.6x faster.)
The new parser is slightly stricter than the old parser in a few ways.
- Disconcertingly, the old parser allowed arbitrary junk between prefs
(including at the start and end of the prefs file) so long as that junk
didn't include any of the following chars: '/', '#', 'u', 's', 'p'. I.e.
lines like these:
!foo@bar&pref("prefname", true);
ticky_pref("prefname", true); // missing 's' at start
User_pref("prefname", true); // should be 'u' at start
would all be treated the same as this:
pref("prefname", true);
The new parser disallows such junk because it isn't necessary and seems like
an unintentional botch by the old parser.
- The old parser allowed character 0x1a (SUB) between tokens and treated it
like '\n'.
The new parser does not allow this character. SUB was used to indicate
end-of-file (*not* end-of-line) in some old operating systems such as MS-DOS,
but this doesn't seem necessary today.
- The old parser tolerated (with a warning) invalid escape sequences within
string literals -- such as "\q" (not a valid escape) and "\x1" and "\u12"
(both of which have insufficient hex digits) -- accepting them literally.
The new parser does not tolerate invalid escape sequences because it doesn't
seem necessary and would complicate things.
- The old parser tolerated character 0x00 (NUL) within string literals; this is
dangerous because C++ code that manipulates string values with embedded NULs
will almost certainly consider those chars as end-of-string markers.
The new parser treats NUL chars as end-of-file, to avoid this danger and
because it facilitates a significant optimization (described within the
code).
- The old parser allowed integer literals to overflow, silently wrapping them.
The new parser treats integer overflow as a parse error. This seems better,
and it caught existing overflows of places.database.lastMaintenance, in
testing/profiles/prefs_general.js (bug 1424030) and
testing/talos/talos/config.py (bug 1434813).
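  A sketch of the kind of overflow check this implies (illustrative, not the
  parser's actual code):

    #include <cstdint>

    // Accumulate a decimal digit into a uint32_t, reporting overflow instead
    // of silently wrapping the way the old parser did.
    static bool AppendDigit(uint32_t& aValue, char aDigit) {
      uint32_t digit = aDigit - '0';
      if (aValue > (UINT32_MAX - digit) / 10) {
        return false;  // would overflow; treat as a parse error
      }
      aValue = aValue * 10 + digit;
      return true;
    }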
The first of these changes meant that a couple of existing prefs with ";;" at
the end had to be changed (done in the preceding patch).
The minor increase in strictness shouldn't be a problem for default pref files
such as greprefs.js within the application (which we can modify), nor for
app-written prefs files such as prefs.js. It could affect user-written prefs
files such as user.js; the experience above suggests that integer overflow and
";;" are the most likely problems in practice. In my opinion, the risk here is
acceptable.
The new parser also does a better job of tracking line numbers because it (a)
treats "\r\n" sequences as a single end-of-line marker, and (a) pays attention
to end-of-line sequences within string literals.
Finally, the patch adds thorough tests of both valid and invalid syntax.
MozReview-Commit-ID: JD3beOQl4AJ
The prefs parser has two significant problems.
- It doesn't separate tokenizing from parsing.
- It is implemented as a loop around a big switch on a "current state"
variable.
As a result, it is hard to understand and modify, slower than it could be, and
in obscure cases (involving comments and whitespace) it fails to parse what
should be valid input.
This patch replaces it with a recursive descent parser (albeit one without any
recursion!) that has separate tokenization. The new parser is easier to
understand and modify, more correct, and has better error messages. It doesn't
do error recovery, but that would be much easier to add than in the old parser.
The new parser also runs about 1.9x faster than the existing parser. (As
measured by parsing greprefs.js's contents from memory 1000 times in
succession, omitting the prefs hash table construction. If the table
construction is included, it's about 1.6x faster.)
The new parser is slightly stricter than the old parser in a few ways.
- Disconcertingly, the old parser allowed arbitrary junk between prefs
(including at the start and end of the prefs file) so long as that junk
didn't include any of the following chars: '/', '#', 'u', 's', 'p'. I.e.
lines like these:
!foo@bar&pref("prefname", true);
ticky_pref("prefname", true); // missing 's' at start
User_pref("prefname", true); // should be 'u' at start
would all be treated the same as this:
pref("prefname", true);
The new parser disallows such junk because it isn't necessary and seems like
an unintentional botch by the old parser.
- The old parser allowed character 0x1a (SUB) between tokens and treated it
like '\n'.
The new parser does not allow this character. SUB was used to indicate
end-of-file (*not* end-of-line) in some old operating systems such as MS-DOS,
but this doesn't seem necessary today.
- The old parser tolerated (with a warning) invalid escape sequences within
string literals -- such as "\q" (not a valid escape) and "\x1" and "\u12"
(both of which have insufficient hex digits) -- accepting them literally.
The new parser does not tolerate invalid escape sequences because it doesn't
seem necessary and would complicate things.
- The old parser tolerated character 0x00 (NUL) within string literals; this is
dangerous because C++ code that manipulates string values with embedded NULs
will almost certainly consider those chars as end-of-string markers.
The new parser treats NUL chars as end-of-file, to avoid this danger and
because it facilitates a significant optimization (described within the
code).
- The old parser allowed integer literals to overflow, silently wrapping them.
The new parser treats integer overflow as a parse error. This seems better,
and it caught an existing overflow in testing/profiles/prefs_general.js, for
places.database.lastMaintenance (see bug 1424030).
The first of these changes meant that a couple of existing prefs with ";;" at
the end had to be changed (done in the preceding patch).
The minor increase in strictness shouldn't be a problem for default pref files
such as greprefs.js within the application (which we can modify), nor for
app-written prefs files such as prefs.js. It could affect user-written prefs
files such as user.js; the experience above suggests that ";;" is the most
likely problem in practice. In my opinion, the risk here is acceptable.
The new parser also does a better job of tracking line numbers because it (a)
treats "\r\n" sequences as a single end-of-line marker, and (a) pays attention
to end-of-line sequences within string literals.
Finally, the patch adds thorough tests of both valid and invalid syntax.
MozReview-Commit-ID: 8EYWH7KxGG
* * *
[mq]: win-fix
MozReview-Commit-ID: 91Bxjfghqfw
--HG--
extra : rebase_source : a8773413e5d68c33e4329df6819b6e1f82c22b85
This reverts the change introduced in bug 1394053.
Google has made the download protection lists available to everyone
and so we no longer need to restrict the download protection feature
to official builds.
MozReview-Commit-ID: CQcG5Ip1mDV
--HG--
extra : rebase_source : 55ff4f1e5a09e3c83ad9b24b2eb44789834b2357
This was done using the following script:
37e3803c7a/processors/chromeutils-import.jsm
MozReview-Commit-ID: 1Nc3XDu0wGl
--HG--
extra : source : 12fc4dee861c812fd2bd032c63ef17af61800c70
extra : intermediate-source : 34c999fa006bffe8705cf50c54708aa21a962e62
extra : histedit_source : b2be2c5e5d226e6c347312456a6ae339c1e634b0
This was done using the following script:
37e3803c7a/processors/chromeutils-import.jsm
MozReview-Commit-ID: 1Nc3XDu0wGl
--HG--
extra : source : 12fc4dee861c812fd2bd032c63ef17af61800c70
This was done using the following script:
37e3803c7a/processors/chromeutils-import.jsm
MozReview-Commit-ID: 1Nc3XDu0wGl
--HG--
extra : rebase_source : c004a023389f1f6bf3d2f3efe93c13d423b23ccd
UI Events declares that a keypress event should be fired only when the key
event causes some text input. However, we're keeping our traditional behavior
for historical reasons because our internal event handlers (including
Thunderbird's event handlers) handle keypress events for any keys. Therefore,
to minimize the side effects, we should stop invoking keypress event handlers
in the default event group in web content.
This patch adds a new pref for enabling the standard behavior in web content.
Additionally, it creates WidgetKeyboardEvent::IsInputtingText() to share the
check logic between TextEventDispatcher and TextEditor/HTMLEditor.
MozReview-Commit-ID: 3rtXdLBPeVC
--HG--
extra : rebase_source : 2fc3c9a09840d0d03800c9a42bb83ca76a8db2d5
* Makes the implementation of nsStandardURL::Mutator into a template called TemplatedMutator<T>
* Makes both nsStandardURL::Mutator and SubstitutingURL::Mutator extend TemplatedMutator<T>
MozReview-Commit-ID: EpxFpBkrdSK
--HG--
extra : rebase_source : 07d568ff84fb199c7549ae5f402e01e4b86c1c37
On the off chance exposing navigator.webdriver turns out
to be catastrophic, this patch introduces a new preference
dom.webdriver.enabled that controls its exposure. This lets us
flip a pref on release without releasing an update.
MozReview-Commit-ID: KisaqPb0Y4V
--HG--
extra : rebase_source : 4000a1d283e70eda988262efc70de801eec3586a
Provides an optional resolver mechanism for Firefox that allows running
together with or instead of the native resolver.
TRR offers resolving of host names using a dedicated DNS-over-HTTPS server
(HTTPS is required, HTTP/2 is preferable).
DNS-over-HTTPS (DOH) allows DNS resolves with enhanced privacy, secure
transfers and improved performance.
To keep the failure rate at a minimum, the TRR system manages a dynamic
persistent blacklist for host names that can't be resolved with DOH but work
with the native resolver. Blacklisted entries will not be retried over DOH for
a couple of days. "localhost" and names in the ".local" TLD will not be
resolved via DOH.
TRR is preffed OFF by default and you need to set a URI for an available DOH
server to be able to use it. Since the URI for DOH is set with a name itself,
it may have to use the native resolver for bootstrapping. (Optionally, the
user can set the IP address of the DOH server in a pref to avoid the required
initial native resolve.)
When TRR starts up, it will first verify that it works by checking a
"confirmation" domain name. This confirmation domain is a pref by default set
to "example.com". TRR will also by default await the captive-portal detection
to raise its green flag before getting activated.
All prefs for TRR are under the "network.trr" hierarchy.
The DNS-over-HTTPS spec: https://tools.ietf.org/html/draft-ietf-doh-dns-over-https-03
MozReview-Commit-ID: GuuU6vjTjlm
--HG--
extra : rebase_source : 53fcca757334090ac05fec540ef29d109d5ceed3
Adds new network.http.referer.defaultPolicy.pbmode pref which defaults to 2.
When setting referrer from default policy, checks mLoadInfo OriginAttributes
for mPrivateBrowsingId > 0 to detect PBM.
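A hedged sketch of how a caller might pick the default policy; the
private-browsing flag itself would come from checking mPrivateBrowsingId > 0
in the LoadInfo's OriginAttributes, as described above:

  #include "mozilla/Preferences.h"

  static uint32_t DefaultReferrerPolicy(bool aIsPrivateBrowsing) {
    if (aIsPrivateBrowsing) {
      // New pref; defaults to 2 per this patch.
      return mozilla::Preferences::GetUint(
          "network.http.referer.defaultPolicy.pbmode", 2);
    }
    return mozilla::Preferences::GetUint(
        "network.http.referer.defaultPolicy", 3);  // assumed existing default
  }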
MozReview-Commit-ID: 7SfNk0dO1rW
--HG--
extra : rebase_source : a050a61cad005740edde99f846a69c6a7568dbc6
- WebVR will continue to be enabled on macOS for Nightly
and Dev Edition
MozReview-Commit-ID: LpEX13yZVbb
--HG--
extra : rebase_source : 07b93a9f0cbb57fb00f17404f0cf4a37f78f6a5c
This pref does not override privacy.resistFingerprinting, but when it is set (and
privacy.resistFingerprinting is not) we will still adjust the precision of almost
all timers. The adjustment amount is the second pref, which defaults to
20us but is now dynamically adjustable (on the scale of microseconds).
This patch does _not_ address the performance API, which privacy.resistFingerprinting
disables.
We are landing this preffed on at the value we currently clamp performance.now()
to, which is 20us.
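A sketch of the general clamping idea (not the actual implementation):

  #include <cmath>

  // Round a millisecond timestamp down to the nearest multiple of the
  // configured resolution (e.g. 20us), so timers expose reduced precision.
  static double ReduceTimerPrecision(double aTimeMs, double aResolutionUs) {
    double resolutionMs = aResolutionUs / 1000.0;
    return std::floor(aTimeMs / resolutionMs) * resolutionMs;
  }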
MozReview-Commit-ID: ESZlSvH9w1D
--HG--
extra : rebase_source : a8afead1bdba958c6c7b383b2216dacb3a1b135c
This is a follow-up to bug 1409249. There are a lot of places where our
factory singleton constructors either don't correctly handle their returned
references being released by the component manager, or do handle it, but in
ways that are not obvious.
This patch handles a few places where we can sometimes wind up with dangling
singleton pointers, adds some explanatory comments and sanity check
assertions, and replaces some uses of manual refcounting with StaticRefPtr and
ClearOnShutdown.
There are still some places where we may wind up with odd behavior if the
first QI for a getService call fails. In those cases, we wind up destroying
the first instance of a service that we create, and re-creating a new one
later.
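A minimal sketch of the StaticRefPtr + ClearOnShutdown pattern referred to
here ("MyService" is a stand-in for the real service class):

  #include "mozilla/ClearOnShutdown.h"
  #include "mozilla/RefPtr.h"
  #include "mozilla/StaticPtr.h"

  static mozilla::StaticRefPtr<MyService> gMyService;

  already_AddRefed<MyService> MyService::GetSingleton() {
    if (!gMyService) {
      gMyService = new MyService();
      // Drop the strong reference at XPCOM shutdown so the factory can never
      // hand out a pointer to an already-destroyed singleton.
      mozilla::ClearOnShutdown(&gMyService);
    }
    RefPtr<MyService> service = gMyService.get();
    return service.forget();
  }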
MozReview-Commit-ID: ANYndvd7aZx
--HG--
extra : rebase_source : acfb0611a028fef6b9387eb5d1d9e285782fbc7c
This commit adds a paint worker thread pool to PaintThread, and dispatches
tiled paints to it. The thread pool is only created if tiling is enabled,
and its size is set by 'layers.omtp.paint-workers' and defaults to 1. If
-1 is specified, it will be sized to 'max((cpu_cores * 3) / 4, 1)'.
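In code, the sizing rule reads roughly like this (how pref values other than
-1 and positive counts are treated is an assumption):

  #include <algorithm>

  static int32_t PaintWorkerCount(int32_t aPrefValue, int32_t aCpuCores) {
    if (aPrefValue != -1) {
      return aPrefValue;  // explicit worker count from the pref
    }
    return std::max((aCpuCores * 3) / 4, 1);  // -1: size from the core count
  }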
The one tricky part of dispatching tiled paints to a thread pool is the
AsyncEndLayerTransaction message that must execute once all paints are
finished. Previously, this runnable would be queued after all the paints
had been queued, ensuring it would be run after they had all completed.
With a thread pool, there is no such guarantee. Instead, this commit uses
a flag on CompositorBridgeChild to signify whether all of the paints
have been queued ('mOutstandingAsyncEndLayerTransaction'); after
every tiled paint the flag is examined to see whether that paint was the
last one, and if so, AsyncEndLayerTransaction is run. In addition,
if the async paints complete before we even mark the end of the
layer transaction, we queue it like normal.
The profiler markers are also complicated by using a thread pool.
I don't know of a great way to keep them working as they are per
thread, so for now I've removed them. I may have been the only
one using them anyway.
MozReview-Commit-ID: 5LIJ9GWSfCn
--HG--
extra : rebase_source : 0c26806f337a1b4b1511945f9c72e787b426c5ba
Most of the Shadow DOM related code is behind "dom.webcomponents.enabled", and
this pref is only used by Shadow DOM right now, so we should rename it to
"dom.webcomponents.shadowdom.enabled".
MozReview-Commit-ID: er1c7AsSSW
Mechanism for restricting pinch zooming when the gesture is a two-finger
pan. If the pinch span is below a given threshold and the scroll distance
is above a given threshold, the zoom level is maintained to allow smooth
panning with two fingers.
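A sketch of the threshold logic described above (names and units are
illustrative):

  // If the pinch span stays below its threshold while the scroll distance
  // exceeds its threshold, treat the gesture as a two-finger pan and keep
  // the current zoom level.
  static bool ShouldMaintainZoom(float aPinchSpan, float aScrollDistance,
                                 float aSpanThreshold,
                                 float aScrollThreshold) {
    return aPinchSpan < aSpanThreshold && aScrollDistance > aScrollThreshold;
  }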
MozReview-Commit-ID: 62Fv0WeplOo
--HG--
extra : rebase_source : 71d7da4c4b4cc3a5adde13ad1a7c1fbf49856c35
This patch introduces three keyed histograms:
- PREFERENCES_FILE_LOAD_SIZE_B
- PREFERENCES_FILE_LOAD_NUM_PREFS
- PREFERENCES_FILE_LOAD_TIME_US
They are all keyed on the prefs file's name; in my local Linux64 build there
are 13 such files.
Because prefs start up earlier than telemetry, we have to save the measurements
and then pass them to telemetry later.
MozReview-Commit-ID: H6KD7oeK8O0
--HG--
extra : rebase_source : b89c34270b07186b0ccc71bd41c70d81b2c6a334
New content processes get prefs in three ways.
- They read them from greprefs.js, prefs.js and other such files.
- They get sent "early prefs" from the parent process via the command line
(-intPrefs/-boolPrefs/-stringPrefs).
- They get sent "late prefs" from the parent process via IPC message.
(The latter two are necessary for communicating prefs that have been added or
modified in the parent process since the file reading occurred at startup.)
We have some machinery that detects if a late pref is accessed before the late
prefs are set, which is good. But it has a big exception in it; late pref
accesses that occur early via Add*VarCache() and RegisterCallbackAndCall() are
allowed.
This exception was added in bug 1341414. The description of that bug says "We
should change AddBoolVarCache so that it doesn't look at the pref in the
content process until prefs have been received from the parent." Unfortunately,
the patch in that bug added the exception to the checking without changing
Add*VarCache() in the suggested way!
This means it's possible for late prefs to be read early via VarCaches (or
RegisterCallbackAndCall()) when their values are incorrect, which is bad.
Changing Add*VarCache() to delay the reading as bug 1341414 originally
suggested seems difficult. A simpler fix is to just remove the exception in the
checking and extend the early prefs list as necessary. This patch does that,
lengthening the early prefs list from ~210 to ~300. Fortunately, most (all?) of
the added prefs are ints or bools rather than strings, so it doesn't increase
the size of the command line arguments for content processes by too much.
--HG--
extra : rebase_source : 5ea5876c206401d23a368ef9cb5040522c9ca377
This feature is intended to help Firefox frontend developers experiment with
replacing XUL content with modern flexbox. We might also eventually use
this emulation to *actually* render most or all of our legacy XUL UI.
MozReview-Commit-ID: 3g2W9o3t23H
--HG--
extra : rebase_source : a3e8b443d9b58e5af3287a23de6edc276ed5e847
This patch adds the following Microsoft IMEs to the black list of IMEs which
set their open state to "closed" when the input scope is set to IS_URL, and
sets the input scope for the URL bar to IS_DEFAULT for them.
Additionally, this adds a new pref to disable this hack because it will
affect a lot of users, and somebody may not like it if they use a tablet.
The new black listed IMEs:
- Microsoft Bopomofo
- Microsoft ChangJie
- Microsoft Phonetic
- Microsoft Quick
- Microsoft New ChangJie
- Microsoft New Phonetic
- Microsoft New Quick
- Microsoft Pinyin
- Microsoft Pinyin New Experience Input Style
- Microsoft Wubi
- Microsoft IME for Korean (except on Win7)
- Microsoft Old Hangul
MozReview-Commit-ID: BwJKFcu80B8
--HG--
extra : rebase_source : 75aeed04504b476520102984ab6e7875c98b36c8
Currently pref_ReadPrefFromJar() will return NS_OK if parsing fails. This is
weird, and inconsistent with openPrefFile().
MozReview-Commit-ID: 7cHSewQYymE
--HG--
extra : rebase_source : 0c9ac8294da022db0b9d03e4855aaabe768f3d71
This part is about switching the audio processing feature on and off. It's
long, but it's mostly mechanical changes from the old API to the new one.
This also covers resetting the processing in case of device changes (with
macros).
MozReview-Commit-ID: EI2TxHRicEr
--HG--
extra : rebase_source : 5c389e00019633d371d74cdd2d881dab4d353848
extra : source : 2c7a56648de9125ae1893d54ec011b6cbb181d86