* Changed positiveInteger to validate upper bounds
* Removed processing for the 'session' field, which no longer exists
* Added default values for absent fields
MozReview-Commit-ID: E4x6TyMwZQs
--HG--
extra : rebase_source : 745cfeb0e3757651ed20f04de19eccb98eea9fc5
setTimeout was previously undefined. Created a test specifically
for MigrationUtils, since trying to test this through an actual
Chrome migration path seemed trickier and more fragile.
MozReview-Commit-ID: 33U7ZzzG1CC
--HG--
extra : rebase_source : 0399d9bc768d8eb7451fbb75ec3282f903cdb4ad
As part of this change, the confusingly named global variable 'state' is
renamed to 'currentURLTargetType', and named "enum" values are assigned
to it rather than raw integers.
MozReview-Commit-ID: FTEOB9wF8Q1
If the reftest harness times out the load of a URL before the 'load' event
has even fired, the error message that we get is 'load failed with unknown
reason'. This isn't very helpful for people unfamiliar with the harness. This
change sets an initial error message that notes that we're waiting on the
'load' event, what the timeout delay is, and which URL we are/were waiting on.
This allows the timeout delay to be compared to log timestamps and makes it
clear whether the timeout occurred for some URL other than the one we were
expecting to load (which would indicate an error in the harness logic).
It should now be impossible for the 'load failed with unknown reason' message
to occur,
but if there is a logic error in the harness code (some race condition?) then
it may still happen.
MozReview-Commit-ID: JOb1kYBpLro
The default debug console line limit is far too short, especially when
debugging a debug build or a build with some logging turned on. I've never
had any issues with turning off the limit, so hopefully this is fine for
everyone else too.
MozReview-Commit-ID: Krlo7XPEjE0
By default the build console is limited to 500 lines. This can be far too
short, especially when building using `mach build --verbose`. I've run with
this value set to 1,000,000 before without issue, so 10,000 should be fine.
MozReview-Commit-ID: GxBUjW7nOYJ
There's no good reason why these can't be code constants.
Especially given that, due to a bug, changes to the
"idle_queue.{min,long}_period" prefs were not being passed on to the C++
code!
Here's why: those two prefs were specified as integers in all.js. But we used
AddFloatVarCache() to set up the reading of those prefs. libpref fakes floats
by storing them as strings and then converting them to floats when they are
read.
Which means that AddFloatVarCache() used to fail to get the value from all.js
-- because there's a type mismatch, int vs. string -- and instead use the
fallback default. That value is the same as the one in all.js, which is lucky.
But if someone changed the value in about:config to 100 (an integer), a similar
failure would have occurred and the value used by the C++ code wouldn't have
been updated!
Also note that idle_queue.max_timer_thread_bound did not have a value in
all.js.
What a mess!
--HG--
extra : rebase_source : 86f8fa905163803eb95007609c029e18c2c4f586
There's no good reason why these shouldn't just be constants. The patch also
appends "MiB" to some of the C++ values to make their meaning clearer.
This patch fixes one outright bug, and one inconsistency.
The bug is due to a prefname mismatch:
- ContentPrefs.cpp and AvailableMemoryTracker.cpp use
"memory.low_virtual_mem_threshold_mb".
- all.js uses "memory.low_virtual_memory_threshold_mb".
Which means that "memory.low_virtual_memory_threshold_mb" showed up in
about:config, but if you changed it, nothing would happen because the callback
listened for changes to "memory.low_virtual_mem_threshold_mb"!
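Roughly what the mismatch looked like (a sketch, not the literal code in
AvailableMemoryTracker.cpp; the registration call shape approximates the
Preferences API of the time):

    #include <cstdint>
    #include "mozilla/Preferences.h"

    // all.js says:
    //   pref("memory.low_virtual_memory_threshold_mb", 128);
    //
    // ...but the C++ side watches a slightly different name ("mem" vs.
    // "memory"), so changes made in about:config never reach the callback and
    // the hard-coded fallback of 256 stays in effect.
    static uint32_t sLowVirtualMemoryThresholdMB = 256;

    static void ThresholdPrefChanged(const char* aPref, void* /* aClosure */) {
      sLowVirtualMemoryThresholdMB = mozilla::Preferences::GetUint(
          "memory.low_virtual_mem_threshold_mb", /* aDefault */ 256);
    }

    static void InitThresholdPref() {
      mozilla::Preferences::RegisterCallbackAndCall(
          ThresholdPrefChanged, "memory.low_virtual_mem_threshold_mb");
    }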
Now for the inconsistency. The above means we actually use a value of 256 for
the virtual memory threshold, even though all.js says 128. But we *do* use a
value of 128 for the commit space threshold, because that's what all.js says
and that prefname is used correctly everywhere. The patch changes the commit
space threshold to 256 for consistency with the virtual memory threshold.
What a mess!
--HG--
extra : rebase_source : d3842c732efa9ab0e90eeb6a4f341aeb289589ed
This was originally added for b2g, where the pref had a different value.
Bug 1398033 enabled it everywhere.
--HG--
extra : rebase_source : c8bb03190cd00d25e6c2ec62a99ed8e911e08197
This removes dead code involving headlessClient and lastRunCrashID in crash
reporting. headlessClient is now unconditional, and nsIXULRuntime.lastRunCrashID
is no longer used, so the code implementing it is removed.
MozReview-Commit-ID: AU4bUeIx3O0
libpref has callbacks. You can register a callback function against a
particular pref (or pref prefix), and it'll be called when any matching pref
changes.
This is implemented in two layers. The lower layer is the list of CallbackNode
objects, pointed to by gFirstCallback.
The upper layer involves gObserverTable, which is a hash table of ValueObserver
objects. It is used for callbacks registered via
Preferences::RegisterCallback() and Preferences::Add*VarCache(), but not for
observers registered with Preferences::Add{Weak,Strong}Observer(). If multiple
callbacks with identical prefnames, callback functions and MatchKinds occur,
they are commoned up into a single ValueObserver, which is then wrapped by a
single CallbackNode. (The callbacks within a single ValueObserver can have
different void* aClosure args; ValueObserver keeps those in an nsTArray.)
Note also that gObserverTable is very inelegant, with duplication of data
between the keys and the values due to it being a
nsRefPtrHashtable<ValueObserverHashKey, ValueObserver> and ValueObserver being
a subclass of ValueObserverHashKey(!)
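In rough outline, the two layers look like this (a simplified sketch, using
standard-library stand-ins for nsCString/nsTArray rather than the literal
Preferences.cpp declarations):

    #include <string>
    #include <vector>

    typedef void (*PrefChangedFunc)(const char* aPrefName, void* aClosure);

    // Lower layer: a singly linked list of callback nodes, walked whenever a
    // matching pref changes.
    struct CallbackNode {
      std::string mDomain;    // pref name (or prefix) to match
      PrefChangedFunc mFunc;  // the function to invoke
      void* mData;            // closure handed back to mFunc
      CallbackNode* mNext;
    };
    static CallbackNode* gFirstCallback = nullptr;

    // Upper layer: ValueObservers, commoned up by (prefname, callback,
    // MatchKind) and stored in gObserverTable. Note how the hash key's data
    // is baked into the value type itself -- the inelegance described above.
    enum class MatchKind { ExactMatch, PrefixMatch };

    class ValueObserverHashKey {
     public:
      std::string mPrefName;
      PrefChangedFunc mCallback;
      MatchKind mMatchKind;
    };

    class ValueObserver : public ValueObserverHashKey {
     public:
      // One entry per registration that shared the same key; usually just one.
      std::vector<void*> mClosures;
    };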
This extra layer might make sense if there were a lot of commoning up
happening, but there's not. Across all process kinds there is an average of
between 1.2 and 1.7 closures per ValueObserver. The vast majority have 1
closure, and there are a handful that have multiple; the highest number I've
seen is 71.
(Note that this two-layer design probably seemed more natural back when libpref
was spread across multiple files.)
This patch removes the ValueObserver layer. The only tricky part is that there
is a notion of a MatchKind -- ExactMatch or PrefixMatch -- which exists in the
ValueObserver layer but not in the CallbackNode layer. So the patch moves the
MatchKind into the CallbackNode layer, which is straightforward.
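After the patch, something along these lines suffices (again a simplified
sketch; MatchKind now lives on the node, and registration is a plain list
prepend):

    #include <string>

    typedef void (*PrefChangedFunc)(const char* aPrefName, void* aClosure);
    enum class MatchKind { ExactMatch, PrefixMatch };

    struct CallbackNode {
      std::string mDomain;    // pref name or prefix
      PrefChangedFunc mFunc;  // callback function
      void* mData;            // closure handed back to mFunc
      MatchKind mMatchKind;   // formerly held by ValueObserver
      CallbackNode* mNext;
    };
    static CallbackNode* gFirstCallback = nullptr;

    // No hash table insertion required: just prepend to the list.
    static void RegisterCallback(PrefChangedFunc aFunc, const char* aPref,
                                 void* aData, MatchKind aMatchKind) {
      gFirstCallback =
          new CallbackNode{aPref, aFunc, aData, aMatchKind, gFirstCallback};
    }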
On Linux64 this reduces memory usage by about 40 KiB in the parent process and
about 30 KiB in each content process.
The performance implications are minor.
- The list of CallbackNodes is a bit longer, so when a pref is changed we must
check more nodes. But we avoid possibly doing prefname matching twice.
- Callback registration is much faster, just a simple list prepend, without any
hash table insertion required.
- Callback unregistration is probably not much different. There is a longer
list to traverse, but no hash table removal.
- Pref lookup is by far the hottest libpref operation, and it's unchanged.
Genuinely affecting perf would require (a) a *lot* more almost-duplicate
callbacks that would have been commoned up, and (b) frequent changes to the
pref those callbacks are observing. This seems highly unlikely.
MozReview-Commit-ID: 6xo4xmytOf3
--HG--
extra : rebase_source : cf9884a8a40b2cc3b00ab165ae5ab4ec3383f72d
Bug 1345294 introduced nsPrefBranch::{get,set}StringPref(), which made it
possible to get UTF-8 strings from prefs directly; previously that required
using nsISupportsString with {get,set}ComplexValue. That bug also converted
most uses.
This patch finishes the job.
- It removes the nsISupportsString support.
- It converts existing code that relied on nsISupportsString.
- It removes the lint that was set up to detect such uses of nsISupportsString.
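For C++ callers the before/after looks roughly like this (the pref name is
hypothetical and the signatures are approximate to the Preferences API of the
time; JS callers similarly move from getComplexValue(..., Ci.nsISupportsString)
to getStringPref()):

    #include "mozilla/Preferences.h"
    #include "nsCOMPtr.h"
    #include "nsIPrefBranch.h"
    #include "nsISupportsPrimitives.h"

    // Old pattern: wrap the value in an nsISupportsString and go through
    // GetComplexValue().
    nsresult GetPrefOldWay(nsIPrefBranch* aBranch, nsAString& aResult) {
      nsCOMPtr<nsISupportsString> wrapper;
      nsresult rv = aBranch->GetComplexValue("browser.example.pref",  // hypothetical pref
                                             NS_GET_IID(nsISupportsString),
                                             getter_AddRefs(wrapper));
      NS_ENSURE_SUCCESS(rv, rv);
      return wrapper->GetData(aResult);
    }

    // New pattern: read the pref directly as a UTF-8 string.
    nsresult GetPrefNewWay(nsACString& aResult) {
      return mozilla::Preferences::GetCString("browser.example.pref", aResult);
    }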
--HG--
extra : rebase_source : fb7af066adfa0491a79fae6282a62b08661553c8