***
Bug 1514594: Part 3a - Change ChromeUtils.import to return an exports object; not pollute global. r=mccr8
This changes the behavior of ChromeUtils.import() to return an exports object,
rather than a module global, in all cases except when `null` is passed as a
second argument, and changes the default behavior not to pollute the global
scope with the module's exports. Thus, the following code written for the old
model:
ChromeUtils.import("resource://gre/modules/Services.jsm");
is approximately the same as the following, in the new model:
var {Services} = ChromeUtils.import("resource://gre/modules/Services.jsm");
Since the two behaviors are mutually incompatible, this patch will land with a
scripted rewrite to update all existing callers to use the new model rather
than the old.
***
Bug 1514594: Part 3b - Mass rewrite all JS code to use the new ChromeUtils.import API. rs=Gijs
This was done using the following script:
https://bitbucket.org/kmaglione/m-c-rewrites/src/tip/processors/cu-import-exports.jsm
***
Bug 1514594: Part 3c - Update ESLint plugin for ChromeUtils.import API changes. r=Standard8
Differential Revision: https://phabricator.services.mozilla.com/D16747
***
Bug 1514594: Part 3d - Remove/fix hundreds of duplicate imports from sync tests. r=Gijs
Differential Revision: https://phabricator.services.mozilla.com/D16748
***
Bug 1514594: Part 3e - Remove no-op ChromeUtils.import() calls. r=Gijs
Differential Revision: https://phabricator.services.mozilla.com/D16749
***
Bug 1514594: Part 3f.1 - Cleanup various test corner cases after mass rewrite. r=Gijs
***
Bug 1514594: Part 3f.2 - Cleanup various non-test corner cases after mass rewrite. r=Gijs
Differential Revision: https://phabricator.services.mozilla.com/D16750
--HG--
extra : rebase_source : 359574ee3064c90f33bf36c2ebe3159a24cc8895
extra : histedit_source : b93c8f42808b1599f9122d7842d2c0b3e656a594%2C64a3a4e3359dc889e2ab2b49461bab9e27fc10a7
Currently they cause the `String::from_utf16()` call to return an error result,
and then the subsequent `unwrap()` on that result aborts.
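For illustration, a standalone Rust sketch of the failure mode (not the actual
call site): a lone surrogate is enough to make `String::from_utf16()` return
an error, and `from_utf16_lossy()` shows one way such input could be tolerated
instead.

  fn main() {
      let bad = [0xD800u16]; // lone high surrogate: invalid UTF-16
      assert!(String::from_utf16(&bad).is_err());
      // String::from_utf16(&bad).unwrap(); // would panic, aborting as described
      assert_eq!(String::from_utf16_lossy(&bad), "\u{FFFD}");
  }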
--HG--
extra : rebase_source : 6be81d4d1e618444f762a1ba4e93b5ce648dd45b
Currently, if a get_char() call returns EOF, the index moves beyond the
buffer's bounds and get_char() cannot be called again without triggering a
panic. As a result, every place that encounters an EOF and then does further
parsing first ungets the EOF... except there was one place that failed to do that:
the match case for CharKind::Slash in get_token(). This meant that a single '/'
at the end of the input could trigger a bounds violation (but only if it is the
start of a new token).
This EOF-unget requirement is subtle and easy to get wrong, so this patch
eliminates it. get_char() now can be called repeatedly after an EOF, and will
return EOF on each subsequent call. This means that some of the existing
unget_char() calls can be removed. Some others are still necessary to get line
numbers correct in error messages, but the outcome of mishandled cases is now
much less drastic -- an incorrect line number in an error message instead of a
panic.
The patch also clarifies a couple of related comments.
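A minimal sketch of the new get_char() behavior (hypothetical field names and
a hypothetical EOF sentinel; the real code differs in detail):

  const EOF: u8 = 0; // hypothetical sentinel

  struct Lexer {
      buf: Vec<u8>,
      index: usize,
  }

  impl Lexer {
      // After this patch, get_char() parks at EOF: once `index` reaches the
      // end of the buffer it stays there, and every further call returns EOF
      // instead of indexing out of bounds.
      fn get_char(&mut self) -> u8 {
          if self.index == self.buf.len() {
              return EOF;
          }
          let c = self.buf[self.index];
          self.index += 1;
          c
      }

      // Still needed in a few places, purely to keep error line numbers right.
      fn unget_char(&mut self) {
          self.index -= 1;
      }
  }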
--HG--
extra : rebase_source : 62a3f07bb83b95495b2975724876b619a33b5c9d
With the new loading model for frame scripts, lexical variables defined in a
global frame script are not available to other frame scripts.
Additionally, scripts loaded into a context object by the subscript loader
should not depend on being able to access properties of the message manager as
if they were globals.
MozReview-Commit-ID: 6QEyA1sBVOV
--HG--
extra : rebase_source : d3a7820104645dc356bdf8ea660b970e1f6c20e7
I initially tried to avoid this, but decided it was necessary given the number
of times I had to repeat the same pattern of casting a variable to void*, and
then casting it back in a part of the code far removed from where its original
type was known.
This changes our preference callback registration functions to match the type
of the callback's closure argument to the actual type of the closure pointer
passed, and then cast it to the type of our generic callback function. This
ensures that the callback function always gets an argument of the type it's
actually expecting without adding any additional runtime memory or
QueryInterface overhead for tracking it.
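The patch itself is C++; the following Rust sketch (hypothetical names) mirrors
the idea: a registration function generic over the closure type checks the
callback/closure pairing at compile time, and only then erases both to the
generic shapes used for storage.

  use std::ffi::c_void;

  // Shape of the stored callback: the closure travels as *mut c_void.
  type PrefCallback = fn(pref_name: &str, closure: *mut c_void);

  struct Observer {
      func: PrefCallback,
      closure: *mut c_void,
  }

  // Because this is generic over T, the callback's closure parameter and the
  // closure pointer passed must have the same type, or registration will not
  // compile. The fn-pointer transmute mirrors the cast the C++ patch performs.
  fn register_callback<T>(func: fn(&str, *mut T), closure: *mut T) -> Observer {
      Observer {
          func: unsafe { std::mem::transmute::<fn(&str, *mut T), PrefCallback>(func) },
          closure: closure.cast::<c_void>(),
      }
  }

With this shape, a mismatched pairing -- say, a callback taking *mut Foo
registered with a *mut Bar closure -- fails at compile time rather than handing
the callback the wrong type at call time.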
MozReview-Commit-ID: 9tLKBe10ddP
--HG--
extra : rebase_source : 7524fa8dcd5585f5a31fdeb37d95714f1bb94922
This patch changes our preference look-up behavior to first check the dynamic
hashtable, and then fall back to the shared map.
In order for this to work, we need to make several other changes as well:
- Attempts to modify a preference that only exists in the shared map
  require that we copy it to the dynamic table, and change the value of the
  new entry.
- Attempts to clear a user preference with no default value, but which also
  exists in the shared map, require that we keep an entry in the dynamic
  table to mask the shared entry. To make this work, we change the type of
  these entries to None, and ignore them during look-ups and iteration.
- Iteration needs to take both hashtables into consideration. The
serialization iterator for changed preferences only needs to care about
dynamic values, so it remains unchanged. Most of the others need to use
PrefsIter() instead.
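A minimal Rust sketch of the look-up order described above (hypothetical
types; real pref values are richer than an i32, and the shared map is a
read-only snapshot rather than a HashMap):

  use std::collections::HashMap;

  // A dynamic entry of type None exists only to mask an entry in the shared map.
  enum PrefEntry {
      Value(i32),
      None,
  }

  struct Prefs {
      dynamic: HashMap<String, PrefEntry>, // per-process, mutable
      shared: HashMap<String, i32>,        // read-only, inherited from the parent
  }

  impl Prefs {
      // Check the dynamic table first, then fall back to the shared map.
      fn get(&self, name: &str) -> Option<i32> {
          match self.dynamic.get(name) {
              Some(PrefEntry::Value(v)) => Some(*v),
              Some(PrefEntry::None) => None, // masked: the shared entry is ignored
              None => self.shared.get(name).copied(),
          }
      }
  }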
MozReview-Commit-ID: 9PWmSZxoC9Z
--HG--
extra : intermediate-source : b4f9178f132de2b5f7064df9a9e1b489ea6576c3
extra : absorb_source : 57fd90ea8195adff9d314b813e94dc643fd085e4
extra : source : 5051f15fc2005667cfe76ccae0afb1fb0657c103
extra : histedit_source : 2ebf0e90a5e1b65795b20e9269def7cbbf2d1f11
Sticky prefs are already specifiable with `sticky_pref`, but this is a more
general attribute mechanism. The ability to specify a locked pref in the data
file is new.
The patch also adds nsIPrefService.readDefaultPrefsFromFile, to match the
existing nsIPrefService.readUserPrefsFromFile method, and converts a number of
the existing tests to use it.
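With this patch, a default prefs data file can tag a pref with attributes
directly. For example (illustrative pref names):

  pref("browser.example.enabled", true, sticky);
  pref("network.example.timeout", 30, locked);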
MozReview-Commit-ID: 9LLMBJVZfg7
--HG--
extra : rebase_source : fa25bad87c4d9fcba6dc13cd2cc04ea6a2354f51
This was first suggested 17 years ago!
The error recovery works by just scanning forward for the next ';' token.
This change allows a lot of the gtest tests to be combined.
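Sketched in Rust with hypothetical names, the recovery loop is simply:

  enum Token { Semicolon, Eof, Other }

  struct Parser { /* buffer, position, ... */ }

  impl Parser {
      fn get_token(&mut self) -> Token {
          unimplemented!() // the real tokenizer
      }

      // On a parse error: skip tokens until the ';' that terminates the
      // malformed pref (or EOF), then resume parsing with the next pref.
      fn recover(&mut self) {
          loop {
              match self.get_token() {
                  Token::Semicolon | Token::Eof => return,
                  Token::Other => {} // discard
              }
          }
      }
  }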
MozReview-Commit-ID: CbZ2OFtdIxf
--HG--
extra : rebase_source : 5a43fff06e88b45a095725856bbe1e6b5470c9a0
They currently fail to pass on `aKind`, always getting the user value (when
possible). There are three callsites that are affected:
- nsSHistory::Startup, docshell/shistory/nsSHistory.cpp.
- FeatureState::SetDefaultFromPref(), in gfx/config/gfxFeature.cpp.
- gfxPlatform::InitOMTPConfig(), in gfx/thebes/gfxPlatform.cpp.
The patch also adds a gtest that would have failed prior to the fix.
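As a sketch of the bug pattern (hypothetical function names; the real code is
C++, and `aKind` is the kind argument being dropped):

  #[derive(Clone, Copy)]
  enum PrefValueKind { Default, User }

  fn lookup(_name: &str, _kind: PrefValueKind) -> bool { false } // stub

  // Before the fix: the wrapper accepted a kind but never forwarded it, so
  // callers asking for the default value silently got the user value.
  fn get_bool_broken(name: &str, _kind: PrefValueKind) -> bool {
      lookup(name, PrefValueKind::User)
  }

  // After the fix: the caller's kind is passed through.
  fn get_bool(name: &str, kind: PrefValueKind) -> bool {
      lookup(name, kind)
  }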
MozReview-Commit-ID: L0U1XQmPUFc
--HG--
extra : rebase_source : d51d09836609c5a45d0b9f20570427681d8b3309
This patch was autogenerated by my decomponents.py script.
It covers almost every file with the extension js, jsm, html, py,
xhtml, or xul.
It removes blank lines after removed lines, when the removed lines are
preceded by either blank lines or the start of a new block. The "start
of a new block" is defined fairly hackily: either the line starts with
//, ends with */, ends with {, <![CDATA[, """ or '''. The first two
cover comments, the third one covers JS, the fourth covers JS embedded
in XUL, and the final two cover JS embedded in Python. This also
applies if the removed line was the first line of the file.
It covers the pattern matching cases like "var {classes: Cc,
interfaces: Ci, utils: Cu, results: Cr} = Components;". It'll remove
the entire thing if they are all either Ci, Cr, Cc or Cu, or it will
remove the appropriate ones and leave the residue behind. If there's
only one behind, then it will turn it into a normal, non-pattern
matching variable definition. (For instance, "const { classes: Cc,
Constructor: CC, interfaces: Ci, utils: Cu } = Components" becomes
"const CC = Components.Constructor".)
MozReview-Commit-ID: DeSHcClQ7cG
--HG--
extra : rebase_source : d9c41878036c1ef7766ef5e91a7005025bc1d72b
The prefs parser has two significant problems.
- It doesn't separate tokenizing from parsing.
- It is implemented as a loop around a big switch on a "current state"
variable.
As a result, it is hard to understand and modify, slower than it could be, and
in obscure cases (involving comments and whitespace) it fails to parse what
should be valid input.
This patch replaces it with a recursive descent parser (albeit one without any
recursion!) that has separate tokenization. The new parser is easier to
understand and modify, more correct, and has better error messages. It doesn't
do error recovery, but that would be much easier to add than in the old parser.
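A rough Rust sketch of the new shape (hypothetical names): a tokenizer feeding
a flat parse loop. The prefs grammar has no nesting, which is why the
"descent" never actually needs to recurse.

  enum Token { Pref, UserPref, StickyPref, Eof }
  enum PrefKind { Default, User, Sticky }

  struct Parser { /* input buffer, position, line number, ... */ }

  impl Parser {
      // The tokenizer: skips whitespace and comments, returns one token.
      fn get_token(&mut self) -> Result<Token, String> {
          unimplemented!()
      }

      // Expects '(', a name string, ',', a value, ')', ';'.
      fn parse_pref(&mut self, _kind: PrefKind) -> Result<(), String> {
          unimplemented!()
      }

      // The top level is a flat loop over pref definitions.
      fn parse(&mut self) -> Result<(), String> {
          loop {
              match self.get_token()? {
                  Token::Pref => self.parse_pref(PrefKind::Default)?,
                  Token::UserPref => self.parse_pref(PrefKind::User)?,
                  Token::StickyPref => self.parse_pref(PrefKind::Sticky)?,
                  Token::Eof => return Ok(()),
              }
          }
      }
  }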
The new parser also runs about 1.9x faster than the existing parser. (As
measured by parsing greprefs.js's contents from memory 1000 times in
succession, omitting the prefs hash table construction. If the table
construction is included, it's about 1.6x faster.)
The new parser is slightly stricter than the old parser in a few ways.
- Disconcertingly, the old parser allowed arbitrary junk between prefs
(including at the start and end of the prefs file) so long as that junk
didn't include any of the following chars: '/', '#', 'u', 's', 'p'. I.e.
lines like these:
!foo@bar&pref("prefname", true);
ticky_pref("prefname", true); // missing 's' at start
User_pref("prefname", true); // should be 'u' at start
would all be treated the same as this:
pref("prefname", true);
The new parser disallows such junk because it isn't necessary and seems like
an unintentional botch by the old parser.
- The old parser allowed character 0x1a (SUB) between tokens and treated it
like '\n'.
The new parser does not allow this character. SUB was used to indicate
end-of-file (*not* end-of-line) in some old operating systems such as MS-DOS,
but this doesn't seem necessary today.
- The old parser tolerated (with a warning) invalid escape sequences within
string literals -- such as "\q" (not a valid escape) and "\x1" and "\u12"
(both of which have insufficient hex digits) -- accepting them literally.
The new parser does not tolerate invalid escape sequences because it doesn't
seem necessary and would complicate things.
- The old parser tolerated character 0x00 (NUL) within string literals; this is
dangerous because C++ code that manipulates string values with embedded NULs
will almost certainly consider those chars as end-of-string markers.
The new parser treats NUL chars as end-of-file, to avoid this danger and
because it facilitates a significant optimization (described within the
code).
- The old parser allowed integer literals to overflow, silently wrapping them.
The new parser treats integer overflow as a parse error. This seems better,
and it caught existing overflows of places.database.lastMaintenance, in
testing/profiles/prefs_general.js (bug 1424030) and
testing/talos/talos/config.py (bug 1434813).
The first of these changes meant that a couple of existing prefs with ";;" at
the end had to be changed (done in the preceding patch).
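For example (hypothetical pref names), lines like these are now parse errors:

  pref("some.pref", true);;           // the stray second ';' is junk between prefs
  pref("int.pref", 2147483648);       // exceeds the 32-bit integer range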
The minor increase in strictness shouldn't be a problem for default pref files
such as greprefs.js within the application (which we can modify), nor for
app-written prefs files such as prefs.js. It could affect user-written prefs
files such as user.js; the experience above suggests that integer overflow and
";;" are the most likely problems in practice. In my opinion, the risk here is
acceptable.
The new parser also does a better job of tracking line numbers because it (a)
treats "\r\n" sequences as a single end-of-line marker, and (a) pays attention
to end-of-line sequences within string literals.
Finally, the patch adds thorough tests of both valid and invalid syntax.
MozReview-Commit-ID: JD3beOQl4AJ
The prefs parser has two significant problems.
- It doesn't separate tokenizing from parsing.
- It is implemented as a loop around a big switch on a "current state"
variable.
As a result, it is hard to understand and modify, slower than it could be, and
in obscure cases (involving comments and whitespace) it fails to parse what
should be valid input.
This patch replaces it with a recursive descent parser (albeit one without any
recursion!) that has separate tokenization. The new parser is easier to
understand and modify, more correct, and has better error messages. It doesn't
do error recovery, but that would be much easier to add than in the old parser.
The new parser also runs about 1.9x faster than the existing parser. (As
measured by parsing greprefs.js's contents from memory 1000 times in
succession, omitting the prefs hash table construction. If the table
construction is included, it's about 1.6x faster.)
The new parser is slightly stricter than the old parser in a few ways.
- Disconcertingly, the old parser allowed arbitrary junk between prefs
(including at the start and end of the prefs file) so long as that junk
didn't include any of the following chars: '/', '#', 'u', 's', 'p'. I.e.
lines like these:
!foo@bar&pref("prefname", true);
ticky_pref("prefname", true); // missing 's' at start
User_pref("prefname", true); // should be 'u' at start
would all be treated the same as this:
pref("prefname", true);
The new parser disallows such junk because it isn't necessary and seems like
an unintentional botch by the old parser.
- The old parser allowed character 0x1a (SUB) between tokens and treated it
like '\n'.
The new parser does not allow this character. SUB was used to indicate
end-of-file (*not* end-of-line) in some old operating systems such as MS-DOS,
but this doesn't seem necessary today.
- The old parser tolerated (with a warning) invalid escape sequences within
string literals -- such as "\q" (not a valid escape) and "\x1" and "\u12"
(both of which have insufficient hex digits) -- accepting them literally.
The new parser does not tolerate invalid escape sequences because it doesn't
seem necessary and would complicate things.
- The old parser tolerated character 0x00 (NUL) within string literals; this is
dangerous because C++ code that manipulates string values with embedded NULs
will almost certainly consider those chars as end-of-string markers.
The new parser treats NUL chars as end-of-file, to avoid this danger and
because it facilitates a significant optimization (described within the
code).
- The old parser allowed integer literals to overflow, silently wrapping them.
The new parser treats integer overflow as a parse error. This seems better,
and it caught an existing overflow in testing/profiles/prefs_general.js, for
places.database.lastMaintenance (see bug 1424030).
The first of these changes meant that a couple of existing prefs with ";;" at
the end had to be changed (done in the preceding patch).
The minor increase in strictness shouldn't be a problem for default pref files
such as greprefs.js within the application (which we can modify), nor for
app-written prefs files such as prefs.js. It could affect user-written prefs
files such as user.js; the experience above suggests that ";;" is the most
likely problem in practice. In my opinion, the risk here is acceptable.
The new parser also does a better job of tracking line numbers because it (a)
treats "\r\n" sequences as a single end-of-line marker, and (a) pays attention
to end-of-line sequences within string literals.
Finally, the patch adds thorough tests of both valid and invalid syntax.
MozReview-Commit-ID: 8EYWH7KxGG
* * *
[mq]: win-fix
MozReview-Commit-ID: 91Bxjfghqfw
--HG--
extra : rebase_source : a8773413e5d68c33e4329df6819b6e1f82c22b85
This was done using the following script:
37e3803c7a/processors/chromeutils-import.jsm
MozReview-Commit-ID: 1Nc3XDu0wGl
--HG--
extra : source : 12fc4dee861c812fd2bd032c63ef17af61800c70
extra : intermediate-source : 34c999fa006bffe8705cf50c54708aa21a962e62
extra : histedit_source : b2be2c5e5d226e6c347312456a6ae339c1e634b0