See the comment in the file explaining it. For a case of logging 100k numbers,
this dropped the time per number from 15 microseconds to 9 with the console
closed, and from 55 microseconds to 38 with the console open. I think we could
shave off more with a native approach, but I'm not sure that's worth it, and it
would be much more likely to introduce bugs.
Differential Revision: https://phabricator.services.mozilla.com/D143782
This also fixes a couple of existing comments, bringing them in line with the
actual behaviour and the names used.
Differential Revision: https://phabricator.services.mozilla.com/D110428
The test that is timing out with these patches does something relatively simple:
await TestUtils.waitForCondition(async function() {
  let color = await ContentTask.spawn(browserWindow, async function() {
    /* Do stuff... */
  });
  return color == something;
});
await closeWindow(browserWindow);
It turns out that this can intermittently leak the window, because
waitForCondition uses setInterval, which can schedule multiple tasks while
we're still awaiting the inner ContentTask.
This means that a ContentTask may still be awaiting us when we get to close
the window. Closing the window prevents the ContentTask from finishing, and
thus we leak a promise in gPromises that keeps the window alive:
https://searchfox.org/mozilla-central/rev/6566d92dd46417a2f57e75c515135ebe84c9cef5/testing/mochitest/BrowserTestUtils/ContentTask.jsm#24
As a result, the window stays alive all the way until shutdown.
Fix it by ensuring that we only run one task at a time.
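The shape of the fix, as a minimal sketch (hypothetical code; the real change
is in TestUtils.waitForCondition): skip interval ticks while a previous async
check is still pending, so at most one task is in flight at a time.
function waitForCondition(conditionFn, interval = 100) {
  return new Promise(resolve => {
    let running = false;
    let intervalId = setInterval(async () => {
      if (running) {
        // The previous check is still awaiting its ContentTask; don't
        // start another one on top of it.
        return;
      }
      running = true;
      try {
        if (await conditionFn()) {
          clearInterval(intervalId);
          resolve();
        }
      } finally {
        running = false;
      }
    }, interval);
  });
}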
Differential Revision: https://phabricator.services.mozilla.com/D52833
--HG--
extra : moz-landing-system : lando
***
Bug 1514594: Part 3a - Change ChromeUtils.import to return an exports object; not pollute global. r=mccr8
This changes the behavior of ChromeUtils.import() to return an exports object,
rather than a module global, in all cases except when `null` is passed as a
second argument, and changes the default behavior not to pollute the global
scope with the module's exports. Thus, the following code written for the old
model:
ChromeUtils.import("resource://gre/modules/Services.jsm");
is approximately the same as the following, in the new model:
var {Services} = ChromeUtils.import("resource://gre/modules/Services.jsm");
Since the two behaviors are mutually incompatible, this patch will land with a
scripted rewrite to update all existing callers to use the new model rather
than the old.
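For completeness, the escape hatch mentioned above: passing `null` as the
second argument keeps the old behavior of returning the module global rather
than an exports object (the variable name here is illustrative):
// Returns the module's global scope object, not its exports:
var servicesGlobal = ChromeUtils.import("resource://gre/modules/Services.jsm", null);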
***
Bug 1514594: Part 3b - Mass rewrite all JS code to use the new ChromeUtils.import API. rs=Gijs
This was done using the following script:
https://bitbucket.org/kmaglione/m-c-rewrites/src/tip/processors/cu-import-exports.jsm
***
Bug 1514594: Part 3c - Update ESLint plugin for ChromeUtils.import API changes. r=Standard8
Differential Revision: https://phabricator.services.mozilla.com/D16747
***
Bug 1514594: Part 3d - Remove/fix hundreds of duplicate imports from sync tests. r=Gijs
Differential Revision: https://phabricator.services.mozilla.com/D16748
***
Bug 1514594: Part 3e - Remove no-op ChromeUtils.import() calls. r=Gijs
Differential Revision: https://phabricator.services.mozilla.com/D16749
***
Bug 1514594: Part 3f.1 - Cleanup various test corner cases after mass rewrite. r=Gijs
***
Bug 1514594: Part 3f.2 - Cleanup various non-test corner cases after mass rewrite. r=Gijs
Differential Revision: https://phabricator.services.mozilla.com/D16750
--HG--
extra : rebase_source : 359574ee3064c90f33bf36c2ebe3159a24cc8895
extra : histedit_source : b93c8f42808b1599f9122d7842d2c0b3e656a594%2C64a3a4e3359dc889e2ab2b49461bab9e27fc10a7
This patch was autogenerated by my decomponents.py
It covers almost every file with the extension js, jsm, html, py,
xhtml, or xul.
It removes blank lines after removed lines when the removed lines are
preceded by either blank lines or the start of a new block. The "start
of a new block" is defined fairly hackily: the line starts with //, or
it ends with */, {, <![CDATA[, """, or '''. The first two cover
comments, the third covers JS, the fourth covers JS embedded in XUL,
and the final two cover JS embedded in Python. This also applies if
the removed line was the first line of the file.
It covers pattern-matching cases like "var {classes: Cc,
interfaces: Ci, utils: Cu, results: Cr} = Components;". It removes the
entire declaration if all of the names are one of Ci, Cr, Cc, or Cu;
otherwise it removes the appropriate ones and leaves the residue
behind. If only one name is left behind, it turns the declaration into
a normal, non-pattern-matching variable definition. (For instance,
"const { classes: Cc, Constructor: CC, interfaces: Ci, utils: Cu } =
Components" becomes "const CC = Components.Constructor".)
MozReview-Commit-ID: DeSHcClQ7cG
--HG--
extra : rebase_source : d9c41878036c1ef7766ef5e91a7005025bc1d72b
This was done using the following script:
37e3803c7a/processors/chromeutils-import.jsm
MozReview-Commit-ID: 1Nc3XDu0wGl
--HG--
extra : source : 12fc4dee861c812fd2bd032c63ef17af61800c70
extra : intermediate-source : 34c999fa006bffe8705cf50c54708aa21a962e62
extra : histedit_source : b2be2c5e5d226e6c347312456a6ae339c1e634b0
* In the first stage, we fetch changed records, newest first, up to the
download limit. We keep track of the oldest record modified time we
see.
* Once we've fetched all records, we reconcile, noting records that
fail to decrypt or reconcile for the next sync. We then ask the store
to apply all remaining records. Previously, `applyIncomingBatchSize`
specified how many records to apply at a time. I removed this because
it added an extra layer of indirection that's no longer necessary,
now that download batching buffers all records in memory, and all
stores are async.
* In the second stage, we fetch IDs for all remaining records changed
between the last sync and the oldest modified time we saw in the
first stage. We *don't* set the download limit here, to ensure we
add *all* changed records to our backlog, and we use the `"oldest"`
sort order instead of `"index"`.
* In the third stage, we backfill as before. We don't want large deltas
to delay other engines from syncing, so we still only take IDs up to
the download limit from the backlog, and include failed IDs from the
previous sync. On subsequent syncs, we'll keep fetching from the
backlog until it's empty.
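Roughly, the three stages above look like this (a sketch only;
fetchRecords, fetchIds, reconcileAndApply, backlog, and previousFailed are
illustrative names, not the actual Sync API):
async function downloadIncoming(lastSync, downloadLimit, backlog, previousFailed) {
  // Stage 1: newest first, up to the download limit; remember the
  // oldest modified time we saw.
  let records = await fetchRecords({ newer: lastSync, sort: "newest",
                                     limit: downloadLimit });
  let oldestModified = Math.min(...records.map(r => r.modified));
  let failed = await reconcileAndApply(records);

  // Stage 2: IDs only, with *no* download limit, sorted oldest first,
  // for everything changed between the last sync and the oldest
  // modified time from stage 1.
  let changedIds = await fetchIds({ newer: lastSync, older: oldestModified,
                                    sort: "oldest" });
  backlog.push(...changedIds);

  // Stage 3: backfill up to the download limit from the backlog, plus
  // IDs that failed to decrypt or reconcile on the previous sync.
  let toFetch = [...previousFailed, ...backlog.splice(0, downloadLimit)];
  failed.push(...await reconcileAndApply(await fetchRecords({ ids: toFetch })));
  return failed;
}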
Other changes to note in this patch:
* `Collection::_rebuildURL` now allows callers to specify both `older`
and `newer`. According to :rfkelly, this is explicitly and
intentionally supported.
* Tests that exercise `applyIncomingBatchSize` are gone, since that's
no longer a thing.
* The test server now shuffles records if the sort order is
unspecified.
MozReview-Commit-ID: 4EXvNOa8mIo
--HG--
extra : rebase_source : f382f0a883c5aa1f6a4466fefe22ad1a88ab6d20