This uses a similar strategy to the one employed by moz_places_afterdelete_trigger:
we create a temp table into which we write host inserts, then delete all the
rows from it once we're done inserting, which effectively gives us a
per-statement trigger that does the significant work only once per host.
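SQLite only has row-level triggers, which is why the trick is needed. A minimal sketch of the pattern follows, assuming `db` is an open `Sqlite.jsm` connection; the table, trigger, and `get_host()` names are hypothetical and the real Places schema differs:

```js
// Sketch only: table, trigger, and get_host() names are hypothetical; the
// real Places schema and trigger bodies differ.
async function setupHostsTempTrigger(db) {
  // While a statement inserts pages, collect each affected host once.
  await db.execute(`
    CREATE TEMP TABLE updatehosts_temp (host TEXT PRIMARY KEY) WITHOUT ROWID
  `);
  await db.execute(`
    CREATE TEMP TRIGGER pages_afterinsert_trigger
    AFTER INSERT ON pages FOR EACH ROW
    BEGIN
      INSERT OR IGNORE INTO updatehosts_temp (host) VALUES (get_host(NEW.url));
    END
  `);
  // Deleting the collected rows fires this trigger once per distinct host, so
  // the expensive hosts-table maintenance runs per host rather than per page.
  await db.execute(`
    CREATE TEMP TRIGGER updatehosts_afterdelete_trigger
    AFTER DELETE ON updatehosts_temp FOR EACH ROW
    BEGIN
      INSERT OR IGNORE INTO hosts (host) VALUES (OLD.host);
    END
  `);
}

// After a batch of page inserts, a single DELETE does the per-host work.
async function flushHostUpdates(db) {
  await db.execute("DELETE FROM updatehosts_temp");
}
```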
MozReview-Commit-ID: 5TUueknq3ng
--HG--
extra : rebase_source : 1892edfcaa7b6afd29ce794a93d6ab3d46c48895
Bug 1345294 introduced nsPrefBranch::{get,set}StringPref(), which allow
getting and setting UTF-8 strings in prefs directly; previously that required
using nsISupportsString with {get,set}ComplexValue. That bug also converted
most uses.
This patch finishes the job.
- It removes the nsISupportsString support.
- It converts existing code that still relied on nsISupportsString.
- It removes the lint that was set up to detect such uses of nsISupportsString.
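For reference, the shape of the conversion looks roughly like this (the pref
name is made up, and both snippets assume a frontend consumer going through
`Services.prefs`):

```js
// Before: UTF-8 string prefs had to round-trip through nsISupportsString.
let ss = Cc["@mozilla.org/supports-string;1"]
           .createInstance(Ci.nsISupportsString);
ss.data = "caf\u00e9";
Services.prefs.setComplexValue("example.utf8.pref", Ci.nsISupportsString, ss);
let oldValue =
  Services.prefs.getComplexValue("example.utf8.pref", Ci.nsISupportsString).data;

// After: the string-pref API from bug 1345294 handles UTF-8 directly.
Services.prefs.setStringPref("example.utf8.pref", "caf\u00e9");
let newValue = Services.prefs.getStringPref("example.utf8.pref");
```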
--HG--
extra : rebase_source : b885ee784704819e181430200af5ef762e269d14
MozReview-Commit-ID: LLTg0ae5BbW
***
Bug 1414438 - Use `getBatched` instead of `get` in sync
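Roughly, the change swaps a single unbounded fetch for a paginated one. A
sketch, with the Collection setup and property names simplified from the real
Sync code:

```js
// Sketch: Collection construction and properties simplified from
// services/sync/modules/record.js.
let coll = engine.itemSource();
coll.full = true;
coll.newer = lastSync;

// Before: a single GET for the whole collection, which a large collection
// can blow past server limits on.
let response = coll.get();

// After: getBatched() pages through the collection in several requests,
// following the server's offset token, and hands back the accumulated records.
let batchedResponse = coll.getBatched();
```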
--HG--
extra : rebase_source : b9b057160470ec5bc5544a1a4d5d429bee460452
Using `nsISecretDecoderRing` directly bypasses
`nsILoginManagerCrypto.uiBusy` and the observer notifications, so other
consumers might not be aware that we're already showing the dialog. We also
bail out early if the UI is busy, to avoid showing multiple dialogs at once.
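A minimal sketch of the intended pattern; the contract ID and interface are
the real login-manager ones, the surrounding code is illustrative:

```js
function decryptLoginField(cipherText) {
  // Go through nsILoginManagerCrypto rather than nsISecretDecoderRing
  // directly, so uiBusy and the crypto observer notifications stay accurate.
  let crypto = Cc["@mozilla.org/login-manager/crypto/SDR;1"]
                 .getService(Ci.nsILoginManagerCrypto);
  if (crypto.uiBusy) {
    // Another consumer is already showing the master password prompt; bail
    // out instead of stacking a second dialog on top of it.
    throw Components.Exception("Master password prompt already open",
                               Cr.NS_ERROR_ABORT);
  }
  return crypto.decrypt(cipherText);
}
```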
MozReview-Commit-ID: I7xzUWZkyPH
--HG--
extra : rebase_source : 91cef140cc54d1c81fe5c1986ffd2b8983ddd575
It's no longer needed, now that legacy extensions aren't supported.
Pieces removed include the following:
- The "load-extension-default" observer notification.
- The code for reading defaults/preferences/*.js from extensions.
- The unit test covering this functionality.
- A crash reporter annotation relating to very long prefs set by add-ons.
- All references to "ExtPrefDL".
MozReview-Commit-ID: KMBoYn3uZ3x
--HG--
extra : rebase_source : 4dc8ffd425c6cdf06806409090c4f9d04a64930b
* In the first stage, we fetch changed records, newest first, up to the
download limit. We keep track of the oldest record modified time we
see.
* Once we've fetched all records, we reconcile, noting records that
fail to decrypt or reconcile for the next sync. We then ask the store
to apply all remaining records. Previously, `applyIncomingBatchSize`
specified how many records to apply at a time. I removed this because
it added an extra layer of indirection that's no longer necessary,
now that download batching buffers all records in memory, and all
stores are async.
* In the second stage, we fetch IDs for all remaining records changed
between the last sync and the oldest modified time we saw in the
first stage. We *don't* set the download limit here, to ensure we
add *all* changed records to our backlog, and we use the `"oldest"`
sort order instead of `"index"`.
* In the third stage, we backfill as before. We don't want large deltas
to delay other engines from syncing, so we still take only up to the
download limit's worth of IDs from the backlog, plus failed IDs from the
previous sync. On subsequent syncs, we'll keep fetching from the backlog
until it's empty (see the sketch after this list).
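A condensed sketch of the three stages; `fetchChanged`, `fetchChangedIDs`, and
the engine/store helpers used here are placeholders rather than the real
`_processIncoming` internals:

```js
async function processIncoming(engine, lastSync) {
  // Stage 1: newest-first fetch up to the download limit; remember the oldest
  // modified time we see, reconcile, note failures, then apply everything
  // that's left in one go (no more applyIncomingBatchSize).
  let records = await fetchChanged({
    newer: lastSync,
    sort: "newest",
    limit: engine.downloadLimit,
  });
  let oldestModified = Math.min(...records.map(r => r.modified), Infinity);
  let { failed, toApply } = await reconcile(engine, records);
  engine.previousFailed = failed;
  await engine._store.applyIncoming(toApply);

  // Stage 2: IDs for everything else changed between the last sync and the
  // oldest modified time from stage 1. No download limit, "oldest" sort, so
  // *all* remaining changed records land in the backlog.
  engine.toFetch = engine.toFetch.concat(
    await fetchChangedIDs({
      newer: lastSync,
      older: oldestModified,
      sort: "oldest",
    })
  );

  // Stage 3: backfill, but only up to the download limit per sync (plus IDs
  // that failed last time), so one engine's large delta doesn't delay the
  // others; the rest of the backlog drains on subsequent syncs.
  let backlog = [...engine.previousFailed, ...engine.toFetch]
    .slice(0, engine.downloadLimit);
  await backfill(engine, backlog);
}
```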
Other changes to note in this patch:
* `Collection::_rebuildURL` now allows callers to specify both `older`
and `newer`. According to :rfkelly, this is explicitly and
intentionally supported.
* Tests that exercise `applyIncomingBatchSize` are gone, since that's
no longer a thing.
* The test server now shuffles records if the sort order is
unspecified.
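The second stage relies on that: one request can bound the modified time on
both sides. Roughly, following the Collection sketch above (illustrative
values, and the exact query-string semantics belong to the storage server):

```js
// Both bounds on one request: records changed after the last sync but no
// newer than the oldest modified time seen in the first stage.
let guidColl = engine.itemSource();
guidColl.newer = lastSync;        // lower bound on the modified time
guidColl.older = oldestModified;  // upper bound on the modified time
guidColl.sort = "oldest";
guidColl.full = false;            // IDs only
// The rebuilt URL ends up roughly like:
//   .../storage/<collection>?newer=<lastSync>&older=<oldestModified>&sort=oldest
```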
MozReview-Commit-ID: 4EXvNOa8mIo
--HG--
extra : rebase_source : f382f0a883c5aa1f6a4466fefe22ad1a88ab6d20
The test captures the existing logic in `_processIncoming`, even though
it's not quite correct:
* First, we fetch all records changed since the last sync, up to the
download limit, and without an explicit sort order. This happens to
work correctly now because the Python server uses "newest" by
default, but that could change in the future.
* If we reached the download limit fetching records, we request
IDs for all records changed since the last sync, also up to the
download limit, and sorted by index. This is likely to return IDs
for records we've already seen, since the index is based on frecency. It's
also likely to miss IDs for other changed records,
because the number of changed records might be higher than the
download limit.
* Since we then fast-forward the last sync time, we'll never download
any remaining changed records that we didn't add to our backlog.
* Finally, we backfill previously failed and backlogged records.
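For contrast with the new three-stage flow, the old logic the test pins down
looks roughly like this (placeholder helpers again, with the quirks described
above intentionally preserved):

```js
// Old _processIncoming flow, sketched with placeholders.
let records = await fetchChanged({ newer: lastSync, limit: downloadLimit });
// No explicit sort: newest-first only because the Python server's default
// happens to be "newest".
await applyRecords(records);

if (records.length === downloadLimit) {
  // Still capped at the limit and sorted by index (frecency), so this can
  // repeat IDs we already have and miss other changed records entirely.
  let extraIDs = await fetchChangedIDs({
    newer: lastSync,
    limit: downloadLimit,
    sort: "index",
  });
  backlog.push(...extraIDs);
}

// Fast-forwarding the last sync time here means changed records that never
// made it into the backlog are never downloaded.
lastSync = serverLastModified;

// Finally, backfill previously failed and backlogged records.
await backfill([...previousFailed, ...backlog].slice(0, downloadLimit));
```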
MozReview-Commit-ID: 7uQLXMseMIU
--HG--
extra : rebase_source : 719ee2d9e46102195251b410f093da3247095c22