TLS_DHE_RSA_WITH_AES_128_CBC_SHA is no longer supported on API 26+.
MozReview-Commit-ID: AtNf2xZh2Bz
--HG--
extra : rebase_source : fef7d2018e77a4a4a7594bf32de750c8fa39e2ea
Remove all Build.VERSION.SDK_INT comparisons against API level 14 and lower.
MozReview-Commit-ID: JdAjYvQ6mfX
--HG--
extra : rebase_source : f6cae8af84c26f42dcc02c133e7bc702f1af61e6
The parent class (FxAccountSyncAdapter) is essentially a singleton, and so we'd end up re-using class
fields between syncs, among them the collected telemetry data. It's cleaner and safer to move
ownership of TelemetryCollector into InstrumentedSessionCallback. With this change, telemetry
data is contained within, and eventually emitted from, a single owner object.
MozReview-Commit-ID: Abx13VmILcE
--HG--
extra : rebase_source : b68b44951361727015c2a10895e42f6a34806b27
While this patch does make it clearer that telemetry error handling could be factored better,
at least it gets us to a consistent usage pattern.
MozReview-Commit-ID: 4Oamt9D03Ue
--HG--
extra : rebase_source : da73247ae0a27ba6ae3d6ad0d8814c1e2249e722
The approach here is to simply mark the current TelemetryCollector as having restarted.
The downside of this approach is that two technically separate syncs are combined into one
telemetry ping. However, the two syncs are logically connected to each other, and combining
their telemetry will make it easier to figure out why a restart occurred, as well as what
happened after the restart.
MozReview-Commit-ID: AtJbge2ulMz
--HG--
extra : rebase_source : 4f9efb83da8f31b2e0470df6538c67533872f23a
While this is a "named" stage, it doesn't follow the Repository<->Repository
semantics of other named stages, and so it needs to be instrumented separately.
MozReview-Commit-ID: IKrc5Fb1bYm
--HG--
extra : rebase_source : 59c83e44235101f76b42f0eced867ce7b9d5a464
SyncAdapter owns a TelemetryCollector, which is passed into GlobalSession to be "filled up"
with telemetry data.
GlobalSession obtains instances of TelemetryStageCollector from the TelemetryCollector, and
passes them into individual stages. They are filled up with telemetry as stages are executed.
Stage errors are recorded in TelemetryStageCollector.
Various global errors are recorded in TelemetryCollector itself.
On completion (success, failure, abort), telemetry is "built" and broadcast via LocalBroadcastManager.
TelemetryContract is used to establish a key convention between the "broadcaster" and whoever is
on the receiving end of this telemetry.
This patch instruments stages which follow the Repository<->Repository flow semantics. Other stages,
such as the clients stage, meta/global, info/* and crypto/keys, are instrumented separately in follow-up
patches.
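The broadcaster/receiver key convention could be sketched roughly as below. This is a minimal illustration, not the actual TelemetryContract; all key names here are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

public class TelemetryContractSketch {
    // Shared key constants, so the broadcaster and the receiving end agree
    // on the layout of the broadcast's extras. Names are illustrative only.
    public static final String KEY_TYPE = "type";
    public static final String KEY_UID = "uid";
    public static final String KEY_TOOK = "took";

    // The "built" telemetry that would be attached to a LocalBroadcastManager
    // intent; modeled as a plain Map to keep this sketch Android-free.
    public static Map<String, Object> buildSyncPing(String uid, long tookMs) {
        Map<String, Object> extras = new HashMap<>();
        extras.put(KEY_TYPE, "sync");
        extras.put(KEY_UID, uid);
        extras.put(KEY_TOOK, tookMs);
        return extras;
    }
}
```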
MozReview-Commit-ID: 5VLRc96GLdV
--HG--
extra : rebase_source : 4c7a7e1fde2e32d401eb28c70b9f04fdbd148ffd
This is what we (and other platforms) use as part of telemetry payloads in place of either
our local FxA Device ID or the sync client ID.
Note that this server API is currently undocumented.
Parameter introduced in 2021994ca4
MozReview-Commit-ID: 64sY5RZ2ZxK
--HG--
extra : rebase_source : d1790feae1c0f46dc5f420aeed347da12a6ac85c
We will need them later for telemetry reporting. For now we're just keeping the last exception which
we encountered (which agrees with desktop's behaviour), and Bug 1362208 explores follow-up work to
aggregate and count the exceptions as we see them.
MozReview-Commit-ID: 8yKkZVGJZ9e
--HG--
extra : rebase_source : 501ff746ecfb3022a0fe89844e307153bfdb5164
This patch:
- introduces a way to signal that a record has been reconciled; this is not a "flow control"
  event type, and must be used in addition to the regular "recordStored" delegate call
- draws a clearer distinction between "attempted to store" and "stored, as reported by session's storage layer"
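The shape of that distinction could look roughly like this hypothetical delegate sketch (names are illustrative, not the actual delegate interface):

```java
public class StoreDelegateSketch {
    interface StoreDelegate {
        void onRecordStoreSucceeded(String guid);   // "stored, as reported by the storage layer"
        void onRecordStoreReconciled(String guid);  // not flow control; fired in addition
    }

    // Counts both kinds of notifications, to show they are separate signals.
    static class CountingDelegate implements StoreDelegate {
        int stored = 0;
        int reconciled = 0;
        public void onRecordStoreSucceeded(String guid) { stored++; }
        public void onRecordStoreReconciled(String guid) { reconciled++; }
    }

    // A session that reconciled an incoming record still reports it as stored.
    static void storeReconciledRecord(StoreDelegate delegate, String guid) {
        delegate.onRecordStoreReconciled(guid);
        delegate.onRecordStoreSucceeded(guid);
    }
}
```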
MozReview-Commit-ID: 99UbUJzu57w
--HG--
extra : rebase_source : d7424fec748b9a2d07d1c98b78ce89fd418750e4
It's not just used for testing, and the annotation causes the IDE to flag its uses in code as invalid.
MozReview-Commit-ID: JvzX2VgNKom
--HG--
extra : rebase_source : a16933121371818307329523916d35e82b2446c9
The primary issue is that we use a throwing InputStreamReader
constructor. If it throws, then any nested streams will be lost.
We can fix that by using the non-throwing InputStreamReader
constructor, which takes a Charset as its second parameter
instead of a String (the String overload throws an exception
if the charset name can't be parsed).
We also simplify some nested streams a little: most of the
stream constructors don't throw, so there's no harm in not keeping
individual references to those that don't throw, and that
leaves fewer stream references for us to handle.
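The difference between the two constructors, in a minimal sketch:

```java
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class ReaderSketch {
    // The String-based constructor declares UnsupportedEncodingException:
    //   new InputStreamReader(in, "UTF-8"); // throws if the name can't be parsed,
    //                                       // and any nested streams built so far leak.
    //
    // The Charset-based constructor cannot throw for this reason:
    public static InputStreamReader openReader(InputStream in) {
        return new InputStreamReader(in, StandardCharsets.UTF_8);
    }
}
```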
MozReview-Commit-ID: 2hyRFGVmGnU
--HG--
extra : rebase_source : 9d2b25997e0f71089c0ef56c0069cafe068f821e
Incoming records might be missing the dateAdded field, and so we perform some pre-processing:
- during reconciliation, dateAdded is set to the lowest of (remote lastModified, remote dateAdded, local dateAdded)
- during insertion, if dateAdded is missing it is set to lastModified
Whenever we modify dateAdded for a record during sync, we also bump its lastModified value. This will trigger an
upload of this record, and consequently a re-upload by clients which are able to provide an older dateAdded value.
It is possible that this might cause conflicts on other devices, but the expected likelihood of that happening is low.
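The pre-processing rules above can be sketched as two small helpers (a sketch only; the real code operates on record objects, and "missing" is represented here as 0):

```java
public class DateAddedSketch {
    // During reconciliation: take the lowest of the three candidate values.
    public static long reconcileDateAdded(long remoteLastModified, long remoteDateAdded,
                                          long localDateAdded) {
        return Math.min(remoteLastModified, Math.min(remoteDateAdded, localDateAdded));
    }

    // During insertion: fall back to lastModified when dateAdded is missing
    // (modeled as 0 in this sketch).
    public static long insertionDateAdded(long dateAdded, long lastModified) {
        return dateAdded != 0 ? dateAdded : lastModified;
    }
}
```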
MozReview-Commit-ID: 3tDeXKSBgrO
--HG--
extra : rebase_source : 26cb13838df7a4adb6d4fe3c51f0ecf3fd2eda95
Confusion between storeDone() and storeDone(long end) resulted in certain sessions (bookmarks
and form history) not overriding the correct method. As a result, their final "flush the queue"
methods weren't being called by the buffering middleware.
This patch removes the storeDone(long end) method, making such confusion a non-issue.
Given that many sessions build up buffers which they then need to flush after a storeDone()
call, passing a timestamp into that method doesn't make sense. Instead, let's supply a default
implementation in RepositorySession which calls onStoreCompleted(endTimestamp) with the current time,
and allow sessions to override this method and own the onStoreCompleted(endTimestamp) call.
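A simplified sketch of the intended shape (class and field names are illustrative, not the actual RepositorySession API):

```java
public class StoreDoneSketch {
    static abstract class RepositorySession {
        long completedAt = -1;

        // Default implementation: sessions without buffers get sensible
        // behaviour for free.
        public void storeDone() {
            onStoreCompleted(System.currentTimeMillis());
        }

        public void onStoreCompleted(long endTimestamp) {
            completedAt = endTimestamp;
        }
    }

    // A buffering session overrides storeDone() and owns the
    // onStoreCompleted(endTimestamp) call itself.
    static class BufferingSession extends RepositorySession {
        boolean flushed = false;

        @Override
        public void storeDone() {
            flushed = true; // flush the queue first...
            onStoreCompleted(System.currentTimeMillis()); // ...then report completion.
        }
    }
}
```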
MozReview-Commit-ID: 84o7aAL8RPC
--HG--
extra : rebase_source : 41767ad502bd5ad8a0a487235bfdca8cf0d0c927
Since we're uploading records atomically, the order in which they're processed by the uploader
only matters if we want to do sanity checks on certain types of records. The server might still
preserve some of the order, but for our purposes here it shouldn't matter.
We'd like to ensure that we process the "mobile root" bookmark record along with other folder
records first, so that we increase our chances of avoiding a failing network request if
those records' payloads are too large.
Sorting by bookmark type achieves this.
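A minimal sketch of that ordering, assuming records are represented by their type strings (the real code sorts record objects):

```java
import java.util.Comparator;
import java.util.List;

public class UploadOrderSketch {
    // Folders (including the "mobile root") rank ahead of all other record
    // types, so an oversized-payload failure happens early if at all.
    static int rank(String type) {
        return "folder".equals(type) ? 0 : 1;
    }

    public static List<String> sortForUpload(List<String> types) {
        types.sort(Comparator.comparingInt(UploadOrderSketch::rank));
        return types;
    }
}
```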
MozReview-Commit-ID: KrAs3zepaOk
--HG--
extra : rebase_source : 24f1d3d6aa2ee3b6777dc38abdd1e01aba5213c2
If we try to upload a record whose payload BSO field is larger than the limit specified
by the server, fail that record during BatchingUploader's processing.
Consequently, Synchronizer will fail the current sync stage and advance to the next.
The previous behaviour was to essentially rely on the server to fail our POST request,
at which point we'd fail the current sync stage. So in a way, this is an optimization to
avoid making network requests which we know will fail.
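The pre-flight check boils down to a simple size comparison, sketched here (a hypothetical helper, not the actual BatchingUploader code):

```java
public class PayloadCheckSketch {
    // A record whose encoded payload exceeds the server-advertised limit is
    // failed locally, before any POST request is made.
    public static boolean fitsInSinglePayload(byte[] payload, long maxPayloadBytes) {
        return payload.length <= maxPayloadBytes;
    }
}
```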
MozReview-Commit-ID: 5aJRRNoUuXe
--HG--
extra : rebase_source : 18920cfe7b7599be1984c53ebc0c9897c98fb7d9
We need to access sessionToken in the Engaged state in order to perform device
registration. We expose getSessionToken() on the base State class, allowing
consumers to get the sessionToken easily instead of having to downcast the
TokensAndKeysState/Engaged states.
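Roughly, the shape is the following (a sketch; the actual State hierarchy has more members):

```java
public class StateSketch {
    static abstract class State {
        // Base accessor: states without a sessionToken return null, so callers
        // don't need to downcast TokensAndKeysState/Engaged.
        public byte[] getSessionToken() {
            return null;
        }
    }

    static class Engaged extends State {
        private final byte[] sessionToken;

        Engaged(byte[] sessionToken) {
            this.sessionToken = sessionToken;
        }

        @Override
        public byte[] getSessionToken() {
            return sessionToken;
        }
    }
}
```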
MozReview-Commit-ID: 8s2C350noUG
--HG--
extra : rebase_source : e0bc8bf7ebfdcb7a31bb4a6ddb5b928acf7baba9
We upload meta/global in three scenarios:
- fresh start
- when it was modified after a successful sync
- when it was modified after an aborted sync
Use the X-I-U-S (X-If-Unmodified-Since) header to assert what we believe about meta/global's presence (during freshStart)
and its last-modified timestamp (in all other cases).
We might encounter a concurrent modification condition, manifesting as a 412 error. If we see such an error:
- on fresh start, we restart globalSession
- on regular upload, we request a re-sync of all stages
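The 412 handling branches described above could be sketched like this (hypothetical names; the real code reacts via GlobalSession callbacks):

```java
public class ConcurrentModificationSketch {
    enum Reaction { RESTART_SESSION, RESYNC_ALL_STAGES, NONE }

    // Dispatch on a 412 Precondition Failed response to a meta/global upload:
    // restart the globalSession on fresh start, otherwise re-sync all stages.
    public static Reaction onResponse(int statusCode, boolean freshStart) {
        if (statusCode != 412) {
            return Reaction.NONE;
        }
        return freshStart ? Reaction.RESTART_SESSION : Reaction.RESYNC_ALL_STAGES;
    }
}
```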
MozReview-Commit-ID: 3qyb6rUSOeY
--HG--
extra : rebase_source : 166be44aceb634b4e9fa3a8e20f7047cfec2af54
On startup and at the beginning of a sync, we check how long it has been since we subscribed
to a push channel for the FxA service. If it's been over 21 days, request re-subscription.
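The staleness check amounts to the following (a minimal sketch; the actual code reads the subscription timestamp from shared prefs):

```java
import java.util.concurrent.TimeUnit;

public class ResubscribeSketch {
    static final long EXPIRY_MS = TimeUnit.DAYS.toMillis(21);

    // Request re-subscription once the last subscription is over 21 days old.
    public static boolean shouldResubscribe(long lastSubscribedMs, long nowMs) {
        return nowMs - lastSubscribedMs > EXPIRY_MS;
    }
}
```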
MozReview-Commit-ID: GzvPecZ9hTy
--HG--
extra : rebase_source : d0292acddbdd231502808469d4e5502a4ac93779
BatchingDownloader uses the provided RepositoryStateProvider instance in order to track
the offset and high water mark as it performs batching.
The state holder objects are initialized by individual ServerSyncStages, and prefixes are used to ensure keys
won't clash.
Two RepositoryStateProvider implementations are used: persistent and non-persistent. The non-persistent
state provider does not allow for resuming after a sync restart, while the persistent one does.
The persistent state provider is used by the history stage. That stage is fetched oldest-first, and records are applied
to live storage as they're downloaded. These conditions let us resume downloads. It's also possible to
resume downloads for stages which use a persistent buffer, but currently we do not have any.
The offset value and its context are reset if we hit a 412 error; they are maintained if we hit a sync deadline, allowing us to
minimize the number of records we'll redownload. BatchingDownloaderController owns the resuming and context-checking logic.
The high water mark (h.w.m.) is maintained across syncs and used instead of a stage's "last-synced" timestamp if said stage is
set to fetch oldest-first and explicitly allows use of a h.w.m. Server15RepositorySession provides the correct timestamp
to RecordsChannel, decoupling BatchingDownloader from this logic.
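The timestamp selection rule reduces to a small decision, sketched here (hypothetical helper; the real logic lives in Server15RepositorySession):

```java
public class HighWaterMarkSketch {
    // Pick the fetch-newer-than timestamp for a stage: only oldest-first stages
    // that explicitly opt in may substitute the high water mark for the
    // "last-synced" timestamp.
    public static long fetchSince(long lastSyncedTimestamp, long highWaterMark,
                                  boolean oldestFirst, boolean hwmEnabled) {
        if (oldestFirst && hwmEnabled && highWaterMark > lastSyncedTimestamp) {
            return highWaterMark;
        }
        return lastSyncedTimestamp;
    }
}
```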
MozReview-Commit-ID: IH28YrDU4vW
--HG--
extra : rebase_source : 63bd7daaa1fd2a63e10289d6d4cd198aaf81498b