This patch uses a keyed categorical Histogram to collect data. The key indicates
whether an error is external or internal (note: external errors refer to
low-level failures, for example database corruption or OS API errors; internal
errors refer to errors such as files not being handled properly or unexpected
filenames), while the categorical labels indicate where the error happened.
Furthermore, this patch makes QuotaManager keep traversing the profile even if
an error happens, so that we can get more information in the telemetry data.
Please note that these things should only happen on the Nightly channel.
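For illustration, the classification could look roughly like the following standalone sketch; the Severity enum, the ReportError helper and the label strings are made up for this example and are not the actual telemetry code:

    #include <iostream>
    #include <string>

    // Hypothetical sketch: classify a failure as external (low-level, e.g.
    // database corruption or an OS API error) or internal (e.g. a file that
    // wasn't handled properly, an unexpected filename) and use the place
    // where it happened as the categorical label.
    enum class Severity { ExternalError, InternalError };

    void ReportError(Severity aSeverity, const std::string& aLabel) {
      // The real patch accumulates into a keyed categorical histogram; this
      // sketch just prints the key/label pair.
      const char* key =
          aSeverity == Severity::ExternalError ? "ExternalError" : "InternalError";
      std::cout << "key=" << key << " label=" << aLabel << '\n';
    }

    int main() {
      ReportError(Severity::ExternalError, "InitOrigin_OpenDatabase");
      ReportError(Severity::InternalError, "InitOrigin_UnexpectedFile");
      return 0;
    }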
Differential Revision: https://phabricator.services.mozilla.com/D15908
This patch slightly speeds up origin initialization by adding a dedicated column
to the database table for the current data usage.
This patch also fixes a problem with length computation of some Unicode strings.
This improves performance by keeping snapshots around for some time if no changes have been made by other processes. If a snapshot is not destroyed immediately after reaching the stable state, then there's a chance that it won't have to be synchronously created again when a new operation is requested.
Items are now also cached in an array (besides a hashtable). This gives us very fast snapshot initialization for datastores that fit into the prefill limit. String buffers are reference counted, so the memory footprint is only affected by the size of nsString.
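As a rough standalone sketch of the dual caching idea (class and member names here are illustrative, not the actual Gecko code):

    #include <string>
    #include <unordered_map>
    #include <vector>

    // Items are kept both in an ordered array and in a hashtable, so a
    // snapshot of a datastore that fits into the prefill limit can be
    // produced by a cheap linear copy while lookups stay O(1).
    struct Item {
      std::string key;
      std::string value;  // in Gecko these would be refcounted string buffers
    };

    class Datastore {
      std::vector<Item> mOrderedItems;                        // fast prefill
      std::unordered_map<std::string, std::string> mValues;   // fast lookup

    public:
      void SetItem(const std::string& aKey, const std::string& aValue) {
        auto res = mValues.insert_or_assign(aKey, aValue);
        if (res.second) {
          mOrderedItems.push_back({aKey, aValue});
        } else {
          for (auto& item : mOrderedItems) {
            if (item.key == aKey) { item.value = aValue; break; }
          }
        }
      }

      // Used when the whole datastore fits into the snapshot prefill limit.
      const std::vector<Item>& AllOrderedItems() const { return mOrderedItems; }
    };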
This patch also introduces a WriteOptimizer which is an abstraction for collecting, coalescing and applying write operations.
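A minimal sketch of what such an abstraction can look like; the structure below is illustrative and not the actual WriteOptimizer class:

    #include <functional>
    #include <map>
    #include <optional>
    #include <string>

    // Per-key writes are coalesced so only the last operation for a key is
    // applied, and a Clear() drops everything queued before it.
    class WriteOptimizer {
      bool mClearedFirst = false;
      // nullopt means "remove the key", a value means "set the key".
      std::map<std::string, std::optional<std::string>> mPendingWrites;

    public:
      void SetItem(const std::string& aKey, const std::string& aValue) {
        mPendingWrites[aKey] = aValue;
      }
      void RemoveItem(const std::string& aKey) {
        mPendingWrites[aKey] = std::nullopt;
      }
      void Clear() {
        mPendingWrites.clear();
        mClearedFirst = true;
      }

      // Applies the coalesced operations, e.g. inside a single transaction.
      void ApplyTo(
          const std::function<void()>& aClear,
          const std::function<void(const std::string&, const std::string&)>& aSet,
          const std::function<void(const std::string&)>& aRemove) {
        if (mClearedFirst) {
          aClear();
        }
        for (const auto& [key, value] : mPendingWrites) {
          if (value) {
            aSet(key, *value);
          } else {
            aRemove(key);
          }
        }
        mPendingWrites.clear();
        mClearedFirst = false;
      }
    };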
Before this patch, it was only possible to initialize a snapshot into the Partial state or the AllOrderedItems state. Now there's a third state, AllOrderedKeys.
This improves performance by eliminating sync calls to the parent process when we know nothing about a key in the content process (without this, we have to make a sync call to the parent process to see if there's a value for it).
With this patch we always try to send all keys to the content process when a snapshot is being initialized. For this to work efficiently, we cache the size of all keys.
Having the size of all keys cached also allows us to just iterate the mValues hashtable when the size of the keys exceeds the snapshot prefill threshold (instead of iterating the mKeys array and joining with mValues for each particular key).
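The resulting decision during snapshot initialization might be summarized by a sketch like this (the helper function and its exact conditions are assumptions; only the three state names come from the patch):

    #include <cstddef>

    enum class LoadState { AllOrderedItems, AllOrderedKeys, Partial };

    // With the total size of items and keys cached on the datastore, snapshot
    // initialization can pick a strategy without measuring anything first.
    LoadState ChoosePrefillStrategy(size_t aSizeOfItems, size_t aSizeOfKeys,
                                    size_t aPrefillLimit) {
      if (aSizeOfItems <= aPrefillLimit) {
        // Everything fits: send all keys and values in order.
        return LoadState::AllOrderedItems;
      }
      if (aSizeOfKeys <= aPrefillLimit) {
        // All keys fit: values are loaded lazily on demand.
        return LoadState::AllOrderedKeys;
      }
      // Not even the keys fit: only some items are sent up front.
      return LoadState::Partial;
    }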
There's some additional cleanup in snapshot info construction and Datastore::SetItem/RemoveItem/Clear methods.
This adds synchronization to the global database used by the previous local storage implementation.
This patch was enhanced by asuth to bind the attached database path.
Places has shown that argument binding is necessary. (Profiles may include usernames in their path, which can contain cool emoji and such.)
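The point of the binding is illustrated by this reduced sketch using the plain SQLite C API (the actual patch goes through mozStorage, and the "usage" schema name is made up here): the path is passed as a bound parameter instead of being concatenated into the SQL text, so quotes, emoji and other unusual characters in profile paths are handled safely.

    #include <sqlite3.h>

    int AttachDatabase(sqlite3* aConn, const char* aPath) {
      sqlite3_stmt* stmt = nullptr;
      int rc = sqlite3_prepare_v2(aConn, "ATTACH DATABASE ?1 AS usage", -1,
                                  &stmt, nullptr);
      if (rc != SQLITE_OK) {
        return rc;
      }
      // SQLITE_TRANSIENT makes SQLite copy the (UTF-8) path.
      sqlite3_bind_text(stmt, 1, aPath, -1, SQLITE_TRANSIENT);
      rc = sqlite3_step(stmt);
      sqlite3_finalize(stmt);
      return rc == SQLITE_DONE ? SQLITE_OK : rc;
    }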
There's now an upper limit for snapshot prefilling. The value is configurable and is currently set to 4096 bytes.
Snapshots can operate in multiple modes depending on whether all items have been loaded or all keys have been received. This should provide the best performance for each specific state.
This patch also adds support for creating explicit snapshots which can be used for testing.
This improves performance a lot in cases where multiple operations are invoked by a single JS function (the number of sync IPC calls is reduced to a minimum). It also improves correctness since changes are not visible to other content processes until a JS function finishes.
The patch implements the core infrastructure; all items are sent to content when a snapshot is initialized and everything is fully working. However, sending all items at once is not optimal for bigger databases. Support for lazy loading of items is implemented in a follow-up patch.
If a database actor already exists for a given origin, reuse it instead of creating a new one. This reduces the memory footprint a bit and also eliminates some round trips to the parent process.
This patch was enhanced by asuth to bind the attached database path.
Places has shown that argument binding is necessary. (Profiles may include usernames in their path, which can contain cool emoji and such.)
Datastores are preloaded only for content principals. The preloading is triggered as soon as possible to lower the chance of blocking the main thread in a content process. If there is no physical database on disk for a given origin, the datastore is not created. Preloaded datastores are kept alive for 20 seconds.
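A standalone sketch of the keep-alive idea (class and method names are illustrative; in Gecko the eviction would be timer driven rather than polled):

    #include <chrono>
    #include <memory>
    #include <string>
    #include <unordered_map>

    struct Datastore { /* cached items for one origin */ };

    // A preloaded datastore is held for a limited time and dropped again if
    // no page on that origin ends up using it.
    class PreloadCache {
      using Clock = std::chrono::steady_clock;
      struct Entry {
        std::shared_ptr<Datastore> datastore;
        Clock::time_point deadline;
      };
      std::unordered_map<std::string, Entry> mPreloaded;
      static constexpr std::chrono::seconds kKeepAlive{20};

    public:
      void Preload(const std::string& aOrigin) {
        mPreloaded[aOrigin] = {std::make_shared<Datastore>(),
                               Clock::now() + kKeepAlive};
      }

      // Called periodically to drop datastores that were never picked up.
      void EvictExpired() {
        const auto now = Clock::now();
        for (auto it = mPreloaded.begin(); it != mPreloaded.end();) {
          if (it->second.deadline <= now) {
            it = mPreloaded.erase(it);
          } else {
            ++it;
          }
        }
      }

      // A page on the origin claims the preloaded datastore, if still alive.
      std::shared_ptr<Datastore> Take(const std::string& aOrigin) {
        auto it = mPreloaded.find(aOrigin);
        if (it == mPreloaded.end()) {
          return nullptr;
        }
        auto result = it->second.datastore;
        mPreloaded.erase(it);
        return result;
      }
    };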
Storage events are fired either directly after getting the response from a synchronous SetItem call or through observers. When a new onstorage event listener is added, we synchronously register an observer in the parent process. There's always only one observer actor per content process.
PBackgroundLSDatabase is now managed by a new PBackgroundLSObject protocol. PBackgroundLSObject is needed to eliminate the need to pass the principal info and document URI every time a write operation occurs.
Preparation of an observer shares some state with preparation of a datastore, so the common stuff now lives in LSRequestBase, and preparation of a datastore now implements a nested state machine.
This patch was enhanced by asuth to drop observers only when the last storage listener is removed.
EventListenerRemoved is invoked on any removal, not just the final one, so we need to make sure it's the final removal before dropping the observer.
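A simplified sketch of that check (not the DOM code; the counter and the class are illustrative):

    #include <cstdint>

    class LocalStorageObserverTracker {
      uint32_t mStorageListenerCount = 0;

    public:
      void EventListenerAdded() {
        if (++mStorageListenerCount == 1) {
          // First "storage" listener: register the per-process observer.
        }
      }

      void EventListenerRemoved() {
        if (mStorageListenerCount == 0) {
          return;  // defensive, should not happen
        }
        if (--mStorageListenerCount == 0) {
          // Final removal: now it is safe to drop the observer actor.
        }
      }
    };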
Since we keep/cache data in memory anyway, private browsing support is mostly about avoiding any persistence-related calls and clearing private browsing datastores when we get a notification. The separation between normal and private browsing datastores is done by the privateBrowsingId attribute, which is part of the origin string.
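As a small illustration (the helper itself is hypothetical; the suffix format follows Gecko's origin attributes): a non-default privateBrowsingId makes the origin string differ, so the private browsing datastore never collides with the normal one.

    #include <cstdint>
    #include <string>

    std::string OriginKey(const std::string& aOrigin, uint32_t aPrivateBrowsingId) {
      if (aPrivateBrowsingId == 0) {
        return aOrigin;  // e.g. "https://example.com"
      }
      // e.g. "https://example.com^privateBrowsingId=1"
      return aOrigin + "^privateBrowsingId=" + std::to_string(aPrivateBrowsingId);
    }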
Introduced Connection and ConnectionThread objects.
All I/O requests are processed by a new single thread managed by the ConnectionThread object.
Connection objects are prepared by the prepare datastore operation and then kept alive by datastores, just like directory locks.
All datastore I/O operations are wrapped in a transaction which automatically commits after 5 seconds.
Datastore preparation now also loads all items from the database.
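A very reduced sketch of the auto-commit idea (not the actual Connection class; the timer callback is abstracted away): the first write on the connection thread opens a transaction and arms a timer, and the transaction is committed when the timer fires.

    #include <chrono>
    #include <functional>
    #include <utility>

    class Connection {
    public:
      using ArmTimer =
          std::function<void(std::chrono::seconds, std::function<void()>)>;

      explicit Connection(ArmTimer aArmTimer) : mArmTimer(std::move(aArmTimer)) {}

      void EnsureTransaction() {
        if (mInTransaction) {
          return;
        }
        // "BEGIN TRANSACTION;" would be executed against the database here.
        mInTransaction = true;
        mArmTimer(std::chrono::seconds(5), [this] { Commit(); });
      }

      void Commit() {
        if (mInTransaction) {
          // "COMMIT;" would be executed here.
          mInTransaction = false;
        }
      }

    private:
      bool mInTransaction = false;
      ArmTimer mArmTimer;
    };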
Expose the nested main event target, so it can be used in the single-process case by the parent side to process runnables which need to run on the main thread.
After this change we don't have to use hacks like getting the profile directory path on the child side and sending it to the parent. The parent side can now do it freely, even in the single-process case.
This patch was enhanced by asuth to not tunnel the nested main event target through IPC.
Preparation of datastores now creates real database files on disk. The LocalStorage directory is protected by a directory lock.
Infrastructure for database schema upgrades is in place too.
Database actors are now explicitly tracked by the datastore. When the last actor finishes, the directory lock is released.
Also added implementations of QuotaClient::GetUsageForOrigin() and QuotaClient::AbortOperations().
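The lifetime rule for the directory lock can be sketched like this (simplified, with illustrative names; the real tracking goes through actor destruction callbacks):

    #include <cstdint>
    #include <memory>

    struct DirectoryLock { /* keeps the LocalStorage directory protected */ };

    // The datastore holds the directory lock and counts its database actors
    // explicitly; when the last actor goes away, the lock is released.
    class Datastore {
      std::unique_ptr<DirectoryLock> mDirectoryLock;
      uint32_t mDatabaseActorCount = 0;

    public:
      explicit Datastore(std::unique_ptr<DirectoryLock> aLock)
          : mDirectoryLock(std::move(aLock)) {}

      void NoteLiveDatabaseActor() { ++mDatabaseActorCount; }

      void NoteFinishedDatabaseActor() {
        if (--mDatabaseActorCount == 0) {
          mDirectoryLock.reset();  // last actor finished, release the lock
        }
      }
    };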
This adds a new quota client implementation, but only implements ShutdownWorkThreads.
At shutdown we wait for all running operations to finish, including database actors, which are closed using an extra IPC message that avoids races between the parent and the child.
Databases are dropped on the child side as soon as they are not used (e.g. after unlinking by the cycle collector).
The implementation is based on a cache (datastore) living in the parent process and sync IPC calls initiated from content processes.
IPC communication is done using per-principal/origin database actors which connect to the datastore.
The synchronous blocking of the main thread is done by creating a nested event target and spinning the event loop.
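A highly simplified, standalone illustration of "spin the event loop until the reply arrives" (the real code creates a nested event target and uses Gecko's event loop, not this hand-rolled queue):

    #include <functional>
    #include <queue>

    class EventLoop {
      std::queue<std::function<void()>> mEvents;

    public:
      void Dispatch(std::function<void()> aEvent) { mEvents.push(std::move(aEvent)); }

      // Processes events until aDone becomes true, i.e. until the reply for
      // the outstanding request has been handled.
      void SpinUntil(const bool& aDone) {
        while (!aDone) {
          if (mEvents.empty()) {
            break;  // real code would block waiting for the next event instead
          }
          auto event = std::move(mEvents.front());
          mEvents.pop();
          event();
        }
      }
    };

    int main() {
      EventLoop loop;
      bool replied = false;
      // The "reply" event is queued as if it came back from the parent process.
      loop.Dispatch([&replied] { replied = true; });
      loop.SpinUntil(replied);
      return replied ? 0 : 1;
    }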