The current Login structure is nested:
```
Login {
RecordFields record;
LoginFields fields;
SecureLoginFields sec_fields;
}
```
and thus exposes internal data structuring to the consumer. Since we make encryption transparent for the consumers (Android, iOS, desktop), this separation no longer makes sense, and the above can be simplified to
```
Login {
// record fields
string id;
i64 times_used;
i64 time_created;
i64 time_last_used;
i64 time_password_changed;
// login fields
string origin;
string? http_realm;
string? form_action_origin;
string username_field;
string password_field;
// secure login fields
string password;
string username;
}
```
Eliminating this separation has two advantages: it simplifies the API, making the component easier to use, and it makes the internal data structure easier to change. If, for example, we later decide to encrypt additional fields, such a change is possible without having to adapt the consumers.
This prepares the Logins component for the desktop and simplifies its
API.
BREAKING CHANGE:
This commit introduces breaking changes to the Logins component:
During initialization, it receives an additional argument, an
EncryptorDecryptor trait implementation. In addition, several LoginsStore
API methods have been changed to no longer require an encryption key
argument and to return Login objects instead of EncryptedLogin objects.
Additionally, a new API method has been added to the LoginsStore,
`has_logins_by_base_domain(&self, base_domain: &str)`, which can be used
to check for the existence of a login for a given base domain.
**EncryptorDecryptor**
With the introduction of the EncryptorDecryptor trait, encryption
becomes transparent. That means the LoginStore API receives some
breaking changes, as outlined above. A ManagedEncryptorDecryptor
provides an EncryptorDecryptor implementation that uses the existing
crypto methods, given a KeyManager implementation. This eases
adaptation for mobile. Furthermore, we provide a StaticKeyManager
implementation, which can be used in tests and in cases where the key is
- you name it - static.
**Constructors**
An implementation of the above trait must now be passed to the
constructors. To do this, the signatures are extended as follows:
```
pub fn new(path: impl AsRef<Path>, encdec: Arc<dyn EncryptorDecryptor>) -> ApiResult<Self>
pub fn new_from_db(db: LoginDb, encdec: Arc<dyn EncryptorDecryptor>) -> Self
pub fn new_in_memory(encdec: Arc<dyn EncryptorDecryptor>) -> ApiResult<Self>
```
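For illustration, here is a minimal sketch of constructing a store with the new signature. The constructor shapes of StaticKeyManager and ManagedEncryptorDecryptor are assumptions here, not the definitive API:
```
// Sketch only: StaticKeyManager::new and ManagedEncryptorDecryptor::new are
// assumed shapes, not the definitive API.
use std::sync::Arc;

fn open_store(key_json: String) -> ApiResult<LoginStore> {
    // A static key manager is handy for tests; mobile would supply a
    // KeyManager backed by the platform keystore instead.
    let key_manager = Arc::new(StaticKeyManager::new(key_json));
    let encdec: Arc<dyn EncryptorDecryptor> = Arc::new(ManagedEncryptorDecryptor::new(key_manager));
    // The store never sees the key directly; all crypto goes through `encdec`.
    LoginStore::new("./logins.db", encdec)
}
```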
**LoginStore API Methods**
This allows the LoginStore API to be simplified as follows, making
encryption transparent by eliminating the need to pass the key and
allowing the methods to return decrypted login objects.
```
pub fn list(&self) -> ApiResult<Vec<Login>>
pub fn get(&self, id: &str) -> ApiResult<Option<Login>>
pub fn get_by_base_domain(&self, base_domain: &str) -> ApiResult<Vec<Login>>
pub fn find_login_to_update(&self, entry: LoginEntry) -> ApiResult<Option<Login>>
pub fn update(&self, id: &str, entry: LoginEntry) -> ApiResult<Login>
pub fn add(&self, entry: LoginEntry) -> ApiResult<Login>
pub fn add_or_update(&self, entry: LoginEntry) -> ApiResult<Login>
```
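A hedged usage sketch of the simplified API, assuming LoginEntry mirrors the flattened Login fields shown above:
```
// Sketch only: the flattened LoginEntry fields are assumed to mirror the
// Login struct above.
fn demo(store: &LoginStore) -> ApiResult<()> {
    let entry = LoginEntry {
        origin: "https://example.com".into(),
        http_realm: None,
        form_action_origin: Some("https://example.com".into()),
        username_field: "user".into(),
        password_field: "pass".into(),
        username: "alice".into(),
        password: "hunter2".into(),
    };
    // No key argument: the store encrypts internally and hands back
    // decrypted Login objects.
    let saved = store.add(entry)?;
    let _fetched = store.get(&saved.id)?;
    Ok(())
}
```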
We will stop exposing the crypto primitives encrypt, decrypt,
encrypt_struct and decrypt_struct over UniFFI. EncryptedLogin will also
no longer be exposed.
**Checking for the Existence of Logins for a Given Base Domain**
In order to check for the existence of stored logins for a given
base domain, we provide an additional store method,
`has_logins_by_base_domain(&self, base_domain: &str)`, which does not
utilize the EncryptorDecryptor.
Another incidental change is in the `check_canary` function: it no
longer throws if a wrong key is used, but returns false instead.
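A small sketch of the changed behavior; the exact `check_canary` signature and the variable names are assumptions:
```
// `has_logins_by_base_domain` avoids the EncryptorDecryptor entirely.
let has_any = store.has_logins_by_base_domain("example.com")?;

// Hypothetical canary check (exact signature assumed): a wrong key now
// yields Ok(false) instead of an error; only unexpected failures are Err.
let key_is_valid = check_canary(&stored_canary, &canary_text, &candidate_key)?;
```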
These seem like they could be useful for the experiment. I'm not sure
we'll be able to hook them up but we might as well try.
Refactored the logic to get the full keywords and added tests for it
since we're now using it in two places.
This sets it up so we always ingest the FTS data and use the
`SuggestionProviderConstraints` passed to the query to determine how to
perform the query. This seems like the simplest approach and it doesn't
increase ingestion time that much. The benchmarks on my machine went
from 339.29 ms to 465.60 ms.
This adds a feature `keydb` to rc_crypto's nss crate, which enables the
`ensure_initialized_with_profile_dir` initialization function. This
configures NSS to use a profile and persist keys into key4.db.
Also adds methods for managing AES256 keys with NSS (a usage sketch
follows the list):
* `authentication_with_primary_password_is_needed`: check whether a primary password is enabled
* `authenticate_with_primary_password`: authenticate with the primary password against the NSS key database
* `get_or_create_aes256_key`: retrieve a key from key4.db or, if not present, create one
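A minimal sketch of how these might be called; the argument and return types here are assumptions, not the exact signatures:
```
// Assumed usage of the new keydb-backed helpers; the argument and return
// types here are guesses, not the exact signatures.
fn unlock_and_get_key(profile_dir: &std::path::Path, password: &str) -> Result<Vec<u8>> {
    // Point NSS at a profile directory so keys persist into key4.db.
    ensure_initialized_with_profile_dir(profile_dir)?;
    // If a primary password is set, authenticate before touching keys.
    if authentication_with_primary_password_is_needed()? {
        authenticate_with_primary_password(password)?;
    }
    // Fetch the named AES-256 key, creating and persisting it on first use
    // (the key nickname here is illustrative).
    get_or_create_aes256_key("example-key-nickname")
}
```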
For some weird reason, the old command line was enabling the features
for all binaries. Since the remote settings CLI depends on rc_crypto,
this was causing failures when trying to build NSS. Adding the `-p`
flag fixes this.
Added extra data to `Suggestion::Fakespot` to capture how the FTS match
was made. The plan is to use this as a facet for our metrics to help us
consider how to tune the matching logic (e.g. maybe we should not use
stemming, maybe we should require that terms are close together).
Added Suggest CLI flag to print out the FTS match info.
#6479 reduced the overhead for running benchmarks, but a side effect was
that it left directories filled with sqlite DBs in `/tmp`. This makes
sure we delete those directories.
This adds two low-level bindings to NSS for dealing with the primary
password:
* `PK11_NeedLogin`: for checking whether a primary password is set
* `PK11_CheckUserPassword`: for authenticating with the primary password
Additionally, the following two low-level NSS bindings are added to deal
with key persistence (see the sketch after this list):
* `PK11_NeedUserInit`
* `PK11_SetSymKeyNickname`
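For orientation, here is a hand-written sketch of the underlying C signatures as Rust FFI declarations; rc_crypto's actual bindings are generated, so the Rust-side types may differ:
```
// Sketch only: hand-written FFI declarations mirroring the NSS C headers.
// rc_crypto's real bindings are generated, so the Rust-side types may differ.
use std::os::raw::{c_char, c_int};

// Opaque NSS handle types (contents never touched from Rust).
#[repr(C)] pub struct PK11SlotInfo { _private: [u8; 0] }
#[repr(C)] pub struct PK11SymKey { _private: [u8; 0] }

pub type PRBool = c_int;    // PR_TRUE = 1, PR_FALSE = 0
pub type SECStatus = c_int; // SECSuccess = 0, SECFailure = -1

extern "C" {
    // True if the key database is protected by a primary password.
    pub fn PK11_NeedLogin(slot: *mut PK11SlotInfo) -> PRBool;
    // Verify the primary password against the key database.
    pub fn PK11_CheckUserPassword(slot: *mut PK11SlotInfo, pw: *const c_char) -> SECStatus;
    // True if the token has not been initialized with a password yet.
    pub fn PK11_NeedUserInit(slot: *mut PK11SlotInfo) -> PRBool;
    // Give a symmetric key a nickname so it can be looked up again later.
    pub fn PK11_SetSymKeyNickname(sym_key: *mut PK11SymKey, nickname: *const c_char) -> SECStatus;
}
```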
To manage AES-256 GCM keys with NSS in the way that is currently implemented on the desktop, we need extended functionality in the Rust bindings to NSS. Specifically, we need functions for key generation with persistence, for listing persisted keys, and for wrapping and unwrapping keys to get at the key material of stored keys.
This simple change is a good way to introduce the sql_support crate and
start the migration system.
Made a couple other fixes along the way:
- Don't make the CLI `main` method async; that causes panics when
  running the `get` or `sync` commands, since those create a second
  runtime inside the async runtime (see the sketch after this list).
- Added `remote-settings-data`, which is generated by the CLI, to .gitignore
- Fixed sql_support testing bug for schemas with no indexes.
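The panic comes from nesting Tokio runtimes: `block_on` cannot be called from a thread that is already driving async tasks. A minimal illustration (not the CLI's actual code):
```
// Illustration of the failure mode only, not the CLI's actual code.
fn sync_command() {
    // Synchronous helper that spins up its own runtime internally.
    let rt = tokio::runtime::Runtime::new().unwrap();
    rt.block_on(async { /* perform the sync */ });
}

#[tokio::main]
async fn main() {
    // BAD: main is async, so sync_command's block_on runs inside the outer
    // runtime and panics with "Cannot start a runtime from within a runtime".
    sync_command();
}

// Fix: keep main synchronous and let each command manage its own runtime.
// fn main() { sync_command(); }
```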
Build a JAR file with libjnidispatch and libmegazord, targeted at
desktop arches so that devs can run unit tests with it. Export this as
a configuration and Maven package. This allows android-components code
to run the unit tests and also simplifies the gradle code for our own
components.
Named this full-megazord-libsForTest, which I think is a bit more clear
than full-megazord-forUnitTests.
One reason we didn't catch this sooner is that we relied on posting
`/taskcluster ci full` in a GH comment, but that didn't actually run all
the tasks.
- Added a set of test parameters for the github issue comment
- Run all tasks for issue comments
Merge new records with the old ones, rather than unconditionally storing
the new records. This fixes buggy behavior when sync does an
incremental update.
I'm going to update some of the surrounding code and I figure now is a
good time to simplify this.
One difference is when `sync_if_empty=true` and we have cached records
from the server but there were 0 records in the response. Before, we
would sync again, now we don't. I believe this is the correct behavior:
we only want to sync when the cache is empty, not if the record list is
empty. In any case, I don't think this case will happen in practice.
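To make the distinction concrete, a hypothetical sketch of the decision; names and shape are illustrative, not the actual client code:
```
// Hypothetical sketch: `cached` is None when nothing has been fetched yet,
// Some(records) when we have a cached server response (possibly empty).
fn should_sync<R>(cached: Option<&[R]>, sync_if_empty: bool) -> bool {
    match cached {
        // Cache is empty: sync if the caller asked for it.
        None => sync_if_empty,
        // We have a cached response, even one with zero records, so we no
        // longer re-sync; previously an empty record list triggered a sync.
        Some(_records) => false,
    }
}
```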
* Moved `dependsOnMegazord` from `publish.gradle` into
`component-common.gradle`. I don't think it makes sense to mix this
with the publishing code.
* Use configurations to handle the native megazord dependency.
* Manually copy the native megazord into the resource dir when running
unit tests. The strategy we were using before doesn't work after
Gradle 8.1.
* dependsOnMegazord no longer adds a native-support dependency. This is
only used by viaduct, so I moved the dependency into viaduct's
build.gradle.
* Removed the jnaForTest code. We no longer need this hack now that
`implementation` and `testImplementation` can specify different types.