Merge remote-tracking branch 'origin/master' into places

Mark Hammond, 2018-09-17 17:42:30 +10:00
Parents: b0aee4e411 1f069d2b75
Commit: b8480c68b5
61 changed files: 4461 additions and 4532 deletions

.gitignore (vendored), 1 change

@@ -4,3 +4,4 @@ Cargo.lock
credentials.json
*-engine.json
.cargo
*.db

View file

@@ -53,9 +53,11 @@ tasks:
github:
events:
- release
+ scopes:
+ - "secrets:get:project/application-services/publish"
payload:
- maxRunTime: 3600
- deadline: "{{ '2 hours' | $fromNow }}"
+ maxRunTime: 7200
+ deadline: "{{ '4 hours' | $fromNow }}"
image: 'mozillamobile/rust-component:buildtools-27.0.3-ndk-r15c-ndk-version-21-rust-stable-rust-beta'
command:
- /bin/bash
@@ -66,7 +68,11 @@ tasks:
&& cd application-services
&& git config advice.detachedHead false
&& git checkout '{{ event.version }}'
&& python automation/taskcluster/release/fetch-bintray-api-key.py
&& ./scripts/taskcluster-android.sh
&& cd logins-api/android
&& ./gradlew --no-daemon clean library:assembleRelease
&& ./gradlew bintrayUpload --debug -PvcsTag="{{ event.head.sha }}"
artifacts:
'public/bin/mozilla/fxa_client_android_{{ event.version }}.zip':
type: 'file'
@@ -88,9 +94,11 @@ tasks:
github:
events:
- tag
+ scopes:
+ - "secrets:get:project/application-services/publish"
payload:
- maxRunTime: 3600
- deadline: "{{ '2 hours' | $fromNow }}"
+ maxRunTime: 7200
+ deadline: "{{ '4 hours' | $fromNow }}"
image: 'mozillamobile/rust-component:buildtools-27.0.3-ndk-r15c-ndk-version-21-rust-stable-rust-beta'
command:
- /bin/bash
@@ -101,11 +109,11 @@ tasks:
&& cd application-services
&& git config advice.detachedHead false
&& git checkout '{{ event.head.tag }}'
&& python automation/taskcluster/release/fetch-bintray-api-key.py
&& ./scripts/taskcluster-android.sh
# && python automation/taskcluster/release/fetch-bintray-api-key.py
&& cd logins-api/android
&& ./gradlew --no-daemon clean library:assembleRelease
# && VCS_TAG=`git show-ref {{ event.head.tag }}` ./gradlew bintrayUpload --debug -PvcsTag="$VCS_TAG"
&& ./gradlew bintrayUpload --debug -PvcsTag="{{ event.head.sha }}"
artifacts:
'public/bin/mozilla/fxa_client_android_{{ event.head.tag }}.zip':
type: 'file'

View file

@@ -3,7 +3,6 @@ language: rust
# (The version in Travis's default Ubuntu Trusty is much too old).
os: osx
osx_image:
- - xcode9.4
- xcode10
before_install:
- brew install sqlcipher --with-fts
@@ -22,7 +21,7 @@ jobs:
- stage: iOS GitHub Release
if: tag IS present
rust: beta
- osx_image: xcode9.4
+ osx_image: xcode10
script: ./scripts/travis-ci-ios.sh
deploy:
provider: releases

View file

@@ -2,12 +2,11 @@
members = [
"fxa-client",
"fxa-client/ffi",
- "logins",
"sandvich/desktop",
"sync15-adapter",
- "sync15/passwords",
- "sync15/passwords/ffi",
- "places",
+ "logins-sql",
+ "logins-sql/ffi",
+ "places"
]
# For RSA keys cloning. Remove once openssl 0.10.8+ is released.

LICENSE (new file, 373 additions)

@@ -0,0 +1,373 @@
Mozilla Public License Version 2.0
==================================
1. Definitions
--------------
1.1. "Contributor"
means each individual or legal entity that creates, contributes to
the creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used
by a Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached
the notice in Exhibit A, the Executable Form of such Source Code
Form, and Modifications of such Source Code Form, in each case
including portions thereof.
1.5. "Incompatible With Secondary Licenses"
means
(a) that the initial Contributor has attached the notice described
in Exhibit B to the Covered Software; or
(b) that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the
terms of a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in
a separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible,
whether at the time of the initial grant or subsequently, any and
all of the rights conveyed by this License.
1.10. "Modifications"
means any of the following:
(a) any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered
Software; or
(b) any new file in Source Code Form that contains any Covered
Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the
License, by the making, using, selling, offering for sale, having
made, import, or transfer of either its Contributions or its
Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU
Lesser General Public License, Version 2.1, the GNU Affero General
Public License, Version 3.0, or any later versions of those
licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that
controls, is controlled by, or is under common control with You. For
purposes of this definition, "control" means (a) the power, direct
or indirect, to cause the direction or management of such entity,
whether by contract or otherwise, or (b) ownership of more than
fifty percent (50%) of the outstanding shares or beneficial
ownership of such entity.
2. License Grants and Conditions
--------------------------------
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
(a) under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
(b) under Patent Claims of such Contributor to make, use, sell, offer
for sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
(a) for any code that a Contributor has removed from Covered Software;
or
(b) for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
(c) under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights
to grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted
in Section 2.1.
3. Responsibilities
-------------------
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
(a) such Covered Software must also be made available in Source Code
Form, as described in Section 3.1, and You must inform recipients of
the Executable Form how they can obtain a copy of such Source Code
Form by reasonable means in a timely manner, at a charge no more
than the cost of distribution to the recipient; and
(b) You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter
the recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty,
or limitations of liability) contained within the Source Code Form of
the Covered Software, except that You may alter any license notices to
the extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
---------------------------------------------------
If it is impossible for You to comply with any of the terms of this
License with respect to some or all of the Covered Software due to
statute, judicial order, or regulation then You must: (a) comply with
the terms of this License to the maximum extent possible; and (b)
describe the limitations and the code they affect. Such description must
be placed in a text file included with all distributions of the Covered
Software under this License. Except to the extent prohibited by statute
or regulation, such description must be sufficiently detailed for a
recipient of ordinary skill to be able to understand it.
5. Termination
--------------
5.1. The rights granted under this License will terminate automatically
if You fail to comply with any of its terms. However, if You become
compliant, then the rights granted under this License from a particular
Contributor are reinstated (a) provisionally, unless and until such
Contributor explicitly and finally terminates Your grants, and (b) on an
ongoing basis, if such Contributor fails to notify You of the
non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular
Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means, this is the
first time You have received notice of non-compliance with this License
from such Contributor, and You become compliant prior to 30 days after
Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all
end user license agreements (excluding distributors and resellers) which
have been validly granted by You or Your distributors under this License
prior to termination shall survive termination.
************************************************************************
* *
* 6. Disclaimer of Warranty *
* ------------------------- *
* *
* Covered Software is provided under this License on an "as is" *
* basis, without warranty of any kind, either expressed, implied, or *
* statutory, including, without limitation, warranties that the *
* Covered Software is free of defects, merchantable, fit for a *
* particular purpose or non-infringing. The entire risk as to the *
* quality and performance of the Covered Software is with You. *
* Should any Covered Software prove defective in any respect, You *
* (not any Contributor) assume the cost of any necessary servicing, *
* repair, or correction. This disclaimer of warranty constitutes an *
* essential part of this License. No use of any Covered Software is *
* authorized under this License except under this disclaimer. *
* *
************************************************************************
************************************************************************
* *
* 7. Limitation of Liability *
* -------------------------- *
* *
* Under no circumstances and under no legal theory, whether tort *
* (including negligence), contract, or otherwise, shall any *
* Contributor, or anyone who distributes Covered Software as *
* permitted above, be liable to You for any direct, indirect, *
* special, incidental, or consequential damages of any character *
* including, without limitation, damages for lost profits, loss of *
* goodwill, work stoppage, computer failure or malfunction, or any *
* and all other commercial damages or losses, even if such party *
* shall have been informed of the possibility of such damages. This *
* limitation of liability shall not apply to liability for death or *
* personal injury resulting from such party's negligence to the *
* extent applicable law prohibits such limitation. Some *
* jurisdictions do not allow the exclusion or limitation of *
* incidental or consequential damages, so this exclusion and *
* limitation may not apply to You. *
* *
************************************************************************
8. Litigation
-------------
Any litigation relating to this License may be brought only in the
courts of a jurisdiction where the defendant maintains its principal
place of business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions.
Nothing in this Section shall prevent a party's ability to bring
cross-claims or counter-claims.
9. Miscellaneous
----------------
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides
that the language of a contract shall be construed against the drafter
shall not be used to construe this License against a Contributor.
10. Versions of the License
---------------------------
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
-------------------------------------------
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular
file, then You may include the notice in a location (such as a LICENSE
file in a relevant directory) where a recipient would be likely to look
for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
---------------------------------------------------------
This Source Code Form is "Incompatible With Secondary Licenses", as
defined by the Mozilla Public License, v. 2.0.

View file

@@ -5,7 +5,7 @@
import os
import taskcluster
- SECRET_NAME = 'project/mentat/publish'
+ SECRET_NAME = 'project/application-services/publish'
TASKCLUSTER_BASE_URL = 'http://taskcluster/secrets/v1'
@@ -19,7 +19,9 @@ def main():
"""Fetch the bintray user and api key from taskcluster's secret service
and save it to local.properties in the project root directory.
"""
+ print('fetching {} ...'.format(SECRET_NAME))
data = fetch_publish_secrets(SECRET_NAME)
+ print('fetching {} ... DONE ({} bytes)'.format(SECRET_NAME, len(str(data))))
properties_file_path = os.path.join(os.path.dirname(__file__), '../../../logins-api/android/local.properties')
with open(properties_file_path, 'w') as properties_file:

View file

@@ -0,0 +1,367 @@
---
id: dev-process
title: Development Process
sidebar_label: Development Process
---
We develop and deploy on a two-week iteration cycle. Every two weeks we
cut a release "train" that goes through deployment to stage and into production.
* [Product planning](#product-planning)
* [Issue management](#issue-management)
* [Milestones](#milestones)
* [Waffle columns](#waffle-columns)
* [Other labels](#other-labels)
* [Bug triage](#bug-triage)
* [Checkin meetings](#checkin-meetings)
* [Mondays at 08:30](#mondays-at-08-30)
* [Mondays at 12:30](#mondays-at-12-30)
* [Mondays at 13:00](#mondays-at-13-00)
* [Tuesdays at 13:30](#tuesdays-at-13-30)
* [Code review](#code-review)
* [Review Checklist](#review-checklist)
* [Tagging releases](#tagging-releases)
* [What if the merge messes up the changelog?](#what-if-the-merge-messes-up-the-changelog)
* [What if I already pushed a fix to `master` and it needs to be uplifted to an earlier train?](#what-if-i-already-pushed-a-fix-to-master-and-it-needs-to-be-uplifted-to-an-earlier-train)
* [What if there are two separate train branches containing parallel updates?](#what-if-there-are-two-separate-train-branches-containing-parallel-updates)
## Product Planning
Product-level feature planning is managed
in an AirTable board
alongside other work
for the Application Services team:
* [The app-services airtable board](https://airtable.com/tbl8uNZikl6DGUEUI/)
## Issue management
Most of our work takes place on [GitHub](https://github.com/mozilla/fxa).
We use labels and milestones to keep things organised,
and [waffle.io](https://waffle.io) to provide
an overview of bug status and activity:
* [Active issues for Firefox Accounts](https://waffle.io/mozilla/fxa)
Issue status is reflected by the following:
* The milestone indicates *why* we are working on this issue.
* The waffle column indicates *what* the next action is and *when*
we expect to complete it.
* The assignee, if any, indicates *who* is responsible for that action.
### Milestones
When we start working on a new feature,
we create a corresponding
[milestone in github](https://github.com/mozilla/fxa/milestones)
and break down the task
into bugs associated with that milestone.
There's also an ongoing
["quality" milestone](https://waffle.io/mozilla/fxa?milestone=FxA-0:%20quality)
for tracking work
related to overall quality
rather than a particular feature.
If it's not obvious
what milestone an issue should belong to,
that's a strong signal
that we're not ready to work on it yet.
Milestones are synced across all our repos using the
[sync_milestones.js](https://github.com/mozilla/fxa/blob/master/scripts/sync_milestones.js)
script.
### Waffle Columns
Issues that are not being actively worked on are managed in the following columns:
* **triage**: all incoming issues start out in this column by default.
* **backlog**: issues that we plan to work on someday, but not urgently.
* **next**: issues that we plan to pick up in the next development cycle.
Issues that are under active development are managed in the following columns:
* **active**: issues that someone is actively working on.
* **in review**: issues that have a PR ready for review; the assignee is the reviewer.
* **blocked**: issues on which progress has stalled due to external factors.
All issues in these four columns should have an assignee, who is the person
responsible for taking the next action on that bug.
### Other Labels
We use the following labels to add additional context on issues:
* **shipit**: indicates items that need to be merged before cutting the current train.
* **good-first-bug**: indicates well-scoped, approachable tasks that may be a good
starting point for new contributors to the project.
* **i18n**: indicates issues that affect internationalized strings, and so need special
care when merging to avoid issues with translations.
* **ux**: indicates issues that have a UX component, and thus should receive input and
validation from the UX team.
Labels are synced across all the repos using the
[sync_labels.js](https://github.com/mozilla/fxa/blob/master/scripts/sync_labels.js)
script.
### Bug Triage
Issues in the **triage** column should move into one of the other columns
via these guidelines:
* If it's so important that we need to get to it in the next few days,
put it in **active** and consider adding a **❤❤❤** label to
increase visibility.
* If we should get to it in the next few weeks, put it in **next**.
* If we should get to it in the next few months, put it towards the top
of **backlog** and add a **❤** label to increase visibility.
* If we should get to it eventually, put it further down in **backlog**.
* Otherwise, just close it.
While we hold regular triage meetings, developers with sufficient context are
welcome to deal with issues in the **triage** column at any time.
## Checkin Meetings
The team meets regularly
to stay in sync about development status
and ensure nothing is falling through the cracks.
During meetings we take notes in the
**[coordination google-doc](https://docs.google.com/document/d/1r_qfb-D1Yt5KAT8IIVPvjaeliFORbQk-xFE_tRNM4Rc/)**,
and afterward we send a summary of each meeting
to an appropriate mailing list.
We hold the following meetings
over the course of each two-week cycle,
with meeting times pinned
to Mozilla Standard Time (aka Pacific Time).
### Mondays at 08:30
This is a 60 minute meeting slot that's convenient for Europe and US-East.
The first 30 minutes are split between UX/PM and dev/ops discussions,
and the second 30 are for triaging new bugs and pruning the backlog.
Minutes are emailed to [dev-fxacct@mozilla.org](https://mail.mozilla.org/pipermail/dev-fxacct/)
### Mondays at 12:30
#### Weekly: Show and Tell and Share
We get together to demonstrate
any new features that will be included on the next train,
or any other interesting work
that was completed in the previous cycle.
Minutes are emailed to [dev-fxacct@mozilla.org](https://mail.mozilla.org/pipermail/dev-fxacct/)
### Mondays at 13:00
This is the one time each week
where all team members everywhere in the world
get together in the same (virtual) room
at the same time.
#### First week: Dev Planning Meeting
We review any items remaining
in **blocked**, **review** or **active**
to determine whether they
should carry over to the upcoming train,
or be de-prioritized.
We then work through the issues
in **next** to decide what to commit to
for the upcoming train.
Minutes are not recorded from this meeting.
#### Second week: Retrospective
We take time every two weeks
to explicitly reflect on our development process -
what worked, what didn't, what new things we'd like to try.
Minutes are private
and are emailed to [fxa-staff@mozilla.com](https://groups.google.com/a/mozilla.com/forum/#!forum/fxa-staff)
### Tuesdays at 13:30
This is a 30 minute meeting slot
that is convenient for US-West and Oceania.
#### Weekly: DevOps Catchup
We dedicate some time to discuss backend operational issues.
On weeks when we are cutting a new train,
we review the status of any **shipit** items
from the Monday meeting, and tag new releases
of the relevant repos for the outbound train.
Minutes are emailed to [dev-fxacct@mozilla.org](https://mail.mozilla.org/pipermail/dev-fxacct/),
sans any confidential operational notes.
## Code Review
This project is production Mozilla code and subject to our [engineering practices and quality standards](https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Committing_Rules_and_Responsibilities). Every patch must be [reviewed](https://developer.mozilla.org/en-US/docs/Code_Review_FAQ) by an owner or peer of the [Firefox Accounts module](https://wiki.mozilla.org/Modules/Other#Firefox_Accounts).
### Review Checklist
Here are some handy questions and things to consider when reviewing code for Firefox Accounts:
* How will we tell if this change is successful?
* If it's fixing a bug, have we introduced tests to ensure the bug stays fixed?
* If it's a feature, do we have metrics to tell whether it's providing value?
* Should it be A/B tested to check whether it's a good idea at all?
* Did test coverage increase, or at least stay the same?
* We need a pretty good reason to merge code that decreases test coverage...
* If it's hard to answer this question, consider adding a test.
* Does it introduce new user-facing strings?
* These strings will need to go through our localization process. Check that the
templates in which they're defined are covered by our string extraction scripts.
* The code must be merged before the string-extraction date for that development cycle.
* Does it store user-provided data?
* The validation rules should be explicit, documented, and clearly enforced before storage.
* Does it display user-controlled data?
* It must be appropriately escaped, e.g. HTML-escaped before being inserted into web content.
* Does it involve a database schema migration?
* The changes must be backwards-compatible with the previous deployed version. This means
that you can't do something like `ALTER TABLE CHANGE COLUMN` in a single deployment, but
must split it into two deployments: one to add the new column and start using it, and a second to
drop the now-unused old column.
* Does it contain any long-running statements that might lock tables during deployment?
* Can the changes be rolled back without data loss or a service outage?
* Has the canonical db schema been kept in sync with the patch files?
* Once merged, please file an Ops bug to track deployment in stage and production.
* Does it alter the public API of a service?
* Ensure that the change is backwards compatible.
* Ensure that it's documented appropriately in the API description.
* Note whether we should announce it on one or more developer mailing lists.
* Does it add new metrics or logging?
* Make sure they're documented for future reference.
* Does it conform to the prevailing style of the codebase?
* If it introduces new files, ensure they're covered by the linter.
* If you notice a stylistic issue that was *not* detected by the linter,
consider updating the linter.
* For fixes that are patching a train,
has the PR been opened against the correct train branch?
* If the PR is against `master`,
it is likely that it will mess up
our change logs and the git history
when merged.
* If no appropriate train branch exists,
one can be created at the appropriate point in history
and pushed.
After the patch has been tagged (see below),
the train branch can then be merged to `master`.
Commits should not be cherry-picked
between train branches and `master`.
## Tagging releases
Each repo has a `grunt` script
for tagging new releases.
This script is responsible for:
* Updating the version strings
in `package.json` and `npm-shrinkwrap.json`.
* Writing commit summaries
to the change log.
* Committing these changes.
The script will not push the tag,
so you can always check what's changed
before making the decision
about whether the changes were correct
and it's okay to push.
To tag a major release, run:
```
grunt version
```
To tag a patch release, run:
```
grunt version:patch
```
Patch releases should normally be tagged
in a specific `train-nnn` branch,
which must then be merged back to `master`.
It's important that:
1. The merge happens;
2. It really is just a vanilla `git merge`
and not a `rebase`, `cherry-pick` or `merge --squash`.
Doing it this way
ensures that all releases show up in the changelog,
with commits correctly listed under the appropriate version,
and that future releases are never missing the details
from earlier ones.
Other approaches,
like cherry-picking between branches
or fixing in master then uplifting to a train branch,
will break the history.
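The flow above can be sketched end to end in a throwaway repository. This is illustrative only: the `train-100` branch and `v1.100.1` tag are invented names, and nothing here touches a real remote.

```shell
# Throwaway demo repo; all names are hypothetical.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
main=$(git symbolic-ref --short HEAD)   # 'master' or 'main', depending on git defaults
echo "## 1.100.0" > CHANGELOG.md
git add CHANGELOG.md && git commit -qm "Release 1.100.0"
# Hypothetical train branch; the patch release is tagged here...
git checkout -qb train-100
echo "## 1.100.1" >> CHANGELOG.md
git commit -qam "Release 1.100.1"
git tag v1.100.1
# ...and then a vanilla merge (no rebase, cherry-pick, or squash) carries
# the tagged release into the main branch's history.
git checkout -q "$main"
git merge -q --no-ff --no-edit train-100
git merge-base --is-ancestor v1.100.1 "$main" && echo "release is in $main history"
```

Because the tag is reachable from the merge commit, future changelog generation on `master` will see the patch release.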
### What if the merge messes up the changelog?
After merging but before pushing,
you should check the changelog to make sure
that the expected versions are listed
and they're in the right order.
If any are missing or the order is wrong,
manually edit the changelog
so that it makes sense,
using the commit summaries from `git log --graph --oneline`
to fill in any blanks as necessary.
Then `git add` those changes
and squash them into the preceding merge commit
using `git commit --amend`.
Now you can push
and the merged changelog will make sense.
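As a sketch (again in a throwaway repository, with invented version numbers), the edit-then-amend sequence looks like this:

```shell
# Throwaway demo repo; version numbers and branch names are hypothetical.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
main=$(git symbolic-ref --short HEAD)
printf '## 1.99.0\n' > CHANGELOG.md
git add CHANGELOG.md && git commit -qm "Release 1.99.0"
git checkout -qb train-100
printf '## 1.100.1\n## 1.99.0\n' > CHANGELOG.md
git commit -qam "Release 1.100.1"
git checkout -q "$main"
git merge -q --no-ff --no-edit train-100
# Suppose the merged changelog needs a manual touch-up; edit it by hand...
printf '## 1.100.1\n\n## 1.99.0\n' > CHANGELOG.md
# ...then squash the fix into the preceding merge commit, not a new commit:
git add CHANGELOG.md
git commit -q --amend --no-edit
```

The amended HEAD is still the merge commit, so the history stays clean and no extra "fix changelog" commit appears.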
### What if I already pushed a fix to `master` and it needs to be uplifted to an earlier train?
In this case,
it's okay to use `git cherry-pick`
because that's the only way to get the fix
into the earlier train.
However, after tagging and pushing the earlier release,
you should still merge the train branch back to `master`
so that future changelogs include the new release.
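A sketch of the uplift-then-merge sequence, in a throwaway repository with invented branch and tag names:

```shell
# Throwaway demo repo; train-99 and v0.99.1 are hypothetical.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
main=$(git symbolic-ref --short HEAD)
echo base > base.txt && git add base.txt && git commit -qm "Initial commit"
git branch train-99                      # hypothetical earlier train
echo fix > fix.txt && git add fix.txt && git commit -qm "Fix landed on main first"
fix_sha=$(git rev-parse HEAD)
# Uplift the fix: cherry-pick is the only way to get it onto the earlier train.
git checkout -q train-99
git cherry-pick -x "$fix_sha" >/dev/null
git tag v0.99.1                          # tag the patch release on the train
# Afterwards, still merge the train branch back so the release shows up
# in future changelogs.
git checkout -q "$main"
git merge -q --no-ff --no-edit train-99
git merge-base --is-ancestor v0.99.1 "$main" && echo "uplifted release merged back"
```

The `-x` flag records the original commit id in the cherry-picked commit message, which makes the duplication easy to audit later.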
### What if there are two separate train branches containing parallel updates?
In this case,
the easiest way to keep the changelogs complete
and in the appropriate version order
is to:
1. Merge from the earlier train branch
into the later one.
[Fix up the changelog](#what-if-the-merge-messes-up-the-changelog)
if it needs it
and then push the train branch.
2. Now merge from the later train branch
into `master`.
Again,
remember to fix up the changelog before pushing
if required.
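That order of operations can be sketched as follows
(throwaway repository; versions and branch names are illustrative,
and the conflict resolution stands in for the manual changelog fix-up):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
main=$(git symbolic-ref --short HEAD)
printf 'v1.0.0\n' > CHANGELOG.md
git add CHANGELOG.md && git commit -qm 'v1.0.0'
git branch train-100
git branch train-101
git checkout -q train-100
printf 'v1.0.1\nv1.0.0\n' > CHANGELOG.md
git commit -qam 'v1.0.1'
git checkout -q train-101
printf 'v1.1.1\nv1.0.0\n' > CHANGELOG.md
git commit -qam 'v1.1.1'
# 1. Merge the earlier train into the later one, fixing up the
#    changelog by hand when the merge conflicts (as it does here):
if ! git merge --no-ff -m 'Merge train-100' train-100; then
  printf 'v1.1.1\nv1.0.1\nv1.0.0\n' > CHANGELOG.md
  git add CHANGELOG.md
  git commit -q --no-edit
fi
# 2. Then merge the later train into the main branch:
git checkout -q "$main"
git merge -q --no-ff -m 'Merge train-101' train-101
```

After both merges, the main branch's changelog lists every release
in order, and both train tips are reachable from its history.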
@ -53,7 +53,7 @@ development, and will be happy to help answer any questions you might have:
We meet regularly to triage bugs and make grand plans for the future. Anyone is welcome to
join us in the following forums:
* Regular video meetings, as noted on the [project calendar](https://www.google.com/calendar/embed?src=mozilla.com_urbkla6jvphpk1t8adi5c12kic%40group.calendar.google.com) and with minutes in the [coordination etherpad](https://id.etherpad.mozilla.org/fxa-engineering-coordination)
* Regular video meetings, as noted on the [project calendar](https://www.google.com/calendar/embed?src=mozilla.com_urbkla6jvphpk1t8adi5c12kic%40group.calendar.google.com) and with minutes in the [coordination google-doc](https://docs.google.com/document/d/1r_qfb-D1Yt5KAT8IIVPvjaeliFORbQk-xFE_tRNM4Rc/)
* The [Firefox Accounts mailing list](https://mail.mozilla.org/listinfo/dev-fxacct)
* The `#fxa` channel on [Mozilla IRC](https://wiki.mozilla.org/IRC)
@ -71,7 +71,7 @@ being developed in a separate repository. The main components fit together like
Most repositories are [available via GitHub](https://github.com/mozilla?utf8=%E2%9C%93&query=fxa)
You can read more about the [details of our development process](/dev-process/)
You can read more about the [details of our development process](./dev-process.html)
### Core Servers and Libraries
@ -159,7 +159,7 @@ If you have found a bug in FxA, please file it via the dashboard above
There is also a "Core/FxAccounts" bugzilla component that covers the accounts code inside Firefox itself, and a "Server: Firefox Accounts" component for when FxA code interacts with parts of Mozilla that operate out of bugzilla:
* [Bugzilla search for "Core/FxAccounts"](https://bugzilla.mozilla.org/buglist.cgi?query_format=advanced&bug_status=UNCONFIRMED&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&component=FxAccounts&product=Core&list_id=12360036)
* [Bugzilla search for "Server: Firefox Accounts"](https://bugzilla.mozilla.org/buglist.cgi?query_format=advanced&bug_status=UNCONFIRMED&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&component=Server%3A Firefox Accounts&product=Cloud Services)
* [Bugzilla search for "Server: Firefox Accounts"](https://bugzilla.mozilla.org/buglist.cgi?query_format=advanced&bug_status=UNCONFIRMED&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&component=Server%3A%20Firefox%20Accounts&product=Cloud%20Services)
## How To
@ -185,4 +185,4 @@ There is also a "Core/FxAccounts" bugzilla component that covers the accounts co
[![](https://www.lucidchart.com/publicSegments/view/ea28050a-024f-42bc-aa6c-023e8cf101e3/image.png)](https://www.lucidchart.com/publicSegments/view/ea28050a-024f-42bc-aa6c-023e8cf101e3/image.png)
[LucidChart View](https://www.lucidchart.com/documents/edit/677146e7-0fb8-4486-99a7-7eacaa16b6be/1)
[LucidChart View](https://www.lucidchart.com/documents/edit/677146e7-0fb8-4486-99a7-7eacaa16b6be/1)
@ -229,6 +229,7 @@ Below is a list of current and future Mozilla services that delegate authenticat
* [Find My Device](https://wiki.mozilla.org/CloudServices/FindMyDevice)
* [Firefox Marketplace](/en-US/Marketplace)
* [addons.mozilla.org](https://addons.mozilla.org)
* [Mozilla IAM](https://github.com/mozilla-iam/mozilla-iam) (since [June 2018](https://discourse.mozilla.org/t/announcing-firefox-accounts-in-mozilla-iam/29218))
Contact
-------
@ -4,7 +4,7 @@ buildscript {
ext.kotlin_version = '1.2.50'
ext.library = [
version: '0.2.0'
version: '0.4.0'
]
ext.build = [
@ -11,3 +11,13 @@ org.gradle.jvmargs=-Xmx1536m
# This option should only be used with decoupled projects. More details, visit
# http://www.gradle.org/docs/current/userguide/multi_project_builds.html#sec:decoupled_projects
# org.gradle.parallel=true
libGroupId=org.mozilla.sync15
libRepositoryName=application-services
libProjectName=application-services
libProjectDescription=Firefox Application Services
libUrl=https://github.com/mozilla/application-services
libVcsUrl=https://github.com/mozilla/application-services.git
libLicense=MPL-2.0
libLicenseUrl=https://www.mozilla.org/en-US/MPL/2.0/
@ -33,7 +33,7 @@ android {
cargo {
// The directory of the Cargo.toml to build.
module = '../../../sync15/passwords/ffi'
module = '../../../logins-sql/ffi'
// The Android NDK API level to target.
apiLevel = 21
@ -84,7 +84,6 @@ cargo {
dependencies {
implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk7:$kotlin_version"
implementation 'com.android.support:appcompat-v7:27.1.1'
implementation 'com.beust:klaxon:3.0.1' // JSON parsing.
implementation 'net.java.dev.jna:jna:4.5.2@aar'
implementation 'org.jetbrains.kotlinx:kotlinx-coroutines-android:0.23.4'
@ -9,7 +9,6 @@
package org.mozilla.sync15.logins
import android.util.Log
import com.beust.klaxon.Klaxon
import com.sun.jna.Pointer
import kotlinx.coroutines.experimental.launch
import org.mozilla.sync15.logins.rust.PasswordSyncAdapter
@ -101,7 +100,7 @@ class DatabaseLoginsStorage(private val dbPath: String) : Closeable, LoginsStora
if (json == null) {
null
} else {
Klaxon().parse<ServerPassword>(json)
ServerPassword.fromJSON(json)
}
)
}
@ -120,7 +119,7 @@ class DatabaseLoginsStorage(private val dbPath: String) : Closeable, LoginsStora
PasswordSyncAdapter.INSTANCE.sync15_passwords_get_all(this.raw!!, it)
}.then { json ->
Log.d("Logins", "got list: " + json);
SyncResult.fromValue(Klaxon().parseArray<ServerPassword>(json!!)!!)
SyncResult.fromValue(ServerPassword.fromJSONArray(json!!))
}
}
@ -149,7 +148,7 @@ class DatabaseLoginsStorage(private val dbPath: String) : Closeable, LoginsStora
try {
return p.getString(0, "utf8");
} finally {
PasswordSyncAdapter.INSTANCE.destroy_c_char(p);
PasswordSyncAdapter.INSTANCE.sync15_passwords_destroy_string(p);
}
}
@ -7,6 +7,9 @@
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License. */
package org.mozilla.sync15.logins
import org.json.JSONArray
import org.json.JSONException
import org.json.JSONObject;
/**
* Raw password data that is stored by the LoginsStorage implementation.
@ -47,5 +50,45 @@ class ServerPassword (
val usernameField: String? = null,
val passwordField: String? = null
)
) {
companion object {
fun fromJSON(jsonObject: JSONObject): ServerPassword {
return ServerPassword(
id = jsonObject.getString("id"),
hostname = jsonObject.getString("hostname"),
password = jsonObject.getString("password"),
username = jsonObject.optString("username", null),
httpRealm = jsonObject.optString("httpRealm", null),
formSubmitURL = jsonObject.optString("formSubmitURL", null),
usernameField = jsonObject.optString("usernameField", null),
passwordField = jsonObject.optString("passwordField", null),
timesUsed = jsonObject.getInt("timesUsed"),
timeCreated = jsonObject.getLong("timeCreated"),
timeLastUsed = jsonObject.getLong("timeLastUsed"),
timePasswordChanged = jsonObject.getLong("timePasswordChanged")
)
}
fun fromJSON(jsonText: String): ServerPassword {
return fromJSON(JSONObject(jsonText))
}
fun fromJSONArray(jsonArrayText: String): List<ServerPassword> {
val result: MutableList<ServerPassword> = mutableListOf()
val array = JSONArray(jsonArrayText)
// `until` (not `..`): JSONArray indices run from 0 to length() - 1.
for (index in 0 until array.length()) {
result.add(fromJSON(array.getJSONObject(index)))
}
return result
}
}
}
@ -54,8 +54,11 @@ internal interface PasswordSyncAdapter : Library {
fun sync15_passwords_touch(state: RawLoginSyncState, id: String, error: RustError.ByReference)
fun sync15_passwords_delete(state: RawLoginSyncState, id: String, error: RustError.ByReference): Boolean
// Note: returns guid of new login entry (unless one was specifically requested)
fun sync15_passwords_add(state: RawLoginSyncState, new_login_json: String, error: RustError.ByReference): Pointer
fun sync15_passwords_update(state: RawLoginSyncState, existing_login_json: String, error: RustError.ByReference)
fun destroy_c_char(p: Pointer)
fun sync15_passwords_destroy_string(p: Pointer)
}
class RawLoginSyncState : PointerType()
@ -60,7 +60,7 @@ open class RustError : Structure() {
fun consumeErrorMessage(): String {
val result = this.getMessage()
if (this.message != null) {
PasswordSyncAdapter.INSTANCE.destroy_c_char(this.message!!);
PasswordSyncAdapter.INSTANCE.sync15_passwords_destroy_string(this.message!!);
this.message = null
}
if (result == null) {
@ -23,7 +23,7 @@ android {
abi {
enable true
reset()
include 'x86', 'arm64', 'armeabi-v7a'
include 'x86', 'arm64-v8a', 'armeabi-v7a'
}
}
@ -47,7 +47,7 @@ dependencies {
implementation 'com.android.support:support-v4:27.1.1'
implementation 'com.android.support:recyclerview-v7:27.1.1'
implementation 'com.beust:klaxon:3.0.1' // JSON parsing.
implementation 'org.mozilla.components:fxa:0.21'
implementation 'org.mozilla.components:fxa:0.22'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'com.android.support.test:runner:1.0.2'
androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2'
@ -162,7 +162,7 @@ class MainActivity : AppCompatActivity() {
fun initStore(): SyncResult<DatabaseLoginsStorage> {
val appFiles = this.applicationContext.getExternalFilesDir(null)
val storage = DatabaseLoginsStorage(appFiles.absolutePath + "/logins.db");
val storage = DatabaseLoginsStorage(appFiles.absolutePath + "/logins.sqlite");
return storage.unlock("my_secret_key").then {
SyncResult.fromValue(storage)
}
logins-sql/Cargo.toml (new file)
@ -0,0 +1,28 @@
[package]
name = "logins-sql"
version = "0.1.0"
authors = ["Thom Chiovoloni <tchiovoloni@mozilla.com>"]
[dependencies]
sync15-adapter = { path = "../sync15-adapter" }
serde = "1.0.75"
serde_derive = "1.0.75"
serde_json = "1.0.26"
log = "0.4.4"
lazy_static = "1.1.0"
url = "1.7.1"
failure = "0.1"
failure_derive = "0.1"
[dependencies.rusqlite]
version = "0.14.0"
features = ["sqlcipher", "limits"]
[dev-dependencies]
more-asserts = "0.2.1"
env_logger = "0.5.13"
prettytable-rs = "0.7.0"
fxa-client = { path = "../fxa-client" }
webbrowser = "0.3.1"
chrono = "0.4.6"
clap = "2.32.0"
@ -0,0 +1,507 @@
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
#![recursion_limit = "4096"]
extern crate logins_sql;
extern crate sync15_adapter as sync;
extern crate fxa_client;
extern crate url;
#[macro_use]
extern crate prettytable;
extern crate serde;
#[macro_use]
extern crate serde_derive;
extern crate serde_json;
extern crate rusqlite;
extern crate webbrowser;
extern crate clap;
#[macro_use]
extern crate log;
extern crate env_logger;
extern crate chrono;
extern crate failure;
use failure::Fail;
use std::{fs, io::{self, Read, Write}};
use std::collections::HashMap;
use fxa_client::{FirefoxAccount, Config, OAuthInfo};
use sync::{Sync15StorageClientInit, KeyBundle};
use logins_sql::{PasswordEngine, Login};
const CLIENT_ID: &str = "98adfa37698f255b";
const REDIRECT_URI: &str = "https://lockbox.firefox.com/fxa/ios-redirect.html";
const CONTENT_BASE: &str = "https://accounts.firefox.com";
const SYNC_SCOPE: &str = "https://identity.mozilla.com/apps/oldsync";
const SCOPES: &[&str] = &[
SYNC_SCOPE,
"https://identity.mozilla.com/apps/lockbox",
];
// I'm completely punting on good error handling here.
type Result<T> = std::result::Result<T, failure::Error>;
#[derive(Debug, Deserialize)]
struct ScopedKeyData {
k: String,
kty: String,
kid: String,
scope: String,
}
fn load_fxa_creds(path: &str) -> Result<FirefoxAccount> {
let mut file = fs::File::open(path)?;
let mut s = String::new();
file.read_to_string(&mut s)?;
Ok(FirefoxAccount::from_json(&s)?)
}
fn load_or_create_fxa_creds(path: &str, cfg: Config) -> Result<FirefoxAccount> {
load_fxa_creds(path)
.or_else(|e| {
info!("Failed to load existing FxA credentials from {:?} (error: {}), launching OAuth flow", path, e);
create_fxa_creds(path, cfg)
})
}
fn create_fxa_creds(path: &str, cfg: Config) -> Result<FirefoxAccount> {
let mut acct = FirefoxAccount::new(cfg, CLIENT_ID, REDIRECT_URI);
let oauth_uri = acct.begin_oauth_flow(SCOPES, true)?;
if let Err(_) = webbrowser::open(&oauth_uri.as_ref()) {
warn!("Failed to open a web browser D:");
println!("Please visit this URL, sign in, and then copy-paste the final URL below.");
println!("\n {}\n", oauth_uri);
} else {
println!("Please paste the final URL below:\n");
}
let final_url = url::Url::parse(&prompt_string("Final URL").unwrap_or(String::new()))?;
let query_params = final_url.query_pairs().into_owned().collect::<HashMap<String, String>>();
acct.complete_oauth_flow(&query_params["code"], &query_params["state"])?;
let mut file = fs::File::create(path)?;
write!(file, "{}", acct.to_json()?)?;
file.flush()?;
Ok(acct)
}
fn prompt_string<S: AsRef<str>>(prompt: S) -> Option<String> {
print!("{}: ", prompt.as_ref());
let _ = io::stdout().flush(); // Don't care if flush fails really.
let mut s = String::new();
io::stdin().read_line(&mut s).expect("Failed to read line...");
if let Some('\n') = s.chars().next_back() { s.pop(); }
if let Some('\r') = s.chars().next_back() { s.pop(); }
if s.len() == 0 {
None
} else {
Some(s)
}
}
fn prompt_usize<S: AsRef<str>>(prompt: S) -> Option<usize> {
if let Some(s) = prompt_string(prompt) {
match s.parse::<usize>() {
Ok(n) => Some(n),
Err(_) => {
println!("Couldn't parse!");
None
}
}
} else {
None
}
}
fn read_login() -> Login {
let username = prompt_string("username").unwrap_or_default();
let password = prompt_string("password").unwrap_or_default();
let form_submit_url = prompt_string("form_submit_url");
let hostname = prompt_string("hostname").unwrap_or_default();
let http_realm = prompt_string("http_realm");
let username_field = prompt_string("username_field").unwrap_or_default();
let password_field = prompt_string("password_field").unwrap_or_default();
let record = Login {
id: sync::util::random_guid().unwrap().into(),
username,
password,
username_field,
password_field,
form_submit_url,
http_realm,
hostname,
.. Login::default()
};
if let Err(e) = record.check_valid() {
warn!("Warning: produced invalid record: {}", e);
}
record
}
fn update_string(field_name: &str, field: &mut String, extra: &str) -> bool {
let opt_s = prompt_string(format!("new {} [now {}{}]", field_name, field, extra));
if let Some(s) = opt_s {
*field = s;
true
} else {
false
}
}
fn string_opt(o: &Option<String>) -> Option<&str> {
o.as_ref().map(|s| s.as_ref())
}
fn string_opt_or<'a>(o: &'a Option<String>, or: &'a str) -> &'a str {
string_opt(o).unwrap_or(or)
}
fn update_login(record: &mut Login) {
update_string("username", &mut record.username, ", leave blank to keep");
update_string("password", &mut record.password, ", leave blank to keep");
update_string("hostname", &mut record.hostname, ", leave blank to keep");
update_string("username_field", &mut record.username_field, ", leave blank to keep");
update_string("password_field", &mut record.password_field, ", leave blank to keep");
if prompt_bool(&format!("edit form_submit_url? (now {}) [yN]", string_opt_or(&record.form_submit_url, "(none)"))).unwrap_or(false) {
record.form_submit_url = prompt_string("form_submit_url");
}
if prompt_bool(&format!("edit http_realm? (now {}) [yN]", string_opt_or(&record.http_realm, "(none)"))).unwrap_or(false) {
record.http_realm = prompt_string("http_realm");
}
if let Err(e) = record.check_valid() {
warn!("Warning: produced invalid record: {}", e);
}
}
fn prompt_bool(msg: &str) -> Option<bool> {
let result = prompt_string(msg);
result.and_then(|r| match r.chars().next().unwrap() {
'y' | 'Y' | 't' | 'T' => Some(true),
'n' | 'N' | 'f' | 'F' => Some(false),
_ => None
})
}
fn prompt_chars(msg: &str) -> Option<char> {
prompt_string(msg).and_then(|r| r.chars().next())
}
fn timestamp_to_string(milliseconds: i64) -> String {
use chrono::{Local, DateTime};
use std::time::{UNIX_EPOCH, Duration};
let time = UNIX_EPOCH + Duration::from_millis(milliseconds as u64);
let dtl: DateTime<Local> = time.into();
dtl.format("%l:%M:%S %p%n%h %e, %Y").to_string()
}
fn show_sql(e: &PasswordEngine, sql: &str) -> Result<()> {
use prettytable::{row::Row, cell::Cell, Table};
use rusqlite::types::Value;
let conn = e.conn();
let mut stmt = conn.prepare(sql)?;
let cols: Vec<String> = stmt.column_names().into_iter().map(|x| x.to_owned()).collect();
let len = cols.len();
let mut table = Table::new();
table.add_row(Row::new(
cols.iter().map(|name| Cell::new(&name).style_spec("bc")).collect()
));
let rows = stmt.query_map(&[], |row| {
(0..len).into_iter().map(|idx| {
match row.get::<_, Value>(idx) {
Value::Null => Cell::new("null").style_spec("Fd"),
Value::Integer(i) => Cell::new(&i.to_string()).style_spec("Fb"),
Value::Real(r) => Cell::new(&r.to_string()).style_spec("Fb"),
Value::Text(s) => Cell::new(&s.to_string()).style_spec("Fr"),
Value::Blob(b) => Cell::new(&format!("{}b blob", b.len()))
}
}).collect::<Vec<_>>()
})?;
for row in rows {
table.add_row(Row::new(row?));
}
table.printstd();
Ok(())
}
fn show_all(engine: &PasswordEngine) -> Result<Vec<String>> {
let records = engine.list()?;
let mut table = prettytable::Table::new();
table.add_row(row![bc =>
"(idx)",
"Guid",
"Username",
"Password",
"Host",
"Submit URL",
"HTTP Realm",
"User Field",
"Pass Field",
"Uses",
"Created At",
"Changed At",
"Last Used"
]);
let mut v = Vec::with_capacity(records.len());
let mut record_copy = records.clone();
record_copy.sort_by(|a, b| a.id.cmp(&b.id));
for rec in records.iter() {
table.add_row(row![
r->v.len(),
Fr->&rec.id,
&rec.username,
Fd->&rec.password,
&rec.hostname,
string_opt_or(&rec.form_submit_url, ""),
string_opt_or(&rec.http_realm, ""),
&rec.username_field,
&rec.password_field,
rec.times_used,
timestamp_to_string(rec.time_created),
timestamp_to_string(rec.time_password_changed),
if rec.time_last_used == 0 {
"Never".to_owned()
} else {
timestamp_to_string(rec.time_last_used)
}
]);
v.push(rec.id.clone());
}
table.printstd();
Ok(v)
}
fn prompt_record_id(e: &PasswordEngine, action: &str) -> Result<Option<String>> {
let index_to_id = show_all(e)?;
let input = if let Some(input) = prompt_usize(&format!("Enter (idx) of record to {}", action)) {
input
} else {
return Ok(None);
};
if input >= index_to_id.len() {
info!("No such index");
return Ok(None);
}
Ok(Some(index_to_id[input].clone()))
}
fn init_logging() {
// Explicitly ignore some rather noisy crates. Turn on trace for everyone else.
let spec = "trace,tokio_threadpool=warn,tokio_reactor=warn,tokio_core=warn,tokio=warn,hyper=warn,want=warn,mio=warn,reqwest=warn";
env_logger::init_from_env(
env_logger::Env::default().filter_or("RUST_LOG", spec)
);
}
fn main() -> Result<()> {
init_logging();
std::env::set_var("RUST_BACKTRACE", "1");
let matches = clap::App::new("sync_pass_sql")
.about("CLI login syncing tool (backed by sqlcipher)")
.arg(clap::Arg::with_name("database_path")
.short("d")
.long("database")
.value_name("LOGINS_DATABASE")
.takes_value(true)
.help("Path to the logins database (default: \"./logins.db\")"))
.arg(clap::Arg::with_name("encryption_key")
.short("k")
.long("key")
.value_name("ENCRYPTION_KEY")
.takes_value(true)
.help("Database encryption key.")
.required(true))
.arg(clap::Arg::with_name("credential_file")
.short("c")
.long("credentials")
.value_name("CREDENTIAL_JSON")
.takes_value(true)
.help("Path to store our cached fxa credentials (defaults to \"./credentials.json\""))
.get_matches();
let cred_file = matches.value_of("credential_file").unwrap_or("./credentials.json");
let db_path = matches.value_of("database_path").unwrap_or("./logins.db");
// This should already be checked by `clap`, IIUC
let encryption_key = matches.value_of("encryption_key").expect("Encryption key is not optional");
// Let's not log the encryption key; it's just not a good habit to be in.
debug!("Using credential file = {:?}, db = {:?}", cred_file, db_path);
// TODO: allow users to use stage/etc.
let cfg = Config::import_from(CONTENT_BASE)?;
let tokenserver_url = cfg.token_server_endpoint_url()?;
// TODO: we should probably set a persist callback on acct?
let mut acct = load_or_create_fxa_creds(cred_file, cfg.clone())?;
let token: OAuthInfo;
match acct.get_oauth_token(SCOPES)? {
Some(t) => token = t,
None => {
// The cached credentials did not have appropriate scope, sign in again.
warn!("Credentials do not have appropriate scope, launching OAuth flow.");
acct = create_fxa_creds(cred_file, cfg.clone())?;
token = acct.get_oauth_token(SCOPES)?.unwrap();
}
}
let keys: HashMap<String, ScopedKeyData> = serde_json::from_str(&token.keys.unwrap())?;
let key = keys.get(SYNC_SCOPE).unwrap();
let client_init = Sync15StorageClientInit {
key_id: key.kid.clone(),
access_token: token.access_token.clone(),
tokenserver_url,
};
let root_sync_key = KeyBundle::from_ksync_base64(&key.k)?;
let mut engine = PasswordEngine::new(db_path, Some(encryption_key))?;
info!("Engine has {} passwords", engine.list()?.len());
if let Err(e) = show_all(&engine) {
warn!("Failed to show initial login data! {}", e);
}
loop {
match prompt_chars("[A]dd, [D]elete, [U]pdate, [S]ync, [V]iew, [R]eset, [W]ipe, [T]ouch, E[x]ecute SQL Query, or [Q]uit").unwrap_or('?') {
'A' | 'a' => {
info!("Adding new record");
let record = read_login();
if let Err(e) = engine.add(record) {
warn!("Failed to create record! {}", e);
}
}
'D' | 'd' => {
info!("Deleting record");
match prompt_record_id(&engine, "delete") {
Ok(Some(id)) => {
if let Err(e) = engine.delete(&id) {
warn!("Failed to delete record! {}", e);
}
}
Err(e) => {
warn!("Failed to get record ID! {}", e);
}
_ => {}
}
}
'U' | 'u' => {
info!("Updating record fields");
match prompt_record_id(&engine, "update") {
Err(e) => {
warn!("Failed to get record ID! {}", e);
}
Ok(Some(id)) => {
let mut login = match engine.get(&id) {
Ok(Some(login)) => login,
Ok(None) => {
warn!("No such login!");
continue
}
Err(e) => {
warn!("Failed to update record (get failed) {}", e);
continue;
}
};
update_login(&mut login);
if let Err(e) = engine.update(login) {
warn!("Failed to update record! {}", e);
}
}
_ => {}
}
}
'R' | 'r' => {
info!("Resetting client.");
if let Err(e) = engine.reset() {
warn!("Failed to reset! {}", e);
}
}
'W' | 'w' => {
info!("Wiping all data from client!");
if let Err(e) = engine.wipe() {
warn!("Failed to wipe! {}", e);
}
}
'S' | 's' => {
info!("Syncing!");
if let Err(e) = engine.sync(&client_init, &root_sync_key) {
warn!("Sync failed! {}", e);
warn!("BT: {:?}", e.backtrace());
} else {
info!("Sync was successful!");
}
}
'V' | 'v' => {
if let Err(e) = show_all(&engine) {
warn!("Failed to dump passwords? This is probably bad! {}", e);
}
}
'T' | 't' => {
info!("Touching (bumping use count) for a record");
match prompt_record_id(&engine, "update") {
Err(e) => {
warn!("Failed to get record ID! {}", e);
}
Ok(Some(id)) => {
if let Err(e) = engine.touch(&id) {
warn!("Failed to touch record! {}", e);
}
}
_ => {}
}
}
'x' | 'X' => {
info!("Running arbitrary SQL, there's no way this could go wrong!");
if let Some(sql) = prompt_string("SQL (one line only, press enter when done):\n") {
if let Err(e) = show_sql(&engine, &sql) {
warn!("Failed to run sql query: {}", e);
}
}
}
'Q' | 'q' => {
break;
}
'?' => {
continue;
}
c => {
println!("Unknown action '{}', exiting.", c);
break;
}
}
}
println!("Exiting (bye!)");
Ok(())
}
logins-sql/ffi/Cargo.toml (new file)
@ -0,0 +1,26 @@
[package]
name = "loginsql_ffi"
version = "0.1.0"
authors = ["Thom Chiovoloni <tchiovoloni@mozilla.com>"]
[lib]
name = "loginsapi_ffi"
crate-type = ["lib", "staticlib", "cdylib"]
[dependencies]
serde_json = "1.0.26"
log = "0.4.4"
url = "1.7.1"
[dependencies.rusqlite]
version = "0.14.0"
features = ["sqlcipher"]
[dependencies.logins-sql]
path = ".."
[dependencies.sync15-adapter]
path = "../../sync15-adapter"
[target.'cfg(target_os = "android")'.dependencies]
android_logger = "0.6.0"
logins-sql/ffi/src/error.rs (new file)
@ -0,0 +1,221 @@
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use std::{self, panic, thread, ptr, process};
use std::os::raw::c_char;
use std::ffi::CString;
use logins_sql::{
Result,
Error,
ErrorKind,
};
use sync15_adapter::{
ErrorKind as Sync15ErrorKind
};
#[inline]
fn string_to_c_char(r_string: String) -> *mut c_char {
CString::new(r_string).unwrap().into_raw()
}
// "Translate" in the next few functions refers to translating a rust Result
// type into a `(error, value)` tuple (well, sort of -- the `error` is taken as
// an out parameter and the value is all that's returned, but it's a conceptual
// tuple).
pub unsafe fn with_translated_result<F, T>(error: *mut ExternError, callback: F) -> *mut T
where F: FnOnce() -> Result<T> {
match try_call_with_result(error, callback) {
Some(v) => Box::into_raw(Box::new(v)),
None => ptr::null_mut(),
}
}
pub unsafe fn with_translated_void_result<F>(error: *mut ExternError, callback: F)
where F: FnOnce() -> Result<()> {
let _: Option<()> = try_call_with_result(error, callback);
}
pub unsafe fn with_translated_value_result<F, T>(error: *mut ExternError, callback: F) -> T
where
F: FnOnce() -> Result<T>,
T: Default,
{
try_call_with_result(error, callback).unwrap_or_default()
}
pub unsafe fn with_translated_string_result<F>(error: *mut ExternError, callback: F) -> *mut c_char
where F: FnOnce() -> Result<String> {
if let Some(s) = try_call_with_result(error, callback) {
string_to_c_char(s)
} else {
ptr::null_mut()
}
}
pub unsafe fn with_translated_opt_string_result<F>(error: *mut ExternError, callback: F) -> *mut c_char
where F: FnOnce() -> Result<Option<String>> {
if let Some(Some(s)) = try_call_with_result(error, callback) {
string_to_c_char(s)
} else {
// This is either an error case, or callback returned None.
ptr::null_mut()
}
}
unsafe fn try_call_with_result<R, F>(out_error: *mut ExternError, callback: F) -> Option<R>
where F: FnOnce() -> Result<R> {
// Ugh, using AssertUnwindSafe here is safe (in terms of memory safety),
// but a lie -- this code may behave improperly in the case that we unwind.
// That said, it's UB to unwind across the FFI boundary, and in practice
// weird things happen if we do (we aren't caught on the other side).
//
// We should eventually figure out a better story here, possibly the
// PasswordsEngine should get re-initialized if we hit this.
let res: thread::Result<(ExternError, Option<R>)> =
panic::catch_unwind(panic::AssertUnwindSafe(|| match callback() {
Ok(v) => (ExternError::default(), Some(v)),
Err(e) => (e.into(), None),
}));
match res {
Ok((err, o)) => {
if !out_error.is_null() {
let eref = &mut *out_error;
*eref = err;
} else {
error!("Fatal error: an error occurred but no error parameter was given {:?}", err);
process::abort();
}
o
}
Err(e) => {
if !out_error.is_null() {
let eref = &mut *out_error;
*eref = e.into();
} else {
let err: ExternError = e.into();
error!("Fatal error: a panic occurred but no error parameter was given {:?}", err);
process::abort();
}
None
}
}
}
/// C-compatible Error code. Negative codes are not expected to be handled by
/// the application, a code of zero indicates that no error occurred, and a
/// positive error code indicates an error that will likely need to be handled
/// by the application
#[repr(i32)]
#[derive(Clone, Copy, Debug)]
pub enum ExternErrorCode {
/// An unexpected error occurred which likely cannot be meaningfully handled
/// by the application.
OtherError = -2,
/// The rust code hit a `panic!` (or something equivalent, like `assert!`).
UnexpectedPanic = -1,
/// No error occurred.
NoError = 0,
/// Indicates the FxA credentials are invalid, and should be refreshed.
AuthInvalidError = 1,
// TODO: lockbox indicated that they would want to know when we fail to open
// the DB due to invalid key.
// https://github.com/mozilla/application-services/issues/231
}
/// Represents an error that occurred on the rust side. Many rust FFI functions take a
/// `*mut ExternError` as the last argument. This is an out parameter that indicates an
/// error that occurred during that function's execution (if any).
///
/// For functions that use this pattern, if the ExternError's message property is null, then no
/// error occurred. If the message is non-null then it contains a string description of the
/// error that occurred.
///
/// Important: This message is allocated on the heap and it is the consumer's responsibility to
/// free it!
///
/// While this pattern is not ergonomic in Rust, it offers two main benefits:
///
/// 1. It avoids defining a large number of `Result`-shaped types in the FFI consumer, as would
/// be required with something like an `struct ExternResult<T> { ok: *mut T, err:... }`
/// 2. It offers additional type safety over `struct ExternResult { ok: *mut c_void, err:... }`,
/// which helps avoid memory safety errors.
#[repr(C)]
#[derive(Debug)]
pub struct ExternError {
/// A string message, primarily intended for debugging. This will be null
/// in the case that no error occurred.
pub message: *mut c_char,
/// Error code.
/// - A code of 0 indicates no error
/// - A negative error code indicates an error which is not expected to be
/// handled by the application.
pub code: ExternErrorCode,
}
impl Default for ExternError {
fn default() -> ExternError {
ExternError {
message: ptr::null_mut(),
code: ExternErrorCode::NoError,
}
}
}
fn get_code(err: &Error) -> ExternErrorCode {
match err.kind() {
ErrorKind::SyncAdapterError(e) => {
error!("Sync error {:?}", e);
match e.kind() {
Sync15ErrorKind::TokenserverHttpError(401) => {
ExternErrorCode::AuthInvalidError
}
_ => ExternErrorCode::OtherError,
}
}
err => {
error!("Unexpected error: {:?}", err);
ExternErrorCode::OtherError
}
}
}
impl From<Error> for ExternError {
fn from(e: Error) -> ExternError {
let code = get_code(&e);
let message = string_to_c_char(e.to_string());
ExternError { message, code }
}
}
// This is the `Err` of std::thread::Result, which is what
// `panic::catch_unwind` returns.
impl From<Box<std::any::Any + Send + 'static>> for ExternError {
fn from(e: Box<std::any::Any + Send + 'static>) -> ExternError {
// The documentation suggests that it will usually be a str or String.
let message = if let Some(s) = e.downcast_ref::<&'static str>() {
string_to_c_char(s.to_string())
} else if let Some(s) = e.downcast_ref::<String>() {
string_to_c_char(s.clone())
} else {
// Note that it's important that this be allocated on the heap,
// since we'll free it later!
string_to_c_char("Unknown panic!".into())
};
ExternError {
code: ExternErrorCode::UnexpectedPanic,
message,
}
}
}
logins-sql/ffi/src/lib.rs (new file)
@ -0,0 +1,230 @@
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
extern crate serde_json;
extern crate rusqlite;
extern crate logins_sql;
extern crate sync15_adapter;
extern crate url;
#[macro_use] extern crate log;
#[cfg(target_os = "android")]
extern crate android_logger;
pub mod error;
use std::os::raw::c_char;
use std::ffi::{CString, CStr};
use error::{
ExternError,
with_translated_result,
with_translated_value_result,
with_translated_void_result,
with_translated_string_result,
with_translated_opt_string_result,
};
use logins_sql::{
Login,
PasswordEngine,
};
#[inline]
unsafe fn c_str_to_str<'a>(cstr: *const c_char) -> &'a str {
CStr::from_ptr(cstr).to_str().unwrap_or_default()
}
fn logging_init() {
#[cfg(target_os = "android")]
{
android_logger::init_once(
android_logger::Filter::default().with_min_level(log::Level::Trace),
Some("libloginsapi_ffi"));
debug!("Android logging should be hooked up!")
}
}
#[no_mangle]
pub unsafe extern "C" fn sync15_passwords_state_new(
db_path: *const c_char,
encryption_key: *const c_char,
error: *mut ExternError
) -> *mut PasswordEngine {
logging_init();
trace!("sync15_passwords_state_new");
with_translated_result(error, || {
let path = c_str_to_str(db_path);
let key = c_str_to_str(encryption_key);
let state = PasswordEngine::new(path, Some(key))?;
Ok(state)
})
}
// indirection to help `?` figure out the target error type
fn parse_url(url: &str) -> sync15_adapter::Result<url::Url> {
Ok(url::Url::parse(url)?)
}
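The `parse_url` wrapper exists purely so `?` has a concrete `Result` type to convert the `url::ParseError` into. A self-contained illustration of the same trick with a hypothetical error type (`MyError` and `parse_port` are stand-ins, not part of the crate):

```rust
use std::num::ParseIntError;

#[derive(Debug)]
enum MyError {
    BadNumber(ParseIntError),
}

impl From<ParseIntError> for MyError {
    fn from(e: ParseIntError) -> MyError {
        MyError::BadNumber(e)
    }
}

// Like `parse_url` above: the explicit return type tells `?` which
// `From` impl to use when converting the underlying error.
fn parse_port(s: &str) -> Result<u16, MyError> {
    Ok(s.parse::<u16>()?)
}

fn main() {
    assert_eq!(parse_port("8080").unwrap(), 8080);
    assert!(parse_port("not-a-port").is_err());
    println!("ok");
}
```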
#[no_mangle]
pub unsafe extern "C" fn sync15_passwords_sync(
state: *mut PasswordEngine,
key_id: *const c_char,
access_token: *const c_char,
sync_key: *const c_char,
tokenserver_url: *const c_char,
error: *mut ExternError
) {
trace!("sync15_passwords_sync");
with_translated_void_result(error, || {
assert!(!state.is_null(), "Null state passed to sync15_passwords_sync");
let state = &mut *state;
state.sync(
&sync15_adapter::Sync15StorageClientInit {
key_id: c_str_to_str(key_id).into(),
access_token: c_str_to_str(access_token).into(),
tokenserver_url: parse_url(c_str_to_str(tokenserver_url))?,
},
&sync15_adapter::KeyBundle::from_ksync_base64(
c_str_to_str(sync_key).into()
)?
)
})
}
#[no_mangle]
pub unsafe extern "C" fn sync15_passwords_touch(
state: *const PasswordEngine,
id: *const c_char,
error: *mut ExternError
) {
trace!("sync15_passwords_touch");
with_translated_void_result(error, || {
assert!(!state.is_null(), "Null state passed to sync15_passwords_touch");
let state = &*state;
state.touch(c_str_to_str(id))
})
}
#[no_mangle]
pub unsafe extern "C" fn sync15_passwords_delete(
state: *const PasswordEngine,
id: *const c_char,
error: *mut ExternError
) -> bool {
trace!("sync15_passwords_delete");
with_translated_value_result(error, || {
assert!(!state.is_null(), "Null state passed to sync15_passwords_delete");
let state = &*state;
state.delete(c_str_to_str(id))
})
}
#[no_mangle]
pub unsafe extern "C" fn sync15_passwords_wipe(
state: *const PasswordEngine,
error: *mut ExternError
) {
trace!("sync15_passwords_wipe");
with_translated_void_result(error, || {
assert!(!state.is_null(), "Null state passed to sync15_passwords_wipe");
let state = &*state;
state.wipe()
})
}
#[no_mangle]
pub unsafe extern "C" fn sync15_passwords_reset(
state: *const PasswordEngine,
error: *mut ExternError
) {
trace!("sync15_passwords_reset");
with_translated_void_result(error, || {
assert!(!state.is_null(), "Null state passed to sync15_passwords_reset");
let state = &*state;
state.reset()
})
}
#[no_mangle]
pub unsafe extern "C" fn sync15_passwords_get_all(
state: *const PasswordEngine,
error: *mut ExternError
) -> *mut c_char {
trace!("sync15_passwords_get_all");
with_translated_string_result(error, || {
assert!(!state.is_null(), "Null state passed to sync15_passwords_get_all");
let state = &*state;
let all_passwords = state.list()?;
let result = serde_json::to_string(&all_passwords)?;
Ok(result)
})
}
#[no_mangle]
pub unsafe extern "C" fn sync15_passwords_get_by_id(
state: *const PasswordEngine,
id: *const c_char,
error: *mut ExternError
) -> *mut c_char {
trace!("sync15_passwords_get_by_id");
with_translated_opt_string_result(error, || {
assert!(!state.is_null(), "Null state passed to sync15_passwords_get_by_id");
let state = &*state;
if let Some(password) = state.get(c_str_to_str(id))? {
Ok(Some(serde_json::to_string(&password)?))
} else {
Ok(None)
}
})
}
#[no_mangle]
pub unsafe extern "C" fn sync15_passwords_add(
state: *const PasswordEngine,
record_json: *const c_char,
error: *mut ExternError
) -> *mut c_char {
trace!("sync15_passwords_add");
with_translated_string_result(error, || {
assert!(!state.is_null(), "Null state passed to sync15_passwords_add");
let state = &*state;
let mut parsed: serde_json::Value = serde_json::from_str(c_str_to_str(record_json))?;
if parsed.get("id").is_none() {
// Note: we replace this with a real guid in `db.rs`.
parsed["id"] = serde_json::Value::String(String::default());
}
let login: Login = serde_json::from_value(parsed)?;
state.add(login)
})
}
#[no_mangle]
pub unsafe extern "C" fn sync15_passwords_update(
state: *const PasswordEngine,
record_json: *const c_char,
error: *mut ExternError
) {
trace!("sync15_passwords_update");
with_translated_void_result(error, || {
assert!(!state.is_null(), "Null state passed to sync15_passwords_update");
let state = &*state;
let parsed: Login = serde_json::from_str(c_str_to_str(record_json))?;
state.update(parsed)
});
}
#[no_mangle]
pub unsafe extern "C" fn sync15_passwords_destroy_string(s: *mut c_char) {
if !s.is_null() {
drop(CString::from_raw(s));
}
}
#[no_mangle]
pub unsafe extern "C" fn sync15_passwords_state_destroy(obj: *mut PasswordEngine) {
if !obj.is_null() {
drop(Box::from_raw(obj));
}
}
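The two destroy functions above define the ownership contract for this FFI: every heap-allocated string or engine handed across the boundary must come back through exactly one matching `destroy` call. A minimal sketch of the string round trip, assuming the same `CString::into_raw`/`from_raw` pairing (helper names here are illustrative):

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// Mirrors the contract above: strings handed across the FFI are created
// with CString::into_raw and must be released via from_raw exactly once.
fn make_ffi_string(s: &str) -> *mut c_char {
    CString::new(s).unwrap().into_raw()
}

unsafe fn destroy_ffi_string(s: *mut c_char) {
    if !s.is_null() {
        drop(CString::from_raw(s));
    }
}

fn main() {
    let p = make_ffi_string("hunter2");
    // Freeing with libc's free(), or calling destroy twice, would be UB;
    // the pointer must return through CString::from_raw.
    unsafe { destroy_ffi_string(p) };
    println!("ok");
}
```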

logins-sql/src/db.rs (new file, 764 lines)

@@ -0,0 +1,764 @@
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use rusqlite::{Connection, types::{ToSql, FromSql}, Row, limits};
use std::time::SystemTime;
use std::path::Path;
use std::collections::HashSet;
use error::*;
use schema;
use login::{LocalLogin, MirrorLogin, Login, SyncStatus, SyncLoginData};
use sync::{self, ServerTimestamp, IncomingChangeset, Store, OutgoingChangeset, Payload};
use update_plan::UpdatePlan;
use util;
pub struct LoginDb {
pub db: Connection,
pub max_var_count: usize,
}
// In PRAGMA foo='bar', `'bar'` must be a constant string (it cannot be a
// bound parameter), so we need to escape manually. According to
// https://www.sqlite.org/faq.html, the only character that must be escaped is
// the single quote, which is escaped by placing two single quotes in a row.
fn escape_string_for_pragma(s: &str) -> String {
s.replace("'", "''")
}
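The escaping rule can be demonstrated standalone: per the SQLite FAQ, a single quote inside a string literal is escaped by doubling it, which is all a `PRAGMA key = '...'` statement needs. (The function below duplicates `escape_string_for_pragma` so the example runs on its own.)

```rust
// Same rule as escape_string_for_pragma above: SQLite string literals
// escape a single quote by doubling it.
fn escape_for_pragma(s: &str) -> String {
    s.replace("'", "''")
}

fn main() {
    assert_eq!(escape_for_pragma("hunter2"), "hunter2");
    assert_eq!(escape_for_pragma("it's"), "it''s");
    assert_eq!(
        format!("PRAGMA key = '{}';", escape_for_pragma("a'b")),
        "PRAGMA key = 'a''b';"
    );
    println!("ok");
}
```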
impl LoginDb {
pub fn with_connection(db: Connection, encryption_key: Option<&str>) -> Result<Self> {
#[cfg(test)] {
util::init_test_logging();
}
let encryption_pragmas = if let Some(key) = encryption_key {
// TODO: We probably should support providing a key that doesn't go
// through PBKDF2 (e.g. pass it in as hex, or use sqlite3_key
// directly; see the "Raw Key Data" example at
// https://www.zetetic.net/sqlcipher/sqlcipher-api/#key). Note that
// this would be required to open existing iOS sqlcipher databases.
format!("PRAGMA key = '{}';", escape_string_for_pragma(key))
} else {
"".to_owned()
};
// `temp_store = 2` is required on Android to force the DB to keep temp
// files in memory, since on Android there's no tmp partition. See
// https://github.com/mozilla/mentat/issues/505. Ideally we'd only
// do this on Android, or allow caller to configure it.
let initial_pragmas = format!("
{}
PRAGMA temp_store = 2;
", encryption_pragmas);
db.execute_batch(&initial_pragmas)?;
let max_var_limit = db.limit(limits::Limit::SQLITE_LIMIT_VARIABLE_NUMBER);
// This is an i32, so check before casting it. We also disallow 0
// because it's not clear how we could ever possibly handle it.
// (Realistically, we'll also fail to handle low values, but oh well).
assert!(max_var_limit > 0,
"SQLITE_LIMIT_VARIABLE_NUMBER must not be 0 or negative!");
let mut logins = Self {
db,
max_var_count: max_var_limit as usize
};
schema::init(&mut logins)?;
Ok(logins)
}
pub fn open(path: impl AsRef<Path>, encryption_key: Option<&str>) -> Result<Self> {
Ok(Self::with_connection(Connection::open(path)?, encryption_key)?)
}
pub fn open_in_memory(encryption_key: Option<&str>) -> Result<Self> {
Ok(Self::with_connection(Connection::open_in_memory()?, encryption_key)?)
}
pub fn execute_all(&self, stmts: &[&str]) -> Result<()> {
for sql in stmts {
self.execute(sql, &[])?;
}
Ok(())
}
#[inline]
pub fn execute(&self, stmt: &str, params: &[(&str, &ToSql)]) -> Result<usize> {
Ok(self.do_exec(stmt, params, false)?)
}
#[inline]
pub fn execute_cached(&self, stmt: &str, params: &[(&str, &ToSql)]) -> Result<usize> {
Ok(self.do_exec(stmt, params, true)?)
}
fn do_exec(&self, sql: &str, params: &[(&str, &ToSql)], cache: bool) -> Result<usize> {
let res = if cache {
self.db.prepare_cached(sql)
.and_then(|mut s| s.execute_named(params))
} else {
self.db.execute_named(sql, params)
};
if let Err(e) = &res {
warn!("Error running SQL {}. Statement: {:?}", e, sql);
}
Ok(res?)
}
pub fn query_one<T: FromSql>(&self, sql: &str) -> Result<T> {
let res: T = self.db.query_row(sql, &[], |row| row.get(0))?;
Ok(res)
}
// The type returned by prepare/prepare_cached are different, but must live
// to the end of the function, so (AFAICT) it's difficult/impossible to
// remove the duplication between query_row/query_row_cached.
pub fn query_row<T>(&self, sql: &str, args: &[(&str, &ToSql)], f: impl FnOnce(&Row) -> Result<T>) -> Result<Option<T>> {
let mut stmt = self.db.prepare(sql)?;
let res = stmt.query_named(args);
if let Err(e) = &res {
warn!("Error executing query: {}. Query: {}", e, sql);
}
let mut rows = res?;
match rows.next() {
Some(result) => Ok(Some(f(&result?)?)),
None => Ok(None),
}
}
pub fn query_row_cached<T>(&self, sql: &str, args: &[(&str, &ToSql)], f: impl FnOnce(&Row) -> Result<T>) -> Result<Option<T>> {
let mut stmt = self.db.prepare_cached(sql)?;
let res = stmt.query_named(args);
if let Err(e) = &res {
warn!("Error executing query: {}. Query: {}", e, sql);
}
let mut rows = res?;
match rows.next() {
Some(result) => Ok(Some(f(&result?)?)),
None => Ok(None),
}
}
}
// login specific stuff.
impl LoginDb {
fn mark_as_synchronized(&mut self, guids: &[&str], ts: ServerTimestamp) -> Result<()> {
util::each_chunk(guids, self.max_var_count, |chunk, _| {
self.db.execute(
&format!("DELETE FROM loginsM WHERE guid IN ({vars})",
vars = util::sql_vars(chunk.len())),
chunk
)?;
self.db.execute(
&format!("
INSERT OR IGNORE INTO loginsM (
{common_cols}, is_overridden, server_modified
)
SELECT {common_cols}, 0, {modified_ms_i64}
FROM loginsL
WHERE is_deleted = 0 AND guid IN ({vars})",
common_cols = schema::COMMON_COLS,
modified_ms_i64 = ts.as_millis() as i64,
vars = util::sql_vars(chunk.len())),
chunk
)?;
self.db.execute(
&format!("DELETE FROM loginsL WHERE guid IN ({vars})",
vars = util::sql_vars(chunk.len())),
chunk
)?;
Ok(())
})?;
self.set_last_sync(ts)?;
Ok(())
}
// Fetch all the data for the provided IDs.
// TODO: Might be better taking a fn instead of returning all of it... But that func will likely
// want to insert stuff while we're doing this so ugh.
fn fetch_login_data(&self, records: &[(sync::Payload, ServerTimestamp)]) -> Result<Vec<SyncLoginData>> {
let mut sync_data = Vec::with_capacity(records.len());
{
let mut seen_ids: HashSet<String> = HashSet::with_capacity(records.len());
for incoming in records.iter() {
if seen_ids.contains(&incoming.0.id) {
throw!(ErrorKind::DuplicateGuid(incoming.0.id.to_string()))
}
seen_ids.insert(incoming.0.id.clone());
sync_data.push(SyncLoginData::from_payload(incoming.0.clone(), incoming.1)?);
}
}
util::each_chunk_mapped(&records, self.max_var_count, |r| &r.0.id as &ToSql, |chunk, offset| {
// pairs the bound parameter for the guid with an integer index.
let values_with_idx = util::repeat_display(chunk.len(), ",", |i, f| write!(f, "({},?)", i + offset));
let query = format!("
WITH to_fetch(guid_idx, fetch_guid) AS (VALUES {vals})
SELECT
{common_cols},
is_overridden,
server_modified,
NULL as local_modified,
NULL as is_deleted,
NULL as sync_status,
1 as is_mirror,
to_fetch.guid_idx as guid_idx
FROM loginsM
JOIN to_fetch
ON loginsM.guid = to_fetch.fetch_guid
UNION ALL
SELECT
{common_cols},
NULL as is_overridden,
NULL as server_modified,
local_modified,
is_deleted,
sync_status,
0 as is_mirror,
to_fetch.guid_idx as guid_idx
FROM loginsL
JOIN to_fetch
ON loginsL.guid = to_fetch.fetch_guid",
// give each VALUES item 2 entries, an index and the parameter.
vals = values_with_idx,
common_cols = schema::COMMON_COLS,
);
let mut stmt = self.db.prepare(&query)?;
let rows = stmt.query_and_then(chunk, |row| {
let guid_idx_i = row.get::<_, i64>("guid_idx");
// Hitting this means our math is wrong...
assert!(guid_idx_i >= 0);
let guid_idx = guid_idx_i as usize;
let is_mirror: bool = row.get("is_mirror");
if is_mirror {
sync_data[guid_idx].set_mirror(MirrorLogin::from_row(row)?)?;
} else {
sync_data[guid_idx].set_local(LocalLogin::from_row(row)?)?;
}
Ok(())
})?;
// `rows` is an Iterator<Item = Result<()>>, so we need to collect to handle the errors.
rows.collect::<Result<_>>()?;
Ok(())
})?;
Ok(sync_data)
}
// It would be nice if this were a batch-ish api (e.g. takes a slice of records and finds dupes
// for each one if they exist)... I can't think of how to write that query, though.
fn find_dupe(&self, l: &Login) -> Result<Option<Login>> {
let form_submit_host_port = l.form_submit_url.as_ref().and_then(|s| util::url_host_port(&s));
let args = &[
(":hostname", &l.hostname as &ToSql),
(":http_realm", &l.http_realm as &ToSql),
(":username", &l.username as &ToSql),
(":form_submit", &form_submit_host_port as &ToSql),
];
let mut query = format!("
SELECT {common}
FROM loginsL
WHERE hostname IS :hostname
AND httpRealm IS :http_realm
AND username IS :username",
common = schema::COMMON_COLS,
);
if form_submit_host_port.is_some() {
// Stolen from iOS
query += " AND (formSubmitURL = '' OR (instr(formSubmitURL, :form_submit) > 0))";
} else {
query += " AND formSubmitURL IS :form_submit"
}
Ok(self.query_row(&query, args, |row| Login::from_row(row))?)
}
pub fn get_all(&self) -> Result<Vec<Login>> {
let mut stmt = self.db.prepare_cached(&GET_ALL_SQL)?;
let rows = stmt.query_and_then(&[], Login::from_row)?;
rows.collect::<Result<_>>()
}
pub fn get_by_id(&self, id: &str) -> Result<Option<Login>> {
// Probably should be cached...
self.query_row(&GET_BY_GUID_SQL,
&[(":guid", &id as &ToSql)],
Login::from_row)
}
pub fn touch(&self, id: &str) -> Result<()> {
self.ensure_local_overlay_exists(id)?;
self.mark_mirror_overridden(id)?;
let now_ms = util::system_time_ms_i64(SystemTime::now());
// As on iOS, just using a record doesn't flip its status to changed.
// TODO: this might be wrong for lockbox!
self.execute_cached("
UPDATE loginsL
SET timeLastUsed = :now_millis,
timesUsed = timesUsed + 1,
local_modified = :now_millis
WHERE guid = :guid
AND is_deleted = 0",
&[(":now_millis", &now_ms as &ToSql),
(":guid", &id as &ToSql)]
)?;
Ok(())
}
pub fn add(&self, mut login: Login) -> Result<Login> {
login.check_valid()?;
let now_ms = util::system_time_ms_i64(SystemTime::now());
// Allow an empty GUID to be passed to indicate that we should generate
// one. (Note that the FFI does not require that the `id` field be
// present in the JSON, and replaces it with an empty string if missing).
if login.id.is_empty() {
// Our FFI handles panics so this is fine. In practice there's not
// much we can do here. Using a CSPRNG for this is probably
// unnecessary, so we likely could fall back to something less
// fallible eventually, but it's unlikely very much else will work
// if this fails, so it doesn't matter much.
login.id = sync::util::random_guid()
.expect("Failed to generate random bytes for GUID");
}
// Fill in default metadata.
// TODO: allow this to be provided for testing?
login.time_created = now_ms;
login.time_password_changed = now_ms;
login.time_last_used = now_ms;
login.times_used = 1;
let sql = format!("
INSERT OR IGNORE INTO loginsL (
hostname,
httpRealm,
formSubmitURL,
usernameField,
passwordField,
timesUsed,
username,
password,
guid,
timeCreated,
timeLastUsed,
timePasswordChanged,
local_modified,
is_deleted,
sync_status
) VALUES (
:hostname,
:http_realm,
:form_submit_url,
:username_field,
:password_field,
:times_used,
:username,
:password,
:guid,
:time_created,
:time_last_used,
:time_password_changed,
:local_modified,
0, -- is_deleted
{new} -- sync_status
)", new = SyncStatus::New as u8);
let rows_changed = self.execute(&sql, &[
(":hostname", &login.hostname as &ToSql),
(":http_realm", &login.http_realm as &ToSql),
(":form_submit_url", &login.form_submit_url as &ToSql),
(":username_field", &login.username_field as &ToSql),
(":password_field", &login.password_field as &ToSql),
(":username", &login.username as &ToSql),
(":password", &login.password as &ToSql),
(":guid", &login.id as &ToSql),
(":time_created", &login.time_created as &ToSql),
(":times_used", &login.times_used as &ToSql),
(":time_last_used", &login.time_last_used as &ToSql),
(":time_password_changed", &login.time_password_changed as &ToSql),
(":local_modified", &now_ms as &ToSql)
])?;
if rows_changed == 0 {
error!("Record {:?} already exists (use `update` to update records, not add)",
login.id);
throw!(ErrorKind::DuplicateGuid(login.id));
}
Ok(login)
}
pub fn update(&self, login: Login) -> Result<()> {
login.check_valid()?;
// Note: These fail with DuplicateGuid if the record doesn't exist.
self.ensure_local_overlay_exists(login.guid_str())?;
self.mark_mirror_overridden(login.guid_str())?;
let now_ms = util::system_time_ms_i64(SystemTime::now());
let sql = format!("
UPDATE loginsL
SET local_modified = :now_millis,
timeLastUsed = :now_millis,
-- Only update timePasswordChanged if, well, the password changed.
timePasswordChanged = (CASE
WHEN password = :password
THEN timePasswordChanged
ELSE :now_millis
END),
httpRealm = :http_realm,
formSubmitURL = :form_submit_url,
usernameField = :username_field,
passwordField = :password_field,
timesUsed = timesUsed + 1,
username = :username,
password = :password,
hostname = :hostname,
-- leave New records as they are, otherwise update them to `changed`
sync_status = max(sync_status, {changed})
WHERE guid = :guid",
changed = SyncStatus::Changed as u8
);
self.db.execute_named(&sql, &[
(":hostname", &login.hostname as &ToSql),
(":username", &login.username as &ToSql),
(":password", &login.password as &ToSql),
(":http_realm", &login.http_realm as &ToSql),
(":form_submit_url", &login.form_submit_url as &ToSql),
(":username_field", &login.username_field as &ToSql),
(":password_field", &login.password_field as &ToSql),
(":guid", &login.id as &ToSql),
(":now_millis", &now_ms as &ToSql),
])?;
Ok(())
}
pub fn exists(&self, id: &str) -> Result<bool> {
Ok(self.query_row("
SELECT EXISTS(
SELECT 1 FROM loginsL
WHERE guid = :guid AND is_deleted = 0
UNION ALL
SELECT 1 FROM loginsM
WHERE guid = :guid AND is_overridden IS NOT 1
)",
&[(":guid", &id as &ToSql)],
|row| Ok(row.get(0))
)?.unwrap_or(false))
}
/// Delete the record with the provided id. Returns true if the record
/// existed already.
pub fn delete(&self, id: &str) -> Result<bool> {
let exists = self.exists(id)?;
let now_ms = util::system_time_ms_i64(SystemTime::now());
// Directly delete IDs that have not yet been synced to the server
self.execute(&format!("
DELETE FROM loginsL
WHERE guid = :guid
AND sync_status = {status_new}",
status_new = SyncStatus::New as u8),
&[(":guid", &id as &ToSql)]
)?;
// For IDs that have, mark is_deleted and clear sensitive fields
self.execute(&format!("
UPDATE loginsL
SET local_modified = :now_ms,
sync_status = {status_changed},
is_deleted = 1,
password = '',
hostname = '',
username = ''
WHERE guid = :guid",
status_changed = SyncStatus::Changed as u8),
&[(":now_ms", &now_ms as &ToSql), (":guid", &id as &ToSql)])?;
// Mark the mirror as overridden
self.execute("UPDATE loginsM SET is_overridden = 1 WHERE guid = :guid",
&[(":guid", &id as &ToSql)])?;
// If we don't have a local record for this ID, but do have it in the mirror
// insert a tombstone.
self.execute(&format!("
INSERT OR IGNORE INTO loginsL
(guid, local_modified, is_deleted, sync_status, hostname, timeCreated, timePasswordChanged, password, username)
SELECT guid, :now_ms, 1, {changed}, '', timeCreated, :now_ms, '', ''
FROM loginsM
WHERE guid = :guid",
changed = SyncStatus::Changed as u8),
&[(":now_ms", &now_ms as &ToSql),
(":guid", &id as &ToSql)])?;
Ok(exists)
}
fn mark_mirror_overridden(&self, guid: &str) -> Result<()> {
self.execute_cached("
UPDATE loginsM SET
is_overridden = 1
WHERE guid = :guid
", &[(":guid", &guid as &ToSql)])?;
Ok(())
}
fn ensure_local_overlay_exists(&self, guid: &str) -> Result<()> {
let already_have_local: bool = self.query_row_cached(
"SELECT EXISTS(SELECT 1 FROM loginsL WHERE guid = :guid)",
&[(":guid", &guid as &ToSql)],
|row| Ok(row.get(0))
)?.unwrap_or_default();
if already_have_local {
return Ok(())
}
debug!("No overlay; cloning one for {:?}.", guid);
let changed = self.clone_mirror_to_overlay(guid)?;
if changed == 0 {
error!("Failed to create local overlay for GUID {:?}.", guid);
throw!(ErrorKind::NoSuchRecord(guid.to_owned()));
}
Ok(())
}
fn clone_mirror_to_overlay(&self, guid: &str) -> Result<usize> {
self.execute_cached(
&*CLONE_SINGLE_MIRROR_SQL,
&[(":guid", &guid as &ToSql)]
)
}
pub fn reset(&self) -> Result<()> {
info!("Executing reset on password store!");
self.execute_all(&[
&*CLONE_ENTIRE_MIRROR_SQL,
"DELETE FROM loginsM",
&format!("UPDATE loginsL SET sync_status = {}", SyncStatus::New as u8),
])?;
self.set_last_sync(ServerTimestamp(0.0))?;
// TODO: Should we clear global_state?
Ok(())
}
pub fn wipe(&self) -> Result<()> {
info!("Executing wipe on password store!");
let now_ms = util::system_time_ms_i64(SystemTime::now());
self.execute(&format!("DELETE FROM loginsL WHERE sync_status = {new}", new = SyncStatus::New as u8), &[])?;
self.execute(
&format!("
UPDATE loginsL
SET local_modified = :now_ms,
sync_status = {changed},
is_deleted = 1,
password = '',
hostname = '',
username = ''
WHERE is_deleted = 0",
changed = SyncStatus::Changed as u8),
&[(":now_ms", &now_ms as &ToSql)])?;
self.execute("UPDATE loginsM SET is_overridden = 1", &[])?;
self.execute(
&format!("
INSERT OR IGNORE INTO loginsL
(guid, local_modified, is_deleted, sync_status, hostname, timeCreated, timePasswordChanged, password, username)
SELECT guid, :now_ms, 1, {changed}, '', timeCreated, :now_ms, '', ''
FROM loginsM",
changed = SyncStatus::Changed as u8),
&[(":now_ms", &now_ms as &ToSql)])?;
Ok(())
}
fn reconcile(&self, records: Vec<SyncLoginData>, server_now: ServerTimestamp) -> Result<UpdatePlan> {
let mut plan = UpdatePlan::default();
for mut record in records {
debug!("Processing remote change {}", record.guid());
let upstream = if let Some(inbound) = record.inbound.0.take() {
inbound
} else {
debug!("Processing inbound deletion (always prefer)");
plan.plan_delete(record.guid.clone());
continue;
};
let upstream_time = record.inbound.1;
match (record.mirror.take(), record.local.take()) {
(Some(mirror), Some(local)) => {
debug!("  Conflict between remote and local, resolving with 3WM");
plan.plan_three_way_merge(
local, mirror, upstream, upstream_time, server_now);
}
(Some(_mirror), None) => {
debug!(" Forwarding mirror to remote");
plan.plan_mirror_update(upstream, upstream_time);
}
(None, Some(local)) => {
debug!(" Conflicting record without shared parent, using newer");
plan.plan_two_way_merge(&local.login, (upstream, upstream_time));
}
(None, None) => {
if let Some(dupe) = self.find_dupe(&upstream)? {
debug!("  Incoming record {} is a dupe of local record {}", upstream.id, dupe.id);
plan.plan_two_way_merge(&dupe, (upstream, upstream_time));
} else {
debug!(" No dupe found, inserting into mirror");
plan.plan_mirror_insert(upstream, upstream_time, false);
}
}
}
}
Ok(plan)
}
fn execute_plan(&mut self, plan: UpdatePlan) -> Result<()> {
let mut tx = self.db.transaction()?;
plan.execute(&mut tx, self.max_var_count)?;
tx.commit()?;
Ok(())
}
pub fn fetch_outgoing(&self, st: ServerTimestamp) -> Result<OutgoingChangeset> {
let mut outgoing = OutgoingChangeset::new("passwords".into(), st);
let mut stmt = self.db.prepare_cached(&format!("
SELECT * FROM loginsL
WHERE sync_status IS NOT {synced}",
synced = SyncStatus::Synced as u8
))?;
let rows = stmt.query_and_then(&[], |row| {
Ok(if row.get::<_, bool>("is_deleted") {
Payload::new_tombstone(row.get_checked::<_, String>("guid")?)
} else {
let login = Login::from_row(row)?;
Payload::from_record(login)?
})
})?;
outgoing.changes = rows.collect::<Result<_>>()?;
Ok(outgoing)
}
fn do_apply_incoming(
&mut self,
inbound: IncomingChangeset
) -> Result<OutgoingChangeset> {
let data = self.fetch_login_data(&inbound.changes)?;
let plan = self.reconcile(data, inbound.timestamp)?;
self.execute_plan(plan)?;
Ok(self.fetch_outgoing(inbound.timestamp)?)
}
fn put_meta(&self, key: &str, value: &ToSql) -> Result<()> {
self.execute_cached(
"REPLACE INTO loginsSyncMeta (key, value) VALUES (:key, :value)",
&[(":key", &key as &ToSql), (":value", value)]
)?;
Ok(())
}
fn get_meta<T: FromSql>(&self, key: &str) -> Result<Option<T>> {
self.query_row_cached(
"SELECT value FROM loginsSyncMeta WHERE key = :key",
&[(":key", &key as &ToSql)],
|row| Ok(row.get_checked(0)?)
)
}
pub fn set_last_sync(&self, last_sync: ServerTimestamp) -> Result<()> {
debug!("Updating last sync to {}", last_sync);
let last_sync_millis = last_sync.as_millis() as i64;
self.put_meta(schema::LAST_SYNC_META_KEY, &last_sync_millis)
}
pub fn set_global_state(&self, global_state: &str) -> Result<()> {
self.put_meta(schema::GLOBAL_STATE_META_KEY, &global_state)
}
pub fn get_last_sync(&self) -> Result<Option<ServerTimestamp>> {
Ok(self.get_meta::<i64>(schema::LAST_SYNC_META_KEY)?
.map(|millis| ServerTimestamp(millis as f64 / 1000.0)))
}
pub fn get_global_state(&self) -> Result<Option<String>> {
self.get_meta::<String>(schema::GLOBAL_STATE_META_KEY)
}
}
impl Store for LoginDb {
type Error = Error;
fn apply_incoming(
&mut self,
inbound: IncomingChangeset
) -> Result<OutgoingChangeset> {
self.do_apply_incoming(inbound)
}
fn sync_finished(
&mut self,
new_timestamp: ServerTimestamp,
records_synced: &[String],
) -> Result<()> {
self.mark_as_synchronized(
&records_synced.iter().map(|r| r.as_str()).collect::<Vec<_>>(),
new_timestamp
)
}
}
lazy_static! {
static ref GET_ALL_SQL: String = format!("
SELECT {common_cols} FROM loginsL WHERE is_deleted = 0
UNION ALL
SELECT {common_cols} FROM loginsM WHERE is_overridden = 0
",
common_cols = schema::COMMON_COLS,
);
static ref GET_BY_GUID_SQL: String = format!("
SELECT {common_cols}
FROM loginsL
WHERE is_deleted = 0
AND guid = :guid
UNION ALL
SELECT {common_cols}
FROM loginsM
WHERE is_overridden IS NOT 1
AND guid = :guid
ORDER BY hostname ASC
LIMIT 1
",
common_cols = schema::COMMON_COLS,
);
static ref CLONE_ENTIRE_MIRROR_SQL: String = format!("
INSERT OR IGNORE INTO loginsL ({common_cols}, local_modified, is_deleted, sync_status)
SELECT {common_cols}, NULL AS local_modified, 0 AS is_deleted, 0 AS sync_status
FROM loginsM",
common_cols = schema::COMMON_COLS,
);
static ref CLONE_SINGLE_MIRROR_SQL: String = format!(
"{} WHERE guid = :guid",
&*CLONE_ENTIRE_MIRROR_SQL,
);
}
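Several methods above (`mark_as_synchronized`, `fetch_login_data`) chunk their GUID lists so that no single statement binds more parameters than `SQLITE_LIMIT_VARIABLE_NUMBER` allows. A standalone sketch of that pattern — `sql_vars` here is a stand-in for the real `util::sql_vars` helper:

```rust
// A sketch of the chunking pattern used by mark_as_synchronized: never bind
// more parameters than SQLITE_LIMIT_VARIABLE_NUMBER allows in one statement.
fn sql_vars(n: usize) -> String {
    std::iter::repeat("?").take(n).collect::<Vec<_>>().join(",")
}

fn chunked_queries(guids: &[&str], max_vars: usize) -> Vec<String> {
    guids
        .chunks(max_vars)
        .map(|chunk| {
            format!("DELETE FROM loginsM WHERE guid IN ({})", sql_vars(chunk.len()))
        })
        .collect()
}

fn main() {
    let guids = ["a", "b", "c", "d", "e"];
    // With a (tiny, illustrative) limit of 2 variables per statement,
    // five guids become three DELETE statements.
    let queries = chunked_queries(&guids, 2);
    assert_eq!(queries.len(), 3);
    assert_eq!(queries[0], "DELETE FROM loginsM WHERE guid IN (?,?)");
    assert_eq!(queries[2], "DELETE FROM loginsM WHERE guid IN (?)");
    println!("ok");
}
```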

logins-sql/src/engine.rs (new file, 296 lines)

@@ -0,0 +1,296 @@
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use login::Login;
use error::*;
use sync::{self, Sync15StorageClient, Sync15StorageClientInit, GlobalState, KeyBundle};
use db::LoginDb;
use std::path::Path;
use serde_json;
use rusqlite;
#[derive(Debug)]
pub(crate) struct SyncInfo {
pub state: GlobalState,
pub client: Sync15StorageClient,
// Used so that we know whether or not we need to re-initialize `client`
pub last_client_init: Sync15StorageClientInit,
}
// This isn't really an engine in the firefox sync15 desktop sense -- it's
// really a bundle of state that contains the sync storage client, the sync
// state, and the login DB.
pub struct PasswordEngine {
sync: Option<SyncInfo>,
db: LoginDb,
}
impl PasswordEngine {
pub fn new(path: impl AsRef<Path>, encryption_key: Option<&str>) -> Result<Self> {
let db = LoginDb::open(path, encryption_key)?;
Ok(Self { db, sync: None })
}
pub fn new_in_memory(encryption_key: Option<&str>) -> Result<Self> {
let db = LoginDb::open_in_memory(encryption_key)?;
Ok(Self { db, sync: None })
}
pub fn list(&self) -> Result<Vec<Login>> {
self.db.get_all()
}
pub fn get(&self, id: &str) -> Result<Option<Login>> {
self.db.get_by_id(id)
}
pub fn touch(&self, id: &str) -> Result<()> {
self.db.touch(id)
}
pub fn delete(&self, id: &str) -> Result<bool> {
self.db.delete(id)
}
pub fn wipe(&self) -> Result<()> {
self.db.wipe()
}
pub fn reset(&self) -> Result<()> {
self.db.reset()
}
pub fn update(&self, login: Login) -> Result<()> {
self.db.update(login)
}
pub fn add(&self, login: Login) -> Result<String> {
// Just return the record's ID (which we may have generated).
self.db.add(login).map(|record| record.id)
}
// This is basically exposed just for sync_pass_sql, but it doesn't seem
// unreasonable.
pub fn conn(&self) -> &rusqlite::Connection {
&self.db.db
}
pub fn sync(
&mut self,
storage_init: &Sync15StorageClientInit,
root_sync_key: &KeyBundle
) -> Result<()> {
// Note: If `to_ready` (or anything else with a ?) fails below, this
// `take()` means we end up with `state.sync.is_none()`, which means the
// next sync will redownload meta/global, crypto/keys, etc. without
// needing to. Apparently this is both okay and by design.
let maybe_sync_info = self.sync.take().map(Ok);
// `maybe_sync_info` is None if we haven't called `sync` since
// restarting the browser.
//
// If this is the case we may or may not have a persisted version of
// GlobalState stored in the DB (we will iff we've synced before, unless
// we've `reset()`, which clears it out).
let mut sync_info = maybe_sync_info.unwrap_or_else(|| -> Result<SyncInfo> {
info!("First time through since unlock. Trying to load persisted global state.");
let state = if let Some(persisted_global_state) = self.db.get_global_state()? {
serde_json::from_str::<GlobalState>(&persisted_global_state)
.unwrap_or_else(|_| {
// Don't log the error since it might contain sensitive
// info like keys (the JSON does, after all).
error!("Failed to parse GlobalState from JSON! Falling back to default");
// Unstick ourselves by using the default state.
GlobalState::default()
})
} else {
info!("No previously persisted global state, using default");
GlobalState::default()
};
let client = Sync15StorageClient::new(storage_init.clone())?;
Ok(SyncInfo {
state,
client,
last_client_init: storage_init.clone(),
})
})?;
// If the options passed for initialization of the storage client aren't
// the same as the ones we used last time, reinitialize it. (Note that
// we could avoid the comparison in the case where we had `None` in
// `state.sync` before, but this probably doesn't matter).
//
// It's a little confusing that we do things this way (transparently
// re-initialize the client), but it reduces the size of the API surface
// exposed over the FFI, and simplifies the states that the client code
// has to consider (as far as it's concerned it just has to pass
// `current` values for these things, and not worry about having to
// re-initialize the sync state).
if storage_init != &sync_info.last_client_init {
info!("Detected change in storage client init, updating");
sync_info.client = Sync15StorageClient::new(storage_init.clone())?;
sync_info.last_client_init = storage_init.clone();
}
// Advance the state machine to the point where it can perform a full
// sync. This may involve uploading meta/global, crypto/keys etc.
{
// Scope borrow of `sync_info.client`
let mut state_machine =
sync::SetupStateMachine::for_full_sync(&sync_info.client, &root_sync_key);
info!("Advancing state machine to ready (full)");
let next_sync_state = state_machine.to_ready(sync_info.state)?;
sync_info.state = next_sync_state;
}
// Reset our local state if necessary.
if sync_info.state.engines_that_need_local_reset().contains("passwords") {
info!("Passwords sync ID changed; engine needs local reset");
self.db.reset()?;
}
// Persist the current sync state in the DB.
info!("Updating persisted global state");
let s = sync_info.state.to_persistable_string();
self.db.set_global_state(&s)?;
info!("Syncing passwords engine!");
let ts = self.db.get_last_sync()?.unwrap_or_default();
        // We don't use `?` here so that we can restore the value of
        // `self.sync` even if sync fails.
let result = sync::synchronize(
&sync_info.client,
&sync_info.state,
&mut self.db,
"passwords".into(),
ts,
true
);
match &result {
Ok(()) => info!("Sync was successful!"),
Err(e) => warn!("Sync failed! {:?}", e),
}
// Restore our value of `sync_info` even if the sync failed.
self.sync = Some(sync_info);
Ok(result?)
}
}
#[cfg(test)]
mod test {
use super::*;
use std::time::SystemTime;
use util;
// Doesn't check metadata fields
fn assert_logins_equiv(a: &Login, b: &Login) {
assert_eq!(b.id, a.id);
assert_eq!(b.hostname, a.hostname);
assert_eq!(b.form_submit_url, a.form_submit_url);
assert_eq!(b.http_realm, a.http_realm);
assert_eq!(b.username, a.username);
assert_eq!(b.password, a.password);
assert_eq!(b.username_field, a.username_field);
assert_eq!(b.password_field, a.password_field);
}
#[test]
fn test_general() {
let engine = PasswordEngine::new_in_memory(Some("secret")).unwrap();
let list = engine.list().expect("Grabbing Empty list to work");
assert_eq!(list.len(), 0);
let start_us = util::system_time_ms_i64(SystemTime::now());
let a = Login {
id: "aaaaaaaaaaaa".into(),
hostname: "https://www.example.com".into(),
form_submit_url: Some("https://www.example.com/login".into()),
username: "coolperson21".into(),
password: "p4ssw0rd".into(),
username_field: "user_input".into(),
password_field: "pass_input".into(),
.. Login::default()
};
let b = Login {
// Note: no ID, should be autogenerated for us
hostname: "https://www.example2.com".into(),
http_realm: Some("Some String Here".into()),
username: "asdf".into(),
password: "fdsa".into(),
username_field: "input_user".into(),
password_field: "input_pass".into(),
.. Login::default()
};
let a_id = engine.add(a.clone()).expect("added a");
let b_id = engine.add(b.clone()).expect("added b");
assert_eq!(a_id, a.id);
assert_ne!(b_id, b.id, "Should generate guid when none provided");
let a_from_db = engine.get(&a_id)
.expect("Not to error getting a")
.expect("a to exist");
assert_logins_equiv(&a, &a_from_db);
assert_ge!(a_from_db.time_created, start_us);
assert_ge!(a_from_db.time_password_changed, start_us);
assert_ge!(a_from_db.time_last_used, start_us);
assert_eq!(a_from_db.times_used, 1);
let b_from_db = engine.get(&b_id)
.expect("Not to error getting b")
.expect("b to exist");
assert_logins_equiv(&b_from_db, &Login {
id: b_id.clone(),
.. b.clone()
});
assert_ge!(b_from_db.time_created, start_us);
assert_ge!(b_from_db.time_password_changed, start_us);
assert_ge!(b_from_db.time_last_used, start_us);
assert_eq!(b_from_db.times_used, 1);
let mut list = engine.list().expect("Grabbing list to work");
assert_eq!(list.len(), 2);
let mut expect = vec![a_from_db.clone(), b_from_db.clone()];
list.sort_by(|a, b| b.id.cmp(&a.id));
expect.sort_by(|a, b| b.id.cmp(&a.id));
assert_eq!(list, expect);
engine.delete(&a_id).expect("Successful delete");
assert!(engine.get(&a_id)
.expect("get after delete should still work")
.is_none());
let list = engine.list().expect("Grabbing list to work");
assert_eq!(list.len(), 1);
assert_eq!(list[0], b_from_db);
let now_us = util::system_time_ms_i64(SystemTime::now());
let b2 = Login { password: "newpass".into(), id: b_id.clone(), .. b.clone() };
engine.update(b2.clone()).expect("update b should work");
let b_after_update = engine.get(&b_id)
.expect("Not to error getting b")
.expect("b to exist");
assert_logins_equiv(&b_after_update, &b2);
assert_ge!(b_after_update.time_created, start_us);
assert_le!(b_after_update.time_created, now_us);
assert_ge!(b_after_update.time_password_changed, now_us);
assert_ge!(b_after_update.time_last_used, now_us);
// Should be two even though we updated twice
assert_eq!(b_after_update.times_used, 2);
}
}

logins-sql/src/error.rs Normal file
@@ -0,0 +1,132 @@
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use failure::{Fail, Context, Backtrace};
use std::{self, fmt};
use std::boxed::Box;
use rusqlite;
use serde_json;
use sync;
use url;
pub type Result<T> = std::result::Result<T, Error>;
// Backported part of the (someday real) failure 1.x API, basically equivalent
// to error_chain's `bail!` (We don't call it that because `failure` has a
// `bail` macro with different semantics)
macro_rules! throw {
($e:expr) => {
return Err(::std::convert::Into::into($e));
}
}
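As the comment above notes, `throw!` behaves like `error_chain`'s `bail!`: it converts its argument and returns early. A minimal sketch of the pattern, using a hypothetical simplified error type rather than the `failure`-based `Error` defined below:

```rust
// Hypothetical stand-in error type; the real crate wraps
// failure::Context<ErrorKind> instead.
#[derive(Debug, PartialEq)]
enum MyError {
    Empty,
}

// Same expansion as the crate's `throw!`: convert and return early.
macro_rules! throw {
    ($e:expr) => {
        return Err(::std::convert::Into::into($e));
    };
}

fn check(s: &str) -> Result<(), MyError> {
    if s.is_empty() {
        throw!(MyError::Empty); // short-circuits the function here
    }
    Ok(())
}

fn main() {
    assert_eq!(check(""), Err(MyError::Empty));
    assert!(check("ok").is_ok());
}
```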
#[derive(Debug)]
pub struct Error(Box<Context<ErrorKind>>);
impl Fail for Error {
#[inline]
fn cause(&self) -> Option<&Fail> {
self.0.cause()
}
#[inline]
fn backtrace(&self) -> Option<&Backtrace> {
self.0.backtrace()
}
}
impl fmt::Display for Error {
#[inline]
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
fmt::Display::fmt(&*self.0, f)
}
}
impl Error {
#[inline]
pub fn kind(&self) -> &ErrorKind {
&*self.0.get_context()
}
}
impl From<ErrorKind> for Error {
#[inline]
fn from(kind: ErrorKind) -> Error {
Error(Box::new(Context::new(kind)))
}
}
impl From<Context<ErrorKind>> for Error {
#[inline]
fn from(inner: Context<ErrorKind>) -> Error {
Error(Box::new(inner))
}
}
#[derive(Debug, Fail)]
pub enum ErrorKind {
#[fail(display = "Invalid login: {}", _0)]
InvalidLogin(InvalidLogin),
#[fail(display = "The `sync_status` column in DB has an illegal value: {}", _0)]
BadSyncStatus(u8),
#[fail(display = "A duplicate GUID is present: {:?}", _0)]
DuplicateGuid(String),
#[fail(display = "No record with guid exists (when one was required): {:?}", _0)]
NoSuchRecord(String),
#[fail(display = "Error synchronizing: {}", _0)]
SyncAdapterError(#[fail(cause)] sync::Error),
#[fail(display = "Error parsing JSON data: {}", _0)]
JsonError(#[fail(cause)] serde_json::Error),
#[fail(display = "Error executing SQL: {}", _0)]
SqlError(#[fail(cause)] rusqlite::Error),
#[fail(display = "Error parsing URL: {}", _0)]
UrlParseError(#[fail(cause)] url::ParseError),
}
macro_rules! impl_from_error {
($(($variant:ident, $type:ty)),+) => ($(
impl From<$type> for ErrorKind {
#[inline]
fn from(e: $type) -> ErrorKind {
ErrorKind::$variant(e)
}
}
impl From<$type> for Error {
#[inline]
fn from(e: $type) -> Error {
ErrorKind::from(e).into()
}
}
)*);
}
impl_from_error! {
(SyncAdapterError, sync::Error),
(JsonError, serde_json::Error),
(UrlParseError, url::ParseError),
(SqlError, rusqlite::Error),
(InvalidLogin, InvalidLogin)
}
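The point of `impl_from_error!` is that the `?` operator can then convert any of the listed lower-level errors into this crate's `Error` automatically. A sketch of the conversion chain it generates, using a hypothetical stand-in for a lower-level error such as `rusqlite::Error`:

```rust
// Hypothetical stand-in for a lower-level error (e.g. rusqlite::Error).
#[derive(Debug)]
struct SqlError(String);

#[derive(Debug)]
enum ErrorKind {
    Sql(SqlError),
}

#[derive(Debug)]
struct Error(ErrorKind);

impl From<ErrorKind> for Error {
    fn from(k: ErrorKind) -> Error { Error(k) }
}

// Roughly what `impl_from_error! { (Sql, SqlError) }` would expand to:
impl From<SqlError> for ErrorKind {
    fn from(e: SqlError) -> ErrorKind { ErrorKind::Sql(e) }
}
impl From<SqlError> for Error {
    fn from(e: SqlError) -> Error { ErrorKind::from(e).into() }
}

fn query() -> Result<i64, SqlError> {
    Err(SqlError("no such table".into()))
}

// `?` converts SqlError -> Error via the From impls above.
fn do_query() -> Result<i64, Error> {
    let n = query()?;
    Ok(n)
}

fn main() {
    assert!(matches!(do_query(), Err(Error(ErrorKind::Sql(_)))));
}
```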
#[derive(Debug, Fail)]
pub enum InvalidLogin {
#[fail(display = "Hostname is empty")]
EmptyHostname,
#[fail(display = "Password is empty")]
EmptyPassword,
#[fail(display = "Both `formSubmitUrl` and `httpRealm` are present")]
BothTargets,
#[fail(display = "Neither `formSubmitUrl` nor `httpRealm` is present")]
NoTarget,
}

logins-sql/src/lib.rs Normal file
@@ -0,0 +1,50 @@
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
extern crate sync15_adapter as sync;
#[macro_use]
extern crate log;
#[cfg(test)]
extern crate env_logger;
#[macro_use]
extern crate lazy_static;
extern crate failure;
#[macro_use]
extern crate failure_derive;
#[cfg(test)]
#[macro_use]
extern crate more_asserts;
extern crate url;
extern crate rusqlite;
extern crate serde;
extern crate serde_json;
#[macro_use]
extern crate serde_derive;
#[macro_use]
mod error;
mod login;
pub mod schema;
mod util;
mod db;
mod engine;
mod update_plan;
pub use error::*;
pub use login::*;
pub use engine::*;

logins-sql/src/login.rs Normal file
@@ -0,0 +1,431 @@
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use sync::{self, ServerTimestamp};
use rusqlite::Row;
use util;
use std::time::{self, SystemTime};
use error::*;
#[derive(Debug, Clone, Hash, PartialEq, Serialize, Deserialize, Default)]
#[serde(rename_all = "camelCase")]
pub struct Login {
// TODO: consider `#[serde(rename = "id")] pub guid: String` to avoid confusion
pub id: String,
pub hostname: String,
// rename_all = "camelCase" by default will do formSubmitUrl, but we can just
// override this one field.
#[serde(rename = "formSubmitURL")]
pub form_submit_url: Option<String>,
pub http_realm: Option<String>,
#[serde(default)]
pub username: String,
pub password: String,
#[serde(default)]
pub username_field: String,
#[serde(default)]
pub password_field: String,
#[serde(default)]
pub time_created: i64,
#[serde(default)]
pub time_password_changed: i64,
#[serde(default)]
pub time_last_used: i64,
#[serde(default)]
pub times_used: i64,
}
fn string_or_default(row: &Row, col: &str) -> Result<String> {
Ok(row.get_checked::<_, Option<String>>(col)?.unwrap_or_default())
}
impl Login {
#[inline]
pub fn guid(&self) -> &String {
&self.id
}
#[inline]
pub fn guid_str(&self) -> &str {
self.id.as_str()
}
pub fn check_valid(&self) -> Result<()> {
if self.hostname.is_empty() {
throw!(InvalidLogin::EmptyHostname);
}
if self.password.is_empty() {
throw!(InvalidLogin::EmptyPassword);
}
if self.form_submit_url.is_some() && self.http_realm.is_some() {
throw!(InvalidLogin::BothTargets);
}
if self.form_submit_url.is_none() && self.http_realm.is_none() {
throw!(InvalidLogin::NoTarget);
}
Ok(())
}
pub(crate) fn from_row(row: &Row) -> Result<Login> {
Ok(Login {
id: row.get_checked("guid")?,
password: row.get_checked("password")?,
username: string_or_default(row, "username")?,
hostname: row.get_checked("hostname")?,
http_realm: row.get_checked("httpRealm")?,
form_submit_url: row.get_checked("formSubmitURL")?,
username_field: string_or_default(row, "usernameField")?,
password_field: string_or_default(row, "passwordField")?,
time_created: row.get_checked("timeCreated")?,
// Might be null
time_last_used: row.get_checked::<_, Option<i64>>("timeLastUsed")?.unwrap_or_default(),
time_password_changed: row.get_checked("timePasswordChanged")?,
times_used: row.get_checked("timesUsed")?,
})
}
}
#[derive(Clone, Debug)]
pub(crate) struct MirrorLogin {
pub login: Login,
pub is_overridden: bool,
pub server_modified: ServerTimestamp,
}
impl MirrorLogin {
#[inline]
pub fn guid_str(&self) -> &str {
self.login.guid_str()
}
pub(crate) fn from_row(row: &Row) -> Result<MirrorLogin> {
Ok(MirrorLogin {
login: Login::from_row(row)?,
is_overridden: row.get_checked("is_overridden")?,
server_modified: ServerTimestamp(
row.get_checked::<_, i64>("server_modified")? as f64 / 1000.0)
})
}
}
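`MirrorLogin::from_row` above converts the `server_modified` column, stored as integer milliseconds, into a `ServerTimestamp`, which wraps fractional seconds as an `f64`. A small sketch of that round-trip, with a hypothetical stand-in for `sync15_adapter::ServerTimestamp`:

```rust
// Hypothetical stand-in: the real ServerTimestamp lives in sync15_adapter
// and wraps seconds as f64; the DB stores integer milliseconds.
#[derive(Debug, PartialEq)]
struct ServerTimestamp(f64);

fn from_db_millis(ms: i64) -> ServerTimestamp {
    ServerTimestamp(ms as f64 / 1000.0)
}

fn to_db_millis(ts: &ServerTimestamp) -> i64 {
    // Truncated, as the schema comments describe for loginsM.
    (ts.0 * 1000.0) as i64
}

fn main() {
    let ts = from_db_millis(1_536_806_400_000);
    assert_eq!(to_db_millis(&ts), 1_536_806_400_000);
}
```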
// This doesn't really belong here.
#[derive(Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash)]
#[repr(u8)]
pub(crate) enum SyncStatus {
Synced = 0,
Changed = 1,
New = 2,
}
impl SyncStatus {
#[inline]
pub fn from_u8(v: u8) -> Result<Self> {
match v {
0 => Ok(SyncStatus::Synced),
1 => Ok(SyncStatus::Changed),
2 => Ok(SyncStatus::New),
v => throw!(ErrorKind::BadSyncStatus(v)),
}
}
}
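The `sync_status` column stores these variants as raw `u8` values, so `from_u8` is the only way back from the DB representation and must reject anything outside 0..=2. A standalone sketch of that round-trip (error type simplified to a bare `u8` instead of `ErrorKind::BadSyncStatus`):

```rust
// Stand-in mirroring the crate's SyncStatus encoding for the
// sync_status column.
#[derive(Clone, Copy, Debug, PartialEq)]
#[repr(u8)]
enum SyncStatus {
    Synced = 0,
    Changed = 1,
    New = 2,
}

fn from_u8(v: u8) -> Result<SyncStatus, u8> {
    match v {
        0 => Ok(SyncStatus::Synced),
        1 => Ok(SyncStatus::Changed),
        2 => Ok(SyncStatus::New),
        v => Err(v), // real code throws ErrorKind::BadSyncStatus(v)
    }
}

fn main() {
    assert_eq!(from_u8(SyncStatus::New as u8), Ok(SyncStatus::New));
    assert_eq!(from_u8(9), Err(9));
}
```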
#[derive(Clone, Debug)]
pub(crate) struct LocalLogin {
pub login: Login,
pub sync_status: SyncStatus,
pub is_deleted: bool,
pub local_modified: SystemTime,
}
impl LocalLogin {
#[inline]
pub fn guid_str(&self) -> &str {
self.login.guid_str()
}
pub(crate) fn from_row(row: &Row) -> Result<LocalLogin> {
Ok(LocalLogin {
login: Login::from_row(row)?,
sync_status: SyncStatus::from_u8(row.get_checked("sync_status")?)?,
is_deleted: row.get_checked("is_deleted")?,
local_modified: util::system_time_millis_from_row(row, "local_modified")?
})
}
}
macro_rules! impl_login {
($ty:ty { $($fields:tt)* }) => {
impl AsRef<Login> for $ty {
#[inline]
fn as_ref(&self) -> &Login {
&self.login
}
}
impl AsMut<Login> for $ty {
#[inline]
fn as_mut(&mut self) -> &mut Login {
&mut self.login
}
}
impl From<$ty> for Login {
#[inline]
fn from(l: $ty) -> Self {
l.login
}
}
impl From<Login> for $ty {
#[inline]
fn from(login: Login) -> Self {
Self { login, $($fields)* }
}
}
};
}
impl_login!(LocalLogin {
sync_status: SyncStatus::New,
is_deleted: false,
local_modified: time::UNIX_EPOCH
});
impl_login!(MirrorLogin {
is_overridden: false,
server_modified: ServerTimestamp(0.0)
});
// Stores data needed to do a 3-way merge
pub(crate) struct SyncLoginData {
pub guid: String,
pub local: Option<LocalLogin>,
pub mirror: Option<MirrorLogin>,
// None means it's a deletion
pub inbound: (Option<Login>, ServerTimestamp),
}
impl SyncLoginData {
#[inline]
pub fn guid_str(&self) -> &str {
&self.guid[..]
}
#[inline]
pub fn guid(&self) -> &String {
&self.guid
}
#[inline]
pub fn from_payload(payload: sync::Payload, ts: ServerTimestamp) -> Result<Self> {
let guid = payload.id.clone();
let login: Option<Login> =
if payload.is_tombstone() {
None
} else {
let record: Login = payload.into_record()?;
Some(record)
};
Ok(Self { guid, local: None, mirror: None, inbound: (login, ts) })
}
}
macro_rules! impl_login_setter {
($setter_name:ident, $field:ident, $Login:ty) => {
impl SyncLoginData {
pub(crate) fn $setter_name (&mut self, record: $Login) -> Result<()> {
// TODO: We probably shouldn't panic in this function!
if self.$field.is_some() {
// Shouldn't be possible (only could happen if UNIQUE fails in sqlite, or if we
// get duplicate guids somewhere, but we check).
panic!("SyncLoginData::{} called on object that already has {} data",
stringify!($setter_name),
stringify!($field));
}
if self.guid_str() != record.guid_str() {
// This is almost certainly a bug in our code.
panic!("Wrong guid on login in {}: {:?} != {:?}",
stringify!($setter_name),
self.guid_str(), record.guid_str());
}
self.$field = Some(record);
Ok(())
}
}
};
}
impl_login_setter!(set_local, local, LocalLogin);
impl_login_setter!(set_mirror, mirror, MirrorLogin);
#[derive(Debug, Default, Clone)]
pub(crate) struct LoginDelta {
// "non-commutative" fields
pub hostname: Option<String>,
pub password: Option<String>,
pub username: Option<String>,
pub http_realm: Option<String>,
pub form_submit_url: Option<String>,
pub time_created: Option<i64>,
pub time_last_used: Option<i64>,
pub time_password_changed: Option<i64>,
// "non-conflicting" fields (which are the same)
pub password_field: Option<String>,
pub username_field: Option<String>,
// Commutative field
pub times_used: i64,
}
macro_rules! merge_field {
($merged:ident, $b:ident, $prefer_b:expr, $field:ident) => {
if let Some($field) = $b.$field.take() {
if $merged.$field.is_some() {
warn!("Collision merging login field {}", stringify!($field));
if $prefer_b {
$merged.$field = Some($field);
}
} else {
$merged.$field = Some($field);
}
}
};
}
impl LoginDelta {
pub fn merge(self, mut b: LoginDelta, b_is_newer: bool) -> LoginDelta {
let mut merged = self;
merge_field!(merged, b, b_is_newer, hostname);
merge_field!(merged, b, b_is_newer, password);
merge_field!(merged, b, b_is_newer, username);
merge_field!(merged, b, b_is_newer, http_realm);
merge_field!(merged, b, b_is_newer, form_submit_url);
merge_field!(merged, b, b_is_newer, time_created);
merge_field!(merged, b, b_is_newer, time_last_used);
merge_field!(merged, b, b_is_newer, time_password_changed);
merge_field!(merged, b, b_is_newer, password_field);
merge_field!(merged, b, b_is_newer, username_field);
// commutative fields
merged.times_used += b.times_used;
merged
}
}
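The rule `merge_field!` implements: a field set in only one delta is taken as-is; on a collision the newer side wins when `prefer_b` is true; the commutative `times_used` counter is summed rather than chosen. A reduced sketch with one of each kind of field:

```rust
// Reduced stand-in for LoginDelta: one "non-commutative" field and
// the commutative times_used counter.
#[derive(Debug, Default, Clone, PartialEq)]
struct Delta {
    password: Option<String>,
    times_used: i64,
}

fn merge(a: Delta, mut b: Delta, prefer_b: bool) -> Delta {
    let mut merged = a;
    // What merge_field! does per field: take b's value if a had none,
    // or if b is the newer side.
    if let Some(p) = b.password.take() {
        if merged.password.is_none() || prefer_b {
            merged.password = Some(p);
        }
    }
    // Commutative field: contributions add up.
    merged.times_used += b.times_used;
    merged
}

fn main() {
    let a = Delta { password: Some("old".into()), times_used: 2 };
    let b = Delta { password: Some("new".into()), times_used: 3 };
    let newer_wins = merge(a.clone(), b.clone(), true);
    assert_eq!(newer_wins.password.as_deref(), Some("new"));
    assert_eq!(newer_wins.times_used, 5);
    let older_kept = merge(a, b, false);
    assert_eq!(older_kept.password.as_deref(), Some("old"));
}
```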
macro_rules! apply_field {
($login:ident, $delta:ident, $field:ident) => {
if let Some($field) = $delta.$field.take() {
$login.$field = $field.into();
}
};
}
impl Login {
pub(crate) fn apply_delta(&mut self, mut delta: LoginDelta) {
apply_field!(self, delta, hostname);
apply_field!(self, delta, password);
apply_field!(self, delta, username);
apply_field!(self, delta, time_created);
apply_field!(self, delta, time_last_used);
apply_field!(self, delta, time_password_changed);
apply_field!(self, delta, password_field);
apply_field!(self, delta, username_field);
// Use Some("") to indicate that it should be changed to be None (hacky...)
if let Some(realm) = delta.http_realm.take() {
self.http_realm = if realm.is_empty() { None } else { Some(realm) };
}
if let Some(url) = delta.form_submit_url.take() {
self.form_submit_url = if url.is_empty() { None } else { Some(url) };
}
self.times_used += delta.times_used;
}
pub(crate) fn delta(&self, older: &Login) -> LoginDelta {
let mut delta = LoginDelta::default();
if self.form_submit_url != older.form_submit_url {
delta.form_submit_url = Some(self.form_submit_url.clone().unwrap_or_default());
}
if self.http_realm != older.http_realm {
delta.http_realm = Some(self.http_realm.clone().unwrap_or_default());
}
if self.hostname != older.hostname {
delta.hostname = Some(self.hostname.clone());
}
if self.username != older.username {
delta.username = Some(self.username.clone());
}
if self.password != older.password {
delta.password = Some(self.password.clone());
}
if self.password_field != older.password_field {
delta.password_field = Some(self.password_field.clone());
}
if self.username_field != older.username_field {
delta.username_field = Some(self.username_field.clone());
}
// We discard zero (and negative numbers) for timestamps so that a
// record that doesn't contain this information (these are
// `#[serde(default)]`) doesn't skew our records.
//
        // Arguably, we should also ignore values later than our
// `time_created`, or earlier than our `time_last_used` or
// `time_password_changed`. Doing this properly would probably require
// a scheme analogous to Desktop's weak-reupload system, so I'm punting
// on it for now.
if self.time_created > 0 && self.time_created != older.time_created {
delta.time_created = Some(self.time_created);
}
if self.time_last_used > 0 && self.time_last_used != older.time_last_used {
delta.time_last_used = Some(self.time_last_used);
}
if self.time_password_changed > 0 && self.time_password_changed != older.time_password_changed {
delta.time_password_changed = Some(self.time_password_changed);
}
if self.times_used > 0 && self.times_used != older.times_used {
delta.times_used = self.times_used - older.times_used;
}
delta
}
}

logins-sql/src/schema.rs Normal file
@@ -0,0 +1,304 @@
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
//! Logins Schema v4
//! ================
//!
//! The schema we use is an evolution of the firefox-ios logins database format.
//! There are three tables:
//!
//! - `loginsL`: The local table.
//! - `loginsM`: The mirror table.
//! - `loginsSyncMeta`: The table used to store various sync metadata.
//!
//! ## `loginsL`
//!
//! This stores local login information, also known as the "overlay".
//!
//! `loginsL` is essentially unchanged from firefox-ios; however, note the
//! semantic change v4 makes to timestamp fields (which is explained in more
//! detail in the [COMMON_COLS] documentation).
//!
//! It is important to note that `loginsL` is not guaranteed to be present for
//! all records. Synced records may only exist in `loginsM` (although this is
//! not guaranteed). In either case, queries should read from both `loginsL` and
//! `loginsM`.
//!
//! ### `loginsL` Columns
//!
//! Contains all fields in [COMMON_COLS], as well as the following additional
//! columns:
//!
//! - `local_modified`: A millisecond local timestamp indicating when the record
//! was changed locally, or NULL if the record has never been changed locally.
//!
//! - `is_deleted`: A boolean indicating whether or not this record is a
//! tombstone.
//!
//! - `sync_status`: A `SyncStatus` enum value, one of
//!
//! - `0` (`SyncStatus::Synced`): Indicating that the record has been synced
//!
//! - `1` (`SyncStatus::Changed`): Indicating that the record has changed
//! locally and is known to exist on the server.
//!
//! - `2` (`SyncStatus::New`): Indicating that the record has never been
//! synced, or we have been reset since the last time it synced.
//!
//! ## `loginsM`
//!
//! This stores server-side login information, also known as the "mirror".
//!
//! Like `loginsL`, `loginsM` has not changed from firefox-ios, beyond the
//! change to store timestamps as milliseconds explained in [COMMON_COLS].
//!
//! Also like `loginsL`, `loginsM` is not guaranteed to have rows for all
//! records. It should not have rows for records which were not synced!
//!
//! It is important to note that `loginsL` is not guaranteed to be present for
//! all records. Synced records may only exist in `loginsM`! Queries should
//! test against both!
//!
//! ### `loginsM` Columns
//!
//! Contains all fields in [COMMON_COLS], as well as the following additional
//! columns:
//!
//! - `server_modified`: the most recent server-modification timestamp
//! ([sync15_adapter::ServerTimestamp]) we've seen for this record. Stored as
//! a millisecond value.
//!
//! - `is_overridden`: A boolean indicating whether or not the mirror contents
//! are invalid, and that we should defer to the data stored in `loginsL`.
//!
//! ## `loginsSyncMeta`
//!
//! This is a simple key-value table based on the `moz_meta` table in places.
//! This table was added (by this rust crate) in version 4, and so is not
//! present in firefox-ios.
//!
//! Currently it is used to store two items:
//!
//! 1. The last sync timestamp is stored under [LAST_SYNC_META_KEY], a
//! `sync15_adapter::ServerTimestamp` stored in integer milliseconds.
//!
//! 2. The persisted sync state machine information is stored under
//! [GLOBAL_STATE_META_KEY]. This is a `sync15_adapter::GlobalState` stored as
//! JSON.
//!
use error::*;
use db;
/// Note that firefox-ios is currently on version 3. Version 4 is this version,
/// which adds a metadata table and changes timestamps to be in milliseconds.
pub const VERSION: i64 = 4;
/// Every column shared by both tables except for `id`
///
/// Note: `timeCreated`, `timeLastUsed`, and `timePasswordChanged` are in
/// milliseconds. This is in line with how the server and Desktop handle it, but
/// counter to how firefox-ios handles it (hence needing to fix them up in
/// firefox-ios data on schema upgrade from 3, the last firefox-ios password
/// schema version).
///
/// The reason for breaking from how firefox-ios does things is just because it
/// complicates the code to have multiple kinds of timestamps, for very little
/// benefit. It also makes it unclear what's stored on the server, leading to
/// further confusion.
///
/// However, note that the `local_modified` (of `loginsL`) and `server_modified`
/// (of `loginsM`) are stored as milliseconds as well both on firefox-ios and
/// here (and so they do not need to be updated with the `timeLastUsed`/
/// `timePasswordChanged`/`timeCreated` timestamps).
pub const COMMON_COLS: &'static str = "
guid,
username,
password,
hostname,
httpRealm,
formSubmitURL,
usernameField,
passwordField,
timeCreated,
timeLastUsed,
timePasswordChanged,
timesUsed
";
const COMMON_SQL: &'static str = "
id INTEGER PRIMARY KEY AUTOINCREMENT,
hostname TEXT NOT NULL,
-- Exactly one of httpRealm or formSubmitURL should be set
httpRealm TEXT,
formSubmitURL TEXT,
usernameField TEXT,
passwordField TEXT,
timesUsed INTEGER NOT NULL DEFAULT 0,
timeCreated INTEGER NOT NULL,
timeLastUsed INTEGER,
timePasswordChanged INTEGER NOT NULL,
username TEXT,
password TEXT NOT NULL,
guid TEXT NOT NULL UNIQUE
";
lazy_static! {
static ref CREATE_LOCAL_TABLE_SQL: String = format!(
"CREATE TABLE IF NOT EXISTS loginsL (
{common_sql},
-- Milliseconds, or NULL if never modified locally.
local_modified INTEGER,
is_deleted TINYINT NOT NULL DEFAULT 0,
sync_status TINYINT NOT NULL DEFAULT 0
)",
common_sql = COMMON_SQL
);
static ref CREATE_MIRROR_TABLE_SQL: String = format!(
"CREATE TABLE IF NOT EXISTS loginsM (
{common_sql},
-- Milliseconds (a sync15_adapter::ServerTimestamp multiplied by
-- 1000 and truncated)
server_modified INTEGER NOT NULL,
is_overridden TINYINT NOT NULL DEFAULT 0
)",
common_sql = COMMON_SQL
);
static ref SET_VERSION_SQL: String = format!(
"PRAGMA user_version = {version}",
version = VERSION
);
}
const CREATE_META_TABLE_SQL: &'static str = "
CREATE TABLE IF NOT EXISTS loginsSyncMeta (
key TEXT PRIMARY KEY,
value NOT NULL
)
";
const CREATE_OVERRIDE_HOSTNAME_INDEX_SQL: &'static str = "
CREATE INDEX IF NOT EXISTS idx_loginsM_is_overridden_hostname
ON loginsM (is_overridden, hostname)
";
const CREATE_DELETED_HOSTNAME_INDEX_SQL: &'static str = "
CREATE INDEX IF NOT EXISTS idx_loginsL_is_deleted_hostname
ON loginsL (is_deleted, hostname)
";
// As noted above, we use these when updating from schema v3 (firefox-ios's
// last schema) to convert from microsecond timestamps to milliseconds.
const UPDATE_LOCAL_TIMESTAMPS_TO_MILLIS_SQL: &'static str = "
UPDATE loginsL
SET timeCreated = timeCreated / 1000,
timeLastUsed = timeLastUsed / 1000,
timePasswordChanged = timePasswordChanged / 1000
";
const UPDATE_MIRROR_TIMESTAMPS_TO_MILLIS_SQL: &'static str = "
UPDATE loginsM
SET timeCreated = timeCreated / 1000,
timeLastUsed = timeLastUsed / 1000,
timePasswordChanged = timePasswordChanged / 1000
";
pub(crate) static LAST_SYNC_META_KEY: &'static str = "last_sync_time";
pub(crate) static GLOBAL_STATE_META_KEY: &'static str = "global_state";
pub(crate) fn init(db: &db::LoginDb) -> Result<()> {
let user_version = db.query_one::<i64>("PRAGMA user_version")?;
if user_version == 0 {
// This logic is largely taken from firefox-ios. AFAICT at some point
// they went from having schema versions tracked using a table named
// `tableList` to using `PRAGMA user_version`. This leads to the
// following logic:
//
// - If `tableList` exists, we're hopelessly far in the past, drop any
// tables we have (to ensure we avoid name collisions/stale data) and
// recreate. (This is captured by the `upgrade` case where from == 0)
//
// - If `tableList` doesn't exist and `PRAGMA user_version` is 0, it's
// the first time through, just create the new tables.
//
// - Otherwise, it's a normal schema upgrade from an earlier
// `PRAGMA user_version`.
let table_list_exists = db.query_one::<i64>(
"SELECT count(*) FROM sqlite_master WHERE type = 'table' AND name = 'tableList'"
)? != 0;
if table_list_exists {
drop(db)?;
}
return create(db);
}
if user_version != VERSION {
if user_version < VERSION {
upgrade(db, user_version)?;
} else {
warn!("Loaded future schema version {} (we only understand version {}). \
Optimistically continuing anyway.",
user_version, VERSION)
}
}
Ok(())
}
// https://github.com/mozilla-mobile/firefox-ios/blob/master/Storage/SQL/LoginsSchema.swift#L100
fn upgrade(db: &db::LoginDb, from: i64) -> Result<()> {
debug!("Upgrading schema from {} to {}", from, VERSION);
if from == VERSION {
return Ok(());
}
assert_ne!(from, 0,
"Upgrading from user_version = 0 should already be handled (in `init`)");
if from < 3 {
// These indices were added in v3 (apparently)
db.execute_all(&[
CREATE_OVERRIDE_HOSTNAME_INDEX_SQL,
CREATE_DELETED_HOSTNAME_INDEX_SQL,
])?;
}
if from < 4 {
// This is the update from the firefox-ios schema to our schema.
// The `loginsSyncMeta` table was added in v4, and we moved
// from using microseconds to milliseconds for `timeCreated`,
// `timeLastUsed`, and `timePasswordChanged`.
db.execute_all(&[
CREATE_META_TABLE_SQL,
UPDATE_LOCAL_TIMESTAMPS_TO_MILLIS_SQL,
UPDATE_MIRROR_TIMESTAMPS_TO_MILLIS_SQL,
&*SET_VERSION_SQL,
])?;
}
Ok(())
}
pub(crate) fn create(db: &db::LoginDb) -> Result<()> {
debug!("Creating schema");
db.execute_all(&[
&*CREATE_LOCAL_TABLE_SQL,
&*CREATE_MIRROR_TABLE_SQL,
CREATE_OVERRIDE_HOSTNAME_INDEX_SQL,
CREATE_DELETED_HOSTNAME_INDEX_SQL,
CREATE_META_TABLE_SQL,
&*SET_VERSION_SQL,
])?;
Ok(())
}
pub(crate) fn drop(db: &db::LoginDb) -> Result<()> {
debug!("Dropping schema");
db.execute_all(&[
"DROP TABLE IF EXISTS loginsM",
"DROP TABLE IF EXISTS loginsL",
"DROP TABLE IF EXISTS loginsSyncMeta",
"PRAGMA user_version = 0",
])?;
Ok(())
}

@@ -0,0 +1,249 @@
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use rusqlite::{types::ToSql, Transaction};
use std::time::SystemTime;
use error::*;
use login::{LocalLogin, MirrorLogin, Login, SyncStatus};
use sync::ServerTimestamp;
use util;
#[derive(Default, Debug, Clone)]
pub(crate) struct UpdatePlan {
pub delete_mirror: Vec<String>,
pub delete_local: Vec<String>,
pub local_updates: Vec<MirrorLogin>,
// the bool is the `is_overridden` flag, the i64 is ServerTimestamp in millis
pub mirror_inserts: Vec<(Login, i64, bool)>,
pub mirror_updates: Vec<(Login, i64)>,
}
impl UpdatePlan {
pub fn plan_two_way_merge(&mut self, local: &Login, upstream: (Login, ServerTimestamp)) {
let is_override = local.time_password_changed > upstream.0.time_password_changed;
self.mirror_inserts.push((upstream.0, upstream.1.as_millis() as i64, is_override));
if !is_override {
self.delete_local.push(local.id.to_string());
}
}
pub fn plan_three_way_merge(
&mut self,
local: LocalLogin,
shared: MirrorLogin,
upstream: Login,
upstream_time: ServerTimestamp,
server_now: ServerTimestamp
) {
let local_age = SystemTime::now().duration_since(local.local_modified).unwrap_or_default();
let remote_age = server_now.duration_since(upstream_time).unwrap_or_default();
let local_delta = local.login.delta(&shared.login);
let upstream_delta = upstream.delta(&shared.login);
let merged_delta = local_delta.merge(upstream_delta, remote_age < local_age);
// Update mirror to upstream
self.mirror_updates.push((upstream, upstream_time.as_millis() as i64));
let mut new = shared;
new.login.apply_delta(merged_delta);
new.server_modified = upstream_time;
self.local_updates.push(new);
}
pub fn plan_delete(&mut self, id: String) {
self.delete_local.push(id.to_string());
self.delete_mirror.push(id.to_string());
}
pub fn plan_mirror_update(&mut self, login: Login, time: ServerTimestamp) {
self.mirror_updates.push((login, time.as_millis() as i64));
}
pub fn plan_mirror_insert(&mut self, login: Login, time: ServerTimestamp, is_override: bool) {
self.mirror_inserts.push((login, time.as_millis() as i64, is_override));
}
fn perform_deletes(&self, tx: &mut Transaction, max_var_count: usize) -> Result<()> {
util::each_chunk(&self.delete_local, max_var_count, |chunk, _| {
tx.execute(&format!("DELETE FROM loginsL WHERE guid IN ({vars})",
vars = util::sql_vars(chunk.len())),
chunk)?;
Ok(())
})?;
util::each_chunk(&self.delete_mirror, max_var_count, |chunk, _| {
tx.execute(&format!("DELETE FROM loginsM WHERE guid IN ({vars})",
vars = util::sql_vars(chunk.len())),
chunk)?;
Ok(())
})?;
Ok(())
}
// These aren't batched but probably should be.
fn perform_mirror_updates(&self, tx: &mut Transaction) -> Result<()> {
let sql = "
UPDATE loginsM
SET server_modified = :server_modified,
httpRealm = :http_realm,
formSubmitURL = :form_submit_url,
usernameField = :username_field,
passwordField = :password_field,
password = :password,
hostname = :hostname,
username = :username,
-- Avoid zeroes if the remote has been overwritten by an older client.
timesUsed = coalesce(nullif(:times_used, 0), timesUsed),
timeLastUsed = coalesce(nullif(:time_last_used, 0), timeLastUsed),
timePasswordChanged = coalesce(nullif(:time_password_changed, 0), timePasswordChanged),
timeCreated = coalesce(nullif(:time_created, 0), timeCreated)
WHERE guid = :guid
";
let mut stmt = tx.prepare_cached(sql)?;
for (login, timestamp) in &self.mirror_updates {
trace!("Updating mirror {:?}", login.guid_str());
stmt.execute_named(&[
(":server_modified", timestamp as &ToSql),
(":http_realm", &login.http_realm as &ToSql),
(":form_submit_url", &login.form_submit_url as &ToSql),
(":username_field", &login.username_field as &ToSql),
(":password_field", &login.password_field as &ToSql),
(":password", &login.password as &ToSql),
(":hostname", &login.hostname as &ToSql),
(":username", &login.username as &ToSql),
(":times_used", &login.times_used as &ToSql),
(":time_last_used", &login.time_last_used as &ToSql),
(":time_password_changed", &login.time_password_changed as &ToSql),
(":time_created", &login.time_created as &ToSql),
(":guid", &login.guid_str() as &ToSql),
])?;
}
Ok(())
}
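The `coalesce(nullif(:x, 0), x)` idiom in the UPDATE above can be restated in plain Rust. A minimal sketch (the function name is illustrative, not part of the crate):

```rust
// Rust restatement of SQL's coalesce(nullif(incoming, 0), stored): an
// incoming zero means the remote record was written by a client that
// didn't track the field, so the locally stored value is kept.
fn keep_nonzero(incoming: i64, stored: i64) -> i64 {
    if incoming != 0 { incoming } else { stored }
}

fn main() {
    assert_eq!(keep_nonzero(0, 42), 42); // zero from remote: keep local
    assert_eq!(keep_nonzero(7, 42), 7);  // real value from remote: take it
    println!("ok");
}
```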
fn perform_mirror_inserts(&self, tx: &mut Transaction) -> Result<()> {
let sql = "
INSERT OR IGNORE INTO loginsM (
is_overridden,
server_modified,
httpRealm,
formSubmitURL,
usernameField,
passwordField,
password,
hostname,
username,
timesUsed,
timeLastUsed,
timePasswordChanged,
timeCreated,
guid
) VALUES (
:is_overridden,
:server_modified,
:http_realm,
:form_submit_url,
:username_field,
:password_field,
:password,
:hostname,
:username,
:times_used,
:time_last_used,
:time_password_changed,
:time_created,
:guid
)";
let mut stmt = tx.prepare_cached(&sql)?;
for (login, timestamp, is_overridden) in &self.mirror_inserts {
trace!("Inserting mirror {:?}", login.guid_str());
stmt.execute_named(&[
(":is_overridden", is_overridden as &ToSql),
(":server_modified", timestamp as &ToSql),
(":http_realm", &login.http_realm as &ToSql),
(":form_submit_url", &login.form_submit_url as &ToSql),
(":username_field", &login.username_field as &ToSql),
(":password_field", &login.password_field as &ToSql),
(":password", &login.password as &ToSql),
(":hostname", &login.hostname as &ToSql),
(":username", &login.username as &ToSql),
(":times_used", &login.times_used as &ToSql),
(":time_last_used", &login.time_last_used as &ToSql),
(":time_password_changed", &login.time_password_changed as &ToSql),
(":time_created", &login.time_created as &ToSql),
(":guid", &login.guid_str() as &ToSql),
])?;
}
Ok(())
}
fn perform_local_updates(&self, tx: &mut Transaction) -> Result<()> {
let sql = format!("
UPDATE loginsL
SET local_modified = :local_modified,
httpRealm = :http_realm,
formSubmitURL = :form_submit_url,
usernameField = :username_field,
passwordField = :password_field,
timeLastUsed = :time_last_used,
timePasswordChanged = :time_password_changed,
timesUsed = :times_used,
password = :password,
hostname = :hostname,
username = :username,
sync_status = {changed}
WHERE guid = :guid",
changed = SyncStatus::Changed as u8);
let mut stmt = tx.prepare_cached(&sql)?;
// XXX OutgoingChangeset should no longer have timestamp.
let local_ms: i64 = util::system_time_ms_i64(SystemTime::now());
for l in &self.local_updates {
trace!("Updating local {:?}", l.guid_str());
stmt.execute_named(&[
(":local_modified", &local_ms as &ToSql),
(":http_realm", &l.login.http_realm as &ToSql),
(":form_submit_url", &l.login.form_submit_url as &ToSql),
(":username_field", &l.login.username_field as &ToSql),
(":password_field", &l.login.password_field as &ToSql),
(":password", &l.login.password as &ToSql),
(":hostname", &l.login.hostname as &ToSql),
(":username", &l.login.username as &ToSql),
(":time_last_used", &l.login.time_last_used as &ToSql),
(":time_password_changed", &l.login.time_password_changed as &ToSql),
(":times_used", &l.login.times_used as &ToSql),
(":guid", &l.guid_str() as &ToSql),
])?;
}
Ok(())
}
pub fn execute(&self, tx: &mut Transaction, max_var_count: usize) -> Result<()> {
debug!("UpdatePlan: deleting records...");
self.perform_deletes(tx, max_var_count)?;
debug!("UpdatePlan: Updating existing mirror records...");
self.perform_mirror_updates(tx)?;
debug!("UpdatePlan: Inserting new mirror records...");
self.perform_mirror_inserts(tx)?;
debug!("UpdatePlan: Updating reconciled local records...");
self.perform_local_updates(tx)?;
Ok(())
}
}

logins-sql/src/util.rs Normal file

@ -0,0 +1,126 @@
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use error::*;
use rusqlite::{types::ToSql, Row};
use std::{fmt, time};
use url::Url;
// `mapped` refers to translating each `T` into a `&dyn ToSql` using the
// `to_sql` function. It's annoying that this is needed.
pub fn each_chunk_mapped<'a, T: 'a>(
items: &'a [T],
chunk_size: usize,
to_sql: impl Fn(&'a T) -> &'a ToSql,
mut do_chunk: impl FnMut(&[&ToSql], usize) -> Result<()>
) -> Result<()> {
if items.is_empty() {
return Ok(());
}
let mut vec = Vec::with_capacity(chunk_size.min(items.len()));
let mut offset = 0;
for chunk in items.chunks(chunk_size) {
vec.clear();
vec.extend(chunk.iter().map(|v| to_sql(v)));
do_chunk(&vec, offset)?;
offset += chunk.len();
}
Ok(())
}
pub fn each_chunk<'a, T: ToSql + 'a>(
items: &[T],
chunk_size: usize,
do_chunk: impl FnMut(&[&ToSql], usize) -> Result<()>
) -> Result<()> {
each_chunk_mapped(items, chunk_size, |t| t as &ToSql, do_chunk)
}
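The second argument handed to `do_chunk` is the running offset of the chunk's first element in the original slice. A minimal standalone sketch of that contract, assuming nothing beyond std:

```rust
// Standalone sketch of the chunk/offset contract: the callback receives
// each fixed-size window plus the index of its first element.
fn each_chunk_sketch<T>(items: &[T], chunk_size: usize, mut f: impl FnMut(&[T], usize)) {
    let mut offset = 0;
    for chunk in items.chunks(chunk_size) {
        f(chunk, offset);
        offset += chunk.len();
    }
}

fn main() {
    let mut seen = Vec::new();
    each_chunk_sketch(&[10, 20, 30, 40, 50, 60, 70], 3, |chunk, off| {
        seen.push((off, chunk.len()));
    });
    // Three windows: full, full, and a final partial one.
    assert_eq!(seen, vec![(0, 3), (3, 3), (6, 1)]);
    println!("ok");
}
```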
#[derive(Debug, Clone)]
pub struct RepeatDisplay<'a, F> {
count: usize,
sep: &'a str,
fmt_one: F
}
impl<'a, F> fmt::Display for RepeatDisplay<'a, F>
where F: Fn(usize, &mut fmt::Formatter) -> fmt::Result {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
for i in 0..self.count {
if i != 0 {
f.write_str(self.sep)?;
}
(self.fmt_one)(i, f)?;
}
Ok(())
}
}
pub fn repeat_display<'a, F>(count: usize, sep: &'a str, fmt_one: F) -> RepeatDisplay<'a, F>
where F: Fn(usize, &mut fmt::Formatter) -> fmt::Result {
RepeatDisplay { count, sep, fmt_one }
}
pub fn sql_vars(count: usize) -> impl fmt::Display {
repeat_display(count, ",", |_, f| write!(f, "?"))
}
pub fn url_host_port(url_str: &str) -> Option<String> {
let url = Url::parse(url_str).ok()?;
let host = url.host_str()?;
Some(if let Some(p) = url.port() {
format!("{}:{}", host, p)
} else {
host.to_string()
})
}
pub fn system_time_millis_from_row(row: &Row, col_name: &str) -> Result<time::SystemTime> {
let time_ms = row.get_checked::<_, Option<i64>>(col_name)?.unwrap_or_default() as u64;
Ok(time::UNIX_EPOCH + time::Duration::from_millis(time_ms))
}
pub fn duration_ms_i64(d: time::Duration) -> i64 {
(d.as_secs() as i64) * 1000 + ((d.subsec_nanos() as i64) / 1_000_000)
}
pub fn system_time_ms_i64(t: time::SystemTime) -> i64 {
duration_ms_i64(t.duration_since(time::UNIX_EPOCH).unwrap_or_default())
}
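The two conversions above compose: a `SystemTime` becomes a `Duration` since the epoch, which truncates down to whole milliseconds. A quick self-contained check of that math (the truncation of sub-millisecond precision is intentional):

```rust
use std::time::{Duration, UNIX_EPOCH};

// Same arithmetic as duration_ms_i64/system_time_ms_i64 above: whole
// seconds scale by 1000, sub-second nanos truncate to milliseconds.
fn duration_ms_i64(d: Duration) -> i64 {
    (d.as_secs() as i64) * 1000 + ((d.subsec_nanos() as i64) / 1_000_000)
}

fn main() {
    // 2 s + 345.678901 ms truncates to 2345 ms.
    assert_eq!(duration_ms_i64(Duration::new(2, 345_678_901)), 2345);
    // Round-tripping an epoch-based timestamp preserves the millisecond value.
    let t = UNIX_EPOCH + Duration::from_millis(1_537_000_000_000);
    assert_eq!(duration_ms_i64(t.duration_since(UNIX_EPOCH).unwrap()), 1_537_000_000_000);
    println!("ok");
}
```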
// Unfortunately, there's no better way to turn on logging in tests, AFAICT.
#[cfg(test)]
pub(crate) fn init_test_logging() {
use env_logger;
use std::sync::{Once, ONCE_INIT};
static INIT_LOGGING: Once = ONCE_INIT;
INIT_LOGGING.call_once(|| {
env_logger::init_from_env(
env_logger::Env::default().filter_or("RUST_LOG", "trace")
);
});
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_vars() {
assert_eq!(format!("{}", sql_vars(1)), "?");
assert_eq!(format!("{}", sql_vars(2)), "?,?");
assert_eq!(format!("{}", sql_vars(3)), "?,?,?");
}
#[test]
fn test_repeat_disp() {
assert_eq!(format!("{}", repeat_display(1, ",", |i, f| write!(f, "({},?)", i))),
"(0,?)");
assert_eq!(format!("{}", repeat_display(2, ",", |i, f| write!(f, "({},?)", i))),
"(0,?),(1,?)");
assert_eq!(format!("{}", repeat_display(3, ",", |i, f| write!(f, "({},?)", i))),
"(0,?),(1,?),(2,?)");
}
}


@ -1,24 +0,0 @@
[package]
name = "logins"
version = "0.0.1"
[lib]
name = "logins"
path = "src/lib.rs"
[dependencies]
chrono = "0.4"
failure = "0.1.1"
failure_derive = "0.1.1"
lazy_static = "0.2"
log = "0.4"
serde = "^1.0.63"
serde_derive = "^1.0.63"
serde_json = "1.0"
[dependencies.mentat]
git = "https://github.com/mozilla/mentat"
tag = "v0.8.1"
features = ["sqlcipher"]
default_features = false


@ -1,646 +0,0 @@
// Copyright 2018 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
//! An interface to *credentials* (username/password pairs, optionally titled) and *logins* (usages
//! at points in time), stored in a Mentat store.
//!
//! [`Credential`] is the main type exposed. Credentials are identified by opaque IDs.
//!
//! Store credentials in Mentat with [`add_credential`]. Retrieve credentials present in the Mentat
//! store with [`get_credential`] and [`get_all_credentials`].
//!
//! Record local usages of a credential with [`touch_by_id`]. Retrieve local metadata about a
//! credential with [`times_used`], [`time_last_used`], and [`time_last_modified`].
//!
//! Remove credentials from the Mentat store with [`delete_by_id`] and [`delete_by_ids`].
use mentat::{
Binding,
DateTime,
Entid,
QueryInputs,
QueryResults,
Queryable,
StructuredMap,
TxReport,
TypedValue,
Utc,
};
use mentat::entity_builder::{
BuildTerms,
TermBuilder,
};
use mentat::conn::{
InProgress,
};
use errors::{
Error,
Result,
};
use types::{
Credential,
CredentialId,
};
use vocab::{
CREDENTIAL_ID,
CREDENTIAL_USERNAME,
CREDENTIAL_PASSWORD,
CREDENTIAL_CREATED_AT,
CREDENTIAL_TITLE,
LOGIN_AT,
// TODO: connect logins to specific LOGIN_DEVICE.
LOGIN_CREDENTIAL,
// TODO: connect logins to LOGIN_FORM.
};
impl Credential {
/// Produce a `Credential` from a structured map (as returned by a pull expression).
pub(crate) fn from_structured_map(map: &StructuredMap) -> Option<Self> {
let id = map[&*CREDENTIAL_ID].as_string().map(|x| (**x).clone()).map(CredentialId).unwrap(); // XXX
let username = map.get(&*CREDENTIAL_USERNAME).and_then(|username| username.as_string()).map(|x| (**x).clone()); // XXX
let password = map[&*CREDENTIAL_PASSWORD].as_string().map(|x| (**x).clone()).unwrap(); // XXX
let created_at = map[&*CREDENTIAL_CREATED_AT].as_instant().map(|x| (*x).clone()).unwrap(); // XXX
let title = map.get(&*CREDENTIAL_TITLE).and_then(|username| username.as_string()).map(|x| (**x).clone()); // XXX
// TODO: device.
Some(Credential {
id,
created_at,
username,
password,
title,
})
}
}
/// Assert the given `credential` against the given `builder`.
///
/// N.b., this uses the (globally) named tempid "c", so it can't be used twice against the same
/// builder!
pub(crate) fn build_credential(builder: &mut TermBuilder, credential: Credential) -> Result<()> {
let c = builder.named_tempid("c");
builder.add(c.clone(),
CREDENTIAL_ID.clone(),
TypedValue::typed_string(credential.id))?;
if let Some(username) = credential.username {
builder.add(c.clone(),
CREDENTIAL_USERNAME.clone(),
TypedValue::String(username.into()))?;
}
builder.add(c.clone(),
CREDENTIAL_PASSWORD.clone(),
TypedValue::String(credential.password.into()))?;
// TODO: set created to the transaction timestamp. This might require implementing
// (transaction-instant), which requires some thought because it is a "delayed binding".
builder.add(c.clone(),
CREDENTIAL_CREATED_AT.clone(),
TypedValue::Instant(credential.created_at))?;
if let Some(title) = credential.title {
builder.add(c.clone(),
CREDENTIAL_TITLE.clone(),
TypedValue::String(title.into()))?;
}
Ok(())
}
/// Transact the given `credential` against the given `InProgress` write.
///
/// If a credential with the given ID exists, it will be modified in place.
pub fn add_credential(in_progress: &mut InProgress, credential: Credential) -> Result<TxReport> {
let mut builder = TermBuilder::new();
build_credential(&mut builder, credential.clone())?;
in_progress.transact_builder(builder).map_err(|e| e.into())
}
/// Fetch the credential with given `id`.
pub fn get_credential<Q>(queryable: &Q, id: CredentialId) -> Result<Option<Credential>> where Q: Queryable {
let q = r#"[:find
(pull ?c [:credential/id :credential/username :credential/password :credential/createdAt :credential/title]) .
:in
?id
:where
[?c :credential/id ?id]
]"#;
let inputs = QueryInputs::with_value_sequence(vec![
(var!(?id), TypedValue::typed_string(&id)),
]);
let scalar = queryable.q_once(q, inputs)?.into_scalar()?;
let credential = match scalar {
Some(Binding::Map(cm)) => Ok(Credential::from_structured_map(cm.as_ref())),
Some(other) => {
error!("Unexpected query result: {:?}", other);
bail!(Error::BadQueryResultType);
},
None => Ok(None),
};
credential
}
/// Fetch all known credentials.
///
/// No ordering is implied.
pub fn get_all_credentials<Q>(queryable: &Q) -> Result<Vec<Credential>>
where Q: Queryable {
let q = r#"[
:find
[?id ...]
:where
[_ :credential/id ?id]
:order
(asc ?id) ; We order for testing convenience.
]"#;
let ids: Result<Vec<_>> = queryable.q_once(q, None)?
.into_coll()?
.into_iter()
.map(|id| {
match id {
Binding::Scalar(TypedValue::String(id)) => Ok(CredentialId((*id).clone())),
other => {
error!("Unexpected query result: {:?}", other);
bail!(Error::BadQueryResultType);
},
}
})
.collect();
let ids = ids?;
// TODO: do this more efficiently.
let mut cs = Vec::with_capacity(ids.len());
for id in ids {
get_credential(queryable, id)?.map(|c| cs.push(c));
}
Ok(cs)
}
/// Record a local usage of the credential with given `id`, optionally `at` the given timestamp.
pub fn touch_by_id(in_progress: &mut InProgress, id: CredentialId, at: Option<DateTime<Utc>>) -> Result<TxReport> {
// TODO: Also record device.
let mut builder = TermBuilder::new();
let l = builder.named_tempid("l");
// New login.
builder.add(l.clone(),
LOGIN_AT.clone(),
// TODO: implement and use (tx-instant).
TypedValue::Instant(at.unwrap_or_else(|| ::mentat::now())))?;
builder.add(l.clone(),
LOGIN_CREDENTIAL.clone(),
TermBuilder::lookup_ref(CREDENTIAL_ID.clone(), TypedValue::typed_string(id)))?;
in_progress.transact_builder(builder).map_err(|e| e.into())
}
/// Delete the credential with the given `id`, if one exists.
pub fn delete_by_id(in_progress: &mut InProgress, id: CredentialId) -> Result<bool> {
Ok(delete_by_ids(in_progress, ::std::iter::once(id))? == 1)
}
/// Delete credentials with the given `ids`, if any exist.
pub fn delete_by_ids<I>(in_progress: &mut InProgress, ids: I) -> Result<usize>
where I: IntoIterator<Item=CredentialId> {
// TODO: implement and use some version of `:db/retractEntity`, rather than onerously deleting
// credential data and usage data.
//
// N.b., I'm not deleting the dangling link from `:sync.password/credential` here. That's a
// choice; not deleting that link allows the Sync password to discover that its underlying
// credential has been removed (although, deleting that link reveals the information as well).
// Using `:db/retractEntity` in some form impacts this decision.
let q = r#"[
:find
?e ?a ?v
:in
?id
:where
(or-join [?e ?a ?v ?id]
(and
[?e :credential/id ?id]
[?e ?a ?v])
(and
[?c :credential/id ?id]
[?e :login/credential ?c]
[?e ?a ?v]))
]"#;
let mut builder = TermBuilder::new();
let mut deleted = 0;
for id in ids {
let inputs = QueryInputs::with_value_sequence(vec![(var!(?id), TypedValue::typed_string(id))]);
let results = in_progress.q_once(q, inputs)?.results;
match results {
QueryResults::Rel(vals) => {
if vals.row_count() > 0 {
deleted += 1;
}
for vs in vals {
match (vs.len(), vs.get(0), vs.get(1), vs.get(2)) {
(3, Some(&Binding::Scalar(TypedValue::Ref(e))), Some(&Binding::Scalar(TypedValue::Ref(a))), Some(&Binding::Scalar(ref v))) => {
builder.retract(e, a, v.clone())?; // TODO: don't clone.
}
other => {
error!("Unexpected query result: {:?}", other);
bail!(Error::BadQueryResultType);
},
}
}
},
other => {
error!("Unexpected query result: {:?}", other);
bail!(Error::BadQueryResultType);
},
}
}
in_progress.transact_builder(builder).map_err(|e| e.into()).and(Ok(deleted))
}
/// Find a credential matching the given `username` and `password`, if one exists.
///
/// It is possible that multiple credentials match, in which case one is chosen at random. (This is
/// an impedance mismatch between the model of logins we're driving towards and the requirements of
/// Sync 1.5 passwords to do content-aware merging.)
pub fn find_credential_by_content<Q>(queryable: &Q, username: String, password: String) -> Result<Option<Credential>>
where Q: Queryable
{
let q = r#"[:find ?id .
:in
?username ?password
:where
[?c :credential/id ?id]
[?c :credential/username ?username]
[?c :credential/password ?password]]"#;
let inputs = QueryInputs::with_value_sequence(vec![(var!(?username), TypedValue::String(username.clone().into())),
(var!(?password), TypedValue::String(password.clone().into()))]);
let id = match queryable.q_once(q, inputs)?.into_scalar()? {
Some(x) => {
match x.into_string() {
Some(x) => CredentialId((*x).clone()),
None => {
error!("Unexpected query result! find_credential_by_content returned None");
bail!(Error::BadQueryResultType);
}
}
}
None => return Ok(None),
};
get_credential(queryable, id)
}
/// Return the number of times the credential with given `id` has been used locally, or `None` if
/// such a credential doesn't exist, optionally limiting to usages strictly after the given
/// `after_tx`.
// TODO: u64.
// TODO: filter by devices.
pub fn times_used<Q>(queryable: &Q, id: CredentialId, after_tx: Option<Entid>) -> Result<Option<i64>>
where Q: Queryable
{
// TODO: Don't run this first query to determine if a credential (ID) exists. This is only here
// because it's surprisingly awkward to return `None` rather than `0` for a non-existent
// credential ID.
if get_credential(queryable, id.clone())?.is_none() {
return Ok(None);
}
let q = r#"[:find
(count ?l) .
:in
?id ?after_tx
:where
[?c :credential/id ?id]
[?l :login/credential ?c]
[?l :login/at _ ?login-tx]
[(tx-after ?login-tx ?after_tx)]]"#;
// TODO: drop the comparison when `after_tx` is `None`.
let values =
QueryInputs::with_value_sequence(vec![(var!(?id), TypedValue::typed_string(&id)),
(var!(?after_tx), TypedValue::Ref(after_tx.unwrap_or(0)))]);
let local_times_used = match queryable.q_once(q, values)?.into_scalar()? {
Some(Binding::Scalar(TypedValue::Long(times_used))) => Some(times_used), // TODO: work out overflow for u64.
None => None,
Some(other) => {
error!("Unexpected result from times_used query! {:?}", other);
bail!(Error::BadQueryResultType);
},
};
Ok(local_times_used)
}
/// Return the last time the credential with given `id` was used locally, or `None` if such a
/// credential doesn't exist, optionally limiting to usages strictly after the given `after_tx`.
// TODO: filter by devices.
pub fn time_last_used<Q>(queryable: &Q, id: CredentialId, after_tx: Option<Entid>) -> Result<Option<DateTime<Utc>>>
where Q: Queryable
{
let q = r#"[:find
(max ?at) .
:in
?id ?after_tx
:where
[?c :credential/id ?id]
[?l :login/credential ?c]
[?l :login/at ?at ?login-tx]
[(tx-after ?login-tx ?after_tx)]
]"#;
// TODO: drop the comparison when `after_tx` is `None`.
let values =
QueryInputs::with_value_sequence(vec![(var!(?id), TypedValue::typed_string(id)),
(var!(?after_tx), TypedValue::Ref(after_tx.unwrap_or(0)))]);
let local_time_last_used = match queryable.q_once(q, values)?.into_scalar()? {
Some(Binding::Scalar(TypedValue::Instant(time_last_used))) => Some(time_last_used),
None => None,
Some(other) => {
error!("Unexpected query result! {:?}", other);
bail!(Error::BadQueryResultType);
}
};
Ok(local_time_last_used)
}
/// Return the last time the credential with given `id` was modified locally, or `None` if such a
/// credential doesn't exist.
pub fn time_last_modified<Q>(queryable: &Q, id: CredentialId) -> Result<Option<DateTime<Utc>>>
where Q: Queryable
{
// TODO: handle optional usernames.
let q = r#"[:find
[?username-txInstant ?password-txInstant]
:in
?id
:where
[?credential :credential/id ?id]
[?credential :credential/username ?username ?username-tx]
[?username-tx :db/txInstant ?username-txInstant]
[?credential :credential/password ?password ?password-tx]
[?password-tx :db/txInstant ?password-txInstant]]"#;
let inputs = QueryInputs::with_value_sequence(vec![(var!(?id), TypedValue::typed_string(id))]);
match queryable.q_once(q, inputs)?.into_tuple()? {
Some((Binding::Scalar(TypedValue::Instant(username_tx_instant)),
Binding::Scalar(TypedValue::Instant(password_tx_instant)))) => {
let last_modified = ::std::cmp::max(username_tx_instant, password_tx_instant);
Ok(Some(last_modified))
},
None => Ok(None),
Some(other) => {
error!("Unexpected query result: {:?}", other);
bail!(Error::BadQueryResultType);
}
}
}
#[cfg(test)]
mod tests {
use mentat::{
FromMicros,
};
use super::*;
use tests::{
testing_store,
};
lazy_static! {
static ref CREDENTIAL1: Credential = {
Credential {
id: CredentialId("1".into()),
username: Some("user1@mockymid.com".into()),
password: "password1".into(),
created_at: DateTime::<Utc>::from_micros(1523908112453),
title: None,
}
};
static ref CREDENTIAL2: Credential = {
Credential {
id: CredentialId("2".into()),
username: Some("user2@mockymid.com".into()),
password: "password2".into(),
created_at: DateTime::<Utc>::from_micros(1523909000000),
title: Some("marché".into()), // Observe accented character.
}
};
static ref CREDENTIAL_WITHOUT_USERNAME: Credential = {
Credential {
id: CredentialId("3".into()),
username: None,
password: "password3".into(),
created_at: DateTime::<Utc>::from_micros(1523909111111),
title: Some("credential without username".into()),
}
};
}
#[test]
fn test_credentials() {
let mut store = testing_store();
let mut in_progress = store.begin_transaction().expect("begun successfully");
// First, let's add a single credential.
add_credential(&mut in_progress, CREDENTIAL1.clone()).expect("to add_credential 1");
let c = get_credential(&in_progress, CREDENTIAL1.id.clone()).expect("to get_credential 1");
assert_eq!(Some(CREDENTIAL1.clone()), c);
let cs = get_all_credentials(&in_progress).expect("to get_all_credentials 1");
assert_eq!(vec![CREDENTIAL1.clone()], cs);
// Now a second one.
add_credential(&mut in_progress, CREDENTIAL2.clone()).expect("to add_credential 2");
let c = get_credential(&in_progress, CREDENTIAL1.id.clone()).expect("to get_credential 1");
assert_eq!(Some(CREDENTIAL1.clone()), c);
let c = get_credential(&in_progress, CREDENTIAL2.id.clone()).expect("to get_credential 2");
assert_eq!(Some(CREDENTIAL2.clone()), c);
let cs = get_all_credentials(&in_progress).expect("to get_all_credentials 2");
assert_eq!(vec![CREDENTIAL1.clone(), CREDENTIAL2.clone()], cs);
}
#[test]
fn test_credential_without_username() {
let mut store = testing_store();
let mut in_progress = store.begin_transaction().expect("begun successfully");
// Let's verify that we can serialize and deserialize a credential without a username.
add_credential(&mut in_progress, CREDENTIAL_WITHOUT_USERNAME.clone()).unwrap();
let c = get_credential(&in_progress, CREDENTIAL_WITHOUT_USERNAME.id.clone()).unwrap();
assert_eq!(Some(CREDENTIAL_WITHOUT_USERNAME.clone()), c);
let cs = get_all_credentials(&in_progress).unwrap();
assert_eq!(vec![CREDENTIAL_WITHOUT_USERNAME.clone()], cs);
}
#[test]
fn test_delete_by_id() {
let mut store = testing_store();
let mut in_progress = store.begin_transaction().expect("begun successfully");
// First, let's add a few credentials.
add_credential(&mut in_progress, CREDENTIAL1.clone()).expect("to add_credential 1");
add_credential(&mut in_progress, CREDENTIAL2.clone()).expect("to add_credential 2");
let deleted = delete_by_id(&mut in_progress, CREDENTIAL1.id.clone()).expect("to delete by id");
assert!(deleted);
// The record's gone.
let c = get_credential(&in_progress,
CREDENTIAL1.id.clone()).expect("to get_credential");
assert_eq!(c, None);
// If we try to delete again, that's okay.
let deleted = delete_by_id(&mut in_progress, CREDENTIAL1.id.clone()).expect("to delete by id when it's already deleted");
assert!(!deleted);
let c = get_credential(&in_progress,
CREDENTIAL1.id.clone()).expect("to get_credential");
assert_eq!(c, None);
// The other password wasn't deleted.
let c = get_credential(&in_progress,
CREDENTIAL2.id.clone()).expect("to get_credential");
assert_eq!(c, Some(CREDENTIAL2.clone()));
}
#[test]
fn test_delete_by_ids() {
let mut store = testing_store();
let mut in_progress = store.begin_transaction().expect("begun successfully");
// First, let's add a few credentials.
add_credential(&mut in_progress, CREDENTIAL1.clone()).expect("to add_credential 1");
add_credential(&mut in_progress, CREDENTIAL2.clone()).expect("to add_credential 2");
let iters = ::std::iter::once(CREDENTIAL1.id.clone()).chain(::std::iter::once(CREDENTIAL2.id.clone()));
let count = delete_by_ids(&mut in_progress, iters.clone()).expect("to delete_by_ids");
assert_eq!(count, 2);
// The records are gone.
let c = get_credential(&in_progress,
CREDENTIAL1.id.clone()).expect("to get_credential");
assert_eq!(c, None);
let c = get_credential(&in_progress,
CREDENTIAL2.id.clone()).expect("to get_credential");
assert_eq!(c, None);
// If we try to delete again, that's okay.
let count = delete_by_ids(&mut in_progress, iters.clone()).expect("to delete_by_ids");
assert_eq!(count, 0);
}
#[test]
fn test_find_credential_by_content() {
let mut store = testing_store();
let mut in_progress = store.begin_transaction().expect("begun successfully");
add_credential(&mut in_progress, CREDENTIAL1.clone()).expect("to add_credential 1");
let c = find_credential_by_content(&in_progress,
CREDENTIAL1.username.clone().unwrap(),
CREDENTIAL1.password.clone()).expect("to find_credential_by_content");
assert_eq!(c, Some(CREDENTIAL1.clone()));
let c = find_credential_by_content(&in_progress,
"incorrect username".to_string(),
CREDENTIAL1.password.clone()).expect("to find_credential_by_content");
assert_eq!(c, None);
let c = find_credential_by_content(&in_progress,
CREDENTIAL1.username.clone().unwrap(),
"incorrect password".to_string()).expect("to find_credential_by_content");
assert_eq!(c, None);
}
#[test]
fn test_times_used() {
let mut store = testing_store();
let mut in_progress = store.begin_transaction().expect("begun successfully");
// First, let's add a few credentials.
add_credential(&mut in_progress, CREDENTIAL1.clone()).expect("to add_credential 1");
add_credential(&mut in_progress, CREDENTIAL2.clone()).expect("to add_credential 2");
let report1 = touch_by_id(&mut in_progress, CREDENTIAL1.id.clone(), None).expect("touch_by_id");
let now1 = ::mentat::now();
let report2 = touch_by_id(&mut in_progress, CREDENTIAL2.id.clone(), None).expect("touch_by_id");
let now2 = ::mentat::now();
touch_by_id(&mut in_progress, CREDENTIAL1.id.clone(), Some(now1)).expect("touch_by_id");
let report3 = touch_by_id(&mut in_progress, CREDENTIAL2.id.clone(), Some(now2)).expect("touch_by_id");
assert_eq!(None, times_used(&in_progress, "unknown credential".into(), None).expect("times_used"));
assert_eq!(Some(2), times_used(&in_progress, CREDENTIAL1.id.clone(), None).expect("times_used"));
assert_eq!(Some(1), times_used(&in_progress, CREDENTIAL1.id.clone(), Some(report1.tx_id)).expect("times_used"));
assert_eq!(Some(1), times_used(&in_progress, CREDENTIAL1.id.clone(), Some(report2.tx_id)).expect("times_used"));
assert_eq!(Some(0), times_used(&in_progress, CREDENTIAL1.id.clone(), Some(report3.tx_id)).expect("times_used"));
assert_eq!(Some(2), times_used(&in_progress, CREDENTIAL2.id.clone(), None).expect("times_used"));
assert_eq!(Some(2), times_used(&in_progress, CREDENTIAL2.id.clone(), Some(report1.tx_id)).expect("times_used"));
assert_eq!(Some(1), times_used(&in_progress, CREDENTIAL2.id.clone(), Some(report2.tx_id)).expect("times_used"));
assert_eq!(Some(0), times_used(&in_progress, CREDENTIAL2.id.clone(), Some(report3.tx_id)).expect("times_used"));
}
#[test]
fn test_last_time_used() {
let mut store = testing_store();
let mut in_progress = store.begin_transaction().expect("begun successfully");
// First, let's add a few credentials.
add_credential(&mut in_progress, CREDENTIAL1.clone()).expect("to add_credential 1");
add_credential(&mut in_progress, CREDENTIAL2.clone()).expect("to add_credential 2");
// Just so there is a visit for credential 2, in case there is an error across credentials.
touch_by_id(&mut in_progress, CREDENTIAL2.id.clone(), None).expect("touch_by_id");
touch_by_id(&mut in_progress, CREDENTIAL1.id.clone(), None).expect("touch_by_id");
let now1 = ::mentat::now();
touch_by_id(&mut in_progress, CREDENTIAL1.id.clone(), Some(now1)).expect("touch_by_id");
assert_eq!(None, time_last_used(&in_progress, "unknown credential".into(), None).expect("time_last_used"));
assert_eq!(Some(now1), time_last_used(&in_progress, CREDENTIAL1.id.clone(), None).expect("time_last_used"));
// This is a little unusual. We're going to record consecutive usages with timestamps going
// backwards in time.
let now2 = ::mentat::now();
let report = touch_by_id(&mut in_progress, CREDENTIAL2.id.clone(), Some(now2)).expect("touch_by_id");
touch_by_id(&mut in_progress, CREDENTIAL2.id.clone(), Some(now1)).expect("touch_by_id");
assert_eq!(Some(now2), time_last_used(&in_progress, CREDENTIAL2.id.clone(), None).expect("time_last_used"));
assert_eq!(Some(now1), time_last_used(&in_progress, CREDENTIAL2.id.clone(), Some(report.tx_id)).expect("time_last_used"));
}
}


@ -1,50 +0,0 @@
// Copyright 2018 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
#![allow(dead_code)]
use std; // To refer to std::result::Result.
use mentat::{
MentatError,
};
use failure::Fail;
pub type Result<T> = std::result::Result<T, Error>;
#[macro_export]
macro_rules! bail {
($e:expr) => (
return Err($e.into());
)
}
#[derive(Debug, Fail)]
pub enum Error {
#[fail(display = "bad query result type")]
BadQueryResultType,
#[fail(display = "{}", _0)]
MentatError(#[cause] MentatError),
}
// Because Mentat doesn't expose its entire API from the top-level `mentat` crate, we sometimes
// witness error types that are logically subsumed by `MentatError`. We wrap those here, since
// _our_ consumers should not care about the specific Mentat error type.
impl<E: Into<MentatError> + std::fmt::Debug> From<E> for Error {
fn from(error: E) -> Error {
error!("MentatError -> LoginsError {:?}", error);
let mentat_err: MentatError = error.into();
if let Some(bt) = mentat_err.backtrace() {
debug!("Backtrace: {:?}", bt);
}
Error::MentatError(mentat_err)
}
}


@ -1,141 +0,0 @@
// Copyright 2018 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
//! This module implements special `serde` support for `ServerPassword` instances.
//!
//! Unfortunately, there doesn't seem to be a good way to directly deserialize `ServerPassword`
//! from JSON because of `target`. In theory `#[serde(flatten)]` on that property would do it, but
//! Firefox for Desktop writes records like `{"httpRealm": null, "formSubmitURL": "..."}`, e.g.,
//! where both fields are present, but one is `null`. This breaks `serde`. We therefore use a
//! custom serializer and deserializer through the `SerializablePassword` type.
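The mutual-exclusion rule described above can be sketched without serde at all. A plain-std illustration (the `resolve_target` name is hypothetical; a JSON `null` deserializes to `None`, so it counts as absent here):

```rust
// Plain-std sketch (no serde) of the deserialization rule described
// above: exactly one of formSubmitURL / httpRealm must be present.
#[derive(Debug, PartialEq)]
enum FormTarget {
    FormSubmitURL(String),
    HttpRealm(String),
}

fn resolve_target(
    form_submit_url: Option<String>,
    http_realm: Option<String>,
) -> Result<FormTarget, &'static str> {
    match (form_submit_url, http_realm) {
        (Some(_), Some(_)) => Err("has both formSubmitURL and httpRealm"),
        (None, None) => Err("missing both formSubmitURL and httpRealm"),
        (Some(url), None) => Ok(FormTarget::FormSubmitURL(url)),
        (None, Some(realm)) => Ok(FormTarget::HttpRealm(realm)),
    }
}

fn main() {
    // A Desktop record {"httpRealm": null, "formSubmitURL": "..."} maps to:
    assert_eq!(
        resolve_target(Some("https://example.com".into()), None),
        Ok(FormTarget::FormSubmitURL("https://example.com".into()))
    );
    assert!(resolve_target(None, None).is_err());
    println!("ok");
}
```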
use serde::{
self,
Deserializer,
Serializer,
};
use mentat::{
DateTime,
FromMillis,
ToMillis,
Utc,
};
use types::{
FormTarget,
ServerPassword,
SyncGuid,
};
fn zero_timestamp() -> DateTime<Utc> {
DateTime::<Utc>::from_millis(0)
}
#[derive(Debug, Clone, Hash, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct SerializablePassword {
pub id: String,
pub hostname: String,
#[serde(rename = "formSubmitURL")]
#[serde(skip_serializing_if = "Option::is_none")]
pub form_submit_url: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub http_realm: Option<String>,
#[serde(default)]
pub username: Option<String>,
pub password: String,
#[serde(default)]
#[serde(skip_serializing_if = "Option::is_none")]
pub username_field: Option<String>,
#[serde(default)]
#[serde(skip_serializing_if = "Option::is_none")]
pub password_field: Option<String>,
#[serde(default)]
pub time_created: i64,
#[serde(default)]
pub time_password_changed: i64,
#[serde(default)]
pub time_last_used: i64,
#[serde(default)]
pub times_used: usize,
}
impl From<ServerPassword> for SerializablePassword {
fn from(sp: ServerPassword) -> SerializablePassword {
let (form_submit_url, http_realm) = match sp.target {
FormTarget::FormSubmitURL(url) => (Some(url), None),
FormTarget::HttpRealm(realm) => (None, Some(realm)),
};
SerializablePassword {
id: sp.uuid.0,
username_field: sp.username_field,
password_field: sp.password_field,
form_submit_url,
http_realm,
hostname: sp.hostname,
username: sp.username,
password: sp.password,
times_used: sp.times_used,
time_password_changed: sp.time_password_changed.to_millis(),
time_last_used: sp.time_last_used.to_millis(),
time_created: sp.time_created.to_millis(),
}
}
}
impl serde::ser::Serialize for ServerPassword {
fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
SerializablePassword::from(self.clone()).serialize(serializer)
}
}
impl<'de> serde::de::Deserialize<'de> for ServerPassword {
fn deserialize<D: Deserializer<'de>>(deserializer: D) -> Result<ServerPassword, D::Error> {
let s = SerializablePassword::deserialize(deserializer)?;
let target = match (s.form_submit_url, s.http_realm) {
(Some(_), Some(_)) =>
return Err(serde::de::Error::custom("ServerPassword has both formSubmitURL and httpRealm")),
(None, None) =>
return Err(serde::de::Error::custom("ServerPassword is missing both formSubmitURL and httpRealm")),
(Some(url), None) =>
FormTarget::FormSubmitURL(url),
(None, Some(realm)) =>
FormTarget::HttpRealm(realm),
};
Ok(ServerPassword {
uuid: SyncGuid(s.id),
modified: zero_timestamp(),
hostname: s.hostname,
username: s.username,
password: s.password,
target,
username_field: s.username_field,
password_field: s.password_field,
times_used: s.times_used,
time_created: FromMillis::from_millis(s.time_created),
time_last_used: FromMillis::from_millis(s.time_last_used),
time_password_changed: FromMillis::from_millis(s.time_password_changed),
})
}
}
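The custom deserializer above rejects records where `formSubmitURL` and `httpRealm` are both present or both absent. A std-only sketch of that check (names are illustrative stand-ins, not this crate's public API):

```rust
#[derive(Debug, PartialEq)]
enum FormTarget {
    HttpRealm(String),
    FormSubmitURL(String),
}

// Mirrors the match in `ServerPassword::deserialize`: exactly one of
// `formSubmitURL` / `httpRealm` must be present. (A present-but-null JSON
// field deserializes to `None`, which is why `#[serde(flatten)]` fails.)
fn target_from_fields(
    form_submit_url: Option<String>,
    http_realm: Option<String>,
) -> Result<FormTarget, String> {
    match (form_submit_url, http_realm) {
        (Some(_), Some(_)) => Err("has both formSubmitURL and httpRealm".into()),
        (None, None) => Err("missing both formSubmitURL and httpRealm".into()),
        (Some(url), None) => Ok(FormTarget::FormSubmitURL(url)),
        (None, Some(realm)) => Ok(FormTarget::HttpRealm(realm)),
    }
}

fn main() {
    assert_eq!(
        target_from_fields(Some("https://example.com".into()), None),
        Ok(FormTarget::FormSubmitURL("https://example.com".into()))
    );
    assert!(target_from_fields(None, None).is_err());
}
```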

@ -1,123 +0,0 @@
// Copyright 2018 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
//! This crate is an interface for working with Sync 1.5 passwords and arbitrary logins.
//!
//! We use "passwords" or "password records" to talk about Sync 1.5's object format stored in the
//! "passwords" collection. We use "logins" to talk about local credentials, which will grow to be
//! more general than Sync 1.5's limited object format.
//!
//! For Sync 1.5 passwords, we reference the somewhat outdated but still useful [client
//! documentation](https://mozilla-services.readthedocs.io/en/latest/sync/objectformats.html#passwords).
//!
//! # Data model
//!
//! There are three fundamental parts to the model of logins implemented:
//! 1. *credentials* are username/password pairs
//! 1. *forms* are contexts where credentials can be used
//! 1. *logins* are usages: this *credential* was used to login to this *form*
//!
//! In this model, a user might have a single username/password pair for their Google Account;
//! enter it into multiple forms (say, login forms on "mail.google.com" and "calendar.google.com",
//! and a password reset form on "accounts.google.com"); and have used the login forms weekly but
//! the password reset form only once.
//!
//! This model can grow to accommodate new types of credentials and new contexts for usage. A new
//! credential might be a hardware key (like Yubikey) that is identified by a device serial number;
//! or it might be a cookie from a web browser login. And a password manager might be on a mobile
//! device and not embedded in a Web browser: it might provide credentials to specific Apps as a
//! platform-specific password filling API. In this case, the context is not a *form*.
//!
//! To support Sync 1.5, we add a fourth fundamental part to the model: a Sync password notion that
//! glues together a credential, a form, and some materialized logins usage data. The
//! [`ServerPassword`] type captures these notions.
//!
//! # Limitations of the Sync 1.5 object model
//!
//! There are many limitations of the Sync 1.5 object model, but the two most significant for this
//! implementation are:
//!
//! 1. A consumer that is *not a Web browser* can't smoothly create Sync 1.5 password records!
//! Consider the password manager on a mobile device not embedded in a Web browser: there is no way
//! for it to associate login usage with a particular web site, let alone a particular form. That
//! is, the only usage context that Sync 1.5 password records accommodates looks exactly like
//! Firefox's usage context. (Any consumer can fabricate required entries in the `ServerPassword`
//! type, or require the user to provide them -- but the product experience will suffer.)
//!
//! 1. It can't represent the use of the same username/password pair across more than one site,
//! leading to the creation of add-ons like
//! [mass-password-reset](https://addons.mozilla.org/en-US/firefox/addon/mass-password-reset/). There
//! is a many-to-many relationship between credentials and forms. Firefox Desktop and Firefox Sync
//! both duplicate credentials when they're saved after use in multiple places. But conversely,
//! note that there are situations in which the same username and password mean different things:
//! the most common is password reuse.
#![recursion_limit="128"]
#![crate_name = "logins"]
extern crate chrono;
extern crate failure;
#[macro_use] extern crate failure_derive;
#[macro_use] extern crate log;
#[macro_use] extern crate lazy_static;
extern crate serde;
#[macro_use] extern crate serde_derive;
extern crate serde_json;
#[macro_use] extern crate mentat;
pub mod credentials;
pub mod errors;
pub use errors::{
Error,
Result,
};
mod json;
pub mod passwords;
pub mod types;
pub use types::{
Credential,
CredentialId,
FormTarget,
ServerPassword,
SyncGuid,
};
mod vocab;
pub use vocab::{
CREDENTIAL_VOCAB,
FORM_VOCAB,
LOGIN_VOCAB,
ensure_vocabulary,
};
#[cfg(test)]
mod tests {
use super::*;
use mentat::{
Store,
};
pub(crate) fn testing_store() -> Store {
let mut store = Store::open("").expect("opened");
// Scoped borrow of `store`.
{
let mut in_progress = store.begin_transaction().expect("begun successfully");
ensure_vocabulary(&mut in_progress).expect("to ensure_vocabulary");
in_progress.commit().expect("commit succeeded");
}
store
}
}

(Diff for this file is not shown because of its size.)

@ -1,122 +0,0 @@
// Copyright 2018 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
//! This module defines some core types that support Sync 1.5 passwords and arbitrary logins.
use std::convert::{
AsRef,
};
use mentat::{
DateTime,
Utc,
Uuid,
};
/// Firefox Sync password records must have at least a formSubmitURL or httpRealm, but not both.
#[derive(PartialEq, Eq, Hash, Clone, Debug)] // , Serialize, Deserialize)]
pub enum FormTarget {
// #[serde(rename = "httpRealm")]
HttpRealm(String),
// #[serde(rename = "formSubmitURL")]
FormSubmitURL(String),
}
#[derive(PartialEq, Eq, Hash, Clone, Debug, Serialize, Deserialize)]
pub struct SyncGuid(pub String);
impl AsRef<str> for SyncGuid {
fn as_ref(&self) -> &str {
self.0.as_ref()
}
}
impl<T> From<T> for SyncGuid where T: Into<String> {
fn from(x: T) -> SyncGuid {
SyncGuid(x.into())
}
}
/// A Sync 1.5 password record.
#[derive(PartialEq, Eq, Hash, Clone, Debug)]
pub struct ServerPassword {
/// The UUID of this record, returned by the remote server as part of this record's envelope.
///
/// For historical reasons, Sync 1.5 passwords use a UUID rather than a (9 character) GUID like
/// other collections.
pub uuid: SyncGuid,
/// The time last modified, returned by the remote server as part of this record's envelope.
pub modified: DateTime<Utc>,
/// Material fields. A password without a username corresponds to an XXX.
pub hostname: String,
pub username: Option<String>,
pub password: String,
pub target: FormTarget,
/// Metadata. Unfortunately, not all clients pass-through (let alone collect and propagate!)
/// metadata correctly.
pub times_used: usize,
pub time_created: DateTime<Utc>,
pub time_last_used: DateTime<Utc>,
pub time_password_changed: DateTime<Utc>,
/// Mostly deprecated: these fields were once used to help with form fill.
pub username_field: Option<String>,
pub password_field: Option<String>,
}
#[derive(PartialEq, Eq, Hash, Clone, Debug)]
pub struct CredentialId(pub String);
impl AsRef<str> for CredentialId {
fn as_ref(&self) -> &str {
self.0.as_ref()
}
}
impl CredentialId {
pub fn random() -> Self {
CredentialId(Uuid::new_v4().hyphenated().to_string())
}
}
impl<T> From<T> for CredentialId where T: Into<String> {
fn from(x: T) -> CredentialId {
CredentialId(x.into())
}
}
/// A username/password pair, optionally decorated with a user-specified title.
///
/// A credential is uniquely identified by its `id`.
#[derive(PartialEq, Eq, Hash, Clone, Debug)]
pub struct Credential {
/// A stable opaque identifier uniquely naming this credential.
pub id: CredentialId,
/// The username associated with this credential.
pub username: Option<String>,
/// The password associated with this credential.
pub password: String,
/// When the credential was created. This is best-effort: it's the timestamp observed by the
/// device on which the credential was created, which is incomparable with timestamps observed by
/// other devices in the constellation (including any servers).
pub created_at: DateTime<Utc>,
/// An optional user-specified title of this credential, like `My LDAP`.
pub title: Option<String>,
}
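The `DateTime<Utc>` metadata fields above travel over Sync as integer milliseconds since the epoch. A std-only sketch of that round-trip, standing in for the `FromMillis`/`ToMillis` conversions from mentat that the real code uses:

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

// Hypothetical std-only stand-in for mentat's `FromMillis`.
fn from_millis(ms: u64) -> SystemTime {
    UNIX_EPOCH + Duration::from_millis(ms)
}

// Hypothetical std-only stand-in for mentat's `ToMillis`.
fn to_millis(t: SystemTime) -> u64 {
    t.duration_since(UNIX_EPOCH).expect("after epoch").as_millis() as u64
}

fn main() {
    // Round-tripping a `timeCreated`-style value loses nothing.
    let created = from_millis(1_534_377_551_000);
    assert_eq!(to_millis(created), 1_534_377_551_000);
}
```

Note the real fields are signed (`i64`) in the record format; `u64` here keeps the sketch minimal.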

@ -1,376 +0,0 @@
// Copyright 2018 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
use mentat::{
InProgress,
Keyword,
ValueType,
};
use mentat::vocabulary;
use mentat::vocabulary::{
VersionedStore,
};
use errors::{
Result,
};
lazy_static! {
pub static ref CREDENTIAL_ID: Keyword = {
kw!(:credential/id)
};
pub static ref CREDENTIAL_USERNAME: Keyword = {
kw!(:credential/username)
};
pub static ref CREDENTIAL_PASSWORD: Keyword = {
kw!(:credential/password)
};
pub static ref CREDENTIAL_CREATED_AT: Keyword = {
kw!(:credential/createdAt)
};
pub static ref CREDENTIAL_TITLE: Keyword = {
kw!(:credential/title)
};
/// The vocabulary describing *credentials*, i.e., username/password pairs; `:credential/*`.
///
/// ```edn
/// [:credential/username :db.type/string :db.cardinality/one]
/// [:credential/password :db.type/string :db.cardinality/one]
/// [:credential/createdAt :db.type/instant :db.cardinality/one]
/// ; An application might allow users to name their credentials; e.g., "My LDAP".
/// [:credential/title :db.type/string :db.cardinality/one]
/// ```
pub static ref CREDENTIAL_VOCAB: vocabulary::Definition = {
vocabulary::Definition {
name: kw!(:org.mozilla/credential),
version: 1,
attributes: vec![
(CREDENTIAL_ID.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::String)
.unique(vocabulary::attribute::Unique::Identity)
.multival(false)
.build()),
(CREDENTIAL_USERNAME.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::String)
.multival(false)
.build()),
(CREDENTIAL_PASSWORD.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::String)
.multival(false)
.build()),
(CREDENTIAL_CREATED_AT.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::Instant)
.multival(false)
.build()),
(CREDENTIAL_TITLE.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::String)
.multival(false)
.build()),
],
pre: vocabulary::Definition::no_op,
post: vocabulary::Definition::no_op,
}
};
pub static ref LOGIN_AT: Keyword = {
kw!(:login/at)
};
pub static ref LOGIN_DEVICE: Keyword = {
kw!(:login/device)
};
pub static ref LOGIN_CREDENTIAL: Keyword = {
kw!(:login/credential)
};
pub static ref LOGIN_FORM: Keyword = {
kw!(:login/form)
};
/// The vocabulary describing *logins* (usages); `:logins/*`.
///
/// This is metadata capturing user behavior.
///
/// ```edn
/// [:login/at :db.type/instant :db.cardinality/one]
/// [:login/device :db.type/ref :db.cardinality/one]
/// [:login/credential :db.type/ref :db.cardinality/one]
/// [:login/form :db.type/ref :db.cardinality/one]
/// ```
pub static ref LOGIN_VOCAB: vocabulary::Definition = {
vocabulary::Definition {
name: kw!(:org.mozilla/login),
version: 1,
attributes: vec![
(LOGIN_AT.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::Instant)
.multival(false)
.build()),
(LOGIN_DEVICE.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::Ref)
.multival(false)
.build()),
(LOGIN_CREDENTIAL.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::Ref)
.multival(false)
.build()),
(LOGIN_FORM.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::Ref)
.multival(false)
.build()),
],
pre: vocabulary::Definition::no_op,
post: vocabulary::Definition::no_op,
}
};
pub static ref FORM_HOSTNAME: Keyword = {
kw!(:form/hostname)
};
pub static ref FORM_SUBMIT_URL: Keyword = {
kw!(:form/submitUrl)
};
pub static ref FORM_USERNAME_FIELD: Keyword = {
kw!(:form/usernameField)
};
pub static ref FORM_PASSWORD_FIELD: Keyword = {
kw!(:form/passwordField)
};
pub static ref FORM_HTTP_REALM: Keyword = {
kw!(:form/httpRealm)
};
// This is arguably backwards. In the future, we'd like forms to be independent of Sync 1.5
// password records, in the way that we're making credentials independent of password records.
// For now, however, we don't want to add an identifier and identify forms by content, so we're
// linking a form to a unique Sync password. Having the link go in this direction lets us
// upsert the form.
pub static ref FORM_SYNC_PASSWORD: Keyword = {
kw!(:form/syncPassword)
};
/// The vocabulary describing *forms* (usage contexts in a Web browser); `:forms/*`.
///
/// A form is either an HTTP login box _or_ a Web form.
///
/// ```edn
/// [:form/httpRealm :db.type/string :db.cardinality/one]
/// ; It's possible that hostname or submitUrl are unique-identity attributes.
/// [:form/hostname :db.type/string :db.cardinality/one]
/// [:form/submitUrl :db.type/string :db.cardinality/one]
/// [:form/usernameField :db.type/string :db.cardinality/one]
/// [:form/passwordField :db.type/string :db.cardinality/one]
/// ```
pub static ref FORM_VOCAB: vocabulary::Definition = {
vocabulary::Definition {
name: kw!(:org.mozilla/form),
version: 1,
attributes: vec![
(FORM_SYNC_PASSWORD.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::Ref)
.multival(false)
.unique(vocabulary::attribute::Unique::Identity)
.build()),
(FORM_HOSTNAME.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::String)
.multival(false)
.build()),
(FORM_SUBMIT_URL.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::String)
.multival(false)
.build()),
(FORM_USERNAME_FIELD.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::String)
.multival(false)
.build()),
(FORM_PASSWORD_FIELD.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::String)
.multival(false)
.build()),
(FORM_HTTP_REALM.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::String)
.multival(false)
.build()),
],
pre: vocabulary::Definition::no_op,
post: vocabulary::Definition::no_op,
}
};
pub(crate) static ref SYNC_PASSWORD_UUID: Keyword = {
kw!(:sync.password/uuid)
};
pub(crate) static ref SYNC_PASSWORD_CREDENTIAL: Keyword = {
kw!(:sync.password/credential)
};
// Use materialTx for material change comparisons, metadataTx for metadata change
// comparisons. Downloading updates materialTx only. We only use materialTx to
// determine whether or not to upload. Uploaded records are built using metadataTx,
// however. Successful upload sets both materialTx and metadataTx.
pub(crate) static ref SYNC_PASSWORD_MATERIAL_TX: Keyword = {
kw!(:sync.password/materialTx)
};
pub(crate) static ref SYNC_PASSWORD_METADATA_TX: Keyword = {
kw!(:sync.password/metadataTx)
};
pub(crate) static ref SYNC_PASSWORD_SERVER_MODIFIED: Keyword = {
kw!(:sync.password/serverModified)
};
pub(crate) static ref SYNC_PASSWORD_TIMES_USED: Keyword = {
kw!(:sync.password/timesUsed)
};
pub(crate) static ref SYNC_PASSWORD_TIME_CREATED: Keyword = {
kw!(:sync.password/timeCreated)
};
pub(crate) static ref SYNC_PASSWORD_TIME_LAST_USED: Keyword = {
kw!(:sync.password/timeLastUsed)
};
pub(crate) static ref SYNC_PASSWORD_TIME_PASSWORD_CHANGED: Keyword = {
kw!(:sync.password/timePasswordChanged)
};
/// The vocabulary describing *Sync 1.5 passwords*; `:sync.password/*`.
///
/// A Sync 1.5 password joins a credential (via `:sync.password/credential`), a form (via the inverse relationship `:form/syncPassword`), and usages together.
///
/// Consumers should not use this vocabulary directly; it is here only to support Sync 1.5.
pub(crate) static ref SYNC_PASSWORD_VOCAB: vocabulary::Definition = {
vocabulary::Definition {
name: kw!(:org.mozilla/sync.password),
version: 1,
attributes: vec![
(SYNC_PASSWORD_CREDENTIAL.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::Ref)
.multival(false)
.unique(vocabulary::attribute::Unique::Identity)
.build()),
(SYNC_PASSWORD_UUID.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::String)
.multival(false)
.unique(vocabulary::attribute::Unique::Identity)
.build()),
(SYNC_PASSWORD_MATERIAL_TX.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::Ref)
.multival(false)
.build()),
(SYNC_PASSWORD_METADATA_TX.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::Ref)
.multival(false)
.build()),
(SYNC_PASSWORD_SERVER_MODIFIED.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::Instant)
.multival(false)
.build()),
(SYNC_PASSWORD_TIMES_USED.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::Long)
.multival(false)
.build()),
(SYNC_PASSWORD_TIME_CREATED.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::Instant)
.multival(false)
.build()),
(SYNC_PASSWORD_TIME_LAST_USED.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::Instant)
.multival(false)
.build()),
(SYNC_PASSWORD_TIME_PASSWORD_CHANGED.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::Instant)
.multival(false)
.build()),
],
pre: vocabulary::Definition::no_op,
post: vocabulary::Definition::no_op,
}
};
pub(crate) static ref SYNC_PASSWORDS_LAST_SERVER_TIMESTAMP: Keyword = {
kw!(:sync.passwords/lastServerTimestamp)
};
/// The vocabulary describing the last time the Sync 1.5 "passwords" collection was synced.
///
/// Consumers should not use this vocabulary directly; it is here only to support Sync 1.5.
pub(crate) static ref SYNC_PASSWORDS_VOCAB: vocabulary::Definition = {
vocabulary::Definition {
name: kw!(:org.mozilla/sync.passwords),
version: 1,
attributes: vec![
(SYNC_PASSWORDS_LAST_SERVER_TIMESTAMP.clone(),
vocabulary::AttributeBuilder::helpful()
.value_type(ValueType::Double)
.multival(false)
.build()),
],
pre: vocabulary::Definition::no_op,
post: vocabulary::Definition::no_op,
}
};
}
/// Ensure that the Mentat vocabularies describing *credentials*, *logins*, *forms*, and *Sync 1.5
/// passwords* are present in the store.
///
/// This will install or upgrade the vocabularies as necessary, and should be called by every
/// consumer early in its lifecycle.
pub fn ensure_vocabulary(in_progress: &mut InProgress) -> Result<()> {
debug!("Ensuring logins vocabulary is installed.");
in_progress.verify_core_schema()?;
in_progress.ensure_vocabulary(&CREDENTIAL_VOCAB)?;
in_progress.ensure_vocabulary(&LOGIN_VOCAB)?;
in_progress.ensure_vocabulary(&FORM_VOCAB)?;
in_progress.ensure_vocabulary(&SYNC_PASSWORD_VOCAB)?;
in_progress.ensure_vocabulary(&SYNC_PASSWORDS_VOCAB)?;
Ok(())
}
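`ensure_vocabulary` is meant to be idempotent: install each definition when absent, upgrade when the stored version is older, and do nothing when current. A std-only sketch of that contract, with a `HashMap` standing in for the store (the real code delegates to mentat's `InProgress::ensure_vocabulary`, so everything here is illustrative):

```rust
use std::collections::HashMap;

// vocabulary name -> installed version
type Store = HashMap<String, u32>;

// Install if absent, upgrade if older, no-op if already current.
// Returns whether the store was modified.
fn ensure(store: &mut Store, name: &str, version: u32) -> bool {
    match store.get(name) {
        Some(&v) if v >= version => false, // already current
        _ => {
            store.insert(name.to_string(), version);
            true // installed or upgraded
        }
    }
}

fn main() {
    let mut store = Store::new();
    assert!(ensure(&mut store, ":org.mozilla/credential", 1)); // installs
    assert!(!ensure(&mut store, ":org.mozilla/credential", 1)); // idempotent no-op
}
```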

@ -176,7 +176,7 @@ impl Sync15StorageClient {
resp.url().path()
);
return Err(ErrorKind::StorageHttpError {
code: resp.status(),
code: resp.status().as_u16(),
route: resp.url().path().into(),
}.into());
}

@ -9,7 +9,7 @@ use error::Result;
use record_types::CryptoKeysRecord;
use util::ServerTimestamp;
#[derive(Clone, Debug, PartialEq)]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub struct CollectionKeys {
pub timestamp: ServerTimestamp,
pub default: KeyBundle,

@ -3,7 +3,7 @@
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use std::time::SystemTime;
use reqwest::{self, StatusCode as HttpStatusCode};
use reqwest;
use failure::{self, Fail, Context, Backtrace, SyncFailure};
use std::{fmt, result, string};
use std::boxed::Box;
@ -44,7 +44,7 @@ impl Error {
pub fn is_not_found(&self) -> bool {
match self.kind() {
ErrorKind::StorageHttpError { code: HttpStatusCode::NotFound, .. } => true,
ErrorKind::StorageHttpError { code: 404, .. } => true,
_ => false
}
}
@ -72,12 +72,11 @@ pub enum ErrorKind {
#[fail(display = "SHA256 HMAC Mismatch error")]
HmacMismatch,
// TODO: it would be nice if this were _0.to_u16(), but we can't have an expression there...
#[fail(display = "HTTP status {} when requesting a token from the tokenserver", _0)]
TokenserverHttpError(HttpStatusCode),
TokenserverHttpError(u16),
#[fail(display = "HTTP status {} during a storage request to \"{}\"", code, route)]
StorageHttpError { code: HttpStatusCode, route: String },
StorageHttpError { code: u16, route: String },
#[fail(display = "Server requested backoff. Retry after {:?}", _0)]
BackoffError(SystemTime),
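The hunks above swap `reqwest::StatusCode` for a bare `u16` in the error kinds, so `is_not_found` now matches the literal `404` instead of `HttpStatusCode::NotFound`. A minimal std-only sketch of the resulting shape:

```rust
#[derive(Debug)]
enum ErrorKind {
    TokenserverHttpError(u16),
    StorageHttpError { code: u16, route: String },
}

// With a plain u16 code, a 404 check is a literal pattern match,
// as in the hunk above.
fn is_not_found(kind: &ErrorKind) -> bool {
    match kind {
        ErrorKind::StorageHttpError { code: 404, .. } => true,
        _ => false,
    }
}

fn main() {
    let err = ErrorKind::StorageHttpError {
        code: 404,
        route: "storage/passwords".into(), // illustrative route
    };
    assert!(is_not_found(&err));
    assert!(!is_not_found(&ErrorKind::TokenserverHttpError(503)));
}
```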

@ -10,7 +10,7 @@ use openssl::hash::MessageDigest;
use openssl::pkey::PKey;
use openssl::sign::Signer;
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
#[derive(Clone, PartialEq, Eq, Hash, Debug, Serialize, Deserialize)]
pub struct KeyBundle {
enc_key: Vec<u8>,
mac_key: Vec<u8>,

@ -198,7 +198,7 @@ impl LimitTracker {
}
}
#[derive(Deserialize, Debug, Clone)]
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct InfoConfiguration {
/// The maximum size in bytes of the overall HTTP request body that will be accepted by the
/// server.
@ -247,7 +247,7 @@ impl Default for InfoConfiguration {
}
}
#[derive(Clone, Debug, Default, Deserialize)]
#[derive(Clone, Debug, Default, Deserialize, Serialize)]
pub struct InfoCollections(HashMap<String, ServerTimestamp>);
impl InfoCollections {
@ -363,7 +363,7 @@ impl PostResponseHandler for NormalResponseHandler {
return Err(ErrorKind::BatchInterrupted.into());
} else {
return Err(ErrorKind::StorageHttpError {
code: r.status,
code: r.status.as_u16(),
route: "collection storage (TODO: record route somewhere)".into()
}.into());
}
@ -519,7 +519,7 @@ where
let resp = resp_or_error?;
if !resp.status.is_success() {
let code = resp.status;
let code = resp.status.as_u16();
self.on_response.handle_response(resp, !want_commit)?;
error!("Bug: expected OnResponse to have bailed out!");
// Should we assert here instead?

@ -12,6 +12,7 @@ use key_bundle::KeyBundle;
use record_types::{MetaGlobalEngine, MetaGlobalRecord};
use request::{InfoCollections, InfoConfiguration};
use util::{random_guid, ServerTimestamp, SERVER_EPOCH};
use serde_json;
use self::SetupState::*;
@ -39,10 +40,16 @@ lazy_static! {
static ref DEFAULT_DECLINED: Vec<&'static str> = vec![];
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(tag = "schema_version")]
enum PersistedState {
V1(GlobalState),
}
/// Holds global Sync state, including server upload limits, and the
/// last-fetched collection modified times, `meta/global` record, and
/// collection encryption keys.
#[derive(Debug, Default)]
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct GlobalState {
pub config: InfoConfiguration,
pub collections: InfoCollections,
@ -52,6 +59,18 @@ pub struct GlobalState {
}
impl GlobalState {
pub fn to_persistable_string(&self) -> String {
let state = PersistedState::V1(self.clone());
serde_json::to_string(&state)
.expect("Should only fail for recursive types (this is not recursive)")
}
pub fn from_persisted_string(data: &str) -> error::Result<Self> {
match serde_json::from_str(data)? {
PersistedState::V1(global_state) => Ok(global_state)
}
}
pub fn key_for_collection(&self, collection: &str) -> error::Result<&KeyBundle> {
Ok(self.keys
.as_ref()
@ -590,7 +609,7 @@ impl FetchAction {
}
/// Flags an engine for enablement or disablement.
#[derive(Debug)]
#[derive(Debug, Serialize, Deserialize, Clone)]
pub enum EngineStateChange {
ResetAll,
ResetAllExcept(HashSet<String>),
@ -602,7 +621,6 @@ pub enum EngineStateChange {
#[cfg(test)]
mod tests {
use super::*;
use reqwest;
use bso_record::{BsoRecord, EncryptedBso, EncryptedPayload};
@ -618,7 +636,7 @@ mod tests {
match &self.info_configuration {
Ok(config) => Ok(config.clone()),
Err(_) => Err(ErrorKind::StorageHttpError {
code: reqwest::StatusCode::InternalServerError,
code: 500,
route: "info/configuration".to_string(),
}.into()),
}
@ -628,7 +646,7 @@ mod tests {
match &self.info_collections {
Ok(collections) => Ok(collections.clone()),
Err(_) => Err(ErrorKind::StorageHttpError {
code: reqwest::StatusCode::InternalServerError,
code: 500,
route: "info/collections".to_string(),
}.into()),
}
@ -640,15 +658,15 @@ mod tests {
// TODO(lina): Special handling for 404s, we want to ensure we
// handle missing keys and other server errors correctly.
Err(_) => Err(ErrorKind::StorageHttpError {
code: reqwest::StatusCode::InternalServerError,
code: 500,
route: "meta/global".to_string(),
}.into()),
}
}
fn put_meta_global(&self, global: &BsoRecord<MetaGlobalRecord>) -> error::Result<()> {
fn put_meta_global(&self, _global: &BsoRecord<MetaGlobalRecord>) -> error::Result<()> {
Err(ErrorKind::StorageHttpError {
code: reqwest::StatusCode::InternalServerError,
code: 500,
route: "meta/global".to_string(),
}.into())
}
@ -658,15 +676,15 @@ mod tests {
Ok(keys) => Ok(keys.clone()),
// TODO(lina): Same as above, for 404s.
Err(_) => Err(ErrorKind::StorageHttpError {
code: reqwest::StatusCode::InternalServerError,
code: 500,
route: "crypto/keys".to_string(),
}.into()),
}
}
fn put_crypto_keys(&self, keys: &EncryptedBso) -> error::Result<()> {
fn put_crypto_keys(&self, _keys: &EncryptedBso) -> error::Result<()> {
Err(ErrorKind::StorageHttpError {
code: reqwest::StatusCode::InternalServerError,
code: 500,
route: "crypto/keys".to_string(),
}.into())
}
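The new `PersistedState` wrapper tags serialized state with a schema version so future formats can be added without breaking old payloads. A std-only sketch of the same version-envelope pattern (the real code uses `serde_json` with `#[serde(tag = "schema_version")]`; the `V1:` string format here is purely illustrative):

```rust
#[derive(Debug, PartialEq)]
struct GlobalState {
    payload: String, // stands in for config/collections/keys
}

#[derive(Debug, PartialEq)]
enum PersistedState {
    V1(GlobalState),
}

// Serialize with an explicit version tag up front.
fn to_persistable_string(state: &PersistedState) -> String {
    match state {
        PersistedState::V1(g) => format!("V1:{}", g.payload),
    }
}

// Dispatch on the version tag; unknown tags fail loudly rather than
// silently misreading a future format.
fn from_persisted_string(data: &str) -> Result<PersistedState, String> {
    match data.split_once(':') {
        Some(("V1", rest)) => Ok(PersistedState::V1(GlobalState {
            payload: rest.to_string(),
        })),
        _ => Err(format!("unknown persisted state: {}", data)),
    }
}

fn main() {
    let state = PersistedState::V1(GlobalState { payload: "keys".into() });
    let s = to_persistable_string(&state);
    assert_eq!(from_persisted_string(&s), Ok(state));
}
```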

@ -46,9 +46,6 @@ where E: From<error::Error>
info!("Downloaded {} remote changes", incoming_changes.changes.len());
let mut outgoing = store.apply_incoming(incoming_changes)?;
assert_eq!(outgoing.timestamp, timestamp,
"last sync timestamp should never change unless we change it");
outgoing.timestamp = last_changed_remote;
info!("Uploading {} outgoing changes", outgoing.changes.len());

@ -82,7 +82,8 @@ impl TokenFetcher for TokenServerFetcher {
let when = self.now() + Duration::from_millis(ms);
return Err(ErrorKind::BackoffError(when).into());
}
return Err(ErrorKind::TokenserverHttpError(resp.status()).into());
let status = resp.status().as_u16();
return Err(ErrorKind::TokenserverHttpError(status).into());
}
let token: TokenserverToken = resp.json()?;

@ -1,35 +0,0 @@
[package]
name = "sync15_passwords"
version = "0.1.0"
[lib]
name = "sync15_passwords"
path = "src/lib.rs"
[dependencies]
failure = "0.1.1"
failure_derive = "0.1.1"
log = "0.4"
serde = "^1.0.63"
serde_derive = "^1.0.63"
serde_json = "1.0"
[dev-dependencies]
env_logger = "0.5"
prettytable-rs = "0.6"
url = "1.6.0"
[dependencies.sync15-adapter]
path = "../../sync15-adapter"
[dependencies.mentat]
git = "https://github.com/mozilla/mentat"
tag = "v0.8.1"
features = ["sqlcipher"]
default_features = false
[dependencies.logins]
path = "../../logins"
# features = ["sqlcipher"]
# default_features = false

@ -1 +0,0 @@
../tests/sync_pass_mentat.rs

@ -1,32 +0,0 @@
[package]
name = "loginsapi_ffi"
version = "0.1.0"
authors = ["Mark Hammond <mhammond@skippinet.com.au>"]
[lib]
name = "loginsapi_ffi"
crate-type = ["lib", "staticlib", "cdylib"]
[dependencies]
serde_json = "1.0"
failure = "0.1.1"
log = "0.4"
url = "1.6.0"
reqwest = "0.8.2"
[dependencies.ffi-toolkit]
#path="../../../../ffi-toolkit"
git = "https://github.com/mozilla/ffi-toolkit.git"
branch = "master"
[dependencies.sync15-adapter]
path = "../../../sync15-adapter"
[dependencies.sync15_passwords]
path = ".."
[dependencies.mentat]
git = "https://github.com/mozilla/mentat"
tag = "v0.8.1"
features = ["sqlcipher"]
default_features = false

@ -1,192 +0,0 @@
// Copyright 2018 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
use ffi_toolkit::string::{
string_to_c_char
};
use std::ptr;
use std::os::raw::c_char;
use sync15_passwords::{
Result,
Sync15PasswordsError,
Sync15PasswordsErrorKind,
};
use reqwest::StatusCode;
use sync::{
ErrorKind as Sync15ErrorKind
};
pub unsafe fn with_translated_result<F, T>(error: *mut ExternError, callback: F) -> *mut T
where F: FnOnce() -> Result<T> {
translate_result(callback(), error)
}
pub unsafe fn with_translated_void_result<F>(error: *mut ExternError, callback: F)
where F: FnOnce() -> Result<()> {
translate_void_result(callback(), error);
}
pub unsafe fn with_translated_value_result<F, T>(error: *mut ExternError, callback: F) -> T
where F: FnOnce() -> Result<T>, T: Default {
try_translate_result(callback(), error).unwrap_or_default()
}
pub unsafe fn with_translated_string_result<F>(error: *mut ExternError, callback: F) -> *mut c_char
where F: FnOnce() -> Result<String> {
if let Some(s) = try_translate_result(callback(), error) {
string_to_c_char(s)
} else {
ptr::null_mut()
}
}
pub unsafe fn with_translated_opt_string_result<F>(error: *mut ExternError, callback: F) -> *mut c_char
where F: FnOnce() -> Result<Option<String>> {
if let Some(Some(s)) = try_translate_result(callback(), error) {
string_to_c_char(s)
} else {
// This is either an error case, or callback returned None.
ptr::null_mut()
}
}
/// C-compatible error code. Negative codes are not expected to be handled by
/// the application, a code of zero indicates that no error occurred, and a
/// positive error code indicates an error that will likely need to be handled
/// by the application.
#[repr(i32)]
#[derive(Clone, Copy, Debug)]
pub enum ExternErrorCode {
// TODO: When/if we make this API panic-safe, add an `UnexpectedPanic = -2`
/// An unexpected error occurred which likely cannot be meaningfully handled
/// by the application.
OtherError = -1,
/// No error occurred.
NoError = 0,
/// Indicates the FxA credentials are invalid, and should be refreshed.
AuthInvalidError = 1,
// TODO: lockbox indicated that they would want to know when we fail to open
// the DB due to invalid key.
}
// XXX The rest of this is copied from mentat/ffi/util.rs, and likely belongs in ffi-toolkit
// (something similar is there, but it is more error-prone and not usable in a general way).
// XXX Actually, once errors are more stable we should do something more like fxa (and put
// that in ffi-toolkit).
/// Represents an error that occurred on the Rust side. Many of these FFI functions take a
/// `*mut ExternError` as the last argument. This is an out parameter that indicates an
/// error that occurred during that function's execution (if any).
///
/// For functions that use this pattern, if the ExternError's message property is null, then no
/// error occurred. If the message is non-null then it contains a string description of the
/// error that occurred.
///
/// Important: This message is allocated on the heap and it is the consumer's responsibility to
/// free it using `destroy_c_char`!
///
/// While this pattern is not ergonomic in Rust, it offers two main benefits:
///
/// 1. It avoids defining a large number of `Result`-shaped types in the FFI consumer, as would
/// be required with something like a `struct ExternResult<T> { ok: *mut T, err:... }`
/// 2. It offers additional type safety over `struct ExternResult { ok: *mut c_void, err:... }`,
/// which helps avoid memory safety errors.
#[repr(C)]
#[derive(Debug)]
pub struct ExternError {
/// A string message, primarily intended for debugging.
pub message: *mut c_char,
/// Error code.
/// - A code of 0 indicates no error
/// - A negative error code indicates an error which is not expected to be
/// handled by the application.
pub code: ExternErrorCode,
// TODO: We probably want an extra (json?) property for misc. metadata.
}
impl Default for ExternError {
fn default() -> ExternError {
ExternError {
message: ptr::null_mut(),
code: ExternErrorCode::NoError,
}
}
}
fn get_code(err: &Sync15PasswordsError) -> ExternErrorCode {
match err.kind() {
Sync15PasswordsErrorKind::Sync15AdapterError(e) => {
match e.kind() {
Sync15ErrorKind::TokenserverHttpError(StatusCode::Unauthorized) => {
ExternErrorCode::AuthInvalidError
}
_ => ExternErrorCode::OtherError,
}
}
_ => ExternErrorCode::OtherError,
}
}
/// Translate Result<T, E>, into something C can understand, when T is not `#[repr(C)]`
///
/// - If `result` is `Ok(v)`, moves `v` to the heap and returns a pointer to it, and sets
/// `error` to a state indicating that no error occurred (`message` is null).
/// - If `result` is `Err(e)`, returns a null pointer and stores a string representing the error
/// message (which was allocated on the heap and should eventually be freed) into
/// `error.message`
pub unsafe fn translate_result<T>(result: Result<T>, error: *mut ExternError) -> *mut T {
// TODO: can't unwind across FFI...
assert!(!error.is_null(), "Error output parameter is not optional");
let error = &mut *error;
error.message = ptr::null_mut();
match result {
Ok(val) => Box::into_raw(Box::new(val)),
Err(e) => {
error!("Rust Error: {:?}", e);
error.message = string_to_c_char(e.to_string());
error.code = get_code(&e);
ptr::null_mut()
}
}
}
pub unsafe fn try_translate_result<T>(result: Result<T>, error: *mut ExternError) -> Option<T> {
// TODO: can't unwind across FFI...
assert!(!error.is_null(), "Error output parameter is not optional");
let error = &mut *error;
error.message = ptr::null_mut();
match result {
Ok(val) => Some(val),
Err(e) => {
error!("Rust Error: {:?}", e);
error.message = string_to_c_char(e.to_string());
error.code = get_code(&e);
None
}
}
}
/// Identical to `translate_result`, but with additional type checking for the case that we have
/// a `Result<(), E>` (which we're about to drop on the floor).
pub unsafe fn translate_void_result(result: Result<()>, error: *mut ExternError) {
// TODO: update this comment.
// Note that Box<T> guarantees that if T is zero sized, it's not heap allocated. So not
// only do we never need to free the return value of this, it would be a problem if someone did.
translate_result(result, error);
}
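The out-parameter convention above (reset the error slot on entry; on success return a heap pointer, on failure fill the slot and return null) can be sketched in safe Rust. `MockExternError` and `mock_translate` below are illustrative stand-ins, not the real FFI types; `has_message: bool` stands in for the `message: *mut c_char` field:

```rust
// Minimal safe sketch of the ExternError out-parameter pattern: the callee
// clears the error slot up front, then either returns a heap value (Ok) or
// fills the slot and returns a null-like None (Err).
#[derive(Default, Debug)]
pub struct MockExternError {
    pub has_message: bool, // stands in for `message: *mut c_char`
    pub code: i32,         // 0 = NoError, -1 = OtherError
}

pub fn mock_translate<T>(
    result: Result<T, String>,
    error: &mut MockExternError,
) -> Option<Box<T>> {
    // Like translate_result: reset the out-parameter before doing anything.
    error.has_message = false;
    error.code = 0;
    match result {
        Ok(v) => Some(Box::new(v)),
        Err(_) => {
            error.has_message = true;
            error.code = -1;
            None
        }
    }
}

fn main() {
    let mut err = MockExternError::default();
    assert!(mock_translate(Ok(42u32), &mut err).is_some());
    assert_eq!(err.code, 0);
    assert!(mock_translate::<u32>(Err("boom".into()), &mut err).is_none());
    assert_eq!(err.code, -1);
}
```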

@@ -1,289 +0,0 @@
// Copyright 2018 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
// We take the "low road" here when returning the structs - we expose the
// items (and arrays of items) as strings, which are JSON. The rust side of
// the world gets serialization and deserialization for free and it makes
// memory management that little bit simpler.
extern crate failure;
extern crate serde_json;
extern crate url;
extern crate reqwest;
#[macro_use] extern crate ffi_toolkit;
extern crate mentat;
extern crate sync15_passwords;
extern crate sync15_adapter as sync;
#[macro_use] extern crate log;
mod error;
use error::{
ExternError,
with_translated_result,
with_translated_value_result,
with_translated_void_result,
with_translated_string_result,
with_translated_opt_string_result,
};
use std::os::raw::{
c_char,
};
use std::sync::{Once, ONCE_INIT};
use ffi_toolkit::string::{
c_char_to_string,
};
pub use ffi_toolkit::memory::{
destroy_c_char,
};
use sync::{
Sync15StorageClient,
Sync15StorageClientInit,
GlobalState,
};
use sync15_passwords::{
passwords,
PasswordEngine,
ServerPassword,
};
pub struct SyncInfo {
state: GlobalState,
client: Sync15StorageClient,
// Used so that we know whether or not we need to re-initialize `client`
last_client_init: Sync15StorageClientInit,
}
pub struct PasswordState {
engine: PasswordEngine,
sync: Option<SyncInfo>,
}
#[cfg(target_os = "android")]
extern { pub fn __android_log_write(level: ::std::os::raw::c_int, tag: *const c_char, text: *const c_char) -> ::std::os::raw::c_int; }
struct DevLogger;
impl log::Log for DevLogger {
fn enabled(&self, _: &log::Metadata) -> bool { true }
fn log(&self, record: &log::Record) {
let message = format!("{}:{} -- {}", record.level(), record.target(), record.args());
println!("{}", message);
#[cfg(target_os = "android")]
{
unsafe {
let message = ::std::ffi::CString::new(message).unwrap();
let level_int = match record.level() {
log::Level::Trace => 2,
log::Level::Debug => 3,
log::Level::Info => 4,
log::Level::Warn => 5,
log::Level::Error => 6,
};
let message = message.as_ptr();
let tag = b"RustInternal\0";
__android_log_write(level_int, tag.as_ptr() as *const c_char, message);
}
}
// TODO: iOS (use NSLog(__CFStringMakeConstantString(b"%s\0"), ...)); maybe Windows? (OutputDebugStringA)
}
fn flush(&self) {}
}
static INIT_LOGGER: Once = ONCE_INIT;
static DEV_LOGGER: &'static log::Log = &DevLogger;
fn init_logger() {
log::set_logger(DEV_LOGGER).unwrap();
log::set_max_level(log::LevelFilter::Trace);
std::env::set_var("RUST_BACKTRACE", "1");
info!("Hooked up rust logger!");
}
define_destructor!(sync15_passwords_state_destroy, PasswordState);
// This is probably too many string arguments...
#[no_mangle]
pub unsafe extern "C" fn sync15_passwords_state_new(
mentat_db_path: *const c_char,
encryption_key: *const c_char,
error: *mut ExternError
) -> *mut PasswordState {
INIT_LOGGER.call_once(init_logger);
with_translated_result(error, || {
let store = mentat::Store::open_with_key(c_char_to_string(mentat_db_path),
c_char_to_string(encryption_key))?;
let engine = PasswordEngine::new(store)?;
Ok(PasswordState {
engine,
sync: None,
})
})
}
// indirection to help `?` figure out the target error type
fn parse_url(url: &str) -> sync::Result<url::Url> {
Ok(url::Url::parse(url)?)
}
#[no_mangle]
pub unsafe extern "C" fn sync15_passwords_sync(
state: *mut PasswordState,
key_id: *const c_char,
access_token: *const c_char,
sync_key: *const c_char,
tokenserver_url: *const c_char,
error: *mut ExternError
) {
with_translated_void_result(error, || {
assert_pointer_not_null!(state);
let state = &mut *state;
let root_sync_key = sync::KeyBundle::from_ksync_base64(
c_char_to_string(sync_key).into())?;
let requested_init = Sync15StorageClientInit {
key_id: c_char_to_string(key_id).into(),
access_token: c_char_to_string(access_token).into(),
tokenserver_url: parse_url(c_char_to_string(tokenserver_url))?,
};
// TODO: If `to_ready` (or anything else with a ?) fails below, this
// `take()` means we end up with `state.sync.is_none()`, which means the
// next sync will redownload meta/global, crypto/keys, etc. without
// needing to. (AFAICT fixing this requires a change in sync15-adapter,
// since to_ready takes GlobalState as a move, and it's not clear if
// that change even is a good idea).
let mut sync_info = state.sync.take().map(Ok)
.unwrap_or_else(|| -> sync::Result<SyncInfo> {
let state = GlobalState::default();
let client = Sync15StorageClient::new(requested_init.clone())?;
Ok(SyncInfo {
state,
client,
last_client_init: requested_init.clone(),
})
})?;
// If the options passed for initialization of the storage client aren't
// the same as the ones we used last time, reinitialize it. (Note that
// we could avoid the comparison in the case where we had `None` in
// `state.sync` before, but this probably doesn't matter).
if requested_init != sync_info.last_client_init {
sync_info.client = Sync15StorageClient::new(requested_init.clone())?;
sync_info.last_client_init = requested_init;
}
{ // Scope borrow of `sync_info.client`
let mut state_machine =
sync::SetupStateMachine::for_readonly_sync(&sync_info.client, &root_sync_key);
let next_sync_state = state_machine.to_ready(sync_info.state)?;
sync_info.state = next_sync_state;
}
// We don't use a ? on the next line so that even if `state.engine.sync`
// fails, we don't forget the sync_state.
let result = state.engine.sync(&sync_info.client, &sync_info.state);
state.sync = Some(sync_info);
result
});
}
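The reuse-or-reinitialize logic above (keep the cached storage client unless its init options changed) can be isolated as a small pure pattern. `FakeInit` and `FakeClient` below are hypothetical stand-ins for `Sync15StorageClientInit` and `Sync15StorageClient`, with a generation counter standing in for the cost of rebuilding:

```rust
// Sketch of "keep the cached client unless its init config changed".
#[derive(Clone, PartialEq, Debug)]
pub struct FakeInit(pub String);

#[derive(Debug)]
pub struct FakeClient {
    pub built_from: FakeInit,
    pub generation: u32, // bumped each time a client is (re)built
}

pub fn get_client(
    cached: Option<(FakeClient, FakeInit)>,
    requested: FakeInit,
    generation: &mut u32,
) -> (FakeClient, FakeInit) {
    match cached {
        // Same init as last time: the existing client survives untouched.
        Some((client, last_init)) if last_init == requested => (client, last_init),
        // No cache, or the init options changed: build a fresh client.
        _ => {
            *generation += 1;
            let client = FakeClient {
                built_from: requested.clone(),
                generation: *generation,
            };
            (client, requested)
        }
    }
}

fn main() {
    let mut builds = 0;
    let first = get_client(None, FakeInit("a".into()), &mut builds);
    assert_eq!(builds, 1);
    let second = get_client(Some(first), FakeInit("a".into()), &mut builds);
    assert_eq!(builds, 1); // unchanged init: no rebuild
    let _third = get_client(Some(second), FakeInit("b".into()), &mut builds);
    assert_eq!(builds, 2); // changed init: rebuilt
}
```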
#[no_mangle]
pub unsafe extern "C" fn sync15_passwords_touch(state: *mut PasswordState, id: *const c_char, error: *mut ExternError) {
with_translated_void_result(error, || {
assert_pointer_not_null!(state);
let state = &mut *state;
state.engine.touch_credential(c_char_to_string(id).into())?;
Ok(())
});
}
#[no_mangle]
pub unsafe extern "C" fn sync15_passwords_delete(state: *mut PasswordState, id: *const c_char, error: *mut ExternError) -> bool {
with_translated_value_result(error, || {
assert_pointer_not_null!(state);
let state = &mut *state;
let deleted = state.engine.delete_credential(c_char_to_string(id).into())?;
Ok(deleted)
})
}
#[no_mangle]
pub unsafe extern "C" fn sync15_passwords_wipe(state: *mut PasswordState, error: *mut ExternError) {
with_translated_void_result(error, || {
assert_pointer_not_null!(state);
let state = &mut *state;
state.engine.wipe()?;
Ok(())
});
}
#[no_mangle]
pub unsafe extern "C" fn sync15_passwords_reset(state: *mut PasswordState, error: *mut ExternError) {
with_translated_void_result(error, || {
assert_pointer_not_null!(state);
let state = &mut *state;
state.engine.reset()?;
// XXX We probably need to clear out some things from `state.service`!
Ok(())
});
}
#[no_mangle]
pub unsafe extern "C" fn sync15_passwords_get_all(state: *mut PasswordState, error: *mut ExternError) -> *mut c_char {
with_translated_string_result(error, || {
assert_pointer_not_null!(state);
let state = &mut *state;
// Type declaration is just to make sure we have the right type (and for documentation)
let passwords: Vec<ServerPassword> = {
let mut in_progress_read = state.engine.store.begin_read()?;
passwords::get_all_sync_passwords(&mut in_progress_read)?
};
let result = serde_json::to_string(&passwords)?;
Ok(result)
})
}
#[no_mangle]
pub unsafe extern "C" fn sync15_passwords_get_by_id(state: *mut PasswordState, id: *const c_char, error: *mut ExternError) -> *mut c_char {
with_translated_opt_string_result(error, || {
assert_pointer_not_null!(state);
let state = &mut *state;
// Type declaration is just to make sure we have the right type (and for documentation)
let maybe_pass: Option<ServerPassword> = {
let mut in_progress_read = state.engine.store.begin_read()?;
passwords::get_sync_password(&mut in_progress_read, c_char_to_string(id).into())?
};
let pass = if let Some(p) = maybe_pass { p } else {
return Ok(None)
};
Ok(Some(serde_json::to_string(&pass)?))
})
}
#[no_mangle]
pub extern "C" fn wtf_destroy_c_char(s: *mut c_char) {
// the "pub use" above should be enough to expose this?
// It appears that it is enough to expose it in a Windows DLL, but for
// some reason it's not exported for Android.
// *sob* - and now that I've defined this, suddenly this *and*
// destroy_c_char are exposed (and removing this again removes
// destroy_c_char).
// Oh well, a yak for another day.
destroy_c_char(s);
}

@@ -1,234 +0,0 @@
// Copyright 2018 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
use sync15_adapter as sync;
use self::sync::{
ServerTimestamp,
OutgoingChangeset,
Payload,
};
use mentat::{
self,
DateTime,
FromMillis,
Utc,
};
use logins::{
credentials,
passwords,
Credential,
CredentialId,
SyncGuid,
ServerPassword,
ensure_vocabulary,
};
use std::collections::BTreeSet;
use errors::{
Sync15PasswordsError,
Result,
};
// TODO: These probably don't all need to be public!
pub struct PasswordEngine {
pub last_server_timestamp: ServerTimestamp,
pub current_tx_id: Option<mentat::Entid>,
pub store: mentat::store::Store,
}
impl PasswordEngine {
pub fn new(mut store: mentat::store::Store) -> Result<PasswordEngine> {
let last_server_timestamp: ServerTimestamp = { // Scope borrow of `store`.
let mut in_progress = store.begin_transaction()?;
ensure_vocabulary(&mut in_progress)?;
let timestamp = passwords::get_last_server_timestamp(&in_progress)?;
in_progress.commit()?;
ServerTimestamp(timestamp.unwrap_or_default())
};
Ok(PasswordEngine {
current_tx_id: None,
last_server_timestamp,
store,
})
}
pub fn touch_credential(&mut self, id: String) -> Result<()> {
let mut in_progress = self.store.begin_transaction()?;
credentials::touch_by_id(&mut in_progress, CredentialId(id), None)?;
in_progress.commit()?;
Ok(())
}
pub fn delete_credential(&mut self, id: String) -> Result<bool> {
let mut in_progress = self.store.begin_transaction()?;
let deleted = credentials::delete_by_id(&mut in_progress, CredentialId(id))?;
in_progress.commit()?;
Ok(deleted)
}
pub fn update_credential(&mut self, id: &str, updater: impl FnMut(&mut Credential)) -> Result<bool> {
let mut in_progress = self.store.begin_transaction()?;
let mut credential = credentials::get_credential(&in_progress, CredentialId(id.into()))?;
if credential.as_mut().map(updater).is_none() {
return Ok(false);
}
credentials::add_credential(&mut in_progress, credential.unwrap())?;
in_progress.commit()?;
Ok(true)
}
pub fn sync(
&mut self,
client: &sync::Sync15StorageClient,
state: &sync::GlobalState,
) -> Result<()> {
let ts = self.last_server_timestamp;
sync::synchronize(client, state, self, "passwords".into(), ts, true)?;
Ok(())
}
pub fn reset(&mut self) -> Result<()> {
{ // Scope borrow of self.
let mut in_progress = self.store.begin_transaction()?;
passwords::reset_client(&mut in_progress)?;
in_progress.commit()?;
}
self.last_server_timestamp = 0.0.into();
Ok(())
}
pub fn wipe(&mut self) -> Result<()> {
self.last_server_timestamp = 0.0.into();
// let mut in_progress = store.begin_transaction().map_err(|_| "failed to begin_transaction")?;
// // reset_client(&mut in_progress).map_err(|_| "failed to reset_client")?;
// in_progress.commit().map_err(|_| "failed to commit")?;
// self.save()?;
Ok(())
}
pub fn get_unsynced_changes(&mut self) -> Result<(Vec<Payload>, ServerTimestamp)> {
let mut result = vec![];
let in_progress_read = self.store.begin_read()?;
let deleted = passwords::get_deleted_sync_password_uuids_to_upload(&in_progress_read)?;
debug!("{} deleted records to upload: {:?}", deleted.len(), deleted);
for r in deleted {
result.push(Payload::new_tombstone(r.0))
}
let modified = passwords::get_modified_sync_passwords_to_upload(&in_progress_read)?;
debug!("{} modified records to upload: {:?}", modified.len(), modified.iter().map(|r| &r.uuid.0).collect::<Vec<_>>());
for r in modified {
result.push(Payload::from_record(r)?);
}
Ok((result, self.last_server_timestamp))
}
}
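The fetch-mutate-write shape of `update_credential` above, returning `false` when there is no record to update, can be sketched independently of mentat. `Row` and `Table` below are hypothetical stand-ins for `Credential` and the store:

```rust
// Sketch of update_credential's shape: read the record, apply the caller's
// closure if the record exists, write it back, and report whether anything
// happened.
#[derive(Clone, Debug, PartialEq)]
pub struct Row {
    pub password: String,
}

#[derive(Default)]
pub struct Table {
    pub row: Option<Row>,
}

impl Table {
    pub fn update_row(&mut self, updater: impl FnOnce(&mut Row)) -> bool {
        match self.row.as_mut() {
            None => false, // nothing to update; mirrors returning Ok(false)
            Some(r) => {
                updater(r);
                true
            }
        }
    }
}

fn main() {
    let mut t = Table::default();
    assert!(!t.update_row(|r| r.password.push('x'))); // no row yet
    t.row = Some(Row { password: "hunter2".into() });
    assert!(t.update_row(|r| r.password = "correct horse".into()));
    assert_eq!(t.row.unwrap().password, "correct horse");
}
```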
impl sync::Store for PasswordEngine {
type Error = Sync15PasswordsError;
fn apply_incoming(
&mut self,
inbound: sync::IncomingChangeset
) -> Result<OutgoingChangeset> {
debug!("Remote collection has {} changes timestamped at {}",
inbound.changes.len(), inbound.timestamp);
{ // Scope borrow of inbound.changes.
let (to_delete, to_apply): (Vec<_>, Vec<_>) = inbound.changes.iter().partition(|(payload, _)| payload.is_tombstone());
debug!("{} records to delete: {:?}", to_delete.len(), to_delete);
debug!("{} records to apply: {:?}", to_apply.len(), to_apply);
}
self.current_tx_id = { // Scope borrow of self.
let mut in_progress = self.store.begin_transaction()?;
for (payload, server_timestamp) in inbound.changes {
if payload.is_tombstone() {
passwords::delete_by_sync_uuid(&mut in_progress, payload.id().into())?;
} else {
debug!("Applying: {:?}", payload);
let mut server_password: ServerPassword = payload.clone().into_record()?;
server_password.modified = DateTime::<Utc>::from_millis(server_timestamp.as_millis() as i64);
passwords::apply_password(&mut in_progress, server_password)?;
}
}
let current_tx_id = in_progress.last_tx_id();
in_progress.commit()?;
Some(current_tx_id)
};
let (outbound_changes, last_server_timestamp) = self.get_unsynced_changes()?;
let outbound = OutgoingChangeset {
changes: outbound_changes,
timestamp: last_server_timestamp,
collection: "passwords".into()
};
debug!("After applying incoming changes, local collection has {} outgoing changes timestamped at {}",
outbound.changes.len(), outbound.timestamp);
Ok(outbound)
}
fn sync_finished(&mut self, new_last_server_timestamp: ServerTimestamp, records_synced: &[String]) -> Result<()> {
debug!("Synced {} outbound changes at remote timestamp {}", records_synced.len(), new_last_server_timestamp);
for id in records_synced {
trace!(" {:?}", id);
}
let current_tx_id = self.current_tx_id.unwrap(); // XXX
{ // Scope borrow of self.
let mut in_progress = self.store.begin_transaction()?;
let deleted = passwords::get_deleted_sync_password_uuids_to_upload(&in_progress)?;
let deleted: BTreeSet<String> = deleted.into_iter().map(|x| x.0).collect();
let (deleted, uploaded): (Vec<_>, Vec<_>) =
records_synced.iter().cloned().partition(|id| deleted.contains(id));
passwords::mark_synced_by_sync_uuids(&mut in_progress, uploaded.into_iter().map(SyncGuid).collect(), current_tx_id)?;
passwords::delete_by_sync_uuids(&mut in_progress, deleted.into_iter().map(SyncGuid).collect())?;
passwords::set_last_server_timestamp(&mut in_progress, new_last_server_timestamp.0)?;
in_progress.commit()?;
};
self.last_server_timestamp = new_last_server_timestamp;
Ok(())
}
}

@@ -1,135 +0,0 @@
// Copyright 2018 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
#![allow(dead_code)]
use std; // To refer to std::result::Result.
use serde_json;
use mentat;
use logins;
use sync15_adapter;
use failure::{Context, Backtrace, Fail};
pub type Result<T> = std::result::Result<T, Sync15PasswordsError>;
#[macro_export]
macro_rules! bail {
($e:expr) => (
return Err($e.into());
)
}
#[derive(Debug)]
pub struct Sync15PasswordsError(Box<Context<Sync15PasswordsErrorKind>>);
impl Fail for Sync15PasswordsError {
#[inline]
fn cause(&self) -> Option<&Fail> {
self.0.cause()
}
#[inline]
fn backtrace(&self) -> Option<&Backtrace> {
self.0.backtrace()
}
}
impl std::fmt::Display for Sync15PasswordsError {
#[inline]
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
std::fmt::Display::fmt(&*self.0, f)
}
}
impl Sync15PasswordsError {
#[inline]
pub fn kind(&self) -> &Sync15PasswordsErrorKind {
&*self.0.get_context()
}
}
impl From<Sync15PasswordsErrorKind> for Sync15PasswordsError {
#[inline]
fn from(kind: Sync15PasswordsErrorKind) -> Sync15PasswordsError {
Sync15PasswordsError(Box::new(Context::new(kind)))
}
}
impl From<Context<Sync15PasswordsErrorKind>> for Sync15PasswordsError {
#[inline]
fn from(inner: Context<Sync15PasswordsErrorKind>) -> Sync15PasswordsError {
Sync15PasswordsError(Box::new(inner))
}
}
#[derive(Debug, Fail)]
pub enum Sync15PasswordsErrorKind {
#[fail(display = "{}", _0)]
MentatError(#[cause] mentat::MentatError),
#[fail(display = "{}", _0)]
LoginsError(#[cause] logins::Error),
#[fail(display = "{}", _0)]
Sync15AdapterError(#[cause] sync15_adapter::Error),
#[fail(display = "{}", _0)]
SerdeJSONError(#[cause] serde_json::Error),
}
impl From<mentat::MentatError> for Sync15PasswordsErrorKind {
fn from(error: mentat::MentatError) -> Sync15PasswordsErrorKind {
Sync15PasswordsErrorKind::MentatError(error)
}
}
impl From<logins::Error> for Sync15PasswordsErrorKind {
fn from(error: logins::Error) -> Sync15PasswordsErrorKind {
Sync15PasswordsErrorKind::LoginsError(error)
}
}
impl From<sync15_adapter::Error> for Sync15PasswordsErrorKind {
fn from(error: sync15_adapter::Error) -> Sync15PasswordsErrorKind {
Sync15PasswordsErrorKind::Sync15AdapterError(error)
}
}
impl From<serde_json::Error> for Sync15PasswordsErrorKind {
fn from(error: serde_json::Error) -> Sync15PasswordsErrorKind {
Sync15PasswordsErrorKind::SerdeJSONError(error)
}
}
impl From<mentat::MentatError> for Sync15PasswordsError {
fn from(error: mentat::MentatError) -> Sync15PasswordsError {
Sync15PasswordsErrorKind::from(error).into()
}
}
impl From<logins::Error> for Sync15PasswordsError {
fn from(error: logins::Error) -> Sync15PasswordsError {
Sync15PasswordsErrorKind::from(error).into()
}
}
impl From<sync15_adapter::Error> for Sync15PasswordsError {
fn from(error: sync15_adapter::Error) -> Sync15PasswordsError {
Sync15PasswordsErrorKind::from(error).into()
}
}
impl From<serde_json::Error> for Sync15PasswordsError {
fn from(error: serde_json::Error) -> Sync15PasswordsError {
Sync15PasswordsErrorKind::from(error).into()
}
}
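The pair of `From` impls per underlying error type above is what lets `?` promote any source error straight to `Sync15PasswordsError` in one hop, routing through the kind enum. A compilable miniature of the same chain, using `std::num::ParseIntError` as a stand-in source error and plain std types in place of the failure crate:

```rust
// Two-hop From chain: source error -> Kind enum -> boxed wrapper.
#[derive(Debug)]
pub enum Kind {
    Parse(String),
}

#[derive(Debug)]
pub struct Wrapped(pub Box<Kind>);

impl From<Kind> for Wrapped {
    fn from(kind: Kind) -> Wrapped {
        Wrapped(Box::new(kind))
    }
}

impl From<std::num::ParseIntError> for Kind {
    fn from(e: std::num::ParseIntError) -> Kind {
        Kind::Parse(e.to_string())
    }
}

// The second hop, so `?` can go all the way in one step.
impl From<std::num::ParseIntError> for Wrapped {
    fn from(e: std::num::ParseIntError) -> Wrapped {
        Kind::from(e).into()
    }
}

pub fn parse(s: &str) -> Result<i64, Wrapped> {
    Ok(s.parse::<i64>()?) // ParseIntError -> Kind -> Wrapped automatically
}

fn main() {
    assert_eq!(parse("12").unwrap(), 12);
    assert!(parse("twelve").is_err());
}
```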

@@ -1,38 +0,0 @@
// Copyright 2018 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
#![crate_name = "sync15_passwords"]
extern crate failure;
#[macro_use] extern crate failure_derive;
#[macro_use] extern crate log;
extern crate serde_json;
extern crate mentat;
extern crate logins;
extern crate sync15_adapter;
pub mod engine;
pub use engine::{
PasswordEngine,
};
pub mod errors;
pub use errors::{
Sync15PasswordsError,
Sync15PasswordsErrorKind,
Result,
};
pub use logins::{
ServerPassword,
credentials,
passwords,
};

@@ -1,414 +0,0 @@
// Copyright 2018 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
extern crate env_logger;
extern crate failure;
#[macro_use] extern crate prettytable;
extern crate serde;
#[macro_use] extern crate serde_derive;
extern crate serde_json;
extern crate url;
extern crate logins;
extern crate mentat;
extern crate sync15_adapter as sync;
extern crate sync15_passwords;
use sync15_passwords::PasswordEngine;
use mentat::{
DateTime,
FromMillis,
Utc,
};
use std::io::{self, Read, Write};
use failure::Error;
use std::fs;
use std::process;
use std::collections::HashMap;
use std::time::{Duration, SystemTime, UNIX_EPOCH};
#[derive(Debug, Deserialize)]
struct OAuthCredentials {
access_token: String,
refresh_token: String,
keys: HashMap<String, ScopedKeyData>,
expires_in: u64,
auth_at: u64,
}
#[derive(Debug, Deserialize)]
struct ScopedKeyData {
k: String,
kid: String,
scope: String,
}
use logins::{
Credential,
FormTarget,
ServerPassword,
};
use logins::passwords;
fn do_auth(recur: bool) -> Result<OAuthCredentials, Error> {
match fs::File::open("./credentials.json") {
Err(_) => {
if recur {
panic!("Failed to open credentials 2nd time");
}
println!("No credentials found, invoking boxlocker.py...");
process::Command::new("python")
.arg("../boxlocker/boxlocker.py").output()
.expect("Failed to run boxlocker.py");
return do_auth(true);
},
Ok(mut file) => {
let mut s = String::new();
file.read_to_string(&mut s)?;
let creds: OAuthCredentials = serde_json::from_str(&s)?;
let time = SystemTime::now().duration_since(UNIX_EPOCH)?.as_secs();
if creds.expires_in + creds.auth_at < time {
println!("Warning, credentials may be stale.");
}
Ok(creds)
}
}
}
fn prompt_string<S: AsRef<str>>(prompt: S) -> Option<String> {
print!("{}: ", prompt.as_ref());
let _ = io::stdout().flush(); // Don't care if flush fails really.
let mut s = String::new();
io::stdin().read_line(&mut s).expect("Failed to read line...");
if let Some('\n') = s.chars().next_back() { s.pop(); }
if let Some('\r') = s.chars().next_back() { s.pop(); }
if s.is_empty() {
None
} else {
Some(s)
}
}
fn prompt_usize<S: AsRef<str>>(prompt: S) -> Option<usize> {
if let Some(s) = prompt_string(prompt) {
match s.parse::<usize>() {
Ok(n) => Some(n),
Err(_) => {
println!("Couldn't parse!");
None
}
}
} else {
None
}
}
#[inline]
fn duration_ms(dur: Duration) -> u64 {
dur.as_secs() * 1000 + ((dur.subsec_nanos() / 1_000_000) as u64)
}
#[inline]
fn unix_time_ms() -> u64 {
duration_ms(SystemTime::now().duration_since(UNIX_EPOCH).unwrap())
}
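The conversion above multiplies whole seconds by 1000 and truncates the nanosecond remainder to whole milliseconds; a quick self-contained check of that arithmetic:

```rust
use std::time::Duration;

// Same arithmetic as duration_ms above: seconds scaled to ms, plus the
// truncated sub-second millisecond count.
pub fn duration_ms(dur: Duration) -> u64 {
    dur.as_secs() * 1000 + ((dur.subsec_nanos() / 1_000_000) as u64)
}

fn main() {
    assert_eq!(duration_ms(Duration::new(2, 500_000_000)), 2_500);
    assert_eq!(duration_ms(Duration::new(0, 999_999)), 0); // < 1ms truncates to 0
    assert_eq!(duration_ms(Duration::from_millis(1234)), 1234);
}
```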
fn read_login() -> ServerPassword {
let username = prompt_string("username"); // .unwrap_or(String::new());
let password = prompt_string("password").unwrap_or(String::new());
let form_submit_url = prompt_string("form_submit_url");
let hostname = prompt_string("hostname");
let http_realm = prompt_string("http_realm");
let username_field = prompt_string("username_field"); // .unwrap_or(String::new());
let password_field = prompt_string("password_field"); // .unwrap_or(String::new());
let ms_i64 = unix_time_ms() as i64;
ServerPassword {
uuid: sync::util::random_guid().unwrap().into(),
username,
password,
username_field,
password_field,
target: match form_submit_url {
Some(form_submit_url) => FormTarget::FormSubmitURL(form_submit_url),
None => FormTarget::HttpRealm(http_realm.unwrap_or(String::new())), // XXX this makes little sense.
},
hostname: hostname.unwrap_or(String::new()), // XXX.
time_created: DateTime::<Utc>::from_millis(ms_i64),
time_password_changed: DateTime::<Utc>::from_millis(ms_i64),
times_used: 0,
time_last_used: DateTime::<Utc>::from_millis(ms_i64),
modified: DateTime::<Utc>::from_millis(ms_i64), // XXX what should we do here?
}
}
fn update_string(field_name: &str, field: &mut String, extra: &str) -> bool {
let opt_s = prompt_string(format!("new {} [now {}{}]", field_name, field, extra));
if let Some(s) = opt_s {
*field = s;
true
} else {
false
}
}
fn string_opt(o: &Option<String>) -> Option<&str> {
o.as_ref().map(|s| s.as_ref())
}
fn string_opt_or<'a>(o: &'a Option<String>, or: &'a str) -> &'a str {
string_opt(o).unwrap_or(or)
}
// fn update_login(record: &mut ServerPassword) {
// update_string("username", &mut record.username, ", leave blank to keep");
// let changed_password = update_string("password", &mut record.password, ", leave blank to keep");
// if changed_password {
// record.time_password_changed = unix_time_ms() as i64;
// }
// update_string("username_field", &mut record.username_field, ", leave blank to keep");
// update_string("password_field", &mut record.password_field, ", leave blank to keep");
// if prompt_bool(&format!("edit hostname? (now {}) [yN]", string_opt_or(&record.hostname, "(none)"))).unwrap_or(false) {
// record.hostname = prompt_string("hostname");
// }
// if prompt_bool(&format!("edit form_submit_url? (now {}) [yN]", string_opt_or(&record.form_submit_url, "(none)"))).unwrap_or(false) {
// record.form_submit_url = prompt_string("form_submit_url");
// }
// }
fn update_credential(record: &mut Credential) {
let mut username = record.username.clone().unwrap_or("".into());
if update_string("username", &mut username, ", leave blank to keep") {
record.username = Some(username);
}
update_string("password", &mut record.password, ", leave blank to keep");
let mut title = record.title.clone().unwrap_or("".into());
if update_string("title", &mut title, ", leave blank to keep") {
record.title = Some(title);
}
}
fn prompt_bool(msg: &str) -> Option<bool> {
let result = prompt_string(msg);
result.and_then(|r| match r.chars().next().unwrap() {
'y' | 'Y' | 't' | 'T' => Some(true),
'n' | 'N' | 'f' | 'F' => Some(false),
_ => None
})
}
fn prompt_chars(msg: &str) -> Option<char> {
prompt_string(msg).and_then(|r| r.chars().next())
}
fn as_table<'a, I>(records: I) -> (prettytable::Table, Vec<String>) where I: IntoIterator<Item=&'a ServerPassword> {
let mut table = prettytable::Table::new();
table.add_row(row![
"(idx)", "id",
"username", "password",
"usernameField", "passwordField",
"hostname",
"formSubmitURL"
// Skipping metadata so this isn't insanely long
]);
let v: Vec<_> = records.into_iter().enumerate().map(|(index, rec)| {
let target = match &rec.target {
&FormTarget::FormSubmitURL(ref form_submit_url) => form_submit_url,
&FormTarget::HttpRealm(ref http_realm) => http_realm,
};
table.add_row(row![
index,
rec.uuid.as_ref(),
string_opt_or(&rec.username, "<username>"),
&rec.password,
string_opt_or(&rec.username_field, "<username_field>"),
string_opt_or(&rec.password_field, "<password_field>"),
&rec.hostname,
target
]);
rec.uuid.0.clone()
}).collect();
(table, v)
}
fn show_all(e: &mut PasswordEngine) -> Result<Vec<String>, Error> {
let records = {
let mut in_progress_read = e.store.begin_read()?;
// .map_err(|_| "failed to begin_read")?;
passwords::get_all_sync_passwords(&mut in_progress_read)?
// .map_err(|_| "failed to get_all_sync_passwords")?
};
let (table, map) = as_table(&records);
table.printstd();
Ok(map)
}
fn prompt_record_id(e: &mut PasswordEngine, action: &str) -> Result<Option<String>, Error> {
let index_to_id = show_all(e)?;
let input = match prompt_usize(&format!("Enter (idx) of record to {}", action)) {
Some(x) => x,
None => {
println!("Bad input");
return Ok(None);
},
};
if input >= index_to_id.len() {
println!("No such index");
return Ok(None);
}
Ok(Some(index_to_id[input].clone()))
}
fn main() -> Result<(), Error> {
env_logger::init();
let oauth_data = do_auth(false)?;
let scope = &oauth_data.keys["https://identity.mozilla.com/apps/oldsync"];
let client = sync::Sync15StorageClient::new(sync::Sync15StorageClientInit {
key_id: scope.kid.clone(),
access_token: oauth_data.access_token.clone(),
tokenserver_url: url::Url::parse("https://oauth-sync.dev.lcip.org/syncserver/token/1.0/sync/1.5")?,
})?;
let mut sync_state = sync::GlobalState::default();
let root_sync_key = sync::KeyBundle::from_ksync_base64(&scope.k)?;
let mut state_machine =
sync::SetupStateMachine::for_readonly_sync(&client, &root_sync_key);
sync_state = state_machine.to_ready(sync_state)?;
let mut engine = PasswordEngine::new(mentat::Store::open("logins.mentatdb")?)?;
println!("Performing startup sync; engine has last server timestamp {}.", engine.last_server_timestamp);
if let Err(e) = engine.sync(&client, &sync_state) {
println!("Initial sync failed: {}", e);
if !prompt_bool("Would you like to continue [yN]").unwrap_or(false) {
return Err(e.into());
}
}
show_all(&mut engine)?;
loop {
// match prompt_chars("[A]dd, [D]elete, [U]pdate, [S]ync, [V]iew, [R]eset, [W]ipe or [Q]uit").unwrap_or('?') {
match prompt_chars("[T]ouch credential, [D]elete credential, [U]pdate credential, [S]ync, [V]iew, [R]eset, [W]ipe, or [Q]uit").unwrap_or('?') {
'T' | 't' => {
println!("Touching (recording usage of) credential");
if let Some(id) = prompt_record_id(&mut engine, "touch (record usage of)")? {
// Here we rely on the credential uuid and the Sync 1.5 uuid being the same;
// that's not a stable assumption.
if let Err(e) = engine.touch_credential(id) {
println!("Failed to touch credential! {}", e);
}
}
}
// 'A' | 'a' => {
// println!("Adding new record");
// let record = read_login();
// if let Err(e) = engine.create(record) {
// println!("Failed to create record! {}", e);
// }
// }
'D' | 'd' => {
println!("Deleting credential");
if let Some(id) = prompt_record_id(&mut engine, "delete")? {
// Here we rely on the credential uuid and the Sync 1.5 uuid being the same;
// that's not a stable assumption.
if let Err(e) = engine.delete_credential(id) {
println!("Failed to delete credential! {}", e);
}
}
}
'U' | 'u' => {
println!("Updating credential fields");
if let Some(id) = prompt_record_id(&mut engine, "update")? {
// Here we rely on the credential uuid and the Sync 1.5 uuid being the same;
// that's not a stable assumption.
if let Err(e) = engine.update_credential(&id, update_credential) {
println!("Failed to update credential! {}", e);
}
}
}
'R' | 'r' => {
println!("Resetting client's last server timestamp (was {}).", engine.last_server_timestamp);
if let Err(e) = engine.reset() {
println!("Failed to reset! {}", e);
}
}
'W' | 'w' => {
println!("Wiping all data from client!");
if let Err(e) = engine.wipe() {
println!("Failed to wipe! {}", e);
}
}
'S' | 's' => {
println!("Syncing engine with last server timestamp {}!", engine.last_server_timestamp);
if let Err(e) = engine.sync(&client, &sync_state) {
println!("Sync failed! {}", e);
} else {
println!("Sync was successful!");
}
}
'V' | 'v' => {
// println!("Engine has {} records, a last sync timestamp of {}, and {} queued changes",
// engine.records.len(), engine.last_sync, engine.changes.len());
println!("Engine has a last server timestamp of {}", engine.last_server_timestamp);
{ // Scope borrow of engine.
let in_progress_read = engine.store.begin_read()?;
// .map_err(|_| "failed to begin_read")?;
let deleted = passwords::get_deleted_sync_password_uuids_to_upload(&in_progress_read)?;
// .map_err(|_| "failed to get_deleted_sync_password_uuids_to_upload")?;
println!("{} deleted records to upload: {:?}", deleted.len(), deleted);
let modified = passwords::get_modified_sync_passwords_to_upload(&in_progress_read)?;
// .map_err(|_| "failed to get_modified_sync_passwords_to_upload")?;
println!("{} modified records to upload:", modified.len());
if !modified.is_empty() {
let (table, _map) = as_table(&modified);
table.printstd();
}
}
println!("Local collection:");
show_all(&mut engine)?;
}
'Q' | 'q' => {
break;
}
'?' => {
continue;
}
c => {
println!("Unknown action '{}', exiting.", c);
break;
}
}
}
println!("Exiting (bye!)");
Ok(())
}


@ -7,6 +7,9 @@ npm install
npm start
```
If this doesn't automatically launch a browser,
you can navigate to the local docs at http://localhost:3000/application-services/.
### Deploy
```


@ -0,0 +1,102 @@
---
title: Firefox Accounts Train-119
author: Shane Tomlinson
authorUrl: https://github.com/shane-tomlinson
---
Hi All,
On August 30th, we shipped FxA train-119 to production
with the following highlights:
<!--truncate-->
## FxA-0: quality
The push to improve quality and clean up messy code never ends. A lot of
work went into integrating with Pushbox, fixing tests, and updating
libraries.
* https://github.com/mozilla/fxa-auth-server/pull/2597
* https://github.com/mozilla/fxa-auth-server/pull/2591
* https://github.com/mozilla/fxa-auth-server/pull/2588
* https://github.com/mozilla/fxa-auth-server/pull/2585
* https://github.com/mozilla/fxa-auth-server/pull/2584
* https://github.com/mozilla/fxa-auth-server/pull/2581
* https://github.com/mozilla/fxa-auth-server/pull/2578
* https://github.com/mozilla/fxa-auth-server/pull/2573
* https://github.com/mozilla/fxa-auth-server/pull/2567
* https://github.com/mozilla/fxa-content-server/pull/6485
* https://github.com/mozilla/fxa-content-server/pull/6475
* https://github.com/mozilla/fxa-content-server/pull/6420
* https://github.com/mozilla/fxa-content-server/pull/6472
* https://github.com/mozilla/fxa-content-server/pull/6465
* https://github.com/mozilla/fxa-content-server/pull/6405
* https://github.com/mozilla/fxa-content-server/pull/6453
* https://github.com/mozilla/fxa-content-server/pull/6462
* https://github.com/mozilla/fxa-content-server/pull/6460
* https://github.com/mozilla/fxa-content-server/pull/6449
* https://github.com/mozilla/fxa-content-server/pull/6433
* https://github.com/mozilla/fxa-content-server/pull/6444
* https://github.com/mozilla/fxa-content-server/pull/6443
* https://github.com/mozilla/fxa-content-server/pull/6441
* https://github.com/mozilla/fxa-content-server/pull/6436
* https://github.com/mozilla/fxa-content-server/pull/6432
* https://github.com/mozilla/fxa-content-server/pull/6426
* https://github.com/mozilla/fxa-oauth-server/pull/594
* https://github.com/mozilla/fxa-oauth-server/pull/551
* https://github.com/mozilla/fxa-oauth-server/pull/586
## FxA-151: Email deliverability
The new email service is now running in production and being
put to use. This cycle we improved metrics and configuration
management, fixed tests, and did other general cleanup.
* https://github.com/mozilla/fxa-auth-server/pull/2576
* https://github.com/mozilla/fxa-auth-server/pull/2574
* https://github.com/mozilla/fxa-auth-server/pull/2572
* https://github.com/mozilla/fxa-auth-server/pull/2571
* https://github.com/mozilla/fxa-content-server/pull/6470
* https://github.com/mozilla/fxa-email-service/pull/178
* https://github.com/mozilla/fxa-email-service/pull/177
* https://github.com/mozilla/fxa-email-service/pull/176
* https://github.com/mozilla/fxa-email-service/pull/174
* https://github.com/mozilla/fxa-email-service/pull/175
* https://github.com/mozilla/fxa-email-service/pull/171
## FxA-153: Account recovery
Major work on account recovery is complete and the testing
phase has begun. This cycle focused heavily on cleaning
up the UX.
* https://github.com/mozilla/fxa-content-server/pull/6461
* https://github.com/mozilla/fxa-content-server/pull/6431
* https://github.com/mozilla/fxa-content-server/pull/6418
## FxA-156: Fenix Pairing flow
The Fenix Pairing flow is coming along, though very little
code has been merged yet. This train includes only some
preliminary work, merged to simplify further code review.
* https://github.com/mozilla/fxa-content-server/pull/6479
## No milestone
Special thanks go to the following community contributors,
who have code shipping in this train:
* hritvi
* divyabiyani
* brizental
As always, you can find more details in the changelogs for each repo:
* https://github.com/mozilla/fxa-auth-server/blob/v1.119.6/CHANGELOG.md
* https://github.com/mozilla/fxa-content-server/blob/v1.119.4/CHANGELOG.md
* https://github.com/mozilla/fxa-oauth-server/blob/v1.119.0/CHANGELOG.md
* https://github.com/mozilla/fxa-customs-server/blob/v1.119.0/CHANGELOG.md
* https://github.com/mozilla/fxa-email-service/blob/v1.119.0/CHANGELOG.md


@ -0,0 +1,108 @@
---
title: Firefox Accounts Train-120
author: Shane Tomlinson
authorUrl: https://github.com/shane-tomlinson
---
Hi All,
On September 10th, we shipped FxA train-120 to production
with the following highlights:
<!--truncate-->
## FxA-0: quality
The push to improve quality and clean up messy code never ends.
We made several token and recovery code updates, fixed `blob:` URIs being blocked,
fixed tests, fixed integration with Pushbox, and removed nsp support. We also
started to add the LGTM code analysis tool.
* https://github.com/mozilla/fxa-auth-server/pull/2608
* https://github.com/mozilla/fxa-auth-server/pull/2601
* https://github.com/mozilla/fxa-auth-server/pull/2604
* https://github.com/mozilla/fxa-auth-server/pull/2603
* https://github.com/mozilla/fxa-auth-server/pull/2602
* https://github.com/mozilla/fxa-auth-server/pull/2600
* https://github.com/mozilla/fxa-auth-server/pull/2590
* https://github.com/mozilla/fxa-content-server/pull/6538
* https://github.com/mozilla/fxa-content-server/pull/6517
* https://github.com/mozilla/fxa-content-server/pull/6518
* https://github.com/mozilla/fxa-content-server/pull/6500
* https://github.com/mozilla/fxa-content-server/pull/6510
* https://github.com/mozilla/fxa-content-server/pull/6508
* https://github.com/mozilla/fxa-content-server/pull/6505
* https://github.com/mozilla/fxa-content-server/pull/6499
* https://github.com/mozilla/fxa-content-server/pull/6483
* https://github.com/mozilla/fxa-content-server/pull/6488
* https://github.com/mozilla/fxa-content-server/pull/6484
* https://github.com/mozilla/fxa-content-server/pull/6490
* https://github.com/mozilla/fxa-auth-db-mysql/pull/394
* https://github.com/mozilla/fxa-auth-db-mysql/pull/391
* https://github.com/mozilla/fxa-auth-db-mysql/pull/389
* https://github.com/mozilla/fxa-auth-db-mysql/pull/386
* https://github.com/mozilla/fxa-customs-server/pull/274
* https://github.com/mozilla/fxa-customs-server/pull/264
## FxA-151: Email deliverability
This cycle we improved logging and error handling.
* https://github.com/mozilla/fxa-auth-server/pull/2606
* https://github.com/mozilla/fxa-auth-server/pull/2595
* https://github.com/mozilla/fxa-email-service/pull/182
* https://github.com/mozilla/fxa-email-service/pull/180
## FxA-152: Improve password strength
The new password strength UI is now displayed to 100% of
German- and English-locale users, as well as to 25% of Arabic-locale users.
* https://github.com/mozilla/fxa-content-server/pull/6521
## FxA-153: Account recovery
Account recovery testing continues with several fixes applied
after a security and UX review.
* https://github.com/mozilla/fxa-auth-server/pull/2607
* https://github.com/mozilla/fxa-content-server/pull/6511
* https://github.com/mozilla/fxa-content-server/pull/6482
* https://github.com/mozilla/fxa-auth-db-mysql/pull/395
## FxA-155: signin papercuts
We no longer display the Firefox logo to mobile users on the
Choose What To Sync screen, and we now support the `at_hash`
OpenID Connect query parameter.
* https://github.com/mozilla/fxa-content-server/pull/6509
* https://github.com/mozilla/fxa-oauth-server/pull/598
* https://github.com/mozilla/fxa-customs-server/pull/277
## FxA-156: Fenix Pairing flow
More groundwork was merged to ease the final code review.
* https://github.com/mozilla/fxa-content-server/pull/6503
* https://github.com/mozilla/fxa-content-server/pull/6502
* https://github.com/mozilla/fxa-content-server/pull/6501
Special thanks go to the following community contributors,
who have code shipping in this train:
* xcorail
* brizental
* hritvi
As always, you can find more details in the changelogs for each repo:
* https://github.com/mozilla/fxa-auth-server/blob/v1.120.2/CHANGELOG.md
* https://github.com/mozilla/fxa-content-server/blob/v1.120.2/CHANGELOG.md
* https://github.com/mozilla/fxa-auth-db-mysql/blob/v1.120.0/CHANGELOG.md
* https://github.com/mozilla/fxa-oauth-server/blob/v1.120.0/CHANGELOG.md
* https://github.com/mozilla/fxa-customs-server/blob/v1.120.1/CHANGELOG.md
* https://github.com/mozilla/fxa-profile-server/blob/v1.120.0/CHANGELOG.md
* https://github.com/mozilla/fxa-email-service/blob/v1.120.0/CHANGELOG.md


@ -5,9 +5,12 @@
"previous": "Previous",
"tagline": "Build the next thing...",
"accounts/50000-most-common-passwords": "50,000 Most Common Passwords",
"accounts/dev-process": "Development Process",
"Development Process": "Development Process",
"accounts/end-to-end-encryption": "End-to-end encryption",
"accounts/fxa-client-ios": "iOS SDK",
"accounts/fxa-client-android": "Android SDK",
"accounts/fxa-client-ios": "iOS SDK",
"accounts/metrics": "accounts/metrics",
"accounts/project-details": "Project Details",
"Project Details": "Project Details",
"accounts/welcome": "About Firefox Accounts",
@ -33,12 +36,12 @@
"Sync": "Sync",
"Applications": "Applications",
"Firefox Accounts": "Firefox Accounts",
"Mobile SDKs": "Mobile SDKs",
"Other Features": "Other Features",
"Firefox Sync": "Firefox Sync",
"Testing": "Testing",
"Design Docs": "Design Docs",
"Team": "Team",
"Mobile SDKs": "Mobile SDKs"
"Team": "Team"
},
"pages-strings": {
"Help Translate|recruit community translators for your project": "Help Translate",


@ -7,7 +7,8 @@
"accounts": {
"Firefox Accounts": [
"accounts/welcome",
"accounts/project-details"
"accounts/project-details",
"accounts/dev-process"
],
"Mobile SDKs": [
"accounts/fxa-client-ios",