Over in https://github.com/mozilla/fxa-auth-server/pull/2535, work is
being done to enable the auth server to specify the provider for
specific requests. This causes a problem in fxa-local-dev, because we
always want to use the local smtp provider there.
As a hackish approach to resolving that conflict, this change introduces
a new boolean setting called `forceprovider`. If `forceprovider` is
`true`, the default provider may not be overridden by individual
requests.
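Roughly, the intended behaviour looks like this (the struct and field names below are illustrative only, not the actual settings code):

```rust
// Illustrative sketch only; the real settings struct and field names differ.
struct Settings {
    default_provider: String,
    forceprovider: bool,
}

// Decide which provider to use for a request.
fn resolve_provider(settings: &Settings, requested: Option<&str>) -> String {
    if settings.forceprovider {
        // The default (e.g. the local smtp provider) always wins.
        settings.default_provider.clone()
    } else {
        requested
            .map(String::from)
            .unwrap_or_else(|| settings.default_provider.clone())
    }
}

fn main() {
    let settings = Settings {
        default_provider: "smtp".to_string(),
        forceprovider: true,
    };
    // Even though the request asked for another provider, we stick with the default.
    assert_eq!(resolve_provider(&settings, Some("ses")), "smtp");
}
```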
Fixes #146
When I switched to Rocket::custom instead of Rocket.toml for creating the configuration, I forgot about ROCKET_SECRET_KEY.
It's worth noting that, because we are using Rocket::custom, none of the ROCKET_* env vars will work. So if we ever need to set any of the fields in Rocket's config, we need to add them both to our own config and to our Rocket config builder function.
These are the possible fields in rocket config: https://api.rocket.rs/rocket/config/struct.ConfigBuilder.html#fields
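For reference, a rough sketch of what that wiring looks like with the Rocket 0.3-style ConfigBuilder (the function names and the choice of fields here are mine, not the actual code):

```rust
extern crate rocket;

use rocket::config::{Config, Environment};

// Build Rocket's config from our own settings instead of Rocket.toml /
// ROCKET_* env vars; any field we care about has to be set explicitly here.
fn rocket_config(secret_key: &str) -> Config {
    Config::build(Environment::Production)
        .secret_key(secret_key)
        .finalize()
        .expect("invalid rocket config")
}

fn server(secret_key: &str) -> rocket::Rocket {
    // The second argument toggles Rocket's own logging.
    rocket::custom(rocket_config(secret_key), true)
}
```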
Fixes #130.
We don't want the regular cycle of conflicts between Rust nightlies and (usually) rocket codegen to affect us in prod. And, funnily enough, the version that I first tried pinning to contains one of those conflicts, so that was a good reminder. (Nightly 2018-07-15 works with Rocket 0.3.15, but we're pinned to a specific commit on master for now.)
Fixes #111
In order to do this I had to point our rocket dependency at the Rocket repo, which introduced some breaking changes that I also fixed in this PR. Mostly they were variable name changes.
Fixes #100. Fixes #24.
As mentioned in #24, we still need to figure out what to do about ROCKET_WORKERS.
Also, in production, Rocket emits a warning about secret_key not being set, but I decided to leave it unset for now. In any case, that's probably not something we'd want to hard-code here.
Fixes #82
When we are in a testing environment (which means when NODE_ENV=test), the logger won't show anything.
So now, instead of having `mozlog: true | false` in the config, we have `logging: null | pretty | mozlog`.
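Roughly, the selection behaves like this (the enum and function names are just for illustration, not the real settings code):

```rust
// Illustration only; the real settings type is richer than this.
#[derive(Clone, Copy, Debug, PartialEq)]
enum LogFormat {
    Null,   // no output at all
    Pretty, // human-readable output for local dev
    Mozlog, // mozlog-formatted JSON
}

// In a testing environment the logger shouldn't show anything,
// regardless of what the config says.
fn effective_format(configured: LogFormat, is_test_env: bool) -> LogFormat {
    if is_test_env {
        LogFormat::Null
    } else {
        configured
    }
}

fn main() {
    assert_eq!(effective_format(LogFormat::Mozlog, true), LogFormat::Null);
    assert_eq!(effective_format(LogFormat::Pretty, false), LogFormat::Pretty);
}
```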
Fixes #48. Fixes #77.
I had to change basically everything since my last PR (#85) about this, so I thought it would be best to just create a new PR.
To get the failure errors working I needed to get at least the providers, bounces and db code to also work with them, not just the Rocket and HTTP errors. That's what I'm doing in this PR. Since this is quite a big change, I thought it would be best to send it in before adapting the tests to the new error type, so, right now, many tests are commented out.
Let me know what you all think. If you think this is a good change, I'll still need to adapt all the tests, and there are also some error types specific to the queues bin that need to be changed as well.
I personally think this is a good change: it abstracts all the errors into one single place in the code. While working on this refactoring, I saw a lot of repeated error-handling code; this change removes that duplication, which means it will be easier to maintain and to create new error types. Also, we now get much more information about each error, not just the HTTP status and message, and it's easy to customize this even further.
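To give a flavour of the direction, here's a minimal sketch using the failure crate's derive (the actual error kinds, fields and conversions in the PR are more extensive than this):

```rust
#[macro_use]
extern crate failure;

// A single place to define error kinds, instead of ad-hoc handling
// scattered across the providers, bounces and db code.
#[derive(Debug, Fail)]
enum AppErrorKind {
    #[fail(display = "provider error: {}", _0)]
    Provider(String),
    #[fail(display = "bounce violation for {}", address)]
    BounceViolation { address: String },
    #[fail(display = "database error: {}", _0)]
    Db(String),
}

fn main() {
    let error = AppErrorKind::Provider("ses request failed".to_string());
    // Each kind can carry more context than just an HTTP status and message.
    println!("{}", error);
}
```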
Stores a "metadata" string from the caller in Redis, keyed by a hash of
the message id. In practice the metadata is JSON in the auth server's
case, but this repo neither knows nor cares about that. No functionality
is added here to read or clear the data from Redis; that's coming in a
separate changeset for the queues process.
The HMAC key for hashing the message id comes from config, obviously,
because it's secret. It has a very specific name at the moment, but we
should feel free to rename it to something more generic than that if we
have other data that we'd like to hash with it in the future.
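For illustration, the store path looks roughly like this; the crate choices here (hmac, sha2, hex, redis) and all the names are assumptions on my part, not necessarily what the changeset actually uses:

```rust
use hmac::{Hmac, Mac};
use redis::Commands;
use sha2::Sha256;

type HmacSha256 = Hmac<Sha256>;

// Store the caller's opaque metadata string, keyed by an HMAC of the
// message id so the raw id never appears as a Redis key.
fn store_metadata(
    redis_url: &str,
    hmac_key: &[u8],   // comes from config, because it's secret
    message_id: &str,
    metadata: &str,    // opaque to this service; the auth server happens to send JSON
) -> Result<(), Box<dyn std::error::Error>> {
    let mut mac = HmacSha256::new_from_slice(hmac_key).map_err(|e| e.to_string())?;
    mac.update(message_id.as_bytes());
    let key = hex::encode(mac.finalize().into_bytes());

    let client = redis::Client::open(redis_url)?;
    let mut conn = client.get_connection()?;
    let _: () = conn.set(key, metadata)?;
    Ok(())
}
```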
https://github.com/mozilla/fxa-email-service/pull/72
r=vladikoff
Fixes #68.
We don't want to let knowledge about the auth db leak into any unrelated application logic, because it's both temporary and in possession of capabilities that this repo has no business knowing about.
Fixes #61.
Not much to talk about here, just deleted the settings and adjusted the tests.
The Cargo.lock file was updated as well; I think that's because some of the crates were updated. Since we currently use `>=` version requirements in Cargo.toml, newer crate versions get picked up on `cargo build`, which updates the Cargo.lock file.