Mirror of https://github.com/mozilla/subhub.git
refactor sub and hub to be separate code and aws functions
This commit is contained in:
Parent: 862a3f71b9
Commit: d52e6c10f4
@@ -26,4 +26,9 @@ node_modules
 .vscode/
 .tox
 venv
 .doit.db
+
+# Dockerfiles
+src/sub/Dockerfile
+src/hub/Dockerfile
+Dockerfile.base
@@ -0,0 +1,11 @@
+# https://editorconfig.org/
+
+root = true
+
+[*]
+indent_style = space
+indent_size = 4
+insert_final_newline = true
+trim_trailing_whitespace = true
+end_of_line = lf
+charset = utf-8
@@ -128,4 +128,10 @@ node_modules/
 
 # GraphViz
 *.png
 *.dot
+
+# src tarballs
+.src.tar.gz
+
+# OSX
+.DS_Store
@@ -1,6 +1,6 @@
 <component name="ProjectRunConfigurationManager">
   <configuration default="false" name="behave" type="PyBehaveRunConfigurationType" factoryName="Behave">
-    <module name="moz-subhub" />
+    <module name="subhub" />
     <option name="INTERPRETER_OPTIONS" value="" />
     <option name="PARENT_ENVS" value="true" />
     <option name="SDK_HOME" value="$PROJECT_DIR$/venv/bin/python" />
@@ -9,7 +9,7 @@
     <option name="ADD_CONTENT_ROOTS" value="true" />
     <option name="ADD_SOURCE_ROOTS" value="true" />
     <EXTENSION ID="PythonCoverageRunConfigurationExtension" runner="coverage.py" />
-    <WHAT_TO_RUN WHAT_TO_RUN="$PROJECT_DIR$/subhub/tests/behave/" />
+    <WHAT_TO_RUN WHAT_TO_RUN="$PROJECT_DIR$/sub/tests/behave/" />
     <option name="ADDITIONAL_ARGS" value="" />
     <method v="2" />
   </configuration>
@@ -40,4 +40,4 @@ jobs:
       - doit draw
   - stage: Unit Test
     script:
       - doit test
@@ -0,0 +1,24 @@
+FROM python:3.7-alpine
+MAINTAINER Stewart Henderson <shenderson@mozilla.com>
+
+ENV PYTHONDONTWRITEBYTECODE 1
+ENV PYTHONUNBUFFERED 1
+
+RUN mkdir -p /base/etc
+COPY etc /base/etc
+
+RUN mkdir -p /base/bin
+COPY bin /base/bin
+
+WORKDIR /base
+COPY automation_requirements.txt /base
+COPY src/app_requirements.txt /base
+COPY src/test_requirements.txt /base
+
+RUN apk add bash==5.0.0-r0 && \
+    bin/install-packages.sh && \
+    pip3 install -r automation_requirements.txt && \
+    pip3 install -r app_requirements.txt && \
+    pip3 install -r test_requirements.txt && \
+    pip3 install awscli==1.16.213 && \
+    pip3 install "connexion[swagger-ui]"
@@ -0,0 +1,11 @@
+[[source]]
+name = "pypi"
+url = "https://pypi.org/simple"
+verify_ssl = true
+
+[dev-packages]
+
+[packages]
+
+[requires]
+python_version = "3.7"
README.md | 22
@@ -10,6 +10,8 @@ Payment subscription REST api for customers:
 - yarn (https://yarnpkg.com): package manager for node modules for setting up serverless for running and deploying subhub
 - cloc
 - [GraphViz](https://graphviz.org/)
+- Docker
+- Docker Compose
 
 ## Important Environment Variables
 The CFG object is for accessing values either from the `subhub/.env` file and/or superseded by env vars.
@@ -123,7 +125,7 @@ The `check` task has several subtasks:
 - `doit check:json` This makes sure all of the json files in the git repo can be loaded.
 - `doit check:yaml` This makes sure all of the yaml files in the git repo can be loaded.
 - `doit check:black` This runs `black --check` to ensure formatting.
-- `doit check:reqs` This compares subhub/requirements.txt vs what is installed via pip freeze.
+- `doit check:reqs` This compares automation_requirements.txt vs what is installed via pip freeze.
 
 ## setup the virtualenv (venv)
 This task will create the virtual env and install all of the requirements for use in running code locally.
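The `doit check:json` subtask described above boils down to walking the repository and attempting to parse every JSON file. A minimal standalone sketch of the idea (the function name and return convention here are illustrative, not the project's actual `gen_file_check` implementation):

```python
import json
from pathlib import Path

def check_json(root):
    """Return the JSON files under `root` that fail to parse."""
    bad = []
    for path in Path(root).rglob('*.json'):
        try:
            json.loads(path.read_text())
        except json.JSONDecodeError:
            bad.append(str(path))
    return bad
```

An empty result means every file loaded cleanly; the real task fails the build when any file cannot be loaded.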
@@ -177,13 +179,23 @@ This run the `serverless deploy` command and requires the user to be logged into
 doit deploy
 ```
 
+Alternatively you may deploy a subset of the `deploy` function by specifying the component as such:
+
+```
+doit deploy SERVICE FUNCTION
+```
+
+Where,
+    SERVICE is the service that you are deploying from the set of fxa.
+    FUNCTION is the function that you are deploying from the set of sub, hub, mia.
+
 ## dependency graph
 This command will generate a GraphViz `dot` file that can be used to generate a media file.
 ```
 doit graph
 ```
 
 ## dependency graph image
 This command will generate a PNG of the dependency graph.
 ```
 doit draw
@@ -202,12 +214,12 @@ doit draw
 A [Postman](https://www.getpostman.com/) URL collection is available for testing, learning,
 etc [here](https://www.getpostman.com/collections/ab233178aa256e424668).
 
-## [Performance Tests](./subhub/tests/performance/README.md)
+## [Performance Tests](./{sub,hub}/tests/performance/README.md)
 
 ## Behave Tests
 
-The `behave` tests for this project are located in the `subhub/tests/bdd` directory. The
+The `behave` tests for this project are located in the `src/{sub,hub}/tests/bdd` directory. The
 steps that are available presently are available in the `steps` subdirectory. You can run this in a
 few ways:
 * Jetbrains PyCharm: A runtime configuration is loaded in that allows for debugging and running of the feature files.
-* Command line: `cd subhub/tests/bdd && behave` after satisfying the `requirements.txt` in that directory.
+* Command line: `cd src/{sub,hub}/tests/bdd && behave` after satisfying the `src/test_requirements.txt`.
@@ -12,7 +12,7 @@ administered by the Mozilla IT SRE team. We are available on #it-sre on slack.
 ## Secrets
 Secrets in this project all reside in AWS Secrets Manager. There is one set of secrets for each
 environment: prod, stage, qa, dev. These secrets are loaded as environment variables via the
-subhub/secrets.py file and then generally used via the env loading mechanism in suhub/cfg.py which
+src/shared/secrets.py file and then generally used via the env loading mechanism in src/shared/cfg.py which
 uses decouple to load them as fields.
 
 ## Source Repos
@@ -1,19 +1,57 @@
 version: "3.7"
 
 services:
-  subhub:
-    container_name: subhub
-    image: mozilla/subhub
-    # The `command` section below can take any command from
-    # `doit list` and run them here.
-    command: local
+  base:
+    image: mozilla/subhub-base
+    container_name: base
+    build:
+      context: .
+      dockerfile: Dockerfile.base
+  sub:
+    container_name: sub
+    image: mozilla/sub
+    command: python3 sub/app.py
+    build:
+      context: src/sub
+      dockerfile: Dockerfile
+      args:
+        LOCAL_FLASK_PORT: 5000
+        DYNALITE_PORT: 4567
+    environment:
+      AWS_ACCESS_KEY_ID: "fake-id"
+      AWS_SECRET_ACCESS_KEY: "fake-key"
-      STRIPE_API_KEY: "sk_test_123"
-      SUPPORT_API_KEY: "support_test"
+      LOCAL_FLASK_PORT: 5000
+      STRIPE_API_KEY: $STRIPE_API_KEY
+      PAYMENT_API_KEY: $PAYMENT_API_KEY
+      SUPPORT_API_KEY: $SUPPORT_API_KEY
+    ports:
+      - "5000:5000"
+    depends_on:
+      - base
+      - dynalite
+
+  hub:
+    container_name: hub
+    image: mozilla/hub
+    command: python3 hub/app.py
+    build:
+      context: src/hub
+      dockerfile: Dockerfile
+      args:
+        LOCAL_FLASK_PORT: 5001
+        DYNALITE_PORT: 4567
+    environment:
+      AWS_ACCESS_KEY_ID: "fake-id"
+      AWS_SECRET_ACCESS_KEY: "fake-key"
+      STRIPE_API_KEY: $STRIPE_API_KEY
+      HUB_API_KEY: $HUB_API_KEY
+    ports:
+      - "5001:5001"
+    depends_on:
+      - base
+      - dynalite
+
   dynalite:
     build: https://github.com/vitarn/docker-dynalite.git
     ports:
       - 4567
dodo.py | 250
@@ -14,8 +14,11 @@ from functools import lru_cache
 from doit.tools import LongRunning
 from pathlib import Path
 from pkg_resources import parse_version
+from os.path import join, dirname, realpath
 
-from subhub.cfg import CFG, call, CalledProcessError
+sys.path.insert(0, join(dirname(realpath(__file__)), 'src'))
+
+from shared.cfg import CFG, call, CalledProcessError
 
 DOIT_CONFIG = {
     'default_tasks': [
@@ -45,6 +48,10 @@ SVCS = [
     svc for svc in os.listdir('services')
     if os.path.isdir(f'services/{svc}') if os.path.isfile(f'services/{svc}/serverless.yml')
 ]
+SRCS = [
+    src for src in os.listdir('src/')
+    if os.path.isdir(f'src/{src}') if src != 'shared'
+]
 
 mutex = threading.Lock()
 
@@ -106,6 +113,59 @@ def pyfiles(path, exclude=None):
     pyfiles = set(Path(path).rglob('*.py')) - set(Path(exclude).rglob('*.py') if exclude else [])
     return [pyfile.as_posix() for pyfile in pyfiles]
 
+def load_serverless(svc):
+    return yaml.safe_load(open(f'services/{svc}/serverless.yml'))
+
+def get_svcs_to_funcs():
+    return {svc: list(load_serverless(svc)['functions'].keys()) for svc in SVCS}
+
+def svc_func(svc, func=None):
+    assert svc in SVCS, f"svc '{svc}' not in {SVCS}"
+    funcs = get_svcs_to_funcs()[svc]
+    if func:
+        assert func in funcs, f"for svc '{svc}', func '{func}' not in {funcs}"
+    return svc, func
+
+def svc_action(svc, action=None):
+    assert svc in SVCS, f"svc '{svc}' not in {SVCS}"
+    assert action in ('create', 'delete')
+    return svc, action
+
+def parameterized(dec):
+    def layer(*args, **kwargs):
+        def repl(f):
+            return dec(f, *args, **kwargs)
+        return repl
+    layer.__name__ = dec.__name__
+    layer.__doc__ = dec.__doc__
+    return layer
+
+@parameterized
+def guard(func, env):
+    def wrapper(*args, **kwargs):
+        task_dict = func(*args, **kwargs)
+        if CFG.DEPLOYED_ENV == env and CFG('DEPLOY_TO', None) != env:
+            task_dict['actions'] = [
+                f'attempting to run {func.__name__} without env var DEPLOY_TO={env} set',
+                'false',
+            ]
+        return task_dict
+    wrapper.__name__ = func.__name__
+    wrapper.__doc__ = func.__doc__
+    return wrapper
+
+@parameterized
+def skip(func, taskname):
+    def wrapper(*args, **kwargs):
+        task_dict = func(*args, **kwargs)
+        envvar = f'SKIP_{taskname.upper()}'
+        if CFG(envvar, None):
+            task_dict['uptodate'] = [True]
+        return task_dict
+    wrapper.__name__ = func.__name__
+    wrapper.__doc__ = func.__doc__
+    return wrapper
+
 # TODO: This needs to check for the existence of the dependency prior to execution or update project requirements.
 def task_count():
     '''
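The `parameterized` helper added in this hunk is the standard trick for writing decorators that take their own arguments: the outer layer captures the arguments, then applies the real decorator to the function. A self-contained sketch of how `skip` builds on it, reading the flag straight from `os.environ` instead of the project's `CFG` object:

```python
import os

def parameterized(dec):
    """Let a decorator accept its own arguments."""
    def layer(*args, **kwargs):
        def repl(f):
            return dec(f, *args, **kwargs)
        return repl
    return layer

@parameterized
def skip(func, taskname):
    """Mark a doit-style task as up-to-date when SKIP_<TASKNAME> is set."""
    def wrapper(*args, **kwargs):
        task_dict = func(*args, **kwargs)
        if os.environ.get(f'SKIP_{taskname.upper()}'):
            task_dict['uptodate'] = [True]
        return task_dict
    return wrapper

@skip('test')
def task_test():
    # a toy doit task dict, standing in for the real task_test
    return {'actions': ['tox']}
```

With `SKIP_TEST` set in the environment, `task_test()` gains `'uptodate': [True]` and doit treats it as already done; without it, the dict passes through unchanged.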
@@ -203,9 +263,9 @@ def gen_file_check(name, func, *patterns, message=None):
 
 def check_black():
     '''
-    run black --check in subhub directory
+    run black --check in src/ directory
     '''
-    black_check = f'black --check {CFG.PROJECT_PATH}'
+    black_check = f'black --check src/'
     return {
         'name': 'black',
         'task_dep': [
@@ -273,7 +333,7 @@ def task_check():
     yield gen_file_check('json', json.load, 'services/**/*.json')
     yield gen_file_check('yaml', yaml.safe_load, 'services/**/*.yaml', 'services/**/*.yml')
     header_message = "consider running 'doit header:<filename>'"
-    yield gen_file_check('header', check_header, 'subhub/**/*.py', message=header_message)
+    yield gen_file_check('header', check_header, 'src/**/*.py', message=header_message)
     yield check_black()
     yield check_reqs()
 
@@ -300,13 +360,12 @@ def task_creds():
         ],
     }
 
+@skip('test')
 def task_stripe():
     '''
     check to see if STRIPE_API_KEY is set
     '''
     def stripe_check():
-        if os.environ.get('SKIP_TESTS', None):
-            return True
         try:
             CFG.STRIPE_API_KEY
         except:
@@ -327,20 +386,20 @@
 
 def task_black():
     '''
-    run black on subhub/
+    run black on src/
     '''
     return {
         'actions': [
-            f'black {CFG.PROJECT_PATH}',
+            f'black src/'
         ],
     }
 
 def task_header():
     '''
-    apply the HEADER to all the py files under subhub/
+    apply the HEADER to all the py files under src/
     '''
     def ensure_headers():
-        for pyfile in pyfiles('subhub/'):
+        for pyfile in pyfiles('src/'):
             with open(pyfile, 'r') as old:
                 content = old.read()
                 if has_header(content):
@@ -356,12 +415,13 @@ def task_header():
         ],
     }
 
+@skip('venv')
 def task_venv():
     '''
     setup virtual env
     '''
-    app_requirements = f'{CFG.PROJECT_PATH}/requirements.txt'
-    test_requirements = f'{CFG.PROJECT_PATH}/tests/requirements.txt'
+    app_requirements = f'src/app_requirements.txt'
+    test_requirements = f'src/test_requirements.txt'
     return {
         'task_dep': [
             'check',
@@ -372,9 +432,6 @@ def task_venv():
             f'[ -f "{app_requirements}" ] && {PIP3} install -r "{app_requirements}"',
             f'[ -f "{test_requirements}" ] && {PIP3} install -r "{test_requirements}"',
         ],
-        'uptodate': [
-            lambda: os.environ.get('SKIP_VENV', None),
-        ],
     }
 
 def task_dynalite():
@@ -435,22 +492,13 @@ def task_local():
     '''
     run local deployment
     '''
-    ENVS=envs(
-        AWS_ACCESS_KEY_ID='fake-id',
-        AWS_SECRET_ACCESS_KEY='fake-key',
-        PYTHONPATH='.'
-    )
     return {
         'task_dep': [
             'check',
-            'stripe',
-            'venv',
-            'dynalite:start', #FIXME: test removed as a dep due to concurrency bug
+            'tar'
         ],
         'actions': [
-            f'{PYTHON3} -m setup develop',
-            'echo $PATH',
-            f'env {ENVS} {PYTHON3} subhub/app.py',
+            f'docker-compose up --build'
         ],
     }
 
@@ -479,7 +527,7 @@ def task_perf_local():
         AWS_SECRET_ACCESS_KEY='fake-key',
         PYTHONPATH='.'
     )
-    cmd = f'env {ENVS} {PYTHON3} subhub/app.py'
+    cmd = f'env {ENVS} {PYTHON3} src/sub/app.py' #FIXME: should work on hub too...
     return {
         'basename': 'perf-local',
         'task_dep':[
@@ -491,8 +539,8 @@ def task_perf_local():
         'actions':[
             f'{PYTHON3} -m setup develop',
             'echo $PATH',
-            LongRunning(f'nohup env {envs} {PYTHON3} subhub/app.py > /dev/null &'),
-            f'cd subhub/tests/performance && locust -f locustfile.py --host=http://localhost:{FLASK_PORT}'
+            LongRunning(f'nohup env {envs} {PYTHON3} src/sub/app.py > /dev/null &'), #FIXME: same as above
+            f'cd src/sub/tests/performance && locust -f locustfile.py --host=http://localhost:{FLASK_PORT}' #FIXME: same
         ]
     }
 
@@ -509,25 +557,29 @@ def task_perf_remote():
         ],
         'actions':[
             f'{PYTHON3} -m setup develop',
-            f'cd subhub/tests/performance && locust -f locustfile.py --host=https://{CFG.DEPLOY_DOMAIN}'
+            f'cd src/sub/tests/performance && locust -f locustfile.py --host=https://{CFG.DEPLOY_DOMAIN}' #FIXME: same as above
         ]
     }
 
+@skip('mypy')
 def task_mypy():
     '''
     run mpyp, a static type checker for Python 3
     '''
-    return {
-        'task_dep': [
-            'check',
-            'yarn',
-            'venv',
-        ],
-        'actions': [
-            f'cd {CFG.REPO_ROOT} && {envs(MYPYPATH="venv")} {MYPY} -p subhub'
-        ],
-    }
+    for pkg in ('sub', 'hub'):
+        yield {
+            'name': pkg,
+            'task_dep': [
+                'check',
+                'yarn',
+                'venv',
+            ],
+            'actions': [
+                f'cd {CFG.REPO_ROOT}/src && env {envs(MYPYPATH=VENV)} {MYPY} -p {pkg}',
+            ],
+        }
 
 @skip('test')
 def task_test():
     '''
     run tox in tests/
@@ -539,21 +591,18 @@ def task_test():
             'yarn',
             'venv',
             'dynalite:stop',
-            'mypy',
+            #'mypy', FIXME: this needs to be activated once mypy is figured out
         ],
         'actions': [
             f'cd {CFG.REPO_ROOT} && tox',
         ],
-        'uptodate': [
-            lambda: os.environ.get('SKIP_TESTS', None),
-        ],
     }
 
 def task_pytest():
     '''
     run pytest per test file
     '''
-    for filename in Path('subhub/tests').glob('**/*.py'):
+    for filename in Path('src/sub/tests').glob('**/*.py'): #FIXME: should work on hub too...
         yield {
             'name': filename,
             'task_dep': [
@@ -585,41 +634,77 @@ def task_package():
         ],
     }
 
-def task_deploy():
+def task_tar():
     '''
-    run serverless deploy -v for every service
+    tar up source files, dereferncing symlinks
     '''
-    def deploy_to_prod():
-        if CFG.DEPLOYED_ENV == 'prod':
-            if CFG('DEPLOY_TO', None) == 'prod':
-                return True
-            return False
-        return True
-    for svc in SVCS:
-        servicepath = f'services/{svc}'
-        curl = f'curl --silent https://{CFG.DEPLOYED_ENV}.{svc}.mozilla-subhub.app/v1/version'
-        describe = 'git describe --abbrev=7'
+    excludes = ' '.join([
+        f'--exclude={CFG.SRCTAR}',
+        '--exclude=__pycache__',
+        '--exclude=*.pyc',
+        '--exclude=.env',
+        '--exclude=.git',
+    ])
+    for src in SRCS:
+        ## it is important to note that this is required to keep the tarballs from
+        ## genereating different checksums and therefore different layers in docker
+        cmd = f'cd {CFG.REPO_ROOT}/src/{src} && echo "$(git status -s)" > {CFG.REVISION} && tar cvh {excludes} . | gzip -n > {CFG.SRCTAR} && rm {CFG.REVISION}'
         yield {
-            'name': svc,
+            'name': src,
             'task_dep': [
                 'check',
-                'creds',
-                'stripe',
-                'yarn',
-                'test',
+                'check:noroot',
+                # 'test',
             ],
             'actions': [
-                f'cd {servicepath} && env {envs()} {SLS} deploy --stage {CFG.DEPLOYED_ENV} --aws-s3-accelerate -v',
-                f'echo "{curl}"',
-                f'{curl}',
-                f'echo "{describe}"',
-                f'{describe}',
-            ] if deploy_to_prod() else [
-                f'attempting to deploy to prod without env var DEPLOY_TO=prod set',
-                'false',
+                f'echo "{cmd}"',
+                f'{cmd}',
             ],
         }
 
+@guard('prod')
+def task_deploy():
+    '''
+    deploy <svc> [<func>]
+    '''
+    def deploy(args):
+        svc, func = svc_func(*args)
+        if func:
+            deploy_cmd = f'cd services/{svc} && env {envs()} {SLS} deploy function --stage {CFG.DEPLOYED_ENV} --aws-s3-accelerate -v --function {func}'
+        else:
+            deploy_cmd = f'cd services/{svc} && env {envs()} {SLS} deploy --stage {CFG.DEPLOYED_ENV} --aws-s3-accelerate -v'
+        call(deploy_cmd, stdout=None, stderr=None)
+    return {
+        'task_dep': [
+            'check',
+            'creds',
+            'stripe',
+            'yarn',
+            'test',
+        ],
+        'pos_arg': 'args',
+        'actions': [(deploy,)],
+    }
+
+@guard('prod')
+def task_domain():
+    '''
+    domain <svc> [create|delete]
+    '''
+    def domain(args):
+        svc, action = svc_action(*args)
+        assert action in ('create', 'delete'), "provide 'create' or 'delete'"
+        domain_cmd = f'cd services/{svc} && env {envs()} {SLS} {action}_domain --stage {CFG.DEPLOYED_ENV} -v'
+        call(domain_cmd, stdout=None, stderr=None)
+    return {
+        'task_dep': [
+            'check',
+            'creds',
+            'yarn',
+        ],
+        'pos_arg': 'args',
+        'actions': [(domain,)],
+    }
+
 def task_remove():
     '''
     run serverless remove -v for every service
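The comment in `task_tar` about tarball checksums is doing real work: by default `gzip` embeds the input file's name and modification time in the archive header, so compressing identical content at different times yields different bytes, which in turn busts Docker's layer cache. The `-n` flag omits both fields. A quick demonstration (any scratch directory will do):

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"
echo "same content" > src.txt

gzip -n -c src.txt > first.gz     # compress once
touch -d '2000-01-01' src.txt     # change the mtime, content untouched
gzip -n -c src.txt > second.gz    # compress again "later"

cmp first.gz second.gz && echo "identical"
```

Dropping `-n` from the two `gzip` invocations makes `cmp` fail, because the differing mtimes land in the gzip headers and produce different archive bytes.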
@@ -655,13 +740,17 @@ def task_curl():
     '''
     curl again remote deployment url: /version, /deployed
     '''
-    for route in ('deployed', 'version'):
-        yield {
-            'name': route,
-            'actions': [
-                f'curl --silent https://{CFG.DEPLOYED_ENV}.fxa.mozilla-subhub.app/v1/{route}',
-            ],
-        }
+    def curl(args):
+        svc, func = svc_func(*args)
+        funcs = [func] if func else [func for func in get_svcs_to_funcs()[svc] if func != 'mia']
+        for func in funcs:
+            for route in ('version', 'deployed'):
+                cmd = f'curl --silent https://{CFG.DEPLOYED_ENV}.{svc}.mozilla-subhub.app/v1/{func}/{route}'
+                call(f'echo "{cmd}"; {cmd}', stdout=None, stderr=None)
+    return {
+        'pos_arg': 'args',
+        'actions': [(curl,)],
+    }
 
 def task_rmrf():
     '''
@@ -687,7 +776,8 @@ def task_rmrf():
         yield {
             'name': name,
             'actions': [
-                f'sudo find {CFG.REPO_ROOT} -depth -name {name} -type {type} -exec {rmrf}' for name, type in targets.items()
+                f'sudo find {CFG.REPO_ROOT} -depth -name {name} -type {type} -exec {rmrf}'
+                for name, type in targets.items()
             ],
         }
 
@@ -713,4 +803,4 @@ def task_draw():
         'file_dep': ['tasks.dot'],
         'targets': ['tasks.png'],
         'actions': ['dot -Tpng %(dependencies)s -o %(targets)s'],
     }
@@ -6,7 +6,7 @@ libffi-dev==3.2.1-r6
 openssl-dev==1.1.1c-r0
 zeromq-dev==4.3.2-r1
 linux-headers==4.19.36-r0
-nodejs==10.16.0-r0
+nodejs
 curl==7.65.1-r0
 yarn==1.16.0-r0
 gcc==8.3.0-r0
@@ -15,4 +15,4 @@ musl-dev==1.1.22-r3
 pkgconfig
 git==2.22.0-r0
-graphviz-dev==2.40.1-r1
+graphviz==2.40.1-r1
@@ -12,6 +12,8 @@
     "serverless-domain-manager": "3.2.7",
     "serverless-offline": "5.10.1",
     "serverless-plugin-tracing": "2.0.0",
-    "serverless-python-requirements": "5.0.0"
+    "serverless-python-requirements": "5.0.0",
+    "serverless-package-external": "1.1.1",
+    "serverless-wsgi": "1.7.3"
   }
 }
@@ -0,0 +1,71 @@
+# Serverless
+
+## Commands
+
+From this (`services/fxa`) directory, execute the following commands of interest. NOTE: If you require extra detail of the Serverless framework,
+you will need to set the following environment variable.
+
+`export SLS_DEBUG=*`
+
+### Offline Testing
+
+Start DynamoDB locally, `sls dynamodb start &`
+
+Start offline services, `serverless offline start --host 0.0.0.0 --stage dev`
+
+Once this is done, you can access the DynamoDB Javascript Shell by
+navigating [here](http://localhost:8000/shell/). Additionally, you may interact with the application as you would on AWS via commands such as:
+* Perform a HTTP GET of `http://localhost:3000/v1/sub/version`
+
+### Domain Creation
+
+`sls create_domain`
+
+### Packaging
+
+`sls package --stage ENVIRONMENT`
+
+Where `ENVIRONMENT` is in the set of (dev, staging, prod).
+
+You may inspect the contents of each package with:
+
+`zipinfo .serverless/{ARCHIVE}.zip`
+
+Where `ARCHIVE` is a member of
+
+* sub
+* hub
+* mia
+
+### Logs
+
+You can inspect the Serverless logs by function via the command:
+
+`sls logs -f {FUNCTION}`
+
+Where `FUNCTION` is a member of
+
+* sub
+* hub
+* mia
+
+#### Live Tailing of the Logs
+
+`serverless logs -f {FUNCTION} --tail`
+
+### Running
+
+`sls wsgi serve`
+
+### To-do
+
+* [Investigate Serverless Termination Protection for Production](https://www.npmjs.com/package/serverless-termination-protection)
+* [Investigate metering requests via apiKeySourceType](https://serverless.com/framework/docs/providers/aws/events/apigateway/)
+
+## References
+
+1. [SLS_DEBUG](https://github.com/serverless/serverless/pull/1729/files)
+2. [API Gateway Resource Policy Support](https://github.com/serverless/serverless/issues/4926)
+3. [Add apig resource policy](https://github.com/serverless/serverless/pull/5071)
+4. [add PRIVATE endpointType](https://github.com/serverless/serverless/pull/5080)
+5. [Serverless AWS Lambda Events](https://serverless.com/framework/docs/providers/aws/events/)
@@ -3,3 +3,4 @@ fxa:
   stage: 142069644989
   qa: 142069644989
   dev: 927034868273
+  fab: 927034868273
@@ -0,0 +1,6 @@
+{
+    "LAMBDA_MEMORY_SIZE": 256,
+    "LAMBDA_RESERVED_CONCURRENCY": 1,
+    "LAMBDA_TIMEOUT": 5,
+    "MIA_RATE_SCHEDULE": "24 hours"
+}
@@ -0,0 +1,6 @@
+{
+    "LAMBDA_MEMORY_SIZE": 512,
+    "LAMBDA_RESERVED_CONCURRENCY": 5,
+    "LAMBDA_TIMEOUT": 5,
+    "MIA_RATE_SCHEDULE": "30 days"
+}
@@ -0,0 +1,6 @@
+{
+    "LAMBDA_MEMORY_SIZE": 512,
+    "LAMBDA_RESERVED_CONCURRENCY": 5,
+    "LAMBDA_TIMEOUT": 5,
+    "MIA_RATE_SCHEDULE": "6 hours"
+}
@@ -0,0 +1,6 @@
+{
+    "LAMBDA_MEMORY_SIZE": 256,
+    "LAMBDA_RESERVED_CONCURRENCY": 1,
+    "LAMBDA_TIMEOUT": 5,
+    "MIA_RATE_SCHEDULE": "30 days"
+}
@@ -0,0 +1,6 @@
+{
+    "LAMBDA_MEMORY_SIZE": 512,
+    "LAMBDA_RESERVED_CONCURRENCY": 5,
+    "LAMBDA_TIMEOUT": 5,
+    "MIA_RATE_SCHEDULE": "6 hours"
+}
@@ -0,0 +1,17 @@
+default:
+  DEPLOYED_BY: ${env:DEPLOYED_BY}
+  DEPLOYED_ENV: ${env:DEPLOYED_ENV}
+  DEPLOYED_WHEN: ${env:DEPLOYED_WHEN}
+  STAGE: ${self:provider.stage}
+  PROJECT_NAME: ${env:PROJECT_NAME}
+  BRANCH: ${env:BRANCH}
+  REVISION: ${env:REVISION}
+  VERSION: ${env:VERSION}
+  REMOTE_ORIGIN_URL: ${env:REMOTE_ORIGIN_URL}
+  LOG_LEVEL: ${env:LOG_LEVEL}
+  NEW_RELIC_ACCOUNT_ID: ${env:NEW_RELIC_ACCOUNT_ID}
+  NEW_RELIC_TRUSTED_ACCOUNT_ID: ${env:NEW_RELIC_TRUSTED_ACCOUNT_ID}
+  NEW_RELIC_SERVERLESS_MODE_ENABLED: ${env:NEW_RELIC_SERVERLESS_MODE_ENABLED}
+  NEW_RELIC_DISTRIBUTED_TRACING_ENABLED: ${env:NEW_RELIC_DISTRIBUTED_TRACING_ENABLED}
+  PROFILING_ENABLED: ${env:PROFILING_ENABLED}
+  PROCESS_EVENTS_HOURS: 6
@@ -1,43 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-import os
-import sys
-
-import awsgi
-import newrelic.agent
-
-from aws_xray_sdk.core import xray_recorder
-from aws_xray_sdk.ext.flask.middleware import XRayMiddleware
-
-newrelic.agent.initialize()
-
-# First some funky path manipulation so that we can work properly in
-# the AWS environment
-dir_path = os.path.dirname(os.path.realpath(__file__))
-sys.path.append(dir_path)
-
-from subhub.app import create_app
-from subhub.log import get_logger
-
-logger = get_logger()
-
-xray_recorder.configure(service="subhub")
-
-# Create app at module scope to cache it for repeat requests
-try:
-    app = create_app()
-    XRayMiddleware(app.app, xray_recorder)
-except Exception:  # pylint: disable=broad-except
-    logger.exception("Exception occurred while loading app")
-    # TODO: Add Sentry exception catch here
-    raise
-
-@newrelic.agent.lambda_handler()
-def handle(event, context):
-    try:
-        logger.info("handling event", subhub_event=event, context=context)
-        return awsgi.response(app, event, context)
-    except Exception as e:  # pylint: disable=broad-except
-        logger.exception("exception occurred", subhub_event=event, context=context, error=e)
-        # TODO: Add Sentry exception catch here
-        raise
@@ -0,0 +1,44 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+import os
+import sys
+
+# TODO!
+# import newrelic.agent
+import serverless_wsgi
+
+serverless_wsgi.TEXT_MIME_TYPES.append("application/custom+json")
+
+from os.path import join, dirname, realpath
+# First some funky path manipulation so that we can work properly in
+# the AWS environment
+sys.path.insert(0, join(dirname(realpath(__file__)), 'src'))
+
+# TODO!
+# newrelic.agent.initialize()
+
+from aws_xray_sdk.core import xray_recorder, patch_all
+from aws_xray_sdk.core.context import Context
+from aws_xray_sdk.ext.flask.middleware import XRayMiddleware
+
+from hub.app import create_app
+from shared.log import get_logger
+
+logger = get_logger()
+
+xray_recorder.configure(service="fxa.hub")
+patch_all()
+
+hub_app = create_app()
+XRayMiddleware(hub_app.app, xray_recorder)
+
+# TODO!
+# @newrelic.agent.lambda_handler()
+def handle(event, context):
+    try:
+        logger.info("handling hub event", subhub_event=event, context=context)
+        return serverless_wsgi.handle_request(hub_app.app, event, context)
+    except Exception as e:  # pylint: disable=broad-except
+        logger.exception("exception occurred", subhub_event=event, context=context, error=e)
+        # TODO: Add Sentry exception catch here
+        raise
@@ -0,0 +1,41 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+import os
+import sys
+
+# TODO!
+# import newrelic.agent
+import serverless_wsgi
+
+from os.path import join, dirname, realpath
+# First some funky path manipulation so that we can work properly in
+# the AWS environment
+sys.path.insert(0, join(dirname(realpath(__file__)), 'src'))
+
+# TODO!
+# newrelic.agent.initialize()
+
+from aws_xray_sdk.core import xray_recorder, patch_all
+from aws_xray_sdk.core.context import Context
+from aws_xray_sdk.ext.flask.middleware import XRayMiddleware
+
+from hub.verifications import events_check
+from shared.log import get_logger
+
+logger = get_logger()
+
+xray_recorder.configure(service="fxa.mia")
+patch_all()
+
+# TODO!
+# @newrelic.agent.lambda_handler()
+def handle_mia(event, context):
+    try:
+        logger.info("handling mia event", subhub_event=event, context=context)
+        processing_duration = int(os.getenv('PROCESS_EVENTS_HOURS', '6'))
+        events_check.process_events(processing_duration)
+    except Exception as e:  # pylint: disable=broad-except
+        logger.exception("exception occurred", subhub_event=event, context=context, error=e)
+        # TODO: Add Sentry exception catch here
+        raise
@@ -0,0 +1,43 @@
+Resources:
+  UsersTable:
+    Type: 'AWS::DynamoDB::Table'
+    Properties:
+      TableName: ${self:custom.usersTable}
+      AttributeDefinitions:
+        - AttributeName: user_id
+          AttributeType: S
+      KeySchema:
+        - AttributeName: user_id
+          KeyType: HASH
+      PointInTimeRecoverySpecification:
+        PointInTimeRecoveryEnabled: true
+      # Set the capacity to auto-scale
+      BillingMode: PAY_PER_REQUEST
+  DeletedUsersTable:
+    Type: 'AWS::DynamoDB::Table'
+    Properties:
+      TableName: ${self:custom.deletedUsersTable}
+      AttributeDefinitions:
+        - AttributeName: user_id
+          AttributeType: S
+      KeySchema:
+        - AttributeName: user_id
+          KeyType: HASH
+      PointInTimeRecoverySpecification:
+        PointInTimeRecoveryEnabled: true
+      # Set the capacity to auto-scale
+      BillingMode: PAY_PER_REQUEST
+  EventsTable:
+    Type: 'AWS::DynamoDB::Table'
+    Properties:
+      TableName: ${self:custom.eventsTable}
+      AttributeDefinitions:
+        - AttributeName: event_id
+          AttributeType: S
+      KeySchema:
+        - AttributeName: event_id
+          KeyType: HASH
+      PointInTimeRecoverySpecification:
+        PointInTimeRecoveryEnabled: true
+      # Set the capacity to auto-scale
+      BillingMode: PAY_PER_REQUEST
@ -0,0 +1,24 @@
Resources:
  SubHubSNS:
    Type: AWS::SNS::Topic
    Properties:
      DisplayName: FxA ${self:provider.stage} Event Data
      TopicName: ${self:provider.stage}-fxa-event-data
  SubHubTopicPolicy:
    Type: AWS::SNS::TopicPolicy
    Properties:
      PolicyDocument:
        Id: AWSAccountTopicAccess
        Version: '2008-10-17'
        Statement:
          - Sid: FxAStageAccess
            Effect: Allow
            Principal:
              AWS: arn:aws:iam::${self:provider.snsaccount}:root
            Action:
              - SNS:Subscribe
              - SNS:Receive
              - SNS:GetTopicAttributes
            Resource: arn:aws:sns:us-west-2:903937621340:${self:provider.stage}-fxa-event-data
      Topics:
        - Ref: SubHubSNS
|
|
@ -0,0 +1,15 @@
default:
  cost-center: 1440
  project-name: Subhub
  project-desc: Payment subscription REST API for customers
  project-email: subhub@mozilla.com
  deployed-by: ${env:DEPLOYED_BY}
  deployed-env: ${env:DEPLOYED_ENV}
  deployed-when: ${env:DEPLOYED_WHEN}
  deployed-method: serverless
  sources: https://github.com/mozilla/subhub
  urls: prod.fxa.mozilla-subhub.app/v1
  keywords: subscriptions:flask:serverless:swagger
  branch: ${env:BRANCH}
  revision: ${env:REVISION}
  version: ${env:VERSION}
|
|
@ -2,133 +2,42 @@
service:
  name: fxa

plugins:
  - serverless-python-requirements
  - serverless-domain-manager
  - serverless-plugin-tracing
  - serverless-dynamodb-local
  - serverless-offline

provider:
  name: aws
  runtime: python3.7
  region: us-west-2
  stage: ${opt:stage, 'dev'}
  stackName: ${self:custom.prefix}-stack
  apiName: ${self:custom.prefix}-apigw
  deploymentPrefix: ${self:custom.prefix}
  endpointType: regional
  logRetentionInDays: 90
  logs:
    restApi: true
  memorySize: 512
  reservedConcurrency: 5
  timeout: 5
  tracing: true
  snsaccount: ${file(./accounts.yml):fxa.${self:provider.stage}}
  environment:
    DEPLOYED_BY: ${env:DEPLOYED_BY}
    DEPLOYED_ENV: ${env:DEPLOYED_ENV}
    DEPLOYED_WHEN: ${env:DEPLOYED_WHEN}
    STAGE: ${self:provider.stage}
    PROJECT_NAME: ${env:PROJECT_NAME}
    BRANCH: ${env:BRANCH}
    REVISION: ${env:REVISION}
    VERSION: ${env:VERSION}
    REMOTE_ORIGIN_URL: ${env:REMOTE_ORIGIN_URL}
    LOG_LEVEL: ${env:LOG_LEVEL}
    NEW_RELIC_ACCOUNT_ID: ${env:NEW_RELIC_ACCOUNT_ID}
    NEW_RELIC_TRUSTED_ACCOUNT_ID: ${env:NEW_RELIC_TRUSTED_ACCOUNT_ID}
    NEW_RELIC_SERVERLESS_MODE_ENABLED: ${env:NEW_RELIC_SERVERLESS_MODE_ENABLED}
    NEW_RELIC_DISTRIBUTED_TRACING_ENABLED: ${env:NEW_RELIC_DISTRIBUTED_TRACING_ENABLED}
    PROFILING_ENABLED: ${env:PROFILING_ENABLED}
    USER_TABLE:
      Ref: 'Users'
    EVENT_TABLE:
      Ref: 'Events'
    DELETED_USER_TABLE:
      Ref: 'DeletedUsers'
  tags:
    cost-center: 1440
    project-name: subhub
    project-desc: payment subscription REST API for customers
    project-email: subhub@mozilla.com
    deployed-by: ${env:DEPLOYED_BY}
    deployed-env: ${env:DEPLOYED_ENV}
    deployed-when: ${env:DEPLOYED_WHEN}
    deployed-method: serverless
    sources: https://github.com/mozilla/subhub
    urls: prod.fxa.mozilla-subhub.app/v1
    keywords: subhub:subscriptions:flask:serverless:swagger
    branch: ${env:BRANCH}
    revision: ${env:REVISION}
    version: ${env:VERSION}
  stackTags:
    service: ${self:service}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - 'dynamodb:Query'
        - 'dynamodb:Scan'
        - 'dynamodb:GetItem'
        - 'dynamodb:PutItem'
        - 'dynamodb:UpdateItem'
        - 'dynamodb:DeleteItem'
        - 'dynamodb:DescribeTable'
        - 'dynamodb:CreateTable'
      Resource:
        - { 'Fn::GetAtt': ['Users', 'Arn'] }
        - { 'Fn::GetAtt': ['Events', 'Arn'] }
        - { 'Fn::GetAtt': ['DeletedUsers', 'Arn'] }
    - Effect: Allow
      Action:
        - 'secretsmanager:GetSecretValue'
      Resource:
        - 'Fn::Join': [':', ['arn:aws:secretsmanager', Ref: AWS::Region, Ref: AWS::AccountId, 'secret:${self:provider.stage}/*']]
    - Effect: Allow
      Action:
        - logs:CreateLogGroup
        - logs:CreateLogStream
        - logs:PutLogEvents
      Resource:
        - 'Fn::Join': [':', ['arn:aws:logs', Ref: AWS::Region, Ref: AWS::AccountId, 'log-group:/aws/lambda/*:*:*']]
    - Effect: Allow
      Action:
        - kms:Decrypt
      Resource:
        - 'Fn::Join': [':', ['arn:aws:kms', Ref: AWS::Region, Ref: AWS::AccountId, 'alias/*']]
        - 'Fn::Join': [':', ['arn:aws:kms', Ref: AWS::Region, Ref: AWS::AccountId, 'key/*']]
    - Effect: Allow
      Action:
        - 'xray:PutTraceSegments'
        - 'xray:PutTelemetryRecords'
      Resource:
        - '*'
    - Effect: Allow
      Action:
        - sns:Publish
      Resource:
        - 'Fn::Join': [':', ['arn:aws:sns', Ref: AWS::Region, Ref: AWS::AccountId, '${self:provider.stage}-fxa-event-data']]
  resourcePolicy: ${self:custom.resourcePolicies.${self:custom.access.${self:provider.stage}}}

package:
  individually: true
  exclude:
    - '**/*'
    - 'node_modules/*'
  include:
    - 'handler.py'
    - 'subhub/**'
    - 'src/**'

custom:
  stage: ${opt:stage, self:provider.stage}
  usersTable: ${self:custom.stage}-users
  deletedUsersTable: ${self:custom.stage}-deletedUsers
  eventsTable: ${self:custom.stage}-events
  prefix: ${self:provider.stage}-${self:service.name}
  subdomain: ${self:provider.stage}.${self:service.name}
  pythonRequirements:
    dockerizePip: 'non-linux'
    fileName: subhub/requirements.txt
  git-repo: https://github.com/mozilla/subhub
    fileName: "../../src/app_requirements.txt"
  packageExternal:
    external:
      - '../../src/sub'
      - '../../src/hub'
      - '../../src/shared'
      - '../../src'
  git-repo: "https://github.com/mozilla/subhub"
  dynamodb:
    stages:
      - dev
    start:
      port: 8000
      inMemory: true
      heapInitial: 200m
      heapMax: 1g
      migrate: true
      seed: true
      convertEmptyValues: true
  customDomain:
    domainName: ${self:custom.subdomain}.mozilla-subhub.app
    certificateName: ${self:custom.subdomain}.mozilla-subhub.app
|
||||
|
@ -142,6 +51,7 @@ custom:
    stage: restricted
    qa: restricted
    dev: unfettered
    fab: unfettered
  resourcePolicies:
    unfettered:
      - Effect: Allow
|
||||
|
@ -177,7 +87,7 @@ custom:
            - execute-api:/*/*/support/*
          Condition:
            IpAddress:
              aws:SourceIp: ${file(./whitelist.yml):support.${self:provider.stage}}
              aws:SourceIp: ${file(whitelist.yml):support.${self:provider.stage}}
      - Effect: Allow
        Principal: "*"
        Action: execute-api:Invoke
|
||||
|
@ -186,7 +96,7 @@ custom:
            - execute-api:/*/*/plans
          Condition:
            IpAddress:
              aws:SourceIp: ${file(./whitelist.yml):payments.${self:provider.stage}}
              aws:SourceIp: ${file(whitelist.yml):payments.${self:provider.stage}}
      - Effect: Allow
        Principal: "*"
        Action: execute-api:Invoke
|
||||
|
@ -194,108 +104,143 @@ custom:
            - execute-api:/*/*/hub
          Condition:
            IpAddress:
              aws:SourceIp: ${file(./whitelist.yml):hub}
              aws:SourceIp: ${file(whitelist.yml):hub}

plugins:
  - serverless-python-requirements
  - serverless-domain-manager
  - serverless-plugin-tracing
  - serverless-dynamodb-local
  - serverless-package-external
  - serverless-offline
provider:
  name: aws
  runtime: python3.7
  region: us-west-2
  stage: ${opt:stage}
  stackName: ${self:custom.prefix}-stack
  apiName: ${self:custom.prefix}-apigw
  deploymentPrefix: ${self:custom.prefix}
  endpointType: regional
  logRetentionInDays: 90
  logs:
    # NOTE: https://github.com/serverless/serverless/issues/6112
    # Logging documentation:
    # https://serverless.com/framework/docs/providers/aws/events/apigateway/
    restApi: true
  tracing:
    lambda: true
    apiGateway: true
  memorySize: ${file(config/config.${self:custom.stage}.json):LAMBDA_MEMORY_SIZE}
  reservedConcurrency: ${file(config/config.${self:custom.stage}.json):LAMBDA_RESERVED_CONCURRENCY}
  timeout: ${file(config/config.${self:custom.stage}.json):LAMBDA_TIMEOUT}
  snsaccount: ${file(accounts.yml):fxa.${self:provider.stage}}
  environment: ${file(env.yml):${self:custom.stage}, file(env.yml):default}
  tags: ${file(resources/tags.yml):${self:custom.stage}, file(resources/tags.yml):default}
  stackTags:
    service: ${self:service}
  # Reference: https://serverless.com/blog/abcs-of-iam-permissions/
  iamRoleStatements:
    - Effect: Allow
      Action:
        - 'dynamodb:Query'
        - 'dynamodb:Scan'
        - 'dynamodb:GetItem'
        - 'dynamodb:PutItem'
        - 'dynamodb:UpdateItem'
        - 'dynamodb:DeleteItem'
        - 'dynamodb:DescribeTable'
        - 'dynamodb:CreateTable'
      Resource: 'arn:aws:dynamodb:us-west-2:*:*'
    - Effect: Allow
      Action:
        - 'secretsmanager:GetSecretValue'
      Resource:
        - 'Fn::Join': [':', ['arn:aws:secretsmanager', Ref: AWS::Region, Ref: AWS::AccountId, 'secret:${self:provider.stage}/*']]
    - Effect: Allow
      Action:
        - logs:CreateLogGroup
        - logs:CreateLogStream
        - logs:PutLogEvents
      Resource:
        - 'Fn::Join': [':', ['arn:aws:logs', Ref: AWS::Region, Ref: AWS::AccountId, 'log-group:/aws/lambda/*:*:*']]
    - Effect: Allow
      Action:
        - kms:Decrypt
      Resource:
        - 'Fn::Join': [':', ['arn:aws:kms', Ref: AWS::Region, Ref: AWS::AccountId, 'alias/*']]
        - 'Fn::Join': [':', ['arn:aws:kms', Ref: AWS::Region, Ref: AWS::AccountId, 'key/*']]
    - Effect: Allow
      Action:
        - 'xray:PutTraceSegments'
        - 'xray:PutTelemetryRecords'
      Resource:
        - '*'
    - Effect: Allow
      Action:
        - sns:Publish
      Resource:
        - 'Fn::Join': [':', ['arn:aws:sns', Ref: AWS::Region, Ref: AWS::AccountId, '${self:provider.stage}-fxa-event-data']]
  resourcePolicy: ${self:custom.resourcePolicies.${self:custom.access.${self:provider.stage}}}
functions:
  subhub:
    name: ${self:custom.prefix}-function
    description: >
      subhub service for handling subscription services interactions
    handler: handler.handle
    timeout: 30
  sub:
    name: '${self:custom.prefix}-sub'
    description: "Function for handling subscription services interactions\n"
    handler: subhandler.handle
    events:
      - http:
          method: ANY
          path: /
          cors: true
      - http:
          method: ANY
          path: '{proxy+}'
          cors: true

      - http: {
          path: "/sub/{proxy+}",
          method: get,
          cors: false,
          private: false
        }
  hub:
    name: ${self:custom.prefix}-hub
    description: >
      Function for handling hub (webhook) event interactions
    handler: hubhandler.handle
    events:
      - http: {
          path: /hub,
          method: post,
          cors: false,
          private: false
        }
      - http: {
          path: "/hub/{proxy+}",
          method: any,
          cors: false,
          private: false
        }
  mia:
    name: ${self:custom.prefix}-mia
    description: >
      Function for reconciliation of missing hub events
    handler: miahandler.handle
    events:
      # Invoke the Lambda function on a schedule (either cron or rate based).
      #
      # Reference: Serverless Event Scheduling
      # https://serverless.com/framework/docs/providers/aws/events/schedule/
      # Reference: AWS CloudWatch Scheduled Event:
      # https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/EventTypes.html#schedule_event_type
      #
      # Rate syntax, http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html#RateExpressions
      # rate(value unit)
      # where value is an unsigned integer
      # and unit is a unit of time in the set (minute, minutes, hour, hours, day, days)
      #
      # Cron syntax, http://docs.aws.amazon.com/lambda/latest/dg/tutorial-scheduled-events-schedule-expressions.html
      # cron(minutes hours day-of-month(DOM) month day-of-week(DOW) year)
      # where
      # Field   | Values    | Wildcards
      # Minutes | 0-59      | ,-*/
      # Hours   | 0-23      | ,-*/
      # DOM     | 1-31      | ,-*/?LW
      # Month   | 1-12      | ,-*/
      # DOW     | 1-7       | ,-*?/L#
      # Year    | 1970-2199 | ,-*/
      - schedule: rate(${file(config/config.${self:custom.stage}.json):MIA_RATE_SCHEDULE})
resources:
  Resources:
    SubHubSNS:
      Type: AWS::SNS::Topic
      Properties:
        DisplayName: FxA ${self:provider.stage} Event Data
        TopicName: ${self:provider.stage}-fxa-event-data
    SubHubTopicPolicy:
      Type: AWS::SNS::TopicPolicy
      Properties:
        PolicyDocument:
          Id: AWSAccountTopicAccess
          Version: '2008-10-17'
          Statement:
            - Sid: FxAStageAccess
              Effect: Allow
              Principal:
                AWS: arn:aws:iam::${self:provider.snsaccount}:root
              Action:
                - SNS:Subscribe
                - SNS:Receive
                - SNS:GetTopicAttributes
              Resource: arn:aws:sns:us-west-2:903937621340:${self:provider.stage}-fxa-event-data
        Topics:
          - Ref: SubHubSNS
    Users:
      Type: 'AWS::DynamoDB::Table'
      DeletionPolicy: Retain
      Properties:
        AttributeDefinitions:
          - AttributeName: user_id
            AttributeType: S
        KeySchema:
          - AttributeName: user_id
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST
        PointInTimeRecoverySpecification:
          PointInTimeRecoveryEnabled: true
    DeletedUsers:
      Type: 'AWS::DynamoDB::Table'
      DeletionPolicy: Retain
      Properties:
        AttributeDefinitions:
          - AttributeName: user_id
            AttributeType: S
        KeySchema:
          - AttributeName: user_id
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST
        PointInTimeRecoverySpecification:
          PointInTimeRecoveryEnabled: true
    Events:
      Type: 'AWS::DynamoDB::Table'
      DeletionPolicy: Retain
      Properties:
        AttributeDefinitions:
          - AttributeName: event_id
            AttributeType: S
        KeySchema:
          - AttributeName: event_id
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST
        PointInTimeRecoverySpecification:
          PointInTimeRecoveryEnabled: true
  Outputs:
    SubHubSNS:
      Value:
        Ref: SubHubSNS
      Export:
        Name: ${self:custom.stage}-SubHubSNS
    SubHubTopicPolicy:
      Value:
        Ref: SubHubTopicPolicy
    Users:
      Value:
        Ref: Users
    Events:
      Value:
        Ref: Events
    DeletedUsers:
      Value:
        Ref: DeletedUsers
  - ${file(resources/dynamodb-table.yml)}
  - ${file(resources/sns-topic.yml)}
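The mia schedule comment above summarizes the CloudWatch `rate(value unit)` rules: singular units pair with a value of 1, plural units with larger values. A small standalone sketch of that check (the helper is illustrative, not part of the repo):

```python
import re

# Validate an AWS 'rate(value unit)' schedule expression, assuming the
# CloudWatch rule that value 1 takes a singular unit and values > 1 plural.
RATE = re.compile(r"rate\((\d+) (minute|minutes|hour|hours|day|days)\)")

def valid_rate(expr: str) -> bool:
    m = RATE.fullmatch(expr)
    if not m:
        return False
    value, unit = int(m.group(1)), m.group(2)
    # singular unit iff value == 1
    return (value == 1) == (not unit.endswith("s"))

print(valid_rate("rate(6 hours)"))  # True
print(valid_rate("rate(1 hours)"))  # False
```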
|
||||
|
|
|
@ -0,0 +1,45 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
import sys

# TODO!
# import newrelic.agent
import serverless_wsgi

serverless_wsgi.TEXT_MIME_TYPES.append("application/custom+json")

from os.path import join, dirname, realpath
# First some funky path manipulation so that we can work properly in
# the AWS environment
sys.path.insert(0, join(dirname(realpath(__file__)), 'src'))

# TODO!
# newrelic.agent.initialize()

from aws_xray_sdk.core import xray_recorder, patch_all
from aws_xray_sdk.core.context import Context
from aws_xray_sdk.ext.flask.middleware import XRayMiddleware


from sub.app import create_app
from shared.log import get_logger

logger = get_logger()

xray_recorder.configure(service="fxa.sub")
patch_all()

sub_app = create_app()
XRayMiddleware(sub_app.app, xray_recorder)

# TODO!
# @newrelic.agent.lambda_handler()
def handle(event, context):
    try:
        logger.info("handling sub event", subhub_event=event, context=context)
        return serverless_wsgi.handle_request(sub_app.app, event, context)
    except Exception as e:  # pylint: disable=broad-except
        logger.exception("exception occurred", subhub_event=event, context=context, error=e)
        # TODO: Add Sentry exception catch here
        raise
|
|
@ -1 +0,0 @@
../../subhub
|
|
@ -3,6 +3,10 @@ payments:
  - 54.68.203.164
  - 35.164.199.47
  - 52.36.241.207
prod-test:
  - 3.217.6.148
  - 3.214.25.122
  - 3.94.151.18
support:
  prod:
    - 13.210.202.182/32
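These whitelist entries feed the `aws:SourceIp` conditions in the API Gateway resource policy. A minimal sketch of how such an allow-list is evaluated, reusing two CIDR entries shown above (the evaluation here is illustrative; API Gateway performs the real check):

```python
import ipaddress

# CIDR entries taken from the whitelist above; a bare IP is treated as /32
allowed = [
    ipaddress.ip_network("13.210.202.182/32"),
    ipaddress.ip_network("3.217.6.148/32"),
]

def is_allowed(ip: str) -> bool:
    # True when the source address falls inside any allowed network
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in allowed)

print(is_allowed("13.210.202.182"))  # True
print(is_allowed("10.0.0.1"))        # False
```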
|
||||
|
|
|
@ -21,4 +21,3 @@ warn_no_return = False
warn_redundant_casts = True
warn_unused_ignores = True
warn_unreachable = False
|
||||
|
||||
|
|
setup.py | 6
|
@ -7,10 +7,10 @@ from setuptools import setup, find_packages
with open("README.md", "r") as fh:
    long_description = fh.read()

with open('subhub/requirements.txt') as f:
with open('src/app_requirements.txt') as f:
    app_requirements = f.read().splitlines()

with open('subhub/tests/requirements.txt') as f:
with open('src/test_requirements.txt') as f:
    test_requirements = f.read().splitlines()

setup_requirements = [

@ -37,7 +37,7 @@ setup(
    install_requires=app_requirements,
    license='Mozilla Public License 2.0',
    include_package_data=True,
    packages=find_packages(include=['subhub']),
    packages=find_packages(include=['src']),
    setup_requires=setup_requirements,
    test_suite='tests',
    tests_require=test_requirements,
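setup.py loads each requirements file with a plain `read().splitlines()`. A standalone sketch of that parsing; the comment/blank filtering is an addition here, not something setup.py does:

```python
from io import StringIO

def read_requirements(fh) -> list:
    # splitlines() mirrors setup.py; additionally drop blanks and comments
    return [
        line.strip()
        for line in fh.read().splitlines()
        if line.strip() and not line.lstrip().startswith("#")
    ]

sample = StringIO("# pinned deps\nstripe==2.35.1\n\nstructlog==19.1.0\n")
print(read_requirements(sample))  # ['stripe==2.35.1', 'structlog==19.1.0']
```

Without the filtering, blank lines and comments end up in `install_requires`, which setuptools tolerates but pins less cleanly.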
|
||||
|
|
|
@ -1,7 +1,6 @@
# requirements for subhub application to run
# requirements for the hub application to run
# do not add testing reqs or automation reqs here
attrdict==2.0.1
aws-wsgi==0.0.8
boto3==1.9.184
botocore==1.12.184
certifi==2019.6.16

@ -31,9 +30,10 @@ PyYAML==5.1.1
requests==2.22.0
s3transfer==0.2.1
six==1.12.0
stripe==2.33.0
stripe==2.35.1
structlog==19.1.0
urllib3==1.25.3
pyinstrument==3.0.3
aws-xray-sdk==2.4.2
cachetools==3.1.1
cachetools==3.1.1
serverless-wsgi==1.7.3
|
|
@ -0,0 +1,31 @@
FROM mozilla/subhub-base:latest
MAINTAINER Stewart Henderson <shenderson@mozilla.com>

ARG STRIPE_API_KEY
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ARG LOCAL_FLASK_PORT
ARG SUPPORT_API_KEY
ARG DYNALITE_PORT

ENV STRIPE_API_KEY=$STRIPE_API_KEY
ENV AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
ENV AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
ENV LOCAL_FLASK_PORT=$LOCAL_FLASK_PORT
ENV HUB_API_KEY=$HUB_API_KEY
ENV DYNALITE_PORT=$DYNALITE_PORT
ENV FLASK_ENV=development

ENV DEPLOYED_ENV=local
ENV BRANCH=local
ENV REVISION=latest
ENV VERSION=latest
ENV PROJECT_NAME=subhub
ENV REMOTE_ORIGIN_URL=git@github.com:mozilla/subhub.git

EXPOSE $LOCAL_FLASK_PORT

RUN mkdir -p /subhub/hub
ADD .src.tar.gz /subhub/hub
WORKDIR /subhub/
ENV PYTHONPATH=.
|
|
@ -0,0 +1,146 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at https://mozilla.org/MPL/2.0/.

import os
import sys

import connexion
import stripe

from flask import current_app, g, jsonify
from flask_cors import CORS
from flask import request

from shared import secrets
from shared.cfg import CFG
from shared.exceptions import SubHubError
from shared.db import HubEvent
from shared.log import get_logger

logger = get_logger()

# Set up Stripe error handlers
def intermittent_stripe_error(e):
    logger.error("intermittent stripe error", error=e)
    return jsonify({"message": f"{e.user_message}"}), 503


def server_stripe_error(e):
    logger.error("server stripe error", error=e)
    return (
        jsonify({"message": f"{e.user_message}", "params": None, "code": f"{e.code}"}),
        500,
    )


def server_stripe_error_with_params(e):
    logger.error("server stripe error with params", error=e)
    return (
        jsonify(
            {
                "message": f"{e.user_message}",
                "params": f"{e.param}",
                "code": f"{e.code}",
            }
        ),
        500,
    )


def server_stripe_card_error(e):
    logger.error("server stripe card error", error=e)
    return jsonify({"message": f"{e.user_message}", "code": f"{e.code}"}), 402


def is_container() -> bool:
    import requests

    try:
        requests.get(f"http://dynalite:{CFG.DYNALITE_PORT}")
        return True
    except Exception as e:
        pass
    return False


def create_app(config=None):
    logger.info("creating flask app", config=config)
    region = "localhost"
    host = f"http://localhost:{CFG.DYNALITE_PORT}"
    if is_container():
        host = f"http://dynalite:{CFG.DYNALITE_PORT}"
    stripe.api_key = CFG.STRIPE_API_KEY
    if CFG.AWS_EXECUTION_ENV:
        region = "us-west-2"
        host = None
    options = dict(swagger_ui=CFG.SWAGGER_UI)

    app = connexion.FlaskApp(__name__, specification_dir=".", options=options)
    app.add_api("swagger.yaml", pass_context_arg_name="request", strict_validation=True)

    app.app.hub_table = HubEvent(table_name=CFG.EVENT_TABLE, region=region, host=host)

    if not app.app.hub_table.model.exists():
        app.app.hub_table.model.create_table(
            read_capacity_units=1, write_capacity_units=1, wait=True
        )

    @app.app.errorhandler(SubHubError)
    def display_subhub_errors(e: SubHubError):
        if e.status_code == 500:
            logger.error("display subhub errors", error=e)
        response = jsonify(e.to_dict())
        response.status_code = e.status_code
        return response

    for error in (
        stripe.error.APIConnectionError,
        stripe.error.APIError,
        stripe.error.RateLimitError,
        stripe.error.IdempotencyError,
    ):
        app.app.errorhandler(error)(intermittent_stripe_error)

    for error in (stripe.error.AuthenticationError,):
        app.app.errorhandler(error)(server_stripe_error)

    for error in (
        stripe.error.InvalidRequestError,
        stripe.error.StripeErrorWithParamCode,
    ):
        app.app.errorhandler(error)(server_stripe_error_with_params)

    for error in (stripe.error.CardError,):
        app.app.errorhandler(error)(server_stripe_card_error)

    @app.app.before_request
    def before_request():
        g.hub_table = current_app.hub_table
        g.app_system_id = None
        if CFG.PROFILING_ENABLED:
            if "profile" in request.args and not hasattr(sys, "_called_from_test"):
                from pyinstrument import Profiler

                g.profiler = Profiler()
                g.profiler.start()

    @app.app.after_request
    def after_request(response):
        if not hasattr(g, "profiler") or hasattr(sys, "_called_from_test"):
            return response
        if CFG.PROFILING_ENABLED:
            g.profiler.stop()
            output_html = g.profiler.output_html()
            return app.app.make_response(output_html)
        return response

    CORS(app.app)
    return app


if __name__ == "__main__":
    app = create_app()
    app.debug = True
    app.use_reloader = True
    app.run(host="0.0.0.0", port=CFG.LOCAL_FLASK_PORT)
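`create_app` registers one handler function for several exception classes by looping over tuples of types. A dependency-free sketch of that registration pattern; the classes below are stand-ins for the `stripe.error` types, and the registry is a plain dict rather than Flask's errorhandler machinery:

```python
# Illustrative stand-ins for stripe.error exception classes
class APIConnectionError(Exception): ...
class RateLimitError(Exception): ...
class AuthenticationError(Exception): ...

handlers = {}

def register(exc_classes, handler):
    # Several exception classes can share a single handler, as in create_app
    for exc in exc_classes:
        handlers[exc] = handler

def intermittent_error(e):
    return {"message": str(e)}, 503

def server_error(e):
    return {"message": str(e)}, 500

register((APIConnectionError, RateLimitError), intermittent_error)
register((AuthenticationError,), server_error)

body, status = handlers[RateLimitError](RateLimitError("rate limited"))
print(status)  # 503
```

This keeps the status-code mapping in one place: transient vendor failures answer 503, authentication problems 500, card errors 402 in the real app.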
|
|
@ -6,7 +6,7 @@ from abc import ABC

import flask

from subhub.log import get_logger
from hub.shared.log import get_logger

logger = get_logger()
|
||||
|
|
@ -10,9 +10,9 @@ from typing import Dict
from botocore.exceptions import ClientError
from stripe.error import APIConnectionError

from subhub.hub.routes.abstract import AbstractRoute
from subhub.cfg import CFG
from subhub.log import get_logger
from hub.routes.abstract import AbstractRoute
from hub.shared.cfg import CFG
from hub.shared.log import get_logger

logger = get_logger()
|
||||
|
|
@ -2,9 +2,9 @@
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at https://mozilla.org/MPL/2.0/.

from subhub.hub.routes.firefox import FirefoxRoute
from subhub.hub.routes.salesforce import SalesforceRoute
from subhub.hub.routes.static import StaticRoutes
from hub.routes.firefox import FirefoxRoute
from hub.routes.salesforce import SalesforceRoute
from hub.routes.static import StaticRoutes


class RoutesPipeline:
|
|
@ -7,10 +7,9 @@ import requests

from typing import Dict

from subhub.hub.routes.abstract import AbstractRoute
from subhub.cfg import CFG

from subhub.log import get_logger
from hub.routes.abstract import AbstractRoute
from shared.cfg import CFG
from shared.log import get_logger

logger = get_logger()
|
||||
|
|
@ -0,0 +1 @@
../shared
|
|
@ -0,0 +1,228 @@
swagger: "2.0"

info:
  title: "SubHub - Hub API"
  version: "1.0"

consumes:
  - "application/json"
produces:
  - "application/json"

basePath: /v1

securityDefinitions:
  HubApiKey:
    type: apiKey
    in: header
    name: Authorization
    description: |
      Hub validation
    x-apikeyInfoFunc: shared.authentication.hub_auth
parameters:
  uidParam:
    in: path
    name: uid
    type: string
    required: true
    description: User ID
  subIdParam:
    in: path
    name: sub_id
    type: string
    required: true
    description: Subscription ID
paths:
  /hub/version:
    get:
      operationId: shared.version.get_version
      tags:
        - Version
      summary: SubHub - Hub API version
      description: Show Subhub version string (git describe --abbrev=7)
      produces:
        - application/json
      responses:
        200:
          description: Success
          schema:
            $ref: '#/definitions/Version'
  /hub/deployed:
    get:
      operationId: shared.deployed.get_deployed
      tags:
        - Deployed
      summary: SubHub deployed
      description: Show Subhub deployed env vars
      produces:
        - application/json
      responses:
        200:
          description: Success
          schema:
            $ref: '#/definitions/Deployed'
  /hub:
    post:
      operationId: hub.vendor.controller.view
      tags:
        - Hub
      summary: Receives hub calls
      description: Receives hub calls.
      produces:
        - application/json
      responses:
        200:
          description: Hub call received successfully.
          schema:
            type: object
            properties:
              message:
                type: string
        500:
          description: Error - unable to receive webhook.
          schema:
            $ref: '#/definitions/Errormessage'
      parameters:
        - in: body
          name: data
          schema:
            type: object
definitions:
  Version:
    type: object
    properties:
      BRANCH:
        type: string
        example: master
      VERSION:
        type: string
        example: v0.0.3-11-gaf8af91
      REVISION:
        type: string
        example: af8af912255d204bcd178fe47e9a1af3215e09d4
  Deployed:
    type: object
    properties:
      DEPLOYED_BY:
        type: string
        example: sidler@76
      DEPLOYED_ENV:
        type: string
        example: dev
      DEPLOYED_WHEN:
        type: string
        example: 2019-07-26T21:21:16.180200
  Plans:
    type: array
    items:
      type: object
      properties:
        plan_id:
          type: string
          example: pro_basic_823
        product_id:
          type: string
          example: pro_basic
        product_name:
          type: string
          example: Moz Sub
        interval:
          type: string
          example: month
          enum:
            - day
            - week
            - month
            - year
        amount:
          type: integer
          example: 500
          description: A positive number in cents representing how much to charge on a recurring basis.
        currency:
          type: string
          example: usd
        plan_name:
          type: string
          example: Monthly Rocket Launches
  Subscriptions:
    type: object
    properties:
      subscriptions:
        type: array
        required: [
          "subscription_id",
          "status",
          "plan_name",
          "plan_id",
          "ended_at",
          "current_period_start",
          "current_period_end",
          "cancel_at_period_end"
        ]
        items:
          type: object
          properties:
            subscription_id:
              type: string
              example: sub_abc123
            plan_id:
              type: string
              example: pro_basic_823
            plan_name:
              type: string
              example: "pro_basic"
            current_period_end:
              type: number
              description: Seconds since UNIX epoch.
              example: 1557361022
            current_period_start:
              type: number
              description: Seconds since UNIX epoch.
              example: 1557361022
            end_at:
              type: number
              description: Non-null if the subscription is ending at a point in time.
              example: 1557361022
            status:
              type: string
              description: Subscription status.
              example: active
            cancel_at_period_end:
              type: boolean
              description: Shows if the subscription will be cancelled at the end of the period.
              example: true
            failure_code:
              type: string
              description: Shows the failure code for a subscription that is incomplete. This is an optional field.
              example: Card declined
            failure_message:
              type: string
              description: Shows the failure message for a subscription that is incomplete. This is an optional field.
              example: Your card was declined.
  Errormessage:
    type: object
    properties:
      message:
        type: string
        example: The resource is not available.
      code:
        type: number
        example: 404
  IntermittentError:
    type: object
    properties:
      message:
        type: string
        example: Connection cannot be completed.
  ServerError:
    type: object
    properties:
      message:
        type: string
        example: Server not available
      param:
        type: string
        example: Customer not found
      code:
        type: string
        example: Invalid Account
|
|
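The `Subscriptions` schema above can be sanity-checked with plain Python. This is an illustrative sketch, not code from the repository: the sample payload and the `missing_fields` helper are invented for the example.

```python
# Illustrative only: a payload shaped like the Subscriptions schema above,
# plus a helper that reports which schema-required fields are missing.
REQUIRED = {
    "subscription_id", "status", "plan_name", "plan_id", "ended_at",
    "current_period_start", "current_period_end", "cancel_at_period_end",
}

def missing_fields(subscription: dict) -> set:
    """Return the schema-required fields absent from one subscription item."""
    return REQUIRED - subscription.keys()

sample = {
    "subscriptions": [
        {
            "subscription_id": "sub_abc123",
            "status": "active",
            "plan_name": "pro_basic",
            "plan_id": "pro_basic_823",
            "ended_at": None,
            "current_period_start": 1557361022,
            "current_period_end": 1557361022,
            "cancel_at_period_end": True,
        }
    ]
}

# One (possibly empty) set of missing fields per subscription item.
problems = [missing_fields(item) for item in sample["subscriptions"]]
```

A consumer of the API could run a check like this before trusting a response in tests.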
@@ -6,7 +6,6 @@ import os
 import sys
 import signal
 import subprocess
 import uuid
 import logging
 
 import psutil
@@ -14,12 +13,9 @@ import pytest
 import stripe
 from flask import g
 
-from subhub.sub import payments
-from subhub.app import create_app
-from subhub.cfg import CFG
-from subhub.customer import create_customer
-
-from subhub.log import get_logger
+from hub.app import create_app
+from hub.shared.cfg import CFG
+from hub.shared.log import get_logger
 
 logger = get_logger()
 
@@ -37,7 +33,6 @@ def pytest_configure():
     # Latest boto3 now wants fake credentials around, so here we are.
     os.environ["AWS_ACCESS_KEY_ID"] = "fake"
     os.environ["AWS_SECRET_ACCESS_KEY"] = "fake"
     os.environ["USER_TABLE"] = "users-testing"
     os.environ["EVENT_TABLE"] = "events-testing"
     os.environ["ALLOWED_ORIGIN_SYSTEMS"] = "Test_system,Test_System,Test_System1"
     sys._called_from_test = True
@@ -73,37 +68,5 @@ def pytest_unconfigure():
-def app():
-    app = create_app()
-    with app.app.app_context():
-        g.subhub_account = app.app.subhub_account
-        g.hub_table = app.app.hub_table
-        g.subhub_deleted_users = app.app.subhub_deleted_users
-        yield app
-
-
-@pytest.fixture()
-def create_customer_for_processing():
-    uid = uuid.uuid4()
-    customer = create_customer(
-        g.subhub_account,
-        user_id="process_customer",
-        source_token="tok_visa",
-        email="test_fixture@{}tester.com".format(uid.hex),
-        origin_system="Test_system",
-        display_name="John Tester",
-    )
-    yield customer
-
-
-@pytest.fixture(scope="function")
-def create_subscription_for_processing():
-    uid = uuid.uuid4()
-    subscription = payments.subscribe_to_plan(
-        "process_test",
-        {
-            "pmt_token": "tok_visa",
-            "plan_id": "plan_EtMcOlFMNWW4nd",
-            "origin_system": "Test_system",
-            "email": "subtest@{}tester.com".format(uid),
-            "display_name": "John Tester",
-        },
-    )
-    yield subscription
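The `app` fixture above relies on pytest's yield-fixture pattern: everything before `yield` is setup, everything after runs at teardown. A minimal self-contained sketch of the same idea, with stub names invented for illustration:

```python
import contextlib

@contextlib.contextmanager
def fake_app_context():
    # Setup: mimic create_app() wiring stub resources onto a context object.
    state = {"subhub_account": "stub-account", "hub_table": "stub-table"}
    yield state
    # Teardown: runs when the context exits, like fixture finalization.
    state.clear()

with fake_app_context() as g:
    table = g["hub_table"]
```

pytest fixtures behave the same way, except pytest drives the generator for you and injects the yielded value into the test.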
@@ -6,9 +6,9 @@ import mockito
 import requests
 import boto3
 import flask
-from subhub.cfg import CFG
+from hub.shared.cfg import CFG
 
-from subhub.tests.unit.stripe.utils import run_test, MockSqsClient
+from hub.tests.unit.stripe.utils import run_test, MockSqsClient
 
 
 def test_stripe_hub_succeeded(mocker):
@@ -13,9 +13,9 @@ import requests
 
 from mockito import when, mock, unstub
 
-from subhub.tests.unit.stripe.utils import run_test, MockSqsClient, MockSnsClient
-from subhub.cfg import CFG
-from subhub.log import get_logger
+from hub.tests.unit.stripe.utils import run_test, MockSqsClient, MockSnsClient
+from hub.shared.cfg import CFG
+from hub.shared.log import get_logger
 
 logger = get_logger()
 
@@ -3,22 +3,21 @@
 # file, You can obtain one at https://mozilla.org/MPL/2.0/.
 
 import time
+from datetime import datetime, timedelta
 import os
 import json
 
 import boto3
 import flask
+from flask import Response
 import stripe
 import requests
 
-from flask import Response
 from mockito import when, mock, unstub
-from datetime import datetime, timedelta
 
-from subhub.tests.unit.stripe.utils import run_view, run_event_process
-from subhub.cfg import CFG
-from subhub.hub.verifications.events_check import EventCheck, process_events
-from subhub.log import get_logger
+from hub.tests.unit.stripe.utils import run_view, run_event_process
+from hub.verifications.events_check import EventCheck, process_events
+from hub.shared.cfg import CFG
+from hub.shared.log import get_logger
 
 logger = get_logger()
 
@@ -0,0 +1,137 @@
{
  "created": 1326853478,
  "livemode": false,
  "id": "evt_00000000000000",
  "type": "invoice.finalized",
  "object": "event",
  "request": null,
  "pending_webhooks": 1,
  "api_version": "2018-02-06",
  "data": {
    "object": {
      "id": "in_0000000",
      "object": "invoice",
      "account_country": "US",
      "account_name": "Mozilla Corporation",
      "amount_due": 1000,
      "amount_paid": 1000,
      "amount_remaining": 0,
      "application_fee_amount": null,
      "attempt_count": 1,
      "attempted": true,
      "auto_advance": false,
      "billing": "charge_automatically",
      "billing_reason": "subscription_create",
      "charge": "ch_0000000",
      "collection_method": "charge_automatically",
      "created": 1559568873,
      "currency": "usd",
      "custom_fields": null,
      "customer": "cus_00000000000",
      "customer_address": null,
      "customer_email": "john@gmail.com",
      "customer_name": null,
      "customer_phone": null,
      "customer_shipping": null,
      "customer_tax_exempt": "none",
      "customer_tax_ids": [
      ],
      "default_payment_method": null,
      "default_source": null,
      "default_tax_rates": [
      ],
      "description": null,
      "discount": null,
      "due_date": null,
      "ending_balance": 0,
      "footer": null,
      "hosted_invoice_url": "https://pay.stripe.com/invoice/invst_000000",
      "invoice_pdf": "https://pay.stripe.com/invoice/invst_000000/pdf",
      "lines": {
        "object": "list",
        "data": [
          {
            "id": "sli_000000",
            "object": "line_item",
            "amount": 1000,
            "currency": "usd",
            "description": "1 Moz-Sub × Moz_Sub (at $10.00 / month)",
            "discountable": true,
            "livemode": false,
            "metadata": {
            },
            "period": {
              "end": 1562160873,
              "start": 1559568873
            },
            "plan": {
              "id": "plan_000000",
              "object": "plan",
              "active": true,
              "aggregate_usage": null,
              "amount": 1000,
              "billing_scheme": "per_unit",
              "created": 1555354251,
              "currency": "usd",
              "interval": "month",
              "interval_count": 1,
              "livemode": false,
              "metadata": {
                "service1": "VPN",
                "service2": "Water Bottles"
              },
              "nickname": "Mozilla_Subscription",
              "product": "prod_000000",
              "tiers": null,
              "tiers_mode": null,
              "transform_usage": null,
              "trial_period_days": 30,
              "usage_type": "licensed"
            },
            "proration": false,
            "quantity": 1,
            "subscription": "sub_000000",
            "subscription_item": "si_000000",
            "tax_amounts": [
            ],
            "tax_rates": [
            ],
            "type": "subscription"
          }
        ],
        "has_more": false,
        "total_count": 1,
        "url": "/v1/invoices/in_0000000/lines"
      },
      "livemode": false,
      "metadata": {
      },
      "next_payment_attempt": null,
      "number": "C8828DAC-0001",
      "paid": true,
      "payment_intent": "pi_000000",
      "period_end": 1559568873,
      "period_start": 1559568873,
      "post_payment_credit_notes_amount": 0,
      "pre_payment_credit_notes_amount": 0,
      "receipt_number": null,
      "starting_balance": 0,
      "statement_descriptor": null,
      "status": "paid",
      "status_transitions": {
        "finalized_at": 1559568873,
        "marked_uncollectible_at": null,
        "paid_at": 1559568874,
        "voided_at": null
      },
      "subscription": "sub_000000",
      "subtotal": 1000,
      "tax": null,
      "tax_percent": null,
      "total": 1000,
      "total_tax_amounts": [
      ],
      "webhooks_delivered_at": null
    }
  }
}
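Fixtures like the `invoice.finalized` payload above are typically loaded in tests and picked apart field by field. A hedged sketch of reading the interesting fields; the `summarize_invoice_event` helper is invented for illustration and is not part of the hub code:

```python
import json

# Abbreviated copy of the invoice.finalized fixture above, keeping only
# the fields this sketch reads; tests would load the full JSON file.
event = json.loads("""
{
  "id": "evt_00000000000000",
  "type": "invoice.finalized",
  "data": {
    "object": {
      "id": "in_0000000",
      "customer": "cus_00000000000",
      "total": 1000,
      "currency": "usd",
      "status": "paid"
    }
  }
}
""")

def summarize_invoice_event(evt: dict) -> str:
    """Build a short log line from a Stripe invoice event payload."""
    invoice = evt["data"]["object"]
    amount = invoice["total"] / 100  # Stripe amounts are in cents
    return (
        f"{evt['type']}: {invoice['id']} for {invoice['customer']} "
        f"({amount:.2f} {invoice['currency']}, {invoice['status']})"
    )

summary = summarize_invoice_event(event)
```

The `data.object` nesting is the part worth remembering: every Stripe webhook event wraps the affected resource the same way.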
@@ -9,10 +9,10 @@ import requests
 import boto3
 import flask
 
-from subhub.cfg import CFG
-from subhub import secrets
+from hub.shared.cfg import CFG
+from hub.shared import secrets
 
-from subhub.tests.unit.stripe.utils import run_test, MockSqsClient
+from hub.tests.unit.stripe.utils import run_test, MockSqsClient
 
 
 def run_customer(mocker, data, filename):
@@ -10,10 +10,10 @@ import flask
 import stripe.error
 from mockito import when, mock, unstub
 
-from subhub.cfg import CFG
-from subhub import secrets
+from hub.shared.cfg import CFG
+from hub.shared import secrets
 
-from subhub.tests.unit.stripe.utils import run_test, MockSqsClient
+from hub.tests.unit.stripe.utils import run_test, MockSqsClient
 
 
 def run_customer(mocker, data, filename):
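`MockSqsClient` in the tests above stands in for boto3's SQS client; its real definition lives in `tests/unit/stripe/utils`. As an illustration of the general idea only, a minimal recording stub might look like this (`FakeSqsClient` is invented for the example and is not the repository's implementation):

```python
class FakeSqsClient:
    """Stand-in for a boto3 SQS client that records sends instead of
    talking to AWS. The send_message keyword names mirror boto3's API."""

    def __init__(self):
        self.sent = []

    def send_message(self, QueueUrl=None, MessageBody=None):
        self.sent.append({"QueueUrl": QueueUrl, "MessageBody": MessageBody})
        # boto3 returns a dict containing a MessageId, among other fields.
        return {"MessageId": f"msg-{len(self.sent)}"}

client = FakeSqsClient()
resp = client.send_message(QueueUrl="https://sqs.example/queue", MessageBody="{}")
```

A test can then assert on `client.sent` to verify what the hub would have pushed to the queue, without any network access.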
Some files were not shown because too many files have changed in this diff.