Mirror of https://github.com/mozilla/MozDef.git
Merge remote-tracking branch 'origin/master' into breaking_es6_changes
This commit is contained in:
Commit b0a7b37cca

CHANGELOG (42 lines changed)

@@ -3,6 +3,38 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)

## [Unreleased]

## [v1.38.3] - 2019-04-01
### Fixed
- AWS CodeBuild tag semver regex

## [v1.38.2] - 2019-03-29
### Fixed
- Remaining references to old alertplugins container

## [v1.38.1] - 2019-03-29
### Added
- Enable CI/CD with AWS CodeBuild
- Create AMIs of MozDef, replicate and share them
- Link everything (container images, AMIs, templates) together by MozDef version

### Changed
- Publish versioned CloudFormation templates
- RabbitMQ configured to use a real password

## [v1.38] - 2019-03-28
### Added
- Create alert plugins with ability to modify alerts in pipeline

### Changed
- Renamed existing alertplugin service to alertactions
- Updated rabbitmq docker container to 3.7

### Fixed
- Resolved sshd mq plugin to handle more types of events

## [v1.37] - 2019-03-01
### Added
- Watchlist - use the UI to quickly add a term (username, IP, command, etc.) that MozDef alerts on

@@ -10,7 +42,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
### Changed
- Improve error handling on Slack bot
- Improve Slack bot alert format for better readability
- Minor UI adjustments

### Fixed

@@ -19,5 +51,9 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
- Added checks on sending SQS messages to only accept intra-account messages
- Improved docker performance and disk space requirements

[Unreleased Changes]: https://github.com/mozilla/MozDef/compare/v1.37...HEAD
[Releases prior to v1.37](https://github.com/mozilla/MozDef/releases)
[Unreleased]: https://github.com/mozilla/MozDef/compare/v1.38.3...HEAD
[v1.38.3]: https://github.com/mozilla/MozDef/compare/v1.38.2...v1.38.3
[v1.38.2]: https://github.com/mozilla/MozDef/compare/v1.38.1...v1.38.2
[v1.38.1]: https://github.com/mozilla/MozDef/compare/v1.38...v1.38.1
[v1.38]: https://github.com/mozilla/MozDef/compare/v1.37...v1.38
[v1.37]: https://github.com/mozilla/MozDef/releases/tag/v1.37
@@ -0,0 +1,15 @@
# Community Participation Guidelines

This repository is governed by Mozilla's code of conduct and etiquette guidelines.
For more details, please read the
[Mozilla Community Participation Guidelines](https://www.mozilla.org/about/governance/policies/participation/).

## How to Report
For more information on how to report violations of the Community Participation Guidelines, please read our '[How to Report](https://www.mozilla.org/about/governance/policies/participation/reporting/)' page.

<!--
## Project Specific Etiquette

In some cases, there will be additional project etiquette i.e.: (https://bugzilla.mozilla.org/page.cgi?id=etiquette.html).
Please update for your project.
-->
Makefile (71 lines changed)

@@ -11,9 +11,11 @@ DKR_IMAGES := mozdef_alertactions mozdef_alerts mozdef_base mozdef_bootstrap moz
BUILD_MODE := build ## Pass `pull` in order to pull images instead of building them
NAME := mozdef
VERSION := 0.1
BRANCH := master
NO_CACHE := ## Pass `--no-cache` in order to disable Docker cache
GITHASH := latest ## Pass `$(git rev-parse --short HEAD)` to tag docker hub images as latest git-hash instead
TEST_CASE := tests ## Run all (`tests`) or a specific test case (ex `tests/alerts/test_proxy_drop_exfil_domains.py`)
TMPDIR := $(shell mktemp -d)

.PHONY: all
all:

@@ -24,11 +26,11 @@ all:
run: build ## Run all MozDef containers
	docker-compose -f docker/compose/docker-compose.yml -p $(NAME) up -d

.PHONY: run-cloudy-mozdef restart-cloudy-mozdef
.PHONY: run-cloudy-mozdef
run-cloudy-mozdef: ## Run the MozDef containers necessary to run in AWS (`cloudy-mozdef`). This is used by the CloudFormation-initiated setup.
	$(shell test -f docker/compose/cloudy_mozdef.env || touch docker/compose/cloudy_mozdef.env)
	$(shell test -f docker/compose/cloudy_mozdef_kibana.env || touch docker/compose/cloudy_mozdef_kibana.env)
	docker-compose -f docker/compose/docker-compose-cloudy-mozdef.yml -p $(NAME) pull
	# docker-compose -f docker/compose/docker-compose-cloudy-mozdef.yml -p $(NAME) pull # Images are now in the local packer-built AMI and no docker pull is needed
	docker-compose -f docker/compose/docker-compose-cloudy-mozdef.yml -p $(NAME) up -d

.PHONY: run-env-mozdef

@@ -39,58 +41,109 @@ else
	@echo $(ENV) not found.
endif

.PHONY: restart-cloudy-mozdef
restart-cloudy-mozdef:
	docker-compose -f docker/compose/docker-compose-cloudy-mozdef.yml -p $(NAME) restart

.PHONY: tests run-tests-resources run-tests-resources-external run-tests
.PHONY: test
test: build-tests run-tests

.PHONY: tests
tests: build-tests run-tests ## Run all tests (getting/building images as needed)

.PHONY: run-tests-resources-external
run-tests-resources-external: ## Just spin up external resources for tests and have them listen externally
	docker-compose -f docker/compose/docker-compose-tests.yml -p test-$(NAME) run -p 9200:9200 -d elasticsearch
	docker-compose -f docker/compose/docker-compose-tests.yml -p test-$(NAME) run -p 5672:5672 -d rabbitmq

.PHONY: run-tests-resources
run-tests-resources: ## Just run the external resources required for tests
	docker-compose -f docker/compose/docker-compose-tests.yml -p test-$(NAME) up -d
run-test:

.PHONY: run-test
run-test: run-tests

.PHONY: run-tests
run-tests: run-tests-resources ## Just run the tests (no build/get). Use `make TEST_CASE=tests/...` for specific tests only
	docker run -it --rm mozdef/mozdef_tester bash -c "source /opt/mozdef/envs/python/bin/activate && flake8 --config .flake8 ./"
	docker run -it --rm --network=test-mozdef_default mozdef/mozdef_tester bash -c "source /opt/mozdef/envs/python/bin/activate && py.test --delete_indexes --delete_queues $(TEST_CASE)"

.PHONY: rebuild-run-tests
rebuild-run-tests: build-tests run-tests

.PHONY: build
build: ## Build local MozDef images (use make NO_CACHE=--no-cache build to disable caching)
build: build-from-cwd

.PHONY: build-from-cwd
build-from-cwd: ## Build local MozDef images (use make NO_CACHE=--no-cache build to disable caching)
	docker-compose -f docker/compose/docker-compose.yml -p $(NAME) $(NO_CACHE) $(BUILD_MODE)

.PHONY: build-from-github
build-from-github: ## Build local MozDef images from the github branch (use make NO_CACHE=--no-cache build to disable caching).
	@echo "Performing a build from the github branch using $(TMPDIR) for BRANCH=$(BRANCH)"
	cd $(TMPDIR) && git clone https://github.com/mozilla/MozDef.git && cd MozDef && git checkout $(BRANCH) && make build-from-cwd
	rm -rf $(TMPDIR)

.PHONY: build-tests
build-tests: ## Build end-to-end test environment only
	docker-compose -f docker/compose/docker-compose-tests.yml -p test-$(NAME) $(NO_CACHE) $(BUILD_MODE)

.PHONY: stop down
.PHONY: stop
stop: down

.PHONY: down
down: ## Shut down all services we started with docker-compose
	docker-compose -f docker/compose/docker-compose.yml -p $(NAME) stop
	docker-compose -f docker/compose/docker-compose.yml -p test-$(NAME) stop

.PHONY: docker-push docker-get hub hub-get
.PHONY: docker-push
docker-push: hub

.PHONY: hub
hub: ## Upload locally built MozDef images tagged as the current git head (hub.docker.com/mozdef).
	docker login
	docker-compose -f docker/compose/docker-compose.yml -p $(NAME) push
	docker-compose -f docker/compose/docker-compose-tests.yml -p test-$(NAME) push

.PHONY: tag-images
tag-images:
	cloudy_mozdef/ci/docker_tag_or_push tag $(BRANCH)

.PHONY: docker-push-tagged
docker-push-tagged: tag-images hub-tagged

.PHONY: hub-tagged
hub-tagged: ## Upload locally built MozDef images tagged as the BRANCH. Branch and tagged release are interchangeable here.
	cloudy_mozdef/ci/docker_tag_or_push push $(BRANCH)

.PHONY: docker-get
docker-get: hub-get

.PHONY: hub-get
hub-get: ## Download all pre-built images (hub.docker.com/mozdef)
	docker-compose -f docker/compose/docker-compose.yml -p $(NAME) pull
	docker-compose -f docker/compose/docker-compose-tests.yml -p test-$(NAME) pull

.PHONY: docker-login
docker-login: hub-login

.PHONY: hub-login
hub-login: ## Log in as the MozDef CI user in order to perform a release of the containers.
	@docker login -u mozdefci --password $(shell aws ssm get-parameter --name '/mozdef/ci/dockerhubpassword' --with-decryption | jq -r .Parameter.Value)

.PHONY: clean
clean: ## Clean up all docker volumes and shut down all related services
	-docker-compose -f docker/compose/docker-compose.yml -p $(NAME) down -v --remove-orphans
	-docker-compose -f docker/compose/docker-compose-tests.yml -p test-$(NAME) down -v --remove-orphans

# Shorthands
.PHONY: rebuild
rebuild: clean build
rebuild: clean build-from-cwd

.PHONY: new-alert
new-alert: ## Create an example alert and working alert unit test
	python tests/alert_templater.py

.PHONY: set-version-and-fetch-docker-container
set-version-and-fetch-docker-container: build-from-cwd tag-images ## Lock the release of MozDef by pulling the docker containers on AMI build and caching; replace all instances of `latest` in the compose override with the BRANCH
	sed -i s/latest/$(BRANCH)/g docker/compose/docker-compose-cloudy-mozdef.yml
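The `set-version-and-fetch-docker-container` target pins the compose override to a release by substituting `latest` with the branch or tag, as its `sed` recipe shows. A minimal Python sketch of the same substitution (the `pin_images` helper and the sample compose snippet are illustrative, not part of the repository):

```python
def pin_images(compose_text: str, branch: str) -> str:
    """Replace every 'latest' tag reference with the release branch/tag,
    mirroring the Makefile's `sed -i s/latest/$(BRANCH)/g` recipe."""
    return compose_text.replace("latest", branch)


compose = "image: mozdef/mozdef_alerts:latest\nimage: mozdef/mozdef_rest:latest\n"
print(pin_images(compose, "v1.38.3"))
```

Like the `sed` one-liner, this is a blunt global substitution: it would also rewrite any other occurrence of the string `latest` in the file, which is acceptable here because the compose override only uses it as an image tag.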
@@ -2,6 +2,8 @@ ROOT_DIR := $(shell dirname $(realpath $(lastword $(MAKEFILE_LIST))))
PARENTDIR := $(realpath ../)
AWS_REGION := us-west-2
STACK_NAME := mozdef-aws-nested
BRANCH := master
AMI_MAP_TEMP_FILE := /tmp/mozdef-ami-map.txt
DEV_STACK_PARAMS_FILENAME := aws_parameters.dev.json
# For more information on the rationale behind the code in STACK_PARAMS see https://github.com/aws/aws-cli/issues/2429#issuecomment-441133480
DEV_STACK_PARAMS := $(shell test -e $(DEV_STACK_PARAMS_FILENAME) && python -c 'import json,sys;f=open(sys.argv[1]);print(" ".join([",".join(["%s=\\\"%s\\\""%(k,v) for k,v in x.items()]) for x in json.load(f)]));f.close()' $(DEV_STACK_PARAMS_FILENAME))

@@ -20,24 +22,26 @@ S3_PROD_STACK_URI := https://s3-$(AWS_REGION).amazonaws.com/$(S3_PROD_BUCKET_NAM
# OIDC_CLIENT_SECRET is set in an environment variable by running "source aws_parameters.sh"
OIDC_CLIENT_SECRET_PARAM_ARG := $(shell test -n "$(OIDC_CLIENT_SECRET)" && echo "ParameterKey=OIDCClientSecret,ParameterValue=$(OIDC_CLIENT_SECRET)")

.PHONY: all
all:
	@echo 'Available make targets:'
	@grep '^[^#[:space:]\.PHONY.*].*:' Makefile
	@echo 'Run ./dmake <target> in order to run the Makefile targets in Docker'

# Note: This requires AWS access
.PHONY: packer-build
packer-build: ## Build the base AMI with packer
	cd packer && packer build packer.json
# https://blog.gruntwork.io/locating-aws-ami-owner-id-and-image-name-for-packer-builds-7616fe46b49a
.PHONY: packer-build-github
packer-build-github: ## Build the base AMI with packer
	@echo "Branch based build triggered for $(BRANCH)."
	ci/pack_and_copy $(BRANCH) $(AMI_MAP_TEMP_FILE)

.PHONY: create-prod-stack
.PHONY: create-dev-stack
create-dev-stack: test ## Create everything you need for a fresh new stack!
	@export AWS_REGION=$(AWS_REGION)
	@echo "Make sure you have an environment variable OIDC_CLIENT_SECRET set."
	aws cloudformation create-stack --stack-name $(STACK_NAME) --template-url $(S3_DEV_STACK_URI)mozdef-parent.yml \
	  --capabilities CAPABILITY_IAM \
	  --parameters ParameterKey=S3TemplateLocation,ParameterValue=$(S3_DEV_STACK_URI) \
	  $(OIDC_CLIENT_SECRET_PARAM_ARG) \
	  --parameters $(OIDC_CLIENT_SECRET_PARAM_ARG) \
	  $(DEV_STACK_PARAMS) \
	  --output text

@@ -46,18 +50,19 @@ create-dev-s3-bucket:
	@export AWS_REGION=$(AWS_REGION)
	aws s3api create-bucket --bucket $(S3_DEV_BUCKET_NAME) --acl public-read --create-bucket-configuration LocationConstraint=$(AWS_REGION)

.PHONY: updated-dev-stack
.PHONY: update-dev-stack
update-dev-stack: test ## Updates the nested stack on AWS
	@export AWS_REGION=$(AWS_REGION)
	aws cloudformation update-stack --stack-name $(STACK_NAME) --template-url $(S3_DEV_STACK_URI)mozdef-parent.yml \
	  --capabilities CAPABILITY_IAM \
	  --parameters ParameterKey=S3TemplateLocation,ParameterValue=$(S3_DEV_STACK_URI) \
	  $(OIDC_CLIENT_SECRET_PARAM_ARG) \
	  --parameters $(OIDC_CLIENT_SECRET_PARAM_ARG) \
	  $(DEV_STACK_PARAMS) \
	  --output text

.PHONY: cfn-lint test
.PHONY: test
test: cfn-lint

.PHONY: cfn-lint
cfn-lint: ## Verify the CloudFormation templates pass linting tests
	-cfn-lint cloudformation/*.yml

@@ -76,6 +81,11 @@ publish-prod-templates:
	@export AWS_REGION=$(AWS_REGION)
	aws s3 sync cloudformation/ $(S3_PROD_BUCKET_URI) --exclude="*" --include="*.yml"

.PHONY: publish-versioned-templates
publish-versioned-templates:
	@export AWS_REGION=$(AWS_REGION)
	ci/publish_versioned_templates $(BRANCH) $(S3_PROD_BUCKET_URI) $(S3_PROD_STACK_URI) $(AMI_MAP_TEMP_FILE)

.PHONY: diff-dev-templates
diff-dev-templates:
	tempdir=`mktemp --directory`; aws s3 sync $(S3_DEV_BUCKET_URI) "$$tempdir" --exclude="*" --include="*.yml"; diff --recursive --unified "$$tempdir" cloudformation; rm -rf "$$tempdir"
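The `DEV_STACK_PARAMS` one-liner converts the `aws_parameters.dev.json` list into the `Key=\"Value\"` shorthand the AWS CLI expects (see the linked aws-cli issue). Unrolled into readable Python, it is roughly the following; `dev_stack_params` takes the already-parsed JSON list for clarity, whereas the Makefile reads the file inline:

```python
def dev_stack_params(entries):
    """Render a list of CloudFormation parameter dicts as the
    escaped Key=\"Value\" shorthand used on the aws-cli command line.

    Each JSON object becomes comma-joined Key=\"Value\" pairs, and the
    objects are space-joined, matching the Makefile one-liner."""
    return " ".join(
        ",".join('%s=\\"%s\\"' % (k, v) for k, v in entry.items())
        for entry in entries
    )


params = [{"ParameterKey": "KeyName", "ParameterValue": "my-key"}]
print(dev_stack_params(params))
```

The backslash-escaped quotes survive the `$(shell …)` expansion so the values reach `aws cloudformation create-stack` intact even when they contain commas or spaces.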
@@ -20,8 +20,8 @@
    "UsePreviousValue": false
  },
  {
    "ParameterKey": "AMIImageId",
    "ParameterValue": "ami-0c3705bb3b43ad51f",
    "ParameterKey": "SSHIngressCIDR",
    "ParameterValue": "0.0.0.0/0",
    "UsePreviousValue": false
  },
  {

@@ -43,10 +43,5 @@
    "ParameterKey": "OIDCClientSecret",
    "ParameterValue": "secret-value-goes-here",
    "UsePreviousValue": false
  },
  {
    "ParameterKey": "S3TemplateLocation",
    "ParameterValue": "https://s3-us-west-2.amazonaws.com/example-bucket-name/cloudformation/path/",
    "UsePreviousValue": false
  }
]
@@ -0,0 +1,14 @@
version: 0.2

phases:
  install:
    commands:
      - echo 'CodeBuild is ubuntu 14.04; installing packer to compensate. Someone should build a CI docker container ;).'
      - wget -nv https://releases.hashicorp.com/packer/1.3.5/packer_1.3.5_linux_amd64.zip
      - unzip packer_1.3.5_linux_amd64.zip
      - chmod +x packer
      - mv packer /usr/bin/
  build:
    commands:
      - mkdir -p serverless-functions/build/python/lib/python3.6/site-packages
      - bash cloudy_mozdef/ci/deploy
@@ -0,0 +1,34 @@
#!/bin/bash

set -e # Exit immediately if a command exits with a non-zero status.

echo 'Welcome GitHub webhook to the CodeBuild Job of MozDef.'
echo "It's dangerous to go alone. Take one of these: <%%%%|==========>"

# echo "Begin test of the MozDef codebase."
# export COMPOSE_INTERACTIVE_NO_CLI=1 make tests
# The above does not currently work in a non-interactive TTY.
# Fails with error:
# docker run -it --rm mozdef/mozdef_tester bash -c "source /opt/mozdef/envs/python/bin/activate && flake8 --config .flake8 ./"
# the input device is not a TTY
# make: *** [run-tests] Error 1
# Then again we probably do not need to run the test suite here because it has been run three times to get the code here.
# echo "Tests complete."

echo "Processing webhook event for ${CODEBUILD_WEBHOOK_TRIGGER}."

if [[ "branch/master" == "$CODEBUILD_WEBHOOK_TRIGGER" \
  || "$CODEBUILD_WEBHOOK_TRIGGER" =~ ^tag\/v[0-9]+\.[0-9]+\.[0-9]+(\-(prod|pre|testing))?$ ]]; then
  echo "Building a release"
  echo "C|_| This may take a bit. Might as well grab a coffee."
  make build-from-cwd
  cd cloudy_mozdef
  BRANCH="`echo $CODEBUILD_WEBHOOK_TRIGGER | cut -d '/' -f2`"
  make BRANCH=${BRANCH} packer-build-github
  make BRANCH=${BRANCH} publish-versioned-templates
  cd ..
  make hub-login
  make BRANCH=${BRANCH} docker-push-tagged
fi

echo "End build of the MozDef codebase."
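`ci/deploy` gates releases on the webhook trigger: a build fires for `branch/master` or for semver tags like `tag/v1.38.3` (optionally suffixed `-prod`, `-pre`, or `-testing`), and the version is taken from the part after the first `/`. The same checks, sketched in Python for clarity (the helper names are illustrative):

```python
import re

# Same pattern as the bash [[ =~ ]] test in ci/deploy.
RELEASE_TAG = re.compile(r"^tag/v[0-9]+\.[0-9]+\.[0-9]+(-(prod|pre|testing))?$")


def should_release(trigger: str) -> bool:
    """True when the CODEBUILD_WEBHOOK_TRIGGER warrants a release build."""
    return trigger == "branch/master" or bool(RELEASE_TAG.match(trigger))


def branch_of(trigger: str) -> str:
    """Mirror BRANCH="`echo $CODEBUILD_WEBHOOK_TRIGGER | cut -d '/' -f2`"."""
    return trigger.split("/")[1]
```

Note the pattern requires all three semver components, which is exactly what the v1.38.3 changelog entry "AWS CodeBuild tag semver regex" fixed: a two-component tag such as `tag/v1.38` does not trigger a release.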
@@ -0,0 +1,22 @@
#!/bin/bash

action="${1}"
branch="${2}"

for name in mozdef_meteor mozdef_base mozdef_tester mozdef_mq_worker mozdef_kibana \
  mozdef_syslog mozdef_cron mozdef_elasticsearch mozdef_loginput mozdef_mongodb \
  mozdef_bootstrap mozdef_alerts mozdef_nginx mozdef_alertactions mozdef_rabbitmq \
  mozdef_rest mozdef_base ; do
  if [ "${action}" == "tag" ]; then
    if [ "${branch}" == "master" ]; then
      docker tag mozdef/${name}:latest mozdef/${name}:${branch}
    else
      docker tag mozdef/${name}:latest mozdef/${name}:${branch}
    fi
  elif [ "${action}" == "push" ]; then
    docker push mozdef/${name}:${branch}
    if [ "${branch}" == "master" ]; then
      docker push mozdef/${name}:latest
    fi
  fi
done
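The tag/push flow above reduces to: `tag` retags the locally built `:latest` image with the branch or release tag, and `push` uploads the branch tag, additionally refreshing `:latest` when the branch is `master`. A sketch of the commands generated per image (the `docker_commands` helper is illustrative):

```python
def docker_commands(action: str, branch: str, name: str) -> list:
    """Return the docker CLI invocations docker_tag_or_push would run
    for one image. `action` is 'tag' or 'push'."""
    image = "mozdef/%s" % name
    if action == "tag":
        # Retag the locally built :latest image with the branch/release tag.
        return ["docker tag %s:latest %s:%s" % (image, image, branch)]
    if action == "push":
        cmds = ["docker push %s:%s" % (image, branch)]
        if branch == "master":
            # master also refreshes the :latest tag on Docker Hub.
            cmds.append("docker push %s:latest" % image)
        return cmds
    return []
```

Note that `mozdef_base` appears twice in the script's image list, which is harmless (the second tag/push is a no-op repeat) but could be deduplicated.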
@@ -0,0 +1,72 @@
#!/bin/bash

BRANCH=$1
AMI_MAP_TEMP_FILE=$2
TMPDIR=$(mktemp -d)

cd packer
packer -machine-readable build -var github_branch="${BRANCH}" packer.json 2>&1 | tee "${TMPDIR}/packer-output.txt"

awk -F "," '$4 == "0" {print $0}' "${TMPDIR}/packer-output.txt"
ami_source_region=$(awk -F "," '($4 == "0") && ($5 == "id") {print $6}' "${TMPDIR}/packer-output.txt" | awk -F ":" '{print $1}')
ami_source_id=$(awk -F "," '($4 == "0") && ($5 == "id") {print $6}' "${TMPDIR}/packer-output.txt" | awk -F ":" '{print $2}')
ami_dest_region_list="us-east-1"
# aws_marketplace_account_id is the AWS account ID, owned by AWS, not us, that
# we share with in order to enable the AWS Marketplace to access our AMIs
aws_marketplace_account_id="679593333241"

if [ -z "${ami_source_id}" -o -z "${ami_source_region}" ]; then
  echo "Packer output did not provide AMI ID or region. Exiting"
  exit 1
fi

echo "Mappings:" > "${AMI_MAP_TEMP_FILE}"
echo "  RegionMap:" >> "${AMI_MAP_TEMP_FILE}"
echo "    ${ami_source_region}:" >> "${AMI_MAP_TEMP_FILE}"
echo "      HVM64: ${ami_source_id}" >> "${AMI_MAP_TEMP_FILE}"

echo "Sharing ${ami_source_id} in ${ami_source_region} with ${aws_marketplace_account_id}"
AWS_DEFAULT_REGION="${ami_source_region}" aws ec2 modify-image-attribute \
  --image-id "${ami_source_id}" \
  --launch-permission "Add=[{UserId=${aws_marketplace_account_id}}]"

echo "Querying for name of ${ami_source_id}"
ami_name=$(AWS_DEFAULT_REGION="${ami_source_region}" aws ec2 describe-images \
  --image-ids ${ami_source_id} \
  --query 'Images[0].Name' \
  --output text)

echo "Fetching tags for ${ami_source_id}"
AWS_DEFAULT_REGION="${ami_source_region}" aws ec2 describe-tags \
  --filters Name=resource-id,Values=${ami_source_id} \
  --query 'Tags[*].{Key: Key, Value: Value}' > "${TMPDIR}/tags.json"

for ami_dest_region in $ami_dest_region_list; do
  echo "Copying ${ami_source_id} from ${ami_source_region} to ${ami_dest_region}"
  ami_dest_id=$(AWS_DEFAULT_REGION="${ami_dest_region}" aws ec2 copy-image \
    --name "${ami_name}" \
    --source-image-id "${ami_source_id}" \
    --source-region "${ami_source_region}" \
    --description "A MozDef replicated AMI" \
    --query "ImageId" \
    --output text)

  echo "Waiting for copy of ${ami_source_id} to complete"
  AWS_DEFAULT_REGION="${ami_dest_region}" aws ec2 wait image-available \
    --image-ids "${ami_dest_id}"

  echo "    ${ami_dest_region}:" >> "${AMI_MAP_TEMP_FILE}"
  echo "      HVM64: ${ami_dest_id}" >> "${AMI_MAP_TEMP_FILE}"

  echo "Applying tags to ${ami_dest_id}"
  AWS_DEFAULT_REGION="${ami_dest_region}" aws ec2 create-tags \
    --resources ${ami_dest_id} \
    --tags file://${TMPDIR}/tags.json

  echo "Sharing ${ami_dest_id} in ${ami_dest_region} with ${aws_marketplace_account_id}"
  AWS_DEFAULT_REGION="${ami_dest_region}" aws ec2 modify-image-attribute \
    --image-id "${ami_dest_id}" \
    --launch-permission "Add=[{UserId=${aws_marketplace_account_id}}]"
done

rm -rf "${TMPDIR}"
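`pack_and_copy` extracts the new AMI from packer's machine-readable output: the `awk` pipeline selects the comma-separated record whose fourth field is `0` and fifth is `id`, then splits the sixth field on `:` into region and AMI ID. A Python sketch of that parse (the sample output line in the test is illustrative):

```python
def parse_packer_ami(output: str):
    """Find the artifact id record in `packer -machine-readable` output
    and return (region, ami_id), or None if no such record exists."""
    for line in output.splitlines():
        fields = line.split(",")
        # awk -F "," '($4 == "0") && ($5 == "id") {print $6}'
        if len(fields) >= 6 and fields[3] == "0" and fields[4] == "id":
            region, _, ami_id = fields[5].partition(":")
            return region, ami_id
    return None
```

The script's subsequent `-z` checks correspond to the `None` case here: if packer failed before producing an artifact, the build aborts rather than writing an empty `RegionMap`.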
@@ -0,0 +1,28 @@
#!/bin/bash

BRANCH=$1
S3_PROD_BUCKET_URI=$2
S3_PROD_STACK_URI=$3
AMI_MAP_TEMP_FILE=$4
TMPDIR=$(mktemp -d)

VERSIONED_BUCKET_URI="${S3_PROD_BUCKET_URI}/${BRANCH}"
VERSIONED_STACK_URI="${S3_PROD_STACK_URI}${BRANCH}/"

echo "  VariableMap:" >> "${AMI_MAP_TEMP_FILE}"
echo "    Variables:" >> "${AMI_MAP_TEMP_FILE}"
echo "      S3TemplateLocation: ${VERSIONED_STACK_URI}" >> "${AMI_MAP_TEMP_FILE}"

echo "Injecting the region AMI mapping into the mozdef-parent.yml CloudFormation template"
sed '/# INSERT MAPPING HERE.*/{
s/# INSERT MAPPING HERE.*//g
r '"${AMI_MAP_TEMP_FILE}"'
}' cloudformation/mozdef-parent.yml > ${TMPDIR}/mozdef-parent.yml

echo "Uploading CloudFormation templates to S3 directory ${VERSIONED_BUCKET_URI}/"
# Sync all .yml files except mozdef-parent.yml
aws s3 sync cloudformation/ ${VERSIONED_BUCKET_URI} --exclude="*" --include="*.yml" --exclude="mozdef-parent.yml"
# cp modified mozdef-parent.yml from TMPDIR to S3
aws s3 cp ${TMPDIR}/mozdef-parent.yml ${VERSIONED_BUCKET_URI}/

rm -rf "${TMPDIR}"
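`publish_versioned_templates` splices the generated AMI and variable mappings into `mozdef-parent.yml` at the `# INSERT MAPPING HERE` marker: the `sed` block blanks the marker text, then reads the map file in after it. Equivalent logic in Python, with illustrative template and mapping strings (a sketch, not the script's actual implementation):

```python
def inject_mapping(template: str, mapping: str) -> str:
    """Replace the '# INSERT MAPPING HERE' marker line with the
    region/variable mapping block, as the sed script does."""
    out = []
    for line in template.splitlines():
        if "# INSERT MAPPING HERE" in line:
            out.append("")  # sed substitutes the marker text with nothing...
            out.append(mapping.rstrip("\n"))  # ...then reads in the map file
        else:
            out.append(line)
    return "\n".join(out) + "\n"


template = "Parameters:\n# INSERT MAPPING HERE : replaced at build time\nResources:\n"
mapping = "Mappings:\n  RegionMap:\n    us-west-2:\n      HVM64: ami-0abc123"
print(inject_mapping(template, mapping))
```

This is why the checked-in template "does not work in this state": only the built copy in S3, with the mapping injected, is deployable.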
@@ -63,7 +63,8 @@ Resources:
        Statement:
          - Sid: AllowSNSToSendToSQS
            Effect: Allow
            Principal: !Join [ '', 'arn:', 'aws:', 'iam:', !Ref AWS::AccountId, ':root' ]
            Principal:
              AWS: '*'
            Action: sqs:SendMessage
            Resource: !GetAtt MozDefCloudTrailSQSQueue.Arn
            Condition:
@@ -102,6 +102,11 @@ Resources:
      IamInstanceProfile: !Ref IamInstanceProfile
      ImageId: !Ref AMIImageId
      InstanceType: !Ref InstanceType
      BlockDeviceMappings:
        - DeviceName: "/dev/xvda"
          Ebs:
            VolumeSize: 14
            VolumeType: gp2
      KeyName: !Ref KeyName
      SecurityGroups:
        - !Ref MozDefSecurityGroupId

@@ -113,6 +118,8 @@ Resources:
            - amazon-efs-utils
          write_files:
            - content: |
                # Cloudy MozDef env file as imported by docker compose.
                # Drives the configuration of variables for a variety of containers.
                OPTIONS_ESSERVERS=${ESURL}
                OPTIONS_KIBANAURL=${KibanaURL}
                # The OPTIONS_METEOR_KIBANAURL uses the reserved word "relative" which triggers MozDef

@@ -120,6 +127,7 @@ Resources:
                OPTIONS_METEOR_KIBANAURL=https://relative:9090/_plugin/kibana/
                OPTIONS_METEOR_ROOTURL=https://${DomainName}
                # See https://github.com/mozilla-iam/mozilla.oidc.accessproxy/blob/master/README.md#setup
                # Future support will be added for cognito backed authentication.
                client_id=${OIDCClientId}
                client_secret=${OIDCClientSecret}
                discovery_url=${OIDCDiscoveryURL}

@@ -127,6 +135,8 @@ Resources:
                redirect_uri_path=/redirect_uri
                httpsredir=no
                # Meteor settings
                # Mongo is discovered through the local docker-compose network.
                # These are not typos.
                MONGO_URL=mongodb://mongodb:3002/meteor
                ROOT_URL=http://localhost
                OPTIONS_METEOR_PORT=3000

@@ -141,15 +151,24 @@ Resources:
              path: /opt/mozdef/docker/compose/cloudy_mozdef.env
            - content: |
                #!/usr/bin/env python

                # config.py file drives the alert worker container consuming messages from rabbitmq
                # This is slated for future refactor to config and code that drives celery

                # This Source Code Form is subject to the terms of the Mozilla Public
                # License, v. 2.0. If a copy of the MPL was not distributed with this
                # file, You can obtain one at http://mozilla.org/MPL/2.0/.
                # Copyright (c) 2014 Mozilla Corporation

                from celery.schedules import crontab, timedelta
                import time
                import logging
                import time

                from celery.schedules import crontab
                from celery.schedules import timedelta
                from os import getenv

                logger = logging.getLogger(__name__)

                # XXX TBD find a way to make this configurable outside of deploys.
                ALERTS = {
                    'bruteforce_ssh.AlertBruteforceSsh': {'schedule': crontab(minute='*/1')},
                    'unauth_ssh.AlertUnauthSSH': {'schedule': crontab(minute='*/1')},

@@ -162,10 +181,11 @@ Resources:
                    # 'relative pythonfile name (exclude the .py) - EX: sso_dashboard',
                ]

                # Rabbit MQ Password now comes from the docker environment per host.
                RABBITMQ = {
                    'mqserver': 'rabbitmq',
                    'mquser': 'guest',
                    'mqpassword': 'guest',
                    'mquser': 'mozdef',
                    'mqpassword': getenv('RABBITMQ_PASSWORD'),
                    'mqport': 5672,
                    'alertexchange': 'alerts',
                    'alertqueue': 'mozdef.alert'

@@ -216,6 +236,9 @@ Resources:
                logging.Formatter.converter = time.gmtime
              path: /opt/mozdef/docker/compose/mozdef_alerts/files/config.py
            - content: |
                # Cloudy MozDef env file as imported by docker compose for the kibana reverse proxy
                # Cloudy MozDef uses managed elasticsearch and proxies connections to that using the LUA proxy
                # This will support cognito backed authentication in the future
                client_id=${OIDCClientId}
                client_secret=${OIDCClientSecret}
                discovery_url=${OIDCDiscoveryURL}

@@ -225,15 +248,19 @@ Resources:
                cookiename=seskibana
              path: /opt/mozdef/docker/compose/cloudy_mozdef_kibana.env
            - content: |
                # This configures the worker that pulls in CloudTrail logs
                OPTIONS_TASKEXCHANGE=${CloudTrailSQSNotificationQueueName}
              path: /opt/mozdef/docker/compose/cloudy_mozdef_mq_cloudtrail.env
            - content: |
                # This is the additional worker reserved for future use
                OPTIONS_TASKEXCHANGE=${MozDefSQSQueueName}
              path: /opt/mozdef/docker/compose/cloudy_mozdef_mq_sns_sqs.env
          runcmd:
            - echo RABBITMQ_PASSWORD=`python3 -c 'import secrets; s = secrets.token_hex(); print(s)'` > /opt/mozdef/docker/compose/rabbitmq.env
            - chmod --verbose 600 /opt/mozdef/docker/compose/rabbitmq.env
            - chmod --verbose 600 /opt/mozdef/docker/compose/cloudy_mozdef.env
            - chmod --verbose 600 /opt/mozdef/docker/compose/cloudy_mozdef_kibana.env
            - chmod --verbose 600 /opt/mozdef/docker/compose/cloudy_mozdef_mq_sqs.env
            - chmod --verbose 600 /opt/mozdef/docker/compose/cloudy_mozdef_mq_sns_sqs.env
            - mkdir --verbose --parents ${EFSMountPoint}
            - echo '*.* @@127.0.0.1:514' >> /etc/rsyslog.conf
            - systemctl enable rsyslog

@@ -242,6 +269,9 @@ Resources:
            - for i in 1 2 3 4 5 6; do mount --verbose --all --types efs defaults && break || sleep 15; done
            - cd /opt/mozdef && git pull origin master
            - make -C /opt/mozdef -f /opt/mozdef/Makefile run-cloudy-mozdef
            - cd /opt/mozdef && docker-compose -f docker/compose/docker-compose.yml -p mozdef exec -T rabbitmq rabbitmqctl add_user mozdef \$RABBITMQ_PASSWORD
            - cd /opt/mozdef && docker-compose -f docker/compose/docker-compose.yml -p mozdef exec -T rabbitmq rabbitmqctl set_user_tags mozdef administrator
            - cd /opt/mozdef && docker-compose -f docker/compose/docker-compose.yml -p mozdef exec -T rabbitmq rabbitmqctl set_permissions -p / mozdef ".*" ".*" ".*"
  MozDefAutoScaleGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
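The `runcmd` above implements the "RabbitMQ configured to use a real password" changelog entry: cloud-init generates a per-host password with Python's `secrets` module, writes it to `rabbitmq.env`, and then creates the `mozdef` RabbitMQ user with it via `rabbitmqctl`. The generation step on its own:

```python
import secrets

# token_hex() with no argument uses the module's default entropy
# (32 bytes), yielding a 64-character hex string, as in the runcmd:
#   python3 -c 'import secrets; s = secrets.token_hex(); print(s)'
password = secrets.token_hex()
print("RABBITMQ_PASSWORD=%s" % password)
```

The alert worker's `config.py` then reads it back with `getenv('RABBITMQ_PASSWORD')`, replacing the previous hardcoded `guest`/`guest` credentials.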
@@ -13,7 +13,7 @@ Metadata:
        Parameters:
          - InstanceType
          - KeyName
          - AMIImageId
          - SSHIngressCIDR
      - Label:
          default: Certificate
        Parameters:

@@ -24,10 +24,6 @@ Metadata:
          - OIDCDiscoveryURL
          - OIDCClientId
          - OIDCClientSecret
      - Label:
          default: Template Location
        Parameters:
          - S3TemplateLocation
    ParameterLabels:
      VpcId:
        default: VPC ID

@@ -37,8 +33,8 @@ Metadata:
        default: EC2 Instance Type
      KeyName:
        default: EC2 SSH Key Name
      AMIImageId:
        default: EC2 AMI Image ID
      SSHIngressCIDR:
        default: Inbound SSH allowed IP address CIDR
      DomainName:
        default: FQDN to host MozDef at
      ACMCertArn:

@@ -49,8 +45,6 @@ Metadata:
        default: OIDC Client ID
      OIDCClientSecret:
        default: OIDC Client Secret
      S3TemplateLocation:
        default: S3 Template Location URL
Parameters:
  VpcId:
    Type: AWS::EC2::VPC::Id

@@ -65,10 +59,11 @@ Parameters:
  KeyName:
    Type: AWS::EC2::KeyPair::KeyName
    Description: Name of an existing EC2 KeyPair to enable SSH access to the web server
  AMIImageId:
    Type: AWS::EC2::Image::Id
    Description: The AMI Image ID to use for the EC2 instance
    Default: ami-073434079b0366251
  SSHIngressCIDR:
    Type: String
    AllowedPattern: '^([0-9]{1,3}\.){3}[0-9]{1,3}(\/([0-9]|[1-2][0-9]|3[0-2]))?$'
    ConstraintDescription: A valid CIDR (e.g. 203.0.113.0/24)
    Description: The CIDR of IP addresses from which to allow inbound SSH connections
  DomainName:
    Type: String
    Description: The fully qualified DNS name you will host CloudyMozDef at.

@@ -88,22 +83,19 @@ Parameters:
    Type: String
    Description: The secret that your OIDC provider issues you for your MozDef instance.
    NoEcho: true
  S3TemplateLocation:
    Type: String
    AllowedPattern: '^https?:\/\/.*\.amazonaws\.com\/.*\/'
    ConstraintDescription: A valid amazonaws.com S3 URL
    Description: "The URL to the S3 bucket used to fetch the nested stack templates (Example: https://s3-us-west-2.amazonaws.com/example-bucket-name/cloudformation/path/)"
    Default: https://s3-us-west-2.amazonaws.com/public.us-west-2.infosec.mozilla.org/mozdef/cf/
# A RegionMap of AMI IDs is required by AWS Marketplace https://docs.aws.amazon.com/marketplace/latest/userguide/cloudformation.html#aws-cloudformation-template-preparation
# INSERT MAPPING HERE : This template does not work in this state. The mapping is replaced with a working AWS region to AMI ID mapping as well as a variable map with the S3TemplateLocationPrefix by cloudy_mozdef/ci/publish_versioned_templates. The resulting functioning CloudFormation template is uploaded to S3 for the version being built.
Resources:
  MozDefSecurityGroups:
    Type: AWS::CloudFormation::Stack
    Properties:
      Parameters:
        VpcId: !Ref VpcId
        SSHIngressCIDR: !Ref SSHIngressCIDR
      Tags:
        - Key: application
          Value: mozdef
      TemplateURL: !Join [ '', [ !Ref S3TemplateLocation, mozdef-security-group.yml ] ]
      TemplateURL: !Join [ '', [ !FindInMap [ VariableMap, Variables, S3TemplateLocation ], mozdef-security-group.yml ] ]
  MozDefIAMRoleAndInstanceProfile:
    Type: AWS::CloudFormation::Stack
    Properties:

@@ -116,7 +108,7 @@ Resources:
      Tags:
        - Key: application
          Value: mozdef
      TemplateURL: !Join [ '', [ !Ref S3TemplateLocation, base-iam.yml ] ]
      TemplateURL: !Join [ '', [ !FindInMap [ VariableMap, Variables, S3TemplateLocation ], base-iam.yml ] ]
  MozDefInstance:
    Type: AWS::CloudFormation::Stack
    Properties:

@@ -126,7 +118,7 @@ Resources:
        KeyName: !Ref KeyName
        IamInstanceProfile: !GetAtt MozDefIAMRoleAndInstanceProfile.Outputs.InstanceProfileArn
        AutoScaleGroupSubnetIds: !Join [ ',', !Ref PublicSubnetIds ]
        AMIImageId: !Ref AMIImageId
        AMIImageId: !FindInMap [ RegionMap, !Ref 'AWS::Region', HVM64 ]
        EFSID: !GetAtt MozDefEFS.Outputs.EFSID
        MozDefSecurityGroupId: !GetAtt MozDefSecurityGroups.Outputs.MozDefSecurityGroupId
        MozDefLoadBalancerSecurityGroupId: !GetAtt MozDefSecurityGroups.Outputs.MozDefLoadBalancerSecurityGroupId

@@ -145,7 +137,7 @@ Resources:
          Value: mozdef
        - Key: stack
          Value: !Ref AWS::StackName
      TemplateURL: !Join [ '', [ !Ref S3TemplateLocation, mozdef-instance.yml ] ]
      TemplateURL: !Join [ '', [ !FindInMap [ VariableMap, Variables, S3TemplateLocation ], mozdef-instance.yml ] ]
  MozDefES:
    Type: AWS::CloudFormation::Stack
    DependsOn: MozDefIAMRoleAndInstanceProfile

@@ -161,7 +153,7 @@ Resources:
          Value: mozdef
- Key: stack
|
||||
Value: !Ref AWS::StackName
|
||||
TemplateURL: !Join [ '', [ !Ref S3TemplateLocation, mozdef-es.yml ] ]
|
||||
TemplateURL: !Join [ '', [ !FindInMap [ VariableMap, Variables, S3TemplateLocation ], mozdef-es.yml ] ]
|
||||
MozDefEFS:
|
||||
Type: AWS::CloudFormation::Stack
|
||||
Properties:
|
||||
|
@ -175,7 +167,7 @@ Resources:
|
|||
Value: mozdef
|
||||
- Key: stack
|
||||
Value: !Ref AWS::StackName
|
||||
TemplateURL: !Join [ '', [ !Ref S3TemplateLocation, mozdef-efs.yml ] ]
|
||||
TemplateURL: !Join [ '', [ !FindInMap [ VariableMap, Variables, S3TemplateLocation ], mozdef-efs.yml ] ]
|
||||
MozDefSQS:
|
||||
Type: AWS::CloudFormation::Stack
|
||||
Properties:
|
||||
|
@ -184,7 +176,7 @@ Resources:
|
|||
Value: mozdef
|
||||
- Key: stack
|
||||
Value: !Ref AWS::StackName
|
||||
TemplateURL: !Join [ '', [ !Ref S3TemplateLocation, mozdef-sqs.yml ] ]
|
||||
TemplateURL: !Join [ '', [ !FindInMap [ VariableMap, Variables, S3TemplateLocation ], mozdef-sqs.yml ] ]
|
||||
MozDefCloudTrail:
|
||||
Type: AWS::CloudFormation::Stack
|
||||
Properties:
|
||||
|
@ -193,7 +185,7 @@ Resources:
|
|||
Value: mozdef
|
||||
- Key: stack
|
||||
Value: !Ref AWS::StackName
|
||||
TemplateURL: !Join [ '', [ !Ref S3TemplateLocation, mozdef-cloudtrail.yml ] ]
|
||||
TemplateURL: !Join [ '', [ !FindInMap [ VariableMap, Variables, S3TemplateLocation ], mozdef-cloudtrail.yml ] ]
|
||||
CloudFormationLambdaIAMRole:
|
||||
Type: AWS::IAM::Role
|
||||
Properties:
|
||||
|
|
|
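The parent template builds each nested stack's TemplateURL by concatenating an S3 prefix with a template filename via `!Join [ '', [ <prefix>, <file> ] ]`. A minimal sketch of that same concatenation, with the prefix checked against the template's `AllowedPattern` for S3 URLs (the helper name `template_url` is illustrative, not from the repo):

```python
import re

# The S3 URL AllowedPattern from the template above.
S3_PREFIX_PATTERN = re.compile(r'^https?:\/\/.*\.amazonaws\.com\/.*\/')

def template_url(s3_prefix, template_file):
    """Mimic !Join [ '', [ prefix, file ] ]: plain string concatenation."""
    if not S3_PREFIX_PATTERN.match(s3_prefix):
        raise ValueError('prefix must be an amazonaws.com URL ending in /')
    return s3_prefix + template_file

url = template_url(
    'https://s3-us-west-2.amazonaws.com/public.us-west-2.infosec.mozilla.org/mozdef/cf/',
    'mozdef-security-group.yml')
```

Because the prefix must end in `/`, the join needs no separator, which is why the CI step only has to substitute a versioned prefix into the variable map.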
@@ -4,6 +4,11 @@ Parameters:
   VpcId:
     Type: AWS::EC2::VPC::Id
     Description: The VPC ID of the VPC to deploy in
+  SSHIngressCIDR:
+    Type: String
+    AllowedPattern: '^([0-9]{1,3}\.){3}[0-9]{1,3}(\/([0-9]|[1-2][0-9]|3[0-2]))?$'
+    ConstraintDescription: A valid CIDR (e.g. 203.0.113.0/24)
+    Description: The CIDR of IP addresses from which to allow inbound SSH connections
 Resources:
   MozDefSecurityGroup:
     Type: AWS::EC2::SecurityGroup
@@ -16,7 +21,7 @@ Resources:
         - IpProtocol: tcp
           FromPort: 22
           ToPort: 22
-          CidrIp: 0.0.0.0/0
+          CidrIp: !Ref SSHIngressCIDR
         - IpProtocol: tcp
           FromPort: 80
           ToPort: 80
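The `AllowedPattern` above is an ordinary regular expression, so its behavior can be exercised directly. Note that it validates shape only: each octet is `[0-9]{1,3}`, so an out-of-range value like `999.0.0.0` still matches, while a mask beyond `/32` is rejected. A small check as a sketch:

```python
import re

# AllowedPattern from the SSHIngressCIDR parameter above.
CIDR_PATTERN = re.compile(r'^([0-9]{1,3}\.){3}[0-9]{1,3}(\/([0-9]|[1-2][0-9]|3[0-2]))?$')

def looks_like_cidr(value):
    """True when the value satisfies the template's constraint (shape only,
    octet ranges are not checked)."""
    return CIDR_PATTERN.match(value) is not None
```

CloudFormation rejects non-matching parameter values at stack creation with the `ConstraintDescription` message, which is why the constraint text gives a concrete example.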
@@ -17,7 +17,8 @@ Resources:
         Statement:
           - Sid: AllowThisAccountSendToSQS
             Effect: Allow
-            Principal: !Join [ '', 'arn:', 'aws:', 'iam:', !Ref AWS::AccountId, ':root' ]
+            Principal:
+              AWS: !Join [ '', [ 'arn:', 'aws:', 'iam::', !Ref 'AWS::AccountId', ':root' ] ]
             Action: sqs:SendMessage
             Resource: !GetAtt MozDefSQSQueue.Arn
       Queues:
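The fix above does three things: it nests the ARN under a `Principal.AWS` key, wraps the `!Join` operands in the list `!Join` expects, and corrects `iam:` to `iam::`. The same concatenation in Python shows why the double colon matters (the account ID is a made-up example):

```python
# Pieces in the corrected !Join [ '', [ ... ] ] form.
parts = ['arn:', 'aws:', 'iam::', '123456789012', ':root']
principal_arn = ''.join(parts)

# With the old single-colon 'iam:' the result would be
# 'arn:aws:iam:123456789012:root', which has the account ID in the
# region slot and is not a valid IAM root-principal ARN.
```

This matches the v1.38.1 hardening noted in the changelog of restricting SQS `SendMessage` to intra-account principals.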
@@ -15,9 +15,31 @@
       "ssh_pty" : "true",
       "ssh_username": "ec2-user",
       "ami_name": "mozdef_{{timestamp}}",
+      "launch_block_device_mappings": [
+        {
+          "delete_on_termination": true,
+          "device_name": "/dev/xvda",
+          "volume_size": 14
+        }
+      ],
       "ami_description": "An automated build of MozDef triggered via the makefile.",
       "ami_groups": [
         "all"
-      ]
+      ],
+      "run_tags": {
+        "app": "packer-builder-mozdef"
+      },
+      "run_volume_tags": {
+        "app": "packer-builder-mozdef"
+      },
+      "snapshot_tags": {
+        "app": "packer-builder-mozdef"
+      },
+      "tags": {
+        "github:Branch": "{{ user `github_branch`}}",
+        "buildTimestamp": "{{timestamp}}",
+        "app": "mozdef"
+      }
     }],
     "provisioners": [
       { "type": "shell",
@@ -28,12 +50,23 @@
         "sudo yum install -y mysql-devel python python-devel python-pip",
         "sudo yum install -y git",
         "sudo yum install -y docker",
+        "sudo yum install -y python3",
         "sudo pip install virtualenv ",
         "sudo pip install docker-compose",
         "sudo systemctl enable docker",
         "sudo systemctl start docker",
         "sudo mkdir -p /opt/mozdef/",
         "sudo git clone https://github.com/mozilla/MozDef /opt/mozdef",
-        "cd /opt/mozdef && sudo git checkout master"
+        "cd /opt/mozdef && sudo git checkout {{ user `github_branch`}}",
+        "cd /opt/mozdef && sudo git rev-parse HEAD",
+        "cd /opt/mozdef && sudo touch docker/compose/cloudy_mozdef.env docker/compose/rabbitmq.env docker/compose/cloudy_mozdef_mq_cloudtrail.env docker/compose/cloudy_mozdef_mq_sns_sqs.env docker/compose/cloudy_mozdef_kibana.env",
+        "cd /opt/mozdef && sudo make BRANCH={{ user `github_branch`}} set-version-and-fetch-docker-container",
+        "cd /opt/mozdef && sudo docker-compose -f docker/compose/docker-compose-cloudy-mozdef.yml -p mozdef pull",
         "rm -rf /home/ec2-user/.ssh/authorized_keys",
         "rm -rf /home/ec2-user/.ssh/known_hosts",
         "sudo rm -rf /tmp/*",
         "sudo rm -rf /home/ec2-user/.bash_history",
         "sudo rm -rf /root/.ssh"
       ]}
     ]
 }
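Packer's `{{timestamp}}` interpolation expands to Unix epoch seconds, which is what keeps `ami_name` and the `buildTimestamp` tag unique per build. A quick sketch of the same naming scheme (the function name is illustrative):

```python
import time

def ami_name_for(ts=None):
    """Mirror packer's "mozdef_{{timestamp}}": a fixed prefix plus
    Unix epoch seconds."""
    if ts is None:
        ts = int(time.time())
    return 'mozdef_{}'.format(ts)
```

Because AMI names must be unique per account/region, deriving the suffix from the build time lets the makefile rebuild repeatedly without name collisions.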
@@ -27,7 +27,7 @@ services:
     networks:
       - default
   mongodb:
-    image: mozdef/mozdef_mongodb
+    image: mozdef/mozdef_mongodb:latest
     env_file:
       - cloudy_mozdef.env
     restart: always
@@ -37,24 +37,24 @@ services:
     networks:
       - default
   bootstrap:
-    image: mozdef/mozdef_bootstrap
+    image: mozdef/mozdef_bootstrap:latest
     env_file:
       - cloudy_mozdef.env
-    command: bash -c 'python initial_setup.py http://elasticsearch:9200 cron/defaultMappingTemplate.json cron/backup.conf'
+    command: bash -c 'python initial_setup.py http://elasticsearch:9200 cron/defaultMappingTemplate.json cron/backup.conf http://kibana:5601'
     depends_on:
       - base
     networks:
       - default
   # MozDef Specific Containers
   base:
-    image: mozdef/mozdef_base
+    image: mozdef/mozdef_base:latest
     env_file:
       - cloudy_mozdef.env
     command: bash -c 'su - mozdef -c /opt/mozdef/envs/mozdef/cron/update_geolite_db.sh'
     volumes:
       - geolite_db:/opt/mozdef/envs/mozdef/data
   alertactions:
-    image: mozdef/mozdef_alertactions
+    image: mozdef/mozdef_alertactions:latest
     env_file:
       - cloudy_mozdef.env
     restart: always
@@ -66,9 +66,10 @@ services:
     networks:
       - default
   alerts:
-    image: mozdef/mozdef_alerts
+    image: mozdef/mozdef_alerts:latest
     env_file:
       - cloudy_mozdef.env
+      - rabbitmq.env
     volumes:
       - /opt/mozdef/docker/compose/mozdef_alerts/files/config.py:/opt/mozdef/envs/mozdef/alerts/lib/config.py
     restart: always
@@ -79,11 +80,11 @@ services:
     networks:
       - default
   cron:
-    image: mozdef/mozdef_cron
+    image: mozdef/mozdef_cron:latest
     env_file:
       - cloudy_mozdef.env
     restart: always
-    command: bash -c 'crond -n'
+    command: bash -c 'cd / && bash launch_cron'
     volumes:
       - cron:/opt/mozdef/envs/mozdef/cron
       - geolite_db:/opt/mozdef/envs/mozdef/data/
@@ -93,7 +94,7 @@ services:
     networks:
       - default
   loginput:
-    image: mozdef/mozdef_loginput
+    image: mozdef/mozdef_loginput:latest
     env_file:
       - cloudy_mozdef.env
     restart: always
@@ -106,7 +107,7 @@ services:
     networks:
       - default
   meteor:
-    image: mozdef/mozdef_meteor
+    image: mozdef/mozdef_meteor:latest
     env_file:
       - cloudy_mozdef.env
     restart: always
@@ -120,7 +121,7 @@ services:
     networks:
       - default
   rest:
-    image: mozdef/mozdef_rest
+    image: mozdef/mozdef_rest:latest
     env_file:
       - cloudy_mozdef.env
     restart: always
@@ -132,7 +133,7 @@ services:
     networks:
       - default
   syslog:
-    image: mozdef/mozdef_syslog
+    image: mozdef/mozdef_syslog:latest
     env_file:
       - cloudy_mozdef.env
     restart: always
@@ -145,7 +146,9 @@ services:
     networks:
       - default
   rabbitmq:
-    image: mozdef/mozdef_rabbitmq
+    image: mozdef/mozdef_rabbitmq:latest
+    env_file:
+      - rabbitmq.env
     restart: always
     command: rabbitmq-server
     ports:
@@ -156,7 +159,7 @@ services:
     networks:
       - default
   mq_eventtask:
-    image: mozdef/mozdef_mq_worker
+    image: mozdef/mozdef_mq_worker:latest
     env_file:
       - cloudy_mozdef.env
     restart: always
@@ -172,7 +175,7 @@ services:
     volumes:
       - geolite_db:/opt/mozdef/envs/mozdef/data/
   mq_cloudtrail:
-    image: mozdef/mozdef_mq_worker
+    image: mozdef/mozdef_mq_worker:latest
     env_file:
       - cloudy_mozdef.env
       - cloudy_mozdef_mq_cloudtrail.env
@@ -189,7 +192,7 @@ services:
     volumes:
       - geolite_db:/opt/mozdef/envs/mozdef/data/
   mq_sqs:
-    image: mozdef/mozdef_mq_worker
+    image: mozdef/mozdef_mq_worker:latest
     env_file:
       - cloudy_mozdef.env
       - cloudy_mozdef_mq_sns_sqs.env
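Every `image:` line above gains an explicit `:latest` tag. Docker already pulls `:latest` when no tag is given, so this changes presentation rather than behavior, but it makes the pinning point visible for the versioned builds this release introduces. A sketch of the rewrite applied across the file (the helper name is illustrative):

```python
def with_explicit_tag(image, tag='latest'):
    """Append :tag when the image reference has no tag after its last
    path segment (so a registry port like reg:5000/img is not mistaken
    for a tag)."""
    last_segment = image.rsplit('/', 1)[-1]
    return image if ':' in last_segment else '{}:{}'.format(image, tag)
```

Swapping `'latest'` for a release tag (e.g. `v1.38.3`) at this one point is what lets CI link container images to a MozDef version, as the changelog describes.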
@@ -103,7 +103,7 @@ services:
       cache_from:
-        - mozdef/mozdef_bootstrap
+        - mozdef_bootstrap:latest
-    command: bash -c 'while ! timeout 1 bash -c "echo > /dev/tcp/elasticsearch/9200";do sleep 1;done && python initial_setup.py http://elasticsearch:9200 cron/defaultMappingTemplate.json cron/backup.conf'
+    command: bash -c 'while ! timeout 1 bash -c "echo > /dev/tcp/elasticsearch/9200";do sleep 1;done && python initial_setup.py http://elasticsearch:9200 cron/defaultMappingTemplate.json cron/backup.conf http://kibana:5601'
     depends_on:
       - base
       - elasticsearch
@@ -2,8 +2,6 @@ FROM centos:7

 LABEL maintainer="mozdef@mozilla.com"

-# When changing kibana version remember to edit
-# docker/compose/mozdef_bootstrap/files/initial_setup.py accordingly
 ENV KIBANA_VERSION 5.6.14

 RUN \
@@ -16,6 +16,7 @@ import os
 import sys

 from elasticsearch.exceptions import ConnectionError
+import requests

 from mozdef_util.elasticsearch_client import ElasticsearchClient
 from mozdef_util.query_models import SearchQuery, TermMatch
@@ -23,8 +24,9 @@ from mozdef_util.query_models import SearchQuery, TermMatch

 parser = argparse.ArgumentParser(description='Create the correct indexes and aliases in elasticsearch')
 parser.add_argument('esserver', help='Elasticsearch server (ex: http://elasticsearch:9200)')
-parser.add_argument('default_mapping_file', help='The relative path to default mapping json file (ex: cron/defaultTemplateMapping.json)')
+parser.add_argument('default_mapping_file', help='The relative path to default mapping json file (ex: cron/defaultMappingTemplate.json)')
 parser.add_argument('backup_conf_file', help='The relative path to backup.conf file (ex: cron/backup.conf)')
+parser.add_argument('kibana_url', help='The URL of the kibana endpoint (ex: http://kibana:5601)')
 args = parser.parse_args()

@@ -35,6 +37,7 @@ esserver = esserver.strip('/')
 print "Connecting to " + esserver
 client = ElasticsearchClient(esserver)

+kibana_url = os.environ.get('OPTIONS_KIBANAURL', args.kibana_url)

 current_date = datetime.now()
 event_index_name = current_date.strftime("events-%Y%m%d")
@@ -42,7 +45,6 @@ previous_event_index_name = (current_date - timedelta(days=1)).strftime("events-
 weekly_index_alias = 'events-weekly'
 alert_index_name = current_date.strftime("alerts-%Y%m")
 kibana_index_name = '.kibana'
-kibana_version = '5.6.14'

 index_settings_str = ''
 with open(args.default_mapping_file) as data_file:
@@ -109,7 +111,7 @@ if kibana_index_name not in all_indices:

 # Wait for .kibana index to be ready
 num_times = 0
-while not client.index_exists('.kibana'):
+while not client.index_exists(kibana_index_name):
     if num_times < 3:
         print("Waiting for .kibana index to be ready")
         time.sleep(1)
@@ -121,7 +123,7 @@ while not client.index_exists('.kibana'):
 # Check to see if index patterns exist in .kibana
 query = SearchQuery()
 query.add_must(TermMatch('_type', 'index-pattern'))
-results = query.execute(client, indices=['.kibana'])
+results = query.execute(client, indices=[kibana_index_name])
 if len(results['hits']) == 0:
     # Create index patterns and assign default index mapping
     index_mappings_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'index_mappings')
@@ -131,21 +133,22 @@ if len(results['hits']) == 0:
         with open(json_file_path) as json_data:
             mapping_data = json.load(json_data)
             print "Creating {0} index mapping".format(mapping_data['title'])
-            client.save_object(body=mapping_data, index='.kibana', doc_type='index-pattern', doc_id=mapping_data['title'])
+            client.save_object(body=mapping_data, index=kibana_index_name, doc_type='index-pattern', doc_id=mapping_data['title'])

 # Assign default index to 'events'
-client.refresh('.kibana')
-default_mapping_data = {
-    "defaultIndex": 'events'
-}
-print "Assigning events as default index mapping"
-client.save_object(default_mapping_data, '.kibana', 'config', kibana_version)
+index_name = 'events'
+url = '{}/api/kibana/settings/defaultIndex'.format(kibana_url)
+data = {'value': index_name}
+r = requests.post(url, json=data, headers={'kbn-xsrf': "true"})
+if not r.ok:
+    print("Failed to set defaultIndex to events : {} {}".format(r.status_code, r.content))


 # Check to see if dashboards already exist in .kibana
 query = SearchQuery()
 query.add_must(TermMatch('_type', 'dashboard'))
-results = query.execute(client, indices=['.kibana'])
+results = query.execute(client, indices=[kibana_index_name])
 if len(results['hits']) == 0:
     dashboards_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'dashboards')
     listing = os.listdir(dashboards_path)
@@ -157,4 +160,4 @@ if len(results['hits']) == 0:
                 mapping_data['_source']['title'],
                 mapping_data['_type']
             ))
-            client.save_object(body=mapping_data['_source'], index='.kibana', doc_type=mapping_data['_type'], doc_id=mapping_data['_id'])
+            client.save_object(body=mapping_data['_source'], index=kibana_index_name, doc_type=mapping_data['_type'], doc_id=mapping_data['_id'])
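The initial_setup.py diff adds a fourth positional argument, `kibana_url`, and lets the `OPTIONS_KIBANAURL` environment variable override it. That argument plumbing can be sketched in isolation; the argument vector below is exactly what the updated bootstrap command passes:

```python
import argparse
import os

parser = argparse.ArgumentParser(description='Create the correct indexes and aliases in elasticsearch')
parser.add_argument('esserver')
parser.add_argument('default_mapping_file')
parser.add_argument('backup_conf_file')
parser.add_argument('kibana_url')

# Same four positionals as the updated bootstrap command.
args = parser.parse_args([
    'http://elasticsearch:9200',
    'cron/defaultMappingTemplate.json',
    'cron/backup.conf',
    'http://kibana:5601',
])

# The environment variable wins over the positional, as in the diff.
kibana_url = os.environ.get('OPTIONS_KIBANAURL', args.kibana_url)
```

Moving the default-index assignment from a direct write into the `.kibana` index (keyed to a hardcoded `kibana_version`) to Kibana's own settings API is what makes the hardcoded version string, and the Dockerfile comment about keeping it in sync, unnecessary.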
@@ -20,7 +20,12 @@ COPY docker/compose/mozdef_cron/files/syncAlertsToMongo.conf /opt/mozdef/envs/mo

 RUN chown -R mozdef:mozdef /opt/mozdef/envs/mozdef/cron

+# https://stackoverflow.com/a/48651061/168874
+COPY docker/compose/mozdef_cron/files/launch_cron /launch_cron

 USER mozdef
 RUN crontab /cron_entries.txt

+USER root
+WORKDIR /
+CMD ['./launch_cron']
@@ -1,3 +1,4 @@
+BASH_ENV=/env
 * * * * * /opt/mozdef/envs/mozdef/cron/healthAndStatus.sh
 * * * * * /opt/mozdef/envs/mozdef/cron/healthToMongo.sh
 * * * * * /opt/mozdef/envs/mozdef/cron/collectAttackers.sh
@@ -0,0 +1,5 @@
+#!/bin/bash
+
+# Export all of the root user's environment variables, filtering out non-custom variables, and write them into /env
+declare -p | egrep -v ' HOSTNAME=| LS_COLORS=| TERM=| PATH=| PWD=| TZ=| SHLVL=| HOME=| LESSOPEN=| _=| affinity:container=| BASHOPTS=| BASH_VERSINFO=| EUID=| PPID=| SHELLOPTS=| UID=' > /env
+crond -n -x ext
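The new `launch_cron` script writes the container's custom environment variables to `/env` before starting `crond`; the `BASH_ENV=/env` line added to the crontab then makes every cron job source them, since cron otherwise runs jobs with an almost empty environment. The filtering step can be modelled like this (a sketch of the egrep exclusion list, not the script itself):

```python
# Shell/runtime-managed variables the script's egrep excludes
# (subset shown; the script lists a few more such as BASHOPTS and UID).
SHELL_MANAGED = {
    'HOSTNAME', 'LS_COLORS', 'TERM', 'PATH', 'PWD', 'TZ',
    'SHLVL', 'HOME', 'LESSOPEN', '_',
}

def custom_vars(env):
    """Keep only variables that are not managed by the shell or runtime,
    i.e. the ones cron jobs actually need passed through."""
    return {k: v for k, v in env.items() if k not in SHELL_MANAGED}
```

Excluding `PATH`, `HOME`, and friends matters because re-exporting them from the root login shell could otherwise clobber the values cron sets up for each job.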
@@ -35,3 +35,9 @@ Add is_ip utility function
 ------------------

+* Replace elasticsearch flush with refresh
+
+
+1.0.6 (2019-03-29)
+------------------
+
+* Add get_aliases function to elasticsearch client
@@ -85,6 +85,9 @@ class ElasticsearchClient():
     def get_alias(self, alias_name):
         return self.es_connection.indices.get_alias(index='*', name=alias_name).keys()

+    def get_aliases(self):
+        return self.es_connection.cat.stats()['indices'].keys()
+
     def refresh(self, index_name):
         self.es_connection.indices.refresh(index=index_name)
@@ -56,6 +56,6 @@ setup(
     test_suite='tests',
     tests_require=[],
     url='https://github.com/mozilla/MozDef/tree/master/lib',
-    version='1.0.5',
+    version='1.0.6',
     zip_safe=False,
 )
@@ -31,7 +31,7 @@ jmespath==0.9.3
 kombu==4.1.0
 meld3==1.0.2
 mozdef-client==1.0.11
-mozdef-util==1.0.5
+mozdef-util==1.0.6
 MySQL-python==1.2.5
 netaddr==0.7.1
 nose==1.3.7