Mirror of https://github.com/mozilla/MozDef.git
Merge branch 'master' of https://github.com/mozilla/mozdef into ipaddr_alert_plugin
This commit is contained in:
Commit
70b013d04d
@@ -13,8 +13,8 @@ alerts/generic_alerts
 /.project
 /data
 .vscode
-cloudy_mozdef/aws_parameters.json
-cloudy_mozdef/aws_parameters.sh
+cloudy_mozdef/aws_parameters.*.json
+cloudy_mozdef/aws_parameters.*.sh
 docs/source/_build
 docs/source/_static
 *.swp
@@ -5,6 +5,58 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
 
 ## [Unreleased]
 
+## [v3.1.1] - 2019-07-25
+
+### Added
+- Ability to get open indices in ElasticsearchClient
+- Documentation on installing dependencies on Mac OS X
+
+### Changed
+- AWS Managed Elasticsearch/Kibana version to 6.7
+
+### Fixed
+- Disk free/total in /about page shows at most 2 decimal places
+- Connections to SQS and S3 without access key and secret
+- Ability to block IPs and add to Watchlist
+
+## [v3.1.0] - 2019-07-18
+
+### Added
+- Captured the AWS CodeBuild CI/CD configuration in code with documentation
+- Support for HTTP Basic Auth in AWS deployment
+- Docker healthchecks to docker containers
+- Descriptions to all AWS Lambda functions
+- Support for alerts-* index in docker environment
+- Alert that detects excessive numbers of AWS API describe calls
+- Additional AWS infrastructure to support AWS re:Inforce 2019 workshop
+- Documentation specific to MozDef installation now that MozDef uses Python 3
+- Config setting for CloudTrail notification SQS queue polling time
+- Config setting for Slack bot welcome message
+
+### Changed
+- Kibana port from 9443 to 9090
+- AWS CloudFormation default values from "unset" to empty string
+- Simplified mozdef-mq logic determining the AMQP endpoint URI
+- SQS to always use secure transport
+- CloudTrail alert unit tests
+- Incident summary placeholder text for greater clarity
+- Display of Veris data for easier viewing
+- All Dockerfiles to reduce image size, pin package signing keys and improve clarity
+
+### Fixed
+- Workers starting before GeoIP data is available
+- Mismatched MozDefACMCertArn parameter name in CloudFormation template
+- Duplicate mozdefvpcflowlogs object
+- Hard-coded AWS Availability Zone
+- httplib2 by updating to version 0.13.0 for Python 3
+- mozdef_util by modifying the bulk queue to acquire a lock before saving events
+- Dashboard Kibana URL
+- Unnecessary and conflicting package dependencies from MozDef and mozdef_util
+- get_indices to include closed indices
+
 ## [v3.0.0] - 2019-07-08
 ### Added
 - Support for Python3
@@ -132,7 +184,9 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
 - Added checks on sending SQS messages to only accept intra-account messages
 - Improved docker performance and disk space requirements
 
-[Unreleased]: https://github.com/mozilla/MozDef/compare/v3.0.0...HEAD
+[Unreleased]: https://github.com/mozilla/MozDef/compare/v3.1.1...HEAD
+[v3.1.1]: https://github.com/mozilla/MozDef/compare/v3.1.0...v3.1.1
+[v3.1.0]: https://github.com/mozilla/MozDef/compare/v3.0.0...v3.1.0
 [v3.0.0]: https://github.com/mozilla/MozDef/compare/v2.0.1...v3.0.0
 [v2.0.1]: https://github.com/mozilla/MozDef/compare/v2.0.0...v2.0.1
 [v2.0.0]: https://github.com/mozilla/MozDef/compare/v1.40.0...v2.0.0
@@ -8,5 +8,5 @@
 
 # Entire set can review certain documentation files
 /README.md @pwnbus @mpurzynski @Phrozyn @tristanweir @gene1wood @andrewkrug
-/CHANGELOG @pwnbus @mpurzynski @Phrozyn @tristanweir @gene1wood @andrewkrug
+/CHANGELOG.md @pwnbus @mpurzynski @Phrozyn @tristanweir @gene1wood @andrewkrug
 /docs/ @pwnbus @mpurzynski @Phrozyn @tristanweir @gene1wood @andrewkrug
Makefile
@@ -53,7 +53,7 @@ run-tests-resources: ## Just run the external resources required for tests
 .PHONY: run-test
 run-test: run-tests
 
-.PHONY: run-test
+.PHONY: run-tests
 run-tests: run-tests-resources ## Just run the tests (no build/get). Use `make TEST_CASE=tests/...` for specific tests only
 	docker run -it --rm mozdef/mozdef_tester bash -c "source /opt/mozdef/envs/python/bin/activate && flake8 --config .flake8 ./"
 	docker run -it --rm --network=test-mozdef_default mozdef/mozdef_tester bash -c "source /opt/mozdef/envs/python/bin/activate && py.test --delete_indexes --delete_queues $(TEST_CASE)"
@@ -25,7 +25,7 @@ The Mozilla Enterprise Defense Platform (MozDef) seeks to automate the security
 
 ## Goals:
 
-* Provide a platform for use by defenders to rapidly discover and respond to security incidents.
+* Provide a platform for use by defenders to rapidly discover and respond to security incidents
 * Automate interfaces to other systems like bunker, cymon, mig
 * Provide metrics for security events and incidents
 * Facilitate real-time collaboration amongst incident handlers
@@ -36,7 +36,7 @@ The Mozilla Enterprise Defense Platform (MozDef) seeks to automate the security
 
 MozDef is in production at Mozilla where we are using it to process over 300 million events per day.
 
-[1]: https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=mozdef-for-aws&templateURL=https://s3-us-west-2.amazonaws.com/public.us-west-2.infosec.mozilla.org/mozdef/cf/v1.38.5/mozdef-parent.yml
+[1]: https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=mozdef-for-aws&templateURL=https://s3-us-west-2.amazonaws.com/public.us-west-2.infosec.mozilla.org/mozdef/cf/v3.1.1/mozdef-parent.yml
 
 ## Survey & Contacting us
@@ -0,0 +1,3 @@
+[options]
+threshold_count = 1
+search_depth_min = 60
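The new `[options]` file above is a plain INI-style config. As an illustration only (MozDef alerts actually load this through `AlertTask.parse_config()`, not the standard library), the same section can be read with `configparser`:

```python
# Reading the alert's [options] section with the standard library.
import configparser

conf_text = """\
[options]
threshold_count = 1
search_depth_min = 60
"""

parser = configparser.ConfigParser()
parser.read_string(conf_text)
threshold_count = parser.getint('options', 'threshold_count')
search_depth_min = parser.getint('options', 'search_depth_min')
print(threshold_count, search_depth_min)  # 1 60
```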
@@ -0,0 +1,48 @@
+#!/usr/bin/env python
+
+# This Source Code Form is subject to the terms of the Mozilla Public
+# License, v. 2.0. If a copy of the MPL was not distributed with this
+# file, You can obtain one at http://mozilla.org/MPL/2.0/.
+# Copyright (c) 2014 Mozilla Corporation
+
+
+from lib.alerttask import AlertTask
+from mozdef_util.query_models import SearchQuery, TermMatch
+import re
+
+
+class AlertLdapPasswordSpray(AlertTask):
+    def main(self):
+        self.parse_config('ldap_password_spray.conf', ['threshold_count', 'search_depth_min'])
+        search_query = SearchQuery(minutes=int(self.config.search_depth_min))
+        search_query.add_must([
+            TermMatch('category', 'ldap'),
+            TermMatch('details.response.error', 'LDAP_INVALID_CREDENTIALS')
+        ])
+        self.filtersManual(search_query)
+        self.searchEventsAggregated('details.client', samplesLimit=10)
+        self.walkAggregations(threshold=int(self.config.threshold_count))
+
+    def onAggregation(self, aggreg):
+        category = 'ldap'
+        tags = ['ldap']
+        severity = 'WARNING'
+        email_list = set()
+        email_regex = r'.*mail=([a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+\.[a-zA-Z0-9_-]+)'
+
+        for event in aggreg['allevents']:
+            for request in event['_source']['details']['requests']:
+                match_object = re.match(email_regex, request['details'][0])
+                if match_object:
+                    email_list.add(match_object.group(1))
+
+        # If no emails, don't throw alert
+        # if len(email_list) == 0:
+        #     return None
+
+        summary = 'LDAP Password Spray Attack in Progress from {0} targeting the following account(s): {1}'.format(
+            aggreg['value'],
+            ",".join(sorted(email_list))
+        )
+
+        return self.createAlertDict(summary, category, tags, aggreg['events'], severity)
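The email-extraction regex in the new alert can be exercised in isolation. The sample request strings below are hypothetical illustrations of `mail=...` LDAP filter fragments, not real MozDef event data:

```python
import re

# Same pattern as in the alert: capture the address from a "mail=..." fragment
email_regex = r'.*mail=([a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+\.[a-zA-Z0-9_-]+)'

# Hypothetical request detail strings
samples = [
    'BIND dn="mail=jane.doe@example.com,o=com,dc=example"',
    'BIND dn="uid=no-address-here"',
]

extracted = set()
for detail in samples:
    match = re.match(email_regex, detail)
    if match:
        extracted.add(match.group(1))

print(sorted(extracted))  # ['jane.doe@example.com']
```

Note the trailing character class has no dot, so the capture stops cleanly at the top-level domain even when a comma-separated DN follows the address.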
@@ -20,6 +20,7 @@ from celery import Task
 from celery.utils.log import get_task_logger
 
 from mozdef_util.utilities.toUTC import toUTC
+from mozdef_util.utilities.logger import logger
 from mozdef_util.elasticsearch_client import ElasticsearchClient
 from mozdef_util.query_models import TermMatch, ExistsMatch

@@ -545,6 +546,6 @@ class AlertTask(Task):
         try:
             json_obj = json.load(fd)
         except ValueError:
-            sys.stderr.write("FAILED to open the configuration file\n")
+            logger.error("FAILED to open the configuration file\n")
 
         return json_obj
@@ -59,6 +59,8 @@ def enrich(alert, known_ips):
 
     alert = alert.copy()
 
+    if 'details' not in alert:
+        alert['details'] = {}
     alert['details']['sites'] = []
 
     for ip in set(ips):

@@ -140,6 +142,8 @@ class message(object):
     '''
 
     def __init__(self):
+        # Run plugin on all alerts
+        self.registration = '*'
         self._config = _load_config(CONFIG_FILE)
 
     def onMessage(self, message):
@@ -88,6 +88,9 @@ class message(object):
     '''
 
     def __init__(self):
+        # Run plugin on portscan alerts
+        self.registration = 'portscan'
+
        config = _load_config(CONFIG_FILE)
 
        try:
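The two plugin hunks above both set a `registration` attribute, `'*'` for the watchlist enrichment and `'portscan'` for the portscan action. A minimal sketch of how such a registration value could gate which alerts reach a plugin's `onMessage()`; the dispatcher function here is hypothetical, not MozDef's actual plugin loader:

```python
# Hypothetical dispatcher: '*' matches every alert, any other
# registration value must match the alert's category.
class WatchlistPlugin:
    def __init__(self):
        self.registration = '*'        # run on all alerts

class PortscanPlugin:
    def __init__(self):
        self.registration = 'portscan' # run only on portscan alerts

def plugins_for(alert, plugins):
    """Return the plugins whose registration matches this alert's category."""
    return [
        p for p in plugins
        if p.registration == '*' or p.registration == alert.get('category')
    ]

plugins = [WatchlistPlugin(), PortscanPlugin()]
matched = plugins_for({'category': 'portscan'}, plugins)
print([type(p).__name__ for p in matched])  # ['WatchlistPlugin', 'PortscanPlugin']
```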
@@ -160,6 +160,9 @@ def init_config():
     # mqack=True sets persistent delivery, False sets transient delivery
     options.mq_ack = get_config('mqack', True, options.configfile)
 
+    # whether or not the bot should send a welcome message upon connecting
+    options.notify_welcome = get_config('notify_welcome', True, options.configfile)
+
 
 if __name__ == "__main__":
     parser = OptionParser()

@@ -170,7 +173,7 @@ if __name__ == "__main__":
     (options, args) = parser.parse_args()
     init_config()
 
-    bot = SlackBot(options.slack_token, options.channels, options.name)
+    bot = SlackBot(options.slack_token, options.channels, options.name, options.notify_welcome)
     monitor_alerts_thread = Thread(target=consume_alerts, args=[bot])
     monitor_alerts_thread.daemon = True
     monitor_alerts_thread.start()
@@ -20,10 +20,11 @@ greetings = [
 
 
 class SlackBot():
-    def __init__(self, api_key, channels, bot_name):
+    def __init__(self, api_key, channels, bot_name, notify_welcome):
         self.slack_client = SlackClient(api_key)
         self.channels = channels
         self.bot_name = bot_name
+        self.notify_welcome = notify_welcome
         self.load_commands()
 
     def load_commands(self):

@@ -36,7 +37,8 @@ class SlackBot():
     def run(self):
         if self.slack_client.rtm_connect():
             logger.info("Bot connected to slack")
-            self.post_welcome_message(random.choice(greetings))
+            if self.notify_welcome:
+                self.post_welcome_message(random.choice(greetings))
             self.listen_for_messages()
         else:
             logger.error("Unable to connect to slack")
@@ -21,6 +21,10 @@ echo " Event : ${CODEBUILD_WEBHOOK_EVENT}"
 echo " Head Ref : ${CODEBUILD_WEBHOOK_HEAD_REF}"
 echo " Trigger : ${CODEBUILD_WEBHOOK_TRIGGER}"
 
+if [ -z "${CODEBUILD_WEBHOOK_TRIGGER}" ]; then
+  echo "CODEBUILD_WEBHOOK_TRIGGER is unset, likely because this build was retried. Retrying builds isn't supported. Exiting"
+  exit 1
+fi
 echo "Codebuild is ubuntu 14.04. Installing packer in order to compensate. Someone should build a CI docker container ;)."
 wget -nv https://releases.hashicorp.com/packer/1.3.5/packer_1.3.5_linux_amd64.zip
 unzip packer_1.3.5_linux_amd64.zip -d /usr/bin
@@ -1,5 +1,9 @@
 AWSTemplateFormatVersion: 2010-09-09
-Description: CodeBuild CI/CD Job to build on commit
+Description: MozDef CodeBuild CI/CD Job and IAM role
+Parameters:
+  CodeBuildProjectName:
+    Type: String
+    Description: The name of the CodeBuild project to create. Project names can't be modified once the project is created
 Mappings:
   VariableMap:
     Variables:
@@ -42,12 +46,12 @@ Resources:
           Action:
             - s3:PutObject*
             - s3:GetObject*
-          Resource: !Join [ '', [ 'arn:aws:s3::', !FindInMap [ 'VariableMap', 'Variables', 'S3BucketToPublishCloudFormationTemplatesTo' ], '/*' ] ]
+          Resource: !Join [ '', [ 'arn:aws:s3:::', !FindInMap [ 'VariableMap', 'Variables', 'S3BucketToPublishCloudFormationTemplatesTo' ], '/*' ] ]
         - Sid: ListS3BucketContents
           Effect: Allow
           Action:
             - s3:ListBucket*
-          Resource: !Join [ '', [ 'arn:aws:s3::', !FindInMap [ 'VariableMap', 'Variables', 'S3BucketToPublishCloudFormationTemplatesTo' ] ] ]
+          Resource: !Join [ '', [ 'arn:aws:s3:::', !FindInMap [ 'VariableMap', 'Variables', 'S3BucketToPublishCloudFormationTemplatesTo' ] ] ]
         - Sid: CreatePackerEC2Instance
           Effect: Allow
           Action:
@@ -79,24 +83,21 @@ Resources:
             - ec2:DescribeImageAttribute
             - ec2:DescribeSubnets
           Resource: '*'
-        - Sid: ReadSSMParameters
+        - Sid: ReadSecrets
           Effect: Allow
           Action: ssm:GetParameter
           Resource: arn:aws:ssm:*:*:parameter/mozdef/ci/*
-        # I think these are vestigial, created by the CodeBuild UI.
-        # Also they're not even the right resource path since they contain "/aws/codebuild/" but the actual LogGroup that CodeBuild writes to doesn't
-        # e.g. arn:aws:logs:us-west-2:371522382791:log-group:MozDefCI:*
-        # - Sid: NotSure1
-        #   Effect: Allow
-        #   Action:
-        #     - logs:CreateLogGroup
-        #     - logs:CreateLogStream
-        #     - logs:PutLogEvents
-        #   Resource:
-        #     - !Join [ ':', [ 'arn:aws:logs', !Ref 'AWS::Region', !Ref 'AWS::AccountId', 'log-group:/aws/codebuild/mozdef' ] ]
-        #     - !Join [ ':', [ 'arn:aws:logs', !Ref 'AWS::Region', !Ref 'AWS::AccountId', 'log-group:/aws/codebuild/mozdef:*' ] ]
-        #     - !Join [ ':', [ 'arn:aws:logs', !Ref 'AWS::Region', !Ref 'AWS::AccountId', 'log-group:/aws/codebuild/MozDefCI' ] ]
-        #     - !Join [ ':', [ 'arn:aws:logs', !Ref 'AWS::Region', !Ref 'AWS::AccountId', 'log-group:/aws/codebuild/MozDefCI:*' ] ]
+        - Sid: CloudWatchLogGroup
+          Effect: Allow
+          Action:
+            - logs:CreateLogGroup
+          Resource: !Join [ ':', [ 'arn:aws:logs', !Ref 'AWS::Region', !Ref 'AWS::AccountId', 'log-group', !FindInMap [ 'VariableMap', 'Variables', 'CloudWatchLogGroupName' ] ] ]
+        - Sid: CloudWatchLogStream
+          Effect: Allow
+          Action:
+            - logs:CreateLogStream
+            - logs:PutLogEvents
+          Resource: !Join [ ':', [ 'arn:aws:logs', !Ref 'AWS::Region', !Ref 'AWS::AccountId', 'log-group', !FindInMap [ 'VariableMap', 'Variables', 'CloudWatchLogGroupName' ], 'log-stream:*' ] ]
         - Sid: NotSure2
           Action:
             - s3:PutObject
@@ -110,8 +111,8 @@ Resources:
   CodeBuildProject:
     Type: AWS::CodeBuild::Project
     Properties:
-      Name: mozdef
-      Description: Builds MozDef AMI, dockers containers, and runs test suite. Owner is Andrew Krug.
+      Name: !Ref CodeBuildProjectName
+      Description: Builds the MozDef AMI, the MozDef Docker containers and shares the AMIs with AWS Marketplace.
       BadgeEnabled: True
       ServiceRole: !GetAtt CodeBuildServiceRole.Arn
       Artifacts:
@@ -120,22 +121,21 @@ Resources:
         Type: LINUX_CONTAINER
         ComputeType: BUILD_GENERAL1_MEDIUM
         Image: aws/codebuild/docker:18.09.0-1.7.0
         PrivilegedMode: true # Required for docker
       Source:
         Type: GITHUB
-        # Auth: # This information is for the AWS CodeBuild console's use only. Your code should not get or set Auth directly.
-        # SourceIdentifier: # Not sure what this should be yet
         BuildSpec: cloudy_mozdef/buildspec.yml
-        Location: https://github.com/mozilla/MozDef
+        Location: https://github.com/mozilla/MozDef.git
+        ReportBuildStatus: True
       Triggers:
         Webhook: true
         FilterGroups:
           - - Type: EVENT
               Pattern: PUSH
-            - Type: HEAD_REF # Build on commits to branch reinforce2019
-              Pattern: '^refs/heads/reinforce2019'
+            - Type: HEAD_REF # Build on commits to branch master
+              Pattern: '^refs/heads/master'
           - - Type: EVENT
               Pattern: PUSH
             - Type: HEAD_REF # Build on tags like v1.2.3 and v1.2.3-testing
               Pattern: '^refs/tags\/v[0-9]+\.[0-9]+\.[0-9]+(\-(prod|pre|testing))?$'
       Tags:
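The tag trigger above is a regular expression over Git refs. The quick check below exercises the same pattern in Python; CodeBuild's webhook filter regex flavor may differ slightly, so this is only an approximation of how the filter behaves:

```python
import re

# The tag trigger pattern from the CodeBuild FilterGroups above
tag_pattern = re.compile(r'^refs/tags\/v[0-9]+\.[0-9]+\.[0-9]+(\-(prod|pre|testing))?$')

refs = [
    'refs/tags/v3.1.1',          # release tag: matches
    'refs/tags/v1.2.3-testing',  # suffixed tag: matches
    'refs/tags/v1.2',            # missing patch version: no match
    'refs/heads/master',         # a branch, not a tag: no match
]

for ref in refs:
    print(ref, bool(tag_pattern.match(ref)))
```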
@@ -63,7 +63,7 @@ Resources:
         EBSEnabled: true
         VolumeType: gp2
         VolumeSize: !Ref BlockStoreSizeGB
-      ElasticsearchVersion: '5.6'
+      ElasticsearchVersion: '6.7'
       ElasticsearchClusterConfig:
         InstanceCount: !Ref ESInstanceCount
       AccessPolicies:
@@ -340,9 +340,11 @@ Resources:
       - content: |
           # This configures the worker that pulls in CloudTrail logs
           OPTIONS_TASKEXCHANGE=${CloudTrailSQSNotificationQueueName}
           OPTIONS_REGION=${AWS::Region}
         path: /opt/mozdef/docker/compose/cloudy_mozdef_mq_cloudtrail.env
       - content: |
           OPTIONS_TASKEXCHANGE=${MozDefSQSQueueName}
           OPTIONS_REGION=${AWS::Region}
         path: /opt/mozdef/docker/compose/cloudy_mozdef_mq_sqs.env
       - content: |
           [Unit]
@@ -5,5 +5,5 @@ if $programname == 'eventtask-worker' then /var/log/mozdef/eventtask.log
 if $programname == 'alertactions-worker' then /var/log/mozdef/alertactions.log
-if $programname == 'mongod.3002' then /var/log/mozdef/mongo/meteor-mongo.log
+if $programname == 'mongod' then /var/log/mozdef/mongo/mongo.log
-if $programname == 'kibana5' then /var/log/mozdef/kibana.log
+if $programname == 'kibana' then /var/log/mozdef/kibana.log
 & stop
@@ -16,6 +16,7 @@ import traceback
 import mozdef_client as mozdef
 
 from mozdef_util.utilities.dot_dict import DotDict
+from mozdef_util.utilities.logger import logger
 
 
 def fatal(msg):

@@ -23,10 +24,6 @@ def fatal(msg):
     sys.exit(1)
 
 
-def debug(msg):
-    sys.stderr.write("+++ {}\n".format(msg))
-
-
 # This is from https://auth0.com/docs/api/management/v2#!/Logs/get_logs
 # auth0 calls these events with an acronym and description
 # The logs have the acronym, but not the description
@@ -163,7 +160,7 @@ def process_msg(mozmsg, msg):
         details.success = True
     except KeyError:
         # New message type, check https://manage-dev.mozilla.auth0.com/docs/api/management/v2#!/Logs/get_logs for ex.
-        debug("New auth0 message type, please add support: {}".format(msg.type))
+        logger.error("New auth0 message type, please add support: {}".format(msg.type))
         details["eventname"] = msg.type
 
     # determine severity level
@@ -323,7 +320,7 @@ def main():
     config = DotDict(hjson.load(fd))
 
     if config is None:
-        print("No configuration file 'auth02mozdef.json' found.")
+        logger.error("No configuration file 'auth02mozdef.json' found.")
         sys.exit(1)
 
     headers = {"Authorization": "Bearer {}".format(config.auth0.token), "Accept": "application/json"}
@@ -24,7 +24,7 @@ def esCloseIndices():
     logger.debug('started')
     try:
         es = ElasticsearchClient((list('{0}'.format(s) for s in options.esservers)))
-        indices = es.get_indices()
+        indices = es.get_open_indices()
     except Exception as e:
         logger.error("Unhandled exception while connecting to ES, terminating: %r" % (e))
@@ -46,7 +46,7 @@ def isJVMMemoryHigh():
 
 def clearESCache():
     es = esConnect(None)
-    indexes = es.get_indices()
+    indexes = es.get_open_indices()
     # assumes index names like events-YYYYMMDD etc.
     # used to avoid operating on current indexes
     dtNow = datetime.utcnow()
@@ -77,11 +77,14 @@ def getEsNodesStats():
         load_str = "{0},{1},{2}".format(load_average['1m'], load_average['5m'], load_average['15m'])
         hostname = nodeid
         if 'host' in jsonobj['nodes'][nodeid]:
-            hostname=jsonobj['nodes'][nodeid]['host']
+            hostname = jsonobj['nodes'][nodeid]['host']
 
+        disk_free = "{0:.2f}".format(jsonobj['nodes'][nodeid]['fs']['total']['free_in_bytes'] / (1024 * 1024 * 1024))
+        disk_total = "{0:.2f}".format(jsonobj['nodes'][nodeid]['fs']['total']['total_in_bytes'] / (1024 * 1024 * 1024))
         results.append({
             'hostname': hostname,
-            'disk_free': jsonobj['nodes'][nodeid]['fs']['total']['free_in_bytes'] / (1024 * 1024 * 1024),
-            'disk_total': jsonobj['nodes'][nodeid]['fs']['total']['total_in_bytes'] / (1024 * 1024 * 1024),
+            'disk_free': disk_free,
+            'disk_total': disk_total,
             'mem_heap_per': jsonobj['nodes'][nodeid]['jvm']['mem']['heap_used_percent'],
             'gc_old': jsonobj['nodes'][nodeid]['jvm']['gc']['collectors']['old']['collection_time_in_millis'] / 1000,
             'cpu_usage': jsonobj['nodes'][nodeid]['os']['cpu']['percent'],
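The change above is what caps the /about page's disk figures at two decimal places: bytes are converted to gigabytes through a `"{0:.2f}"` format string instead of being reported as a raw float. A standalone illustration of that conversion:

```python
# Convert a byte count to gigabytes, rendered with at most 2 decimal places,
# mirroring the "{0:.2f}" formatting used for disk_free/disk_total above.
def bytes_to_gb_str(num_bytes):
    return "{0:.2f}".format(num_bytes / (1024 * 1024 * 1024))

print(bytes_to_gb_str(123456789012))  # 114.98
```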
@@ -7,11 +7,17 @@ ENV ES_JAVA_VERSION 1.8.0
 
 RUN \
+    gpg="gpg --no-default-keyring --secret-keyring /dev/null --keyring /dev/null --no-option --keyid-format 0xlong" && \
+    rpmkeys --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 && \
+    rpm -qi gpg-pubkey-f4a80eb5 | $gpg | grep 0x24C6A8A7F4A80EB5 && \
+    rpmkeys --import https://packages.elastic.co/GPG-KEY-elasticsearch && \
+    rpm -qi gpg-pubkey-d88e42b4-52371eca | $gpg | grep 0xD27D666CD88E42B4 && \
     yum install -y java-$ES_JAVA_VERSION && \
     mkdir -p /opt/mozdef/envs && \
-    curl -s -L https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-$ES_VERSION.rpm -o elasticsearch.rpm && \
-    rpm -i elasticsearch.rpm && \
-    yum clean all
+    curl --silent --location https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-$ES_VERSION.rpm -o elasticsearch.rpm && \
+    rpm --install elasticsearch.rpm && \
+    yum clean all && \
+    rm -rf /var/cache/yum
 
 USER elasticsearch
@@ -5,7 +5,7 @@ discovery.type: single-node
 action.destructive_requires_name: true
 
 # Disable auto creation except for these indexes
-action.auto_create_index: .watches,.triggered_watches,.watcher-history-*
+action.auto_create_index: .watches,.triggered_watches,.watcher-history-*,.kibana_*
 
 # Add these to prevent requiring a user/pass and termination of ES when looking for "ingest" assignments.
 # The watcher directive allows for the deletion of failed watcher indices as they sometimes get created with glitches.
@@ -6,11 +6,12 @@ LABEL maintainer="mozdef@mozilla.com"
 ENV KIBANA_VERSION 6.8.0
 
 RUN \
-    curl -s -L https://artifacts.elastic.co/downloads/kibana/kibana-$KIBANA_VERSION-linux-x86_64.tar.gz | tar -C / -xz && \
-    cd /kibana-$KIBANA_VERSION-linux-x86_64
+    mkdir /kibana && \
+    curl --silent --location https://artifacts.elastic.co/downloads/kibana/kibana-$KIBANA_VERSION-linux-x86_64.tar.gz \
+    | tar --extract --gzip --strip 1 --directory /kibana
 
-COPY docker/compose/kibana/files/kibana.yml /kibana-$KIBANA_VERSION-linux-x86_64/config/kibana.yml
+COPY docker/compose/kibana/files/kibana.yml /kibana/config/kibana.yml
 
-WORKDIR /kibana-$KIBANA_VERSION-linux-x86_64
+WORKDIR /kibana
 
 EXPOSE 5601
@@ -5,14 +5,20 @@ LABEL maintainer="mozdef@mozilla.com"
 ENV MONGO_VERSION 3.4
 
 RUN \
-    echo -e "[mongodb-org-$MONGO_VERSION]\nname=MongoDB Repository\nbaseurl=https://repo.mongodb.org/yum/redhat/\$releasever/mongodb-org/$MONGO_VERSION/x86_64/\ngpgcheck=1\nenabled=1\ngpgkey=https://www.mongodb.org/static/pgp/server-$MONGO_VERSION.asc" > /etc/yum.repos.d/mongodb.repo && \
+    echo -e "[mongodb-org-$MONGO_VERSION]\n\
+name=MongoDB Repository\n\
+baseurl=https://repo.mongodb.org/yum/redhat/\$releasever/mongodb-org/$MONGO_VERSION/x86_64/\n\
+gpgcheck=1\n\
+enabled=1\n\
+gpgkey=https://www.mongodb.org/static/pgp/server-$MONGO_VERSION.asc" > /etc/yum.repos.d/mongodb.repo && \
+    gpg="gpg --no-default-keyring --secret-keyring /dev/null --keyring /dev/null --no-option --keyid-format 0xlong" && \
+    rpmkeys --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 && \
+    rpm -qi gpg-pubkey-f4a80eb5 | $gpg | grep 0x24C6A8A7F4A80EB5 && \
+    rpmkeys --import https://www.mongodb.org/static/pgp/server-3.4.asc && \
+    rpm -qi gpg-pubkey-a15703c6 | $gpg | grep 0xBC711F9BA15703C6 && \
     yum install -y mongodb-org && \
-    yum clean all
+    yum clean all && \
+    rm -rf /var/cache/yum
 
 COPY docker/compose/mongodb/files/mongod.conf /etc/mongod.conf
@@ -2,10 +2,9 @@ FROM mozdef/mozdef_base
 
 LABEL maintainer="mozdef@mozilla.com"
 
-COPY alerts /opt/mozdef/envs/mozdef/alerts
-COPY docker/compose/mozdef_alertactions/files/alert_actions_worker.conf /opt/mozdef/envs/mozdef/alerts/alert_actions_worker.conf
-COPY docker/compose/mozdef_alerts/files/config.py /opt/mozdef/envs/mozdef/alerts/lib/config.py
-RUN chown -R mozdef:mozdef /opt/mozdef/envs/mozdef/alerts
+COPY --chown=mozdef:mozdef alerts /opt/mozdef/envs/mozdef/alerts
+COPY --chown=mozdef:mozdef docker/compose/mozdef_alertactions/files/alert_actions_worker.conf /opt/mozdef/envs/mozdef/alerts/alert_actions_worker.conf
+COPY --chown=mozdef:mozdef docker/compose/mozdef_alerts/files/config.py /opt/mozdef/envs/mozdef/alerts/lib/config.py
 
 WORKDIR /opt/mozdef/envs/mozdef/alerts
@@ -2,11 +2,9 @@ FROM mozdef/mozdef_base
 
 LABEL maintainer="mozdef@mozilla.com"
 
-COPY alerts /opt/mozdef/envs/mozdef/alerts
-COPY docker/compose/mozdef_alerts/files/config.py /opt/mozdef/envs/mozdef/alerts/lib/
-COPY docker/compose/mozdef_alerts/files/get_watchlist.conf /opt/mozdef/envs/mozdef/alerts/get_watchlist.conf
-
-RUN chown -R mozdef:mozdef /opt/mozdef/envs/mozdef/alerts
+COPY --chown=mozdef:mozdef alerts /opt/mozdef/envs/mozdef/alerts
+COPY --chown=mozdef:mozdef docker/compose/mozdef_alerts/files/config.py /opt/mozdef/envs/mozdef/alerts/lib/
+COPY --chown=mozdef:mozdef docker/compose/mozdef_alerts/files/get_watchlist.conf /opt/mozdef/envs/mozdef/alerts/get_watchlist.conf
 
 WORKDIR /opt/mozdef/envs/mozdef/alerts
@@ -5,54 +5,55 @@ LABEL maintainer="mozdef@mozilla.com"
 ENV TZ UTC
 
 RUN \
+    gpg="gpg --no-default-keyring --secret-keyring /dev/null --keyring /dev/null --no-option --keyid-format 0xlong" && \
+    rpmkeys --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 && \
+    rpm -qi gpg-pubkey-f4a80eb5 | $gpg | grep 0x24C6A8A7F4A80EB5 && \
+    rpmkeys --import https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7 && \
+    rpm -qi gpg-pubkey-352c64e5 | $gpg | grep 0x6A2FAEA2352C64E5 && \
     yum makecache fast && \
     yum install -y epel-release && \
     yum install -y \
-        glibc-devel \
-        gcc \
-        libstdc++ \
-        libffi-devel \
-        zlib-devel \
-        libcurl-devel \
-        openssl \
-        openssl-devel \
-        git \
-        make && \
-    useradd -ms /bin/bash -d /opt/mozdef -m mozdef && \
-    mkdir /opt/mozdef/envs && \
-    cd /opt/mozdef && \
-    yum install -y python36 \
-        python36-devel \
-        python36-pip && \
+        glibc-devel \
+        gcc \
+        libstdc++ \
+        libffi-devel \
+        zlib-devel \
+        libcurl-devel \
+        openssl \
+        openssl-devel \
+        git \
+        make \
+        python36 \
+        python36-devel \
+        python36-pip && \
+    yum clean all && \
+    rm -rf /var/cache/yum && \
+    useradd --create-home --shell /bin/bash --home-dir /opt/mozdef mozdef && \
     pip3 install virtualenv && \
-    mkdir /opt/mozdef/envs/mozdef && \
-    mkdir /opt/mozdef/envs/mozdef/cron
+    install --owner mozdef --group mozdef --directory /opt/mozdef/envs /opt/mozdef/envs/mozdef /opt/mozdef/envs/mozdef/cron
 
 # Force pycurl to understand we prefer nss backend
 # Pycurl with ssl support is required by kombu in order to use SQS
 ENV PYCURL_SSL_LIBRARY=nss
 
 # Create python virtual environment and install dependencies
-COPY requirements.txt /opt/mozdef/envs/mozdef/requirements.txt
+COPY --chown=mozdef:mozdef requirements.txt /opt/mozdef/envs/mozdef/requirements.txt
 
-COPY cron/update_geolite_db.py /opt/mozdef/envs/mozdef/cron/update_geolite_db.py
-COPY cron/update_geolite_db.conf /opt/mozdef/envs/mozdef/cron/update_geolite_db.conf
-COPY cron/update_geolite_db.sh /opt/mozdef/envs/mozdef/cron/update_geolite_db.sh
+COPY --chown=mozdef:mozdef cron/update_geolite_db.py /opt/mozdef/envs/mozdef/cron/update_geolite_db.py
+COPY --chown=mozdef:mozdef cron/update_geolite_db.conf /opt/mozdef/envs/mozdef/cron/update_geolite_db.conf
+COPY --chown=mozdef:mozdef cron/update_geolite_db.sh /opt/mozdef/envs/mozdef/cron/update_geolite_db.sh
 
-COPY mozdef_util /opt/mozdef/envs/mozdef/mozdef_util
-
-RUN chown -R mozdef:mozdef /opt/mozdef/
+COPY --chown=mozdef:mozdef mozdef_util /opt/mozdef/envs/mozdef/mozdef_util
 
 USER mozdef
 RUN \
     virtualenv -p /usr/bin/python3.6 /opt/mozdef/envs/python && \
     source /opt/mozdef/envs/python/bin/activate && \
-    pip install -r /opt/mozdef/envs/mozdef/requirements.txt && \
+    pip install --requirement /opt/mozdef/envs/mozdef/requirements.txt && \
     cd /opt/mozdef/envs/mozdef/mozdef_util && \
-    pip install -e .
+    pip install --editable . && \
+    mkdir /opt/mozdef/envs/mozdef/data
 
-RUN mkdir /opt/mozdef/envs/mozdef/data
 
 WORKDIR /opt/mozdef/envs/mozdef
@@ -62,7 +63,3 @@ VOLUME /opt/mozdef/envs/mozdef/data
 ENV PATH=/opt/mozdef/envs/python/bin:$PATH
 
-USER root
-
-# Remove once https://github.com/jeffbryner/configlib/pull/9 is merged
-# and a new version of configlib is in place
-RUN sed -i 's/from configlib import getConfig/from .configlib import getConfig/g' /opt/mozdef/envs/python/lib/python3.6/site-packages/configlib/__init__.py
@@ -2,16 +2,14 @@ FROM mozdef/mozdef_base

LABEL maintainer="mozdef@mozilla.com"

RUN mkdir -p /opt/mozdef/envs/mozdef/docker/conf
RUN install --owner mozdef --group mozdef --directory /opt/mozdef/envs/mozdef/docker /opt/mozdef/envs/mozdef/docker/conf

COPY cron/mozdefStateDefaultMappingTemplate.json /opt/mozdef/envs/mozdef/cron/mozdefStateDefaultMappingTemplate.json
COPY cron/defaultMappingTemplate.json /opt/mozdef/envs/mozdef/cron/defaultMappingTemplate.json
COPY docker/compose/mozdef_cron/files/backup.conf /opt/mozdef/envs/mozdef/cron/backup.conf
COPY docker/compose/mozdef_bootstrap/files/initial_setup.py /opt/mozdef/envs/mozdef/initial_setup.py
COPY docker/compose/mozdef_bootstrap/files/index_mappings /opt/mozdef/envs/mozdef/index_mappings
COPY docker/compose/mozdef_bootstrap/files/resources /opt/mozdef/envs/mozdef/resources

RUN chown -R mozdef:mozdef /opt/mozdef/envs/mozdef/
COPY --chown=mozdef:mozdef cron/mozdefStateDefaultMappingTemplate.json /opt/mozdef/envs/mozdef/cron/mozdefStateDefaultMappingTemplate.json
COPY --chown=mozdef:mozdef cron/defaultMappingTemplate.json /opt/mozdef/envs/mozdef/cron/defaultMappingTemplate.json
COPY --chown=mozdef:mozdef docker/compose/mozdef_cron/files/backup.conf /opt/mozdef/envs/mozdef/cron/backup.conf
COPY --chown=mozdef:mozdef docker/compose/mozdef_bootstrap/files/initial_setup.py /opt/mozdef/envs/mozdef/initial_setup.py
COPY --chown=mozdef:mozdef docker/compose/mozdef_bootstrap/files/index_mappings /opt/mozdef/envs/mozdef/index_mappings
COPY --chown=mozdef:mozdef docker/compose/mozdef_bootstrap/files/resources /opt/mozdef/envs/mozdef/resources

WORKDIR /opt/mozdef/envs/mozdef
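The pattern in this and the following Dockerfiles replaces a plain `COPY` followed by `RUN chown -R` with `COPY --chown=...`. The distinction matters because every `RUN chown -R` produces an additional image layer that re-stores each file it touches, while `--chown` applies ownership in the copy layer itself. A minimal sketch of the before/after (paths illustrative, not taken from this repo):

```dockerfile
# Before: two layers; the chown layer duplicates every copied byte
COPY conf /opt/app/conf
RUN chown -R app:app /opt/app/conf

# After: one layer, ownership applied at copy time
COPY --chown=app:app conf /opt/app/conf
```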
@@ -125,14 +125,14 @@ if state_index_name not in all_indices:
    client.create_index(state_index_name, index_config=state_index_settings)

# Wait for kibana service to get ready
total_num_tries = 10
total_num_tries = 20
for attempt in range(total_num_tries):
    try:
        if requests.get(kibana_url).ok:
        if requests.get(kibana_url, allow_redirects=True):
            break
    except Exception:
        pass
    print('Unable to connect to Elasticsearch...retrying')
    print('Unable to connect to Kibana ({0})...retrying'.format(kibana_url))
    sleep(5)
else:
    print('Cannot connect to Kibana after ' + str(total_num_tries) + ' tries, exiting script.')
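The readiness check above relies on Python's `for`/`else`: the `else` branch runs only when the loop exhausts its retries without hitting `break`. A self-contained sketch of that pattern, with the HTTP call stubbed out by a caller-supplied probe (the helper name and signature are illustrative, not from MozDef):

```python
import time


def wait_for_service(probe, total_num_tries=20, delay=0.0):
    """Return True once probe() succeeds, False after exhausting retries.

    Mirrors the for/else retry loop above: `else` on a for-loop only
    executes when the loop finishes without `break`.
    """
    for attempt in range(total_num_tries):
        try:
            if probe():
                break  # service is up
        except Exception:
            pass  # service not ready yet; swallow connection errors
        time.sleep(delay)
    else:
        return False  # retries exhausted without a successful probe
    return True


# Example: a probe that raises twice, then succeeds on the third call
calls = {'n': 0}

def flaky_probe():
    calls['n'] += 1
    if calls['n'] < 3:
        raise ConnectionError('not ready')
    return True

print(wait_for_service(flaky_probe))  # prints: True
```

The same shape works for any "poll until healthy" startup dependency, not just Kibana.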
@@ -0,0 +1,13 @@
{
  "visualization": {
    "title": "Cloudtrail Eventname Pie-Graph",
    "visState": "{\"title\":\"Cloudtrail Eventname Pie-Graph\",\"type\":\"pie\",\"params\":{\"type\":\"pie\",\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"isDonut\":true,\"labels\":{\"show\":false,\"values\":true,\"last_level\":true,\"truncate\":100}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"3\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"details.sourceipaddress\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\",\"otherBucket\":false,\"otherBucketLabel\":\"Other\",\"missingBucket\":false,\"missingBucketLabel\":\"Missing\",\"customLabel\":\"Source IP Address\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"details.eventname\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\",\"otherBucket\":false,\"otherBucketLabel\":\"Other\",\"missingBucket\":false,\"missingBucketLabel\":\"Missing\",\"customLabel\":\"AWS Api Call\"}}]}",
    "uiStateJSON": "{}",
    "description": "",
    "version": 1,
    "kibanaSavedObjectMeta": {
      "searchSourceJSON": "{\"index\":\"events-*\",\"query\":{\"query\":\"\",\"language\":\"lucene\"},\"filter\":[]}"
    }
  },
  "type": "visualization"
}
@@ -0,0 +1,13 @@
{
  "visualization": {
    "title": "Cloudtrail Eventname Table",
    "visState": "{\"title\":\"Cloudtrail Eventname Table\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMetricsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"details.eventname\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\",\"otherBucket\":false,\"otherBucketLabel\":\"Other\",\"missingBucket\":false,\"missingBucketLabel\":\"Missing\"}}]}",
    "uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
    "description": "",
    "version": 1,
    "kibanaSavedObjectMeta": {
      "searchSourceJSON": "{\"index\":\"events-*\",\"query\":{\"query\":\"\",\"language\":\"lucene\"},\"filter\":[]}"
    }
  },
  "type": "visualization"
}
@@ -0,0 +1,15 @@
{
  "dashboard": {
    "title": "Cloudtrail Events",
    "hits": 0,
    "description": "",
    "panelsJSON": "[{\"embeddableConfig\":{\"vis\":{\"legendOpen\":false}},\"gridData\":{\"x\":12,\"y\":16,\"w\":12,\"h\":18,\"i\":\"1\"},\"id\":\"cloudtrail_eventname_pie_graph\",\"panelIndex\":\"1\",\"title\":\"Event Names\",\"type\":\"visualization\",\"version\":\"6.8.0\"},{\"embeddableConfig\":{\"vis\":{\"legendOpen\":true}},\"gridData\":{\"x\":12,\"y\":0,\"w\":36,\"h\":16,\"i\":\"2\"},\"id\":\"cloudtrail_events_line_graph\",\"panelIndex\":\"2\",\"type\":\"visualization\",\"version\":\"6.8.0\"},{\"embeddableConfig\":{},\"gridData\":{\"x\":0,\"y\":0,\"w\":12,\"h\":7,\"i\":\"3\"},\"id\":\"cloudtrail_total_event_count\",\"panelIndex\":\"3\",\"title\":\"# Events\",\"type\":\"visualization\",\"version\":\"6.8.0\"},{\"embeddableConfig\":{},\"gridData\":{\"x\":24,\"y\":16,\"w\":24,\"h\":18,\"i\":\"4\"},\"id\":\"cloudtrail_events_map\",\"panelIndex\":\"4\",\"type\":\"visualization\",\"version\":\"6.8.0\"},{\"embeddableConfig\":{},\"gridData\":{\"x\":0,\"y\":7,\"w\":12,\"h\":15,\"i\":\"5\"},\"id\":\"cloudtrail_user_identity_table\",\"panelIndex\":\"5\",\"type\":\"visualization\",\"version\":\"6.8.0\"},{\"embeddableConfig\":{},\"gridData\":{\"x\":0,\"y\":22,\"w\":12,\"h\":12,\"i\":\"8\"},\"id\":\"cloudtrail_eventname_table\",\"panelIndex\":\"8\",\"type\":\"visualization\",\"version\":\"6.8.0\"}]",
    "optionsJSON": "{\"darkTheme\":false,\"hidePanelTitles\":false,\"useMargins\":true}",
    "version": 1,
    "timeRestore": false,
    "kibanaSavedObjectMeta": {
      "searchSourceJSON": "{\"query\":{\"language\":\"lucene\",\"query\":\"\"},\"filter\":[{\"$state\":{\"store\":\"appState\"},\"meta\":{\"alias\":null,\"disabled\":false,\"index\":\"e3d06450-9b8d-11e9-9b0e-b35568cb01e3\",\"key\":\"source\",\"negate\":false,\"params\":{\"query\":\"cloudtrail\",\"type\":\"phrase\"},\"type\":\"phrase\",\"value\":\"cloudtrail\"},\"query\":{\"match\":{\"source\":{\"query\":\"cloudtrail\",\"type\":\"phrase\"}}}}]}"
    }
  },
  "type": "dashboard"
}
@@ -0,0 +1,13 @@
{
  "visualization": {
    "title": "Cloudtrail Events Line Graph",
    "visState": "{\"title\":\"Cloudtrail Events Line Graph\",\"type\":\"line\",\"params\":{\"addLegend\":true,\"addTimeMarker\":true,\"addTooltip\":true,\"categoryAxes\":[{\"id\":\"CategoryAxis-1\",\"labels\":{\"show\":true,\"truncate\":100},\"position\":\"bottom\",\"scale\":{\"type\":\"linear\"},\"show\":true,\"style\":{},\"title\":{},\"type\":\"category\"}],\"grid\":{\"categoryLines\":true,\"style\":{\"color\":\"#eee\"},\"valueAxis\":\"ValueAxis-1\"},\"legendPosition\":\"right\",\"seriesParams\":[{\"data\":{\"id\":\"1\",\"label\":\"Count\"},\"drawLinesBetweenPoints\":true,\"mode\":\"normal\",\"show\":\"true\",\"showCircles\":true,\"type\":\"line\",\"valueAxis\":\"ValueAxis-1\"}],\"times\":[],\"type\":\"line\",\"valueAxes\":[{\"id\":\"ValueAxis-1\",\"labels\":{\"filter\":false,\"rotate\":0,\"show\":true,\"truncate\":100},\"name\":\"LeftAxis-1\",\"position\":\"left\",\"scale\":{\"mode\":\"normal\",\"type\":\"linear\"},\"show\":true,\"style\":{},\"title\":{\"text\":\"Count\"},\"type\":\"value\"}]},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"receivedtimestamp\",\"timeRange\":{\"from\":\"now-24h\",\"to\":\"now\",\"mode\":\"quick\"},\"useNormalizedEsInterval\":true,\"interval\":\"auto\",\"drop_partials\":false,\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{}}},{\"id\":\"3\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"details.awsregion\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\",\"otherBucket\":false,\"otherBucketLabel\":\"Other\",\"missingBucket\":false,\"missingBucketLabel\":\"Missing\",\"customLabel\":\"User\"}}]}",
    "uiStateJSON": "{\"vis\":{\"legendOpen\":false}}",
    "description": "",
    "version": 1,
    "kibanaSavedObjectMeta": {
      "searchSourceJSON": "{\"index\":\"events-*\",\"query\":{\"language\":\"lucene\",\"query\":\"\"},\"filter\":[]}"
    }
  },
  "type": "visualization"
}
@@ -0,0 +1,13 @@
{
  "visualization": {
    "title": "Cloudtrail Events Map",
    "visState": "{\"title\":\"Cloudtrail Events Map\",\"type\":\"tile_map\",\"params\":{\"colorSchema\":\"Yellow to Red\",\"mapType\":\"Shaded Circle Markers\",\"isDesaturated\":true,\"addTooltip\":true,\"heatClusterSize\":1.5,\"legendPosition\":\"bottomright\",\"mapZoom\":2,\"mapCenter\":[0,0],\"wms\":{\"enabled\":false,\"options\":{\"format\":\"image/png\",\"transparent\":true},\"selectedTmsLayer\":{\"origin\":\"elastic_maps_service\",\"id\":\"road_map\",\"minZoom\":0,\"maxZoom\":18,\"attribution\":\"<p>&#169; <a href=\\\"https://www.openstreetmap.org/copyright\\\">OpenStreetMap contributors</a>|<a href=\\\"https://openmaptiles.org\\\">OpenMapTiles</a>|<a href=\\\"https://www.maptiler.com\\\">MapTiler</a>|<a href=\\\"https://www.elastic.co/elastic-maps-service\\\">Elastic Maps Service</a></p> \"}}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"geohash_grid\",\"schema\":\"segment\",\"params\":{\"field\":\"details.sourceipgeopoint\",\"autoPrecision\":true,\"isFilteredByCollar\":true,\"useGeocentroid\":true,\"mapZoom\":2,\"mapCenter\":{\"lon\":0,\"lat\":-0.17578097424708533},\"mapBounds\":{\"bottom_right\":{\"lat\":-83.94227191521858,\"lon\":282.30468750000006},\"top_left\":{\"lat\":83.9050579559856,\"lon\":-282.30468750000006}},\"precision\":2}}]}",
    "uiStateJSON": "{}",
    "description": "",
    "version": 1,
    "kibanaSavedObjectMeta": {
      "searchSourceJSON": "{\"index\":\"events-*\",\"query\":{\"query\":\"\",\"language\":\"lucene\"},\"filter\":[]}"
    }
  },
  "type": "visualization"
}
@@ -0,0 +1,13 @@
{
  "visualization": {
    "title": "Cloudtrail Total Event Count",
    "visState": "{\"title\":\"Cloudtrail Total Event Count\",\"type\":\"metric\",\"params\":{\"addTooltip\":true,\"addLegend\":false,\"type\":\"metric\",\"metric\":{\"percentageMode\":false,\"useRanges\":false,\"colorSchema\":\"Green to Red\",\"metricColorMode\":\"None\",\"colorsRange\":[{\"from\":0,\"to\":10000}],\"labels\":{\"show\":true},\"invertColors\":false,\"style\":{\"bgFill\":\"#000\",\"bgColor\":false,\"labelColor\":false,\"subText\":\"\",\"fontSize\":60}}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{\"customLabel\":\"Total\"}}]}",
    "uiStateJSON": "{}",
    "description": "",
    "version": 1,
    "kibanaSavedObjectMeta": {
      "searchSourceJSON": "{\"index\":\"events-*\",\"query\":{\"query\":\"\",\"language\":\"lucene\"},\"filter\":[]}"
    }
  },
  "type": "visualization"
}
@@ -0,0 +1,13 @@
{
  "visualization": {
    "title": "Cloudtrail User Identity Table",
    "visState": "{\"title\":\"Cloudtrail User Identity Table\",\"type\":\"table\",\"params\":{\"perPage\":10,\"showPartialRows\":false,\"showMetricsAtAllLevels\":false,\"sort\":{\"columnIndex\":null,\"direction\":null},\"showTotal\":false,\"totalFunc\":\"sum\"},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"bucket\",\"params\":{\"field\":\"details.useridentity.arn\",\"size\":2,\"order\":\"desc\",\"orderBy\":\"1\",\"otherBucket\":false,\"otherBucketLabel\":\"Other\",\"missingBucket\":true,\"missingBucketLabel\":\"Missing\"}}]}",
    "uiStateJSON": "{\"vis\":{\"params\":{\"sort\":{\"columnIndex\":null,\"direction\":null}}}}",
    "description": "",
    "version": 1,
    "kibanaSavedObjectMeta": {
      "searchSourceJSON": "{\"index\":\"events-*\",\"query\":{\"language\":\"lucene\",\"query\":\"\"},\"filter\":[]}"
    }
  },
  "type": "visualization"
}
@@ -4,10 +4,10 @@ LABEL maintainer="mozdef@mozilla.com"

ARG BOT_TYPE

COPY bot/$BOT_TYPE /opt/mozdef/envs/mozdef/bot/$BOT_TYPE
COPY docker/compose/mozdef_bot/files/mozdefbot.conf /opt/mozdef/envs/mozdef/bot/$BOT_TYPE/mozdefbot.conf
RUN install --owner mozdef --group mozdef --directory /opt/mozdef/envs/mozdef/bot

RUN chown -R mozdef:mozdef /opt/mozdef/envs/mozdef/bot/$BOT_TYPE
COPY --chown=mozdef:mozdef bot/$BOT_TYPE /opt/mozdef/envs/mozdef/bot/$BOT_TYPE
COPY --chown=mozdef:mozdef docker/compose/mozdef_bot/files/mozdefbot.conf /opt/mozdef/envs/mozdef/bot/$BOT_TYPE/mozdefbot.conf

WORKDIR /opt/mozdef/envs/mozdef/bot/$BOT_TYPE
@@ -3,5 +3,5 @@ ADD docker/compose/mozdef_cognito_proxy/files/default.conf /etc/nginx/conf.d/def
ADD docker/compose/mozdef_cognito_proxy/files/nginx.conf /usr/local/openresty/nginx/conf/nginx.conf
RUN touch /etc/nginx/htpasswd
RUN /usr/local/openresty/luajit/bin/luarocks install lua-resty-jwt
RUN yum install -y httpd-tools && yum clean all
RUN yum install -y httpd-tools && yum clean all && rm -rf /var/cache/yum
CMD bash -c "/usr/bin/htpasswd -bc /etc/nginx/htpasswd mozdef $basic_auth_secret 2> /dev/null; /usr/bin/openresty -g 'daemon off;'"
@@ -5,20 +5,19 @@ LABEL maintainer="mozdef@mozilla.com"
RUN \
    yum makecache fast && \
    yum install -y cronie && \
    yum clean all
    yum clean all && \
    rm -rf /var/cache/yum

COPY cron /opt/mozdef/envs/mozdef/cron
COPY --chown=mozdef:mozdef cron /opt/mozdef/envs/mozdef/cron
COPY docker/compose/mozdef_cron/files/cron_entries.txt /cron_entries.txt

# Copy config files for crons
COPY docker/compose/mozdef_cron/files/backup.conf /opt/mozdef/envs/mozdef/cron/backup.conf
COPY docker/compose/mozdef_cron/files/collectAttackers.conf /opt/mozdef/envs/mozdef/cron/collectAttackers.conf
COPY docker/compose/mozdef_cron/files/eventStats.conf /opt/mozdef/envs/mozdef/cron/eventStats.conf
COPY docker/compose/mozdef_cron/files/healthAndStatus.conf /opt/mozdef/envs/mozdef/cron/healthAndStatus.conf
COPY docker/compose/mozdef_cron/files/healthToMongo.conf /opt/mozdef/envs/mozdef/cron/healthToMongo.conf
COPY docker/compose/mozdef_cron/files/syncAlertsToMongo.conf /opt/mozdef/envs/mozdef/cron/syncAlertsToMongo.conf

RUN chown -R mozdef:mozdef /opt/mozdef/envs/mozdef/cron
COPY --chown=mozdef:mozdef docker/compose/mozdef_cron/files/backup.conf /opt/mozdef/envs/mozdef/cron/backup.conf
COPY --chown=mozdef:mozdef docker/compose/mozdef_cron/files/collectAttackers.conf /opt/mozdef/envs/mozdef/cron/collectAttackers.conf
COPY --chown=mozdef:mozdef docker/compose/mozdef_cron/files/eventStats.conf /opt/mozdef/envs/mozdef/cron/eventStats.conf
COPY --chown=mozdef:mozdef docker/compose/mozdef_cron/files/healthAndStatus.conf /opt/mozdef/envs/mozdef/cron/healthAndStatus.conf
COPY --chown=mozdef:mozdef docker/compose/mozdef_cron/files/healthToMongo.conf /opt/mozdef/envs/mozdef/cron/healthToMongo.conf
COPY --chown=mozdef:mozdef docker/compose/mozdef_cron/files/syncAlertsToMongo.conf /opt/mozdef/envs/mozdef/cron/syncAlertsToMongo.conf

# https://stackoverflow.com/a/48651061/168874
COPY docker/compose/mozdef_cron/files/launch_cron /launch_cron
@@ -2,10 +2,8 @@ FROM mozdef/mozdef_base

LABEL maintainer="mozdef@mozilla.com"

COPY loginput /opt/mozdef/envs/mozdef/loginput
COPY docker/compose/mozdef_loginput/files/index.conf /opt/mozdef/envs/mozdef/loginput/index.conf

RUN chown -R mozdef:mozdef /opt/mozdef/envs/mozdef/loginput
COPY --chown=mozdef:mozdef loginput /opt/mozdef/envs/mozdef/loginput
COPY --chown=mozdef:mozdef docker/compose/mozdef_loginput/files/index.conf /opt/mozdef/envs/mozdef/loginput/index.conf

EXPOSE 8080
@@ -11,46 +11,52 @@ ENV PORT=3000

ARG METEOR_BUILD='YES'

RUN \
    useradd -ms /bin/bash -d /opt/mozdef -m mozdef && \
    mkdir -p /opt/mozdef/envs/mozdef && \
    cd /opt/mozdef && \
    chown -R mozdef:mozdef /opt/mozdef && \
    curl -sL https://rpm.nodesource.com/setup_8.x | bash - && \
    yum makecache fast && \
    yum install -y \
        wget \
        make \
        glibc-devel \
        gcc \
        gcc-c++ \
        libstdc++ \
        libffi-devel \
        zlib-devel \
        nodejs && \
    yum clean all && \
    mkdir /opt/mozdef/meteor && \
    curl -sL -o /opt/mozdef/meteor.tar.gz https://static-meteor.netdna-ssl.com/packages-bootstrap/$METEOR_VERSION/meteor-bootstrap-os.linux.x86_64.tar.gz && \
    tar -xzf /opt/mozdef/meteor.tar.gz -C /opt/mozdef/meteor && \
    mv /opt/mozdef/meteor/.meteor /opt/mozdef && \
    rm -r /opt/mozdef/meteor && \
    cp /opt/mozdef/.meteor/packages/meteor-tool/*/mt-os.linux.x86_64/scripts/admin/launch-meteor /usr/bin/meteor
# Ignore warnings like 'No such file or directory for /usr/share/info/*.info.gz'
# https://bugzilla.redhat.com/show_bug.cgi?id=516757

COPY meteor /opt/mozdef/envs/mozdef/meteor
RUN chown -R mozdef:mozdef /opt/mozdef/envs/mozdef/meteor
RUN \
    useradd --create-home --shell /bin/bash --home-dir /opt/mozdef mozdef && \
    cd /opt/mozdef && \
    gpg="gpg --no-default-keyring --secret-keyring /dev/null --keyring /dev/null --no-option --keyid-format 0xlong" && \
    rpmkeys --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 && \
    rpm -qi gpg-pubkey-f4a80eb5 | $gpg | grep 0x24C6A8A7F4A80EB5 && \
    yum makecache fast && \
    yum install -y which && \
    curl --silent --location https://rpm.nodesource.com/setup_8.x | bash - && \
    rpmkeys --import /etc/pki/rpm-gpg/NODESOURCE-GPG-SIGNING-KEY-EL && \
    rpm -qi gpg-pubkey-34fa74dd | $gpg | grep 0x5DDBE8D434FA74DD && \
    yum install -y \
        make \
        glibc-devel \
        gcc \
        gcc-c++ \
        libstdc++ \
        libffi-devel \
        zlib-devel \
        nodejs && \
    yum clean all && \
    rm -rf /var/cache/yum && \
    echo "Downloading meteor" && \
    curl --silent --location https://static-meteor.netdna-ssl.com/packages-bootstrap/$METEOR_VERSION/meteor-bootstrap-os.linux.x86_64.tar.gz \
        | tar --extract --gzip --directory /opt/mozdef .meteor && \
    ln --symbolic /opt/mozdef/.meteor/packages/meteor-tool/*/mt-os.linux.x86_64/scripts/admin/launch-meteor /usr/bin/meteor && \
    install --owner mozdef --group mozdef --directory /opt/mozdef/envs /opt/mozdef/envs/mozdef

COPY --chown=mozdef:mozdef meteor /opt/mozdef/envs/mozdef/meteor

USER mozdef
RUN mkdir -p /opt/mozdef/envs/meteor/mozdef

# Build the meteor runtime if asked; if set to NO, only create the dir above to mount for live development
RUN if [ "${METEOR_BUILD}" = "YES" ]; then \
    cd /opt/mozdef/envs/mozdef/meteor && \
    meteor npm install && \
    echo "Starting meteor build" && \
    time meteor build --server localhost:3002 --directory /opt/mozdef/envs/meteor/mozdef && \
    cp -r /opt/mozdef/envs/mozdef/meteor/node_modules /opt/mozdef/envs/meteor/mozdef/node_modules && \
    cd /opt/mozdef/envs/meteor/mozdef/bundle/programs/server && \
    npm install ; \
RUN \
    if [ "${METEOR_BUILD}" = "YES" ]; then \
        mkdir -p /opt/mozdef/envs/meteor/mozdef && \
        cd /opt/mozdef/envs/mozdef/meteor && \
        meteor npm install && \
        echo "Starting meteor build" && \
        time meteor build --server localhost:3002 --directory /opt/mozdef/envs/meteor/mozdef && \
        ln --symbolic /opt/mozdef/envs/meteor/mozdef/node_modules /opt/mozdef/envs/mozdef/meteor/node_modules && \
        cd /opt/mozdef/envs/meteor/mozdef/bundle/programs/server && \
        npm install ; \
    fi

WORKDIR /opt/mozdef/envs/meteor/mozdef
@@ -2,10 +2,8 @@ FROM mozdef/mozdef_base

LABEL maintainer="mozdef@mozilla.com"

COPY mq /opt/mozdef/envs/mozdef/mq
COPY docker/compose/mozdef_mq_worker/files/*.conf /opt/mozdef/envs/mozdef/mq/

RUN chown -R mozdef:mozdef /opt/mozdef/envs/mozdef/mq
COPY --chown=mozdef:mozdef mq /opt/mozdef/envs/mozdef/mq
COPY --chown=mozdef:mozdef docker/compose/mozdef_mq_worker/files/*.conf /opt/mozdef/envs/mozdef/mq/

WORKDIR /opt/mozdef/envs/mozdef/mq
@@ -2,10 +2,8 @@ FROM mozdef/mozdef_base

LABEL maintainer="mozdef@mozilla.com"

COPY rest /opt/mozdef/envs/mozdef/rest
COPY docker/compose/mozdef_rest/files/index.conf /opt/mozdef/envs/mozdef/rest/index.conf

RUN chown -R mozdef:mozdef /opt/mozdef/envs/mozdef/rest
COPY --chown=mozdef:mozdef rest /opt/mozdef/envs/mozdef/rest
COPY --chown=mozdef:mozdef docker/compose/mozdef_rest/files/index.conf /opt/mozdef/envs/mozdef/rest/index.conf

EXPOSE 8081
@@ -2,11 +2,9 @@ FROM mozdef/mozdef_base

LABEL maintainer="mozdef@mozilla.com"

RUN mkdir -p /opt/mozdef/envs/mozdef/examples
COPY ./examples /opt/mozdef/envs/mozdef/examples
COPY --chown=mozdef:mozdef ./examples /opt/mozdef/envs/mozdef/examples

COPY docker/compose/mozdef_sampledata/files/sampleData2MozDef.conf /opt/mozdef/envs/mozdef/examples/demo/sampleData2MozDef.conf
RUN chown -R mozdef:mozdef /opt/mozdef/envs/mozdef/examples
COPY --chown=mozdef:mozdef docker/compose/mozdef_sampledata/files/sampleData2MozDef.conf /opt/mozdef/envs/mozdef/examples/demo/sampleData2MozDef.conf
RUN chmod u+rwx /opt/mozdef/envs/mozdef/examples/demo/sampleevents.sh

WORKDIR /opt/mozdef/envs/mozdef/examples/demo
@@ -14,7 +14,8 @@ RUN \
    rpm -qi gpg-pubkey-352c64e5 | $gpg | grep 0x6A2FAEA2352C64E5 && \
    yum install -y epel-release && \
    yum install -y syslog-ng.x86_64 syslog-ng-json && \
    yum clean all
    yum clean all && \
    rm -rf /var/cache/yum

COPY docker/compose/mozdef_syslog/files/syslog-ng.conf /etc/syslog-ng/syslog-ng.conf
@@ -8,10 +8,13 @@ COPY docker/compose/rabbitmq/files/rabbitmq-server.repo /etc/yum.repos.d/rabbitm
COPY docker/compose/rabbitmq/files/erlang.repo /etc/yum.repos.d/erlang.repo

RUN \
    yum -q makecache -y fast && \
    yum install -y epel-release && \
    gpg="gpg --no-default-keyring --secret-keyring /dev/null --keyring /dev/null --no-option --keyid-format 0xlong" && \
    rpmkeys --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 && \
    rpm -qi gpg-pubkey-f4a80eb5 | $gpg | grep 0x24C6A8A7F4A80EB5 && \
    yum --quiet makecache -y fast && \
    yum install -y rabbitmq-server-$RABBITMQ_VERSION && \
    yum clean all
    yum clean all && \
    rm -rf /var/cache/yum

COPY docker/compose/rabbitmq/files/rabbitmq.config /etc/rabbitmq/
COPY docker/compose/rabbitmq/files/enabled_plugins /etc/rabbitmq/
@@ -53,7 +53,7 @@ At this point, begin development and periodically run your unit-tests locally wi
Background on concepts
----------------------

- Logs - These are individual log entries that are typically emitted from systems, like an Apache log
- Logs - These are individual log entries that are typically emitted from systems, like an Apache log.
- Events - The entry point into MozDef: a log parsed into JSON by some log shipper (syslog-ng, nxlog) or a native JSON data source like GuardDuty, CloudTrail, and most SaaS systems.
- Alerts - These are either a 1:1 mapping of events to alerts (this thing happens, so alert) or an M:1 mapping (N of these things happen, so alert).
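To make the log versus event distinction concrete, here is a minimal sketch of a raw syslog line rendered as a JSON event. The field names (`utctimestamp`, `summary`, `source`, `details`) follow common MozDef event conventions, but every value below is made up for illustration:

```python
import json

# A raw log line as a shipper like syslog-ng might see it (illustrative)
raw_log = "Jul 25 10:00:00 host1 sshd[123]: Failed password for root from 203.0.113.9"

# The same entry parsed into a JSON event; values are invented for the example
event = {
    "utctimestamp": "2019-07-25T10:00:00+00:00",
    "severity": "INFO",
    "summary": "Failed password for root from 203.0.113.9",
    "source": "syslog",
    "details": {
        "program": "sshd",
        "sourceipaddress": "203.0.113.9",
    },
}

# Events travel through MozDef as JSON documents
print(json.dumps(event, sort_keys=True))
```

An alert would then be raised either from a single such event (1:1) or from an aggregation over many of them (M:1).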
@@ -27,7 +27,8 @@ insert_simple.js

Usage: `node ./insert_simple.js <processes> <totalInserts> <host1> [host2] [host3] [...]`

* `processes`: Number of processes to spawn
* `totalInserts`: Number of inserts to perform, please note after a certain number node will slow down. You want to have a lower number if you are in this case.
* `totalInserts`: Number of inserts to perform

  * Note that Node.js slows down after a certain number of inserts; use a lower number if you run into this.

* `host1`, `host2`, `host3`, etc: Elasticsearch hosts to which you want to send the HTTP requests
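These benchmarking scripts spawn `processes` workers to drive the insert load. Assuming the total is divided as evenly as possible across workers (an assumption about the scripts' behavior, not something their usage text states), the split can be sketched as:

```python
def split_work(total_inserts, processes):
    """Divide a total insert count as evenly as possible across workers.

    The first `total_inserts % processes` workers each get one extra insert,
    so the per-worker counts never differ by more than one.
    """
    base, extra = divmod(total_inserts, processes)
    return [base + (1 if i < extra else 0) for i in range(processes)]


print(split_work(1000, 3))  # prints: [334, 333, 333]
```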
insert_bulk.js

@@ -39,7 +40,8 @@ Usage: `node ./insert_bulk.js <processes> <insertsPerQuery> <totalInserts> <host

* `processes`: Number of processes to spawn
* `insertsPerQuery`: Number of logs per request
* `totalInserts`: Number of inserts to perform, please note after a certain number node will slow down. You want to have a lower number if you are in this case.
* `totalInserts`: Number of inserts to perform

  * Note that Node.js slows down after a certain number of inserts; use a lower number if you run into this.

* `host1`, `host2`, `host3`, etc: Elasticsearch hosts to which you want to send the HTTP requests

search_all_fulltext.js
@@ -50,7 +52,8 @@ search_all_fulltext.js

Usage: `node ./search_all_fulltext.js <processes> <totalSearches> <host1> [host2] [host3] [...]`

* `processes`: Number of processes to spawn
* `totalSearches`: Number of search requests to perform, please note after a certain number node will slow down. You want to have a lower number if you are in this case.
* `totalSearches`: Number of search requests to perform

  * Note that Node.js slows down after a certain number of requests; use a lower number if you run into this.

* `host1`, `host2`, `host3`, etc: Elasticsearch hosts to which you want to send the HTTP requests
@@ -19,34 +19,34 @@ The Test Sequence
_________________

* Travis CI creates webhooks when first setup which allow commits to the MozDef
  GitHub repo to trigger Travis
  GitHub repo to trigger Travis.
* When a commit is made to MozDef, Travis CI follows the instructions in the
  `.travis.yml <https://github.com/mozilla/MozDef/blob/master/.travis.yml>`_
  file
* `.travis.yml` installs `docker-compose` in the `before_install` phase
* in the `install` phase, Travis runs the
  file.
* `.travis.yml` installs `docker-compose` in the `before_install` phase.
* In the `install` phase, Travis runs the
  `build-tests <https://github.com/mozilla/MozDef/blob/cfeafb77f9d4d4d8df02117a0ffca0ec9379a7d5/Makefile#L88-L89>`_
  make target which calls `docker-compose build` on the
  `docker/compose/docker-compose-tests.yml`_ file which builds a few docker
  containers to use for testing
* in the `script` phase, Travis runs the
  containers to use for testing.
* In the `script` phase, Travis runs the
  `tests <https://github.com/mozilla/MozDef/blob/cfeafb77f9d4d4d8df02117a0ffca0ec9379a7d5/Makefile#L52>`_
  make target which

  * calls the `build-tests` make target which again runs `docker-compose build`
    on the `docker/compose/docker-compose-tests.yml`_ file
    on the `docker/compose/docker-compose-tests.yml`_ file.
  * calls the
    `run-tests <https://github.com/mozilla/MozDef/blob/cfeafb77f9d4d4d8df02117a0ffca0ec9379a7d5/Makefile#L67-L69>`_
    make target which
    make target which:

    * calls the
      `run-tests-resources <https://github.com/mozilla/MozDef/blob/cfeafb77f9d4d4d8df02117a0ffca0ec9379a7d5/Makefile#L60-L61>`_
      make target which starts the docker
      containers listed in `docker/compose/docker-compose-tests.yml`_
      containers listed in `docker/compose/docker-compose-tests.yml`_.
    * runs `flake8` with the
      `.flake8 <https://github.com/mozilla/MozDef/blob/master/.flake8>`_
      config file to check code style
    * runs `py.test tests` which runs all the test cases
      config file to check code style.
    * runs `py.test tests` which runs all the test cases.

AWS CodeBuild
-------------
@ -54,45 +54,81 @@ AWS CodeBuild
|
|||
Enabling GitHub AWS CodeBuild Integration
|
||||
_________________________________________
|
||||
|
||||
* Request that a github.com/mozilla GitHub Organization owner temporarily
|
||||
`approve / whitelist
|
||||
<https://help.github.com/en/articles/approving-oauth-apps-for-your-organization>`_
|
||||
the `AWS CodeBuild integration <https://bugzilla.mozilla.org/show_bug.cgi?id=1506740>`_
|
||||
in the github.com/mozilla GitHub Organization
|
||||
* Manually configure the GitHub integration in AWS CodeBuild which will create
|
||||
the GitHub webhooks needed using the dedicated, AWS account specific, GitHub
|
||||
service user. A service user is needed as AWS CodeBuild can only integrate
|
||||
with GitHub from one AWS account in one region with a single GitHub user.
|
||||
Technically we could use different users for each region in a single AWS
|
||||
account, but for simplicity we're limiting to only one GitHub user per AWS
|
||||
account (instead of one GitHub user per AWS account per region)
|
||||
Onetime Manual Step
|
||||
*******************
|
||||
|
||||
* For the `infosec-prod` AWS account use the `infosec-prod-371522382791-codebuild`
|
||||
GitHub user
|
||||
* For the `infosec-dev` AWS account use the `infosec-dev-656532927350-codebuild`
|
||||
GitHub user
|
||||
The steps to establish a GitHub CodeBuild integration unfortunately
|
||||
require a onetime manual step be done before using CloudFormation to
|
||||
configure the integration. This onetime manual step **need only happen a
|
||||
single time for a given AWS Account + Region**. It need **not be
performed with each new CodeBuild project or each new GitHub repo**

* Request that a GitHub Organization owner re-deny the integration for
  github.com/mozilla

1. Manually enable the GitHub integration in AWS CodeBuild using the
   dedicated, AWS account specific, GitHub service user.

   1. A service user is needed as AWS CodeBuild can only integrate with
      GitHub from one AWS account in one region with a single GitHub
      user. Technically you could use different users for each region in
      a single AWS account, but for simplicity limit yourself to only
      one GitHub user per AWS account (instead of one GitHub user per
      AWS account per region)
   2. To do the one time step of integrating the entire AWS account in
      that region with the GitHub service user

      1. Browse to `CodeBuild`_\ in AWS and click Create Project
      2. Navigate down to ``Source`` and set ``Source Provider`` to
         ``GitHub``
      3. For ``Repository`` select
         ``Connect with a GitHub personal access token``
      4. Enter the personal access token for the GitHub service user. If
         you haven't created one, do so and grant it ``repo`` and
         ``admin:repo_hook``
      5. Click ``Save Token``
      6. Abort the project setup process by clicking the
         ``Build Projects`` breadcrumb at the top. This “Save Token”
         step was the only thing you needed to do in that process

Grant the GitHub service user access to the GitHub repository
*************************************************************

1. As an admin of the GitHub repository, go to that repository's
   settings, select Collaborators and Teams, and add the GitHub
   service user to the repository
2. Set their access level to ``Admin``
3. Copy the invite link, log in as the service user and accept the
   invitation

Deploy CloudFormation stack creating CodeBuild project
******************************************************

Deploy the ``mozdef-cicd-codebuild.yml`` CloudFormation template
to create the CodeBuild project and IAM Role

.. _CodeBuild: https://us-west-2.console.aws.amazon.com/codesuite/codebuild/

The Build Sequence
__________________

* A branch is merged into `master` in the GitHub repo or a version git tag is
  applied to a commit.
* GitHub emits a webhook event to AWS CodeBuild indicating this.
* AWS CodeBuild considers the Filter Groups configured to decide if the tag
  or branch warrants triggering a build. These Filter Groups are defined in
  the ``mozdef-cicd-codebuild.yml`` CloudFormation template. Assuming the tag
  or branch are acceptable, CodeBuild continues.
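The Filter Group decision above can be pictured as a small predicate on the pushed git ref. This is an illustrative sketch only; the real patterns live in the ``mozdef-cicd-codebuild.yml`` CloudFormation template and may differ:

```python
import re

# Hypothetical stand-in for a CodeBuild Filter Group HEAD_REF pattern:
# build on pushes to master or on version tags such as v1.2.3.
VERSION_TAG = re.compile(r"^refs/tags/v\d+\.\d+\.\d+$")

def should_trigger_build(ref):
    """Return True when the pushed ref warrants a build."""
    return ref == "refs/heads/master" or bool(VERSION_TAG.match(ref))
```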
* AWS CodeBuild reads the
  `buildspec.yml <https://github.com/mozilla/MozDef/blob/master/cloudy_mozdef/buildspec.yml>`_
  file to know what to do.
* The `install` phase of the `buildspec.yml` fetches
  `packer <https://www.packer.io/>`_ and unzips it.

  * `packer` is a tool that spawns an ec2 instance, provisions it, and renders
    an AWS Machine Image (AMI) from it.

* The `build` phase of the `buildspec.yml` runs the
  `cloudy_mozdef/ci/deploy <https://github.com/mozilla/MozDef/blob/master/cloudy_mozdef/ci/deploy>`_
  script in the AWS CodeBuild Ubuntu 14.04 environment.
* The `deploy` script calls the
  `build-from-cwd <https://github.com/mozilla/MozDef/blob/cfeafb77f9d4d4d8df02117a0ffca0ec9379a7d5/Makefile#L78-L79>`_
  target of the `Makefile` which calls `docker-compose build` on the
@ -117,16 +153,16 @@ __________________

* Uploads the local image that was just built by AWS CodeBuild to DockerHub.
  If the branch being built is `master` then the image is uploaded both with
  a tag of `master` as well as with a tag of `latest`.
* If the branch being built is from a version tag (e.g. `v1.2.3`) then the
  image is uploaded with only that version tag applied.
* The `deploy` script next calls the
  `packer-build-github <https://github.com/mozilla/MozDef/blob/cfeafb77f9d4d4d8df02117a0ffca0ec9379a7d5/cloudy_mozdef/Makefile#L34-L36>`_
  make target in the
  `cloudy_mozdef/Makefile <https://github.com/mozilla/MozDef/blob/master/cloudy_mozdef/Makefile>`_
  which calls the
  `ci/pack_and_copy <https://github.com/mozilla/MozDef/blob/master/cloudy_mozdef/ci/pack_and_copy>`_
  script which does the following steps:

* Calls packer which launches an ec2 instance, executing a bunch of steps
  and producing an AMI
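The DockerHub tagging rule described above (`master` is pushed as both `master` and `latest`; a version tag is pushed as only itself) amounts to this small function; the real logic lives in the deploy script and Makefile targets, so this is just an illustration:

```python
def dockerhub_tags(ref_name):
    """Return the image tags to push for a branch or version tag.

    Sketch of the rule described above, not the actual build code.
    """
    if ref_name == "master":
        return ["master", "latest"]
    # A version tag like v1.2.3 is pushed with only that tag applied.
    return [ref_name]
```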
@ -143,19 +179,19 @@ __________________

* Within this ec2 instance, packer `clones the MozDef GitHub repo and checks
  out the branch that triggered this build
  <https://github.com/mozilla/MozDef/blob/c7a166f2e29dde8e5d71853a279fb0c47a48e1b2/cloudy_mozdef/packer/packer.json#L58-L60>`_.
* Packer replaces all instances of the word `latest` in the
  `docker-compose-cloudy-mozdef.yml <https://github.com/mozilla/MozDef/blob/master/docker/compose/docker-compose-cloudy-mozdef.yml>`_
  file with either the branch `master` or the version tag (e.g. `v1.2.3`).
* Packer runs `docker-compose pull` on the
  `docker-compose-cloudy-mozdef.yml <https://github.com/mozilla/MozDef/blob/master/docker/compose/docker-compose-cloudy-mozdef.yml>`_
  file to pull down both the docker images that were just built by AWS
  CodeBuild and uploaded to Dockerhub as well as other non-MozDef docker
  images.

* After packer completes executing the steps laid out in `packer.json` inside
  the ec2 instance, it generates an AMI from that instance and continues with
  the copying, tagging and sharing steps described above.
* Now back in the AWS CodeBuild environment, the `deploy` script continues by
  calling the
  `publish-versioned-templates <https://github.com/mozilla/MozDef/blob/cfeafb77f9d4d4d8df02117a0ffca0ec9379a7d5/cloudy_mozdef/Makefile#L85-L87>`_
@ -169,7 +205,7 @@ __________________

  CloudFormation template so that the template knows the AMI IDs of that
  specific branch of code.
* uploads the CloudFormation templates to S3 in a directory either called
  `master` or the tag version that was built (e.g. `v1.2.3`).

.. _docker/compose/docker-compose-tests.yml: https://github.com/mozilla/MozDef/blob/master/docker/compose/docker-compose-tests.yml
.. _tag-images: https://github.com/mozilla/MozDef/blob/cfeafb77f9d4d4d8df02117a0ffca0ec9379a7d5/Makefile#L109-L110

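The `latest`-to-version-tag substitution packer performs on ``docker-compose-cloudy-mozdef.yml`` is, in essence, a text replacement; the real build does this with shell tooling inside the ec2 instance, so this is only a sketch of the idea:

```python
def pin_image_tags(compose_text, tag):
    """Replace every occurrence of 'latest' with the given tag,
    mimicking the substitution described in the build sequence."""
    return compose_text.replace("latest", tag)

# The image name below is illustrative.
pinned = pin_image_tags("image: mozdef/mozdef_bootstrap:latest", "v1.2.3")
```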
@ -7,7 +7,7 @@ Cloud based MozDef is an opinionated deployment of the MozDef services created i

ingest CloudTrail, GuardDuty, and provide security services.

.. image:: images/cloudformation-launch-stack.png
   :target: https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=mozdef-for-aws&templateURL=https://s3-us-west-2.amazonaws.com/public.us-west-2.infosec.mozilla.org/mozdef/cf/v3.1.0/mozdef-parent.yml


Feedback
@ -32,18 +32,19 @@ MozDef requires the following:

- An OIDC Provider with ClientID, ClientSecret, and Discovery URL

  - Mozilla uses Auth0 but you can use any OIDC provider you like: Shibboleth,
    KeyCloak, AWS Cognito, Okta, Ping (etc.).
- You will need to configure the redirect URI of ``/redirect_uri`` as allowed in
  your OIDC provider configuration.
- An ACM Certificate in the deployment region for your DNS name
- A VPC with three public subnets available

  - It is advised that this VPC be dedicated to MozDef or used solely for security automation.
  - The three public subnets must all be in different `availability zones <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#using-regions-availability-zones-describe>`_
    and have a large enough number of IP addresses to accommodate the infrastructure.
  - The VPC must have an `internet gateway <https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html>`_
    enabled on it so that MozDef can reach the internet.
- An SQS queue receiving GuardDuty events

  - At the time of writing this is not required but may be required in future.

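A minimal pre-flight check for the VPC requirement above (three public subnets, each in a distinct availability zone) could look like the following; the subnet records are hypothetical, in the shape returned by ec2 `describe-subnets`:

```python
# Hypothetical subnet records, shaped like ec2 describe-subnets output.
subnets = [
    {"SubnetId": "subnet-aaa", "AvailabilityZone": "us-west-2a"},
    {"SubnetId": "subnet-bbb", "AvailabilityZone": "us-west-2b"},
    {"SubnetId": "subnet-ccc", "AvailabilityZone": "us-west-2c"},
]

def spans_three_zones(subnets):
    """True when there are three subnets in three different AZs."""
    zones = {s["AvailabilityZone"] for s in subnets}
    return len(subnets) == 3 and len(zones) == 3

ok = spans_three_zones(subnets)
```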
Supported Regions
@ -113,6 +113,13 @@ Then::

    PYCURL_SSL_LIBRARY=nss pip install -r requirements.txt


If you're using Mac OS X::

    export PYCURL_SSL_LIBRARY=openssl
    export LDFLAGS=-L/usr/local/opt/openssl/lib;export CPPFLAGS=-I/usr/local/opt/openssl/include
    pip install -r requirements.txt


Copy the following into a file called .bash_profile for the mozdef user within /opt/mozdef::

    [mozdef@server ~]$ vim /opt/mozdef/.bash_profile
@ -20,7 +20,7 @@ Goals

High level
**********

* Provide a platform for use by defenders to rapidly discover and respond to security incidents
* Automate interfaces to other systems like firewalls, cloud protections and anything that has an API
* Provide metrics for security events and incidents
* Facilitate real-time collaboration amongst incident handlers
@ -31,25 +31,25 @@ Technical

*********

* Offer micro services that make up an Open Source Security Information and Event Management (SIEM)
* Scalable, should be able to handle thousands of events per second, provide fast searching, alerting, correlation and handle interactions between teams of incident handlers

MozDef aims to provide traditional SIEM functionality including:

* Accepting events/logs from a variety of systems.
* Storing events/logs.
* Facilitating searches.
* Facilitating alerting.
* Facilitating log management (archiving, restoration).

It is non-traditional in that it:

* Accepts only JSON input.
* Provides you open access to your data.
* Integrates with a variety of log shippers including logstash, beaver, nxlog, syslog-ng and any shipper that can send JSON to either rabbit-mq or an HTTP(s) endpoint.
* Provides easy integration to Cloud-based data sources such as CloudTrail or GuardDuty.
* Provides easy python plugins to manipulate your data in transit.
* Provides extensive plug-in opportunities to customize your event enrichment stream, your alert workflow, etc.
* Provides realtime access to teams of incident responders to allow each other to see their work simultaneously.


Architecture
@ -60,7 +60,7 @@ MozDef is based on open source technologies including:

* RabbitMQ (message queue and amqp(s)-based log input)
* uWSGI (supervisory control of python-based workers)
* bottle.py (simple python interface for web request handling)
* Elasticsearch (scalable indexing and searching of JSON documents)
* Meteor (responsive framework for Node.js enabling real-time data sharing)
* MongoDB (scalable data store, tightly integrated to Meteor)
* VERIS from verizon (open source taxonomy of security incident categorizations)
@ -74,11 +74,11 @@ Frontend processing

Frontend processing for MozDef consists of receiving an event/log (in JSON) over HTTP(S), AMQP(S), or SQS,
doing data transformation including normalization, adding metadata, etc. and pushing
the data to Elasticsearch.

Internally MozDef uses RabbitMQ to queue events that are still to be processed.
The diagram below shows the interactions between the python scripts (controlled by uWSGI),
the RabbitMQ exchanges and Elasticsearch indices.

.. image:: images/frontend_processing.png
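The normalization step above can be pictured as a small transform on the incoming JSON event; the field names here are illustrative rather than MozDef's exact schema:

```python
import json
from datetime import datetime, timezone

def normalize(raw_json):
    """Parse an incoming JSON event and stamp receipt metadata.

    Field names are illustrative, not MozDef's exact schema.
    """
    event = json.loads(raw_json)
    event.setdefault(
        "receivedtimestamp",
        datetime.now(timezone.utc).isoformat(),
    )
    return event

event = normalize('{"summary": "failed login", "severity": "INFO"}')
```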
@ -95,7 +95,7 @@ Initial Release:

* Facilitate replacing base SIEM functionality including log input, event management, search, alerts, basic correlations
* Enhance the incident workflow UI to enable realtime collaboration
* Enable basic plug-ins to the event input stream for meta data, additional parsing, categorization and basic machine learning
* Support as many common event/log shippers as possible with repeatable recipes
* Base integration into Mozilla's defense mechanisms for automation
* 3D visualizations of threat actors
* Fine tuning of interactions between meteor, mongo, dc.js
@ -106,7 +106,7 @@ Recently implemented:

* Docker containers for each service
* Updates to support recent (breaking) versions of Elasticsearch

Future (join us!):

* Correlation through machine learning, AI
* Enhanced search for alerts, events, attackers within the MozDef UI
@ -131,11 +131,11 @@ Background

Mozilla used CEF as a logging standard for compatibility with Arcsight and for standardization across systems. While CEF is an admirable standard, MozDef prefers JSON logging for the following reasons:

* Every development language can create a JSON structure.
* JSON is easily parsed by computers/programs which are the primary consumer of logs.
* CEF is primarily used by Arcsight and rarely seen outside that platform and doesn't offer the extensibility of JSON.
* A wide variety of log shippers (heka, logstash, fluentd, nxlog, beaver) are readily available to meet almost any need to transport logs as JSON.
* JSON is already the standard for cloud platforms like amazon's cloudtrail logging.
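Because every language can build a JSON structure, emitting a MozDef-consumable log line can be as simple as the following sketch (the field names are illustrative, not a complete MozDef event schema):

```python
import json

# Illustrative event: serialize a plain dict to one JSON log line.
log_line = json.dumps({
    "category": "authentication",
    "summary": "user alice logged in",
    "details": {"username": "alice", "success": True},
})
```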
Description
***********
@ -288,8 +288,10 @@ Alerts are stored in the `alerts`_ folder.

There are two types of alerts:

* simple alerts that consider events one at a time

  * For example you may want to get an alert every time a single LDAP modification is detected.
* aggregation alerts that allow you to aggregate events on the field of your choice

  * For example you may want to alert when more than 3 login attempts failed for the same username.

You'll find documented examples in the `alerts`_ folder.

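As a standalone illustration of the aggregation idea (not MozDef's actual alert API, which queries Elasticsearch), counting failed logins per username might look like:

```python
from collections import Counter

# Hypothetical event stream; a real alert would query Elasticsearch.
events = [
    {"username": "bob", "success": False},
    {"username": "bob", "success": False},
    {"username": "bob", "success": False},
    {"username": "bob", "success": False},
    {"username": "eve", "success": False},
]

def failed_login_offenders(events, threshold=3):
    """Usernames with more than `threshold` failed login attempts."""
    counts = Counter(e["username"] for e in events if not e["success"])
    return [user for user, n in counts.items() if n > threshold]

offenders = failed_login_offenders(events)
```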
@ -13,7 +13,7 @@ Copyright (c) 2014 Mozilla Corporation

<div class="form-group">
  <label class="col-xs-3 control-label" for="summary">Incident Summary:</label>
  <div class="col-xs-5">
    <input id="summary" name="summary" placeholder="e.g. Data Disclosure on Foo Product"
           class="form-control summary" required="" type="text">
  </div>
</div>
@ -50,4 +50,4 @@ Copyright (c) 2014 Mozilla Corporation

    </div>
  </div>
</form>
</template>
@ -112,7 +112,7 @@ Copyright (c) 2014 Mozilla Corporation

      title="Drag and Drop the veris tags from the upper right to the area below"></i>
</label>
<div class="tags col-xs-10">
  <div class="form-control">To add tags: drag tags from the tag filter menu to here</div>
  <ul class="pull-left list-unstyled">
    {{#each tags}}
      <li class="list-unstyled pull-left">
@ -7,7 +7,7 @@ Copyright (c) 2014 Mozilla Corporation

<template name="side_nav_menu">
  <div class="container headercontainer">
    <nav class="main-menu">
      {{#if true }}
        <ul>
@ -13,7 +13,7 @@ Copyright (c) 2014 Mozilla Corporation

         placeholder="tag filter">
</div>
<div class="dropdown">
  <a class="dropdown-toggle" data-toggle="dropdown" id="dLabel"><span
      class="label label-info">common</span><b class="caret"></b></a>
  <ul class="dropdown-menu" role="menu" aria-labelledby="dLabel">
    <li>category</li>
@ -66,7 +66,6 @@ caption, legend {

  color: var(--txt-primary-color);
  font-style: normal;
  font-weight: normal;
}

.headercontainer {

@ -121,7 +120,6 @@ caption, legend {

  line-height: 13px;
}

.attackshoverboard {
  /*width: 500px;*/
  /*height: 500px;*/
@ -133,17 +131,63 @@ caption, legend {

  display: none;
}

.dropdown-menu li:hover {
  background: rgb(163, 163, 163);
  cursor: pointer;
}

.tag:hover {
  background: #5bc0de;
  cursor: pointer;
}

.dropdown-submenu {
  position: relative;
}

.dropdown-submenu > .dropdown-menu {
  top: 0;
  left: 100%;
  -webkit-border-radius: 0 6px 6px 6px;
  -moz-border-radius: 0 6px 6px 6px;
  border-radius: 0 6px 6px 6px;
}

.dropdown-submenu:active > .dropdown-menu,
.dropdown-submenu:hover > .dropdown-menu {
  display: block;
  right: 162px;
}

.dropdown-submenu > a:after {
  display: block;
  content: " ";
  float: right;
  width: 0;
  height: 0;
  border-color: transparent;
  border-style: solid;
  border-width: 5px 0 5px 5px;
  border-left-color: #cccccc;
  margin-top: 5px;
  margin-right: -10px;
}

.dropdown-submenu:active > a:after {
  border-left-color: #ffffff;
}

.dropdown-submenu.pull-left {
  float: none;
}

.dropdown-submenu.pull-left > .dropdown-menu {
  left: -100%;
  margin-left: 10px;
  -webkit-border-radius: 6px 0 6px 6px;
  -moz-border-radius: 6px 0 6px 6px;
  border-radius: 6px 0 6px 6px;
}

.attackercallout {
  width: 120px;
@ -172,6 +216,7 @@ caption, legend {

  text-transform: uppercase;
  margin-top: 20px;
}

.attackercallout ul {
  list-style: none;
  float: left;

@ -183,6 +228,7 @@ caption, legend {

.attackercallout .indicator {
  color: yellow;
}

.attackercallout a {
  color: yellow;
}
@ -199,23 +245,27 @@ caption, legend {

.alert.alert-NOTICE {
  --alert-bg-color: #4a6785;
  --alert-txt-color: white;
}

.alert.alert-WARNING {
  --alert-bg-color: #ffd351;
  --alert-txt-color: black;
}

.alert.alert-CRITICAL {
  --alert-bg-color: #d04437;
  --alert-txt-color: white;
}

.alert.alert-INFO {
  --alert-bg-color: #cccccc;
  --alert-txt-color: black;
}

.alert.alert-ERROR {
  --alert-bg-color: #d04437;
  --alert-txt-color: white;
}

.alert {
  color: var(--alert-txt-color);
@ -223,7 +273,7 @@ caption, legend {

  text-transform: uppercase;
  display: table-cell;
  font-weight: bold;
}

.alert-row a {
  color: var(--a-link-color);
@ -237,19 +287,6 @@ caption, legend {

  color: var(--txt-secondary-color);
}

textarea {
  overflow: auto;
  vertical-align: top;
@ -267,7 +304,7 @@ h1, h2, h3, h4, h5, h6, .h1, .h2, .h3, .h4, .h5, .h6 {

  border-radius: 4px;
  color: var(--txt-primary-color);
  background-color: var(--arm-color);
}

.btn-warning.active,
.btn-warning:active,

@ -276,7 +313,7 @@ h1, h2, h3, h4, h5, h6, .h1, .h2, .h3, .h4, .h5, .h6 {

  color: var(--txt-secondary-color);
  background-color: var(--arm-focus-color);
  border-color: var(--arm-color);
}

.btnAlertAcked,
.btnAlertAcked.active,

@ -285,8 +322,7 @@ h1, h2, h3, h4, h5, h6, .h1, .h2, .h3, .h4, .h5, .h6 {

  color: var(--txt-disabled-color);
  background-color: var(--arm-focus-color);
  border-color: var(--arm-color);
}

input[type="search"] {
  border-radius: 15px;
@ -340,9 +376,14 @@ td {

}

.tabcontent {
  display: none;
  margin-top: 20px;
}

.tabcontent.active {
  display: block;
}

.tabnav a {
  color: rgb(173, 216, 230);
}
@ -646,7 +687,7 @@ sidenav {

  border-width: 0px 1px;
  z-index: 100;
  position: relative;
  float: left;
  border-image: none;
  text-decoration: none;
}

@ -112,6 +112,67 @@ caption, legend {

  line-height: 13px;
}

.tag:hover {
  background: #5bc0de;
  cursor: pointer;
}

.dropdown-menu li {
  color: #000;
}

.dropdown-menu li:hover {
  background: #ccc;
  cursor: pointer;
}

.dropdown-submenu {
  position: relative;
}

.dropdown-submenu > .dropdown-menu {
  top: 0;
  left: 100%;
  -webkit-border-radius: 0 6px 6px 6px;
  -moz-border-radius: 0 6px 6px 6px;
  border-radius: 0 6px 6px 6px;
}

.dropdown-submenu:active > .dropdown-menu,
.dropdown-submenu:hover > .dropdown-menu {
  display: block;
  right: 162px;
}

.dropdown-submenu > a:after {
  display: block;
  content: " ";
  float: right;
  width: 0;
  height: 0;
  border-color: transparent;
  border-style: solid;
  border-width: 5px 0 5px 5px;
  border-left-color: #cccccc;
  margin-top: 5px;
  margin-right: -10px;
}

.dropdown-submenu:active > a:after {
  border-left-color: #ffffff;
}

.dropdown-submenu.pull-left {
  float: none;
}

.dropdown-submenu.pull-left > .dropdown-menu {
  left: -100%;
  margin-left: 10px;
  -webkit-border-radius: 6px 0 6px 6px;
  -moz-border-radius: 6px 0 6px 6px;
  border-radius: 6px 0 6px 6px;
}

.attackshoverboard {
  /*width: 500px;*/
@ -124,18 +185,6 @@ caption, legend {

  display: none;
}

.attackercallout {
  width: 120px;
  height: 160px;
@ -225,11 +274,11 @@ caption, legend {

}

.modal-header {
  color: var(--txt-secondary-color);
}

.modal-body {
  color: var(--txt-secondary-color);
}

.modal-body .row {
@ -294,12 +343,12 @@ td {

}

.welcome {
  height: 180px;
  width: 600px;
  margin-left: 25%;
  text-align: center;
  color: var(--txt-primary-color);
  vertical-align: middle;
}

.mozdeflogo {
@ -308,11 +357,16 @@ td {

}

.tabcontent {
  display: none;
  margin-top: 20px;
}

.tabcontent.active {
  display: block;
}

.tabnav a {
  color: lightblue;
}

/* uncomment this login ui css to hide the local account/password signup options

@ -129,6 +129,16 @@ caption, legend {

  display: none;
}

.dropdown-menu li:hover {
  background: rgb(163, 163, 163);
  cursor: pointer;
}

.tag:hover {
  background: #5bc0de;
  cursor: pointer;
}

.dropdown-submenu{position:relative;}
.dropdown-submenu>.dropdown-menu{top:0;left:100%;-webkit-border-radius:0 6px 6px 6px;-moz-border-radius:0 6px 6px 6px;border-radius:0 6px 6px 6px;}
.dropdown-submenu:active>.dropdown-menu, .dropdown-submenu:hover>.dropdown-menu {
@ -250,14 +260,6 @@ caption, legend {

  color: var(--txt-primary-color);
}

textarea {
  overflow: auto;
  vertical-align: top;
@ -333,12 +335,12 @@ td {

}

.welcome {
  height: 180px;
  width: 600px;
  margin-left: 25%;
  text-align: center;
  color: var(--txt-primary-color);
  vertical-align: middle;
}

.mozdeflogo {
@ -347,11 +349,16 @@ td {

}

.tabcontent {
  display: none;
  margin-top: 20px;
}

.tabcontent.active {
  display: block;
}

.tabnav a {
  color: lightblue;
}

/* don't float the 'create account' link*/

@ -21,8 +21,8 @@ Copyright (c) 2014 Mozilla Corporation

  --txt-shadow-color: #576d54;
  --arm-color: #e69006;
  --arm-focus-color: #d58512;
  --txt-primary-color: #fff;
  --txt-secondary-color: #000;
  --a-link-color: #a2a9b2;
}
|
@ -45,7 +45,7 @@ body{
|
|||
/*margin: 0;*/
|
||||
/*min-width: 990px;*/
|
||||
padding: 0;
|
||||
color: var(--font-main);
|
||||
color: var(--txt-primary-color);
|
||||
line-height: normal;
|
||||
text-align: left;
|
||||
}
|
||||
|
@ -56,12 +56,12 @@ body{

/*mozdef custom */
.upperwhite {
  color: var(--txt-primary-color);
  text-transform: uppercase;
}

caption, legend {
  color: var(--txt-primary-color);
}

.shadow {
@ -69,7 +69,7 @@ caption, legend {

.ipaddress {
  color: var(--txt-primary-color);
  font-style: normal;
  font-weight: normal;
@ -101,7 +101,7 @@ caption, legend {

  opacity: .3;
  z-index: 2;
  font-size: 13px;
  color: var(--txt-primary-color);
}

#bottom-toolbar:hover {
@ -115,30 +115,78 @@ caption, legend {

  line-height: 13px;
}

.tag:hover {
  background: #5bc0de;
  cursor: pointer;
}

.dropdown-menu li {
  color: #000;
}

.dropdown-menu li:hover {
  background: #ccc;
  cursor: pointer;
}

.dropdown-submenu {
  position: relative;
}

.dropdown-submenu > .dropdown-menu {
  top: 0;
  left: 100%;
  -webkit-border-radius: 0 6px 6px 6px;
  -moz-border-radius: 0 6px 6px 6px;
  border-radius: 0 6px 6px 6px;
}

.dropdown-submenu:active > .dropdown-menu,
.dropdown-submenu:hover > .dropdown-menu {
  display: block;
  right: 162px;
}

.dropdown-submenu > a:after {
  display: block;
  content: " ";
  float: right;
  width: 0;
  height: 0;
  border-color: transparent;
  border-style: solid;
  border-width: 5px 0 5px 5px;
  border-left-color: #cccccc;
  margin-top: 5px;
  margin-right: -10px;
}

.dropdown-submenu:active > a:after {
  border-left-color: #ffffff;
}

.dropdown-submenu.pull-left {
  float: none;
}

.dropdown-submenu.pull-left > .dropdown-menu {
  left: -100%;
  margin-left: 10px;
  -webkit-border-radius: 6px 0 6px 6px;
  -moz-border-radius: 6px 0 6px 6px;
  border-radius: 6px 0 6px 6px;
}

.attackshoverboard {
  /*width: 500px;*/
  /*height: 500px;*/
  /*background-color: green;*/
  -moz-transform: scaleY(-1);
  -webkit-transform: scaleY(-1);
  -o-transform: scaleY(-1);
  transform: scaleY(-1);
  display: none;
}

.attackercallout {
  width: 120px;
  height: 160px;
@ -170,7 +218,8 @@ caption, legend {
|
|||
text-transform: uppercase;
|
||||
margin-top: 20px;
|
||||
}
|
||||
.attackercallout ul{
|
||||
|
||||
.attackercallout ul {
|
||||
list-style: none;
|
||||
float: left;
|
||||
left: auto;
|
||||
|
@ -181,6 +230,7 @@ caption, legend {
|
|||
.attackercallout .indicator{
|
||||
color: yellow;
|
||||
}
|
||||
|
||||
.attackercallout a{
|
||||
color: yellow;
|
||||
}
|
||||
|
@ -232,11 +282,11 @@ caption, legend {
|
|||
}
|
||||
|
||||
.modal-header {
|
||||
color: var(--font-focus);
|
||||
color: var(--txt-secondary-color);
|
||||
}
|
||||
|
||||
.modal-body {
|
||||
color: var(--font-focus);
|
||||
color: var(--txt-secondary-color);
|
||||
}
|
||||
|
||||
.modal-body .row {
|
||||
|
@ -247,7 +297,7 @@ caption, legend {
|
|||
.btn {
|
||||
border: 1px outset;
|
||||
border-radius: 4px;
|
||||
color: var(--font-main);
|
||||
color: var(--txt-primary-color);
|
||||
background-color: var(--arm-color);
|
||||
}
|
||||
|
||||
|
@ -255,7 +305,7 @@ caption, legend {
|
|||
.btn-warning:active,
|
||||
.btn-warning:hover,
|
||||
.open > .dropdown-toggle.btn-warning {
|
||||
color: var(--font-focus);
|
||||
color: var(--txt-secondary-color);
|
||||
background-color: var(--arm-focus-color);
|
||||
border-color: var(--arm-color);
|
||||
}
|
||||
|
@ -272,7 +322,7 @@ caption, legend {
|
|||
.btn-notice {
|
||||
border: 1px outset;
|
||||
border-radius: 4px;
|
||||
color: var(--font-focus);
|
||||
color: var(--txt-secondary-color);
|
||||
background-color: var(--ack-edit-color);
|
||||
}
|
||||
|
||||
|
@ -280,13 +330,13 @@ caption, legend {
|
|||
.btn-notice:active,
|
||||
.btn-notice:hover,
|
||||
.open > .dropdown-toggle.btn-notice {
|
||||
color: var(--font-main);
|
||||
color: var(--txt-primary-color);
|
||||
background-color: var(--ack-edit-focus-color);
|
||||
border-color: var(--ack-edit-border-color);
|
||||
}
|
||||
|
||||
.btn-notice:disabled, button[disabled] {
|
||||
color: var(--font-main);
|
||||
color: var(--txt-primary-color);
|
||||
background-color: var(--ack-edit-disabled-color);
|
||||
border-color: var(--ack-edit-border-color);
|
||||
}
|
||||
|
@ -294,12 +344,12 @@ caption, legend {
|
|||
.btn-generic {
|
||||
border: 1px outset;
|
||||
border-radius: 4px;
|
||||
color: var(--font-focus);
|
||||
color: var(--txt-secondary-color);
|
||||
background-color: var(--ack-edit-color);
|
||||
}
|
||||
|
||||
.btn-generic:focus {
|
||||
color: var(--font-main);
|
||||
color: var(--txt-primary-color);
|
||||
background-color: #286090;
|
||||
border-color: #204d74;
|
||||
}
|
||||
|
@ -308,7 +358,7 @@ caption, legend {
|
|||
.btn-generic:active,
|
||||
.btn-genric:hover,
|
||||
.open > .dropdown-toggle.btn-generic {
|
||||
color: var(--font-focus);
|
||||
color: var(--txt-secondary-color);
|
||||
background-color: var(--ack-edit-focus-color);
|
||||
border-color: var(--ack-edit-border-color);
|
||||
}
|
||||
|
@ -344,11 +394,11 @@ input[type="search"] {
|
|||
.table-hover tbody tr:hover > th,
|
||||
.table-hover > tbody > tr:hover {
|
||||
background-color: #9a9ea5;
|
||||
color: var(--font-focus);
|
||||
color: var(--txt-secondary-color);
|
||||
}
|
||||
|
||||
td{
|
||||
color: var(--font-main);
|
||||
color: var(--txt-primary-color);
|
||||
}
|
||||
|
||||
.welcome {
|
||||
|
@ -357,7 +407,7 @@ td{
|
|||
width: 600px;
|
||||
margin-left: 25%;
|
||||
text-align: center;
|
||||
color: var(--font-focus);
|
||||
color: var(--txt-secondary-color);
|
||||
border: none;
|
||||
vertical-align: middle;
|
||||
}
|
||||
|
@ -366,8 +416,13 @@ td{
|
|||
width: 500px;
|
||||
}
|
||||
|
||||
.tabcontent{
|
||||
margin-top: 20px;
|
||||
.tabcontent {
|
||||
display: none;
|
||||
margin-top: 20px;
|
||||
}
|
||||
|
||||
.tabcontent.active {
|
||||
display: block;
|
||||
}
|
||||
|
||||
.tabnav a{
|
||||
|
@ -418,17 +473,17 @@ td{
|
|||
}
|
||||
|
||||
circle:hover{
|
||||
fill: var(--font-main);
|
||||
fill: var(--txt-primary-color);
|
||||
}
|
||||
|
||||
.node {
|
||||
stroke: var(--font-focus);
|
||||
stroke: var(--txt-secondary-color);
|
||||
stroke-width: 1.5px;
|
||||
}
|
||||
|
||||
.textlabel{
|
||||
stroke-width: .2px;
|
||||
stroke: var(--font-focus);
|
||||
stroke: var(--txt-secondary-color);
|
||||
}
|
||||
|
||||
.vtagholders {
|
||||
|
@ -715,7 +770,7 @@ nav.main-menu.expanded {
|
|||
z-index: 1000;
|
||||
}
|
||||
|
||||
.main-menu>ul {
|
||||
.main-menu > ul {
|
||||
margin: 7px 0;
|
||||
}
|
||||
|
||||
|
@ -725,13 +780,13 @@ nav.main-menu.expanded {
|
|||
width: 225px;
|
||||
}
|
||||
|
||||
.main-menu li:hover>a,
|
||||
nav.main-menu li.active>a,
|
||||
.dropdown-menu>li>a:hover,
|
||||
.dropdown-menu>li>a:focus,
|
||||
.dropdown-menu>.active>a,
|
||||
.dropdown-menu>.active>a:hover,
|
||||
.dropdown-menu>.active>a:focus,
|
||||
.main-menu li:hover > a,
|
||||
nav.main-menu li.active > a,
|
||||
.dropdown-menu > li > a:hover,
|
||||
.dropdown-menu > li > a:focus,
|
||||
.dropdown-menu >.active > a,
|
||||
.dropdown-menu >.active > a:hover,
|
||||
.dropdown-menu >.active > a:focus,
|
||||
.no-touch .dashboard-page nav.dashboard-menu ul li:hover a,
|
||||
.dashboard-page nav.dashboard-menu ul li.active a {
|
||||
color: rgb(0, 0, 0);
|
||||
|
@ -837,7 +892,7 @@ nav.main-menu li.active>a,
|
|||
left: 200px;
|
||||
}
|
||||
|
||||
.main-menu li>a {
|
||||
.main-menu li > a {
|
||||
position: relative;
|
||||
display: table;
|
||||
border-collapse: collapse;
|
||||
|
@ -869,7 +924,7 @@ nav.main-menu li.active>a,
|
|||
font-family: 'Zilla Slab', serif;
|
||||
}
|
||||
|
||||
.main-menu>ul.logout {
|
||||
.main-menu > ul.logout {
|
||||
position: absolute;
|
||||
left: 0;
|
||||
bottom: 0;
|
||||
|
|
|
@ -92,3 +92,15 @@ Add is_ip utility function
------------------

* Updated bulk queue to acquire lock before saving events


3.0.2 (2019-07-17)
------------------

* Updated ElasticsearchClient.get_indices() to include closed indices


3.0.3 (2019-07-18)
------------------

* Added ElasticsearchClient.get_open_indices()

@ -53,7 +53,11 @@ class ElasticsearchClient():
        self.es_connection.indices.delete(index=index_name, ignore=ignore_codes)

    def get_indices(self):
        return list(self.es_connection.indices.stats()['indices'].keys())
        # Includes open and closed indices
        return list(self.es_connection.indices.get_alias('*', params=dict(expand_wildcards='all')).keys())

    def get_open_indices(self):
        return list(self.es_connection.indices.get_alias('*', params=dict(expand_wildcards='open')).keys())

    def index_exists(self, index_name):
        return self.es_connection.indices.exists(index_name)

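The two index-listing methods in the hunk above differ only in the `expand_wildcards` parameter passed to `indices.get_alias`. A minimal sketch of that behavior against a stubbed connection (the `FakeIndices` class and the index names are illustrative stand-ins, not part of MozDef):

```python
class FakeIndices:
    """Hypothetical stand-in for es_connection.indices, for illustration only."""
    def __init__(self, open_indices, closed_indices):
        self._open = list(open_indices)
        self._closed = list(closed_indices)

    def get_alias(self, index, params=None):
        # Real Elasticsearch returns {index_name: {...alias info...}};
        # 'all' includes closed indices, 'open' (the default) excludes them.
        wildcards = (params or {}).get('expand_wildcards', 'open')
        names = list(self._open)
        if wildcards == 'all':
            names += self._closed
        return {name: {} for name in names}


indices = FakeIndices(open_indices=['events-20190725'],
                      closed_indices=['events-20180101'])

def get_indices():
    # Includes open and closed indices
    return list(indices.get_alias('*', params=dict(expand_wildcards='all')).keys())

def get_open_indices():
    return list(indices.get_alias('*', params=dict(expand_wildcards='open')).keys())

print(sorted(get_indices()))   # open and closed
print(get_open_indices())      # open only
```

The earlier `indices.stats()`-based implementation missed closed indices entirely, because closed indices report no stats; switching to `get_alias` with `expand_wildcards` makes the open/closed distinction explicit.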
@ -20,7 +20,6 @@ requirements = [
    'wheel>=0.32.1',
    'watchdog>=0.9.0',
    'flake8>=3.5.0',
    'tox>=3.5.2',
    'coverage>=4.5.1',
    'Sphinx>=1.8.1',
    'twine>=1.12.1',

@ -59,6 +58,6 @@ setup(
    test_suite='tests',
    tests_require=[],
    url='https://github.com/mozilla/MozDef/tree/master/lib',
    version='3.0.1',
    version='3.0.3',
    zip_safe=False,
)

@ -228,10 +228,10 @@ class taskConsumer(object):
            self.flush_wait_time = (response['Credentials']['Expiration'] - current_time).seconds - 3
        else:
            role_creds = {}
            role_creds['region_name'] = options.region
        self.s3_client = boto3.client(
            's3',
            region_name=options.region,
            **role_creds
            **get_aws_credentials(**role_creds)
        )

    def reauth_timer(self):

@ -284,11 +284,10 @@ class taskConsumer(object):
                logger.info('Received network related error...reconnecting')
                time.sleep(5)
                self.sqs_queue = connect_sqs(
                    task_exchange=options.taskexchange,
                    **get_aws_credentials(
                        options.region,
                        options.accesskey,
                        options.secretkey)
                    region_name=options.region,
                    aws_access_key_id=options.accesskey,
                    aws_secret_access_key=options.secretkey,
                    task_exchange=options.taskexchange
                )
                time.sleep(options.sleep_time)

@ -383,11 +382,10 @@ def main():
        sys.exit(1)

    sqs_queue = connect_sqs(
        task_exchange=options.taskexchange,
        **get_aws_credentials(
            options.region,
            options.accesskey,
            options.secretkey)
        region_name=options.region,
        aws_access_key_id=options.accesskey,
        aws_secret_access_key=options.secretkey,
        task_exchange=options.taskexchange
    )
    # consume our queue
    taskConsumer(sqs_queue, es).run()

@ -413,7 +411,6 @@ def initConfig():
    # rabbit message queue options
    options.mqserver = getConfig('mqserver', 'localhost', options.configfile)
    options.taskexchange = getConfig('taskexchange', 'eventtask', options.configfile)
    options.eventexchange = getConfig('eventexchange', 'events', options.configfile)
    # rabbit: how many messages to ask for at once from the message queue
    options.prefetch = getConfig('prefetch', 10, options.configfile)
    # rabbit: user creds

@ -24,7 +24,6 @@ from mozdef_util.utilities.logger import logger, initLogger
from mozdef_util.elasticsearch_client import ElasticsearchClient, ElasticsearchBadServer, ElasticsearchInvalidIndex, ElasticsearchException

sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../"))
from mq.lib.aws import get_aws_credentials
from mq.lib.plugins import sendEventToPlugins, registerPlugins
from mq.lib.sqs import connect_sqs

@ -192,11 +191,11 @@ def main():
        sys.exit(1)

    sqs_queue = connect_sqs(
        task_exchange=options.taskexchange,
        **get_aws_credentials(
            options.region,
            options.accesskey,
            options.secretkey))
        region_name=options.region,
        aws_access_key_id=options.accesskey,
        aws_secret_access_key=options.secretkey,
        task_exchange=options.taskexchange
    )
    # consume our queue
    taskConsumer(sqs_queue, es, options).run()

@ -29,7 +29,6 @@ from mozdef_util.utilities.logger import logger, initLogger
from mozdef_util.elasticsearch_client import ElasticsearchClient, ElasticsearchBadServer, ElasticsearchInvalidIndex, ElasticsearchException

sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../"))
from mq.lib.aws import get_aws_credentials
from mq.lib.plugins import sendEventToPlugins, registerPlugins
from mq.lib.sqs import connect_sqs

@ -331,11 +330,11 @@ def main():
        sys.exit(1)

    sqs_queue = connect_sqs(
        task_exchange=options.taskexchange,
        **get_aws_credentials(
            options.region,
            options.accesskey,
            options.secretkey))
        region_name=options.region,
        aws_access_key_id=options.accesskey,
        aws_secret_access_key=options.secretkey,
        task_exchange=options.taskexchange
    )
    # consume our queue
    taskConsumer(sqs_queue, es).run()

@ -355,7 +354,6 @@ def initConfig():
    # rabbit message queue options
    options.mqserver = getConfig('mqserver', 'localhost', options.configfile)
    options.taskexchange = getConfig('taskexchange', 'eventtask', options.configfile)
    options.eventexchange = getConfig('eventexchange', 'events', options.configfile)
    # rabbit: how many messages to ask for at once from the message queue
    options.prefetch = getConfig('prefetch', 10, options.configfile)
    # rabbit: user creds

@ -4,14 +4,14 @@
# Copyright (c) 2017 Mozilla Corporation


def get_aws_credentials(region=None, access_key=None, secret_key=None, security_token=None):
def get_aws_credentials(region_name=None, aws_access_key_id=None, aws_secret_access_key=None, aws_session_token=None):
    result = {}
    if region and region != '<add_region>':
        result['region_name'] = region
    if access_key and access_key != '<add_accesskey>':
        result['aws_access_key_id'] = access_key
    if secret_key and secret_key != '<add_secretkey>':
        result['aws_secret_access_key'] = secret_key
    if security_token:
        result['security_token'] = security_token
    if region_name and region_name != '<add_region>':
        result['region_name'] = region_name
    if aws_access_key_id and aws_access_key_id != '<add_accesskey>':
        result['aws_access_key_id'] = aws_access_key_id
    if aws_secret_access_key and aws_secret_access_key != '<add_secretkey>':
        result['aws_secret_access_key'] = aws_secret_access_key
    if aws_session_token:
        result['aws_session_token'] = aws_session_token
    return result

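The renamed helper above builds a kwargs dict ready to splat into boto3 calls, dropping unset values and the `<add_region>`-style placeholders that an unedited config file leaves behind. A standalone copy of the new logic shown in the hunk, with an example call (the key values are made up):

```python
def get_aws_credentials(region_name=None, aws_access_key_id=None,
                        aws_secret_access_key=None, aws_session_token=None):
    # Only include values that are actually set and are not config-file
    # placeholders, so boto3 can fall back to its normal credential chain
    # (environment variables, instance profile / IAM role) when keys are absent.
    result = {}
    if region_name and region_name != '<add_region>':
        result['region_name'] = region_name
    if aws_access_key_id and aws_access_key_id != '<add_accesskey>':
        result['aws_access_key_id'] = aws_access_key_id
    if aws_secret_access_key and aws_secret_access_key != '<add_secretkey>':
        result['aws_secret_access_key'] = aws_secret_access_key
    if aws_session_token:
        result['aws_session_token'] = aws_session_token
    return result


# Placeholders are filtered out entirely; only the region survives here.
creds = get_aws_credentials('us-west-2', '<add_accesskey>', '<add_secretkey>')
print(creds)  # {'region_name': 'us-west-2'}
```

Renaming the parameters to match boto3's own keyword arguments (`region_name`, `aws_access_key_id`, `aws_secret_access_key`, `aws_session_token`) is what lets callers write `boto3.client('s3', **get_aws_credentials(...))` without any translation layer. This is also the change that fixes connecting to SQS and S3 without an explicit access key and secret, as noted in the v3.1.1 changelog.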
@ -1,18 +1,12 @@
import boto3
from .aws import get_aws_credentials


def connect_sqs(region_name=None, aws_access_key_id=None,
                aws_secret_access_key=None, task_exchange=None):
    credentials = {}
    if aws_access_key_id is not None:
        credentials['aws_access_key_id'] = aws_access_key_id
    if aws_secret_access_key is not None:
        credentials['aws_secret_access_key'] = aws_secret_access_key

    sqs = boto3.resource(
        'sqs',
        region_name=region_name,
        **credentials
        **get_aws_credentials(region_name, aws_access_key_id, aws_secret_access_key)
    )
    queue = sqs.get_queue_by_name(QueueName=task_exchange)
    return queue

@ -23,11 +23,13 @@ class message(object):
    'details.apiversion',
    'details.serviceeventdetails',
    'details.requestparameters.attribute',
    'details.requestparameters.bucketpolicy.statement.principal',
    'details.requestparameters.bucketpolicy.statement.principal.service',
    'details.requestparameters.bucketpolicy.statement.principal.aws',
    'details.requestparameters.callerreference',
    'details.requestparameters.description',
    'details.requestparameters.describeflowlogsrequest.filter.value',
    'details.requestparameters.disableapitermination',
    'details.requestparameters.distributionconfig.callerreference',
    'details.requestparameters.domainname',
    'details.requestparameters.domainnames',
    'details.requestparameters.ebsoptimized',

@ -6,7 +6,7 @@ bottle==0.12.4
celery==4.1.0
celery[sqs]==4.1.0
cffi==1.9.1
configlib==2.0.3
configlib==2.0.4
configparser==3.5.0b2
cryptography==2.3.1
dnspython==1.15.0

@ -30,7 +30,7 @@ jmespath==0.9.3
kombu==4.1.0
meld3==1.0.2
mozdef-client==1.0.11
mozdef-util==3.0.1
mozdef-util==3.0.3
netaddr==0.7.19
nose==1.3.7
oauth2client==1.4.12

@ -58,6 +58,5 @@ tzlocal==1.4
uritemplate==0.6
urllib3==1.24.3
uwsgi==2.0.17.1
virtualenv==1.11.4
tldextract==2.2.0
websocket-client==0.44.0

@ -11,7 +11,6 @@ import pynsive
import random
import re
import requests
import sys
import socket
import importlib
from bottle import route, run, response, request, default_app, post

@ -536,10 +535,10 @@ def kibanaDashboards():
            })

    except ElasticsearchInvalidIndex as e:
        sys.stderr.write('Kibana dashboard index not found: {0}\n'.format(e))
        logger.error('Kibana dashboard index not found: {0}\n'.format(e))

    except Exception as e:
        sys.stderr.write('Kibana dashboard received error: {0}\n'.format(e))
        logger.error('Kibana dashboard received error: {0}\n'.format(e))

    return json.dumps(resultsList)

@ -555,7 +554,7 @@ def getWatchlist():
    # Log the entries we are removing to maintain an audit log
    expired = watchlistentries.find({'dateExpiring': {"$lte": datetime.utcnow() - timedelta(hours=1)}})
    for entry in expired:
        sys.stdout.write('Deleting entry {0} from watchlist /n'.format(entry))
        logger.debug('Deleting entry {0} from watchlist /n'.format(entry))

    # delete any that expired
    watchlistentries.delete_many({'dateExpiring': {"$lte": datetime.utcnow() - timedelta(hours=1)}})

@ -578,7 +577,7 @@ def getWatchlist():
        )
        return json.dumps(WatchList)
    except ValueError as e:
        sys.stderr.write('Exception {0} collecting watch list\n'.format(e))
        logger.error('Exception {0} collecting watch list\n'.format(e))


def getWhois(ipaddress):

@ -591,7 +590,7 @@ def getWhois(ipaddress):
        whois['fqdn']=socket.getfqdn(str(netaddr.IPNetwork(ipaddress)[0]))
        return (json.dumps(whois))
    except Exception as e:
        sys.stderr.write('Error looking up whois for {0}: {1}\n'.format(ipaddress, e))
        logger.error('Error looking up whois for {0}: {1}\n'.format(ipaddress, e))


def verisSummary(verisRegex=None):

@ -617,7 +616,7 @@ def verisSummary(verisRegex=None):
        else:
            return json.dumps(list())
    except Exception as e:
        sys.stderr.write('Exception while aggregating veris summary: {0}\n'.format(e))
        logger.error('Exception while aggregating veris summary: {0}\n'.format(e))


def initConfig():

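The `getWatchlist()` hunks above prune entries whose `dateExpiring` is at least an hour in the past before returning results. The cutoff logic, sketched against a plain list instead of the MongoDB `watchlistentries` collection (the entry dicts and timestamps here are illustrative):

```python
from datetime import datetime, timedelta

def expired_entries(entries, now=None):
    """Return entries whose dateExpiring is an hour or more in the past,
    mirroring the {'dateExpiring': {'$lte': utcnow() - 1h}} Mongo query."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(hours=1)
    return [entry for entry in entries if entry['dateExpiring'] <= cutoff]


now = datetime(2019, 7, 25, 12, 0, 0)
entries = [
    {'ipaddress': '10.0.0.1', 'dateExpiring': datetime(2019, 7, 25, 10, 0, 0)},
    {'ipaddress': '10.0.0.2', 'dateExpiring': datetime(2019, 7, 25, 11, 30, 0)},
]
# Only the first entry is past the 11:00 cutoff.
print([e['ipaddress'] for e in expired_entries(entries, now)])  # ['10.0.0.1']
```

In the real handler the same filter is run twice: once with `find()` to log the entries being removed (the audit trail), then with `delete_many()` to remove them.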
@ -6,9 +6,10 @@
import requests
import json
import os
import sys
from configlib import getConfig, OptionParser

from mozdef_util.utilities.logger import logger


class message(object):
    def __init__(self):

@ -41,7 +42,7 @@ class message(object):
        self.configfile = './plugins/cymon.conf'
        self.options = None
        if os.path.exists(self.configfile):
            sys.stdout.write('found conf file {0}\n'.format(self.configfile))
            logger.debug('found conf file {0}\n'.format(self.configfile))
            self.initConfiguration()

    def onMessage(self, request, response):

@ -58,9 +59,8 @@ class message(object):
        except ValueError:
            response.status = 500

        print(requestDict, requestDict.keys())
        if 'ipaddress' in requestDict:
            url="https://cymon.io/api/nexus/v1/ip/{0}/events?combined=true&format=json".format(requestDict['ipaddress'])
            url = "https://cymon.io/api/nexus/v1/ip/{0}/events?combined=true&format=json".format(requestDict['ipaddress'])

            # add the cymon api key?
            if self.options is not None:

@ -7,11 +7,12 @@ import os
import random
import requests
import re
import sys
from configlib import getConfig, OptionParser
from datetime import datetime, timedelta
from pymongo import MongoClient

from mozdef_util.utilities.logger import logger


def isFQDN(fqdn):
    try:

@ -59,7 +60,7 @@ class message(object):
        self.configfile = './plugins/fqdnblocklist.conf'
        self.options = None
        if os.path.exists(self.configfile):
            sys.stdout.write('found conf file {0}\n'.format(self.configfile))
            logger.debug('found conf file {0}\n'.format(self.configfile))
            self.initConfiguration()

    def parse_fqdn_whitelist(self, fqdn_whitelist_location):

@ -146,8 +147,8 @@ class message(object):
            fqdnblock['creator'] = userID
            fqdnblock['reference'] = referenceID
            ref = fqdnblocklist.insert(fqdnblock)
            sys.stdout.write('{0} written to db\n'.format(ref))
            sys.stdout.write('%s: added to the fqdnblocklist table\n' % (fqdn))
            logger.debug('{0} written to db\n'.format(ref))
            logger.debug('%s: added to the fqdnblocklist table\n' % (fqdn))

            # send to statuspage.io?
            if len(self.options.statuspage_api_key) > 1:

@ -170,17 +171,17 @@ class message(object):
                        headers=headers,
                        data=post_data)
                    if response.ok:
                        sys.stdout.write('%s: notification sent to statuspage.io\n' % (fqdn))
                        logger.info('%s: notification sent to statuspage.io\n' % (fqdn))
                    else:
                        sys.stderr.write('%s: statuspage.io notification failed %s\n' % (fqdn, response.json()))
                        logger.error('%s: statuspage.io notification failed %s\n' % (fqdn, response.json()))
                except Exception as e:
                    sys.stderr.write('Error while notifying statuspage.io for %s: %s\n' %(fqdn, e))
                    logger.error('Error while notifying statuspage.io for %s: %s\n' % (fqdn, e))
            else:
                sys.stderr.write('%s: is already present in the fqdnblocklist table\n' % (fqdn))
                logger.error('%s: is already present in the fqdnblocklist table\n' % (fqdn))
        else:
            sys.stderr.write('%s: is not a valid fqdn\n' % (fqdn))
            logger.error('%s: is not a valid fqdn\n' % (fqdn))
    except Exception as e:
        sys.stderr.write('Error while blocking %s: %s\n' % (fqdn, e))
        logger.error('Error while blocking %s: %s\n' % (fqdn, e))

    def onMessage(self, request, response):
        '''

@ -203,28 +204,27 @@ class message(object):
        # loop through the fields of the form
        # and fill in our values
        try:
            for i in request.json:
            for field in request.json:
                # were we checked?
                if self.name in i:
                    blockfqdn = i.values()[0]
                if 'fqdn' in i:
                    fqdn = i.values()[0]
                if 'duration' in i:
                    duration = i.values()[0]
                if 'comment' in i:
                    comment = i.values()[0]
                if 'referenceid' in i:
                    referenceID = i.values()[0]
                if 'userid' in i:
                    userid = i.values()[0]

                if self.name in field:
                    blockfqdn = field[self.name]
                if 'fqdn' in field:
                    fqdn = field['fqdn']
                if 'duration' in field:
                    duration = field['duration']
                if 'comment' in field:
                    comment = field['comment']
                if 'referenceid' in field:
                    referenceID = field['referenceid']
                if 'userid' in field:
                    userid = field['userid']
            if blockfqdn and fqdn is not None:
                if isFQDN(fqdn):
                    whitelisted = False
                    for whitelist_fqdn in self.options.fqdnwhitelist:
                        if fqdn == whitelist_fqdn:
                            whitelisted = True
                            sys.stdout.write('{0} is whitelisted as part of {1}\n'.format(fqdn, whitelist_fqdn))
                            logger.debug('{0} is whitelisted as part of {1}\n'.format(fqdn, whitelist_fqdn))

                    if not whitelisted:
                        self.blockFQDN(

@ -234,15 +234,15 @@ class message(object):
                            referenceID,
                            userid
                        )
                        sys.stdout.write('added {0} to blocklist\n'.format(fqdn))
                        logger.debug('added {0} to blocklist\n'.format(fqdn))
                    else:
                        sys.stdout.write('not adding {0} to blocklist, it was found in whitelist\n'.format(fqdn))
                        logger.debug('not adding {0} to blocklist, it was found in whitelist\n'.format(fqdn))
                else:
                    sys.stdout.write('not adding {0} to blocklist, invalid fqdn\n'.format(fqdn))
                    logger.error('not adding {0} to blocklist, invalid fqdn\n'.format(fqdn))
                    response.status = "400 invalid FQDN"
                    response.body = "invalid FQDN"
        except Exception as e:
            sys.stderr.write('Error handling request.json %r \n' % (e))
            logger.error('Error handling request.json %r \n' % (e))
            response.status = "500"

        return (request, response)

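The `onMessage` rewrite above replaces `i.values()[0]` with explicit key lookups. Under Python 3 `dict.values()` returns a `dict_values` view, which is not indexable, so the old form raised `TypeError`; the new form also avoids silently grabbing the wrong value from a dict that carries more than one key. The pattern in isolation, against a sample Meteor-style form payload (the payload contents and the `'fqdnblocklist'` checkbox name are illustrative):

```python
# request.json arrives as a list of single-key dicts, one per form field.
payload = [
    {'fqdnblocklist': True},
    {'fqdn': 'evil.example.com'},
    {'duration': '24'},
    {'comment': 'phishing domain'},
]

fields = {}
for field in payload:
    # Python 3 safe: look each value up by its key
    # instead of the old field.values()[0].
    for key in ('fqdn', 'duration', 'comment', 'referenceid', 'userid'):
        if key in field:
            fields[key] = field[key]

print(fields['fqdn'])  # evil.example.com
```

The ipblocklist plugin in the next file gets the same treatment, which is what restores the "block IP" and watchlist actions called out in the v3.1.1 changelog.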
@ -7,10 +7,10 @@ import netaddr
import os
import random
import requests
import sys
from configlib import getConfig, OptionParser
from datetime import datetime, timedelta
from pymongo import MongoClient
from mozdef_util.utilities.logger import logger


def isIPv4(ip):

@ -70,14 +70,14 @@ class message(object):
        self.configfile = './plugins/ipblocklist.conf'
        self.options = None
        if os.path.exists(self.configfile):
            sys.stdout.write('found conf file {0}\n'.format(self.configfile))
            logger.debug('found conf file {0}\n'.format(self.configfile))
            self.initConfiguration()

    def parse_network_whitelist(self, network_whitelist_location):
        networks = []
        with open(network_whitelist_location, "r") as text_file:
            for line in text_file:
                line=line.strip().strip("'").strip('"')
                line = line.strip().strip("'").strip('"')
                if isIPv4(line) or isIPv6(line):
                    networks.append(line)
        return networks

@ -140,11 +140,11 @@ class message(object):
            ipblock = ipblocklist.find_one({'ipaddress': str(ipcidr)})
            if ipblock is None:
                # insert
                ipblock= dict()
                ipblock = dict()
                ipblock['_id'] = genMeteorID()
                # str to get the ip/cidr rather than netblock cidr.
                # i.e. '1.2.3.4/24' not '1.2.3.0/24'
                ipblock['address']= str(ipcidr)
                ipblock['address'] = str(ipcidr)
                ipblock['dateAdded'] = datetime.utcnow()
                # Compute start and end dates
                # default

@ -166,8 +166,8 @@ class message(object):
                ipblock['creator'] = userID
                ipblock['reference'] = referenceID
                ref = ipblocklist.insert(ipblock)
                sys.stdout.write('{0} written to db\n'.format(ref))
                sys.stdout.write('%s: added to the ipblocklist table\n' % (ipaddress))
                logger.debug('{0} written to db\n'.format(ref))
                logger.debug('%s: added to the ipblocklist table\n' % (ipaddress))

                # send to statuspage.io?
                if len(self.options.statuspage_api_key) > 1:

@ -190,17 +190,17 @@ class message(object):
                        headers=headers,
                        data=post_data)
                    if response.ok:
                        sys.stdout.write('%s: notification sent to statuspage.io\n' % (str(ipcidr)))
                        logger.debug('%s: notification sent to statuspage.io\n' % (str(ipcidr)))
                    else:
                        sys.stderr.write('%s: statuspage.io notification failed %s\n' % (str(ipcidr),response.json()))
                        logger.error('%s: statuspage.io notification failed %s\n' % (str(ipcidr), response.json()))
                except Exception as e:
                    sys.stderr.write('Error while notifying statuspage.io for %s: %s\n' %(str(ipcidr),e))
                    logger.error('Error while notifying statuspage.io for %s: %s\n' % (str(ipcidr), e))
            else:
                sys.stderr.write('%s: is already present in the ipblocklist table\n' % (str(ipcidr)))
                logger.error('%s: is already present in the ipblocklist table\n' % (str(ipcidr)))
        else:
            sys.stderr.write('%s: is not a valid ip address\n' % (ipaddress))
            logger.error('%s: is not a valid ip address\n' % (ipaddress))
    except Exception as e:
        sys.stderr.write('Error while blocking %s: %s\n' % (ipaddress, e))
        logger.exception('Error while blocking %s: %s\n' % (ipaddress, e))

    def onMessage(self, request, response):
        '''

@ -220,23 +220,23 @@ class message(object):
        userid = None
        blockip = False

        # loop through the fields of the form
        # and fill in our values
        try:
            for i in request.json:
            # loop through the fields of the form
            # and fill in our values
            for field in request.json:
                # were we checked?
                if self.name in i:
                    blockip = i.values()[0]
                if 'ipaddress' in i:
                    ipaddress = i.values()[0]
                if 'duration' in i:
                    duration = i.values()[0]
                if 'comment' in i:
                    comment = i.values()[0]
                if 'referenceid' in i:
                    referenceID = i.values()[0]
                if 'userid' in i:
                    userid = i.values()[0]
                if self.name in field:
                    blockip = field[self.name]
                if 'ipaddress' in field:
                    ipaddress = field['ipaddress']
                if 'duration' in field:
                    duration = field['duration']
                if 'comment' in field:
                    comment = field['comment']
                if 'referenceid' in field:
                    referenceID = field['referenceid']
                if 'userid' in field:
                    userid = field['userid']

            if blockip and ipaddress is not None:
                # figure out the CIDR mask

@ -251,7 +251,7 @@ class message(object):
                    whitelist_network = netaddr.IPNetwork(whitelist_range)
                    if ipcidr in whitelist_network:
                        whitelisted = True
                        sys.stdout.write('{0} is whitelisted as part of {1}\n'.format(ipcidr, whitelist_network))
                        logger.debug('{0} is whitelisted as part of {1}\n'.format(ipcidr, whitelist_network))

                if not whitelisted:
                    self.blockIP(str(ipcidr),

@ -259,10 +259,10 @@ class message(object):
                        duration,
                        referenceID,
                        userid)
                    sys.stdout.write('added {0} to blocklist\n'.format(ipaddress))
                    logger.info('added {0} to blocklist\n'.format(ipaddress))
                else:
                    sys.stdout.write('not adding {0} to blocklist, it was found in whitelist\n'.format(ipaddress))
                    logger.info('not adding {0} to blocklist, it was found in whitelist\n'.format(ipaddress))
        except Exception as e:
            sys.stderr.write('Error handling request.json %r \n'% (e))
            logger.error('Error handling request.json %r \n' % (e))

        return (request, response)

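Before blocking, the plugin above tests the requested address against each whitelisted network with netaddr's `ipcidr in whitelist_network`. The same membership check expressed with the standard-library `ipaddress` module (a stdlib equivalent for illustration, not the plugin's actual netaddr dependency; the whitelist ranges are made up):

```python
import ipaddress

# Parsed once from the network whitelist file in the real plugin.
whitelist = [ipaddress.ip_network(n) for n in ('10.0.0.0/8', '192.168.0.0/16')]

def is_whitelisted(ip):
    # An address is whitelisted if it falls inside any whitelisted network.
    addr = ipaddress.ip_address(ip)
    return any(addr in network for network in whitelist)


print(is_whitelisted('10.1.2.3'))  # True
print(is_whitelisted('8.8.8.8'))   # False
```

Doing this containment check before the database insert is what keeps internal ranges from ever reaching the blocklist, even when an operator submits one by mistake.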
@ -5,12 +5,12 @@

import json
import os
import sys
from configlib import getConfig, OptionParser
from datetime import datetime, timedelta
from mozdef_util.elasticsearch_client import ElasticsearchClient
from mozdef_util.query_models import SearchQuery, RangeMatch, Aggregation, ExistsMatch, PhraseMatch
from mozdef_util.utilities.toUTC import toUTC
from mozdef_util.utilities.logger import logger


class message(object):

@ -44,7 +44,7 @@ class message(object):
        self.configfile = './plugins/logincounts.conf'
        self.options = None
        if os.path.exists(self.configfile):
            sys.stdout.write('found conf file {0}\n'.format(self.configfile))
            logger.debug('found conf file {0}\n'.format(self.configfile))
            self.initConfiguration()

    def onMessage(self, request, response):

@@ -4,11 +4,12 @@
 # Copyright (c) 2014 Mozilla Corporation

 import os
 import sys
 import configparser
 import netaddr
 from boto3.session import Session

+from mozdef_util.utilities.logger import logger


 def isIPv4(ip):
     try:

@@ -63,7 +64,7 @@ class message(object):
         self.options = None
         self.multioptions = []
         if os.path.exists(self.configfile):
-            sys.stdout.write('found conf file {0}\n'.format(self.configfile))
+            logger.debug('found conf file {0}\n'.format(self.configfile))
             self.initConfiguration()

     def initConfiguration(self):

@@ -100,7 +101,7 @@ class message(object):
         if len(routetable['Associations']) > 0:
             if 'SubnetId' in routetable['Associations'][0]:
                 subnet_id = routetable['Associations'][0]['SubnetId']
-        sys.stdout.write('{0} {1}\n'.format(rt_id, vpc_id))
+        logger.debug('{0} {1}\n'.format(rt_id, vpc_id))

         response = client.describe_network_interfaces(
             Filters=[

@@ -131,10 +132,10 @@ class message(object):
             ]
         )

-        sys.stdout.write('{0}\n'.format(response))
+        logger.debug('{0}\n'.format(response))
         if len(response['NetworkInterfaces']) > 0:
             bheni_id = response['NetworkInterfaces'][0]['NetworkInterfaceId']
-            sys.stdout.write('{0} {1} {2}\n'.format(rt_id, vpc_id, bheni_id))
+            logger.debug('{0} {1} {2}\n'.format(rt_id, vpc_id, bheni_id))

             # get a handle to a route table associated with a netsec-private subnet
             route_table = ec2.RouteTable(rt_id)

@@ -144,11 +145,11 @@ class message(object):
                 NetworkInterfaceId=bheni_id,
             )
         else:
-            sys.stdout.write('Skipping route table {0} in the VPC {1} - blackhole ENI could not be found\n'.format(rt_id, vpc_id))
+            logger.debug('Skipping route table {0} in the VPC {1} - blackhole ENI could not be found\n'.format(rt_id, vpc_id))
             continue

     except Exception as e:
-        sys.stderr.write('Error while creating a blackhole entry %s: %r\n' % (ipaddress, e))
+        logger.error('Error while creating a blackhole entry %s: %r\n' % (ipaddress, e))

     def onMessage(self, request, response):
         '''

@@ -163,29 +164,28 @@ class message(object):
         # loop through the fields of the form
         # and fill in our values
         try:
-            for i in request.json:
+            for field in request.json:
                 # were we checked?
-                if self.name in i:
-                    sendToBHVPC = i.values()[0]
-                if 'ipaddress' in i:
-                    ipaddress = i.values()[0]
-
+                if self.name in field:
+                    sendToBHVPC = field[self.name]
+                if 'ipaddress' in field:
+                    ipaddress = field['ipaddress']
             # are we configured?
             if self.multioptions is None:
-                sys.stderr.write("Customs server blockip requested but not configured\n")
+                logger.error("Customs server blockip requested but not configured\n")
                 sendToBHVPC = False

             if sendToBHVPC and ipaddress is not None:
                 # figure out the CIDR mask
                 if isIPv4(ipaddress) or isIPv6(ipaddress):
-                    ipcidr=netaddr.IPNetwork(ipaddress)
+                    ipcidr = netaddr.IPNetwork(ipaddress)
                     if not ipcidr.ip.is_loopback() \
                        and not ipcidr.ip.is_private() \
                        and not ipcidr.ip.is_reserved():
                         ipaddress = str(ipcidr.cidr)
                         self.addBlackholeEntry(ipaddress)
-                        sys.stdout.write('Blackholed {0}\n'.format(ipaddress))
+                        logger.info('Blackholed {0}\n'.format(ipaddress))
         except Exception as e:
-            sys.stderr.write('Error handling request.json %r \n'% (e))
+            logger.error('Error handling request.json %r \n' % (e))

         return (request, response)

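A note on the `i.values()[0]` → `field[self.name]` change above: `dict.values()` returns an indexable list on Python 2 but a non-indexable view on Python 3, so the old pattern raises `TypeError` after the Python 3 migration. A minimal standalone sketch (sample data invented for illustration, not MozDef code):

```python
# Two form fields as the plugin receives them: a list of single-key dicts.
form_fields = [{"blockip": "on"}, {"ipaddress": "203.0.113.7"}]

# Old pattern from the removed lines: index into dict.values().
# On Python 3 this raises TypeError because values() is a view, not a list.
try:
    value = form_fields[1].values()[0]
except TypeError:
    value = None  # this branch is taken on Python 3

# New pattern from the added lines: look the value up by key instead,
# which behaves identically on Python 2 and 3.
ipaddress = None
for field in form_fields:
    if "ipaddress" in field:
        ipaddress = field["ipaddress"]
```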
@@ -5,11 +5,12 @@
 import os
 import random
 import sys
 from configlib import getConfig, OptionParser
 from datetime import datetime, timedelta
 from pymongo import MongoClient

+from mozdef_util.utilities.logger import logger


 def genMeteorID():
     return('%024x' % random.randrange(16**24))

@@ -43,7 +44,7 @@ class message(object):
         self.configfile = './plugins/watchlist.conf'
         self.options = None
         if os.path.exists(self.configfile):
-            sys.stdout.write('found conf file {0}\n'.format(self.configfile))
+            logger.debug('found conf file {0}\n'.format(self.configfile))
             self.initConfiguration()

     def initConfiguration(self):

@@ -100,13 +101,13 @@ class message(object):
             watched['creator']=userID
             watched['reference']=referenceID
             ref=watchlist.insert(watched)
-            sys.stdout.write('{0} written to db.\n'.format(ref))
-            sys.stdout.write('%s added to the watchlist table.\n' % (watchcontent))
+            logger.debug('{0} written to db.\n'.format(ref))
+            logger.debug('%s added to the watchlist table.\n' % (watchcontent))

         else:
-            sys.stderr.write('%s is already present in the watchlist table\n' % (str(watchcontent)))
+            logger.error('%s is already present in the watchlist table\n' % (str(watchcontent)))
     except Exception as e:
-        sys.stderr.write('Error while watching %s: %s\n' % (watchcontent, e))
+        logger.error('Error while watching %s: %s\n' % (watchcontent, e))

     def onMessage(self, request, response):
         '''

@@ -125,24 +126,22 @@ class message(object):
         # loop through the fields of the form
         # and fill in our values
         try:
-            for i in request.json:
-                # were we checked?
-                if self.name in i.keys():
-                    watchitem = i.values()[0]
-                if 'watchcontent' in i.keys():
-                    watchcontent = i.values()[0]
-                if 'duration' in i.keys():
-                    duration = i.values()[0]
-                if 'comment' in i.keys():
-                    comment = i.values()[0]
-                if 'referenceid' in i.keys():
-                    referenceID = i.values()[0]
-                if 'userid' in i.keys():
-                    userid = i.values()[0]
-
+            for field in request.json:
+                if self.name in field:
+                    watchitem = field[self.name]
+                if 'watchcontent' in field:
+                    watchcontent = field['watchcontent']
+                if 'duration' in field:
+                    duration = field['duration']
+                if 'comment' in field:
+                    comment = field['comment']
+                if 'referenceid' in field:
+                    referenceID = field['referenceid']
+                if 'userid' in field:
+                    userid = field['userid']
             if watchitem and watchcontent is not None:
                 if len(watchcontent) < 2:
-                    sys.stderr.write('{0} does not meet requirements. Not added. \n'.format(watchcontent))
+                    logger.error('{0} does not meet requirements. Not added. \n'.format(watchcontent))

                 else:
                     self.watchItem(str(watchcontent),

@@ -152,6 +151,6 @@ class message(object):
                 userid)

         except Exception as e:
-            sys.stderr.write('Error handling request.json %r \n'% (e))
+            logger.error('Error handling request.json %r \n' % (e))

         return (request, response)

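The `genMeteorID()` helper in the watchlist plugin above is unchanged by this commit but worth a gloss: `'%024x' % random.randrange(16**24)` produces a zero-padded 24-character lowercase hex string, the same shape as a MongoDB/Meteor object id. A self-contained restatement:

```python
import random

def gen_meteor_id():
    # 16**24 possible values, formatted as exactly 24 hex digits
    # (zero-padded on the left when the random value is small).
    return '%024x' % random.randrange(16**24)

meteor_id = gen_meteor_id()
```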
@@ -0,0 +1,99 @@
+# This Source Code Form is subject to the terms of the Mozilla Public
+# License, v. 2.0. If a copy of the MPL was not distributed with this
+# file, You can obtain one at http://mozilla.org/MPL/2.0/.
+# Copyright (c) 2017 Mozilla Corporation
+from .positive_alert_test_case import PositiveAlertTestCase
+from .negative_alert_test_case import NegativeAlertTestCase
+
+from .alert_test_suite import AlertTestSuite
+
+
+class TestAlertLdapPasswordSpray(AlertTestSuite):
+    alert_filename = "ldap_password_spray"
+    # This event is the default positive event that will cause the
+    # alert to trigger
+    default_event = {
+        "_source": {
+            "category": "ldap",
+            "details": {
+                "client": "1.2.3.4",
+                "requests": [
+                    {
+                        'verb': 'BIND',
+                        'details': [
+                            'dn="mail=jsmith@example.com,o=com,dc=example"',
+                            'method=128'
+                        ]
+                    }
+                ],
+                "response": {
+                    "error": 'LDAP_INVALID_CREDENTIALS',
+                }
+            }
+        }
+    }
+
+    # This alert is the expected result from running this task
+    default_alert = {
+        "category": "ldap",
+        "tags": ["ldap"],
+        "severity": "WARNING",
+        "summary": "LDAP Password Spray Attack in Progress from 1.2.3.4 targeting the following account(s): jsmith@example.com",
+    }
+
+    # This alert is the expected result from this task against multiple matching events
+    default_alert_aggregated = AlertTestSuite.copy(default_alert)
+    default_alert_aggregated[
+        "summary"
+    ] = "LDAP Password Spray Attack in Progress from 1.2.3.4 targeting the following account(s): jsmith@example.com"
+
+    test_cases = []
+
+    test_cases.append(
+        PositiveAlertTestCase(
+            description="Positive test with default events and default alert expected",
+            events=AlertTestSuite.create_events(default_event, 1),
+            expected_alert=default_alert,
+        )
+    )
+
+    test_cases.append(
+        PositiveAlertTestCase(
+            description="Positive test with default events and default alert expected - dedup",
+            events=AlertTestSuite.create_events(default_event, 2),
+            expected_alert=default_alert,
+        )
+    )
+
+    events = AlertTestSuite.create_events(default_event, 10)
+    for event in events:
+        event["_source"]["details"]["response"]["error"] = "LDAP_SUCCESS"
+    test_cases.append(
+        NegativeAlertTestCase(
+            description="Negative test with default negative event", events=events
+        )
+    )
+
+    events = AlertTestSuite.create_events(default_event, 10)
+    for event in events:
+        event["_source"]["category"] = "bad"
+    test_cases.append(
+        NegativeAlertTestCase(
+            description="Negative test case with events with incorrect category",
+            events=events,
+        )
+    )
+
+    events = AlertTestSuite.create_events(default_event, 10)
+    for event in events:
+        event["_source"][
+            "utctimestamp"
+        ] = AlertTestSuite.subtract_from_timestamp_lambda({"minutes": 241})
+        event["_source"][
+            "receivedtimestamp"
+        ] = AlertTestSuite.subtract_from_timestamp_lambda({"minutes": 241})
+    test_cases.append(
+        NegativeAlertTestCase(
+            description="Negative test case with old timestamp", events=events
+        )
+    )

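The test fixture above expects the alert summary to name `jsmith@example.com`, which must come from the `dn="mail=…"` detail of the BIND request. The alert's actual parsing code is not part of this diff; a hypothetical sketch of one way to recover the account from that detail string:

```python
import re

# Detail string copied from the test fixture above.
detail = 'dn="mail=jsmith@example.com,o=com,dc=example"'

# Capture everything after "mail=" up to the next comma or closing quote.
# This pattern is illustrative only; the real alert may parse differently.
match = re.search(r'mail=([^,"]+)', detail)
account = match.group(1) if match else None
```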
@@ -411,9 +411,23 @@ class TestGetIndices(ElasticsearchClientTest):
         if self.config_delete_indexes:
             self.es_client.create_index('test_index')
             time.sleep(1)
-            indices = self.es_client.get_indices()
-            indices.sort()
-            assert indices == [self.alert_index_name, self.previous_event_index_name, self.event_index_name, 'test_index']
+            all_indices = self.es_client.get_indices()
+            all_indices.sort()
+            open_indices = self.es_client.get_open_indices()
+            open_indices.sort()
+            expected_indices = [self.alert_index_name, self.previous_event_index_name, self.event_index_name, 'test_index']
+            assert all_indices == expected_indices
+            assert open_indices == expected_indices
+
+    def test_closed_get_indices(self):
+        if self.config_delete_indexes:
+            self.es_client.create_index('test_index')
+            time.sleep(1)
+            self.es_client.close_index('test_index')
+            all_indices = self.es_client.get_indices()
+            open_indices = self.es_client.get_open_indices()
+            assert 'test_index' in all_indices
+            assert 'test_index' not in open_indices


 class TestIndexExists(ElasticsearchClientTest):
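The new `test_closed_get_indices` above pins down the contract the changelog's "ability to get open indices" feature implies: `get_indices()` lists every index regardless of state, while `get_open_indices()` excludes closed ones. The MozDef client's internals aren't shown in this diff, so here is a minimal in-memory model of that contract (a stand-in, not the real ElasticsearchClient):

```python
class FakeEsClient:
    """Models get_indices() vs. get_open_indices() as a filter on index state,
    the distinction the new test exercises."""

    def __init__(self):
        # index name -> state, mirroring what _cat/indices reports
        self._indices = {}

    def create_index(self, name):
        self._indices[name] = "open"

    def close_index(self, name):
        self._indices[name] = "close"

    def get_indices(self):
        # All indices, open or closed.
        return sorted(self._indices)

    def get_open_indices(self):
        # Only indices still in the "open" state.
        return sorted(n for n, s in self._indices.items() if s == "open")

es = FakeEsClient()
es.create_index("events")
es.create_index("test_index")
es.close_index("test_index")
```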