Merge remote-tracking branch 'upstream/master' into nagios_alert_pready

Michal Purzynski 2019-01-04 11:31:18 -08:00
Parents b6a8d018df 4db7b02281
Commit c8f4f35285
177 changed files with 5810 additions and 1692 deletions

26
.flake8
View file

@@ -1,21 +1,19 @@
[flake8]
exclude =
.flake8
.git
*__init__.py
per-file-ignores =
# Ignore 'library imported but unused' for only the alert config files
# since we stub timedelta and crontab
alerts/lib/config.py: F401
docker/compose/mozdef_alerts/files/config.py: F401
# Ignore any import statements in __init__ files
mozdef_util/mozdef_util/query_models/__init__.py: F401
# Ignore redefinition of index name
rest/index.py: F811
ignore =
E123 # closing bracket does not match indentation of opening bracket's line
E225 # missing whitespace around operator
E226 # missing whitespace around arithmetic operator
E228 # missing whitespace around modulo operator
E231 # missing whitespace after ','
E265 # block comment should start with '# '
E402 # module level import not at top of file
E501 # line too long
E722 # do not use bare except
F401 # library imported but unused
F601 # dictionary key 'tags' repeated with different values
F811 # redefinition of unused 'datetime' from line 10
F821 # undefined name 'SysLogHandler'
F841 # local variable 'CIDR' is assigned to but never used
W503 # line break before binary operator

44
.github/ISSUE_TEMPLATE.md vendored
View file

@@ -1,44 +0,0 @@
<!---
Verify first that your issue/request is not already reported on GitHub.
Also test if the latest release, and master branch are affected too.
-->
#### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
- Feature Idea
- Documentation Report
#### COMPONENT NAME
<!--- Name of the cron/worker/module/plugin/task/feature -->
#### CONFIGURATION
<!---
Mention any settings you have changed/added/removed
-->
#### OS / ENVIRONMENT
<!---
Mention the OS you are running MozDef from and versions of all MozDef components you are running
-->
#### DESCRIPTION
<!--- Explain the problem briefly -->
#### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem, using a minimal test-case.
For new features, show how the feature would be used.
-->
<!--- Paste any log messages/configurations/scripts below that are applicable -->
<!--- You can also paste gist.github.com links for larger files -->
#### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
#### ACTUAL RESULTS
<!--- What actually happened? -->

View file

@@ -34,15 +34,20 @@ run-cloudy-mozdef: ## Run the MozDef containers necessary to run in AWS (`cloudy
restart-cloudy-mozdef:
docker-compose -f docker/compose/docker-compose-cloudy-mozdef.yml -p $(NAME) restart
.PHONY: tests run-tests-resources run-tests
.PHONY: tests run-tests-resources run-tests-resources-external run-tests
test: build-tests run-tests
tests: build-tests run-tests ## Run all tests (getting/building images as needed)
run-tests-resources-external: ## Just spin up external resources for tests and have them listen externally
docker-compose -f docker/compose/docker-compose-tests.yml -p test-$(NAME) run -p 9200:9200 -d elasticsearch
docker-compose -f docker/compose/docker-compose-tests.yml -p test-$(NAME) run -p 5672:5672 -d rabbitmq
run-tests-resources: ## Just run the external resources required for tests
docker-compose -f docker/compose/docker-compose-tests.yml -p test-$(NAME) up -d
run-test:
run-tests: run-tests-resources ## Just run the tests (no build/get). Use `make TEST_CASE=tests/...` for specific tests only
docker run -it --rm mozdef/mozdef_tester bash -c "source /opt/mozdef/envs/python/bin/activate && flake8 --config .flake8 ./"
docker run -it --rm --network=test-mozdef_default mozdef/mozdef_tester bash -c "source /opt/mozdef/envs/python/bin/activate && py.test --delete_indexes --delete_queues $(TEST_CASE)"
rebuild-run-tests: build-tests run-tests
.PHONY: build
build: ## Build local MozDef images (use make NO_CACHE=--no-cache build to disable caching)

View file

@@ -22,6 +22,10 @@ The Mozilla Defense Platform (MozDef) seeks to automate the security incident ha
MozDef is in production at Mozilla where we are using it to process over 300 million events per day.
## Give MozDef a Try in AWS:
[![Launch MozDef](docs/source/images/cloudformation-launch-stack.png)](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=mozdef-for-aws&templateURL=https://s3-us-west-2.amazonaws.com/mozdef.infosec.allizom.org/cf/mozdef-parent.yml)
## Documentation:
http://mozdef.readthedocs.org/en/latest/

View file

@@ -18,7 +18,7 @@ class AlertBruteforceSsh(AlertTask):
search_query.add_must([
PhraseMatch('summary', 'failed'),
TermMatch('details.program', 'sshd'),
TermsMatch('summary', ['login', 'invalid', 'ldap_count_entries', 'publickey'])
TermsMatch('summary', ['login', 'invalid', 'ldap_count_entries', 'publickey', 'keyboard'])
])
for ip_address in self.config.skiphosts.split():

View file

@@ -6,7 +6,7 @@
# Copyright (c) 2017 Mozilla Corporation
from lib.alerttask import AlertTask
from mozdef_util.query_models import SearchQuery, TermMatch, ExistsMatch, PhraseMatch
from mozdef_util.query_models import SearchQuery, TermMatch, PhraseMatch
class AlertBugzillaPBruteforce(AlertTask):

View file

@@ -15,10 +15,10 @@ class AlertCloudtrailLoggingDisabled(AlertTask):
search_query.add_must([
TermMatch('source', 'cloudtrail'),
TermMatch('eventName', 'StopLogging')
TermMatch('eventname', 'StopLogging')
])
search_query.add_must_not(TermMatch('errorCode', 'AccessDenied'))
search_query.add_must_not(TermMatch('errorcode', 'AccessDenied'))
self.filtersManual(search_query)
self.searchEventsSimple()
@@ -29,6 +29,6 @@ class AlertCloudtrailLoggingDisabled(AlertTask):
tags = ['cloudtrail', 'aws', 'cloudtrailpagerduty']
severity = 'CRITICAL'
summary = 'Cloudtrail Logging Disabled: ' + event['_source']['requestParameters']['name']
summary = 'Cloudtrail Logging Disabled: ' + event['_source']['requestparameters']['name']
return self.createAlertDict(summary, category, tags, [event], severity)

View file

@@ -13,7 +13,7 @@ class AlertDuoAuthFail(AlertTask):
search_query = SearchQuery(minutes=15)
search_query.add_must([
TermMatch('category', 'event'),
TermMatch('category', 'authentication'),
ExistsMatch('details.sourceipaddress'),
ExistsMatch('details.username'),
PhraseMatch('details.result', 'FRAUD')

View file

@@ -40,9 +40,9 @@ class AlertAccountCreations(AlertTask):
severity = 'INFO'
summary = ('{0} fxa account creation attempts by {1}'.format(aggreg['count'], aggreg['value']))
emails = self.mostCommon(aggreg['allevents'],'_source.details.email')
#did they try to create more than one email account?
#or just retry an existing one
emails = self.mostCommon(aggreg['allevents'], '_source.details.email')
# did they try to create more than one email account?
# or just retry an existing one
if len(emails) > 1:
for i in emails[:5]:
summary += ' {0} ({1} hits)'.format(i[0], i[1])
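The `mostCommon` helper referenced above returns value/count pairs for a dotted field path, most frequent first. A minimal sketch of that assumed behaviour (helper name and event shapes are illustrative, not MozDef's actual implementation):

```python
from collections import Counter

def most_common(events, dotted_key):
    # Sketch of AlertTask.mostCommon's assumed behaviour: walk the dotted
    # path inside each event dict and tally the leaf values.
    values = []
    for event in events:
        node = event
        for part in dotted_key.split('.'):
            if not isinstance(node, dict):
                break
            node = node.get(part, {})
        if node and not isinstance(node, dict):
            values.append(node)
    return Counter(values).most_common()

events = [{'_source': {'details': {'email': 'a@example.com'}}},
          {'_source': {'details': {'email': 'a@example.com'}}},
          {'_source': {'details': {'email': 'b@example.com'}}}]
print(most_common(events, '_source.details.email'))
# [('a@example.com', 2), ('b@example.com', 1)]
```

This is why `len(emails) > 1` distinguishes many distinct accounts from repeated retries of one account.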

View file

@@ -0,0 +1,43 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
from lib.alerttask import AlertTask
from mozdef_util.query_models import SearchQuery, TermMatch, ExistsMatch
class AlertGuardDutyProbe(AlertTask):
def main(self):
# Create a query to look back the last 20 minutes
search_query = SearchQuery(minutes=20)
# Add search terms to our query
search_query.add_must([
TermMatch('source', 'guardduty'),
TermMatch('details.finding.action.actionType', 'PORT_PROBE'),
ExistsMatch('details.sourceipaddress'),
])
self.filtersManual(search_query)
# Search aggregations on field 'sourceipaddress'
# keep X samples of events at most
self.searchEventsAggregated('details.sourceipaddress', samplesLimit=10)
# alert when >= X matching events in an aggregation
self.walkAggregations(threshold=1)
# Set alert properties
def onAggregation(self, aggreg):
# aggreg['count']: number of items in the aggregation, ex: number of failed login attempts
# aggreg['value']: value of the aggregation field, ex: toto@example.com
# aggreg['events']: list of events in the aggregation
category = 'bruteforce'
tags = ['guardduty', 'bruteforce']
severity = 'INFO'
summary = "Guard Duty Port Probe by {}".format(aggreg['value'])
# Create the alert object based on these properties
return self.createAlertDict(summary, category, tags, aggreg['events'], severity)
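The commented fields above describe the aggregation dict each alert's `onAggregation` receives. A minimal sketch of that shape (values are hypothetical):

```python
# Hypothetical aggregation dict matching the commented fields:
# count = number of matching events, value = the aggregated field
# (details.sourceipaddress here), events = the sampled events.
aggreg = {
    'count': 3,
    'value': '203.0.113.7',
    'events': [{'_source': {'details': {'sourceipaddress': '203.0.113.7'}}}] * 3,
}

summary = "Guard Duty Port Probe by {}".format(aggreg['value'])
print(summary)
```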

View file

@@ -6,7 +6,7 @@
# Copyright (c) 2017 Mozilla Corporation
from lib.alerttask import AlertTask
from mozdef_util.query_models import SearchQuery, TermMatch, ExistsMatch, PhraseMatch
from mozdef_util.query_models import SearchQuery, TermMatch, PhraseMatch
class AlertHTTPBruteforce(AlertTask):

View file

@@ -6,7 +6,7 @@
# Copyright (c) 2017 Mozilla Corporation
from lib.alerttask import AlertTask
from mozdef_util.query_models import SearchQuery, TermMatch, ExistsMatch, PhraseMatch
from mozdef_util.query_models import SearchQuery, TermMatch, PhraseMatch
class AlertHTTPErrors(AlertTask):

View file

@@ -1,6 +1,3 @@
import os
import sys
from mozdef_util.plugin_set import PluginSet
from mozdef_util.utilities.logger import logger

View file

@@ -0,0 +1,4 @@
{
"sourcemustmatch":"[10.0.0.0 TO 10.255.255.255]",
"sourcemustnotmatch":"10.33.44.54 OR 10.88.77.54 OR 10.76.54.54 OR 10.251.30.138 OR 10.54.65.234"
}
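The `sourcemustmatch` value uses Lucene range syntax and covers exactly the RFC 1918 `10.0.0.0/8` network. An equivalent membership check in plain Python (a sketch, not MozDef code):

```python
import ipaddress

# The Lucene range [10.0.0.0 TO 10.255.255.255] in the config above
# corresponds to the 10.0.0.0/8 private network.
internal = ipaddress.ip_network('10.0.0.0/8')

def is_internal(ip):
    # True when ip falls inside the 10/8 range the alert filters on.
    return ipaddress.ip_address(ip) in internal

print(is_internal('10.33.44.54'))  # an excluded scanner, but still inside 10/8
print(is_internal('192.0.2.1'))
```

The `sourcemustnotmatch` list then carves known scanners back out of that range.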

View file

@@ -0,0 +1,47 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2018 Mozilla Corporation
from lib.alerttask import AlertTask, add_hostname_to_ip
from mozdef_util.query_models import SearchQuery, TermMatch, QueryStringMatch, PhraseMatch
class NSMScanAddress(AlertTask):
def __init__(self):
AlertTask.__init__(self)
self._config = self.parse_json_alert_config('nsm_scan_address.json')
def main(self):
search_query = SearchQuery(minutes=1)
search_query.add_must([
TermMatch('category', 'bro'),
TermMatch('details.source', 'notice'),
PhraseMatch('details.note', 'Scan::Address_Scan'),
QueryStringMatch('details.sourceipaddress: {}'.format(self._config['sourcemustmatch']))
])
search_query.add_must_not([
QueryStringMatch('details.sourceipaddress: {}'.format(self._config['sourcemustnotmatch']))
])
self.filtersManual(search_query)
self.searchEventsAggregated('details.sourceipaddress', samplesLimit=10)
self.walkAggregations(threshold=1)
def onAggregation(self, aggreg):
category = 'nsm'
severity = 'NOTICE'
tags = ['nsm', "bro", 'addressscan']
indicators = 'unknown'
x = aggreg['events'][0]['_source']
if 'details' in x:
if 'indicators' in x['details']:
indicators = x['details']['sourceipaddress']
indicators_info = add_hostname_to_ip(indicators, '{0} ({1})', require_internal=False)
summary = 'Address scan from {}'.format(indicators_info)
return self.createAlertDict(summary, category, tags, aggreg['events'], severity)

View file

@@ -0,0 +1,5 @@
{
"sourcemustmatch":"[10.0.0.0 TO 10.255.255.255]",
"sourcemustnotmatch":"10.33.44.54 OR 10.88.77.54 OR 10.76.54.54 OR 10.251.30.138 OR 10.54.65.234",
"destinationmustnotmatch": "*192.168*"
}

50
alerts/nsm_scan_port.py Normal file
View file

@@ -0,0 +1,50 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2018 Mozilla Corporation
from lib.alerttask import AlertTask, add_hostname_to_ip
from mozdef_util.query_models import SearchQuery, TermMatch, QueryStringMatch, PhraseMatch
class NSMScanPort(AlertTask):
def __init__(self):
AlertTask.__init__(self)
self._config = self.parse_json_alert_config('nsm_scan_port.json')
def main(self):
search_query = SearchQuery(minutes=2)
search_query.add_must([
TermMatch('category', 'bro'),
TermMatch('details.source', 'notice'),
PhraseMatch('details.note', 'Scan::Port_Scan'),
QueryStringMatch('details.sourceipaddress: {}'.format(self._config['sourcemustmatch']))
])
search_query.add_must_not([
QueryStringMatch('details.sourceipaddress: {}'.format(self._config['sourcemustnotmatch']))
])
search_query.add_must_not([
QueryStringMatch('details.msg: {}'.format(self._config['destinationmustnotmatch']))
])
self.filtersManual(search_query)
self.searchEventsAggregated('details.sourceipaddress', samplesLimit=10)
self.walkAggregations(threshold=1)
def onAggregation(self, aggreg):
category = 'nsm'
severity = 'WARNING'
tags = ['nsm', "bro", 'portscan']
indicators = 'unknown'
x = aggreg['events'][0]['_source']
if 'details' in x:
if 'indicators' in x['details']:
indicators = x['details']['sourceipaddress']
indicators_info = add_hostname_to_ip(indicators, '{0} ({1})', require_internal=False)
summary = 'Port scan from {}'.format(indicators_info)
return self.createAlertDict(summary, category, tags, aggreg['events'], severity)

View file

@@ -0,0 +1,4 @@
{
"sourcemustmatch":"[10.0.0.0 TO 10.255.255.255]",
"sourcemustnotmatch":"10.33.44.54 OR 10.88.77.54 OR 10.76.54.54 OR 10.251.30.138 OR 10.54.65.234"
}

46
alerts/nsm_scan_random.py Normal file
View file

@@ -0,0 +1,46 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2018 Mozilla Corporation
from lib.alerttask import AlertTask
from mozdef_util.query_models import SearchQuery, TermMatch, QueryStringMatch, PhraseMatch
class NSMScanRandom(AlertTask):
def __init__(self):
AlertTask.__init__(self)
self._config = self.parse_json_alert_config('nsm_scan_random.json')
def main(self):
search_query = SearchQuery(minutes=1)
search_query.add_must([
TermMatch('category', 'bro'),
TermMatch('details.source', 'notice'),
PhraseMatch('details.note', 'Scan::Random_Scan'),
QueryStringMatch('details.sourceipaddress: {}'.format(self._config['sourcemustmatch']))
])
search_query.add_must_not([
QueryStringMatch('details.sourceipaddress: {}'.format(self._config['sourcemustnotmatch']))
])
self.filtersManual(search_query)
self.searchEventsAggregated('details.sourceipaddress', samplesLimit=10)
self.walkAggregations(threshold=1)
def onAggregation(self, aggreg):
category = 'nsm'
severity = 'WARNING'
tags = ['nsm', "bro", 'randomscan']
indicators = 'unknown'
x = aggreg['events'][0]['_source']
if 'details' in x:
if 'indicators' in x['details']:
indicators = x['details']['sourceipaddress']
summary = 'Random scan from {}'.format(indicators)
return self.createAlertDict(summary, category, tags, aggreg['events'], severity)

View file

@@ -5,7 +5,6 @@
import hjson
import os
import sys
from binascii import b2a_hex
import boto3
import datetime

View file

@@ -65,7 +65,7 @@ class message(object):
}
]
})
r = requests.post(
requests.post(
'https://events.pagerduty.com/generic/2010-04-15/create_event.json',
headers=headers,
data=payload,

View file

View file

@@ -6,24 +6,24 @@
# Copyright (c) 2014 Mozilla Corporation
from urlparse import urlparse
from lib.alerttask import AlertTask
from mozdef_util.query_models import SearchQuery, TermMatch, QueryStringMatch, ExistsMatch, PhraseMatch, WildcardMatch
from mozdef_util.query_models import SearchQuery, TermMatch, QueryStringMatch
class AlertProxyDropExfilDomains(AlertTask):
class AlertProxyExfilDomains(AlertTask):
def main(self):
self.parse_config('proxy_drop_exfil_domains.conf', ['exfil_domains'])
self.parse_config('proxy_exfil_domains.conf', ['exfil_domains'])
search_query = SearchQuery(minutes=20)
search_query.add_must([
TermMatch('category', 'squid'),
TermMatch('tags', 'squid'),
TermMatch('details.proxyaction', "TCP_DENIED/-")
])
# Only notify on certain domains listed in the config
domain_regex = "/({0}).*/".format(
domain_regex = "/.*({0}).*/".format(
self.config.exfil_domains.replace(',', '|'))
search_query.add_must([
QueryStringMatch('details.destination: {}'.format(domain_regex))
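The regex handed to `QueryStringMatch` is built from the comma-separated `exfil_domains` config value. A sketch of the construction (the example domains are hypothetical; the real list comes from `proxy_exfil_domains.conf`):

```python
import re

# Hypothetical config value standing in for self.config.exfil_domains.
exfil_domains = 'pastebin.example,transfer.example'

# Same construction as in the alert: commas become regex alternation,
# and the surrounding /.*(...).*/  is Lucene regex-query syntax.
domain_regex = "/.*({0}).*/".format(exfil_domains.replace(',', '|'))
print(domain_regex)  # /.*(pastebin.example|transfer.example).*/

# The inner pattern behaves like an ordinary regex alternation:
inner = domain_regex.strip('/')
print(bool(re.search(inner, 'https://pastebin.example/raw/abc')))
```

Note the dots in the domains are not escaped, so any character matches in that position; the alert code above shares this property.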
@@ -49,10 +49,16 @@ class AlertProxyDropExfilDomains(AlertTask):
exfil_domains = set()
for event in aggreg['allevents']:
domain = event['_source']['details']['destination'].split(':')
exfil_domains.add(domain[0])
try:
domain = urlparse(event['_source']['details']['destination']).netloc
except Exception:
# We already have a domain, not a URL
target = event['_source']['details']['destination'].split(':')
domain = target[0]
summary = 'Suspicious Proxy DROP event(s) detected from {0} to the following exfil domain(s): {1}'.format(
exfil_domains.add(domain)
summary = 'Suspicious Proxy event(s) detected from {0} to the following exfil domain(s): {1}'.format(
aggreg['value'],
",".join(sorted(exfil_domains))
)
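The try/except added above distinguishes full URLs (where `urlparse().netloc` yields the host) from bare `host:port` destinations. A sketch of that fallback in Python 3, where the module is `urllib.parse` rather than Python 2's `urlparse` imported above (the helper name is hypothetical; the alert inlines this logic):

```python
from urllib.parse import urlparse

def extract_domain(destination):
    # Best-effort domain extraction mirroring the alert's fallback:
    # a full URL has a netloc; a bare "host:port" does not.
    netloc = urlparse(destination).netloc
    if netloc:
        return netloc.split(':')[0]
    return destination.split(':')[0]

print(extract_domain('https://files.example.com/upload'))
print(extract_domain('files.example.com:443'))
```

Both calls reduce to the bare domain, so the alert can aggregate URL-style and CONNECT-style proxy log destinations together.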

View file

@@ -6,7 +6,7 @@
# Copyright (c) 2017 Mozilla Corporation
from lib.alerttask import AlertTask
from mozdef_util.query_models import SearchQuery, TermMatch, ExistsMatch, PhraseMatch
from mozdef_util.query_models import SearchQuery, TermMatch, PhraseMatch
class AlertSSHManyConns(AlertTask):

View file

@@ -56,7 +56,7 @@ class SSHKey(AlertTask):
self._whitelist.append({
'hostre': entry[:pindex],
'path': entry[pindex + 1:]
})
})
# Return false if the key path is present in the whitelist, otherwise return
# true

View file

@@ -8,7 +8,7 @@
# This code alerts on every successfully opened session on any of the host from a given list
from lib.alerttask import AlertTask
from mozdef_util.query_models import SearchQuery, TermMatch, QueryStringMatch, PhraseMatch
from mozdef_util.query_models import SearchQuery, TermMatch, PhraseMatch
class TraceAudit(AlertTask):
@@ -32,12 +32,12 @@ class TraceAudit(AlertTask):
severity = 'WARNING'
tags = ['audit']
summary = ('{0} instances of Strace or Ptrace executed on a system by {1}'.format(aggreg['count'], aggreg['value'], ))
hostnames = self.mostCommon(aggreg['allevents'],'_source.hostname')
#did they modify more than one host?
#or just modify an existing configuration more than once?
if len(hostnames) > 1:
for i in hostnames[:5]:
summary += ' on {0} ({1} hosts)'.format(i[0], i[1])
hosts = set([event['_source']['hostname'] for event in aggreg['events']])
summary = '{0} instances of Strace or Ptrace executed by {1} on {2}'.format(
aggreg['count'],
aggreg['value'],
','.join(hosts)
)
return self.createAlertDict(summary, category, tags, aggreg['events'], severity)

View file

@@ -1,2 +1,3 @@
[options]
skipprocess = process1 process2
skipprocess = process1 process2
expectedusers = user1 user2

View file

@@ -8,12 +8,12 @@
# This code alerts on every successfully opened session on any of the host from a given list
from lib.alerttask import AlertTask
from mozdef_util.query_models import SearchQuery, TermMatch, QueryStringMatch, PhraseMatch
from mozdef_util.query_models import SearchQuery, TermMatch, PhraseMatch
class WriteAudit(AlertTask):
def main(self):
self.parse_config('write_audit.conf', ['skipprocess'])
self.parse_config('write_audit.conf', ['skipprocess', 'expectedusers'])
search_query = SearchQuery(minutes=15)
search_query.add_must([
@@ -33,10 +33,25 @@ class WriteAudit(AlertTask):
severity = 'WARNING'
tags = ['audit']
summary = ('{0} Filesystem write(s) to an auditd path by {1}'.format(aggreg['count'], aggreg['value'], ))
hostnames = self.mostCommon(aggreg['allevents'],'_source.hostname')
#did they modify more than one host?
#or just modify an existing configuration more than once?
users = set()
paths = set()
for event in aggreg['events']:
users.add(event['_source']['details']['user'])
paths.add(event['_source']['summary'].split(' ')[1])
summary = '{0} Filesystem write(s) to an auditd path ({1}) by {2} ({3})'.format(
aggreg['count'],
', '.join(paths),
', '.join(users),
aggreg['value']
)
if aggreg['value'] in self.config.expectedusers.split(' '):
severity = 'NOTICE'
hostnames = self.mostCommon(aggreg['allevents'], '_source.hostname')
# did they modify more than one host?
# or just modify an existing configuration more than once?
if len(hostnames) > 1:
for i in hostnames[:5]:
summary += ' on {0} ({1} hosts)'.format(i[0], i[1])
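The new summary pulls the acting user from `details.user` and the written path from the second token of each event's summary string. A sketch with hypothetical auditd events (the `Write: /path` summary shape is an assumption based on the `split(' ')[1]` access above):

```python
# Hypothetical auditd write events shaped like those the alert aggregates.
events = [
    {'_source': {'details': {'user': 'alice'}, 'summary': 'Write: /etc/passwd'}},
    {'_source': {'details': {'user': 'alice'}, 'summary': 'Write: /etc/shadow'}},
]

users = set()
paths = set()
for event in events:
    users.add(event['_source']['details']['user'])
    paths.add(event['_source']['summary'].split(' ')[1])

summary = '{0} Filesystem write(s) to an auditd path ({1}) by {2}'.format(
    len(events), ', '.join(sorted(paths)), ', '.join(sorted(users)))
print(summary)
```

Deduplicating through sets keeps the summary short when one user touches the same path repeatedly.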

View file

@@ -11,22 +11,18 @@ from datetime import datetime
import pytz
import json
import socket
import json
from optparse import OptionParser
from requests_futures.sessions import FuturesSession
from multiprocessing import Process, Queue
import random
import logging
from logging.handlers import SysLogHandler
from Queue import Empty
from requests.packages.urllib3.exceptions import ClosedPoolError
import requests
import time
httpsession = FuturesSession(max_workers=5)
httpsession.trust_env=False # turns off needless .netrc check for creds
#a = requests.adapters.HTTPAdapter(max_retries=2)
#httpsession.mount('http://', a)
# a = requests.adapters.HTTPAdapter(max_retries=2)
# httpsession.mount('http://', a)
logger = logging.getLogger(sys.argv[0])
@@ -36,13 +32,13 @@ formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(messag
def postLogs(logcache):
#post logs asynchronously with requests workers and check on the results
#expects a queue object from the multiprocessing library
# post logs asynchronously with requests workers and check on the results
# expects a queue object from the multiprocessing library
posts=[]
try:
while not logcache.empty():
postdata=logcache.get_nowait()
if len(postdata)>0:
if len(postdata) > 0:
url=options.url
a=httpsession.get_adapter(url)
a.max_retries=3
@@ -52,15 +48,15 @@ def postLogs(logcache):
pass
for p,postdata,url in posts:
try:
if p.result().status_code >=500:
logger.error("exception posting to %s %r [will retry]\n"%(url,p.result().status_code))
#try again later when the next message in forces other attempts at posting.
if p.result().status_code >= 500:
logger.error("exception posting to %s %r [will retry]\n" % (url, p.result().status_code))
# try again later when the next message in forces other attempts at posting.
logcache.put(postdata)
except ClosedPoolError as e:
#logger.fatal("Closed Pool Error exception posting to %s %r %r [will retry]\n"%(url,e,postdata))
# logger.fatal("Closed Pool Error exception posting to %s %r %r [will retry]\n"%(url,e,postdata))
logcache.put(postdata)
except Exception as e:
logger.fatal("exception posting to %s %r %r [will not retry]\n"%(url,e,postdata))
logger.fatal("exception posting to %s %r %r [will not retry]\n" % (url, e, postdata))
sys.exit(1)
@@ -71,7 +67,7 @@ if __name__ == '__main__':
sh=logging.StreamHandler(sys.stdout)
sh.setFormatter(formatter)
logger.addHandler(sh)
#create a list of logs we can append json to and call for a post when we want.
# create a list of logs we can append json to and call for a post when we want.
logcache=Queue()
try:
for i in range(0,10):
@@ -98,21 +94,21 @@ if __name__ == '__main__':
postingProcess=Process(target=postLogs,args=(logcache,),name="json2MozdefStressTest")
postingProcess.start()
except OSError as e:
if e.errno==35: # resource temporarily unavailable.
if e.errno == 35: # resource temporarily unavailable.
print(e)
pass
else:
logger.error('%r'%e)
logger.error('%r' % e)
while not logcache.empty():
try:
postingProcess=Process(target=postLogs,args=(logcache,),name="json2MozdefStressTest")
postingProcess.start()
except OSError as e:
if e.errno==35: # resource temporarily unavailable.
if e.errno == 35: # resource temporarily unavailable.
print(e)
pass
else:
logger.error('%r'%e)
logger.error('%r' % e)
except KeyboardInterrupt as e:
sys.exit(1)
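The posting loop above requeues payloads on 5xx responses so a later drain retries them. The control flow can be sketched without HTTP (the `post` callable and payloads are stand-ins for the FuturesSession machinery):

```python
from queue import Queue

def drain(logcache, post):
    # Post every queued payload; requeue on server error (status >= 500),
    # mirroring postLogs' retry behaviour. `post` stands in for the HTTP
    # call and returns a status code.
    while not logcache.empty():
        payload = logcache.get_nowait()
        if post(payload) >= 500:
            logcache.put(payload)  # retried later in this or a future drain

cache = Queue()
cache.put('{"event": 1}')
cache.put('{"event": 2}')

attempts = []
def flaky_post(payload):
    attempts.append(payload)
    return 503 if len(attempts) == 1 else 200  # first call fails, rest succeed

drain(cache, flaky_post)
```

Because the failed payload goes back on the same queue, a persistent outage would loop; the real script bounds this by exiting on non-retryable exceptions.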

View file

@@ -10,7 +10,6 @@
import logging
import random
from kitnirc.client import Channel
from kitnirc.modular import Module
from kitnirc.user import User

View file

@@ -7,9 +7,7 @@
# Copyright (c) 2014 Mozilla Corporation
import logging
from kitnirc.client import Channel
from kitnirc.modular import Module
from kitnirc.user import User
import threading
import time
import json
@@ -41,7 +39,7 @@ class Zilla(Module):
self.interval = 9999999
self.channel = '#test'
self._bugzilla = bugzilla.Bugzilla(url=self.url+'rest/', api_key=self.api_key)
self._bugzilla = bugzilla.Bugzilla(url=self.url + 'rest/', api_key=self.api_key)
_log.info("zilla module initialized for {}, pooling every {} seconds.".format(self.url, self.interval))
@@ -49,8 +47,8 @@
last = 0
while not self._stop:
now = time.time()
if ((now-last) > self.interval):
#Add all the actions you want to do with bugzilla here ;)
if ((now - last) > self.interval):
# Add all the actions you want to do with bugzilla here ;)
self.bugzilla_search()
last = now
time.sleep(1)

View file

@@ -8,25 +8,18 @@
import json
import kitnirc.client
import kitnirc.modular
import kombu
import logging
import netaddr
import os
import pytz
import random
import select
import sys
import time
import threading
from configlib import getConfig, OptionParser
from datetime import datetime
from dateutil.parser import parse
from kombu import Connection, Queue, Exchange
from kombu.mixins import ConsumerMixin
from ipwhois import IPWhois
import sys
import os
from mozdef_util.utilities.toUTC import toUTC
from mozdef_util.geo_ip import GeoIP
@@ -134,7 +127,7 @@ def ipLocation(ip):
if geoDict['country_code'] in ('US'):
if geoDict['metro_code']:
location = location + '/{0}'.format(geoDict['metro_code'])
except Exception as e:
except Exception:
location = ""
return location
@@ -151,10 +144,11 @@ def formatAlert(jsonDictIn):
if 'category' in jsonDictIn.keys():
category = jsonDictIn['category']
return colorify('{0}: {1} {2}'.format(severity, colors['blue']
+ category
+ colors['normal'],
summary.encode('ascii', 'replace')))
return colorify('{0}: {1} {2}'.format(
severity,
colors['blue'] + category + colors['normal'],
summary.encode('ascii', 'replace')
))
class mozdefBot():
@@ -197,15 +191,6 @@ class mozdefBot():
# start the mq consumer
consumeAlerts(self)
@self.client.handle('LINE')
def line_handler(client, *params):
try:
self.root_logger.debug('linegot:' + line)
except AttributeError as e:
# catch error in kitnrc : chan.remove(actor) where channel
# object has no attribute remove
pass
@self.client.handle('PRIVMSG')
def priv_handler(client, actor, recipient, message):
self.root_logger.debug(

View file

@@ -20,8 +20,6 @@ from kombu.mixins import ConsumerMixin
from slackclient import SlackClient
import sys
import os
from mozdef_util.utilities.toUTC import toUTC

View file

@@ -2,7 +2,9 @@ ROOT_DIR := $(shell dirname $(realpath $(lastword $(MAKEFILE_LIST))))
PARENTDIR := $(realpath ../)
AWS_REGION := us-west-2
STACK_NAME := mozdef-aws-nested
STACK_PARAMS := file://aws_parameters.json
STACK_PARAMS_FILENAME := aws_parameters.json
# For more information on the rationale behind the code in STACK_PARAMS see https://github.com/aws/aws-cli/issues/2429#issuecomment-441133480
STACK_PARAMS := $(shell test -e $(STACK_PARAMS_FILENAME) && python -c 'import json,sys;f=open(sys.argv[1]);print(" ".join([",".join(["%s=\\\"%s\\\""%(k,v) for k,v in x.items()]) for x in json.load(f)]));f.close()' $(STACK_PARAMS_FILENAME))
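The inline Python in `STACK_PARAMS` flattens the CloudFormation parameter file into the `Key=Value` shorthand the CLI accepts on `--parameters`. A readable sketch of the same transformation (quoting simplified relative to the Makefile's shell-escaped version; the helper name is illustrative):

```python
import json

def to_cli_params(parameters):
    # Flatten aws_parameters.json-style entries into space-separated
    # ParameterKey="...",ParameterValue="..." strings for the AWS CLI.
    return " ".join(
        ",".join('%s="%s"' % (k, v) for k, v in entry.items())
        for entry in parameters
    )

params = json.loads('[{"ParameterKey": "VpcId", "ParameterValue": "vpc-abcdef12"}]')
print(to_cli_params(params))  # ParameterKey="VpcId",ParameterValue="vpc-abcdef12"
```

This lets the same JSON file drive both the console-style `--cli-input-json` workflow and the shorthand form used in `create-stack` below, as discussed in the linked aws-cli issue.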
# MozDef uses a nested CF stack, the mozdef-parent.yml will tie all child stacks together and load them from S3
# See also mozdef.infosec.mozilla.org bucket
S3_BUCKET_NAME := mozdef.infosec.allizom.org
@@ -28,9 +30,9 @@ create-stack: test ## Create everything you need for a fresh new stack!
@echo "Make sure you have an environment variable OIDC_CLIENT_SECRET set."
aws cloudformation create-stack --stack-name $(STACK_NAME) --template-url $(S3_STACK_URI)mozdef-parent.yml \
--capabilities CAPABILITY_IAM \
--parameters $(STACK_PARAMS) \
--parameters ParameterKey=S3TemplateLocation,ParameterValue=$(S3_STACK_URI) \
ParameterKey=OIDCClientSecret,ParameterValue=$(OIDC_CLIENT_SECRET) \
$(OIDC_CLIENT_SECRET_PARAM_ARG) \
$(STACK_PARAMS) \
--output text
.PHONY: create-s3-bucket

View file

@@ -1,7 +1,52 @@
[
{
"ParameterKey": "VpcId",
"ParameterValue": "vpc-abcdef12",
"UsePreviousValue": false
},
{
"ParameterKey": "PublicSubnetIds",
"ParameterValue": "subnet-abcdef12,subnet-bcdef123",
"UsePreviousValue": false
},
{
"ParameterKey": "InstanceType",
"ParameterValue": "m5.large",
"UsePreviousValue": false
},
{
"ParameterKey": "KeyName",
"ParameterValue": "my-key-name",
"UsePreviousValue": false
},
{
"ParameterKey": "AMIImageId",
"ParameterValue": "ami-0c3705bb3b43ad51f",
"UsePreviousValue": false
},
{
"ParameterKey": "ACMCertArn",
"ParameterValue": "arn:aws:acm:us-west-2:123456789012:certificate/abcdef01-2345-6789-abcd-ef0123456789",
"UsePreviousValue": false
},
{
"ParameterKey": "OIDCDiscoveryURL",
"ParameterValue": "https://auth.example.com/.well-known/openid-configuration",
"UsePreviousValue": false
},
{
"ParameterKey": "OIDCClientId",
"ParameterValue": "abcdefghijklmnopqrstuvwxyz012345",
"UsePreviousValue": false
},
{
"ParameterKey": "OIDCClientSecret",
"ParameterValue": "secret-goes-here",
"ParameterValue": "secret-value-goes-here",
"UsePreviousValue": false
},
{
"ParameterKey": "S3TemplateLocation",
"ParameterValue": "https://s3-us-west-2.amazonaws.com/example-bucket-name/cloudformation/path/",
"UsePreviousValue": false
}
]
]

View file

@@ -1,23 +1,19 @@
AWSTemplateFormatVersion: 2010-09-09
Description: Amazon Elastic File System
Parameters:
VPCID0:
VpcId:
Type: AWS::EC2::VPC::Id
Description: VpcId of your existing Virtual Private Cloud (VPC)
ConstraintDescription: Must be the VPC Id of an existing Virtual Private Cloud.
Default: vpc-dc8eacb4
Subnet:
Description: Select existing subnets.
SubnetList:
Type: List<AWS::EC2::Subnet::Id>
Default: subnet-8931f7ee,subnet-de322aa8,subnet-2582cd7d
Description: Select existing subnets.
NumberOfSubnets:
Description: Number of subnets in the Subnet parameter
Type: String
Default: 3
Description: Number of subnets in the Subnet parameter
MozDefSecurityGroup:
Description: The security group of the Mozdef endpoint that is accessing EFS
Type: AWS::EC2::SecurityGroup::Id
Default: sg-02d7fe9627ea068e2
Description: The security group of the Mozdef endpoint that is accessing EFS
Conditions:
Has1Subnets: !Or [!Equals [!Ref NumberOfSubnets, 1], Condition: Has2Subnets]
Has2Subnets: !Or [!Equals [!Ref NumberOfSubnets, 2], Condition: Has3Subnets]
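The `HasNSubnets` conditions chain through `!Or`, so `HasK` is true whenever `NumberOfSubnets` is K or any higher condition holds, i.e. whenever `NumberOfSubnets >= K`. A sketch of that evaluation (plain Python, not CloudFormation; `max_subnets=15` matches the template's highest condition):

```python
def has_n_subnets(n, number_of_subnets, max_subnets=15):
    # Evaluate the chained Or conditions: HasN is true when
    # NumberOfSubnets equals N or the next condition up holds.
    if n > max_subnets:
        return False
    return (number_of_subnets == n
            or has_n_subnets(n + 1, number_of_subnets, max_subnets))

# With NumberOfSubnets = 3 (the template default), mount targets 1-3 exist.
print([n for n in range(1, 6) if has_n_subnets(n, 3)])  # [1, 2, 3]
```

The chaining is needed because CloudFormation conditions cannot express numeric comparison directly, only equality and boolean composition.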
@@ -57,7 +53,7 @@ Resources:
Value: mozdef
- Key: stack
Value: !Ref AWS::StackName
VpcId: !Ref VPCID0
VpcId: !Ref VpcId
ElasticFileSystem:
Type: AWS::EFS::FileSystem
MountTarget1:
@@ -65,142 +61,142 @@
Condition: Has1Subnets
Properties:
FileSystemId: !Ref ElasticFileSystem
SubnetId: !Select [ 0, !Ref Subnet ]
SubnetId: !Select [ 0, !Ref SubnetList ]
SecurityGroups: [ !Ref MountTargetSecurityGroup ]
MountTarget2:
Type: AWS::EFS::MountTarget
Condition: Has2Subnets
Properties:
FileSystemId: !Ref ElasticFileSystem
SubnetId: !Select [ 1, !Ref Subnet ]
SubnetId: !Select [ 1, !Ref SubnetList ]
SecurityGroups: [ !Ref MountTargetSecurityGroup ]
MountTarget3:
Type: AWS::EFS::MountTarget
Condition: Has3Subnets
Properties:
FileSystemId: !Ref ElasticFileSystem
SubnetId: !Select [ 2, !Ref Subnet ]
SubnetId: !Select [ 2, !Ref SubnetList ]
SecurityGroups: [ !Ref MountTargetSecurityGroup ]
MountTarget4:
Type: AWS::EFS::MountTarget
Condition: Has4Subnets
Properties:
FileSystemId: !Ref ElasticFileSystem
SubnetId: !Select [ 3, !Ref Subnet ]
SubnetId: !Select [ 3, !Ref SubnetList ]
SecurityGroups: [ !Ref MountTargetSecurityGroup ]
MountTarget5:
Type: AWS::EFS::MountTarget
Condition: Has5Subnets
Properties:
FileSystemId: !Ref ElasticFileSystem
SubnetId: !Select [ 4, !Ref Subnet ]
SubnetId: !Select [ 4, !Ref SubnetList ]
SecurityGroups: [ !Ref MountTargetSecurityGroup ]
MountTarget6:
Type: AWS::EFS::MountTarget
Condition: Has6Subnets
Properties:
FileSystemId: !Ref ElasticFileSystem
SubnetId: !Select [ 5, !Ref Subnet ]
SubnetId: !Select [ 5, !Ref SubnetList ]
SecurityGroups: [ !Ref MountTargetSecurityGroup ]
MountTarget7:
Type: AWS::EFS::MountTarget
Condition: Has7Subnets
Properties:
FileSystemId: !Ref ElasticFileSystem
SubnetId: !Select [ 6, !Ref Subnet ]
SubnetId: !Select [ 6, !Ref SubnetList ]
SecurityGroups: [ !Ref MountTargetSecurityGroup ]
MountTarget8:
Type: AWS::EFS::MountTarget
Condition: Has8Subnets
Properties:
FileSystemId: !Ref ElasticFileSystem
SubnetId: !Select [ 7, !Ref Subnet ]
SubnetId: !Select [ 7, !Ref SubnetList ]
SecurityGroups: [ !Ref MountTargetSecurityGroup ]
MountTarget9:
Type: AWS::EFS::MountTarget
Condition: Has9Subnets
Properties:
FileSystemId: !Ref ElasticFileSystem
SubnetId: !Select [ 8, !Ref Subnet ]
SubnetId: !Select [ 8, !Ref SubnetList ]
SecurityGroups: [ !Ref MountTargetSecurityGroup ]
MountTarget10:
Type: AWS::EFS::MountTarget
Condition: Has10Subnets
Properties:
FileSystemId: !Ref ElasticFileSystem
SubnetId: !Select [ 9, !Ref Subnet ]
SubnetId: !Select [ 9, !Ref SubnetList ]
SecurityGroups: [ !Ref MountTargetSecurityGroup ]
MountTarget11:
Type: AWS::EFS::MountTarget
Condition: Has11Subnets
Properties:
FileSystemId: !Ref ElasticFileSystem
SubnetId: !Select [ 10, !Ref Subnet ]
SubnetId: !Select [ 10, !Ref SubnetList ]
SecurityGroups: [ !Ref MountTargetSecurityGroup ]
MountTarget12:
Type: AWS::EFS::MountTarget
Condition: Has12Subnets
Properties:
FileSystemId: !Ref ElasticFileSystem
SubnetId: !Select [ 11, !Ref Subnet ]
SubnetId: !Select [ 11, !Ref SubnetList ]
SecurityGroups: [ !Ref MountTargetSecurityGroup ]
MountTarget13:
Type: AWS::EFS::MountTarget
Condition: Has13Subnets
Properties:
FileSystemId: !Ref ElasticFileSystem
SubnetId: !Select [ 12, !Ref Subnet ]
SubnetId: !Select [ 12, !Ref SubnetList ]
SecurityGroups: [ !Ref MountTargetSecurityGroup ]
MountTarget14:
Type: AWS::EFS::MountTarget
Condition: Has14Subnets
Properties:
FileSystemId: !Ref ElasticFileSystem
SubnetId: !Select [ 13, !Ref Subnet ]
SubnetId: !Select [ 13, !Ref SubnetList ]
SecurityGroups: [ !Ref MountTargetSecurityGroup ]
MountTarget15:
Type: AWS::EFS::MountTarget
Condition: Has15Subnets
Properties:
FileSystemId: !Ref ElasticFileSystem
SubnetId: !Select [ 14, !Ref Subnet ]
SubnetId: !Select [ 14, !Ref SubnetList ]
SecurityGroups: [ !Ref MountTargetSecurityGroup ]
MountTarget16:
Type: AWS::EFS::MountTarget
Condition: Has16Subnets
Properties:
FileSystemId: !Ref ElasticFileSystem
SubnetId: !Select [ 15, !Ref Subnet ]
SubnetId: !Select [ 15, !Ref SubnetList ]
SecurityGroups: [ !Ref MountTargetSecurityGroup ]
MountTarget17:
Type: AWS::EFS::MountTarget
Condition: Has17Subnets
Properties:
FileSystemId: !Ref ElasticFileSystem
SubnetId: !Select [ 16, !Ref Subnet ]
SubnetId: !Select [ 16, !Ref SubnetList ]
SecurityGroups: [ !Ref MountTargetSecurityGroup ]
MountTarget18:
Type: AWS::EFS::MountTarget
Condition: Has18Subnets
Properties:
FileSystemId: !Ref ElasticFileSystem
SubnetId: !Select [ 17, !Ref Subnet ]
SubnetId: !Select [ 17, !Ref SubnetList ]
SecurityGroups: [ !Ref MountTargetSecurityGroup ]
MountTarget19:
Type: AWS::EFS::MountTarget
Condition: Has19Subnets
Properties:
FileSystemId: !Ref ElasticFileSystem
SubnetId: !Select [ 18, !Ref Subnet ]
SubnetId: !Select [ 18, !Ref SubnetList ]
SecurityGroups: [ !Ref MountTargetSecurityGroup ]
MountTarget20:
Type: AWS::EFS::MountTarget
Condition: Has20Subnets
Properties:
FileSystemId: !Ref ElasticFileSystem
SubnetId: !Select [ 19, !Ref Subnet ]
SubnetId: !Select [ 19, !Ref SubnetList ]
SecurityGroups: [ !Ref MountTargetSecurityGroup ]
Outputs:
EFSID:
Description: Logical ID of the EFS Filesystem
Value: !Ref ElasticFileSystem
Value: !Ref ElasticFileSystem


@ -4,22 +4,17 @@ Parameters:
SubnetIds:
Type: List<AWS::EC2::Subnet::Id>
Description: Comma-delimited list of subnet IDs within which the ElasticSearch instance will be provisioned.
Default: subnet-dd8eacb5,subnet-df8eacb7,subnet-de8eacb6
BlockStoreSizeGB:
Type: Number
Description: The size of the Elastic Block Store volume backing ElasticSearch, in gigabytes.
Default: 100
VpcId:
Type: AWS::EC2::VPC::Id
Description: The VPC ID of the VPC to deploy in
Default: vpc-dc8eacb4
MozDefInstanceSecurityGroup:
Type: AWS::EC2::SecurityGroup::Id
Description: The MozDef EC2 Instance security group that accesses ES
Default: sg-8f38dae0
ESInstanceCount:
Type: Number
Default: 1
Description: The number of ElasticSearch nodes in the cluster
Resources:
## Not currently supported by CloudFormation.


@ -4,27 +4,22 @@ Parameters:
VpcId:
Type: AWS::EC2::VPC::Id
Description: The VPC ID of the VPC to deploy in
Default: vpc-dc8eacb4
InstanceType:
Type: String
Default: m5.large
Description: EC2 instance type, e.g. m1.small, m1.large, etc.
Default: m5.large
KeyName:
Type: AWS::EC2::KeyPair::KeyName
Description: Name of an existing EC2 KeyPair to enable SSH access to the web server
Default: infosec-pdx-workweek-2018
IamInstanceProfile:
Type: String
Description: The ARN of the IAM Instance Profile
Default: arn:aws:iam::656532927350:instance-profile/netsecdevbastion-BastionInstanceProfile-12CM3TOELV20R
AutoScaleGroupSubnetIds:
Type: List<AWS::EC2::Subnet::Id>
Description: A comma delimited list of subnet IDs
Default: subnet-dd8eacb5,subnet-df8eacb7,subnet-de8eacb6
AMIImageId:
Type: AWS::EC2::Image::Id
Description: The AMI Image ID to use of the EC2 instance
Default: ami-0ff5b922fb3ae2332
EFSID:
Type: String
Description: Logical ID of the EFS Filesystem
@ -41,7 +36,6 @@ Parameters:
MozDefACMCertArn:
Type: String
Description: The arn of your pre-issued certificate for ssl termination.
Default: arn:aws:acm:us-west-2:656532927350:certificate/79f641f2-4046-4754-a28f-4db80d7c0583
ESURL:
Type: String
Description: The AWS ES endpoint URL
@ -55,20 +49,24 @@ Parameters:
Description: The AWS ES Kibana URL with domain only
Default: https://kibana.example.com/
OIDCClientId:
Description: The client ID that your OIDC provider issues you for your Mozdef instance.
Type: String
Default: lGsSlYNdiV6f5tF05pWN3EbQoDPHx44k
Description: The client ID that your OIDC provider issues you for your Mozdef instance.
OIDCClientSecret:
Type: String
Description: The secret that your OIDC provider issues you for your Mozdef instance.
NoEcho: true
OIDCDiscoveryURL:
Type: String
Default: https://auth.mozilla.auth0.com/.well-known/openid-configuration
Description: The URL of your OIDC provider's well-known discovery URL
CloudTrailSQSNotificationQueueName:
Type: String
Description: The URL of your OIDC provider's well-known discovery URL
Description: The name of the SQS queue used for CloudTrail notifications.
MozDefSQSQueueName:
Type: String
Description: The name of the generic SQS queue used to pickup events.
DomainName:
Type: String
Description: The fully qualified domain you'll be hosting MozDef at.
Resources:
MozDefElasticLoadBalancingV2TargetGroup:
Type: AWS::ElasticLoadBalancingV2::TargetGroup
@ -117,7 +115,10 @@ Resources:
- content: |
OPTIONS_ESSERVERS=${ESURL}
OPTIONS_KIBANAURL=${KibanaURL}
# The OPTIONS_METEOR_KIBANAURL uses the reserved word "relative" which triggers MozDef
# to use relative links to Kibana : https://github.com/mozilla/MozDef/pull/956
OPTIONS_METEOR_KIBANAURL=https://relative:9090/_plugin/kibana/
OPTIONS_METEOR_ROOTURL=https://${DomainName}
# See https://github.com/mozilla-iam/mozilla.oidc.accessproxy/blob/master/README.md#setup
client_id=${OIDCClientId}
client_secret=${OIDCClientSecret}
@ -134,7 +135,86 @@ Resources:
cookiename=sesmeteor
# Increase the AWS ES total fields limit from 1000 to 4000
OPTIONS_MAPPING_TOTAL_FIELDS_LIMIT=4000
# Set thresholds for the attack dataviz; lower values mean more ogres
OPTIONS_IPV4ATTACKERHITCOUNT=5
OPTIONS_IPV4ATTACKERPREFIXLENGTH=24
path: /opt/mozdef/docker/compose/cloudy_mozdef.env
- content: |
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
from celery.schedules import crontab, timedelta
import time
import logging
ALERTS = {
'bruteforce_ssh.AlertBruteforceSsh': {'schedule': crontab(minute='*/1')},
'unauth_ssh.AlertUnauthSSH': {'schedule': crontab(minute='*/1')},
'guard_duty_probe.AlertGuardDutyProbe': {'schedule': crontab(minute='*/1')},
'cloudtrail_logging_disabled.AlertCloudtrailLoggingDisabled': {'schedule': timedelta(minutes=1)},
'cloudtrail_deadman.AlertCloudtrailDeadman': {'schedule': timedelta(hours=1)}
}
ALERT_PLUGINS = [
# 'relative pythonfile name (exclude the .py) - EX: sso_dashboard',
]
RABBITMQ = {
'mqserver': 'rabbitmq',
'mquser': 'guest',
'mqpassword': 'guest',
'mqport': 5672,
'alertexchange': 'alerts',
'alertqueue': 'mozdef.alert'
}
ES = {
'servers': ["${ESURL}"]
}
OPTIONS = {
'defaulttimezone': 'UTC',
}
LOGGING = {
'version': 1,
'disable_existing_loggers': True,
'formatters': {
'simple': {
'format': '%(levelname)s %(message)s',
'datefmt': '%y %b %d, %H:%M:%S',
},
'standard': {
'format': '%(asctime)s [%(levelname)s] %(name)s %(filename)s:%(lineno)d: %(message)s'
}
},
'handlers': {
'console': {
'level': 'DEBUG',
'class': 'logging.StreamHandler',
'formatter': 'simple'
},
'celery': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'celery.log',
'formatter': 'standard',
'maxBytes': 1024 * 1024 * 100, # 100 mb
},
},
'loggers': {
'celery': {
'handlers': ['celery', 'console'],
'level': 'DEBUG',
},
}
}
logging.Formatter.converter = time.gmtime
path: /opt/mozdef/docker/compose/mozdef_alerts/files/config.py
- content: |
client_id=${OIDCClientId}
client_secret=${OIDCClientSecret}
@ -147,9 +227,13 @@ Resources:
- content: |
OPTIONS_TASKEXCHANGE=${CloudTrailSQSNotificationQueueName}
path: /opt/mozdef/docker/compose/cloudy_mozdef_mq_cloudtrail.env
- content: |
OPTIONS_TASKEXCHANGE=${MozDefSQSQueueName}
path: /opt/mozdef/docker/compose/cloudy_mozdef_mq_sns_sqs.env
runcmd:
- chmod --verbose 600 /opt/mozdef/docker/compose/cloudy_mozdef.env
- chmod --verbose 600 /opt/mozdef/docker/compose/cloudy_mozdef_kibana.env
- chmod --verbose 600 /opt/mozdef/docker/compose/cloudy_mozdef_mq_sqs.env
- mkdir --verbose --parents ${EFSMountPoint}
- echo '*.* @@127.0.0.1:514' >> /etc/rsyslog.conf
- systemctl enable rsyslog


@ -3,28 +3,25 @@ Description: Creates MozDef Amazon MQ
Parameters:
MQUserParameter:
Type: String
Default: mozdef
Description: The username for the AmazonMQ user.
Default: mozdef
MQPasswordParameter:
Type: String
NoEcho: true
Default: ''
Description: The password for the AmazonMQ user. Leave this blank if you want an auto-generated password.
Default: ''
NoEcho: true
MQInstanceType:
Type: String
Default: mq.t2.micro
Description: The instance type for the AmazonMQ instance.
Default: mq.t2.micro
MQVpcId:
Type: AWS::EC2::VPC::Id
Default: vpc-dc8eacb4
Description: The VPC ID of the VPC within which the AmazonMQ instance will be provisioned.
MQSubnetIds:
Type: List<AWS::EC2::Subnet::Id>
Default: subnet-dd8eacb5,subnet-df8eacb7,subnet-de8eacb6
Description: Comma-delimited list of subnet IDs of the VPCs within which the AmazonMQ instance will be provisioned.
MozDefSecurityGroup:
Type: AWS::EC2::SecurityGroup::Id
Default: sg-02d7fe9627ea068e2
Description: The security group of the Mozdef endpoint that is accessing AmazonMQ
Conditions:
PasswordIsSet: !Not [ !Equals [ !Ref MQPasswordParameter, '' ]]


@ -14,52 +14,91 @@ Metadata:
- InstanceType
- KeyName
- AMIImageId
- Label:
default: Certificate
Parameters:
- ACMCertArn
- Label:
default: OIDC Configuration
Parameters:
- OIDCDiscoveryURL
- OIDCClientId
- OIDCClientSecret
- Label:
default: Template Location
Parameters:
- S3TemplateLocation
ParameterLabels:
VpcId:
default: VPC ID
PublicSubnetIds:
default: Public Subnet IDs
InstanceType:
default: EC2 Instance Type
KeyName:
default: EC2 SSH Key Name
AMIImageId:
default: EC2 AMI Image ID
DomainName:
default: FQDN to host MozDef at
ACMCertArn:
default: ACM Certificate ARN
OIDCDiscoveryURL:
default: OIDC Discovery URL
OIDCClientId:
default: OIDC Client ID
OIDCClientSecret:
default: OIDC Client Secret
S3TemplateLocation:
default: S3 Template Location URL
Parameters:
S3TemplateLocation:
Type: String
Description: The URL to the S3 bucket used to fetch the nested stack templates
Default: https://s3-us-west-2.amazonaws.com/mozdef.infosec.allizom.org/cf/
VpcId:
Type: AWS::EC2::VPC::Id
Description: The VPC ID of the VPC to deploy in
Default: vpc-dc8eacb4
Description: 'The VPC ID of the VPC to deploy in (Example : vpc-abcdef12)'
PublicSubnetIds:
Type: List<AWS::EC2::Subnet::Id>
Description: A comma delimited list of public subnet IDs
Default: subnet-dd8eacb5,subnet-df8eacb7,subnet-de8eacb6
Description: 'A comma delimited list of public subnet IDs (Example: subnet-abcdef12,subnet-bcdef123)'
InstanceType:
Type: String
Default: m5.large
Description: EC2 instance type, e.g. m1.small, m1.large, etc.
Default: m5.large
KeyName:
Type: AWS::EC2::KeyPair::KeyName
Description: Name of an existing EC2 KeyPair to enable SSH access to the web server
Default: infosec-pdx-workweek-2018
AMIImageId:
Type: String
Type: AWS::EC2::Image::Id
Description: The AMI Image ID to use of the EC2 instance
Default: ami-0ff5b922fb3ae2332
Default: ami-073434079b0366251
DomainName:
Type: String
Description: The fully qualified DNS name you will host CloudyMozDef at.
Default: cloudymozdef.security.allizom.org
ACMCertArn:
Description: The ARN of your pre-issued ACM cert.
Default : arn:aws:acm:us-west-2:656532927350:certificate/79f641f2-4046-4754-a28f-4db80d7c0583
Type: String
MinLength: '1'
Description: "The ARN of your pre-issued ACM cert. (Example: arn:aws:acm:us-west-2:123456789012:certificate/abcdef01-2345-6789-abcd-ef0123456789)"
OIDCDiscoveryURL:
Type: String
AllowedPattern: '^https?:\/\/.*'
ConstraintDescription: A valid URL
Description: "The URL of your OIDC provider's well-known discovery URL (Example: https://auth.example.com/.well-known/openid-configuration)"
OIDCClientId:
Description: The client ID that your OIDC provider issues you for your Mozdef instance.
Type: String
Default: lGsSlYNdiV6f5tF05pWN3EbQoDPHx44k
Description: The client ID that your OIDC provider issues you for your Mozdef instance.
OIDCClientSecret:
Type: String
Description: The secret that your OIDC provider issues you for your Mozdef instance.
NoEcho: true
OIDCDiscoveryURL:
S3TemplateLocation:
Type: String
Default: https://auth.mozilla.auth0.com/.well-known/openid-configuration
Description: The URL of your OIDC provider's well-known discovery URL
AllowedPattern: '^https?:\/\/.*\.amazonaws\.com\/.*'
ConstraintDescription: A valid amazonaws.com S3 URL
Description: "The URL to the S3 bucket used to fetch the nested stack templates (Example: https://s3-us-west-2.amazonaws.com/example-bucket-name/cloudformation/path/)"
Resources:
MozDefSecurityGroups:
Type: AWS::CloudFormation::Stack
Properties:
Parameters:
VpcId: !Ref VpcId
Tags:
- Key: application
Value: mozdef
@ -98,6 +137,8 @@ Resources:
OIDCClientSecret: !Ref OIDCClientSecret
OIDCDiscoveryURL: !Ref OIDCDiscoveryURL
CloudTrailSQSNotificationQueueName: !GetAtt MozDefCloudTrail.Outputs.CloudTrailSQSQueueName
MozDefSQSQueueName: !GetAtt MozDefSQS.Outputs.SQSQueueName
DomainName: !Ref DomainName
Tags:
- Key: application
Value: mozdef
@ -124,8 +165,8 @@ Resources:
Type: AWS::CloudFormation::Stack
Properties:
Parameters:
VPCID0: !Ref VpcId
Subnet: !Join [ ',', !Ref PublicSubnetIds ]
VpcId: !Ref VpcId
SubnetList: !Join [ ',', !Ref PublicSubnetIds ]
NumberOfSubnets: !GetAtt NumberOfSubnets.Length
MozDefSecurityGroup: !GetAtt MozDefSecurityGroups.Outputs.MozDefSecurityGroupId
Tags:
@ -245,4 +286,4 @@ Resources:
Properties:
RoleName: AWSServiceRoleForAmazonElasticsearchService
PathPrefix: '/aws-service-role/es.amazonaws.com/'
ServiceToken: !GetAtt DoesRoleExistLambdaFunction.Arn
ServiceToken: !GetAtt DoesRoleExistLambdaFunction.Arn


@ -4,7 +4,6 @@ Parameters:
VpcId:
Type: AWS::EC2::VPC::Id
Description: The VPC ID of the VPC to deploy in
Default: vpc-dc8eacb4
Resources:
MozDefSecurityGroup:
Type: AWS::EC2::SecurityGroup


@ -9,10 +9,23 @@ Resources:
Value: mozdef
- Key: stack
Value: !Ref AWS::StackName
MozDefCloudTrailSQSQueuePolicy:
Type: AWS::SQS::QueuePolicy
Properties:
PolicyDocument:
Version: 2012-10-17
Statement:
- Sid: AllowThisAccountSendToSQS
Effect: Allow
Principal: '*'
Action: sqs:SendMessage
Resource: !GetAtt MozDefSQSQueue.Arn
Queues:
- !Ref MozDefSQSQueue
Outputs:
SQSQueueArn:
Description: ARN of the SQS Queue that MozDef will consume events from
Value: !GetAtt MozDefSQSQueue.Arn
SQSQueueName:
Description: Name of the SQS Queue that MozDef will consume events from
Value: !GetAtt MozDefSQSQueue.QueueName
Value: !GetAtt MozDefSQSQueue.QueueName
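The MozDefCloudTrailSQSQueuePolicy added above attaches a resource policy granting sqs:SendMessage on the queue. A minimal sketch of the equivalent policy document as plain Python (the helper name is illustrative, not part of the template):

```python
def cloudtrail_queue_policy(queue_arn):
    # Sketch of the policy document in the MozDefCloudTrailSQSQueuePolicy
    # resource above. As written in the template, Principal is '*', so any
    # principal may send messages to the queue; the Sid suggests the
    # intent is same-account senders.
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowThisAccountSendToSQS",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
        }],
    }
```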


@ -14,7 +14,10 @@
"instance_type": "t2.large",
"ssh_pty" : "true",
"ssh_username": "ec2-user",
"ami_name": "mozdef_{{timestamp}}"
"ami_name": "mozdef_{{timestamp}}",
"ami_groups": [
"all"
]
}],
"provisioners": [
{ "type": "shell",
@ -30,7 +33,7 @@
"sudo systemctl enable docker",
"sudo mkdir -p /opt/mozdef/",
"sudo git clone https://github.com/mozilla/MozDef /opt/mozdef",
"cd /opt/mozdef && sudo git checkout origin/master"
]}
"cd /opt/mozdef && sudo git checkout master"
]}
]
}


@ -263,6 +263,9 @@ def process_msg(mozmsg, msg):
See also https://auth0.com/docs/api/management/v2#!/Logs/get_logs
"""
details = DotDict({})
# defaults
details.username = "UNKNOWN"
details.userid = "UNKNOWN"
# key words used to set category and success/failure markers
authentication_words = ['Login', 'Logout', 'Auth']
@ -271,14 +274,18 @@ def process_msg(mozmsg, msg):
failed_words = ['Failed']
# default category (might be modified below to be more specific)
mozmsg.category = 'iam'
mozmsg.set_category('iam')
mozmsg.source = 'auth0'
# fields that should always exist
mozmsg.timestamp = msg.date
details['messageid'] = msg._id
details['userid'] = msg.user_id
details['sourceipaddress'] = msg.ip
try:
details['userid'] = msg.user_id
except KeyError:
pass
try:
details['username'] = msg.user_name
except KeyError:
@ -294,7 +301,7 @@ def process_msg(mozmsg, msg):
pass
try:
mozmsg.useragent = msg.user_agent
details['useragent'] = msg.user_agent
except KeyError:
pass
@ -303,9 +310,9 @@ def process_msg(mozmsg, msg):
details['eventname'] = log_types[msg.type].event
# determine the event category
if any(authword in details['eventname'] for authword in authentication_words):
mozmsg.category = "authentication"
mozmsg.set_category("authentication")
if any(authword in details['eventname'] for authword in authorization_words):
mozmsg.category = "authorization"
mozmsg.set_category("authorization")
# determine success/failure
if any(failword in details['eventname'] for failword in failed_words):
details.success = False
@ -333,7 +340,7 @@ def process_msg(mozmsg, msg):
details['description'] = ""
# set the summary
if 'auth' in mozmsg.category:
if 'auth' in mozmsg._category:
# make summary be action/username (success login user@place.com)
mozmsg.summary = "{event} {desc}".format(
event=details.eventname,
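The Auth0 changes above replace an unconditional read of user_id with a per-field try/except KeyError guard, after first setting defaults, so a message missing a field no longer drops the event. A minimal sketch of that pattern (the function name and plain-dict message are invented for illustration, not MozDef's API):

```python
def extract_details(msg):
    # Sketch of the guarded field extraction added above: start from
    # safe defaults, then overwrite only when the Auth0 message actually
    # carries the field. Field names follow the diff.
    details = {"username": "UNKNOWN", "userid": "UNKNOWN"}
    for source_key, dest_key in (("user_id", "userid"),
                                 ("user_name", "username"),
                                 ("ip", "sourceipaddress")):
        try:
            details[dest_key] = msg[source_key]
        except KeyError:
            pass  # keep the default / leave the optional field unset
    return details
```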


@ -50,8 +50,8 @@ def main():
aws_access_key_id=options.aws_access_key_id,
aws_secret_access_key=options.aws_secret_access_key
)
idate = date.strftime(datetime.utcnow()-timedelta(days=1),'%Y%m%d')
bucketdate = date.strftime(datetime.utcnow()-timedelta(days=1),'%Y-%m')
idate = date.strftime(datetime.utcnow() - timedelta(days=1), '%Y%m%d')
bucketdate = date.strftime(datetime.utcnow() - timedelta(days=1), '%Y-%m')
hostname = socket.gethostname()
# Create or update snapshot configuration
@ -120,7 +120,7 @@ echo "DONE!"
except boto.exception.NoAuthHandlerFound:
logger.error("No auth handler found, check your credentials")
except Exception as e:
logger.error("Unhandled exception, terminating: %r"%e)
logger.error("Unhandled exception, terminating: %r" % e)
def initConfig():
@ -129,65 +129,65 @@ def initConfig():
'output',
'stdout',
options.configfile
)
)
# syslog hostname
options.sysloghostname = getConfig(
'sysloghostname',
'localhost',
options.configfile
)
)
options.syslogport = getConfig(
'syslogport',
514,
options.configfile
)
)
options.esservers = list(getConfig(
'esservers',
'http://localhost:9200',
options.configfile).split(',')
)
)
options.indices = list(getConfig(
'backup_indices',
'events,alerts,.kibana',
options.configfile).split(',')
)
)
options.dobackup = list(getConfig(
'backup_dobackup',
'1,1,1',
options.configfile).split(',')
)
)
options.rotation = list(getConfig(
'backup_rotation',
'daily,monthly,none',
options.configfile).split(',')
)
)
options.pruning = list(getConfig(
'backup_pruning',
'20,0,0',
options.configfile).split(',')
)
)
# aws credentials to use to send files to s3
options.aws_access_key_id = getConfig(
'aws_access_key_id',
'',
options.configfile
)
)
options.aws_secret_access_key = getConfig(
'aws_secret_access_key',
'',
options.configfile
)
)
options.aws_region = getConfig(
'aws_region',
'us-west-1',
options.configfile
)
)
options.aws_bucket = getConfig(
'aws_bucket',
'',
options.configfile
)
)
if __name__ == '__main__':
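The idate/bucketdate lines reformatted above compute yesterday's date in two formats: a compact YYYYMMDD stamp for the snapshot index name and YYYY-MM for the bucket path. A small self-contained sketch, with the helper name invented for illustration:

```python
from datetime import datetime, timedelta

def backup_dates(now=None):
    # Mirror of the reformatted idate/bucketdate computation above:
    # yesterday's date as YYYYMMDD (index name) and YYYY-MM (bucket path).
    now = now or datetime.utcnow()
    yesterday = now - timedelta(days=1)
    return yesterday.strftime('%Y%m%d'), yesterday.strftime('%Y-%m')
```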


@ -19,8 +19,6 @@ from pymongo import MongoClient
from collections import Counter
from kombu import Connection, Exchange
import sys
import os
from mozdef_util.utilities.toUTC import toUTC
from mozdef_util.elasticsearch_client import ElasticsearchClient
from mozdef_util.query_models import SearchQuery, PhraseMatch
@ -153,7 +151,7 @@ def searchMongoAlerts(mozdefdb):
# count by ip
{"$group": {"_id": "$sourceip", "hitcount": {"$sum": 1}}},
# limit to those with X observances
{"$match": {"hitcount": {"$gt": 5}}},
{"$match": {"hitcount": {"$gt": options.ipv4attackerhitcount}}},
# sort
{"$sort": SON([("hitcount", -1), ("_id", -1)])},
# top 10
@ -166,7 +164,7 @@ def searchMongoAlerts(mozdefdb):
# set CIDR
# todo: lookup ipwhois for asn_cidr value
# potentially with a max mask value (i.e. asn is /8, limit attackers to /24)
ipcidr.prefixlen = 32
ipcidr.prefixlen = options.ipv4attackerprefixlength
# append to or create attacker.
# does this match an existing attacker's indicators
@ -260,10 +258,10 @@ def searchMongoAlerts(mozdefdb):
# summarize the alert categories
# returns list of tuples: [(u'bruteforce', 8)]
categoryCounts= mostCommon(matchingalerts,'category')
#are the alerts all the same category?
# are the alerts all the same category?
if len(categoryCounts) == 1:
#is the alert category mapped to an attacker category?
# is the alert category mapped to an attacker category?
for category in options.categorymapping:
if category.keys()[0] == categoryCounts[0][0]:
attacker['category'] = category[category.keys()[0]]
@ -482,6 +480,10 @@ def initConfig():
# set to either amqp or amqps for ssl
options.mqprotocol = getConfig('mqprotocol', 'amqp', options.configfile)
# Set these settings to change the correlation for attackers
options.ipv4attackerprefixlength = getConfig('ipv4attackerprefixlength', 32, options.configfile)
options.ipv4attackerhitcount = getConfig('ipv4attackerhitcount', 5, options.configfile)
if __name__ == '__main__':
parser = OptionParser()
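The change above replaces the hard-coded hit count and /32 prefix with the new ipv4attackerhitcount and ipv4attackerprefixlength options. A sketch of the aggregation pipeline with the threshold parameterized (plain dicts stand in for the SON objects pymongo uses; the helper name is illustrative):

```python
def attacker_ip_pipeline(hitcount=5, limit=10):
    # Sketch of the Mongo aggregation above; the $gt threshold is now
    # the configurable ipv4attackerhitcount option instead of a literal
    # 5. Plain dicts keep insertion order on Python 3.7+, standing in
    # for SON in the $sort stage.
    return [
        {"$group": {"_id": "$sourceip", "hitcount": {"$sum": 1}}},
        {"$match": {"hitcount": {"$gt": hitcount}}},
        {"$sort": {"hitcount": -1, "_id": -1}},
        {"$limit": limit},
    ]
```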


@ -7,7 +7,6 @@
import json
import logging
import os
import re
import sys
from datetime import datetime
@ -15,8 +14,6 @@ from configlib import getConfig, OptionParser
from logging.handlers import SysLogHandler
from hashlib import md5
import sys
import os
from mozdef_util.utilities.toUTC import toUTC
from mozdef_util.elasticsearch_client import ElasticsearchClient, ElasticsearchBadServer
from mozdef_util.query_models import SearchQuery, TermMatch, PhraseMatch
@ -63,7 +60,7 @@ def readOUIFile(ouifilename):
for i in ouifile.readlines()[0::]:
i=i.strip()
if '(hex)' in i:
#print(i)
# print(i)
fields=i.split('\t')
macprefix=fields[0][0:8].replace('-',':').lower()
entity=fields[2]


@ -17,7 +17,6 @@ from configlib import getConfig, OptionParser
from logging.handlers import SysLogHandler
from pymongo import MongoClient
import os
from mozdef_util.utilities.toUTC import toUTC
@ -62,7 +61,7 @@ def parse_fqdn_whitelist(fqdn_whitelist_location):
fqdns = []
with open(fqdn_whitelist_location, "r") as text_file:
for line in text_file:
line=line.strip().strip("'").strip('"')
line = line.strip().strip("'").strip('"')
if isFQDN(line):
fqdns.append(line)
return fqdns
@ -77,10 +76,10 @@ def main():
mozdefdb = client.meteor
fqdnblocklist = mozdefdb['fqdnblocklist']
# ensure indexes
fqdnblocklist.create_index([('dateExpiring',-1)])
fqdnblocklist.create_index([('dateExpiring', -1)])
# delete any that expired
fqdnblocklist.delete_many({'dateExpiring': {"$lte": datetime.utcnow()-timedelta(days=options.expireage)}})
fqdnblocklist.delete_many({'dateExpiring': {"$lte": datetime.utcnow() - timedelta(days=options.expireage)}})
# Lastly, export the combined blocklist
fqdnCursor = mozdefdb['fqdnblocklist'].aggregate([
@ -95,7 +94,7 @@ def main():
{"$project": {"address": 1}},
{"$limit": options.fqdnlimit}
])
FQDNList=[]
FQDNList = []
for fqdn in fqdnCursor:
if fqdn not in options.fqdnwhitelist:
FQDNList.append(fqdn['address'])
@ -105,7 +104,7 @@ def main():
outputfile.write("{0}\n".format(fqdn))
outputfile.close()
# to s3?
if len(options.aws_bucket_name)>0:
if len(options.aws_bucket_name) > 0:
s3_upload_file(options.outputfile, options.aws_bucket_name, options.aws_document_key_name)
except ValueError as e:
@ -134,26 +133,26 @@ def initConfig():
options.outputfile = getConfig('outputfile', 'fqdnblocklist.txt', options.configfile)
# Days after expiration that we purge an fqdnblocklist entry (from the ui, they don't end up in the export after expiring)
options.expireage = getConfig('expireage',1,options.configfile)
options.expireage = getConfig('expireage', 1, options.configfile)
# Max FQDNs to emit
options.fqdnlimit = getConfig('fqdnlimit', 1000, options.configfile)
# AWS creds
options.aws_access_key_id=getConfig('aws_access_key_id','',options.configfile) # aws credentials to use to connect to mozilla_infosec_blocklist
options.aws_secret_access_key=getConfig('aws_secret_access_key','',options.configfile)
options.aws_bucket_name=getConfig('aws_bucket_name','',options.configfile)
options.aws_document_key_name=getConfig('aws_document_key_name','',options.configfile)
options.aws_access_key_id = getConfig('aws_access_key_id', '', options.configfile) # aws credentials to use to connect to mozilla_infosec_blocklist
options.aws_secret_access_key = getConfig('aws_secret_access_key', '', options.configfile)
options.aws_bucket_name = getConfig('aws_bucket_name', '', options.configfile)
options.aws_document_key_name = getConfig('aws_document_key_name', '', options.configfile)
def s3_upload_file(file_path, bucket_name, key_name):
"""
Upload a file to the given s3 bucket and return a template url.
"""
conn = boto.connect_s3(aws_access_key_id=options.aws_access_key_id,aws_secret_access_key=options.aws_secret_access_key)
conn = boto.connect_s3(aws_access_key_id=options.aws_access_key_id, aws_secret_access_key=options.aws_secret_access_key)
try:
bucket = conn.get_bucket(bucket_name, validate=False)
except boto.exception.S3ResponseError as e:
except boto.exception.S3ResponseError:
conn.create_bucket(bucket_name)
bucket = conn.get_bucket(bucket_name, validate=False)


@ -17,7 +17,6 @@ from configlib import getConfig, OptionParser
from logging.handlers import SysLogHandler
from pymongo import MongoClient
import os
from mozdef_util.utilities.toUTC import toUTC
@ -52,7 +51,7 @@ def isIPv4(ip):
# netaddr on its own considers 1 and 0 to be valid_ipv4
# so a little sanity check prior to netaddr.
# Use IPNetwork instead of valid_ipv4 to allow CIDR
if '.' in ip and len(ip.split('.'))==4:
if '.' in ip and len(ip.split('.')) == 4:
# some ips are quoted
netaddr.IPNetwork(ip.strip("'").strip('"'))
return True
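The isIPv4 helper above pre-checks for four dotted octets before handing the string to netaddr, because netaddr alone would accept bare integers like "1". An equivalent stdlib-only sketch, assuming the same quoted-string and CIDR inputs as the whitelist files:

```python
import ipaddress

def is_ipv4(value):
    # Stdlib-only sketch of the isIPv4 check above: require four dotted
    # octets first (a bare "1" would otherwise parse as an address),
    # then accept either a plain address or a CIDR network.
    value = value.strip("'").strip('"')
    if '.' not in value or len(value.split('.')) != 4:
        return False
    try:
        # strict=False tolerates host bits, e.g. 192.168.10.1/24
        ipaddress.ip_network(value, strict=False)
        return True
    except ValueError:
        return False
```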
@ -89,7 +88,7 @@ def aggregateAttackerIPs(attackers):
whitelisted = False
logger.debug('working {0}'.format(i))
ip = i['_id']['ipv4address']
ipcidr=netaddr.IPNetwork(ip)
ipcidr = netaddr.IPNetwork(ip)
if not ipcidr.ip.is_loopback() and not ipcidr.ip.is_private() and not ipcidr.ip.is_reserved():
for whitelist_range in options.ipwhitelist:
whitelist_network = netaddr.IPNetwork(whitelist_range)
@ -97,8 +96,8 @@ def aggregateAttackerIPs(attackers):
logger.debug(str(ipcidr) + " is whitelisted as part of " + str(whitelist_network))
whitelisted = True
#strip any host bits 192.168.10/24 -> 192.168.0/24
ipcidrnet=str(ipcidr.cidr)
# strip any host bits 192.168.10/24 -> 192.168.0/24
ipcidrnet = str(ipcidr.cidr)
if ipcidrnet not in iplist and not whitelisted:
iplist.append(ipcidrnet)
else:
@ -110,7 +109,7 @@ def parse_network_whitelist(network_whitelist_location):
networks = []
with open(network_whitelist_location, "r") as text_file:
for line in text_file:
line=line.strip().strip("'").strip('"')
line = line.strip().strip("'").strip('"')
if isIPv4(line) or isIPv6(line):
networks.append(line)
return networks
@ -124,29 +123,29 @@ def main():
client = MongoClient(options.mongohost, options.mongoport)
mozdefdb = client.meteor
ipblocklist = mozdefdb['ipblocklist']
attackers=mozdefdb['attackers']
attackers = mozdefdb['attackers']
# ensure indexes
ipblocklist.create_index([('dateExpiring',-1)])
attackers.create_index([('lastseentimestamp',-1)])
attackers.create_index([('category',1)])
ipblocklist.create_index([('dateExpiring', -1)])
attackers.create_index([('lastseentimestamp', -1)])
attackers.create_index([('category', 1)])
# First, gather IP addresses from recent attackers and add to the block list
attackerIPList = aggregateAttackerIPs(attackers)
# add attacker IPs to the blocklist
# first delete ones we've created from an attacker
ipblocklist.delete_many({'creator': 'mozdef','reference':'attacker'})
ipblocklist.delete_many({'creator': 'mozdef', 'reference': 'attacker'})
# delete any that expired
ipblocklist.delete_many({'dateExpiring': {"$lte": datetime.utcnow()-timedelta(days=options.expireage)}})
ipblocklist.delete_many({'dateExpiring': {"$lte": datetime.utcnow() - timedelta(days=options.expireage)}})
# add the aggregations we've found recently
for ip in attackerIPList:
ipblocklist.insert_one(
{'_id': genMeteorID(),
'address':ip,
'address': ip,
'reference': 'attacker',
'creator':'mozdef',
'creator': 'mozdef',
'dateAdded': datetime.utcnow()})
# Lastly, export the combined blocklist
@ -162,7 +161,7 @@ def main():
{"$project": {"address": 1}},
{"$limit": options.iplimit}
])
IPList=[]
IPList = []
for ip in ipCursor:
IPList.append(ip['address'])
# to text
@ -171,7 +170,7 @@ def main():
outputfile.write("{0}\n".format(ip))
outputfile.close()
# to s3?
if len(options.aws_bucket_name)>0:
if len(options.aws_bucket_name) > 0:
s3_upload_file(options.outputfile, options.aws_bucket_name, options.aws_document_key_name)
except ValueError as e:
@ -203,29 +202,29 @@ def initConfig():
options.category = getConfig('category', 'bruteforcer', options.configfile)
# Max days to look back for attackers
options.attackerage = getConfig('attackerage',90,options.configfile)
options.attackerage = getConfig('attackerage', 90, options.configfile)
# Days after expiration that we purge an ipblocklist entry (from the ui, they don't end up in the export after expiring)
options.expireage = getConfig('expireage',1,options.configfile)
options.expireage = getConfig('expireage', 1, options.configfile)
# Max IPs to emit
options.iplimit = getConfig('iplimit', 1000, options.configfile)
# AWS creds
options.aws_access_key_id=getConfig('aws_access_key_id','',options.configfile) # aws credentials to use to connect to mozilla_infosec_blocklist
options.aws_secret_access_key=getConfig('aws_secret_access_key','',options.configfile)
options.aws_bucket_name=getConfig('aws_bucket_name','',options.configfile)
options.aws_document_key_name=getConfig('aws_document_key_name','',options.configfile)
options.aws_access_key_id = getConfig('aws_access_key_id', '', options.configfile) # aws credentials to use to connect to mozilla_infosec_blocklist
options.aws_secret_access_key = getConfig('aws_secret_access_key', '', options.configfile)
options.aws_bucket_name = getConfig('aws_bucket_name', '', options.configfile)
options.aws_document_key_name = getConfig('aws_document_key_name', '', options.configfile)
def s3_upload_file(file_path, bucket_name, key_name):
"""
Upload a file to the given s3 bucket and return a template url.
"""
conn = boto.connect_s3(aws_access_key_id=options.aws_access_key_id,aws_secret_access_key=options.aws_secret_access_key)
conn = boto.connect_s3(aws_access_key_id=options.aws_access_key_id, aws_secret_access_key=options.aws_secret_access_key)
try:
bucket = conn.get_bucket(bucket_name, validate=False)
except boto.exception.S3ResponseError as e:
except boto.exception.S3ResponseError:
conn.create_bucket(bucket_name)
bucket = conn.get_bucket(bucket_name, validate=False)

View file

@ -120,7 +120,7 @@
},
"requestparameters" : {
"properties" : {
"logStreamName": {
"logstreamname": {
"properties": {
"raw_value": {
"type" : "keyword"

View file

@ -5,13 +5,12 @@
#
import sys
import os
from datetime import datetime, timedelta, tzinfo
try:
from datetime import timezone
utc = timezone.utc
except ImportError:
#Hi there python2 user
# Hi there python2 user
class UTC(tzinfo):
def utcoffset(self, dt):
return timedelta(0)
@ -36,12 +35,14 @@ def normalize(details):
normalized = {}
for f in details:
if f in ("ip", "ip_address"):
if f in ("ip", "ip_address", "client_ip"):
normalized["sourceipaddress"] = details[f]
continue
if f == "result":
if details[f] != "SUCCESS":
normalized["error"] = True
if details[f] == "SUCCESS":
normalized["success"] = True
else:
normalized["success"] = False
normalized[f] = details[f]
return normalized
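The reworked field mapping in this hunk is easy to sanity-check on its own. Below is a standalone sketch of the same logic (the function and field names come from the diff; the sample events are hypothetical):

```python
def normalize(details):
    """Flatten Duo event fields into MozDef-style names (sketch of the diff's logic)."""
    normalized = {}
    for f in details:
        # Any of Duo's IP field spellings maps to sourceipaddress;
        # the original key is dropped.
        if f in ("ip", "ip_address", "client_ip"):
            normalized["sourceipaddress"] = details[f]
            continue
        # A result field is mirrored into an explicit success boolean,
        # and the raw result value is kept as well.
        if f == "result":
            normalized["success"] = details[f] == "SUCCESS"
        normalized[f] = details[f]
    return normalized
```

For example, `normalize({"client_ip": "203.0.113.7", "result": "FRAUD"})` yields a dict with `sourceipaddress` set and `success` False, while the raw `result` value is preserved.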
@ -79,13 +80,14 @@ def process_events(mozmsg, duo_events, etype, state):
continue
details[i] = e[i]
mozmsg.set_category(etype)
mozmsg.details = normalize(details)
if etype == 'administration':
mozmsg.summary = e['action']
elif etype == 'telephony':
mozmsg.summary = e['context']
elif etype == 'authentication':
mozmsg.summary = e['eventtype']+' '+e['result']+' for '+e['username']
mozmsg.summary = e['eventtype'] + ' ' + e['result'] + ' for ' + e['username']
mozmsg.send()
@ -107,20 +109,20 @@ def main():
duo = duo_client.Admin(ikey=options.IKEY, skey=options.SKEY, host=options.URL)
mozmsg = mozdef.MozDefEvent(options.MOZDEF_URL)
mozmsg.tags=['duosecurity', 'logs']
mozmsg.tags = ['duosecurity']
if options.update_tags != '':
mozmsg.tags.append(options.update_tags)
mozmsg.category = 'Authentication'
mozmsg.source = 'DuoSecurity API'
mozmsg.set_category('authentication')
mozmsg.source = 'DuoSecurityAPI'
if options.DEBUG:
mozmsg.debug = options.DEBUG
mozmsg.set_send_to_syslog(True, only_syslog=True)
# This will process events for all 3 log types and send them to MozDef. the state stores the last position in the
# log when this script was last called.
state = process_events(mozmsg, duo.get_administrator_log(mintime=state['administration']+1), 'administration', state)
state = process_events(mozmsg, duo.get_authentication_log(mintime=state['authentication']+1), 'authentication', state)
state = process_events(mozmsg, duo.get_telephony_log(mintime=state['telephony']+1), 'telephony', state)
state = process_events(mozmsg, duo.get_administrator_log(mintime=state['administration'] + 1), 'administration', state)
state = process_events(mozmsg, duo.get_authentication_log(mintime=state['authentication'] + 1), 'authentication', state)
state = process_events(mozmsg, duo.get_telephony_log(mintime=state['telephony'] + 1), 'telephony', state)
pickle.dump(state, open(options.statepath, 'wb'))

View file

@ -13,7 +13,6 @@ import logging
from configlib import getConfig, OptionParser
from datetime import datetime, date, timedelta
import os
from mozdef_util.elasticsearch_client import ElasticsearchClient
from mozdef_util.utilities.logger import logger

View file

@ -15,10 +15,8 @@ from logging.handlers import SysLogHandler
from time import sleep
import socket
import sys
import os
from mozdef_util.utilities.toUTC import toUTC
from mozdef_util.elasticsearch_client import ElasticsearchClient, ElasticsearchBadServer
from mozdef_util.elasticsearch_client import ElasticsearchClient
from mozdef_util.query_models import SearchQuery, Aggregation
logger = logging.getLogger(sys.argv[0])

View file

@ -5,21 +5,17 @@
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
import os
import sys
import logging
import requests
import json
from configlib import getConfig, OptionParser
from datetime import datetime
from datetime import timedelta
from logging.handlers import SysLogHandler
from httplib2 import Http
from oauth2client.client import SignedJwtAssertionCredentials
from apiclient.discovery import build
import sys
import os
from mozdef_util.utilities.toUTC import toUTC
logger = logging.getLogger(sys.argv[0])
@ -187,7 +183,7 @@ def main():
state.data['lastrun'] = lastrun
state.write_state_file()
except Exception as e:
logger.error("Unhandled exception, terminating: %r"%e)
logger.error("Unhandled exception, terminating: %r" % e)
def initConfig():
@ -203,10 +199,10 @@ def initConfig():
# for detailed information on delegating a service account for use in gathering google admin sdk reports
#
#google's json credential file exported from the project/admin console
# google's json credential file exported from the project/admin console
options.jsoncredentialfile=getConfig('jsoncredentialfile','/path/to/filename.json',options.configfile)
#email of admin to impersonate as a service account
# email of admin to impersonate as a service account
options.impersonate = getConfig('impersonate', 'someone@yourcompany.com', options.configfile)

View file

@ -16,8 +16,6 @@ from requests.auth import HTTPBasicAuth
from configlib import getConfig, OptionParser
from logging.handlers import SysLogHandler
import sys
import os
from mozdef_util.utilities.toUTC import toUTC
from mozdef_util.elasticsearch_client import ElasticsearchClient
@ -71,7 +69,7 @@ def main():
logger.debug('Creating %s index' % index)
es.create_index(index, default_mapping_contents)
except Exception as e:
logger.error("Unhandled exception, terminating: %r"%e)
logger.error("Unhandled exception, terminating: %r" % e)
auth = HTTPBasicAuth(options.mquser, options.mqpassword)

View file

@ -14,7 +14,6 @@ from configlib import getConfig, OptionParser
from logging.handlers import SysLogHandler
from pymongo import MongoClient
import os
from mozdef_util.utilities.toUTC import toUTC
from mozdef_util.elasticsearch_client import ElasticsearchClient
from mozdef_util.query_models import SearchQuery, TermMatch

View file

@ -5,26 +5,21 @@
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
import os
import sys
from configlib import getConfig,OptionParser
from configlib import getConfig, OptionParser
import logging
from logging.handlers import SysLogHandler
import json
from datetime import datetime
from datetime import timedelta
from datetime import date
import requests
import netaddr
import sys
import os
from mozdef_util.utilities.toUTC import toUTC
from mozdef_util.elasticsearch_client import ElasticsearchClient
logger = logging.getLogger(sys.argv[0])
logger.level=logging.INFO
logger.level = logging.INFO
formatter = logging.Formatter('%(asctime)s %(name)s %(levelname)s %(message)s')
@ -69,20 +64,17 @@ def main():
logger.addHandler(sh)
logger.debug('started')
#logger.debug(options)
# logger.debug(options)
try:
es = ElasticsearchClient((list('{0}'.format(s) for s in options.esservers)))
s = requests.Session()
s.headers.update({'Accept': 'application/json'})
s.headers.update({'Content-type': 'application/json'})
s.headers.update({'Authorization':'SSWS {0}'.format(options.apikey)})
s.headers.update({'Authorization': 'SSWS {0}'.format(options.apikey)})
#capture the time we start running so next time we catch any events created while we run.
# capture the time we start running so next time we catch any events created while we run.
state = State(options.state_file)
lastrun = toUTC(datetime.now()).isoformat()
#in case we don't archive files..only look at today and yesterday's files.
yesterday=date.strftime(datetime.utcnow()-timedelta(days=1),'%Y/%m/%d')
today = date.strftime(datetime.utcnow(),'%Y/%m/%d')
r = s.get('https://{0}/api/v1/events?startDate={1}&limit={2}'.format(
options.oktadomain,
@ -138,7 +130,7 @@ def main():
else:
logger.error('Could not get Okta events HTTP error code {} reason {}'.format(r.status_code, r.reason))
except Exception as e:
logger.error("Unhandled exception, terminating: %r"%e)
logger.error("Unhandled exception, terminating: %r" % e)
def initConfig():

View file

@ -18,14 +18,14 @@ from datetime import date
from datetime import timedelta
from configlib import getConfig, OptionParser
import sys
import os
from logging.handlers import SysLogHandler
from mozdef_util.utilities.toUTC import toUTC
from mozdef_util.elasticsearch_client import ElasticsearchClient
logger = logging.getLogger(sys.argv[0])
logger.level=logging.WARNING
logger.level = logging.WARNING
formatter = logging.Formatter('%(asctime)s %(name)s %(levelname)s %(message)s')
@ -47,10 +47,10 @@ def esPruneIndexes():
if pruning != '0':
index_to_prune = index
if rotation == 'daily':
idate = date.strftime(toUTC(datetime.now()) - timedelta(days=int(pruning)),'%Y%m%d')
idate = date.strftime(toUTC(datetime.now()) - timedelta(days=int(pruning)), '%Y%m%d')
index_to_prune += '-%s' % idate
elif rotation == 'monthly':
idate = date.strftime(datetime.utcnow() - timedelta(days=31*int(pruning)),'%Y%m')
idate = date.strftime(datetime.utcnow() - timedelta(days=31 * int(pruning)), '%Y%m')
index_to_prune += '-%s' % idate
if index_to_prune in indices:
@ -62,7 +62,7 @@ def esPruneIndexes():
logger.error("Unhandled exception while deleting %s, terminating: %r" % (index_to_prune, e))
except Exception as e:
logger.error("Unhandled exception, terminating: %r"%e)
logger.error("Unhandled exception, terminating: %r" % e)
def initConfig():
@ -71,43 +71,43 @@ def initConfig():
'output',
'stdout',
options.configfile
)
)
# syslog hostname
options.sysloghostname = getConfig(
'sysloghostname',
'localhost',
options.configfile
)
)
options.syslogport = getConfig(
'syslogport',
514,
options.configfile
)
)
options.esservers = list(getConfig(
'esservers',
'http://localhost:9200',
options.configfile).split(',')
)
)
options.indices = list(getConfig(
'backup_indices',
'events,alerts,.kibana',
options.configfile).split(',')
)
)
options.dobackup = list(getConfig(
'backup_dobackup',
'1,1,1',
options.configfile).split(',')
)
)
options.rotation = list(getConfig(
'backup_rotation',
'daily,monthly,none',
options.configfile).split(',')
)
)
options.pruning = list(getConfig(
'backup_pruning',
'20,0,0',
options.configfile).split(',')
)
)
if __name__ == '__main__':

View file

@ -19,7 +19,6 @@ from datetime import timedelta
from configlib import getConfig, OptionParser
import json
import sys
import os
from mozdef_util.utilities.toUTC import toUTC
from mozdef_util.elasticsearch_client import ElasticsearchClient
@ -133,48 +132,48 @@ def initConfig():
'output',
'stdout',
options.configfile
)
)
# syslog hostname
options.sysloghostname = getConfig(
'sysloghostname',
'localhost',
options.configfile
)
)
options.syslogport = getConfig(
'syslogport',
514,
options.configfile
)
)
options.esservers = list(getConfig(
'esservers',
'http://localhost:9200',
options.configfile).split(',')
)
)
options.indices = list(getConfig(
'backup_indices',
'events,alerts,.kibana',
options.configfile).split(',')
)
)
options.dobackup = list(getConfig(
'backup_dobackup',
'1,1,1',
options.configfile).split(',')
)
)
options.rotation = list(getConfig(
'backup_rotation',
'daily,monthly,none',
options.configfile).split(',')
)
)
options.pruning = list(getConfig(
'backup_pruning',
'20,0,0',
options.configfile).split(',')
)
)
options.weekly_rotation_indices = list(getConfig(
'weekly_rotation_indices',
'events',
options.configfile).split(',')
)
)
default_mapping_location = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'defaultMappingTemplate.json')
options.default_mapping_file = getConfig('default_mapping_file', default_mapping_location, options.configfile)

View file

@ -9,7 +9,6 @@
# You only need to run it once, it will setup the templates
# used as future indexes are created
import requests
import sys
import os
from configlib import getConfig, OptionParser
@ -23,17 +22,17 @@ def initConfig():
'esservers',
'http://localhost:9200',
options.configfile).split(',')
)
)
options.templatenames = list(getConfig(
'templatenames',
'defaulttemplate',
options.configfile).split(',')
)
)
options.templatefiles = list(getConfig(
'templatefiles',
'',
options.configfile).split(',')
)
)
if __name__ == '__main__':

View file

@ -14,22 +14,14 @@
import json
import os
import sys
import socket
import time
from configlib import getConfig, OptionParser
from datetime import datetime
from hashlib import md5
import boto.sqs
from boto.sqs.message import RawMessage
import base64
import kombu
from mozdef_util.utilities.toUTC import toUTC
from mozdef_util.utilities.to_unicode import toUnicode
from mozdef_util.utilities.remove_at import removeAt
from mozdef_util.utilities.is_cef import isCEF
from mozdef_util.utilities.logger import logger, initLogger
from mozdef_util.elasticsearch_client import ElasticsearchClient, ElasticsearchBadServer, ElasticsearchInvalidIndex, ElasticsearchException
from mozdef_util.elasticsearch_client import ElasticsearchClient
def getDocID(sqsregionidentifier):

View file

@ -14,8 +14,6 @@ from configlib import getConfig, OptionParser
from logging.handlers import SysLogHandler
from pymongo import MongoClient
import sys
import os
from mozdef_util.utilities.toUTC import toUTC
from mozdef_util.elasticsearch_client import ElasticsearchClient
from mozdef_util.query_models import SearchQuery, TermMatch

View file

@ -6,7 +6,6 @@
# Copyright (c) 2017 Mozilla Corporation
import sys
import os
from configlib import getConfig, OptionParser
from mozdef_util.elasticsearch_client import ElasticsearchClient

View file

@ -8,21 +8,13 @@
import copy
import os
import sys
import re
import json
import csv
import string
import ConfigParser
import tempfile
import logging
import socket
import hashlib
import MySQLdb
from requests import Session
from optparse import OptionParser
from datetime import datetime
from os import stat
from os.path import exists, getsize
class MozDefError(Exception):
@ -108,7 +100,7 @@ class MozDefEvent():
raise MozDefError('Summary is a required field')
try:
r = self.httpsession.post(self.url, json.dumps(log_msg, encoding='utf-8'), verify=self.verify_certificate)
self.httpsession.post(self.url, json.dumps(log_msg, encoding='utf-8'), verify=self.verify_certificate)
except Exception as e:
if not self.fire_and_forget_mode:
@ -123,7 +115,7 @@ def main():
mdEvent.debug = True
mdEvent.fire_and_forget_mode = False
#connect to mysql
# connect to mysql
db=MySQLdb.connect(host=options.hostname, user=options.username,passwd=options.password,db=options.database)
c=db.cursor(MySQLdb.cursors.DictCursor)
@ -146,7 +138,7 @@ def main():
duration = call['LeaveTime'] - call['JoinTime']
call['CallDuration'] = duration.seconds
#fix up the data for json
# fix up the data for json
for k in call.keys():
# convert datetime objects to isoformat for json serialization
if isinstance(call[k], datetime):
@ -157,7 +149,7 @@ def main():
call[k] = call[k].decode('utf-8','ignore').encode('ascii','ignore')
mdEvent.send(timestamp=call['JoinTime'],
summary='Vidyo call status for '+call['UniqueCallID'].encode('ascii', 'ignore'),
summary='Vidyo call status for ' + call['UniqueCallID'].encode('ascii', 'ignore'),
tags=['vidyo'],
details=call,
category='vidyo',

View file

@ -69,6 +69,8 @@ services:
image: mozdef/mozdef_alerts
env_file:
- cloudy_mozdef.env
volumes:
- /opt/mozdef/docker/compose/mozdef_alerts/files/config.py:/opt/mozdef/envs/mozdef/alerts/lib/config.py
restart: always
command: bash -c 'source /opt/mozdef/envs/python/bin/activate && celery -A celeryconfig worker --loglevel=info --beat'
depends_on:
@ -186,6 +188,23 @@ services:
- default
volumes:
- geolite_db:/opt/mozdef/envs/mozdef/data/
mq_sqs:
image: mozdef/mozdef_mq_worker
env_file:
- cloudy_mozdef.env
- cloudy_mozdef_mq_sns_sqs.env
restart: always
command: bash -c 'source /opt/mozdef/envs/python/bin/activate && python esworker_sns_sqs.py -c esworker_sns_sqs.conf'
scale: 1
depends_on:
- base
- rabbitmq
- loginput
- bootstrap
networks:
- default
volumes:
- geolite_db:/opt/mozdef/envs/mozdef/data/
volumes:
cron:
geolite_db:

View file

@ -2,6 +2,8 @@ FROM centos:7
LABEL maintainer="mozdef@mozilla.com"
# When changing kibana version remember to edit
# docker/compose/mozdef_bootstrap/files/initial_setup.py accordingly
ENV KIBANA_VERSION 5.6.7
RUN \

View file

@ -7,6 +7,7 @@ RUN mkdir -p /opt/mozdef/envs/mozdef/docker/conf
COPY cron/defaultMappingTemplate.json /opt/mozdef/envs/mozdef/cron/defaultMappingTemplate.json
COPY docker/compose/mozdef_cron/files/backup.conf /opt/mozdef/envs/mozdef/cron/backup.conf
COPY docker/compose/mozdef_bootstrap/files/initial_setup.py /opt/mozdef/envs/mozdef/initial_setup.py
COPY docker/compose/mozdef_bootstrap/files/index_mappings /opt/mozdef/envs/mozdef/index_mappings
RUN chown -R mozdef:mozdef /opt/mozdef/envs/mozdef/

View file

@ -0,0 +1,6 @@
{
"title": "alerts",
"timeFieldName": "utctimestamp",
"notExpandable": true,
"fields": "[{\"name\":\"_id\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"_index\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":false},{\"name\":\"_score\",\"type\":\"number\",\"count\":0,\"scripted\":false,\"searchable\":false,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"_source\",\"type\":\"_source\",\"count\":0,\"scripted\":false,\"searchable\":false,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"_type\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":false},{\"name\":\"category\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"notify_mozdefbot\",\"type\":\"boolean\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"severity\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"summary\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"tags\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"utctimestamp\",\"type\":\"date\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true}]"
}

View file

@ -0,0 +1,6 @@
{
"title": "events-weekly",
"timeFieldName": "utctimestamp",
"notExpandable": true,
"fields": "[{\"name\":\"_id\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"_index\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":false},{\"name\":\"_score\",\"type\":\"number\",\"count\":0,\"scripted\":false,\"searchable\":false,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"_source\",\"type\":\"_source\",\"count\":0,\"scripted\":false,\"searchable\":false,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"_type\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":false},{\"name\":\"category\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"details.apiversion.raw_value\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"details.destinationipaddress\",\"type\":\"ip\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"details.destinationport\",\"type\":\"number\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"details.hostname\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"details.requestparameters.logstreamname.raw_value\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"details.sourceipaddress\",\"type\":\"ip\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"details.sourceipv4address\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"details.sourceport\",\"type\":\"number\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"details.srcip\",\"type\":\"ip\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"details.success\",\"type\":\"boolean\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"hostname\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"mozdefhostname\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"processid\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"processname\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"receivedtimestamp\",\"type\":\"date\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"severity\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"source\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"summary\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"timestamp\",\"type\":\"date\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"utctimestamp\",\"type\":\"date\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"version\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true}]"
}

View file

@ -0,0 +1,6 @@
{
"title": "events",
"timeFieldName": "utctimestamp",
"notExpandable": true,
"fields": "[{\"name\":\"_id\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"_index\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":false},{\"name\":\"_score\",\"type\":\"number\",\"count\":0,\"scripted\":false,\"searchable\":false,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"_source\",\"type\":\"_source\",\"count\":0,\"scripted\":false,\"searchable\":false,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"_type\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":false},{\"name\":\"category\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"details.apiversion.raw_value\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"details.destinationipaddress\",\"type\":\"ip\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"details.destinationport\",\"type\":\"number\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"details.hostname\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"details.requestparameters.logstreamname.raw_value\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"details.sourceipaddress\",\"type\":\"ip\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"details.sourceipv4address\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"details.sourceport\",\"type\":\"number\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"details.srcip\",\"type\":\"ip\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"details.success\",\"type\":\"boolean\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"hostname\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"mozdefhostname\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"processid\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"processname\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"receivedtimestamp\",\"type\":\"date\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"severity\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"source\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"summary\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"timestamp\",\"type\":\"date\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"utctimestamp\",\"type\":\"date\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"version\",\"type\":\"string\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true}]"
}

View file

@ -11,10 +11,10 @@ from datetime import datetime, timedelta
from time import sleep
from configlib import getConfig
import json
import time
from elasticsearch.exceptions import ConnectionError
import sys
import os
from mozdef_util.elasticsearch_client import ElasticsearchClient
@ -39,6 +39,8 @@ event_index_name = current_date.strftime("events-%Y%m%d")
previous_event_index_name = (current_date - timedelta(days=1)).strftime("events-%Y%m%d")
weekly_index_alias = 'events-weekly'
alert_index_name = current_date.strftime("alerts-%Y%m")
kibana_index_name = '.kibana'
kibana_version = '5.6.7'
index_settings_str = ''
with open(args.default_mapping_file) as data_file:
@ -78,6 +80,7 @@ index_settings['settings'] = {
}
}
# Create initial indices
if event_index_name not in all_indices:
print "Creating " + event_index_name
client.create_index(event_index_name, index_config=index_settings)
@ -96,3 +99,25 @@ client.create_alias('alerts', alert_index_name)
if weekly_index_alias not in all_indices:
print "Creating " + weekly_index_alias
client.create_alias_multiple_indices(weekly_index_alias, [event_index_name, previous_event_index_name])
if kibana_index_name not in all_indices:
print "Creating " + kibana_index_name
client.create_index(kibana_index_name)
# Create index patterns and assign default index mapping
time.sleep(1)
index_mappings_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'index_mappings')
listing = os.listdir(index_mappings_path)
for infile in listing:
json_file_path = os.path.join(index_mappings_path, infile)
with open(json_file_path) as json_data:
mapping_data = json.load(json_data)
print "Creating {0} index mapping".format(mapping_data['title'])
client.save_object(body=mapping_data, index='.kibana', doc_type='index-pattern', doc_id=mapping_data['title'])
# Assign default index to 'events'
client.flush('.kibana')
default_mapping_data = {
"defaultIndex": 'events'
}
print "Assigning events as default index mapping"
client.save_object(default_mapping_data, '.kibana', 'config', kibana_version)

docs/requirements.txt (new file, 2 additions)
View file

@ -0,0 +1,2 @@
sphinx
sphinx_rtd_theme

View file

@ -44,8 +44,9 @@ Requirements:
This test should pass and you will have confirmed you have a working environment.
At this point, begin development and periodically run your unit-tests locally with the following command::
At this point, begin development and periodically run your unit-tests locally with the following commands::
make build-tests
make run-tests TEST_CASE=tests/alerts/[YOUR ALERT TEST FILE].py
@ -72,6 +73,74 @@ How to get the alert in MozDef?
The best way to get your alert into MozDef (once it's completed) is to propose a pull request and ask for a review from a MozDef developer. They will be able to help you get the most out of the alert and help point out pitfalls. Once the alert is accepted into MozDef master, there is a process by which MozDef installations can make use or 'enable' that alert. It's best to work with that MozDef instance's maintainer to enable any new alerts.
Example first alert
-------------------
Let's step through creating a simple alert you might use to verify a working deployment.
For this sub-section it is assumed that you have a working MozDef instance which resides in some MozDefDir and is receiving logs.
First move to your MozDefDir and issue
::
make new-alert
You will be asked for a string to name a new alert and the associated test. For this example we will use the string "foo"
::
make new-alert
Enter your alert name (Example: proxy drop executable): foo
Creating alerts/foo.py
Creating tests/alerts/test_foo.py
These will be created as above in the alerts and tests/alerts directories.
There's a lot to the generated code, but a class called "AlertFoo" is of immediate interest and will define when and how to alert.
Here's the head of the auto-generated class.
::
class AlertFoo(AlertTask):
def main(self):
# Create a query to look back the last 20 minutes
search_query = SearchQuery(minutes=20)
# Add search terms to our query
search_query.add_must([
TermMatch('category', 'helloworld'),
ExistsMatch('details.sourceipaddress'),
])
...
In essence, this code tells MozDef to query the collection of logs for messages timestamped within the last 20 minutes (from the time of query execution) that are of category "helloworld" and also have a source IP address.
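To make the query's shape concrete, here is a rough standalone sketch of the kind of Elasticsearch bool query those terms correspond to. This is illustrative only: the exact JSON that mozdef_util's SearchQuery emits may differ, and ``sketch_query`` is a hypothetical helper, not part of MozDef.

```python
from datetime import datetime, timedelta

def sketch_query(minutes, category):
    # Illustrative shape only: a time-range filter plus the two match
    # terms from the guide (TermMatch on category, ExistsMatch on
    # details.sourceipaddress), combined in an Elasticsearch bool query.
    since = (datetime.utcnow() - timedelta(minutes=minutes)).isoformat()
    return {
        "query": {
            "bool": {
                "must": [
                    {"range": {"utctimestamp": {"gte": since}}},
                    {"term": {"category": category}},
                    {"exists": {"field": "details.sourceipaddress"}},
                ]
            }
        }
    }

q = sketch_query(20, "helloworld")
```

Changing the ``category`` argument here mirrors editing the TermMatch line as described next.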
If you're pumping logs into MozDef, odds are you don't have any which will be tagged as "helloworld". You can of course create those logs, but let's assume that you have logs tagged as "syslog" for the moment.
Change the TermMatch line to
::
TermMatch('category', 'syslog'),
and you will get alerts for syslog labeled messages.
Ideally you should edit your test to match, but it's not strictly necessary.
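For context, a ``SearchQuery`` like this compiles down to an Elasticsearch bool query before execution. The stdlib-only sketch below shows roughly the JSON that gets sent; the exact shape of the range clause and the ``utctimestamp`` field name are assumptions for illustration, not taken from the alert framework's source:

```python
import json
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
# Rough equivalent of SearchQuery(minutes=20) with the two must clauses.
query = {
    "query": {
        "bool": {
            "must": [
                {"term": {"category": "syslog"}},
                {"exists": {"field": "details.sourceipaddress"}},
                {"range": {"utctimestamp": {
                    "gte": (now - timedelta(minutes=20)).isoformat(),
                    "lte": now.isoformat(),
                }}},
            ]
        }
    }
}
print(json.dumps(query, indent=2))
```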
Next we will need to enable the alert and schedule it. At the time of writing this is a bit annoying.
Open the file
::
docker/compose/mozdef_alerts/files/config.py
or simply
::
alerts/lib/config.py
if you are not working from the docker images
and add your new foo alert to the others with a crontab-style schedule
::
ALERTS = {
'foo.AlertFoo': {'schedule': crontab(minute='*/1')},
'bruteforce_ssh.AlertBruteforceSsh': {'schedule': crontab(minute='*/1')},
'unauth_ssh.AlertUnauthSSH': {'schedule': crontab(minute='*/1')},
}
Restart your MozDef instance and you should begin seeing alerts on the alerts page.
Questions?
----------

@ -1,11 +1,82 @@
Benchmarking
============
MozDef for AWS
===============
**What is MozDef for AWS**
Cloud based MozDef is an opinionated deployment of the MozDef services, created in 2018 to help AWS users
ingest CloudTrail and GuardDuty events and provide security services.
.. image:: images/cloudformation-launch-stack.png
:target: https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=mozdef-for-aws&templateURL=https://s3-us-west-2.amazonaws.com/mozdef.infosec.allizom.org/cf/mozdef-parent.yml
Feedback
-----------
MozDef for AWS is new and we'd love your feedback. Try filing GitHub issues here in the repository or connect with us
in the Mozilla Discourse Security Category.
https://discourse.mozilla.org/c/security
You can also take a short survey on MozDef for AWS after you have deployed it.
https://goo.gl/forms/JYjTYDK45d3JdnGd2
Dependencies
--------------
MozDef requires the following:
- A DNS name (e.g. cloudymozdef.security.allizom.org)
- An OIDC Provider with ClientID, ClientSecret, and Discovery URL
- Mozilla uses Auth0, but you can use any OIDC provider you like: Shibboleth, KeyCloak, AWS Cognito, Okta, Ping, etc.
- An ACM Certificate in the deployment region for your DNS name
- A VPC with three public subnets available.
- It is advised that this VPC be dedicated to MozDef or used solely for security automation.
- An SQS queue receiving GuardDuty events. At the time of writing this is not required, but it may be in the future.
Supported Regions
------------------
MozDef for AWS is currently only supported in us-west-2 but will onboard additional regions over time.
Architecture
-------------
.. image:: images/MozDefCloudArchitecture.png
Deployment Process
-------------------
1. Launch the one click stack and provide the requisite values.
2. Wait for the stack to complete. You'll see several nested stacks in the Cloudformation console. *Note: This may take a while*
3. Navigate to the URL you set up for MozDef. It should redirect you to the single sign on provider. If successful you'll see the MozDef UI.
4. Try navigating to Elasticsearch at https://your_base_url:9090
You should see the following:
::
{
"name" : "SMf4400",
"cluster_name" : "656532927350:mozdef-mozdef-yemjpbnpw8xb",
"cluster_uuid" : "_yBEIsFkQH-nEZfrFgj7mg",
"version" : {
"number" : "5.6.8",
"build_hash" : "688ecce",
"build_date" : "2018-09-11T14:44:40.463Z",
"build_snapshot" : false,
"lucene_version" : "6.6.1"
},
"tagline" : "You Know, for Search"
}
5. Test out Kibana at https://your_base_url:9090/_plugin/kibana/app/kibana#/discover?_g=()
Using MozDef
-------------
Refer back to our other docs on how to use MozDef for general guidance. Cloud specific instructions will evolve here.
If you saw something about MozDef for AWS at re:Invent 2018 and you want to contribute, we'd love your PRs.

@ -97,7 +97,7 @@ pygments_style = 'sphinx'
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
html_theme = 'sphinx_rtd_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the

Binary data: docs/source/images/cloudformation-launch-stack.png (new file, 3.8 KiB; not displayed)

@ -14,6 +14,7 @@ Table of Contents
demo
installation
alert_development_guide
mozdef_util
screenshots
usage
cloud_deployment

@ -61,6 +61,10 @@ Python
Create a mozdef user::
adduser mozdef -d /opt/mozdef
cp /etc/skel/.bash* /opt/mozdef/
cd /opt/mozdef
chown mozdef: .bash*
chown -R mozdef: *
We need to install a python2.7 virtualenv.
@ -74,7 +78,8 @@ On APT-based systems::
Then::
su - mozdef
sudo -i -u mozdef -g mozdef
mkdir /opt/mozdef/python2.7
wget https://www.python.org/ftp/python/2.7.11/Python-2.7.11.tgz
tar xvzf Python-2.7.11.tgz
cd Python-2.7.11
@ -132,7 +137,7 @@ You can then install the rabbitmq server::
To start rabbitmq at startup::
chkconfig rabbitmq-server on
systemctl enable rabbitmq-server
On APT-based systems ::
@ -178,8 +183,11 @@ We have a mongod.conf in the config directory prepared for you. To use it simply
For meteor installation follow these steps::
sudo -i -u mozdef -g mozdef
curl https://install.meteor.com/?release=1.8 | sh
For node you can exit from the mozdef user::
wget https://nodejs.org/dist/v8.12.0/node-v8.12.0.tar.gz
tar xvzf node-v8.12.0.tar.gz
cd node-v8.12.0
@ -187,8 +195,10 @@ For meteor installation follow these steps::
make
sudo make install
Then from the meteor subdirectory of this git repository (/opt/mozdef/meteor) run::
Then from the meteor subdirectory of this git repository (/opt/mozdef/meteor) run as the mozdef user with venv activated::
sudo -i -u mozdef -g mozdef
source envs/python/bin/activate
meteor add iron-router
If you wish to use meteor as the authentication handler you'll also need to install the Accounts-Password pkg::
@ -229,12 +239,12 @@ Alternatively you can run the meteor UI in 'deployment' mode using a native node
First install node::
yum install bzip2 gcc gcc-c++ sqlite sqlite-devel
wget https://nodejs.org/dist/v4.7.0/node-v4.7.0.tar.gz
tar xvfz node-v4.7.0.tar.gz
cd node-v4.7.0
python configure
wget https://nodejs.org/dist/v8.12.0/node-v8.12.0.tar.gz
tar xvzf node-v8.12.0.tar.gz
cd node-v8.12.0
./configure
make
make install
sudo make install
Then bundle the meteor portion of mozdef to deploy on another server::
@ -306,7 +316,7 @@ If you don't have this package in your repos, before installing create `/etc/yum
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/OS/OSRELEASE/$basearch/
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1
@ -317,9 +327,9 @@ UWSGI
We use `uwsgi`_ to interface python and nginx, in your venv execute the following::
wget https://projects.unbit.it/downloads/uwsgi-2.0.12.tar.gz
tar zxvf uwsgi-2.0.12.tar.gz
cd uwsgi-2.0.12
wget https://projects.unbit.it/downloads/uwsgi-2.0.17.1.tar.gz
tar zxvf uwsgi-2.0.17.1.tar.gz
cd uwsgi-2.0.17.1
~/python2.7/bin/python uwsgiconfig.py --build
~/python2.7/bin/python uwsgiconfig.py --plugin plugins/python core
cp python_plugin.so ~/envs/python/bin/

@ -0,0 +1,9 @@
Mozdef_util Library
===================
We provide a library used to interact with MozDef components.
.. include:: mozdef_util/connect.rst
.. include:: mozdef_util/create.rst
.. include:: mozdef_util/search.rst
.. include:: mozdef_util/match_query_classes.rst

@ -0,0 +1,8 @@
Connecting to Elasticsearch
---------------------------
.. code-block:: python
:linenos:
from mozdef_util.elasticsearch_client import ElasticsearchClient
es_client = ElasticsearchClient("http://127.0.0.1:9200")

@ -0,0 +1,85 @@
Creating/Updating Documents
---------------------------
Create a new Event
^^^^^^^^^^^^^^^^^^
.. code-block:: python
:linenos:
event_dict = {
"example_key": "example value"
}
es_client.save_event(body=event_dict)
Update an existing event
^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: python
:linenos:
event_dict = {
"example_key": "example new value"
}
# Assuming 12345 is the id of the existing entry
es_client.save_event(body=event_dict, doc_id="12345")
Create a new alert
^^^^^^^^^^^^^^^^^^
.. code-block:: python
:linenos:
alert_dict = {
"example_key": "example value"
}
es_client.save_alert(body=alert_dict)
Update an existing alert
^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: python
:linenos:
alert_dict = {
"example_key": "example new value"
}
# Assuming 12345 is the id of the existing entry
es_client.save_alert(body=alert_dict, doc_id="12345")
Create a new generic document
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: python
:linenos:
document_dict = {
"example_key": "example value"
}
es_client.save_object(index='randomindex', doc_type='randomtype', body=document_dict)
Update an existing document
^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: python
:linenos:
document_dict = {
"example_key": "example new value"
}
# Assuming 12345 is the id of the existing entry
es_client.save_object(index='randomindex', doc_type='randomtype', body=document_dict, doc_id="12345")
Bulk Importing
^^^^^^^^^^^^^^
.. code-block:: python
:linenos:
from mozdef_util.elasticsearch_client import ElasticsearchClient
es_client = ElasticsearchClient("http://127.0.0.1:9200", bulk_amount=30, bulk_refresh_time=5)
es_client.save_event(body={'key': 'value'}, bulk=True)
- Line 2: bulk_amount (defaults to 100) specifies how many messages should sit in the bulk queue before they get written to Elasticsearch
- Line 2: bulk_refresh_time (defaults to 30) is the maximum time in seconds before a bulk flush is forced
- Line 3: bulk (defaults to False) determines whether an event is added to the bulk queue or written immediately
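Conceptually the client flushes its internal queue when either threshold is hit: enough queued messages, or enough elapsed time. A toy, stdlib-only model of that flush policy (an illustration of the two parameters, not the actual client internals):

```python
import time

class ToyBulkQueue:
    """Flush when the queue holds bulk_amount items or
    bulk_refresh_time seconds have passed since the last flush."""

    def __init__(self, bulk_amount=100, bulk_refresh_time=30):
        self.bulk_amount = bulk_amount
        self.bulk_refresh_time = bulk_refresh_time
        self.items = []
        self.last_flush = time.monotonic()
        self.flushed = []

    def add(self, doc):
        self.items.append(doc)
        if (len(self.items) >= self.bulk_amount or
                time.monotonic() - self.last_flush >= self.bulk_refresh_time):
            self.flush()

    def flush(self):
        # Stand-in for a single _bulk POST to Elasticsearch.
        self.flushed.extend(self.items)
        self.items = []
        self.last_flush = time.monotonic()

q = ToyBulkQueue(bulk_amount=3, bulk_refresh_time=9999)
for i in range(7):
    q.add({'n': i})
print(len(q.flushed), len(q.items))  # 6 1 -- two full batches sent, one event pending
```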

@ -0,0 +1,145 @@
Match/Query Classes
-------------------
ExistsMatch
^^^^^^^^^^^
Checks to see if a specific field exists in a document
.. code-block:: python
:linenos:
from mozdef_util.query_models import ExistsMatch
ExistsMatch("randomfield")
TermMatch
^^^^^^^^^
Checks if a specific field exactly matches the given value
.. code-block:: python
:linenos:
from mozdef_util.query_models import TermMatch
TermMatch("details.ip", "127.0.0.1")
TermsMatch
^^^^^^^^^^
Checks if a specific field matches any of the given values
.. code-block:: python
:linenos:
from mozdef_util.query_models import TermsMatch
TermsMatch("details.ip", ["127.0.0.1", "1.2.3.4"])
WildcardMatch
^^^^^^^^^^^^^
Allows wildcard expressions to be used in looking for documents where a field contains all or part of a value
.. code-block:: python
:linenos:
from mozdef_util.query_models import WildcardMatch
WildcardMatch('summary', 'test*')
PhraseMatch
^^^^^^^^^^^
Checks if a field contains a specific phrase (includes spaces)
.. code-block:: python
:linenos:
from mozdef_util.query_models import PhraseMatch
PhraseMatch('summary', 'test run')
BooleanMatch
^^^^^^^^^^^^
Used to apply specific "matchers" to a query. It is unlikely to be used outside of SearchQuery.
.. code-block:: python
:linenos:
from mozdef_util.query_models import ExistsMatch, TermMatch, BooleanMatch
must = [
ExistsMatch('details.ip')
]
must_not = [
TermMatch('_type', 'alert')
]
BooleanMatch(must=must, should=[], must_not=must_not)
MissingMatch
^^^^^^^^^^^^
Checks if a field does not exist in a document
.. code-block:: python
:linenos:
from mozdef_util.query_models import MissingMatch
MissingMatch('summary')
RangeMatch
^^^^^^^^^^
Checks if a field value is within a specific range (mostly used to look for documents in a time frame)
.. code-block:: python
:linenos:
from mozdef_util.query_models import RangeMatch
RangeMatch('utctimestamp', "2016-08-12T21:07:12.316450+00:00", "2016-08-13T21:07:12.316450+00:00")
QueryStringMatch
^^^^^^^^^^^^^^^^
Uses a custom query string to generate the match (similar to what you would type into Kibana)
.. code-block:: python
:linenos:
from mozdef_util.query_models import QueryStringMatch
QueryStringMatch('summary: test')
Aggregation
^^^^^^^^^^^
Used to aggregate results based on a specific field
.. code-block:: python
:linenos:
from mozdef_util.query_models import Aggregation, SearchQuery, ExistsMatch
search_query = SearchQuery(hours=24)
must = [
ExistsMatch('seenindicator')
]
search_query.add_must(must)
aggr = Aggregation('details.ip')
search_query.add_aggregation(aggr)
results = search_query.execute(es_client, indices=['events','events-previous'])

@ -0,0 +1,154 @@
Searching for documents
-----------------------
Simple search
^^^^^^^^^^^^^
.. code-block:: python
:linenos:
from mozdef_util.query_models import SearchQuery, TermMatch, ExistsMatch
search_query = SearchQuery(hours=24)
must = [
TermMatch('category', 'brointel'),
ExistsMatch('seenindicator')
]
search_query.add_must(must)
results = search_query.execute(es_client, indices=['events','events-previous'])
SimpleResults
When you perform a "simple" search (one without any aggregation), a SimpleResults object is returned. This object is a dict, with the following format:
.. list-table::
:widths: 25 50
:header-rows: 1
* - Key
- Description
* - hits
- Contains an array of documents that matched the search query
* - meta
- Contains a hash of fields describing the search query (Ex: if the query timed out or not)
Example simple result:
.. code-block:: text
:linenos:
{
'hits': [
{
'_id': u'cp5ZsOgLSu6tHQm5jAZW1Q',
'_index': 'events-20161005',
'_score': 1.0,
'_source': {
'details': {
'information': 'Example information'
},
'category': 'excategory',
'summary': 'Test Summary'
},
'_type': 'event'
}
],
'meta': {'timed_out': False}
}
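Since the result is a plain dict, pulling fields out of the hits is ordinary dictionary access. For example, given the result above:

```python
result = {
    'hits': [
        {
            '_id': 'cp5ZsOgLSu6tHQm5jAZW1Q',
            '_index': 'events-20161005',
            '_score': 1.0,
            '_source': {
                'details': {'information': 'Example information'},
                'category': 'excategory',
                'summary': 'Test Summary',
            },
            '_type': 'event',
        }
    ],
    'meta': {'timed_out': False},
}

# Collect a field from every matching document.
summaries = [hit['_source']['summary'] for hit in result['hits']]
print(summaries)  # ['Test Summary']
```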
Aggregate search
^^^^^^^^^^^^^^^^
.. code-block:: python
:linenos:
from mozdef_util.query_models import SearchQuery, TermMatch, Aggregation
search_query = SearchQuery(hours=24)
search_query.add_must(TermMatch('_type', 'event'))
search_query.add_must(TermMatch('category', 'brointel'))
search_query.add_aggregation(Aggregation('_type'))
results = search_query.execute(es_client)
AggregatedResults
When you perform an aggregated search (Ex: give me a count of all the different IP addresses in the documents that match a specific query), an AggregatedResults object is returned. This object is a dict, with the following format:
.. list-table::
:widths: 25 50
:header-rows: 1
* - Key
- Description
* - aggregations
- Contains the aggregation results, grouped by field name
* - hits
- Contains an array of documents that matched the search query
* - meta
- Contains a hash of fields describing the search query (Ex: if the query timed out or not)
.. code-block:: text
:linenos:
{
'aggregations': {
'ip': {
'terms': [
{
'count': 2,
'key': '1.2.3.4'
},
{
'count': 1,
'key': '127.0.0.1'
}
]
}
},
'hits': [
{
'_id': u'LcdS2-koQWeICOpbOT__gA',
'_index': 'events-20161005',
'_score': 1.0,
'_source': {
'details': {
'information': 'Example information'
},
'ip': '1.2.3.4',
'summary': 'Test Summary'
},
'_type': 'event'
},
{
'_id': u'F1dLS66DR_W3v7ZWlX4Jwg',
'_index': 'events-20161005',
'_score': 1.0,
'_source': {
'details': {
'information': 'Example information'
},
'ip': '1.2.3.4',
'summary': 'Test Summary'
},
'_type': 'event'
},
{
'_id': u'G1nGdxqoT6eXkL5KIjLecA',
'_index': 'events-20161005',
'_score': 1.0,
'_source': {
'details': {
'information': 'Example information'
},
'ip': '127.0.0.1',
'summary': 'Test Summary'
},
'_type': 'event'
}
],
'meta': {
'timed_out': False
}
}
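Reading the buckets back out is again plain dict access. Given the aggregations above, the most frequent IP is:

```python
aggregations = {
    'ip': {
        'terms': [
            {'count': 2, 'key': '1.2.3.4'},
            {'count': 1, 'key': '127.0.0.1'},
        ]
    }
}

# Pick the bucket with the highest document count.
top = max(aggregations['ip']['terms'], key=lambda bucket: bucket['count'])
print(top['key'], top['count'])  # 1.2.3.4 2
```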

@ -152,14 +152,14 @@ Mandatory Fields
+-----------------+-------------------------------------+-----------------------------------+
| Field | Purpose | Sample Value |
+=================+=====================================+===================================+
| category | General category/type of event | Authentication, Authorization, |
| | matching the 'what should I log' | Account Creation, Shutdown, |
| | section below | Startup, Account Deletion, |
| | | Account Unlock, brointel, |
| | | bronotice |
| category | General category/type of event | authentication, authorization, |
| | matching the 'what should I log' | account creation, shutdown, |
|                 | section below                       | startup, account deletion,        |
| | | account unlock, zeek |
| | | |
+-----------------+-------------------------------------+-----------------------------------+
| details | Additional, event-specific fields | "dn": "john@example.com,o=com, |
| | that you would like included with | dc=example", "facility": "daemon" |
| details | Additional, event-specific fields | <see below> |
| | that you would like included with | |
| | the event. Please completely spell | |
| | out a field rather an abbreviate: | |
| | i.e. sourceipaddress instead of | |
@ -187,7 +187,7 @@ Mandatory Fields
+-----------------+-------------------------------------+-----------------------------------+
| tags | An array or list of any tags you | vpn, audit |
| | would like applied to the event | |
| | | nsm,bro,intel |
| | | nsm,zeek,intel |
+-----------------+-------------------------------------+-----------------------------------+
| timestamp | Full date plus time timestamp of | 2014-01-30T19:24:43+06:00 |
| | the event in ISO format including | |
@ -230,6 +230,9 @@ Details substructure (mandatory if such data is sent, otherwise optional)
| error | Action resulted in an | true/false |
| | error or failure | |
+----------------------+--------------------------+---------------------------------+
| success | Transaction failed/ | true/false |
| | or succeeded | |
+----------------------+--------------------------+---------------------------------+
| username | Username, email, login, | kang@mozilla.com |
| | etc. | |
+----------------------+--------------------------+---------------------------------+
@ -259,7 +262,8 @@ Examples
"details": {
"username": "joe",
"task": "access to admin page /admin_secret_radioactiv",
"result": "10 authentication failures in a row"
"result": "10 authentication failures in a row",
"success": false
}
}
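A quick stdlib sanity check that an event carries the mandatory fields described above; the field list here covers only the rows shown in the table, so treat it as illustrative rather than the authoritative schema:

```python
import json

# Mandatory fields visible in the table above (the full schema has more).
MANDATORY = ('category', 'details', 'tags', 'timestamp')

event = json.loads('''{
    "category": "authentication",
    "details": {"username": "joe", "success": false},
    "tags": ["vpn", "audit"],
    "timestamp": "2014-01-30T19:24:43+06:00"
}''')

missing = [field for field in MANDATORY if field not in event]
print(missing)  # [] -- all mandatory fields present
```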

@ -10,37 +10,25 @@ import sys
from datetime import datetime
import pytz
import json
import socket
import json
from requests_futures.sessions import FuturesSession
from multiprocessing import Process, Queue
import random
import logging
from logging.handlers import SysLogHandler
from Queue import Empty
from requests.packages.urllib3.exceptions import ClosedPoolError
import requests
import time
from configlib import getConfig, OptionParser
import ConfigParser
import glob
from datetime import datetime
from datetime import timedelta
from dateutil.parser import parse
from datetime import date
import pytz
import sys
import os
from mozdef_util.utilities.toUTC import toUTC
#use futures to run in the background
#httpsession = FuturesSession(max_workers=5)
# use futures to run in the background
# httpsession = FuturesSession(max_workers=5)
httpsession = requests.session()
httpsession.trust_env=False  # turns off needless .netrc check for creds
#a = requests.adapters.HTTPAdapter(max_retries=2)
#httpsession.mount('http://', a)
# a = requests.adapters.HTTPAdapter(max_retries=2)
# httpsession.mount('http://', a)
logger = logging.getLogger(sys.argv[0])
@ -48,7 +36,7 @@ logger.level=logging.INFO
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
#create a list of logs we can append json to and call for a post when we want.
# create a list of logs we can append json to and call for a post when we want.
logcache=Queue()
@ -67,9 +55,9 @@ def setConfig(option,value,configfile):
def postLogs(logcache):
#post logs asynchronously with requests workers and check on the results
#expects a queue object from the multiprocessing library
posts=[]
# post logs asynchronously with requests workers and check on the results
# expects a queue object from the multiprocessing library
# posts=[]
try:
while not logcache.empty():
postdata=logcache.get_nowait()
@ -79,10 +67,10 @@ def postLogs(logcache):
a.max_retries=3
r=httpsession.post(url,data=postdata)
print(r, postdata)
#append to posts if this is long running and you want
#events to try again later.
#posts.append((r,postdata,url))
except Empty as e:
# append to posts if this is long running and you want
# events to try again later.
# posts.append((r,postdata,url))
except Empty:
pass
# for p, postdata, url in posts:
# try:
@ -99,18 +87,18 @@ def postLogs(logcache):
def genRandomIPv4():
#random, IPs
# random, IPs
return '.'.join("%d" % (random.randint(0,254)) for x in range(4))
def genAttackerIPv4():
#random, but not too random as to allow for alerting about attacks from
#the same IP.
# random, but not too random as to allow for alerting about attacks from
# the same IP.
coreIPs=['1.93.25.',
'222.73.115.',
'116.10.191.',
'144.0.0.']
#change this to non zero according to taste for semi-random-ness
# change this to non zero according to taste for semi-random-ness
if random.randint(0,10)>= 0:
return '{0}{1}'.format(random.choice(coreIPs), random.randint(1,2))
else:
@ -120,28 +108,28 @@ def genAttackerIPv4():
def makeEvents():
try:
eventfiles = glob.glob(options.eventsglob)
#pick a random number of events to send
# pick a random number of events to send
for i in range(1, random.randrange(20, 100)):
#pick a random type of event to send
# pick a random type of event to send
eventfile = random.choice(eventfiles)
#print(eventfile)
# print(eventfile)
events = json.load(open(eventfile))
target = random.randint(0, len(events))
for event in events[target:target+1]:
for event in events[target:target + 1]:
event['timestamp'] = pytz.timezone('UTC').localize(datetime.utcnow()).isoformat()
#remove stored times
# remove stored times
if 'utctimestamp' in event.keys():
del event['utctimestamp']
if 'receivedtimestamp' in event.keys():
del event['receivedtimestamp']
#add demo to the tags so it's clear it's not real data.
# add demo to the tags so it's clear it's not real data.
if 'tags' not in event.keys():
event['tags'] = list()
event['tags'].append('demodata')
#replace potential <randomipaddress> with a random ip address
# replace potential <randomipaddress> with a random ip address
if 'summary' in event.keys() and '<randomipaddress>' in event['summary']:
randomIP = genRandomIPv4()
event['summary'] = event['summary'].replace("<randomipaddress>", randomIP)
@ -150,20 +138,20 @@ def makeEvents():
event['details']['sourceipaddress'] = randomIP
event['details']['sourceipv4address'] = randomIP
#print(event['timestamp'], event['tags'], event['summary'])
# print(event['timestamp'], event['tags'], event['summary'])
logcache.put(json.dumps(event))
if not logcache.empty():
time.sleep(.01)
try:
postingProcess=Process(target=postLogs,args=(logcache,),name="json2MozdefDemoData")
postingProcess = Process(target=postLogs, args=(logcache,), name="json2MozdefDemoData")
postingProcess.start()
except OSError as e:
if e.errno==35: # resource temporarily unavailable.
if e.errno == 35: # resource temporarily unavailable.
print(e)
pass
else:
logger.error('%r'%e)
logger.error('%r' % e)
except KeyboardInterrupt as e:
sys.exit(1)
@ -174,39 +162,39 @@ def makeAlerts():
send events that will be correlated into alerts
'''
try:
#time for us to run?
timetoRun=toUTC(options.lastalert) + timedelta(minutes=options.alertsminutesinterval)
# time for us to run?
timetoRun = toUTC(options.lastalert) + timedelta(minutes=options.alertsminutesinterval)
if timetoRun > toUTC(datetime.now()):
#print(timetoRun)
# print(timetoRun)
return
#print(timetoRun, options.lastalert)
# print(timetoRun, options.lastalert)
eventfiles = glob.glob(options.alertsglob)
#pick a random number of events to send
# pick a random number of events to send
for i in range(0, options.alertscount):
#pick a random type of event to send
# pick a random type of event to send
eventfile = random.choice(eventfiles)
events = json.load(open(eventfile))
target = random.randint(0, len(events))
# if there's only one event in the file..use it.
if len(events) == 1 and target == 1:
target = 0
for event in events[target:target+1]:
for event in events[target:target + 1]:
event['timestamp'] = pytz.timezone('UTC').localize(datetime.utcnow()).isoformat()
#remove stored times
# remove stored times
if 'utctimestamp' in event.keys():
del event['utctimestamp']
if 'receivedtimestamp' in event.keys():
del event['receivedtimestamp']
#add demo to the tags so it's clear it's not real data.
# add demo to the tags so it's clear it's not real data.
if 'tags' not in event.keys():
event['tags'] = list()
event['tags'].append('demodata')
event['tags'].append('demoalert')
#replace potential <randomipaddress> with a random ip address
# replace potential <randomipaddress> with a random ip address
if 'summary' in event.keys() and '<randomipaddress>' in event['summary']:
randomIP = genRandomIPv4()
event['summary'] = event['summary'].replace("<randomipaddress>", randomIP)
@ -229,11 +217,11 @@ def makeAlerts():
postingProcess=Process(target=postLogs,args=(logcache,),name="json2MozdefDemoData")
postingProcess.start()
except OSError as e:
if e.errno==35: # resource temporarily unavailable.
if e.errno == 35: # resource temporarily unavailable.
print(e)
pass
else:
logger.error('%r'%e)
logger.error('%r' % e)
except KeyboardInterrupt as e:
sys.exit(1)
@ -244,39 +232,39 @@ def makeAttackers():
send events that will be correlated into attackers using pre-defined IPs
'''
try:
#time for us to run?
# time for us to run?
timetoRun=toUTC(options.lastattacker) + timedelta(minutes=options.attackersminutesinterval)
if timetoRun > toUTC(datetime.now()):
#print(timetoRun)
# print(timetoRun)
return
#print(timetoRun, options.lastalert)
# print(timetoRun, options.lastalert)
eventfiles = glob.glob(options.alertsglob)
#pick a random number of events to send
# pick a random number of events to send
for i in range(0, options.alertscount):
#pick a random type of event to send
# pick a random type of event to send
eventfile = random.choice(eventfiles)
events = json.load(open(eventfile))
target = random.randint(0, len(events))
# if there's only one event in the file..use it.
if len(events) == 1 and target == 1:
target = 0
for event in events[target:target+1]:
for event in events[target:target + 1]:
event['timestamp'] = pytz.timezone('UTC').localize(datetime.utcnow()).isoformat()
#remove stored times
# remove stored times
if 'utctimestamp' in event.keys():
del event['utctimestamp']
if 'receivedtimestamp' in event.keys():
del event['receivedtimestamp']
#add demo to the tags so it's clear it's not real data.
# add demo to the tags so it's clear it's not real data.
if 'tags' not in event.keys():
event['tags'] = list()
event['tags'].append('demodata')
event['tags'].append('demoalert')
#replace potential <randomipaddress> with a random ip address
# replace potential <randomipaddress> with a random ip address
if 'summary' in event.keys() and '<randomipaddress>' in event['summary']:
randomIP = genAttackerIPv4()
event['summary'] = event['summary'].replace("<randomipaddress>", randomIP)
@ -299,11 +287,11 @@ def makeAttackers():
postingProcess=Process(target=postLogs,args=(logcache,),name="json2MozdefDemoData")
postingProcess.start()
except OSError as e:
if e.errno==35: # resource temporarily unavailable.
if e.errno == 35: # resource temporarily unavailable.
print(e)
pass
else:
logger.error('%r'%e)
logger.error('%r' % e)
except KeyboardInterrupt as e:
sys.exit(1)
@ -314,15 +302,15 @@ def initConfig():
options.eventsglob = getConfig('eventsglob', './sampleevents/events*json', options.configfile)
options.alertsglob = getConfig('alertsglob', './sampleevents/alert*json', options.configfile)
options.attackersglob = getConfig('attackersglob', './sampleevents/attacker*json', options.configfile)
#how many alerts to create
# how many alerts to create
options.alertscount = getConfig('alertscount', 2, options.configfile)
#how many minutes to wait between creating ^ alerts
# how many minutes to wait between creating ^ alerts
options.alertsminutesinterval = getConfig('alertsminutesinterval', 5, options.configfile)
options.lastalert = getConfig('lastalert', datetime.now() - timedelta(hours=1), options.configfile)
#how many attackers to create
# how many attackers to create
options.attackerscount = getConfig('attackers', 1, options.configfile)
#how many minutes to wait between creating ^ attackers
# how many minutes to wait between creating ^ attackers
options.attackersminutesinterval = getConfig('attackersminutesinterval', 5, options.configfile)
options.lastattacker = getConfig('lastattacker', datetime.now() - timedelta(hours=1), options.configfile)
@ -349,8 +337,8 @@ if __name__ == '__main__':
postingProcess=Process(target=postLogs,args=(logcache,),name="json2MozdefDemoData")
postingProcess.start()
except OSError as e:
if e.errno==35: # resource temporarily unavailable.
if e.errno == 35: # resource temporarily unavailable.
print(e)
pass
else:
logger.error('%r'%e)
logger.error('%r' % e)

@ -7,7 +7,6 @@
import os
import sys
import inspect
from configlib import getConfig, OptionParser
sys.path.insert(1, os.path.join(sys.path[0], '../..'))

@ -67,9 +67,9 @@ defaultTemplate = r'''
}
'''
#valid json?
# valid json?
templateJson = json.loads(defaultTemplate)
#post it:
# post it
r = requests.put(url="http://servername:9200/_template/defaulttemplate", data=defaultTemplate)
print(r)

@ -4,14 +4,11 @@
# Copyright (c) 2017 Mozilla Corporation
import os
import sys
import bottle
from bottle import debug,route, run, template, response,request,post, default_app
from bottle import route, run, response, request, default_app
from bottle import _stdout as bottlelog
import kombu
from kombu import Connection,Queue,Exchange
from kombu import Connection, Queue, Exchange
import json
from configlib import getConfig,OptionParser
from configlib import getConfig, OptionParser
@route('/status')
@ -30,11 +27,11 @@ def status():
@route('/test')
@route('/test/')
def testindex():
ip = request.environ.get('REMOTE_ADDR')
#response.headers['X-IP'] = '{0}'.format(ip)
# ip = request.environ.get('REMOTE_ADDR')
# response.headers['X-IP'] = '{0}'.format(ip)
response.status=200
#act like elastic search bulk index
# act like elastic search bulk index
@route('/_bulk',method='POST')
@ -42,10 +39,10 @@ def testindex():
def bulkindex():
if request.body:
bulkpost=request.body.read()
#bottlelog('request:{0}\n'.format(bulkpost))
# bottlelog('request:{0}\n'.format(bulkpost))
request.body.close()
if len(bulkpost)>10: # TODO Check for bulk format.
#iterate on messages and post to event message queue
# iterate on messages and post to event message queue
eventlist=[]
for i in bulkpost.splitlines():
@ -53,10 +50,10 @@ def bulkindex():
for i in eventlist:
try:
#valid json?
# valid json?
try:
eventDict=json.loads(i)
except ValueError as e:
except ValueError:
response.status=500
return
# don't post the items telling us where to post things..
@ -77,17 +74,17 @@ def bulkindex():
def eventsindex():
if request.body:
anevent=request.body.read()
#bottlelog('request:{0}\n'.format(anevent))
# bottlelog('request:{0}\n'.format(anevent))
request.body.close()
#valid json?
# valid json?
try:
eventDict=json.loads(anevent)
except ValueError as e:
except ValueError:
response.status=500
return
#let the message queue worker who gets this know where it was posted
# let the message queue worker who gets this know where it was posted
eventDict['endpoint']='events'
#post to event message queue
# post to event message queue
ensurePublish=mqConn.ensure(mqproducer,mqproducer.publish,max_retries=10)
ensurePublish(eventDict,exchange=eventTaskExchange,routing_key=options.taskexchange)
@ -96,21 +93,21 @@ def eventsindex():
@route('/cef', method=['POST','PUT'])
@route('/cef/',method=['POST','PUT'])
#debug(True)
# debug(True)
def cefindex():
if request.body:
anevent=request.body.read()
request.body.close()
#valid json?
# valid json?
try:
cefDict=json.loads(anevent)
except ValueError as e:
except ValueError:
response.status=500
return
#let the message queue worker who gets this know where it was posted
# let the message queue worker who gets this know where it was posted
cefDict['endpoint']='cef'
#post to eventtask exchange
# post to eventtask exchange
ensurePublish=mqConn.ensure(mqproducer,mqproducer.publish,max_retries=10)
ensurePublish(cefDict,exchange=eventTaskExchange,routing_key=options.taskexchange)
return
@ -129,17 +126,17 @@ def customindex(application):
if request.body:
anevent=request.body.read()
request.body.close()
#valid json?
# valid json?
try:
customDict=json.loads(anevent)
except ValueError as e:
except ValueError:
response.status=500
return
#let the message queue worker who gets this know where it was posted
# let the message queue worker who gets this know where it was posted
customDict['endpoint']= application
customDict['customendpoint'] = True
#post to eventtask exchange
# post to eventtask exchange
ensurePublish=mqConn.ensure(mqproducer,mqproducer.publish,max_retries=10)
ensurePublish(customDict,exchange=eventTaskExchange,routing_key=options.taskexchange)
return
@@ -154,13 +151,13 @@ def initConfig():
options.listen_host=getConfig('listen_host', '127.0.0.1', options.configfile)
#get config info:
# get config info:
parser=OptionParser()
parser.add_option("-c", dest='configfile', default=os.path.join(os.path.dirname(__file__), __file__).replace('.py', '.conf'), help="configuration file to use")
(options,args) = parser.parse_args()
initConfig()
#connect and declare the message queue/kombu objects.
# connect and declare the message queue/kombu objects.
connString='amqp://{0}:{1}@{2}:{3}//'.format(options.mquser,options.mqpassword,options.mqserver,options.mqport)
mqConn=Connection(connString)
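kombu's `Connection.ensure(...)` (used above as `mqConn.ensure(mqproducer, mqproducer.publish, max_retries=10)`) returns a wrapper that retries the publish across recoverable broker errors. A broker-free stdlib analogue of that retry pattern, for illustration only:

```python
def ensure(func, max_retries=10, recoverable=(ConnectionError,)):
    # Retry func on recoverable errors, re-raising after max_retries attempts.
    # Loosely analogous to kombu's Connection.ensure; illustrative only.
    def wrapped(*args, **kwargs):
        for attempt in range(1, max_retries + 1):
            try:
                return func(*args, **kwargs)
            except recoverable:
                if attempt == max_retries:
                    raise
    return wrapped
```

The real kombu wrapper additionally re-establishes the AMQP connection and channel before retrying, which a plain retry loop cannot do.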


@@ -6,7 +6,7 @@
ul.dropdown {
position: relative;
display: inline-block;
width: min-content;
width: max-content;
border: 1px solid rgba(0, 0, 0, 0.2);
border-radius: 3px;
box-shadow: 0px 5px 10px rgba(0, 0, 0, 0.2);
@@ -93,4 +93,4 @@ ul.dropdown ul ul {
}
ul.dropdown li:hover > ul {
visibility: visible;
}
}


@@ -1,21 +0,0 @@
# http://editorconfig.org
root = true
[*]
indent_style = space
indent_size = 4
trim_trailing_whitespace = true
insert_final_newline = true
charset = utf-8
end_of_line = lf
[*.bat]
indent_style = tab
end_of_line = crlf
[LICENSE]
insert_final_newline = false
[Makefile]
indent_style = tab

mozdef_util/.github/ISSUE_TEMPLATE.md (vendored)

@@ -1,15 +0,0 @@
* MozDef Util version:
* Python version:
* Operating System:
### Description
Describe what you were trying to get done.
Tell us what happened, what went wrong, and what you expected to happen.
### What I Did
```
Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.
```


@@ -1,16 +0,0 @@
# Config file for automatic testing at travis-ci.org
language: python
python:
- 3.6
- 3.5
- 3.4
- 2.7
# Command to install dependencies, e.g. pip install -r requirements.txt --use-mirrors
install: pip install -U tox-travis
# Command to run tests, e.g. python setup.py test
script: tox


@@ -1,128 +0,0 @@
.. highlight:: shell
============
Contributing
============
Contributions are welcome, and they are greatly appreciated! Every little bit
helps, and credit will always be given.
You can contribute in many ways:
Types of Contributions
----------------------
Report Bugs
~~~~~~~~~~~
Report bugs at https://github.com/jeffbryner/mozdef_util/issues.
If you are reporting a bug, please include:
* Your operating system name and version.
* Any details about your local setup that might be helpful in troubleshooting.
* Detailed steps to reproduce the bug.
Fix Bugs
~~~~~~~~
Look through the GitHub issues for bugs. Anything tagged with "bug" and "help
wanted" is open to whoever wants to implement it.
Implement Features
~~~~~~~~~~~~~~~~~~
Look through the GitHub issues for features. Anything tagged with "enhancement"
and "help wanted" is open to whoever wants to implement it.
Write Documentation
~~~~~~~~~~~~~~~~~~~
MozDef Util could always use more documentation, whether as part of the
official MozDef Util docs, in docstrings, or even on the web in blog posts,
articles, and such.
Submit Feedback
~~~~~~~~~~~~~~~
The best way to send feedback is to file an issue at https://github.com/jeffbryner/mozdef_util/issues.
If you are proposing a feature:
* Explain in detail how it would work.
* Keep the scope as narrow as possible, to make it easier to implement.
* Remember that this is a volunteer-driven project, and that contributions
are welcome :)
Get Started!
------------
Ready to contribute? Here's how to set up `mozdef_util` for local development.
1. Fork the `mozdef_util` repo on GitHub.
2. Clone your fork locally::
$ git clone git@github.com:your_name_here/mozdef_util.git
3. Install your local copy into a virtualenv. Assuming you have virtualenvwrapper installed, this is how you set up your fork for local development::
$ mkvirtualenv mozdef_util
$ cd mozdef_util/
$ python setup.py develop
4. Create a branch for local development::
$ git checkout -b name-of-your-bugfix-or-feature
Now you can make your changes locally.
5. When you're done making changes, check that your changes pass flake8 and the
tests, including testing other Python versions with tox::
$ flake8 mozdef_util tests
$ python setup.py test or py.test
$ tox
To get flake8 and tox, just pip install them into your virtualenv.
6. Commit your changes and push your branch to GitHub::
$ git add .
$ git commit -m "Your detailed description of your changes."
$ git push origin name-of-your-bugfix-or-feature
7. Submit a pull request through the GitHub website.
Pull Request Guidelines
-----------------------
Before you submit a pull request, check that it meets these guidelines:
1. The pull request should include tests.
2. If the pull request adds functionality, the docs should be updated. Put
your new functionality into a function with a docstring, and add the
feature to the list in README.rst.
3. The pull request should work for Python 2.7, 3.4, 3.5 and 3.6, and for PyPy. Check
https://travis-ci.org/jeffbryner/mozdef_util/pull_requests
and make sure that the tests pass for all supported Python versions.
Tips
----
To run a subset of tests::
$ python -m unittest tests.test_mozdef_util
Deploying
---------
A reminder for the maintainers on how to deploy.
Make sure all your changes are committed (including an entry in HISTORY.rst).
Then run::
$ bumpversion patch # possible: major / minor / patch
$ git push
$ git push --tags
Travis will then deploy to PyPI if tests pass.


@@ -1,340 +1,373 @@
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Mozilla Public License Version 2.0
==================================
Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
1. Definitions
--------------
Preamble
1.1. "Contributor"
means each individual or legal entity that creates, contributes to
the creation of, or owns Covered Software.
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Lesser General Public License instead.) You can apply it to
your programs, too.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used
by a Contributor and that particular Contributor's Contribution.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
1.3. "Contribution"
means Covered Software of a particular Contributor.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached
the notice in Exhibit A, the Executable Form of such Source Code
Form, and Modifications of such Source Code Form, in each case
including portions thereof.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
1.5. "Incompatible With Secondary Licenses"
means
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
(a) that the initial Contributor has attached the notice described
in Exhibit B to the Covered Software; or
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
(b) that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the
terms of a Secondary License.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
The precise terms and conditions for copying, distribution and
modification follow.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in
a separate file or files, that is not Covered Software.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
1.8. "License"
means this document.
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
1.9. "Licensable"
means having the right to grant, to the maximum extent possible,
whether at the time of the initial grant or subsequently, any and
all of the rights conveyed by this License.
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1.10. "Modifications"
means any of the following:
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
(a) any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered
Software; or
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
(b) any new file in Source Code Form that contains any Covered
Software.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the
License, by the making, using, selling, offering for sale, having
made, import, or transfer of either its Contributions or its
Contributor Version.
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU
Lesser General Public License, Version 2.1, the GNU Affero General
Public License, Version 3.0, or any later versions of those
licenses.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that
controls, is controlled by, or is under common control with You. For
purposes of this definition, "control" means (a) the power, direct
or indirect, to cause the direction or management of such entity,
whether by contract or otherwise, or (b) ownership of more than
fifty percent (50%) of the outstanding shares or beneficial
ownership of such entity.
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
2. License Grants and Conditions
--------------------------------
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
2.1. Grants
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
(a) under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
(b) under Patent Claims of such Contributor to make, use, sell, offer
for sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
2.2. Effective Date
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
2.3. Limitations on Grant Scope
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
(a) for any code that a Contributor has removed from Covered Software;
or
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
(b) for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
(c) under Patent Claims infringed by Covered Software in the absence of
its Contributions.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
2.4. Subsequent Licenses
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
2.5. Representation
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights
to grant the rights to its Contributions conveyed by this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
2.6. Fair Use
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
2.7. Conditions
NO WARRANTY
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted
in Section 2.1.
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
3. Responsibilities
-------------------
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
3.1. Distribution of Source Form
END OF TERMS AND CONDITIONS
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
How to Apply These Terms to Your New Programs
3.2. Distribution of Executable Form
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
If You distribute Covered Software in Executable Form then:
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
(a) such Covered Software must also be made available in Source Code
Form, as described in Section 3.1, and You must inform recipients of
the Executable Form how they can obtain a copy of such Source Code
Form by reasonable means in a timely manner, at a charge no more
than the cost of distribution to the recipient; and
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
(b) You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter
the recipients' rights in the Source Code Form under this License.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
3.3. Distribution of a Larger Work
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
3.4. Notices
Also add information on how to contact you by electronic and paper mail.
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty,
or limitations of liability) contained within the Source Code Form of
the Covered Software, except that You may alter any license notices to
the extent required to remedy known factual inaccuracies.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
3.5. Application of Additional Terms
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
4. Inability to Comply Due to Statute or Regulation
---------------------------------------------------
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
If it is impossible for You to comply with any of the terms of this
License with respect to some or all of the Covered Software due to
statute, judicial order, or regulation then You must: (a) comply with
the terms of this License to the maximum extent possible; and (b)
describe the limitations and the code they affect. Such description must
be placed in a text file included with all distributions of the Covered
Software under this License. Except to the extent prohibited by statute
or regulation, such description must be sufficiently detailed for a
recipient of ordinary skill to be able to understand it.
5. Termination
--------------
5.1. The rights granted under this License will terminate automatically
if You fail to comply with any of its terms. However, if You become
compliant, then the rights granted under this License from a particular
Contributor are reinstated (a) provisionally, unless and until such
Contributor explicitly and finally terminates Your grants, and (b) on an
ongoing basis, if such Contributor fails to notify You of the
non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular
Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means, this is the
first time You have received notice of non-compliance with this License
from such Contributor, and You become compliant prior to 30 days after
Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all
end user license agreements (excluding distributors and resellers) which
have been validly granted by You or Your distributors under this License
prior to termination shall survive termination.
************************************************************************
* *
* 6. Disclaimer of Warranty *
* ------------------------- *
* *
* Covered Software is provided under this License on an "as is" *
* basis, without warranty of any kind, either expressed, implied, or *
* statutory, including, without limitation, warranties that the *
* Covered Software is free of defects, merchantable, fit for a *
* particular purpose or non-infringing. The entire risk as to the *
* quality and performance of the Covered Software is with You. *
* Should any Covered Software prove defective in any respect, You *
* (not any Contributor) assume the cost of any necessary servicing, *
* repair, or correction. This disclaimer of warranty constitutes an *
* essential part of this License. No use of any Covered Software is *
* authorized under this License except under this disclaimer. *
* *
************************************************************************
************************************************************************
* *
* 7. Limitation of Liability *
* -------------------------- *
* *
* Under no circumstances and under no legal theory, whether tort *
* (including negligence), contract, or otherwise, shall any *
* Contributor, or anyone who distributes Covered Software as *
* permitted above, be liable to You for any direct, indirect, *
* special, incidental, or consequential damages of any character *
* including, without limitation, damages for lost profits, loss of *
* goodwill, work stoppage, computer failure or malfunction, or any *
* and all other commercial damages or losses, even if such party *
* shall have been informed of the possibility of such damages. This *
* limitation of liability shall not apply to liability for death or *
* personal injury resulting from such party's negligence to the *
* extent applicable law prohibits such limitation. Some *
* jurisdictions do not allow the exclusion or limitation of *
* incidental or consequential damages, so this exclusion and *
* limitation may not apply to You. *
* *
************************************************************************
8. Litigation
-------------
Any litigation relating to this License may be brought only in the
courts of a jurisdiction where the defendant maintains its principal
place of business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions.
Nothing in this Section shall prevent a party's ability to bring
cross-claims or counter-claims.
9. Miscellaneous
----------------
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides
that the language of a contract shall be construed against the drafter
shall not be used to construe this License against a Contributor.
10. Versions of the License
---------------------------
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
-------------------------------------------
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular
file, then You may include the notice in a location (such as a LICENSE
file in a relevant directory) where a recipient would be likely to look
for such a notice.
You may add additional accurate notices of copyright ownership.
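As an illustration of the notice described in Exhibit A above, a source file covered by this license would typically begin with the notice in a comment block. The file below is a hypothetical sketch (`example.py` and its contents are placeholders, not part of this repository):

```python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.


def greet(name):
    """Return a short greeting; placeholder code following the license header."""
    return "Hello, %s" % name
```

If placing the notice in a particular file is impractical, the license instead allows putting it in a LICENSE file in a relevant directory, as noted above.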
Exhibit B - "Incompatible With Secondary Licenses" Notice
---------------------------------------------------------
This Source Code Form is "Incompatible With Secondary Licenses", as
defined by the Mozilla Public License, v. 2.0.
View file
@@ -1,10 +0,0 @@
include CONTRIBUTING.rst
include HISTORY.rst
include LICENSE
include README.rst
recursive-include tests *
recursive-exclude * __pycache__
recursive-exclude * *.py[co]
recursive-include docs *.rst conf.py Makefile make.bat *.jpg *.png *.gif
View file
@@ -1,18 +1,6 @@
.PHONY: clean clean-test clean-pyc clean-build docs help
.PHONY: clean clean-pyc clean-build help
.DEFAULT_GOAL := help
define BROWSER_PYSCRIPT
import os, webbrowser, sys
try:
    from urllib import pathname2url
except:
    from urllib.request import pathname2url
webbrowser.open("file://" + pathname2url(os.path.abspath(sys.argv[1])))
endef
export BROWSER_PYSCRIPT
define PRINT_HELP_PYSCRIPT
import re, sys
@@ -24,12 +12,10 @@ for line in sys.stdin:
endef
export PRINT_HELP_PYSCRIPT
BROWSER := python -c "$$BROWSER_PYSCRIPT"
help:
@python -c "$$PRINT_HELP_PYSCRIPT" < $(MAKEFILE_LIST)
clean: clean-build clean-pyc clean-test ## remove all build, test, coverage and Python artifacts
clean: clean-build clean-pyc ## remove all build and Python artifacts
clean-build: ## remove build artifacts
rm -fr build/
@@ -44,37 +30,8 @@ clean-pyc: ## remove Python file artifacts
find . -name '*~' -exec rm -f {} +
find . -name '__pycache__' -exec rm -fr {} +
clean-test: ## remove test and coverage artifacts
rm -fr .tox/
rm -f .coverage
rm -fr htmlcov/
rm -fr .pytest_cache
lint: ## check style with flake8
flake8 mozdef_util tests
test: ## run tests quickly with the default Python
python setup.py test
test-all: ## run tests on every Python version with tox
tox
coverage: ## check code coverage quickly with the default Python
coverage run --source mozdef_util setup.py test
coverage report -m
coverage html
$(BROWSER) htmlcov/index.html
docs: ## generate Sphinx HTML documentation, including API docs
rm -f docs/mozdef_util.rst
rm -f docs/modules.rst
sphinx-apidoc -o docs/ mozdef_util
$(MAKE) -C docs clean
$(MAKE) -C docs html
$(BROWSER) docs/_build/html/index.html
servedocs: docs ## compile the docs watching for changes
watchmedo shell-command -p '*.rst' -c '$(MAKE) -C docs html' -R -D .
flake8 mozdef_util
release: dist ## package and upload a release
twine upload dist/*
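The `help` target in the Makefile above pipes the Makefile through a small Python script (`PRINT_HELP_PYSCRIPT`, shown only partially in this diff). A minimal sketch of that self-documenting-Makefile pattern, assuming the conventional `target: ## description` comment format used throughout this Makefile, might look like:

```python
import re


def extract_help(makefile_text):
    """Collect (target, description) pairs from lines like 'name: deps ## text'."""
    entries = []
    for line in makefile_text.splitlines():
        match = re.match(r'^([a-zA-Z_-]+):.*?## (.*)$', line)
        if match:
            entries.append(match.groups())
    return entries


# Example: two documented targets and one undocumented one.
sample = (
    "clean: clean-build clean-pyc ## remove all build and Python artifacts\n"
    "build:\n"
    "lint: ## check style with flake8\n"
)
for target, description in extract_help(sample):
    print("%-20s %s" % (target, description))
```

Undocumented targets (no `##` comment) are simply skipped, so internal helper targets stay out of the `make help` listing.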
View file
@@ -6,19 +6,4 @@ MozDef Util
Utilities shared throughout the MozDef codebase
* Free software: Mozilla Public License 2.0
* Documentation: https://mozdef-util.readthedocs.io.
Features
--------
* TODO
Credits
-------
This package was created with Cookiecutter_ and the `audreyr/cookiecutter-pypackage`_ project template.
.. _Cookiecutter: https://github.com/audreyr/cookiecutter
.. _`audreyr/cookiecutter-pypackage`: https://github.com/audreyr/cookiecutter-pypackage
* Documentation: https://mozdef.readthedocs.io/en/latest/mozdef_util.html
View file
@@ -1,20 +0,0 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = python -msphinx
SPHINXPROJ = mozdef_util
SOURCEDIR = .
BUILDDIR = _build
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
View file
@@ -1,160 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# mozdef_util documentation build configuration file, created by
# sphinx-quickstart on Fri Jun 9 13:47:02 2017.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another
# directory, add these directories to sys.path here. If the directory is
# relative to the documentation root, use os.path.abspath to make it
# absolute, like shown here.
#
import os
import sys
sys.path.insert(0, os.path.abspath('..'))
import mozdef_util
# -- General configuration ---------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'MozDef Util'
copyright = u"2018, Mozilla Infosec"
author = u"Mozilla Infosec"
# The version info for the project you're documenting, acts as replacement
# for |version| and |release|, also used in various other places throughout
# the built documents.
#
# The short X.Y version.
version = mozdef_util.__version__
# The full version, including alpha/beta/rc tags.
release = mozdef_util.__version__
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output -------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'alabaster'
# Theme options are theme-specific and customize the look and feel of a
# theme further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# -- Options for HTMLHelp output ---------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'mozdef_utildoc'
# -- Options for LaTeX output ------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto, manual, or own class]).
latex_documents = [
(master_doc, 'mozdef_util.tex',
u'MozDef Util Documentation',
u'Mozilla Infosec', 'manual'),
]
# -- Options for manual page output ------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'mozdef_util',
u'MozDef Util Documentation',
[author], 1)
]
# -- Options for Texinfo output ----------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'mozdef_util',
u'MozDef Util Documentation',
author,
'mozdef_util',
'One line description of project.',
'Miscellaneous'),
]
View file
@@ -1 +0,0 @@
.. include:: ../CONTRIBUTING.rst
Some files were not shown because too many files have changed in this diff.