Setup codebase for merge of two repos

Brandon Myers 2017-06-15 14:56:47 -05:00
Parent: eca3ffd4bc
Commit: 1d8c59b93f
No key found matching this signature
GPG key ID: 8AA79AD83045BBC7
257 changed files: 141,428 additions and 1,408 deletions

.git-crypt/.gitattributes (vendored, new file, +3)

@@ -0,0 +1,3 @@
# Do not edit this file. To specify the files to encrypt, create your own
# .gitattributes file in the directory where your files are.
* !filter !diff

7 binary files changed (not shown).

.gitattributes (vendored, new file, +3)

@@ -0,0 +1,3 @@
*.conf filter=git-crypt diff=git-crypt
*.key filter=git-crypt diff=git-crypt
*.ini filter=git-crypt diff=git-crypt
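Together, the two `.gitattributes` files above control what git-crypt encrypts: the top-level file routes `*.conf`, `*.key`, and `*.ini` through the git-crypt filter, while the one inside `.git-crypt/` disables all filters for git-crypt's own metadata. As a rough illustration (the pattern list comes from the diff above; the helper function is ours, not part of git-crypt), the matching behaves like shell globbing on the filename:

```python
from fnmatch import fnmatch

# Patterns routed through the git-crypt filter, per the .gitattributes above.
GIT_CRYPT_PATTERNS = ['*.conf', '*.key', '*.ini']


def is_encrypted_path(path):
    """Return True if the git-crypt patterns above would match this path
    (illustrative helper only)."""
    filename = path.rsplit('/', 1)[-1]
    return any(fnmatch(filename, pattern) for pattern in GIT_CRYPT_PATTERNS)


print(is_encrypted_path('alerts/alertWorker.conf'))  # True
print(is_encrypted_path('alerts/deadman.py'))        # False
```

This is why several files later in this diff (`alertWorker.conf`, `pagerDutyTriggerEvent.conf`, `supervisord.alerts.conf`) show up as binary: they are stored encrypted.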

.gitignore (vendored, +1)

@@ -3,3 +3,4 @@ meteor/packages
*.pyc
celerybeat-schedule.*
results
+*.log

.gitmodules (vendored, -6)

@@ -1,6 +0,0 @@
-[submodule "lib/mozdef_client"]
-	path = lib/mozdef_client
-	url = https://github.com/gdestuynder/mozdef_client
-[submodule "bot/modules/bugzilla"]
-	path = bot/modules/bugzilla
-	url = https://github.com/gdestuynder/simple_bugzilla


@@ -7,7 +7,7 @@ We would also love to hear how you are using MozDef and to receive contributions
Bug reports
-----------
-If you think you have found a bug in MozDef, first make sure that you are testing against the [master branch](https://github.com/mozilla/MozDef) - your issue may already have been fixed. If not, search our [issues list](https://github.com/mozilla/MozDef/issues) on GitHub in case a similar issue has already been opened.
+If you think you have found a bug in MozDef, first make sure that you are testing against the [master branch](https://github.com/jeffbryner/MozDef) - your issue may already have been fixed. If not, search our [issues list](https://github.com/jeffbryner/MozDef/issues) on GitHub in case a similar issue has already been opened.
It is very helpful if you can prepare a reproduction of the bug. In other words, provide a small test case which we can run to confirm your bug. It makes it easier to find the problem and to fix it.
@@ -17,7 +17,7 @@ Feature requests
----------------
If you are looking for a feature that doesn't exist currently in MozDef, you are probably not alone.
-Open an issue on our [issues list](https://github.com/mozilla/MozDef/issues) on GitHub which describes the feature you would like to see, why you need it, and how it should work.
+Open an issue on our [issues list](https://github.com/jeffbryner/MozDef/issues) on GitHub which describes the feature you would like to see, why you need it, and how it should work.
If you attach diagrams or mockups, it would be super nice ;-).
Contributing code and documentation changes
@@ -31,7 +31,7 @@ The process is described below.
### Fork and clone the repository
-You will need to fork the main [MozDef repository](https://github.com/mozilla/MozDef) and clone it to your local machine. See
+You will need to fork the main [MozDef repository](https://github.com/jeffbryner/MozDef) and clone it to your local machine. See
[github help page](https://help.github.com/articles/fork-a-repo) for help.
Push your local changes to your forked copy of the repository and [submit a pull request](https://help.github.com/articles/using-pull-requests). In the pull request, describe what your changes do and mention the number of the issue where discussion has taken place, eg "Closes #123".


@@ -1,28 +1,10 @@
# mozdefqa1-private-scl3
Repo for MozDefQA1
MozDef: The Mozilla Defense Platform
=====================================
mozdef-private-scl3
Why?
----
private repo for the scl3 QA deployment of mozdef
The inspiration for MozDef comes from the large arsenal of tools available to attackers. Suites like metasploit, armitage, lair, dradis and others are readily available to help attackers coordinate, share intelligence and finely tune their attacks in real time. Defenders are usually limited to wikis, ticketing systems and manual tracking databases attached to the end of a Security Information Event Management (SIEM) system.
This repo will be used to make updates to the QA MozDef system in SCL3. Primarily for testing new alerts and logging.
The Mozilla Defense Platform (MozDef) seeks to automate the security incident handling process and facilitate the real-time activities of incident handlers.
Goals:
------
* Provide a platform for use by defenders to rapidly discover and respond to security incidents.
* Automate interfaces to other systems like bunker, banhammer, mig
* Provide metrics for security events and incidents
* Facilitate real-time collaboration amongst incident handlers
* Facilitate repeatable, predictable processes for incident handling
* Go beyond traditional SIEM systems in automating incident handling, information sharing, workflow, metrics and response automation
Status:
--------
MozDef is in production at Mozilla where we are using it to process over 300 million events per day.
DOCS:
-----
http://mozdef.readthedocs.org/en/latest/
Only .py, .ini, .conf, and .sh files are added with the exception of the GeoLiteCity.dat file. I figure this would be a good way to update that file as needed, until we create an automated way to do it.


@@ -1,16 +0,0 @@
-[uwsgi]
-chdir = /home/mozdef/envs/mozdef/alerts/
-uid = mozdef
-mule = alertWorker.py
-pyargv = -c /home/mozdef/envs/mozdef/alerts/alertWorker.conf
-daemonize = /home/mozdef/envs/mozdef/logs/uwsgi.alertPluginsmules.log
-; ignore normal operations that generate nothing but normal response
-log-drain = generated 0 bytes
-log-date = %%a %%b %%d %%H:%%M:%%S
-socket = /home/mozdef/envs/mozdef/alerts/alertPluginsmules.socket
-virtualenv = /home/mozdef/envs/mozdef/
-master-fifo = /home/mozdef/envs/mozdef/alerts/alertPluginsmules.fifo
-never-swap
-pidfile = /home/mozdef/envs/mozdef/alerts/alertPluginsmules.pid
-vacuum = true
-enable-threads

alerts/alertWorker.conf (binary, new file; not shown)


@@ -0,0 +1,50 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Anthony Verez averez@mozilla.com

from lib.alerttask import AlertTask
import pyes


class AlertFailedAMOLogin(AlertTask):
    def main(self):
        # look for events in last X mins
        date_timedelta = dict(minutes=10)
        # Configure filters
        must = [
            pyes.TermFilter('_type', 'addons'),
            pyes.TermFilter('signatureid', 'authfail'),
            pyes.ExistsFilter('details.sourceipaddress'),
            pyes.QueryFilter(pyes.MatchQuery("msg", "The password was incorrect", "phrase")),
            pyes.ExistsFilter('suser')
        ]
        self.filtersManual(date_timedelta, must=must)
        # Search aggregations, keep X samples of events at most
        self.searchEventsAggregated('details.suser', samplesLimit=15)
        # alert when >= X matching events in an aggregation
        self.walkAggregations(threshold=20)

    # Set alert properties
    def onAggregation(self, aggreg):
        # aggreg['count']: number of items in the aggregation, ex: number of failed login attempts
        # aggreg['value']: value of the aggregation field, ex: toto@example.com
        # aggreg['events']: list of events in the aggregation
        category = 'addons'
        tags = ['addons']
        severity = 'NOTICE'
        summary = ('{0} amo failed logins: {1}'.format(aggreg['count'], aggreg['value']))
        # append most common ips
        ips = self.mostCommon(aggreg['allevents'], '_source.details.sourceipaddress')
        for i in ips[:5]:
            summary += ' {0} ({1} hits)'.format(i[0], i[1])
        # Create the alert object based on these properties
        return self.createAlertDict(summary, category, tags, aggreg['events'], severity)
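The alert above leans on `self.mostCommon(...)` to rank field values across the aggregated events. A plain-stdlib sketch of that behavior (the dotted-path walking is our assumption about the semantics, not AlertTask's actual implementation):

```python
from collections import Counter


def most_common(events, dotted_path):
    """Count the values found at a dotted path inside each event dict,
    most frequent first (sketch of the mostCommon pattern above)."""
    values = []
    for event in events:
        node = event
        for key in dotted_path.split('.'):
            if not isinstance(node, dict) or key not in node:
                node = None
                break
            node = node[key]
        if node is not None:
            values.append(node)
    return Counter(values).most_common()


events = [
    {'_source': {'details': {'sourceipaddress': '10.0.0.1'}}},
    {'_source': {'details': {'sourceipaddress': '10.0.0.1'}}},
    {'_source': {'details': {'sourceipaddress': '10.0.0.2'}}},
]
print(most_common(events, '_source.details.sourceipaddress'))
# [('10.0.0.1', 2), ('10.0.0.2', 1)]
```

This is why the summary can append "ip (N hits)" fragments: each tuple pairs a value with its occurrence count.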


@@ -0,0 +1,55 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Anthony Verez averez@mozilla.com
# Jeff Bryner jbryner@mozilla.com
# Aaron Meihm ameihm@mozilla.com
# Michal Purzynski <mpurzynski@mozilla.com>
# Alicia Smith <asmith@mozilla.com>

from lib.alerttask import AlertTask
import pyes


class AlertSFTPEvent(AlertTask):
    def main(self):
        # look for events in last X mins
        date_timedelta = dict(minutes=5)
        # Configure filters using pyes
        must = [
            pyes.TermFilter('_type', 'auditd'),
            pyes.TermFilter('category', 'execve'),
            pyes.TermFilter('processname', 'audisp-json'),
            pyes.TermFilter('details.processname', 'ssh'),
            pyes.QueryFilter(pyes.MatchQuery('details.parentprocess', 'sftp', 'phrase')),
        ]
        self.filtersManual(date_timedelta, must=must)
        self.searchEventsSimple()
        self.walkEvents()

    # Set alert properties
    def onEvent(self, event):
        category = 'execve'
        severity = 'NOTICE'
        tags = ['audisp-json', 'audit']
        srchost = 'unknown'
        username = 'unknown'
        directory = 'unknown'
        x = event['_source']
        if 'details' in x:
            if 'hostname' in x['details']:
                srchost = x['details']['hostname']
            if 'originaluser' in x['details']:
                username = x['details']['originaluser']
            if 'cwd' in x['details']:
                directory = x['details']['cwd']
        summary = 'SFTP Event by {0} from host {1} in directory {2}'.format(username, srchost, directory)
        # Create the alert object based on these properties
        return self.createAlertDict(summary, category, tags, [event], severity)
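The `onEvent` handler above defends against missing keys with nested `in` checks before formatting the summary. The same logic can be written with chained `dict.get` defaults; a minimal sketch (the helper name is ours, field names come from the alert above):

```python
def sftp_summary(event):
    """Build the SFTP alert summary, tolerating missing detail fields."""
    details = event.get('_source', {}).get('details', {})
    username = details.get('originaluser', 'unknown')
    srchost = details.get('hostname', 'unknown')
    directory = details.get('cwd', 'unknown')
    return 'SFTP Event by {0} from host {1} in directory {2}'.format(
        username, srchost, directory)


print(sftp_summary({'_source': {'details': {'originaluser': 'alice', 'cwd': '/tmp'}}}))
# SFTP Event by alice from host unknown in directory /tmp
```

Either style works; the point is that events arriving from heterogeneous shippers cannot be trusted to carry every field.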


@@ -0,0 +1,45 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Michal Purzynski michal@mozilla.com

from lib.alerttask import AlertTask
import pyes


class AlertBugzillaPBruteforce(AlertTask):
    def main(self):
        # look for events in last 15 mins
        date_timedelta = dict(minutes=15)
        # Configure filters using pyes
        must = [
            pyes.TermFilter('_type', 'bro'),
            pyes.TermFilter('eventsource', 'nsm'),
            pyes.TermFilter('category', 'bronotice'),
            pyes.ExistsFilter('details.sourceipaddress'),
            pyes.QueryFilter(pyes.MatchQuery('details.note', 'BugzBruteforcing::HTTP_BugzBruteforcing_Attacker', 'phrase')),
        ]
        self.filtersManual(date_timedelta, must=must)
        # Search events
        self.searchEventsSimple()
        self.walkEvents()

    # Set alert properties
    def onEvent(self, event):
        category = 'httperrors'
        tags = ['http']
        severity = 'NOTICE'
        hostname = event['_source']['hostname']
        url = "https://mana.mozilla.org/wiki/display/SECURITY/NSM+IR+procedures"
        # the summary of the alert is the same as the event
        summary = '{0} {1}'.format(hostname, event['_source']['summary'])
        # Create the alert object based on these properties
        return self.createAlertDict(summary, category, tags, [event], severity=severity, url=url)


@@ -0,0 +1,48 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Jonathan Claudius jclaudius@mozilla.com

from lib.alerttask import AlertTask
import pyes


class AlertConfluenceShellUsage(AlertTask):
    def main(self):
        # look for events in last X mins
        date_timedelta = dict(minutes=5)
        # Configure filters using pyes
        must = [
            pyes.TermFilter('_type', 'auditd'),
            pyes.TermFilter('details.user', 'confluence'),
            pyes.QueryFilter(pyes.QueryStringQuery('hostname: /.*(mana|confluence).*/')),
        ]
        must_not = [
            pyes.TermFilter('details.originaluser', 'root'),
        ]
        self.filtersManual(date_timedelta, must=must, must_not=must_not)
        # Search aggregations on field 'hostname', keep X samples of events at most
        self.searchEventsAggregated('hostname', samplesLimit=10)
        # alert when >= X matching events in an aggregation
        # in this case, always
        self.walkAggregations(threshold=1)

    # Set alert properties
    def onAggregation(self, aggreg):
        # aggreg['count']: number of items in the aggregation, ex: number of failed login attempts
        # aggreg['value']: value of the aggregation field, ex: toto@example.com
        # aggreg['events']: list of events in the aggregation
        category = 'intrusion'
        tags = ['confluence', 'mana']
        severity = 'CRITICAL'
        summary = 'Confluence user is running shell commands on {0}'.format(aggreg['value'])
        # Create the alert object based on these properties
        return self.createAlertDict(summary, category, tags, aggreg['events'], severity)

alerts/deadman.py (new file, +75)

@@ -0,0 +1,75 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Jeff Bryner jbryner@mozilla.com
#
# a collection of alerts looking for the lack of events
# to alert on a dead input source.

from lib.alerttask import AlertTask
import pyes


def fakeEvent():
    # make a fake event
    # mimicking some parameters since this isn't an ES event
    # but should have an ES event metadata structure
    # like _index, _type, etc.
    event = dict()
    event['_index'] = ''
    event['_type'] = ''
    event['_source'] = dict()
    event['_id'] = ''
    return event


class broNSM(AlertTask):
    def main(self, *args, **kwargs):
        # look for events in last x mins
        date_timedelta = dict(minutes=20)
        # call with hostlist=['host1','host2','host3']
        # to search for missing events
        if kwargs and 'hostlist' in kwargs.keys():
            for host in kwargs['hostlist']:
                self.log.debug('checking deadman for host: {0}'.format(host))
                must = [
                    pyes.QueryFilter(pyes.MatchQuery("details.note", "MozillaAlive::Bro_Is_Watching_You", "phrase")),
                    pyes.QueryFilter(pyes.MatchQuery("details.peer_descr", host, "phrase")),
                    pyes.TermFilter('category', 'bronotice'),
                    pyes.TermFilter('_type', 'bro')
                ]
                self.filtersManual(date_timedelta, must=must)
                # Search events
                self.searchEventsSimple()
                self.walkEvents(hostname=host)

    # Set alert properties
    # if no events found
    def onNoEvent(self, hostname):
        category = 'deadman'
        tags = ['bro']
        severity = 'ERROR'
        summary = ('no {0} bro healthcheck events found since {1}'.format(hostname, self.begindateUTC.isoformat()))
        url = "https://mana.mozilla.org/wiki/display/SECURITY/NSM+IR+procedures"
        # make an event to attach to the alert
        event = fakeEvent()
        # attach our info about not having an event to _source:
        # to mimic an ES document
        event['_source']['category'] = 'deadman'
        event['_source']['tags'] = ['bro']
        event['_source']['severity'] = 'ERROR'
        event['_source']['hostname'] = hostname
        event['_source']['summary'] = summary
        # serialize the filter to avoid datetime objects causing json problems.
        event['_source']['details'] = dict(filter='{0}'.format(self.filter.serialize()))
        # Create the alert object based on these properties
        return self.createAlertDict(summary, category, tags, [event], severity=severity, url=url)
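The deadman alert inverts the usual pattern: it fires when a host has produced *no* recent healthcheck events, and fabricates an ES-shaped event to attach to the alert. The per-host check can be sketched like this (the function name and event shape are ours, mirroring `fakeEvent` above):

```python
def silent_hosts(hostlist, events):
    """Return the hosts from hostlist that have no healthcheck event
    attributed to them (sketch of the deadman check above)."""
    seen = {e.get('_source', {}).get('hostname') for e in events}
    return [host for host in hostlist if host not in seen]


events = [{'_source': {'hostname': 'nsm3'}}]
print(silent_hosts(['nsm3', 'nsm5'], events))  # ['nsm5']
```

In the real alert the "events" come from the last 20 minutes of `MozillaAlive::Bro_Is_Watching_You` notices, so a host that stops shipping logs surfaces within one scheduling interval.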


@@ -0,0 +1,47 @@
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Anthony Verez averez@mozilla.com
# Jeff Bryner jbryner@mozilla.com
# Aaron Meihm ameihm@mozilla.com
# Michal Purzynski <mpurzynski@mozilla.com>
# Alicia Smith <asmith@mozilla.com>

from lib.alerttask import AlertTask
import pyes


class AlertDuoAuthFail(AlertTask):
    def main(self):
        # look for events in last X mins
        date_timedelta = dict(minutes=30)
        # Configure filters using pyes
        must = [
            pyes.TermFilter('_type', 'event'),
            pyes.TermFilter('category', 'event'),
            pyes.ExistsFilter('details.ip'),
            pyes.ExistsFilter('details.username'),
            pyes.QueryFilter(pyes.MatchQuery('details.result', 'FRAUD', 'phrase')),
        ]
        self.filtersManual(date_timedelta, must=must)
        self.searchEventsSimple()
        self.walkEvents()

    # Set alert properties
    def onEvent(self, event):
        category = 'event'
        severity = 'WARNING'
        url = "https://mana.mozilla.org/wiki/display/SECURITY/IR+Procedure%3A+DuoSecurity"
        sourceipaddress = 'unknown'
        user = 'unknown'
        x = event['_source']
        if 'details' in x:
            if 'ip' in x['details']:
                sourceipaddress = x['details']['ip']
            if 'username' in x['details']:
                user = x['details']['username']
        summary = 'Duo Authentication Failure: user {1} rejected and marked a Duo Authentication attempt from {0} as fraud'.format(sourceipaddress, user)
        # Create the alert object based on these properties
        return self.createAlertDict(summary, category, [], [event], severity, url)

alerts/fxaAlerts.py (new file, +55)

@@ -0,0 +1,55 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Jeff Bryner jbryner@mozilla.com

from lib.alerttask import AlertTask
import pyes


class AlertAccountCreations(AlertTask):
    def main(self):
        # look for events in last X mins
        date_timedelta = dict(minutes=10)
        # Configure filters using pyes
        must = [
            pyes.TermFilter('_type', 'event'),
            pyes.TermFilter('tags', 'firefoxaccounts'),
            pyes.QueryFilter(pyes.MatchQuery('details.path', '/v1/account/create', 'phrase'))
        ]
        # ignore test accounts and attempts to create accounts that already exist.
        must_not = [
            pyes.QueryFilter(pyes.WildcardQuery(field='details.email', value='*restmail.net')),
            pyes.TermFilter('details.code', '429')
        ]
        self.filtersManual(date_timedelta, must=must, must_not=must_not)
        # Search aggregations on field 'sourceipv4address', keep X samples of events at most
        self.searchEventsAggregated('details.sourceipv4address', samplesLimit=10)
        # alert when >= X matching events in an aggregation
        self.walkAggregations(threshold=10)

    # Set alert properties
    def onAggregation(self, aggreg):
        # aggreg['count']: number of items in the aggregation, ex: number of failed login attempts
        # aggreg['value']: value of the aggregation field, ex: toto@example.com
        # aggreg['events']: list of events in the aggregation
        category = 'fxa'
        tags = ['fxa']
        severity = 'INFO'
        summary = ('{0} fxa account creation attempts by {1}'.format(aggreg['count'], aggreg['value']))
        emails = self.mostCommon(aggreg['allevents'], '_source.details.email')
        # did they try to create more than one email account?
        # or just retry an existing one
        if len(emails) > 1:
            for i in emails[:5]:
                summary += ' {0} ({1} hits)'.format(i[0], i[1])
        # Create the alert object based on these properties
        return self.createAlertDict(summary, category, tags, aggreg['events'], severity)
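Several alerts in this commit use the same aggregate-then-threshold pattern: `searchEventsAggregated` groups matching events by a field, and `walkAggregations(threshold=N)` only invokes `onAggregation` for groups of at least N events. A stripped-down sketch of that flow (function names are ours; the real grouping happens server-side in Elasticsearch):

```python
from collections import defaultdict


def walk_aggregations(events, field_value, threshold):
    """Group events by a value extractor and keep only groups whose size
    reaches the threshold (sketch of the aggregation pattern above)."""
    groups = defaultdict(list)
    for event in events:
        groups[field_value(event)].append(event)
    return {value: evts for value, evts in groups.items() if len(evts) >= threshold}


events = [{'ip': '1.2.3.4'}] * 10 + [{'ip': '5.6.7.8'}] * 3
hits = walk_aggregations(events, lambda e: e['ip'], threshold=10)
print(sorted(hits))  # ['1.2.3.4']
```

With `threshold=10`, a single IP making ten account-creation attempts in the window trips the alert, while scattered low-volume sources do not.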


@@ -0,0 +1,47 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Anthony Verez averez@mozilla.com
# Jeff Bryner jbryner@mozilla.com

from lib.alerttask import AlertTask
import pyes


class AlertHostScannerFinding(AlertTask):
    def main(self):
        # look for events in last X mins
        date_timedelta = dict(minutes=15)
        # Configure filters using pyes
        must = [
            pyes.TermFilter('_type', 'cef'),
            pyes.ExistsFilter('details.dhost'),
            pyes.QueryFilter(pyes.MatchQuery("signatureid", "sensitivefiles", "phrase"))
        ]
        self.filtersManual(date_timedelta, must=must)
        # Search aggregations on field 'details.dhost', keep X samples of events at most
        self.searchEventsAggregated('details.dhost', samplesLimit=30)
        # alert when >= X matching events in an aggregation
        self.walkAggregations(threshold=1)

    # Set alert properties
    def onAggregation(self, aggreg):
        # aggreg['count']: number of items in the aggregation, ex: number of failed login attempts
        # aggreg['value']: value of the aggregation field, ex: toto@example.com
        # aggreg['events']: list of events in the aggregation
        category = 'hostscanner'
        tags = ['hostscanner']
        severity = 'NOTICE'
        summary = ('{0} host scanner findings on {1}'.format(aggreg['count'], aggreg['value']))
        filenames = self.mostCommon(aggreg['allevents'], '_source.details.path')
        for i in filenames[:5]:
            summary += ' {0} ({1} hits)'.format(i[0], i[1])
        # Create the alert object based on these properties
        return self.createAlertDict(summary, category, tags, aggreg['events'], severity)


@@ -0,0 +1,45 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Michal Purzynski michal@mozilla.com

from lib.alerttask import AlertTask
import pyes


class AlertHTTPBruteforce(AlertTask):
    def main(self):
        # look for events in last 15 mins
        date_timedelta = dict(minutes=15)
        # Configure filters using pyes
        must = [
            pyes.TermFilter('_type', 'bro'),
            pyes.TermFilter('eventsource', 'nsm'),
            pyes.TermFilter('category', 'bronotice'),
            pyes.ExistsFilter('details.sourceipaddress'),
            pyes.QueryFilter(pyes.MatchQuery('details.note', 'AuthBruteforcing::HTTP_AuthBruteforcing_Attacker', 'phrase')),
        ]
        self.filtersManual(date_timedelta, must=must)
        # Search events
        self.searchEventsSimple()
        self.walkEvents()

    # Set alert properties
    def onEvent(self, event):
        category = 'httperrors'
        tags = ['http']
        severity = 'NOTICE'
        hostname = event['_source']['hostname']
        url = "https://mana.mozilla.org/wiki/display/SECURITY/NSM+IR+procedures"
        # the summary of the alert is the same as the event
        summary = '{0} {1}'.format(hostname, event['_source']['summary'])
        # Create the alert object based on these properties
        return self.createAlertDict(summary, category, tags, [event], severity=severity, url=url)

alerts/ldapAdd_pyes.py (new file, +37)

@@ -0,0 +1,37 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Jeff Bryner jbryner@mozilla.com

from lib.alerttask import AlertTask
import pyes


class ldapAdd(AlertTask):
    def main(self):
        # look for events in last x
        date_timedelta = dict(minutes=15)
        # Configure filters using pyes
        must = [
            pyes.TermFilter('category', 'ldapChange'),
            pyes.TermFilter('changetype', 'add')
        ]
        self.filtersManual(date_timedelta, must=must)
        # Search events
        self.searchEventsSimple()
        self.walkEvents()

    # Set alert properties
    def onEvent(self, event):
        category = 'ldap'
        tags = ['ldap']
        severity = 'INFO'
        summary = '{0} added {1}'.format(event['_source']['details']['actor'], event['_source']['details']['dn'])
        # Create the alert object based on these properties
        return self.createAlertDict(summary, category, tags, [event], severity)

alerts/ldapDelete_pyes.py (new file, +37)

@@ -0,0 +1,37 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Jeff Bryner jbryner@mozilla.com

from lib.alerttask import AlertTask
import pyes


class ldapDelete(AlertTask):
    def main(self):
        # look for events in last x
        date_timedelta = dict(minutes=15)
        # Configure filters using pyes
        must = [
            pyes.TermFilter('category', 'ldapChange'),
            pyes.TermFilter('changetype', 'delete')
        ]
        self.filtersManual(date_timedelta, must=must)
        # Search events
        self.searchEventsSimple()
        self.walkEvents()

    # Set alert properties
    def onEvent(self, event):
        category = 'ldap'
        tags = ['ldap']
        severity = 'INFO'
        summary = '{0} deleted {1}'.format(event['_source']['details']['actor'], event['_source']['details']['dn'])
        # Create the alert object based on these properties
        return self.createAlertDict(summary, category, tags, [event], severity)


@@ -12,26 +12,42 @@ from celery.schedules import crontab, timedelta
import time
import logging

#ALERTS = {
#    'bro_intel.AlertBroIntel': crontab(minute='*/1'),
#    'bro_notice.AlertBroNotice': crontab(minute='*/1'),
#    'bruteforce_ssh.AlertBruteforceSsh': crontab(minute='*/1'),
#    'cloudtrail.AlertCloudtrail': crontab(minute='*/1'),
#    'fail2ban.AlertFail2ban': crontab(minute='*/1'),
#    'duo_fail_open.AlertDuoFailOpen': crontab(minute='*/2'),
#    'amoFailedLogins_pyes.AlertFailedAMOLogin': crontab(minute='*/2'),
#    'hostScannerAlerts_pyes.AlertHostScannerFinding': crontab(minute='*/10'),
#    'deadman.broNSM3': crontab(minute='*/5'),
#}

ALERTS = {
    'bro_intel.AlertBroIntel': {'schedule': crontab(minute='*/1')},
    'bro_notice.AlertBroNotice': {'schedule': crontab(minute='*/1')},
    'bruteforce_ssh.AlertBruteforceSsh': {'schedule': crontab(minute='*/1')},
    'cloudtrail.AlertCloudtrail': {'schedule': crontab(minute='*/1')},
    'fail2ban.AlertFail2ban': {'schedule': crontab(minute='*/1')},
    #'deadman.broNSM': {'schedule': timedelta(minutes=1), 'kwargs': dict(hostlist=['nsm3', 'nsm5'])},
    'bruteforce_ssh_pyes.AlertBruteforceSsh': {'schedule': timedelta(minutes=1)},
    'unauth_ssh_pyes.AlertUnauthSSH': {'schedule': timedelta(minutes=1)},
    'confluence_shell_pyes.AlertConfluenceShellUsage': {'schedule': timedelta(minutes=1)},
    'unauth_scan_pyes.AlertUnauthInternalScan': {'schedule': timedelta(minutes=1)},
    'auditd_sftp_pyes.AlertSFTPEvent': {'schedule': timedelta(minutes=1)},
    'proxy_drop_pyes.AlertProxyDrop': {'schedule': timedelta(minutes=1)},
    'duo_authfail_pyes.AlertDuoAuthFail': {'schedule': timedelta(seconds=60)},
    'vpn_duo_auth_failures_pyes.AlertManyVPNDuoAuthFailures': {'schedule': timedelta(minutes=20)},
}

RABBITMQ = {
    'mqserver': 'localhost',
-   'mquser': 'guest',
-   'mqpassword': 'guest',
+   'mquser': 'mozdef',
+   'mqpassword': 'mozdef',
    'mqport': 5672,
    'alertexchange': 'alerts',
    'alertqueue': 'mozdef.alert'
}

ES = {
-   'servers': ['http://localhost:9200']
+   'servers': ['http://mozdefqa1.private.scl3.mozilla.com:9200']
}

OPTIONS = {
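The `ALERTS` dict above maps dotted `'module.Class'` names to celery schedules, mixing `crontab(...)` entries with plain `timedelta` intervals. The interval semantics can be sketched with the stdlib `datetime.timedelta` standing in for celery's (the `due_alerts` helper and the "run when the interval has elapsed since the last run" rule are our assumptions, not celery's scheduler):

```python
from datetime import datetime, timedelta

# Two interval-style entries borrowed from the ALERTS dict above.
ALERTS = {
    'bruteforce_ssh_pyes.AlertBruteforceSsh': {'schedule': timedelta(minutes=1)},
    'vpn_duo_auth_failures_pyes.AlertManyVPNDuoAuthFailures': {'schedule': timedelta(minutes=20)},
}


def due_alerts(last_run, now):
    """Return alert names whose interval has elapsed since their last run."""
    return sorted(
        name for name, cfg in ALERTS.items()
        if now - last_run.get(name, datetime.min) >= cfg['schedule']
    )


now = datetime(2017, 6, 15, 12, 0)
last = {
    'bruteforce_ssh_pyes.AlertBruteforceSsh': now - timedelta(seconds=90),
    'vpn_duo_auth_failures_pyes.AlertManyVPNDuoAuthFailures': now - timedelta(minutes=5),
}
print(due_alerts(last, now))  # ['bruteforce_ssh_pyes.AlertBruteforceSsh']
```

Note that `timedelta(seconds=60)` and `timedelta(minutes=1)` in the config are equivalent; the mix in `ALERTS` is cosmetic.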

alerts/lib/config.py.orig (new file, +86)

@@ -0,0 +1,86 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Anthony Verez averez@mozilla.com

from celery.schedules import crontab, timedelta
import time
import logging

#ALERTS = {
#    'bro_intel.AlertBroIntel': crontab(minute='*/1'),
#    'bro_notice.AlertBroNotice': crontab(minute='*/1'),
#    'bruteforce_ssh.AlertBruteforceSsh': crontab(minute='*/1'),
#    'cloudtrail.AlertCloudtrail': crontab(minute='*/1'),
#    'fail2ban.AlertFail2ban': crontab(minute='*/1'),
#    'duo_fail_open.AlertDuoFailOpen': crontab(minute='*/2'),
#    'amoFailedLogins_pyes.AlertFailedAMOLogin': crontab(minute='*/2'),
#    'hostScannerAlerts_pyes.AlertHostScannerFinding': crontab(minute='*/10'),
#    'deadman.broNSM3': crontab(minute='*/5'),
#}

ALERTS = {
    #'deadman.broNSM': {'schedule': timedelta(minutes=1), 'kwargs': dict(hostlist=['nsm3', 'nsm5'])},
    'bruteforce_ssh_pyes.AlertBruteforceSsh': {'schedule': timedelta(minutes=1)},
    'unauth_ssh_pyes.AlertUnauthSSH': {'schedule': timedelta(minutes=1)},
    'confluence_shell_pyes.AlertConfluenceShellUsage': {'schedule': timedelta(minutes=1)},
}

RABBITMQ = {
    'mqserver': 'localhost',
    'mquser': 'mozdef',
    'mqpassword': 'mozdef',
    'mqport': 5672,
    'alertexchange': 'alerts',
    'alertqueue': 'mozdef.alert'
}

ES = {
    'servers': ['http://mozdefqa1.private.scl3.mozilla.com:9200']
}

OPTIONS = {
    'defaulttimezone': 'UTC',
}

LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'simple': {
            'format': '%(levelname)s %(message)s',
            'datefmt': '%y %b %d, %H:%M:%S',
        },
        'standard': {
            'format': '%(asctime)s [%(levelname)s] %(name)s %(filename)s:%(lineno)d: %(message)s'
        }
    },
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'simple'
        },
        'celery': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': 'celery.log',
            'formatter': 'standard',
            'maxBytes': 1024 * 1024 * 100,  # 100 mb
        },
    },
    'loggers': {
        'celery': {
            'handlers': ['celery', 'console'],
            'level': 'DEBUG',
        },
    }
}

logging.Formatter.converter = time.gmtime

alerts/plugins/pagerDutyTriggerEvent.conf (binary, new file; not shown)


@@ -21,7 +21,7 @@ class message(object):
        the pager duty event api
        '''
-       self.registration = ['bro']
+       self.registration = ['sftp-server']
        self.priority = 2

        # set my own conf file
@@ -52,7 +52,7 @@ class message(object):
            }
        payload = json.dumps({
            "service_key": "{0}".format(self.options.serviceKey),
-           "incident_key": "bro",
+           "incident_key": "Possible Intrusion",
            "event_type": "trigger",
            "description": "{0}".format(message['summary']),
            "client": "mozdef",
@@ -70,4 +70,4 @@ class message(object):
        # plugins registered with lower (>2) priority
        # will receive the message and can also act on it
        # but even if not modified, you must return it
-       return message
+       return message
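The plugin above serializes a trigger for the PagerDuty events API with `json.dumps`. The payload assembly can be sketched in isolation like this (the helper name, service key, and message are dummies; the field names come from the diff above):

```python
import json


def build_pagerduty_trigger(service_key, message):
    """Assemble a PagerDuty trigger payload like the plugin above does."""
    return json.dumps({
        "service_key": service_key,
        "incident_key": "Possible Intrusion",
        "event_type": "trigger",
        "description": "{0}".format(message['summary']),
        "client": "mozdef",
    })


payload = build_pagerduty_trigger("dummy-key", {"summary": "SFTP Event by alice"})
print(json.loads(payload)["description"])  # SFTP Event by alice
```

Because the `incident_key` is now a fixed string rather than the event category, repeated matches are deduplicated into one PagerDuty incident instead of opening a new one per event.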

alerts/proxy_drop_pyes.py (new file, +40)

@@ -0,0 +1,40 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Jonathan Claudius jclaudius@mozilla.com
# Brandon Myers bmyers@mozilla.com
# Alicia Smith asmith@mozilla.com

from lib.alerttask import AlertTask
import pyes


class AlertProxyDrop(AlertTask):
    def main(self):
        # look for events in last X mins
        date_timedelta = dict(minutes=5)
        # Configure filters using pyes
        must = [
            pyes.TermFilter('category', 'squid'),
            pyes.ExistsFilter('details.proxyaction'),
        ]
        self.filtersManual(date_timedelta, must=must)
        self.searchEventsSimple()
        self.walkEvents()

    # Set alert properties
    def onEvent(self, event):
        category = 'squid'
        tags = ['squid']
        severity = 'WARNING'
        url = ""
        summary = event['_source']['summary']
        # Create the alert object based on these properties
        return self.createAlertDict(summary, category, tags, [event], severity, url)

alerts/sshioc.py (new file, +33)

@@ -0,0 +1,33 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2015 Mozilla Corporation
#
# Contributors:
# Aaron Meihm <ameihm@mozilla.com>

from lib.alerttask import AlertTask
import pyes


class AlertSSHIOC(AlertTask):
    def main(self):
        date_timedelta = dict(minutes=30)
        must = [
            pyes.TermFilter('_type', 'event'),
            pyes.TermFilter('tags', 'mig-runner-sshioc'),
        ]
        self.filtersManual(date_timedelta, must=must, must_not=[])
        self.searchEventsSimple()
        self.walkEvents()

    # Set alert properties
    def onEvent(self, event):
        category = 'sshioc'
        tags = ['sshioc']
        severity = 'WARNING'
        summary = 'SSH IOC match from runner plugin'
        return self.createAlertDict(summary, category, tags, [event], severity)

alerts/supervisord.alerts.conf (binary; not shown)


@@ -0,0 +1,53 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Anthony Verez averez@mozilla.com
# Jeff Bryner jbryner@mozilla.com
# Aaron Meihm ameihm@mozilla.com
# Michal Purzynski <mpurzynski@mozilla.com>
# Alicia Smith <asmith@mozilla.com>

from lib.alerttask import AlertTask
import pyes


class AlertUnauthPortScan(AlertTask):
    def main(self):
        # look for events in last X mins
        date_timedelta = dict(minutes=30)
        # Configure filters using pyes
        must = [
            pyes.TermFilter('_type', 'bro'),
            pyes.TermFilter('category', 'bronotice'),
            pyes.TermFilter('eventsource', 'nsm'),
            pyes.ExistsFilter('details.sourceipaddress'),
            pyes.QueryFilter(pyes.MatchQuery('details.note', 'Scan::Port_Scan', 'phrase')),
        ]
        self.filtersManual(date_timedelta, must=must)
        self.searchEventsSimple()
        self.walkEvents()

    # Set alert properties
    def onEvent(self, event):
        category = 'scan'
        severity = 'NOTICE'
        hostname = event['_source']['hostname']
        url = "https://mana.mozilla.org/wiki/display/SECURITY/NSM+IR+procedures"
        sourceipaddress = 'unknown'
        target = 'unknown'
        x = event['_source']
        if 'details' in x:
            if 'sourceipaddress' in x['details']:
                sourceipaddress = x['details']['sourceipaddress']
            if 'destinationipaddress' in x['details']:
                target = x['details']['destinationipaddress']
        summary = '{2}: Unauthorized Port Scan Event from {0} scanning ports on host {1}'.format(sourceipaddress, target, hostname)
        # Create the alert object based on these properties
        return self.createAlertDict(summary, category, [], [event], severity, url)

View file

@@ -0,0 +1,54 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Anthony Verez averez@mozilla.com
# Jeff Bryner jbryner@mozilla.com
# Aaron Meihm ameihm@mozilla.com
# Michal Purzynski <mpurzynski@mozilla.com>
# Alicia Smith <asmith@mozilla.com>
from lib.alerttask import AlertTask
import pyes
class AlertUnauthInternalScan(AlertTask):
def main(self):
# look for events in last X mins
date_timedelta = dict(minutes=2)
# Configure filters using pyes
must = [
pyes.TermFilter('_type', 'bro'),
pyes.TermFilter('category', 'bronotice'),
pyes.TermFilter('eventsource', 'nsm'),
pyes.TermFilter('hostname', 'nsmserver1'),
pyes.ExistsFilter('details.sourceipaddress'),
pyes.QueryFilter(pyes.MatchQuery('details.note', 'Scan::Address_Scan', 'phrase')),
]
self.filtersManual(date_timedelta, must=must)
self.searchEventsSimple()
self.walkEvents()
# Set alert properties
def onEvent(self, event):
category = 'scan'
severity = 'NOTICE'
hostname = event['_source']['hostname']
url = "https://mana.mozilla.org/wiki/display/SECURITY/NSM+IR+procedures"
sourceipaddress = 'unknown'
port = 'unknown'
x = event['_source']
if 'details' in x:
if 'sourceipaddress' in x['details']:
sourceipaddress = x['details']['sourceipaddress']
if 'p' in x['details']:
port = x['details']['p']
summary = '{2}: Unauthorized Internal Scan Event from {0} scanning ports {1}'.format(sourceipaddress, port, hostname)
# Create the alert object based on these properties
return self.createAlertDict(summary, category, [], [event], severity, url)

Binary data
alerts/unauth_ssh_pyes.conf

Binary file not shown.

View file

@@ -1,21 +0,0 @@
[uwsgi]
chdir = /home/mozdef/envs/mozdef/alerts
uid = mozdef
mule = alertWorker.py
mule = alertWorker.py
mule = alertWorker.py
mule = alertWorker.py
pyargv = -c /home/mozdef/envs/mozdef/alerts/alertWorker.conf
py-auto-reload=30s
;stats = 127.0.0.1:9192
;py-auto-reload=30s
daemonize = /home/mozdef/envs/mozdef/logs/uwsgi.AlertPluginsMules.log
;ignore normal operations that generate nothing but normal response
log-drain = generated 0 bytes
log-date = %%a %%b %%d %%H:%%M:%%S
socket = /home/mozdef/envs/mozdef/alerts/AlertPluginsMules.socket
virtualenv = /home/mozdef/envs/mozdef/
master-fifo = /home/mozdef/envs/mozdef/alerts/AlertPluginsMules.fifo
never-swap
pidfile= /home/mozdef/envs/mozdef/alerts/AlertPluginsMules.pid
vacuum = true

View file

@@ -0,0 +1,53 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Anthony Verez averez@mozilla.com
# Jeff Bryner jbryner@mozilla.com
from lib.alerttask import AlertTask
import pyes
class AlertManyVPNDuoAuthFailures(AlertTask):
def main(self):
# look for events in last X mins
date_timedelta = dict(minutes=2)
# Configure filters using pyes
must = [
pyes.TermFilter('_type', 'event'),
pyes.TermFilter('category', 'event'),
pyes.TermFilter('tags', 'duosecurity'),
pyes.QueryFilter(pyes.MatchQuery('details.integration','global and external openvpn','phrase')),
pyes.QueryFilter(pyes.MatchQuery('details.result','FAILURE','phrase')),
]
# must_not = [
# pyes.QueryFilter(pyes.MatchQuery('summary','10.22.75.203','phrase')),
# pyes.QueryFilter(pyes.MatchQuery('summary','10.8.75.144','phrase'))
# ]
# keep must_not defined even while the filters above are commented out
must_not = []
self.filtersManual(date_timedelta, must=must, must_not=must_not)
# Search aggregations on field 'username', keep X samples of events at most
self.searchEventsAggregated('details.username', samplesLimit=5)
# alert when >= X matching events in an aggregation
self.walkAggregations(threshold=5)
# Set alert properties
def onAggregation(self, aggreg):
# aggreg['count']: number of items in the aggregation, ex: number of failed login attempts
# aggreg['value']: value of the aggregation field, ex: toto@example.com
# aggreg['events']: list of events in the aggregation
category = 'openvpn'
tags = ['vpn', 'auth', 'duo']
severity = 'NOTICE'
summary = ('{0} failed openvpn authentication attempts by {1}'.format(aggreg['count'], aggreg['value']))
sourceip = self.mostCommon(aggreg['allevents'],'_source.details.ip')
for i in sourceip[:5]:
summary += ' {0} ({1} hits)'.format(i[0], i[1])
# Create the alert object based on these properties
return self.createAlertDict(summary, category, tags, aggreg['events'], severity)
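The summary above appends the most frequent source IPs via `self.mostCommon`, which is part of the AlertTask base class. Its core logic can be sketched with `collections.Counter` (the `most_common` helper below is a hypothetical stand-in, not MozDef's implementation):

```python
from collections import Counter

def most_common(events, key):
    # Hypothetical stand-in for AlertTask.mostCommon: count how often each
    # value of `key` appears across the events' _source.details.
    values = [e['_source']['details'][key] for e in events
              if key in e['_source']['details']]
    return Counter(values).most_common()

events = [{'_source': {'details': {'ip': ip}}}
          for ip in ['1.2.3.4', '1.2.3.4', '5.6.7.8']]
summary = '3 failed openvpn authentication attempts by jdoe'
# Append up to five (ip, hits) pairs, mirroring the loop in onAggregation().
for ip, hits in most_common(events, 'ip')[:5]:
    summary += ' {0} ({1} hits)'.format(ip, hits)
```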

Binary data
bot/GeoLiteCity.dat

Binary file not shown.

27
bot/README.md Normal file
View file

@@ -0,0 +1,27 @@
KitnIRC - A Python IRC Bot Framework
====================================
KitnIRC is an IRC framework that attempts to handle most of the
monotony of writing IRC bots without sacrificing flexibility.
Usage
-----
See the `skeleton` directory at the root level for a starting code skeleton
you can copy into a new project's directory and build on, and
[Getting Started](https://github.com/ayust/kitnirc/wiki/Getting-Started)
for introductory documentation.
License
-------
KitnIRC is licensed under the MIT License (see `LICENSE` for details).
Other Resources
---------------
Useful reference documents for those working with the IRC protocol as a client:
* [RFC 2812](http://tools.ietf.org/html/rfc2812)
* [ISUPPORT draft](http://tools.ietf.org/html/draft-brocklesby-irc-isupport-03)
* [List of numeric replies](https://www.alien.net.au/irc/irc2numerics.html)

Binary data
bot/debug.conf Normal file

Binary file not shown.

@@ -1 +0,0 @@
Subproject commit d8bf81a2ae658ab23d35493e346f0b27eb889f71

Binary data
bot/mozdefbot.conf Normal file

Binary file not shown.

Binary data
bot/mozdefbot.ini Normal file

Binary file not shown.

View file

@@ -273,6 +273,7 @@ class mozdefBot():
pass
except Exception as e:
sys.stdout.write('stdout - bot error, quitting {0}'.format(e))
self.client.root_logger.error('bot error..quitting {0}'.format(e))
self.client.disconnect()
if self.mqConsumer:
@@ -386,7 +387,7 @@ def initConfig():
# change this to your default zone for when it's not specified
# in time strings
options.defaultTimeZone = getConfig('defaulttimezone',
'US/Pacific',
'UTC',
options.configfile)
# irc options

Binary data
bot/options.conf

Binary file not shown.

2
bot/safe/.gitattributes vendored Normal file
View file

@@ -0,0 +1,2 @@
* filter=git-crypt diff=git-crypt
.gitattributes !filter !diff

Binary data
bot/safe/mozdefbot.py Executable file

Binary file not shown.

View file

@@ -1,45 +0,0 @@
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/
# where to write logging data.
systemLog:
destination: file
logAppend: true
path: /opt/mozdef/envs/mozdef/logs/meteor-mongo.log
# Where and how to store data.
storage:
dbPath: /opt/mozdef/envs/mongo/db
journal:
enabled: true
mmapv1:
smallFiles: true
# wiredTiger:
# how the process runs
processManagement:
fork: true # fork and run in background
pidFilePath: /var/run/mozdefdb/mozdefdb.pid # location of pidfile
# network interfaces
net:
port: 3002
bindIp: 127.0.0.1 # Listen to local interface only, comment to listen on all interfaces.
#security:
#operationProfiling:
replication:
oplogSizeMB: 8
#sharding:
## Enterprise-Only Options
#auditLog:
#snmp:

243
cron/amoAlerts.py Executable file
View file

@@ -0,0 +1,243 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
#
# Contributors:
# Jeff Bryner jbryner@mozilla.com
import sys
import json
import logging
import pika
import pytz
import pyes
from collections import Counter
from configlib import getConfig, OptionParser
from datetime import datetime
from datetime import timedelta
from dateutil.parser import parse
from logging.handlers import SysLogHandler
logger = logging.getLogger(sys.argv[0])
def loggerTimeStamp(self, record, datefmt=None):
return toUTC(datetime.now()).isoformat()
def initLogger():
logger.level = logging.INFO
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
formatter.formatTime = loggerTimeStamp
if options.output == 'syslog':
logger.addHandler(SysLogHandler(address=(options.sysloghostname, options.syslogport)))
else:
sh = logging.StreamHandler(sys.stderr)
sh.setFormatter(formatter)
logger.addHandler(sh)
def toUTC(suspectedDate, localTimeZone="UTC"):
'''make a UTC date out of almost anything'''
utc = pytz.UTC
objDate = None
if type(suspectedDate) == str:
objDate = parse(suspectedDate, fuzzy=True)
elif type(suspectedDate) == datetime:
objDate = suspectedDate
if objDate.tzinfo is None:
objDate = pytz.timezone(localTimeZone).localize(objDate)
objDate = utc.normalize(objDate)
else:
objDate = utc.normalize(objDate)
if objDate is not None:
objDate = utc.normalize(objDate)
return objDate
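The `toUTC` helper localizes naive datetimes to an assumed zone, then normalizes everything to UTC. A stdlib-only Python 3 sketch of the same behavior (the original targets Python 2 and relies on pytz and dateutil):

```python
from datetime import datetime, timedelta, timezone

def to_utc(suspected_date, local_tz=timezone.utc):
    # Attach the assumed local zone to naive datetimes, then convert
    # everything to UTC, mirroring the pytz-based toUTC() above.
    if suspected_date.tzinfo is None:
        suspected_date = suspected_date.replace(tzinfo=local_tz)
    return suspected_date.astimezone(timezone.utc)

# Naive input with a UTC default zone passes through unchanged.
naive_utc = to_utc(datetime(2014, 6, 1, 12, 0, 0))
# Aware input (here UTC-7) is converted to the equivalent UTC time.
pacific = timezone(timedelta(hours=-7))
aware = to_utc(datetime(2014, 6, 1, 12, 0, 0, tzinfo=pacific))
```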
def flattenDict(dictIn):
sout = ''
for k, v in dictIn.iteritems():
sout += '{0}: {1} '.format(k, v)
return sout
def alertToMessageQueue(alertDict):
try:
connection = pika.BlockingConnection(pika.ConnectionParameters(host=options.mqserver))
channel = connection.channel()
# declare the exchanges
channel.exchange_declare(exchange=options.alertexchange, type='topic', durable=True)
# cherry pick items from the alertDict to send to the alerts messageQueue
mqAlert = dict(severity='INFO', category='')
if 'severity' in alertDict.keys():
mqAlert['severity'] = alertDict['severity']
if 'category' in alertDict.keys():
mqAlert['category'] = alertDict['category']
if 'utctimestamp' in alertDict.keys():
mqAlert['utctimestamp'] = alertDict['utctimestamp']
if 'eventtimestamp' in alertDict.keys():
mqAlert['eventtimestamp'] = alertDict['eventtimestamp']
mqAlert['summary'] = alertDict['summary']
logger.debug(mqAlert)
channel.basic_publish(exchange=options.alertexchange, routing_key=options.alertqueue, body=json.dumps(mqAlert))
except Exception as e:
logger.error('Exception while sending alert to message queue: {0}'.format(e))
def alertToES(es, alertDict):
try:
res = es.index(index='alerts', doc_type='alert', doc=alertDict)
return(res)
except pyes.exceptions.NoServerAvailable:
logger.error('Elastic Search server could not be reached, check network connectivity')
def esFailedAMOLogin():
begindateUTC= toUTC(datetime.now() - timedelta(minutes=10))
enddateUTC= toUTC(datetime.now())
qDate = pyes.RangeQuery(qrange=pyes.ESRange('utctimestamp', from_value=begindateUTC, to_value=enddateUTC))
qType = pyes.TermFilter('_type', 'addons')
qEvents = pyes.TermFilter("signatureid","authfail")
qalerted = pyes.ExistsFilter('alerttimestamp')
q=pyes.ConstantScoreQuery(pyes.MatchAllQuery())
q.filters.append(pyes.BoolFilter(
must=[qType,
qDate,
qEvents,
pyes.QueryFilter(pyes.MatchQuery("msg","The password was incorrect","phrase")),
pyes.ExistsFilter('suser')
],
must_not=[
qalerted
]))
return q
def esRunSearch(es, query, aggregateField, detailLimit=5):
try:
pyesresults = es.search(query, size=1000, indices='events,events-previous')
# logger.debug(results.count())
# correlate any matches by the aggregate field.
# make a simple list of indicator values that can be counted/summarized by Counter
resultsIndicators = list()
# bug in pyes..capture results as raw list or it mutates after first access:
# copy the hits.hits list as our results, which is the same as the official elastic search library returns.
results = pyesresults._search_raw()['hits']['hits']
for r in results:
resultsIndicators.append(r['_source']['details'][aggregateField])
# use the list of tuples ('indicator',count) to create a dictionary with:
# indicator,count,es records
# and add it to a list to return.
indicatorList = list()
for i in Counter(resultsIndicators).most_common():
idict = dict(indicator=i[0], count=i[1], events=[])
for r in results:
if r['_source']['details'][aggregateField].encode('ascii', 'ignore') == i[0]:
# copy events detail into this correlation up to our detail limit
if len(idict['events'])<detailLimit:
idict['events'].append(r)
indicatorList.append(idict)
return indicatorList
except pyes.exceptions.NoServerAvailable:
logger.error('Elastic Search server could not be reached, check network connectivity')
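Stripped of the pyes plumbing, `esRunSearch` reduces raw hits to one record per distinct indicator value, with a count and up to `detailLimit` sample events. A simplified sketch of that correlation step:

```python
from collections import Counter

def correlate(results, aggregate_field, detail_limit=5):
    # Same output shape as esRunSearch: one dict per distinct indicator
    # value, with its count and up to detail_limit sample events.
    indicators = [r['_source']['details'][aggregate_field] for r in results]
    out = []
    for value, count in Counter(indicators).most_common():
        events = [r for r in results
                  if r['_source']['details'][aggregate_field] == value]
        out.append(dict(indicator=value, count=count,
                        events=events[:detail_limit]))
    return out

hits = [
    {'_source': {'details': {'suser': 'alice'}}},
    {'_source': {'details': {'suser': 'alice'}}},
    {'_source': {'details': {'suser': 'bob'}}},
]
correlated = correlate(hits, 'suser')
```

`most_common()` orders the output by descending count, so the noisiest indicator always comes first.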
def createAlerts(es, indicatorCounts, threshold, description):
'''given a list of dictionaries:
count: X
indicator: sometext
events: list of pyes results matching the indicator
1) create a summary alert with detail of the events
2) update the events with an alert timestamp so they are not included in further alerts
'''
try:
if len(indicatorCounts) > 0:
for i in indicatorCounts:
if i['count'] > threshold:
alert = dict(utctimestamp=toUTC(datetime.now()).isoformat(), severity='NOTICE', summary='', category='addons', tags=['addons'], eventsource=[], events=[])
for e in i['events']:
alert['events'].append(
dict(documentindex=e['_index'],
documenttype=e['_type'],
documentsource=e['_source'],
documentid=e['_id']))
alert['severity'] = 'NOTICE'
alert['summary'] = ('{0} {1}: {2}'.format(i['count'], description, i['indicator']))
# append first X source IPs
alert['summary'] += ' sample sourceips: '
for e in i['events'][0:3]:
if 'sourceipaddress' in e['_source']['details'].keys():
alert['summary'] += '{0} '.format(e['_source']['details']['sourceipaddress'])
for e in i['events']:
# append the relevant events in text format to avoid errant ES issues.
# should be able to just set eventsource to i['events'] but different versions of ES 1.0 complain
alert['eventsource'].append(flattenDict(e))
logger.debug(alert['summary'])
logger.debug(alert['events'])
logger.debug(alert)
# save alert to alerts index, update events index with alert ID for cross reference
alertResult = alertToES(es, alert)
##logger.debug(alertResult)
# for each event in this list of indicatorCounts
# update with the alertid/index
# and update the alerttimestamp on the event itself so it's not re-alerted
for e in i['events']:
if 'alerts' not in e['_source'].keys():
e['_source']['alerts'] = []
e['_source']['alerts'].append(dict(index=alertResult['_index'], type=alertResult['_type'], id=alertResult['_id']))
e['_source']['alerttimestamp'] = toUTC(datetime.now()).isoformat()
es.update(e['_index'], e['_type'], e['_id'], document=e['_source'])
alertToMessageQueue(alert)
except ValueError as e:
logger.error("Exception %r when creating alerts " % e)
def main():
logger.debug('starting')
logger.debug(options)
es = pyes.ES((list('{0}'.format(s) for s in options.esservers)))
# search for failed amo logins
indicatorCounts=esRunSearch(es,esFailedAMOLogin(),'suser', 50)
createAlerts(es,indicatorCounts, 5, 'amo failed logins')
logger.debug('finished')
def initConfig():
# change this to your default zone for when it's not specified
options.defaultTimeZone = getConfig('defaulttimezone', 'UTC', options.configfile)
# msg queue settings
options.mqserver = getConfig('mqserver', 'localhost', options.configfile) # message queue server hostname
options.alertqueue = getConfig('alertqueue', 'mozdef.alert', options.configfile) # alert queue topic
options.alertexchange = getConfig('alertexchange', 'alerts', options.configfile) # alert queue exchange name
# logging settings
options.output = getConfig('output', 'stdout', options.configfile) # output our log to stdout or syslog
options.sysloghostname = getConfig('sysloghostname', 'localhost', options.configfile) # syslog hostname
options.syslogport = getConfig('syslogport', 514, options.configfile) # syslog port
# elastic search server settings
options.esservers = list(getConfig('esservers', 'http://localhost:9200', options.configfile).split(','))
if __name__ == '__main__':
parser = OptionParser()
parser.add_option("-c", dest='configfile', default=sys.argv[0].replace('.py', '.conf'), help="configuration file to use")
(options, args) = parser.parse_args()
initConfig()
initLogger()
main()

Binary data
cron/auditDAlerts.conf Normal file

Binary file not shown.

445
cron/auditDAlerts.py Executable file
View file

@@ -0,0 +1,445 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
#
# Contributors:
# Jeff Bryner jbryner@mozilla.com
import sys
import json
import logging
import pika
import pytz
import pyes
from collections import Counter
from configlib import getConfig, OptionParser
from datetime import datetime
from datetime import timedelta
from dateutil.parser import parse
from logging.handlers import SysLogHandler
logger = logging.getLogger(sys.argv[0])
def loggerTimeStamp(self, record, datefmt=None):
return toUTC(datetime.now()).isoformat()
def initLogger():
logger.level = logging.INFO
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
formatter.formatTime = loggerTimeStamp
if options.output == 'syslog':
logger.addHandler(SysLogHandler(address=(options.sysloghostname, options.syslogport)))
else:
sh = logging.StreamHandler(sys.stderr)
sh.setFormatter(formatter)
logger.addHandler(sh)
def toUTC(suspectedDate, localTimeZone="UTC"):
'''make a UTC date out of almost anything'''
utc = pytz.UTC
objDate = None
if type(suspectedDate) == str:
objDate = parse(suspectedDate, fuzzy=True)
elif type(suspectedDate) == datetime:
objDate = suspectedDate
if objDate.tzinfo is None:
objDate = pytz.timezone(localTimeZone).localize(objDate)
objDate = utc.normalize(objDate)
else:
objDate = utc.normalize(objDate)
if objDate is not None:
objDate = utc.normalize(objDate)
return objDate
def flattenDict(dictIn):
sout = ''
for k, v in dictIn.iteritems():
sout += '{0}: {1} '.format(k, v)
return sout
def alertToMessageQueue(alertDict):
try:
connection = pika.BlockingConnection(pika.ConnectionParameters(host=options.mqserver))
channel = connection.channel()
# declare the exchanges
channel.exchange_declare(exchange=options.alertexchange, type='topic', durable=True)
# cherry pick items from the alertDict to send to the alerts messageQueue
mqAlert = dict(severity='INFO', category='')
if 'severity' in alertDict.keys():
mqAlert['severity'] = alertDict['severity']
if 'category' in alertDict.keys():
mqAlert['category'] = alertDict['category']
if 'utctimestamp' in alertDict.keys():
mqAlert['utctimestamp'] = alertDict['utctimestamp']
if 'eventtimestamp' in alertDict.keys():
mqAlert['eventtimestamp'] = alertDict['eventtimestamp']
mqAlert['summary'] = alertDict['summary']
logger.debug(mqAlert)
channel.basic_publish(exchange=options.alertexchange, routing_key=options.alertqueue, body=json.dumps(mqAlert))
except Exception as e:
logger.error('Exception while sending alert to message queue: {0}'.format(e))
def alertToES(es, alertDict):
try:
res = es.index(index='alerts', doc_type='alert', doc=alertDict)
return(res)
except pyes.exceptions.NoServerAvailable:
logger.error('Elastic Search server could not be reached, check network connectivity')
def esShadowSearch():
# find stuff like cat /etc/shadow
# search for events within the date range that haven't already been alerted (i.e. given an alerttimestamp)
begindateUTC = toUTC(datetime.now() - timedelta(minutes=30))
enddateUTC = toUTC(datetime.now())
qDate = pyes.RangeQuery(qrange=pyes.ESRange('utctimestamp', from_value=begindateUTC, to_value=enddateUTC))
qType = pyes.TermFilter('_type', 'auditd')
qEvents = pyes.TermFilter('command', 'shadow')
qalerted = pyes.ExistsFilter('alerttimestamp')
q=pyes.ConstantScoreQuery(pyes.MatchAllQuery())
# query must match dates, should have keywords must not match whitelisted items
q.filters.append(
pyes.BoolFilter(
must=[qType,
qDate,
pyes.ExistsFilter('suser')],
should=[qEvents],
must_not=[
qalerted,
pyes.QueryFilter(pyes.MatchQuery("cwd","/var/backups","phrase")),
pyes.QueryFilter(pyes.MatchQuery("dproc","/usr/bin/glimpse","phrase")),
pyes.QueryFilter(pyes.MatchQuery("dproc","/bin/chmod","phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', 'cmp -s shadow.bak /etc/shadow',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', 'cp -p /etc/shadow shadow.bak',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('suser', 'infrasec',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('parentprocess', 'mig-agent',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('parentprocess', 'passwd',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', 'no drop shadow',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', 'js::shadow',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', 'target.new',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', '/usr/share/man',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', 'shadow-invert.png',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', 'ruby-shadow',"phrase")),
pyes.QueryFilter(pyes.QueryStringQuery('command:gzip')),
pyes.QueryFilter(pyes.QueryStringQuery('command:http')),
pyes.QueryFilter(pyes.QueryStringQuery('command:html'))
]))
return q
def esRPMSearch():
begindateUTC= toUTC(datetime.now() - timedelta(minutes=30))
enddateUTC= toUTC(datetime.now())
qDate = pyes.RangeQuery(qrange=pyes.ESRange('utctimestamp', from_value=begindateUTC, to_value=enddateUTC))
qType = pyes.TermFilter('_type', 'auditd')
qEvents = pyes.TermFilter("dproc","rpm")
qalerted = pyes.ExistsFilter('alerttimestamp')
q=pyes.ConstantScoreQuery(pyes.MatchAllQuery())
q.filters.append(pyes.BoolFilter(must=[qType, qDate,qEvents,
pyes.ExistsFilter('suser')],
should=[
pyes.QueryFilter(pyes.MatchQuery("command","-e","phrase")),
pyes.QueryFilter(pyes.MatchQuery("command","--erase","phrase")),
pyes.QueryFilter(pyes.MatchQuery("command","-i","phrase")),
pyes.QueryFilter(pyes.MatchQuery("command","--install","phrase"))
],
must_not=[
qalerted,
pyes.QueryFilter(pyes.MatchQuery("command","--eval","phrase")),
pyes.QueryFilter(pyes.MatchQuery("command","--info","phrase")),
pyes.QueryFilter(pyes.MatchQuery("dhost","deploy","phrase")), # ignore rpm builds on deploy hosts
pyes.QueryFilter(pyes.MatchQuery("parentprocess","puppet","phrase")), # ignore rpm -e hp
]))
return q
def esYumSearch():
begindateUTC= toUTC(datetime.now() - timedelta(minutes=30))
enddateUTC= toUTC(datetime.now())
qDate = pyes.RangeQuery(qrange=pyes.ESRange('utctimestamp', from_value=begindateUTC, to_value=enddateUTC))
qType = pyes.TermFilter('_type', 'auditd')
qEvents = pyes.TermFilter("fname","yum")
qalerted = pyes.ExistsFilter('alerttimestamp')
q=pyes.ConstantScoreQuery(pyes.MatchAllQuery())
q.filters.append(pyes.BoolFilter(must=[qType, qDate,qEvents,
pyes.ExistsFilter('suser')],
should=[
pyes.QueryFilter(pyes.MatchQuery("command","remove","phrase"))
],
must_not=[
qalerted,
pyes.QueryFilter(pyes.MatchQuery("fname","yum.conf","phrase"))
]))
return q
def esGCCSearch():
begindateUTC= toUTC(datetime.now() - timedelta(minutes=30))
enddateUTC= toUTC(datetime.now())
qDate = pyes.RangeQuery(qrange=pyes.ESRange('utctimestamp', from_value=begindateUTC, to_value=enddateUTC))
qType = pyes.TermFilter('_type', 'auditd')
qEvents = pyes.TermFilter("fname","gcc")
qCommand = pyes.ExistsFilter('command')
qalerted = pyes.ExistsFilter('alerttimestamp')
q=pyes.ConstantScoreQuery(pyes.MatchAllQuery())
q.filters.append(
pyes.BoolFilter(must=[qType,
qDate,
qEvents,
qCommand,
pyes.ExistsFilter('suser')
],
must_not=[
qalerted,
pyes.QueryFilter(pyes.MatchQuery("command","conftest.c dhave_config_h","boolean")),
pyes.QueryFilter(pyes.MatchQuery("command","gcc -v","phrase")),
pyes.QueryFilter(pyes.MatchQuery("command","gcc -e","phrase")),
pyes.QueryFilter(pyes.MatchQuery("command","gcc --version","phrase")),
pyes.QueryFilter(pyes.MatchQuery("command","gcc -qversion","phrase")),
pyes.QueryFilter(pyes.MatchQuery("command","gcc --help","phrase")),
pyes.QueryFilter(pyes.MatchQuery("parentprocess","gcc","phrase")),
pyes.QueryFilter(pyes.MatchQuery("parentprocess","g++ c++ make imake configure python python2 python2.6 python2.7","boolean")),
pyes.QueryFilter(pyes.MatchQuery("suser","root","phrase")),
pyes.QueryFilter(pyes.MatchQuery("dhost","jenkins1","boolean")),
pyes.QueryFilter(pyes.MatchQuery("command","gcc -Wl,-t -o /tmp","phrase"))
]))
return q
def esHistoryModSearch():
begindateUTC= toUTC(datetime.now() - timedelta(minutes=30))
enddateUTC= toUTC(datetime.now())
qDate = pyes.RangeQuery(qrange=pyes.ESRange('utctimestamp', from_value=begindateUTC, to_value=enddateUTC))
qType = pyes.TermFilter('_type', 'auditd')
qCommand = pyes.ExistsFilter('command')
qalerted = pyes.ExistsFilter('alerttimestamp')
q=pyes.ConstantScoreQuery(pyes.MatchAllQuery())
q.filters.append(
pyes.BoolFilter(must=[
qType, qDate,qCommand,
pyes.ExistsFilter('suser'),
pyes.QueryFilter(pyes.MatchQuery("parentprocess","bash sh ksh","boolean")),
pyes.QueryFilter(pyes.MatchQuery("command","bash_history sh_history zsh_history .history secure messages history","boolean"))
],
should=[
pyes.QueryFilter(pyes.MatchQuery("command","rm vi vim nano emacs","boolean")),
pyes.QueryFilter(pyes.MatchQuery("command","history -c","phrase"))
],
must_not=[
qalerted
]))
return q
def esRunSearch(es, query, aggregateField, detailLimit=5):
try:
pyesresults = es.search(query, size=1000, indices='events,events-previous')
# logger.debug(results.count())
# correlate any matches by the aggregate field.
# make a simple list of indicator values that can be counted/summarized by Counter
resultsIndicators = list()
# bug in pyes..capture results as raw list or it mutates after first access:
# copy the hits.hits list as our results, which is the same as the official elastic search library returns.
results = pyesresults._search_raw()['hits']['hits']
for r in results:
resultsIndicators.append(r['_source']['details'][aggregateField])
# use the list of tuples ('indicator',count) to create a dictionary with:
# indicator,count,es records
# and add it to a list to return.
indicatorList = list()
for i in Counter(resultsIndicators).most_common():
idict = dict(indicator=i[0], count=i[1], events=[])
for r in results:
if r['_source']['details'][aggregateField].encode('ascii', 'ignore') == i[0]:
# copy events detail into this correlation up to our detail limit
if len(idict['events'])<detailLimit:
idict['events'].append(r)
indicatorList.append(idict)
return indicatorList
except pyes.exceptions.NoServerAvailable:
logger.error('Elastic Search server could not be reached, check network connectivity')
def esSearch(es, begindateUTC=None, enddateUTC=None):
if begindateUTC is None:
begindateUTC = toUTC(datetime.now() - timedelta(minutes=80))
if enddateUTC is None:
enddateUTC = toUTC(datetime.now())
try:
# search for events within the date range that haven't already been alerted (i.e. given an alerttimestamp)
qDate = pyes.RangeQuery(qrange=pyes.ESRange('utctimestamp', from_value=begindateUTC, to_value=enddateUTC))
qType = pyes.TermFilter('_type', 'auditd')
qEvents = pyes.TermFilter('command', 'shadow')
qalerted = pyes.ExistsFilter('alerttimestamp')
q=pyes.ConstantScoreQuery(pyes.MatchAllQuery())
# query must match dates, should have keywords must not match whitelisted items
q.filters.append(pyes.BoolFilter(must=[qType, qDate ], should=[qEvents],must_not=[
qalerted,
pyes.QueryFilter(pyes.MatchQuery("cwd","/var/backups","phrase")),
pyes.QueryFilter(pyes.MatchQuery("dproc","/usr/bin/glimpse","phrase")),
pyes.QueryFilter(pyes.MatchQuery("dproc","/bin/chmod","phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', 'cmp -s shadow.bak /etc/shadow',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', 'cp -p /etc/shadow shadow.bak',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('suser', 'infrasec',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('parentprocess', 'mig-agent',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('parentprocess', 'passwd',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', 'no drop shadow',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', 'js::shadow',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', 'target.new',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', '/usr/share/man',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', 'shadow-invert.png',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', 'ruby-shadow',"phrase")),
pyes.QueryFilter(pyes.QueryStringQuery('command:gzip')),
pyes.QueryFilter(pyes.QueryStringQuery('command:http')),
pyes.QueryFilter(pyes.QueryStringQuery('command:html'))
]))
pyesresults = es.search(q, size=1000, indices='events')
# logger.debug(results.count())
# correlate any matches by the dhost field.
# make a simple list of indicator values that can be counted/summarized by Counter
resultsIndicators = list()
# bug in pyes..capture results as raw list or it mutates after first access:
# copy the hits.hits list as our results, which is the same as the official elastic search library returns.
results = pyesresults._search_raw()['hits']['hits']
for r in results:
resultsIndicators.append(r['_source']['details']['dhost'])
# use the list of tuples ('indicator',count) to create a dictionary with:
# indicator,count,es records
# and add it to a list to return.
indicatorList = list()
for i in Counter(resultsIndicators).most_common():
idict = dict(indicator=i[0], count=i[1], events=[])
for r in results:
if r['_source']['details']['dhost'].encode('ascii', 'ignore') == i[0]:
idict['events'].append(r)
indicatorList.append(idict)
return indicatorList
except pyes.exceptions.NoServerAvailable:
logger.error('Elastic Search server could not be reached, check network connectivity')
def createAlerts(es, indicatorCounts):
'''given a list of dictionaries:
count: X
indicator: sometext
events: list of pyes results matching the indicator
1) create a summary alert with detail of the events
2) update the events with an alert timestamp so they are not included in further alerts
'''
try:
if len(indicatorCounts) > 0:
for i in indicatorCounts:
alert = dict(utctimestamp=toUTC(datetime.now()).isoformat(), severity='NOTICE', summary='', category='auditd', tags=['auditd'], eventsource=[], events=[])
for e in i['events']:
alert['events'].append(
dict(documentindex=e['_index'],
documenttype=e['_type'],
documentsource=e['_source'],
documentid=e['_id']))
alert['severity'] = 'NOTICE'
if i['count']==1:
alert['summary'] = ('suspicious command: {0}'.format(i['indicator']))
else:
alert['summary'] = ('{0} suspicious commands: {1}'.format(i['count'], i['indicator']))
for e in i['events'][:3]:
if 'dhost' in e['_source']['details'].keys():
alert['summary'] += ' on {0}'.format(e['_source']['details']['dhost'])
# first 50 chars of a command, then ellipsis
alert['summary'] += ' {0}'.format(e['_source']['details']['command'][:50] + (e['_source']['details']['command'][:50] and '...'))
for e in i['events']:
# append the relevant events in text format to avoid errant ES issues.
# should be able to just set eventsource to i['events'] but different versions of ES 1.0 complain
alert['eventsource'].append(flattenDict(e))
logger.debug(alert['summary'])
logger.debug(alert['events'])
logger.debug(alert)
# save alert to alerts index, update events index with alert ID for cross reference
alertResult = alertToES(es, alert)
##logger.debug(alertResult)
# for each event in this list of indicatorCounts
# update with the alertid/index
# and update the alerttimestamp on the event itself so it's not re-alerted
for e in i['events']:
if 'alerts' not in e['_source'].keys():
e['_source']['alerts'] = []
e['_source']['alerts'].append(dict(index=alertResult['_index'], type=alertResult['_type'], id=alertResult['_id']))
e['_source']['alerttimestamp'] = toUTC(datetime.now()).isoformat()
es.update(e['_index'], e['_type'], e['_id'], document=e['_source'])
alertToMessageQueue(alert)
except ValueError as e:
logger.error("Exception %r when creating alerts " % e)
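The esRunSearch/createAlerts pair above follows a count-by-indicator-then-alert-once pattern: bucket matching events by an aggregate field, then raise one alert per indicator. A minimal stdlib sketch of the correlation step (the sample events and field values are hypothetical):

```python
from collections import Counter

def aggregate_indicators(events, field):
    """Group events by an aggregate field and return per-indicator buckets,
    mirroring the Counter-based correlation step in esRunSearch()."""
    indicators = [e['details'][field] for e in events if field in e['details']]
    buckets = []
    for value, count in Counter(indicators).most_common():
        matching = [e for e in events if e['details'].get(field) == value]
        buckets.append({'indicator': value, 'count': count, 'events': matching})
    return buckets

# hypothetical sample events shaped like the 'details' documents above
sample = [
    {'details': {'suser': 'alice', 'command': 'rpm -e foo'}},
    {'details': {'suser': 'alice', 'command': 'rpm -i bar'}},
    {'details': {'suser': 'bob', 'command': 'yum remove baz'}},
]
counts = aggregate_indicators(sample, 'suser')
```

most_common() sorts the buckets so the noisiest indicator produces the first (and most detailed) alert.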
def main():
logger.debug('starting')
logger.debug(options)
es = pyes.ES((list('{0}'.format(s) for s in options.esservers)))
# run a series of searches for suspicious commands
# aggregating by a specific field (usually dhost or suser)
# and alert if found
    # /etc/shadow manipulation, aggregated by source user
    indicatorCounts = esRunSearch(es, esShadowSearch(), 'suser')
createAlerts(es, indicatorCounts)
    # search for rpm -i or -e type commands by suser:
    indicatorCounts = esRunSearch(es, esRPMSearch(), 'suser')
    createAlerts(es, indicatorCounts)
    # search for yum remove commands by suser:
    indicatorCounts = esRunSearch(es, esYumSearch(), 'suser')
    createAlerts(es, indicatorCounts)
    # search for gcc commands by suser:
    indicatorCounts = esRunSearch(es, esGCCSearch(), 'suser')
    createAlerts(es, indicatorCounts)
    # search for history modification commands by suser:
    indicatorCounts = esRunSearch(es, esHistoryModSearch(), 'suser')
    createAlerts(es, indicatorCounts)
logger.debug('finished')
def initConfig():
# change this to your default zone for when it's not specified
options.defaultTimeZone = getConfig('defaulttimezone', 'UTC', options.configfile)
# msg queue settings
options.mqserver = getConfig('mqserver', 'localhost', options.configfile) # message queue server hostname
options.alertqueue = getConfig('alertqueue', 'mozdef.alert', options.configfile) # alert queue topic
options.alertexchange = getConfig('alertexchange', 'alerts', options.configfile) # alert queue exchange name
# logging settings
options.output = getConfig('output', 'stdout', options.configfile) # output our log to stdout or syslog
options.sysloghostname = getConfig('sysloghostname', 'localhost', options.configfile) # syslog hostname
options.syslogport = getConfig('syslogport', 514, options.configfile) # syslog port
# elastic search server settings
options.esservers = list(getConfig('esservers', 'http://localhost:9200', options.configfile).split(','))
if __name__ == '__main__':
parser = OptionParser()
parser.add_option("-c", dest='configfile', default=sys.argv[0].replace('.py', '.conf'), help="configuration file to use")
(options, args) = parser.parse_args()
initConfig()
initLogger()
main()

4
cron/auditDAlerts.sh Executable file
View file

@@ -0,0 +1,4 @@
#!/usr/bin/env bash
source /home/mozdef/envs/mozdef/bin/activate
/home/mozdef/envs/mozdef/cron/auditDAlerts.py -c /home/mozdef/envs/mozdef/cron/auditDAlerts.conf
/home/mozdef/envs/mozdef/cron/auditDFileAlerts.py -c /home/mozdef/envs/mozdef/cron/auditDAlerts.conf

307
cron/auditDFileAlerts.py Executable file
View file

@@ -0,0 +1,307 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
#
# Contributors:
# Jeff Bryner jbryner@mozilla.com
import sys
import json
import logging
import pika
import pytz
import pyes
from collections import Counter
from configlib import getConfig, OptionParser
from datetime import datetime
from datetime import timedelta
from dateutil.parser import parse
from logging.handlers import SysLogHandler
logger = logging.getLogger(sys.argv[0])
def loggerTimeStamp(self, record, datefmt=None):
return toUTC(datetime.now()).isoformat()
def initLogger():
logger.level = logging.INFO
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
formatter.formatTime = loggerTimeStamp
if options.output == 'syslog':
logger.addHandler(SysLogHandler(address=(options.sysloghostname, options.syslogport)))
else:
sh = logging.StreamHandler(sys.stderr)
sh.setFormatter(formatter)
logger.addHandler(sh)
def toUTC(suspectedDate, localTimeZone="UTC"):
'''make a UTC date out of almost anything'''
utc = pytz.UTC
objDate = None
if type(suspectedDate) == str:
objDate = parse(suspectedDate, fuzzy=True)
elif type(suspectedDate) == datetime:
objDate = suspectedDate
if objDate.tzinfo is None:
objDate = pytz.timezone(localTimeZone).localize(objDate)
objDate = utc.normalize(objDate)
else:
objDate = utc.normalize(objDate)
if objDate is not None:
objDate = utc.normalize(objDate)
return objDate
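toUTC() above normalizes strings and naive datetimes into timezone-aware UTC values. The same idea can be sketched with only the stdlib (no pytz named zones, and the fuzzy string parsing is omitted):

```python
from datetime import datetime, timezone, timedelta

def to_utc(suspected_date, local_tz=timezone.utc):
    """Coerce a naive or aware datetime to an aware UTC datetime,
    mirroring the pytz-based toUTC() above."""
    if suspected_date.tzinfo is None:
        # naive datetimes are assumed to be in the configured local zone
        suspected_date = suspected_date.replace(tzinfo=local_tz)
    return suspected_date.astimezone(timezone.utc)

# a naive noon timestamp in a UTC-5 zone normalizes to 17:00 UTC
est = timezone(timedelta(hours=-5))
normalized = to_utc(datetime(2017, 6, 15, 12, 0, 0), local_tz=est)
```

Normalizing every timestamp this way keeps the date-range queries below comparable regardless of where the event originated.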
def flattenDict(dictIn):
sout = ''
for k, v in dictIn.iteritems():
sout += '{0}: {1} '.format(k, v)
return sout
def alertToMessageQueue(alertDict):
try:
connection = pika.BlockingConnection(pika.ConnectionParameters(host=options.mqserver))
channel = connection.channel()
# declare the exchanges
channel.exchange_declare(exchange=options.alertexchange, type='topic', durable=True)
# cherry pick items from the alertDict to send to the alerts messageQueue
mqAlert = dict(severity='INFO', category='')
if 'severity' in alertDict.keys():
mqAlert['severity'] = alertDict['severity']
if 'category' in alertDict.keys():
mqAlert['category'] = alertDict['category']
if 'utctimestamp' in alertDict.keys():
mqAlert['utctimestamp'] = alertDict['utctimestamp']
if 'eventtimestamp' in alertDict.keys():
mqAlert['eventtimestamp'] = alertDict['eventtimestamp']
mqAlert['summary'] = alertDict['summary']
logger.debug(mqAlert)
channel.basic_publish(exchange=options.alertexchange, routing_key=options.alertqueue, body=json.dumps(mqAlert))
except Exception as e:
logger.error('Exception while sending alert to message queue: {0}'.format(e))
def alertToES(es, alertDict):
try:
res = es.index(index='alerts', doc_type='alert', doc=alertDict)
return(res)
except pyes.exceptions.NoServerAvailable:
logger.error('Elastic Search server could not be reached, check network connectivity')
def esUserWriteSearch():
begindateUTC= toUTC(datetime.now() - timedelta(minutes=30))
enddateUTC= toUTC(datetime.now())
qDate = pyes.RangeQuery(qrange=pyes.ESRange('utctimestamp', from_value=begindateUTC, to_value=enddateUTC))
qType = pyes.TermFilter('_type', 'auditd')
qEvents = pyes.TermFilter("signatureid","write")
qalerted = pyes.ExistsFilter('alerttimestamp')
q=pyes.ConstantScoreQuery(pyes.MatchAllQuery())
q.filters.append(
pyes.BoolFilter(must=[
qType, qDate,qEvents,
pyes.QueryFilter(pyes.MatchQuery("auditkey","user","phrase")),
pyes.ExistsFilter('suser')
],
must_not=[
qalerted,
pyes.QueryFilter(pyes.MatchQuery("parentprocess","puppet dhclient-script","boolean"))
]))
return q
def esRunSearch(es, query, aggregateField):
try:
pyesresults = es.search(query, size=1000, indices='events,events-previous')
# logger.debug(results.count())
# correlate any matches by the aggregate field.
# make a simple list of indicator values that can be counted/summarized by Counter
resultsIndicators = list()
# bug in pyes..capture results as raw list or it mutates after first access:
# copy the hits.hits list as our results, which is the same as the official elastic search library returns.
results = pyesresults._search_raw()['hits']['hits']
for r in results:
if aggregateField in r['_source']['details']:
resultsIndicators.append(r['_source']['details'][aggregateField])
else:
logger.error('{0} aggregate key not found {1}'.format(aggregateField, r['_source']))
sys.exit(1)
# use the list of tuples ('indicator',count) to create a dictionary with:
# indicator,count,es records
# and add it to a list to return.
indicatorList = list()
for i in Counter(resultsIndicators).most_common():
idict = dict(indicator=i[0], count=i[1], events=[])
for r in results:
if r['_source']['details'][aggregateField].encode('ascii', 'ignore') == i[0]:
idict['events'].append(r)
indicatorList.append(idict)
return indicatorList
except pyes.exceptions.NoServerAvailable:
logger.error('Elastic Search server could not be reached, check network connectivity')
def esSearch(es, begindateUTC=None, enddateUTC=None):
if begindateUTC is None:
begindateUTC = toUTC(datetime.now() - timedelta(minutes=80))
if enddateUTC is None:
enddateUTC = toUTC(datetime.now())
try:
# search for events within the date range that haven't already been alerted (i.e. given an alerttimestamp)
qDate = pyes.RangeQuery(qrange=pyes.ESRange('utctimestamp', from_value=begindateUTC, to_value=enddateUTC))
qType = pyes.TermFilter('_type', 'auditd')
qEvents = pyes.TermFilter('command', 'shadow')
qalerted = pyes.ExistsFilter('alerttimestamp')
q=pyes.ConstantScoreQuery(pyes.MatchAllQuery())
# query must match dates, should have keywords must not match whitelisted items
q.filters.append(pyes.BoolFilter(must=[qType, qDate ], should=[qEvents],must_not=[
qalerted,
pyes.QueryFilter(pyes.MatchQuery("cwd","/var/backups","phrase")),
pyes.QueryFilter(pyes.MatchQuery("dproc","/usr/bin/glimpse","phrase")),
pyes.QueryFilter(pyes.MatchQuery("dproc","/bin/chmod","phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', 'cmp -s shadow.bak /etc/shadow',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', 'cp -p /etc/shadow shadow.bak',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('suser', 'infrasec',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('parentprocess', 'mig-agent',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('parentprocess', 'passwd',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', 'no drop shadow',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', 'js::shadow',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', 'target.new',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', '/usr/share/man',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', 'shadow-invert.png',"phrase")),
pyes.QueryFilter(pyes.MatchQuery('command', 'ruby-shadow',"phrase")),
pyes.QueryFilter(pyes.QueryStringQuery('command:gzip')),
pyes.QueryFilter(pyes.QueryStringQuery('command:http')),
pyes.QueryFilter(pyes.QueryStringQuery('command:html'))
]))
pyesresults = es.search(q, size=1000, indices='events')
# logger.debug(results.count())
# correlate any matches by the dhost field.
# make a simple list of indicator values that can be counted/summarized by Counter
resultsIndicators = list()
# bug in pyes..capture results as raw list or it mutates after first access:
# copy the hits.hits list as our resusts, which is the same as the official elastic search library returns.
results = pyesresults._search_raw()['hits']['hits']
for r in results:
resultsIndicators.append(r['_source']['details']['dhost'])
# use the list of tuples ('indicator',count) to create a dictionary with:
# indicator,count,es records
# and add it to a list to return.
indicatorList = list()
for i in Counter(resultsIndicators).most_common():
idict = dict(indicator=i[0], count=i[1], events=[])
for r in results:
if r['_source']['details']['dhost'].encode('ascii', 'ignore') == i[0]:
idict['events'].append(r)
indicatorList.append(idict)
return indicatorList
except pyes.exceptions.NoServerAvailable:
logger.error('Elastic Search server could not be reached, check network connectivity')
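esSearch() combines a date-range filter with a must_not whitelist of known-benign shadow references. The filtering logic can be approximated in plain Python (the field name and whitelist entries are taken from the query above; the sample events are hypothetical):

```python
# known-benign command patterns, copied from the must_not clauses above
WHITELIST = ('cmp -s shadow.bak /etc/shadow', 'cp -p /etc/shadow shadow.bak')

def is_suspicious(event):
    """True when the command mentions 'shadow' and matches no whitelist entry,
    approximating the should/must_not clauses of esSearch()."""
    command = event.get('command', '')
    if 'shadow' not in command:
        return False
    return not any(w in command for w in WHITELIST)

flagged = is_suspicious({'command': 'cat /etc/shadow'})
benign = is_suspicious({'command': 'cp -p /etc/shadow shadow.bak'})
```

The real query pushes this filtering into Elasticsearch so only unwhitelisted, not-yet-alerted events come back to the script.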
def createAlerts(es, indicatorCounts):
'''given a list of dictionaries:
count: X
indicator: sometext
events: list of pyes results matching the indicator
1) create a summary alert with detail of the events
2) update the events with an alert timestamp so they are not included in further alerts
'''
try:
if len(indicatorCounts) > 0:
for i in indicatorCounts:
alert = dict(utctimestamp=toUTC(datetime.now()).isoformat(), severity='NOTICE', summary='', category='auditd', tags=['auditd'], eventsource=[], events=[])
for e in i['events']:
alert['events'].append(
dict(documentindex=e['_index'],
documenttype=e['_type'],
documentsource=e['_source'],
documentid=e['_id']))
alert['severity'] = 'NOTICE'
                if i['count'] == 1:
                    alert['summary'] = ('suspicious file access: {0}'.format(i['indicator']))
                else:
                    alert['summary'] = ('{0} suspicious file accesses: {1}'.format(i['count'], i['indicator']))
for e in i['events'][:3]:
alert['summary'] += ' {0}'.format(e['_source']['details']['fname'])
if 'dhost' in e['_source']['details'].keys():
alert['summary'] += ' on {0}'.format(e['_source']['details']['dhost'])
for e in i['events']:
# append the relevant events in text format to avoid errant ES issues.
# should be able to just set eventsource to i['events'] but different versions of ES 1.0 complain
alert['eventsource'].append(flattenDict(e))
logger.debug(alert['summary'])
logger.debug(alert['events'])
logger.debug(alert)
# save alert to alerts index, update events index with alert ID for cross reference
alertResult = alertToES(es, alert)
##logger.debug(alertResult)
# for each event in this list of indicatorCounts
# update with the alertid/index
# and update the alerttimestamp on the event itself so it's not re-alerted
for e in i['events']:
if 'alerts' not in e['_source'].keys():
e['_source']['alerts'] = []
e['_source']['alerts'].append(dict(index=alertResult['_index'], type=alertResult['_type'], id=alertResult['_id']))
e['_source']['alerttimestamp'] = toUTC(datetime.now()).isoformat()
es.update(e['_index'], e['_type'], e['_id'], document=e['_source'])
alertToMessageQueue(alert)
except ValueError as e:
logger.error("Exception %r when creating alerts " % e)
def main():
logger.debug('starting')
logger.debug(options)
es = pyes.ES((list('{0}'.format(s) for s in options.esservers)))
# searches for suspicious file access
# aggregating by a specific field (usually dhost or suser)
# and alert if found
# signature: WRITE by a user, not by puppet
indicatorCounts = esRunSearch(es,esUserWriteSearch(), 'suser')
createAlerts(es, indicatorCounts)
logger.debug('finished')
def initConfig():
# change this to your default zone for when it's not specified
options.defaultTimeZone = getConfig('defaulttimezone', 'UTC', options.configfile)
# msg queue settings
options.mqserver = getConfig('mqserver', 'localhost', options.configfile) # message queue server hostname
options.alertqueue = getConfig('alertqueue', 'mozdef.alert', options.configfile) # alert queue topic
options.alertexchange = getConfig('alertexchange', 'alerts', options.configfile) # alert queue exchange name
# logging settings
options.output = getConfig('output', 'stdout', options.configfile) # output our log to stdout or syslog
options.sysloghostname = getConfig('sysloghostname', 'localhost', options.configfile) # syslog hostname
options.syslogport = getConfig('syslogport', 514, options.configfile) # syslog port
# elastic search server settings
options.esservers = list(getConfig('esservers', 'http://localhost:9200', options.configfile).split(','))
if __name__ == '__main__':
parser = OptionParser()
parser.add_option("-c", dest='configfile', default=sys.argv[0].replace('.py', '.conf'), help="configuration file to use")
(options, args) = parser.parse_args()
initConfig()
initLogger()
main()

View file

@@ -1,14 +0,0 @@
{
"mozdef": {
"url": "http://localhost:8080/events"
},
//Generate on https://auth0.com/docs/api/management/v2 (top left TOKEN GENERATOR)
"auth0": {
"reqnr": 100,
"token": "",
"url": "https://<YOU>.auth0.com/api/v2/logs"
},
"state_file": "auth02mozdef.state",
//Sends msgs to syslog instead of MozDef (mainly for debugging)
"DEBUG": False
}

View file

@@ -1,331 +0,0 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2016 Mozilla Corporation
# Author: gdestuynder@mozilla.com
# Imports auth0.com logs into MozDef
import hjson
import sys
import requests
import mozdef_client as mozdef
try:
import urllib.parse
quote_url = urllib.parse.quote
except ImportError:
#Well hello there python2 user!
import urllib
quote_url = urllib.quote
class DotDict(dict):
'''dict.item notation for dict()'s'''
__getattr__ = dict.__getitem__
__setattr__ = dict.__setitem__
__delattr__ = dict.__delitem__
def __init__(self, dct):
for key, value in dct.items():
if hasattr(value, 'keys'):
value = DotDict(value)
self[key] = value
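The DotDict above enables the attribute-style access (msg.details.request.auth.user) used throughout process_msg(). A self-contained usage example (the sample message fragment is hypothetical):

```python
class DotDict(dict):
    '''dict.item notation for dict()s, as defined above'''
    __getattr__ = dict.__getitem__
    __setattr__ = dict.__setitem__
    __delattr__ = dict.__delitem__

    def __init__(self, dct):
        for key, value in dct.items():
            if hasattr(value, 'keys'):
                value = DotDict(value)  # recurse so nested dicts also get dot access
            self[key] = value

# hypothetical auth0-style message fragment
msg = DotDict({'details': {'request': {'auth': {'user': 'alice'}}}})
user = msg.details.request.auth.user
```

Because missing keys raise KeyError rather than AttributeError, the try/except KeyError blocks in process_msg() work the same for both dotted and bracketed access.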
def fatal(msg):
print(msg)
sys.exit(1)
def debug(msg):
sys.stderr.write('+++ {}\n'.format(msg))
#This is from https://auth0.com/docs/api/management/v2#!/Logs/get_logs
#and https://github.com/auth0/auth0-logs-to-logentries/blob/master/index.js (MIT)
log_types=DotDict({
's': {
"event": 'Success Login',
"level": 1 # Info
},
'seacft': {
"event": 'Success Exchange',
"level": 1 # Info
},
'feacft': {
"event": 'Failed Exchange',
"level": 3 # Error
},
'f': {
"event": 'Failed Login',
"level": 3 # Error
},
'w': {
"event": 'Warnings During Login',
"level": 2 # Warning
},
'du': {
"event": 'Deleted User',
"level": 1 # Info
},
'fu': {
"event": 'Failed Login (invalid email/username)',
"level": 3 # Error
},
'fp': {
"event": 'Failed Login (wrong password)',
"level": 3 # Error
},
'fc': {
"event": 'Failed by Connector',
"level": 3 # Error
},
'fco': {
"event": 'Failed by CORS',
"level": 3 # Error
},
'con': {
"event": 'Connector Online',
"level": 1 # Info
},
'coff': {
"event": 'Connector Offline',
"level": 3 # Error
},
'fcpro': {
"event": 'Failed Connector Provisioning',
"level": 4 # Critical
},
'ss': {
"event": 'Success Signup',
"level": 1 # Info
},
'fs': {
"event": 'Failed Signup',
"level": 3 # Error
},
'cs': {
"event": 'Code Sent',
"level": 0 # Debug
},
'cls': {
"event": 'Code/Link Sent',
"level": 0 # Debug
},
'sv': {
"event": 'Success Verification Email',
"level": 0 # Debug
},
'fv': {
"event": 'Failed Verification Email',
"level": 0 # Debug
},
'scp': {
"event": 'Success Change Password',
"level": 1 # Info
},
'fcp': {
"event": 'Failed Change Password',
"level": 3 # Error
},
'sce': {
"event": 'Success Change Email',
"level": 1 # Info
},
'fce': {
"event": 'Failed Change Email',
"level": 3 # Error
},
'scu': {
"event": 'Success Change Username',
"level": 1 # Info
},
'fcu': {
"event": 'Failed Change Username',
"level": 3 # Error
},
'scpn': {
"event": 'Success Change Phone Number',
"level": 1 # Info
},
'fcpn': {
"event": 'Failed Change Phone Number',
"level": 3 # Error
},
'svr': {
"event": 'Success Verification Email Request',
"level": 0 # Debug
},
'fvr': {
"event": 'Failed Verification Email Request',
"level": 3 # Error
},
'scpr': {
"event": 'Success Change Password Request',
"level": 0 # Debug
},
'fcpr': {
"event": 'Failed Change Password Request',
"level": 3 # Error
},
'fn': {
"event": 'Failed Sending Notification',
"level": 3 # Error
},
'sapi': {
"event": 'API Operation',
"level": 1 #Info
},
'fapi': {
"event": 'Failed API Operation',
"level": 3 #Error
},
'limit_wc': {
"event": 'Blocked Account',
"level": 4 # Critical
},
'limit_ui': {
"event": 'Too Many Calls to /userinfo',
"level": 4 # Critical
},
'api_limit': {
"event": 'Rate Limit On API',
"level": 4 #Critical
},
'sdu': {
"event": 'Successful User Deletion',
"level": 1 # Info
},
'fdu': {
"event": 'Failed User Deletion',
"level": 3 # Error
}
})
def process_msg(mozmsg, msg):
"""Normalization function for auth0 msg.
@mozmsg: MozDefEvent (mozdef message)
@msg: DotDict (json with auth0 raw message data).
All the try-except loops handle cases where the auth0 msg may or may not contain expected fields.
    The msg structure is not guaranteed.
See also https://auth0.com/docs/api/management/v2#!/Logs/get_logs
"""
details = DotDict({})
try:
mozmsg.useragent = msg.user_agent
except KeyError:
pass
details['type'] = log_types[msg.type].event
if log_types[msg.type].level == 3:
mozmsg.set_severity(mozdef.MozDefEvent.SEVERITY_ERROR)
elif log_types[msg.type].level > 3:
mozmsg.set_severity(mozdef.MozDefEvent.SEVERITY_CRITICAL)
details['sourceipaddress'] = msg.ip
try:
details['description'] = msg.description
except KeyError:
details['description'] = ""
mozmsg.timestamp = msg.date
details['auth0_msg_id'] = msg._id
try:
details['auth0_client'] = msg.client_name
except KeyError:
pass
details['auth0_client_id'] = msg.client_id
try:
details['username'] = msg.details.request.auth.user
details['action'] = msg.details.response.body.name
except KeyError:
try:
details['errormsg'] = msg.details.error.message
details['error'] = 'true'
except KeyError:
pass
details['username'] = msg.user_name
try:
auth0details = msg.details.details
except KeyError:
auth0details = ""
mozmsg.summary = "{type} {desc} {auth0details}".format(type=details.type, desc=details.description,
auth0details=auth0details)
mozmsg.details = details
#that's just too much data, IMO
#mozmsg.details['auth0_raw'] = msg
return mozmsg
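process_msg() raises the MozDef severity when the auth0 log-type level crosses the error threshold. A minimal sketch of that mapping, using a hypothetical subset of the log_types table and plain severity strings in place of the mozdef_client constants:

```python
# hypothetical subset of the log_types table above
LOG_TYPES = {
    's': {'event': 'Success Login', 'level': 1},
    'f': {'event': 'Failed Login', 'level': 3},
    'limit_wc': {'event': 'Blocked Account', 'level': 4},
}

def severity_for(log_type):
    """Map an auth0 log-type level to a severity string, following the
    level == 3 / level > 3 thresholds used in process_msg() above."""
    level = LOG_TYPES[log_type]['level']
    if level > 3:
        return 'CRITICAL'
    if level == 3:
        return 'ERROR'
    return 'INFO'

sev = severity_for('limit_wc')
```

Everything below the error threshold keeps the event's default INFO severity, matching the table's Debug/Info levels.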
def load_state(fpath):
"""Load last msg id we've read from auth0 (log index).
@fpath string (path to state file)
"""
state = 0
try:
with open(fpath) as fd:
state = int(fd.read().split('\n')[0])
    except (IOError, OSError):  # FileNotFoundError does not exist on python2
pass
return state
def save_state(fpath, state):
"""Saves last msg id we've read from auth0 (log index).
@fpath string (path to state file)
@state int (state value)
"""
with open(fpath, mode='w') as fd:
fd.write(str(state)+'\n')
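load_state()/save_state() implement a simple high-water-mark cursor so each cron run resumes after the last auth0 log id it processed. A self-contained sketch of the same pattern (the temp-file path is illustrative):

```python
import os
import tempfile

def load_state(fpath):
    """Return the last processed log id, or 0 when no state file exists yet."""
    try:
        with open(fpath) as fd:
            return int(fd.read().split('\n')[0])
    except (IOError, OSError):  # also covers FileNotFoundError on python3
        return 0

def save_state(fpath, state):
    """Persist the high-water mark for the next run."""
    with open(fpath, mode='w') as fd:
        fd.write(str(state) + '\n')

state_path = os.path.join(tempfile.mkdtemp(), 'auth02mozdef.state')
first_run = load_state(state_path)  # no file yet, so we start from 0
save_state(state_path, 42)
resumed = load_state(state_path)
```

Writing the cursor only after a successful send (as main() does with lastid) means a crashed run re-reads, rather than skips, any unsent messages.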
def main():
#Configuration loading
with open('auth02mozdef.json') as fd:
config = DotDict(hjson.load(fd))
    if config is None:
print("No configuration file 'auth02mozdef.json' found.")
sys.exit(1)
headers = {'Authorization': 'Bearer {}'.format(config.auth0.token),
'Accept': 'application/json'}
fromid = load_state(config.state_file)
r = requests.get('{url}?take={reqnr}&sort=date:1&per_page={reqnr}&include_totals=true&from={fromid}'.format(
url=config.auth0.url,
reqnr=config.auth0.reqnr,
fromid=fromid),
headers=headers)
#If we fail here, auth0 is not responding to us the way we expected it
if (not r.ok):
raise Exception(r.url, r.reason, r.status_code, r.json())
ret = r.json()
#Process all new auth0 log msgs, normalize and send them to mozdef
for msg in ret:
mozmsg = mozdef.MozDefEvent(config.mozdef.url)
if config.DEBUG:
mozmsg.set_send_to_syslog(True, only_syslog=True)
mozmsg.source = config.auth0.url
mozmsg.tags = ['auth0']
msg = DotDict(msg)
lastid = msg._id
#Fill in mozdef msg fields from the auth0 msg
try:
mozmsg = process_msg(mozmsg, msg)
except KeyError as e:
#if this happens the msg was malformed in some way
mozmsg.details['error'] = 'true'
            mozmsg.details['errormsg'] = str(e)
mozmsg.summary = 'Failed to parse auth0 message'
mozmsg.send()
save_state(config.state_file, lastid)
if __name__ == "__main__":
main()

Binary data
cron/backup.conf

Binary file not shown.

92
cron/backupES.sh Executable file
View file

@@ -0,0 +1,92 @@
#!/bin/bash
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Jeff Bryner jbryner@mozilla.com
# herein we backup our indexes! this script should run at like 6pm or something, after logstash
# rotates to a new ES index and there's no new data coming in to the old one. we grab metadata,
# compress the data files, create a restore script, and push it all up to S3.
#set environment for s3/python
source /home/mozdef/envs/mozdef/bin/activate
#figure out what we are archiving (default to yesterday's events-YYYYMMDD index)
IDATE=`date --date='yesterday' +"%Y%m%d"`
INDEXNAME="events-$IDATE" # this had better match the index name in ES
#maybe we were told what to archive.
if [[ ! -z $1 ]]
then
INDEXNAME=$1
fi
#compensate for qa server pathname strangeness.
HOSTNAME=`hostname`
if [[ $HOSTNAME == *mozdefqa* ]]
then
INDEXDIR="/data/es/mozdefqa/nodes/0/indices/"
else
INDEXDIR="/data/es/mozdef/nodes/0/indices/"
fi
BACKUPCMD="/home/mozdef/envs/mozdef/bin/s3cmd put"
echo "using $INDEXDIR as index directory"
echo "archiving $INDEXNAME"
BACKUPDIR="/tmp/es-backups/"
YEARMONTH=`date --date='yesterday' +"%Y-%m"`
S3TARGET="s3://mozdefesbackups/elasticsearch/$YEARMONTH/$HOSTNAME/$INDEXNAME"
# create mapping file with index settings. this metadata is required by ES to use index file data
echo -n "Backing up metadata... "
curl -XGET -o /tmp/mapping "http://localhost:9200/$INDEXNAME/_mapping?pretty=true" > /dev/null 2>&1
sed -i '1,2d' /tmp/mapping #strip the first two lines of the metadata
echo '{"settings":{"number_of_shards":5,"number_of_replicas":1},"mappings":{' >> /tmp/mappost
# prepend hardcoded settings metadata to index-specific metadata
cat /tmp/mapping >> /tmp/mappost
echo "DONE!"
# now lets tar up our data files. these are huge, so lets be nice
echo -n "Backing up data files (this may take some time)... "
mkdir -p $BACKUPDIR
cd $INDEXDIR
nice -n 19 tar czf $BACKUPDIR/$INDEXNAME.tar.gz $INDEXNAME
echo "DONE!"
echo -n "Creating restore script... "
# time to create our restore script! oh god scripts creating scripts, this never ends well...
cat << EOF >> $BACKUPDIR/$INDEXNAME-restore.sh
#!/bin/bash
# this script requires $INDEXNAME.tar.gz and will restore it into elasticsearch
# it is ESSENTIAL that the index you are restoring does NOT exist in ES. delete it
# if it does BEFORE trying to restore data.
# create index and mapping
echo -n "Creating index and mappings... "
curl -XPUT 'http://localhost:9200/$INDEXNAME/' -d '`cat /tmp/mappost`' > /dev/null 2>&1
echo "DONE!"
# extract our data files into place
echo -n "Restoring index (this may take a while)... "
cd $INDEXDIR
tar xzf $BACKUPDIR/$INDEXNAME.tar.gz
echo "DONE!"
# restart ES to allow it to open the new dir and file data
echo -n "Restarting Elasticsearch... "
/etc/init.d/elasticsearch restart
echo "DONE!"
EOF
echo "DONE!" # restore script done
# push both tar.gz and restore script to s3
echo -n "Saving to S3 (this may take some time)... "
$BACKUPCMD $BACKUPDIR/$INDEXNAME.tar.gz $S3TARGET.tar.gz
$BACKUPCMD $BACKUPDIR/$INDEXNAME-restore.sh $S3TARGET-restore.sh
echo "DONE!"
# cleanup tmp files
rm /tmp/mappost
rm /tmp/mapping

79
cron/backupES10.sh Executable file
View file

@@ -0,0 +1,79 @@
#!/bin/bash
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Anthony Verez averez@mozilla.com
# herein we backup our indexes! this script should run at like 6pm or something, after you
# rotate indexes to a new ES date index and there's no new data coming in to the old one. we create a snapshot,
# create a restore script, and push it all up to S3.
# Usage: ./backupES10.sh <esserverhostname> [index]
#set environment for s3/python
source /home/mozdef/envs/mozdef/bin/activate
#figure out what we are archiving (default to yesterday's events-YYYYMMDD index)
IDATE=`date --date='yesterday' +"%Y%m%d"`
INDEXNAME="events-$IDATE" # this had better match the index name in ES
#maybe we were told what to archive.
HOSTNAME=`hostname`
if [[ ! -z $1 ]]
then
HOSTNAME=$1
fi
if [[ ! -z $2 ]]
then
INDEXNAME=$2
fi
BACKUPCMD="/home/mozdef/envs/mozdef/bin/s3cmd put"
#compensate for qa server pathname strangeness.
YEARMONTH=`date --date='yesterday' +"%Y-%m"`
S3TARGET="s3://mozdefesbackups/elasticsearch/$YEARMONTH/$HOSTNAME/$INDEXNAME"
# if snapshot repo not registered
if result=$(curl -s -XGET "http://${HOSTNAME}:9200/_snapshot/s3backup?pretty" | grep "\"status\" : 404"); then
echo "Configuring snapshot repository (first time run only)..."
curl -s -XPUT "http://${HOSTNAME}:9200/_snapshot/s3backup" -d '{
"type": "s3",
"settings": {
"bucket": "mozdefesbackups",
"base_path": "elasticsearch/'"$YEARMONTH"'/'"$HOSTNAME"'",
"region": "us-west"
}
}' > /dev/null 2>&1
echo "DONE"
fi
echo -n "Creating snapshot (this may take a while)..."
curl -s -XPUT "http://${HOSTNAME}:9200/_snapshot/s3backup/$INDEXNAME?wait_for_completion=true" -d '{
"indices": "'"${INDEXNAME}"'"
}' > /dev/null 2>&1
echo "DONE"
echo -n "Creating restore script... "
# time to create our restore script! oh god scripts creating scripts, this never ends well...
cat << EOF >> ~/$INDEXNAME-restore.sh
#!/bin/bash
echo -n "Restoring the snapshot..."
curl -s -XPOST "http://${HOSTNAME}:9200/_snapshot/s3backup/${INDEXNAME}/_restore?wait_for_completion=true"
echo "DONE!"
EOF
echo "DONE!" # restore script done
# push the restore script to s3
echo -n "Saving restore script to S3... "
$BACKUPCMD ~/$INDEXNAME-restore.sh $S3TARGET-restore.sh > /dev/null 2>&1
echo "DONE!"
# cleanup tmp files
echo -n "Cleaning up files..."
rm -rf ~/$INDEXNAME-restore.sh
echo "DONE!"

0
cron/backupSnapshot.py Normal file → Executable file
View file

Binary data
cron/broAlerts.conf Normal file

Binary file not shown.

View file

@@ -37,7 +37,7 @@ def initLogger():
sh.setFormatter(formatter)
logger.addHandler(sh)
def toUTC(suspectedDate,localTimeZone="US/Pacific"):
def toUTC(suspectedDate,localTimeZone="UTC"):
'''make a UTC date out of almost anything'''
utc=pytz.UTC
objDate=None
@@ -238,7 +238,7 @@ def main():
def initConfig():
#change this to your default zone for when it's not specified
options.defaultTimeZone=getConfig('defaulttimezone','US/Pacific',options.configfile)
options.defaultTimeZone=getConfig('defaulttimezone','UTC',options.configfile)
#msg queue settings
options.mqserver=getConfig('mqserver','localhost',options.configfile) #message queue server hostname
options.alertqueue=getConfig('alertqueue','mozdef.alert',options.configfile) #alert queue topic

Binary data
cron/bruteForcers.conf Normal file

Binary file not shown.

236
cron/bruteForcers.py Executable file
View file

@@ -0,0 +1,236 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
#
# Contributors:
# Jeff Bryner jbryner@mozilla.com
import sys
import json
import logging
import netaddr
import pika
import pytz
import pyes
from collections import Counter
from configlib import getConfig, OptionParser
from datetime import datetime
from datetime import timedelta
from dateutil.parser import parse
from logging.handlers import SysLogHandler
logger = logging.getLogger(sys.argv[0])
def loggerTimeStamp(self, record, datefmt=None):
return toUTC(datetime.now()).isoformat()
def initLogger():
logger.level = logging.INFO
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
formatter.formatTime = loggerTimeStamp
if options.output == 'syslog':
logger.addHandler(SysLogHandler(address=(options.sysloghostname, options.syslogport)))
else:
sh = logging.StreamHandler(sys.stderr)
sh.setFormatter(formatter)
logger.addHandler(sh)
def toUTC(suspectedDate, localTimeZone="UTC"):
'''make a UTC date out of almost anything'''
utc = pytz.UTC
objDate = None
if type(suspectedDate) == str:
objDate = parse(suspectedDate, fuzzy=True)
elif type(suspectedDate) == datetime:
objDate = suspectedDate
if objDate.tzinfo is None:
objDate = pytz.timezone(localTimeZone).localize(objDate)
objDate = utc.normalize(objDate)
else:
objDate = utc.normalize(objDate)
if objDate is not None:
objDate = utc.normalize(objDate)
return objDate
def flattenDict(dictIn):
sout = ''
for k, v in dictIn.iteritems():
sout += '{0}: {1} '.format(k, v)
return sout
def alertToMessageQueue(alertDict):
try:
connection = pika.BlockingConnection(pika.ConnectionParameters(host=options.mqserver))
channel = connection.channel()
# declare the exchanges
channel.exchange_declare(exchange=options.alertexchange, type='topic', durable=True)
# cherry pick items from the alertDict to send to the alerts messageQueue
mqAlert = dict(severity='INFO', category='')
if 'severity' in alertDict.keys():
mqAlert['severity'] = alertDict['severity']
if 'category' in alertDict.keys():
mqAlert['category'] = alertDict['category']
if 'utctimestamp' in alertDict.keys():
mqAlert['utctimestamp'] = alertDict['utctimestamp']
if 'eventtimestamp' in alertDict.keys():
mqAlert['eventtimestamp'] = alertDict['eventtimestamp']
mqAlert['summary'] = alertDict['summary']
channel.basic_publish(exchange=options.alertexchange, routing_key=options.alertqueue, body=json.dumps(mqAlert))
except Exception as e:
logger.error('Exception while sending alert to message queue: {0}'.format(e))
def alertToES(es, alertDict):
try:
res = es.index(index='alerts', doc_type='alert', doc=alertDict)
return(res)
except pyes.exceptions.NoServerAvailable:
logger.error('Elastic Search server could not be reached, check network connectivity')
def esSearch(es, begindateUTC=None, enddateUTC=None):
resultsList = list()
if begindateUTC is None:
begindateUTC = toUTC(datetime.now() - timedelta(minutes=15))
if enddateUTC is None:
enddateUTC = toUTC(datetime.now())
try:
# search for events within the date range that haven't already been alerted (i.e. given an alerttimestamp)
qDate = pyes.RangeQuery(qrange=pyes.ESRange('utctimestamp', from_value=begindateUTC, to_value=enddateUTC))
q = pyes.ConstantScoreQuery(pyes.MatchAllQuery())
qType = pyes.TermFilter('_type', 'event')
qSystems = pyes.TermFilter('eventsource','systemslogs')
qFail = pyes.QueryFilter(pyes.MatchQuery('summary','failed','phrase'))
qssh = pyes.TermFilter('program','sshd')
q = pyes.FilteredQuery(q,pyes.BoolFilter(
must=[qType,qSystems,qFail,qssh,qDate],
should=[
pyes.QueryFilter(pyes.MatchQuery('summary',
'login ldap_count_entries',
'boolean'))],
must_not=[
pyes.ExistsFilter('alerttimestamp'),
pyes.QueryFilter(pyes.MatchQuery('summary','10.22.8.128','phrase')),
pyes.QueryFilter(pyes.MatchQuery('summary','10.8.75.35','phrase')),
pyes.QueryFilter(pyes.MatchQuery('summary','208.118.237.','phrase'))
]))
results=es.search(q,indices='events')
# grab the results before iterating them to avoid pyes bug
rawresults=results._search_raw()
alerts=list()
ips=list()
# see if any of these failed attempts cross our threshold per source ip
for r in rawresults['hits']['hits'][:]:
if 'details' in r['_source'].keys() and 'sourceipaddress' in r['_source']['details']:
ips.append(r['_source']['details']['sourceipaddress'])
else:
#search for an ip'ish thing in the summary
for w in r['_source']['summary'].strip().split():
if netaddr.valid_ipv4(w.strip("'")) or netaddr.valid_ipv6(w.strip("'")):
ips.append(w.strip("'"))
for i in Counter(ips).most_common():
if i[1]>= options.threshold:
# create an alert dictionary
alertDict=dict(sourceiphits=i[1],
sourceipaddress=str(netaddr.IPAddress(i[0])),
events=[])
# add source events
for r in rawresults['hits']['hits']:
if 'details' in r['_source'].keys() and 'sourceipaddress' in r['_source']['details'] and r['_source']['details']['sourceipaddress'].encode('ascii', 'ignore') == i[0]:
alertDict['events'].append(r)
alerts.append(alertDict)
return alerts
except pyes.exceptions.NoServerAvailable:
logger.error('Elastic Search server could not be reached, check network connectivity')
def createAlerts(es, alerts):
'''given a list of dictionaries:
sourceiphits (int)
sourceipaddress (IP as a string)
events: a list of pyes results matching the alert
1) create a summary alert with details of the events
2) update the events with the alert timestamp and ID
'''
try:
if len(alerts) > 0:
for i in alerts:
alert = dict(utctimestamp=toUTC(datetime.now()).isoformat(), severity='NOTICE', summary='', category='bruteforce', tags=['ssh'], eventsource=[], events=[])
for e in i['events']:
alert['events'].append(
dict(documentindex=e['_index'],
documenttype=e['_type'],
documentsource=e['_source'],
documentid=e['_id']))
alert['severity'] = 'NOTICE'
alert['summary'] = ('{0} ssh bruteforce attempts by {1}'.format(i['sourceiphits'], i['sourceipaddress']))
for e in i['events'][:3]:
if 'details' in e['_source'] and 'hostname' in e['_source']['details']:
alert['summary'] += ' on {0}'.format(e['_source']['details']['hostname'])
logger.debug(alert)
# save alert to alerts index, update events index with alert ID for cross reference
alertResult = alertToES(es, alert)
# for each event in this list
# update with the alertid/index
# and update the alerttimestamp on the event itself so it's not re-alerted
for e in i['events']:
if 'alerts' not in e['_source'].keys():
e['_source']['alerts'] = []
e['_source']['alerts'].append(dict(index=alertResult['_index'], type=alertResult['_type'], id=alertResult['_id']))
e['_source']['alerttimestamp'] = toUTC(datetime.now()).isoformat()
es.update(e['_index'], e['_type'], e['_id'], document=e['_source'])
alertToMessageQueue(alert)
except ValueError as e:
logger.error("Exception %r when creating alerts " % e)
def main():
logger.debug('starting')
logger.debug(options)
es = pyes.ES((list('{0}'.format(s) for s in options.esservers)))
# see if we have matches.
alerts = esSearch(es)
createAlerts(es, alerts)
logger.debug('finished')
def initConfig():
# change this to your default zone for when it's not specified
options.defaultTimeZone = getConfig('defaulttimezone', 'UTC', options.configfile)
# msg queue settings
options.mqserver = getConfig('mqserver', 'localhost', options.configfile) # message queue server hostname
options.alertqueue = getConfig('alertqueue', 'mozdef.alert', options.configfile) # alert queue topic
options.alertexchange = getConfig('alertexchange', 'alerts', options.configfile) # alert queue exchange name
# logging settings
options.output = getConfig('output', 'stdout', options.configfile) # output our log to stdout or syslog
options.sysloghostname = getConfig('sysloghostname', 'localhost', options.configfile) # syslog hostname
options.syslogport = getConfig('syslogport', 514, options.configfile) # syslog port
# elastic search server settings
options.esservers = list(getConfig('esservers', 'http://localhost:9200', options.configfile).split(','))
# threshold settings
options.threshold = getConfig('threshold', 2, options.configfile)
if __name__ == '__main__':
parser = OptionParser()
parser.add_option("-c", dest='configfile', default=sys.argv[0].replace('.py', '.conf'), help="configuration file to use")
(options, args) = parser.parse_args()
initConfig()
initLogger()
main()
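The core of esSearch above is a simple threshold check: collect the source IP of every failed-ssh event, tally them with `collections.Counter`, and flag any IP whose count reaches `options.threshold` (default 2). A minimal sketch of that counting step, using made-up sample IPs rather than real log data:

```python
from collections import Counter

# Hypothetical source IPs extracted from failed-login events (sample data only)
ips = ['203.0.113.9', '203.0.113.9', '203.0.113.9', '198.51.100.4']
threshold = 2  # mirrors the script's default options.threshold

# most_common() yields (ip, hits) pairs sorted by descending count,
# just as the script iterates Counter(ips).most_common()
offenders = [ip for ip, hits in Counter(ips).most_common() if hits >= threshold]
print(offenders)  # ['203.0.113.9']
```

Each offender then becomes an alert dictionary carrying its hit count and matching events.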

12
cron/bruteForcers.sh Executable file
View file

@@ -0,0 +1,12 @@
#!/usr/bin/env bash
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Jeff Bryner jbryner@mozilla.com
source /home/mozdef/envs/mozdef/bin/activate
/home/mozdef/envs/mozdef/cron/bruteForcers.py -c /home/mozdef/envs/mozdef/cron/bruteForcers.conf

Binary data
cron/cloudTrailAlerts.conf

Binary file not shown.

View file

@@ -36,7 +36,7 @@ def initLogger():
sh.setFormatter(formatter)
logger.addHandler(sh)
def toUTC(suspectedDate,localTimeZone="US/Pacific"):
def toUTC(suspectedDate,localTimeZone="UTC"):
'''make a UTC date out of almost anything'''
utc=pytz.UTC
objDate=None
@@ -161,7 +161,7 @@ def main():
def initConfig():
#change this to your default zone for when it's not specified
options.defaultTimeZone=getConfig('defaulttimezone','US/Pacific',options.configfile)
options.defaultTimeZone=getConfig('defaulttimezone','UTC',options.configfile)
#msg queue settings
options.mqserver=getConfig('mqserver','localhost',options.configfile) #message queue server hostname
options.alertqueue=getConfig('alertqueue','mozdef.alert',options.configfile) #alert queue topic

Binary data
cron/cloudtrail2mozdef.conf

Binary file not shown.

View file

@@ -31,6 +31,9 @@ from dateutil.parser import parse
from datetime import date
import pytz
# This hack is in place while we wait for https://bugzilla.mozilla.org/show_bug.cgi?id=1216784 to be resolved
HACK=True
logger = logging.getLogger(sys.argv[0])
class RoleManager:
@@ -234,7 +237,7 @@ def toUTC(suspectedDate,localTimeZone=None):
def main():
logging.getLogger('boto').setLevel(logging.CRITICAL) # disable all boto error logging
logger.level=logging.INFO
logger.level=logging.ERROR
formatter = logging.Formatter('%(asctime)s %(name)s %(levelname)s %(message)s')
if options.output=='syslog':
@@ -271,6 +274,13 @@ def main():
ct = boto.cloudtrail.connect_to_region(region,
**ct_credentials)
trails=ct.describe_trails()['trailList']
except boto.exception.NoAuthHandlerFound as e:
# TODO Remove this hack once https://bugzilla.mozilla.org/show_bug.cgi?id=1216784 is complete
if HACK:
# logger.error("Working around missing permissions with a HACK")
trails=[{'S3BucketName':'mozilla-cloudtrail-logs'}]
else:
continue
except Exception as e:
logger.error("Unable to connect to cloudtrail %s in order to "
"enumerate CloudTrails in region %s due to %s" %
@@ -364,7 +374,7 @@ def initConfig():
options.output=getConfig('output','stdout',options.configfile) #output our log to stdout or syslog
options.sysloghostname=getConfig('sysloghostname','localhost',options.configfile) #syslog hostname
options.syslogport=getConfig('syslogport',514,options.configfile) #syslog port
options.defaultTimeZone=getConfig('defaulttimezone','US/Pacific',options.configfile)
options.defaultTimeZone=getConfig('defaulttimezone','UTC',options.configfile)
options.aws_access_key_id=getConfig('aws_access_key_id','',options.configfile) #aws credentials to use to connect to cloudtrail
options.aws_secret_access_key=getConfig('aws_secret_access_key','',options.configfile)
options.esservers=list(getConfig('esservers','http://localhost:9200',options.configfile).split(','))

View file

@@ -0,0 +1,263 @@
{
"177680776199": {
"ap-northeast-1": {
"lastrun": "2016-04-15T16:02:16.300848+00:00"
},
"ap-southeast-1": {
"lastrun": "2016-04-15T16:02:53.322680+00:00"
},
"ap-southeast-2": {
"lastrun": "2016-04-15T16:03:09.370790+00:00"
},
"eu-central-1": {
"lastrun": "2016-04-15T16:04:07.457735+00:00"
},
"eu-west-1": {
"lastrun": "2016-04-15T16:02:31.550038+00:00"
},
"sa-east-1": {
"lastrun": "2016-04-15T16:04:25.985469+00:00"
},
"us-east-1": {
"lastrun": "2016-04-15T16:01:26.380566+00:00"
},
"us-west-1": {
"lastrun": "2016-04-15T16:03:51.609651+00:00"
},
"us-west-2": {
"lastrun": "2016-04-15T16:03:25.306956+00:00"
}
},
"230871113385": {
"ap-northeast-1": {
"lastrun": "2016-04-15T16:05:37.773429+00:00"
},
"ap-southeast-1": {
"lastrun": "2016-04-15T16:06:15.614286+00:00"
},
"ap-southeast-2": {
"lastrun": "2016-04-15T16:06:32.095336+00:00"
},
"eu-central-1": {
"lastrun": "2016-04-15T16:07:34.597542+00:00"
},
"eu-west-1": {
"lastrun": "2016-04-15T16:05:55.380316+00:00"
},
"sa-east-1": {
"lastrun": "2016-04-15T16:07:50.469648+00:00"
},
"us-east-1": {
"lastrun": "2016-04-15T16:04:43.507322+00:00"
},
"us-west-1": {
"lastrun": "2016-04-15T16:07:18.532305+00:00"
},
"us-west-2": {
"lastrun": "2016-04-15T16:06:50.402308+00:00"
}
},
"236517346949": {
"ap-northeast-1": {
"lastrun": "2016-04-15T16:08:54.304191+00:00"
},
"ap-southeast-1": {
"lastrun": "2016-04-15T16:09:31.439196+00:00"
},
"ap-southeast-2": {
"lastrun": "2016-04-15T16:09:43.937902+00:00"
},
"eu-central-1": {
"lastrun": "2016-04-15T16:10:39.272162+00:00"
},
"eu-west-1": {
"lastrun": "2016-04-15T16:09:06.777115+00:00"
},
"sa-east-1": {
"lastrun": "2016-04-15T16:10:51.498164+00:00"
},
"us-east-1": {
"lastrun": "2016-04-15T16:08:07.111432+00:00"
},
"us-west-1": {
"lastrun": "2016-04-15T16:10:24.227553+00:00"
},
"us-west-2": {
"lastrun": "2016-04-15T16:09:56.898939+00:00"
}
},
"248062938574": {
"ap-northeast-1": {
"lastrun": "2016-04-15T16:11:05.668815+00:00"
},
"ap-southeast-1": {
"lastrun": "2016-04-15T16:11:07.664306+00:00"
},
"ap-southeast-2": {
"lastrun": "2016-04-15T16:11:09.088791+00:00"
},
"eu-central-1": {
"lastrun": "2016-04-15T16:11:12.367831+00:00"
},
"eu-west-1": {
"lastrun": "2016-04-15T16:11:07.002660+00:00"
},
"sa-east-1": {
"lastrun": "2016-04-15T16:11:13.564943+00:00"
},
"us-east-1": {
"lastrun": "2016-04-15T16:11:04.102873+00:00"
},
"us-west-1": {
"lastrun": "2016-04-15T16:11:11.042753+00:00"
},
"us-west-2": {
"lastrun": "2016-04-15T16:11:10.594555+00:00"
}
},
"330914478726": {
"ap-northeast-1": {
"lastrun": "2016-04-15T16:11:15.541730+00:00"
},
"ap-southeast-1": {
"lastrun": "2016-04-15T16:11:17.667754+00:00"
},
"ap-southeast-2": {
"lastrun": "2016-04-15T16:11:18.892453+00:00"
},
"eu-central-1": {
"lastrun": "2016-04-15T16:11:22.055186+00:00"
},
"eu-west-1": {
"lastrun": "2016-04-15T16:11:16.743135+00:00"
},
"sa-east-1": {
"lastrun": "2016-04-15T16:11:23.413032+00:00"
},
"us-east-1": {
"lastrun": "2016-04-15T16:11:14.262719+00:00"
},
"us-west-1": {
"lastrun": "2016-04-15T16:11:20.862854+00:00"
},
"us-west-2": {
"lastrun": "2016-04-15T16:11:20.181270+00:00"
}
},
"589768463761": {
"ap-northeast-1": {
"lastrun": "2016-04-15T16:12:15.510353+00:00"
},
"ap-southeast-1": {
"lastrun": "2016-04-15T16:12:40.895555+00:00"
},
"ap-southeast-2": {
"lastrun": "2016-04-15T16:12:51.249506+00:00"
},
"eu-central-1": {
"lastrun": "2016-04-15T16:13:30.979796+00:00"
},
"eu-west-1": {
"lastrun": "2016-04-15T16:12:24.907569+00:00"
},
"sa-east-1": {
"lastrun": "2016-04-15T16:13:41.086288+00:00"
},
"us-east-1": {
"lastrun": "2016-04-15T16:11:24.065524+00:00"
},
"us-west-1": {
"lastrun": "2016-04-15T16:13:22.597375+00:00"
},
"us-west-2": {
"lastrun": "2016-04-15T16:13:00.354636+00:00"
}
},
"604263479206": {
"ap-northeast-1": {
"lastrun": "2016-04-15T16:14:38.956960+00:00"
},
"ap-southeast-1": {
"lastrun": "2016-04-15T16:15:04.846670+00:00"
},
"ap-southeast-2": {
"lastrun": "2016-04-15T16:15:14.791686+00:00"
},
"eu-central-1": {
"lastrun": "2016-04-15T16:15:58.907842+00:00"
},
"eu-west-1": {
"lastrun": "2016-04-15T16:14:48.244702+00:00"
},
"sa-east-1": {
"lastrun": "2016-04-15T16:16:07.334392+00:00"
},
"us-east-1": {
"lastrun": "2016-04-15T16:13:50.875926+00:00"
},
"us-west-1": {
"lastrun": "2016-04-15T16:15:50.705749+00:00"
},
"us-west-2": {
"lastrun": "2016-04-15T16:15:24.610317+00:00"
}
},
"647505682097": {
"ap-northeast-1": {
"lastrun": "2016-04-15T16:17:05.471860+00:00"
},
"ap-southeast-1": {
"lastrun": "2016-04-15T16:17:32.265657+00:00"
},
"ap-southeast-2": {
"lastrun": "2016-04-15T16:17:47.501100+00:00"
},
"eu-central-1": {
"lastrun": "2016-04-15T16:18:39.457315+00:00"
},
"eu-west-1": {
"lastrun": "2016-04-15T16:17:20.328676+00:00"
},
"sa-east-1": {
"lastrun": "2016-04-15T16:18:51.433340+00:00"
},
"us-east-1": {
"lastrun": "2016-04-15T16:16:16.945074+00:00"
},
"us-west-1": {
"lastrun": "2016-04-15T16:18:27.028303+00:00"
},
"us-west-2": {
"lastrun": "2016-04-15T16:18:02.904107+00:00"
}
},
"656532927350": {
"ap-northeast-1": {
"lastrun": "2016-04-15T16:19:42.129129+00:00"
},
"ap-southeast-1": {
"lastrun": "2016-04-15T16:19:50.964854+00:00"
},
"ap-southeast-2": {
"lastrun": "2016-04-15T16:19:55.646822+00:00"
},
"eu-central-1": {
"lastrun": "2016-04-15T16:20:08.688100+00:00"
},
"eu-west-1": {
"lastrun": "2016-04-15T16:19:46.665865+00:00"
},
"sa-east-1": {
"lastrun": "2016-04-15T16:20:13.535822+00:00"
},
"us-east-1": {
"lastrun": "2016-04-15T16:19:06.169657+00:00"
},
"us-west-1": {
"lastrun": "2016-04-15T16:20:05.027017+00:00"
},
"us-west-2": {
"lastrun": "2016-04-15T16:20:00.457907+00:00"
}
}
}

Binary data
cron/collectAttackers.conf Normal file

Binary file not shown.

Binary data
cron/collectAttackers.dev.conf Normal file

Binary file not shown.

View file

@@ -417,8 +417,7 @@ def updateMongoWithESEvents(mozdefdb, results):
esrecord = dict(documentid=r['_id'],
documenttype=r['_type'],
documentindex=r['_index'],
documentsource=r['_source'],
read=False)
documentsource=r['_source'])
logger.debug('searching for ' + str(sourceIP))
#attacker = attackers.find_one({'events.details.sourceipaddress': str(sourceIP.ip)})

12
cron/collectAttackers.sh Executable file
View file

@@ -0,0 +1,12 @@
#!/usr/bin/env bash
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Jeff Bryner jbryner@mozilla.com
source /home/mozdef/envs/mozdef/bin/activate
/home/mozdef/envs/mozdef/cron/collectAttackers.py -c /home/mozdef/envs/mozdef/cron/collectAttackers.conf

Binary data
cron/collectSSHFingerprints.conf Normal file

Binary file not shown.

View file

@@ -47,7 +47,7 @@ def initLogger():
logger.addHandler(sh)
def toUTC(suspectedDate, localTimeZone="US/Pacific"):
def toUTC(suspectedDate, localTimeZone="UTC"):
'''make a UTC date out of almost anything'''
utc = pytz.UTC
objDate = None

13
cron/collectSSHFingerprints.sh Executable file
View file

@@ -0,0 +1,13 @@
#!/usr/bin/env bash
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Jeff Bryner jbryner@mozilla.com
source /home/mozdef/envs/mozdef/bin/activate
/home/mozdef/envs/mozdef/cron/collectSSHFingerprints.py -c /home/mozdef/envs/mozdef/cron/collectSSHFingerprints.conf

Binary data
cron/compromisedCreds2fxa.conf Normal file

Binary file not shown.

229
cron/compromisedCreds2fxa.py Executable file
View file

@@ -0,0 +1,229 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Jeff Bryner jbryner@mozilla.com
import os
import sys
import logging
import pytz
import requests
import json
import time
import re
from configlib import getConfig,OptionParser,setConfig
from datetime import datetime
from datetime import timedelta
from dateutil.parser import parse
from datetime import date
from logging.handlers import SysLogHandler
from pytx import init
from pytx import ThreatIndicator
import boto.sqs
from boto.sqs.message import RawMessage
from urllib2 import urlopen
from urllib import urlencode
logger = logging.getLogger(sys.argv[0])
logger.level=logging.INFO
formatter = logging.Formatter('%(asctime)s %(name)s %(levelname)s %(message)s')
def toUTC(suspectedDate,localTimeZone=None):
'''make a UTC date out of almost anything'''
utc=pytz.UTC
objDate=None
if localTimeZone is None:
localTimeZone=options.defaultTimeZone
if type(suspectedDate) in (str,unicode):
objDate=parse(suspectedDate,fuzzy=True)
elif type(suspectedDate)==datetime:
objDate=suspectedDate
if objDate.tzinfo is None:
objDate=pytz.timezone(localTimeZone).localize(objDate)
objDate=utc.normalize(objDate)
else:
objDate=utc.normalize(objDate)
if objDate is not None:
objDate=utc.normalize(objDate)
return objDate
def buildQuery(optionDict):
'''
Builds a query string based on the dict of options
'''
if optionDict['since'] is None or optionDict['until'] is None:
logger.error('"since" and "until" are both required')
raise Exception('You must specify both "since" and "until" values')
fields = ({
'access_token' : options.appid + '|' + options.appsecret,
'threat_type': 'COMPROMISED_CREDENTIAL',
'type' : 'EMAIL_ADDRESS',
'fields' : 'indicator,passwords',
'since' : optionDict['since'],
'until' : optionDict['until'],
})
return options.txserver + 'threat_indicators?' + urlencode(fields)
def executeQuery(url):
queryResults=[]
try:
response = urlopen(url).read()
except TypeError as e:
logger.error('Type error %r'%e)
return queryResults,None
except Exception as e:
lines = str(e.info()).split('\r\n')
msg = str(e)
for line in lines:
# get the exact error from the server
result = re.search('^WWW-Authenticate: .*\) (.*)\"$', line)
if result:
msg = result.groups()[0]
logger.error ('ERROR: %s\nReceived' % (msg))
return queryResults,None
try:
data = json.loads(response)
if 'data' in data.keys():
for d in data['data']:
queryResults.append(dict(email=d['indicator'],md5=d['passwords']))
if 'paging' in data:
nextURL=data['paging']['next']
else:
nextURL=None
return queryResults,nextURL
except Exception as e:
logger.error('ERROR: %r' % (e))
return queryResults,None
def sendToCustomsServer(queue, emailAddress=None):
try:
if emailAddress is not None:
# connect and send a message like:
# '{"Message": {"ban": {"email": "someone@somewhere.com"}}}'
# encoded like this:
# {"Message":"{\"ban\":{\"email\":\"someone@somewhere.com\"}}"}
banMessage = dict(Message=json.dumps(dict(ban=dict(email=emailAddress))))
m = RawMessage()
m.set_body(json.dumps(banMessage))
queue.write(m)
logger.info('Sent {0} to customs server'.format(emailAddress))
except Exception as e:
logger.error('Error while sending to customs server %s: %r' % (emailAddress, e))
def main():
if options.output=='syslog':
logger.addHandler(SysLogHandler(address=(options.sysloghostname,options.syslogport)))
else:
sh=logging.StreamHandler(sys.stderr)
sh.setFormatter(formatter)
logger.addHandler(sh)
logger.debug('started')
# set up the threat exchange secret
init(options.appid, options.appsecret)
# set up SQS
conn = boto.sqs.connect_to_region(options.region,
aws_access_key_id=options.aws_access_key_id,
aws_secret_access_key=options.aws_secret_access_key)
queue = conn.get_queue(options.aws_queue_name)
try:
# capture the time we start running so next time we catch any events
# created while we run.
lastrun=toUTC(datetime.now()).isoformat()
queryDict = {}
queryDict['since'] = options.lastrun.isoformat()
queryDict['until'] = datetime.utcnow().isoformat()
logger.debug('Querying {0}'.format(queryDict))
# we get results in pages
# so iterate through the pages
# and append to a list
nextURL=buildQuery(queryDict)
allResults=[]
while nextURL is not None:
results,nextURL=executeQuery(nextURL)
for r in results:
allResults.append(r)
# send the results to SQS
for r in allResults:
sendToCustomsServer(queue, r['email'])
# record the time we started as
# the start time for next time.
if len(allResults) > 0:
setConfig('lastrun',lastrun,options.configfile)
except Exception as e:
logger.error("Unhandled exception, terminating: %r"%e)
logger.debug('finished')
def initConfig():
options.output=getConfig('output','stdout',options.configfile) #output our log to stdout or syslog
options.sysloghostname=getConfig('sysloghostname','localhost',options.configfile) #syslog hostname
options.syslogport=getConfig('syslogport',514,options.configfile) #syslog port
options.defaultTimeZone=getConfig('defaulttimezone','UTC',options.configfile) #default timezone
options.mozdefurl = getConfig('url', 'http://localhost:8080/events', options.configfile) #mozdef event input url to post to
options.lastrun=toUTC(getConfig('lastrun',toUTC(datetime.now()-timedelta(hours=24)),options.configfile))
options.recordlimit = getConfig('recordlimit', 1000, options.configfile) #max number of records to request
# threat exchange options
options.appid = getConfig('appid',
'',
options.configfile)
options.appsecret=getConfig('appsecret',
'',
options.configfile)
options.txserver = getConfig('txserver',
'https://graph.facebook.com/',
options.configfile)
# boto options
options.region = getConfig('region',
'us-west-2',
options.configfile)
options.aws_access_key_id=getConfig('aws_access_key_id',
'',
options.configfile)
options.aws_secret_access_key=getConfig('aws_secret_access_key',
'',
options.configfile)
options.aws_queue_name=getConfig('aws_queue_name',
'',
options.configfile)
if __name__ == '__main__':
parser=OptionParser()
parser.add_option("-c", dest='configfile' , default=sys.argv[0].replace('.py', '.conf'), help="configuration file to use")
(options,args) = parser.parse_args()
initConfig()
main()
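The comments inside sendToCustomsServer above describe a double-encoded message: the ban dict is JSON-encoded into a string, then wrapped in an outer `Message` envelope that is JSON-encoded again before being written to SQS. A small sketch of that encoding and of how a consumer would peel both layers back (the address is the same placeholder the source comments use):

```python
import json

# Build the double-encoded ban message described in sendToCustomsServer
emailAddress = 'someone@somewhere.com'
banMessage = dict(Message=json.dumps(dict(ban=dict(email=emailAddress))))
body = json.dumps(banMessage)

# A consumer reverses both layers: outer envelope first, then the inner payload
inner = json.loads(json.loads(body)['Message'])
print(inner['ban']['email'])  # someone@somewhere.com
```

The inner `json.dumps` is what produces the escaped-quote form shown in the source comment, `{"Message":"{\"ban\":{\"email\":\"someone@somewhere.com\"}}"}`.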

12
cron/compromisedCreds2fxa.sh Executable file
View file

@@ -0,0 +1,12 @@
#!/usr/bin/env bash
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Jeff Bryner jbryner@mozilla.com
source /home/mozdef/envs/mozdef/bin/activate
/home/mozdef/envs/mozdef/cron/compromisedCreds2fxa.py

Binary data
cron/correlateUserMacAddress.conf Normal file

Binary file not shown.

13
cron/correlateUserMacAddress.sh Executable file
View file

@@ -0,0 +1,13 @@
#!/usr/bin/env bash
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Jeff Bryner jbryner@mozilla.com
source /home/mozdef/envs/mozdef/bin/activate
/home/mozdef/envs/mozdef/cron/correlateUserMacAddress.py -c /home/mozdef/envs/mozdef/cron/correlateUserMacAddress.conf

Binary data
cron/createIPBlockList.conf Normal file

Binary file not shown.

View file

@@ -8,6 +8,8 @@
# Contributors:
# Jeff Bryner jbryner@mozilla.com
import boto
import boto.s3
import calendar
import logging
import pyes
@@ -15,6 +17,7 @@ import pytz
import random
import netaddr
import sys
from boto.s3.key import Key
from bson.son import SON
from datetime import datetime
from datetime import timedelta
@@ -149,6 +152,30 @@ def initConfig():
# Max IPs to emit
options.iplimit = getConfig('iplimit', 1000, options.configfile)
# AWS creds
options.aws_access_key_id=getConfig('aws_access_key_id','',options.configfile) #aws credentials to use to connect to mozilla_infosec_blocklist
options.aws_secret_access_key=getConfig('aws_secret_access_key','',options.configfile)
def s3_upload_file(file_path, bucket_name, key_name):
"""
Upload a file to the given s3 bucket and return a template url.
"""
conn = boto.connect_s3(aws_access_key_id=options.aws_access_key_id,aws_secret_access_key=options.aws_secret_access_key)
try:
bucket = conn.get_bucket(bucket_name, validate=False)
except boto.exception.S3ResponseError as e:
conn.create_bucket(bucket_name)
bucket = conn.get_bucket(bucket_name, validate=False)
key = boto.s3.key.Key(bucket)
key.key = key_name
key.set_contents_from_filename(file_path)
key.set_acl('public-read')
url = "https://s3.amazonaws.com/{}/{}".format(bucket.name, key.name)
print( "URL: {}".format(url))
return url
if __name__ == '__main__':
parser = OptionParser()
@@ -161,3 +188,4 @@ if __name__ == '__main__':
initConfig()
initLogger()
main()
s3_upload_file('/Users/aliciasmith/python/blocklist/static/qaipblocklist.txt', 'mozilla_infosec_blocklist','qaipblocklist')

13
cron/createIPBlockList.sh Executable file
View file

@@ -0,0 +1,13 @@
#!/usr/bin/env bash
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Jeff Bryner jbryner@mozilla.com
source /home/mozdef/envs/mozdef/bin/activate
/home/mozdef/envs/mozdef/cron/createIPBlockList.py -c /home/mozdef/envs/mozdef/cron/createIPBlockList.conf

Binary data
cron/debug.conf Normal file

Binary file not shown.

View file

@@ -0,0 +1,23 @@
{
"template" : "*",
"mappings" : {
"_default_" : {
"dynamic_templates" : [
{
"string_fields" : {
"mapping" : {
"index" : "not_analyzed",
"type" : "string",
"doc_values": true
},
"match_mapping_type" : "string",
"match" : "*"
}
}
],
"_all" : {
"enabled" : true
}
}
}
}

View file

@@ -1,125 +0,0 @@
{
"defaulttemplate" : {
"order" : 0,
"template" : "*",
"settings" : { },
"mappings" : {
"_default_" : {
"dynamic_templates" : [ {
"string_fields" : {
"mapping" : {
"index" : "not_analyzed",
"doc_values" : true,
"type" : "string"
},
"match_mapping_type" : "string",
"match" : "*"
}
}, {
"float_fields" : {
"mapping" : {
"index" : "not_analyzed",
"doc_values" : true,
"type" : "string"
},
"match" : "*",
"match_mapping_type" : "float"
}
}, {
"double_fields" : {
"mapping" : {
"index" : "not_analyzed",
"doc_values" : true,
"type" : "string"
},
"match_mapping_type" : "double",
"match" : "*"
}
}, {
"byte_fields" : {
"mapping" : {
"index" : "not_analyzed",
"doc_values" : true,
"type" : "string"
},
"match_mapping_type" : "byte",
"match" : "*"
}
}, {
"short_fields" : {
"mapping" : {
"index" : "not_analyzed",
"doc_values" : true,
"type" : "string"
},
"match_mapping_type" : "short",
"match" : "*"
}
}, {
"integer_fields" : {
"mapping" : {
"index" : "not_analyzed",
"doc_values" : true,
"type" : "string"
},
"match_mapping_type" : "integer",
"match" : "*"
}
}, {
"long_fields" : {
"mapping" : {
"index" : "not_analyzed",
"doc_values" : true,
"type" : "string"
},
"match_mapping_type" : "long",
"match" : "*"
}
} ],
"properties" : {
"summary" : {
"type" : "string"
},
"details" : {
"properties" : {
"destinationport" : {
"index" : "not_analyzed",
"type" : "long"
},
"sourceipaddress" : {
"type" : "ip"
},
"sourceipv4address" : {
"type" : "string"
},
"srcip" : {
"type" : "ip"
},
"destinationipaddress" : {
"type" : "ip"
},
"success" : {
"type" : "boolean"
},
"sourceport" : {
"index" : "not_analyzed",
"type" : "long"
}
}
},
"receivedtimestamp" : {
"format" : "dateOptionalTime",
"type" : "date"
},
"utctimestamp" : {
"format" : "dateOptionalTime",
"type" : "date"
}
},
"_all" : {
"enabled" : true
}
}
},
"aliases" : { }
}

Binary data
cron/esCacheMaint.conf Normal file

Binary file not shown.

View file

@@ -152,7 +152,7 @@ def main():
def initConfig():
# change this to your default zone for when it's not specified
options.defaultTimeZone = getConfig('defaulttimezone', 'US/Pacific', options.configfile)
options.defaultTimeZone = getConfig('defaulttimezone', 'UTC', options.configfile)
# elastic search options.
options.esservers = list(getConfig('esservers', 'http://localhost:9200', options.configfile).split(','))

Binary data
cron/eventStats.conf Normal file

Binary file not shown.

Binary data
cron/eventStatsAlerts.conf Normal file

Binary file not shown.

320
cron/events_template.json Normal file
View file

@@ -0,0 +1,320 @@
{
"template":"events*",
"mappings":{
"event":{
"_ttl" : { "enabled" : false },
"properties":{
"category":{
"index":"not_analyzed",
"type":"string"
},
"details":{
"properties":{
"destinationipaddress":{
"type":"ip"
},
"destinationport":{
"type":"string"
},
"dn":{
"type":"string"
},
"hostname":{
"type" : "multi_field",
"fields" : {
"hostname": {"type": "string"},
"raw" : {"type" : "string", "index" : "not_analyzed"}
}
},
"email" : {
"type" : "string",
"index" : "not_analyzed"
},
"msg":{
"type":"string"
},
"note":{
"type":"string"
},
"processid":{
"type":"string"
},
"program":{
"type":"string",
"index" : "not_analyzed"
},
"protocol":{
"type":"string"
},
"result":{
"type":"string"
},
"source":{
"type":"string"
},
"sourceipaddress":{
"type":"ip"
},
"sourceipgeolocation":{
"properties":{
"country_name": {
"type": "string",
"index" : "not_analyzed"
}
}
},
"sourceport":{
"type":"string"
},
"srcip":{
"type":"ip"
},
"sub":{
"type":"string"
},
"success":{
"type":"boolean"
},
"timestamp":{
"type":"string"
},
"ts":{
"type":"string"
},
"uid":{
"type":"string"
}
}
},
"eventsource":{
"type":"string"
},
"hostname":{
"type":"string"
},
"processid":{
"type":"string"
},
"receivedtimestamp":{
"type":"date",
"format":"dateOptionalTime"
},
"severity":{
"type":"string"
},
"summary":{
"type":"string"
},
"tags":{
"index":"not_analyzed",
"type":"string"
},
"timestamp":{
"type":"date",
"format":"dateOptionalTime"
},
"utctimestamp":{
"type":"date",
"format":"dateOptionalTime"
}
}
},
"auditd":{
"_ttl" : { "enabled" : true },
"properties":{
"category":{
"index":"not_analyzed",
"type":"string"
},
"details":{
"properties":{
"dhost":{
"type" : "multi_field",
"fields" : {
"dhost": {"type": "string"},
"raw" : {"type" : "string", "index" : "not_analyzed"}
}
},
"auid":{
"type": "string",
"index":"not_analyzed"
},
"deviceversion":{
"type": "string",
"index":"not_analyzed"
},
"duid":{
"type": "string",
"index":"not_analyzed"
},
"egid":{
"type": "string",
"index":"not_analyzed"
},
"euid":{
"type": "string",
"index":"not_analyzed"
},
"fsgid":{
"type": "string",
"index":"not_analyzed"
},
"fsuid":{
"type": "string",
"index":"not_analyzed"
},
"gid":{
"type": "string",
"index":"not_analyzed"
},
"ses":{
"type": "long"
},
"severity":{
"type": "string",
"index":"not_analyzed"
},
"sgid":{
"type": "string",
"index":"not_analyzed"
},
"suid":{
"type": "string",
"index":"not_analyzed"
},
"version":{
"type": "string",
"index":"not_analyzed"
},
"ogid": {
"type": "string",
"index":"not_analyzed"
},
"ouid": {
"type": "string",
"index":"not_analyzed"
},
"uid": {
"type": "string",
"index":"not_analyzed"
},
"pid": {
"type": "string",
"index":"not_analyzed"
}
}
},
"receivedtimestamp":{
"type":"date",
"format":"dateOptionalTime"
},
"severity":{
"type":"string"
},
"summary":{
"type":"string"
},
"tags":{
"index":"not_analyzed",
"type":"string"
},
"timestamp":{
"type":"date",
"format":"dateOptionalTime"
},
"utctimestamp":{
"type":"date",
"format":"dateOptionalTime"
}
}
},
"netflow":{
"_ttl" : { "enabled" : false },
"properties":{
"category":{
"index":"not_analyzed",
"type":"string"
},
"details":{
"properties":{
"hostname":{
"type" : "multi_field",
"fields" : {
"hostname": {"type": "string"},
"raw" : {"type" : "string", "index" : "not_analyzed"}
}
},
"destinationport":{
"type":"long",
"index":"not_analyzed"
},
"sourceport":{
"type":"long",
"index":"not_analyzed"
}
}
}
}
},
"bro": {
"_ttl" : { "enabled" : false },
"properties":{
"category":{
"index":"not_analyzed",
"type":"string"
},
"hostname":{
"index":"not_analyzed",
"type":"string"
},
"details":{
"properties":{
"sources" : {
"type" : "string",
"index":"not_analyzed"
},
"seenwhere" : {
"type" : "string",
"index":"not_analyzed"
},
"seenindicatortype" : {
"type" : "string",
"index":"not_analyzed"
},
"note" : {
"type" : "string",
"index":"not_analyzed"
},
"signature" : {
"type" : "string",
"index":"not_analyzed"
},
"payload_printable" : {
"type" : "string",
"index":"not_analyzed"
},
"payload" : {
"type" : "string",
"index":"not_analyzed"
},
"packet" : {
"type" : "string",
"index":"not_analyzed"
},
"peer_descr" : {
"type" : "string",
"index":"not_analyzed"
},
"hostname":{
"index":"not_analyzed",
"type":"string"
}
}
}
}
},
"_default_": {
"_ttl" : { "enabled" : false }
}
}
}
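The template above would be registered with an Elasticsearch 1.x cluster through the `_template` API. A minimal sketch of that call using only the Python standard library; the host, template name, and the trimmed template body are illustrative assumptions, and the request is built but not sent here:

```python
import json
import urllib.request

def build_template_request(host, name, template):
    """Build (but do not send) the PUT request that registers an index
    template under /_template/<name>; the caller opens it with urlopen()."""
    body = json.dumps(template).encode("utf-8")
    return urllib.request.Request(
        url="{0}/_template/{1}".format(host, name),
        data=body,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

# Illustrative call: a trimmed body standing in for the full template above.
req = build_template_request(
    "http://localhost:9200", "eventstemplate",
    {"template": "events*", "mappings": {}})
# Sending it would be: urllib.request.urlopen(req)
```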

Binary data
cron/fail2banAlerts.conf Normal file

Binary file not shown.

203
cron/fail2banAlerts.py Executable file

@@ -0,0 +1,203 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
#
# Contributors:
# Jeff Bryner jbryner@mozilla.com
import sys
import json
import logging
import netaddr
import pika
import pytz
import pyes
from collections import Counter
from configlib import getConfig, OptionParser
from datetime import datetime
from datetime import timedelta
from dateutil.parser import parse
from logging.handlers import SysLogHandler
logger = logging.getLogger(sys.argv[0])
def loggerTimeStamp(self, record, datefmt=None):
return toUTC(datetime.now()).isoformat()
def initLogger():
logger.level = logging.INFO
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
formatter.formatTime = loggerTimeStamp
if options.output == 'syslog':
logger.addHandler(SysLogHandler(address=(options.sysloghostname, options.syslogport)))
else:
sh = logging.StreamHandler(sys.stderr)
sh.setFormatter(formatter)
logger.addHandler(sh)
def toUTC(suspectedDate, localTimeZone="UTC"):
'''make a UTC date out of almost anything'''
utc = pytz.UTC
objDate = None
if type(suspectedDate) == str:
objDate = parse(suspectedDate, fuzzy=True)
elif type(suspectedDate) == datetime:
objDate = suspectedDate
if objDate.tzinfo is None:
objDate = pytz.timezone(localTimeZone).localize(objDate)
objDate = utc.normalize(objDate)
else:
objDate = utc.normalize(objDate)
if objDate is not None:
objDate = utc.normalize(objDate)
return objDate
def flattenDict(dictIn):
sout = ''
for k, v in dictIn.iteritems():
sout += '{0}: {1} '.format(k, v)
return sout
def alertToMessageQueue(alertDict):
try:
connection = pika.BlockingConnection(pika.ConnectionParameters(host=options.mqserver))
channel = connection.channel()
# declare the exchanges
channel.exchange_declare(exchange=options.alertexchange, type='topic', durable=True)
# cherry pick items from the alertDict to send to the alerts messageQueue
mqAlert = dict(severity='INFO', category='')
if 'severity' in alertDict.keys():
mqAlert['severity'] = alertDict['severity']
if 'category' in alertDict.keys():
mqAlert['category'] = alertDict['category']
if 'utctimestamp' in alertDict.keys():
mqAlert['utctimestamp'] = alertDict['utctimestamp']
if 'eventtimestamp' in alertDict.keys():
mqAlert['eventtimestamp'] = alertDict['eventtimestamp']
mqAlert['summary'] = alertDict['summary']
channel.basic_publish(exchange=options.alertexchange, routing_key=options.alertqueue, body=json.dumps(mqAlert))
except Exception as e:
logger.error('Exception while sending alert to message queue: {0}'.format(e))
def alertToES(es, alertDict):
try:
res = es.index(index='alerts', doc_type='alert', doc=alertDict)
return(res)
except pyes.exceptions.NoServerAvailable:
logger.error('Elastic Search server could not be reached, check network connectivity')
def esSearch(es, begindateUTC=None, enddateUTC=None):
resultsList = list()
if begindateUTC is None:
begindateUTC = toUTC(datetime.now() - timedelta(minutes=1))
if enddateUTC is None:
enddateUTC = toUTC(datetime.now())
try:
# search for events within the date range that haven't already been alerted (i.e. given an alerttimestamp)
qDate = pyes.RangeQuery(qrange=pyes.ESRange('utctimestamp', from_value=begindateUTC, to_value=enddateUTC))
q = pyes.ConstantScoreQuery(pyes.MatchAllQuery())
qType = pyes.TermFilter('_type', 'event')
qFail2Ban = pyes.TermFilter('program','fail2ban')
q = pyes.FilteredQuery(
q,pyes.BoolFilter(must=[qType,
qFail2Ban,
qDate,
pyes.QueryFilter(pyes.MatchQuery("summary","banned for","phrase"))]))
results=es.search(q,indices='events')
# grab the results before iterating them to avoid pyes bug
rawresults=results._search_raw()
alerts=list()
for r in rawresults['hits']['hits'][:]:
alertDict = dict(category='fail2ban',
summary='{0}: {1}'.format(r['_source']['details']['hostname'], r['_source']['summary'].strip()),
events=[r])
alerts.append(alertDict)
return alerts
except pyes.exceptions.NoServerAvailable:
logger.error('Elastic Search server could not be reached, check network connectivity')
def createAlerts(es, alerts):
'''given a list of dictionaries:
category (string)
summary (string)
events (list of pyes results)
1) create a summary alert with detail of the events
'''
try:
if len(alerts) > 0:
for i in alerts:
alert = dict(utctimestamp=toUTC(datetime.now()).isoformat(), severity='NOTICE', summary='', category='fail2ban', tags=['fail2ban'], eventsource=[], events=[])
for e in i['events']:
alert['events'].append(
dict(documentindex=e['_index'],
documenttype=e['_type'],
documentsource=e['_source'],
documentid=e['_id']))
alert['summary'] = i['summary']
logger.debug(alert)
# save alert to alerts index, update events index with alert ID for cross reference
alertResult = alertToES(es, alert)
# for each event in this list
# update with the alertid/index
# and update the alerttimestamp on the event itself so it's not re-alerted
for e in i['events']:
if 'alerts' not in e['_source'].keys():
e['_source']['alerts'] = []
e['_source']['alerts'].append(dict(index=alertResult['_index'], type=alertResult['_type'], id=alertResult['_id']))
e['_source']['alerttimestamp'] = toUTC(datetime.now()).isoformat()
es.update(e['_index'], e['_type'], e['_id'], document=e['_source'])
alertToMessageQueue(alert)
except ValueError as e:
logger.error("Exception %r when creating alerts " % e)
def main():
logger.debug('starting')
logger.debug(options)
es = pyes.ES((list('{0}'.format(s) for s in options.esservers)))
# see if we have matches.
alerts = esSearch(es)
createAlerts(es, alerts)
logger.debug('finished')
def initConfig():
# change this to your default zone for when it's not specified
options.defaultTimeZone = getConfig('defaulttimezone', 'UTC', options.configfile)
# msg queue settings
options.mqserver = getConfig('mqserver', 'localhost', options.configfile) # message queue server hostname
options.alertqueue = getConfig('alertqueue', 'mozdef.alert', options.configfile) # alert queue topic
options.alertexchange = getConfig('alertexchange', 'alerts', options.configfile) # alert queue exchange name
# logging settings
options.output = getConfig('output', 'stdout', options.configfile) # output our log to stdout or syslog
options.sysloghostname = getConfig('sysloghostname', 'localhost', options.configfile) # syslog hostname
options.syslogport = getConfig('syslogport', 514, options.configfile) # syslog port
# elastic search server settings
options.esservers = list(getConfig('esservers', 'http://localhost:9200', options.configfile).split(','))
# threshold settings
options.threshold = getConfig('threshold', 2, options.configfile)
if __name__ == '__main__':
parser = OptionParser()
parser.add_option("-c", dest='configfile', default=sys.argv[0].replace('.py', '.conf'), help="configuration file to use")
(options, args) = parser.parse_args()
initConfig()
initLogger()
main()
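The `toUTC` helper above relies on pytz and dateutil (as was common for Python 2). On Python 3 the same normalization can be sketched with the standard library alone; this is a simplified assumption that inputs are datetimes or ISO-8601 strings, not the fuzzy parsing dateutil provides:

```python
from datetime import datetime, timezone

def to_utc(suspected, local_tz=timezone.utc):
    """Normalize a datetime or ISO-8601 string to an aware UTC datetime,
    treating naive values as being in local_tz (mirrors toUTC above)."""
    if isinstance(suspected, str):
        suspected = datetime.fromisoformat(suspected)
    if suspected.tzinfo is None:
        suspected = suspected.replace(tzinfo=local_tz)
    return suspected.astimezone(timezone.utc)
```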

13
cron/fail2banAlerts.sh Executable file

@@ -0,0 +1,13 @@
#!/usr/bin/env bash
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# Copyright (c) 2014 Mozilla Corporation
#
# Contributors:
# Jeff Bryner jbryner@mozilla.com
source /home/mozdef/envs/mozdef/bin/activate
/home/mozdef/envs/mozdef/cron/fail2banAlerts.py -c /home/mozdef/envs/mozdef/cron/fail2banAlerts.conf

Binary data
cron/fxaAccountCreateAlerts.conf Normal file

Binary file not shown.

239
cron/fxaAccountCreateAlerts.py Executable file

@@ -0,0 +1,239 @@
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
#
# Contributors:
# Jeff Bryner jbryner@mozilla.com
import sys
import json
import logging
import netaddr
import pika
import pytz
import pyes
from collections import Counter
from configlib import getConfig, OptionParser
from datetime import datetime
from datetime import timedelta
from dateutil.parser import parse
from logging.handlers import SysLogHandler
logger = logging.getLogger(sys.argv[0])
def loggerTimeStamp(self, record, datefmt=None):
return toUTC(datetime.now()).isoformat()
def initLogger():
logger.level = logging.INFO
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
formatter.formatTime = loggerTimeStamp
if options.output == 'syslog':
logger.addHandler(SysLogHandler(address=(options.sysloghostname, options.syslogport)))
else:
sh = logging.StreamHandler(sys.stderr)
sh.setFormatter(formatter)
logger.addHandler(sh)
def toUTC(suspectedDate, localTimeZone="UTC"):
'''make a UTC date out of almost anything'''
utc = pytz.UTC
objDate = None
if type(suspectedDate) == str:
objDate = parse(suspectedDate, fuzzy=True)
elif type(suspectedDate) == datetime:
objDate = suspectedDate
if objDate.tzinfo is None:
objDate = pytz.timezone(localTimeZone).localize(objDate)
objDate = utc.normalize(objDate)
else:
objDate = utc.normalize(objDate)
if objDate is not None:
objDate = utc.normalize(objDate)
return objDate
def flattenDict(dictIn):
sout = ''
for k, v in dictIn.iteritems():
sout += '{0}: {1} '.format(k, v)
return sout
def alertToMessageQueue(alertDict):
try:
connection = pika.BlockingConnection(pika.ConnectionParameters(host=options.mqserver))
channel = connection.channel()
# declare the exchanges
channel.exchange_declare(exchange=options.alertexchange, type='topic', durable=True)
# cherry pick items from the alertDict to send to the alerts messageQueue
mqAlert = dict(severity='INFO', category='')
if 'severity' in alertDict.keys():
mqAlert['severity'] = alertDict['severity']
if 'category' in alertDict.keys():
mqAlert['category'] = alertDict['category']
if 'utctimestamp' in alertDict.keys():
mqAlert['utctimestamp'] = alertDict['utctimestamp']
if 'eventtimestamp' in alertDict.keys():
mqAlert['eventtimestamp'] = alertDict['eventtimestamp']
mqAlert['summary'] = alertDict['summary']
channel.basic_publish(exchange=options.alertexchange, routing_key=options.alertqueue, body=json.dumps(mqAlert))
except Exception as e:
logger.error('Exception while sending alert to message queue: {0}'.format(e))
def alertToES(es, alertDict):
try:
res = es.index(index='alerts', doc_type='alert', doc=alertDict)
return(res)
except pyes.exceptions.NoServerAvailable:
logger.error('Elastic Search server could not be reached, check network connectivity')
def esSearch(es, begindateUTC=None, enddateUTC=None):
resultsList = list()
if begindateUTC is None:
begindateUTC = toUTC(datetime.now() - timedelta(minutes=15))
if enddateUTC is None:
enddateUTC = toUTC(datetime.now())
try:
# search for events within the date range that haven't already been alerted (i.e. given an alerttimestamp)
qDate = pyes.RangeQuery(qrange=pyes.ESRange('utctimestamp', from_value=begindateUTC, to_value=enddateUTC))
qType = pyes.TermFilter('_type', 'event')
qFxa = pyes.TermFilter('tags', "firefoxaccounts")
qAlerted = pyes.ExistsFilter('alerttimestamp')
qMozTest = pyes.QueryFilter(pyes.WildcardQuery(field="details.email",value='*restmail.net'))
q = pyes.ConstantScoreQuery(pyes.MatchAllQuery())
qPath = pyes.QueryFilter(pyes.MatchQuery('details.path','/v1/account/create','phrase'))
q = pyes.FilteredQuery(q,pyes.BoolFilter(must=[qDate,qFxa,qPath],must_not=[qAlerted, qMozTest]))
q2=q.search()
q2.facet.add_term_facet('details.sourceipv4address',size=100)
results=es.search(q2,size=1000,indices='events,events-previous')
# grab the results before iterating them to avoid pyes bug
rawresults=results._search_raw()
alerts=list()
for hit in rawresults.facets['details.sourceipv4address'].terms:
if hit['count']>= options.threshold:
hit['emails']=list()
hit['events']=list()
hit['sourceipgeolocation']=''
for r in rawresults['hits']['hits']:
if 'sourceipv4address' in r['_source']['details'] and r['_source']['details']['sourceipv4address']==str(netaddr.IPAddress(hit['term'])):
if 'sourceipgeolocation' in r['_source']['details']:
hit['sourceipgeolocation']=r['_source']['details']['sourceipgeolocation']
if r['_source']['details']['email'].lower() not in hit['emails']:
hit['emails'].append(r['_source']['details']['email'].lower())
hit['events'].append(
dict(documentid=r['_id'],
documenttype=r['_type'],
documentindex=r['_index'],
documentsource=r['_source'])
)
if len(hit['emails'])>= options.threshold:
# create an alert dictionary
alertDict=dict(sourceiphits=hit['count'],
emailcount=len(hit['emails']),
sourceipv4address=str(netaddr.IPAddress(hit['term'])),
emails=hit['emails'],
events=hit['events'],
sourceipgeolocation=hit['sourceipgeolocation'])
alerts.append(alertDict)
return alerts
except pyes.exceptions.NoServerAvailable:
logger.error('Elastic Search server could not be reached, check network connectivity')
def createAlerts(es, alerts):
'''given a list of dictionaries:
sourceiphits (int)
emailcount (int)
sourceipv4address (ip as a string)
sourceipgeolocation (dictionary of geoIP result)
['city', 'region_code', 'area_code', 'time_zone', 'dma_code', 'metro_code', 'country_code3', 'latitude', 'postal_code', 'longitude', 'country_code', 'country_name', 'continent']
emails (list of email addresses)
events (list of dictionaries of ['documentindex', 'documentid', 'documenttype','documentsource] )
1) create a summary alert with detail of the events
2) update the events with an alert timestamp so they are not included in further alerts
'''
try:
if len(alerts) > 0:
for i in alerts:
alert = dict(utctimestamp=toUTC(datetime.now()).isoformat(), severity='NOTICE', summary='', category='fxa', tags=['fxa'], eventsource=[], events=[])
for e in i['events']:
alert['events'].append(dict(documentindex=e['documentindex'], documenttype=e['documenttype'], documentid=e['documentid'], documentsource=e['documentsource']))
alert['severity'] = 'NOTICE'
alert['summary'] = ('{0} accounts {1} created by {2} '.format(i['emailcount'], i['emails'], i['sourceipv4address']))
for e in i['events']:
# append the relevant events in text format to avoid errant ES issues.
# should be able to just set eventsource to i['events'] but different versions of ES 1.0 complain
alert['eventsource'].append(flattenDict(e))
# alert['eventsource']=i['events']
logger.debug(alert['summary'])
logger.debug(alert['events'])
logger.debug(alert)
# save alert to alerts index, update events index with alert ID for cross reference
alertResult = alertToES(es, alert)
# logger.debug(alertResult)
# for each event in this list of indicatorCounts
# update with the alertid/index
# and update the alerttimestamp on the event itself so it's not re-alerted
for e in i['events']:
if 'alerts' not in e['documentsource'].keys():
e['documentsource']['alerts'] = []
e['documentsource']['alerts'].append(
dict(index=alertResult['_index'],
type=alertResult['_type'],
id=alertResult['_id']))
e['documentsource']['alerttimestamp'] = toUTC(datetime.now()).isoformat()
es.update(e['documentindex'], e['documenttype'], e['documentid'], document=e['documentsource'])
alertToMessageQueue(alert)
except ValueError as e:
logger.error("Exception %r when creating alerts " % e)
def main():
logger.debug('starting')
logger.debug(options)
es = pyes.ES((list('{0}'.format(s) for s in options.esservers)))
# see if we have matches.
alerts = esSearch(es)
createAlerts(es, alerts)
logger.debug('finished')
def initConfig():
# change this to your default zone for when it's not specified
options.defaultTimeZone = getConfig('defaulttimezone', 'UTC', options.configfile)
# msg queue settings
options.mqserver = getConfig('mqserver', 'localhost', options.configfile) # message queue server hostname
options.alertqueue = getConfig('alertqueue', 'mozdef.alert', options.configfile) # alert queue topic
options.alertexchange = getConfig('alertexchange', 'alerts', options.configfile) # alert queue exchange name
# logging settings
options.output = getConfig('output', 'stdout', options.configfile) # output our log to stdout or syslog
options.sysloghostname = getConfig('sysloghostname', 'localhost', options.configfile) # syslog hostname
options.syslogport = getConfig('syslogport', 514, options.configfile) # syslog port
# elastic search server settings
options.esservers = list(getConfig('esservers', 'http://localhost:9200', options.configfile).split(','))
# threshold settings
options.threshold = getConfig('threshold', 2, options.configfile)
if __name__ == '__main__':
parser = OptionParser()
parser.add_option("-c", dest='configfile', default=sys.argv[0].replace('.py', '.conf'), help="configuration file to use")
(options, args) = parser.parse_args()
initConfig()
initLogger()
main()
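The facet loop above groups account-create events by source IP and alerts once a threshold of distinct e-mail addresses is reached. The core of that check can be sketched in plain Python; the flat event shape here is a simplified assumption standing in for the nested `_source` documents:

```python
from collections import defaultdict

def ips_over_threshold(events, threshold=2):
    """Map each source IP to the distinct (lower-cased) account e-mails it
    created, keeping only IPs at or above the alert threshold."""
    by_ip = defaultdict(set)
    for e in events:
        by_ip[e["sourceipv4address"]].add(e["email"].lower())
    return {ip: sorted(emails)
            for ip, emails in by_ip.items()
            if len(emails) >= threshold}
```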

Binary data
cron/google2mozdef.conf Normal file

Binary file not shown.


@@ -193,7 +193,7 @@ def initConfig():
options.output=getConfig('output','stdout',options.configfile) #output our log to stdout or syslog
options.sysloghostname=getConfig('sysloghostname','localhost',options.configfile) #syslog hostname
options.syslogport=getConfig('syslogport',514,options.configfile) #syslog port
-options.defaultTimeZone=getConfig('defaulttimezone','US/Pacific',options.configfile) #default timezone
+options.defaultTimeZone=getConfig('defaulttimezone','UTC',options.configfile) #default timezone
options.url = getConfig('url', 'http://localhost:8080/events', options.configfile) #mozdef event input url to post to
options.lastrun=toUTC(getConfig('lastrun',toUTC(datetime.now()-timedelta(hours=24)),options.configfile))
options.recordlimit = getConfig('recordlimit', 1000, options.configfile) #max number of records to request

Some files were not shown because too many files changed in this diff.