Fix documentation bullet list syntax

This more closely conforms to best practice for punctuating bullet lists[1].
Thanks to @Siddharth1698 for noticing these inconsistencies and suggesting
a fix in #1400

[1]: https://www.businesswritingblog.com/business_writing/2012/01/punctuating-bullet-points-.html
This commit is contained in:
Gene Wood 2019-07-29 07:41:48 -07:00
Parent fe936db79b
Commit e97f1908a3
No key matching this signature was found
GPG Key ID: F0A9E7DCD39E452E
7 changed files: 67 additions and 61 deletions

View file

@@ -25,7 +25,7 @@ The Mozilla Enterprise Defense Platform (MozDef) seeks to automate the security
## Goals:
* Provide a platform for use by defenders to rapidly discover and respond to security incidents.
* Provide a platform for use by defenders to rapidly discover and respond to security incidents
* Automate interfaces to other systems like bunker, cymon, mig
* Provide metrics for security events and incidents
* Facilitate real-time collaboration amongst incident handlers

View file

@@ -53,7 +53,7 @@ At this point, begin development and periodically run your unit-tests locally wi
Background on concepts
----------------------
- Logs - These are individual log entries that are typically emitted from systems, like an Apache log
- Logs - These are individual log entries that are typically emitted from systems, like an Apache log.
- Events - The entry point into MozDef, a log parsed into JSON by some log shipper (syslog-ng, nxlog) or a native JSON data source like GuardDuty, CloudTrail, most SaaS systems, etc.
- Alerts - These are either a 1:1 events to alerts (this thing happens and alert) or a M:1 events to alerts (N of these things happen and alert).

View file

@@ -27,7 +27,8 @@ insert_simple.js
Usage: `node ./insert_simple.js <processes> <totalInserts> <host1> [host2] [host3] [...]`
* `processes`: Number of processes to spawn
* `totalInserts`: Number of inserts to perform, please note after a certain number node will slow down. You want to have a lower number if you are in this case.
* `totalInserts`: Number of inserts to perform
* Please note after a certain number node will slow down. You want to have a lower number if you are in this case.
* `host1`, `host2`, `host3`, etc: Elasticsearch hosts to which you want to send the HTTP requests
insert_bulk.js
@@ -39,7 +40,8 @@ Usage: `node ./insert_bulk.js <processes> <insertsPerQuery> <totalInserts> <host
* `processes`: Number of processes to spawn
* `insertsPerQuery`: Number of logs per request
* `totalInserts`: Number of inserts to perform, please note after a certain number node will slow down. You want to have a lower number if you are in this case.
* `totalInserts`: Number of inserts to perform
* Please note after a certain number node will slow down. You want to have a lower number if you are in this case.
* `host1`, `host2`, `host3`, etc: Elasticsearch hosts to which you want to send the HTTP requests
search_all_fulltext.js
@@ -50,7 +52,8 @@ search_all_fulltext.js
Usage: `node ./search_all_fulltext.js <processes> <totalSearches> <host1> [host2] [host3] [...]`
* `processes`: Number of processes to spawn
* `totalSearches`: Number of search requests to perform, please note after a certain number node will slow down. You want to have a lower number if you are in this case.
* `totalSearches`: Number of search requests to perform
* Please note after a certain number node will slow down. You want to have a lower number if you are in this case.
* `host1`, `host2`, `host3`, etc: Elasticsearch hosts to which you want to send the HTTP requests
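These load-generation scripts all follow the same pattern: spawn the requested number of worker processes, spread the requested number of inserts or searches across the given Elasticsearch hosts, and stop. The sketch below is a rough Python equivalent of that pattern, not part of the repo; the index name, document body, and port are illustrative only.

```python
# Hedged sketch of the insert_simple.js pattern: spawn <processes> workers and
# spread <totalInserts> JSON documents across the given hosts. Index name,
# document body and port 9200 are assumptions for illustration.
import sys
from multiprocessing import Pool

import requests

def insert(args):
    i, hosts = args
    host = hosts[i % len(hosts)]           # round-robin over the target hosts
    doc = {"summary": "benchmark event %d" % i, "severity": "INFO"}
    requests.post("http://%s:9200/events/_doc" % host, json=doc).raise_for_status()

if __name__ == "__main__":
    processes, total_inserts = int(sys.argv[1]), int(sys.argv[2])
    hosts = sys.argv[3:]
    with Pool(processes) as pool:
        pool.map(insert, [(i, hosts) for i in range(total_inserts)])
```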

View file

@@ -19,34 +19,34 @@ The Test Sequence
_________________
* Travis CI creates webhooks when first setup which allow commits to the MozDef
GitHub repo to trigger Travis
GitHub repo to trigger Travis.
* When a commit is made to MozDef, Travis CI follows the instructions in the
`.travis.yml <https://github.com/mozilla/MozDef/blob/master/.travis.yml>`_
file
* `.travis.yml` installs `docker-compose` in the `before_install` phase
* in the `install` phase, Travis runs the
file.
* `.travis.yml` installs `docker-compose` in the `before_install` phase.
* In the `install` phase, Travis runs the
`build-tests <https://github.com/mozilla/MozDef/blob/cfeafb77f9d4d4d8df02117a0ffca0ec9379a7d5/Makefile#L88-L89>`_
make target which calls `docker-compose build` on the
`docker/compose/docker-compose-tests.yml`_ file which builds a few docker
containers to use for testing
* in the `script` phase, Travis runs the
containers to use for testing.
* In the `script` phase, Travis runs the
`tests <https://github.com/mozilla/MozDef/blob/cfeafb77f9d4d4d8df02117a0ffca0ec9379a7d5/Makefile#L52>`_
make target which
* calls the `build-tests` make target which again runs `docker-compose build`
on the `docker/compose/docker-compose-tests.yml`_ file
on the `docker/compose/docker-compose-tests.yml`_ file.
* calls the
`run-tests <https://github.com/mozilla/MozDef/blob/cfeafb77f9d4d4d8df02117a0ffca0ec9379a7d5/Makefile#L67-L69>`_
make target which
make target which.
* calls the
`run-tests-resources <https://github.com/mozilla/MozDef/blob/cfeafb77f9d4d4d8df02117a0ffca0ec9379a7d5/Makefile#L60-L61>`_
make target which starts the docker
containers listed in `docker/compose/docker-compose-tests.yml`_
containers listed in `docker/compose/docker-compose-tests.yml`_.
* runs `flake8` with the
`.flake8 <https://github.com/mozilla/MozDef/blob/master/.flake8>`_
config file to check code style
* runs `py.test tests` which runs all the test cases
config file to check code style.
* runs `py.test tests` which runs all the test cases.
AWS CodeBuild
-------------
@@ -111,24 +111,24 @@ The Build Sequence
__________________
* A branch is merged into `master` in the GitHub repo or a version git tag is
applied to a commit
* GitHub emits a webhook event to AWS CodeBuild indicating this
applied to a commit.
* GitHub emits a webhook event to AWS CodeBuild indicating this.
* AWS CodeBuild considers the Filter Groups configured to decide if the tag
or branch warrants triggering a build. These Filter Groups are defined in
the ``mozdef-cicd-codebuild.yml`` CloudFormation template. Assuming the tag
or branch are acceptable, CodeBuild continues.
* AWS CodeBuild reads the
`buildspec.yml <https://github.com/mozilla/MozDef/blob/master/cloudy_mozdef/buildspec.yml>`_
file to know what to do
file to know what to do.
* The `install` phase of the `buildspec.yml` fetches
`packer <https://www.packer.io/>`_ and unzips it
`packer <https://www.packer.io/>`_ and unzips it.
* `packer` is a tool that spawns an ec2 instance, provisions it, and renders
an AWS Machine Image (AMI) from it.
* The `build` phase of the `buildspec.yml` runs the
`cloudy_mozdef/ci/deploy <https://github.com/mozilla/MozDef/blob/master/cloudy_mozdef/ci/deploy>`_
script in the AWS CodeBuild Ubuntu 14.04 environment
script in the AWS CodeBuild Ubuntu 14.04 environment.
* The `deploy` script calls the
`build-from-cwd <https://github.com/mozilla/MozDef/blob/cfeafb77f9d4d4d8df02117a0ffca0ec9379a7d5/Makefile#L78-L79>`_
target of the `Makefile` which calls `docker-compose build` on the
@@ -153,16 +153,16 @@ __________________
* Uploads the local image that was just built by AWS CodeBuild to DockerHub.
If the branch being built is `master` then the image is uploaded both with
a tag of `master` as well as with a tag of `latest`
a tag of `master` as well as with a tag of `latest`.
* If the branch being built is from a version tag (e.g. `v1.2.3`) then the
image is uploaded with only that version tag applied
image is uploaded with only that version tag applied.
* The `deploy` script next calls the
`packer-build-github <https://github.com/mozilla/MozDef/blob/cfeafb77f9d4d4d8df02117a0ffca0ec9379a7d5/cloudy_mozdef/Makefile#L34-L36>`_
make target in the
`cloudy_mozdef/Makefile <https://github.com/mozilla/MozDef/blob/master/cloudy_mozdef/Makefile>`_
which calls the
`ci/pack_and_copy <https://github.com/mozilla/MozDef/blob/master/cloudy_mozdef/ci/pack_and_copy>`_
script which does the following steps
script which does the following steps.
* Calls packer which launches an ec2 instance, executing a bunch of steps and
and producing an AMI
@@ -179,19 +179,19 @@ __________________
* Within this ec2 instance, packer `clones the MozDef GitHub repo and checks
out the branch that triggered this build
<https://github.com/mozilla/MozDef/blob/c7a166f2e29dde8e5d71853a279fb0c47a48e1b2/cloudy_mozdef/packer/packer.json#L58-L60>`_
* packer replaces all instances of the word `latest` in the
<https://github.com/mozilla/MozDef/blob/c7a166f2e29dde8e5d71853a279fb0c47a48e1b2/cloudy_mozdef/packer/packer.json#L58-L60>`_.
* Packer replaces all instances of the word `latest` in the
`docker-compose-cloudy-mozdef.yml <https://github.com/mozilla/MozDef/blob/master/docker/compose/docker-compose-cloudy-mozdef.yml>`_
file with either the branch `master` or the version tag (e.g. `v1.2.3`)
* packer runs `docker-compose pull` on the
file with either the branch `master` or the version tag (e.g. `v1.2.3`).
* Packer runs `docker-compose pull` on the
`docker-compose-cloudy-mozdef.yml <https://github.com/mozilla/MozDef/blob/master/docker/compose/docker-compose-cloudy-mozdef.yml>`_
file to pull down both the docker images that were just built by AWS
CodeBuild and uploaded to Dockerhub as well as other non MozDef docker
images
images.
* After packer completes executing the steps laid out in `packer.json` inside
the ec2 instance, it generates an AMI from that instance and continues with
the copying, tagging and sharing steps described above
the copying, tagging and sharing steps described above.
* Now back in the AWS CodeBuild environment, the `deploy` script continues by
calling the
`publish-versioned-templates <https://github.com/mozilla/MozDef/blob/cfeafb77f9d4d4d8df02117a0ffca0ec9379a7d5/cloudy_mozdef/Makefile#L85-L87>`_
@@ -205,7 +205,7 @@ __________________
CloudFormation template so that the template knows the AMI IDs of that
specific branch of code.
* uploads the CloudFormation templates to S3 in a directory either called
`master` or the tag version that was built (e.g. `v1.2.3`)
`master` or the tag version that was built (e.g. `v1.2.3`).
.. _docker/compose/docker-compose-tests.yml: https://github.com/mozilla/MozDef/blob/master/docker/compose/docker-compose-tests.yml
.. _tag-images: https://github.com/mozilla/MozDef/blob/cfeafb77f9d4d4d8df02117a0ffca0ec9379a7d5/Makefile#L109-L110
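One concrete step in the build sequence above is pinning the floating `latest` tag to the branch or version being built. In the real pipeline packer performs this inside the ec2 instance; the sketch below is only a rough Python equivalent of that substitution step, with the tag value chosen for illustration.

```python
# Hedged sketch of the tag-pinning step only: replace every occurrence of
# "latest" in docker-compose-cloudy-mozdef.yml with the branch or version tag.
# File path is from the docs above; the tag value is an example.
from pathlib import Path

def pin_image_tags(compose_file: str, tag: str) -> None:
    path = Path(compose_file)
    path.write_text(path.read_text().replace("latest", tag))

if __name__ == "__main__":
    pin_image_tags("docker/compose/docker-compose-cloudy-mozdef.yml", "v1.2.3")
```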

View file

@@ -32,18 +32,19 @@ MozDef requires the following:
- An OIDC Provider with ClientID, ClientSecret, and Discovery URL
- Mozilla uses Auth0 but you can use any OIDC provider you like: Shibboleth,
KeyCloak, AWS Cognito, Okta, Ping (etc.)
KeyCloak, AWS Cognito, Okta, Ping (etc.).
- You will need to configure the redirect URI of ``/redirect_uri`` as allowed in
your OIDC provider configuration
your OIDC provider configuration.
- An ACM Certificate in the deployment region for your DNS name
- A VPC with three public subnets available.
- A VPC with three public subnets available
- It is advised that this VPC be dedicated to MozDef or used solely for security automation.
- The three public subnets must all be in different `availability zones <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#using-regions-availability-zones-describe>`_
and have a large enough number of IP addresses to accommodate the infrastructure
and have a large enough number of IP addresses to accommodate the infrastructure.
- The VPC must have an `internet gateway <https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html>`_
enabled on it so that MozDef can reach the internet
- An SQS queue receiving GuardDuty events. At the time of writing this is not required but may be required in future.
enabled on it so that MozDef can reach the internet.
- An SQS queue receiving GuardDuty events
- At the time of writing this is not required but may be required in future.
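The VPC and subnet requirements above are easy to get wrong. The boto3 sketch below sanity-checks them before deploying; it is not part of the MozDef tooling, the VPC ID and region are placeholders, and it uses MapPublicIpOnLaunch as a rough proxy for a public subnet.

```python
# Hedged sketch: sanity-check the VPC prerequisites listed above with boto3.
# vpc_id and region are placeholders; MapPublicIpOnLaunch is only a rough
# proxy for "public subnet".
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
vpc_id = "vpc-0123456789abcdef0"

subnets = ec2.describe_subnets(Filters=[{"Name": "vpc-id", "Values": [vpc_id]}])["Subnets"]
public_azs = {s["AvailabilityZone"] for s in subnets if s.get("MapPublicIpOnLaunch")}
assert len(public_azs) >= 3, "need three public subnets in different availability zones"

igws = ec2.describe_internet_gateways(
    Filters=[{"Name": "attachment.vpc-id", "Values": [vpc_id]}]
)["InternetGateways"]
assert igws, "the VPC must have an internet gateway attached"
```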
Supported Regions

View file

@@ -20,7 +20,7 @@ Goals
High level
**********
* Provide a platform for use by defenders to rapidly discover and respond to security incidents.
* Provide a platform for use by defenders to rapidly discover and respond to security incidents
* Automate interfaces to other systems like firewalls, cloud protections and anything that has an API
* Provide metrics for security events and incidents
* Facilitate real-time collaboration amongst incident handlers
@@ -31,25 +31,25 @@ Technical
*********
* Offer micro services that make up an Open Source Security Information and Event Management (SIEM)
* Scalable, should be able to handle thousands of events per second, provide fast searching, alerting, correlation and handle interactions between teams of incident handlers.
* Scalable, should be able to handle thousands of events per second, provide fast searching, alerting, correlation and handle interactions between teams of incident handlers
MozDef aims to provide traditional SIEM functionality including:
* Accepting events/logs from a variety of systems
* Storing events/logs
* Facilitating searches
* Facilitating alerting
* Facilitating log management (archiving,restoration)
* Accepting events/logs from a variety of systems.
* Storing events/logs.
* Facilitating searches.
* Facilitating alerting.
* Facilitating log management (archiving,restoration).
It is non-traditional in that it:
* Accepts only JSON input
* Provides you open access to your data
* Accepts only JSON input.
* Provides you open access to your data.
* Integrates with a variety of log shippers including logstash, beaver, nxlog, syslog-ng and any shipper that can send JSON to either rabbit-mq or an HTTP(s) endpoint.
* Provides easy integration to Cloud-based data sources such as cloudtrail or guard duty
* Provides easy python plugins to manipulate your data in transit
* Provides extensive plug-in opportunities to customize your event enrichment stream, your alert workflow, etc
* Provides realtime access to teams of incident responders to allow each other to see their work simultaneously
* Provides easy integration to Cloud-based data sources such as CloudTrail or GuardDuty.
* Provides easy python plugins to manipulate your data in transit.
* Provides extensive plug-in opportunities to customize your event enrichment stream, your alert workflow, etc.
* Provides realtime access to teams of incident responders to allow each other to see their work simultaneously.
Architecture
@@ -60,7 +60,7 @@ MozDef is based on open source technologies including:
* RabbitMQ (message queue and amqp(s)-based log input)
* uWSGI (supervisory control of python-based workers)
* bottle.py (simple python interface for web request handling)
* elasticsearch (scalable indexing and searching of JSON documents)
* Elasticsearch (scalable indexing and searching of JSON documents)
* Meteor (responsive framework for Node.js enabling real-time data sharing)
* MongoDB (scalable data store, tightly integrated to Meteor)
* VERIS from verizon (open source taxonomy of security incident categorizations)
@@ -74,11 +74,11 @@ Frontend processing
Frontend processing for MozDef consists of receiving an event/log (in json) over HTTP(S), AMQP(S), or SQS
doing data transformation including normalization, adding metadata, etc. and pushing
the data to elasticsearch.
the data to Elasticsearch.
Internally MozDef uses RabbitMQ to queue events that are still to be processed.
The diagram below shows the interactions between the python scripts (controlled by uWSGI),
the RabbitMQ exchanges and elasticsearch indices.
the RabbitMQ exchanges and Elasticsearch indices.
.. image:: images/frontend_processing.png
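For the HTTP(S) input path described above, an event is simply a JSON document POSTed to the frontend. The sketch below is a hedged example of doing that from Python; the endpoint URL, port, and field names are assumptions for illustration, not a documented schema.

```python
# Hedged sketch: submit one JSON event to a MozDef HTTP input endpoint.
# The URL, port and field names are assumptions for illustration only.
import requests

event = {
    "category": "authentication",
    "hostname": "web1.example.com",
    "processname": "sshd",
    "severity": "INFO",
    "summary": "login success for user alice",
    "details": {"username": "alice", "sourceipaddress": "203.0.113.7"},
}

resp = requests.post("https://mozdef.example.com:8080/events", json=event, timeout=5)
resp.raise_for_status()
```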
@@ -95,7 +95,7 @@ Initial Release:
* Facilitate replacing base SIEM functionality including log input, event management, search, alerts, basic correlations
* Enhance the incident workflow UI to enable realtime collaboration
* Enable basic plug-ins to the event input stream for meta data, additional parsing, categorization and basic machine learning
* Support as many common event/log shippers as possible with repeatable recipies
* Support as many common event/log shippers as possible with repeatable recipes
* Base integration into Mozilla's defense mechanisms for automation
* 3D visualizations of threat actors
* Fine tuning of interactions between meteor, mongo, dc.js
@@ -106,7 +106,7 @@ Recently implemented:
* Docker containers for each service
* Updates to support recent (breaking) versions of Elasticsearch
Future (join us!):
Future (join us!):
* Correlation through machine learning, AI
* Enhanced search for alerts, events, attackers within the MozDef UI

View file

@@ -131,11 +131,11 @@ Background
Mozilla used CEF as a logging standard for compatibility with Arcsight and for standardization across systems. While CEF is an admirable standard, MozDef prefers JSON logging for the following reasons:
* Every development language can create a JSON structure
* JSON is easily parsed by computers/programs which are the primary consumer of logs
* CEF is primarily used by Arcsight and rarely seen outside that platform and doesn't offer the extensibility of JSON
* Every development language can create a JSON structure.
* JSON is easily parsed by computers/programs which are the primary consumer of logs.
* CEF is primarily used by Arcsight and rarely seen outside that platform and doesn't offer the extensibility of JSON.
* A wide variety of log shippers (heka, logstash, fluentd, nxlog, beaver) are readily available to meet almost any need to transport logs as JSON.
* JSON is already the standard for cloud platforms like amazon's cloudtrail logging
* JSON is already the standard for cloud platforms like amazon's cloudtrail logging.
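To make the preference for JSON logging concrete, here is a minimal sketch of emitting application logs as single-line JSON documents from Python; the field names are illustrative rather than a required MozDef schema, and any shipper that can forward JSON will do.

```python
# Minimal sketch: format stdlib log records as single-line JSON documents.
# Field names are illustrative, not a required MozDef schema.
import json
import logging
import socket
from datetime import datetime, timezone

class JSONFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "utctimestamp": datetime.now(timezone.utc).isoformat(),
            "hostname": socket.gethostname(),
            "severity": record.levelname,
            "summary": record.getMessage(),
            "category": "application",
        })

handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())
logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.warning("disk usage above 90% on /var")
```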
Description
***********
@@ -288,8 +288,10 @@ Alerts are stored in the `alerts`_ folder.
There are two types of alerts:
* simple alerts that consider events on at a time. For example you may want to get an alert everytime a single LDAP modification is detected.
* aggregation alerts allow you to aggregate events on the field of your choice. For example you may want to alert when more than 3 login attempts failed for the same username.
* simple alerts that consider events on at a time
* For example you may want to get an alert everytime a single LDAP modification is detected.
* aggregation alerts that allow you to aggregate events on the field of your choice
* For example you may want to alert when more than 3 login attempts failed for the same username.
You'll find documented examples in the `alerts`_ folder.
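As a rough illustration of the second type, an aggregation alert generally has the shape sketched below. The class and method names here are assumptions based on the bundled examples, so check the `alerts`_ folder for the real plugin API before copying anything.

```python
# Rough sketch only: an aggregation alert that fires when more than 3 failed
# logins are seen for the same username in 15 minutes. Class and method names
# are assumptions -- consult the examples in the alerts folder for the real API.
from lib.alerttask import AlertTask
from mozdef_util.query_models import SearchQuery, TermMatch

class AlertTooManyFailedLogins(AlertTask):
    def main(self):
        search_query = SearchQuery(minutes=15)
        search_query.add_must(TermMatch("details.success", "false"))
        self.filtersManual(search_query)
        # bucket matching events by username, then walk buckets over the threshold
        self.searchEventsAggregated("details.username", samplesLimit=10)
        self.walkAggregations(threshold=4)

    def onAggregation(self, aggreg):
        summary = "{0} failed logins for {1}".format(aggreg["count"], aggreg["value"])
        return self.createAlertDict(
            summary, "authentication", ["bruteforce"], aggreg["events"], "WARNING"
        )
```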