Adding new files for Dockercon security workshop and updating some older labs - nigelpoulton@hotmail.com

Signed-off-by: Nigel <nigelpoulton@hotmail.com>
This commit is contained in:
Nigel 2017-04-10 14:23:02 +01:00
Parent 855ea4a22f
Commit bcc5f9e8e2
17 changed files with 1529 additions and 63 deletions

View file

@ -2,10 +2,18 @@
This directory contains tutorials on how to take advantage of a non-exhaustive collection of Docker security features. Moreover, the tutorials are designed to explain and demonstrate the strong security defaults in Docker for each feature.
## Docker
* [Content Trust](trust/README.md)
* [Content Trust Basics](trust-basics/README.md)
* [Secrets Management](secrets/README.md)
* [Secrets Management with Docker Datacenter](secrets-ddc/README.md)
* [Secure Networking Basics](networking/README.md)
* [Security Scanning](scanning/README.md)
* [Swarm Mode Security Basics](swarm/README.md)
## Linux
* [AppArmor](apparmor/README.md)
* [Capabilities](capabilities/README.md)
* [Content Trust](trust/README.md)
* [Control Groups](cgroups/README.md)
* [Seccomp](seccomp/README.md)
* [User Namespaces](userns/README.md)

View file

@ -4,7 +4,7 @@
> **Time**: Approximately 25 minutes
AppArmor (Application Armor) is a Linux Security Module (LSM). It protects the operating system by applying profiles to individual applications. In contrast to managing *capabilities* with `CAP_DROP` and syscalls with *seccomp*, AppArmor allows for much finer-grained control. For example, AppArmor can restrict file operations on specified paths.
AppArmor (Application Armor) is a Linux Security Module (LSM). It protects the operating system by applying profiles to individual applications or containers. In contrast to managing *capabilities* with `CAP_DROP` and syscalls with *seccomp*, AppArmor allows for much finer-grained control. For example, AppArmor can restrict file operations on specified paths.
In this lab you will learn the basics of AppArmor and how to use it with Docker for improved security.
@ -24,7 +24,7 @@ You will need all of the following to complete this lab:
- A Linux-based Docker Host with AppArmor enabled in the kernel (most Debian-based distros)
- Docker 1.12 or higher
The following command shows you how to check if AppArmor is enabled in your system's kernel:
The following command shows you how to check if AppArmor is enabled in your system's kernel and available to Docker:
Check from Docker 1.12 or higher
```
@ -36,11 +36,11 @@ The following command shows you how to check if AppArmor is enabled in your syst
# <a name="primer"></a>Step 1: AppArmor primer
By default, Docker applies the `docker-default` AppArmor profile to new containers. This profile is located in `/etc/apparmor.d/docker/` and you can find more information about it in the [documentation](https://docs.docker.com/engine/security/apparmor/#understand-the-policies).
By default, Docker applies the `docker-default` AppArmor profile to new **containers**. In Docker 1.13 and later this profile is created in `tmpfs` and then loaded into the kernel. On Docker 1.12 and earlier it is located in `/etc/apparmor.d/docker/`. You can find more information about it in the [documentation](https://docs.docker.com/engine/security/apparmor/#understand-the-policies).
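A quick way to check which AppArmor profile Docker associates with a running container is the `AppArmorProfile` field in `docker inspect`. This is a sketch, assuming a running container named `apparmor1` (created later in this lab); depending on your Docker version the field may be empty when the default profile is applied implicitly:

```
$ docker container inspect --format '{{ .AppArmorProfile }}' apparmor1
```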
Here are some quick pointers for how to understand AppArmor profiles:
- `Include` statements, such as `#include <abstractions/base>`, behave just like their `C` counterparts by expanding to additional AppArmor profile contents.
- `include` statements, such as `#include <abstractions/base>`, behave just like their `C` counterparts by expanding to additional AppArmor profile contents.
- AppArmor `deny` rules have precedence over `allow` and `owner` rules. This means that `deny` rules cannot be overridden by subsequent `allow` or `owner` rules for the same resource. Moreover, an `allow` will be overridden by a subsequent `deny` on the same resource, as the fragment below illustrates.
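The following hypothetical profile fragment (illustrative only, not one of the lab files) shows that precedence: even though the `owner` rule allows writes under `/data`, the `deny` rule keeps `/data/secret` unwritable.

```
profile example-precedence {
  #include <abstractions/base>

  # allow the owning user to read and write anywhere under /data ...
  owner /data/** rw,

  # ... but deny always wins, so writes under /data/secret stay blocked
  deny /data/secret/** w,
}
```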
@ -52,7 +52,7 @@ For more information, see the [official AppArmor documentation wiki](http://wiki
In this step you will check the status of AppArmor on your Docker Host and learn how to identify whether or not Docker containers are running with an AppArmor profile.
1. View the status of AppArmor on your Docker Host with the `apparmor_status` command.
1. View the status of AppArmor on your Docker Host with the `apparmor_status` command. You may need to precede the command with `sudo`.
```
$ apparmor_status
@ -77,22 +77,20 @@ In this step you will check the status of AppArmor on your Docker Host and learn
0 processes are unconfined but have a profile defined.
```
The command may require admin credentials.
Notice the `docker-default` profile is in enforce mode. This is the AppArmor profile that will be applied to new containers unless overridden with the `--security-opt` flag.
2. Run a new container and put it in the background.
```
$ sudo docker run -dit alpine sh
$ docker container run -dit --name apparmor1 alpine sh
```
3. Confirm that the container is running.
```
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1bb16561bc06 alpine "sh" 2 seconds ago Up 2 seconds sick_booth
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1bb16561bc06 alpine "sh" 2 secs ago Up 2 seconds apparmor1
```
4. Run the `apparmor_status` command again.
@ -120,10 +118,8 @@ In this step you will check the status of AppArmor on your Docker Host and learn
5. Stop and remove the container started in the previous steps.
The example below uses the Container ID "1bb16561bc06". This will be different in your environment.
```
$ sudo docker rm -f 1bb16561bc06
$ docker container rm -f apparmor1
apparmor1
```
@ -136,22 +132,24 @@ In this step you will see how to run a new container without an AppArmor profile
1. If you haven't already done so, stop and remove all containers on your system.
```
$ sudo docker rm -f $(docker ps -aq)
$ docker container rm -f $(docker container ls -aq)
```
2. Use the `--security-opt apparmor=unconfined` flag to start a new container in the background without an AppArmor profile.
```
$ sudo docker run -dit --security-opt apparmor=unconfined alpine sh
ace79581a19ace7b85009480a64fd378d43844f82559bd1178fce431e292277d
$ docker container run -dit --name apparmor2 \
--security-opt apparmor=unconfined \
alpine sh
ace79581a19a....559bd1178fce431e292277d
```
3. Confirm that the container is running.
```
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ace79581a19a alpine "sh" 41 seconds ago Up 40 seconds sharp_knuth
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ace79581a19a alpine "sh" 41 secs ago Up 40 secs apparmor2
```
4. Use the `apparmor_status` command to verify that the new container is not running with an AppArmor profile.
@ -170,10 +168,8 @@ ace79581a19a alpine "sh" 41 seconds ago
5. Stop and remove the container started in the previous steps.
The example below uses the Container ID "ace79581a19a". This will be different in your environment.
```
$ sudo docker rm -f ace79581a19a
$ docker container rm -f apparmor2
apparmor2
```
@ -181,20 +177,20 @@ In this step you learned that the `--security-opt apparmor=unconfined` flag will
# <a name="depth"></a>Step 4: AppArmor and defense in depth
Defense in depth is a model where multiple different lines of defense work together to provide increased overall defensive capabilities. Docker uses AppArmor, seccomp, and Capabilities to form a deep defense system.
Defense in depth is a model where multiple different lines of defense work together to provide increased overall defensive capabilities. Docker uses many Linux technologies, such as AppArmor, seccomp, and Capabilities, to form a deep defense system.
In this step you will see how AppArmor can protect a Docker Host even when other lines of defense such as seccomp and Capabilities are not effective.
1. If you haven't already done so, stop and remove all containers on your system.
```
$ sudo docker rm -f $(docker ps -aq)
$ docker container rm -f $(docker container ls -aq)
```
2. Start a new Ubuntu container with seccomp disabled and the `SYS_ADMIN` *capability* added.
```
$ sudo docker run --rm -it --cap-add SYS_ADMIN --security-opt seccomp=unconfined ubuntu sh
$ docker container run --rm -it --cap-add SYS_ADMIN --security-opt seccomp=unconfined ubuntu sh
#
```
@ -211,12 +207,12 @@ In this step you will see how AppArmor can protect a Docker Host even when other
The operation failed because the `docker-default` AppArmor profile denied the operation.
4. Exit the container.
4. Exit the container using the `exit` command.
5. Confirm that it was the `docker-default` AppArmor profile that denied the operation by starting a new container without an AppArmor profile and retrying the same operation.
```
$ sudo docker run --rm -it --cap-add SYS_ADMIN --security-opt seccomp=unconfined --security-opt apparmor=unconfined ubuntu sh
$ docker container run --rm -it --cap-add SYS_ADMIN --security-opt seccomp=unconfined --security-opt apparmor=unconfined ubuntu sh
# mkdir 1; mkdir 2; mount --bind 1 2
# ls -l
@ -230,6 +226,8 @@ In this step you will see how AppArmor can protect a Docker Host even when other
The operation succeeded this time. This proves that it was the `docker-default` AppArmor profile that prevented the operation in the previous attempt.
6. Exit the container with the `exit` command.
In this step you have seen how AppArmor works together with seccomp and Capabilities to form a defense in depth security model for Docker. You saw a scenario where even with seccomp and Capabilities not preventing an action, AppArmor still prevented it.
# <a name="custom"></a>Step 5: Custom AppArmor profile
@ -265,7 +263,7 @@ In this step we'll show how a custom AppArmor profile could have protected Docke
3. View the contents of the Docker Compose YAML file.
```
cat docker-compose.yml
$ cat docker-compose.yml
wordpress:
image: endophage/wordpress:lean
links:
@ -299,8 +297,12 @@ mysql:
4. Bring the application up.
It may take a minute or two to bring the application up. Once the application is up you will need to hit the <Return> key to bring the prompt to the foreground.
You may need to install `docker-compose` to complete this step. If your lab machine does not already have Docker Compose installed, you can install it with `sudo apt-get update` followed by `sudo apt-get install docker-compose`.
```
$ sudo docker-compose up &
$ docker-compose up &
Pulling mysql (mariadb:10.1.10)...
10.1.10: Pulling from library/mariadb
03e1855d4f31: Pull complete
@ -330,19 +332,28 @@ In the next few steps you'll apply a new Apparmor profile to a new WordPress con
9. Bring the WordPress application down.
Run this command from the shell of your Docker Host, not the shell of the `wordpress` container.
Run these commands from the shell of your Docker Host, not the shell of the `wordpress` container.
```
$ sudo docker-compose down
Stopping wordpress_wordpress_1 ...
Stopping wordpress_mysql_1 ...
<SNIP>
$ docker-compose stop
Stopping wordpress_wordpress_1 ... done
Stopping wordpress_mysql_1 ... done
$
$ docker-compose rm
Going to remove wordpress_wordpress_1, wordpress_mysql_1
Are you sure? [yN] y
Removing wordpress_wordpress_1 ... done
Removing wordpress_mysql_1 ... done
```
10. Add the `wparmor` profile to the `wordpress` service in the `docker-compose.yml` file.
10. Add the `wparmor` profile to the `wordpress` service in the `docker-compose.yml` file. You do this by deleting the two lines starting with `#` and replacing them with the following two lines:
```
security_opt:
- apparmor=wparmor
```
Be sure to add the lines with the correct indentation as shown below.
```
wordpress:
@ -387,7 +398,7 @@ In the next few steps you'll apply a new Apparmor profile to a new WordPress con
13. Bring the Docker Compose WordPress app back up.
```
$ sudo docker-compose up &
$ docker-compose up &
Pulling mysql (mariadb:10.1.10)...
10.1.10: Pulling from library/mariadb
03e1855d4f31: Pull complete
@ -401,7 +412,7 @@ In the next few steps you'll apply a new Apparmor profile to a new WordPress con
14. Verify that the app is up.
```
$ sudo docker-compose ps
$ docker-compose ps
Name Command State Ports
--------------------------------------------------------------------------------------
wordpress_mysql_1 /docker-entrypoint.sh mysqld Up 0.0.0.0:32770->3306/tcp
@ -413,7 +424,15 @@ In the next few steps you'll apply a new Apparmor profile to a new WordPress con
16. Bring the application down.
```
$ sudo docker-compose down
$ docker-compose stop
Stopping wordpress_wordpress_1 ... done
Stopping wordpress_mysql_1 ... done
$
$ docker-compose rm
Going to remove wordpress_wordpress_1, wordpress_mysql_1
Are you sure? [yN] y
Removing wordpress_wordpress_1 ... done
Removing wordpress_mysql_1 ... done
```
Congratulations! You've secured a WordPress instance against adding malicious plugins :)

View file

@ -17,7 +17,7 @@ You will complete the following steps as part of this lab.
You will need all of the following to complete this lab:
- A Linux-based Docker Host running Docker 1.12 or higher
- A Linux-based Docker Host running Docker 1.13 or higher
# <a name="cap_intro"></a>Step 1: Introduction to capabilities
@ -87,7 +87,7 @@ In this step you will start various new containers. Each time you will use the c
1. Start a new container and prove that the container's root account can change the ownership of files.
```
$ sudo docker run --rm -it alpine chown nobody /
$ docker container run --rm -it alpine chown nobody /
```
The command produces no output, indicating that the operation succeeded. The command works because the default behavior is for new containers to be started with a root user. This root user has the CAP_CHOWN capability by default.
@ -95,7 +95,7 @@ In this step you will start various new containers. Each time you will use the c
2. Start another new container and drop all capabilities for the container's root account other than the CAP\_CHOWN capability. Remember that Docker does not use the "CAP_" prefix when addressing capability constants.
```
$ sudo docker run --rm -it --cap-drop ALL --cap-add CHOWN alpine chown nobody /
$ docker container run --rm -it --cap-drop ALL --cap-add CHOWN alpine chown nobody /
```
This command also produces no output, indicating a successful run. The operation succeeds because although you dropped all capabilities for the container's `root` account, you added the `chown` capability back. The `chown` capability is all that is needed to change the ownership of a file.
@ -103,7 +103,7 @@ In this step you will start various new containers. Each time you will use the c
3. Start another new container and drop only the `CHOWN` capability from its root account.
```
$ sudo docker run --rm -it --cap-drop CHOWN alpine chown nobody /
$ docker container run --rm -it --cap-drop CHOWN alpine chown nobody /
chown: /: Operation not permitted
```
@ -112,7 +112,7 @@ In this step you will start various new containers. Each time you will use the c
4. Create another new container and try adding the `CHOWN` capability to the non-root user called `nobody`. As part of the same command try and change the ownership of a file or folder.
```
$ sudo docker run --rm -it --cap-add chown -u nobody alpine chown nobody /
$ docker container run --rm -it --cap-add chown -u nobody alpine chown nobody /
chown: /: Operation not permitted
```
@ -130,7 +130,7 @@ There are two main sets of tools for managing capabilities:
Below are some useful commands from both.
> You may need to manually install the packages required for some of these commands.
> You may need to manually install the packages required for some of these commands: `sudo apt-get install libcap-dev`, `sudo apt-get install libcap-ng-dev`, and `sudo apt-get install libcap-ng-utils`.
## **libcap**
@ -151,7 +151,7 @@ The remainder of this step will show you some examples of `libcap` and `libcap-n
The following command will start a new container using Alpine Linux, install the `libcap` package and then list capabilities.
```
$ sudo docker run --rm -it alpine sh -c 'apk add -U libcap; capsh --print'
$ docker container run --rm -it alpine sh -c 'apk add -U libcap; capsh --print'
(1/1) Installing libcap (2.25-r0)
Executing busybox-1.24.2-r9.trigger
@ -167,7 +167,7 @@ The following command will start a new container using Alpine Linux, install the
groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video)
```
**Current** is multiple sets separated by spaces. Multiple capabilities within the same set are separated by commas `,`. The letters following the `+` at the end of each set are as follows:
In the output above, **Current** is multiple sets separated by spaces. Multiple capabilities within the same *set* are separated by commas `,`. The letters following the `+` at the end of each set are as follows:
- `e` is effective
- `i` is inheritable
- `p` is permitted
@ -203,7 +203,7 @@ usage: capsh [args ...]
```
> Warning:
> `--drop` sounds like what you want to do, but it only affects the bounding set. This can be very confusing because it doesn't actually take away the capability from the effective or inheritable set. You almost always want to use `--caps`.
> `--drop` sounds like what you want to do, but it only affects the bounding set. This can be very confusing because it doesn't actually take away the capability from the effective or inheritable set. You almost always want to use `--caps`. (You may also need to install the `attr` package: `sudo apt-get install attr`.)
### Modifying capabilities
@ -214,7 +214,7 @@ Libcap and libcap-ng can both be used to modify capabilities.
The command below shows how to set the CAP_NET_RAW capability as *effective* and *permitted* on the file represented by `$file`. The `setcap` command calls on libcap to do this.
```
$ setcap cap_net_raw=ep $file
$ sudo setcap cap_net_raw=ep $file
```
2. Use libcap-ng to set the capabilities of a file.
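As a hedged sketch (assuming the `libcap-ng-utils` package mentioned earlier is installed), the libcap-ng equivalent uses the `filecap` utility. Note that `filecap` takes capability names without the `cap_` prefix:

```
$ sudo filecap $file net_raw
```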

View file

@ -0,0 +1,208 @@
# Docker Networking Security Basics
# Lab Meta
> **Difficulty**: Beginner
> **Time**: Approximately 10 minutes
In this lab you'll look at some of the built-in network security technologies available in Swarm Mode.
You will complete the following steps as part of this lab.
- [Step 1 - Create an encrypted overlay network](#network_create)
- [Step 2 - List networks](#list_networks)
- [Step 3 - Deploy a service](#deploy_service)
- [Step 4 - Clean-up](#clean)
# Prerequisites
You will need all of the following to complete this lab:
- At least two Linux-based Docker Hosts running Docker 1.13 or higher and configured as part of the same Swarm
- This lab assumes a Swarm with at least one manager node and one worker node. In this lab, **node1** will be the manager and **node3** will be the worker. You may need to change these values if your lab is configured differently - the important thing is that one node is a manager and the other is a worker.
- This lab was built and tested using Ubuntu 16.04 and Docker 17.03.0-ce
# <a name="network_create"></a>Step 1: Create an encrypted overlay network
In this step you will create two overlay networks. The first will only have the control plane traffic encrypted. The second will have control plane **and** data plane traffic encrypted.
All Docker overlay networks have control plane traffic encrypted by default. To encrypt data plane traffic you need to pass the `--opt encrypted` flag to the `docker network create` command.
Perform all of the following commands from a *Manager node* in your lab. The examples in this lab guide will assume you are using **node1**. Your lab may be different.
1. Create a new overlay network called **net1**
```
$ docker network create -d overlay net1
xt3jwgsq20ob648uc5f8ow95q
```
2. Inspect the **net1** network to check for the **encrypted** flag
```
$ docker network inspect net1
[
{
"Name": "net1",
"Id": "xt3jwgsq20ob648uc5f8ow95q",
"Created": "0001-01-01T00:00:00Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": []
},
"Internal": false,
"Attachable": false,
"Containers": null,
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4097"
},
"Labels": null
}
]
```
Notice that there is no **encrypted** flag under the **Options** section of the output. This indicates that data plane traffic (application traffic) is not encrypted on this network. Control plane traffic (gossip etc) is encrypted by default for all overlay networks.
3. Create another overlay network, but this time pass the `--opt encrypted` flag. Call this network **net2**.
```
$ docker network create -d overlay --opt encrypted net2
uaaw8ljwidoc5is2qo362hd8n
```
4. Inspect the **net2** network to check for the **encrypted** flag
```
$ docker network inspect net2
[
{
"Name": "net2",
"Id": "uaaw8ljwidoc5is2qo362hd8n",
"Created": "0001-01-01T00:00:00Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": []
},
"Internal": false,
"Attachable": false,
"Containers": null,
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4098",
"encrypted": ""
},
"Labels": null
}
]
```
Notice the presence of the **encrypted** flag below the VXLAN ID in the **Options** field. This indicates that data plane traffic (application traffic) on this network will be encrypted.
# <a name="list_networks"></a>Step 2: List networks
In this step you will list the networks visible on **node1** (*manager node*) and **node3** (*worker node*) in your lab. The networks you created in the previous step will be visible on **node1** but not **node3**. This is because Docker takes a lazy approach when propagating networks to *worker nodes* - a *worker node* only gets to know about a network if it runs a container or service task that specifically requires that network. This reduces network control plane chatter which assists with scalability and security.
>NOTE: All *manager nodes* know about all networks.
1. Run the `docker network ls` command on **node1**
```
node1$ docker network ls
NETWORK ID NAME DRIVER SCOPE
70bd606f9f81 bridge bridge local
475a3b8f04de docker_gwbridge bridge local
f94f673bfe7e host host local
3ecc06xxyb7d ingress overlay swarm
xt3jwgsq20ob net1 overlay swarm
uaaw8ljwidoc net2 overlay swarm
b535831c780f none null local
```
Notice that **net1** and **net2** are both present in the list. This is expected behavior because you created these networks on **node1** and it is also a *manager node*. *Worker nodes* in the Swarm should not be able to see these networks yet.
2. Run the `docker network ls` command on **node3** (*worker node*)
```
node3$ docker network ls
NETWORK ID NAME DRIVER SCOPE
abe97d2963b3 bridge bridge local
42295053cd72 docker_gwbridge bridge local
ad4f60192aa0 host host local
3ecc06xxyb7d ingress overlay swarm
1a85d1a0721f none null local
```
The **net1** and **net2** networks are not visible on this *worker node*. This is expected behavior because the node is not running a service task that is on that network. This proves that Docker does not extend newly created networks to all *worker nodes* in a Swarm - it delays this action until a node has a specific requirement to know about that network. This improves scalability and security.
# <a name="deploy_service"></a>Step 3: Deploy a service
In this step you will deploy a service on the **net2** overlay network. You will deploy the service with enough replica tasks so that at least one task will run on every node in your Swarm. This will force Docker to extend the **net2** network to all nodes in the Swarm.
1. Deploy a new service to all nodes in your Swarm. When executing this command, be sure to use an adequate number of replica tasks so that all Swarm nodes will run a task. This example deploys 4 replica tasks.
```
$ docker service create --name service1 \
--network=net2 --replicas=4 \
alpine:latest sleep 1d
ivfei61h3jvypuj7v0443ow84
```
2. Check that the service has deployed successfully
```
$ docker service ls
ID NAME MODE REPLICAS IMAGE
ivfei61h3jvy service1 replicated 4/4 alpine:latest
```
As long as all replicas are up (`4/4` in the example above) you can proceed to the next command. It may take a minute for the service tasks to deploy while the image is downloaded to each node in your Swarm.
3. Run the `docker network ls` command again from **node3**.
>NOTE: It is important that you run this step from a *worker node* that could previously not see the **net2** network.
```
node3$ docker network ls
NETWORK ID NAME DRIVER SCOPE
abe97d2963b3 bridge bridge local
42295053cd72 docker_gwbridge bridge local
ad4f60192aa0 host host local
3ecc06xxyb7d ingress overlay swarm
uaaw8ljwidoc net2 overlay swarm
1a85d1a0721f none null local
```
The **net2** network is now visible on **node3**. This is because **node3** is running a task for the **service1** service which is using the **net2** network.
Congratulations! You've created an encrypted network, deployed a service to it, and seen that new overlay networks are only made available to worker nodes in the Swarm as and when they run service tasks on the network.
# <a name="clean"></a>Step 4: Clean-up
In this step you will clean-up the service and networks created in this lab.
Execute all of the following commands from **node1** or another Swarm manager node.
1. Remove the service you created in Step 3
```
$ docker service rm service1
service1
```
This will also remove the **net2** network from all worker nodes in the Swarm.
2. Remove the **net1** and **net2** networks
```
$ docker network rm net1 net2
net1
net2
```
Congratulations. You've completed this quick Docker Network Security lab. You've even cleaned up!

security/scanning/README.md (new file, 184 lines)
View file

@ -0,0 +1,184 @@
# Security Scanning with Docker Hub
# Lab Meta
> **Difficulty**: Beginner
> **Time**: Approximately 10 minutes
In this lab you'll learn how to use Docker Security Scanning with Docker Hub
*private repositories*.
You will complete the following steps as part of this lab.
- [Step 1 - Create a private Hub repo](#repo)
- [Step 2 - Pull an image](#pull)
- [Step 3 - Tag and push an image](#tag_push)
- [Step 4 - View scan results](#results)
- [Step 5 - Clean-up](#clean)
# Prerequisites
You will need all of the following to complete this lab:
- A Docker host running **Docker 1.13** or higher
- A **Docker ID** with at least one spare private repository on Docker Hub
# <a name="pull"></a>Step 1: Create a private Hub repo
Docker Security Scanning is a service currently offered for images stored in
Docker Hub private repositories. In this step you will create a new private
repository within your Docker Hub namespace.
>NOTE: This step assumes that you have a Docker Hub account that will allow you
to create a new private repo. If you have used all of the private repos on your
account you will need to re-use one of them. If you do this you will need to
take care not to interfere with images you already have stored in the repo. The
only alternative is to upgrade to a plan that offers more private repos.
1. Log in to Docker Hub with your Docker ID.
2. Click the `Create Repository +` button.
![](images/scan1.png)
3. Give the repo a `name`, `short description`, and make sure that
`Visibility=private` then click `Create`.
![](images/scan2.png)
Now that you have created a new **private** Docker Hub repo, you can proceed to
the next step.
# <a name="pull"></a>Step 2: Pull an image
In this step you'll pull an image that you will use in Step 3.
1. Use the `docker pull` command to pull a copy of the `alpine:edge` image.
```
node1$ docker image pull alpine:edge
edge: Pulling from library/alpine
71c5a0cc58e4: Pull complete
Digest: sha256:99588bc8883c955c157...0c223e6c7cabca5f600a3e9f8d5cd
Status: Downloaded newer image for alpine:edge
```
2. Confirm that the image was pulled successfully.
```
node1$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine edge 8914de95a28d 2 weeks ago 4 MB
```
You will use this image in the next step.
# <a name="tag_push"></a>Step 3: Tag and push an image
In this step you'll `tag` the image that you pulled in the previous step so
that it is associated with the **private** Docker Hub repo you created in Step 1.
Be sure to substitute `nigelpoulton` with your own Docker ID in the steps below.
1. Tag the image so that it can be pushed to your newly created repo.
```
node1$ docker image tag alpine:edge nigelpoulton/scan:v1
```
This command has tagged the `alpine:edge` image so that it can be pushed to
the `nigelpoulton/scan` repo (remember to replace `nigelpoulton` with your
own Docker ID). It has also given it the `v1` tag.
2. Verify that the new tag exists.
```
node1$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine edge 8914de95a28d 2 weeks ago 4 MB
nigelpoulton/scan v1 8914de95a28d 2 weeks ago 4 MB
```
Notice that both lines show the same `IMAGE ID` but have different values for
`REPOSITORY` and `TAG`. This is because the exact same image has been tagged
twice.
3. From the CLI of your Docker host, log in to Docker Hub with your Docker ID.
Be sure to use your own Docker ID.
```
node1$ docker login
Username: nigelpoulton
Password:
Login Succeeded
```
4. Push the newly tagged image.
Be sure to substitute the image tag below with the correct image from your
own environment.
```
node1$ docker image push nigelpoulton/scan:v1
The push refers to a repository [docker.io/nigelpoulton/scan]
ff7d0c6cd736: Mounted from library/alpine
v1: digest: sha256:99588bc8883c9...5f600a3e9f8d5cd size: 528
```
Congratulations. In this step you tagged and pushed an image to your newly
created *private repo* on Docker Hub.
# <a name="results"></a>Step 4: View scan results
In this step you'll log back in to Docker Hub and view the scan results of the
image you pushed in Step 3.
If you followed the exercise and used the `alpine:edge` image, the scanning
may have completed by the time you log back in to Docker Hub. If you used a
different image, especially a larger image, it might take longer for the image
scan to complete.
1. Log back in to Docker Hub from your web browser.
2. Navigate to the repo you created in Step 1 and click the `Tags` tab.
3. View the high-level scan results and feel free to click into them for more
details.
If the scan is still in progress you may want to grab a coffee and refresh
the page in a couple of minutes. There are occasions when scan jobs can get
queued and take a while to complete. If your scan is taking a long time to
complete it might be worth searching Docker Hub for the `alpine:edge` image
and exploring the scan results of that image.
**Congratulations**, you have completed this lab on Security Scanning with Docker
Hub.
# <a name="clean"></a>Step 5: Clean-up
In this step you will remove all images and containers on the host and clean up any other artifacts created in this lab.
1. Remove all images on the host.
This command will remove **all** images on your Docker host. Only perform this step if you know you will not use these images again.
```
$ docker image rm $(docker image ls -aq)
<Snip>
```
2. Remove all containers on the host.
This command will remove **all** containers on your Docker host. Only perform this step if you know you do not need any of the containers running on your system.
```
$ docker container rm $(docker container ls -q) -f
<Snip>
```
3. Log on to Docker Hub and delete the repository that you created for this demo.
`click on the repo` > `click on the Settings tab of the repo` > `click Delete` and follow the instructions.

Binary data: security/scanning/images/scan1.png (new file, 51 KiB; not displayed)

Binary data: security/scanning/images/scan2.png (new file, 47 KiB; not displayed)

Binary data: security/scanning/images/scan3.png (new file, 80 KiB; not displayed)

View file

@ -21,17 +21,18 @@ You will need all of the following to complete this lab:
- A Linux-based Docker Host with seccomp enabled
- Docker 1.10 or higher (preferably 1.12 or higher)
- This lab was created using Ubuntu 16.04 and Docker 17.04.0-ce. If you are using older versions of Docker you may need to replace `docker container run` commands with `docker run` commands.
The following commands show you how to check if seccomp is enabled in your system's kernel:
Check from Docker 1.12 or higher
```
$ docker info | grep seccomp
Security Options: apparmor seccomp
Security Options: apparmor seccomp
```
If the above output does not return a line with `seccomp` then your system does not have seccomp enabled in its kernel.
Check from the Linux command line
Check from the Linux command line
```
$ grep SECCOMP /boot/config-$(uname -r)
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
@ -48,7 +49,7 @@ Docker uses seccomp in *filter mode* and has its own JSON-based DSL that allows
The following example command starts an interactive container based off the Alpine image and starts a shell process. It also applies the seccomp profile described by `<profile>.json` to it.
```
$ sudo docker run -it --rm --security-opt seccomp=<profile>.json alpine sh ...
$ docker container run -it --rm --security-opt seccomp=<profile>.json alpine sh ...
```
The above command sends the JSON file from the client to the daemon where it is compiled into a BPF program using a [thin Go wrapper around libseccomp](https://github.com/seccomp/libseccomp-golang).
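For reference, here is a minimal sketch of what such a `<profile>.json` file can contain. This example is an empty whitelist that blocks every syscall (similar in spirit to the `deny.json` profile used later in this lab); it is illustrative rather than one of the lab's files:

```
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": [
    "SCMP_ARCH_X86_64",
    "SCMP_ARCH_X86",
    "SCMP_ARCH_X32"
  ],
  "syscalls": []
}
```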
@ -72,21 +73,19 @@ In this step you will clone the lab's GitHub repo so that you have the seccomp p
2. Change into the `labs/security/seccomp` directory.
```
$ cd labs/security/seccomp
$ cd labs/security/seccomp/seccomp-profiles
```
The remaining steps in this lab will assume that you are running commands from this `labs/security/seccomp` directory. This will be important when referencing the seccomp profiles on the various `docker run` commands throughout the lab.
# <a name="test"></a>Step 2: Test a seccomp profile
> **NOTE TO LAB OWNER: THIS STEP DOES NOT CURRENTLY WORK. I'M GUESSING THIS IS BECAUSE THE SECCOMP PROFILE IS APPLIED BEFORE THE CONTAINER IS STARTED AS PER GITHUB ISSUES CITED LATER IN THE LAB GUIDE. I'M LEAVING IT IN FOR THE TIME BEING (NIGELPOULTON@HOTMAIL.COM) BUT AM RAISING THE QUESTION OF WHETHER IT COULD BE REMOVED IN FAVOUR OF STEP 2 WHICH DEMOS SIMILAR CAPABILITIES**
In this step you will use the `deny.json` seccomp profile included in the lab guide's repo. This profile has an empty syscall whitelist meaning all syscalls will be blocked. As part of the demo you will add all *capabilities* and effectively disable *apparmor* so that you know that only your seccomp profile is preventing the syscalls.
1. Use the `docker run` command to try to start a new container with all capabilities added, apparmor unconfined, and the `seccomp-profiles/deny.json` seccomp profile applied.
```
$ sudo docker run --rm -it --cap-add ALL --security-opt apparmor=unconfined --security-opt seccomp=seccomp-profiles/deny.json alpine sh
$ docker container run --rm -it --cap-add ALL --security-opt apparmor=unconfined --security-opt seccomp=seccomp-profiles/deny.json alpine sh
docker: Error response from daemon: exit status 1: "cannot start a container that has run and stopped\n".
```
@ -119,7 +118,7 @@ Unless you specify a different profile, Docker will apply the [default seccomp p
1. Start a new container with the `--security-opt seccomp=unconfined` flag so that no seccomp profile is applied to it.
```
$ sudo docker run --rm -it --security-opt seccomp=unconfined debian:jessie sh
$ docker container run --rm -it --security-opt seccomp=unconfined debian:jessie sh
```
2. From the terminal of the container run a `whoami` command to confirm that the container works and can make syscalls back to the Docker Host.
@ -135,6 +134,7 @@ Unless you specify a different profile, Docker will apply the [default seccomp p
/ # whoami
root
```
If you try running the above `unshare` command from a container with the default seccomp profile applied it will fail with an `Operation not permitted` error.
4. Exit the container.
@ -177,7 +177,7 @@ The `default-no-chmod.json` profile is a modification of the `default.json` prof
1. Start a new container with the `default-no-chmod.json` profile and attempt to run the `chmod 777 / -v` command.
```
$ sudo docker run --rm -it --security-opt seccomp=default-no-chmod.json alpine sh
$ docker container run --rm -it --security-opt seccomp=default-no-chmod.json alpine sh
/ # chmod 777 / -v
chmod: /: Operation not permitted
@ -190,7 +190,7 @@ The `default-no-chmod.json` profile is a modification of the `default.json` prof
3. Start another new container with the `default.json` profile and run the same `chmod 777 / -v`.
```
$ sudo docker run --rm -it --security-opt seccomp=default.json alpine sh
$ docker container run --rm -it --security-opt seccomp=default.json alpine sh
/ # chmod 777 / -v
mode of '/' changed to 0777 (rwxrwxrwx)
@ -198,7 +198,7 @@ The `default-no-chmod.json` profile is a modification of the `default.json` prof
The command succeeds this time because the `default.json` profile has the `chmod()`, `fchmod()`, and `fchmodat()` syscalls included in its whitelist.
4. Exit the container.
4. Exit the container.
5. Check both profiles for the presence of the `chmod()`, `fchmod()`, and `fchmodat()` syscalls.
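One hedged way to do that check (a sketch, assuming both profile files are in your working directory) is to `grep` each profile for the syscall names:

```
$ grep -wE 'chmod|fchmod|fchmodat' default.json
$ grep -wE 'chmod|fchmod|fchmodat' default-no-chmod.json
```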
@ -298,7 +298,7 @@ Profiles can contain more granular filters based on the value of the arguments t
* `value` is a parameter for the operation
* `valueTwo` is used only for SCMP_CMP_MASKED_EQ
The rule only matches if **all** args match. Add multiple rules to achieve the effect of an OR.
The rule only matches if **all** args match. Add multiple rules to achieve the effect of an OR.
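As a hedged illustration of the syntax (field names as described above; the syscall and values are chosen for illustration, not taken from the lab profiles), an argument-filtered entry in a profile's `syscalls` array might look like this:

```
{
  "name": "personality",
  "action": "SCMP_ACT_ALLOW",
  "args": [
    {
      "index": 0,
      "value": 8,
      "valueTwo": 0,
      "op": "SCMP_CMP_EQ"
    }
  ]
}
```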
`strace` can be used to get a list of all system calls made by a program.
It's a very good starting point for writing seccomp policies.

View file

@ -0,0 +1,211 @@
# Secrets
# Lab Meta
> **Difficulty**: Intermediate
> **Time**: Approximately 15 minutes
In this lab you'll learn how to use Docker Universal Control Plane (UCP) to
create a *secret* and use it with an application.
You will complete the following steps as part of this lab.
- [Step 1 - Create a Secret](#secret)
- [Step 2 - Deploy an App](#deploy)
- [Step 3 - Test the App](#test)
- [Step 4 - View the Secret](#view)
# Prerequisites
You will need all of the following to complete this lab:
- A UCP cluster comprising nodes running **Docker 1.13** or higher
- The public IP or public DNS name of at least one of your UCP cluster nodes
- An account in UCP with permission to create secrets and deploy applications
# <a name="secret"></a>Step 1: Create a Secret
In this step you'll use the UCP web UI to create a new *secret*.
1. Log in to UCP (your lab instructor will provide you with account details).
2. From within UCP click `Resources` > `Secrets` > `+ Create Secret`.
3. In the **NAME** field give the secret a name. The name is arbitrary but must
be unique.
4. Enter a text string as the **VALUE** of the secret.
5. Leave all other options as their default values.
The screenshot below shows a secret called **wp-1** with some plain text as
the value of the secret.
![](images/secret1.png)
The screenshot below shows the **DETAILS** of the secret including its `ID`,
`Name`, and `Permissions Label`. Notice that you cannot view the value of the
secret from within the UCP UI. This is because it is stored securely in the
cluster store. You can also click on the **SERVICES** tab to see any
services/applications that are using the secret. Right now there will be no
services using the secret.
![](images/secret2.png)
Congratulations, you've created a secret. The next step will show you how to
deploy an application that will use it.
# <a name="deploy"></a>Step 2: Deploy the App
In this step we'll deploy a simple WordPress app. To do that we'll complete the
following high level tasks:
- Create a network for the application
- Create a service for the front-end portion of the application
- Create a service for the back-end portion of the application
Perform all of the following actions from the UCP web UI.
1. Click the `Resources` tab > `Networks` > `+ Create Network`.
2. Name the network **wp-net** and leave everything else as defaults.
3. Click `Create` to create the network.
The `wp-net` is now created and ready to be used by the app.
4. Click `Services` from within the `Resources` tab and then click `Create a
Service`.
5. Give the service the following values and leave all others as default:
- **Details tab\SERVICE NAME**: wp-db
- **Details tab\IMAGE NAME**: mysql:5.7
- **Resources tab\NETWORKS**: wp-net
- **Environment tab\SECRET NAME**: wp-1
- **Environment tab\ENVIRONMENT VARIABLE NAME**: MYSQL_ROOT_PASSWORD_FILE
- **Environment tab\ENVIRONMENT VARIABLE VALUE**: /run/secrets/wp-1
Let's quickly cover the six values we've configured above. The service
name is arbitrary, but `wp-db` suggests it will be our WordPress database. We
are using the `mysql:5.7` image because we know this works with the demo. We
are attaching the service to the `wp-net` network and we are telling it to use
the `wp-1` secret we created in Step 1.
The application also needs to know where to find the secret that will give it
access to the database. It expects this value in an environment variable
called `MYSQL_ROOT_PASSWORD_FILE`. By default, Docker will mount secrets into
the container filesystem at `/run/secrets/<secret-name>`, so we tell the
application to expect the secret in the `/run/secrets/wp-1` file. This is
mounted as an in-memory read-only filesystem.
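If you prefer the CLI, a rough equivalent of the back-end service configured above is shown below. This is a sketch (run from a client with access to the cluster) rather than part of the lab steps; the names and values match those entered in the UI:

```
$ docker service create --name wp-db \
    --network wp-net \
    --secret wp-1 \
    -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/wp-1 \
    mysql:5.7
```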
6. Click `Deploy Now` to deploy the service.
7. Deploy the front-end service by clicking `Services` from within the
`Resources` tab and then click `Create a Service`.
8. Give the service the following values and leave all others as default:
- **Details tab\SERVICE NAME**: wp-fe
- **Details tab\IMAGE NAME**: wordpress:latest
- **Resources tab\Ports\INTERNAL PORT**: 80
- **Resources tab\Ports\PUBLIC PORT**: 8000
- **Resources tab\NETWORKS**: wp-net
- **Environment tab\SECRET NAME**: wp-1
- **Environment tab\ENVIRONMENT VARIABLE NAME**: WORDPRESS_DB_PASSWORD_FILE
- **Environment tab\ENVIRONMENT VARIABLE VALUE**: /run/secrets/wp-1
- **Environment tab\ENVIRONMENT VARIABLE NAME**: WORDPRESS_DB_HOST
- **Environment tab\ENVIRONMENT VARIABLE VALUE**: wp-db:3306
We are calling this service `wp-fe` indicating it is the WordPress front-end.
We're telling it to use the `wordpress:latest` image and expose port `80` from
the service/container on port `8000` on the host - this will allow us to
connect to the service on port `8000` in a later step. We're attaching it to
the same `wp-net` network and mounting the same secret. This time we're
adding an additional environment variable `WORDPRESS_DB_HOST=wp-db:3306`.
This is telling the service that it can talk to the database backend via a
service called `wp-db` on port `3306`.
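Again purely as a hedged CLI sketch, the front-end service corresponds roughly to:

```
$ docker service create --name wp-fe \
    --network wp-net \
    --secret wp-1 \
    --publish 8000:80 \
    -e WORDPRESS_DB_PASSWORD_FILE=/run/secrets/wp-1 \
    -e WORDPRESS_DB_HOST=wp-db:3306 \
    wordpress:latest
```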
9. Click `Deploy Now` to deploy the service
The application is now deployed!
As shown in the diagram below you have deployed two services. `wp-fe` is the
application's web front end, and `wp-db` is the application's database back-end.
Both services are deployed on the `wp-net` network and both have the `wp-1`
secret mounted to a file called `wp-1` in the `/run/secrets` in-memory
filesystem. You also inserted environment variables into each service telling
them things like where to find the database service and where to find the
secret.
![](images/secret3.png)
You also published the web front end portion of the application (the `wp-fe`
service) on port `8000`. This means that you should be able to connect to the
application on port `8000` of any of the nodes in your DDC/UCP cluster.
# <a name="test"></a>Step 3: Test the Application
In this step you will use a web browser to connect to the application. You will
know the application is working correctly if the web page renders correctly and
there are no database related errors displayed.
1. Open a web browser and point it to the public IP or DNS name of any of the
nodes in your UCP cluster on port `8000`
> NOTE: Your lab instructor will be able to give you details of the public IP
or public DNS of your UCP nodes.
![](images/secret4.png)
# <a name="view"></a>Step 3: View the Secret
In this step you'll log on to the terminal of the container in the `wp-fe`
service and verify that the secret is present.
Perform all of the following steps from within the UCP web UI.
1. Click the `Resources` tab > `Services` and click on the `wp-fe` service.
2. Click the `Tasks` tab.
3. Click the running task (there will only be one).
4. Click the `Console` tab for the task.
5. Make sure that the command to run is showing as `sh` and then click `Run`.
You are now on the terminal of the `wp-fe` service's container.
6. Run the following command to verify that the secret file exists.
```
# ls -l /run/secrets/
total 4
-r--r--r-- 1 root root 81 Mar 17 10:20 wp-1
```
The `/run/secrets/` directory is mounted read-only as an in-memory
filesystem that is only mounted into this container and uses reserved
memory on the Docker host. This ensures its contents are secure.
7. View the contents of the secret.
```
# cat /run/secrets/wp-1
I want to be with those who know secret things or else alone - Rainer Maria Rilke
```
The contents of the secret are visible as unencrypted plain text. This is so
that applications can use them as passwords. However, they are issued to the
container via a secure network and mounted as read-only in an in-memory
filesystem that is not accessible to the Docker host or other containers
(unless other containers have also been granted access to it).
**Congratulations**, you have completed this lab on Secrets management.

Binary data: security/secrets-ddc/images/secret1.png (new file, 47 KiB; not displayed)

Binary data: security/secrets-ddc/images/secret2.png (new file, 37 KiB; not displayed)

Binary data: security/secrets-ddc/images/secret3.png (new file, 59 KiB; not displayed)

Binary data: security/secrets-ddc/images/secret4.png (new file, 92 KiB; not displayed)

security/secrets/README.md (new file, 292 lines)
View file

@ -0,0 +1,292 @@
# Secrets
# Lab Meta
> **Difficulty**: Intermediate
> **Time**: Approximately 15 minutes
In this lab you'll learn how to create and manage *secrets* with Docker.
You will complete the following steps as part of this lab.
- [Step 1 - Create a Secret](#create)
- [Step 2 - Manage Secrets](#manage)
- [Step 3 - Access the secret within an app](#use)
- [Step 4 - Clean-up](#clean)
In this lab the terms *service task* and *container* are used interchangeably.
In all examples in the lab a *service task* is a container that is running as
part of a service.
# Prerequisites
You will need all of the following to complete this lab:
- A Docker Swarm cluster running **Docker 1.13** or higher
# <a name="create"></a>Step 1: Create a Secret
In this step you'll use the `docker secret create` command to create a new
*secret*.
Perform the following command from a *manager* node in your Swarm. This lab will assume that you are using **node1** in your lab.
1. Create a new text file containing the text you wish to use as your secret.
```
node1$ echo "secrets are important" > sec.txt
```
The command shown above will create a new file called `sec.txt` in your
working directory containing the string **secrets are important**. The text
string in the file is arbitrary but should be kept secure. You should follow
any existing corporate guidelines about keeping secrets safe.
2. Confirm that the file was created.
```
node1$ ls -l
total 4
-rw-r--r-- 1 root root 10 Mar 21 18:40 sec.txt
```
3. Use the `docker secret create` command to create a new secret using the file
created in the previous step.
```
node1$ docker secret create sec1 ./sec.txt
ftu76ghgsk7f9fmcrj3wx3xcd
```
The command returns the ID of the newly created secret.
Congratulations. You have created a new secret called `sec1`.
If you created the secret from a remote Docker client, it would be sent to a
manager node in the Swarm over a mutual TLS Connection. Once the secret is
received on the manager node it is securely stored in the Swarm's Raft store
using the Swarm's native encryption.
You can now delete the `sec.txt` file used to create the secret.
# <a name="manage"></a>Step 2: Manage Secrets
In this step you'll use the `docker secret` sub-command to list and inspect
secrets.
Before going any further it's important to note that once a secret is created
it is securely stored in the Swarm's encrypted Raft store. This means that you
cannot view it in plain text using the `docker secret` command.
Perform all of the following commands from a Swarm *manager*. The lab assumes you will be using **node1** in your lab.
1. List existing secrets with the `docker secret ls` command.
```
node1$ docker secret ls
ID NAME CREATED UPDATED
ftu76ghg...rj3wx3xcd sec1 11 seconds ago 11 seconds ago
```
2. Inspect the **sec1** secret.
```
node1$ docker secret inspect sec1
[
{
"ID": "ftu76ghgsk7f9fmcrj3wx3xcd",
"Version": {
"Index": 113
},
"CreatedAt": "2017-03-21T18:41:08.790769302Z",
"UpdatedAt": "2017-03-21T18:41:08.790769302Z",
"Spec": {
"Name": "sec1"
}
}
]
```
Notice that the `docker secret inspect` command does not display the
unencrypted contents of the secret.
You can use the `docker secret rm` command to delete secrets. To delete the
**sec1** secret you would use the command `docker secret rm sec1`. **Do not
delete the sec1 secret as you will use it in the next section.**
# <a name="use"></a>Step 3: Access the secret within an app
In this step you'll deploy a service and grant it access to the secret. You'll
then `exec` on to a task in the service and view the unencrypted contents of the
secret.
Perform the following commands from a *manager* node in the Swarm and be sure
to remember that the outputs of the commands might be different in your lab.
E.g. service tasks in your lab might be scheduled on different nodes to those
shown in the examples below.
1. Create a new service and attach the `sec1` secret.
```
node1$ docker service create --name sec-test --secret="sec1" redis:alpine
p858ush7oeei8647na2xa12sc
```
This command creates a new service called **sec-test**. The service has a
single task (container), is given access to the **sec1** secret and is based
on the `redis:alpine` image.
2. Verify the service is running.
```
node1$ docker service ls
ID NAME MODE REPLICAS IMAGE
p858ush7oeei sec-test replicated 1/1 redis:alpine
```
3. Inspect the `sec-test` service to verify that the secret is associated with
it.
```
node1$ docker service inspect sec-test
[
{
"ID": "p858ush7oeei8647na2xa12sc",
"Version": {
"Index": 116
},
"CreatedAt": "2017-03-21T19:37:52.254797962Z",
"UpdatedAt": "2017-03-21T19:37:52.254797962Z",
"Spec": {
"Name": "sec-test",
"TaskTemplate": {
"ContainerSpec": {
"Image": "redis:alpine@sha256:9cd405cd...fb4ec7bdc3ee7",
"DNSConfig": {},
"Secrets": [
{
"File": {
"Name": "sec1",
"UID": "0",
"GID": "0",
"Mode": 292
},
"SecretID": "ftu76ghgsk7f9fmcrj3wx3xcd",
"SecretName": "sec1"
<Snip>
```
The output above shows that the `sec1` secret (ID:ftu76ghgsk7f9fmcrj3wx3xcd)
is successfully associated with the `sec-test` service. This is important as
it is what ultimately grants tasks within the service access to the secret.
4. Obtain the name of any of the tasks in the `sec-test` service (if you've been
following along there will only be one task running in the service).
```
# Run the following docker service ps command to see which node
# the service task is running on.
node1$ docker service ps sec-test
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE
9qqp...htd sec-test.1 redis:alpine node1 Running Running 8 mins..
# Log on to the node running the service task (node1 in this example, but
# might be different in your lab) and run a docker ps command.
node1$ docker ps --filter name=sec-test
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5652c1688f40 redis@sha256:9cd..c3ee7 "docker-entrypoint..." 15 mins Up 15 mins 6379/tcp sec-test.1.9qqp...vu2aw
```
You will use the `CONTAINER ID` from the output above in the next step.
> NOTE: The two commands above start out by listing all the tasks in the
`sec-test` service. Part of the output of the first command shows the `NODE`
that each task is running on - in the example above this was a single task
running on **node1**. The next command (`docker ps`) lists all running
containers on **node1** and filters the results to show just the containers
where the name starts with **sec-test** - this means that only containers
(tasks) that are part of the `sec-test` service are displayed.
5. Use the `docker exec` command to get a shell prompt on to the `sec-test`
service task. Be sure to substitute the Container ID in the command below with
the container ID from your environment (see output of previous step).
```
node1$ docker exec -it 5652c1688f40 sh
data#
```
The `data#` prompt is a shell prompt inside the service task.
6. List the contents of the container's `/run/secrets` directory.
```
data# ls -l /run/secrets
total 4
-r--r--r-- 1 root root 10 Mar 21 19:37 sec1
```
Secrets are only shared to *service tasks/containers* that are granted access
to them, and the secrets are shared with the *service task* via the TLS
connections that already exists between nodes in the Swarm. Once a *node* has
a secret it mounts it as a regular file into an in-memory filesystem inside
the authorized service task (container). This file is mounted at
`/run/secrets` with the same name as the secret. In the example above, the
`sec1` secret is mounted as a file called **sec1**.
7. View the unencrypted contents of the *secret*.
```
data# cat /run/secrets/sec1
secrets are important
```
It's important to note several things about this unencrypted secret.
- The secret is only made available to services that have been specifically
granted access to it (in our example this was via the `docker service create`
command).
- The secret is issued to the service task by a manager in the Swarm via a
mutually authenticated TLS connection.
- Service tasks and nodes cannot request a secret - secrets are always issued
to the node/task by a manager as part of a service deployment or update.
- Secrets are only ever mounted to in-memory filesystems inside of authorized
containers/tasks and are never persisted to disk on worker nodes or containers.
- Nodes do not have access to the unencrypted secret.
- Other tasks and containers on the same node do not get access to the secret.
- As soon as a node is no longer running a task for a service it will delete
the secret from memory.
**Congratulations**, you have completed this lab on Secrets management.
# <a name="clean"></a>Step 5: Clean-up
In this step you will remove all secrets and services, as well as clean up any other artifacts created in this lab.
1. Remove all secrets on the host.
This command will remove **all** secrets on your Docker host. Only perform this step if you know you will not use these secrets again.
```
$ docker secret rm $(docker secret ls -q)
<Snip>
```
2. Remove all services on the host.
This command will remove **all** services on your Docker host. Only perform this step if you know you do not need any of the services running on your system.
```
$ docker service rm $(docker service ls -q)
<Snip>
```
3. If you haven't already done so, delete the file that you used as the source of the secret data in Step 1. The lab assumed this was **node1** in your lab.
```
$ rm sec.txt
```

security/swarm/README.md (new file, 386 lines)
View file

@ -0,0 +1,386 @@
# Swarm Mode Security
# Lab Meta
> **Difficulty**: Beginner
> **Time**: Approximately 15 minutes
In this lab you'll build a new Swarm and view some of the built-in security
features of *Swarm mode*. These include *join tokens* and *client certificates*.
You will complete the following steps as part of this lab.
- [Step 1 - Create a new Swarm](#swarm_init)
- [Step 2 - Add a new Manager](#add_mgr)
- [Step 3 - Add a new Worker](#add_wrkr)
- [Step 4 - Rotate Join Keys](#rotate_join)
- [Step 5 - View certificates](#certs)
- [Step 6 - Rotate certificates](#rotate_cert)
# Prerequisites
You will need all of the following to complete this lab:
- Four Linux-based Docker hosts running **Docker 1.13** or higher and **not**
configured for Swarm Mode. You should use **node1**, **node2**, **node3**, and
**node4** from your lab.
- This lab was built and tested using Ubuntu 16.04
>NOTE: Things like IP addresses and Swarm *join tokens* will be different in
your lab. Remember to substitute the values shown in this lab guide with the real values from your own lab.
# <a name="swarm_init"></a>Step 1: Create a new Swarm
In this step you'll initialize a new Swarm and verify that the operation worked.
For this lab to work you will need your Docker hosts running in
*single-engine mode* and not in *Swarm mode*.
1. Execute the following command on **node1**.
```
node1$ docker swarm init
Swarm initialized: current node (kgwuvt1oqhqjsht0qcsq67rvu) is now a
manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-4h5log5xpip966...y6gdy1-44v7nl9i0...k4fb8dlf21 \
172.31.45.44:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and
follow the instructions.
```
The command above has created a brand new Swarm and made **node1** the first
*manager* of the Swarm. The first manager of any Swarm is automatically made
the *leader* and the *Certificate Authority (CA)* for the Swarm. If you
already have a CA and do not want Swarm to generate a new one, you can use
the `--external-ca` flag to specify an external CA.
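As a sketch only - if you wanted the Swarm to use an existing corporate CA rather than generate its own, the init command would look something like the following, where the URL is a placeholder for your own CA's signing endpoint:
```
node1$ docker swarm init --external-ca protocol=cfssl,url=https://ca.example.com
```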
2. Verify that the Swarm was created successfully and that **node1** is the
leader of the new Swarm with the following command.
```
node1$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
kgwuvt...0qcsq67rvu * node1 Ready Active Leader
```
The command above will list all nodes in the Swarm. Notice that the output
only lists one node and that the node is also the *leader*.
3. Run a `docker info` command and view the Swarm related information.
```
node1$ docker info
...
<Snip>
Swarm: active
NodeID: kgwuvt1oqhqjsht0qcsq67rvu
Is Manager: true
ClusterID: ohgi9ctpbev24dl6daf7insou
Managers: 1
Nodes: 1
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
...
```
The important things to note from the output above are: `NodeID`,
`ClusterID`, and `CA Configuration`.
It is important to know that the `docker swarm init` command performs at least
two important security-related operations:
- It creates a new CA (unless you specify `--external-ca`) and creates a
key-pair to secure communications within the Swarm
- It creates two *join tokens* - one to join new *workers* to the Swarm, and the
other to join new *managers* to the Swarm.
We will look at these in the following steps.
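As an aside, if you only need the raw token value (for example, to feed into a script), the `join-token` command accepts a `-q`/`--quiet` flag that prints just the token:
```
node1$ docker swarm join-token -q worker
node1$ docker swarm join-token -q manager
```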
# <a name="add_mgr"></a>Step 2: Add a new Manager
Now that you have a Swarm initialized, it's time to add another Manager.
In order to add a new Manager you must know the manager *join token* for the
Swarm you wish to join it to. The process below will show you how to obtain the
manager *join token* and use it to add **node2** as a new manager in the Swarm.
1. Use the `docker swarm join-token` command to get the *manager* join token.
```
node1$ docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-4h5log5xpip966c6c...z2cy6gdy1-7y6lqwu6...goyf26yyg2 \
172.31.45.44:2377
```
The output of the command gives you the full command, including the join
token, that you can run on any Docker node to join it as a manager.
> NOTE: The join token includes a digest of the root CA's certificate, as well as a
randomly generated secret. The format is as follows:
**SWMTKN-1-< digest-of-root-CA-cert >-< random-secret >**.
2. Copy and paste the command into **node2**. Remember to use the command and
join token from your lab, and not the values shown in this lab guide.
```
node2$ docker swarm join \
--token SWMTKN-1-4h5log5xpip966c6c...z2cy6gdy1-7y6lqwu6...goyf26yyg2 \
172.31.45.44:2377
This node joined a swarm as a manager.
```
3. Run the `docker node ls` command from either **node1** or **node2** to list
the nodes in the Swarm.
```
node1$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
ax2cmh63...tvjp8trs4 node2 Ready Active Reachable
kgwuvt1o...qcsq67rvu * node1 Ready Active Leader
```
The *join token* used in the commands above will join any node to your Swarm as
a *manager*. This means it is vital that you keep your join tokens private -
anyone in possession of the manager token can join nodes to the Swarm as managers.
# <a name="add_wrkr"></a>Step 3: Add a new Worker
Adding a worker is the same process as adding a manager. The only difference is
the token used. Every Swarm maintains one *manager* join token and one
*worker* join token.
1. Run a `docker swarm join-token` command from any of the managers in your
Swarm to obtain the command and token required to add a new worker node.
```
node1$ docker swarm join-token worker
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-4h5log5xpip966c6c...z2cy6gdy1-44v7nl9...b8dlf21 \
172.31.45.44:2377
```
Notice that the join tokens for managers and workers share some of the same
values. Both start with "SWMTKN-1", and both share the same Swarm root CA
digest. It is only the last part of the token that determines whether
the token is for a manager or a worker.
2. Switch to **node3** and paste in the command from the previous step.
```
node3$ docker swarm join \
--token SWMTKN-1-4h5log5xpip966c6c...z2cy6gdy1-44v7nl9...b8dlf21 \
172.31.45.44:2377
This node joined a swarm as a worker.
```
3. Switch back to one of the manager nodes (**node1** or **node2**) and run a
`docker node ls` command to verify the node was added as a worker.
```
node1$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
ax2cm...vjp8trs4 node2 Ready Active Reachable
kgwuv...csq67rvu * node1 Ready Active Leader
mfg9d...inwonsjh node3 Ready Active
```
The output above shows that **node3** was added to the Swarm and is
operating as a worker - the lack of a value in the **MANAGER STATUS**
column indicates that the node is a *worker*.
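If you prefer not to rely on an empty column, you can also query a node's role directly. The following sketch uses a Go template with `docker node inspect` and should print `worker` for **node3**:
```
node1$ docker node inspect --format '{{ .Spec.Role }}' node3
```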
# <a name="rotate_join"></a>Step 4: Rotate Join Keys
In this step you will rotate the Swarm's *worker* join token. This will invalidate
the worker join token used in previous steps. It will not affect the status of
workers already joined to the Swarm - all existing workers will continue to be
valid members of the Swarm.
You will test that the *rotate operation* succeeded by attempting to add a new
worker with the old key. This operation will fail. You will then retry the
operation with the new key. This time it will succeed.
1. Rotate the existing worker key by executing the following command from either
of the Swarm managers.
```
node1$ docker swarm join-token --rotate worker
Successfully rotated worker join token.
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-4h5log5xpip...cy6gdy1-55k4ywd...z5xtns4eq \
172.31.45.44:2377
```
Notice that the new join token still starts with `SWMTKN-1` and keeps the
same digest of the Swarm's root CA (`4h5log5...`). It is only the last part of
the token that has changed. This is because the new token is still a Swarm
join token for the same Swarm. The system has only rotated the *secret*
used to add new workers (the last portion).
2. Log on to **node4** and attempt to join the Swarm using the **old** join
token. You should be able to find the old join token in the terminal window of
**node3** from a previous step.
```
node4$ docker swarm join \
--token SWMTKN-1-4h5log5xpi...duz2cy6gdy1-44v7nl9...4fb8dlf21 \
172.31.45.44:2377
Error response from daemon: rpc error: code = 3 desc = A valid join token
is necessary to join this cluster
```
The operation fails because the join token is no longer valid.
3. Retry the previous operation using the new join token given as the output to
the `docker swarm join-token --rotate worker` command in a previous step.
```
node4$ docker swarm join \
--token SWMTKN-1-4h5log5...wzqlduz2cy6gdy1-55k4ywd...xtns4eq \
172.31.45.44:2377
This node joined a swarm as a worker.
```
Rotating join tokens is something you will need to do if you suspect an
existing join token has been compromised. It is important to manage your
join tokens carefully, because unauthorized nodes joining the Swarm are a
security risk.
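The same operation works for the manager token. If you suspect the manager token has been compromised, rotating it along the following lines invalidates the old token without affecting existing managers:
```
node1$ docker swarm join-token --rotate manager
```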
# <a name="certs"></a>Step 5: View certificates
Each time a new *manager* or *worker* joins the Swarm it is issued with a
*client certificate*. This client certificate is used in conjunction with the
existing Swarm public key infrastructure (PKI) to authenticate the node and
encrypt communications.
There are three important things to note about the *client certificate*:
1. It specifies which Swarm the node is an authorized member of
2. It contains the node ID
3. It specifies the role the node is authorized to perform in the Swarm
(*worker* or *manager*)
Execute the following command from any node in your Swarm to view the node's
*client certificate*.
```
node1$ openssl x509 -in /var/lib/docker/swarm/certificates/swarm-node.crt -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            59:53:84:47:3a:2d:15:5b:f0:39:46:93:dd:21:68:ad:70:62:40:d1
        Signature Algorithm: ecdsa-with-SHA256
        Issuer: CN=swarm-ca
        Validity
            Not Before: Mar 14 11:42:00 2017 GMT
            Not After : Jun 12 12:42:00 2017 GMT
        Subject: O=ohgi9...insou, OU=swarm-manager, CN=kgwuvt...csq67rvu
...
```
The important things to note about the output above are the three fields on the
bottom line:
- The Organization (O) field contains the Swarm ID
- The Organizational Unit (OU) field contains the node's *role*
- The Common Name (CN) field contains the node's ID
These three fields make sure the node operates in the correct Swarm, operates in
the correct role, and is the node it says it is.
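If you only want those three fields rather than the full certificate dump, a shorter variant of the same `openssl` command prints just the subject line:
```
node1$ openssl x509 -in /var/lib/docker/swarm/certificates/swarm-node.crt -noout -subject
```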
You can use the `docker swarm update --cert-expiry <TIME PERIOD>` command to
change the frequency at which the client certificates in the Swarm are renewed.
The default is 90 days (3 months).
# <a name="rotate_certs"></a>Step 6: Rotate certificates
In this step you'll view the existing certificate rotation period for your
Swarm, and then alter that period.
Perform the following commands from a manager node in your Swarm.
1. Use the `docker info` command to view the existing certificate rotation
period enforced in your Swarm.
```
node1$ docker info
Swarm: active
NodeID: kgwuvt1oqhqjsht0qcsq67rvu
Is Manager: true
ClusterID: ohgi9ctpbev24dl6daf7insou
Managers: 2
Nodes: 4
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
```
The last two lines of the output above show that the current rotation period
(**Expiry Duration**) is **3 months**.
2. Use the `docker swarm update` command to change the rotation period.
```
node1$ docker swarm update --cert-expiry 168h
Swarm updated.
```
The `--cert-expiry` flag accepts time periods in the format `00h00m00s`,
where `h` is for hours, `m` is for minutes, and `s` is for seconds. The
example above sets the rotation period to 168 hours (7 days).
3. Run another `docker info` to check that the value has changed.
```
node1$ docker info
Swarm: active
NodeID: kgwuvt1oqhqjsht0qcsq67rvu
Is Manager: true
ClusterID: ohgi9ctpbev24dl6daf7insou
Managers: 2
Nodes: 4
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 7 days
```
**Congratulations**, you have completed this lab on basic Swarm security.


@ -0,0 +1,158 @@
# Docker Content Trust Basics
# Lab Meta
> **Difficulty**: Beginner
> **Time**: Approximately 10 minutes
In this lab you'll learn how to enable Docker Content Trust as well as perform some basic signing and verification operations.
You will complete the following steps as part of this lab.
- [Step 1 - Enable Docker Content Trust](#enable_dct)
- [Step 2 - Push and sign an image](#push)
- [Step 3 - Clean-up](#clean)
# Prerequisites
You will need all of the following to complete this lab:
- At least one Linux-based Docker host running Docker 1.13 or higher
- The Docker host can be running in Swarm Mode
- This lab was built and tested using Ubuntu 16.04 and Docker 17.04.0-ce
# <a name="enable_dct"></a>Step 1: Enable Docker Content Trust
In this step you will enable Docker Content Trust on a single node. You will test it by pulling an unsigned and a signed image.
Execute all of the commands in this section from **node1** in your lab.
1. Enable Docker Content Trust
```
$ export DOCKER_CONTENT_TRUST=1
```
Docker Content Trust is now enabled on this host and you will no longer be able to pull unsigned images.
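Note that the environment variable only affects the current shell session. If you want Docker Content Trust enabled every time you log in to this host, one option (assuming a bash shell) is to append the export to your profile:
```
$ echo 'export DOCKER_CONTENT_TRUST=1' >> ~/.bashrc
```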
2. Try pulling an unsigned image (any unsigned image will do, you do not have to use the one in this demo)
```
$ docker image pull nigelpoulton/tu-demo
Using default tag: latest
Error: remote trust data does not exist for docker.io/nigelpoulton/tu-demo: notary.docker.io does not have trust data for docker.io/nigelpoulton/tu-demo
```
The operation fails because the image is not signed (no trust data for the image).
3. Try pulling the official `alpine:latest` image
```
$ docker image pull alpine:latest
Pull (1 of 1): alpine:latest@sha256:58e1a1bb75...3f105138f97eb53149673c4
sha256:58e1a1bb75...3f105138f97eb53149673c4: Pulling from library/alpine
627beaf3eaaf: Pull complete
Digest: sha256:58e1a1bb75...3f105138f97eb53149673c4
Status: Downloaded newer image for alpine@sha256:58e1a1bb75...3f105138f97eb53149673c4
Tagging alpine@sha256:58e1a1bb75...3f105138f97eb53149673c4 as alpine:latest
```
This time the operation succeeds. This is because the image is signed - all **official** images are signed.
In this step you have seen how easy it is to enable Docker Content Trust (exporting the `DOCKER_CONTENT_TRUST` environment variable with a value of `1`). You have also proved that it is working by attempting to pull an unsigned image.
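It is also possible to override Docker Content Trust for a single command without touching the environment variable. As a sketch, the `--disable-content-trust` flag on `pull` (and `push`) skips the trust check for just that one operation:
```
$ docker image pull --disable-content-trust nigelpoulton/tu-demo
```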
# <a name="push"></a>Step 2: Push and sign an image
In this step you will tag an image and push it to a new repository within your own namespace on Docker Hub. You will perform this step from the host that you enabled Docker Content Trust on in the previous step. This will ensure that the image gets signed when you push it.
To complete this step you will need a Docker ID.
Execute all of the following commands from **node1** (or whichever node you used for the previous step).
1. Tag the `alpine:latest` image so that it can be pushed to a new repository in your namespace on Docker Hub.
This command will add the following additional tag to the `alpine:latest` image: `nigelpoulton/sec-test:latest`. The format of the tag is **docker-id/repo-name:tag**. Be sure to replace the **docker-id** in the following command with your own Docker ID.
```
$ docker image tag alpine:latest nigelpoulton/sec-test:latest
```
2. Verify the tagging operation worked
```
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine latest 4a415e366388 4 weeks ago 3.99MB
nigelpoulton/sec-test latest 4a415e366388 4 weeks ago 3.99MB
```
Look closely and see that the image with **IMAGE ID** `4a415e366388` has two **REPOSITORY** tags.
3. Log in to Docker Hub with your own Docker ID
```
$ docker login
Login with your Docker ID to push and pull images from Docker Hub...
Username: <your-docker-id>
Password:
Login Succeeded
```
4. Push the image to a new repository in your Docker Hub namespace. Remember to use the image tag you created earlier that includes your own Docker ID.
> NOTE: As part of this `push` operation you will be asked to create passphrases for two new keys:
- A new root key (this only happens the first time you push an image after enabling DCT)
- A repository signing key
```
$ docker image push nigelpoulton/sec-test:latest
The push refers to a repository [docker.io/nigelpoulton/sec-test]
23b9c7b43573: Pushed
latest: digest: sha256:d0a670140...35edb294e4a7a152a size: 528
Signing and pushing trust metadata
You are about to create a new root signing key passphrase...
<Snip>
Enter passphrase for new root key with ID 66997be: <root key passphrase>
Repeat passphrase for new root key with ID 66997be: <root key passphrase>
Enter passphrase for new repository key with ID 7ccd1b4 (docker.io/nigelpoulton/sec-test): <repo key passphrase>
Repeat passphrase for new repository key with ID 7ccd1b4 (docker.io/nigelpoulton/sec-test): <repo key passphrase>
Finished initializing "docker.io/nigelpoulton/sec-test"
Successfully signed "docker.io/nigelpoulton/sec-test":latest
```
The output above shows the image being signed as part of the normal `docker image push` command - no extra commands or steps are required to sign images with Docker Content Trust enabled.
Congratulations. You have pushed and signed an image.
By default the root and repository keys are stored below `~/.docker/trust`.
In the real world you will need to generate strong passphrases for each key and store them in a secure password manager/vault.
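You should also keep an offline backup of the signing keys - a lost root key is very hard to recover from. A minimal sketch, assuming the default key location, is to archive the private key directory and move the archive somewhere safe:
```
$ tar -czvf docker-trust-keys.tar.gz -C ~/.docker/trust private
```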
# <a name="clean"></a>Step 3: Clean-up
The following commands will clean-up the artifacts from this lab.
1. Delete the tagged image you created in Step 2
```
$ docker image rm nigelpoulton/sec-test:latest
Untagged: nigelpoulton/sec-test:latest
Untagged: nigelpoulton/sec-test@sha256:d0a6701...4e4a7a152a
```
2. Delete the alpine:latest image
```
$ docker image rm alpine:latest
Untagged: alpine:latest
Untagged: alpine@sha256:58e1a...38f97eb53149673c4
Deleted: sha256:4a415e366...718a4698e91ef2a526
Deleted: sha256:23b9c7b43...5f22803bcbe9ab24c2
```
3. Disable Docker Content Trust.
```
$ export DOCKER_CONTENT_TRUST=
```
4. Log in to Docker Hub > Locate the repository you created with the `docker image push` command > Click Settings > Delete the repository.