removing quickstart as it is duplicative. Renaming beginner to cloud-quick-start

Mano Marks 2016-09-22 14:29:20 -07:00
Parent 1a23b04d27
Commit 0cf81676bc
7 changed files with 3 additions and 598 deletions


@ -3,5 +3,4 @@
[Docker Swarm Mode](https://docs.docker.com/engine/swarm/) is a release candidate feature included with Docker Engine 1.12. These tutorials are designed to help you quickly get started testing these new features.
* [Docker Swarm Mode full tutorial](beginner-tutorial/README.md)
* [Swarm quickstart tutorial](quickstart/README.md)
* [Service deployment on a swarm](beginner/README.md)
* [Service deployment on a swarm in the Cloud](cloud-quick-start/README.md)


@ -382,4 +382,5 @@ Successfully removed manager3
```
## Next steps
Next, check out the documentation on [Docker Swarm Mode](https://docs.docker.com/engine/swarm/) for more information.
We have a similar tutorial using Docker Machine to do [Service deployment on a swarm in the Cloud](../cloud-quick-start/README.md).
Also check out the documentation on [Docker Swarm Mode](https://docs.docker.com/engine/swarm/) for more information.


@ -1,170 +0,0 @@
# Service deployment on a swarm
Script that creates a swarm cluster and deploys a simple service.
The swarm is created with the Swarm mode of Engine 1.12. It can be created on:
* Virtualbox
* Microsoft Azure
* Digitalocean
* Amazon EC2
Note: currently, if deploying on AWS, only the EU (Ireland) region is available. Make sure you use a key pair for this region.
# Usage
```
./swarm.sh [--driver provider]
           [--azure-subscription-id azure_subscription_id]
           [--amazonec2-access-key ec2_access_key]
           [--amazonec2-secret-key ec2_secret_key]
           [--amazonec2-security-group ec2_security_group]
           [--digitalocean_token do_token]
           [-m|--manager nbr_manager]
           [-w|--worker nbr_worker]
           [-r|--replica nbr_replica]
           [-p|--port exposed_port]
           [--service_image image_of_the_service_to_deploy]
           [--service_port port_exposed_by_the_service_to_deploy]
```
Several parameters can be provided:
* driver used ("azure", "virtualbox", "digitalocean", "amazonec2") (default: "virtualbox")
* number of managers (default: 3)
* number of workers (default: 5)
* number of replicas for the deployed service (lucj/randomcity:1.1) (default: 5)
* port exposed by the cluster (default: 8080)
* azure subscription id (if the azure driver is selected)
* digitalocean token (if the digitalocean driver is selected)
* amazon access key, secret key, security group (currently only for the EU (Ireland) region) (if the amazonec2 driver is selected)
# Example
Let's create a swarm cluster with 2 manager nodes and 2 worker nodes locally (with VirtualBox), using the service lucj/randomCity.
Once deployed, the service will be available on port 8080 (the default port).
```
$ ./swarm.sh --manager 2 --worker 2 --service_image lucj/randomCity --service_port 80
-> about to create a swarm with 2 manager(s) and 2 workers on virtualbox machines
-> creating Docker host for manager 1 (please wait)
-> creating Docker host for manager 2 (please wait)
-> creating Docker host for worker 1 (please wait)
-> creating Docker host for worker 2 (please wait)
-> init swarm
Swarm initialized: current node (99xi3bzlgobxmeff573qitctg) is now a manager.
-> join manager 2 to the swarm
Node f4wocnel60xwfn2z522a645ba accepted in the swarm.
-> join worker 1 to the swarm
This node joined a Swarm as a worker.
-> join worker 2 to the swarm
This node joined a Swarm as a worker.
-> deploy service with 5 replicas with exposed port 8080
-> waiting for service 5ny5u5pmfw75mnomleb34a3kp to be available
... retrying in 2 seconds
... retrying in 2 seconds
... retrying in 2 seconds
... retrying in 2 seconds
... retrying in 2 seconds
... retrying in 2 seconds
... retrying in 2 seconds
... retrying in 2 seconds
... retrying in 2 seconds
-> service available on port 8080 of any node
ID                         NAME    REPLICAS  IMAGE                COMMAND
5ny5u5pmfw75               city    5/5       lucj/randomcity:1.1
ID                         NAME    SERVICE   IMAGE                LAST STATE          DESIRED STATE  NODE
1j157qz7nu4kaqmack4zuwibm  city.1  city      lucj/randomcity:1.1  Running 20 seconds  Running        manager1
72y2off8y5f8zp4djzmjdzowg  city.2  city      lucj/randomcity:1.1  Running 20 seconds  Running        worker1
efzaweh8lhj9aalrgdhnx26i0  city.3  city      lucj/randomcity:1.1  Running 20 seconds  Running        manager2
1f5ccot3wn3yhrhfbqf6vj5d5  city.4  city      lucj/randomcity:1.1  Running 20 seconds  Running        worker2
f53ummqn8mba0hzy15w08pxj4  city.5  city      lucj/randomcity:1.1  Running 20 seconds  Running        worker2
```
# Docker hosts
List all the Docker hosts created:
```
$ docker-machine ls
NAME      ACTIVE  DRIVER      STATE    URL                         SWARM  DOCKER       ERRORS
manager1  -       virtualbox  Running  tcp://192.168.99.100:2376          v1.12.0-rc2
manager2  -       virtualbox  Running  tcp://192.168.99.101:2376          v1.12.0-rc2
worker1   -       virtualbox  Running  tcp://192.168.99.102:2376          v1.12.0-rc2
worker2   -       virtualbox  Running  tcp://192.168.99.103:2376          v1.12.0-rc2
```
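If you want to run commands against one of these hosts directly from your local shell, you can point your Docker client at it with `docker-machine env` (a sketch; `manager1` here stands for whichever host name `docker-machine ls` lists for your swarm):
```
# Point the local Docker client at the manager1 host
eval $(docker-machine env manager1)

# Commands now run against that host, e.g. list the services of the swarm
docker service ls

# Point the client back at the local daemon when done
eval $(docker-machine env -u)
```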
# Service details
The test service deployed is a simple HTTP server that returns a message with:
* the IP of the container that handled the request
* a random city of the world
# Test deployed service
Send several requests to manager1:
```
$ curl 192.168.99.100:8080
{"message":"10.255.0.7 suggests to visit Zebunto"}
$ curl 192.168.99.100:8080
{"message":"10.255.0.8 suggests to visit Areugpip"}
$ curl 192.168.99.100:8080
{"message":"10.255.0.10 suggests to visit Fozbovsav"}
$ curl 192.168.99.100:8080
{"message":"10.255.0.9 suggests to visit Kitunweg"}
$ curl 192.168.99.100:8080
{"message":"10.255.0.11 suggests to visit Aviznuk"}
$ curl 192.168.99.100:8080
{"message":"10.255.0.7 suggests to visit Nedhikmu"}
$ curl 192.168.99.100:8080
{"message":"10.255.0.8 suggests to visit Palmenme"}
```
Send several requests to worker2:
```
$ curl http://192.168.99.102:8080
{"message":"10.255.0.8 suggests to visit Wehappap"}
$ curl http://192.168.99.102:8080
{"message":"10.255.0.11 suggests to visit Jocuvdam"}
$ curl http://192.168.99.102:8080
{"message":"10.255.0.12 suggests to visit Suvigenuh"}
$ curl http://192.168.99.102:8080
{"message":"10.255.0.9 suggests to visit Jinonat"}
```
The requests are dispatched in a round-robin fashion to the running containers.
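Rather than issuing curl calls by hand, a small loop makes the round robin easy to see by counting how often each container IP answers (a sketch for the VirtualBox example above; replace the IP with one of your own nodes and adjust the `10.255.x.x` overlay prefix if yours differs):
```
# Send 20 requests and tally the container IPs that handled them
for i in $(seq 1 20); do
  curl -s 192.168.99.100:8080
  echo
done | grep -o '10\.255\.[0-9.]*' | sort | uniq -c
```
With 5 replicas, the requests should spread roughly evenly across 5 different IPs.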
# Examples with other drivers
## Run 3 managers and 6 workers on Microsoft Azure using the ehazlett/docker-demo image (the default image if none is specified)
```
./swarm.sh --driver azure --azure-subscription-id $AZURE_SUBSCRIPTION_ID --manager 3 --worker 6
```
## Run 3 managers and 6 workers on DigitalOcean using a service based on the ehazlett/docker-demo image (the default image if none is specified)
```
./swarm.sh --driver digitalocean --digitalocean_token $DO_TOKEN --manager 3 --worker 6
```
Once the service is deployed, you get some nice Mobydock :)
![Mobydock](https://dl.dropboxusercontent.com/u/2330187/docker/labs/1.12/swarm-sample/mobydock.png)
Note: beware of the browser cache, which can sometimes prevent the hostname from being updated.
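A simple way to rule out the cache is to query the service with curl instead of the browser (replace `<node-ip>` with the public IP of any of your nodes):
```
# curl does not cache, so repeated requests show the backend rotating
curl -s http://<node-ip>:8080
curl -s http://<node-ip>:8080
```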
## Run 3 managers and 6 workers on Amazon EC2 using the ehazlett/docker-demo image (the default image if none is specified)
```
./swarm.sh --driver amazonec2 --amazonec2-access-key $AWS_ACCESS_KEY --amazonec2-secret-key $AWS_SECRET_KEY --amazonec2-security-group default --manager 3 --worker 6
```
Note: make sure the security group provided (**default** in this example) allows communication between hosts and opens the exposed port (8080 by default) to the outside.
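If those rules are not in place yet, they can be added with the AWS CLI before running the script (a sketch, assuming the group really is named **default**, the default exposed port 8080 is used, and the CLI is configured for eu-west-1):
```
# Open the published port to the outside world
aws ec2 authorize-security-group-ingress --region eu-west-1 \
  --group-name default --protocol tcp --port 8080 --cidr 0.0.0.0/0

# Swarm also needs the hosts to reach each other inside the group,
# e.g. TCP 2377 (cluster management) plus TCP/UDP 7946 and UDP 4789
aws ec2 authorize-security-group-ingress --region eu-west-1 \
  --group-name default --protocol tcp --port 2377 --source-group default
```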
# Status
- [ ] Azure deployment with image / size / region selection
- [ ] DigitalOcean deployment with image / size / region selection
- [ ] Amazon deployment with AMI / instance type / region selection
- [ ] Amazon deployment with automatic opening of exposed port in SecurityGroup


@ -1,275 +0,0 @@
#!/bin/bash
# Default cluster:
# - 3 manager nodes
# - 5 worker nodes
# - 5 replicas for the test service
# - service image: ehazlett/docker-demo
# - service port: 8080 (port exposed by the service)
# - exposed port: 8080 (port exposed to the outside)
DRIVER="virtualbox"
NBR_MANAGER=3
NBR_WORKER=5
NBR_REPLICA=5
SERVICE_IMAGE="ehazlett/docker-demo"
SERVICE_PORT=8080
EXPOSED_PORT=8080
# additional flags depending upon driver selection
ADDITIONAL_PARAMS=
PERMISSION=
PRIVATE=
# Manager and worker prefix
PREFIX=$(date "+%Y%m%dT%H%M%S")
MANAGER=${PREFIX}-manager
WORKER=${PREFIX}-worker
function usage {
  echo "Usage: $0 [--driver provider] [--azure-subscription-id azure_subscription_id] [--amazonec2-access-key ec2_access_key] [--amazonec2-secret-key ec2_secret_key] [--amazonec2-security-group ec2_security_group] [--digitalocean_token do_token] [-m|--manager nbr_manager] [-w|--worker nbr_worker] [-r|--replica nbr_replica] [-p|--port exposed_port] [--service_image service_image] [--service_port service_port]"
  exit 1
}
function error {
  echo "Error: $1"
  exit 1
}
while [ "$#" -gt 0 ]; do
  case "$1" in
    --driver|-d)
      DRIVER="$2"
      shift 2
      ;;
    --manager|-m)
      NBR_MANAGER="$2"
      shift 2
      ;;
    --worker|-w)
      NBR_WORKER="$2"
      shift 2
      ;;
    --service_image)
      SERVICE_IMAGE="$2"
      shift 2
      ;;
    --service_port)
      SERVICE_PORT="$2"
      shift 2
      ;;
    --replica|-r)
      NBR_REPLICA="$2"
      shift 2
      ;;
    --port|-p)
      EXPOSED_PORT="$2"
      shift 2
      ;;
    --digitalocean_token)
      DO_TOKEN="$2"
      shift 2
      ;;
    --amazonec2-access-key)
      EC2_ACCESS_KEY="$2"
      shift 2
      ;;
    --amazonec2-secret-key)
      EC2_SECRET_KEY="$2"
      shift 2
      ;;
    --amazonec2-security-group)
      EC2_SECURITY_GROUP="$2"
      shift 2
      ;;
    --azure-subscription-id)
      AZURE_SUBSCRIPTION_ID="$2"
      shift 2
      ;;
    -h|--help)
      usage
      ;;
    *)
      # Unknown option: show usage instead of looping forever
      usage
      ;;
  esac
done
# The driver parameter's value must be one of "azure", "digitalocean", "amazonec2", "virtualbox" (if no value is provided, the "virtualbox" driver is used)
if [ "$DRIVER" != "virtualbox" -a "$DRIVER" != "digitalocean" -a "$DRIVER" != "amazonec2" -a "$DRIVER" != "azure" ];then
  error "driver value must be among azure, digitalocean, amazonec2, virtualbox"
fi
# No additional parameters needed for the virtualbox driver
if [ "$DRIVER" == "virtualbox" ]; then
  echo "-> about to create a swarm with $NBR_MANAGER manager(s) and $NBR_WORKER workers on $DRIVER machines"
fi
# Make sure the mandatory parameter is provided for the digitalocean driver
if [ "$DRIVER" == "digitalocean" ]; then
  if [ "$DO_TOKEN" == "" ];then
    error "--digitalocean_token must be provided"
  fi
  ADDITIONAL_PARAMS="--digitalocean-access-token=${DO_TOKEN} --digitalocean-region=lon1 --digitalocean-size=1gb --digitalocean-image=ubuntu-14-04-x64 --engine-install-url=https://test.docker.com"
  echo "-> about to create a swarm with $NBR_MANAGER manager(s) and $NBR_WORKER workers on $DRIVER machines (lon1 / 1gb / Ubuntu 14.04)"
fi
# Make sure the mandatory parameters are provided for the amazonec2 driver
if [ "$DRIVER" == "amazonec2" ];then
  if [ "$EC2_ACCESS_KEY" == "" ];then
    error "--amazonec2-access-key must be provided"
  fi
  if [ "$EC2_SECRET_KEY" == "" ];then
    error "--amazonec2-secret-key must be provided"
  fi
  if [ "$EC2_SECURITY_GROUP" == "" ];then
    error "--amazonec2-security-group must be provided (also make sure it allows inter-host communication and has opened port $EXPOSED_PORT to the outside)"
  fi
  PERMISSION="sudo"
  ADDITIONAL_PARAMS="--amazonec2-access-key ${EC2_ACCESS_KEY} --amazonec2-secret-key ${EC2_SECRET_KEY} --amazonec2-security-group ${EC2_SECURITY_GROUP} --amazonec2-security-group docker-machine --amazonec2-region eu-west-1 --amazonec2-instance-type t2.micro --amazonec2-ami ami-f95ef58a --engine-install-url=https://test.docker.com"
  echo "-> about to create a swarm with $NBR_MANAGER manager(s) and $NBR_WORKER workers on $DRIVER machines (eu-west-1 / t2.micro / Ubuntu 14.04)"
fi
# Make sure the mandatory parameter is provided for the azure driver
if [ "$DRIVER" == "azure" ];then
  if [ "$AZURE_SUBSCRIPTION_ID" == "" ];then
    error "--azure-subscription-id must be provided"
  fi
  # For Azure Storage Containers, the manager and worker prefix must be lowercase
  PREFIX=$(date "+%Y%m%dt%H%M%S")
  MANAGER=${PREFIX}-manager
  WORKER=${PREFIX}-worker
  PERMISSION="sudo"
  ADDITIONAL_PARAMS="--driver azure --azure-subscription-id ${AZURE_SUBSCRIPTION_ID} --azure-open-port ${EXPOSED_PORT}"
  echo "-> about to create a swarm with $NBR_MANAGER manager(s) and $NBR_WORKER workers on $DRIVER machines (westus / Standard_A2 / Ubuntu 15.10)"
fi
echo "-> service is based on image ${SERVICE_IMAGE} exposing port ${SERVICE_PORT}"
echo "-> once deployed service will be accessible via port ${EXPOSED_PORT} to the outside"
echo -n "is that correct ? ([Y]/N)"
read build_demo
if [ "$build_demo" = "N" ]; then
echo "aborted !"
exit 0
fi
# Get the private vs public IP of a host
function getIP {
  if [ "$DRIVER" == "amazonec2" ]; then
    echo $(docker-machine inspect -f '{{ .Driver.PrivateIPAddress }}' $1)
  elif [ "$DRIVER" == "azure" ]; then
    echo $(docker-machine ssh $1 ifconfig eth0 | awk '/inet addr/{print substr($2,6)}')
  else
    echo $(docker-machine inspect -f '{{ .Driver.IPAddress }}' $1)
  fi
}
function check_status {
  if [ "$(docker-machine ls -f '{{ .Name }}' | grep ${MANAGER}1)" != "" ]; then
    error "${MANAGER}1 already exists. Please remove the managerX and workerY machines"
  fi
}
function get_manager_token {
  echo $(docker-machine ssh ${MANAGER}1 $PERMISSION docker swarm join-token manager -q)
}
function get_worker_token {
  echo $(docker-machine ssh ${MANAGER}1 $PERMISSION docker swarm join-token worker -q)
}
# Create Docker hosts for the managers
function create_manager {
  for i in $(seq 1 $NBR_MANAGER); do
    echo "-> creating Docker host for manager $i (please wait)"
    # Azure needs stdout for authentication. Workaround: show stdout for the first manager only.
    if [ "$DRIVER" == "azure" ] && [ "$i" -eq 1 ];then
      docker-machine create --driver $DRIVER $ADDITIONAL_PARAMS ${MANAGER}$i
    else
      docker-machine create --driver $DRIVER $ADDITIONAL_PARAMS ${MANAGER}$i 1>/dev/null
    fi
  done
}
# Create Docker hosts for the workers
function create_workers {
  for i in $(seq 1 $NBR_WORKER); do
    echo "-> creating Docker host for worker $i (please wait)"
    docker-machine create --driver $DRIVER $ADDITIONAL_PARAMS ${WORKER}$i 1>/dev/null
  done
}
# Init the swarm from the first manager
function init_swarm {
  echo "-> init swarm from ${MANAGER}1"
  docker-machine ssh ${MANAGER}1 $PERMISSION docker swarm init --listen-addr $(getIP ${MANAGER}1):2377 --advertise-addr $(getIP ${MANAGER}1):2377
}
# Join the other managers to the cluster
function join_other_managers {
  if [ "$((NBR_MANAGER-1))" -ge "1" ];then
    for i in $(seq 2 $NBR_MANAGER);do
      echo "-> ${MANAGER}$i requests membership to the swarm"
      docker-machine ssh ${MANAGER}$i $PERMISSION docker swarm join --token $(get_manager_token) --listen-addr $(getIP ${MANAGER}$i):2377 --advertise-addr $(getIP ${MANAGER}$i):2377 $(getIP ${MANAGER}1):2377 2>&1
    done
  fi
}
# Join the workers to the cluster
function join_workers {
  for i in $(seq 1 $NBR_WORKER);do
    echo "-> join worker $i to the swarm"
    docker-machine ssh ${WORKER}$i $PERMISSION docker swarm join --token $(get_worker_token) --listen-addr $(getIP ${WORKER}$i):2377 --advertise-addr $(getIP ${WORKER}$i):2377 $(getIP ${MANAGER}1):2377
  done
}
# Deploy a test service
function deploy_service {
  echo "-> deploy service with $NBR_REPLICA replicas with exposed port $EXPOSED_PORT"
  SERVICE_ID=$(docker-machine ssh ${MANAGER}1 $PERMISSION docker service create --name demo --replicas $NBR_REPLICA --publish "${EXPOSED_PORT}:${SERVICE_PORT}" ${SERVICE_IMAGE})
  if [ "${SERVICE_ID}" == "" ]; then
    error "deploying service: no id returned"
  fi
}
# Wait for the service to be available
function wait_service {
  echo "-> waiting for service ${SERVICE_ID} to be available"
  TASKS_NBR=$(docker-machine ssh ${MANAGER}1 $PERMISSION docker service ls | grep demo | awk '{print $3}' | cut -d '/' -f1)
  while [ "$TASKS_NBR" -lt "$NBR_REPLICA" ]; do
    echo "... retrying in 2 seconds"
    sleep 2
    TASKS_NBR=$(docker-machine ssh ${MANAGER}1 $PERMISSION docker service ls | grep demo | awk '{print $3}' | cut -d '/' -f1)
  done
}
# Display status
function status {
  echo "-> service available on port $EXPOSED_PORT of any node"
  echo "-> list available services"
  docker-machine ssh ${MANAGER}1 $PERMISSION docker service ls
  echo
  echo "-> list tasks"
  echo
  docker-machine ssh ${MANAGER}1 $PERMISSION docker service ps demo
  echo
  echo "-> list machines"
  docker-machine ls | egrep $PREFIX
  echo
  if [ "$DRIVER" == "amazonec2" ]; then
    echo "#####"
    echo "Warning: make sure you opened port $EXPOSED_PORT in the AWS security group used"
    echo "#####"
  fi
}
function main {
  check_status
  create_manager
  create_workers
  init_swarm
  join_other_managers
  join_workers
  deploy_service
  wait_service
  status
}
main
main


@ -1,43 +0,0 @@
# docker swarm
Get hands-on with the new `docker swarm` feature in Docker 1.12.
## Build a local swarm with Docker Machine
To create a local swarm with Docker Machine, use the following script:
```bash
./buildswarm-vbox.sh
```
It will create two VirtualBox machines, `sw01` and `sw02`. The machine `sw01` is
the swarm manager, and the machine `sw02` joins the swarm as a worker.
To run further commands, just log in to the machine `sw01` with
```bash
docker-machine ssh sw01
```
or run each command through SSH, e.g.
```bash
docker-machine ssh sw01 docker node ls
ID               NAME  MEMBERSHIP  STATUS  AVAILABILITY  MANAGER STATUS  LEADER
0j9p42jcyur9a *  sw01  Accepted    Ready   Active        Reachable       Yes
16cznbhxupre5    sw02  Accepted    Ready   Active
```
### Remote swarm with Docker Machine on DigitalOcean
To build a multi-machine swarm on DigitalOcean, run this script with your token.
```bash
DO_TOKEN=xxxx ./buildswarm-do.sh
```
Then control your swarm with commands on your swarm manager node
```bash
docker-machine ssh do-sw01 docker node ls
```
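When you are finished, the droplets created by the script can be removed again with Docker Machine (the `do-` prefix matches the `PREFIX=do` used in `buildswarm-do.sh`):
```bash
# Remove the two DigitalOcean swarm machines created by the script
docker-machine rm -f do-sw01 do-sw02
```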


@ -1,33 +0,0 @@
#!/bin/bash
SIZE=2gb
REGION=ams2
IMAGE=ubuntu-15-10-x64
PREFIX=do
# create swarm manager
docker-machine create \
  --driver=digitalocean \
  --digitalocean-access-token=${DO_TOKEN} \
  --digitalocean-size=${SIZE} \
  --digitalocean-region=${REGION} \
  --digitalocean-private-networking=true \
  --digitalocean-image=${IMAGE} \
  --engine-install-url=https://test.docker.com \
  ${PREFIX}-sw01
docker-machine ssh ${PREFIX}-sw01 docker swarm init --listen-addr $(docker-machine ip ${PREFIX}-sw01):2377
# create another swarm node
docker-machine create \
  --driver=digitalocean \
  --digitalocean-access-token=${DO_TOKEN} \
  --digitalocean-size=${SIZE} \
  --digitalocean-region=${REGION} \
  --digitalocean-private-networking=true \
  --digitalocean-image=${IMAGE} \
  --engine-install-url=https://test.docker.com \
  ${PREFIX}-sw02
docker-machine ssh ${PREFIX}-sw02 docker swarm join --listen-addr $(docker-machine ip ${PREFIX}-sw02):2377 $(docker-machine ip ${PREFIX}-sw01):2377
# list nodes
docker-machine ssh ${PREFIX}-sw01 docker node ls


@ -1,74 +0,0 @@
#!/bin/bash
# Swarm mode using Docker Machine
managers=3
workers=3
# create manager machines
echo "======> Creating $managers manager machines ...";
for node in $(seq 1 $managers);
do
  echo "======> Creating manager$node machine ...";
  docker-machine create -d virtualbox manager$node;
done
# create worker machines
echo "======> Creating $workers worker machines ...";
for node in $(seq 1 $workers);
do
  echo "======> Creating worker$node machine ...";
  docker-machine create -d virtualbox worker$node;
done
# list all machines
docker-machine ls
# initialize swarm mode and create a manager
echo "======> Initializing first swarm manager ..."
docker-machine ssh manager1 "docker swarm init --listen-addr $(docker-machine ip manager1) --advertise-addr $(docker-machine ip manager1)"
# get manager and worker tokens
export manager_token=`docker-machine ssh manager1 "docker swarm join-token manager -q"`
export worker_token=`docker-machine ssh manager1 "docker swarm join-token worker -q"`
echo "manager_token: $manager_token"
echo "worker_token: $worker_token"
# other masters join swarm
for node in $(seq 2 $managers);
do
  echo "======> manager$node joining swarm as manager ..."
  docker-machine ssh manager$node \
    "docker swarm join \
    --token $manager_token \
    --listen-addr $(docker-machine ip manager$node) \
    --advertise-addr $(docker-machine ip manager$node) \
    $(docker-machine ip manager1)"
done
# show members of swarm
docker-machine ssh manager1 "docker node ls"
# workers join swarm
for node in $(seq 1 $workers);
do
  echo "======> worker$node joining swarm as worker ..."
  docker-machine ssh worker$node \
    "docker swarm join \
    --token $worker_token \
    --listen-addr $(docker-machine ip worker$node) \
    --advertise-addr $(docker-machine ip worker$node) \
    $(docker-machine ip manager1):2377"
done
# show members of swarm
docker-machine ssh manager1 "docker node ls"
# Cleanup
# # Stop machines
# docker-machine stop worker1 worker2 worker3 manager1 manager2 manager3
# # remove machines
# docker-machine rm worker1 worker2 worker3 manager1 manager2 manager3