Updated to the WildFly + MySQL application, using the latest Docker CE stack.

Click on `Next`

.Swarm size
image::docker-aws-2.png[]

Select the number of Swarm manager (3) and worker (5) nodes. This will create an 8 node cluster. Each node will initialize a new EC2 instance. Feel free to alter the number of manager and worker nodes. For example, a more reasonable number for testing may be 1 manager and 3 worker nodes.

Select the SSH key that will be used to access the cluster.

By default, the template is configured to redirect all log statements to CloudWatch. Until https://github.com/moby/moby/issues/30691[#30691] is fixed, the logs will only be available using CloudWatch. Alternatively, you may select to not redirect logs to CloudWatch. In this case, the usual command to get the logs will work.

Scroll down to select manager and worker properties.

.Swarm manager/worker properties
image::docker-aws-3.png[]

`m4.large` (2 vCPU and 8 GB memory) is a good start for the manager. `m4.xlarge` (4 vCPU and 16 GB memory) is a good start for a worker node. Feel free to choose `m3.medium` (1 vCPU and 3.75 GB memory) for the manager and `m3.large` (2 vCPU and 7.5 GB memory) for a smaller cluster. Make sure the EC2 instance size is chosen to accommodate the processing and memory needs of containers that will run there.

Click on `Next`

image::docker-aws-6.png[]

Accept the checkbox for CloudFormation to create IAM resources. Click on `Create` to create the Swarm cluster.

It will take a few minutes for the CloudFormation template to complete. The output will look like:

.Swarm CloudFormation complete
image::docker-aws-7.png[]

https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#Instances:search=docker;sort=instanceState[EC2 Console] will show the EC2 instances for manager and worker.

.EC2 console
image::docker-aws-8.png[]

Select one of the manager nodes, copy the public IP address:

[[Swarm_manager]]
.Swarm manager
image::docker-aws-9.png[]

Create an SSH tunnel using the command:

  ssh -i ~/.ssh/arun-us-east1.pem -o StrictHostKeyChecking=no -NL localhost:2374:/var/run/docker.sock docker@ec2-34-200-216-30.compute-1.amazonaws.com &

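The rest of this chapter passes `-H localhost:2374` to each command so that the local Docker CLI talks to the Swarm manager through this tunnel. As a convenience, the endpoint can instead be exported once per shell session. A minimal sketch, assuming the tunnel above is already running:

```
# Point the local Docker CLI at the Swarm manager through the SSH tunnel.
# With this set, plain `docker` commands behave like `docker -H localhost:2374 ...`.
export DOCKER_HOST=localhost:2374

# Sanity check: the server section should report the remote Swarm manager
docker version
```
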
Get more details about the cluster using the command `docker -H localhost:2374 info`. This shows the output:
Containers: 5
Paused: 0
Stopped: 1
Images: 5
Server Version: 17.09.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: awslogs
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
NodeID: rb6rju2eln0bn80z7lqocjkuy
Is Manager: true
ClusterID: t38bbbex5w3bpfmnogalxn5k1
Managers: 3
Nodes: 8
Orchestration:
Task History Retention Limit: 5
Raft:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 172.31.46.94
Manager Addresses:
172.31.26.163:2377
172.31.46.94:2377
172.31.8.136:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0
runc version: 3f2f8b84a77f73d38244dd690525642a72156c64
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.49-moby
Operating System: Alpine Linux v3.5
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.785GiB
Name: ip-172-31-46-94.ec2.internal
ID: F65G:UTHH:7YEM:XPEZ:NBIZ:XN25:ONG6:QN5R:7MGJ:I3RS:BAX3:UO7A
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 299
Goroutines: 399
System Time: 2017-10-07T01:04:00.971903882Z
EventsListeners: 0
Username: arungupta
Registry: https://index.docker.io/v1/
Labels:
os=linux
region=us-east-1
availability_zone=us-east-1c
instance_type=m4.large
node_type=manager
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

List of nodes in the cluster can be seen using `docker -H localhost:2374 node ls`:

```
ID                            HOSTNAME                        STATUS              AVAILABILITY        MANAGER STATUS
xdhwdiglfs5wsvkcl0j65wl04     ip-172-31-4-89.ec2.internal     Ready               Active
xbrejk2g7mk9v15hg9xzu3syq     ip-172-31-8-136.ec2.internal    Ready               Active              Leader
bhwc67r78cfqtquri82qdwtnk     ip-172-31-13-38.ec2.internal    Ready               Active
ygxdfloly3x203x9p5wbpk34d     ip-172-31-17-74.ec2.internal    Ready               Active
toyfec889wuqdix6z618mlj85     ip-172-31-26-163.ec2.internal   Ready               Active              Reachable
37lzvgrtlnnq0lnr3cip0fwhw     ip-172-31-28-204.ec2.internal   Ready               Active
k2aprr08b3q28nvze9uv26821     ip-172-31-39-252.ec2.internal   Ready               Active
rb6rju2eln0bn80z7lqocjkuy *   ip-172-31-46-94.ec2.internal    Ready               Active              Reachable
```

=== Multi-container application to multi-host

Use the Compose file from https://github.com/docker/labs/blob/master/developer-tools/java/chapters/ch05-compose.adoc#configuration-file to deploy a multi-container application to this Docker cluster. This will deploy the application's containers to multiple hosts.

Create a new directory and `cd` to it:

  mkdir webapp
  cd webapp

Create a new Compose definition `docker-compose.yml` using the configuration file from https://github.com/docker/labs/blob/master/developer-tools/java/chapters/ch05-compose.adoc#configuration-file.

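If it helps to see the shape of that file, here is a minimal sketch of a Compose v3 definition using the same images and ports that appear in the `docker service ls` output later in this chapter. The MySQL credentials are illustrative placeholders, and the linked configuration file (which also sets `container_name`, ignored by `stack deploy`) remains the source of truth:

```
# Sketch of docker-compose.yml for the WildFly Swarm + MySQL stack.
# Service names (web, db), images, and ports match the outputs shown below;
# the MySQL environment values are placeholders.
cat > docker-compose.yml <<'EOF'
version: '3'
services:
  db:
    image: mysql:8
    ports:
      - 3306:3306
    environment:
      MYSQL_DATABASE: sample
      MYSQL_USER: mysql
      MYSQL_PASSWORD: mysql
      MYSQL_ROOT_PASSWORD: supersecret
  web:
    image: arungupta/docker-javaee:dockerconeu17
    ports:
      - 8080:8080
      - 9990:9990
    depends_on:
      - db
EOF
```
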
The command is:

```
docker -H localhost:2374 stack deploy --compose-file=docker-compose.yml webapp
```

The output is:

```
Ignoring deprecated options:

container_name: Setting the container name is not supported.

Creating network webapp_default
Creating service webapp_web
Creating service webapp_db
```

WildFly Swarm and MySQL services are started on this cluster. Each service has a single container. A new overlay network is created. This allows multiple containers on different hosts to communicate with each other.
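
The overlay network can be listed as well; a quick sketch (the network is named `webapp_default`, as shown in the `stack deploy` output above):

```
# List the networks created for the webapp stack; webapp_default is the
# overlay network that lets the web and db containers talk across hosts.
docker -H localhost:2374 network ls --filter name=webapp
```
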

=== Verify service/containers in application

Verify that the WildFly Swarm and MySQL services are running using `docker -H localhost:2374 service ls`:

```
ID                  NAME                MODE                REPLICAS            IMAGE                                   PORTS
q4d578ime45e        webapp_db           replicated          1/1                 mysql:8                                 *:3306->3306/tcp
qt5qrzp1jpyq        webapp_web          replicated          1/1                 arungupta/docker-javaee:dockerconeu17   *:8080->8080/tcp,*:9990->9990/tcp
```

The `REPLICAS` column shows that one out of one replica is running for each service. It might take a few minutes for the services to be running as the image needs to be downloaded on the host where the container is started.

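Although the walkthrough uses a single replica per service, Swarm can run more copies of the stateless web service. An illustrative sketch:

```
# Run three replicas of the web service; Swarm schedules the extra tasks
# across the worker nodes and REPLICAS becomes 3/3.
docker -H localhost:2374 service scale webapp_web=3

# Scale back to a single replica
docker -H localhost:2374 service scale webapp_web=1
```
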
Let's find out which nodes the services are running on. Do this for the web application first:

```
docker -H localhost:2374 service ps webapp_web
ID                  NAME                IMAGE                                   NODE                            DESIRED STATE       CURRENT STATE         ERROR               PORTS
npmunk4ll9f4        webapp_web.1        arungupta/docker-javaee:dockerconeu17   ip-172-31-39-252.ec2.internal   Running             Running 2 hours ago
```

The `NODE` column shows the internal IP address of the node where this service is running.

Now, do this for the database:

```
docker -H localhost:2374 service ps webapp_db
ID                  NAME                IMAGE               NODE                           DESIRED STATE       CURRENT STATE         ERROR               PORTS
vzaji4xdi2qh        webapp_db.1         mysql:8             ip-172-31-17-74.ec2.internal   Running             Running 2 hours ago
```

The `NODE` column for this service shows that the service is running on a different node.

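To print only the placement instead of the full table, `docker service ps` also accepts a Go-template `--format` flag; for example:

```
# Show task name and node for each service in the stack
docker -H localhost:2374 service ps --format '{{.Name}} -> {{.Node}}' webapp_web
docker -H localhost:2374 service ps --format '{{.Name}} -> {{.Node}}' webapp_db
```
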
More details about the service can be obtained using `docker -H localhost:2374 service inspect webapp_web`:

```
[
{
"ID": "qt5qrzp1jpyq1ur7qhg55ijf1",
"Version": {
"Index": 58
},
"CreatedAt": "2017-10-07T01:09:32.519975146Z",
"UpdatedAt": "2017-10-07T01:09:32.535587602Z",
"Spec": {
"Name": "webapp_web",
"Labels": {
"com.docker.stack.image": "arungupta/docker-javaee:dockerconeu17",
"com.docker.stack.namespace": "webapp"
},
"TaskTemplate": {
"ContainerSpec": {
"Image": "arungupta/docker-javaee:dockerconeu17@sha256:6a403c35d2ab4442f029849207068eadd8180c67e2166478bc3294adbf578251",
"Labels": {
"com.docker.stack.namespace": "webapp"
},
"Privileges": {
"CredentialSpec": null,
"SELinuxContext": null
},
"StopGracePeriod": 10000000000,
"DNSConfig": {}
},
"Resources": {},
"RestartPolicy": {
"Condition": "any",
"Delay": 5000000000,
"MaxAttempts": 0
},
"Placement": {
"Platforms": [
{
"Architecture": "amd64",
"OS": "linux"
}
]
},
"Networks": [
{
"Target": "b0ig9m1qsjax95tp9m1i2m4yo",
"Aliases": [
"web"
]
}
],
"ForceUpdate": 0,
"Runtime": "container"
},
"Mode": {
"Replicated": {
"Replicas": 1
}
},
"UpdateConfig": {
"Parallelism": 1,
"FailureAction": "pause",
"Monitor": 5000000000,
"MaxFailureRatio": 0,
"Order": "stop-first"
},
"RollbackConfig": {
"Parallelism": 1,
"FailureAction": "pause",
"Monitor": 5000000000,
"MaxFailureRatio": 0,
"Order": "stop-first"
},
"EndpointSpec": {
"Mode": "vip",
"Ports": [
...
],
"VirtualIPs": [
{
"NetworkID": "i41xh4kmuwl5vc47h536l3mxs",
"Addr": "10.255.0.10/16"
},
{
"NetworkID": "b0ig9m1qsjax95tp9m1i2m4yo",
"Addr": "10.0.0.2/24"
}
]
},
"UpdateStatus": {
"StartedAt": "0001-01-01T00:00:00Z",
"CompletedAt": "0001-01-01T00:00:00Z"
}
}
]
```
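
The raw JSON is verbose; `docker service inspect` also supports a `--pretty` flag that prints a condensed, human-readable summary:

```
# Human-readable summary of the service definition
docker -H localhost:2374 service inspect --pretty webapp_web
```
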

Logs for the service are redirected to CloudWatch and thus cannot be seen using `docker service logs`. This will be fixed with https://github.com/moby/moby/issues/30691[#30691]. Let's view the logs using https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#logs:prefix=Docker[CloudWatch Logs].

.CloudWatch log group
image::docker-aws-10.png[]

Select the log group:

.CloudWatch log stream
image::docker-aws-11.png[]

Pick `webapp_web.xxx` log stream to see the log statements from WildFly Swarm:

.CloudWatch application log stream
image::docker-aws-12.png[]
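
The same log streams can also be read from the command line with the AWS CLI. A minimal sketch, assuming the CLI is configured for `us-east-1`; the log group and stream names below are placeholders that should be copied from the CloudWatch console:

```
# Find the log group created by the Docker for AWS template
aws logs describe-log-groups --region us-east-1

# Show recent log events from the WildFly Swarm container's stream
# (replace the group and stream names with the values from the console)
aws logs get-log-events \
  --region us-east-1 \
  --log-group-name <docker-log-group> \
  --log-stream-name <webapp_web.xxx-stream> \
  --limit 50
```
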
=== Access application

The application is accessed using the manager's IP address on port 8080. By default, port 8080 is not open.

In the https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#Instances:search=docker;sort=instanceState[EC2 Console], select an EC2 instance with the name `Docker-Manager`, then click on `Docker-Managerxxx` in `Security groups`. Click on `Inbound`, `Edit`, `Add Rule`, and create a rule to enable TCP traffic on port 8080.

.Open port 8080 in Docker manager
image::docker-aws-13.png[]

Click on `Save` to save the rules.
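
If you prefer the AWS CLI over the console, the same rule can be added with one command. This is a sketch; the security group ID is a placeholder to be copied from the manager's `Security groups` tab:

```
# Allow inbound TCP traffic on port 8080 to the Swarm manager nodes
# (replace sg-xxxxxxxx with the Docker-Manager security group ID)
aws ec2 authorize-security-group-ingress \
  --region us-east-1 \
  --group-id sg-xxxxxxxx \
  --protocol tcp \
  --port 8080 \
  --cidr 0.0.0.0/0
```
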

Now, the application is accessible using the command `curl -v http://ec2-34-200-216-30.compute-1.amazonaws.com:8080/resources/employees` and shows the output:

```
* Trying 34.200.216.30...
* TCP_NODELAY set
* Connected to ec2-34-200-216-30.compute-1.amazonaws.com (34.200.216.30) port 8080 (#0)
> GET /resources/employees HTTP/1.1
> Host: ec2-34-200-216-30.compute-1.amazonaws.com:8080
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Connection: keep-alive
< X-Powered-By: Undertow/1
< Server: WildFly/10
< Content-Type: application/xml
< Content-Length: 478
< Date: Sat, 07 Oct 2017 02:53:11 GMT
<
* Curl_http_done: called premature == 0
* Connection #0 to host ec2-34-200-216-30.compute-1.amazonaws.com left intact
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><collection><employee><id>1</id><name>Penny</name></employee><employee><id>2</id><name>Sheldon</name></employee><employee><id>3</id><name>Amy</name></employee><employee><id>4</id><name>Leonard</name></employee><employee><id>5</id><name>Bernadette</name></employee><employee><id>6</id><name>Raj</name></employee><employee><id>7</id><name>Howard</name></employee><employee><id>8</id><name>Priya</name></employee></collection>
```

The complete set of commands is shown at https://github.com/docker/labs/blob/master/developer-tools/java/chapters/ch05-compose.adoc#access-application. Make sure to replace `localhost` with the public IP address of the manager.

=== Shutdown application

Shut down the application using the command `docker -H localhost:2374 stack rm webapp`. This stops the container in each service and removes the services. It also deletes the overlay network that was created for the application.

=== Shutdown cluster

The Docker cluster can be shut down by deleting the stack created by CloudFormation:

.Delete CloudFormation template
image::docker-aws-14.png[]
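
The same cleanup can be done from the AWS CLI; a sketch, assuming the stack was named `Docker` when the cluster was created (use the name shown in the CloudFormation console):

```
# Delete the CloudFormation stack; this terminates the EC2 instances and
# removes the resources that were created for the Swarm cluster.
aws cloudformation delete-stack --region us-east-1 --stack-name Docker

# Optionally block until the deletion completes
aws cloudformation wait stack-delete-complete --region us-east-1 --stack-name Docker
```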