This commit is contained in:
Oren K 2020-10-15 22:31:15 -07:00
Parent 37f79b2bb4
Commit a321a387e8
15 changed files: 80 additions and 48 deletions

View file

@@ -39,8 +39,9 @@ OSToy is a simple Node.js application that we will deploy to Azure Red Hat OpenShift

4. **Config Maps:** Shows the contents of configmaps available to the application and the key:value pairs.
5. **Secrets:** Shows the contents of secrets available to the application and the key:value pairs.
6. **ENV Variables:** Shows the environment variables available to the application.
7. **Auto Scaling:** Explore the Horizontal Pod Autoscaler to see how increased loads are handled.
8. **Networking:** Tools to illustrate networking within the application.
9. Shows some more information about the application.

![Home Page](/media/managedlab/10-ostoy-homepage-1.png)

View file

@@ -13,17 +13,20 @@ If not logged in via the CLI, click on the dropdown arrow next to your name in t
![CLI Login](/media/managedlab/7-ostoy-login.png)
A new tab will open; select the authentication method you are using.

Click "Display Token".

Copy the command under where it says "Log in with this token". Then go to your terminal, paste that command, and press enter. You will see a similar confirmation message if you successfully logged in.
```sh
$ oc login --token=iQ-USIs2vTdl_7TD1xSMIPaFxJ6RD6AAAAAAAAAAAAA --server=https://api.abcd1234.westus2.aroapp.io:6443
Logged into "https://api.abcd1234.westus2.aroapp.io:6443" as "0kashi" using the token provided.

You have access to 85 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "default".
```
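If you want to double-check the login from the CLI, here is a quick optional sanity check (not part of the lab steps themselves):

```sh
# Confirm which user you are logged in as and which API server you are talking to
oc whoami
oc whoami --show-server
```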
{% endcollapsible %}
@@ -42,32 +45,34 @@ You should receive the following response

```sh
$ oc new-project ostoy
Now using project "ostoy" on server "https://api.yq1h7kpq.westus2.aroapp.io:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app ruby~https://github.com/sclorg/ruby-ex.git

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
```

Equivalently, you can also create this new project using the web UI by selecting "Projects" under "Home" in the left menu, then clicking the "Create Project" button.

![UI Create Project](/media/managedlab/6-ostoy-newproj.png)

{% endcollapsible %}
### View the YAML deployment objects

View the Kubernetes deployment object YAMLs. If you wish, you can download them from the following locations to your Azure Cloud Shell, into a directory of your choosing (just remember where you placed them for the next step), or simply use the direct links in the next step.

{% collapsible %}

Feel free to open them up and take a look at what we will be deploying. For simplicity of this lab we have placed all the Kubernetes objects we are deploying in one "all-in-one" YAML file, though in reality there are benefits to separating them out into individual YAML files.

[ostoy-fe-deployment.yaml](https://github.com/microsoft/aroworkshop/blob/master/yaml/ostoy-fe-deployment.yaml)

[ostoy-microservice-deployment.yaml](https://github.com/microsoft/aroworkshop/blob/master/yaml/ostoy-microservice-deployment.yaml)

{% endcollapsible %}
@@ -79,11 +84,11 @@ The microservice application serves internal web requests and returns a JSON object

In your command line deploy the microservice using the following command:

`oc apply -f https://raw.githubusercontent.com/microsoft/aroworkshop/master/yaml/ostoy-microservice-deployment.yaml`

You should see the following response:

```sh
$ oc apply -f https://raw.githubusercontent.com/microsoft/aroworkshop/master/yaml/ostoy-microservice-deployment.yaml
deployment.apps/ostoy-microservice created
service/ostoy-microservice-svc created
```
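If you would like to verify that both objects exist before moving on, an optional check (not part of the lab steps themselves, object names taken from the output above):

```sh
# Both commands should return exactly one object each
oc get deployment ostoy-microservice
oc get svc ostoy-microservice-svc
```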
@@ -107,12 +112,12 @@ The frontend deployment contains the node.js frontend for our application along

In your command line deploy the frontend along with creating all objects mentioned above by entering:

`oc apply -f https://raw.githubusercontent.com/microsoft/aroworkshop/master/yaml/ostoy-fe-deployment.yaml`

You should see all objects created successfully

```sh
$ oc apply -f https://raw.githubusercontent.com/microsoft/aroworkshop/master/yaml/ostoy-fe-deployment.yaml
persistentvolumeclaim/ostoy-pvc created
deployment.apps/ostoy-frontend created
service/ostoy-frontend-svc created
```
@@ -135,10 +140,10 @@ You should see the following response:

```sh
NAME          HOST/PORT                                           PATH   SERVICES             PORT    TERMINATION   WILDCARD
ostoy-route   ostoy-route-ostoy.apps.abcd1234.westus2.aroapp.io          ostoy-frontend-svc   <all>                 None
```

Copy `ostoy-route-ostoy.apps.abcd1234.westus2.aroapp.io` above and paste it into your browser and press enter. You should see the homepage of our application.
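If you would rather confirm from the terminal that the route is serving traffic before opening the browser, a small optional check (the hostname below is the example one from the output above; substitute your own):

```sh
# A 200 response code indicates the frontend is reachable through the route
curl -s -o /dev/null -w "%{http_code}\n" http://ostoy-route-ostoy.apps.abcd1234.westus2.aroapp.io
```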
![Home Page](/media/managedlab/10-ostoy-homepage.png)

View file

@@ -46,33 +46,56 @@ You should see both the *stdout* and *stderr* messages.
{% collapsible %}
One can use the native Azure service, Azure Monitor, to view and keep application logs along with metrics. In order to complete this integration you will need to follow the documentation [here](https://docs.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-azure-redhat4-setup), particularly the prerequisites. The prerequisites are:

- The Azure CLI version 2.0.72 or later
- The Helm 3 CLI tool
- Bash version 4
- The Kubectl command-line tool
- A [Log Analytics workspace](https://docs.microsoft.com/en-us/azure/azure-monitor/platform/design-logs-deployment) (see [here](https://docs.microsoft.com/en-us/azure/azure-monitor/learn/quick-create-workspace) if you need to create one)

This lab assumes you have the prerequisites already set up in your environment.
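If you want to quickly confirm that the prerequisite tooling is available in your shell, an optional sanity check (the version numbers in the list above are minimums, not what these commands will print):

```sh
# None of these commands change anything; they only report installed versions
az --version
helm version
bash --version
kubectl version --client
```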
Then follow the steps to [Enable Azure Monitor](https://docs.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-azure-redhat4-setup#integrate-with-an-existing-workspace) for our cluster.

> **Note:** Although not required, it is easier to keep track of our logs by deploying into an existing Log Analytics workspace, so doing so is recommended prior to this step.

Once the steps to connect Azure Monitor to the existing cluster have been completed successfully, access the Azure portal at [https://portal.azure.com](https://portal.azure.com).

Click on "Monitor" in the left hamburger menu.
![Monitor](/media/managedlab/24-ostoy-azuremonitor.png)
Click "Logs" in the left menu. Click the "Get started" button if that screen shows up.

> **Note:** If you are asked to select a scope, select the Log Analytics scope for your cluster.

![container logs](/media/managedlab/29-ostoy-logs.png)
Expand "ContainerInsights". Expand "ContainerInsights".
Double click "ContainerLog". Double click "ContainerLog".
Change the time range to be "Last 30 Minutes".
Then click the "Run" button at the top. Then click the "Run" button at the top.
![container logs](/media/managedlab/30-ostoy-logs.png) ![container logs](/media/managedlab/30-ostoy-logs.png)
In the bottom pane you will see the results of the application logs returned. You might need to sort, but you should see the two lines we outputted to *stdout* and *stderr*. In the bottom pane you will see the results of the application logs returned. You might need to sort, but you should see the two lines we outputted to *stdout* and *stderr*.
![container logs](/media/managedlab/31-ostoy-logout.png) ![container logs](/media/managedlab/31-ostoy-logout.png)
If the logs are particularly chatty, you can paste the following query to find your message.
```
ContainerLog
| where LogEntry contains "<Your Message>"
```
{% endcollapsible %}
@@ -84,7 +107,7 @@ Click on "Containers" in the left menu under Insights.
![Containers](/media/managedlab/25-ostoy-monitorcontainers.png)
Click on your cluster that is integrated with Azure Monitor. You might need to click on the "Monitored clusters" tab.
![Cluster](/media/managedlab/26-ostoy-monitorcluster.png)

View file

@@ -13,22 +13,25 @@ It would be best to prepare by splitting your screen between the OpenShift Web UI
![Splitscreen](/media/managedlab/23-ostoy-splitscreen.png)
But if your screen is too small or that just won't work, then open the OSToy application in another tab so you can quickly switch to the OpenShift Web Console once you click the button. To get to this deployment in the OpenShift Web Console go to the left menu and click:

*Workloads > Deployments > "ostoy-frontend"*

Go to the tab for the OSToy app, click on *Home* in the left menu, and enter a message in the "Crash Pod" tile (e.g., "This is goodbye!") and press the "Crash Pod" button. This will cause the pod to crash and Kubernetes should restart the pod. After you press the button you will see:

![Crash Message](/media/managedlab/12-ostoy-crashmsg.png)

Quickly switch to the tab with the Deployment showing in the Web Console. You will see that the pod is red, meaning it is down, but it should quickly come back up and show blue.
![Pod Crash](/media/managedlab/13-ostoy-podcrash.png)
You can also check the pod events and further verify that the container has crashed and been restarted.

Click on *Pods > [Pod Name] > Events*

![Pods](/media/managedlab/13.1-ostoy-fepod.png)
![Pod Events](/media/managedlab/14-ostoy-podevents.png)
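If you prefer the CLI, the same events can be seen with `oc describe`; a minimal sketch (the pod name below is only an example, use the one returned by `oc get pods`):

```sh
# The container restart shows up under the "Events:" section at the end of the output
oc get pods
oc describe pod ostoy-frontend-679cb85695-5cn7x
```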
Keep the page from the pod events still open from the previous step. Then in the OSToy app click on the "Toggle Health" button in the "Toggle Health Status" tile. You will see the "Current Health" switch to "I'm not feeling all that well".

View file

@@ -9,9 +9,9 @@ In this section we will execute a simple example of using persistent storage by
{% collapsible %}
Inside the OpenShift web UI click on *Storage > Persistent Volume Claims* in the left menu. You will then see a list of all persistent volume claims that our application has made. In this case there is just one called "ostoy-pvc". If you click on it you will also see other pertinent information such as whether or not it is bound, its size, its access mode, and its creation time.
In this case the mode is RWO (Read-Write-Once) which means that the volume can only be mounted to one node, but the pod(s) can both read and write to that volume. The [default in ARO](https://docs.microsoft.com/en-us/azure/openshift/openshift-faq#can-we-choose-any-persistent-storage-solution-like-ocs) is for Persistent Volumes to be backed by Azure Disk, but it is possible to choose Azure Files so that you can use the RWX (Read-Write-Many) access mode. [See here for more info on access modes](https://docs.openshift.com/aro/4/storage/understanding-persistent-storage.html#pv-access-modes_understanding-persistent-storage)
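The same details can be read from the CLI; a quick sketch (the capacity, volume name and storage class shown here are illustrative, your values will differ):

```sh
# Example output only; the important column here is ACCESS MODES (RWO)
$ oc get pvc -n ostoy
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
ostoy-pvc   Bound    pvc-9a8b7c6d-1234-5678-9abc-def012345678   1Gi        RWO            managed-premium   5m
```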
In the OSToy app click on *Persistent Storage* in the left menu. In the "Filename" area enter a filename for the file you will create (e.g., "test-pv.txt").
@@ -33,13 +33,13 @@ You will see the file you created is still there and you can open it to view its
![Crash Message](/media/managedlab/19-ostoy-existingfile.png)
Now let's confirm that it's actually there by using the CLI and checking if it is available to the container. If you remember we [mounted the directory](yaml/ostoy-fe-deployment.yaml#L50) `/var/demo_files` to our PVC. So get the name of your frontend pod:
`oc get pods`
then get a remote shell (rsh) session into the container
`oc rsh <pod name>`
then `cd /var/demo_files`
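Put together, the whole check looks roughly like this (the pod name is only an example, and the file listing assumes you used the suggested "test-pv.txt" filename earlier):

```sh
# Open a remote shell in the frontend pod and look for the file on the mounted volume
$ oc get pods
$ oc rsh ostoy-frontend-679cb85695-5cn7x
$ cd /var/demo_files
$ ls
test-pv.txt
$ cat test-pv.txt
```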

View file

@@ -11,7 +11,7 @@ Let's review how this application is set up...
![OSToy Diagram](/media/managedlab/4-ostoy-arch.png)
As can be seen in the image above, we have defined at least 2 separate pods, each with its own service. One is the frontend web application (with a service and a publicly accessible route) and the other is the backend microservice with a service object created so that the frontend pod can communicate with the microservice (across the pods if more than one). Therefore this microservice is not accessible from outside this cluster, nor from other namespaces/projects (due to ARO's [network policy](https://docs.openshift.com/aro/4/networking/network_policy/about-network-policy.html#nw-networkpolicy-about_about-network-policy), **ovs-networkpolicy**). The sole purpose of this microservice is to serve internal web requests and return a JSON object containing the current hostname and a randomly generated color string. This color string is used to display a box of that color in the tile titled "Intra-cluster Communication".
### Networking
@@ -57,7 +57,7 @@ If we look at the tile on the left we should see one box randomly changing color
To confirm that we only have one pod running for our microservice, run the following command, or use the web UI.
```sh
$ oc get pods
NAME                                  READY   STATUS    RESTARTS   AGE
ostoy-frontend-679cb85695-5cn7x       1/1     Running   0          1h
ostoy-microservice-86b4c6f559-p594d   1/1     Running   0          1h
```

View file

@@ -30,11 +30,11 @@ In the OSToy app in the left menu click on "Autoscaling" to access this portion
As in the networking section, you will see the total number of pods available for the microservice by counting the number of colored boxes. In this case we have only one. This can be verified through the web UI or from the CLI.

You can use the following command to see the running microservice pods only:

`oc get pods --field-selector=status.phase=Running | grep microservice`

![HPA Main](/media/managedlab/33-hpa-mainpage.png)
#### 3. Increase the load
Now that we know that we only have one pod, let's increase the workload that the pod needs to perform. Click the link in the center of the card that says "increase the load". **Please click only *ONCE*!**
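If you would also like to follow the scale-out from the CLI while the load runs, one option is a sketch like the following (it assumes a Horizontal Pod Autoscaler was created for the microservice in an earlier step of this lab; the exact resource names may differ):

```sh
# Watch the autoscaler status and the running microservice pods
oc get hpa
oc get pods --field-selector=status.phase=Running -w | grep microservice
```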

Binary data: media/managedlab/10-ostoy-homepage-1.png (binary file not shown). Before: 140 KiB | After: 158 KiB

Binary data: media/managedlab/10-ostoy-homepage.png (binary file not shown). Before: 135 KiB | After: 148 KiB

Binary data: media/managedlab/13.1-ostoy-fepod.png (new file, binary file not shown). After: 42 KiB

Binary data: media/managedlab/14-ostoy-podevents.png (binary file not shown). Before: 73 KiB | After: 85 KiB

Binary data: media/managedlab/24-ostoy-azuremonitor.png (binary file not shown). Before: 59 KiB | After: 34 KiB

Binary data: media/managedlab/6-ostoy-newproj.png (binary file not shown). Before: 55 KiB | After: 29 KiB

View file

@@ -29,7 +29,7 @@ spec:
    spec:
      containers:
      - name: ostoy-frontend
        image: quay.io/ostoylab/ostoy-frontend:1.4.0
        imagePullPolicy: IfNotPresent
        ports:
        - name: ostoy-port

View file

@@ -16,7 +16,7 @@ spec:
    spec:
      containers:
      - name: ostoy-microservice
        image: quay.io/ostoylab/ostoy-microservice:1.4.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080