Merge pull request #253 from microsoft/chrisworking

General tidy-up performed on all sections up to and including section 3.3
Buck Woody 2020-03-05 12:02:10 -05:00 committed by GitHub
Parents 9a3671281e 21b6271e5c
Commit 7ae5813f11
No key matching this signature was found
GPG key ID: 4AEE18F83AFDEB23
2 changed files: 33 additions and 30 deletions

View file

@@ -229,31 +229,38 @@ The activity covers the installation of a Container Storage Interface compliant
kubectl get sc
```
2. Verify that each iSCSI IP address associated with the interfaces `ct0` and `ct1` is reachable from every node host in the cluster using the `ping` command:
`ping <iSCSI target ip address(es)>`
<img style="width=80; float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_4_2_purity.PNG?raw=true">
<img style="width=80; float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_4_1_purity.PNG?raw=true">
3. Confirm the version of Helm that is installed by executing the following command:
`helm version`
4. Versions of Helm prior to version 3.0 require that a server-side component known as 'Tiller' is installed on the Kubernetes cluster. If this is not present, install Tiller using the following commands:
```
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
helm init --service-account tiller
```
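As an optional sanity check (assuming the default deployment name created by `helm init`), confirm that Tiller is running before continuing:
```
kubectl -n kube-system get deployment tiller-deploy
```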
5. Download the YAML template for the storage plugin configuration:
`curl --output pso-values.yaml https://raw.githubusercontent.com/purestorage/helm-charts/master/pure-csi/values.yaml`
6. Using a text editor such as `vi` or `nano`, open the `pso-values.yaml` file.
7. Uncomment lines 82 through 84 by removing the hash symbol (`#`) from each line.
8. On line 83, replace the `template IP address` with the `management endpoint IP address` of the array that persistent volumes are to be created on.
9. On line 84, replace the `template API token` with the `API token for the array` that persistent volumes are to be created on.
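Purely as an illustration of what the edited section might look like (the key names shown reflect the upstream chart at the time of writing and may differ between plugin versions; the address and token are placeholders):
```
arrays:
  FlashArrays:
    - MgmtEndPoint: "10.0.0.50"               # management endpoint IP address of the array
      APIToken: "<api token for the array>"   # API token for the array
```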
10. Add the repo containing the Helm chart for the storage plugin:
- For all versions of Helm, run:
@@ -274,7 +281,7 @@ helm search pure-csi
helm search repo pure-csi
```
11. Perform a dry run install of the plugin. This will verify that the contents of the `pso-values.yaml` file are correct:
- For Helm version 2, run:
@@ -288,7 +295,7 @@ helm install --name pure-storage-driver pure/pure-csi --namespace <namespace> -f
helm install pure-storage-driver pure/pure-csi --namespace <namespace> -f <your_own_dir>/pso-values.yaml --dry-run --debug
```
12. If the dry run of the installation completed successfully, the actual install of the plugin can be performed; otherwise, the `pso-values.yaml` file needs to be corrected:
- For Helm version 2, run:
@@ -302,7 +309,7 @@ helm install --name pure-storage-driver pure/pure-csi --namespace <namespace> -f
helm install pure-storage-driver pure/pure-csi --namespace <namespace> -f <your_own_dir>/pso-values.yaml
```
13. List the storage classes that are now installed; a new storage class should be present:
```
kubectl get sc
```
@@ -310,4 +317,4 @@ kubectl get sc
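As an optional check that is not part of the workshop steps, a small test PersistentVolumeClaim can be created against the new storage class to confirm that dynamic provisioning works; the storage class name below is a placeholder, so substitute one of the names returned by `kubectl get sc`:
```
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pso-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: pure-block    # placeholder - use a name from 'kubectl get sc'
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc pso-test-pvc
```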
<p style="border-bottom: 1px solid lightgrey;"></p>
Next, continue to <a href="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/KubernetesToBDC/03-kubernetes.md" target="_blank"><i>Module 3 - Kubernetes</i></a>.

View file

@@ -254,22 +254,22 @@ We'll begin with a set of definitions. These aren't all the terms used in Kubern
Provision must be made for the control plane to be highly available; this includes:
- The API server
- Master Nodes (which can only run on Linux hosts)
- An `etcd` instance
It is recommended that a production-grade Cluster has a minimum of two master Nodes and three `etcd` instances, since in the event of an `etcd` instance failure, the `etcd` cluster can only remain operational if quorum is established. With three instances, quorum is two, so the loss of a single instance can be tolerated.
The standard method for bootstrapping the control plane is to use the command ```kubeadm init```.
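A minimal sketch of bootstrapping a highly available control plane is shown below; the endpoint is a placeholder for whatever load balancer or DNS name fronts the master Nodes, not a value from the workshop environment:
```
# Run on the first master Node; --upload-certs shares the control plane certificates
# so that additional master Nodes can join.
sudo kubeadm init \
  --control-plane-endpoint "k8s-api.example.local:6443" \
  --upload-certs
```
The output of this command includes the `kubeadm join` commands used to add the remaining master Nodes and, later, the worker Nodes.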
### 3.2.2 Worker Nodes ###
The minimum requirement for a production-grade SQL Server 2019 Big Data Cluster in terms of worker nodes is three nodes, each with 64 GB of RAM and 8 logical processors. The standard method for bootstrapping worker Nodes and joining them to the Cluster is to use the command ```kubeadm join```.
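As a rough sketch (the endpoint, token and certificate hash are placeholders; they are printed by `kubeadm init` and can be regenerated with `kubeadm token create --print-join-command`):
```
# Run on each worker Node to join it to the Cluster.
sudo kubeadm join k8s-api.example.local:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```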
Consideration needs to be made for upgrading a Kubernetes Cluster from one version to another while still allowing the Cluster to tolerate Node failure(s). There are two options:
- **Upgrade each Node in the Cluster**
This requires that a [`Taint`](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) is applied to a Node so that it cannot accept any Pods. The Node is then "drained" of its current Pod workload, after which it can be upgraded (a minimal command sketch follows this list). When the Node is drained, the Pods that are running on it need somewhere else to go; therefore, this approach mandates that there are N+1 worker Nodes (assuming one Node is upgraded at a time). This approach runs the risk that if the upgrade fails for any reason, the Cluster may be left in a state with worker Nodes on different versions of Kubernetes.
- **Create a new Cluster**
In this case, you can create a new Cluster, deploy a big data Cluster to it, and then restore a backup of the data from the original Cluster. This approach requires more hardware than the upgrade method. If the upgrade spans multiple versions of Kubernetes - for example, the upgrade is from version 1.15 to 1.17 - this method allows a 1.17 Cluster to be created from scratch cleanly and then the data from the 1.15 Cluster restored onto the new 1.17 Cluster.
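A minimal command sketch for the first option, assuming a worker Node named `node1` (the exact upgrade commands depend on how the Cluster was installed):
```
# Evict the Node's Pods so they are rescheduled onto the remaining worker Nodes.
kubectl cordon node1
kubectl drain node1 --ignore-daemonsets
# ...upgrade the Node (for example, its kubeadm and kubelet packages), then:
kubectl uncordon node1
```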
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/graphics/point1.png?raw=true"><b>Activity: An Introduction To The Workshop Sandbox Environment (Optional)</b></p>
@@ -282,8 +282,6 @@ In the previous section we looked at the workshop sandbox environment from an in
- Helm
- The Kubernetes dashboard
- A local-storage Storage Class
- A SQL Server 2019 big data cluster
- The `azdata` utility
<p><img style="margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/graphics/checkmark.png?raw=true"><b>Steps</b></p>
@@ -328,9 +326,9 @@ top
### 3.2.3 Kubernetes Production Grade Deployments ###
The environments used for the workshop hands-on activities are created via a single script that leverages `kubeadm`. A deployment of a Kubernetes Cluster that is fit for production purposes might require:
- Deployment of multi-node Clusters.
- Repeatable Cluster deployments for different environments with minimal scripting and manual command entry.
Also consider the number of steps required to deploy a Cluster using `kubeadm`:
@@ -372,8 +370,6 @@ In order to carry out the deployment of the Kubernetes Cluster, a basic understa
Unlike other available deployment tools, Kubespray does everything for you in “one shot”. For example, Kubeadm requires that certificates on Nodes are created manually; Kubespray not only leverages Kubeadm but also looks after everything for you, including certificate creation. Kubespray works against most of the popular public cloud providers and has been tested for the deployment of Clusters with thousands of Nodes. The real elegance of Kubespray is the reuse it promotes. If an organization has a requirement to deploy multiple Clusters, once Kubespray is set up, the only prerequisite for each new Cluster is to create a new inventory file for the Nodes that the new Cluster will use.
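Purely as an illustration, a Kubespray inventory for a small Cluster might look something like the sketch below; the host names, addresses and group names are placeholders, and the group names differ between Kubespray versions:
```
# Illustrative Ansible inventory for Kubespray (placeholder hosts and addresses).
[all]
node1 ansible_host=10.0.0.11
node2 ansible_host=10.0.0.12
node3 ansible_host=10.0.0.13

[kube-master]
node1

[etcd]
node1
node2
node3

[kube-node]
node2
node3

[k8s-cluster:children]
kube-master
kube-node
```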
#### 3.2.5 High Level Kubespray Workflow
The deployment of a Kubernetes Cluster via Kubespray follows this workflow:
- Preinstall step
@@ -424,11 +420,11 @@ Now that your sandbox environment is up and running, its time to work with the `kuberne
### 3.2.6 Application Deployment (Package Management) ###
The deployment of applications often comes with the following requirements:
- The ability to package components together.
- Version control.
- The ability to downgrade and upgrade packaged applications.
> Simply using a `YAML` file does not meet the requirements of deploying complex applications - a problem exacerbated by the rise of microservice-based architectures.
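As a hedged illustration of what a package manager adds over raw `YAML` files (the release, chart and repository names below are placeholders, and Helm 3 syntax is assumed):
```
# Install a specific version of a packaged application, upgrade it, then roll it back.
helm install my-app example-repo/my-app --version 1.0.0
helm upgrade my-app example-repo/my-app --version 1.1.0
helm rollback my-app 1
```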