Detailed VM creation steps added
<p style="border-bottom: 1px solid lightgrey;"></p>
<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/graphics/textbubble.png?raw=true"> 02 - Hardware and Virtualization environment for Kubernetes </h2>
In this workshop you have covered the fundamentals behind containers and container orchestration. The end of this Module contains several helpful references you can use in these exercises and in production.
<p style="border-bottom: 1px solid lightgrey;"></p>
<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/graphics/pencil2.png?raw=true">2.1 Kubernetes Targets</h2>
Kubernetes, and its variants, can run on "bare metal" - a server-hardware system. You can also use a hypervisor, such as VMware or Hyper-V. In many companies, the IT infrastructure is completely virtualized, so this module covers the installation on that platform. Most of the principles for deploying Kubernetes apply when carrying out the activity on non-virtualized hardware, and you'll learn the differences as you progress through the exercises.
<p style="border-bottom: 1px solid lightgrey;"></p>
<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/graphics/pencil2.png?raw=true">2.2 Computing and Networking Hardware Setup and Configuration</h2>
In this instructor-led session, the hardware for the hands-on lab exercises consists of the following components:
- 3 x servers, each with 2 x 12-core processors and 512GB of RAM
- One all-Flash solid state storage array connected to the compute resource via iSCSI
In this course, each machine hosts a VMware ESXi 6.7 server, which provides the ability to provision virtual machines for:
<table style="tr:nth-child(even) {background-color: #f2f2f2;}; text-align: left; display: table; border-collapse: collapse; border-spacing: 2px; border-color: gray;">
<tr>
<th style="background-color: #1b20a1; color: white;">Virtual Machine Assignment</th>
<th style="background-color: #1b20a1; color: white;">Operating System</th>
<th style="background-color: #1b20a1; color: white;">Logical CPUs</th>
<th style="background-color: #1b20a1; color: white;">Memory Assigned (GB)</th>
<th style="background-color: #1b20a1; color: white;">Description</th>
</tr>
<tr>
<td>Master node (x2)</td>
<td>Ubuntu 18.04.03 LTS 64 bit <b>*</b></td>
<td>2</td>
<td>8</td>
<td>Kubernetes cluster control plane</td>
</tr>
<tr>
<td>Worker node (x15)</td>
<td>Ubuntu 18.04.03 LTS 64 bit <b>*</b></td>
<td>8</td>
<td>64</td>
<td>Kubernetes cluster compute and data planes</td>
</tr>
<tr>
<td>Jump Box (Client)</td>
<td>Ubuntu 18.04.03 LTS 64 bit <b>*</b></td>
<td>2</td>
<td>2</td>
<td>"Guest" for Kubernetes and big data cluster deployment commands</td>
</tr>
<tr>
<td>Active Directory Server</td>
<td>Windows Server 2019 Standard Edition</td>
<td>8</td>
<td>8</td>
<td>Active Directory authentication for Kubernetes cluster and SQL Server Pods</td>
</tr>
</table>

<i>*Refer to https://help.ubuntu.com/community/Installation/MinimalCD</i>

The Kubernetes cluster state is stored in **etcd**, a high-performance key-value store database. Our lab will use an **etcd** configuration of three instances, one on each of the master nodes and one instance on a worker node.
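Once the cluster is running, the health of this three-instance **etcd** configuration can be checked from a master node. The sketch below is illustrative only: the endpoint addresses are placeholders, and the certificate paths shown are those typically created by a kubeadm deployment.

```shell
# Query the health of each etcd member.
# Endpoint addresses are placeholders; certificate paths assume a
# kubeadm-style deployment and may differ in your environment.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.21:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint health
```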
Tips:

- This configuration will work equally well on blade servers and on Hyper-V.
- To speed up node creation, a template can be used for the creation of all node hosts.

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/graphics/point1.png?raw=true"><b>Activity: <TODO: Activity Name></b></p>

In this activity you will <TODO: Explain Activity>

<TODO: insert image here>

<p style="border-bottom: 1px solid lightgrey;"></p>

<p><img style="margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/graphics/checkmark.png?raw=true"><b>Steps</b></p>

<TODO: Enter specific steps to perform the activity>

<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/graphics/pencil2.png?raw=true">2.3 Virtualised Infrastructure Setup and Configuration</h2>

With the hardware and layout in place, you'll now turn to the configuration of the cluster environment. The operating system for each master and worker node is Ubuntu 18.04.3 LTS, and the storage orchestrator requires the open-iscsi package. To begin with, a virtual machine is required on which to base the template for both the master and worker nodes:

1. First, create a virtual machine in VMware vCenter:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_1_vcenter.png?raw=true">

2. Provide a name for the virtual machine:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_2_vcenter.png?raw=true">

3. Select a VMware ESXi server that the virtual machine will use for compute resources:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_3_vcenter.png?raw=true">

4. Select a VMware datastore that the virtual machine will use for storage:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_4_vcenter.png?raw=true">

5. Select the virtual machine compatibility level:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_5_vcenter.png?raw=true">

6. Select the guest OS family and version; Linux and Ubuntu 64 bit respectively in this example:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_6_vcenter.png?raw=true">

7. For a single node cluster sandbox environment using the Pure Storage Kubernetes storage plugin, the virtual machine requires:

- 4 logical processors
- 16GB of memory
- 100GB of storage
- a dedicated network adapter for iSCSI
- a CD/DVD drive configured to connect to the Ubuntu 64 bit 18.04.03 LTS ISO

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_7_vcenter.png?raw=true">

8. Review the configuration for the virtual machine and then hit FINISH:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_8_vcenter.png?raw=true">

9. Power on the virtual machine and install VMware Tools:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_9_vcenter.png?raw=true">

10. Now that the basic virtual machine is configured and powered on, it is time to configure the Ubuntu operating system. After the machine boots from the ISO, the first step is to specify the language that the guest operating system will use:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_11_vcenter.png?raw=true">

11. Select the preferred keyboard layout:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_12_vcenter.png?raw=true">

12. The network adapter configuration screen should display entries for two adapters; the first for "East - West" traffic within the Kubernetes cluster and the second for the cluster's persistent storage accessed via iSCSI:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_13_vcenter.png?raw=true">

13. Assign a static IPv4 address to the first adapter:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_14_vcenter.png?raw=true">

14. Specify the subnet in **CIDR format**, the IP address, gateway, name (DNS) server and search domain:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_15_vcenter.png?raw=true">

15. Repeat the same steps for the iSCSI network adapter; note that this does not require a default gateway or name server:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_16_vcenter.png?raw=true">
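On Ubuntu 18.04 the installer writes this network configuration to a netplan YAML file, which can also be adjusted after installation. The sketch below shows the shape of such a configuration; the interface names, addresses and domain are illustrative assumptions, not your lab's values.

```shell
# Sketch of a netplan configuration: one routed adapter for cluster
# traffic and one iSCSI-only adapter (no gateway or name server).
# Interface names, addresses and the search domain are examples only.
cat <<'EOF' | sudo tee /etc/netplan/01-netcfg.yaml
network:
  version: 2
  ethernets:
    ens160:                     # "East - West" cluster traffic
      addresses: [192.168.1.51/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.2]
        search: [lab.local]
    ens192:                     # dedicated iSCSI network
      addresses: [10.10.10.51/24]
EOF
sudo netplan apply
```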

16. Enter an HTTP proxy, if one is used for accessing the internet:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_17_vcenter.png?raw=true">

17. Accept the default mirror site for Ubuntu:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_18_vcenter.png?raw=true">

18. Select "Use An Entire Disk" for the filesystem:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_19_vcenter.png?raw=true">

19. Select the default of /dev/sda as the device to install Ubuntu on:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_20_vcenter.png?raw=true">

20. Select done to confirm the filesystem configuration:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_21_vcenter.png?raw=true">

21. Select continue to confirm that the target disk of the installation will be formatted:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_22_vcenter.png?raw=true">

22. Enter details for the user that will be used to log in to the system:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_23_vcenter.png?raw=true">

23. Install the OpenSSH server:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_24_vcenter.png?raw=true">

24. Hit done to confirm that no featured server snaps are to be installed; the single node cluster script will install everything that is required:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_25_vcenter.png?raw=true">

25. Installation of the Ubuntu operating system will now commence:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_26_vcenter.png?raw=true">

26. The full log of the installation can be viewed if required:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_27_vcenter.png?raw=true">

27. Once the install has completed, the guest operating system needs to be rebooted:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_28_vcenter.png?raw=true">

28. In order for the reboot to take place, the DVD/CD drive needs to be removed from the virtual machine:

<img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/k8stobdc/graphics/2_3_29_vcenter.png?raw=true">

29. Perform a basic test of the virtual machine:
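A basic test can be as simple as confirming that both static addresses respond and that you can log in over SSH. The addresses and username below are placeholders; substitute the values you chose during installation.

```shell
# Placeholder values - substitute the addresses and user you configured.
ping -c 3 192.168.1.51          # primary ("East - West") adapter
ping -c 3 10.10.10.51           # dedicated iSCSI adapter
# Confirm SSH login and that the OpenSSH service is running.
ssh labuser@192.168.1.51 'lsb_release -a && systemctl status ssh --no-pager'
```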
<p style="border-bottom: 1px solid lightgrey;"></p>
<p style="border-bottom: 1px solid lightgrey;"></p>
<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/graphics/pencil2.png?raw=true">2.4 Creating the "Jump Box"</h2>
The "Jump Box" is a Virtual Machine that emulates the workstation you would use to deploy, configure and manage your cluster using tools such as **kubespray**, **kubectl** and the **azdata** utility. In this workshop, this system runs within the hypervisor environment; in production, you might use a physical machine on the same network in its place.

The following packages are all required by **kubespray**, the mechanism used to deploy a Kubernetes cluster. Refer to [the *requirements.txt* file in the Kubespray GitHub repository](https://github.com/kubernetes-sigs/kubespray/blob/master/requirements.txt) for the exact versions of these packages that are required:

- ansible
- jinja2
- netaddr
- pbr
- hvac
- jmespath
- ruamel.yaml
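One way to install these on the Jump Box is with pip against Kubespray's pinned requirements file. This is a sketch, assuming Python 3, pip and git are already present on the machine:

```shell
# Clone Kubespray and install its pinned Python dependencies
# (ansible, jinja2, netaddr, pbr, hvac, jmespath, ruamel.yaml).
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
sudo pip3 install -r requirements.txt
```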

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/graphics/point1.png?raw=true"><b>Activity: <TODO: Activity Name></b></p>

In this activity you will <TODO: Explain Activity>

<p style="border-bottom: 1px solid lightgrey;"></p>

<p><img style="margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/graphics/checkmark.png?raw=true"><b>Steps</b></p>

<TODO: Enter specific steps to perform the activity>

<p style="border-bottom: 1px solid lightgrey;"></p>

<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/graphics/pencil2.png?raw=true">2.5 Storage Orchestration</h2>
In this instructor-led workshop, Storage Orchestration is facilitated via the [Pure Service Orchestrator](https://github.com/purestorage/helm-charts/blob/master/pure-k8s-plugin/README.md). This component is a [Kubernetes Container Storage Interface-compliant plugin](https://github.com/container-storage-interface/spec) that automatically provisions storage across one or more Pure Storage FlashArray™ and FlashBlade™ storage arrays.
<td> </td>
<td><a href="https://kubernetes.io/docs/reference/tools/#dashboard"><i>The Kubernetes Dashboard</i></a> </td>
<td>A web-based Kubernetes interface that allows you to deploy containerized applications to a Kubernetes cluster, troubleshoot them, and manage the cluster and its resources. </td>
</tr>
<tr style="vertical-align:top;">
<td><a href="https://kubernetes.io/docs/concepts/"><b>Concepts</b></a> </td>
<td><a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/"><i>Declarative API</i></a> </td>
<td>Objects are specified according to the desired state in which they should be instantiated. </td>
</tr>
<tr style="vertical-align:top;">
<td> </td>
<td><a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/"><i>Overlay network</i></a> </td>
<td>A software-defined network via which nodes in the cluster communicate. This is usually implemented via a Container Network Interface compliant plugin, the default of which is Calico. </td>
</tr>
<tr style="vertical-align:top;">
<td> </td>
<td><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/"><i>Ingress</i></a> </td>
<td>Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. </td>
</tr>
<tr style="vertical-align:top;">
<td> </td>
<td><a href="https://github.com/container-storage-interface/spec/blob/master/spec.md"><i>Persistent Storage</i></a> </td>
<td>A hardware and software combination used to persist state. One of the key aims is to ensure that if a Pod is rescheduled to run on a different Node, its state is not lost as it moves from its original Node to a new one. In the early days of Kubernetes, most storage drivers were "in tree", meaning that vendors who wanted Kubernetes to use their storage had to integrate the code for their drivers directly into the Kubernetes code base. The IT industry is now gravitating towards the Container Storage Interface specification, which allows Kubernetes to seamlessly use any storage platform that supports this standard without having to touch the Kubernetes code base. Ultimately, the aim of the CSI standard is to promote storage portability. </td>
</tr>
<tr style="vertical-align:top;">
<td> </td>
<td><a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/"><i>Bin packing</i></a> </td>
<td>The algorithm by which pods are assigned to nodes, based on each pod's CPU and memory requirements and the available resources on each node.</td>
</tr>
<tr style="vertical-align:top;">
<td><a href="https://kubernetes.io/docs/concepts/#kubernetes-objects"><b>Object</b></a> </td>
<td><a href="https://kubernetes.io/docs/concepts/services-networking/service/"><i>Service</i></a> </td>
<td>A "description" of a set of Pods and a policy to access them. This de-couples the call to an application from its physical representation, and allows the application running on the Pod to be more stateless. </td>
</tr>
<tr style="vertical-align:top;">
<td> </td>
<td><a href="https://kubernetes.io/docs/concepts/storage/volumes/"><i>Volume</i></a> </td>
<td>A pointer to a storage directory - either "ephemeral" (has the same lifetime as the Pod) or permanent. Can use various providers such as cloud storage and on-premises devices, and is set with various parameters. </td>
</tr>
<tr style="vertical-align:top;">
<td> </td>
<td><a href="https://kubernetes.io/docs/concepts/storage/volumes/"><i>Persistent Volume</i></a> </td>
<td>A piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.</td>
</tr>
<tr style="vertical-align:top;">
<td> </td>
<td><a href="https://kubernetes.io/docs/concepts/storage/volumes/"><i>Persistent Volume Claim</i></a> </td>
<td>A request for storage from a PersistentVolume by a user.</td>
</tr>
<tr style="vertical-align:top;">
<td> </td>
<td><a href="https://kubernetes.io/docs/concepts/storage/volumes/"><i>StorageClass</i></a> </td>
<td>A StorageClass provides a way for administrators to describe the "classes" of storage available to a Kubernetes cluster.</td>
</tr>
<tr style="vertical-align:top;">
<td> </td>
<td><a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/"><i>Namespace</i></a> </td>
<td>Used to define multiple virtual clusters backed by the same physical cluster. Namespaces are a critical component in the Kubernetes role-based access control security model.</td>
</tr>
<tr style="vertical-align:top;">
<td> </td>
<td><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/"><i>Label</i></a> </td>
<td>Key/value pairs that are attached to objects, such as pods. Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system.</td>
</tr>
<tr style="vertical-align:top;">
<td> </td>
<td><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/"><i>Selector</i></a> </td>
<td>A mechanism by which a client/user can identify a set of objects that have specific label(s).</td>
</tr>
<tr style="vertical-align:top;">
<td> </td>
<td><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/"><i>Secret</i></a> </td>
<td>An object that contains a small amount of sensitive data such as a password, a token, or a key.</td>
</tr>
<tr style="vertical-align:top;">
<td> </td>
<td><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/"><i>ConfigMap</i></a> </td>
<td>A resource for injecting containers with configuration data that allows containers to be Kubernetes agnostic.</td>
</tr>
<tr style="vertical-align:top;">
<td><a href="https://kubernetes.io/docs/concepts/architecture/master-node-communication/"><b>Kubernetes Master</b></a> </td>
<td><a href="https://kubernetes.io/docs/admin/kube-apiserver/"><i>kube-apiserver</i></a> </td>
It is recommended that a production grade cluster have a minimum of two master nodes and three etcd instances.
The standard method for bootstrapping the control plane is to use ```kubeadm init```.
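For a highly available control plane, ```kubeadm init``` is typically run on the first master with a load-balanced control-plane endpoint, and subsequent masters join with the control-plane flag. The endpoint name and pod network CIDR below are illustrative assumptions, not values from this workshop:

```shell
# On the first master node (endpoint and pod CIDR are examples only):
sudo kubeadm init \
  --control-plane-endpoint "k8s-lb.lab.local:6443" \
  --upload-certs \
  --pod-network-cidr 192.168.0.0/16
```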
### 3.2.2 Worker Nodes ###
A production grade SQL Server 2019 Big Data Cluster requires a minimum of three nodes, each with 64 GB of RAM and 8 logical processors. The standard method for bootstrapping worker nodes and joining them to the cluster is to use ```kubeadm join```.
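```kubeadm init``` prints the matching join command, including a bootstrap token and a CA certificate hash, to run on each worker. The endpoint, token and hash below are placeholders for illustration:

```shell
# Run on each worker node - the endpoint, token and hash come from
# the output of kubeadm init on the first master.
sudo kubeadm join k8s-lb.lab.local:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```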
Consideration needs to be made for upgrading a Kubernetes cluster from one version to another and allowing the cluster to tolerate node failure(s). There are two options:
- **Upgrade each node in the cluster in-situ**
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="https://github.com/microsoft/sqlworkshops/blob/master/graphics/point1.png?raw=true"><b>Activity: Create a single node big data cluster sandbox environment</b></p>

In this activity, workshop attendees will familiarise themselves with the single node sandbox lab environment created using [this script](https://docs.microsoft.com/en-us/sql/big-data-cluster/deployment-script-single-node-kubeadm?view=sql-server-ver15):

1. Connect to your sandbox environment using an SSH client. macOS comes with a built-in client, and Windows users can open a command shell and issue the following command:

```ssh labuser@<ip-address>```

Your workshop hosts will provide each attendee with an IP address and password.

2. List some of the key processes that your sandbox Kubernetes cluster consists of:

```ps -ef | egrep '(containerd|docker|etcd|kubelet)'```

3. List the log files for the pods that form your sandbox big data cluster:

```ls -l /var/log/pods```

4. Observe live process stats, including those of the components that make up the sandbox Kubernetes cluster:

```top```

### 3.2.3 Kubernetes Production Grade Deployments ###
- Automate the tasks that have to be performed in addition to running kubeadm

There is a tool that leverages kubeadm in order to achieve all of these goals. Also consider the number of steps required to deploy a cluster using kubeadm:

- Import the keys and register the repository for Kubernetes on each machine that will host a cluster node.
- Configure docker and Kubernetes prerequisites on each machine.
- Create a YAML file in order to enable RBAC for the cluster.
- Initialise the Kubernetes master on this machine.
- Bootstrap each worker node and join it to the cluster.
- Configure an agent / kubelet on each worker node.
Can this process be automated?
### 3.2.4 Introducing Kubespray ###
- Add nodes to existing clusters
Kubespray is a Cloud Native Computing Foundation project with its own [GitHub repository](https://github.com/kubernetes-sigs/kubespray).
### 3.2.5 What Is Ansible? ###
### 3.2.8 Requirements ###
### 3.2.9 Post Cluster Deployment Activities ###
Install kubectl - the primary tool for administering a Kubernetes cluster. kubectl requires a configuration file in order to access the cluster; by default, kubectl will look for a file named config in the .kube directory under the home directory of the user that is logged in:
Use the kubectl cheat sheet to familiarise yourself with various kubectl commands. One of the key commands to be aware of is kubectl get.

1. Display the config containing the context for accessing the sandbox Kubernetes cluster:

```kubectl config view```

2. Use kubectl to obtain the state of each node in the cluster; all nodes in a healthy cluster should have a state of 'Ready'.

3. Obtain detailed information on the sandbox cluster's single node using kubectl describe.
4. Labels can be assigned to any object created in a Kubernetes cluster, and an entity known as a 'Selector' is used to filter objects with labels. Use kubectl get to display the nodes with the role of master. Labels and selectors are covered by the Kubernetes documentation in detail.
5. All objects that live in a Kubernetes cluster reside in a namespace; when a big data cluster is created, all its objects reside in a namespace dedicated to that big data cluster. Use kubectl to obtain the names of the namespaces present in the workshop cluster.
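The steps above map onto kubectl commands along these lines (a sketch; the node name is a placeholder, and exact output depends on your cluster):

```shell
kubectl config view                                  # step 1: clusters, users and contexts
kubectl get nodes                                    # step 2: node state should be 'Ready'
kubectl describe node <node-name>                    # step 3: detailed node information
kubectl get nodes -l node-role.kubernetes.io/master  # step 4: filter nodes by label
kubectl get namespaces                               # step 5: namespaces in the cluster
```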
<p style="border-bottom: 1px solid lightgrey;"></p>