shipping the first draft of demos

This commit is contained in:
Mohammad Nofal 2019-10-30 10:48:53 +01:00
Parent 5469156ce8
Commit 8c9f6d0e85
24 changed files with 1002 additions and 1 deletions


@@ -1,5 +1,19 @@
# AKS Best Practices Ignite 19
This repository contains the demos and the transcript for the "Applying best practices to Azure Kubernetes Service (AKS)" session delivered at Ignite; the video and slides can be found [here](https://aka.ms/aks-best-practices-ignite)
# Contributing
## Content
* [Backup and Restore Demo](backup_restore)
* [Multiple Availability Zones Demo](availability_zones)
* [Multiple Regions Mysql Demo](multi_region_mysql)
* [Multiple Regions Multi-Masters Cosmos](multi_region_mysql)
* [AKS Cluster and Tags Demo](multi_region_mysql)
* [AKS and Azure Policy](multi_region_mysql)
* [Safe Kubernetes Dashboard](dashboard_demo)
* [AKS Cluster Upgrade - Nodepools](cluster_upgrade_node_pools)
* [AKS Cluster Upgrade - Blue/Green](cluster_upgrades_blue_green)
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us


@@ -0,0 +1 @@
## AKS With Azure Policy


@@ -0,0 +1,152 @@
## AKS With Availability Zones
AKS now supports [Availability Zones (AZs)](https://docs.microsoft.com/en-us/azure/availability-zones/az-overview); the official AKS docs can be found [here](https://docs.microsoft.com/en-us/azure/aks/availability-zones).
With AZs, AKS can offer higher availability for your applications, as they will be spread across different AZs to achieve an SLA of 99.99%.
AKS with AZs requires the use of the [Standard Load Balancer (SLB)](https://docs.microsoft.com/en-us/azure/aks/load-balancer-standard); also note that disks are AZ-bound by default.
#### Demo#1 Create an AKS cluster which spans AZs
1. Create an AKS Cluster
```shell
#define your variables
location=westeurope
rg=ignite
clustername=aks-ignite-azs
vmsize=Standard_B2s
k8s_version="1.14.7"
#create the cluster
$ az aks create \
--resource-group $rg \
--name $clustername \
--kubernetes-version $k8s_version \
--generate-ssh-keys \
--enable-vmss \
--load-balancer-sku standard \
--node-vm-size $vmsize \
--node-count 3 \
--node-zones 1 2 3 \
--location $location
#get the credentials
$ az aks get-credentials --resource-group $rg --name $clustername
```
2. Verify the nodes are spread across AZs (you can remove the '-l agentpool=nodes1')
```shell
$ kubectl describe nodes -l agentpool=nodes1 | grep -e "Name:" -e "failure-domain.beta.kubernetes.io/zone"
Name: aks-nodes1-14441868-vmss000000
failure-domain.beta.kubernetes.io/zone=westeurope-1
Name: aks-nodes1-14441868-vmss000001
failure-domain.beta.kubernetes.io/zone=westeurope-2
Name: aks-nodes1-14441868-vmss000002
failure-domain.beta.kubernetes.io/zone=westeurope-3
$ kubectl get nodes -l agentpool=nodes1 --show-labels
NAME STATUS ROLES AGE VERSION LABELS
aks-nodes1-14441868-vmss000000 Ready agent 44d v1.14.7 agentpool=nodes1,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=Standard_B2s,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=westeurope,**failure-domain.beta.kubernetes.io/zone=westeurope-1**,...
aks-nodes1-14441868-vmss000001 Ready agent 44d v1.14.7 agentpool=nodes1,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=Standard_B2s,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=westeurope,failure-domain.beta.kubernetes.io/zone=westeurope-2,...
aks-nodes1-14441868-vmss000002 Ready agent 44d v1.14.7 agentpool=nodes1,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=Standard_B2s,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=westeurope,failure-domain.beta.kubernetes.io/zone=westeurope-3,...
```
Now you may be wondering where the "failure-domain.beta.kubernetes.io/zone" label came from: it is assigned automatically by the cloud provider, and it is very important for how you will write affinity rules for the workloads you deploy in Kubernetes; to learn more, check [here](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domain-beta-kubernetes-io-zone).
The kube-scheduler automatically extends its spreading behavior across zones, and it will try, on a best-effort basis, to distribute your pods evenly across zones assuming your nodes are homogeneous (same SKU); to learn more about how the kube-scheduler works, check [here](https://kubernetes.io/docs/concepts/scheduling/kube-scheduler/).
#### Demo#2 Deploy an application across AZs
Deploy a demo app using the Nginx image
```shell
$ kubectl run az-test --image=nginx --replicas=6
deployment.apps/az-test created
# verify the spread behavior
$ kubectl get pods -l run=az-test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
az-test-5b6b9977dd-2d4pp 1/1 Running 0 8s 10.244.0.24 aks-nodes1-14441868-vmss000002 <none> <none>
az-test-5b6b9977dd-5lllz 1/1 Running 0 8s 10.244.0.35 aks-nodes1-14441868-vmss000002 <none> <none>
az-test-5b6b9977dd-jg67d 1/1 Running 0 8s 10.244.1.36 aks-nodes1-14441868-vmss000000 <none> <none>
az-test-5b6b9977dd-nks9k 1/1 Running 0 8s 10.244.2.31 aks-nodes1-14441868-vmss000001 <none> <none>
az-test-5b6b9977dd-xgj5f 1/1 Running 0 8s 10.244.1.37 aks-nodes1-14441868-vmss000000 <none> <none>
az-test-5b6b9977dd-xqrwl 1/1 Running 0 8s 10.244.2.30 aks-nodes1-14441868-vmss000001 <none> <none>
```
You can see from the above that, because my nodes are homogeneous and have roughly the same utilization, the kube-scheduler managed to achieve an even spread across the AZs.
#### Demo#3 Deploy to specific AZs by using Affinity
The example here is a two-tier application (Frontend and Backend): the Backend needs to be deployed in AZs 1 and 2 only, and because my Frontends require very low latency I want the scheduler to place my Frontend pods next to my Backend pods.
For the Backend we will be using [Node Affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity) rules, and for the Frontend we will be using [Inter-Pod Affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity) rules.
To achieve the above I'll add the below to my Backend deployment file; I'm essentially asking for my pods to be scheduled in zones 1 and 2 in West Europe.
```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: failure-domain.beta.kubernetes.io/zone
          operator: In
          values:
          - westeurope-1
          - westeurope-2
```
And the below to my Frontend deployment file; here I'm asking that my pods land on a zoned node where there is at least one pod that holds the "app=backend" label.
```yaml
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - backend
      topologyKey: failure-domain.beta.kubernetes.io/zone
```
Moving on to the actual demo, deploy your apps
```shell
$ kubectl apply -f backend.yaml
deployment.apps/backend-deployment created
#wait until all the pods are running (Ctrl+C when you are done)
$ kubectl get pods -l app=backend -w
NAME READY STATUS RESTARTS AGE
backend-deployment-5cc466474d-6z72s 1/1 Running 0 11s
backend-deployment-5cc466474d-f26s8 1/1 Running 0 12s
backend-deployment-5cc466474d-wmhpv 1/1 Running 0 11s
backend-deployment-5cc466474d-zrhr4 1/1 Running 0 11s
#deploy your frontend
$ kubectl apply -f frontend.yaml
deployment.apps/frontend-deployment created
$ kubectl get pods -l app=frontend -w
```
Let's verify the placement of the pods
```shell
$ kubectl get pods -l app=backend -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
backend-deployment-5cc466474d-6z72s 1/1 Running 0 97s 10.244.2.32 aks-nodes1-14441868-vmss000001 <none> <none>
backend-deployment-5cc466474d-f26s8 1/1 Running 0 98s 10.244.2.33 aks-nodes1-14441868-vmss000001 <none> <none>
backend-deployment-5cc466474d-wmhpv 1/1 Running 0 97s 10.244.1.39 aks-nodes1-14441868-vmss000000 <none> <none>
backend-deployment-5cc466474d-zrhr4 1/1 Running 0 97s 10.244.1.38 aks-nodes1-14441868-vmss000000 <none> <none>
$ kubectl get pods -l app=frontend -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
frontend-deployment-7665467f6b-5k8xz 1/1 Running 0 46s 10.244.2.35 aks-nodes1-14441868-vmss000001 <none> <none>
frontend-deployment-7665467f6b-g78xd 1/1 Running 0 46s 10.244.1.40 aks-nodes1-14441868-vmss000000 <none> <none>
frontend-deployment-7665467f6b-mbndr 1/1 Running 0 46s 10.244.2.34 aks-nodes1-14441868-vmss000001 <none> <none>
frontend-deployment-7665467f6b-n4vnm 1/1 Running 0 46s 10.244.1.41 aks-nodes1-14441868-vmss000000 <none> <none>
```
You can see from the above how you can influence the scheduler to achieve your business needs, and how beautiful AZs are :)
#### Note
Cross-AZ traffic is [charged](https://azure.microsoft.com/en-us/pricing/details/bandwidth/); if you have very chatty services, you may want to co-locate them in one zone, with the option of failing over to another zone in case of failure.


@@ -0,0 +1,31 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
  labels:
    app: backend
spec:
  replicas: 4
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: nginx:1.7.9
        ports:
        - containerPort: 80
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: failure-domain.beta.kubernetes.io/zone
                operator: In
                values:
                - westeurope-1
                - westeurope-2


@@ -0,0 +1,31 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  labels:
    app: frontend
spec:
  replicas: 4
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: nginx:1.7.9
        ports:
        - containerPort: 80
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - backend
            topologyKey: failure-domain.beta.kubernetes.io/zone

Binary file added: backup_restore/backup.png (320 KiB, not shown)


@@ -0,0 +1,170 @@
# Backup and Restore AKS
Backup and restore should be the top item in your business continuity plan. In this demo I'll walk you through the different options you have in Azure and the recommendations.
## Why this is important
The need for backup and restore arises when state is involved in your Kubernetes deployment: once you have a persistent volume as part of a Kubernetes StatefulSet or Deployment, your cluster is no longer the same, it requires more care, as losing the cluster or the volume will result in losing data.
## Azure Snapshot API
This section is focused on Azure Disk snapshots; if you're using Azure Files the process is much easier, you can create a share snapshot as documented [here](https://docs.microsoft.com/en-us/azure/storage/files/storage-snapshots-files), then remount it in case of any sort of failure.
The AKS docs describe how to back up and restore Azure Disks [here](https://docs.microsoft.com/en-us/azure/aks/azure-disks-dynamic-pv#back-up-a-persistent-volume); if you go through the article you will notice the limitations below:
* The Azure Snapshot API isn't Kubernetes native
* There is no means of tracking which persistent volume belongs to which deployment, statefulset, namespace, etc.
* You will need to script your way through to get a proper mapping (a rough sketch of that mapping is shown below)
If you're OK with the above limitations, then the Azure Snapshot API is what you should use; if not, we need a more Kubernetes-native solution to satisfy your backup and restore needs, which we will cover in the coming section.
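For illustration only (not part of the original demo), the mapping can be scripted by walking from the PVC to its PV to the underlying Azure disk and snapshotting that disk; the PVC name, snapshot name and MC_ resource group below are placeholders:
```shell
#hedged sketch: map a PVC to its Azure disk and snapshot it (names are examples)
pv=$(kubectl -n ignite get pvc data-mysql-0 -o jsonpath='{.spec.volumeName}')
diskuri=$(kubectl get pv $pv -o jsonpath='{.spec.azureDisk.diskURI}')
az snapshot create \
  --resource-group <MC_resource_group> \
  --name mysql-data-snapshot \
  --source $diskuri
```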
*Note:* The Azure Backup team is busy working on a native solution.
## ARK/Velero
[Velero](https://velero.io/docs/master/index.html) (formerly Heptio Ark) is a Kubernetes-native backup and restore tool for your cluster resources and persistent volumes.
Installing Velero on AKS is pretty straightforward; follow the document [here](https://velero.io/docs/v1.1.0/azure-config/) and you will be up and running with Velero in under 30 minutes.
After completing the installation you will have a local velero client, an Azure Blob Storage account to ship all the metadata to, and your snapshots will be stored in your AKS infrastructure (MC_) resource group.
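For reference, the install boils down to something like the following sketch, based on the Velero 1.1 Azure instructions linked above; the blob container name, resource group and storage account variables are placeholders you set up while following that doc:
```shell
#hedged sketch: install Velero with the Azure provider (all values are placeholders)
velero install \
  --provider azure \
  --bucket velero-backups \
  --secret-file ./credentials-velero \
  --backup-location-config resourceGroup=$AZURE_BACKUP_RESOURCE_GROUP,storageAccount=$AZURE_STORAGE_ACCOUNT_ID \
  --snapshot-location-config apiTimeout=5m
```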
#### The Demo
We will create a namespace, deploy a MySQL database and write some records to it, back up the namespace using Velero, then delete the namespace to simulate a failure. Finally, we restore; if all goes well, our DB should come back along with its last known state.
###### 1- Make sure Velero is running:
```shell
$ kubectl get pods -n velero
NAME READY STATUS RESTARTS AGE
velero-78d99d758b-qq8tk 1/1 Running 0 33d
```
###### 2- Verify your local client
```shell
$ velero version
Client:
Version: v1.0.0
Git commit: 72f5cadc3a865019ab9dc043d4952c9bfd5f2ecb
Server:
Version: v1.0.0
```
###### 3- Create a namespace
```shell
$ kubectl create namespace ignite
namespace/ignite created
```
###### 4- Create your MySQL database; this will take a few minutes, as an Azure disk needs to be created and attached to your node
```shell
kubectl apply -f mysql-configmap.yaml
kubectl apply -f mysql-services.yaml
kubectl apply -f mysql-statefulset.yaml
kubectl get pods -l app=mysql -n ignite --watch
```
###### 5- Create a Pod from the mysql image which will act as our client/front end; the pod will create a database called "ignitedb" and insert a record into it
```shell
kubectl -n ignite run mysql-client --image=mysql:5.7 -i --rm --restart=Never --\
mysql -h mysql-0.mysql <<EOF
CREATE DATABASE ignitedb;
CREATE TABLE ignitedb.messages (message VARCHAR(250));
INSERT INTO ignitedb.messages VALUES ('Hello Ignite');
EOF
```
###### 6- Ensure the record got written
```shell
$ kubectl -n ignite run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never --\
mysql -h mysql -e "SELECT * FROM ignitedb.messages"
+--------------+
| message |
+--------------+
| Hello Ignite |
+--------------+
pod "mysql-client" deleted
```
###### 7- Backup your namespace
```shell
$ velero backup create ignite-v1 --include-namespaces ignite
Backup request "ignite-v1" submitted successfully.
Run `velero backup describe ignite-v1` or `velero backup logs ignite-v1` for more details.
$ velero backup describe ignite-v1
Name: ignite-v1
Namespace: velero
Labels: velero.io/storage-location=default
Annotations: <none>
Phase: Completed
Namespaces:
Included: ignite
....
```
###### 8- Check your storage account for your backup
![backup](backup.png)
###### 9- Find the disk snapshot under your Infrastructure resource group MC_
![snapshot](snapshot.png)
###### 10- Now that we have a valid backup, proceed with deleting your namespace to simulate failure
```shell
$ kubectl delete namespaces ignite
namespace "ignite" deleted
$ kubectl get namespace ignite
Error from server (NotFound): namespaces "ignite" not found
```
###### 11- Restore your snapshot
```shell
$ velero restore create --from-backup ignite-v1
Restore request "ignite-v1-20191028172722" submitted successfully.
Run `velero restore describe ignite-v1-20191028172722` or `velero restore logs ignite-v1-20191028172722` for more details.
$ velero restore describe ignite-v1-20191028172722
Name: ignite-v1-20191028172722
Namespace: velero
Labels: <none>
Annotations: <none>
Phase: Completed
Backup: ignite-v1
....
```
###### 12- Now check your namespace; this will take a few minutes as well, as the MySQL StatefulSet is recreated and the disk is created and attached to the node
```shell
$ kubectl get pods -n ignite
NAME READY STATUS RESTARTS AGE
mysql-0 2/2 Running 0 2m32s
```
###### 13- Now that the database has been restored, let's try to retrieve our record
```shell
$ kubectl -n ignite run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never --\
mysql -h mysql -e "SELECT * FROM ignitedb.messages"
+--------------+
| message |
+--------------+
| Hello Ignite |
+--------------+
pod "mysql-client" deleted
```
This concludes our demo.
#### Important Note
The other important part of your business continuity plan is shipping your snapshots and disks to another region:
1. Azure Blob Storage accounts and blob containers can easily be geo-replicated, see [here](https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy)
2. The remaining part is copying the actual disk snapshot to another region; unfortunately, as it stands, we don't have a native API to accomplish this, however there is a PowerShell module which can do it [here](https://docs.microsoft.com/en-us/azure/virtual-machines/scripts/virtual-machines-windows-powershell-sample-copy-snapshot-to-storage-account); please also add your use case to the user voice [here](https://feedback.azure.com/forums/216843-virtual-machines/suggestions/34900495-should-be-able-to-copy-snapshots-between-regions-i). A rough az CLI sketch is shown after the note below.
Note that the backup team is already working on the above capability.
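One stop-gap approach, sketched here under the assumption that you export the snapshot via a SAS URL and copy the VHD into a storage account that already exists in the target region (both names below are placeholders):
```shell
#hedged sketch: export a snapshot and copy it to a storage account in another region
sas=$(az snapshot grant-access \
  --resource-group <MC_resource_group> \
  --name mysql-data-snapshot \
  --duration-in-seconds 3600 \
  --query accessSas -o tsv)
az storage blob copy start \
  --account-name <storage_account_in_remote_region> \
  --destination-container snapshots \
  --destination-blob mysql-data-snapshot.vhd \
  --source-uri "$sas"
```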
#### Clean up
```shell
$ kubectl -n ignite run front-end --image=mysql:5.7 -i -t --rm --restart=Never --\
mysql -h mysql -e "drop database ignitedb"
$ velero delete backup ignite-v1
```


@@ -0,0 +1,16 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  namespace: ignite
  labels:
    app: mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [mysqld]
    log-bin
  slave.cnf: |
    # Apply this config only on slaves.
    [mysqld]
    super-read-only


@@ -0,0 +1,31 @@
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: ignite
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None
  selector:
    app: mysql
---
# Client service for connecting to any MySQL instance for reads.
# For writes, you must instead connect to the master: mysql-0.mysql.
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  namespace: ignite
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql


@@ -0,0 +1,167 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  namespace: ignite
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: init-mysql
        image: mysql:5.7
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate mysql server-id from pod ordinal index.
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          # Add an offset to avoid reserved server-id=0 value.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Copy appropriate conf.d files from config-map to emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/master.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/slave.cnf /mnt/conf.d/
          fi
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-mysql
        image: gcr.io/google-samples/xtrabackup:1.0
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if data already exists.
          [[ -d /var/lib/mysql/mysql ]] && exit 0
          # Skip the clone on master (ordinal index 0).
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          [[ $ordinal -eq 0 ]] && exit 0
          # Clone data from previous peer.
          ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
          # Prepare the backup.
          xtrabackup --prepare --target-dir=/var/lib/mysql
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      - name: xtrabackup
        image: gcr.io/google-samples/xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql
          # Determine binlog position of cloned data, if any.
          if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing slave. (Need to remove the tailing semicolon!)
            cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_slave_info xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from master. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm -f xtrabackup_binlog_info xtrabackup_slave_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi
          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
            echo "Initializing replication from clone position"
            mysql -h 127.0.0.1 \
              -e "$(<change_master_to.sql.in), \
                  MASTER_HOST='mysql-0.mysql', \
                  MASTER_USER='root', \
                  MASTER_PASSWORD='', \
                  MASTER_CONNECT_RETRY=10; \
                  START SLAVE;" || exit 1
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
          fi
          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi

Binary file added: backup_restore/snapshot.png (536 KiB, not shown)


@@ -0,0 +1,42 @@
## Create an AKS cluster with tags
AKS now has the ability to pass tags down to the infrastructure resource group (MC_); note that the tags will not be passed to the individual resources inside it.
AKS also now gives you the ability to influence the name of the infrastructure resource group; note that the resource group has to be new, it can't be an existing one.
```shell
$ az aks create --help
....
--node-resource-group : The node resource group is the resource group where
all customer's resources will be created in, such as virtual machines.
....
--tags : Space-separated tags in 'key[=value]' format. Use ''
to clear existing tags.
....
```
Example
```shell
#create a cluster with tags and a custom name for the infrastructure resource group
$ az aks create \
--resource-group ignite-tags \
--node-resource-group ignite-tags-nodes-rg \
--name ignite-tags \
--generate-ssh-keys \
--node-count 1 \
--tags project=ignite \
--location westeurope
#check the tags on the infra resource group
$ az group show -n ignite-tags-nodes-rg -o json --query "tags"
{
"project": "ignite"
}
```
If you're interested in how we can enforce tags and work with Azure Policy, check the AKS with Azure Policy section; a rough sketch of assigning a built-in tag policy is shown below.
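As a hedged sketch only (the policy display name refers to a built-in definition, while the assignment name and parameter values here are made up for illustration):
```shell
#hedged sketch: require the "project" tag on resources in the node resource group
definition=$(az policy definition list \
  --query "[?displayName=='Require a tag and its value on resources'].name" -o tsv)
az policy assignment create \
  --name require-project-tag \
  --policy $definition \
  --scope $(az group show -n ignite-tags-nodes-rg --query id -o tsv) \
  --params '{"tagName": {"value": "project"}, "tagValue": {"value": "ignite"}}'
```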


@@ -0,0 +1,83 @@
## Safe Kubernetes Dashboard
Although the Kubernetes dashboard is helpful in some cases, it has always been a risk to run it; in this walkthrough I'll provide you with some options for making it safer.
AKS has a secured-by-default dashboard which is exposed only behind a proxy and runs behind a slimmed-down service account, please check the docs [here](https://docs.microsoft.com/en-us/azure/aks/kubernetes-dashboard).
#### Option#1 No Dashboard
Yes, if you don't use it, remove it; AKS provides the ability to disable the dashboard after the cluster has been provisioned.
```shell
#check if the dashboard is running
$ kubectl get pods -n kube-system | grep "dashboard"
kubernetes-dashboard-cc4cc9f58-whmhv 1/1 Running 0 30d
#disable the dashboard addon
az aks disable-addons -a kube-dashboard -g ResourceGroup -n ClusterName
#dashboard should be gone
kubectl get pods -n kube-system | grep "dashboard"
```
###### Note
Follow this [issue](https://github.com/Azure/AKS/issues/1074) where we are working on having the feature of creating an AKS cluster with no Dashboard.
#### Option#2 Use Service Accounts to access the dashboard
If it's an absolute necessity to run the dashboard, then you can access it using service account tokens; more on the topic can be found [here](https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md).
```shell
#First edit your dashboard deployment
$ kubectl edit deployments -n kube-system kubernetes-dashboard
```
Add the lines below to your deployment (yes, the change will persist)
```yaml
- args:
  - --authentication-mode=token
  - --enable-insecure-login
```
Your dashboard deployment will look similar to the below
```yaml
containers:
- args:
  - --authentication-mode=token
  - --enable-insecure-login
  image: aksrepos.azurecr.io/mirror/kubernetes-dashboard-amd64:v1.10.1
  imagePullPolicy: IfNotPresent
  livenessProbe:
    failureThreshold: 3
    httpGet:
      path: /
      port: 9090
      scheme: HTTP
```
Now that we enabled token auth, we can proceed with creating the service account
```shell
# Create the service account in the current namespace
# (we assume default)
kubectl create serviceaccount my-dashboard-sa
# Give that service account root on the cluster
kubectl create clusterrolebinding my-dashboard-sa \
--clusterrole=cluster-admin \
--serviceaccount=default:my-dashboard-sa
# Find the secret that was created to hold the token for the SA
kubectl get secrets
# Show the contents of the secret to extract the token
kubectl describe secret my-dashboard-sa-token-tqknr
#use the token to access the dashboard
```
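To reach the login screen you can, for example, open the dashboard through the AKS proxy and paste the token there; the resource group and cluster name below are placeholders:
```shell
#open the proxied dashboard, choose "Token" on the login screen and paste the token from above
az aks browse --resource-group ResourceGroup --name ClusterName
```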
Notes:
1. More information on the two arguments you added can be found [here](https://github.com/kubernetes/dashboard/blob/master/docs/common/dashboard-arguments.md). For reference:
* `--authentication-mode=token`: enables authentication options that will be reflected on the login screen. Supported values: token, basic. Note that the basic option should only be used if the apiserver has the '--authorization-mode=ABAC' and '--basic-auth-file' flags set.
* `--enable-insecure-login` (default: false): when enabled, the Dashboard login view will also be shown when the Dashboard is not served over HTTPS.
2. Accessing the dashboard using your AAD identity is WIP and can be tracked [here](https://github.com/MicrosoftDocs/azure-docs/issues/23789)


@@ -0,0 +1,29 @@
apiVersion: v1
kind: Pod
metadata:
  name: php-westeurope
  labels:
    location: westeurope
spec:
  containers:
  - name: php-westeurope
    image: mohamman/php_we:v2
    env:
    - name: MYSQL_HOST
      value: "CHANGEME"
    - name: MYSQL_USERNAME
      value: "CHANGEME"
    - name: MYSQL_PASSWORD
      value: "CHANGEME"
    - name: DATABASE_NAME
      value: "CHANGEME"
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name


@@ -0,0 +1,27 @@
apiVersion: v1
kind: Pod
metadata:
  name: php-northeurope
  labels:
    location: northeurope
spec:
  containers:
  - name: php-northeurope
    image: mohamman/php_we:v2
    env:
    - name: MYSQL_HOST
      value: "CHANGEME"
    - name: MYSQL_USERNAME
      value: "CHANGEME"
    - name: MYSQL_PASSWORD
      value: "CHANGEME"
    - name: DATABASE_NAME
      value: "CHANGEME"
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name


@@ -0,0 +1,38 @@
<?php
#define connection parameters
$host = getenv('MYSQL_HOST');
$username = getenv('MYSQL_USERNAME');
$password = getenv('MYSQL_PASSWORD');
$db_name = getenv('DATABASE_NAME');
$pod_name = getenv('POD_NAME');
$node_name = getenv('NODE_NAME');
//Establish the connection
$conn = mysqli_init();
mysqli_real_connect($conn, $host, $username, $password, $db_name, 3306);
if (mysqli_connect_errno()) {
    die('Failed to connect to MySQL: '.mysqli_connect_error());
}
//Run the SELECT query
$res = mysqli_query($conn, 'SELECT name FROM messages');
if ($res->num_rows > 0) {
    // output data of each row
    while ($row = $res->fetch_assoc()) {
        echo "Message: " . $row["name"] . "<br><br><br>";
    }
} else {
    echo "0 results";
}
##print pod and node hostnames
echo "Data Was Read From Pod: " . gethostname() . "<br><br>";
echo "Pod is located on Node: " . $node_name . "<br><br>";
//Close the connection
mysqli_close($conn);
?>

Binary file added: multi_region_mysql/mr_mysql.png (75 KiB, not shown)


@@ -0,0 +1,169 @@
## AKS Multi Region Setup With Azure MySQL
The fact is, Kubernetes StatefulSets don't span clusters, which means a MySQL instance running inside one K8s cluster won't replicate out of the box to another cluster unless you create some custom logic, which will be messy; also, it's not fun to manage databases as long as there is a managed option available to you.
In this demo we will externalize our state using [Azure MySQL](https://docs.microsoft.com/en-us/azure/mysql/), which supports read replicas [in region](https://docs.microsoft.com/en-us/azure/mysql/howto-read-replicas-portal) and [across regions](https://docs.microsoft.com/en-us/azure/mysql/concepts-read-replicas#cross-region-replication).
We will use a simple application which reads records from the database and prints its hostname to reflect which region it's in.
We will also be using [Azure Traffic Manager](https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-overview) to have a single read endpoint for both regions.
This is how our application will look
![mysql](mr_mysql.png)
## Do you really need multi-region?
Before we go on with the demo, I need to stress that multi-region is a hard task, especially if state is involved; even more so if the state lives in a relational database, since a multi-master setup that spans regions is almost non-existent for most of the DB engines out there.
Now, you could argue that not all Azure regions have AZs in them, so you should invest in multi-region. I agree, but you need to have reasonable expectations, i.e. an active/active (read/write) multi-region setup with a relational database is almost impossible with most of the OSS DB engines out there; active/passive is fully achievable nonetheless and is a good option.
Always weigh the risk versus the effort in whatever you're trying to achieve.
## The demo
1. Spin up 2 AKS clusters in 2 different regions
```shell
#our primary cluster
#define the variables
location=westeurope
rg=aks-ignite-we
clustername=aks-ignite-we
vmsize=Standard_B2s
nodecount=2
#create the resource group
az group create --name $rg --location $location
#create the cluster
az aks create --resource-group $rg \
--name $clustername \
--location $location \
--generate-ssh-keys \
--node-count $nodecount \
--node-vm-size $vmsize \
--no-wait
#get the credentials
$ az aks get-credentials -n $clustername -g $rg
#our remote/secondary cluster
#define the variables
rlocation=northeurope
rrg=aks-ignite-ne
rclustername=aks-ignite-ne
vmsize=Standard_B2s
nodecount=2
#create the resource group
az group create --name $rrg --location $rlocation
#create the cluster
az aks create --resource-group $rrg \
--name $rclustername \
--location $rlocation \
--generate-ssh-keys \
--node-count $nodecount \
--node-vm-size $vmsize \
--no-wait
#get the credentials
az aks get-credentials -n $rclustername -g $rrg
```
2. Spin up an Azure MySQL server and create one in-region read replica and one cross-region read replica; follow the docs [here](https://docs.microsoft.com/en-us/azure/mysql/quickstart-create-mysql-server-database-using-azure-portal) and [here](https://docs.microsoft.com/en-us/azure/mysql/howto-read-replicas-portal), which should be pretty straightforward. A rough CLI sketch is shown after the diagram below.
We should end up with a setup similar to the below
![mysql](mysql.png)
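If you prefer the CLI over the portal, the setup is roughly the following sketch; the server names, admin credentials and SKU are placeholders I made up, and the `--location` flag on the last command is what makes that replica cross-region:
```shell
#hedged sketch: primary server plus an in-region and a cross-region read replica
az mysql server create \
  --resource-group $rg \
  --name ignite-mysql-we \
  --location westeurope \
  --admin-user igniteadmin \
  --admin-password '<strong-password>' \
  --sku-name GP_Gen5_2
az mysql server replica create \
  --resource-group $rg \
  --name ignite-mysql-we-replica \
  --source-server ignite-mysql-we
az mysql server replica create \
  --resource-group $rrg \
  --name ignite-mysql-ne \
  --source-server $(az mysql server show -g $rg -n ignite-mysql-we --query id -o tsv) \
  --location northeurope
```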
3. Connect to your main Azure MySQL instance and create a database with some records
```shell
mysql -h YOUR_MAIN_INSTANCE_NAME.mysql.database.azure.com -u USERNAME@YOUR_MAIN_INSTANCE_NAME -p
#create a database, create a table and insert a record
> CREATE DATABASE ignite;
> USE ignite;
> CREATE TABLE messages (name varchar(20));
> INSERT INTO messages values ("Hello MSIgnite!");
> exit;
```
4. Deploy the application
The application is a simple (single page) PHP application which reads the messages from the database and prints the pod hostname and the node hostname.
We have 2 files, app_region[1,2].yaml; each one is supposed to be deployed to a different region, and you should modify the database connection parameters in each of them.
###### Note
I used environment variables inside my pod manifests for demo purposes only; in production you should use Key Vault or Kubernetes secrets to store such parameters, for example as in the sketch below.
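A minimal sketch of the secrets option, assuming a made-up secret name; the pod would then reference the keys with `valueFrom.secretKeyRef` instead of the hard-coded values:
```shell
#hedged sketch: keep the DB connection parameters in a Kubernetes secret instead of plain env values
kubectl create secret generic mysql-conn \
  --from-literal=MYSQL_HOST=YOUR_INSTANCE.mysql.database.azure.com \
  --from-literal=MYSQL_USERNAME=USERNAME@YOUR_INSTANCE \
  --from-literal=MYSQL_PASSWORD='YOUR_PASSWORD' \
  --from-literal=DATABASE_NAME=ignite
```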
I'll be using a tool called kubectx to switch between clusters; if you have never used it before, stop everything you're doing and install it from [here](https://github.com/ahmetb/kubectx).
```shell
#switch context to the first cluster
kubectx $clustername
#deploy and expose the application using type loadbalancer
$ kubectl apply -f app/app_region1.yaml
$ kubectl expose pod php-westeurope --type=LoadBalancer --port 80
#make sure your container is running and you have got a Public IP
$ kubectl get pods -l location=westeurope
NAME READY STATUS RESTARTS AGE
php-westeurope 1/1 Running 0 23m
$ kubectl get svc php-westeurope
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
php-westeurope LoadBalancer 10.0.13.198 51.XXX.XXX.XXX 80:30310/TCP 23m
#head to your browser and open http://EXTERNAL-IP; if you see data then things are working fine
#repeat the exact same steps for the second cluster, and remember to change the values in app_region2.yaml
#switch context to the second cluster
kubectx $rclustername
#deploy and expose the application using type loadbalancer
$ kubectl apply -f app/app_region2.yaml
$ kubectl expose pod php-northeurope --type=LoadBalancer --port 80
#make sure your container is running and you have got a Public IP
$ kubectl get pods -l location=northeurope
NAME READY STATUS RESTARTS AGE
php-northeurope 1/1 Running 0 39s
$ kubectl get svc php-northeurope
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
php-northeurope LoadBalancer 10.0.171.160 52.XXX.XXX.XXX 80:32214/TCP 46m
#test again, http://EXTERNAL-IP, all goes well, proceed.
```
5. Create a Traffic Manager profile
Now that we have 2 read endpoints for our application, let's use Azure Traffic Manager to load balance across them.
Creating a Traffic Manager profile should be straightforward; head to the [docs](https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-configure-weighted-routing-method) and follow along, but note the below (a rough CLI sketch follows this list):
* You can choose any routing method; in my case I chose "Weighted" to round-robin across the endpoints
* Your endpoint type will be IP Address
* You won't find your IPs in the endpoint selection unless you created DNS names for them beforehand, so head to your MC_ resource group -> find your IP -> Configuration -> create a DNS name -> save; repeat for the second cluster.
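If you'd rather script it, a hedged sketch looks roughly like this; the profile name, DNS label and endpoint targets are placeholders:
```shell
#hedged sketch: a weighted Traffic Manager profile with one endpoint per region
az network traffic-manager profile create \
  --resource-group $rg \
  --name aks-ignite-tm \
  --routing-method Weighted \
  --unique-dns-name aks-ignite-tm-demo
az network traffic-manager endpoint create \
  --resource-group $rg \
  --profile-name aks-ignite-tm \
  --name westeurope-endpoint \
  --type externalEndpoints \
  --target <dns-name-of-the-westeurope-public-ip> \
  --weight 1
az network traffic-manager endpoint create \
  --resource-group $rg \
  --profile-name aks-ignite-tm \
  --name northeurope-endpoint \
  --type externalEndpoints \
  --target <dns-name-of-the-northeurope-public-ip> \
  --weight 1
```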
You should end up with something similar to the below
Configuration
![conf](tm_conf.png)
Endpoints
![endpoints](tm_endpoints.png)
6. Find the DNS name of your Traffic Manager profile (located in the Overview section) and test it; this concludes our demo!
#### Important Notes
1. I used Docker Hub for my application image, as Azure Container Registry can't be public
2. In production you're highly advised to make use of Azure Container Registry, which has features like geo-replication that help you ship your images to a remote region (a minimal sketch follows these notes)
3. Relational databases are just a pain when it comes to multi-region deployments; try to avoid them as much as you can, and if you can't, keep your expectations reasonable
4. The above was just for demo purposes and was created in a rush; don't use it as-is in production, please follow the best practices
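For the ACR point above, a minimal sketch; the registry name is a placeholder, and geo-replication requires the Premium SKU:
```shell
#hedged sketch: a Premium registry replicated to the secondary region
az acr create --resource-group $rg --name igniteacrdemo --sku Premium
az acr replication create --registry igniteacrdemo --location northeurope
```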

Binary file added: multi_region_mysql/mysql.png (383 KiB, not shown)

Binary file added: multi_region_mysql/tm_conf.png (126 KiB, not shown)

Binary file added: multi_region_mysql/tm_endpoints.png (99 KiB, not shown)