* testing commit permissions to jamesserDev branch in main project

* undo test

* Testing circle CI install az cli

* Updated config.yml

* Updated config.yml

* Updated config.yml

* Updated config.yml

* Updated config.yml

* Updated config.yml

* Updated config.yml

* Updated config.yml

* Updated config.yml

* Updated config.yml

* Updated config.yml

* Updated config.yml

* Updated config.yml

* Updated config.yml

* Updated config.yml

* Updated config.yml

* Pushing a script to test CI CD with

* testing

* Updated config.yml

* Updated config.yml

* Updated config.yml

* Updated config.yml

* Updated config.yml

* Updated config.yml

* Updated config.yml

* Updated config.yml

* Updated config.yml

* Updated config.yml

* Updated config.yml

* added section in testing which shows a PoC of automated azure testing we can do by creating a resource group, comparing results, and deleting it as part of the test pipeline

* deleted old Azure test and removed lines in circleci to call two separate tests. Now everything runs out of the main test folder

* forgot to add circle CI script, adding now

* Putting in AKS Automation test, modifying the requirements to include envsubst, and making a few script-hardening changes to the normal AKS test

* added kubectl to circleci image

* trying to install ssh key for circleci

* changed ssh key to be different format

* testing default add-keys behavior

* added helm-client to circleci

* Update README.md

breaking change to test CICD

* added validation that Resource group creates quickly so that we can fail fast in cases where Azure breaks

* fixed merge conflict and adding again test so we fail fast if there are any issues with Azure

* adding breaking change to test circleci

* fixing breaking change

* Updated config.yml

* Updated config.yml

* Update README.md

* Update README.md

* polishing documentation wording and fixing breaking change where priority is needed to create Application Gateway V2

* Update README.md

Removing priority

* Update README.md

* Update README.md

* Creating a more reusable test framework. Still not great, but that problem is more difficult and can be solved later

* put some polish on AKS Deployment Tutorial

* fixed a typo

* Updated config.yml

* Updated config.yml

* Updated config.yml to combine rm id_rsa and id_rsa.pub commands to a single rm command & updates the name

* Updated config.yml
This commit is contained in:
jasonmesser7 2022-07-12 11:12:12 -07:00 committed by GitHub
Parent bff1857a5a
Commit 6995bb74b5
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
11 changed files: 619 additions and 130 deletions

View file

@@ -1,4 +1,8 @@
version: 2
version: 2.1
orbs:
azure-cli: circleci/azure-cli@1.0.0
kubernetes: circleci/kubernetes@1.3.0
helm: circleci/helm@2.0.0
jobs:
build:
docker:
@@ -8,6 +12,10 @@ jobs:
steps:
- checkout
- kubernetes/install-kubectl
- helm/install-helm-client
- azure-cli/install
- azure-cli/login-with-service-principal
- restore_cache:
keys:
- v1-dependencies-{{ checksum "requirements.txt" }}
@@ -25,9 +33,17 @@ jobs:
paths:
- ./venv
key: v1-dependencies-{{ checksum "requirements.txt" }}
- run:
name: run tests
name: removing keys that are invalid for az aks create
command: rm ~/.ssh/id_rsa ~/.ssh/id_rsa.pub
- run:
name: generate new SSH-2 RSA keys
command: ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
- run:
name: run SimDem tests
command: |
. venv/bin/activate
python main.py test test
@@ -35,4 +51,9 @@ jobs:
- store_artifacts:
path: test-reports
destination: test-reports
workflows:
version: 2
workflow:
jobs:
- build

View file

@@ -1,4 +1,5 @@
Welcome to this tutorial where we will take you step by step in creating an AKS Application with a custom domain that is secured via https. This tutorial assumes you are logged into Azure CLI already and have selected a subscription to use with the CLI. It also assumes that you have helm installed (Instructions ca be found here https://helm.sh/docs/intro/install/). If you have not done this already. Press b and hit ctl c to exit the program.
# Quickstart: Deploy a Scalable & Secure Azure Kubernetes Service cluster using the Azure CLI
Welcome to this tutorial where we will take you step by step in creating an Azure Kubernetes Web Application with a custom domain that is secured via HTTPS. This tutorial assumes you are already logged into the Azure CLI and have selected a subscription to use with the CLI. It also assumes that you have Helm installed (instructions can be found here: https://helm.sh/docs/intro/install/). If you have not done this already, press b and then Ctrl+C to exit the program.
To log in to the Azure CLI and select a subscription, run `az login`, then `az account list --output table`, followed by `az account set --subscription "name of subscription to use"`.
@@ -30,7 +31,7 @@ For example mycooldomain - this domain is already taken btw :)
Note - Do not add any capitalization or .com
```
echo $CUSTOM_DOMAIN_NAME
if [[ ! $CUSTOM_DOMAIN_NAME =~ ^[a-z][a-z0-9-]{1,61}[a-z0-9] ]]; then echo "Invalid Domain, re enter your domain by pressing b and running 'export CUSTOM_DOMAIN_NAME="customdomainname"' then press r to re-run the previous command and validate the custom domain"; else echo "Custom Domain Set!"; fi;
```
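The validation one-liner above is dense. As an illustration only, the same rule can be wrapped in a small helper (`is_valid_domain` is a hypothetical name, not part of the tutorial; the trailing `$` anchor makes it slightly stricter than the tutorial's unanchored test):

```shell
# Hypothetical helper wrapping the same domain rule used above:
# a lowercase letter first, then 1-61 letters/digits/hyphens,
# ending in a letter or digit (3-63 characters total).
is_valid_domain() {
  [[ $1 =~ ^[a-z][a-z0-9-]{1,61}[a-z0-9]$ ]]
}

is_valid_domain "mycooldomain" && echo "valid"    # prints: valid
is_valid_domain "MyDomain.com" || echo "invalid"  # prints: invalid (uppercase and dot not allowed)
```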
For the email address, enter a valid email address, e.g. sarajane@gmail.com
@@ -39,234 +40,304 @@ echo $SSL_EMAIL_ADDRESS
```
## Create A Resource Group
The first step is to create a resource group.
An Azure resource group is a logical group in which Azure resources are deployed and managed. When you create a resource group, you are prompted to specify a location. This location is:
- The storage location of your resource group metadata.
- Where your resources will run in Azure if you don't specify another region during resource creation.
Validate that ResourceGroup does not already exist
Validate Resource Group does not already exist. If it does, select a new resource group name by running the following:
```
if [ "$(az group exists --name $RESOURCE_GROUP_NAME)" = 'true' ]; then export RAND=$RANDOM; export RESOURCE_GROUP_NAME="$RESOURCE_GROUP_NAME$RAND"; echo "Your new Resource Group Name is $RESOURCE_GROUP_NAME"; fi
```
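To see what this guard does without touching Azure, here is a sketch with the `az group exists` call stubbed out (`group_exists` is a hypothetical stand-in, not a real command):

```shell
# Sketch of the rename-on-collision logic above, with the Azure CLI
# call replaced by a stub so it can run anywhere.
RESOURCE_GROUP_NAME="testResourceGroup"
group_exists() { echo 'true'; }  # stub: pretend the name is already taken

if [ "$(group_exists)" = 'true' ]; then
  RAND=$RANDOM
  RESOURCE_GROUP_NAME="$RESOURCE_GROUP_NAME$RAND"
fi
echo "Your new Resource Group Name is $RESOURCE_GROUP_NAME"
```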
Create Resource Group
Create a resource group using the az group create command:
```
az group create --name $RESOURCE_GROUP_NAME --location $RESOURCE_LOCATION
```
Results:
## Create an AKS Cluster
The next step is to create an AKS Cluster. This can be done with the following command -
```expected_similarity=0.5
{
"id": "/subscriptions/bb318642-28fd-482d-8d07-79182df07999/resourceGroups/testResourceGroup24763",
"location": "eastus",
"managedBy": null,
"name": "testResourceGroup",
"properties": {
"provisioningState": "Succeeded"
},
"tags": null,
"type": "Microsoft.Resources/resourceGroups"
}
```
## Create AKS Cluster
Create an AKS cluster using the az aks create command with the --enable-addons monitoring parameter to enable Container insights. The following example creates a cluster named myAKSCluster with one node:
This will take a few minutes
```
az aks create --resource-group $RESOURCE_GROUP_NAME --name $AKS_CLUSTER_NAME --node-count 1 --enable-addons monitoring --generate-ssh-keys
```
## Install az aks CLI locally using the az aks install-cli command
## Connect to the cluster
To manage a Kubernetes cluster, use the Kubernetes command-line client, kubectl. kubectl is already installed if you use Azure Cloud Shell.
1. Install az aks CLI locally using the az aks install-cli command
```
if ! [ -x "$(command -v kubectl)" ]; then az aks install-cli; fi
```
## Download AKS Credentials
Configure kubectl to connect to your Kubernetes cluster using the az aks get-credentials command. The following command:
Downloads credentials and configures the Kubernetes CLI to use them.
Uses ~/.kube/config, the default location for the Kubernetes configuration file. Specify a different location for your Kubernetes configuration file using --file argument. WARNING - This will overwrite any existing credentials with the same entry
2. Configure kubectl to connect to your Kubernetes cluster using the az aks get-credentials command. The following command:
- Downloads credentials and configures the Kubernetes CLI to use them.
- Uses ~/.kube/config, the default location for the Kubernetes configuration file. Specify a different location for your Kubernetes configuration file using --file argument.
> [!WARNING]
> This will overwrite any existing credentials with the same entry
```
az aks get-credentials --resource-group $RESOURCE_GROUP_NAME --name $AKS_CLUSTER_NAME --overwrite-existing
```
Verify Connection
Verify the connection to your cluster using the kubectl get command. This command returns a list of the cluster nodes.
3. Verify the connection to your cluster using the kubectl get command. This command returns a list of the cluster nodes.
```
kubectl get nodes
```
## Deploy the Application
A test voting app is already prepared. To deploy this app run the following command
A Kubernetes manifest file defines a cluster's desired state, such as which container images to run.
In this quickstart, you will use a manifest to create all objects needed to run the Azure Vote application. This manifest includes two Kubernetes deployments:
- The sample Azure Vote Python applications.
- A Redis instance.
Two Kubernetes Services are also created:
- An internal service for the Redis instance.
- An external service to access the Azure Vote application from the internet.
A test voting app YML file is already prepared. To deploy this app run the following command
```
kubectl apply -f azure-vote-start.yml
```
Store the public IP Address as an environment variable:
## Test The Application
When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
Check progress using the kubectl get service command.
```
kubectl get service
```
Store the public IP Address as an environment variable for later use.
>[!Note]
> This command loops for 2 minutes and queries the output of kubectl get service for the IP address. Sometimes it can take a few seconds to propagate correctly
```
runtime="2 minute"; endtime=$(date -ud "$runtime" +%s); while [[ $(date -u +%s) -le $endtime ]]; do export IP_ADDRESS=$(kubectl get service azure-vote-front --output jsonpath='{.status.loadBalancer.ingress[0].ip}'); if ! [ -z $IP_ADDRESS ]; then break; else sleep 10; fi; done
```
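The one-liner above is an instance of a generic poll-until-set pattern. A sketch with the kubectl query stubbed out (`get_ip` is a hypothetical stand-in; `date -ud` assumes GNU date, as in the tutorial):

```shell
# Generic timeout/poll loop; get_ip stands in for the kubectl jsonpath query.
get_ip() { echo "20.1.2.3"; }  # stub: a real run would query the service

runtime="2 minute"
endtime=$(date -ud "$runtime" +%s)
while [[ $(date -u +%s) -le $endtime ]]; do
  IP_ADDRESS=$(get_ip)
  if [ -n "$IP_ADDRESS" ]; then break; fi  # stop as soon as a value appears
  sleep 10
done
echo "$IP_ADDRESS"
```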
Validate IP Address by running:
After running this command you should be able to see your application running!
Note - The IP may take ~30 seconds to resolve
Validate IP Address by running the following:
```
echo $IP_ADDRESS
```
## Deploy a new Application Gateway
The next step is to add Application Gateway as an Ingress controller.
# Add Application Gateway Ingress Controller
The Application Gateway Ingress Controller (AGIC) is a Kubernetes application, which makes it possible for Azure Kubernetes Service (AKS) customers to leverage Azure's native Application Gateway L7 load-balancer to expose cloud software to the Internet. AGIC monitors the Kubernetes cluster it is hosted on and continuously updates an Application Gateway, so that selected services are exposed to the Internet
Create a Public IP for Application Gateway by running the following:
AGIC helps eliminate the need to have another load balancer/public IP in front of the AKS cluster and avoids multiple hops in your datapath before requests reach the AKS cluster. Application Gateway talks to pods using their private IP directly and does not require NodePort or KubeProxy services. This also brings better performance to your deployments.
## Deploy a new Application Gateway
1. Create a Public IP for Application Gateway by running the following:
```
az network public-ip create --name $PUBLIC_IP_NAME --resource-group $RESOURCE_GROUP_NAME --allocation-method Static --sku Standard
```
Create Vnet for Application Gateway by running the following:
2. Create a Virtual Network (VNet) for Application Gateway by running the following:
```
az network vnet create --name $VNET_NAME --resource-group $RESOURCE_GROUP_NAME --address-prefix 11.0.0.0/8 --subnet-name $SUBNET_NAME --subnet-prefix 11.1.0.0/16
```
Create Application Gateway by running the following:
3. Create Application Gateway by running the following:
This will take ~5 minutes
> [!NOTE]
> This will take around 5 minutes
```
az network application-gateway create --name $APPLICATION_GATEWAY_NAME --location $RESOURCE_LOCATION --resource-group $RESOURCE_GROUP_NAME --sku Standard_v2 --public-ip-address $PUBLIC_IP_NAME --vnet-name $VNET_NAME --subnet $SUBNET_NAME --priority 100
```
## Enable the AGIC add-on in existing AKS cluster and peer Vnets through Azure CLI
## Enable the AGIC add-on in existing AKS cluster
Store Application Gateway ID by running the following:
1. Store Application Gateway ID by running the following:
```
APPLICATION_GATEWAY_ID=$(az network application-gateway show --name $APPLICATION_GATEWAY_NAME --resource-group $RESOURCE_GROUP_NAME --output tsv --query "id")
```
Enable Application Gateway Ingress Addon by running the following:
2. Enable Application Gateway Ingress Addon by running the following:
This may take a few minutes
> [!NOTE]
> This will take a few minutes
```
az aks enable-addons --name $AKS_CLUSTER_NAME --resource-group $RESOURCE_GROUP_NAME --addon ingress-appgw --appgw-id $APPLICATION_GATEWAY_ID
```
Peer the two virtual networks together
Since we deployed the AKS cluster in its own virtual network and the Application Gateway in another virtual network, you'll need to peer the two virtual networks together in order for traffic to flow from the Application Gateway to the pods in the cluster. Peering the two virtual networks requires running the Azure CLI command two separate times, to ensure that the connection is bi-directional. The first command will create a peering connection from the Application Gateway virtual network to the AKS virtual network; the second command will create a peering connection in the other direction.
Store the node resource group by running the following:
3. Store the node resource group as an environment variable by running the following:
```
NODE_RESOURCE_GROUP=$(az aks show --name $AKS_CLUSTER_NAME --resource-group $RESOURCE_GROUP_NAME --output tsv --query "nodeResourceGroup")
```
Store the Vnet name by running the following:
4. Store the Vnet name as an environment variable by running the following:
```
AKS_VNET_NAME=$(az network vnet list --resource-group $NODE_RESOURCE_GROUP --output tsv --query "[0].name")
```
Store the Vnet ID by running the following:
5. Store the Vnet ID as an environment variable by running the following:
```
AKS_VNET_ID=$(az network vnet show --name $AKS_VNET_NAME --resource-group $NODE_RESOURCE_GROUP --output tsv --query "id")
```
Create the peering from Application Gateway to AKS by runnig the following:
## Peer the two virtual networks together
Since we deployed the AKS cluster in its own virtual network and the Application Gateway in another virtual network, you'll need to peer the two virtual networks together in order for traffic to flow from the Application Gateway to the pods in the cluster. Peering the two virtual networks requires running the Azure CLI command two separate times, to ensure that the connection is bi-directional. The first command will create a peering connection from the Application Gateway virtual network to the AKS virtual network; the second command will create a peering connection in the other direction.
1. Create the peering from Application Gateway to AKS by running the following:
```
az network vnet peering create --name $APPGW_TO_AKS_PEERING_NAME --resource-group $RESOURCE_GROUP_NAME --vnet-name $VNET_NAME --remote-vnet $AKS_VNET_ID --allow-vnet-access
```
Grab Id of Application Gateway Vnet:
2. Store the ID of the Application Gateway VNet as an environment variable by running the following:
```
APPLICATION_GATEWAY_VNET_ID=$(az network vnet show --name $VNET_NAME --resource-group $RESOURCE_GROUP_NAME --output tsv --query "id")
```
Create Vnet Peering from AKS to Application Gateway
3. Create Vnet Peering from AKS to Application Gateway
```
az network vnet peering create --name $AKS_TO_APPGW_PEERING_NAME --resource-group $NODE_RESOURCE_GROUP --vnet-name $AKS_VNET_NAME --remote-vnet $APPLICATION_GATEWAY_VNET_ID --allow-vnet-access
```
## Update YAML file to use AppGw Ingress:
```
kubectl apply -f azure-vote-agic.yml
```
Validate that the original IP Address is no longer working
```
echo $IP_ADDRESS
```
Get Ingress and Check New Address
```
kubectl get ingress
```
Store New IP Address
4. Store the new IP address as an environment variable by running the following command:
```
runtime="2 minute"; endtime=$(date -ud "$runtime" +%s); while [[ $(date -u +%s) -le $endtime ]]; do export IP_ADDRESS=$(az network public-ip show --resource-group $RESOURCE_GROUP_NAME --name $PUBLIC_IP_NAME --query ipAddress --output tsv); if ! [ -z $IP_ADDRESS ]; then break; else sleep 10; fi; done
```
Store Public IP ID
## Apply updated application YAML complete with AGIC
In order to use the Application Gateway Ingress Controller we deployed, we need to re-deploy an updated Voting App YML file. The following command will update the application:
The full updated YML file can be viewed at `azure-vote-agic.yml`
```
export PUBLIC_IP_ID=$(az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$IP_ADDRESS')].[id]" --output tsv)
kubectl apply -f azure-vote-agic.yml
```
Set Public IP to Custom DNS Name by running the following:
```
az network public-ip update --ids $PUBLIC_IP_ID --dns-name $CUSTOM_DOMAIN_NAME
```
## Check that the application is reachable
Now that the Application Gateway is set up to serve traffic to the AKS cluster, let's verify that your application is reachable.
Check custom domain to see application running -
```
az network public-ip show --ids $PUBLIC_IP_ID --query "[dnsSettings.fqdn]" --output tsv
```
Store Custom Domain by running the following:
```
export FQDN=$(az network public-ip show --ids $PUBLIC_IP_ID --query "[dnsSettings.fqdn]" --output tsv)
```
# Part 3 Install SSL
Create namespace for cert manager by running the following:
```
kubectl create namespace cert-manager
```
Apply Cert-Manager
```
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.7.0/cert-manager.crds.yaml
```
Label the Namespace
```
kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
```
Add jetstack addon via helm
```
helm repo add jetstack https://charts.jetstack.io
```
Update Repo
```
helm repo update
```
Install Cert-Manager addon via helm by running the following:
```
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.7.0
```
Apply CertIssuer YAML File
```
envsubst < cluster-issuer-prod.yaml | kubectl apply -f -
```
Apply Updated AKS Application via YAML file with SSL
```
envsubst < azure-vote-agic-ssl.yml | kubectl apply -f -
```
Check to make sure Ingress is working
Check that the sample application you created is up and running by either visiting the IP address of the Application Gateway that get from running the following command or check with curl. It may take Application Gateway a minute to get the update, so if the Application Gateway is still in an "Updating" state on Portal, then let it finish before trying to reach the IP address. Run the following to check the status:
```
kubectl get ingress
```
Check SSL Certificate - The following command will query the status of the SSL certificate for 3 minutes.
In rare occasions it may take up to 15 minutes for Lets Encrypt to issue a successful challenge and the ready state to be 'True'
## Add custom subdomain to AGIC
Now Application Gateway Ingress has been added to the application gateway the next step is to add a custom domain. This will allow the endpoint to be reached by a human readable URL as well as allow for SSL Termination at the endpoint.
1. Store the unique ID of the public IP address as an environment variable by running the following:
```
runtime="3 minute"; endtime=$(date -ud "$runtime" +%s); while [[ $(date -u +%s) -le $endtime ]]; do STATUS=$(kubectl get certificate --output jsonpath={..status.conditions[0].status}); echo $STATUS; if [ "$STATUS" = 'True' ]; then break; else sleep 10; fi; done
export PUBLIC_IP_ID=$(az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$IP_ADDRESS')].[id]" --output tsv)
```
Validate certificate status is true - Sometimes there may be a slight delay
2. Update public IP to respond to custom domain requests by running the following:
```
kubectl get certificate
az network public-ip update --ids $PUBLIC_IP_ID --dns-name $CUSTOM_DOMAIN_NAME
```
## Browse your secured AKS Deployment!
Paste the following link into your browser with https:// as the prefix
3. Validate the resource is reachable via the custom domain.
```
echo $FQDN
az network public-ip show --ids $PUBLIC_IP_ID --query "[dnsSettings.fqdn]" --output tsv
```
4. Store the custom domain as an environment variable. This will be used later when setting up HTTPS termination.
```
export FQDN=$(az network public-ip show --ids $PUBLIC_IP_ID --query "[dnsSettings.fqdn]" --output tsv)
```
# Add HTTPS termination to custom domain
At this point in the tutorial you have an AKS web app with Application Gateway as the Ingress controller and a custom domain you can use to access your application. The next step is to add an SSL certificate to the domain so that users can reach your application securely via https.
## Set Up Cert Manager
In order to add HTTPS we are going to use Cert Manager. Cert Manager is an open source tool used to obtain and manage SSL certificates for Kubernetes deployments. Cert Manager obtains certificates from a variety of Issuers, both popular public Issuers and private Issuers, ensures the certificates are valid and up-to-date, and attempts to renew certificates at a configured time before expiry.
1. In order to install cert-manager, we must first create a namespace to run it in. This tutorial will install cert-manager into the cert-manager namespace. It is possible to run cert-manager in a different namespace, although you will need to make modifications to the deployment manifests.
```
kubectl create namespace cert-manager
```
2. We can now install the cert-manager CRDs. These are included in a single YAML manifest file, which can be applied by running the following:
```
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.7.0/cert-manager.crds.yaml
```
3. Add the certmanager.k8s.io/disable-validation: "true" label to the cert-manager namespace by running the following. This will allow the system resources that cert-manager requires to bootstrap TLS to be created in its own namespace.
```
kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
```
## Obtain certificate via Helm Charts
Helm is a Kubernetes deployment tool for automating creation, packaging, configuration, and deployment of applications and services to Kubernetes clusters.
Cert-manager provides Helm charts as a first-class method of installation on Kubernetes.
1. Add the Jetstack Helm repository
This repository is the only supported source of cert-manager charts. There are some other mirrors and copies across the internet, but those are entirely unofficial and could present a security risk.
```
helm repo add jetstack https://charts.jetstack.io
```
2. Update local Helm Chart repository cache
```
helm repo update
```
3. Install Cert-Manager addon via helm by running the following:
```
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.7.0
```
4. Apply Certificate Issuer YAML File
ClusterIssuers are Kubernetes resources that represent certificate authorities (CAs) that are able to generate signed certificates by honoring certificate signing requests. All cert-manager certificates require a referenced issuer that is in a ready condition to attempt to honor the request.
The issuer we are using can be found in the `cluster-issuer-prod.yaml` file
```
envsubst < cluster-issuer-prod.yaml | kubectl apply -f -
```
5. Update the Voting App application to use Cert-Manager to obtain an SSL certificate.
The full YAML file can be found in `azure-vote-agic-ssl.yml`
```
envsubst < azure-vote-agic-ssl.yml | kubectl apply -f -
```
## Validate application is working
Wait for the SSL certificate to be issued. The following command will query the status of the SSL certificate for up to 10 minutes.
On rare occasions it may take up to 15 minutes for Let's Encrypt to issue a successful challenge and for the ready state to be 'True'
```
runtime="10 minute"; endtime=$(date -ud "$runtime" +%s); while [[ $(date -u +%s) -le $endtime ]]; do STATUS=$(kubectl get certificate --output jsonpath={..status.conditions[0].status}); echo $STATUS; if [ "$STATUS" = 'True' ]; then break; else sleep 10; fi; done
```
Validate the SSL certificate status is True by running the following command:
```
kubectl get certificate --output jsonpath={..status.conditions[0].status}
```
Results:
```expected_similarity=0.8
True
```
## Browse your AKS Deployment Secured via HTTPS!
Run the following command to get the HTTPS endpoint for your application:
>[!Note]
> It often takes 2-3 minutes for the SSL certificate to propagate and the site to be reachable via HTTPS
```
echo https://$FQDN
```
Paste this into the browser to validate your deployment.

View file

@@ -7,6 +7,12 @@ succession. This script also lists commands known not to work.
Ensure the test environment is correctly setup.
Create a resource group to deploy Azure Resources to
```
az group create --name $RESOURCE_GROUP_NAME --location $RESOURCE_LOCATION
```
## SimDem version check
```
@@ -37,6 +43,13 @@ test files. The [prerequisite test script](./prerequisites/README.md)
validates whether the file exists and, if it doesn't, will execute
and create it.
Run Azure Tests
This will run our Azure test scripts.
The [Azure scripts](./azureTests/README.md)
create Azure resources and validate that our
current documentation is up to date.
Each [prerequisite](./prerequisites/README.md) will only be run once,
so even though this particular prereq appears twice it will only
execute once. This is important when building multi-part tutorials/
@@ -107,7 +120,7 @@ date
Results:
```expected_Similarity=0.2
```expected_Similarity=0.1
Tue Jun 6 15:23:53 UTC 2017
```
@@ -154,4 +167,9 @@ Results:
Normal Underlined Normal
```
Delete Azure resource group
```
az group delete --name $RESOURCE_GROUP_NAME --no-wait --yes
```

View file

@@ -0,0 +1,21 @@
# Azure Script Testing
Replace this file with the readme.md of the Azure scenario you would like to test. The goal is to automatically do this anytime a readme is changed, but we haven't worked out that feature yet :)
If there are any additional files you take advantage of, you can put them in this folder as well. For example, for the AKSDeployment test we require some YML files which are also placed here.
You need to use the environment variable $RESOURCE_GROUP_NAME for any resource group that you create. This will deploy to a test resource group which is automatically deleted at the end of the process.
Create Azure resources
```
echo $RESOURCE_GROUP_NAME
```
Check resource group exists
```
az group exists --name $RESOURCE_GROUP_NAME
```
Finished with Azure Tests for now...

View file

@@ -0,0 +1,112 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: azure-vote-back
spec:
replicas: 1
selector:
matchLabels:
app: azure-vote-back
template:
metadata:
labels:
app: azure-vote-back
spec:
nodeSelector:
"kubernetes.io/os": linux
containers:
- name: azure-vote-back
image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
env:
- name: ALLOW_EMPTY_PASSWORD
value: "yes"
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 250m
memory: 256Mi
ports:
- containerPort: 6379
name: redis
---
apiVersion: v1
kind: Service
metadata:
name: azure-vote-back
spec:
ports:
- port: 6379
selector:
app: azure-vote-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: azure-vote-front
spec:
replicas: 1
selector:
matchLabels:
app: azure-vote-front
template:
metadata:
labels:
app: azure-vote-front
spec:
nodeSelector:
"kubernetes.io/os": linux
containers:
- name: azure-vote-front
image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 250m
memory: 256Mi
ports:
- containerPort: 80
env:
- name: REDIS
value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
name: azure-vote-front
spec:
type:
ports:
- port: 80
selector:
app: azure-vote-front
---
# INGRESS WITH SSL PROD
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: azure-vote-ingress-agic-ssl
annotations:
kubernetes.io/ingress.class: azure/application-gateway
kubernetes.io/tls-acme: 'true'
appgw.ingress.kubernetes.io/ssl-redirect: "true"
cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
tls:
- hosts:
- $FQDN
secretName: azure-vote-agic-secret
rules:
- host: $FQDN
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: azure-vote-front
port:
number: 80

View file

@@ -0,0 +1,104 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: azure-vote-back
spec:
replicas: 1
selector:
matchLabels:
app: azure-vote-back
template:
metadata:
labels:
app: azure-vote-back
spec:
nodeSelector:
"kubernetes.io/os": linux
containers:
- name: azure-vote-back
image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
env:
- name: ALLOW_EMPTY_PASSWORD
value: "yes"
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 250m
memory: 256Mi
ports:
- containerPort: 6379
name: redis
---
apiVersion: v1
kind: Service
metadata:
name: azure-vote-back
spec:
ports:
- port: 6379
selector:
app: azure-vote-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: azure-vote-front
spec:
replicas: 1
selector:
matchLabels:
app: azure-vote-front
template:
metadata:
labels:
app: azure-vote-front
spec:
nodeSelector:
"kubernetes.io/os": linux
containers:
- name: azure-vote-front
image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 250m
memory: 256Mi
ports:
- containerPort: 80
env:
- name: REDIS
value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
name: azure-vote-front
spec:
type:
ports:
- port: 80
selector:
app: azure-vote-front
---
#Application Gateway Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: azure-vote-front
annotations:
kubernetes.io/ingress.class: azure/application-gateway
spec:
rules:
- http:
paths:
- path: /
backend:
service:
name: azure-vote-front
port:
number: 80
pathType: Exact

View file

@@ -0,0 +1,85 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: azure-vote-back
spec:
replicas: 1
selector:
matchLabels:
app: azure-vote-back
template:
metadata:
labels:
app: azure-vote-back
spec:
nodeSelector:
"kubernetes.io/os": linux
containers:
- name: azure-vote-back
image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
env:
- name: ALLOW_EMPTY_PASSWORD
value: "yes"
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 250m
memory: 256Mi
ports:
- containerPort: 6379
name: redis
---
apiVersion: v1
kind: Service
metadata:
name: azure-vote-back
spec:
ports:
- port: 6379
selector:
app: azure-vote-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: azure-vote-front
spec:
replicas: 1
selector:
matchLabels:
app: azure-vote-front
template:
metadata:
labels:
app: azure-vote-front
spec:
nodeSelector:
"kubernetes.io/os": linux
containers:
- name: azure-vote-front
image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 250m
memory: 256Mi
ports:
- containerPort: 80
env:
- name: REDIS
value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
name: azure-vote-front
spec:
type: LoadBalancer
ports:
- port: 80
selector:
app: azure-vote-front

View file

@@ -0,0 +1,34 @@
#!/bin/bash
#kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
# You must replace this email address with your own.
# Let's Encrypt will use this to contact you about expiring
# certificates, and issues related to your account.
email: $SSL_EMAIL_ADDRESS
# ACME server URL for Let's Encrypt's prod environment.
# The staging environment will not issue trusted certificates but is
# used to ensure that the verification process is working properly
# before moving to production
server: https://acme-v02.api.letsencrypt.org/directory
# Secret resource used to store the account's private key.
privateKeySecretRef:
name: example-issuer-account-key
# Enable the HTTP-01 challenge provider
# you prove ownership of a domain by ensuring that a particular
# file is present at the domain
solvers:
- http01:
ingress:
class: azure/application-gateway
#EOF
# References:
# https://docs.microsoft.com/en-us/azure/application-gateway/ingress-controller-letsencrypt-certificate-application-gateway
# https://cert-manager.io/docs/configuration/acme/
# kubectl delete -f clusterIssuer.yaml
# kubectl apply -f clusterIssuer-prod.yaml

View file

@@ -1,4 +1,15 @@
{
"TEST": "Hello from the test script",
"DIR_IN_HOME": "~/should/be/expanded"
"DIR_IN_HOME": "~/should/be/expanded",
"RESOURCE_GROUP_NAME": "testResourceGroup",
"RESOURCE_LOCATION": "eastus",
"AKS_CLUSTER_NAME": "myAKSCluster",
"PUBLIC_IP_NAME": "myPublicIp",
"VNET_NAME": "myVnet",
"SUBNET_NAME": "mySubnet",
"APPLICATION_GATEWAY_NAME": "myApplicationGateway",
"APPGW_TO_AKS_PEERING_NAME": "AppGWtoAKSVnetPeering",
"AKS_TO_APPGW_PEERING_NAME": "AKStoAppGWVnetPeering",
"CUSTOM_DOMAIN_NAME": "circleciautomatedtesting",
"SSL_EMAIL_ADDRESS": "jamesser@microsoft.com"
}

View file

@@ -45,7 +45,18 @@ Results:
```
{
"TEST": "Hello from the test script",
"DIR_IN_HOME": "~/should/be/expanded"
"DIR_IN_HOME": "~/should/be/expanded",
"RESOURCE_GROUP_NAME": "testResourceGroup",
"RESOURCE_LOCATION": "eastus",
"AKS_CLUSTER_NAME": "myAKSCluster",
"PUBLIC_IP_NAME": "myPublicIp",
"VNET_NAME": "myVnet",
"SUBNET_NAME": "mySubnet",
"APPLICATION_GATEWAY_NAME": "myApplicationGateway",
"APPGW_TO_AKS_PEERING_NAME": "AppGWtoAKSVnetPeering",
"AKS_TO_APPGW_PEERING_NAME": "AKStoAppGWVnetPeering",
"CUSTOM_DOMAIN_NAME": "circleciautomatedtesting",
"SSL_EMAIL_ADDRESS": "jamesser@microsoft.com"
}
```

View file

@@ -2,4 +2,5 @@ colorama
flask
flask-socketio
pexpect
envsubst