diff --git a/055-ChaosStudio4AKS/.wordlist.txt b/055-ChaosStudio4AKS/.wordlist.txt new file mode 100644 index 000000000..a4018fc27 --- /dev/null +++ b/055-ChaosStudio4AKS/.wordlist.txt @@ -0,0 +1,18 @@ +contosoappmysql +PizzaApp +Rhoads +Falgout +EastUS +PizzaAppEastUS +PizzaAppWestUS +namespaces +TTL +GeoPeeker +instanceID +hangry +PizzeriaApp +dataSourceURL +appConfig +databaseType +globalConfig +ChaosStudio diff --git a/055-ChaosStudio4AKS/Coach/Lectures.pptx b/055-ChaosStudio4AKS/Coach/Lectures.pptx new file mode 100644 index 000000000..b7fb51cba Binary files /dev/null and b/055-ChaosStudio4AKS/Coach/Lectures.pptx differ diff --git a/055-ChaosStudio4AKS/Coach/README.md b/055-ChaosStudio4AKS/Coach/README.md new file mode 100644 index 000000000..28fca1069 --- /dev/null +++ b/055-ChaosStudio4AKS/Coach/README.md @@ -0,0 +1,74 @@ +# What The Hack - ChaosStudio4AKS - Coach Guide + +## Introduction + +Welcome to the coach's guide for the ChaosStudio4AKS What The Hack. Here you will find links to specific guidance for coaches for each of the challenges. + +This hack includes an optional [lecture presentation](Lectures.pptx) that features short presentations to introduce key topics associated with each challenge. It is recommended that the host present each short presentation before attendees kick off that challenge. + +**NOTE:** If you are a Hackathon participant, this is the answer guide. Don't cheat yourself by looking at these during the hack! Go learn something. :) + +## Coach's Guides + +- Challenge 00: **[Prerequisites - Ready, Set, GO!](./Solution-00.md)** + - Prepare your workstation to work with Azure. +- Challenge 01: **[Is your Application ready for the Super Bowl?](./Solution-01.md)** + - How does your application handle failure during large scale events? +- Challenge 02: **[My AZ burned down, now what?](./Solution-02.md)** + - Can your application survive an Azure outage of 1 or more Availability Zones? +- Challenge 03: **[Godzilla takes out an Azure region!](./Solution-03.md)** + - Can your application survive a region failure? +- Challenge 04: **[Injecting Chaos into your pipeline](./Solution-04.md)** + - Optional challenge, using Chaos Studio experiments in your CI/CD pipeline + +## Coach Prerequisites + +This hack has pre-reqs that a coach is responsible for understanding and/or setting up BEFORE hosting an event. Please review the [What The Hack Hosting Guide](https://aka.ms/wthhost) for information on how to host a hack event. + +The guide covers the common preparation steps a coach needs to do before any What The Hack event, including how to properly configure Microsoft Teams. + +### Student Resources + +Before the hack, it is the Coach's responsibility to download and package up the contents of the `/Student/Resources` folder of this hack into a "Resources.zip" file. The coach should then provide a copy of the Resources.zip file to all students at the start of the hack. + +Always refer students to the [What The Hack website](https://aka.ms/wth) for the student guide: [https://aka.ms/wth](https://aka.ms/wth) + +**NOTE:** Students should **not** be given a link to the What The Hack repo before or during a hack. The student guide does **NOT** have any links to the Coach's guide or the What The Hack repo on GitHub. + +### Additional Coach Prerequisites (Optional) + +None are required for this hack + +## Azure Requirements + +This hack requires students to have access to an Azure subscription where they can create and consume Azure resources. 
These Azure requirements should be shared with a stakeholder in the organization that will be providing the Azure subscription(s) used by the students.
+
+- Azure subscription with contributor access
+- Visual Studio Code terminal or Azure Shell
+- Latest Azure CLI (if not using Azure Shell)
+- Chaos Studio, Azure Kubernetes Service (AKS) and Traffic Manager services will be used in this hack
+
+
+## Suggested Hack Agenda
+
+- Day 1
+  - Challenge 0 (1.5 hours)
+  - Challenge 1 (2 hours)
+  - Challenge 2 (1 hour)
+  - Challenge 3 (1 hour)
+
+- Day 2
+  - Challenge 4 (4 hours)
+
+## Repository Contents
+
+_The default files & folders are listed below. You may add to this if you want to specify what is in additional sub-folders you may add._
+
+- `./Coach`
+  - Coach's Guide and related files
+- `./Coach/Solutions`
+  - Solution files with completed example answers to a challenge
+- `./Student`
+  - Student's Challenge Guide
+- `./Student/Resources`
+  - Resource files, sample code, scripts, etc. meant to be provided to students. (Must be packaged up by the coach and provided to students at the start of the event)
diff --git a/055-ChaosStudio4AKS/Coach/Solution-00.md b/055-ChaosStudio4AKS/Coach/Solution-00.md
new file mode 100644
index 000000000..0a60b1d6b
--- /dev/null
+++ b/055-ChaosStudio4AKS/Coach/Solution-00.md
@@ -0,0 +1,20 @@
+# Challenge 00: Prerequisites - Ready, Set, GO! - Coach's Guide
+
+**[Home](./README.md)** - [Next Solution >](./Solution-01.md)
+
+## Notes & Guidance
+
+The student will need an Azure subscription with "Contributor" permissions.
+The entirety of this hack's challenges can be done using the [Azure Cloud Shell](#work-from-azure-cloud-shell) in a web browser (fastest path), or you can choose to install the necessary tools on your [local workstation (Windows/WSL, Mac, or Linux)](#work-from-local-workstation).
+
+We recommend installing the tools on your workstation.
+
+- The AKS "contosoappmysql" web front end has a public IP address that you can connect to.
+- If this is an internal AIRS account, keep the security automation happy: create a Network Security Group on the VNet named PizzaAppEastUS / PizzaAppWestUS, with an allow rule for TCP port 8081 (priority 200) and a deny rule for TCP port 3306 (priority 210).
+- The student will need this NSG for a future challenge.
+
+```bash
+
+ kubectl -n mysql get svc
+
+```
diff --git a/055-ChaosStudio4AKS/Coach/Solution-01.md b/055-ChaosStudio4AKS/Coach/Solution-01.md
new file mode 100644
index 000000000..af21050c9
--- /dev/null
+++ b/055-ChaosStudio4AKS/Coach/Solution-01.md
@@ -0,0 +1,36 @@
+# Challenge 01: Is your Application ready for the Super Bowl? - Coach's Guide
+
+[< Previous Solution](./Solution-00.md) - **[Home](./README.md)** - [Next Solution >](./Solution-02.md)
+
+## Notes & Guidance
+
+This challenge is where the student will simulate a pod failure. For Chaos Studio to work with AKS, Chaos Mesh will need to be installed.
+Chaos Studio doesn't work with private AKS clusters.
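+
+If a student needs a refresher, a typical Chaos Mesh installation with Helm looks like the sketch below (assuming the public chaos-mesh Helm chart and the containerd runtime used by current AKS node images; the tutorial linked below is the authoritative reference):
+
+```bash
+# Add the Chaos Mesh chart repository and install it into its own namespace
+helm repo add chaos-mesh https://charts.chaos-mesh.org
+helm repo update
+kubectl create ns chaos-testing
+helm install chaos-mesh chaos-mesh/chaos-mesh \
+  --namespace=chaos-testing \
+  --set chaosDaemon.runtime=containerd \
+  --set chaosDaemon.socketPath=/run/containerd/containerd.sock
+```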
+
+- Instructions to install Chaos Mesh are at https://docs.microsoft.com/en-us/azure/chaos-studio/chaos-studio-tutorial-aks-portal#set-up-chaos-mesh-on-your-aks-cluster
+- Once installed, create a pod failure experiment to fail a pod
+  - If using the Pizza App, the application should become unresponsive
+
+
+Command to view the private and public IP of the pizza application:
+
+```bash
+kubectl get -n contosoappmysql svc
+
+```
+
+Command to view all namespaces running in the AKS cluster:
+
+```bash
+kubectl get pods --all-namespaces
+
+```
+Have the student explore how to make pods resilient by creating a replica of the pod:
+
+```bash
+kubectl scale statefulset -n NAMESPACE APPNAME --replicas=2
+```
+- Have the student run the experiment again and notice how the application stays available with a failed pod
+  - In the experiment, set the mode to "one" instead of "all", as per the JSON spec below:
+  - {"action":"pod-failure","mode":"one","duration":"600s","selector":{"namespaces":["contosoappmysql"]}}
+
diff --git a/055-ChaosStudio4AKS/Coach/Solution-02.md b/055-ChaosStudio4AKS/Coach/Solution-02.md
new file mode 100644
index 000000000..10105f712
--- /dev/null
+++ b/055-ChaosStudio4AKS/Coach/Solution-02.md
@@ -0,0 +1,35 @@
+# Challenge 02: My Availability Zone burned down, now what? - Coach's Guide
+
+[< Previous Solution](./Solution-01.md) - **[Home](./README.md)** - [Next Solution >](./Solution-03.md)
+
+## Notes & Guidance
+
+This challenge will simulate an AZ failure by failing a virtual machine that is a member of the Virtual Machine Scale Set created by AKS.
+Chaos Studio will use the VMSS shutdown fault.
+
+- Student will create an experiment for the VMSS shutdown fault
+- Have the student think about how to make the cluster resilient
+  - Student should scale the VMSS
+  - Scale the VMSS via AKS
+  - Scale the PizzaApp or the student's AKS deployment or statefulset
+  - Rerun the experiment
+
+Verify where your pods are running (Portal or CLI):
+
+```bash
+kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node-name>
+
+```
+Scale the cluster to a minimum of 2 VMs:
+
+```bash
+az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 2 --nodepool-name <nodepool-name>
+
+```
+
+Scale your Kubernetes environment (hint: it is a StatefulSet):
+
+```bash
+kubectl scale statefulset -n contosoappmysql contosopizza --replicas=2
+
+```
diff --git a/055-ChaosStudio4AKS/Coach/Solution-03.md b/055-ChaosStudio4AKS/Coach/Solution-03.md
new file mode 100644
index 000000000..7cc825a23
--- /dev/null
+++ b/055-ChaosStudio4AKS/Coach/Solution-03.md
@@ -0,0 +1,26 @@
+# Challenge 03: Godzilla takes out an Azure region! - Coach's Guide
+
+[< Previous Solution](./Solution-02.md) - **[Home](./README.md)** - [Next Solution >](./Solution-04.md)
+
+## Notes & Guidance
+
+In this Challenge, students will simulate a region failure.
+
+This can be done via the following:
+- An NSG rule blocking port 8081
+- Chaos Mesh pod failures scoped to all pods in a region
+- A VMSS shutdown fault selecting all nodes in a region
+
+Traffic Manager is the solution.
+- Verify students installed the application in WestUS and EastUS.
+- Routing method = Performance +- Configuration profile needs to be created + - DNS TTL = 1 + - Protocol = Http + - Port = 8081 + - Path = /pizzeria/ + - Probing interval = 10 + - Tolerated number of failures = 3 + - Probe timeout = 5 + +Use GeoPeeker to visualize multi-region DNS resolution https://GeoPeeker.com/home/default diff --git a/055-ChaosStudio4AKS/Coach/Solution-04.md b/055-ChaosStudio4AKS/Coach/Solution-04.md new file mode 100644 index 000000000..090dff73d --- /dev/null +++ b/055-ChaosStudio4AKS/Coach/Solution-04.md @@ -0,0 +1,13 @@ +# Challenge 04: Injecting Chaos into your CI/CD pipeline - Coach's Guide + +[< Previous Solution](./Solution-03.md) - **[Home](./README.md)** + +## Notes & Guidance +This challenge may be a larger lift as the students are not required to know GitHub Actions or any other DevOps pipeline tool. We have provided links to the actions needed to complete this task but feel free to nudge more on the GitHub Actions syntax portion as the challenge is more about integrating Chaos into your pipeline and less about the syntax of GitHub Actions. + +A sample solution is located [here](./Solutions/Solution-04/Solution-04.yml) + +From a high level, it logs into Azure and leverages the AZ Rest command to issue a rest api call to trigger the experiment. The students could also leverage a standard rest api call, however the AZ rest command is easier to use as it handles many headers for you automatically such as authorization. + +[Chaos Studio Rest API Samples](https://learn.microsoft.com/en-us/azure/chaos-studio/chaos-studio-samples-rest-api) +[Starting Experiment with Rest API](https://learn.microsoft.com/en-us/rest/api/chaosstudio/experiments/start?tabs=HTTP) diff --git a/055-ChaosStudio4AKS/Coach/Solutions/.gitkeep b/055-ChaosStudio4AKS/Coach/Solutions/.gitkeep new file mode 100644 index 000000000..e69de29bb diff --git a/055-ChaosStudio4AKS/Coach/Solutions/Solution-04/Solution-04.yml b/055-ChaosStudio4AKS/Coach/Solutions/Solution-04/Solution-04.yml new file mode 100644 index 000000000..c6948ff05 --- /dev/null +++ b/055-ChaosStudio4AKS/Coach/Solutions/Solution-04/Solution-04.yml @@ -0,0 +1,28 @@ +name: Trigger Azure Chaos Studio Experiment(AZ CLI) + +on: + workflow_dispatch: + +permissions: + id-token: write + contents: read + +jobs: + trigger-chaos-experiment: + runs-on: ubuntu-latest + + steps: + - name: 'Az CLI login' + uses: azure/login@v1 + with: + client-id: ${{ secrets.AZURE_CLIENT_ID }} + tenant-id: ${{ secrets.AZURE_TENANT_ID }} + subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }} + enable-AzPSSession: true + + - name: Azure CLI + uses: azure/CLI@v1 + with: + azcliversion: 2.30.0 + inlineScript: | + az rest --method post --url ${{ secrets.AZURE_CHAOS_STUDIO_API_URL }} diff --git a/055-ChaosStudio4AKS/README.md b/055-ChaosStudio4AKS/README.md new file mode 100644 index 000000000..2828a2d27 --- /dev/null +++ b/055-ChaosStudio4AKS/README.md @@ -0,0 +1,58 @@ +# What The Hack - ChaosStudio4AKS + +## Introduction + +Azure Chaos Studio (Preview) is a managed service for improving resilience by injecting faults into your Azure applications. Running controlled fault +injection +experiments against your applications, a practice known as chaos engineering, helps you to measure, understand, and improve resilience against real-world +incidents, such as a region outages or application failures causing high CPU utilization on a VMs, Scale Sets, and Azure Kubernetes. 
+ + +## Learning Objectives +This “What the Hack” WTH is designed to introduce you to Azure Chaos Studio (Preview) and guide you through a series of hands-on challenges to accomplish +the following: + +* Leverage the Azure Chaos Studio to inject failure into an application/workload +* Provide hands-on understanding of Chaos Engineering +* Understand how resiliency can be achieved with Azure + +In this WTH, you are the system owner of the Contoso Pizzeria Application (or you may bring your own application). Super Bowl Sunday is Contoso Pizza's busiest time of the year, the pizzeria +ordering application must be available during the Super Bowl. + +You have been tasked to test the resiliency of the pizzeria application (or your application). The pizzeria application is running on Azure and you will use Chaos Studio to +simulate various failures. + +## Challenges +* Challenge 00: **[Prerequisites - Ready, Set, GO!](Student/Challenge-00.md)** + - Deploy the multi-region Kubernetes pizzeria application +* Challenge 01: **[Is your application ready for the Super Bowl?](Student/Challenge-01.md)** + - How does your application handle failure during large scale events? +* Challenge 02: **[My AZ burned down, now what?](Student/Challenge-02.md)** + - Can your application survive an Azure outage of 1 or more Availability Zones? +* Challenge 03: **[Godzilla takes out an Azure region!](Student/Challenge-03.md)** + - Can your application survive a region failure? +* Challenge 04: **[Injecting Chaos into your CI/CD pipeline](Student/Challenge-04.md)** + - Optional challenge, using Chaos Studio experiments in your CI/CD pipeline + +## Prerequisites +- Azure subscription with contributor access +- Visual Studio Code terminal or Azure Shell (recommended) +- Latest Azure CLI (if not using Azure Shell) +- GitHub or Azure DevOps to automate Chaos Testing +- Azure fundamentals, Vnets, NSGs, Scale Sets, Traffic Manager +- Fundamentals of Chaos Engineering +- Intermediate understanding of Kubernetes (kubectl commands)and AKS + +## Learning Resources +* [What is Azure Chaos Studio](https://docs.microsoft.com/en-us/azure/chaos-studio/chaos-studio-overview) +* [What is Chaos Engineering](https://docs.microsoft.com/en-us/azure/architecture/framework/resiliency/chaos-engineering?toc=%2Fazure%2Fchaos-studio%2Ftoc.json&bc=%2Fazure%2Fchaos-studio%2Fbreadcrumb%2Ftoc.json) +* [How Netflix pioneered Chaos Engineering](https://techhq.com/2019/03/how-netflix-pioneered-chaos-engineering/) +* [Embrace the Chaos](https://medium.com/capital-one-tech/embrace-the-chaos-engineering-203fd6fc6ff7) +* [Why you should break more things on purpose --AWS, Azure, and LinkedIn case studies](https://www.contino.io/insights/chaos-engineering) + + +## Contributors +- Jerry Rhoads +- Kevin Gates +- Andy Huang +- Tommy Falgout diff --git a/055-ChaosStudio4AKS/Student/Challenge-00.md b/055-ChaosStudio4AKS/Student/Challenge-00.md new file mode 100644 index 000000000..1834726b6 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Challenge-00.md @@ -0,0 +1,109 @@ +# Challenge 00: Prerequisites - Ready, Set, GO! + +**[Home](../README.md)** - [Next Challenge >](./Challenge-01.md) + +## Pre-requisites + +You will need an Azure subscription with "Contributor" permissions. + +Before starting, you should decide how and where you will want to work on the challenges of this hackathon. 
+ +You can complete the entirety of this hack's challenges using the [Azure Cloud Shell](#work-from-azure-cloud-shell) in a web browser (fastest path), or you can choose to install the necessary tools on your [local workstation (Windows/WSL, Mac, or Linux)](#work-from-local-workstation). + +We recommend installing the tools on your workstation. + +### Work from Azure Cloud Shell + +Azure Cloud Shell (using Bash) provides a convenient shell environment with all tools you will need to run these challenges already included such as the Azure CLI, kubectl, helm, and MySQL client tools, and editors such as vim, nano, code, etc. + +This is the fastest path. To get started, simply open [Azure Cloud Shell](https://shell.azure.com) in a web browser, and you're all set! + +### Work from Local Workstation + +As an alternative to Azure Cloud Shell, this hackathon can also be run from a Bash shell on your computer. You can use the Windows Subsystem for Linux (WSL2), Linux Bash or Mac Terminal. While Linux and Mac include Bash and Terminal out of the box respectively, on Windows you will need to install the WSL: [Windows Subsystem for Linux Installation Guide for Windows 10](https://docs.microsoft.com/en-us/windows/wsl/install-win10). + +If you choose to run it from your local workstation, you need to install the following tools into your Bash environment (on Windows, install these into the WSL environment, **NOT** the Windows command prompt!): + +- [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/) +- Kubectl (using `az aks install-cli`) +- [Helm3](https://helm.sh/docs/intro/install/) + +Take into consideration how much time you will need to install these tools on your own computer. Depending on your Internet and computer's speed, this additional local setup will probably take around 30 minutes. + +## Introduction + +Once the pre-requisites are set up, now it's time to build the hack's environment. + +This hack is designed to help you learn chaos testing with Azure Chaos Studio, however you should have a basic knowledge of Kubernetes (K8s). The hack uses pre-canned Azure Kubernetes (AKS) environments that you will deploy into your Azure subscription. You many bring your own AKS application versus using the pre-canned AKS Pizza Application. + +If you are using the Pizzeria Application, the Pizzeria Application will run in 2 Azure regions and entirely on an AKS cluster, consisting of the following: + - 1 instance of the "Pizzeria" sample app (1 per region) + - A MySQL database (1 per region) + +## Description + +The Pizzeria Application is deployed in two steps by scripts that invoke ARM Templates & Helm charts to create the AKS cluster, database, and the sample Pizzeria application. Your coach will provide you with a link to the Pizzeria.zip file that contains deployment files needed to deploy the AKS environment into EastUS and WestUS. Since the end goal is to test a multi-region application, deploy the application into each region. For best results, perform all experiments in your nearest region. + + - Download the required Pizzeria.zip file (or you can use your own AKS application) for this hack. You should do this in Azure Cloud Shell or in a Mac/Linux/WSL environment which has the Azure CLI installed. 
 - Unzip the file
+
+### Deploy the AKS Environment
+
+Run the following commands to set up the AKS environments (you will do this for each region):
+
+```bash
+cd ~/REGION-NAME-AKS/ARM-Templates/KubernetesCluster
+chmod +x ./create-cluster.sh
+./create-cluster.sh
+
+```
+
+ **NOTE:** Creating the cluster will take around 10 minutes.
+
+ **NOTE:** The Kubernetes cluster will consist of one container, contosoappmysql.
+
+### Deploy the Sample Application
+
+Deploy the Pizzeria application as follows:
+
+```bash
+cd ~/REGION-NAME/HelmCharts/ContosoPizza
+chmod +x ./*.sh
+./deploy-pizza.sh
+
+```
+
+**NOTE:** Deploying the Pizzeria application will take around 5 minutes.
+
+### View the Sample Application
+
+Once the applications are deployed, you will see a link to a website running on port 8081. In Azure Cloud Shell, these are clickable links. Otherwise, you can copy and paste the URL into your web browser.
+
+```bash
+ Pizzeria app on MySQL is ready at http://some_ip_address:8081/pizzeria
+```
+
+## Success Criteria
+
+* You have a Unix/Linux shell for setting up the Pizzeria application or your AKS application (e.g. Azure Cloud Shell, WSL2 bash, Mac zsh, etc.)
+* You have validated that the Pizzeria application (or your application) is working in both regions (EastUS & WestUS)
+
+
+## Tips
+
+* The AKS "contosoappmysql" web front end has a public IP address that you can connect to. At this time you should create a Network Security Group on the VNet named PizzaAppEastUS / PizzaAppWestUS, with an allow rule for TCP port 8081 (priority 200) and a deny rule for TCP port 3306 (priority 210). You will need this NSG for future challenges.
+
+```bash
+
+ kubectl -n mysql get svc
+
+```
+
+There are more useful Kubernetes commands in the reference section below.
+
+
+## Learning Resources
+
+* [Kubernetes cheat sheet](https://kubernetes.io/docs/reference/kubectl/cheatsheet/)
+
+
diff --git a/055-ChaosStudio4AKS/Student/Challenge-01.md b/055-ChaosStudio4AKS/Student/Challenge-01.md
new file mode 100644
index 000000000..4253cb7f9
--- /dev/null
+++ b/055-ChaosStudio4AKS/Student/Challenge-01.md
@@ -0,0 +1,49 @@
+# Challenge 01: Is your Application ready for the Super Bowl?
+
+[< Previous Challenge](./Challenge-00.md) - **[Home](../README.md)** - [Next Challenge >](./Challenge-02.md)
+
+## Pre-requisites
+
+Before creating your Azure Chaos Studio Experiment, ensure you have deployed and verified the pizzeria application is available.
+
+## Introduction
+
+Welcome to Challenge 1.
+
+In this challenge you will simulate failure in your compute tier. It is Super Bowl Sunday and you are the system owner of Contoso Pizza's pizza ordering
+workload. This workload is hosted in Azure Kubernetes Service (AKS). Super Bowl Sunday is Contoso Pizza's busiest day of the year.
+To make Super Bowl Sunday a success, your job is to plan for possible failures that could occur during the event.
+
+If you are using your own AKS application, your application should be ready to handle its peak operating time: this is when Chaos strikes!
+
+
+## Description
+
+Create a failure at the AKS pod level in your preferred region, e.g. EastUS.
+
+- Show that your AKS environment has been prepared
+- Show that your Chaos Experiment has been scoped to the web tier workload
+- Show any failure you observed during the experiment
+
+During the experiment, were you able to order a pizza or perform your application's functionality? If not, what could you do to make your application resilient at the pod layer?
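+
+One way to observe the impact while the experiment runs (assuming the Pizza App and its `contosoappmysql` namespace; replace `<public-ip>` with your service's public IP) is to watch the pods in one shell and poll the endpoint in another:
+
+```bash
+# Watch pod status changes while the experiment is running
+kubectl get pods -n contosoappmysql -w
+
+# In a second shell, poll the application endpoint and print the HTTP status code
+while true; do
+  curl -s -o /dev/null -w "%{http_code}\n" http://<public-ip>:8081/pizzeria/
+  sleep 5
+done
+```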
+ + +## Success Criteria + +- Verify Chaos Mesh is running on the Cluster +- Verify Pod Chaos restarted the application's AKS pod +- Show any failure you observed during the experiment +- If your application went offline, show what change could you make to the application to make it resilient + +## Tips + +These tips apply to the Pizza Application + - Verify the "selector" in the experiment uses namespace of the application + - Verify the PizzaApp is a statefulset versus a deployment + + +## Learning Resources +- [Simulate AKS pod failure with Chaos Studio](https://docs.microsoft.com/en-us/azure/chaos-studio/chaos-studio-tutorial-aks-portal) +- [AKS cheat-sheet](https://kubernetes.io/docs/reference/kubectl/cheatsheet/) + + diff --git a/055-ChaosStudio4AKS/Student/Challenge-02.md b/055-ChaosStudio4AKS/Student/Challenge-02.md new file mode 100644 index 000000000..17f7df16c --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Challenge-02.md @@ -0,0 +1,47 @@ +# Challenge 02: My AZ burned down, now what? + +[< Previous Challenge](./Challenge-01.md) - **[Home](../README.md)** - [Next Challenge >](./Challenge-03.md) + +## Pre-requisites + +Before creating your Azure Chaos Studio Experiment, ensure you have deployed and verified the pizzeria application is available. + +## Introduction + +Welcome to Challenge 2. + +Can your Application survive an Availability Zone Failure? + +How did your application perform with pod failures? Are you still in business? Now that you have tested for pod faults and have +overcome with resiliency at the pod level --it is time to kick it up to the next level. Winter storms are a possibility on Superbowl Sunday and you need to +prepare for an Azure datacenter going offline. Choose your preferred region and AKS cluster to simulate an Availability Zone failure. + + +## Description + +As the purpose of this WTH is to show Chaos Studio, we are going to pretend that an Azure Availability Zone (datacenter) is offline. The way you will simulate this will be failing an AKS node with Chaos Studio. + +- Create and scope an Azure Chaos Studio Experiment to fail 1 of the pizza application's virtual machine(s) + +During the experiment, were you able to order a pizza? If not, what could you do to make your application resilient at the Availability Zone/Virtual +Machine layer? + + + +## Success Criteria + +- Show that Chaos Experiment fails a node running the pizzeria application +- Show any failure you observed during the experiment +- Discuss with your coach how your application is (or was made) resilient +- Verify the pizzeria application is available while a virtual machine is offline + +## Tip + +Take note of your virtual machine's instanceID + + +## Learning Resources +- [Simulate AKS pod failure with Chaos Studio](https://docs.microsoft.com/en-us/azure/chaos-studio/chaos-studio-tutorial-aks-portal) +- [Scale an AKS cluster](https://docs.microsoft.com/en-us/azure/aks/scale-cluster) +- [AKS cheat-sheet](https://kubernetes.io/docs/reference/kubectl/cheatsheet/) + diff --git a/055-ChaosStudio4AKS/Student/Challenge-03.md b/055-ChaosStudio4AKS/Student/Challenge-03.md new file mode 100644 index 000000000..c9efc8ef0 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Challenge-03.md @@ -0,0 +1,54 @@ +# Challenge 03: Godzilla takes out an Azure region! 
+
+[< Previous Challenge](./Challenge-02.md) - **[Home](../README.md)** - [Next Challenge >](./Challenge-04.md)
+
+
+## Pre-requisites
+
+Before creating your Azure Chaos Studio Experiment, ensure you have deployed and verified the pizzeria application is available in both regions (EastUS and WestUS).
+
+## Introduction
+
+Welcome to Challenge 3.
+
+Can your application survive a region failure?
+
+So far you have tested failures of Contoso Pizza's AKS pod(s) and AKS node(s); now it is time to test failures at the regional level.
+
+As Contoso Pizza is a national pizza chain, hungry people all over the United States are ordering pizzas and watching the Super
+Bowl. Enter Godzilla! He exists! He is hungry! He is upset (hangry)! He is going to destroy the WestUS! What will your application
+do?
+
+
+## Description
+
+As the purpose of this WTH is to demonstrate Chaos Studio, we are going to simulate a region failure. You have deployed the pizzeria application in 2 regions
+(EastUS/WestUS). Since we are hacking on Azure Chaos Studio, we will pretend the databases are in sync and show how Chaos Studio can simulate
+the failure of a region.
+
+- Create Azure Chaos Studio experiment(s) that can simulate a region failure
+
+During the experiment, were you able to order a pizza? If not, what could you do to make your application more resilient?
+
+
+## Success Criteria
+
+- Verify the experiment is running
+- Show any failure you observed during the experiment
+- Verify the application is available after the WestUS region is offline
+- Verify all application traffic is routing to the surviving region
+
+## Tips
+
+- Think of the multiple ways to simulate a region failure
+- Did you create the NSG from Challenge 0?
+- Use [GeoPeeker](https://geopeeker.com/home/default) to verify traffic routing
+
+
+## Learning Resources
+
+- [Azure Traffic Manager](https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-configure-priority-routing-method)
+- [Azure Traffic Manager endpoint monitoring](https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-monitoring)
+
diff --git a/055-ChaosStudio4AKS/Student/Challenge-04.md b/055-ChaosStudio4AKS/Student/Challenge-04.md
new file mode 100644
index 000000000..0c0cb3534
--- /dev/null
+++ b/055-ChaosStudio4AKS/Student/Challenge-04.md
@@ -0,0 +1,33 @@
+# Challenge 04: Injecting Chaos into your CI/CD pipeline
+
+[< Previous Challenge](./Challenge-03.md) - **[Home](../README.md)**
+
+## Pre-requisites
+To complete this challenge you will use the Pizza Application or your own AKS application.
+
+
+## Introduction
+You will need an in-depth understanding of DevOps and your CI/CD tool of choice.
+
+This is where the rubber meets the road. You will take what you have learned from the previous challenges and apply that knowledge here.
+
+
+## Description
+In this challenge you will conduct a chaos experiment in your CI/CD pipeline.
+You will take the Pizzeria Application (or your application) and add a chaos experiment to your deployment pipeline.
+Run your experiments in Dev/Test; do not run them in Prod.
+
+
+## Tips
+1. You want your application to be available (in a healthy state) during the failure.
+2. What kinds of faults and remediation come to mind from the previous challenges?
+
+## Success Criteria
+
+- Show that Chaos Studio injects fault(s) into your application via your pipeline.
+- Verify that your application remains healthy during the Chaos Experiment (see the sketch below).
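+
+A simple way to verify this in a pipeline is to add a health-gate step after the fault injection. A minimal sketch (assuming a Bash-based pipeline step and an `APP_IP` variable supplied by your pipeline; both names are illustrative):
+
+```bash
+# Hypothetical health gate: fail the pipeline step if the app stops answering during the experiment
+APP_URL="http://${APP_IP}:8081/pizzeria/"
+for i in $(seq 1 10); do
+  status=$(curl -s -o /dev/null -w "%{http_code}" "$APP_URL")
+  if [ "$status" != "200" ]; then
+    echo "Health check failed with HTTP $status"
+    exit 1
+  fi
+  sleep 30
+done
+echo "Application stayed healthy during the experiment"
+```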
+ +## Learning Resources +- [How to deploy a simple experiment](https://blog.meadon.me/chaos-studio-part-1/) +- [How to deploy a simple application and experiment in a CI/CD pipeline](https://blog.meadon.me/chaos-studio-part-2/) + diff --git a/055-ChaosStudio4AKS/Student/Resources/.gitkeep b/055-ChaosStudio4AKS/Student/Resources/.gitkeep new file mode 100644 index 000000000..e69de29bb diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/ARM-Templates/KubernetesCluster/aks-cluster.json b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/ARM-Templates/KubernetesCluster/aks-cluster.json new file mode 100644 index 000000000..98d0f84a1 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/ARM-Templates/KubernetesCluster/aks-cluster.json @@ -0,0 +1,398 @@ +{ + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "clusterName": { + "type": "string", + "metadata": { + "description": "The name of the Managed Cluster resource." + } + }, + "agentPoolNodeCount": { + "type": "int", + "metadata": { + "description": "Number of virtual machines in the agent pool" + } + }, + "agentPoolNodeType": { + "type": "string", + "metadata": { + "description": "SKU or Type of virtual machines in the agent pool" + } + }, + "systemPoolNodeCount": { + "type": "int", + "metadata": { + "description": "Number of virtual machines in the system pool" + } + }, + "systemPoolNodeType": { + "type": "string", + "metadata": { + "description": "SKU or Type of virtual machines in the system pool" + } + }, + "resourceGroupName": { + "type": "string", + "metadata": { + "description": "The name of the Resource Group" + } + }, + "virtualNetworkName": { + "type": "string", + "metadata": { + "description": "The name of the Virtual Network" + } + }, + "subnetName": { + "type": "string", + "metadata": { + "description": "The name of the Subnet within the Virtual Network" + } + }, + "location": { + "type": "string", + "metadata": { + "description": "The geographical location of AKS resource." + } + }, + "dnsPrefix": { + "type": "string", + "metadata": { + "description": "Optional DNS prefix to use with hosted Kubernetes API server FQDN." + } + }, + "addressSpaces": { + "type": "array" + }, + "ddosProtectionPlanEnabled": { + "type": "bool" + }, + "osDiskSizeGB": { + "type": "int", + "defaultValue": 0, + "metadata": { + "description": "Disk size (in GiB) to provision for each of the agent pool nodes. This value ranges from 0 to 1023. Specifying 0 will apply the default disk size for that agentVMSize." + }, + "minValue": 0, + "maxValue": 1023 + }, + "kubernetesVersion": { + "type": "string", + "defaultValue": "1.25.5", + "metadata": { + "description": "The version of Kubernetes." + } + }, + "networkPlugin": { + "type": "string", + "allowedValues": [ + "azure", + "kubenet" + ], + "metadata": { + "description": "Network plugin used for building Kubernetes network." + } + }, + "maxPods": { + "type": "int", + "defaultValue": 64, + "metadata": { + "description": "Maximum number of pods that can run on a node." + } + }, + "enableRBAC": { + "type": "bool", + "defaultValue": true, + "metadata": { + "description": "Boolean flag to turn on and off of RBAC." + } + }, + "enablePrivateCluster": { + "type": "bool", + "defaultValue": false, + "metadata": { + "description": "Enable private network access to the Kubernetes cluster." 
+ } + }, + "enableHttpApplicationRouting": { + "type": "bool", + "defaultValue": true, + "metadata": { + "description": "Boolean flag to turn on and off http application routing." + } + }, + "enableAzurePolicy": { + "type": "bool", + "defaultValue": false, + "metadata": { + "description": "Boolean flag to turn on and off Azure Policy addon." + } + }, + "enableOmsAgent": { + "type": "bool", + "defaultValue": true, + "metadata": { + "description": "Boolean flag to turn on and off omsagent addon." + } + }, + "workspaceRegion": { + "type": "string", + "defaultValue": "eastus", + "metadata": { + "description": "Specify the region for your OMS workspace." + } + }, + "workspaceName": { + "type": "string", + "metadata": { + "description": "Specify the prefix of the OMS workspace." + } + }, + "omsSku": { + "type": "string", + "defaultValue": "standalone", + "allowedValues": [ + "free", + "standalone", + "pernode" + ], + "metadata": { + "description": "Select the SKU for your workspace." + } + }, + "serviceCidr": { + "type": "string", + "metadata": { + "description": "A CIDR notation IP range from which to assign service cluster IPs." + } + }, + "subnetAddressSpace": { + "type": "string", + "metadata": { + "description": "A CIDR notation IP range from which to assign service cluster IPs." + } + }, + "dnsServiceIP": { + "type": "string", + "metadata": { + "description": "Containers DNS server IP address." + } + }, + "dockerBridgeCidr": { + "type": "string", + "metadata": { + "description": "A CIDR notation IP for Docker bridge." + } + } + }, + "variables": { + "deploymentSuffix": "MDP2020", + "subscriptionId" : "[subscription().id]", + "workspaceName" : "[concat(parameters('workspaceName'), uniqueString(variables('subscriptionId')))]", + "omsWorkspaceId": "[concat(variables('subscriptionId'), '/resourceGroups/', parameters('resourceGroupName'), '/providers/Microsoft.OperationalInsights/workspaces/', variables('workspaceName'))]", + "clusterID": "[concat(variables('subscriptionId'), '/resourceGroups/', parameters('resourceGroupName'), '/providers/Microsoft.ContainerService/managedClusters/', parameters('clusterName'))]", + "vnetSubnetID": "[concat(variables('subscriptionId'), '/resourceGroups/', parameters('resourceGroupName'), '/providers/Microsoft.Network/virtualNetworks/', parameters('virtualNetworkName'), '/subnets/', parameters('subnetName'))]", + "solutionDeploymentId": "[concat('SolutionDeployment-', variables('deploymentSuffix'))]", + "workspaceDeploymentId": "[concat('WorkspaceDeployment-', variables('deploymentSuffix'))]", + "clusterMonitoringMetricId": "[concat('ClusterMonitoringMetric-', variables('deploymentSuffix'))]", + "clusterSubnetRoleAssignmentId": "[concat('ClusterSubnetRoleAssignment-', variables('deploymentSuffix'))]" + }, + "resources": [ + { + "name": "[parameters('virtualNetworkName')]", + "type": "Microsoft.Network/VirtualNetworks", + "apiVersion": "2019-09-01", + "location": "[parameters('location')]", + "dependsOn": [], + "tags": { + "cluster": "Kubernetes" + }, + "properties": { + "addressSpace": { + "addressPrefixes": "[parameters('addressSpaces')]" + }, + "subnets": [ + { + "name": "[parameters('subnetName')]", + "properties": { + "addressPrefix": "[parameters('subnetAddressSpace')]" + } + } + ], + "enableDdosProtection": "[parameters('ddosProtectionPlanEnabled')]" + } + }, + { + "apiVersion": "2020-03-01", + "dependsOn": [ + "[concat('Microsoft.Resources/deployments/', variables('workspaceDeploymentId'))]", + "[resourceId('Microsoft.Network/VirtualNetworks', 
parameters('virtualNetworkName'))]" + ], + "type": "Microsoft.ContainerService/managedClusters", + "location": "[parameters('location')]", + "name": "[parameters('clusterName')]", + "properties": { + "kubernetesVersion": "[parameters('kubernetesVersion')]", + "enableRBAC": "[parameters('enableRBAC')]", + "dnsPrefix": "[parameters('dnsPrefix')]", + "agentPoolProfiles": [ + { + "name": "systempool", + "osDiskSizeGB": "[parameters('osDiskSizeGB')]", + "count": "[parameters('systemPoolNodeCount')]", + "vmSize": "[parameters('systemPoolNodeType')]", + "osType": "Linux", + "storageProfile": "ManagedDisks", + "type": "VirtualMachineScaleSets", + "mode": "System", + "vnetSubnetID": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('virtualNetworkName'), parameters('subnetName'))]", + "maxPods": "[parameters('maxPods')]" + }, + { + "name": "userpool", + "osDiskSizeGB": "[parameters('osDiskSizeGB')]", + "count": "[parameters('agentPoolNodeCount')]", + "vmSize": "[parameters('agentPoolNodeType')]", + "osType": "Linux", + "storageProfile": "ManagedDisks", + "type": "VirtualMachineScaleSets", + "mode": "User", + "vnetSubnetID": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('virtualNetworkName'), parameters('subnetName'))]", + "maxPods": "[parameters('maxPods')]" + } + ], + "networkProfile": { + "loadBalancerSku": "standard", + "networkPlugin": "[parameters('networkPlugin')]", + "serviceCidr": "[parameters('serviceCidr')]", + "dnsServiceIP": "[parameters('dnsServiceIP')]", + "dockerBridgeCidr": "[parameters('dockerBridgeCidr')]" + }, + "apiServerAccessProfile": { + "enablePrivateCluster": "[parameters('enablePrivateCluster')]" + }, + "addonProfiles": { + "httpApplicationRouting": { + "enabled": "[parameters('enableHttpApplicationRouting')]" + }, + "azurePolicy": { + "enabled": "[parameters('enableAzurePolicy')]" + }, + "omsagent": { + "enabled": "[parameters('enableOmsAgent')]", + "config": { + "logAnalyticsWorkspaceResourceID": "[variables('omsWorkspaceId')]" + } + } + } + }, + "tags": {}, + "identity": { + "type": "SystemAssigned" + } + }, + { + "type": "Microsoft.Resources/deployments", + "name": "[variables('solutionDeploymentId')]", + "apiVersion": "2017-05-10", + "resourceGroup": "[split(variables('omsWorkspaceId'),'/')[4]]", + "subscriptionId": "[split(variables('omsWorkspaceId'),'/')[2]]", + "properties": { + "mode": "Incremental", + "template": { + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": {}, + "variables": {}, + "resources": [ + { + "apiVersion": "2015-11-01-preview", + "type": "Microsoft.OperationsManagement/solutions", + "location": "[parameters('workspaceRegion')]", + "name": "[concat('ContainerInsights', '(', split(variables('omsWorkspaceId'),'/')[8], ')')]", + "properties": { + "workspaceResourceId": "[variables('omsWorkspaceId')]" + }, + "plan": { + "name": "[concat('ContainerInsights', '(', split(variables('omsWorkspaceId'),'/')[8], ')')]", + "product": "[concat('OMSGallery/', 'ContainerInsights')]", + "promotionCode": "", + "publisher": "Microsoft" + } + } + ] + } + }, + "dependsOn": [ + "[concat('Microsoft.Resources/deployments/', variables('workspaceDeploymentId'))]" + ] + }, + { + "type": "Microsoft.Resources/deployments", + "name": "[variables('workspaceDeploymentId')]", + "apiVersion": "2017-05-10", + "resourceGroup": "[split(variables('omsWorkspaceId'),'/')[4]]", + "subscriptionId": "[split(variables('omsWorkspaceId'),'/')[2]]", + 
"properties": { + "mode": "Incremental", + "template": { + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": {}, + "variables": {}, + "resources": [ + { + "apiVersion": "2015-11-01-preview", + "type": "Microsoft.OperationalInsights/workspaces", + "location": "[parameters('workspaceRegion')]", + "name": "[variables('workspaceName')]", + "properties": { + "sku": { + "name": "[parameters('omsSku')]" + } + } + } + ] + } + } + }, + { + "type": "Microsoft.Resources/deployments", + "name": "[variables('clusterMonitoringMetricId')]", + "apiVersion": "2017-05-10", + "resourceGroup": "[parameters('resourceGroupName')]", + "subscriptionId": "[subscription().subscriptionId]", + "properties": { + "mode": "Incremental", + "template": { + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": {}, + "variables": {}, + "resources": [ + { + "type": "Microsoft.ContainerService/managedClusters/providers/roleAssignments", + "apiVersion": "2018-01-01-preview", + "name": "[concat(parameters('clusterName'), '/Microsoft.Authorization/', guid(subscription().subscriptionId))]", + "properties": { + "roleDefinitionId": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Authorization/roleDefinitions/', '3913510d-42f4-4e42-8a64-420c390055eb')]", + "principalId": "[reference(parameters('clusterName')).addonProfiles.omsagent.identity.objectId]", + "scope": "[variables('clusterID')]" + } + } + ] + } + }, + "dependsOn": [ + "[variables('clusterID')]" + ] + } + ], + "outputs": { + "controlPlaneFQDN": { + "type": "string", + "value": "[reference(concat('Microsoft.ContainerService/managedClusters/', parameters('clusterName'))).fqdn]" + } + } + } \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/ARM-Templates/KubernetesCluster/create-cluster.sh b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/ARM-Templates/KubernetesCluster/create-cluster.sh new file mode 100644 index 000000000..fe20c2489 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/ARM-Templates/KubernetesCluster/create-cluster.sh @@ -0,0 +1,32 @@ +# Add line to set login to az +#az login +# Set your azure subscription +#az account set -s "" +# Defines the ARM template file location +export templateFile="aks-cluster.json" + +# Defines the parameters that will be used in the ARM template +export parameterFile="parameters.json" + +# Defines the name of the Resource Group our resources are deployed into +export resourceGroupName="PizzaAppEast" + +export clusterName="pizzaappeast" + +export location="eastus" + +# Creates the resources group if it does not already exist +az group create --name $resourceGroupName --location $location + +# Creates the Kubernetes cluster and the associated resources and dependencies for the cluster +az deployment group create --name dataProductionDeployment --resource-group $resourceGroupName --template-file $templateFile --parameters $parameterFile + +# Install the Kubectl CLI. 
This will be used to interact with the remote Kubernetes cluster +#sudo az aks install-cli + +# Get the Credentials to Access the Cluster with Kubectl +az aks get-credentials --name $clusterName --resource-group $resourceGroupName + +# List the node pools - expect two aks nodepools + +az aks nodepool list --resource-group $resourceGroupName --cluster-name $clusterName -o table diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/ARM-Templates/KubernetesCluster/parameters.json b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/ARM-Templates/KubernetesCluster/parameters.json new file mode 100644 index 000000000..eea8d0b03 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/ARM-Templates/KubernetesCluster/parameters.json @@ -0,0 +1,84 @@ +{ + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "resourceGroupName": { + "value": "PizzaAppEast" + }, + "virtualNetworkName": { + "value": "PizzaAppEastVNet" + }, + "subnetName": { + "value": "PizzaAppEastSNet" + }, + "clusterName": { + "value": "PizzAappEast" + }, + "maxPods": { + "value": 64 + }, + "systemPoolNodeCount": { + "value": 1 + }, + "systemPoolNodeType": { + "value": "Standard_D2s_v4" + }, + "agentPoolNodeCount": { + "value": 1 + }, + "agentPoolNodeType": { + "value": "Standard_D2s_v4" + }, + "location": { + "value": "eastus" + }, + "dnsPrefix": { + "value": "pizzaappeast-dns" + }, + "kubernetesVersion": { + "value": "1.25.5" + }, + "networkPlugin": { + "value": "azure" + }, + "enableRBAC": { + "value": true + }, + "enablePrivateCluster": { + "value": false + }, + "enableHttpApplicationRouting": { + "value": false + }, + "enableAzurePolicy": { + "value": false + }, + "serviceCidr": { + "value": "10.71.0.0/16" + }, + "dnsServiceIP": { + "value": "10.71.0.3" + }, + "dockerBridgeCidr": { + "value": "172.17.0.1/16" + }, + "addressSpaces": { + "value": [ + "10.250.0.0/16" + ] + }, + "subnetAddressSpace": { + "value": "10.250.0.0/20" + }, + "ddosProtectionPlanEnabled": { + "value": false + }, + "workspaceName": { + "value": "PizzaAppEast" + }, + "workspaceRegion": { + "value": "eastus" + } + } + } + diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/Chart.yaml b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/Chart.yaml new file mode 100644 index 000000000..0db71b1b3 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/Chart.yaml @@ -0,0 +1,11 @@ +apiVersion: v2 + +name: Contoso Pizza + +description: A Helm chart for deploying Contoso Pizza Web Application + +type: application + +version: 1.0 + +appVersion: 15.08 \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/deploy-pizza.sh b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/deploy-pizza.sh new file mode 100644 index 000000000..7994510b7 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/deploy-pizza.sh @@ -0,0 +1,74 @@ +status="Running" + +# Install the Kubernetes Resources +helm upgrade --install wth-mysql ../MySQL57 --set infrastructure.password=OCPHack8 + +# Install the Kubernetes Resources Postgres +# helm upgrade --install wth-postgresql ../PostgreSQL116 --set infrastructure.password=OCPHack8 +# +# for ((i = 0 ; i < 30 ; i++)); do +# pgStatus=$(kubectl -n postgresql get pods --no-headers -o custom-columns=":status.phase") +# +# +# if [ 
"$pgStatus" != "$status" ]; then +# sleep 10 +# fi +# done + +# Get the postgres pod name +# pgPodName=$(kubectl -n postgresql get pods --no-headers -o custom-columns=":metadata.name") + +#Copy pg.sql to the postgresql pod +# kubectl -n postgresql cp ./pg.sql $pgPodName:/tmp/pg.sql + +# Use this to connect to the database server +# kubectl -n postgresql exec deploy/postgres -it -- /usr/bin/psql -U postgres -f /tmp/pg.sql + +# Install the Kubernettes Resources MySQL +for ((i = 0 ; i < 30 ; i++)); do + mysqlStatus=$(kubectl -n mysql get pods --no-headers -o custom-columns=":status.phase") + + if [ "$mysqlStatus" != "$status" ]; then + sleep 30 + fi +done + +# Use this to connect to the database server + +kubectl -n mysql exec deploy/mysql -it -- /usr/bin/mysql -u root -pOCPHack8 <./mysql.sql + +# postgresClusterIP=$(kubectl -n postgresql get svc -o json |jq .items[0].spec.clusterIP |tr -d '"') + +mysqlClusterIP=$(kubectl -n mysql get svc -o json |jq .items[0].spec.clusterIP |tr -d '"') + +# sed "s/XXX.XXX.XXX.XXX/$postgresClusterIP/" ./values-postgresql-orig.yaml >temp_postgresql.yaml && mv temp_postgresql.yaml ./values-postgresql.yaml + +sed "s/XXX.XXX.XXX.XXX/$mysqlClusterIP/" ./values-mysql-orig.yaml >temp_mysql.yaml && mv temp_mysql.yaml ./values-mysql.yaml + +helm upgrade --install mysql-contosopizza . -f ./values.yaml -f ./values-mysql.yaml + +# helm upgrade --install postgres-contosopizza . -f ./values.yaml -f ./values-postgresql.yaml + +for ((i = 0 ; i < 30 ; i++)); do + appStatus=$(kubectl -n contosoappmysql get svc -o json |jq .items[0].status.loadBalancer.ingress[0].ip |tr -d '"') + + if [ "$appStatus" == "null" ]; then + sleep 30 + fi +done + +for ((i = 0 ; i < 30 ; i++)); do + appStatus=$(kubectl -n contosoappmysql get svc -o json |jq .items[0].status.loadBalancer.ingress[0].ip |tr -d '"') + + if [ "$appStatus" == "null" ]; then + sleep 30 + fi +done + +# postgresAppIP=$(kubectl -n contosoapppostgres get svc -o json |jq .items[0].status.loadBalancer.ingress[0].ip|tr -d '"') + +mysqlAppIP=$(kubectl -n contosoappmysql get svc -o json |jq .items[0].status.loadBalancer.ingress[0].ip |tr -d '"') + +echo "Pizzeria app on MySQL is ready at http://$mysqlAppIP:8081/pizzeria" + +# echo "Pizzeria app on PostgreSQL is ready at http://$postgresAppIP:8082/pizzeria" diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/modify_nsg_for_postgres_mysql.sh b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/modify_nsg_for_postgres_mysql.sh new file mode 100644 index 000000000..d1b354f8e --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/modify_nsg_for_postgres_mysql.sh @@ -0,0 +1,88 @@ + + +# Change NSG firewall rule to restrict Postgres and MySQL database from client machine only + +# Find out your local client ip address. + +echo -e "\n This script restricts the access to your ""on-prem"" Postgres and MySQL database from the shell where it is run from. + It removes public access to the databases and adds your shell IP address as an source IP to connect from. + If you are running this script from Azure Cloud Shell and want to add your computer's IP address as a source for Gui tools to connect to, + then you have to edit the variable my_ip below - put your computer's IP address. + + In order to find the public IP address of your computer ip address, point a browser to https://ifconfig.me + + If this script is run again it appends your IP address to the current white listed source IP addresses. 
\n" + +my_ip=`curl -s ifconfig.me`/32 + + +# In this resource group, there is only one NSG + +export rg_nsg="MC_OSSDBMigration_ossdbmigration_westus" +export nsg_name=` az network nsg list -g $rg_nsg --query "[].name" -o tsv` + +# For this NSG, there are two rules for connecting to Postgres and MySQL. + +export pg_nsg_rule_name=`az network nsg rule list -g $rg_nsg --nsg-name $nsg_name --query "[].[name]" -o tsv | grep "TCP-5432" ` +export my_nsg_rule_name=`az network nsg rule list -g $rg_nsg --nsg-name $nsg_name --query "[].[name]" -o tsv | grep "TCP-3306" ` + +# Capture the existing allowed_source_ip_address. + +existing_my_source_ip_allowed=`az network nsg rule show -g $rg_nsg --nsg-name $nsg_name --name $my_nsg_rule_name --query "sourceAddressPrefix" -o tsv` +existing_pg_source_ip_allowed=`az network nsg rule show -g $rg_nsg --nsg-name $nsg_name --name $pg_nsg_rule_name --query "sourceAddressPrefix" -o tsv` + +# If it says "Internet" we treat it as 0.0.0.0 + +if [ "$existing_my_source_ip_allowed" = "Internet" ] +then + existing_my_source_ip_allowed="0.0.0.0" +fi + + +if [ "$existing_pg_source_ip_allowed" = "Internet" ] +then + existing_pg_source_ip_allowed="0.0.0.0" +fi + +# if the existing source ip allowed is open to the world - then we need to remove it first. Otherwise it is a ( list of ) IP addresses then +# we append to it another IP address. Open the world is 0.0.0.0 or 0.0.0.0/0. + + +existing_my_source_ip_allowed_prefix=`echo $existing_my_source_ip_allowed | cut -d "/" -f1` +existing_pg_source_ip_allowed_prefix=`echo $existing_pg_source_ip_allowed | cut -d "/" -f1` + +# If it was open to public, we take off the existing 0.0.0.0 or else we append to it. + + +if [ "$existing_my_source_ip_allowed_prefix" = "0.0.0.0" ] +then + new_my_source_ip_allowed="$my_ip" +else + new_my_source_ip_allowed="$existing_my_source_ip_allowed $my_ip" +fi + + +if [ "$existing_pg_source_ip_allowed_prefix" = "0.0.0.0" ] +then + new_pg_source_ip_allowed="$my_ip" +else + new_pg_source_ip_allowed="$existing_pg_source_ip_allowed $my_ip" +fi + +# Update the rule to allow access to Postgres and MySQL only from your client ip address - "myip". Also discard errors - as if you run the script +# simply twice back to back - it gives an error message - does not do any harm though . + +az network nsg rule update -g $rg_nsg --nsg-name $nsg_name --name $my_nsg_rule_name --source-address-prefixes $new_my_source_ip_allowed 2>/dev/zero + +if [ $? -ne 0 ] +then + echo -e "\n Your MySQL Firewall rule was not changed. It is possible that you already have $my_ip white listed \n" +fi + +az network nsg rule update -g $rg_nsg --nsg-name $nsg_name --name $pg_nsg_rule_name --source-address-prefixes $new_pg_source_ip_allowed 2>/dev/zero +if [ $? -ne 0 ] +then + echo -e "\n Your Postgres Firewall rule was not changed. 
It is possible that you already have $my_ip white listed \n" +fi + + diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/mysql.sql b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/mysql.sql new file mode 100644 index 000000000..23e6449b8 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/mysql.sql @@ -0,0 +1,16 @@ +-- Create wth database +CREATE DATABASE wth; + +-- Create a user Contosoapp that would own the application data for migration + +CREATE USER if not exists 'contosoapp' identified by 'OCPHack8' ; + +GRANT SUPER on *.* to contosoapp identified by 'OCPHack8'; -- may not be needed + +GRANT ALL PRIVILEGES ON wth.* to contosoapp ; + +GRANT PROCESS, SELECT ON *.* to contosoapp ; + +SET GLOBAL gtid_mode=ON_PERMISSIVE; +SET GLOBAL gtid_mode=OFF_PERMISSIVE; +SET GLOBAL gtid_mode=OFF; diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/pg.sql b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/pg.sql new file mode 100644 index 000000000..aa1361fb6 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/pg.sql @@ -0,0 +1,7 @@ +--Create the wth database +CREATE DATABASE wth; + +-- Create user contosoapp that would own the application schema + + CREATE ROLE CONTOSOAPP WITH LOGIN NOSUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION PASSWORD 'OCPHack8'; + diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/start_vmss_node.sh b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/start_vmss_node.sh new file mode 100644 index 000000000..8fb57323e --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/start_vmss_node.sh @@ -0,0 +1,11 @@ + +# Start the VMSS that hosts the AKS nodes. There are only two VMSS in the resource group -one each for systempool and userpool. +# Change the value of the resource group, if required. + +export vmss_user=$(az vmss list -g MC_PizzaAppEast_pizzaappeast_eastus --query '[].name' | grep userpool | tr -d "," | tr -d '"') +export vmss_system=$(az vmss list -g MC_PizzaAppEast_pizzaappeast_eastus --query '[].name' | grep systempool | tr -d "," | tr -d '"') + +# Now start the VM scale sets + +az vmss start -g MC_PizzaAppEast_pizzaappeast_eastus -n $vmss_system +az vmss start -g MC_PizzaAppEast_pizzaappeast_eastus -n $vmss_user diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/stop_vmss_node.sh b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/stop_vmss_node.sh new file mode 100644 index 000000000..34f3f5b27 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/stop_vmss_node.sh @@ -0,0 +1,11 @@ + +# Stop the VMSS that hosts the AKS nodes to stop incurring compute charges. There are only two VMSS in the resource group -one each for system and userpool. +# Change the value of the resource group, if required. 
+ +export vmss_user=$(az vmss list -g MC_PizzaAppEast_pizzappeast_eastus --query '[].name' | grep userpool | tr -d "," | tr -d '"') +export vmss_system=$(az vmss list -g MC_PizzaAppEast_pizzaappeast_eastus --query '[].name' | grep systempool | tr -d "," | tr -d '"') + +# Now stop the VM scale sets + +az vmss stop -g MC_PizzaAppEast_pizzaappeast_eastus -n $vmss_user +az vmss stop -g MC_PizzaAppEast_pizzaappeast_eastus -n $vmss_system diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/templates/deployment-mysql.yaml b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/templates/deployment-mysql.yaml new file mode 100644 index 000000000..272ed7c0b --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/templates/deployment-mysql.yaml @@ -0,0 +1,106 @@ +{{ if eq .Values.appConfig.databaseType "mysql" }} +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: {{ .Values.infrastructure.appName }} + namespace: "{{ .Values.infrastructure.namespace }}" +spec: + replicas: 1 + serviceName: "{{ .Values.infrastructure.appName }}-external" + selector: + matchLabels: + app: {{ .Values.application.labelValue }} + template: + metadata: + labels: + app: {{ .Values.application.labelValue }} + spec: + containers: + - image: "{{ .Values.image.name }}:{{ .Values.image.tag }}" + name: {{ .Values.infrastructure.appName }} + resources: + requests: + memory: "{{ .Values.resources.requests.memory }}" + cpu: "{{ .Values.resources.requests.cpu }}" + limits: + memory: "{{ .Values.resources.limits.memory }}" + cpu: "{{ .Values.resources.limits.cpu }}" + env: + - name: APP_DATASOURCE_DRIVER + value: "{{ .Values.appSettings.mysql.driverClass }}" + - name: APP_HIBERNATE_DIALECT + value: "{{ .Values.appSettings.mysql.dialect }}" + - name: APP_HIBERNATE_HBM2DDL_AUTO + value: "{{ .Values.globalConfig.hibernateDdlAuto }}" + - name: APP_PORT + value: "{{ .Values.appConfig.webPort }}" + - name: APP_CONTEXT_PATH + value: "{{ .Values.appConfig.webContext }}" + - name: APP_BRAINTREE_MERCHANT_ID + valueFrom: + secretKeyRef: + name: "{{ .Values.globalConfig.secretName }}" + key: app_braintree_merchant_id + - name: APP_BRAINTREE_PUBLIC_KEY + valueFrom: + secretKeyRef: + name: "{{ .Values.globalConfig.secretName }}" + key: app_braintree_public_key + - name: APP_BRAINTREE_PRIVATE_KEY + valueFrom: + secretKeyRef: + name: "{{ .Values.globalConfig.secretName }}" + key: app_braintree_private_key + - name: APP_RECAPTCHA_PUBLIC_KEY + valueFrom: + secretKeyRef: + name: "{{ .Values.globalConfig.secretName }}" + key: app_recaptcha_public_key + - name: APP_RECAPTCHA_PRIVATE_KEY + valueFrom: + secretKeyRef: + name: "{{ .Values.globalConfig.secretName }}" + key: app_recaptcha_private_key + - name: APP_DATASOURCE_URL + valueFrom: + secretKeyRef: + name: "{{ .Values.globalConfig.secretName }}" + key: app_datasource_url + - name: APP_DATASOURCE_USERNAME + valueFrom: + secretKeyRef: + name: "{{ .Values.globalConfig.secretName }}" + key: app_datasource_username + - name: APP_DATASOURCE_PASSWORD + valueFrom: + secretKeyRef: + name: "{{ .Values.globalConfig.secretName }}" + key: app_datasource_password + ports: + - containerPort: {{ .Values.appConfig.webPort }} + name: contosopizza + readinessProbe: + tcpSocket: + port: {{ .Values.appConfig.webPort }} + initialDelaySeconds: 5 + periodSeconds: 10 + failureThreshold: 3 + livenessProbe: + tcpSocket: + port: {{ .Values.appConfig.webPort }} + initialDelaySeconds: 15 + failureThreshold: 5 + periodSeconds: 16 + 
volumeMounts: + - name: "contosopizza-persistent-storage" + mountPath: {{ .Values.infrastructure.dataVolume }} + volumeClaimTemplates: + - metadata: + name: contosopizza-persistent-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "managed-premium" + resources: + requests: + storage: 1Gi +{{ end }} diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/templates/namespace.yaml b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/templates/namespace.yaml new file mode 100644 index 000000000..877dd0c22 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/templates/namespace.yaml @@ -0,0 +1,8 @@ +{{ if eq .Values.infrastructure.namespace "default" }} +# Do not create namespace +{{ else }} +apiVersion: v1 +kind: Namespace +metadata: + name: {{ .Values.infrastructure.namespace }} +{{ end }} \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/templates/secret.yaml b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/templates/secret.yaml new file mode 100644 index 000000000..b782f3ff7 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/templates/secret.yaml @@ -0,0 +1,16 @@ +# These are secrets used to configure the application +apiVersion: v1 +kind: Secret +type: Opaque +metadata: + name: "{{ .Values.globalConfig.secretName }}" + namespace: "{{ .Values.infrastructure.namespace }}" +data: + app_braintree_merchant_id: {{ .Values.globalConfig.brainTreeMerchantId | b64enc }} + app_braintree_public_key: {{ .Values.globalConfig.brainTreePublicKey | b64enc }} + app_braintree_private_key: {{ .Values.globalConfig.brainTreePrivateKey | b64enc }} + app_recaptcha_public_key: {{ .Values.globalConfig.recaptchaPublicKey | b64enc }} + app_recaptcha_private_key: {{ .Values.globalConfig.recaptchaPrivateKey | b64enc }} + app_datasource_url: {{ .Values.appConfig.dataSourceURL | b64enc }} + app_datasource_username: {{ .Values.appConfig.dataSourceUser | b64enc }} + app_datasource_password: {{ .Values.appConfig.dataSourcePassword | b64enc }} diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/templates/service.yaml b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/templates/service.yaml new file mode 100644 index 000000000..78585b8b3 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/templates/service.yaml @@ -0,0 +1,14 @@ +--- +# This is the internal load balancer, routing traffic to the Application +apiVersion: v1 +kind: Service +metadata: + name: "{{ .Values.infrastructure.appName }}-external" + namespace: {{ .Values.infrastructure.namespace }} +spec: + type: "{{ .Values.service.type }}" + ports: + - port: {{ .Values.appConfig.webPort }} + protocol: {{ .Values.service.protocol }} + selector: + app: {{ .Values.application.labelValue }} \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/uninstall-pizza.sh b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/uninstall-pizza.sh new file mode 100644 index 000000000..280755484 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/uninstall-pizza.sh @@ -0,0 +1,6 @@ +# helm uninstall wth-postgresql +helm uninstall wth-mysql +helm uninstall mysql-contosopizza +# helm uninstall postgres-contosopizza +echo "" +echo "Use 
'kubectl get ns' to make sure your pods are not in a Terminating status before redeploying" diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/update_nsg_for_postgres_mysql.sh b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/update_nsg_for_postgres_mysql.sh new file mode 100644 index 000000000..b1cb8b748 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/update_nsg_for_postgres_mysql.sh @@ -0,0 +1,27 @@ + + +# Change NSG firewall rule to restrict Postgres and MySQL database from client machine only. The first step - to find out your local client ip address. + +echo -e "\n This script restricts the access to your Postgres and MySQL database from your computer only. + + The variable myip will get the ip address of the shell environment where this script is running from - be it a cloud shell or your own computer. + You can get your computer's IP adress by browsing to https://ifconfig.me. So if the browser says it is 102.194.87.201, your myip=102.194.87.201/32. +\n" + +myip=`curl -s ifconfig.me`/32 + + +# In this resource group, there is only one NSG. Change the value of the resource group, if required + +export rg_nsg="MC_OSSDBMigration_ossdbmigration_westus" +export nsg_name=`az network nsg list -g $rg_nsg --query "[].name" -o tsv` + +# For this NSG, there are two rules for connecting to Postgres and MySQL. + +export pg_nsg_rule_name=`az network nsg rule list -g $rg_nsg --nsg-name $nsg_name --query '[].[name]' | grep "TCP-5432" | sed 's/"//g'` +export my_nsg_rule_name=`az network nsg rule list -g $rg_nsg --nsg-name $nsg_name --query '[].[name]' | grep "TCP-3306" | sed 's/"//g'` + +# Update the rule to allow access to Postgres and MySQL only from your client ip address - "myip" + +az network nsg rule update -g $rg_nsg --nsg-name $nsg_name --name $my_nsg_rule_name --source-address-prefixes $myip +az network nsg rule update -g $rg_nsg --nsg-name $nsg_name --name $pg_nsg_rule_name --source-address-prefixes $myip diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/values-lowspec.yaml b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/values-lowspec.yaml new file mode 100644 index 000000000..7564b216b --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/values-lowspec.yaml @@ -0,0 +1,78 @@ + +replicaCount: 1 + +# Change the application settings here +appConfig: + databaseType: "mysql" # mysql or postgres + #databaseType: "postgres" # mysql or postgres + #local example of Postgres JDBC Connection string + #dataSourceURL: "jdbc:postgresql://10.71.25.217:5432/wth" # your JDBC connection string goes here + + #Azure example of Postgres JDBC Connection string + #dataSourceURL: "jdbc:postgresql://petepgdbtest01.postgres.database.azure.com:5432/wth?sslmode=require" # your JDBC connection string goes here + + #local example of MySQL JDBC Connection string + dataSourceURL: "jdbc:mysql://10.71.215.5:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here + + #Azure example of MySQL JDBC Connection string + #dataSourceURL: "jdbc:mysql://petewthmysql01.mysql.database.azure.com:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here + + #local examples of dataSourceUser and dataSourcePassword + dataSourceUser: "root" # your database username goes here + dataSourcePassword: "OCPHack8" # your database password goes here + + #Azure 
examples of dataSourceUser and dataSourcePassword + #dataSourceUser: "postgres@petepgdbtest01" # your database username goes here + #dataSourcePassword: "OCPHack8" # your database password goes here + + webPort: 8083 # the port the app listens on + #webPort: 8082 # the port the app listens on + webContext: "pizzeria" # the application context http://hostname:port/webContext + +# These changes applies to any database type used +globalConfig: + secretName: contosopizza + brainTreeMerchantId: "3fk8mrzyr665jb6d" + brainTreePublicKey: "72wqqdk75tmh44n9" + brainTreePrivateKey: "cf094c3345159aaa473a8f50d56c2e33" + recaptchaPublicKey: "6LfqpNsZAAAAACIMnNaeW7fS_-19pZ7K__dREk04" + recaptchaPrivateKey: "6LfqpNsZAAAAAGrUYTOGR69aUvzaRoz7f_tnMBeI" + hibernateDdlAuto: "create-only" + +application: + labelValue: contosopizza + +infrastructure: + namespace: contosopizza + appName: contosopizza + dataVolume: "/usr/local/contosopizza" + volumeName: "contosopizza" + +image: + name: izzymsft/ubuntu-pizza + pullPolicy: IfNotPresent + tag: "1.0" + +service: + type: LoadBalancer + port: 8082 + protocol: TCP + +resources: + limits: + cpu: 1000m + memory: 4096Mi + requests: + cpu: 256m + memory: 512Mi + volume: + size: 1Gi + storageClass: managed-premium + +appSettings: + mysql: + dialect: "org.hibernate.dialect.MySQL57Dialect" + driverClass: "com.mysql.jdbc.Driver" +# postgres: +# dialect: "org.hibernate.dialect.PostgreSQL95Dialect" +# driverClass: "org.postgresql.Driver" \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/values-mysql-orig.yaml b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/values-mysql-orig.yaml new file mode 100644 index 000000000..f477357d2 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/values-mysql-orig.yaml @@ -0,0 +1,13 @@ +replicaCount: 1 + +# Change the application settings here +appConfig: + databaseType: "mysql" # mysql or postgres + dataSourceURL: "jdbc:mysql://XXX.XXX.XXX.XXX:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here + dataSourceUser: "root" # your database username goes here + dataSourcePassword: "OCPHack8" # your database password goes here + webPort: 8081 # the port the app listens on + webContext: "pizzeria" # the application context http://hostname:port/webContext + +infrastructure: + namespace: contosoappmysql \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/values-mysql.yaml b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/values-mysql.yaml new file mode 100644 index 000000000..f477357d2 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/values-mysql.yaml @@ -0,0 +1,13 @@ +replicaCount: 1 + +# Change the application settings here +appConfig: + databaseType: "mysql" # mysql or postgres + dataSourceURL: "jdbc:mysql://XXX.XXX.XXX.XXX:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here + dataSourceUser: "root" # your database username goes here + dataSourcePassword: "OCPHack8" # your database password goes here + webPort: 8081 # the port the app listens on + webContext: "pizzeria" # the application context http://hostname:port/webContext + +infrastructure: + namespace: contosoappmysql \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/values.yaml 
b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/values.yaml new file mode 100644 index 000000000..7564b216b --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/ContosoPizza/values.yaml @@ -0,0 +1,78 @@ + +replicaCount: 1 + +# Change the application settings here +appConfig: + databaseType: "mysql" # mysql or postgres + #databaseType: "postgres" # mysql or postgres + #local example of Postgres JDBC Connection string + #dataSourceURL: "jdbc:postgresql://10.71.25.217:5432/wth" # your JDBC connection string goes here + + #Azure example of Postgres JDBC Connection string + #dataSourceURL: "jdbc:postgresql://petepgdbtest01.postgres.database.azure.com:5432/wth?sslmode=require" # your JDBC connection string goes here + + #local example of MySQL JDBC Connection string + dataSourceURL: "jdbc:mysql://10.71.215.5:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here + + #Azure example of MySQL JDBC Connection string + #dataSourceURL: "jdbc:mysql://petewthmysql01.mysql.database.azure.com:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here + + #local examples of dataSourceUser and dataSourcePassword + dataSourceUser: "root" # your database username goes here + dataSourcePassword: "OCPHack8" # your database password goes here + + #Azure examples of dataSourceUser and dataSourcePassword + #dataSourceUser: "postgres@petepgdbtest01" # your database username goes here + #dataSourcePassword: "OCPHack8" # your database password goes here + + webPort: 8083 # the port the app listens on + #webPort: 8082 # the port the app listens on + webContext: "pizzeria" # the application context http://hostname:port/webContext + +# These changes applies to any database type used +globalConfig: + secretName: contosopizza + brainTreeMerchantId: "3fk8mrzyr665jb6d" + brainTreePublicKey: "72wqqdk75tmh44n9" + brainTreePrivateKey: "cf094c3345159aaa473a8f50d56c2e33" + recaptchaPublicKey: "6LfqpNsZAAAAACIMnNaeW7fS_-19pZ7K__dREk04" + recaptchaPrivateKey: "6LfqpNsZAAAAAGrUYTOGR69aUvzaRoz7f_tnMBeI" + hibernateDdlAuto: "create-only" + +application: + labelValue: contosopizza + +infrastructure: + namespace: contosopizza + appName: contosopizza + dataVolume: "/usr/local/contosopizza" + volumeName: "contosopizza" + +image: + name: izzymsft/ubuntu-pizza + pullPolicy: IfNotPresent + tag: "1.0" + +service: + type: LoadBalancer + port: 8082 + protocol: TCP + +resources: + limits: + cpu: 1000m + memory: 4096Mi + requests: + cpu: 256m + memory: 512Mi + volume: + size: 1Gi + storageClass: managed-premium + +appSettings: + mysql: + dialect: "org.hibernate.dialect.MySQL57Dialect" + driverClass: "com.mysql.jdbc.Driver" +# postgres: +# dialect: "org.hibernate.dialect.PostgreSQL95Dialect" +# driverClass: "org.postgresql.Driver" \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/MySQL57/Chart.yaml b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/MySQL57/Chart.yaml new file mode 100644 index 000000000..01d1f9a62 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/MySQL57/Chart.yaml @@ -0,0 +1,11 @@ +apiVersion: v2 + +name: MySQL Database Server + +description: A Helm chart for deploying a single node MySQL database server + +type: application + +version: 2.0 + +appVersion: 5.7 \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/MySQL57/templates/configmap.yaml 
b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/MySQL57/templates/configmap.yaml new file mode 100644 index 000000000..e53cddb93 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/MySQL57/templates/configmap.yaml @@ -0,0 +1,56 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: mysqld-config + namespace: "{{ .Values.infrastructure.namespace }}" +data: + mysqld.cnf: |- + # Mounted at /etc/mysql/mysql.conf.d/mysqld.cnf + [mysqld] + + lower_case_table_names = 1 + server_id = 3 + + pid-file = /var/run/mysqld/mysqld.pid + socket = /var/run/mysqld/mysqld.sock + datadir = /usr/local/mysql/data + + explicit_defaults_for_timestamp = on + + #log-error = /var/log/mysql/error.log + + # Disabling symbolic-links is recommended to prevent assorted security risks + symbolic-links=0 + + # The value of log_bin is the base name of the sequence of binlog files. + log_bin = mysql-bin + + # The binlog-format must be set to ROW or row. + binlog_format = row + + # The binlog_row_image must be set to FULL or full + binlog_row_image = full + + # This is the number of days for automatic binlog file removal. The default is 0 which means no automatic removal. + expire_logs_days = 7 + + # Boolean which enables/disables support for including the original SQL statement in the binlog entry. + binlog_rows_query_log_events = on + + # Whether updates received by a replica server from a replication source server should be logged to the replica's own binary log + log_slave_updates = on + + # Boolean which specifies whether GTID mode of the MySQL server is enabled or not. + gtid_mode = on + + # Boolean which instructs the server whether or not to enforce GTID consistency by allowing + # the execution of statements that can be logged in a transactionally safe manner; required when using GTIDs. + enforce_gtid_consistency = on + + # The number of seconds the server waits for activity on an interactive connection before closing it. + interactive_timeout = 36000 + + # The number of seconds the server waits for activity on a noninteractive connection before closing it. 
+ wait_timeout = 72000 + + # end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/MySQL57/templates/deployment.yaml b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/MySQL57/templates/deployment.yaml new file mode 100644 index 000000000..7ad89c931 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/MySQL57/templates/deployment.yaml @@ -0,0 +1,74 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: {{ .Values.infrastructure.appName }} + namespace: "{{ .Values.infrastructure.namespace }}" +spec: + replicas: 1 + selector: + matchLabels: + app: {{ .Values.application.labelValue }} + strategy: + type: Recreate + template: + metadata: + labels: + app: {{ .Values.application.labelValue }} + spec: + containers: + - image: "{{ .Values.image.name }}:{{ .Values.image.tag }}" + name: {{ .Values.infrastructure.appName }} + resources: + requests: + memory: "{{ .Values.resources.requests.memory }}" + cpu: "{{ .Values.resources.requests.cpu }}" + limits: + memory: "{{ .Values.resources.limits.memory }}" + cpu: "{{ .Values.resources.limits.cpu }}" + env: + - name: MYSQL_ROOT_PASSWORD + valueFrom: + secretKeyRef: + name: mysqld + key: mysql_password + ports: + - containerPort: {{ .Values.service.port }} + name: mysql + readinessProbe: + tcpSocket: + port: {{ .Values.service.port }} + initialDelaySeconds: 5 + periodSeconds: 10 + failureThreshold: 3 + livenessProbe: + tcpSocket: + port: {{ .Values.service.port }} + initialDelaySeconds: 15 + failureThreshold: 5 + periodSeconds: 16 + volumeMounts: + - name: "{{ .Values.infrastructure.volumeName }}-volume" + mountPath: {{ .Values.infrastructure.dataVolume }} + - name: mysqld-configuration2 + mountPath: /etc/mysql/mysql.conf.d + volumes: + - name: "{{ .Values.infrastructure.volumeName }}-volume" + persistentVolumeClaim: + claimName: "{{ .Values.infrastructure.volumeName }}-persistent-storage" + - name: mysqld-configuration2 + configMap: + name: mysqld-config + +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: "{{ .Values.infrastructure.volumeName }}-persistent-storage" + namespace: "{{ .Values.infrastructure.namespace }}" +spec: + accessModes: + - ReadWriteOnce + storageClassName: {{ .Values.resources.volume.storageClass }} + resources: + requests: + storage: {{ .Values.resources.volume.size }} \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/MySQL57/templates/namespace.yaml b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/MySQL57/templates/namespace.yaml new file mode 100644 index 000000000..877dd0c22 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/MySQL57/templates/namespace.yaml @@ -0,0 +1,8 @@ +{{ if eq .Values.infrastructure.namespace "default" }} +# Do not create namespace +{{ else }} +apiVersion: v1 +kind: Namespace +metadata: + name: {{ .Values.infrastructure.namespace }} +{{ end }} \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/MySQL57/templates/secret.yaml b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/MySQL57/templates/secret.yaml new file mode 100644 index 000000000..c714862b0 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/MySQL57/templates/secret.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: Secret +type: Opaque +metadata: + name: mysqld + namespace: "{{ .Values.infrastructure.namespace }}" +data: + mysql_default_user: {{ .Values.infrastructure.username | 
b64enc }} + mysql_password: {{ .Values.infrastructure.password | b64enc }} \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/MySQL57/templates/service.yaml b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/MySQL57/templates/service.yaml new file mode 100644 index 000000000..7b2a9ca8a --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/MySQL57/templates/service.yaml @@ -0,0 +1,14 @@ +--- +# This is the internal load balancer, routing traffic to the MySQL Pod +apiVersion: v1 +kind: Service +metadata: + name: "{{ .Values.infrastructure.appName }}-external" + namespace: {{ .Values.infrastructure.namespace }} +spec: + type: "{{ .Values.service.type }}" + ports: + - port: {{ .Values.service.port }} + protocol: {{ .Values.service.protocol }} + selector: + app: {{ .Values.application.labelValue }} diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/MySQL57/values.yaml b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/MySQL57/values.yaml new file mode 100644 index 000000000..4ce5bad0b --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/MySQL57/values.yaml @@ -0,0 +1,34 @@ + +replicaCount: 1 + +application: + labelValue: mysql + +infrastructure: + namespace: mysql + appName: mysql + username: izzy + password: "OCPHack8" + dataVolume: "/usr/local/mysql" + volumeName: "wthmysql" + +image: + name: mysql + pullPolicy: IfNotPresent + tag: "5.7.32" + +service: + type: LoadBalancer + port: 3306 + protocol: TCP + +resources: + limits: + cpu: 1000m + memory: 4096Mi + requests: + cpu: 750m + memory: 2048Mi + volume: + size: 5Gi + storageClass: managed-premium \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/Chart.yaml b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/Chart.yaml new file mode 100644 index 000000000..76ae49cb7 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/Chart.yaml @@ -0,0 +1,11 @@ +apiVersion: v2 + +name: PostgreSQL + +description: A Helm chart for deploying a single node PostgreSQL database server + +type: application + +version: 2.0 + +appVersion: 11.6 \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/templates/deployment.yaml b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/templates/deployment.yaml new file mode 100644 index 000000000..88567b112 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/templates/deployment.yaml @@ -0,0 +1,91 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: {{ .Values.infrastructure.appName }} + namespace: "{{ .Values.infrastructure.namespace }}" +spec: + replicas: 1 + selector: + matchLabels: + app: {{ .Values.application.labelValue }} + strategy: + type: Recreate + template: + metadata: + labels: + app: {{ .Values.application.labelValue }} + spec: + securityContext: + runAsUser: 0 + runAsGroup: 999 + fsGroup: 999 + containers: + - image: "{{ .Values.image.name }}:{{ .Values.image.tag }}" + name: {{ .Values.infrastructure.appName }} + args: ["-c", "config_file=/etc/postgresql/postgresql.conf"] + resources: + requests: + memory: "{{ .Values.resources.requests.memory }}" + cpu: "{{ .Values.resources.requests.cpu }}" + limits: + memory: "{{ .Values.resources.limits.memory }}" + cpu: "{{ .Values.resources.limits.cpu }}" + env: + - 
name: POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + name: postgres + key: postgres_password + - name: PGDATA + value: {{ .Values.infrastructure.dataPath }} + ports: + - containerPort: {{ .Values.service.port }} + name: postgres + readinessProbe: + tcpSocket: + port: {{ .Values.service.port }} + initialDelaySeconds: 5 + periodSeconds: 10 + failureThreshold: 3 + livenessProbe: + tcpSocket: + port: {{ .Values.service.port }} + initialDelaySeconds: 15 + failureThreshold: 5 + periodSeconds: 16 + volumeMounts: + - name: "{{ .Values.infrastructure.appName }}-volume" + mountPath: {{ .Values.infrastructure.dataVolume }} + - name: "postgresql-configuration" + mountPath: "/etc/postgresql" + - name: "postgresql-tls-keys" + mountPath: "/etc/postgresql/keys" + volumes: + - name: "{{ .Values.infrastructure.appName }}-volume" + persistentVolumeClaim: + claimName: "{{ .Values.infrastructure.appName }}-persistent-storage" + - name: postgresql-configuration + configMap: + name: postgresql-config + - name: postgresql-tls-keys + secret: + secretName: postgresql-tls-secret + items: + - key: tls.crt + path: "tls.crt" + - key: tls.key + path: "tls.key" + mode: 0640 +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: "{{ .Values.infrastructure.appName }}-persistent-storage" + namespace: "{{ .Values.infrastructure.namespace }}" +spec: + accessModes: + - ReadWriteOnce + storageClassName: {{ .Values.resources.volume.storageClass }} + resources: + requests: + storage: {{ .Values.resources.volume.size }} \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/templates/namespace.yaml b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/templates/namespace.yaml new file mode 100644 index 000000000..1da5da543 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/templates/namespace.yaml @@ -0,0 +1,8 @@ +{{ if eq .Values.infrastructure.namespace "default" }} +# Do not create namespace +{{ else }} +apiVersion: v1 +kind: Namespace +metadata: + name: {{ .Values.infrastructure.namespace }} +{{ end }} diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/templates/postgresql-configmap.yaml b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/templates/postgresql-configmap.yaml new file mode 100644 index 000000000..62c3e6b10 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/templates/postgresql-configmap.yaml @@ -0,0 +1,699 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: postgresql-config + namespace: {{ .Values.infrastructure.namespace }} +data: + postgresql.conf: |- + # Mounted at /etc/postgresql/postgresql.conf + # ----------------------------- + # PostgreSQL configuration file + # ----------------------------- + # + # This file consists of lines of the form: + # + # name = value + # + # (The "=" is optional.) Whitespace may be used. Comments are introduced with + # "#" anywhere on a line. The complete list of parameter names and allowed + # values can be found in the PostgreSQL documentation. + # + # The commented-out settings shown in this file represent the default values. + # Re-commenting a setting is NOT sufficient to revert it to the default value; + # you need to reload the server. + # + # This file is read on server startup and when the server receives a SIGHUP + # signal. 
If you edit the file on a running system, you have to SIGHUP the + # server for the changes to take effect, run "pg_ctl reload", or execute + # "SELECT pg_reload_conf()". Some parameters, which are marked below, + # require a server shutdown and restart to take effect. + # + # Any parameter can also be given as a command-line option to the server, e.g., + # "postgres -c log_connections=on". Some parameters can be changed at run time + # with the "SET" SQL command. + # + # Memory units: kB = kilobytes Time units: ms = milliseconds + # MB = megabytes s = seconds + # GB = gigabytes min = minutes + # TB = terabytes h = hours + # d = days + + + #------------------------------------------------------------------------------ + # FILE LOCATIONS + #------------------------------------------------------------------------------ + + # The default values of these variables are driven from the -D command-line + # option or PGDATA environment variable, represented here as ConfigDir. + + #data_directory = 'ConfigDir' # use data in another directory + # (change requires restart) + #hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file + # (change requires restart) + #ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file + # (change requires restart) + + # If external_pid_file is not explicitly set, no extra PID file is written. + #external_pid_file = '' # write an extra PID file + # (change requires restart) + + + #------------------------------------------------------------------------------ + # CONNECTIONS AND AUTHENTICATION + #------------------------------------------------------------------------------ + + # - Connection Settings - + + listen_addresses = '*' + # comma-separated list of addresses; + # defaults to 'localhost'; use '*' for all + # (change requires restart) + #port = 5432 # (change requires restart) + max_connections = 100 # (change requires restart) + #superuser_reserved_connections = 3 # (change requires restart) + #unix_socket_directories = '/var/run/postgresql' # comma-separated list of directories + # (change requires restart) + #unix_socket_group = '' # (change requires restart) + #unix_socket_permissions = 0777 # begin with 0 to use octal notation + # (change requires restart) + #bonjour = off # advertise server via Bonjour + # (change requires restart) + #bonjour_name = '' # defaults to the computer name + # (change requires restart) + + # - TCP Keepalives - + # see "man 7 tcp" for details + + #tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds; + # 0 selects the system default + #tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds; + # 0 selects the system default + #tcp_keepalives_count = 0 # TCP_KEEPCNT; + # 0 selects the system default + + # - Authentication - + + #authentication_timeout = 1min # 1s-600s + #password_encryption = md5 # md5 or scram-sha-256 + #db_user_namespace = off + + # GSSAPI using Kerberos + #krb_server_keyfile = '' + #krb_caseins_users = off + + # - SSL - + + ssl = on + ssl_ca_file = '/etc/postgresql/keys/tls.crt' + ssl_cert_file = '/etc/postgresql/keys/tls.crt' + #ssl_crl_file = '' + ssl_key_file = '/etc/postgresql/keys/tls.key' + #ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers + #ssl_prefer_server_ciphers = on + #ssl_ecdh_curve = 'prime256v1' + #ssl_dh_params_file = '' + #ssl_passphrase_command = '' + #ssl_passphrase_command_supports_reload = off + + + #------------------------------------------------------------------------------ + # RESOURCE USAGE (except WAL) + 
#------------------------------------------------------------------------------ + + # - Memory - + + shared_buffers = 128MB # min 128kB + # (change requires restart) + #huge_pages = try # on, off, or try + # (change requires restart) + #temp_buffers = 8MB # min 800kB + #max_prepared_transactions = 0 # zero disables the feature + # (change requires restart) + # Caution: it is not advisable to set max_prepared_transactions nonzero unless + # you actively intend to use prepared transactions. + #work_mem = 4MB # min 64kB + #maintenance_work_mem = 64MB # min 1MB + #autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem + #max_stack_depth = 2MB # min 100kB + dynamic_shared_memory_type = posix # the default is the first option + # supported by the operating system: + # posix + # sysv + # windows + # mmap + # use none to disable dynamic shared memory + # (change requires restart) + + # - Disk - + + #temp_file_limit = -1 # limits per-process temp file space + # in kB, or -1 for no limit + + # - Kernel Resources - + + #max_files_per_process = 1000 # min 25 + # (change requires restart) + + # - Cost-Based Vacuum Delay - + + #vacuum_cost_delay = 0 # 0-100 milliseconds + #vacuum_cost_page_hit = 1 # 0-10000 credits + #vacuum_cost_page_miss = 10 # 0-10000 credits + #vacuum_cost_page_dirty = 20 # 0-10000 credits + #vacuum_cost_limit = 200 # 1-10000 credits + + # - Background Writer - + + #bgwriter_delay = 200ms # 10-10000ms between rounds + #bgwriter_lru_maxpages = 100 # max buffers written/round, 0 disables + #bgwriter_lru_multiplier = 2.0 # 0-10.0 multiplier on buffers scanned/round + #bgwriter_flush_after = 512kB # measured in pages, 0 disables + + # - Asynchronous Behavior - + + #effective_io_concurrency = 1 # 1-1000; 0 disables prefetching + #max_worker_processes = 8 # (change requires restart) + #max_parallel_maintenance_workers = 2 # taken from max_parallel_workers + #max_parallel_workers_per_gather = 2 # taken from max_parallel_workers + #parallel_leader_participation = on + #max_parallel_workers = 8 # maximum number of max_worker_processes that + # can be used in parallel operations + #old_snapshot_threshold = -1 # 1min-60d; -1 disables; 0 is immediate + # (change requires restart) + #backend_flush_after = 0 # measured in pages, 0 disables + + + #------------------------------------------------------------------------------ + # WRITE-AHEAD LOG + #------------------------------------------------------------------------------ + + # - Settings - + + wal_level = logical # minimal, replica, or logical + # (change requires restart) + #fsync = on # flush data to disk for crash safety + # (turning this off can cause + # unrecoverable data corruption) + #synchronous_commit = on # synchronization level; + # off, local, remote_write, remote_apply, or on + #wal_sync_method = fsync # the default is the first option + # supported by the operating system: + # open_datasync + # fdatasync (default on Linux) + # fsync + # fsync_writethrough + # open_sync + #full_page_writes = on # recover from partial page writes + #wal_compression = off # enable compression of full-page writes + #wal_log_hints = off # also do full page writes of non-critical updates + # (change requires restart) + #wal_buffers = -1 # min 32kB, -1 sets based on shared_buffers + # (change requires restart) + #wal_writer_delay = 200ms # 1-10000 milliseconds + #wal_writer_flush_after = 1MB # measured in pages, 0 disables + + #commit_delay = 0 # range 0-100000, in microseconds + #commit_siblings = 5 # range 1-1000 + + # - Checkpoints - 
+ + #checkpoint_timeout = 5min # range 30s-1d + max_wal_size = 1GB + min_wal_size = 80MB + #checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0 + #checkpoint_flush_after = 256kB # measured in pages, 0 disables + #checkpoint_warning = 30s # 0 disables + + # - Archiving - + + #archive_mode = off # enables archiving; off, on, or always + # (change requires restart) + #archive_command = '' # command to use to archive a logfile segment + # placeholders: %p = path of file to archive + # %f = file name only + # e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f' + #archive_timeout = 0 # force a logfile segment switch after this + # number of seconds; 0 disables + + + #------------------------------------------------------------------------------ + # REPLICATION + #------------------------------------------------------------------------------ + + # - Sending Servers - + + # Set these on the master and on any standby that will send replication data. + + #max_wal_senders = 10 # max number of walsender processes + # (change requires restart) + #wal_keep_segments = 0 # in logfile segments; 0 disables + #wal_sender_timeout = 60s # in milliseconds; 0 disables + + #max_replication_slots = 10 # max number of replication slots + # (change requires restart) + #track_commit_timestamp = off # collect timestamp of transaction commit + # (change requires restart) + + # - Master Server - + + # These settings are ignored on a standby server. + + #synchronous_standby_names = '' # standby servers that provide sync rep + # method to choose sync standbys, number of sync standbys, + # and comma-separated list of application_name + # from standby(s); '*' = all + #vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is delayed + + # - Standby Servers - + + # These settings are ignored on a master server. + + #hot_standby = on # "off" disallows queries during recovery + # (change requires restart) + #max_standby_archive_delay = 30s # max delay before canceling queries + # when reading WAL from archive; + # -1 allows indefinite delay + #max_standby_streaming_delay = 30s # max delay before canceling queries + # when reading streaming WAL; + # -1 allows indefinite delay + #wal_receiver_status_interval = 10s # send replies at least this often + # 0 disables + #hot_standby_feedback = off # send info from standby to prevent + # query conflicts + #wal_receiver_timeout = 60s # time that receiver waits for + # communication from master + # in milliseconds; 0 disables + #wal_retrieve_retry_interval = 5s # time to wait before retrying to + # retrieve WAL after a failed attempt + + # - Subscribers - + + # These settings are ignored on a publisher. 
+ + #max_logical_replication_workers = 4 # taken from max_worker_processes + # (change requires restart) + #max_sync_workers_per_subscription = 2 # taken from max_logical_replication_workers + + + #------------------------------------------------------------------------------ + # QUERY TUNING + #------------------------------------------------------------------------------ + + # - Planner Method Configuration - + + #enable_bitmapscan = on + #enable_hashagg = on + #enable_hashjoin = on + #enable_indexscan = on + #enable_indexonlyscan = on + #enable_material = on + #enable_mergejoin = on + #enable_nestloop = on + #enable_parallel_append = on + #enable_seqscan = on + #enable_sort = on + #enable_tidscan = on + #enable_partitionwise_join = off + #enable_partitionwise_aggregate = off + #enable_parallel_hash = on + #enable_partition_pruning = on + + # - Planner Cost Constants - + + #seq_page_cost = 1.0 # measured on an arbitrary scale + #random_page_cost = 4.0 # same scale as above + #cpu_tuple_cost = 0.01 # same scale as above + #cpu_index_tuple_cost = 0.005 # same scale as above + #cpu_operator_cost = 0.0025 # same scale as above + #parallel_tuple_cost = 0.1 # same scale as above + #parallel_setup_cost = 1000.0 # same scale as above + + #jit_above_cost = 100000 # perform JIT compilation if available + # and query more expensive than this; + # -1 disables + #jit_inline_above_cost = 500000 # inline small functions if query is + # more expensive than this; -1 disables + #jit_optimize_above_cost = 500000 # use expensive JIT optimizations if + # query is more expensive than this; + # -1 disables + + #min_parallel_table_scan_size = 8MB + #min_parallel_index_scan_size = 512kB + #effective_cache_size = 4GB + + # - Genetic Query Optimizer - + + #geqo = on + #geqo_threshold = 12 + #geqo_effort = 5 # range 1-10 + #geqo_pool_size = 0 # selects default based on effort + #geqo_generations = 0 # selects default based on effort + #geqo_selection_bias = 2.0 # range 1.5-2.0 + #geqo_seed = 0.0 # range 0.0-1.0 + + # - Other Planner Options - + + #default_statistics_target = 100 # range 1-10000 + #constraint_exclusion = partition # on, off, or partition + #cursor_tuple_fraction = 0.1 # range 0.0-1.0 + #from_collapse_limit = 8 + #join_collapse_limit = 8 # 1 disables collapsing of explicit + # JOIN clauses + #force_parallel_mode = off + #jit = off # allow JIT compilation + + + #------------------------------------------------------------------------------ + # REPORTING AND LOGGING + #------------------------------------------------------------------------------ + + # - Where to Log - + + #log_destination = 'stderr' # Valid values are combinations of + # stderr, csvlog, syslog, and eventlog, + # depending on platform. csvlog + # requires logging_collector to be on. + + # This is used when logging to stderr: + #logging_collector = off # Enable capturing of stderr and csvlog + # into log files. Required to be on for + # csvlogs. + # (change requires restart) + + # These are only used if logging_collector is on: + #log_directory = 'log' # directory where log files are written, + # can be absolute or relative to PGDATA + #log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern, + # can include strftime() escapes + #log_file_mode = 0600 # creation mode for log files, + # begin with 0 to use octal notation + #log_truncate_on_rotation = off # If on, an existing log file with the + # same name as the new log file will be + # truncated rather than appended to. 
+ # But such truncation only occurs on + # time-driven rotation, not on restarts + # or size-driven rotation. Default is + # off, meaning append to existing files + # in all cases. + #log_rotation_age = 1d # Automatic rotation of logfiles will + # happen after that time. 0 disables. + #log_rotation_size = 10MB # Automatic rotation of logfiles will + # happen after that much log output. + # 0 disables. + + # These are relevant when logging to syslog: + #syslog_facility = 'LOCAL0' + #syslog_ident = 'postgres' + #syslog_sequence_numbers = on + #syslog_split_messages = on + + # This is only relevant when logging to eventlog (win32): + # (change requires restart) + #event_source = 'PostgreSQL' + + # - When to Log - + + #log_min_messages = warning # values in order of decreasing detail: + # debug5 + # debug4 + # debug3 + # debug2 + # debug1 + # info + # notice + # warning + # error + # log + # fatal + # panic + + #log_min_error_statement = error # values in order of decreasing detail: + # debug5 + # debug4 + # debug3 + # debug2 + # debug1 + # info + # notice + # warning + # error + # log + # fatal + # panic (effectively off) + + #log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements + # and their durations, > 0 logs only + # statements running at least this number + # of milliseconds + + + # - What to Log - + + #debug_print_parse = off + #debug_print_rewritten = off + #debug_print_plan = off + #debug_pretty_print = on + #log_checkpoints = off + #log_connections = off + #log_disconnections = off + #log_duration = off + #log_error_verbosity = default # terse, default, or verbose messages + #log_hostname = off + #log_line_prefix = '%m [%p] ' # special values: + # %a = application name + # %u = user name + # %d = database name + # %r = remote host and port + # %h = remote host + # %p = process ID + # %t = timestamp without milliseconds + # %m = timestamp with milliseconds + # %n = timestamp with milliseconds (as a Unix epoch) + # %i = command tag + # %e = SQL state + # %c = session ID + # %l = session line number + # %s = session start timestamp + # %v = virtual transaction ID + # %x = transaction ID (0 if none) + # %q = stop here in non-session + # processes + # %% = '%' + # e.g. 
'<%u%%%d> ' + #log_lock_waits = off # log lock waits >= deadlock_timeout + #log_statement = 'none' # none, ddl, mod, all + #log_replication_commands = off + #log_temp_files = -1 # log temporary files equal or larger + # than the specified size in kilobytes; + # -1 disables, 0 logs all temp files + log_timezone = 'Etc/UTC' + + #------------------------------------------------------------------------------ + # PROCESS TITLE + #------------------------------------------------------------------------------ + + #cluster_name = '' # added to process titles if nonempty + # (change requires restart) + #update_process_title = on + + + #------------------------------------------------------------------------------ + # STATISTICS + #------------------------------------------------------------------------------ + + # - Query and Index Statistics Collector - + + #track_activities = on + #track_counts = on + #track_io_timing = off + #track_functions = none # none, pl, all + #track_activity_query_size = 1024 # (change requires restart) + #stats_temp_directory = 'pg_stat_tmp' + + + # - Monitoring - + + #log_parser_stats = off + #log_planner_stats = off + #log_executor_stats = off + #log_statement_stats = off + + + #------------------------------------------------------------------------------ + # AUTOVACUUM + #------------------------------------------------------------------------------ + + #autovacuum = on # Enable autovacuum subprocess? 'on' + # requires track_counts to also be on. + #log_autovacuum_min_duration = -1 # -1 disables, 0 logs all actions and + # their durations, > 0 logs only + # actions running at least this number + # of milliseconds. + #autovacuum_max_workers = 3 # max number of autovacuum subprocesses + # (change requires restart) + #autovacuum_naptime = 1min # time between autovacuum runs + #autovacuum_vacuum_threshold = 50 # min number of row updates before + # vacuum + #autovacuum_analyze_threshold = 50 # min number of row updates before + # analyze + #autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum + #autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze + #autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum + # (change requires restart) + #autovacuum_multixact_freeze_max_age = 400000000 # maximum multixact age + # before forced vacuum + # (change requires restart) + #autovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for + # autovacuum, in milliseconds; + # -1 means use vacuum_cost_delay + #autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for + # autovacuum, -1 means use + # vacuum_cost_limit + + + #------------------------------------------------------------------------------ + # CLIENT CONNECTION DEFAULTS + #------------------------------------------------------------------------------ + + # - Statement Behavior - + + #client_min_messages = notice # values in order of decreasing detail: + # debug5 + # debug4 + # debug3 + # debug2 + # debug1 + # log + # notice + # warning + # error + #search_path = '"$user", public' # schema names + #row_security = on + #default_tablespace = '' # a tablespace name, '' uses the default + #temp_tablespaces = '' # a list of tablespace names, '' uses + # only default tablespace + #check_function_bodies = on + #default_transaction_isolation = 'read committed' + #default_transaction_read_only = off + #default_transaction_deferrable = off + #session_replication_role = 'origin' + #statement_timeout = 0 # in milliseconds, 0 is disabled + #lock_timeout = 
0 # in milliseconds, 0 is disabled + #idle_in_transaction_session_timeout = 0 # in milliseconds, 0 is disabled + #vacuum_freeze_min_age = 50000000 + #vacuum_freeze_table_age = 150000000 + #vacuum_multixact_freeze_min_age = 5000000 + #vacuum_multixact_freeze_table_age = 150000000 + #vacuum_cleanup_index_scale_factor = 0.1 # fraction of total number of tuples + # before index cleanup, 0 always performs + # index cleanup + #bytea_output = 'hex' # hex, escape + #xmlbinary = 'base64' + #xmloption = 'content' + #gin_fuzzy_search_limit = 0 + #gin_pending_list_limit = 4MB + + # - Locale and Formatting - + + datestyle = 'iso, mdy' + #intervalstyle = 'postgres' + timezone = 'Etc/UTC' + #timezone_abbreviations = 'Default' # Select the set of available time zone + # abbreviations. Currently, there are + # Default + # Australia (historical usage) + # India + # You can create your own file in + # share/timezonesets/. + #extra_float_digits = 0 # min -15, max 3 + #client_encoding = sql_ascii # actually, defaults to database + # encoding + + # These settings are initialized by initdb, but they can be changed. + lc_messages = 'en_US.utf8' # locale for system error message + # strings + lc_monetary = 'en_US.utf8' # locale for monetary formatting + lc_numeric = 'en_US.utf8' # locale for number formatting + lc_time = 'en_US.utf8' # locale for time formatting + + # default configuration for text search + default_text_search_config = 'pg_catalog.english' + + # - Shared Library Preloading - + + #shared_preload_libraries = '' # (change requires restart) + #local_preload_libraries = '' + #session_preload_libraries = '' + #jit_provider = 'llvmjit' # JIT library to use + + # - Other Defaults - + + #dynamic_library_path = '$libdir' + + + #------------------------------------------------------------------------------ + # LOCK MANAGEMENT + #------------------------------------------------------------------------------ + + #deadlock_timeout = 1s + #max_locks_per_transaction = 64 # min 10 + # (change requires restart) + #max_pred_locks_per_transaction = 64 # min 10 + # (change requires restart) + #max_pred_locks_per_relation = -2 # negative values mean + # (max_pred_locks_per_transaction + # / -max_pred_locks_per_relation) - 1 + #max_pred_locks_per_page = 2 # min 0 + + + #------------------------------------------------------------------------------ + # VERSION AND PLATFORM COMPATIBILITY + #------------------------------------------------------------------------------ + + # - Previous PostgreSQL Versions - + + #array_nulls = on + #backslash_quote = safe_encoding # on, off, or safe_encoding + #default_with_oids = off + #escape_string_warning = on + #lo_compat_privileges = off + #operator_precedence_warning = off + #quote_all_identifiers = off + #standard_conforming_strings = on + #synchronize_seqscans = on + + # - Other Platforms and Clients - + + #transform_null_equals = off + + + #------------------------------------------------------------------------------ + # ERROR HANDLING + #------------------------------------------------------------------------------ + + #exit_on_error = off # terminate session on any error? + #restart_after_crash = on # reinitialize after backend crash? + #data_sync_retry = off # retry or panic on failure to fsync + # data? 
+ # (change requires restart) + + + #------------------------------------------------------------------------------ + # CONFIG FILE INCLUDES + #------------------------------------------------------------------------------ + + # These options allow settings to be loaded from files other than the + # default postgresql.conf. Note that these are directives, not variable + # assignments, so they can usefully be given more than once. + + #include_dir = '...' # include files ending in '.conf' from + # a directory, e.g., 'conf.d' + #include_if_exists = '...' # include file only if it exists + #include = '...' # include file + + + #------------------------------------------------------------------------------ + # CUSTOMIZED OPTIONS + #------------------------------------------------------------------------------ + + # Add settings for extensions here diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/templates/postgresql-tls-secret.yaml b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/templates/postgresql-tls-secret.yaml new file mode 100644 index 000000000..7cc4aee04 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/templates/postgresql-tls-secret.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +data: + tls.crt: Q2VydGlmaWNhdGU6CiAgICBEYXRhOgogICAgICAgIFZlcnNpb246IDMgKDB4MikKICAgICAgICBTZXJpYWwgTnVtYmVyOgogICAgICAgICAgICA0NDpkNjpkNjo2Mzo3Yjo2MjoxMjpjZTo3NTo2ZDozZDoxODo0NzplYjo1Nzo2MjplZjphNTo4ZjoyZgogICAgICAgIFNpZ25hdHVyZSBBbGdvcml0aG06IHNoYTI1NldpdGhSU0FFbmNyeXB0aW9uCiAgICAgICAgSXNzdWVyOiBDID0gVVMsIFNUID0gSUwsIE8gPSBJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQsIENOID0gbG9jYWxob3N0CiAgICAgICAgVmFsaWRpdHkKICAgICAgICAgICAgTm90IEJlZm9yZTogRGVjIDE1IDIyOjQ4OjI2IDIwMjAgR01UCiAgICAgICAgICAgIE5vdCBBZnRlciA6IEphbiAxNCAyMjo0ODoyNiAyMDIxIEdNVAogICAgICAgIFN1YmplY3Q6IEMgPSBVUywgU1QgPSBJTCwgTyA9IEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZCwgQ04gPSBsb2NhbGhvc3QKICAgICAgICBTdWJqZWN0IFB1YmxpYyBLZXkgSW5mbzoKICAgICAgICAgICAgUHVibGljIEtleSBBbGdvcml0aG06IHJzYUVuY3J5cHRpb24KICAgICAgICAgICAgICAgIFJTQSBQdWJsaWMtS2V5OiAoMjA0OCBiaXQpCiAgICAgICAgICAgICAgICBNb2R1bHVzOgogICAgICAgICAgICAgICAgICAgIDAwOmQzOmQxOjQ4OjE1OmZmOmY3OjY1OjgwOmU5OmRhOjc5OmFkOjk2OjFkOgogICAgICAgICAgICAgICAgICAgIDhkOjQ2OjY4OmVlOjYzOjM5Ojg2OjdhOmFmOmQxOjUwOmU0OmJhOjU5OmI1OgogICAgICAgICAgICAgICAgICAgIGYzOjJlOmJjOmM5OmVhOjg4OjQzOmM3OjM1OjdiOmU0OjA2OmFlOmM5OmQ2OgogICAgICAgICAgICAgICAgICAgIDJiOjNmOjNkOmNiOmJmOmZiOjlkOmU0OjcyOjk3OjZkOmM2OjI4OjBkOmIxOgogICAgICAgICAgICAgICAgICAgIDYzOmU5OjhmOmFiOjhmOjhjOmQyOmFhOjUzOjBmOmUyOjg1OmRkOmYwOjZmOgogICAgICAgICAgICAgICAgICAgIDk3OmIwOmRmOmIxOmEzOmM0OjdhOjJlOjIyOjVjOmYyOjliOjM5OjE0OjE5OgogICAgICAgICAgICAgICAgICAgIDI0OmRiOjA3OjdiOmNmOmUxOjliOjJhOjViOmZmOmY2OmUzOmQ3OjMzOmRhOgogICAgICAgICAgICAgICAgICAgIDBiOjg0OjhmOjhiOjIxOmZkOjZiOmQzOjAyOjcxOmUwOmU0OjdlOmY0OjE1OgogICAgICAgICAgICAgICAgICAgIDhhOjJiOmRlOmFmOjM5OjRlOjdjOjY5OjU1OjU1OjM4OmRhOjhlOjkyOjU1OgogICAgICAgICAgICAgICAgICAgIGQzOmQ4OmMxOjBlOmVjOjc5OjZjOjQwOjJhOjVkOmI1Ojg4OjI0OjVlOjFkOgogICAgICAgICAgICAgICAgICAgIDcyOmYwOjZlOmMxOmY3OmRmOjg1OjVhOmNjOjM0OjYxOjk2Ojk4OjE0OjI0OgogICAgICAgICAgICAgICAgICAgIGZmOmRmOjE2OjBmOmExOmZmOmJjOmY3OmY5OjlhOjNjOjU4OjcwOmQxOmJiOgogICAgICAgICAgICAgICAgICAgIDAzOmVkOjE4OjA2OmJmOjczOjM4OmZhOjY0OjRhOmExOmNhOjAzOjAzOjRlOgogICAgICAgICAgICAgICAgICAgIDYzOjE4OmM4OmMzOjRlOjc0OjA4OjA3OmJjOjQxOmE2OjgyOjRlOjRhOmE4OgogICAgICAgICAgICAgICAgICAgIDdmOmFkOmJhOmYxOjhmOjY2OjIyOmNlOmUwOjQ2OjVkOmRlOmEwOjA3OjMzOgogICAgICAgICAgICAgICAgICAgIDE3OjI1Ojc0OjQ5OjBlOmNjOmRmOjkyOmQzOjMwOjM1OmRhOjYwOjJjOjdlOgogI
CAgICAgICAgICAgICAgICAgIDE4OjVlOjg5OmQyOjhmOmY3OjZkOjgzOjE3OjJkOjVlOjczOjQyOmZkOjBkOgogICAgICAgICAgICAgICAgICAgIDc2Ojc1CiAgICAgICAgICAgICAgICBFeHBvbmVudDogNjU1MzcgKDB4MTAwMDEpCiAgICAgICAgWDUwOXYzIGV4dGVuc2lvbnM6CiAgICAgICAgICAgIFg1MDl2MyBTdWJqZWN0IEtleSBJZGVudGlmaWVyOiAKICAgICAgICAgICAgICAgIDFGOjJFOjlBOkNEOjg2OjhDOjRDOjU4Ojg4OkQxOkYxOjFGOjQxOkM3OjRBOjk4OjgxOkM3OjY0OjhECiAgICAgICAgICAgIFg1MDl2MyBBdXRob3JpdHkgS2V5IElkZW50aWZpZXI6IAogICAgICAgICAgICAgICAga2V5aWQ6MUY6MkU6OUE6Q0Q6ODY6OEM6NEM6NTg6ODg6RDE6RjE6MUY6NDE6Qzc6NEE6OTg6ODE6Qzc6NjQ6OEQKCiAgICAgICAgICAgIFg1MDl2MyBCYXNpYyBDb25zdHJhaW50czogY3JpdGljYWwKICAgICAgICAgICAgICAgIENBOlRSVUUKICAgIFNpZ25hdHVyZSBBbGdvcml0aG06IHNoYTI1NldpdGhSU0FFbmNyeXB0aW9uCiAgICAgICAgIDQ5Ojc0Ojc2OmM0OmVkOmMxOmU2OjdkOmRjOjA3OjY2Ojg5OjFlOjg4Ojk3OjgyOjAzOjQ3OgogICAgICAgICA2Mzo2YjowYjpiMTowZTo3ODo1MDo0MDoxNDpjNDpkNzplYToxNzowMTozNjo3OTo0NjphZToKICAgICAgICAgNGU6MzM6ZTc6MWU6OTQ6OWI6NTg6YmY6OTk6OGQ6MDc6YjU6NDY6MWQ6Mjk6ZmY6NTY6ZDc6CiAgICAgICAgIGZjOmY2OmI5OmNjOjYwOmRmOjdkOjE5OjU4OmJiOjc2OmY1OjdkOjVhOjlkOjM2OjU2OjMxOgogICAgICAgICBlOTpiNDowYTo5NjplMDpiYjo0OTo1YTpmNDpkOTo1MDplMzo1YzpjZTo4Nzo2NzpjOToyMjoKICAgICAgICAgNTE6NjQ6MWU6YTY6ZWE6NTA6NjY6ZDg6Mzc6MjU6ODE6Yzg6OTc6MmY6NDI6MWM6YTk6M2Y6CiAgICAgICAgIDVkOmVjOjA1OjFjOjQ4OjE2Ojk3OmE3OmQwOmZhOjI5Ojg5OmNmOjEzOjk4OmQwOjBhOjNjOgogICAgICAgICAxOTowZjpjMzpkMTpkYzo1MTozNjo5ZDo4ZTowMDo1YToyMDo5Njo1ZDo1NzoxNjo5YTpkMToKICAgICAgICAgNmQ6ODc6ODc6NDk6YzE6MjU6YjQ6ZDI6Y2Y6MzI6MzM6YjM6MTc6ZGY6Njg6NWM6ZWQ6MzQ6CiAgICAgICAgIGIxOjQ0OmM3OjM4OjM2OmJhOjQ5OjYwOjI2OjQzOmQzOjFkOjE5OmIyOmU1OmQ1OmY0OmY1OgogICAgICAgICBhYzplNTpiNjo0NzplYzplMTowYzpkODo0ZDo2NTo0OToyMTo2Zjo1MDphNzo0NzoyNjphZjoKICAgICAgICAgZGE6MTU6NjE6MzU6YTg6MTI6YmU6MTk6YTg6NDE6MzI6MDY6MGE6NDY6YjI6ZWU6Y2Y6N2M6CiAgICAgICAgIDAzOjNhOjgzOjIxOjFmOjE5OmY1OjE1OmVkOjdmOjNjOjhlOmY5OmNkOmJkOjg0OjQ0OjljOgogICAgICAgICBiZjo0OToyZDo0MDo0ZTphZjplNzo2YjoyMDozNzo2NzpkMDoxMTpjYTpkOTo1ODpjNzo2ODoKICAgICAgICAgMTE6Yjc6ZjA6MGYKLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURnekNDQW11Z0F3SUJBZ0lVUk5iV1kzdGlFczUxYlQwWVIrdFhZdStsank4d0RRWUpLb1pJaHZjTkFRRUwKQlFBd1VURUxNQWtHQTFVRUJoTUNWVk14Q3pBSkJnTlZCQWdNQWtsTU1TRXdId1lEVlFRS0RCaEpiblJsY201bApkQ0JYYVdSbmFYUnpJRkIwZVNCTWRHUXhFakFRQmdOVkJBTU1DV3h2WTJGc2FHOXpkREFlRncweU1ERXlNVFV5Ck1qUTRNalphRncweU1UQXhNVFF5TWpRNE1qWmFNRkV4Q3pBSkJnTlZCQVlUQWxWVE1Rc3dDUVlEVlFRSURBSkoKVERFaE1COEdBMVVFQ2d3WVNXNTBaWEp1WlhRZ1YybGtaMmwwY3lCUWRIa2dUSFJrTVJJd0VBWURWUVFEREFscwpiMk5oYkdodmMzUXdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFEVDBVZ1YvL2RsCmdPbmFlYTJXSFkxR2FPNWpPWVo2cjlGUTVMcFp0Zk11dk1ucWlFUEhOWHZrQnE3SjFpcy9QY3UvKzUza2NwZHQKeGlnTnNXUHBqNnVQak5LcVV3L2loZDN3YjVldzM3R2p4SG91SWx6eW16a1VHU1RiQjN2UDRac3FXLy8yNDljegoyZ3VFajRzaC9XdlRBbkhnNUg3MEZZb3IzcTg1VG54cFZWVTQybzZTVmRQWXdRN3NlV3hBS2wyMWlDUmVIWEx3CmJzSDMzNFZhekRSaGxwZ1VKUC9mRmcraC83ejMrWm84V0hEUnV3UHRHQWEvY3pqNlpFcWh5Z01EVG1NWXlNTk8KZEFnSHZFR21nazVLcUgrdHV2R1BaaUxPNEVaZDNxQUhNeGNsZEVrT3pOK1MwekExMm1Bc2ZoaGVpZEtQOTIyRApGeTFlYzBMOURYWjFBZ01CQUFHalV6QlJNQjBHQTFVZERnUVdCQlFmTHByTmhveE1XSWpSOFI5QngwcVlnY2RrCmpUQWZCZ05WSFNNRUdEQVdnQlFmTHByTmhveE1XSWpSOFI5QngwcVlnY2RralRBUEJnTlZIUk1CQWY4RUJUQUQKQVFIL01BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQkpkSGJFN2NIbWZkd0hab2tlaUplQ0EwZGphd3V4RG5oUQpRQlRFMStvWEFUWjVScTVPTStjZWxKdFl2NW1OQjdWR0hTbi9WdGY4OXJuTVlOOTlHVmk3ZHZWOVdwMDJWakhwCnRBcVc0THRKV3ZUWlVPTmN6b2RueVNKUlpCNm02bEJtMkRjbGdjaVhMMEljcVQ5ZDdBVWNTQmFYcDlENktZblAKRTVqUUNqd1pEOFBSM0ZFMm5ZNEFXaUNXWFZjV210RnRoNGRKd1NXMDBzOHlNN01YMzJoYzdUU3hSTWM0TnJwSgpZQ1pEMHgwWnN1WFY5UFdzNWJaSDdPRU0yRTFsU1NGdlVLZEhKcS9hRldFMXFCSytHYWhCTWdZS1JyTHV6M3dECk9vTWhIeG4xRmUxL1BJNzV6YjJFUkp5
L1NTMUFUcS9uYXlBM1o5QVJ5dGxZeDJnUnQvQVAKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= + tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMDlGSUZmLzNaWURwMm5tdGxoMk5SbWp1WXptR2VxL1JVT1M2V2JYekxyeko2b2hECnh6Vjc1QWF1eWRZclB6M0x2L3VkNUhLWGJjWW9EYkZqNlkrcmo0elNxbE1QNG9YZDhHK1hzTit4bzhSNkxpSmMKOHBzNUZCa2syd2Q3eitHYktsdi85dVBYTTlvTGhJK0xJZjFyMHdKeDRPUis5QldLSzk2dk9VNThhVlZWT05xTwprbFhUMk1FTzdIbHNRQ3BkdFlna1hoMXk4RzdCOTkrRldzdzBZWmFZRkNULzN4WVBvZis4OS9tYVBGaHcwYnNECjdSZ0d2M000K21SS29jb0RBMDVqR01qRFRuUUlCN3hCcG9KT1NxaC9yYnJ4ajJZaXp1QkdYZDZnQnpNWEpYUkoKRHN6Zmt0TXdOZHBnTEg0WVhvblNqL2R0Z3hjdFhuTkMvUTEyZFFJREFRQUJBb0lCQURhWWJyZ2M3YXRmK3VheApEaWp2SFFiVTdQenVTdGM4a2ZzRHVYUitEVnd5bE9pNmpwMitEMXpLekNxQjVVdTdwZFNxQ2h4ajNOd1NneWhrClhKaEt5N0dJWHBSQUxJdjZiU1lYM1VWZG91L1BLSjdUaEptVG9MYXBkSEp3RDEyWmpPRHlMWnQ1Um5LNjlOVUsKR3BaOE4xcC8rdEk0a3ZCZXpPcFp6MWc1L3A4M1F6MTVhK3hmZ2lUWHRqYkI2U2pKUlk3QWF5aWZhc3hxb1RFRApuaFd5Z0I3aC8vUXhXZXpTTm1XdmhrZWJYQm10QmtTVldvUFRRUERBOTZBRTdVZXd5VTNEbXlsTDdjNUVaTEFsCkpHMnhCcUhyU1NwaWlEd2J0Z0s5dmJXOVF0bzBSMHcvanh6ZWZOT1cwUHlOOTZwNHdBcFdCMjNCYTk2R2d2TUMKS0gycDRkRUNnWUVBOUdYUUgxTFltTjYwcXA5SzR6bXY0WjVaZmVTekdwZ0dMd2ZoRjVHclZJQ0F1elQ2QTE3SgpqVVRoWnlEZE9iUW9VeURrT0FUT2QybFpJUHB1YnNwdGZ6Z1M1bmNlYUxvSk1NYjhxYS9qYjc1SVJkdEc3R3N4Cjl6UUpNQXJJbEFSOTZWeFJYejdJTittMDMzQTZiRXVMcUxvYW5Mb0ZnaTZqM3p4NUVwRm90ck1DZ1lFQTNkK0UKdnRDK2lvRml1c1QzUUMzbnp2VlRlb0k2R1pzUnZiMnFxa1BRdFBCajVoMEtwTUFoVEhsSElpVHNNRE9xZ21yagppbHZkR1MvaDFROVpmNy9ldGwrT2hkVUdWQjdtMDAzdm1LWGFKclBmRHNHYTlaVWo4OXVHb3hmUW9LanhLNWhxCk5tNG5EOFpuU3pSVll0NDJlWW1NVjRzL2JacWMzbE5kdkVaRDhqY0NnWUE5V2lHNCswOHNjUnZoaVVOL2IwZmIKMTZpWGxnWHdNeUc2Ukx3WThwU1VEZjVEQUxXU2l3VUYxYmpQN3N3YVpFT0xPc0tQM1lVSExRY1c1RWM4d014awpGMnVITjNnR3lrenNWY2V2d1Z2Uy9XMmZPOEMrTU5yR04rWG1qWTUwdWZ2eHpSOFFUZTV0T3RvUkRWZGRRRW02Ci9aMFlvd29tK0JaalFBY1V4alFIU1FLQmdRRFdlckUvS0ZsWldQUVE2a0M5aU9MU1hMTWk5V3Ftd0JHcFl3VHMKN1B0L1BmYkVSd1MzK0liMy96RDFYODMyVnF1WXdTMU8zYmpoRlRseEZoS0ZmUHdWUGxCdkxWdWR5L1dGQkl6OQoraTNsUmZIMXVOQk1ZS3pObWtRUHV3RFJuaDdzN3J5VisydkZReDB0Uk56WjQwZXp1M1N3V0FxcnNFKytWOGFBCkwwaVZod0tCZ0d3eHA5SmlHVEgwOXZkaDEzR285cjJ3ZGRkdjlERElkK2hmTUlvci9RaUM4bDVqRG9zRmY3d0sKYVVmcGZ3NzRkaFlsaG42RWlneGl5UU5ObkFzby9ZbjhVeUtNSG96VUN0L3ZuTk1IZXFmUmxrN3U3MUVYdllKZwpoamdPTHVrem53N1FXdG85V2ZTb1QwdXY4ZnJxWUxtSFk2Zk9OSzQ0eVE2bXlJeTU4c21uCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg== +kind: Secret +metadata: + name: postgresql-tls-secret + namespace: "{{ .Values.infrastructure.namespace }}" +type: kubernetes.io/tls \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/templates/secret.yaml b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/templates/secret.yaml new file mode 100644 index 000000000..1c2bf8291 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/templates/secret.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: Secret +type: Opaque +metadata: + name: postgres + namespace: "{{ .Values.infrastructure.namespace }}" +data: + postgres_default_user: {{ .Values.infrastructure.username | b64enc }} + postgres_password: {{ .Values.infrastructure.password | b64enc }} \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/templates/service.yaml b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/templates/service.yaml new file mode 100644 index 000000000..cdf72b7dc --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/templates/service.yaml @@ 
-0,0 +1,14 @@ +--- +# This is the internal load balancer, routing traffic to the PostgreSQL Pod +apiVersion: v1 +kind: Service +metadata: + name: "{{ .Values.infrastructure.appName }}-external" + namespace: {{ .Values.infrastructure.namespace }} +spec: + type: "{{ .Values.service.type }}" + ports: + - port: {{ .Values.service.port }} + protocol: {{ .Values.service.protocol }} + selector: + app: {{ .Values.application.labelValue }} diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/values.yaml b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/values.yaml new file mode 100644 index 000000000..eb358ca3e --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/PostgreSQL116/values.yaml @@ -0,0 +1,34 @@ + +replicaCount: 1 + +application: + labelValue: postgres + +infrastructure: + namespace: postgresql + appName: postgres + username: postgres + password: "OCPHack8" + dataVolume: "/var/lib/postgresql" + dataPath: "/var/lib/postgresql/data" + +image: + name: postgres + pullPolicy: IfNotPresent + tag: "11.6" + +service: + type: LoadBalancer + port: 5432 + protocol: TCP + +resources: + limits: + cpu: 1000m + memory: 4096Mi + requests: + cpu: 750m + memory: 2048Mi + volume: + size: 5Gi + storageClass: managed-premium \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/README.md b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/README.md new file mode 100644 index 000000000..b8f36b72f --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/EastUS-AKS/HelmCharts/README.md @@ -0,0 +1,269 @@ +**[Home](../../../README.md)** - [Prerequisites >](../../../00-prereqs.md) + +## Setting up Kubernetes + +NOTE: YOU DO NOT NEED TO RUN THROUGH THE STEPS IN THIS FILE IF YOU ALREADY PROVISIONED AKS. + +The steps to deploy the AKS cluster, scale it up and scale it down are available in the README file for that section: [README](../ARM-Templates/README.md). + +You should not have to provision again, since you have already provisioned AKS using the create-cluster.sh script in [Prerequisites >](../../../00-prereqs.md). + +## PostgreSQL Setup on Kubernetes + +These instructions provide guidance on how to set up PostgreSQL 11 on AKS. + +This requires Helm 3 and the latest version of the Azure CLI to be installed. These are pre-installed in Azure Cloud Shell, but you will need to install or download them if you are using a different environment. 
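Before installing the charts, it can help to quickly confirm that the required tools are available and that kubectl is pointed at the cluster you provisioned. A minimal sketch, assuming you have already run create-cluster.sh and fetched credentials with `az aks get-credentials`:

```bash

# Confirm tool versions (Helm 3.x and a recent Azure CLI)
helm version --short
az version

# Confirm kubectl is targeting the AKS cluster you provisioned
kubectl config current-context
kubectl get nodes

```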
+ +## Installing the PostgreSQL Database + +```bash + +# Navigate to the Helm Charts +#cd Resources/HelmCharts + +# Install the Kubernetes Resources +helm upgrade --install wth-postgresql ./PostgreSQL116 --set infrastructure.password=OCPHack8 + +``` + +## Checking the Service IP Addresses and Ports + +```bash + +kubectl -n postgresql get svc + +``` +**Important: you will need to copy the postgres-external Cluster-IP value to use for the dataSourceURL in later steps** + +## Checking the Pod for Postgres + +```bash + +kubectl -n postgresql get pods + +``` +Wait a few minutes until the pod status shows as Running. + +## Getting into the Container + +```bash + +# Use this to connect to the database server SQL prompt + +kubectl -n postgresql exec deploy/postgres -it -- /usr/bin/psql -U postgres + +``` +Run the following commands to check the PostgreSQL version and create the WTH database (warning: application deployment will fail if you don't do this) + +```sql + +--Check the DB Version +SELECT version(); + +--Create the wth database +CREATE DATABASE wth; + +--List databases. Notice that there is a database called wth +\l + +-- Create user contosoapp that would own the application schema + + CREATE ROLE CONTOSOAPP WITH LOGIN NOSUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION PASSWORD 'OCPHack8'; + +-- List the tables in wth +\dt + +-- Exit out of the Postgres SQL prompt +exit + +``` + +## Uninstalling PostgreSQL from Kubernetes (only if you need to clean up and try the Helm deployment again) + +Use this to uninstall the PostgreSQL 11 instance from the Kubernetes cluster + +```bash + +# Uninstall the database server. To install again, run helm upgrade +helm uninstall wth-postgresql + +``` + +## Installing MySQL + +```bash + +# Install the Kubernetes Resources +helm upgrade --install wth-mysql ./MySQL57 --set infrastructure.password=OCPHack8 + +``` + +## Checking the Service IP Addresses and Ports + +```bash + +kubectl -n mysql get svc + +``` +**Important: you will need to copy the mysql-external Cluster-IP value to use for the dataSourceURL in later steps** + +## Checking the Pod for MySQL + +```bash + +kubectl -n mysql get pods + +``` + +## Getting into the Container + +```bash + +# Use this to connect to the database server + +kubectl -n mysql exec deploy/mysql -it -- /usr/bin/mysql -u root -pOCPHack8 + +``` + +Run the following commands to check the MySQL version and create the WTH database (warning: application deployment will fail if you don't do this) + +```sql + +-- Check the mysql DB Version +SELECT version(); + +-- List databases +SHOW DATABASES; + +--Create wth database +CREATE DATABASE wth; + +-- Create a user contosoapp that would own the application data for migration + +CREATE USER if not exists 'contosoapp' identified by 'OCPHack8' ; + +GRANT SUPER on *.* to contosoapp identified by 'OCPHack8'; -- may not be needed + +GRANT ALL PRIVILEGES ON wth.* to contosoapp ; + +-- Show tables in wth database + +SHOW TABLES; + +-- Exit out of the MySQL prompt +exit + +``` + +## Uninstalling MySQL from Kubernetes (only if you need to clean up and try the Helm deployment again) + +Use this to uninstall the MySQL instance from the Kubernetes cluster + +```bash + +# Uninstall the database server. To install again, re-run the helm upgrade command previously executed +helm uninstall wth-mysql + +``` + +## Deploying the Web Application + +First we navigate to the Helm charts directory + +```bash + +cd Resources/HelmCharts + + +``` + +We can deploy in two ways. 
As part of this hack, you will need to deploy both ways: + +* Backed by MySQL Database +* Backed by PostgreSQL Database + +For the MySQL database setup, the developer/operator can make changes to the values-mysql.yaml file. + +For the PostgreSQL database setup, the developer/operator can make changes to the values-postgresql.yaml file. + +In the yaml files we can specify the database type (appConfig.databaseType) as "mysql" or "postgres", and then we can set the JDBC URL, username and password under the appConfig object. + +In the globalConfig object we can change the merchant ID, public keys and other values as needed, but you can generally leave those alone as they apply to both the MySQL and PostgreSQL deployment options. + +```yaml +appConfig: + databaseType: "databaseType goes here" # mysql or postgres + dataSourceURL: "jdbc url goes here" # database is either mysql or postgres - jdbc:database://ip-address/wth + dataSourceUser: "user name goes here" # database username mentioned in values-postgresql or values-mysql yaml - contosoapp + dataSourcePassword: "password goes here" # your database password goes here - OCPHack8 + webPort: 8083 # the port the app listens on + webContext: "pizzeria" # the application context http://hostname:port/webContext +``` + +The developer or operator can specify the '--values'/'-f' flag multiple times. +When more than one values file is specified, priority will be given to the last (right-most) file specified in the sequence. +For example, if both values.yaml and override.yaml contained a key called 'namespace', the value set in override.yaml would take precedence. + +The command below allows us to use settings from the values file and then override certain values in the database-specific values file. + +```bash + +helm upgrade --install release-name ./HelmChartFolder -f ./HelmChartFolder/values.yaml -f ./HelmChartFolder/override.yaml + +``` + +To deploy the app backed by MySQL, run the following command after you have edited the values file to match your desired database type: + +```bash + +helm upgrade --install mysql-contosopizza ./ContosoPizza -f ./ContosoPizza/values.yaml -f ./ContosoPizza/values-mysql.yaml + +``` + +To deploy the app backed by PostgreSQL, run the following command after you have edited the values file to match your desired database type: + +```bash + +helm upgrade --install postgres-contosopizza ./ContosoPizza -f ./ContosoPizza/values.yaml -f ./ContosoPizza/values-postgresql.yaml + +``` + +If you wish to uninstall the app, you can use one of the following commands: + +```bash + +# Use this to uninstall, if you are using MySQL as the database +helm uninstall mysql-contosopizza + +# Use this to uninstall, if you are using PostgreSQL as the database +helm uninstall postgres-contosopizza + +``` + + +After the apps have booted up, you can find out their service addresses and ports as well as their status as follows: + +```bash + +# get service ports and IP addresses +kubectl -n {infrastructure.namespace goes here} get svc + +# get service pods running the app +kubectl -n {infrastructure.namespace goes here} get pods + +# view the last 5000 lines of the application logs +kubectl -n {infrastructure.namespace goes here} logs deploy/contosopizza --tail=5000 + +# example for ports and services +kubectl -n {infrastructure.namespace goes here} get svc + +``` + +Verify that the Contoso Pizza application is running on AKS: + +```bash + +# Insert the external IP address from the 'get svc' command above + +http://{external_ip_contoso_app}:8081/pizzeria/ +``` diff --git 
a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/# Configure the Microsoft Azure Provider.groovy b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/# Configure the Microsoft Azure Provider.groovy new file mode 100644 index 000000000..2f6fe53df --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/# Configure the Microsoft Azure Provider.groovy @@ -0,0 +1,115 @@ +# Configure the Microsoft Azure Provider. +provider "azurerm" { + version = "=1.31.0" +} + +# Create a resource group +resource "azurerm_resource_group" "rg" { + name = "myTFResourceGroup" + location = "westus2" +} +# Create virtual network +resource "azurerm_virtual_network" "vnet" { + name = "myTFVnet" + address_space = ["10.0.0.0/16"] + location = "westus2" + resource_group_name = "${azurerm_resource_group.rg.name}" +} + +# Create subnet +resource "azurerm_subnet" "subnet" { + name = "myTFSubnet" + resource_group_name = "${azurerm_resource_group.rg.name}" + virtual_network_name = "${azurerm_virtual_network.vnet.name}" + address_prefix = "10.0.1.0/24" +} + +# Create public IP +resource "azurerm_public_ip" "publicip" { + name = "myTFPublicIP" + location = "westus2" + resource_group_name = "${azurerm_resource_group.rg.name}" + public_ip_address_allocation = "dynamic" +} + +# Create Network Security Group and rule +resource "azurerm_network_security_group" "nsg" { + name = "myTFNSG" + location = "westus2" + resource_group_name = "${azurerm_resource_group.rg.name}" + + security_rule { + name = "SSH" + priority = 1001 + direction = "Inbound" + access = "Allow" + protocol = "Tcp" + source_port_range = "*" + destination_port_range = "22" + source_address_prefix = "*" + destination_address_prefix = "*" + } +} +# Create network interface +resource "azurerm_network_interface" "nic" { + name = "myNIC" + location = "westus2" + resource_group_name = "${azurerm_resource_group.rg.name}" + network_security_group_id = "${azurerm_network_security_group.nsg.id}" + + ip_configuration { + name = "myNICConfg" + subnet_id = "${azurerm_subnet.subnet.id}" + private_ip_address_allocation = "dynamic" + public_ip_address_id = "${azurerm_public_ip.publicip.id}" + } +} + +# Create a Linux virtual machine +resource "azurerm_virtual_machine" "vm" { + name = "myTFVM" + location = "westus2" + resource_group_name = "${azurerm_resource_group.rg.name}" + network_security_group_id = "${azurerm_network_security_group.nsg.id}" + + ip_configuration { + name = "myNICConfg" + subnet_id = "${azurerm_subnet.subnet.id}" + private_ip_address_allocation = "dynamic" + public_ip_address_id = "${azurerm_public_ip.publicip.id}" + } +} + +# Create a Linux virtual machine +resource "azurerm_virtual_machine" "vm" { + name = "myTFVM" + location = "westus2" + resource_group_name = "${azurerm_resource_group.rg.name}" + network_interface_ids = ["${azurerm_network_interface.nic.id}"] + vm_size = "Standard_DS1_v2" + + storage_os_disk { + name = "myOsDisk" + caching = "ReadWrite" + create_option = "FromImage" + managed_disk_type = "Premium_LRS" + } + + storage_image_reference { + publisher = "Canonical" + offer = "UbuntuServer" + sku = "16.04.0-LTS" + version = "latest" + } + + os_profile { + computer_name = "myTFVM" + admin_username = "plankton" + admin_password = "Password1234!" 
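    # NOTE: "plankton" / "Password1234!" are sample values for this hack only.
    # In a real deployment you would typically supply these as Terraform variables instead, e.g.:
    #   admin_username = var.admin_username
    #   admin_password = var.admin_password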
+ } + + os_profile_linux_config { + disable_password_authentication = false + } + +} diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/ARM-Templates/KubernetesCluster/aks-cluster.json b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/ARM-Templates/KubernetesCluster/aks-cluster.json new file mode 100644 index 000000000..e6a1b3b4d --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/ARM-Templates/KubernetesCluster/aks-cluster.json @@ -0,0 +1,398 @@ +{ + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "clusterName": { + "type": "string", + "metadata": { + "description": "The name of the Managed Cluster resource." + } + }, + "agentPoolNodeCount": { + "type": "int", + "metadata": { + "description": "Number of virtual machines in the agent pool" + } + }, + "agentPoolNodeType": { + "type": "string", + "metadata": { + "description": "SKU or Type of virtual machines in the agent pool" + } + }, + "systemPoolNodeCount": { + "type": "int", + "metadata": { + "description": "Number of virtual machines in the system pool" + } + }, + "systemPoolNodeType": { + "type": "string", + "metadata": { + "description": "SKU or Type of virtual machines in the system pool" + } + }, + "resourceGroupName": { + "type": "string", + "metadata": { + "description": "The name of the Resource Group" + } + }, + "virtualNetworkName": { + "type": "string", + "metadata": { + "description": "The name of the Virtual Network" + } + }, + "subnetName": { + "type": "string", + "metadata": { + "description": "The name of the Subnet within the Virtual Network" + } + }, + "location": { + "type": "string", + "metadata": { + "description": "The geographical location of AKS resource." + } + }, + "dnsPrefix": { + "type": "string", + "metadata": { + "description": "Optional DNS prefix to use with hosted Kubernetes API server FQDN." + } + }, + "addressSpaces": { + "type": "array" + }, + "ddosProtectionPlanEnabled": { + "type": "bool" + }, + "osDiskSizeGB": { + "type": "int", + "defaultValue": 0, + "metadata": { + "description": "Disk size (in GiB) to provision for each of the agent pool nodes. This value ranges from 0 to 1023. Specifying 0 will apply the default disk size for that agentVMSize." + }, + "minValue": 0, + "maxValue": 1023 + }, + "kubernetesVersion": { + "type": "string", + "defaultValue": "1.25.5", + "metadata": { + "description": "The version of Kubernetes." + } + }, + "networkPlugin": { + "type": "string", + "allowedValues": [ + "azure", + "kubenet" + ], + "metadata": { + "description": "Network plugin used for building Kubernetes network." + } + }, + "maxPods": { + "type": "int", + "defaultValue": 64, + "metadata": { + "description": "Maximum number of pods that can run on a node." + } + }, + "enableRBAC": { + "type": "bool", + "defaultValue": true, + "metadata": { + "description": "Boolean flag to turn on and off of RBAC." + } + }, + "enablePrivateCluster": { + "type": "bool", + "defaultValue": false, + "metadata": { + "description": "Enable private network access to the Kubernetes cluster." + } + }, + "enableHttpApplicationRouting": { + "type": "bool", + "defaultValue": true, + "metadata": { + "description": "Boolean flag to turn on and off http application routing." + } + }, + "enableAzurePolicy": { + "type": "bool", + "defaultValue": false, + "metadata": { + "description": "Boolean flag to turn on and off Azure Policy addon." 
+ } + }, + "enableOmsAgent": { + "type": "bool", + "defaultValue": true, + "metadata": { + "description": "Boolean flag to turn on and off omsagent addon." + } + }, + "workspaceRegion": { + "type": "string", + "defaultValue": "WestUS", + "metadata": { + "description": "Specify the region for your OMS workspace." + } + }, + "workspaceName": { + "type": "string", + "metadata": { + "description": "Specify the prefix of the OMS workspace." + } + }, + "omsSku": { + "type": "string", + "defaultValue": "standalone", + "allowedValues": [ + "free", + "standalone", + "pernode" + ], + "metadata": { + "description": "Select the SKU for your workspace." + } + }, + "serviceCidr": { + "type": "string", + "metadata": { + "description": "A CIDR notation IP range from which to assign service cluster IPs." + } + }, + "subnetAddressSpace": { + "type": "string", + "metadata": { + "description": "A CIDR notation IP range from which to assign service cluster IPs." + } + }, + "dnsServiceIP": { + "type": "string", + "metadata": { + "description": "Containers DNS server IP address." + } + }, + "dockerBridgeCidr": { + "type": "string", + "metadata": { + "description": "A CIDR notation IP for Docker bridge." + } + } + }, + "variables": { + "deploymentSuffix": "MDP2020", + "subscriptionId" : "[subscription().id]", + "workspaceName" : "[concat(parameters('workspaceName'), uniqueString(variables('subscriptionId')))]", + "omsWorkspaceId": "[concat(variables('subscriptionId'), '/resourceGroups/', parameters('resourceGroupName'), '/providers/Microsoft.OperationalInsights/workspaces/', variables('workspaceName'))]", + "clusterID": "[concat(variables('subscriptionId'), '/resourceGroups/', parameters('resourceGroupName'), '/providers/Microsoft.ContainerService/managedClusters/', parameters('clusterName'))]", + "vnetSubnetID": "[concat(variables('subscriptionId'), '/resourceGroups/', parameters('resourceGroupName'), '/providers/Microsoft.Network/virtualNetworks/', parameters('virtualNetworkName'), '/subnets/', parameters('subnetName'))]", + "solutionDeploymentId": "[concat('SolutionDeployment-', variables('deploymentSuffix'))]", + "workspaceDeploymentId": "[concat('WorkspaceDeployment-', variables('deploymentSuffix'))]", + "clusterMonitoringMetricId": "[concat('ClusterMonitoringMetric-', variables('deploymentSuffix'))]", + "clusterSubnetRoleAssignmentId": "[concat('ClusterSubnetRoleAssignment-', variables('deploymentSuffix'))]" + }, + "resources": [ + { + "name": "[parameters('virtualNetworkName')]", + "type": "Microsoft.Network/VirtualNetworks", + "apiVersion": "2019-09-01", + "location": "[parameters('location')]", + "dependsOn": [], + "tags": { + "cluster": "Kubernetes" + }, + "properties": { + "addressSpace": { + "addressPrefixes": "[parameters('addressSpaces')]" + }, + "subnets": [ + { + "name": "[parameters('subnetName')]", + "properties": { + "addressPrefix": "[parameters('subnetAddressSpace')]" + } + } + ], + "enableDdosProtection": "[parameters('ddosProtectionPlanEnabled')]" + } + }, + { + "apiVersion": "2020-03-01", + "dependsOn": [ + "[concat('Microsoft.Resources/deployments/', variables('workspaceDeploymentId'))]", + "[resourceId('Microsoft.Network/VirtualNetworks', parameters('virtualNetworkName'))]" + ], + "type": "Microsoft.ContainerService/managedClusters", + "location": "[parameters('location')]", + "name": "[parameters('clusterName')]", + "properties": { + "kubernetesVersion": "[parameters('kubernetesVersion')]", + "enableRBAC": "[parameters('enableRBAC')]", + "dnsPrefix": "[parameters('dnsPrefix')]", + 
"agentPoolProfiles": [ + { + "name": "systempool", + "osDiskSizeGB": "[parameters('osDiskSizeGB')]", + "count": "[parameters('systemPoolNodeCount')]", + "vmSize": "[parameters('systemPoolNodeType')]", + "osType": "Linux", + "storageProfile": "ManagedDisks", + "type": "VirtualMachineScaleSets", + "mode": "System", + "vnetSubnetID": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('virtualNetworkName'), parameters('subnetName'))]", + "maxPods": "[parameters('maxPods')]" + }, + { + "name": "userpool", + "osDiskSizeGB": "[parameters('osDiskSizeGB')]", + "count": "[parameters('agentPoolNodeCount')]", + "vmSize": "[parameters('agentPoolNodeType')]", + "osType": "Linux", + "storageProfile": "ManagedDisks", + "type": "VirtualMachineScaleSets", + "mode": "User", + "vnetSubnetID": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('virtualNetworkName'), parameters('subnetName'))]", + "maxPods": "[parameters('maxPods')]" + } + ], + "networkProfile": { + "loadBalancerSku": "standard", + "networkPlugin": "[parameters('networkPlugin')]", + "serviceCidr": "[parameters('serviceCidr')]", + "dnsServiceIP": "[parameters('dnsServiceIP')]", + "dockerBridgeCidr": "[parameters('dockerBridgeCidr')]" + }, + "apiServerAccessProfile": { + "enablePrivateCluster": "[parameters('enablePrivateCluster')]" + }, + "addonProfiles": { + "httpApplicationRouting": { + "enabled": "[parameters('enableHttpApplicationRouting')]" + }, + "azurePolicy": { + "enabled": "[parameters('enableAzurePolicy')]" + }, + "omsagent": { + "enabled": "[parameters('enableOmsAgent')]", + "config": { + "logAnalyticsWorkspaceResourceID": "[variables('omsWorkspaceId')]" + } + } + } + }, + "tags": {}, + "identity": { + "type": "SystemAssigned" + } + }, + { + "type": "Microsoft.Resources/deployments", + "name": "[variables('solutionDeploymentId')]", + "apiVersion": "2017-05-10", + "resourceGroup": "[split(variables('omsWorkspaceId'),'/')[4]]", + "subscriptionId": "[split(variables('omsWorkspaceId'),'/')[2]]", + "properties": { + "mode": "Incremental", + "template": { + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": {}, + "variables": {}, + "resources": [ + { + "apiVersion": "2015-11-01-preview", + "type": "Microsoft.OperationsManagement/solutions", + "location": "[parameters('workspaceRegion')]", + "name": "[concat('ContainerInsights', '(', split(variables('omsWorkspaceId'),'/')[8], ')')]", + "properties": { + "workspaceResourceId": "[variables('omsWorkspaceId')]" + }, + "plan": { + "name": "[concat('ContainerInsights', '(', split(variables('omsWorkspaceId'),'/')[8], ')')]", + "product": "[concat('OMSGallery/', 'ContainerInsights')]", + "promotionCode": "", + "publisher": "Microsoft" + } + } + ] + } + }, + "dependsOn": [ + "[concat('Microsoft.Resources/deployments/', variables('workspaceDeploymentId'))]" + ] + }, + { + "type": "Microsoft.Resources/deployments", + "name": "[variables('workspaceDeploymentId')]", + "apiVersion": "2017-05-10", + "resourceGroup": "[split(variables('omsWorkspaceId'),'/')[4]]", + "subscriptionId": "[split(variables('omsWorkspaceId'),'/')[2]]", + "properties": { + "mode": "Incremental", + "template": { + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": {}, + "variables": {}, + "resources": [ + { + "apiVersion": "2015-11-01-preview", + "type": "Microsoft.OperationalInsights/workspaces", + "location": 
"[parameters('workspaceRegion')]", + "name": "[variables('workspaceName')]", + "properties": { + "sku": { + "name": "[parameters('omsSku')]" + } + } + } + ] + } + } + }, + { + "type": "Microsoft.Resources/deployments", + "name": "[variables('clusterMonitoringMetricId')]", + "apiVersion": "2017-05-10", + "resourceGroup": "[parameters('resourceGroupName')]", + "subscriptionId": "[subscription().subscriptionId]", + "properties": { + "mode": "Incremental", + "template": { + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": {}, + "variables": {}, + "resources": [ + { + "type": "Microsoft.ContainerService/managedClusters/providers/roleAssignments", + "apiVersion": "2018-01-01-preview", + "name": "[concat(parameters('clusterName'), '/Microsoft.Authorization/', guid(subscription().subscriptionId))]", + "properties": { + "roleDefinitionId": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Authorization/roleDefinitions/', '3913510d-42f4-4e42-8a64-420c390055eb')]", + "principalId": "[reference(parameters('clusterName')).addonProfiles.omsagent.identity.objectId]", + "scope": "[variables('clusterID')]" + } + } + ] + } + }, + "dependsOn": [ + "[variables('clusterID')]" + ] + } + ], + "outputs": { + "controlPlaneFQDN": { + "type": "string", + "value": "[reference(concat('Microsoft.ContainerService/managedClusters/', parameters('clusterName'))).fqdn]" + } + } + } \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/ARM-Templates/KubernetesCluster/create-cluster.sh b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/ARM-Templates/KubernetesCluster/create-cluster.sh new file mode 100644 index 000000000..d3f8562b2 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/ARM-Templates/KubernetesCluster/create-cluster.sh @@ -0,0 +1,32 @@ +# Add line to set login to az +#az login +# Set your azure subscription +#az account set -s "" +# Defines the ARM template file location +export templateFile="aks-cluster.json" + +# Defines the parameters that will be used in the ARM template +export parameterFile="parameters.json" + +# Defines the name of the Resource Group our resources are deployed into +export resourceGroupName="PizzaAppWest" + +export clusterName="pizzaappwest" + +export location="westus" + +# Creates the resources group if it does not already exist +az group create --name $resourceGroupName --location $location + +# Creates the Kubernetes cluster and the associated resources and dependencies for the cluster +az deployment group create --name dataProductionDeployment --resource-group $resourceGroupName --template-file $templateFile --parameters $parameterFile + +# Install the Kubectl CLI. 
This will be used to interact with the remote Kubernetes cluster +#sudo az aks install-cli + +# Get the Credentials to Access the Cluster with Kubectl +az aks get-credentials --name $clusterName --resource-group $resourceGroupName + +# List the node pools - expect two aks nodepools + +az aks nodepool list --resource-group $resourceGroupName --cluster-name $clusterName -o table diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/ARM-Templates/KubernetesCluster/parameters.json b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/ARM-Templates/KubernetesCluster/parameters.json new file mode 100644 index 000000000..43adb70d7 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/ARM-Templates/KubernetesCluster/parameters.json @@ -0,0 +1,83 @@ +{ + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "resourceGroupName": { + "value": "PizzaAppWest" + }, + "virtualNetworkName": { + "value": "PizzaAppWestVNet" + }, + "subnetName": { + "value": "PizzaAppWestSNet" + }, + "clusterName": { + "value": "PizzaAppWest" + }, + "maxPods": { + "value": 64 + }, + "systemPoolNodeCount": { + "value": 1 + }, + "systemPoolNodeType": { + "value": "Standard_D2s_v4" + }, + "agentPoolNodeCount": { + "value": 1 + }, + "agentPoolNodeType": { + "value": "Standard_D2s_v4" + }, + "location": { + "value": "westus" + }, + "dnsPrefix": { + "value": "pizzaappwest-dns" + }, + "kubernetesVersion": { + "value": "1.25.5" + }, + "networkPlugin": { + "value": "azure" + }, + "enableRBAC": { + "value": true + }, + "enablePrivateCluster": { + "value": false + }, + "enableHttpApplicationRouting": { + "value": false + }, + "enableAzurePolicy": { + "value": false + }, + "serviceCidr": { + "value": "10.71.0.0/16" + }, + "dnsServiceIP": { + "value": "10.71.0.3" + }, + "dockerBridgeCidr": { + "value": "172.17.0.1/16" + }, + "addressSpaces": { + "value": [ + "10.250.0.0/16" + ] + }, + "subnetAddressSpace": { + "value": "10.250.0.0/20" + }, + "ddosProtectionPlanEnabled": { + "value": false + }, + "workspaceName": { + "value": "PizzaAppWest" + }, + "workspaceRegion": { + "value": "westus" + } + } + } diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/Chart.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/Chart.yaml new file mode 100644 index 000000000..0db71b1b3 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/Chart.yaml @@ -0,0 +1,11 @@ +apiVersion: v2 + +name: Contoso Pizza + +description: A Helm chart for deploying Contoso Pizza Web Application + +type: application + +version: 1.0 + +appVersion: 15.08 \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/deploy-pizza.sh b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/deploy-pizza.sh new file mode 100644 index 000000000..2aa39fdcd --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/deploy-pizza.sh @@ -0,0 +1,74 @@ +status="Running" + +# Install the Kubernetes Resources +helm upgrade --install wth-mysql ../MySQL57 --set infrastructure.password=OCPHack8 + +# Install the Kubernetes Resources Postgres (un comment if you want Postgress vs MySQL) +# helm upgrade --install wth-postgresql ../PostgreSQL116 --set infrastructure.password=OCPHack8 +# +# for ((i = 0 ; i < 30 ; i++)); do +# pgStatus=$(kubectl -n postgresql get pods --no-headers -o 
custom-columns=":status.phase") +# +# +# if [ "$pgStatus" != "$status" ]; then +# sleep 10 +# fi +# done + +# Get the postgres pod name +# pgPodName=$(kubectl -n postgresql get pods --no-headers -o custom-columns=":metadata.name") + +#Copy pg.sql to the postgresql pod +# kubectl -n postgresql cp ./pg.sql $pgPodName:/tmp/pg.sql + +# Use this to connect to the database server +# kubectl -n postgresql exec deploy/postgres -it -- /usr/bin/psql -U postgres -f /tmp/pg.sql + +# Install the Kubernettes Resources MySQL +for ((i = 0 ; i < 30 ; i++)); do + mysqlStatus=$(kubectl -n mysql get pods --no-headers -o custom-columns=":status.phase") + + if [ "$mysqlStatus" != "$status" ]; then + sleep 30 + fi +done + +# Use this to connect to the database server + +kubectl -n mysql exec deploy/mysql -it -- /usr/bin/mysql -u root -pOCPHack8 <./mysql.sql + +# postgresClusterIP=$(kubectl -n postgresql get svc -o json |jq .items[0].spec.clusterIP |tr -d '"') + +mysqlClusterIP=$(kubectl -n mysql get svc -o json |jq .items[0].spec.clusterIP |tr -d '"') + +# sed "s/XXX.XXX.XXX.XXX/$postgresClusterIP/" ./values-postgresql-orig.yaml >temp_postgresql.yaml && mv temp_postgresql.yaml ./values-postgresql.yaml + +sed "s/XXX.XXX.XXX.XXX/$mysqlClusterIP/" ./values-mysql-orig.yaml >temp_mysql.yaml && mv temp_mysql.yaml ./values-mysql.yaml + +helm upgrade --install mysql-contosopizza . -f ./values.yaml -f ./values-mysql.yaml + +# helm upgrade --install postgres-contosopizza . -f ./values.yaml -f ./values-postgresql.yaml + +for ((i = 0 ; i < 30 ; i++)); do + appStatus=$(kubectl -n contosoappmysql get svc -o json |jq .items[0].status.loadBalancer.ingress[0].ip |tr -d '"') + + if [ "$appStatus" == "null" ]; then + sleep 30 + fi +done + +for ((i = 0 ; i < 30 ; i++)); do + appStatus=$(kubectl -n contosoappmysql get svc -o json |jq .items[0].status.loadBalancer.ingress[0].ip |tr -d '"') + + if [ "$appStatus" == "null" ]; then + sleep 30 + fi +done + +# postgresAppIP=$(kubectl -n contosoapppostgres get svc -o json |jq .items[0].status.loadBalancer.ingress[0].ip|tr -d '"') + +mysqlAppIP=$(kubectl -n contosoappmysql get svc -o json |jq .items[0].status.loadBalancer.ingress[0].ip |tr -d '"') + +echo "Pizzeria app on MySQL is ready at http://$mysqlAppIP:8081/pizzeria" + +# echo "Pizzeria app on PostgreSQL is ready at http://$postgresAppIP:8082/pizzeria" diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/modify_nsg_for_postgres_mysql.sh b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/modify_nsg_for_postgres_mysql.sh new file mode 100644 index 000000000..1ac93daf5 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/modify_nsg_for_postgres_mysql.sh @@ -0,0 +1,88 @@ + + +# Change NSG firewall rule to restrict Postgres and MySQL database from client machine only + +# Find out your local client ip address. + +echo -e "\n This script restricts the access to your ""on-prem"" Postgres and MySQL database from the shell where it is run from. + It removes public access to the databases and adds your shell IP address as an source IP to connect from. + If you are running this script from Azure Cloud Shell and want to add your computer's IP address as a source for Gui tools to connect to, + then you have to edit the variable my_ip below - put your computer's IP address. 
+ + In order to find the public IP address of your computer ip address, point a browser to https://ifconfig.me + + If this script is run again it appends your IP address to the current white listed source IP addresses. \n" + +my_ip=`curl -s ifconfig.me`/32 + + +# In this resource group, there is only one NSG + +export rg_nsg="MC_PizzaAppWest_pizzaappwest_westus" +export nsg_name=` az network nsg list -g $rg_nsg --query "[].name" -o tsv` + +# For this NSG, there are two rules for connecting to Postgres and MySQL. + +export pg_nsg_rule_name=`az network nsg rule list -g $rg_nsg --nsg-name $nsg_name --query "[].[name]" -o tsv | grep "TCP-5432" ` +export my_nsg_rule_name=`az network nsg rule list -g $rg_nsg --nsg-name $nsg_name --query "[].[name]" -o tsv | grep "TCP-3306" ` + +# Capture the existing allowed_source_ip_address. + +existing_my_source_ip_allowed=`az network nsg rule show -g $rg_nsg --nsg-name $nsg_name --name $my_nsg_rule_name --query "sourceAddressPrefix" -o tsv` +existing_pg_source_ip_allowed=`az network nsg rule show -g $rg_nsg --nsg-name $nsg_name --name $pg_nsg_rule_name --query "sourceAddressPrefix" -o tsv` + +# If it says "Internet" we treat it as 0.0.0.0 + +if [ "$existing_my_source_ip_allowed" = "Internet" ] +then + existing_my_source_ip_allowed="0.0.0.0" +fi + + +if [ "$existing_pg_source_ip_allowed" = "Internet" ] +then + existing_pg_source_ip_allowed="0.0.0.0" +fi + +# if the existing source ip allowed is open to the world - then we need to remove it first. Otherwise it is a ( list of ) IP addresses then +# we append to it another IP address. Open the world is 0.0.0.0 or 0.0.0.0/0. + + +existing_my_source_ip_allowed_prefix=`echo $existing_my_source_ip_allowed | cut -d "/" -f1` +existing_pg_source_ip_allowed_prefix=`echo $existing_pg_source_ip_allowed | cut -d "/" -f1` + +# If it was open to public, we take off the existing 0.0.0.0 or else we append to it. + + +if [ "$existing_my_source_ip_allowed_prefix" = "0.0.0.0" ] +then + new_my_source_ip_allowed="$my_ip" +else + new_my_source_ip_allowed="$existing_my_source_ip_allowed $my_ip" +fi + + +if [ "$existing_pg_source_ip_allowed_prefix" = "0.0.0.0" ] +then + new_pg_source_ip_allowed="$my_ip" +else + new_pg_source_ip_allowed="$existing_pg_source_ip_allowed $my_ip" +fi + +# Update the rule to allow access to Postgres and MySQL only from your client ip address - "myip". Also discard errors - as if you run the script +# simply twice back to back - it gives an error message - does not do any harm though . + +az network nsg rule update -g $rg_nsg --nsg-name $nsg_name --name $my_nsg_rule_name --source-address-prefixes $new_my_source_ip_allowed 2>/dev/zero + +if [ $? -ne 0 ] +then + echo -e "\n Your MySQL Firewall rule was not changed. It is possible that you already have $my_ip white listed \n" +fi + +az network nsg rule update -g $rg_nsg --nsg-name $nsg_name --name $pg_nsg_rule_name --source-address-prefixes $new_pg_source_ip_allowed 2>/dev/zero +if [ $? -ne 0 ] +then + echo -e "\n Your Postgres Firewall rule was not changed. 
It is possible that you already have $my_ip white listed \n" +fi + + diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/mysql.sql b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/mysql.sql new file mode 100644 index 000000000..23e6449b8 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/mysql.sql @@ -0,0 +1,16 @@ +-- Create wth database +CREATE DATABASE wth; + +-- Create a user Contosoapp that would own the application data for migration + +CREATE USER if not exists 'contosoapp' identified by 'OCPHack8' ; + +GRANT SUPER on *.* to contosoapp identified by 'OCPHack8'; -- may not be needed + +GRANT ALL PRIVILEGES ON wth.* to contosoapp ; + +GRANT PROCESS, SELECT ON *.* to contosoapp ; + +SET GLOBAL gtid_mode=ON_PERMISSIVE; +SET GLOBAL gtid_mode=OFF_PERMISSIVE; +SET GLOBAL gtid_mode=OFF; diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/pg.sql b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/pg.sql new file mode 100644 index 000000000..aa1361fb6 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/pg.sql @@ -0,0 +1,7 @@ +--Create the wth database +CREATE DATABASE wth; + +-- Create user contosoapp that would own the application schema + + CREATE ROLE CONTOSOAPP WITH LOGIN NOSUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION PASSWORD 'OCPHack8'; + diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/start_vmss_node.sh b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/start_vmss_node.sh new file mode 100644 index 000000000..5aba926b9 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/start_vmss_node.sh @@ -0,0 +1,11 @@ + +# Start the VMSS that hosts the AKS nodes. There are only two VMSS in the resource group -one each for systempool and userpool. +# Change the value of the resource group, if required. + +export vmss_user=$(az vmss list -g MC_PizzaAppWest_pizzaappwest_westus --query '[].name' | grep userpool | tr -d "," | tr -d '"') +export vmss_system=$(az vmss list -g MC_PizzaAppWest_pizzaappwest_westus --query '[].name' | grep systempool | tr -d "," | tr -d '"') + +# Now start the VM scale sets + +az vmss start -g MC_PizzaAppWest_pizzaappwest_westus -n $vmss_system +az vmss start -g MC_PizzaAppWest_pizzaappwest_westus -n $vmss_user diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/stop_vmss_node.sh b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/stop_vmss_node.sh new file mode 100644 index 000000000..da978151d --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/stop_vmss_node.sh @@ -0,0 +1,11 @@ + +# Stop the VMSS that hosts the AKS nodes to stop incurring compute charges. There are only two VMSS in the resource group -one each for system and userpool. +# Change the value of the resource group, if required. 
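# NOTE: the resource group used below (MC_OSSDBMigration_ossdbmigration_westus) does not match the
# MC_PizzaAppWest_pizzaappwest_westus group used in start_vmss_node.sh; if your cluster was created by
# this hack's create-cluster.sh, you will likely need to swap in that group name. One way to avoid
# hard-coding it is to look it up from the cluster, for example (using the names from create-cluster.sh):
#   az aks show -g PizzaAppWest -n pizzaappwest --query nodeResourceGroup -o tsv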
+ +export vmss_user=$(az vmss list -g MC_OSSDBMigration_ossdbmigration_westus --query '[].name' | grep userpool | tr -d "," | tr -d '"') +export vmss_system=$(az vmss list -g MC_OSSDBMigration_ossdbmigration_westus --query '[].name' | grep systempool | tr -d "," | tr -d '"') + +# Now stop the VM scale sets + +az vmss stop -g MC_OSSDBMigration_ossdbmigration_westus -n $vmss_user +az vmss stop -g MC_OSSDBMigration_ossdbmigration_westus -n $vmss_system diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/templates/deployment-mysql.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/templates/deployment-mysql.yaml new file mode 100644 index 000000000..272ed7c0b --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/templates/deployment-mysql.yaml @@ -0,0 +1,106 @@ +{{ if eq .Values.appConfig.databaseType "mysql" }} +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: {{ .Values.infrastructure.appName }} + namespace: "{{ .Values.infrastructure.namespace }}" +spec: + replicas: 1 + serviceName: "{{ .Values.infrastructure.appName }}-external" + selector: + matchLabels: + app: {{ .Values.application.labelValue }} + template: + metadata: + labels: + app: {{ .Values.application.labelValue }} + spec: + containers: + - image: "{{ .Values.image.name }}:{{ .Values.image.tag }}" + name: {{ .Values.infrastructure.appName }} + resources: + requests: + memory: "{{ .Values.resources.requests.memory }}" + cpu: "{{ .Values.resources.requests.cpu }}" + limits: + memory: "{{ .Values.resources.limits.memory }}" + cpu: "{{ .Values.resources.limits.cpu }}" + env: + - name: APP_DATASOURCE_DRIVER + value: "{{ .Values.appSettings.mysql.driverClass }}" + - name: APP_HIBERNATE_DIALECT + value: "{{ .Values.appSettings.mysql.dialect }}" + - name: APP_HIBERNATE_HBM2DDL_AUTO + value: "{{ .Values.globalConfig.hibernateDdlAuto }}" + - name: APP_PORT + value: "{{ .Values.appConfig.webPort }}" + - name: APP_CONTEXT_PATH + value: "{{ .Values.appConfig.webContext }}" + - name: APP_BRAINTREE_MERCHANT_ID + valueFrom: + secretKeyRef: + name: "{{ .Values.globalConfig.secretName }}" + key: app_braintree_merchant_id + - name: APP_BRAINTREE_PUBLIC_KEY + valueFrom: + secretKeyRef: + name: "{{ .Values.globalConfig.secretName }}" + key: app_braintree_public_key + - name: APP_BRAINTREE_PRIVATE_KEY + valueFrom: + secretKeyRef: + name: "{{ .Values.globalConfig.secretName }}" + key: app_braintree_private_key + - name: APP_RECAPTCHA_PUBLIC_KEY + valueFrom: + secretKeyRef: + name: "{{ .Values.globalConfig.secretName }}" + key: app_recaptcha_public_key + - name: APP_RECAPTCHA_PRIVATE_KEY + valueFrom: + secretKeyRef: + name: "{{ .Values.globalConfig.secretName }}" + key: app_recaptcha_private_key + - name: APP_DATASOURCE_URL + valueFrom: + secretKeyRef: + name: "{{ .Values.globalConfig.secretName }}" + key: app_datasource_url + - name: APP_DATASOURCE_USERNAME + valueFrom: + secretKeyRef: + name: "{{ .Values.globalConfig.secretName }}" + key: app_datasource_username + - name: APP_DATASOURCE_PASSWORD + valueFrom: + secretKeyRef: + name: "{{ .Values.globalConfig.secretName }}" + key: app_datasource_password + ports: + - containerPort: {{ .Values.appConfig.webPort }} + name: contosopizza + readinessProbe: + tcpSocket: + port: {{ .Values.appConfig.webPort }} + initialDelaySeconds: 5 + periodSeconds: 10 + failureThreshold: 3 + livenessProbe: + tcpSocket: + port: {{ .Values.appConfig.webPort }} + initialDelaySeconds: 15 + failureThreshold: 5 + 
periodSeconds: 16 + volumeMounts: + - name: "contosopizza-persistent-storage" + mountPath: {{ .Values.infrastructure.dataVolume }} + volumeClaimTemplates: + - metadata: + name: contosopizza-persistent-storage + spec: + accessModes: [ "ReadWriteOnce" ] + storageClassName: "managed-premium" + resources: + requests: + storage: 1Gi +{{ end }} diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/templates/namespace.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/templates/namespace.yaml new file mode 100644 index 000000000..877dd0c22 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/templates/namespace.yaml @@ -0,0 +1,8 @@ +{{ if eq .Values.infrastructure.namespace "default" }} +# Do not create namespace +{{ else }} +apiVersion: v1 +kind: Namespace +metadata: + name: {{ .Values.infrastructure.namespace }} +{{ end }} \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/templates/secret.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/templates/secret.yaml new file mode 100644 index 000000000..b782f3ff7 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/templates/secret.yaml @@ -0,0 +1,16 @@ +# These are secrets used to configure the application +apiVersion: v1 +kind: Secret +type: Opaque +metadata: + name: "{{ .Values.globalConfig.secretName }}" + namespace: "{{ .Values.infrastructure.namespace }}" +data: + app_braintree_merchant_id: {{ .Values.globalConfig.brainTreeMerchantId | b64enc }} + app_braintree_public_key: {{ .Values.globalConfig.brainTreePublicKey | b64enc }} + app_braintree_private_key: {{ .Values.globalConfig.brainTreePrivateKey | b64enc }} + app_recaptcha_public_key: {{ .Values.globalConfig.recaptchaPublicKey | b64enc }} + app_recaptcha_private_key: {{ .Values.globalConfig.recaptchaPrivateKey | b64enc }} + app_datasource_url: {{ .Values.appConfig.dataSourceURL | b64enc }} + app_datasource_username: {{ .Values.appConfig.dataSourceUser | b64enc }} + app_datasource_password: {{ .Values.appConfig.dataSourcePassword | b64enc }} diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/templates/service.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/templates/service.yaml new file mode 100644 index 000000000..78585b8b3 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/templates/service.yaml @@ -0,0 +1,14 @@ +--- +# This is the internal load balancer, routing traffic to the Application +apiVersion: v1 +kind: Service +metadata: + name: "{{ .Values.infrastructure.appName }}-external" + namespace: {{ .Values.infrastructure.namespace }} +spec: + type: "{{ .Values.service.type }}" + ports: + - port: {{ .Values.appConfig.webPort }} + protocol: {{ .Values.service.protocol }} + selector: + app: {{ .Values.application.labelValue }} \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/uninstall-pizza.sh b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/uninstall-pizza.sh new file mode 100644 index 000000000..0425a4390 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/uninstall-pizza.sh @@ -0,0 +1,6 @@ +helm uninstall wth-postgresql +helm uninstall wth-mysql +helm uninstall mysql-contosopizza +helm uninstall postgres-contosopizza 
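# Any of the releases above that are not currently installed will simply fail with a "not found"
# error, which is harmless here. To see which releases are actually deployed before uninstalling,
# you could run:
#   helm list --all-namespaces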
+echo "" +echo "Use 'kubectl get ns' to make sure your pods are not in a Terminating status before redeploying" diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/update_nsg_for_postgres_mysql.sh b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/update_nsg_for_postgres_mysql.sh new file mode 100644 index 000000000..b1cb8b748 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/update_nsg_for_postgres_mysql.sh @@ -0,0 +1,27 @@ + + +# Change NSG firewall rule to restrict Postgres and MySQL database from client machine only. The first step - to find out your local client ip address. + +echo -e "\n This script restricts the access to your Postgres and MySQL database from your computer only. + + The variable myip will get the ip address of the shell environment where this script is running from - be it a cloud shell or your own computer. + You can get your computer's IP adress by browsing to https://ifconfig.me. So if the browser says it is 102.194.87.201, your myip=102.194.87.201/32. +\n" + +myip=`curl -s ifconfig.me`/32 + + +# In this resource group, there is only one NSG. Change the value of the resource group, if required + +export rg_nsg="MC_OSSDBMigration_ossdbmigration_westus" +export nsg_name=`az network nsg list -g $rg_nsg --query "[].name" -o tsv` + +# For this NSG, there are two rules for connecting to Postgres and MySQL. + +export pg_nsg_rule_name=`az network nsg rule list -g $rg_nsg --nsg-name $nsg_name --query '[].[name]' | grep "TCP-5432" | sed 's/"//g'` +export my_nsg_rule_name=`az network nsg rule list -g $rg_nsg --nsg-name $nsg_name --query '[].[name]' | grep "TCP-3306" | sed 's/"//g'` + +# Update the rule to allow access to Postgres and MySQL only from your client ip address - "myip" + +az network nsg rule update -g $rg_nsg --nsg-name $nsg_name --name $my_nsg_rule_name --source-address-prefixes $myip +az network nsg rule update -g $rg_nsg --nsg-name $nsg_name --name $pg_nsg_rule_name --source-address-prefixes $myip diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/values-lowspec.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/values-lowspec.yaml new file mode 100644 index 000000000..5423e6f3a --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/values-lowspec.yaml @@ -0,0 +1,78 @@ + +replicaCount: 1 + +# Change the application settings here +appConfig: + databaseType: "mysql" # mysql or postgres + #databaseType: "postgres" # mysql or postgres + #local example of Postgres JDBC Connection string + #dataSourceURL: "jdbc:postgresql://10.71.25.217:5432/wth" # your JDBC connection string goes here + + #Azure example of Postgres JDBC Connection string + #dataSourceURL: "jdbc:postgresql://petepgdbtest01.postgres.database.azure.com:5432/wth?sslmode=require" # your JDBC connection string goes here + + #local example of MySQL JDBC Connection string + dataSourceURL: "jdbc:mysql://10.71.215.5:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here + + #Azure example of MySQL JDBC Connection string + #dataSourceURL: "jdbc:mysql://petewthmysql01.mysql.database.azure.com:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here + + #local examples of dataSourceUser and dataSourcePassword + dataSourceUser: "root" # your database username goes here + dataSourcePassword: "OCPHack8" # your database password 
goes here + + #Azure examples of dataSourceUser and dataSourcePassword + #dataSourceUser: "postgres@petepgdbtest01" # your database username goes here + #dataSourcePassword: "OCPHack8" # your database password goes here + + webPort: 8083 # the port the app listens on + #webPort: 8082 # the port the app listens on + webContext: "pizzeria" # the application context http://hostname:port/webContext + +# These changes applies to any database type used +globalConfig: + secretName: contosopizza + brainTreeMerchantId: "3fk8mrzyr665jb6d" + brainTreePublicKey: "72wqqdk75tmh44n9" + brainTreePrivateKey: "cf094c3345159aaa473a8f50d56c2e33" + recaptchaPublicKey: "6LfqpNsZAAAAACIMnNaeW7fS_-19pZ7K__dREk04" + recaptchaPrivateKey: "6LfqpNsZAAAAAGrUYTOGR69aUvzaRoz7f_tnMBeI" + hibernateDdlAuto: "create-only" + +application: + labelValue: contosopizza + +infrastructure: + namespace: contosopizza + appName: contosopizza + dataVolume: "/usr/local/contosopizza" + volumeName: "contosopizza" + +image: + name: izzymsft/ubuntu-pizza + pullPolicy: IfNotPresent + tag: "1.0" + +service: + type: LoadBalancer + port: 8082 + protocol: TCP + +resources: + limits: + cpu: 1000m + memory: 4096Mi + requests: + cpu: 256m + memory: 512Mi + volume: + size: 1Gi + storageClass: managed-premium + +appSettings: + mysql: + dialect: "org.hibernate.dialect.MySQL57Dialect" + driverClass: "com.mysql.jdbc.Driver" + postgres: + dialect: "org.hibernate.dialect.PostgreSQL95Dialect" + driverClass: "org.postgresql.Driver" \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/values-mysql-orig.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/values-mysql-orig.yaml new file mode 100644 index 000000000..f477357d2 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/values-mysql-orig.yaml @@ -0,0 +1,13 @@ +replicaCount: 1 + +# Change the application settings here +appConfig: + databaseType: "mysql" # mysql or postgres + dataSourceURL: "jdbc:mysql://XXX.XXX.XXX.XXX:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here + dataSourceUser: "root" # your database username goes here + dataSourcePassword: "OCPHack8" # your database password goes here + webPort: 8081 # the port the app listens on + webContext: "pizzeria" # the application context http://hostname:port/webContext + +infrastructure: + namespace: contosoappmysql \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/values-mysql.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/values-mysql.yaml new file mode 100644 index 000000000..f477357d2 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/values-mysql.yaml @@ -0,0 +1,13 @@ +replicaCount: 1 + +# Change the application settings here +appConfig: + databaseType: "mysql" # mysql or postgres + dataSourceURL: "jdbc:mysql://XXX.XXX.XXX.XXX:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here + dataSourceUser: "root" # your database username goes here + dataSourcePassword: "OCPHack8" # your database password goes here + webPort: 8081 # the port the app listens on + webContext: "pizzeria" # the application context http://hostname:port/webContext + +infrastructure: + namespace: contosoappmysql \ No newline at end of file diff --git 
a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/values-postgresql-orig.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/values-postgresql-orig.yaml new file mode 100644 index 000000000..a52f4021a --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/values-postgresql-orig.yaml @@ -0,0 +1,13 @@ +replicaCount: 1 + +# Change the application settings here +appConfig: + databaseType: "postgres" # mysql or postgres + dataSourceURL: "jdbc:postgresql://XXX.XXX.XXX.XXX:5432/wth" # your JDBC connection string goes here + dataSourceUser: "contosoapp" # your database username goes here + dataSourcePassword: "OCPHack8" # your database password goes here + webPort: 8082 # the port the app listens on + webContext: "pizzeria" # the application context http://hostname:port/webContext + +infrastructure: + namespace: contosoapppostgres diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/values-postgresql.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/values-postgresql.yaml new file mode 100644 index 000000000..a52f4021a --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/values-postgresql.yaml @@ -0,0 +1,13 @@ +replicaCount: 1 + +# Change the application settings here +appConfig: + databaseType: "postgres" # mysql or postgres + dataSourceURL: "jdbc:postgresql://XXX.XXX.XXX.XXX:5432/wth" # your JDBC connection string goes here + dataSourceUser: "contosoapp" # your database username goes here + dataSourcePassword: "OCPHack8" # your database password goes here + webPort: 8082 # the port the app listens on + webContext: "pizzeria" # the application context http://hostname:port/webContext + +infrastructure: + namespace: contosoapppostgres diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/values.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/values.yaml new file mode 100644 index 000000000..5423e6f3a --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/ContosoPizza/values.yaml @@ -0,0 +1,78 @@ + +replicaCount: 1 + +# Change the application settings here +appConfig: + databaseType: "mysql" # mysql or postgres + #databaseType: "postgres" # mysql or postgres + #local example of Postgres JDBC Connection string + #dataSourceURL: "jdbc:postgresql://10.71.25.217:5432/wth" # your JDBC connection string goes here + + #Azure example of Postgres JDBC Connection string + #dataSourceURL: "jdbc:postgresql://petepgdbtest01.postgres.database.azure.com:5432/wth?sslmode=require" # your JDBC connection string goes here + + #local example of MySQL JDBC Connection string + dataSourceURL: "jdbc:mysql://10.71.215.5:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here + + #Azure example of MySQL JDBC Connection string + #dataSourceURL: "jdbc:mysql://petewthmysql01.mysql.database.azure.com:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here + + #local examples of dataSourceUser and dataSourcePassword + dataSourceUser: "root" # your database username goes here + dataSourcePassword: "OCPHack8" # your database password goes here + + #Azure examples of dataSourceUser and dataSourcePassword + #dataSourceUser: "postgres@petepgdbtest01" # your database username goes here + #dataSourcePassword: "OCPHack8" # your database password goes here + + webPort: 8083 # 
the port the app listens on + #webPort: 8082 # the port the app listens on + webContext: "pizzeria" # the application context http://hostname:port/webContext + +# These changes applies to any database type used +globalConfig: + secretName: contosopizza + brainTreeMerchantId: "3fk8mrzyr665jb6d" + brainTreePublicKey: "72wqqdk75tmh44n9" + brainTreePrivateKey: "cf094c3345159aaa473a8f50d56c2e33" + recaptchaPublicKey: "6LfqpNsZAAAAACIMnNaeW7fS_-19pZ7K__dREk04" + recaptchaPrivateKey: "6LfqpNsZAAAAAGrUYTOGR69aUvzaRoz7f_tnMBeI" + hibernateDdlAuto: "create-only" + +application: + labelValue: contosopizza + +infrastructure: + namespace: contosopizza + appName: contosopizza + dataVolume: "/usr/local/contosopizza" + volumeName: "contosopizza" + +image: + name: izzymsft/ubuntu-pizza + pullPolicy: IfNotPresent + tag: "1.0" + +service: + type: LoadBalancer + port: 8082 + protocol: TCP + +resources: + limits: + cpu: 1000m + memory: 4096Mi + requests: + cpu: 256m + memory: 512Mi + volume: + size: 1Gi + storageClass: managed-premium + +appSettings: + mysql: + dialect: "org.hibernate.dialect.MySQL57Dialect" + driverClass: "com.mysql.jdbc.Driver" + postgres: + dialect: "org.hibernate.dialect.PostgreSQL95Dialect" + driverClass: "org.postgresql.Driver" \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/MySQL57/Chart.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/MySQL57/Chart.yaml new file mode 100644 index 000000000..01d1f9a62 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/MySQL57/Chart.yaml @@ -0,0 +1,11 @@ +apiVersion: v2 + +name: MySQL Database Server + +description: A Helm chart for deploying a single node MySQL database server + +type: application + +version: 2.0 + +appVersion: 5.7 \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/MySQL57/templates/configmap.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/MySQL57/templates/configmap.yaml new file mode 100644 index 000000000..e53cddb93 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/MySQL57/templates/configmap.yaml @@ -0,0 +1,56 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: mysqld-config + namespace: "{{ .Values.infrastructure.namespace }}" +data: + mysqld.cnf: |- + # Mounted at /etc/mysql/mysql.conf.d/mysqld.cnf + [mysqld] + + lower_case_table_names = 1 + server_id = 3 + + pid-file = /var/run/mysqld/mysqld.pid + socket = /var/run/mysqld/mysqld.sock + datadir = /usr/local/mysql/data + + explicit_defaults_for_timestamp = on + + #log-error = /var/log/mysql/error.log + + # Disabling symbolic-links is recommended to prevent assorted security risks + symbolic-links=0 + + # The value of log_bin is the base name of the sequence of binlog files. + log_bin = mysql-bin + + # The binlog-format must be set to ROW or row. + binlog_format = row + + # The binlog_row_image must be set to FULL or full + binlog_row_image = full + + # This is the number of days for automatic binlog file removal. The default is 0 which means no automatic removal. + expire_logs_days = 7 + + # Boolean which enables/disables support for including the original SQL statement in the binlog entry. + binlog_rows_query_log_events = on + + # Whether updates received by a replica server from a replication source server should be logged to the replica's own binary log + log_slave_updates = on + + # Boolean which specifies whether GTID mode of the MySQL server is enabled or not. 
+ gtid_mode = on + + # Boolean which instructs the server whether or not to enforce GTID consistency by allowing + # the execution of statements that can be logged in a transactionally safe manner; required when using GTIDs. + enforce_gtid_consistency = on + + # The number of seconds the server waits for activity on an interactive connection before closing it. + interactive_timeout = 36000 + + # The number of seconds the server waits for activity on a noninteractive connection before closing it. + wait_timeout = 72000 + + # end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/MySQL57/templates/deployment.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/MySQL57/templates/deployment.yaml new file mode 100644 index 000000000..7ad89c931 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/MySQL57/templates/deployment.yaml @@ -0,0 +1,74 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: {{ .Values.infrastructure.appName }} + namespace: "{{ .Values.infrastructure.namespace }}" +spec: + replicas: 1 + selector: + matchLabels: + app: {{ .Values.application.labelValue }} + strategy: + type: Recreate + template: + metadata: + labels: + app: {{ .Values.application.labelValue }} + spec: + containers: + - image: "{{ .Values.image.name }}:{{ .Values.image.tag }}" + name: {{ .Values.infrastructure.appName }} + resources: + requests: + memory: "{{ .Values.resources.requests.memory }}" + cpu: "{{ .Values.resources.requests.cpu }}" + limits: + memory: "{{ .Values.resources.limits.memory }}" + cpu: "{{ .Values.resources.limits.cpu }}" + env: + - name: MYSQL_ROOT_PASSWORD + valueFrom: + secretKeyRef: + name: mysqld + key: mysql_password + ports: + - containerPort: {{ .Values.service.port }} + name: mysql + readinessProbe: + tcpSocket: + port: {{ .Values.service.port }} + initialDelaySeconds: 5 + periodSeconds: 10 + failureThreshold: 3 + livenessProbe: + tcpSocket: + port: {{ .Values.service.port }} + initialDelaySeconds: 15 + failureThreshold: 5 + periodSeconds: 16 + volumeMounts: + - name: "{{ .Values.infrastructure.volumeName }}-volume" + mountPath: {{ .Values.infrastructure.dataVolume }} + - name: mysqld-configuration2 + mountPath: /etc/mysql/mysql.conf.d + volumes: + - name: "{{ .Values.infrastructure.volumeName }}-volume" + persistentVolumeClaim: + claimName: "{{ .Values.infrastructure.volumeName }}-persistent-storage" + - name: mysqld-configuration2 + configMap: + name: mysqld-config + +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: "{{ .Values.infrastructure.volumeName }}-persistent-storage" + namespace: "{{ .Values.infrastructure.namespace }}" +spec: + accessModes: + - ReadWriteOnce + storageClassName: {{ .Values.resources.volume.storageClass }} + resources: + requests: + storage: {{ .Values.resources.volume.size }} \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/MySQL57/templates/namespace.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/MySQL57/templates/namespace.yaml new file mode 100644 index 000000000..877dd0c22 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/MySQL57/templates/namespace.yaml @@ -0,0 +1,8 @@ +{{ if eq .Values.infrastructure.namespace "default" }} +# Do not create namespace +{{ else }} +apiVersion: v1 +kind: Namespace +metadata: + name: {{ .Values.infrastructure.namespace }} +{{ end }} \ No newline at end of file diff --git 
a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/MySQL57/templates/secret.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/MySQL57/templates/secret.yaml new file mode 100644 index 000000000..c714862b0 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/MySQL57/templates/secret.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: Secret +type: Opaque +metadata: + name: mysqld + namespace: "{{ .Values.infrastructure.namespace }}" +data: + mysql_default_user: {{ .Values.infrastructure.username | b64enc }} + mysql_password: {{ .Values.infrastructure.password | b64enc }} \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/MySQL57/templates/service.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/MySQL57/templates/service.yaml new file mode 100644 index 000000000..7b2a9ca8a --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/MySQL57/templates/service.yaml @@ -0,0 +1,14 @@ +--- +# This is the internal load balancer, routing traffic to the MySQL Pod +apiVersion: v1 +kind: Service +metadata: + name: "{{ .Values.infrastructure.appName }}-external" + namespace: {{ .Values.infrastructure.namespace }} +spec: + type: "{{ .Values.service.type }}" + ports: + - port: {{ .Values.service.port }} + protocol: {{ .Values.service.protocol }} + selector: + app: {{ .Values.application.labelValue }} diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/MySQL57/values.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/MySQL57/values.yaml new file mode 100644 index 000000000..4ce5bad0b --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/MySQL57/values.yaml @@ -0,0 +1,34 @@ + +replicaCount: 1 + +application: + labelValue: mysql + +infrastructure: + namespace: mysql + appName: mysql + username: izzy + password: "OCPHack8" + dataVolume: "/usr/local/mysql" + volumeName: "wthmysql" + +image: + name: mysql + pullPolicy: IfNotPresent + tag: "5.7.32" + +service: + type: LoadBalancer + port: 3306 + protocol: TCP + +resources: + limits: + cpu: 1000m + memory: 4096Mi + requests: + cpu: 750m + memory: 2048Mi + volume: + size: 5Gi + storageClass: managed-premium \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/Chart.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/Chart.yaml new file mode 100644 index 000000000..76ae49cb7 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/Chart.yaml @@ -0,0 +1,11 @@ +apiVersion: v2 + +name: PostgreSQL + +description: A Helm chart for deploying a single node PostgreSQL database server + +type: application + +version: 2.0 + +appVersion: 11.6 \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/templates/deployment.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/templates/deployment.yaml new file mode 100644 index 000000000..88567b112 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/templates/deployment.yaml @@ -0,0 +1,91 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: {{ .Values.infrastructure.appName }} + namespace: "{{ .Values.infrastructure.namespace }}" +spec: + replicas: 1 + selector: + matchLabels: + app: {{ .Values.application.labelValue }} + strategy: + type: Recreate + template: + metadata: + 
labels: + app: {{ .Values.application.labelValue }} + spec: + securityContext: + runAsUser: 0 + runAsGroup: 999 + fsGroup: 999 + containers: + - image: "{{ .Values.image.name }}:{{ .Values.image.tag }}" + name: {{ .Values.infrastructure.appName }} + args: ["-c", "config_file=/etc/postgresql/postgresql.conf"] + resources: + requests: + memory: "{{ .Values.resources.requests.memory }}" + cpu: "{{ .Values.resources.requests.cpu }}" + limits: + memory: "{{ .Values.resources.limits.memory }}" + cpu: "{{ .Values.resources.limits.cpu }}" + env: + - name: POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + name: postgres + key: postgres_password + - name: PGDATA + value: {{ .Values.infrastructure.dataPath }} + ports: + - containerPort: {{ .Values.service.port }} + name: postgres + readinessProbe: + tcpSocket: + port: {{ .Values.service.port }} + initialDelaySeconds: 5 + periodSeconds: 10 + failureThreshold: 3 + livenessProbe: + tcpSocket: + port: {{ .Values.service.port }} + initialDelaySeconds: 15 + failureThreshold: 5 + periodSeconds: 16 + volumeMounts: + - name: "{{ .Values.infrastructure.appName }}-volume" + mountPath: {{ .Values.infrastructure.dataVolume }} + - name: "postgresql-configuration" + mountPath: "/etc/postgresql" + - name: "postgresql-tls-keys" + mountPath: "/etc/postgresql/keys" + volumes: + - name: "{{ .Values.infrastructure.appName }}-volume" + persistentVolumeClaim: + claimName: "{{ .Values.infrastructure.appName }}-persistent-storage" + - name: postgresql-configuration + configMap: + name: postgresql-config + - name: postgresql-tls-keys + secret: + secretName: postgresql-tls-secret + items: + - key: tls.crt + path: "tls.crt" + - key: tls.key + path: "tls.key" + mode: 0640 +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: "{{ .Values.infrastructure.appName }}-persistent-storage" + namespace: "{{ .Values.infrastructure.namespace }}" +spec: + accessModes: + - ReadWriteOnce + storageClassName: {{ .Values.resources.volume.storageClass }} + resources: + requests: + storage: {{ .Values.resources.volume.size }} \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/templates/namespace.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/templates/namespace.yaml new file mode 100644 index 000000000..1da5da543 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/templates/namespace.yaml @@ -0,0 +1,8 @@ +{{ if eq .Values.infrastructure.namespace "default" }} +# Do not create namespace +{{ else }} +apiVersion: v1 +kind: Namespace +metadata: + name: {{ .Values.infrastructure.namespace }} +{{ end }} diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/templates/postgresql-configmap.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/templates/postgresql-configmap.yaml new file mode 100644 index 000000000..62c3e6b10 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/templates/postgresql-configmap.yaml @@ -0,0 +1,699 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: postgresql-config + namespace: {{ .Values.infrastructure.namespace }} +data: + postgresql.conf: |- + # Mounted at /etc/postgresql/postgresql.conf + # ----------------------------- + # PostgreSQL configuration file + # ----------------------------- + # + # This file consists of lines of the form: + # + # name = value + # + # (The "=" is optional.) Whitespace may be used. 
Comments are introduced with + # "#" anywhere on a line. The complete list of parameter names and allowed + # values can be found in the PostgreSQL documentation. + # + # The commented-out settings shown in this file represent the default values. + # Re-commenting a setting is NOT sufficient to revert it to the default value; + # you need to reload the server. + # + # This file is read on server startup and when the server receives a SIGHUP + # signal. If you edit the file on a running system, you have to SIGHUP the + # server for the changes to take effect, run "pg_ctl reload", or execute + # "SELECT pg_reload_conf()". Some parameters, which are marked below, + # require a server shutdown and restart to take effect. + # + # Any parameter can also be given as a command-line option to the server, e.g., + # "postgres -c log_connections=on". Some parameters can be changed at run time + # with the "SET" SQL command. + # + # Memory units: kB = kilobytes Time units: ms = milliseconds + # MB = megabytes s = seconds + # GB = gigabytes min = minutes + # TB = terabytes h = hours + # d = days + + + #------------------------------------------------------------------------------ + # FILE LOCATIONS + #------------------------------------------------------------------------------ + + # The default values of these variables are driven from the -D command-line + # option or PGDATA environment variable, represented here as ConfigDir. + + #data_directory = 'ConfigDir' # use data in another directory + # (change requires restart) + #hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file + # (change requires restart) + #ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file + # (change requires restart) + + # If external_pid_file is not explicitly set, no extra PID file is written. 
+ #external_pid_file = '' # write an extra PID file + # (change requires restart) + + + #------------------------------------------------------------------------------ + # CONNECTIONS AND AUTHENTICATION + #------------------------------------------------------------------------------ + + # - Connection Settings - + + listen_addresses = '*' + # comma-separated list of addresses; + # defaults to 'localhost'; use '*' for all + # (change requires restart) + #port = 5432 # (change requires restart) + max_connections = 100 # (change requires restart) + #superuser_reserved_connections = 3 # (change requires restart) + #unix_socket_directories = '/var/run/postgresql' # comma-separated list of directories + # (change requires restart) + #unix_socket_group = '' # (change requires restart) + #unix_socket_permissions = 0777 # begin with 0 to use octal notation + # (change requires restart) + #bonjour = off # advertise server via Bonjour + # (change requires restart) + #bonjour_name = '' # defaults to the computer name + # (change requires restart) + + # - TCP Keepalives - + # see "man 7 tcp" for details + + #tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds; + # 0 selects the system default + #tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds; + # 0 selects the system default + #tcp_keepalives_count = 0 # TCP_KEEPCNT; + # 0 selects the system default + + # - Authentication - + + #authentication_timeout = 1min # 1s-600s + #password_encryption = md5 # md5 or scram-sha-256 + #db_user_namespace = off + + # GSSAPI using Kerberos + #krb_server_keyfile = '' + #krb_caseins_users = off + + # - SSL - + + ssl = on + ssl_ca_file = '/etc/postgresql/keys/tls.crt' + ssl_cert_file = '/etc/postgresql/keys/tls.crt' + #ssl_crl_file = '' + ssl_key_file = '/etc/postgresql/keys/tls.key' + #ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers + #ssl_prefer_server_ciphers = on + #ssl_ecdh_curve = 'prime256v1' + #ssl_dh_params_file = '' + #ssl_passphrase_command = '' + #ssl_passphrase_command_supports_reload = off + + + #------------------------------------------------------------------------------ + # RESOURCE USAGE (except WAL) + #------------------------------------------------------------------------------ + + # - Memory - + + shared_buffers = 128MB # min 128kB + # (change requires restart) + #huge_pages = try # on, off, or try + # (change requires restart) + #temp_buffers = 8MB # min 800kB + #max_prepared_transactions = 0 # zero disables the feature + # (change requires restart) + # Caution: it is not advisable to set max_prepared_transactions nonzero unless + # you actively intend to use prepared transactions. 
+ #work_mem = 4MB # min 64kB + #maintenance_work_mem = 64MB # min 1MB + #autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem + #max_stack_depth = 2MB # min 100kB + dynamic_shared_memory_type = posix # the default is the first option + # supported by the operating system: + # posix + # sysv + # windows + # mmap + # use none to disable dynamic shared memory + # (change requires restart) + + # - Disk - + + #temp_file_limit = -1 # limits per-process temp file space + # in kB, or -1 for no limit + + # - Kernel Resources - + + #max_files_per_process = 1000 # min 25 + # (change requires restart) + + # - Cost-Based Vacuum Delay - + + #vacuum_cost_delay = 0 # 0-100 milliseconds + #vacuum_cost_page_hit = 1 # 0-10000 credits + #vacuum_cost_page_miss = 10 # 0-10000 credits + #vacuum_cost_page_dirty = 20 # 0-10000 credits + #vacuum_cost_limit = 200 # 1-10000 credits + + # - Background Writer - + + #bgwriter_delay = 200ms # 10-10000ms between rounds + #bgwriter_lru_maxpages = 100 # max buffers written/round, 0 disables + #bgwriter_lru_multiplier = 2.0 # 0-10.0 multiplier on buffers scanned/round + #bgwriter_flush_after = 512kB # measured in pages, 0 disables + + # - Asynchronous Behavior - + + #effective_io_concurrency = 1 # 1-1000; 0 disables prefetching + #max_worker_processes = 8 # (change requires restart) + #max_parallel_maintenance_workers = 2 # taken from max_parallel_workers + #max_parallel_workers_per_gather = 2 # taken from max_parallel_workers + #parallel_leader_participation = on + #max_parallel_workers = 8 # maximum number of max_worker_processes that + # can be used in parallel operations + #old_snapshot_threshold = -1 # 1min-60d; -1 disables; 0 is immediate + # (change requires restart) + #backend_flush_after = 0 # measured in pages, 0 disables + + + #------------------------------------------------------------------------------ + # WRITE-AHEAD LOG + #------------------------------------------------------------------------------ + + # - Settings - + + wal_level = logical # minimal, replica, or logical + # (change requires restart) + #fsync = on # flush data to disk for crash safety + # (turning this off can cause + # unrecoverable data corruption) + #synchronous_commit = on # synchronization level; + # off, local, remote_write, remote_apply, or on + #wal_sync_method = fsync # the default is the first option + # supported by the operating system: + # open_datasync + # fdatasync (default on Linux) + # fsync + # fsync_writethrough + # open_sync + #full_page_writes = on # recover from partial page writes + #wal_compression = off # enable compression of full-page writes + #wal_log_hints = off # also do full page writes of non-critical updates + # (change requires restart) + #wal_buffers = -1 # min 32kB, -1 sets based on shared_buffers + # (change requires restart) + #wal_writer_delay = 200ms # 1-10000 milliseconds + #wal_writer_flush_after = 1MB # measured in pages, 0 disables + + #commit_delay = 0 # range 0-100000, in microseconds + #commit_siblings = 5 # range 1-1000 + + # - Checkpoints - + + #checkpoint_timeout = 5min # range 30s-1d + max_wal_size = 1GB + min_wal_size = 80MB + #checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0 + #checkpoint_flush_after = 256kB # measured in pages, 0 disables + #checkpoint_warning = 30s # 0 disables + + # - Archiving - + + #archive_mode = off # enables archiving; off, on, or always + # (change requires restart) + #archive_command = '' # command to use to archive a logfile segment + # placeholders: %p = path of file 
to archive + # %f = file name only + # e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f' + #archive_timeout = 0 # force a logfile segment switch after this + # number of seconds; 0 disables + + + #------------------------------------------------------------------------------ + # REPLICATION + #------------------------------------------------------------------------------ + + # - Sending Servers - + + # Set these on the master and on any standby that will send replication data. + + #max_wal_senders = 10 # max number of walsender processes + # (change requires restart) + #wal_keep_segments = 0 # in logfile segments; 0 disables + #wal_sender_timeout = 60s # in milliseconds; 0 disables + + #max_replication_slots = 10 # max number of replication slots + # (change requires restart) + #track_commit_timestamp = off # collect timestamp of transaction commit + # (change requires restart) + + # - Master Server - + + # These settings are ignored on a standby server. + + #synchronous_standby_names = '' # standby servers that provide sync rep + # method to choose sync standbys, number of sync standbys, + # and comma-separated list of application_name + # from standby(s); '*' = all + #vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is delayed + + # - Standby Servers - + + # These settings are ignored on a master server. + + #hot_standby = on # "off" disallows queries during recovery + # (change requires restart) + #max_standby_archive_delay = 30s # max delay before canceling queries + # when reading WAL from archive; + # -1 allows indefinite delay + #max_standby_streaming_delay = 30s # max delay before canceling queries + # when reading streaming WAL; + # -1 allows indefinite delay + #wal_receiver_status_interval = 10s # send replies at least this often + # 0 disables + #hot_standby_feedback = off # send info from standby to prevent + # query conflicts + #wal_receiver_timeout = 60s # time that receiver waits for + # communication from master + # in milliseconds; 0 disables + #wal_retrieve_retry_interval = 5s # time to wait before retrying to + # retrieve WAL after a failed attempt + + # - Subscribers - + + # These settings are ignored on a publisher. 
+ + #max_logical_replication_workers = 4 # taken from max_worker_processes + # (change requires restart) + #max_sync_workers_per_subscription = 2 # taken from max_logical_replication_workers + + + #------------------------------------------------------------------------------ + # QUERY TUNING + #------------------------------------------------------------------------------ + + # - Planner Method Configuration - + + #enable_bitmapscan = on + #enable_hashagg = on + #enable_hashjoin = on + #enable_indexscan = on + #enable_indexonlyscan = on + #enable_material = on + #enable_mergejoin = on + #enable_nestloop = on + #enable_parallel_append = on + #enable_seqscan = on + #enable_sort = on + #enable_tidscan = on + #enable_partitionwise_join = off + #enable_partitionwise_aggregate = off + #enable_parallel_hash = on + #enable_partition_pruning = on + + # - Planner Cost Constants - + + #seq_page_cost = 1.0 # measured on an arbitrary scale + #random_page_cost = 4.0 # same scale as above + #cpu_tuple_cost = 0.01 # same scale as above + #cpu_index_tuple_cost = 0.005 # same scale as above + #cpu_operator_cost = 0.0025 # same scale as above + #parallel_tuple_cost = 0.1 # same scale as above + #parallel_setup_cost = 1000.0 # same scale as above + + #jit_above_cost = 100000 # perform JIT compilation if available + # and query more expensive than this; + # -1 disables + #jit_inline_above_cost = 500000 # inline small functions if query is + # more expensive than this; -1 disables + #jit_optimize_above_cost = 500000 # use expensive JIT optimizations if + # query is more expensive than this; + # -1 disables + + #min_parallel_table_scan_size = 8MB + #min_parallel_index_scan_size = 512kB + #effective_cache_size = 4GB + + # - Genetic Query Optimizer - + + #geqo = on + #geqo_threshold = 12 + #geqo_effort = 5 # range 1-10 + #geqo_pool_size = 0 # selects default based on effort + #geqo_generations = 0 # selects default based on effort + #geqo_selection_bias = 2.0 # range 1.5-2.0 + #geqo_seed = 0.0 # range 0.0-1.0 + + # - Other Planner Options - + + #default_statistics_target = 100 # range 1-10000 + #constraint_exclusion = partition # on, off, or partition + #cursor_tuple_fraction = 0.1 # range 0.0-1.0 + #from_collapse_limit = 8 + #join_collapse_limit = 8 # 1 disables collapsing of explicit + # JOIN clauses + #force_parallel_mode = off + #jit = off # allow JIT compilation + + + #------------------------------------------------------------------------------ + # REPORTING AND LOGGING + #------------------------------------------------------------------------------ + + # - Where to Log - + + #log_destination = 'stderr' # Valid values are combinations of + # stderr, csvlog, syslog, and eventlog, + # depending on platform. csvlog + # requires logging_collector to be on. + + # This is used when logging to stderr: + #logging_collector = off # Enable capturing of stderr and csvlog + # into log files. Required to be on for + # csvlogs. + # (change requires restart) + + # These are only used if logging_collector is on: + #log_directory = 'log' # directory where log files are written, + # can be absolute or relative to PGDATA + #log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern, + # can include strftime() escapes + #log_file_mode = 0600 # creation mode for log files, + # begin with 0 to use octal notation + #log_truncate_on_rotation = off # If on, an existing log file with the + # same name as the new log file will be + # truncated rather than appended to. 
+ # But such truncation only occurs on + # time-driven rotation, not on restarts + # or size-driven rotation. Default is + # off, meaning append to existing files + # in all cases. + #log_rotation_age = 1d # Automatic rotation of logfiles will + # happen after that time. 0 disables. + #log_rotation_size = 10MB # Automatic rotation of logfiles will + # happen after that much log output. + # 0 disables. + + # These are relevant when logging to syslog: + #syslog_facility = 'LOCAL0' + #syslog_ident = 'postgres' + #syslog_sequence_numbers = on + #syslog_split_messages = on + + # This is only relevant when logging to eventlog (win32): + # (change requires restart) + #event_source = 'PostgreSQL' + + # - When to Log - + + #log_min_messages = warning # values in order of decreasing detail: + # debug5 + # debug4 + # debug3 + # debug2 + # debug1 + # info + # notice + # warning + # error + # log + # fatal + # panic + + #log_min_error_statement = error # values in order of decreasing detail: + # debug5 + # debug4 + # debug3 + # debug2 + # debug1 + # info + # notice + # warning + # error + # log + # fatal + # panic (effectively off) + + #log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements + # and their durations, > 0 logs only + # statements running at least this number + # of milliseconds + + + # - What to Log - + + #debug_print_parse = off + #debug_print_rewritten = off + #debug_print_plan = off + #debug_pretty_print = on + #log_checkpoints = off + #log_connections = off + #log_disconnections = off + #log_duration = off + #log_error_verbosity = default # terse, default, or verbose messages + #log_hostname = off + #log_line_prefix = '%m [%p] ' # special values: + # %a = application name + # %u = user name + # %d = database name + # %r = remote host and port + # %h = remote host + # %p = process ID + # %t = timestamp without milliseconds + # %m = timestamp with milliseconds + # %n = timestamp with milliseconds (as a Unix epoch) + # %i = command tag + # %e = SQL state + # %c = session ID + # %l = session line number + # %s = session start timestamp + # %v = virtual transaction ID + # %x = transaction ID (0 if none) + # %q = stop here in non-session + # processes + # %% = '%' + # e.g. 
'<%u%%%d> ' + #log_lock_waits = off # log lock waits >= deadlock_timeout + #log_statement = 'none' # none, ddl, mod, all + #log_replication_commands = off + #log_temp_files = -1 # log temporary files equal or larger + # than the specified size in kilobytes; + # -1 disables, 0 logs all temp files + log_timezone = 'Etc/UTC' + + #------------------------------------------------------------------------------ + # PROCESS TITLE + #------------------------------------------------------------------------------ + + #cluster_name = '' # added to process titles if nonempty + # (change requires restart) + #update_process_title = on + + + #------------------------------------------------------------------------------ + # STATISTICS + #------------------------------------------------------------------------------ + + # - Query and Index Statistics Collector - + + #track_activities = on + #track_counts = on + #track_io_timing = off + #track_functions = none # none, pl, all + #track_activity_query_size = 1024 # (change requires restart) + #stats_temp_directory = 'pg_stat_tmp' + + + # - Monitoring - + + #log_parser_stats = off + #log_planner_stats = off + #log_executor_stats = off + #log_statement_stats = off + + + #------------------------------------------------------------------------------ + # AUTOVACUUM + #------------------------------------------------------------------------------ + + #autovacuum = on # Enable autovacuum subprocess? 'on' + # requires track_counts to also be on. + #log_autovacuum_min_duration = -1 # -1 disables, 0 logs all actions and + # their durations, > 0 logs only + # actions running at least this number + # of milliseconds. + #autovacuum_max_workers = 3 # max number of autovacuum subprocesses + # (change requires restart) + #autovacuum_naptime = 1min # time between autovacuum runs + #autovacuum_vacuum_threshold = 50 # min number of row updates before + # vacuum + #autovacuum_analyze_threshold = 50 # min number of row updates before + # analyze + #autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum + #autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze + #autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum + # (change requires restart) + #autovacuum_multixact_freeze_max_age = 400000000 # maximum multixact age + # before forced vacuum + # (change requires restart) + #autovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for + # autovacuum, in milliseconds; + # -1 means use vacuum_cost_delay + #autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for + # autovacuum, -1 means use + # vacuum_cost_limit + + + #------------------------------------------------------------------------------ + # CLIENT CONNECTION DEFAULTS + #------------------------------------------------------------------------------ + + # - Statement Behavior - + + #client_min_messages = notice # values in order of decreasing detail: + # debug5 + # debug4 + # debug3 + # debug2 + # debug1 + # log + # notice + # warning + # error + #search_path = '"$user", public' # schema names + #row_security = on + #default_tablespace = '' # a tablespace name, '' uses the default + #temp_tablespaces = '' # a list of tablespace names, '' uses + # only default tablespace + #check_function_bodies = on + #default_transaction_isolation = 'read committed' + #default_transaction_read_only = off + #default_transaction_deferrable = off + #session_replication_role = 'origin' + #statement_timeout = 0 # in milliseconds, 0 is disabled + #lock_timeout = 
0 # in milliseconds, 0 is disabled + #idle_in_transaction_session_timeout = 0 # in milliseconds, 0 is disabled + #vacuum_freeze_min_age = 50000000 + #vacuum_freeze_table_age = 150000000 + #vacuum_multixact_freeze_min_age = 5000000 + #vacuum_multixact_freeze_table_age = 150000000 + #vacuum_cleanup_index_scale_factor = 0.1 # fraction of total number of tuples + # before index cleanup, 0 always performs + # index cleanup + #bytea_output = 'hex' # hex, escape + #xmlbinary = 'base64' + #xmloption = 'content' + #gin_fuzzy_search_limit = 0 + #gin_pending_list_limit = 4MB + + # - Locale and Formatting - + + datestyle = 'iso, mdy' + #intervalstyle = 'postgres' + timezone = 'Etc/UTC' + #timezone_abbreviations = 'Default' # Select the set of available time zone + # abbreviations. Currently, there are + # Default + # Australia (historical usage) + # India + # You can create your own file in + # share/timezonesets/. + #extra_float_digits = 0 # min -15, max 3 + #client_encoding = sql_ascii # actually, defaults to database + # encoding + + # These settings are initialized by initdb, but they can be changed. + lc_messages = 'en_US.utf8' # locale for system error message + # strings + lc_monetary = 'en_US.utf8' # locale for monetary formatting + lc_numeric = 'en_US.utf8' # locale for number formatting + lc_time = 'en_US.utf8' # locale for time formatting + + # default configuration for text search + default_text_search_config = 'pg_catalog.english' + + # - Shared Library Preloading - + + #shared_preload_libraries = '' # (change requires restart) + #local_preload_libraries = '' + #session_preload_libraries = '' + #jit_provider = 'llvmjit' # JIT library to use + + # - Other Defaults - + + #dynamic_library_path = '$libdir' + + + #------------------------------------------------------------------------------ + # LOCK MANAGEMENT + #------------------------------------------------------------------------------ + + #deadlock_timeout = 1s + #max_locks_per_transaction = 64 # min 10 + # (change requires restart) + #max_pred_locks_per_transaction = 64 # min 10 + # (change requires restart) + #max_pred_locks_per_relation = -2 # negative values mean + # (max_pred_locks_per_transaction + # / -max_pred_locks_per_relation) - 1 + #max_pred_locks_per_page = 2 # min 0 + + + #------------------------------------------------------------------------------ + # VERSION AND PLATFORM COMPATIBILITY + #------------------------------------------------------------------------------ + + # - Previous PostgreSQL Versions - + + #array_nulls = on + #backslash_quote = safe_encoding # on, off, or safe_encoding + #default_with_oids = off + #escape_string_warning = on + #lo_compat_privileges = off + #operator_precedence_warning = off + #quote_all_identifiers = off + #standard_conforming_strings = on + #synchronize_seqscans = on + + # - Other Platforms and Clients - + + #transform_null_equals = off + + + #------------------------------------------------------------------------------ + # ERROR HANDLING + #------------------------------------------------------------------------------ + + #exit_on_error = off # terminate session on any error? + #restart_after_crash = on # reinitialize after backend crash? + #data_sync_retry = off # retry or panic on failure to fsync + # data? 
+ # (change requires restart) + + + #------------------------------------------------------------------------------ + # CONFIG FILE INCLUDES + #------------------------------------------------------------------------------ + + # These options allow settings to be loaded from files other than the + # default postgresql.conf. Note that these are directives, not variable + # assignments, so they can usefully be given more than once. + + #include_dir = '...' # include files ending in '.conf' from + # a directory, e.g., 'conf.d' + #include_if_exists = '...' # include file only if it exists + #include = '...' # include file + + + #------------------------------------------------------------------------------ + # CUSTOMIZED OPTIONS + #------------------------------------------------------------------------------ + + # Add settings for extensions here diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/templates/postgresql-tls-secret.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/templates/postgresql-tls-secret.yaml new file mode 100644 index 000000000..7cc4aee04 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/templates/postgresql-tls-secret.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +data: + tls.crt: Q2VydGlmaWNhdGU6CiAgICBEYXRhOgogICAgICAgIFZlcnNpb246IDMgKDB4MikKICAgICAgICBTZXJpYWwgTnVtYmVyOgogICAgICAgICAgICA0NDpkNjpkNjo2Mzo3Yjo2MjoxMjpjZTo3NTo2ZDozZDoxODo0NzplYjo1Nzo2MjplZjphNTo4ZjoyZgogICAgICAgIFNpZ25hdHVyZSBBbGdvcml0aG06IHNoYTI1NldpdGhSU0FFbmNyeXB0aW9uCiAgICAgICAgSXNzdWVyOiBDID0gVVMsIFNUID0gSUwsIE8gPSBJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQsIENOID0gbG9jYWxob3N0CiAgICAgICAgVmFsaWRpdHkKICAgICAgICAgICAgTm90IEJlZm9yZTogRGVjIDE1IDIyOjQ4OjI2IDIwMjAgR01UCiAgICAgICAgICAgIE5vdCBBZnRlciA6IEphbiAxNCAyMjo0ODoyNiAyMDIxIEdNVAogICAgICAgIFN1YmplY3Q6IEMgPSBVUywgU1QgPSBJTCwgTyA9IEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZCwgQ04gPSBsb2NhbGhvc3QKICAgICAgICBTdWJqZWN0IFB1YmxpYyBLZXkgSW5mbzoKICAgICAgICAgICAgUHVibGljIEtleSBBbGdvcml0aG06IHJzYUVuY3J5cHRpb24KICAgICAgICAgICAgICAgIFJTQSBQdWJsaWMtS2V5OiAoMjA0OCBiaXQpCiAgICAgICAgICAgICAgICBNb2R1bHVzOgogICAgICAgICAgICAgICAgICAgIDAwOmQzOmQxOjQ4OjE1OmZmOmY3OjY1OjgwOmU5OmRhOjc5OmFkOjk2OjFkOgogICAgICAgICAgICAgICAgICAgIDhkOjQ2OjY4OmVlOjYzOjM5Ojg2OjdhOmFmOmQxOjUwOmU0OmJhOjU5OmI1OgogICAgICAgICAgICAgICAgICAgIGYzOjJlOmJjOmM5OmVhOjg4OjQzOmM3OjM1OjdiOmU0OjA2OmFlOmM5OmQ2OgogICAgICAgICAgICAgICAgICAgIDJiOjNmOjNkOmNiOmJmOmZiOjlkOmU0OjcyOjk3OjZkOmM2OjI4OjBkOmIxOgogICAgICAgICAgICAgICAgICAgIDYzOmU5OjhmOmFiOjhmOjhjOmQyOmFhOjUzOjBmOmUyOjg1OmRkOmYwOjZmOgogICAgICAgICAgICAgICAgICAgIDk3OmIwOmRmOmIxOmEzOmM0OjdhOjJlOjIyOjVjOmYyOjliOjM5OjE0OjE5OgogICAgICAgICAgICAgICAgICAgIDI0OmRiOjA3OjdiOmNmOmUxOjliOjJhOjViOmZmOmY2OmUzOmQ3OjMzOmRhOgogICAgICAgICAgICAgICAgICAgIDBiOjg0OjhmOjhiOjIxOmZkOjZiOmQzOjAyOjcxOmUwOmU0OjdlOmY0OjE1OgogICAgICAgICAgICAgICAgICAgIDhhOjJiOmRlOmFmOjM5OjRlOjdjOjY5OjU1OjU1OjM4OmRhOjhlOjkyOjU1OgogICAgICAgICAgICAgICAgICAgIGQzOmQ4OmMxOjBlOmVjOjc5OjZjOjQwOjJhOjVkOmI1Ojg4OjI0OjVlOjFkOgogICAgICAgICAgICAgICAgICAgIDcyOmYwOjZlOmMxOmY3OmRmOjg1OjVhOmNjOjM0OjYxOjk2Ojk4OjE0OjI0OgogICAgICAgICAgICAgICAgICAgIGZmOmRmOjE2OjBmOmExOmZmOmJjOmY3OmY5OjlhOjNjOjU4OjcwOmQxOmJiOgogICAgICAgICAgICAgICAgICAgIDAzOmVkOjE4OjA2OmJmOjczOjM4OmZhOjY0OjRhOmExOmNhOjAzOjAzOjRlOgogICAgICAgICAgICAgICAgICAgIDYzOjE4OmM4OmMzOjRlOjc0OjA4OjA3OmJjOjQxOmE2OjgyOjRlOjRhOmE4OgogICAgICAgICAgICAgICAgICAgIDdmOmFkOmJhOmYxOjhmOjY2OjIyOmNlOmUwOjQ2OjVkOmRlOmEwOjA3OjMzOgogICAgICAgICAgICAgICAgICAgIDE3OjI1Ojc0OjQ5OjBlOmNjOmRmOjkyOmQzOjMwOjM1OmRhOjYwOjJjOjdlOgogI
CAgICAgICAgICAgICAgICAgIDE4OjVlOjg5OmQyOjhmOmY3OjZkOjgzOjE3OjJkOjVlOjczOjQyOmZkOjBkOgogICAgICAgICAgICAgICAgICAgIDc2Ojc1CiAgICAgICAgICAgICAgICBFeHBvbmVudDogNjU1MzcgKDB4MTAwMDEpCiAgICAgICAgWDUwOXYzIGV4dGVuc2lvbnM6CiAgICAgICAgICAgIFg1MDl2MyBTdWJqZWN0IEtleSBJZGVudGlmaWVyOiAKICAgICAgICAgICAgICAgIDFGOjJFOjlBOkNEOjg2OjhDOjRDOjU4Ojg4OkQxOkYxOjFGOjQxOkM3OjRBOjk4OjgxOkM3OjY0OjhECiAgICAgICAgICAgIFg1MDl2MyBBdXRob3JpdHkgS2V5IElkZW50aWZpZXI6IAogICAgICAgICAgICAgICAga2V5aWQ6MUY6MkU6OUE6Q0Q6ODY6OEM6NEM6NTg6ODg6RDE6RjE6MUY6NDE6Qzc6NEE6OTg6ODE6Qzc6NjQ6OEQKCiAgICAgICAgICAgIFg1MDl2MyBCYXNpYyBDb25zdHJhaW50czogY3JpdGljYWwKICAgICAgICAgICAgICAgIENBOlRSVUUKICAgIFNpZ25hdHVyZSBBbGdvcml0aG06IHNoYTI1NldpdGhSU0FFbmNyeXB0aW9uCiAgICAgICAgIDQ5Ojc0Ojc2OmM0OmVkOmMxOmU2OjdkOmRjOjA3OjY2Ojg5OjFlOjg4Ojk3OjgyOjAzOjQ3OgogICAgICAgICA2Mzo2YjowYjpiMTowZTo3ODo1MDo0MDoxNDpjNDpkNzplYToxNzowMTozNjo3OTo0NjphZToKICAgICAgICAgNGU6MzM6ZTc6MWU6OTQ6OWI6NTg6YmY6OTk6OGQ6MDc6YjU6NDY6MWQ6Mjk6ZmY6NTY6ZDc6CiAgICAgICAgIGZjOmY2OmI5OmNjOjYwOmRmOjdkOjE5OjU4OmJiOjc2OmY1OjdkOjVhOjlkOjM2OjU2OjMxOgogICAgICAgICBlOTpiNDowYTo5NjplMDpiYjo0OTo1YTpmNDpkOTo1MDplMzo1YzpjZTo4Nzo2NzpjOToyMjoKICAgICAgICAgNTE6NjQ6MWU6YTY6ZWE6NTA6NjY6ZDg6Mzc6MjU6ODE6Yzg6OTc6MmY6NDI6MWM6YTk6M2Y6CiAgICAgICAgIDVkOmVjOjA1OjFjOjQ4OjE2Ojk3OmE3OmQwOmZhOjI5Ojg5OmNmOjEzOjk4OmQwOjBhOjNjOgogICAgICAgICAxOTowZjpjMzpkMTpkYzo1MTozNjo5ZDo4ZTowMDo1YToyMDo5Njo1ZDo1NzoxNjo5YTpkMToKICAgICAgICAgNmQ6ODc6ODc6NDk6YzE6MjU6YjQ6ZDI6Y2Y6MzI6MzM6YjM6MTc6ZGY6Njg6NWM6ZWQ6MzQ6CiAgICAgICAgIGIxOjQ0OmM3OjM4OjM2OmJhOjQ5OjYwOjI2OjQzOmQzOjFkOjE5OmIyOmU1OmQ1OmY0OmY1OgogICAgICAgICBhYzplNTpiNjo0NzplYzplMTowYzpkODo0ZDo2NTo0OToyMTo2Zjo1MDphNzo0NzoyNjphZjoKICAgICAgICAgZGE6MTU6NjE6MzU6YTg6MTI6YmU6MTk6YTg6NDE6MzI6MDY6MGE6NDY6YjI6ZWU6Y2Y6N2M6CiAgICAgICAgIDAzOjNhOjgzOjIxOjFmOjE5OmY1OjE1OmVkOjdmOjNjOjhlOmY5OmNkOmJkOjg0OjQ0OjljOgogICAgICAgICBiZjo0OToyZDo0MDo0ZTphZjplNzo2YjoyMDozNzo2NzpkMDoxMTpjYTpkOTo1ODpjNzo2ODoKICAgICAgICAgMTE6Yjc6ZjA6MGYKLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURnekNDQW11Z0F3SUJBZ0lVUk5iV1kzdGlFczUxYlQwWVIrdFhZdStsank4d0RRWUpLb1pJaHZjTkFRRUwKQlFBd1VURUxNQWtHQTFVRUJoTUNWVk14Q3pBSkJnTlZCQWdNQWtsTU1TRXdId1lEVlFRS0RCaEpiblJsY201bApkQ0JYYVdSbmFYUnpJRkIwZVNCTWRHUXhFakFRQmdOVkJBTU1DV3h2WTJGc2FHOXpkREFlRncweU1ERXlNVFV5Ck1qUTRNalphRncweU1UQXhNVFF5TWpRNE1qWmFNRkV4Q3pBSkJnTlZCQVlUQWxWVE1Rc3dDUVlEVlFRSURBSkoKVERFaE1COEdBMVVFQ2d3WVNXNTBaWEp1WlhRZ1YybGtaMmwwY3lCUWRIa2dUSFJrTVJJd0VBWURWUVFEREFscwpiMk5oYkdodmMzUXdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFEVDBVZ1YvL2RsCmdPbmFlYTJXSFkxR2FPNWpPWVo2cjlGUTVMcFp0Zk11dk1ucWlFUEhOWHZrQnE3SjFpcy9QY3UvKzUza2NwZHQKeGlnTnNXUHBqNnVQak5LcVV3L2loZDN3YjVldzM3R2p4SG91SWx6eW16a1VHU1RiQjN2UDRac3FXLy8yNDljegoyZ3VFajRzaC9XdlRBbkhnNUg3MEZZb3IzcTg1VG54cFZWVTQybzZTVmRQWXdRN3NlV3hBS2wyMWlDUmVIWEx3CmJzSDMzNFZhekRSaGxwZ1VKUC9mRmcraC83ejMrWm84V0hEUnV3UHRHQWEvY3pqNlpFcWh5Z01EVG1NWXlNTk8KZEFnSHZFR21nazVLcUgrdHV2R1BaaUxPNEVaZDNxQUhNeGNsZEVrT3pOK1MwekExMm1Bc2ZoaGVpZEtQOTIyRApGeTFlYzBMOURYWjFBZ01CQUFHalV6QlJNQjBHQTFVZERnUVdCQlFmTHByTmhveE1XSWpSOFI5QngwcVlnY2RrCmpUQWZCZ05WSFNNRUdEQVdnQlFmTHByTmhveE1XSWpSOFI5QngwcVlnY2RralRBUEJnTlZIUk1CQWY4RUJUQUQKQVFIL01BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQkpkSGJFN2NIbWZkd0hab2tlaUplQ0EwZGphd3V4RG5oUQpRQlRFMStvWEFUWjVScTVPTStjZWxKdFl2NW1OQjdWR0hTbi9WdGY4OXJuTVlOOTlHVmk3ZHZWOVdwMDJWakhwCnRBcVc0THRKV3ZUWlVPTmN6b2RueVNKUlpCNm02bEJtMkRjbGdjaVhMMEljcVQ5ZDdBVWNTQmFYcDlENktZblAKRTVqUUNqd1pEOFBSM0ZFMm5ZNEFXaUNXWFZjV210RnRoNGRKd1NXMDBzOHlNN01YMzJoYzdUU3hSTWM0TnJwSgpZQ1pEMHgwWnN1WFY5UFdzNWJaSDdPRU0yRTFsU1NGdlVLZEhKcS9hRldFMXFCSytHYWhCTWdZS1JyTHV6M3dECk9vTWhIeG4xRmUxL1BJNzV6YjJFUkp5
L1NTMUFUcS9uYXlBM1o5QVJ5dGxZeDJnUnQvQVAKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= + tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMDlGSUZmLzNaWURwMm5tdGxoMk5SbWp1WXptR2VxL1JVT1M2V2JYekxyeko2b2hECnh6Vjc1QWF1eWRZclB6M0x2L3VkNUhLWGJjWW9EYkZqNlkrcmo0elNxbE1QNG9YZDhHK1hzTit4bzhSNkxpSmMKOHBzNUZCa2syd2Q3eitHYktsdi85dVBYTTlvTGhJK0xJZjFyMHdKeDRPUis5QldLSzk2dk9VNThhVlZWT05xTwprbFhUMk1FTzdIbHNRQ3BkdFlna1hoMXk4RzdCOTkrRldzdzBZWmFZRkNULzN4WVBvZis4OS9tYVBGaHcwYnNECjdSZ0d2M000K21SS29jb0RBMDVqR01qRFRuUUlCN3hCcG9KT1NxaC9yYnJ4ajJZaXp1QkdYZDZnQnpNWEpYUkoKRHN6Zmt0TXdOZHBnTEg0WVhvblNqL2R0Z3hjdFhuTkMvUTEyZFFJREFRQUJBb0lCQURhWWJyZ2M3YXRmK3VheApEaWp2SFFiVTdQenVTdGM4a2ZzRHVYUitEVnd5bE9pNmpwMitEMXpLekNxQjVVdTdwZFNxQ2h4ajNOd1NneWhrClhKaEt5N0dJWHBSQUxJdjZiU1lYM1VWZG91L1BLSjdUaEptVG9MYXBkSEp3RDEyWmpPRHlMWnQ1Um5LNjlOVUsKR3BaOE4xcC8rdEk0a3ZCZXpPcFp6MWc1L3A4M1F6MTVhK3hmZ2lUWHRqYkI2U2pKUlk3QWF5aWZhc3hxb1RFRApuaFd5Z0I3aC8vUXhXZXpTTm1XdmhrZWJYQm10QmtTVldvUFRRUERBOTZBRTdVZXd5VTNEbXlsTDdjNUVaTEFsCkpHMnhCcUhyU1NwaWlEd2J0Z0s5dmJXOVF0bzBSMHcvanh6ZWZOT1cwUHlOOTZwNHdBcFdCMjNCYTk2R2d2TUMKS0gycDRkRUNnWUVBOUdYUUgxTFltTjYwcXA5SzR6bXY0WjVaZmVTekdwZ0dMd2ZoRjVHclZJQ0F1elQ2QTE3SgpqVVRoWnlEZE9iUW9VeURrT0FUT2QybFpJUHB1YnNwdGZ6Z1M1bmNlYUxvSk1NYjhxYS9qYjc1SVJkdEc3R3N4Cjl6UUpNQXJJbEFSOTZWeFJYejdJTittMDMzQTZiRXVMcUxvYW5Mb0ZnaTZqM3p4NUVwRm90ck1DZ1lFQTNkK0UKdnRDK2lvRml1c1QzUUMzbnp2VlRlb0k2R1pzUnZiMnFxa1BRdFBCajVoMEtwTUFoVEhsSElpVHNNRE9xZ21yagppbHZkR1MvaDFROVpmNy9ldGwrT2hkVUdWQjdtMDAzdm1LWGFKclBmRHNHYTlaVWo4OXVHb3hmUW9LanhLNWhxCk5tNG5EOFpuU3pSVll0NDJlWW1NVjRzL2JacWMzbE5kdkVaRDhqY0NnWUE5V2lHNCswOHNjUnZoaVVOL2IwZmIKMTZpWGxnWHdNeUc2Ukx3WThwU1VEZjVEQUxXU2l3VUYxYmpQN3N3YVpFT0xPc0tQM1lVSExRY1c1RWM4d014awpGMnVITjNnR3lrenNWY2V2d1Z2Uy9XMmZPOEMrTU5yR04rWG1qWTUwdWZ2eHpSOFFUZTV0T3RvUkRWZGRRRW02Ci9aMFlvd29tK0JaalFBY1V4alFIU1FLQmdRRFdlckUvS0ZsWldQUVE2a0M5aU9MU1hMTWk5V3Ftd0JHcFl3VHMKN1B0L1BmYkVSd1MzK0liMy96RDFYODMyVnF1WXdTMU8zYmpoRlRseEZoS0ZmUHdWUGxCdkxWdWR5L1dGQkl6OQoraTNsUmZIMXVOQk1ZS3pObWtRUHV3RFJuaDdzN3J5VisydkZReDB0Uk56WjQwZXp1M1N3V0FxcnNFKytWOGFBCkwwaVZod0tCZ0d3eHA5SmlHVEgwOXZkaDEzR285cjJ3ZGRkdjlERElkK2hmTUlvci9RaUM4bDVqRG9zRmY3d0sKYVVmcGZ3NzRkaFlsaG42RWlneGl5UU5ObkFzby9ZbjhVeUtNSG96VUN0L3ZuTk1IZXFmUmxrN3U3MUVYdllKZwpoamdPTHVrem53N1FXdG85V2ZTb1QwdXY4ZnJxWUxtSFk2Zk9OSzQ0eVE2bXlJeTU4c21uCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg== +kind: Secret +metadata: + name: postgresql-tls-secret + namespace: "{{ .Values.infrastructure.namespace }}" +type: kubernetes.io/tls \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/templates/secret.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/templates/secret.yaml new file mode 100644 index 000000000..1c2bf8291 --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/templates/secret.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: Secret +type: Opaque +metadata: + name: postgres + namespace: "{{ .Values.infrastructure.namespace }}" +data: + postgres_default_user: {{ .Values.infrastructure.username | b64enc }} + postgres_password: {{ .Values.infrastructure.password | b64enc }} \ No newline at end of file diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/templates/service.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/templates/service.yaml new file mode 100644 index 000000000..cdf72b7dc --- /dev/null +++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/templates/service.yaml @@ 
-0,0 +1,14 @@
+---
+# This is the internal load balancer, routing traffic to the PostgreSQL Pod
+apiVersion: v1
+kind: Service
+metadata:
+  name: "{{ .Values.infrastructure.appName }}-external"
+  namespace: {{ .Values.infrastructure.namespace }}
+spec:
+  type: "{{ .Values.service.type }}"
+  ports:
+    - port: {{ .Values.service.port }}
+      protocol: {{ .Values.service.protocol }}
+  selector:
+    app: {{ .Values.application.labelValue }}
diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/values.yaml b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/values.yaml
new file mode 100644
index 000000000..eb358ca3e
--- /dev/null
+++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/PostgreSQL116/values.yaml
@@ -0,0 +1,34 @@
+
+replicaCount: 1
+
+application:
+  labelValue: postgres
+
+infrastructure:
+  namespace: postgresql
+  appName: postgres
+  username: postgres
+  password: "OCPHack8"
+  dataVolume: "/var/lib/postgresql"
+  dataPath: "/var/lib/postgresql/data"
+
+image:
+  name: postgres
+  pullPolicy: IfNotPresent
+  tag: "11.6"
+
+service:
+  type: LoadBalancer
+  port: 5432
+  protocol: TCP
+
+resources:
+  limits:
+    cpu: 1000m
+    memory: 4096Mi
+  requests:
+    cpu: 750m
+    memory: 2048Mi
+  volume:
+    size: 5Gi
+    storageClass: managed-premium
\ No newline at end of file
diff --git a/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/README.md b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/README.md
new file mode 100644
index 000000000..b8f36b72f
--- /dev/null
+++ b/055-ChaosStudio4AKS/Student/Resources/WestUS-AKS/HelmCharts/README.md
@@ -0,0 +1,269 @@
+**[Home](../../../README.md)** - [Prerequisites >](../../../00-prereqs.md)
+
+## Setting up Kubernetes
+
+NOTE: YOU DO NOT NEED TO RUN THROUGH THE STEPS IN THIS FILE IF YOU ALREADY PROVISIONED AKS.
+
+The steps to deploy the AKS cluster, scale it up, and scale it down are available in the README file for that section: [README](../ARM-Templates/README.md).
+
+You should not have to do the provisioning again, since you already provisioned AKS using the create-cluster.sh script in [Prerequisites >](../../../00-prereqs.md).
+
+## PostgreSQL Setup on Kubernetes
+
+These instructions provide guidance on how to set up PostgreSQL 11 on AKS.
+
+This requires Helm 3 and the latest version of Azure CLI to be installed. These are pre-installed in Azure Cloud Shell, but you will need to install or download them if you are using a different environment.
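+
+If you are working from your own workstation, a quick way to confirm the tooling is available is shown below (standard `helm`, `az`, and `kubectl` commands; the version numbers you see will vary):
+
+```bash
+
+# Confirm Helm 3.x is installed
+helm version --short
+
+# Confirm the Azure CLI is installed and reasonably current
+az version
+
+# Confirm kubectl is installed (cluster credentials are obtained later with az aks get-credentials)
+kubectl version --client
+
+```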
+
+## Installing the PostgreSQL Database
+
+```bash
+
+# Navigate to the Helm Charts
+#cd Resources/HelmCharts
+
+# Install the Kubernetes Resources
+helm upgrade --install wth-postgresql ./PostgreSQL116 --set infrastructure.password=OCPHack8
+
+```
+
+## Checking the Service IP Addresses and Ports
+
+```bash
+
+kubectl -n postgresql get svc
+
+```
+**Important: you will need to copy the postgres-external Cluster-IP value to use for the dataSourceURL in later steps**
+
+## Checking the Pod for Postgres
+
+```bash
+
+kubectl -n postgresql get pods
+
+```
+Wait a few minutes until the pod status shows as Running.
+
+## Getting into the Container
+
+```bash
+
+# Use this to connect to the database server SQL prompt
+
+kubectl -n postgresql exec deploy/postgres -it -- /usr/bin/psql -U postgres
+
+```
+Run the following commands to check the PostgreSQL version and create the wth database (warning: the application deployment will fail if you don't do this):
+
+```sql
+
+-- Check the DB version
+SELECT version();
+
+-- Create the wth database
+CREATE DATABASE wth;
+
+-- List databases. Notice that there is a database called wth
+\l
+
+-- Create the contosoapp user that will own the application schema
+CREATE ROLE CONTOSOAPP WITH LOGIN NOSUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION PASSWORD 'OCPHack8';
+
+-- Connect to wth and list its tables (empty for now)
+\c wth
+\dt
+
+-- Exit out of the psql prompt
+exit
+
+```
+
+## Uninstalling PostgreSQL from Kubernetes (only if you need to clean up and try the Helm deployment again)
+
+Use this to uninstall the PostgreSQL 11 instance from the Kubernetes cluster:
+
+```bash
+
+# Uninstall the database server. To install it again, re-run the helm upgrade command above
+helm uninstall wth-postgresql
+
+```
+
+## Installing MySQL
+
+```bash
+
+# Install the Kubernetes Resources
+helm upgrade --install wth-mysql ./MySQL57 --set infrastructure.password=OCPHack8
+
+```
+
+## Checking the Service IP Addresses and Ports
+
+```bash
+
+kubectl -n mysql get svc
+
+```
+**Important: you will need to copy the mysql-external Cluster-IP value to use for the dataSourceURL in later steps**
+
+## Checking the Pod for MySQL
+
+```bash
+
+kubectl -n mysql get pods
+
+```
+
+## Getting into the Container
+
+```bash
+
+# Use this to connect to the database server
+
+kubectl -n mysql exec deploy/mysql -it -- /usr/bin/mysql -u root -pOCPHack8
+
+```
+
+Run the following commands to check the MySQL version and create the wth database (warning: the application deployment will fail if you don't do this):
+
+```sql
+
+-- Check the MySQL DB version
+SELECT version();
+
+-- List databases
+SHOW DATABASES;
+
+-- Create the wth database
+CREATE DATABASE wth;
+
+-- Create the contosoapp user that will own the application data for migration
+CREATE USER if not exists 'contosoapp' identified by 'OCPHack8';
+
+GRANT SUPER on *.* to contosoapp identified by 'OCPHack8'; -- may not be needed
+
+GRANT ALL PRIVILEGES ON wth.* to contosoapp;
+
+-- Show tables in the wth database (empty for now)
+USE wth;
+SHOW TABLES;
+
+-- Exit out of the mysql prompt
+exit
+
+```
+
+## Uninstalling MySQL from Kubernetes (only if you need to clean up and try the Helm deployment again)
+
+Use this to uninstall the MySQL instance from the Kubernetes cluster:
+
+```bash
+
+# Uninstall the database server. To install it again, re-run the helm upgrade command above
+helm uninstall wth-mysql
+
+```
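+
+Before moving on to the application deployment, you will need the Cluster-IP values noted above for the dataSourceURL settings. As an optional shortcut, the sketch below pulls them with kubectl; it assumes the service names created by the charts above (postgres-external and mysql-external), so only query the database(s) you actually installed:
+
+```bash
+
+# Cluster-IP of the PostgreSQL service (used in jdbc:postgresql://<ip>:5432/wth)
+kubectl -n postgresql get svc postgres-external -o jsonpath='{.spec.clusterIP}{"\n"}'
+
+# Cluster-IP of the MySQL service (used in jdbc:mysql://<ip>:3306/wth?...)
+kubectl -n mysql get svc mysql-external -o jsonpath='{.spec.clusterIP}{"\n"}'
+
+```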
+
+## Deploying the Web Application
+
+First we navigate to the Helm charts directory:
+
+```bash
+
+cd Resources/HelmCharts
+
+```
+
+We can deploy the application in two ways, and as part of this hack you will need to use both:
+
+* Backed by MySQL Database
+* Backed by PostgreSQL Database
+
+For the MySQL database setup, the developer/operator can make changes to the values-mysql.yaml file.
+
+For the PostgreSQL database setup, the developer/operator can make changes to the values-postgresql.yaml file.
+
+In the yaml files we can specify the database type (appConfig.databaseType) as "mysql" or "postgres" and then set the JDBC URL, username and password under the appConfig object.
+
+In the globalConfig object we can change the merchant ID, public keys and other values as needed, but you can generally leave those alone as they apply to both the MySQL and PostgreSQL deployment options.
+
+```yaml
+appConfig:
+  databaseType: "databaseType goes here" # mysql or postgres
+  dataSourceURL: "jdbc url goes here" # database is either mysql or postgres - jdbc:database://ip-address/wth
+  dataSourceUser: "user name goes here" # database username mentioned in values-postgresql or values-mysql yaml - contosoapp
+  dataSourcePassword: "password goes here" # your database password goes here - OCPHack8
+  webPort: 8083 # the port the app listens on
+  webContext: "pizzeria" # the application context http://hostname:port/webContext
+```
+
+The developer or operator can specify the '--values'/'-f' flag multiple times.
+When more than one values file is specified, priority will be given to the last (right-most) file specified in the sequence.
+For example, if both values.yaml and override.yaml contained a key called 'namespace', the value set in override.yaml would take precedence.
+
+The command below shows the pattern: settings come from the base values file and are then overridden by the database-specific values file.
+
+```bash
+
+helm upgrade --install release-name ./HelmChartFolder -f ./HelmChartFolder/values.yaml -f ./HelmChartFolder/override.yaml
+
+```
+
+To deploy the app backed by MySQL, run the following command after you have edited the values file to match your desired database type:
+
+```bash
+
+helm upgrade --install mysql-contosopizza ./ContosoPizza -f ./ContosoPizza/values.yaml -f ./ContosoPizza/values-mysql.yaml
+
+```
+
+To deploy the app backed by PostgreSQL, run the following command after you have edited the values file to match your desired database type:
+
+```bash
+
+helm upgrade --install postgres-contosopizza ./ContosoPizza -f ./ContosoPizza/values.yaml -f ./ContosoPizza/values-postgresql.yaml
+
+```
+
+If you wish to uninstall the app, you can use one of the following commands:
+
+```bash
+
+# Use this to uninstall, if you are using MySQL as the database
+helm uninstall mysql-contosopizza
+
+# Use this to uninstall, if you are using PostgreSQL as the database
+helm uninstall postgres-contosopizza
+
+```
+
+After the apps have booted up, you can find out their service addresses, ports and status as follows:
+
+```bash
+
+# get service ports and IP addresses
+kubectl -n {infrastructure.namespace goes here} get svc
+
+# get the pods running the app
+kubectl -n {infrastructure.namespace goes here} get pods
+
+# view the last 5000 lines of the application logs
+kubectl -n {infrastructure.namespace goes here} logs deploy/contosopizza --tail=5000
+
+# example for ports and services
+kubectl -n {infrastructure.namespace goes here} get svc
+
+```
+
+Verify that the Contoso Pizza application is running on AKS:
+
+```bash
+
+# Browse to the application, replacing {external_ip_contoso_app} with the external IP of the application's service
+
+http://{external_ip_contoso_app}:8081/pizzeria/
+```
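+
+If you prefer to script that check, the sketch below is one possibility. It assumes the application service follows the same `<appName>-external` naming used by the database charts (i.e. contosopizza-external in the contosopizza namespace) and that the app answers on the port shown above, so adjust the names and port to match your deployment.
+
+```bash
+
+# Look up the external (LoadBalancer) IP of the application service
+EXTERNAL_IP=$(kubectl -n contosopizza get svc contosopizza-external \
+  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+
+# Request the home page and print the HTTP status code (expect 200)
+curl -s -o /dev/null -w "%{http_code}\n" "http://${EXTERNAL_IP}:8081/pizzeria/"
+
+```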