[New Hack] 055-ChaosStudio4AKS (#588)

* Created WhatTheHack template stub

* Update Challenge-00.md

Initial creation

* Update Challenge-00.md

Initial Creation

* Update Challenge-01.md

Initial Creation of Challenge

* Update Challenge-00.md

Update link to next challenge

* Update Challenge-02.md

Initial creation of challenge

* Update Challenge-03.md

Initial creation of challenge

* Update Challenge-03.md

Update challenge number

* Create Challenge-04.md

* Update README.md

Initial creation of readme

* Create ContosoPizzaApp

Creation of App folder

* Delete ContosoPizzaApp

* Create ContosoPizzaApp

* Add files via upload

Initial App Upload

* Delete ContosoPizzaApp

delete dummy file

* Update README.md

Updated links

* Update README.md

added ability to bring your own application

* Update Challenge-00.md

Added use your own AKS application for this hack

* Update Challenge-01.md

Updated Success Criteria

* Update Challenge-00.md

Added K8s language

* Update Challenge-04.md

Added DevOps and Pizza Application language

* Delete Lectures.pptx

* Add files via upload

Uploaded Lecture

* Update Challenge-01.md

Updated Title

* Update Challenge-02.md

Updated title

* Update Challenge-02.md

Updated title

* Update Challenge-03.md

* Update Challenge-04.md

Updated title and application language

* Update README.md

Added sections

* Update README.md

change type of access to contributor

* Update README.md

Intermediate understanding of AKS / K8s

* Add files via upload

added zip file

* Update README.md

updated prerequisites

* Update README.md

added to optional requirements

* Update Solution-00.md

* Update Solution-00.md

creation of guide

* Update Solution-00.md

Added NSG for Vnet

* Update Solution-01.md

initial creation of challenge

* Update Challenge-01.md

removed a tip for scaling the PODs

* Update Solution-01.md

Added command box

* Update Solution-01.md

* Update Solution-01.md

Updated JSON spec

* Update Solution-01.md

* Update Solution-01.md

* Update Solution-01.md

Added kubectl commands

* Update Solution-01.md

Updated kubectl commands

* Update Solution-01.md

updated bullets

* Update Challenge-01.md

Updated bullets

* Update Solution-02.md

draft challenge 2

* Update Challenge-02.md

removed tips and moved to coach guide

* Update Solution-01.md

changed deployment to statefulset

* Update Challenge-01.md

update tips for statefulset

* Update Solution-02.md

Updated workflow bullets

* Update Solution-02.md

Updated workflow bullets

* Update Challenge-03.md

Updated Tips

* Update Solution-03.md

Added traffic manager profile info

* Update Solution-03.md

Added Geo Peeker info

* Update Challenge-02.md

update success criteria

* Update Challenge-03.md

Updated success criteria

* Update README.md

Updated Agenda

* Update README.md

Added Day 2

* Create Solution-04.md

* Update Solution-04.md

* Update README.md

* Update Solution-03.md

added navigation setting

* Update README.md

removed repo contains section

* Delete resources.zip

* Create EastUS-AKS

* Delete EastUS-AKS

* Create test

* Delete test

* Add files via upload

* Add files via upload

* Delete xxx-ChaosStudio4AKS/Student/Resources/EastUS-AKS directory

* Delete xxx-ChaosStudio4AKS/Student/Resources/WestUS-AKS directory

* Added AKS files for East-US deployment

* West-US deployment files

* Delete xxx-ChaosStudio4AKS/ContosoPizzaApp directory

* Update README.md

updated Title of hack

* Update README.md

Removed XXX

* Update README.md

* Update README.md

* Update README.md

* Create 55 -ChaosStudio4AKS

* Add files via upload

* Delete Lectures.pptx?raw=true

* Update Solution-02.md

* Lecture

* Lecture

* updated punctuation

* format

* name change on hack

* updated AKS version

* delete zip file

* Lectures.pptx raw

* removed raw pptx

* added raw pptx

* removed non raw pptx

* fixing raw issue

* removed old raw file

* Delete xxx-ChaosStudio4AKS directory

removed old XXX-ChaosStudio4AKS

* Delete 55 -ChaosStudio4AKS

* Making typo updates to the hack

* Update README.md

added question mark

* Update Challenge-00.md

added  00

* Update Solution-04.md

added [Optional] Injecting Chaos into your CI/CD pipeline

* Update README.md

Prerequisites - Ready, Set, GO! to challenge 00 to match

* Update Challenge-02.md

change to AZ in title versus Availability Zone

* Update README.md

updated title for [Optional] Injecting Chaos into your CI/CD pipeline

* Update README.md

added [

* Update Solution-00.md

change - to :

* Update Solution-01.md

change - to :

* Update Solution-02.md

change - to :

* Update Solution-03.md

change - to :

* Update Solution-04.md

change - to :

* Update Challenge-01.md

change - to :

* Update Challenge-02.md

change - to :

* Update Challenge-03.md

change - to :

* Update Challenge-04.md

change - to :

* Update README.md

Updates Day 2 to show 4 hour duration

* Update Challenge-03.md

added "Verify application is available after WestUS region is offline"

* Update Challenge-01.md

- If your application went offline, what change could you make to the application?
- Rerun the experiment and verify if the change was successful

* Adding solution to challenge 4 and updating some verbiage

* Create .wordlist.txt

* Update .wordlist.txt

* Update Solution-00.md

* Update Solution-01.md

* Update .wordlist.txt

* Update .wordlist.txt

* Update Solution-02.md

* Update .wordlist.txt

* Update Solution-03.md

* Update Solution-03.md

* Update .wordlist.txt

* Update Solution-00.md

* Update README.md

* Update Challenge-00.md

* Update Challenge-01.md

* Update Challenge-03.md

* Update Challenge-04.md

* Update Challenge-00.md

* Update .wordlist.txt

* Update Challenge-01.md

* Update README.md

* Update README.md

* Update README.md

* Update .wordlist.txt

* Update Solution-04.md

* Update README.md

* Update Solution-00.md

added "can be done"

* Update Solution-01.md

* Update Solution-01.md

* Update Solution-01.md

fixed typos

* Update Solution-01.md

* Update Challenge-00.md

Fixed some typos

* Update Challenge-01.md

Fixed typos

* Update Challenge-01.md

fixed typo

* Update Challenge-01.md

* Update Challenge-02.md

Fixed typos

* Update Challenge-03.md

fixed typos

* Update Challenge-04.md

fixed typos

---------

Co-authored-by: GitHub Actions Bot <>
Co-authored-by: Andy Huang <54148527+Whowong@users.noreply.github.com>
Co-authored-by: Pete Rodriguez <perktime@users.noreply.github.com>
This commit is contained in:
Jerry@MSFT 2023-10-24 14:52:16 -04:00 committed by GitHub
Parent ca8387c72e
Commit d4c3e452fb
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
92 changed files: 5622 additions and 0 deletions

@ -0,0 +1,18 @@
contosoappmysql
PizzaApp
Rhoads
Falgout
EastUS
PizzaAppEastUS
PizzaAppWestUS
namespaces
TTL
GeoPeeker
instanceID
hangry
PizzeriaApp
dataSourceURL
appConfig
databaseType
globalConfig
ChaosStudio

Binary data
055-ChaosStudio4AKS/Coach/Lectures.pptx Normal file

Binary file not shown.

@ -0,0 +1,74 @@
# What The Hack - ChaosStudio4AKS - Coach Guide
## Introduction
Welcome to the coach's guide for the ChaosStudio4AKS What The Hack. Here you will find links to specific guidance for coaches for each of the challenges.
This hack includes an optional [lecture presentation](Lectures.pptx) that features short presentations to introduce key topics associated with each challenge. It is recommended that the host present each short presentation before attendees kick off that challenge.
**NOTE:** If you are a Hackathon participant, this is the answer guide. Don't cheat yourself by looking at these during the hack! Go learn something. :)
## Coach's Guides
- Challenge 00: **[Prerequisites - Ready, Set, GO!](./Solution-00.md)**
- Prepare your workstation to work with Azure.
- Challenge 01: **[Is your Application ready for the Super Bowl?](./Solution-01.md)**
- How does your application handle failure during large scale events?
- Challenge 02: **[My AZ burned down, now what?](./Solution-02.md)**
- Can your application survive an Azure outage of 1 or more Availability Zones?
- Challenge 03: **[Godzilla takes out an Azure region!](./Solution-03.md)**
- Can your application survive a region failure?
- Challenge 04: **[Injecting Chaos into your pipeline](./Solution-04.md)**
- Optional challenge, using Chaos Studio experiments in your CI/CD pipeline
## Coach Prerequisites
This hack has pre-reqs that a coach is responsible for understanding and/or setting up BEFORE hosting an event. Please review the [What The Hack Hosting Guide](https://aka.ms/wthhost) for information on how to host a hack event.
The guide covers the common preparation steps a coach needs to do before any What The Hack event, including how to properly configure Microsoft Teams.
### Student Resources
Before the hack, it is the Coach's responsibility to download and package up the contents of the `/Student/Resources` folder of this hack into a "Resources.zip" file. The coach should then provide a copy of the Resources.zip file to all students at the start of the hack.
Always refer students to the [What The Hack website](https://aka.ms/wth) for the student guide: [https://aka.ms/wth](https://aka.ms/wth)
**NOTE:** Students should **not** be given a link to the What The Hack repo before or during a hack. The student guide does **NOT** have any links to the Coach's guide or the What The Hack repo on GitHub.
### Additional Coach Prerequisites (Optional)
None are required for this hack.
## Azure Requirements
This hack requires students to have access to an Azure subscription where they can create and consume Azure resources. These Azure requirements should be shared with a stakeholder in the organization that will be providing the Azure subscription(s) that will be used by the students.
- Azure subscription with contributor access
- Visual Studio Code terminal or Azure Shell
- Latest Azure CLI (if not using Azure Shell)
- Chaos Studio, Azure Kubernetes Service (AKS) and Traffic Manager services will be used in this hack
## Suggested Hack Agenda
- Day 1
- Challenge 0 (1.5 hours)
- Challenge 1 (2 hours)
  - Challenge 2 (1 hour)
  - Challenge 3 (1 hour)
- Day 2
- Challenge 4 (4 hours)
## Repository Contents
_The default files & folders are listed below. You may add to this if you want to specify what is in additional sub-folders you may add._
- `./Coach`
- Coach's Guide and related files
- `./Coach/Solutions`
- Solution files with completed example answers to a challenge
- `./Student`
- Student's Challenge Guide
- `./Student/Resources`
  - Resource files, sample code, scripts, etc. meant to be provided to students. (Must be packaged up by the coach and provided to students at start of event)

@ -0,0 +1,20 @@
# Challenge 00: Prerequisites - Ready, Set, GO! - Coach's Guide
**[Home](./README.md)** - [Next Solution >](./Solution-01.md)
## Notes & Guidance
The student will need an Azure subscription with "Contributor" permissions.
The entirety of this hack's challenges can be done using the [Azure Cloud Shell](#work-from-azure-cloud-shell) in a web browser (fastest path), or you can choose to install the necessary tools on your [local workstation (Windows/WSL, Mac, or Linux)](#work-from-local-workstation).
We recommend installing the tools on your workstation.
- The AKS "contosoappmysql" web front end has a public IP address that you can connect to.
- If this is an internal AIRS account, keep the security auto-bot happy and create a Network Security Group on each VNet, called PizzaAppEastUS / PizzaAppWestUS, with an allow rule for TCP port 8081 at priority 200 and a deny rule for TCP port 3306 at priority 210
- The student will need this NSG for a future challenge
```bash
kubectl -n mysql get svc
```
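The NSG described above can be sketched with the Azure CLI; the resource group, VNet, and subnet names are placeholders you will need to substitute:

```bash
# Hypothetical sketch: create the NSG and the two rules described above,
# then attach the NSG to the cluster's subnet. All names are assumptions.
az network nsg create --resource-group <resource-group> --name PizzaAppEastUS

az network nsg rule create --resource-group <resource-group> --nsg-name PizzaAppEastUS \
  --name AllowPizzaApp8081 --priority 200 --direction Inbound \
  --access Allow --protocol Tcp --destination-port-ranges 8081

az network nsg rule create --resource-group <resource-group> --nsg-name PizzaAppEastUS \
  --name DenyMySQL3306 --priority 210 --direction Inbound \
  --access Deny --protocol Tcp --destination-port-ranges 3306

# Associate the NSG with the subnet used by the AKS nodes
az network vnet subnet update --resource-group <resource-group> \
  --vnet-name <vnet-name> --name <subnet-name> \
  --network-security-group PizzaAppEastUS
```

Repeat with PizzaAppWestUS for the WestUS deployment.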

@ -0,0 +1,36 @@
# Challenge 01: Is your Application ready for the Super Bowl? - Coach's Guide
[< Previous Solution](./Solution-00.md) - **[Home](./README.md)** - [Next Solution >](./Solution-02.md)
## Notes & Guidance
This challenge is where the student will simulate a pod failure. For Chaos Studio to work with AKS, Chaos Mesh will need to be installed.
Chaos Studio doesn't work with private AKS clusters.
- Instructions to install Chaos Mesh and onboard the cluster to Chaos Studio are at https://docs.microsoft.com/en-us/azure/chaos-studio/chaos-studio-tutorial-aks-portal#set-up-chaos-mesh-on-your-aks-cluster
- Once installed, create a pod failure experiment to fail a pod
- If using the Pizza App, the application should become unresponsive
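A quick sanity check that Chaos Mesh is installed (assuming the `chaos-testing` namespace used by the tutorial linked above):

```bash
# Chaos Mesh components (controller-manager, daemons) should all be Running
kubectl get pods -n chaos-testing
```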
Command to view the private and public IP of the pizza application
```bash
kubectl get -n contosoappmysql svc
```
Command to view all pods across all namespaces in the AKS cluster
```bash
kubectl get pods --all-namespaces
```
Have the student explore how to make pods resilient by adding a replica of the pod's statefulset
```bash
kubectl scale statefulset -n <namespace> <statefulset-name> --replicas=2
```
- Have the student run the experiment again and notice how the application is available with a failed pod
- In the experiment, set the mode to "one" versus "all", as per the JSON spec below:
- {"action":"pod-failure","mode":"one","duration":"600s","selector":{"namespaces":["contosoappmysql"]}}

@ -0,0 +1,35 @@
# Challenge 02: My Availability Zone burned down, now what? - Coach's Guide
[< Previous Solution](./Solution-01.md) - **[Home](./README.md)** - [Next Solution >](./Solution-03.md)
## Notes & Guidance
This challenge will simulate an AZ failure by failing a virtual machine that is a member of the Virtual Machines Scale Set created by AKS.
Chaos Studio will use the VMSS shutdown fault
- The student will create an experiment for the VMSS shutdown fault
- Have the student think about how to make the cluster resilient
- Student should scale VMSS
- Scale the VMSS via AKS
- Scale the PizzaApp or the student's AKS deployment or statefulset
- Rerun the experiment
Verify where your pods are running (Portal or CLI)
```bash
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node>
```
Scale the cluster to a minimum of 2 VMs
```bash
az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 2 --nodepool-name <your node pool name>
```
Scale your Kubernetes environment (hint it is a stateful deployment)
```bash
kubectl scale statefulset -n contosoappmysql contosopizza --replicas=2
```
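After scaling, the following can be used to confirm both operations took effect; the namespace and statefulset names assume the Pizza App:

```bash
# Expect at least 2 nodes in Ready state
kubectl get nodes

# Expect 2 contosopizza replicas, ideally spread across nodes
kubectl get pods -n contosoappmysql -o wide
```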

@ -0,0 +1,26 @@
# Challenge 03: Godzilla takes out an Azure region! - Coach's Guide
[< Previous Solution](./Solution-02.md) - **[Home](./README.md)** - [Next Solution >](./Solution-04.md)
## Notes & Guidance
In this Challenge, students will simulate a region failure.
This can be done via the following:
- An NSG blocking port 8081
- Chaos Mesh pod failures set to all pods in a region
- A VMSS fault selecting all nodes in a region
Traffic Manager is the solution.
- Verify students installed the application in WestUS and EastUS.
- Routing method = Performance
- Configuration profile needs to be created
- DNS TTL = 1
- Protocol = Http
- Port = 8081
- Path = /pizzeria/
- Probing interval = 10
- Tolerated number of failures = 3
- Probe timeout = 5
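The profile settings above can be sketched with the Azure CLI; the profile name, DNS prefix, and endpoint targets are placeholders:

```bash
# Hypothetical sketch of the Traffic Manager profile described above
az network traffic-manager profile create \
  --resource-group <resource-group> --name <profile-name> \
  --routing-method Performance --unique-dns-name <dns-prefix> \
  --ttl 1 --protocol HTTP --port 8081 --path "/pizzeria/" \
  --interval 10 --max-failures 3 --timeout 5

# Add one external endpoint per region (repeat for WestUS)
az network traffic-manager endpoint create \
  --resource-group <resource-group> --profile-name <profile-name> \
  --name PizzaAppEastUS --type externalEndpoints \
  --target <EastUS-public-IP-or-DNS> --endpoint-location eastus
```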
Use [GeoPeeker](https://geopeeker.com/home/default) to visualize multi-region DNS resolution

@ -0,0 +1,13 @@
# Challenge 04: Injecting Chaos into your CI/CD pipeline - Coach's Guide
[< Previous Solution](./Solution-03.md) - **[Home](./README.md)**
## Notes & Guidance
This challenge may be a larger lift as the students are not required to know GitHub Actions or any other DevOps pipeline tool. We have provided links to the actions needed to complete this task but feel free to nudge more on the GitHub Actions syntax portion as the challenge is more about integrating Chaos into your pipeline and less about the syntax of GitHub Actions.
A sample solution is located [here](./Solutions/Solution-04/Solution-04.yml)
From a high level, it logs into Azure and leverages the `az rest` command to issue a REST API call that triggers the experiment. The students could also make a standard REST API call; however, `az rest` is easier to use as it automatically handles many headers for you, such as authorization.
[Chaos Studio Rest API Samples](https://learn.microsoft.com/en-us/azure/chaos-studio/chaos-studio-samples-rest-api)
[Starting Experiment with Rest API](https://learn.microsoft.com/en-us/rest/api/chaosstudio/experiments/start?tabs=HTTP)
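The `az rest` call described above looks roughly like the following; the subscription, resource group, experiment name, and api-version are placeholders to fill in from the linked docs:

```bash
# Hypothetical sketch: start a Chaos Studio experiment via the management API
az rest --method post \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Chaos/experiments/<experiment-name>/start?api-version=<api-version>"
```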

@ -0,0 +1,28 @@
name: Trigger Azure Chaos Studio Experiment (AZ CLI)
on:
  workflow_dispatch:
permissions:
  id-token: write
  contents: read
jobs:
  trigger-chaos-experiment:
    runs-on: ubuntu-latest
    steps:
      - name: 'Az CLI login'
        uses: azure/login@v1
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
          enable-AzPSSession: true
      - name: Azure CLI
        uses: azure/CLI@v1
        with:
          azcliversion: 2.30.0
          inlineScript: |
            az rest --method post --url ${{ secrets.AZURE_CHAOS_STUDIO_API_URL }}

@ -0,0 +1,58 @@
# What The Hack - ChaosStudio4AKS
## Introduction
Azure Chaos Studio (Preview) is a managed service for improving resilience by injecting faults into your Azure applications. Running controlled fault injection experiments against your applications, a practice known as chaos engineering, helps you to measure, understand, and improve resilience against real-world incidents, such as a region outage or an application failure causing high CPU utilization on VMs, Virtual Machine Scale Sets, and Azure Kubernetes Service.
## Learning Objectives
This What The Hack (WTH) is designed to introduce you to Azure Chaos Studio (Preview) and guide you through a series of hands-on challenges to accomplish the following:
* Leverage the Azure Chaos Studio to inject failure into an application/workload
* Provide hands-on understanding of Chaos Engineering
* Understand how resiliency can be achieved with Azure
In this WTH, you are the system owner of the Contoso Pizzeria Application (or you may bring your own application). Super Bowl Sunday is Contoso Pizza's busiest time of the year, so the pizzeria ordering application must be available during the Super Bowl.
You have been tasked with testing the resiliency of the pizzeria application (or your own application). The pizzeria application is running on Azure, and you will use Chaos Studio to simulate various failures.
## Challenges
* Challenge 00: **[Prerequisites - Ready, Set, GO!](Student/Challenge-00.md)**
- Deploy the multi-region Kubernetes pizzeria application
* Challenge 01: **[Is your application ready for the Super Bowl?](Student/Challenge-01.md)**
- How does your application handle failure during large scale events?
* Challenge 02: **[My AZ burned down, now what?](Student/Challenge-02.md)**
- Can your application survive an Azure outage of 1 or more Availability Zones?
* Challenge 03: **[Godzilla takes out an Azure region!](Student/Challenge-03.md)**
- Can your application survive a region failure?
* Challenge 04: **[Injecting Chaos into your CI/CD pipeline](Student/Challenge-04.md)**
- Optional challenge, using Chaos Studio experiments in your CI/CD pipeline
## Prerequisites
- Azure subscription with contributor access
- Visual Studio Code terminal or Azure Shell (recommended)
- Latest Azure CLI (if not using Azure Shell)
- GitHub or Azure DevOps to automate Chaos Testing
- Azure fundamentals: VNets, NSGs, Scale Sets, Traffic Manager
- Fundamentals of Chaos Engineering
- Intermediate understanding of Kubernetes (kubectl commands) and AKS
## Learning Resources
* [What is Azure Chaos Studio](https://docs.microsoft.com/en-us/azure/chaos-studio/chaos-studio-overview)
* [What is Chaos Engineering](https://docs.microsoft.com/en-us/azure/architecture/framework/resiliency/chaos-engineering?toc=%2Fazure%2Fchaos-studio%2Ftoc.json&bc=%2Fazure%2Fchaos-studio%2Fbreadcrumb%2Ftoc.json)
* [How Netflix pioneered Chaos Engineering](https://techhq.com/2019/03/how-netflix-pioneered-chaos-engineering/)
* [Embrace the Chaos](https://medium.com/capital-one-tech/embrace-the-chaos-engineering-203fd6fc6ff7)
* [Why you should break more things on purpose --AWS, Azure, and LinkedIn case studies](https://www.contino.io/insights/chaos-engineering)
## Contributors
- Jerry Rhoads
- Kevin Gates
- Andy Huang
- Tommy Falgout

@ -0,0 +1,109 @@
# Challenge 00: Prerequisites - Ready, Set, GO!
**[Home](../README.md)** - [Next Challenge >](./Challenge-01.md)
## Pre-requisites
You will need an Azure subscription with "Contributor" permissions.
Before starting, you should decide how and where you will want to work on the challenges of this hackathon.
You can complete the entirety of this hack's challenges using the [Azure Cloud Shell](#work-from-azure-cloud-shell) in a web browser (fastest path), or you can choose to install the necessary tools on your [local workstation (Windows/WSL, Mac, or Linux)](#work-from-local-workstation).
We recommend installing the tools on your workstation.
### Work from Azure Cloud Shell
Azure Cloud Shell (using Bash) provides a convenient shell environment with all tools you will need to run these challenges already included such as the Azure CLI, kubectl, helm, and MySQL client tools, and editors such as vim, nano, code, etc.
This is the fastest path. To get started, simply open [Azure Cloud Shell](https://shell.azure.com) in a web browser, and you're all set!
### Work from Local Workstation
As an alternative to Azure Cloud Shell, this hackathon can also be run from a Bash shell on your computer. You can use the Windows Subsystem for Linux (WSL2), Linux Bash or Mac Terminal. While Linux and Mac include Bash and Terminal out of the box respectively, on Windows you will need to install the WSL: [Windows Subsystem for Linux Installation Guide for Windows 10](https://docs.microsoft.com/en-us/windows/wsl/install-win10).
If you choose to run it from your local workstation, you need to install the following tools into your Bash environment (on Windows, install these into the WSL environment, **NOT** the Windows command prompt!):
- [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/)
- Kubectl (using `az aks install-cli`)
- [Helm3](https://helm.sh/docs/intro/install/)
Take into consideration how much time you will need to install these tools on your own computer. Depending on your Internet and computer's speed, this additional local setup will probably take around 30 minutes.
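A quick check that the tools above are installed and on your PATH (output will vary by version):

```bash
az version                 # Azure CLI
az aks install-cli         # installs kubectl (and kubelogin) if not present
kubectl version --client   # kubectl
helm version               # Helm 3
```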
## Introduction
Once the pre-requisites are set up, now it's time to build the hack's environment.
This hack is designed to help you learn chaos testing with Azure Chaos Studio; however, you should have a basic knowledge of Kubernetes (K8s). The hack uses pre-canned Azure Kubernetes Service (AKS) environments that you will deploy into your Azure subscription. You may bring your own AKS application versus using the pre-canned AKS Pizza Application.
If you are using the Pizzeria Application, it will run in 2 Azure regions, entirely on AKS clusters, consisting of the following:
- 1 instance of the "Pizzeria" sample app (1 per region)
- A MySQL database (1 per region)
## Description
The Pizzeria Application is deployed in two steps by scripts that invoke ARM Templates & Helm charts to create the AKS cluster, database, and the sample Pizzeria application. Your coach will provide you with a link to the Pizzeria.zip file that contains deployment files needed to deploy the AKS environment into EastUS and WestUS. Since the end goal is to test a multi-region application, deploy the application into each region. For best results, perform all experiments in your nearest region.
- Download the required Pizzeria.zip file (or you can use your own AKS application) for this hack. You should do this in Azure Cloud Shell or in a Mac/Linux/WSL environment which has the Azure CLI installed.
- Unzip the file
### Deploy the AKS Environment
Run the following command to set up the AKS environments (you will do this for each region):
```bash
cd ~/REGION-NAME-AKS/ARM-Templates/KubernetesCluster
chmod +x ./create-cluster.sh
./create-cluster.sh
```
**NOTE:** Creating the cluster will take around 10 minutes
**NOTE:** The Kubernetes cluster will consist of one container, contosoappmysql.
### Deploy the Sample Application
Deploy the Pizzeria application as follows:
```bash
cd ~/REGION-NAME/HelmCharts/ContosoPizza
chmod +x ./*.sh
./deploy-pizza.sh
```
**NOTE:** Deploying the Pizzeria application will take around 5 minutes
### View the Sample Application
Once the applications are deployed, you will see a link to the website running on port 8081 in each region. In Azure Cloud Shell, these are clickable links. Otherwise, you can copy and paste the URL into your web browser.
```bash
Pizzeria app on MySQL is ready at http://some_ip_address:8081/pizzeria
```
## Success Criteria
* You have a Unix/Linux Shell for setting up the Pizzeria application or your AKS application (e.g. Azure Cloud Shell, WSL2 bash, Mac zsh, etc.)
* You have validated that the Pizzeria or your application is working in both regions (EastUS & WestUS)
## Tips
* The AKS "contosoappmysql" web front end has a public IP address that you can connect to. At this time you should create a Network Security Group on each VNet, called PizzaAppEastUS / PizzaAppWestUS, with an allow rule for TCP port 8081 at priority 200 and a deny rule for TCP port 3306 at priority 210 --you will need this NSG for future challenges.
```bash
kubectl -n mysql get svc
```
There are more useful Kubernetes commands in the reference section below.
## Learning Resources
* [Kubernetes cheat sheet](https://kubernetes.io/docs/reference/kubectl/cheatsheet/)

@ -0,0 +1,49 @@
# Challenge 01: Is your Application ready for the Super Bowl?
[< Previous Challenge](./Challenge-00.md) - **[Home](../README.md)** - [Next Challenge >](./Challenge-02.md)
## Pre-requisites
Before creating your Azure Chaos Studio Experiment, ensure you have deployed the pizzeria application and verified that it is available.
## Introduction
Welcome to Challenge 1.
In this challenge you will simulate failure in your compute tier. It is Super Bowl Sunday and you are the system owner of Contoso Pizza's pizza ordering
workload. This workload is hosted in Azure's Kubernetes Service (AKS). Super Bowl Sunday is Contoso Pizza's busiest day of the year.
To make Super Bowl Sunday a success, your job is to plan for possible failures that could occur during the Super Bowl event.
If you are using your own AKS application, your application should be ready to handle its peak operating time: this is when Chaos strikes!
## Description
Create failure at the AKS pod level in your preferred region e.g. EastUS
- Show that your AKS environment has been prepared
- Show that your Chaos Experiment has been scoped to the web tier workload
- Show any failures you observed during the experiment
During the experiment, were you able to order a pizza or perform your application functionality? If not, what could you do to make your application resilient at the pod layer?
## Success Criteria
- Verify Chaos Mesh is running on the Cluster
- Verify Pod Chaos restarted the application's AKS pod
- Show any failure you observed during the experiment
- If your application went offline, show what change you could make to the application to make it resilient
## Tips
These tips apply to the Pizza Application
- Verify the "selector" in the experiment uses namespace of the application
- Verify the PizzaApp is a statefulset versus a deployment
## Learning Resources
- [Simulate AKS pod failure with Chaos Studio](https://docs.microsoft.com/en-us/azure/chaos-studio/chaos-studio-tutorial-aks-portal)
- [AKS cheat-sheet](https://kubernetes.io/docs/reference/kubectl/cheatsheet/)

@ -0,0 +1,47 @@
# Challenge 02: My AZ burned down, now what?
[< Previous Challenge](./Challenge-01.md) - **[Home](../README.md)** - [Next Challenge >](./Challenge-03.md)
## Pre-requisites
Before creating your Azure Chaos Studio Experiment, ensure you have deployed the pizzeria application and verified that it is available.
## Introduction
Welcome to Challenge 2.
Can your Application survive an Availability Zone Failure?
How did your application perform with pod failures? Are you still in business? Now that you have tested for pod faults and
overcome them with resiliency at the pod level --it is time to kick it up to the next level. Winter storms are a possibility on Super Bowl Sunday, and you need to
prepare for an Azure datacenter going offline. Choose your preferred region and AKS cluster to simulate an Availability Zone failure.
## Description
As the purpose of this WTH is to show Chaos Studio, we are going to pretend that an Azure Availability Zone (datacenter) is offline. You will simulate this by failing an AKS node with Chaos Studio.
- Create and scope an Azure Chaos Studio Experiment to fail 1 of the pizza application's virtual machine(s)
During the experiment, were you able to order a pizza? If not, what could you do to make your application resilient at the Availability Zone/Virtual
Machine layer?
## Success Criteria
- Show that the Chaos Experiment fails a node running the pizzeria application
- Show any failure you observed during the experiment
- Discuss with your coach how your application is (or was made) resilient
- Verify the pizzeria application is available while a virtual machine is offline
## Tip
Take note of your virtual machine's instanceID
## Learning Resources
- [Simulate AKS pod failure with Chaos Studio](https://docs.microsoft.com/en-us/azure/chaos-studio/chaos-studio-tutorial-aks-portal)
- [Scale an AKS cluster](https://docs.microsoft.com/en-us/azure/aks/scale-cluster)
- [AKS cheat-sheet](https://kubernetes.io/docs/reference/kubectl/cheatsheet/)

@ -0,0 +1,54 @@
# Challenge 03: Godzilla takes out an Azure region!
[< Previous Challenge](./Challenge-02.md) - **[Home](../README.md)** - [Next Challenge >](./Challenge-04.md)
## Pre-requisites
Before creating your Azure Chaos Studio Experiment, ensure you have deployed the pizzeria application and verified that it is available in both regions (EastUS and WestUS).
## Introduction
Welcome to Challenge 3.
Can your application survive a region failure?
So far you have tested failures with Contoso Pizza's AKS pod(s) and AKS node(s); now it is time to test failures at the regional level.
As Contoso Pizza is a national pizza chain, hungry people all over the United States are ordering pizzas and watching the Super
Bowl. Enter Godzilla! He exists! He is hungry! He is upset (hangry)! He is going to destroy the WestUS! What will your application
do?
## Description
As the purpose of this WTH is to demonstrate Chaos Studio, we are going to simulate a region failure. You have deployed the pizzeria application in 2 regions (EastUS/WestUS). As we are hacking on Azure's Chaos Studio, we are pretending the databases are in sync, and we are showing how Chaos Studio can simulate the failure of a region.
- Create Azure Chaos Studio experiment(s) that can simulate a region failure
During the experiment, were you able to order a pizza? If not, what could you do to make your application more resilient?
## Success Criteria
- Verify the experiment is running
- Show any failure you observed during the experiment
- Verify the application is available while the WestUS region is offline
- Verify all application traffic is routing to the surviving region
## Tips
- Think of the multiple ways to simulate a region failure
- Did you create the NSG from Challenge 0?
- Use [GeoPeeker](https://geopeeker.com/home/default) to verify traffic routing
## Learning Resources
- [Azure Traffic Manager](https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-configure-priority-routing-method)
- [Azure Traffic Manager endpoint monitoring](https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-monitoring)

@ -0,0 +1,33 @@
# Challenge 04: Injecting Chaos into your CI/CD pipeline
[< Previous Challenge](./Challenge-03.md) - **[Home](../README.md)**
## Pre-requisites
To complete this challenge, you will use the Pizzeria Application or your own AKS application.
## Introduction
You will need an in-depth understanding of DevOps and your CI/CD tool of choice.
This is where the rubber meets the road. You will take what you have learned from the previous challenges and apply that knowledge here.
## Description
In this challenge you will conduct a chaos experiment from your CI/CD pipeline.
You will take the Pizzeria Application or your own application and add a chaos experiment to your deployment pipeline.
Run your experiments in Dev/Test; do not run them in Prod.
## Tips
1. You want your application to remain available (in a healthy state) during the failure.
2. What kinds of faults and remediation come to mind from the previous challenges?
## Success Criteria
- Show that Chaos Studio injects fault(s) into your application via your pipeline.
- Verify that your application remains healthy during the Chaos Experiment.
## Learning Resources
- [How to deploy a simple experiment](https://blog.meadon.me/chaos-studio-part-1/)
- [How to deploy a simple application and experiment in a CI/CD pipeline](https://blog.meadon.me/chaos-studio-part-2/)

@@ -0,0 +1,398 @@
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"clusterName": {
"type": "string",
"metadata": {
"description": "The name of the Managed Cluster resource."
}
},
"agentPoolNodeCount": {
"type": "int",
"metadata": {
"description": "Number of virtual machines in the agent pool"
}
},
"agentPoolNodeType": {
"type": "string",
"metadata": {
"description": "SKU or Type of virtual machines in the agent pool"
}
},
"systemPoolNodeCount": {
"type": "int",
"metadata": {
"description": "Number of virtual machines in the system pool"
}
},
"systemPoolNodeType": {
"type": "string",
"metadata": {
"description": "SKU or Type of virtual machines in the system pool"
}
},
"resourceGroupName": {
"type": "string",
"metadata": {
"description": "The name of the Resource Group"
}
},
"virtualNetworkName": {
"type": "string",
"metadata": {
"description": "The name of the Virtual Network"
}
},
"subnetName": {
"type": "string",
"metadata": {
"description": "The name of the Subnet within the Virtual Network"
}
},
"location": {
"type": "string",
"metadata": {
"description": "The geographical location of AKS resource."
}
},
"dnsPrefix": {
"type": "string",
"metadata": {
"description": "Optional DNS prefix to use with hosted Kubernetes API server FQDN."
}
},
"addressSpaces": {
"type": "array"
},
"ddosProtectionPlanEnabled": {
"type": "bool"
},
"osDiskSizeGB": {
"type": "int",
"defaultValue": 0,
"metadata": {
"description": "Disk size (in GiB) to provision for each of the agent pool nodes. This value ranges from 0 to 1023. Specifying 0 will apply the default disk size for that agentVMSize."
},
"minValue": 0,
"maxValue": 1023
},
"kubernetesVersion": {
"type": "string",
"defaultValue": "1.25.5",
"metadata": {
"description": "The version of Kubernetes."
}
},
"networkPlugin": {
"type": "string",
"allowedValues": [
"azure",
"kubenet"
],
"metadata": {
"description": "Network plugin used for building Kubernetes network."
}
},
"maxPods": {
"type": "int",
"defaultValue": 64,
"metadata": {
"description": "Maximum number of pods that can run on a node."
}
},
"enableRBAC": {
"type": "bool",
"defaultValue": true,
"metadata": {
"description": "Boolean flag to turn on and off of RBAC."
}
},
"enablePrivateCluster": {
"type": "bool",
"defaultValue": false,
"metadata": {
"description": "Enable private network access to the Kubernetes cluster."
}
},
"enableHttpApplicationRouting": {
"type": "bool",
"defaultValue": true,
"metadata": {
"description": "Boolean flag to turn on and off http application routing."
}
},
"enableAzurePolicy": {
"type": "bool",
"defaultValue": false,
"metadata": {
"description": "Boolean flag to turn on and off Azure Policy addon."
}
},
"enableOmsAgent": {
"type": "bool",
"defaultValue": true,
"metadata": {
"description": "Boolean flag to turn on and off omsagent addon."
}
},
"workspaceRegion": {
"type": "string",
"defaultValue": "eastus",
"metadata": {
"description": "Specify the region for your OMS workspace."
}
},
"workspaceName": {
"type": "string",
"metadata": {
"description": "Specify the prefix of the OMS workspace."
}
},
"omsSku": {
"type": "string",
"defaultValue": "standalone",
"allowedValues": [
"free",
"standalone",
"pernode"
],
"metadata": {
"description": "Select the SKU for your workspace."
}
},
"serviceCidr": {
"type": "string",
"metadata": {
"description": "A CIDR notation IP range from which to assign service cluster IPs."
}
},
"subnetAddressSpace": {
"type": "string",
"metadata": {
"description": "A CIDR notation IP range for the subnet address space."
}
},
"dnsServiceIP": {
"type": "string",
"metadata": {
"description": "Containers DNS server IP address."
}
},
"dockerBridgeCidr": {
"type": "string",
"metadata": {
"description": "A CIDR notation IP for Docker bridge."
}
}
},
"variables": {
"deploymentSuffix": "MDP2020",
"subscriptionId" : "[subscription().id]",
"workspaceName" : "[concat(parameters('workspaceName'), uniqueString(variables('subscriptionId')))]",
"omsWorkspaceId": "[concat(variables('subscriptionId'), '/resourceGroups/', parameters('resourceGroupName'), '/providers/Microsoft.OperationalInsights/workspaces/', variables('workspaceName'))]",
"clusterID": "[concat(variables('subscriptionId'), '/resourceGroups/', parameters('resourceGroupName'), '/providers/Microsoft.ContainerService/managedClusters/', parameters('clusterName'))]",
"vnetSubnetID": "[concat(variables('subscriptionId'), '/resourceGroups/', parameters('resourceGroupName'), '/providers/Microsoft.Network/virtualNetworks/', parameters('virtualNetworkName'), '/subnets/', parameters('subnetName'))]",
"solutionDeploymentId": "[concat('SolutionDeployment-', variables('deploymentSuffix'))]",
"workspaceDeploymentId": "[concat('WorkspaceDeployment-', variables('deploymentSuffix'))]",
"clusterMonitoringMetricId": "[concat('ClusterMonitoringMetric-', variables('deploymentSuffix'))]",
"clusterSubnetRoleAssignmentId": "[concat('ClusterSubnetRoleAssignment-', variables('deploymentSuffix'))]"
},
"resources": [
{
"name": "[parameters('virtualNetworkName')]",
"type": "Microsoft.Network/VirtualNetworks",
"apiVersion": "2019-09-01",
"location": "[parameters('location')]",
"dependsOn": [],
"tags": {
"cluster": "Kubernetes"
},
"properties": {
"addressSpace": {
"addressPrefixes": "[parameters('addressSpaces')]"
},
"subnets": [
{
"name": "[parameters('subnetName')]",
"properties": {
"addressPrefix": "[parameters('subnetAddressSpace')]"
}
}
],
"enableDdosProtection": "[parameters('ddosProtectionPlanEnabled')]"
}
},
{
"apiVersion": "2020-03-01",
"dependsOn": [
"[concat('Microsoft.Resources/deployments/', variables('workspaceDeploymentId'))]",
"[resourceId('Microsoft.Network/VirtualNetworks', parameters('virtualNetworkName'))]"
],
"type": "Microsoft.ContainerService/managedClusters",
"location": "[parameters('location')]",
"name": "[parameters('clusterName')]",
"properties": {
"kubernetesVersion": "[parameters('kubernetesVersion')]",
"enableRBAC": "[parameters('enableRBAC')]",
"dnsPrefix": "[parameters('dnsPrefix')]",
"agentPoolProfiles": [
{
"name": "systempool",
"osDiskSizeGB": "[parameters('osDiskSizeGB')]",
"count": "[parameters('systemPoolNodeCount')]",
"vmSize": "[parameters('systemPoolNodeType')]",
"osType": "Linux",
"storageProfile": "ManagedDisks",
"type": "VirtualMachineScaleSets",
"mode": "System",
"vnetSubnetID": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('virtualNetworkName'), parameters('subnetName'))]",
"maxPods": "[parameters('maxPods')]"
},
{
"name": "userpool",
"osDiskSizeGB": "[parameters('osDiskSizeGB')]",
"count": "[parameters('agentPoolNodeCount')]",
"vmSize": "[parameters('agentPoolNodeType')]",
"osType": "Linux",
"storageProfile": "ManagedDisks",
"type": "VirtualMachineScaleSets",
"mode": "User",
"vnetSubnetID": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('virtualNetworkName'), parameters('subnetName'))]",
"maxPods": "[parameters('maxPods')]"
}
],
"networkProfile": {
"loadBalancerSku": "standard",
"networkPlugin": "[parameters('networkPlugin')]",
"serviceCidr": "[parameters('serviceCidr')]",
"dnsServiceIP": "[parameters('dnsServiceIP')]",
"dockerBridgeCidr": "[parameters('dockerBridgeCidr')]"
},
"apiServerAccessProfile": {
"enablePrivateCluster": "[parameters('enablePrivateCluster')]"
},
"addonProfiles": {
"httpApplicationRouting": {
"enabled": "[parameters('enableHttpApplicationRouting')]"
},
"azurePolicy": {
"enabled": "[parameters('enableAzurePolicy')]"
},
"omsagent": {
"enabled": "[parameters('enableOmsAgent')]",
"config": {
"logAnalyticsWorkspaceResourceID": "[variables('omsWorkspaceId')]"
}
}
}
},
"tags": {},
"identity": {
"type": "SystemAssigned"
}
},
{
"type": "Microsoft.Resources/deployments",
"name": "[variables('solutionDeploymentId')]",
"apiVersion": "2017-05-10",
"resourceGroup": "[split(variables('omsWorkspaceId'),'/')[4]]",
"subscriptionId": "[split(variables('omsWorkspaceId'),'/')[2]]",
"properties": {
"mode": "Incremental",
"template": {
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {},
"variables": {},
"resources": [
{
"apiVersion": "2015-11-01-preview",
"type": "Microsoft.OperationsManagement/solutions",
"location": "[parameters('workspaceRegion')]",
"name": "[concat('ContainerInsights', '(', split(variables('omsWorkspaceId'),'/')[8], ')')]",
"properties": {
"workspaceResourceId": "[variables('omsWorkspaceId')]"
},
"plan": {
"name": "[concat('ContainerInsights', '(', split(variables('omsWorkspaceId'),'/')[8], ')')]",
"product": "[concat('OMSGallery/', 'ContainerInsights')]",
"promotionCode": "",
"publisher": "Microsoft"
}
}
]
}
},
"dependsOn": [
"[concat('Microsoft.Resources/deployments/', variables('workspaceDeploymentId'))]"
]
},
{
"type": "Microsoft.Resources/deployments",
"name": "[variables('workspaceDeploymentId')]",
"apiVersion": "2017-05-10",
"resourceGroup": "[split(variables('omsWorkspaceId'),'/')[4]]",
"subscriptionId": "[split(variables('omsWorkspaceId'),'/')[2]]",
"properties": {
"mode": "Incremental",
"template": {
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {},
"variables": {},
"resources": [
{
"apiVersion": "2015-11-01-preview",
"type": "Microsoft.OperationalInsights/workspaces",
"location": "[parameters('workspaceRegion')]",
"name": "[variables('workspaceName')]",
"properties": {
"sku": {
"name": "[parameters('omsSku')]"
}
}
}
]
}
}
},
{
"type": "Microsoft.Resources/deployments",
"name": "[variables('clusterMonitoringMetricId')]",
"apiVersion": "2017-05-10",
"resourceGroup": "[parameters('resourceGroupName')]",
"subscriptionId": "[subscription().subscriptionId]",
"properties": {
"mode": "Incremental",
"template": {
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {},
"variables": {},
"resources": [
{
"type": "Microsoft.ContainerService/managedClusters/providers/roleAssignments",
"apiVersion": "2018-01-01-preview",
"name": "[concat(parameters('clusterName'), '/Microsoft.Authorization/', guid(subscription().subscriptionId))]",
"properties": {
"roleDefinitionId": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Authorization/roleDefinitions/', '3913510d-42f4-4e42-8a64-420c390055eb')]",
"principalId": "[reference(parameters('clusterName')).addonProfiles.omsagent.identity.objectId]",
"scope": "[variables('clusterID')]"
}
}
]
}
},
"dependsOn": [
"[variables('clusterID')]"
]
}
],
"outputs": {
"controlPlaneFQDN": {
"type": "string",
"value": "[reference(concat('Microsoft.ContainerService/managedClusters/', parameters('clusterName'))).fqdn]"
}
}
}

@@ -0,0 +1,32 @@
# Log in to Azure (uncomment if not already logged in)
#az login
# Set your Azure subscription (uncomment and supply your subscription id)
#az account set -s "<subscription-id>"
# Defines the ARM template file location
export templateFile="aks-cluster.json"
# Defines the parameters that will be used in the ARM template
export parameterFile="parameters.json"
# Defines the name of the Resource Group our resources are deployed into
export resourceGroupName="PizzaAppEast"
export clusterName="pizzaappeast"
export location="eastus"
# Creates the resource group if it does not already exist
az group create --name $resourceGroupName --location $location
# Creates the Kubernetes cluster and the associated resources and dependencies for the cluster
az deployment group create --name dataProductionDeployment --resource-group $resourceGroupName --template-file $templateFile --parameters $parameterFile
# Install the Kubectl CLI. This will be used to interact with the remote Kubernetes cluster
#sudo az aks install-cli
# Get the Credentials to Access the Cluster with Kubectl
az aks get-credentials --name $clusterName --resource-group $resourceGroupName
# List the node pools - expect two aks nodepools
az aks nodepool list --resource-group $resourceGroupName --cluster-name $clusterName -o table

@@ -0,0 +1,84 @@
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"resourceGroupName": {
"value": "PizzaAppEast"
},
"virtualNetworkName": {
"value": "PizzaAppEastVNet"
},
"subnetName": {
"value": "PizzaAppEastSNet"
},
"clusterName": {
"value": "PizzAappEast"
},
"maxPods": {
"value": 64
},
"systemPoolNodeCount": {
"value": 1
},
"systemPoolNodeType": {
"value": "Standard_D2s_v4"
},
"agentPoolNodeCount": {
"value": 1
},
"agentPoolNodeType": {
"value": "Standard_D2s_v4"
},
"location": {
"value": "eastus"
},
"dnsPrefix": {
"value": "pizzaappeast-dns"
},
"kubernetesVersion": {
"value": "1.25.5"
},
"networkPlugin": {
"value": "azure"
},
"enableRBAC": {
"value": true
},
"enablePrivateCluster": {
"value": false
},
"enableHttpApplicationRouting": {
"value": false
},
"enableAzurePolicy": {
"value": false
},
"serviceCidr": {
"value": "10.71.0.0/16"
},
"dnsServiceIP": {
"value": "10.71.0.3"
},
"dockerBridgeCidr": {
"value": "172.17.0.1/16"
},
"addressSpaces": {
"value": [
"10.250.0.0/16"
]
},
"subnetAddressSpace": {
"value": "10.250.0.0/20"
},
"ddosProtectionPlanEnabled": {
"value": false
},
"workspaceName": {
"value": "PizzaAppEast"
},
"workspaceRegion": {
"value": "eastus"
}
}
}

@@ -0,0 +1,11 @@
apiVersion: v2
name: contoso-pizza
description: A Helm chart for deploying the Contoso Pizza web application
type: application
version: 1.0.0
appVersion: "15.08"

@@ -0,0 +1,74 @@
status="Running"
# Install the Kubernetes Resources
helm upgrade --install wth-mysql ../MySQL57 --set infrastructure.password=OCPHack8
# Install the Kubernetes Resources Postgres
# helm upgrade --install wth-postgresql ../PostgreSQL116 --set infrastructure.password=OCPHack8
#
# for ((i = 0 ; i < 30 ; i++)); do
# pgStatus=$(kubectl -n postgresql get pods --no-headers -o custom-columns=":status.phase")
#
#
# if [ "$pgStatus" != "$status" ]; then
# sleep 10
# fi
# done
# Get the postgres pod name
# pgPodName=$(kubectl -n postgresql get pods --no-headers -o custom-columns=":metadata.name")
#Copy pg.sql to the postgresql pod
# kubectl -n postgresql cp ./pg.sql $pgPodName:/tmp/pg.sql
# Use this to connect to the database server
# kubectl -n postgresql exec deploy/postgres -it -- /usr/bin/psql -U postgres -f /tmp/pg.sql
# Wait for the MySQL pod to reach Running status
for ((i = 0 ; i < 30 ; i++)); do
mysqlStatus=$(kubectl -n mysql get pods --no-headers -o custom-columns=":status.phase")
if [ "$mysqlStatus" != "$status" ]; then
sleep 30
fi
done
# Use this to connect to the database server
kubectl -n mysql exec deploy/mysql -it -- /usr/bin/mysql -u root -pOCPHack8 <./mysql.sql
# postgresClusterIP=$(kubectl -n postgresql get svc -o json |jq .items[0].spec.clusterIP |tr -d '"')
mysqlClusterIP=$(kubectl -n mysql get svc -o json |jq .items[0].spec.clusterIP |tr -d '"')
# sed "s/XXX.XXX.XXX.XXX/$postgresClusterIP/" ./values-postgresql-orig.yaml >temp_postgresql.yaml && mv temp_postgresql.yaml ./values-postgresql.yaml
sed "s/XXX.XXX.XXX.XXX/$mysqlClusterIP/" ./values-mysql-orig.yaml >temp_mysql.yaml && mv temp_mysql.yaml ./values-mysql.yaml
helm upgrade --install mysql-contosopizza . -f ./values.yaml -f ./values-mysql.yaml
# helm upgrade --install postgres-contosopizza . -f ./values.yaml -f ./values-postgresql.yaml
for ((i = 0 ; i < 30 ; i++)); do
appStatus=$(kubectl -n contosoappmysql get svc -o json |jq .items[0].status.loadBalancer.ingress[0].ip |tr -d '"')
if [ "$appStatus" == "null" ]; then
sleep 30
fi
done
for ((i = 0 ; i < 30 ; i++)); do
appStatus=$(kubectl -n contosoappmysql get svc -o json |jq .items[0].status.loadBalancer.ingress[0].ip |tr -d '"')
if [ "$appStatus" == "null" ]; then
sleep 30
fi
done
# postgresAppIP=$(kubectl -n contosoapppostgres get svc -o json |jq .items[0].status.loadBalancer.ingress[0].ip|tr -d '"')
mysqlAppIP=$(kubectl -n contosoappmysql get svc -o json |jq .items[0].status.loadBalancer.ingress[0].ip |tr -d '"')
echo "Pizzeria app on MySQL is ready at http://$mysqlAppIP:8081/pizzeria"
# echo "Pizzeria app on PostgreSQL is ready at http://$postgresAppIP:8082/pizzeria"

@@ -0,0 +1,88 @@
# Change NSG firewall rule to restrict Postgres and MySQL database from client machine only
# Find out your local client ip address.
echo -e "\n This script restricts access to your \"on-prem\" Postgres and MySQL databases to the shell it is run from.
It removes public access to the databases and adds your shell's IP address as an allowed source IP.
If you are running this script from Azure Cloud Shell and want to add your computer's IP address as a source for GUI tools to connect from,
then edit the variable my_ip below and put your computer's IP address there.
To find your computer's public IP address, point a browser to https://ifconfig.me
If this script is run again, it appends your IP address to the currently allowed source IP addresses. \n"
my_ip=`curl -s ifconfig.me`/32
# In this resource group, there is only one NSG
export rg_nsg="MC_OSSDBMigration_ossdbmigration_westus"
export nsg_name=` az network nsg list -g $rg_nsg --query "[].name" -o tsv`
# For this NSG, there are two rules for connecting to Postgres and MySQL.
export pg_nsg_rule_name=`az network nsg rule list -g $rg_nsg --nsg-name $nsg_name --query "[].[name]" -o tsv | grep "TCP-5432" `
export my_nsg_rule_name=`az network nsg rule list -g $rg_nsg --nsg-name $nsg_name --query "[].[name]" -o tsv | grep "TCP-3306" `
# Capture the existing allowed_source_ip_address.
existing_my_source_ip_allowed=`az network nsg rule show -g $rg_nsg --nsg-name $nsg_name --name $my_nsg_rule_name --query "sourceAddressPrefix" -o tsv`
existing_pg_source_ip_allowed=`az network nsg rule show -g $rg_nsg --nsg-name $nsg_name --name $pg_nsg_rule_name --query "sourceAddressPrefix" -o tsv`
# If it says "Internet" we treat it as 0.0.0.0
if [ "$existing_my_source_ip_allowed" = "Internet" ]
then
existing_my_source_ip_allowed="0.0.0.0"
fi
if [ "$existing_pg_source_ip_allowed" = "Internet" ]
then
existing_pg_source_ip_allowed="0.0.0.0"
fi
# If the existing allowed source IP is open to the world, we need to remove it first. Otherwise it is a (list of) IP addresses
# and we append another IP address to it. Open to the world is 0.0.0.0 or 0.0.0.0/0.
existing_my_source_ip_allowed_prefix=`echo $existing_my_source_ip_allowed | cut -d "/" -f1`
existing_pg_source_ip_allowed_prefix=`echo $existing_pg_source_ip_allowed | cut -d "/" -f1`
# If it was open to public, we take off the existing 0.0.0.0 or else we append to it.
if [ "$existing_my_source_ip_allowed_prefix" = "0.0.0.0" ]
then
new_my_source_ip_allowed="$my_ip"
else
new_my_source_ip_allowed="$existing_my_source_ip_allowed $my_ip"
fi
if [ "$existing_pg_source_ip_allowed_prefix" = "0.0.0.0" ]
then
new_pg_source_ip_allowed="$my_ip"
else
new_pg_source_ip_allowed="$existing_pg_source_ip_allowed $my_ip"
fi
# Update the rules to allow access to Postgres and MySQL only from your client IP address ("my_ip"). Errors are discarded:
# running the script twice back to back prints an error message but does no harm.
az network nsg rule update -g $rg_nsg --nsg-name $nsg_name --name $my_nsg_rule_name --source-address-prefixes $new_my_source_ip_allowed 2>/dev/zero
if [ $? -ne 0 ]
then
echo -e "\n Your MySQL Firewall rule was not changed. It is possible that you already have $my_ip white listed \n"
fi
az network nsg rule update -g $rg_nsg --nsg-name $nsg_name --name $pg_nsg_rule_name --source-address-prefixes $new_pg_source_ip_allowed 2>/dev/zero
if [ $? -ne 0 ]
then
echo -e "\n Your Postgres Firewall rule was not changed. It is possible that you already have $my_ip white listed \n"
fi

@@ -0,0 +1,16 @@
-- Create wth database
CREATE DATABASE wth;
-- Create user contosoapp, which will own the application data for migration
CREATE USER if not exists 'contosoapp' identified by 'OCPHack8' ;
GRANT SUPER on *.* to contosoapp identified by 'OCPHack8'; -- may not be needed
GRANT ALL PRIVILEGES ON wth.* to contosoapp ;
GRANT PROCESS, SELECT ON *.* to contosoapp ;
SET GLOBAL gtid_mode=ON_PERMISSIVE;
SET GLOBAL gtid_mode=OFF_PERMISSIVE;
SET GLOBAL gtid_mode=OFF;

@@ -0,0 +1,7 @@
--Create the wth database
CREATE DATABASE wth;
-- Create user contosoapp that would own the application schema
CREATE ROLE CONTOSOAPP WITH LOGIN NOSUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION PASSWORD 'OCPHack8';

@@ -0,0 +1,11 @@
# Start the VMSS that host the AKS nodes. There are only two VMSS in the resource group, one each for the system pool and user pool.
# Change the value of the resource group, if required.
export vmss_user=$(az vmss list -g MC_PizzaAppEast_pizzaappeast_eastus --query '[].name' | grep userpool | tr -d "," | tr -d '"')
export vmss_system=$(az vmss list -g MC_PizzaAppEast_pizzaappeast_eastus --query '[].name' | grep systempool | tr -d "," | tr -d '"')
# Now start the VM scale sets
az vmss start -g MC_PizzaAppEast_pizzaappeast_eastus -n $vmss_system
az vmss start -g MC_PizzaAppEast_pizzaappeast_eastus -n $vmss_user
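Once the scale sets are running, the nodes can take a few minutes to rejoin the cluster. A minimal wait loop (assuming `az aks get-credentials` has already pointed kubectl at this cluster) might look like:

```shell
# Poll until every AKS node reports Ready; give up after ~5 minutes
for ((i = 0 ; i < 30 ; i++)); do
  # Count node lines whose STATUS column is not exactly "Ready"
  not_ready=$(kubectl get nodes --no-headers | grep -cv " Ready " || true)
  if [ "$not_ready" -eq 0 ]; then
    echo "All nodes are Ready"
    break
  fi
  sleep 10
done
```

This mirrors the polling style the other scripts in this hack use; `|| true` keeps the count assignment from failing when grep finds no non-Ready lines.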

@@ -0,0 +1,11 @@
# Stop the VMSS that host the AKS nodes to stop incurring compute charges. There are only two VMSS in the resource group, one each for the system pool and user pool.
# Change the value of the resource group, if required.
export vmss_user=$(az vmss list -g MC_PizzaAppEast_pizzaappeast_eastus --query '[].name' | grep userpool | tr -d "," | tr -d '"')
export vmss_system=$(az vmss list -g MC_PizzaAppEast_pizzaappeast_eastus --query '[].name' | grep systempool | tr -d "," | tr -d '"')
# Now stop the VM scale sets
az vmss stop -g MC_PizzaAppEast_pizzaappeast_eastus -n $vmss_user
az vmss stop -g MC_PizzaAppEast_pizzaappeast_eastus -n $vmss_system

@@ -0,0 +1,106 @@
{{ if eq .Values.appConfig.databaseType "mysql" }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ .Values.infrastructure.appName }}
namespace: "{{ .Values.infrastructure.namespace }}"
spec:
replicas: 1
serviceName: "{{ .Values.infrastructure.appName }}-external"
selector:
matchLabels:
app: {{ .Values.application.labelValue }}
template:
metadata:
labels:
app: {{ .Values.application.labelValue }}
spec:
containers:
- image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
name: {{ .Values.infrastructure.appName }}
resources:
requests:
memory: "{{ .Values.resources.requests.memory }}"
cpu: "{{ .Values.resources.requests.cpu }}"
limits:
memory: "{{ .Values.resources.limits.memory }}"
cpu: "{{ .Values.resources.limits.cpu }}"
env:
- name: APP_DATASOURCE_DRIVER
value: "{{ .Values.appSettings.mysql.driverClass }}"
- name: APP_HIBERNATE_DIALECT
value: "{{ .Values.appSettings.mysql.dialect }}"
- name: APP_HIBERNATE_HBM2DDL_AUTO
value: "{{ .Values.globalConfig.hibernateDdlAuto }}"
- name: APP_PORT
value: "{{ .Values.appConfig.webPort }}"
- name: APP_CONTEXT_PATH
value: "{{ .Values.appConfig.webContext }}"
- name: APP_BRAINTREE_MERCHANT_ID
valueFrom:
secretKeyRef:
name: "{{ .Values.globalConfig.secretName }}"
key: app_braintree_merchant_id
- name: APP_BRAINTREE_PUBLIC_KEY
valueFrom:
secretKeyRef:
name: "{{ .Values.globalConfig.secretName }}"
key: app_braintree_public_key
- name: APP_BRAINTREE_PRIVATE_KEY
valueFrom:
secretKeyRef:
name: "{{ .Values.globalConfig.secretName }}"
key: app_braintree_private_key
- name: APP_RECAPTCHA_PUBLIC_KEY
valueFrom:
secretKeyRef:
name: "{{ .Values.globalConfig.secretName }}"
key: app_recaptcha_public_key
- name: APP_RECAPTCHA_PRIVATE_KEY
valueFrom:
secretKeyRef:
name: "{{ .Values.globalConfig.secretName }}"
key: app_recaptcha_private_key
- name: APP_DATASOURCE_URL
valueFrom:
secretKeyRef:
name: "{{ .Values.globalConfig.secretName }}"
key: app_datasource_url
- name: APP_DATASOURCE_USERNAME
valueFrom:
secretKeyRef:
name: "{{ .Values.globalConfig.secretName }}"
key: app_datasource_username
- name: APP_DATASOURCE_PASSWORD
valueFrom:
secretKeyRef:
name: "{{ .Values.globalConfig.secretName }}"
key: app_datasource_password
ports:
- containerPort: {{ .Values.appConfig.webPort }}
name: contosopizza
readinessProbe:
tcpSocket:
port: {{ .Values.appConfig.webPort }}
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 3
livenessProbe:
tcpSocket:
port: {{ .Values.appConfig.webPort }}
initialDelaySeconds: 15
failureThreshold: 5
periodSeconds: 16
volumeMounts:
- name: "contosopizza-persistent-storage"
mountPath: {{ .Values.infrastructure.dataVolume }}
volumeClaimTemplates:
- metadata:
name: contosopizza-persistent-storage
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "managed-premium"
resources:
requests:
storage: 1Gi
{{ end }}

@@ -0,0 +1,8 @@
{{ if eq .Values.infrastructure.namespace "default" }}
# Do not create namespace
{{ else }}
apiVersion: v1
kind: Namespace
metadata:
name: {{ .Values.infrastructure.namespace }}
{{ end }}

@@ -0,0 +1,16 @@
# These are secrets used to configure the application
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: "{{ .Values.globalConfig.secretName }}"
namespace: "{{ .Values.infrastructure.namespace }}"
data:
app_braintree_merchant_id: {{ .Values.globalConfig.brainTreeMerchantId | b64enc }}
app_braintree_public_key: {{ .Values.globalConfig.brainTreePublicKey | b64enc }}
app_braintree_private_key: {{ .Values.globalConfig.brainTreePrivateKey | b64enc }}
app_recaptcha_public_key: {{ .Values.globalConfig.recaptchaPublicKey | b64enc }}
app_recaptcha_private_key: {{ .Values.globalConfig.recaptchaPrivateKey | b64enc }}
app_datasource_url: {{ .Values.appConfig.dataSourceURL | b64enc }}
app_datasource_username: {{ .Values.appConfig.dataSourceUser | b64enc }}
app_datasource_password: {{ .Values.appConfig.dataSourcePassword | b64enc }}
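Since `b64enc` stores these values base64-encoded, you can spot-check a key in the rendered secret with `kubectl` and `base64 -d`. The namespace and secret name below come from this hack's values files (`contosoappmysql` / `contosopizza`); adjust them if you overrode those values.

```shell
# Read one key back out of the rendered secret and decode it
kubectl -n contosoappmysql get secret contosopizza \
  -o jsonpath='{.data.app_datasource_username}' | base64 -d
```

This is handy when the app cannot reach its database: it confirms the datasource credentials actually landed in the cluster as you intended.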

@@ -0,0 +1,14 @@
---
# This is the load balancer Service that routes traffic to the application
apiVersion: v1
kind: Service
metadata:
name: "{{ .Values.infrastructure.appName }}-external"
namespace: {{ .Values.infrastructure.namespace }}
spec:
type: "{{ .Values.service.type }}"
ports:
- port: {{ .Values.appConfig.webPort }}
protocol: {{ .Values.service.protocol }}
selector:
app: {{ .Values.application.labelValue }}

@@ -0,0 +1,6 @@
# helm uninstall wth-postgresql
helm uninstall wth-mysql
helm uninstall mysql-contosopizza
# helm uninstall postgres-contosopizza
echo ""
echo "Use 'kubectl get ns' to make sure your pods are not in a Terminating status before redeploying"

@@ -0,0 +1,27 @@
# Change the NSG firewall rule to restrict the Postgres and MySQL databases to the client machine only. The first step is to find your local client IP address.
echo -e "\n This script restricts access to your Postgres and MySQL databases to your computer only.
The variable myip gets the IP address of the shell environment this script runs in, be it a cloud shell or your own computer.
You can get your computer's IP address by browsing to https://ifconfig.me. So if the browser says it is 102.194.87.201, your myip=102.194.87.201/32.
\n"
myip=`curl -s ifconfig.me`/32
# In this resource group, there is only one NSG. Change the value of the resource group, if required
export rg_nsg="MC_OSSDBMigration_ossdbmigration_westus"
export nsg_name=`az network nsg list -g $rg_nsg --query "[].name" -o tsv`
# For this NSG, there are two rules for connecting to Postgres and MySQL.
export pg_nsg_rule_name=`az network nsg rule list -g $rg_nsg --nsg-name $nsg_name --query '[].[name]' | grep "TCP-5432" | sed 's/"//g'`
export my_nsg_rule_name=`az network nsg rule list -g $rg_nsg --nsg-name $nsg_name --query '[].[name]' | grep "TCP-3306" | sed 's/"//g'`
# Update the rule to allow access to Postgres and MySQL only from your client ip address - "myip"
az network nsg rule update -g $rg_nsg --nsg-name $nsg_name --name $my_nsg_rule_name --source-address-prefixes $myip
az network nsg rule update -g $rg_nsg --nsg-name $nsg_name --name $pg_nsg_rule_name --source-address-prefixes $myip

@@ -0,0 +1,78 @@
replicaCount: 1
# Change the application settings here
appConfig:
databaseType: "mysql" # mysql or postgres
#databaseType: "postgres" # mysql or postgres
#local example of Postgres JDBC Connection string
#dataSourceURL: "jdbc:postgresql://10.71.25.217:5432/wth" # your JDBC connection string goes here
#Azure example of Postgres JDBC Connection string
#dataSourceURL: "jdbc:postgresql://petepgdbtest01.postgres.database.azure.com:5432/wth?sslmode=require" # your JDBC connection string goes here
#local example of MySQL JDBC Connection string
dataSourceURL: "jdbc:mysql://10.71.215.5:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here
#Azure example of MySQL JDBC Connection string
#dataSourceURL: "jdbc:mysql://petewthmysql01.mysql.database.azure.com:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here
#local examples of dataSourceUser and dataSourcePassword
dataSourceUser: "root" # your database username goes here
dataSourcePassword: "OCPHack8" # your database password goes here
#Azure examples of dataSourceUser and dataSourcePassword
#dataSourceUser: "postgres@petepgdbtest01" # your database username goes here
#dataSourcePassword: "OCPHack8" # your database password goes here
webPort: 8083 # the port the app listens on
#webPort: 8082 # the port the app listens on
webContext: "pizzeria" # the application context http://hostname:port/webContext
# These settings apply to any database type used
globalConfig:
secretName: contosopizza
brainTreeMerchantId: "3fk8mrzyr665jb6d"
brainTreePublicKey: "72wqqdk75tmh44n9"
brainTreePrivateKey: "cf094c3345159aaa473a8f50d56c2e33"
recaptchaPublicKey: "6LfqpNsZAAAAACIMnNaeW7fS_-19pZ7K__dREk04"
recaptchaPrivateKey: "6LfqpNsZAAAAAGrUYTOGR69aUvzaRoz7f_tnMBeI"
hibernateDdlAuto: "create-only"
application:
labelValue: contosopizza
infrastructure:
namespace: contosopizza
appName: contosopizza
dataVolume: "/usr/local/contosopizza"
volumeName: "contosopizza"
image:
name: izzymsft/ubuntu-pizza
pullPolicy: IfNotPresent
tag: "1.0"
service:
type: LoadBalancer
port: 8082
protocol: TCP
resources:
limits:
cpu: 1000m
memory: 4096Mi
requests:
cpu: 256m
memory: 512Mi
volume:
size: 1Gi
storageClass: managed-premium
appSettings:
mysql:
dialect: "org.hibernate.dialect.MySQL57Dialect"
driverClass: "com.mysql.jdbc.Driver"
# postgres:
# dialect: "org.hibernate.dialect.PostgreSQL95Dialect"
# driverClass: "org.postgresql.Driver"

@@ -0,0 +1,13 @@
replicaCount: 1
# Change the application settings here
appConfig:
  databaseType: "mysql" # mysql or postgres
  dataSourceURL: "jdbc:mysql://XXX.XXX.XXX.XXX:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here
  dataSourceUser: "root" # your database username goes here
  dataSourcePassword: "OCPHack8" # your database password goes here
  webPort: 8081 # the port the app listens on
  webContext: "pizzeria" # the application context http://hostname:port/webContext
infrastructure:
  namespace: contosoappmysql


@@ -0,0 +1,13 @@
replicaCount: 1
# Change the application settings here
appConfig:
  databaseType: "mysql" # mysql or postgres
  dataSourceURL: "jdbc:mysql://XXX.XXX.XXX.XXX:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here
  dataSourceUser: "root" # your database username goes here
  dataSourcePassword: "OCPHack8" # your database password goes here
  webPort: 8081 # the port the app listens on
  webContext: "pizzeria" # the application context http://hostname:port/webContext
infrastructure:
  namespace: contosoappmysql


@@ -0,0 +1,78 @@
replicaCount: 1
# Change the application settings here
appConfig:
  databaseType: "mysql" # mysql or postgres
  #databaseType: "postgres" # mysql or postgres
  #local example of Postgres JDBC Connection string
  #dataSourceURL: "jdbc:postgresql://10.71.25.217:5432/wth" # your JDBC connection string goes here
  #Azure example of Postgres JDBC Connection string
  #dataSourceURL: "jdbc:postgresql://petepgdbtest01.postgres.database.azure.com:5432/wth?sslmode=require" # your JDBC connection string goes here
  #local example of MySQL JDBC Connection string
  dataSourceURL: "jdbc:mysql://10.71.215.5:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here
  #Azure example of MySQL JDBC Connection string
  #dataSourceURL: "jdbc:mysql://petewthmysql01.mysql.database.azure.com:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here
  #local examples of dataSourceUser and dataSourcePassword
  dataSourceUser: "root" # your database username goes here
  dataSourcePassword: "OCPHack8" # your database password goes here
  #Azure examples of dataSourceUser and dataSourcePassword
  #dataSourceUser: "postgres@petepgdbtest01" # your database username goes here
  #dataSourcePassword: "OCPHack8" # your database password goes here
  webPort: 8083 # the port the app listens on
  #webPort: 8082 # the port the app listens on
  webContext: "pizzeria" # the application context http://hostname:port/webContext
# These changes apply to any database type used
globalConfig:
  secretName: contosopizza
  brainTreeMerchantId: "3fk8mrzyr665jb6d"
  brainTreePublicKey: "72wqqdk75tmh44n9"
  brainTreePrivateKey: "cf094c3345159aaa473a8f50d56c2e33"
  recaptchaPublicKey: "6LfqpNsZAAAAACIMnNaeW7fS_-19pZ7K__dREk04"
  recaptchaPrivateKey: "6LfqpNsZAAAAAGrUYTOGR69aUvzaRoz7f_tnMBeI"
  hibernateDdlAuto: "create-only"
application:
  labelValue: contosopizza
infrastructure:
  namespace: contosopizza
  appName: contosopizza
  dataVolume: "/usr/local/contosopizza"
  volumeName: "contosopizza"
image:
  name: izzymsft/ubuntu-pizza
  pullPolicy: IfNotPresent
  tag: "1.0"
service:
  type: LoadBalancer
  port: 8082
  protocol: TCP
resources:
  limits:
    cpu: 1000m
    memory: 4096Mi
  requests:
    cpu: 256m
    memory: 512Mi
  volume:
    size: 1Gi
    storageClass: managed-premium
appSettings:
  mysql:
    dialect: "org.hibernate.dialect.MySQL57Dialect"
    driverClass: "com.mysql.jdbc.Driver"
  # postgres:
  #   dialect: "org.hibernate.dialect.PostgreSQL95Dialect"
  #   driverClass: "org.postgresql.Driver"
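The commented alternatives above come down to a single switch: `databaseType` selects which `appSettings` entry (Hibernate dialect and JDBC driver class) applies. A minimal Python sketch of that lookup, mirroring the values above (the dict and helper name are illustrative, not part of the chart):

```python
# Mirrors the appSettings block: databaseType selects the Hibernate
# dialect and JDBC driver class to use.
APP_SETTINGS = {
    "mysql": {
        "dialect": "org.hibernate.dialect.MySQL57Dialect",
        "driverClass": "com.mysql.jdbc.Driver",
    },
    "postgres": {
        "dialect": "org.hibernate.dialect.PostgreSQL95Dialect",
        "driverClass": "org.postgresql.Driver",
    },
}

def settings_for(database_type: str) -> dict:
    """Return the dialect/driver settings for a databaseType value."""
    try:
        return APP_SETTINGS[database_type]
    except KeyError:
        raise ValueError(f"databaseType must be one of {sorted(APP_SETTINGS)}")

print(settings_for("mysql")["driverClass"])  # com.mysql.jdbc.Driver
```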


@@ -0,0 +1,11 @@
apiVersion: v2
name: MySQL Database Server
description: A Helm chart for deploying a single node MySQL database server
type: application
version: 2.0.0
appVersion: "5.7"


@@ -0,0 +1,56 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysqld-config
  namespace: "{{ .Values.infrastructure.namespace }}"
data:
  mysqld.cnf: |-
    # Mounted at /etc/mysql/mysql.conf.d/mysqld.cnf
    [mysqld]
    lower_case_table_names = 1
    server_id = 3
    pid-file = /var/run/mysqld/mysqld.pid
    socket = /var/run/mysqld/mysqld.sock
    datadir = /usr/local/mysql/data
    explicit_defaults_for_timestamp = on
    #log-error = /var/log/mysql/error.log
    # Disabling symbolic-links is recommended to prevent assorted security risks
    symbolic-links=0
    # The value of log_bin is the base name of the sequence of binlog files.
    log_bin = mysql-bin
    # The binlog_format must be set to ROW or row.
    binlog_format = row
    # The binlog_row_image must be set to FULL or full.
    binlog_row_image = full
    # This is the number of days for automatic binlog file removal. The default is 0, which means no automatic removal.
    expire_logs_days = 7
    # Boolean which enables/disables support for including the original SQL statement in the binlog entry.
    binlog_rows_query_log_events = on
    # Whether updates received by a replica server from a replication source server should be logged to the replica's own binary log.
    log_slave_updates = on
    # Boolean which specifies whether GTID mode of the MySQL server is enabled or not.
    gtid_mode = on
    # Boolean which instructs the server whether or not to enforce GTID consistency by allowing
    # the execution of statements that can be logged in a transactionally safe manner; required when using GTIDs.
    enforce_gtid_consistency = on
    # The number of seconds the server waits for activity on an interactive connection before closing it.
    interactive_timeout = 36000
    # The number of seconds the server waits for activity on a noninteractive connection before closing it.
    wait_timeout = 72000
    # end of file


@@ -0,0 +1,74 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.infrastructure.appName }}
  namespace: "{{ .Values.infrastructure.namespace }}"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Values.application.labelValue }}
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: {{ .Values.application.labelValue }}
    spec:
      containers:
      - image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
        name: {{ .Values.infrastructure.appName }}
        resources:
          requests:
            memory: "{{ .Values.resources.requests.memory }}"
            cpu: "{{ .Values.resources.requests.cpu }}"
          limits:
            memory: "{{ .Values.resources.limits.memory }}"
            cpu: "{{ .Values.resources.limits.cpu }}"
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysqld
              key: mysql_password
        ports:
        - containerPort: {{ .Values.service.port }}
          name: mysql
        readinessProbe:
          tcpSocket:
            port: {{ .Values.service.port }}
          initialDelaySeconds: 5
          periodSeconds: 10
          failureThreshold: 3
        livenessProbe:
          tcpSocket:
            port: {{ .Values.service.port }}
          initialDelaySeconds: 15
          failureThreshold: 5
          periodSeconds: 16
        volumeMounts:
        - name: "{{ .Values.infrastructure.volumeName }}-volume"
          mountPath: {{ .Values.infrastructure.dataVolume }}
        - name: mysqld-configuration2
          mountPath: /etc/mysql/mysql.conf.d
      volumes:
      - name: "{{ .Values.infrastructure.volumeName }}-volume"
        persistentVolumeClaim:
          claimName: "{{ .Values.infrastructure.volumeName }}-persistent-storage"
      - name: mysqld-configuration2
        configMap:
          name: mysqld-config
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "{{ .Values.infrastructure.volumeName }}-persistent-storage"
  namespace: "{{ .Values.infrastructure.namespace }}"
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: {{ .Values.resources.volume.storageClass }}
  resources:
    requests:
      storage: {{ .Values.resources.volume.size }}
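The resource values rendered into this Deployment use Kubernetes quantity suffixes: a CPU value of `1000m` means one full core, and memory in `Mi` means mebibytes (2^20 bytes). A hedged Python sketch of how those two suffixes translate (helper names are my own, and only the suffixes used in this chart are handled):

```python
# Sketch: interpret the Kubernetes quantity suffixes used in the
# resources blocks above ("1000m" CPU, "4096Mi" memory).
def parse_cpu(quantity: str) -> float:
    """'1000m' -> 1.0 cores; a plain number is whole cores."""
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000.0
    return float(quantity)

def parse_memory_mi(quantity: str) -> int:
    """'512Mi' -> bytes (Mi = 2**20)."""
    if quantity.endswith("Mi"):
        return int(quantity[:-2]) * 1024 * 1024
    raise ValueError("only the Mi suffix is handled in this sketch")

print(parse_cpu("1000m"), parse_memory_mi("512Mi"))  # 1.0 536870912
```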


@@ -0,0 +1,8 @@
{{ if eq .Values.infrastructure.namespace "default" }}
# Do not create namespace
{{ else }}
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.infrastructure.namespace }}
{{ end }}
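This template wraps the Namespace manifest in a Helm conditional so that nothing is created when the chart targets the built-in `default` namespace. The same decision, sketched in Python (function name is illustrative):

```python
# Sketch of the template's conditional: emit a Namespace manifest only
# when the target namespace is not "default".
def namespace_manifest(namespace: str) -> str:
    if namespace == "default":
        return ""  # do not create the namespace
    return (
        "apiVersion: v1\n"
        "kind: Namespace\n"
        "metadata:\n"
        f"  name: {namespace}\n"
    )

print(namespace_manifest("default") == "")  # True
```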


@@ -0,0 +1,9 @@
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: mysqld
  namespace: "{{ .Values.infrastructure.namespace }}"
data:
  mysql_default_user: {{ .Values.infrastructure.username | b64enc }}
  mysql_password: {{ .Values.infrastructure.password | b64enc }}
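Values under `data` in an Opaque Secret must be base64-encoded, which is what Helm's `b64enc` function does here at render time. A small Python equivalent, applied to the credentials that appear in this chart's values file (`izzy` / `OCPHack8`):

```python
import base64

# Equivalent of Helm's b64enc: base64-encode a string for a Secret's
# data field.
def b64enc(value: str) -> str:
    return base64.b64encode(value.encode("utf-8")).decode("ascii")

print(b64enc("izzy"))      # aXp6eQ==
print(b64enc("OCPHack8"))  # T0NQSGFjazg=
```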


@@ -0,0 +1,14 @@
---
# This is the external load balancer Service, routing traffic to the MySQL Pod
apiVersion: v1
kind: Service
metadata:
  name: "{{ .Values.infrastructure.appName }}-external"
  namespace: {{ .Values.infrastructure.namespace }}
spec:
  type: "{{ .Values.service.type }}"
  ports:
  - port: {{ .Values.service.port }}
    protocol: {{ .Values.service.protocol }}
  selector:
    app: {{ .Values.application.labelValue }}


@@ -0,0 +1,34 @@
replicaCount: 1
application:
  labelValue: mysql
infrastructure:
  namespace: mysql
  appName: mysql
  username: izzy
  password: "OCPHack8"
  dataVolume: "/usr/local/mysql"
  volumeName: "wthmysql"
image:
  name: mysql
  pullPolicy: IfNotPresent
  tag: "5.7.32"
service:
  type: LoadBalancer
  port: 3306
  protocol: TCP
resources:
  limits:
    cpu: 1000m
    memory: 4096Mi
  requests:
    cpu: 750m
    memory: 2048Mi
  volume:
    size: 5Gi
    storageClass: managed-premium


@@ -0,0 +1,11 @@
apiVersion: v2
name: PostgreSQL
description: A Helm chart for deploying a single node PostgreSQL database server
type: application
version: 2.0.0
appVersion: "11.6"


@@ -0,0 +1,91 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.infrastructure.appName }}
  namespace: "{{ .Values.infrastructure.namespace }}"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Values.application.labelValue }}
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: {{ .Values.application.labelValue }}
    spec:
      securityContext:
        runAsUser: 0
        runAsGroup: 999
        fsGroup: 999
      containers:
      - image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
        name: {{ .Values.infrastructure.appName }}
        args: ["-c", "config_file=/etc/postgresql/postgresql.conf"]
        resources:
          requests:
            memory: "{{ .Values.resources.requests.memory }}"
            cpu: "{{ .Values.resources.requests.cpu }}"
          limits:
            memory: "{{ .Values.resources.limits.memory }}"
            cpu: "{{ .Values.resources.limits.cpu }}"
        env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres
              key: postgres_password
        - name: PGDATA
          value: {{ .Values.infrastructure.dataPath }}
        ports:
        - containerPort: {{ .Values.service.port }}
          name: postgres
        readinessProbe:
          tcpSocket:
            port: {{ .Values.service.port }}
          initialDelaySeconds: 5
          periodSeconds: 10
          failureThreshold: 3
        livenessProbe:
          tcpSocket:
            port: {{ .Values.service.port }}
          initialDelaySeconds: 15
          failureThreshold: 5
          periodSeconds: 16
        volumeMounts:
        - name: "{{ .Values.infrastructure.appName }}-volume"
          mountPath: {{ .Values.infrastructure.dataVolume }}
        - name: "postgresql-configuration"
          mountPath: "/etc/postgresql"
        - name: "postgresql-tls-keys"
          mountPath: "/etc/postgresql/keys"
      volumes:
      - name: "{{ .Values.infrastructure.appName }}-volume"
        persistentVolumeClaim:
          claimName: "{{ .Values.infrastructure.appName }}-persistent-storage"
      - name: postgresql-configuration
        configMap:
          name: postgresql-config
      - name: postgresql-tls-keys
        secret:
          secretName: postgresql-tls-secret
          items:
          - key: tls.crt
            path: "tls.crt"
          - key: tls.key
            path: "tls.key"
            mode: 0640
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "{{ .Values.infrastructure.appName }}-persistent-storage"
  namespace: "{{ .Values.infrastructure.namespace }}"
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: {{ .Values.resources.volume.storageClass }}
  resources:
    requests:
      storage: {{ .Values.resources.volume.size }}


@@ -0,0 +1,8 @@
{{ if eq .Values.infrastructure.namespace "default" }}
# Do not create namespace
{{ else }}
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.infrastructure.namespace }}
{{ end }}


@@ -0,0 +1,699 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: postgresql-config
namespace: {{ .Values.infrastructure.namespace }}
data:
postgresql.conf: |-
# Mounted at /etc/postgresql/postgresql.conf
# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
# name = value
#
# (The "=" is optional.) Whitespace may be used. Comments are introduced with
# "#" anywhere on a line. The complete list of parameter names and allowed
# values can be found in the PostgreSQL documentation.
#
# The commented-out settings shown in this file represent the default values.
# Re-commenting a setting is NOT sufficient to revert it to the default value;
# you need to reload the server.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal. If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, run "pg_ctl reload", or execute
# "SELECT pg_reload_conf()". Some parameters, which are marked below,
# require a server shutdown and restart to take effect.
#
# Any parameter can also be given as a command-line option to the server, e.g.,
# "postgres -c log_connections=on". Some parameters can be changed at run time
# with the "SET" SQL command.
#
# Memory units: kB = kilobytes Time units: ms = milliseconds
# MB = megabytes s = seconds
# GB = gigabytes min = minutes
# TB = terabytes h = hours
# d = days
#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.
#data_directory = 'ConfigDir' # use data in another directory
# (change requires restart)
#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file
# (change requires restart)
#ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file
# (change requires restart)
# If external_pid_file is not explicitly set, no extra PID file is written.
#external_pid_file = '' # write an extra PID file
# (change requires restart)
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
listen_addresses = '*'
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
#port = 5432 # (change requires restart)
max_connections = 100 # (change requires restart)
#superuser_reserved_connections = 3 # (change requires restart)
#unix_socket_directories = '/var/run/postgresql' # comma-separated list of directories
# (change requires restart)
#unix_socket_group = '' # (change requires restart)
#unix_socket_permissions = 0777 # begin with 0 to use octal notation
# (change requires restart)
#bonjour = off # advertise server via Bonjour
# (change requires restart)
#bonjour_name = '' # defaults to the computer name
# (change requires restart)
# - TCP Keepalives -
# see "man 7 tcp" for details
#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;
# 0 selects the system default
#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;
# 0 selects the system default
#tcp_keepalives_count = 0 # TCP_KEEPCNT;
# 0 selects the system default
# - Authentication -
#authentication_timeout = 1min # 1s-600s
#password_encryption = md5 # md5 or scram-sha-256
#db_user_namespace = off
# GSSAPI using Kerberos
#krb_server_keyfile = ''
#krb_caseins_users = off
# - SSL -
ssl = on
ssl_ca_file = '/etc/postgresql/keys/tls.crt'
ssl_cert_file = '/etc/postgresql/keys/tls.crt'
#ssl_crl_file = ''
ssl_key_file = '/etc/postgresql/keys/tls.key'
#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
#ssl_prefer_server_ciphers = on
#ssl_ecdh_curve = 'prime256v1'
#ssl_dh_params_file = ''
#ssl_passphrase_command = ''
#ssl_passphrase_command_supports_reload = off
#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------
# - Memory -
shared_buffers = 128MB # min 128kB
# (change requires restart)
#huge_pages = try # on, off, or try
# (change requires restart)
#temp_buffers = 8MB # min 800kB
#max_prepared_transactions = 0 # zero disables the feature
# (change requires restart)
# Caution: it is not advisable to set max_prepared_transactions nonzero unless
# you actively intend to use prepared transactions.
#work_mem = 4MB # min 64kB
#maintenance_work_mem = 64MB # min 1MB
#autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem
#max_stack_depth = 2MB # min 100kB
dynamic_shared_memory_type = posix # the default is the first option
# supported by the operating system:
# posix
# sysv
# windows
# mmap
# use none to disable dynamic shared memory
# (change requires restart)
# - Disk -
#temp_file_limit = -1 # limits per-process temp file space
# in kB, or -1 for no limit
# - Kernel Resources -
#max_files_per_process = 1000 # min 25
# (change requires restart)
# - Cost-Based Vacuum Delay -
#vacuum_cost_delay = 0 # 0-100 milliseconds
#vacuum_cost_page_hit = 1 # 0-10000 credits
#vacuum_cost_page_miss = 10 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
#vacuum_cost_limit = 200 # 1-10000 credits
# - Background Writer -
#bgwriter_delay = 200ms # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100 # max buffers written/round, 0 disables
#bgwriter_lru_multiplier = 2.0 # 0-10.0 multiplier on buffers scanned/round
#bgwriter_flush_after = 512kB # measured in pages, 0 disables
# - Asynchronous Behavior -
#effective_io_concurrency = 1 # 1-1000; 0 disables prefetching
#max_worker_processes = 8 # (change requires restart)
#max_parallel_maintenance_workers = 2 # taken from max_parallel_workers
#max_parallel_workers_per_gather = 2 # taken from max_parallel_workers
#parallel_leader_participation = on
#max_parallel_workers = 8 # maximum number of max_worker_processes that
# can be used in parallel operations
#old_snapshot_threshold = -1 # 1min-60d; -1 disables; 0 is immediate
# (change requires restart)
#backend_flush_after = 0 # measured in pages, 0 disables
#------------------------------------------------------------------------------
# WRITE-AHEAD LOG
#------------------------------------------------------------------------------
# - Settings -
wal_level = logical # minimal, replica, or logical
# (change requires restart)
#fsync = on # flush data to disk for crash safety
# (turning this off can cause
# unrecoverable data corruption)
#synchronous_commit = on # synchronization level;
# off, local, remote_write, remote_apply, or on
#wal_sync_method = fsync # the default is the first option
# supported by the operating system:
# open_datasync
# fdatasync (default on Linux)
# fsync
# fsync_writethrough
# open_sync
#full_page_writes = on # recover from partial page writes
#wal_compression = off # enable compression of full-page writes
#wal_log_hints = off # also do full page writes of non-critical updates
# (change requires restart)
#wal_buffers = -1 # min 32kB, -1 sets based on shared_buffers
# (change requires restart)
#wal_writer_delay = 200ms # 1-10000 milliseconds
#wal_writer_flush_after = 1MB # measured in pages, 0 disables
#commit_delay = 0 # range 0-100000, in microseconds
#commit_siblings = 5 # range 1-1000
# - Checkpoints -
#checkpoint_timeout = 5min # range 30s-1d
max_wal_size = 1GB
min_wal_size = 80MB
#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0
#checkpoint_flush_after = 256kB # measured in pages, 0 disables
#checkpoint_warning = 30s # 0 disables
# - Archiving -
#archive_mode = off # enables archiving; off, on, or always
# (change requires restart)
#archive_command = '' # command to use to archive a logfile segment
# placeholders: %p = path of file to archive
# %f = file name only
# e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
#archive_timeout = 0 # force a logfile segment switch after this
# number of seconds; 0 disables
#------------------------------------------------------------------------------
# REPLICATION
#------------------------------------------------------------------------------
# - Sending Servers -
# Set these on the master and on any standby that will send replication data.
#max_wal_senders = 10 # max number of walsender processes
# (change requires restart)
#wal_keep_segments = 0 # in logfile segments; 0 disables
#wal_sender_timeout = 60s # in milliseconds; 0 disables
#max_replication_slots = 10 # max number of replication slots
# (change requires restart)
#track_commit_timestamp = off # collect timestamp of transaction commit
# (change requires restart)
# - Master Server -
# These settings are ignored on a standby server.
#synchronous_standby_names = '' # standby servers that provide sync rep
# method to choose sync standbys, number of sync standbys,
# and comma-separated list of application_name
# from standby(s); '*' = all
#vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is delayed
# - Standby Servers -
# These settings are ignored on a master server.
#hot_standby = on # "off" disallows queries during recovery
# (change requires restart)
#max_standby_archive_delay = 30s # max delay before canceling queries
# when reading WAL from archive;
# -1 allows indefinite delay
#max_standby_streaming_delay = 30s # max delay before canceling queries
# when reading streaming WAL;
# -1 allows indefinite delay
#wal_receiver_status_interval = 10s # send replies at least this often
# 0 disables
#hot_standby_feedback = off # send info from standby to prevent
# query conflicts
#wal_receiver_timeout = 60s # time that receiver waits for
# communication from master
# in milliseconds; 0 disables
#wal_retrieve_retry_interval = 5s # time to wait before retrying to
# retrieve WAL after a failed attempt
# - Subscribers -
# These settings are ignored on a publisher.
#max_logical_replication_workers = 4 # taken from max_worker_processes
# (change requires restart)
#max_sync_workers_per_subscription = 2 # taken from max_logical_replication_workers
#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------
# - Planner Method Configuration -
#enable_bitmapscan = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_indexscan = on
#enable_indexonlyscan = on
#enable_material = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_parallel_append = on
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on
#enable_partitionwise_join = off
#enable_partitionwise_aggregate = off
#enable_parallel_hash = on
#enable_partition_pruning = on
# - Planner Cost Constants -
#seq_page_cost = 1.0 # measured on an arbitrary scale
#random_page_cost = 4.0 # same scale as above
#cpu_tuple_cost = 0.01 # same scale as above
#cpu_index_tuple_cost = 0.005 # same scale as above
#cpu_operator_cost = 0.0025 # same scale as above
#parallel_tuple_cost = 0.1 # same scale as above
#parallel_setup_cost = 1000.0 # same scale as above
#jit_above_cost = 100000 # perform JIT compilation if available
# and query more expensive than this;
# -1 disables
#jit_inline_above_cost = 500000 # inline small functions if query is
# more expensive than this; -1 disables
#jit_optimize_above_cost = 500000 # use expensive JIT optimizations if
# query is more expensive than this;
# -1 disables
#min_parallel_table_scan_size = 8MB
#min_parallel_index_scan_size = 512kB
#effective_cache_size = 4GB
# - Genetic Query Optimizer -
#geqo = on
#geqo_threshold = 12
#geqo_effort = 5 # range 1-10
#geqo_pool_size = 0 # selects default based on effort
#geqo_generations = 0 # selects default based on effort
#geqo_selection_bias = 2.0 # range 1.5-2.0
#geqo_seed = 0.0 # range 0.0-1.0
# - Other Planner Options -
#default_statistics_target = 100 # range 1-10000
#constraint_exclusion = partition # on, off, or partition
#cursor_tuple_fraction = 0.1 # range 0.0-1.0
#from_collapse_limit = 8
#join_collapse_limit = 8 # 1 disables collapsing of explicit
# JOIN clauses
#force_parallel_mode = off
#jit = off # allow JIT compilation
#------------------------------------------------------------------------------
# REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
#log_destination = 'stderr' # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
#logging_collector = off # Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
#log_directory = 'log' # directory where log files are written,
# can be absolute or relative to PGDATA
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600 # creation mode for log files,
# begin with 0 to use octal notation
#log_truncate_on_rotation = off # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
#log_rotation_age = 1d # Automatic rotation of logfiles will
# happen after that time. 0 disables.
#log_rotation_size = 10MB # Automatic rotation of logfiles will
# happen after that much log output.
# 0 disables.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#syslog_sequence_numbers = on
#syslog_split_messages = on
# This is only relevant when logging to eventlog (win32):
# (change requires restart)
#event_source = 'PostgreSQL'
# - When to Log -
#log_min_messages = warning # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
#log_min_error_statement = error # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic (effectively off)
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this number
# of milliseconds
# - What to Log -
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default # terse, default, or verbose messages
#log_hostname = off
#log_line_prefix = '%m [%p] ' # special values:
# %a = application name
# %u = user name
# %d = database name
# %r = remote host and port
# %h = remote host
# %p = process ID
# %t = timestamp without milliseconds
# %m = timestamp with milliseconds
# %n = timestamp with milliseconds (as a Unix epoch)
# %i = command tag
# %e = SQL state
# %c = session ID
# %l = session line number
# %s = session start timestamp
# %v = virtual transaction ID
# %x = transaction ID (0 if none)
# %q = stop here in non-session
# processes
# %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off # log lock waits >= deadlock_timeout
#log_statement = 'none' # none, ddl, mod, all
#log_replication_commands = off
#log_temp_files = -1 # log temporary files equal or larger
# than the specified size in kilobytes;
# -1 disables, 0 logs all temp files
log_timezone = 'Etc/UTC'
#------------------------------------------------------------------------------
# PROCESS TITLE
#------------------------------------------------------------------------------
#cluster_name = '' # added to process titles if nonempty
# (change requires restart)
#update_process_title = on
#------------------------------------------------------------------------------
# STATISTICS
#------------------------------------------------------------------------------
# - Query and Index Statistics Collector -
#track_activities = on
#track_counts = on
#track_io_timing = off
#track_functions = none # none, pl, all
#track_activity_query_size = 1024 # (change requires restart)
#stats_temp_directory = 'pg_stat_tmp'
# - Monitoring -
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off
#------------------------------------------------------------------------------
# AUTOVACUUM
#------------------------------------------------------------------------------
#autovacuum = on # Enable autovacuum subprocess? 'on'
# requires track_counts to also be on.
#log_autovacuum_min_duration = -1 # -1 disables, 0 logs all actions and
# their durations, > 0 logs only
# actions running at least this number
# of milliseconds.
#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
# (change requires restart)
#autovacuum_naptime = 1min # time between autovacuum runs
#autovacuum_vacuum_threshold = 50 # min number of row updates before
# vacuum
#autovacuum_analyze_threshold = 50 # min number of row updates before
# analyze
#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum
#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum
# (change requires restart)
#autovacuum_multixact_freeze_max_age = 400000000 # maximum multixact age
# before forced vacuum
# (change requires restart)
#autovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for
# autovacuum, in milliseconds;
# -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for
# autovacuum, -1 means use
# vacuum_cost_limit
#------------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#------------------------------------------------------------------------------
# - Statement Behavior -
#client_min_messages = notice # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# log
# notice
# warning
# error
#search_path = '"$user", public' # schema names
#row_security = on
#default_tablespace = '' # a tablespace name, '' uses the default
#temp_tablespaces = '' # a list of tablespace names, '' uses
# only default tablespace
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0 # in milliseconds, 0 is disabled
#lock_timeout = 0 # in milliseconds, 0 is disabled
#idle_in_transaction_session_timeout = 0 # in milliseconds, 0 is disabled
#vacuum_freeze_min_age = 50000000
#vacuum_freeze_table_age = 150000000
#vacuum_multixact_freeze_min_age = 5000000
#vacuum_multixact_freeze_table_age = 150000000
#vacuum_cleanup_index_scale_factor = 0.1 # fraction of total number of tuples
# before index cleanup, 0 always performs
# index cleanup
#bytea_output = 'hex' # hex, escape
#xmlbinary = 'base64'
#xmloption = 'content'
#gin_fuzzy_search_limit = 0
#gin_pending_list_limit = 4MB
# - Locale and Formatting -
datestyle = 'iso, mdy'
#intervalstyle = 'postgres'
timezone = 'Etc/UTC'
#timezone_abbreviations = 'Default' # Select the set of available time zone
# abbreviations. Currently, there are
# Default
# Australia (historical usage)
# India
# You can create your own file in
# share/timezonesets/.
#extra_float_digits = 0 # min -15, max 3
#client_encoding = sql_ascii # actually, defaults to database
# encoding
# These settings are initialized by initdb, but they can be changed.
lc_messages = 'en_US.utf8' # locale for system error message
# strings
lc_monetary = 'en_US.utf8' # locale for monetary formatting
lc_numeric = 'en_US.utf8' # locale for number formatting
lc_time = 'en_US.utf8' # locale for time formatting
# default configuration for text search
default_text_search_config = 'pg_catalog.english'
# - Shared Library Preloading -
#shared_preload_libraries = '' # (change requires restart)
#local_preload_libraries = ''
#session_preload_libraries = ''
#jit_provider = 'llvmjit' # JIT library to use
# - Other Defaults -
#dynamic_library_path = '$libdir'
#------------------------------------------------------------------------------
# LOCK MANAGEMENT
#------------------------------------------------------------------------------
#deadlock_timeout = 1s
#max_locks_per_transaction = 64 # min 10
# (change requires restart)
#max_pred_locks_per_transaction = 64 # min 10
# (change requires restart)
#max_pred_locks_per_relation = -2 # negative values mean
# (max_pred_locks_per_transaction
# / -max_pred_locks_per_relation) - 1
#max_pred_locks_per_page = 2 # min 0
#------------------------------------------------------------------------------
# VERSION AND PLATFORM COMPATIBILITY
#------------------------------------------------------------------------------
# - Previous PostgreSQL Versions -
#array_nulls = on
#backslash_quote = safe_encoding # on, off, or safe_encoding
#default_with_oids = off
#escape_string_warning = on
#lo_compat_privileges = off
#operator_precedence_warning = off
#quote_all_identifiers = off
#standard_conforming_strings = on
#synchronize_seqscans = on
# - Other Platforms and Clients -
#transform_null_equals = off
#------------------------------------------------------------------------------
# ERROR HANDLING
#------------------------------------------------------------------------------
#exit_on_error = off # terminate session on any error?
#restart_after_crash = on # reinitialize after backend crash?
#data_sync_retry = off # retry or panic on failure to fsync
# data?
# (change requires restart)
#------------------------------------------------------------------------------
# CONFIG FILE INCLUDES
#------------------------------------------------------------------------------
# These options allow settings to be loaded from files other than the
# default postgresql.conf. Note that these are directives, not variable
# assignments, so they can usefully be given more than once.
#include_dir = '...' # include files ending in '.conf' from
# a directory, e.g., 'conf.d'
#include_if_exists = '...' # include file only if it exists
#include = '...' # include file
#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------
# Add settings for extensions here

@@ -0,0 +1,9 @@
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: postgres
namespace: "{{ .Values.infrastructure.namespace }}"
data:
postgres_default_user: {{ .Values.infrastructure.username | b64enc }}
postgres_password: {{ .Values.infrastructure.password | b64enc }}
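Helm's `b64enc` renders the Secret data above as base64. As a quick sanity check, the same encoding can be reproduced (and decoded for debugging) with coreutils `base64`; the value `postgres` here matches `infrastructure.username` in the chart's values file:

```shell
# Reproduce Helm's b64enc with coreutils base64 (printf avoids a trailing newline,
# which would change the encoded value).
encoded=$(printf '%s' 'postgres' | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$encoded"
```

Once deployed, the live value could be inspected the same way, e.g. `kubectl -n postgresql get secret postgres -o jsonpath='{.data.postgres_password}' | base64 -d`.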

@@ -0,0 +1,14 @@
---
# This Service exposes the PostgreSQL Pod outside the cluster (LoadBalancer by default)
apiVersion: v1
kind: Service
metadata:
name: "{{ .Values.infrastructure.appName }}-external"
namespace: {{ .Values.infrastructure.namespace }}
spec:
type: "{{ .Values.service.type }}"
ports:
- port: {{ .Values.service.port }}
protocol: {{ .Values.service.protocol }}
selector:
app: {{ .Values.application.labelValue }}

@@ -0,0 +1,34 @@
replicaCount: 1
application:
labelValue: postgres
infrastructure:
namespace: postgresql
appName: postgres
username: postgres
password: "OCPHack8"
dataVolume: "/var/lib/postgresql"
dataPath: "/var/lib/postgresql/data"
image:
name: postgres
pullPolicy: IfNotPresent
tag: "11.6"
service:
type: LoadBalancer
port: 5432
protocol: TCP
resources:
limits:
cpu: 1000m
memory: 4096Mi
requests:
cpu: 750m
memory: 2048Mi
volume:
size: 5Gi
storageClass: managed-premium

@@ -0,0 +1,269 @@
**[Home](../../../README.md)** - [Prerequisites >](../../../00-prereqs.md)
## Setting up Kubernetes
NOTE: YOU DO NOT NEED TO RUN THROUGH THE STEPS IN THIS FILE IF YOU ALREADY PROVISIONED AKS.
The steps to deploy the AKS cluster, scale it up and scale it down are available in the README file for that section: [README](../ARM-Templates/README.md).
You should not have to provision again, since you have already provisioned AKS using the create-cluster.sh script in [Prerequisites >](../../../00-prereqs.md).
## PostgreSQL Setup on Kubernetes
These instructions provide guidance on how to set up PostgreSQL 11 on AKS.
This requires Helm 3 and the latest version of the Azure CLI to be installed. These are pre-installed in Azure Cloud Shell, but you will need to install or download them if you are using a different environment.
## Installing the PostgreSQL Database
```bash
# Navigate to the Helm Charts
#cd Resources/HelmCharts
# Install the Kubernetes Resources
helm upgrade --install wth-postgresql ./PostgreSQL116 --set infrastructure.password=OCPHack8
```
## Checking the Service IP Addresses and Ports
```bash
kubectl -n postgresql get svc
```
**Important: you will need to copy the postgres-external Cluster-IP value to use for the dataSourceURL in later steps**
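Rather than copying the value by hand, the CLUSTER-IP column can be pulled out of the `kubectl get svc` table with a small awk filter. The table line below is a hypothetical stand-in for real cluster output; against a live cluster you would pipe the real command into the same filter, e.g. `kubectl -n postgresql get svc | awk '/postgres-external/ {print $3}'`:

```shell
# Extract the CLUSTER-IP (3rd column) of the postgres-external Service from
# `kubectl get svc`-style tabular output. The sample line is illustrative only.
svc_table='postgres-external   LoadBalancer   10.0.212.117   20.124.1.15   5432:31544/TCP'
cluster_ip=$(echo "$svc_table" | awk '/postgres-external/ {print $3}')
echo "$cluster_ip"
```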
## Checking the Pod for Postgres
```bash
kubectl -n postgresql get pods
```
Wait a few minutes until the pod status shows as Running
## Getting into the Container
```bash
# Use this to connect to the database server SQL prompt
kubectl -n postgresql exec deploy/postgres -it -- /usr/bin/psql -U postgres
```
Run the following commands to check the Postgres version and create the wth database (warning: the application deployment will fail if you don't do this)
```sql
-- Check the DB version
SELECT version();
-- Create the wth database
CREATE DATABASE wth;
-- List databases; notice that there is a database called wth
\l
-- Create user contosoapp, which will own the application schema
CREATE ROLE CONTOSOAPP WITH LOGIN NOSUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION PASSWORD 'OCPHack8';
-- List the tables in wth
\dt
-- Exit the psql prompt
exit
```
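A hypothetical non-interactive variant of the bootstrap above: collect the SQL in a variable and pipe it into psql inside the pod, so the step can run from a script rather than at the interactive prompt (the kubectl line is commented out here because it assumes the live `deploy/postgres` deployment):

```shell
# Assemble the setup SQL; a quoted heredoc keeps the password literal as-is.
setup_sql=$(cat <<'EOF'
CREATE DATABASE wth;
CREATE ROLE CONTOSOAPP WITH LOGIN NOSUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION PASSWORD 'OCPHack8';
EOF
)
# Against a live cluster, pipe it into psql in the pod:
# echo "$setup_sql" | kubectl -n postgresql exec deploy/postgres -i -- /usr/bin/psql -U postgres
echo "$setup_sql"
```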
## Uninstalling PostgreSQL from Kubernetes (only if you need to clean up and retry the Helm deployment)
Use this to uninstall the PostgreSQL 11 instance from the Kubernetes cluster
```bash
# Uninstall the database server. To install again, re-run the helm upgrade command above
helm uninstall wth-postgresql
```
## Installing MySQL
```bash
# Install the Kubernetes Resources
helm upgrade --install wth-mysql ./MySQL57 --set infrastructure.password=OCPHack8
```
## Checking the Service IP Addresses and Ports
```bash
kubectl -n mysql get svc
```
**Important: you will need to copy the mysql-external Cluster-IP value to use for the dataSourceURL in later steps**
## Checking the Pod for MySQL
```bash
kubectl -n mysql get pods
```
## Getting into the Container
```bash
# Use this to connect to the database server
kubectl -n mysql exec deploy/mysql -it -- /usr/bin/mysql -u root -pOCPHack8
```
Run the following commands to check the MySQL version and create the wth database (warning: the application deployment will fail if you don't do this)
```sql
-- Check the mysql DB Version
SELECT version();
-- List databases
SHOW DATABASES;
-- Create the wth database
CREATE DATABASE wth;
-- Create a user contosoapp that will own the application data for migration
CREATE USER IF NOT EXISTS 'contosoapp' IDENTIFIED BY 'OCPHack8';
GRANT SUPER ON *.* TO contosoapp IDENTIFIED BY 'OCPHack8'; -- may not be needed
GRANT ALL PRIVILEGES ON wth.* TO contosoapp;
-- Show tables in the wth database
SHOW TABLES;
-- Exit the mysql prompt
exit
```
## Uninstalling MySQL from Kubernetes (only if you need to clean up and retry the Helm deployment)
Use this to uninstall the MySQL instance from the Kubernetes cluster
```bash
# Uninstall the database server. To install again, re-run the helm upgrade command previously executed
helm uninstall wth-mysql
```
## Deploying the Web Application
First we navigate to the Helm charts directory
```bash
cd Resources/HelmCharts
```
We can deploy the application in two ways; as part of this hack, you will need to do both:
* Backed by MySQL Database
* Backed by PostgreSQL Database
For the MySQL database setup, the developer/operator can make changes to the values-mysql.yaml file.
For the PostgreSQL database setup, the developer/operator can make changes to the values-postgresql.yaml file.
In the yaml files we can specify the database type (appConfig.databaseType) as "mysql" or "postgres", and then set the JDBC URL, username, and password under the appConfig object.
In the globalConfig object we can change the merchant ID, public keys, and other values as needed, but you can generally leave those alone as they apply to both the MySQL and PostgreSQL deployment options.
```yaml
appConfig:
databaseType: "databaseType goes here" # mysql or postgres
dataSourceURL: "jdbc url goes here" # jdbc:mysql://ip-address/wth or jdbc:postgresql://ip-address/wth
dataSourceUser: "user name goes here" # database username from the values-postgresql or values-mysql yaml - contosoapp
dataSourcePassword: "password goes here" # your database password - OCPHack8
webPort: 8083 # the port the app listens on
webContext: "pizzeria" # the application context http://hostname:port/webContext
```
The developer or operator can specify the '--values'/'-f' flag multiple times.
When more than one values file is specified, priority will be given to the last (right-most) file specified in the sequence.
For example, if both values.yaml and override.yaml contained a key called 'namespace', the value set in override.yaml would take precedence.
The command below allows us to use settings from the values file and then override certain values in the database-specific values file.
```bash
helm upgrade --install release-name ./HelmChartFolder -f ./HelmChartFolder/values.yaml -f ./HelmChartFolder/override.yaml
```
To deploy the app backed by MySQL, run the following command after you have edited the values file to match your desired database type
```bash
helm upgrade --install mysql-contosopizza ./ContosoPizza -f ./ContosoPizza/values.yaml -f ./ContosoPizza/values-mysql.yaml
```
To deploy the app backed by PostgreSQL, run the following command after you have edited the values file to match your desired database type
```bash
helm upgrade --install postgres-contosopizza ./ContosoPizza -f ./ContosoPizza/values.yaml -f ./ContosoPizza/values-postgresql.yaml
```
If you wish to uninstall the app, you can use one of the following commands:
```bash
# Use this to uninstall, if you are using MySQL as the database
helm uninstall mysql-contosopizza
# Use this to uninstall, if you are using PostgreSQL as the database
helm uninstall postgres-contosopizza
```
After the apps have booted up, you can find their service addresses, ports, and status as follows:
```bash
# get service ports and IP addresses
kubectl -n {infrastructure.namespace goes here} get svc
# get service pods running the app
kubectl -n {infrastructure.namespace goes here} get pods
# view the last 5000 lines of the application logs
kubectl -n {infrastructure.namespace goes here} logs deploy/contosopizza --tail=5000
```
Verify that the Contoso Pizza application is running on AKS
```bash
# Browse to the app using the EXTERNAL-IP from: kubectl -n contosoappmysql get svc (or contosoapppostgres)
http://{external_ip_contoso_app}:8081/pizzeria/
```
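The verification step can also be scripted. A minimal sketch, where the IP below is a placeholder to be replaced with the EXTERNAL-IP reported by `kubectl get svc` for your deployment:

```shell
# Build the app URL from the Service's external IP (placeholder value here).
external_ip='20.51.7.12'
app_url="http://${external_ip}:8081/pizzeria/"
echo "$app_url"
# Expect HTTP 200 once the pod is Ready:
# curl -s -o /dev/null -w '%{http_code}\n' "$app_url"
```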

@@ -0,0 +1,115 @@
# Configure the Microsoft Azure Provider.
provider "azurerm" {
version = "=1.31.0"
}
# Create a resource group
resource "azurerm_resource_group" "rg" {
name = "myTFResourceGroup"
location = "westus2"
}
# Create virtual network
resource "azurerm_virtual_network" "vnet" {
name = "myTFVnet"
address_space = ["10.0.0.0/16"]
location = "westus2"
resource_group_name = "${azurerm_resource_group.rg.name}"
}
# Create subnet
resource "azurerm_subnet" "subnet" {
name = "myTFSubnet"
resource_group_name = "${azurerm_resource_group.rg.name}"
virtual_network_name = "${azurerm_virtual_network.vnet.name}"
address_prefix = "10.0.1.0/24"
}
# Create public IP
resource "azurerm_public_ip" "publicip" {
name = "myTFPublicIP"
location = "westus2"
resource_group_name = "${azurerm_resource_group.rg.name}"
public_ip_address_allocation = "dynamic"
}
# Create Network Security Group and rule
resource "azurerm_network_security_group" "nsg" {
name = "myTFNSG"
location = "westus2"
resource_group_name = "${azurerm_resource_group.rg.name}"
security_rule {
name = "SSH"
priority = 1001
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
# Create network interface
resource "azurerm_network_interface" "nic" {
name = "myNIC"
location = "westus2"
resource_group_name = "${azurerm_resource_group.rg.name}"
network_security_group_id = "${azurerm_network_security_group.nsg.id}"
ip_configuration {
name = "myNICConfg"
subnet_id = "${azurerm_subnet.subnet.id}"
private_ip_address_allocation = "dynamic"
public_ip_address_id = "${azurerm_public_ip.publicip.id}"
}
}
# Create a Linux virtual machine
resource "azurerm_virtual_machine" "vm" {
name = "myTFVM"
location = "westus2"
resource_group_name = "${azurerm_resource_group.rg.name}"
network_interface_ids = ["${azurerm_network_interface.nic.id}"]
vm_size = "Standard_DS1_v2"
storage_os_disk {
name = "myOsDisk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
}
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04.0-LTS"
version = "latest"
}
os_profile {
computer_name = "myTFVM"
admin_username = "plankton"
admin_password = "Password1234!"
}
os_profile_linux_config {
disable_password_authentication = false
}
}

@@ -0,0 +1,398 @@
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"clusterName": {
"type": "string",
"metadata": {
"description": "The name of the Managed Cluster resource."
}
},
"agentPoolNodeCount": {
"type": "int",
"metadata": {
"description": "Number of virtual machines in the agent pool"
}
},
"agentPoolNodeType": {
"type": "string",
"metadata": {
"description": "SKU or Type of virtual machines in the agent pool"
}
},
"systemPoolNodeCount": {
"type": "int",
"metadata": {
"description": "Number of virtual machines in the system pool"
}
},
"systemPoolNodeType": {
"type": "string",
"metadata": {
"description": "SKU or Type of virtual machines in the system pool"
}
},
"resourceGroupName": {
"type": "string",
"metadata": {
"description": "The name of the Resource Group"
}
},
"virtualNetworkName": {
"type": "string",
"metadata": {
"description": "The name of the Virtual Network"
}
},
"subnetName": {
"type": "string",
"metadata": {
"description": "The name of the Subnet within the Virtual Network"
}
},
"location": {
"type": "string",
"metadata": {
"description": "The geographical location of AKS resource."
}
},
"dnsPrefix": {
"type": "string",
"metadata": {
"description": "Optional DNS prefix to use with hosted Kubernetes API server FQDN."
}
},
"addressSpaces": {
"type": "array"
},
"ddosProtectionPlanEnabled": {
"type": "bool"
},
"osDiskSizeGB": {
"type": "int",
"defaultValue": 0,
"metadata": {
"description": "Disk size (in GiB) to provision for each of the agent pool nodes. This value ranges from 0 to 1023. Specifying 0 will apply the default disk size for that agentVMSize."
},
"minValue": 0,
"maxValue": 1023
},
"kubernetesVersion": {
"type": "string",
"defaultValue": "1.25.5",
"metadata": {
"description": "The version of Kubernetes."
}
},
"networkPlugin": {
"type": "string",
"allowedValues": [
"azure",
"kubenet"
],
"metadata": {
"description": "Network plugin used for building Kubernetes network."
}
},
"maxPods": {
"type": "int",
"defaultValue": 64,
"metadata": {
"description": "Maximum number of pods that can run on a node."
}
},
"enableRBAC": {
"type": "bool",
"defaultValue": true,
"metadata": {
"description": "Boolean flag to turn on and off of RBAC."
}
},
"enablePrivateCluster": {
"type": "bool",
"defaultValue": false,
"metadata": {
"description": "Enable private network access to the Kubernetes cluster."
}
},
"enableHttpApplicationRouting": {
"type": "bool",
"defaultValue": true,
"metadata": {
"description": "Boolean flag to turn on and off http application routing."
}
},
"enableAzurePolicy": {
"type": "bool",
"defaultValue": false,
"metadata": {
"description": "Boolean flag to turn on and off Azure Policy addon."
}
},
"enableOmsAgent": {
"type": "bool",
"defaultValue": true,
"metadata": {
"description": "Boolean flag to turn on and off omsagent addon."
}
},
"workspaceRegion": {
"type": "string",
"defaultValue": "WestUS",
"metadata": {
"description": "Specify the region for your OMS workspace."
}
},
"workspaceName": {
"type": "string",
"metadata": {
"description": "Specify the prefix of the OMS workspace."
}
},
"omsSku": {
"type": "string",
"defaultValue": "standalone",
"allowedValues": [
"free",
"standalone",
"pernode"
],
"metadata": {
"description": "Select the SKU for your workspace."
}
},
"serviceCidr": {
"type": "string",
"metadata": {
"description": "A CIDR notation IP range from which to assign service cluster IPs."
}
},
"subnetAddressSpace": {
"type": "string",
"metadata": {
"description": "A CIDR notation IP range from which to assign service cluster IPs."
}
},
"dnsServiceIP": {
"type": "string",
"metadata": {
"description": "Containers DNS server IP address."
}
},
"dockerBridgeCidr": {
"type": "string",
"metadata": {
"description": "A CIDR notation IP for Docker bridge."
}
}
},
"variables": {
"deploymentSuffix": "MDP2020",
"subscriptionId" : "[subscription().id]",
"workspaceName" : "[concat(parameters('workspaceName'), uniqueString(variables('subscriptionId')))]",
"omsWorkspaceId": "[concat(variables('subscriptionId'), '/resourceGroups/', parameters('resourceGroupName'), '/providers/Microsoft.OperationalInsights/workspaces/', variables('workspaceName'))]",
"clusterID": "[concat(variables('subscriptionId'), '/resourceGroups/', parameters('resourceGroupName'), '/providers/Microsoft.ContainerService/managedClusters/', parameters('clusterName'))]",
"vnetSubnetID": "[concat(variables('subscriptionId'), '/resourceGroups/', parameters('resourceGroupName'), '/providers/Microsoft.Network/virtualNetworks/', parameters('virtualNetworkName'), '/subnets/', parameters('subnetName'))]",
"solutionDeploymentId": "[concat('SolutionDeployment-', variables('deploymentSuffix'))]",
"workspaceDeploymentId": "[concat('WorkspaceDeployment-', variables('deploymentSuffix'))]",
"clusterMonitoringMetricId": "[concat('ClusterMonitoringMetric-', variables('deploymentSuffix'))]",
"clusterSubnetRoleAssignmentId": "[concat('ClusterSubnetRoleAssignment-', variables('deploymentSuffix'))]"
},
"resources": [
{
"name": "[parameters('virtualNetworkName')]",
"type": "Microsoft.Network/VirtualNetworks",
"apiVersion": "2019-09-01",
"location": "[parameters('location')]",
"dependsOn": [],
"tags": {
"cluster": "Kubernetes"
},
"properties": {
"addressSpace": {
"addressPrefixes": "[parameters('addressSpaces')]"
},
"subnets": [
{
"name": "[parameters('subnetName')]",
"properties": {
"addressPrefix": "[parameters('subnetAddressSpace')]"
}
}
],
"enableDdosProtection": "[parameters('ddosProtectionPlanEnabled')]"
}
},
{
"apiVersion": "2020-03-01",
"dependsOn": [
"[concat('Microsoft.Resources/deployments/', variables('workspaceDeploymentId'))]",
"[resourceId('Microsoft.Network/VirtualNetworks', parameters('virtualNetworkName'))]"
],
"type": "Microsoft.ContainerService/managedClusters",
"location": "[parameters('location')]",
"name": "[parameters('clusterName')]",
"properties": {
"kubernetesVersion": "[parameters('kubernetesVersion')]",
"enableRBAC": "[parameters('enableRBAC')]",
"dnsPrefix": "[parameters('dnsPrefix')]",
"agentPoolProfiles": [
{
"name": "systempool",
"osDiskSizeGB": "[parameters('osDiskSizeGB')]",
"count": "[parameters('systemPoolNodeCount')]",
"vmSize": "[parameters('systemPoolNodeType')]",
"osType": "Linux",
"storageProfile": "ManagedDisks",
"type": "VirtualMachineScaleSets",
"mode": "System",
"vnetSubnetID": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('virtualNetworkName'), parameters('subnetName'))]",
"maxPods": "[parameters('maxPods')]"
},
{
"name": "userpool",
"osDiskSizeGB": "[parameters('osDiskSizeGB')]",
"count": "[parameters('agentPoolNodeCount')]",
"vmSize": "[parameters('agentPoolNodeType')]",
"osType": "Linux",
"storageProfile": "ManagedDisks",
"type": "VirtualMachineScaleSets",
"mode": "User",
"vnetSubnetID": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('virtualNetworkName'), parameters('subnetName'))]",
"maxPods": "[parameters('maxPods')]"
}
],
"networkProfile": {
"loadBalancerSku": "standard",
"networkPlugin": "[parameters('networkPlugin')]",
"serviceCidr": "[parameters('serviceCidr')]",
"dnsServiceIP": "[parameters('dnsServiceIP')]",
"dockerBridgeCidr": "[parameters('dockerBridgeCidr')]"
},
"apiServerAccessProfile": {
"enablePrivateCluster": "[parameters('enablePrivateCluster')]"
},
"addonProfiles": {
"httpApplicationRouting": {
"enabled": "[parameters('enableHttpApplicationRouting')]"
},
"azurePolicy": {
"enabled": "[parameters('enableAzurePolicy')]"
},
"omsagent": {
"enabled": "[parameters('enableOmsAgent')]",
"config": {
"logAnalyticsWorkspaceResourceID": "[variables('omsWorkspaceId')]"
}
}
}
},
"tags": {},
"identity": {
"type": "SystemAssigned"
}
},
{
"type": "Microsoft.Resources/deployments",
"name": "[variables('solutionDeploymentId')]",
"apiVersion": "2017-05-10",
"resourceGroup": "[split(variables('omsWorkspaceId'),'/')[4]]",
"subscriptionId": "[split(variables('omsWorkspaceId'),'/')[2]]",
"properties": {
"mode": "Incremental",
"template": {
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {},
"variables": {},
"resources": [
{
"apiVersion": "2015-11-01-preview",
"type": "Microsoft.OperationsManagement/solutions",
"location": "[parameters('workspaceRegion')]",
"name": "[concat('ContainerInsights', '(', split(variables('omsWorkspaceId'),'/')[8], ')')]",
"properties": {
"workspaceResourceId": "[variables('omsWorkspaceId')]"
},
"plan": {
"name": "[concat('ContainerInsights', '(', split(variables('omsWorkspaceId'),'/')[8], ')')]",
"product": "[concat('OMSGallery/', 'ContainerInsights')]",
"promotionCode": "",
"publisher": "Microsoft"
}
}
]
}
},
"dependsOn": [
"[concat('Microsoft.Resources/deployments/', variables('workspaceDeploymentId'))]"
]
},
{
"type": "Microsoft.Resources/deployments",
"name": "[variables('workspaceDeploymentId')]",
"apiVersion": "2017-05-10",
"resourceGroup": "[split(variables('omsWorkspaceId'),'/')[4]]",
"subscriptionId": "[split(variables('omsWorkspaceId'),'/')[2]]",
"properties": {
"mode": "Incremental",
"template": {
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {},
"variables": {},
"resources": [
{
"apiVersion": "2015-11-01-preview",
"type": "Microsoft.OperationalInsights/workspaces",
"location": "[parameters('workspaceRegion')]",
"name": "[variables('workspaceName')]",
"properties": {
"sku": {
"name": "[parameters('omsSku')]"
}
}
}
]
}
}
},
{
"type": "Microsoft.Resources/deployments",
"name": "[variables('clusterMonitoringMetricId')]",
"apiVersion": "2017-05-10",
"resourceGroup": "[parameters('resourceGroupName')]",
"subscriptionId": "[subscription().subscriptionId]",
"properties": {
"mode": "Incremental",
"template": {
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {},
"variables": {},
"resources": [
{
"type": "Microsoft.ContainerService/managedClusters/providers/roleAssignments",
"apiVersion": "2018-01-01-preview",
"name": "[concat(parameters('clusterName'), '/Microsoft.Authorization/', guid(subscription().subscriptionId))]",
"properties": {
"roleDefinitionId": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Authorization/roleDefinitions/', '3913510d-42f4-4e42-8a64-420c390055eb')]",
"principalId": "[reference(parameters('clusterName')).addonProfiles.omsagent.identity.objectId]",
"scope": "[variables('clusterID')]"
}
}
]
}
},
"dependsOn": [
"[variables('clusterID')]"
]
}
],
"outputs": {
"controlPlaneFQDN": {
"type": "string",
"value": "[reference(concat('Microsoft.ContainerService/managedClusters/', parameters('clusterName'))).fqdn]"
}
}
}

@@ -0,0 +1,32 @@
# Log in to Azure (uncomment if not already logged in)
#az login
# Set your azure subscription
#az account set -s "<subscription-id>"
# Defines the ARM template file location
export templateFile="aks-cluster.json"
# Defines the parameters that will be used in the ARM template
export parameterFile="parameters.json"
# Defines the name of the Resource Group our resources are deployed into
export resourceGroupName="PizzaAppWest"
export clusterName="pizzaappwest"
export location="westus"
# Creates the resources group if it does not already exist
az group create --name $resourceGroupName --location $location
# Creates the Kubernetes cluster and the associated resources and dependencies for the cluster
az deployment group create --name dataProductionDeployment --resource-group $resourceGroupName --template-file $templateFile --parameters $parameterFile
# Install the Kubectl CLI. This will be used to interact with the remote Kubernetes cluster
#sudo az aks install-cli
# Get the Credentials to Access the Cluster with Kubectl
az aks get-credentials --name $clusterName --resource-group $resourceGroupName
# List the node pools - expect two aks nodepools
az aks nodepool list --resource-group $resourceGroupName --cluster-name $clusterName -o table

@@ -0,0 +1,83 @@
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"resourceGroupName": {
"value": "PizzaAppWest"
},
"virtualNetworkName": {
"value": "PizzaAppWestVNet"
},
"subnetName": {
"value": "PizzaAppWestSNet"
},
"clusterName": {
"value": "PizzaAppWest"
},
"maxPods": {
"value": 64
},
"systemPoolNodeCount": {
"value": 1
},
"systemPoolNodeType": {
"value": "Standard_D2s_v4"
},
"agentPoolNodeCount": {
"value": 1
},
"agentPoolNodeType": {
"value": "Standard_D2s_v4"
},
"location": {
"value": "westus"
},
"dnsPrefix": {
"value": "pizzaappwest-dns"
},
"kubernetesVersion": {
"value": "1.25.5"
},
"networkPlugin": {
"value": "azure"
},
"enableRBAC": {
"value": true
},
"enablePrivateCluster": {
"value": false
},
"enableHttpApplicationRouting": {
"value": false
},
"enableAzurePolicy": {
"value": false
},
"serviceCidr": {
"value": "10.71.0.0/16"
},
"dnsServiceIP": {
"value": "10.71.0.3"
},
"dockerBridgeCidr": {
"value": "172.17.0.1/16"
},
"addressSpaces": {
"value": [
"10.250.0.0/16"
]
},
"subnetAddressSpace": {
"value": "10.250.0.0/20"
},
"ddosProtectionPlanEnabled": {
"value": false
},
"workspaceName": {
"value": "PizzaAppWest"
},
"workspaceRegion": {
"value": "westus"
}
}
}

@@ -0,0 +1,11 @@
apiVersion: v2
name: contoso-pizza
description: A Helm chart for deploying the Contoso Pizza Web Application
type: application
version: 1.0.0
appVersion: "15.08"

@@ -0,0 +1,74 @@
status="Running"
# Install the Kubernetes Resources
helm upgrade --install wth-mysql ../MySQL57 --set infrastructure.password=OCPHack8
# Install the PostgreSQL Kubernetes resources (uncomment the lines below if you want PostgreSQL instead of MySQL)
# helm upgrade --install wth-postgresql ../PostgreSQL116 --set infrastructure.password=OCPHack8
#
# for ((i = 0 ; i < 30 ; i++)); do
# pgStatus=$(kubectl -n postgresql get pods --no-headers -o custom-columns=":status.phase")
#
#
# if [ "$pgStatus" != "$status" ]; then
# sleep 10
# fi
# done
# Get the postgres pod name
# pgPodName=$(kubectl -n postgresql get pods --no-headers -o custom-columns=":metadata.name")
#Copy pg.sql to the postgresql pod
# kubectl -n postgresql cp ./pg.sql $pgPodName:/tmp/pg.sql
# Use this to connect to the database server
# kubectl -n postgresql exec deploy/postgres -it -- /usr/bin/psql -U postgres -f /tmp/pg.sql
# Wait for the MySQL pod to reach Running status
for ((i = 0 ; i < 30 ; i++)); do
mysqlStatus=$(kubectl -n mysql get pods --no-headers -o custom-columns=":status.phase")
if [ "$mysqlStatus" != "$status" ]; then
sleep 30
fi
done
# Use this to connect to the database server
kubectl -n mysql exec deploy/mysql -it -- /usr/bin/mysql -u root -pOCPHack8 <./mysql.sql
# postgresClusterIP=$(kubectl -n postgresql get svc -o json |jq .items[0].spec.clusterIP |tr -d '"')
mysqlClusterIP=$(kubectl -n mysql get svc -o json |jq .items[0].spec.clusterIP |tr -d '"')
# sed "s/XXX.XXX.XXX.XXX/$postgresClusterIP/" ./values-postgresql-orig.yaml >temp_postgresql.yaml && mv temp_postgresql.yaml ./values-postgresql.yaml
sed "s/XXX.XXX.XXX.XXX/$mysqlClusterIP/" ./values-mysql-orig.yaml >temp_mysql.yaml && mv temp_mysql.yaml ./values-mysql.yaml
helm upgrade --install mysql-contosopizza . -f ./values.yaml -f ./values-mysql.yaml
# helm upgrade --install postgres-contosopizza . -f ./values.yaml -f ./values-postgresql.yaml
for ((i = 0 ; i < 30 ; i++)); do
appStatus=$(kubectl -n contosoappmysql get svc -o json |jq .items[0].status.loadBalancer.ingress[0].ip |tr -d '"')
if [ "$appStatus" == "null" ]; then
sleep 30
fi
done
# postgresAppIP=$(kubectl -n contosoapppostgres get svc -o json |jq .items[0].status.loadBalancer.ingress[0].ip|tr -d '"')
mysqlAppIP=$(kubectl -n contosoappmysql get svc -o json |jq .items[0].status.loadBalancer.ingress[0].ip |tr -d '"')
echo "Pizzeria app on MySQL is ready at http://$mysqlAppIP:8081/pizzeria"
# echo "Pizzeria app on PostgreSQL is ready at http://$postgresAppIP:8082/pizzeria"

@@ -0,0 +1,88 @@
# Change the NSG firewall rules to restrict Postgres and MySQL database access to the client machine only
# Find your local client IP address.
echo -e "\n This script restricts access to your ""on-prem"" Postgres and MySQL databases to the shell it is run from.
It removes public access to the databases and adds your shell's IP address as a source IP to connect from.
If you are running this script from Azure Cloud Shell and want to add your computer's IP address as a source for GUI tools to connect from,
then you have to edit the variable my_ip below and put in your computer's IP address.
To find the public IP address of your computer, point a browser to https://ifconfig.me
If this script is run again, it appends your IP address to the currently whitelisted source IP addresses. \n"
my_ip=`curl -s ifconfig.me`/32
# In this resource group, there is only one NSG
export rg_nsg="MC_PizzaAppWest_pizzaappwest_westus"
export nsg_name=` az network nsg list -g $rg_nsg --query "[].name" -o tsv`
# For this NSG, there are two rules for connecting to Postgres and MySQL.
export pg_nsg_rule_name=`az network nsg rule list -g $rg_nsg --nsg-name $nsg_name --query "[].[name]" -o tsv | grep "TCP-5432" `
export my_nsg_rule_name=`az network nsg rule list -g $rg_nsg --nsg-name $nsg_name --query "[].[name]" -o tsv | grep "TCP-3306" `
# Capture the existing allowed_source_ip_address.
existing_my_source_ip_allowed=`az network nsg rule show -g $rg_nsg --nsg-name $nsg_name --name $my_nsg_rule_name --query "sourceAddressPrefix" -o tsv`
existing_pg_source_ip_allowed=`az network nsg rule show -g $rg_nsg --nsg-name $nsg_name --name $pg_nsg_rule_name --query "sourceAddressPrefix" -o tsv`
# If it says "Internet" we treat it as 0.0.0.0
if [ "$existing_my_source_ip_allowed" = "Internet" ]
then
existing_my_source_ip_allowed="0.0.0.0"
fi
if [ "$existing_pg_source_ip_allowed" = "Internet" ]
then
existing_pg_source_ip_allowed="0.0.0.0"
fi
# If the existing allowed source IP is open to the world, we need to remove it first. Otherwise it is a (list of)
# IP addresses, and we append another IP address to it. Open to the world is 0.0.0.0 or 0.0.0.0/0.
existing_my_source_ip_allowed_prefix=`echo $existing_my_source_ip_allowed | cut -d "/" -f1`
existing_pg_source_ip_allowed_prefix=`echo $existing_pg_source_ip_allowed | cut -d "/" -f1`
# If it was open to the public, we drop the existing 0.0.0.0; otherwise we append to the list.
if [ "$existing_my_source_ip_allowed_prefix" = "0.0.0.0" ]
then
new_my_source_ip_allowed="$my_ip"
else
new_my_source_ip_allowed="$existing_my_source_ip_allowed $my_ip"
fi
if [ "$existing_pg_source_ip_allowed_prefix" = "0.0.0.0" ]
then
new_pg_source_ip_allowed="$my_ip"
else
new_pg_source_ip_allowed="$existing_pg_source_ip_allowed $my_ip"
fi
# Update the rules to allow access to Postgres and MySQL only from your client IP address ("my_ip"). Errors are discarded:
# running the script twice back to back produces an error message, but it does no harm.
az network nsg rule update -g $rg_nsg --nsg-name $nsg_name --name $my_nsg_rule_name --source-address-prefixes $new_my_source_ip_allowed 2>/dev/null
if [ $? -ne 0 ]
then
echo -e "\n Your MySQL firewall rule was not changed. It is possible that $my_ip is already whitelisted \n"
fi
az network nsg rule update -g $rg_nsg --nsg-name $nsg_name --name $pg_nsg_rule_name --source-address-prefixes $new_pg_source_ip_allowed 2>/dev/null
if [ $? -ne 0 ]
then
echo -e "\n Your Postgres firewall rule was not changed. It is possible that $my_ip is already whitelisted \n"
fi
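The append-or-replace decision above can be exercised on its own. This is a simplified sketch of that logic (with `build_allowed_list` as a hypothetical helper name; it does not call az):

```shell
#!/usr/bin/env bash
# Mirror of the script's logic: if the current allowed source is open to the
# world ("Internet", "0.0.0.0", or "0.0.0.0/0"), replace it with our IP;
# otherwise append our IP to the existing space-separated list.
build_allowed_list() {
  local existing="$1" my_ip="$2"
  [ "$existing" = "Internet" ] && existing="0.0.0.0"
  local prefix="${existing%%/*}"   # strip any /nn suffix, like `cut -d "/" -f1`
  if [ "$prefix" = "0.0.0.0" ]; then
    echo "$my_ip"
  else
    echo "$existing $my_ip"
  fi
}

build_allowed_list "Internet" "203.0.113.7/32"          # -> 203.0.113.7/32
build_allowed_list "0.0.0.0/0" "203.0.113.7/32"         # -> 203.0.113.7/32
build_allowed_list "198.51.100.1/32" "203.0.113.7/32"   # -> 198.51.100.1/32 203.0.113.7/32
```

The resulting space-separated list is passed unquoted to `--source-address-prefixes`, which accepts multiple prefixes as separate arguments.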

@@ -0,0 +1,16 @@
-- Create the wth database
CREATE DATABASE wth;
-- Create a user contosoapp that will own the application data for migration
CREATE USER IF NOT EXISTS 'contosoapp' IDENTIFIED BY 'OCPHack8';
GRANT SUPER ON *.* TO contosoapp IDENTIFIED BY 'OCPHack8'; -- may not be needed
GRANT ALL PRIVILEGES ON wth.* TO contosoapp;
GRANT PROCESS, SELECT ON *.* TO contosoapp;
SET GLOBAL gtid_mode=ON_PERMISSIVE;
SET GLOBAL gtid_mode=OFF_PERMISSIVE;
SET GLOBAL gtid_mode=OFF;

@@ -0,0 +1,7 @@
-- Create the wth database
CREATE DATABASE wth;
-- Create a user contosoapp that will own the application schema
CREATE ROLE contosoapp WITH LOGIN NOSUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION PASSWORD 'OCPHack8';

@@ -0,0 +1,11 @@
# Start the VM scale sets (VMSS) that host the AKS nodes. There are only two VMSS in the resource group - one each for the system pool and the user pool.
# Change the value of the resource group if required.
export vmss_user=$(az vmss list -g MC_PizzaAppWest_pizzaappwest_westus --query '[].name' | grep userpool | tr -d "," | tr -d '"')
export vmss_system=$(az vmss list -g MC_PizzaAppWest_pizzaappwest_westus --query '[].name' | grep systempool | tr -d "," | tr -d '"')
# Now start the VM scale sets
az vmss start -g MC_PizzaAppWest_pizzaappwest_westus -n $vmss_system
az vmss start -g MC_PizzaAppWest_pizzaappwest_westus -n $vmss_user

@@ -0,0 +1,11 @@
# Stop the VMSS that host the AKS nodes, to stop incurring compute charges. There are only two VMSS in the resource group - one each for the system pool and the user pool.
# Change the value of the resource group if required.
export vmss_user=$(az vmss list -g MC_OSSDBMigration_ossdbmigration_westus --query '[].name' | grep userpool | tr -d "," | tr -d '"')
export vmss_system=$(az vmss list -g MC_OSSDBMigration_ossdbmigration_westus --query '[].name' | grep systempool | tr -d "," | tr -d '"')
# Now stop the VM scale sets
az vmss stop -g MC_OSSDBMigration_ossdbmigration_westus -n $vmss_user
az vmss stop -g MC_OSSDBMigration_ossdbmigration_westus -n $vmss_system
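The grep/tr pipeline above filters the JSON array that `az vmss list --query '[].name'` prints. Its behavior can be checked without Azure; this sketch (with `pick_vmss` as a hypothetical helper and made-up sample VMSS names) reproduces the filtering on canned input:

```shell
#!/usr/bin/env bash
# Stand-in for the az call: given `az vmss list --query '[].name'` JSON output,
# pick out a pool's VMSS name the way the script does (grep for the pool
# keyword, strip quotes and commas; also strip the JSON indentation spaces).
pick_vmss() {
  local pool="$1"
  grep "$pool" | tr -d ',' | tr -d '"' | tr -d ' '
}

# Sample JSON as az prints it (names here are assumptions for illustration).
sample_json='[
  "aks-systempool-26943941-vmss",
  "aks-userpool-26943941-vmss"
]'
echo "$sample_json" | pick_vmss userpool     # -> aks-userpool-26943941-vmss
echo "$sample_json" | pick_vmss systempool   # -> aks-systempool-26943941-vmss
```

Using `-o tsv` on the az call would avoid the quote/comma stripping entirely; the sketch keeps the original pipeline so its behavior is visible.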

@@ -0,0 +1,106 @@
{{ if eq .Values.appConfig.databaseType "mysql" }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ .Values.infrastructure.appName }}
namespace: "{{ .Values.infrastructure.namespace }}"
spec:
replicas: 1
serviceName: "{{ .Values.infrastructure.appName }}-external"
selector:
matchLabels:
app: {{ .Values.application.labelValue }}
template:
metadata:
labels:
app: {{ .Values.application.labelValue }}
spec:
containers:
- image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
name: {{ .Values.infrastructure.appName }}
resources:
requests:
memory: "{{ .Values.resources.requests.memory }}"
cpu: "{{ .Values.resources.requests.cpu }}"
limits:
memory: "{{ .Values.resources.limits.memory }}"
cpu: "{{ .Values.resources.limits.cpu }}"
env:
- name: APP_DATASOURCE_DRIVER
value: "{{ .Values.appSettings.mysql.driverClass }}"
- name: APP_HIBERNATE_DIALECT
value: "{{ .Values.appSettings.mysql.dialect }}"
- name: APP_HIBERNATE_HBM2DDL_AUTO
value: "{{ .Values.globalConfig.hibernateDdlAuto }}"
- name: APP_PORT
value: "{{ .Values.appConfig.webPort }}"
- name: APP_CONTEXT_PATH
value: "{{ .Values.appConfig.webContext }}"
- name: APP_BRAINTREE_MERCHANT_ID
valueFrom:
secretKeyRef:
name: "{{ .Values.globalConfig.secretName }}"
key: app_braintree_merchant_id
- name: APP_BRAINTREE_PUBLIC_KEY
valueFrom:
secretKeyRef:
name: "{{ .Values.globalConfig.secretName }}"
key: app_braintree_public_key
- name: APP_BRAINTREE_PRIVATE_KEY
valueFrom:
secretKeyRef:
name: "{{ .Values.globalConfig.secretName }}"
key: app_braintree_private_key
- name: APP_RECAPTCHA_PUBLIC_KEY
valueFrom:
secretKeyRef:
name: "{{ .Values.globalConfig.secretName }}"
key: app_recaptcha_public_key
- name: APP_RECAPTCHA_PRIVATE_KEY
valueFrom:
secretKeyRef:
name: "{{ .Values.globalConfig.secretName }}"
key: app_recaptcha_private_key
- name: APP_DATASOURCE_URL
valueFrom:
secretKeyRef:
name: "{{ .Values.globalConfig.secretName }}"
key: app_datasource_url
- name: APP_DATASOURCE_USERNAME
valueFrom:
secretKeyRef:
name: "{{ .Values.globalConfig.secretName }}"
key: app_datasource_username
- name: APP_DATASOURCE_PASSWORD
valueFrom:
secretKeyRef:
name: "{{ .Values.globalConfig.secretName }}"
key: app_datasource_password
ports:
- containerPort: {{ .Values.appConfig.webPort }}
name: contosopizza
readinessProbe:
tcpSocket:
port: {{ .Values.appConfig.webPort }}
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 3
livenessProbe:
tcpSocket:
port: {{ .Values.appConfig.webPort }}
initialDelaySeconds: 15
failureThreshold: 5
periodSeconds: 16
volumeMounts:
- name: "contosopizza-persistent-storage"
mountPath: {{ .Values.infrastructure.dataVolume }}
volumeClaimTemplates:
- metadata:
name: contosopizza-persistent-storage
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "managed-premium"
resources:
requests:
storage: 1Gi
{{ end }}

@@ -0,0 +1,8 @@
{{ if eq .Values.infrastructure.namespace "default" }}
# Do not create namespace
{{ else }}
apiVersion: v1
kind: Namespace
metadata:
name: {{ .Values.infrastructure.namespace }}
{{ end }}

@@ -0,0 +1,16 @@
# These are secrets used to configure the application
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: "{{ .Values.globalConfig.secretName }}"
namespace: "{{ .Values.infrastructure.namespace }}"
data:
app_braintree_merchant_id: {{ .Values.globalConfig.brainTreeMerchantId | b64enc }}
app_braintree_public_key: {{ .Values.globalConfig.brainTreePublicKey | b64enc }}
app_braintree_private_key: {{ .Values.globalConfig.brainTreePrivateKey | b64enc }}
app_recaptcha_public_key: {{ .Values.globalConfig.recaptchaPublicKey | b64enc }}
app_recaptcha_private_key: {{ .Values.globalConfig.recaptchaPrivateKey | b64enc }}
app_datasource_url: {{ .Values.appConfig.dataSourceURL | b64enc }}
app_datasource_username: {{ .Values.appConfig.dataSourceUser | b64enc }}
app_datasource_password: {{ .Values.appConfig.dataSourcePassword | b64enc }}
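Helm's `b64enc` produces the same value as the `base64` utility, which is handy when checking a deployed Secret by hand (`kubectl get secret ... -o yaml` returns the data base64-encoded). A quick sanity check with the sample password used throughout these values files:

```shell
# What b64enc does to the sample password 'OCPHack8':
echo -n 'OCPHack8' | base64        # -> T0NQSGFjazg=
# And the reverse, e.g. when inspecting kubectl's Secret output:
echo -n 'T0NQSGFjazg=' | base64 -d # -> OCPHack8
```

Note the `-n`: a trailing newline would be encoded into the secret and break the database login.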

@@ -0,0 +1,14 @@
---
# This is the load balancer Service that routes traffic to the application
apiVersion: v1
kind: Service
metadata:
name: "{{ .Values.infrastructure.appName }}-external"
namespace: {{ .Values.infrastructure.namespace }}
spec:
type: "{{ .Values.service.type }}"
ports:
- port: {{ .Values.appConfig.webPort }}
protocol: {{ .Values.service.protocol }}
selector:
app: {{ .Values.application.labelValue }}

@@ -0,0 +1,6 @@
helm uninstall wth-postgresql
helm uninstall wth-mysql
helm uninstall mysql-contosopizza
helm uninstall postgres-contosopizza
echo ""
echo "Use 'kubectl get ns' to make sure your pods are not in a Terminating status before redeploying"

@@ -0,0 +1,27 @@
# Change the NSG firewall rule so that the Postgres and MySQL databases accept connections from your client machine only. The first step is to find your local client IP address.
echo -e "\n This script restricts access to your Postgres and MySQL databases to your computer only.
The variable myip picks up the IP address of the shell environment this script runs in - be it a cloud shell or your own computer.
You can get your computer's IP address by browsing to https://ifconfig.me. So if the browser says it is 102.194.87.201, then myip=102.194.87.201/32.
\n"
myip=`curl -s ifconfig.me`/32
# In this resource group, there is only one NSG. Change the value of the resource group, if required
export rg_nsg="MC_OSSDBMigration_ossdbmigration_westus"
export nsg_name=`az network nsg list -g $rg_nsg --query "[].name" -o tsv`
# For this NSG, there are two rules for connecting to Postgres and MySQL.
export pg_nsg_rule_name=`az network nsg rule list -g $rg_nsg --nsg-name $nsg_name --query '[].[name]' | grep "TCP-5432" | sed 's/"//g'`
export my_nsg_rule_name=`az network nsg rule list -g $rg_nsg --nsg-name $nsg_name --query '[].[name]' | grep "TCP-3306" | sed 's/"//g'`
# Update the rules to allow access to Postgres and MySQL only from your client IP address ("myip")
az network nsg rule update -g $rg_nsg --nsg-name $nsg_name --name $my_nsg_rule_name --source-address-prefixes $myip
az network nsg rule update -g $rg_nsg --nsg-name $nsg_name --name $pg_nsg_rule_name --source-address-prefixes $myip

@@ -0,0 +1,78 @@
replicaCount: 1
# Change the application settings here
appConfig:
databaseType: "mysql" # mysql or postgres
#databaseType: "postgres" # mysql or postgres
#local example of Postgres JDBC Connection string
#dataSourceURL: "jdbc:postgresql://10.71.25.217:5432/wth" # your JDBC connection string goes here
#Azure example of Postgres JDBC Connection string
#dataSourceURL: "jdbc:postgresql://petepgdbtest01.postgres.database.azure.com:5432/wth?sslmode=require" # your JDBC connection string goes here
#local example of MySQL JDBC Connection string
dataSourceURL: "jdbc:mysql://10.71.215.5:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here
#Azure example of MySQL JDBC Connection string
#dataSourceURL: "jdbc:mysql://petewthmysql01.mysql.database.azure.com:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here
#local examples of dataSourceUser and dataSourcePassword
dataSourceUser: "root" # your database username goes here
dataSourcePassword: "OCPHack8" # your database password goes here
#Azure examples of dataSourceUser and dataSourcePassword
#dataSourceUser: "postgres@petepgdbtest01" # your database username goes here
#dataSourcePassword: "OCPHack8" # your database password goes here
webPort: 8083 # the port the app listens on
#webPort: 8082 # the port the app listens on
webContext: "pizzeria" # the application context http://hostname:port/webContext
# These settings apply to any database type used
globalConfig:
secretName: contosopizza
brainTreeMerchantId: "3fk8mrzyr665jb6d"
brainTreePublicKey: "72wqqdk75tmh44n9"
brainTreePrivateKey: "cf094c3345159aaa473a8f50d56c2e33"
recaptchaPublicKey: "6LfqpNsZAAAAACIMnNaeW7fS_-19pZ7K__dREk04"
recaptchaPrivateKey: "6LfqpNsZAAAAAGrUYTOGR69aUvzaRoz7f_tnMBeI"
hibernateDdlAuto: "create-only"
application:
labelValue: contosopizza
infrastructure:
namespace: contosopizza
appName: contosopizza
dataVolume: "/usr/local/contosopizza"
volumeName: "contosopizza"
image:
name: izzymsft/ubuntu-pizza
pullPolicy: IfNotPresent
tag: "1.0"
service:
type: LoadBalancer
port: 8082
protocol: TCP
resources:
limits:
cpu: 1000m
memory: 4096Mi
requests:
cpu: 256m
memory: 512Mi
volume:
size: 1Gi
storageClass: managed-premium
appSettings:
mysql:
dialect: "org.hibernate.dialect.MySQL57Dialect"
driverClass: "com.mysql.jdbc.Driver"
postgres:
dialect: "org.hibernate.dialect.PostgreSQL95Dialect"
driverClass: "org.postgresql.Driver"

@@ -0,0 +1,13 @@
replicaCount: 1
# Change the application settings here
appConfig:
databaseType: "mysql" # mysql or postgres
dataSourceURL: "jdbc:mysql://XXX.XXX.XXX.XXX:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here
dataSourceUser: "root" # your database username goes here
dataSourcePassword: "OCPHack8" # your database password goes here
webPort: 8081 # the port the app listens on
webContext: "pizzeria" # the application context http://hostname:port/webContext
infrastructure:
namespace: contosoappmysql

@@ -0,0 +1,13 @@
replicaCount: 1
# Change the application settings here
appConfig:
databaseType: "mysql" # mysql or postgres
dataSourceURL: "jdbc:mysql://XXX.XXX.XXX.XXX:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here
dataSourceUser: "root" # your database username goes here
dataSourcePassword: "OCPHack8" # your database password goes here
webPort: 8081 # the port the app listens on
webContext: "pizzeria" # the application context http://hostname:port/webContext
infrastructure:
namespace: contosoappmysql

@@ -0,0 +1,13 @@
replicaCount: 1
# Change the application settings here
appConfig:
databaseType: "postgres" # mysql or postgres
dataSourceURL: "jdbc:postgresql://XXX.XXX.XXX.XXX:5432/wth" # your JDBC connection string goes here
dataSourceUser: "contosoapp" # your database username goes here
dataSourcePassword: "OCPHack8" # your database password goes here
webPort: 8082 # the port the app listens on
webContext: "pizzeria" # the application context http://hostname:port/webContext
infrastructure:
namespace: contosoapppostgres

@@ -0,0 +1,13 @@
replicaCount: 1
# Change the application settings here
appConfig:
databaseType: "postgres" # mysql or postgres
dataSourceURL: "jdbc:postgresql://XXX.XXX.XXX.XXX:5432/wth" # your JDBC connection string goes here
dataSourceUser: "contosoapp" # your database username goes here
dataSourcePassword: "OCPHack8" # your database password goes here
webPort: 8082 # the port the app listens on
webContext: "pizzeria" # the application context http://hostname:port/webContext
infrastructure:
namespace: contosoapppostgres

@@ -0,0 +1,78 @@
replicaCount: 1
# Change the application settings here
appConfig:
databaseType: "mysql" # mysql or postgres
#databaseType: "postgres" # mysql or postgres
#local example of Postgres JDBC Connection string
#dataSourceURL: "jdbc:postgresql://10.71.25.217:5432/wth" # your JDBC connection string goes here
#Azure example of Postgres JDBC Connection string
#dataSourceURL: "jdbc:postgresql://petepgdbtest01.postgres.database.azure.com:5432/wth?sslmode=require" # your JDBC connection string goes here
#local example of MySQL JDBC Connection string
dataSourceURL: "jdbc:mysql://10.71.215.5:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here
#Azure example of MySQL JDBC Connection string
#dataSourceURL: "jdbc:mysql://petewthmysql01.mysql.database.azure.com:3306/wth?useSSL=true&requireSSL=false&serverTimezone=UTC" # your JDBC connection string goes here
#local examples of dataSourceUser and dataSourcePassword
dataSourceUser: "root" # your database username goes here
dataSourcePassword: "OCPHack8" # your database password goes here
#Azure examples of dataSourceUser and dataSourcePassword
#dataSourceUser: "postgres@petepgdbtest01" # your database username goes here
#dataSourcePassword: "OCPHack8" # your database password goes here
webPort: 8083 # the port the app listens on
#webPort: 8082 # the port the app listens on
webContext: "pizzeria" # the application context http://hostname:port/webContext
# These settings apply to any database type used
globalConfig:
secretName: contosopizza
brainTreeMerchantId: "3fk8mrzyr665jb6d"
brainTreePublicKey: "72wqqdk75tmh44n9"
brainTreePrivateKey: "cf094c3345159aaa473a8f50d56c2e33"
recaptchaPublicKey: "6LfqpNsZAAAAACIMnNaeW7fS_-19pZ7K__dREk04"
recaptchaPrivateKey: "6LfqpNsZAAAAAGrUYTOGR69aUvzaRoz7f_tnMBeI"
hibernateDdlAuto: "create-only"
application:
labelValue: contosopizza
infrastructure:
namespace: contosopizza
appName: contosopizza
dataVolume: "/usr/local/contosopizza"
volumeName: "contosopizza"
image:
name: izzymsft/ubuntu-pizza
pullPolicy: IfNotPresent
tag: "1.0"
service:
type: LoadBalancer
port: 8082
protocol: TCP
resources:
limits:
cpu: 1000m
memory: 4096Mi
requests:
cpu: 256m
memory: 512Mi
volume:
size: 1Gi
storageClass: managed-premium
appSettings:
mysql:
dialect: "org.hibernate.dialect.MySQL57Dialect"
driverClass: "com.mysql.jdbc.Driver"
postgres:
dialect: "org.hibernate.dialect.PostgreSQL95Dialect"
driverClass: "org.postgresql.Driver"

@@ -0,0 +1,11 @@
apiVersion: v2
name: mysql-database-server
description: A Helm chart for deploying a single-node MySQL database server
type: application
version: 2.0.0
appVersion: "5.7"

@@ -0,0 +1,56 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: mysqld-config
namespace: "{{ .Values.infrastructure.namespace }}"
data:
mysqld.cnf: |-
# Mounted at /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
lower_case_table_names = 1
server_id = 3
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
datadir = /usr/local/mysql/data
explicit_defaults_for_timestamp = on
#log-error = /var/log/mysql/error.log
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# The value of log_bin is the base name of the sequence of binlog files.
log_bin = mysql-bin
# The binlog-format must be set to ROW or row.
binlog_format = row
# The binlog_row_image must be set to FULL or full
binlog_row_image = full
# This is the number of days for automatic binlog file removal. The default is 0 which means no automatic removal.
expire_logs_days = 7
# Boolean which enables/disables support for including the original SQL statement in the binlog entry.
binlog_rows_query_log_events = on
# Whether updates received by a replica server from a replication source server should be logged to the replica's own binary log
log_slave_updates = on
# Boolean which specifies whether GTID mode of the MySQL server is enabled or not.
gtid_mode = on
# Boolean which instructs the server whether or not to enforce GTID consistency by allowing
# the execution of statements that can be logged in a transactionally safe manner; required when using GTIDs.
enforce_gtid_consistency = on
# The number of seconds the server waits for activity on an interactive connection before closing it.
interactive_timeout = 36000
# The number of seconds the server waits for activity on a noninteractive connection before closing it.
wait_timeout = 72000
# end of file

@@ -0,0 +1,74 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.infrastructure.appName }}
namespace: "{{ .Values.infrastructure.namespace }}"
spec:
replicas: 1
selector:
matchLabels:
app: {{ .Values.application.labelValue }}
strategy:
type: Recreate
template:
metadata:
labels:
app: {{ .Values.application.labelValue }}
spec:
containers:
- image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
name: {{ .Values.infrastructure.appName }}
resources:
requests:
memory: "{{ .Values.resources.requests.memory }}"
cpu: "{{ .Values.resources.requests.cpu }}"
limits:
memory: "{{ .Values.resources.limits.memory }}"
cpu: "{{ .Values.resources.limits.cpu }}"
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysqld
key: mysql_password
ports:
- containerPort: {{ .Values.service.port }}
name: mysql
readinessProbe:
tcpSocket:
port: {{ .Values.service.port }}
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 3
livenessProbe:
tcpSocket:
port: {{ .Values.service.port }}
initialDelaySeconds: 15
failureThreshold: 5
periodSeconds: 16
volumeMounts:
- name: "{{ .Values.infrastructure.volumeName }}-volume"
mountPath: {{ .Values.infrastructure.dataVolume }}
- name: mysqld-configuration2
mountPath: /etc/mysql/mysql.conf.d
volumes:
- name: "{{ .Values.infrastructure.volumeName }}-volume"
persistentVolumeClaim:
claimName: "{{ .Values.infrastructure.volumeName }}-persistent-storage"
- name: mysqld-configuration2
configMap:
name: mysqld-config
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: "{{ .Values.infrastructure.volumeName }}-persistent-storage"
namespace: "{{ .Values.infrastructure.namespace }}"
spec:
accessModes:
- ReadWriteOnce
storageClassName: {{ .Values.resources.volume.storageClass }}
resources:
requests:
storage: {{ .Values.resources.volume.size }}

@@ -0,0 +1,8 @@
{{ if eq .Values.infrastructure.namespace "default" }}
# Do not create namespace
{{ else }}
apiVersion: v1
kind: Namespace
metadata:
name: {{ .Values.infrastructure.namespace }}
{{ end }}

@@ -0,0 +1,9 @@
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: mysqld
namespace: "{{ .Values.infrastructure.namespace }}"
data:
mysql_default_user: {{ .Values.infrastructure.username | b64enc }}
mysql_password: {{ .Values.infrastructure.password | b64enc }}

@@ -0,0 +1,14 @@
---
# This is the load balancer Service that routes traffic to the MySQL Pod
apiVersion: v1
kind: Service
metadata:
name: "{{ .Values.infrastructure.appName }}-external"
namespace: {{ .Values.infrastructure.namespace }}
spec:
type: "{{ .Values.service.type }}"
ports:
- port: {{ .Values.service.port }}
protocol: {{ .Values.service.protocol }}
selector:
app: {{ .Values.application.labelValue }}

@@ -0,0 +1,34 @@
replicaCount: 1
application:
labelValue: mysql
infrastructure:
namespace: mysql
appName: mysql
username: izzy
password: "OCPHack8"
dataVolume: "/usr/local/mysql"
volumeName: "wthmysql"
image:
name: mysql
pullPolicy: IfNotPresent
tag: "5.7.32"
service:
type: LoadBalancer
port: 3306
protocol: TCP
resources:
limits:
cpu: 1000m
memory: 4096Mi
requests:
cpu: 750m
memory: 2048Mi
volume:
size: 5Gi
storageClass: managed-premium

@@ -0,0 +1,11 @@
apiVersion: v2
name: postgresql
description: A Helm chart for deploying a single-node PostgreSQL database server
type: application
version: 2.0.0
appVersion: "11.6"

@@ -0,0 +1,91 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.infrastructure.appName }}
namespace: "{{ .Values.infrastructure.namespace }}"
spec:
replicas: 1
selector:
matchLabels:
app: {{ .Values.application.labelValue }}
strategy:
type: Recreate
template:
metadata:
labels:
app: {{ .Values.application.labelValue }}
spec:
securityContext:
runAsUser: 0
runAsGroup: 999
fsGroup: 999
containers:
- image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
name: {{ .Values.infrastructure.appName }}
args: ["-c", "config_file=/etc/postgresql/postgresql.conf"]
resources:
requests:
memory: "{{ .Values.resources.requests.memory }}"
cpu: "{{ .Values.resources.requests.cpu }}"
limits:
memory: "{{ .Values.resources.limits.memory }}"
cpu: "{{ .Values.resources.limits.cpu }}"
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres
key: postgres_password
- name: PGDATA
value: {{ .Values.infrastructure.dataPath }}
ports:
- containerPort: {{ .Values.service.port }}
name: postgres
readinessProbe:
tcpSocket:
port: {{ .Values.service.port }}
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 3
livenessProbe:
tcpSocket:
port: {{ .Values.service.port }}
initialDelaySeconds: 15
failureThreshold: 5
periodSeconds: 16
volumeMounts:
- name: "{{ .Values.infrastructure.appName }}-volume"
mountPath: {{ .Values.infrastructure.dataVolume }}
- name: "postgresql-configuration"
mountPath: "/etc/postgresql"
- name: "postgresql-tls-keys"
mountPath: "/etc/postgresql/keys"
volumes:
- name: "{{ .Values.infrastructure.appName }}-volume"
persistentVolumeClaim:
claimName: "{{ .Values.infrastructure.appName }}-persistent-storage"
- name: postgresql-configuration
configMap:
name: postgresql-config
- name: postgresql-tls-keys
secret:
secretName: postgresql-tls-secret
items:
- key: tls.crt
path: "tls.crt"
- key: tls.key
path: "tls.key"
mode: 0640
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: "{{ .Values.infrastructure.appName }}-persistent-storage"
namespace: "{{ .Values.infrastructure.namespace }}"
spec:
accessModes:
- ReadWriteOnce
storageClassName: {{ .Values.resources.volume.storageClass }}
resources:
requests:
storage: {{ .Values.resources.volume.size }}

Просмотреть файл

@ -0,0 +1,8 @@
{{ if eq .Values.infrastructure.namespace "default" }}
# Do not create namespace
{{ else }}
apiVersion: v1
kind: Namespace
metadata:
name: {{ .Values.infrastructure.namespace }}
{{ end }}

@@ -0,0 +1,699 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: postgresql-config
namespace: {{ .Values.infrastructure.namespace }}
data:
postgresql.conf: |-
# Mounted at /etc/postgresql/postgresql.conf
# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
# name = value
#
# (The "=" is optional.) Whitespace may be used. Comments are introduced with
# "#" anywhere on a line. The complete list of parameter names and allowed
# values can be found in the PostgreSQL documentation.
#
# The commented-out settings shown in this file represent the default values.
# Re-commenting a setting is NOT sufficient to revert it to the default value;
# you need to reload the server.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal. If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, run "pg_ctl reload", or execute
# "SELECT pg_reload_conf()". Some parameters, which are marked below,
# require a server shutdown and restart to take effect.
#
# Any parameter can also be given as a command-line option to the server, e.g.,
# "postgres -c log_connections=on". Some parameters can be changed at run time
# with the "SET" SQL command.
#
# Memory units: kB = kilobytes Time units: ms = milliseconds
# MB = megabytes s = seconds
# GB = gigabytes min = minutes
# TB = terabytes h = hours
# d = days
#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.
#data_directory = 'ConfigDir' # use data in another directory
# (change requires restart)
#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file
# (change requires restart)
#ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file
# (change requires restart)
# If external_pid_file is not explicitly set, no extra PID file is written.
#external_pid_file = '' # write an extra PID file
# (change requires restart)
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
listen_addresses = '*'
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
#port = 5432 # (change requires restart)
max_connections = 100 # (change requires restart)
#superuser_reserved_connections = 3 # (change requires restart)
#unix_socket_directories = '/var/run/postgresql' # comma-separated list of directories
# (change requires restart)
#unix_socket_group = '' # (change requires restart)
#unix_socket_permissions = 0777 # begin with 0 to use octal notation
# (change requires restart)
#bonjour = off # advertise server via Bonjour
# (change requires restart)
#bonjour_name = '' # defaults to the computer name
# (change requires restart)
# - TCP Keepalives -
# see "man 7 tcp" for details
#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;
# 0 selects the system default
#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;
# 0 selects the system default
#tcp_keepalives_count = 0 # TCP_KEEPCNT;
# 0 selects the system default
# - Authentication -
#authentication_timeout = 1min # 1s-600s
#password_encryption = md5 # md5 or scram-sha-256
#db_user_namespace = off
# GSSAPI using Kerberos
#krb_server_keyfile = ''
#krb_caseins_users = off
# - SSL -
ssl = on
ssl_ca_file = '/etc/postgresql/keys/tls.crt'
ssl_cert_file = '/etc/postgresql/keys/tls.crt'
#ssl_crl_file = ''
ssl_key_file = '/etc/postgresql/keys/tls.key'
#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
#ssl_prefer_server_ciphers = on
#ssl_ecdh_curve = 'prime256v1'
#ssl_dh_params_file = ''
#ssl_passphrase_command = ''
#ssl_passphrase_command_supports_reload = off
#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------
# - Memory -
shared_buffers = 128MB # min 128kB
# (change requires restart)
#huge_pages = try # on, off, or try
# (change requires restart)
#temp_buffers = 8MB # min 800kB
#max_prepared_transactions = 0 # zero disables the feature
# (change requires restart)
# Caution: it is not advisable to set max_prepared_transactions nonzero unless
# you actively intend to use prepared transactions.
#work_mem = 4MB # min 64kB
#maintenance_work_mem = 64MB # min 1MB
#autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem
#max_stack_depth = 2MB # min 100kB
dynamic_shared_memory_type = posix # the default is the first option
# supported by the operating system:
# posix
# sysv
# windows
# mmap
# use none to disable dynamic shared memory
# (change requires restart)
# - Disk -
#temp_file_limit = -1 # limits per-process temp file space
# in kB, or -1 for no limit
# - Kernel Resources -
#max_files_per_process = 1000 # min 25
# (change requires restart)
# - Cost-Based Vacuum Delay -
#vacuum_cost_delay = 0 # 0-100 milliseconds
#vacuum_cost_page_hit = 1 # 0-10000 credits
#vacuum_cost_page_miss = 10 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
#vacuum_cost_limit = 200 # 1-10000 credits
# - Background Writer -
#bgwriter_delay = 200ms # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100 # max buffers written/round, 0 disables
#bgwriter_lru_multiplier = 2.0 # 0-10.0 multiplier on buffers scanned/round
#bgwriter_flush_after = 512kB # measured in pages, 0 disables
# - Asynchronous Behavior -
#effective_io_concurrency = 1 # 1-1000; 0 disables prefetching
#max_worker_processes = 8 # (change requires restart)
#max_parallel_maintenance_workers = 2 # taken from max_parallel_workers
#max_parallel_workers_per_gather = 2 # taken from max_parallel_workers
#parallel_leader_participation = on
#max_parallel_workers = 8 # maximum number of max_worker_processes that
# can be used in parallel operations
#old_snapshot_threshold = -1 # 1min-60d; -1 disables; 0 is immediate
# (change requires restart)
#backend_flush_after = 0 # measured in pages, 0 disables
#------------------------------------------------------------------------------
# WRITE-AHEAD LOG
#------------------------------------------------------------------------------
# - Settings -
wal_level = logical # minimal, replica, or logical
# (change requires restart)
#fsync = on # flush data to disk for crash safety
# (turning this off can cause
# unrecoverable data corruption)
#synchronous_commit = on # synchronization level;
# off, local, remote_write, remote_apply, or on
#wal_sync_method = fsync # the default is the first option
# supported by the operating system:
# open_datasync
# fdatasync (default on Linux)
# fsync
# fsync_writethrough
# open_sync
#full_page_writes = on # recover from partial page writes
#wal_compression = off # enable compression of full-page writes
#wal_log_hints = off # also do full page writes of non-critical updates
# (change requires restart)
#wal_buffers = -1 # min 32kB, -1 sets based on shared_buffers
# (change requires restart)
#wal_writer_delay = 200ms # 1-10000 milliseconds
#wal_writer_flush_after = 1MB # measured in pages, 0 disables
#commit_delay = 0 # range 0-100000, in microseconds
#commit_siblings = 5 # range 1-1000
# - Checkpoints -
#checkpoint_timeout = 5min # range 30s-1d
max_wal_size = 1GB
min_wal_size = 80MB
#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0
#checkpoint_flush_after = 256kB # measured in pages, 0 disables
#checkpoint_warning = 30s # 0 disables
# - Archiving -
#archive_mode = off # enables archiving; off, on, or always
# (change requires restart)
#archive_command = '' # command to use to archive a logfile segment
# placeholders: %p = path of file to archive
# %f = file name only
# e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
#archive_timeout = 0 # force a logfile segment switch after this
# number of seconds; 0 disables
#------------------------------------------------------------------------------
# REPLICATION
#------------------------------------------------------------------------------
# - Sending Servers -
# Set these on the master and on any standby that will send replication data.
#max_wal_senders = 10 # max number of walsender processes
# (change requires restart)
#wal_keep_segments = 0 # in logfile segments; 0 disables
#wal_sender_timeout = 60s # in milliseconds; 0 disables
#max_replication_slots = 10 # max number of replication slots
# (change requires restart)
#track_commit_timestamp = off # collect timestamp of transaction commit
# (change requires restart)
# - Master Server -
# These settings are ignored on a standby server.
#synchronous_standby_names = '' # standby servers that provide sync rep
# method to choose sync standbys, number of sync standbys,
# and comma-separated list of application_name
# from standby(s); '*' = all
#vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is delayed
# - Standby Servers -
# These settings are ignored on a master server.
#hot_standby = on # "off" disallows queries during recovery
# (change requires restart)
#max_standby_archive_delay = 30s # max delay before canceling queries
# when reading WAL from archive;
# -1 allows indefinite delay
#max_standby_streaming_delay = 30s # max delay before canceling queries
# when reading streaming WAL;
# -1 allows indefinite delay
#wal_receiver_status_interval = 10s # send replies at least this often
# 0 disables
#hot_standby_feedback = off # send info from standby to prevent
# query conflicts
#wal_receiver_timeout = 60s # time that receiver waits for
# communication from master
# in milliseconds; 0 disables
#wal_retrieve_retry_interval = 5s # time to wait before retrying to
# retrieve WAL after a failed attempt
# - Subscribers -
# These settings are ignored on a publisher.
#max_logical_replication_workers = 4 # taken from max_worker_processes
# (change requires restart)
#max_sync_workers_per_subscription = 2 # taken from max_logical_replication_workers
#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------
# - Planner Method Configuration -
#enable_bitmapscan = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_indexscan = on
#enable_indexonlyscan = on
#enable_material = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_parallel_append = on
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on
#enable_partitionwise_join = off
#enable_partitionwise_aggregate = off
#enable_parallel_hash = on
#enable_partition_pruning = on
# - Planner Cost Constants -
#seq_page_cost = 1.0 # measured on an arbitrary scale
#random_page_cost = 4.0 # same scale as above
#cpu_tuple_cost = 0.01 # same scale as above
#cpu_index_tuple_cost = 0.005 # same scale as above
#cpu_operator_cost = 0.0025 # same scale as above
#parallel_tuple_cost = 0.1 # same scale as above
#parallel_setup_cost = 1000.0 # same scale as above
#jit_above_cost = 100000 # perform JIT compilation if available
# and query more expensive than this;
# -1 disables
#jit_inline_above_cost = 500000 # inline small functions if query is
# more expensive than this; -1 disables
#jit_optimize_above_cost = 500000 # use expensive JIT optimizations if
# query is more expensive than this;
# -1 disables
#min_parallel_table_scan_size = 8MB
#min_parallel_index_scan_size = 512kB
#effective_cache_size = 4GB
# - Genetic Query Optimizer -
#geqo = on
#geqo_threshold = 12
#geqo_effort = 5 # range 1-10
#geqo_pool_size = 0 # selects default based on effort
#geqo_generations = 0 # selects default based on effort
#geqo_selection_bias = 2.0 # range 1.5-2.0
#geqo_seed = 0.0 # range 0.0-1.0
# - Other Planner Options -
#default_statistics_target = 100 # range 1-10000
#constraint_exclusion = partition # on, off, or partition
#cursor_tuple_fraction = 0.1 # range 0.0-1.0
#from_collapse_limit = 8
#join_collapse_limit = 8 # 1 disables collapsing of explicit
# JOIN clauses
#force_parallel_mode = off
#jit = off # allow JIT compilation
#------------------------------------------------------------------------------
# REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
#log_destination = 'stderr' # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
#logging_collector = off # Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
#log_directory = 'log' # directory where log files are written,
# can be absolute or relative to PGDATA
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600 # creation mode for log files,
# begin with 0 to use octal notation
#log_truncate_on_rotation = off # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
#log_rotation_age = 1d # Automatic rotation of logfiles will
# happen after that time. 0 disables.
#log_rotation_size = 10MB # Automatic rotation of logfiles will
# happen after that much log output.
# 0 disables.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#syslog_sequence_numbers = on
#syslog_split_messages = on
# This is only relevant when logging to eventlog (win32):
# (change requires restart)
#event_source = 'PostgreSQL'
# - When to Log -
#log_min_messages = warning # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
#log_min_error_statement = error # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic (effectively off)
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this number
# of milliseconds
# - What to Log -
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default # terse, default, or verbose messages
#log_hostname = off
#log_line_prefix = '%m [%p] ' # special values:
# %a = application name
# %u = user name
# %d = database name
# %r = remote host and port
# %h = remote host
# %p = process ID
# %t = timestamp without milliseconds
# %m = timestamp with milliseconds
# %n = timestamp with milliseconds (as a Unix epoch)
# %i = command tag
# %e = SQL state
# %c = session ID
# %l = session line number
# %s = session start timestamp
# %v = virtual transaction ID
# %x = transaction ID (0 if none)
# %q = stop here in non-session
# processes
# %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off # log lock waits >= deadlock_timeout
#log_statement = 'none' # none, ddl, mod, all
#log_replication_commands = off
#log_temp_files = -1 # log temporary files equal or larger
# than the specified size in kilobytes;
# -1 disables, 0 logs all temp files
log_timezone = 'Etc/UTC'
#------------------------------------------------------------------------------
# PROCESS TITLE
#------------------------------------------------------------------------------
#cluster_name = '' # added to process titles if nonempty
# (change requires restart)
#update_process_title = on
#------------------------------------------------------------------------------
# STATISTICS
#------------------------------------------------------------------------------
# - Query and Index Statistics Collector -
#track_activities = on
#track_counts = on
#track_io_timing = off
#track_functions = none # none, pl, all
#track_activity_query_size = 1024 # (change requires restart)
#stats_temp_directory = 'pg_stat_tmp'
# - Monitoring -
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off
#------------------------------------------------------------------------------
# AUTOVACUUM
#------------------------------------------------------------------------------
#autovacuum = on # Enable autovacuum subprocess? 'on'
# requires track_counts to also be on.
#log_autovacuum_min_duration = -1 # -1 disables, 0 logs all actions and
# their durations, > 0 logs only
# actions running at least this number
# of milliseconds.
#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
# (change requires restart)
#autovacuum_naptime = 1min # time between autovacuum runs
#autovacuum_vacuum_threshold = 50 # min number of row updates before
# vacuum
#autovacuum_analyze_threshold = 50 # min number of row updates before
# analyze
#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum
#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum
# (change requires restart)
#autovacuum_multixact_freeze_max_age = 400000000 # maximum multixact age
# before forced vacuum
# (change requires restart)
#autovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for
# autovacuum, in milliseconds;
# -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for
# autovacuum, -1 means use
# vacuum_cost_limit
#------------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#------------------------------------------------------------------------------
# - Statement Behavior -
#client_min_messages = notice # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# log
# notice
# warning
# error
#search_path = '"$user", public' # schema names
#row_security = on
#default_tablespace = '' # a tablespace name, '' uses the default
#temp_tablespaces = '' # a list of tablespace names, '' uses
# only default tablespace
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0 # in milliseconds, 0 is disabled
#lock_timeout = 0 # in milliseconds, 0 is disabled
#idle_in_transaction_session_timeout = 0 # in milliseconds, 0 is disabled
#vacuum_freeze_min_age = 50000000
#vacuum_freeze_table_age = 150000000
#vacuum_multixact_freeze_min_age = 5000000
#vacuum_multixact_freeze_table_age = 150000000
#vacuum_cleanup_index_scale_factor = 0.1 # fraction of total number of tuples
# before index cleanup, 0 always performs
# index cleanup
#bytea_output = 'hex' # hex, escape
#xmlbinary = 'base64'
#xmloption = 'content'
#gin_fuzzy_search_limit = 0
#gin_pending_list_limit = 4MB
# - Locale and Formatting -
datestyle = 'iso, mdy'
#intervalstyle = 'postgres'
timezone = 'Etc/UTC'
#timezone_abbreviations = 'Default' # Select the set of available time zone
# abbreviations. Currently, there are
# Default
# Australia (historical usage)
# India
# You can create your own file in
# share/timezonesets/.
#extra_float_digits = 0 # min -15, max 3
#client_encoding = sql_ascii # actually, defaults to database
# encoding
# These settings are initialized by initdb, but they can be changed.
lc_messages = 'en_US.utf8' # locale for system error message
# strings
lc_monetary = 'en_US.utf8' # locale for monetary formatting
lc_numeric = 'en_US.utf8' # locale for number formatting
lc_time = 'en_US.utf8' # locale for time formatting
# default configuration for text search
default_text_search_config = 'pg_catalog.english'
# - Shared Library Preloading -
#shared_preload_libraries = '' # (change requires restart)
#local_preload_libraries = ''
#session_preload_libraries = ''
#jit_provider = 'llvmjit' # JIT library to use
# - Other Defaults -
#dynamic_library_path = '$libdir'
#------------------------------------------------------------------------------
# LOCK MANAGEMENT
#------------------------------------------------------------------------------
#deadlock_timeout = 1s
#max_locks_per_transaction = 64 # min 10
# (change requires restart)
#max_pred_locks_per_transaction = 64 # min 10
# (change requires restart)
#max_pred_locks_per_relation = -2 # negative values mean
# (max_pred_locks_per_transaction
# / -max_pred_locks_per_relation) - 1
#max_pred_locks_per_page = 2 # min 0
#------------------------------------------------------------------------------
# VERSION AND PLATFORM COMPATIBILITY
#------------------------------------------------------------------------------
# - Previous PostgreSQL Versions -
#array_nulls = on
#backslash_quote = safe_encoding # on, off, or safe_encoding
#default_with_oids = off
#escape_string_warning = on
#lo_compat_privileges = off
#operator_precedence_warning = off
#quote_all_identifiers = off
#standard_conforming_strings = on
#synchronize_seqscans = on
# - Other Platforms and Clients -
#transform_null_equals = off
#------------------------------------------------------------------------------
# ERROR HANDLING
#------------------------------------------------------------------------------
#exit_on_error = off # terminate session on any error?
#restart_after_crash = on # reinitialize after backend crash?
#data_sync_retry = off # retry or panic on failure to fsync
# data?
# (change requires restart)
#------------------------------------------------------------------------------
# CONFIG FILE INCLUDES
#------------------------------------------------------------------------------
# These options allow settings to be loaded from files other than the
# default postgresql.conf. Note that these are directives, not variable
# assignments, so they can usefully be given more than once.
#include_dir = '...' # include files ending in '.conf' from
# a directory, e.g., 'conf.d'
#include_if_exists = '...' # include file only if it exists
#include = '...' # include file
#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------
# Add settings for extensions here


@@ -0,0 +1,9 @@
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: postgres
  namespace: "{{ .Values.infrastructure.namespace }}"
data:
  postgres_default_user: {{ .Values.infrastructure.username | b64enc }}
  postgres_password: {{ .Values.infrastructure.password | b64enc }}
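Helm's `b64enc` template function base64-encodes these values before they are stored in the Secret. As a quick sanity check — assuming the default `postgres` username and `OCPHack8` password from the chart's values file — you can reproduce the encoded values locally:

```shell
# Reproduce the b64enc output Helm stores in the Secret above
printf '%s' postgres | base64    # postgres_default_user -> cG9zdGdyZXM=
printf '%s' OCPHack8 | base64    # postgres_password     -> T0NQSGFjazg=
```

`printf '%s'` is used instead of `echo` so no trailing newline is encoded, matching what `b64enc` produces.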


@@ -0,0 +1,14 @@
---
# This is the internal load balancer, routing traffic to the PostgreSQL Pod
apiVersion: v1
kind: Service
metadata:
  name: "{{ .Values.infrastructure.appName }}-external"
  namespace: {{ .Values.infrastructure.namespace }}
spec:
  type: "{{ .Values.service.type }}"
  ports:
    - port: {{ .Values.service.port }}
      protocol: {{ .Values.service.protocol }}
  selector:
    app: {{ .Values.application.labelValue }}


@@ -0,0 +1,34 @@
replicaCount: 1
application:
  labelValue: postgres
infrastructure:
  namespace: postgresql
  appName: postgres
  username: postgres
  password: "OCPHack8"
  dataVolume: "/var/lib/postgresql"
  dataPath: "/var/lib/postgresql/data"
image:
  name: postgres
  pullPolicy: IfNotPresent
  tag: "11.6"
service:
  type: LoadBalancer
  port: 5432
  protocol: TCP
resources:
  limits:
    cpu: 1000m
    memory: 4096Mi
  requests:
    cpu: 750m
    memory: 2048Mi
volume:
  size: 5Gi
  storageClass: managed-premium


@@ -0,0 +1,269 @@
**[Home](../../../README.md)** - [Prerequisites >](../../../00-prereqs.md)
## Setting up Kubernetes
NOTE: YOU DO NOT NEED TO RUN THROUGH THE STEPS IN THIS FILE IF YOU ALREADY PROVISIONED AKS.
The steps to deploy the AKS cluster, scale it up and scale it down are available in the README file for that section: [README](../ARM-Templates/README.md).
You should not have to provision again, since you already provisioned AKS using the create-cluster.sh script in [Prerequisites >](../../../00-prereqs.md).
## PostgreSQL Setup on Kubernetes
These instructions provide guidance on how to set up PostgreSQL 11 on AKS.
This requires Helm 3 and the latest version of the Azure CLI to be installed. Both are pre-installed in Azure Cloud Shell, but you will need to install or download them if you are using a different environment.
## Installing the PostgreSQL Database
```bash
# Navigate to the Helm Charts
cd Resources/HelmCharts
# Install the Kubernetes Resources
helm upgrade --install wth-postgresql ./PostgreSQL116 --set infrastructure.password=OCPHack8
```
## Checking the Service IP Addresses and Ports
```bash
kubectl -n postgresql get svc
```
**Important: you will need to copy the postgres-external Cluster-IP value to use for the dataSourceURL in later steps**
## Checking the Pod for Postgres
```bash
kubectl -n postgresql get pods
```
Wait a few minutes until the pod status shows as `Running`.
## Getting into the Container
```bash
# Use this to connect to the database server SQL prompt
kubectl -n postgresql exec deploy/postgres -it -- /usr/bin/psql -U postgres
```
Run the following commands to check the PostgreSQL version and create the wth database (warning: the application deployment will fail if you skip this):
```sql
-- Check the DB version
SELECT version();
-- Create the wth database
CREATE DATABASE wth;
-- List databases; notice that there is a database called wth
\l
-- Create user contosoapp, which will own the application schema
CREATE ROLE CONTOSOAPP WITH LOGIN NOSUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION PASSWORD 'OCPHack8';
-- List the tables in wth
\dt
-- Exit the psql prompt
exit
```
## Uninstalling PostgreSQL from Kubernetes (only if you need to clean up and retry the Helm deployment)
Use this to uninstall the PostgreSQL 11 instance from the Kubernetes cluster:
```bash
# Uninstall the database server. To install again, re-run the helm upgrade command above
helm uninstall wth-postgresql
```
## Installing MySQL
```bash
# Install the Kubernetes Resources
helm upgrade --install wth-mysql ./MySQL57 --set infrastructure.password=OCPHack8
```
## Checking the Service IP Addresses and Ports
```bash
kubectl -n mysql get svc
```
**Important: you will need to copy the mysql-external Cluster-IP value to use for the dataSourceURL in later steps**
## Checking the Pod for MySQL
```bash
kubectl -n mysql get pods
```
## Getting into the Container
```bash
# Use this to connect to the database server
kubectl -n mysql exec deploy/mysql -it -- /usr/bin/mysql -u root -pOCPHack8
```
Run the following commands to check the MySQL version and create the wth database (warning: the application deployment will fail if you skip this):
```sql
-- Check the MySQL DB version
SELECT version();
-- List databases
SHOW DATABASES;
-- Create the wth database
CREATE DATABASE wth;
-- Create user contosoapp, which will own the application data for migration
CREATE USER IF NOT EXISTS 'contosoapp' IDENTIFIED BY 'OCPHack8';
GRANT SUPER ON *.* TO 'contosoapp'; -- may not be needed
GRANT ALL PRIVILEGES ON wth.* TO 'contosoapp';
-- Show tables in the wth database
SHOW TABLES;
-- Exit the mysql prompt
exit
```
## Uninstalling MySQL from Kubernetes (only if you need to clean up and retry the Helm deployment)
Use this to uninstall the MySQL instance from the Kubernetes cluster:
```bash
# Uninstall the database server. To install again, re-run the helm upgrade command previously executed
helm uninstall wth-mysql
```
## Deploying the Web Application
First, navigate to the Helm charts directory:
```bash
cd Resources/HelmCharts
```
We can deploy the app in two ways; as part of this hack, you will need to do both:
* Backed by a MySQL database
* Backed by a PostgreSQL database

For the MySQL database setup, the developer/operator makes changes to the values-mysql.yaml file.
For the PostgreSQL database setup, the developer/operator makes changes to the values-postgresql.yaml file.
In these YAML files we specify the database type (appConfig.databaseType) as "mysql" or "postgres", and then set the JDBC URL, username, and password under the appConfig object.
In the globalConfig object we can change the merchant ID, public keys, and other values as needed, but you can generally leave those alone, as they apply to both the MySQL and PostgreSQL deployment options.
```yaml
appConfig:
  databaseType: "databaseType goes here" # mysql or postgres
  dataSourceURL: "jdbc url goes here" # jdbc:database://ip-address/wth, where database is either mysql or postgresql
  dataSourceUser: "user name goes here" # database username from the values-postgresql or values-mysql yaml - contosoapp
  dataSourcePassword: "password goes here" # your database password - OCPHack8
  webPort: 8083 # the port the app listens on
  webContext: "pizzeria" # the application context: http://hostname:port/webContext
```
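For example, the dataSourceURL for the PostgreSQL deployment can be assembled from the service IP you copied earlier. A minimal sketch (the IP below is a hypothetical placeholder, not a value from this hack):

```shell
# Build the JDBC URL from the database service IP (hypothetical IP shown)
DB_IP=10.0.148.23                        # replace with your postgres-external service IP
echo "jdbc:postgresql://${DB_IP}/wth"    # -> jdbc:postgresql://10.0.148.23/wth
```

For the MySQL deployment, the scheme is `jdbc:mysql://` instead.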
The developer or operator can specify the '--values'/'-f' flag multiple times.
When more than one values file is specified, priority is given to the last (right-most) file in the sequence.
For example, if both values.yaml and override.yaml contain a key called 'namespace', the value set in override.yaml takes precedence.
The command below uses the settings from the base values file and then overrides selected values with the database-specific values file:
```bash
helm upgrade --install release-name ./HelmChartFolder -f ./HelmChartFolder/values.yaml -f ./HelmChartFolder/override.yaml
```
To deploy the app backed by MySQL, run the following command after you have edited the values file to match your desired database type:
```bash
helm upgrade --install mysql-contosopizza ./ContosoPizza -f ./ContosoPizza/values.yaml -f ./ContosoPizza/values-mysql.yaml
```
To deploy the app backed by PostgreSQL, run the following command after you have edited the values file to match your desired database type:
```bash
helm upgrade --install postgres-contosopizza ./ContosoPizza -f ./ContosoPizza/values.yaml -f ./ContosoPizza/values-postgresql.yaml
```
If you wish to uninstall the app, you can use one of the following commands:
```bash
# Use this to uninstall, if you are using MySQL as the database
helm uninstall mysql-contosopizza
# Use this to uninstall, if you are using PostgreSQL as the database
helm uninstall postgres-contosopizza
```
After the apps have booted, you can find their service addresses, ports, and status as follows:
```bash
# get service ports and IP addresses
kubectl -n {infrastructure.namespace goes here} get svc
# get the pods running the app
kubectl -n {infrastructure.namespace goes here} get pods
# view the last 5000 lines of the application logs
kubectl -n {infrastructure.namespace goes here} logs deploy/contosopizza --tail=5000
```
Verify that the Contoso Pizza application is running on AKS:
```bash
# Browse to the external IP address shown by `kubectl -n contosoappmysql get svc` or `kubectl -n contosoapppostgres get svc`
http://{external_ip_contoso_app}:8081/pizzeria/
```